\begin{document}
\title[Surface diffusion with elasticity in 2D]{The surface diffusion flow with elasticity in the plane}
\author{Nicola Fusco}
\author{Vesa Julin}
\author{Massimiliano Morini}
\keywords{}
\begin{abstract}
In this paper we prove short-time existence of a smooth solution in the plane to the surface diffusion equation with an elastic term and without an additional curvature regularization. We also prove the asymptotic stability
of strictly stable stationary sets.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In recent years, the physical literature has shown a rapidly growing interest in the study of the morphological instabilities of interfaces between elastic phases generated by the competition between elastic and surface energies, the so-called stress driven rearrangement instabilities (SDRI). They occur, for instance, in hetero-epitaxial growth of elastic films when a lattice mismatch between film and substrate is present, or in certain alloys that, under specific
temperature conditions, undergo a phase separation in many small connected phases
(that we call particles) within a common elastic body. A third interesting situation is
represented by the formation and evolution of material voids inside a stressed elastic
solid. Mathematically, the common thread is that equilibria are identified with local or global minimizers under a volume constraint of a free energy functional, which is given
by the sum of the stored elastic energy and the surface energy (of isotropic or
anisotropic perimeter type) accounting for the surface tension along the unknown profile
of the film or the interface between the phases. The associated variational problems
can be seen as non-local instances of the {\em isoperimetric principle},
where the non-locality is given by the elastic term. They are very well studied in the
physical and numerical literature, but the available rigorous mathematical results
are very few. We refer to \cite{BGZ15, Bo0,BC, FFLM, FM09, GZ14} for some existence, regularity and stability results related to a variational model describing the equilibrium configurations of two-dimensional epitaxially strained elastic films, and to
\cite{Bo, CS07} for results in three dimensions.
We also mention that a hierarchy of variational principles to describe the equilibrium shapes in the aforementioned contexts has been introduced in \cite{GuVo}. The simplest prototypical example is perhaps given by the following problem, which can be used to describe the equilibrium shapes of voids in elastically stressed solids (see for instance \cite{siegel-miksis-voorhees04}):
\begin{equation}\label{prot1}
\text{minimize }\, J(F):=\frac12\int_{\Omega\setminus F}\mathbb{C} E(u_F):E(u_F)\, dz+\int_{\partial F}\varphi(\nu_F)\, d\sigma
\end{equation}
where minimization takes place among all sets $F\subset\Omega$ with prescribed measure $|F|=m$. Here, the set $F$ represents the shape of the void that has formed within
the elastic body $\Omega$ (an open subset of $\mathbb{R}^2$ or $\mathbb{R}^3$), $u_F$ stands for the equilibrium elastic displacement in $\Omega\setminus F$ subject to prescribed boundary conditions $u_F=w_0$ on $\partial \Omega$ (see \eqref{uF} below), $\mathbb{C}$ is the elasticity tensor
of the (linearly) elastic material, $E(u_F):=(\nabla u_F+\nabla^T u_F)/2$ denotes the elastic strain of $u_F$, and $\varphi(\nu_F)$ is the anisotropic surface energy density evaluated at the outer unit normal $\nu_F$ to $F$. The presence of a nontrivial Dirichlet boundary condition $u_F=w_0$ on $\partial \Omega$ is what causes the solid $\Omega\setminus F$ to be elastically stressed. Indeed, note that when $w_0=0$
the elastic term becomes irrelevant and \eqref{prot1} reduces to the classical
Wulff shape problem (with the confinement constraint $F\subseteq\Omega$).
We refer to \cite{CJP, FFLMi} for related existence, regularity and stability results in two dimensions. See also \cite{BCS} for a relaxation result valid in all dimensions for a variant of \eqref{prot1}.
In this paper we address the evolutionary counterpart of \eqref{prot1} in two dimensions, namely the morphological evolution of shapes towards equilibria of the functional $J$, driven by stress and surface diffusion.
Assuming that mass transport in the bulk occurs at a much faster time scale, see \cite{Mu63}, we have, according to the Einstein–Nernst relation, that the evolution is governed by the {\em area preserving} evolution law
\begin{equation}\label{i1}
V_t=\partial_{\sigma\sigma}\mu_t \qquad\text{on $\partial F(t)$}
\end{equation}
where $V_t$ denotes the (outer) normal velocity of the evolving curve $\partial F(t)$ at time $t$ and $\partial_{\sigma\sigma} \mu_t$ stands for the tangential Laplacian of the chemical potential $\mu_t$ on $\partial F(t)$. In turn, $\mu_t$ is given by the {\em first variation} of the free-energy functional $J$ at $F(t)$, and thus (see \eqref{eq:J'} below) \eqref{i1} takes the form
\begin{equation}\label{i2}
V_t=\partial_{\sigma\sigma}\bigl(k_{\varphi,t}-Q(E(u_{F(t)}))\bigr)\,,
\end{equation}
where $k_{\varphi,t}$ is the anisotropic curvature of $\partial F(t)$, $u_{F(t)}$ denotes as before the elastic equilibrium in $\Omega\setminus F(t)$ subject to
$u_{F(t)}=w_0$ on $\partial \Omega$, and $Q$ is the quadratic form defined as $Q(A):=\frac12\mathbb{C} A: A$ for all $2\times2$-symmetric matrices $A$.
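Observe that, as the name suggests, the evolution law \eqref{i1} is indeed area preserving: since each connected component of $\partial F(t)$ is a closed curve, integrating the tangential Laplacian along it gives
\[
\frac{d}{dt}|F(t)|=\int_{\partial F(t)} V_t\, d\mathcal{H}^1=\int_{\partial F(t)}\partial_{\sigma\sigma}\mu_t\, d\mathcal{H}^1=0.
\]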
Note that when $w_0=0$ the elastic term vanishes and thus \eqref{i1} reduces to the {\em surface diffusion flow} equation
\begin{equation}\label{sdintro}
V_t=\partial_{\sigma\sigma}k_{\varphi,t}
\end{equation}
for evolving curves, studied in \cite{EllGarcke} in the isotropic case (see also \cite{EMS} for the $N$-dimensional case). Thus, we may also regard \eqref{i2} as a sort of prototypical nonlocal perturbation of \eqref{sdintro} by an additive elastic contribution.
As observed by Cahn and Taylor in the case without elasticity (see \cite{cahn-taylor94}), the evolution equation \eqref{i2} can be seen as the gradient flow of the energy functional $J$ with respect to a suitable Riemannian structure of $H^{-1}$ type, see Remark~\ref{rm:accameno1}.
When the anisotropy $\varphi$ is {\em strongly elliptic}, that is when it satisfies
\begin{equation}\label{strongEll}
D^2\varphi(\nu) \,\tau\cdot \tau>0\qquad\text{for all $\nu\in \mathbb{S}^{1}$ and all $\tau\perp\nu$, $\tau\neq 0$}
\end{equation}
the evolution \eqref{i2} yields a {\em parabolic} fourth-order (geometric) equation,
coupled at each time with the {\em elliptic system} describing the elastic equilibrium in $\Omega\setminus F(t)$.
However, we mention here that for some physically relevant anisotropies the ellipticity condition
\eqref{strongEll} may fail in some directions $\nu$, see for instance \cite{dicarlo-gurtin-guidugli92, siegel-miksis-voorhees04}. Whenever this happens, \eqref{i2} becomes {\em backward parabolic} and thus ill-posed.
To overcome this ill-posedness, a canonical approach inspired by Herring's work \cite{herring51} consists in considering a {\em regularized curvature-dependent} surface energy of the form
$$
\int_{\partial F}\Bigl(\varphi(\nu_F)+\frac\varepsilon2 k^2\Bigr)\, d\mathcal{H}^1,
$$
where $\varepsilon>0$ and $k$ denotes the standard curvature, see \cite{dicarlo-gurtin-guidugli92, GurJab}.
In this case \eqref{i1} yields the following sixth-order area preserving evolution equation
\begin{equation}
V_t=\partial_{\sigma\sigma}\Bigl(k_{\varphi,t}-Q(E(u_{F(t)}))-\varepsilon\bigl(\partial_{\sigma\sigma}k+\frac12k^3\bigr)\Bigr)\,.
\label{sixth order evolution equation}
\end{equation}
This equation was studied numerically in \cite{siegel-miksis-voorhees04} (see also \cite{RRV, BHSV} and references therein) and analytically in
\cite{FFLM2}, where local-in-time existence of a solution was established in the context of
periodic graphs, modelling the evolution of epitaxially strained elastic films. We refer also to
\cite{FFLM3} for corresponding results in three dimensions. We remark that
the analysis of \cite{FFLM2} (and of \cite{FFLM3}), which is based on the so-called minimizing movements approach, relies heavily on the presence of the curvature regularization and, in fact, all the estimates provided there are $\varepsilon$-dependent and degenerate as $\varepsilon\to 0^+$, even when $\varphi$ is strongly elliptic. Thus, the methods developed in \cite{FFLM2, FFLM3} do not apply to the case $\varepsilon=0$ in \eqref{sixth order evolution equation}.
In this paper we are able to address the case $\varepsilon=0$ and in one of the main results (see Theorems~\ref{th:existence}~and~\ref{Cinfty} below) we prove short time existence and uniqueness of a smooth solution of \eqref{i2} starting from sufficiently regular initial sets. To the best of our knowledge this is {\em the first existence result for the surface diffusion flow with elasticity and without curvature regularization}.
Note that in general one cannot expect global-in-time existence. Indeed, even when no elasticity is present and $\varphi$ is isotropic, singularities such as pinching may develop in finite time, see for instance \cite{GigaIto}.
In the second main result of the paper we establish global-in-time existence and study the long-time behavior for a class of initial data: we show that {\em strictly stable stationary sets}, that is, sets $E$ that are stationary for the energy functional $J$ and with positive second variation $\partial^2J(E)$, are {\em exponentially stable} for the flow \eqref{i2}. More precisely, if the initial set $F_0$ is sufficiently close to the strictly stable set $E$ and has the same area, then the flow \eqref{i2} starting from $F_0$ exists for all times and converges to $E$ exponentially fast as $t\to+\infty$ (see Theorem~\ref{thmstability} for the precise statement).
A few comments on the strategy of the proofs are in order. The main technical difficulties in proving short-time existence clearly originate from the presence of the nonlocal elastic term $Q(E(u_{F(t)}))$ in \eqref{i2}. When a curvature regularization as in \eqref{sixth order evolution equation} is present, the elastic term may be regarded and treated as a lower order perturbation and thus is more easily handled. When $\varepsilon=0$ this is no longer possible and so one has to find a way to show that the parabolicity of the geometric part of the equation still tames the elastic contribution. Our strategy is based on
the natural idea of thinking of $Q$ as a {\em forcing term} in order to set up a fixed point argument.
Roughly speaking, given an initial set $F_0$ and a forcing term $f$, we let $t\mapsto F(t)$ be the flow starting from $F_0$ and solving
$$
V_t=\partial_{\sigma\sigma}\bigl(k_{\varphi,t}-f\bigr),
$$
and we consider the corresponding map $t\mapsto Q(E(u_{F(t)}))$, with $u_{F(t)}$ being as usual the elastic equilibrium in $\Omega\setminus F(t)$. The existence proof then amounts to finding a fixed point for the map $f\mapsto Q(E(u_{F(\cdot)}))$.
In order to implement this strategy, the crucial idea is to look at the squared $L^2$-norm of the tangential gradient of the chemical potential $(k_{\varphi,t}-f)$, that is, to study the behavior of the quantity
\begin{equation}\label{monotoneQ}
\int_{\partial F(t)}\bigl(\partial_{\sigma}(k_{\varphi,t}-f)\bigr)^2\, d\mathcal{H}^1
\end{equation}
with respect to time. More precisely, by computing the time derivative of \eqref{monotoneQ} we derive suitable energy inequalities involving \eqref{monotoneQ} (see Lemma~\ref{monotonisuus lemma}) yielding the a priori regularity estimates needed to carry out the fixed point argument.
The quantity \eqref{monotoneQ}, with $f$ now given by the elastic term $Q$, is also crucial in the aforementioned asymptotic stability analysis. Here, by adapting to the present situation the methods developed in \cite{AFJM} for the surface diffusion flow without elasticity, we are able to show that for properly chosen initial sets, \eqref{monotoneQ} becomes monotone decreasing in time and, in fact, exponentially decays to zero, thus giving the desired exponential convergence result.
This paper is organized as follows.
In Section~\ref{sec:preliminaries} we set up the problem, introduce the main notations and collect several auxiliary results concerning the energy functional $J$ in \eqref{prot1}. Some of these results, which deal with the properties of strictly stable stationary sets, are then crucial for the asymptotic stability analysis carried out in Section~\ref{sec:stability}.
The short-time existence, uniqueness and regularity of the flow \eqref{i2} for sufficiently regular initial data are addressed in Section~\ref{sec:existence}.
In Section~\ref{sec:graphs} we briefly illustrate how to apply our main existence and asymptotic stability results in the case of evolving periodic graphs, that is, in the geometric setting considered in \cite{FFLM2}. In particular, in Theorem~\ref{th:2dliapunov} we address and analytically characterize the exponential asymptotic stability of {\em flat configurations}, thus extending to the evolutionary setting the results of \cite{FM09, Bo0}.
In the final Appendix, for the reader's convenience we provide the proof of an interpolation result, probably known to the experts, that is used throughout the paper.
We conclude this introduction by mentioning that it would be interesting to investigate whether under the assumption \eqref{strongEll} the flows \eqref{sixth order evolution equation} studied in \cite{FFLM2} converge to \eqref{i2} as $\varepsilon\to 0^+$, perhaps using the methods developed in \cite{BMN}. This issue, as well as the extension of the results of this paper to three dimensions, will be addressed in future investigations.
\section{Preliminary results} \label{sec:preliminaries}
\subsection{Geometric preliminaries and notation}
Let $F\subset\mathbb{R}^2$ be a bounded open set of class $C^2$. We denote by $\nu_F$ the unit outer normal to $F$ and by $\tau_F$ the tangent vector. Throughout the paper we choose the orientation so that $\tau_F = \mathcal{R}\nu_F$, where $\mathcal{R}$ is the counterclockwise rotation by $\pi/2$.
The differential of a vector field $X$ along $\partial F$
is denoted by $\partial_\sigma X$. We recall that
\[
\partial_\sigma \nu_F = k_F \tau_F \qquad \text{and} \qquad \partial_\sigma \tau_F = - k_F \nu_F,
\]
where $k_F$ is the curvature of $\partial F$. When no confusion arises, we will simply write $\nu$, $\tau$, and $k$ in place of $\nu_F$, $\tau_F$ and $k_F$. The tangential divergence of $X$ is $\operatorname{div}_\tau X := \partial_\sigma X \cdot \tau$. The divergence theorem on $\partial F$ states that for every vector field $X \in C^1(\partial F; \mathbb{R}^2)$ it holds
\begin{equation}\label{divform}
\int_{\partial F} \operatorname{div}_\tau X \, d \mathcal{H}^1 = \int_{\partial F} k \, X \cdot \nu \, d \mathcal{H}^1.
\end{equation}
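For instance, taking $X(x)=x$, so that $\operatorname{div}_\tau X=\partial_\sigma x\cdot\tau=|\tau|^2=1$, formula \eqref{divform} yields the identity
\[
\mathcal{H}^1(\partial F)=\int_{\partial F} k \, x \cdot \nu \, d \mathcal{H}^1;
\]
when $F=B_R(0)$ both sides equal $2\pi R$, since $k=1/R$ and $x\cdot\nu=R$ on $\partial B_R(0)$.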
If the boundary of $F$ is of class $C^m$, with $m\geq 2$, then the signed distance function $d_F$ is of class $C^m$ in a tubular neighborhood of $\partial F$. We may extend $\nu, \tau$ and $k$ to such a neighborhood of $\partial F$ by setting $\nu := \nabla d_F$, $\tau :=\mathcal{R} \nu$ and $k := \operatorname{div} \nu = \Delta d_F$.
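For example, if $F=B_R(0)$, then $d_F(x)=|x|-R$, and the above extensions read $\nu=x/|x|$ and $k=\Delta d_F=1/|x|$ in a tubular neighborhood of $\partial F$; in particular, $k=1/R$ on $\partial F$, as expected.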
Throughout the paper, we fix a bounded Lipschitz open set $\Omega \subset \mathbb{R}^2$. Moreover, $G$ will always denote a smooth reference set, with the property that $G \subset\subset \Omega$. We will also denote by $\pi_G$
the orthogonal projection on $\partial G$ and by $\bar\eta$ a positive number such that
\begin{equation}\label{bareta}
\text{$d_G$ and $\pi_G$ are smooth in $\mathcal{N}_{\bar\eta}(\partial G)$,}
\end{equation}
where $\mathcal{N}_{\bar\eta}(\partial G)$ denotes the $\bar\eta$-tubular neighborhood of $\partial G$.
We now introduce a class of sets $F$ sufficiently ``close'' to $G$ so that the boundary can be written as
\begin{equation}\label{bdr}
\partial F = \{ x + h_F(x) \nu_{G}(x) \mid x\in \partial G \},
\end{equation}
for a suitable function $h_F$ defined on $\partial G$.
More precisely, for $k\in \mathbb{N}$ and $\alpha\in (0,1)$ we set
\begin{multline}\label{calH}
\mathfrak{h}^{k,\alpha}_M(G):=\{F\subset\subset\Omega:\, \eqref{bdr}\text{ holds for some $h_F\in C^{k,\alpha}(\partial G)$}, \\ \text{with } \|h_F\|_{L^\infty}\leq\bar\eta/2 \text{ and }\|h_F\|_{C^{k,\alpha}} \leq M\}.
\end{multline}
For such sets $F$ we also denote by $\pi_{F}^{-1}:\partial G \to \partial F$ the map $\pi_{F}^{-1}(x) = x + h_F(x) \nu_G(x)$
and set
\[
J_F := \sqrt{(1 + h_Fk_G)^2 + (\partial_\sigma h_F)^2},
\]
that is the tangential Jacobian on $\partial G$ of the map $\pi_{F}^{-1}$.
We recall now some useful transformation formulas:
\begin{equation} \label{formula tangent}
\tau_F\circ \pi_{F}^{-1} = \frac{(1+h_F k_G ) \tau_G + \partial_\sigma h_F \nu_G }{J_F }
\end{equation}
and
\begin{equation} \label{formula normal}
\nu_F\circ \pi_{F}^{-1} = \frac{ -\partial_\sigma h_F \tau_G + (1+h_F k_G )\nu_G}{J_F}.
\end{equation}
Similarly, the curvature $k_F$ of $F$ at $y = \pi_{F}^{-1}(x)$ is given by
\begin{equation} \label{curvature formula}
k_F\circ \pi_{F}^{-1} = \frac{- \partial_{\sigma\sigma}h_F(1+h_F k_G ) + 2 (\partial_\sigma h_F)^2 k_G + (1+h_Fk_G)^2k_G + h_F\partial_\sigma h_F \, \partial_\sigma k_G}{J_F^3} .
\end{equation}
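As a quick consistency check of \eqref{curvature formula}, take $G=B_R(0)$ and $h_F\equiv c$ for a small constant $c$, so that $F=B_{R+c}(0)$. Then $k_G=1/R$, $\partial_\sigma h_F=0$ and $J_F=1+c/R$, and \eqref{curvature formula} gives
\[
k_F=\frac{(1+c/R)^2\,\frac1R}{(1+c/R)^3}=\frac{1}{R+c},
\]
the curvature of a circle of radius $R+c$.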
We now fix some notation, which will be used throughout the paper. If $t\mapsto F_t$ is a (smooth) flow of sets, in order to simplify the notation, we will sometimes write $h_t$, $\nu_t$, $\tau_t$, and $k_t$ instead of $h_{F_t}$,
$\nu_{F_t}$, $\tau_{F_t}$, and $k_{F_t}$, respectively. Similarly, we will set $k_{\varphi, t}:=g(\nu_t)k_t$.
Moreover, whenever we have a one-parameter family $(g_t)_t$ of functions (or vector fields) we shall denote by $\dot g_t$ the partial derivative with respect to $t$ of the function
$(x,t)\mapsto g_t(x)$, and by $\nabla^k g_t$ the $k$-th order differential of the function $(x,t)\mapsto g_t(x)$ with respect to $x$.
\subsection{The energy functional}
In this section we introduce the energy functional that underlies the flow. We also introduce the proper notions of stationary points and stability that will be needed in the study of the long-time behavior of the flow, see Section~\ref{sec:stability}.
As explained in the introduction, the free energy functional is the sum of an anisotropic perimeter and a bulk elastic term.
We start by introducing the anisotropic surface energy density, which is given by a positively one-homogeneous function $\varphi \in C^\infty(\mathbb{R}^2\setminus \{0\}; (0,+\infty))$ satisfying
\begin{equation} \label{ellipticity}
D^2 \varphi(\nu) \tau \cdot \tau \geq c_0 >0
\end{equation}
for every $\nu \in \mathbb{S}^1$ and every $\tau \in \mathbb{S}^1$ such that $\tau \perp \nu$. Note that the above condition is equivalent to requiring that the level sets of $\varphi$ have positive curvature.
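For instance, in the isotropic case $\varphi(\nu)=|\nu|$ one computes
\[
D^2\varphi(\nu)=\frac{1}{|\nu|}\Bigl(I-\frac{\nu\otimes\nu}{|\nu|^2}\Bigr),
\]
so that $D^2\varphi(\nu)\tau\cdot\tau=1$ for every $\nu\in\mathbb{S}^1$ and every unit vector $\tau\perp\nu$, and \eqref{ellipticity} holds with $c_0=1$.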
Concerning the elastic part, for $F \subset \! \subset \Omega$ and for the {\em elastic displacement} $u: \Omega\setminus F\to \mathbb{R}^2$ we denote by $E(u)$ the symmetric part of $\nabla u$, that is, $E(u):= \frac{\nabla u + (\nabla u)^T}{2}$. In what follows, $\mathbb{C}$ stands for a fourth order {\em elasticity tensor} acting on $2\times2$ symmetric matrices $A$, such that $\mathbb{C} A:A>0$ if $A \neq 0$. Finally, we shall denote by
$Q(A) := \frac{1}{2}\mathbb{C} A : A$ the {\em elastic energy density}.
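For example, for a linearly elastic isotropic material with Lam\'e coefficients $\mu$ and $\lambda$ one has $\mathbb{C}A=2\mu A+\lambda\,\operatorname{tr}(A)\,I$, so that
\[
Q(A)=\mu|A|^2+\frac{\lambda}{2}(\operatorname{tr}A)^2,
\]
and the positivity condition $\mathbb{C}A:A>0$ for $A\neq0$ amounts to $\mu>0$ and $\mu+\lambda>0$.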
We are now ready to write the energy functional. For a fixed {\em boundary displacement} $w_0\in H^{\frac12}(\partial \Omega)$, we set
\begin{equation} \label{energy}
J(F) := \int_{\partial F} \varphi(\nu_F) \, d\mathcal{H}^1 + \int_{\Omega \setminus F} Q(E(u_F))\, dx,
\end{equation}
where $u_F$ is the elastic equilibrium satisfying the Dirichlet boundary condition $w_0$ on a fixed relatively open subset $\partial_D \Omega\subseteq \partial \Omega$. More precisely, $u_F$ is the unique solution in $H^1(\Omega \setminus F; \mathbb{R}^2)$ of the following elliptic system
\begin{equation}\label{uF}
\begin{cases}
\operatorname{div} \mathbb{C} E(u_F)=0 & \text{in }\Omega\setminus F,\\
\mathbb{C} E(u_F)[\nu_F]=0 & \text{on }\partial F\cup (\partial \Omega\setminus \partial_D\Omega),\\
u_F=w_0 &\text {on }\partial_D\Omega.
\end{cases}
\end{equation}
Next, we provide the first and the second variation formulas for \eqref{energy}.
We start by recalling the well-known first variation formula for the anisotropic perimeter.
To this aim, for any
vector field $X \in C_c^1(\mathbb{R}^2; \mathbb{R}^2)$, let $(\Phi_t)_{t\in (-1,1)}$ be the associated flow, that is the solution of
\begin{equation}\label{flussoX}
\begin{cases}
\displaystyle \frac{\partial \Phi_t}{\partial t}=X(\Phi_t),\\
\Phi_0=Id.
\end{cases}
\end{equation}
Then we have
\[
\frac{d }{d t} \Bigl|_{t=0} \int_{\partial \Phi_t(F)} \varphi(\nu_{\Phi_t(F)}) \, d\mathcal{H}^1 = \int_{\partial F} k_\varphi X\cdot \nu \, d\mathcal{H}^1,
\]
where the {\em anisotropic curvature} $k_\varphi$ of $\partial F$ is given by $k_{\varphi} := \operatorname{div}_\tau (\nabla \varphi(\nu))$ and can be written also as
\[
\begin{split}
k_{\varphi} &= \operatorname{div}_\tau (\nabla \varphi(\nu)) = \operatorname{div} (\nabla \varphi(\nu))= D^2 \varphi(\nu) : D\nu \\
&= (D^2 \varphi(\nu) \tau \cdot \tau)\, k \\
&=: g(\nu)\, k,
\end{split}
\]
on $\partial F$.
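To justify the last equality in the computation above, recall that $\nabla\varphi$ is positively $0$-homogeneous, so that differentiating the identity $\nabla\varphi(\lambda\nu)=\nabla\varphi(\nu)$ with respect to $\lambda$ yields $D^2\varphi(\nu)\nu=0$. Since moreover $D\nu=k\,\tau\otimes\tau$ on $\partial F$ (recall $\partial_\sigma\nu=k\tau$ and note that $D\nu\,\nu=D^2d_F\,\nabla d_F=\frac12\nabla|\nabla d_F|^2=0$ for the extension $\nu=\nabla d_F$), only the tangential component survives in the contraction, giving $D^2\varphi(\nu):D\nu=(D^2\varphi(\nu)\tau\cdot\tau)\,k$. In particular, in the isotropic case one has $g\equiv1$ and $k_\varphi=k$.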
Concerning the full functional $J$, we have the following theorem.
\begin{theorem}\label{th:12var}
Let $F\subset\subset\Omega$ be a smooth set, $X \in C_c^1(\Omega; \mathbb{R}^2)$ and let $(\Phi_t)_{t\in (-1,1)}$ be the associated flow as in \eqref{flussoX}. Set $\psi:=X\cdot \nu_F$ and $X_\tau:= (X\cdot \tau_F)\tau_F$ on $\partial F$.
Then,
\begin{equation}\label{eq:J'}
\frac{d}{dt}J(\Phi_t(F))_{\bigl|_{t=0}}=\int_{\partial F}(g(\nu_F)k_F-Q(E(u_F))) \psi\, d\mathcal{H}^{1}.
\end{equation}
If in addition $\operatorname{div} X=0$ in a neighborhood of $\partial F$ we have
\begin{align}\label{eq:J''}
\frac{d^2}{dt^2}J(\Phi_t(F))_{\bigl|_{t=0}}&=
\int_{\partial F} \bigl[g(\nu_F) (\partial_{\sigma}\psi)^2- g(\nu_F) k_F^2 \psi^2\bigr] \, d \mathcal{H}^1 - 2 \int_{\Omega \setminus F} Q(E(u_\psi)) \, dx\nonumber\\
& - \int_{\partial F} \partial_{\nu_F} (Q(E(u_F))) \psi^2 \, d \mathcal{H}^1
-\int_{\partial F}(g(\nu_F)k_F-Q(E(u_F)))\operatorname{div}_\tau(\psi X_\tau)\, d\mathcal{H}^1,
\end{align}
where the function $u_\psi$ is the unique solution in $H^1(\Omega\setminus F; \mathbb{R}^2)$, with
$u_\psi=0$ on $\partial_D\Omega$, of
\begin{equation}\label{eq u dot2}
\int_{\Omega \setminus F} \mathbb{C} E(u_\psi) : E(\varphi) \, dx = -\int_{\partial F}\operatorname{div}_\tau (\psi E(u_F) ) \cdot \varphi \, d \mathcal{H}^1
\end{equation}
for all $\varphi \in H^1(\Omega \setminus F; \mathbb{R}^2)$ such that $\varphi = 0$ on $\partial_D\Omega$.
\end{theorem}
Formulas \eqref{eq:J'} and \eqref{eq:J''} have been derived in \cite{CJP} for the case where $\varphi$ is the Euclidean norm, and in a slightly different setting, namely when $F$ is the subgraph of a periodic function, in \cite{FM09, Bo}. The very same calculations apply to the more general situation considered here.
Throughout the paper, given a (sufficiently smooth) set $F \subset \!\subset \Omega$, we denote by $\Gamma_{F,1}, \dots, \Gamma_{F, m}$ the $m$ connected components of $\partial F$ and by $F_i$ the bounded open set enclosed by $\Gamma_{F,i}$. Note that the $F_i$'s are not in general the connected components of $F$ and it may happen that $F_i\subset F_j$ for some $i \neq j$.
We are interested in area preserving variations, in the following sense.
\begin{definition}\label{def:admissibleX}
Let $F\subset\subset\Omega$ be a smooth set. Given a vector field
$X\in C^\infty_c(\Omega; \mathbb{R}^2)$, we say that the associated flow
$(\Phi_t)_{t\in (-1,1)}$ is {\em admissible for $F$} if there exists $\varepsilon_0\in (0,1)$ such that
$$
|\Phi_t(F_i)|=|F_i|\quad\text{for $t\in (-\varepsilon_0,\varepsilon_0)$ and $i=1,\dots, m$.}
$$
\end{definition}
\begin{remark}\label{rm:mn}
Note that if the flow associated with $X$ is admissible in the sense of the previous definition, then
for $i=1,\dots, m$ we have
$$
\int_{\Gamma_{F,i}}X\cdot\nu_F\, d\mathcal{H}^1=0.
$$
In view of this remark it is convenient to introduce the space
$\tilde H^1(\partial F)$ consisting of all functions $\psi \in H^1(\partial F)$ with zero average on each component of $\partial F$, i.e.,
\[
\int_{\Gamma_{F,i}} \psi \, d \mathcal{H}^1 = 0 \qquad \text{for every } \, i = 1, \dots, m.
\]
We observe that given $\psi\in \tilde H^1(\partial F)\cap C^{\infty}(\partial F)$ it is possible to construct a sequence of vector fields $X_n\in C^\infty_c(\Omega; \mathbb{R}^2)$, with $\operatorname{div} X_n=0$ in a neighborhood of $\partial F$, such that
$X_n\cdot \nu_F\to \psi$ in $C^1(\partial F)$, see \cite[Proof of Corollary~3.4]{AFM} for the details. Note that in particular the flows associated with $X_n$ are admissible.
\end{remark}
\begin{definition}
\langlebel{def stationarity}
Let $F \subset \!\subset \Omega$ be a set of class $C^2$. We say that $F$ is \emph{stationary} if
$$
\frac{d}{dt}J(\Phi_t(F))_{\bigl|_{t=0}}=0
$$
for all admissible flows in the sense of Definition~\ref{def:admissibleX}.
\end{definition}
\begin{remark}\label{rm:station}
By Remark~\ref{rm:mn} and in view of \eqref{eq:J'} it follows that a set $F \subset \!\subset \Omega$ of class $C^2$ is stationary if and only if there exist constants $\lambda_1, \dots, \lambda_m$ such that
\[
g(\nu_F)k_F - Q(E(u_F)) = \lambda_i \qquad \text{on }\, \Gamma_{F,i}
\]
for every $i = 1,\dots, m$. In turn, note that if $F$ is stationary, then the second variation formula \eqref{eq:J''} reduces to
\begin{align} \label{sv}
\frac{d^2}{dt^2}J(\Phi_t(F))_{\bigl|_{t=0}}= &
\int_{\partial F} \bigl[g(\nu_F) (\partial_{\sigma}\psi)^2- g(\nu_F) k_F^2 \psi^2\bigr] \, d \mathcal{H}^1 \nonumber\\
&- 2 \int_{\Omega \setminus F} Q(E(u_\psi)) \, dx
- \int_{\partial F} \partial_{\nu_F} (Q(E(u_F))) \psi^2 \, d \mathcal{H}^1,
\end{align}
where we recall that $\psi= X\cdot \nu_F$ and $u_\psi$ is the function satisfying \eqref{eq u dot2}.
Note that if $F$ is a sufficiently regular (local) minimizer of \eqref{energy} under the constraint
$|F|=const.$, then there exists a constant $\lambda$ such that
\[
g(\nu_F)k_F - Q(E(u_F)) = \lambda \qquad \text{on }\, \partial F.
\]
\]
Thus, our notion of stationarity differs from the usual notion of criticality just recalled.
\end{remark}
In view of \varepsilonqref{sv}, for any set $F\subset\subset\Omega$ of class $C^2$ it is convenient to introduce the quadratic form $\partial^2 J(F)$ defined on $\tilde H^1(\partial F)$ as
\begin{equation} \label{eq:pa2J}
\begin{split}
\partial^2J(F)[\psi] :=& \int_{\partial F} \bigl[g(\nu_F) (\partial_{\sigma}\psi)^2- g(\nu_F) k_F^2 \psi^2\bigr] \, d \mathcal{H}^1 \\
&- 2 \int_{\Omega \setminus F} Q(E(u_\psi)) \, dx
- \int_{\partial F} \partial_{\nu_F} (Q(E(u_F))) \psi^2 \, d \mathcal{H}^1,
\end{split}
\end{equation}
where $u_\psi$ is the unique solution of \eqref{eq u dot2} under the Dirichlet condition $u_\psi=0$ on
$\partial_D\Omega$.
We conclude this section by defining the notion of stability for a stationary point.
\begin{definition}\label{def:stable}
Let $F\subset\subset\Omega$ be a stationary set in the sense of Definition~\ref{def stationarity}. We say that $F$ is \emph{strictly stable} if
\begin{equation}\label{j2>0}
\partial^2J(F)[\psi]>0\qquad\text{for all }\psi\in \tilde H^1(\partial F)\setminus\{0\}.
\end{equation}
\end{definition}
It is not difficult to see that \eqref{j2>0} is equivalent to the coercivity of $\partial^2J(F)$ on $ \tilde H^1(\partial F)$. More precisely, \eqref{j2>0} holds if and only if there exists $m_0>0$ such that
\begin{equation}\label{emmepiccolo0}
\partial^2J(F)[\psi]\geq m_0\|\psi\|^2_{\tilde H^1(\partial F)}\qquad\text{for all }\psi\in \tilde H^1(\partial F),
\end{equation}
see \cite{FM09}. In turn the latter coercivity property is stable with respect to small $C^{2,\alpha}$-perturbations. More precisely, we have:
\begin{lemma}\label{lemma:j2>0near}
Assume that the reference set $G\subset\subset\Omega$ is a (smooth) strictly stable stationary set in the sense of Definition~\ref{def:stable} and fix $\alpha\in (0,1)$. Then, there exists $\sigma_0>0$ such that for all $F\in \mathfrak{h}^{2,\alpha}_{\sigma_0}(G)$ (see \eqref{calH}) we have
$$
\partial^2J(F)[\psi]\geq \frac{m_0}2\|\psi\|^2_{\tilde H^1(\partial F)} \text{ for all $\psi\in \tilde H^1(\partial F)$,}
$$
where $m_0$ is the constant in \eqref{emmepiccolo0}.
\end{lemma}
\begin{proof}The proof of the lemma goes as in \cite[Proof of Lemma~4.12]{FFLM3}, where the case of $F$ being the subgraph of a periodic function of two variables is considered. Although the geometric framework here is more general, we can follow exactly the same line of argument up to the obvious changes due to the different setting (and some simplifications due to the fact that here we work in two dimensions). We refer the reader to the aforementioned reference for the details.
\end{proof}
Recall that $G_1$, \dots, $G_m$ are the bounded open sets enclosed by the connected components $\Gamma_{G,1}$, \dots, $\Gamma_{G,m}$ of the boundary $\partial G$ of the reference set and observe
that if $F\in \mathfrak{h}^{2,\alpha}_M(G)$, then $\partial F$ has the same number $m$ of connected components $\Gamma_{F,1}$, \dots, $\Gamma_{F,m}$, which can be numbered in such a way that
\begin{equation}\label{numbered}
\Gamma_{F,i}= \{ x + h_{F}(x) \nu_{G}(x) \mid x\in \Gamma_{G,i} \},
\end{equation}
for suitable $h_{F}\in C^{k,\alpha}(\partial G)$.
In the next lemma we show that pairs of sets which are sufficiently close in a $C^{2,\alpha}$-sense can always be connected through area preserving flows in the sense of
Definition~\ref{def:admissibleX}. More precisely we have:
\begin{lemma}\label{lemma:connect}
Let $M>0$ and $\alpha\in (0,1)$. There exists $C>0$ with the following property: If $F_1$, $F_2\in \mathfrak{h}^{2,\alpha}_M(G)$ are such that $|F_{1, i}|=|F_{2,i}|$, $i=1,\dots, m$, then there exists a flow
$(\Phi_t)_{t\in (-1,1)}$ admissible for $F_1$ in the sense of Definition~\ref{def:admissibleX}, such that $\Phi_0(F_1)=F_1$, $\Phi_1(F_1)=F_2$, $|\Phi_t(F_{1,i})|=|F_{1,i}|$ for all $i=1,\dots, m$ and $t\in [0,1]$. Moreover
\begin{equation}\label{vicinovicino}
\sup_{t\in [0,1]}\|\Phi_t-Id\|_{C^{2,\alpha}(\mathcal{N}_{{\bar\eta}/2}(\partial G))}\leq C\|h_{F_1}- h_{F_2}\|_{C^{2,\alpha}(\partial G)},
\end{equation}
and the velocity field $X$ satisfies $\operatorname{div} X=0$ in the $\bar \eta/2$-neighborhood $\mathcal{N}_{{\bar\eta}/2}(\partial G)$.
Here $F_{i, 1}$, \dots, $F_{i,m}$ denote the bounded open sets enclosed by the connected components $\Gamma_{F_i, 1}, \dots, \Gamma_{F_i, m}$ of $\partial F_i$, $i=1,2$, which are supposed to be numbered as in \eqref{numbered}.
\end{lemma}
\begin{proof}
We adapt the construction of \cite[Proposition~3.4]{Morini}.
We start by constructing a $C^\infty$~vector field $\tilde X:{\mathcal N}_{\bar \eta}(\partial G)\to\mathbb{R}^2$ satisfying
\begin{equation}\label{campo1}
\operatorname{div} \tilde X=0\quad\text{in ${\mathcal N}_{\bar \eta}(\partial G)$},\qquad\quad \tilde X= \nu_G\quad\text{on $\partial G$}.
\end{equation}
To this aim, let $\zeta$ be the solution of
$$
\begin{cases}
\nabla \zeta\cdot \nabla d_G+\zeta\Delta d_G=0 & \text{in ${\mathcal N}_{\bar \eta}(\partial G)$,}\\
\zeta=1 & \text{on $\partial G$.}
\end{cases}
$$
We may solve the above PDE by the method of characteristics: constructing such a $\zeta$ amounts to solving
for every $x\in\partial G$ the Cauchy problem
$$
\begin{cases}
(f_x)'(t)+f_x(t)\Delta d_G(x+t \nu_G(x))=0 & \text{in $(-\bar\eta,\bar\eta)$,} \\
f_x(0)=1\,,
\end{cases}
$$
and setting
$$
\zeta(x+t \nu_G(x)):=f_x(t)=\exp\Bigl(-\int_0^t\Delta d_G(x+s \nu_G(x))\, ds\Bigr)\,.
$$
We can now define $\tilde X:=\zeta\nabla d_G$ and check that
$\operatorname{div}(\zeta\nabla d_G)= \nabla \zeta\cdot \nabla d_G+\zeta\Delta d_G=0$.
Let now $F_1$ and $F_2$ be as in the statement.
We choose $X\in C^\infty_c(\Omega; \mathbb{R}^2)$ in such a way
that
\begin{equation}\label{campo2}
X(x):=\biggl(\int_{h_{F_1}(\pi_G(x))}^{h_{F_2}(\pi_G(x))}\frac{ds}{\zeta(\pi_G(x)+s \nu_G(\pi_G(x)))}\biggr)\,\tilde X(x)\qquad \text{for every $x\in {\mathcal N}_{\bar \eta/2}(\partial G)$,}
\end{equation}
and we let $\Phi$ be the associated flow.
Notice that the integral appearing in \eqref{campo2} represents the time
needed to go from $\pi_G(x)+h_{F_1}(\pi_G(x))\nu_G(\pi_G(x))$ to $\pi_G(x)+h_{F_2}(\pi_G(x))\nu_G(\pi_G(x))$ along the trajectory of the vector field $\tilde X$.
Therefore the above definition ensures that the time needed to travel between the same two points along the modified vector field $X$ is
one. This is equivalent to saying that for all $x\in \partial G$ we have $\Phi_1(x+h_{F_1}(x)\nu_G(x))= x+h_{F_2}(x)\nu_G(x)$ and, in turn, $\Phi_1(F_1)=F_2$. Moreover, recalling the first equation in \eqref{campo1} and using the fact that the function
$$
x\mapsto\int_{h_{F_1}(\pi_G(x))}^{h_{F_2}(\pi_G(x))}\frac{ds}{\zeta(\pi_G(x)+s \nu_G(\pi_G(x)))}
$$
is constant along the trajectories of $\tilde X$, we deduce from \eqref{campo2} that the modified field $X$ is divergence free in ${\mathcal N}_{\bar \eta/2}(\partial G)$.
Note that by \eqref{campo2} it also follows that
$$
\|X\|_{C^{2,\alpha}({\mathcal N}_{\bar \eta/2}(\partial G))}\leq C \|h_{F_1}-h_{F_2}\|_{C^{2,\alpha}(\partial G)}
$$
for a constant $C>0$ depending on $G$, and thus \eqref{vicinovicino} easily follows.
Observe now that for $i=1, \dots, m$ and for $\varepsilon_0>0$ small enough, by \cite[equation (2.30)]{ChSte} we have
$$
\frac{d^2}{dt^2}|\Phi_t(F_{1,i})|=\int_{\Phi_t(\Gamma_{F_1, i})}(\operatorname{div} X)(X\cdot \nu_{ \Phi_t(F_{1,i})})\, d\mathcal{H}^1=0\,\qquad\text{for all $t\in [-\varepsilon_0,1]$,}
$$
where we used the fact that $X$ is divergence free in ${\mathcal N}_{\bar \eta/2}(\partial G)$.
Hence the function $t\mapsto |\Phi_t(F_{1,i})|$ is affine in $[-\varepsilon_0,1]$. Since by assumption $|\Phi_0(F_{1,i})|=|F_{1,i}|=|F_{2,i}|=|\Phi_1(F_{1,i})|$, it is in fact constant.
This concludes the proof of the lemma.
\end{proof}
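The last step rests on the elementary fact that a divergence-free velocity field generates an area-preserving flow. A minimal numerical sketch of this fact, with an assumed, hypothetical field unrelated to the one constructed above: the linear field $X(x,y)=(x,-y)$ is divergence free, its flow is $\Phi_t(x,y)=(e^t x, e^{-t}y)$, and the area enclosed by the flowed boundary stays constant in $t$.

```python
import numpy as np

# Illustration (assumed example, not from the paper): the flow of a
# divergence-free field preserves enclosed area, the degenerate ("constant")
# case of the affine behaviour of t -> |Phi_t(F_{1,i})| used above. We take
# the divergence-free linear field X(x, y) = (x, -y), whose flow is
# Phi_t(x, y) = (e^t x, e^{-t} y), applied to a polygonal unit circle.

def shoelace_area(pts):
    # signed area of a closed polygon via the shoelace formula
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
disk = np.column_stack([np.cos(theta), np.sin(theta)])  # area ~ pi

areas = []
for t in (0.0, 0.5, 1.0):
    flowed = np.column_stack([np.exp(t) * disk[:, 0],
                              np.exp(-t) * disk[:, 1]])
    areas.append(shoelace_area(flowed))

print(areas)  # all three values agree (~pi)
```

Since $\Phi_t$ is here a linear map of determinant one, the polygonal areas agree exactly, up to floating-point error.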
We conclude this section by showing that in a sufficiently small $C^{2,\alpha}$-neighborhood of $G$ the stationary sets are isolated, once we fix the areas enclosed by the connected components of the boundary.
\begin{proposition}\label{stationary}
Assume that the reference set $G\subset\subset\Omega$ is a (smooth) strictly stable stationary set in the sense of Definition~\ref{def:stable}, fix $\alpha\in (0,1)$, and let $\sigma_0$ be the constant provided by Lemma~\ref{lemma:j2>0near}. There exists $\sigma_1\in (0, \sigma_0)$ with the following property: Let $F_1$, $F_2\in \mathfrak{h}^{2,\alpha}_{\sigma_1}(G)$ be stationary sets in the sense of Definition~\ref{def stationarity} and (with the same notation as in Lemma~\ref{lemma:connect}) assume that $|F_{1, i}|=|F_{2,i}|$ for $i=1,\dots, m$. Then $F_1=F_2$.
\end{proposition}
\begin{proof}
We start by observing that for any $\eta\in (0,\sigma_0)$ we may choose $\sigma_1>0$ so small that for any pair $F_1$, $F_2\in \mathfrak{h}^{2,\alpha}_{\sigma_1}(G)$ the flow $\Phi_t$ provided by Lemma~\ref{lemma:connect} satisfies
\begin{equation}\label{cista}
\Phi_t(F_1)\in \mathfrak{h}^{2,\alpha}_{\eta}(G)\subseteq \mathfrak{h}^{2,\alpha}_{\sigma_0}(G)\qquad\text{for all }t\in [0,1].
\end{equation}
Notice that this is possible thanks to \eqref{vicinovicino}.
Recall that by Remark~\ref{rm:station} there exist constants $\lambda_i$ such that $g(\nu_G)k_G - Q(E(u_G))=\lambda_i$ on $\Gamma_{G,i}$ for $i=1, \dots, m$. In what follows, the subscript $t$ stands for the subscript $\Phi_t(F_1)$, where $\Phi_t$ is the flow of Lemma~\ref{lemma:connect}. Fix $\varepsilon>0$ and observe that by taking $\eta$ in \eqref{cista} and, in turn, $\sigma_1$ smaller if needed, we may ensure that
\begin{equation}\label{cista1}
\sup_{t\in [0,1]}\|\nu_t-\nu_G\circ\pi_G\|_{C^1(\Phi_t(\partial F_1))}\leq \varepsilon
\end{equation}
and
\begin{equation}\label{cista2}
\sup_{i=1,\dots, m}\sup_{t\in [0,1]}\|g(\nu_t) k_t - Q(E(u_t))-\lambda_i\|_{C^0(\Phi_t(\Gamma_{F_1,i}))}\leq \varepsilon.
\end{equation}
The latter condition can be guaranteed thanks also to the elliptic estimates proved later in Lemma~\ref{elastiset}. Let $X$ be the velocity field of $\Phi_t$ and recall that by the explicit construction given in the proof of Lemma~\ref{lemma:connect} we have
$X=[X\cdot (\nu_G\circ\pi_G)]\,\nu_G\circ \pi_G$ in ${\mathcal N}_{\bar \eta/2}(\partial G)$. Thus, writing
$X=[X\cdot (\nu_G\circ\pi_G-\nu_t)]\,\nu_G\circ \pi_G+ (X\cdot \nu_t)\,\nu_G\circ \pi_G$ on
$\Phi_t(\partial F_1)$ and using
\eqref{cista1} with $\varepsilon$ (and in turn $\sigma_1$) sufficiently small, we find that for all $t\in [0,1]$
\begin{equation}\label{cista3}
|X|\leq 2|X\cdot \nu_t|\quad\text{ and }\quad
|\partial_\sigma X|\leq C\bigl(|X\cdot \nu_t|+|\partial_\sigma(X\cdot \nu_t)|\bigr)\quad\text{ on }\Phi_t(\partial F_1),
\end{equation}
for some constant $C>0$ depending only on $G$.
Let now $F_1$ and $F_2$ be as in the statement of the proposition and $\Phi_t$ as above. Recalling \eqref{eq:J''} and \eqref{eq:pa2J}, for every $s\in[0,1]$
we may write
\begin{align}
\frac{d^2}{dt^2}J(\Phi_t(F_1))\Big|_{t=s}&=\partial^2J(\Phi_s(F_1))[X\cdot \nu_s]\nonumber\\
&\quad-\int_{\Phi_s(\partial F_1)}\bigl[g(\nu_s)k_s-Q(E(u_s))\bigr]\operatorname{div}_\tau\bigl( X_\tau(X\cdot \nu_s)\bigr)\, d\mathcal{H}^1\nonumber\\
&=\partial^2J(\Phi_s(F_1))[X\cdot \nu_s]\nonumber\\
&\quad-\sum_{i=1}^m\int_{\Phi_s(\Gamma_{F_1, i})}\bigl[g(\nu_s)k_s-Q(E(u_s))-\lambda_i\bigr]\operatorname{div}_\tau\bigl( X_\tau(X\cdot \nu_s)\bigr)\, d\mathcal{H}^1. \label{catenona}
\end{align}
Recall that $(\Phi_t)$ is an admissible flow and thus $X\cdot\nu_s\in \tilde H^1(\Phi_s(\partial F_1))$ for every $s\in[0,1]$ due to Remark~\ref{rm:mn}. In turn, by \eqref{cista} and
Lemma~\ref{lemma:j2>0near} we deduce that
\[
\partial^2J(\Phi_s(F_1))[X\cdot \nu_s]\geq\frac{m_0}{2}\|X\cdot\nu_s\|^2_{\tilde H^1(\Phi_s(\partial F_1))}.
\]
Note also that by \eqref{cista3} it is easily checked that
\[
\|\operatorname{div}_\tau\bigl( X_\tau(X\cdot \nu_s)\bigr)\|_{L^1(\Phi_s(\partial F_1))}\leq C\|X\cdot\nu_s\|^2_{\tilde H^1(\Phi_s(\partial F_1))}.
\]
Thus, collecting all the above observations and recalling also \eqref{cista2}, we deduce from \eqref{catenona} that for every $s\in[0,1]$
\[
\frac{d^2}{dt^2}J(\Phi_t(F_1))\Big|_{t=s}\geq \Bigl(\frac{m_0}{2}-Cm\varepsilon\Bigr)\|X\cdot\nu_s\|^2_{\tilde H^1(\Phi_s(\partial F_1))}\geq \frac{m_0}{4}\|X\cdot\nu_s\|^2_{\tilde H^1(\Phi_s(\partial F_1))},
\]
where the last inequality holds true provided we choose in \eqref{cista2} a sufficiently small $\varepsilon$ (and $\sigma_1$). Since on the other hand by the stationarity of $F_1$ and $F_2$ we have
\[
\frac{d}{dt}J(\Phi_t(F_1))\Big|_{t=0}=\frac{d}{dt}J(\Phi_t(F_1))\Big|_{t=1}=0,
\]
we infer that $\frac{d^2}{dt^2}J(\Phi_t(F_1))\big|_{t=s}=0$ and in turn $X\cdot\nu_s=0$ on $\Phi_s(\partial F_1)$ for all
$s\in[0,1]$. This means that $s\mapsto \Phi_s(F_1)$ is constant in $[0,1]$ and, in particular, $F_1=F_2$.
\end{proof}
\section{Short-time existence and regularity}\label{sec:existence}
We are interested in the evolution law
\begin{equation} \label{flow}
V_t = \partial_{\sigma \sigma}\big(g(\nu_t)k_t - Q(E(u_t))\big) \quad\text{on }\partial F_t,
\end{equation}
where $V_t$ stands for the outer normal velocity of $\partial F_t$, and
$u_t\in H^1(\Omega\setminus F_t; \mathbb{R}^2)$ is the unique solution of
\begin{equation}\label{ut}
\begin{cases}
\operatorname{div} \mathbb{C} E(u_t)=0 & \text{in }\Omega\setminus F_t,\\
\mathbb{C} E(u_t)[\nu_t]=0 & \text{on }\partial F_t\cup (\partial \Omega\setminus \partial_D\Omega),\\
u_t=w_0 &\text {on }\partial_D\Omega.
\end{cases}
\end{equation}
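The parabolic (fourth-order) character of the evolution law can be seen in the simplest possible setting. The following Python sketch is our illustration, not taken from the paper: linearizing the law around a flat interface, with $g\equiv 1$ and the elastic term dropped, yields the equation $h_t=-h_{xxxx}$ for a periodic profile $h$, so each Fourier mode decays exactly as $e^{-k^4t}$ while the zero mode (the enclosed area) is conserved.

```python
import numpy as np

# Hedged illustration (not the full flow with elasticity): the linearization
# of V = d^2/dsigma^2 (curvature) around a flat interface, with g == 1 and
# Q(E(u)) dropped, is h_t = -h_xxxx. In Fourier variables each mode evolves
# as h_k(t) = h_k(0) * exp(-k^4 t), which gives an exact spectral solver.

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h0 = 0.05 * np.sin(x) + 0.02 * np.sin(7.0 * x)  # initial profile

k = np.fft.fftfreq(N, d=1.0 / N)                # integer wave numbers

def evolve(h, t):
    return np.real(np.fft.ifft(np.fft.fft(h) * np.exp(-(k**4) * t)))

h1 = evolve(h0, 0.1)

print(np.mean(h1))   # ~0: the zero mode (area) is conserved
print(np.max(np.abs(h1)) < np.max(np.abs(h0)))  # True: the profile flattens
```

Note how the mode $k=7$ is damped by $e^{-7^4\cdot 0.1}$, i.e. essentially instantaneously, which is the typical stiffness of fourth-order flows.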
\begin{remark}\label{rm:accameno1}
We remark that \eqref{flow} can be regarded as the gradient flow of \eqref{energy} with respect to a suitable Riemannian structure of $H^{-1}$-type.
To illustrate this fact, consider the dual $H^{-1}_t$ of $H^1_t:=H^1(\partial F(t))$, endowed with the scalar product
\begin{multline}\label{hmeno1scalar}
\langle \psi_1, \psi_2\rangle_{H^{-1}_t}:=\int_{\partial F(t)}\partial_\sigma v_{\psi_1}\partial_\sigma v_{\psi_2}\,d\mathcal{H}^1=-\langle \partial_{\sigma\sigma}v_{\psi_2},
v_{\psi_1}\rangle_{H^{-1}_t\times H^{1}_t}\\ =
\langle \psi_2,
v_{\psi_1}\rangle_{H^{-1}_t\times H^{1}_t}=\langle \psi_1,
v_{\psi_2}\rangle_{H^{-1}_t\times H^{1}_t},
\end{multline}
where $\partial_\sigma$ denotes the tangential derivative on $\partial F(t)$ and for any $\psi\in H^{-1}_t$ the function $v_\psi$ is the unique function in $H^1_t$ satisfying
\begin{equation}\label{vpsiintro}
\begin{cases}
-\partial_{\sigma\sigma}v_\psi= \psi & \text{on $\partial F(t)$}\,,
\\
\displaystyle \int_{\partial F(t)} v_\psi\, d\mathcal{H}^1=0\,.
\end{cases}
\end{equation}
As already recalled, the first variation $\partial J(F(t))$ satisfies
$$
\partial J(F(t))[\psi]=\int_{\partial F(t)} \bigl(k_{\varphi, t}-Q(E(u_{F(t)}))\bigr) \psi\, d\mathcal{H}^1
$$
for all $\psi\in C^\infty(\partial F(t))$.
Thus, recalling also \eqref{flow}, \eqref{hmeno1scalar} and \eqref{vpsiintro}, we have
\begin{align*}
\langle V_t, \psi \rangle_{H^{-1}_t}&=\int_{\partial F(t)}V_t\, v_{\psi}\, d\mathcal{H}^1
=\int_{\partial F(t)} \partial _{\sigma\sigma}\bigl(k_{\varphi, t}-Q(E(u_{F(t)}))\bigr) v_{\psi}\, d\mathcal{H}^1\\
&= \int_{\partial F(t)} \bigl(k_{\varphi, t}-Q(E(u_{F(t)}))\bigr) \partial_{\sigma\sigma}v_{\psi}\, d\mathcal{H}^1=-\partial J(F(t))[\psi].
\end{align*}
Hence, at each time the normal velocity $V_t$ is the element of $H^{-1}_t$ that represents the action of $-\partial J(F(t))$ with respect to the scalar product defined in \eqref{hmeno1scalar}. This formally establishes the $H^{-1}$-gradient flow structure of \eqref{flow}.
\end{remark}
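The defining identity of the scalar product above can also be verified numerically. The sketch below (an illustration with assumed sample data) works on the unit circle, solves $-\partial_{\sigma\sigma}v_\psi=\psi$ with zero average mode-by-mode in Fourier variables, and checks that $\int \partial_\sigma v_{\psi_1}\,\partial_\sigma v_{\psi_2}\,d\mathcal H^1=\int \psi_2\, v_{\psi_1}\,d\mathcal H^1$.

```python
import numpy as np

# Numerical sanity check (illustration only) of the H^{-1} scalar product
# on the unit circle: for zero-average psi, the problem -v'' = psi with
# zero mean is solved mode-by-mode in Fourier variables by v_k = psi_k/k^2,
# and the two expressions for <psi1, psi2> must agree.

N = 512
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

def v_of(psi):
    psi_hat = np.fft.fft(psi)
    v_hat = np.zeros_like(psi_hat)
    nz = k != 0
    v_hat[nz] = psi_hat[nz] / (k[nz] ** 2)  # zero mode stays 0 -> zero mean
    return np.real(np.fft.ifft(v_hat))

def d_sigma(u):
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

psi1 = np.cos(theta) + 0.5 * np.sin(3.0 * theta)  # zero average
psi2 = np.sin(2.0 * theta) - 0.2 * np.cos(theta)  # zero average

dtheta = 2.0 * np.pi / N
lhs = np.sum(d_sigma(v_of(psi1)) * d_sigma(v_of(psi2))) * dtheta
rhs = np.sum(psi2 * v_of(psi1)) * dtheta

print(lhs, rhs)  # the two values coincide
```

For trigonometric polynomials the quadrature is exact, so the two values agree to machine precision.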
The following theorem establishes the short-time existence of a unique weak solution of \eqref{flow}.
In Theorem~\ref{Cinfty} below we will show that the weak solution is in fact smooth and therefore classical.
\begin{theorem}\label{th:existence}
Let $F_0\subset\subset\Omega$ be such that
\begin{equation}\label{f0}
\partial F_0 = \{ x + h_0(x) \nu_{G}(x) \mid x\in \partial G \}\qquad\text{for some }h_0 \in H^3(\partial G).
\end{equation}
There exist $\delta>0$ and $T>0$, which depend on the $H^3$-norm of $h_0$, such that if $\|h_0\|_{L^2(\partial G)} \leq \delta$, then the flow \eqref{flow} admits a unique local-in-time weak solution
$(F_t)_{t \in (0,T)}$ with initial set $F_0$.
More precisely, we have $\partial F_t = \{ x + h_t(x) \nu_{G}(x) \mid x\in\partial G \}$, where
$(h_t)_t\in H^1(0,T; H^1(\partial G))\cap L^2(0, T; H^3(\partial G))$.
Moreover, $(h_t)_t \in C^0([0, T); C^{2,\alpha}(\partial G))$ for all $\alpha\in (0, \frac12)$ and
$\bigl([g(\nu_t)k_t-Q(E(u_t))]\circ\pi_{F_t}^{-1}\bigr)_t \in L^2(0,T; H^3(\partial G))$.
\end{theorem}
Note that when the initial set $F_0$ is smooth we may take $G=F_0$.
We give the proof of Theorem~\ref{th:existence} at the end of the section, after establishing a sequence of preliminary lemmas. Our proof is based on a fixed-point argument. To this aim, for a given smooth function $f : \partial G \times (0,T) \to \mathbb{R}$, we consider the forced surface diffusion flow given by
\begin{equation}\label{eqf}
V_t = \partial_{\sigma \sigma}\big(g(\nu_t)k_t + f_t \circ \pi_G\big)\quad\text{on }\partial F_t
\end{equation}
with initial datum $F_0$ of class $H^3$, where we set $f_t:=f(\cdot, t)$. To simplify the notation we will write
\begin{equation}\label{Rt}
R_t:=g(\nu_t)k_t + f_t \circ \pi_G \quad\text{on }\partial F_t.
\end{equation}
The following monotone quantities are the starting point of our analysis.
\begin{lemma} \label{monotonisuus lemma}
Let $F_0$ be a set with smooth boundary, $f \in C^{\infty}(\partial G\times [0,\infty))$, and let $(F_t)_{t \in (0,T)}$ be a smooth flow satisfying \eqref{eqf} with initial datum $F_0$. Then we have
\begin{equation}\label{stimaf3}
\frac{d}{dt}\int_{F_t\Delta G} \text{dist}(x, \partial G)\, dx = \int_{\partial F_t} d_{G}\, \partial_{\sigma\sigma}R_{t}\, d\mathcal{H}^1 \leq P(F_t)^{\frac{1}{2}} \left( \int_{\partial F_t}(\partial_\sigma R_t)^2 \, d\mathcal{H}^1\right)^{\frac{1}{2}},
\end{equation}
whenever the flow \eqref{eqf} exists. Moreover, there exists $C_1>0$, depending on $\sup_{(0,T)} \|h_t\|_{C^{2,\alpha}}$ and $\sup_{(0,T)} \|f_t\|_{C^{1,\alpha}}$, such that
\begin{multline}\label{stimaf2}
\frac{d}{dt}\int_{\partial F_t}(\partial_\sigma R_t)^2\, d\mathcal{H}^1+
c_0\int_{\partial F_t} (\partial_{\sigma\sigma\sigma}R_t)^2\, d\mathcal{H}^1 \\
\leq C_1\| \dot f_t \|_{H^{-\frac12}(\partial G)}^2 + C_1\left( 1+ \int_{\partial F_t}(\partial_\sigma R_t)^2\, d\mathcal{H}^1\right)^q\,,
\end{multline}
for some $q>1$.
\end{lemma}
\begin{proof}
Let $X_t$ be the velocity field associated with the flow. In particular we have that
\[
X_t \cdot \nu_{t} = \partial_{\sigma \sigma}R_t.
\]
For $t \in (0,T)$ and $s>0$ small, the maps $\Phi_s := \pi_{F_{t+s}}^{-1} \circ \pi_{F_t}: \partial F_t \to \partial F_{t+s}$ are admissible diffeomorphisms, and by the above equality it holds that $\frac{\partial}{\partial s} \Phi_s \big|_{s=0} = X_t$.
As mentioned in the previous section, we can extend $\nu_t, \tau_t$ and $k_t$ by means of the signed distance function $d_{F_t}$ in a tubular neighborhood of $\partial F_t$. This extension yields the following identities (see for instance \cite[Lemma 4.2]{Bo}):
\begin{equation}
\label{app1}
\partial_{\nu_t} k_{\varphi, t} = - k_t^2 g(\nu_{t}),
\end{equation}
\begin{equation}
\label{app2}
\dot{\nu}_t = - \partial_\sigma (X_t \cdot \nu_t) \, \tau_t = - \partial_{\sigma \sigma \sigma} R_t\, \tau_t
\end{equation}
and
\begin{equation}
\label{app3}
\dot{k}_{\varphi, t} = \operatorname{div} (D^2 \varphi(\nu_t) \,\dot{\nu}_t ) = - \partial_\sigma (g(\nu_{t}) \partial_{\sigma \sigma \sigma}R_t).
\end{equation}
Note that
\[
\int_{F_t \Delta G} \text{dist}(x, \partial G)\, dx = \int_{F_t} d_G\, dx - \int_{G}d_G\, dx .
\]
Thus,
\[
\begin{split}
\frac{d}{dt} \int_{F_t \Delta G} \text{dist}(x, \partial G)\, dx &= \frac{d}{dt}\int_{F_t} d_G\, dx = \int_{F_t} \operatorname{div}( d_G X_t) \, dx \\
&= \int_{\partial F_t} d_G (X_t\cdot \nu_t ) \, d \mathcal{H}^1 = \int_{\partial F_t} d_G \, \partial_{\sigma \sigma}R_t\, d \mathcal{H}^1\\
&= - \int_{\partial F_t} \partial_{\sigma}d_G \, \partial_{\sigma}R_t\, d \mathcal{H}^1 \leq P(F_t)^{\frac12} \left( \int_{\partial F_t} \, (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1 \right)^{\frac12}.
\varepsilonnd{split}
\]
This proves \eqref{stimaf3}.
Concerning \eqref{stimaf2}, we begin by calculating
\begin{equation}
\label{app4}
\begin{split}
\frac{d}{dt} &\left(\frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1\right) = \frac{\partial}{\partial s} \left(\frac{1}{2} \int_{\partial F_{t}} \big( (\nabla R_{t+s})(\Phi_s(x)) \cdot \tau_{t+s}( \Phi_s(x)) \big)^2 \, J_{\tau} \Phi_s \, d \mathcal{H}^1\right) \Big|_{s=0}\\
&= \frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \operatorname{div}_\tau X_t \, d \mathcal{H}^1 + \int_{\partial F_{t}} \partial_{\sigma}R_{t} \, \frac{\partial}{\partial s} \Big(\nabla R_{t+s}(\Phi_s(x)) \cdot \tau_{t+s}( \Phi_s(x)) \Big) \Big|_{s=0} \, d \mathcal{H}^1.
\end{split}
\end{equation}
With our notation, the last term reads
\[
\begin{split}
\frac{\partial}{\partial s} & \Big(\nabla R_{t+s}(\Phi_s(x)) \cdot \tau_{t+s}( \Phi_s(x)) \Big) \Big|_{s=0}\\
&= \partial_{\sigma} \dot{R}_t + (\nabla^2R_t \, X_t)\cdot \tau_t + \nabla R_t \cdot \dot{\tau}_t + \nabla R_t\cdot( \nabla \tau_t X_t).
\end{split}
\]
We write $X_{t, \tau} := X_t \cdot \tau_t $.
Note that by \eqref{app2} we have that $\dot{\tau}_t = \mathcal{R}\dot{\nu}_t = \partial_{\sigma \sigma \sigma}R_t \, \nu_t$. Moreover, it holds that
$\nabla\tau_t \, \nu_t = 0$. Therefore we get
\[
\begin{split}
\frac{\partial}{\partial s} & \Big(\nabla R_{t+s}(\Phi_s(x)) \cdot \tau_{t+s}( \Phi_s(x)) \Big) \Big|_{s=0} \\
&= \partial_{\sigma} \dot{R}_t + \partial_{\sigma \sigma \sigma} R_t\, \partial_{\nu_t} R_t + \partial_{\sigma \sigma}R_t (\nabla^2R_t \, \nu_t)\cdot \tau_t + (\nabla^2R_t \, \tau_t)\cdot \tau_t \, X _{t,\tau} + \nabla R_t \cdot(\nabla \tau_t \, \tau_t)\, X _{t,\tau}.
\end{split}
\]
Therefore, using the fact that $\partial_\sigma (\partial_{\nu_t} R_t) = k_t \, \partial_{\sigma}R_t +
(\nabla^2R_t \, \nu_t)\cdot \tau_t $ and integrating by parts, \eqref{app4} can be written as
\[
\begin{split}
\frac{d}{dt} &\left(\frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1\right) = \int_{\partial F_{t}} \frac{1}{2}(\partial_{\sigma}R_{t})^2 \operatorname{div}_\tau X_t + \partial_{\sigma}R_{t} \, \partial_\sigma\dot{R}_t + \partial_{\sigma}R_{t} \, \partial_{\sigma \sigma \sigma}R_t\, \partial_{\nu_t} R_t \, d \mathcal{H}^1\\
&\quad+\int_{\partial F_{t}}\partial_{\sigma}R_{t} \, \partial_{\sigma \sigma}R_t (\nabla^2R_t \, \nu_t)\cdot \tau_t +\partial_{\sigma}R_{t} \, X _{t,\tau} \, (\nabla^2R_t \, \tau_t)\cdot \tau_t + \partial_{\sigma}R_{t} \, \nabla R_t\cdot (\nabla \tau_t\, \tau_t) \, X _{t,\tau} \, d \mathcal{H}^1\\
&= \int_{\partial F_{t}} \frac{1}{2}(\partial_{\sigma}R_{t})^2 \operatorname{div}_\tau X_t -\partial_{\sigma \sigma}R_t \dot{R}_t - (\partial_{\sigma \sigma}R_t)^2 \partial_{\nu_t} R_t - k_t \, (\partial_{\sigma}R_{t})^2 \partial_{\sigma \sigma}R_t \, d \mathcal{H}^1\\
&\quad+ \int_{\partial F_{t}} \partial_{\sigma}R_{t} \, (\nabla^2R_t \, \tau_t)\cdot \tau_t \, X _{t,\tau} +\partial_{\sigma}R_{t} \nabla R_t\cdot (\nabla \tau_t\, \tau_t) \, X _{t,\tau} \, d \mathcal{H}^1.
\end{split}
\]
Note that
\[
\frac{1}{2}\operatorname{div}_\tau\big((\partial_\sigma R_t)^2 X_t\big) =\frac{1}{2}(\partial_{\sigma}R_{t})^2 \operatorname{div}_\tau X_t +\partial_{\sigma}R_{t} \, (\nabla^2R_t \, \tau_t)\cdot \tau_t \, X _{t,\tau} +\partial_{\sigma}R_{t} \nabla R_t\cdot (\nabla \tau_t\, \tau_t) \, X _{t,\tau}\,.
\]
Hence, using also \eqref{divform}, we get
\begin{equation} \label{app5}
\begin{split}
\frac{d}{dt} &\left(\frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1\right)\\
&= \int_{\partial F_{t}} \Bigl(\frac{1}{2}\operatorname{div}_\tau\big((\partial_\sigma R_t)^2 X_t\big)
-\partial_{\sigma \sigma}R_t \dot{R}_t - (\partial_{\sigma \sigma}R_t)^2 \partial_{\nu_t} R_t - k_t \, (\partial_{\sigma}R_{t})^2 \partial_{\sigma \sigma}R_t\Bigr) \, d \mathcal{H}^1\\
&= - \int_{\partial F_{t}} \Bigl(\partial_{\sigma \sigma}R_t \dot{R}_t + (\partial_{\sigma \sigma}R_t)^2 \partial_{\nu_t} R_t +\frac12 k_t \, (\partial_{\sigma}R_{t})^2 \partial_{\sigma \sigma}R_t\Bigr) \, d \mathcal{H}^1.
\end{split}
\end{equation}
Therefore, recalling \eqref{Rt}, by \eqref{app1} and \eqref{app3} we get from \eqref{app5} that
\begin{multline} \label{app6}
\frac{d}{dt} \left(\frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1\right)= - \int_{\partial F_{t}} g(\nu_t) (\partial_{\sigma \sigma \sigma}R_t)^2\, d \mathcal{H}^1 - \int_{\partial F_{t}} \partial_{\sigma \sigma }R_t ( \dot f_t \circ \pi_G )\, d \mathcal{H}^1 \\
- \int_{\partial F_{t}} \Big( \partial_{\nu_t} (f_t \circ \pi_G) (\partial_{\sigma \sigma} R_t)^2 - g(\nu_t) k_t^2 (\partial_{\sigma \sigma} R_t)^2 + \frac{1}{2} k_t \, (\partial_{ \sigma} R_t)^2 \partial_{\sigma \sigma} R_t \Big) \, d \mathcal{H}^1.
\end{multline}
By the ellipticity assumptions \eqref{ellipticity} we have that $c_0 \leq g(\nu_t) \leq C_0$ and $|k_t| \leq \frac{1}{c_0} |k_{\varphi,t}| \leq C$, where $C$ depends also on the $C^{2,\alpha}$-norm of $h_t$. For $\varepsilon>0$ to be chosen, using also Young's inequality, we may estimate \eqref{app6} as
\[
\begin{split}
\frac{d}{dt} &\left(\frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1\right) + c_0\int_{\partial F_{t}} (\partial_{\sigma \sigma \sigma}R_t)^2\, d \mathcal{H}^1 \\
&\leq C_\varepsilon\| \dot f_t\|_{H^{-\frac12}(\partial G)}^2 + \varepsilon \|\partial_{\sigma \sigma}R_t\|_{H^{\frac12}(\partial F_t)}^2 + C \int_{\partial F_{t}} \left( 1+ (\partial_{\sigma \sigma}R_t)^2 + (\partial_\sigma R_t)^2 |\partial_{\sigma \sigma}R_t| \right) \, d \mathcal{H}^1,
\end{split}
\]
where the constant $C$ depends on the $C^{2,\alpha}$-norm of $h_t$ and the $C^{1,\alpha}$-norm of $f_t$. Since
$\|\partial_{\sigma \sigma}R_t\|_{H^{\frac12}(\partial F_t)}\leq C \|\partial_{\sigma \sigma \sigma}R_t\|_{L^{2}(\partial F_t)}$, by choosing $\varepsilon$ small enough we get
\[
\begin{split}
\frac{d}{dt} &\left(\frac{1}{2} \int_{\partial F_{t}} (\partial_{\sigma}R_{t})^2 \, d \mathcal{H}^1\right) + \frac23c_0\int_{\partial F_{t}} (\partial_{\sigma \sigma \sigma}R_t)^2\, d \mathcal{H}^1 \\
&\leq C\| \dot f_t\|_{H^{-\frac12}(\partial G)}^2 + C \int_{\partial F_{t}} \left( 1+ (\partial_{\sigma \sigma}R_t)^2 + (\partial_\sigma R_t)^4 \right) \, d \mathcal{H}^1.
\end{split}
\]
Note now that by Theorem~\ref{interpolation},
\[
\|\partial_{\sigma \sigma} R_t\|_{L^{2}}^2 \leq C \|\partial_{\sigma \sigma \sigma} R_t\|_{L^{2}}\,\|\partial_\sigma R_t\|_{L^{2}} \leq \varepsilon \|\partial_{\sigma \sigma \sigma} R_t\|_{L^{2}}^2 + \frac{C}{\varepsilon} \|\partial_{ \sigma} R_t\|_{L^{2}}^2
\]
and
\[
\|\partial_{ \sigma} R_t\|_{L^{4}}^4 \leq C \|\partial_{ \sigma\sigma\sigma } R_t\|_{L^{2}}^{\frac12}\|\partial_{ \sigma} R_t\|_{L^{2}}^{\frac72} \leq \varepsilon \|\partial_{ \sigma\sigma\sigma} R_t\|_{L^{2}}^2 + \frac{C}{\varepsilon} \|\partial_{ \sigma} R_t\|_{L^{2}}^{\frac{14}{3}}.
\]
Hence the estimate \eqref{stimaf2} follows with $q = \frac{7}{3}$ by choosing $\varepsilon$ small enough.
\end{proof}
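The first interpolation inequality used in the proof has a transparent one-line justification on a closed curve, which can be checked numerically. The sketch below (an illustration with an assumed sample profile) uses the identity $\|R''\|_{L^2}^2=-\int R'R'''\leq\|R'\|_{L^2}\|R'''\|_{L^2}$, valid for smooth $2\pi$-periodic $R$ by integration by parts and Cauchy--Schwarz, so the inequality holds here with $C=1$; the $L^2$ norms of the derivatives are computed exactly through Parseval's identity.

```python
import numpy as np

# Numerical illustration (assumed sample function) of the interpolation
# inequality ||R''||_{L^2}^2 <= ||R'||_{L^2} * ||R'''||_{L^2} for smooth
# 2*pi-periodic R, with norms evaluated exactly via Parseval's identity.

N = 256
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
R = np.sin(theta) + 0.4 * np.cos(3.0 * theta) + 0.1 * np.sin(8.0 * theta)

k = np.fft.fftfreq(N, d=1.0 / N)
R_hat = np.fft.fft(R) / N  # Fourier coefficients of the trig polynomial

def l2_norm_of_derivative(order):
    # Parseval: ||d^m R||_{L^2}^2 = 2*pi * sum_k |k|^(2m) |R_hat_k|^2
    return np.sqrt(2.0 * np.pi
                   * np.sum((np.abs(k) ** (2 * order)) * np.abs(R_hat) ** 2))

n1, n2, n3 = (l2_norm_of_derivative(m) for m in (1, 2, 3))
print(n2**2 <= n1 * n3 + 1e-12)  # True
```

In Fourier variables the inequality is just Cauchy--Schwarz between the sequences $|k|\,|\hat R_k|$ and $|k|^3\,|\hat R_k|$.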
The next theorem establishes the local-in-time existence for \eqref{eqf}. The same result for $f=0$ is proved in \cite{chinese}; the case considered here follows essentially from the same argument.
\begin{theorem}\label{th:probabile}
Let $h_0\in H^4(\partial G)$ and let $f\in C^\infty(\partial G\times [0,+\infty))$. Then there exist $\delta>0$ and $T>0$ such that if $\|h_0\|_{C^1(\partial G)}\leq\delta$, then \eqref{eqf} admits a smooth solution $(F_t)_t$ defined for all $t\in (0,T)$. Moreover, setting
$\partial F_t = \{ x + h_t(x) \nu_{G}(x) \mid x\in\partial G \}$, we have that $(h_t)_t\in H^1(0,T; L^2(\partial G))\cap L^2(0,T; H^4(\partial G))$. Finally, there exists $\bar \delta\in (0, \bar \eta)$, where $\bar \eta$ is as in \eqref{bareta}, depending only on $G$ and $\Omega$, such that if $\sup_{(0,T)}\|h_t-h_0\|_{C^1(\partial G)}< \bar \delta$, then the solution can be extended beyond the time $T$.
\end{theorem}
\begin{proof}
The proof goes exactly as that of \cite[Theorem~2.5]{chinese}, taking into account the presence of the forcing term $f$. Note that, as in \cite{chinese}, in the first part of the proof we can only conclude that the time $T$ depends on $\|h_0\|_{H^4}$ and on $\|f\|_{L^2(0,T; H^2)}$. However, one can then argue as in the second part of the proof of \cite[Theorem~2.5]{chinese} to conclude that the $\bar\delta$ for which the extension property holds is independent of $\|h_0\|_{H^4}$ and $\|f\|_{L^2(0,T; H^2)}$, as long as
$f\in L^2(0,T; H^2)$ (a property which is implied by our assumption on $f$). Finally, the $C^\infty$-regularity of the solution for $t>0$ follows by standard arguments (or by arguing as in the proof of Theorem~\ref{Cinfty} below, where in fact the more complicated equation \eqref{flow} is dealt with).
\end{proof}
In the next lemma we use the estimate \eqref{stimaf2} to deduce regularity estimates for the flow \eqref{eqf}.
\begin{lemma} \label{flow exists 1}
Let $F_0$ be a smooth initial set and $h_0 \in C^\infty(\partial G)$ the function representing $\partial F_0$ as in \eqref{f0}. Fix $M_0>0$, $\alpha \in (0,\tfrac12)$ and $\delta_1\in (0,\bar \delta)$, with $\bar\delta$ as in
Theorem~\ref{th:probabile}. There exist $\delta_0>0$ and $T_0>0$, depending on $M_0$, $\alpha$ and $\delta_1$, such that if $f \in C^\infty(\partial G\times [0,+\infty))$ satisfies
\begin{equation}\label{flex0}
\sup_{(0,T_0)} \|f_t\|_{C^{1, \alpha}(\partial G)} \leq M_0\qquad \text{and} \qquad \int_0^{T_0} \|\dot f_t\|_{H^{-\frac12}(\partial G)}^2\, dt \leq M_0
\end{equation}
and if $\|h_0\|_{H^3(\partial G)} \leq M_0$ and $\|h_0\|_{L^{2}(\partial G)} < \delta_0$, then the flow \eqref{eqf} exists on $(0,T_0)$ and
\begin{equation} \label{flex1}
\sup_{(0,T_0)}\|h_t\|_{C^{2, \alpha}(\partial G)} \leq \delta_1 \qquad \text{and} \qquad \sup_{(0,T_0)} \|\partial_\sigma R_t\|_{L^{2}(\partial F_t)}^2 \leq 2C_1 M_0 + \|\partial_\sigma R_0\|_{L^{2}(\partial F_0)}^2\,,
\end{equation}
where $C_1$ is the constant appearing in Lemma~\ref{monotonisuus lemma}.
\end{lemma}
\begin{proof}
We fix $\delta_1 < \bar \delta$ and observe that if $\delta_0>0$ is sufficiently small, $\|h_0\|_{H^3(\partial G)} \leq M_0$ and $\|h_0\|_{L^{2}(\partial G)} < \delta_0$, then from \eqref{inter3} we get $\|h_0\|_{C^{2, \alpha}(\partial G)} < \bar\delta-\delta_1$. In particular, by Theorem~\ref{th:probabile} the flow exists for a short time and as long as $\|h_t\|_{C^{2, \alpha}(\partial G)} < \delta_1$, since this implies that $\|h_t-h_0\|_{C^1(\partial G)}<\bar\delta$.
Let us denote by $T_0$ the maximal time such that
\begin{equation}\label{flex2}
\|h_t\|_{C^{2, \alpha}(\partial G)} < \delta_1 \quad\text{ and }\quad \|\partial_\sigma R_t\|_{L^{2}(\partial F_t)}^2 < 2C_1 M_0 + \|\partial_\sigma R_0\|_{L^{2}(\partial F_0)}^2 \quad \text{for all }t\in (0, T_0)\,.
\end{equation}
We want to show that if \eqref{flex0} is satisfied, then $T_0$ is bounded away from 0 by a constant depending only on $M_0$, $\alpha$ and $\delta_1$. Without loss of generality we may assume that
$T_0\leq 1$, since otherwise there is nothing to prove.
Assume first that $\|h_{T_0}\|_{C^{2, \alpha}(\partial G)} =\delta_1$. From \eqref{flex2}, the first inequality in \eqref{flex0} and the assumption $\|h_0\|_{H^3(\partial G)} \leq M_0$ we conclude that
\begin{equation}\label{stop est 00}
\|\partial_\sigma R_t\|_{L^2(\partial F_t)} \leq C(M_0) \qquad \text{for all } \, t \in (0, T_0).
\end{equation}
In turn, using again the first inequality in \eqref{flex0} we get
\[
\|\partial_\sigma k_{\varphi,t}\|_{L^2(\partial F_t)}^2 \leq C(M_0) \qquad \text{for all } \, t \in (0, T_0).
\]
Now, by the first inequality in \eqref{flex2}, recalling also formula \eqref{curvature formula}, we deduce that
\begin{equation}\label{stop est 0}
\|h_t\|_{H^3(\partial G)} \leq C(M_0) \qquad \text{for all } \, t \in (0, T_0).
\end{equation}
Moreover, integrating \eqref{stimaf3} over $(0,t)$ and using \eqref{stop est 00}, we conclude that
$$
\|h_t\|_{L^2(\partial G)}^2 \leq C(M_0)T_0 + C\|h_0\|_{L^2(\partial G)}^2\leq C(M_0)T_0 + C \delta_0^2\qquad \text{for all } \, t \in (0, T_0)\,.
$$
In turn, by \eqref{inter3} and by \eqref{stop est 0} we get
\begin{align*}
\delta_1 = \|h_{T_0}\|_{C^{2, \alpha}(\partial G)} \leq C \, \bigl(\|h_{T_0}\|_{H^{3}(\partial G)}^{\theta} \, \|h_{T_0}\|_{L^{2}(\partial G)}^{1-\theta}+ \|h_{T_0}\|_{L^{2}(\partial G)}\bigr)\leq C(M_0)\left(\sqrt{T_0}+\delta_0\right)^{1-\theta}\,,
\end{align*}
where $\theta\in (0,1)$ depends only on $\alpha$. It is clear from the above inequality that if $\delta_0$ is sufficiently small, then $T_0$ is bounded away from $0$.
Assume now that $y(T_0)=2C_1M_0+ y(0)$, where we have set $y(t):=\|\partial_\sigma R_t\|_{L^{2}(\partial F_t)}^2$.
By integrating \eqref{stimaf2} over the time interval $(0,T_0)$ and using the second inequality in \eqref{flex0} we get
\[
y(T_0) \leq y(0) + C_1M_0 + C_1 \int_0^{T_0}(1 + y)^q \, dt.
\]
Now, using the second inequality in \eqref{flex2} we conclude that
$$
2C_1M_0+ y(0)=y(T_0)\leq y(0) + C_1M_0 + C_1T_0 (1+ 2C_1 M_0 + y(0))^q.
$$
From this estimate we again deduce that $T_0$ is bounded away from $0$. This concludes the proof of the lemma.
\end{proof}
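The ODE comparison behind the last step can be made concrete. The sketch below uses assumed, hypothetical constants (they stand in for $C_1$, $M_0$ and $y(0)$, which the lemma does not fix numerically): if $y'\leq C(1+y)^q$ with $q>1$, the extremal ODE $y'=C(1+y)^q$ is separable, with $(1+y(t))^{1-q}=(1+y_0)^{1-q}-(q-1)Ct$, so the first time $y$ can reach a given level $L$ is the strictly positive quantity $t^*=\bigl((1+y_0)^{1-q}-(1+L)^{1-q}\bigr)/\bigl((q-1)C\bigr)$.

```python
import numpy as np

# Illustration with assumed constants (not from the paper): a Gronwall-type
# lower bound on the time needed for y' <= C*(1+y)^q, q > 1, to raise y
# from y0 to a prescribed level L. The extremal ODE y' = C*(1+y)^q gives
#   (1+y(t))^(1-q) = (1+y0)^(1-q) - (q-1)*C*t,
# hence the explicit crossing time t_star below.

C, q = 1.0, 7.0 / 3.0   # q = 7/3, the exponent found in the previous lemma
y0 = 0.5                # assumed initial value, standing in for y(0)
L = y0 + 2.0            # assumed target level, standing in for y(0)+2*C1*M0

t_star = ((1.0 + y0)**(1.0 - q) - (1.0 + L)**(1.0 - q)) / ((q - 1.0) * C)

# cross-check by explicit Euler integration of the extremal ODE
t, y, dt = 0.0, y0, 1e-5
while y < L:
    y += dt * C * (1.0 + y)**q
    t += dt

print(t_star > 0.0)            # True: the crossing time is bounded away from 0
print(abs(t - t_star) < 1e-2)  # Euler time matches the closed formula
```

This is exactly why $T_0$ in the lemma is bounded away from $0$ by a constant depending only on the data entering the differential inequality.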
We will need the following result on the elastic equilibrium, which states that if $F$, $\widetilde F \in \mathfrak{h}^{k,\beta}_M(G)$ are $C^{k, \alpha}$-close, then the corresponding elastic equilibria are $C^{k,\alpha}$-close to each other. More precisely, we have the following lemma.
\begin{lemma} \label{elastiset}
Let $0<\alpha<\beta\leq1$, $M>0$ and $k\in \mathbb{N}$. Then there exists $C>0$ such that if $F, \widetilde F \in \mathfrak{h}^{k,\beta}_M(G)$, then
\begin{equation}\label{linear}
\| u_{F} \circ \pi_F^{-1} - u_{\widetilde F} \circ \pi_{\widetilde F}^{-1}\|_{C^{k,\alpha}(\partial G)} \leq C \| h_{F}-h_{\widetilde F}\|_{C^{k,\alpha}(\partial G)}.
\end{equation}
Here, we recall that $u_{F}$ and $u_{\widetilde F}$ denote the elastic equilibria
corresponding to $F$ and $\widetilde F$, respectively, as defined in \eqref{uF}.
\end{lemma}
\begin{proof} The case $k=1$ can be proved as in \cite[Lemma~6.10]{FFLM2}. We now assume $k\geq 2$.
Denote by $\mathcal U$ the open set in $C^{k,\alpha}(\partial G)$ defined as
$$
\mathcal U:=\big\{h\in C^{k,\alpha}(\partial G):\,\,\|h\|_{L^\infty(\partial G)}<2\bar\varepsilonta/3,\,\,\|h\|_{C^{k,\alpha}(\partial G)}<M'\big.\},
$$
where $M'>0$ is chosen so large that $\mathfrak{h}^{k,\beta}_M(G)\subset\mathcal U$.
Given $h\in \mathcal U$, we denote by $u_h$ the solution to \varepsilonqref{uF}, with
$F$ replaced by the bounded set $F_h$ whose boundary is given by the (normal) graph of $h$ over $\partial G$.
Fix $\psi\in C^\infty_c(\Omega)$, $0\leq\psi\leq 1$, $\psi\varepsilonquiv 1$ in $\{d_G\leq 2\bar\varepsilonta/3\}$, and $\mathrm{supp\,}\psi\subset\subset \{d_G< \bar\varepsilonta\}$ and notice that if $\| h\|_{C^{k,\alpha}(\partial G)}\leq \delta'$, for $\delta'$ sufficiently small, then the map $\Phi_h:\Omega\setminus F_h\to \Omega\setminus G$ of the form
$$
\Phi_h(x)=x-h(\pi_G(x))\psi(x)\nu_G(\pi_G(x))
$$
is a $C^{k,\alpha}$-diffeomorphism.
Then, setting $v_h:=u_h\circ\Phi_h^{-1}$, one can see that $v_h$ is the solution to
$$
\begin{cases}
\operatorname{div}(\mathbb{A}(y, h(\pi_G(y)), \partial_\sigma h(\pi_G(y)))\nabla v)=0 & \text{in $\Omega\setminus G$}\\
\mathbb{A}(y, h(y), \partial_\sigma h(y))\nabla v[\nu_G]=0 & \text{on $\partial G$,}\\
v=w_0 & \text{on $\partial_D\Omega$,}
\varepsilonnd{cases}
$$
where the entries of the tensor-valued function $\mathbb{A}$ are 4-th order polynomials in $h\circ\pi_G$ and $\partial_\sigma h\circ\pi_G$ with $C^{k-1,\alpha}$-coefficients. It is easily checked that the map $\mathcal F:\mathcal U\times C^{k,\alpha}(\Omega\setminus G)\to
C^{k-2,\alpha}(\Omega\setminus G)\times C^{k-1,\alpha}(\partial G)$ given by
$$
\mathcal F(h, v):=(\operatorname{div}(\mathbb{A}(y, h(\pi_G(y)), \partial_\sigma h(\pi_G(y)))\nabla v),\mathbb{A}(y, h(y), \partial_\sigma h(y))\nabla v[\nu_G] )
$$
is of class $C^1$.
We now check the invertibility (with continuity of the inverse) of $\partial_v \mathcal F(h, v_h)$, which is a linear operator from $C^{k,\alpha}(\Omega\setminus G)$ to $C^{k-2,\alpha}(\Omega\setminus G)\times C^{k-1,\alpha}(\partial G)$. This amounts to showing that for every $f\in C^{k-2,\alpha}(\Omega\setminus G)$ and $g\in C^{k-1,\alpha}(\partial G)$ the system
$$
\begin{cases}
\operatorname{div}(\mathbb{A}(y, h(\pi_G(y)), \partial_\sigma h(\pi_G(y)))\nabla w)=f & \text{in $\Omega\setminus G$}\\
\mathbb{A}(y, h(y), \partial_\sigma h(y))\nabla w[\nu_G]=g & \text{on $\partial G$,}\\
w=0 & \text{on $\partial_D\Omega$,}
\varepsilonnd{cases}
$$
admits a unique solution $w\in C^{k,\alpha}(\Omega\setminus G)$ such that $\|w\|_{C^{k,\alpha}(\Omega\setminus G)}\leq C(\|f\|_{C^{k-2,\alpha}(\Omega\setminus G)}+\|g\|_{C^{k-1,\alpha}(\partial G)})$, with $C$ depending only on $k$, $\alpha$, $G$ and $\Omega$. This follows from the classical Schauder estimates for linear elliptic systems (see for instance \cite{ADN}). Thus, since $\mathcal F(h, v_h)=0$, we may apply the Implicit Function Theorem (see \cite[Theorem~2.3]{AP}) to deduce that there exists $\delta>0$ such that the map $h\mapsto\mathcal S(h):=v_h$ is of class $C^1$ from a $\delta$-neighborhood of $h$ in the $C^{k,\alpha}(\partial G)$-norm to $C^{k,\alpha}(\Omega\setminus G)$.
Since $\mathcal S$ is $C^1$ in $\mathcal U$ and $\mathfrak{h}^{k,\beta}_M(G)\subset\mathcal U$ is compact in $C^{k,\alpha}(\partial G)$, the Fr\'echet derivative $D\mathcal S(h)$ is equibounded in $\mathcal L(C^{k,\alpha}(\partial G);C^{k,\alpha}(\Omega\setminus G))$ for $h\in\mathfrak{h}^{k,\beta}_M(G)$. Hence \varepsilonqref{linear} easily follows.
\varepsilonnd{proof}
Let us consider the flow \varepsilonqref{eqf}, where $f$ satisfies the assumptions of Lemma \ref{flow exists 1}. For every given time $t$ we
consider the elastic equilibrium $u_t$ defined in \varepsilonqref{ut}. We start by observing that arguing as in \cite[Theorem 3.2]{FM09} one can show that $\dot u_t$ satisfies
\begin{equation}\langlebel{eq u dot}
\int_{\Omega \setminus F_t} \mathbb{C} E(\dot u_t) : E(\varphi) \, dx =- \int_{\partial F_t}\text{div}_\tau (\partial_{\sigma \sigma}R_t\, \mathbb{C} E(u_t) ) \cdot \varphi \, d \mathcal{H}^1
\varepsilonnd{equation}
for all $\varphi \in H^1(\Omega \setminus F_t; \mathbb{R}^2)$ such that $\varphi = 0$ on $\partial_D\Omega$. Note also that $\dot u_t=0$ on $\partial_D\Omega$.
We can now prove the regularity for the elastic equilibria $u_{t}$.
\begin{lemma} \langlebel{regularity u}
Let $F_0$ and $\alpha$ be as in Lemma \ref{flow exists 1}. Fix
\begin{equation}\langlebel{emmezero}
M_0 > 2\|Q(E(u_G))\|_{C^{1,\alpha}(\partial G)}.
\varepsilonnd{equation}
There exist $\delta_1\in (0,\bar \delta)$, $T_0>0$ and $\delta_0>0$ such that if $f$ satisfies \varepsilonqref{flex0}, $\|h_0\|_{H^3(\partial G)} \leq M_0$ and $\|h_0\|_{L^{2}(\partial G)} < \delta_0$, then the flow \varepsilonqref{eqf} exists on $(0,T_0)$, \varepsilonqref{flex1} holds true and for every $t \in (0,T_0)$
\begin{equation}\langlebel{regu esti}
\|Q(E(u_t))\circ \pi_{F_t}^{-1}\|_{C^{1,\alpha}(\partial G)} < M_0 \quad \text{and} \quad \int_0^{T_0} \| \partial_t \big(Q(E(u_t)) \circ \pi_{F_t}^{-1}\big) \|_{H^{-\frac12}(\partial G)}^2 \, dt< M_0.
\varepsilonnd{equation}
\varepsilonnd{lemma}
\begin{proof}
Let $u_G$ be the elastic equilibrium in $G$. We first recall that if $\delta_0$ is as in Lemma~\ref{flow exists 1}, then by the first inequality in \varepsilonqref{flex1} and \varepsilonqref{linear} we have
\begin{equation}\langlebel{regu esti 2}
\|u_t\circ \pi_{F_t}^{-1} -u_G\|_{C^{2,\alpha}(\partial G)} \leq C \|h_t\|_{C^{2,\alpha}(\partial G)} \leq C \delta_1.
\varepsilonnd{equation}
Therefore, choosing $\delta_1\in (0, \bar\delta)$ sufficiently small (depending on $M_0$) and recalling \varepsilonqref{emmezero}, the first estimate in \varepsilonqref{regu esti} follows.
For the second estimate we calculate
\[
\begin{split}
\partial_t &\big(Q(E(u_t))(x + h_t(x)\nu_G(x))\big) \\
&= \mathbb{C} E(u_t) \circ \pi_{F_t}^{-1} : \big((\nabla E(u_t) \circ \pi_{F_t}^{-1}) [\dot h_t \nu_G]\big) + (\mathbb{C} E(u_t) :E(\dot u_t)) \circ \pi_{F_t}^{-1}.
\varepsilonnd{split}
\]
Therefore by the $C^{2,\alpha}$-bound \varepsilonqref{flex1} on $h_t$ and by \varepsilonqref{regu esti 2} we have that
\begin{equation} \langlebel{regu est 3}
\|\partial_t \big( Q(E(u_t)) \circ \pi^{-1}_{F_t}\big) \|_{H^{-\frac12}(\partial G)} \leq C(M_0) \|\dot h_t\|_{L^{2}(\partial G)} + C(M_0) \|\mathbb{C} E(u_t) :E(\dot u_t)\|_{H^{-\frac12}(\partial F_t)}.
\varepsilonnd{equation}
Observe first that the normal velocity of $\partial F_t$ can be written on $\partial G$ as
\[
V \circ \pi_{F_t}^{-1} = \dot h_t \frac{1+h_tk_G}{J_t}.
\]
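For the reader's convenience we recall where this formula comes from: the point $x + h_t(x)\nu_G(x)$ moves with velocity $\dot h_t(x)\nu_G(x)$, while, with the orientation convention of \varepsilonqref{formula normal}, the outer unit normal to $\partial F_t$ at this point is
\[
\nu_{F_t}\circ \pi_{F_t}^{-1}= \frac{(1+h_tk_G)\,\nu_G-(\partial_\sigma h_t)\,\tau_G}{J_t},
\]
so that $V \circ \pi_{F_t}^{-1}=\dot h_t\,\nu_G\cdot(\nu_{F_t}\circ \pi_{F_t}^{-1})=\dot h_t\,(1+h_tk_G)/J_t$.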
Therefore by the definition of the flow, recalling the interpolation inequality \varepsilonqref{inter2}, we get
\begin{equation} \langlebel{regu est 4}
\|\dot h_t\|_{L^2(\partial G)} \leq C \|V\|_{L^2(\partial F_t)} = C\|\partial_{\sigma\sigma}R_t\|_{L^{2}(\partial F_t)} \leq C\|\partial_{\sigma\sigma\sigma}R_t\|_{L^2(\partial F_t)}^{\frac12}\|\partial_{\sigma}R_t\|_{L^2(\partial F_t)}^{\frac12}.
\varepsilonnd{equation}
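We note in passing that, since $\partial F_t$ is a closed curve, the interpolation inequality used above follows simply by integrating by parts and applying the Cauchy-Schwarz inequality:
\[
\int_{\partial F_t}|\partial_{\sigma\sigma}R_t|^2\,d\mathcal{H}^1=-\int_{\partial F_t}\partial_{\sigma\sigma\sigma}R_t\,\partial_{\sigma}R_t\,d\mathcal{H}^1\leq \|\partial_{\sigma\sigma\sigma}R_t\|_{L^2(\partial F_t)}\|\partial_{\sigma}R_t\|_{L^2(\partial F_t)}.
\]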
In order to estimate the second term in \varepsilonqref{regu est 3} we recall that $\mathbb{C} E(u_t)[\nu] = 0$ on $\partial F_t$. Therefore, using \varepsilonqref{regu esti 2} again, we get
\begin{equation} \langlebel{est udot 1}
\begin{split}
\|\mathbb{C} E(u_t) :E(\dot u_t)\|_{H^{-\frac12}(\partial F_t)} &= \|\mathbb{C} E(u_t) : \nabla\dot u_t\|_{H^{-\frac12}(\partial F_t)} \\
&= \|\mathbb{C} E(u_t) : \nabla_\tau\dot u_t\|_{H^{-\frac12}(\partial F_t)} \leq C \|\nabla_\tau \dot u_t\|_{H^{-\frac12}(\partial F_t)}.
\varepsilonnd{split}
\varepsilonnd{equation}
Choosing $\varphi = \dot u_t$ as a test function in equation \varepsilonqref{eq u dot} and arguing as above, we get
\begin{equation} \langlebel{est udot 2}
\begin{split}
2 \int_{\Omega\setminus F_t} Q(E(\dot u_t)) \, dx &= - \int_{\partial F_t}\text{div}_\tau (\partial_{\sigma\sigma}R_t \,\mathbb{C} E(u_t) ) \cdot \dot u_t \, d \mathcal{H}^1\\
& = \int_{\partial F_t} \partial_{\sigma\sigma}R_t \,\mathbb{C} E(u_t) : \nabla_\tau\dot u_t \, d \mathcal{H}^1\\
&\leq C(M_0) \|\partial_{\sigma\sigma}R_t \|_{H^{\frac12}(\partial F_t)}\|\nabla_\tau\dot u_t\|_{H^{-\frac12}(\partial F_t)}.
\varepsilonnd{split}
\varepsilonnd{equation}
Recall that $\dot u_t = 0$ on $\partial_D\Omega$. Therefore by Korn's inequality, by \varepsilonqref{est udot 2} and by the interpolation inequality \varepsilonqref{inter1} we get
\[
\begin{split}
\|\nabla_\tau\dot u_t\|_{H^{-\frac12}(\partial F_t)}^2 &\leq C \|\dot u_t\|_{H^{\frac12}(\partial F_t)}^2\leq C\int_{\Omega\setminus F_t} |\nabla \dot u_t|^2 \, dx \\
&\leq C\int_{\Omega\setminus F_t} Q(E(\dot u_t)) \, dx \leq C(M_0) \|\partial_{\sigma\sigma}R_t \|_{H^{\frac12}(\partial F_t)}\|\nabla_\tau\dot u_t\|_{H^{-\frac12}(\partial F_t)}\\
&\leq C(M_0) \big(\|\partial_{\sigma \sigma \sigma}R_t\|_{L^2}^{\frac34}\|\partial_\sigma R_t\|_{L^2}^{\frac14}+\|\partial_\sigma R_t\|_{L^2}\big)\|\nabla_\tau\dot u_t\|_{H^{-\frac12}(\partial F_t)}.
\varepsilonnd{split}
\]
Note that the first inequality above uses the fact that the tangential derivative is a continuous operator from $H^{1/2}$ to $H^{-1/2}$. This is a well-known fact, see for instance \cite[Theorem~8.6]{FM09}.
This inequality together with \varepsilonqref{est udot 1} yields
\begin{equation} \langlebel{est udot 3}
\|\mathbb{C} E(u_t) :E(\dot u_t)\|_{H^{-\frac12}(\partial F_t)} \leq C(M_0) \big(\|\partial_{\sigma \sigma \sigma}R_t\|_{L^2(\partial F_t)}^{\frac34}\|\partial_\sigma R_t\|_{L^2(\partial F_t)}^{\frac14}+\|\partial_\sigma R_t\|_{L^2(\partial F_t)}\big).
\varepsilonnd{equation}
By combining the estimates \varepsilonqref{regu est 3}, \varepsilonqref{regu est 4} and \varepsilonqref{est udot 3} we deduce that for every $\varepsilon>0$
\[
\int_0^{T_0}\| \partial_t \big(Q(E(u_t)) \circ \pi^{-1}_{F_t}\big) \|_{H^{-\frac12}(\partial G)}^2\,dt \leq \int_0^{T_0} \big(\varepsilon \|\partial_{\sigma \sigma \sigma}R_t\|_{L^2(\partial F_t)}^2 + C_\varepsilon\|\partial_\sigma R_t\|_{L^2(\partial F_t)}^2\big)\,dt.
\]
Hence, taking $\varepsilon$ sufficiently small we obtain by \varepsilonqref{stimaf2} and by Lemma \ref{flow exists 1}
\[
\begin{split}
\int_0^{T_0} \| \partial_t \big(&Q(E(u_t)) \circ \pi^{-1}_{F_t}\big) \|_{H^{-\frac12}(\partial G)}^2\,dt \\
&\leq \frac12 \int_0^{T_0}\| \dot f_t \|_{H^{-\frac12}(\partial G)}^2\,dt + C(M_0)\, T_0 \sup_{(0,T_0)} \big( 1+ \|\partial_\sigma R_t\|_{L^2(\partial F_t)}^2 \big)^q\leq \frac12 M_0 + C T_0.
\varepsilonnd{split}
\]
Thus the second estimate in \varepsilonqref{regu esti} follows by choosing $T_0$ sufficiently small.
\varepsilonnd{proof}
\begin{proof}[\textbf{Proof of Theorem~\ref{th:existence}}]
We divide the proof into several steps.
\noindent{\bf Step 1.} Fix $\mu\in (0,1)$ and let $M_0$, $T_0$, $\alpha$, $\delta_1$ and $\delta_0$ be as in
Lemma~\ref{regularity u}. Let $f_1$, $f_2\in C^\infty(\partial G\times [0,+\infty))$ satisfy the assumptions of
Lemma~\ref{regularity u}, let $h_0\in C^\infty(\partial G)$ satisfy $\|h_0\|_{H^3(\partial G)}\leq M_0$, $\|h_0\|_{L^2(\partial G)} < \delta_0$, and let $F_{t,i}$ be a solution of \varepsilonqref{eqf} with $f$ replaced by $f_i$.
Denote by $h_{t,i}$ the function such that
$\partial F_{t,i}= \{ x + h_{t,i}(x) \nu_G(x) : x \in \partial G\}$.
We start by showing that there exists $T\in (0, T_0)$ such that
\begin{equation}
\langlebel{contraction}
\int_0^{T} \int_{\partial G} (h_{t,2} - h_{t,1})^2 \, d\mathcal{H}^1dt \leq \mu \int_0^{T} \int_{\partial G} (f_2 - f_1)^2 \, d\mathcal{H}^1dt.
\varepsilonnd{equation}
Note in particular that the above inequality implies the uniqueness of the solution of \varepsilonqref{eqf} when all the data are smooth.
Recall that by Lemma \ref{flow exists 1} we have that $\|h_{t,i}\|_{C^{2,\alpha}(\partial G)} \leq \delta_1$ for all $t \in (0,T_0)$ and $i = 1,2$.
Note that we may write the equation \varepsilonqref{eqf} as
\begin{equation} \langlebel{eq on G}
\frac{(1+ h_{t,i} k_G)}{J_{t,i}} \, \dot h_{t,i} = \frac{1}{J_{t,i}} \partial_\sigma \left(\frac{1}{J_{t,i}} \partial_\sigma \left( (g(\nu_{F_{t,i}}) k_{F_{t,i}}) \circ \pi_{F_{t,i}}^{-1} + f_i \right) \right)\quad\text{ on }\partial G
\varepsilonnd{equation}
for $i = 1,2$ respectively, where $J_{t,i} = \sqrt{(1 + h_{t,i} k_G)^2 + (\partial_\sigma h_{t,i})^2}$.
To simplify the notation we write
$g_{t, i}$ and $k_{t, i}$ in place of $g(\nu_{F_{t,i}})$ and $k_{F_{t,i}}$, respectively. Note that by the
$C^{2,\alpha}$ bounds on $h_{t,2}$ and $h_{t,1}$, and by the expressions \varepsilonqref{formula tangent} and \varepsilonqref{formula normal} we may estimate
\begin{equation} \langlebel{uniso lip}
|g_{t,2}(\pi_{F_{t,2}}^{-1}(x)) - g_{t,1}(\pi_{F_{t,1}}^{-1}(x))|
\leq C \left(|\partial_\sigma (h_{t,2}-h_{t,1})(x)| + |(h_{t,2}-h_{t,1})(x)| \right)
\varepsilonnd{equation}
for every $x \in \partial G$. Moreover, from the expression \varepsilonqref{curvature formula}, from the $C^{2,\alpha}$-bounds on $h_{t,2}$ and $h_{t,1}$, and from the ellipticity
assumptions on $\varphi$ we deduce that there are positive constants $c$ and $C$ such that
\begin{equation} \langlebel{curv lip}
\begin{split}
[k_{t,2}(\pi_{F_{t,2}}^{-1}) - k_{t,1}(\pi_{F_{t,1}}^{-1})]& \, \partial_{\sigma \sigma}(h_{t,2}-h_{t,1}) \\
&\leq - c |\partial_{\sigma \sigma}(h_{t,2}- h_{t,1})|^2 + C \left(|\partial_\sigma (h_{t,2}-h_{t,1})|^2 + |(h_{t,2}-h_{t,1})|^2 \right)
\varepsilonnd{split}
\varepsilonnd{equation}
on $\partial G$.
Multiply the equation \varepsilonqref{eq on G} by $h_{t,2}-h_{t,1}$ for $i = 1,2$, integrate over $\partial G$ and integrate by parts twice to get
\[
\begin{split}
\int_{\partial G} \dot h_{t,i} & \, (h_{t,2} - h_{t,1}) \,d \mathcal{H}^1 \\
& = \int_{\partial G} \partial_\sigma \left(\frac{1}{J_{t,i}} \partial_\sigma \left( (g_{t,i} \, k_{t,i})\circ \pi_{F_{t,i}}^{-1} + f_i \right) \right) \, \left( \frac{1}{1 + h_{t,i} k_G}(h_{t,2} - h_{t,1}) \right) \, d \mathcal{H}^1\\
&= \int_{\partial G} \left( (g_{t,i} \, k_{t,i} )\circ \pi_{F_{t,i}}^{-1} + f_i \right) \, \partial_\sigma \left(\frac{1}{J_{t,i}} \partial_\sigma \left( \frac{1}{1 + h_{t,i} k_G} (h_{t,2} - h_{t,1}) \right) \right)\, d \mathcal{H}^1.
\varepsilonnd{split}
\]
Subtract the equation for $i = 1$ from the equation for $i = 2$ to get
\[
\begin{split}
\frac{d}{dt} \biggl(\frac12\int_{\partial G} & \, (h_{t,2} - h_{t,1})^2 \,d \mathcal{H}^1 \biggr) = \int_{\partial G} (h_{t,2} - h_{t,1}) \, (\dot h_{t,2} - \dot h_{t,1}) \,d \mathcal{H}^1 \\
&=\int_{\partial G} ( (g_{t,2} \, k_{t,2} )\circ \pi_{F_{t,2}}^{-1} + f_2 ) \, \partial_\sigma \left(\frac{1}{J_{t,2}} \partial_\sigma \left( \frac{1}{1 + h_{t,2} k_G} (h_{t,2} - h_{t,1}) \right) \right) \, d \mathcal{H}^1\\
&\,\,\,\,\, -\int_{\partial G} ( (g_{t,1} \, k_{t,1} )\circ \pi_{F_{t,1}}^{-1} + f_1 ) \, \partial_\sigma \left(\frac{1}{J_{t,1}} \partial_\sigma \left( \frac{1}{1 + h_{t,1} k_G} (h_{t,2} - h_{t,1}) \right) \right) \, d \mathcal{H}^1.
\varepsilonnd{split}
\]
By the $C^{2,\alpha}$-bounds on $h_{t,2}$, $h_{t,1}$, by the $C^{0,\alpha}$-bounds on $f_2$, $f_1$, by \varepsilonqref{curvature formula} and by \varepsilonqref{uniso lip} and \varepsilonqref{curv lip} we conclude that there are positive constants $c$ and $C$ such that
\begin{equation} \langlebel{exist pt1}
\begin{split}
\frac{d}{dt} \biggl(\frac12\int_{\partial G} \, (h_{t,2} & - h_{t,1})^2 \,d \mathcal{H}^1 \biggr) + c \int_{\partial G} |\partial_{\sigma \sigma}(h_{t,2} -h_{t,1})|^2\,d \mathcal{H}^1 \\
&\leq C \int_{\partial G} (f_2 -f_1)^2\,d \mathcal{H}^1 + C \int_{\partial G} \big(|\partial_{\sigma}(h_{t,2} -h_{t,1})|^2 + (h_{t,2}-h_{t,1})^2\big) \,d \mathcal{H}^1.
\varepsilonnd{split}
\varepsilonnd{equation}
Denote $w_t(x) := h_{t,2}(x)-h_{t,1}(x)$. By interpolation we have
\[
\|\partial_\sigma w_t\|_{L^2}^2 \leq C\, \|\partial_{\sigma \sigma} w_t\|_{L^2} \|w_t\|_{L^2} + C \|w_t\|_{L^2}^2 \leq \varepsilon \|\partial_{\sigma \sigma} w_t\|_{L^2}^2 + C_\varepsilon \|w_t\|_{L^2}^2.
\]
Hence, for $\varepsilon$ small enough, we obtain by \varepsilonqref{exist pt1} that
\begin{equation} \langlebel{exist pt2}
\frac{d}{dt} \left(\int_{\partial G} \, w_t^2 \,d \mathcal{H}^1 \right) \leq C \int_{\partial G} w_t^2\,d \mathcal{H}^1 + C \int_{\partial G} (f_2 -f_1)^2\,d \mathcal{H}^1.
\varepsilonnd{equation}
Take $T \in (0,T_0)$. Recall that $w_0\varepsilonquiv 0$. Therefore integrating \varepsilonqref{exist pt2} over $(0,t)$, with $t\in (0, T)$, implies
\begin{equation} \langlebel{exist pt3}
\int_{\partial G} \, w_t^2 \,d \mathcal{H}^1 \leq C \int_0^{T} \int_{\partial G}w_s^2\,d \mathcal{H}^1 ds + C \int_0^{T} \int_{\partial G} (f_2 -f_1)^2\,d \mathcal{H}^1 ds.
\varepsilonnd{equation}
Integrating \varepsilonqref{exist pt3} over $(0,T)$ yields
\[
\int_0^{T} \int_{\partial G} \, w_t^2 \,d \mathcal{H}^1 dt \leq CT \int_0^{T} \int_{\partial G}w_t^2\,d \mathcal{H}^1 dt + CT \int_0^{T} \int_{\partial G} (f_2 -f_1)^2\,d \mathcal{H}^1 dt.
\]
Therefore \varepsilonqref{contraction} follows by taking $T\in (0, T_0)$ sufficiently small.
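More precisely, if $CT<1$ the previous inequality can be absorbed into the left-hand side, giving
\[
\int_0^{T} \int_{\partial G} w_t^2 \,d \mathcal{H}^1 dt \leq \frac{CT}{1-CT} \int_0^{T} \int_{\partial G} (f_2 -f_1)^2\,d \mathcal{H}^1 dt,
\]
so that \varepsilonqref{contraction} holds as soon as $CT/(1-CT)\leq\mu$, that is, for $T\leq \mu/\big(C(1+\mu)\big)$.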
\noindent {\bf Step 2.} Fix $M_0> 2
\|Q(E(u_G))\|_{C^{1,\alpha}(\partial G)}$, $\mu \in (0,1)$, and let $T$, $\delta_0$ be as in Step 1.
Define the function space
\[
\mathcal{C}:= \Big\{ f \in L^2(0,T; L^2(\partial G)) \, : \, \sup_{(0,T)} \|f\|_{C^{1,\alpha}(\partial G)} \leq M_0, \, \int_0^{T} \| \dot f_t \|_{H^{-\frac12}(\partial G)}^2 \, dt \leq M_0 \Big\}.
\]
We want to show that for every $f\in \mathcal{C}$ equation \varepsilonqref{eqf} has a solution in the interval $(0,T)$, and that \varepsilonqref{contraction} holds for $f_1$, $f_2\in \mathcal{C}$.
To this end, fix $h_0\in H^3(\partial G)$ satisfying $\|h_0\|_{H^3(\partial G)}\leq M_0$, $\|h_0\|_{L^2(\partial G)} < \delta_0$, and let $f\in \mathcal{C}$. Consider a sequence $f_n\in \mathcal{C}\cap C^\infty(\partial G\times [0,+\infty))$ such that $f_n\to f$ in $L^2(0, T; L^2(\partial G))$ and a sequence of smooth functions $h_n$ such that
$\|h_n\|_{H^3(\partial G)}\leq M_0$ and $h_n \to h_0$ in $L^2(\partial G)$. Denote by $F_{t, n}$ the solution
of \varepsilonqref{eqf} with forcing term $f_n$ and initial datum $h_n$, and let $h_{t,n}$ be the function on $\partial G$ such that $\partial F_{t,n}= \{ x + h_{t,n}(x) \nu_G(x) : x \in \partial G\}$.
Observe that from \varepsilonqref{flex1} we have that
$$
\sup_n\sup_{(0,T)}\bigl(\|h_{t, n}\|_{C^{2, \alpha}(\partial G)} + \|\partial_\sigma R_{t,n}\|_{L^{2}(\partial F_{t,n})}\bigr)<+\infty,
$$
where $R_{t,n}$ is defined as in \varepsilonqref{Rt} with $f$ replaced by $f_n$. In turn \varepsilonqref{stimaf2} yields that
$R_{t,n}$ is uniformly bounded in $L^2(0, T; H^3(\partial G))$ and thus $h_{t,n}$ is uniformly bounded in
$H^1(0, T; H^1(\partial G))$. Therefore, up to a (not relabelled) subsequence, we may assume that
$h_{t,n}\rightharpoonup h_t$ weakly in $H^1(0, T; H^1(\partial G))$ and, recalling the uniform $C^{2,\alpha}$ bounds on
$h_{t,n}$ we may conclude that in fact $h_{t,n}\to h_t$ in $C^{2,\beta}(\partial G)$ for all $\beta\in (0, \alpha)$ and for all $t\in (0, T)$ and thus $R_{t,n}\circ \pi_{F_{t,n}}^{-1}\to R_t\circ \pi_{F_{t}}^{-1}$ in $C^{0, \beta}(\partial G)$, where $F_t$ is the set corresponding to $h_t$. It is now easy to see that the equation passes to the limit and $F_t$ is a solution of \varepsilonqref{eqf} with initial datum $h_0$ and forcing term $f$. Note also that the same approximation argument yields that \varepsilonqref{contraction} holds true also in the case where $f_1$, $f_2\in \mathcal C$, so that in particular the solution is unique also in this case. Moreover, again by approximation, the conclusions of Lemmas~\ref{flow exists 1} and \ref{regularity u} remain true.
\noindent{\bf Step 3.} Fix $h_0$ as in Step 2 and consider the map $\mathcal T : \mathcal{C} \to \mathcal{C}$ defined by $\mathcal T f(\cdot, t) = - Q(E(u_t))\circ \pi_{F_t}^{-1}$ for all $t\in [0, T)$, where $F_t$ is the solution of \varepsilonqref{eqf} with initial datum $h_0$ and forcing term $f$ (and $u_t$ is the elastic equilibrium in $\Omega\setminus F_t$). From \varepsilonqref{regu esti}, which holds also in our case thanks to the previous step, it follows that the map is well defined. In order to conclude the proof it is enough to show that $\mathcal T$ is a contraction, so that it admits a fixed point.
To this aim, with the same notation as in Step 1, for any $f_1$, $f_2\in \mathcal C$ and for any $\varepsilon>0$ we have
\begin{align*}
\int_0^{T}&\|Q(E(u_{t,1}))\circ \pi_{F_{t,1}}^{-1}-Q(E(u_{t,2}))\circ \pi_{F_{t,2}}^{-1}\|^2_{L^2(\partial G)}\, dt
\\
&\leq C\int_0^{T}\|Q(E(u_{t,1}))\circ \pi_{F_{t,1}}^{-1}-Q(E(u_{t,2}))\circ \pi_{F_{t,2}}^{-1}\|^2_{C^{0,\alpha}(\partial G)}\, dt\\
& \leq C \int_0^{T}\|h_{t, 1}-h_{t, 2}\|_{C^{1, \alpha}(\partial G)}^2\, dt \\
&\leq C \int_0^{T}\bigl[\|\partial_{\sigma\sigma}(h_{t, 1}-h_{t, 2})\|_{L^2}^{2\theta}\|h_{t, 1}-h_{t, 2}\|_{L^2}^{2(1-\theta)}+\|h_{t, 1}-h_{t, 2}\|_{L^2}^{2}\bigr]\, dt\\
&\leq \int_0^{T}\bigl[\varepsilon\|\partial_{\sigma\sigma}(h_{t, 1}-h_{t, 2})\|_{L^2}^{2}+C_\varepsilon\|h_{t, 1}-h_{t, 2}\|_{L^2}^{2}\bigr]\, dt,
\varepsilonnd{align*}
where we used \varepsilonqref{linear} and \varepsilonqref{inter3}.
We use \varepsilonqref{exist pt1} and \varepsilonqref{inter2}, arguing as in Step 1, to control the last integral in the above chain of inequalities and deduce that there exists $C_1>0$ independent of $\varepsilon$ such that
\begin{align*}
\int_0^{T}\|Q(E(u_{t,1}))\circ \pi_{F_{t,1}}^{-1}&-Q(E(u_{t,2}))\circ \pi_{F_{t,2}}^{-1}\|^2_{L^2(\partial G)}\, dt\\
&\leq C_1 \varepsilon \int_0^{T}\|f_1-f_2\|_{L^2}^2\, dt+ C_\varepsilon\int_0^{T}\|h_{t, 1}-h_{t, 2}\|_{L^2}^{2}\, dt\\
&\leq C_1 \varepsilon \int_0^{T}\|f_1-f_2\|_{L^2}^2\, dt+ C_\varepsilon\mu \int_0^{T}\|f_1-f_2\|_{L^2}^2\, dt,
\varepsilonnd{align*}
where the last inequality follows from \varepsilonqref{contraction}. The conclusion follows by taking $\varepsilon$ and then $\mu$ sufficiently small.
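Indeed, it suffices to choose first $\varepsilon$ so that $C_1\varepsilon\leq\frac14$ and then $\mu$ so that $C_\varepsilon\mu\leq\frac14$: with these choices the above chain of inequalities shows that
\[
\int_0^{T}\|\mathcal T f_1-\mathcal T f_2\|_{L^2(\partial G)}^2\, dt\leq \frac12 \int_0^{T}\|f_1-f_2\|_{L^2(\partial G)}^2\, dt,
\]
so that $\mathcal T$ is a contraction with respect to the $L^2(0,T;L^2(\partial G))$-distance, and the Contraction Mapping Principle applies, once one checks that $\mathcal{C}$ is closed with respect to this distance.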
\varepsilonnd{proof}
We conclude this section by showing that the solution provided by Theorem~\ref{th:existence} is in fact classical, namely of class
$C^{\infty}$.
\begin{theorem}\langlebel{Cinfty}
Under the assumptions of Theorem~\ref{th:existence} we have $(h_t)_{t\in (0,T)}\in C^\infty(0,T;C^{\infty}(\partial G))$.
\varepsilonnd{theorem}
\begin{proof}
As in the proof of Theorem~\ref{th:existence} we rewrite the equation on $\partial G$, see \varepsilonqref{eq on G}, thus getting
\begin{equation} \langlebel{Cinfty1}
\frac{(1+ h_{t} k_G)}{J_{t}} \, \dot h_{t} = \frac{1}{J_{t}} \partial_\sigma \left(\frac{1}{J_{t}} \partial_\sigma \left( (g(\nu_{F_{t}}) k_{F_{t}}) \circ \pi_{F_{t}}^{-1} - Q_t \right) \right)\quad\text{ on }\partial G,
\varepsilonnd{equation}
where we have set $Q_t:=Q(E(u_t))\circ\pi_{F_{t}}^{-1}$. We divide the proof into four steps.
\noindent{\bf Step 1.}
Given $\Delta t\not=0$, let us subtract the equation above at time $t$ from the same equation at time $t+\Delta t$ and multiply both sides by $h_{t+\Delta t}-h_t$. Then, integrating by parts and arguing as in the proof of \varepsilonqref{exist pt1} we get, using also Proposition~\ref{interpolation} to estimate $\|\partial_{\sigma}(h_{t+\Delta t}-h_t)\|_{L^2(\partial G)}$,
\begin{equation} \langlebel{Cinfty2}
\begin{split}
\frac{d}{dt} \biggl(\frac12\int_{\partial G} \, (h_{t+\Delta t} - h_{t})^2 \,d \mathcal{H}^1 \biggr) & + c \int_{\partial G} |\partial_{\sigma \sigma}(h_{t+\Delta t} -h_{t})|^2\,d \mathcal{H}^1 \\
&\leq C \int_{\partial G} (Q_{t+\Delta t} -Q_t)^2\,d \mathcal{H}^1 + C\int_{\partial G}(h_{t+\Delta t}-h_{t})^2 \,d \mathcal{H}^1.
\varepsilonnd{split}
\varepsilonnd{equation}
Fix now $\alpha\in (0,\frac12)$. Using Proposition~\ref{interpolation} and the estimate \varepsilonqref{linear} with $F$ and $\widetilde F$ replaced respectively by $F_{t+\Delta t}$ and $F_t$ we have
\begin{align*}
\|Q_{t+\Delta t} -Q_t\|_{L^2(\partial G)}
&\leq C\|Q_{t+\Delta t} -Q_t\|_{C^{0,\alpha}(\partial G)}\leq C\|h_{t+\Delta t} -h_t\|_{C^{1,\alpha}(\partial G)} \\
& \leq C\big(\|\partial_{\sigma\sigma}(h_{t+\Delta t} -h_t)\|_{L^2(\partial G)}^{\vartheta}\|h_{t+\Delta t} -h_t\|_{L^2(\partial G)}^{1-\vartheta}+\|h_{t+\Delta t} -h_t\|_{L^2(\partial G)}\big).
\varepsilonnd{align*}
Inserting this inequality in \varepsilonqref{Cinfty2} we get
\[
\frac{d}{dt} \biggl(\frac12\int_{\partial G} \, (h_{t+\Delta t} - h_{t})^2 \,d \mathcal{H}^1 \biggr) + c \int_{\partial G} |\partial_{\sigma \sigma}(h_{t+\Delta t} -h_{t})|^2\,d \mathcal{H}^1
\leq C\|h_{t+\Delta t} -h_t\|_{L^2(\partial G)}^2.
\]
Then for $\mathcal L^1$-a.e. $t_0,t_1$ with $0<t_0<t_1<T$, integrating the above inequality in $(t_0,t_1)$, we have
\[
\begin{split}
\|h_{t_1+\Delta t} - h_{t_1} \|_{L^2(\partial G)}^2 +\int_{t_0}^{t_1}&\|\partial_{\sigma \sigma}(h_{t+\Delta t} -h_{t})\|_{L^2(\partial G)}^2\,dt \\
&\leq \|h_{t_0+\Delta t} - h_{t_0} \|_{L^2(\partial G)}^2+C\int_{t_0}^{t_1}\|h_{t+\Delta t} -h_{t}\|_{L^2(\partial G)}^2\,dt.
\varepsilonnd{split}
\]
Finally, dividing both sides of this inequality by $(\Delta t)^2$, letting $\Delta t\to0$ and recalling that $h\in H^1(0,T;H^1(\partial G))$, we conclude that for any time interval $J\subset\!\subset(0,T)$
\begin{equation}\langlebel{Cinfty3}
\sup_{t\in J}\| \dot h_t\|_{L^2(\partial G)}^2+\int_J\|\partial_t(\partial_{\sigma \sigma}h_t)\|_{L^2(\partial G)}^2\,dt<\infty.
\varepsilonnd{equation}
\noindent{\bf Step 2.} We start again by subtracting equation \varepsilonqref{Cinfty1} at time $t$ from the same equation at time $t+\Delta t$. We now multiply both sides by $\partial_{\sigma\sigma}(h_{t+\Delta t}-h_t)$. Then, arguing as in the proof of \varepsilonqref{Cinfty2} we get the following estimate
\begin{equation} \langlebel{Cinfty4}
\begin{split}
\frac{d}{dt} \biggl(\frac12\int_{\partial G} (\partial_\sigma(h_{t+\Delta t} - &h_{t}))^2 \,d \mathcal{H}^1 \biggr) + c \int_{\partial G} |\partial_{\sigma \sigma\sigma}(h_{t+\Delta t} -h_{t})|^2\,d \mathcal{H}^1 \\
&\leq C \int_{\partial G} (\partial_\sigma(Q_{t+\Delta t} -Q_t))^2\,d \mathcal{H}^1 + C\int_{\partial G}(h_{t+\Delta t}-h_{t})^2 \,d \mathcal{H}^1.
\varepsilonnd{split}
\varepsilonnd{equation}
As in the previous step we may estimate, using \varepsilonqref{linear}
and Proposition~\ref{interpolation},
\begin{align*}
\|\partial_\sigma(Q_{t+\Delta t} -Q_t)\|_{L^2(\partial G)}
& \leq
C\|\partial_\sigma(Q_{t+\Delta t} -Q_t)\|_{C^{0,\alpha}(\partial G)}\leq C\|h_{t+\Delta t} -h_t\|_{C^{2,\alpha}(\partial G)} \\
&\leq C\|\partial_{\sigma\sigma\sigma}(h_{t+\Delta t} -h_t)\|_{L^2}^{\vartheta}\|h_{t+\Delta t} -h_t\|_{L^2}^{1-\vartheta}+C\|h_{t+\Delta t} -h_t\|_{L^2}.
\varepsilonnd{align*}
Using this estimate and integrating \varepsilonqref{Cinfty4} in $(t_0,t_1)$ for $\mathcal L^1$-a.e. $t_0,t_1$ with $0<t_0<t_1<T$, we have
\[
\begin{split}
\|\partial_\sigma(h_{t_1+\Delta t} - h_{t_1})\|_{L^2(\partial G)}^2 &+c \int_{t_0}^{t_1}\|\partial_{\sigma \sigma\sigma}(h_{t+\Delta t} -h_{t})\|_{L^2(\partial G)}^2\,dt \\
&\leq \|\partial_\sigma(h_{t_0+\Delta t} - h_{t_0}) \|_{L^2(\partial G)}^2+ C\int_{t_0}^{t_1}\|h_{t+\Delta t} -h_{t}\|_{L^2(\partial G)}^2\,dt.
\varepsilonnd{split}
\]
Divide both sides of this inequality by $(\Delta t)^2$ and recall that
$\partial_{\sigma\sigma\sigma}h_t\in L^2(0,T;L^2(\partial G))$ and that by \varepsilonqref{Cinfty3} $\partial_t(\partial_\sigma h_t)\in L^2_{loc}(0,T;H^1(\partial G))$. Using this information and letting $\Delta t\to0$ we conclude that for every interval $J\subset\!\subset(0,T)$
\begin{equation} \langlebel{Cinfty5}
\sup_{t\in J}\| \partial_t(\partial_\sigma h_t)\|_{L^2(\partial G)}^2+\int_J\|\partial_t(\partial_{\sigma \sigma\sigma}h_t)\|_{L^2(\partial G)}^2\,dt<\infty.
\varepsilonnd{equation}
Note that from the previous inequality and by the equation \varepsilonqref{eq on G} we have that for every interval $J\subset\!\subset(0,T)$
$$
\sup_{t\in J}\big(\|\partial_{\sigma\sigma\sigma}R_t\|_{L^2(\partial G)}+\|\partial_{\sigma\sigma\sigma}h_t\|_{L^2(\partial G)}\big)<\infty.
$$
In particular, from this inequality we deduce that
$$
\sup_{t\in J}\big(\|\partial_{\sigma}R_t\|_{C^{0,\alpha}(\partial G)}+\|h_t\|_{C^{2,\alpha}(\partial G)}\big)<\infty.
$$
In turn, since $\|\partial_\sigma Q_t\|_{C^{0,\alpha}(\partial G)}\leq C(G,\|h_t\|_{C^{2,\alpha}(\partial G)})$, the above inequality implies immediately that
\begin{equation} \langlebel{Cinfty5.1}
\sup_{t\in J}\|h_t\|_{C^{3,\alpha}(\partial G)}<\infty.
\varepsilonnd{equation}
\noindent{\bf Step 3.} At this point we would like to continue as before, subtracting the equations \varepsilonqref{Cinfty1} at times $t+\Delta t$ and $t$ and multiplying the resulting difference by $\partial_{\sigma\sigma\sigma\sigma}(h_{t+\Delta t}-h_t)$.
However this argument only works provided we know that $h\in L^2_{loc}(0,T;H^4(\partial G))$.
To prove this property of $h$ we go back to equation \varepsilonqref{Cinfty1} and, denoting by $s$ the arclength on $\partial G$, we subtract the equation for $h$ from the same equation for $h(\cdot+\Delta s)$, where $\Delta s$ is a nonzero increment of the arclength. Then we multiply both sides by $\partial_{\sigma\sigma}h(\cdot+\Delta s)-\partial_{\sigma\sigma} h$ to deduce, with the usual calculations, that
\[
\begin{split}
\frac{d}{dt} \biggl(\frac12\int_{\partial G} (\partial_\sigma(h_t(\cdot+& \Delta s) - h_{t}))^2 \,d \mathcal{H}^1 \biggr) + c \int_{\partial G} |\partial_{\sigma \sigma\sigma}(h_t(\cdot+\Delta s) -h_{t})|^2\,d \mathcal{H}^1 \\
&\leq C \int_{\partial G} (\partial_\sigma(Q_t(\cdot+\Delta s) -Q_t))^2\,d \mathcal{H}^1 +C \int_{\partial G}(h_t(\cdot+\Delta s) -h_{t})^2 \,d \mathcal{H}^1.
\varepsilonnd{split}
\]
As before we estimate
\begin{align*}
\|\partial_\sigma(Q_t(\cdot+\Delta s) &-Q_t)\|_{L^2(\partial G)}\leq C\|h_t(\cdot+\Delta s) -h_t\|_{C^{2,\alpha}(\partial G)} \\
& \leq C\|\partial_{\sigma\sigma\sigma}(h_t(\cdot+\Delta s) -h_t)\|_{L^2}^{\vartheta}\|h_t(\cdot+\Delta s) -h_t\|_{L^2}^{1-\vartheta}+C\|h_t(\cdot+\Delta s) -h_t\|_{L^2}
\varepsilonnd{align*}
so as to obtain that for $\mathcal L^1$-a.e. $t_0,t_1$ with $0<t_0<t_1<T$
\[
\begin{split}
\|\partial_\sigma(h_{t_1}(\cdot+\Delta s) - & h_{t_1})\|_{L^2(\partial G)}^2 +\int_{t_0}^{t_1}\|\partial_{\sigma \sigma\sigma}(h_{t}(\cdot+\Delta s) -h_{t})\|_{L^2(\partial G)}^2\,dt \\
&\leq \|\partial_\sigma(h_{t_0}(\cdot+\Delta s) - h_{t_0})\|_{L^2(\partial G)}^2+C\int_{t_0}^{t_1}\|h_{t}(\cdot+\Delta s) -h_{t}\|_{L^2(\partial G)}^2\,dt.
\varepsilonnd{split}
\]
Thus, we may conclude that for every interval $J\subset\!\subset(0,T)$
$$
\sup_{t\in J}\| \partial_{\sigma\sigma} h_t\|_{L^2(\partial G)}^2+\int_J\|\partial_{\sigma \sigma\sigma\sigma}h_t\|_{L^2(\partial G)}^2\,dt<\infty.
$$
We now use this estimate, together with the estimate \varepsilonqref{Cinfty5.1} obtained in the previous step, in order to show \varepsilonqref{Cinfty6.2} below.
To this end, we subtract equation \varepsilonqref{Cinfty1} at time $t$ from the same equation at time $t+\Delta t$ and multiply both sides by $\partial_{\sigma\sigma\sigma\sigma}(h_{t+\Delta t}-h_t)$, obtaining
\[
\begin{split}
\frac{d}{dt} \biggl(\frac12\int_{\partial G} (\partial_{\sigma\sigma}(h_{t+\Delta t} - &h_{t}))^2 \,d \mathcal{H}^1 \biggr) + c \int_{\partial G} |\partial_{\sigma \sigma\sigma\sigma}(h_{t+\Delta t} -h_{t})|^2\,d \mathcal{H}^1 \\
&\leq C \int_{\partial G} (\partial_{\sigma\sigma}(Q_{t+\Delta t} -Q_t))^2\,d \mathcal{H}^1 + C\int_{\partial G}(h_{t+\Delta t}-h_{t})^2 \,d \mathcal{H}^1.
\varepsilonnd{split}
\]
Then, using \varepsilonqref{Cinfty5.1}, \varepsilonqref{linear}
and Proposition~\ref{interpolation}, we have
\begin{align*}
\|\partial_{\sigma\sigma}(Q_{t+\Delta t} -Q_t)\|_{L^2(\partial G)}
& \leq
C\|\partial_{\sigma\sigma}(Q_{t+\Delta t} -Q_t)\|_{C^{0,\alpha}(\partial G)}\leq C\|h_{t+\Delta t} -h_t\|_{C^{3,\alpha}(\partial G)} \\
&\leq C\|\partial_{\sigma\sigma\sigma}(h_{t+\Delta t} -h_t)\|_{L^2}^{\vartheta}\|h_{t+\Delta t} -h_t\|_{L^2}^{1-\vartheta}+C\|h_{t+\Delta t} -h_t\|_{L^2}.
\varepsilonnd{align*}
Then, arguing as in the proof of \varepsilonqref{Cinfty5}, we get
\begin{equation} \langlebel{Cinfty6.2}
\sup_{t\in J}\| \partial_t(\partial_{\sigma\sigma} h_t)\|_{L^2(\partial G)}^2+\int_J\|\partial_t(\partial_{\sigma \sigma\sigma\sigma}h_t)\|_{L^2(\partial G)}^2\,dt<\infty.
\varepsilonnd{equation}
Then, arguing as in the proof of \varepsilonqref{Cinfty5.1} we have that
$$
\sup_{t\in J}\|h_t\|_{C^{4,\alpha}(\partial G)}<\infty.
$$
At this point we proceed by induction, obtaining at each step first an increment in the space regularity and then the corresponding estimate with respect to time. More precisely, for every interval $J\subset\!\subset(0,T)$ and every integer $k\geq2$ we first have that
$$
\sup_{t\in J}\| \partial_{\sigma}^k h_t\|_{L^2(\partial G)}^2+\int_J\|\partial_{\sigma }^{k+2}h_t\|_{L^2(\partial G)}^2\,dt<\infty.
$$
Then from this we deduce that again for every interval $J\subset\!\subset(0,T)$
$$
\sup_{t\in J}\| \partial_t(\partial_{\sigma}^k h_t)\|_{L^2(\partial G)}^2+\int_J\|\partial_t(\partial_{\sigma }^{k+2}h_t)\|_{L^2(\partial G)}^2\,dt<\infty.
$$
and in turn that
$$
\sup_{t\in J}\|h_t\|_{C^{k+2,\alpha}(\partial G)}<\infty.
$$
In conclusion, this proves that $h\in W^{1,\infty}_{loc}(0,T;C^{\infty}(\partial G))$.
\noindent{\bf Step 4.} Let us now show the full regularity of $h$ with respect to time. As in Step 1, we fix $\Delta t\not=0$ and subtract equation \eqref{Cinfty1} from the same equation at time $t+\Delta t$. However, differently from before, we now multiply both sides of this difference by $\dot h_{t+\Delta t}-\dot h_t$. Then a simple application of Young's inequality and Proposition~\ref{interpolation} yields
\begin{equation} \label{Cinfty7}
\begin{split}
\int_{\partial G} (\dot h_{t+\Delta t} -\dot h_{t})^2 \,d \mathcal{H}^1 & \leq C \int_{\partial G} |\partial_{\sigma \sigma\sigma\sigma}(h_{t+\Delta t} -h_{t})|^2\,d \mathcal{H}^1 \\
& +C \int_{\partial G} (\partial_{\sigma\sigma}(Q_{t+\Delta t} -Q_t))^2\,d \mathcal{H}^1 + C\int_{\partial G}(h_{t+\Delta t}-h_{t})^2 \,d \mathcal{H}^1.
\end{split}
\end{equation}
Then we estimate as usual
\begin{align*}
\|\partial_{\sigma\sigma}(Q_{t+\Delta t} -Q_t)\|_{L^2}
& \leq
C\|\partial_{\sigma\sigma}(Q_{t+\Delta t} -Q_t)\|_{C^{0,\alpha}}\leq C\|h_{t+\Delta t} -h_t\|_{C^{3,\alpha}} \\
&\leq C\|\partial_{\sigma\sigma\sigma\sigma}(h_{t+\Delta t} -h_t)\|_{L^2}^{\vartheta}\|h_{t+\Delta t} -h_t\|_{L^2}^{1-\vartheta}+C\|h_{t+\Delta t} -h_t\|_{L^2}.
\end{align*}
Thus, from \eqref{Cinfty7} one gets
$$
\int_{\partial G} (\dot h_{t+\Delta t} -\dot h_{t})^2 \,d \mathcal{H}^1 \leq C \int_{\partial G} |\partial_{\sigma \sigma\sigma\sigma}(h_{t+\Delta t} -h_{t})|^2\,d \mathcal{H}^1 + C\int_{\partial G}(h_{t+\Delta t}-h_{t})^2 \,d \mathcal{H}^1.
$$
Dividing this inequality by $(\Delta t)^2$ and recalling what was proved in Step 3 we conclude that for every interval $J\subset\!\subset(0,T)$
$$
\sup_{t\in J}\| \partial_{tt} h_t\|_{L^2(\partial G)}^2<\infty.
$$
Similarly, differentiating equation \eqref{Cinfty1} $k$ times and arguing as before, we conclude that for every integer $k$ and for every interval $J\subset\!\subset(0,T)$
$$
\sup_{t\in J}\| \partial_{tt}(\partial_{\sigma}^kh_t)\|_{L^2(\partial G)}^2<\infty.
$$
It then follows that $h\in W^{2,\infty}_{loc}(0,T;C^{\infty}(\partial G))$. Finally, differentiating \eqref{Cinfty1} with respect to $t$ and repeating the same argument as before, we conclude that $h\in W^{k,\infty}_{loc}(0,T;C^{\infty}(\partial G))$ for every integer $k\geq2$. This concludes the proof.
\end{proof}
\section{Asymptotic Stability} \label{sec:stability}
In this section we address the long-time behavior of the flow for a special class of initial data.
To this aim, we start by noticing that if $G$ is stationary, then a standard bootstrap argument shows that in fact $G$ is of class
$C^\infty$. Moreover, by the results in \cite{KLM} $G$ turns out to be analytic.
Recall also that the definition of stationary set is weaker than the notion of criticality, where one requires the first variation to be constant on the whole $\partial G$ (see Remark~\ref{rm:station}).
However, the above definition fits better in our framework, since during the evolution there is no mass transfer from one Jordan component to another. More precisely, denoting as before by $F_{t,i}$ the bounded open set enclosed by the $i$-th connected component $\Gamma_{F_t, i}$ of $\partial F_t$, the area $|F_{t,i}|$ is preserved during the flow. Indeed, one has
\begin{equation}\label{componenti}
\frac{d}{ds}|F_{t+s,i}|_{|_{s=0}}=\int_{\Gamma_{F_t, i}} V_t\, d\mathcal{H}^1=
\int_{\Gamma_{F_t, i}} \partial_{\sigma\sigma}R_t\, d\mathcal{H}^1=0.
\end{equation}
We are now ready to state the main result of this section.
\begin{theorem}
\label{thmstability}
Let $G \subset\subset\Omega$ be a regular strictly stable stationary set in the sense of Definition~\ref{def:stable} and fix $M>0$, $\alpha\in (0,1)$. There exists $\delta_0>0$ with the following property: Let $F_0\in \mathfrak{h}^{2,\alpha}_M(\partial G)$ be such that
\[
|F_0 \Delta G| < \delta_0, \qquad \text{and} \qquad \int_{\partial F_0} (\partial_\sigma R_0)^2 \, d\mathcal{H}^1 < \delta_0,
\]
where $R_0:=g(\nu_{F_0})k_{F_0}-Q(E(u_{F_0}))$ on $\partial F_0$.
Then the unique solution $(F_t)_{t>0}$ of the flow \eqref{flow} with initial datum $F_0$ is defined for all times $t>0$.
Moreover, $F_t \to F_{\infty}$ $H^3$-exponentially fast, where $F_\infty$ is the unique stationary set in $\mathfrak{h}^{2,\alpha}_{\sigma_1}(\partial G)$
(see Proposition~\ref{stationary}) such that $|F_{\infty, i}|=|F_{0,i}|$ for $i=1,\dots, m$. In particular, if $|F_{0,i}|=|G_i|$ for $i=1, \dots, m$, then $F_t \to G$ $H^3$-exponentially fast.
Here $(F_{\infty, i})_{i=1, \dots, m}$ and $(F_{0,i})_{i=1, \dots, m}$ denote the open sets enclosed by the connected components $(\Gamma_{F_\infty, i})_{i=1, \dots, m}$ of $\partial F_\infty$ and $(\Gamma_{F_0, i})_{i=1, \dots, m}$ of $\partial F_0$, respectively, numbered according to \eqref{numbered}.
\end{theorem}
\begin{remark}\label{rm:precise}
In the previous statement, by $H^3$-exponential convergence of $F_t$ to $F_\infty$ we mean precisely the following:
writing $\partial F_t:=\{x+\tilde h_t(x)\nu_{F_\infty}(x):x\in \partial F_\infty\}$, we have
\[
\|\tilde h_t\|_{H^3(\partial F_\infty)} \leq C e^{-c t}
\]
for suitable constants $C, c>0$.
\end{remark}
\par\noindent For an example of a strictly stable set $G$ to which Theorem~\ref{thmstability} applies we refer to \cite{CJP}.
In order to prove the theorem, we need the following preliminary energy identities.
\begin{proposition}
\label{energia identiteetit}
Let $(F_t)_{t \in (0,T)}$ solve \varepsilonqref{flow}. Then
we have:
$$
\frac{d}{dt} J(F_t) = - \int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1
$$
and
$$
\frac{d}{dt} \left(\frac{1}{2} \int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1 \right) = - \partial^2 J(F_t)[\partial_{\sigma\sigma}R_t] - \frac12 \int_{\partial F_t} k_t \, (\partial_\sigma R_t)^2 \, \partial_{\sigma \sigma}R_t\, d \mathcal{H}^1.
$$
\end{proposition}
\begin{proof}
The first identity follows immediately by recalling that $R_t = g(\nu_t)k_t - Q(E(u_t))$ is the first variation of the energy $J$ at $F_t$, and thus
\[
\frac{d}{dt} J(F_t) = \int_{\partial F_t} R_t (X_t \cdot \nu_t) \, d \mathcal{H}^1 = \int_{\partial F_t} R_t\, \partial_{\sigma \sigma}R_t\, d \mathcal{H}^1 = - \int_{\partial F_t} (\partial_{\sigma}R_t)^2 \, d \mathcal{H}^1.
\]
For the second identity we note that the calculations leading to \eqref{app6} still apply with $f_t \circ \pi$ replaced by $-Q(E(u_t))$ on $\partial F_t$. Hence we have
\begin{equation}
\label{app7}
\begin{split}
\frac{d}{dt} \left(\frac{1}{2} \int_{\partial F_{t}} (\partial_\sigma R_t)^2 \, d \mathcal{H}^1\right) = &- \int_{\partial F_{t}} g(\nu_t) (\partial_{\sigma \sigma \sigma}R_t)^2\, d \mathcal{H}^1 + \int_{\partial F_{t}} \partial_{\sigma \sigma }R_t \frac{\partial}{\partial t} \left( Q(E(u(\cdot,t))) \right) \, d \mathcal{H}^1\\
&+ \int_{\partial F_{t}} g(\nu_t) k_t^2 (\partial_{\sigma \sigma}R_t)^2 \, d \mathcal{H}^1 + \int_{\partial F_{t}} \partial_{\nu_t} (Q(E(u_t))) (\partial_{\sigma \sigma}R_t)^2 \, d \mathcal{H}^1 \\
&- \frac{1}{2} \int_{\partial F_{t}} k_t \,(\partial_{\sigma}R_t)^2 \partial_{\sigma \sigma}R_t \, d \mathcal{H}^1.
\end{split}
\end{equation}
In order to conclude, we need to show that
$$
\int_{\partial F_{t}} \partial_{\sigma \sigma }R_t \frac{\partial}{\partial t} \left( Q(E(u(\cdot,t))) \right) \, d \mathcal{H}^1 = 2 \int_{\Omega \setminus F_{t}} Q(E(\dot u_t)) \, dx
$$
so as to recognize the quadratic form $ - \partial^2 J(F_t)[\partial_{\sigma\sigma}R_t]$ in the first four terms of \eqref{app7}.
To this aim, observe that since $\mathbb{C} E(u_t)[\nu_t] = 0$ on $\partial F_t$, we have
\[
\begin{split}
\int_{\partial F_{t}} \partial_{\sigma \sigma }R_t &\frac{\partial}{\partial t} \left( Q(E(u(\cdot,t))) \right) \, d \mathcal{H}^1 = \int_{\partial F_{t}} \partial_{\sigma \sigma }R_t\, \mathbb{C} E(u_t) : E(\dot u_t) \, d \mathcal{H}^1 = \int_{\partial F_{t}} \partial_{\sigma \sigma }R_t\, \mathbb{C} E(u_t) : D \dot u_t \, d \mathcal{H}^1 \\
&= \int_{\partial F_{t}} \partial_{\sigma \sigma }R_t\, \mathbb{C} E(u_t) : D_\tau \dot u_t \, d \mathcal{H}^1 = - \int_{\partial F_{t}} \operatorname{div}_\tau (\partial_{\sigma \sigma }R_t\, \mathbb{C} E(u_t) ) \cdot \dot u_t \, d \mathcal{H}^1\\
&=2 \int_{\Omega \setminus F_{t}} Q(E(\dot u_t)) \, dx,
\end{split}
\]
where the last equality follows by choosing $\varphi = \dot u_t$ as a test function in the equation~\eqref{eq u dot}.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thmstability}.}]
Throughout the proof, $C$ will denote a constant depending only on the $C^{2,\alpha}$-bounds on the boundary of the set $G$. Here we always assume that $\alpha < 1/2$ and the value of $C$ may change from line to line.
For any set $F \in \mathfrak{h}^{2,\alpha}_M(G)$ consider
\begin{equation}\label{D(F)0}
D(F):=\int_{F\Delta G}\mathrm{dist\,}(x, \partial G)\, dx
\end{equation}
and note that
$$
|F\Delta G|\leq C\|h_F\|_{L^2(\partial G)}\leq C \sqrt{D(F)}\,
$$
for constants depending only on $G$. Recall that $h_F$ is the function such that
$$
\partial F = \{ x +h_F(x)\nu_G(x) : x \in \partial G \}.
$$
For every $\varepsilon_1>0$ sufficiently small, there exists $\delta_1\in (0,1)$ so small that for any set $F\in \mathfrak{h}^{2,\alpha}_M(G)$ the following implications hold true:
\begin{equation}\label{de01}
F\in \mathfrak{h}^{2,\alpha}_M(G)\text{ and } D(F)\leq \delta_1\Longrightarrow \|h_F\|_{C^{1,\alpha}(\partial G)}\leq \frac{\varepsilon_1}2\,,
\end{equation}
and
\begin{equation}\label{de02}
\|h_F\|_{C^{1,\alpha}(\partial G)}\leq \varepsilon_1\text{ and } \int_{\partial F} (\partial_\sigma R_F)^2\, d \mathcal{H}^1 \leq 1\Longrightarrow \|h_F\|_{C^{2,\alpha}(\partial G)}\leq \omega(\varepsilon_1)\leq\sigma_1\wedge M,
\end{equation}
where $\omega$ is a positive non-decreasing function such that $\omega(\varepsilon_1)\to 0$ as $\varepsilon_1\to 0^+$, and $\sigma_1$ is the constant provided by Proposition~\ref{stationary}. Here $R_F$ stands, as usual, for $g(\nu_F)k_F-Q(E(u_F))$ on $\partial F$.
Fix $\varepsilon_1$, $\delta_1\in (0,1)$ satisfying \eqref{de01} and \eqref{de02}, and choose an initial set $F_0\in \mathfrak{h}^{2,\alpha}_M(G)$ such that
\begin{equation}\label{initial}
D(F_0)\leq \delta_0\qquad\text{and}\qquad \int_{\partial F_0} (\partial_\sigma R_0)^2\, d \mathcal{H}^1 \leq \delta_0\,,
\end{equation}
where the choice of $\delta_0 < \delta_1$ will be made later. Here we write $R_0$ instead of $R_{F_0}$.
Let $(F_t)_{t\in (0, T(F_0))}$ be the unique classical solution of the flow \eqref{flow} provided
by Theorem~\ref{th:existence}. Here $T(F)\in (0,+\infty]$ stands for the maximal time of existence of the classical solution starting from $F$. By the same theorem, there exists $\delta>0$ and $T_0>0$ such that
\begin{equation}\label{dalbasso}
T(F)\geq T_0\quad\text{for all $F\subset\subset\Omega$ s.t. $\|h_F\|_{L^2(\partial G)}\leq \delta$ and $\|h_F\|_{H^{3}(\partial G)}\leq 1$.}
\end{equation}
Without loss of generality, in what follows we may also assume $\delta_1$ to be so small that
$ D(F)\leq \delta_1$ implies $\|h_F\|_{L^2(\partial G)}\leq \delta$, with $\delta$ as in \eqref{dalbasso}.
We now split the rest of the proof into two steps.
\noindent {\bf Step 1.}{\it (Stopping-time)} Let $\bar t\leq T(F_0)$ be the maximal time such that
\begin{equation}\label{Tprimo}
\|h_t\|_{C^{1,\alpha}(\partial G)}<\varepsilon_1\quad\text{and}\quad \int_{\partial F_t} (\partial_\sigma R_t)^2\, d \mathcal{H}^1< 2\delta_0
\quad\text{for all $t\in (0, \bar t)$.}
\end{equation}
Note that such a maximal time is well defined in view of \eqref{de01} and \eqref{initial}. We claim that, by taking $\varepsilon_1$ and $\delta_0$ smaller if needed, we have $\bar t=T(F_0)$.
To this aim, assume by contradiction that $\bar t< T(F_0)$. Then,
\[
\|h_{\bar t}\|_{C^{1,\alpha}(\partial G)}= \varepsilon_1 \quad\text{or}\quad \int_{\partial F_{\bar t}} (\partial_\sigma R_{\bar t})^2 \, d \mathcal{H}^1 =2\delta_0.
\]
We split the proof into steps, according to the two alternatives above.
{\bf Step 1-(a).} Assume that
\begin{equation}\label{step1a}
\int_{\partial F_{\bar t}} (\partial_\sigma R_{\bar t})^2 \, d \mathcal{H}^1 =2\delta_0.
\end{equation}
Since \eqref{de02} holds for $h_t$ for every $t \in (0,\bar t)$, by Lemma~\ref{lemma:j2>0near} (and the fact that $\sigma_1<\sigma_0$) we get
\[
\partial^2 J(F_t)[\partial_{\sigma \sigma} R_t] \geq \frac{m_0}{2} \|\partial_{\sigma \sigma} R_t\|_{H^1(\partial F_t)}^2.
\]
Therefore by Proposition \ref{energia identiteetit} we get
\begin{equation} \label{from_stability}
\begin{split}
\frac{d}{dt} \biggl(\frac{1}{2} \int_{\partial F_t} (\partial_\sigma R_t)^2& \, d\mathcal{H}^1 \biggr) = -\partial^2 J(F_t)[\partial_{\sigma \sigma} R_t] - \frac12 \int_{\partial F_t} k_t \, (\partial_\sigma R_t)^2 \, \partial_{\sigma \sigma}R_t\, d \mathcal{H}^1\\
&\leq -\frac{m_0}{2} \|\partial_{\sigma \sigma} R_t\|_{H^1(\partial F_t)}^2 +C \int_{\partial F_t} (\partial_\sigma R_t)^2 \, |\partial_{\sigma \sigma}R_t| \, d \mathcal{H}^1\\
&\leq -\frac{m_0}{2} \int_{\partial F_t} (\partial_{\sigma \sigma\sigma} R_t)^2 d \mathcal{H}^1 + C \int_{\partial F_t} \left(|\partial_\sigma R_t|^3 + |\partial_{\sigma \sigma}R_t|^3\right) d \mathcal{H}^1.
\end{split}
\end{equation}
In turn, by Proposition \ref{interpolation} and the Poincar\'e Inequality we get
\[
\begin{split}
\|\partial_\sigma R_t \|_{L^3(\partial F_t)}^3 &\leq C \, \|\partial_{\sigma \sigma \sigma}R_t \|_{L^2(\partial F_t)}^{\frac14}\|\partial_{\sigma}R_t \|_{L^2(\partial F_t)}^{\frac{11}{4}} \\
&\leq C \|\partial_{\sigma \sigma \sigma}R_t \|_{L^2(\partial F_t)}^{2} \|\partial_{\sigma}R_t \|_{L^2(\partial F_t)}\\
&\leq C \sqrt{\delta_0} \|\partial_{\sigma \sigma \sigma}R_t \|_{L^2(\partial F_t)}^{2},
\varepsilonnd{split}
\]
where the last inequality follows from \eqref{Tprimo}. Similarly we get
\[
\begin{split}
\|\partial_{\sigma \sigma}R_t\|_{L^3(\partial F_t)}^3 &\leq C \, \|\partial_{\sigma \sigma\sigma}R_t \|_{L^2(\partial F_t)}^{\frac74}\|\partial_{\sigma }R_t \|_{L^2(\partial F_t)}^{\frac{5}{4}} \\
&\leq C \sqrt{\delta_0} \|\partial_{\sigma \sigma\sigma}R_t \|_{L^2(\partial F_t)}^{2}.
\varepsilonnd{split}
\]
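In both chains above, the passage from the interpolation estimate to the final bound combines \eqref{Tprimo} with the Poincar\'e inequality $\|\partial_{\sigma}R_t\|_{L^2(\partial F_t)}\leq C\|\partial_{\sigma\sigma\sigma}R_t\|_{L^2(\partial F_t)}$, which is available since $\partial_\sigma R_t$ has zero average on each component of $\partial F_t$; for instance,
\[
\|\partial_{\sigma \sigma \sigma}R_t \|_{L^2(\partial F_t)}^{\frac14}\|\partial_{\sigma}R_t \|_{L^2(\partial F_t)}^{\frac{11}{4}}
=\|\partial_{\sigma \sigma \sigma}R_t \|_{L^2(\partial F_t)}^{\frac14}\|\partial_{\sigma}R_t \|_{L^2(\partial F_t)}^{\frac{7}{4}}\,\|\partial_{\sigma}R_t \|_{L^2(\partial F_t)}
\leq C\|\partial_{\sigma \sigma \sigma}R_t \|_{L^2(\partial F_t)}^{2}\,\|\partial_{\sigma}R_t \|_{L^2(\partial F_t)}\,.
\]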
Therefore, choosing $\delta_0$ small enough, we deduce from \eqref{from_stability} that
\[
\begin{split}
\frac{d}{dt} \biggl(\frac{1}{2} \int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1 \biggr) &\leq \bigl( -\frac{m_0}{2} + C \sqrt{\delta_0}\bigr) \int_{\partial F_t} (\partial_{\sigma \sigma\sigma} R_t)^2 \, d \mathcal{H}^1\\
&\leq -\frac{m_0}{4} \int_{\partial F_t} (\partial_{\sigma \sigma\sigma} R_t)^2 \, d \mathcal{H}^1 \\
&\leq -m_1 \int_{\partial F_t} (\partial_{\sigma} R_t)^2 \, d \mathcal{H}^1
\varepsilonnd{split}
\]
for all $t \leq \bar t$ and for some $m_1>0$. Note that the last inequality above follows from the Poincar\'e Inequality. Integrating the above differential inequality implies
\begin{equation} \label{exponential}
\int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1 \leq e^{-m_1 t} \int_{\partial F_0} (\partial_\sigma R_0)^2 \, d\mathcal{H}^1 \leq \delta_0\, e^{-m_1 t}
\end{equation}
for every $t \leq \bar t$. This contradicts \eqref{step1a}.
{\bf Step 1-(b).} Assume now that
\begin{equation}\label{step1b}
\|h_{\bar t}\|_{C^{1,\alpha}(\partial G)}=\varepsilon_1\,.
\end{equation}
By \eqref{stimaf3} we have that
\begin{equation}\label{leading2}
\frac{d}{dt} D(F_t) \leq P(F_t)^{\frac12} \left( \int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1 \right)^{\frac12},
\end{equation}
where $D(F_t)$ is defined in \eqref{D(F)0}. Therefore we may use \eqref{exponential} to estimate
\begin{equation} \label{L2 decay}
\frac{d}{dt} D(F_t) \leq C \sqrt{\delta_0 e^{-m_1 t}} \,
\end{equation}
for every $t \leq \bar t$. This implies
\[
D(F_t) \leq D(F_0) + C\sqrt{\delta_0} \leq C\sqrt{\delta_0}
\]
for every $t \leq \bar t$. We may choose $\delta_0$ so small that the above estimate implies $D(F_t) \leq \delta_1$ and, in turn, by
\eqref{de01}, $\|h_{t}\|_{C^{1,\alpha}(\partial G)}\leq \frac{\varepsilon_1}{2}$ for every $t \leq \bar t$. This contradicts \eqref{step1b}.
\noindent {\bf Step 2.}{\it (Global-in-time existence and convergence)} By the previous step, as long as the flow is defined, i.e., on $(0,T(F_0))$, the estimates \eqref{Tprimo} hold. In turn, by taking $\varepsilon_1$ (and $\delta_1$, $\delta_0$) smaller if needed, we have $\|h_t\|_{L^2(\partial G)} \leq \delta$ and $\|h_t\|_{H^3(\partial G)} \leq 1$ for all $t\in (0,T(F_0))$. By \eqref{dalbasso} and a standard continuation argument, we deduce that $(F_t)_{t}$ is defined for all times, i.e., $T(F_0) = \infty$.
From \eqref{exponential} we also deduce that
\begin{equation} \label{exponential2}
\int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1 \leq e^{-m_1 t} \int_{\partial F_0} (\partial_\sigma R_0)^2 \, d\mathcal{H}^1 \leq \delta_0\, e^{-m_1 t}
\end{equation}
for all $t >0$, and in turn, by \eqref{de02}, we have
\begin{equation} \label{exponential3}
\|h_t\|_{C^{2,\alpha}(\partial G)}\leq M
\end{equation}
for all $t>0$. Therefore, we deduce that there exists $h_\infty \in C^{2,\alpha}(\partial G)$ and a sequence $t_n\to+\infty$ such that
\begin{equation}\label{tenne}
h_{t_n} \to h_\infty\qquad\text{ in $C^{2,\beta}(\partial G)$ for all $\beta < \alpha$.}
\end{equation}
Moreover, by \eqref{exponential2} we have $\partial_\sigma R_{\infty}=0$, and thus the set $F_\infty\in \mathfrak{h}^{2,\alpha}_M(G)$ such that
\[
\partial F_\infty = \{ x + h_\infty(x) \nu_G(x) : x \in \partial G\}
\]
is stationary. Recall that for every $t\in [0, +\infty]$, $(\Gamma_{F_t,i})_{i=1,\dots, m}$ denote the connected components of $\partial F_t$,
numbered according to \eqref{numbered}. Denote also as usual by $F_{t,i}$ the bounded open set enclosed by $\Gamma_{F_t,i}$.
Since $|F_{t,i}|=|F_{0,i}|$ for every $t>0$ by \eqref{componenti}, taking also into account \eqref{de02} and Proposition~\ref{stationary}, we deduce that in fact $F_\infty$ is the unique stationary set in $\mathfrak{h}^{2,\alpha}_{\sigma_1}(\partial G)$
such that $|F_{\infty, i}|=|F_{0,i}|$ for $i=1,\dots, m$.
It remains to show that the whole flow exponentially converges to $F_\infty$.
To this aim, define
$$
D_\infty(E):=\int_{E\Delta F_\infty}\mathrm{dist\,}(x, \partial F_\infty)\, dx\,.
$$
The same calculations and arguments leading to \eqref{leading2} and \eqref{L2 decay} show that
\begin{equation}\label{step6}
\frac{d}{dt} D_\infty(F_t) \leq P(F_t)^{\frac12} \left( \int_{\partial F_t} (\partial_\sigma R_t)^2 \, d\mathcal{H}^1 \right)^{\frac12}\leq C \sqrt{\delta_0 e^{-m_1 t}}
\end{equation}
for all $t>0$. From this inequality it is easy to deduce that $\lim_{t\to +\infty} D_\infty(F_t)$ exists. Thus, by \eqref{tenne},
$D_\infty(F_t)\to 0$ as $t\to +\infty$. In turn, integrating \eqref{step6} and writing $\partial F_t=\{x+\tilde h_t(x)\nu_{F_\infty}(x): x\in \partial F_\infty\}$ we get
$$
\|\tilde h_t\|_{L^2(\partial F_\infty)}^2\leq C D_\infty(F_t)\leq\int_t^{+\infty}C \sqrt{\delta_0 e^{-m_1 s}}\, ds\leq
C \sqrt{\delta_0 e^{-m_1 t}}\,.
$$
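Here the last inequality follows from the elementary computation
\[
\int_t^{+\infty}\sqrt{\delta_0\, e^{-m_1 s}}\, ds=\frac{2}{m_1}\sqrt{\delta_0}\,e^{-\frac{m_1 t}{2}}=\frac{2}{m_1}\sqrt{\delta_0\, e^{-m_1 t}}\,.
\]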
Since $(\tilde h_t)_{t>0}$ is bounded in $C^{2,\alpha}(\partial F_\infty)$ by \eqref{exponential3}, the above estimate together with standard interpolation shows that $\|\tilde h_t\|_{C^{2,\beta}(\partial F_\infty)}\to 0$ exponentially fast for every $\beta < \alpha$. Finally, using also \eqref{exponential2} and Lemma~\ref{elastiset} (with $G=F_\infty$), we deduce that $\|\tilde h_t\|_{H^3(\partial F_\infty)} \to 0$ exponentially fast.
\end{proof}
\section{Periodic graphs}\label{sec:graphs}
In this section we briefly describe how our main results read in the context of evolving periodic graphs.
In this framework, given a (sufficiently regular) non-negative $\ell$-periodic function $h:[0,\ell]\to [0,+\infty)$, the free energy associated with it reads
\begin{equation}\label{functional}
J(h):=\int_{\Omega_{h}}Q(E(u_h))\,dx+\int_{\Gamma_{h}}\varphi(\nu_{\Omega_h})\,d{\mathcal{H}}^{1},
\end{equation}
where $x=(x_1,x_2)\in\mathbb{R}^{2}$, $\Gamma_{h}$ denotes the graph of $h$, $\Omega_{h}$ is the subgraph of $h$, i.e.,
\[
\Omega_{h}:=\{(x_1,x_2)\in(0,\ell)\times\mathbb{R}:\,0<x_2<h(x_1)\},
\]
\]
and $u_h$ is the elastic equilibrium in $\Omega_h$, namely the solution of the elliptic system
\begin{equation}\label{leiintro}
\begin{cases}
\operatorname{div}\,\mathbb{C} E(u_h)=0 \quad \text{in $\Omega_{h}$},\\
\mathbb{C} E(u_h)[\nu_{\Omega_h}]=0 \quad \text{on $\Gamma_h$,}\\
\nabla u_h(\cdot, x_2) \quad \text{ is $\ell$-periodic,}\\
u_h(x_1,0)=e_0(x_1,0)\,,
\end{cases}
\end{equation}
for a suitable fixed constant $e_0\neq 0$. As mentioned already in the introduction, the above energy relates to a variational model for epitaxial growth: The graph $\Gamma_h$ describes the (free) profile of the elastic film, which occupies the region $\Omega_h$ and is grown on a (rigid) much thicker substrate, while the \emph{mismatch strain} constant $e_0$ appearing in the Dirichlet condition for $u_h$ at the interface $\{x_2=0\}$ between film and substrate measures the mismatch between the characteristic atomic distances in the lattices of the two materials.
In this framework, the (local) minimizers of \eqref{functional} under an area constraint on $\Omega_h$ describe the equilibrium configurations of epitaxially strained elastic films, see \cite{FFLM, FFLM2, FFLM3, FM09} and the references therein.
In the context of periodic graphs, given an initial $\ell$-periodic profile $\bar h\in H^3(0,\ell)$ (in short, $\bar h\in H^3_{\rm per}(0,\ell)$), we look for a solution $(h_t)_{t\in [0,T)}$ of the following problem:
\begin{equation}\label{flow2}
\begin{cases}
\frac 1{J_t}\dot h_t=\left(g(\nu_t)k_t+Q(E(u_t))\right)_{\sigma\sigma}
& \text{on $\Gamma_{h_t}$ and for all $t\in(0, T)$,}\\
h_t \text{ is $\ell$-periodic}& \text{for all $t\in(0, T)$,}\\
h_0=\bar h\,,
\end{cases}
\end{equation}
where we set
$J_t:=\sqrt{1+\left|\tfrac{\partial h_t}{\partial x_1} \right|^2},
$
$u_t$ stands for the solution of \eqref{leiintro}, with $\Omega_{h_t}$ in place of $\Omega_h$, and we write $\nu_t$, $k_t$ instead of $\nu_{\Omega_{h_t}}$ and $k_{\Omega_{h_t}}$, respectively. Note that in the first equation of \eqref{flow2} we have $+Q(E(u_t))$ instead of $-Q(E(u_t))$. This is due to the fact that in
\eqref{functional} the vector $\nu_{\Omega_h}$ now points outward with respect to the elastic body.
Although the setting is slightly different from that of the previous sections, the short-time existence and regularity theory of Section~\ref{sec:existence} clearly extends to the present situation, with the same arguments. In this way we improve upon the results of \cite{FFLM2}, showing that there is no need of a curvature regularization in the case where the anisotropy
$\varphi$ is convex and satisfies the ellipticity condition \eqref{ellipticity}. Also the stability analysis of Section~\ref{sec:stability} goes through without changes, thus showing that strictly stable stationary $\ell$-periodic configurations are $H^3$-exponentially stable (in the sense made precise by Remark~\ref{rm:precise}).
In the case of flat configurations, that is, of constant profiles $h\equiv a$ for some $a>0$, and when $Q$ is of the form
$$
Q(E):=\mu|E|^2+\frac{\lambda}2(\mathrm{trace\,} E)^2
$$
for some constants $\mu>0$ and $\lambda>-\mu$ (the so-called \emph{Lam\'e coefficients}), the relation among $a$, $\mu$, $\lambda$, $\ell$, and $e_0$ (see \eqref{leiintro}) that guarantees the strict stability of the flat configuration $h\equiv a$ with respect to $\ell$-periodic perturbations can be determined analytically. For the reader's convenience, we recall these results. Consider the
{\varepsilonm Grinfeld function} $K$ defined by
$$
K(s):=\max_{n\in\mathbb{N}}\frac{1}{n}H(ns)\,, \quad s\geq0\,,
$$
where
$$
H(s):=\frac{s+(3-4\nu_p)\sinh s\cosh s}{4(1-\nu_p)^2+s^2+(3-4\nu_p)\sinh^2s}\,,
$$
and $\nu_p$ is the {\varepsilonm Poisson modulus} of the elastic material, i.e.,
$\nu_p:=\frac{\lambda}{2(\lambda+\mu)}$\,.
It turns out that $K$ is strictly increasing and continuous, $K(s)\leq Cs$ for some positive constant $C$, and $\displaystyle\lim_{s\to +\infty}K(s)=1$, see \cite[Corollary~5.3]{FM09}.
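To get a feel for these properties of $K$, one can evaluate $H$ and a truncation of the supremum defining $K$ numerically. The sketch below is only illustrative: the Poisson modulus $\nu_p=0.3$, the cutoff $n_{\max}=200$, and the asymptotic shortcut for large arguments are our own choices, not part of the definitions.

```python
import math

def H(s, nu_p=0.3):
    # H(s) = (s + (3-4nu)*sinh(s)*cosh(s)) / (4(1-nu)^2 + s^2 + (3-4nu)*sinh(s)^2)
    a = 3.0 - 4.0 * nu_p
    if s > 350.0:
        return 1.0  # asymptotic value: H(s) -> 1 as s -> +infinity (avoids sinh overflow)
    return (s + a * math.sinh(s) * math.cosh(s)) / (
        4.0 * (1.0 - nu_p) ** 2 + s ** 2 + a * math.sinh(s) ** 2
    )

def K(s, nu_p=0.3, n_max=200):
    # Grinfeld function: sup over n of H(n*s)/n, truncated at n_max
    return max(H(n * s, nu_p) / n for n in range(1, n_max + 1))
```

With these choices one observes numerically that $K$ is increasing and approaches the limit value $1$ from below, in agreement with \cite[Corollary~5.3]{FM09}.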
Set $\mathbf{e_1}:=(1,0)$ and $\mathbf{e_2}:=(0,1)$.
Combining \cite[Theorem~2.9]{FM09} and \cite[Theorem~2.8]{Bo0} with the results of the previous section, we obtain the following theorem.
\begin{theorem}\label{th:2dliapunov}
Let $a_{\rm stable}:(0,+\infty)\to (0,+\infty]$ be defined as $a_{\rm stable}(\ell):=+\infty$, if $0<\ell\leq \frac{\pi}{4}\frac{(2\mu+\lambda)\partial_{\mathbf{e_1}\mathbf{e_1}}\varphi(\mathbf{e_2})}{e_0^2\mu(\mu+\lambda)}$, and as the solution of
$$
K\Bigl(\frac{2\pi a_{\rm stable}(\ell)}{\ell}\Bigr)=\frac{\pi}{4}\frac{(2\mu+\lambda)\partial_{\mathbf{e_1}\mathbf{e_1}}\varphi(\mathbf{e_2})}{e_0^2\mu(\mu+\lambda)}\frac1\ell\,,
$$
otherwise.
Then $h\equiv a$ is an $\ell$-periodic strictly stable stationary configuration for \eqref{functional} if and only if
$0<a<a_{\rm stable}(\ell)$.
In particular, for all $a\in (0,a_{\rm stable}(\ell))$ there exists $\delta>0$ such that if $\|\bar h-a\|_{H^3_{\rm per}(0,\ell)}\leq \delta$ and
$|\Omega_{\bar h}|=a\ell$, then the unique solution $(h_t)_t$ of \eqref{flow2} is defined for all times and satisfies
$$
\|h_t-a\|_{H^3_{\rm per}(0,\ell)}\leq C e^{-c t} \qquad\text{for all $t>0$,}
$$
for suitable constants $C, c>0$.
\end{theorem}
\section{Appendix}
Let $s\in (0,1)$ and $p\geq 1$. We recall that for a function $f: \mathbb{S}^1\to \mathbb{R}$ the Gagliardo seminorm $[f]_{s,p}$ is defined as
$$
[f]^p_{s,p}:=\int_{\mathbb{S}^1}\int_{\mathbb{S}^1}\frac{|f(x)-f(y)|^p}{|x-y|^{1+sp}}\, dx\,dy\,.
$$
If $s>0$ and $\ell$ is the integer part of $s$, the Sobolev space $W^{s,p}(\mathbb{S}^1)$ is the space of all functions $f$ in
$W^{\ell, p}(\mathbb{S}^1)$ such that $[\partial^\ell f]_{s-\ell,p}$ is finite, endowed with the norm
$\|f\|^p_{W^{s,p}(\mathbb{S}^1)}:=\|f\|_{W^{\ell, p}(\mathbb{S}^1)}^p+[\partial_\sigma^{\ell}f]_{s-\ell,p}^p$. Here we used the conventions $W^{0,p}=L^p$ and $[\partial_\sigma^{\ell}f]_{0,p}=\|\partial_\sigma^{\ell}f\|_{L^p(\mathbb{S}^1)}$. We recall also that for
$p=2$ the seminorm $[\partial_\sigma^{\ell}f]_{s-\ell,2}$ is equivalent to
$$
\bigg(\sum_{k\in \mathbb{Z}} |k|^{2s} a_k(f)^2 \bigg)^{\frac12}\,,
$$
where $\{a_k(f)\}$ is the sequence of the Fourier coefficients of $f$ with respect to the orthonormal basis $\{(2\pi)^{-\frac12}\mathrm{e}^{-ikz}\}_{k\in \mathbb{Z}}$. These definitions extend in the obvious way to the case where $\mathbb{S}^1$ is replaced by any regular Jordan curve $\Gamma$.
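To illustrate the definition, the following sketch approximates the squared Gagliardo seminorm $[f]_{s,2}^2$ on $\mathbb{S}^1$ by a double Riemann sum over an equispaced angular grid, measuring $|x-y|$ by arc length. The grid size and the quadrature rule are crude illustrative choices, not a sharp computation.

```python
import math

def gagliardo_sq(f, s, n=256):
    # Approximates [f]_{s,2}^2 = double integral of |f(x)-f(y)|^2 / |x-y|^{1+2s}
    # over the circle, with |x-y| the arc-length (geodesic) distance.
    h = 2.0 * math.pi / n
    pts = [i * h for i in range(n)]
    total = 0.0
    for x in pts:
        for y in pts:
            d = abs(x - y)
            d = min(d, 2.0 * math.pi - d)  # geodesic distance on S^1
            if d > 0.0:
                total += (f(x) - f(y)) ** 2 / d ** (1.0 + 2.0 * s) * h * h
    return total
```

As expected of a squared seminorm, the approximation vanishes on constants and scales quadratically under $f\mapsto 2f$.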
We now prove the following interpolation inequality for curves. Note that in the statement we use the identification $W^{t,2}=H^t$ for all $t>0$.
\begin{proposition}
\label{interpolation}
Let $\Gamma$ be a regular Jordan curve. Let $m\geq 1$ be an integer, $0\leq s< m$ and
$p\in [2,+\infty)$ such that $s+ 1/2 -1/p<m$. There exists a constant $C>0$, depending only on $m, s, p$ and on the length of $\Gamma$ such that for every $f\in H^{m}(\Gamma)$
\begin{equation}\label{inter1}
\|f\|_{W^{s,p}(\Gamma)} \leq C\big(\|\partial_\sigma^m f\|_{L^2(\Gamma)}^\theta \|f\|_{L^2(\Gamma)}^{1-\theta} + \|f\|_{L^2(\Gamma)} \big),
\end{equation}
where
\[
\theta = \frac{s+ 1/2 -1/p}{m}.
\]
If $s$ is a positive integer, then
\begin{equation}\label{inter2}
\| \partial_\sigma^s f\|_{L^p(\Gamma)} \leq C\,\| \partial_\sigma^m f\|_{L^2(\Gamma)}^\theta \|f\|_{L^2(\Gamma)}^{1-\theta}\,,
\end{equation}
with $\theta$ as before. The same inequality also holds if $s=0$, provided that $f$ has zero average.
Finally, if $0<\alpha<\frac12$, there exists $\theta'$, depending only on $m$ and $\alpha$, such that for every $f\in H^{m}(\Gamma)$
\begin{equation}\label{inter3}
\|f\|_{C^{m-1, \alpha}(\Gamma)}\leq C (\|\partial^m_\sigma f\|_{L^2(\Gamma)}^{\theta'}\|f\|_{L^2(\Gamma)}^{1-\theta'}+\|f\|_{L^2(\Gamma)})\,.
\end{equation}
\end{proposition}
\begin{proof}
It is enough to prove the statement for $\Gamma=\mathbb{S}^1$. The general case will follow by parametrizing $\Gamma$ by the arclength and by rescaling. Let $t=s+\frac12-\frac1p$.
Observe that
\begin{equation}\label{inter4}
\bigg(\sum_{k\in \mathbb{Z}} |k|^{2t}a_k(f)^2 \bigg)^{\frac12}\leq
\bigg(\sum_{k\in \mathbb{Z}} |k|^{2m} a_k(f)^2 \bigg)^{\frac\theta2}
\bigg(\sum_{k\in \mathbb{Z}} a_k(f)^2 \bigg)^{\frac{1-\theta}2}\,,
\end{equation}
which follows from H\"older's inequality with exponents $\frac{1}{\theta}$ and $\frac{1}{1-\theta}$, since $t=\theta m$. Thus \eqref{inter1} follows with $\|f \|_{W^{s,p}(\mathbb{S}^1)}$ replaced by
$\|f \|_{W^{t,2}(\mathbb{S}^1)}$. The general case follows recalling that
$W^{t,2}(\mathbb{S}^1)$ is continuously embedded in $W^{s,p}(\mathbb{S}^1)$, since $t=s+\frac12-\frac1p$
(see \cite[Th. 1.4.4.1]{Gr}).
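As a quick numerical sanity check of \eqref{inter4} (with $t=\theta m$), the snippet below tests the inequality on random finitely supported Fourier coefficient sequences; the parameters $m$, $\theta$, the mode cutoff, and the Gaussian coefficients are arbitrary illustrative choices.

```python
import random

def check_inter4(m=3, theta=0.4, n_modes=40, trials=100, seed=0):
    # Checks (sum |k|^{2t} a_k^2)^{1/2}
    #        <= (sum |k|^{2m} a_k^2)^{theta/2} * (sum a_k^2)^{(1-theta)/2}
    # with t = theta*m, for random finitely supported coefficient sequences.
    rng = random.Random(seed)
    t = theta * m
    for _ in range(trials):
        a = {k: rng.gauss(0.0, 1.0) for k in range(-n_modes, n_modes + 1)}
        lhs = sum(abs(k) ** (2.0 * t) * a[k] ** 2 for k in a) ** 0.5
        rhs = sum(abs(k) ** (2.0 * m) * a[k] ** 2 for k in a) ** (theta / 2.0) \
            * sum(a[k] ** 2 for k in a) ** ((1.0 - theta) / 2.0)
        if lhs > rhs * (1.0 + 1e-9):
            return False
    return True
```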
Observe that it is enough to prove \eqref{inter2} for functions $f$ with zero average also when
$s$ is a positive integer. On the other hand, if $f$ has zero average, \eqref{inter2} follows from
\eqref{inter4} and the aforementioned Sobolev embedding after observing that the
$W^{s,p}$-norm of $f$ is then equivalent to the $L^p$-norm of $\partial_\sigma^s f$.
Finally, to prove \eqref{inter3} it is enough to consider the case $m=1$ and then to argue by induction on $m$. To this aim, we observe that for every $z$, $w\in\mathbb{S}^1$
$$
|f(z)-f(w)|\leq c |z-w|^{\frac12}\|\partial_\sigma f\|_{L^2}\,,
$$
for some universal constant $c$. Then, if $0<\alpha<\frac12$,
\begin{eqnarray*}
|f(z)-f(w)|= |f(z)-f(w)|^{2\alpha}|f(z)-f(w)|^{1-2\alpha}\leq c |z-w|^{\alpha} \|\partial_\sigma f\|_{L^2}^{2\alpha}\|f\|_{L^\infty}^{1-2\alpha}\,.
\end{eqnarray*}
The conclusion follows by estimating $\|f\|_{L^\infty}$ via \eqref{inter1} with $m=1$.
\end{proof}
\begin{thebibliography}{50}
\bibitem{AFJM} \textsc{Acerbi E.; Fusco N.; Julin V.; Morini M.}, \emph{Nonlinear stability results for the modified Mullins-Sekerka and the surface diffusion flow}. Preprint 2016. To appear in {\it J. Differential Geom.}
\bibitem{AFM} \textsc{Acerbi E.; Fusco N.; Morini M.}, \emph{Minimality via second variation for a nonlocal isoperimetric problem,} Comm. Math. Phys. \textbf{322} (2013), 515--557.
\bibitem{ADN} \textsc{Agmon S.; Douglis A.; Nirenberg L.}, \emph{Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions. II,} Comm. Pure Appl. Math. \textbf{17} (1964), 35--92.
\bibitem{AP} \textsc{Ambrosetti A.; Prodi G.}, \emph{A primer of nonlinear analysis},
Cambridge Studies in Advanced Mathematics, 34. Cambridge University Press, Cambridge, 1995.
\bibitem{angenent-gurtin89} \textsc{Angenent S.}; \textsc{Gurtin M.E.},
\emph{Multiphase thermomechanics with interfacial structure. {II}.\
{E}volution of an isothermal interface.} Arch. Rational Mech. Anal. \textbf{108} (1989), 323--391.
\bibitem{BGZ15} \textsc{Bella P.; Goldman M.; Zwicknagl B.}, \emph{Study of island formation in epitaxially strained films on unbounded domains.} Arch. Ration. Mech. Anal. \textbf{218} (2015), 163--217.
\bibitem{BMN} \textsc{Bellettini G.; Mantegazza C.; Novaga M.}, \emph{Singular perturbations of mean curvature flow.} J. Differential Geom. \textbf{75} (2007), 403--431.
\bibitem{Bo0} \textsc{Bonacini M.}, \emph{Epitaxially strained elastic films: the case of anisotropic
surface energies.} ESAIM Control Optim. Calc. Var. \textbf{19} (2013), 167--189.
\bibitem{Bo}\textsc{Bonacini M.}, \emph{Stability of equilibrium configurations for elastic films
in two and three dimensions.} Adv. Calc. Var. \textbf{8} (2015), 117--153.
\bibitem{BC}\textsc{Bonnetier E.}; \textsc{Chambolle A.}, \emph{Computing
the equilibrium configuration of epitaxially strained crystalline films.} SIAM
J. Appl. Math. \textbf{62} (2002), 1093--1121.
\bibitem{BCS} \textsc{Braides A.; Chambolle A.; Solci M.}, \emph{A relaxation result for energies defined on pairs
set-function and applications.} ESAIM Control Optim. Calc. Var. \textbf{13} (2007), 717--734.
\bibitem{BHSV} \textsc{Burger M.}; \textsc{Hausser H.}; \textsc{St\"ocker C.}; \textsc{Voigt A.},
\varepsilonmph{A level set approach to anisotropic flows with curvature regularization.} J. Comput. Phys. \textbf{225} (2007), 183--205.
\bibitem{cahn-taylor94} \textsc{Cahn J. W.}; \textsc{Taylor J. E.},
\varepsilonmph{Overview N0-113 - Surface motion by surface-diffusion}.
Acta Metallurgica et Materialia, \textbf{42} (1994), 1045--1063.
\bibitem{CJP}\textsc{Capriani G.M.; Julin V.; Pisante G.}, \varepsilonmph{A quantitative second order minimality criterion for cavities in elastic bodies.} SIAM J. Math. Anal. \textbf{45} (2013), 1952--1991.
\bibitem{CS07} \textsc{Chambolle A.; Solci M.}, \varepsilonmph{Interaction of a bulk and a surface energy with a geometrical constraint.} SIAM J. Math. Anal. \textbf{39} (2007), 77--102.
\bibitem{chinese}\textsc{Chen X.}, \varepsilonmph{The Hele-Shaw problem and area-preserving curve-shortening motions.} Arch. Rational Mech. Anal. \textbf{123} (1993), 117--151.
\bibitem{ChSte}\textsc{Choksi R.; Sternberg P.}, \varepsilonmph{On the first and second variations of a nonlocal isoperimetric problem.} J. reine angew. Math. \textbf{611} (2007), 75--108.
\bibitem{dicarlo-gurtin-guidugli92} \textsc{Di Carlo A.}; \textsc{Gurtin M. E.}; \textsc{Podio-Guidugli P.},
\varepsilonmph{A regularized equation for anisotropic motion-by-curvature.}
SIAM J. Appl. Math. \textbf{52} (1992), 1111--1119.
\bibitem{EllGarcke} \textsc {Elliott C. M.}; \textsc{Garcke H.}, \varepsilonmph{Existence results for diffusive surface motion laws.} Adv. Math. Sci. Appl. \textbf{7} (1997), 467--490.
\bibitem{EMS} \textsc{Escher J.}; \textsc{Mayer U. F.}; \textsc{Simonett G.}, \varepsilonmph{The surface diffusion flow for immersed hypersurfaces.} SIAM J. Math. Anal. \textbf{29} (1998), 1419--1433.
\bibitem {FFLM}\textsc{Fonseca I.; Fusco N.; Leoni G.; Morini M.}, \varepsilonmph{Equilibrium configurations of epitaxially strained crystalline films: existence and regularity results.} Arch. Ration. Mech. Anal. \textbf{186} (2007), 477--537.
\bibitem{FFLM2}\textsc{Fonseca I.; Fusco N.; Leoni G.; Morini M.},
\varepsilonmph{Motion of elastic thin films by anisotropic surface diffusion with curvature regularization.} Arch. Ration. Mech. Anal. \textbf{205} (2012), 425--466.
\bibitem{FFLM3}\textsc{Fonseca I.; Fusco N.; Leoni G.; Morini M.},
\varepsilonmph{Motion of elastic three-dimensional elastic films by anisotropic surface diffusion with curvature regularization.} Anal. PDE \textbf{8} (2015), 373--423.
\bibitem{FFLMi}\textsc{Fonseca I.; Fusco N.; Leoni G.; Millot V.},
\varepsilonmph{Material voids in elastic solids with anisotropic surface energies.} J. Math. Pures Appl. \textbf{96} (2011), 591--639.
\bibitem{FM09}\textsc{Fusco N.}; \textsc{Morini M.}, \varepsilonmph{Equilibrium configurations of epitaxially strained elastic films: second order minimality conditions and qualitative properties of solutions.} Arch. Rational Mech. Anal. \textbf{203} (2012), 247--327.
\bibitem{GigaIto} \textsc{Giga Y; Ito K.}, \varepsilonmph{On pinching of curves moved by surface diffusion.} Commun. Appl. Anal. \textbf{2} (1998), 393--405.
\bibitem{GZ14} \textsc{Goldman M.; Zwicknagl B.}, \varepsilonmph{Scaling law and reduced models for epitaxially strained crystalline films.} SIAM J. Math. Anal. \textbf{46} (2014), 1--24.
\bibitem {Gr}\textsc{Grisvard P.}, \varepsilonmph{Elliptic problems in
nonsmooth domains.} Monographs and Studies in Mathematics, 24. Pitman
(Advanced Publishing Program), Boston, MA, 1985.
\bibitem{GurJab} \textsc{Gurtin M. E.}; \textsc{Jabbour M. E.}, \varepsilonmph{Interface evolution in three dimensions with curvature-dependent energy and surface diffusion: interface-controlled
evolution, phase transitions, epitaxial growth of elastic films.}
Arch. Ration. Mech. Anal. \textbf{163} (2002), 171--208.
\bibitem{GuVo} \textsc{Gurtin M.; Voorhees P.}, \varepsilonmph{The continuum mechanics of coherent two-phase
elastic solids with mass transport.} Proc. R. Soc. Lond. A \textbf{440} (1993), 323--343.
\bibitem{herring51}\textsc{Herring C.}, \varepsilonmph{Some theorems on the free energies of crystal surfaces.}, Physical Review
\textbf{82} (1951), 87--93.
\bibitem {KLM}\textsc{Koch H.}; \textsc{Leoni G.}; \textsc{Morini M.},
\varepsilonmph{On Optimal regularity of Free Boundary Problems and a Conjecture of De
Giorgi.} Comm. Pure Applied Math. \textbf{58} (2005), 1051--1076.
\bibitem{Morini} \textsc{Morini M.}, \varepsilonmph{Local and global minimality results for an isoperimetric problem with long-range interactions}, in `Free Discontinuity Problems' 153--224, CRM series \textbf{19}, Ed. Norm., Pisa, 2017.
\bibitem{Mu63} \textsc{Mullins W. W.}, \varepsilonmph{Solid surface morphologies governed by capillarity.} In: Metal Surfaces. American society for metals, 1963.
\bibitem{RRV} \textsc{R\"atz A.}; \textsc{Ribalta A.}; \textsc{Voigt A.}, \varepsilonmph{Surface evolution of elastically stressed films under deposition by a diffuse interface model.} J. Comp. Phys. \textbf{214} (2006), 187--208.
\bibitem{siegel-miksis-voorhees04}\textsc{Siegel M.}; \textsc{Miksis M. J.}; \textsc{Voorhees P. W.},
\varepsilonmph{ Evolution of material voids for highly anisotropic surface energy.} J. Mech. Phys. Solids \textbf{52} (2004), 1319--1353.
\varepsilonnd{thebibliography}
\varepsilonnd{document} | math | 137,611 |
\begin{document}
\title{Extremal storage functions and minimal realizations of discrete-time linear switching systems}
\begin{abstract}
We study the $\mathcal{L}_p$ induced gain of discrete-time linear switching
systems with graph-constrained switching sequences.
We first prove that, for stable systems in a minimal realization, for
every $p \geq 1$, the $\mathcal{L}_p$-gain is exactly
characterized through switching storage functions.
These functions are shown to be the $p$th power of a norm.
In order to consider general systems, we provide an algorithm for computing minimal realizations.
These realizations are \emph{rectangular systems}, with a state
dimension that varies according to the mode of the system.\\
We apply our tools to the study of the $\mathcal{L}_2$-gain. We provide algorithms for its approximation, and a converse result for the existence
of quadratic switching storage functions. We finally illustrate the results with a physically motivated example.
\end{abstract}
\section{Introduction}
Discrete-time linear switching systems are multi-modal systems that switch between a finite set of modes. They arise in many practical and theoretical applications \cite{LiSISA,JuTJSR,AlRaCMAA,JuDiFSOD,HeMiOAMS,ShWiAPSM}.
They are of the form
\begin{equation}
\begin{aligned}
x_{t+1} & = A_{\sigma(t)} x_t + B_{\sigma(t)} w_t, \\
z_{t} & = C_{\sigma(t)} x_t + D_{\sigma(t)} w_t,
\end{aligned}
\label{eq:switching}
\end{equation}
where $x \in \mathbb{R}^n$, $w \in \mathbb{R}^d$ and $z \in \mathbb{R}^m$ are respectively the state of the system,
a \emph{disturbance input}, and a \emph{performance output}. The function $\sigma(\cdot) : \mathbb{N} \rightarrow \{1, \ldots, N\}$ is called a \emph{switching sequence}, and at time $t$, $\sigma(t)$ is called the \emph{mode} of the system at time $t$. We consider switching sequences following a set of logical rules: they need be generated by walks in a given labeled graph $\Theta$ (we will denote this by $\sigma \in \Theta$).\\
The analysis and control of switched systems is an active research area, and many questions that are easy to understand and decide for LTI (i.e., Linear Time Invariant) systems are known to be hard for switching systems (see, e.g., \cite{BlTsTBOA, JuTJSR, LiSISA, PhMiDTBA}). Nevertheless, several analysis and control techniques have been devised (see e.g. \cite{LeDuUSOD, LeDuODAF, KuChSSSF, EsLeCOLS}), often providing conservative yet tractable methods. The particular question of the \emph{stability} of a switched system attracts a lot of attention; in the past few years, approaches using multiple Lyapunov functions have been devised \cite{DaBePDLF, BlFeSAOD,LeDuUSOD}, with recent works \cite{PhEsSODT, EsPhTMAS, AhJuJSRA} analyzing how conservative these methods are.
\\
In this paper, we are interested in the analysis and computation of the \emph{$\mathcal{L}_p$ induced} gain of switching systems:
\begin{equation}
\gamma_p = \left ( \s{ \mathbf{w} \in \mathcal{L}_p, \, \sigma \in \Theta, \, x_0 = 0} \frac{\sum_{t = 0}^{\infty} \|z_t\|^p_p}{ \sum_{t = 0}
^{\infty} \|w_t\|^p_p } \right )^{1/p},
\label{eq:gain}
\end{equation}
where $\mathbf{w} \in \mathcal{L}_p$ means that the disturbance signal satisfies $\sum_{t = 0}^{\infty} \|w_t\|_p^p < + \infty$, $\|w\|_p$ being the $p$-norm of $w$.\\ This quantity is a measure of a system's ability to dissipate the energy of a disturbance.
As noted in \cite{NaVoAUFF}, the $\mathcal{L}_2$-gain is one of the most studied performance measures. Some approaches link the gain of a switching system to the individual performances of its LTI modes (see e.g. \cite{CoBoRMSG, HeLIGO}); in \cite{EsLeCOLS, LeDuODAF}, a hierarchy of LMIs is presented to decide whether $\gamma_2 \leq 1$. In \cite{PuZhASOT}, a generalized version of the gain is studied through generating functions. For continuous time systems, \cite{HiHeLIGA} gives a tractable approach based on the computation of a storage function. For the general \emph{$\mathcal{L}_p$-gain}, we refer to \cite{NaVoCAOO} that relies on an operator-theoretic approach.\\
Our approach is inspired by the works \cite{PhEsSODT, EsPhTMAS, AhJuJSRA} on \emph{stability analysis}. These works present a framework for the stability analysis of switching systems (with constrained switching sequences) using multiple Lyapunov functions whose pieces are \emph{norms}. In Section \ref{sec:StorageTheory}, we provide a characterization of the $\mathcal{L}_p$-gain using \emph{multiple storage functions}, whose pieces are now $p$th powers of norms. We rely on two assumptions, namely that
the switching system be \emph{internally stable} and in a \emph{minimal realization}. In order to assert the generality of the results, both assumptions are discussed in Subsections \ref{subsec:Stability} and \ref{subsec:Minimize}, respectively. In particular, we give an algorithm for computing minimal realizations: these are \emph{rectangular systems} for which the dimension of the state space varies with the modes of the system. In Section \ref{sec:L2}, we narrow our focus to the $\mathcal{L}_2$-gain, providing two approaches for estimating the gain and constructing storage functions. The first uses dynamic programming and provides asymptotically tight lower bounds on the gain, while the second, based on the work of \cite{EsLeCOLS, LeDuODAF}, provides asymptotically tight upper bounds.
Then, in Subsection \ref{Subsection:Converse} we present a converse result for the existence of \emph{quadratic storage functions}, which can be obtained using convex optimization.
Section \ref{sec:Example} illustrates our results with a simple practical example.
\section*{Preliminaries}
Given a system of the form (\ref{eq:switching}) on $N$ modes, we denote the set of parameters of the system by
$$ \mathbf{\Sigma}= \{ (A_\sigma, B_\sigma, C_\sigma, D_\sigma)_{\sigma = 1, \ldots, N}\}. $$
We define constraints on the switching sequences through a strongly connected, labeled and directed graph $\Theta(V,E)$ (see e.g.
\cite{PhEsSODT, KuChSSSF, EsLeCOLS}). The edges of $\Theta$ are of the form $(u,v,\sigma) \in E$, where $u,v \in V$ are the source and destination nodes and $\sigma \in \{1, \ldots, N\}$ is a label that corresponds to a mode of the switching system. A \emph{constrained switching system} is then defined by a pair $(\Theta, \mathbf{\Sigma})$, and admissible switching sequences correspond to paths in $\Theta$ (see Fig. \ref{fig:intro_ex} for examples).
\begin{figure}
\caption{The automaton of Fig. \ref{fig:intro_exb}.}
\label{fig:intro_ex}
\end{figure}
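To make the graph encoding of the constraints concrete, the following Python sketch (our own illustration; the function name is ours) represents $\Theta$ as a list of labeled edge triples $(u,v,\sigma)$ and enumerates the admissible switching sequences of a given length by walking the graph.

```python
def admissible_sequences(edges, T):
    """Enumerate the switching sequences generated by length-T walks in the
    labeled graph Theta, given as a list of (u, v, sigma) edge triples."""
    succ = {}
    for u, v, s in edges:
        succ.setdefault(u, []).append((v, s))
    seqs = set()

    def walk(node, seq):
        # record the label sequence once the walk has T edges
        if len(seq) == T:
            seqs.add(tuple(seq))
            return
        for v, s in succ.get(node, []):
            walk(v, seq + [s])

    for u in succ:
        walk(u, [])
    return sorted(seqs)
```

For instance, with the two edges $(0,1,1)$ and $(1,0,2)$ the modes $1$ and $2$ must alternate, and the only admissible sequences of length $3$ are $(1,2,1)$ and $(2,1,2)$.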
More precisely, let $\pi \in \Theta$ denote a finite-length path in the graph. The associated switching sequence is written $t \mapsto \sigma_{\pi}(t)$, where, for a given $t \geq 0$, $\sigma_{\pi}(t)$ is the \emph{label} on the $(t+1)$th edge of $\pi$. The length of $\pi$, i.e., the number of \emph{edges} it contains, is written $|\pi|$. We let $\pi(i:j)$ with $1 \leq i \leq j \leq |\pi|$ denote the path formed from the $i$th edge of $\pi$ to its $j$th edge (both included).
We let $t \mapsto v_{\pi}(t) \in V$ be the sequence of \emph{nodes} encountered by $\pi$, where $v_{\pi}(0)$ is the source node, and if $|\pi| < + \infty$, $v_{\pi}(|\pi|)$ is the destination node. The functions $v_\pi(t)$ and $\sigma_{\pi}(t)$ need not be defined for $t \geq |\pi|$. In order to compactly describe the input-output maps of the system, we write
\begin{equation}
A_\pi := A_{\sigma_\pi(|\pi|-1)} \cdots A_{\sigma_\pi(0)},
\label{eq:Amap}
\end{equation}
\begin{equation}
B_\pi := \begin{pmatrix} A_{\sigma_\pi(|\pi|-1)}\cdots A_{\sigma_\pi(1)}B_{\sigma_\pi(0)},& \ldots, & B_{\sigma_\pi(|\pi|-1)}\end{pmatrix},
\label{eq:Bmap}
\end{equation}
\begin{equation}
C_\pi := \begin{pmatrix}
C_{\sigma_\pi(0)},\\
C_{\sigma_\pi(1)}A_{\sigma_\pi(0)},\\
\vdots \\
C_{\sigma_\pi(|\pi|-1)}A_{\sigma_\pi(|\pi|-2)} \cdots A_{\sigma_\pi(0)}
\end{pmatrix},
\label{eq:Cmap}
\end{equation}
\begin{multline}
D_\pi := \\
\begin{pmatrix}
D_{\sigma_\pi(0)} & \hspace{-12pt} 0 & \hspace{-9pt} & \hspace{-6pt} 0 \\
C_{\sigma_\pi(1)}B_{\sigma_\pi(0)} & \hspace{-12pt} D_{\sigma_\pi(1)} & \hspace{-9pt} & \hspace{-6pt} 0 \\
\vdots & \hspace{-12pt} \vdots & \hspace{-9pt} \ddots & \hspace{-6pt} 0 \\
C_{\sigma_\pi(|\pi| - 1 )}A_{\sigma_\pi(|\pi| - 2 )} \cdots A_{\sigma_\pi(1)}B_{\sigma_\pi(0)} & \hspace{-12pt} \ldots & \hspace{-9pt} \ldots & \hspace{-6pt} D_{\sigma_\pi(|\pi| - 1)}
\end{pmatrix}.
\label{eq:Dmap}
\end{multline}
If $\pi$ is an \emph{infinite} path, we define $A_\pi$, $B_\pi$, $C_\pi$, and $D_\pi$ similarly, the last three being operators on $\mathcal{L}_p$.
Within block matrices, $0$ and $I$ denote the null and identity matrices of appropriate dimensions. \\
Given a matrix $M \in \mathbb{R}^{n \times m}$, $\text{Im}(M)$ is the image of $M$ and $\text{Ker}(M)$ the kernel of $M$. The orthogonal of a subspace $\mathcal{X}$ of $\mathbb{R}^n$ is $\mathcal{X}^\perp$. The sum of $K$ subspaces of $ \mathbb{R}^n$, $(\mathcal{X}_i)_{i \in 1, \ldots, K}$, is $\sum_{i=1}^K \mathcal{X}_i = \{ \sum_{i=1}^K x_i: \, x_i \in \mathcal{X}_i\}.$
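By linearity, the matrices $A_\pi$, $B_\pi$, $C_\pi$ and $D_\pi$ of (\ref{eq:Amap})--(\ref{eq:Dmap}) can be recovered column by column by simulating (\ref{eq:switching}) from unit initial states and unit input impulses. The Python sketch below (our own illustration, not part of the paper) does exactly this for a finite path given by its label sequence.

```python
import numpy as np

def lifted_matrices(modes, Sigma):
    """Build the path matrices A_pi, B_pi, C_pi, D_pi by simulating the
    switching system from unit initial states and unit input impulses.
    Sigma[s] = (A_s, B_s, C_s, D_s); modes = [sigma(0), ..., sigma(T-1)]."""
    A0, B0, _, _ = Sigma[modes[0]]
    n, d, T = A0.shape[1], B0.shape[1], len(modes)

    def simulate(x0, w):                      # w has shape (T, d)
        x, zs = x0, []
        for t, s in enumerate(modes):
            A, B, C, D = Sigma[s]
            zs.append(C @ x + D @ w[t])
            x = A @ x + B @ w[t]
        return x, np.concatenate(zs)          # final state, stacked outputs

    # Columns of [A_pi; C_pi]: responses to x0 = e_i with zero disturbance.
    free = [simulate(e, np.zeros((T, d))) for e in np.eye(n)]
    A_pi = np.column_stack([x for x, _ in free])
    C_pi = np.column_stack([z for _, z in free])
    # Columns of [B_pi; D_pi]: responses to x0 = 0 and unit impulses in w.
    imp = [simulate(np.zeros(n), w.reshape(T, d)) for w in np.eye(T * d)]
    B_pi = np.column_stack([x for x, _ in imp])
    D_pi = np.column_stack([z for _, z in imp])
    return A_pi, B_pi, C_pi, D_pi
```

Any simulation of the system with initial state $x_0$ and disturbance $\mathbf{w}$ then satisfies $x_{|\pi|} = A_\pi x_0 + B_\pi \mathbf{w}$ and $\mathbf{z} = C_\pi x_0 + D_\pi \mathbf{w}$, and $D_\pi$ is block lower triangular, reflecting causality.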
\section{Extremal storage functions}
\label{sec:StorageTheory}
Our main results for this section rely on two assumptions.
They are discussed in Subsections \ref{subsec:Stability} and \ref{subsec:Minimize}.
\begin{assumption}[Minimality]
The system $(\Theta(V,E), \mathbf{\Sigma})$ is \emph{minimal} in the sense that, defining for all $v \in V$,
\begin{equation}
\mathcal{B}_v = \hspace{-6pt} \sum_{\pi \in \Theta, \, v_\pi(|\pi|) = v} \hspace{-6pt} \text{Im}(B_\pi), \ \ \mathcal{C}_v = \hspace{-6pt} \bigcap_{\pi \in \Theta, \, v_\pi(0) = v} \hspace{-6pt}
\text{Ker}(C_\pi),
\label{ass:eq:reach}
\end{equation} where $B_\pi$ and $C_\pi$ are given in (\ref{eq:Bmap}), (\ref{eq:Cmap}), we have
$\mathcal{B}_v = \mathbb{R}^{n}, \, \mathcal{C}_v = \{0\} \subset \mathbb{R}^{n}. $
\label{ass:minimal}
\end{assumption}
This definition of minimality is equivalent to the one given in \cite[Theorem 1]{PeBaRTOD}.
Minimality requires that, at every node, every state $x \in \mathbb{R}^n$ is reachable, and that no nonzero state produces an identically zero output.
\begin{assumption}[Internal stability]
The system $(\Theta, \mathbf{\Sigma} )$ is \emph{internally (exponentially) stable}, i.e.,
there exist $K \geq 1$ and $\rho < 1$ such that $ \forall \pi \in \Theta, \|A_\pi\| \leq K \rho^{|\pi|}.$
\label{ass:stability}
\end{assumption}
Internal stability guarantees that the $\mathcal{L}_p$-gain of the system is bounded (see e.g. \cite{DuLaANAF}). Deciding whether a given constrained system is internally stable is, however, a nontrivial problem. To address it,
the concept of \emph{multinorm} was introduced in \cite{PhEsSODT},
generalizing the tools of \cite{AhJuJSRA}:
\begin{definition}
A multinorm for a system $(\Theta(V,E), \mathbf{\Sigma})$ is a set $\{(|\cdot|_v)_{v \in V}\}$ containing one norm on $\mathbb{R}^n$ per node of $V$.
\end{definition}
This concept made it possible to develop tools for the approximation of the \emph{constrained joint spectral radius}, denoted $\hat \rho$, of systems $(\Theta, \mathbf{\Sigma})$ (see \cite{DaAGTS,PhEsSODT}).
It can be shown that the internal stability of a system is equivalent to $\hat{\rho} < 1$ (see e.g. \cite{DaAGTS,KoTBWF}), and in \cite[Proposition 2.2]{PhEsSODT} the authors show that
\begin{equation}
\hat \rho = \inf \left \{ \rho :
\begin{aligned}
&\exists \{(|\cdot|_v)_{v \in V}\} \ \text{s.t.} \\
& \forall x \in \mathbb{R}^n,\ \forall (u,v,\sigma) \in E,\ |A_\sigma x |_v \leq \rho|x|_u
\end{aligned} \right \}
\label{eq:jsrmulno}
\end{equation}
Thus, a constrained switching system is stable if and only if it has a multiple Lyapunov function, with no more than one piece per node of $\Theta$, which can be taken as a norm. Whenever the infimum is actually attained by a multinorm, we say it is \emph{extremal}. The existence of such an extremal multinorm is not guaranteed.
We now show that multinorms provide a characterization of the $\mathcal{L}_p$-gain of constrained switching systems. They play the role of \emph{multiple storage functions}.
\begin{theorem}
Consider a system $(\Theta, \mathbf{\Sigma})$ satisfying Assumptions \ref{ass:minimal} and \ref{ass:stability}. Its $\mathcal{L}_p$-gain satisfies
\begin{align}
\gamma_p(\Theta, & \mathbf{\Sigma}) = \min \gamma: \label{eq:optigain} \\
& \exists \{(|\cdot|_v)_{ v \in V}\}\ s.t.\ \forall x \in \mathbb{R}^n, \forall w, \forall (u,v,\sigma) \in E, \nonumber \\
&
\left (
\begin{aligned}
|A_\sigma x + B_\sigma w|_v^p + \|C_\sigma x + D_\sigma w\|_p^p \\
\leq |x|_u^p + \gamma^p \|w\|_p^p.
\end{aligned} \label{eq:normineqgain}
\right )
\end{align}
\label{thm:multinorms}
\end{theorem}
Moreover, for internally stable and minimal systems there always exists an \emph{extremal multinorm} acting as a storage function, i.e., one that attains the value $\gamma = \gamma_p(\Theta, \mathbf{\Sigma})$ in (\ref{eq:optigain}).
\begin{theorem}
Consider a system $(\Theta(V,E), \mathbf{\Sigma})$, an integer $p \geq 1$, and $\gamma \geq \gamma_p(\Theta, \mathbf{\Sigma})$. At each node $v \in V$, define the function
\begin{align}
F_{v,\gamma,p}(x) &= \s{\pi \in \Theta, v_{\pi}(0) = v; \, \mathbf{w} \in \mathcal{L}_p }\|C_\pi x + D_\pi \mathbf{w}\|_p^p - \gamma^p \|\mathbf{w}\|_p^p, \nonumber\\
& = \sup_{\pi \in \Theta, v_{\pi}(0) = v; \mathbf{w} \in \mathcal{L}_p} \sum \|z_t\|^p_p - \gamma^p\sum \|w_t\|_p^p. \label{eq:Fv}
\end{align}
The set of functions $\{F_{v, \gamma, p}^{1/p}(x), \, v \in V\}$ is an extremal multinorm, i.e., for all $(u,v,\sigma) \in E$, $x \in \mathbb{R}^n$ and $w \in \mathbb{R}^d$,
\begin{equation}
\begin{aligned}
F_{v,\gamma,p}(A_\sigma x + B_\sigma w) + \|C_\sigma x + D_\sigma w\|_p^p \\
\leq F_{u,\gamma,p}(x) + (\gamma\|w\|_p)^p.
\end{aligned}
\label{eq:Fgamma}
\end{equation}
\label{thm:extremality}
\end{theorem}
\begin{shortVersion}
\subsubsection*{Proofs of main results}
Consider a system $(\Theta, \mathbf{\Sigma})$. We need to show that if $(\ref{eq:normineqgain})$ holds, then $\gamma \geq \gamma_p(\Theta, \mathbf{\Sigma})$. Then, proving Theorem \ref{thm:extremality} proves Theorem \ref{thm:multinorms}. We know $\gamma_p(\Theta, \mathbf{\Sigma})$ is bounded as a consequence of Assumption \ref{ass:stability}.\\
The first element is proven by simple algebra. Assume (\ref{eq:normineqgain}) holds, take $x_0 = 0$, an infinite path $\pi \in \Theta$, and a sequence $\mathbf{w} \in \mathcal{L}_p$. Unfolding the inequality, we obtain for all $T \geq 1$,
$$
\begin{aligned}
|x_0|_{v_{\pi}(0)}^p & + \gamma^p (\|w_0\|_p^p + \sum_{t = 1}^\infty\|w_t\|_p^p) \\
& \geq |x_1|_{v_{\pi}(1)}^p + \|z_0\|_{p}^p + \gamma^p(\sum_{t = 1}^\infty\|w_t\|_p^p) \geq \cdots \\
& \geq |x_T|_{v_{\pi}(T)}^p \hspace{-2pt} + \hspace{-2pt} \sum_{t = 0}^{T-1} \|z_t\|_p^p + \gamma^p \sum_{t = T}^\infty \|w_{t}\|_p^p \geq \sum_{t = 0}^{T-1} \|z_t\|_p^p.\\
\end{aligned}$$
Therefore, for all $T \geq 1$,
$ (\sum_{t = 0}^{T-1} \|z_t\|_p^p)/(\sum_{t = 0}^\infty \|w_t\|_p^p) \leq \gamma^p,$
and this being independent of $\pi$ and $\mathbf{w} \in \mathcal{L}_p$, we get $\gamma_p(\Theta, \mathbf{\Sigma}) \leq \gamma$.
We now move onto the proof of Theorem \ref{thm:extremality}. Proving (\ref{eq:Fgamma}) is easy, so we focus on proving that for $\gamma \geq \gamma_p(\Theta, \mathbf{\Sigma})$,
the functions $F_{v,\gamma,p}^{1/p}$ are norms. We omit the subscripts $\gamma$ and $p$ in what follows. The functions have the following properties:
\begin{enumerate}
\item $\mathbf{F_{v}(0) = 0}.$ Observe that $F_{v}(0) \geq 0$, taking a disturbance $\mathbf{w} = 0 \in \mathcal{L}_p$.
Assume by contradiction that there is $K > 0$ such that $F_{v}(0) \geq K$. There must then be a path $\pi$ and a sequence $\mathbf{w} \in \mathcal{L}_p$
such that $$ \sum_{t = 0}^\infty \|z_t\|_p^p \geq K/2 + \gamma^p \sum_{t = 0}^\infty \|w_t\|_p^p > \gamma^p \sum_{t = 0}^\infty \|w_t\|_p^p. $$
Thus, $\gamma < \gamma_p(\Theta, \mathbf{\Sigma})$ (see (\ref{eq:gain})), a contradiction.
\item $\mathbf{F_{v}}$ \textbf{is positive definite}. By Assumption \ref{ass:minimal}, for any node $v \in V$ and any $x \in \mathbb{R}^n$, $x \neq 0$, there is a path $\pi \in \Theta$, $v_{\pi}(0) = v$, such that
$C_\pi x \neq 0$. Thus, $F_{v}(x) > 0$ is guaranteed by taking $\mathbf{w} = 0 \in \mathcal{L}_p$.
\item $\mathbf{F_{v}(x)}$ \textbf{is convex and positively homogeneous of order} $\mathbf{p}$.
Given a path $\pi \in \Theta$ of finite length, define the output sequence
$$\mathbf{z}_\pi(x_0, \mathbf{w}) = \begin{pmatrix} z_0 \\ \vdots \\ z_{|\pi|-1} \end{pmatrix} = C_\pi x_0 + D_\pi \begin{pmatrix} w_0 \\ \vdots \\ w_{|\pi| - 1}\end{pmatrix}.$$
Clearly, $\mathbf{z}_\pi(\alpha x_0, \alpha \mathbf{w}) = \alpha \mathbf{z}_\pi(x_0, \mathbf{w})$ for any $\alpha \in \mathbb{R}$.
Therefore, for any $\alpha \in \mathbb{R}$, $\alpha \neq 0$, we have
$$
\begin{aligned}
F_{v}( \alpha x) &= \s{\pi, \mathbf{w} \in \mathcal{L}_p} \sum_t \|z_t(\alpha x, \mathbf{w})\|_p^p - \gamma^p \|w_t\|_p^p \\
& = \s{\pi, \alpha \mathbf{w'} \in \mathcal{L}_p} \sum_t |\alpha|^p \left ( \|z_t(x, \mathbf{w'})\|_p^p - \gamma^p \|w'_t\|_p^p \right ) = |\alpha|^p F_{v}(x).
\end{aligned}
$$
Since $\|\mathbf{z}_\pi(x_0, \mathbf{w})\|_p^p = \sum_t \|z_t(x_0, \mathbf{w})\|_p^p$ is convex in $x_0$, and a supremum of convex functions is convex, the convexity of $F_v$ follows.
\item $\mathbf{F_v(x)< +\infty, \, \forall x \in \mathbb{R}^n}$.
Take $x \in \text{Im}(B_\pi)$ for some $\pi \in \Theta$ with $|\pi| < + \infty$, $v_{\pi}(|\pi|) = v$. Let $u = v_{\pi}(0)$, and let $\mathbf{w}$ be such that $ x = B_\pi \begin{pmatrix} w_0^\top, \ldots, w_{|\pi|-1}^\top \end{pmatrix} ^\top$. We have that
$$
\begin{aligned}
F_{u}(0) = 0 \geq \sum_{t = 0}^{|\pi|-1}( \|z_t\|_p^p - \gamma^p \|w_t\|_p^p) + F_{v}(x),
\end{aligned}
$$
and thus $F_{v}(x)$ is finite.
Now, take any $x \in \mathbb{R}^n$.
By Assumption $\ref{ass:minimal}$, we know that $x$ is a linear combination of points in the reachable sets $\text{Im}(B_\pi)$, $v_\pi(|\pi|) = v.$ Thus, by convexity, $F_v(x) < + \infty$.
\end{enumerate}
\end{shortVersion}
We discuss the generality of the Assumptions \ref{ass:minimal} and \ref{ass:stability} in the next subsections.
\subsection{Internal stability or undecidability}
\label{subsec:Stability}
Assumption \ref{ass:stability} is a sufficient condition for $\gamma_{p}(\Theta, \mathbf{\Sigma}) < + \infty$, which is key in Theorems \ref{thm:multinorms} and \ref{thm:extremality}. Relaxing the assumption leads to decidability issues:
\begin{proposition}
Given a switching system $(\Theta, \mathbf{\Sigma})$ and $p \geq 1$, the question of whether or not $\gamma_p(\Theta, \mathbf{\Sigma}) < + \infty$ is \emph{undecidable}.
\end{proposition}
\begin{shortVersion}
\begin{proof}
Consider an arbitrary switching system on two modes, with $\mathbf{A} = \{A_1, A_2\}$ and $\|A_i\| > 1$ for $i \in \{1,2\}$. The undecidability result presented in \cite{BlTsTBOA} states that the question of the existence of a uniform bound $K > 0$ such that, for all $T$ and all $\{\sigma(0), \ldots, \sigma(T-1)\} \in \{1,2\}^T$, the products of matrices satisfy $\|A_{\sigma(T-1)} \cdots A_{\sigma(0)}\| \leq K$, is undecidable.
We show that as a consequence, one cannot in general decide the boundedness of the gain of a (constrained) switching system. Given such a pair of matrices, we construct a switching system of the form (\ref{eq:switching}) on 3 modes with the parameters
$$\mathbf{\Sigma} = \{\{A_1,0,0,0\}, \{A_2,0,0,0\}, \{0,\mathbf{I},\mathbf{I},0\}\}, $$
with input and output dimension $d = m = n$.
Then, the $\mathcal{L}_p$-gain of this system is given by
$$\gamma_p(\Theta, \mathbf{\Sigma}) = \hspace{-4pt} \s{\substack{w_0, T, \\ \sigma(0), \ldots, \sigma(T-1) \in \{1,2\}^T}} \hspace{-4pt} \frac{\|A_{\sigma(T-1)} \cdots A_{\sigma(0)}w_0\|_p}{\|w_0\|_p}. $$
Thus, the gain of the system $(\Theta, \mathbf{\Sigma})$ is bounded if and only if there is a uniform bound on the norm of all products of matrices from the pair $\mathbf{A} = \{A_1, A_2\}$.
\end{proof}
\end{shortVersion}
\subsection{Obtaining minimal realizations.}
\label{subsec:Minimize}
We now turn our attention to Assumption \ref{ass:minimal}. \\
Section \ref{sec:Example} presents a practically motivated system for which a very natural model turns out to be non-minimal. We need to provide a minimal realization for the system. Algorithms exist in the literature for computing minimal realizations for arbitrary switching systems (see \cite{PeBaRTOD}). We use an approach similar to that of \cite{PeBaRTOD}, for constrained switching systems. This produces a system with the same input-output relations, but that is \emph{rectangular}, with a dimension of the state space varying in time.
\begin{definition}
A \emph{rectangular} constrained switching system is a tuple $(\Theta(V,E), \mathbf{\Sigma}, \mathbf{n})$. The graph $\Theta$ is strongly connected with nodes $V$ and labeled, directed edges $E$. Edges are of the form $(v_1, v_2, \sigma) \in E$, where $\sigma \in \{1, \ldots, |E|\}$ are \emph{unique} labels assigned to each edge. The \emph{dimension vector} $\mathbf{n} = (n_v)_{v \in V}$ assigns a \emph{dimension} to each node $v \in V$. The set $\mathbf{\Sigma} = \{\{A_\sigma, B_\sigma, C_\sigma, D_\sigma\}_{\sigma \in 1, \ldots, |E|}\}$ satisfies, for $(u, v, \sigma) \in E$,
$A_\sigma \in \mathbb{R}^{n_{v} \times n_{u}}, \, B_{\sigma} \in \mathbb{R}^{n_{v} \times d}, \, C_{\sigma} \in \mathbb{R}^{m \times n_{u}}, \, D_\sigma \in \mathbb{R}^{m \times d}. $
$\,$
\end{definition}
\begin{remark}
The systems considered so far can be cast as rectangular systems, with $\mathbf{n} = \{n_v = n, v \in V\}$. The one-to-one correspondence between modes and edges is easily obtained through relabeling.
\label{rem:torect}
\end{remark}
\begin{remark}
Given any pair $(v_1, v_2, \sigma_1), (v_2,v_3,\sigma_2) \in E$, the products $A_{\sigma_2} A_{\sigma_1}$, $C_{\sigma_2}A_{\sigma_1}$, etc., are well defined, since the dimensions are compatible.
Thus, for a rectangular system $(\Theta, \mathbf{\Sigma}, \mathbf{n})$ and a path $\pi \in \Theta$, we may still compute the matrices
$A_\pi, \, B_\pi, \, C_\pi$ and $D_\pi$ (see (\ref{eq:Amap}), (\ref{eq:Bmap}), (\ref{eq:Cmap}), (\ref{eq:Dmap})). The definitions for the subspaces $\mathcal{B}_v \subseteq \mathbb{R}^{n_v}$ and $\mathcal{C}_v \subseteq \mathbb{R}^{n_v} $ (see Assumption \ref{ass:minimal}) for rectangular systems follow immediately.
\end{remark}
\begin{definition}
A rectangular system $(\Theta(V,E), \mathbf{\Sigma}, \mathbf{n})$, $\mathbf{n}= (n_v)_{v \in V}$, is said to be \emph{minimal} if, for all $v \in V$,
$\mathcal{B}_v = \mathbb{R}^{n_v},$ and $\mathcal{C}_v = \{0\} \subset \mathbb{R}^{n_v}. $
\end{definition}
\begin{remark}
Rectangular switching systems are only special cases of more general hybrid systems. The results we present here regarding the minimization procedure, and the mechanisms behind them, can be deduced from those presented in \cite{PeScRTOD,PeBaRTOD}.
\end{remark}
In order to compute a minimal realization for a system $(\Theta, \mathbf{\Sigma}, \mathbf{n})$ we first compute the subspaces $\mathcal{B}_v$ and $\mathcal{C}_v$.
\begin{proposition}[Construction of the subspaces $\mathcal{C}_v$]
Given a rectangular system $(\Theta, \mathbf{\Sigma}, \mathbf{n})$, define, for $v \in V$,
$$ X_{v,1} = \sum_{(v,u,\sigma) \in E} C_\sigma^\top C_\sigma; \, \mathcal{C}_{v,1} = \text{Ker}(X_{v,1}). $$
For $k = 1,2, \ldots$, consider the following iteration
$$ X_{v,k+1} = \sum_{(v,u,\sigma) \in E} A_\sigma^\top X_{u,k} A_\sigma; \, \mathcal{C}_{v,k} = \text{Ker}( \sum_{t = 1}^k X_{v,t}). $$
The sequence $\mathcal{C}_{v,k}$ converges and $\mathcal{C}_{v} = \mathcal{C}_{v,K}$, $K = \sum n_v$.
\label{prop:algoForC}
\end{proposition}
\begin{shortVersion}
\begin{proof}
We construct $X_{v,k}$ in such a way that $x \in \mathcal{C}_{v,k}$ if and only if, along any path initiated at $v$ of length at most $k$, $x$ generates only the zero output.
As for convergence, if at some iteration $\mathcal{C}_{v,k+1} = \mathcal{C}_{v,k}$ for all $v \in V$, then the sequence is stationary; since the dimensions of the subspaces are non-increasing, this must happen after at most $K = \sum n_v$ steps.
\end{proof}
\end{shortVersion}
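The iteration of Proposition \ref{prop:algoForC} is straightforward to transcribe numerically. The Python sketch below is our own illustration, written for the square case $n_v = n$ at every node; the kernel is extracted via the SVD with a tolerance of our choosing.

```python
import numpy as np

def kernel(M, tol=1e-9):
    """Orthonormal basis of Ker(M), computed via the SVD."""
    _, sv, Vt = np.linalg.svd(M)
    rank = int(np.sum(sv > tol * max(1.0, sv[0] if sv.size else 1.0)))
    return Vt[rank:].T

def unobservable_subspaces(edges, Sigma, n):
    """Iteration of the proposition above, sketched for the square case
    n_v = n.  edges: list of (u, v, sigma) triples; Sigma[s] = (A,B,C,D).
    Returns, per node, an orthonormal basis of C_v (0 columns if C_v = {0})."""
    nodes = sorted({u for u, _, _ in edges} | {v for _, v, _ in edges})
    # X_{v,1} = sum of C_sigma^T C_sigma over the edges leaving v
    X = {v: np.zeros((n, n)) for v in nodes}
    for u, _, s in edges:
        C = Sigma[s][2]
        X[u] += C.T @ C
    S = {v: np.zeros((n, n)) for v in nodes}      # running sum of the X_{v,k}
    for _ in range(n * len(nodes)):               # K = sum_v n_v iterations
        for v in nodes:
            S[v] += X[v]
        Xnew = {v: np.zeros((n, n)) for v in nodes}
        for u, v, s in edges:                     # X_{u,k+1} = sum A^T X_{v,k} A
            A = Sigma[s][0]
            Xnew[u] += A.T @ X[v] @ A
        X = Xnew
    return {v: kernel(S[v]) for v in nodes}
```

Since the $X_{v,k}$ are positive semidefinite, the kernel of the running sum is exactly the intersection of their kernels, i.e., $\mathcal{C}_{v,K}$.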
\begin{remark}
We may use the procedure above to construct $\mathcal{B}_v$. Indeed,
we can write $$\mathcal{B}_v^\perp = \bigcap_{\pi \in \Theta; \, v_\pi(|\pi|) = v} \text{Ker}(B_\pi^\top). $$
Thus, given $(\Theta(V,E), \mathbf{\Sigma}, \mathbf{n})$, it suffices to apply Proposition \ref{prop:algoForC} to the \emph{dual system} $(\Theta(V,\bar{E}), \bar{\mathbf{\Sigma}}, \mathbf{n})$, defined with
$\bar{E} = \{(v,u,\sigma) : (u,v,\sigma) \in E \}$, and
$$\bar{\mathbf{\Sigma}} = \{(A_\sigma^\top, C_\sigma^\top, B_\sigma^\top, D_\sigma^\top): \,(A_\sigma, B_\sigma, C_\sigma, D_\sigma) \in \mathbf{\Sigma} \},$$ to compute $\mathcal{B}_v^\perp $.
\end{remark}
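The dual construction of the remark above is purely mechanical; below is a minimal sketch, where the encoding of $\mathbf{\Sigma}$ as a dictionary of $(A,B,C,D)$ tuples is an assumption of ours.

```python
def dual_system(edges, Sigma):
    # edges: triples (v, u, sigma); Sigma: sigma -> (A, B, C, D), matrices as
    # nested lists. The dual reverses every edge and maps each mode's
    # (A, B, C, D) to (A^T, C^T, B^T, D^T), as in the remark.
    T = lambda M: [list(col) for col in zip(*M)]
    dual_edges = [(u, v, s) for (v, u, s) in edges]
    dual_Sigma = {s: (T(A), T(C), T(B), T(D))
                  for s, (A, B, C, D) in Sigma.items()}
    return dual_edges, dual_Sigma

E_ex = [('a', 'b', 's')]
S_ex = {'s': ([[1, 2], [3, 4]], [[5], [6]], [[7, 8]], [[9]])}
dE, dS = dual_system(E_ex, S_ex)
```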
\begin{proposition}
Given a rectangular system $(\Theta, \mathbf{\Sigma},\mathbf{n})$, let $n_v$ be the dimension of the node $v \in V$ and $m_v \leq n_v$ be the dimension of $\mathcal{B}_v$.
For $v \in V$, let $L_v \in \mathbb{R}^{n_v \times m_v}$ be an orthogonal basis of $\mathcal{B}_v$. The rectangular system $(\Theta(V,E), \bar{\mathbf{\Sigma}}, \bar{\mathbf{n}})$ on the set
$$\bar{\mathbf{\Sigma}} = \left \{
(
\begin{aligned}
L_v^\top A_\sigma L_u , \,
L_v^\top B_\sigma, \,
C_\sigma L_u , \,
D_\sigma ,
\end{aligned}
)_{(u,v,\sigma) \in E} \right \} ,$$
has $\mathcal{B}_v = \mathbb{R}^{m_v}$, $\bar{\mathbf{n}} = \{m_v, v\in V\}$, and $\gamma_p(\Theta,\mathbf{\Sigma}, \mathbf{n}) = \gamma_p(\Theta, \bar{\mathbf{\Sigma}}, \bar{\mathbf{n}})$.
\label{prop:getBvright}
\end{proposition}
\begin{shortVersion}
\begin{proof}
Observe that the subspaces $\mathcal{B}_v$ are invariant in the sense that for any $(u,v,\sigma) \in E$,
$ \mathcal{B}_v \supseteq A_\sigma \mathcal{B}_u + \text{Im}(B_\sigma).$
Thus, if $x_0 = 0$, we know that after $t$ steps, if the next edge is $(u,v,\sigma)$,
$$
\begin{aligned}
L_v^\top L_vx_{t+1} &= A_\sigma L_u^\top L_u x_t + B_\sigma w_t,\\
z_t &= C_\sigma L_u^\top L_u x_t + D_\sigma w_t.
\end{aligned}
$$
The system on the parameters $\bar{\mathbf{\Sigma}}$ is then obtained by multiplying the first equation on the left by $L_v$ (since $L_vL_v^\top L_v = L_v$) and performing the change of variables $y_t = L_{v_{\pi}(t)}^\top x_t$.
\end{proof}
\end{shortVersion}
\begin{proposition}
Given a rectangular system $(\Theta, \mathbf{\Sigma},\mathbf{n})$, let $n_v$ be the dimension of the node $v \in V$ and $m_v \leq n_v$ be the dimension of $\mathcal{C}_v^\perp$.
Let $K_v \in \mathbb{R}^{n_v \times m_v}$ be an orthogonal basis of $\mathcal{C}_v^\perp$. The rectangular system $(\Theta(V,E), \bar{\mathbf{\Sigma}}, \bar{\mathbf{n}})$ on the set
$$\bar{\mathbf{\Sigma}} = \left \{
(
\begin{aligned}
K_v^\top A_\sigma K_u , \,
K_v^\top B_\sigma, \,
C_\sigma K_u , \,
D_\sigma ,
\end{aligned}
)_{(u,v,\sigma) \in E} \right \} ,$$
has $\mathcal{C}_v^\perp = \mathbb{R}^{m_v}$, $\bar{\mathbf{n}} = \{m_v, v \in V\}$, and $\gamma_p(\Theta,\mathbf{\Sigma}, \mathbf{n}) = \gamma_p(\Theta, \bar{\mathbf{\Sigma}}, \bar{\mathbf{n}})$.
\label{prop:getCvright}
\end{proposition}
\begin{shortVersion}
\begin{proof}
The proof is similar to that of Proposition \ref{prop:getBvright}. Observe that for any $(u,v,\sigma) \in E$,
$ \mathcal{C}_v \supseteq A_\sigma \mathcal{C}_u.$
We then project the state onto $\mathcal{C}_v^\perp$ at every step. Doing so does not affect the gain, since by the invariance above the part of the state lying in the subspaces $\mathcal{C}_v$ cannot affect the output at any time after the projection.
\end{proof}
\end{shortVersion}
\begin{theorem}
Given a rectangular system $(\Theta, \mathbf{\Sigma}, \mathbf{n}= \{n_v, v \in V\})$, consider the following iteration procedure. Let $S_0 = (\Theta, \mathbf{\Sigma}, \mathbf{n})$, and for $k = 1,2,\ldots,$ let $S_k$ be the system obtained by applying first Proposition \ref{prop:getBvright} then Proposition \ref{prop:getCvright} to $S_{k-1}$.
Then $S_{N}$ is minimal for $N = \sum n_v$, and $\gamma_p(\Theta, \mathbf{\Sigma}, \mathbf{n}) = \gamma_p(\Theta, \bar{\mathbf{\Sigma}}, \bar{\mathbf{n}})$.
\label{thm:minimize}
\end{theorem}
\begin{shortVersion}
\begin{proof}
The fact that the gain is conserved along the iterations follows from Propositions \ref{prop:getBvright} and \ref{prop:getCvright}. The stated number of iterations is conservative, but easily justified: if the system is not minimal, then at least one node has its dimension strictly reduced after an iteration, and such a decrease can occur at most $\sum_{v} n_v$ times.
\end{proof}
\end{shortVersion}
As a conclusion, even when given a non-minimal system $(\Theta, \mathbf{\Sigma})$, we may always construct a minimal \emph{rectangular} system $(\Theta, \bar{\mathbf{\Sigma}}, \{(n_v)_{v\in V}\})$ with the same $\mathcal{L}_p$-gain.
Theorem \ref{thm:multinorms} is easily translated for rectangular systems, by defining a multinorm $\{(|{\cdot}|_v)_{v \in V}\}$ as a set of norms, where $|{\cdot}|_v$ is a norm on the space $\mathbb{R}^{n_v}$. The storage functions of Theorem \ref{thm:extremality} remain well defined, with $F_{v, \gamma,p}$ now defined on $\mathbb{R}^{n_v}$.
\begin{remark}
There might be some pathological cases where the dimension of some nodes reduces to $0$. This would correspond to a situation where, initially, $\mathcal{B}_v \subseteq \mathcal{C}_v$.
\end{remark}
\section{Storage functions for the $\mathcal{L}_2$-gain.}
\label{sec:L2}
We now aim at computing the extremal storage functions $F_{v, \gamma,2}(x)$ (see Theorem \ref{thm:extremality}) of a system for computing its $\mathcal{L}_2$-gain. We no longer consider rectangular systems here, but the generalization of the results is straightforward. The Assumptions \ref{ass:minimal} and \ref{ass:stability} still hold.
In approximating the function $F_{v, \gamma,2}(x)$, two questions arise. First, given a value of $\gamma$, how can we get a good approximation of $F_{v, \gamma,2}(x)$? Second, and more importantly, how can we obtain a good estimate of the $\mathcal{L}_2$-gain? \\
We propose a first method, based on dynamic programming and asymptotically tight under-approximations of the $\mathcal{L}_2$-gain, and a second method, based on the path-dependent Lyapunov function framework of \cite{EsLeCOLS,LeDuUSOD}, for obtaining over-approximations of the $\mathcal{L}_2$-gain.
Both are based on the following observation. Given a system $(\Theta(V,E), \mathbf{\Sigma})$, we can write
$$
F_{v, \gamma,p}(x) = \s{\pi \in \Theta, |\pi| = \infty, v_{\pi}(0) = v} F_{v, \gamma,p, \pi}(x),
$$
with
\begin{equation}
F_{v, \gamma,p, \pi}(x) = \s{\mathbf{w} \in \mathcal{L}_p }\|C_\pi x + D_\pi \mathbf{w}\|_p^p - \gamma^p \|\mathbf{w}\|_p^p, \label{eq:Fgpi}
\end{equation}
where the path $\pi$ is fixed. The definition above holds for any path of finite or infinite length. A first approximation of $F_{v, \gamma, p}$ is obtained by limiting the length of the paths to some $K \geq 1$:
$$\check{F}_{v, \gamma, p,K} = \max_{\pi \in \Theta, v_{\pi}(0)= v, |\pi| = K}F_{v, \gamma, p, \pi}.$$
We have the following for $p = 2$:
\begin{align}
& F_{v, \gamma,2, \pi} (x) = \label{eq:F2} \\
& x^\top \left( C_\pi^\top C_\pi - C_\pi^\top D_\pi (D_\pi^\top D_\pi - \gamma^2 {I}_d) ^{-1} D_\pi^\top C_\pi \right) x. \nonumber
\end{align}
If $|\pi| = K$, $F_{v, \gamma, 2, \pi}$ can be computed using dynamic programming. Indeed,
$$\begin{aligned}
F_{v, \gamma, 2, \pi}(x)& \\
& \hspace{-28pt} = \max_{\mathbf{w}} \|C_{\pi(1:K-1)}x \hspace{-1pt} + \hspace{-1pt} D_{\pi(1:K-1)} (w_0^\top, \ldots, w_{K-2}^\top)^\top \hspace{-1pt}\|^2_2 \\
& \hspace{-18pt} + \max_{w_{K-1}} \|C_{\sigma_{\pi}(K)} A_{\pi(1:K-1)} x + D_{\sigma_{\pi}(K)} w_{K-1}\|^2_2.
\end{aligned} $$
We can then solve for $w_{K-1}$ as a function of $y = A_{\pi(1:K-1)}x$, and then solve for $w_{K-2}, \ldots, w_0$ in succession.
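As an illustration of this backward recursion, the following sketch carries it out for a scalar system held constant along the path. This is a simplified, hypothetical setting (the paper's recursion runs over mode-dependent matrices), and it assumes $\gamma$ is large enough for each maximization over $w_t$ to be concave.

```python
def worst_case_quadratic(a, b, c, d, gamma, K):
    # Backward recursion for F_{v,gamma,2,pi}(x) = p * x**2 along a length-K
    # path of a scalar system (a, b, c, d). At each step we maximize over w:
    #   (c x + d w)^2 - gamma^2 w^2 + p (a x + b w)^2,
    # a concave quadratic in w whenever d^2 + p b^2 - gamma^2 < 0.
    p = 0.0
    for _ in range(K):
        denom = d * d + p * b * b - gamma ** 2
        assert denom < 0, "gamma too small: the supremum over w is +infinity"
        p = c * c + p * a * a - (c * d + p * a * b) ** 2 / denom
    return p
```

For instance, with $a = 0.5$, $b = c = 1$, $d = 0$ and $\gamma = 2$, one step gives $p = 1$ and two steps give $p = 4/3$.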
\begin{proposition}
Given a system $(\Theta(V,E), \mathbf{\Sigma})$, and an integer $K \geq 1$ let
$$\check{\gamma}_{K,p} = \s{\pi \in \Theta, |\pi| = K} \|D_\pi\|_p.$$
For any $\gamma > \check{\gamma}_{K,2}$, $K \geq n |V|$, $\check{F}_{v, \gamma , 2,K}^{1/2}$ is a norm.
\label{prop:truncatedNorms}
\end{proposition}
\begin{shortVersion}
\begin{proof}
The term $(D_\pi^\top D_\pi - \gamma^2 I)$ in (\ref{eq:F2}) is by definition negative definite if $\gamma > \check{\gamma}_{K,2}$.
Thus, $F_{v, \gamma, 2, \pi}$ is well defined. Then, by Assumption \ref{ass:minimal} and Proposition \ref{prop:algoForC}, for $K \geq n|V|$ and all $x \in \mathbb{R}^n$, there is $\pi$ such that $C_\pi x \neq 0$. We can then conclude that $\check{F}_{v, \gamma, 2, K}^{1/2}$ is a norm.
\end{proof}
\end{shortVersion}
\begin{proposition}
Given a system $(\Theta(V,E), \mathbf{\Sigma})$, for any $k \geq 1$, $\check{\gamma}_{k,p} \leq \gamma_p(\Theta, \mathbf{\Sigma})$, and $\lim_{k \rightarrow \infty} \check{\gamma}_{k,p} =\gamma_p(\Theta, \mathbf{\Sigma})$.
\label{prop:gainLowerbound}
\end{proposition}
The approach above allows us to obtain asymptotically tight lower bounds.
In order to get upper bounds, we can \emph{approximate} storage functions by quadratic norms using the Horizon-Dependent Lyapunov functions of \cite{EsLeCOLS}.
\begin{proposition}[\cite{EsLeCOLS}]
Consider a system $(\Theta, \mathbf{\Sigma})$. For any $K \geq 1$, consider the following program:
$$
\begin{aligned}
& \hat\gamma_K= \inf_{\gamma,\, X_\pi \in \mathbb{R}^{n \times n}, \, \pi \in \Theta, \, |\pi| = K } \gamma \qquad s.t.\\
& \forall \pi_1, \pi_2 \in \Theta, \, |\pi_1| \hspace{-1pt} = \hspace{-1pt} |\pi_2| \hspace{-1pt} = \hspace{-1pt} K, \, \pi_1(2:K) = \pi_2(1:K-1), \\
&
\begin{pmatrix}
A_{\sigma_{\pi_1}(0)} & B_{\sigma_{\pi_1}(0)} \\
C_{\sigma_{\pi_1}(0)} & D_{\sigma_{\pi_1}(0)}
\end{pmatrix}^\top
\begin{pmatrix}
X_{\pi_2} & 0 \\
0 & I
\end{pmatrix}
\begin{pmatrix}
A_{\sigma_{\pi_1}(0)} & B_{\sigma_{\pi_1}(0)} \\
C_{\sigma_{\pi_1}(0)} & D_{\sigma_{\pi_1}(0)}
\end{pmatrix}\\
& \qquad -
\begin{pmatrix}
X_{\pi_1} & 0 \\
0 & \gamma^2I
\end{pmatrix} \preceq 0,\\
&\forall \pi \in \Theta, \, |\pi| = K, \, X_\pi \succ 0.
\end{aligned}
$$
Then $\lim_{K \rightarrow \infty} \hat\gamma_K = \gamma_p(\Theta, \mathbf{\Sigma}),$ and $\hat \gamma_K \geq \gamma_p(\Theta, \mathbf{\Sigma}).$
\label{prop:gainUpperbound}
\end{proposition}
\begin{remark}
There is an interesting link between Horizon-Dependent Lyapunov functions and the approximation of the functions $F_{v,\gamma,p}$.
Consider the function
$$
\hat{F}_{v, \gamma,p, \pi}(x) = \s{\phi \in \Theta, \, \phi(1:K) = \pi(1:K)} F_{v, \gamma, p, \phi}(x),
$$
with $F_{v, \gamma, p, \phi}$ as in (\ref{eq:Fgpi}). Compared to $F_{v, \gamma, p}$, we now fix the first $K$ edges of the path considered in (\ref{eq:Fv}) to be those of a path $\pi$. It is easily seen that
$$
F_{v, \gamma,p}(x) = \max_{\pi \in \Theta, v_{\pi}(0) = v, |\pi| = K} \hat{F}_{v, \gamma,p, \pi}(x).
$$
By definition, if we take two paths $\pi_1$ and $\pi_2$ of length $K$ such that $\pi_1(2:K) = \pi_2(1:K-1)$, we can then verify that
$$ \begin{aligned}
\hat{F}_{v, \gamma,p, \pi_1}(x) + \gamma^p \|w\|^p_p & \geq \hat{F}_{v, \gamma,p, \pi_2}(A_{\sigma_{\pi_1}(0)}x + B_{\sigma_{\pi_1}(0)}w) \\
&+ \|C_{\sigma_{\pi_1}(0)}x + D_{\sigma_{\pi_1}(0)}w\|^p_p.
\end{aligned}
$$
For $p = 2$, assuming the functions $\hat{F}$ to be quadratic, the inequalities above are equivalent to the LMIs of Proposition \ref{prop:gainUpperbound}.
\end{remark}
Together, Propositions \ref{prop:gainLowerbound} and \ref{prop:gainUpperbound} enable us to approximate the $\mathcal{L}_2$-gain of a system, and its storage functions, in an arbitrarily accurate manner.
\subsection{Converse results for the existence of quadratic storage functions.}
\label{Subsection:Converse}
Quadratic (multiple) Lyapunov functions have received a lot of attention in the past for the stability analysis of switching systems (see e.g. \cite{LiMoBPIS,JuTJSR, BlFeSAOD, PhEsSODT}). Checking for their existence is computationally easy as it boils down to solving LMIs. They are however conservative certificates of stability, and thus it is interesting to seek ways to quantify \emph{how conservative} these methods are. In the following, we extend existing results on the conservatism of quadratic Lyapunov functions (see \cite{AnShSC}, \cite{BlNeOTAO}, \cite{PhEsSODT}) to the performance analysis case. We give a converse theorem for the existence of quadratic storage functions, along with a conjecture related to Horizon-Dependent Storage functions (Proposition \ref{prop:gainUpperbound}).
\begin{theorem}
Given a constrained switching system $(\Theta(V,E), \mathbf{\Sigma})$, consider the system $(\Theta, \mathbf{\Sigma}')$ with
$$\mathbf{\Sigma}' = \{ (\sqrt{n}A_\sigma, \, \sqrt{n}B_\sigma, \, C_\sigma, D_\sigma): \,(A_\sigma, \, B_\sigma, \, C_\sigma, D_\sigma) \in \mathbf{\Sigma}\}.$$
If $\gamma_{2}(\Theta, \mathbf{\Sigma}') \leq 1$, then $\gamma_2(\Theta, \mathbf{\Sigma}) \leq 1$ and there is a set of \emph{quadratic} norms $\{(|\cdot|_{Q,v})_{v\in V}\}$ such that for all
$x \in \mathbb{R}^n$, $w \in \mathbb{R}^d$ and $(u,v,\sigma) \in E$,
$$|A_\sigma x + B_\sigma w|_{Q,v}^{2} + \|C_\sigma x + D_\sigma w\|_2^2 \leq |x|_{Q,u}^{2} + \|w\|_2^{2}.$$
\end{theorem}
\begin{shortVersion}
\begin{proof}
The result relies on Theorem \ref{thm:multinorms}, more precisely on the fact that the functions $F_{v, \gamma, 2}$ are norms. The proof is similar to that of \cite{PhEsSODT}, Theorem 3.1: we can use John's ellipsoid theorem (see e.g. \cite{BlNeOTAO}) to approximate the norms of a storage function for $(\Theta,\mathbf{\Sigma}')$ with quadratic norms. These quadratic norms then provide a storage function for $(\Theta, \mathbf{\Sigma})$.
\end{proof}
\end{shortVersion}
We conjecture that the following extension, which follows from the stability analysis case (see e.g. \cite{PhEsSODT}, Theorem 3.5), holds true for the performance analysis case:
\begin{conjecture}
Given a constrained switching system $(\Theta(V,E), \mathbf{\Sigma})$, consider the system $(\Theta, \mathbf{\Sigma}')$ with
$$\mathbf{\Sigma}' = \{ (n^{\frac{1}{2d}}A_\sigma, \, n^{\frac{1}{2d}}B_\sigma, \, C_\sigma, D_\sigma): \,(A_\sigma, \, B_\sigma, \, C_\sigma, D_\sigma) \in \mathbf{\Sigma}\}.$$
If $\gamma_{2}(\Theta, \mathbf{\Sigma}') \leq 1$, then $(\Theta, \mathbf{\Sigma})$ has a Horizon-Dependent storage function (see Proposition \ref{prop:gainUpperbound}) for $K = d+1$.
\end{conjecture}
\section{Example.}
\label{sec:Example}
We are given a stabilized LTI system\footnote{We use the simple model of an inverted pendulum with mass $2$\,kg. The system is linearized around the ``up'' position, and discretized at $100$\,Hz. The control gains are computed through LQR, with cost 1 on the norm of the output, and 10 on the norm of the input. Computations were done in Matlab; codes available at \url{http://sites.uclouvain.be/scsse/gainsAndStorage.zip}},
$$x_{t+1} = A x_t + B u_t, \, z_t = x_t, \, u_t = K x_t. $$
We let $x \in \mathbb{R}^2$, $z \in \mathbb{R}^2$, $w \in \mathbb{R}^1$.
We assume that there might be \emph{delays} in the control updates. At any time, either $u_t = Kx_t$, or $u_t = u_{t-1}$, and we assume that there cannot be more than two delays in a row. Moreover, when there is a delay, the system undergoes disturbances from the actuator, i.e. of the form $Bw_t$.
The situation can be modeled as a constrained switching system $(\Theta, \mathbf{\Sigma})$ on two modes (with $\sigma(t) = 1$ if the control input is updated or $\sigma(t) = 2$ else) with the graph $\Theta$ of Figure \ref{fig:graphExample} and
$$\mathbf{\Sigma} = \left \{
\begin{aligned}
&\left ( \begin{pmatrix} A+BK & 0 \\ I & 0 \end{pmatrix}, \, \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \,
\begin{pmatrix} I & 0 \end{pmatrix}, \, \begin{pmatrix} 0 \end{pmatrix} \right )\\
&\left ( \begin{pmatrix} A & BK \\ 0 & I \end{pmatrix}, \, \begin{pmatrix} B \\ 0 \end{pmatrix}, \,
\begin{pmatrix} I & 0 \end{pmatrix}, \, \begin{pmatrix} 0 \end{pmatrix} \right )
\end{aligned}
\right \}.$$
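The two lifted modes above can be assembled mechanically from the plant data $(A,B)$ and the gain $K$. The sketch below uses small hypothetical matrices in place of the pendulum model of the footnote, whose numerical values are not reproduced here; for both modes, $C = (I \;\; 0)$ and $D = 0$.

```python
def blocks_for_modes(A, B, K):
    # Builds the A and B blocks of the two lifted modes of Sigma:
    # mode 1 (input updated, no disturbance): [[A+BK, 0], [I, 0]], B1 = 0;
    # mode 2 (input held, disturbance via B): [[A, BK], [0, I]], B2 = [B; 0].
    n, m = len(A), len(B[0])
    def matmul(X, Y):
        return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]
    BK = matmul(B, K)                                         # n x n
    Id = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    Z = [[0.0] * n for _ in range(n)]
    A1 = [[A[i][j] + BK[i][j] for j in range(n)] + Z[i] for i in range(n)] \
         + [Id[i] + Z[i] for i in range(n)]
    B1 = [[0.0] * m for _ in range(2 * n)]
    A2 = [A[i] + BK[i] for i in range(n)] + [Z[i] + Id[i] for i in range(n)]
    B2 = [B[i][:] for i in range(n)] + [[0.0] * m for _ in range(n)]
    return (A1, B1), (A2, B2)

# hypothetical plant and gain (NOT the pendulum values of the footnote)
A_p = [[1.0, 1.0], [0.0, 1.0]]
B_p = [[0.0], [1.0]]
K_g = [[-2.0, -1.0]]
(A1, B1), (A2, B2) = blocks_for_modes(A_p, B_p, K_g)
```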
\begin{figure}
\caption{Graph for the example. Here, we cannot have more than 2 failures (that is, mode $\sigma(t) = 2$) in a row. Nodes have been labeled $a$,$b$, and $c$ for further discussions.}
\label{fig:graphExample}
\end{figure}
The system here is actually \emph{not minimal}. Indeed, take the third node $c$. Any vector of the form $ x =\begin{pmatrix}0 & 0 & \alpha & \beta \end{pmatrix} ^\top$ satisfies $C_\pi x = 0 $, for any $\pi$ such that $v_\pi(0) = c$. Thus, $\dim(\mathcal{C}_c) = 2$. From there, we can see that at nodes $a$ and $b$, $\mathcal{C}_a$ and $\mathcal{C}_b$ are formed of any vector of the form $ x =\begin{pmatrix}0 & 0 & \alpha & \beta \end{pmatrix}^\top$, where $BK\begin{pmatrix} \alpha & \beta \end{pmatrix}^\top = 0$. Thus, $\dim(\mathcal{C}_a) = \dim(\mathcal{C}_b) = 1$. Applying Theorem \ref{thm:minimize} to the system $(\Theta, \mathbf{\Sigma})$ (after applying Remark \ref{rem:torect}), we obtain a minimal rectangular system with nodal dimensions $(2,3,2)$. We can then approximate the storage functions $F_{v,\gamma,2}^{1/2}, v \in \{a,b,c\}$.
Applying Proposition \ref{prop:truncatedNorms} with paths of length 10, we obtain a lower bound on the $\mathcal{L}_2$-gain:
$$ \tilde{\gamma}= \sup_{\pi \in \Theta, |\pi| = 10} \|D_\pi\|_2 \simeq 0.0188. $$
We use this bound to compute the approximations $\check{F}_{v,\tilde{\gamma},2}^{1/2}$, $v \in \{a,b,c\}$, of the storage functions $F_{v,\gamma,2}^{1/2}$. The level sets of these functions are displayed in Figure \ref{fig:norms}.
\begin{figure}
\caption{Level sets of $\check{F}$.}
\label{fig:storageA}
\caption{Level sets of $\check{F}$.}
\label{fig:storageB}
\caption{Level sets of the storage function approximations at each node. Node $b$ has a state dimension of 3.}
\label{fig:norms}
\end{figure}
\section{Conclusion.}
We provide a general characterization of the $\mathcal{L}_p$-gain of discrete-time linear switching systems under the form of switching storage functions. Under the assumptions of internal stability and minimality, the pieces of these functions are $p$th powers of norms. The generality of these assumptions is discussed, and we provide means to compute minimal realizations of constrained switching systems. We then turn our focus to the $\mathcal{L}_2$-gain, and provide algorithms for obtaining asymptotically tight lower and upper bounds on the gain based on the approximation of storage functions. Finally, we provide a converse result for the existence of quadratic storage functions exploiting the nature of storage functions, and formulate a conjecture about Horizon-Dependent storage functions. We believe an answer to the conjecture (positive or negative) will allow for a better understanding of the geometry underlying Lyapunov methods for performance analysis.
\end{document}
\begin{document}
\title{State Complexity of Two Combined Operations: Reversal-Catenation and Star-Catenation}
\begin{abstract}
In this paper, we show that, due to the structural properties of the resulting automaton obtained from a prior operation, the state complexity of a combined operation may not be equal, but may be close, to the mathematical composition of the state complexities of its component operations.
In particular, we provide two witness combined operations: reversal combined with catenation and star combined with catenation.
\end{abstract}
\section{Introduction}
State complexity is a type of descriptional complexity based on the {\it
deterministic finite automaton} (DFA) model. The state complexity of
an operation on regular languages is the number of states that are
necessary and sufficient in the worst case for the minimal, complete
DFA that accepts the resulting language of the operation. While many
results on the state complexities of individual operations, such as
union, intersection, catenation, star, reversal, shuffle, orthogonal
catenation, proportional removal, and cyclic
shift~\cite{CaSaYu02,DaDoSa08,Domaratzki02,HoKu02,JiJiSz05,JiOk05,Jriaskova05,SaWoYu04,YuZhSa94,Yu01},
have been obtained in the past 15 years, the research of state
complexities of combined operations, which was initiated by A.
Salomaa, K. Salomaa, and S. Yu in 2007~\cite{SaSaYu07}, is
attracting more attention. This is because, in practice, a
combination of several individual operations, rather than only one
individual operation, is often performed in a certain order. For
example, in order to obtain a precise regular expression, a
combination of basic operations is usually required.
In recent
publications~\cite{CGKY10-cat-sr,CGKY10-cat-ui,EGLY2009,GaSaYu08,GaYu09,GaYu10,JiOk07,LiMaSaYu08,SaSaYu07},
it has been shown that the state complexity of a combined operation
is not always a simple mathematical composition of the state
complexities of its component operations.
This is sometimes due to the structural properties of the DFA accepting the resulting language obtained from a prior operation of a combined operation.
For example, the languages that are obtained from performing reversal and reach the upper bound of the state complexity of this operation are accepted by DFAs such that half of their states are final; and the initial state of the DFA accepting a language obtained after performing star is always a final state.
As a result, the resulting language obtained from a prior operation may not be among the worst cases of the subsequent operation.
Since such issues do not arise in the study of the state complexity of individual operations, they are of particular importance in the research on the state complexity of combined operations.
Although the number of combined operations is unlimited and it is impossible to study the state complexities of all of them, the study of combinations of two individual operations is clearly necessary.
In this paper, we study the state complexities of reversal combined with catenation, i.e., $L(A)^R L(B)$, and star combined with catenation, i.e., $L(A)^*L(B)$, for minimal complete DFAs $A$ and $B$ of sizes $m,n \ge 1$, respectively.
For $L(A)^R L(B)$, we will show that the general upper bound $\frac{3}{4}2^{m+n}$, which is close to the composition of the state complexities of reversal and catenation, $2^{m+n} - 2^{n-1}$, is reachable when $m,n \ge 2$, and that it can be lowered to $2^{n-1}$ when $ m = 1$ and $n \ge 1$, and to $2^{m-1}+1$ when $m \ge 2$ and $n = 1$.
For $L(A)^*L(B)$, we will show that, if $A$ has only one final state and it is also the initial state, i.e., $L(A) = L(A)^*$, the state complexity of catenation (also $L(A)^*L(B)$) is $m(2^n-1)-2^{n-1}+1$, which is lower than that of catenation $m2^n - 2^{n-1}$.
In the other cases, that is when $A$ contains some final states that are not the initial state, the state complexity of $L(A)^*L(B)$ is $5 \cdot 2^{m+n-3} - 2^{m-1} - 2^n +1$ instead of $\frac{3}{4}2^{m+n} - 2^{n-1}$, the composition of the state complexities of star and catenation.
In the next section, we introduce the basic definitions and notations used in the paper.
Then, we prove our results on reversal combined with catenation and star combined with catenation in Sections~\ref{sec:rev-cat} and~\ref{sec:star-cat}, respectively.
We conclude the paper in Section~\ref{sec:conclusion}.
\section{Preliminaries}
A DFA is denoted by a 5-tuple $A = (Q,
\Sigma, \delta, s, F)$, where $Q$ is the finite set of states, $\Sigma$ is the finite input alphabet, $\delta: Q \times \Sigma \rightarrow Q$ is the state transition function, $s \in Q$ is the initial state, and $F \subseteq Q$ is the set of final states.
A DFA is said to be complete if $\delta(q,a)$ is defined for all $q \in Q$ and $a \in \Sigma$.
All the DFAs we mention in this paper are assumed to be complete.
We extend $\delta$ to $Q \times \Sigma^* \rightarrow Q$ in the usual way.
A {\it non-deterministic finite automaton} (NFA) is denoted by a 5-tuple $A = (Q,
\Sigma, \delta, s, F)$, where the definitions of $Q$, $\Sigma$, $s$, and $F$ are the same as those for DFAs, but the state transition function $\delta$ is defined as $\delta: Q \times \Sigma \to 2^Q$, where $2^Q$ denotes the power set of $Q$, i.e., the set of all subsets of $Q$.
In this paper, the state transition function $\delta$ is often extended to $\hat{\delta} : 2^Q \times \Sigma \rightarrow 2^Q$. The function $\hat{\delta}$ is defined by $\hat{\delta}(R,a) = \{\delta(r,a) \mid r \in R\}$, for $R \subseteq Q$ and $a \in \Sigma$.
We just write $\delta$ instead of $\hat{\delta}$ if there is no confusion.
A word $w \in \Sigma^*$ is accepted by a finite automaton if $\delta(s,w) \cap F
\neq \emptyset$.
Two states in a finite automaton $A$ are said to be {\it equivalent} if and only if for every word $w \in \Sigma^*$, if $A$ is started in either state with $w$ as input, it either accepts in both cases or rejects in both cases.
It is well-known that a language which is accepted by
an NFA can be accepted by a DFA, and such a language is said to be {\it regular}.
The language accepted by a DFA $A$ is denoted by $L(A)$.
The reader may refer to~\cite{HoMoUl01,Yu97} for more
details about regular languages and finite automata.
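Since the proofs below repeatedly convert NFAs to DFAs, we recall the subset construction in executable form. This is a standard textbook routine, sketched here with our own encoding (transitions as a dictionary from (state, letter) pairs to sets of states); it is not code from the paper.

```python
def determinize(delta, start, finals, alphabet):
    # Subset construction: delta maps (state, letter) to a set of NFA states
    # (missing keys mean no transition). Returns the reachable part of the
    # equivalent DFA, whose states are frozensets of NFA states.
    start = frozenset(start)
    dfa, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa[S][a] = T
            todo.append(T)
    dfa_finals = {S for S in dfa if S & frozenset(finals)}
    return dfa, start, dfa_finals

def accepts(dfa, start, finals, word):
    S = start
    for a in word:
        S = dfa[S][a]
    return S in finals

# example: NFA over {a, b} accepting the words ending in 'ab'
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
D, s0, fin = determinize(nfa, {0}, {2}, 'ab')
```

On this example the construction yields three reachable subset states, $\{0\}$, $\{0,1\}$, and $\{0,2\}$.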
The {\it state complexity} of a regular language $L$, denoted by
$sc(L)$, is the number of states of the minimal complete DFA that
accepts $L$. The state complexity of a class $S$ of regular
languages, denoted by $sc(S)$, is the supremum among all $sc(L)$, $L
\in S$. The state complexity of an operation on regular languages is
the state complexity of the resulting languages from the operation as a function of the state complexity of the operand languages.
Thus, in a certain sense, the state complexity of an operation is a worst-case complexity.
\section{Reversal combined with catenation}\label{sec:rev-cat}
In this section, we study the state complexity of $L_1^R L_2$ for an
$m$-state DFA language $L_1$ and an $n$-state DFA language $L_2$. We
first show that the state complexity of $L_1^R L_2$ is upper bounded
by $\frac{3}{4}2^{m+n}$ in general (Theorem~\ref{L_1^R L_2 upper
bound}). Then we prove that this upper bound can be reached when
$m,n \ge 2$ (Theorem~\ref{L_1^R L_2 lower bound}). Next, we
investigate the case when $m=1$ and $n\ge 1$ and prove that the state
complexity can be lowered to $2^{n-1}$ in such a case
(Theorem~\ref{L_1^R L_2 state complexity m=1 n>=1}). Finally, we
show that the state complexity of $L_1^R L_2$ is $2^{m-1}+1$ when $m
\ge 2$ and $n=1$ (Theorem~\ref{L_1^R L_2 state complexity m>=2
n=1}).
Now, we start with a general upper bound of state complexity of
$L_1^R L_2$ for any integers $m,n \ge 1$.
\begin{theorem}
\label{L_1^R L_2 upper bound} For two integers $m,n \ge 1$, let
$L_1$ and $L_2$ be two regular languages accepted by an $m$-state
DFA and an $n$-state DFA, respectively. Then there exists a DFA of
at most $\frac{3}{4}2^{m+n}$ states that accepts $L_1^R L_2$.
\end{theorem}
\begin{proof}
Let $M=(Q_M,\Sigma , \delta_M , s_M, F_M)$ be a DFA of $m$ states,
$k_1$ final states and $L_1=L(M)$. Let $N=(Q_N,\Sigma , \delta_N ,
s_N, F_N)$ be another DFA of $n$ states and $L_2=L(N)$.
Let $M'=(Q_M,\Sigma , \delta_{M'} , F_M, \{s_M\})$ be an NFA with
$k_1$ initial states, where $q\in \delta_{M'}(p,a)$ if and only if $\delta_M(q,a)=p$,
for $a\in \Sigma$ and $p,q\in Q_M$. Clearly,
$$L(M')=L(M)^R=L_1^R.$$
By performing subset construction on NFA $M'$, we can get an
equivalent, $2^m$-state DFA $A=(Q_A,\Sigma , \delta_A , s_A, F_A)$
such that $L(A)=L_1^R$. Since $M'$ has only one final state $s_M$,
we know that $F_A=\{i\mid i\subseteq Q_M, s_M\in i\}$. Thus, $A$ has
$2^{m-1}$ final states in total. Now we construct a DFA
$B=(Q_B,\Sigma , \delta_B , s_B, F_B)$ accepting the language $L_1^R
L_2$, where
\begin{eqnarray*}
Q_B & = & \{\langle i,j \rangle \mid i\in Q_A\mbox{, } j\subseteq Q_N\},\\
s_B & = & \langle s_A,\emptyset \rangle, \mbox{ if } s_A \not\in F_A;\\
& = & \langle s_A, \{s_N\} \rangle, \mbox{ otherwise}, \\
F_B & = & \{\langle i,j \rangle\in Q_B \mid j\cap F_N\neq \emptyset\},\\
\delta_B(\langle i,j \rangle, a) & = & \langle i',j' \rangle \mbox{,
if } \delta_A(i,a)=i'\mbox{, }\delta_N(j,a)=j'\mbox{, }a\in
\Sigma \mbox{, } i'\notin F_A; \\
& = & \langle i',j'\cup \{s_N\} \rangle \mbox{, if }
\delta_A(i,a)=i'\mbox{, }\delta_N(j,a)=j'\mbox{, }a\in \Sigma
\mbox{, } i'\in F_A.
\end{eqnarray*}
From the above construction, we can see that every state $\langle i,j\rangle$ of $B$
with $i\in F_A$ must satisfy $s_N\in j$. In total, $2^{m-1}\cdot 2^{n-1}$ states
violate this condition and are therefore never used.
Thus, the number of states of the minimal DFA accepting $L_1^RL_2$
is no more than
$$2^{m+n}-2^{m-1}\cdot 2^{n-1}=\frac{3}{4}2^{m+n}.$$
\end{proof}
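The construction of $B$ in the proof above can be checked mechanically on small instances. The following sketch (our own encoding, not code from the paper) builds an acceptor for $L(M)^R L(N)$ exactly as in the proof, and compares it against a brute-force search for a split $w = uv$ with $u^R \in L(M)$ and $v \in L(N)$.

```python
def run(dfa, word):
    # dfa = (states, delta, start, finals) with a total transition dict delta
    Q, d, s, F = dfa
    q = s
    for a in word:
        q = d[(q, a)]
    return q in F

def reverse_then_catenate(M, N):
    # Acceptor for L(M)^R L(N), mirroring the proof: the first component runs
    # the reversed M by subset construction (final iff it contains s_M), and
    # N's initial state s_N is injected into the second component whenever the
    # first component is final.
    (QM, dM, sM, FM), (QN, dN, sN, FN) = M, N
    def inject(i, j):
        return j | {sN} if sM in i else j
    start = (frozenset(FM), inject(frozenset(FM), frozenset()))
    def step(state, a):
        i, j = state
        i2 = frozenset(q for q in QM if dM[(q, a)] in i)
        return (i2, inject(i2, frozenset(dN[(r, a)] for r in j)))
    def acc(word):
        st = start
        for a in word:
            st = step(st, a)
        return bool(st[1] & frozenset(FN))
    return acc

# L(M): words over {a,b} ending in a;  L(N): words with an even number of b's
M = ({0, 1}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}, 0, {1})
N = ({0, 1}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 0}, 0, {0})
acc = reverse_then_catenate(M, N)
```

Here $L(M)^R$ is the set of words beginning with $a$, and since the empty word has an even number of $b$'s, $L(M)^R L(N)$ is again the set of words beginning with $a$.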
This result gives an upper bound for the state complexity of $L_1^R
L_2$. Next we show that this bound is reachable when $m,n \ge 2$.
\begin{theorem}
\label{L_1^R L_2 lower bound} Given two integers $m,n\geq 2$, there
exists a DFA $M$ of $m$ states and a DFA $N$ of $n$ states such that
any DFA accepting $L(M)^R L(N)$ needs at least $\frac{3}{4}2^{m+n}$
states.
\end{theorem}
\begin{proof}
Let $M=(Q_M,\Sigma , \delta_M , 0, \{m-1\} )$ be a DFA, shown in
Figure~\ref{DFAM-rev-cat}, where $Q_M = \{0,1,\ldots ,m-1\}$,
$\Sigma = \{a,b,c,d\}$, and the transitions are given as:
\begin{itemize}
\item $\delta_M(i, a) = i+1 \mbox{ mod }m \mbox{, } i=0, \ldots , m-1,$
\item $\delta_M(i, b) = i \mbox{, } i=0, \ldots , m-2,$ $\delta_M(m-1, b) = m-2 \mbox{, }$
\item $\delta_M(m-2, c) = m-1 \mbox{, }$ $\delta_M(m-1, c) = m-2 \mbox{,}$\\
if $m\geq 3$, $\delta_M(i, c) = i \mbox{, } i=0, \ldots , m-3,$
\item $\delta_M(i, d) = i \mbox{, } i=0, \ldots , m-1,$
\end{itemize}
\begin{figure}
\caption{Witness DFA $M$ of Theorem~\ref{L_1^R L_2 lower bound}.}
\label{DFAM-rev-cat}
\end{figure}
Let $N=(Q_N,\Sigma , \delta_N , 0, \{n-1\} )$ be a DFA, shown in
Figure~\ref{DFAN-rev-cat}, where $Q_N = \{0,1,\ldots ,n-1\}$,
$\Sigma = \{a,b,c,d\}$, and the transitions are given as:
\begin{itemize}
\item $\delta_N(i, a) = i \mbox{, } i=1, \ldots , n-1,$
\item $\delta_N(i, b) = i \mbox{, } i=1, \ldots , n-1,$
\item $\delta_N(i, c) = 0 \mbox{, } i=1, \ldots , n-1,$
\item $\delta_N(i, d) = i+1 \mbox{ mod }n \mbox{, } i=0, \ldots , n-1,$
\end{itemize}
\begin{figure}
\caption{Witness DFA $N$ of Theorem~\ref{L_1^R L_2 lower bound}.}
\label{DFAN-rev-cat}
\end{figure}
Now we design a DFA $A=(Q_A, \Sigma , \delta_A , \{m-1\}, F_A )$,
where $Q_A = \{q \mid q\subseteq Q_M\}$, $\Sigma = \{a,b,c,d\}$,
$F_A = \{q \mid 0\in q \mbox{, }q\in Q_A\}$, and the transitions are
defined as:
\[
\delta_A(p, e) = \{j \mid \delta_M(j, e)=i\mbox{, }i\in p\} \mbox{,
} p\in Q_A\mbox{, } e \in \Sigma.
\]
It is easy to see that $A$ is a DFA that accepts $L(M)^R$. We prove
that $A$ is minimal before using it.
(I) We first show that every state $I\in Q_A$ is reachable from
$\{m-1\}$. There are three cases.
\begin{itemize}
\item[{\rm 1.}]$|I|=0$.
$|I|=0$ if and only if $I=\emptyset$. $\delta_A(\{ m-1 \}, b) =
I=\emptyset.$
\item[{\rm 2.}]$|I|=1$.
Let $I=\{ i \}$, $0\leq i\leq m-1$. $\delta_A(\{ m-1 \}, a^{m-1-i})
=I.$
\item[{\rm 3.}]$2\leq |I|\leq m$.
Let $I=\{ i_1, i_2, \ldots ,i_k \}$, $0\leq i_1<i_2< \ldots <i_k
\leq m-1$, $2\leq k\leq m$. $\delta_A(\{ m-1 \}, w) = I$, where
$$w = ab(ac)^{i_2-i_1-1}ab(ac)^{i_{3}-i_{2}-1}\cdots ab(ac)^{i_k-i_{k-1}-1}a^{m-1-i_k}.$$
\end{itemize}
(II) Any two different states $I$ and $J$ in $Q_A$ are
distinguishable.
Without loss of generality, we may assume that $|I|\geq |J|$. Let
$x\in I-J$. Then a string $a^{x}$ can distinguish these two states
because
\begin{eqnarray*}
\delta_A(I, a^{x})& \in & F_A,\\
\delta_A(J, a^{x}) & \notin & F_A.
\end{eqnarray*}
Due to (I) and (II), $A$ is a minimal DFA with $2^m$ states which
accepts $L(M)^R$. Now let $B=(Q_B, \Sigma , \delta_B , s_B, F_B)$
be another DFA, where
\begin{eqnarray*}
Q_B & = & \{\langle p,q\rangle \mid p\in Q_A-F_A \mbox{, }q\subseteq Q_N\}\\
& & \qquad \cup \,\, \{\langle p', q'\rangle \mid p'\in F_A \mbox{, }q'\subseteq Q_N\mbox{, }0\in q'\},\\
\Sigma & = & \{a,b,c,d\},\\
s_B & = & \langle \{m-1\},\emptyset \rangle,\\
F_B & = & \{\langle p,q\rangle \mid n-1\in q \mbox{, } \langle
p,q\rangle\in Q_B\},
\end{eqnarray*}
and for each state $\langle p,q\rangle\in Q_B$ and each letter $e\in
\Sigma,$
\begin{eqnarray*}
\delta_B(\langle p,q\rangle, e) = \left\{
\begin{array}{l l}
\langle p',q'\rangle & \mbox{if }\delta_A(p, e)=p'\notin F_A\mbox{, } \delta_N(q, e)=q',\\
\langle p',q'\rangle & \mbox{if }\delta_A(p, e)=p'\in F_A\mbox{, }\delta_N(q, e)=r'\mbox{, $q' =r'\cup \{0\}$.} \\
\end{array} \right.
\end{eqnarray*}
As mentioned in the previous proof, every state $\langle p,q\rangle$ with $p\in
F_A$ must have $q\subseteq Q_N$ such that $0\in q$. Clearly, $B$
accepts the language $L(M)^RL(N)$ and it has
$$2^m\cdot 2^n-2^{m-1}\cdot 2^{n-1}=\frac{3}{4}2^{m+n}$$
states. Now we show that $B$ is a minimal DFA.
(I) Every state $\langle p,q\rangle \in Q_B$ is reachable. We
consider the following six cases:
\begin{itemize}
\item[{\rm 1.}]$p=\emptyset$, $q=\emptyset$.
$\langle \emptyset,\emptyset\rangle$ is the sink state of $B$.
$\delta_B(\langle \{m-1\},\emptyset \rangle, b) = \langle
p,q\rangle$.
\item[{\rm 2.}]$p\neq \emptyset$, $q=\emptyset$.
Let $p=\{ p_1, p_2, \ldots ,p_k \}$, $1\leq p_1<p_2< \ldots <p_k
\leq m-1$, $1\leq k\leq m-1$. Note that $0\notin p$, because $0\in
p$ guarantees $0\in q$. $\delta_B(\langle \{m-1\},\emptyset \rangle,
w) = \langle p,q\rangle$, where
$$w = ab(ac)^{p_2-p_1-1}ab(ac)^{p_{3}-p_{2}-1}\cdots ab(ac)^{p_k-p_{k-1}-1}a^{m-1-p_k}.$$
Note that $w=a^{m-1-p_1}$ when $k=1$.
\item[{\rm 3.}]$p= \emptyset$, $q\neq \emptyset$.
In this case, let $q=\{ q_1, q_2, \ldots ,q_l \}$, $0\leq q_1<q_2<
\ldots <q_l \leq n-1$, $1\leq l\leq n$. $\delta_B(\langle
\{m-1\},\emptyset \rangle, x) = \langle p,q\rangle$, where
$$x = a^md^{q_l-q_{l-1}}a^md^{q_{l-1}-q_{l-2}}\cdots a^md^{q_2-q_1}a^md^{q_1}b.$$
\item[{\rm 4.}]$p\neq \emptyset$, $0\notin p$, $q\neq \emptyset$.
Let $p=\{ p_1, p_2, \ldots ,p_k \}$, $1\leq p_1<p_2< \ldots <p_k
\leq m-1$, $1\leq k\leq m-1$ and $q=\{ q_1, q_2, \ldots ,q_l \}$,
$0\leq q_1<q_2< \ldots <q_l \leq n-1$, $1\leq l\leq n$. We can find
a string $uv$ such that $\delta_B(\langle \{m-1\},\emptyset \rangle,
uv) = \langle p,q\rangle$, where
$$u = ab(ac)^{p_2-p_1-1}ab(ac)^{p_{3}-p_{2}-1}\cdots ab(ac)^{p_k-p_{k-1}-1}a^{m-1-p_k},$$
$$v = a^md^{q_l-q_{l-1}}a^md^{q_{l-1}-q_{l-2}}\cdots a^md^{q_2-q_1}a^md^{q_1}.$$
\item[{\rm 5.}]$p\neq \emptyset$, $0\in p$, $m-1\notin p$, $q\neq \emptyset$.
Let $p=\{ p_1, p_2, \ldots ,p_k \}$, $0= p_1<p_2< \ldots <p_k <m-1$,
$1\leq k\leq m-1$ and $q=\{ q_1, q_2, \ldots ,q_l \}$, $0= q_1<q_2<
\ldots <q_l \leq n-1$, $1\leq l\leq n$. Since $0$ is in $p$,
according to the definition of $B$, $0$ has to be in $q$ as well.
There exists a string $u'v'$ such that $\delta_B(\langle
\{m-1\},\emptyset \rangle, u'v') = \langle p,q\rangle$, where
$$u' = ab(ac)^{p_2-p_1-1}ab(ac)^{p_{3}-p_{2}-1}\cdots ab(ac)^{p_k-p_{k-1}-1}a^{m-2-p_k},$$
$$v' = a^md^{q_l-q_{l-1}}a^md^{q_{l-1}-q_{l-2}}\cdots a^md^{q_2-q_1}a^md^{q_1}a.$$
\item[{\rm 6.}]$p\neq \emptyset$, $\{0,m-1\}\subseteq p$, $q\neq \emptyset$.
Let $p=\{ p_1, p_2, \ldots ,p_k \}$, $0= p_1<p_2< \ldots <p_k =m-1$,
$2\leq k\leq m$ and $q=\{ q_1, q_2, \ldots ,q_l \}$, $0= q_1<q_2<
\ldots <q_l \leq n-1$, $1\leq l\leq n$. In this case, we have
\begin{eqnarray*}
\langle p,q\rangle = \left\{
\begin{array}{l l}
\delta_B(\langle \{0,1,p_2+1,\ldots,p_{k-1}+1\} , q \rangle, a), & \mbox{if }m-2\notin p,\\
\delta_B(\langle p-\{m-1\},q\rangle, b), & \mbox{if }m-2\in p,
\end{array} \right.
\end{eqnarray*}
where states $\langle \{0,1,p_2+1,\ldots,p_{k-1}+1\} , q \rangle$
and $\langle p-\{m-1\},q\rangle$ have been proved to be reachable in
Case 5.
\end{itemize}
(II) We then show that any two different states $\langle
p_1,q_1\rangle$ and $\langle p_2,q_2\rangle$ in $Q_B$ are
distinguishable.
\begin{itemize}
\item[{\rm 1.}]$q_1\neq q_2$.
Without loss of generality, we may assume that $|q_1|\geq |q_2|$. Let $x\in q_1-q_2$. A string $d^{n-1-x}$ can distinguish them because
\begin{eqnarray*}
\delta_B(\langle p_1,q_1\rangle, d^{n-1-x})& \in & F_B,\\
\delta_B(\langle p_2,q_2\rangle, d^{n-1-x}) & \notin & F_B.
\end{eqnarray*}
\item[{\rm 2.}]$p_1\neq p_2$, $q_1= q_2$. Without loss of generality, we assume that $|p_1|\geq |p_2|$. Let $y\in p_1-p_2$. Then there always exists a string $a^yc^2d^{n}$ such that
\begin{eqnarray*}
\delta_B(\langle p_1,q_1\rangle, a^yc^2d^{n})& \in & F_B,\\
\delta_B(\langle p_2,q_2\rangle, a^yc^2d^{n}) & \notin & F_B.
\end{eqnarray*}
\end{itemize}
Since all the states in $B$ are reachable and pairwise
distinguishable, DFA $B$ is minimal. Thus, any DFA accepting
$L(M)^RL(N)$ needs at least $\frac{3}{4}2^{m+n}$ states.
\end{proof}
This result gives a lower bound for the state complexity of
$L_1^RL_2$ when $m,n \ge 2$. It exactly coincides with the upper
bound shown in Theorem~\ref{L_1^R L_2 upper bound}. Thus, we obtain
the state complexity of the combined operation $L_1^RL_2$ for $m \ge 2$
and $n\ge 2$.
\begin{theorem}
\label{L_1^R L_2 state complexity} For any integers $m, n\geq 2$,
let $L_1$ be an $m$-state DFA language and $L_2$ be an $n$-state DFA
language. Then $\frac{3}{4}2^{m+n}$ states are both necessary and
sufficient in the worst case for a DFA to accept $L_1^RL_2$.
\end{theorem}
In the rest of this section, we study the remaining cases when
either $m =1$ or $n=1$.
We first consider the case when $m=1$ and $n\geq 2$. In this case,
$L_1=\emptyset$ or $L_1=\Sigma^*$. Then $L_1^RL_2=L_1L_2$ holds
whether $L_1$ is $\emptyset$ or $\Sigma^*$, since
$\emptyset^R=\emptyset$ and $(\Sigma^*)^R=\Sigma^*$. It has been
shown in~\cite{YuZhSa94}
that $2^{n-1}$ states are both sufficient and necessary in the worst
case for a DFA to accept the catenation of a 1-state DFA language
and an $n$-state DFA language, $n\ge 2$.
When $m =1$ and $n=1$, it is also easy to see that $1$ state is
sufficient and necessary in the worst case for a DFA to accept
$L_1^RL_2$, because $L_1^RL_2$ is either $\emptyset$ or $\Sigma^*$.
Thus, we have the following theorem concerning the state complexity
of $L_1^RL_2$ for $m=1$ and $n\ge 1$.
\begin{theorem}
\label{L_1^R L_2 state complexity m=1 n>=1} Let $L_1$ be a 1-state
DFA language and $L_2$ be an $n$-state DFA language, $n\ge 1$. Then
$2^{n-1}$ states are both sufficient and necessary in the worst case
for a DFA to accept $L_1^RL_2$.
\end{theorem}
Now, we study the state complexity of $L_1^RL_2$ for $m \ge 2$ and
$n = 1$. Let us start with the following upper bound.
\begin{theorem}
\label{L_1^R L_2 upper bound m>=2 n=1} For any integer $m\ge 2$, let
$L_1$ and $L_2$ be two regular languages accepted by an $m$-state
DFA and a $1$-state DFA, respectively. Then there exists a DFA of at
most $2^{m-1}+1$ states that accepts $L_1^R L_2$.
\end{theorem}
\begin{proof}
Let $M=(Q_M,\Sigma , \delta_M , s_M, F_M)$ be a DFA of $m$ states,
$m\ge 2$, $k_1$ final states and $L_1=L(M)$. Let $N$ be another DFA
of $1$ state and $L_2=L(N)$. Since $N$ is a complete DFA, as we
mentioned before, $L(N)$ is either $\emptyset$ or $\Sigma^*$.
Clearly, $L_1^R\cdot \emptyset=\emptyset$. Thus, we need to consider
only the case $L_2=L(N)=\Sigma^*$.
We construct an NFA $M'=(Q_M,\Sigma , \delta_{M'} , F_M, \{s_M\})$
with $k_1$ initial states, similarly to the proof of
Theorem~\ref{L_1^R L_2 upper bound}: $q\in\delta_{M'}(p,a)$ if and
only if $\delta_M(q,a)=p$, where $a\in \Sigma$ and $p,q\in Q_M$. It
is easy to see that
$$L(M')=L(M)^R=L_1^R.$$
By performing the subset construction on NFA $M'$, we get an
equivalent $2^m$-state DFA $A=(Q_A,\Sigma , \delta_A , s_A, F_A)$
such that $L(A)=L_1^R$. Here $F_A=\{i\mid i\subseteq Q_M, s_M\in
i\}$, because $M'$ has only one final state $s_M$. Thus, $A$ has
$2^{m-1}$ final states in total.
Define $B=(Q_B,\Sigma , \delta_B , s_B, \{f_B\})$ where $f_B\notin
Q_A$, $Q_B=(Q_A-F_A)\cup \{f_B\}$,
\begin{eqnarray*}
s_B = \left\{
\begin{array}{l l}
s_A & \mbox{if }s_A\notin F_A,\\
f_B & \mbox{otherwise.}\\
\end{array} \right.
\end{eqnarray*}
and for any $a\in \Sigma$ and $p\in Q_B$,
\begin{eqnarray*}
\delta_B(p, a) = \left\{
\begin{array}{l l}
\delta_A(p, a) & \mbox{if }\delta_A(p, a)\notin F_A,\\
f_B & \mbox{if }\delta_A(p, a)\in F_A,\\
f_B & \mbox{if }p=f_B.\\
\end{array} \right.
\end{eqnarray*}
The automaton $B$ is exactly the same as $A$ except that $A$'s
$2^{m-1}$ final states are turned into sink states, which are then
merged into the single final state $f_B$, since they are all
equivalent. When the computation reaches the final state $f_B$, it
remains there.
Now, it is clear that $B$ has
$$2^m-2^{m-1}+1=2^{m-1}+1$$ states and $L(B)=L_1^R \Sigma^*$.
\end{proof}
This theorem shows an upper bound for the state complexity of $L_1^R
L_2$ for $m \ge 2$ and $n = 1$. Next we prove that this upper bound
is reachable.
\begin{lemma}
\label{L_1^R L_2 lower bound m=2 or 3 n=1}Given an integer $m=2$ or
$3$, there exists an $m$-state DFA $M$ and a $1$-state DFA $N$ such
that any DFA accepting $L(M)^RL(N)$ needs at least $2^{m-1}+1$
states.
\end{lemma}
\begin{proof}
When $m=2$ and $n = 1$, we can construct the following witness DFAs.
Let $M=(\{0, 1\},\Sigma , \delta_M , 0, \{1\} )$ be a DFA, where
$\Sigma = \{a,b\}$, and the transitions are given as:
\begin{itemize}
\item $\delta_M(0, a) = 1 \mbox{, } \delta_M(1, a) = 0,$
\item $\delta_M(0, b) = 0 \mbox{, } \delta_M(1, b) = 0.$
\end{itemize}
Let $N$ be the DFA accepting $\Sigma^*$. Then the resulting DFA for
$L(M)^R \Sigma^*$ is $A=(\{0, 1, 2\},\Sigma , \delta_A , 0, \{1\} )$
where
\begin{itemize}
\item $\delta_A(0, a) = 1 \mbox{, } \delta_A(1, a) = 1\mbox{, }\delta_A(2, a) = 2\mbox{, }$
\item $\delta_A(0, b) = 2 \mbox{, } \delta_A(1, b) = 1\mbox{, }\delta_A(2, b) = 2.$
\end{itemize}
When $m=3$ and $n = 1$, the witness DFAs are as follows. Let
$M'=(\{0, 1, 2\},\Sigma' , \delta_{M'} , 0, \{2\} )$ be a DFA, where
$\Sigma' = \{a,b,c\}$, and the transitions are:
\begin{itemize}
\item $\delta_{M'}(0, a) = 1 \mbox{, } \delta_{M'}(1, a) = 2\mbox{, }\delta_{M'}(2, a) = 0\mbox{, }$
\item $\delta_{M'}(0, b) = 0 \mbox{, } \delta_{M'}(1, b) = 0\mbox{, }\delta_{M'}(2, b) = 1\mbox{, }$
\item $\delta_{M'}(0, c) = 0 \mbox{, } \delta_{M'}(1, c) = 2\mbox{, }\delta_{M'}(2, c) = 1\mbox{. }$
\end{itemize}
Let $N'$ be the DFA accepting $\Sigma'^*$. The resulting DFA for
$L(M')^R \Sigma'^*$ is $A'=(\{0, 1, 2, 3, 4\},\Sigma' , \delta_{A'}
, 0, \{3\} )$ where
\begin{itemize}
\item $\delta_{A'}(0, a) = 1 \mbox{, } \delta_{A'}(1, a) = 3\mbox{, }\delta_{A'}(2, a) = 2\mbox{, }\delta_{A'}(3, a) = 3\mbox{, }\delta_{A'}(4, a) = 3\mbox{, }$
\item $\delta_{A'}(0, b) = 2 \mbox{, } \delta_{A'}(1, b) = 4\mbox{, }\delta_{A'}(2, b) = 2\mbox{, }\delta_{A'}(3, b) = 3\mbox{, }\delta_{A'}(4, b) = 4\mbox{, }$
\item $\delta_{A'}(0, c) = 1 \mbox{, } \delta_{A'}(1, c) = 0\mbox{, }\delta_{A'}(2, c) = 2\mbox{, }\delta_{A'}(3, c) = 3\mbox{, }\delta_{A'}(4, c) = 4\mbox{. }$
\end{itemize}
In both cases, one can verify that the resulting DFA is minimal, so
the bound $2^{m-1}+1$ is attained.
\end{proof}
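The minimality claims for the two witness constructions above can also be checked mechanically. The following Python sketch (not part of the original proof; the dictionary encoding of the transition tables is our own) runs Moore's partition-refinement algorithm on the DFAs $A$ and $A'$ listed above and confirms that both are already minimal, with $3 = 2^{2-1}+1$ and $5 = 2^{3-1}+1$ states, respectively.

```python
def minimal_size(states, alphabet, delta, finals):
    """Number of states of the minimal DFA equivalent to the given one
    (Moore's algorithm: refine the final/non-final partition until stable)."""
    part = {q: int(q in finals) for q in states}
    while True:
        # Signature of a state: its class plus the classes of its successors.
        sig = {q: (part[q],) + tuple(part[delta[q, e]] for e in alphabet)
               for q in states}
        renum = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {q: renum[sig[q]] for q in states}
        if len(set(new.values())) == len(set(part.values())):
            return len(set(new.values()))
        part = new

# DFA A (case m = 2): states {0,1,2}, final state 1.
delta_A = {(0, 'a'): 1, (1, 'a'): 1, (2, 'a'): 2,
           (0, 'b'): 2, (1, 'b'): 1, (2, 'b'): 2}
size_A = minimal_size(range(3), 'ab', delta_A, {1})

# DFA A' (case m = 3): states {0,...,4}, final state 3.
delta_A2 = {(0, 'a'): 1, (1, 'a'): 3, (2, 'a'): 2, (3, 'a'): 3, (4, 'a'): 3,
            (0, 'b'): 2, (1, 'b'): 4, (2, 'b'): 2, (3, 'b'): 3, (4, 'b'): 4,
            (0, 'c'): 1, (1, 'c'): 0, (2, 'c'): 2, (3, 'c'): 3, (4, 'c'): 4}
size_A2 = minimal_size(range(5), 'abc', delta_A2, {3})

print(size_A, size_A2)  # prints: 3 5
```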
The above result shows that the bound $2^{m-1}+1$ is reachable when
$m$ is equal to 2 or 3 and $n = 1$. The last case is $m\ge 4$ and $n =
1$.
\begin{theorem}
\label{L_1^R L_2 lower bound m>=4 n=1} Given an integer $m\ge 4$,
there exists a DFA $M$ of $m$ states and a DFA $N$ of $1$ state such
that any DFA accepting $L(M)^R L(N)$ needs at least $2^{m-1}+1$
states.
\end{theorem}
\begin{proof}
Let $M=(Q_M,\Sigma , \delta_M , 0, \{m-1\} )$ be a DFA,
shown in Figure~\ref{DFAM-rev-cat-n=1},
where $Q_M = \{0,1,\ldots ,m-1\}$, $m\ge 4$, $\Sigma = \{a,b,c,d\}$,
and the transitions are given as:
\begin{itemize}
\item $\delta_M(i, a) = i+1 \mbox{ mod }m \mbox{, } i=0, \ldots , m-1,$
\item $\delta_M(i, b) = i \mbox{, } i=0, \ldots , m-2\mbox{, } \delta_M(m-1, b) = m-2 \mbox{, }$
\item $\delta_M(i, c) = i \mbox{, } i=0, \ldots , m-3\mbox{, } \delta_M(m-2, c) = m-1 \mbox{, } \delta_M(m-1, c) = m-2 \mbox{,}$
\item $\delta_M(0, d) = 0 \mbox{, } \delta_M(i, d) = i+1 \mbox{, } i=1, \ldots , m-2\mbox{, } \delta_M(m-1, d) = 1 \mbox{. }$
\end{itemize}
\begin{figure}
\caption{Witness DFA $M$ of Theorem~\ref{L_1^R L_2 lower bound m>=4 n=1}}
\label{DFAM-rev-cat-n=1}
\end{figure}
Let $N$ be the DFA accepting $\Sigma^*$. Then
$L(M)^RL(N)=L(M)^R\Sigma^*$. Now we design a DFA $A=(Q_A, \Sigma ,
\delta_A , \{m-1\}, F_A)$ similar to the proof of Theorem~\ref{L_1^R
L_2 lower bound}, where $Q_A = \{q \mid q\subseteq Q_M\}$, $\Sigma =
\{a,b,c,d\}$, $F_A = \{q \mid 0\in q \mbox{, }q\in Q_A\}$, and the
transitions are defined as:
\[
\delta_A(p, e) = \{j \mid \delta_M(j, e)=i\mbox{, }i\in p\} \mbox{,
} p\in Q_A\mbox{, } e \in \Sigma.
\]
It is easy to see that $A$ is a DFA that accepts $L(M)^R$. Since the
transitions of $M$ on letters $a$, $b$, and $c$ are exactly the same
as those of DFA $M$ in the proof of Theorem~\ref{L_1^R L_2 lower
bound}, we can say that $A$ is minimal and it has $2^{m}$ states,
among which $2^{m-1}$ states are final.
Define $B=(Q_B,\Sigma , \delta_B , s_B, \{f_B\})$ where $f_B\notin
Q_A$, $Q_B=(Q_A-F_A)\cup \{f_B\}$,
\begin{eqnarray*}
s_B = \left\{
\begin{array}{l l}
s_A & \mbox{if }s_A\notin F_A,\\
f_B & \mbox{otherwise.}\\
\end{array} \right.
\end{eqnarray*}
and for any $e\in \Sigma$ and $I\in Q_B$,
\begin{eqnarray*}
\delta_B(I, e) = \left\{
\begin{array}{l l}
\delta_A(I, e) & \mbox{if }\delta_A(I, e)\notin F_A,\\
f_B & \mbox{if }\delta_A(I, e)\in F_A,\\
f_B & \mbox{if }I=f_B.\\
\end{array} \right.
\end{eqnarray*}
DFA $B$ is the same as $A$ except that $A$'s $2^{m-1}$ final states
are changed into sink states and merged into one final sink state, as
we did in the proof of Theorem~\ref{L_1^R L_2 upper bound m>=2 n=1}.
Clearly, $B$ has $2^m-2^{m-1}+1=2^{m-1}+1$ states and
$L(B)=L(M)^R\Sigma^*$. Next we show that $B$ is a minimal DFA.
(I) Every state $I\in Q_B$ is reachable from $\{m-1\}$. The proof is
similar to that of Theorem~\ref{L_1^R L_2 lower bound}. We consider
the following four cases:
\begin{itemize}
\item[{\rm 1.}]$I=\emptyset$.
$\delta_B(\{ m-1 \}, b) = I=\emptyset.$
\item[{\rm 2.}]$I=f_B$. $\delta_B(\{ m-1 \}, a^{m-1}) =I=f_B.$
\item[{\rm 3.}]$|I|=1$.
Assume that $I=\{ i \}$, $1\leq i\leq m-1$. Note that $i\neq 0$
because all the final states in $A$ have been merged into $f_B$. In
this case, $\delta_B(\{ m-1 \}, a^{m-1-i}) =I.$
\item[{\rm 4.}]$2\leq |I|\leq m-1$.
Assume that $I=\{ i_1, i_2, \ldots ,i_k \}$, $1\leq i_1<i_2< \ldots
<i_k \leq m-1$, $2\leq k\leq m-1$. $\delta_B(\{ m-1 \}, w) = I$, where
$$w = ab(ac)^{i_2-i_1-1}ab(ac)^{i_{3}-i_{2}-1}\cdots ab(ac)^{i_k-i_{k-1}-1}a^{m-1-i_k}.$$
\end{itemize}
(II) Any two different states $I$ and $J$ in $Q_B$ are
distinguishable.
Since $f_B$ is the only final state in $Q_B$, it is distinguishable
from any other state by the empty string. Thus, we consider the case
when neither $I$ nor $J$ is $f_B$.
Without loss of generality, we may assume that $|I|\geq |J|$. Let
$x\in I-J$. Note that $x>0$, because all the states which include
$0$ have been merged into $f_B$. Then the string $d^{x-1}a$ can
distinguish these two states because
\begin{eqnarray*}
\delta_B(I, d^{x-1}a)& = & f_B,\\
\delta_B(J, d^{x-1}a) & \neq & f_B.
\end{eqnarray*}
Since all the states in $B$ are reachable and pairwise
distinguishable, $B$ is a minimal DFA. Thus, any DFA accepting
$L(M)^R\Sigma^*$ needs at least $2^{m-1}+1$ states.
\end{proof}
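The state counts in the proof above can be double-checked computationally. The sketch below (not from the paper; the encoding of $\delta_M$ as Python dictionaries is our own) performs the subset construction on the reversal of the witness DFA $M$ for $m=4$ and confirms that all $2^m$ subsets are reachable, that $2^{m-1}$ of them are final, and that merging the final subsets into $f_B$ leaves $2^{m-1}+1 = 9$ states.

```python
# Witness DFA M from the theorem, instantiated with m = 4, over {a, b, c, d}.
m = 4
delta_M = {
    'a': {i: (i + 1) % m for i in range(m)},
    'b': {**{i: i for i in range(m - 1)}, m - 1: m - 2},
    'c': {**{i: i for i in range(m - 2)}, m - 2: m - 1, m - 1: m - 2},
    'd': {0: 0, **{i: i + 1 for i in range(1, m - 1)}, m - 1: 1},
}

def delta_A(S, e):
    # Reversed transition: the preimage of S under delta_M(., e).
    return frozenset(j for j in range(m) if delta_M[e][j] in S)

start = frozenset({m - 1})           # the single final state of M
reachable, todo = {start}, [start]
while todo:                          # breadth-first exploration of subsets
    S = todo.pop()
    for e in 'abcd':
        T = delta_A(S, e)
        if T not in reachable:
            reachable.add(T)
            todo.append(T)

finals = [S for S in reachable if 0 in S]    # subsets containing s_M = 0
size_B = len(reachable) - len(finals) + 1    # merge final subsets into f_B
print(len(reachable), len(finals), size_B)   # prints: 16 8 9
```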
Summarizing Theorem~\ref{L_1^R L_2 upper bound m>=2 n=1},
Theorem~\ref{L_1^R L_2 lower bound m>=4 n=1} and Lemma~\ref{L_1^R
L_2 lower bound m=2 or 3 n=1}, we obtain the state complexity of the
combined operation $L_1^RL_2$ for $m \ge 2$ and $n=1$.
\begin{theorem}
\label{L_1^R L_2 state complexity m>=2 n=1} For any integer $m \ge
2$, let $L_1$ be an $m$-state DFA language and $L_2$ be a $1$-state
DFA language. Then $2^{m-1}+1$ states are both sufficient and
necessary in the worst case for a DFA to accept $L_1^RL_2$.
\end{theorem}
\section{Star combined with catenation}\label{sec:star-cat}
In this section, we investigate the state complexity of $L(A)^*L(B)$ for two DFAs $A$ and $B$ of sizes $m,n \ge 1$, respectively.
We first notice that, when $n = 1$, the state complexity of $L(A)^*L(B)$ is 1 for any $m \ge 1$.
This is because $B$ is complete, so $L(B)$ is either $\emptyset$ or $\Sigma^*$; hence $L(A)^*L(B)$ is either $\emptyset$ or, since $\lambda \in L(A)^*$, equal to $\Sigma^*$.
Thus, $L(A)^*L(B)$ is always accepted by a 1-state DFA.
Next, we consider the case where $A$ has only one final state and it is also the initial state.
In such a case, $L(A)^*$ is also accepted by $A$, and hence the state complexity of $L(A)^*L(B)$ is equal to that of $L(A)L(B)$.
We will show that, for any $A$ of size $m \ge 1$ in this form and any $B$ of size $n \ge 2$, the state complexity of $L(A)L(B)$ (also $L(A)^*L(B)$) is $m(2^n-1) - 2^{n-1} + 1$ (Theorems~\ref{thm:star-cat-upper-special} and~\ref{thm:star-cat-lower-special}), which is lower than the state complexity of catenation in the general case.
Lastly, we consider the state complexity of $L(A)^*L(B)$ in the remaining case, that is, when $A$ has at least one final state that is not the initial state and $n \ge 2$.
We will show that its upper bound (Theorem~\ref{thm:star-cat-upper}) coincides with its lower bound (Theorem~\ref{thm:star-cat-lower}), and the state complexity is $5 \cdot 2^{m+n-3} - 2^{m-1} - 2^n +1$.
Now, we consider the case where DFA $A$ has only one final state and it is also the initial state, and first obtain the following upper bound of the state complexity of $L(A)L(B)$ ($L(A)^*L(B)$), for any DFA $B$ of size $n \ge 2$.
\begin{theorem}\label{thm:star-cat-upper-special}
For integers $m \ge 1$ and $n \ge 2$, let $A$ and $B$ be two DFAs with $m$ and $n$ states, respectively, where $A$ has only one final state and it is also the initial state.
Then, there exists a DFA of at most $m(2^n-1) - 2^{n-1} + 1$ states that accepts $L(A)L(B)$, which is equal to $L(A)^*L(B)$.
\end{theorem}
\begin{proof}
Let $A = (Q_1, \Sigma, \delta_1, s_1, \{s_1\})$ and $B = (Q_2, \Sigma, \delta_2, s_2, F_2)$.
We construct a DFA $C = (Q, \Sigma, \delta, s, F)$ such that
\begin{eqnarray*}
& & Q = Q_1 \times ( 2^{Q_2} - \{ \emptyset \}) - \{s_1\} \times ( 2^{Q_2 - \{s_2\}} - \{\emptyset\}), \\
& & s = \langle s_1, \{ s_2 \} \rangle, \\
& & F = \{ \langle q, T \rangle \in Q \mid T \cap F_2 \neq \emptyset \}, \\
& & \delta(\langle q, T \rangle, a) = \langle q', T' \rangle
\mbox{ for $a \in \Sigma$, where $q' = \delta_1(q, a)$, $R = \delta_2(T,a)$,} \\
& & \hspace{2cm} \mbox{and $T' = R \cup \{s_2\}$ if $q' = s_1$, $T' = R$ otherwise.}
\end{eqnarray*}
Intuitively, $Q$ contains the pairs whose first component is a state of $Q_1$ and second component is a subset of $Q_2$.
Since $s_1$ is the final state of $A$, without reading any letter, we can enter the initial state of $B$.
Note that states $\langle q, \emptyset \rangle$ such that $q \in Q_1$ can never be reached in $C$, because $B$ is complete.
Moreover, $Q$ does not contain those states whose first component is $s_1$ and second component does not contain $s_2$.
Clearly, $C$ has $m(2^n-1) - 2^{n-1} + 1$ states, and we can verify that $L(C) = L(A)L(B)$.
\end{proof}
Next, we show that this upper bound can be reached by some witness DFAs in the specific form.
\begin{figure}
\caption{Witness DFA $A$ for Theorem~\ref{thm:star-cat-lower-special}}
\label{fig:DFAA-star-cat-special}
\end{figure}
\begin{figure}
\caption{Witness DFA $B$ for Theorem~\ref{thm:star-cat-lower-special}}
\label{fig:DFAB-star-cat-special}
\end{figure}
\begin{theorem}\label{thm:star-cat-lower-special}
For any integers $m \ge 1$ and $n \ge 2$, there exist a DFA $A$ of $m$ states and a DFA $B$ of $n$ states, where $A$ has only one final state and it is also the initial state, such that any DFA accepting the language $L(A)L(B)$, which is equal to $L(A)^*L(B)$, needs at least $m(2^n-1) - 2^{n-1} + 1$ states.
\end{theorem}
\begin{proof}
When $m = 1$, the witness DFAs used in the proof of Theorem 1 in~\cite{YuZhSa94} can be used to show that the upper bound proposed in Theorem~\ref{thm:star-cat-upper-special} can be reached.
Next, we consider the case when $m \ge 2$.
We provide witness DFAs $A$ and $B$, depicted in Figures~\ref{fig:DFAA-star-cat-special} and~\ref{fig:DFAB-star-cat-special}, respectively, over the three letter alphabet $\Sigma = \{a,b,c\}$.
$A$ is defined as $A = (Q_1, \Sigma, \delta_1, 0 , \{0\})$ where $Q_1 = \{0,1,\ldots,m-1\}$, and the transitions are given as
\begin{itemize}
\item $\delta_1 (i, a) = i+1 \mbox{ mod } m$, for $i \in Q_1$,
\item $\delta_1 (i, x) = i$, for $i \in Q_1$, where $x \in \{b,c\}$.
\end{itemize}
$B$ is defined as $B = (Q_2, \Sigma, \delta_2, 0, \{n-1\})$ where $Q_2 = \{0,1,\ldots,n-1\}$, and the transitions are given as
\begin{itemize}
\item $\delta_2 (i, a) = i$, for $i \in Q_2$,
\item $\delta_2 (i, b) = i+1 \mbox{ mod } n$, for $i \in Q_2$,
\item $\delta_2 (0, c) = 0$, $\delta_2 (i, c) = i+1 \mbox{ mod } n$, for $i \in \{1, \ldots, n-1\}$.
\end{itemize}
Following the construction described in the proof of Theorem~\ref{thm:star-cat-upper-special}, we construct a DFA $C = (Q, \Sigma, \delta, s, F)$ that accepts $L(A)L(B)$ (also $L(A)^*L(B)$).
To prove that $C$ is minimal, we show that (I) all the states in $Q$ are reachable from $s$, and (II) any two different states in $Q$ are not equivalent.
For (I), we show that all the states in $Q$ are reachable by induction on the size of $T$.
The basis clearly holds, since, for any $i \in Q_1$, state $\langle i, \{0\} \rangle$ is reachable from $\langle 0, \{0\} \rangle$ by reading string $a^{i}$, and state $\langle i, \{j\} \rangle$ can be reached from state $\langle i, \{0\} \rangle$ on string $b^{j}$, for any $i \in \{1, \ldots, m-1\}$ and $j \in Q_2$.
In the induction step, we assume that all the states $\langle q, T \rangle$ such that $|T| < k$ are reachable.
Then, we consider the states $\langle q, T \rangle$ where $|T| = k$.
Let $T = \{j_1, j_2, \ldots, j_k\}$ such that $0 \le j_1 < j_2 < \ldots < j_k \le n-1$.
We consider the following three cases:
\begin{enumerate}
\item $j_1 = 0$ and $j_2 = 1$.
For any state $i \in Q_1$, state $\langle i , T \rangle \in Q$ can be reached as
\[
\langle i, \{0, 1, j_3, \ldots, j_k\} \rangle = \delta(\langle 0, \{0, j_3 - 1, \ldots, j_k - 1\}\rangle, ba^{i}),
\]
where $\{0, j_3 - 1, \ldots, j_k - 1\}$ is of size $k-1$.
\item $j_1 = 0$ and $j_2 > 1$.
For any state $i \in Q_1$, state $\langle i, \{0, j_2, \ldots, j_k\} \rangle$ can be reached from state $\langle i, \{0, 1, j_3 - j_2 + 1, \ldots, j_k - j_2 + 1\} \rangle$ by reading string $c^{j_2 - 1}$.
\item $j_1 > 0$.
In such a case, the first component of state $\langle q, T \rangle$ cannot be $0$.
Thus, for any state $i \in \{1, \ldots, m-1\}$, state $\langle i, \{j_1, j_2, \ldots, j_k\} \rangle$ can be reached from state $\langle i, \{0, j_2 - j_1, \ldots, j_k - j_1\} \rangle$ by reading string $b^{j_1}$.
\end{enumerate}
Next, we show that any two distinct states $\langle q, T \rangle$ and $\langle q', T' \rangle$ in $Q$ are not equivalent.
We consider the following two cases:
\begin{enumerate}
\item $q \neq q'$.
Without loss of generality, we assume $ q \neq 0$.
Then, string $w = c^{n-1}a^{m-q}b^{n}$ can distinguish the two states, since $\delta(\langle q, T \rangle, w) \in F$ and $\delta(\langle q', T' \rangle, w) \not\in F$.
\item $q = q'$ and $T \neq T'$.
Without loss of generality, we assume that $|T| \ge |T'|$.
Then, there exists a state $j \in T - T'$.
It is clear that, when $q \neq 0$, string $b^{n-1-j}$ can distinguish the two states, and when $q = 0$, string $c^{n-1-j}$ can distinguish the two states since $j$ cannot be $0$.
\end{enumerate}
Due to (I) and (II), DFA $C$ is minimal, and hence any DFA accepting $L(A)L(B)$ needs at least $m(2^n-1) - 2^{n-1} + 1$ states.
\end{proof}
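As a computational cross-check (not part of the paper; the dictionary encoding of the witness transitions is our own), one can build the DFA $C$ of Theorem~\ref{thm:star-cat-upper-special} from the witness DFAs $A$ and $B$ above and count its reachable states; the count matches $m(2^n-1) - 2^{n-1} + 1$ for small $m$ and $n$.

```python
def reachable_count(m, n):
    # Witness DFAs over {a, b, c}: A cycles on a and is the identity on b, c;
    # B is the identity on a, cycles on b, and on c fixes 0 and cycles the rest.
    d1 = {('a', i): (i + 1) % m for i in range(m)}
    d1.update({(x, i): i for x in 'bc' for i in range(m)})
    d2 = {('a', i): i for i in range(n)}
    d2.update({('b', i): (i + 1) % n for i in range(n)})
    d2[('c', 0)] = 0
    d2.update({('c', i): (i + 1) % n for i in range(1, n)})

    start = (0, frozenset({0}))      # <s_1, {s_2}> with s_1 = s_2 = 0
    seen, todo = {start}, [start]
    while todo:
        q, T = todo.pop()
        for x in 'abc':
            q2 = d1[(x, q)]
            R = frozenset(d2[(x, t)] for t in T)
            T2 = R | {0} if q2 == 0 else R   # re-enter B when A hits its final state
            if (q2, T2) not in seen:
                seen.add((q2, T2))
                todo.append((q2, T2))
    return len(seen)

for m in range(2, 5):
    for n in range(2, 5):
        assert reachable_count(m, n) == m * (2 ** n - 1) - 2 ** (n - 1) + 1
print('ok')
```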
In the rest of this section, we focus on the case where DFA $A$ contains at least one final state that is not the initial state.
Thus, this DFA is of size at least 2.
We first obtain the following upper bound for the state complexity.
\begin{theorem}\label{thm:star-cat-upper}
Let $A = (Q_1, \Sigma, \delta_1, s_1, F_1)$ be a DFA such that $|Q_1| = m > 1$ and $|F_1 - \{s_1\}| = k_1 \ge 1$, and $B = (Q_2, \Sigma, \delta_2, s_2, F_2)$ be a DFA such that $|Q_2| = n > 1$.
Then, there exists a DFA of at most $\left(\frac{3}{4}2^m - 1\right)(2^n-1) - (2^{m-1}-2^{m-k_1-1})(2^{n-1}-1)$ states that accepts $L(A)^* L(B)$.
\end{theorem}
\begin{proof}
We denote $F_1 - \{s_1\}$ by $F_0$. Then, $|F_0| = k_1 \ge 1$.
We construct a DFA $C = (Q, \Sigma, \delta, s, F)$ for the language $L_1^* L_2$, where $L_1$ and $L_2$ are the languages accepted by DFAs $A$ and $B$, respectively.
Let $Q = \{ \langle p, t \rangle \mid p \in P \mbox{ and } t \in T\} - \{\langle p', t' \rangle \mid p' \in P' \mbox{ and } t' \in T'\}$, where
\begin{eqnarray*}
P & = & \{ R \mid R \subseteq (Q_1 - F_0) \mbox{ and } R \neq \emptyset\} \cup \{ R \mid R \subseteq Q_1, s_1 \in R, \mbox{ and } R \cap F_0 \neq \emptyset \},\\
T & = & 2^{Q_2} - \{\emptyset\}, \\
P' & = & \{ R \mid R \subseteq Q_1, s_1 \in R, \mbox{ and } R \cap F_0 \neq \emptyset \}, \\
T' & = & 2^{Q_2 - \{s_2\}}- \{\emptyset\}.
\end{eqnarray*}
The initial state $s$ is $s = \langle \{s_1\}, \{s_2\}\rangle$.
The set of final states is defined to be $F = \{ \langle p, t\rangle \in Q \mid t \cap F_2 \neq \emptyset\}$.
The transition relation $\delta$ is defined as follows:
\[
\delta (\langle p,t \rangle, a) = \left\{
\begin{array}{l l}
\langle p', t'\rangle & \quad \text{if $p' \cap F_1 = \emptyset$,}\\
\langle p', t' \cup \{s_2\} \rangle & \quad \text{otherwise,}\\
\end{array} \right.
\]
where, $a \in \Sigma$, $p' = \delta_1(p, a)$, and $t' = \delta_2(t, a)$.
Intuitively, $C$ is equivalent to the NFA $C'$ obtained by first constructing an NFA $A'$ that accepts $L_1^*$, then catenating this new NFA with DFA $B$ by $\lambda$-transitions.
Note that, in the construction of $A'$, we need to add a new initial and final state $s_1'$.
However, this new state does not appear in the first component of any of the states in $Q$.
The reason is as follows.
First, note that this new state does not have any incoming transitions.
Thus, from the initial state $s_1'$ of $A'$, after reading a nonempty word, we will never return to this state.
As a result, states $\langle p, t \rangle$ such that $p \subseteq Q_1 \cup \{s_1'\}$, $ s_1' \in p$, and $t \in 2^{Q_2}$ are never reached in DFA $C$, except for the state $\langle\{s_1'\}, \{s_2\}\rangle$.
Then, we note that, in the construction of $A'$, states $s_1'$ and $s_1$ should reach the same state on any letter in $\Sigma$.
Thus, we can say that states $\langle \{s_1'\}, \{s_2\}\rangle$ and $\langle \{s_1\}, \{s_2\} \rangle$ are equivalent, because neither of them is final if $s_2 \not\in F_2$, and they are both final states otherwise.
Hence, we merge these two states and let $\langle \{s_1\}, \{s_2\} \rangle$ be the initial state of $C$.
Also, we notice that states $\langle p, \emptyset \rangle$ such that $p \in P $ can never be reached in $C$, because $B$ is complete.
Moreover, $C$ does not contain those states whose first component contains a final state of $A$ and whose second component does not contain the initial state of $B$.
Therefore, we can verify that DFA $C$ indeed accepts $L_1^* L_2$, and it is clear that the size of $Q$ is
\[
\left(\frac{3}{4}2^m - 1\right)(2^n-1) - (2^{m-1}-2^{m-k_1-1})(2^{n-1}-1).
\]
\end{proof}
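For $k_1 = 1$, the bound above simplifies to the closed form $5 \cdot 2^{m+n-3} - 2^{m-1} - 2^n + 1$ that appears in the matching lower bound below. A quick numeric sanity check of this algebraic identity (not from the paper):

```python
def upper(m, n, k1):
    # The upper bound of the theorem; note 3/4 * 2^m = 3 * 2^(m-2) for m >= 2.
    return (3 * 2 ** (m - 2) - 1) * (2 ** n - 1) \
        - (2 ** (m - 1) - 2 ** (m - k1 - 1)) * (2 ** (n - 1) - 1)

# For k1 = 1 the bound equals 5 * 2^(m+n-3) - 2^(m-1) - 2^n + 1.
for m in range(2, 10):
    for n in range(2, 10):
        assert upper(m, n, 1) == 5 * 2 ** (m + n - 3) - 2 ** (m - 1) - 2 ** n + 1
print('ok')
```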
Then, we show that this upper bound is reachable by some witness DFAs.
\begin{figure}
\caption{Witness DFA $A$ for Theorem~\ref{thm:star-cat-lower}}
\label{fig:DFAA-star-cat}
\end{figure}
\begin{figure}
\caption{Witness DFA $B$ for Theorem~\ref{thm:star-cat-lower}}
\label{fig:DFAB-star-cat}
\end{figure}
\begin{theorem}\label{thm:star-cat-lower}
For any integers $m,n \ge 2$, there exist a DFA $A$ of $m$ states and a DFA $B$ of $n$ states such that any DFA accepting $L(A)^* L(B)$ needs at least $5 \cdot 2^{m+n-3} - 2^{m-1} - 2^n +1$ states.
\end{theorem}
\begin{proof}
We define the following two automata over a four letter alphabet $\Sigma = \{a,b,c,d\}$.
Let $A = (Q_1, \Sigma, \delta_1, 0, \{m-1\})$, shown in Figure~\ref{fig:DFAA-star-cat}, where $Q_1 = \{0,1,\ldots,m-1\}$, and the transitions are defined as
\begin{itemize}
\item $\delta_1 (i, a) = i+1 \mbox{ mod } m$, for $i \in Q_1$,
\item $\delta_1 (0, b) = 0$, $\delta_1 (i, b) = i+1 \mbox{ mod } m$, for $i \in \{1, \ldots, m-1\}$,
\item $\delta_1 (i, x) = i$, for $i \in Q_1$, $x \in \{c,d\}$.
\end{itemize}
Let $B = (Q_2, \Sigma, \delta_2, 0, \{n-1\})$, shown in Figure~\ref{fig:DFAB-star-cat}, where $Q_2 = \{0,1,\ldots,n-1\}$, and the transitions are defined as
\begin{itemize}
\item $\delta_2 (i, x) = i$, for $i \in Q_2$, $x \in \{a,b\}$,
\item $\delta_2 (i, c) = i+1 \mbox{ mod } n$, for $i \in Q_2$,
\item $\delta_2 (i, d) = 0$, for $i \in Q_2$.
\end{itemize}
Let $C = (Q, \Sigma, \delta, \langle \{0\}, \{0\} \rangle, F)$ be the DFA accepting the language $L(A)^*L(B)$ which is constructed from $A$ and $B$ exactly as described in the proof of Theorem~\ref{thm:star-cat-upper}.
Now, we prove that the size of $Q$ is minimal by showing that (I) any state in $Q$ can be reached from the initial state, and (II) no two different states in $Q$ are equivalent.
We first prove (I) by induction on the size of the second component $t$ of the states in $Q$.
{\bf Basis:} for any $i \in Q_2$, state $\langle \{0\}, \{i\} \rangle$ can be reached from the initial state $\langle \{0\}, \{0\}\rangle$ on string $c^i$.
Then, by the proof of Theorem 5 in~\cite{YuZhSa94}, it is clear that state $\langle p, \{i\}\rangle$ of $Q$, where $p \in P$ and $i \in Q_2$, is reachable from state $\langle \{0\}, \{i\}\rangle$ on strings over letters $a$ and $b$.
{\bf Induction step:} assume that all the states $\langle p, t\rangle$ in $Q$ such that $p \in P$ and $|t| < k$ are reachable.
Then, we consider the states $\langle p, t\rangle$ in $Q$ where $p \in P$ and $|t| = k$.
Let $t = \{j_1, j_2, \ldots, j_k\}$ such that $0 \le j_1 < j_2 < \ldots < j_k \le n-1$.
Note that states such that $p = \{0\}$ and $j_1 = 0$ are reachable as follows:
\[
\langle \{0\}, \{0, j_2, \ldots, j_k\}\rangle = \delta( \langle \{0\}, \{0, j_3 - j_2, \ldots, j_k - j_2 \}\rangle, c^{j_2}a^{m-1}b).
\]
Then, states such that $p = \{0\}$ and $j_1 > 0$ can be reached as follows:
\[
\langle \{0\}, \{j_1, j_2, \ldots, j_k\}\rangle = \delta(\langle \{0\}, \{0, j_2 - j_1, \ldots, j_k - j_1\}\rangle, c^{j_1}).
\]
Once again, by using the proof of Theorem 5 in~\cite{YuZhSa94}, states $\langle p, t\rangle$ in $Q$, where $p \in P$ and $|t| = k$, can be reached from the state $\langle \{0\}, t \rangle$ on strings over letters $a$ and $b$.
Next, we show that no two different states in $Q$ are equivalent.
Let $\langle p, t\rangle$ and $\langle p', t' \rangle$ be two different states in $Q$.
We consider the following two cases:
\begin{enumerate}
\item $p \neq p'$.
Without loss of generality, we assume $|p| \ge |p'|$.
Then, there exists a state $ i \in p - p'$.
It is clear that string $a^{m-1-i}dc^n$ is accepted by $C$ starting from state $\langle p, t\rangle$, but it is not accepted starting from state $\langle p', t' \rangle$.
\item $p = p'$ and $t \neq t'$.
We may assume that $|t| \ge |t'|$ and let $j \in t-t'$.
Then, state $\langle p, t \rangle$ reaches a final state on string $c^{n-1-j}$, but state $\langle p', t' \rangle$ does not on the same string.
Note that, when $m-1 \in p$, we can say that $j \neq 0$.
\end{enumerate}
Due to (I) and (II), DFA $C$ has at least $5 \cdot 2^{m+n-3} - 2^{m-1} - 2^n +1$ reachable states, no two of which are equivalent.
\end{proof}
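Again, the counting in this proof can be cross-checked computationally. The sketch below (not from the paper; the restart rule in the code encodes the $\lambda$-moves of the star NFA described in the proof of Theorem~\ref{thm:star-cat-upper}) applies the construction to the witness DFAs $A$ and $B$ above and counts the reachable states.

```python
def reachable_count(m, n):
    # Witness DFAs A and B over {a, b, c, d}; F_1 = {m-1}, s_1 = s_2 = 0.
    d1 = {('a', i): (i + 1) % m for i in range(m)}
    d1[('b', 0)] = 0
    d1.update({('b', i): (i + 1) % m for i in range(1, m)})
    d1.update({(x, i): i for x in 'cd' for i in range(m)})
    d2 = {(x, i): i for x in 'ab' for i in range(n)}
    d2.update({('c', i): (i + 1) % n for i in range(n)})
    d2.update({('d', i): 0 for i in range(n)})

    final1 = m - 1
    start = (frozenset({0}), frozenset({0}))
    seen, todo = {start}, [start]
    while todo:
        p, t = todo.pop()
        for x in 'abcd':
            p2 = frozenset(d1[(x, q)] for q in p)
            if final1 in p2:
                p2 = p2 | {0}        # lambda-move of the star: restart A ...
            t2 = frozenset(d2[(x, q)] for q in t)
            if final1 in p2:
                t2 = t2 | {0}        # ... and (re)start B
            if (p2, t2) not in seen:
                seen.add((p2, t2))
                todo.append((p2, t2))
    return len(seen)

# The count matches 5 * 2^(m+n-3) - 2^(m-1) - 2^n + 1 for small m, n.
for m in range(2, 5):
    for n in range(2, 5):
        assert reachable_count(m, n) == 5 * 2 ** (m + n - 3) - 2 ** (m - 1) - 2 ** n + 1
print('ok')
```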
\section{Conclusion}\label{sec:conclusion}
In this paper, we have studied the state complexities of two combined operations: reversal combined with catenation and star combined with catenation.
We showed that, due to the structural properties of DFAs obtained from reversal and star, the state complexities of these two combined operations are not equal but close to the mathematical compositions of the state complexities of their individual participating operations.
\end{document}
\begin{document}
\title{Energy scaling laws for geometrically linear elasticity models for microstructures in shape memory alloys$^*$}
\author{Sergio Conti$^1$, Johannes Diermeier$^1$, David Melching$^2$ and Barbara Zwicknagl$^3$}
\date{March 8, 2020}
\maketitle
\renewcommand*{\thefootnote}{*}
\footnotetext{This work was mainly done while all authors were at the University of Bonn and
was partially supported by the {\em Deutsche Forschungsgemeinschaft} via
project 211504053 - SFB 1060/A06.}
\renewcommand*{\thefootnote}{\arabic{footnote}}
\footnotetext[1]{
Institut f\"ur Angewandte Mathematik, Universit\"at Bonn,
53115 Bonn, Germany}
\footnotetext[2]{Fakult\"at f\"ur Mathematik, Universit\"at Wien,
1090 Wien, Austria}
\footnotetext[3]{
Institut f\"ur Mathematik, Humboldt-Universit\"at zu Berlin,
10117 Berlin, Germany}
\begin{abstract} We consider a singularly-perturbed two-well problem in the context of planar geometrically linear elasticity to model a rectangular martensitic nucleus in an austenitic matrix. We derive the scaling regimes for the minimal energy in terms of the problem parameters, which represent the shape of the nucleus, the quotient of the elastic moduli of the two phases, the surface energy constant, and the volume fraction of the two martensitic variants. We identify several different scaling regimes, which are distinguished either by the exponents in the parameters, or by logarithmic corrections, for which we have matching upper and lower bounds.
\end{abstract}
\section{Introduction}
Solid-solid phase transitions are a classical model problem in the variational study of pattern formation in solids, both in the context of the theory of relaxation and in the study of singularly perturbed problems.
Their study has led on the one side to many important abstract developments in the calculus of variations, on the other side to a mathematical explanation of the physical behavior of shape-memory alloys and other materials with peculiar properties \cite{ball-james:87,ball-james:92,bhattacharya:92,MuellerLectureNotes,ball:04,kohn:06,James2019}.
The basic model is a vectorial, nonconvex variational problem, where the integrand depends on the gradient of the deformation field.
The study of the macroscopic material behavior is strongly coupled to the development of the theory of quasiconvexity and relaxation \cite{MuellerLectureNotes,Dacorognabuch},
and focuses on average properties of the microstructures without resolving the geometric details and the microscopic length scales.
A finer analysis requires the introduction of a length scale, typically in the form of a small parameter times a convex function of a second gradient, which penalizes interfaces.
The resulting singularly-perturbed nonconvex problem contains a
scale dependence and is much more difficult to study in detail; a numerical treatment is in most cases not feasible either.
Starting with the papers by Kohn and M\"uller \cite{kohn-mueller:92,kohn-mueller:94} it has become clear that the key
property is the scaling of the optimal energy in terms of the parameters present in the problem, and that it is appropriate
to start by focusing on the exponents and ignoring the prefactor. One obtains mesoscopic phase diagrams which characterize the different
regimes of material behavior and the qualitative properties of the microstructure \cite{kohn:06,CapellaOtto2009,knuepfer-kohn:11,knuepfer-kohn-otto:13}.
The techniques developed for singularly-perturbed functionals modeling martensitic microstructures have proven useful also in the study
of a variety of other physical problems, such as magnetic microstructures \cite{choksi-kohn:98,choksi-et-al:98,knuepfer-muratov:11}, flux tubes in superconductors \cite{CKO03,conti-et-al:15,ContiGoldmanOttoSerfaty2018},
diblock copolymers \cite{Choksi01}, wrinkling in thin elastic films \cite{JinSternberg2,belgacem-et-al:02,bella-kohn:14},
and compliance minimization \cite{Kohn-Wirth:15}.
One aspect which is very important {for} practical applications of materials with solid-solid phase transitions is the detailed study of the transformation path from austenite to martensite and the corresponding hysteresis.
It is known that the amplitude of the hysteresis cycle crucially depends on the microstructures that emerge during nucleation \cite{cui-et-al:06,zjm:09,zwicknagl:14}.
Specifically, transition-state theory explains that the transformation from austenite to martensite is strongly influenced by the energetics of the critical nucleus, which is a small inclusion of martensite in an austenitic matrix.
{It is known that stress-free inclusions with interfaces of finite total area (or length, in two dimensions) are possible only for special material parameters \cite{dolzmann-mueller:95,kirchheim:03,knuepfer-kohn:11,knuepfer-kohn-otto:13,rueland:16,rueland:16_2,ContiKlarZwicknagl2017,cesana-et-al:19}.}
We investigate here a variational model for the formation of microstructures in a martensitic nucleus embedded in an austenitic matrix.
The mechanical framework is the theory of geometrically linear elasticity, the mathematical framework is a singularly perturbed nonconvex vectorial functional.
Before discussing the large body of mathematical literature that has been devoted to variants of this problem in the last decades, let us briefly introduce the setting.
For simplicity we work in two spatial dimensions and consider a large body, identified with $\mathbb{R}^2$,
which is mostly austenitic with a bounded martensitic inclusion $\omega\subset\subset\mathbb{R}^2$.
The inclusion $\omega$ is selected by a process slower than elastic equilibration and therefore, for the present purposes, fixed.
We take the austenite state as reference configuration and denote by $u:\mathbb{R}^2\to\mathbb{R}^2$ the elastic displacement.
We assume that two variants of martensite are relevant, which are characterized by strains $A, B\in\mathbb{R}^{2\times 2}_\text{sym}$.
Experimentally it is known that martensitic transformations are to a very good approximation volume preserving \cite{bhattacharya:92}, therefore we assume $\operatorname{Tr} A=\operatorname{Tr} B=0$.
Relaxation theory predicts zero macroscopic energy if $A$ and $B$ are compatible
and austenite can be realized as a weighted average of the two martensitic variants. This means
that
there are matrices $\hat A, \hat B\in\mathbb{R}^{2\times 2}$
with $\hat A-A$ and $\hat B-B$ skew-symmetric
such that
$\operatorname{rank} (\hat A-\hat B)=1$ and
$(1-\theta) \hat A+\theta \hat B=0$ for some $\theta\in (0,1)$.
In this situation, a finer analysis, which includes a singular perturbation regularizing the microstructure, is necessary in order to understand the detailed material behavior.
Since austenite/martensite interfaces which are not aligned with the rank-one direction have very large energy, one {expects} the nucleus to be elongated in the rank-one direction.
For mathematical simplicity it is convenient to further restrict the geometry. Since $\hat A$ and $\hat B$ are rank-one connected, we have $\hat A-\hat B=c\otimes n$ for some $c,n\in\mathbb{R}^2$. By scaling we can assume $|c|=|n|=1$.
From $\operatorname{Tr} A=\operatorname{Tr} B=0$ one obtains
$\operatorname{Tr} \hat A=\operatorname{Tr} \hat B=0$ and therefore
$c\cdot n=0$, and from $(1-\theta) \hat A+\theta \hat B=0$ one obtains
$\hat A=\theta c\otimes n$, $\hat B=(\theta-1)c\otimes n$. By a change of variables one can reduce to the case that
$\theta\le\frac12$, $c=e_1$ and $n=e_2$. We shall then assume that the martensitic domain is a rectangle elongated along $e_1$, and by scaling it suffices to consider
\begin{eqnarray}\label{eq:Omega}
\Omega_{2L}:=(0,2L)\times(0,1)\subset\mathbb{R}^2.
\end{eqnarray}
The same pair of matrices allows for a second rank-one connection, rotated by 90 degrees. Therefore we can assume without loss of generality that
\[L\geq \frac 12. \]
In particular, both edges of $\Omega_{2L}$ are aligned with the habit planes of exact austenite/martensite interfaces.
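For the convenience of the reader, we record the short computation behind the normal form of $\hat A$ and $\hat B$ used above: inserting $\hat A=\hat B+c\otimes n$ into $(1-\theta)\hat A+\theta\hat B=0$ yields
\[
0=(1-\theta)(\hat B+c\otimes n)+\theta\hat B=\hat B+(1-\theta)\,c\otimes n ,
\]
hence $\hat B=(\theta-1)\,c\otimes n$ and $\hat A=\hat B+c\otimes n=\theta\,c\otimes n$. Since $\operatorname{Tr}(c\otimes n)=c\cdot n$ and $\operatorname{Tr}\hat A=\theta\,c\cdot n=0$ with $\theta\in(0,1)$, one recovers $c\cdot n=0$.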
In the austenite the elastic energy vanishes if
the strain $e(u)$, defined by
\begin{equation}\label{eqdefeu}
e(u):=\left(\nabla u\right)_{\text{sym}}:=\frac{1}{2}(\nabla u+\nabla^Tu),
\end{equation}
vanishes; in the martensite if $e(u)\in\{A,B\}$, and (assuming sufficient regularity) grows quadratically close to these minima. The relevant constructions have strains which are
not larger than a multiple of the order parameter, hence we do not expect the behavior of the energy at infinity to be important for the scaling results we shall derive, provided sufficient coercivity is present. For simplicity we restrict to quadratic energies, characterized as the squared distance from the energy wells.
We use $|a|:=(\text{Tr } a^Ta)^{1/2}=(\sum a_{ij}^2)^{1/2}$ for the Euclidean norm of a matrix and $\operatorname{dist}(a,M):=\inf\{|a-m|:m\in M\}$ for the distance of a matrix to a set and consider the
functional $J:W_\mathrm {loc}^{1,2}(\mathbb{R}^2;\mathbb{R}^2)\to\mathbb{R}\cup\{\infty\}$ given by
\begin{eqnarray}\label{eq:funcfull}
&&J(u):=\mu\int_{\mathbb{R}^2\setminus\Omega_{2L}}|e(u)|^2\text{ d} \mathcal{L}^2+\int_{\Omega_{2L}}\operatorname{dist}^2\left(e( u),K\right)\text{ d} \mathcal{L}^2+\varepsilon|D^2u|(\Omega_{2L}).
\end{eqnarray}
Let us briefly explain the terms in the functional.
The first term in $J(u)$ represents the elastic energy of the surrounding austenite, where $\mu$ stands for the ratio of typical elastic moduli of austenite and martensite.
This term favors configurations whose gradients are approximately skew symmetric. The second term measures the elastic energy inside the martensitic nucleus, which vanishes on
\begin{eqnarray}\label{eq:K}
K:=\left\{\frac{1}{2}\left(\begin{array}{c c}
0&\theta\\
\theta&0
\end{array}\right),\ \frac{1}{2}\left(\begin{array}{c c}
0&-1+\theta\\
-1+\theta&0
\end{array}\right)\right\}{.}
\end{eqnarray}
The parameter $\theta\in(0,1/2]$ measures the compatibility between this majority martensitic variant and the surrounding austenite.
The so-measured compatibility has been found to play an important role in the control of the thermal hysteresis of the phase transition (see e.g. \cite{james-zhang:05,cui-et-al:06,zjm:09} and the references therein).
Of particular interest is the almost compatible case $\theta\ll1$ which corresponds to particularly low hysteresis
\cite{cui-et-al:06,zjm:09,zwicknagl:14}.
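With the normalization $c=e_1$, $n=e_2$, the two wells in \eqref{eq:K} are precisely the symmetric parts of $\hat A=\theta\,e_1\otimes e_2$ and $\hat B=(\theta-1)\,e_1\otimes e_2$, i.e.,
\[
(\hat A)_{\text{sym}}=\frac{\theta}{2}\left(\begin{array}{c c}
0&1\\
1&0
\end{array}\right),\qquad
(\hat B)_{\text{sym}}=\frac{\theta-1}{2}\left(\begin{array}{c c}
0&1\\
1&0
\end{array}\right),
\]
and since $\hat A-A$ and $\hat B-B$ are skew-symmetric, these coincide with the martensitic strains $A$ and $B$.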
The third term in \eqref{eq:funcfull}
is a singular perturbation that regularizes the nonconvex part of the functional. It
prevents too fine oscillations between the martensitic variants in $\Omega_{2L}$, and can be related to an interfacial energy, $\varepsilon>0$ being a typical surface energy constant per unit length.
Whereas one could physically imagine that similar terms are present also in the austenitic phase, they are normally not included
since the convex austenitic energy does not need regularization. Although we expect most of our results to carry over to a setting in which this term is extended to $\mathbb{R}^2$, for brevity we do not pursue this investigation here.
The energy of the austenite/martensite interface depends only on the shape of the inclusion, which is fixed here, and is hence irrelevant for the present purposes.
We denote by $D^2u$ the second distributional derivative of $u$. If it is a measure, then we denote by
$|D^2u|(\Omega_{2L})$ {the} total variation of $D^2u$, otherwise we set
$|D^2u|(\Omega_{2L}):=\infty$.
Existence of minimizers can be readily established by the direct method of the calculus of variations and will not be discussed explicitly, as it is not important for the study of the scaling of the energy.
\\
To determine exact minimizers of functionals like \eqref{eq:funcfull} is generally not possible, and we follow the strategy to determine the scaling regimes of the minima in terms of the problem parameters $L$, $\mu$, $\theta$ and $\varepsilon$.
{We remark that $L\to\infty$ corresponds to the long-inclusion limit, which is relevant due to the compatibility condition; $\theta\to0$ is the almost-compatible limit, which is the one of low-hysteresis materials; $\varepsilon\to0$ is the large-body limit, in which complex structures arise.}
Our main result is the following scaling law for the minimal energy.
For an overview over the individual regimes we refer to Section {\ref{sec:tabulars}} below. Explicit constructions are given in Section {\ref{sec:ub}}.
{We remark that the same result {holds for} nonlinear energy densities $W_A(e(u))$, $W_M(e(u))$ with $\frac1c|a|^2\le W_A(a)\le c |a|^2$ and $\frac1c \operatorname{dist}^2(a,K)\le W_M(a)\le c \operatorname{dist}^2(a,K)$ for all $a\in\mathbb{R}^{2\times 2}$.}
\begin{thrm}\label{th:main}
There exists a constant $c>0$ such that for all {$\mu>0$, $\varepsilon>0$, $\theta\in (0,1/2]$, and $L\geq1/2$}
\begin{align*}
&\frac{1}{c}\mathcal{I}({\mu,\varepsilon,\theta,L})
\leq \min_u J(u)
\leq c\mathcal{I}({\mu,\varepsilon,\theta,L}),
\end{align*}
where {$J$ was defined in (\ref{eq:Omega}), (\ref{eqdefeu}), (\ref{eq:funcfull}), (\ref{eq:K}) and}
\begin{align*}
{\mathcal{I}({\mu,\varepsilon,\theta,L})}:= \min\Big{\{ }&
\theta^2 L, & {\text{(constant)}}\\
&\mu \theta^2 \ln (3+L), & {\text{(affine)}} \\
&\mu \theta^2 \ln (3+ \frac L \mu ) + \varepsilon\theta ,&{\text{(linear interpolation)}}\\
&{\mu \theta^2 \ln (3+ \frac { \varepsilon {L}}{\mu \theta^{2}})
+\mu\theta^2\ln (3+\frac{\varepsilon}{\mu^2\theta^2})
+ \varepsilon^{1/2}\theta^{3/2},} & {\text{(single truncated branching)}}\\
&{\mu \theta^2 \ln (3+ \frac { \varepsilon {L}}{\mu \theta^{2}})
+\mu\theta^2\ln (3+\frac{\theta}{\mu})}
, & {\text{(corner laminate)}}\\
&\varepsilon^{2/3}\theta^{2/3} L^{1/3} +\varepsilon L, & {\text{(branching)}}\\
&\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac 1 {\theta^{2}}))^{1/2}+\varepsilon L, & {\text{(laminate)}} \\
&\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac{\varepsilon}{ \mu^{3}\theta^{2}{L}}))^{1/2}+\varepsilon L
\Big{\}} & {\text{(two-scale branching).}}
\end{align*}
\end{thrm}
{
\begin{proof}
{The upper bound follows from Theorem \ref{th:upperbound}, and the lower bound follows from Theorem \ref{th:lowerbound}.}
\end{proof}
}
The proof of Theorem \ref{th:main} is split into two main steps.
In Section \ref{sec:upperbound}
we combine constructions from the literature with some new ones for the upper bound.
The ansatz-free lower bound is proven in Section \ref{sec:lowerbound}.
{We remark that the constant $3$ inside the terms of the form $\ln(3+x)$ is, to a certain degree, arbitrary. We choose 3 so that $\ln (3+x)\ge 1$ for all $x\ge0$. This simplifies some estimates in the proofs. The constant 3 could be replaced by 2 or by any number larger than 1, changing correspondingly the constant $c$ in the statement.}\\
{We point out that in contrast to previous works on a single austenite/martensite interface (see e.g. \cite{conti:06,zwicknagl:14,conti-zwicknagl:16}) there is no relevant regime which corresponds to a construction with a single laminate near the left and right boundaries of the nucleus.}
\subsection{Comparison to the literature and new contributions}
We point out that the study of microstructures in shape memory alloys by means of the Calculus of Variations has a long history, and we generalize and build on several earlier works that we will briefly discuss to point out our new contributions. Our proof uses techniques developed in the study of scalar-valued models taking into account all problem parameters on the one hand, and of vectorial models in specific parameter regimes on the other hand, combined with several new arguments. As will be outlined in more detail below, the latter include in particular
\begin{itemize}
\item the careful treatment of the elastic energy in the austenite part, which completely surrounds the martensitic inclusion. This introduces new difficulties in both the upper and the lower bound of Theorem \ref{th:main}; and
\item the explicit use of the full geometrically linearized energy instead of the scalar simplification, {which requires e.g.} a $BD$-type slicing argument in the proof of the lower bound.
\end{itemize}
In the '90s, Kohn \& M\"uller proposed a reduced scalar-valued model for the formation of microstructures near interfaces between austenite and twinned martensite \cite{kohn-mueller:92,kohn-mueller:94}. These models are by now well understood in terms of scaling of the minimal energy and more quantitative properties of minimizers in specific regimes ({see} e.g. \cite{conti:00,conti:06,diermeier:10,zwicknagl:14,cdz:15,diermeier:16,conti-zwicknagl:16}). Roughly speaking, depending on the problem parameters, minimizers are expected to be uniform, or show laminated structures, or branched patterns, where the different martensitic variants finely mix close to the interface. Compared to the setting we consider here, in these earlier works there were two main simplifications: \\
First, only one component of the displacement has been taken into account, i.e., only functions $u=(u_1,u_2)$ with $u_2=0$ are considered, which makes the problem scalar (similarly for $u_1=0$). On a more technical level, the first two terms of the functional \eqref{eq:funcfull} then in particular provide control on the full gradient of the displacements. In our more general setting, only the symmetric part of the gradient is directly controlled by the functional, and this introduces several additional difficulties in the proof of the lower bound (see also the discussion of vectorial models below). Nevertheless, the constructions we use to prove the upper bound here are in fact scalar valued. Some of them build upon constructions introduced in the above-mentioned references,
but others are new, as for example the ones for the corner laminate and the single truncated branching; see the proof of Theorem \ref{th:upperbound} in
Section \ref{sec:upperbound}. \\
Second, in the above references, only one austenite/martensite interface is considered. That is, in the elastic energy of the austenite part (the first term in \eqref{eq:funcfull}), only the contribution from $(-\infty,0)\times(0,1)$ is taken into account. If we restricted the energy in \eqref{eq:funcfull} to this strip, there would be configurations with vanishing total energy, e.g.,
\[u(x)=\begin{cases}
0,&\text{\ if \ }x\in (-\infty,0]\times (0,1),\\
(0,\theta x_1),&\text{\ if\ }x\in(0,2L) \times (0,1).
\end{cases} \]
The functional \eqref{eq:funcfull} is more nonlocal than the scalar valued models in the sense that interactions between the traces at the upper and lower boundaries of $\Omega_{2L}$ (captured by the elastic energy of the austenite part) make it sometimes more favorable to pay elastic energy inside $\Omega_{2L}$ to release elastic energy in the austenite part. Let us consider a typical example. Deep in $\Omega_{2L}$, in the simplified setting one expects the affine configuration $u_1(x)=\theta x_2$, which has zero energy in $\Omega_{2L}$. In our setting instead, by Rellich's trace theorem, this configuration bears elastic energy in the austenite part. It therefore competes with a single laminate, which requires only surface energy in the interior of $\Omega_{2L}$.
The proof of the lower bound correspondingly needs a treatment of the interplay of the energy in the austenite and the martensite on many different scales. This is done by line integrals, inspired by the arguments used for proving Korn-Poincar\'e inequalities in $BD$,
see for example Lemma \ref{lemmaystin}
and Lemma \ref{lem:costsu2inA} below.
One important ingredient is a separate treatment of the austenite part, where one controls
three of the four components of $\nabla u$ (but with a coefficient $\mu$), and the martensite part, where only the two diagonal entries are
controlled independently of the variant.
\\[1mm]
On the other hand, we extend techniques developed in \cite{chan:13,chan-conti:14,chan-conti:14-1}. In these works, the geometrically nonlinear analogue to \eqref{eq:funcfull} has been considered for the case $\theta=\frac{1}{2}$ and hard austenite $\mu=\infty$. Some of their techniques, in particular related to localization in the proof of the lower bound, have been adapted to the geometrically linear setting and refined in two of the authors' Master's theses \cite{diermeier:13,melching:15}, on which we build here. A main difficulty in our setting compared to those works (in addition to the `non-locality' due to the elastic energy in the austenite part discussed above) lies in the treatment of small $\theta$. As pointed out in \cite{conti-zwicknagl:16}, such localization techniques are not sufficient to obtain the precise scaling of laminated structures, since the logarithmic corrections require a rather precise understanding of the geometry of the set in which
the minority variant is active. Here, a careful $BD$-type slicing argument for almost diagonal slices allows us to combine the techniques from the studies of vectorial models with techniques developed to treat small volume fractions in the scalar valued case. {In particular, test functions need to be obtained that reproduce the fine-scale structure of the martensite and have a controlled behavior at the boundaries, see Lemma \ref{lem:thetasmallbranching} where for example separate test functions on the top and bottom boundaries need to be constructed for $\theta\ge\mu$ (the {parameter range} with corner laminates) and $\theta\le \mu$ (without corner laminates){, and also} Lemma \ref{lemmabdryln} where the corner logarithm is treated by a test function on the boundary.
At the same time the ``horizontal'' interpolation between different variants needs to be localized in order to capture the optimal power of $\theta$ (Lemma \ref{lemmainterpolationestim}).}\\
Analytical results on microstructures for related three-dimensional models based on geometrically linearized elasticity functionals were obtained in \cite{CapellaOtto2009,CapellaOtto2012,TS} for a cubic-to-tetragonal phase transition and in \cite{rueland:16,rueland:16_2} for a cubic-to-orthorhombic transition. As in our case, the focus there lies on planar austenite/martensite interfaces. Some results that take into account also the volume dependence of the energy of a martensitic inclusion (by penalizing the area of the austenite/martensite interfaces) and the resulting optimal shapes of nuclei (which typically differ significantly from a rectangle) were given in \cite{knuepfer-kohn:11} for a two-well potential and in \cite{knuepfer-kohn-otto:13,bella-goldman:15} for the cubic-to-tetragonal transition in whole space and domains with generic corners, respectively. We note that these works predict a different scaling behavior. We hope that a precise understanding of microstructures in a fixed domain as derived {here}
provides also a step towards a better understanding of the full nucleation problem.
\subsection{Notation}
Throughout the text, we denote by $c$ positive constants that may change from expression to expression; we write
$x\lesssim y$ to state that there is $c>0$ such that $x\le cy$. We use capitalized letters and $c_i$ with indices $i\in\mathbb{N}$ to denote specific fixed constants that will not be changed throughout the text.\\
For a measurable set $A\subset \mathbb{R}^d$ {with $\mathcal{L}^d(A)\neq 0$} and a function $w\in L^1(A)$, we denote the average by $\langle w\rangle_{A}:=\mathcal{L}^d(A)^{-1}\int_Aw\text{\,d}\mathcal{L}^d$.
{{\bf Energy.}}
Let us first fix a notation for the function space
\[
\mathcal{X}:=\left\{ u\in W_{\text{loc}}^{1,2}(\mathbb{R}^2,\mathbb{R}^2)\,:\,\partial_i u_j \in BV(\Omega_{2L}) \text{ for } i,j \in \{1,2\},\, \nabla u\in L^2(\mathbb{R}^2,\mathbb{R}^{2\times 2})\right\}
\]
on which the energy is finite. In the proofs it will be convenient to consider a slightly modified energy functional where the symmetrized gradient in the elastic energy of the austenite part is replaced by the full gradient. This does not change the scaling regimes of the minimal energy due to
Korn's inequality {since}
the constant in Korn's inequality in $\mathbb{R}^2 \setminus \Omega_{2L}$ can be chosen independently of $L$, i.e., there is a constant $C_K>0$ independent of $L$ such that
\begin{eqnarray}\label{eq:korn}
\min_{A\in\text{Skew}(2)}\|\nabla u-A\|_{L^2(\mathbb{R}^2\setminus \Omega_{2L})}\leq C_K\|e(u)\|_{L^2(\mathbb{R}^2\setminus \Omega_{2L})} \text{\ for all\ }u\in W_{\text{loc}}^{1,2}(\mathbb{R}^2,\mathbb{R}^2).
\end{eqnarray}
To see this, we use the decomposition
\[\mathbb{R}^2\setminus\Omega_{2L}=[(-\infty,0)\times\mathbb{R}]\cup[\mathbb{R}\times(1,\infty)]\cup[(2L,\infty)\times\mathbb{R}]\cup[\mathbb{R}\times(-\infty,0)]. \]
Each one of the sets on the right-hand side is a half-plane, and therefore on each of them Korn's inequality holds with a constant independent of $L$ (see \cite{kondratev-oleinik:88}). Further, any two consecutive sets intersect in a set of infinite measure. Hence, for any function $u$, Korn's inequality in each of the four parts holds with the same skew-symmetric matrix $A$, and therefore \eqref{eq:korn} holds.
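The gluing step can be made quantitative: if $H_i$ and $H_j$ are two of these half-planes whose intersection has infinite measure, and $A_i, A_j\in\text{Skew}(2)$ are the matrices provided by Korn's inequality on $H_i$ and $H_j$, then (assuming $e(u)\in L^2(\mathbb{R}^2\setminus\Omega_{2L})$, the only relevant case)
\[
\mathcal{L}^2(H_i\cap H_j)^{1/2}\,|A_i-A_j|
=\|A_i-A_j\|_{L^2(H_i\cap H_j)}
\le \|\nabla u-A_i\|_{L^2(H_i)}+\|\nabla u-A_j\|_{L^2(H_j)}<\infty,
\]
so that $A_i=A_j$.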
We may therefore without changing the qualitative scaling behavior replace the symmetrized gradient in the first term in (\ref{eq:funcfull}) by the full gradient and define
\[
I(u):=\mu\int_{\mathbb{R}^2\setminus\Omega_{2L}}|\nabla u|^2\text{ d} \mathcal{L}^2+\int_{\Omega_{2L}} \min\left\{ |e(u)-\theta e_1 \odot e_2|^2,|e(u)+(1-\theta) e_1 \odot e_2|^2\right\}\text{ d} \mathcal{L}^2+\varepsilon|D^2u|(\Omega_{2L})
\]
where
\[e_1\odot e_2 := (e_1\otimes e_2)_{\text{sym}}.\]
Sometimes it will be useful to consider the energy only on parts of the domain. For any Borel set $A\subset \mathbb{R}^2$ we define
\begin{align*}
I_{A}(u)&:=\mu\int_{A\setminus\Omega_{2L}}|\nabla u|^2\text{ d} \mathcal{L}^2+\\
&+\int_{A\cap\Omega_{2L}} \min\left\{ |e(u)-\theta e_1 \odot e_2|^2,|e(u)+(1-\theta) e_1 \odot e_2|^2\right\}\text{ d} \mathcal{L}^2+\varepsilon|D^2u|(A\cap \Omega_{2L}),\\
I^\mathrm {int}(u)&:= I_{\Omega_{2L}}(u) \qquad \text{ and } \qquad
I^\mathrm {ext}(u):= I_{\mathbb{R}^2\setminus \Omega_{2L}}(u) .
\end{align*}
{{\bf The $H^{1/2}$-norm.}}
It has proven useful to interpret the energetic contribution in the austenite region as a trace norm at the austenite/martensite interface.
For $\rho >0$ and $u_0 \in L^2((0,\rho))$ we define the $H^{1/2}$-seminorm by
\[
[u_0]_{H^{1/2}((0,\rho))}^2:= \inf \Big\{ \int_{-\infty}^0\int_0^{\rho} |\nabla v (x_1,x_2)|^2 \text{ d}x_2\text{ d}x_1 \,:\, v(0,x_2)=u_0(x_2), v\in W^{1,2}_{\text{loc}}((-\infty,0)\times(0,\rho)) \Big\}.
\]
The subspace of $L^2((0,\rho))$ on which this seminorm is finite is called $H^{1/2}((0,\rho))$.
We state a variant of Lemma 4.1 from \cite{conti-zwicknagl:16}.
\begin{lmm}\label{lem:interpolH12}
{Let $\omega\subset\subset(0,1)$.} Then there is $c=c(\omega)>0$ such that for all $v \in H^{1/2}((0,1))$ and $\psi \in H^{1/2}((0,1))\cap H^1((0,1))$ {with $\supp \psi\subset\omega$ one has}
\[
\int_0^1 v(t) \psi'(t) \text{ d} t \leq c [v]_{H^{1/2}((0,1))} [\psi]_{H^{1/2}((0,1))} .
\]
\end{lmm}
{{\bf Remark.} Lemma 4.1 from \cite{conti-zwicknagl:16} incorrectly does not state
that $c$ depends on $\omega$ (or on $\supp\psi$). This is not relevant for the usage in
\cite{conti-zwicknagl:16}, since the test function $\psi$ can be constructed to be supported in $(1/12, 11/12)$.}
\begin{proof}
If $\supp v\subset\subset(0,1)$, then the assertion is readily proven by Fourier series,
\begin{equation*}
\Big|\sum_k \hat v_k^*\, i k\, \hat\psi_k \Big| \le \Big(\sum_k |k|\, |\hat v_k|^2\Big)^{1/2} \Big(\sum_k |k|\, |\hat\psi_k|^2\Big)^{1/2}.
\end{equation*}
Otherwise, one fixes $\varphi\in C^\infty_c((0,1))$ with $\varphi=1$ on $\omega$, applies
the Poincar\'e estimate for $H^{1/2}$ to obtain $\|v-v_0\|_{L^2((0,1))}\le c\,[v]_{H^{1/2}((0,1))}$ for some $v_0\in\mathbb{R}$,
and {then applies}
the previous assertion to $\tilde v:=\varphi (v-v_0)$ and $\psi$.
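For completeness we note why this reduction suffices: since $\supp\psi\subset\omega$ and $\varphi=1$ on $\omega$, one has
\[
\int_0^1 v\,\psi'\,\text{ d}t=\int_0^1 (v-v_0)\,\psi'\,\text{ d}t=\int_0^1 \tilde v\,\psi'\,\text{ d}t,
\]
where the first equality follows from $\int_0^1\psi'\,\text{ d}t=0$. Moreover, the product rule for $H^{1/2}$ gives $[\tilde v]_{H^{1/2}((0,1))}\le c(\varphi)\big(\|v-v_0\|_{L^2((0,1))}+[v-v_0]_{H^{1/2}((0,1))}\big)\le c(\omega)\,[v]_{H^{1/2}((0,1))}$, which is the only point where the constant depends on $\omega$.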
\end{proof}
In our proof of the lower bound, we shall use a related estimate given in the next lemma.
\begin{lmm}\label{lemmah12app} Let $\omega\subset\mathbb{R}^2$ be a bounded Lipschitz set, $\Psi\in\Lip(\bar\omega)$, $v\in W^{1,2}(\omega)$. Then
\begin{equation*}
\left| \int_{\partial\omega} v\partial_\tau \Psi \dcalH^1 \right| \le \|\nabla\Psi\|_{L^2(\omega)} \|\nabla v\|_{L^2(\omega)},
\end{equation*}
where $\partial_\tau$ denotes the tangential derivative and $v$ is identified with its trace on $\partial\omega$.
\end{lmm}
\begin{proof}
Assume first that $\Psi\in C^2(\bar\omega)$. We define $f\in W^{1,2}(\omega;\mathbb{R}^2)$ by
$f:=v\,(\nabla\Psi)^\perp =(-v\partial_2\Psi, v\partial_1\Psi)$ and observe that $\Div f= \nabla\Psi\times \nabla v=\partial_2v\,\partial_1\Psi-\partial_1v\,\partial_2\Psi$. Then
\begin{equation*}
\left|\int_{\partial\omega} v \, \partial_\tau \Psi \dcalH^1\right|
=\left|\int_{\partial\omega} f\cdot \nu\dcalH^1\right|
=\left|\int_{\omega} \Div f\dxy\right|
=\left|\int_{\omega} \nabla\Psi\times \nabla v\dxy\right|
\end{equation*}
together with the Cauchy--Schwarz inequality implies the result.
In the general case, we choose a sequence of smooth functions $\Psi_\varepsilon\in C^\infty(\mathbb{R}^2)$ such that $\|\nabla \Psi_\varepsilon\|_{L^\infty(\omega)}
\le \|\nabla \Psi\|_{L^\infty(\omega)}$, with $\Psi_\varepsilon$ converging uniformly and strongly in $W^{1,2}(\omega)$ to $\Psi$.
Then $\partial_\tau \Psi_\varepsilon$ converges weakly{-$\ast$} to $\partial_\tau \Psi$ in $L^\infty(\partial\omega)$, therefore
\begin{equation*}
\left|\int_{\partial\omega} v \, \partial_\tau \Psi \dcalH^1\right|
=\left|\lim_{\varepsilon\to0} \int_{\partial\omega} v \, \partial_\tau \Psi_\varepsilon \dcalH^1\right|
=\left|\lim_{\varepsilon\to0} \int_{\omega} \nabla\Psi_\varepsilon\times \nabla v\dxy\right|
=\left|\int_{\omega} \nabla\Psi\times \nabla v\dxy\right|.
\end{equation*}
\end{proof}
We will moreover use the following variant of Lemma 2 in \cite{conti:06} that has also been used in the proof of Theorem 1 in \cite{zwicknagl:14}:
\begin{lmm}\label{lem:interpolSC}
There is $c>0$ such that for all $\rho>0$, $v\in H^{1/2}((0,\rho))$, $a\in\mathbb{R}$, and $b\in \mathbb{R}$ there holds
\[
c\rho^2 a^2 \leq |a|\int_0^\rho |v(y)-a y - b|\text{ d}y + [v]^2_{H^{1/2}((0,\rho))}.
\]
\end{lmm}
{\bf{BD-type slicing.}}
For any $u:\mathbb{R}^2\rightarrow \mathbb{R}^2$, $x_1\in \mathbb{R}$, and $\xi\in \mathbb{R}^2$ with $\xi_1\neq 0\neq \xi_2$, we define the one-dimensional, scalar-valued function on the slice $(x_1,0)+\mathbb{R}\xi$ as
\begin{align}\label{eq:defuxi}
u_{x_1}^{\xi}(s):= \frac 1 {\xi_1\xi_2}u((x_1,0) +s\xi)\cdot \xi.
\end{align}
Notice that this definition is {motivated by the characterization of $BD$ via suitable one-dimensional sections} as introduced by Ambrosio, Coscia and Dal Maso {\cite[Proposition 3.2]{ambrosio-et-al:97}}. {For convenience, our definition} differs {from the one given in the above reference} by the prefactor $\frac 1 {\xi_1\xi_2}$.
If $u\in W_{\text{loc}}^{1,2}(\mathbb{R}^2,\mathbb{R}^2)$ we have $u_{x_1}^\xi\in W_{\text{loc}}^{1,2}(\mathbb{R})$ for almost every $x_1\in \mathbb{R}$, and
\begin{align}\label{eq:estduxiv}
{u_{x_1}^\xi}'(s)
&= \left(\frac {\xi_1}{\xi_2} \partial_1 u_1 + \partial_2 u_1+\partial_1 u_2 +\frac{\xi_2}{\xi_1} \partial_2 u_2\right)((x_1,0)+s\xi).
\end{align}
Throughout this work we will fix the direction $\xi:=(\frac 14,1) $ and define for some $x_1\in (0,2L-\xi_1)$ the almost diagonal segment with base point $(x_1,0)$ as
\begin{eqnarray}\label{eq:sliceDelta}
\Delta^\xi_{x_1}:=\left\{(x_1,0)+s\xi\,:\, s\in (0,1)\right\}.
\end{eqnarray}
With a small abuse of notation we shall write, for $f:\Delta^\xi_{x_1}\to\mathbb{R}$,
\begin{equation}\label{eqdeffl2delta}
\|f\|_{L^2(\Delta^\xi_{x_1})}^2:=\int_0^1 |f|^2((x_1,0)+s\xi) \text{ d}s,
\end{equation}
and the same for $L^1$. This definition differs from the usual one, in which one integrates with respect to $\mathcal{H}^1$, by a factor of $\sqrt{17/16}$, which is irrelevant for our argument but would make notation cumbersome.
For any $a\in\mathbb{R}$, almost every $x_1\in (0,2L-\xi_1)$ and almost every $s \in (0,1)$
we compute
\begin{align}\label{eqestduxiv1}
| {u_{x_1}^\xi}'(s)-a | &=
\Big|\frac{1}{\xi_1\xi_2}\sum_{ij}\xi_i\xi_j( e(u)-ae_1\odot e_2)_{ij} \Big|((x_1,0)+s\xi) \nonumber\\
&\le \frac{|\xi|^2}{|\xi_1\xi_2|} |e(u)-ae_1\odot e_2|((x_1,0)+s\xi)\le 5 |e(u)-ae_1\odot e_2|((x_1,0)+s\xi)
\end{align}
where in the last step we used that $|\xi|^2/|\xi_1\xi_2|=(17/16)/(1/4)=17/4\le 5$ for the specific choice $\xi=(1/4,1)$.
This implies in particular
\begin{align}\label{eq:estduxi}
&\min\left\{|{u_{x_1}^\xi}'-\theta|^2,|{u_{x_1}^\xi}'+(1-\theta)|^2\right\}(s)
\leq 25 \min \left\{|e(u)-\theta e_1 \odot e_2|^2,|e(u)+(1-\theta) e_1 \odot e_2|^2\right\}((x_1,0)+s\xi).
\end{align}
{We remark that vertical slices cannot be used {to obtain similar estimates}, since $\partial_2 u_2$ does not distinguish between the two variants and $\partial_2 u_1$ cannot be controlled without an independent estimate on $\partial_1 u_2$.}
\section{Upper bound}\label{sec:upperbound}
In this section we will prove the upper bound, i.e., the second inequality in Theorem \ref{th:main}. For that, we provide explicit constructions for the different energy {scaling regimes} in Subsection \ref{sec:ub}. In Subsection \ref{sec:tabulars} we give an overview {over typical} parameter {ranges} to illustrate that indeed all scalings are attained.
\subsection{Explicit constructions}\label{sec:ub}
Let us point out that all our constructions will be scalar valued.
{W}e define for every $u\in W^{1,2}_{\text{loc}}(\mathbb{R}^2,\mathbb{R})$ with $\nabla u \in BV(\Omega,\mathbb{R}^2)$
\[
{E}(u):={I((u,0))}
\leq \int_{\Omega_{2L}}\min\left\{|\nabla u - \theta e_2|^2, |\nabla u +(1-\theta)e_2|^2\right\} \text{ d} \mathcal{L}^2 + \mu \int_{\mathbb{R}^2\setminus\Omega_{2L}}|\nabla u|^2 \text{ d} \mathcal{L}^2 +\varepsilon |D^2u|(\Omega_{2L})
\]
and correspondingly ${E}_A(u):=I_A({(u,0)})$, ${E}^\mathrm {int}$, and ${E}^\mathrm {ext}$.
Some of the test functions we consider below are taken from the literature,
{some constructions have to be modified, and some are new.}
We shall use only constructions that are symmetric with respect to the axis $\{x_1=L\}$, {working explicitly in $(-\infty,L]\times\mathbb{R}$ and then extending each construction by symmetry.}
This introduces an additional term $|D^2u|(\{L\}\times(0,1))$. In some cases, it vanishes (constant, affine, linear interpolation). In the other cases, we use that
the relevant gradients are bounded,
and hence
the term ${\varepsilon} |D^2u|(\{L\}\times(0,1))$ is bounded by $\varepsilon$. Then this term can be incorporated in the regimes using $\varepsilon\leq\mu\theta^2$, see the proof of Theorem \ref{th:upperbound}. We will therefore not explicitly mention this term in the discussion of the constructions below.
We shall use the following short-hand notation: For $a<b$, we set
\begin{eqnarray}
\label{eq:iota}
\iota_{a,b}:[a,b]\rightarrow\mathbb{R} , \quad{\iota_{a,b}}(t):=\frac{t-a}{b-a},
\end{eqnarray}
i.e., $\iota_{a,b}$ is the affine function with $\iota_{a,b}(a)=0$, $\iota_{a,b}(b)=1$.\\
We start with auxiliary lemmata to estimate the energy contribution from $\mathbb{R}^2\setminus \Omega_{2L}$.
\begin{lmm}
\label{lem:lemout}
Let $\bar{L}\in[1/2,\infty)$, $1\leq\alpha<\beta\leq \bar{L}$. Then there exists $u_{\alpha,\beta,\bar{L}}\in
W^{1,2}_\mathrm {loc}{\left(\left((-\infty,\bar L]\times \mathbb{R}\right)\setminus\Omega_{\bar{L}}\right)}\cap C^0{\left(\left((-\infty,\bar L]\times \mathbb{R}\right)\setminus\Omega_{\bar{L}}\right)}$ such that
\begin{itemize}
\item[(i)] on the lower boundary $u_{\alpha,\beta,\bar{L}}(x_1,0)=0$ for all $x_1\in[0,\bar{L}]$;
\item[(ii)] on the upper boundary
\[u_{\alpha,\beta,\bar{L}}(x_1,1)=\begin{cases}
0,&\text{\qquad if\ }x_1\in[0,\alpha],\\
\theta\iota_{\alpha,\beta}(x_1),&\text{\qquad if\ }x_1\in(\alpha,\beta],\\
\theta,&\text{\qquad if\ }x_1\in(\beta,\bar{L}];
\end{cases} \]
\item[(iii)] on the interface $u_{\alpha,\beta,\bar{L}}(0,x_2)=0$ {for all $x_2\in[0,1]$;}
\item[(iv)] and there is a constant $c>0$ independent of $\alpha$, $\beta$, $\bar{L}$ and $\theta$ such that
\[\int_{\mathbb{R}^2\setminus\Omega_{2\bar{L}}}\left|\nabla u_{\alpha,\beta,\bar{L}}\right|^2\dcalL^2\leq c\theta^2\left(\ln\left(3+\frac{\bar{L}}{\alpha}\right)+\frac{\beta+\alpha}{\beta-\alpha}\right). \]
\end{itemize}
\end{lmm}
\begin{proof}
{We use polar coordinates, denoting by $\phi(x)\in(0,2\pi)$ and $r(x)\in[0,\infty)$ the coordinates of
$x=(x_1,x_2)\in\mathbb{R}^2\setminus\left((0,\infty)\times\{0\}\right)$ so
that}
\[x_1=r(x)\cos(\phi(x)),\qquad x_2=r(x)\sin(\phi(x)). \]
We define $\hat{f}_{\alpha,\beta}:(0,2\pi)\times[0,\infty) \rightarrow \mathbb{R}$ by
\[
{\hat{ f}_{\alpha,\beta}(\phi,r)}:=
\begin{cases}
0, &\text{if } r\in [0,\alpha],\\
(1-\frac {\phi}{2\pi} )\iota_{\alpha,\beta}(r)\theta, &\text{if }r\in (\alpha, \beta], \\
(1- \frac {\phi}{2\pi} )\theta , &\text{if }r \in (\beta, {\bar{L}}],\\
\frac {{\bar{L}}^{1/2}}{r^{1/2}} (1-\frac {\phi}{2\pi}) \theta, &\text{if }r \in ({\bar{L}},\infty),\\
\end{cases}
\]
and consider the transformation $T:\mathbb{R}^2\rightarrow \mathbb{R}^2$ given by
\[
T(x_1,x_2):=
\begin{cases}
(x_1,x_2), &\text{\qquad if\ } x_2\le 0,\\
(x_1,0), &\text{\qquad if\ }0<x_2\le 1,\\
(x_1,x_2-1), & \text{\qquad if\ }x_2> 1.
\end{cases}
\]
We set for $x\in\left((-\infty,\bar{L}]\times\mathbb{R}\right)\setminus\overline{\Omega_{2\bar{L}}}$
\[u_{\alpha,\beta,\bar{L}}(x):=\hat{f}_{\alpha,\beta}\left( \phi(T(x)),r(T(x))\right). \]
Note that for $x_1>0$ and $x_2\searrow 1$, we have $r(T(x_1,x_2))\to x_1$, and $\phi(T(x_1,x_2))\searrow 0$. Similarly, for $x_1>0$ and $x_2\nearrow0$, we have
$r(T(x_1,x_2))\to x_1$ and $\phi(T(x_1,x_2))\to2\pi$. Finally, for $x_2\in(0,1)$ and $x_1\nearrow 0$, we have $r(T(x_1,x_2))\to 0$. Therefore $u_{\alpha,\beta,\bar{L}}$ has a continuous extension to $\left((-\infty,\bar L]\times \mathbb{R}\right)\setminus\Omega_{\bar{L}}$ and it satisfies (i), (ii) and (iii).
It remains to verify (iv). From the definition of $T$ we obtain, with
$f_{\alpha,\beta}(x):=\hat{f}_{\alpha,\beta}(\phi(x),r(x))$,
\[
\|\nabla u_{\alpha,\beta,\bar{L}}\|^2_{L^2(((-\infty,\bar{L}) \times \mathbb{R}) \setminus \Omega_{\bar L})} = \|\nabla f_{\alpha,\beta}\|^2_{L^2(((-\infty, \bar{L}) \times \mathbb{R}) \setminus ([0,\bar{L}]\times \{0\})) } +\|\partial_1 f_{\alpha,\beta}(\cdot,0)\|^2_{L^2((-\infty,0))}
\]
where $\partial_1$ refers to the usual derivative in direction $e_1$.
Using polar coordinates we compute
\begin{align*}
\|\nabla f_{\alpha,\beta}\|^2_{L^2({B_{\bar{L}}(0)}\setminus{(} [0,\bar{L}]\times\{0\}{)})}
&= \int_0^{{\bar{L}}}\int_0^{2\pi} \frac 1 r |\partial_\phi{\hat f}_{\alpha,\beta}|^2 + r |\partial_r {\hat f}_{\alpha,\beta}|^2 \text{ d}\phi \text{ d}r
\leq c \theta^2\left( \int_{\alpha}^{{{\bar{L}}}} \frac 1 {r}\text{ d}r + \int_\alpha^\beta \frac {r}{(\beta-\alpha)^2} \text {d}r\right) \\
& \le c {\theta^2 }\left( \ln {\frac{{\bar{L}}}\alpha} + \frac {\beta +\alpha}{\beta - \alpha}\right)
\end{align*}
and
\begin{align*}
\|\nabla f_{\alpha,\beta}\|^2_{L^2({(}(-\infty,\bar{L})\times \mathbb{R}{)}\setminus B_{\bar{L}}(0))}
&\leq \int_{ \bar{L}}^\infty \int_0^{2\pi} \frac 1 r |\partial_\phi {\hat f}_{\alpha,\beta}|^2 + r |\partial_r \hat f_{\alpha,\beta}|^2 \text{ d}\phi \text{ d}r
\leq c \theta^2 \int_{\bar{L}}^\infty \frac{\bar{L}}{r^{2}}\text{ d}r
= c\theta^2.
\end{align*}
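In the two computations above we used the polar-coordinate identities
\[
|\nabla f_{\alpha,\beta}|^2=|\partial_r \hat f_{\alpha,\beta}|^2+\frac{1}{r^2}|\partial_\phi \hat f_{\alpha,\beta}|^2,
\qquad \text{d}\mathcal{L}^2=r \text{ d}r \text{ d}\phi,
\]
together with the pointwise bounds $|\partial_\phi \hat f_{\alpha,\beta}|\le \theta$ and $|\partial_r \hat f_{\alpha,\beta}|\le \frac{\theta}{\beta-\alpha}$ for $r\in(\alpha,\beta)$.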
Finally, we compute
\[
\|\partial_1 f_{\alpha,\beta}(\cdot,0)\|^2_{L^2((-\infty,0))} = \frac{\theta^2}{4}\int_{\alpha}^\beta \frac 1 {(\beta-\alpha)^2} \text{ d}x_1+ \frac {\theta^2}{16}\int_{\bar{L}}^\infty \frac {\bar{L}}{x_1^3}\text{ d}x_1
{=\frac{\theta^2/4}{\beta-\alpha}+\frac{\theta^2/{32}}{\bar{L}}\le \frac{\theta^2}{\beta-\alpha}.}
\]
{Recalling that $\alpha\ge 1$, we conclude}
\[
\|\nabla u_{\alpha,\beta,\bar{L}}\|^2_{L^2({(}(-\infty,\bar{L}) \times \mathbb{R}) \setminus ((0,\bar{L})\times (0,1)){)}} \leq c \theta^2 \left( \ln {(3+\frac{\bar{L}}\alpha)} + \frac {\beta +\alpha}{\beta - \alpha}\right) .
\]
\end{proof}
{
\begin{rmrk}
We will often use that the upper bound in Lemma \ref{lem:lemout}(iv) is monotonically increasing in $\bar{L}$ and decreasing in $\beta$.
\end{rmrk}
}
The following lemma has been used in \cite{conti-zwicknagl:16} without being explicitly stated. We refer to Fig.~\ref{figlamoutside} for a sketch.
\begin{lmm}
\label{lem:lamoutside}
Let $N\in\mathbb{N}$, $N\ge 1$, $h\in(0,\frac{1}{2}]$, $\theta\in[0,\frac{1}{2}]$. Then there exists $v_{N,h}\in W^{1,2}_\mathrm {loc}\left((-\infty,0)\times(0,1)\right)\cap{C^0}\left((-\infty,0]\times[0,1]\right)$ such that
\begin{itemize}
\item[(i)] $v_{N,h}(x_1,x_2)=v_{N,h}(x_1, x_2+\frac{1}{N})$ for all $x_2\in[0,1-\frac{1}{N}]$ and all $x_1{\in(-\infty,0]}$;
\item[(ii)] for $x_1=0$ we have
\[ v_{N,h}(0,x_2)=\begin{cases}
\theta x_2,&\text{\qquad if \ }x_2\in[0,\frac{1-h}{N}],\\
{\theta\frac{1-h}{h} (\frac1N-x_2)},&\text{\qquad if \ }x_2\in(\frac{1-h}{N},\frac{1}{N}];
\end{cases}
\]
\item[(iii)] for all $x_1\in(-\infty,0]$, we have
$v_{N,h}(x_1,0)=v_{N,h}(x_1,1)=0$;
\item[(iv)] {for all $x_1\in(-\infty,-\frac1N]$ and all $x_2$, we have
$v_{N,h}(x_1,x_2)=0$;}
\item[(v)] and there exists $c>0$ independent of $N$, {$\theta$} and $h$ such that
\[\int_{(-\infty,0)\times(0,1)}\left|\nabla v_{N,h}\right|^2\,\dcalL^2\leq c\frac{\theta^2}{N}\ln\left(3+\frac{1}{h}\right). \]
\end{itemize}
\end{lmm}
\begin{figure}
\caption{Construction in the proof of Lemma \ref{lem:lamoutside}.}
\label{figlamoutside}
\end{figure}
\begin{proof}
{We write $v$ for $v_{N,h}$ and set}
\[v(x):=\begin{cases}
0,&\text{\qquad if\ } {x\in A:=(-\infty,-\frac{1}{N}]\times[0,\frac1N)},\\
\theta x_2,&\text{\qquad if\ }{x\in B:=\{x_1\in(-\frac{1}{N},0], 0\leq x_2\leq (1-h)(x_1+\frac{1}{N})\}},\\
\frac{(1-Nx_2)\theta(1-h)(Nx_1+1)}{N\left(h-(1-h)Nx_1\right)},&\text{\qquad if\ }
{x\in C:=\{
x_1\in(-\frac{1}{N},0], (1-h)(x_1+\frac{1}{N})< x_2<\frac{1}{N}\}},
\end{cases} \]
and extend it periodically in $x_2$ to $(-\infty,0]\times[0,1]$ as stated in (i) {(see Fig.~\ref{figlamoutside})}.
One easily checks that $v$ is continuous and satisfies (ii), (iii) { and (iv)}. To show (v), we
{first work in $(-\frac1N,0]\times[0,\frac1N){=B\cup C}$. In region $C$ we have
$(1-h)(Nx_1+1)\le Nx_2<1$, and therefore $0< 1-Nx_2\le h-(1-h)Nx_1$, so that
\begin{equation*}
|\nabla v|(x)\le \theta(1-h)\Big( \frac{2}{h-(1-h)Nx_1}+
\frac{1-Nx_2}{(h-(1-h)Nx_1)^2}\Big)
\le \frac{3 \theta (1-h)}{h-(1-h)Nx_1} \text{ for all }x\in C.
\end{equation*}
Since $|\nabla v|=\theta$ in $B$ we obtain
\begin{equation*}
\begin{split}
\int_{-1/N}^0\int_0^{1/N} |\nabla v|^2\dy\dx&\le
\frac{\theta^2}{N^2} +9 \theta^2 \int_{-1/N}^0\int_{(1-h)(x_1+\frac{1}{N})}^{1/N}
\frac{{(1-h)^2}}{(h-(1-h)Nx_1)^2}\dy\dx\\
&=
\frac{\theta^2}{N^2} +\frac{9 \theta^2}N \int_{-1/N}^0 \frac{{(1-h)^2}}{h-(1-h)Nx_1}\dx
{\leq}
\frac{\theta^2}{N^2} +\frac{9 \theta^2}{N^2} \ln \frac1h\le
c\frac{\theta^2}{N^2}\ln\left(3+\frac{1}{h}\right).
\end{split}
\end{equation*}
Since $v$ consists of $N$ such periods, summing over them yields $\int_{(-\infty,0)\times(0,1)}|\nabla v|^2\,\dcalL^2\leq cN\frac{\theta^2}{N^2}\ln\left(3+\frac{1}{h}\right)=c\frac{\theta^2}{N}\ln\left(3+\frac{1}{h}\right)$, which is (v).
}
\end{proof}
We now recall the basic branching construction from \cite{conti-zwicknagl:16}, which refines the one in \cite{kohn-mueller:94} (see Fig.~\ref{fig:branching}).
\begin{figure}
\caption{Sketch of the branched construction in Lemma \ref{lem:2scbrobenbranchN}.}
\label{fig:branching}
\end{figure}
\begin{lmm}
\label{lem:2scbrobenbranchN}
Suppose that $\theta\in(0,1/2]$, $N\in\mathbb{N}$, $N\ge 1$, $h\in[\theta,1]$, $\ell\in[\theta,\infty)$. Then there exists $u:=u_{h,\ell,N}\in W^{1,\infty}\left((0,\ell)\times(1-\frac hN,1)\right)\cap C^0\left([0,\ell]\times[1-\frac hN,1]\right)$ with the following properties:
\begin{itemize}
\item[(i)] $u(0,x_2)=\frac{\theta}{h}(1-h)(1-x_2)$ for all $x_2\in[1-\frac hN,1]$;
\item[(ii)]
\[u(\ell,x_2)=\begin{cases}
\theta x_2 -\theta(1-\frac1N),&\text{\qquad if \ }x_2\in [1-\frac hN,1-\frac\theta N],\\
(1-\theta)(1-x_2),&\text{\qquad if \ }x_2\in (1-\frac\theta N,1];
\end{cases} \]
\item[(iii)] $u(x_1,1-\frac hN)=\frac1N\theta(1-h)$ for all $x_1\in [0,\ell]$;
\item[(iv)] $u(x_1,1)=0$ for all $x_1\in[0,\ell]$;
\item[(v)] $|D^2u|\left((0,\ell)\times (1-\frac hN,1)\right)\leq c(\ell+\frac hN)$ and $\|\nabla u\|_{L^\infty}\le c$;
\item[(vi)]
\[\int_{(0,\ell)\times(1-\frac hN,1)}\left(\partial_1u\right)^2+\min\left\{(\partial_2u-\theta)^2,(\partial_2u+1-\theta)^2\right\}\dcalL^2\leq c\frac{\theta^2 h}{N^3\ell}. \]
\end{itemize}
\end{lmm}
\begin{proof}
One can use the finite branching construction given in \cite{conti-zwicknagl:16}, building on \cite[Lemma 5.2]{conti-zwicknagl:16}, with truncation parameter $I\in\mathbb{N}$ such that $(3/2)^I\sim\ell/\theta$. The estimates then follow from the considerations in the proof of \cite[Proposition 6.1]{conti-zwicknagl:16}.
For the convenience of the reader we sketch the main steps of the construction, referring to
Figure \ref{fig:branching} for an illustration.
For $i\in\mathbb{N}$ we set $h_i:=2^{-i}h/N$, $\theta_i:=2^{-i}\theta/N$, $\ell_i:=3^{-i}\ell$, and let $k$ be the largest integer such that $\theta_k\le \ell_k$.
For $0\le i\le k$, the function $x_2\mapsto \partial_2 u(\ell_i,x_2)$ is $h_i$-periodic, with
$\partial_2u(\ell_i,x_2)=\theta$ for $x_2\in (1-h_i,1-\theta_i)$ and
$\partial_2u(\ell_i,x_2)=\theta-1$ for $x_2\in(1-\theta_i,1)$.
In $(\ell_{i+1}, \ell_{i})\times(1-\frac hN,1)$, $0\le i<k$, the function $x_2\mapsto u(x_1,x_2)-\frac\theta{h}(1-h)(1-x_2)$ is $h_i$-periodic in the $x_2$ direction, {$u$} obeys (iii) and (iv), and is defined interpolating the boundary values as sketched in Figure \ref{fig:branching}.
By construction $\partial_2 u\in\{\theta,\theta-1\}$ almost everywhere in this set. One checks that
$|\partial_1u|\le c\theta_i/\ell_i$, leading to
$|D^2u|([\ell_{i+1},\ell_i]\times(1-\frac hN,1))\le c 2^i \ell_i+c2^i \frac{h_i\theta_i}{\ell_i}$
and
$\|\partial_1u\|_{L^2([\ell_{i+1},\ell_i]\times(1-\frac hN,1))}^2\le c (\frac{\theta_i}{\ell_i})^2 \ell_i\frac hN$. Summing the two geometric series, this leads to the bounds in (v) and (vi) on $(\ell_k,\ell)\times(1-\frac hN,1)$. In $(0,\ell_k)\times(1-\frac hN,1)$ we use an affine interpolation. The condition $\theta_k\le\ell_k$ gives $|\nabla u|\le c$ in this region, so that the elastic energy is bounded by $c\frac hN\ell_k$.
Since $\ell_{k+1}<\theta_{k+1}$ we have $\ell\le \frac\theta N (\frac32)^{k+1}\le 2 \frac\theta N 3^{k/2}$, which implies
$\ell_k\ell \le 4 \frac{\theta^2}{N^2}$ and hence (vi). In turn, $|D^2u|((0, \ell_k]\times(1-\frac hN,1))\le c (2^k\ell_k+\frac hN)\le c(\ell+\frac hN)$, which gives (v).
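For completeness, the geometric series used above can be summed explicitly; here we use $\theta_k\le\ell_k$, i.e. $(\frac32)^k\le \frac{N\ell}{\theta}$:
\begin{align*}
\sum_{i\ge 0} 2^i\ell_i&=\ell\sum_{i\ge 0}\Big(\frac{2}{3}\Big)^i= 3\ell,
\qquad
\sum_{i=0}^{k} 2^i\frac{h_i\theta_i}{\ell_i}=\frac{h\theta}{N^2\ell}\sum_{i=0}^{k}\Big(\frac{3}{2}\Big)^i\le \frac{3h\theta}{N^2\ell}\Big(\frac{3}{2}\Big)^{k}\le \frac{3h}{N},\\
\sum_{i\ge 0}\Big(\frac{\theta_i}{\ell_i}\Big)^2\ell_i\frac{h}{N}&=\frac{\theta^2h}{N^3\ell}\sum_{i\ge 0}\Big(\frac{3}{4}\Big)^i= \frac{4\theta^2 h}{N^3\ell}.
\end{align*}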
This concludes the proof.
\end{proof}
{Finally, we shall frequently use a function that interpolates between a single laminate and an affine function.
\begin{lmm}\label{lem:tildev}
Let $\theta\in(0,1/2]$ and $\tilde{\beta}>0$. There exists a function $\tilde{v}_{\tilde{\beta}}{=}\tilde{v}{\in C^0([0,\tilde{\beta}]\times[0,\theta])}$ such that
\begin{itemize}
\item[(i)] $\tilde{v}(0,x_2)=(-1+\theta)x_2$ and $\tilde{v}(\tilde{\beta},x_2)=\theta x_2$ for all $x_2\in[0,\theta]$;
\item[(ii)] $\tilde{v}(x_1,0)=0$ for all $x_1\in[0,\tilde{\beta}]$;
\item[(iii)] $\tilde{v}(x_1,\theta)=\theta \iota_{0,\tilde{\beta}}(x_1)+\theta(-1+\theta)$, with $\iota_{0,\tilde{\beta}}$ as in (\ref{eq:iota});
\item[(iv)]{$\|\nabla \tilde v\|_{L^\infty}\le 1+\frac \theta{\tilde\beta}$} and the energy is estimated by
\[\int_{(0,\tilde \beta)\times (0,\theta)}{\min}\left\{|\nabla \tilde{v} -
\theta e_2
|^2,
|\nabla \tilde{v} +
(1-\theta)e_2
|^2\right\} \dcalL^2 +
\varepsilon |D^2\tilde{v}|((0,\tilde\beta)\times{(0,\theta))}\leq c\left( \frac {\theta^3}{\tilde \beta} + \varepsilon (\tilde \beta + \frac{\theta^2}{\tilde\beta})\right).\]
\end{itemize}
\end{lmm}
\begin{proof} The standard interpolation
\begin{eqnarray}
\label{eq:tildev}
\tilde v(x):=
\begin{cases}
\theta x_2, &\text{if }x_2 \leq \frac\theta {\tilde \beta} x_1,\\
-(1-\theta) x_2 +\frac \theta {\tilde \beta} x_1, &\text{if }x_2 > \frac \theta {\tilde \beta} x_1
\end{cases}
\end{eqnarray}
satisfies all required properties.
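For the reader's convenience we indicate the computation behind (iv). Away from the segment $\{x_2=\frac{\theta}{\tilde\beta}x_1\}$ the gradient takes only two values,
\[
\nabla \tilde v=(0,\theta) \text{ if } x_2<\frac{\theta}{\tilde\beta}x_1,
\qquad
\nabla \tilde v=\Big(\frac{\theta}{\tilde\beta},-(1-\theta)\Big) \text{ if } x_2>\frac{\theta}{\tilde\beta}x_1.
\]
Hence the integrand in (iv) vanishes in the first region and equals $\frac{\theta^2}{\tilde\beta^2}$ in the second, whose measure is at most $\theta\tilde\beta$, which gives the term $\frac{\theta^3}{\tilde\beta}$. The jump of $\nabla\tilde v$ across the segment has size at most $1+\frac{\theta}{\tilde\beta}$, and the segment has length at most $\tilde\beta+\theta$, whence $\varepsilon|D^2\tilde v|\le \varepsilon(1+\frac{\theta}{\tilde\beta})(\tilde\beta+\theta)\le c\varepsilon(\tilde\beta+\theta+\frac{\theta^2}{\tilde\beta})$.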
{We simplified the {estimate} using $\theta\le \tilde\beta+\frac{\theta^2}{\tilde\beta}$.}\end{proof}
}
We shall now provide the proof of the upper bound in Theorem \ref{th:main}. We proceed in two steps: In the first step, we adapt a result from the literature (Proposition \ref{prop:upperbound}), and in the second step, we provide test functions for the remaining regimes (Theorem \ref{th:upperbound}).
\begin{figure}
\caption{Sketch of the three regimes from Proposition \ref{prop:upperbound}.}
\label{fig-br2-disc}
\end{figure}
\begin{prpstn}[Upper bound: constructions from the literature]\label{prop:upperbound}
There exists $c>0$ such that for all $\mu>0$, $\varepsilon>0$, $\theta\in(0,1/2]$, and $L\geq1/2$ there holds
\begin{align*}
\min_u E(u)
\leq c \min \Big{\{}
&\varepsilon^{2/3}\theta^{2/3} L^{1/3} +\varepsilon L,\,\, \mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac 1 {\theta^{2}}))^{1/2}+\varepsilon L, \\
&\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac{\varepsilon}{ \mu^{3}\theta^{2}{L}}))^{1/2}+\varepsilon L
\Big{\}}.
\end{align*}
\end{prpstn}
\begin{proof}
{The assertion follows from the proof of \cite[Proposition 5.1]{conti-zwicknagl:16}, checking carefully that the differences between the two functionals are not relevant.
{For clarity} we provide a short self-contained argument, based on
Lemma \ref{lem:lamoutside} and Lemma \ref{lem:2scbrobenbranchN} (both taken from \cite{conti-zwicknagl:16}).
The three constructions are illustrated in Figure \ref{fig-br2-disc}.
\begin{enumerate}
\item[(a)] Let $N\ge 1$, $h:=1$, $\ell:=L$, and let $u\in C^0([0,L]\times[1-\frac1N,1])$ be as in Lemma \ref{lem:2scbrobenbranchN}, extended periodically in $x_2$ to
$[0,L]\times [0,1]$, symmetrically for $x_1\in (L,2L]$ and then by zero outside $\overline{\Omega_{2L}}$. We have
\begin{equation*}
E(u)\le c N \varepsilon L + c\frac{\theta^2}{N^2L} {+\varepsilon}.
\end{equation*}
Choosing $N$ to be the smallest integer above $\theta^{2/3}\varepsilon^{-1/3}L^{-2/3}$, we obtain \\
$E(u)\le c(\varepsilon^{2/3}\theta^{2/3} L^{1/3} +\varepsilon L)$.
\item[(b)]
Let $N\ge 1$, $h:=\theta$, and let $v_{N,\theta}$ be as in Lemma \ref{lem:lamoutside}.
We extend it setting
$v_{N,\theta}(x)=v_{N,\theta}(0,x_2)$ for $x_1\in(0,L]$, symmetrically for $x_1\ge L$, and by zero on $\mathbb{R}\times (\mathbb{R}\setminus[0,1])$. By
Lemma \ref{lem:lamoutside}(ii) we have $\partial_1 v_{N,\theta}=0$ and $\partial_2 v_{N,\theta}\in\{\theta,\theta-1\}$ in $\Omega_{2L}$, so that the interior energy reduces to the jump term: $\nabla v_{N,\theta}$ jumps by $e_2$ across $2N$ horizontal lines of length $2L$, whence
$I_{\Omega_{2L}}(v_{N,\theta})\le 4N\varepsilon L$.
By Lemma \ref{lem:lamoutside}(v),
\begin{equation*}
E(v_{N,\theta})\le c\mu\theta^2 N^{-1} \ln(3+\frac1{\theta^2}) + cN\varepsilon L.
\end{equation*}
Choosing $N$ as the smallest integer above $(\varepsilon^{-1}L^{-1}\mu\theta^2\ln(3+\frac1{\theta^2}))^{1/2}$, we obtain \\
$E(v_{N,\theta})\le c(\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac 1 {\theta^{2}}))^{1/2}+\varepsilon L)$.
\item[(c)]
It remains to show that $\min_u E(u)\le c\left(
\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac{\varepsilon}{ \mu^{3}\theta^{2}{L}}))^{1/2}+\varepsilon L\right)$.
If $\mu^{3/2}\varepsilon^{-1/2} \theta L^{1/2}\le \theta$, this follows from (b). Otherwise, we
fix again $N\ge 1$, $h\in[\theta,1]$, $\ell:=L$ and use
Lemma \ref{lem:2scbrobenbranchN} and Lemma \ref{lem:lamoutside} to obtain a function $u$ with
\begin{equation*}
E(u)\le c\mu\theta^2 N^{-1} \ln(3+\frac1{h^2}) + cN\varepsilon L +c\frac{\theta^2h}{N^2L}{+\varepsilon}{,}
\end{equation*}
{where we used that $\ln (3+\frac{1}{h})\leq \ln(3+\frac{1}{h^2})$ since $h\leq 1$.}
We choose $h:=\min\{1,\mu L (\mu\theta^2/(\varepsilon L))^{1/2}\}=\min\{1,\mu^{3/2}\varepsilon^{-1/2}\theta L^{1/2}\}{\in[\theta,1]}$ and
$N$ to be the smallest integer above $(\mu\theta^2\ln(3+\frac{\varepsilon}{\mu^3\theta^2 L}) /(\varepsilon L))^{1/2}$.
Using $h\le \mu^{3/2}\varepsilon^{-1/2}\theta L^{1/2}$ {and $\ln (3+\frac{\varepsilon}{\mu^3\theta^2 L}) \geq 1$},
the proof is concluded.
\end{enumerate}
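For the reader's convenience we record the balance of terms in construction (a): since $N$ is the smallest integer above $\theta^{2/3}\varepsilon^{-1/3}L^{-2/3}$, we have $N\le \theta^{2/3}\varepsilon^{-1/3}L^{-2/3}+1$ and $N^2\ge \theta^{4/3}\varepsilon^{-2/3}L^{-4/3}$, so that
\[
N\varepsilon L\le \varepsilon^{2/3}\theta^{2/3}L^{1/3}+\varepsilon L,
\qquad
\frac{\theta^2}{N^2L}\le \varepsilon^{2/3}\theta^{2/3}L^{1/3};
\]
the choices of $N$ in (b) and (c) balance the respective terms in the same way.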
}
\end{proof}
\begin{figure}
\caption{Left: sketch of the affine construction $u^{(2)}$ of case (ii); right: sketch of the linear interpolation $u^{(3)}$ of case (iii).}
\label{fig-regimes1}
\end{figure}
\begin{thrm}[Upper bound: conclusion]\label{th:upperbound}
{There is a constant $c>0$ such that for all $\mu>0$, $\varepsilon>0$, $\theta\in (0,1/2]$, and $L\geq1/2$}
\begin{align*}
&\min_u J(u)
\leq c\mathcal{I}({\mu,\varepsilon,\theta,L}),
\end{align*}
where $\mathcal{I}({\mu,\varepsilon,\theta,L})$ is given in Theorem \ref{th:main}.
\end{thrm}
\begin{proof}
{By the estimate $|e(u)|\leq |\nabla u|$, it suffices to show an upper bound for $I$, which in turn follows from an upper bound for $E$.}
We provide test functions for the respective regimes separately. Some constructions are used for several test functions; we describe them in detail the first time they appear and afterwards refer back to the corresponding arguments.
\begin{itemize}
\item[(0)]
{Branching, laminate, two-scale branching:}
By Proposition \ref{prop:upperbound}, we have
\begin{align*} \min_uE(u) \leq c
\min \Big{\{}\varepsilon^{2/3}\theta^{2/3} L^{1/3}+\varepsilon L,\, & \mu^{1/2}\varepsilon^{1/2}\theta{L^{1/2} } (\ln (3+ \frac 1{\theta^{2}}))^{1/2} +\varepsilon L,\\
&\mu^{1/2}\varepsilon^{1/2} \theta {L^{1/2}}(\ln (3+\frac {\varepsilon}{ \mu^{3}\theta^{2}{L}}))^{1/2}+\varepsilon L
\Big{\}}.
\end{align*}
\item[(i)]{Constant:} Set ${u^{(1)}}:=0$ in $\mathbb{R}^2$. This shows $\min_{u} E(u)\leq {E}({u^{(1)}}) \le2 \theta^2L$.
\item[(ii)]{Affine:} We aim to show that $\min_u E(u) \leq c \mu \theta^2 \ln (3+L)$. Choose $\bar{L}:=L+2$, $\beta:=2$ and $\alpha:=1$. We note that the assumptions of Lemma \ref{lem:lemout} are satisfied, and we shall use the function $u_{\alpha,\beta,\bar{L}}$. Precisely, we set
\[{u^{(2)}}(x):=\begin{cases}
\theta x_2,&\text{if }x\in\Omega_{L},\\
u_{{1,2,L+2}}(x_1+2,x_2),&\text{if }x\in ((-\infty,L]\times\mathbb{R})\setminus ((-2,L]\times(0,1)),\\
0,&\text{if }x\in (-2,-1]\times (0,1),\\
(1+x_1)\theta x_2,&\text{if }x\in (-1,0]\times(0,1),\\
u^{(2)}(2L-x_1,x_2),&{\text{if }x_1>L}
\end{cases} \]
{see Fig.~\ref{fig-regimes1}, left panel.}
We obtain, by Lemma \ref{lem:lemout} {(using that $\frac{\beta+\alpha}{\beta-\alpha}=3\leq c\ln(3+L)$)} and an explicit computation in $(-1,0)\times(0,1)$,
\[\min_u E(u)\leq E(u^{(2)})={E}^\mathrm {ext}({u^{(2)}}) \leq c \mu \theta^2 \ln (3+L).\]
\item[(iii)]
{{Linear} interpolation: We}
aim to show that $\min_u E(u) \leq c \left(\mu \theta^2 \ln (3+L/\mu) +\varepsilon \theta\right)$.\\
We distinguish some cases.\\
a) If $\mu\le1$, this holds by (ii). \\
b) If $\mu\ge L/3$, this holds by (i). \\
c) If $\mu\in(1,L/3)$, we choose $\alpha :=\mu$, $\beta:= 2\mu$ and $\overline{L}:=L$. Note that these choices are admissible for Lemma \ref{lem:lemout} since $\alpha=\mu>1$ and $L\ge 3\mu\ge \beta$. We set (with $\iota$ as defined in \eqref{eq:iota})
\[u^{(3)}(x):=\begin{cases}
0, & \text{if }x\in [0,\mu]\times[0,1]{,}\\
\iota_{\mu, 2\mu}(x_1)\theta x_2, & \text{if }x \in (\mu, 2\mu]\times[0,1]{,}\\
\theta x_2, & \text{if }x \in (2\mu, L] \times[0,1]{,}\\
u_{\mu,2\mu,L}(x),&\text{if\ } x\in((-\infty,L]\times\mathbb{R})\setminus\overline{\Omega_L},\\
u^{(3)}(2L-x_1,x_2),&{\text{if }x_1>L}
\end{cases}
\]
{see Fig.~\ref{fig-regimes1}, right panel.}
Then ${E}^\mathrm {ext}({u^{(3)}}) \leq c\mu \theta^2 \ln (3+\frac{L}{\mu}) $ by Lemma \ref{lem:lemout}, and hence, by an explicit computation in $\Omega_L$ using that $\mu>1$,
\[{E}({u^{(3)}}) \leq c\left( \mu \theta^2 \ln \big(3+\frac{L}{\mu}\big) +\varepsilon \theta +{\frac {\varepsilon\theta} \mu+\mu \theta^2}\right)\leq c\left( \mu \theta^2 \ln \big(3+\frac{L}{\mu}\big) +\varepsilon \theta\right).\]
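We briefly indicate the explicit computation: in $(\mu,2\mu]\times[0,1]$ we have $\nabla u^{(3)}=\big(\frac{\theta x_2}{\mu},\iota_{\mu,2\mu}(x_1)\theta\big)$, so that, recalling $\mu>1$ in this case,
\[
\int_{(\mu,2\mu]\times[0,1]}|\nabla u^{(3)}-\theta e_2|^2\dcalL^2
\le \int_{(\mu,2\mu]\times[0,1]}\Big(\frac{\theta^2x_2^2}{\mu^2}+\theta^2\big(\iota_{\mu,2\mu}(x_1)-1\big)^2\Big)\dcalL^2
\le \frac{\theta^2}{\mu}+\mu\theta^2\le 2\mu\theta^2,
\]
while $u^{(3)}=0$ contributes at most $\mu\theta^2$ on $[0,\mu]\times[0,1]$, and $|D^2u^{(3)}|(\Omega_L)\le c\big(\theta+\frac{\theta}{\mu}\big)$, which accounts for the terms $\varepsilon\theta+\frac{\varepsilon\theta}{\mu}$.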
\item[(iv)]
Next we aim to show an auxiliary result, namely that
\begin{equation}\label{eqivE}
\min_u E(u) \leq c\left(\mu \theta^2 \ln (3+ \frac { \varepsilon {L}}{\mu \theta^{2}})
{+\mu\theta^2\ln (3+\frac{1}{\theta^2})}
+ \varepsilon^{1/2}\theta^{3/2}\right).
\end{equation}
This bound is not needed for the proof of the theorem, but it introduces a new construction method that will be used for cases (v) and (vi) below. {The idea behind the construction is a single laminate close to the left and right boundaries of the nucleus, interpolated to an affine function deep in the bulk. Related estimates play also a role in the proof of the lower bound, see e.g. the assumptions of Proposition \ref{lem:lbbranching}.}
\\
We distinguish three cases:\\
a) If $\mu\theta^2\varepsilon^{-1}\leq 1$, then (\ref{eqivE}) follows from (ii).\\
b) If $\mu\theta^2\varepsilon^{-1}\geq L$, we use the function ${v_{1,\theta}}$ from Lemma \ref{lem:lamoutside} with $N=1$ and $h=\theta$, and set
\[u^{(4)}(x):=\begin{cases}
\theta x_2,&\text{if }x\in[0,L]\times[0,1-\theta],\\
{(1-\theta)(1-x_2)},&\text{if }x\in[0,L]\times(1-\theta,1],\\
{v_{1,\theta}(x)},&\text{if }x\in(-\infty,0)\times [0,1],\\
0, &{\text{if } x\in (-\infty,L]\times (\mathbb{R}\setminus[0,1]),}\\
u^{(4)}(2L-x_1,x_2),&{\text{if }x_1>L}.
\end{cases} \]
Then by Lemma \ref{lem:lamoutside},
\[E(u^{(4)})\leq c\left(\mu\theta^2\ln(3+{\frac{1}{\theta}})+\varepsilon L\right)\leq c\mu\theta^2\ln(3+\frac{1}{\theta^2}), \]
where in the second inequality we used that $\varepsilon L\leq \mu\theta^2$ in this case. This concludes the proof of (\ref{eqivE}).\\
c) If $\mu\theta^2 \varepsilon^{-1}\in(1, L)$, we use {$v_{1,\theta}$ as above} and Lemma \ref{lem:lemout} with ${\alpha:=\mu\theta^2\varepsilon^{-1}}$, $\beta:=\alpha+\tilde{\beta}$ with $\tilde{\beta}\geq \alpha$ chosen below,
and {$\bar L:=\max\{L+1, \beta+1\}$.} Precisely, we set with {$\tilde{v}_{\tilde{\beta}}$ from Lemma \ref{lem:tildev} and $v_{1,\theta}$ from Lemma \ref{lem:lamoutside}}
\[U^{(4)}(x):=\begin{cases}
\theta x_2, & \text{if } x\in [0,L]\times [0, 1-\theta]{,}\\
(1-\theta)(1-x_2), & \text{if } x\in[0,\alpha]\times (1-\theta,1],\\
\tilde v_{{\tilde{\beta}}}(x_1-{\alpha},x_2-(1-\theta)) + \theta(1-\theta), & \text{if } x\in(\alpha,{\alpha}+ \tilde \beta]\times
(1-\theta,1],\\
\theta x_2, & \text{if } x\in ( {\alpha}+ \tilde \beta,{L}]\times(1-\theta,1],\\
{ v_{1,\theta}(x),} &\text{if } x\in (-1,0)\times[0,1],\\
{u_{\alpha+1,\beta+1 ,\bar{L}}(x_1+1,x_2),}&\text{if }x\in((-\infty,L]\times\mathbb{R})\setminus ((-1,L]\times[0,1]),\\
U^{(4)}(2L-x_1,x_2),&{\text{if }x_1>L}.
\end{cases} \]
One readily checks that $U^{(4)}$ is continuous. By Lemma \ref{lem:lemout} {and Lemma \ref{lem:lamoutside}}, we have
\begin{equation*}
E^\mathrm {ext}({U^{(4)}}) \leq c\mu \theta^2\left(\ln \Big(3+\frac{L+\alpha+\tilde\beta{+1}}{\alpha}\Big) + \frac{\beta+\alpha}{\beta-\alpha}
+\ln(3+\frac1{\theta^2})
\right){.}
\end{equation*}
Altogether, we obtain {using Lemma \ref{lem:tildev}} {and $\beta\ge 2\alpha$}
\begin{align*}
{E}({U^{(4)}})&\leq c\left(\frac{\theta^3}{ {\tilde{\beta}}} + \varepsilon {\tilde{\beta}}
+\frac{\varepsilon\theta^2}{{\tilde{\beta}}}
\right)
+ c\mu \theta^2
{\ln\Big(3+ \frac{L}{\alpha}+\frac{\tilde{\beta}}{\alpha}\Big)}
+c\mu\theta^2\ln (3+\frac{1}{\theta^2}){+\varepsilon}
\\
&\le c \left(\frac{\theta^3}{ {\tilde{\beta}}} + \varepsilon {\tilde{\beta}}
\right)
+ c\mu \theta^2\ln\Big(3+ \frac{\varepsilon L }{\mu\theta^2}+\frac{{\tilde{\beta}}}{\alpha}\Big) {+c\mu\theta^2\ln (3+\frac{1}{\theta^2})}
\end{align*}
where in the second step we used
that $\tilde\beta\ge1$ implies $\varepsilon\theta^2/\tilde{\beta}\leq\varepsilon\tilde\beta${, and the assumption $\varepsilon\leq\mu\theta^2$.}
At this point we distinguish two further subcases. If $\mu\theta^2\le \varepsilon^{1/2}\theta^{3/2}$ then we set $\tilde{\beta}:= \varepsilon^{-1/2} \theta^{3/2}{\geq \mu\theta^2\varepsilon^{-1}}=\alpha$ and obtain
\begin{equation*}
{E}({U^{(4)}})\le c\left(\varepsilon^{1/2}\theta^{3/2}
+ \mu \theta^2\ln\Big(3+ \frac{\varepsilon L }{\mu\theta^2}+\frac{\varepsilon^{1/2}\theta^{3/2}}{\mu\theta^2}\Big) \right)
{+c\mu\theta^2\ln (3+\frac{1}{\theta^2})}.
\end{equation*}
We treat the first logarithm using $\ln (3+x+y)\le \ln (3+x)+\ln (1+y)\le \ln (3+x)+y$ {for $x,y\geq 0$}, leading to
\begin{equation*}
{E}({U^{(4)}})\le c\left(\varepsilon^{1/2}\theta^{3/2}
+ \mu \theta^2\ln\Big(3+ \frac{\varepsilon L }{\mu\theta^2}\Big) \right){+c\mu\theta^2\ln (3+\frac{1}{\theta^2})},
\end{equation*}
which concludes the proof of (\ref{eqivE}).
If instead $\mu\theta^2> \varepsilon^{1/2}\theta^{3/2}$ then we set {$\tilde{\beta}:= \alpha\ge \varepsilon^{-1/2}\theta^{3/2}$} and obtain
\begin{equation*}
{E}({U^{(4)}})\le c\left(\varepsilon^{1/2}\theta^{3/2}
+ \mu \theta^2\ln\Big(3+ \frac{\varepsilon L }{\mu\theta^2}\Big) \right) {+c\mu\theta^2\ln (3+\frac{1}{\theta^2})}.
\end{equation*}
\begin{figure}
\caption{Sketch of the single truncated branching construction. We refer to Fig.~\ref{fig:branching} for the branching construction $u_{h,\ell,1}$.}
\label{figsingletruncbra}
\end{figure}
\item[(v)]{Single truncated {branching}: We} aim to show that
$\min_u E(u)\leq c\left(\mu \theta^2 \ln (3+ \frac { \varepsilon {L}}{\mu \theta^{2}})
+\mu\theta^2\ln (3+\frac{\varepsilon}{\mu^2\theta^2})
+ \varepsilon^{1/2}\theta^{3/2}\right)$. Again, we distinguish several cases.\\
a) If $\mu\theta^2\varepsilon^{-1}\leq 1$, this follows from (ii).\\
b) If $\mu^2\theta^2\varepsilon^{-1}{<} \theta$, then this follows from (iv).\\
c) If $\mu^2\theta^2\varepsilon^{-1}{\geq\theta} $ {and $\mu\theta^2\varepsilon^{-1}> 1$}, we
use the truncated branching construction
from \cite[Proofs of Theorems 3.1, 3.2]{zwicknagl:14}, in the version of Lemma \ref{lem:2scbrobenbranchN}, together with $v_{1,h}$ from Lemma \ref{lem:lamoutside}. Precisely, we choose $h:=\min\{1,\mu^2\theta^2\varepsilon^{-1}\}\in[\theta,1]$, $N:=1$, and
$\ell:=\mu\theta^2\varepsilon^{-1}\geq 1\geq h$ {and set
\begin{equation}\label{eqdefU5}
{u}^{(5)}(x):=\begin{cases}
\theta x_2,&\text{\ if\ } x\in[0,\min\{\ell,L\}]\times[0,1-h],\\
u_{h,\ell,1}(x),&\text{\ if\ }x\in[0,\min\{\ell,L\}]\times(1-h,1],\\
v_{1,h}(x),&\text{\ if\ }x\in[-1,0)\times[0,1],
\end{cases}
\end{equation}
which satisfies}
\begin{equation}\label{eqestw}
E_{[-1,\ell)\times(0,1)}(u^{(5)})\leq c\frac{\theta^2 h}{\ell}+c\varepsilon(\ell+h)+c\mu\theta^2\ln(3+\frac{1}{h}) \le c\mu\theta^2\ln(3+\frac{\varepsilon}{\mu^2\theta^2}).
\end{equation}
{In the second inequality we used that $\frac{\theta^2 h}{\ell}=\frac{\varepsilon h}{\mu}\leq \mu\theta^2$ and $\ell\geq h$.
If $\ell\geq L/2$, we extend $u^{(5)}$ inside $\Omega_L$ by a simple laminate, i.e.,
\[u^{(5)}(x):=\begin{cases}
\theta x_2,&\text{\ if\ }x\in(\ell,L]\times[0,1-\theta],\\
(1-\theta)(1-x_2),&\text{\ if\ }x\in(\ell, L]\times(1-\theta,1],\\
0,&\text{\ if\ }x\in((-\infty,L]\times\mathbb{R})\setminus([-1,L]\times[0,1]),\\
u^{(5)}(2L-x_1,x_2),&{\text{\ if }x_1>L}.
\end{cases} \]
Note that $(\ell,L]=\emptyset$ if $L\leq\ell$. We have $E(u^{(5)})\leq c\left(E_{{[-1,\ell)}\times(0,1)}(u^{(5)})+\varepsilon \ell\right)$, and the assertion follows.\\
Otherwise, if $\ell< L/2$, we proceed as in (iv)c) using Lemma \ref{lem:tildev} and Lemma \ref{lem:lemout}
with $\tilde{\beta}:=\alpha:=\ell$, $\beta:=\ell+\tilde\beta$, and $\bar{L}:=\max\{\beta+1,L+1\}$ and set
\[u^{(5)}(x):=\begin{cases}
\theta x_2, & \text{if } x\in ( {\ell},{L}]\times[0,1-\theta],\\
\tilde v_{\tilde{\beta}}(x_1-{\ell},x_2-(1-\theta)) + \theta(1-\theta), & \text{if } x\in(\ell,{\ell}+ \tilde \beta]\times
(1-\theta,1],\\
\theta x_2, & \text{if } x\in ( {\ell}+ \tilde \beta,{L}]\times(1-\theta,1],\\
u_{\ell+1,\ell+\tilde\beta+1 ,\bar{L}}(x_1+1,x_2),&\text{if }x\in((-\infty,L]\times\mathbb{R})\setminus ((-1,L]\times[0,1]),\\
u^{(5)}(2L-x_1,x_2),&{\text{if }x_1>L}.
\end{cases} \]
This leads to
\[
E(u^{(5)})\le c \left(\mu\theta^2\ln\left(3+\frac{\varepsilon}{\mu^2\theta^2}\right)+\varepsilon \ell +\frac{\theta^3}{\tilde\beta} + \varepsilon\tilde\beta+\frac{\varepsilon\theta^2}{\tilde\beta} + \mu\theta^2\ln
\left(3+\frac{L+\ell+1}{\ell}\right){+\varepsilon}\right),
\]
and the assertion follows as in (iv)c).}\\
\begin{figure}
\caption{Construction for $\tilde u^{(6)}$.}
\label{figlnthetamu}
\end{figure}
\begin{figure}
\caption{Corner-laminate construction for $u^{(6)}$.}
\label{figlnthetamu2}
\end{figure}
\item[(vi)]{Corner laminate:} We show that
{\[\min_u E(u)\leq c\left({\mu \theta^2 \ln (3+ \frac { \varepsilon {L}}{\mu \theta^{2}})
+\mu\theta^2\ln (3+\frac{\theta}{\mu})}
\right).\] }
Again, we distinguish several cases.\\
a) If $\varepsilon\geq\mu\theta^2$, then the assertion follows from (ii).\\
b) If $\theta\le\varepsilon<\mu\theta^2$, then {$\frac{\varepsilon L}{\mu\theta^2}\geq \frac{L}{\mu\theta}>\frac{L}{\mu}$, and $\mu\theta^2> \varepsilon> \varepsilon\theta$, and the assertion} follows from (iii).\\
c) If $\theta/\mu\geq L$, this follows from (ii).\\
d) If $\mu^2\theta\le \varepsilon$, then
$\frac{\varepsilon L}{\mu\theta^2}\frac\theta\mu\ge L$. The assertion follows from (ii) using that $\ln(3+a)+\ln(3+b)\ge\ln(3+ab)$ for any $a,b\ge0$, which holds since $(3+a)(3+b)=9+3a+3b+ab\geq 3+ab$.\\
e) It remains to consider the case that $\varepsilon<\min\{\mu\theta^2,\,\theta, {\mu^2\theta}\}$ and $\theta/\mu< L$.
We choose {$\gamma:=\max\{\frac{1}{4},\frac{\theta}{\mu\ln(3+\frac\theta\mu)}\}$} and note that $\gamma< L$ since $\theta/ \mu< L$ and $L\geq\frac{1}{2}$.
We first construct a function $\tilde u^{(6)}$ in $R_\gamma:=[-\gamma,\gamma]\times[-\gamma,\gamma+1]$,
\begin{subnumcases}{\tilde
u^{(6)}(x):=}
\theta x_2, & $\text{if }x_1\in[0, \gamma],\ 0\le x_2\le 1-{\theta\frac{x_1}\gamma}$,\nonumber\\
(1-\theta)(1-x_2) + \theta - \theta\frac{x_1}\gamma, &
${\text{if\ }} x_1\in[0, \gamma],\ 1- \theta\frac{x_1}\gamma< x_2\le1$,\nonumber\\
\theta(1-\frac{x_1+x_2-1}\gamma), & $\text{if } x_1{\in[0,\gamma], \ 1<x_2\leq 1+\gamma-x_1}$,\label{eq:6B}\\
\theta(1-\frac{x_2-1}\gamma), & ${\text{if }} x_1{\in[-\gamma,0], \ 1-x_1\leq x_2\le \gamma+1}$,\label{eq:6C}\\
\theta(1+\frac{x_1}\gamma)\frac{x_2}{1-x_1}, &
${\text{if }} x_1{\in[-\gamma,0]}$ \text{ and } $0\le x_2< 1-x_1$,\label{eq:6A}\\
0, & $\text{elsewhere in }R_\gamma$,\nonumber
\end{subnumcases}
see Fig.~\ref{figlnthetamu}. One easily checks that $\tilde u^{(6)}$ is continuous and that
\[E_{(0,\gamma)\times(0,1)}(\tilde u^{(6)}){\leq c\left(\frac{\theta^3}{\gamma}+\varepsilon(\gamma+\theta) + \frac{\varepsilon\theta^2}{\gamma}\right)}
\leq c\left(\frac{\theta^3}{\gamma}+\varepsilon \gamma+\varepsilon\theta\right), \]
{where we used in the last estimate that $\varepsilon\leq\theta$.}
To estimate the energy outside $\Omega_{2L}$, we observe that $|\nabla \tilde u^{(6)}|\leq c\theta/\gamma$ in the parts given in \eqref{eq:6B} and \eqref{eq:6C}, and {$|\nabla \tilde u^{(6)}(x)|\leq c\frac{\theta}{|1-x_1|}$} in the part given in \eqref{eq:6A} {(recall that $\gamma\geq 1/4$)}, which yields
\[E_{R_\gamma\setminus(0,\gamma)\times(0,1)}(\tilde u^{(6)})\leq c\mu\theta^2\ln\left(3+\gamma\right).\]
We then proceed as in (iv)c), using $\tilde{v}_{\tilde{\beta}}$ from Lemma \ref{lem:tildev} with $\tilde\beta:=3\mu\theta^2\varepsilon^{-1}$.
Since $\varepsilon<\mu\theta^2$ we have $1\le\tilde\beta$. We use Lemma \ref{lem:lemout} with $\alpha:=\tilde{\beta}$,
$\beta:=\alpha+\tilde{\beta}{=2\alpha}$ and $\bar{L}:=\max\{\beta,L\}$ and
define $u^{(6)}$ by
\[{u}^{(6)}(x):=\begin{cases}
{\tilde u^{(6)}(x)},&{\text{\ if\ }x\in R_\gamma,}\\
\theta x_2,&\text{\ if\ }x\in (\gamma, \gamma+\alpha+\tilde\beta]\times[0,1-\theta],\\
(1-\theta)(1-x_2),&\text{\ if\ }x\in (\gamma, \gamma+\alpha]\times(1-\theta,1],\\
\tilde{v}_{\tilde{\beta}}(x_1-(\gamma+\alpha),x_2-(1-\theta))+\theta(1-\theta),& \text{\ if\ }x\in(\gamma+\alpha,\gamma+\alpha+\tilde{\beta}]\times(1-\theta,1],\\
\theta x_2,&\text{\ if\ }x\in(\gamma+\alpha+\tilde{\beta},L]\times[0,1],\\
u_{\alpha,\beta,\bar{L}}(x_1-\gamma,x_2),&\text{\ otherwise in } (-\infty,L]\times\mathbb{R},\\
u^{(6)}(2L-x_1,x_2),&\text{\ if\ }x_1> L,
\end{cases} \]
see Fig.~\ref{figlnthetamu2}. The condition $\varepsilon\le {\min\{\mu^2\theta,\mu\theta^2\}}$ implies $3\gamma\le \alpha$, so that the construction for $\tilde u^{(6)}$ and the one for $u_{\alpha,\beta,\bar{L}}(x_1-\gamma,x_2)$ match continuously.
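For the reader's convenience, the condition $3\gamma\le\alpha$ can be verified directly: since $\ln(3+x)\ge 1$ for $x\ge0$,
\[
\gamma\le\max\left\{\frac14,\,\frac{\theta}{\mu}\right\}\le\frac{\mu\theta^2}{\varepsilon}=\frac{\alpha}{3},
\]
where the bound on the first argument of the maximum uses $\varepsilon\le\mu\theta^2$ and the bound on the second uses $\varepsilon\le\mu^2\theta$.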
The function $u^{(6)}$ is continuous and
\begin{eqnarray*}
E(u^{(6)})&\leq& c\left(\frac{\theta^3}{\gamma}+\varepsilon\gamma+\varepsilon\theta+\mu\theta^2\ln(3+\gamma)+\varepsilon\alpha+\frac{\theta^3}{\tilde{\beta}}+\varepsilon(\tilde{\beta}+\frac{\theta^2}{\tilde{\beta}})+\mu\theta^2\ln(3+\frac{\bar{L}}{\alpha})+\mu\theta^2\frac{\beta+\alpha}{\beta-\alpha}+\varepsilon\right)\\
&\leq& c\left(\mu\theta^2\ln(3+\frac{\theta}{\mu})+\mu\theta^2\ln\left(3+\frac{\varepsilon L}{\mu\theta^2}\right)\right).
\end{eqnarray*}
We used here {$\varepsilon\gamma\leq\max\{\varepsilon,\frac{\varepsilon \theta}{\mu}\}\leq\mu\theta^2$ since $\varepsilon\leq\min\{\mu\theta^2,\mu^2\theta\}$; and similarly $\varepsilon({1}+\theta+\alpha+\tilde \beta)\le c\mu\theta^2$ and $\frac{\varepsilon\theta^2}{\tilde\beta}\le \frac{\theta^3}{\tilde \beta}=\frac{\varepsilon\theta^3}{3\mu\theta^2}\le \mu\theta^2$.}
\end{itemize}
This concludes the proof of the upper bound.
\end{proof}
\subsection{Comments on the scaling law}\label{sec:tabulars}
The purpose of this subsection is two-fold: On the one hand, in Subsection \ref{sec:regnec} we shall prove that all terms in the definition of $\mathcal{I}(\mu,\varepsilon,\theta,L)$ are relevant in the sense that the statement is false if we remove one of them. Furthermore, we give some intuition on the constructions used in the proof of the upper bound. On the other hand, in Subsection \ref{sec:regimesrough}, we shall explain the different parameter regimes and motivate why they are treated separately in the proof of the lower bound.
\subsubsection{Do all regimes really exist?}
\label{sec:regnec}
We will use the following shorthand notation: we call an expression like $\varepsilon^{1/2}\theta^{3/2}$ a
{\em scaling}, and a sum of a few scalings, such as $\mu\theta^2\ln(3+\frac L\mu)+\varepsilon\theta$, a {\em regime}. In particular, $\mathcal I$ is defined in Theorem \ref{th:main} as the minimum of eight regimes.
We show below that no regime $R=R(\mu,\varepsilon,\theta, L)$ can be eliminated from the definition of $\mathcal I$ in Theorem \ref{th:main}. To do this,
we shall exhibit a sequence of parameters $\mu_j,\varepsilon_j,\theta_j,L_j$ such that
$\frac{\mathcal I_R(\mu_j,\varepsilon_j,\theta_j,L_j)}{R(\mu_j,\varepsilon_j,\theta_j, L_j)}\to\infty$ as $j\to\infty$, where $\mathcal I_R$ denotes the minimum in Theorem \ref{th:main} without the regime $R$.
Additionally, we show that no scaling can be eliminated {in the regimes that consist of more than one scaling}. Consider a regime $R$ which consists of the scalings $S^{(k)}$, in the sense that $R=S^{(1)}+\dots +S^{(K)}$ {for $K\geq 2$}. For any $i\in\{1, \dots, K\}$ let
$R_i:=\sum_{k\ne i} S^{(k)}$ be the regime $R$ without $S^{(i)}$. We shall provide a sequence $(\mu_j,\varepsilon_j,\theta_j,L_j)$ such that $\frac{R_i{(\mu_j,\varepsilon_j,\theta_j,L_j)}}{\mathcal I{(\mu_j,\varepsilon_j,\theta_j,L_j)}}\to0$, proving that
$R$ cannot be replaced by $R_i$. In most cases, this will be done by constructing a sequence with
$\frac{\mathcal I_R}{R}\to\infty$,
$\frac{S^{(i)}}{R}\to1$ and $\frac{S^{(i)}}{S^{(k)}}\to0$ if $k\ne i$ along that sequence, which additionally shows that the scaling $S^{(i)}$ dominates the regime $R$.
To briefly sketch the ideas behind the constructions in the proof of the upper bound, we describe them only inside the martensitic nucleus; they should be understood as extended optimally (in the sense of trace) to the austenite part. The precise constructions and references to the literature are given in Section \ref{sec:upperbound}.
We recall that we write $\ln^\alpha x$ for $(\ln x)^\alpha$, and similarly $\ln^\alpha\ln x$ for $(\ln\ln x)^\alpha$. We write $a_j\sim b_j$ if there is a constant $c>0$ such that $\frac{1}{c} a_j\leq b_j\leq c a_j$ for all $j\in\mathbb{N}$.
\begin{enumerate}
\item $R=\theta^2L$ (constant): This regime is attained by a constant test function, corresponding to austenite. It is the only regime that depends neither on $\varepsilon$ nor on $\mu$. We take $\theta_j=L_j=\frac12$, $\varepsilon_j=\mu_j\to\infty$. Then all other regimes have diverging energy.
\item $R=\mu\theta^2\ln(3+L)$ (affine): This regime is attained by using an affine function inside the nucleus corresponding to the majority variant of martensite (see Figure \ref{fig-regimes1} (left)). We take $\theta_j=\mu_j=\frac12$, $L_j\to\infty$, $\varepsilon_j=e^{L_j}$. All regimes which contain one of the scalings $\varepsilon L$, $\varepsilon\theta$, $\varepsilon^{1/2}\theta^{3/2}$, $\theta^2 L$ have energies which diverge at least as a power of $L_j$,
and also $\mu_j\theta_j^2\ln(3+\frac{\varepsilon_j L_j}{\mu_j\theta_j^2})\gg L_j$; only $\mu\theta^2\ln(3+L)$ is logarithmic.
\item $R=\mu\theta^2\ln(3+\frac L\mu)+\varepsilon\theta$ (linear interpolation). This regime is (when relevant) attained by a test function that is constant near the left and the right boundaries of $\Omega_{2L}$ (corresponding to austenite), and affine near the middle $\{x_1=L\}$ of the nucleus (corresponding to the majority variant of martensite). There is a competition between the energy inside the nucleus, which favours the test function being in the martensitic variant on a large part, and the energy contribution from the austenite part, which favours the function being constant in a large neighbourhood of the left and right boundaries (see Figure \ref{fig-regimes1} (right)).
\begin{itemize}
\item[(a)] $S=\mu\theta^2\ln(3+\frac L\mu)$:
We take {$L_j\to\infty$,} $\theta_j=\frac12$, $\mu_j=\frac{L_j}{\ln L_j}$, $\varepsilon_j=\mu_j\theta_j$. Then $\varepsilon_j\theta_j=\mu_j\theta_j^2$, $\frac {L_j}{\mu_j}=\ln L_j$, so that $R(\mu_j,\varepsilon_j,\theta_j,L_j)\sim S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim L_j \frac{\ln\ln L_j}{\ln L_j}$, whereas $\theta_j^2 L_j \sim L_j$, $ \mu_j\theta_j^2\ln(3+L_j)\sim L_j$,
$\varepsilon_j L_j \sim L_j^2/\ln L_j$,
and $\frac{\varepsilon_j L_j}{\mu_j\theta_j^2}=\frac{L_j}{\theta_j}$ implies
$\mu_j\theta_j^2\ln(3+\frac{\varepsilon_j L_j}{\mu_j\theta_j^2})\sim \mu_j\ln L_j\sim L_j$.
\item[(b)] $S=\varepsilon\theta$: As above,
we take $L_j\to\infty$, $\theta_j=\frac12$, $\mu_j=\frac{L_j}{\ln L_j}$, but this time $\varepsilon_j=\mu_j\theta_j (\ln \ln L_j)^2$. Then $\frac{L_j}{\mu_j}=\ln L_j$, so that $R(\mu_j,\varepsilon_j,\theta_j,L_j)\sim S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim \theta_j^2L_j \frac{(\ln\ln L_j)^2}{\ln L_j}$; in the other terms the $\ln\ln L_j$ correction does not change the argument.
\end{itemize}
\item $R=\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})+\mu\theta^2\ln(3+\frac{\varepsilon}{\mu^2\theta^2})+\varepsilon^{1/2}\theta^{3/2}$
(single truncated branching). This regime is (when relevant) attained by a test function that consists of roughly three parts: close to the left and right boundaries of $\Omega_{2L}$, a branching construction is used, which goes over to a single laminate, and then interpolates to an affine function (corresponding to the majority variant of martensite) near the vertical middle $\{x_1=L\}$ of the nucleus, see Fig.~\ref{figsingletruncbra}.
\begin{itemize}
\item[(a)] $S=\varepsilon^{1/2}\theta^{3/2}$: We take $\theta_j=\frac12$, $L_j\to\infty$, $\mu_j=\frac{1}{L_j}$, $\varepsilon_j=\frac{\mu_j\theta_j^2}{L_j}\ln L_j=\frac{\ln L_j}{4L_j^2}$.
Then $\mu_j\theta_j^2=\frac{1}{ 4L_j}$, $\varepsilon_j L_j=\frac{\ln L_j}{ 4 L_j}$.
In particular, $\theta_j^2L_j\sim L_j$, $\mu_j\theta_j^2 \ln L_j\sim\mu_j\theta_j^2\ln \frac{L_j}{\mu_j} \sim \frac{\ln L_j}{L_j}$.
We have $\frac{\varepsilon_j L_j}{\mu_j\theta_j^2}=\ln L_j$, $\frac{\varepsilon_j}{\mu_j^2\theta_j^2}
=\ln L_j$,
and $S(\mu_j,\varepsilon_j,\theta_j,L_j)=\varepsilon_j^{1/2}\theta_j^{3/2}\sim \frac{1}{L_j}\ln^{1/2} L_j$, so that $R(\mu_j,\varepsilon_j,\theta_j,L_j)=O(\frac{1}{L_j}\ln\ln L_j)+S(\mu_j,\varepsilon_j,\theta_j,L_j)
=O(\frac{1}{L_j}\ln\ln L_j)+\frac{1}{L_j} \ln^{1/2} L_j$, and $S(\mu_j,\varepsilon_j,\theta_j,L_j)/R(\mu_j,\varepsilon_j,\theta_j,L_j)\to1$.
All regimes which contain the scaling $\varepsilon_j L_j\sim\frac{\ln L_j}{L_j}$ can be ignored.
Finally, $\theta_j/\mu_j=\theta_j L_j$, hence
$\mu_j\theta_j^2 \ln\frac{\theta_j}{\mu_j}\sim \frac{1}{L_j}\ln L_j\gg S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim R(\mu_j,\varepsilon_j,\theta_j,L_j)$. This concludes the proof.
\item[(b)] $S=\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})$:
We take $\theta_j=\frac12$, $L_j\to\infty$, $\mu_j=\frac1{L_j^{1/2}}$, $\varepsilon_j = \frac{\mu_j\theta_j^2}{L_j}\ln L_j=\frac1{ 4L_j^{3/2}}\ln L_j$. Then $\mu_j\theta_j^2=\frac1{ 4L_j^{1/2}}$, $\varepsilon_j L_j=\frac1{ 4L_j^{1/2}}\ln L_j$,
$\frac{\varepsilon_j L_j}{\mu_j\theta_j^2}=\ln L_j$,
$\frac{\varepsilon_j}{\mu_j^2\theta_j^2}=\frac{\ln L_j}{L_j^{1/2}}\to0$,
and $\varepsilon_j^{1/2}\sim \frac{1}{L_j^{3/4}}\ln^{1/2}L_j$. Therefore $S(\mu_j,\varepsilon_j,\theta_j,L_j)$ dominates $R(\mu_j,\varepsilon_j,\theta_j,L_j)$,
and $S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim R(\mu_j,\varepsilon_j,\theta_j,L_j)\sim \frac{1}{L_j^{1/2}}\ln\ln L_j$.
All regimes with $\varepsilon_j L_j$ are higher, as is obviously $\theta_j^2L_j$.
{Further, $\frac{\theta_j}{\mu_j}\sim L_j^{1/2}$ implies $\mu_j\theta_j^2\ln(3+\frac{\theta_j}{\mu_j})\sim \frac{1}{L_j^{1/2}}\ln L_j$}, and finally,
$\mu_j\theta_j^2\ln L_j\sim \mu_j\theta_j^2\ln \frac{L_j}{\mu_j}\sim \frac{1}{L_j^{1/2}}\ln L_j\gg R(\mu_j,\varepsilon_j,\theta_j,L_j)$.
\item[(c)] $S=\mu\theta^2\ln (3+\frac{\varepsilon}{\mu^2\theta^2})$:
We take $L_j\to\infty$, $\theta_j=\frac1{\ln^{2/5}L_j}$, $\mu_j=\frac{\ln\ln L_j}{L_j\ln^{1/5} L_j}$, $\varepsilon_j=\frac{\ln^5\ln L_j}{L_j^2\ln L_j}$. Then $\varepsilon_j L_j=\frac{\ln^5\ln L_j}{L_j\ln L_j}$, $\mu_j\theta_j^2=\frac{\ln\ln L_j}{L_j\ln L_j}$, $(\varepsilon_j L_j\mu_j\theta_j^2)^{1/2}=\mu_j\theta_j^2\ln^2\ln L_j$.
Further, $\frac{\varepsilon_j L_j}{\mu_j\theta_j^2}=\ln^4\ln L_j$, $\frac{\varepsilon_j}{\mu_j^2\theta_j^2}=\ln^{1/5} L_j\ln^3\ln L_j$,
and $S(\mu_j,\varepsilon_j,\theta_j,L_j)
\sim {\mu_j\theta_j^2 \ln\ln L_j}\gg \mu_j\theta_j^2\ln\frac{\varepsilon_j L_j}{\mu_j\theta_j^2}
\sim {\mu_j\theta_j^2 \ln\ln\ln L_j}$.
For the third scaling in this regime,
$\varepsilon_j^{1/2}\theta_j^{3/2}=\frac{\ln^{5/2}\ln L_j}{L_j\ln^{1/2}L_j \ln^{3/5}L_j}=
{\mu_j\theta_j^2 \frac{\ln^{3/2}\ln L_j}{\ln^{1/10}L_j}}\ll S(\mu_j,\varepsilon_j,\theta_j,L_j)$. For the {corner laminate} regime, we estimate $\frac{\theta_j}{\mu_j}=\frac{L_j}{\ln^{1/5}L_j\ln\ln L_j}$ which gives
$\mu_j\theta_j^2\ln(3+\frac{\theta_j}{\mu_j})\sim \mu_j\theta_j^2\ln L_j\gg S(\mu_j,\varepsilon_j,\theta_j,L_j)$. The other regimes are simpler: $\theta_j^2L_j=L_j\ln^{-4/5}L_j$ diverges, $\mu_j\theta_j^2\ln(3+L_j)$ and $\mu_j\theta_j^2\ln(3+\frac{L_j}{\mu_j})$ behave as $\mu_j\theta_j^2\ln L_j$, which is much larger than $S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim\mu_j\theta_j^2\ln\ln L_j$, and the last ones (branching, laminate and two-scale branching) are eliminated by $\varepsilon_j L_j$.
\end{itemize}
\item $R=\mu\theta^2\ln(3+\frac\theta\mu)+\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})$
(corner laminate): This regime is (when relevant) attained by the construction sketched in Figure \ref{figlnthetamu2} (left). Note that this leads to two relevant contributions from the austenite part, as sketched in Figure \ref{figlnthetamu2} (right).
\begin{itemize}
\item[(a)]
$S=\mu\theta^2\ln(3+\frac\theta\mu)$:
We take $L_j\to\infty$, $\theta_j=\frac1{L_j^2}$, $\mu_j=\frac{1}{L_j^2\ln L_j}$, $\varepsilon_j=\frac{\ln^4\ln L_j}{L_j^7\ln L_j}$.
Then $\frac{\theta_j}{\mu_j}=\ln L_j$, $\mu_j\theta_j^2=\frac{1}{L_j^6\ln L_j}$, $\varepsilon_j L_j=\frac{\ln^4\ln L_j}{L_j^6\ln L_j}$.
In particular, $\ln \frac{\varepsilon_j L_j}{\mu_j\theta_j^2}\sim \ln\ln\ln L_j \ll \ln\frac{\theta_j}{\mu_j}= \ln\ln L_j$, and
$S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim \frac{1}{L_j^6\ln L_j}\ln\ln L_j$.
Therefore $S(\mu_j,\varepsilon_j,\theta_j,L_j)$ dominates $R(\mu_j,\varepsilon_j,\theta_j,L_j)$.
To eliminate the other regimes, we observe that $\frac{\varepsilon_j}{\mu_j^2\theta_j^2}=L_j (\ln L_j)(\ln^4\ln L_j)$ shows that
$\mu_j\theta_j^2\ln(3+\frac{\varepsilon_j}{\mu_j^2\theta_j^2})/S(\mu_j,\varepsilon_j,\theta_j,L_j)\to\infty$.
Since $\varepsilon_j L_j=\frac{\ln^4\ln L_j}{L_j^6\ln L_j}\gg S(\mu_j,\varepsilon_j,\theta_j,L_j)$, branching, laminates and two-scale branching are ruled out. Since
$\mu_j\theta_j^2\ln L_j\sim \mu_j\theta_j^2\ln \frac{L_j}{\mu_j}\sim \frac{1}{L_j^6}\gg S(\mu_j,\varepsilon_j,\theta_j,L_j)$, and $\theta_j^2 L_j=\frac{1}{L_j^3}$, so all remaining regimes are eliminated.
\item[(b)] $S=\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})$.
{
We take $L_j\to\infty$, $\theta_j=\mu_j=\frac1{L_j^2}$, $\varepsilon_j=\frac{\ln L_j}{L_j^7}$.
Then $\frac{\theta_j}{\mu_j}=1$, $\mu_j\theta_j^2=\frac{1}{L_j^6}$, $\varepsilon_j L_j=\frac{\ln L_j}{L_j^6}$.
In particular, $\ln \frac{\varepsilon_j L_j}{\mu_j\theta_j^2}\sim \ln\ln L_j \gg \ln(3+\frac{\theta_j}{\mu_j})=\ln 4$, and
$S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim \frac{1}{L_j^6}\ln\ln L_j$.
Therefore $S(\mu_j,\varepsilon_j,\theta_j,L_j)$ dominates $R(\mu_j,\varepsilon_j,\theta_j,L_j)$.
To eliminate the other regimes, we observe that $\frac{\varepsilon_j}{\mu_j^2\theta_j^2}=L_j \ln L_j$ shows that
$\mu_j\theta_j^2\ln(3+\frac{\varepsilon_j}{\mu_j^2\theta_j^2})/S(\mu_j,\varepsilon_j,\theta_j,L_j)\to\infty$.
The bottom ones (branching, laminates and two-scale branching) are eliminated by $\varepsilon_j L_j/S(\mu_j,\varepsilon_j,\theta_j,L_j)
\sim \frac{\ln L_j}{\ln\ln L_j}\to\infty$, the top ones (constant, affine and linear interpolation) by $\mu_j\le 1$, which implies
$\min\{\theta_j^2 L_j, \mu_j\theta_j^2\ln(3+\frac{L_j}{\mu_j})\}\geq \mu_j\theta_j^2\ln L_j\sim \frac{\ln L_j}{L_j^6}\gg S(\mu_j,\varepsilon_j,\theta_j,L_j)$.
}
\end{itemize}
\item $R=\varepsilon^{2/3}\theta^{2/3}L^{1/3}+\varepsilon L$ {(branching)}: This regime is (when relevant) attained by a branching construction sketched in Figure \ref{fig-br2-disc} (left).
\begin{itemize}
\item[(a)] $S=\varepsilon^{2/3}\theta^{2/3}L^{1/3}$. We take $\theta_j=\mu_j=L_j=\frac12$, $\varepsilon_j=\frac1j\to0$. Only the last three regimes {(branching, laminates and two-scale branching)} have infinitesimal energy.
We have $\varepsilon_j^{2/3}\theta_j^{2/3}L_j^{1/3}\sim j^{-2/3}$, whereas $\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}\sim j^{-1/2}$. At the same time, $\varepsilon_j^{2/3}\gg \varepsilon_j$, and hence $S(\mu_j,\varepsilon_j,\theta_j,L_j)$ dominates $R(\mu_j,\varepsilon_j,\theta_j,L_j)$.
\item[(b)] $S=\varepsilon L$. We take $\theta_j=\mu_j=\frac12$,
$L_j=j^{2/3}\to\infty$,
$\varepsilon_j=\frac1j\to0$.
Then $\varepsilon_j L_j=j^{-1/3}\to0$, ${\varepsilon_j}^{2/3}L_j^{1/3}=j^{-4/9}\ll j^{-1/3}$,
$\mu_j\theta_j^2=\frac18$, ${\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}}\sim j^{-1/6}\gg j^{-1/3}$.
\end{itemize}
\item $R=\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac1{\theta^2})+\varepsilon L$ (laminate): This regime is attained by a laminate construction as sketched in Figure \ref{fig-br2-disc} (middle).
\begin{itemize}
\item[(a)]
{$S=\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac1{\theta^2})$:}
We take $L_j\to\infty$, $\theta_j=\frac 1{L_j}$, $\mu_j=L_j^2e^{-L_j}$, $\varepsilon_j=\frac1{L_j^2}e^{-L_j}$.
Then $\theta_j^2L_j=\frac1L_j$, $\mu_j\theta_j^2=e^{-L_j}$, $\varepsilon_j L_j=\frac{1}{L_j}e^{-L_j}$, and ${\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}}= \frac{1}{L_j^{1/2}}e^{-L_j}$.
We compute in detail the last two regimes. Since $\ln(3+\frac{1}{\theta_j^2})\sim\ln L_j$, we have $S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim \frac{1}{L_j^{1/2}}e^{-L_j} \ln^{1/2} L_j$ and $\varepsilon_j L_j/S(\mu_j,\varepsilon_j,\theta_j,L_j)\to0$.
Since
$\ln(3+\frac{\varepsilon_j}{\mu_j^3L_j\theta_j^2})=\ln(3+\frac{e^{2L_j}}{L_j^7})\sim L_j$, {we have}
${\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}}\ln^{1/2}(3+\frac{\varepsilon_j}{\mu_j^3L_j\theta_j^2})\sim e^{-L_j}$. With $\mu_j\theta_j^2/S(\mu_j,\varepsilon_j,\theta_j,L_j)\to\infty$ the proof is concluded.
\item[(b)]{$S=\varepsilon L$:} {We take $L_j\to\infty$, $\theta_j=\frac 1{\ln^{1/2} L_j}$, $\mu_j=\frac{1}{L_j^{3/2}}$, $\varepsilon_j=\frac1{L_j^{5/2}\ln^{1/2}L_j}$.
Then $\mu_j\theta_j^2=\frac{1}{L_j^{3/2}\ln L_j}$, $\varepsilon_j L_j=
\frac{1}{L_j^{3/2}\ln^{1/2} L_j}$, and ${\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}}=\frac{1}{L_j^{3/2}\ln^{3/4} L_j}$.
We compute in detail the last two regimes. Since $\ln(3+\frac{1}{\theta_j^2})\sim\ln\ln L_j$, {we have}
${\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}} \ln^{1/2}(3+\frac{1}{\theta_j^2})\sim\frac{1}{L_j^{3/2}\ln^{3/4} L_j}\ln^{1/2}\ln L_j\ll \varepsilon_j L_j$.
Since
$\ln(3+\frac{\varepsilon_j}{\mu_j^3L_j\theta_j^2})=\ln(3+L_j\ln^{1/2}L_j)\sim \ln L_j$, {we have}
${\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}}\ln^{1/2}(3+\frac{\varepsilon_j}{\mu_j^3L_j\theta_j^2})\sim
\frac{1}{L_j^{3/2}\ln^{1/4} L_j}\gg \varepsilon_j L_j$.
Further, $\theta_j^2L_j\sim L_j/\ln L_j$, $\mu_j\le 1$, and
the three terms
$\ln(3+L_j)$, $\ln(3+\frac{\varepsilon_j}{\mu_j^2\theta_j^2})$
and $\ln(3+\frac{\theta_j}{\mu_j})$ {behave as} $ \ln L_j$, eliminating the first five regimes {(constant, affine, linear interpolation, single truncated branching and corner laminate)}.
}
\end{itemize}
\item $R={\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2L})}+\varepsilon L$ {(two-scale branching)}: This regime is (when relevant) attained by a two-scale branching construction sketched in Figure \ref{fig-br2-disc} (right).
\begin{itemize}
\item[(a)] $S=\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2L})$:
We take ${L_j\to} \infty$, $\theta_j=\frac1{L_j}$, $\mu_j=\frac1{L_j^{5/2}}$, $\varepsilon_j=\mu_j^3\theta_j^2{L_j}\ln L_j=\frac{1}{L_j^{8+\frac12}}\ln L_j$.
Then $\mu_j\theta_j^2=\frac1{L_j^{4+\frac12}}$, $\varepsilon_j L_j=\frac{1}{L_j^{7+\frac12}}\ln L_j$, $\varepsilon_j^{2/3}\theta_j^{2/3}L_j^{1/3}=
\frac{1}{L_j^{6}}\ln^{2/3}L_j$,
$\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}=\frac{1}{L_j^{6}}\ln^{1/2}L_j$, $\frac{\varepsilon_j}{\mu_j^3\theta_j^2L_j}=\ln L_j$, so that
$S(\mu_j,\varepsilon_j,\theta_j,L_j)\sim \frac{1}{L_j^{6}}\ln^{1/2} L_j \ln^{1/2}\ln L_j$ and $\varepsilon_j L_j/S(\mu_j,\varepsilon_j,\theta_j,L_j)\to0$.
The laminate is eliminated by
$(\mu_j\theta_j^2\varepsilon_j L_j)^{1/2}\ln^{1/2}(3+\frac1{\theta_j^2})\sim \frac{1}{L_j^{6}}\ln L_j\gg S(\mu_j,\varepsilon_j,\theta_j,L_j)$, the regimes containing a $\mu_j\theta_j^2$ scaling
by $\mu_j\theta_j^2/S(\mu_j,\varepsilon_j,\theta_j,L_j)\to\infty$, and the constant regime by
$\theta_j^2L_j=\frac{1}{L_j}$.
\item[(b)] $S=\varepsilon L$:
{We take $L_j\to \infty$, $\theta_j=\frac1{\ln^{1/5}L_j}$, $\mu_j=\frac{1}{L_j\ln L_j}$,
$\varepsilon_j=\frac{1}{L_j^2\ln^{3/5}L_j}$. Then
$\mu_j\theta_j^2=\frac{1}{L_j\ln^{7/5}L_j}$, $\varepsilon_jL_j=\frac{1}{L_j\ln^{3/5}L_j}=\mu_j\theta_j^2\ln^{4/5}L_j$,
$\frac{\varepsilon_j}{\mu_j^3\theta_j^2L_j}=\ln^{14/5}L_j$, which implies that
$S_{TSB}:=\mu_j^{1/2}\varepsilon_j^{1/2}\theta_j L_j^{1/2}\ln^{1/2}(3+\frac{\varepsilon_j}{\mu_j^3\theta_j^2L_j}) \sim (\mu_j\theta_j^2\varepsilon_jL_j)^{1/2}\ln^{1/2} \ln L_j
\sim\frac{\ln^{1/2} \ln L_j}{L_j\ln L_j}\sim \mu_j\theta_j^2(\ln^{2/5}L_j)(\ln^{1/2}\ln L_j)\ll \varepsilon_j L_j$.
To conclude, we need to check that for all regimes $R$ entering $\mathcal I$ we have
$S_{TSB}/R\to0$.
This is obvious for $\theta_j^2 L_j$, for
$\mu_j\theta_j^2\ln L_j$, and for all regimes that contain an $\varepsilon_jL_j$ scaling. Since $\mu_j\le 1$, the regime with $\ln(3+L_j/\mu_j)$ is also irrelevant.
Since $\varepsilon_j^{1/2}\theta_j^{3/2}=\varepsilon_jL_j$, this is also true for the regimes that contain the $\varepsilon_j^{1/2}\theta_j^{3/2}$ scaling{, and finally $\frac{\theta_j}{\mu_j}=L_j\ln^{4/5}L_j$ shows that this also holds true for the corner laminate.}
}
\end{itemize}
\end{enumerate}
\subsubsection{Rough overview of some parameter ranges}\label{sec:regimesrough}
The proof of the lower bound in Section \ref{sec:lowerbound} is split into several parts that address different parameter ranges. We shall briefly motivate and sketch heuristically why different behaviours are expected in the considered ranges, and how this is reflected in our scaling law.\\[.3cm]
(i) We first consider the range in which $\varepsilon$ is not so small, in the sense that $\varepsilon\geq\min\{\theta^2,\mu\theta^2\}$. This is the range considered in Subsection \ref{sec:lbunif}. Roughly speaking, interfacial energy is expensive, and one expects rather uniform structures. Note that among the scaling regimes, the ``uniform'' constructions of constant functions (austenite, $L\theta^2$) and affine functions (majority variant of martensite, $\mu\theta^2\ln(3+L)$) scale differently in the size of the nucleus $L$. Hence, comparing these two regimes leads, for large $\mu$, to a competition between $\mu$ and $L$.
\begin{itemize}
\item[(a)] If $\mu< 1$ then elastic strain in the austenite part is more favourable than elastic strain in the martensite part. Further, $\varepsilon \geq\min\{\mu\theta^2,\theta^2\}=\mu\theta^2$ implies that also interfacial energy is expensive compared to elastic energy in the austenite part. Therefore, one would expect that low energy configurations behave roughly like affine functions (corresponding to the majority variant of martensite) inside the nucleus. This is reflected in our scaling law: Since $\mu\theta^2\ln(3+L)\leq\mu\theta^2\ln(3+\frac{L}{\mu})\leq c\mu\theta^2(3+\frac{L}{\mu})\leq c(\mu\theta^2+L\theta^2)\leq cL\theta^2, \quad \frac{\varepsilon L}{\mu\theta^2}\geq L,\text{\ and\ }\varepsilon L\geq \mu\theta^2 L\geq c\mu\theta^2\ln(3+L)$,
we obtain $\mathcal{I}(\mu,\varepsilon,\theta,L)\sim \mu\theta^2\ln(3+L)$, which corresponds to the affine test function.
\item[(b)] If $\mu\geq 1$, then the behaviour is different, and the above mentioned competition between $L$ and $\mu$ becomes relevant. Note that $\mu\geq 1$ and $\varepsilon\geq\min\{\mu\theta^2,\theta^2\}=\theta^2$ imply that $\varepsilon L\geq\theta^2 L$, and hence all the branching and laminate regimes with a scaling $\varepsilon L$ are not relevant. To see that the regimes with three scalings are not relevant is more complicated: We always have $\frac{\varepsilon L}{\mu\theta^2}\geq\frac{L}{\mu}$. Hence, if $\mu\theta^2\ln(3+\frac{L}{\mu})\gtrsim \varepsilon \theta$, we have $\mu\theta^2\ln(3+{\frac{\varepsilon L}{\mu\theta^2}})\geq \mu\theta^2\ln(3+\frac{L}{\mu})\gtrsim \mu\theta^2\ln(3+\frac{L}{\mu})+\varepsilon \theta$, and we are done. Otherwise, $\varepsilon\theta\gtrsim \mu\theta^2\ln(3+\frac{L}{\mu})$ implies $\varepsilon\gtrsim\mu\theta^2$ and hence $\frac{\varepsilon L}{\mu\theta^2}\gtrsim L$, which yields $\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})\gtrsim \mu\theta^2\ln(3+L)$. Summarizing, we obtain
\[\mathcal{I}(\mu,\varepsilon,\theta,L)\sim\min\left\{\theta^2L,\mu\theta^2\ln(3+L),\mu\theta^2\ln(3+\frac{L}{\mu})+\varepsilon \theta\right\}. \]
The examples given in Subsection \ref{sec:regnec} show that all scalings are relevant in this parameter range.
\end{itemize}
(ii) The parameter range $\varepsilon\leq\min\{\theta^2,\mu\theta^2\}$ is more delicate since here many contributions compete. Note that in this range, the scaling $\theta^2 L$ is not relevant since $4\theta^2 L\geq \varepsilon^{2/3}\theta^{2/3}L^{1/3}+\varepsilon L$ (recall that $L\geq 1/2$). Also the regime $\mu\theta^2\ln(3+\frac{L}{\mu})+\varepsilon \theta$ does not occur: Indeed,
if $\mu\le 1$ then $\mu\theta^2\ln(3+L)\leq\mu\theta^2\ln(3+\frac{L}{\mu})$.
If $\mu>1$,
then $\varepsilon^{1/2}\theta^{3/2}\le \theta^2\le \mu\theta^2$,
{$\frac{\varepsilon L}{\mu\theta^2}\le \frac{L}{\mu}$} and $\frac{\varepsilon}{\mu^2\theta^2}\le \frac{1}{\mu^2}\le 1$. Summarizing, $ 4\mu\theta^2\ln(3+\frac{L}{\mu})\geq\min\{\mu\theta^2\ln(3+L),\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})+\mu\theta^2\ln(3+\frac{\varepsilon}{\mu^2\theta^2})+\varepsilon^{1/2}\theta^{3/2}\}$, and hence $\mu\theta^2\ln(3+\frac{L}{\mu})+\varepsilon\theta$ does not occur.\\
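In the case $\mu>1$, the three estimates above combine term by term into
\[
\mu\theta^2\ln(3+\tfrac{\varepsilon L}{\mu\theta^2})+\mu\theta^2\ln(3+\tfrac{\varepsilon}{\mu^2\theta^2})+\varepsilon^{1/2}\theta^{3/2}
\le \mu\theta^2\ln(3+\tfrac{L}{\mu})+\mu\theta^2\ln 4+\mu\theta^2
\le 4\mu\theta^2\ln(3+\tfrac{L}{\mu}),
\]
where the last step uses $\ln(3+\frac L\mu)\ge\ln 3\ge 1$.\\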
In this parameter range, the main difficulty lies in the logarithmic corrections. Roughly speaking, both complex patterns and rather uniform structures can occur, and the overall behaviour is mainly determined by the comparison of $\varepsilon L$ with several scalings involving $\mu\theta^2$. The latter, however, contain logarithmic corrections that make the comparison rather involved and lead to mixtures of different constructions. There are essentially two qualitatively different sources of the logarithmic terms: Some of them arise (rather locally) from laminated structures in the vicinity of the left and right boundaries of the nucleus (see Lemma \ref{lem:lamoutside}). Others are due to the fact that in long nuclei an affine structure deep inside the nucleus leads to non-periodic boundary conditions at the top and bottom boundaries of the nucleus, and hence to elastic strain in the austenite (see Lemma \ref{lem:lemout}).
To indicate the different phenomena, we consider several subcases, corresponding to the competition between $\varepsilon^{2/3}\theta^{2/3}L^{1/3}$ and $\varepsilon L$, and the size of $\mu$.
\begin{itemize}
\item[(a)] Assume $\varepsilon \geq\frac{\theta^2}{L^2}$. For these rather large values of $\varepsilon$, one expects that the relevant structures are rather uniform with few horizontal interfaces passing through the whole nucleus. This behaviour is reflected in our scaling law as follows: We have $\varepsilon L\geq \varepsilon^{2/3}\theta^{2/3}L^{1/3}$ which means that the branching regime behaves as $\varepsilon L$ and that the laminate and two-scale branching regimes are not relevant.
{
Since $\big(\frac{\varepsilon}{\mu^2\theta^2}\big)^{1/2}\leq \frac{\varepsilon L}{\mu\theta^2}$, the {single truncated branching} regime reduces to
$\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})+\varepsilon^{1/2} \theta^{3/2}$.
The corner laminate regime can then be removed. Indeed, if $\varepsilon\le\mu^2\theta$
then $\varepsilon^{1/2}\theta^{3/2}\le \mu\theta^2$. If $\mu^2\theta\le \varepsilon$, then $L\le\frac\theta\mu\frac{\varepsilon L}{\mu\theta^2}$ implies
$\ln(3+L)\le \ln(3+\frac\theta\mu)+\ln(3+\frac{\varepsilon L}{\mu\theta^2})$; for details see the proof of Theorem \ref{th:upperbound} (vi)d). Therefore,}
\[\mathcal{I}(\mu,\varepsilon,\theta,L)\sim\min\left\{\mu\theta^2\ln(3+L),\ \mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})+\varepsilon^{1/2} \theta^{3/2},\ \varepsilon L\right\}. \]
{The corresponding lower bound is} the statement of Proposition \ref{propinterp} with the additional assumption $\varepsilon L^2\geq\theta^2$. Using $\varepsilon L\geq \varepsilon^{1/2}\theta^{3/2}$, $\frac{\varepsilon L}{\mu\theta^2}\leq L$ and $\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})\leq \mu\theta^2+\varepsilon L$, one can see that all the scalings are relevant.
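The first of these auxiliary inequalities follows directly from the assumption $\varepsilon L^2\geq\theta^2$ together with $\theta\le1$:
\[
\varepsilon L=(\varepsilon L^2)^{1/2}\,\varepsilon^{1/2}\ge\theta\,\varepsilon^{1/2}\ge\varepsilon^{1/2}\theta^{3/2}.
\]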
\item[(b)] Assume $\varepsilon \leq\min\{\theta^2,\mu\theta^2,\theta^2/L^2\}$. Note that the condition $\varepsilon \leq\frac{\theta^2}{L^2}$ implies in particular $\varepsilon L\leq\theta^2/L$, i.e., roughly speaking, in the martensite part, a single laminate is cheaper than having a constant function or interpolating from a constant function at the left and right boundaries to an affine function deep inside the nucleus. The first two conditions indicate that interfacial energy is cheap compared to elastic energy in both the austenite and the martensite parts. However, there are various competitions between the interfacial energy, the elastic energies in the austenite and martensite parts, and the size of the nucleus. We shall outline the main points in these competitions by considering the cases $\mu\geq 1$ (i.e., elastic energy in the austenite is more expensive than in the martensite part), $\frac{1}{L}\leq\mu\leq 1$ (i.e., elastic energy in the austenite part is less expensive than in the martensite part but the nucleus is rather large), and $\mu\leq \frac{1}{L}$ (i.e., elastic energy in the austenite part is less expensive than in the martensite part and the nucleus is not so large).
\begin{itemize}
\item Assume $\mu\geq 1$. Then interfacial energy is rather cheap, the size of the nucleus is small in terms of $\varepsilon$, and elastic energy in the austenite part is rather expensive. Therefore, one expects that optimal configurations form complex microstructures inside the nucleus, with little strain in the austenite part. This is reflected in our scaling law as follows: We have $\varepsilon L\leq \varepsilon^{2/3}\theta^{2/3}L^{1/3}\leq (\frac{\theta^2}{L^2})^{2/3}\theta^{2/3}L^{1/3}\lesssim \mu\theta^2$ (since $L\geq 1/2$ and $\mu\geq 1$), which shows that all regimes with a scaling $\mu\theta^2\ln(3+X)$ are irrelevant. Since moreover $\varepsilon^{2/3}\theta^{2/3}L^{1/3}\lesssim \mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}$, the laminate and two-scale branching regimes with a scaling $\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+Y)$ are not relevant either, and therefore
{we are in the branching regime of Fig.~\ref{fig-br2-disc}(left),}
$$\mathcal{I}(\mu,\varepsilon,\theta, L)\sim \varepsilon^{2/3}\theta^{2/3}L^{1/3}.$$
\item Assume $\frac{1}{L}\leq\mu\leq 1$. The situation is similar to the case above. However, if the size $L$ of the nucleus is large (in terms of $\mu$), then one expects a competition between the formation of complex patterns inside the nucleus and elastic energy in the austenite part in the vicinity of the left and right boundaries. This is reflected in our scaling law as follows: Again, $\varepsilon L\leq\varepsilon^{2/3}\theta^{2/3}L^{1/3}\leq (\frac{\theta^2}{L^2})^{2/3}\theta^{2/3}L^{1/3}\lesssim \mu\theta^2$ implies that all regimes with a scaling $\mu\theta^2\ln(3+X)$ are not relevant. Furthermore, $\frac{\varepsilon}{\mu^3\theta^2 L}\leq\frac{1}{\theta^2}$ since $\mu^3 L\geq\frac{1}{L^2}\geq\frac{\theta^2}{L^2}\geq \varepsilon$, which means that two-scale branching is more favourable than laminates. Therefore, in this parameter range
$$\mathcal{I}(\mu,\varepsilon,\theta, L)\sim\min\left\{\varepsilon^{2/3}\theta^{2/3}L^{1/3},\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2 L})+\varepsilon L\right\}.$$ Using that $\varepsilon^{2/3}\theta^{2/3}L^{1/3}\geq \mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2 L})$ is equivalent to $y\geq\ln^{3}(3+y)$ for $y:=\frac{\varepsilon}{\mu^3 L\theta^2}$, i.e., to $y\geq c$, one easily checks that all three scalings are relevant.
\item Assume finally $\mu\leq \frac{1}{L}$. This is the richest and most complex parameter range. \\[.2cm]
If $\varepsilon$ is very small, in the sense that additionally $\varepsilon\leq\frac{\mu^{3/2}\theta^2}{L^{1/2}}\leq\frac{\mu}{L}\theta^2$, then one expects the formation of complex patterns inside the nucleus. This is reflected in our scaling law as follows: On the one hand, $\varepsilon L\leq \varepsilon^{2/3}\theta^{2/3}L^{1/3}\leq\mu\theta^2$, which shows that branching behaves as $\varepsilon^{2/3}\theta^{2/3}L^{1/3}$ and that all the regimes with a term $\mu\theta^2\ln(3+X)$ are not relevant. On the other hand, $\varepsilon L\leq \mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}$. Summarizing, only branching, two-scale branching, and laminates {(see Fig. \ref{fig-br2-disc})} are relevant, and $$\mathcal{I}(\mu,\varepsilon,\theta, L)\sim\min\{\varepsilon^{2/3}\theta^{2/3}L^{1/3},\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2 L}),\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{1}{\theta^2})\}.$$ As above, one easily checks that all the scalings are relevant. For the case of large $\theta\geq m_1$, the logarithms disappear since $\ln(3+\frac{1}{\theta^2})\leq c$, and the corresponding lower bound is proven in Lemma \ref{lem:thetalargebranching}.\\[.2cm]
Let us finally address the remaining range $\frac{\mu^{3/2}\theta^2}{L^{1/2}}\leq\varepsilon\leq\min\{\mu\theta^2,\frac{\theta^2}{L^2}\}$ in which the overall behaviour is essentially determined by the logarithmic terms. Setting $y:=\frac{\varepsilon}{L\mu^3\theta^2}\geq 1$, we have $y/\ln^{3}(3+y)\geq c$ and hence $\varepsilon^{2/3}\theta^{2/3}L^{1/3}\geq c\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2 L})$, which shows that branching is not relevant. We also note that $\frac{\varepsilon L}{\mu\theta^2}\leq\frac{\varepsilon}{\mu^2\theta^2}$. In this case,
\begin{eqnarray*}
\mathcal{I}(\mu,\varepsilon,\theta, L)\sim &\min\Big\{
\mu \theta^2 \ln (3+L),
\mu\theta^2\ln (3+\frac{\varepsilon}{\mu^2\theta^2})
+ \varepsilon^{1/2}\theta^{3/2},
{\mu \theta^2 \ln (3+ \frac { \varepsilon {L}}{\mu \theta^{2}})
+\mu\theta^2\ln (3+\frac{\theta}{\mu})}
,\\
& \mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} \ln^{1/2} (3+ \frac 1 {\theta^{2}})+\varepsilon L,
\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} \ln^{1/2} (3+ \frac{\varepsilon}{ \mu^{3}\theta^{2}{L}})+\varepsilon L
\Big\}.
\end{eqnarray*}
Here many competitions between the ``more local'' (lower line) and ``global'' (upper line) logarithms take place, and the different contributions are treated separately in the proof of the lower bound. Precisely, in Lemma \ref{lemmabdryln} the ``global'' logarithms are captured, which are always in competition with a single laminate ($\varepsilon L$). Combined with the energy required for an interpolation from a constant to an affine function (see Lemma \ref{lem:tildev}), this leads to the lower bound in Proposition \ref{propinterp}. The case of a single laminate requires additional care since there the incompatibility at the left and right boundaries leads to a competition between complex microstructures inside the nucleus and elastic strain in the austenite part. We point out that the situation here is (even in a scalar-valued setting) more complicated and the scaling behaviour is more complex than in the well-studied case of a vertical austenite/martensite interface with periodic boundary conditions at the top and bottom boundaries. This is in particular reflected in Proposition \ref{lem:lbbranching} by the additional regime $\mu\theta^2\ln(3+\frac{\theta}{\mu})$.
\end{itemize}
\end{itemize}
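For the reader's convenience we record the two elementary comparisons that were used repeatedly in the above discussion. First, a single laminate is dominated by branching precisely in the regime of (b):
\begin{equation*}
\varepsilon L\le \varepsilon^{2/3}\theta^{2/3}L^{1/3}
\quad\Longleftrightarrow\quad
\varepsilon^{1/3}L^{2/3}\le\theta^{2/3}
\quad\Longleftrightarrow\quad
\varepsilon L^2\le\theta^2.
\end{equation*}
Second, the branching scaling exceeds the two-scale branching scaling precisely when, with $y:=\frac{\varepsilon}{\mu^3\theta^2 L}$,
\begin{equation*}
\varepsilon^{2/3}\theta^{2/3}L^{1/3}\ge \mu^{1/2}\varepsilon^{1/2}\theta L^{1/2}\ln^{1/2}\Big(3+\frac{\varepsilon}{\mu^3\theta^2 L}\Big)
\quad\Longleftrightarrow\quad
y^{1/6}\ge\ln^{1/2}(3+y)
\quad\Longleftrightarrow\quad
y\ge\ln^{3}(3+y),
\end{equation*}
and the last condition holds if and only if $y\ge c$ for a universal constant $c$.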
\section{Lower bound}\label{sec:lowerbound}
The proof of the lower bound will be divided into three main parts, addressing various regimes in which qualitatively different behavior is expected from the constructions in Subsection \ref{sec:ub}. We shall briefly outline the structure of the proof:\\
In Subsection \ref{sec:lbunif}, we deal with the case that $\varepsilon$ is not very small. Specifically, we assume that one of $\theta^2$ and $\mu\theta^2$ is below $\varepsilon$. Roughly speaking, in this regime, one expects rather uniform structures inside the nucleus. The lower bound in this regime is given in Proposition \ref{lem:logsammelnthetasmall}. The key competition is between the bulk energy in the martensite and the bulk energy in the austenite, and is made quantitative in Lemma \ref{lem:gammawuerfel}. \\
In Subsection \ref{sec:ubcomplexstrip}, we treat the case of small $\varepsilon$, in which we expect microstructure. This is the most interesting and richest regime, in which a variety of one- and two-scale branching patterns appear. The smallness of $\varepsilon$ corresponds to two conditions: {firstly,} it should be such that there is at least a single interface over the entire length of the sample, as made quantitative by comparing $\varepsilon L$ with $\mu\theta^2$ (up to a logarithmic factor, see below for the precise condition). Secondly, it must be such that the cost of a branching pattern is
not dominated by the cost of a single straight interface, {in the sense} that $ \varepsilon L\le \varepsilon^{2/3}\theta^{2/3}L^{1/3}$, {which is equivalent to}
$\varepsilon L^2\le \theta^2$.
Roughly speaking, in this regime, one expects complex patterns inside the whole martensitic nucleus, and contributions from the austenite part only close to the left and right boundaries of $\Omega_{2L}$. The lower bound is derived in Proposition \ref{lem:lbbranching}, which builds upon a series of lemmas for specific parts of the estimate. \\
In Subsection \ref{sec:lb3}, we address the remaining part of the small-$\varepsilon$ {range} which is not covered in Subsection \ref{sec:ubcomplexstrip}, corresponding to the cases that $\mu$ is small ({in the sense that} $\mu\theta^2 \lesssim \varepsilon {L}$) or that $L$ is large ({in the sense that}
$\theta^2<\varepsilon L^2$). In this case, one expects that there are parts inside the nucleus in which the displacement is affine or a single laminate. The relevant lower bound is obtained in Proposition \ref{lem:logsammeln}. \\
Finally, in Subsection \ref{sec:lbconclusion}, we put together the above results and conclude the proof of the lower bound.
\\
We start by making a few general observations and definitions that will be used throughout the argument.
The condition $\nabla u\in BV(\Omega_{2L};\mathbb{R}^{2\times 2})$ implies that $u$ has a representative which is continuous on $\overline{\Omega_{2L}}$ (see, for example, \cite[Lemma 9]{co16}). We always work with this representative, mainly on slices in direction $\xi=(1/4,1)$.
One important quantity is the set $\mathcal C$ of slices which are almost affine with slope $\theta$ or $\theta-1$ {(recall \eqref{eq:defuxi})}
\begin{equation}\label{eqdefC}
\mathcal{C}:= \Big{\{}x_1\in (0,L-\xi_1)\,:\,
\min\big\{ \|u_{x_1}^\xi(s)-u^\xi_{x_1}(0)-s\theta\|_{L^\infty((0,1))},
\|u_{x_1}^\xi(s)-u^\xi_{x_1}(0)-s(\theta-1)\|_{L^\infty((0,1))}{\big\}} < \frac1{16}\theta\Big\}.
\end{equation}
These are slices which have almost no energy in the martensitic nucleus. The boundary values on the top and bottom of the slice, however, differ by approximately $\theta$ (or $1-\theta$). Therefore, these slices generate a large energy in the austenitic matrix.
Correspondingly, we shall consider the set $\mathcal P$ of slices where the boundary values are close,
\begin{equation}\label{eqdefPth}
\mathcal{P} := \left\{x_1\in (0,L-\xi_1)\,:\,
|u((x_1,0)+\xi)-u(x_1,0)| \leq 2^{-7} \theta \right\}.
\end{equation}
These slices generate very small energy in the austenitic matrix, but cannot have low energy inside the martensitic nucleus. They can be realized either by microstructure (with energy density at least $\varepsilon$) or by having a deformation which does not match the eigendeformation of the martensite (with energy density at least $\theta^2$). The sets $\mathcal{C}$ and $\mathcal{P}$ are clearly disjoint. This competition and these energy contributions will be made precise in Lemma \ref{lem:gammawuerfel} below.
{Throughout the proof of the lower bound we shall often focus on ``typical'' slices, and relate the one-dimensional integrals over slices to the energy via Fubini's theorem. For example, recalling the definition (\ref{eqdeffl2delta}), one has
$\int_{(0,L)\times(0,1)} |f|^2 \dxy\ge \int_0^{L-\xi_1} \|f\|_{L^2(\Delta^\xi_{x_1})}^2 \dx
\ge \alpha^2 \mathcal{L}^1\left(\{x_1\in(0, L-\xi_1): \|f\|_{L^2(\Delta^\xi_{x_1})}\ge\alpha\}\right)$ for any $\alpha>0$.
}
\subsection{A lower bound in the parameter range $\theta^2\le \varepsilon$ {or} $\mu\theta^2\le \varepsilon$}\label{sec:lbunif}
In this case the structure inside the martensite is coarse, and the optimal bound is obtained by considering a path that contains the segment from $(x_1,0)$ to $(x_1+\xi_1,1)$ and then goes back through the austenite phase, staying at a distance of order $x_1$ from the martensite (see Figure \ref{figlemma34} and Lemma \ref{lem:gammawuerfel}).
\begin{prpstn} \label{lem:logsammelnthetasmall}
There exists $c>0$ such that for all $u\in\mathcal{X}$, $\mu>0$, $\theta\in (0,1/2]$, $\varepsilon>0$, and {$L\in[1/2,\infty)$}
with
\begin{equation*}
\min\{\theta^2,\mu\theta^2\}\le \varepsilon
\end{equation*}
there holds
\begin{equation*}
I(u)\ge c \min\Big\{\mu\theta^2\ln (3+L), \theta^2L, \mu\theta^2\ln (3+\frac L\mu)+\varepsilon\theta\Big\}.
\end{equation*}
\end{prpstn}
The key estimate, which will also be useful later in the proof, is the following.
We recall the definition of the sets $\mathcal P$ and $\mathcal C$ in
(\ref{eqdefC}) and (\ref{eqdefPth}),
{the definition of $\Delta^\xi_{x_1}$
in (\ref{eq:sliceDelta}) and of the $L^2(\Delta^\xi_{x_1})$ norm in (\ref{eqdeffl2delta}).} The geometry is illustrated in Figure \ref{figlemma34}.
\begin{lmm}[Bulk energy]\label{lem:gammawuerfel}
Suppose that $\theta\in(0,1/2]$, $L\in[1/2,\infty)$, $u\in{\mathcal{X}}$. The following holds:
\begin{enumerate}
\item\label{lem:gammawuerfelout} For $i\in\{1,2\}$
and for almost every $x_1\in (0,L-\xi_1)$
one has
\[\mu\int_{S_{x_1}}|\nabla u_i|^2\dcalH^1 \geq \frac18 \mu \frac{1}{1+ x_1}
|u_i(x_1+\xi_1,1)-u_i(x_1,0)|^2,\]
where $S_{x_1}$ is the polygonal {arc in $\mathbb{R}^2\setminus\Omega_{2L}$} joining the points
\begin{eqnarray*}
&&(x_1+\xi_1,1), \,(x_1+\xi_1,1+x_1),\,(-x_1,1+x_1), \,
(-x_1,-x_1), \,(x_1,-x_1), \, (x_1,0).
\end{eqnarray*}
\item\label{lem:gammawuerfelin}
{If $x_1\in[0, L-\xi_1]$ obeys} $\|\min \{|e(u)-\theta e_1 \odot e_2|,\,|e(u)+(1-\theta) e_1 \odot e_2|\}\|_{L^2(\Delta^\xi_{x_1})}^2 \leq \tilde{C}_1\theta^2$ and $|\partial_s\partial_s u_{x_1}^\xi|((0,1)) \leq \tilde{C}_1$, where
$\tilde C_1:=2^{-13}$, {then} $x_1\in\mathcal C$.
\item\label{lem:gammawuerfelC}
For any $x_1\in\mathcal C$ we have
$|u_{x_1}^\xi(1)-u_{x_1}^\xi(0)|\ge \frac34\theta$.
In particular, the sets $\mathcal P$ and $\mathcal C$
are disjoint.
\item\label{lem:gammawuerfelest}
One has
\begin{equation*}
I(u)\ge c \min\{\varepsilon,\theta^2\} \mathcal{L}^1([0,L-\xi_1]\setminus \mathcal C),
\end{equation*}
and
\begin{equation*}
I(u)\ge c\mu\theta^2\ln\frac{L+1-\xi_1}{\mathcal{L}^1(\mathcal P)+1}+c\min\{\varepsilon,\theta^2\} \mathcal{L}^1(\mathcal P).
\end{equation*}
\end{enumerate}
\end{lmm}
\begin{figure}
\caption{Sketch of the geometry in Lemma \ref{lem:gammawuerfel}.}
\label{figlemma34}
\end{figure}
\begin{proof}
(\ref{lem:gammawuerfelout}): The assertion follows by a direct computation, using that
\[ \mathcal{H}^1(S_{x_1})=8x_1+1+\xi_1\leq 8(1+x_1).\]
{Indeed, for} almost every $x_1 \in (0,L-\xi_1)$ we have that $u_i \in W^{1,2}(S_{x_1})$.
It then follows that
\begin{eqnarray*}
\int_{S_{x_1}}|\nabla u_i|^2\text{ d}\mathcal{H}^1\geq(\mathcal{H}^1(S_{x_1}))^{-1} \Big(\int_{S_{x_1}}|\nabla u_i|\text{ d}\mathcal{H}^1\Big)^2 \geq \frac{|u_i(x_1+\xi_1,1)-u_i(x_1,0)|^2}{8(1+x_1)}.
\end{eqnarray*}
(\ref{lem:gammawuerfelin}): Fix such an $x_1$ and define $v(s):= u^\xi_{x_1}(s)$.
Since $|\partial_s \partial_s v|\le \frac18$, there is $b\in\mathbb{R}$ such that $|v'(s)-b|\le \frac18$ for almost all $s\in(0,1)$.
We observe that {by H\"older's inequality and \eqref{eq:estduxi}}
\begin{equation*}
\min\{|b-\theta|, |b+(1-\theta)|\} \le
\int_0^1 \min\left\{|v'-\theta|, |v'+(1-\theta)|\right\}+|v'-b| \ds \le {5}\tilde{C}_1^{1/2}\theta+\frac18 \le \frac14.
\end{equation*}
Therefore there is $\sigma\in\{0,1\}$ such that $|b+\sigma-\theta|\le \frac14$, and correspondingly $|v'(s)+\sigma-\theta|\le \frac12$ for almost all $s$. Since $|\theta-(\theta-1)|=1$ it follows that $|v'(s)+\sigma-\theta|=\min\{|v'-\theta|, |v'+1-\theta|\}$ for almost every $s$, and thus
\begin{equation*}
\int_0^1 |v'+\sigma-\theta|\ds=
\int_0^1 \min\{|v'-\theta|, |v'+1-\theta|\} \ds\le{5} \tilde C_1^{1/2}\theta.
\end{equation*}
Integrating we obtain
\begin{equation}
|v(s)+(\sigma-\theta)s-v(0)|\le {5}\tilde C_1^{1/2}\theta \text{ for all $s\in(0,1)$.}
\end{equation}
Since $\tilde C_1=2^{-13}$ this implies $x_1\in\mathcal C$.
(\ref{lem:gammawuerfelC}): For $x_1\in\mathcal C$ we have
$|u^\xi_{x_1}(1)-u^\xi_{x_1}(0)|\ge \min_{\sigma\in\{0,1\}} |\sigma-\theta| -2 \frac1{16}\theta
\ge {\frac34}\theta$.
The second assertion follows from
${\frac34}\theta\le |u^\xi_{x_1}(1)-u^\xi_{x_1}(0)|
\le 4|\xi|\, |u((x_1,0)+\xi)-u(x_1,0)|$ and $|\xi|\le 2$.
{(\ref{lem:gammawuerfelest}): The first assertion follows from (\ref{lem:gammawuerfelin}) together with Fubini's theorem. Indeed, if
$x_1\in(0,L-\xi_1)\setminus\mathcal C$ {then}
$\|\min \{|e(u)-\theta e_1 \odot e_2|,\,|e(u)+(1-\theta) e_1 \odot e_2|\}\|_{L^2(\Delta^\xi_{x_1})}^2 > \tilde{C}_1\theta^2$ or $\varepsilon |\partial_s\partial_s u_{x_1}^\xi|((0,1)) > \varepsilon \tilde{C}_1$.
Integrating over all such $x_1$ we obtain $I(u)\ge c \min\{\varepsilon,\theta^2\} \mathcal{L}^1([0,L-\xi_1]\setminus \mathcal C)$.
To prove the second assertion, for almost every $x_1\in (0, L-\xi_1)\setminus\mathcal P$
we obtain by (\ref{lem:gammawuerfelout}) that
\begin{equation*}
\mu\int_{S_{x_1}}|\nabla u|^2\text{ d}\mathcal{H}^1 \geq c \frac{\mu\theta^2}{1+ x_1}
\end{equation*}
so that, using Fubini's theorem and the monotonicity of $1/(1+x_1)$,
\begin{eqnarray*}
I^\mathrm {ext}(u)\geq c\mu\theta^2\int_{(0, L-\xi_1)\setminus\mathcal P}\frac{1}{x_1+1}\text{ d}x_1\geq
c\mu\theta^2\int_{\mathcal L^1(\mathcal P)}^{L-\xi_1} \frac{1}{x_1+1}\text{ d}x_1= c\mu\theta^2\ln\frac{L+1-\xi_1}{\mathcal L^1(\mathcal P)+1}.
\end{eqnarray*}
Finally, since $\mathcal P$ and $\mathcal C$ are disjoint, we have $\mathcal P\subset[0,L-\xi_1]\setminus\mathcal C$ and
$I(u)\ge c \min\{\varepsilon,\theta^2\} \mathcal{L}^1([0,L-\xi_1]\setminus \mathcal C)
\ge c \min\{\varepsilon,\theta^2\} \mathcal{L}^1(\mathcal P)$.
}
\end{proof}
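We remark that the precise value $\tilde C_1=2^{-13}$ enters the proof of (\ref{lem:gammawuerfelin}) only through the elementary numerical bounds
\begin{equation*}
5\tilde C_1^{1/2}\theta+\frac18= \frac{5}{2^{13/2}}\,\theta+\frac18\le\frac{\theta}{16}+\frac18\le\frac14
\qquad\text{and}\qquad
5\tilde C_1^{1/2}=\frac{5}{2^{13/2}}<\frac{1}{16},
\end{equation*}
the latter guaranteeing that the final estimate $|v(s)+(\sigma-\theta)s-v(0)|\le 5\tilde C_1^{1/2}\theta$ stays below the threshold $\frac{\theta}{16}$ in the definition (\ref{eqdefC}) of $\mathcal C$; any smaller value of $\tilde C_1$ would work as well.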
\begin{proof}[Proof of Proposition \ref{lem:logsammelnthetasmall}]
{
By Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelest}) we have, with $p:=\mathcal{L}^1(\mathcal P)$,
\begin{equation}\label{eqboundpp34}
I(u)\ge c\mu\theta^2\ln\frac{L+1-\xi_1}{p+1} + cp \min\{\varepsilon,\theta^2\}.
\end{equation}
We distinguish two cases. }
\begin{enumerate}
\item { Assume $\mu\le1$. Then the assumption on $\varepsilon$ implies $\mu\theta^2\le\varepsilon$, so that $\mu\theta^2\le \min\{\varepsilon,\theta^2\}$ and (\ref{eqboundpp34}) gives
\begin{equation*}
I(u)\ge c \min_{p'\in[0,L-\xi_1]}\Big( \mu\theta^2\ln \frac{L+\frac34}{p'+1} + p'\mu\theta^2\Big)
=c\mu\theta^2\ln(L+\frac34) \ge c\mu\theta^2\ln (3+L),
\end{equation*}
which concludes the proof in this case. Here we used that the expression to be minimized is nondecreasing in $p'$, hence the minimum is attained at $p'=0$.
}
\item {
Assume $1<\mu$. Then the assumption on $\varepsilon$ implies $\theta^2\le\varepsilon$, and
(\ref{eqboundpp34}) gives
\begin{equation*}
I(u)\ge c \Big( \mu\theta^2\ln \frac{L+\frac34}{p+1} + p\theta^2\Big).
\end{equation*}
If $p\ge\frac15 L$, then $I(u)\ge c \theta^2L$ and we are done.
If $p=0$, then $I(u)\ge c \mu\theta^2 \ln(3+L)$ as above and we are done.
Assume now $0<p<\frac15 L$. We observe that $p\le \frac15 L$ and $\frac12\le L$ imply
$\frac{10}9(p+1)\le L+\frac34$ and therefore
$I(u)\ge c\mu\theta^2\ln\frac{10}9\ge c \mu\theta^2$. Further,
\begin{equation*}
I(u)\ge c \min_{p'\in[0,L-\xi_1]}\Big( \mu\theta^2\ln \frac{L+\frac34}{p'+1} + p'\theta^2\Big)
\ge c \min\left\{ \mu\theta^2\ln (3+L),\mu\theta^2\ln \Big(\frac {L+\frac34}\mu\Big),\theta^2L\right\}
\end{equation*}
(we used here that the only zero of the derivative is at $p'+1=\mu$, hence the minimum over $[0,\infty)$ is at $p'=\mu-1$). Since we had already proven $I(u)\ge c\mu\theta^2$, distinguishing the two cases $L/\mu\ge 3$ and $L/\mu\le 3$ gives
\begin{equation}\label{eqmutheta3lldsf}
I(u)\ge c\min\left\{ \mu\theta^2\ln (3+L),\mu\theta^2\ln (3+\frac {L}\mu),\theta^2L\right\}.
\end{equation}
At this point it only remains to obtain the $\varepsilon\theta$ term.
We are working under the assumption that $p>0$.
For almost any $x_p\in\mathcal P$ we have
\begin{equation*}
\Big|\int_{(0,1)} (u_{x_p}^\xi)' (s) \ds\Big| = |u_{x_p}^\xi(1)-u_{x_p}^\xi(0)|\le \frac1{4}\theta.
\end{equation*}
If $\mathcal C=\emptyset$ then by
Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelest})
we have $I(u)\ge c\min\{\varepsilon,\theta^2\}( L-\xi_1)\ge c \theta^2 L$, {and we are done}. Otherwise, for any $x_c\in\mathcal C$ we have
\begin{equation*}
\frac34\theta\le |u_{x_c}^\xi(1)-u_{x_c}^\xi(0)|= \Big|\int_{(0,1)} (u_{x_c}^\xi)' (s) \ds\Big| .
\end{equation*}
Therefore
\begin{equation*}
\frac12\theta\le \int_{(0,1)} | (u_{x_c}^\xi)' - (u_{x_p}^\xi)' | \ds\le
4|D^2u|((0,L)\times(0,1))
\end{equation*}
and therefore $I(u)\ge c\varepsilon\theta$. Recalling (\ref{eqmutheta3lldsf}) we have
\begin{equation*}
I(u)\ge c \min\left\{ \mu\theta^2\ln (3+L),\mu\theta^2\ln (3+\frac L\mu) +\varepsilon\theta, \theta^2L\right\}
\end{equation*}
which concludes the proof.
}
\end{enumerate}
\end{proof}
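For the reader's convenience we record the elementary one-dimensional minimization underlying both cases of the previous proof. For $A\ge 1$ and $\alpha,\beta>0$, the convex function $g(p'):=\alpha\ln\frac{A}{p'+1}+\beta p'$ satisfies $g'(p')=-\frac{\alpha}{p'+1}+\beta$, which vanishes only at $p'+1=\frac{\alpha}{\beta}$, and hence
\begin{equation*}
\min_{p'\ge 0} g(p')=
\begin{cases}
\alpha\ln A, & \text{if }\alpha\le\beta,\\[2pt]
\alpha\ln\big(\frac{A\beta}{\alpha}\big)+\alpha-\beta, & \text{if }\alpha>\beta.
\end{cases}
\end{equation*}
The first case of the proof corresponds to $\alpha=\beta=\mu\theta^2$ and $A=L+\frac34$, while the second corresponds to $\alpha=\mu\theta^2$, $\beta=\theta^2$ with $\mu>1$, in agreement with (\ref{eqmutheta3lldsf}).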
\subsection{A lower bound in the {parameter range} $\varepsilon L^2\le\theta^2$ and {$\varepsilon L\lesssim\mu \theta^{2}$}}\label{sec:ubcomplexstrip}
We turn to the case in which $\varepsilon L^2\le \theta^2$, and formulate a lower bound on the energy restricted to {the set}
\begin{equation}\label{eq:tildeOmegal}
{\tilde{\Omega}_\ell:=(-\infty,\ell)\times\mathbb{R}. }
\end{equation}
We give these estimates for general $\ell\leq 2L$ since this does not require extra work. To prove the main theorem, however, we will only need the case $\ell=2L$.
\begin{prpstn}\label{lem:lbbranching}
There exists $c>0$ such that for all $u\in \mathcal{X}$, $\theta \in (0, 1/2]$,
{ $\varepsilon>0$, $\ell>0$, $\mu>0$ }
which obey
\begin{equation*}
1\le \ell\le 2L\,,\qquad
\varepsilon\ell^2\le\theta^2\,,\qquad
\text{and}\qquad \varepsilon\ell\le \mu\theta^2 \min\left\{
\ln(3+\frac{1}{\theta^2}),
\ln(3+\frac{\varepsilon}{\mu^3\theta^2\ell})
\right\}
\end{equation*}
there holds
\begin{equation*}
\begin{split}
I_{\tilde{\Omega}_\ell}(u) \geq c\min
\Big{\{}&
{ \mu \theta^2 \ln(3+\ell)},
{\mu\theta^2\ln(3+\frac\theta\mu)},
{\varepsilon^{2/3} \theta^{2/3}\ell^{1/3}},
\\
&
\mu^{1/2}\varepsilon^{1/2} { \theta}\ell^{1/2} (\ln (3+ \frac 1 {\theta^2}))^{1/2} , \mu^{1/2}\varepsilon^{1/2}{\theta} \ell^{1/2} (\ln (3+ \frac \varepsilon {\mu^3\theta^2{\ell}}))^{1/2}
\Big{\}}.
\end{split} \end{equation*}
\end{prpstn}
The key estimate for the most difficult case, in which $\theta$ is small, is proven in Lemma \ref{lem:thetasmallbranching} below.
A proof of similar statements has been provided by two of the authors in \cite{conti-zwicknagl:16} in the context of simplified scalar-valued models for austenite/martensite interfaces and crystal plasticity. Our proof follows the general strategy of that proof and builds on techniques from there, with two main differences: first, the vectorial structure requires more refined arguments; second, the isotropic elastic modulus allows for more flexibility, which is treated differently than in the corresponding model for dislocation microstructures.\\
The vector-valued setting including a symmetrized gradient requires more careful slicing techniques than in \cite{conti-zwicknagl:16}.
We follow a $BD$-type {approach} and consider diagonal slices instead of nearly vertical slices.
An intuitive choice of the direction of slices would be $(1,1)$.
However, this would lead to problems in the case $L=1/2$. {Indeed, in this case $\Omega_{2L}=(0,1)^2$ contains only a single segment parallel to $(1,1)$ which connects the bottom and the top boundaries, and it would not be possible to choose a {`typical'} slice (in the sense of Fubini's theorem).} {Note that the energy controls only the symmetric part of the gradient of $u$, and hence we rely on $BD$-type slicing results which hold along diagonal slices but not on vertical ones.}
For these reasons, we fix the direction of the slices as
\begin{equation*}
\xi=(\frac14,1).
\end{equation*}
We first treat the simpler case in which $\theta$ is bounded away from zero. In this case we use a different argument, based on ideas that were introduced in \cite{chan-conti:14-1} in the geometrically nonlinear setting and were refined in the linear setting in \cite{diermeier:13,melching:15}.
All these works treat only the case $\theta=\frac 12$, but the argument can easily be extended to the case $\theta \in [m_1, \frac 12 ]$. We start with this argument, which is simpler and does not require much preparation.
\begin{lmm}[The case of large $\theta$] \label{lem:thetalargebranching}
Let $m_1 \in(0,1/2]${.} There exists $c>0$ depending only on $m_1$ such that for all $u\in \mathcal{X}$,
$\varepsilon>0$, $\ell>0$, $\mu>0$, $\theta>0$
which obey
\begin{equation*}
1\le \ell\le 2L\,,\qquad
\varepsilon\ell^2\le 1\,,\qquad
\varepsilon\ell\le \mu \,,\qquad
\text{and}\qquad
m_1\le \theta\le \frac12
\end{equation*}
we have
\[
I_{\tilde{\Omega}_\ell}(u) \geq c\min
\Big{\{}
\varepsilon^{2/3}{\ell^{1/3}}, {\mu^{1/2}\varepsilon^{1/2} \ell^{1/2} }
\Big{\}}.
\]
\end{lmm}
\begin{figure}
\caption{Sketch of the geometry in the proof of Lemma \ref{lem:thetalargebranching}.}
\label{figlemmamm1}
\end{figure}
\begin{proof}
Let $\sigma:\Omega_\ell\rightarrow \{\theta,-1+\theta\}$ be the function that indicates which variant is locally {attained,} i.e., such that $\min\{|e(u)-\theta e_1 \odot e_2|,|e(u)+(1-\theta)e_1\odot e_2|\} = |e(u)-\sigma e_1 \odot e_2|$ almost everywhere.
We fix $\rho \in (0,1]$,
and choose $\tilde x_1\in [0,\ell-\rho]$, $\tilde x_2\in [0,1-\rho]$ such that the
infinite horizontal strip $S_h:=(-\infty,\ell)\times(\tilde{x}_2,\tilde{x}_2+\rho)$, and {the }infinite vertical strip $S_v:=(\tilde{x}_1,\tilde{x}_1+\rho)\times \mathbb{R}$ obey,
recalling the definition of $\tilde{\Omega}_{\ell}$ in \eqref{eq:tildeOmegal},
\begin{eqnarray}\label{eq:choiceQ}
I_Q(u)\leq c\ell^{-1}\rho^2 I_{\tilde{\Omega}_\ell}(u), \quad I_{S_v}(u) \leq c \ell^{-1}\rho I_{\tilde{\Omega}_\ell}(u), \text{\ and\ } I_{S_h}(u) \leq c\rho I_{\tilde{\Omega}_\ell}(u),
\end{eqnarray}
{ where} $Q:=S_h\cap S_v\subset \Omega_\ell$ (see Figure \ref{figlemmamm1}).
By Poincar\'e's inequality we have that
\begin{eqnarray}\label{eq:Pbigtheta}
\|\partial_2 u_1-\langle \partial_2 u_1\rangle_Q \|_{L^1(Q)}\leq \rho |D^2u_1|(Q) \quad\text{ and } \quad\|\partial_1 u_2-\langle \partial_1 u_2\rangle_Q \|_{L^1(Q)}\leq \rho |D^2u_2|(Q)
\end{eqnarray}
where $\langle f\rangle_Q$ denotes the average of $f$ over $Q$.
Combined with H\"older's inequality, we obtain
\begin{eqnarray*}
\|\sigma-\langle \partial_2u_1\rangle_Q -\langle \partial_1u_2\rangle_Q\|_{L^1(Q)} &\leq& \|\sigma-\left( \partial_2u_1+\partial_1u_2\right)\|_{L^1(Q)}+ \|\partial_2u_1-\langle \partial_2u_1\rangle_Q \|_{L^1(Q)}\\
&&+ \|\partial_1u_2-\langle \partial_1u_2\rangle_Q \|_{L^1(Q)}\\
&\leq& c\left(\rho I^{1/2}_Q(u)+\rho \varepsilon^{-1}I_Q(u)\right) \leq c\left(\rho^2\ell^{-1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u)+\rho^3 \varepsilon^{-1}\ell^{-1}I_{\tilde{\Omega}_\ell}(u)\right).
\end{eqnarray*}
If $\mathcal{L}^2(\{\sigma=\theta\}\cap Q)\geq \frac 12\rho^2$, then
\[\frac{1}{2}\rho^2\left|\theta- \langle \partial_2u_1\rangle_Q-\langle \partial_1u_2\rangle_Q\right|\leq \left\|\sigma-\langle \partial_2u_1\rangle_Q -\langle \partial_1u_2\rangle_Q\right\|_{L^1(Q)}. \]
Otherwise $\mathcal{L}^2(\{\sigma=\theta-1\}\cap Q)\geq \frac 12\rho^2$ and hence
\[\frac{1}{2}\rho^2|(\theta-1)- \langle \partial_2u_1\rangle_Q-\langle \partial_1u_2\rangle_Q|\leq \|\sigma-\langle \partial_2u_1\rangle_Q -\langle \partial_1u_2\rangle_Q\|_{L^1(Q)}. \]
Hence if $|\langle \partial_2u_1\rangle_Q|\leq m_1/4$ and $|\langle \partial_1u_2\rangle_Q|\leq m_1/4$, then, since $m_1\leq \theta$, we have $|\theta-\langle\partial_2u_1\rangle_Q-\langle\partial_1u_2\rangle_Q|\geq\theta/2$ and $|\theta-1-\langle\partial_2u_1\rangle_Q-\langle\partial_1u_2\rangle_Q|\geq\theta/2$, and we deduce
\[
m_1 \rho^2 \leq c\left(\rho^2 \ell^{-1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u)+\rho^3 \varepsilon^{-1}\ell^{-1}I_{\tilde{\Omega}_\ell}(u)\right),
\]
which implies that
\begin{align}\label{eq:lbfirstest}
I_{\tilde{\Omega}_\ell}(u)\geq c\min\{\ell,\varepsilon \ell\rho^{-1}\}.
\end{align}
This estimate will be considered as one part of (\ref{eq:lbfinaltest}) below. \\
It remains to consider the other cases.
If $|\langle \partial_2u_1\rangle_Q| > m_1/4$, we set $a:= \langle \partial_2u_1\rangle_Q$ and $b:= \langle u_1 -{a} x_2\rangle_Q$. Then by Poincar\'e's and H\"older's inequalities {(see also \eqref{eq:Pbigtheta})}
\begin{align*}
\|u_1 - a x_2 -b\|_{L^1(Q)}\leq \rho\|\nabla(u_1-\langle\partial_2u_1\rangle_Qx_2)\|_{L^1(Q)}
&\leq \rho \|\partial_1u_1\|_{L^1(Q)} + \rho \|\partial_2u_1 -\langle \partial_2u_1 \rangle_Q\|_{L^1(Q)}\\
&\leq \rho^2 I^{1/2}_Q(u) +\rho^2 \varepsilon^{-1} I_Q(u).
\end{align*}
By Fubini's theorem, there is $x_1^\ast\in(\tilde{x}_1,\tilde{x}_1+\rho)$ such that
\[\int_{\tilde{x}_2}^{\tilde{x}_2+\rho}|u_1(x_1^\ast,x_2)-ax_2-b|\text{ d}x_2\leq \rho I^{1/2}_Q(u) +\rho \varepsilon^{-1} I_Q(u)\]
and $u_1(x_1^*,\cdot)$ is the trace of $u_1$ on $\{x_1=x_1^*\}$.
We use the fundamental theorem of calculus to transfer this information to a corresponding slice on the boundary.
Precisely, we estimate
\begin{align*}
&{
\int_{\tilde{x}_2}^{\tilde{x}_2+\rho}|u_1(0,x_2)-ax_2-b|\text{ d}x_2 }\\
&\leq
\int_{\tilde{x}_2}^{\tilde{x}_2+\rho}|u_1(x_1^\ast,x_2)-ax_2-b|\text{ d}x_2+\int_{\tilde{x}_2}^{\tilde{x}_2+\rho}\int_0^{x_1^\ast}|\partial_1u_1|(x_1,x_2)\text{ d}x_1 \text{ d}x_2
\\
&\leq c\left( \rho I^{1/2}_Q(u) +\rho \varepsilon^{-1} I_Q(u) + \ell^{1/2} \rho^{1/2} I^{1/2}_{S_h}(u)\right)\\
&\leq c\left( \rho^2 \ell^{-1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u) + \rho^3\varepsilon^{-1}\ell^{-1} I_{\tilde{\Omega}_\ell}(u) + \rho \ell^{1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u) \right) .
\end{align*}
Since $|a|\geq m_1/4$, we get by Lemma \ref{lem:interpolSC} applied to $u_1(0,\tilde x_2+\cdot)$ that
\begin{align}\nonumber
m_1 \rho^2
\leq c\left( \rho^2 \ell^{-1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u) + \rho^3\varepsilon^{-1}\ell^{-1} I_{\tilde{\Omega}_\ell}(u) + \rho \ell^{1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u) +{m_1^{-1}}\mu^{-1} I_{S_h}(u)\right)\\
\leq c\left( \rho^2 \ell^{-1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u) + \rho^3\varepsilon^{-1}\ell^{-1} I_{\tilde{\Omega}_\ell}(u) + \rho \ell^{1/2} I_{\tilde{\Omega}_\ell}^{1/2}(u) +\mu^{-1} \rho I_{\tilde{\Omega}_\ell}(u)\right), \label{eq:lbsecondtest}
\end{align}
where in the last step we absorbed $m_1$ into the constant $c$.
Finally, if $|\langle\partial_1u_2\rangle_Q|> m_1/4$, we proceed analogously, interchanging the indices $1$ and $2$, and obtain the estimate
\begin{eqnarray*}
m_1\rho^2&\leq& c\left(\int_{\tilde{x}_1}^{\tilde{x}_1+\rho} |u_2(x_1,x_2^\ast)-\tilde{a}x_1-\tilde{b}|\text{ d}x_1+\int_{\tilde{x}_1}^{\tilde{x}_1+\rho}\int_0^{x_2^\ast}|\partial_2u_2|\text{ d}x + \mu^{-1}[u(\cdot ,0)]_{H^{1/2}(\tilde {x}_1,\tilde{x}_1+\rho)}^2\right)\\
&\leq& c\left(\rho^2\ell^{-1/2}I_{\tilde{\Omega}_\ell}^{1/2}(u)+\rho^3\varepsilon^{-1}\ell^{-1}I_{\tilde{\Omega}_\ell}(u)+\rho I_{\tilde{\Omega}_\ell}^{1/2}(u) +\mu^{-1} \ell^{-1} \rho I_{\tilde{\Omega}_\ell}(u)\right)
\end{eqnarray*}
with $\tilde{a}:= \langle \partial_1u_2\rangle_Q$ and $\tilde{b}:= \langle u_2 -
{\tilde a} x_1\rangle_Q$ and an appropriately chosen $x_2^\ast\in (\tilde{x}_2,\tilde{x}_2+\rho)$.
Putting this together with \eqref{eq:lbfirstest} and \eqref{eq:lbsecondtest}, we see that for any $\rho\in (0,1)$
\begin{align}\label{eq:lbfinaltest}
I_{\tilde{\Omega}_\ell}(u) \geq c\min \{ \ell, \varepsilon {\ell }\rho^{-1}, \ell^{-1}\rho^2 ,\mu \rho \}
= c\min \{ \varepsilon {\ell }\rho^{-1}, \ell^{-1}\rho^2 ,\mu \rho \},
\end{align}
where we used that $\rho\le\ell$ implies $\ell^{-1}\rho^2\le\ell$.
\\
If $\mu \geq \varepsilon^{1/3} \ell^{-1/3}$ we choose $\rho= \varepsilon^{1/3} \ell^{2/3}$ and conclude $I_{\tilde{\Omega}_\ell}(u) \geq c \varepsilon^{2/3}\ell^{1/3} $ ($\rho\le 1$ since $\varepsilon\ell^2\le1$).\\
If $\mu < \varepsilon^{1/3} \ell^{-1/3}$ we choose $\rho= \mu^{-1/2}\varepsilon^{1/2} \ell^{1/2}$ and conclude $I_{\tilde{\Omega}_\ell}(u) \geq c \mu^{1/2} \varepsilon^{1/2}\ell^{1/2} $ since $\mu^{1/2} \varepsilon^{1/2}\ell^{1/2}\leq \ell^{-1}\rho^2=\mu^{-1}\varepsilon$ by the assumption on $\mu$
(note that $\rho\leq 1$ since $\varepsilon\ell\leq \mu$).
This concludes the proof of Lemma \ref{lem:thetalargebranching}.
\end{proof}
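For the reader's convenience we note how the two choices of $\rho$ balance the three terms in \eqref{eq:lbfinaltest}. With $\rho=\varepsilon^{1/3}\ell^{2/3}$ one has
\begin{equation*}
\varepsilon\ell\rho^{-1}=\ell^{-1}\rho^2=\varepsilon^{2/3}\ell^{1/3},
\qquad
\mu\rho\ge \varepsilon^{2/3}\ell^{1/3}\quad\text{if }\mu\ge\varepsilon^{1/3}\ell^{-1/3},
\end{equation*}
while with $\rho=\mu^{-1/2}\varepsilon^{1/2}\ell^{1/2}$ one has
\begin{equation*}
\varepsilon\ell\rho^{-1}=\mu\rho=\mu^{1/2}\varepsilon^{1/2}\ell^{1/2},
\qquad
\ell^{-1}\rho^2=\mu^{-1}\varepsilon\ge\mu^{1/2}\varepsilon^{1/2}\ell^{1/2}\quad\text{if }\mu\le\varepsilon^{1/3}\ell^{-1/3},
\end{equation*}
so that in each case the minimum in \eqref{eq:lbfinaltest} is attained, up to constants, by the claimed scaling.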
If $\theta$ is small then a more complex procedure is needed, in order to capture the various logarithmic divergences in the energy. Before presenting the main estimate in Lemma \ref{lem:thetasmallbranching} we need a number of preliminary results, which characterize the behavior of low-energy functions close to the corners in $(0,0)$ and $(0,1)$ and on ``good'' slices.
We begin by proving an estimate for the behavior of $u_1$ close to the corners $(0,0)$ and $(0,1)$, which captures the logarithmic divergence of the energy in the austenitic matrix, see (\ref{equ1diffc}).
\begin{lmm}\label{lemmaystin}
{For any $c_*\in(0,1]$ there is
$C=C(c_*)$ such that whenever ${1\leq \ell\le 2L}$, $m\in(0,\frac14]$ and
\begin{equation*}
0<\mu\le \theta\le m
\end{equation*}
one of the following holds: either
\begin{equation*}
\frac1C I(u)\ge \min\left\{\mu\theta^2 \ln(3+\frac\theta\mu),\mu\theta^2 \ln\frac1m,\mu\theta^2\ln(3+\ell)
\right\}
\end{equation*}
or
\begin{equation}\label{equ1diffc}
\int_m^{1} \frac{|u_1(x_1,1)-u_1(x_1,0)|}{x_1}\dx\le
c_*\theta\ln\frac{1}{m}.
\end{equation}
}
\end{lmm}
\begin{figure}
\caption{Sketch of the geometry in the proof of Lemma \ref{lemmaystin}.}
\label{figlh}
\end{figure}
\begin{proof}
We show below the following:
either (\ref{equ1diffc}) holds or
\begin{equation}\label{eqa5iquezq}
\frac1c I(u)\ge \min\left\{\frac{\theta^3}q,\mu\theta^2 \ln(3+q),\mu\theta^2 \ln\frac1m
\right\}
\text{ for any $q\in [1,\ell]$.}
\end{equation}
We first prove that this implies the assertion.
We let $q_*:=\frac{ 2\theta}{\mu \ln(3+\theta/\mu)}$, observe that $\mu\le\theta$ implies $q_*\ge 1$, and
distinguish two cases. If $q_*\le \ell$ we set $q=q_*$. Then $\theta^3/q_\ast=\frac12\mu\theta^2\ln(3+\frac{\theta}{\mu})$. Further, observe that for any $t>0$ one has
$3+\frac{ 2t}{\ln(3+t)}\ge (3+t)^{1/2}$, which implies $\ln(3+\frac{ 2t}{\ln (3+t)})\ge \frac12\ln (3+t)$,
and obtain
$ \mu\theta^2\ln(3+q_*)\ge \frac 12\mu\theta^2\ln(3+\frac\theta\mu)$, which concludes the proof.
If instead $q_*\ge \ell$ we set $q=\ell$ and observe that in this case
$\frac{\theta^3}{q}\ge \frac{\theta^3}{q_*}=\frac12\mu\theta^2\ln(3+\frac\theta\mu)$,
which also concludes the proof.
Therefore it suffices to show that one of (\ref{equ1diffc}) and (\ref{eqa5iquezq}) holds.
\newcommand\ystar{\delta}
Fix $q\in [1,\ell]$.
{We can assume that
$I(u)\le 2^{-9} c_*^3 \frac{\theta^3}{q}$ (if not, (\ref{eqa5iquezq}) holds and the proof is finished). Since $\|\partial_1 u_1\|_{L^2(\Omega_\ell)}^2\le I(u)$,
we can choose $\ystar\in (0, \frac1{16}c_*\theta)$ such that
$u_1(\cdot,\delta)$, $u_1(\cdot, 1-\delta)\in W^{1,2}((0,\ell))$ with
\begin{equation*}
\int_0^{\ell} |\partial_1 u_1|^2(x_1,\ystar)+|\partial_1 u_1|^2(x_1,1-\ystar)\dx \le \frac1{32} c_*^2\frac{\theta^2}q.
\end{equation*}
This implies, setting
$u_1^-:=u_1(0,\ystar)$ and
$u_1^+:=u_1(0,1-\ystar)$,
\begin{equation}\label{equ1diffb}
\left| u_1({\tilde{x_1}},\ystar)-u_1^-\right|
+\left| u_1({\tilde{x_1}},1-\ystar)-u_1^+\right|
\le \frac14c_* \theta \text{ for all } {\tilde{x_1}}\in [0,q].
\end{equation}
}
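Indeed, for ${\tilde{x_1}}\in[0,q]$ the fundamental theorem of calculus, Hölder's inequality, the pointwise bound $(a+b)^2\le 2(a^2+b^2)$ and the choice of $\ystar$ give
\begin{equation*}
\begin{split}
\left| u_1({\tilde{x_1}},\ystar)-u_1(0,\ystar)\right|
+\left| u_1({\tilde{x_1}},1-\ystar)-u_1(0,1-\ystar)\right|
\le& \int_0^{\tilde{x_1}} |\partial_1 u_1|(x_1,\ystar)+|\partial_1 u_1|(x_1,1-\ystar)\dx\\
\le& \,q^{1/2}\left(2\int_0^{\ell} |\partial_1 u_1|^2(x_1,\ystar)+|\partial_1 u_1|^2(x_1,1-\ystar)\dx\right)^{1/2}
\le \frac14 c_*\theta.
\end{split}
\end{equation*}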
To shorten notation we define
\begin{equation*}
f(x):=
\begin{cases}
\min\{|e(u)(x)-\theta e_1\odot e_2|, |e(u)(x)-(\theta-1) e_1\odot e_2|\}, & \text{ if } x\in(0,\ell)\times(0,1),\\
|\nabla u|(x), & \text{ otherwise.}
\end{cases}
\end{equation*}
For $x_1\in (\ystar,q)$ we define $z:[0,1]\to\mathbb{R}$ by $z(s):=(u_1-u_2)(x_1-s,s)$. For almost every $x_1$ we can estimate, similarly to (\ref{eqestduxiv1})-(\ref{eq:estduxi}),
\begin{equation}\label{eqzprime}
\begin{split}
|z'(s)|=&|\partial_1 u_2+\partial_2 u_1-\partial_1 u_1-\partial_2 u_2 |(x_1-s,s)\\
\le & \min_{\sigma\in\{\theta,\theta-1\}} |\sigma| +
|(e_{12}(u)+e_{21}(u)-\sigma)-(e_{11}(u)+e_{22}(u))|(x_1-s,s)\\
\le & 1 + 2\min\left\{|e(u)(x_1-s,s)-\theta e_1\odot e_2|, |e(u)(x_1-s,s)-(\theta-1) e_1\odot e_2|\right\}
\le 1+ 2f(x_1-s,s).
\end{split}
\end{equation}
By repeated application of the triangle inequality,
\begin{equation}\label{eqpolygonalminus}
\begin{split}
|u_1(x_1,0)-u_1^-|\le&
|u_1(x_1,0)-u_1(x_1-\ystar,\ystar)|+|u_1(x_1-\ystar,\ystar)-u_1^-| \\
\le& \
|z(0)-z(\ystar)| + |u_2(x_1,0)-u_2(x_1-\ystar,\ystar)|+|u_1(x_1-\ystar,\ystar)-u_1^-|\\
\le&
|z(0)-z(\ystar)|+ |u_2(x_1,0)-u_2(x_1-\ystar,-\ystar)|
+ |u_2(x_1-\ystar,-\ystar)-u_2(x_1-\ystar,\ystar)|\\ &+|u_1(x_1-\ystar,\ystar)-u_1^-|.
\end{split}
\end{equation}
By the fundamental theorem of calculus, using (\ref{eqzprime}), $|\partial_2 u_2|(x)\le f(x)$ everywhere
and $|\nabla u_2|(x)\le f(x)$ for $x_2\le 0$,
\begin{equation}\label{eqpolygonalminus2}
\begin{split}
|u_1(x_1,0)-u_1^-|\le&
|u_1(x_1-\ystar,\ystar)-u_1^-| +\delta+2 \int_{S_{x_1}^-} f \dcalH^1,
\end{split}
\end{equation}
where $S_{x_1}^-$ is the polygonal line joining
$ (x_1,0), (x_1-\ystar,\ystar), (x_1-\ystar, -\ystar), (x_1,0)$
(see Figure \ref{figlh}). Repeating the computation on the other side with $\tilde z(s):=(u_1+u_2)(x_1-s,1-s)$ leads to
\begin{equation}
\begin{split}
|u_1(x_1,1)-u_1^+|\le&
|u_1(x_1-\ystar,1-\ystar)-u_1^+| +\delta+2 \int_{S_{x_1}^+} f \dcalH^1,
\end{split}
\end{equation}
where $S_{x_1}^+$ is the polygonal line joining
$ (x_1,1), (x_1-\ystar,1-\ystar), (x_1-\ystar, 1+\ystar), (x_1,1)$.
Adding the two and recalling (\ref{equ1diffb}) for $\tilde{x_1}=x_1-\delta$ yields, since $\mathcal{H}^1(S_{x_1}^\pm)=(2+2\sqrt2)\delta\le 5\delta$
and $2\delta\le \frac18 c_*\theta$,
\begin{equation*}
\begin{split}
|u_1(x_1,0)-u_1^-|+ |u_1(x_1,1)-u_1^+|\le&
\frac14 c_*\theta +2\delta+ 2\int_{S_{x_1}^-\cup S_{x_1}^+} f \dcalH^1
\le \frac38 c_*\theta + 8\ystar^{1/2} \left(\int_{S_{x_1}^-\cup S_{x_1}^+} f^2\dcalH^1\right)^{1/2}
\end{split}
\end{equation*}
for almost every $x_1\in (\ystar,q)$.
We divide by $x_1$ and integrate over $x_1\in (a,b)$, for some $a,b$ with $\ystar\le a<b\le q$,
\begin{equation*}
\begin{split}
\int_{a}^b \frac{|u_1(x_1,0)-u_1^-|+ |u_1(x_1,1)-u_1^+|}{x_1} \dx \le&
\frac38 c_*\theta \ln\frac{b}{a}+8\ystar^{1/2} \int_{a}^b\frac{1}{x_1} \left(\int_{S_{x_1}^-\cup S_{x_1}^+} f^2 \dcalH^1\right)^{1/2}\dx\\
\le&
\frac38 c_*\theta \ln\frac{b}{a} + 8\ystar^{1/2}\left(\int_a^b \frac{1}{x_1^2} \dx \right)^{1/2} \left(\int_a^b\int_{S_{x_1}^-\cup S_{x_1}^+}
f^2 \dcalH^1 \dx\right)^{1/2}.
\end{split}
\end{equation*}
The first integral is controlled by $1/a$. For the second one we use Fubini's theorem,
\begin{equation*}
\int_a^b\int_{S_{x_1}^-\cup S_{x_1}^+} f^2 \dcalH^1\dx\le 2
\int_{(0,b)\times(-\ystar,1+\ystar)} f^2 \dxy \le
\max\{1,\mu^{-1}\}
2 I(u)= \frac{2 I(u)}{\mu},
\end{equation*}
since by assumption $\mu\le \theta\le 1$.
Therefore
\begin{equation}\label{eqabint}
\int_{a}^b \frac{|u_1(x_1,0)-u_1^-|+ |u_1(x_1,1)-u_1^+|}{x_1} \dx \le
\frac38 c_*\theta \ln\frac{b}{a} + \frac{12\ystar^{1/2}I(u)^{1/2}}{a^{1/2} \mu^{1/2} }
\text{ whenever $\ystar\le a<b\le q$.}
\end{equation}
We first use (\ref{eqabint}) with $a=m$, $b=1$. This gives, recalling $\ystar\le \theta\le m$,
\begin{equation*}
\begin{split}
\int_{m}^1 \frac{|u_1(x_1,0)-u_1^-|+ |u_1(x_1,1)-u_1^+|}{x_1} \dx \le&
\frac38 c_*\theta \ln\frac{1}{m} + \frac{12}{ \mu^{1/2} } I(u)^{1/2}.
\end{split}
\end{equation*}
If the second term is larger than $\frac18 c_*\theta \ln\frac{1}{m}$,
then $I(u)\ge c \mu\theta^2\ln \frac{1}{m}$, so that (\ref{eqa5iquezq}) holds and we are done. Otherwise
the right-hand side is not larger than $ \frac12 c_*\theta \ln\frac{1}{m}$, so that
\begin{equation*}
\begin{split}
\int_{m}^1 \frac{|u_1(x_1,0)-u_1(x_1,1)|}{x_1} \dx \le&
|u_1^--u_1^+|\ln\frac1m +
\frac12 c_*\theta \ln\frac{1}{m} .
\end{split}
\end{equation*}
If
\begin{equation}\label{equ1pm}
|u_1^--u_1^+| \le
\frac12 c_*\theta
\end{equation}
then (\ref{equ1diffc}) holds and we are done.
It remains to consider the case that
(\ref{equ1pm}) does not hold.
For $x_1\in (0,\ell)$ we let $S_{x_1}^\text{out}:=\left(\partial ((-x_1,x_1)\times(-x_1,1+x_1))\right) \setminus \left((0,L)\times(0,1)\right)$ (see Figure \ref{figlh}). Then
\begin{equation*}
|u_1(x_1,0)-u_1(x_1,1)|\le \int_{S_{x_1}^\text{out}} |\nabla u_1|\dcalH^1
\le \int_{S_{x_1}^\text{out}} f \dcalH^1
\le \left(\mathcal{H}^1(S_{x_1}^\text{out})\right)^{1/2} \left(\int_{S_{x_1}^\text{out}} f^2 \dcalH^1\right)^{1/2}.
\end{equation*}
Therefore
\begin{equation}\label{eqqvcp}
\begin{split}
\int_{1/8}^q\frac{ |u_1(x_1,0)-u_1(x_1,1)|}{x_1}\dx \le &\int_{1/8}^q\frac{ \left(\mathcal{H}^1(S_{x_1}^\text{out})\right)^{1/2}}{x_1} \left(\int_{S_{x_1}^\text{out}} f^2 \dcalH^1\right)^{1/2}\dx\\
\le& \left( \int_{1/8}^q\frac{\mathcal{H}^1(S_{x_1}^\text{out})}{x_1^2} \dx\right)^{1/2} \left(\int_0^q\int_{S_{x_1}^\text{out}} f^2 \dcalH^1\dx\right)^{1/2}\\
\le& c \mu^{-1/2} \ln^{1/2}(8q) I^{1/2}(u),
\end{split}
\end{equation}
where in the last step we used that $\mathcal{H}^1(S_{x_1}^\text{out})=8x_1+1\le 16x_1$ for $x_1\ge \frac18$.
We use (\ref{eqabint}) with $a=\frac18$, $b=q$, and obtain
\begin{equation*}
\begin{split}
\int_{1/8}^q \frac{|u_1(x_1,0)-u_1^-|+ |u_1(x_1,1)-u_1^+|}{x_1} \dx \le
\frac38 c_*\theta \ln(8q) + \frac{c\ystar^{1/2}}{\mu^{1/2} } I^{1/2}(u)
\le
\frac38 c_*\theta \ln(8q) +
\frac{c \ln^{1/2}(8q) I^{1/2}(u) }{\mu^{1/2}}
\end{split}
\end{equation*}
where we used $\ystar\le 1/8$ and $q\ge 1$.
Combining with
(\ref{eqqvcp}) and $\delta\le \frac18$ yields
\begin{equation*}
|u_1^--u_1^+|\ln (8q)
=\int_{1/8}^q \frac{|u_1^--u_1^+|}{x_1} \dx
\le
\frac38 c_*\theta \ln(8q) +
\frac{c \ln^{1/2}(8q) I^{1/2}(u) }{\mu^{1/2}}.
\end{equation*}
Since (\ref{equ1pm}) does not hold, we have
$ I(u)\ge c \mu\theta^2\ln(8q)\ge
c\mu\theta^2\ln(3+q)$,
since $8q\ge 3+q$ for $q\ge 1$. Therefore (\ref{eqa5iquezq}) holds and the proof is concluded also in this case.
\end{proof}
The next lemma shows that $u_2$ is, up to a small exceptional set, very well controlled by the energy; in particular, it is significantly smaller than $\theta$ in several respects. In this Section (Lemma \ref{lem:thetasmallbranching}) we shall use
(\ref{lem:costsu2inA46l32}). Since the proofs are naturally connected, to avoid repetition we also present here the proofs of two estimates that will be used in the next Section, namely (\ref{lem:costsu2inA3}) in Lemma \ref{lemmainterpolationestim}
and (\ref{lem:costsu2inA46bdryln}) in Lemma \ref{lemmabdryln}.
\newcommand\Gstar{{\mathcal G}}
\begin{lmm}[Local estimates for $u_2$]\label{lem:costsu2inA}
For all $\bar C>0$ there exists $C=C(\bar C)>0$
such that for any $u \in W^{1,2}_{\mathrm {loc}}(\mathbb{R}^2,\mathbb{R}^2)$
there is a set $\Gstar\subseteq[0,L-\xi_1]$ such that
\begin{equation*}
I(u)\ge C\min\{\mu,1\} \theta^2\mathcal{L}^1(\Gstar)
\end{equation*}
and
\begin{enumerate}
\item\label{lem:costsu2inA46l32}{
for any $m\in(0, 1/4]$ and any $x_1\in [0,L-\xi_1]\setminus \Gstar$, one has
\begin{equation*}
\int_m^{1} \frac{|u_2(x_1-s\xi_1,1)-u_2(x_1+(1+s)\xi_1,1)|}{s} \ds\le 3\bar C\theta \ln \frac1m,
\end{equation*}
}
\item\label{lem:costsu2inA46bdryln}{
for any $m\in(0, 1/4]$ and any $x_1\in [0,L-\xi_1]\setminus \Gstar$, one has
\begin{equation*}
\int_m^{1} \frac{|u_2(x_1+s\xi_1,s)-u_2(x_1+(1-s)\xi_1,1-s)|}{s} \ds\le 6\bar C\theta \ln\frac1m,
\end{equation*}
}
\item\label{lem:costsu2inA3}
for any $x_1\in [0,L-\xi_1]\setminus \Gstar$ and any $\delta\in(0,1]$ one has
\begin{equation*}
{\frac1\delta\int_{(0,\delta)} |u_2(x_1+{(1-s)\xi_1},1-s)-u_2(x_1+s\xi_1,s)|\ds \le 5\bar C\theta}.
\end{equation*}
\end{enumerate}
\end{lmm}
\newcommand\Gvert{\mathcal G_\text{vert}}
\newcommand\Ghor{\mathcal G_\text{hor}}
\newcommand\Ghoravob{\mathcal G_\text{hor}^\text{av,+}}
\newcommand\Ghoravun{\mathcal G_\text{hor}^\text{av,-}}
\newcommand\Gvertav{\mathcal G_\text{vert}^\text{av}}
\begin{figure}
\caption{Sketch of the geometry in Lemma \ref{lem:costsu2inA}.\label{figlemma310}}
\end{figure}
\begin{proof}
{
{\bf Step 1. Construction of $\Gstar$.}
We construct $\Gstar$ as the union of different pieces, which are all defined and estimated similarly.
}
The first one contains points with large vertical differences (see Fig. \ref{figlemma310} (middle)),
\begin{equation}\label{eqdefgvert}
\Gvert:=\left\{x_1\in[0,L-\xi_1]: \bar C\theta\le |u_2(x_1,0)-u_2(x_1,1)|\right\}.
\end{equation}
By the fundamental theorem of calculus, for almost every $x_1$ we have
\begin{equation*}
|u_2(x_1,0)-u_2(x_1,1)|\le \int_{(0,1)} |\partial_2 u_2|(x_1,s)\ds
\end{equation*}
which gives, using first Hölder's inequality and then Fubini's theorem,
\begin{equation*}
\bar C^2\theta^2\mathcal{L}^1(\Gvert)\le \int_{(0,L-\xi_1)} |u_2(x_1,0)-u_2(x_1,1)|^2\dx \le \int_{\Omega_L} |\partial_2 u_2|^2\dxy \le I(u).
\end{equation*}
The second one contains points with large horizontal differences along the top boundary,
\begin{equation}\label{eqdefghor}
\Ghor:=\left\{x_1\in[0,L-\xi_1]:\bar C\theta\le |u_2(x_1,1)-u_2(x_1+\xi_1,1)| \right\}.
\end{equation}
Let $S_{x_1}$ be the polygonal joining the points {(see Fig. \ref{figlemma310} (middle))}
\begin{equation*}
(x_1,1), (x_1+\frac12\xi_1,1+\frac12\xi_1), (x_1+\xi_1,1).
\end{equation*}
By the fundamental theorem of calculus, for almost every $x_1$ we have
\begin{equation*}
|u_2(x_1,1)-u_2(x_1+\xi_1,1)|\le \int_{S_{x_1}} |\nabla u_2|\dcalH^1
\le\Big(\mathcal{H}^1(S_{x_1})\int_{S_{x_1}} |\nabla u_2|^2\dcalH^1\Big)^{1/2}
\end{equation*}
so that, squaring, integrating over $\Ghor$, and using $\mathcal{H}^1(S_{x_1})=\sqrt2\xi_1=\frac{1}{2\sqrt2}$,
\begin{equation*}
\begin{split}
\bar C^2\theta^2\mathcal{L}^1(\Ghor)\le &
\int_{(0,L-\xi_1)} |u_2(x_1,1)-u_2(x_1+\xi_1,1)|^2\dx
\le\frac1{2\sqrt2} \int_{\mathbb{R}} \int_{S_{x_1}} |\nabla u_2|^2\dcalH^1\dx\\
\le& \frac12 \int_\mathbb{R} \int_{(0,\frac12\xi_1)} \big[|\nabla u_2|^2(x_1+t,1+t)
+|\nabla u_2|^2(x_1+\xi_1-t,1+t) \big]
\dt\dx
\\\le&
\int_{\mathbb{R}\times(1,2)} |\nabla u_2|^2\dxy \le \mu^{-1} I(u).
\end{split}
\end{equation*}
The next one controls vertical fluctuations; it will be used to estimate $u_2(x_1+s\xi_1,0)-u_2(x_1+s\xi_1,s)$, as well as the corresponding term on the other side. {Precisely, we set}
\begin{equation}\label{eqdefgvertav}
\Gvertav:=\left\{x_1\in[0,L-\xi_1]: \bar C^2\theta^2\le
\int_{(0,1)} \int_{(0,1)} \big( |\partial_2 u_2|^2(x_1+s\xi_1,t) + |\partial_2 u_2|^2(x_1+(1-s)\xi_1,t) \big) \ds\dt\right\}.
\end{equation}
Integrating over all $x_1\in\Gvertav$ and swapping the order of integration gives
\begin{equation*}
\bar C^2\theta^2\mathcal{L}^1(\Gvertav)\le
2 \int_{(0,1)} \ds \int_{(0,L)}\int_{(0,1)} |\partial_2 u_2|^2(x_1,t) {\dt \dx}
\le
2\int_{\Omega_L} |\partial_2 u_2|^2\dxy \le 2I(u).
\end{equation*}
{
And finally we consider an averaged version of $\Ghor$,
\begin{equation}\label{eqdefghoravob}
\Ghoravob:=\left\{x_1\in\mathbb{R}: \bar C^2\theta^2\le \int_{(-1,1)} \frac{|u_2(x_1+s\xi_1,1)-u_2(x_1,1)|^2}{|s|}
\ds \right\}.
\end{equation}
Let $S^+_{x_1,s}$ be the polygonal line which joins the points
\begin{equation*}
(x_1,1),\,\,
(x_1+\frac12s\xi_1,1+\frac12 |s|\xi_1),\,\,
(x_1+s\xi_1,1)
\end{equation*}
(see Figure \ref{figlemma310}).}
{By the fundamental theorem of calculus for Sobolev functions,
Hölder's inequality and $\mathcal{H}^1(S^+_{x_1,s})= |s| \xi_1\sqrt2\le |s|$ we get
\begin{equation*}
|u_2(x_1+s\xi_1,1)-u_2(x_1,1) |
\le \int_{S^+_{x_1,s}} |\nabla u_2|{\dcalH^1}
\le |s|^{1/2}\left(\int_{S^+_{x_1,s}} |\nabla u_2|^2 {\dcalH^1}
\right)^{1/2} .
\end{equation*}
Squaring, dividing by $|s|$ and integrating over $s\in(-1,1)$ gives
\begin{equation*}
\int_{(-1,1)}\frac{
|u_2(x_1+s\xi_1,1)-u_2(x_1,1) |^2}{|s|}
\ds
\le \int_{(-1,1)}\int_{S^+_{x_1,s}} |\nabla u_2|^2 {\dcalH^1}\ds.
\end{equation*}
}
{
We integrate over all $x_1\in\Ghoravob$ and obtain, using Fubini's theorem as above,
\begin{equation*}
\begin{split}
\bar C^2\theta^2 \mathcal{L}^1(\Ghoravob) \le&
\int_{\mathbb{R}} \int_{(-1,1)} \frac{|u_2(x_1+s\xi_1,1)-u_2(x_1,1) |^2}{|s|}
\ds \dx \\
\le &\int_{\mathbb{R}} \int_{(-1,1)} \int_{S^+_{x_1,s}} |\nabla u_2|^2 {\dcalH^1}\ds\dx\\
\le& 2\sqrt2\int_{(-1,1)} \int_{(0,\frac12 |s| \xi_1)} \int_{\mathbb{R}} |\nabla u_2|^2(x_1,1+t) \dx\dt\ds \\
\le& 4\sqrt2 \int_{\mathbb{R}\times (1,2)} |\nabla u_2|^2\dxy\le 6 \mu^{-1} I(u).
\end{split}
\end{equation*}
}
The analogous estimate holds for
\begin{equation}\label{eqdefghoravun}
\Ghoravun:=\{x_1\in\mathbb{R}: \bar C^2\theta^2\le \int_{(-1,1)} \frac{|u_2(x_1+s\xi_1,0)-u_2(x_1,0)|^2}{|s|}
\ds \}.
\end{equation}
{We finally define
\begin{equation*}
\Gstar :=\Gvert\cup \Ghor \cup\Gvertav \cup
\Ghoravob\cup \Ghoravun\cup (\Ghoravob-\xi_1).
\end{equation*}
The previous estimates imply $I(u) \ge C_4 \min\{\theta^2,\mu\theta^2\} \mathcal{L}^1(\Gstar)$, with
$C_4:=\frac1{36} \bar C^{2}$.
}
{\bf Step 2. Proof of (\ref{lem:costsu2inA46l32}) (estimate on the boundary).}
For any $f\in L^2((0,1))$ and any $m\in(0,1)$ one has
\begin{equation}\label{eqdlog1mdf}
\int_{(m,1)} \frac{|f|}{|s|} \ds \le
\Big(\int_{(m,1)}\frac1{|s|}\ds\Big)^{1/2}
\Big(\int_{(m,1)}\frac{|f|^2}{|s|}\ds\Big)^{1/2}
\le \ln^{1/2}\frac1m \Big(\int_{(0,1)}\frac{|f|^2}{|s|}\ds\Big)^{1/2},
\end{equation}
and {similarly} on $(-1,-m)$.
Therefore, for $x_1\in\mathbb{R}\setminus \Ghoravob$ (recall (\ref{eqdefghoravob})) we have
\begin{equation}\label{eq1m1mg0}
\begin{split}
\int_{(m,1)} \frac{|u_2(x_1+s\xi_1,1)-u_2(x_1,1)|}{|s|}\ds\le &
\bar C\theta \ln^{1/2} \frac1m ,
\end{split}
\end{equation}
{and }analogously
\begin{equation}\label{eq1m1mg1}
\int_{(m,1)} \frac{|u_2(x_1-s\xi_1,1)-u_2(x_1,1)|}{|s|}\ds\le
\bar C\theta \ln^{1/2} \frac1m \,.
\end{equation}
For later reference we notice that a similar computation shows that
for $x_1\in\mathbb{R}\setminus \Ghoravun$ (recall (\ref{eqdefghoravun})) we have
\begin{equation}\label{eq1m1mg0min}
\begin{split}
\int_{(m,1)} \frac{|u_2(x_1+s\xi_1,0)-u_2(x_1,0)|}{|s|}\ds\le &
\bar C\theta \ln^{1/2} \frac1m .
\end{split}
\end{equation}
For $x_1\in[0,L-\xi_1]\setminus \Gstar$ we have
\begin{equation*}
\begin{split}
\int_m^{1} \frac{|u_2(x_1-s\xi_1,1)-u_2(x_1+(1+s)\xi_1,1)|}{s} \ds\le&
\int_m^{1} \frac{|u_2(x_1-s\xi_1,1)-u_2(x_1,1)|}{s} \ds\\
&+ |u_2(x_1,1)-u_2(x_1+\xi_1,1) |
\int_m^{1} \frac1s \ds\\
&+\int_m^{1} \frac{|u_2(x_1+\xi_1,1)-u_2(x_1+(1+s)\xi_1,1)|}{s} \ds
\\
\le &3\bar C \theta \ln\frac1m,
\end{split}
\end{equation*}
where we used
$x_1\not\in \Ghoravob$ and (\ref{eq1m1mg1}) to estimate the first term,
$x_1\not\in \Ghor$ to estimate the second one,
and
$x_1+\xi_1\not\in \Ghoravob$ and (\ref{eq1m1mg0}) to estimate the third one, and then $\ln^{1/2}\frac1m\le\ln\frac1m$ to simplify the estimate.
This concludes the proof of (\ref{lem:costsu2inA46l32}).
{\bf Step 3. Proof of (\ref{lem:costsu2inA46bdryln}) (estimate on the diagonal).}
Let $x_1\in[0,L-\xi_1]\setminus \Gstar$.
For any $s\in(0,1)$ we have, by the fundamental theorem of calculus and Hölder's inequality,
\begin{equation*}
\begin{split}
\frac{ |u_2(x_1+s\xi_1,s)-u_2(x_1+s\xi_1,0)|^2}{s}\le&
\frac1s\left( \int_{(0,s)} |\partial_2 u_2|(x_1+s\xi_1,t)
\dt \right)^2 \le \int_{(0,1)} |\partial_2 u_2|^2(x_1+s\xi_1,t)\dt.
\end{split}
\end{equation*}
Integrating over $s\in(0,1)$ and using $x_1\not\in\Gvertav$ (recall (\ref{eqdefgvertav})),
\begin{equation*}
\begin{split}
\int_0^1\frac{ |u_2(x_1+s\xi_1,s)-u_2(x_1+s\xi_1,0)|^2}{s}\ds\le \bar C^2\theta^2,
\end{split}
\end{equation*}
and combining with $x_1\not\in\Ghoravun$ (recall (\ref{eqdefghoravun})),
\begin{equation}\label{step4unten}
\begin{split}
\int_0^1\frac{ |u_2(x_1+s\xi_1,s)-u_2(x_1,0)|^2}{s}\ds\le 4\bar C^2\theta^2.
\end{split}
\end{equation}
The same estimate on the other side gives, using $x_1+\xi_1\not\in\Ghoravob$ (recall (\ref{eqdefghoravob})),
\begin{equation}\label{step4oben}
\begin{split}
\int_0^1\frac{ |u_2(x_1+(1-s)\xi_1,1-s)-u_2(x_1+\xi_1,1)|^2}{s}\ds\le 4\bar C^2\theta^2.
\end{split}
\end{equation}
By the triangle inequality,
\begin{equation*}
\begin{split}
\int_m^{1} \frac{|u_2(x_1+s\xi_1,s)-u_2(x_1+(1-s)\xi_1,1-s)|}{s} \ds\le&
\int_m^{1} \frac{|u_2(x_1+s\xi_1,s)-u_2(x_1,0)|}{s} \ds\\
&+ |u_2(x_1,0)-u_2(x_1+\xi_1,1) |
\int_m^{1} \frac1s \ds\\
&+\int_m^{1} \frac{|u_2(x_1+\xi_1,1)-u_2(x_1+(1-s)\xi_1,1-s)|}{s} \ds.
\end{split}
\end{equation*}
By (\ref{step4unten}) and (\ref{eqdlog1mdf}), the first term is estimated by $2\bar C\theta\ln^{1/2}\frac1m\le 2\bar C\theta\ln\frac1m$.
The same holds for the last one, by (\ref{step4oben}) and (\ref{eqdlog1mdf}).
For the middle one we use
$x_1\not\in\Gvert$
and $x_1\not\in \Ghor$, which give
$ |u_2(x_1,0)-u_2(x_1+\xi_1,1) |\le 2\bar C\theta$.
Adding these three estimates leads to
\begin{equation*}
\begin{split}
\int_m^{1} \frac{|u_2(x_1+s\xi_1,s)-u_2(x_1+(1-s)\xi_1,1-s)|}{s} \ds
\le &6\bar C \theta \ln\frac1m,
\end{split}
\end{equation*}
which concludes the proof.
{\bf Step 4. Proof of (\ref{lem:costsu2inA3}) (estimate close to the boundary).}
For any $f\in L^2((0,1))$ and any $\delta\in(0,1)$ one has
\begin{equation}\label{eqfdeltal1delta}
\frac1\delta\int_{(0,\delta)} |f| \ds \le
\frac1\delta \Big(\int_{(0,\delta)}\frac{|f|^2}{|s|}\ds\Big)^{1/2}
\Big(\int_{(0,\delta)}{|s|}\ds\Big)^{1/2}
\le\frac1{\sqrt2}\Big(\int_{(0,1)}\frac{|f|^2}{|s|}\ds\Big)^{1/2},
\end{equation}
and {analogously} on $(-\delta,0)$.
Let $x_1\in[0,L-\xi_1]\setminus\Gstar$.
Using (\ref{eqfdeltal1delta}) and (\ref{step4oben}) we obtain
\begin{equation*}
\frac1\delta \int_{(0,\delta)} |u_2(x_1+(1-s)\xi_1,1-s)-u_2(x_1+\xi_1,1)|\ds\le \sqrt 2\bar C\theta.
\end{equation*}
Analogously, with (\ref{eqfdeltal1delta}) and (\ref{step4unten}) we obtain
\begin{equation*}
\frac1\delta \int_{(0,\delta)} |u_2(x_1+s\xi_1,s)-u_2(x_1,0)|\ds\le \sqrt 2\bar C\theta.
\end{equation*}
As above,
$x_1\not\in\Gvert$
and $x_1\not\in \Ghor$ give
$ |u_2(x_1,0)-u_2(x_1+\xi_1,1) |\le 2\bar C\theta$, so that, by the triangle inequality,
\begin{equation*}
\frac1\delta \int_{(0,\delta)} |u_2(x_1+(1-s)\xi_1,1-s)-u_2(x_1+s\xi_1,s)|\ds\le 5\bar C\theta
\end{equation*}
which concludes the proof of (\ref{lem:costsu2inA3}).
\end{proof}
At this point we are ready to present the main result of this Section, which essentially proves the lower bound in the regimes with fine microstructure and small $\theta$.
{Following \cite{conti-zwicknagl:16} we introduce two new parameters, $\lambda,m>0$, which will be chosen below (see the proof of Proposition \ref{lem:lbbranching}) in different ways depending on the regime. This permits us to unify different parts of the proof of the lower bound.} {Roughly speaking, the parameters $\lambda$ and $m$ correspond to the length scales of the martensitic laminate deep inside the nucleus and on the vertical austenite/martensite interface.}
\begin{lmm}[The case of small $\theta$]\label{lem:thetasmallbranching}
There exist $m_0\in (0,1/{4}]$ and $c>0$ such that for all $u\in \mathcal{X}$,
{
$\theta>0$, $\mu>0$, $\varepsilon>0$, $\lambda>0$, $\ell>0$, and $m>0$
which obey}
\begin{equation*}
{
\theta\le m\le m_0\,,\qquad
1\le \ell\le 2L\,,\qquad
\varepsilon\le \theta^2\lambda\,,\qquad
\text{and}\qquad
\lambda\le 1
}
\end{equation*}
one has
\[
I_{\tilde{\Omega}_\ell}(u) \geq c\min
\Big\{
\frac{ \varepsilon \ell }{\lambda},
\mu m^2,\,
\mu \theta^2 {\lambda}\ln \frac 1m ,\,
\theta^2 {\ell^{-1}} {\lambda^2} m\left( \ln \frac1 m \right)^2,
{\mu\theta^2\ln(3+\frac\theta\mu)},
{
\mu\theta^2\ln(3+\ell)}
\Big\}.
\]
\end{lmm}
The proof follows the strategy of \cite[Section 5.2]{conti-zwicknagl:16}, with important modifications needed to treat both the vectorial nature of the present problem and the additional logarithmic terms appearing here, due to the different boundary conditions and to the fact that we do not impose a hard constraint on the order parameter.
\begin{proof}[Proof of Lemma \ref{lem:thetasmallbranching}]
\textbf{Step 1: Preparation.}\\
{We start by choosing a ``good'' slice, parametrized as usual by $x_1$.
The slice is chosen so as to have boundary values {at the upper and lower boundary of $\Omega_{2L}$} which are close together, in a sense similar to the one used in the definition of
the set $\mathcal P$ defined in (\ref{eqdefPth}). However, in order to capture the different logarithmic factor, we need to use a larger set, in which the difference between the boundary values is controlled in the scale of $m\ge\theta$.
Specifically,
we consider the set
\begin{equation*}
\mathcal{P}_* := \left\{x_1\in (0,\ell-\xi_1)\,:\,
|u((x_1,0)+\xi)-u(x_1,0)| \leq \frac1{10} m \right\}.
\end{equation*}}
For almost every $x_1\in (0, \ell-\xi_1)\setminus\mathcal P_*$
we obtain
by Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelout}) that
\begin{equation*}
\mu\int_{S_{x_1}}|\nabla u|^2\text{ d}\mathcal{H}^1 \geq c \frac{\mu m^2}{1+ x_1}
\end{equation*}
so that, using Fubini and monotonicity of $1/(1+x_1)$ as in the proof of Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelest}),
\begin{eqnarray*}
I^\mathrm {ext}_{\tilde{\Omega}_\ell}(u)\geq c\mu m^2\int_{(0, \ell-\xi_1)\setminus\mathcal P_*}\frac{1}{x_1+1}\text{ d}x_1\geq c\mu m^2\int_{p}^{\ell-\xi_1} \frac{1}{x_1+1}\text{ d}x_1= c\mu m^2\ln\frac{\ell+1-\xi_1}{p+1},
\end{eqnarray*}
where $p:=\mathcal{L}^1(\mathcal P_*)$.
If $p\le \frac12\ell$, then, recalling $1\le\ell$, we see that
$p+1\le \frac12\ell+1\le \frac12\ell+\frac5{14}\ell+\frac9{14}=\frac67(\ell+\frac34)$. Therefore
in this case $\ln\frac{\ell+1-\xi_1}{p+1}\ge \ln\frac76>0$, so that
$I(u)\ge c\mu m^2$ and we are done. Therefore we can assume $\mathcal{L}^1(\mathcal P_*)> \frac12\ell$ in the following.
{We set $\hat C:=2^{-10}$ and consider the set where the surface energy or the elastic energy are large along slices in the $\xi$ direction.
For $x_1 \in (0, \ell-\xi_1)$ we recall that
in \eqref{eq:defuxi} we defined $u_{x_1}^\xi:[0,1]\to\mathbb{R}$ by $u_{x_1}^\xi(s)
:=(4u_1+u_2)((x_1,0) +s\xi)$. We set
\begin{align*}
\mathcal{R} :=& \left\{x_1\in (0,\ell-\xi_1)\,:\, |\partial_s\partial_su^\xi_{x_1}|((0,1)) \ge \frac14\hat C\lambda^{-1} \, {\text{ or }}
\left\|\min \left\{|\partial_s u^\xi_{x_1}-\theta |,\,|\partial_s u^\xi_{x_1}+(1-\theta) |\right\}\right\|_{L^2((0,1))}^2 \ge \hat C \varepsilon \lambda^{-1}
\right\}.\end{align*}}
{
By \eqref{eq:estduxi} and Fubini's theorem,
\begin{equation*}
I_{\tilde\Omega_\ell}(u)\ge c \mathcal{L}^1(\mathcal R)
\frac\varepsilon\lambda.
\end{equation*}
}
{
If $\mathcal{L}^1(\mathcal R)\ge \frac18\ell$, then we have
$I_{\tilde\Omega_\ell}(u)\ge
c \varepsilon\ell \lambda^{-1}$
and the proof is concluded.}
Let $\Gstar$ be as in Lemma \ref{lem:costsu2inA}, with $\bar C=2^{-7}$.
If $\mathcal{L}^1(\Gstar)\ge\frac18\ell$
then
{$I_{\tilde{\Omega}_\ell}(u)\ge c\min\{\theta^2\ell,\mu\theta^2\ell\}\geq c\min\{\theta^2\ell^{-1}\lambda^2m\ln^2\frac{1}{m},\,\mu\theta^2\ln(3+\ell)\}$} and we are done (recall that $\ell^{-1}\lambda^2m\ln^2\frac1m\le \ell$ and $\ln(3+\ell)\le c \ell$ by our assumptions). Therefore we can assume
$\mathcal L^1(\mathcal R\cup\Gstar)<\frac14\ell$.
Since $\mathcal{L}^1(\mathcal P_*)>\frac12\ell$ and $\mathcal{L}^1(\mathcal R\cup\Gstar)<\frac14\ell$, we can choose $x_1^\ast \in\mathcal P_*\setminus(\mathcal R\cup \Gstar)$ such that
$v:=u_{x_1^\ast}^\xi\in W^{1,2}((0,1))$ is the trace of $u$; it necessarily satisfies
\begin{equation}\label{eq:estv}
\begin{split}
|v(1)-v(0)|\le \frac12 m, \qquad
|\partial_s\partial_s v |((0,1)) <\frac14\hat C\lambda^{-1}\,, \quad
\left\|\min \left\{| v'-\theta|, | v'+(1-\theta)|\right\}\right\|^2_{L^2((0,1))} <\hat C\varepsilon\lambda^{-1}
\end{split}
\end{equation}
{and, since $x_1^*\not\in \Gstar$, by Lemma \ref{lem:costsu2inA}(\ref{lem:costsu2inA46l32})
\begin{equation}\label{eq:estv2}
\int_{(m,1)} \frac{1}{s}\left|u_2(x_1^\ast-s\xi_1,1)-u_2(x_1^\ast+(s+1)\xi_1,1)\right| \ds\le \frac1{32}\theta\ln\frac1m.
\end{equation}
}
For $t\in\mathbb{R}$, we define $\omega_t:=\{s\in (0,1)\,:\, v'(s) \leq \theta-t\}$ and $\chi_t:=\chi_{\omega_t}$. We observe that
\begin{equation}\label{eq:chi12}
|v'(s)-\theta+\chi_{1/2}(s)|=\min\left\{|v'(s)-\theta|,|v'(s)+(1-\theta)|\right\}.
\end{equation}
By the coarea formula we have
\[\int_{\frac{1}{2}}^{\frac{3}{4}}\mathcal{H}^0\left(\partial\omega_t\cap(0,1)\right)\dt
\leq\int_{\mathbb{R}}\mathcal{H}^0\left(\partial\omega_t\cap(0,1)\right)\dt
=\left|\partial_s\partial_s v\right|\left(\left(0,1\right)\right), \]
and hence, by \eqref{eq:estv},
there is $t^\ast \in (1/2,3/4)$ such that $\omega:=\omega_{t^\ast}$ consists of at most $\hat C \lambda^{-1}$ many intervals.
{
We compute
\begin{equation*}
v(1)-v(0)=\int_{(0,1)} v'\ds =\theta-\mathcal{L}^1(\omega_{1/2})+\int_{(0,1)} (v'-\theta+\chi_{{1/2}}) \ds
\end{equation*}
which, by Hölder's inequality, (\ref{eq:estv}), $\varepsilon\le \theta^2\lambda$, {\eqref{eq:chi12}}, and the choice of $\hat C$ gives
\begin{equation*}
| v(1)-v(0)-\theta+\mathcal{L}^1(\omega_{1/2})|\le \|v'-\theta+\chi_{{1/2}}\|_{L^1((0,1))} \le {\hat{C}^{1/2}\varepsilon^{1/2}\lambda^{-1/2}\leq}2^{-5}\theta.
\end{equation*}
}
{
Using \eqref{eq:estv}, $\omega\subset\omega_{1/2}$ and $\varepsilon\le\theta^2\lambda$ as above, {(note that $v'(s)-\theta\in[-3/4,-1/2]$ implies $v'(s)+1-\theta\in[1/4,1/2]$)}
\begin{equation}\label{eqomegaomega12}
\begin{split}
0\le
\mathcal{L}^1(\omega_{1/2})-\mathcal{L}^1(\omega) &\le \mathcal{L}^1(\{s: v'(s)-\theta \in [-3/4, -1/2]\})
\le 16 \int_0^1 \min\{|v'-\theta|^2,|v'+(1-\theta)|^2\} \ds \\
&\le 16\hat C \varepsilon \lambda^{-1}\le 16 \hat C \theta^2\le 2^{-6}\theta,
\end{split}
\end{equation}
so that
\begin{equation}\label{eqv1v0}
| v(1)-v(0)-\theta+\mathcal{L}^1(\omega)|\le {|v(1)-v(0)-\theta+\mathcal{L}^1(\omega_{1/2})|+|\mathcal{L}^1(\omega_{1/2})-\mathcal{L}^1(\omega)|\leq}2^{-4}\theta.
\end{equation}
}
{We conclude that $\omega$ consists of at most $\hat C \lambda^{-1}$ many intervals and obeys {(recall \eqref{eq:estv} and $\theta\leq m$)}
\begin{equation}
\mathcal{L}^1(\omega) \le |v(1)-v(0)|+\theta+2^{-4}\theta\le 2 m.
\end{equation}}
\begin{figure}
\caption{Sketch of the construction of the test function $\psi$ in the proof of Lemma \ref{lem:thetasmallbranching}.\label{figpsi}}
\end{figure}
\noindent
\textbf{Step 2: A test function for the logarithmic scaling in the interior.}\\
We denote the connected components of $\omega$ by $(y_i-r_i, y_i+r_i)$ and define {$g_i:=\min\{r_i+\lambda m,m\}$.} {Notice that $2r_i\le\mathcal{L}^1(\omega)\le 2m$ implies $r_i\le g_i$ for all $i$.}
Recall that the number $n$ of these components is at most $\hat C \lambda^{-1}$.
We then have
\begin{eqnarray}\label{eq:gi}
\lambda m\leq g_i\leq m \text{\qquad for all $i$},
\end{eqnarray}
and \begin{equation*}
\sum_{i=1}^n 2g_i\le{\mathcal{L}^1}(\omega)+2n\lambda m \leq 2m+2 \hat C m\leq {3}m.
\end{equation*}
We consider the test function $\hat\psi:\mathbb{R}\to\mathbb{R}$ defined by (see Fig.~\ref{figpsi})
\begin{equation*}\label{eq:defpsi}
\hat\psi(t):=\max_{i=1, \dots, n} \psi_i(t-y_i)\,,\hskip1cm
\psi_i(t):=\left[ \ln \frac{1}{m} - \left(\ln \frac{|t|}{g_i}\right)_+ \right]_+
{=
\begin{cases}
\ln\frac1m, & \text{ if } |t|\le g_i,\\
\ln\frac{g_i}{m|t|}, & \text{ if } g_i<|t|\le \frac{g_i}{m},\\
0, & \text{ if } |t|>\frac{g_i}{m}\,,
\end{cases}}
\end{equation*}
{where $a_+:=\max\{a,0\}$.}
{Since $m_0\leq \frac14$, we have $\ln \frac1m\ge \ln 4>0$ for any $m\le m_0$.}
{Recalling $y_i\in \omega\subset(0,1)$ and $g_i\le m$ we see that $\supp\hat\psi\subset(-1,2)$.}
{We compute
\begin{equation*}
\|\psi_i\|_{L^1(\mathbb{R})}\le 2\int_0^{g_i/m} \ln\frac{g_i}{m t} \dt = \frac{2g_i}{m} \int_0^1 \ln \frac1t \dt = \frac{2g_i}{m},
\end{equation*}
}
{and analogously $\|\psi_i\|_{L^2(\mathbb{R})}^2\le 4 g_i/ m$, which imply, recalling that $\sum_i 2g_i\le 3m$,}
\begin{equation*}
\|\hat\psi\|_{L^1(\mathbb{R})} \le \sum_{i=1}^n \|\psi_i\|_{L^1(\mathbb{R})}\leq\frac{1}{m}\sum_{i=1}^n 2 g_i\leq 3
{\qquad\text{and}\qquad
\|\hat\psi\|_{L^2(\mathbb{R})}^2 \le
\int_\mathbb{R} \max_i |\psi_i|^2 (t-y_i)\dt \le
\sum_{i=1}^n \|\psi_i\|_{L^2(\mathbb{R})}^2\leq 6 .}
\end{equation*}
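The $L^1$ and $L^2$ computations above rest on the moments of the logarithm: substituting $t=e^{-\tau}$,
\begin{equation*}
\int_0^1 \ln^k\frac1t \dt=\int_0^\infty \tau^k e^{-\tau}\,d\tau=k!\,,
\end{equation*}
so that $\int_0^1\ln\frac1t\dt=1$ and $\int_0^1\ln^2\frac1t\dt=2$.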
{To estimate the $L^2$ norm of $\hat\psi'$ we first compute
\begin{equation*}
|\psi'_i|(t)=\frac{1}{|t|}\chi_{\{g_i\le |t|\le g_i/m\}}
\end{equation*}
and observe that $|\hat\psi'|^2(t)\le \max_i |\psi_i'|^2(t-y_i)\le \sum_i |\psi_i'|^2(t-y_i)$.
This implies}
\begin{equation*}
\|\hat\psi'\|^2_{L^2(\mathbb{R})}\le \sum_{i=1}^n\int_{\mathbb{R}}|\psi_i'(t)|^2\dt
={2}\sum_{i=1}^n\int_{g_i}^{g_i/m} \frac1{t^2}\dt
\le 2\sum_{i=1}^n\frac{1}{g_i}
\leq \frac{2n}{\lambda m}
\le \frac{1}{{\lambda^2} m}.
\end{equation*}
We then estimate the $H^{1/2}$ norm of $\hat\psi$. Specifically, we define an extension and estimate its homogeneous $H^1$ norm. Following
\cite[Lemma 5.2]{conti-zwicknagl:16} we let
$\Psi_i(x):=\psi_i(|x|)$ be the radially symmetric extension of $\psi_i$ to $\mathbb{R}^2$ and compute
\begin{eqnarray*}
\int_{\mathbb{R}^2}|\nabla \Psi_i|^2\dxy=2\pi\int_{g_i}^{g_i/m} r\frac{1}{r^2}\,\dr=2\pi\ln\frac{1}{m}.
\end{eqnarray*}
We define the function $\hat\Psi(x_1,x_2):=\max_i \Psi_i(x_1,x_2-y_i)$, which obeys $\hat\Psi(0,t)=\hat\psi(t)$
for $t\in \mathbb{R}$ and $|\nabla \hat\Psi|(x)\le \max_i |\nabla \Psi_i|(x_1, x_2-y_i)$ for almost every $x\in\mathbb{R}^2$. This implies (recall \eqref{eq:gi} and $n\leq \hat{C}\lambda^{-1}\leq (2\pi)^{-1}\lambda^{-1}$)
\begin{equation*}
{\|\nabla \hat\Psi\|^2_{L^2(\mathbb{R}^2)}
=\int_{\mathbb{R}^2} |\nabla \hat\Psi|^2\,\dxy}\le
\sum_{i=1}^n\int_{{\mathbb{R}^2}}|\nabla \Psi_i(x_1,x_2-y_i)|^2\,\dxy
=2\pi n\ln\frac{1}{m}\leq\frac{1}{\lambda}\ln\frac{1}{m}.
\end{equation*}
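Recall that, by the standard trace characterization of $H^{1/2}$, any $H^1$ extension bounds the seminorm up to a universal constant; since $\hat\Psi(0,\cdot)=\hat\psi$, this yields
\begin{equation*}
[\hat\psi]^2_{H^{1/2}(\mathbb{R})}\le c\,\|\nabla \hat\Psi\|^2_{L^2(\mathbb{R}^2)}\le \frac{c}{\lambda}\ln\frac1m.
\end{equation*}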
\textbf{Step 3: Boundary correction for $\mu\le\theta$}.\\
In this regime we augment the test function near the horizontal boundaries with pieces attaining the largest value, $\ln\frac1m$.
{Specifically, we set $\psi(t):=\max \{\hat\psi(t),\psi_B(t), \psi_B(1-t)\}$, where
\begin{equation*}
\psi_B(t):=\left[ \ln \frac{1}{m} - \left(\ln \frac{|t|}{m}\right)_+ \right]_+
{=
\begin{cases}
\ln\frac1m, & \text{ if } |t|\le m,\\
\ln\frac{1}{|t|}, & \text{ if } m<|t|\le 1,\\
0, & \text{ if } |t|>1\,.
\end{cases}}
\end{equation*}
We remark that $\psi_B$ has the same form as the functions $\psi_i$, with the only difference that the width of the central region is not $g_i\in[\lambda m,m]$ but exactly $m$. This is important to ensure symmetry of the boundary conditions.
One computes
$\|\psi_B\|_{L^1((0,1))} \le 1$, $\|\psi_B\|_{L^2((0,1))}^2\le 2$ and
$\|\psi_B'\|^2_{L^2((0,1))}\le \frac1m$.
The previous estimates for $\hat\psi$ lead then to
\begin{equation}\label{eqestpsiwithbdry}
\|\psi\|_{L^1((0,1))}\le 5\,,\hskip4mm
\|\psi\|^2_{L^2((0,1))}\le 10\,,\hskip4mm
\|\psi'\|_{L^2(\mathbb{R})}\le \frac{3}{\lambda m^{1/2}}\,,\hskip4mm
\|\nabla \Psi\|_{L^2(\mathbb{R}^2)} \le c\frac{1}{\lambda^{1/2}}\ln^{1/2}\frac1m
\end{equation}
where $\Psi(x):=\max\{\hat\Psi (x), \psi_B(|x|), \psi_B(|x-e_2|)\}$ obeys $\supp\Psi\subseteq[-1,1]\times[-1,2]$,
$\Psi(0,t)=\psi(t)$, $\Psi(t,0)=\Psi(t,1)=\psi_B(t)$. Here it is important that $g_i\le m$, so that, in taking the maximum, $\psi_B$ is always the larger one on the top and bottom boundaries, $x_2\in\{0,1\}$.
}
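The last bound in (\ref{eqestpsiwithbdry}) can be checked as follows: almost everywhere $|\nabla\Psi|\le|\nabla\hat\Psi|+|\psi_B'|(|x|)+|\psi_B'|(|x-e_2|)$, and a radial computation gives
\begin{equation*}
\int_{\mathbb{R}^2}|\psi_B'(|x|)|^2\,\dxy=2\pi\int_m^1\frac{dr}{r}=2\pi\ln\frac1m;
\end{equation*}
together with the estimate $\|\nabla\hat\Psi\|^2_{L^2(\mathbb{R}^2)}\le\frac1\lambda\ln\frac1m$ from Step 2 and $\lambda\le1$, this yields the claim.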
{Since we are working in the case $\mu\le\theta$, from Lemma \ref{lemmaystin} with $c_*:=\frac1{8}$ we obtain the following: either
\begin{equation*}
\frac1c I(u)\ge \min\{\mu\theta^2 \ln(3+\frac\theta\mu),\mu\theta^2 \ln\frac1m, \mu\theta^2\ln(3+\ell)
\}
\end{equation*}
and (since $\lambda\le 1$) we are done, or
\begin{equation*}
\int_m^{1} \frac{|u_1(s,1)-u_1(s,0)|}{s}\ds\le
\frac1{8}\theta\ln\frac{1}{m}.
\end{equation*}
Since $\psi_B'=0$ on $(0,m)$ and $\psi_B'(s)=-1/s$ on $(m,1)$,
this implies
\begin{equation}\label{claimstep3}
\int_{(0,1)} \psi_B'(s) \big[u_1(s, 1)-u_1(s,0)\big]\ds \le \frac1{8} \theta\ln \frac1m .
\end{equation}
}
We now turn to $u_2$ and recall that (\ref{eq:estv2}) {implies}
\begin{equation}\label{claimstep3b}
\int_{(0,1)} \psi'_B(s) \big[u_2(x_1^\ast-s\xi_1,1)-u_2(x_1^\ast+(s+1)\xi_1,1)\big] \ds\le \frac1{32}\theta\ln\frac1m.
\end{equation}
\textbf{Step 4: Energy estimate for $\mu\le\theta$.\\}
We compute, recalling that $\omega\subset\omega_{1/2}$ and that $\psi=\ln\frac1m$ on $\omega\cup\{0,1\}$,
\begin{equation*}
\begin{split}
\mathcal{L}^1(\omega)\ln\frac1m =& \int_{(0,1)} \chi_{\omega}\psi \ds \le \int_{(0,1)} \chi_{\omega_{1/2}}\psi \ds \\
=&
\int_{(0,1)} (\chi_{\omega_{1/2}}-\theta+v')\psi\ds+
\int_{(0,1)} (\theta-v') \psi \ds
\\
\le&
\|v'+\chi_{\omega_{1/2}}-\theta\|_{L^2((0,1))} \|\psi\|_{L^2((0,1))} +
\theta \|\psi\|_{L^1((0,1))}+
(v(0)-v(1))\ln\frac1m +\int_{(0,1)} v \psi'\ds,
\end{split}
\end{equation*}
{where we integrated by parts in the last term. }
We recall that (\ref{eqv1v0})
gives
\begin{equation*}
( v(0)-v(1))\ln\frac1m\le
( \mathcal{L}^1(\omega)-\theta+2^{-4}\theta)\ln\frac1m
\end{equation*}
and that $ \|v'+\chi_{\omega_{1/2}}-\theta\|_{L^2((0,1))}\le 2^{-5}\theta$ by (\ref{eq:estv}) {using $\varepsilon\lambda^{-1}\leq\theta^2$}.
{
Inserting in the previous expression gives
\begin{equation*}
\begin{split}
\theta\ln\frac1m
\le& 2^{-5}\theta\|\psi\|_{L^2((0,1))}
+\theta \|\psi\|_{L^1((0,1))}
+2^{-4}\theta \ln\frac1m
+\int_{(0,1)} v \psi'\ds.
\end{split}
\end{equation*}
Since the estimates in (\ref{eqestpsiwithbdry}) give $\|\psi\|_{L^2((0,1))} \le 4$ and $\|\psi\|_{L^1((0,1))} \le 5$, choosing $m_0$ such that $5\le 2^{-4}\ln\frac1m$ for $m\in(0,m_0]$ we get
\begin{equation*}
\begin{split}
\frac12 \theta\ln\frac1m
\le&
\int_{(0,1)} v \psi'\ds.
\end{split}
\end{equation*}
}
{Using the definition of $v$, this gives
\begin{equation} \label{eq:compN}
{\frac12 \theta\ln \frac1{ m}}
\leq \int_{(0,1)} u_1((x_1^\ast,0)+s\xi) \psi'(s)\text{ d}s+ 4\int_{(0,1)} u_2((x_1^\ast,0)+s\xi) \psi'(s)\text{ d}s .
\end{equation}
}
{At this point we distinguish two cases, depending on which of the two terms in \eqref{eq:compN} is larger. In the first case,}
by the fundamental theorem {and the trace theorem} we have
\begin{equation*}
{\frac1{4} \theta\ln \frac1{ m}}\le
\int_{(0,1)} u_1((x_1^\ast,0)+s\xi) \psi'(s)\text{ d}s
= \int_{(0,1)} \int_0^{x_1^\ast+s \xi_1} \partial_1 u_1(x_1,s\xi_2) \text{ d}x_1\, \psi'(s) \text{ d}s + \int_{(0,1)} u_1(0,s) \psi'(s) \text{ d}s.
\end{equation*}
The first integral can be estimated by $\|\psi'\|_{L^2((0,1))}\ell^{1/2} I^{1/2}(u)$.
Recalling (\ref{claimstep3}),
\begin{equation*}
{\frac1{8} \theta\ln \frac1{ m}}\le
\|\psi'\|_{L^2((0,1))} \ell^{1/2} I^{1/2}(u)
+ \int_{(0,1)} u_1(0,s) \psi'(s) \ds
+ \int_{(0,1)} (u_1(s,1) -u_1(s,0)) \psi_B'(s) \ds.
\end{equation*}
For brevity we write here $I(u)$ for $I_{\tilde{\Omega}_\ell}(u)$.
{We define $F_1:=(-1,1)\times(-1,2)\setminus[0,1]^2$ and observe that the last two integrals can be written as a boundary integral
of $u_1$ times the tangential derivative $\partial_\tau\Psi$, and that $\Psi$ vanishes on the rest of the boundary of $F_1$. Therefore
\begin{align*}
{\frac1{8} \theta\ln \frac1{ m}}
\le \int_{\partial F_1} u_1\, \partial_\tau \Psi \dcalH^1+\|\psi'\|_{L^2((0,1))}\ell^{1/2} I^{1/2}(u).
\end{align*}
With Lemma \ref{lemmah12app} and the estimates for $\psi$ in (\ref{eqestpsiwithbdry}) this gives
\begin{align*}
{\frac1{8} \theta\ln \frac1{ m}}
\le \| \nabla\Psi\|_{L^2(F_1)} \|\nabla u_1\|_{L^2(F_1)}
+\|\psi'\|_{L^2((0,1))}\ell^{1/2} I^{1/2}(u)
\le c\frac{1}{\lambda^{1/2}}\ln^{1/2}\frac1m \mu^{-1/2} I^{1/2}(u)
+c\frac{1}{\lambda m^{1/2}} \ell^{1/2} I^{1/2}(u),
\end{align*}
which gives $I(u)\ge c\min\{\mu\theta^2 \lambda \ln\frac1m, \frac{\theta^2\lambda^2 m}{\ell} \ln^2\frac1m \}$ and concludes the proof in this case.
}
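Here and in the following we use that an estimate of the form $a\le (A_1+A_2)\,I^{1/2}(u)$, with $a,A_1,A_2>0$, implies
\begin{equation*}
I(u)\ge \frac{a^2}{(A_1+A_2)^2}\ge \frac{a^2}{4}\min\Big\{\frac1{A_1^2},\frac1{A_2^2}\Big\};
\end{equation*}
above this was applied with $a=\frac18\theta\ln\frac1m$, $A_1=c\lambda^{-1/2}\mu^{-1/2}\ln^{1/2}\frac1m$ and $A_2=c\lambda^{-1}m^{-1/2}\ell^{1/2}$.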
We now turn to the second case, in which the second term in \eqref{eq:compN} is the largest, and write correspondingly, using the fundamental theorem of calculus,
\begin{equation*}\begin{split}
{\frac1{16} \theta\ln \frac1{ m}}\le&
\int_{(0,1)} u_2(x_1^\ast+s\xi_1,s\xi_2) \psi'(s)\ds\\
=& - \int_{(0,1)} \int_{( s\xi_2,1)} \partial_2 u_2(x_1^\ast+s\xi_1,x_2) \, \psi'(s) \dy\ds + \int_{(0,1)} u_2(x_1^\ast+s\xi_1,1) \psi'(s) \text{ d}s,
\end{split}
\end{equation*}
with the first integral being estimated by $2\|\psi'\|_{L^2((0,1))} I^{1/2}(u)$.
We recall that (\ref{claimstep3b}) states, after changing variables separately in the two terms,
\begin{equation*}
- \int_{(-1,0)} u_2(x_1^\ast+s\xi_1,1) \psi'_B(s) \ds
{-}\int_{(1,2)}u_2(x_1^\ast+s\xi_1,1) \psi'_B(s-1) \ds\le \frac1{32}\theta\ln\frac1m,
\end{equation*}
Summing the above estimates, we obtain
\begin{equation*}
{\frac1{32} \theta\ln \frac1{ m}}\le
2 \|\psi'\|_{L^2((0,1))} I^{1/2}(u)+ \int_{\mathbb{R}} u_2(x_1^\ast+s\xi_1,{1}) \psi'(s) \text{ d}s .
\end{equation*}
As above, using the estimates for $\psi$ in (\ref{eqestpsiwithbdry}) and Lemma
\ref{lemmah12app} with
the extension to $F_2:=(-2,\ell+2)\times {(1,2)}$, {and using $\tilde\Psi(x_1^\ast+s\xi_1,x_2):=\Psi(x_2-1,s)$,} leads to
\begin{equation*}
{\frac1{32} \theta\ln \frac1{ m}}
\le
2\|\psi'\|_{L^2((0,1))} I^{1/2}(u)
{+2\| \nabla \tilde\Psi\|_{L^2(F_2)}} \|\nabla u_2\|_{L^2(F_2)}
\le
c\frac{1}{\lambda m^{1/2}} I^{1/2}(u)
+ c{\lambda^{-1/2}}\ln^{1/2}\frac1m \mu^{-1/2} I^{1/2}(u),
\end{equation*}
which gives $I(u)\ge c\min\{\mu\theta^2 \lambda\ln\frac1m, {\theta^2\lambda^2 m} \ln^2\frac1m \}$ and, since $1\le\ell$, concludes the proof also in this case.
\textbf{Step 5: Boundary correction for $\theta<\mu$}.\\
In this case we truncate, so that the new function vanishes at $s=0$ and $s=1$. We set
\begin{equation*}
\psi_T(t):=\left[ \ln \frac{1}{m} - \left(\ln \frac{3|t|}{m}\right)_+ \right]_+
{=
\begin{cases}
\ln\frac1m, & \text{ if } |t|\le \frac13 m,\\
\ln\frac{1}{3|t|}, & \text{ if } \frac13 m<|t|\le \frac13 ,\\
0 ,& \text{ if } |t|>\frac13 \,.
\end{cases}}
\end{equation*}
We remark that $\psi_T$ has the same form as the functions $\psi_i$, with the only difference that the width of the central region is not $g_i\in[\lambda m,m]$ but exactly $\frac13 m$, so that $\supp \psi_T=[-\frac13,\frac13]$. This is important to ensure symmetry of the boundary conditions.
{We then define
\begin{equation*}
\psi(t):=
\begin{cases}
\max \{\hat\psi(t),\psi_T(t-\frac13), \psi_T(t-\frac23)\}, & \text{ if } t\in(\frac13,\frac23),\\
\psi_T(t-\frac13),&\text{ if } t\le \frac13,\\
\psi_T(t-\frac23),&\text{ if } t\ge \frac23 ,
\end{cases}
\end{equation*}
and observe that $\psi=0$ on $\mathbb{R}\setminus (0,1)$.
Correspondingly,
\begin{equation*}
\Psi(x):=
\begin{cases}
\max \{\hat\Psi(x),\psi_T(| (\frac13 x_1,x_2-\frac13)|)
,\psi_T(| (\frac13 x_1,x_2-\frac23)|)\}, & \text{ if } x_2\in(\frac13,\frac23),\\
\psi_T(| (\frac13 x_1,x_2-\frac13)|)
,&\text{ if } x_2\le \frac13, \\
\psi_T(| (\frac13 x_1,x_2-\frac23)|),&\text{ if } x_2\ge \frac23 .
\end{cases}
\end{equation*}
It is apparent that $\Psi(0,t)=\psi(t)$, and that $\Psi=0$ outside $(-1,1)\times(0,1)$.
For $x_2=\frac13$ we observe that
$\hat\Psi((x_1,\frac13))\le \max_i \psi_i(x_1)\le \min\{\ln\frac1m, (\ln \frac{\max g_i}{m|x_1|})_+\}\le \min\{\ln\frac 1m, (\ln\frac1{|x_1|})_+\}=\psi_T(\frac13|x_1|)$. Therefore $\Psi$ is continuous across the boundaries $x_2\in\{\frac13,\frac23\}$.
}
Repeating the same estimates as above we obtain
\begin{equation}\label{eqestpsiwithbdryd}
\|\psi\|_{L^1((0,1))}\le 5\,,\hskip4mm
\|\psi\|^2_{L^2((0,1))}\le 10\,,\hskip4mm
\|\psi'\|_{L^2(\mathbb{R})}\le \frac{c}{\lambda m^{1/2}}\,,\hskip4mm
\|\nabla \Psi\|_{L^2(\mathbb{R}^2)} \le c{\frac{1}{\lambda^{1/2}}}\ln^{1/2}\frac1m.
\end{equation}
At this point we need to check that
restricting to the central one-third of $(0,1)$ we did not lose most of the minority phase. Specifically, we claim that we may assume that
\begin{equation}\label{eqvol1323}
\mathcal{L}^1(\omega\cap (\frac13,\frac23))\ge \frac1{8}\theta.
\end{equation}
To prove (\ref{eqvol1323}), we first
show that
\begin{equation}\label{eqalphavalpha}
\min_{\alpha\in\mathbb{R}}\|v-\alpha \|_{L^1((\frac13,\frac23))}
\le c (\mu^{-1/2}+ \ell^{1/2}) I^{1/2}(u).
\end{equation}
Let $\alpha_1:=\int_{(-1,0)\times(0,1)} u_1\,\dxy$. By Poincar\'{e}'s inequality and the trace theorem
in $W^{1,2}$ we have
\begin{equation*}
\| u_1(0,\cdot)-\alpha_1\|_{L^1((\frac13,\frac23))}\le
\| u_1(0,\cdot)-\alpha_1\|_{L^1((0,1))}\le c\|\nabla u_1\|_{L^2((-1,0)\times(0,1))} \le c\mu^{-1/2} I^{1/2}(u)
\end{equation*}
and
\begin{equation*}
\begin{split}
\int_{1/3}^{2/3} |u_1((x_1^\ast,0)+\xi s)-\alpha_1| \ds
\le &
\int_{1/3}^{2/3} |u_1(0,s)-\alpha_1| \ds
+\int_{1/3}^{2/3}\int_{(0,x_1^\ast+\xi_1)} |\partial_1 u_1|(t,s) \dt\ds\\
\le& c\mu^{-1/2} I^{1/2}(u) + c \ell^{1/2} I^{1/2}(u).
\end{split}
\end{equation*}
Analogously, with $\alpha_2:=\int_{(x_1^\ast, x_1^\ast+1)\times(-1,0)} u_2\,\dxy$,
\begin{equation*}
\int_{1/3}^{2/3} |u_2((x_1^\ast,0)+\xi s)-\alpha_2| \ds
\le c\mu^{-1/2} I^{1/2}(u) + c I^{1/2}(u).
\end{equation*}
Recalling that $v(s)=(u_1+4u_2)((x_1^\ast,0)+\xi s)$ concludes the proof of (\ref{eqalphavalpha}) {since $\ell\geq 1$}.
We now prove (\ref{eqvol1323}). If it does not hold, then
\begin{equation*}
\begin{split}
\int_{1/3}^{2/3} |v'-\theta| \ds \le&
\int_{1/3}^{2/3} |v'-\theta+\chi_{\omega_{1/2}}| \ds +\mathcal{L}^1(\omega_{1/2}\cap(\frac13,\frac23)) \\
\le&
\int_{(0,1)} |v'-\theta+\chi_{\omega_{1/2}}| \ds +\mathcal{L}^1(\omega_{1/2}\setminus\omega)
+\mathcal{L}^1(\omega\cap(\frac13,\frac23))\\
\le&
\hat C^{1/2} \theta+2^{-{6}}\theta + \frac18\theta\le \frac14\theta
\end{split}
\end{equation*}
where we used \eqref{eq:chi12}, \eqref{eq:estv}, $\varepsilon\lambda^{-1}\leq\theta^2$ and (\ref{eqomegaomega12}).
This implies $v(s)-v(s')\ge \theta(s-s')-\frac14\theta$ for all $s,s'\in(\frac13,\frac23)$ with $s'<s$, and therefore
\begin{equation*}
\min_{\alpha\in\mathbb{R}}\|v-\alpha \|_{L^1((\frac13,\frac23))}\ge
\frac12 \int_{0}^{1/6}|v(\frac12+s)-v(\frac12-s)|\ds
\ge \frac12\theta \int_{1/8}^{1/6} (2s -\frac14) ds =c\theta.
\end{equation*}
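The last integral is explicit:
\begin{equation*}
\int_{1/8}^{1/6} \Big(2s -\frac14\Big) \ds=\Big[s^2-\frac{s}{4}\Big]_{1/8}^{1/6}=\frac1{576},
\end{equation*}
so that one may take $c=\frac1{1152}$ here.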
Recalling (\ref{eqalphavalpha}), this implies $I(u)\ge c \min\{\mu\theta^2, \ell^{-1}\theta^2\}$ which, since $\lambda^2m\ln^2\frac1m\le 1$ and $\theta\le \mu$, concludes the proof. Therefore we can assume that (\ref{eqvol1323}) holds.
\textbf{Step 6: Energy estimate for $\theta<\mu$.\\}
{
The computation is similar to Step 4, but with significant differences in the treatment of the boundary terms. }
Recalling (\ref{eqvol1323}), that $\psi=\ln\frac1m$
and $-v' \psi\geq t^\ast \ln \frac{1}{m}$ on $\omega\cap (\frac13,\frac23)$, with $t^\ast\ge \frac12$, and $\psi\geq0$, we have
\begin{align*}
{\frac1{16} \theta\ln \frac1{ m}}&\le
t^\ast \mathcal{L}^1(\omega\cap (\frac13,\frac23)) \ln \frac1{ m}
\le -\int_{(0,1)} v'(s) \psi(s)\text{ d}s+\int_{\{v'\geq 0\}\cap(0,1)}v'(s) \psi(s) \text{ d}s.
\end{align*}
{First we observe that
\begin{equation*}
\begin{split}
\int_{\{v'\geq 0\}\cap(0,1)}v'(s) \psi(s) \text{ d}s
& \le \theta \|\psi\|_{L^1((0,1))} +
\| \min\{|v'-\theta|, |v'+(1-\theta)|\}\|_{L^2((0,1))} \|\psi\|_{L^2((0,1))}
\\&\le 5\theta + \hat C^{1/2}\theta 10^{1/2}\le\frac1{32}\theta\ln\frac1m ,
\end{split}
\end{equation*}
where in the second step we used (\ref{eqestpsiwithbdryd}) and assumed that $m_0$ is chosen such that $5+\hat C^{1/2}10^{1/2}\le\frac1{32}\ln\frac1m$.
}
Inserting in the previous expression and integrating by parts we get
\begin{equation*}
\begin{split}
\frac1{32} \theta\ln\frac1m
\le&
\int_{(0,1)} v \psi'\ds.
\end{split}
\end{equation*}
The rest of the proof is very close to the one of Step 4, with some simplifications in the treatment of the exterior field. For the convenience of the reader we repeat the computation here. Recalling the definition of $v$,
\begin{equation} \label{eq:compNdue}
{\frac1{32} \theta\ln \frac1{ m}}
\leq \int_{(0,1)} u_1((x_1^\ast,0)+s\xi) \psi'(s)\text{ d}s+ 4\int_{(0,1)} u_2((x_1^\ast,0)+s\xi) \psi'(s)\text{ d}s .
\end{equation}
If the first term in \eqref{eq:compNdue} is larger than the second,
by the fundamental theorem of calculus {and the trace theorem}
\begin{equation*}
{\frac1{64} \theta\ln \frac1{ m}}\le
\int_{(0,1)}u_1((x_1^\ast,0)+s\xi) \psi'(s)\text{ d}s
= \int_{(0,1)} \int_0^{x_1^\ast+s \xi_1} \partial_1 u_1(x_1,s\xi_2) \text{ d}x_1\, \psi'(s) \text{ d}s + \int_{(0,1)} u_1(0,s) \psi'(s) \text{ d}s,
\end{equation*}
{with the first integral being estimated by $\|\psi'\|_{L^2((0,1))} \ell^{1/2}I^{1/2}(u)$.
}
{Letting $F_3:=(-1,0)\times(0,1)$ and recalling that $\Psi(0,t)=\psi(t)$ and $\Psi(-1,t)=\Psi(-t,0)=\Psi(-t,1)=0$ for $t\in(0,1)$,
we see that the last two integrals can be written as a boundary integral
of $u_1$ times the tangential derivative $\partial_\tau\Psi$, and that $\Psi$ vanishes on the rest of the boundary of $F_3$. Therefore
\begin{align*}
{\frac1{64} \theta\ln \frac1{ m}}
\le \int_{\partial F_3} u_1\, \partial_\tau \Psi \dcalH^1+\|\psi'\|_{L^2((0,1))} \ell^{1/2}I^{1/2}(u).
\end{align*}
With Lemma \ref{lemmah12app} and the estimates for $\psi$ in (\ref{eqestpsiwithbdryd}) this gives
\begin{align*}
{\frac1{64} \theta\ln \frac1{ m}}
\le \| \nabla\Psi\|_{L^2(F_3)} \|\nabla u_1\|_{L^2(F_3)}
+\|\psi'\|_{L^2((0,1))} \ell^{1/2} I^{1/2}(u)
\le c\frac{1}{\lambda^{1/2}}\ln^{1/2}\frac1m \mu^{-1/2} I^{1/2}(u)
+c\frac{1}{\lambda m^{1/2}}\ell^{1/2} I^{1/2}(u),
\end{align*}
which gives $I(u)\ge c\min\{\mu\theta^2\lambda \ln\frac1m, \frac{\theta^2\lambda^2 m}{\ell} \ln^2\frac1m \}$ and concludes the proof in this case.
}
If instead the second term in \eqref{eq:compNdue} is the {larger one}, we write
\begin{equation*}\begin{split}
{\frac1{256} \theta\ln \frac1{ m}}\le&
\int_{(0,1)} u_2(x_1^\ast+s\xi_1,s\xi_2) \psi'(s)\ds\\
=& - \int_{(0,1)} \int_{( s\xi_2,1)} \partial_2 u_2(x_1^\ast+s\xi_1,x_2) \, \psi'(s) \dy\ds + \int_{(0,1)} u_2(x_1^\ast+s\xi_1,1) \psi'(s) \text{ d}s,
\end{split}
\end{equation*}
with the first integral being estimated by $2\|\psi'\|_{L^2((0,1))} I^{1/2}(u)$.
{As above, using the estimates for $\psi$ in (\ref{eqestpsiwithbdryd}) and Lemma
\ref{lemmah12app} with
the extension to $F_4:=(-2,\ell+2)\times (-1,0)$ leads to
\begin{equation*}
{\frac1{256} \theta\ln \frac1{ m}}
\le 2\| \nabla \Psi\|_{L^2(F_4)} \|\nabla u_2\|_{L^2(F_4)}
+2\|\psi'\|_{L^2((0,1))} I^{1/2}(u)
\le c\ln^{1/2}\frac1m \mu^{-1/2} \frac{1}{\lambda^{1/2}} I^{1/2}(u)
+c\frac{1}{\lambda m^{1/2}} I^{1/2}(u),
\end{equation*}
which gives $I(u)\ge c\min\{\mu\theta^2 \lambda\ln\frac1m, {\theta^2\lambda^2 m} \ln^2\frac1m \}$ and, since $1\le\ell$, concludes the proof also in this case.}
\end{proof}
We finally turn to the proof of Proposition \ref{lem:lbbranching}, in which the different ingredients proven in this Section are put together.\\
\begin{proof}[Proof of Proposition \ref{lem:lbbranching}]
Let $m_0\in(0,\frac14]$ be given as in Lemma \ref{lem:thetasmallbranching}, and define $m_1\in(0,m_0)$ as the unique solution to $m_1\ln\frac1{m_1}=m_0$.
If {$\theta \geq m_1$}
the statement follows directly from Lemma \ref{lem:thetalargebranching}.
Otherwise we use Lemma \ref{lem:thetasmallbranching} with the following choices of the parameters $\lambda$ and $m$:
\begin{itemize}
\item [i)] Consider first the case $\mu \geq \varepsilon^{1/3} {\theta^{-2/3}}\ell^{-1/3}{m_0^{2/3}}$.
Choose $m:=m_0$ and $\lambda:=\varepsilon^{1/3}{\theta^{-2/3}}\ell^{2/3}$.
{ Then $\varepsilon\ell^2\le \theta^2$ implies on the one hand $\lambda\le 1$,
and on the other hand $\varepsilon\le \theta^2\ell^{-2}\le \theta^2\ell$, which gives $\varepsilon\le \theta^{2}\lambda$.}
Thus, $\lambda$ is admissible.
In this case, $\mu\theta^2\lambda \ln \frac1m\le c\mu\theta^2$, hence the second and the last two terms in the minimum can be ignored.
Therefore, Lemma \ref{lem:thetasmallbranching} yields
that
\[I_{\tilde{\Omega}_\ell}(u)\geq c\min\{\varepsilon^{2/3}{\theta^{2/3}}\ell^{1/3},\,
\mu\varepsilon^{1/3}{\theta^{4/3}}\ell^{2/3}\}\ge c \varepsilon^{2/3}{\theta^{2/3}}\ell^{1/3}\]
{where we used $\mu\theta^2\lambda\ge
\varepsilon^{2/3}\theta^{2/3}\ell^{1/3}{m_0^{2/3}}$ by the assumption on $\mu$ and the definition of $\lambda$.}
\item[ii)] Suppose now that $\mu < \varepsilon^{1/3} \theta^{-2/3}{\ell^{-1/3}}{m_0^{2/3}}$.
We set $m := \max \{ \mu^{3/2}\varepsilon^{-1/2} \theta \ell^{1/2},\theta\ln\frac1\theta\}$ and $\lambda:=\max\{\varepsilon\theta^{-2},{{\frac12}
\mu^{-1/2}\varepsilon^{1/2}\theta^{-1}\ell^{1/2}}\ln^{-1/2}{\frac{1}{m}}\}$.
{By the assumption on $\mu$ and $0<\theta<m_1$ we obtain $m\in[\theta,m_0]$.}
Further, $\lambda\ge\varepsilon\theta^{-2}$ by definition.
{It remains to show $\lambda\le1$.
Since $\varepsilon\ell^2\le\theta^2$ implies $\varepsilon\theta^{-2}\le 1$, it suffices to prove that
\begin{equation}\label{eqminmmln}
\min\{
\ln(3+\frac{\varepsilon}{\mu^3\theta^2\ell}),
\ln(3+\frac1{\theta^2})\}
\le 4 \ln \frac1m.
\end{equation}
Indeed, (\ref{eqminmmln}) and the assumption on $\varepsilon\ell$ give
$ \varepsilon\ell \le 4 \mu\theta^2\ln\frac1m$
which immediately implies $\lambda\le1$.
It remains to prove the algebraic inequality (\ref{eqminmmln}).
If $m= \mu^{3/2}\varepsilon^{-1/2} \theta \ell^{1/2}$, then
(\ref{eqminmmln}) follows from the fact that for any $x\in(0,\frac14)$ we have $\ln(3+\frac1{x^2})\le 3\ln\frac1x$.
If instead $m=\theta\ln\frac1\theta$, we use analogously
\begin{equation*}
\ln(3+\frac1{x^2})\le 4\ln\frac1{x\ln \frac1x} \text{ for all } x\in(0,\frac14].
\end{equation*}
The last inequality is equivalent to
$3+\frac1{x^2}\le\frac1{x^4\ln^4 \frac1x}$.
This is true: since $x\ln^2\frac1x\le 4e^{-2}\le\frac23$ for all $x\in (0,1)$ (the maximum is attained at $x=e^{-2}$), we have $x^2\ln^4\frac1x\le\frac49$, hence $\frac1{x^4\ln^4\frac1x}\ge\frac{9}{4x^2}\ge 3+\frac1{x^2}$, where the last step uses $x^2\le\frac{5}{12}$, valid for $x\in(0,\frac14]$.
Therefore (\ref{eqminmmln}) holds.
}
\sloppypar
We use Lemma \ref{lem:thetasmallbranching} and estimate the terms separately below. First, using that $\lambda^{-1} = \min \{\varepsilon^{-1}\theta^2,
{2}\mu^{1/2} \varepsilon^{-1/2}\theta\ell^{-1/2}\ln^{1/2}\frac{1}{m} \}$
and then that $\varepsilon\le\theta^2\ell^{-2}\le \theta^2\ell$, we find
{
\[ \frac{\varepsilon\ell}{\lambda}=\min\left\{\theta^2\ell,\ 2\mu^{1/2}\varepsilon^{1/2}\theta\ell^{1/2}\ln^{1/2}\frac{1}{m}\right\}
\ge\min\left\{\varepsilon^{2/3}\theta^{2/3}\ell^{1/3},\ 2\mu^{1/2}\varepsilon^{1/2}\theta\ell^{1/2}\ln^{1/2}\frac{1}{m}\right\}
.\]}
{From $m\ge \theta\ln\frac1\theta$ we get $\mu m^2\ge \frac13\mu\theta^2\ln(3+\frac1{\theta^2})$, and recalling the assumption on $\varepsilon \ell$ we get
\begin{equation*}
\mu m^2\ge \frac13 \mu\theta^2\ln(3+\frac1{\theta^2})\ge \frac14(\varepsilon \ell \mu\theta^2)^{1/2}\ln^{1/2}(3+\frac1{\theta^2}).
\end{equation*}
}
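The first bound above can be justified as follows: for $\theta\in(0,\frac14]$ we have
\begin{equation*}
3+\frac1{\theta^2}\le\frac{4}{\theta^2}\le \frac1{\theta^3}
\qquad\text{and}\qquad 1\le \ln\frac1\theta,
\end{equation*}
so that $\ln(3+\frac1{\theta^2})\le 3\ln\frac1\theta\le 3\ln^2\frac1\theta$, and hence $m\ge\theta\ln\frac1\theta$ gives $\mu m^2\ge \mu\theta^2\ln^2\frac1\theta\ge \frac13\mu\theta^2\ln(3+\frac1{\theta^2})$.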
{Next, by definition of $\lambda$, we have $\lambda\geq {\frac12}\mu^{-1/2}\varepsilon^{1/2}\theta^{-1}\ell^{1/2}\ln^{-1/2}\frac{1}{m}$ and hence
\[\mu\theta^2\lambda\ln \frac{1}{m}\geq \frac12\mu^{1/2}\varepsilon^{1/2}\theta\ell^{1/2}\ln^{1/2} \frac{1}{m}.\]
Finally, using that $\lambda\geq {\frac12}\mu^{-1/2}\varepsilon^{1/2}\theta^{-1}\ell^{1/2}\ln^{-1/2}\frac{1}{m}$, $m\geq\mu^{3/2}\varepsilon^{-1/2} \theta \ell^{1/2}$ and then $\ln^{1/2}\frac{1}{m}\leq \ln\frac{1}{m}$, we find
\begin{eqnarray*}
\theta^2 \ell^{-1}\lambda^2m\ln^2\frac{1}{m}\geq {\frac14}\theta^2\ell^{-1}\left(\mu^{-1}\varepsilon\theta^{-2}\ell\ln^{-1}\frac{1}{m}\right)\left(\mu^{3/2}\varepsilon^{-1/2}\theta\ell^{1/2}\right)\ln^2\frac{1}{m}\geq {\frac14}\mu^{1/2}\varepsilon^{1/2}\theta\ell^{1/2}\ln^{1/2}\frac{1}{m}.
\end{eqnarray*}
}
Putting things together,
and recalling (\ref{eqminmmln})
we obtain
\begin{equation*}\begin{split}
I_{\tilde{\Omega}_\ell}(u)&\geq c\min\Big\{\varepsilon^{2/3}\theta^{2/3}\ell^{1/3},\\
& \mu^{1/2}\varepsilon^{1/2}\theta\ell^{1/2}\ln^{1/2}(3+\frac{1}{\theta^2}),
\mu^{1/2}\varepsilon^{1/2}\theta\ell^{1/2}\ln^{1/2}(3+\frac{\varepsilon}{\mu^3\theta^2\ell}),
{\mu\theta^2\ln(3+\frac\theta\mu)},
{ \mu \theta^2 \ln(3+\ell)}
{\Big\}}
\end{split} \end{equation*}
which concludes the proof also in this case.
\end{itemize}
\end{proof}
\subsection{A lower bound for small $\varepsilon$, but $\varepsilon L$ not small.}\label{sec:lb3}
We will now consider the remaining cases. We focus here on the situation in which $\varepsilon$ is small, but $L$ is so large that a straight interface along the entire martensitic sample is not optimal. Although this condition does not appear explicitly in the assumptions of
Proposition \ref{propinterp}, the result will only be useful in this situation, as the term $\varepsilon L$ appears as one of the options in the estimate.
The proof of the lower bound in this case has a structure similar to the one of Section \ref{sec:lbunif}.
We shall use, as above, the subdivision of the set of diagonal slices into various subsets. In particular, we shall show in Lemma
\ref{lemmainterpolationestim} that the interface between a $\mathcal P$ and a $\mathcal C$ slice is, energetically speaking, expensive. At variance with
Section \ref{sec:lbunif}, the interpolation between an affine region and a region with periodic boundary values will no longer be penalized with a $D^2u$ term but with a $\partial_1u_1$ term. This requires estimates on $u$, and not only on its derivatives.
\begin{prpstn}[A lower bound in the case {$\varepsilon\le \mu\theta^2$ and $\varepsilon \le \theta^2$}] \label{lem:logsammeln}\label{propinterp}
There exists $c>0$ such that for all $L\in[1/2,\infty)$, $\theta\in(0,1/2]$, $\varepsilon>0$, $\mu>0$, and $u \in \mathcal{X}$ with
\begin{equation*}
\varepsilon\le\mu\theta^2 \qquad\text{ and } \qquad \varepsilon\le\theta^2
\end{equation*}
{we have}
\begin{equation*}
I(u)\ge c \min\{\varepsilon L,\mu\theta^2 \ln(3+L), \mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\varepsilon^{1/2}\theta^{3/2} +
\mu\theta^2\ln (3+\frac{\varepsilon}{\mu^2\theta^2}),
\mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) \}.
\end{equation*}
\end{prpstn}
We first prove {some} lemmata used in the proof of Proposition \ref{lem:logsammeln}.
The first one concerns a local variant of the set $\mathcal C$, for which a sharper estimate on the volume is possible.
\newcommand\setRone{\mathcal R}
\begin{lmm}[Estimates near the boundary]\label{lem:linftyonsmallsets}
Assume that {$ \delta\in(0,\frac1{64}\theta]$}, $u \in W^{1,2}(\Omega_L,\mathbb{R}^2)$, and let
{\begin{equation*}
\setRone:= \Big\{x_1\in (0,L-\xi_1)\,:\,
\frac1{32}\theta\le
\max\big\{ \|u_{x_1}^\xi(s)-u^\xi_{x_1}(0)\|_{L^\infty((0,\delta ))},
\|u_{x_1}^\xi(1-s)-u^\xi_{x_1}(1)\|_{L^\infty((0,\delta))} \big\}\Big\}.
\end{equation*}
Then
\begin{equation*}
I(u)\ge c\theta \mathcal{L}^1(\setRone).
\end{equation*}
}
\end{lmm}
\begin{proof}
For almost every $x_1\in \setRone$ we have $v:=u^\xi_{x_1}\in W^{1,2}((0,1))$.
For $s\in (0,\delta)$ we estimate, using (\ref{eq:estduxi}),
\begin{align*}
|v(1-s)-v(1)|&\le s^{1/2} \|v'\|_{L^2(( 1-s,1))}\\
& \le s^{1/2}( \|1\|_{L^2(( 1-s,1))}+ \|\min \{ |v' -\theta|, |v'+(1-\theta)|\}\|_{L^2((0,1))})\\
& \le {\delta} + \delta ^{1/2} 5 \|\min \{ |e(u)-\theta e_1 \odot e_2|, |e(u)+(1-\theta) e_1 \odot e_2|\}\|_{L^2(\Delta^\xi_{x_1})}
\end{align*}
{
and correspondingly for $|v(s)-v(0)|$. Therefore for almost any $x_1\in \setRone$
we have
\begin{equation*}
\frac1{32}\theta\le \delta+\delta ^{1/2} 5 \|\min \{ |e(u)-\theta e_1 \odot e_2|, |e(u)+(1-\theta) e_1 \odot e_2|\}\|_{L^2(\Delta^\xi_{x_1})}.
\end{equation*}
For $\delta\le\frac1{64}\theta$ the first term on the right-hand side is at most $\frac1{64}\theta$; absorbing it and squaring, we deduce
\begin{equation*}
c\theta\le \|\min \{ |e(u)-\theta e_1 \odot e_2|, |e(u)+(1-\theta) e_1 \odot e_2|\}\|_{L^2(\Delta^\xi_{x_1})}^2
\end{equation*}
and integrating {over $x_1\in\mathcal{R}$} {we obtain} the assertion.
}
\end{proof}
\begin{lmm}[Interpolation estimate]\label{lemmainterpolationestim}
Let $\mathcal P$ be defined as in (\ref{eqdefPth}), $p:=\mathcal{L}^1(\mathcal P)$.
Assume
\begin{equation*}
\varepsilon\le\mu\theta^2 \qquad\text{ and } \qquad \varepsilon\le\theta^2.
\end{equation*}
Then
\begin{equation*}
I(u)\ge c\min\{\varepsilon^{1/2}\theta^{3/2}, \mu\theta^2 p, \theta^2 p, \varepsilon L\}.
\end{equation*}
\end{lmm}
\begin{proof}
The proof is based on selecting a good slice in which $u$ is approximately affine, and another one in which the boundary values are close to each other, and estimating the energy in between. We shall work on a thin slice around the boundary, of width $\delta:=\frac1{64}\theta$.
{\bf Step 1. Estimate on good $\mathcal C$-slices.}\\
We recall that $\mathcal C$
was defined in (\ref{eqdefC}) as the set of slices such that the deformation is close to one of the two martensite variants in the interior, and that by Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelest}) it obeys {(recall that $\varepsilon\leq\theta^2$)}
\begin{equation}\label{eqfinaestC}
I(u)\ge c \varepsilon \mathcal{L}^1\left([0,L-\xi_1]\setminus \mathcal C\right).
\end{equation}
We let $\Gstar$ be the set from Lemma \ref{lem:costsu2inA} with $\bar C:=2^{-7}$, which obeys
\begin{equation}\label{eqfinaestG}
I(u)\ge c \min\{\theta^2 ,\mu\theta^2 \}\mathcal{L}^1(\Gstar).
\end{equation}
By Lemma \ref{lem:costsu2inA}(\ref{lem:costsu2inA3}),
\begin{equation*}
{\frac1\delta\int_{(0,\delta)} |u_2(x_1+{(1-s)\xi_1},1-s)-u_2(x_1+s\xi_1,s)|\ds < \frac1{16}\theta}
\text{ for any } x_1\in [0,L-\xi_1]\setminus\Gstar.
\end{equation*}
{
We can assume $\mathcal{L}^1(\mathcal C\setminus\Gstar)>0$. Indeed, if this were not the case, then (up to null sets)
$\mathcal{C}\subset\Gstar$ and $([0,L-\xi_1]\setminus \mathcal{C}) \cup \Gstar=[0,L-\xi_1]$, which implies
$I(u)\ge cL \min\{\theta^2,\mu\theta^2,\varepsilon\}=c \varepsilon L$ (recall that $\varepsilon\le\mu\theta^2$ and $\varepsilon\le\theta^2$) and concludes the proof.}
{We claim that
\begin{equation}\label{claimxc}
\begin{split}
\frac1{2}\theta
&\le\frac1\delta
\int_{(0,\delta)} |u_1((x_c,0)+s\xi)-u_1((x_c,0)+(1-s)\xi)|\ds
\text{ for any ${x_c}\in\mathcal C\setminus\Gstar$.}
\end{split}
\end{equation}
}
{To see this,
assume $x_1\in\mathcal C$ and for $\sigma\in\{0,1\}$ let $f_\sigma(s):=u_{x_1}^\xi(0)+s(\theta-\sigma)$.
Then
$|f_\sigma(1-s)-f_\sigma(s)|
=|1-2s|\,|\theta-\sigma|\ge |1-2s|\theta$, therefore for any $s\in(0,\delta)$ we have
\begin{equation*}
\begin{split}
\frac78\theta\le (1-2s)\theta\le&
\min_{\sigma\in\{0,1\}} |f_\sigma(s)-f_\sigma(1-s)|\\
\le &|u^\xi_{x_1}(s)-u^\xi_{x_1}(1-s)|
+ \min_{\sigma\in\{0,1\}} (|u^\xi_{x_1}(s)-f_\sigma(s)|+
|u^\xi_{x_1}(1-s)-f_\sigma(1-s)|)\\
\le &|u^\xi_{x_1}(s)-u^\xi_{x_1}(1-s)|+\frac2{16}\theta.
\end{split}
\end{equation*}
Therefore, recalling that $u_{x_1}^\xi(s)=u_1((x_1,0)+s\xi)+4u_2((x_1,0)+s\xi)$,
\begin{equation*}
\begin{split}
\frac34\theta&\le
|u_{x_1}^\xi(s)-u_{x_1}^\xi(1-s)|\\
&\le
|u_1((x_1,0)+s\xi)-u_1((x_1,0)+(1-s)\xi)|
+4|u_2((x_1,0)+s\xi)-u_2((x_1,0)+(1-s)\xi)|
\end{split}
\end{equation*}
for any $x_1\in\mathcal C$.
Averaging over $s\in(0,\delta)$, and using that $x_1\not\in\Gstar$,
\begin{equation*}
\begin{split}
\frac34\theta &\le\frac1\delta
\int_{(0,\delta)}\left|u_{x_1}^\xi(s)-u_{x_1}^\xi(1-s)\right|\ds\\
&\le\frac1\delta
\int_{(0,\delta)} \left|u_1((x_1,0)+s\xi)-u_1((x_1,0)+(1-s)\xi)\right|\ds
+\frac14\theta
\end{split}
\end{equation*}
which proves (\ref{claimxc}).
}
{\bf Step 2. Estimate on good $\mathcal P$-slices.}\\
We
consider
the set $\setRone$ defined in Lemma \ref{lem:linftyonsmallsets}, which obeys
\begin{equation}\label{eqfinaestR}
I(u)\ge c \theta \mathcal{L}^1(\setRone).
\end{equation}
We can assume $\mathcal{L}^1(\mathcal P\setminus\setRone\setminus\Gstar)>0$. Indeed, if this were not the case, then
$\mathcal{L}^1(\setRone\cup\Gstar)\ge p$, $I(u)\ge cp \min\{\theta^2,\mu\theta^2\}$, and the proof is concluded.\\
We claim that
\begin{equation}\label{claimxp}
\begin{split}
\frac1\delta
\int_{(0,\delta)} \left|u_1((x_p,0)+s\xi)-u_1((x_p,0)+(1-s)\xi)\right|\ds\le
\frac3{8}\theta
\text{ for any $x_p\in\mathcal P\setminus\setRone\setminus\Gstar$.}
\end{split}
\end{equation}
Indeed, let $x_1\in \mathcal P$. Then
$|u(x_1,0)-u(x_1+\xi_1,1)|\le 2^{-7}\theta$, therefore
$|u_{x_1}^\xi(0)-u_{x_1}^\xi(1)|\le 2^{-4}\theta$.
If $x_1\not\in\setRone$, then
the triangle inequality shows that for any $s\in(0,\delta)$
\begin{equation*}
\begin{split}
|u_{x_1}^\xi(s)-u_{x_1}^\xi(1-s)|&\le
|u_{x_1}^\xi(0)-u_{x_1}^\xi(1)| +
|u_{x_1}^\xi(s)-u_{x_1}^\xi(0)| +
|u_{x_1}^\xi(1-s)-u_{x_1}^\xi(1)|\le\frac{1}{16}\theta+\frac{2}{32}\theta=\frac18\theta.
\end{split}
\end{equation*}
A similar computation as above, using
\begin{equation*}
\begin{split}
|u_1((x_1,0)+s\xi)-u_1((x_1,0)+(1-s)\xi)|
\le |u_{x_1}^\xi(s)-u_{x_1}^\xi(1-s)|
+
4|u_2((x_1,0)+s\xi)-u_2((x_1,0)+(1-s)\xi)|
\end{split}
\end{equation*}
and $x_1\not\in\Gstar$, leads to
\begin{equation*}
\begin{split}
\frac1\delta
\int_{(0,\delta)} |u_1((x_1,0)+s\xi)-u_1((x_1,0)+(1-s)\xi)|\ds \le
&\frac1\delta
\int_{(0,\delta)}|u_{x_1}^\xi(s)-u_{x_1}^\xi(1-s)|\ds
+\frac14\theta\le \frac38\theta.
\end{split}
\end{equation*}
This concludes the proof of (\ref{claimxp}).
{\bf Step 3. Interpolation.}\\
We choose $x_p\in\mathcal{P}\setminus \setRone\setminus\Gstar$ and $x_c\in\mathcal C\setminus\Gstar$ and compute
\begin{equation*}
\begin{split}
| u_1((x_c,0)+s\xi)-u_1((x_c,0)+(1-s)\xi)|
\le&
| u_1((x_c,0)+s\xi)-u_1((x_p,0)+s\xi)|\\
&+
| u_1((x_p,0)+s\xi)-u_1((x_p,0)+(1-s)\xi)|\\
&+
| u_1((x_p,0)+(1-s)\xi)-u_1((x_c,0)+(1-s)\xi)|.
\end{split}
\end{equation*}
Averaging over $s\in(0,\delta)$ and using (\ref{claimxc}), (\ref{claimxp}) and Hölder's inequality gives
\begin{equation*}
\begin{split}
\frac12\theta\le & \frac1\delta
\int_{(0,\delta)} | u_1((x_c,0)+s\xi)-u_1((x_c,0)+(1-s)\xi)|\ds\\
\le &
\frac1\delta \int_{(0,\delta)} | u_1((x_p,0)+s\xi)-u_1((x_p,0)+(1-s)\xi)|\ds+
\frac1\delta
\int_{(0,\delta)\cup(1-\delta,1)} | u_1((x_p,0)+s\xi)-u_1((x_c,0)+s\xi)|\ds\\
\le & \frac38\theta +
\frac{\sqrt2}{\delta^{1/2}} \Big(\int_{(0,\delta)\cup(1-\delta,1)} | u_1((x_p,0)+s\xi)-u_1((x_c,0)+s\xi)|^2\ds\Big)^{1/2}.
\end{split}
\end{equation*}
Therefore
\begin{equation*}
\begin{split}
\frac1{128}\theta^2\delta\le
\int_{(0,1)} | u_1((x_p,0)+s\xi)-u_1((x_c,0)+s\xi)|^2\ds.
\end{split}
\end{equation*}
By the fundamental theorem of calculus and Hölder's inequality, for any $s\in (0,1)$ we have
\begin{equation*}
| u_1((x_p,0)+s\xi)-u_1((x_c,0)+s\xi)|^2
\le |x_p-x_c| \int_{(0,L)} |\partial_1 u_1|^2(t,s) \dt.
\end{equation*}
Integrating over $s$ gives
\begin{equation*}
\frac{\theta^3}{c}\le
\int_0^1 |u_1((x_p,0)+s\xi)-u_1((x_c,0)+s\xi)|^2\ds\le
|x_c-x_p|\, \|\partial_1u_1\|^2_{L^2((x_c,x_p)\times(0,1))}\le|x_c-x_p|\, I(u).
\end{equation*}
{\bf Step 4. Conclusion of the proof.}\\
{
Let $\beta:=\mathcal{L}^1(
\setRone \cup \Gstar
\cup(
[0,L-\xi_1]\setminus\mathcal C\setminus\mathcal P))$.
On the one hand, (\ref{eqfinaestC}), (\ref{eqfinaestG}), (\ref{eqfinaestR})
give $I(u)\ge c \min\{\varepsilon,\theta^2,\mu\theta^2\} \beta=c \varepsilon \beta$.
On the other hand,
$\inf\{|x_c-x_p|: \text{ (\ref{claimxc}) and (\ref{claimxp}) hold}\}\le \beta$. Therefore
\begin{equation*}
I(u)\ge c \min_{\beta'\in(0,L]}\big[ \frac{\theta^3}{\beta'} +\varepsilon \beta' \big].
\end{equation*}
The function $\beta'\mapsto \frac{\theta^3}{\beta'}+\varepsilon \beta'$ attains its unconstrained minimum at $\beta'_\ast=\varepsilon^{-1/2}\theta^{3/2}$, with value $2\varepsilon^{1/2}\theta^{3/2}$; if $\beta'_\ast>L$, it is decreasing on $(0,L]$ and its minimum over $(0,L]$ is $\frac{\theta^3}{L}+\varepsilon L\ge\varepsilon L$. In either case we obtain
\begin{equation*}
I(u)\ge c\min\{\theta^{3/2}\varepsilon^{1/2}, \varepsilon L\}
\end{equation*}
which concludes the proof.
}
\end{proof}
\begin{lmm}[The boundary logarithm]\label{lemmabdryln}
There are $c>0$ and $m_2\in(0,\frac14]$ such that the following holds.
If
\begin{equation*}
\frac12\le L, \qquad
0<\theta\le m_2, \qquad
\varepsilon\le \mu\theta^2, \qquad
\mu\le \theta, \qquad\text{and}\qquad
\mu^2\theta^2\le\varepsilon,
\end{equation*}
then for any $u\in{\mathcal{X}}$ one has
\begin{equation*}
\frac1c I(u)\ge
\min\left\{\varepsilon L,
\mu\theta^2 \ln(3+\frac1{\theta^2}),
\mu\theta^2 \ln(3+L) ,
\mu\theta^2\ln(3+\frac\theta\mu),
{\mu\theta^2}\ln (3+\frac{\varepsilon}{\mu^2\theta^2})
\right\}.
\end{equation*}
\end{lmm}
\begin{proof}
{\bf Step 1. Energy estimate.}\\
We show that there are $c>0$, $m_2\in(0,\frac14]$ such that for any $m\in[\theta,m_2]$
there is $q\in[0,L]$ such that
\begin{equation}\label{eqstep1energyest}
\frac1c I(u)\ge \min\Big\{ \varepsilon L,
\mu\theta^2\ln(3+\frac\theta\mu),
\varepsilon q + \frac{\theta^2m}{q+1}\ln^2 \frac1m,
\mu\theta^2 \ln\frac1m,
\mu\theta^2 \ln(3+L)
\Big\}.
\end{equation}
{
We define, similarly to Lemma \ref{lem:thetasmallbranching}, $\psi_C(t):=\max \{\psi_B(t), \psi_B(1-t)\}$, where
\begin{equation*}
\psi_B(t):=\left[ \ln \frac{1}{m} - \left(\ln \frac{|t|}{m}\right)_+ \right]_+
=
\begin{cases}
\ln\frac1m, & \text{ if } |t|\le m,\\
\ln\frac{1}{|t|}, & \text{ if } m<|t|\le 1,\\
0, & \text{ if } |t|>1\,,
\end{cases}
\end{equation*}
and compute
$\|\psi_B\|_{L^1((0,1))} \le 1$,
$\|\psi_B'\|_{L^1((0,1))}\le \ln\frac1m$, and
$\|\psi_B'\|_{L^2((0,1))}\le m^{-1/2}$, which imply
\begin{equation}\label{eqestpsiclemmaln}
\|\psi_C\|_{L^1((0,1))} \le 2, \quad
\|\psi_C'\|_{L^1((0,1))}\le 2\ln\frac1m,\quad \text{ and }\quad
\|\psi_C'\|_{L^2((0,1))}\le \frac2{m^{1/2}}.
\end{equation}
}
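For completeness, the stated norms of $\psi_B$ follow by direct computation ($\psi_B'$ vanishes for $t<m$ and equals $-1/t$ on $(m,1)$):

```latex
\|\psi_B\|_{L^1((0,1))} = m\ln\tfrac1m + \int_m^1 \ln\tfrac1t \,\mathrm{d}t
  = m\ln\tfrac1m + \big(1 - m + m\ln m\big) = 1 - m \le 1, \qquad
\|\psi_B'\|_{L^1((0,1))} = \int_m^1 \frac{\mathrm{d}t}{t} = \ln\tfrac1m, \qquad
\|\psi_B'\|_{L^2((0,1))}^2 = \int_m^1 \frac{\mathrm{d}t}{t^2} = \tfrac1m - 1 \le \tfrac1m.
```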
We first claim that
\begin{equation}\label{eqthetalnxc}
\begin{split}
\frac78 \theta\ln\frac1m
&\le
\Big|\int_0^1 u_{x_1}^\xi(s)\psi_C'(s) \ds \Big|
\quad \text{ for any $x_1\in\mathcal C$},
\end{split}
\end{equation}
where $\mathcal C$ was defined in (\ref{eqdefC}).
{To see this we compute, for any $x_1\in\mathcal C$ and $\sigma\in\mathbb{R}$,
\begin{equation*}
\begin{split}
\sigma\ln\frac1m &= \int_0^1 \frac{d}{ds} (s\sigma \psi_C(s)) \ds
=\sigma \int_0^1\psi_C\ds + \int_0^1 (s\sigma)\psi_C'(s) \ds\\
& =\sigma \int_0^1\psi_C\ds + \int_0^1 (s\sigma+u^\xi_{x_1}(0)-u^\xi_{x_1}(s))\psi_C'(s) \ds
+ \int_0^1 (u_{x_1}^\xi(s){-u^\xi_{x_1}(0)})\psi_C'(s) \ds.
\end{split}
\end{equation*}
{Since $\psi_C(0)=\psi_C(1)$, the last term disappears, and}
\begin{equation*}
\begin{split}
| \sigma|\ln\frac1m & \le |\sigma|\, \|\psi_C\|_{L^1((0,1))} +
\|s\sigma{+u^\xi_{x_1}(0)}-u^\xi_{x_1}(s)\|_{L^\infty((0,1))}
\|\psi_C'\|_{L^1((0,1))}
+\Big| \int_0^1 u_{x_1}^\xi(s)\psi_C'(s) \ds\Big|.
\end{split}
\end{equation*}
At this point we recall (\ref{eqestpsiclemmaln}) and that $x_1\in\mathcal C$. Therefore there is a choice of $\sigma\in\{\theta,{\theta-1}\}$ such that
\begin{equation*}
\begin{split}
| \sigma|\ln\frac1m & \le 2|\sigma|+ \frac1{16}\theta
\ln\frac1m+\Big| \int_0^1 u_{x_1}^\xi(s)\psi_C'(s) \ds\Big|.
\end{split}
\end{equation*}
}
{
If $m_2$ is sufficiently small, then $2\le 2^{-4}\ln\frac1m$. For both choices of $\sigma$ we have $\theta\le|\sigma|$. Therefore
\begin{equation*}
\begin{split}
\frac78 \theta\ln\frac1m & \le \Big| \int_0^1 u_{x_1}^\xi(s)\psi_C'(s) \ds\Big|,
\end{split}
\end{equation*}
which concludes the proof of (\ref{eqthetalnxc}).
}
{
Let $\Gstar$ be as in Lemma \ref{lem:costsu2inA} with
$\bar C:=2^{-8}$.
Since $\psi_C'(s)=-\psi_C'(1-s)=-1/s$ on $(m,1/2)$ and $\psi_C'(s)=\psi_C'(1-s)=0$ on $(0,m)$,
using Lemma \ref{lem:costsu2inA}(\ref{lem:costsu2inA46bdryln}) for
$x_1\not\in\Gstar$ we have
\begin{equation*}
\begin{split}
\left|\int_0^1 4u_2((x_1,0)+s\xi)\psi_C'(s) \ds\right|\le &
\int_m^{1/2} \frac4s |u_2((x_1,0)+s\xi)-u_2((x_1,0)+(1-s)\xi)|\ds
\le \frac18\theta\ln\frac1m.
\end{split}
\end{equation*}
Recalling that $u_{x_1}^\xi(s)= (u_1+4u_2)((x_1,0)+s\xi)$, we see that
\begin{equation}\label{eq34gttheatpsic1}
\frac34 \theta\ln\frac1m \le \left|\int_0^1 u_1((x_1,0)+s\xi) \psi_C'(s)\ds\right| \qquad\text{for any $x_1\in\mathcal C\setminus\Gstar$}.
\end{equation}
}
By Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelest}) and $\varepsilon \le\theta^2$ we have $I(u)\ge c\varepsilon \mathcal{L}^1([0,L-\xi_1]\setminus\mathcal C)$.
By Lemma \ref{lem:costsu2inA} and $\varepsilon\le \min\{\theta^2,\mu\theta^2\}$ we have $I(u)\ge c \varepsilon \mathcal{L}^1(\Gstar)$.
Then
\begin{equation*}
I(u)\ge c \varepsilon q, \quad\text{ with }\quad q:=\mathcal{L}^1(\Gstar\cup ([0,L-\xi_1]\setminus\mathcal C)).
\end{equation*}
If $q\ge \frac12 L$ we are done. Otherwise we
pick $x_1\in[0,{q+\frac14}]\cap \mathcal C\setminus\Gstar$. We compute
\begin{equation*}
\begin{split}
\left| \int_0^1 (u_1((x_1,0)+s\xi)-u_1(0,s)) \psi_C'(s)\ds\right|
&\le \int_0^1\int_0^{x_1+\xi_1} |\partial_1 u_1|(t,s) |\psi_C'|(s) \dt\ds\\
& \le \|\psi_C'\|_{L^2((0,1))} (x_1+\frac14)^{1/2} I(u)^{1/2}
\le c m^{-1/2} (q+1)^{1/2} I(u)^{1/2}
\end{split}
\end{equation*}
where we used (\ref{eqestpsiclemmaln}).
If the right-hand side is larger than $\frac14\theta\ln\frac1m$ then
$I(u)\ge c (q+1)^{-1} \theta^2 m\ln^2\frac1m$ and {(\ref{eqstep1energyest}) is proven.} Otherwise, with (\ref{eq34gttheatpsic1}) we obtain
\begin{equation*}
\frac12 \theta\ln\frac1m \le \Big|
\int_0^1 u_1(0,s) \psi_C'(s)\ds\Big|.
\end{equation*}
{
We define $\Psi_C:\mathbb{R}^2\to\mathbb{R}$ by
\begin{equation*}
\Psi_C(x):=\max\{\psi_B(|x-(0,1)|), \psi_B(|x|)\}.
\end{equation*}
One easily checks that $\Psi_C(0,t)=\psi_C(t)$ and
$\Psi_C(t,0)=\Psi_C(t,1)=\psi_B(t)$ for $t\in(0,1)$, $\Psi_C=0$ on the rest of
the boundary of $F_1:=(-1,1)\times(-1,2)\setminus(0,1)^2$. Further, $\|\nabla\Psi_C\|_{L^2(\mathbb{R}^2)}^2\le c\ln\frac1m$.
By Lemma \ref{lemmaystin} with $c_*=2^{-4}$ and $\ell=2L$, either $I(u)\ge c\min\{\mu\theta^2\ln\frac1m, \mu\theta^2\ln(3+\frac\theta\mu),\mu\theta^2\ln(3+L)\}$ and (\ref{eqstep1energyest}) holds, so that we are done, or
\begin{equation*}
\left| \int_0^1 \psi_B'(s)(u_1(s,1)-u_1(s,0)) \ds\right| \le \frac1{16}\theta\ln\frac1m.
\end{equation*}
We
conclude that (recalling Lemma \ref{lemmah12app})
\begin{equation*}
\frac18 \theta\ln\frac1m \le \int_{\partial F_1} u_1 \partial_\tau \Psi_C \dcalH^1
\le c \|\nabla\Psi_C\|_{L^{2}(F_1)} \|\nabla u_1\|_{L^{2}(F_1)}
\le c \ln^{1/2}\frac1m
\mu^{-1/2} I(u)^{1/2}
\end{equation*}
which implies $I(u)\ge c\mu\theta^2\ln \frac1m$ and concludes the proof of (\ref{eqstep1energyest}).
}
{\bf Step 2. Choice of the parameters.}\\
{
We first remark that
\begin{equation}\label{eqminabx1}
\min_{x\ge0} \big[ ax+\frac{b}{x+1}\big] =\min_{x\ge0} \big[a(x+1)+\frac{b}{x+1}\big] -a\ge 2a^{1/2}b^{1/2}-a\ge a^{1/2}b^{1/2}
\quad\text{ whenever $0<a\le b$ }.
\end{equation}
}
{
We use (\ref{eqstep1energyest}) with $m:=\max\{\theta,m_2\frac{\mu^2\theta^2}{\varepsilon}\}\in[\theta,m_2]$.
If the first, the second, or the last term is the smallest, the proof is concluded.
Assume then that the third or the fourth term is the smallest.
We remark that $\varepsilon\le\mu\theta^2$ and $\mu\le\theta$ imply $\varepsilon\le\theta^3$, hence $\varepsilon\le \theta^2m\ln^2\frac1m$. Optimizing
the third term in (\ref{eqstep1energyest})
in $q$ with (\ref{eqminabx1}) leads to
\begin{equation*}
\frac1c I(u)\ge
\min\left\{
{\varepsilon^{1/2}\theta m^{1/2} }\ln\frac1m,
\mu\theta^2 \ln\frac1m
\right\}.
\end{equation*}
}
At this point we distinguish two cases. If $\varepsilon\le m_2\mu^2\theta$, then
$m=m_2\frac{\mu^2\theta^2}{\varepsilon}$; inserting this value we obtain
$I(u)\ge c\mu\theta^2 \ln \frac{\varepsilon}{m_2 \mu^2\theta^2}
\ge c \mu\theta^2 \ln (3+\frac{\varepsilon}{\mu^2\theta^2})$, which concludes the proof.
If instead $m_2\mu^2\theta<\varepsilon$ then $m=\theta$; since in this case
$\varepsilon^{1/2}\theta m^{1/2}=\varepsilon^{1/2}\theta^{3/2}\ge m_2^{1/2}\mu\theta^2$, the above estimate gives
$ I(u)\ge
c\mu\theta^2 \ln\frac1\theta
\ge c\mu\theta^2 \ln(3+\frac1{\theta^2}) $, which also concludes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{propinterp}]
{We first recall
that by Lemma \ref{lem:gammawuerfel}(\ref{lem:gammawuerfelest}) we have, since $\varepsilon\le\theta^2$,
\begin{equation}\label{eqsdfmujsth3}
\frac1c I(u)\ge \mu\theta^2\ln\frac{L-\xi_1+1}{p+1} +\varepsilon p.
\end{equation}
The
interpolation estimate from Lemma \ref{lemmainterpolationestim} gives
\begin{equation}\label{eqinterpprop}
I(u)\ge c\min\{\varepsilon^{1/2}\theta^{3/2},\mu\theta^2 p, \theta^2 p, \varepsilon L\}.
\end{equation}
We treat the four cases separately.}
If the minimum in (\ref{eqinterpprop}) is $\varepsilon L$, we are done.
If the minimum in (\ref{eqinterpprop}) is $\mu\theta^2p$, then,
as in (i) in the proof of Prop. \ref{lem:logsammelnthetasmall},
\begin{equation*}
\frac1c I(u)\ge \min_{p'\in[0,L-\xi_1]} \big(\mu \theta^2p' +\mu\theta^2\ln\frac{L-\xi_1+1}{p'+1}\big)
=\mu\theta^2\ln(L-\xi_1+1)\ge c \mu\theta^2\ln(3+L),
\end{equation*}
and we are done.
If the minimum in (\ref{eqinterpprop}) is $\theta^2p$, then {we can assume $\mu\ge 1$ and,}
as in (ii) in the proof of Prop. \ref{lem:logsammelnthetasmall},
\begin{equation*}
\frac1c I(u)\ge \min_{p'\in[0,L-\xi_1]} \left( \theta^2p'+\mu\theta^2\ln\frac{L-\xi_1+1}{p'+1}\right)
\ge c\min\left\{\mu\theta^2\ln(3+L),\mu\theta^2\ln(3+\frac L\mu), {\theta^2L}\right\}.
\end{equation*}
{Also in this case we are done. Indeed,
recalling $\varepsilon\le \theta^2$, we have
{$\theta^2L\ge \varepsilon L$ and}
$\varepsilon^{1/2}\theta^{3/2}\le \theta^2\le \mu\theta^2$,
$\frac{\varepsilon L}{\mu\theta^2}\le \frac{L}{\mu}$ and $\frac{\varepsilon}{\mu^2\theta^2}\le \frac{1}{\mu^2}\le 1$, so that
$\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})+\mu\theta^2\ln(3+\frac{\varepsilon}{\mu^2\theta^2})+\varepsilon^{1/2}\theta^{3/2}
\le 4 \mu\theta^2\ln(3+\frac{L}{\mu})\le 4c I(u)$}.
We are left with the case that (\ref{eqinterpprop}) states $I(u)\ge c \varepsilon^{1/2}\theta^{3/2}$.
We now show that (\ref{eqsdfmujsth3}) implies $I(u)\ge c \min\{\varepsilon L, \mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}),\mu\theta^2\ln(3+L)\}$. Indeed,
the minimum of the expression in the right-hand side of (\ref{eqsdfmujsth3}) is attained at $p=0$, or at $p=L-\xi_1$, or at
$p+1=\mu\theta^2/\varepsilon$.
If it is at $p=0$ then $I(u)\ge c\mu\theta^2\ln (L-\xi_1+1)\ge c\mu\theta^2\ln(3+L)$ and the proof is concluded.
If it is at some $p\ge\frac15 L$ then $I(u)\ge c \varepsilon p\ge c\varepsilon L$ and the proof is concluded.
We are left with the case that
the first term
is at least $\mu\theta^2$ and $p+1=\mu\theta^2/\varepsilon$. Then, recalling the previous result from the interpolation estimate, we have
\begin{equation}\label{eqendpf381}
\frac1c I(u)\ge \varepsilon^{1/2}\theta^{3/2}+
\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2}).
\end{equation}
Let $m_2$ be as in Lemma \ref{lemmabdryln}. {We next show that we can assume (with a constant $c$ depending on $m_2$) that
\begin{equation}\label{eqendpf382}
\frac1c I(u)\ge \min\left\{
\mu\theta^2 \ln(3+\frac1{\theta^2}),
\mu\theta^2\ln(3+\frac\theta\mu),
{\mu\theta^2}\ln (3+\frac{\varepsilon}{\mu^2\theta^2})
\right\}.
\end{equation}}
If at least one of
$m_2\le\theta$,
$\theta\le \mu$,
$\varepsilon\le\mu^2\theta^2$ holds then
{the minimum in (\ref{eqendpf382}) is below $c\mu\theta^2$ and (\ref{eqendpf382}) follows from (\ref{eqendpf381}).
If instead} $\theta<m_2$, $\mu<\theta$, $\mu^2\theta^2<\varepsilon$ then {Lemma \ref{lemmabdryln} shows that
either $\frac1c I(u)\ge\min\{\varepsilon L, \mu\theta^2\ln(3+L)\}$, and we are done, or (\ref{eqendpf382}) holds.}
{It remains to show that (\ref{eqendpf381}) and (\ref{eqendpf382}) imply the assertion.
This is clear if $\varepsilon\le \mu^2$, since in this case the first term in (\ref{eqendpf382}) is not relevant
{and $\varepsilon^{1/2}\theta^{3/2}\ge0$.}
If instead $\mu^2<\varepsilon$, {then $\mu^2\theta^2<\varepsilon$ and therefore $(\frac{\mu^2\theta^2}{\varepsilon})^{1/4}\ln(3+\frac{\varepsilon}{\mu^2\theta^2})\le C$. This implies
$\mu\theta^2\ln(3+\frac{\varepsilon}{\mu^2\theta^2})\le \mu^{1/2}\theta^{3/2}\varepsilon^{1/4}C\le C \varepsilon^{1/2}\theta^{3/2}$,} so that
(\ref{eqendpf381}) concludes the proof.
}
\end{proof}
{
\begin{rmrk}
In the last step of the proof, we simply dropped the term $\varepsilon^{1/2}\theta^{3/2}$ from the expression $\mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) +\varepsilon^{1/2}\theta^{3/2}$. We note that this does not change the scaling behaviour of our lower bound;
we distinguish two possibilities. If $\varepsilon \leq\mu^2\theta$, then $\varepsilon^{1/2}\theta^{3/2}\leq \mu\theta^2$, and we have
\[\mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu)\leq \mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) +\varepsilon^{1/2}\theta^{3/2}\leq 2\left( \mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) \right).\]
Otherwise, if $\varepsilon>\mu^2\theta$, we have as in the proof of the upper bound (Theorem \ref{th:upperbound} (d))
\begin{eqnarray*}
\mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) +\varepsilon^{1/2}\theta^{3/2}\geq \mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) \geq c \mu\theta^2\ln(3+L).
\end{eqnarray*}
\end{rmrk}
}
\subsection{Conclusion of the lower bound}\label{sec:lbconclusion}
We finally bring together the bounds proven in the previous sections to obtain the desired lower bound.
\begin{thrm}[Lower bound] \label{th:lowerbound}
For any $\varepsilon>0$, $\mu>0$, $L\ge \frac12$, $\theta\in (0,\frac12]$ and {any $u\in\mathcal{X}$} we have
\begin{equation*}
\begin{split}
I(u)\ge
c\mathcal{I}({\mu,\varepsilon,\theta,L}),
\end{split}
\end{equation*}
where $\mathcal I$ was defined in Theorem \ref{th:main}.
\end{thrm}
\begin{proof}
{We distinguish several cases. }
\begin{enumerate}
\item Assume {that} at least one of $\theta^2\le \varepsilon$ and $\mu\theta^2\le \varepsilon$ holds.
Then Proposition \ref{lem:logsammelnthetasmall}
gives
\begin{equation*}
I(u)\ge c \min\left\{\mu\theta^2\ln (3+L), \theta^2L, \mu\theta^2\ln (3+\frac L\mu)+\varepsilon\theta\right\},
\end{equation*}
and the proof is concluded.
\item
From now on we have $\varepsilon\le\theta^2$ and $\varepsilon\le\mu\theta^2$.
We start by applying
Proposition \ref{propinterp}, see also the remark afterwards. This
gives that
\begin{eqnarray*}
I(u)\ge c \min\left\{\varepsilon L,\mu\theta^2 \ln(3+L), \mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\varepsilon^{1/2}\theta^{3/2} +
\mu\theta^2\ln (3+\frac{\varepsilon}{\mu^2\theta^2}),
\mu\theta^2\ln (3+\frac{\varepsilon L}{\mu\theta^2}) +
\mu\theta^2\ln(3+\frac\theta\mu) \right\}.
\end{eqnarray*}
Note that the proof is concluded
unless the first term is the smallest.
Assume now that
\begin{equation}\label{eqtheolbepsl}
I(u)\ge c \varepsilon L.
\end{equation}
If $\theta^2\le \varepsilon (2L)^2$, then
\begin{equation*}
\frac1c I(u)\ge 2 \varepsilon L
=\varepsilon L + \varepsilon^{2/3}L^{1/3}(\varepsilon L^2)^{1/3}
\ge \varepsilon L + \frac12 \varepsilon^{2/3}\theta^{2/3}L^{1/3}
\end{equation*}
concludes the proof. \\[.2cm]
In the following we can assume {that} $\varepsilon (2L)^2<\theta^2$ and $\varepsilon\le\mu\theta^2$ and (\ref{eqtheolbepsl}) {hold}.
We distinguish more subcases, depending on the competition between the interfacial energy and the austenite elasticity. Specifically, the critical condition is whether
\begin{equation}\label{eqellessmutheta}
{2\varepsilon L}\le \mu\theta^2 \min\left\{\ln(3+\frac{\varepsilon}{{2\mu^3\theta^2L}}),
\ln(3+\frac{1}{\theta^2})\right\}
\end{equation}
holds or not. \\
Assume that (\ref{eqellessmutheta}) does not hold. Then
\begin{equation*}
2\varepsilon L \ge \min\left\{
\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac 1 {\theta^{2}}))^{1/2}+\varepsilon L,
\mu^{1/2}\varepsilon^{1/2}\theta L^{1/2} (\ln (3+ \frac{\varepsilon}{ \mu^{3}\theta^{2}{L}}))^{1/2}+\varepsilon L\right\}
\end{equation*}
and with (\ref{eqtheolbepsl}) the proof is concluded.\\[.2cm]
Finally, assume that (\ref{eqellessmutheta}) holds. In this situation
we can use Proposition \ref{lem:lbbranching} with $\ell=2L$. Recalling (\ref{eqtheolbepsl}), this gives
\begin{equation*}
\begin{split}
\frac1c I(u) \geq \varepsilon L + \min
\Big{\{}
&{ \mu \theta^2 \ln(3+L)},
{\mu\theta^2\ln(3+\frac\theta\mu)},
{\varepsilon^{2/3} \theta^{2/3}L^{1/3}},\\
&\mu^{1/2}\varepsilon^{1/2}
{ \theta}L^{1/2} (\ln (3+ \frac 1 {\theta^2}))^{1/2}, \mu^{1/2}\varepsilon^{1/2}{\theta} L^{1/2} (\ln (3+ \frac \varepsilon {\mu^3\theta^2{L}}))^{1/2}
\Big{\}}.
\end{split}
\end{equation*}
We remark that the term $\mu\theta^2\ln(3+\frac\theta\mu)$ can be dropped. Indeed, if $\theta\ge \mu L$ then it is larger than $\mu\theta^2\ln(3+L)$. If instead $\theta< \mu L$, then it is equivalent to the term $\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2}) + \mu\theta^2\ln(3+\frac\theta\mu)+\varepsilon^{1/2}\theta^{3/2}$, which we already included above. Indeed, in this case
$\varepsilon^{1/2}\theta^{3/2}\le \varepsilon^{1/2}L^{1/2}\mu^{1/2}\theta\le \varepsilon L+\mu\theta^2$
and, using $\ln(3+x)\le 2+x$ in the first term,
\[
\mu\theta^2\ln(3+\frac{\varepsilon L}{\mu\theta^2})
+ \mu\theta^2\ln(3+\frac\theta\mu) + \varepsilon^{1/2}\theta^{3/2}
\le 2\mu\theta^2+\varepsilon L+ \mu\theta^2\ln(3+\frac\theta\mu)+\varepsilon L +\mu\theta^2
\le 4 \left( \mu\theta^2\ln(3+\frac\theta\mu)+\varepsilon L\right).
\]
This concludes the proof.
\end{enumerate}
\end{proof}
\end{document}
\begin{document}
\title{Node Multiway Cut and Subset Feedback Vertex Set on Graphs of Bounded Mim-width}
\author[1,*]{Benjamin Bergougnoux}
\author[2]{Charis Papadopoulos}
\author[1]{Jan Arne Telle}
\affil[1]{University of Bergen, Norway}
\affil[2]{University of Ioannina, Greece}
\affil[*]{Corresponding author: [email protected]}
\maketitle
\thanks{An extended abstract appeared in the proceedings of 46th International Workshop on Graph-Theoretic Concepts in Computer Science, WG 2020.}
\begin{abstract}
The two weighted graph problems \textsc{Node Multiway Cut} (NMC) and \textsc{Subset Feedback Vertex Set} (SFVS)
both ask for a vertex set of minimum total weight that, in the case of NMC,
disconnects a given set of terminals, and, in the case of SFVS, intersects all cycles containing a vertex of a given set.
We design a meta-algorithm that solves both problems in time $2^{O(rw^3)}\cdot n^{4}$, $2^{O(q^2\log(q))}\cdot n^{4}$, and $n^{O(k^2)}$, where $rw$ is the rank-width, $q$ the $\mathbb{Q}$-rank-width, and $k$ the mim-width of a given decomposition.
This answers in the affirmative an open question raised by Jaffke et~al.~(Algorithmica, 2019) concerning an \textsf{XP}\xspace algorithm for SFVS parameterized by mim-width.
By a unified algorithm, this solves both problems in polynomial time on the following graph classes:
\textsc{Interval}, \textsc{Permutation}, and \textsc{Bi-Interval} graphs,
\textsc{Circular Arc} and \textsc{Circular
Permutation} graphs, \textsc{Convex} graphs, \textsc{$k$-Polygon},
\textsc{Dilworth-$k$} and \textsc{Co-$k$-Degenerate} graphs for fixed
$k$; and also on \textsc{Leaf Power} graphs if a leaf root is given as input,
on \textsc{$H$-Graphs} for fixed $H$ if an $H$-representation is given as input,
and on arbitrary powers of graphs in all the above classes.
Prior to our results, tractability was known only for SFVS, and only restricted to \textsc{Interval} and \textsc{Permutation} graphs; all other results are new.
\end{abstract}
\keywords{Subset feedback vertex set, node multiway cut, neighbor equivalence, rank-width, mim-width.}
\section{Introduction }
Given a vertex-weighted graph $G$ and a set $S$ of its vertices, the \textsc{Subset Feedback Vertex Set} (SFVS) problem asks for a vertex set of minimum weight
that intersects all cycles containing a vertex of $S$.
SFVS was introduced by Even et~al.~\cite{EvenNZ00} who proposed an $8$-approximation algorithm.
Cygan et~al.~\cite{CyganPPW13} and Kawarabayashi and Kobayashi \cite{KK12} independently
showed that SFVS is fixed-parameter tractable (\textsf{FPT}\xspace) parameterized by the solution size, while Hols and Kratsch \cite{HolsK18} provided a randomized polynomial kernel for the problem.
As a generalization of the classical \textsf{NP}\xspace-complete \textsc{Feedback Vertex Set} (FVS) problem, for which $S=V(G)$, there has been a considerable amount of work to obtain faster algorithms for SFVS, both for general graphs~\cite{ChitnisFLMRS17,FominGLS16} where the current best is an $O^*(1.864^{n})$ algorithm due to Fomin et~al.~\cite{FominHKPV14}, and restricted to special graph classes~\cite{BergougnouxBBK20,GolovachHKS14,PapadopoulosT20,PapadopoulosT19}.
Naturally, FVS and SFVS differ in complexity, as exemplified by
split graphs where FVS is polynomial-time solvable \cite{fvs:chord:corneil:1988} whereas SFVS remains \textsf{NP}\xspace-hard \cite{FominHKPV14}.
Moreover, note that the vertex-weighted variant of SFVS behaves differently from the unweighted one, as witnessed by graphs of bounded independent set size: weighted SFVS is \textsf{NP}\xspace-complete on graphs with independent set size at most four, whereas unweighted SFVS is in \textsf{XP}\xspace parameterized by the independent set size \cite{PapadopoulosT20}.
Closely related to SFVS is the \textsf{NP}\xspace-hard \textsc{Node Multiway Cut} (NMC) problem in which we are given a
vertex-weighted graph $G$ and a set $T$ of (terminal) vertices, and asked to find a vertex set of minimum weight that disconnects all the terminals \cite{Calinescu08,GargVY04}.
NMC is a well-studied problem in terms of approximation \cite{GargVY04}, as well as parameterized algorithms \cite{Calinescu08,CLL09,ChitnisFLMRS17,CPPW13,DahlhausJPSY94,FominHKPV14}.
It is not difficult to see that SFVS for $S = \{v\}$ coincides with NMC in which $T=N(v)$.
In fact, NMC reduces to SFVS by adding a single vertex $v$ with a large weight that is adjacent to all terminals
and setting $S=\{v\}$ \cite{FominHKPV14}.
Thus, in order to solve NMC on a given graph one may apply a known algorithm for SFVS on a vertex-extended graph.
Observe, however, that through such an approach one needs to clarify that the vertex-extended graph still obeys the necessary properties of the known algorithm for SFVS.
This explains why most of the positive results on SFVS on graph families \cite{GolovachHKS14,PapadopoulosT19,PapadopoulosT20} can not be translated to NMC.
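The reduction just described is easy to make concrete. The following sketch (our own illustration with a dictionary-based graph representation; the names are ours, not from the cited works) builds the SFVS instance from an NMC instance:

```python
def nmc_to_sfvs(adj, weight, terminals):
    """Reduce Node Multiway Cut to Subset Feedback Vertex Set.

    adj:       dict mapping each vertex to the set of its neighbours
    weight:    dict mapping each vertex to a positive weight
    terminals: iterable of terminal vertices

    Returns (adj2, weight2, S): the graph extended by one auxiliary
    vertex of prohibitively large weight adjacent to all terminals,
    the extended weights, and the singleton set S for SFVS.
    """
    aux = object()  # fresh vertex, guaranteed not to collide with adj
    adj2 = {v: set(nb) for v, nb in adj.items()}
    adj2[aux] = set(terminals)
    for t in terminals:
        adj2[t].add(aux)
    weight2 = dict(weight)
    # any solution cheaper than this weight must avoid aux
    weight2[aux] = 1 + sum(weight.values())
    return adj2, weight2, {aux}
```

Every cycle through the auxiliary vertex corresponds to a path between two terminals in the original graph, so a vertex set hitting all cycles through it disconnects the terminals.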
In this paper, we investigate the complexity of SFVS and NMC when parameterized by structural graph width parameters.
Well-known graph width parameters include tree-width \cite{Bodlaender06}, clique-width \cite{CourcelleO00}, rank-width \cite{Oum05a}, and maximum induced matching width (\emph{a.k.a.} mim-width) \cite{Vatshelle12}.
These are of varying strength: tree-width has modeling power strictly weaker than clique-width, since it is bounded on a proper subset of the graph classes having bounded clique-width; rank-width and clique-width have the same modeling power; and mim-width is much stronger than clique-width. Belmonte and Vatshelle~\cite{BelmonteV13} showed that several graph classes, like interval graphs and permutation graphs, have bounded mim-width and a decomposition witnessing this can be found in polynomial time, whereas it is known that the clique-width of such graphs can be proportional to the square root of the number of vertices \cite{GolumbicR00}.
In this way, an \textsf{XP}\xspace algorithm parameterized by mim-width has the feature of unifying several algorithms on well-known graph classes.
We obtain most of these parameters through the well-known notion of \emph{branch-decomposition} introduced in \cite{RobertsonS91}.
This is a natural hierarchical clustering of $G$, represented as a subcubic tree $T$ with the vertices of $G$ at its leaves.
Any edge of the tree defines a cut of $G$ given by the leaves of the two subtrees that result from removing the edge from $T$.
Judiciously choosing a cut-function to measure the complexity of such cuts, or rather of the bipartite subgraphs of $G$ given by the edges crossing the cuts, this framework then defines a graph width parameter by a minmax relation, minimum over all trees and maximum over all its cuts.
Several graph width parameters have been defined this way, like carving-width, maximum matching-width, boolean-width etc.
We will in this paper focus on: (i) rank-width~\cite{Oum05a} whose cut function is the $GF[2]$-rank of the adjacency matrix, (ii) $\mathbb{Q}$-rank-width~\cite{OumSV13} a variant of rank-width with interesting algorithmic properties which instead uses the rank over the rational field, and (iii) mim-width~\cite{Vatshelle12} whose cut function is the size of a maximum induced matching of the graph crossing the cut.
Concerning their computation, for rank-width and $\mathbb{Q}$-rank-width there are $2^{3k}\cdot n^{4}$ time algorithms that, given a graph $G$ as input and $k\in \mathbb{N}$, either output a decomposition for $G$ of width at most $3k+1$ or confirm that the width of $G$ is more than $k$ \cite{OumSV13,OumS06}. However, it is not known whether the mim-width of a graph can be approximated within a constant factor in time $n^{f(k)}$ for some function $f$.
Let us mention what is known regarding the complexity of NMC and SFVS parameterized by these width measures.
Since these problems can be expressed in MSO$_1$-logic it follows that
they are \textsf{FPT}\xspace parameterized by tree-width, clique-width, rank-width or $\mathbb{Q}$-rank-width \cite{CourcelleMR00,Oum09a}, however the runtime will contain a tower of 2's with more than $4$ levels.
Recently, Bergougnoux et al.~\cite{BergougnouxBBK20} proposed $k^{O(k)}\cdot n^{3}$ time algorithms for these two problems parameterized by tree-width and proved that they cannot be solved in time $k^{o(k)}\cdot n^{O(1)}$ unless ETH fails.
For mim-width, we know that FVS and thus SFVS are both \textsf{W}[1]-hard when parameterized by the mim-width of a given decomposition \cite{JaffkeKT20}.
Attacking SFVS seems to be a harder task, requiring more tools than FVS does.
Even for very small values of mim-width that capture several graph classes, the tractability of SFVS, prior to our result, was left open besides interval and permutation graphs \cite{PapadopoulosT19}. Although FVS was known to be tractable on such graphs for more than a decade \cite{KratschMT08}, the complexity status of SFVS still remained unknown.
\paragraph{\bf Our results.}
We design a meta-algorithm that, given a graph and a branch-decomposition, solves SFVS (or NMC via its reduction to SFVS).
The runtime of this algorithm is upper bounded by $2^{O(rw^3)}\cdot n^4$, $2^{O(q^2\log(q))}\cdot n^4$ and $n^{O(k^2)}$ where $rw$, $q$ and $k$ are the rank-width, the $\mathbb{Q}$-rank-width and the mim-width of the given branch-decomposition.
For clique-width, our meta-algorithm implies that we can solve SFVS and NMC in time $2^{O(k^2)}\cdot n^{O(1)}$ where $k$ is the clique-width of a given clique-width expression.
However, we do not prove this as it is not asymptotically optimal; indeed, Jacob et~al.~\cite{JacobBDP21} recently showed that SFVS and NMC are solvable in time $2^{O(k\log k )}\cdot n$ given a clique-width expression.
We resolve in the affirmative the question raised by Jaffke et~al.~\cite{JaffkeKT20}, also mentioned in \cite{PapadopoulosT19} and \cite{PapadopoulosT20},
asking whether there is an \textsf{XP}\xspace-time algorithm for SFVS parameterized by the mim-width of a given decomposition. For rank-width and $\mathbb{Q}$-rank-width
we provide the first explicit \textsf{FPT}\xspace-algorithms with low exponential dependency that avoid the MSO$_1$ formulation.
Our main results are summarized in the following theorem:
\begin{theorem}\label{theo:intromain}
Let $G$ be a graph on $n$ vertices.
We can solve \textsc{Subset Feedback Vertex Set} and \textsc{Node Multiway Cut} in time $2^{O(rw^3)}\cdot n^4$ and $2^{O(q^2\log(q))}\cdot n^4$, where $rw$ and $q$ are the rank-width and the $\mathbb{Q}$-rank-width of $G$, respectively.
Moreover, if a branch-decomposition of mim-width $k$ for $G$ is given as input, we can solve \textsc{Subset Feedback Vertex Set} and \textsc{Node Multiway Cut} in time $n^{O(k^2)}$.
\end{theorem}
Recall that it is not known whether the mim-width of a graph can be approximated within a constant factor in time $n^{f(k)}$ for some function $f$.
However, by the previously mentioned results of Belmonte and Vatshelle \cite{BelmonteV13} on computing decompositions of bounded mim-width, combined with a result of \cite{JaffkeKST19tcs} showing that for any positive integer $r$ a decomposition of mim-width $k$ of a graph $G$ is also a decomposition of mim-width at most $2k$ of its power $G^r$, we get the following corollary.
\begin{corollary}\label{cor:mainintro}
We can solve \textsc{Subset Feedback Vertex Set} and \textsc{Node Multiway Cut} in polynomial time on \textsc{Interval}, \textsc{Permutation}, and \textsc{Bi-Interval} graphs,
\textsc{Circular Arc} and \textsc{Circular
Permutation} graphs, \textsc{Convex} graphs, \textsc{$k$-Polygon},
\textsc{Dilworth-$k$} and \textsc{Co-$k$-Degenerate} graphs for fixed
$k$, and on arbitrary powers of graphs in any of these classes.
\end{corollary}
Previously, such polynomial-time tractability was known only for SFVS and only on \textsc{Interval} and \textsc{Permutation} graphs \cite{PapadopoulosT19}.
It is worth noticing that Theorem \ref{theo:intromain} implies also that we can solve \textsc{Subset Feedback Vertex Set} and \textsc{Node Multiway Cut} in polynomial time on \textsc{Leaf Power} if an intersection model is given as input (from which we can compute a decomposition of mim-width 1) \cite{BelmonteV13,JaffkeKST19tcs} and on \textsc{$H$-Graphs} for a fixed $H$ if an $H$-representation is given as input (from which we can compute a decomposition of mim-width $2|E(H)|$) \cite{FominGR18}.
\paragraph{\bf Our approach.}
We give some intuition to our meta-algorithm, that will focus on \textsc{Subset Feedback Vertex Set}.
Indeed, NMC can be solved by adding a vertex $v$ of large weight adjacent to all terminals and solving SFVS with $S=\{v\}$, all within the same runtime, since extending the given branch-decomposition to this new graph increases the width by at most one for all considered width measures.
Towards achieving our goal, we use the $d$-neighbor equivalence, with $d=1$ and $d=2$, a notion introduced by Bui-Xuan et~al.~\cite{BuiXuanTV13}.
Two subsets $X$ and $Y$ of $A \subseteq V(G)$ are \emph{$d$-neighbor equivalent} w.r.t.\ $A$, if $\operatorname{min}(d,|X\cap N(u)|) = \operatorname{min}(d,|Y\cap N(u)|)$ for all $u\in V(G) \setminus A$.
For a cut $(A, \comp{A})$ this equivalence relation on subsets of vertices was used by Bui-Xuan et~al.~\cite{BuiXuanTV13} to design a meta-algorithm, also giving \textsf{XP}\xspace algorithms by mim-width, for so-called $(\sigma, \rho)$ generalized domination problems.
Recently, Bergougnoux and Kant\'e \cite{BergougnouxK19esa} extended the uses of this notion to acyclic and connected variants of $(\sigma, \rho)$ generalized domination and similar problems like FVS. An earlier \textsf{XP}\xspace algorithm for FVS parameterized by mim-width had been given by Jaffke et~al.~\cite{JaffkeKT20} but instead of the $d$-neighbor equivalences this algorithm was based on the notions of reduced forests and minimal vertex covers.
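The definition of $d$-neighbor equivalence translates directly into a check; the following small sketch (our own naming, with vertex sets as Python sets) may help fix the notation:

```python
def d_neighbor_equivalent(d, X, Y, A, vertices, adj):
    """Return True iff X, Y (subsets of A) are d-neighbor equivalent
    w.r.t. A, i.e. min(d, |X ∩ N(u)|) == min(d, |Y ∩ N(u)|) holds
    for every vertex u outside A."""
    return all(
        min(d, len(X & adj[u])) == min(d, len(Y & adj[u]))
        for u in vertices - A
    )
```

For example, on a star with center $c$ and leaves $1,2,3$, taking $A=\{1,2,3\}$, the sets $\{1,2\}$ and $\{1\}$ are $1$-neighbor equivalent but not $2$-neighbor equivalent w.r.t.\ $A$.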
Our meta-algorithm does a bottom-up traversal of a given branch-decomposition of the input graph $G$, computing a vertex subset $X$ of maximum weight that induces an \emph{$S$-forest} ({i.e.}\xspace, a graph where no cycle contains a vertex of $S$) and outputs $V(G) \setminus X$ which is necessarily a solution of SFVS.
As usual, our dynamic programming algorithm relies on a notion of representativity between sets of partial solutions.
For each cut $(A,\comp{A})$ induced by the decomposition, our algorithm computes a set of partial solutions $\mathcal{A} \subseteq 2^{A}$ of small size that represents $2^{A}$. We say that a set of partial solutions $\mathcal{A}\subseteq 2^{A}$ represents a set of partial solutions $\mathcal{B}\subseteq 2^{A}$, if, for each $Y\subseteq \comp{A}$, we have $\operatorname{best}(\mathcal{A},Y)=\operatorname{best}(\mathcal{B},Y)$ where $\operatorname{best}(\mathcal{C},Y)$ is the maximum weight of a set $X\in\mathcal{C}$ such that $X\cup Y$ induces an $S$-forest.
Since the root of the decomposition is associated with the cut $(V(G),\varnothing)$, the set of partial solutions computed for this cut represents $2^{V(G)}$ and thus contains an $S$-forest of maximum weight.
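For intuition on the objects being represented: verifying that a given graph is an $S$-forest is itself simple, since a vertex $s$ lies on a cycle exactly when two of its neighbours remain connected after deleting $s$. A sketch (our own code, not a component of the meta-algorithm):

```python
def is_s_forest(adj, S):
    """Return True iff no cycle of the graph passes through a vertex
    of S (adj: dict vertex -> set of neighbours, simple undirected)."""
    def reachable(start, banned):
        # vertices reachable from `start` in the graph minus `banned`
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w != banned and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    for s in S:
        nbs = list(adj[s])
        for i, u in enumerate(nbs):
            comp = reachable(u, banned=s)
            if any(w in comp for w in nbs[i + 1:]):
                return False  # two neighbours of s meet in G - s
    return True
```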
Our main tool is a subroutine that, given a set of partial solutions $\mathcal{B}\subseteq 2^{A}$, outputs a subset $\mathcal{A}\subseteq \mathcal{B}$ of small size that represents $\mathcal{B}$.
To design this subroutine, we cannot directly use the approaches of the earlier algorithms solving FVS, such as \cite{BergougnouxK19esa} or \cite{JaffkeKT20}.
This is due to the fact that $S$-forests behave quite differently than forests; for example, given an $S$-forest $F$, the graph induced by the edges between $A\cap V(F)$ and $\comp{A}\cap V(F)$ could be a biclique.
Instead, we introduce a notion of vertex contractions and prove that, for every $X\subseteq A$ and $Y\subseteq \comp{A}$, the graph induced by $X\cup Y$ is an $S$-forest if and only if there exist partitions of $X \setminus S$ and of $Y \setminus S$, satisfying certain properties, such that contracting the blocks of these partitions into single vertices transforms the $S$-forest into a forest.
This equivalence between $S$-forests in the original graph and forests in the contracted graphs allows us to adapt some ideas from \cite{BergougnouxK19esa} and \cite{JaffkeKT20}.
Most importantly, we use the property that, if the mim-width of the given decomposition is $mim$, then the contracted graph obtained from the bipartite graph induced by $X$ and $Y$ admits a vertex cover $\mathsf{VC}$ of size at most $4mim$.
Note, however, that in our case the elements of $\mathsf{VC}$ are contracted subsets of vertices. Such a vertex cover allows us to control the cycles which cross the cut.
We associate each possible vertex cover $\mathsf{VC}$ with an index $i$ which contains basically a representative for the 2-neighbor equivalence for each subset of vertices in $\mathsf{VC}$.
Moreover, for each index $i$, we introduce the notions of partial solutions and complement solutions associated with $i$ which correspond, respectively, to subsets $X\subseteq A$ and subsets $Y\subseteq \comp{A}$ such that, for some contractions of $X$ and $Y,$ the contracted graph obtained from the bipartite graph induced by $X$ and $Y$ admits a vertex cover $\mathsf{VC}$ associated with $i$.
We define an equivalence relation $\sim_i$ between the partial solutions associated with $i$ such that $X\sim_i W$ if $X$ and $W$ connect in the same way the representatives of the vertex sets which belong to the vertex covers described by $i$.
Given a set of partial solutions $\mathcal{B}\subseteq 2^{A}$, our subroutine outputs a set $\mathcal{A}$ that contains, for each index $i$ and each equivalence class $\mathcal{C}$ of $\sim_i$ over $\mathcal{B}$, a partial solution in $\mathcal{C}$ of maximum weight.
In order to prove that $\mathcal{A}$ represents $\mathcal{B}$, we show that:
\begin{itemize}
\item for every $S$-forest $F$, there exists an index $i$ such that $V(F)\cap A$ is a partial solution associated with $i$ and $V(F)\cap \comp{A}$ is a complement solution associated with $i$.
\item if $X\sim_i W$, then, for every complement solution $Y$ associated with $i$, the graph induced by $X\cup Y$ is an $S$-forest if and only if $W\cup Y$ induces an $S$-forest.
\end{itemize}
The number of indices $i$ is upper bounded by $2^{O(q^2\log(q))}$, $2^{O(rw^3)}$ and $n^{O(mim^2)}$.
This follows from the known upper-bounds on the number of 2-neighbor equivalence classes and the fact that the vertex covers we consider have size at most $4mim$.
Since there are at most $(4mim)^{4mim}$ ways of connecting $4mim$ vertices and $rw,q\geqslant mim$, we deduce that the size of $\mathcal{A}$ is upper bounded by $2^{O(q^2\log(q))}$, $2^{O(rw^3)}$ and $n^{O(mim^2)}$.
To the best of our knowledge, this is the first time a dynamic programming algorithm parameterized by graph width measures uses this notion of vertex contractions.
Note that, in contrast to the meta-algorithms in \cite{BergougnouxK19esa,BuiXuanTV13}, the number of representatives (for the $d$-neighbor equivalence) contained in the indices of our meta-algorithm is not upper bounded by a constant but by $4mim$.
This explains the differences between the runtimes in Theorem~\ref{theo:intromain} and those obtained in~\cite{BergougnouxK19esa,BuiXuanTV13}, i.e. $n^{O(mim^2)}$ versus $n^{O(mim)}$.
However, for the case $S=V(G)$, thus solving FVS, our meta-algorithm has runtime $n^{O(mim)}$, matching the FVS algorithms of~\cite{BergougnouxK19esa, JaffkeKT20}.
We do not expect that SFVS can be solved as fast as FVS when parameterized by graph width measures.
In fact, we know that it is not the case for tree-width as FVS can be solved in $2^{O(k)} \cdot n$~\cite{BodlaenderCKN15} but SFVS cannot be solved in $k^{o(k)} \cdot n^{O(1)}$ unless ETH fails~\cite{BergougnouxBBK20}.
\section{Preliminaries}
The size of a set $V$ is denoted by $|V|$ and its power set is denoted by $2^V$.
We write $A\setminus B$ for the set difference of $A$ from $B$.
We let $\operatorname{min} (\varnothing)= +\infty$ and $\operatorname{max}(\varnothing)=-\infty$.
\paragraph{\bf Graphs}
The vertex set of a graph $G$ is denoted by $V(G)$ and its edge set by $E(G)$.
An edge between two vertices $x$ and $y$ is denoted by $xy$ (or $yx$).
Given $\mathcal{S}\subseteq 2^{V(G)}$, we denote by $V(\mathcal{S})$ the set $\bigcup_{S\in\mathcal{S}}S$.
For a vertex set $U \subseteq V(G)$, we denote by $\comp{U}$ the set $V(G) \setminus U$.
The set of vertices that are adjacent to $x$ is denoted by $N_G(x)$, and for $U\subseteq V(G)$, we let $N_G(U)=\left(\cup_{v\in U}N_G(v)\right)\setminus U$.
The subgraph of $G$ induced by a subset $X$ of its vertex set is denoted by $G[X]$.
For two disjoint subsets $X$ and $Y$ of $V(G)$, we denote by $G[X,Y]$ the bipartite graph with vertex set $X\cup Y$ and edge set $\{xy \in E(G)\mid x\in X \text{ and } \ y\in Y \}$. We denote by $M_{X,Y}$ the adjacency matrix between $X$ and $Y$, {i.e.}\xspace, the $(X,Y)$-matrix such that $M_{X,Y}[x,y]=1$ if $y\in N(x)$ and 0 otherwise.
A \emph{vertex cover} of a graph $G$ is a set of vertices $\mathsf{VC}\subseteq V(G)$ such that, for every edge $uv\in E(G)$, we have $u\in\mathsf{VC}$ or $v\in \mathsf{VC}$.
A \emph{matching} is a set of edges no two of which share an endpoint,
and an \emph{induced matching} is a matching $M$ such that $G[V(M)]$ contains no edges besides those of $M$.
The \emph{size of an induced matching} $M$ refers to the number of edges in $M$.
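To make the notion of induced matching concrete, the following brute-force Python sketch computes its maximum size. This is purely illustrative (exponential time, tiny graphs only) and not part of the algorithms in this paper; the adjacency-dict encoding is our own convention.

```python
from itertools import combinations

def max_induced_matching(adj):
    """Size of a maximum induced matching of a graph given as an
    adjacency dict {vertex: set of neighbors}; brute force over
    edge subsets, so usable only on tiny graphs."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    best = 0
    for k in range(1, len(edges) + 1):
        for M in combinations(edges, k):
            verts = {x for e in M for x in e}
            # M is induced iff its endpoints are pairwise distinct and
            # induce no edges besides M itself
            induced = {(u, v) for u in verts for v in adj[u]
                       if v in verts and u < v}
            if len(verts) == 2 * k and induced == set(M):
                best = k
    return best
```

For instance, a path on five vertices has a maximum induced matching of size $2$ (its two pendant edges), while any two edges of a star share the center, so the answer is $1$.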
For a graph $G$, we denote by $\operatorname{cc}_G(X)$ the partition $\{C\subseteq V(G) \mid G[C]$ is a connected component of $G[X]\}$.
We will omit the subscript $G$ of the neighborhood and components notations whenever there is no ambiguity.
For two graphs $G_1$ and $G_2$, we denote by $G_1 - G_2$ the graph $(V(G_1), E(G_1)\setminus E(G_2))$.
Given a graph $G$ and $S\subseteq V(G)$, we say that a cycle of $G$ is an \emph{$S$-cycle} if it contains a vertex in $S$.
Moreover, we say that a subgraph $F$ of $G$ is an \emph{$S$-forest} if $F$ does not contain an $S$-cycle.
Typically, the \textsc{Subset Feedback Vertex Set} problem asks for a vertex set of minimum size (or weight) whose removal results in an $S$-forest.
Here we focus on the following equivalent formulation:
\pbDef{Subset Feedback Vertex Set (SFVS)}
{A graph $G$, $S\subseteq V(G)$ and a weight function $\mathsf{w} : V(G)\to \mathbb{Q}$}
{The maximum among the weights of the $S$-forests of $G$.}
\paragraph{\bf Rooted Layout}
For the notion of branch-decomposition, we consider its rooted variant called \emph{rooted layout}.
A \emph{rooted binary tree} is a binary tree with a distinguished vertex called the \emph{root}.
Since we manipulate at the same time graphs and trees representing them, the vertices of trees will be called \emph{nodes}.
A rooted layout of $G$ is a pair $(T,\delta)$ of a rooted binary tree $T$ and a bijective function $\delta$ between $V(G)$ and the leaves of $T$.
For each node $x$ of $T$, let $L_x$ be the set of all the leaves $l$ of $T$ such that the path from the root of $T$ to $l$ contains $x$.
We denote by $V_x$ the set of vertices that are in bijection with $L_x$, {i.e.}\xspace, $V_x:=\{v \in V(G)\mid \delta(v)\in L_x\}$.
All the width measures dealt with in this paper are special cases of the following one, where the difference in each case is the used set function.
Given a set function $\mathsf{f}: 2^{V(G)} \to\mathbb{N}$ and a rooted layout $\mathcal{L}=(T,\delta)$, the $\mathsf{f}$-width of a node $x$ of $T$ is $\mathsf{f}(V_x)$ and the $\mathsf{f}$-width of $(T,\delta)$, denoted by $\mathsf{f}(T,\delta)$ (or $\mathsf{f}(\mathcal{L})$), is $\operatorname{max}\{\mathsf{f}(V_x) \mid x \in V(T)\}$.
Finally, the $\mathsf{f}$-width of $G$ is the minimum $\mathsf{f}$-width over all rooted layouts of $G$.
\paragraph{\bf $(\mathbb{Q})$-Rank-width}
The rank-width and $\mathbb{Q}$-rank-width are, respectively, the $\mathsf{rw}$-width and $\mathsf{rw}_\mathbb{Q}$-width where $\mathsf{rw}(A)$ (resp. $\mathsf{rw}_\mathbb{Q}(A)$) is the rank over $GF(2)$ (resp. $\mathbb{Q}$) of the matrix $M_{A,\comp{A}}$ for all $A\subseteq V(G)$.
\paragraph{\bf Mim-width}
The mim-width of a graph $G$ is the $\mathsf{mim}$-width of $G$ where $\mathsf{mim}(A)$ is the size of a maximum induced matching of the graph $G[A,\comp{A}]$ for all $A\subseteq V(G)$.
Observe that all three parameters $\mathsf{rw}$-, $\mathsf{rw}_\mathbb{Q}$-, and $\mathsf{mim}$-width are symmetric, {i.e.}\xspace, for the associated set function $f$ and for any $A \subseteq V(G)$, we have $f(A) = f(\comp{A})$.
The following lemma provides upper bounds between mim-width and the other two parameters.
\begin{lemma}[\cite{Vatshelle12}]\label{lem:comparemim}
Let $G$ be a graph. For every $A\subseteq V(G)$,
we have $\mathsf{mim}(A) \leqslant \mathsf{rw}(A)$ and $\mathsf{mim}(A) \leqslant \mathsf{rw}_\mathbb{Q}(A)$.
\end{lemma}
\begin{proof}
Let $A\subseteq V(G)$.
Let $S$ be the vertex set of a maximum induced matching of the graph $G[A,\comp{A}]$.
By definition, we have $\mathsf{mim}(A)= |S\cap A| = |S \cap \comp{A}|$.
Observe that the restriction of the matrix $M_{A,\comp{A}}$ to rows in $S \cap A$ and columns in $S \cap \comp{A}$ is a permutation matrix: a binary square matrix with exactly one entry of 1 in each row and each column. The rank of this permutation matrix over $GF(2)$ or $\mathbb{Q}$ is $|S\cap A|=\mathsf{mim}(A)$.
Hence, $\mathsf{mim}(A)$ is upper bounded both by $\mathsf{rw}(A)$ and $\mathsf{rw}_\mathbb{Q}(A)$.
\end{proof}
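The permutation-submatrix argument in this proof can be checked concretely. The sketch below (our own illustration, using an integer-bitmask rank routine over $GF(2)$) takes a $5$-cycle with the cut $A=\{0,1\}$ and verifies that the rows and columns selected by an induced matching of $G[A,\comp{A}]$ form a permutation submatrix of $M_{A,\comp{A}}$, so the rank is at least the matching size.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are integer bitmasks."""
    basis = {}  # leading bit position -> row currently holding that pivot
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in basis:
                row ^= basis[lead]
            else:
                basis[lead] = row
                break
    return len(basis)

# 5-cycle 0-1-2-3-4-0 with the cut A = {0, 1}
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
A, coA = [0, 1], [2, 3, 4]
# rows of M_{A, coA}, one bitmask per vertex of A
rows = [sum(1 << j for j, v in enumerate(coA) if v in adj[u]) for u in A]

# {0-4, 1-2} is an induced matching of G[A, coA]; the corresponding
# rows (0, 1) and columns (4, 2) of M_{A, coA} form a permutation matrix
sub = [[1 if v in adj[u] else 0 for v in (4, 2)] for u in (0, 1)]
```

Here $\mathsf{mim}(A)=2$ and the rank of $M_{A,\comp{A}}$ over $GF(2)$ is also $2$, consistent with the lemma.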
\paragraph{\bf $d$-neighbor-equivalence.} The following concepts were introduced in \cite{BuiXuanTV13}. Let $G$ be a graph.
Let $A\subseteq V(G)$ and $d\in \mathbb{N}^+$.
Two subsets $X$ and $Y$ of $A$ are \emph{$d$-neighbor equivalent w.r.t.\ $A$}, denoted by $X\equi{A}{d} Y$, if $\operatorname{min}(d,|X\cap N(u)|) = \operatorname{min}(d,|Y\cap N(u)|)$ for all $u\in \comp{A}$.
It is not hard to check that $\equi{A}{d}$ is an equivalence relation.
See Figure \ref{fig:nec} for an example of $2$-neighbor equivalent sets.
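The defining condition can be checked verbatim; the following Python sketch is our own illustration (graphs encoded as adjacency dicts), not part of the algorithm.

```python
def d_neighbor_equivalent(adj, A, X, Y, d):
    """X, Y subsets of A are d-neighbor equivalent w.r.t. A iff no vertex
    outside A distinguishes them once neighbor counts are capped at d."""
    return all(min(d, len(adj[u] & set(X))) == min(d, len(adj[u] & set(Y)))
               for u in set(adj) - set(A))

# Vertex 3 sees one vertex of {0} and two vertices of {0, 1}:
# the two sets are 1-neighbor equivalent but not 2-neighbor equivalent.
adj = {0: {3}, 1: {3}, 2: {3}, 3: {0, 1, 2}}
```

This small example also shows why $\equi{A}{d}$ becomes finer as $d$ grows.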
\begin{figure}
\caption{An example of two $2$-neighbor equivalent sets $X$ and $Y$ w.r.t.\ $A$, i.e., $X\equi{A}{2} Y$.}
\label{fig:nec}
\end{figure}
For all $d\in \mathbb{N}^+$, we let $\mathsf{nec}_d : 2^{V(G)}\to \mathbb{N}$ where for all $A\subseteq V(G)$, $\mathsf{nec}_d(A)$ is the number of equivalence classes of $\equi{A}{d}$.
Notice that $\mathsf{nec}_1$ is a symmetric function \cite[Theorem 1.2.3]{Kim82} but $\mathsf{nec}_d$ is not necessarily symmetric for $d\geqslant 2$.
To simplify the running times, we will use the shorthand $\mathsf{s\text{-}nec}_2(A)$ to denote $\operatorname{max}(\mathsf{nec}_2(A),\mathsf{nec}_2(\comp{A}))$ (where $\mathsf{s}$ stands for symmetric).
The following lemma shows how $\mathsf{nec}_d(A)$ is upper bounded by the other parameters.
\begin{lemma}[\cite{BelmonteV13,OumSV13,Vatshelle12}]\label{lem:compare}
Let $G$ be a graph. For every $A\subseteq V(G)$ and $d\in \mathbb{N}^+$, we have the following upper bounds on $\mathsf{nec}_d(A)$:
\begin{multicols}{3}
\begin{enumerate}[(a)]
\item $2^{d \mathsf{rw}(A)^2}$,
\item $2^{\mathsf{rw}_\mathbb{Q}(A)\log(d \mathsf{rw}_\mathbb{Q}(A) + 1 )}$,
\item $|A|^{d \mathsf{mim}(A)}$.
\end{enumerate}
\end{multicols}
\end{lemma}
In order to manipulate the equivalence classes of $\equi{A}{d}$, one needs to compute a representative for each equivalence class in polynomial time.
This is achieved with the following notion of a representative.
Let $G$ be a graph with an arbitrary ordering of $V(G)$ and let $A\subseteq V(G)$.
For each $X\subseteq A$, let us denote by $\rep{A}{d}(X)$ the lexicographically smallest set among the sets $R\subseteq A$ of minimum size such that $R\equi{A}{d} X$.
Moreover, we denote by $\Rep{A}{d}$ the set $\{\rep{A}{d}(X)\mid X\subseteq A\}$.
It is worth noticing that the empty set always belongs to $\Rep{A}{d}$, for all $A\subseteq V(G)$ and $d\in\mathbb{N}^+$.
Moreover, we have $\Rep{V(G)}{d}=\Rep{\varnothing}{d}=\{\varnothing\}$ for all $d\in \mathbb{N}^+$.
In order to compute these representatives, we use the following lemma.
\begin{lemma}[\cite{BuiXuanTV13}]\label{lem:computenecd}
Let $G$ be an $n$-vertex graph. For every $A\subseteq V(G)$ and $d\in \mathbb{N}^+$, one can compute in time $O(\mathsf{nec}_d(A) \cdot n^2 \cdot \log(\mathsf{nec}_d(A)))$, the sets $\Rep{A}{d}$ and a data structure that, given
a set $X\subseteq A$, computes $\rep{A}{d}(X)$ in time $O(|A|\cdot n\cdot \log(\mathsf{nec}_d(A)))$.
\end{lemma}
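For intuition only, the representatives $\Rep{A}{d}$ can be computed by brute force on tiny graphs. The sketch below enumerates all $2^{|A|}$ subsets, unlike the polynomial-time procedure of the lemma above; the adjacency-dict encoding is our own convention.

```python
from itertools import combinations

def representatives(adj, A, d):
    """Brute-force Rep_A^d: bucket every subset of A by its capped
    neighborhood trace outside A, keeping per bucket the first subset
    seen, i.e. the lexicographically smallest one of minimum size."""
    outside = sorted(set(adj) - set(A))
    reps = {}
    for k in range(len(A) + 1):                  # sizes in increasing order
        for X in combinations(sorted(A), k):     # lexicographic within a size
            trace = tuple(min(d, len(adj[u] & set(X))) for u in outside)
            reps.setdefault(trace, X)            # first hit wins
    return sorted(reps.values())
```

On the star with center $3$ and leaves $0,1,2$, with $A=\{0,1,2\}$, the buckets are determined by the capped number of leaves chosen, so $\Rep{A}{1}$ has two representatives and $\Rep{A}{2}$ has three; as noted above, the empty set is always among them.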
\paragraph{\bf Vertex Contractions}
In order to deal with SFVS, we will use the ideas of the algorithms for \textsc{Feedback Vertex Set} from \cite{BergougnouxK19esa,JaffkeKT18}.
To this end, we will contract subsets of $\comp{S}$ in order to transform $S$-forests into forests.
In order to compare two partial solutions associated with $A\subseteq V(G)$, we define an auxiliary graph in which we replace contracted vertices by their representative sets in $\Rep{A}{2}$.
Since the sets in $\Rep{A}{2}$ are not necessarily pairwise disjoint, we will use the following notions of graphs ``induced'' by collections of subsets of vertices. We will also use these notions to define the contractions we make on partial solutions.
Let $G$ be a graph.
Given $\mathcal{A}\subseteq 2^{V(G)}$, we define $G[\mathcal{A}]$ as the graph with vertex set $\mathcal{A}$ where $A,B\in \mathcal{A}$ are adjacent if and only if $N(A)\cap B\neq\varnothing$.
Observe that if the sets in $\mathcal{A}$ are pairwise disjoint, then $G[\mathcal{A}]$ is obtained from an induced subgraph of $G$ by \emph{vertex contractions} (i.e., by replacing two vertices $u$ and $v$ with a new vertex with neighborhood $N(\{u,v\})$) and, for this reason, we refer to $G[\mathcal{A}]$ as a \emph{contracted graph}.
Notice that we will never use the neighborhood and connected-component notations on contracted graphs.
Given $\mathcal{A},\mathcal{B}\subseteq 2^{V(G)}$, we denote by $G[\mathcal{A},\mathcal{B}]$ the bipartite graph with vertex set $\mathcal{A}\cup\mathcal{B}$ and where $A,B\in \mathcal{A}\cup \mathcal{B}$ are adjacent if and only if $A\in \mathcal{A}$, $B\in \mathcal{B}$, and $N(A)\cap B\neq\varnothing$.
Moreover, we denote by $G[\mathcal{A} \mid \mathcal{B}]$ the graph with vertex set $\mathcal{A}\cup\mathcal{B}$ and with edge set $E(G[\mathcal{A}])\cup E(G[\mathcal{A},\mathcal{B}])$.
Observe that both graphs $G[\mathcal{A},\mathcal{B}]$ and $G[\mathcal{A} \mid \mathcal{B}]$ are subgraphs of the contracted graph $G[\mathcal{A} \cup \mathcal{B}]$.
To avoid confusion with the original graph, we refer to the vertices of the contracted graphs as \textit{blocks}.
It is worth noticing that in the contracted graphs used in this paper, whenever two blocks are adjacent, they are disjoint.
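The contracted graph $G[\mathcal{A}]$ can be built directly from its definition. The following Python sketch is our own illustration (blocks as frozensets over an adjacency dict) and only serves to fix ideas for the case of pairwise disjoint blocks.

```python
def contracted_graph(adj, blocks):
    """G[A]: one vertex per block; two blocks B, C are adjacent iff some
    edge of G joins them, i.e. N(B) intersects C."""
    def nbhd(B):                        # N(B) in the original graph
        return {w for u in B for w in adj[u]} - set(B)
    return {B: {C for C in blocks if C != B and nbhd(B) & C}
            for B in blocks}

# Contracting the blocks {0,1}, {2}, {3,4} of the path 0-1-2-3-4
# yields a path on three blocks.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
blocks = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4})]
cg = contracted_graph(adj, blocks)
```

Note that in this disjoint case the construction coincides with the usual notion of vertex contraction recalled above.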
The following observation states that we can contract from a partition without increasing the size of a maximum induced matching of a graph.
It follows directly from the definition of contractions.
\begin{observation}\label{obs:contactionmimwidth}
Let $H$ be a graph.
For any partition $\mathcal{P}$ of a subset of $V(H)$, the size of a maximum induced matching of $H[\mathcal{P}]$ is at most the size of a maximum induced matching of $H$.
\end{observation}
Let $(G,S)$ be an instance of SFVS.
The vertex contractions that we use on a partial solution $X$ are defined from a given partition of $X\setminus S$.
A partition of the vertices of $X \setminus S$ is called an \emph{$\comp{S}$-contraction} of $X$.
We will use the following notations to handle these contractions.
Given $Y\subseteq V(G)$, we denote by $\binom{Y}{1}$ the partition of $Y$ which contains only singletons, i.e., $\binom{Y}{1}=\{\{ v\} \mid v \in Y\}$.
Moreover, for an $\comp{S}$-contraction $\mathcal{P}$ of $X$, we denote by $X_{\downarrow \mathcal{P}}$ the partition of $X$ where $X_{\downarrow\mathcal{P}}= \mathcal{P} \cup \binom{X\cap S}{1}$.
Given a subgraph $H$ of $G$ and an $\comp{S}$-contraction $\mathcal{P}$ of $V(H)$, we denote by $H_{\downarrow \mathcal{P}}$ the graph $H[V(H)_{\downarrow \mathcal{P}}]$.
For example, given two $\comp{S}$-contractions $\mathcal{P}_X,\mathcal{P}_Y$ of two disjoint subsets $X,Y$ of $V(G)$, we denote the graph $G[X_{\downarrow \mathcal{P}_X}, Y_{\downarrow \mathcal{P}_Y}]$ by $G[X, Y]_{\downarrow\mathcal{P}_X\cup \mathcal{P}_Y}$ and the graph $G[X_{\downarrow \mathcal{P}_X} | Y_{\downarrow \mathcal{P}_Y}]$ by $G[X| Y]_{\downarrow\mathcal{P}_X\cup \mathcal{P}_Y}$.
It is worth noticing that in our contracted graphs, all the blocks of $S$-vertices are singletons and we denote them by $\{v\}$.
Given a set $X\subseteq V(G)$, we will intensively use the graph $G[X]_{\downarrow\operatorname{cc}(X\setminus S)}$ which corresponds to the graph obtained from $G[X]$ by contracting the connected components of $G[X\setminus S]$, see Figure~\ref{fig:S-contraction}.
\begin{figure}
\caption{An $S$-forest induced by a set $X\subseteq V(G)$; the vertices of $S$ are white. The gray circles represent the blocks of $X_{\downarrow \operatorname{cc}(X\setminus S)}$.}
\label{fig:S-contraction}
\end{figure}
Observe that, for every subset $X\subseteq V(G)$, if $G[X]$ is an $S$-forest, then $G[X]_{\downarrow\operatorname{cc}(X\setminus S)}$ is a forest.
The converse is not true as we may delete $S$-cycles with contractions: take a triangle with one vertex $v$ in $S$ and contract the neighbors of $v$.
However, we can prove the following equivalence.
\begin{fact}\label{fact:Sforestandforest}
Let $G$ be a graph and $S\subseteq V(G)$.
For every $X\subseteq V(G)$, $G[X]$ is an $S$-forest if and only if there exists an $\comp{S}$-contraction $\mathcal{P}$ of $X$ satisfying the following two properties:
\begin{itemize}
\item $G[X]_{\downarrow \mathcal{P}}$ is a forest, and
\item for every $B\in \mathcal{P}$ and $v\in X\cap S$, we have $|N(v)\cap B|\leqslant 1$.
\end{itemize}
Moreover, if $G[X]$ is an $S$-forest, then the $\comp{S}$-contraction $\operatorname{cc}(X\setminus S)$ satisfies these two properties.
\end{fact}
\begin{proof}
($\Rightarrow$) Suppose first that $G[X]$ is an $S$-forest. We claim that the $\comp{S}$-contraction $\operatorname{cc}(X\setminus S)$ satisfies the two properties.
Assume towards a contradiction that there is a cycle $C$ in $G[X]_{\downarrow\operatorname{cc}(X\setminus S)}$.
By definition of $G[X]_{\downarrow\operatorname{cc}(X\setminus S)}$, the blocks of this graph are the connected components $\operatorname{cc}(X\setminus S)$ and the singletons in $\binom{X\cap S}{1}$.
The blocks in $\operatorname{cc}(X\setminus S)$ are pairwise non-adjacent, thus $C$ contains a block $\{s\}$ with $s\in X\cap S$.
Observe that for every pair of consecutive blocks $B_1,B_2$ of $C$, there exists a vertex $v_1\in B_1$ and $v_2\in B_2$ such that $v_1v_2\in E(G[X])$.
As every block of $G[X]_{\downarrow \operatorname{cc}(X\setminus S)}$ induces a connected subgraph of $G[X]$, we can construct an $S$-cycle in $G[X]$ by replacing every block of $C$ by a path in $G[X]$, yielding a contradiction.
Hence, $G[X]_{\downarrow\operatorname{cc}(X\setminus S)}$ is a forest, i.e. the first property is satisfied.
Observe that if there exist $C\in \operatorname{cc}(X\setminus S)$ and $v\in X\cap S$ such that $v$ has two neighbors in $C$, then there exists an $S$-cycle in $G[X]$: since $G[C]$ is connected, a path in $G[C]$ between these two neighbors together with $v$ closes an $S$-cycle.
Hence, $\operatorname{cc}(X\setminus S) $ satisfies the second property.
($\Leftarrow$) Let $\mathcal{P}$ be an $\comp{S}$-contraction of a subset $X\subseteq V(G)$ that satisfies the two properties.
Assume for contradiction that there is an $S$-cycle $C$ in $G[X]$.
Let $v$ be a vertex of $C$ that belongs to $S$ and let $u$ and $w$ be the neighbors of $v$ in $C$.
Let $B_u$ and $B_w$ be the blocks in $X_{\downarrow \mathcal{P}}$ that contain $u$ and $w$ respectively.
As $v$ is in $S$, it belongs to the block $\{v\}$ of $G[X]_{\downarrow \mathcal{P}}$ and thus it is contained neither in $B_u$ nor in $B_w$.
The second property implies that $|N(v)\cap B|\leqslant 1$ for each $B\in \mathcal{P}$.
Thus, $B_u$ and $B_w$ are two distinct blocks both connected to the block $\{v\}$.
Since there exists a path between $u$ and $w$ in $C$ that does not go through $v$, we deduce that there is a path between $B_u$ and $B_w$ in $G[X]_{\downarrow \mathcal{P}}$ that does not go through $\{v\}$.
Indeed, this follows from the fact that if there is an edge between two vertices $a$ and $b$ in $G[X]$, then either $a$ and $b$ belong to the same block of $G[X]_{\downarrow\mathcal{P}}$ or there exists an edge between the blocks in $G[X]_{\downarrow\mathcal{P}}$ which contain $a$ and $b$.
We conclude that there exists a cycle in $G[X]_{\downarrow \mathcal{P}}$, a contradiction with the first property.
\end{proof}
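Fact~\ref{fact:Sforestandforest} can be checked mechanically on small graphs. Since, by the last part of the statement, the specific contraction $\operatorname{cc}(X\setminus S)$ witnesses the equivalence, it suffices to test it alone. The brute-force sketch below (our own illustration, with $X=V(G)$ and graphs as adjacency dicts) does exactly that; it covers in particular the triangle example mentioned above, where the contracted graph is a forest but the degree condition fails.

```python
from itertools import combinations

def components(adj, verts):
    """Vertex sets of the connected components of the subgraph induced by verts."""
    left, comps = set(verts), []
    while left:
        comp, stack = set(), [next(iter(left))]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(w for w in adj[u] if w in left)
        left -= comp
        comps.append(frozenset(comp))
    return comps

def has_S_cycle(adj, S):
    """v lies on a cycle iff two of its neighbors are connected in G - v."""
    def reach(src, tgt, avoid):
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == tgt:
                return True
            for w in adj[u]:
                if w not in seen and w != avoid:
                    seen.add(w)
                    stack.append(w)
        return False
    return any(reach(a, b, v) for v in S for a, b in combinations(sorted(adj[v]), 2))

def fact_holds(adj, S):
    """Check, for X = V(G), that G[X] is an S-forest iff the contraction
    cc(X \ S) gives a forest whose S-vertices see each block at most once."""
    V = set(adj)
    blocks = components(adj, V - S) + [frozenset({v}) for v in V & S]
    deg_ok = all(len(adj[v] & B) <= 1 for v in V & S for B in blocks if v not in B)
    edges = {frozenset({B, C}) for B in blocks for C in blocks
             if B != C and any(w in C for u in B for w in adj[u])}
    parent = {B: B for B in blocks}          # union-find acyclicity test
    def find(B):
        while parent[B] != B:
            B = parent[B]
        return B
    forest = True
    for B, C in map(tuple, edges):
        rB, rC = find(B), find(C)
        if rB == rC:
            forest = False
        parent[rB] = rC
    return (not has_S_cycle(adj, S)) == (forest and deg_ok)
```

On the triangle with one $S$-vertex both sides of the equivalence are false (the contraction is a forest, but the $S$-vertex has two neighbors in the contracted block), while on a path both sides are true.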
\section{A Meta-Algorithm for Subset Feedback Vertex Set}
In the following, we present a meta-algorithm that, given a rooted layout $(T,\delta)$ of $G$, solves \textsc{SFVS}.
We will show that this meta-algorithm implies that \textsc{SFVS} can be solved in time $2^{O(\mathsf{rw}_\mathbb{Q}(G)^2\log(\mathsf{rw}_\mathbb{Q}(G)))}\cdot n^{4}$, $2^{O(\mathsf{rw}(G)^3)}\cdot n^{4}$ and $n^{O(\mathsf{mim}(T,\delta)^2)}$.
The main idea of this algorithm is to use $\comp{S}$-contractions in order to employ properties similar to those used by the algorithm for \textsc{Maximum Induced Tree} of \cite{BergougnouxK19esa} and by the $n^{O(\mathsf{mim}(T,\delta))}$ time algorithm for \textsc{Feedback Vertex Set} of \cite{JaffkeKT18}.
In particular, we use the following lemma which is proved implicitly in \cite{BergougnouxK19esa}.
To simplify the following statements, we fix a graph $G$, a rooted layout $(T,\delta)$ of $G$ and a node $x\in V(T)$.
\begin{lemma}\label{lem:X2+}
Let $X$ and $Y$ be two disjoint subsets of $V(G)$.
If $G[X\cup Y]$ is a forest, then the number of vertices of $X$ that have at least two neighbors in $Y$ is bounded by $2w$ where $w$ is the size of a maximum induced matching in the bipartite graph $G[X,Y]$.
\end{lemma}
\begin{proof}
Let $X^{2+}$ be the set of vertices in $X$ having at least $2$ neighbors in $Y$.
In the following, we prove that $F=G[X^{2+},Y]$ admits a \emph{good bipartition}, that is, a bipartition $\{X_0,X_1\}$ of $X^{2+}$ such that, for each $i\in\{0,1\}$ and each $v\in X_i$, there exists $y_v\in Y$ such that $N_{F}(y_v)\cap X_i = \{v\}$.
Observe that this is enough to prove the lemma: if $F$ admits a good bipartition $\{X_0,X_1\}$, then, for each $i\in\{0,1\}$, the set of edges $M_i=\{ vy_v \mid v\in X_i \}$ is an induced matching of $G[X,Y]$, and hence $|X_0|\leqslant w$ and $|X_1|\leqslant w$.
In order to prove that $F$ admits a good bipartition it is sufficient to prove that each connected component of $F$ admits a good bipartition.
Let $C$ be a connected component of $F$ and $x_0\in C\cap X^{2+}$. As $G[X\cup Y]$ is a forest, we deduce that $F[C]$ is a tree.
Observe that the distance in $F[C]$ between each vertex $v\in C\cap X^{2+}$ and $x_0$ is even because $F[C]$ is bipartite w.r.t.\ $(C\cap X^{2+}, C\cap Y)$.
Let $X_0$ (resp. $X_1$) be the set of all vertices $v\in C\cap X^{2+}$ at distance $2\ell$ from $x_0$ with $\ell$ even (resp. odd).
We claim that $\{X_0,X_1\}$ is a good bipartition of $F[C]$.
Let $i \in \{0,1\}$, $v\in X_i$ and $\ell\in \mathbb{N}$ such that the distance between $v$ and $x_0$ in $F[C]$ is $2\ell$.
Let $P$ be the set of vertices in $C\setminus \{v\}$ that share a common neighbor with $v$ in $F[C]$.
We want to prove that $v$ has a neighbor $y$ in $F$ that is not adjacent to any vertex of $P\cap X_i$.
Observe that, for every $v'\in P$, the distance between $v'$ and $x_0$ in $F[C]$ is either $2\ell-2$, $2\ell$ or $2\ell+2$ because $F[C]$ is a tree and the distance between $v$ and $x_0$ is $2\ell$.
By construction of $\{X_0,X_1\}$, every vertex at distance $2\ell-2$ or $2\ell+2$ from $x_0$ belongs to $X_{1-i}$.
Thus, every vertex in $P\cap X_i$ is at distance $2\ell$ from $x_0$.
If $\ell=0$, then we are done because $v=x_0$ and $P\cap X_i=\varnothing$.
Assume that $\ell\neq 0$.
As $F[C]$ is a tree, $v$ has only one neighbor $w$ at distance $2\ell-1$ from $x_0$ in $F[C]$.
Because $F[C]$ is a tree, $w$ is the only common neighbor of $v$ and the vertices in $P\cap X_i$.
By definition of $X^{2+}$, $v$ has at least two neighbors in $Y$, so $v$ admits a neighbor that is not $w$ and this neighbor is not adjacent to the vertices in $P\cap X_i$.
Hence, we deduce that $\{X_0,X_1\}$ is a good bipartition of $F[C]$.
We deduce that every connected component of $F$ admits a good bipartition and, thus, $F$ admits a good bipartition. This proves that $|X^{2+}|\leqslant 2w$.
\end{proof}
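The bound of Lemma~\ref{lem:X2+} can be verified on a small instance. The brute-force matching routine below is our own illustrative code (exponential, tiny graphs only), not the constructive argument of the proof.

```python
from itertools import combinations

def mim_bipartite(adj, X, Y):
    """Brute-force size of a maximum induced matching of G[X, Y]."""
    edges = [(x, y) for x in sorted(X) for y in adj[x] if y in Y]
    best = 0
    for k in range(1, len(edges) + 1):
        for M in combinations(edges, k):
            verts = {v for e in M for v in e}
            induced = {(x, y) for x in verts & set(X) for y in adj[x]
                       if y in verts}
            if len(verts) == 2 * k and induced == set(M):
                best = k
    return best

# The path x0-y0-x1-y1-x2-y2 is a forest on X and Y;
# x1 and x2 each have at least two neighbors in Y.
adj = {'x0': {'y0'}, 'y0': {'x0', 'x1'}, 'x1': {'y0', 'y1'},
       'y1': {'x1', 'x2'}, 'x2': {'y1', 'y2'}, 'y2': {'x2'}}
X, Y = {'x0', 'x1', 'x2'}, {'y0', 'y1', 'y2'}
X2plus = {x for x in X if len(adj[x] & Y) >= 2}
```

Here $|X^{2+}|=2$ and the maximum induced matching of $G[X,Y]$ has size $w=2$, consistent with the bound $|X^{2+}|\leqslant 2w$.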
The following lemma generalizes Fact~\ref{fact:Sforestandforest} and presents the equivalence between $S$-forests and forests that we will use in our algorithm.
\begin{lemma}\label{lem:equiStreeScontrac}
Let $X\subseteq V_x$ and $Y\subseteq \comp{V_x}$.
If the graph $G[X\cup Y]$ is an $S$-forest, then there exists an $\comp{S}$-contraction $\mathcal{P}_Y$ of $Y$ that satisfies the following conditions:
\begin{enumerate}[(1)]
\item $G[X\cup Y]_{\downarrow\operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest,
\item for every block $P\in\operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y$ and every $v\in (X\cup Y)\cap S$, we have $|N(v)\cap P|\leqslant 1$,
\item the graph $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ admits a vertex cover $\mathsf{VC}$ of size at most $4 \mathsf{mim}(V_x)$ such that the neighborhoods of the blocks in $\mathsf{VC}$ are pairwise distinct in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that $G[X\cup Y]$ is an $S$-forest.
Let us explain how we construct $\mathcal{P}_Y$ that satisfies Conditions (1)-(3).
First, we initialize $\mathcal{P}_Y=\operatorname{cc}(Y\setminus S)$.
Observe that there is no cycle in $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ that contains a block in $\binom{S}{1}$ because $G[X\cup Y]$ is an $S$-forest.
Moreover, $\operatorname{cc}(X\setminus S)$ and $\mathcal{P}_Y$ form two independent sets in $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
Consequently, every cycle $C$ in $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is of the form $C=(X_1,Y_1,X_2,Y_2,\dots,X_t,Y_t)$ where $X_1,\dots,X_t\in\operatorname{cc}(X\setminus S)$ and $Y_1,\dots,Y_t\in \mathcal{P}_Y$.
We do the following operation, until the graph $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest: take a cycle $C=(X_1,Y_1,X_2,Y_2,\dots,X_t,Y_t)$ in $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ and replace the blocks $Y_1,\dots,Y_t$ in $\mathcal{P}_Y$ by the block $Y_1\cup\dots\cup Y_t$.
See Figures~\ref{fig:killcycle} and~\ref{fig:contracted} in Appendix~\ref{appendix} for an example of $\comp{S}$-contraction $\mathcal{P}_Y$.
For each $B\in \operatorname{cc}(X\setminus S)\cup \operatorname{cc}(Y\setminus S)$, the vertices of $B$ are pairwise connected in $G[(X\cup Y)\setminus S]$. We deduce by induction that whenever we apply the operation on a cycle $C=(X_1,Y_1,X_2,Y_2,\dots,X_t,Y_t)$, it holds that the vertices of the new block $Y_1\cup\dots\cup Y_t$ are pairwise connected in $G[(X\cup Y)\setminus S]$.
Thus, for every block $B$ of $\operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y$, the vertices of $B$ are pairwise connected in $G[(X\cup Y)\setminus S]$.
It follows that for every $v\in (X\cup Y)\cap S$ and $B\in \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y$, since $G[X\cup Y]$ is an $S$-forest, we have $|N(v)\cap B|\leqslant 1$. Thus, Condition (2) is satisfied.
It remains to prove Condition (3).
Let $\mathsf{VC}$ be the set of blocks of $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ containing:
\begin{itemize}
\item the blocks that have at least 2 neighbors in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$, and
\item one block in every isolated edge of $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
\end{itemize}
By construction, it is clear that $\mathsf{VC}$ is indeed a vertex cover of $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ as every edge is either isolated or incident to a block of degree at least 2.
We claim that $|\mathsf{VC}|\leqslant 4\mathsf{mim}(V_x)$.
By Observation \ref{obs:contactionmimwidth}, we know that the size of a maximum induced matching in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is at most $\mathsf{mim}(V_x)$.
Let $t$ be the number of isolated edges in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
Observe that the size of a maximum induced matching in the graph obtained from $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ by removing isolated edges is at most $\mathsf{mim}(V_x)-t$.
By Lemma \ref{lem:X2+}, applied in both directions of the cut, there are at most $4(\mathsf{mim}(V_x)-t)$ blocks that have at least 2 neighbors in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
We conclude that $|\mathsf{VC}|\leqslant 4\mathsf{mim}(V_x)$.
Since $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest, the neighborhoods of the blocks that have at least 2 neighbors must be pairwise distinct.
We conclude from the construction of $\mathsf{VC}$ that the neighborhoods of the blocks of $\mathsf{VC}$ in $G[X,Y]_{\downarrow\operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ are pairwise distinct.
Hence, Condition (3) is satisfied.
\end{proof}
In the following, we will use Lemma \ref{lem:equiStreeScontrac} to design some sort of equivalence relation between partial solutions.
To this purpose, we use the following set of tuples. We call each such tuple an index because it corresponds to an index into a table in a DP (dynamic programming) approach. We do this even though the presentation of our algorithm is not given by a standard DP description.
\begin{definition}[$\mathbb{I}_x$]\label{def:indices}
We define the set $\mathbb{I}_x$ of indices as the set of tuples
\[ (\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S}) \in
2^{\Rep{{V_x}}{2}}\times 2^{\Rep{V_x}{1}}\times \Rep{V_x}{1}\times 2^{\Rep{\comp{V_x}}{2}} \times 2^{\Rep{\comp{V_x}}{1}} \]
such that $|\mathscr{X}_{\mathsf{vc}}^{\comp{S}}|+|\mathscr{X}_{\mathsf{vc}}^{S}|+|\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}| + |\mathscr{Y}_{\mathsf{vc}}^{S}| \leqslant 4\mathsf{mim}(V_x)$.
\end{definition}
These sets of indices play a major role in our meta-algorithm, in particular, the sizes of these sets of indices appear in the runtime of our meta-algorithm. In fact, to prove the algorithmic consequences of our meta-algorithm for rank-width, $\mathbb{Q}$-rank-width and mim-width, we show (Lemma~\ref{lem:consequences}) that the size of $\mathbb{I}_x$ is upper bounded by $2^{O(\mathsf{rw}(V_x)^3)}$, $2^{O(\mathsf{rw}_\mathbb{Q}(V_x)^2\log(\mathsf{rw}_\mathbb{Q}(V_x)))}$ and $n^{O(\mathsf{mim}(V_x)^2)}$.
In the following, we will define partial solutions associated with an index $i\in\mathbb{I}_x$ (a partial solution may be associated with many indices).
In order to prove the correctness of our algorithm (the algorithm itself will not use this concept), we will also define \textit{complement solutions} (the sets $Y\subseteq \comp{V_x}$ and their $\comp{S}$-contractions $\mathcal{P}_Y$) associated with an index $i$.
We will prove that, for every partial solution $X$ and complement solution $(Y,\mathcal{P}_Y)$ associated with $i$, if the graph $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ is a forest, then $G[X\cup Y]$ is an $S$-forest.
Let us give some intuition on these indices by explaining how one index is associated with a solution; figures explaining this association and the purposes of the sets $\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S}$ can be found in Appendix~\ref{appendix}.
Let $X\subseteq V_x$ and $Y\subseteq \comp{V_x}$ be such that $G[X\cup Y]$ is an $S$-forest.
Let $\mathcal{P}_Y$ be the $\comp{S}$-contraction of $Y$ and $\mathsf{VC}$ be a vertex cover of $G[X,Y]_{\downarrow\operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ given by Lemma \ref{lem:equiStreeScontrac}.
Then, $X$ and $Y$ are associated with $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in \mathbb{I}_x$ such that:
\begin{itemize}
\item $\mathscr{X}_{\mathsf{vc}}^{S}$ (resp. $\mathscr{Y}_{\mathsf{vc}}^{S}$) contains the representatives of the blocks $\{v\}$ in $\mathsf{VC}$ such that $v\in X\cap S$ (resp. $v\in Y\cap S$) w.r.t. the 1-neighbor equivalence over $V_x$ (resp. $\comp{V_x}$).
We will only use the indices where $\mathscr{X}_{\mathsf{vc}}^{S}$ contains representatives of singletons, in other words, $\mathscr{X}_{\mathsf{vc}}^{S}$ is included in $\{ \rep{V_x}{1}(\{v\}) \mid v\in V_x\}$ which can be much smaller than $\Rep{V_x}{1}$.
The same observation holds for $\mathscr{Y}_{\mathsf{vc}}^{S}$.
In Definition \ref{def:indices}, we state, for the sake of simplicity, that $\mathscr{X}_{\mathsf{vc}}^{S}$ and $\mathscr{Y}_{\mathsf{vc}}^{S}$ are, respectively, subsets of $\Rep{V_x}{1}$ and $\Rep{\comp{V_x}}{1}$.
\item $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}$ (resp. $\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$) contains the representatives of the blocks in $\operatorname{cc}(X\setminus S)\cap \mathsf{VC}$ (resp. $\mathcal{P}_Y\cap \mathsf{VC}$) w.r.t. the 2-neighbor equivalence relation over $V_x$ (resp. $\comp{V_x}$).
\item $\mathscr{X}_{\comp{\mathsf{vc}}}$ is the representative of $X\setminus V(\mathsf{VC})$ (the set of vertices which do not belong to the vertex cover) w.r.t. the 1-neighbor equivalence over $V_x$.
\end{itemize}
Because the neighborhoods of the blocks in $\mathsf{VC}$ are pairwise distinct in $G[X,Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ (Property (3) of Lemma \ref{lem:equiStreeScontrac}), there is a one-to-one correspondence between the representatives in $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S}\cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$ and the blocks in $\mathsf{VC}$.
While $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}, \mathscr{X}_{\mathsf{vc}}^{S},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S}$ describe $\mathsf{VC}$, the representative set $\mathscr{X}_{\comp{\mathsf{vc}}}$ describes the neighborhood of the blocks of $X_{\downarrow \operatorname{cc}(X\setminus S)}$ which are not in $\mathsf{VC}$.
The purpose of $\mathscr{X}_{\comp{\mathsf{vc}}}$ is to make sure that, for every partial solution $X$ and complement solution $(Y,\mathcal{P}_Y)$ associated with $i$, the set $\mathsf{VC}$ described by $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}, \mathscr{X}_{\mathsf{vc}}^{S},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S}$ is a vertex cover of $G[X,Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
To do so, it is sufficient to require that, for every complement solution $(Y,\mathcal{P}_Y)$ associated with $i$, the set $Y\setminus V(\mathsf{VC})$ has no neighbor in $\mathscr{X}_{\comp{\mathsf{vc}}}$.
Observe that the sets $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}$ and $\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$ contain representatives for the $2$-neighbor equivalence.
We need the 2-neighbor equivalence to control the $S$-cycles which might disappear after vertex contractions.
To prevent this situation, we require, for example, that every vertex in $X\cap S$ has at most one neighbor in $\comp{R}$ for each $\comp{R}\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.
Thanks to the 2-neighbor equivalence, a vertex $v$ in $X\cap S$ has at most one neighbor in $\comp{R}\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$ if and only if $v$ has at most one neighbor in the block of $\mathcal{P}_Y$ associated with $\comp{R}$.
This property of the 2-neighbor equivalence is captured by the following fact.
\begin{fact}\label{fact:2neighborequivalence}
For every $A\subseteq V(G)$ and $B,P\subseteq A$, if $B\equi{A}{2} P$, then, for all $v\in \comp{A}$, we have $|N(v)\cap B|\leqslant 1$ if and only if $|N(v)\cap P|\leqslant 1$.
\end{fact}
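To make Fact~\ref{fact:2neighborequivalence} concrete, the following self-contained Python sketch checks it exhaustively on a small hypothetical graph; the adjacency lists, the cut $(A,\comp{A})$ and the function names are illustrative stand-ins, not part of the paper's formal development.

```python
from itertools import combinations, chain

# Hypothetical base graph given by symmetric adjacency sets.
N = {
    1: {4, 5}, 2: {4}, 3: {5, 6},
    4: {1, 2}, 5: {1, 3}, 6: {3},
}
A = {1, 2, 3}          # one side of the cut
A_comp = {4, 5, 6}     # its complement

def equiv_d(B, P, d):
    """B and P are d-neighbor equivalent over A: every vertex outside A
    sees B and P the same way, counting neighbors only up to d."""
    return all(min(len(N[v] & B), d) == min(len(N[v] & P), d) for v in A_comp)

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Fact: if B and P are 2-neighbor equivalent over A, then, for every v
# outside A, |N(v) & B| <= 1 if and only if |N(v) & P| <= 1.
for B in map(set, subsets(A)):
    for P in map(set, subsets(A)):
        if equiv_d(B, P, 2):
            for v in A_comp:
                assert (len(N[v] & B) <= 1) == (len(N[v] & P) <= 1)
print("Fact verified on the toy graph")
```

The check goes through because, with threshold $d=2$, the truncated counts $\min(|N(v)\cap B|,2)$ and $\min(|N(v)\cap P|,2)$ agree exactly when both counts are equal and at most $1$, or both are at least $2$.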
In order to define partial solutions associated with $i$, we need the following notion of auxiliary graph.
Given $X\subseteq V_x$ and $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in\mathbb{I}_x$, we write $\operatorname{aux}(X,i)$ to denote the graph
\[ G[X_{\downarrow \operatorname{cc}(X\setminus S)}\mid \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S} ]. \]
Observe that $\operatorname{aux}(X,i)$ is obtained from the graph induced by $X_{\downarrow \operatorname{cc}(X\setminus S)} \cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S}$ by removing the edges between the blocks from $ \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S}$. Figure \ref{fig:auxxi} illustrates an example of the graph $\operatorname{aux}(X,i)$ and the related notions. The figures in Appendix~\ref{appendix} explain the relations between an $S$-forest and these auxiliary graphs.
\begin{figure}
\caption{An example of a set $X\subseteq V_x$ and its auxiliary graph $\operatorname{aux}(X,i)$ associated with an index $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})$.}
\label{fig:auxxi}
\end{figure}
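The one-sided graph $G[A \mid B]$ used above keeps every edge incident to a block of $A$ and discards the edges with both endpoints in $B$. The following Python sketch builds such a block graph and tests acyclicity (as in Condition (c)) with a union-find forest check; the toy base graph and the helper names are hypothetical, since the paper defines these objects only abstractly.

```python
def block_graph(N, blocks):
    """Quotient graph on `blocks`: an edge (i, j) whenever some vertex of
    blocks[i] is adjacent, in the base graph N, to some vertex of blocks[j]."""
    edges = set()
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            if any(N[u] & blocks[j] for u in blocks[i]):
                edges.add((i, j))
    return edges

def induced_one_sided(N, a_blocks, b_blocks):
    """G[A | B]: the block graph on a_blocks + b_blocks without B-B edges."""
    edges = block_graph(N, a_blocks + b_blocks)
    cut = len(a_blocks)
    return {(i, j) for (i, j) in edges if i < cut or j < cut}

def is_forest(n, edges):
    """Union-find acyclicity test on n block-nodes."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:          # this edge closes a cycle
            return False
        parent[ri] = rj
    return True

# Hypothetical base graph (symmetric adjacency sets).
N = {1: {5}, 2: {6}, 3: {5, 6}, 5: {1, 3}, 6: {2, 3}}
a_blocks = [{1}, {2}]        # stand-ins for blocks of X after contraction
b_blocks = [{3}, {5, 6}]     # stand-ins for representative blocks of the index
aux_edges = induced_one_sided(N, a_blocks, b_blocks)
print(aux_edges, is_forest(len(a_blocks) + len(b_blocks), aux_edges))
```

In this toy instance the edge between the two $B$-blocks $\{3\}$ and $\{5,6\}$ is dropped, mirroring how $\operatorname{aux}(X,i)$ removes the edges between the blocks of $\mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S}$.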
We will ensure that, given a complement solution $(Y,\mathcal{P}_Y)$ associated with $i$, the graph $\operatorname{aux}(X,i)$ is isomorphic to $G[X_{\downarrow \operatorname{cc}(X\setminus S)} \mid Y_{\downarrow\mathcal{P}_Y}\cap \mathsf{VC}]$.
We are now ready to define the notion of partial solution associated with an index $i$.
\begin{definition}[Partial solutions]\label{def:partialsolution}
Let $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in \mathbb{I}_x$.
We say that $X\subseteq V_x$ is a partial solution associated with $i$ if the following conditions are satisfied:
\begin{enumerate}[(a)]
\item for every $R\in \mathscr{X}_{\mathsf{vc}}^{S}$, there exists a unique $v\in X\cap S$ such that $R\equi{V_x}{1} \{v\}$,
\item for every $R\in \mathscr{X}_{\mathsf{vc}}^{\comp{S}}$, there exists a unique $C\in\operatorname{cc}(X\setminus S)$ such that $R\equi{V_x}{2} C$,
\item $\operatorname{aux}(X,i)$ is a forest,
\item for every $C\in\operatorname{cc}(X\setminus S)$ and $\{v\}\in \mathscr{Y}_{\mathsf{vc}}^{S}$, we have $|N(v)\cap C |\leqslant 1$,
\item for every $v\in X\cap S$ and $U\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \operatorname{cc}(X\setminus S)$, we have $|N(v)\cap U|\leqslant 1$,
\item $\mathscr{X}_{\comp{\mathsf{vc}}} \equi{V_x}{1} X\setminus V(\mathsf{VC}_X)$ , where $\mathsf{VC}_X$ contains the blocks $\{v\}\in \binom{X\cap S}{1}$ such that $\rep{V_x}{1}(\{v\})\in \mathscr{X}_{\mathsf{vc}}^{S}$ and the components $C$ of $G[X\setminus S]$ such that $\rep{V_x}{2}(C)\in \mathscr{X}_{\mathsf{vc}}^{\comp{S}}$.
\end{enumerate}
\end{definition}
Similarly to Definition \ref{def:partialsolution}, we define the notion of \textit{complement solutions} associated with an index $i\in\mathbb{I}_x$.
We use this concept only to prove the correctness of our algorithm.
\begin{definition}[Complement solutions]\label{def:complementsolution}
Let $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in \mathbb{I}_x$.
We call complement solutions associated with $i$ all the pairs $(Y,\mathcal{P}_Y)$ such that $Y\subseteq \comp{V_x}$, $\mathcal{P}_Y$ is an $\comp{S}$-contraction of $Y$ and the following conditions are satisfied:
\begin{enumerate}[(a)]
\item for every $U\in \mathscr{Y}_{\mathsf{vc}}^{S}$, there exists a unique $v \in Y\cap S$ such that $U\equi{\comp{V_x}}{1} \{v\}$,
\item for every $U\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$, there exists a unique $P\in \mathcal{P}_Y$ such that $U\equi{\comp{V_x}}{2} P$,
\item $G[Y]_{\downarrow\mathcal{P}_Y}$ is a forest,
\item for every $P\in\mathcal{P}_Y$ and $\{v\}\in \mathscr{X}_{\mathsf{vc}}^{S}$, we have $|N(v)\cap P |\leqslant 1$,
\item for every $y\in Y\cap S$ and $R\in \mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathcal{P}_Y$, we have $|N(y)\cap R| \leqslant 1$,
\item $N(\mathscr{X}_{\comp{\mathsf{vc}}})\cap V(\mathscr{Y}_{\comp{\mathsf{vc}}})=\varnothing$, where $\mathscr{Y}_{\comp{\mathsf{vc}}}$ contains the blocks $\{v\}\in \binom{Y\cap S}{1}$ such that $\rep{\comp{V_x}}{1}(\{v\})\notin \mathscr{Y}_{\mathsf{vc}}^{S}$ and the blocks $P\in\mathcal{P}_Y$ such that $\rep{\comp{V_x}}{2}(P)\notin \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.
\end{enumerate}
\end{definition}
Let us now explain the conditions of Definitions \ref{def:partialsolution} and \ref{def:complementsolution}.
Let $X$ be a partial solution associated with an index $i\in \mathbb{I}_x$ and $(Y,\mathcal{P}_Y)$ be a complement solution associated with $i$.
Conditions (a) and (b) of both definitions guarantee that there exists a subset $\mathsf{VC}$ of $X_{\downarrow \operatorname{cc}(X\setminus S)}\cup Y_{\downarrow\mathcal{P}_Y}$ such that there is a one-to-one correspondence between the blocks of $\mathsf{VC}$ and the representatives in $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S}\cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$.
Condition (c) of Definition \ref{def:partialsolution} guarantees that the connections between $X_{\downarrow \operatorname{cc}(X\setminus S)}$ and $\mathsf{VC}$ are acyclic.
As explained earlier, Conditions (d) and (e) of both definitions are here to control the $S$-cycles which might disappear with the vertex contractions.
In particular, by Fact~\ref{fact:Sforestandforest}, Conditions~(c), (d) and (e) together imply that $G[X]$ and $G[Y]$ are $S$-forests.
Finally, as explained earlier, the last conditions of both definitions ensure that $\mathsf{VC}$, the set described by $\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$ and $\mathscr{Y}_{\mathsf{vc}}^{S}$, is a vertex cover of $G[X,Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
Notice that $X\setminus V(\mathsf{VC}_X)$ and $V(\mathscr{Y}_{\comp{\mathsf{vc}}})$ correspond to the sets of vertices in $X$ and $Y$, respectively, that do not belong to a block of the vertex cover $\mathsf{VC}$.
Such observations are used to prove the following two results.
\begin{lemma}\label{lemma:exitsAnIndex}
Let $X\subseteq V_x$ and $Y\subseteq \comp{V_x}$ be such that $G[X\cup Y]$ is an $S$-forest.
There exist $i\in \mathbb{I}_x$ and an $\comp{S}$-contraction $\mathcal{P}_Y$ of $Y$ such that (1)~$G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest, (2)~$X$ is a partial solution associated with $i$ and (3)~$(Y,\mathcal{P}_Y)$ is a complement solution associated with $i$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:equiStreeScontrac}, there exists an $\comp{S}$-contraction $\mathcal{P}_Y$ of $Y$ such that the following properties are satisfied:
\begin{enumerate}[(A)]
\item $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest,
\item for all $P\in\operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y$ and all $v\in (X\cup Y)\cap S$, we have $|N(v)\cap P|\leqslant 1$,
\item the graph $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$ admits a vertex cover $\mathsf{VC}$ of size at most $4\mathsf{mim}(V_x)$ such that the neighborhoods of the blocks in $\mathsf{VC}$ are pairwise distinct in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$.
\end{enumerate}
We construct $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in\mathbb{I}_x$ from $\mathsf{VC}$ as follows:
\begin{itemize}
\item $\mathscr{X}_{\mathsf{vc}}^{S}=\{\rep{V_x}{1}(\{v\}) \mid \{v\}\in \binom{X\cap S}{1}\cap\mathsf{VC}\}$,
\item $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}=\{\rep{V_x}{2}(P) \mid P\in \operatorname{cc}(X\setminus S) \cap \mathsf{VC} \}$,
\item $\mathscr{X}_{\comp{\mathsf{vc}}}= \rep{V_x}{1}(X\setminus V(\mathsf{VC}))$,
\item $\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}= \{ \rep{\comp{V_x}}{2}(P)\mid P\in \mathcal{P}_Y\cap \mathsf{VC}\}$,
\item $\mathscr{Y}_{\mathsf{vc}}^{S}=\{\rep{\comp{V_x}}{1}(\{v\}) \mid \{v\}\in \binom{Y\cap S}{1}\cap \mathsf{VC}\}$.
\end{itemize}
Since $|\mathsf{VC}|\leqslant 4\mathsf{mim}(V_x)$, we have $|\mathscr{X}_{\mathsf{vc}}^{\comp{S}}|+|\mathscr{X}_{\mathsf{vc}}^{S}|+|\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}| + |\mathscr{Y}_{\mathsf{vc}}^{S}| \leqslant 4\mathsf{mim}(V_x)$ and thus we have $i\in\mathbb{I}_x$.
We claim that $X$ is a partial solution associated with $i$.
By construction of $i$, Conditions (a), (b) and (f) of Definition~\ref{def:partialsolution} are satisfied.
In particular, Conditions (a) and (b) follow from Property~(C), i.e., the neighborhoods of the blocks in $\mathsf{VC}$ are pairwise distinct in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$.
Hence, the blocks in $X_{\downarrow \operatorname{cc}(X\setminus S)}\cap \mathsf{VC}$ are pairwise non-equivalent for the $d$-neighbor equivalence over $V_x$ for every $d\in \mathbb{N}^+$, in particular for $d=1$ and $d=2$.
Consequently, there is a one-to-one correspondence between the blocks of $X_{\downarrow \operatorname{cc}(X\setminus S)}\cap \mathsf{VC}$ and the representatives in $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S}$.
It remains to prove Conditions (c), (d) and (e).
We claim that Condition (c) is satisfied: $\operatorname{aux}(X,i)$ is a forest.
Observe that, by construction, $\operatorname{aux}(X,i)$ is isomorphic to the graph $G[X_{\downarrow\operatorname{cc}(X\setminus S)} \mid Y_{\downarrow \mathcal{P}_Y}\cap \mathsf{VC} ]$.
Indeed, for every $P\in Y_{\downarrow \mathcal{P}_Y}\cap \mathsf{VC}$, by construction, there exists a unique $U\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$ such that $U\equi{\comp{V_x}}{1} P$ or $U\equi{\comp{V_x}}{2} P$.
In both cases, we have $N(U)\cap V_x=N(P)\cap V_x$ and thus the neighborhood of $P$ in $G[X_{\downarrow\operatorname{cc}(X\setminus S)} \mid Y_{\downarrow \mathcal{P}_Y}\cap \mathsf{VC} ]$ is the same as the neighborhood of $U$ in $\operatorname{aux}(X,i)$.
Since $G[X_{\downarrow\operatorname{cc}(X\setminus S)} \mid Y_{\downarrow \mathcal{P}_Y}\cap \mathsf{VC} ]$ is a subgraph of $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ and this latter graph is a forest, we deduce that $\operatorname{aux}(X,i)$ is also a forest.
From Property~(B) and Fact~\ref{fact:2neighborequivalence}, we deduce that Conditions (d) and (e) are satisfied.
Consequently, $X$ is a partial solution associated with $i$.
Let us now prove that $(Y,\mathcal{P}_Y)$ is a complement solution associated with $i$.
From the construction of $i$ and by the same argument used earlier, we deduce that Conditions~(a) and (b) of Definition~\ref{def:complementsolution} are satisfied.
By Property~(A), $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest; since $G[Y]_{\downarrow \mathcal{P}_Y}$ is a subgraph of it, it is also a forest, and thus Condition~(c) is satisfied.
Conditions (d) and (e) are satisfied from Property~(B) and Fact~\ref{fact:2neighborequivalence}.
It remains to prove Condition (f): $N(\mathscr{X}_{\comp{\mathsf{vc}}})\cap V(\mathscr{Y}_{\comp{\mathsf{vc}}})=\varnothing$, where $\mathscr{Y}_{\comp{\mathsf{vc}}}$ contains the blocks $\{v\}\in \binom{Y\cap S}{1}$ such that $\rep{\comp{V_x}}{1}(\{v\})\notin \mathscr{Y}_{\mathsf{vc}}^{S}$ and the blocks $P\in\mathcal{P}_Y$ such that $\rep{\comp{V_x}}{2}(P)\notin \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.
By construction, $V(\mathscr{Y}_{\comp{\mathsf{vc}}}) = Y \setminus V(\mathsf{VC})$.
Since $\mathsf{VC}$ is a vertex cover of the graph $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$, there are no edges between $X_{\downarrow \operatorname{cc}(X\setminus S)}\setminus\mathsf{VC}$ and $Y_{\downarrow \mathcal{P}_Y}\setminus \mathsf{VC}$ in $G[X, Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$.
We deduce that $N(X \setminus V(\mathsf{VC})) \cap V(\mathscr{Y}_{\comp{\mathsf{vc}}})= \varnothing$.
Since $X \setminus V(\mathsf{VC})\equi{V_x}{1} \mathscr{X}_{\comp{\mathsf{vc}}}$, we conclude that $N(\mathscr{X}_{\comp{\mathsf{vc}}})\cap V(\mathscr{Y}_{\comp{\mathsf{vc}}})=\varnothing$.
This proves that $(Y,\mathcal{P}_Y)$ is a complement solution associated with $i$.
\end{proof}
\begin{lemma}\label{lemma:forestImpliesS-forest}
Let $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in \mathbb{I}_x$, $X$ be a partial solution associated with $i$ and $(Y,\mathcal{P}_Y)$ be a complement solution associated with $i$. If the graph $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest, then $G[X\cup Y]$ is an $S$-forest.
\end{lemma}
\begin{proof}
Assume that $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest.
By Fact~\ref{fact:Sforestandforest}, in order to prove that $G[X\cup Y]$ is an $S$-forest, it is enough to prove that for all $v\in (X\cup Y)\cap S$ and all $P\in \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y$, we have $|N(v)\cap P|\leqslant 1$.
Let us prove this statement for a vertex $v\in X\cap S$; the proof is symmetric for $v\in Y\cap S$.
Let $P\in(X\cup Y)_{\downarrow\operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.
If $P\notin (\operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y)$, then $P$ is a singleton in $\binom{X\cup Y}{1}$ and we are done.
If $P\in \operatorname{cc}(X\setminus S)$, then Condition (e) of Definition \ref{def:partialsolution} guarantees that we have $|N(v)\cap P|\leqslant 1$.
Assume now that $P\in \mathcal{P}_Y$.
Suppose first that $\rep{\comp{V_x}}{2}(P)\notin \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.
From Condition~(f) of Definition \ref{def:complementsolution}, we have $N(P)\cap \mathscr{X}_{\comp{\mathsf{vc}}}=\varnothing$.
Let $r=\rep{V_x}{1}(\{v\})$.
From the definition of $\mathscr{X}_{\comp{\mathsf{vc}}}$ in Definition~\ref{def:partialsolution}, we deduce that if $r\notin \mathscr{X}_{\mathsf{vc}}^{S}$, then $N(v)\cap \comp{V_x}\subseteq N(\mathscr{X}_{\comp{\mathsf{vc}}})$ and thus $N(v)\cap P=\varnothing$.
On the other hand, if $r\in \mathscr{X}_{\mathsf{vc}}^{S}$, then Condition~(d) of Definition~\ref{def:complementsolution} ensures that $|N(r)\cap P|\leqslant 1$ and thus $|N(v)\cap P|\leqslant 1$.
Now, suppose that $\rep{\comp{V_x}}{2}(P)\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.
By Condition~(e) of Definition \ref{def:partialsolution}, we know that $|N(v)\cap \rep{\comp{V_x}}{2}(P)|\leqslant 1$.
From Fact~\ref{fact:2neighborequivalence}, we conclude that $|N(v)\cap P|\leqslant 1$. This concludes the proof of Lemma~\ref{lemma:forestImpliesS-forest}.
\end{proof}
For each index $i\in\mathbb{I}_x$, we will design an equivalence relation $\sim_i$ between the partial solutions associated with $i$.
We will prove that, for any partial solutions $X$ and $W$ associated with $i$, if $X\sim_i W$, then, for any complement solution $(Y,\mathcal{P}_Y)$ associated with $i$, the graph $G[X\cup Y]$ is an $S$-forest if and only if $G[W\cup Y]$ is an $S$-forest.
Then, given a set of partial solutions $\mathcal{A}$ whose size needs to be reduced, it is sufficient to keep, for each $i\in\mathbb{I}_x$ and each equivalence class $\mathcal{C}$ of $\sim_i$, one partial solution in $\mathcal{C}$ of maximal weight.
The resulting set of partial solutions has size bounded by $|\mathbb{I}_x|\cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$ because $\sim_i$ generates at most $(4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$ equivalence classes.
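The reduction step just described can be sketched as follows in Python. This is a minimal illustration with placeholder callbacks: `indices_of`, `cc` and `weight` stand for the paper's association of solutions with indices, the partition $\operatorname{cc}(X,i)$, and the weight function, none of which are implemented here.

```python
def reduce_solutions(solutions, indices_of, cc, weight):
    """Keep, for each pair (index i, equivalence class of ~_i), a single
    partial solution of maximum weight; all others are redundant."""
    best = {}
    for X in solutions:
        for i in indices_of(X):
            key = (i, cc(X, i))       # one bucket per (i, cc(X, i))
            if key not in best or weight(best[key]) < weight(X):
                best[key] = X
    return set(best.values())

# Tiny hypothetical instance: solutions are vertex sets, the "class" of a
# solution is just the parity of its size, and the weight is the sum.
sols = [frozenset({1}), frozenset({3}), frozenset({1, 2}), frozenset({2, 4})]
kept = reduce_solutions(sols, indices_of=lambda X: [0],
                        cc=lambda X, i: len(X) % 2, weight=sum)
print(kept)
```

The size of the returned set is bounded by the number of buckets, which mirrors the bound $|\mathbb{I}_x|\cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$ stated above.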
Intuitively, given two partial solutions $X$ and $W$ associated with $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})$, we have $X\sim_i W$ if the blocks of $\mathsf{VC}$ (i.e., the vertex cover described by $i $) are \textit{equivalently connected} in $G[X_{\downarrow \operatorname{cc}(X\setminus S)}\mid \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S} ]$ and $G[W_{\downarrow \operatorname{cc}(W\setminus S)}\mid \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S} ]$.
In order to compare these connections, we use the following notion.
\begin{definition}[$\operatorname{cc}(X,i)$]\label{def:aux(X,i)}
Let $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in \mathbb{I}_x$ and $X\subseteq V_x$ be a partial solution associated with $i$. For each connected component $C$ of $\operatorname{aux}(X,i)$, we define the set $C_\mathsf{vc}$ as follows:
\begin{itemize}
\item for every $U\in C$ such that $U\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S}$, we have $U \in C_\mathsf{vc}$,
\item for every $\{v\}\in \binom{X\cap S}{1}\cap C$ such that $\{v\}\equi{V_x}{1} R$ for some $R\in \mathscr{X}_{\mathsf{vc}}^{S}$, we have $R\in C_\mathsf{vc}$,
\item for every $U\in \operatorname{cc}(X\setminus S)$ such that $U\equi{V_x}{2} R$ for some $R \in \mathscr{X}_{\mathsf{vc}}^{\comp{S}}$, we have $R \in C_\mathsf{vc}$.
\end{itemize}
We define $\operatorname{cc}(X,i)$ as the collection $\{C_\mathsf{vc} \mid C \text{ is a connected component of } \operatorname{aux}(X,i)\}$.
\end{definition}
For a connected component $C$ of $\operatorname{aux}(X,i)$, the set $C_\mathsf{vc}$ contains $C \cap (\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S})$ and the representatives of the blocks in $C\cap X_{\downarrow \operatorname{cc}(X\setminus S)}\cap \mathsf{VC}$ with $\mathsf{VC}$ the vertex cover described by $i$.
Consequently, for every partial solution $X$ associated with an index $i\in\mathbb{I}_x$, the collection $\operatorname{cc}(X,i)$ is a partition of $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S}\cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$.
For the example given in Figure \ref{fig:auxxi}, observe that $\operatorname{cc}(X,i)$ is the partition that contains $\{R_1,U_1,U_2\}$, $\{R_2,U_3\}$ and $\{U_4\}$ (see also Figure~\ref{fig:ccXi}).
Now we are ready to give the notion of equivalence between partial solutions.
We say that two partial solutions $X,W$ associated with $i$ are $i$-equivalent, denoted by $X\sim_i W$, if $\operatorname{cc}(X,i)=\operatorname{cc}(W,i)$.
Our next result is the most crucial step.
As already explained, our task is to show that $i$-equivalent partial solutions behave the same, with respect to $S$-forests, under any complement solution.
Figure~\ref{fig:equivalent} gives an example of two $i$-equivalent partial solutions.
\begin{lemma}\label{lemma:equivalence}
Let $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in \mathbb{I}_x$. For every partial solutions $X,W$ associated with $i$ such that $X\sim_i W$ and for every complement solution $(Y,\mathcal{P}_Y)$ associated with $i$, the graph $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest if and only if the graph $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$ is a forest.
\end{lemma}
\begin{proof}
Let $X,W$ be two partial solutions associated with $i$ such that $X\sim_i W$ and let $(Y,\mathcal{P}_Y)$ be a complement solution associated with $i$.
To prove this lemma, we show that if $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$ contains a cycle, then $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ contains a cycle too. See Figure~\ref{fig:intuitionCycles} for some intuition behind this proof.
We will use the following notation in this proof.
For $Z\in \{X,W\}$, we denote by $\mathsf{VC}_Z$ the set that contains:
\begin{itemize}
\item all $\{v\}\in \binom{Z\cap S}{1}$ such that $\rep{V_x}{1}(\{v\})\in \mathscr{X}_{\mathsf{vc}}^{S}$,
\item all $P\in \operatorname{cc}(Z\setminus S)$ such that $\rep{V_x}{2}(P)\in \mathscr{X}_{\mathsf{vc}}^{\comp{S}}$.
\end{itemize}
We define also $\mathsf{VC}_Y$ as the set that contains:
\begin{itemize}
\item all $\{v\}\in \binom{Y\cap S}{1}$ such that $\rep{\comp{V_x}}{1}(\{v\})\in \mathscr{Y}_{\mathsf{vc}}^{S}$,
\item all $P\in \mathcal{P}_Y$ such that $\rep{\comp{V_x}}{2}(P)\in \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.
\end{itemize}
The sets $\mathsf{VC}_X,\mathsf{VC}_W$ and $\mathsf{VC}_Y$ contain the blocks in $X_{\downarrow \operatorname{cc}(X\setminus S)}, W_{\downarrow \operatorname{cc}(W\setminus S)}$ and $Y_{\downarrow \mathcal{P}_Y}$, respectively, which belong to the vertex cover described by $i$.
Finally, for each $Z\in\{X,W\}$, we define the following two edge-disjoint subgraphs of $G[Z\cup Y]_{\downarrow\operatorname{cc}(Z\setminus S)\cup\mathcal{P}_Y}$:
\begin{itemize}
\item $G_Z=G[Z_{\downarrow \operatorname{cc}(Z\setminus S)} \mid \mathsf{VC}_Y]$,
\item $\comp{G_Z}= G[Z\cup Y]_{\downarrow \operatorname{cc}(Z\setminus S)\cup\mathcal{P}_Y} - G_Z$.
\end{itemize}
As explained in the proof of Lemma \ref{lemma:exitsAnIndex}, for any $Z\in \{X,W\}$, the graph $\operatorname{aux}(Z,i)$ is isomorphic to the graph $G_Z$.
Informally, $G_Z$ contains the edges of $G[Z\cup Y]_{\downarrow \operatorname{cc}(Z\setminus S)\cup \mathcal{P}_Y}$ which are induced by $Z_{\downarrow\operatorname{cc}(Z\setminus S)}$ and those between $Z_{\downarrow\operatorname{cc}(Z\setminus S)}$ and $\mathsf{VC}_Y$.
The following fact implies that $\comp{G_Z}$ contains the edges of $G[Z\cup Y]_{\downarrow \operatorname{cc}(Z\setminus S)\cup \mathcal{P}_Y}$ that are induced by $Y_{\downarrow \mathcal{P}_Y}$ and those between $Y_{\downarrow \mathcal{P}_Y} \setminus \mathsf{VC}_Y$ and $\mathsf{VC}_Z$.
\begin{fact}\label{fact:vertexcover}
For any $Z\in \{X,W\}$, the set $\mathsf{VC}_Z\cup\mathsf{VC}_Y$ is a vertex cover of $G[Z,Y]_{\downarrow \operatorname{cc}(Z\setminus S)\cup \mathcal{P}_Y}$.
\end{fact}
\begin{proof}
First observe that $N(Y\setminus V(\mathsf{VC}_Y)) \cap \mathscr{X}_{\comp{\mathsf{vc}}}=\varnothing$ thanks to Condition~(f) of Definition \ref{def:complementsolution}.
Moreover, we have $\mathscr{X}_{\comp{\mathsf{vc}}}\equi{V_x}{1} Z\setminus V(\mathsf{VC}_Z)$ by Condition (f) of Definition \ref{def:partialsolution}.
We conclude that there are no edges between $Y_{\downarrow\mathcal{P}_Y}\setminus \mathsf{VC}_Y$ and $Z_{\downarrow \operatorname{cc}(Z\setminus S)}\setminus \mathsf{VC}_Z$ in $G[Z,Y]_{\downarrow \operatorname{cc}(Z\setminus S)\cup\mathcal{P}_Y}$.
Hence, $\mathsf{VC}_Z\cup \mathsf{VC}_Y$ is a vertex cover of $G[Z,Y]_{\downarrow \operatorname{cc}(Z\setminus S)\cup\mathcal{P}_Y}$.
\end{proof}
Assume that $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$ contains a cycle $C$.
Our task is to show that $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ contains a cycle as well.
We first explore properties of $C$ with respect to $G_W$ and $\comp{G_W}$.
Since the graph $\operatorname{aux}(W,i)$ is a forest and it is isomorphic to $G_W$, we know that $C$ must contain at least one edge from $\comp{G_W}$.
Moreover, $C$ must go through a block of $W_{\downarrow\operatorname{cc}(W\setminus S)}$ because $G[Y]_{\downarrow\mathcal{P}_Y}$ is a forest.
Consequently, and because $\mathsf{VC}_W\cup \mathsf{VC}_Y$ is a vertex cover of $G[W, Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$ (Fact~\ref{fact:vertexcover}), we deduce that $C$ is the concatenation of edge-disjoint paths $P_1,\dots,P_t$ such that, for each $\ell\in [t]$, we have:
\begin{itemize}
\item $P_\ell$ is a non-empty \textit{path} with endpoints in $\mathsf{VC}_W\cup \mathsf{VC}_Y$ and internal blocks not in $\mathsf{VC}_W\cup \mathsf{VC}_Y$, and $P_\ell$ is either a path of $G_W$ or a path of $\comp{G_W}$.
\end{itemize}
At least one of these paths is in $\comp{G_W}$ and, potentially, $C$ may be entirely contained in $\comp{G_W}$.
Figure~\ref{fig:cyclewy} presents two possible interactions between $C$ and the graphs $G_W$ and $\comp{G_W}$.
\begin{figure}
\caption{How cycles in $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$ can interact with the graphs $G_W$ and $\comp{G_W}$.}
\label{fig:cyclewy}
\end{figure}
Given an endpoint $U\in \mathsf{VC}_W\cup \mathsf{VC}_Y$ of one of the paths $P_1,\dots,P_t$, we define $U_X$ and $U_i$ as the analogs of $U$ in $\mathsf{VC}_X\cup \mathsf{VC}_Y$ and $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S} \cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$, respectively, as follows:
\begin{itemize}
\item if $U\in \operatorname{cc}(W\setminus S)$, then $U_X$ and $U_i$ are the unique elements of $\operatorname{cc}(X\setminus S)$ and $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}$, respectively, such that $U\equi{V_x}{2} U_X \equi{V_x}{2} U_i$,
\item if $U=\{v\}\in \binom{W\cap S}{1}$, then $U_X$ and $U_i$ are the unique elements in $\binom{X\cap S}{1}$ and $\mathscr{X}_{\mathsf{vc}}^{S}$, respectively, such that $U\equi{V_x}{1} U_X \equi{V_x}{1} U_i$,
\item if $U\in \mathcal{P}_Y$, then $U_X=U$ and $U_i$ is the unique element of $\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$ such that $U\equi{\comp{V_x}}{2} U_i$,
\item otherwise, if $U=\{v\}\in \binom{Y\cap S}{1}$, then $U_X=U$ and $U_i$ is the unique element of $\mathscr{Y}_{\mathsf{vc}}^{S}$ such that $U\equi{\comp{V_x}}{1} U_i$.
\end{itemize}
Observe that $U_X$ and $U_i$ exist by Conditions (a) and (b) of Definition \ref{def:partialsolution} and Definition \ref{def:complementsolution}.
For each $\ell\in [t]$, we construct a non-empty path $P'_\ell$ whose endpoints are the analogs in $\mathsf{VC}_X\cup \mathsf{VC}_Y$ of the endpoints of $P_\ell$ and such that if $P_\ell$ is a path in $G_W$ (resp. $\comp{G_W}$), then $P'_\ell$ is a path in $G_X$ (resp. $\comp{G_X}$).
This is sufficient to prove the claim: by concatenating the paths $P'_1,\dots,P'_t$, we obtain a closed walk in $G[X\cup Y]_{\downarrow\operatorname{cc}(X\setminus S)\cup\mathcal{P}_Y}$.
Since $G_X$ and $\comp{G_X}$ are edge-disjoint and the paths $P'_1,\dots,P'_t$ are non-empty paths, this closed walk must contain a cycle.
Let $\ell\in [t]$ and $U,T$ be the endpoints of $P_\ell$.
We denote by $U_X,T_X,U_i$ and $T_i$ the analogs of $U$ and $T$ in $\mathsf{VC}_X\cup \mathsf{VC}_Y$ and $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S} \cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$, respectively.
First, assume that $P_\ell$ is a path of $G_W$.
Observe that $U_i$ and $T_i$ belong to the same partition class of $\operatorname{cc}(W,i)$.
This follows from the definitions of $U_i,T_i$ and the fact that $G_W$ is isomorphic to $\operatorname{aux}(W,i)$.
As $W\sim_i X$, we deduce that $U_i$ and $T_i$ belong to the same partition class of $\operatorname{cc}(X,i)$.
By construction, $U_i$ and $T_i$ are the analogs of $U_X$ and $T_X$ in $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S} \cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{Y}_{\mathsf{vc}}^{S}$.
We conclude that $U_X$ and $T_X$ are connected in $G_X$ via a path $P'_\ell$.
We claim that $P'_\ell$ is non-empty, that is, $U_X\neq T_X$.
As $P_\ell$ is a non-empty path of $G_W$ and $G_W$ is acyclic, we know that $U$ and $T$ are distinct.
Hence, by the construction of $U_X$ and $T_X$, we deduce that $U_X\neq T_X$.
Now, assume that $P_\ell$ is a non-empty path of $\comp{G_W}$.
Since $\mathsf{VC}_W\cup \mathsf{VC}_Y$ is a vertex cover of $G[W,Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$, the blocks in $W_{\downarrow \operatorname{cc}(W\setminus S)}$ that do not belong to $\mathsf{VC}_W$ are isolated in $\comp{G_W}$.
As $P_\ell$ is not empty, the blocks of $P_\ell$ which belong to $W_{\downarrow \operatorname{cc}(W\setminus S)}$ are in $\mathsf{VC}_W$.
Because the internal blocks of the paths $P_1,\dots,P_t$ are not in $\mathsf{VC}_W\cup \mathsf{VC}_Y$, we deduce that the internal blocks of $P_\ell$ belong to $Y_{\downarrow \mathcal{P}_Y}$. We distinguish the following cases:
\begin{itemize}
\item If both endpoints of $P_\ell$ belong to $\mathsf{VC}_Y$, then $U_X=U$, $T_X=T$ and all the blocks of $P_\ell$ belong to $Y_{\downarrow \mathcal{P}_Y}$. It follows that $P_\ell$ is a non-empty path of $\comp{G_X}$ because $G[Y]_{\downarrow \mathcal{P}_Y}$ is a subgraph of $\comp{G_X}$. In this case, we take $P'_\ell=P_\ell$.
\item Assume now that one or two endpoints of $P_\ell$ belong to $\mathsf{VC}_W$.
Suppose w.l.o.g. that $U$ belongs to $\mathsf{VC}_W$.
Since $P_\ell$ is non-empty and the internal blocks of $P_\ell$ are in $Y_{\downarrow \mathcal{P}_Y}$, $U$ has a neighbor $Q\in Y_{\downarrow \mathcal{P}_Y}$ in $P_\ell$.
We claim that $Q$ is adjacent to $U_X$ in $\comp{G_X}$.
By definition of $U_X$, we have $U\equi{V_x}{d} U_X$ for some $d\in\{1,2\}$ and in particular $N(U)\cap \comp{V_x}= N(U_X)\cap \comp{V_x}$.
As $U$ and $Q$ are adjacent in $\comp{G_W}$, we deduce that $N(U)\cap Q \neq \varnothing$.
It follows that $N(U_X)\cap Q\neq\varnothing$ and thus $Q$ and $U_X$ are adjacent in $\comp{G_X}$.
Symmetrically, we can prove that if $T\in \mathsf{VC}_W$, then the neighbor of $T$ in $P_\ell$ is adjacent to $T_X$ in $\comp{G_X}$.
Hence, the neighbors of $U$ and $T$ in $P_\ell$ are adjacent to $U_X$ and $T_X$ respectively in $\comp{G_X}$.
We obtain $P_\ell'$ from $P_\ell$ by replacing $U$ and $T$ by $U_X$ and $T_X$.
Since the internal blocks of $P_\ell$ belong to $Y_{\downarrow \mathcal{P}_Y}$ and $G[Y]_{\downarrow \mathcal{P}_Y}$ is a subgraph of $\comp{G_X}$, we deduce that $P'_\ell$ is a path of $\comp{G_X}$.
The path $P'_\ell$ is non-empty because it contains $U_X$ and $Q$, which are distinct blocks of $\comp{G_X}$ since $U_X\in \mathsf{VC}_X$ (as $U\in\mathsf{VC}_W$ by assumption) and $Q\in Y_{\downarrow \mathcal{P}_Y}$.
\end{itemize}
\end{proof}
The following lemma proves that, for every set of partial solutions $\mathcal{A}\subseteq 2^{V_x}$, we can compute a small subset $\mathcal{B}\subseteq \mathcal{A}$ such that $\mathcal{B}$ \textit{represents} $\mathcal{A}$, i.e., for every $Y\subseteq \comp{V_x}$, the best solutions obtained by taking the union of $Y$ with a set in $\mathcal{A}$ are as good as those obtained with a set in $\mathcal{B}$.
First, we formalize this notion of representativity.
\begin{definition}[Representativity]\label{def:representativity}
For every $\mathcal{A}\subseteq 2^{V_x}$ and $Y\subseteq \comp{V_x}$, we define
\[ \operatorname{best}(\mathcal{A},Y)= \operatorname{max}\{ \mathsf{w}(X) \mid X\in \mathcal{A} \text{ and } G[X\cup Y] \text{ is an } S\text{-forest} \} .\]
Given $\mathcal{A},\mathcal{B}\subseteq 2^{V_x}$, we say that $\mathcal{B}$ \emph{represents} $\mathcal{A}$ if, for every $Y\subseteq \comp{V_x}$, we have $\operatorname{best}(\mathcal{A},Y)=\operatorname{best}(\mathcal{B},Y)$.
\end{definition}
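In pseudocode terms, $\operatorname{best}$ can be sketched as follows (a minimal illustration, not part of the algorithm; the predicate \texttt{is\_S\_forest} and the weight map \texttt{w} are assumptions about the representation):

```python
def best(A, Y, w, is_S_forest):
    """Sketch of best(A, Y): the maximum weight w(X) over all X in A such
    that G[X ∪ Y] is an S-forest; the max over the empty set is -infinity."""
    candidates = [sum(w[v] for v in X) for X in A if is_S_forest(X | Y)]
    return max(candidates, default=float("-inf"))
```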
We recall that $\mathsf{s\text{-}nec}_2(A)=\operatorname{max}(\mathsf{nec}_2(A),\mathsf{nec}_2(\comp{A}))$.
\begin{lemma}\label{lem:reduce}
There exists an algorithm $\operatorname{reduce}$ that, given a set $\mathcal{A}\subseteq 2^{V_x}$, outputs in time $O(|\mathcal{A}|\cdot |\mathbb{I}_x| \cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}\cdot \log(\mathsf{s\text{-}nec}_2(V_x)) \cdot n^3)$ a subset $\mathcal{B}\subseteq \mathcal{A}$ such that $\mathcal{B}$ represents $\mathcal{A}$ and $|\mathcal{B}|\leqslant |\mathbb{I}_x| \cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$.
\end{lemma}
\begin{proof}
Given $\mathcal{A}\subseteq 2^{V_x}$ and $i\in\mathbb{I}_x$, we define $\operatorname{reduce}(\mathcal{A},i)$ as the operation which returns a set containing, for each equivalence class of $\sim_i$, one partial solution $X\in \mathcal{A}$ associated with $i$ such that $\mathsf{w}(X)$ is maximum.
Moreover, we define $\operatorname{reduce}(\mathcal{A})= \bigcup_{i\in\mathbb{I}_x}\operatorname{reduce}(\mathcal{A},i)$.
We prove first that $\operatorname{reduce}(\mathcal{A})$ represents $\mathcal{A}$, that is $\operatorname{best}(\mathcal{A},Y)=\operatorname{best}(\operatorname{reduce}(\mathcal{A}),Y)$ for all $Y\subseteq \comp{V_x}$.
Let $Y\subseteq \comp{V_x}$.
Since $\operatorname{reduce}(\mathcal{A})\subseteq \mathcal{A}$, we already have $\operatorname{best}(\operatorname{reduce}(\mathcal{A}),Y)\leqslant \operatorname{best}(\mathcal{A},Y)$.
Consequently, if there is no $X\in\mathcal{A}$ such that $G[X\cup Y]$ is an $S$-forest, we have $\operatorname{best}(\operatorname{reduce}(\mathcal{A}),Y)= \operatorname{best}(\mathcal{A},Y)=\operatorname{max}(\varnothing)=-\infty$.
Assume now that there exists some $X'\in\mathcal{A}$ such that $G[X'\cup Y]$ is an $S$-forest.
Let $X\in \mathcal{A}$ be such that $G[X\cup Y]$ is an $S$-forest and $\mathsf{w}(X)=\operatorname{best}(\mathcal{A},Y)$.
By Lemma \ref{lemma:exitsAnIndex}, there exists $i\in\mathbb{I}_x$ and an $\comp{S}$-contraction $\mathcal{P}_Y$ of $Y$ such that (1)~$G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest, (2)~$X$ is a partial solution associated with $i$ and (3)~$(Y,\mathcal{P}_Y)$ is a complement solution associated with $i$.
From the construction of $\operatorname{reduce}(\mathcal{A},i)$, there exists $W\in \operatorname{reduce}(\mathcal{A})$ such that $W$ is a partial solution associated with $i$, $X\sim_i W$, and $\mathsf{w}(W)\geqslant \mathsf{w}(X)$.
By Lemma \ref{lemma:equivalence} and since $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ is a forest, we deduce that $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$ is a forest too.
Thanks to Lemma \ref{lemma:forestImpliesS-forest}, we deduce that $G[W\cup Y]$ is an $S$-forest.
As $\mathsf{w}(W)\geqslant \mathsf{w}(X)=\operatorname{best}(\mathcal{A},Y)$, we conclude that $\operatorname{best}(\mathcal{A},Y)=\operatorname{best}(\operatorname{reduce}(\mathcal{A}),Y)$.
Hence, $\operatorname{reduce}(\mathcal{A})$ represents $\mathcal{A}$.
We claim that $|\operatorname{reduce}(\mathcal{A})| \leqslant |\mathbb{I}_x| \cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$.
For every $i=(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in\mathbb{I}_x$ and every partial solution $X$ associated with $i$, $\operatorname{cc}(X,i)$ is a partition of $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}\cup \mathscr{X}_{\mathsf{vc}}^{S}\cup \mathscr{X}_{\comp{\mathsf{vc}}} \cup \mathscr{Y}_{\mathsf{vc}}^{\comp{S}} \cup \mathscr{Y}_{\mathsf{vc}}^{S}$.
Since $|\mathscr{X}_{\mathsf{vc}}^{\comp{S}}|+|\mathscr{X}_{\mathsf{vc}}^{S}|+|\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}| + |\mathscr{Y}_{\mathsf{vc}}^{S}| \leqslant 4\mathsf{mim}(V_x)$, there are at most $(4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$ possible values for $\operatorname{cc}(X,i)$.
We deduce that, for every $i\in\mathbb{I}_x$, the relation $\sim_i$ generates at most $(4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$ equivalence classes, so $|\operatorname{reduce}(\mathcal{A},i)|\leqslant (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$ for every $i\in \mathbb{I}_x$.
By construction, we conclude that $|\operatorname{reduce}(\mathcal{A})|\leqslant |\mathbb{I}_x| \cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$.
It remains to prove the runtime.
As $\mathsf{nec}_1(V_x)\leqslant \mathsf{nec}_2(V_x)$, by Lemma \ref{lem:computenecd} we can compute in time $O(\mathsf{s\text{-}nec}_2(V_x) \cdot \log(\mathsf{s\text{-}nec}_2(V_x)) \cdot n^2)$ the sets $\Rep{V_x}{1},\Rep{V_x}{2}$, $\Rep{\comp{V_x}}{2}$ and data structures which compute $\rep{V_x}{1},\rep{V_x}{2}$ and $\rep{\comp{V_x}}{2}$ in time $O(\log(\mathsf{s\text{-}nec}_2(V_x))\cdot n^2)$.
Given $\Rep{V_x}{1},\Rep{V_x}{2}$, $\Rep{\comp{V_x}}{2}$, we can compute $\mathbb{I}_x$ in time $O(|\mathbb{I}_x|\cdot n^2)$.
Since $\mathsf{s\text{-}nec}_2(V_x)\leqslant |\mathbb{I}_x|$, the time required to compute these sets and data structures is less than $O(|\mathcal{A}|\cdot |\mathbb{I}_x| \cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}\cdot \log(\mathsf{s\text{-}nec}_2(V_x)) \cdot n^3)$.
For each $i\in\mathbb{I}_x$ and $X\in \mathcal{A}$, we can decide whether $X$ is a partial solution associated with $i$ and compute $\operatorname{aux}(X,i)$, $\operatorname{cc}(X,i)$ in time $O(\log(\mathsf{s\text{-}nec}_2(V_x))\cdot n^3)$.
To do so, we first compute $\rep{V_x}{2}(C)$ and $\rep{V_x}{1}(\{v\})$ for each $C\in \operatorname{cc}(X\setminus S)$ and each $\{v\}\in \binom{X\cap S}{1}$; this is doable in time $O( \log(\mathsf{s\text{-}nec}_2(V_x))\cdot n^3)$ since $|\operatorname{cc}(X\setminus S)|+|X\cap S|\leqslant n$. Then, with standard algorithmic techniques, we check whether $X$ satisfies all the conditions of Definition~\ref{def:partialsolution} and compute $\operatorname{aux}(X,i)$ and $\operatorname{cc}(X,i)$.
Given two partial solutions $X,W$ associated with $i$, $\operatorname{cc}(X,i)$ and $\operatorname{cc}(W,i)$, we can decide whether $X\sim_i W$ in time $O(\mathsf{mim}(V_x)) \leqslant O(n)$.
We deduce that, for each $i\in\mathbb{I}_x$, we can compute $\operatorname{reduce}(\mathcal{A},i)$ in time
$O(|\mathcal{A}|\cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)} \cdot \log(\mathsf{s\text{-}nec}_2(V_x))\cdot n^3)$.
We deduce the running time to compute $\operatorname{reduce}(\mathcal{A})$ by multiplying the running time needed for $\operatorname{reduce}(\mathcal{A},i)$ by $|\mathbb{I}_x|$.
\end{proof}
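The $\operatorname{reduce}$ operation defined in the proof above can be sketched as follows; the class key $\operatorname{cc}(X,i)$ is abstracted into a hypothetical function \texttt{cc\_key}, which is assumed to return \texttt{None} when $X$ is not a partial solution associated with $i$:

```python
def reduce_solutions(A, indices, cc_key, w):
    """Sketch of reduce(A): for each index i and each equivalence class of
    ~_i, keep one partial solution of maximum weight.  cc_key(X, i) (an
    assumption) returns the class key cc(X, i), or None when X is not a
    partial solution associated with i."""
    B = set()
    for i in indices:
        best_per_class = {}          # class key -> heaviest solution seen
        for X in A:
            key = cc_key(X, i)
            if key is None:
                continue             # X is not associated with i
            cur = best_per_class.get(key)
            if cur is None or sum(w[v] for v in X) > sum(w[v] for v in cur):
                best_per_class[key] = X
        B.update(best_per_class.values())
    return B
```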
We are now ready to prove the main theorem of this paper. For two subsets $\mathcal{A}$ and $\mathcal{B}$ of $2^{V(G)}$, we define the \emph{merging} of $\mathcal{A}$ and $\mathcal{B}$, denoted by $\mathcal{A}\otimes \mathcal{B}$, as $\mathcal{A} \otimes \mathcal{B} :=\{ X\cup Y \mid X\in \mathcal{A} \,\text{ and }\, Y\in \mathcal{B}\}$.
Observe that $\mathcal{A}\otimes \mathcal{B}=\varnothing$ whenever $\mathcal{A}=\varnothing$ or $\mathcal{B}=\varnothing$.
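A minimal sketch of the merging operation, with sets represented as Python \texttt{frozenset}s (the representation is an assumption):

```python
def merge(A, B):
    """The merging A ⊗ B: all unions X ∪ Y with X in A and Y in B.
    Note that the result is empty whenever A or B is empty."""
    return [X | Y for X in A for Y in B]
```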
\begin{theorem}\label{thm:main}
There exists an algorithm that, given an $n$-vertex graph $G$ and a rooted layout $(T,\delta)$ of $G$, solves \textsc{Subset Feedback Vertex Set} in time
\[ O\left(\sum_{x\in V(T)} |\mathbb{I}_x|^3\cdot (4\mathsf{mim}(V_x))^{12\mathsf{mim}(V_x)}\cdot \log(\mathsf{s\text{-}nec}_2(V_x)) \cdot n^3\right). \]
\end{theorem}
\begin{proof}
The algorithm is a usual bottom-up dynamic programming algorithm.
For every node $x$ of $T$, the algorithm computes a set of partial solutions $\mathcal{A}_x \subseteq 2^{V_x}$ such that $\mathcal{A}_x$ represents $2^{V_x}$ and $|\mathcal{A}_x|\leqslant |\mathbb{I}_x| \cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$.
For the leaves $x$ of $T$ such that $V_x=\{v\}$, we simply take $\mathcal{A}_x= 2^{V_x} = \{\varnothing, \{v\}\}$.
In order to compute $\mathcal{A}_x$ for $x$ an internal node of $T$ with $a$ and $b$ as children, our algorithm will simply compute $\mathcal{A}_x= \operatorname{reduce}(\mathcal{A}_a\otimes \mathcal{A}_b)$.
Once the set $\mathcal{A}_r$ is computed, with $r$ the root of $T$, our algorithm outputs a set $X\in \mathcal{A}_r$ of maximum weight.
By Lemma~\ref{lem:reduce}, we have $|\mathcal{A}_x|\leqslant |\mathbb{I}_x|\cdot (4\mathsf{mim}(V_x))^{4\mathsf{mim}(V_x)}$, for every node $x$ of $T$.
The following claim helps us to prove that $\mathcal{A}_x$ represents $2^{V_x}$ for the internal nodes $x$ of $T$.
\begin{claim}\label{claim:representsPropagation}
Let $x$ be an internal node of $T$ with $a$ and $b$ as children.
If $\mathcal{A}_a$ and $\mathcal{A}_b$ represent, respectively, $2^{V_a}$ and $2^{V_b}$, then $\operatorname{reduce}(\mathcal{A}_a\otimes \mathcal{A}_b)$ represents $2^{V_x}$.
\end{claim}
\begin{proof}
Assume that $\mathcal{A}_a$ and $\mathcal{A}_b$ represent, respectively, $2^{V_a}$ and $2^{V_b}$.
First, we prove that $\mathcal{A}_a\otimes \mathcal{A}_b$ represents $2^{V_x}$.
We have to prove that, for every $Y\subseteq \comp{V_x}$, we have $\operatorname{best}(\mathcal{A}_a\otimes \mathcal{A}_b,Y)=\operatorname{best}(2^{V_x},Y)$.
Let $Y\subseteq \comp{V_x}$.
By definition of $\operatorname{best}$, we have the following
\begin{align*}
\operatorname{best}(\mathcal{A}_a\otimes \mathcal{A}_b,Y) &= \operatorname{max}\{ \mathsf{w}(X)+ \mathsf{w}(W)\mid X\in \mathcal{A}_a\wedge W\in \mathcal{A}_b\wedge G[X\cup W\cup Y] \text{ is an $S$-forest} \} \\
&= \operatorname{max}\{ \operatorname{best}(\mathcal{A}_a,W\cup Y) + \mathsf{w}(W) \mid W\in \mathcal{A}_b \}.
\end{align*}
As $\mathcal{A}_a$ represents $2^{V_a}$, we have $\operatorname{best}(\mathcal{A}_a,W\cup Y)=\operatorname{best}(2^{V_a},W\cup Y)$ and we deduce that $\operatorname{best}(\mathcal{A}_a\otimes \mathcal{A}_b,Y)=\operatorname{best}(2^{V_a}\otimes\mathcal{A}_b,Y)$.
Symmetrically, as $\mathcal{A}_b$ represents $2^{V_b}$, we infer that $\operatorname{best}(2^{V_a}\otimes\mathcal{A}_b,Y)=\operatorname{best}(2^{V_a}\otimes 2^{V_b},Y)$.
Since $2^{V_a}\otimes 2^{V_b}= 2^{V_x}$, we conclude that $\operatorname{best}(\mathcal{A}_a\otimes \mathcal{A}_b,Y)$ equals $\operatorname{best}(2^{V_x},Y)$.
As this holds for every $Y$, it proves that $\mathcal{A}_a\otimes \mathcal{A}_b$ represents $2^{V_x}$.
By Lemma~\ref{lem:reduce}, we know that $\operatorname{reduce}(\mathcal{A}_a\otimes \mathcal{A}_b)$ represents $\mathcal{A}_a\otimes \mathcal{A}_b$.
As the relation ``represents'' is transitive, we conclude that $\operatorname{reduce}(\mathcal{A}_a\otimes \mathcal{A}_b)$ represents $2^{V_x}$.
\end{proof}
For the leaves $x$ of $T$, we obviously have that $\mathcal{A}_x$ represents $2^{V_x}$, since $\mathcal{A}_x=2^{V_x}$.
From Claim~\ref{claim:representsPropagation} and by induction, we deduce that $\mathcal{A}_x$ represents $2^{V_x}$ for every node $x$ of $T$.
In particular, $\mathcal{A}_r$ represents $2^{V(G)}$ with $r$ the root of $T$.
By Definition \ref{def:representativity}, $\mathcal{A}_r$ contains a set $X$ of maximum weight such that $G[X]$ is an $S$-forest.
This proves the correctness of our algorithm.
It remains to prove the running time.
Observe that, for every internal node $x$ of $T$ with $a$ and $b$ as children, the size of $\mathcal{A}_a\otimes \mathcal{A}_b$ is at most $|\mathbb{I}_x|^2\cdot (4\mathsf{mim}(V_x))^{8\mathsf{mim}(V_x)}$ and it can be computed in time $O(|\mathbb{I}_x|^2\cdot (4\mathsf{mim}(V_x))^{8\mathsf{mim}(V_x)}\cdot n^2)$.
By Lemma~\ref{lem:reduce}, the set $\mathcal{A}_x=\operatorname{reduce}(\mathcal{A}_a\otimes \mathcal{A}_b)$ is computable in time $O(|\mathbb{I}_x|^3\cdot (4\mathsf{mim}(V_x))^{12\mathsf{mim}(V_x)}\cdot \log(\mathsf{s\text{-}nec}_2(V_x))\cdot n^3)$.
This proves the running time.
\end{proof}
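The bottom-up dynamic programming of the proof above can be sketched as follows, assuming the rooted layout is given as nested pairs whose leaves are vertices; with the identity in place of \texttt{reduce\_fn} this degenerates to brute force, whereas the $\operatorname{reduce}$ of Lemma~\ref{lem:reduce} keeps the tables small without changing the answer:

```python
def solve(layout, w, is_S_forest, reduce_fn):
    """Bottom-up DP over a rooted layout tree.  A leaf is a single vertex
    v (so A_x = {∅, {v}}); an internal node is a pair (left, right), for
    which A_x = reduce_fn(A_left ⊗ A_right).  At the root, return a
    maximum-weight feasible set."""
    def table(node):
        if not isinstance(node, tuple):             # leaf: V_x = {v}
            return [frozenset(), frozenset({node})]
        a, b = node
        merged = [X | Y for X in table(a) for Y in table(b)]
        return reduce_fn(merged)
    feasible = [X for X in table(layout) if is_S_forest(X)]
    return max(feasible, key=lambda X: sum(w[v] for v in X))
```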
\subsection{Algorithmic consequences}
In order to obtain the algorithmic consequences of our meta-algorithm given in Theorem~\ref{thm:main}, we need the following lemma
which bounds the size of each set of indices with respect to the considered parameters.
\begin{lemma}\label{lem:consequences}
For every $x\in V(T)$, the size of $\mathbb{I}_x$ is upper bounded by:
\begin{multicols}{3}
\begin{itemize}
\item $2^{O(\mathsf{rw}(V_x)^3)}$,
\item $2^{O(\mathsf{rw}_\mathbb{Q}(V_x)^2\log(\mathsf{rw}_\mathbb{Q}(V_x)))}$,
\item $n^{O(\mathsf{mim}(V_x)^2)}$.
\end{itemize}
\end{multicols}
\end{lemma}
\begin{proof}
For $A\subseteq V(G)$, let $\mathsf{mw}(A)$ be the number of different rows in the matrix $M_{A,\comp{A}}$.
Observe that, for every $A\subseteq V(G)$, we have $\mathsf{mw}(A)=|\{ \rep{A}{1}(\{v\}) \mid v\in A\}|$.
Remember that the $\mathscr{X}_{\mathsf{vc}}^{S}$'s and $\mathscr{Y}_{\mathsf{vc}}^{S}$'s are subsets of $\{ \rep{V_x}{1}(\{v\}) \mid v\in V_x\}$ and $\{ \rep{\comp{V_x}}{1}(\{v\}) \mid v\in \comp{V_x}\}$ respectively.
From Definition \ref{def:indices}, we have $|\mathscr{X}_{\mathsf{vc}}^{\comp{S}}|+|\mathscr{X}_{\mathsf{vc}}^{S}|+|\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}| + |\mathscr{Y}_{\mathsf{vc}}^{S}| \leqslant 4\mathsf{mim}(V_x)$, for every $(\mathscr{X}_{\mathsf{vc}}^{\comp{S}},\mathscr{X}_{\mathsf{vc}}^{S}, \mathscr{X}_{\comp{\mathsf{vc}}},\mathscr{Y}_{\mathsf{vc}}^{\comp{S}},\mathscr{Y}_{\mathsf{vc}}^{S})\in\mathbb{I}_x$.
Thus, the size of $\mathbb{I}_x$ is at most
\[ \mathsf{nec}_1(V_x)\cdot \big(\mathsf{nec}_2(V_x)+ \mathsf{mw}(V_x) + \mathsf{nec}_2(\comp{V_x}) + \mathsf{mw}(\comp{V_x})\big)^{4\mathsf{mim}(V_x)}. \]
\paragraph{\bf Rank-width.} By Lemma \ref{lem:compare}, we have $\mathsf{nec}_1(V_x)\leqslant 2^{\mathsf{rw}(V_x)^2}$ and $\mathsf{nec}_2(V_x),\mathsf{nec}_2(\comp{V_x})\leqslant 2^{2\mathsf{rw}(V_x)^2}$.
Moreover, there are at most $2^{\mathsf{rw}(V_x)}$ different rows in the matrices $M_{V_x,\comp{V_x}}$ and $M_{\comp{V_x},V_x}$, so $\mathsf{mw}(V_x)$ and $\mathsf{mw}(\comp{V_x})$ are upper bounded by $2^{\mathsf{rw}(V_x)}$.
By Lemma \ref{lem:comparemim}, we have $4\mathsf{mim}(V_x)\leqslant 4\mathsf{rw}(V_x)$.
We deduce from these inequalities that $|\mathbb{I}_x|\leqslant 2^{\mathsf{rw}(V_x)^2}\cdot (2^{2\mathsf{rw}(V_x)^2 + 1} + 2^{\mathsf{rw}(V_x)+1})^{4\mathsf{rw}(V_x)}\in 2^{O(\mathsf{rw}(V_x)^3)}$.
\paragraph{\bf $\mathbb{Q}$-rank-width.} By Lemma \ref{lem:compare}, we have $\mathsf{nec}_1(V_x),\mathsf{nec}_2(V_x),\mathsf{nec}_2(\comp{V_x})\in 2^{O(\mathsf{rw}_\mathbb{Q}(V_x)\log(\mathsf{rw}_\mathbb{Q}(V_x)))}$.
Moreover, there are at most $2^{\mathsf{rw}_\mathbb{Q}(V_x)}$ different rows in the matrices $M_{V_x,\comp{V_x}}$ and $M_{\comp{V_x},V_x}$, so $\mathsf{mw}(V_x)$ and $\mathsf{mw}(\comp{V_x})$ are upper bounded by $2^{\mathsf{rw}_\mathbb{Q}(V_x)}$.
By Lemma \ref{lem:comparemim}, we have $4\mathsf{mim}(V_x)\leqslant 4\mathsf{rw}_\mathbb{Q}(V_x)$.
We deduce from these inequalities that
\[ |\mathbb{I}_x|\leqslant 2^{O(\mathsf{rw}_\mathbb{Q}(V_x)\log(\mathsf{rw}_\mathbb{Q}(V_x)))}\cdot \Big(2^{O(\mathsf{rw}_\mathbb{Q}(V_x)\log(\mathsf{rw}_\mathbb{Q}(V_x)))} + 2^{\mathsf{rw}_\mathbb{Q}(V_x)+1}\Big)^{4\mathsf{rw}_\mathbb{Q}(V_x)}. \]
We conclude that $|\mathbb{I}_x|\in 2^{O(\mathsf{rw}_\mathbb{Q}(V_x)^2\log(\mathsf{rw}_\mathbb{Q}(V_x)))}$.
\paragraph{\bf Mim-width.} By Lemma \ref{lem:compare}, we know that $\mathsf{nec}_1(V_x)\leqslant |V_x|^{\mathsf{mim}(V_x)}$, $\mathsf{nec}_2(V_x) \leqslant |V_x|^{2\mathsf{mim}(V_x)}$, and $\mathsf{nec}_2(\comp{V_x})\leqslant |\comp{V_x}|^{2\mathsf{mim}(V_x)}$.
We can assume that $n>2$ (otherwise the problem is trivial), so $\mathsf{nec}_2(V_x)+\mathsf{nec}_2(\comp{V_x})\leqslant |V_x|^{2\mathsf{mim}(V_x)} + |\comp{V_x}|^{2\mathsf{mim}(V_x)} \leqslant n^{2\mathsf{mim}(V_x)}$.
Moreover, notice that, for every $A\subseteq V(G)$, we have $|\{ \rep{A}{1}(\{v\}) \mid v\in A\}|\leqslant |A|$.
We deduce that $|\mathbb{I}_x|\leqslant n^{\mathsf{mim}(V_x)} \cdot (n+n^{2\mathsf{mim}(V_x)})^{4\mathsf{mim}(V_x)}$.
As we assume that $n>2$, we have $|\mathbb{I}_x|\leqslant n^{8\mathsf{mim}(V_x)^2+5\mathsf{mim}(V_x)}\in n^{O(\mathsf{mim}(V_x)^2)}$.
\end{proof}
Now we are ready to state our algorithms with respect to the parameters rank-width $\mathsf{rw}(G)$ and $\mathbb{Q}$-rank-width $\mathsf{rw}_\mathbb{Q}(G)$.
In particular, with our next result we show that \textsc{Subset Feedback Vertex Set} is in \textsf{FPT}\xspace parameterized by $\mathsf{rw}_\mathbb{Q}(G)$ or $\mathsf{rw}(G)$.
\begin{theorem}\label{thm:qrankmain}
There exist algorithms that solve \textsc{Subset Feedback Vertex Set} in time $2^{O(\mathsf{rw}(G)^3)}\cdot n^4$ and $2^{O(\mathsf{rw}_\mathbb{Q}(G)^2\log(\mathsf{rw}_\mathbb{Q}(G)))}\cdot n^{4}$.
\end{theorem}
\begin{proof}
We first compute a rooted layout $\mathcal{L}=(T,\delta)$ of $G$ such that $\mathsf{rw}(\mathcal{L})\in O(\mathsf{rw}(G))$ or $\mathsf{rw}_\mathbb{Q}(\mathcal{L})\in O(\mathsf{rw}_\mathbb{Q}(G))$.
This is achieved through a $(3k + 1)$-approximation algorithm that runs in \textsf{FPT}\xspace time $O(8^k \cdot n^4)$ parameterized by $k \in \{\mathsf{rw}(G),\mathsf{rw}_\mathbb{Q}(G)\}$ \cite{Oum09a}.
Then, we apply the algorithm given in Theorem~\ref{thm:main}.
Observe that for every node $x\in V(T)$, by Lemma~\ref{lem:consequences},
$|\mathbb{I}_x|^3$ lies in $2^{O(\mathsf{rw}(V_x)^3)}$ and $2^{O(\mathsf{rw}_\mathbb{Q}(V_x)^2\log(\mathsf{rw}_\mathbb{Q}(V_x)))}$ and by Lemma~\ref{lem:compare}, $\mathsf{s\text{-}nec}_2(V_x)$ lies in $2^{O(\mathsf{rw}(V_x)^2)}$ and $2^{O(\mathsf{rw}_\mathbb{Q}(V_x)\log(\mathsf{rw}_\mathbb{Q}(V_x)))}$.
Moreover, Lemma~\ref{lem:comparemim} implies that $\mathsf{mim}(V_x)^{\mathsf{mim}(V_x)}$ is upper bounded by $2^{\mathsf{rw}(G) \log(\mathsf{rw}(G))}$ and $2^{\mathsf{rw}_\mathbb{Q}(G)\log(\mathsf{rw}_\mathbb{Q}(G))}$.
Therefore, we get the claimed runtimes for SFVS since $T$ contains $2n-1$ nodes.
\end{proof}
Regarding mim-width, our algorithm given below shows that \textsc{Subset Feedback Vertex Set} is in \textsf{XP}\xspace parameterized by the mim-width of a given rooted layout.
Note that we cannot solve SFVS in \textsf{FPT}\xspace time parameterized by the mim-width of a given rooted layout unless $\textsf{FPT}\xspace=\textsf{W}[1]$,
since \textsc{Subset Feedback Vertex Set} is known to be \textsf{W}[1]-hard for this parameter even for the special case of $S=V(G)$ \cite{JaffkeKT20}.
Moreover, contrary to the algorithms given in Theorem~\ref{thm:qrankmain}, here we need to assume that the input graph is given with a rooted layout.
However, our next result actually provides a unified polynomial-time algorithm for \textsc{Subset Feedback Vertex Set}
on well-known graph classes having bounded mim-width and for which a layout of bounded mim-width can be computed in polynomial time \cite{BelmonteV13}
(e.g., \textsc{Interval} graphs, \textsc{Permutation} graphs, \textsc{Circular Arc} graphs, \textsc{Convex} graphs, \textsc{$k$-Polygon},
\textsc{Dilworth-$k$} and \textsc{Co-$k$-Degenerate} graphs for fixed $k$).
\begin{theorem}\label{thm:mimmain}
There exists an algorithm that, given an $n$-vertex graph $G$ and a rooted layout $\mathcal{L}$ of $G$, solves \textsc{Subset Feedback Vertex Set} in time $n^{O(\mathsf{mim}(\mathcal{L})^2)}$.
\end{theorem}
\begin{proof}
We apply the algorithm given in Theorem~\ref{thm:main}.
By Lemma~\ref{lem:compare}, we have $\mathsf{s\text{-}nec}_2(V_x)\in n^{O(\mathsf{mim}(V_x))}$ and from Lemma~\ref{lem:consequences} we have $|\mathbb{I}_x|^3\in n^{O(\mathsf{mim}(V_x)^2)}$.
The claimed runtime for SFVS follows by the fact that the rooted tree $T$ of $\mathcal{L}$ contains $2n-1$ nodes.
\end{proof}
Let us relate our results for \textsc{Subset Feedback Vertex Set} to \textsc{Node Multiway Cut}.
It is known that \textsc{Node Multiway Cut} reduces to \textsc{Subset Feedback Vertex Set} \cite{FominHKPV14}.
In fact, we can solve \textsc{Node Multiway Cut} by adding a new vertex $v$ with a large weight that is adjacent to all terminals and, then, run our algorithms for \textsc{Subset Feedback Vertex Set} with $S= \{v\}$ on the resulting graph.
Now observe that any extension of a rooted layout $\mathcal{L}$ of the original graph to the resulting graph has mim-width at most $\mathsf{mim}(\mathcal{L})+1$.
Therefore, all of our algorithms given in Theorems~\ref{thm:qrankmain} and \ref{thm:mimmain} have the same running times for the \textsc{Node Multiway Cut} problem.
\section{Conclusion }
This paper highlights the importance of the $d$-neighbor-equivalence relation to obtain meta-algorithms for several width measures at once.
We extend the range of applications of this relation~\cite{BergougnouxK19esa,BuiXuanTV13,GolovachHKKSV18,OumSV13}
by proving that it is useful for the atypical acyclicity constraint of the \textsc{Subset Feedback Vertex Set} problem.
It would be interesting to see whether this relation can be helpful with other kinds of constraints such as 2-connectivity and other generalizations of
\textsc{Feedback Vertex Set} such as the ones studied in~\cite{BonnetBKM19}.
In particular, one could consider the following generalization of \textsc{Odd Cycle Transversal}:
\pbDef{Subset Odd Cycle Transversal (SOCT)}{A graph $G$ and $S\subseteq V(G)$}{A set $X\subseteq V(G)$ of minimum weight such that $G[\comp{X}]$ does not contain an odd cycle that intersects $S$.}
Similar to SFVS, we can solve SOCT in time $k^{O(k)}\cdot n^{O(1)}$ parameterized by treewidth and this is optimal under ETH~\cite{BergougnouxBBK20}.
We do not know whether SOCT is in \textsf{XP}\xspace parameterized by mim-width, though it is in \textsf{FPT}\xspace parameterized by clique-width or rank-width, since we can express it in MSO$_1$ (with the characterization used in~\cite{BergougnouxBBK20}).
For many well-known graph classes a decomposition of bounded mim-width can be found in polynomial time. However, for general graphs it is known that computing mim-width is \textsf{W}[1]-hard and not in \textsf{APX}\xspace unless $\textsf{NP}=\textsf{ZPP}$ \cite{SaetherV16}, while Yamazaki \cite{Yamazaki18} shows that under the small set expansion hypothesis it is not in \textsf{APX}\xspace unless $\textsf{P}=\textsf{NP}$.
For dynamic programming algorithms as in this paper, to circumvent the assumption that we are given a decomposition, we want functions $f, g$ and an algorithm that given a graph of mim-width OPT computes an $f(OPT)$-approximation to mim-width in time $n^{g(OPT)}$, i.e. \textsf{XP}\xspace by the natural parameter. This is the main open problem in the field. The first task could be to decide if there is a constant $c$ and a polynomial-time algorithm that given a graph $G$ either decides that its mim-width is larger than 1 or else returns a decomposition of mim-width $c$.
\appendix
\section{Explanations with several figures}
\label{appendix}
The following figures explain the relation between solutions ($S$-forests) and an index $i$.
\begin{figure}
\caption{Illustration of the $\comp{S}$-contraction.}
\label{fig:killcycle}
\end{figure}
\begin{figure}
\caption{The contracted graph $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.}
\label{fig:contracted}
\end{figure}
\begin{figure}
\caption{The bipartite graph $G[X,Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.}
\end{figure}
\begin{figure}
\caption{The graph $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$.}
\end{figure}
\begin{figure}
\caption{The set $\mathscr{X}_{\mathsf{vc}}^{\comp{S}}$.}
\end{figure}
\begin{figure}
\caption{The set $\mathscr{X}_{\mathsf{vc}}^{S}$.}
\end{figure}
\begin{figure}
\caption{The set $\mathscr{Y}_{\mathsf{vc}}^{\comp{S}}$.}
\end{figure}
\begin{figure}
\caption{The set $\mathscr{Y}_{\mathsf{vc}}^{S}$.}
\end{figure}
\begin{figure}
\caption{The auxiliary graph $\operatorname{aux}(X,i)$.}
\label{fig:ccXi}
\end{figure}
\begin{figure}
\caption{The auxiliary graphs $\operatorname{aux}(X,i)$ and $\operatorname{aux}(W,i)$.}
\label{fig:equivalent}
\end{figure}
\begin{figure}
\caption{The graphs $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ and $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$.}
\end{figure}
\begin{figure}
\caption{New examples for the graphs $G[X\cup Y]_{\downarrow \operatorname{cc}(X\setminus S)\cup \mathcal{P}_Y}$ and $G[W\cup Y]_{\downarrow \operatorname{cc}(W\setminus S)\cup \mathcal{P}_Y}$.}
\label{fig:intuitionCycles}
\end{figure}
\end{document}
\begin{document}
{\large
\section*{
Consistency in the Naturally Vertex-Signed Line Graph\\ of a Signed Graph
}}
\begin{center}
Thomas Zaslavsky
\\[10pt]
Department of Mathematical Sciences \\
Binghamton University (SUNY) \\
Binghamton, NY 13902-6000, U.S.A.
\\
Email: {\tt [email protected]}
\\[10pt]
\end{center}
\begin{center}
Dedicated to a great man, Dr.\ B.\ Devadas Acharya (1947--2013)
\end{center}
{\small
\begin{quote}
\textbf{Abstract.}
A \emph{signed graph} is a graph whose edges are signed. In a \emph{vertex-signed graph} the vertices are signed. The latter is called \emph{consistent} if the product of signs in every circle is positive. The line graph of a signed graph is naturally vertex-signed. Based on a characterization by Acharya, Acharya, and Sinha in 2009, we give constructions for the signed simple graphs whose naturally vertex-signed line graph is consistent.
\textbf{Mathematics Subject Classification (2010)}: Primary 05C22; Secondary 05C75.
\textbf{Keywords}: Signed graph, line graph, consistent vertex-signed graph, consistent marked graph.
\end{quote}
}
\section{Introduction}
A \emph{signed graph} $\S=(\G,\s)$ consists of a graph $\G=(V,E)$, called its \emph{underlying graph}, and a sign function $\s: E\to\{+1,-1\}$. The most essential question about a signed graph is whether it is \emph{balanced}, that is, the product of the edge signs in every circle (cycle, polygon, circuit) is positive. Signed graphs were introduced by Harary \cite{NB}. The vertex analog is a \emph{vertex-signed graph} (often called a \emph{marked graph}) in which the vertices are signed; the vertex analog of balance is \emph{consistency}, the property that in every circle the product of vertex signs is positive. These analogs were introduced by Beineke and Harary \cite{BH}.
As the edges of a simple graph $\G$ become the vertices of its line graph, if $\G$ is signed by $\s$ then $L(\G)$ naturally has its vertices signed by $\s$; it is a vertex-signed graph, which we call the \emph{naturally vertex-signed line graph} of $\S$ and denote by $\L_\s(\S)$. The natural question is then to find a characterization of signed graphs whose naturally vertex-signed line graphs are consistent in the sense of Beineke and Harary. (Then $\S$ may be called \emph{line consistent} \cite{CLC}.) This question was taken up by Acharya, Acharya, and Sinha in 2009 \cite{AAS}; their solution was the following theorem, which they proved by means of the characterization of consistent vertex-signed graphs due to Hoede \cite{H}.
(Slilaty and Zaslavsky \cite{CLC} simplify the theorem and give a short proof.)
The degree of a vertex is $d(v)$; the \emph{negative degree} $d^-(v)$ is the number of incident negative edges.
\begin{theorem}[{Characterization from \cite[Theorem 2.1]{AAS}}]\label{T:aas}
Assume $\S$ is a signed simple graph. Then $\L_\s(\S)$ is consistent if and only if\/ $\S$ satisfies both of the following conditions:
\begin{enumerate}[Property 1. ]
\item $\S$ is balanced.
\label{Prop1}
\item For each vertex $v\in V$,
\label{Prop2}
\begin{enumerate}[{\rm(a)}]
\item if $d(v)>3$, then $d^-(v)=0$;
\item if $d(v)=3$, then $d^-(v)=0$ or $2$;
\item if $d^-(v)=2$ and $v$ lies on a circle, then both negative edges at $v$ belong to that circle.
\end{enumerate}
\end{enumerate}
\end{theorem}
One interprets Property \ref{Prop2}(c) to mean that the two negative edges lie on every circle through $v$; thus (as observed in \cite{CLC}) the third edge at $v$ (if it exists) is positive and is an isthmus of $\S$.
Theorem \ref{T:aas} is surprisingly simple when applied to signed blocks.
\begin{corollary}[\cite{BSG}]\label{C:2conn}
Let $\S$ be a signed simple, $2$-connected graph. Then $\L_\s(\S)$ is consistent if and only if\/ $\S$ is balanced and all endpoints of negative edges have degree at most $2$.
\end{corollary}
In this paper we build upon Theorem \ref{T:aas} by providing constructive characterizations of signed graphs that satisfy Property \ref{Prop2} and thereby of those signed simple graphs that are line consistent.
\section{Definitions}
We need definitions about graphs, some of which are unusual. We denote an edge with endpoints $u$ and $v$ by $uv$, even if there exist other edges with the same endpoints. (We can do that here because we do not treat parallel edges explicitly.)
A loop contributes 2 to the degree of its vertex.
A \emph{block} of a graph is a maximal subgraph $B$ such that any two edges in $B$ belong to a circle together. (An isolated vertex, an isthmus, and a loop are blocks.) A \emph{nontrivial block} is a block that contains a circle; if the graph is simple it is a block of order three or more. A \emph{megablock} of a graph is a maximal connected union of one or more nontrivial blocks.
A path may be open or closed (unlike the usual definition that excludes closed paths); it joins two \emph{termini}, which are equal in the case of a closed path. An open path may have length 0; it is \emph{nontrivial} if it has positive length. A closed path must have positive length. The \emph{internal vertices} of a path are the vertices other than its termini. The \emph{terminal edges} of a path are its edges that are incident with its termini.
A circle is the graph of a closed path; the difference between a closed path and a circle is that a closed path has a terminus; a circle has no terminus and all its vertices are internal.
For a subgraph $\G' \subseteq \G$ define a \emph{$\G'$-divalent} path or circle to be a nontrivial path or circle in $\G'$ whose internal vertices have $d_{\G'}(v)=2$.
In a signed graph the \emph{sign} of a path or circle is the product of the signs of its edges. The \emph{negative subgraph} of $\S$ is the (unsigned) graph $\S^-$ that has all the vertices and all the negative edges of $\S$.
By \emph{suppressing a divalent vertex} $v$ in a graph, we mean replacing $v$ and its incident edges $uv$, $vw$ by a single edge $uw$. When we suppress a divalent vertex in a signed graph, we give the new edge $uw$ the sign $\s(uw) := \s(uv)\s(vw)$. It is clear that, when suppressing several divalent vertices in a path or circle, the order in which we suppress them does not affect the result. There is one kind of divalent vertex that cannot be suppressed: a divalent vertex that supports a loop. By \emph{suppressing all possible divalent vertices} we mean suppressing divalent vertices until the only ones remaining are those that support loops.
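The sign rule for suppression can be made concrete in a short sketch (the edge-list representation and all names below are ours, not the paper's): a signed graph is a list of triples $(u,v,\sigma)$, and suppressing a divalent vertex multiplies the two incident signs.

```python
# A minimal sketch (our own edge-list representation, not the paper's):
# a signed graph is a list of triples (u, v, sign) with sign in {+1, -1}.

def degree(edges, x):
    # A loop (x, x, s) contributes 2 to the degree of x.
    return sum((u == x) + (v == x) for u, v, _ in edges)

def suppress(edges, x):
    """Suppress the divalent vertex x: replace its two incident edges
    ux, xw by the single edge uw with sign s(ux) * s(xw)."""
    incident = [e for e in edges if x in (e[0], e[1])]
    assert degree(edges, x) == 2 and all(u != v for u, v, _ in incident), \
        "x must be divalent and must not support a loop"
    (a, b, s1), (c, d, s2) = incident
    u = b if a == x else a
    w = d if c == x else c
    rest = [e for e in edges if e not in incident]
    return rest + [(u, w, s1 * s2)]
```

Suppressing the common vertex of two negative edges yields a positive edge, so the sign of every circle through the suppressed vertex is preserved, as used in Lemma \ref{T:bal}.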
Property \ref{Prop1}, balance, is well characterized by the first theorem of signed graph theory. A \emph{cut} is the set of edges with one endpoint in some subset $X\subset V$ and the other endpoint in $V \setminus X$, provided there is at least one such edge (since the empty set is not a cut).
\begin{theorem}[Harary \cite{NB}]\label{T:balance}
A signed graph is balanced if and only if its set of negative edges is empty or a cut.
\end{theorem}
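Harary's criterion suggests a simple algorithmic test, sketched below under our own edge-list representation: try to two-color the vertices so that every negative edge crosses the color classes and no positive edge does. Such a coloring exists precisely when the negative edge set is empty or is the cut determined by one color class.

```python
from collections import deque

def is_balanced(vertices, edges):
    """Balance test via Harary's theorem: a signed graph is balanced
    iff the vertices split into two classes such that every negative
    edge crosses the split and every positive edge does not
    (the negative edges then form a cut, or are absent).
    Sketch with our own representation: edges are (u, v, sign)."""
    adj = {v: [] for v in vertices}
    for u, v, s in edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    side = {}
    for root in vertices:
        if root in side:
            continue
        side[root] = +1
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = side[u] * s   # positive edge: same side; negative: opposite
                if v not in side:
                    side[v] = want
                    queue.append(v)
                elif side[v] != want:
                    return False     # some circle has negative sign product
    return True
```

For example, a triangle with two negative edges is balanced (the circle's sign product is positive), while a triangle with exactly one negative edge is not.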
\section{Constructions}
We present four (more precisely, two and two halves) constructions that enforce Property \ref{Prop2}. The first construction is the simplest. The second has a smaller initial graph and reveals more about line-graph consistency. The third is a variant of the second in which the signs are chosen late rather than early, and the fourth is a special case of the second in which balance is assured through the process of construction.
\begin{construct}\label{CA}
Let $\G$ be a graph. Choose any set $\cA$ of pairwise disjoint paths and circles in $\G$ such that for each one, $P$, either
\begin{enumerate}[(i)]
\item $P$ is a $B$-divalent path in a nontrivial block $B$ of $\G$, every vertex of $P$ has $d_\G(v)\leq3$, and its termini have $d_\G(v)=2$
(note that no vertex of $P$ can be a cutpoint of a megablock subgraph as any such cutpoint has degree at least 4; also, the third edge at each trivalent vertex in $P$ will be an isthmus of $\G$); or
\item $P$ is a path consisting of isthmi, every vertex of $P$ has $d_\G(v)\leq3$, and its termini have $d_\G(v)\leq2$
(note that all edges adjacent to $P$ are isthmi; for if such an edge $uv$, with $u$ in $P$ but not a terminus, belonged to a nontrivial block $B$, $u$ would have degree at least 2 in $B$ and 2 in $P$, thereby having $d_\G(u)\geq4$; similarly, a terminus $u$ would have $d_\G(u)\geq3$); or
\item $P$ is a megablock of $\G$ that is a circle and every vertex of $P$ has $d_\G(v)\leq3$
(note that any third edge will be an isthmus).
\end{enumerate}
Make the edges of the chosen paths and circles negative and the remaining edges positive.
\end{construct}
The purpose of Construction \ref{CB} is to clarify the structure of a line-consistent signed graph by starting with a smaller graph which is signed and enlarged so as to satisfy Property \ref{Prop2}.
\begin{construct}\label{CB}
Let $\G'$ be a graph with no divalent vertices except those, if any, that support a loop.
\emph{Subdividing} an edge means replacing it by a nontrivial path with the same termini or (if it is a loop) a circle; in particular, retaining the edge is considered a subdivision of that edge.
We construct a graph $\G$ by subdividing $\G'$ and then we sign the edges of $\G$ to get $\S$.
\begin{enumerate}[Step 1.]
\item Sign $\G'$ arbitrarily. Call this signed graph $\S'$.
\label{BS'}
\item Choose a subset $F'$ of edges in $\G'$ such that $F' \supseteq E^-(\G')$.
\label{BF'}
\item Partition the edges of $F'$ into nontrivial paths $P'$ and circles $C'$, so that
\begin{enumerate}[ (a)]
\item each path or circle has internal vertices of degree at most 3,
\item the third edge at each trivalent internal vertex is an isthmus,
\item each open path is either contained within a nontrivial block or composed entirely of isthmi (note that a closed path or a circle can only lie within a nontrivial block), and
\item no terminus is divalent in $\G'$.
\end{enumerate}
Let $\cD'$ be the set of these paths and circles. This partitioning can always be done; at worst, each edge is its own path, or its own circle if it is a loop component.
(Note that (d) only forces a loop component in $F'$ to be a circle in $\cD'$.)
\label{BD'}
\item Subdivide each edge $e' \in E'$ into a path $P_{e'}$. The resulting graph is $\G$. Write $P$ for the path or circle that results from subdividing the edges of $P' \in \cD'$ and let $\cD$ be the set of paths and circles $P$ derived from all $P' \in \cD'$.
\label{Bsubdiv}
\item Sign $P_{e'}$ all positive if $e' \notin F'$.
\label{Bpos}
\item For each path or circle $P'\in\cD'$, sign the edges of $P$ so that for each edge $f'$ in $P'$,
\begin{enumerate}[ (a)]
\item $\s(P_{f'}) = \s'(f')$,
\item $P_{f'}$ is not all positive,
\item a terminal edge of $P$ that is incident with a non-univalent terminus is positive, and
\item any edge of $P$ that is incident to a trivalent internal vertex is negative.
\end{enumerate}
\label{Bpath}
\end{enumerate}
\end{construct}
It is sufficient in Step \ref{Bpath}(b) to assume that $P$ is not all positive. (If some $P_{f'} \subset P$ were all positive, it would be impossible to satisfy Step \ref{Bpath}(d) at both termini of $P_{f'}$.)
Construction \ref{CC} is a variant of Construction \ref{CB} in which the signs of the initial graph $\G'$ are not specified until the end.
\begin{construct}\label{CC}
Carry out Construction \ref{CB} but omit Steps \ref{BS'} and \ref{Bpath}(a); also, in Step \ref{BF'} the choice of $F'$ is arbitrary. After the construction of $\S$, form $\S'$ by defining $\s'(e'):=\s(P_{e'})$ for each edge $e' \in E'$.
(Note that since $\G$ in Construction \ref{CC} is the same as in Construction \ref{CB}, it follows that $\S'$ in Construction \ref{CC} is the same as in Construction \ref{CB}.)
\end{construct}
Construction \ref{CD} differs from \ref{CB} by inserting balance at the beginning; thus the signed graphs it constructs necessarily have Properties \ref{Prop1} and \ref{Prop2}, though they need not be simple graphs.
\begin{construct}\label{CD}
This is the same as Construction \ref{CB} except that in Step \ref{BS'}, each nontrivial block of $\G'$ is signed so it is balanced (but otherwise arbitrarily). By Theorem \ref{T:balance}, to do that either make all edges in the block positive or choose a cut in the block and make it negative, the other edges being positive.
\end{construct}
\section{Results}
Here is our theorem.
\begin{theorem}[Constructive Characterization]\label{T:cons}
Assume $\S$ is a signed graph, not necessarily simple.
\begin{enumerate}[{\rm(a)}]
\item The following properties of a signed graph $\S$ are equivalent:
\label{T:cons-1}
\begin{enumerate}[{\rm(i)}]
\item $\S$ satisfies Property \ref{Prop2}.
\item $\S$ is constructed by Construction \ref{CA}.
\item $\S$ is constructed by Construction \ref{CB}.
\item $\S$ is constructed by Construction \ref{CC}.
\end{enumerate}
\item The following properties of a signed simple graph $\S$ are equivalent:
\label{T:cons-2}
\begin{enumerate}[{\rm(i)}]
\item $\L_\s(\S)$ is consistent.
\item $\S$ is constructed by Construction \ref{CA} (or \ref{CB} or \ref{CC}) and is balanced.
\item $\S$ is constructed by Construction \ref{CD}.
\end{enumerate}
\end{enumerate}
\end{theorem}
The proof will appear after some preliminary results.
\begin{lemma}\label{T:bal}
{\rm(a)} A signed graph $\S$ is balanced if and only if the signed graph resulting from it by suppressing all possible divalent vertices is balanced.
{\rm(b)} In particular, a signed graph resulting from Construction \ref{CA} is balanced if and only if, in each nontrivial block $B$ of $\S$, when all possible divalent vertices (with degrees measured in $B$, not in $\S$) are suppressed, the negative edge set becomes empty or a cut.
{\rm(c)} In Construction \ref{CD}, $\S$ is balanced.
\end{lemma}
\begin{proof}
For part (a), let $\S'$ be the result of suppressing all possible divalent vertices in $\S$. Note that a $\S$-divalent path $P$ (in $\S$) and the single edge $e$ it becomes in $\S'$ have the same sign and belong to the same circles, if we identify circles in $\S$ with the circles in $\S'$ that result after suppression. Those observations make the first assertion obvious. Part (c) is a special case.
For (b), it is clear that $\S$, resulting from Construction \ref{CA}, is balanced if and only if each block $B$ is balanced. A trivial block is always balanced. Let $B'$ be the signed graph resulting from suppression applied to $B$. By (a), $B$ is balanced if and only if $B'$ is balanced. Balance of $B'$ is determined using Harary's theorem.
\end{proof}
Note that the suppression in Lemma \ref{T:bal}(a) can produce a graph that is not simple even if $\S$ is simple.
\begin{lemma}\label{L:prop3}
Property \ref{Prop2} of a signed graph $\S$ is equivalent to:
\begin{enumerate}[Property 1. ]
\setcounter{enumi}{2}
\item For each vertex $v \in V$,
\label{Prop3}
\begin{enumerate}[{\rm(a)}]
\item $d^-(v) = 0$, or
\item $1 \leq d^-(v) \leq d(v) \leq 2$, or
\item $d^-(v) = 2$, $d(v) = 3$, and the positive edge at $v$ is an isthmus.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
Considering all possible cases for $v$ in Property \ref{Prop2}, $d^-(v)$ is at most 2 and when $d^-(v)>0$ then $d(v) \leq d^-(v)+1$. If $d^-(v)=2$, Property \ref{Prop2}(c) entails that a third edge at $v$ must be an isthmus.
It is easy to check that Property \ref{Prop3} implies each part of Property \ref{Prop2}.
\end{proof}
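Property \ref{Prop3} is easy to test mechanically, vertex by vertex. The sketch below is our own code, not the authors'; to stay short it takes the set of isthmi (bridges) of the underlying graph as precomputed input rather than computing bridges itself.

```python
def satisfies_property3(vertices, edges, isthmi):
    """Check Property 3 at every vertex. `edges` are triples (u, v, sign);
    `isthmi` is the set of pairs (u, v) that are isthmi (bridges) of the
    underlying graph, taken as given input in this sketch."""
    for x in vertices:
        inc = [e for e in edges if x in (e[0], e[1])]
        d = sum((u == x) + (v == x) for u, v, _ in inc)            # loops count twice
        dneg = sum((u == x) + (v == x) for u, v, s in inc if s == -1)
        if dneg == 0:
            continue                                               # case (a)
        if dneg <= d <= 2:
            continue                                               # case (b)
        pos = [(u, v) for u, v, s in inc if s == +1]
        if dneg == 2 and d == 3 and len(pos) == 1 and pos[0] in isthmi:
            continue                                               # case (c)
        return False
    return True
```

For instance, a trivalent vertex with two negative edges passes only when its third, positive edge is an isthmus, exactly as case (c) demands.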
\begin{lemma}[Uniqueness]\label{L:unique}
The initial graph $\G'$ in Construction \ref{CB} is determined (up to isomorphism) by the resulting unsigned graph $\G$.
A signed graph can be constructed by Construction \ref{CB} in only one way. That is, $\S'$, $F'$, and the list $\cD'$ are determined by the resulting signed graph $\S$.
\end{lemma}
\begin{proof}
Both $\G'$ and $\S'$ are obtained by suppressing all possible divalent vertices in $\G$ and $\S$, respectively.
Similarly, $F'$ consists of all edges of $\S'$ that after subdivision are not all positive.
To see how those edges associate to form $\cD'$, consider distinct edges $e$ and $f$ in $\S$ incident to a vertex $v$ of $\S'$.
Let $e'$ and $f'$ be the edges of $\S'$ such that $e \in P_{e'}$ and $f \in P_{f'}$ and assume first that $e' \neq f'$, so $d_\S(v)\geq3$. The only way $e$ and $f$ can be in the same path or circle of $\cD$ is when both are negative; and if that is the case, then neither can be a terminal edge so they must be in the same path or circle.
Now assume that $e'=f'$; then that edge is a loop in $\S'$ and $P'$ is a closed path or a circle consisting of that loop and its supporting vertex $v$.
If the graph of $P$ is a circle, we need to decide whether $P$ is a closed path or a circle. It must be a circle if it has only divalent vertices. Otherwise, only a vertex of $P$ that is trivalent and at which the incident edges of $P$ are both positive can be a terminus. Thus, $P$ is a closed path or a circle according as such a vertex in $P$ exists or does not.
Since the elements of $\cD'$ are determined by $\S$, the proposition is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:cons}\eqref{T:cons-1}]
We may substitute Property \ref{Prop3} for Property \ref{Prop2} in (i).
(i)$\implies$(ii): Suppose $\S$ is a signed graph that satisfies Property \ref{Prop3}.
Since $d^-(v)\leq2$ for every vertex, the components of $\S^-$ are paths (possibly trivial) and circles.
In a circle component $P$ of $\S^-$, every vertex has $d_\G(v)\leq3$ and, by Property \ref{Prop3}(c), if there is a third edge at $v$, it is an isthmus. Hence $P$ is a megablock, so it is an instance of Construction \ref{CA}(iii).
Take a path or circle component $P$ of $\S^-$ that has an edge $f$ in a nontrivial block $B$. Consider (if one exists) an edge $g$ of $P$ that is adjacent to $f$, say at vertex $v$. Since $d_\G(v)\leq3$ by Property \ref{Prop3}(b, c), either $d_\G(v)=2$ and both edges at $v$ are in $B$, or $d_\G(v)=3$. In the latter case the third edge, $h$, is an isthmus by Property \ref{Prop3}(c). Then $g$ cannot be an isthmus; it must be in $B$. It follows that all edges of $P$ are in $B$ and $P$ is $B$-divalent. The termini of $P$ have $d_\G(v)\leq2$ by Property \ref{Prop3}(b), and they must be divalent because no edge of $P$ is an isthmus. That is, $P$ satisfies Construction \ref{CA}(i).
Finally, consider $P$ that is a path consisting of isthmi. Its internal vertices have $d_\G(v)\leq3$, and by Property \ref{Prop3}(b) its termini have $d_\G(v)\leq2$. Hence, it satisfies Construction \ref{CA}(ii).
(ii)$\implies$(iii): Let $\S_A$ result from Construction \ref{CA}. To obtain it from Construction \ref{CB} we begin with $\S'$ and $\G'$ obtained by suppressing all possible divalent vertices in $\S_A$ and its underlying graph $\G_A$. By Lemma \ref{L:unique} those are the proper initial graph and signed graph to get $\S_A$ from Construction \ref{CB}. As in the proof of that lemma, $F'$ is the set of edges $e'\in E'$ such that $P_{e'}$ is not all positive. We also define $\cD$ and $\cD'$ as in the proof of Lemma \ref{L:unique}; that is possible because the terminus of a negative path $N \in \cA$ cannot have degree greater than 2, so if it has degree 2 the negative path $N$ can be continued with a positive edge before a vertex of degree greater than 2 is reached. We need only to verify that $\cD'$ satisfies the assumptions of Construction \ref{CB}.
Clearly, $\cD'$ partitions $F'$ into nontrivial paths and circles. Let $P'$ be one such path or circle, corresponding to $P \in \cD$, and let $v$ be an internal vertex of $P'$. The edges $e, f \in P$ incident with $v$ are both negative, by the construction of $\cD$. Hence, they belong to a path $P_A$ in Construction \ref{CA} and therefore $d_{\G'}(v)=d_{\G_A}(v)\leq3$ and a third edge, if there is one, is an isthmus of $\G_A$, thus of $\G'$. Moreover, $e$ and $f$ are both isthmi or both contained within a nontrivial block of $\G_A$. Finally, no terminus is divalent in $\G'$ by the construction of $\G'$ and $\cD'$. Therefore, the requirements of Step \ref{BD'} are satisfied.
The requirements of Step \ref{Bpos} and Step \ref{Bpath}(a, b) are satisfied by the definition of $F'$. Those of Step \ref{Bpath}(c, d) are satisfied due to the way we constructed $\cD$.
It follows that the signed graph $\S$ resulting from Construction \ref{CB} given the initial data can be made to agree with $\S_A$; i.e., $\S_A$ is constructible by Construction \ref{CB}.
(iii)$\implies$(i): Let $\S$ result from Construction \ref{CB}. We show it has Property \ref{Prop3}. A \emph{new vertex} of $\S$ is one that results from subdividing edges of $\S'$; any other vertex is an \emph{original vertex}.
Suppose $e$ is a negative edge in some path or circle $P\in\cD$ and is incident with vertex $v$. If $d_\G(v)\leq2$, $v$ satisfies Property \ref{Prop3}. Suppose, therefore, that $d_\G(v)\geq3$. Since $e$ is negative, by Step \ref{Bpath}(c) $v$ cannot be a terminus of $P$. Hence $d_\G(v)\leq3$ by Step \ref{BD'}(a) and the third edge $f$ incident with $v$ is an isthmus by Step \ref{BD'}(b). The two edges of $P$ at $v$ are negative by Step \ref{Bpath}(d), so $d^-(v)\geq2$. The edge $f$ is either terminal in a path $P_1\in\cD$, hence positive by Step \ref{Bpath}(c), or in a subdivision path $P_{f'}$ of an edge $f'\notin F'$, hence positive by Step \ref{Bpos}. Either way, $d_\S^-(v)\leq2$. Thus, $\S$ satisfies Property \ref{Prop3}.
(iii)$\iff$(iv): The note in Construction \ref{CC} explains why Constructions \ref{CB} and \ref{CC} yield the same signed graphs.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:cons}\eqref{T:cons-2}]
The equivalence of (ii) and (iii) follows from \eqref{T:cons-1}[(ii)$\iff$(iii)] and the fact that $\S$ resulting from Construction \ref{CD} is balanced by Lemma \ref{T:bal}(c).
As for (i), it is equivalent to $\S$'s having Property \ref{Prop2} and being balanced, which is equivalent by \eqref{T:cons-1}[(i)$\iff$(ii)] to $\S$'s resulting from Construction \ref{CA} and being balanced, i.e., (ii).
\end{proof}
\end{document} | math | 22,151 |
\begin{document}
\widetext
\title{\textbf{Heralded Three-Photon Entanglement from a Single-Photon Source on a Photonic Chip}}
\author{Si Chen}
\author{Li-Chao Peng}
\author{Yong-Peng Guo}
\author{Xue-Mei Gu}
\author{Xing Ding}
\author{Run-Ze Liu}
\affiliation{
Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
}
\affiliation{
Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
}
\affiliation{
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
}
\author{Xiang You}
\affiliation{
Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
}
\affiliation{
Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
}
\affiliation{
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
}
\affiliation{
University of Science and Technology of China, School of Cyberspace Security, Hefei, China
}
\author{Jian Qin}
\author{Yun-Fei Wang}
\author{Yu-Ming He}
\affiliation{
Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
}
\affiliation{
Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
}
\affiliation{
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
}
\author{Jelmer J. Renema}
\affiliation{
Adaptive Quantum Optics Group, Mesa+ Institute for Nanotechnology, University of Twente,
P.O. Box 217, 7500 AE Enschede, Netherlands
}
\author{Yong-Heng Huo}
\author{Hui Wang}
\author{Chao-Yang Lu}
\author{Jian-Wei Pan}
\affiliation{
Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
}
\affiliation{
Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
}
\affiliation{
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
}
\date{\today}
\begin{abstract}
In the quest to build general-purpose photonic quantum computers, fusion-based quantum computation has risen to prominence as a promising strategy. This model allows a ballistic construction of large cluster states which are universal for quantum computation, in a scalable and loss-tolerant way without feed-forward, by fusing many small $n$-photon entangled resource states. However, a key obstacle to this architecture lies in efficiently generating the required essential resource states on photonic chips. One such critical seed state that has not yet been achieved is the heralded three-photon Greenberger-Horne-Zeilinger (3-GHZ) state. Here, we address this elementary resource gap by reporting the first experimental realization of a heralded dual-rail encoded 3-GHZ state. Our implementation employs a low-loss and fully programmable photonic chip that manipulates six indistinguishable single photons at telecommunication wavelengths. Conditional on the heralding detection, we obtain the desired 3-GHZ state with a fidelity of $0.573\pm0.024$. Our work marks an important step toward future fault-tolerant photonic quantum computing and accelerates the construction of a large-scale optical quantum computer.
\end{abstract}
\pacs{}
\maketitle
The photon is a favourable candidate for universal quantum computing \cite{knill2001scheme,kok2007linear,pan2012multiphoton}, offering several advantages such as room-temperature operation, negligible decoherence, and easy integration into existing fiber-optic-based telecommunications systems. In particular, rapidly developing integrated optics makes it an appealing physical platform for large-scale fault-tolerant quantum computing \cite{o2007optical,ladd2010quantum,rudolph2017optimistic}.
Measurement-based quantum computing (MBQC) \cite{raussendorf2001one, raussendorf2003mbqc, walther2005experimental, briegel2009measurement}, where quantum algorithms are performed by making single-qubit measurements on a large entangled state---usually called a cluster state---holds significant potential for photonic systems based on linear optics. Photonic cluster states can be efficiently created from small resource states via a fusion mechanism \cite{browne2005resource,duan2005efficient}.
Later, a ballistic strategy for MBQC was proposed \cite{kieling2007percolation, gimeno2015three,pant2019percolation}, enabling scalable and loss-tolerant generation of large cluster states by fusing many small entangled resource states without any feed-forward; this strategy was subsequently reformulated as fusion-based quantum computation \cite{bartolucci2023fusion}. In this framework, the heralded three-photon Greenberger-Horne-Zeilinger state has been identified as the minimal initial entangled state \cite{bartolucci2023fusion}, serving as an essential building block for constructing large entangled cluster states \cite{gimeno2015three, zaidi2015near, rudolph2017optimistic}.
Deterministic generation of multiphoton cluster states from single-photon sources has been proposed \cite{lindner2009proposal,economou2010optically,gimeno2019deterministic} and implemented \cite{schwartz2016deterministic,lee2019quantum,thomas2022efficient,cogan2023deterministic}. However, high-quality deterministic preparation remains challenging with existing technology. Alternatively, without detecting or destroying the photons, one can near-deterministically generate such entangled clusters from heralded 3-GHZ states, which can be obtained using six single photons \cite{varnava2008good,gubarev2020improved}.
Here, we report the first experimental demonstration of a heralded dual-rail encoded 3-GHZ state using six single photons manipulated in a photonic chip. A high-quality single-photon source based on a semiconductor quantum dot \cite{senellart2017high} embedded in an open microcavity is used to deterministically produce single photons, which are converted to the telecommunication band with a quantum frequency converter \cite{weber2019two,you2022quantum}. These single photons are deterministically demultiplexed into six indistinguishable single-photon sources \cite{wang2017high,wang2019boson}, which are manipulated in a fully programmable photonic chip \cite{taballione2021universal}. Heralded by the detection of four output spatial modes with high-efficiency single-photon detectors, we obtain a heralded 3-GHZ state with a fidelity of $0.573\pm0.024$. Our work is an important step towards fault-tolerant scalable photonic quantum computation.
\begin{figure}
\caption{The heralded generation scheme for a 3-GHZ state. Six single photons are prepared at the input ports of a linear optical circuit, where we can describe the initial input quantum state as $\ket{\psi_{in}}=\ket{1011010110}$.}
\label{Fig:1}
\end{figure}
Fig.~\ref{Fig:1} illustrates a heralded generation of a dual-rail-encoded 3-GHZ state out of six single photons \cite{gubarev2020improved}. The scheme exploits a ten-mode linear optical circuit, which consists of twelve optical beam splitters with three distinct transmission coefficients and a pair of $\pi$-phase shifters. Six photons are injected into six specific input modes of the photonic circuit, preparing a number-basis initial state represented as $\ket{\psi_{in}}=a_1^{\dagger}a_3^{\dagger}a_4^{\dagger}a_6^{\dagger}a_8^{\dagger}a_9^{\dagger}\ket{0}^{\otimes 10}=\ket{1011010110}$.
Here, $a_i^{\dagger}$ refers to the creation operator in mode $i$, $\ket{0}$ symbolizes the vacuum state, and the numbers $0$ and $1$ indicate the number of photons occupying each mode. The underlying unitary transformation $U$ of the circuit is optimally parameterized such that particular measurement patterns in four ancillary modes (7-10) herald a desired 3-GHZ state at six output modes (1-6) \cite{gubarev2020improved}. Qubits $Q_{1}$, $Q_{2}$ and $Q_{3}$ are identified with output mode pairs (1,2), (3,4) and (5,6). This association uses a dual-rail encoding, which places one single photon in one of the two spatial modes within each mode pair, e.g., $\ket{0}_{d}=\ket{10}$ and $\ket{1}_{d}=\ket{01}$. The GHZ state is heralded in modes (1-6) only when single photons are detected in both (7,8) and in just one of the (9,10) ports, with the other one remaining in a vacuum state. Each event has a success probability of $1/108$, resulting in an overall success rate of $1/54$ \cite{gubarev2020improved}. Using phase-tunable Mach-Zehnder interferometers (MZIs) at each output mode pair, one can perform arbitrary local projective measurements for estimating the full state. Details of the underlying state evolution and state measurements are provided in the Supplementary Material.
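The dual-rail bookkeeping above can be summarized in a few lines (a sketch with our own function names): each logical bit selects one mode of its pair, so the target GHZ state is supported on exactly two occupation patterns of modes 1--6.

```python
import math

# Dual-rail map from the text: |0>_d = |10>, |1>_d = |01> within each mode pair.
RAIL = {0: (1, 0), 1: (0, 1)}

def dual_rail_pattern(bits):
    """Photon-number occupation of the 2*len(bits) output modes
    encoding the given logical bit string."""
    pattern = []
    for b in bits:
        pattern.extend(RAIL[b])
    return tuple(pattern)

# The heralded target (|000> + |111>)/sqrt(2) is supported on exactly
# two occupation patterns of the six output modes.
ghz_support = {dual_rail_pattern((0, 0, 0)): 1 / math.sqrt(2),
               dual_rail_pattern((1, 1, 1)): 1 / math.sqrt(2)}
```

The logical $\ket{000}$ thus corresponds to one photon in each odd mode of the pairs, and $\ket{111}$ to one photon in each even mode.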
To prepare six single photons, we first use a state-of-the-art self-assembled InAs/GaAs quantum dot (QD), which is coupled to a polarized and tunable microcavity \cite{wang2019towards,tomm2021bright} and cooled down to $\sim$4 K. Under resonant pumping by a $\pi$-pulse laser with a repetition rate of $\sim$76 MHz, the QD emits $\sim$50 MHz polarized resonance-fluorescence single photons at the end of the single-mode fiber. We measured the second-order correlation of the photon source with a Hanbury Brown-Twiss (HBT) interferometer \cite{hanbury1956question}, and obtained $g^2(0)=0.028(8)$ at zero delay, which indicates a high single-photon purity of 97.2(8)\%. The single-photon indistinguishability is tested using a Hong-Ou-Mandel (HOM) interferometer \cite{hong1987measurement}, yielding a visibility of 89(1)\% between two photons separated by $\sim$13 ns.
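The two figures of merit quoted above follow from standard HBT and HOM analyses. The helpers below are our own sketch of those textbook formulas (function names and the example counts are ours): pulsed $g^2(0)$ as the zero-delay coincidences divided by the average uncorrelated side peak, and raw HOM visibility as one minus the ratio of coincidences for indistinguishable (parallel) versus distinguishable (orthogonal) photons.

```python
def g2_zero(zero_delay_coincidences, side_peak_coincidences):
    """Pulsed g2(0): zero-delay coincidence counts divided by the
    average of the uncorrelated side peaks (standard HBT analysis)."""
    avg_side = sum(side_peak_coincidences) / len(side_peak_coincidences)
    return zero_delay_coincidences / avg_side

def hom_visibility(c_parallel, c_orthogonal):
    """Raw HOM visibility from coincidence counts with indistinguishable
    (parallel) vs distinguishable (orthogonal) input photons."""
    return 1 - c_parallel / c_orthogonal
```

With hypothetical counts of 28 zero-delay coincidences against side peaks of 1000, these give $g^2(0)=0.028$, matching the scale of the values reported in the text.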
We then employ a quantum frequency converter (QFC) to transfer the near-infrared wavelength of the produced single photons to the preferable telecommunication regime \cite{you2022quantum}. For this purpose, we fabricate a periodically poled lithium niobate (PPLN) waveguide for a difference-frequency generation process that can be adjusted by the wavelengths of the pump lasers. A continuous-wave (CW) pump laser at $\sim$2060 nm and the QD-emitted single photons at $\sim$884.5 nm are then coupled into the PPLN waveguide, in which difference-frequency generation occurs, thus generating output single photons at 1550 nm. By optimizing the waveguide coupling, transmission, and detection, we eventually achieve an overall single-photon conversion efficiency of $\sim$50\%. To test whether the converted photons still preserve their single-photon nature and indistinguishability, we perform the HBT and HOM measurements on the photons after conversion. The purity of the single photons at 1550 nm remains $97.4(6)\%$, and the indistinguishability between photon 1 and photons 2, 3, 4, 5, 6 is respectively 0.883(7), 0.86(1), 0.86(3), 0.88(1), 0.87(3), as shown in Fig.~\ref{Fig:3} (a) and (b).
\begin{figure*}
\caption{The experimental setup. A single InAs/GaAs quantum dot, resonantly coupled to an open microcavity, is used to produce pulsed resonance-fluorescence single photons. The near-infrared wavelength of these single photons is then converted to the telecom wavelength of 1550 nm with a quantum frequency converter by coupling a continuous-wave laser and the single photons into a periodically poled lithium niobate waveguide. Five pairs of Pockels cells and polarizing beam splitters are employed to actively translate a stream of single photons into six spatial modes. Optical fibers of different lengths are used to precisely compensate time delays. The six indistinguishable single photons are then fed into a 12-mode programmable linear-optical quantum circuit, which allows the preparation and measurement of a heralded three-qubit dual-rail encoded GHZ state. The single photons at the output ports of the photonic circuit are detected by superconducting nanowire single-photon detectors. All coincidences are recorded by a coincidence counting unit (not shown).}
\label{Fig:2}
\end{figure*}
The converted single-photon stream is then deterministically demultiplexed into six spatial modes using a tree-structured demultiplexer constructed from five pairs of Pockels cells (PCs) and polarizing beam splitters (PBSs). The PCs, synchronized to the laser pulses and operated at a repetition rate of $\sim$705 kHz, actively control the photon polarization when loaded with high-voltage electrical pulses. The measured average optical switch efficiency is $\sim$77\%, limited mainly by the coupling efficiency and propagation loss. With the help of six single-mode fibers of different lengths and translation stages, we precisely compensate the relative time delays of the six single photons such that they arrive simultaneously at the input ports of the photonic circuit.
To realize the functional design of the unitary transformation in Fig.~\ref{Fig:1}, we employ a photonic chip that is low-loss and fully programmable \cite{taballione2021universal}. The circuit is based on stoichiometric silicon nitride waveguides which are fabricated for single-mode operation at a wavelength of 1550 nm. It consists of 12 input and output spatial modes that are interconnected through an arrangement of adjustable beam splitters and thermo-optic phase shifters, as shown in Fig.~\ref{Fig:2} (for details about the photonic circuit, see Ref.~\cite{taballione2021universal}). To achieve a heralded 3-GHZ state, six single photons are injected into six inputs (1, 3, 4, 6, 8, 9) of the circuit and propagate through it. The heralded outputs are 7, 8, 9 and 10. At each heralded output (7, 8, 9), two superconducting nanowire single-photon detectors (SNSPDs) are employed and act as a pseudo-photon-number detector that can resolve up to two photons. When each of the modes (7, 8, 9) contains a single photon and mode 10 is in the vacuum state, one obtains a heralded GHZ state for the three dual-rail encoded qubits defined in the mode pairs $Q_1=(1,2)$, $Q_2=(3,4)$ and $Q_3=(5,6)$.
\begin{figure*}
\caption{Characterization of the single-photon source and the photonic quantum circuit. (a), The purity of the single-photon source of telecom-wavelength is evaluated by a second-order correlation function $g^2(0)$. The observed anti-bunched peak at zero-time delay reveals $g^2(0)=0.026(6)$. (b), Measurement of the photon indistinguishability by performing a Hong-Ou-Mandel interference between photon 1 and photon 2 (3, 4, 5 or 6). The corresponding measured indistinguishabilities are $0.883(7)$, $0.86(1)$, $0.86(3)$, $0.88(1)$, $0.87(3)$, respectively. The error bars indicate one standard deviation, deduced from propagated Poissonian counting statistics of the raw photon detection events. (c)-(d), Amplitude (c) and phase (d) of the ideal unitary transformation matrix (black wire frames) and their experimental reconstructions (colored bars). We send six single photons into the used six input ports of the photonic circuit, and measure the normalized amplitude and phase for each input–output combination.}
\label{Fig:3}
\end{figure*}
We then send the six photons one by one and measure the output distribution at the nine output modes (1-9) to analyze the quality of the photonic circuit. For each input-output combination, we implement a Mach-Zehnder-type coherence measurement to record the corresponding phase. The normalized amplitudes and measured phases, compared with their theoretical distributions, are summarized in Fig.~\ref{Fig:3}(c) and (d).
To analyze the generated heralded three-photon GHZ state in dual-rail encoding, we use the phase-tunable MZIs to perform arbitrary local projective measurements on the single photons. The transformation matrices of these local measurements are compiled into the whole circuit. We then collect the six-photon coincidence counts at the ten used outputs in which each output mode (7, 8, 9) contains only one photon. To validate the three-qubit GHZ entanglement, we first measure the six-photon events in the $\ket{0}_{d}/\ket{1}_{d}$ basis (see data in Fig.~\ref{Fig:4} (a)) to calculate the population of $(\ket{0}_{d}\bra{0}_{d})^{\otimes3}+(\ket{1}_{d}\bra{1}_{d})^{\otimes3}$ over all the possible $2^{3}$ combinations, leading to a population $P=0.758 \pm 0.025$. We further estimate the expectation value of the observable $M^{\bigotimes N}_{\theta}=(\cos \theta\hat{\sigma}_x+\sin\theta\hat{\sigma}_y)^{\bigotimes N}$, where $\theta=k\pi/3$ ($k=0,1,2$) and $\hat{\sigma}_x$, $\hat{\sigma}_y$ are Pauli matrices. The coherence of the three-qubit GHZ state is defined by the off-diagonal element of its density matrix and can be calculated by $C=(1/3)\sum_{k=0}^{2}(-1)^k\langle M_{k\pi/3}^{\bigotimes3}\rangle$, which is $C=0.389\pm0.040$ (see data in Fig.~\ref{Fig:4} (b)). The state fidelity can be directly estimated by $F=(P+C)/2=0.573\pm0.024$, which surpasses the classical threshold of 0.5 by more than 3 standard deviations and is sufficient to show the presence of entanglement \cite{guhne2009entanglement}.
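As a quick arithmetic check of the quoted numbers (a sketch, not part of the original analysis; it assumes the errors on $P$ and $C$ are independent so that standard error propagation applies), the fidelity estimate and its uncertainty follow directly:

```python
import math

# measured values quoted in the text
P, dP = 0.758, 0.025   # population term and its uncertainty
C, dC = 0.389, 0.040   # coherence term and its uncertainty

F = (P + C) / 2                      # GHZ-state fidelity estimator F = (P+C)/2
dF = math.sqrt(dP**2 + dC**2) / 2    # propagation of independent errors

assert abs(F - 0.5735) < 1e-9        # consistent with the quoted 0.573
assert round(dF, 3) == 0.024         # consistent with the quoted +/- 0.024
assert (F - 0.5) / dF > 3            # exceeds the classical threshold by > 3 sigma
```

This reproduces both the quoted uncertainty of $\pm0.024$ and the stated margin of more than 3 standard deviations above the classical threshold of 0.5.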
\begin{figure}
\caption{Experimental results of the generated heralded three-qubit GHZ state. (a), Six-photon coincidence counts measured in the dual-rail basis $\ket{0}_{d}/\ket{1}_{d}$. (b), Expectation values of $M^{\bigotimes3}_{k\pi/3}$ ($k=0,1,2$) used to extract the coherence $C$.}
\label{Fig:4}
\end{figure}
In summary, we have demonstrated, for the first time, the heralded three-photon GHZ state using six photons from a high-quality quantum-dot single-photon source and a 12-mode fully programmable photonic chip. It should be noted that this experiment is unrealistic for commonly used spontaneous parametric down-conversion sources \cite{zhong201812}, since the expected count rate would be four orders of magnitude smaller than in this experiment, showing a huge advantage of deterministic quantum-dot single-photon sources. Our demonstrated three-qubit GHZ state is heralded, with a heralding efficiency that can in principle reach one, and is the main building block for fusion-based photonic quantum computing. Here, the heralding efficiency is defined as the probability of successfully achieving a desired GHZ state when the desired heralding detectors register coincidences. In our experiment, we obtain a heralding efficiency of $\sim$0.0005 after correcting for detector imperfections (see Supplementary). This heralding efficiency can be significantly improved in the near future by further improving the efficiency of the single-photon source and the photonic circuit, enabling large-scale photonic quantum computing.
~\\S. C. and L.-C P. contributed equally to this work.
\nocite{*}
\end{document}
\begin{document}
\title{Fidelity, entanglement, and information complementarity relation}
\author{Jian-Ming Cai}
\email{[email protected]}
\author{Zheng-Wei Zhou}
\email{[email protected]}
\author{Guang-Can Guo}
\email{[email protected]}
\affiliation{Key Laboratory of Quantum
Information, University of Science and Technology of China,
Chinese Academy of Sciences, Hefei, Anhui 230026, China}
\begin{abstract}
We investigate the dynamics of information in isolated multi-qubit
systems. It is shown that information exists not only in local form
but also in nonlocal form. We apply a measure of local information
based on fidelity, and demonstrate that nonlocal information can
be directly related to some appropriate well defined entanglement
measures. Under general unitary transformations, local and
nonlocal information will exhibit unambiguous complementary
behavior with the total information conserved.
\end{abstract}
\pacs{03.67.-a, 03.65.Ud, 73.43.Nq, 89.70.+c}
\maketitle
\section{Introduction}
Understanding the physics of quantum many-body systems is a fundamental goal
of condensed matter theory. There is great interest in strongly
correlated quantum states, which exhibit many fascinating phenomena,
such as quantum phase transitions \cite{Sachdev,Entanglement and Phase
Transitions,Osborne032110} and the Kondo effect \cite{Kondo}. In the field of
quantum information processing, quantum many-body systems are also the
essential ingredient. Prospective scalable quantum computation \cite
{Briegel2001,QC1,QC2,QC3,QC4} and communication \cite
{Bose2003&Christandl2004} schemes based on various many-body models of
condensed matter physics have attracted intensive interest. Moreover,
unlike the situation of bipartite entanglement, the picture of multipartite
entanglement in quantum many-body systems is still not very clear \cite
{Plenio,Lunkes,SLOCC}.
However, the fact that the number of parameters required to describe a
quantum state of many particles grows exponentially with the number of
particles leaves a practical obstacle to the study of quantum many-body
systems. Therefore most of research focuses on the static properties of the
ground states of certain types of many-body models with quantum Monte Carlo
calculations \cite{Suzuki} and the density matrix renormalization group \cite
{Rommer}. In this paper, we investigate the dynamics of two- and
three-qubit systems from the viewpoint of quantum information.
Through simple examples, we demonstrate perfect complementary
behavior between local information and entanglement, and reveal
the heuristic connection between measures of entanglement and
nonlocal information, which are expected to be generalized to
many-qubit systems.
Consider a system of $N$ qubits whose initial state is $|\psi\rangle_{12
\cdots N}$, with $\rho_{i}=Tr_{{1,\cdots,i-1,i+1,\cdots,N}
}(|\psi\rangle\langle\psi|)$ the reduced density matrix of the $i$th individual
qubit. It is known that the information contained in multi-qubit systems comes in
two forms \cite{Horodecki}. One is the local form, i.e., the information
content of each individual qubit $\rho_{i}$. The other is the nonlocal form,
i.e., the entanglement between different qubits. If the multi-qubit system is
isolated, i.e., initially pure and subject to unitary transformations only, then
information contained in the system will be transferred between different
qubits and converted between local and nonlocal forms with the total
information conserved, which results in the information dynamics \cite
{Horodecki,peng,Teisser}. In this paper, we adopt a measure of
local information based on fidelity and reveal a heuristic
connection between measures of entanglement and nonlocal
information. Therefore, we establish an elegant complementarity
relation between local and nonlocal information. Our results link
local information to measures of entanglement, particularly
genuine multi-qubit entanglement (i.e., entanglement shared by all the involved
qubits). This makes it possible to propose an appropriate
information-theoretic measure of such genuine multi-qubit entanglement \cite
{GME}, which is one of the most central issues in quantum information theory
\cite{Wong,Hyperdeterminants,Mayer,GE,Gerardo}.
The paper is structured as follows. In Sec.~II, we introduce the
measure of local information based on the definition of optimal
fidelity. In Secs.~III and IV, we investigate the information
dynamics and demonstrate the perfect complementary behavior
through simple examples in two- and three-qubit systems. In Sec.~V,
we formulate the information complementarity relation based on
appropriate measures of local information and entanglement.
Sec.~VI contains discussions and conclusions.
\section{Optimal fidelity and local information}
We start by considering the information content of one qubit. Suppose Alice
receives one qubit in the state $|\varphi \rangle $ from a random source $
\{|\Omega \rangle ,|\Omega ^{\bot }\rangle \}$ with probabilities $
\{1/2,1/2\}$. Holding the qubit in state $|\varphi \rangle $, Alice can
completely eliminate the uncertainty about the state preparation, i.e., the
information content of $|\varphi \rangle $ is 1 bit. If Alice sends
the qubit to Bob through a quantum channel, the qubit Bob receives becomes $
\rho =\xi (|\varphi \rangle \langle \varphi |)$. If $\rho =\frac{1}{2}(I+
\vec{r}\cdot \vec{\sigma})$, with $\vec{r}$ the Bloch vector and the Pauli
operators $\vec{\sigma}=(\sigma _{x},\sigma _{y},\sigma _{z})$, is a mixed
state, Bob can tell which of the two states Alice sent only with
some success probability, reflecting the information loss through
the quantum channel. The success probability can be characterized by the
fidelity $F=\langle \varphi |\rho |\varphi \rangle $ \cite{Nielsen & Chuang}.
However, we note that Bob can apply some physically realizable local
strategy to maximize the success probability. Therefore, we can naturally
define the optimal fidelity as
\begin{equation}
F_{o}(\rho )=\underset{A\in SU\left( 2\right) }{\max }\langle \varphi |A\rho
A^{\dagger }|\varphi \rangle
\end{equation}
Here unitary operation $A$ can be interpreted as some kind of local strategy
for Bob to maximize the success probability. After simple calculations, it
can be seen that $F_{o}(\rho )=\frac{1}{2}(1+|\vec{r}|)$. The optimal
fidelity ranges from $\frac{1}{2}$ to $1$, i.e., $\frac{1}{2}\leq F_{o}(\rho
)\leq 1$. Therefore, Bob, holding the qubit in state $\rho $, can eliminate
part of the uncertainty about the state preparation. The information content of
the state $\rho $ can thus be characterized by the above success probability
in Eq.~(1). In principle, any monotone function of the success probability can
serve as a measure of the information content of the state $\rho $. We introduce
a measure of local information based on the optimal fidelity as
\begin{equation}
I_{F}(\rho )=[2F_{o}(\rho )-1]^{2}
\end{equation}
where $I_{F}(\rho )$ is normalized such that $I_{F}(\rho )=0$ for $F_{o}(\rho
)=\frac{1}{2}$ and $I_{F}(\rho )=1$ for $F_{o}(\rho )=1$. We will show that $
I_{F}(\rho )$ defined above is equivalent to an operationally invariant
information measure \cite{Brukner and Zeilinger} and is a suitable measure
of local information in the situation we discuss here. For an $N$-qubit
quantum system $|\psi \rangle $, the total local information is
\begin{equation}
I_{l}^{total}=\sum\limits_{i=1}^{N}I_{F}(\rho _{i})
\end{equation}
where $\rho _{i}=Tr_{{1,\cdots ,i-1,i+1,\cdots ,n}}(|\psi \rangle \langle
\psi |)$ is the reduced density operator of the $i$th qubit.
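The closed form $F_{o}(\rho )=\frac{1}{2}(1+|\vec{r}|)$ follows because the maximum of $\langle \varphi |A\rho A^{\dagger }|\varphi \rangle $ over unitaries $A$ is the largest eigenvalue of $\rho $. The following numerical sketch (not part of the original paper; NumPy and the random seed are illustrative assumptions) checks this, and incidentally verifies the equivalence $I_{F}(\rho )=I_{BZ}(\rho )$ established later in Eq.~(11):

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# random single-qubit state rho = (I + r.sigma)/2 with Bloch vector |r| < 1
r = rng.normal(size=3)
r *= rng.uniform() / np.linalg.norm(r)
rho = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# max over unitaries A of <phi|A rho A^dag|phi> is the largest eigenvalue
# of rho, which for a qubit equals (1 + |r|)/2
F_o = np.linalg.eigvalsh(rho).max()
assert np.isclose(F_o, 0.5 * (1 + np.linalg.norm(r)))

I_F = (2 * F_o - 1) ** 2                 # fidelity-based measure, Eq. (2)
I_BZ = 2 * np.trace(rho @ rho).real - 1  # Brukner-Zeilinger measure, Eq. (8)
assert np.isclose(I_F, I_BZ)             # the two measures coincide, Eq. (11)
```

Both measures reduce to $|\vec{r}|^{2}$, which is why the complementarity relations derived below are insensitive to which of the two is used.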
\section{Two-qubit system}
We start out by investigating the information dynamics through simple
examples in a system of two interacting qubits. The
interaction between the qubits can be described by the following Hamiltonian in
the canonical form with three parameters $c_{1},c_{2},c_{3}$ \cite{Bennett
et al @2002&Nielsen@2003}:
\begin{equation}
\mathbf{H}=c_{1}\sigma _{x}^{1}\otimes \sigma _{x}^{2}+c_{2}\sigma
_{y}^{1}\otimes \sigma _{y}^{2}+c_{3}\sigma _{z}^{1}\otimes \sigma _{z}^{2}
\end{equation}
Here, for simplicity, the local evolutions have been neglected. The initial
state is set as $|\psi(0)\rangle=|\Omega\rangle\otimes|0\rangle$, where $
|\Omega\rangle=\alpha|0\rangle+\beta|1\rangle$ is some prescribed pure
state. Due to the interaction between two qubits, the information will be
transferred between two qubits and can also be converted into the form of
nonlocal information.
\textit{Ising coupling.} We first consider the Ising interaction, i.e., the
coupling parameters $c_{1}=c$, $c_{2}=c_{3}=0$. After a time $t$, the
system becomes $|\psi(t)\rangle=e^{-itH}|\psi(0)\rangle=\alpha\cos{ct}
|00\rangle -i\beta\sin{ct}|01\rangle+\beta\cos{ct}|10\rangle-i\alpha\sin{ct}
|11\rangle$. Therefore, we can calculate the local optimal fidelity defined
in Eq.~(1), $F_{o}(\rho_{i},t)=\{1+[1-|\alpha^2-\beta^2|^2\sin^2(2ct)]^{1/2}\}/2$,
which leads to the total local information
\begin{equation}
I_{l}^{total}(t)=2[1-|\alpha^2-\beta^2|^2\sin^2(2ct)]
\end{equation}
On the other hand, we can obtain the entanglement contained in the state $
|\psi(t)\rangle$, measured by the $2$-tangle, which is the square of the concurrence
\cite{ConC}. After some simple calculations, we get
\begin{equation}
\tau_{12}(t)=|\alpha^2-\beta^2|^2\sin^2(2ct)
\end{equation}
We depict the dynamical behavior of local information and
entanglement in Fig.~1(a). It can be seen that these two quantities
exhibit perfect complementary behavior during the time evolution.
In fact, this can be easily verified from Eqs.~(5)-(6), which yield
the complementarity relation $I_{l}^{total}(t)+2\tau
_{12}(t)=2$. It is well known that entanglement is some kind of
nonlocal information. Our results in this simple example
demonstrate that based on some suitable definition of local
information (e.g. $I_{F}$ here), the nonlocal information is
directly related to some appropriate measure of entanglement. In
the following section, we can see that this viewpoint of
entanglement and nonlocal information is also applicable in
three-qubit systems and for arbitrary two-qubit system Hamiltonian
$H$.
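The complementarity can also be checked numerically. The sketch below (not part of the original derivation; NumPy and the particular values of $c$, $t$, $\alpha $, $\beta $ are illustrative assumptions, with $\alpha ,\beta $ taken real) evaluates the closed-form state $|\psi (t)\rangle $ and verifies $I_{l}^{total}(t)+2\tau _{12}(t)=2$:

```python
import numpy as np

c, t = 0.7, 1.3
alpha, beta = 0.6, 0.8   # real amplitudes with alpha^2 + beta^2 = 1
# closed-form state under Ising coupling (see text), in the basis 00,01,10,11
psi = np.array([alpha * np.cos(c * t), -1j * beta * np.sin(c * t),
                beta * np.cos(c * t), -1j * alpha * np.sin(c * t)])
a = psi.reshape(2, 2)                        # amplitudes a[i, j] = <ij|psi(t)>
rho1, rho2 = a @ a.conj().T, a.T @ a.conj()  # reduced single-qubit states
I = lambda r: 2 * np.trace(r @ r).real - 1   # local information (= I_F for a qubit)
tau12 = 4 * abs(np.linalg.det(a)) ** 2       # 2-tangle = squared concurrence
# tau_12 = |alpha^2 - beta^2|^2 sin^2(2ct) for this state
assert np.isclose(tau12, abs(alpha**2 - beta**2)**2 * np.sin(2 * c * t)**2)
assert np.isclose(I(rho1) + I(rho2) + 2 * tau12, 2)  # I_l^total + 2 tau_12 = 2
```

For a pure two-qubit state the determinant formula $\tau _{12}=4|a_{00}a_{11}-a_{01}a_{10}|^{2}$ used here is exactly the squared concurrence, so the final assertion is the stated complementarity relation.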
\begin{figure}
\caption{(Color online) Information dynamics via Ising interaction (a) and
XY interaction (b). Total local information $I_{l}^{total}$ and entanglement
$2\tau_{12}$ exhibit perfect complementary behavior.}
\end{figure}
\textit{XY coupling.} In addition, we also demonstrate the quantum
state information dynamics for XY interaction, when the coupling
parameters of the Hamiltonian in Eq.(4) are $c_{1}=c_{2}=c,
c_{3}=0$. The result is depicted in Fig.1 (b), which exhibit the
same perfect complementary behavior as the situation of Ising
coupling.
\section{Three-qubit system}
In this section, we will extend the above discussions to three-qubit
systems. In such a system, there exists not only two-qubit entanglement but also genuine
three-qubit entanglement, which is shared by all three qubits.
Therefore, the corresponding relation between nonlocal information and
entanglement should be dealt with carefully. We consider a simple example
for demonstration, the system Hamiltonian is
\begin{equation}
H=\sum\limits_{ij}(c_{1}\sigma _{x}^{i}\otimes \sigma _{x}^{j}+c_{2}\sigma
_{y}^{i}\otimes \sigma _{y}^{j}+c_{3}\sigma _{z}^{i}\otimes \sigma _{z}^{j})
\end{equation}
The initial state is set as $|\psi(0)\rangle=|\Omega\rangle\otimes|0\rangle
\otimes|0\rangle$, where
$|\Omega\rangle=\alpha|0\rangle+\beta|1\rangle$. We first
calculate the spectrum of the above Hamiltonian $H$, the eigen
energy is denoted as $\epsilon_{k}$ ($k=1,2, \cdots ,8$) and the
corresponding eigen state is $|\phi_{k}\rangle$. The initial state
can be expressed in the
eigen basis as $|\psi(0)\rangle=\sum\limits_{k=1}^{8}\gamma_{k}|\phi_{k}
\rangle$. Therefore, the system state at time $t$ becomes $
|\psi(t)\rangle=exp(-itH)|\psi(0)\rangle=\sum\limits_{k=1}^{8}
\gamma_{k}e^{-i\epsilon_{k}t}|\phi_{k}\rangle$. According to Eq.(1-3), we
can obtain the total local information $I_{l}^{total}(t)$ easily. The
two-qubit entanglement between qubit $i,j$ ($ij=12,23,13$) measured by $2-$
tangle $\tau_{ij}(t)$ can also be calculated from the reduced
density matrices $\rho_{ij}(t)$ \cite{ConC}. As we have mentioned
above, there will be another form of entanglement besides pairwise
entanglement, i.e. genuine three-qubit entanglement, in systems of
three qubits. The genuine three-qubit entanglement can be measured
by the $3-$tangle $\tau_{123}(t)$ proposed in \cite{CKW}. In order
to observe the perfect complementary behavior between the total
local information and entanglement, we should adopt a suitable
function which combines the contributions of both
two-qubit and three-qubit entanglement. Here we choose the function $
\mathcal{E}(t)=2[\tau_{12}(t)+\tau_{23}(t)+\tau_{13}(t)]+3\tau_{123}(t)$,
and the information dynamics exhibit perfect complementary
behavior in Fig.~2. In the following section, we can see that the
coefficients before $\tau_{ij}$ and $\tau_{123}$ are not
arbitrary. In fact, this function has clear information-theoretic
meaning.
\begin{figure}
\caption{(Color online) Information dynamics in the three-qubit system via Ising interaction (a) and
XY interaction (b). Total local information $I_{l}^{total}$ and the weighted entanglement
$\mathcal{E}$ exhibit perfect complementary behavior.}
\end{figure}
Similar to the situation of two qubits, we can also easily derive
the complementarity relation as
$I_{l}^{total}(t)+\mathcal{E}(t)=3$. In three-qubit systems, both
two-qubit and three-qubit entanglement are nonlocal forms of
information. The above complementary behavior implies that if we
choose suitable measures of local information and different levels
of entanglement (e.g., two-qubit and genuine three-qubit
entanglement here), the nonlocal information is contributed
by the entanglement linearly with appropriate weights. Though we
only demonstrate this result via simple examples, the following
section shows that it is applicable for any Hamiltonian $H$ of a
three-qubit system.
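The relation $I_{l}^{total}(t)+\mathcal{E}(t)=3$ can be verified numerically for the model of Eq.~(7). The sketch below (not part of the original paper; NumPy is assumed, the equal couplings $c_{1}=c_{2}=c_{3}=1$ and the values of $t$, $\alpha $, $\beta $ are arbitrary illustrative choices) computes the $2$-tangles via the Wootters formula and obtains the $3$-tangle from the Coffman--Kundu--Wootters monogamy equality $4\det \rho _{1}=\tau _{12}+\tau _{13}+\tau _{123}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron3(ops):
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# couplings of Eq. (7) on all pairs, with c1 = c2 = c3 = 1
pairs = [(0, 1), (0, 2), (1, 2)]
H = sum(kron3([s if k in p else I2 for k in range(3)])
        for p in pairs for s in (sx, sy, sz))

# evolve |psi(0)> = (alpha|0> + beta|1>)|0>|0> for a time t
w, V = np.linalg.eigh(H)
t, alpha, beta = 0.9, 0.6, 0.8
psi0 = kron3([np.array([alpha, beta]), np.array([1, 0]), np.array([1, 0])])
psi = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

def rdm(psi, keep):
    """Reduced density matrix of the kept qubits of a 3-qubit pure state."""
    a = psi.reshape([2, 2, 2])
    axes = list(keep) + [k for k in range(3) if k not in keep]
    a = a.transpose(axes).reshape(2 ** len(keep), -1)
    return a @ a.conj().T

def two_tangle(rho):
    """Wootters 2-tangle (squared concurrence) of a two-qubit state."""
    yy = np.kron(sy, sy)
    ev = np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)
    lam = np.sort(np.sqrt(np.abs(ev.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3]) ** 2

I_loc = sum(2 * np.trace(rdm(psi, (k,)) @ rdm(psi, (k,))).real - 1
            for k in range(3))
taus = {p: two_tangle(rdm(psi, p)) for p in pairs}
# 3-tangle via CKW: tau_123 = 4 det(rho_1) - tau_12 - tau_13
tau123 = 4 * np.linalg.det(rdm(psi, (0,))).real - taus[(0, 1)] - taus[(0, 2)]
assert np.isclose(I_loc + 2 * sum(taus.values()) + 3 * tau123, 3)  # Eq. (10)
```

Since $\tau _{123}$ is obtained here from qubit $1$ only, the final assertion is a genuine numerical test: it holds precisely because the monogamy equality is symmetric over the three qubits.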
\section{Information complementarity relation}
In this section, we will formalize the basic information
complementarity relation underlying the above information dynamics.
We adopt the operationally invariant information measure proposed
by Brukner and Zeilinger \cite{Brukner and Zeilinger} as the
measure of local information, which is defined as the sum of
one-shot information gained over a complete set of mutually
complementary observables (MCO). For a pure $n$-qubit state, if
measured by the operationally invariant information measure, the total
information content is $n$ bits and is completely contained in the
system. For a spin-$1/2$ system with density matrix $\rho $,
the operationally invariant information content is
\begin{equation}
I_{BZ}(\rho )=2Tr\rho ^{2}-1.
\end{equation}
Therefore, for an $n$-qubit quantum system in pure state $|\Omega \rangle $,
the amount of information in local form is $I_{local}(|\Omega \rangle
)=\sum\limits_{i=1}^{n}I_{i}$, where $I_{i}=I_{BZ}(\rho _{i})=2Tr\rho
_{i}^{2}-1$ is the local information
of the $i$th qubit, with $\rho _{i}=Tr_{{1,\cdots ,i-1,i+1,\cdots ,n}}(|\Omega
\rangle \langle \Omega |)$ its reduced density operator.
The non-local information is $I_{non-local}=n-I_{local}$. We will
show that such non-local information is related to entanglement. In other
words, \textit{entanglement can be viewed as non-local form of information}.
We start by considering the simplest case of a two-qubit system in the pure
state $|\Omega\rangle_{12}=\sum\limits_{i,j=0,1}a_{ij}|ij\rangle$. The local
information contained in qubit $1$ and $2$ is $
I_{1}=I_{2}=1-4|a_{00}a_{11}-a_{01}a_{10}|^{2}$. Therefore the non-local
information is $I_{non-local}=2-I_{1}-I_{2}=8|a_{00}a_{11}-a_{01}a_{10}|^{2}$
. If measured by 2-tangle, which is the square of concurrence \cite{ConC},
the pairwise entanglement is $\tau_{12}=4|a_{00}a_{11}-a_{01}a_{10}|^{2}$.
Thus we can write local and non-local information as $I_{1}=I_{2}=1-\tau_{12}
$ and $I_{non-local}=2\tau_{12}$, which yields the complementarity relation
\begin{equation}
I_{1}+I_{2}+2\tau_{12}=2.
\end{equation}
The relation between local information and
nonlocal entanglement is depicted in Fig.~3(A).
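These closed-form expressions can be spot-checked for a random two-qubit pure state (a numerical sketch, not part of the original text; NumPy and the random seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
a /= np.linalg.norm(a)                       # random amplitudes a_ij, normalized
rho1, rho2 = a @ a.conj().T, a.T @ a.conj()  # reduced states of qubits 1 and 2
I = lambda r: 2 * np.trace(r @ r).real - 1   # I_BZ of Eq. (8)
tau12 = 4 * abs(np.linalg.det(a)) ** 2       # 2-tangle; det a = a00*a11 - a01*a10
assert np.isclose(I(rho1), 1 - tau12)        # I_1 = 1 - tau_12
assert np.isclose(I(rho2), 1 - tau12)        # I_2 = 1 - tau_12
assert np.isclose(I(rho1) + I(rho2) + 2 * tau12, 2)  # Eq. (9)
```

The check works because for a pure bipartite state $\det (aa^{\dagger })=|\det a|^{2}$, so $I_{1}=1-4\det \rho _{1}=1-\tau _{12}$, and likewise for qubit $2$.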
\begin{figure}
\caption{(Color online) Information diagram for local information and
entanglement. (A) Two-qubit pure states; (B) three-qubit pure states. One
circle represents one bit of information.}
\end{figure}
If we focus on one qubit, say qubit $1$, then the $1$ bit information of
this qubit is partly contained in itself, \textit{i.e.}, $I_{1}$. The
residual information is contained in its entanglement with its environment,
in this case qubit $2$. The amount of this kind of information is $\tau_{12}$.
It can be seen that $I_{1(2)}+\tau_{12}=1$. If the system is not isolated,
in general it will be in a mixed state $\rho_{12}$. In this case, the $2$
bit information is not only contained in the system but also in its
correlations with the outside environment. Therefore, $I_{1}+I_{2}+2
\tau_{12}<2$ for a mixed state $\rho_{12}$. This result can be proved
through the convexity of $I_{1}$, $I_{2}$ and $\tau_{12}$.
Now we will extend our discussions to the case of three-qubit systems in a
pure state. The total information content in this system is $3$ bit. The
local information contained in each individual qubit is $I_{m}=2Tr
\rho_{m}^{2}-1$, $m=1,2,3$, where $\rho_{m}$ is the reduced density matrix
for each qubit. Different from the two-qubit system, in this case the
non-local information exists not only in 2-qubit entanglement, but also in
genuine 3-qubit entanglement. It can be written as $
I_{nonlocal}=3-(I_{1}+I_{2}+I_{3})$. Similar to the case of two qubits, the
nonlocal information in two-qubit entanglement is $
I_{nonlocal}(2)=I_{12}+I_{13}+I_{23}=2(\tau_{12}+\tau_{13}+\tau_{23})$,
where $\tau_{ij}$ is the 2-tangle between qubit $i$ and $j$. Therefore the
residual non-local information in the form of genuine three-qubit
entanglement should be $I_{nonlocal}(3)=3-(I_{1}+I_{2}+I_{3})-2(\tau_{12}+
\tau_{13}+\tau_{23})$. We note that $2\tau_{12}=2(\lambda_{12}^{1}-
\lambda_{12}^{2})^{2}=(1-I_{1}-I_{2}+I_{3})-4\lambda_{12}^{1}\lambda_{12}^{2}
$, where $\lambda_{12}^{1} \geq \lambda_{12}^{2}$ are the square roots of
the eigenvalues of $\rho_{12}\tilde{\rho}_{12}$. Here $\tilde{\rho}_{12}=(
\hat{\sigma}_{y}\bigotimes\hat{\sigma}_{y})\rho_{12}^{*}(\hat{\sigma}
_{y}\bigotimes\hat{\sigma}_{y})$ is the time-reversed density matrix of $
\rho_{12}$. Similarly $2\tau_{13}=(1-I_{1}-I_{3}+I_{2})-4\lambda_{13}^{1}
\lambda_{13}^{2}$ and $2\tau_{23}=(1-I_{2}-I_{3}+I_{1})-4\lambda_{23}^{1}
\lambda_{23}^{2}$. Then the residual non-local information in the form of
genuine three-qubit entanglement is $I_{nonlocal}(3)=4(\lambda_{12}^{1}
\lambda_{12}^{2}
+\lambda_{13}^{1}\lambda_{13}^{2}+\lambda_{23}^{1}\lambda_{23}^{2})$. If
measured by 3-tangle proposed in \cite{CKW}, the genuine 3-qubit
entanglement is $\tau_{123}=4\lambda_{12}^{1}\lambda_{12}^{2}=4
\lambda_{13}^{1}\lambda_{13}^{2}=4\lambda_{23}^{1}\lambda_{23}^{2}$.
Therefore, we can establish a direct relation between nonlocal information
and some appropriate measure of genuine three-qubit entanglement, i.e. $
I_{nonlocal}(3)=3\tau_{123}$. The complementarity relation between
local information and entanglement is as follows
\begin{equation}
I_{1}+I_{2}+I_{3}+2(\tau_{12}+\tau_{13}+\tau_{23})+3\tau_{123}=3.
\end{equation}
Here, we present the complementarity relations in the formal
information-theoretic framework. If the system is in a mixed
state, the
above equation will be replaced by an inequality, \textit{i.e.}, $
I_{1}+I_{2}+I_{3}+2(\tau_{12}+\tau_{13}+\tau_{23})+3\tau_{123}\leq3$. If we
focus on qubit $1$, then the $1$ bit information of this qubit is partly
contained in itself ($I_{1}$). The residual information is contained in its
entanglement with its environment, \textit{i.e.}, qubit $2$ and $3$. In fact
the relation $I_{1}+\tau_{12}+\tau_{13}+\tau_{123}=1$ is satisfied for qubit
$1$. Similar results hold for qubits $2$ and $3$. The above results are
depicted in Fig.~3(B).
If we write the local reduced density matrix as $\rho =\frac{1}{2}(I+\vec{r}
\cdot \vec{\sigma})$, with $\vec{r}$ the Bloch vector, the measure
of local information proposed by Brukner and Zeilinger in
Eq.~(8) becomes $I_{BZ}(\rho )=2Tr\rho ^{2}-1=|\vec{r}|^{2}$. It can be
easily verified that this measure of local information is
equivalent to the measure based on optimal fidelity in Eq.(2),
i.e.,
\begin{equation}
I_{BZ}(\rho )=I_{F}(\rho )
\end{equation}
Therefore, the complementarity relations between local information
quantified by $I_{F}(\rho )$ and entanglement and that between
local information based on $I_{BZ}(\rho )$ and entanglement are
equivalent in nature. However, it can be seen from their
definitions that $I_{F}(\rho )$ and $I_{BZ}(\rho )$
have different physical meanings. The local information based on fidelity $
I_{F}(\rho )$ is defined from the viewpoint of quantum
communications, while $I_{BZ}(\rho )$ is an operationally
invariant information measure from the measurement viewpoint \cite{Brukner
and Zeilinger}. Since the above two complementarity relations are
equivalent, we obtain that the complementarity relation
between local information quantified by $I_{F}(\rho )$ and
entanglement holds for arbitrary initial pure states. In
particular, the initial states could be entangled states, which
means that the isolated system contains nonlocal information
initially. Due to the interactions, the entanglement will change,
which results in the change of nonlocal information.
\section{Conclusions and discussions}
In summary, we adopt a measure of local information based on
optimal fidelity to investigate the information dynamics in two-
and three-qubit systems with interactions between qubits. Through
simple examples, we demonstrate the perfect complementary behavior
between local information and entanglement. We also show that the
measure of local information based on optimal fidelity is
equivalent to the operationally invariant information measure proposed by
Brukner and Zeilinger. Furthermore, we establish a direct relation
between nonlocal information and different levels of entanglement,
and formalize the information complementarity relation by some
appropriate measures of local information and entanglement.
For two-qubit pure states, using von Neumann entropy as a measure
of local information, there has been a similar complementarity
relation between local information and entanglement, which is
measured by entanglement of formation \cite{Horodecki}. Here we
adopt a measure of local information by using linear entropy
rather than von Neumann entropy. This is based on the following
two considerations. One is that linearity always implies
additivity, which is simple and suitable for establishing
complementarity relations. The other point is that using linear
entropy we demonstrate that nonlocal information can be directly
related to the polynomial measures of entanglement, i.e. k-tangle.
For the situation of two qubits, 2-tangle is just the square of
concurrence, which is a function of entanglement of formation.
However, there is no straightforward generalization of
entanglement of formation to quantum states of more than two
qubits. Therefore, using linear entropy, it is simpler for us
to generalize the information complementarity relations to the
case of more than two qubits.
Though the relation between nonlocal information and entanglement
is demonstrated here for two- and three-qubit pure states, it is
natural to generalize the information complementarity relation to
arbitrary $n$-qubit pure states via the following
conjecture:
\begin{equation}
\sum\limits_{i} I_{i}+2\sum\limits_{i_{1}<i_{2}}\tau_{i_{1}i_{2}}+ \cdots +
n \sum\limits_{i_{1}<i_{2}<\cdots <i_{n}}\tau_{i_{1}\cdots i_{n}}= n
\end{equation}
where $\tau_{i_{1}\cdots i_{k}}$ $(k=2,3,\cdots, n)$ are some
appropriate measures of genuine $k-$qubit entanglement. Since
nonlocal information is contributed by different levels of
entanglement as can be seen from the above discussions, conversely
we can characterize entanglement through nonlocal information. In
our recent work \cite{GME}, we have proposed such an
information-theoretic measure of genuine multi-qubit entanglement,
and utilized it to explore genuine multi-qubit entanglement in
spin systems.
\section{Acknowledgment}
This work was funded by National Fundamental Research Program, the
Innovation funds from Chinese Academy of Sciences, NCET-04-0587,
and National Natural Science Foundation of China (Grant No.
60121503, 10574126).
\end{document}
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\if11
{
\title{\bf Sharp nonparametric bounds for decomposition effects with two binary mediators}
\author{Erin E Gabriel\hspace{.2cm}\\
and \\
Michael C Sachs \\
and \\
Arvid Sjölander \\
Department of Medical Epidemiology and Biostatistics \\ Karolinska Institutet, \\
Stockholm,
Sweden.}
\maketitle
} \fi
\if01
{
\begin{center}
{\LARGE\bf Sharp nonparametric bounds for decomposition effects with two binary mediators}
\end{center}
} \fi
\begin{abstract}
In randomized trials, once the total effect of the intervention has been estimated, it is often of interest to explore mechanistic effects through mediators along the causal pathway between the randomized treatment and the outcome. In the setting with two sequential mediators, there are a variety of decompositions of the total risk difference into mediation effects. We derive sharp and valid bounds for a number of mediation effects in the setting of two sequential mediators, both subject to unmeasured confounding with the outcome. We provide five such bounds in the main text corresponding to two different decompositions of the total effect, as well as the controlled direct effect, with an additional thirty novel bounds provided in the supplementary materials corresponding to the terms of twenty-four four-way decompositions. We also show that, although it may seem that one can produce sharp bounds by adding or subtracting the limits of the sharp bounds for terms in a decomposition, this almost always produces valid, but not sharp, bounds that can even be completely noninformative. We investigate the properties of the bounds by simulating random probability distributions under our causal model and illustrate how they are interpreted in a real data example.
\end{abstract}
\noindent
{\it Keywords:} Causal bounds; Effect decomposition; Mediation analysis; Natural effects; Randomized trials
\spacingset{1.5}
\section{Introduction}
In randomized trials with full compliance, the observed association between the intervention and the outcome has a causal interpretation as the total intervention effect through all possible pathways. Once a total intervention effect has been established, there is often additional interest in specific pathways and mechanisms through which the intervention may affect the outcome. For settings with a single binary mediator, \cite{robins1992identifiability} used counterfactual arguments to provide a formal framework for reasoning about direct effects and indirect (mediated) effects. \cite{pearl2001direct} proposed counterfactual definitions of direct and indirect effects for a single mediator. Specifically, \cite{pearl2001direct} distinguished between the controlled direct effect (CDE), which sets the mediator to a fixed value for each subject, and the natural direct effect (NDE), which sets the mediator to a counterfactual value that may differ across subjects. Although the natural direct effect is more difficult to conceptualize, it has the appealing property that it adds up, together with the corresponding natural indirect effect (NIE), to the total effect.
A problem with such effect decompositions is that the separate effect components are often not identified from data. Even when randomization rules out intervention-outcome confounding, there may still be unmeasured confounders for the mediator(s) and the outcome. If so, then any attempt to estimate the direct intervention effect by controlling for the mediator(s) will open back-door paths through the unmeasured confounders, thereby inducing an association between the intervention and the outcome \citep{robins1992identifiability}.
When causal effects are not identified, bounds for the target parameter of interest, i.e. a range that is guaranteed to include the true parameter value given the observed data, can be used to reduce the possible range of values that need to be considered. \citet{balke1994counterfactual} developed a method for deriving bounds for simple causal estimands based on linear programming techniques. For settings with a single binary mediator, \citet{cai2007non} used this linear programming technique to derive bounds on the CDE, assuming that the intervention is (or can be considered) randomized, but allowing for unmeasured confounding of the mediator and the outcome. \citet{sjolander2009bounds} provided analogous bounds for the NDE. \cite{kaufman2009analytic} provided bounds for both the CDE and the NDE while relaxing the assumption about the exposure being randomized. Other authors have derived bounds for the CDE and NDE under certain monotonicity assumptions \citep{vanderweele2011controlled,chiba2010bounds}, and for settings where the mediator has more than two levels \citep{miles2017partial}.
Recently, there has been growing interest in effect decompositions with multiple mediators \cite[e.g.][]{avin2005identifiability,albert2011generalized,vanderweele2014mediation,daniel2015causal,steen2017flexible}. However, this line of literature has focused on appropriate definitions and sufficient criteria for identification; to our knowledge, bounds for the nonidentified effect components in settings with more than one mediator have not been published. This is an important gap in the literature, since the introduction of multiple mediators poses identification problems that are not present in settings with a single mediator. Specifically, \cite{daniel2015causal} showed that when a mediator has a direct effect on a subsequent mediator, the indirect effect through the former mediator is generally not nonparametrically identified, even in the complete absence of unmeasured confounders. Hence, the case for bounds is even stronger in settings with multiple mediators than in settings with a single mediator.
We derive bounds for the setting of a two-armed randomized trial with two causally ordered binary mediators, both of which are confounded with the binary outcome of interest. \cite{steen2017flexible} proposed a two-way and a three-way decomposition of the total effect, and we provide bounds for each component of both. The two-way decomposition separates the total effect into the direct effect of the intervention on the outcome and the total indirect effect through both of the mediators, the `joint natural indirect effect'; the three-way decomposition further separates this joint natural indirect effect into two indirect effects through the two mediators. We also provide bounds for the controlled direct effects in this setting. In addition to providing sharp bounds for each term in the two-way and three-way decompositions, we demonstrate that in general the bounds for the separate terms of the decompositions cannot be combined to yield sharp bounds on their sum or difference. The exception arises in the two-way decomposition, where subtracting the limits of the bounds for the joint indirect effect or the direct effect from the identified total effect always yields sharp bounds for the remaining effect, as we prove.
Further decompositions have been derived; as shown in \citet{daniel2015causal}, one of the indirect effects in the three-way decomposition can itself be decomposed into the effect through the second mediator due to the intervention acting directly on that mediator, and the effect of the intervention on the second mediator through the first, resulting in a four-way decomposition. In addition to the bounds in the main text, the supplementary materials provide bounds for the exhaustive list of the thirty-two terms appearing in the twenty-four possible four-way decompositions of the total intervention effect given by \citet{daniel2015causal}. For each set of bounds we also provide easily downloadable \texttt{R} functions to facilitate their use.
In practice, there is a tendency towards point estimation, even in the absence of a discussed or well-defined estimand, or when that estimand is only identifiable under strong untestable assumptions. Although sensitivity analysis is sometimes used to mitigate concerns about such assumptions, these procedures rarely provide an assumption-free range for the true causal effect. Due to the rising interest in the ``estimands framework'' for randomized clinical trials \citep{LipkovichRCT}, assumption-free, i.e. nonparametric, bounds may become part of the standard for clinical trials analysis. In such settings, there are often multiple binary mediators, such as initiation of rescue medication, withdrawal from treatment, or relapse or remission prior to death, making our bounds of practical relevance. Nonparametric bounds may sometimes be considered too wide to be useful in practice. In contrast, we believe that wide bounds are useful to present, since they highlight how little information the observed data contain about the target parameter, and how much a point estimate would have to rely on strong, potentially untestable assumptions.
The paper is organized as follows. In Section \ref{sec:not} we introduce our notation and outline our estimands and settings of interest. In Section \ref{results} we provide the bounds for each of the settings and estimands of interest. In Section \ref{sec:sims} we conduct numerical studies to gain insight into the bounds we provide. In Section \ref{real} we apply our derived bounds in a data example from the \texttt{mediation} package in \texttt{R}. Code to reproduce all results in the simulations and real data example is available at \url{[blinded-url]}, in addition to \texttt{R} functions to use the bounds in real data. Finally, in Section \ref{dis} we outline the limitations of our bounds and discuss future areas of research.
\section{Preliminaries} \label{sec:not}
\subsection{Notation and Setting}
Let $X$ and $Y$ be the binary intervention and binary outcome of interest. Let $M_1$ and $M_2$ be two binary mediators on paths from $X$ to $Y$. Let $U$ be a set of unmeasured confounders of $Y$, $M_1$ and $M_2$ that is independent of $X$. The variables in this set have an unrestricted and unknown distribution; e.g., they may be a set of continuous and correlated unmeasured variables. We let $Y(X=x)$ denote the potential outcome $Y$ under the intervention that sets $X$ to $x$, and $Y(x, m_1, m_2)$ the analogue under an intervention that sets $X$ to $x$, $M_1$ to $m_1$, and $M_2$ to $m_2$.
Our setting of interest is as depicted in the causal model or directed acyclic graph (DAG) in Figure \ref{b}. We interpret the DAG in Figure \ref{b} to represent a set of nonparametric structural equations such that:
\begin{eqnarray*}
\label{seqs}
x &=& f_X(\epsilon_X)\\
m_1 &=& f_{M_1}(x, u, \epsilon_{m_{1}}) \\
m_2 &=& f_{M_2}(x, u, m_1, \epsilon_{m_{2}}) \\
y &=& f_Y(x, u, m_1, m_2, \epsilon_y)
\end{eqnarray*}
for some response functions $f_X, f_{M_1}, f_{M_2}, f_Y$. The set of $\epsilon$'s represent `error terms' due to omitted factors, which are assumed independent of $U$ and of each other. Given the values of the error terms and the values of a variable's parents in the graph, the value of the variable is determined by its response function; the error terms thus govern the manner in which the variable responds to its parents.
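To make the structural equations concrete, the following sketch draws observations from a hypothetical model consistent with the DAG; the particular response probabilities, the scalar confounder $U$, and the function name are illustrative assumptions only, not the data-generating process of any study considered here.

```python
import random

def simulate_one(rng):
    # Unmeasured confounder U affects M1, M2 and Y, but not the randomized X.
    u = rng.random()
    x = int(rng.random() < 0.5)                                   # f_X: 1:1 randomization
    m1 = int(rng.random() < 0.2 + 0.3 * x + 0.4 * u)              # f_{M1}(x, u, eps)
    m2 = int(rng.random() < 0.1 + 0.2 * x + 0.2 * m1 + 0.3 * u)   # f_{M2}(x, u, m1, eps)
    y = int(rng.random() < 0.1 + 0.2 * x + 0.15 * m1 + 0.15 * m2 + 0.2 * u)  # f_Y
    return x, m1, m2, y

rng = random.Random(2023)
data = [simulate_one(rng) for _ in range(5000)]
```

Conditioning on $M_1$ or $M_2$ in such simulated data induces an $X$--$Y$ association through $U$, which is exactly why the mediation effects below are not identified.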
In this setting, the total effect (TE) is identified and can be estimated. However, none of the mediation effects are identified from the observed data due to the unmeasured confounder(s) $U$, which open pathways between $X$ and $Y$ when conditioning on $M_1$ or $M_2$. Both \citet{steen2017flexible} and \citet{daniel2015causal} provide rigorous proofs of this statement.
\begin{figure}
\caption{Causal diagram of the setting of interest.}
\label{b}
\end{figure}
Thus, in the following section we provide valid and sharp bounds for these estimands. These bounds are functions of the observed data distribution $p(Y,M_1,M_2|X)$. We say that the bounds are `sharp' when each value within the bounds is a possible value for the estimand, given the true probabilities that are estimable from data. Similarly, we say that the bounds are `valid' when no value outside the bounds is a possible value, given the true probabilities that are estimable from data.
\subsection{Estimands}
There are several further potential outcomes to define in the setting with two mediators when the mediators are set to their `natural' values; which estimands arise depends on whether there is an effect of $M_1$ on $M_2$. When there is an effect of $M_1$ on $M_2$, as in our setting of interest in Figure \ref{b}, the potential outcome $Y(x, M_1(x_1), M_2(x_2, M_1(x_3)))$ becomes relevant. This is the potential outcome of $Y$ under the interventions that set $X$ to $x$, $M_1$ to the value it would take on under the intervention that sets $X$ to $x_1$, and $M_2$ to the value it would take on under the intervention that sets $X$ to $x_2$ and $M_1$ to the value it would take on if $X$ were set to $x_3$. If there is no effect of $M_1$ on $M_2$, this potential outcome simplifies to $Y(X=x, M_1(x_1), M_2(x_2))$, as the value of $M_1$ does not affect $M_2$. In what follows, we focus on the setting of Figure \ref{b}, where we allow an effect of $M_1$ on $M_2$.
The total effect of $X$ on $Y$ is defined as:
$$\mbox{TE} = p\{Y(1)=1\} -p\{Y(0)=1\}.$$
In the randomized and perfect compliance setting, such as our setting of interest, the TE is identified, and equal to $p(Y=1|X=1)-p(Y=1|X=0)$. We are also interested in the direct effect of $X$ on $Y$ holding the mediators to either a fixed level for all subjects, or the natural level they would have taken on had $X$ been set to $x$. Holding a single mediator to a fixed level gives what is referred to as the controlled direct effect. This can directly be extended to multiple mediators. We define the controlled direct effects (CDE) for two mediators, which have four possible levels defined by the values to which the mediators are being held, by:
\begin{eqnarray*}
\mbox{CDE}\mbox{-}m_1 m_2 &=& p\{Y(1, M_1=m_1, M_2=m_2)=1\}\\ &-&p\{Y(0, M_1=m_1, M_2=m_2)=1\},
\end{eqnarray*}
$\mbox{ for } m_1, m_2 \in \{0, 1\}$.
This estimand fully describes the possible controlled direct effects in the setting of Figure \ref{b}. When the mediators are instead held to the counterfactual values they would have taken had one intervened on $X$, the resulting contrast is referred to, for a single mediator, as the natural direct effect. Analogously to the controlled direct effect case with multiple mediators, we define the natural direct effects (NDE) in the setting of Figure \ref{b}. These are the estimands discussed in \citet{daniel2015causal}, and we follow their nomenclature directly.
\begin{eqnarray*}
\mbox{NDE}\mbox{-}x_1 x_2 x_3&=&p\{Y(1, M_1(x_1), M_2(x_2, M_1(x_3)))=1\}\\
&-&p\{Y(0, M_1(x_1), M_2(x_2, M_1(x_3)))=1\},
\end{eqnarray*}
for $x_1, x_2, x_3 \in \{0, 1\}$. The estimand $\mbox{NDE}\mbox{-}{000}$ was said to be the obvious extension of the pure natural direct effect in \citet{daniel2015causal}, and is the term in equation 7 in the decomposition of \citet{steen2017flexible}.
We also consider the effect transmitted along either one or both mediators, the joint natural indirect effect:
\begin{eqnarray*}
\mbox{JNIE}_{x}&=&p\{Y(x, M_1(1), M_2(1, M_1(1)))=1\}\\ &-& p\{Y(x, M_1(0), M_2(0, M_1(0)))=1\},
\end{eqnarray*}
for $x \in \{0, 1\}$. $\mbox{JNIE}_{1}$ is equal to the term in equation 6 of the decomposition as given in \citet{steen2017flexible}.
We can further decompose this joint effect into two indirect effects, where the first component is the effect through $M_1$, directly and also through $M_2$, in the notation of \citet{daniel2015causal}:
\begin{eqnarray*}
\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}x x_2 &=& p\{Y(x, M_1(1), M_2(x_2, M_1(1)))=1\}\\ &-& p\{Y(x, M_1(0), M_2(x_2, M_1(0)))=1\},
\end{eqnarray*}
for $x, x_2 \in \{0, 1\}$. $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$ is equal to the term in equation 8 of the decomposition given in \citet{steen2017flexible}.
Finally, we consider the indirect effect through $M_2$, excluding the effect of $M_1$ through $M_2$, as given in \citet{daniel2015causal}:
\begin{eqnarray*}
\mbox{NIE}_{2}\mbox{-}x x_1 x_3 &=&p\{Y(x, M_1(x_1), M_2(1, M_1(x_3)))=1\}\\ &-& p\{Y(x, M_1(x_1), M_2(0, M_1(x_3)))=1\},
\end{eqnarray*}
for $x, x_1, x_3 \in \{0, 1\}$. $\mbox{NIE}_{2}\mbox{-}100$ is equal to the term in equation 9 of the decomposition given in \citet{steen2017flexible}. Then the decomposition in \citet{steen2017flexible} is in our notation $$\mbox{TE} = \mbox{NDE}\mbox{-}{000} + \mbox{JNIE}_{1}$$ and is equivalent to
$$\mbox{TE}= \mbox{NDE}\mbox{-}{000} + \mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11 + \mbox{NIE}_{2}\mbox{-}100.$$ We focus on the bounds for the terms of these two decompositions in the main text. However, we also give the bounds on all terms in the twenty-four four-way decompositions of \citet{daniel2015causal} in Section S3 of the supplementary materials; the decompositions themselves are given in Section S2. For this we need to define two more estimands, where again we use the same notation as \citet{daniel2015causal}.
The indirect effect through $M_1$, excluding the effect of $M_1$ through $M_2$, is:
\begin{eqnarray*}
\mbox{NIE}_{1}\mbox{-}x x_2 x_3 &=&p\{Y(x, M_1(1), M_2(x_2, M_1(x_3)))=1\}\\ &-& p\{Y(x, M_1(0), M_2(x_2, M_1(x_3)))=1\},
\end{eqnarray*}
and the indirect effect of $M_1$ through $M_2$, excluding the direct effect of $M_1$, is:
\begin{eqnarray*}
\mbox{NIE}_{12}\mbox{-}x x_1 x_2 &=&p\{Y(x, M_1(x_1), M_2(x_2, M_1(1)))=1\}\\ &-& p\{Y(x, M_1(x_1), M_2(x_2, M_1(0)))=1\}.
\end{eqnarray*}
These two terms sum to the effect through $M_1$ and possibly also through $M_2$: $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11 =\mbox{NIE}_{1}\mbox{-}110 + \mbox{NIE}_{12}\mbox{-}111$. Thus the four-way decomposition is given, as in decomposition 3 of \citet{daniel2015causal}, by $$\mbox{TE} = \mbox{NDE}\mbox{-}{000}+ \mbox{NIE}_{1}\mbox{-}110 + \mbox{NIE}_{2}\mbox{-}100 +\mbox{NIE}_{12}\mbox{-}111.$$
Define the shorthand notation for the estimable probabilities as:
$$p_{ym_1m_2\cdot x} = p\{Y=y, M_1=m_1, M_2=m_2|X=x\}.$$ For example, $p_{111\cdot 1} = p\{Y=1, M_1=1, M_2=1|X=1\}$.
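As an illustration of this notation, the estimable probabilities can be computed as empirical frequencies from trial data; here `data` is assumed to be a list of $(x, m_1, m_2, y)$ tuples, and the function name is our own.

```python
from collections import Counter

def estimable_probs(data):
    # p_{y m1 m2 . x} = p(Y=y, M1=m1, M2=m2 | X=x), estimated by frequencies.
    n_arm = Counter(x for x, _, _, _ in data)
    cells = Counter((y, m1, m2, x) for x, m1, m2, y in data)
    return {(y, m1, m2, x): cells[(y, m1, m2, x)] / n_arm[x]
            for y in (0, 1) for m1 in (0, 1)
            for m2 in (0, 1) for x in (0, 1)}

# Within each arm, the eight cell probabilities sum to one.
p = estimable_probs([(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 1), (1, 1, 1, 0)])
```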
\section{Results} \label{results}
As the variables of interest in the DAG in Figure \ref{b} are all assumed binary, there exists a canonical partitioning of the unmeasured confounder $U$ into finitely many states, as described by \citet{balke1994counterfactual}. In this partitioning, the response function corresponding to each variable in the DAG is categorical with $2^{2^k}$ levels, where $k$ is the number of parents of that variable in the DAG. Ignoring the response function variable for $X$, since we condition on it, this leads to a total of $2^{2^1}\times 2^{2^2}\times 2^{2^3}=16{,}384$ probabilities associated with the joint distribution of the response function variables for $(M_1, M_2, Y)$, on which the thirty-two estimable probabilities impose the constraints used to bound the estimands of interest. The mediation effects of interest are the objectives that we maximize and minimize symbolically using vertex enumeration, resulting in bounds on the counterfactual probabilities in terms of the estimable data distribution. \\
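The counting above can be reproduced directly: a response function for a variable with $k$ binary parents (besides $U$) is one of $2^{2^k}$ deterministic maps from parent values to a binary value, and the product over $M_1$, $M_2$ and $Y$ gives the stated dimension.

```python
def n_response_types(k_parents):
    # A deterministic map from k binary parents to a binary value: 2^(2^k) choices.
    return 2 ** (2 ** k_parents)

# M1 has parent X; M2 has parents X, M1; Y has parents X, M1, M2.
dim = n_response_types(1) * n_response_types(2) * n_response_types(3)
print(dim)  # 4 * 16 * 256 = 16384
```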
\noindent \textbf{Result 1}:\\
The bounds given in \eqref{Beq:1} are valid and sharp for $\mbox{CDE}\mbox{-}m_1 m_2$ under Figure \ref{b}.
\begin{eqnarray}
\label{Beq:1}
-1 + p_{0m_1m_2\cdot 0} + p_{1m_1m_2\cdot 1} \leq \mbox{CDE}\mbox{-}m_1 m_2 \leq 1 - p_{0m_1m_2\cdot 1} - p_{1m_1m_2\cdot 0}
\end{eqnarray}
With a small amount of algebra one can write the bounds in Result 1 in terms of the total effect, TE, with the lower bound as TE $- B(m_1,m_2)$ and the upper as TE $- B(m_1,m_2) + g(m_1,m_2)$. Here,
$B(m_1,m_2)$ is the sum of the terms $p_{1m'_1m'_2\cdot 1} + p_{0m'_1m'_2\cdot 0}$ over all $m'_1m'_2 \neq m_1m_2$, and $g(m_1,m_2) = p\{(M_1, M_2) \neq (m_1, m_2)|X = 0\} + p\{(M_1, M_2) \neq (m_1, m_2)|X = 1\}$. From this representation it is easy to see that $g(m_1,m_2) = 0$ implies $B(m_1,m_2)=0$, in which case the bounds collapse to the single point TE. It also follows immediately that if TE $> B(m_1,m_2)$ then $\mbox{CDE}\mbox{-}m_1 m_2 > 0$. Moreover, when $p(M_1=m_1, M_2=m_2)=1$, all of the NDE estimands collapse to the CDE for the same $m_1, m_2$, and the same observations apply; in that case, however, the relationship is deterministic rather than one of mediation.
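Result 1 is simple enough to evaluate directly. The sketch below is a Python illustration of our own (the downloadable functions accompanying the paper are written in \texttt{R}); it assumes the estimable probabilities are stored in a dictionary keyed by $(y, m_1, m_2, x)$.

```python
def cde_bounds(p, m1, m2):
    # Sharp bounds for CDE-m1m2 (Result 1); p maps (y, m1, m2, x) to
    # p(Y=y, M1=m1, M2=m2 | X=x).
    lower = -1 + p[(0, m1, m2, 0)] + p[(1, m1, m2, 1)]
    upper = 1 - p[(0, m1, m2, 1)] - p[(1, m1, m2, 0)]
    return lower, upper

# Example: a uniform distribution over the eight cells in each arm.
uniform = {(y, m1, m2, x): 1 / 8 for y in (0, 1) for m1 in (0, 1)
           for m2 in (0, 1) for x in (0, 1)}
lo, hi = cde_bounds(uniform, 0, 0)  # (-0.75, 0.75)
```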
\noindent \textbf{Result 2}:\\
The bounds given in \eqref{Beq:2} and \eqref{Beq:3} are valid and sharp for $\mbox{NDE}\mbox{-}000$ under Figure \ref{b}.
\begin{align}
& \mbox{NDE}\mbox{-}000 \geq \nonumber \\
\label{Beq:2}
& \max\left\{\begin{array}{l}
p_{111\cdot 1} - p_{100\cdot 0} - p_{110\cdot 0} - p_{101\cdot 0} + p_{011\cdot 0} - 1\\
-2 + p_{000\cdot 0} + p_{010\cdot 0} + 2p_{001\cdot 0} + p_{101\cdot 0} + p_{101\cdot 1} + p_{011\cdot 0}\\
-2 + p_{000\cdot 0} + 2p_{010\cdot 0} + p_{110\cdot 0} + p_{110\cdot 1} + p_{001\cdot 0} + p_{011\cdot 0}\\
-2 + 2p_{000\cdot 0} + p_{100\cdot 0} + p_{100\cdot 1} + p_{010\cdot 0} + p_{001\cdot 0} + p_{011\cdot 0}\\
-1 + p_{000\cdot 0} + p_{010\cdot 0} + p_{001\cdot 0} + p_{011\cdot 0}
\end{array}\right\},
\end{align}
\\
and
\begin{align}
&\mbox{NDE}\mbox{-}000 \leq \nonumber \\
\label{Beq:3}
&\min\left\{\begin{array}{l}
1 + p_{000\cdot 0} - p_{010\cdot 1} - p_{110\cdot 0} + p_{001\cdot 0} + p_{011\cdot 0}\\
1 + p_{000\cdot 0} + p_{010\cdot 0} - p_{001\cdot 1} - p_{101\cdot 0} + p_{011\cdot 0}\\
1 + p_{000\cdot 0} - p_{111\cdot 0} + p_{010\cdot 0} + p_{001\cdot 0} - p_{011\cdot 1}\\
p_{000\cdot 0} + p_{010\cdot 0} + p_{001\cdot 0} + p_{011\cdot 0}\\
1 - p_{000\cdot 1} - p_{100\cdot 0} + p_{010\cdot 0} + p_{001\cdot 0} + p_{011\cdot 0}
\end{array}\right\}.
\end{align}
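Since the bounds in Result 2 are a maximum over five candidate lower limits and a minimum over five candidate upper limits, they can be evaluated mechanically. The sketch below transcribes \eqref{Beq:2} and \eqref{Beq:3} into Python (an illustration of our own; the paper's functions are in \texttt{R}), using the same $(y, m_1, m_2, x)$ keying as before.

```python
def nde000_bounds(p):
    # p maps (y, m1, m2, x) to p(Y=y, M1=m1, M2=m2 | X=x).
    lower = max(
        p[1,1,1,1] - p[1,0,0,0] - p[1,1,0,0] - p[1,0,1,0] + p[0,1,1,0] - 1,
        -2 + p[0,0,0,0] + p[0,1,0,0] + 2*p[0,0,1,0] + p[1,0,1,0] + p[1,0,1,1] + p[0,1,1,0],
        -2 + p[0,0,0,0] + 2*p[0,1,0,0] + p[1,1,0,0] + p[1,1,0,1] + p[0,0,1,0] + p[0,1,1,0],
        -2 + 2*p[0,0,0,0] + p[1,0,0,0] + p[1,0,0,1] + p[0,1,0,0] + p[0,0,1,0] + p[0,1,1,0],
        -1 + p[0,0,0,0] + p[0,1,0,0] + p[0,0,1,0] + p[0,1,1,0],
    )
    upper = min(
        1 + p[0,0,0,0] - p[0,1,0,1] - p[1,1,0,0] + p[0,0,1,0] + p[0,1,1,0],
        1 + p[0,0,0,0] + p[0,1,0,0] - p[0,0,1,1] - p[1,0,1,0] + p[0,1,1,0],
        1 + p[0,0,0,0] - p[1,1,1,0] + p[0,1,0,0] + p[0,0,1,0] - p[0,1,1,1],
        p[0,0,0,0] + p[0,1,0,0] + p[0,0,1,0] + p[0,1,1,0],
        1 - p[0,0,0,1] - p[1,0,0,0] + p[0,1,0,0] + p[0,0,1,0] + p[0,1,1,0],
    )
    return lower, upper

uniform = {(y, m1, m2, x): 1 / 8 for y in (0, 1) for m1 in (0, 1)
           for m2 in (0, 1) for x in (0, 1)}
lo, hi = nde000_bounds(uniform)  # (-0.5, 0.5) under a uniform distribution
```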
\noindent \textbf{Result 3}:\\
The bounds given in \eqref{Beq:4} and \eqref{Beq:5} are valid and sharp for $\mbox{JNIE}_{1}$ under Figure \ref{b}.
\begin{align}
&\mbox{JNIE}_{1}\geq \nonumber \\
\label{Beq:4}
&\max\left\{\begin{array}{l}
-1 + p_{111\cdot0} + p_{011\cdot 0} - p_{000\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} \\
-p_{000\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}\\
-1 - p_{000\cdot 1} - p_{010\cdot 1} + p_{001\cdot 0} + p_{101\cdot 0} - p_{011\cdot 1}\\
-1 - p_{000\cdot 1} + p_{010\cdot 0} + p_{110\cdot 0} - p_{001\cdot 1} - p_{011\cdot 1}\\
-1 + p_{000\cdot 0} + p_{100\cdot 0} - p_{010\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}
\end{array}\right\},
\end{align}
\\
and
\begin{align}
&\mbox{JNIE}_{1} \leq \nonumber \\
\label{Beq:5}
&\min\left\{\begin{array}{l}
2 - p_{000\cdot 0} - p_{000\cdot 1} - p_{100\cdot 0} - p_{100\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}\\
1 - p_{000\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}\\
2 - p_{000\cdot 1} - p_{010\cdot 0} - p_{010\cdot 1} - p_{110\cdot 0} - p_{110\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}\\
2 - p_{000\cdot 1} - p_{010\cdot 1} - p_{001\cdot 0} - p_{001\cdot 1} - p_{101\cdot 0} - p_{101\cdot 1} - p_{011\cdot 1}\\
1 - p_{111\cdot 0} - p_{011\cdot 0} + p_{100\cdot 1} + p_{110\cdot 1} + p_{101\cdot 1}
\end{array}\right\}.
\end{align}
\noindent \textbf{Result 4}:\\
The bounds given in \eqref{Beq:6} and \eqref{Beq:7} are valid and sharp for $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$ under Figure \ref{b}.
\begin{align}
&\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11 \geq \nonumber \\
\label{Beq:6}
&\max\left\{\begin{array}{l}
-p_{000\cdot 0} - p_{000\cdot 1} - p_{100\cdot 0} - p_{001\cdot 0} - p_{001\cdot 1} - p_{101\cdot 0}\\
-p_{000\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}\\
-1 + p_{000\cdot 0} + p_{100\cdot 0} - p_{010\cdot 1} + p_{001\cdot 0} + p_{101\cdot 0} - p_{011\cdot 1}
\end{array}\right\},
\end{align}
\\
and
\begin{align}
&\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11 \leq \nonumber \\
\label{Beq:7}
&\min\left\{\begin{array}{l}
1 - p_{000\cdot 0} - p_{100\cdot 0} - p_{001\cdot 0} - p_{101\cdot 0} + p_{111 \cdot1} + p_{110 \cdot1}\\
1 - p_{000\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} - p_{011\cdot 1}\\
p_{000\cdot 0} + p_{100\cdot 0} + p_{100\cdot 1} + p_{001\cdot 0} + p_{101\cdot 0} + p_{101\cdot 1}
\end{array}\right\}.
\end{align}
\noindent \textbf{Result 5}:\\
The bounds given in \eqref{Beq:8} and \eqref{Beq:9} are valid and sharp for $\mbox{NIE}_{2}\mbox{-}100$ under Figure \ref{b}.
\begin{align}
&\mbox{NIE}_{2}\mbox{-}100 \geq \nonumber \\
\label{Beq:8}
&\max\left\{\begin{array}{l}
-1 + p_{011\cdot 0} + p_{111\cdot 0} - p_{000\cdot 1} - p_{100\cdot 1} - p_{010\cdot 1} - p_{001\cdot 1} - p_{101\cdot 1} \\
-1 + p_{110\cdot 1} + p_{111\cdot 1} - p_{000\cdot 0} - p_{100\cdot 0} - p_{001\cdot 0} - p_{101\cdot 0} \\
-2 + p_{100\cdot 1} + p_{001\cdot 0} + p_{001\cdot 1} + p_{101\cdot 0} + p_{101\cdot 1} \\
-2 + p_{000\cdot 0} + p_{100\cdot 0} + p_{100\cdot 1} + p_{001\cdot 0} + p_{101\cdot 0} + p_{101\cdot 1} \\
-1 - p_{000\cdot 1} - p_{100\cdot 1} + p_{010\cdot 0} + p_{110\cdot 0} - p_{001\cdot 1} - p_{101\cdot 1} - p_{011\cdot 1} \\
-2 + p_{000\cdot 0} + p_{000\cdot 1} + p_{100\cdot 0} + p_{100\cdot 1} + p_{101\cdot 1} \\
-1
\end{array}\right\},
\end{align}
\\
and
\begin{align}
&\mbox{NIE}_{2}\mbox{-}100 \leq \nonumber \\
\label{Beq:9}
&\min\left\{\begin{array}{l}
2 - p_{000\cdot 0} - p_{000\cdot 1} - p_{100\cdot 0} - p_{100\cdot 1} - p_{001\cdot 1}\\
2 - p_{000\cdot 0} - p_{000\cdot 1} - p_{100\cdot 0} - p_{001\cdot 0} - p_{001\cdot 1} - p_{101\cdot 0} \\
2 - p_{000\cdot 1} - p_{001\cdot 0} - p_{001\cdot 1} - p_{101\cdot 0} - p_{101\cdot 1} \\
2 - p_{010\cdot 0} - p_{010\cdot 1} - p_{110\cdot 0} - p_{110\cdot 1} - p_{011\cdot 1} \\
1 + p_{000\cdot 0} + p_{100\cdot 0} - p_{010\cdot 1} + p_{001\cdot 0} + p_{101\cdot 0} - p_{011\cdot 1} \\
1 - p_{011\cdot 0} - p_{111\cdot 0} + p_{000\cdot 1} + p_{100\cdot 1} + p_{110\cdot 1} + p_{001\cdot 1} + p_{101\cdot 1} \\
1
\end{array}\right\}.
\end{align}
To compare the bounds presented above, it is useful to consider alternative valid bounds defined by `subtraction procedures' or `addition procedures', as follows. Consider an estimand of interest $\theta$ that can be decomposed into several terms, $\theta = \sum_{i=1}^I \gamma_i$. Let $(l_i, u_i)$ be the valid and sharp bounds for $\gamma_i$. Given the decomposition of $\theta$ we have $\gamma_i = \theta - \sum_{j\neq i} \gamma_j$, so that alternative bounds for $\gamma_i$ are given by $(l_i^*, u_i^*)$, where $l_i^* = \theta - \sum_{j\neq i} u_j$ and $u_i^*= \theta - \sum_{j\neq i} l_j$. This is what we will call the subtraction procedure. Similarly, one can obtain valid bounds for $\theta$ by adding the bounds for the $\gamma_i$: $u^*=\sum_i u_i$ and $l^*= \sum_i l_i$. This is what we will call the addition procedure. These procedures will not always produce sharp bounds, as is demonstrated in the real data example, but there are at least some special cases where the subtraction procedure can be used to produce valid and sharp bounds.
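Both procedures are trivial to implement; as the surrounding discussion emphasizes, the resulting bounds are valid but in general not sharp. The function names below are illustrative.

```python
def subtraction_bounds(theta, other_bounds):
    # Bounds for gamma_i in theta = sum_j gamma_j, given (l_j, u_j) for all j != i.
    lo = theta - sum(u for _, u in other_bounds)
    hi = theta - sum(l for l, _ in other_bounds)
    return lo, hi

def addition_bounds(term_bounds):
    # Valid bounds for theta obtained by summing the term-wise limits.
    return (sum(l for l, _ in term_bounds), sum(u for _, u in term_bounds))
```

For example, with an identified total effect of 0.5 and sharp bounds $(-0.25, 0.25)$ on the other term of a two-way decomposition, `subtraction_bounds(0.5, [(-0.25, 0.25)])` returns $(0.25, 0.75)$, which Result 6 below shows is sharp in that special case.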
\noindent \textbf{Result 6}:\\
For any two-way decomposition of the identified TE the subtraction of the limits of sharp bounds for one term in the decomposition from the TE will produce the sharp bounds for the other term.\\
Result 6 is easily proven by considering three estimands, $\theta$, $\gamma_1$ and $\gamma_2$, such that $\theta = \gamma_1+\gamma_2$, where $\theta$ is identified but $\gamma_1$ and $\gamma_2$ are not. Suppose that $(l_1,u_1)$ and $(l_2,u_2)$ are valid and sharp bounds for $\gamma_1$ and $\gamma_2$, respectively. Since $\gamma_2 = \theta - \gamma_1$, the validity of the bounds for $\gamma_1$ implies that $\theta- u_1 \leq \gamma_2 \leq \theta - l_1$ is valid.
In addition, the sharpness of the bounds for $\gamma_1$ implies that every point in $[\theta- u_1, \theta - l_1]$ is a possible value of $\gamma_2$, which, because the bounds for $\gamma_2$ are also valid and sharp, implies that $\theta - u_1 = l_2$ and $\theta- l_1 = u_2$.
This is also easy to see in the first panel of Figure \ref{fig:my_label4}. If the bounds are sharp, then the triangles must be of the same size and mirror images of each other across the line defined by the TE. If one of the triangles were smaller on one side, then projecting across the TE line would produce narrower bounds for the other estimand, contradicting sharpness. Alternatively, if one of the triangles were too large, projecting from the smaller triangle towards the term being subtracted yields the same contradiction, proving the result.
Result 6 does not generalize to a three-way decomposition. Consider, for example, $\theta = \gamma_1+\gamma_2 +\gamma_3$, where again $\theta$ is identified but $\gamma_1$, $\gamma_2$ and $\gamma_3$ are not, and where $\gamma_3$ has sharp and valid bounds $(l_3,u_3)$. We can again bound $\gamma_2$ by $\theta - u_1 - u_3 \leq \gamma_2 \leq \theta - l_1 - l_3$. By the validity of the bounds for $\gamma_1$ and $\gamma_3$, this bound is valid for $\gamma_2$. However, it is sharp only if the limiting values $l_1$ and $l_3$ are simultaneously possible values of $\gamma_1$ and $\gamma_3$, and likewise $u_1$ and $u_3$. If the bounds are not constructed by considering the $\gamma$'s jointly, there is no guarantee that $\gamma_1$ can take on the value $l_1$ while $\gamma_3$ equals $l_3$, or that $\gamma_3$ can take on the value $u_3$ while $\gamma_1$ equals $u_1$. Even in the case of a two-way decomposition it is clear that $l_1+l_2$ and $u_1+u_2$ will not always provide sharp bounds for the identified $\theta$, unless $l_1=u_1$ and $l_2=u_2$. Additionally, as demonstrated in our real data example, if $\theta$ is itself not identified, with $\theta=\gamma_1+\gamma_2$ and valid and sharp bounds $(l_0,u_0)$, the bounds derived for $\gamma_2$ using the subtraction procedure, $l_0-u_1 \leq \gamma_2 \leq u_0-l_1$, have no guarantee of being sharp or even informative, i.e. narrower than $(-1, 1)$.
This makes it clear that the bounds for $\mbox{JNIE}_{1}$ are generally narrower than those resulting from adding the limits of the bounds for each term in its decomposition; this is demonstrated in the lower panel of Figure \ref{fig:my_label4}. In this example, the bounds for $\mbox{NIE}_{2}\mbox{-}100$ are wider than those for $\mbox{JNIE}_{1}$, making clear that the addition procedure will not produce sharp bounds.
Although in our real data example the bounds for $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$ are narrower than those for $\mbox{JNIE}_{1}$, this will not always be the case. The lower and upper bounds for $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$ each differ by a single term from the corresponding bounds \eqref{Beq:4} and \eqref{Beq:5} for $\mbox{JNIE}_{1}$. These terms, when active, tend to make the bounds narrower, although this does not often mean they exclude zero, as we demonstrate in the simulations below.
\section{Simulations} \label{sec:sims}
We randomly generated counterfactual probabilities and then used the constraints implied by the DAG to generate the true estimable probabilities, ensuring that the distributions are generated under the DAG. We generated the $K = 16{,}384$-dimensional counterfactual probability vector by sampling from a Dirichlet distribution with parameter vector $\overline{\alpha}=\{\alpha_1,\ldots, \alpha_K\}$, which has probability density function:
$$f(q_1,\ldots,q_K;\alpha_1,\ldots,\alpha_K)=\frac{1}{B(\overline{\alpha})}\prod^{K}_{i=1} q_i^{\alpha_i-1},$$
where $B(\cdot)$ is the multivariate beta function, and where $\sum_i q_i = 1$.
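As an illustrative sketch of this simulation design, one can sample a symmetric Dirichlet vector by normalizing independent Gamma draws; we show a small $K$ for readability, whereas the simulations use $K = 16{,}384$, and the paper explores shape parameters well below the value used here.

```python
import random

def sample_dirichlet(k, alpha, rng):
    # Dirichlet(alpha, ..., alpha) via normalized Gamma(alpha, 1) draws.
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(g)
    return [gi / total for gi in g]

rng = random.Random(7)
q = sample_dirichlet(16, 0.5, rng)  # smaller alpha concentrates mass near the vertices
```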
The generated counterfactuals describe a 16,384-dimensional space that is difficult to explore exhaustively. We consider two special cases: points at the vertices of the space, where a single counterfactual probability is 1 and all others are zero, and the symmetric Dirichlet distribution, where all $\alpha_i$ are equal. Additionally, we consider only two characteristics of the bounds under each of these special cases: the width, where anything less than 2 provides information beyond the always-valid $(-1,1)$ bounds for any risk difference, and whether the bounds cover zero, thus failing to provide evidence against the causal null hypothesis.
\begin{table}[ht]
\caption{\label{tab01} Counts (proportion) out of the 16,384 vertices where the lower bound equals the label in the rows, and the upper bound equals the label in the columns. }
\centering
\begin{tabular}{lc|p{14mm}p{14mm}p{14mm}|cc|p{14mm}p{14mm}}
& \multicolumn{4}{c|}{$\mbox{NDE}$-000} & \multicolumn{4}{c}{$\mbox{JNIE}_{1}$} \\
& & \multicolumn{3}{c|}{Upper bound} & & & \multicolumn{2}{c}{Upper bound} \\
& & -1 & 0 & 1 & & & 0 & 1 \\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Lower bound}} &-1 & 1024 (0.0625) & 6144 (0.375) & 0 & & -1 & 6144 (0.375) & 0 \\
&0 & 0 & 2048 (0.125) & 6144 (0.375) & & 0 & 4096 (0.25) & 6144 (0.375) \\
&1 & 0 & 0 & 1024 (0.0625) & & & & \\
\cline{2-9}
&\multicolumn{4}{c|}{$\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$} & \multicolumn{4}{c}{$\mbox{NIE}_{2}\mbox{-}100$} \\
\multirow{3}{*}{\rotatebox[origin=c]{90}{Lower bound}} & & & 0 & 1 & & & 0 & 1 \\
\cline{2-9}
&-1 & & 4096 (0.25) & 0 & & -1 & 2048 (0.125) & 8192 (0.50) \\
&0 & & 8192 (0.50) & 4096 (0.25) & & 0 & 4096 (0.25) & 2048 (0.125) \\
\end{tabular}
\end{table}
When considering the vertices of the 16,384-dimensional space, we found that the distribution of the width of the bounds is bimodal or trimodal; see Table \ref{tab01}. Specifically, for $\mbox{NDE}$-000 and $\mbox{JNIE}_{1}$, the widths of the bounds are 0 for 25\% of the vertices and 1 for 75\% of the vertices. For $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$, the widths of the bounds are 0 for half of the vertices and 1 for the other half. In all of these cases, the bounds either exclude 0 or have 0 as their lower or upper limit. For $\mbox{NIE}_{2}\mbox{-}100$, the widths of the bounds are 0 for 25\%, 1 for 25\%, and 2 for 50\% of the vertices. In the cases where the width is not 2, the bounds either exclude 0 or have 0 as their lower or upper limit.
\begin{figure}
\caption{Width of bounds (points) with boxplots for 1,000 simulated distributions generated under different values of the parameter of a symmetric Dirichlet distribution. The orange numbers at the top of each figure indicate the \% of bounds that have width less than 1. For all estimands other than NIE$_2$-100, 100 minus this \% gives the proportion of bounds with width exactly 1, as the width of the bounds for these estimands never exceeds 1. For NIE$_2$-100 it can be seen that the bound width is often larger than 1 and in many cases equals 2.}
\label{fig:my_label1}
\end{figure}
When the $\alpha_i$ take a common value decreasing from 1 to 0, the counterfactual space takes on more and more of a bowl shape, such that the region where all the counterfactual probabilities are non-zero and equal has the lowest probability of being generated and the vertices where only one probability is non-zero have the highest. Figures \ref{fig:my_label1} and \ref{fig:my_label2} trace the path from all the $\alpha_i$ being nearly zero (0.000001) to all being equal to 0.001, for the width and the exclusion of 0, respectively. As can be seen in the figures, over the 1,000 simulations at each level of the $\alpha_i$, only at very low values do all of the estimands have a large proportion of bounds with widths less than 1. However, at no value of the $\alpha_i$ do the widths exceed 1 for the direct effect, the joint indirect effect, or the indirect effect through $M_1$. The bounds for the effect through $M_2$ only are quite wide, often greater than 1, with a large proportion having width 2 at $\alpha_i=0.001$. Additionally, we see that in most cases the bounds contain zero for the indirect effects, even when, in the case of $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$, a large proportion of the bounds have width less than 1. The bounds for NDE-000, on the other hand, often exclude zero for low $\alpha_i$ values. This suggests that when many of the counterfactual probabilities are zero, the bounds are more informative. This is not surprising, as counterfactual probabilities being zero can be viewed as constraints, analogous to the well-known no-defiers (monotonicity) constraint in the instrumental variable setting. In a 16,384-dimensional space, such constraints are more difficult to give intuitive names. When the $\alpha_i$ are all equal to 1, the Dirichlet reduces to a flat uniform distribution on the probabilities.
The trend in Figure \ref{fig:my_label1} continues, with the estimated probability that the width of the bounds is less than one approaching zero as the $\alpha_i$ approach one.
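The vertex-concentration behavior described above is easy to reproduce. The sketch below is a hedged illustration, not the paper's actual simulation: it uses $k=16$ cells rather than the full 16,384, and moderate $\alpha$ values to avoid floating-point underflow when sampling at $10^{-6}$. It draws symmetric Dirichlet vectors and tracks the average mass on the largest cell.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 16        # illustrative; the paper's counterfactual space has 16,384 cells
n_sims = 1000

results = {}
for alpha in (0.01, 1.0):
    p = rng.dirichlet([alpha] * k, size=n_sims)
    # mean mass on the largest cell: near 1 when draws pile up at the
    # vertices of the simplex (small alpha), much smaller for alpha = 1
    results[alpha] = p.max(axis=1).mean()
    print(f"alpha={alpha:g}: mean largest probability = {results[alpha]:.3f}")
```

For small $\alpha$ nearly all mass sits on a single cell, i.e. most counterfactual probabilities are effectively zero, which is exactly the regime in which the bounds were observed to be most informative.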
\begin{figure}
\caption{Proportion of bounds for the NDE-000 that exclude 0 out of 1,000 simulated distributions generated under different values of the parameter of a symmetric Dirichlet distribution. Grey ribbon indicates Clopper-Pearson 95\% confidence limits for the proportion.}
\label{fig:my_label2}
\end{figure}
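The Clopper-Pearson limits quoted in the figure caption can be computed from Beta quantiles in the standard way; the sketch below is one common implementation, and the count used in the example is purely illustrative, not taken from our simulations.

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence limits for a binomial proportion."""
    a = 1.0 - conf
    lo = 0.0 if x == 0 else beta.ppf(a / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - a / 2, x + 1, n - x)
    return lo, hi

# e.g. suppose 120 of 1,000 simulated bounds exclude zero (hypothetical count)
lo, hi = clopper_pearson(120, 1000)
print(f"({lo:.3f}, {hi:.3f})")
```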
Our simulations do not give a complete picture of what is occurring in the full 16,384-dimensional space; we have also found examples of sections of the space, where two counterfactuals are both nonzero and the rest are equal but nonzero, in which the bounds both have width less than 1 and exclude zero for the NDE-000, JNIE$_1$ and $\mbox{MS}^2\mbox{-}\mbox{NIE}_{1}\mbox{-}11$ effects. Given our limited exploration we have not uncovered a discernible pattern, other than that the bounds tend to be more informative when a large number of the counterfactuals are zero.
\section{Real Data Example} \label{real}
We illustrate our bounds using the \texttt{jobs} dataset from the \texttt{mediation} package in R \citep{mediation}. These data come from a randomized experiment designed to investigate the efficacy of a job training intervention on unemployed workers. In the experiment, 899 eligible unemployed workers completed a pre-screening questionnaire and were then randomly assigned to treatment, which consisted of participation in job skills workshops, or to control, which received a booklet of job search tips. The randomization was 2:1 in favor of treatment, with 600 assigned to treatment and 299 to control. The primary outcome is a binary variable representing whether the respondent had become employed, with 35\% of the treatment arm and 29\% of the control arm becoming employed at the end of the study. The mediators of interest are a binary indicator of depressive symptoms after treatment and a binary indicator of high job seeking self-efficacy. We believe that depression is the first mediator, $M_1$, which may have a causal effect on job seeking self-efficacy, $M_2$. The bounds are displayed in Figure \ref{fig:my_label4}.
\begin{figure}
\caption{Bounds for the jobs dataset. The orange arrows show the addition procedure, summing the upper and lower bound values of MS$^2$-NIE$_1$-111 and NIE$_2$-100 to obtain valid, but not sharp, bounds for JNIE$_1$. The brown arrows demonstrate the subtraction procedure, where an alternative lower bound for MS$^2$-NIE$_1$-111 is derived by subtracting the upper bound value of NIE$_2$-100 from the lower bound value of JNIE$_1$, and an alternative upper bound for MS$^2$-NIE$_1$-111 is derived by subtracting the lower bound value of NIE$_2$-100 from the upper bound value of JNIE$_1$. Again, this is valid but not sharp.}
\label{fig:my_label4}
\end{figure}
As can be seen in the first panel of Figure \ref{fig:my_label4}, the total effect was estimated to be barely above zero, $0.057$, which by an exact test is not significantly different from 0, with a p-value of 0.10 and 95\% CI $(-0.01, 0.12)$. Also in this panel it can be seen that the bounds for the natural direct effect $(-0.29, 0.71)$ and the joint indirect effect $(-0.65, 0.34)$ are mirror images across the line of the total effect. This is as expected, given Result 6. Looking at the second panel in Figure \ref{fig:my_label4}, it can be seen that the bounds for the MS$^2$-NIE$_1$-111 $(-0.50, 0.34)$ are narrower than both those for the natural indirect effect through high job seeking $(-0.96, 0.84)$ and those for the joint indirect effect. It can also be seen that the addition procedure applied to the bounds for MS$^2$-NIE$_1$-111 and NIE$_2$-100 yields bounds wider than the sharp bounds for JNIE$_1$, as the bounds for NIE$_2$-100 alone are wider than those for JNIE$_1$. The subtraction procedure, subtracting the lower bound of NIE$_2$-100 from the upper bound of JNIE$_1$, as well as the upper from the lower, produces much wider bounds for MS$^2$-NIE$_1$-111. This is, again, as expected, since the sharp bounds are the narrowest valid bounds. In this case, the addition and subtraction procedures both produce noninformative bounds, i.e., bounds extending beyond the possible range of the causal effect, $(-1,1)$. Additionally, although it was not the focus of this paper, Result 1 provides the bounds for the controlled direct effects, which in these data are as follows: CDE-00 $\in (-0.71, 0.79)$, CDE-01 $\in (-0.51, 0.54)$, CDE-10 $\in (-0.84, 0.86)$, and CDE-11 $\in (-0.87, 0.87)$. Each CDE bound covers zero and has width greater than one, with CDE-01 having the narrowest width at 1.03.
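The addition and subtraction procedures reduce to plain interval arithmetic. Using the rounded estimates reported above, one can verify the noninformative widths directly:

```python
def add(b1, b2):
    # valid (but not sharp) bounds for the sum of two effects
    return (b1[0] + b2[0], b1[1] + b2[1])

def subtract(b, b2):
    # valid (but not sharp) bounds for the remainder when b2's effect is removed
    return (b[0] - b2[1], b[1] - b2[0])

ms2_nie1 = (-0.50, 0.34)   # MS^2-NIE_1-111 (sharp, from the data)
nie2     = (-0.96, 0.84)   # NIE_2-100 (sharp)
jnie1    = (-0.65, 0.34)   # JNIE_1 (sharp)

print(add(ms2_nie1, nie2))      # ~ (-1.46, 1.18): wider than jnie1, noninformative
print(subtract(jnie1, nie2))    # ~ (-1.49, 1.30): wider than ms2_nie1, noninformative
```

Both derived intervals extend beyond $(-1,1)$, matching the noninformative bounds discussed in the text.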
\section{Discussion} \label{dis}
We present bounds for estimands of mediation effects in the two-mediator setting, including many of the estimands considered in \citet{daniel2015causal} and \citet{steen2017flexible}. We show that in many cases the bounds will have width less than one, and that this occurs frequently when a large number of the counterfactual probabilities are zero. We also find that the addition or subtraction of sharp bounds for two estimands that are not identified does not, in general, produce sharp bounds for the estimand given by their sum or difference. Additionally, the subtraction of sharp bounds does not produce sharp bounds for the remaining term(s) unless one of the estimands in the subtraction is identified. We prove that in a two-way decomposition of the identified TE, the subtraction of a set of sharp bounds will always produce sharp bounds for the other term.
\cite{daniel2015causal} showed that many of the mediation estimands are not identified even in the complete absence of unmeasured confounders, making bounds of particular relevance. However, it should be noted that the bounds we provide are not derived under the assumption of no confounding. So, while the bounds will be valid under the assumption of no confounding, one could possibly obtain narrower sharp bounds by including that assumption in the derivation.
Although we explore two specific scenarios in the simulations, exploring the full space, perhaps with some type of greedy algorithm, to determine whether there is a pattern corresponding to constraints is beyond the scope of this paper; it is, however, an area of future research for the authors. In exploring the space in greater detail, we will likely find constraints to consider in the construction of bounds, which may result in narrower or different sets of bounds for these decomposition terms.
We provide a large number of bounds, and insight into their interpretation, but we do not discuss an estimation procedure. As the terms of the bounds are all linear combinations of conditional probabilities, estimation is straightforward. For inference, the same bootstrap procedure suggested in \citet{Gabriel2020} and \citet{horowitz2000nonparametric} can be used here. In both of those papers, extensive empirical results were provided showing nominal coverage of quantile bootstrap confidence intervals for similarly constructed bounds. Although we believe that the bootstrap is likely theoretically justified, there may be closed form variance estimates for many, if not all, of the bounds. We do not focus on accounting for the sampling variability here, as for any reasonable sample size, we expect the uncertainty in causal effects due to unmeasured confounding to be far greater than uncertainty due to sampling variability.
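A minimal sketch of the quantile bootstrap referred to above, with a hypothetical binary sample and a placeholder \texttt{bound\_limits} functional standing in for the actual linear combinations of conditional probabilities (both are illustrative assumptions, not our estimators):

```python
import numpy as np

def bound_limits(data):
    """Placeholder: compute (lower, upper) bound estimates from a sample.
    In practice these are linear combinations of conditional probabilities."""
    p = data.mean()                      # toy stand-in for those probabilities
    return p - 0.3, p + 0.3

rng = np.random.default_rng(1)
data = rng.binomial(1, 0.35, size=899)   # hypothetical binary sample

B = 2000
los, his = [], []
for _ in range(B):
    boot = rng.choice(data, size=data.size, replace=True)
    lo, hi = bound_limits(boot)
    los.append(lo)
    his.append(hi)

# quantile bootstrap: outer quantiles give a conservative interval for the bounds
ci = (np.quantile(los, 0.025), np.quantile(his, 0.975))
print(ci)
```

The outer quantiles of the bootstrapped lower and upper limits give the conservative interval whose coverage was studied empirically in the cited papers.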
Although we only consider the setting where there are unmeasured and no measured confounders, in the simplest case where the measured confounders are discrete and bounded and do not restrict the impact of the unmeasured confounders on the mediators and the outcome, the bounds can be applied within levels of the measured confounders. More complex settings would need to be considered on a case-by-case basis and may result in narrower sharp bounds under particular assumptions. There are other scenarios where the same linear programming method may be used to determine if the bounds change, such as allowing the intervention to have a direct effect on the confounder. Defining the general conditions under which a DAG and target define a linear programming problem, and automating the checking of these conditions, is an area of future research for the authors. Finally, our present results are limited to binary exposures, mediators, and outcomes, so it would be worthwhile to extend these to multicategorical variables.
\end{document} | math | 45,340 |
\begin{document}
\title{The Tyranny of Qubits - Quantum Technology's Scalability Bottleneck}
\author{John Gough\\ [email protected]\\
Aberystwyth University, SY23 3BZ, Wales, United Kingdom}
\maketitle
\begin{abstract}
In this essay, I look at (bemoan) the issues surrounding the simulation of quantum systems for the purpose of designing quantum devices for quantum technologies. The program runs into a natural difficulty: simulating quantum systems really requires a proper quantum simulator. The problem is likened to the \lq\lq tyranny of numbers\rq\rq\ that faced computer engineers in the 1960s.
\end{abstract}
\section{Introduction}
\begin{quotation}
\emph{Prediction is very difficult, especially about the future.}
(Niels Bohr)
\end{quotation}
Why don't we have a quantum computer by now? Quantum Computing has become the Holy Grail quest of Science and Technology in the 21st Century: it has attracted substantial attention from theoreticians and experimentalists; it has worked its way into the public consciousness; and its promises have influenced national governments. As a backup, the pursuit of quantum computing offers several spin-offs - quantum simulation, quantum-enhanced sensing, quantum communications, etc. It has promised a step change in cyber-security, an exponential speed-up in computation, and the solution to Big Data problems. Intensive and sustained lobbying has led to several large-scale programme investments by national and international funding agencies.
But the idea is not new, and there have been experiments done for a long time now - so why do we not have even moderately sized quantum computers?
This special issue is concerned with \lq\lq quantum coherent feedback and reservoir engineering\rq\rq\ - in short, the use of feedback in quantum systems for technological goals. This is something I would consider to be a cornerstone of quantum engineering (if for no other reason than that this is the way things pan out in the classical world of technology).
The field of quantum feedback is by now reasonably well-understood, both theoretically and mathematically, and its implementation is currently limited by two factors - 1) experimental realization, and 2) numerical simulation. There may be many answers that one could propose to the question in the opening, but I want to focus on the issue of these limitations as a quantum technology bottleneck.
\subsection{Experimental Directions}
The first of these, experimental realization, is one on which I cannot really give an expert opinion: there have been a small number of experiments showing how feedback can enhance the performance of quantum systems (both for measurement-based quantum feedback \cite{WM_book,Sayrin} and coherent quantum feedback \cite{KNPM10}-\cite{Crisafulli13}), but these seem to be hard to do and there has been no obvious programme to extend these results to quantum technology applications. My own experience from talking to experimentalists in Britain is that they dismiss feedback as impractical. This is somewhat poignant given that Britain pioneered the use of feedback during the first great industrial revolution through the practical work of James Watt, and subsequently the theoretical work of James Clerk Maxwell\footnote{It has been suggested that the reason why feedback became so prominent in Britain during the first industrial revolution was in no small part due to its long tradition of empiricist philosophers such as Locke, Hobbes, Bacon and Hume \cite{Mayr}, and indeed due to the notion of self-regulation and equilibrium in society put forward by Hume and Adam Smith \cite{Richardson}: on continental Europe, in contrast, the prevailing philosophy was the Leibnizian one (as satirized by Voltaire in Candide) which favoured the notion of planning optimally and then fixing the plans. In other words, anticipating what we now understand as closed-loop (feedback) versus open-loop control. See also \cite{Hammond}.}.
Sadly, Britain does not seem set to use feedback in the quantum industrial revolution, though it is by no means alone in this position. It does not augur well that arguably the most important and useful concepts from classical engineering hardly feature in the minds of most experimentalists in the emerging quantum technology sector.
Maybe this will change, but it ought to be a major worry nonetheless.
To dwell on the issue of quantum feedback for a moment: there are two forms. Measurement-based control requires us to extract information from our system via measurements and actuate back on the system accordingly; coherent quantum feedback control involves coupling a suitably designed quantum controller to the system.
Intuitively, the latter is the more suggestive of an autonomous self-regulating system-controller pair similar to the Watt fly-ball governor. Unfortunately, it is the one with the least amount of experimental investigation.
We should mention the proposal of Kerckhoff \textit{et al}. \cite{KNPM10} who consider a qubit undergoing syndrome errors, but being regulated by a set of ancillary cavity modes - here everything is autonomous, there is no measurement, however, the syndrome errors are corrected by means of the specific coupling to the ancillary modes which is engineered so as to dissipate the appropriate qubit energy. Here the qubit and ancillary modes form a combined open system which is driven by coherent quantum input processes \cite{HP}-\cite{Gardiner}, and engineered to \lq\lq cool\rq\rq the system back to the desired state.
My gut feeling is that quantum feedback and, more generally, the designability of quantum devices through quantum feedback network models must play a key role if the quantum world is ever to be an actual technology. This is at present a minority opinion - one that may turn out unfounded - based on the rather philosophical pointers above, but these principles do come with the weight of over 200 years of technological success behind them.
We can only hope that this type of control theoretic thinking eventually catches on in quantum technology.
\subsection{Numerical Modelling}
The second factor, however, will be the main discussion point of this piece. The problem is a familiar one - quantum systems are notoriously difficult to simulate using a classical computer, so the problem of \textit{designing} quantum controlled systems becomes hard. As it stands, it is a bottleneck for Quantum Technology. To fully innovate quantum technologies, we need a complete theory of quantum control engineering that allows us to design engineered quantum devices, and for this we need to be able to simulate quantum systems and optimize over various realizations - if we cannot do this, then we do not have a future technology.
Retrospectively, the motivation for quantum computing is traced back to Richard Feynman's observation that simulation of quantum systems cannot be efficiently performed on classical computers, and his proposition that analogue quantum devices acting as universal quantum \textit{simulators} would lead to an exponential speed up compared to classical computers \cite{Feynman_QC}. The technological problem is that, in order to design a quantum
computer, one first needs to be able to efficiently simulate classes of quantum systems that can only be efficiently simulated by a quantum computer!
Before thinking of ways around this, let's indulge in some diversion. A serviceable plot for a science fiction tale would be to have a time-traveler who uses a quantum computer to build a time machine to return to the 21st Century and, for whatever reasons, leaves his quantum computing device behind - the scientists (for some reason it's never the engineers in science fiction!) cannot reverse engineer the laptop but use its quantum computing capabilities to at least design a second functioning quantum computer - this inaugurates the quantum computing revolution eventually leading to our intrepid time
traveler using the latest quantum computer to build a time machine, then going back in time and leaving his quantum laptop/mobile-phone/credit-card behind in the 21st Century.
But back now to the real world and we are left with the core question: \textit{how do we design a quantum computer without the aid of a quantum computer?}
\section{The Problem with Wiring Things Up}
\begin{quotation}
\emph{For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.}
(Richard Feynman)
\end{quotation}
It is not the first time that technology has faced a seemingly insurmountable hurdle. The more informed reader may have noticed
that the title of this letter takes its cue from the phrase \emph{Tyranny of Numbers} coined by Jack Morton (vice President of Bell Labs) to describe the main problem facing the nascent computing industry: in Morton's own words (quoted from \cite{Love,Schwaderer}) ... \textit{For some time now, electronic man has known how \lq in principle\rq to extend greatly his visual, tactile, and mental abilities through the digital transmission and processing of all kinds of information. However, all these functions suffer from what has been called \lq \textbf{the tyranny of numbers}\rq . Such systems, because of their complex digital nature, require hundreds, thousands, and sometimes tens of thousands of electron devices.} The problem has two aspects: a practical one, that having workers try and solder connections onto transistor components was too labour-intensive, costly and unreliable; and a pragmatic one, that there was a level of complexity beyond which computers became too unreliable and error-prone to justify their computational advantage.
The essential problem here was that electronic circuits were becoming increasingly smaller scale and more reliable, but this resulted in increasingly more complex devices consisting of thousands of components (capacitors, resistors, transistors, etc.) with substantially more interconnection possibilities (which were themselves becoming more difficult to make). It was a massive effort to cut up the silicon blocks and attach these interconnections - often by hand. And later, as the number of components desired turned to millions, it soon became clear that the modelling problem itself was becoming intractable. Morton's own idea to tackle this
was through model and architectural reduction - that is, to form simplified ``functional devices'' possessing as few components and interconnections as possible while achieving as close to universal functionality as one reasonably can \cite{Gertner}.
As it turned out, this led nowhere. The solution to the problem - in this case making functional devices from silicon - was to fashion a complete circuit on chip: the integrated circuit! The idea had been around for a long time: an integrated circuit semiconductor amplifier had been patented by Werner Jacobi of Siemens as far back as 1949. The concept of the integrated circuit had already been presented by radio engineer Geoffrey Dummer \cite{Schwaderer}: in 1952 he wrote \textit{I would like to take a peek into the future. With the advent of the transistor and the work on semi-conductors generally, it now seems possible to envisage electron equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electronic functions being connected directly by cutting out areas of the various layers}. It was only in 1958 that Jack Kilby, Robert Noyce and Jean Hoerni independently developed the first prototypes.
(Kilby won the 2000 Nobel prize in Physics for the invention of the integrated circuit, though there were clearly major contributions from multiple researchers.)
The solution was remarkably simple: the patterned wafer \textit{is} the machine ... it doesn't need to be cut up into minute pieces only to be reassembled with lots of unreliable wires.
It is worth noting that the integrated chip itself was very slow to get off the ground \cite{Schwaderer}.
\section{Quantum Interconnections and Scalability}
\begin{quotation}
\emph{No, [of course, I don't believe the horseshoe brings good luck] ... but I am told it works even if you don't believe in it.}
(Niels Bohr)
\end{quotation}
Quantum computers operate by means of qubits - quantum components with a 2-dimensional Hilbert space - and there are a variety of different ways to realize qubits physically. To be precise, these are the \textit{logical qubits}, and they are the staple concept of standard theories of quantum computation \cite{NC}. Quantum computation to a large extent deals with idealized models that are never found in Nature, and this was quickly realized as being a potentially fatal obstacle to performing actual quantum computations. The operations - \textit{quantum gates} - performed on logical qubits are again an idealization not precisely available in Nature. Fortunately, Peter Shor \cite{Shor} was able to play one of the great get-out-of-jail-free cards with the proposal that quantum error correction is in principle possible. However, in order to achieve quantum error correction one has to perform a syndrome measurement to check whether the qubit has changed from its desired state - the syndrome measurement should be done on auxiliary degrees of freedom (in Shor's scheme one needs 9 auxiliary qubits for each logical qubit) which are coupled to our logical qubit. If a syndrome error is detected, then an appropriate unitary gate is applied to restore the logical qubit back to the desired state. But we still have to engineer the auxiliary qubits, couple them to the logical qubits, and perform potentially imprecise quantum gates - this is a limiting factor. The natural question is whether quantum error correction actually introduces more error than it corrects. The Quantum Fault Tolerance (or Quantum Threshold) Theorem of Aharonov and Ben-Or \cite{Threshold} shows that it is nevertheless possible to get a noisy quantum computer to simulate an idealized quantum computer to a given level of accuracy provided the error-rate is below a sufficient threshold level. We should mention that the applicability of the Quantum Threshold Theorem
to physical systems is not without criticism, notably by Robert Alicki on thermodynamical grounds \cite{Alicki_Hor} - \cite{Alicki13}.
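Shor's scheme uses nine auxiliary qubits per logical qubit. As a minimal illustration of the syndrome-and-correct idea (not Shor's code itself), the following sketch simulates the three-qubit bit-flip code with a plain state vector, reading the syndrome parities directly from the basis states with support rather than via coupled ancilla measurements:

```python
import numpy as np

# 3-qubit bit-flip code: |psi> = a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = a, b

def apply_X(state, q):
    """Flip qubit q (q=0 is the leftmost bit of the 3-bit index)."""
    out = np.empty_like(state)
    mask = 1 << (2 - q)
    for i in range(8):
        out[i ^ mask] = state[i]
    return out

# a single bit-flip error on qubit 1
noisy = apply_X(psi, 1)

# syndrome: the parities qubit0 XOR qubit1 and qubit1 XOR qubit2 agree on
# every basis state in the support, so they reveal the error location
# without revealing the encoded amplitudes a, b
support = [i for i in range(8) if abs(noisy[i]) > 0]
q0, q1, q2 = (support[0] >> 2) & 1, (support[0] >> 1) & 1, support[0] & 1
s01, s12 = q0 ^ q1, q1 ^ q2

# look up which qubit to flip, then correct
err = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
corrected = apply_X(noisy, err) if err is not None else noisy

print(np.allclose(corrected, psi))  # True: the logical state is restored
```

The point of the exercise is the one made in the text: correction needs extra qubits, extra couplings, and extra gates, each of which is itself imperfect in practice.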
The good news is that we can in principle run quantum algorithms on a noisy quantum computer with suitably engineered quantum error corrections: the bad news is that we'll need a lot more qubits to do this. The success of quantum computing, and indeed much of the lower hanging quantum technology fruit, therefore hinges on our ability to scale up quantum components. Unfortunately this is where things become even noisier, more process-intensive and hardware-intensive, and incoherent.
At present, there seems to be a limit of about 5 logical qubits for a quantum computer achieved so far, though with aspirations for hundreds, if not thousands, of qubits in the near future \cite{who}. It is clear that scalability is a concern amongst the quantum technology sector. There are several experimental proposals that claim to be scalable platforms. This may be true, and it may be that we will soon get our quantum computer if we are patient enough, but this is not my concern.
My contribution to quantum control has been primarily through the theory of quantum feedback networks developed with Matthew James
\cite{GouJam09a}-\cite{NM_QFN_I}. This is a modular theory of Markovian quantum input-output components that allows one to apply the block-design thinking from traditional engineering to quantum systems. For linear quantum components, one gets a bilinear control theory similar to the one found in linear systems theory; however, the theory is much more general - the components can be qubits, spin systems, etc. The framework has been referred to as the SLH theory due to the fact that each component has its own internal Hamiltonian $H$, coupling (or collapse) operators $L$, and a unitary scattering matrix $S$ of quantum input to output fields. A computer package, QNET, has been developed by researchers at Stanford \cite{Tezak} to apply these rules to networks of standard components - qubits, cavity modes, beam-splitters, phase shifters, Kerr non-linearities, etc. Once the total network description has been determined, one may then apply simulation packages such as QuTiP.
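The modular structure can be made concrete. For two components $G_i=(S_i,L_i,H_i)$ in cascade, the series product of the framework gives $S=S_2S_1$, $L=L_2+S_2L_1$ and $H=H_1+H_2+\operatorname{Im}\{L_2^\dagger S_2 L_1\}$. The sketch below implements this rule in numpy for operators represented as matrices on a common (truncated) Hilbert space; the two-level component and its parameter values are toy assumptions for illustration only.

```python
import numpy as np

def series(G2, G1):
    """SLH series product G2 <| G1 (the output of G1 drives the input of G2)."""
    S1, L1, H1 = G1
    S2, L2, H2 = G2
    S = S2 @ S1
    L = L2 + S2 @ L1
    X = L2.conj().T @ S2 @ L1
    H = H1 + H2 + (X - X.conj().T) / 2j   # operator imaginary part Im{X}
    return S, L, H

# toy two-level components; S2 = i*I is a 90-degree phase shift on the field
I = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)       # lowering operator
G1 = (I, np.sqrt(0.5) * sm, 0.3 * sm.conj().T @ sm)  # (S, L, H)
G2 = (1j * I, np.sqrt(2.0) * sm, np.zeros((2, 2), dtype=complex))

S, L, H = series(G2, G1)
print(np.allclose(H, H.conj().T))   # True: the composite Hamiltonian is Hermitian
```

The composite triple $(S,L,H)$ is again a valid SLH model, which is exactly what makes block-by-block network design possible.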
On the one hand, this leads us down a route that is highly suggestive of the standard approaches routinely used in classical technologies. The bad news, of course, is that we are dealing with quantum systems, so simulation rapidly becomes a major issue. Even a moderately small number of simple quantum components in a quantum feedback network becomes impractical to simulate. Sadly, even Small-Scale-Integration design looks infeasible in the quantum world. (For more discussion about these issues, see \cite{Santori}, \cite{Bowen2017}.)
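One way to see the wall: a density-matrix (master-equation) simulation of the kind QuTiP performs stores of the order of $4^n$ complex numbers for $n$ qubit-like components, so the classical memory cost explodes. A back-of-the-envelope sketch:

```python
# classical memory for a dense n-qubit density matrix:
# 2^n x 2^n complex128 entries at 16 bytes each
for n in (10, 20, 30):
    dim = 2 ** n
    b = 16 * dim * dim
    print(f"n={n:2d} components: {b:.1e} bytes")
```

Already at twenty two-level components the dense density matrix needs tens of terabytes, which is why even modest feedback networks defeat direct simulation.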
Where does this leave us? It could of course turn out to be a non-problem. It may happen that quantum technology goes on its present course and gets to the quantum supremacy stage without advanced quantum control theoretic thinking. I doubt this, but even if it does happen then we arrive at a situation where we have things working without knowing exactly why they are working or how best to improve them. Throughout the historical development of science and technology, this is a highly non-ideal situation. If we are expecting a new industrial revolution, then we need to be able to design the components, otherwise we do not have a true technology. How we get to Medium-, Large-, or even Very-Large-Scale Quantum Integration would still be anyone's guess.
\section{Outlook}
\begin{quotation}
\emph{The search for the Grail is the search for the divine in all of us. But if you want facts, Indy, I've none to give you. At my age, I'm prepared to take a few things on faith.}
(Marcus Brody: Indiana Jones and the Last Crusade)
\end{quotation}
An obvious way to try to get out of this conundrum is to direct the attention of quantum simulation away from the proposed areas of investigation \cite{QSim} in the various quantum technology programmes, and instead use it as a resource for designing better quantum simulators. To the best of my knowledge, this has not been suggested up to now.
It is not clear, however, whether this would work! The problem is somewhat akin to von Neumann's question of whether there existed universal constructors - machines capable of self-replication, or even building more complicated machines. There the answer is affirmative, but it needed the brilliance of von Neumann to show this. In contrast, the suggestion above is a practical one of using current state-of-the-art quantum machines as a design tool to improve the next generation. Like most things in quantum technology, this is uncharted territory.
On the other hand, the quantum feedback network theory may give some insight into the scalability issues - at least we may get some idea of figures of merit as to how noise, time delays, model imperfections, etc., scale up. Bounds on the performance as a function of number of components may reveal how drastic the scalability problem is for realistic situations.
Alternatively, it may happen that the solution is similar to that of the original tyranny of numbers problem: we just start fabricating large integrated quantum circuits. The difficulty here is that the sub-components to be wired up ought to be quantum dynamical systems - this is a hard thing to do experimentally. Most progress has been made on quantum circuits which act as static devices - effectively Mach-Zehnder networks consisting of up to about 20 beam-splitters - but an assembly of qubits on-chip communicating by quantum field signals still looks to be very far away.
There is general agreement that the scientists involved in the quantum technology sector need to be cautious and realistic in their promises: this may be coming too late! One of the principal dangers of lobbying for science funding is that it creates a runaway positive feedback loop where success in attracting funding is proportional to the proposed impact. At any rate, the sector now has some pretty intimidating goals to make good on.
My view is that the central question here is whether quantum systems are \textit{technologizable} or not. By this, I mean whether the standard approaches of design and development that characterize modern industries can be applied to products that are quantum systems. Otherwise, we will be limited to a small number of applications lacking an overarching control engineering and product innovation framework. In principle, we now know that many of the core ideas of feedback control can be carried over into the quantum domain, but this has yet to progress beyond mathematical results and a few proof-of-concept experiments. At the present time, one senses that there is a view that a quantum computer is a collection of quantum gates, that being able to realize quantum gates means we are guaranteed to be able to build a quantum computer, and that once that is done we can get engineers in to optimize performance. This is too naive! A good description of what a quantum systems engineering theory should look like has been given by Everitt \textit{et al.} \cite{Everitt}, but this still assumes a good enough understanding of quantum devices - for the time being we still do not know how to simulate even relatively low numbers of components in a quantum feedback network, let alone get to the device level.
It is worth looking at the birth of modern physics, in particular, quantum mechanics. Planck and Einstein were able to make their intellectual leaps because they were thinking like modern physicists. The reason why they were doing so was because they were following the ideas of Boltzmann, who is arguably the first to think like a modern physicist (see, for instance, \cite{everdell,ellis}), and who was the first to introduce probability as a staple into physics.
Modern physics happened because the right people were primed to think about problems in the right way.
The same applies to industrial revolutions.
To close, it might be worth recalling how Bell Labs dealt with the problem of commercializing the transistor following the breakthrough experiments of Bardeen, Brattain and Shockley. In 1948 Jack Morton - the same who coined the phrase tyranny of numbers - was told by the head, who was about to go off on a one-month tour of Europe, to come up with an action plan ready by the time he returned \cite{Gertner}. Morton's eventual report emphasized that innovation should be thought of as a total process ... \textit{It is not just the discovery of new phenomena, nor the development of a new product or manufacturing technique, nor the creation of a new market. Rather, the process is all these things acting together in an integrated way toward a common industrial goal} \cite{Gertner}. This subsequently became the blueprint for industrial innovation from that point onwards. Morton's vision was that the principal challenges for technology were the reliability, reproducibility and designability of devices, \cite{Gertner} pg.~109.
Quantum systems may well be technologized, but it will require something beyond the current mainstream thinking. Areas like quantum feedback and control engineering indicate gaps that must be closed if this is ever to happen. The tyranny of qubits problem will likely remain a bottleneck on quantum technology roadmaps (whether sign-posted or not) for some time to come; however, identifying the problem and some of the surrounding issues may prove productive.
\end{document} | math | 23,771 |
\begin{document}
\title{Fractional calculus approach for the phase dynamics of Josephson
junction under the influence of magnetic field}
\author{
Amer Rasheed\thanks{Department of Mathematics, School of Science and Engineering, Lahore University of Management Sciences, Opposite Sector U, DHA, Lahore Cantt., 54792, Pakistan.},
\and
Imtiaz Ali
}
\maketitle
\begin{abstract}
This article presents the phase dynamics of an inline long Josephson junction in the voltage state under the influence of a constant external magnetic field. A fractional calculus approach is used to model the evolution of the phase difference between the macroscopic wave functions of the two superconductors across the junction. The governing non-linear partial differential equation is then solved using a combined finite difference-finite element scheme. Other quantities of interest, such as the Josephson current density and the voltage across the junction, are also computed. The effects of the various parameters in the model on the phase difference, Josephson current density and voltage
are analyzed graphically with the help of numerical simulations.
\end{abstract}
\noindent {\footnotesize {\bf Key words.} Inline long Josephson junction; voltage state; fractional nonlinear PDEs; finite element method.}
\section{Introduction}
Superconductors have found a wide range of applications in modern times and are used in almost all modern-day technologies. Extensive research has been carried out on superconductors and is still ongoing. An important phenomenon which arises from superconductivity is the Josephson effect: when two superconductors are weakly connected, for example by a very thin non-superconducting barrier, a current flows through the barrier without any dissipation of energy and without a power supply \cite{r7}. This phenomenon is called the Josephson effect and the junction is called a Josephson junction. A wide range of modern-day technologies operate by making use of this phenomenon. Superconducting quantum interference devices (SQUIDs) and superconducting tunnel junction detectors (STJs) are based on the Josephson effect, see for example \cite{r11}. A SQUID is a highly sensitive magnetometer which can detect extremely small magnetic fields. Similarly, shunted Josephson junctions are used in RSFQ digital electronics, which processes digital signals \cite{r11}.\par
The electrodynamics of the Josephson junction is completely described by the phase difference of the macroscopic wave functions of the two superconductors across the junction \cite{r7}. The evolution of the phase difference is modeled by a nonlinear partial differential equation, a perturbed time-dependent sine-Gordon equation \cite{r7}. No general analytical solution to this equation has been found yet \cite{r8}, so various numerical schemes are used to find approximate solutions for the different geometries of the junction. The solution of the long Josephson junction using the finite difference method is discussed by Tinega Abel Kurura, Oduor Okoya and Omolo Ongati \cite{r1}. The steady-state case of the window Josephson junction using the finite element method is studied by Manolis Vavalis, Mo Muc and Giorgos Sarailidi \cite{r2}. P. Kh. Atanasova, T. L. Boyadjiev, Yu. M. Shukrinov and E. V. Zemlyanaya \cite{r12} discussed static distributions of the magnetic flux in long Josephson junctions. The numerical simulation of long Josephson junctions driven by large external radio-frequency signals is analyzed by G. Rotoli, G. Costabile, and R. D. Parmentier \cite{r13}. J. G. Caputo, N. Flytzanis and E. A. Vavalis \cite{r14} discussed a semi-linear elliptic PDE model for the static solution of Josephson junctions. A numerical study of a system of long Josephson junctions with inductive and capacitive couplings is given by I. R. Rahmonov, Yu. M. Shukrinov, A. Plecenik, E. V. Zemlyanaya and M. V. Bashashin \cite{r15}. \par
All of the above-mentioned studies use integer-order time derivatives in their models. However, it has been validated by experiments that physical models involving continuous evolution are better described by fractional derivatives. Such models keep track of the memory of previous deformations, which helps to give more accurate physical simulations compared to integer-order time derivatives.\par
In this article the evolution of the phase difference between the macroscopic wave functions of the two superconductors across a long
inline Josephson junction, in the voltage state and under the influence of a magnetic field, is modeled
using fractional time derivatives instead of integer-order time derivatives. The model is then solved using a finite element scheme along with a finite difference method. The quantities of interest, i.e.\ the phase difference, Josephson current density and voltage, are then studied graphically. The effects of the various parameters involved in the model on the aforementioned quantities of interest are discussed in detail.\par
The mathematical formulation of the model is presented in Section \ref{S2}. In Section \ref{S3} the numerical solution of the model using a finite element-finite difference scheme is discussed. Error analysis and convergence of the numerical scheme are presented in Section \ref{S4}. Graphical results and their discussion are given in Section \ref{S5} and the conclusion is given in Section \ref{S6}.
\section{Mathematical Formulation} \label{S2}
\begin{definition}
The Caputo left-sided fractional derivative of order $\alpha$, where $\alpha \in \mathbb{C}$ and $\operatorname{Re}\{\alpha\} > 0$, of a complex-valued function $f(t)$, $t \in \mathbb{R}$, with respect to $t$ is defined as \cite{r5}
$$ \frac{\partial^\alpha}{\partial t^\alpha} f(t) = \partial^\alpha f(t) = \frac{1}{\Gamma(m-\alpha)}\int_0^t (\,t - \tau\,)^{m-\alpha-1}\frac{\partial^m}{\partial \tau^m} f(\tau)\,d\tau$$
where $m \in \mathbb{N}$ is such that $\,m-1<\operatorname{Re}\{\alpha\} < m$.\\
Here $\Gamma(\cdot)$ denotes the Euler Gamma function defined as:
$$ \Gamma(z) = \int_0^\infty \xi^{z-1}e^{-\xi}\,d\xi,\qquad z\in \mathbb{C}, \ \operatorname{Re}\{z\}>0$$
\end{definition}
\begin{remark}
If $\alpha \in \mathbb{R}$ then the fractional derivative $\partial^\alpha f(t)$ converges to the usual derivative of integer order, $\partial^m f(t)$, as $\alpha \rightarrow m \in \mathbb{N}$, where $m-1< \alpha < m$ \cite{r5}.
\end{remark}
\begin{proposition}
Let $\alpha \in \mathbb{C}$ be such that $\operatorname{Re}\{\alpha\} > 0$ and $m-1<\operatorname{Re}\{\alpha\} < m$ for some $m \in \mathbb{N}$. Then for $s \in \mathbb{C}$ such that $\operatorname{Re}\{s\} > 0$ we have \cite{r5}
$$ \partial^{\alpha} t^s = \frac{\Gamma (s+1)}{\Gamma (s+1-\alpha)} t^{s-\alpha},\qquad \forall t > 0 $$
\end{proposition}
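The power rule above is easy to sanity-check numerically. The sketch below (illustrative Python, not part of the paper) evaluates the Caputo integral for $f(t)=t^s$ with $0<\alpha<1$ (so $m=1$) by a midpoint rule, which avoids the integrable endpoint singularity, and compares it with $\frac{\Gamma(s+1)}{\Gamma(s+1-\alpha)}\,t^{s-\alpha}$:

```python
import math

def caputo_power(t, s, alpha, n=200_000):
    """Midpoint-rule approximation of the Caputo derivative of f(t) = t**s
    for 0 < alpha < 1 (so m = 1 and f'(tau) = s * tau**(s-1))."""
    h = t / n
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * h  # midpoints avoid the singularity at tau = t
        total += (t - tau) ** (-alpha) * s * tau ** (s - 1) * h
    return total / math.gamma(1 - alpha)

t, s, alpha = 1.0, 2.0, 0.5
numeric = caputo_power(t, s, alpha)
exact = math.gamma(s + 1) / math.gamma(s + 1 - alpha) * t ** (s - alpha)
```

With this resolution the quadrature matches the closed form to about three decimal places; the slow convergence near $\tau = t$ is one reason dedicated schemes such as the L1 discretization used later are preferred.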
\subsection{Governing equations of the model}
In this article we consider a long Josephson junction of length $2L$, so that $ 2L \gg \lambda_J $, where $\lambda_J $ is the Josephson penetration depth \cite{r7}. The junction lies in the \textit{y}-\textit{z} plane. To make the model simpler and one-dimensional, the width $W$ of the junction is kept very small compared to $\lambda_J$, i.e.\ $ W \ll \lambda_J $. In this case the phase difference $\varphi$ varies only along the length of the junction, i.e.\ along the \textit{z}-axis \cite{r7}. We choose the origin of the coordinate system at the midpoint of the length of the junction. The barrier thickness is $d$ and its area is $A$. A constant external magnetic field $\mathbf{B}^{ex}$ is applied across the junction in the \textit{y}-direction, so $\mathbf{B}^{ex} = B_y^{ex}\,\mathbf{\hat{j}}$. A constant current, applied in the negative \textit{x}-direction, is maintained across the junction. Let $\mathbf{J}$ be the density of the total current flowing through the junction, so $\mathbf{J} = J_{x}\,\mathbf{\hat{i}}$, and we consider $J_{x} > J_c $, where $J_c $ is the maximum Josephson current density. Under these circumstances the Josephson junction operates in the voltage state \cite{r7}.
In a long Josephson junction the magnetic field generated by the Josephson current cannot be neglected \cite{r7}, so the total magnetic field $\mathbf{B}$ through the junction includes both the externally applied magnetic field and the one generated by the Josephson current. For the geometry in Fig.~\ref{fig:1.1} \cite{r7} the total magnetic field $\mathbf{B}$ still points in the \textit{y}-direction and varies only along the \textit{z}-direction, i.e.\ $\mathbf{B}(z,t) = B_y(z,t)\,\mathbf{\hat{j}}$, and we can write \cite{r7}:
\begin{figure}
\caption{Josephson Junction}
\label{fig:1.1}
\end{figure}
\begin{equation} \label{eq:1}
\frac{\partial \varphi(z,t)}{\partial z} = \frac{2\pi t_{B} B_y(z,t)}{\Phi_{0}}
\end{equation}
where $\Phi_{0} = \frac{h}{2e}$ is the magnetic flux quantum, $t_B = d + \lambda_{L1} + \lambda_{L2}$, $\lambda_{L1}$ and $\lambda_{L2}$ are the London penetration depths of the two superconductors across the junction and $d$ is the junction's thickness.\par
Using the fractional form, with respect to time $t$, of Amp\`ere's law given in \cite{r16}-\cite{r17},
\begin{equation} \label{eq:2}
\nabla \times \mathbf{B} = \mu_0\mathbf{J} + \frac{\mu_0 \epsilon \epsilon_0}{\eta^{1 - \alpha}} \frac{\partial^\alpha \mathbf{E}}{\partial t^\alpha}
\end{equation}
where $\mathbf{E}$ is the electric field, $\epsilon$ is the dielectric constant of the junction material and $\eta$ is an arbitrary quantity with dimension of [$\mathbf{T}$]. Since the current flows only in the \textit{x}-direction, $\mathbf{E}$ points in the \textit{x}-direction and varies along the \textit{z}-direction, i.e.\ $\mathbf{E}(z,t) = {E_x(z,t)}\,\mathbf{\hat{i}}$. So \eqref{eq:2} becomes
\begin{equation} \label{eq:3}
-\frac{\partial B_{y}(z,t)}{\partial z} = \mu_0 J_{x}(z,t) + \mu_0 \epsilon \epsilon_0 \frac{\partial^\alpha E_{x}(z,t)}{\partial t^\alpha}
\end{equation}
Now \eqref{eq:1} implies,
\begin{equation} \label{eq:4}
\frac{\partial B_y(z,t)}{\partial z} = \frac{\Phi_{0}}{2\pi t_{B}} \frac{\partial^2 \varphi(z,t)}{\partial z^2}
\end{equation}
Now let $V(z,t)$ be the voltage across the junction. So
\begin{equation*}
E_{x}(z,t) = -\frac{V(z,t)}{d}.
\end{equation*}
By the same line of argument as in \cite{r16}-\cite{r17}, the fractional form, with respect to time $t$, of the Josephson second equation (phase-voltage relation) can be written as:
\begin{equation} \label{eq:5}
\frac{1}{\eta^{1- \alpha}}\frac{\partial^{\alpha}\varphi(z,t)}{\partial t^{\alpha}} = \frac{2\pi}{\Phi_0}V(z,t)
\end{equation}
So,
\begin{equation} \label{eq:6}
\frac{\partial^\alpha E_{x}(z,t)}{\partial t^\alpha} = -\frac{\Phi_{0}}{2\pi d \,\eta^{1- \alpha}}\frac{\partial^{2\alpha} \varphi}{\partial t^{2\alpha}}
\end{equation}
Now the total current density $J_{x}$ has contributions of three different types: the Josephson current (or supercurrent) density $J_{s}$, the normal current (or resistive current) density $J_N$ and the bias current density $J_{bias}$ \cite{r7}:
\begin{equation} \label{eq:7}
J_{x} = -J_{s} - J_{N} + J_{bias}
\end{equation}
The negative sign appears because the current flows in the negative \textit{x}-direction. Now,
\begin{equation} \label{eq:8}
J_{N} = \frac{\sigma_{0} V}{A}= \frac{\sigma_{0} \Phi_{0}}{2\pi A \,\eta^{1- \alpha}}\frac{\partial^{\alpha} \varphi}{\partial t^{\alpha}},
\end{equation}
where $\sigma_{0}$ is the resistivity of the junction. From the Josephson first equation (phase-current relation) we have
\begin{equation} \label{eq:9}
J_{s} = J_{c} \sin{(\varphi)}
\end{equation}
Substituting \eqref{eq:8} and \eqref{eq:9} in \eqref{eq:7}, we get
\begin{equation} \label{eq:10}
J_{x}(z,t) = -J_{c}\sin{(\varphi(z,t))} - \frac{\sigma_{0} \Phi_{0}}{2\pi A \,\eta^{1- \alpha}}\frac{\partial^{\alpha} \varphi}{\partial t^{\alpha}} + J_{bias}
\end{equation}
Substituting \eqref{eq:4}, \eqref{eq:6} and \eqref{eq:10} in \eqref{eq:3} and simplifying, we get
\begin{equation} \label{eq:11}
-\frac{\partial^2 \varphi(z,t)}{\partial z^2} + \frac{1}{\overline{c}^{\,2} \eta^{1- \alpha}}\frac{\partial^{2\alpha} \varphi(z,t)}{\partial t^{2\alpha}}
+ \frac{\beta}{\overline{c}^{\,2} \eta^{1- \alpha}}\frac{\partial^{\alpha} \varphi(z,t)}{\partial t^{\alpha}}
+ \frac{1}{\lambda_J^2}\sin{(\varphi(z,t))}
= \frac{J_{bias}}{J_{c} \lambda_J^2}
\end{equation}
\begin{align*}
\textrm{where}\quad \overline{c} &= \sqrt{\frac{d}{\epsilon \epsilon_{0} \mu_{0} t_{B}}}, \ \ \beta = \frac{\sigma_{0}}{C} \ \ \textrm{and} \ \
{\lambda}_J = \sqrt{\frac{\Phi_0}{2\pi\mu_0 t_B J_c}} \\
C &= \frac{\epsilon \epsilon_{0} A}{d} \quad \textrm{is the capacitance of the junction}
\end{align*}
\subsection{Initial and Boundary Conditions}
Initially the voltage $V$ across the junction is zero, so from the Josephson second equation we have:
\begin{equation}\label{eq:12}
\frac{\partial \varphi}{\partial t}(z,0) = 0
\end{equation}
In the case of an inline junction the Neumann boundary conditions are given by \cite{r7}:
\begin{equation} \label{eq:13}
\frac{\partial \varphi}{\partial z}(-L,t) = \frac{2\pi t_{B}}{\Phi_{0}}\Big({B_y}^{ex} -\frac{I\mu_{0}}{2W} \Big) \quad \textrm{and}\quad \frac{\partial \varphi}{\partial z}(L,t) = \frac{2\pi t_{B}}{\Phi_{0}}\Big({B_y}^{ex} + \frac{I\mu_{0}}{2W} \Big)
\end{equation}
where $I$ is the total current flowing through the junction.\par
Since $L \gg \lambda_J$, let us take $ L = 10\lambda_J$. We take the following boundary conditions:
\begin{equation} \label{eq:14}
\frac{\partial \varphi}{\partial z}(-10\lambda_J,t) = \frac{0.00189}{\lambda_J} \quad
\textrm{and} \quad \frac{\partial \varphi}{\partial z}(10\lambda_J,t) = \frac{0.00163}{\lambda_J}
\end{equation}
For $\varphi(z,0)$ we take a particular solution of the stationary sine-Gordon equation as our initial phase difference, given in \cite{r7}:
\begin{equation} \label{eq:15}
\varphi(z,0) = 4 \arctan\bigg(\exp\bigg({\frac{ \frac{z}{\lambda_J} - 0.1}{\sqrt{1 - c^2}}}\bigg)\bigg)
\end{equation}
Here we take $c = 0.9$. So the final model becomes:
\begin{equation}\label{eq:16}
\begin{cases}
-\dfrac{\partial^2 \varphi(z,t)}{\partial z^2} + \dfrac{1}{\overline{c}^{\,2} \eta^{1 - \alpha}}\dfrac{\partial^{2\alpha} \varphi(z,t)}{\partial t^{2\alpha}}
+ \dfrac{\beta}{\overline{c}^{\,2} \eta^{1 - \alpha}}\dfrac{\partial^{\alpha} \varphi(z,t)}{\partial t^{\alpha}}
+ \dfrac{1}{\lambda_J^2}\sin{(\varphi(z,t))}
= \dfrac{J_{bias}}{J_{c} \lambda_J^2} \\[2mm]
\dfrac{\partial \varphi}{\partial z}(-10\lambda_J,t) = \dfrac{A}{\lambda_J}, \qquad \dfrac{\partial \varphi}{\partial z}(10\lambda_J,t) = \dfrac{B}{\lambda_J} \\[2mm]
\varphi(z,0) = 4 \arctan\bigg(\exp\bigg(\dfrac{ \frac{z}{\lambda_J} - 0.1}{\sqrt{1 - c^2}}\bigg)\bigg), \qquad
\dfrac{\partial \varphi}{\partial t}(z , 0) = 0
\end{cases}
\end{equation}
where $A = 0.00189$ and $B = 0.00163$.
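The initial datum above is a sine-Gordon kink interpolating between $0$ and $2\pi$. With $c = 0.9$ it is already numerically flat at the endpoints $z = \pm 10\lambda_J$, consistent with the small Neumann data $A$ and $B$. A quick check (illustrative Python, with $z$ measured in units of $\lambda_J$):

```python
import math

c = 0.9

def phi0(z):
    """Initial phase difference of Eq. (15), z in units of lambda_J."""
    return 4.0 * math.atan(math.exp((z - 0.1) / math.sqrt(1.0 - c ** 2)))

left, right = phi0(-10.0), phi0(10.0)
# The kink rises from ~0 at the left end to ~2*pi at the right end.
```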
\subsection{Non-dimensionalization of the Model}
To non-dimensionalize the IBVP we use the following dimensionless quantities:
\begin{equation}\label{eq:17}
\hat{z} = \frac{z}{{\lambda}_J } \ \ ,\ \ \hat{t} = \frac{\overline{c}\, t}{{\lambda}_J}
\end{equation}
since ${\lambda}_J$ has the dimension of [$\mathbf{L}$] and $\overline{c}$ has the dimension of [$\mathbf{LT^{-1}}$].
Substituting \eqref{eq:17} in \eqref{eq:16} and dropping the hats, we get the following dimensionless model:
\begin{equation} \label{eq:18}
\begin{cases}
-\dfrac{\partial^2 \varphi}{\partial z^2} + {\gamma}_2\dfrac{\partial^{2\alpha} \varphi}{\partial t^{2\alpha}} + {\gamma}_1\dfrac{\partial^{\alpha} \varphi}{\partial t^{\alpha}} + \sin{(\varphi)} = \lambda \\[2mm]
\dfrac{\partial \varphi}{\partial z}(-10, t ) = A \ \ ,\ \ \dfrac{\partial \varphi}{\partial z}(10 , t ) = B \\[2mm]
\varphi(z,0) = 4 \arctan\bigg(\exp\bigg({\dfrac{ z - 0.1}{\sqrt{1 - c^2}}}\bigg)\bigg) \ \ , \ \
\dfrac{\partial \varphi}{\partial t}( z , 0) = 0
\end{cases}
\end{equation}
where ${\gamma}_1 = \dfrac{\beta\, {\lambda_J}^{2 - \alpha}}{\overline{c}^{\,2 - \alpha}\, \eta^{1 - \alpha}} ,\quad {\gamma}_2 = \dfrac{1}{\overline{c}^{\,2(1 - \alpha)} {\lambda_J}^{2(\alpha - 1)} {\eta}^{1 - \alpha}} \quad \textrm{and} \quad \lambda = \dfrac{J_{bias}}{J_c}$
\section{Finite Difference-Finite Element Approximation Scheme} \label{S3}
In this section a numerical scheme is developed to find the approximate solution of the model \eqref{eq:18}. The finite difference method is used to discretize the time variable, while the space variable is discretized using the finite element method.
\subsection{Spaces and Notations}
$L^2(\Omega)$ denotes the space of square integrable functions over the domain $\Omega = [-10 , 10]$. $L^2(\Omega)$ is equipped with the following inner product and norm:
\begin{equation} \label{eq:19}
(\,f,g\,)_{L^2(\Omega)} := \int_{\Omega}fg\;dz \quad \textrm{and} \quad \|f\|_{L^2(\Omega)} := {\Bigg( \int_\Omega |f|^2 \,dz\Bigg)}^\frac{1}{2}, \quad f,g \in L^2(\Omega)
\end{equation}
In this article we will denote $(\,f,g\,)_{L^2(\Omega)}$ simply by $(\,f,g\,)$. We also define
\begin{equation} \label{eq:20}
\langle\, f , g \,\rangle := \bigg(\,\frac{\partial f}{\partial z}\, ,\,\frac{\partial g}{\partial z}\,\bigg)
\end{equation}
$H^m(\Omega)$ denotes the Sobolev space of order $m > 0$ over the domain $\Omega = [-10 , 10]$. $H^m(\Omega)$ is equipped with the following inner product and norm:
\begin{equation} \label{eq:21}
(\,f\,,\,g\,)_{H^m} := \sum_{i = 0}^m \bigg(\,\frac{d^i f}{dz^i}\, ,\,\frac{d^i g}{dz^i}\,\bigg) \quad \textrm{and} \quad \|f\|_{H^m(\Omega)} := \bigg(\sum_{i = 0}^m \Big\|\frac{d^i f}{dz^i}\Big\|_{L^2(\Omega)}^2\bigg)^{\frac{1}{2}}, \quad f,g \in H^m(\Omega)
\end{equation}
We define the following differential operator:
\begin{equation} \label{eq:22}
\mathcal{L}_t^{\alpha}[\xi(\cdot)](t) := \bigg({\gamma}_2\frac{\partial^{2\alpha}}{\partial t^{2\alpha}} + {\gamma}_1\frac{\partial^{\alpha}}{\partial t^{\alpha}}\bigg)[\xi(t)]
\end{equation}
\subsection{Finite Difference approximation}
Let $t_k = k\tau$ for $k = 0,1,\cdots , m$, where $\tau = \frac{T}{m}$, $T > 0$, is the time step size. Consider the following approximation of the first order time derivative for $0 \leq k < m$:
\begin{equation} \label{eq:23}
\frac{\partial \varphi}{\partial t}(z,t) \approx \frac{\partial \varphi}{\partial t}(z,t_{k+1}) \approx \frac{\varphi(z,t_{k+1}) - \varphi(z,t_k)}{\tau} \qquad \text{for} \quad t_k \leq t \leq t_{k+1}
\end{equation}
Initial conditions in \eqref{eq:18} yield,
\begin{equation} \label{eq:24}
\varphi(z , t_0) \approx 4 \arctan\bigg(\exp\bigg({\frac{ z - 0.1}{\sqrt{1 - c^2}}}\bigg)\bigg) \approx \varphi (z , t_1)
\end{equation}
Consider the following approximation of the second order time derivative for $\, 0 \leq k < m-1$:
\begin{equation} \label{eq:25}
\frac{\partial^2 \varphi}{\partial t^2}(z,t) \approx \frac{\partial^2 \varphi}{\partial t^2}(z,t_{k+2}) \approx \frac{\varphi(z,t_{k+2}) - 2\varphi(z,t_{k+1}) + \varphi(z,t_{k})}{\tau^2} \qquad \text{for} \quad t_{k+1} \leq t \leq t_{k+2}
\end{equation}
The fractional order time derivative $\partial_t^\alpha \varphi$ $( 0 < \alpha < 1)$ can be approximated using the scheme given by \cite{r9}-\cite{r10}. For $0 \leq k < m$,
\begin{align} \label{eq:26}
\frac{\partial^\alpha \varphi}{\partial t^\alpha}(z , t_{k+1}) & = \frac{1}{\Gamma(1 - \alpha)} \sum_{s=0}^{k}\int_{s\tau}^{(s+1)\tau}\frac{\partial \varphi(z,\omega)}{\partial \omega}\frac{1}{(t_{k+1}-\omega)^\alpha}\,d \omega \nonumber \\
&\approx \frac{1}{\Gamma(1 - \alpha)}\sum_{s=0}^{k}\frac{\varphi(z , t_{s+1}) - \varphi(z , t_{s})}{\tau}\int_{s\tau}^{(s+1)\tau}\frac{1}{(t_{k+1}-\omega)^\alpha}\,d \omega \nonumber\\
&= C_\alpha\big[\varphi(z , t_{k+1}) - \varphi(z , t_{k})\big] + C_\alpha \zeta_k^\alpha[\varphi]
\end{align}
\begin{align} \label{eq:27}
\textrm{where} \quad \zeta_k^\alpha[\varphi] &= \sum_{p=1}^{k}\big[\varphi(z , t_{k-p+1}) - \varphi(z , t_{k-p})\big]\,b_p^\alpha \qquad \textrm{for} \quad 0 \leq k < m \\
\zeta_0^\alpha[\varphi] := 0, \quad C_\alpha &= \frac{\tau^{-\alpha}}{\Gamma(2 - \alpha)} \quad \textrm{and} \quad b_p^\alpha = \big[(p+1)^{1-\alpha} - p^{1-\alpha}\big] \nonumber
\end{align}
\eqref{eq:24} implies that $\zeta_1^\alpha[\varphi] = 0$.\par
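Because the weights $b_p^\alpha$ telescope, this scheme reproduces the Caputo derivative of a linear function, $\partial_t^\alpha t = t^{1-\alpha}/\Gamma(2-\alpha)$, exactly up to round-off, which makes a convenient unit test. A minimal Python sketch of the discretization (function and variable names are illustrative):

```python
import math

def l1_caputo(phi, tau, alpha):
    """L1 discretization of the Caputo derivative of order 0 < alpha < 1:
    returns the approximations at t_1, ..., t_m given samples phi(t_0..t_m)."""
    c_alpha = tau ** (-alpha) / math.gamma(2 - alpha)
    b = lambda p: (p + 1) ** (1 - alpha) - p ** (1 - alpha)
    out = []
    for k in range(len(phi) - 1):
        # zeta accumulates the history (memory) terms of the scheme
        zeta = sum((phi[k - p + 1] - phi[k - p]) * b(p) for p in range(1, k + 1))
        out.append(c_alpha * (phi[k + 1] - phi[k]) + c_alpha * zeta)
    return out

alpha, tau, m = 0.6, 0.01, 100
phi = [j * tau for j in range(m + 1)]  # phi(t) = t
approx = l1_caputo(phi, tau, alpha)
exact = [((k + 1) * tau) ** (1 - alpha) / math.gamma(2 - alpha) for k in range(m)]
```

The history sum is what encodes the memory of previous states, in contrast to the purely local integer-order differences.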
Similarly, the fractional order time derivative $\partial_t^{2\alpha} \varphi$ $( 0 < \alpha < 1)$ can be approximated using the scheme given in \cite{r9}-\cite{r10}. So for $0 \leq k < m-1$,
\begin{align}
\frac{\partial^{2\alpha} \varphi}{\partial t^{2\alpha}}(z , t_{k+2}) &\approx C_{2\alpha} \big[\varphi(z , t_{k+2}) - 2\varphi(z , t_{k+1}) + \varphi(z , t_{k})\big] + C_{2\alpha} \zeta_k^{2\alpha}[\varphi] \label{eq:28} \\
\textrm{where} \quad \zeta_k^{2\alpha}[\varphi] &= \sum_{p=1}^{k}\big[\varphi(z , t_{k-p+2}) - 2\varphi(z , t_{k-p+1})+ \varphi(z , t_{k-p})\big]\,b_p^{2\alpha}, \qquad 0 \leq k < m-1 \label{eq:29} \\
\zeta_0^{2\alpha}[\varphi] := 0, \quad C_{2\alpha} &= \frac{\tau^{-2\alpha}}{\Gamma(3 - 2\alpha)} \quad \textrm{and} \quad b_p^{2\alpha} = \big[(p+1)^{2(1-\alpha)} - p^{2(1-\alpha)}\big] \nonumber
\end{align}
Using \eqref{eq:26} and \eqref{eq:28} in \eqref{eq:22}, $\mathcal{L}_t^{\alpha}[\varphi](t)$ can be approximated as follows (for $ 0 \leq k < m-1$):
\begin{align}
\mathcal{L}_t^{\alpha}[\varphi](t_{k+2}) &= \bigg({\gamma}_2\frac{\partial^{2\alpha}}{\partial t^{2\alpha}} + {\gamma}_1\frac{\partial^{\alpha}}{\partial t^{\alpha}}\bigg)[\varphi](t_{k+2}) \nonumber \\
&\approx {\gamma}_1 C_\alpha \big[\varphi(t_{k+2}) - \varphi(t_{k+1})\big] + {\gamma}_1 C_\alpha \zeta_{k+1}^\alpha[\varphi]
+ {\gamma}_2 C_{2\alpha} \big[\varphi(t_{k+2}) - 2\varphi(t_{k+1}) + \varphi(t_{k})\big] \label{eq:30} \\
&\quad + {\gamma}_2 C_{2\alpha} \zeta_k^{2\alpha}[\varphi] \nonumber
\end{align}
\subsection{Finite Element Discretization}
Form a partition of the domain $\Omega = (-10,10)$ with $n$ sub-domains $\Omega_i = (z_{i-1} \,, \,z_{i})$ for $i = 1,\cdots , n$ such that
$$-10 = z_0 < z_1 < \cdots < z_n=10,$$
$$\textrm{i.e.} \quad \bigcup_{i=1}^n \overline{\Omega_i} = \overline{\Omega} \quad \textrm{and} \quad \Omega_i \cap \Omega_j = \emptyset \quad \forall\, i,j = 1,2,\cdots , n, \ i\neq j.$$
We consider sub-domains $\Omega_i$ of equal length $h = \frac{20}{n}$.
We define a finite dimensional subspace $V_h^1(\Omega)$ of $H^1(\Omega)$ as follows:
$$ V_h^1(\Omega) = \{v_h \in H^1(\Omega) : v_h\big |_{\Omega_i} \in \mathbb{P}_r(\Omega_i) ,\, \forall \, i=1,\cdots,n \}$$
where $\mathbb{P}_r(\Omega_i)$ represents the space of Lagrange polynomials of degree at most $r$ over each sub-domain $\Omega_i$, $i = 1,\cdots , n$.\par
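For piecewise-linear elements ($r = 1$) on the uniform grid above, the global stiffness and mass matrices are assembled element by element from $2\times 2$ local contributions. A minimal NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def assemble_1d_p1(z):
    """Assemble global stiffness (A) and mass (M) matrices for P1 elements
    on the grid z[0] < z[1] < ... < z[n]."""
    n = len(z) - 1
    A = np.zeros((n + 1, n + 1))
    M = np.zeros((n + 1, n + 1))
    for i in range(n):
        h = z[i + 1] - z[i]
        A_loc = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # grad-grad
        M_loc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])    # L2 product
        idx = [i, i + 1]
        A[np.ix_(idx, idx)] += A_loc
        M[np.ix_(idx, idx)] += M_loc
    return A, M

z = np.linspace(-10.0, 10.0, 201)  # n = 200 equal sub-domains, h = 0.1
A, M = assemble_1d_p1(z)
# Constants lie in the kernel of A, and the entries of M sum to |Omega| = 20.
```

The stiffness rows sum to zero because constants have zero derivative, while the mass-matrix entries integrate products of the basis functions over $\Omega$.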
$\textbf{Weak Formulation}$. Find $\varphi \in C^1\big([0,T];H^1(\Omega)\big)$ such that \cite{r5}:
\begin{equation} \label{eq:31}
\begin{cases}
\langle \psi, \varphi \rangle + \mathcal{L}_t^{\alpha}( \psi, \varphi) + ( \psi, \sin{\varphi}) = ( \psi, \lambda) - A \psi(-10) + B \psi(10) \\[2mm]
\varphi(z,0) = 4 \arctan\bigg(\exp\bigg({\dfrac{ z - 0.1}{\sqrt{1 - c^2}}}\bigg)\bigg), \qquad
\dfrac{\partial \varphi}{\partial t}(z , 0) = 0
\end{cases}
\end{equation}
for all $\psi \in H^1(\Omega)$. \par
Let $\varphi_h$ denote an approximate solution of $\varphi$ in $C^1\big([0,T];V_h^1(\Omega)\big)$.\par
$\textbf{Discrete Weak Formulation}$. Find $\varphi_h \in C^1\big([0,T];V_h^1(\Omega)\big)$ such that \cite{r5}:
\begin{equation} \label{eq:32}
\begin{cases}
\langle \psi_h, \varphi_h \rangle + \mathcal{L}_t^{\alpha}( \psi_h, \varphi_h) + ( \psi_h, \sin{\varphi_h}) = ( \psi_h, \lambda) - A \psi_h(-10) + B \psi_h(10)\\[2mm]
\varphi_h(z,0) = 4 \arctan\bigg(\exp\bigg({\dfrac{ z - 0.1}{\sqrt{1 - c^2}}}\bigg)\bigg), \qquad
\dfrac{\partial \varphi_h}{\partial t}(z , 0) = 0
\end{cases}
\end{equation}
for all $\psi_h \in V_h^1(\Omega)$.\par
Using the approximation of $\mathcal{L}_t^{\alpha}[\varphi](t)$ from \eqref{eq:30} in \eqref{eq:32} at a particular time $t_{k+2}$ $(0\leq k < m-1)$, the discrete weak formulation takes the following form:\par
Find $\varphi_h(z,t_{k+2}) \in V_h^1(\Omega)$ such that \cite{r5}:
\begin{equation} \label{eq:33}
\begin{cases}
\langle \psi_h, \varphi_h(z,t_{k+2}) \rangle + \mathcal{L}_{k+2}^{\alpha}( \psi_h, \varphi_h(z,t_{k+2})) + ( \psi_h, \sin{\varphi_h(z,t_{k+2})}) \\[1mm]
\qquad = ( \psi_h, \lambda) - A \psi_h(-10) + B \psi_h(10) \\[2mm]
\varphi_h(z , t_0) = 4 \arctan\bigg(\exp\bigg({\dfrac{ z - 0.1}{\sqrt{1 - c^2}}}\bigg)\bigg) = \varphi_h (z , t_1)
\end{cases}
\end{equation}
for all $\psi_h \in V_h^1(\Omega)$.
Here $\mathcal{L}_{k+2}^{\alpha}( \psi_h, \varphi_h(z,t_{k+2}))$ represents $\mathcal{L}_{t}^{\alpha}[ (\psi_h, \varphi_h)](t_{k+2})$.\par
$\,\varphi_h(z,t)\,$ is defined as follows:
\begin{equation} \label{eq:34}
\varphi_h(z,t) = \sum_{p=1}^{D_h} \varphi_p^h(t) \phi_p^h (z), \quad z \in {\overline{\Omega}}, \quad \textrm{where} \quad \varphi_p^h(t) = \varphi_h(z_p,t)
\end{equation}
where $\{ \phi_p^h(z) \,\big |\, p=1,\cdots,D_h\}$ forms a basis of $V_h^1$ and $D_h = \textrm{dim}(V_h^1) $. By choosing $\psi_h$ as $\phi_p^h$ for the different values of $ p=1,\cdots,D_h $ in \eqref{eq:33}, the following system of non-linear equations is obtained for each $0 \leq k < m-1$:
\begin{equation} \label{eq:35}
\begin{cases}
\mathbb{A}^h[\varphi_{k+2}^h] + \mathbb{M}^h \mathcal{L}_{k+2}([\varphi_{k+2}^h]) + \mathbb{G}^h( \varphi_{k+2}^h) = \mathbb{F}^h\\
[\varphi_0^h] = 4 \arctan\bigg(\exp{\bigg(\frac{ z - 0.1}{ \sqrt{1 - c^2}}\bigg)}\bigg) = [\varphi_1^h]
\end{cases}
\end{equation}
where, for all $\,p,q = 1,\cdots,D_h$,
\begin{eqnarray*}
(\mathbb{A}^h)_{pq} &=& \langle \phi_p^h , \phi_q^h \rangle =
\frac{1}{h} \begin{pmatrix}
1 & -1\\
-1 & 1
\end{pmatrix}\\
(\mathbb{M}^h)_{pq} &=& ( \phi_p^h , \phi_q^h ) = \frac{h}{6} \begin{pmatrix}
2 & \, 1\\
1 & \,2
\end{pmatrix}\\
(\mathbb{G}^h(\varphi_{k+2}^h))_p &=& ( \sin(\varphi^h),\phi_p^h) = \frac{h}{\varphi_{i+1}^h - \varphi_{i}^h}\begin{pmatrix}
\cos(\varphi_i^h) - \frac{\sin(\varphi_{i+1}^h)}{\varphi_{i+1}^h - \varphi_{i}^h} + \frac{\sin(\varphi_{i}^h)}{\varphi_{i+1}^h - \varphi_{i}^h}
\\
-\cos(\varphi_{i+1}^h) + \frac{\sin(\varphi_{i+1}^h)}{\varphi_{i+1}^h - \varphi_{i}^h} - \frac{\sin(\varphi_{i}^h)}{\varphi_{i+1}^h - \varphi_{i}^h}
\end{pmatrix}\\
(\mathbb{F}^h)_p &=& (\lambda , \phi_p^h) = \frac{\lambda h}{2}\begin{pmatrix}
1\\
1
\end{pmatrix} \quad \textrm{for} \quad 1 < p < D_h\\
(\mathbb{F}^h)_1 &=& \begin{pmatrix}
\frac{\lambda h}{2} - A \\
\frac{\lambda h}{2}
\end{pmatrix}\\
(\mathbb{F}^h)_{D_h} &=& \begin{pmatrix}
\frac{\lambda h}{2} \\
\frac{\lambda h}{2} + B
\end{pmatrix}
\end{eqnarray*}
A code has been developed in MATLAB R2015a to solve the system \eqref{eq:35}.
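The element matrices above can be assembled into global matrices in a few lines. The following Python sketch (an illustration, not the authors' MATLAB code) assembles the global stiffness and mass matrices for linear Lagrange elements on a uniform mesh of $[-10,10]$ and checks two standard properties:

```python
import numpy as np

def assemble_matrices(n_elem, L=20.0):
    """Assemble global stiffness (A) and mass (M) matrices for
    linear Lagrange elements on a uniform mesh of n_elem elements."""
    h = L / n_elem
    n_nodes = n_elem + 1
    A = np.zeros((n_nodes, n_nodes))
    M = np.zeros((n_nodes, n_nodes))
    Ae = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])     # element mass
    for e in range(n_elem):
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += Ae
        M[np.ix_(idx, idx)] += Me
    return A, M

A, M = assemble_matrices(8)
# Row sums of the stiffness matrix vanish (constants lie in its kernel),
# and the mass-matrix entries sum to the length of the domain.
print(np.allclose(A.sum(axis=1), 0.0), np.isclose(M.sum(), 20.0))
```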
\section{Error Analysis and Scheme convergence} \label{S4}
In this section, the convergence and numerical error analysis of the scheme developed in the previous section are discussed. The theoretical prediction for the error is then compared with the one obtained using the numerical scheme. The two are found to be in good agreement with each other, thus validating the numerical scheme developed in the previous section. The theoretical error estimate for problems of the type under consideration in this article is given by the following theorem.
\subsection{Theoretical Error Predictions}
\begin{theorem}
If $\;\varphi(z,t)\;$ is the exact solution of \eqref{eq:18} and $\;\varphi_h(z,t_{k+1})\;$ is the solution of problem \eqref{eq:35}, i.e., the approximate solution of \eqref{eq:18} at time step $t_{k+1}$, then there exists a constant $C > 0$, independent of the space step size $h$ and the time step size $\tau$, such that:\\
$
\big\|\varphi(z,t_{k+1}) - \varphi_h(z,t_{k+1})\big\|_{L^2(\Omega)} \leq C(h^{r+1} + \tau^2)\qquad 0 \leq k < m \\
\big\|\varphi(z,t_{k+1}) - \varphi_h(z,t_{k+1})\big\|_{H^1(\Omega)} \leq C(h^{r} + \tau^2)\qquad \quad 0 \leq k < m \\
\textrm{where } r \textrm{ is the degree of the Lagrange polynomials used as the basis for } V_h^1(\Omega)$
\end{theorem}
\begin{proof}
The above error estimates can be derived using reasoning similar to that given in \cite{r9}--\cite{r10}.
\end{proof}
For the model under consideration in this article, linear Lagrange polynomials are used as the basis functions of the space $V_h^1(\Omega)$, so here $r = 1$.
\begin{figure}
\caption{Validation of Numerical Scheme}
\label{fig:6.1(a)}
\label{fig:6.1(b)}
\label{fig:6.1}
\end{figure}
\begin{figure}
\caption{Surface plots of approximate and fabricated solution}
\label{fig:6.2(a)}
\label{fig:6.2(b)}
\label{fig:6.2}
\end{figure}
\subsection{Numerical Error Estimates and Scheme validation}
To compare the theoretical and numerical error estimates of the scheme developed in the previous section, an exact solution of the model \eqref{eq:18} is fabricated by adding an artificial term $\,F_{art}(z,t)\,$ to the right-hand side of \eqref{eq:18} \cite{r5}. So we get:
\begin{equation} \label{eq:36}
\begin{cases}
-\frac{\partial^2 \varphi}{\partial z^2} + {\gamma}_2 \frac{\partial^{2 \alpha} \varphi}{\partial t^{2 \alpha}} + {\gamma}_1 \frac{\partial^{\alpha} \varphi}{\partial t^{\alpha}} + \sin \varphi = \lambda + F_{art}(z,t) \\
\frac{\partial \varphi}{\partial z}(-10 , t ) = A \qquad \qquad \frac{\partial \varphi}{\partial z}(10, t ) = B \\
\varphi(z,0) = 4 \arctan\bigg(\exp{\bigg(\frac{ z - 0.1}{\sqrt{1 - c^2}}\bigg)}\bigg) \qquad
\frac{\partial \varphi}{\partial t}(z , 0) = 0
\end{cases}
\end{equation}
The fabricated exact solution $\,\varphi_{ex}(z,t)\,$ should be smooth and satisfy the ICs and BCs of \eqref{eq:18}. We choose $\,\varphi_{ex}(z,t)\,$ to be:
\begin{equation*}
\,\varphi_{ex}(z,t) = 4\arctan\bigg(\exp{\bigg(\frac{ z - 0.1}{\sqrt{1 - c^2}}\bigg)}\bigg) + t^2\,
\end{equation*}
Substituting $\,\varphi_{ex}(z,t)\,$ in \eqref{eq:36} and then solving for $\,F_{art}(z,t)\,$, we get:
{ \small
\begin{align}
F_{art}(z,t) &= \sin{\bigg(4\arctan \bigg(\exp{\bigg(\frac{z - 0.1}{\sqrt{1 - c^2}}\bigg)}\bigg)+ t^2\bigg)} - 4\Bigg(\frac{\exp{\bigg(\frac{z - 0.1}{\sqrt{1 - c^2}}\bigg)}}{(1 - c^2)\bigg(\exp{\bigg(\frac{2(z - 0.1)}{\sqrt{1 - c^2}}\bigg)} + 1\bigg)} \nonumber\\
&\qquad - \frac{2\exp{\bigg(\frac{3(z - 0.1)}{\sqrt{1 - c^2}}\bigg)}}{(1-c^2)\bigg(\exp{\bigg(\frac{2(z - 0.1)}{\sqrt{1 - c^2}}\bigg)} + 1\bigg)^2}\Bigg) + \frac{2{\gamma}_1 t^{(2-\alpha)}}{\Gamma(3 - \alpha)} + \frac{2{\gamma}_2 t^{(2-2\alpha)}}{\Gamma(3 - 2\alpha)} \label{eq:37}
\end{align}
}
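The last two terms of \eqref{eq:37} come from the Caputo derivatives of $t^2$, namely ${}^CD_t^{\alpha}\, t^2 = 2t^{2-\alpha}/\Gamma(3-\alpha)$. As a quick sanity check (an illustration, not part of the paper's scheme), the standard L1 discretization of the Caputo derivative reproduces this closed form:

```python
import math

def caputo_l1(f_vals, tau, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0,1)
    at the final grid point, given f on a uniform grid with step tau."""
    n = len(f_vals) - 1
    c = tau ** (-alpha) / math.gamma(2.0 - alpha)
    s = 0.0
    for j in range(n):
        b = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)   # L1 weights
        s += b * (f_vals[n - j] - f_vals[n - j - 1])
    return c * s

alpha, tau, T = 0.6, 1e-3, 1.0
grid = [k * tau for k in range(int(T / tau) + 1)]
approx = caputo_l1([t * t for t in grid], tau, alpha)
exact = 2.0 * T ** (2.0 - alpha) / math.gamma(3.0 - alpha)
print(abs(approx - exact))  # small discretization error, O(tau^(2-alpha))
```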
An approximate solution of \eqref{eq:36} is then obtained using the same numerical scheme discussed in the previous section. Errors in the approximate solution of \eqref{eq:36} are calculated in the $\,L^2(\Omega)\,$ and $\,H^1(\Omega)\,$ norms using four different values of the space step size \textit{h}. These errors are converted to logarithmic scale and plotted against $\,\log(h)\,$ in Fig. \cref{fig:6.1(a)}. The slopes of the $\,L^2(\Omega)\,$ and $\,H^1(\Omega)\,$ error curves turn out to be 2.02 and 0.98, respectively. According to the theoretical predictions given by the above theorem, the slopes of these two error curves should be $(r+1)$ and $r$, respectively \cite{r5}. Since linear polynomials are used here as basis functions, the slopes should be 2 and 1, respectively. So the theoretical error estimate agrees well with the numerical one, which demonstrates that the numerical scheme developed in the previous section converges with the theoretically predicted convergence rates. The approximate and exact solutions of \eqref{eq:36} for the final time ($t = 1$) are plotted in Fig. \cref{fig:6.1(b)}. The two curves agree very well. The surface plots of the approximate and exact solutions, plotted in Fig. \cref{fig:6.2}, also agree very well. All these computations validate the numerical scheme developed in the previous section.
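The slopes reported above can be computed by a least-squares fit of $\log(\text{error})$ against $\log(h)$; a minimal sketch, with synthetic second-order errors standing in for the measured values:

```python
import numpy as np

def convergence_order(hs, errors):
    """Least-squares slope of log(error) versus log(h)."""
    return np.polyfit(np.log(hs), np.log(errors), 1)[0]

# Synthetic errors obeying e = C*h^2 (illustrative, not the paper's data):
hs = np.array([0.4, 0.2, 0.1, 0.05])
errs = 3.0 * hs ** 2
print(convergence_order(hs, errs))  # approximately 2 for second-order data
```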
\section{Simulations and discussions} \label{S5}
In this section the effects of the various non-dimensional parameters $\,\lambda\,$, $\,{\gamma}_1\,$, $\,{\gamma}_2\,$ and $\,\alpha\,$ on the phase difference $\,\varphi\,$, super current density $\,J_s\,$ and voltage $\,V\,$ are analyzed in detail with graphical representations. The effects of the various parameters on the quantities of interest are plotted for the final time $\,t = 1\,$ in the subsequent figures. In Fig. \cref{fig:7.1} the initial phase difference (scaled by $2\pi$) and initial super current density (normalized by the maximum Josephson current density $\,J_{c}\,$) are
displayed.
In Fig. \cref{fig:7.2} the effect of the fractional exponent $\,\alpha\,$ on the phase difference, super current density and voltage is plotted. It is observed that the phase difference, super current density and voltage decrease as $\,\alpha\,$ increases.
The effect of the parameter $\,{\gamma}_1\,$ on the phase difference, super current density and voltage is displayed in Fig. \cref{fig:7.3}. The parameter $\,{\gamma}_1\,$ is related to the internal properties of the junction and its structure, like conductivity, capacitance, dielectric constant, London penetration depth, width, etc. It is observed that the phase difference, voltage and super current density decrease as $\,{\gamma}_1\,$ increases. Since ${\gamma}_1$ is directly related to the conductivity, or equivalently inversely related to the resistance of the junction, an increase in ${\gamma}_1$ means an increase in conductivity or a decrease in resistance. So, by $V = IR$, the voltage decreases.
In Fig. \cref{fig:7.4} the phase difference, super current density and voltage are plotted for different values of the parameter ${\gamma}_2$. The parameter ${\gamma}_2$ was introduced through the use of the fractional time derivative. It is observed that the phase difference, super current density and voltage decrease with an increase in ${\gamma}_2$.
Similarly, in Fig. \cref{fig:7.5} the phase difference, super current density and voltage are plotted for different values of the parameter $\,\lambda\,$. The parameter $\,\lambda\,$ is related to the physical bias current flowing through the junction and the maximum Josephson current density. It is noticed that the phase difference, super current density and voltage all increase as $\,\lambda\,$ increases. Since $\lambda$ is directly related to the bias current, an increase in $\lambda$ means an increase in bias current. Hence, by $V = IR$, the voltage increases.
In Fig. \cref{fig:7.6} the evolution of the phase difference over the time interval $[0,1]$ is displayed for two different sets of parameter values.
\begin{figure}
\caption{Initial Phase difference and Josephson current density}
\caption{Effect of fractional exponent $\alpha$ on Phase difference, Josephson current density and Voltage}
\label{fig:7.1(a)}
\label{fig:7.1(b)}
\label{fig:7.1}
\label{fig:7.2(a)}
\label{fig:7.2(b)}
\label{fig:7.2(c)}
\label{fig:7.2}
\end{figure}
\begin{figure}
\caption{Effect of ${\gamma}_1$ on Phase difference, Josephson current density and Voltage}
\label{fig:7.3(a)}
\label{fig:7.3(b)}
\label{fig:7.3(c)}
\label{fig:7.3}
\end{figure}
\begin{figure}
\caption{Effect of ${\gamma}_2$ on Phase difference, Josephson current density and Voltage}
\label{fig:7.4(a)}
\label{fig:7.4(b)}
\label{fig:7.4(c)}
\label{fig:7.4}
\end{figure}
\begin{figure}
\caption{Effect of $\lambda$ on Phase difference, Josephson current density and Voltage}
\caption{Evolution of phase difference for different parameter values over the time interval [0,1]}
\label{fig:7.5(a)}
\label{fig:7.5(b)}
\label{fig:7.5(c)}
\label{fig:7.5}
\label{fig:7.6(a)}
\label{fig:7.6(b)}
\label{fig:7.6}
\end{figure}
\section{Conclusion} \label{S6}
The evolution of the phase difference between the two superconductors across a long inline Josephson junction, in the voltage state and under the influence of a magnetic field, is discussed in this article. The Josephson current density and voltage across the junction are also calculated from the phase difference. The finite element method, along with the finite difference method, is used to numerically solve the mathematical model describing the evolution of the phase difference. Convergence and error analysis of the finite element scheme are also discussed. The effects of different parameters on the phase difference, Josephson current density and voltage are studied graphically. The phase difference, Josephson current density and voltage all decrease with an increase in $\alpha$ and ${\gamma}_1$. A similar trend is found with an increase in ${\gamma}_2$. But with an increase in $\lambda$, the phase difference, Josephson current density and voltage increase.\par
The graphical results obtained for the phase difference, Josephson current density and voltage, using the fractional time derivative in the model, become almost identical to the ones already obtained for the integer-order time derivative \cite{r7} as the fractional exponent $\alpha$ and ${\gamma}_2$ approach 1.
\end{document} | math | 46,922 |
\begin{document}
\title{{\Large {\bf Stationary measures for the three-state Grover walk \\ with one defect in one dimension}}}
\par\noindent
\begin{small}
\par\noindent
{\bf Abstract}. We obtain stationary measures for the one-dimensional three-state Grover walk with one defect by solving the corresponding eigenvalue problem. We clarify a relation between stationary and limit measures of the walk.
\footnote[0]{
{\it Keywords: }
Quantum walk, stationary measure, Grover walk
}
\end{small}
\setcounter{equation}{0}
\section{Introduction \label{intro}}
The quantum walk (QW) was introduced as a quantum version of the classical random walk. The QW has attracted much attention in various fields. Reviews and books on QWs include Venegas-Andraca [16], Konno [9], Cantero et al. [1], Portugal [14], and Manouchehri and Wang [13], for example.
The present paper deals with stationary measures of discrete-time QWs on $\mathbb{Z}$, where $\mathbb{Z}$ is the set of integers. The stationary measures of Markov chains have been intensively investigated; however, the corresponding study for QWs has not been developed sufficiently. As for stationary measures of two-state QWs, Konno et al. [11] treated QWs with one defect at the origin and showed that a stationary measure with exponential decay with respect to the position for the QW starting from infinitely many sites is identical to a time-averaged limit measure for the same QW starting from just the origin. Konno [10] investigated stationary measures for various cases. Endo et al. [6] obtained a stationary measure of the QW with one defect whose quantum coins are defined by the Hadamard matrix at $x \not= 0$ and a rotation matrix at $x = 0$. Endo and Konno [3] calculated a stationary measure of the QW with one defect which was introduced and studied by W\'ojcik et al. [15]. Moreover, Endo et al. [5] and Endo et al. [2] obtained stationary measures of the two-phase QW without defect and with one defect, respectively. Konno and Takei [12] considered stationary measures of QWs and gave non-uniform stationary measures. They proved that the set of stationary measures contains the uniform measure for QWs in general. As for stationary measures of three-state QWs, Konno [10] obtained stationary measures of the three-state Grover walk. Furthermore, Wang et al. [17] investigated stationary measures of the three-state Grover walk with one defect at the origin. Endo et al. [4] obtained stationary measures for the three-state diagonal quantum walks without defect and with one defect. In this paper, we consider stationary measures for the three-state Grover walk with one defect introduced by Wang et al. [17], clarifying their argument. Moreover, we find a relation between stationary and limit measures of the walk.
The rest of the paper is organized as follows. Section \ref{def} gives the definition of our model. In Section \ref{gfm}, we present solutions of the eigenvalue problem by a generating function method. In Section \ref{sm}, we obtain stationary measures and clarify a relation between stationary and limit measures for the walk. Section \ref{sum} is devoted to the summary.
\section{Definition of Our Model \label{def}}
This section gives the definition of our three-state QW with one defect at the origin on $\mathbb{Z}$. The discrete-time QW is a quantum version of the classical random walk with an additional degree of freedom called chirality. The chirality takes values left, stay, or right, and it determines the motion of the walker. At each time step, if the walker has the left chirality, it moves one step to the left, and if it has the right chirality, it moves one step to the right. If it has the stay chirality, it stays at the same position.
Let us define
\begin{align*}
\ket{L} =
\left[
\begin{array}{cc}
1 \\
0 \\
0
\end{array}
\right],
\qquad
\ket{O} =
\left[
\begin{array}{cc}
0 \\
1 \\
0
\end{array}
\right],
\qquad
\ket{R} =
\left[
\begin{array}{cc}
0 \\
0 \\
1
\end{array}
\right],
\end{align*}
where $L, O$ and $R$ refer to the left, stay and right chirality states, respectively.
The time evolution of the walk is determined by a sequence of $3 \times 3$ unitary matrices $\{ U_x : x \in \mathbb{Z} \}$, where
\begin{align*}
U_x =
\left[
\begin{array}{ccc}
u_{x,11} & u_{x,12} & u_{x,13} \\
u_{x,21} & u_{x,22} & u_{x,23} \\
u_{x,31} & u_{x,32} & u_{x,33}
\end{array}
\right],
\end{align*}
with $u_{x,jk} \in \mathbb C \> (x \in \mathbb{Z},\ j,k =1,2,3)$, where $\mathbb{C}$ is the set of complex numbers. To define the dynamics of our model, we divide $U_x$ into three matrices:
\begin{eqnarray*}
U_x^{L} =
\left[
\begin{array}{ccc}
u_{x,11} & u_{x,12} & u_{x,13} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right],
\quad
U_x^{O} =
\left[
\begin{array}{ccc}
0 & 0 & 0 \\
u_{x,21} & u_{x,22} & u_{x,23} \\
0 & 0 & 0
\end{array}
\right],
\quad
U_x^{R} =
\left[
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
u_{x,31} & u_{x,32} & u_{x,33}
\end{array}
\right],
\end{eqnarray*}
with $U_x =U_x^{L}+U_x^{O}+U_x^{R}$. The important point is that $U_x^{L}$ (resp. $U_x^{R}$) represents that the walker moves to the left (resp. right) at position $x$ at each time step. $U_x^{O}$ represents that the walker stays at position $x$.
The model considered here is
\begin{align*}
U_x=
\begin{cases}
\omega U_G & (x=0),
\\
U_G & (x= \pm1, \pm2, \ldots),
\end{cases}
\end{align*}
where $\omega = e^{i \theta} \> (\theta \in [0,2\pi))$. Here $U_G$ is the Grover matrix given by
\begin{align*}
U_G
=
\frac{1}{3}
\begin{bmatrix}
-1&2&2\\
2&-1&2\\
2&2&-1
\end{bmatrix}.
\end{align*}
If $\theta = 0$, that is, $\omega =1$, then $U_x=U_G$ for any $x \in \mathbb{Z}$. So this space-homogeneous model is equivalent to the usual three-state {\it Grover walk} on $\mathbb{Z}$.
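As a quick check, $U_G$ is unitary, and so is $\omega U_G$ for any $|\omega| = 1$; a short numerical verification (illustrative only):

```python
import numpy as np

# The Grover coin U_G and the defect coin omega * U_G
U_G = np.array([[-1, 2, 2],
                [2, -1, 2],
                [2, 2, -1]]) / 3.0
theta = 0.7                      # any angle in [0, 2*pi)
omega = np.exp(1j * theta)

for U in (U_G, omega * U_G):
    # U is unitary iff U U^* = I
    print(np.allclose(U @ U.conj().T, np.eye(3)))
```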
Let $\Psi_n$ denote the amplitude at time $n$ of the QW as follows.
\begin{align*}
\Psi_{n}
&= {}^T\![\cdots,\Psi_{n}^{L}(-1),\Psi_{n}^{O}(-1),\Psi_{n}^{R}(-1),\Psi_{n}^{L}(0),\Psi_{n}^{O}(0),\Psi_{n}^{R}(0),\Psi_{n}^{L}(1),\Psi_{n}^{O}(1),\Psi_{n}^{R}(1),\cdots ],
\\
&= {}^T\!\left[\cdots,
\begin{bmatrix}
\Psi_{n}^{L}(-1)\\
\Psi_{n}^{O}(-1)\\
\Psi_{n}^{R}(-1)
\end{bmatrix},
\begin{bmatrix}
\Psi_{n}^{L}(0)\\
\Psi_{n}^{O}(0)\\
\Psi_{n}^{R}(0)
\end{bmatrix},
\begin{bmatrix}
\Psi_{n}^{L}(1)\\
\Psi_{n}^{O}(1)\\
\Psi_{n}^{R}(1)
\end{bmatrix},
\cdots\right],
\end{align*}
where $T$ means the transposed operation. Then the time evolution of the walk is defined by
\begin{align}
\Psi_{n+1} (x) = U_{x+1}^{L} \Psi_{n} (x+1) + U_{x}^{O} \Psi_{n} (x) + U_{x-1}^{R} \Psi_{n} (x-1).
\label{lets001}
\end{align}
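Since the evolution is unitary, the total measure $\sum_{x} \mu_n(x)$ is conserved in time. The following sketch (with illustrative parameters) implements the update \eqref{lets001} on a truncated lattice and checks this conservation:

```python
import numpy as np

U_G = np.array([[-1, 2, 2], [2, -1, 2], [2, 2, -1]]) / 3.0
omega = np.exp(1j * np.pi / 3)           # defect parameter (illustrative)

def coin(x):
    """Coin at site x: omega * U_G at the defect, U_G elsewhere."""
    return omega * U_G if x == 0 else U_G

N, steps = 40, 20                        # truncated lattice [-N, N]
psi = np.zeros((2 * N + 1, 3), dtype=complex)
psi[N] = np.array([1, 1, 1]) / np.sqrt(3)    # start at the origin

for _ in range(steps):
    new = np.zeros_like(psi)
    for i in range(1, 2 * N):
        x = i - N
        # rows of the coin: L -> move left, O -> stay, R -> move right
        new[i, 0] = coin(x + 1)[0] @ psi[i + 1]
        new[i, 1] = coin(x)[1] @ psi[i]
        new[i, 2] = coin(x - 1)[2] @ psi[i - 1]
    psi = new

mu = (np.abs(psi) ** 2).sum(axis=1)
print(abs(mu.sum() - 1.0) < 1e-12)       # total measure conserved
```

The truncation is harmless here because after 20 steps the support of the walk cannot reach the boundary of $[-40, 40]$.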
Now let
\begin{align*}
U^{(s)}=
\begin{bmatrix}
\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\ldots\\
\ldots&U_{-2}^O&U_{-1}^L&O&O&O&\ldots\\
\ldots&U_{-2}^R&U_{-1}^O&U_0^L&O&O&\ldots\\
\ldots&O&U_{-1}^R&U_0^O&U_1^L&O&\ldots\\
\ldots&O&O&U_0^R&U_1^O&U_2^L&\ldots\\
\ldots&O&O&O&U_1^R&U_2^O&\ldots\\
\ldots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{bmatrix},
\;\;\;
\text{with}\;\;\;
O=\begin{bmatrix}
0&0&0\\
0&0&0\\
0&0&0
\end{bmatrix}.
\end{align*}
Then the state of the QW at time $n$ is given by
\begin{align*}
\Psi_{n}=(U^{(s)})^{n}\Psi_{0},
\end{align*}
for any $n\geq0$. Let $\mathbb{R}_{+}=[0,\infty)$. Here we introduce a map $\phi:(\mathbb{C}^{3})^{\mathbb{Z}}\rightarrow \mathbb{R}_{+}^{\mathbb{Z}}$ such that if
\begin{align*}
\Psi= {}^T\!\left[\cdots,\begin{bmatrix}
\Psi^{L}(-1)\\
\Psi^{O}(-1)\\
\Psi^{R}(-1)\end{bmatrix},\begin{bmatrix}
\Psi^{L}(0)\\
\Psi^{O}(0)\\
\Psi^{R}(0)\end{bmatrix},\begin{bmatrix}
\Psi^{L}(1)\\
\Psi^{O}(1)\\
\Psi^{R}(1)\end{bmatrix},\cdots\right]\in(\mathbb{C}^{3})^{\mathbb{Z}},
\end{align*}
then
\begin{align*}
\phi(\Psi)
&= {}^T\!
\left[ \ldots,
|\Psi^{L}(-1)|^2 + |\Psi^{O}(-1)|^2 + |\Psi^{R}(-1)|^2,
|\Psi^{L}(0)|^2 + |\Psi^{O}(0)|^2 + |\Psi^{R}(0)|^2,
\right.
\\
& \qquad \qquad \left. |\Psi^{L}(1)|^2 + |\Psi^{O}(1)|^2 + |\Psi^{R}(1)|^2, \ldots
\right] \in \mathbb{R}_{+}^{\mathbb{Z}}.
\end{align*}
That is, for any $x \in \mathbb{Z}$,
\begin{align*}
\phi(\Psi) (x) = \phi(\Psi(x))= |\Psi^{L}(x)|^2 + |\Psi^{O}(x)|^2 + |\Psi^{R}(x)|^2.
\end{align*}
Moreover we define the measure of the QW at position $x$ by
\begin{align*}
\mu(x)=\phi(\Psi(x)) \quad (x \in \mathbb{Z}).
\end{align*}
Now we are ready to introduce the set of stationary measure:
\begin{align*}
{\cal M}_{s}
= {\cal M}_s (U)
= \left\{ \phi(\Psi_{0})\in\mathbb{R}_{+}^{\mathbb{Z}} \setminus \{ \boldsymbol{0} \} : \text{there exists } \Psi_{0} \text{ such that } \phi((U^{(s)})^{n}\Psi_{0})=\phi(\Psi_{0}) \text{ for any } n\geq 0 \right\},
\end{align*}
where $\boldsymbol{0}$ is the zero vector. We call the element of ${\cal M}_{s}$ the stationary measure of the QW.
Next we consider the eigenvalue problem of the QW:
\begin{align}
U^{(s)} \Psi = \lambda \Psi \quad (\lambda \in \mathbb{C}).
\label{samui}
\end{align}
Remark that $|\lambda|=1$, since $U^{(s)}$ is unitary. We sometimes write $\Psi=\Psi^{(\lambda)}$ in order to emphasize the dependence on the eigenvalue $\lambda$. Then we see that $\phi (\Psi^{(\lambda)}) \in {\cal M}_s$.
Let $\mu_n (x)$ be the measure of the QW at position $x$ and at time $n$, i.e., \begin{align*}
\mu_n (x)=\phi(\Psi_n(x)) \quad (x \in \mathbb{Z}).
\end{align*}
If $\lim_{n \to \infty} \mu_n (x)$ exists for any $x \in \mathbb{Z}$, then we define the limit measure $\mu_{\infty} (x)$ by
\begin{align*}
\mu_{\infty} (x) = \lim_{n \to \infty} \mu_n (x) \quad (x \in \mathbb{Z}).
\end{align*}
\section{Splitted Generating Function Method \label{gfm}}
In this section, we give solutions of the eigenvalue problem, $U^{(s)} \Psi = \lambda \Psi$, by the splitted generating function method developed in the previous studies [11,3]. First we see that $U^{(s)} \Psi = \lambda \Psi$ is equivalent to the following relations:
\begin{align*}
\lambda
\begin{bmatrix}
\Psi^{L}(1)\\
\Psi^{O}(1)\\
\Psi^{R}(1)
\end{bmatrix}
&=\frac{1}{3}
\begin{bmatrix}
-\Psi^{L}(2)+2\Psi^{O}(2)+2\Psi^{R}(2)\\
2\Psi^{L}(1)-\Psi^{O}(1)+2\Psi^{R}(1)\\
2\omega\alpha\ \ +\ \ 2\omega\beta\ \ -\ \ \omega\gamma
\end{bmatrix}
,
\\
\lambda
\begin{bmatrix}
\alpha\\
\beta\\
\gamma
\end{bmatrix}
&=\frac{1}{3}
\begin{bmatrix}
-\Psi^{L}(1)+2\Psi^{O}(1)+2\Psi^{R}(1)\\
2\omega\alpha\ \ -\ \ \omega\beta\ \ +\ \ 2\omega\gamma\\
2\Psi^{L}(-1)+2\Psi^{O}(-1)-\Psi^{R}(-1)
\end{bmatrix}
,
\\
\lambda
\begin{bmatrix}
\Psi^{L}(-1)\\
\Psi^{O}(-1)\\
\Psi^{R}(-1)
\end{bmatrix}
&=\frac{1}{3}
\begin{bmatrix}
-\omega\alpha\ \ +\ \ 2\omega\beta\ \ +\ \ 2\omega\gamma\\
2\Psi^{L}(-1)-\Psi^{O}(-1)+2\Psi^{R}(-1)\\
2\Psi^{L}(-2)+2\Psi^{O}(-2)-\Psi^{R}(-2)
\end{bmatrix}
,
\end{align*}
and for $x\neq-1,0,1$,
\begin{align*}
\lambda
\begin{bmatrix}
\Psi^{L}(x)\\
\Psi^{O}(x)\\
\Psi^{R}(x)
\end{bmatrix}
=\frac{1}{3}
\begin{bmatrix}
-\Psi^{L}(x+1)+2\Psi^{O}(x+1)+2\Psi^{R}(x+1)\\
2\Psi^{L}(x)\ \ -\ \ \Psi^{O}(x)\ \ +\ \ 2\Psi^{R}(x)\\
2\Psi^{L}(x-1)+2\Psi^{O}(x-1)-\Psi^{R}(x-1)
\end{bmatrix}
,
\end{align*}
where $\Psi^{L}(0)=\alpha, \> \Psi^{O}(0)=\beta, \> \Psi^{R}(0)=\gamma$ with $|\alpha|^2 + |\beta|^2 + |\gamma|^2 >0$.
Here we introduce six generating functions as follows:
\begin{align*}
f^j_+ (z)=\sum_{x=1}^{\infty}\Psi^{j}(x)z^x,\ \ f^j_- (z)=\sum_{x=-1}^{-\infty}\Psi^{j}(x)z^x,\ \ (j=L,\ O,\ R).
\end{align*}
Then the following lemma was given by Wang et al. [17].
\begin{lem}
We put
\begin{align*}
A=
\begin{bmatrix}
\lambda+\dfrac{1}{3z}&-\dfrac{2}{3z}&-\dfrac{2}{3z}\\\\
-\dfrac{2}{3}&\lambda+\dfrac{1}{3}&-\dfrac{2}{3}\\\\
-\dfrac{2z}{3}&-\dfrac{2z}{3}&\lambda+\dfrac{z}{3}
\end{bmatrix},\ \ \ \
f_\pm(z)=
\begin{bmatrix}
f^L_{\pm}(z)\\\\
f^O_{\pm}(z)\\\\
f^R_{\pm}(z)
\end{bmatrix}
,\\\\
a_+ (z)=
\begin{bmatrix}
-\lambda \alpha\\\\
0\\\\
\dfrac{\omega z(2\alpha+2\beta-\gamma)}{3}
\end{bmatrix},\ \ \ \
a_- (z)=
\begin{bmatrix}
\dfrac{\omega(-\alpha+2\beta+2\gamma)}{3z}\\\\
0\\\\
-\lambda \gamma
\end{bmatrix}
,
\end{align*}
where $|\alpha|^2 + |\beta|^2 + |\gamma|^2 >0$. Then we have
\begin{align*}
Af_\pm (z)=a_\pm (z).
\end{align*}
\label{lem001}
\end{lem}
We should remark that
\begin{align*}
\det A=\dfrac { \lambda(\lambda-1)}{3z} \bigg\{z^2+3 \bigg( \lambda+\frac{4}{3}+\frac{1}{\lambda} \bigg) z+1 \bigg\}.
\end{align*}
Then $\theta_s$ and $\theta_l (\in \mathbb{C})$ are defined by
\begin{align*}
\det A=\dfrac{\lambda(\lambda-1)}{3z}(z+\theta_s)(z+\theta_l),
\end{align*}
where $|\theta_s| \leq 1 \leq |\theta_l|$. Note that $\theta_s \theta_l=1$. Lemma \ref{lem001} gives the following lemma which was also shown by Wang et al. [17].
\begin{lem}
\begin{align*}
\Psi^{L}(x)&=
\begin{cases}
\alpha{(-\theta^{L}_{s}(+))}^x & (x\geq1),
\\\\
-\frac{(3\lambda+1)\Delta(-)\omega-6(\lambda+1)\gamma}{3(\lambda-1)}{(-\theta^{L}_{s}(-))}^{-x} & (x\leq-1),
\end{cases}
\\
\\
\Psi^{O}(x)&=
\begin{cases}
-\frac{2(\Delta(+)\omega-3\alpha)}{3(\lambda-1)}{(-\theta^{O}_{s} (+))}^x & (x\geq1), \\\\
-\frac{2(\Delta(-)\omega-3\gamma)}{3(\lambda-1)}{(-\theta^{O}_{s}(-))}^{-x} &(x\leq-1),
\end{cases}
\\
\\
\Psi^{R}(x)&=
\begin{cases}
-\frac{(3\lambda+1)\Delta(+)\omega-6(\lambda+1)\alpha}{3(\lambda-1)}{(-\theta^{R}_{s}(+))}^{x} & (x\geq1),
\\\\
\gamma{(-\theta^{R}_{s}(-))}^{-x} & (x\leq-1).
\end{cases}
\end{align*}
Here $\Delta(+)=2\alpha+2\beta-\gamma,\ \Delta(-)=-\alpha+2\beta+2\gamma$, and
\begin{align*}
\theta^{L}_{s}(+)
&=-\frac{2(\lambda+1)\Delta(+)\omega-3{{\lambda}^2} (3\lambda+1)\alpha}{3\lambda(\lambda-1)\alpha},
\qquad
\theta^{L}_{s}(-)=\frac{(\lambda-1)\Delta(-)\omega}{\lambda\{(3\lambda+1)\Delta(-)\omega-6(\lambda+1)\gamma\}},
\\
\theta^{O}_{s}(+)
&=\frac{\Delta(+)\omega-3 \lambda^2 \alpha}
{\lambda(\Delta(+)\omega-3\alpha)},
\qquad
\theta^{O}_{s}(-)=\frac{\Delta(-)\omega-3 \lambda^2 \gamma}
{\lambda(\Delta(-)\omega-3\gamma)},
\\
\theta^{R}_{s}(+)
&=\frac{(\lambda-1)\Delta(+)\omega}{\lambda\{(3\lambda+1)\Delta(+)\omega-6(\lambda+1)\alpha\}},
\qquad
\theta^{R}_{s}(-)=-\frac{2(\lambda+1)\Delta(-)\omega-3{{\lambda}^2} (3\lambda+1)\gamma}{3\lambda(\lambda-1)\gamma}.
\end{align*}
\label{lem002}
\end{lem}
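The identity $\theta_s \theta_l = 1$ noted above follows from Vieta's formulas, since the quadratic factor of $\det A$ has constant term $1$. A numerical check for a sample $\lambda$ on the unit circle (illustrative only):

```python
import numpy as np

lam = np.exp(1j * 0.9)                       # sample eigenvalue, |lam| = 1
# Roots of z^2 + 3(lam + 4/3 + 1/lam) z + 1 are -theta_s and -theta_l
roots = np.roots([1.0, 3.0 * (lam + 4.0 / 3.0 + 1.0 / lam), 1.0])
theta_s, theta_l = sorted(-roots, key=abs)   # |theta_s| <= |theta_l|
print(np.isclose(theta_s * theta_l, 1.0))    # product of roots = constant term
```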
From now on, we find out a necessary and sufficient condition for
\begin{align*}
\theta^{L}_{s}(+)=\theta^{O}_{s}(+)=\theta^{R}_{s}(+)=\theta^{L}_{s}(-)=\theta^{O}_{s}(-)=\theta^{R}_{s}(-).
\end{align*}
First we see that $\theta^{L}_{s}(+)=\theta^{R}_{s}(-)$ and $\theta^{R}_{s}(+)=\theta^{L}_{s}(-)$ give
\begin{align*}
(\alpha-\gamma)(\alpha+\gamma-2\beta)=0.
\end{align*}
In a similar fashion, $\theta^{O}_{s}(+)=\theta^{O}_{s}(-)$ implies
\begin{align*}
(\alpha-\gamma)(\alpha+\gamma-2\beta)(\lambda+1)(\lambda-1)=0.
\end{align*}
Moreover $\theta^{L}_{s}(+)=\theta^{O}_{s}(+)$ and $\theta^{O}_{s}(+)=\theta^{R}_{s}(+)$ give
\begin{align*}
(\lambda+1) \bigg\{9 \alpha (\omega\Delta(+)-2\alpha){\lambda}^2-6\alpha\omega\Delta(+)\lambda-\omega\Delta(+)(2\omega\Delta(+)-9\alpha) \bigg\}=0.
\end{align*}
Similarly combining $\theta^{L}_{s}(-)=\theta^{O}_{s}(-)$ with $\theta^{O}_{s}(-)=\theta^{R}_{s}(-)$ implies
\begin{align*}
(\lambda+1) \bigg\{9 \gamma (\omega\Delta(-)-2\gamma){\lambda}^2-6\gamma\omega\Delta(-)\lambda-\omega\Delta(-)(2\omega\Delta(-)-9\gamma) \bigg\}=0.
\end{align*}
From $\theta^{L}_{s}(+)=\theta^{R}_{s}(+)$, we get
\begin{align*}
(3\lambda+1)(\lambda+1) \bigg\{9 \alpha (\omega\Delta(+)-2\alpha){\lambda}^2-6\alpha\omega\Delta(+)\lambda-\omega\Delta(+)(2\omega\Delta(+)-9\alpha) \bigg\}=0.
\end{align*}
Furthermore, $\theta^{L}_{s}(-)=\theta^{R}_{s}(-)$ gives
\begin{align*}
(3\lambda+1)(\lambda+1) \bigg\{9 \gamma (\omega\Delta(-)-2\gamma){\lambda}^2-6\gamma\omega\Delta(-)\lambda-\omega\Delta(-)(2\omega\Delta(-)-9\gamma) \bigg\}=0.
\end{align*}
Therefore we have
\begin{lem}
A necessary and sufficient condition for
\begin{align*}
\theta^{L}_{s}(+)=\theta^{O}_{s}(+)=\theta^{R}_{s}(+)=\theta^{L}_{s}(-)=\theta^{O}_{s}(-)=\theta^{R}_{s}(-)
\end{align*}
is that $\alpha,\ \beta,\ \gamma,$ and $\lambda (\in \mathbb{C})$ with $|\alpha|^2 + |\beta|^2 + |\gamma|^2 >0$ and $|\lambda|=1$ satisfy
\begin{align}
&\beta
=\frac{2\omega(\alpha+\gamma)}{3\lambda+\omega},
\label{sineno001}
\\
&(\alpha-\gamma)(\alpha+\gamma-2\beta)=0,
\label{sineno002}
\\
&(\lambda+1) \bigg\{9 \alpha (\omega\Delta(+)-2\alpha){\lambda}^2-6\alpha\omega\Delta(+)\lambda-\omega\Delta(+)(2\omega\Delta(+)-9\alpha) \bigg\}=0,
\label{sineno003}
\\
&(\lambda+1) \bigg\{9 \gamma (\omega\Delta(-)-2\gamma){\lambda}^2-6\gamma\omega\Delta(-)\lambda-\omega\Delta(-)(2\omega\Delta(-)-9\gamma) \bigg\}=0.
\label{sineno004}
\end{align}
\label{lem003}
\end{lem}
We should note that the relation \eqref{sineno001} is missing in Wang et al. [17].
By Lemma \ref{lem003}, we obtain the eigenvalue $\lambda$ for $U^{(s)} \Psi = \lambda \Psi$ as follows. Here we assume that $\omega \not= 1$, that is, our QW is space-inhomogeneous.
\begin{description}
\item[(i)] $\alpha=\gamma$ case. We see that Eq. \eqref{sineno003} is equivalent to Eq. \eqref{sineno004}, since $\Delta(+)=\Delta(-)=\alpha+2\beta.$
\begin{description}
\item [(a)] $\alpha \neq \beta$ case.
If $\alpha=0$, then Eq. \eqref{sineno001} gives $\beta=0$. So we assume $\alpha\beta\gamma\neq0$. Then Eq. \eqref{sineno003} implies
\begin{align*}
&\frac{27{\alpha}^2}{(3\lambda+\omega)^2}(\lambda+1) \bigg\{3(\omega-2){\lambda}^4 +2(5\omega-3)\omega{\lambda}^3
\nonumber
\\
&\qquad \qquad \qquad \qquad +(3{\omega}^2-8\omega+3)\omega{\lambda}^2
+2(5-3\omega){\omega}^2\lambda+3{\omega}^3(1-2\omega) \bigg\}=0.
\end{align*}
One solution of this equation is $\lambda_1=-1$. The remaining solutions $\lambda_2,\ \lambda_3,\ \lambda_4,\ \lambda_5$ cannot be obtained explicitly. So we do not obtain stationary measures in this case.
\item[(b)] $\alpha=\beta$ case.
If $\alpha=0$, then Eq. \eqref{sineno001} gives $\beta=0$. So we assume $\alpha\beta\gamma\neq0$. Then Eq. \eqref{sineno001} implies $\lambda=\omega$. Eq. \eqref{sineno003} gives
\begin{align*}
27\lambda{{\alpha}^2}(\lambda+1){(\lambda-1)^2}=0.
\end{align*}
Then we have $\lambda=-1$, since $\omega \neq 1$.
\end{description}
\item[(ii)] $\beta=\dfrac{\alpha+\gamma}{2}$ case.
\begin{description}
\item[(a)] $\beta=0$ case.
Combining Eq. \eqref{sineno001} with $\beta=(\alpha+\gamma)/2$ gives $\alpha=-\gamma$. Then from Eq. \eqref{sineno003} and $\alpha=-\gamma$, we have
\begin{align*}
9 {{\alpha}^{2}} (\lambda+1)
\left(\lambda-\frac {\omega+\sqrt {6\omega { { (\omega-1) }^{2} } } }
{3\omega-2} \right) \left( \lambda-\frac {\omega-\sqrt {6 \omega { { (\omega-1) }^{2} } } } {3\omega-2} \right)=0.
\end{align*}
Remark that Eq. \eqref{sineno003} is equivalent to Eq. \eqref{sineno004}, since $\Delta(+)=-\Delta(-)=3\alpha.$ Thus, $\alpha\neq0$ implies
\begin{eqnarray}
\lambda=-1,\ \ \frac {\omega\pm\sqrt {6 \omega { { (\omega-1) }^{2} } } }
{3\omega-2}.
\end{eqnarray}
\item[(b)] $\beta \neq 0$ case.
Combining Eq. \eqref{sineno001} with $\beta=(\alpha+\gamma)/2$ gives $\lambda=\omega$. Then from Eq. \eqref{sineno003} and $\lambda=\omega$, we have
\begin{align*}
27 \alpha^2 \lambda (\lambda+1) {(\lambda-1)^2} =0.
\end{align*}
Similarly, combining Eq. \eqref{sineno004} with $\lambda=\omega$ gives
\begin{align*}
27 {\gamma^2} \lambda (\lambda+1) {(\lambda-1)^2} =0.
\end{align*}
Then $\lambda=-1$ follows from the above two equations.
\end{description}
\end{description}
We note that $\alpha=\gamma$ and $\beta=(\alpha+\gamma)/2$ together give $\alpha=\beta=\gamma$. This is case (i-b).
\section{Stationary Measures \label{sm}}
First we obtain stationary measures for (ii-a) case with respect to the following $\lambda$:
\begin{align*}
\lambda(\pm)=\frac {\omega\pm\sqrt {6 \omega { { (\omega-1) }^{2} } } }
{3\omega-2}.
\end{align*}
Then we see that for $j=L,\ O,\ R$,
\begin{align*}
\theta^{j}_{s}(\pm) =
&
\frac{-(3\omega+2) + 2 \sqrt{6} e^{\frac{\theta}{2}i } } {(2-3\omega)
(1+2 \sqrt{3(1-\cos\theta)} i)}, \quad
\frac{-(3\omega+2) + 2 \sqrt{6} e^{\frac{\theta}{2}i } } {(2-3\omega)
(1-2 \sqrt{3(1-\cos\theta)} i)},
\\
&
\frac{-(3\omega+2) - 2 \sqrt{6} e^{\frac{\theta}{2}i } } {(2-3\omega)
(1+2 \sqrt{3(1-\cos\theta)} i)}, \quad
\frac{-(3\omega+2) - 2 \sqrt{6} e^{\frac{\theta}{2}i } } {(2-3\omega)
(1-2 \sqrt{3(1-\cos\theta)} i)}.
\end{align*}
Note that $\theta^{j}_{s}(\pm)\ (j=L,\ O,\ R)$ do not depend on $j$ or $\pm$, so we put $\theta_s = \theta^{j}_{s}(\pm)$. Then we get
\begin{align*}
|\theta_s|^2=\frac{37+12\cos\theta \pm 20\sqrt{6} \cos (\theta/2)}
{(13-12\cos\theta)^2}.
\end{align*}
We remark that if $0\leq\theta\leq4\pi$ satisfies $\arccos (1/3) =1.2309 \ldots \le \theta \le 4\pi - \arccos (1/3) = 11.3354 \ldots$, then
\begin{align*}
|\theta_s|^2=\frac{37+12\cos\theta + 20\sqrt{6} \cos (\theta/2)}
{(13-12\cos\theta)^2} \le 1.
\end{align*}
Similarly, if $0\leq\theta\leq4\pi$ satisfies $0 \le \theta \le 2 \pi - \arccos (1/3) = 5.0522 \ldots$ or $2 \pi + \arccos (1/3) = 7.5141 \ldots \le \theta \le 4 \pi$, then
\begin{align*}
|\theta_s|^2=\frac{37+12\cos\theta - 20\sqrt{6} \cos (\theta/2)}
{(13-12\cos\theta)^2} \le 1.
\end{align*}
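These two bounds can also be checked numerically. The following sketch (our own sanity check, not part of the derivation; the grid size and tolerance are arbitrary) samples each $\theta$-range:

```python
import numpy as np

# |theta_s|^2 as a function of theta, for the '+' (sign=+1) and '-' (sign=-1) branches
def mod_sq(theta, sign):
    return (37 + 12*np.cos(theta) + sign*20*np.sqrt(6)*np.cos(theta/2)) \
           / (13 - 12*np.cos(theta))**2

t0 = np.arccos(1/3)

# '+' branch on arccos(1/3) <= theta <= 4*pi - arccos(1/3)
theta = np.linspace(t0, 4*np.pi - t0, 10001)
assert np.all(mod_sq(theta, +1) <= 1 + 1e-9)

# '-' branch on [0, 2*pi - arccos(1/3)] and [2*pi + arccos(1/3), 4*pi]
theta = np.concatenate([np.linspace(0, 2*np.pi - t0, 10001),
                        np.linspace(2*np.pi + t0, 4*np.pi, 10001)])
assert np.all(mod_sq(theta, -1) <= 1 + 1e-9)
print("both bounds hold on the sampled grids")
```

Equality holds at the endpoints: at $\theta=\arccos(1/3)$ one has $37+12\cos\theta=41$ and $20\sqrt{6}\cos(\theta/2)=40$, so $|\theta_s|^2=81/81=1$; hence the small slack in the assertions.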
From Lemma \ref{lem002} applied to this case, the eigenvectors corresponding to $\lambda(\pm)$ are as follows.
\begin{align*}
\Psi^{L}(x)&=\alpha\times
\begin{cases}
(-\theta_s)^x & (x\geq1), \\
\frac{(3\lambda+1)\omega -2 (\lambda+1)}{\lambda-1} (-\theta_s)^{-x} & (x\leq-1),
\end{cases}\\\nonumber\\
\Psi^{O}(x)&=\alpha\times
\begin{cases}
-2 \frac{\omega-1}{\lambda-1} (-\theta_s)^x & (x\geq1),\\
2 \frac{\omega-1}{\lambda-1} (-\theta_s)^{-x} & (x\leq-1),
\end{cases}\\\nonumber\\
\Psi^{R}(x)&= \alpha\times
\begin{cases}
-\frac{(3\lambda+1)\omega -2 (\lambda+1)}{\lambda-1} (-\theta_s)^{x} & (x\geq1), \\
-(-\theta_s)^{-x} & (x\leq-1).
\end{cases}
\end{align*}
Here we note that
\begin{align*}
\frac{(3\lambda(\pm)+1)\omega-2 (\lambda(\pm)+1)}{\lambda(\pm)-1}=
&
\frac{(3 e^{i\theta} -2) (\sqrt{6} e^{i \frac{\theta}{2}} +2)}{\sqrt{6} e^{i \frac{\theta}{2}} -2}, \quad
\frac{(3 e^{i\theta} -2) (\sqrt{6} e^{i \frac{\theta}{2}} +2)}{-\sqrt{6} e^{i \frac{\theta}{2}} -2},
\\
&
\frac{(3 e^{i\theta} -2) (-\sqrt{6} e^{i \frac{\theta}{2}} +2)}{\sqrt{6} e^{i \frac{\theta}{2}} -2}, \quad
\frac{(3 e^{i\theta} -2) (-\sqrt{6} e^{i \frac{\theta}{2}} +2)}{-\sqrt{6} e^{i \frac{\theta}{2}}-2},
\end{align*}
and
\begin{align*}
\frac{\omega-1}{\lambda(\pm)-1}=
\frac{3 e^{i\theta} -2}{-2 + \sqrt{6} e^{i \frac{\theta}{2}}}, \quad
\frac{3 e^{i\theta} -2}{-2 - \sqrt{6} e^{i \frac{\theta}{2}}}.
\end{align*}
Therefore we obtain the following main result:
\begin{thm}
We consider the three-state Grover walk with one defect at the origin on $\mathbb{Z}$. Here $\omega =e^{i\theta} ( \theta \in (0,2\pi))$ and $\alpha=-\gamma, \> \beta =0$. Then a solution of $U^{(s)} \Psi =\lambda \Psi$ with
\begin{align*}
\lambda=\frac {\omega\pm\sqrt {6 \omega { { (\omega-1) }^{2} } } }{3\omega-2}
\end{align*}
is given by
\begin{eqnarray*}
\Psi (x)= \alpha (-\theta_s )^{|x|} \times
\begin{cases}
\begin{bmatrix}
1
\\
-\dfrac{2(\omega-1)}{\lambda-1}
\\
-\dfrac{(3\lambda+1)\omega-2(\lambda+1)}{\lambda-1}
\end{bmatrix}
& (x\geq1),
\\
\\
\begin{bmatrix}
1\\
0\\
-1
\end{bmatrix}
& (x=0),
\\
\\
\begin{bmatrix}
\dfrac{(3\lambda+1)\omega-2(\lambda+1)}{\lambda-1}
\\
\dfrac{2(\omega-1)}{\lambda-1}
\\
-1
\end{bmatrix}
& (x\leq-1).
\end{cases}
\end{eqnarray*}
Moreover the stationary measure of the walk is
\begin{eqnarray*}
\mu(x)= |\alpha|^2 \times
\begin{cases}
|\theta_s|^{2x} \bigg\{1+(13-12\cos\theta) \bigg(\dfrac{2}{m_1} +
\dfrac{m_2}{m_3} \bigg) \bigg\} & (x\neq0),
\\
\\
2 & (x=0).
\end{cases}
\end{eqnarray*}
Here
\begin{align*}
|\theta_s|^2
=\frac{37+12\cos\theta \pm 20\sqrt{6} \cos (\theta/2)}{(13-12\cos\theta)^2},
\qquad
m_k
=5+(-1)^{n_k} \ 2 \sqrt{6} \cos (\theta/2) \quad (k=1,2,3),
\end{align*}
with $n_k \in \{0,\ 1\}$.
\label{thm001}
\end{thm}
As a corollary, we give a relation between stationary and limit measures of the walk. If $\omega=1$, then our model becomes the usual space-homogeneous three-state Grover walk on $\mathbb{Z}$. As for the limit measure for the initial state $\Psi_0 (0)={}^T{[\tilde{\alpha},\ \tilde{\beta},\ \tilde{\gamma}]}$ and $\Psi_0 (x)={}^T{[0, 0, 0]} \> (x \neq 0)$, the following result is known (see Konno [10], for example).
\begin{eqnarray*}
\mu_{\infty} (x)=
\begin{cases}
\{(3+\sqrt{6}) |2\tilde{\alpha}+\tilde{\beta}|^2 +(3-\sqrt{6}) | \tilde{\beta}+2 \tilde{\gamma}|^2 \\
\hspace{30mm} -2 | \tilde{\alpha}+ \tilde{\beta}+ \tilde{\gamma}|^2 \}
\times (49-20 \sqrt{6})^x & (x\geq1),
\\
\\
\frac{5-2\sqrt{6}}{2} (|2\tilde{\alpha}+ \tilde{\beta}|^2+| \tilde{\beta}+2 \tilde{\gamma}|^2) & (x=0),
\\
\\
\{(3-\sqrt{6}) |2\tilde{\alpha}+\tilde{\beta}|^2 +(3+\sqrt{6}) | \tilde{\beta}+2 \tilde{\gamma}|^2 \\
\hspace{30mm} -2 | \tilde{\alpha}+ \tilde{\beta}+ \tilde{\gamma}|^2 \}
\times (49-20 \sqrt{6})^{-x} & (x\leq-1).
\end{cases}
\end{eqnarray*}
Combining this with the corresponding (ii-a) case ($\tilde{\alpha}=-\tilde{\gamma}, \> \tilde{\beta}=0$) gives
\begin{eqnarray}
\mu_{\infty} (x)=
\begin{cases}
24{|\tilde{\alpha}|}^2 (49-20 \sqrt{6})^x & (x \not= 0),
\\
\\
4(5-2\sqrt{6}) |\tilde{\alpha}|^2 & (x=0).
\end{cases}
\label{enoshima001}
\end{eqnarray}
On the other hand, Theorem \ref{thm001} with $\theta=0$ and $-$ part ($n_1=1,\ n_2=0,\ n_3=1$) implies
\begin{eqnarray}
\mu(x)=
\begin{cases}
12 {|\alpha|^{2}} (5+2\sqrt{6}) {(49-20\sqrt{6})}^{|x|} & (x\neq0),
\\
\\
2{|\alpha|^2} & (x=0).
\end{cases}
\label{enoshima002}
\end{eqnarray}
If we put $\alpha= \pm (2-\sqrt{6}) \tilde{\alpha}$, then the stationary measure given by Eq. \eqref{enoshima002} coincides with the limit measure given by Eq. \eqref{enoshima001}.
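This identification of constants can be verified directly: with $\alpha=\pm(2-\sqrt{6})\tilde{\alpha}$ one has $|\alpha|^2=(2-\sqrt{6})^2|\tilde{\alpha}|^2$, and a short check (our own) confirms that the prefactors of Eqs. \eqref{enoshima002} and \eqref{enoshima001} then agree:

```python
from math import sqrt

s6 = sqrt(6)
c = (2 - s6)**2                 # |alpha|^2 / |alpha_tilde|^2

# stationary measure (theta = 0, '-' part): prefactors per unit |alpha_tilde|^2
mu_stat_x    = 12 * c * (5 + 2*s6)   # coefficient of (49 - 20 sqrt 6)^{|x|}, x != 0
mu_stat_zero = 2 * c                 # value at x = 0

assert abs(mu_stat_x - 24) < 1e-12               # limit measure: 24 |alpha_tilde|^2
assert abs(mu_stat_zero - 4*(5 - 2*s6)) < 1e-12  # limit measure: 4(5 - 2 sqrt 6)|alpha_tilde|^2
print("prefactors agree")
```

Indeed, $(2-\sqrt{6})^2=10-4\sqrt{6}$ and $(10-4\sqrt{6})(5+2\sqrt{6})=2$, so $12c(5+2\sqrt{6})=24$ exactly.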
Finally, we give stationary measures for $\lambda=-1$. Note that $\theta_s=-1$ in this case.
\begin{description}
\item[(i)] $\alpha=\gamma$ case.
\begin{eqnarray*}
\Psi(x)=\begin{cases}
\begin{bmatrix}
\alpha\\
\frac{\alpha}{\omega-3} (3 {\omega}^2-2\omega+3)\\
\frac{\alpha\omega}{\omega-3} (1-3\omega)
\end{bmatrix} & (x\geq1),
\\\\
\begin{bmatrix}
\alpha\\
\frac{4\omega\alpha}{\omega-3}\\
\alpha
\end{bmatrix} & (x=0),
\\\\
\begin{bmatrix}
\frac{\alpha\omega}{\omega-3} (1-3\omega)\\
\frac{\alpha}{\omega-3} (3 {\omega}^2-2\omega+3)\\
\alpha
\end{bmatrix} & (x\leq-1).
\end{cases}
\end{eqnarray*}
Therefore the corresponding stationary measure is given by
\begin{eqnarray*}
\mu(x)= \frac{6 {|\alpha|}^2}{5-3 \cos\theta}\times
\begin{cases}
3 \cos^2 \theta -3\cos\theta+2 & (x\neq0),
\\
\\
3 - 2\cos\theta & (x=0).
\end{cases}
\end{eqnarray*}
\item[(ii)] $\beta=\frac{\alpha+\gamma}{2}$ case.
\begin{description}
\item[(a)] $\beta=0$ case.
\begin{eqnarray*}
\Psi(x)=\begin{cases}
\begin{bmatrix}
\alpha\\
\alpha(\omega-1)\\
-\omega\alpha
\end{bmatrix} & (x\geq1),\\\\
\begin{bmatrix}
\alpha\\
0\\
-\alpha
\end{bmatrix} & (x=0),\\\\
\begin{bmatrix}
\omega\alpha\\
-\alpha(\omega-1)\\
-\alpha
\end{bmatrix} & (x\leq-1).
\end{cases}
\end{eqnarray*}
Therefore the corresponding stationary measure is given by
\begin{eqnarray*}
\mu(x)=2 {|\alpha|^2}\times
\begin{cases}
(2-\cos\theta) & (x\neq0), \\\\
1 & (x=0).
\end{cases}
\end{eqnarray*}
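The closed forms for $x\neq0$ in cases (i) and (ii-a) can be recovered numerically from the displayed eigenvectors, assuming the stationary measure is the componentwise sum $\mu(x)=\sum_j|\Psi^j(x)|^2$ (a sketch of our own, with $\alpha=1$):

```python
import numpy as np

rng = np.random.default_rng(1)
for theta in rng.uniform(0.0, 2*np.pi, 25):
    w = np.exp(1j*theta)                      # omega = e^{i theta}
    c = np.cos(theta)
    # case (i), x >= 1: Psi = (1, (3w^2 - 2w + 3)/(w - 3), w(1 - 3w)/(w - 3)) up to alpha
    comps = np.array([1.0, (3*w**2 - 2*w + 3)/(w - 3), w*(1 - 3*w)/(w - 3)])
    mu_i = np.sum(np.abs(comps)**2)
    assert abs(mu_i - 6*(3*c**2 - 3*c + 2)/(5 - 3*c)) < 1e-10
    # case (ii-a), x >= 1: Psi = (1, w - 1, -w) up to alpha
    comps = np.array([1.0, w - 1, -w])
    mu_iia = np.sum(np.abs(comps)**2)
    assert abs(mu_iia - 2*(2 - c)) < 1e-10
print("componentwise sums match the closed forms for x != 0")
```

For case (ii-a) the identity is elementary: $1+|\omega-1|^2+|\omega|^2 = 1+(2-2\cos\theta)+1 = 2(2-\cos\theta)$.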
\item[(b)] $\beta\neq0$ case.
\begin{eqnarray*}
\Psi(x)=\begin{cases}
\begin{bmatrix}
\alpha\\
-2\alpha\\
\alpha
\end{bmatrix} & (x\geq1),\\\\
\begin{bmatrix}
\alpha\\
\frac{\alpha+\gamma}{2}\\
\gamma
\end{bmatrix} & (x=0),\\\\
\begin{bmatrix}
\gamma\\
-2\gamma\\
\gamma
\end{bmatrix} & (x\leq-1).
\end{cases}
\end{eqnarray*}
Therefore the corresponding stationary measure is given by
\begin{eqnarray*}
\mu(x)=\begin{cases}
6{|\alpha|^2} & (x\geq1),\\\\
\dfrac{5}{4} ({|\alpha|}^2+{|\gamma|^2})+\dfrac{1}{4}
(\alpha \bar{\gamma}+\bar{\alpha} \gamma) & (x=0),\\\\
6{|\gamma|^2} & (x\leq-1).
\end{cases}
\end{eqnarray*}
\end{description}
\end{description}
\section{Summary \label{sum}}
We obtained stationary measures for the three-state Grover walk with one defect at the origin on $\mathbb{Z}$ by solving the corresponding eigenvalue problem. Moreover, we found a relation between the stationary and limit measures of the walk. As future work, it would be interesting to investigate the relation between stationary measures, (time-averaged) limit measures, and rescaled weak limit measures [7,8] for QWs in more general settings.
\par
\
\par\noindent
{\bf Acknowledgment.} This work is partially supported by the Grant-in-Aid for Scientific Research (Challenging Exploratory Research) of Japan Society for the Promotion of Science (Grant No.15K13443).
\end{document} | math | 28,729 |
\begin{document}
\title{Low dimensional nilpotent $n$-Lie algebras}
\author[M. Eshrati]{Mehdi Eshrati$^1$}
\author[F. Saeedi]{Farshid Saeedi$^2$}
\author[H. Darabi]{Hamid Darabi$^3$}
\date{}
\keywords{Nilpotent $n$-Lie algebra, Classification, Low dimensions}
\subjclass[2010]{Primary 17B05, 17B30; Secondary 17D99.}
\address{$^{1}$Farhangiyan University, Shahid Beheshti Campus, Mashhad, Iran.}
\address{$^{2}$Department of Mathematics, Mashhad Branch, Islamic Azad University, Mashhad, Iran.}
\address{$^{3}$ Department of Mathematics, Esfarayen University of Technology, Esfarayen, Iran.}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\begin{abstract}
In this paper, nilpotent $n$-Lie algebras of dimension $n+3$ as well as nilpotent $n$-Lie algebras of class $2$ and dimension $n+4$ are classified.
\end{abstract}
\maketitle
\section{Introduction and preliminaries}
The classification of low dimensional Lie algebras is an early problem in the study of Lie algebras. Such classifications can be found in standard references on Lie algebras. A first step toward classifying $6$-dimensional Lie algebras was taken by Umlauf \cite{ku} in 1891. In 1958, Morozov \cite{vm} obtained the classification of nilpotent Lie algebras of dimension less than $6$ over arbitrary fields and those of dimension $6$ over a field of characteristic zero. The classification of $6$-dimensional nilpotent Lie algebras was completed by Cicalo et al. \cite{sc.wg.cs} in 2012. The $7$-dimensional nilpotent Lie algebras over the fields of real and complex numbers are classified in \cite{cs}. Also, the $8$-dimensional nilpotent Lie algebras of class $2$ with a $4$-dimensional center, those with a $2$-dimensional center, and those with a $4$-dimensional center over the field of complex numbers are classified in \cite{sx.br}, \cite{br.lz} and \cite{yw.hc.yn}, respectively. We also note that a classification of Lie algebras with respect to the dimension of their Schur multiplier is given in \cite{lb}.
In 1985, Fillipov \cite{vf} defined an \textit{$n$-Lie algebra} as a vector space equipped with an antisymmetric $n$-linear bracket satisfying the following Jacobi identity:
\[[[x_1,x_2,\ldots,x_n],y_2,\ldots,y_n]=\sum_{i=1}^n[x_1,\ldots,x_{i-1},[x_i,y_2,\ldots,y_n],x_{i+1},\ldots,x_n]\]
for all $x_i,y_j\in L$, $1\leq i\leq n$ and $2\leq j\leq n$. He also classified the $n$-Lie algebras of dimension $n+1$ over an algebraically closed field of characteristic zero. In 2008, Bai et al. \cite{rb.xw.wx.ha} classified those $n$-Lie algebras of dimension $n+1$ whose underlying field has characteristic $2$. Also, in 2011, Bai et al. \cite{rb.gs.yz} classified the $n$-Lie algebras of dimension $n+2$ over algebraically closed fields of characteristic zero.
Let $A_1,A_2,\ldots,A_n$ be subalgebras of an $n$-Lie algebra $A$. Then the subalgebra of $A$ generated by all commutators $[x_1,\ldots,x_n]$, in which $x_i\in A_i$, is denoted by $[A_1,\ldots,A_n]$. The subalgebra $A^2=[A,\ldots,A]$ is called the \textit{derived} $n$-Lie subalgebra of $A$. If $A^2=0$, we call $A$ an abelian $n$-Lie algebra. The \textit{center} of the $n$-Lie algebra $A$ is defined as
\[Z(A)=\{x\in A:[x,A,\ldots,A]=0\}.\]
Setting $Z_0(A)=0$, the $i$th center of $A$ is defined inductively by $Z_i(A)/Z_{i-1}(A)=Z(A/Z_{i-1}(A))$ for all $i\geq1$. In 1987, Kasymov \cite{sk} defined the notion of nilpotency of $n$-Lie algebras as follows:
An $n$-Lie algebra $A$ is called nilpotent if $A^s=0$ for some non-negative integer $s$, where $A^{i+1}$ is defined inductively by $A^1=A$ and $A^{i+1}=[A^i,A,\ldots,A]$. The $n$-Lie algebra $A$ is nilpotent of class $c$ provided that $A^{c+1}=0$ and $A^c\neq0$. Results similar to those known for nilpotent groups are obtained for nilpotent $n$-Lie algebras in \cite{mw}.
The goal of this paper is to classify $(n+3)$-dimensional nilpotent $n$-Lie algebras as well as $(n+4)$-dimensional nilpotent $n$-Lie algebras of class $2$ over an arbitrary field. The paper is organized as follows: Section 2 presents the preliminary results that will be used in the next sections. In Section 3, we give a classification of all $(n+3)$-dimensional nilpotent $n$-Lie algebras over an arbitrary field. In Section 4, we classify all $(n+4)$-dimensional nilpotent $n$-Lie algebras of class $2$ over an arbitrary field.
\section{Known results}
In this section, we shall present some known results, without proofs, that will be used later. Recall that an $n$-Lie algebra $A$ is called \textit{Special Heisenberg} if $A^2=Z(A)$ and $\dim A^2=1$.
\begin{theorem}[\cite{me.fs.hd}]\label{special Heisenberg}
Every Special Heisenberg $n$-Lie algebra is of dimension $mn+1$ for some positive integer $m$ and it has a presentation as:
\[H(n,m)=\gen{x,x_1,\ldots,x_{mn}:[x_{n(i-1)+1},x_{n(i-1)+2},\ldots,x_{ni}]=x,i=1,\ldots,m}.\]
\end{theorem}
\begin{theorem}[\cite{hd.fs.me}]\label{dimA^2=1}
Let $A$ be a nilpotent $n$-Lie algebra of dimension $d$ satisfying $\dim A^2=1$. Then $A\cong H(n,m)\oplus F(d-mn-1)$ for some $m\geq1$.
\end{theorem}
\begin{theorem}[\cite{hd.fs.me}]\label{A^2=Z(A)}
Let $A$ be a nilpotent $n$-Lie algebra of dimension $d=n+k$ with $3 \leq k\leq n+1$, such that $A^2=Z(A)$ and $\dim A^2=2$. Then
\[ A\cong \gen{e_1,\ldots,e_{n+k}:[e_{k-1},\ldots,e_{n+k-2}]=e_{n+k},[e_1,\ldots,e_n]=e_{n+k-1}}.\]
This $n$-Lie algebra is denoted by $A_{n,k}$.
\end{theorem}
\begin{theorem}[\cite{hd.fs.me}]\label{d<=n+2}
Let $A$ be a non-abelian nilpotent $n$-Lie algebra of dimension $d\leq n+2$. Then
\[A\cong H(n,1),\ H(n,1)\oplus F(1)\ \text{or}\ A_{n+2,3},\]
in which
\[A_{n+2,3}=\gen{e_1,\ldots,e_{n+2}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2}}.\]
\end{theorem}
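For instance, for $n=2$ the algebra $A_{4,3}$ is an ordinary Lie algebra, and the Jacobi identity can be checked mechanically from its structure constants. The following sketch (our own illustration; the numerical representation is not from the paper) does this:

```python
import itertools
import numpy as np

# Structure constants of A_{4,3} for n = 2 (ordinary Lie algebra case):
# [e1, e2] = e3, [e2, e3] = e4, all other brackets of basis vectors zero.
dim = 4
C = np.zeros((dim, dim, dim))
C[0, 1, 2] = 1.0; C[1, 0, 2] = -1.0     # [e1, e2] = e3
C[1, 2, 3] = 1.0; C[2, 1, 3] = -1.0     # [e2, e3] = e4

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

E = np.eye(dim)
# For n = 2 the Fillipov identity reads [[x, y], z] = [[x, z], y] + [x, [y, z]]
for i, j, k in itertools.product(range(dim), repeat=3):
    lhs = bracket(bracket(E[i], E[j]), E[k])
    rhs = bracket(bracket(E[i], E[k]), E[j]) + bracket(E[i], bracket(E[j], E[k]))
    assert np.allclose(lhs, rhs)
print("A_{4,3} satisfies the Jacobi identity")
```

The same structure-constant check applies to any of the presentations below, with the tensor $C$ replaced accordingly.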
\begin{theorem}[\cite{sc.wg.cs}]
\
\begin{itemize}
\item[(1)]
Over a field $F$ of characteristic different from $2$, the list of isomorphism types of $6$-dimensional nilpotent Lie algebras is the following: $L_{5,k}\oplus F$ with $k\in\{1,\ldots,9\}$; $L_{6,k}$ with $k\in\{10,\ldots,18,20,23,25,\ldots,28\}$; $L_{6,k}(\varepsilon_1)$ with $k\in\{19,21\}$ and $\varepsilon_1\in F^*/(\overset{*}{\mathop \sim})$; $L_{6,k}(\varepsilon_2)$ with $k\in\{22,24\}$ and $\varepsilon_2\in F/(\overset{*}{\mathop \sim})$.
\item[(2)]
Over a field $F$ of characteristic $2$, the isomorphism types of $6$-dimensional nilpotent Lie algebras are:
$L_{5,k}\oplus F$ with $k\in\{1,\ldots,9\}$; $L_{6,k}$ with $k\in\{10,\ldots,18,20,23,25,\ldots,28\}$; $L_{6,k}(\varepsilon_1)$ with $k\in\{19,21\}$ and $\varepsilon_1\in F^*/(\overset{*}{\mathop \sim})$; $L_{6,k}(\varepsilon_2)$ with $k\in\{22,24\}$ and $\varepsilon_2\in F/(\overset{*+}{\mathop \sim})$; $L_{6,k}^{(2)}$ with $k\in\{1,2,5,6\}$; $L_{6,k}^{(2)}(\varepsilon_3)$ with $k\in\{3,4\}$ and $\varepsilon_3\in F^*/(\overset{*+}{\mathop \sim})$; $L_{6,k}^{(2)}(\varepsilon_4)$ with $k\in\{7,8\}$ and $\varepsilon_4\in\{0,\omega\}$.
\end{itemize}
\end{theorem}
\section{Classification of $(n+3)$-dimensional nilpotent $n$-Lie algebras}
By Theorem \ref{d<=n+2}, the numbers of nilpotent $n$-Lie algebras of dimensions $n$, $n+1$ and $n+2$ are one, two and three, respectively, up to isomorphism. In this section, we classify all $(n+3)$-dimensional nilpotent $n$-Lie algebras over an arbitrary field.
Let $A$ be a nilpotent $n$-Lie algebra of dimension $n+3$ with basis $\{e_1,\ldots,e_{n+3}\}$. If $e_{n+3}$ is a central element of $A$, then $A/\gen{e_{n+3}}$ is a nilpotent $n$-Lie algebra of dimension $n+2$. We distinguish cases according to whether $A/\gen{e_{n+3}}$ is abelian.
If $A/\gen{e_{n+3}}$ is abelian, then brackets in $A$ can be written as
\[[e_1,\ldots,\hat{e}_i,\ldots,\hat{e}_j,\ldots,e_{n+2}]=\theta_{ij}e_{n+3},\quad1\leq i<j\leq n+2.\]
If $\theta_{ij}=0$ for all $1\leq i<j\leq n+2$, then $A$ is an abelian $n$-Lie algebra, which we denote by $A_{n+3,1}$. If the $\theta_{ij}$ are not all zero, then $A$ is non-abelian with $\dim A^2=1$. Hence, by Theorem \ref{dimA^2=1}, $A\cong H(n,m)\oplus F(n+3-nm-1)$.
In case $n>2$, we must have $m=1$ and hence
\[A=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+3}}\cong H(n,1)\oplus F(2),\]
which we denote by $A_{n+3,2}$.
Also, in case $n=2$, we have two $n$-Lie algebras for $A$, namely $H(2,2)$ and $H(2,1)\oplus F(2)$.
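The Special Heisenberg algebra $H(2,2)$ arising here can itself be checked directly from structure constants: one verifies $\dim A^2=\dim Z(A)=1$ with $A^2=Z(A)=\gen{x}$. A small sketch of our own:

```python
import numpy as np

# H(2,2): basis e1, e2, e3, e4, x = e5 with [e1, e2] = [e3, e4] = e5
dim = 5
C = np.zeros((dim, dim, dim))
for i, j in [(0, 1), (2, 3)]:
    C[i, j, 4] = 1.0
    C[j, i, 4] = -1.0

# dim A^2 = rank of the span of all brackets [ei, ej]
assert np.linalg.matrix_rank(C.reshape(dim*dim, dim)) == 1

# v is central iff v_i C[i, j, k] = 0 for all j, k; the nullity gives dim Z(A)
B = C.reshape(dim, dim*dim)
assert dim - np.linalg.matrix_rank(B) == 1
assert not B[4].any()          # e5 itself is central, so A^2 = Z(A) = <e5>
print("H(2,2): dim A^2 = dim Z(A) = 1 and A^2 = Z(A)")
```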
Now, assume that $A/\gen{e_{n+3}}$ is a non-abelian $n$-Lie algebra. By Theorem \ref{d<=n+2}, we have two possibilities for $A/\gen{e_{n+3}}$:
Case 1: $A/\gen{e_{n+3}}\cong\gen{\overline{e}_1,\ldots,\overline{e}_{n+2}:[\overline{e}_1,\ldots,\overline{e}_n]=\overline{e}_{n+1}}$. Then the brackets in $A$ can be written as
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1}+\alpha e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,\hat{e}_j,\ldots,e_n,e_{n+1},e_{n+2}]=\theta_{ij}e_{n+3},&1\leq i<j\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_i e_{n+3},&1\leq i\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\gamma_ie_{n+3},&1\leq i\leq n.
\end{cases}\]
After a suitable change of basis, one can assume that $\alpha=0$, and the Jacobi identities give \[\theta_{ij}=0,\quad1\leq i<j\leq n.\]
Hence
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+3},&1\leq i\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\gamma_ie_{n+3},&1\leq i\leq n.
\end{cases}\]
The above brackets show that the dimension of the center of $A$ is at most $3$. We distinguish cases according to the dimension of the center of $A$.
(i) $\dim Z(A)=1$. Then we must have $\beta_i,\gamma_j\neq0$ for some $i$ and $j$. Without loss of generality, assume that $\beta_1,\gamma_1\neq0$. Applying the following transformations
\[e'_1=e_1+\sum_{i=2}^n(-1)^{i-1}\frac{\beta_i}{\beta_1}e_i,\quad e'_j=e_j,\quad2\leq j\leq n+2,\quad e'_{n+3}=\beta_1e_{n+3}\]
we obtain
\[\begin{cases}
[e'_1,\ldots,e'_n]=e'_{n+1},&\\
[e'_2,\ldots,e'_{n+1}]=e'_{n+3},&\\
[e'_2,\ldots,e'_n,e'_{n+2}]=\frac{\gamma_1}{\beta_1}e'_{n+3},&\\
[e'_1,\ldots,\hat{e}'_i,\ldots,e'_n,e'_{n+2}]=\frac{1}{\beta_1}\parn{\gamma_i-\frac{\beta_i\gamma_1}{\beta_1}}e'_{n+3},&2\leq i\leq n.
\end{cases}\]
Next, by applying the transformations
\begin{align*}
e''_1&=e'_1+\sum_{i=2}^n(-1)^{i-1}\parn{\frac{\gamma_i}{\gamma_1}-\frac{\beta_i}{\beta_1}}e'_i,\\
e''_j&=e'_j,\quad2\leq j\leq n+1,\quad e''_{n+2}=\frac{\beta_1}{\gamma_1}e'_{n+2},\quad e''_{n+3}=e'_{n+3}
\end{align*}
we obtain
\[[e''_1,\ldots,e''_n]=e''_{n+1},\quad[e''_2,\ldots,e''_{n+1}]=e''_{n+3},\quad[e''_2,\ldots,e''_n,e''_{n+2}]=e''_{n+3}.\]
Hence, we conclude that
\[A=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+3},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}},\]
which we denote by $A_{n+3,3}$.
(ii) $\dim Z(A)=2$. We have two possibilities:
(ii-a) If $Z(A)=\gen{e_{n+2},e_{n+3}}$, then at least one of the $\beta_i$ is non-zero while all $\gamma_i$ are zero. Without loss of generality assume that $\beta_1\neq0$. Then
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1},&\\
[e_2,\ldots,e_{n+1}]=\beta_1e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+3},&2\leq i\leq n.
\end{cases}\]
Applying the following transformations
\[e'_1=e_1+\sum_{i=2}^n(-1)^{i-1}\frac{\beta_i}{\beta_1}e_i,\quad e'_j=e_j,\quad2\leq j\leq n+2,\quad e'_{n+3}=\beta_1e_{n+3}\]
we obtain
\[\begin{cases}
[e'_1,\ldots,e'_n]=e'_{n+1},&\\
[e'_2,\ldots,e'_{n+1}]=e'_{n+3}.
\end{cases}\]
Hence
\[A=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+3}},\]
which is isomorphic to $A_{n+2,3}\oplus F(1)$ and is denoted by $A_{n+3,4}$.
(ii-b) If $Z(A)=\gen{e_{n+1},e_{n+3}}$, then $A^2=Z(A)$ and Theorem \ref{A^2=Z(A)} yields
\[A=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}},\]
which is denoted by $A_{n+3,5}$.
(iii) $\dim Z(A)=3$. Then $Z(A)=\gen{e_{n+1},e_{n+2},e_{n+3}}$ and so that $\beta_i=\gamma_i=0$ for all $1\leq i\leq n$. The only non-zero bracket is $[e_1,\ldots,e_n]=e_{n+1}$, which gives rise to the algebra $H(n,1)\oplus F(2)=A_{n+3,2}$.
Case 2: $A/\gen{e_{n+3}}\cong\gen{\overline{e}_1,\ldots,\overline{e}_{n+2}:[\overline{e}_1,\ldots,\overline{e}_n]=\overline{e}_{n+1},[\overline{e}_2,\ldots,\overline{e}_{n+1}]=\overline{e}_{n+2}}$. Then the brackets in $A$ are as follows:
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1}+\alpha e_{n+3},&\\
[e_2,\ldots,e_{n+1}]=e_{n+2}+\beta e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,\hat{e}_j,\ldots,e_n,e_{n+1},e_{n+2}]=\theta_{ij}e_{n+3},&1\leq i<j\leq n,\\
[e_2,\ldots,e_n,e_{n+2}]=\gammamma e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\alpha_ie_{n+3},&2\leq i\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+3},&2\leq i\leq n.
\end{cases}\]
With a suitable change of basis, one can assume that $\alpha=\beta=0$. Moreover, from the Jacobi identities, it follows that
\[\theta_{ij}=0,\quad1\leq i<j\leq n,\quad\alpha_i=0,\quad2\leq i\leq n,\]
so
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1},&\\
[e_2,\ldots,e_{n+1}]=e_{n+2},&\\
[e_2,\ldots,e_n,e_{n+2}]=\gamma e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+3},&2\leq i\leq n.
\end{cases}\]
The above relations show that the dimension of $Z(A)$ is at most $2$. We have two possibilities:
(i) $\dim Z(A)=1$. Then $\gamma\neq0$, and the transformations $e'_{n+3}=\gamma e_{n+3}$, $e'_j=e_j$ $(1\leq j\leq n+2)$ yield
\[\begin{cases}
[e'_1,\ldots,e'_n]=e'_{n+1},&\\
[e'_2,\ldots,e'_{n+1}]=e'_{n+2},&\\
[e'_2,\ldots,e'_n,e'_{n+2}]=e'_{n+3},&\\
[e'_1,\ldots,\hat{e'}_i,\ldots,e'_n,e'_{n+1}]=\frac{\beta_i}{\gamma}e'_{n+3},&2\leq i\leq n.
\end{cases}\]
If $\beta_i=0$ for all $2\leq i\leq n$, then
\[A=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}},\]
which is denoted by $A_{n+3,6}$. On the other hand, if $\beta_i\neq0$ for some $2\leq i\leq n$, say $\beta_2\neq0$, then, by applying the following transformations,
\begin{align*}
e'_1&=\frac{\beta_2}{\gamma}e_1,\quad e'_2=\parn{\frac{\beta_2}{\gamma}}^2 e_2+\sum_{i=3}^n(-1)^i\frac{\beta_2\beta_i}{\gamma}e_i,\quad e'_i=e_i,\quad3\leq i\leq n-1,\\
e'_n&=\parn{\frac{\gamma}{\beta_2}}^2e_n,\quad e'_i=\frac{\beta_2}{\gamma}e_i,\quad n+1\leq i\leq n+2,\quad e'_{n+3}=\frac{\beta_2}{\gamma}e_{n+3},
\end{align*}
we observe that
\begin{multline*}
A=\langle e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2},\\
[e_2,\ldots,e_n,e_{n+2}]=[e_1,e_3,\ldots,e_{n+1}]=e_{n+3}\rangle,
\end{multline*}
which we denote by $A_{n+3,7}$.
(ii) $\dim Z(A)=2$. Then $\gamma=0$ and the brackets in $A$ reduce to
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1},&\\
[e_2,\ldots,e_{n+1}]=e_{n+2},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+3},&2\leq i\leq n.
\end{cases}\]
If $\beta_i=0$ for all $2\leq i\leq n$, then
\[A=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2}}.\]
One can easily see that this algebra is isomorphic to $A_{n+3,4}$, while if $\beta_i\neq0$ for some $2\leq i\leq n$, say $\beta_2\neq0$, then the following transformations
\[e'_1=e_1,\quad e'_2=e_2+\sum_{i=3}^n(-1)^i\frac{\beta_i}{\beta_2}e_i,\quad e'_j=e_j,\quad3\leq j\leq n+2,\quad e'_{n+3}=\beta_2e_{n+3}\]
show that
\begin{multline*}
A=\langle e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},\\
[e_2,\ldots,e_{n+1}]=e_{n+2},[e_1,e_3,\ldots,e_{n+1}]=e_{n+3}\rangle.
\end{multline*}
This algebra is denoted by $A_{n+3,8}$.
The above results are summarized as follows:
\begin{theorem}\label{d=n+3}
The only nilpotent $n$-Lie algebras of dimension $n+3$ with $n>2$ are $A_{n+3,i}$ with $i\in\{1,\ldots,8\}$. For $n=2$ we additionally have the algebra $H(2,2)$.
\end{theorem}
\section{Classification of $(n+4)$-dimensional nilpotent $n$-Lie algebras of class $2$}
Nilpotent $n$-Lie algebras of class $2$ appear in some problems of geometry, such as commutative Riemannian manifolds. Classifying nilpotent Lie algebras of class $2$ has also been an important problem in Lie theory. In \cite{br.lz}, nilpotent Lie algebras of class $2$ and dimension $8$ with $2$-dimensional center over the field of complex numbers are classified. In this section, we classify $(n+4)$-dimensional nilpotent $n$-Lie algebras of class $2$. From Theorem \ref{d=n+3}, the following result follows immediately.
\begin{theorem}\label{d=n+3 of class 2}
The only $(n+3)$-dimensional nilpotent $n$-Lie algebras of class $2$ are
\begin{itemize}
\item[(1)]$A_{n+3,2}=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+3}}\cong H(n,1)\oplus F(2)$;
\item[(2)]$A_{n+3,5}=\gen{e_1,\ldots,e_{n+3}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}}$;
\item[(3)]$H(2,2)$.
\end{itemize}
\end{theorem}
Suppose $A$ is an $(n+4)$-dimensional nilpotent $n$-Lie algebra of class $2$ with basis $\{e_1,\ldots,e_{n+4}\}$. If $\dim A^2=1$, Theorem \ref{dimA^2=1} yields $A\cong H(n,m)\oplus F(n+4-nm-1)$. Hence $A$ is isomorphic to one of the following algebras
\[H(n,1)\oplus F(3),\quad H(2,2)\oplus F(1)\ \text{or}\ H(3,2).\]
Now, we assume that $\dim A^2\geq2$ and $\gen{e_{n+3},e_{n+4}}\subseteq A^2$. Then $A/\gen{e_{n+4}}$ is an $(n+3)$-dimensional nilpotent $n$-Lie algebra of class $2$, so that, by Theorem \ref{d=n+3 of class 2}, $A/\gen{e_{n+4}}$ has one of the following forms:
Case 1: $A/\gen{e_{n+4}}\cong\gen{\overline{e}_1,\ldots,\overline{e}_{n+3}:[\overline{e}_1,\ldots,\overline{e}_n]=\overline{e}_{n+3}}$. Then the brackets in $A$ can be written as:
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+3}+\alpha e_{n+4},&\\
[e_1,\ldots,\hat{e}_i,\ldots,\hat{e}_j,\ldots,e_n,e_{n+1},e_{n+2}]=\theta_{ij}e_{n+4},&1\leq i<j\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+4},&1\leq i\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\gamma_ie_{n+4},&1\leq i\leq n.
\end{cases}\]
By a suitable change of basis, we may assume that $\alpha=0$, hence
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,\hat{e}_j,\ldots,e_n,e_{n+1},e_{n+2}]=\theta_{ij}e_{n+4},&1\leq i<j\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+4},&1\leq i\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\gamma_ie_{n+4},&1\leq i\leq n.
\end{cases}\]
Since $\dim A^2\geq2$, it is obvious that $\dim Z(A)\leq3$. First, assume that $\dim Z(A)=3$. Then, without loss of generality, we may assume that $Z(A)=\gen{e_{n+2},e_{n+3},e_{n+4}}$. Hence $\gamma_i$ and $\theta_{ij}$ are all zero while $\beta_i\neq0$ for some $i$. We may assume that $\beta_1\neq0$. Then
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+3},&\\
[e_2,\ldots,e_n,e_{n+1}]=\beta_1e_{n+4},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+4},&2\leq i\leq n.
\end{cases}\]
Using the following transformations
\[e'_1=e_1+\sum_{i=2}^n(-1)^{i-1}\frac{\beta_i}{\beta_1}e_i,\quad e'_j=e_j,\quad2\leq j\leq n+3,\quad e'_{n+4}=\beta_1e_{n+4},\]
it follows that
\[A=\gen{e_1,\ldots,e_{n+4}:[e_1,\ldots,e_n]=e_{n+3},[e_2,\ldots,e_{n+1}]=e_{n+4}}.\]
This algebra is denoted by $A_{n+4,1}$.
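That $A_{n+4,1}$ is indeed nilpotent of class $2$ can be checked mechanically; for concreteness take $n=2$, so the brackets are $[e_1,e_2]=e_5$, $[e_2,e_3]=e_6$. The sketch below (our own; the structure-constant representation is not from the paper) computes the lower central series:

```python
import numpy as np

# A_{n+4,1} for n = 2 (a 6-dimensional Lie algebra): [e1, e2] = e5, [e2, e3] = e6
dim = 6
C = np.zeros((dim, dim, dim))
C[0, 1, 4] = 1.0; C[1, 0, 4] = -1.0     # [e1, e2] = e5
C[1, 2, 5] = 1.0; C[2, 1, 5] = -1.0     # [e2, e3] = e6

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

def lcs_step(span_vecs, full_vecs):
    """Spanning set of [span, A]: all brackets of the given vectors, zeros dropped."""
    M = np.array([bracket(u, v) for u in span_vecs for v in full_vecs])
    return M[np.linalg.norm(M, axis=1) > 1e-12]

full = list(np.eye(dim))
A2 = lcs_step(full, full)                 # A^2 = [A, A]
A3 = lcs_step(list(A2), full)             # A^3 = [A^2, A]
assert np.linalg.matrix_rank(A2) == 2     # dim A^2 = 2
assert len(A3) == 0                       # A^3 = 0, so A is nilpotent of class 2
print("A_{6,1} is nilpotent of class 2 with dim A^2 = 2")
```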
Next, assume that $\dim Z(A)=2$. Then $Z(A)=A^2$. If $n\geq3$, then, by Theorem \ref{A^2=Z(A)}, we have
\[A=\gen{e_1,\ldots,e_{n+4}:[e_1,\ldots,e_n]=e_{n+3},[e_3,\ldots,e_{n+2}]=e_{n+4}},\]
which is denoted by $A_{n+4,2}$. In case $n=2$, from \cite{sc.wg.cs}, the only $6$-dimensional Lie algebra satisfying $A^2=Z(A)$ and $\dim Z(A)=2$ is
\[A\cong L_{6,22}(\varepsilon)=\gen{e_1,\ldots,e_6:[e_1,e_2]=e_5,[e_1,e_3]=e_6,[e_2,e_4]=\varepsilon e_6,[e_3,e_4]=e_5}.\]
Note that this algebra does not satisfy $A/\gen{e_6}\cong H(2,1)\oplus F(2)$.
Case 2: $A/\gen{e_{n+4}}\cong\gen{\overline{e}_1,\ldots,\overline{e}_{n+3}:[\overline{e}_1,\ldots,\overline{e}_n]=\overline{e}_{n+1},[\overline{e}_2,\ldots,\overline{e}_n,\overline{e}_{n+2}]=\overline{e}_{n+3}}$. Then the brackets in $A$ can be written as
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1}+\alpha e_{n+4},&\\
[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}+\beta e_{n+4},&\\
[e_1,\ldots,\hat{e}_i,\ldots,\hat{e}_j,\ldots,e_n,e_{n+1},e_{n+2}]=\theta_{ij}e_{n+4},&1\leq i<j\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+1}]=\beta_ie_{n+4},&1\leq i\leq n,\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\gamma_ie_{n+4},&2\leq i\leq n.
\end{cases}\]
With a suitable change of basis, one can assume that $\alpha=\beta=0$. A simple verification shows that $Z(A)=\gen{e_{n+1},e_{n+3},e_{n+4}}$ and that
\[\begin{cases}
[e_1,\ldots,e_n]=e_{n+1},&\\
[e_2,\ldots,e_n,e_{n+2}]=e_{n+3},&\\
[e_1,\ldots,\hat{e}_i,\ldots,e_n,e_{n+2}]=\gamma_ie_{n+4},&2\leq i\leq n.
\end{cases}\]
If $\gamma_i=0$ for all $2\leq i\leq n$, then
\[A=\gen{e_1,\ldots,e_{n+4}:[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}}.\]
One can easily see that this algebra is isomorphic to $A_{n+4,1}$. On the other hand, if $\gamma_i\neq0$ for some $2\leq i\leq n$, say $\gamma_2\neq0$, then we may apply the following transformations
\[e'_2=e_2+\sum_{j=3}^n(-1)^{j-1}\frac{\gamma_j}{\gamma_2}e_j,\quad e'_i=e_i,\quad i=1,3,\ldots,n+3,\quad e'_{n+4}=\gamma_2e_{n+4}.\]
Then
\begin{multline*}
A=\langle e_1,\ldots,e_{n+4}:[e_1,\ldots,e_n]=e_{n+1},\\
[e_2,\ldots,e_n,e_{n+2}]=e_{n+3},[e_1,e_3,\ldots,e_n,e_{n+2}]=e_{n+4}\rangle,
\end{multline*}
which we denote by $A_{n+4,3}$.
Case 3: $A/\gen{e_{n+4}}\cong H(2,2)$. From \cite{sc.wg.cs}, we observe that
\[A\cong L_{6,22}(\varepsilon)=\gen{e_1,\ldots,e_6:[e_1,e_2]=e_5,[e_1,e_3]=e_6,[e_2,e_4]=\varepsilon e_6,[e_3,e_4]=e_5}.\]
\begin{theorem}
The only $(n+4)$-dimensional nilpotent $n$-Lie algebras of class $2$ are
\[H(n,1)\oplus F(3),\ H(2,2)\oplus F(1),\ H(3,2),\ A_{n+4,1},\ A_{n+4,2},\ A_{n+4,3}\ \text{and}\ L_{6,22}(\varepsilon).\]
\end{theorem}
In Table I, we list all $n$-Lie algebras obtained in this paper.
\begin{center}
Table I
\begin{tabular}{|c|c|}
\hline
Nilpotent $n$-Lie algebra&Non-zero multiplications\\
\hline
$A_{n+3,1}$&--\\
\hline
$A_{n+3,2}$&$[e_1,\ldots,e_n]=e_{n+3}$\\
\hline
$A_{n+3,3}$&$\begin{array}{c}[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+3},\\{[e_2,\ldots,e_n,e_{n+2}]}=e_{n+3}\end{array}$\\
\hline
$A_{n+3,4}$&$[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+3}$\\
\hline
$A_{n+3,5}$&$[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3}$\\
\hline
$A_{n+3,6}$&$\begin{array}{c}[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2},\\{[e_2,\ldots,e_n,e_{n+2}]}=e_{n+3}\end{array}$\\
\hline
$A_{n+3,7}$&$\begin{array}{c}[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2},\\{[e_2,\ldots,e_n,e_{n+2}]}=[e_1,e_3,\ldots,e_{n+1}]=e_{n+3}\end{array}$\\
\hline
$A_{n+3,8}$&$\begin{array}{c}[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_{n+1}]=e_{n+2},\\{[e_1,e_3,\ldots,e_{n+1}]}=e_{n+3}\end{array}$\\
\hline
$A_{n+4,1}$&$[e_1,\ldots,e_n]=e_{n+3},[e_2,\ldots,e_{n+1}]=e_{n+4}$\\
\hline
$A_{n+4,2}$&$[e_1,\ldots,e_n]=e_{n+3},[e_3,\ldots,e_{n+2}]=e_{n+4}$ ($n\geq3$)\\
\hline
$A_{n+4,3}$&$\begin{array}{c}[e_1,\ldots,e_n]=e_{n+1},[e_2,\ldots,e_n,e_{n+2}]=e_{n+3},\\{[e_1,e_3,\ldots,e_n,e_{n+2}]}=e_{n+4}\end{array}$\\
\hline
\end{tabular}
\end{center}
\begin{thebibliography}{0}
\bibitem{rb.gs.yz}
R. Bai, G. Song and Y. Zhang, On classification of $n$-Lie algebras, \textit{Front. Math. China} \textbf{6}(4) (2011), 581--606.
\bibitem{rb.xw.wx.ha}
R. Bai, X. Wang, W. Xiao and H. An, Structure of low dimensional $n$-Lie algebras over a field of characteristic $2$, \textit{Linear Alg. Appl.} \textbf{428} (2008), 1912--1920.
\bibitem{lb}
L. Bosko, On Schur multipliers of Lie algebras and groups of maximal class, \textit{Int. J. Algebra Comput.} \textbf{20} (2010), 807--821.
\bibitem{sc.wg.cs}
S. Cical\`o, W. A. de Graaf and C. Schneider, Six-dimensional nilpotent Lie algebras, \textit{Linear Algebra Appl.} \textbf{436} (2012), 163--189.
\bibitem{hd.fs.me}
H. Darabi, F. Saeedi and M. Eshrati, A characterization of finite dimensional nilpotent Filippov algebras, \textit{J. Geom. Phys.} \textbf{101} (2016), 100--107.
\bibitem{me.fs.hd}
M. Eshrati, F. Saeedi and H. Darabi, On the multiplier of nilpotent $n$-Lie algebras, \textit{J. Algebra} \textbf{450} (2016), 162--172.
\bibitem{vf}
V. T. Filippov, $n$-Lie algebras, \textit{Sib. Mat. Zh.} \textbf{26}(6) (1985), 126--140.
\bibitem{sk}
S. M. Kasymov, Theory of $n$-Lie algebras, \textit{Algebra Logika} \textbf{26}(3) (1987), 277--297.
\bibitem{vm}
V. V. Morozov, Classification des alg\`ebres de Lie nilpotentes de dimension $6$, \textit{Izv. Vyssh. Uchebn. Zaved. Mat.} \textbf{4} (1958), 161--171.
\bibitem{br.lz}
B. Ren and L. Zhu, Classification of $2$-step nilpotent Lie algebras of dimension $8$ with $2$-dimensional center, \textit{Comm. Algebra} \textbf{39}(6) (2011), 2068--2081.
\bibitem{cs}
C. Seeley, $7$-dimensional nilpotent Lie algebras, \textit{Trans. Amer. Math. Soc.} \textbf{335}(2) (1993), 479--496.
\bibitem{ku}
K. A. Umlauf, \textit{\"Uber die Zusammensetzung der endlichen continuierlichen Transformationsgruppen, insbesondere der Gruppen vom Range Null}, Ph.D. Thesis, University of Leipzig, Germany, 1891.
\bibitem{yw.hc.yn}
Y. Wang, H. Chen and Y. Niu, Classification of $8$-dimensional nilpotent Lie algebras with $4$-dimensional center, \textit{J. Math. Res. Appl.} \textbf{33}(6) (2013), 699--707.
\bibitem{mw}
M. P. Williams, Nilpotent $n$-Lie algebras, \textit{Comm. Algebra} \textbf{37} (2009), 1843--1849.
\bibitem{sx.br}
S. Xia and B. Ren, Classification of $2$-step nilpotent Lie algebras of dimension $8$ with $4$ dimensional center, \textit{J. University of Science and Technology of Suzhou} \textbf{27} (2010), 19--23.
\end{thebibliography}
\end{document} | math | 25,203 |
\begin{document}
\centerline{\LARGE\bf The Minimal Position of }
\centerline{\LARGE\bf a Stable Branching Random Walk}
\centerline{Jingning Liu and Mei
Zhang\,\footnote{Corresponding author. Supported by NSFC
(11371061)}}
\centerline{School of Mathematical Sciences }
\centerline{Laboratory of Mathematics and Complex Systems }
\centerline{ Beijing Normal University }
\centerline{Beijing 100875, People's Republic of China}
\centerline{E-mails:\;{\tt
[email protected]
and
[email protected]}}
{\narrower
\noindent\textit{\bf Abstract} In this paper, a branching random walk $(V(x))$ in the boundary case is studied, where the associated one-dimensional random walk is in the domain of attraction of an $\alpha$-stable law with $1<\alpha<2$. Let $M_n$ be the minimal position of $(V(x))$ at generation $n$. We establish an integral test to describe the lower limit of $M_n-\frac{1}{\alpha}\log n$ and a law of the iterated logarithm for the upper limit of $M_n-(1+\frac{1}{\alpha})\log n$.
\noindent\textit{Mathematics Subject Classification (2010).} Primary
60J80; secondary 60F05.
\noindent\textit{\bf Key words and phrases} branching random walk, minimal position, additive martingale, upper limit, lower limit.
\section{introduction}
We consider a discrete-time one-dimensional branching random walk. It starts with an initial ancestor particle located at the origin. At time $1$, the particle dies, producing a certain number of new particles. These new particles are positioned according to the distribution of the point process $\Theta$. At time $2$, these particles die, each giving birth to new particles positioned (with respect to the birth place) according to the law of $\Theta$. The process goes on with the same mechanism, and we assume the particles produce new particles independently of each other. This system can be seen as a branching tree $\textbf{T}$ with the origin as the root. For each vertex $x$ on $\textbf{T}$, we denote its position by $V(x)$. The family of random variables $(V(x))$ is usually referred to as a branching random walk (Biggins~\cite{bi}). The generation of $x$ is denoted by $|x|$.
We assume throughout the remainder of the paper, including in the statements of
theorems and lemmas, that
\begin{align}
& \mathbf{E}\Big(\sum_{|x|=1}1\Big)>1, \;\; \mathbf{E}\Big(\sum_{|x|=1}e^{-V(x)}\Big)=1,\;\; \mathbf{E}\Big(\sum_{|x|=1}V(x)e^{-V(x)}\Big)=0. \label{stable0}&
\end{align}
Condition (\ref{stable0}) means that the branching random walk $(V(x))$ is supercritical and in the boundary case (see, for example, Biggins and Kyprianou~\cite{BK05}). Every branching random walk satisfying certain mild integrability assumptions can be reduced to this case by renormalization; see Jaffuel~\cite{Ja} for more details.
Denote $M_n=\min_{|x|=n}V(x)$, i.e., the minimal position at generation $n$. We introduce the conditional probability $\mathbf{P}^*(\cdot):=\mathbf{P}(\cdot\,|\mbox{non-extinction}).$ Under (\ref{stable0}), $M_n\rightarrow\infty, \mathbf{P}^*\mbox{--a.s.}$ (see, for example, \cite{B77} and \cite{Ly}). The asymptotic behavior of $M_n$ has been extensively studied in \cite{AR09}, \cite{A13}, \cite{BZ06}, \cite{HS09}, etc. In particular, under (\ref{stable0}) and certain exponential integrability conditions, Hu and Shi~\cite{HS09} obtained the following:
\begin{align}
\limsup_{n\to \infty}\,\frac{M_n}{\log n}=\frac{3}{2}, \quad \mathbf{P}^*\mbox{--a.s.}\nonumber\\
\liminf_{n\to \infty}\,\frac{M_n}{\log n}=\frac{1}{2}, \quad \mathbf{P}^*\mbox{--a.s.}\nonumber
\end{align}
This shows a fluctuation phenomenon at the logarithmic scale. Aidekon~\cite{A13} proved the convergence in law of $M_n-\frac{3}{2}\log n$ when (\ref{stable0}) and the following two conditions hold:
\begin{align}
\mathbf{E}\Big(\sum_{|x|=1}V^2(x)e^{-V(x)}\Big)<\infty,\label{incon}
\end{align}
\begin{align}
\mathbf{E}\big(X(\log_{+}{X})^{2}+\widetilde{X}(\log_{+}{\widetilde{X}})\big)<\infty, \label{incon1}
\end{align}
where $X:=\sum_{|x|=1}e^{-V(x)}$, $\widetilde{X}:=\sum_{|x|=1}V(x)_+e^{-V(x)}$, and $V(x)_+:=\max\{V(x),0\}$.
Later, Aidekon and Shi~\cite{AS14} proved that under \eqref{stable0}--\eqref{incon1},
$$
\liminf_{n\rightarrow\infty}\,\big(M_n-\frac{1}{2}\log n\big)=-\infty,\quad \mathbf{P}^*\mbox{--a.s.}
$$
Based on this result, Hu~\cite{H15} established the second order limit under the same assumptions \eqref{stable0}--\eqref{incon1}:
$$
\liminf_{n\rightarrow\infty} \,\frac{M_n-\frac{1}{2}\log n}{\log\log n}=-1,\quad \mathbf{P}^*\mbox{--a.s.}
$$
When (\ref{stable0}), (\ref{incon1}) and the higher-order integrability condition $\mathbf{E}\big(\sum_{|x|=1}(V(x)_+)^3e^{-V(x)}\big)<\infty$ hold, the upper limit was obtained by Hu~\cite{H13}:
$$
\limsup_{n\rightarrow\infty}\frac{M_n-\frac{3}{2}\log n}{\log\log\log n}=1.
$$
Throughout the following, $c, c', c_1,c_2,\cdots$ will denote positive constants whose values may change from place to place. For functions $f$ and $g$, $f(x)\sim g(x)$ as $x\to \infty$ means that $\lim_{x\to \infty} \frac{f(x)}{g(x)}=1$, and $f(x)=O(g(x))$ as $x\to \infty$ means that $\limsup_{x\to \infty} \frac{|f(x)|}{g(x)}<\infty$.
In this paper, we shall consider the random walk by assuming that
\begin{align}
& \mathbf{E}\,\Big(\sum_{|x|=1}\mathbf{1}_{\{V(x)\leq -y\}} e^{-V(x)}\Big)=O({y^{-\alpha-\varepsilon})},\quad y\to \infty; \label{stable1}&\\
& \mathbf{E}\,\Big(\sum_{|x|=1}\mathbf{1}_{\{V(x)\geq y\}} e^{-V(x)}\Big)\sim \frac{c}{y^{\alpha}},\quad y\to \infty; \label{stable2}&\\
& \mathbf{E}\big(X(\log_{+}{X})^{\alpha}+\widetilde{X}(\log_{+}{\widetilde{X}})^{\alpha-1}\big)<\infty, \label{stable3}&
\end{align}
where $\alpha\in (1,2)$, $\varepsilon>0$, $c>0$. Under (\ref{stable1}) and (\ref{stable2}), we shall see in Section 2 that there is a one-dimensional random walk $\{S_n\}$ corresponding to $(V(x))$, where
$S_1$ belongs to the domain of attraction of a spectrally positive stable law. We call $(V(x))$ a {\it stable random walk}.
In this paper, we study the asymptotic behavior of $M_n$ for the stable random walk $(V(x))$ under the conditions \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. Our main results are the following Theorems~\ref{T:1.1}--\ref{T:1.4}.
\begin{theorem}\label{T:1.1}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. For any nondecreasing function $f$ satisfying \\
$\lim_{x\rightarrow\infty}f(x)=\infty$, we have
\begin{align}
{\mathbf{P}^*(M_n-\frac{1}{\alpha}\log n<-f(n), \quad\mbox{i.o.})=\left\{
\begin{aligned}
&0\\
&1\\
\end{aligned}
\right.\;\;\Leftrightarrow\;\; \int_0^\infty \frac{1}{\;t\,e^{f(t)}\;}d\,t\left\{
\begin{aligned}
&<\infty\\
&=\infty. \\
\end{aligned}
\right.}
\end{align}
\end{theorem}
The behavior of the minimal position $M_n$ is closely related to the so-called additive martingale $(W_n)_{n\geq0}$:
\begin{align}
W_n:=\sum_{|u|=n}e^{-V(u)}, n\geq0. \nonumber
\end{align}
By \cite{bi2} and \cite{Ly}, $W_n\rightarrow0$ almost surely as $n\rightarrow\infty$. A similar integral test for the upper limits of $W_n$ can be described as follows:
\begin{theorem} \label{T:1.2}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. For any nondecreasing function $f$ satisfying\\
$
\lim_{x\rightarrow\infty}f(x)=\infty$, we have $\mathbf{P}^*\mbox{--a.s.}$
\begin{align}
\limsup_{n\rightarrow\infty} \frac{n^{\frac{1}{\alpha}}W_n}{f(n)}=\left\{
\begin{aligned}
&0\\
&\infty\\
\end{aligned}
\right.\;\;\Leftrightarrow\;\; \int_0^\infty \frac{1}{\;t\,f(t)\;}d\,t\left\{
\begin{aligned}
&<\infty\\
&=\infty. \\
\end{aligned}
\right.
\end{align}
\end{theorem}
\begin{theorem}\label{T:1.3}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. We have \begin{align}
\liminf_{n\rightarrow\infty}\frac{M_n-\frac{1}{\alpha}\log n}{\log\log n}=-1, \; \mathbf{P}^*\mbox{--a.s.}
\end{align}
\end{theorem}
\begin{remark}
For the random walk $(V(x))$ satisfying (\ref{stable0})--(\ref{incon1}), where the one-dimensional random walk associated with $(V(x))$ has finite variance, Hu \cite[Theorems 1.1 and 1.2, Proposition 1.3]{H15} established the corresponding results for $M_n$ and $W_n$. Theorems \ref{T:1.1}--\ref{T:1.3} extend them to the stable random walk under \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. Here the one-dimensional random walk $\{S_n\}$ associated with $(V(x))$ has infinite variance (see Section 2 for details).
\end{remark}
\begin{theorem}\label{T:1.4}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. We have
\begin{align}
\limsup_{n\rightarrow\infty}\frac{M_n-(1+\frac{1}{\alpha})\log n}{\log\log\log n}\geq 1, \; \mathbf{P}^*\mbox{--a.s.} \nonumber
\end{align}
\end{theorem}
\begin{remark}
The upper limit for $M_n$ was established by Hu~\cite{H13} under (\ref{stable0})--(\ref{incon1}) and the finite third moment condition $$\mathbf{E}\Big(\sum_{|x|=1}(V(x)_+)^3e^{-V(x)}\Big)<\infty. $$ In the present paper, under \eqref{stable0}, \eqref{stable1}--\eqref{stable3}, $(V(x))$ no longer satisfies the integrability condition $$\mathbf{E}\Big(\sum_{|x|=1}(V(x)_+)^ke^{-V(x)}\Big)<\infty $$ for $k\ge \alpha$. In this case we have not determined the exact upper limit of $\frac{M_n-(1+\frac{1}{\alpha})\log n}{\log\log\log n}$; in Theorem~\ref{T:1.4}, only a lower bound is obtained.
\end{remark}
\section{Stable random walk}
In this section, we first introduce a one-dimensional random walk associated with the branching random walk.
For $a\in\mathbb{R}$, we denote by $\mathbf{P}_a$ the probability distribution associated to the branching random walk $(V(x))$ starting from $a$, and by $\mathbf{E}_a$ the corresponding expectation. For any vertex $x$ on the tree $\bf T$, we denote the shortest path from the root $\varnothing$ to $x$ by $\langle\varnothing,x\rangle\!:=\!\{x_0,x_1,x_2,\ldots,x_{|x|}\}$. Here $x_i$ is the ancestor of $x$ at the $i$-th generation. For any $\mu, \nu\in \textbf{T}$, we write $\mu<\nu$ if $\mu$ is an ancestor of $\nu$. Under \eqref{stable0}, there exists a sequence of independent and identically distributed real-valued random variables $S_1,S_2-S_1,S_3-S_2,\ldots,$ such that for any $n\geq1, a\in\mathbb{R}$ and any measurable function $g:\mathbb{R}^n\rightarrow[0,\infty),$
\begin{align}\label{manytoone}
\mathbf{E}_a\Big(\sum_{|x|=n}g\big(V(x_1),\ldots,V(x_n)\big)\Big)=\mathbf{E}_a\Big(e^{S_n-a}g(S_1,\ldots,S_n)\Big),
\end{align}
where, under $\mathbf{P}_a$, we have $S_0=a$ almost surely. Formula (\ref{manytoone}) is called the {\it many-to-one formula}. We will write $\mathbf{P}$ and $\mathbf{E}$ instead of $\mathbf{P}_0$ and $\mathbf{E}_0$. Since $\mathbf{E}\big(\sum_{|x|=1}V(x)e^{-V(x)}\big)=0$, we have $\mathbf{E}(S_1)=0$. By (\ref{stable1}) and (\ref{stable2}), it is not difficult to see that $\mathbf{E}|S_1|^k=\infty$ for $k\ge \alpha$.
Under conditions \eqref{stable1} and \eqref{stable2}, $S_1$ belongs to the domain of attraction of a spectrally positive stable law with characteristic function
\begin{align}
G(t) :=\exp\big\{-c_0|t|^\alpha\big(1-i\frac{t}{|t|}\tan{\frac{\pi\alpha}{2}}\big)\big\},\quad c_0>0. \nonumber
\end{align}
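The tail behavior of $S_1$ can be seen directly from the many-to-one formula: taking $n=1$ and $g(s)=\mathbf{1}_{\{s\geq y\}}e^{-s}$ in (\ref{manytoone}) yields
\begin{align*}
\mathbf{P}(S_1\geq y)=\mathbf{E}\Big(\sum_{|x|=1}\mathbf{1}_{\{V(x)\geq y\}}e^{-V(x)}\Big)\sim \frac{c}{y^{\alpha}},\quad y\to\infty,
\end{align*}
by \eqref{stable2}, while \eqref{stable1} similarly gives $\mathbf{P}(S_1\leq -y)=O(y^{-\alpha-\varepsilon})$. Thus $S_1$ has a regularly varying right tail of index $\alpha$ and a lighter left tail.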
The following lemma collects some estimates on $(S_n)$, which are key to the proofs of the main theorems.
\begin{lemma}
Let $0<\lambda<1$. There exist positive constants $c_1,c_2,\cdots, c_5$ such that for any $a\ge 0, b\ge -a, 0\le u\le v$ and $n\ge1$,
\begin{align}
&\mathbf{P}(\underline{S}_n\geq -a) \leq c_1 \frac{(1+a)}{\,n^{\frac{1}{\alpha}}},\label{S:1}\\
&\mathbf{P}(\underline{-S}_n\geq -a) \leq c_2 \frac{(1+a)^{\alpha-1}}{\,n^{1-\frac{1}{\alpha}}},\ \label{S:2}\\
&\mathbf{P}(S_n\leq b,\,\underline{S}_n\geq-a)\leq c_3\frac{\;(1+a)(1+a+b)^\alpha}{n^{1+\frac{1}{\alpha}}},\label{S:3}\\
& \mathbf{P}\big(\underline{S}_{\lfloor\lambda n\rfloor}\geq-a,\min_{i\in[\lambda n,n]\cap \mathbb{Z}}S_i\geq b,S_n\in[b+u,b+v]\,\big)\leq c_4 \frac{(1+v)^{\alpha\!-\!1}(1+v-u)(1+a)}{n^{1+\frac{1}{\alpha}}},\label{S:4}\\
& \mathbf{P}(\underline{S}_n\geq-a, \min_{\lambda n\leq i<n}S_i>b, S_n\leq b)\leq c_5(1+a)n^{-1-\frac{1}{\alpha}}, \label{S:5}
\end{align}
where $\underline{S}_n:=\min_{0\le i\leq n}S_i$ and $\underline{-S}_n:=\min_{0\le i\leq n}(-S_i)$.
\end{lemma}
\textbf{Proof.} The proofs of (\ref{S:1})--(\ref{S:4}) are given in \cite[Lemmas 2.1--2.4]{sta}. Here we only prove (\ref{S:5}).
Let $f(x):=\mathbf{P}(S_1\leq -x)$ and denote the event in \eqref{S:5} by $E_{\eqref{S:5}}$. Applying the Markov property of $(S_i)$ at time $n-1$ and using \eqref{S:4}, we have
\begin{align}
\mathbf{P} (E_{\eqref{S:5}})
\leq & \;\sum_{j=0}^\infty f(j)\,\mathbf{P}\Big(\underline{S}_{n-1}\geq-a, \min_{\lambda n\leq i<n}S_i>b,
b+j<S_{n-1}\leq b+j+1\Big)\nonumber\\
\leq & \;c\,(1+a)n^{-1-\frac{1}{\alpha}}\sum_{j=0}^\infty f(j) (2+j)^{\alpha-1}.\nonumber
\end{align}
By (\ref{stable1}), $$\sum_{j=0}^\infty f(j) (2+j)^{\alpha-1}\leq c\,\mathbf{E}\big((-S_1)^\alpha\textbf{1}_{\{S_1<0\}}\big)<\infty. $$
Then the proof is completed.
$\Box$
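The exponents in \eqref{S:1} and \eqref{S:2} can also be read off from fluctuation theory: the limiting spectrally positive $\alpha$-stable law has positivity parameter $\rho=1-\frac{1}{\alpha}$, so that
\begin{align*}
\mathbf{P}(\underline{S}_n\geq 0)=n^{-(1-\rho)+o(1)}=n^{-\frac{1}{\alpha}+o(1)},\qquad
\mathbf{P}(\underline{-S}_n\geq 0)=n^{-\rho+o(1)}=n^{-(1-\frac{1}{\alpha})+o(1)},
\end{align*}
which is consistent with the upper bounds \eqref{S:1} and \eqref{S:2} with $a=0$.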
}
\section{Proofs of Theorems 1.1--1.3}
In this section, we first prove Theorem 1.1 and Theorem 1.2. Noticing that $W_n\geq e^{-M_n}$, we only need to prove the convergence part in Theorem \ref{T:1.2}, i.e.,
\begin{align}
\int_0^\infty \frac{dt}{tf(t)}<\infty \Rightarrow \limsup_{n\to \infty} \frac{n^\frac{1}{\alpha}W_n}{f(n)}=0,\;\;\; \mathbf{P}^*\mbox{--a.s.} \,\,; \label{conver}
\end{align}
and the divergence part in Theorem \ref{T:1.1}, i.e.,
\begin{align}
\int_0^\infty \frac{dt}{te^{f(t)}}=\infty \Rightarrow \mathbf{P}^*(M_n-\frac{1}{\alpha}\log n<-f(n),\quad \mbox{i.o.})=1. \label{div}
\end{align}
We denote by $\Omega(x)$ the set of brothers of a vertex $x$, i.e., $\Omega(x)=\{y:y_{|y|-1}=x_{|x|-1},y\neq x\}$.
For $\beta\ge 0$, define $$W_n^{\beta}:=\sum_{|x|=n}e^{-V(x)}
\mathbf{1}_{\{\underline{V}(x)\geq-\beta\}},$$
where
$\underline{V}(x):=\min_{0\leq i\leq|x|}V(x_i).$
To prove (\ref{conver}), we need the following lemma.
\begin{lemma}\label{L:3.1}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. For any $\beta\geq0$, there exists a constant $c$ such that for any $1<n\leq m$ and $\lambda>0$, we have
\begin{align}
\mathbf{P}(\max_{n\leq k\leq m}k^{\frac{1}{\alpha}}W^{(\beta)}_k>\lambda)\leq c~\Big(\frac{\log n}{n^{\frac{1}{\alpha}}}+\frac{1}{\lambda}(\frac{m}{n})^{\frac{1}{\alpha}}\Big).
\end{align}
\end{lemma}
\begin{proof}
We introduce another martingale related to $W^\beta_k$:
\begin{align}
W^{(\beta,n)}_k:=\sum_{|x|=k}e^{-V(x)}\mathbf{1}_{\{\underline{V}(x_n)\geq-\beta\}}, \quad n\leq k\leq m+1,\nonumber
\end{align}
where $\underline{V}(x_n):=\min_{1\le i\leq n}V(x_i)$. Therefore,
\begin{align}
\mathbf{P}(\max_{n\leq k\leq m}k^{\frac{1}{\alpha}}W^{(\beta)}_k>\lambda)&\leq \mathbf{P}(\max_{n\leq k\leq m}k^{\frac{1}{\alpha}}W^{(\beta,n)}_k>\lambda)+ \mathbf{P}(\min_{n\leq k\leq m}\min_{|x|=k}V(x)<-\beta)\nonumber.
\end{align}
By the branching property, for $n\leq k\leq m$, we have that
$$\mathbf{E}(W^{(\beta,n)}_{k+1}|\mathcal{F}_k)=W^{(\beta,n)}_k. $$
Hence, by Doob's maximal inequality,
\begin{align}
\mathbf{P}(\max_{n\leq k\leq m}k^{\frac{1}{\alpha}}W^{(\beta,n)}_k\geq\lambda)\leq\frac{m^\frac{1}{\alpha}}{\lambda}
\mathbf{E}(W^{(\beta,n)}_n)=\frac{m^\frac{1}{\alpha}}{\lambda}
\mathbf{E}(W^\beta_n).
\end{align}
From (\ref{manytoone}) and $\eqref{S:1}$, it follows that
\begin{align}
\mathbf{P}(\max_{n\leq k\leq m}k^{\frac{1}{\alpha}}W^{(\beta,n)}_k\geq\lambda)\leq c \frac{1}{\lambda}(\frac{m}{n})^{\frac{1}{\alpha}}. \label{L:3.1_1}
\end{align}
On the other hand, by Aidekon \cite[p.~1403]{A13} we know that $\mathbf{P}(\inf_{x\in\textbf{T}}V(x)<-x)\leq e^{-x}$ for $x\geq0$. Hence
\begin{align}
&\mathbf{P}(\min_{n\leq k\leq m}\min_{|x|=k}V(x)<-\beta\,)\nonumber\\ &\leq\mathbf{P}(\inf_{x\in\textbf{T}}V(x)<-\log n)
+\mathbf{P}(\min_{n\leq k\leq m}\min_{|x|=k}V(x)<-\beta,\inf_{x\in\textbf{T}}V(x)\geq-\log n) \nonumber\\
&\leq \frac{1}{n}+\sum_{k=n}^m\mathbf{E}(\sum_{|x|=k}\mathbf{1}_{\{V(x)<-\beta,V(x_n)\geq-\beta,\ldots,V(x_{k-1})\geq-\beta,
\underline{V}(x)\geq-\log n\}}) .\nonumber
\end{align}
By (\ref{manytoone}) and \eqref{S:1},
\begin{align}
\mathbf{P}(\min_{n\leq k\leq m}\min_{|x|=k}V(x)<-\beta\,)\leq \frac{1}{n}+c \,\mathbf{P}(\underline{S}_n\geq-\log n)\leq c~\frac{\log n}{n^{\frac{1}{\alpha}}},\nonumber
\end{align}
which together with \eqref{L:3.1_1}, completes the proof.
\end{proof}
\noindent\textbf{Proof of (\ref{conver}). }Let $n_j=2^j$. According to Lemma \ref{L:3.1}, for all large $j$ we have
\begin{align}
\mathbf{P}\big(\max_{n_j\leq k\leq n_{j+1}}k^{\frac{1}{\alpha}}W^\beta_k>f(n_j)\big)\leq c\,\bigg( \frac{\log n_j}{n_j^{\frac{1}{\alpha}}}+\frac{2^{\frac{1}{\alpha}}}{f(n_j)}\bigg). \label{F:3.6}
\end{align}
By our assumption for $f$, $$\sum_{j\geq j_0}\frac{1}{f(n_j)}\leq\sum_{j\geq j_0}\frac{1}{\log2}\int_{n_{j-1}}^{n_j}\frac{1}{f(x)x}\mathrm{d}x<\infty. $$ Hence
\begin{align*}
\sum_{j\geq j_0} \mathbf{P}\big(\max_{n_j\leq k\leq n_{j+1}}k^{\frac{1}{\alpha}}W^\beta_k>f(n_j)\big)<\infty.
\end{align*} By the Borel--Cantelli lemma, for all large $k$,
\begin{align}
k^{\frac{1}{\alpha}}W^\beta_k\leq f(k),\quad\mathbf{P}\mbox{--a.s.} \nonumber
\end{align}
Letting $\beta\to \infty$, we have $k^{\frac{1}{\alpha}}W_k\leq f(k),\quad\mathbf{P}\mbox{--a.s.}$ As a consequence,
\begin{align}
\limsup_{k\rightarrow\infty}\frac{k^{\frac{1}{\alpha}}W_k}{f(k)}\leq1,\quad\mathbf{P}\mbox{--a.s.} \nonumber
\end{align}
Replacing $f$ by $\varepsilon f$, and letting $\varepsilon\rightarrow0$, we complete the proof.
$\Box$
Fix $K\geq0$. Now we define for $n<k\leq \alpha n$,
\begin{align}
&A_k^{(n,\lambda)}:=\big\{x:|x|=k,V(x_i)\geq a_i^{(n,\lambda)}, 0\leq i\leq k,V(x)\leq\frac{1}{\alpha}\log n-\lambda+K\big\},\nonumber\\
&B_k^{(n,\lambda)}:=\big\{x:|x|=k,\sum_{u\in\Omega(x_{i+1})}(1+(V(u)-a_i^{(n,\lambda)})_+)e^{-(V(u)-a_i^{(n,\lambda)})}\leq c'e^{-b_i^{(k,n)}},\; 0\leq i\leq k\!-\!1\big\},\nonumber
\end{align}
where $
a_i^{(n,\lambda)}=\mathbf{1}_{\{\frac{\alpha}{4}n<i\leq k\}}(\frac{1}{\alpha}\log n-\lambda)\,$,
$b_i^{(k,n)}=\mathbf{1}_{\{0\leq i\leq\frac{\alpha}{4}n\}}i^{\frac{\gamma}{2}}+\mathbf{1}_{\{\frac{\alpha}{4}n<i\leq\alpha n\}}(k-i)^{\frac{\gamma}{2}}$, $\gamma=\frac{1}{\alpha(\alpha\!+\!1)}$ and $c'$ is a positive constant chosen as in \cite[Lemma 7.1]{sta}.
Lemmas~\ref{L:3.2} and \ref{L:3.3} are preparatory results for the proof of (\ref{div}).
\begin{lemma}\label{L:3.2}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. There exist some positive constants $K$ and
$c_6, c_7$ such that for all $n\geq2$, $0\leq\lambda\leq\frac{1}{\,2\alpha}\log n$,
\begin{align}
c_6e^{-\lambda}\leq\mathbf{P}\Big(\bigcup_{k=n\!+\!1}^{\alpha n}A_k^{(n,\lambda)}\cap B_k^{(n,\lambda)}\Big)\leq c_7\,e^{-\lambda}. \nonumber
\end{align}
\end{lemma}
\begin{proof}
The lower bound is proved exactly as in \cite[Lemma 7.1]{sta}, replacing $\frac{1}{\alpha}\log n$ with $\frac{1}{\alpha}\log n-\lambda$. For the upper bound, let $s:=\frac{1}{\alpha}\log n-\lambda$. Applying (\ref{manytoone}) and \eqref{S:4}, we get
\begin{align*}
\mathbf{P}\Big(\bigcup_{k=n+1}^{\alpha n}A_k^{(n,\lambda)}\Big)&\leq \sum_{k=n+1}^{\alpha n}\mathbf{E}
\Big(\sum_{|x|=k}\mathbf{1}_{\{V(x_i)\geq a_i^{(n,\lambda)},\, i\leq k, V(x)\leq s+K\}}\Big)\nonumber\\
&=\sum_{k=n+1}^{\alpha n}\mathbf{E}\Big(e^{S_k}\mathbf{1}_{\{S_i\geq a_i^{(n,\lambda)},\, i\leq k, S_k\leq s+K\}}\Big)\nonumber\\
&\leq\sum_{k=n+1}^{\alpha n}e^{s+K}\mathbf{P}\big(S_i\geq a_i^{(n,\lambda)}, i\leq k, S_k\leq s+K\big)\nonumber\\
&\leq c\sum_{k=n+1}^{\alpha n}e^{s+K} \frac{1}{n^{1+\frac{1}{\alpha}}} \nonumber\\
&\leq c e^{-\lambda},
\end{align*}
completing the proof.
\end{proof}
Denote the natural filtration of the branching random walk by $(\mathcal{F}_n, n\geq0)$. Here we introduce the well-known change of measure of Lyons \cite{Ly} and the associated spinal decomposition. With the nonnegative martingale $W_n$, we can define a new probability measure $\mathbf{Q}$ such that for any $n\geq1$,
\begin{align}
\mathbf{Q}\big|_{\mathcal{F}_n}:=W_n\cdot\mathbf{P}\big|_{\mathcal{F}_n},
\end{align}
where $\mathbf{Q}$ is defined on $\mathcal{F}_\infty(:=\!\!\vee_{n\geq0}\mathcal{F}_n)$. Similarly, we denote by $\mathbf{Q}_a $ the probability distribution associated to the branching random walk starting from $a$, and by $\mathbf{E_Q}$ the expectation corresponding to $\mathbf{Q}(:=\mathbf{Q}_0)$. Let us give a description of the branching random walk under $\mathbf{Q}$. We start from one single particle $\omega_0\!\!:=\!\!\varnothing$, located at
$V(\omega_0)=0$. At time $n+1$, each particle $\upsilon$ of the $n$th generation dies and gives birth to a point process independently distributed as $(V(x),|x|=1)$ under $\mathbf{P}_{V(\upsilon)}$, except one particle $\omega_n$, which dies and produces a point process distributed as $(V(x),|x|=1)$ under $\mathbf{Q}_{V(\omega_n)}$. Then $\omega_{n+1}$ is chosen among the children $\mu$ of $\omega_n$ with probability proportional to $e^{-V(\mu)}$. Next we state the following fact about the spinal decomposition. \\
\textbf{Fact 7.1 (Lyons \cite{Ly})}.
Assume \eqref{stable0}. \\
(\romannumeral1) For any $|x|=n$, we have
\begin{align}
\mathbf{Q}(\omega_n=x|\mathcal{F}_n)=\frac{\,e^{-V(x)}}{W_n}.\nonumber
\end{align}
(\romannumeral2)
The spine process $(V(\omega_n))_{n\geq0}$ under $\mathbf{Q}$ has the distribution of $(S_n)_{n\geq0}$ (introduced in Section 2) under $\mathbf{P}$. \\
(\romannumeral3) Let
$\mathcal{G}_\infty:=\sigma\{\omega_j,V(\omega_j),\Omega(\omega_j), (V(u))_{u\,\in\,\Omega(\omega_j)},j\geq1\} \nonumber$ be the $\sigma$-algebra of the spine and its brothers. Denote by $\{\mu\nu,|\nu|\geq0\}$ the subtree of $\textbf{T}$ rooted at $\mu$. For any $\mu\in\Omega(\omega_k)$, the induced branching random walk $(V(\mu\nu),|\nu|\geq0)$ under $\mathbf{Q}$ and conditioned on $\mathcal{G}_\infty$ is distributed as $\mathbf{P}_{V(\mu)}$.
For $n\geq2$ and $0\leq\lambda\leq\frac{1}{2\alpha}\log n$, we define
\begin{align}
E(n,\lambda):=\bigcup_{k=n\!+\!1}^{\alpha n}(A_k^{(n,\lambda)}\cap B_k^{(n,\lambda)}).
\end{align}
\begin{lemma}\label{L:3.3}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. There exists $c>0$ such that for any $n\geq2,\;0\leq\lambda\leq\frac{1}{2\alpha}\log n$, $m\geq4n$ and $0\leq \mu\leq \frac{1}{2\alpha}\log m$,
\begin{align}
\mathbf{P}\big(E(n,\lambda)\cap F(m,\mu)\big)\leq c\,e^{-\lambda-\mu}+c\,e^{-\mu}\frac{\log n}{n^{\frac{1}{\alpha}}}, \nonumber
\end{align}
where $F(m,\mu):=\bigcup_{k=m+1}^{\alpha m}\big(A_k^{(m,\mu)}\cap B_k^{(m,\mu)}\big)$, that is, $E(m,\mu)$ with $(m,\mu)$ in place of $(n,\lambda)$.
\end{lemma}
\begin{proof}
For convenience, we write $s:=\frac{1}{\alpha}\log n-\lambda$ and $t:=\frac{1}{\alpha}\log m-\mu$. Then
\begin{align}
\mathbf{P}\big(E(n,\lambda)\cap F(m,\mu)\big)&\leq\mathbf{E}\Big(\mathbf{1}_{E(n,\lambda)}\sum_{k=m+1}^{\alpha m}\sum_{|x|=k}\mathbf{1}_{\{x\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}\Big)\nonumber\\
&=\sum_{k=m+1}^{\alpha m} \mathbf{E_Q}\Big(\mathbf{1}_{E(n,\lambda)}e^{V(\omega_k)}\mathbf{1}_{\{\omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}\Big)\nonumber\\
&\leq e^{t+K}\sum_{k=m+1}^{\alpha m}\sum_{l=n+1}^{\alpha n} \mathbf{E_Q}\Big(\sum_{|x|=l}
\mathbf{1}_{\{x\in A_l^{(n,~\!\lambda)}\cap B_l^{(n,~\!\lambda)},\omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}\Big)\nonumber\\
&=:e^{t+K}\sum_{k=m+1}^{\alpha m}\sum_{l=n+1}^{\alpha n}I(k,l)\label{L:3.3_2}.
\end{align}
Decomposing the sum on the brothers of the spine, we obtain
\begin{align}
I(k,l)&=\mathbf{Q}(\omega_l\in A_l^{(n,~\!\lambda)}\cap B_l^{(n,~\!\lambda)}, \omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)})\nonumber\\
& +\sum_{p=1}^l\mathbf{E_Q}\Big(\mathbf{1}_{\{\omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}\sum_{x\in\Omega(\omega_p)}f_{k,l,p}\big(V(x)\big)\Big) \nonumber\\
&=:I_1{(k,l)}+\sum_{p=1}^lJ{(k,l,p)},
\end{align}
where $f_{k,l,p}(V(x)):=\mathbf{E_Q}\Big(\sum_{u\ge x, |u|=l}\mathbf{1}_{\{u\in
A_l^{(n,~\!\lambda)}\cap B_l^{(n,~\!\lambda)}\}}\big|\mathcal{G}_\infty\Big)$. Recalling \textbf{Fact 7.1(\romannumeral3)},
we obtain,
\begin{align}
f_{k,l,p}(x)&\leq \mathbf{E}_{x}\Big(\sum_{|\nu|=l-p}\mathbf{1}_{\{V(\nu_i)\geq a_{i+p}^{(n,\lambda)},0\leq i\leq l-p, V(\nu)\leq s+K\}}\Big)\nonumber\\
& \leq e^{-x+s+K}\mathbf{P}_x(S_i\geq a_{i+p}^{(n,\lambda)}, 0\leq i\leq l-p, S_{l-p}\leq s+K) \label{L:3.3_1},
\end{align}
where the last step is from (\ref{manytoone}). To estimate $\sum_{p=1}^lJ{(k,l,p)} $, we break the sum into
two parts. Firstly consider the case $1\leq p\leq\frac{\alpha n}{4}$. By \eqref{S:4}, we have
\begin{align}
f_{k,l,p}(x)\leq ce^{-x+s+K}\frac{\;1+x_+}{n^{1+\frac{1}{\alpha}}}.\nonumber
\end{align}
Consequently,
\begin{align}
\sum_{p=1}^{\frac{\alpha n}{4}}J{(k,l,p)}&\leq ce^sn^{-1-\frac{1}{\alpha}}\sum_{p=1}^{\frac{\alpha n}{4}} \mathbf{E_Q}\Big(\mathbf{1}_{\{\omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}\sum_{x\in\Omega(\omega_p)}(1+V(x)_+)e^{-V(x)}\Big)\nonumber\\
&\leq ce^sn^{-1-\frac{1}{\alpha}}\sum_{p=1}^{\frac{\alpha n}{4}} \mathbf{E_Q}
\Big(\mathbf{1}_{\{\omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}e^{-(p-1)^{\frac{\gamma}{2}}}\Big), \nonumber
\end{align}
where the last inequality comes from the definition of $B_k^{(m,~\!\mu)}$. Note that by \eqref{S:4},
$$\mathbf{Q}(\omega_k\in A_k^{(m,~\!\mu)})=\mathbf{P}(t\leq S_k\leq t+K, S_i\ge a_i^{(m,\mu)}, 0\leq i\leq k)\leq c m^{-1-\frac{1}{\alpha}}$$ for all $m<k\leq \alpha m$. It follows that
\begin{align}
\sum_{p=1}^{\frac{\alpha n}{4}}J{(k,l,p)}\leq ce^sn^{-1-\frac{1}{\alpha}}\mathbf{Q}(\omega_k\in A_k^{(m,\mu)})\leq ce^sn^{-1-\frac{1}{\alpha}}m^{-1-\frac{1}{\alpha}} \label{L:3.3a}.
\end{align}
On the other hand, when $\frac{\alpha n}{4}<p\leq l$, returning to \eqref{L:3.3_1},
\begin{align}
f_{k,l,p}(x)&\leq e^{-x+s+K}\mathbf{P}_x(S_i\geq a_{i+p}^{(n,\lambda)}, 0\leq i\leq l-p, S_{l-p}\leq s+K)\nonumber\\
&\leq ce^{-x+s+K} \frac{\;1\!+\!(x\!-\!s)_+}{(1\!+\!l\!-\!p)^{1\!+\!\frac{1}{\alpha}}} \nonumber,
\end{align}
which is from \eqref{S:3}. Hence
\begin{align}
&\sum_{\frac{\alpha n}{4}<p\leq l}J{(k,l,p)}\nonumber\\
&\leq c e^{s+K}\sum_{\frac{\alpha n}{4}<p\leq l}\frac{1}{\,(1\!+\!l\!-\!p)^{1\!+\!\frac{1}{\alpha}}}\mathbf{E_Q}\Big(\mathbf{1}_{\{\omega_k\in A_k^{(m,~\!\mu)}\cap B_k^{(m,~\!\mu)}\}}\sum_{x\in\Omega(\omega_p)}e^{-V(x)}{\;\big(1\!+\!(V(x)\!-\!s)_+\big)}\Big)\nonumber\\
&\leq c e^s \sum_{\frac{\alpha n}{4}<p\leq l} \frac{ e^{-(p-1)^{\frac{\gamma}{2}}}} {\,(1\!+\!l\!-\!p)^{1\!+\!\frac{1}{\alpha}}}\mathbf{Q}(\omega_k\in A_k^{(m,~\!\mu)}). \nonumber
\end{align}
As a consequence,
\begin{align}
\sum_{\frac{\alpha n}{4}<p\leq l}J{(k,l,p)} \leq c {e^s} e^{-n^{\frac{\gamma}{3}}}m^{-1-\frac{1}{\alpha}}. \label{L:3.3b}
\end{align}
It remains to estimate $I_1(k,l)$ for $n<l\le \alpha n<\frac{\alpha m}{4}<k\le \alpha m$. Clearly,
\begin{align}
I_1(k,l)&\leq \mathbf{Q}(\omega_l\in A_l^{(n,~\!\lambda)}, \omega_k\in A_k^{(m,~\!\mu)})\nonumber\\
&=\mathbf{P}(S_i\ge a_i^{(n,\lambda)},0\le i\le l, S_l\le s+K, S_j\ge a_j^{(m,\mu)}, 0\le j\le k, S_k\le t+K).\nonumber
\end{align}
We use the Markov property at $l$ and \eqref{S:4} to arrive at
\begin{align}
I_1(k,l)&\leq \frac{c}{(k-l)^{1+\frac{1}{\alpha}}}\mathbf{E}\big((1+S_l)\mathbf{1}_{\{S_i\ge a_i^{(n,\lambda)},0\le i\le l, S_l\le s+K\}}\big)\nonumber\\
&\leq c (1+s+K)(k-l)^{-1-\frac{1}{\alpha}}l^{-1-\frac{1}{\alpha}} \nonumber,
\end{align}
which together with \eqref{L:3.3a} and \eqref{L:3.3b} leads to
\begin{align}
I(k,l)\leq ce^s(n^{-1-\frac{1}{\alpha}}m^{-1-\frac{1}{\alpha}}+e^{-n^{\frac{\gamma}{3}}}m^{-1-\frac{1}{\alpha}})+
c (1+s+K)(k-l)^{-1-\frac{1}{\alpha}}l^{-1-\frac{1}{\alpha}} \nonumber.
\end{align}
Recalling \eqref{L:3.3_2}, we have
\begin{align}
\mathbf{P}\big(E(n,\lambda)\cap F(m,\mu)\big)&\leq e^{t+K}\sum_{k=m+1}^{\alpha m}\sum_{l=n+1}^{\alpha n}
c\left(e^s\big(n^{-1-\frac{1}{\alpha}}+e^{-n^{\frac{\gamma}{3}}}\big)m^{-1-\frac{1}{\alpha}}+(1+s+K)(k-l)^{-1-\frac{1}{\alpha}}l^{-1-\frac{1}{\alpha}} \right)\nonumber\\
&\leq c\,e^{-\lambda-\mu}+c\,e^{-\mu}\frac{\log n}{n^{\frac{1}{\alpha}}} .\nonumber
\end{align}
\end{proof}
\noindent\textbf{Proof of (\ref{div})}. Let $f$ be a nondecreasing function such that $\int_0^\infty \frac{dt}{te^{f(t)}}=\infty$. By Erd\H{o}s \cite{Erdos},
we can assume without loss of generality that $\frac{1}{2}\log (\log t)\leq f(t)\leq 2\log (\log t)$ for all large $t$.
Let
\begin{align}
F_x :=\{M_n+x\leq \frac{1}{\alpha}\log n-f(n),\quad \mbox{i.o.}\}, \;x\in\mathbb{R} \nonumber.
\end{align}
We are going to prove that there exists $c_{8}>0$ such that for any $x$,
\begin{align}
\mathbf{P}( F_x)\ge c_{8}. \label{T:1.1_1}
\end{align}
Define $n_i=2^i$, $\lambda_i=f(n_{i+1})+x$ and $E_i=E(n_i,\lambda_i)$. It is easy to see that for any $x\in \mathbb{R}$ we can choose $i_0=i_0(x)$ such that $0\le\lambda_i\le \frac{1}{2\alpha}\log n_i$ for $i\ge i_0$. According to Lemma \ref{L:3.2} and Lemma \ref{L:3.3}, there exists $c>0$ such that for any $i\ge i_0$ and $j\ge i+2$,
\begin{align}
& \frac{1}{c} e^{-\lambda_i}\leq \mathbf{P}(E_i)\le c e^{-\lambda_i} ,\;\;i\ge i_0 ,\nonumber\\
&\mathbf{P}(E_i\cap E_j)\le c e^{-\lambda_i-\lambda_j}+ce^{-\lambda_j}\frac{\log n_i}{n_i^\frac{1}{\alpha}} . \nonumber
\end{align}
It follows that
\begin{align}
&\sum_{i=i_0}^k\mathbf{P}(E_i)\ge c\sum_{i=i_0}^ke^{-\lambda_i},\nonumber\\
&\sum_{i,j=i_0}^k\mathbf{P}(E_i\cap E_j)\leq c\Big(\sum_{i=i_0}^ke^{-\lambda_i}\Big)^2+c\Big(\sum_{i=i_0}^ke^{-\lambda_i}\Big)
\Big(\sum_{i=1}^\infty\frac{\log n_i}{n_i^{\frac{1}{\alpha}}}\Big).
\end{align}
Note that $\sum_{i=i_0}^\infty e^{-\lambda_i}\ge c \sum_{i=i_0}^\infty e^{-f(n_{i+1})}\ge c\sum_{i=i_0}^\infty\int_{n_{i+1}}^{n_{i+2}}
\frac{dt}{te^{f(t)}}=\infty$. Thus we can find a constant $c_8$ (whose choice does not depend on $x$) such that
\begin{align}
\limsup_{k\rightarrow\infty}\frac{\sum_{i,j=1}^k\mathbf{P}(E_i\cap E_j)}{(\sum_{i=1}^k\mathbf{P}(E_i))^2}\leq \frac{1}{c_8}.\nonumber
\end{align}
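For the reader's convenience, we recall the form of the Kochen--Stone refinement of the Borel--Cantelli lemma that will be invoked (a standard statement): if $(E_i)_{i\ge1}$ is a sequence of events with $\sum_i\mathbf{P}(E_i)=\infty$, then
\begin{align*}
\mathbf{P}\Big(\limsup_{i\rightarrow\infty}E_i\Big)\geq \limsup_{k\rightarrow\infty}\frac{\big(\sum_{i=1}^{k}\mathbf{P}(E_i)\big)^2}{\sum_{i,j=1}^{k}\mathbf{P}(E_i\cap E_j)}.
\end{align*}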
By the Kochen--Stone version of the Borel--Cantelli lemma \cite{B_C}, we have $\mathbf{P}(E_i,\;\mbox{i.o.})\ge c_8$, which implies \eqref{T:1.1_1}. Let $F_\infty:=\cap_{x=1}^\infty F_x$. Since the events $F_x$ are non-increasing in $x$, we have $\mathbf{P}(F_\infty)\ge c_8$. We then use the branching property to obtain
\begin{align}
\mathbf{P}(F_\infty|\mathcal{F}_k)=\mathbf{1}_{\{Z_k>0\}}(1-\prod_{|x|=k}(1-\mathbf{P}_{V(x)}(F_\infty)))
\leq \mathbf{1}_{\{Z_k>0\}} (1-(1-c_8)^{Z_k}).
\end{align}
Letting $k\rightarrow\infty$ in the above inequality, we conclude that
\begin{align}
\mathbf{1}_{F_\infty}=\mathbf{1}_{\{\mbox{non-extinction}\}}\;\;\mathbf{P}\mbox{-a.s.}
\end{align}
The divergence part (\ref{div}) is now proved.
$\Box$
\noindent\textbf{Proofs of Theorems 1.1--1.2.} They follow immediately by combining the above proofs of (\ref{conver}) and (\ref{div}).
$\Box$
\noindent\textbf{Proof of Theorem 1.3.} In Theorem~\ref{T:1.2}, taking $f(n)=\log\log n$ and $f(n)=(1+\varepsilon)\log\log n$ for $\varepsilon>0$, we obtain the desired result.
$\Box$
\section{Proof of Theorem 1.4}
\begin{lemma}\label{L:4.1}
Assume \eqref{stable0}, \eqref{stable1}--\eqref{stable3}. For any $\lambda>0$, there is $c_9>0$ such that for each $n\geq1$,
\begin{align}
\mathbf{P}\Big(M_n<\big(1+\frac{1}{\alpha}\big)\log n-\lambda\Big)\leq c_9 (1+\lambda)e^{-\lambda}. \nonumber
\end{align}
\end{lemma}
\begin{proof}
If we have proved that for any $\lambda,\beta>0$, there exists $c$ such that for any $n\geq1$,
\begin{align}
\mathbf{P}\Big(&M_n<\big(1+\frac{1}{\alpha}\big)\log n-\lambda, \min_{|u|\leq n}V(u)\geq-\beta\Big)\nonumber\\ &\leq c(1+\beta)e^{-\lambda}\bigg(1+\frac{\Big(1+\big(\beta+(1+\frac{1}{\alpha})\log n-\lambda\big)_+\Big)^{2\alpha+1}}{n^{\frac{1}{\alpha}}}\bigg), \label{E:4.1}
\end{align}
then by the following fact
\begin{align}
\mathbf{P}(\inf_{u\in \textbf{T}}V(u)<-\lambda)\leq e^{-\lambda}, \nonumber
\end{align}
we obtain Lemma~\ref{L:4.1}.
We now turn to the proof of \eqref{E:4.1}. For brevity, write $b=\big(1+\frac{1}{\alpha}\big)\log n-\lambda-1$. We may assume $b+1>-\beta$, since otherwise there is nothing to prove in \eqref{E:4.1}. For $|u|=n$ with $V(u)<b+1$, either $\min_{\frac{n}{2}\leq j\leq n}V(u_j)>b$ or $\min_{\frac{n}{2}\leq j\leq n}V(u_j)\leq b$; in the latter case, we consider the first $j\in [\frac{n}{2}, n]$ such that $V(u_j)\leq b$. Then
\begin{align}
\mathbf{P}\Big(M_n<\big(1+\frac{1}{\alpha}\big)\log n-\lambda, \min_{|u|\leq n}V(u)\geq -\beta\Big)\leq \mathbf{P}( H_1 )+\mathbf{P}( H_2 ), \label{a0}
\end{align}
where
\begin{align}
& H_1 :=\Big\{\exists |u|=n: V(u)<b+1, \underline{V}(u)\geq -\beta, \min_{\frac{n}{2}\leq j\leq n}V(u_j)>b \Big\}, \nonumber\\
& H_2 :=\bigcup_{\frac{n}{2}\leq j\leq n} \Big\{\exists |u|=n: V(u)<b+1, \underline{V}(u)\geq -\beta, \min_{\frac{n}{2}\leq i< j}V(u_i)>b, V(u_j)\leq b \Big\} \nonumber.
\end{align}
By (\ref{manytoone}) and (\ref{S:4}), we have
\begin{align}
\mathbf{P}(H_1)\leq &\;\mathbf{E}\,\Big(\sum_{|u|=n}\textbf{1}_{\{V(u)<b+1,\, \underline{V}(u)\geq -\beta, \, \min_{\frac{n}{2}\leq j\leq n}V(u_j)>b\}}\Big)\nonumber\\
=&\;\mathbf{E}\,\Big(e^{S_n}\textbf{1}_{\{S_n<b+1,\, \underline{S}_n\geq-\beta, \,\min_{\frac{n}{2}\leq j\leq n}S_j>b\}}\Big) \nonumber\\
\leq &\; c \,e^b(1+\beta)\,n^{-1-\frac{1}{\alpha}} \nonumber\\
\leq & \; c \,(1+\beta)\,e^{-\lambda}.\label{a1}
\end{align}
To deal with $\mathbf{P}(H_2)$, we consider $v=u_j$ and use the notation $ |u|_v:=|u|-|v|=n-j $ and $V_v(u):=V(u)-V(v)$ for $|u|=n$ and $v<u$. Then by the Markov property,
\begin{align}
&\mathbf{P}(H_2) \nonumber\\
\leq & \sum_{\frac{n}{2}\leq j\leq n} \mathbf{E}\Big(\sum_{|v|=j}\textbf{1}_{\{\underline{V}(v)\geq -\beta, \min_{\frac{n}{2}\leq i<j}V(v_i)>b, V(v)\leq b\}}\!\!\sum_{|u|_v=n-j}\textbf{1}_{\{V_v(u)\leq b+1-V(v), \min_{j\leq i\leq n}V_v(u_i)\geq-\beta-V(v)\}}\Big) \nonumber\\
=&\sum_{\frac{n}{2}\leq j\leq n}\mathbf{E}\Big(\sum_{|v|=j}\textbf{1}_{\{\underline{V}(v)\geq -\beta, \min_{\frac{n}{2}\leq i<j}V(v_i)>b, V(v)\leq b\}}\phi(V(v),n-j)\Big) \label{a666}\\
=:&\mathbf{ E_\eqref{a666}} + \mathbf{E'_\eqref{a666}} \label{a2}
\end{align}
where $E_\eqref{a666}$ denotes the sum $\sum_{\frac{n}{2}\leq j\leq \frac{3n}{4}}$ and $E'_\eqref{a666}$ denotes the sum $\sum_{\frac{3n}{4}<j\leq n}$ in \eqref{a2}, and
\begin{align}
\phi(x, n-j):=&\mathbf{E}\Big(\sum_{|u|_v=n-j}\textbf{1}_{\{V_v(u)\leq b+1-V(v), \min_{j\leq i\leq n}V_v(u_i)\geq-\beta-V(v)\}}\Big|V(v)=x\Big)\nonumber\\
=&\mathbf{E}\Big(e^{S_{n-j}}\textbf{1}_{\{S_{n-j}\leq b+1-x,\underline{S}_{n-j}\geq-\beta-x\}}\Big).\nonumber
\end{align}
It follows from \eqref{S:3} that
\begin{align}
\phi(x,n-j)\leq c(1+\beta+x)(2+\beta+b)^\alpha(n-j+1)^{-1-\frac{1}{\alpha}}e^{b-x}. \label{E:4.4}
\end{align}
By \eqref{E:4.4}, (\ref{manytoone}) and then (\ref{S:3}), we obtain that
\begin{align}
E_\eqref{a666} \leq& \;c\sum_{\frac{n}{2}\leq j\leq \frac{3n}{4}}(2+b+\beta)^\alpha n^{-1-\frac{1}{\alpha}}e^b\mathbf{E}\Big((1+\beta+S_j)\textbf{1}_{\{\underline{S}_j\geq-\beta, \min_{\frac{n}{2}\leq i<j}S_i>b, S_j\leq b\}}\Big)\nonumber\\
\leq& \; c \,(2+b+\beta)^{\alpha+1}e^{-\lambda}\sum_{\frac{n}{2}\leq j\leq \frac{3n}{4}}\mathbf{P}(\underline{S}_j\geq-\beta,S_j\leq b)\nonumber\\
\leq&\; c \,(1+\beta)(2+b+\beta)^{2\alpha+1}e^{-\lambda}\sum_{\frac{n}{2}\leq j\leq \frac{3n}{4}}j^{-1-\frac{1}{\alpha}} \nonumber\\
\leq& \; c \,(1+\beta)\frac{(1+\beta+(1+\frac{1}{\alpha})\log n-\lambda)^{2\alpha+1}}{n^{\frac{1}{\alpha}}}e^{-\lambda}. \label{a21}
\end{align}
Meanwhile, by the estimate $\phi(x,n-j)\leq e^{b+1-x}$, we get that
\begin{align}
E'_\eqref{a666} \leq &\sum_{\frac{3n}{4}\leq j\leq n}\mathbf{E}\Big(\sum_{|v|=j}\textbf{1}_{\{\underline{V}(v)\geq-\beta, \min_{\frac{n}{2}\leq i<j}V(v_i)>b, V(v)\leq b\}}e^{b+1-V(v)}\Big)\nonumber\\
=& \;e^{b+1}\sum_{\frac{3n}{4}\leq j\leq n}\mathbf{P}\Big(\underline{S}_j\geq-\beta, \min_{\frac{n}{2}\leq i<j}S_i>b, S_j\leq b\Big)\nonumber\\
\leq &\, c \,e^b(1+\beta)\,n^{-1-\frac{1}{\alpha}}\nonumber\\
\leq &\, c \,(1+\beta)e^{-\lambda}.\label{a22}
\end{align}
Combining the estimates (\ref{a0})--(\ref{a22}), we get \eqref{E:4.1}, which completes the proof.
$\Box$
\end{proof}
\noindent\textbf{Proof of Theorem \ref{T:1.4}}.
Consider a large integer $j$. Let $n_j:=2^j$ and $\lambda_j:=a\log\log\log n_j$ for some constant $0<a<1$. Put
\begin{align}
K_j :=\big\{M_{n_j}>(1+\frac{1}{\alpha})\log n_j+\lambda_j\big\}. \nonumber
\end{align}
Recall that if the system dies out at generation $n_j$, then $M_{n_j}=\infty$. Define
$M_.^{(u)}$ for the subtree $\mathbb{T}_u$ just as $M_.$ for $\mathbb{T}$. Then $$ K_j =\big\{\forall\, |u|=n_{j-1}:\ M_{n_j-n_{j-1}}^{(u)}> (1+\tfrac{1}{\alpha})\log n_j+\lambda_j-V(u)\big\}.$$ By the branching property at generation $n_{j-1}$ we obtain
\begin{align}
\mathbf{P^*}\big( K_j \big|\mathcal{F}_{n_{j-1}}\big)=\prod_{|u|=n_{j-1}}\mathbf{P^*}\Big(M_{n_j-n_{j-1}}\geq
(1+\frac{1}{\alpha})\log n_j+\lambda_j-x\Big)\Big|_{x=V(u)}.\nonumber
\end{align}
By Theorem \ref{T:1.3}, a.s. for all large $j$, $M_{n_{j-1}}\geq \frac{1}{2\alpha}\log n_{j-1}\sim cj$, hence $x\equiv V(u)\gg\lambda_j$ since $\lambda_j\sim a \log\log j$. By Lemma \ref{L:4.1}, on
$\{M_{n_{j-1}}\geq\frac{1}{2\alpha}\log n_{j-1}\}$, for some constant $c>0$ and all $|u|=n_{j-1}$,
\begin{align}
\mathbf{P^*}\Big(M_{n_j-n_{j-1}}<(1+\frac{1}{\alpha})\log n_j+\lambda_j-x\Big)\Big|_{x=V(u)}\leq c\, V(u)e^{-(V(u)-\lambda_j)} .\nonumber
\end{align}
For sufficiently large $j$, it follows that
\begin{align}
\mathbf{P^*}\big(K_j\big|\mathcal{F}_{n_{j-1}}\big)\geq &\textbf{1}_{\{M_{n_{j-1}}\geq \frac{1}{2\alpha}\log n_{j-1}\}}
\prod_{|u|=n_{j-1}}\Big(1-c\, V(u)e^{-(V(u)-\lambda_j)}\Big)\nonumber\\
\geq &\textbf{1}_{\{M_{n_{j-1}}\geq \frac{1}{2\alpha}\log n_{j-1}\}} \exp\Big(-2c \sum_{|u|=n_{j-1}}V(u)e^{-(V(u)-\lambda_j)}\Big)\nonumber\\
= &\textbf{1}_{\{M_{n_{j-1}}\geq \frac{1}{2\alpha}\log n_{j-1}\}} \exp\Big(-2c\, e^{\lambda_j}D_{n_{j-1}}\Big).\nonumber
\end{align}
By \cite[Theorem 1.1]{sta}, $D_{n_{j-1}}\rightarrow D_\infty, \;a.s.$ Recalling $e^{\lambda_j}\sim(\log j)^a$ with $a<1$, we get that
\begin{align}
\sum_j\mathbf{P^*}\big(K_j\big|\mathcal{F}_{n_{j-1}}\big)=\infty, \quad a.s.\nonumber
\end{align}
which, according to L\'evy's conditional form of the Borel--Cantelli lemma (\cite[Corollary 68]{Borel}), implies that $\mathbf{P^*}( K_j, \mbox{i.o.})=1$. Thus
\begin{align}
\limsup_{n\rightarrow\infty}\frac{M_n-(1+\frac{1}{\alpha})\log n}{\log\log\log n}\geq a, \; \mathbf{P^*}-a.s.\nonumber
\end{align}
The proof is completed by letting $a\rightarrow 1$.
$\Box$
\end{document} | math | 38,517 |
\begin{document}
\title{When Kloosterman sums meet Hecke eigenvalues}
\author{Ping Xi}
\address{Department of Mathematics, Xi'an Jiaotong University, Xi'an 710049, P. R. China}\email{[email protected]}
\subjclass[2010]{11L05, 11F30, 11N36, 11T23}
\keywords{Kloosterman sum, Hecke--Maass eigenvalue, equidistribution, Selberg sieve, Riemann Hypothesis over finite fields}
\begin{abstract}
By elaborating a two-dimensional Selberg sieve with asymptotics and equidistributions of Kloosterman sums from $\ell$-adic cohomology, as well as a Bombieri--Vinogradov type mean value theorem for Kloosterman sums in arithmetic progressions, it is proved that for any given primitive Hecke--Maass cusp form of trivial nebentypus, the eigenvalue of the $n$-th Hecke operator does not coincide with the Kloosterman sum $\mathrm{Kl}(1,n)$ for infinitely many squarefree $n$ with at most $100$ prime factors. This provides a partial negative answer to a problem of Katz on modular structures of Kloosterman sums.
\end{abstract}
\dedicatory{{\it \small Dedicated to Professor \'Etienne Fouvry \\ on the occasion of his sixty-fifth birthday }}
\maketitle
\section{Introduction}\label{sec:introduction}
We are concerned with the {\it normalized} Kloosterman sum
\[\mathrm{Kl}(a,c)=\frac{1}{\sqrt{c}}~\sideset{}{^*}\sum_{u\bmod c}\mathrm{e}\Big(\frac{au+\overline{u}}{c}\Big)\]
defined for all $c\in\mathbf{Z}^+$ and $a\in\mathbf{Z}.$ Denote by $\mathcal{P}$ the set of primes. For each $p\in\mathcal{P}$ and $a\in\mathbf{Z}$, the celebrated bound of Weil asserts that $|\mathrm{Kl}(a,p)|\leqslant2,$ so that there exists a certain $\theta_p(a)\in[0,\pi]$ such that
\[\mathrm{Kl}(a,p)=2\cos\theta_p(a).\]
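For concreteness, the normalized sums and their angles are easy to compute directly from the definition. The following Python sketch is purely illustrative (a naive $O(c)$ loop, not part of any argument in this paper); it verifies Weil's bound numerically for a few primes.

```python
import cmath
import math

def kloosterman(a: int, c: int) -> float:
    """Normalized Kloosterman sum Kl(a, c): sum of e((a*u + u^{-1})/c) over
    units u mod c, divided by sqrt(c).  The sum is real, since pairing u
    with c - u conjugates each term."""
    total = sum(
        cmath.exp(2j * math.pi * ((a * u + pow(u, -1, c)) % c) / c)
        for u in range(1, c)
        if math.gcd(u, c) == 1
    )
    return (total / math.sqrt(c)).real

def angle(a: int, p: int) -> float:
    """The angle theta_p(a) in [0, pi] with Kl(a, p) = 2 cos(theta_p(a));
    well defined by Weil's bound |Kl(a, p)| <= 2."""
    return math.acos(kloosterman(a, p) / 2)
```

For instance, $\mathrm{Kl}(1,5)=(2+2\cos(4\pi/5))/\sqrt{5}\approx0.17$, comfortably within Weil's bound.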
In his famous lecture notes, Katz \cite[Chapter 1]{Ka80} proposed the following three problems (with $a\neq0$ fixed):
\begin{enumerate}[(I)]
\item Does the density of $\{p\in\mathcal{P}:\mathrm{Kl}(a,p)>0\}$ in $\mathcal{P}$ exist? If yes, is it equal to $1/2?$
\item Is there a measure on $[0,\pi]$ such that $\{\theta_p(a):p\in\mathcal{P}\}$ equidistributes?
\item Consider the Euler product
\[L_a(s):=\prod_{p\in\mathcal{P},p\nmid a}\Big(1-\frac{\mathrm{Kl}(a,p)}{p^{s}}+\frac{1}{p^{2s}}\Big)^{-1}\]
for $\mathrm{Re}\, s>1.$ Does it define the $L$-function attached to some Maass form of level $\mathfrak{q}$, with $\mathfrak{q}$ a power of $2$?
\end{enumerate}
Problem I is also referred to as the sign change problem of Kloosterman sums, and the first serious progress was made by Fouvry and Michel \cite{FM03b,FM07}, who proved that there are $\gg X/\log X$ squarefree numbers $c\in[X,2X]$ with $\omega(c)\leqslant23$ such that $\mathrm{Kl}(1,c)>0$ (resp. $\mathrm{Kl}(1,c)<0$), where $\omega(c)$ denotes the number of distinct prime factors of $c$. The method of Fouvry and Michel includes a pioneering combination of the Selberg sieve, the spectral theory of automorphic forms and $\ell$-adic cohomology.
The constant 23 was later sharpened by Sivak-Fischler \cite{SF09}, Matom\"aki \cite{Ma11} and the author \cite{Xi15}, and the current record 7 is due to the author \cite{Xi18}. Quite recently, Drappeau and Maynard \cite{DM19} reduced the constant further to 2 by assuming the existence of Landau--Siegel zeros in a suitable way.
Problem II concerns the horizontal equidistribution of Kloosterman sums, and it is expected that the Sato--Tate measure $\mathrm{d}\mu_{\text{ST}}:=\frac{2}{\pi}\sin^2\theta\,\mathrm{d}\theta$ does this job. In fact, Katz \cite[Conjecture 1.2.5]{Ka80} formulated a precise conjecture that for each fixed integer $a\neq0$, the set $\{\theta_p(a):p\in\mathcal{P},p\nmid a\}$ of Kloosterman sum angles should equidistribute with respect to $\mathrm{d}\mu_{\text{ST}}$. It follows immediately from this conjecture that the desired density in Problem I is $1/2$; i.e.,
\[\lim_{x\rightarrow+\infty}\frac{|\{p\in\mathcal{P}\cap[1,x]:\mathrm{Kl}(a,p)>0\}|}{|\mathcal{P}\cap[1,x]|}=\frac{1}{2}.\]
The original Sato--Tate conjecture was first formulated independently by Sato
and Tate in the context of elliptic curves, and then reformulated and extended to
the framework of Hecke eigencuspforms for $SL_2(\mathbf{Z})$ by Serre \cite{Se68}, predicting the similar equidistributions
of Fourier coefficients of such cusp forms.
The conjecture has been confirmed by L. Clozel, M. Harris and R. Taylor \cite{CHT08} for non-CM elliptic curves over $\mathbf{Q}$ with non-integral $j$-invariants, and was later generalized by
Barnet-Lamb, Geraghty, Harris and Taylor \cite{BGHT11} to non-CM holomorphic elliptic modular newforms of weight $k\geqslant2$ and level $N$. Well before this resolution, the vertical analogue, with $p$ fixed and the form varying, was considered independently
by Conrey, Duke and Farmer \cite{CDF97} and Serre \cite{Se97}. In parallel with the vertical Sato--Tate distribution for cusp forms, Katz \cite{Ka88} proved for Kloosterman sums that the set $\{\theta_p(a):1\leqslant a<p\}$ becomes equidistributed with respect to $\mathrm{d}\mu_{\text{ST}}$ as $p\rightarrow+\infty.$
In view of the similarity between the distributions of Kloosterman sums and Hecke eigenvalues of holomorphic cusp forms, it seems natural to expect that $\pm \mathrm{Kl}(1,p)$ might coincide with the $p$-th Fourier coefficient of some holomorphic Hecke cusp form. In fact, thanks to Deligne \cite{De80}, $-\mathrm{Kl}(1,p)$ and the eigenvalue $\lambda_f(p)$ of the $p$-th Hecke operator acting on a primitive holomorphic cusp form $f$ are both known to be Frobenius traces of $\ell$-adic Galois representations of rank 2 and weight 0. Unfortunately, it is easy to see that they cannot coincide, since $\mathrm{Kl}(1,p)$ cannot lie in any fixed number field (see \cite{Bo00} for some discussions).
Problem III of Katz concerns the modular structure of Kloosterman sums, and predicts that the situation might be valid if one considers Maass forms in place of holomorphic ones.
In what follows, we take $f$ to be a primitive Hecke--Maass cusp form of level $\mathfrak{q},$ trivial nebentypus and eigenvalue $\lambda=1/4+t^2,$ so that it is a joint eigenfunction of the Laplacian and Hecke operators. Suppose $f$ admits the following Fourier expansion
\begin{align*}f(z)=\sqrt{y}\sum_{n\neq0}\lambda_f(n)K_{it}(2\pi|n|y)\mathrm{e}(nx),\end{align*}
where $\lambda_f(1)=1$ and $K_\nu$ is the $K$-Bessel function of order $\nu$. Since the nebentypus is trivial, the $\lambda_f(n)$ are real.
As eigenvalues of Hecke operators, $\lambda_f$'s are expected to satisfy the inequality
\begin{align}\label{eq:RamanujanPetersson}
|\lambda_f(n)|\leqslant n^\vartheta\tau(n)
\end{align}
for some $\vartheta<1/2.$ The Ramanujan--Petersson conjecture asserts that $\vartheta=0$ is admissible, and the current record, due to Kim--Sarnak \cite{KS03},
takes $\vartheta=7/64.$ Although it is known that most Hecke--Maass cusp forms $f$ satisfy \eqref{eq:RamanujanPetersson} with $\vartheta=0$ (see Sarnak \cite{Sa87}), the distribution of $\lambda_f(n)$ remains mysterious in many aspects. Problem III is thus two-fold: $\lambda_f(n)$ would be controlled
by virtue of Kloosterman sums; and conversely, the spectral theory of Maass forms might help to extract {\it non-trivial} analytic information about the Euler product $L_a(s),$
which would yield non-trivial progress towards Problems I and II.
Unfortunately, Problem III seems too optimistic to be true, yet no satisfactory approach to attack it seems to have been found; to the best of my knowledge, the only known result on this problem is due to Booker \cite{Bo00} and based on numerical computations: if $\mathrm{Kl}(1,p)=\pm\lambda_f(p)$ for some primitive Hecke--Maass cusp form $f$ of level $\mathfrak{q}=2^\nu$ and eigenvalue $\lambda,$
then $\mathfrak{q}\cdot(\lambda+3)>18.3\times10^6.$
In this paper, we present an analytic-number-theoretic approach to Problem III, which enables us to provide a partial negative answer with almost primes in place of primes.
\begin{theorem}\label{thm:non-identity}
Let $f$ be a primitive Hecke--Maass cusp form with trivial nebentypus.
Then there exist infinitely many squarefree numbers $n$ with at most $100$ prime factors such that
\begin{align*}\lambda_f(n)\neq \pm\mathrm{Kl}(1,n).\end{align*}
Quantitatively, for $\eta\in\{-1,1\},$ there exists a constant $c=c(f)>0$ such that
\begin{align*}|\{n\in[X,2X]:\lambda_f(n)>\eta\cdot\mathrm{Kl}(1,n),~\omega(n)\leqslant 100,~\mu^2(n)=1\}|\geqslant \frac{cX}{\log X}\end{align*}
and
\begin{align*}|\{n\in[X,2X]:\lambda_f(n)<\eta\cdot\mathrm{Kl}(1,n),~\omega(n)\leqslant 100,~\mu^2(n)=1\}|\geqslant \frac{cX}{\log X}\end{align*}
hold for all $X>1/c.$
\end{theorem}
In fact, we can prove the following general theorem.
\begin{theorem}\label{thm:non-identity-general}
For any $\eta\in\mathbf{R}$ and each primitive Hecke--Maass cusp form $f$ of trivial nebentypus,
there exist two constants $c=c(f,\eta)>0$ and $r=r(\eta)<+\infty,$ such that
\begin{align*}|\{n\in[X,2X]:\lambda_f(n)>\eta\cdot\mathrm{Kl}(1,n),~\omega(n)\leqslant r,~\mu^2(n)=1\}|\geqslant \frac{cX}{\log X}\end{align*}
and
\begin{align*}|\{n\in[X,2X]:\lambda_f(n)<\eta\cdot\mathrm{Kl}(1,n),~\omega(n)\leqslant r,~\mu^2(n)=1\}|\geqslant \frac{cX}{\log X}\end{align*}
hold for all $X>1/c.$ In particular, one may take $r(\pm\frac{1}{2018})=25,$ $r(\pm2018)=41.$
\end{theorem}
Theorems \ref{thm:non-identity} and \ref{thm:non-identity-general} are new, at least to the author, even without the restriction on the size of $\omega(n).$
The merit of Theorem \ref{thm:non-identity-general} lies in the flexibility of $\eta$. Although we cannot provide a complete negative answer to Problem III, there seems to be little hope of finding a suitable Hecke--Maass cusp form that captures the modular structure of Kloosterman sums along the lines of Problem III. However, the function field analogue was confirmed by
Chai and Li \cite{CL03}: the relevant Kloosterman sums (defined over the residue field of the completion of the function field at a place $v$) and the Hecke eigenvalues of a certain $GL_2$ automorphic form coincide up to a negative sign.
In a private communication, Katz proposed a problem to consider an analogue of Problem III with the cubic exponential sum
\[B(a,c):=\frac{1}{\sqrt{c}}\sum_{x\bmod c}\mathrm{e}\Big(\frac{x^3+ax}{c}\Big)\]
in place of $\mathrm{Kl}(1,p).$ The vertical Sato--Tate distribution of $B(a,p)$, as $a$ runs over $(\mathbf{Z}/p\mathbf{Z})^\times$ for sufficiently large primes $p$, was already proved by Katz \cite[Section 7.11]{Ka90}, and the horizontal equidistribution of $B(a,p)$ is also expected to hold. However, proving analogues of Theorems \ref{thm:non-identity} and \ref{thm:non-identity-general} for $B$ seems beyond our current approach.
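As with the Kloosterman sums above, $B(a,c)$ can be evaluated numerically; the sketch below (again a naive loop, for illustration only) confirms that $B(a,p)$ is real and, for the primes tested, obeys the Weil-type bound $|B(a,p)|\leqslant2$.

```python
import cmath
import math

def cubic_sum(a: int, c: int) -> float:
    """B(a, c) = c^{-1/2} * sum_{x mod c} e((x^3 + a*x)/c).  Replacing x by
    -x conjugates each term, so the sum is real."""
    total = sum(
        cmath.exp(2j * math.pi * ((x ** 3 + a * x) % c) / c)
        for x in range(c)
    )
    return (total / math.sqrt(c)).real
```

As a sanity check, for $p\equiv2\pmod 3$ the cube map is a bijection modulo $p$, so $B(0,p)=0$ exactly.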
In fact, the proofs of Theorems \ref{thm:non-identity} and \ref{thm:non-identity-general} rely on a kind of Bombieri--Vinogradov type equidistribution
for Kloosterman sums $\mathrm{Kl}(1,c)$ (see Lemma \ref{lm:BV-FM} below), and this was proved by Fouvry and Michel \cite{FM07} by appealing to the spectral theory of automorphic forms. It is thus natural to expect such a theorem should also exist for $B(1,c)$. We would like to mention a similar result due to Louvel \cite{Lo14} that such Bombieri--Vinogradov type equidistribution
holds for cubic exponential sums modulo Eisenstein integers, for which he employed the spectral theory of cubic metaplectic forms, and cubic residue symbols can be well-introduced. However, as Louvel has pointed out, it is not yet known how to move from the cubic exponential sums modulo Eisenstein integers to those modulo rational integers in the horizontal aspect.
Theorems \ref{thm:non-identity} and \ref{thm:non-identity-general} will be proved by
appealing to a weighted Selberg sieve and the arguments will be outlined in the next section.
\subsection*{Notation} As usual, $\mathrm{e}(z)=\mathrm{e}^{2\pi iz}$ and $\mu,\varphi,\tau$ denote the M\"obius, Euler and divisor functions, respectively. We use $\omega(n)$ to count the number of distinct prime factors of $n$. The superscript $*$ in summation indicates primitive elements.
Given $X\geqslant2$, we set $\mathcal{L}=\log X$ and the notation $n\sim N$ means $N<n\leqslant2N.$
For a sequence of coefficients $\boldsymbol\alpha=(\alpha_m)$, denote by $\|\cdot\|_1$ and $\|\cdot\|$ the $\ell_1$- and $\ell_2$-norms, respectively, i.e., $\|\boldsymbol\alpha\|_1=\sum_m|\alpha_m|,\|\boldsymbol\alpha\|=(\sum_m|\alpha_m|^2)^{1/2}.$
\section{Setting-up: outline of the proof}\label{sec:outline}
\subsection{A weighted Selberg sieve}
Suppose $(a_n)_{n\leqslant x}$ is a sequence of non-negative numbers. The sieve method was originally designed to capture how often these numbers are supported on primes, although the current state of the art only allows one to detect {\it almost primes} in most cases.
A convenient approach was invented by Selberg \cite{Se71} in the 1950s in connection with the twin prime conjecture. Precisely, he suggested considering the weighted average
\begin{align*}
\sum_{n\leqslant x}a_nw_n\{\rho-\tau(n)\},
\end{align*}
where $w_n$ is a non-negative function and $\rho$ is to be chosen appropriately so that the total average is positive for all sufficiently large $x$; this yields the existence of $n$ with $\tau(n)<\rho$, and hence, since $\tau(n)\geqslant2^{\omega(n)}$, with $\omega(n)\leqslant\log\rho/\log2.$ The ingenuity then lies in the choice of $w_n$, which should attenuate the contributions of those $n$ having many prime factors. A typical choice for $w_n$, due to Selberg himself, is the square of the M\"obius transform of a certain smooth truncation of the M\"obius function; see \cite[Chapters 4--7, 10]{HR74} and \cite[Chapter 7]{FI10} for detailed discussions.
Focusing on Problem I of Katz on sign changes of Kloosterman sums, the author \cite{Xi15,Xi18} introduced the above weighted Selberg sieve into the context of Kloosterman sums; in that situation, $\tau(n)$ is replaced by a certain truncated divisor function well suited to the application of the vertical Sato--Tate distribution of Kloosterman sums. This experience motivates us to consider Problem III of Katz in a similar manner.
We need to make some preparations.
\begin{itemize}
\item Let $n$ be a positive integer. For $\alpha,\beta>0$ and $\varDelta>1$, define a truncated divisor function
\begin{align}\label{eq:tau(n;alpha,beta)}
\tau_{\varDelta}(n;\alpha,\beta)=\sum_{\substack{d\mid n\\d\leqslant n^\frac{1}{1+\varDelta}}}\alpha^{\omega(d)}\beta^{\omega(n/d)}.\end{align}
\item Let $X$ be a large number and, for a parameter $\vartheta\in~]0,\frac{1}{4}]$, define $D$ by $\sqrt{D}=X^\vartheta\exp(-\sqrt{\mathcal{L}})$. We choose $(\varrho_d)$ such that
\begin{align}\label{eq:varrho_d}
\varrho_d=
\begin{cases}
\mu(d)\Big(\dfrac{\log(\sqrt{D}/d)}{\log\sqrt{D}}\Big)^2,\ \ & d\leqslant\sqrt{D},\\
0, &d>\sqrt{D}.
\end{cases}
\end{align}
\item Let $\varPsi$ be a fixed non-negative smooth function supported in $[1,2]$ with the normalization
\begin{align}\label{eq:normalization-Psi}
\int_{\mathbf{R}}\varPsi(x)\mathrm{d} x=1.
\end{align}
The Mellin transform of $\varPsi$ is defined as
\begin{align*}\widetilde{\varPsi}(s)=\int_{\mathbf{R}}\varPsi(x)x^{s-1}\mathrm{d} x.\end{align*}
Hence $\widetilde{\varPsi}(1)=1$. Integrating by parts, we have
\begin{align*}\widetilde{\varPsi}(s)\ll(|s|+1)^{-A}\end{align*}
for any $A\geqslant0$ with an implied constant depending only on $A$ and $\varPsi$.
\item For any fixed $\eta\in\mathbf{R},$ put
\begin{align}\psi(n)=\psi_{f,\eta}(n):=\lambda_f(n)-\eta\cdot \mathrm{Kl}(1,n).\end{align}
\item For all $z\geqslant2$, define
\begin{align*}P(z)=\prod_{p<z,p\in\mathcal{P}}p.\end{align*}
We will specialize $z$ later as a small power of $X$ such that $z^{12}\leqslant X$.
\end{itemize}
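To fix ideas, the truncated divisor function \eqref{eq:tau(n;alpha,beta)} and the sieve coefficients \eqref{eq:varrho_d} can be tabulated by brute force; the following Python sketch is purely illustrative (trial division throughout, no attempt at efficiency).

```python
import math

def omega(n: int) -> int:
    """Number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def tau_truncated(n: int, alpha: float, beta: float, delta: float) -> float:
    """tau_Delta(n; alpha, beta): sum of alpha^omega(d) * beta^omega(n/d)
    over divisors d of n with d <= n^(1/(1+Delta))."""
    bound = n ** (1.0 / (1.0 + delta))
    return sum(
        alpha ** omega(d) * beta ** omega(n // d)
        for d in range(1, n + 1)
        if n % d == 0 and d <= bound
    )

def mobius(n: int) -> int:
    """Mobius function; returns 0 if n is not squarefree."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def varrho(d: int, D: float) -> float:
    """The weight mu(d) * (log(sqrt(D)/d) / log(sqrt(D)))^2 for d <= sqrt(D),
    and 0 beyond the truncation."""
    sqrt_d = math.sqrt(D)
    if d > sqrt_d:
        return 0.0
    return mobius(d) * (math.log(sqrt_d / d) / math.log(sqrt_d)) ** 2
```

Note that with $\alpha=\beta=1$ the function $\tau_{\varDelta}$ simply counts the divisors below the truncation point, and that $\varrho_1=1$ with $|\varrho_d|\leqslant1$, the usual normalization in the Selberg sieve.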
Our theorems will be concluded by effective evaluations of the following weighted average
\begin{align}\label{eq:Hpm(X)}
H^\pm(X)=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)\{|\psi(n)|\pm\psi(n)\}\{\rho-\tau_{\varDelta}(n;\alpha,\beta)\}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2,\end{align}
where $\rho,\vartheta,\alpha,\beta,z,\varDelta$ are some parameters to be chosen later. Clearly, we have
\begin{align}\label{eq:Hpm(X)-lowerbound}
H^\pm(X)\geqslant\rho\cdot H_1(X)-2H_2(X)\pm\rho\cdot H_3(X)
\end{align}
with
\begin{align*}H_1(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\psi(n)|\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2,\\
H_2(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\psi(n)|\tau_{\varDelta}(n;\alpha,\beta)\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2,\\
H_3(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)\psi(n)\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2.\end{align*}
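For orientation, we record the elementary identity behind the sign detection (a standard remark):
\begin{align*}
|\psi(n)|+\psi(n)=2\max(\psi(n),0),\qquad |\psi(n)|-\psi(n)=2\max(-\psi(n),0),
\end{align*}
so only those $n$ with $\psi(n)>0$ (resp. $\psi(n)<0$) contribute to $H^+(X)$ (resp. $H^-(X)$).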
Note that $\eta$ is contained implicitly in all above and subsequent formulations; we will keep it fixed and not display this until necessary.
The task thus reduces to finding a positive lower bound for $H_1(X)$ and an upper bound for $H_2(X)$ of the same order of magnitude, together with a reasonable estimate for $H_3(X)$. In fact, we may prove the following three propositions.
\begin{proposition}\label{prop:H1(X)-lowerbound}
For large $X$, we have
\begin{align*}
H_1(X)&\geqslant
(1+o(1))\sum_{2\leqslant i\leqslant7}I_i\cdot\sqrt{l_i^3/{u_i}} \cdot \frac{X}{\log X},
\end{align*}
where $I_i,l_i,u_i$ are given as in Proposition $\ref{prop:Sigma(X,alphai)-lowerbound}.$
\end{proposition}
\begin{proposition}\label{prop:H2(X)-upperbound}
Let $\alpha=\frac{3\pi }{8}$ and $\beta=\frac{1}{2}.$ For large $X$, we have
\begin{align*}H_2(X)&\leqslant (1+o(1))(1+|\eta|/2)\mathfrak{S}\Big(\vartheta,\frac{2\vartheta\log X}{\log z}\Big)\frac{X}{\log X},\end{align*}
where $\mathfrak{S}(\cdot,\cdot)$ is defined by $\eqref{eq:fS(gamma,tau)}$.
\end{proposition}
\begin{proposition}\label{prop:H3(X)-estimate}
For large $X$, we have
\begin{align*}H_3(X)&\ll(1+|\eta|)X(\log X)^{-A}\end{align*}
for any $A>0,$ provided that $\vartheta\leqslant\frac{1}{4},$ where the implied constant depends on $A,f$ and $\varPsi.$
\end{proposition}
Upon suitable choices of $\rho,\vartheta$ and $z$, the positivity of $H^\pm(X)$ would imply, for $X$ large enough, that there exists $n\in[X,2X]$ with
\[\tau_{\varDelta}(n;\alpha,\beta)<\rho\]
for which $\lambda_f(n)-\eta\cdot \mathrm{Kl}(1,n)>0$ (or $<0$). In Section \ref{sec:numerical}, we will do necessary numerical computations that lead to Theorems \ref{thm:non-identity} and \ref{thm:non-identity-general}.
\subsection{Ingredients of the proof}
The proofs of Propositions \ref{prop:H1(X)-lowerbound}, \ref{prop:H2(X)-upperbound} and \ref{prop:H3(X)-estimate} form the heart of this paper.
The proof of Proposition \ref{prop:H3(X)-estimate} is not new; it was essentially stated as \cite[Proposition 2.1]{FM07} in a slightly different setting.
For the proof of Proposition \ref{prop:H1(X)-lowerbound}, we restrict to certain specialized integers with a fixed number of prime factors, for which we may exploit the vertical Sato--Tate distribution of Kloosterman sums and moments of Hecke eigenvalues to produce a positive lower bound for $H_1(X)$.
Another new ingredient of this paper is that the lower bound for $H_1(X)$ relies on an economical control of the correlation
\begin{align*}
\sum_{n\in\mathcal{S}}\lambda_f(n)\mathrm{Kl}(1,n),
\end{align*}
where $\mathcal{S}$ is a suitable set of products of a fixed number of distinct primes. It is expected that $\lambda_f(n)$ does not correlate with $\mathrm{Kl}(1,n)$ as $n$ runs over $\mathcal{S}$, and an upper bound beating the trivial estimate $O(|\mathcal{S}|)$ is highly desirable. Unfortunately, we do not know how to capture such cancellations, even when $n$ is relaxed to run over consecutive integers. Alternatively, one may choose to ignore the sign changes of the summands, and a suitable upper bound for
\begin{align*}
\sum_{n\in\mathcal{S}}|\lambda_f(n)\mathrm{Kl}(1,n)|
\end{align*}
with a small scalar might also suffice.
In fact, for $n$ a product of two distinct primes, say $n=qr$ with $q,r\in\mathcal{P}$, we may factor the summand as $|\lambda_f(q)\lambda_f(r)\mathrm{Kl}(\overline{r}^2,q)\mathrm{Kl}(\overline{q}^2,r)|$. Our observation is that
$|\lambda_f(p)|$ and $|\mathrm{Kl}(\overline{p}^2,q)|$ are both, on average, smaller than some $\delta<1$ as $p$ runs over a suitable set of primes; see Lemma \ref{lm:pi-kappa(X)-moments} and Lemma \ref{lm:Kloosterman-Mellin-prime} for details.
A factor $\delta^j$ for some large $j$ then appears when $n$ has more prime factors, and $\delta^j$ can be made considerably small by taking $j$ reasonably large. Restricting $n$ to products of a large number of distinct primes thus leads to quite a small scalar in the upper bound for the above average with absolute values, although we save nothing in the order of magnitude.
Typically, we require $n$ to have 7 distinct prime factors, but this would give rise to a combinatorial disaster when evaluating $\varrho_d$ in the sieve weight to conclude Proposition \ref{prop:H1(X)-lowerbound}. The restriction $d\mid (n,P(z))$ in \eqref{eq:Hpm(X)} is therefore introduced to overcome this computational difficulty.
More precisely, we may restrict $n$ to be a product of certain primes of prescribed sizes larger than $z$; then only $d=1$ survives in the convolution $\sum_{d\mid (n,P(z))}\varrho_d.$
The upper bound for $H_2(X)$ also relies on the vertical Sato--Tate distribution of Kloosterman sums. By appealing to an idea from our previous work \cite{Xi18}, we may reduce the {\it dimension} of the sifting problem by introducing $\tau_{\varDelta}(n;\alpha,\beta)$ with appropriate choices of $\alpha$ and $\beta$, so that the upper bound for $H_2(X)$ can be controlled more effectively. Due to the presence of the condition $d\mid(n,P(z))$, one has to evaluate the $k$-dimensional sifting average
\[\sum_{n}\mu^2(n)b_n\Big(\sum_{d\mid (n,P(z))}\varrho_d\Big)^2\]
with $(b_n)$ a non-negative multiplicative function mimicking $k^{\omega(n)}$ on average.
In particular, upon the choice \eqref{eq:varrho_d} we develop an asymptotic evaluation in the case $k=2$, which we call a two-dimensional Selberg sieve with asymptotics.
A complete and precise statement is included in the appendix.
\subsection{Correlations of Kloosterman sums and Hecke eigenvalues}
Before closing this section, we would like to formulate two conjectures which illustrate the correlations between Kloosterman sums and Hecke eigenvalues.
\begin{conjecture}\label{conj:primes}
Let $f$ be a fixed primitive cusp form $($holomorphic or Maass$).$ For all large $X,$ we have
\begin{align*}
\sum_{p\leqslant X}\lambda_f(p)\mathrm{Kl}(1,p)=o(X\mathcal{L}^{-1}).
\end{align*}
\end{conjecture}
If Conjecture \ref{conj:primes} could be proved, it would follow that $\lambda_f(p)\neq\mathrm{Kl}(1,p)$ for $100\%$ of the primes $p$ for each primitive cusp form $f$, which would provide a negative answer to Problem III of Katz.
In order to study the correlations along prime variables, it is natural to first consider the average over consecutive integers, as mentioned above. To this end, we may formulate the following correlation with a precise saving.
\begin{conjecture}\label{conj:integers}
Let $f$ be a fixed primitive cusp form $($holomorphic or Maass$).$ For all large $X,$ we have
\begin{align*}
\sum_{n\leqslant X}\lambda_f(n)\mathrm{Kl}(1,n)=O(X\mathcal{L}^{-2018}).
\end{align*}
\end{conjecture}
It seems that both conjectures are beyond the current approach, and their resolution should require new ideas from both automorphic forms and algebraic geometry.
\section{Maass forms and Kloosterman sums}\label{sec:Maass-Kloosterman}
\subsection{Maass forms}
We will not need too much information on Maass forms; the following moments of Fourier coefficients at prime arguments cover most of what is required.
Let $f$ be a primitive Hecke--Maass cusp form of level $\mathfrak{q},$ trivial nebentypus and Laplacian eigenvalue $\lambda.$
For each $\kappa\geqslant0$ and $X>1,$ define
\begin{align*}
\pi_\kappa(X)=\sum_{p\leqslant X}|\lambda_f(p)|^\kappa.
\end{align*}
\begin{lemma}\label{lm:pi-kappa(X)-moments}
For all large $X,$ we have
\begin{align*}
\pi_\kappa(X)=c_\kappa(1+o(1))X\mathcal{L}^{-1}
\end{align*}
for $\kappa=0,2,4,6$ with $c_0=c_2=1,$ $c_4=2,$ $c_6=5,$ and
\begin{align*}
\pi_\kappa(X)\leqslant c_\kappa(1+o(1))X\mathcal{L}^{-1}
\end{align*}
for $\kappa=1,3$ with $c_1=\frac{11}{12},$ $c_3=\sqrt{5}.$
\end{lemma}
\proof
We only consider the cases $\kappa\geqslant1.$ Following the approach of Hadamard and de la Vall\'ee Poussin to the classical prime number theorem, it suffices to establish the non-vanishing and holomorphy of the symmetric power $L$-functions $L(\mathrm{sym}^\kappa f,s)$ on the line $\mathrm{Re}\,s=1$ for $\kappa=2,4,6.$
These are already known due to a series of celebrated works \cite{GJ78,KSh00,KSh02,Sh89}.
In fact, $[0,\pi]$ is identified with the set of conjugacy classes of the compact group $SU_2(\mathbf{C})$ via the map
$g\in SU_2(\mathbf{C})\mapsto \mathrm{tr}(g)=2\cos\theta;$
the image of the probability Haar measure of $SU_2(\mathbf{C})$ is just the Sato--Tate measure $\mu_{\mathrm{ST}}.$
For $\kappa=2j$ $(j=1,2,3)$, we have
$$c_\kappa=\int(2\cos\theta)^{2j}\mathrm{d}\mu_{\mathrm{ST}}=\frac{1}{j+1}\binom{2j}{j}.$$
In particular, $c_2=1,$ $c_4=2$ and $c_6=5.$
The value of $c_1$ follows from the asymptotics for $\pi_\kappa(X)$ with $\kappa=0,2,4,6$ and the inequality
\begin{align*}
|y|\leqslant\frac{1}{36}(13+29y^2-7y^4+y^6),
\end{align*}
valid for all $y\in\mathbf{R}$, which goes back to a choice of Holowinsky \cite{Ho09}. The value of $c_3$ follows from Cauchy's inequality and the asymptotics for $\pi_0(X)$ and $\pi_6(X).$
\endproof
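The numerical inputs of the proof above can be checked directly: the moments $c_{2j}=\frac{1}{j+1}\binom{2j}{j}$ of $2\cos\theta$ under $\mu_{\mathrm{ST}}$, Holowinsky's majorant of $|y|$, and the resulting value $c_1=11/12$. The following Python sketch is illustrative only and not part of the argument.

```python
import math

def st_moment(j, steps=100000):
    # midpoint rule for the 2j-th moment of 2*cos(theta) under the
    # Sato-Tate measure d mu_ST = (2/pi) * sin(theta)^2 d theta on [0, pi]
    h = math.pi / steps
    return sum((2 * math.cos((i + 0.5) * h)) ** (2 * j)
               * (2 / math.pi) * math.sin((i + 0.5) * h) ** 2
               for i in range(steps)) * h

def catalan(j):
    return math.comb(2 * j, j) // (j + 1)

def majorant(y):
    # Holowinsky's even polynomial dominating |y| on the real line
    return (13 + 29 * y**2 - 7 * y**4 + y**6) / 36

# c_2 = 1, c_4 = 2, c_6 = 5 are the Catalan numbers
for j in (1, 2, 3):
    assert abs(st_moment(j) - catalan(j)) < 1e-3

# the majorant dominates |y| (checked on a grid), with equality at y = 1
assert all(majorant(y) >= abs(y) - 1e-12 for y in [k / 100 for k in range(-400, 401)])

# combining the majorant with c_0 = c_2 = 1, c_4 = 2, c_6 = 5 gives c_1 = 11/12
assert (13 + 29 - 7 * 2 + 5) / 36 == 11 / 12
```

The tangency of the majorant at $y=1$ explains why the constant $11/12$ cannot be improved within this particular choice of polynomial.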
\subsection{Kloosterman sums}
Following Deligne \cite{De80} and Katz \cite{Ka88}, it is known that
$$a\mapsto-\mathrm{Kl}(a,p)=-2\cos\theta_p(a),\ \ a\in \mathbf{F}_p^\times$$
is the trace function of an $\ell$-adic sheaf $\mathcal{K}l$ on $\mathbf{G}_{m}(\mathbf{F}_p)=\mathbf{F}_p^\times$, which is of rank 2 and pure of weight 0.
Alternatively, we may write
\begin{align*}2\cos\theta_p(a)=\mathrm{tr}(\mathrm{Frob}_a,\mathcal{K}l),\ \ a\in \mathbf{F}_p^\times.\end{align*}
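Concretely, with the standard normalization $\mathrm{Kl}(a,p)=p^{-1/2}\sum_{x\in\mathbf{F}_p^\times}\mathrm{e}((x+a\bar x)/p)$ (an assumption on our part, consistent with $\mathrm{Kl}(a,p)=2\cos\theta_p(a)$), the angles $\theta_p(a)$ can be computed numerically; the sketch below checks Weil's bound $|\mathrm{Kl}(a,p)|\leqslant2$, which is what makes the angles well defined.

```python
import cmath, math

def kloosterman(a, p):
    # normalized sum p^(-1/2) * sum over x in F_p^* of e((x + a*xbar)/p);
    # the sum is real by the substitution x -> -x
    s = sum(cmath.exp(2j * math.pi * ((x + a * pow(x, -1, p)) % p) / p)
            for x in range(1, p))
    return s.real / math.sqrt(p)

def theta(a, p):
    # the Kloosterman angle in [0, pi] with Kl(a, p) = 2*cos(theta_p(a))
    return math.acos(max(-1.0, min(1.0, kloosterman(a, p) / 2)))

p = 101
values = [kloosterman(a, p) for a in range(1, p)]
# Weil's bound |Kl(a, p)| <= 2 makes every angle well defined
assert all(abs(v) <= 2 + 1e-9 for v in values)
```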
By Weyl's criterion and the Peter--Weyl
theorem, Katz's vertical equidistribution, as mentioned in the first section,
reduces to controlling the cancellations in the averages
\begin{align*}\sum_{a\in\mathbf{F}_p^\times}\mathrm{sym}_k(\theta_p(a))=\sum_{a\in\mathbf{F}_p^\times}\mathrm{tr}(\mathrm{Frob}_a,\mathrm{sym}^k\mathcal{K}l),\end{align*}
where $\mathrm{sym}^k\mathcal{K}l$ is the $k$-th symmetric power
of the Kloosterman sheaf $\mathcal{K}l$ (i.e., the composition of the sheaf $\mathcal{K}l$ with the $k$-th symmetric power representation of $SL_2$) and
\begin{align*}\mathrm{sym}_k(\theta)=\frac{\sin(k+1)\theta}{\sin\theta}.\end{align*}
In fact, Katz \cite[Example 13.6]{Ka88} proved that
\begin{align}\label{eq:Katz}
\left|\sum_{a\in\mathbf{F}_p^\times}\mathrm{sym}_k(\theta_p(a))\right|\leqslant\frac{1}{2}(k+1)\sqrt{p}.
\end{align}
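For a small prime, \eqref{eq:Katz} can be tested numerically. The sketch below computes the angles from the classical complete sum (an assumed normalization consistent with $\mathrm{Kl}(a,p)=2\cos\theta_p(a)$) and confirms the bound for $p=101$ and $1\leqslant k\leqslant5$.

```python
import cmath, math

def theta(a, p):
    # Kloosterman angle: Kl(a, p) = 2*cos(theta_p(a)), from the full sum
    s = sum(cmath.exp(2j * math.pi * ((x + a * pow(x, -1, p)) % p) / p)
            for x in range(1, p)).real / math.sqrt(p)
    return math.acos(max(-1.0, min(1.0, s / 2)))

def sym(k, t):
    # sym_k(theta) = sin((k+1)*theta) / sin(theta)
    if abs(math.sin(t)) < 1e-12:
        return (k + 1) * math.cos(t) ** k  # limiting value at theta = 0, pi
    return math.sin((k + 1) * t) / math.sin(t)

p = 101
for k in range(1, 6):
    total = sum(sym(k, theta(a, p)) for a in range(1, p))
    # Katz's bound: |sum over a| <= (k+1)*sqrt(p)/2
    assert abs(total) <= 0.5 * (k + 1) * math.sqrt(p) + 1e-6
```

For $k=1$ the average is in fact tiny: opening the Kloosterman sum shows $\sum_a \mathrm{Kl}(a,p)=p^{-1/2}$, far below the permitted $\sqrt{p}$.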
It is natural to expect that the square-root cancellation also holds if one replaces $\theta_p(a)$ by $\theta_p(\Pi(a))$ for any non-constant rational function $\Pi$ of fixed degree over $\mathbf{F}_p^\times.$ In general, we have the following estimate.
\begin{lemma}\label{lm:Michel}
Let $\psi$ and $\chi$ be additive and multiplicative characters $($not necessarily non-trivial$)$ modulo $p$ and $\Pi$ a fixed non-constant rational function modulo $p$. For each fixed positive integer $k,$ there exists some constant $B$ depending only on $\deg(\Pi),$ such that
\begin{align}\label{eq:psi-Kl}
\sideset{}{^*}\sum_{a\bmod p}\psi(a)\mathrm{sym}_k(\theta_p(\Pi(a)))\ll k^B\sqrt{p}
\end{align}
and
\begin{align}\label{eq:chi-Kl}
\sideset{}{^*}\sum_{a\bmod p}\chi(a)\mathrm{sym}_k(\theta_p(\Pi(a)))\ll k^B\sqrt{p}
\end{align}
hold with implied constants depending at most on $B.$ In particular, one can take $B=1$ if $\Pi(a)=1/a^2.$
\end{lemma}
The case $\Pi(a)=1/a^2$ in \eqref{eq:psi-Kl} was contained in Michel \cite[Corollaire 2.9]{Mi98b}, and there is no essential difference when extending to general $\Pi.$ The bound in Lemma \ref{lm:Michel} rests on the fact that the underlying sheaf
$\mathrm{sym}^k([\Pi^*\mathcal{K}l])$ is of rank $k+1$, while the Artin--Schreier sheaf $\mathcal{L}_\psi$ is of rank 1 if $\psi$ is non-trivial.
These two geometrically irreducible sheaves are not geometrically isomorphic, and the square-root cancellation then follows from Deligne's Riemann Hypothesis \cite{De80} (see also \cite[Theorem 4.1]{FKM15}, for instance, for practical use in analytic number theory).
The bound \eqref{eq:chi-Kl} follows similarly, upon noting that the Kummer sheaf $\mathcal{L}_\chi$ is geometrically irreducible and of rank 1 if $\chi$ is non-trivial.
For $(a,c)=1$, define
\begin{align}\label{eq:Omega}
\varOmega(a,c):=\mathrm{Kl}(\overline{a}^2,c).\end{align}
It follows from the Chinese remainder theorem that the twisted multiplicativity $\varOmega(a,rs)=\varOmega(ar,s)\varOmega(as,r)$ holds for all $a,r,s$ with $(r,s)=(a,rs)=1.$ For each Dirichlet character $\chi\bmod c$, define
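The twisted multiplicativity can be verified numerically. In the sketch below, $\mathrm{Kl}(b,c)$ for squarefree $c$ is taken to be the normalized complete sum $c^{-1/2}\sum_{(x,c)=1}\mathrm{e}((x+b\bar x)/c)$; this normalization is our assumption, chosen to be consistent with the prime case.

```python
import cmath, math

def kl(b, c):
    # normalized Kloosterman sum c^(-1/2) * sum over (x,c)=1 of e((x + b*xbar)/c)
    s = sum(cmath.exp(2j * math.pi * ((x + b * pow(x, -1, c)) % c) / c)
            for x in range(1, c) if math.gcd(x, c) == 1)
    return s.real / math.sqrt(c)

def omega(a, c):
    # Omega(a, c) = Kl(abar^2, c) for (a, c) = 1, as in the displayed definition
    return kl(pow(a, -2, c), c)

r, s = 5, 7
for a in (1, 2, 3, 4, 6):
    # twisted multiplicativity: Omega(a, rs) = Omega(ar, s) * Omega(as, r)
    assert abs(omega(a, r * s) - omega(a * r, s) * omega(a * s, r)) < 1e-9
```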
\begin{align*}
\widetilde{\varOmega}(\chi,c):=\frac{1}{\sqrt{c}}~\sideset{}{^*}\sum_{r\bmod c}\overline{\chi}(r)|\varOmega(r,c)|.
\end{align*}
For $\chi_1\bmod{c_1}$ and $\chi_2\bmod{c_2}$ with $(c_1,c_2)=1,$
the Chinese remainder theorem yields
\begin{align}\label{eq:Kl-Mellin-multiplicativity}
\widetilde{\varOmega}(\chi_1\chi_2,c_1c_2)=\chi_1(c_2)\chi_2(c_1)\widetilde{\varOmega}(\chi_1,c_1)\widetilde{\varOmega}(\chi_2,c_2).
\end{align}
For prime moduli, we have the following asymptotic characterizations.
\begin{lemma}\label{lm:Kloosterman-Mellin-prime}
Let $p$ be a large prime. Then
\begin{align*}
\widetilde{\varOmega}(\chi,p)=\delta_\chi \sqrt{p}+O(\log p),
\end{align*}
where $\delta_\chi$ vanishes unless $\chi$ is the trivial character mod $p,$ in which case it is equal to $\frac{8}{3\pi},$ and the implied constant is absolute.
\end{lemma}
\proof In view of Lemma \ref{lm:Michel}, we may apply Lemma \ref{lm:cos-Chebyshev} with
\[J=\varphi(p), \ \ B=1,\ \ U=c\sqrt{p},\]
\[\{y_j\}_{1\leqslant j\leqslant J}=\{\overline{\chi}(r):1\leqslant r\leqslant p-1\}, \ \ \{\theta_j\}_{1\leqslant j\leqslant J}=\{\theta_p(\overline{r}^2):1\leqslant r\leqslant p-1\},\]
where $c$ is a large suitable constant,
so that
\begin{align*}
\sqrt{p}\widetilde{\varOmega}(\chi,p)-\frac{8}{3\pi}~\sideset{}{^*}\sum_{r\bmod p}\overline{\chi}(r)
&\ll\sqrt{p}\log K+\frac{p^{3/2}}{K}
\end{align*}
for any $K>1$.
The proof is completed by taking $K=p.$
\endproof
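Numerically, the trivial-character case of this lemma is already visible for moderate primes. The sketch below compares $\widetilde{\varOmega}(\chi_0,p)$ with the main term $\frac{8}{3\pi}\sqrt{p}$ for $p=499$; the $25\%$ tolerance is a generous stand-in for the $O(\log p)$ error, whose implied constant we do not know explicitly.

```python
import cmath, math

p = 499  # a moderate prime
inv = [0] + [pow(x, -1, p) for x in range(1, p)]
e = [cmath.exp(2j * math.pi * t / p) for t in range(p)]

def kl(a):
    # normalized Kloosterman sum Kl(a, p) = 2*cos(theta_p(a))
    return sum(e[(x + a * inv[x]) % p] for x in range(1, p)).real / math.sqrt(p)

table = {a: kl(a) for a in range(1, p)}
# tilde-Omega(chi_0, p) = p^(-1/2) * sum over r of |Kl(rbar^2, p)|
tilde = sum(abs(table[pow(r, -2, p)]) for r in range(1, p)) / math.sqrt(p)
main = 8 / (3 * math.pi) * math.sqrt(p)
# the main term should dominate, up to an O(log p) error
assert abs(tilde - main) < 0.25 * main
```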
\begin{lemma}\label{lm:Kloosterman-Mellin-composite}
Let $q\geqslant2$ be a squarefree number and $\chi$ a primitive character mod $q$. Then we have
\begin{align*}
|\widetilde{\varOmega}(\chi,q)|\leqslant c^{\omega(q)}\log q
\end{align*}
for some absolute constant $c>0.$
\end{lemma}
\proof
In view of \eqref{eq:Kl-Mellin-multiplicativity}, we have
\begin{align*}
|\widetilde{\varOmega}(\chi,q)|=\prod_{p\mid q}|\widetilde{\varOmega}(\chi_p,p)|,
\end{align*}
where $\chi_p$ is a non-trivial character mod $p$. By Lemma \ref{lm:Kloosterman-Mellin-prime}, there exists some absolute constant $c_0>0$, such that
\begin{align*}
|\widetilde{\varOmega}(\chi,q)|
\leqslant\prod_{p\mid q}c_0\log p=c_0^{\omega(q)}\prod_{p\mid q}\log p
\leqslant c_0^{\omega(q)}\sum_{d\mid q}\mu^2(d)\log d
\leqslant(2c_0)^{\omega(q)}\log q.
\end{align*}
This completes the proof of the lemma by taking $c=2c_0$.
\endproof
The following bilinear form estimate can be found in \cite[Corollaire 2.11]{Mi95}, and a more general statement has been proved in \cite[Theorem 1.17]{FKM14}.
\begin{lemma}\label{lm:bilinear}
Let $p$ be a large prime and $(a,p)=1.$
For each $k\geqslant1$ and any coefficients $\boldsymbol\alpha=(\alpha_m),\boldsymbol\beta=(\beta_n),$ we have
\begin{align*}\mathop{\sum_{m\sim M}\sum_{n\sim N}}_{(mn,p)=1}
\alpha_m\beta_n\mathrm{sym}_k(\theta_p(\overline{(amn)^2}))\ll
\|\boldsymbol\alpha\|\|\boldsymbol\beta\|(MN)^{\frac{1}{2}}(p^{-\frac{1}{4}}+N^{-\frac{1}{2}}+M^{-\frac{1}{2}}p^{\frac{1}{4}}(\log p)^{\frac{1}{2}}),\end{align*}
where the implied constant depends polynomially on $k$.
\end{lemma}
\begin{remark}
Lemma \ref{lm:bilinear} is non-trivial as long as $N>\log p,M>p^{\frac{1}{2}}(\log p)^2$ and $p>\log(MN).$
\end{remark}
The following lemma was originally proved by Fouvry and Michel \cite[Proposition 7.2]{FM07} using $\ell$-adic cohomology.
\begin{lemma}\label{lm:bilinearform-twisted}
Suppose $q=q_1q_2\cdots q_s$ with $q_1,q_2,\cdots,q_s$ being distinct primes. For each $s$-tuple of positive integers $\mathbf{k}=(k_1,k_2,\cdots,k_s),$ and any coefficients $\boldsymbol\alpha=(\alpha_m),\boldsymbol\beta=(\beta_n),\boldsymbol\gamma=(\gamma_{m,n})$ with $m\equiv m'\bmod n\Rightarrow\gamma_{m,n}=\gamma_{m',n},$ we have
\begin{align*}\mathop{\sum_{m\sim M}\sum_{n\sim N}}_{(mn,q)=1}\alpha_m\beta_n\gamma_{m,n}&\prod_{1\leqslant j\leqslant s}\mathrm{sym}_{k_j}(\theta_{q_j}(\overline{(mnq/q_j)^2}))\\
&\ll c(s;\mathbf{k})
\|\boldsymbol\alpha\|\|\boldsymbol\beta\|\|\boldsymbol\gamma\|_{\infty}(MN)^{\frac{1}{2}}(q^{-\frac{1}{8}}+N^{-\frac{1}{4}}q^{\frac{1}{8}}+M^{-\frac{1}{2}}N^{\frac{1}{2}}),\end{align*}
where $c(s;\mathbf{k})=3^s\prod_{j=1}^s(k_j+1)$ and the implied constant is absolute.
\end{lemma}
\begin{remark}
Lemma \ref{lm:bilinearform-twisted} is non-trivial as long as $M>N\log q>q^{\frac{1}{2}}(\log q)^2$ and $q>\log(MN).$
\end{remark}
\begin{lemma}\label{lm:bilinearform-avergeovermoduli}
Let $P,M\geqslant3.$ Suppose $\boldsymbol\gamma=(\gamma_p)$ is a complex coefficient
supported on primes in $]P,2P]$ and $\Pi$ is a fixed non-constant rational function with integer coefficients in its numerator and denominator. Then there exists some constant $B=B(\deg(\Pi))>0,$ such that for each $k\geqslant1$ and arbitrary coefficient $\boldsymbol\alpha=(\alpha_m)$ supported in $]M,2M],$
\begin{align*}
\sum_{p\sim P}\gamma_p\sum_{m\sim M}\alpha_m \mathrm{sym}_k(\theta_p(\Pi(m)))\ll k^B(M^{\frac{1}{2}}+P\log P)\|\boldsymbol\alpha\|\|\boldsymbol\gamma\|
\end{align*}
holds with some implied constant depending at most on $B.$
\end{lemma}
\begin{remark}
A typical situation is $\gamma_p\equiv1$, in which case Lemma \ref{lm:bilinearform-avergeovermoduli} becomes non-trivial as long as $P,M/(P\log^2P)\rightarrow+\infty.$ It is an important and challenging problem to beat the barrier $M=P$ for a general coefficient $\boldsymbol\alpha=(\alpha_m)$. We would like to mention a deep result of Michel \cite{Mi98a}, who considered the special case $k=1,$ $\boldsymbol\gamma\equiv 1,$ $\Pi(m)=m$ and obtained non-trivial bounds even when $M$ is quite close to $\sqrt{P}.$
\end{remark}
\proof
Write $K(m,p)=\mathrm{sym}_k(\theta_p(\Pi(m)))$ and
denote by $S$ the average in question. First, by Cauchy's inequality, we have
\begin{align}\label{eq:Cauchy}
|S|^2\leqslant \|\boldsymbol\alpha\|^2\varSigma
,\end{align}
where
\begin{align*}
\varSigma=\sum_{m\sim M}\left|\sum_{p\sim P}\gamma_pK(m,p)\right|^2.\end{align*}
Squaring out and switching summations, we find
\begin{align*}
\varSigma=\mathop{\sum\sum}_{p_1,p_2\sim P}\gamma_{p_1}\overline{\gamma}_{p_2}\sum_{m\sim M}K(m,p_1)\overline{K(m,p_2)}=\varSigma^=+\varSigma^{\neq},\end{align*}
where we split the double sum over $p_1,p_2$ according to $p_1=p_2$ or $p_1\neq p_2.$
Trivially, we have
\begin{align}\label{eq:Sigma=}
\varSigma^==\sum_{p\sim P}|\gamma_p|^2\sum_{m\sim M}|K(m,p)|^2\leqslant (k+1)^2M\|\boldsymbol\gamma\|^2.
\end{align}
By the completion method, we may derive, for $p_1\neq p_2,$ that
\begin{align*}
\sum_{m\sim M}K(m,p_1)\overline{K(m,p_2)}
&=\sum_{r\bmod{p_1p_2}}K(r,p_1)\overline{K(r,p_2)}\sum_{\substack{m\sim M\\ m\equiv r\bmod{p_1p_2}}}1\\
&=\frac{1}{\sqrt{p_1p_2}}\sum_{|h|\leqslant \frac{1}{2}p_1p_2}\sum_{m\sim M}\mathrm{e}\Big(\frac{hm}{p_1p_2}\Big)\widehat{K}(h\overline{p_2},p_1)\overline{\widehat{K}(-h\overline{p_1},p_2)},\end{align*}
where
\begin{align*}
\widehat{K}(y,p)=\frac{1}{\sqrt{p}}~\sideset{}{^*}\sum_{x\bmod p}K(x,p)\mathrm{e}\Big(\frac{-xy}{p}\Big).\end{align*}
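The completion step above is pure Fourier analysis modulo $p_1p_2$ and can be checked with generic data. The sketch below verifies the identity for random real functions supported on the units (test data only, not the actual $\mathrm{sym}_k$ values), modeling $m\sim M$ as $m\in\{M+1,\dots,2M\}$, which is an assumption of the experiment.

```python
import cmath, math, random

def khat(K, p):
    # normalized finite Fourier transform: Khat(y, p) = p^(-1/2) sum_x K(x) e(-xy/p)
    return [sum(K[x] * cmath.exp(-2j * math.pi * x * y / p) for x in range(1, p))
            / math.sqrt(p) for y in range(p)]

p1, p2 = 7, 11
q = p1 * p2
rng = random.Random(0)
K1 = [0.0] + [rng.uniform(-1, 1) for _ in range(p1 - 1)]  # supported on units mod p1
K2 = [0.0] + [rng.uniform(-1, 1) for _ in range(p2 - 1)]  # supported on units mod p2
K1hat, K2hat = khat(K1, p1), khat(K2, p2)

M = 30
lhs = sum(K1[m % p1] * K2[m % p2] for m in range(M + 1, 2 * M + 1))

p1bar, p2bar = pow(p1, -1, p2), pow(p2, -1, p1)
# h runs over a full residue system mod q (equivalent to |h| <= q/2)
rhs = sum(K1hat[(h * p2bar) % p1] * K2hat[(-h * p1bar) % p2].conjugate()
          * sum(cmath.exp(2j * math.pi * h * m / q) for m in range(M + 1, 2 * M + 1))
          for h in range(q)) / math.sqrt(q)
assert abs(lhs - rhs.real) < 1e-8 and abs(rhs.imag) < 1e-8
```

The change of variables behind the check is exactly the one in the display: $y_1/p_1-y_2/p_2=h/q$ with $y_1\equiv h\bar p_2\ (p_1)$ and $y_2\equiv-h\bar p_1\ (p_2)$.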
From Lemma \ref{lm:Michel} it follows that
\begin{align*}
\varSigma^{\neq}
&\leqslant k^B\mathop{\sum\sum}_{p_1\neq p_2\sim P}
\frac{|\gamma_{p_1}\gamma_{p_2}|}{\sqrt{p_1p_2}}\sum_{|h|\leqslant \frac{1}{2}p_1p_2}\min\Big\{M,\frac{p_1p_2}{h}\Big\}\\
&\ll k^B(M+P^2\log P)\|\boldsymbol\gamma\|^2.\end{align*}
Combining this with \eqref{eq:Sigma=}, we find
\begin{align*}
\varSigma\ll k^B(M+P^2\log P)\|\boldsymbol\gamma\|^2,\end{align*}
from which and \eqref{eq:Cauchy}, the lemma follows immediately.
\endproof
\begin{lemma}\label{lm:bilinearaverageoverprimes}
Let $P,X\geqslant3.$ Suppose $\boldsymbol\gamma=(\gamma_p)$ is a complex coefficient
supported on primes in $]P,2P]$ and $\nu$ is a multiplicative function such that
\[\sum_{n\leqslant N}\tau(n)|\nu(n)|^2\ll N(\log N)^\kappa\]
for some constant $\kappa\geqslant1.$ Then we have
\begin{align*}
\sum_{p\sim P}\gamma_p\sum_{\substack{n\sim X\\(n,p)=1}}\mu^2(n)&\nu(n)\Lambda(n)|\varOmega(n,p)|=
\frac{8}{3\pi}\sum_{p\sim P}\gamma_p\sum_{\substack{n\sim X\\(n,p)=1}}\mu^2(n)\nu(n)\Lambda(n)\\
&\ \ \ \ +O\Big(\mathcal{L}^{A}\{PX^{\frac{1}{2}}+P^{\frac{1}{4}}X+P^{\frac{1}{2}}X\mathcal{L}^{-2A}+(PX)^{\frac{3}{4}}\}\|\boldsymbol\gamma\|\Big)
\end{align*}
for any $A>\kappa+2$, where the implied constant depends only on $A$ and $\kappa$.
\end{lemma}
\begin{remark}
Lemma \ref{lm:bilinearaverageoverprimes} is non-trivial as long as $\mathcal{L}\ll P\ll X\mathcal{L}^{-3A}.$
\end{remark}
\proof
In view of the Chebyshev approximation for $|\cos|$ (see Lemma \ref{lm:cos-Chebyshev}), it suffices to consider
\begin{align*}
\sum_{p\sim P}\gamma_p\sum_{\substack{n\sim X\\(n,p)=1}}\mu^2(n)&\nu(n)\Lambda(n)\mathrm{sym}_k(\theta_p(\overline{n^2})).
\end{align*}
By virtue of Vaughan's identity (see \cite[Proposition 13.4]{IK04} for instance), we may decompose the sum over $n$ to bilinear forms
and consider
\begin{align*}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)=\sum_{p\sim P}\gamma_p\mathop{\sum\sum}_{\substack{m\sim M,n\sim N\\(mn,p)=1}}\alpha_m\beta_n\mu^2(mn)\nu(mn)\mathrm{sym}_k(\theta_p(\overline{(mn)^2})),\end{align*}
where $\boldsymbol\alpha=(\alpha_m),\boldsymbol\beta=(\beta_n)$ are some coefficients
supported in $]M,2M]$ and $]N,2N],$ respectively,
such that
$|\alpha_m\beta_n|\leqslant10+\log m\log n$. Here $M,N$ are chosen subject to
\begin{align}\label{eq:MN-sizes}
X\mathcal{L}^{-C}<MN\leqslant X,\ \ \ M\geqslant N,
\end{align}
where $C$ is some large constant. We would like to prove that
\begin{align}\label{eq:T(alpha,beta,gamma)-estimate}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)
&\ll k^A\mathcal{L}^{A-2}\{PX^{\frac{1}{2}}+P^{\frac{1}{4}}X+P^{\frac{1}{2}}X\mathcal{L}^{-2A}+(PX)^{\frac{3}{4}}\}\|\boldsymbol\gamma\|,\end{align}
subject to the restrictions in \eqref{eq:MN-sizes}, for any $A>\kappa+2$ and some $C>0$. The lemma then follows from \eqref{eq:T(alpha,beta,gamma)-estimate} immediately.
The restriction $MN>X\mathcal{L}^{-C}$ is harmless, since the terms with $MN\leqslant X\mathcal{L}^{-C}$ contribute at most $O(\|\boldsymbol\gamma\|X(P\mathcal{L}^{\kappa-C})^{\frac{1}{2}})$.
The restriction $M\geqslant N$ causes no loss of generality, thanks to the symmetric roles of $\boldsymbol\alpha$ and $\boldsymbol\beta$.
There is an implicit restriction $(m,n)=1$ in the inner sums due to the appearance of $\mu^2(mn)$, in which case we have $\nu(mn)=\nu(m)\nu(n).$ In this way, we may write
\begin{align*}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)=\sum_{p\sim P}\gamma_p\mathop{\sum\sum}_{\substack{m\sim M,n\sim N\\(mn,p)=(m,n)=1}}\alpha^*(m)\beta^*(n)\mathrm{sym}_k(\theta_p(\overline{(mn)^2})),\end{align*}
with $\alpha^*(m)=\mu^2(m)\alpha_m\nu(m)$ and $\beta^*(n)=\mu^2(n)\beta_n\nu(n)$.
Furthermore, the M\"obius formula gives
\begin{align}\label{eq:trilinear-bilinear}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)=\sum_d\mu(d)\sum_{\substack{p\sim P\\ p\nmid d}}\gamma_p\mathop{\sum\sum}_{\substack{m\sim M/d,n\sim N/d\\(mn,p)=1}}\alpha^*(md)\beta^*(nd)\mathrm{sym}_k(\theta_p(\overline{d^4(mn)^2})).\end{align}
For each fixed $d$, we have two alternative ways to estimate the trilinear forms in \eqref{eq:trilinear-bilinear} by appealing to Lemmas \ref{lm:bilinear} and \ref{lm:bilinearform-avergeovermoduli}.
If $N\leqslant \mathcal{L}^C$, we then have $M>X\mathcal{L}^{-2C}$ by
\eqref{eq:MN-sizes}, and by Lemma \ref{lm:bilinearform-avergeovermoduli} we may derive that
\begin{align}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)
&\ll k^B\|\boldsymbol\gamma\|\sum_d((M/d)^{\frac{1}{2}}+P)\Big(\sum_{m\sim M/d}|\alpha^*(md)|^2\Big)^{\frac{1}{2}}\Big(\sum_{n\sim N/d}|\beta^*(nd)|\Big)\nonumber\\
&\ll k^B(M^{\frac{1}{2}}+P)M^{\frac{1}{2}}N\mathcal{L}^{1+\kappa}\|\boldsymbol\gamma\|\nonumber\\
&\ll k^B(X+PX^{\frac{1}{2}}\mathcal{L}^C)\mathcal{L}^{1+\kappa}\|\boldsymbol\gamma\|.\label{eq:N-small}\end{align}
We now consider the case $N> \mathcal{L}^C$. By
\eqref{eq:MN-sizes}, we have $M>X^{\frac{1}{2}}\mathcal{L}^{-\frac{C}{2}}$. From Lemma \ref{lm:bilinear} it follows that
\begin{align}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)
&\ll k^B(MN)^{\frac{1}{2}}\sum_d\frac{1}{d}\sum_{p\sim P}|\gamma_p|\Big(\sum_{m\sim M/d}|\alpha^*(md)|^2\Big)^{\frac{1}{2}}\Big(\sum_{n\sim N/d}|\beta^*(nd)|^2\Big)^{\frac{1}{2}}\nonumber\\
&\ \ \ \ \times(p^{-\frac{1}{4}}+(d/N)^{\frac{1}{2}}+(d/M)^{\frac{1}{2}}p^{\frac{1}{4}}(\log p)^{\frac{1}{2}})\nonumber\\
&\ll k^BP^{\frac{1}{2}}X\mathcal{L}^{\kappa+3}(P^{-\frac{1}{4}}+N^{-\frac{1}{2}}+M^{-\frac{1}{2}}P^{\frac{1}{4}})\|\boldsymbol\gamma\|\nonumber\\
&\ll k^BP^{\frac{1}{2}}X\mathcal{L}^{\kappa+3}(P^{-\frac{1}{4}}+\mathcal{L}^{-\frac{C}{2}}+(X^{-1}P\mathcal{L}^C)^{\frac{1}{4}})\|\boldsymbol\gamma\|.\label{eq:N-large}
\end{align}
Combining \eqref{eq:N-small} and \eqref{eq:N-large}, we conclude that
\begin{align*}
T(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma)
&\ll k^B\mathcal{L}^{\kappa+3}\{PX^{\frac{1}{2}}\mathcal{L}^{C+1}+P^{\frac{1}{4}}X+P^{\frac{1}{2}}X\mathcal{L}^{-\frac{C}{2}}+(PX)^{\frac{3}{4}}\mathcal{L}^{\frac{C}{4}}\}\|\boldsymbol\gamma\|\end{align*}
holds uniformly in all tuples $(M,N)$ subject to the restrictions in \eqref{eq:MN-sizes}.
This completes the proof of \eqref{eq:T(alpha,beta,gamma)-estimate}, and thus that of the lemma, after taking into account the initial error $O(\|\boldsymbol\gamma\|X(P\mathcal{L}^{\kappa-C})^{\frac{1}{2}})$ and choosing $A=(C+\kappa+3)/10.$
\endproof
\section{A generalization of the Barban--Davenport--Halberstam theorem}
Regarding the equidistributions of primes in arithmetic progressions, the classical Barban--Davenport--Halberstam theorem (see e.g., \cite[Theorem 17.2]{IK04}) asserts that
\begin{align*}
\sum_{q\leqslant Q}~\sideset{}{^*}\sum_{a\bmod q}\Big|\sum_{\substack{n\leqslant X\\ n\equiv a\bmod q}}\Lambda(n)-\frac{1}{\varphi(q)}\sum_{\substack{n\leqslant X\\ (n,q)=1}}\Lambda(n)\Big|^2\ll X\mathcal{L}^{-A}\end{align*}
for any $A>0$, as long as $Q\leqslant X\mathcal{L}^{-B}$ with some $B=B(A)>0,$ where the implied constant depends only on $A$.
As shown by Bombieri, Friedlander and Iwaniec \cite[Theorem 0]{BFI86}, the above estimate also holds if $\Lambda$ is replaced by an arbitrary function $\vartheta_n$ satisfying the following ``Siegel--Walfisz" condition.
\begin{definition}
An arithmetic function $\vartheta$ is said to satisfy the ``Siegel--Walfisz" condition, if for any $w\geqslant1,d\geqslant1,(w,a)=1,a\neq0,$
\begin{align}\label{eq:SiegelWalfisz}
\sum_{\substack{n\leqslant X\\ n\equiv a\bmod w\\(n,d)=1}}\vartheta_n-\frac{1}{\varphi(w)}\sum_{\substack{n\leqslant X\\(n,dw)=1}}\vartheta_n\ll\|\boldsymbol\vartheta\|X^{\frac{1}{2}}\tau(d)^{B}\mathcal{L}^{-A} \end{align}
holds for some constant $B>0$ and any $A>0$ with an implied constant in $\ll$ depending only on $A$.
\end{definition}
In the subsequent treatment of $H_2(X),$ we require a further generalization, which involves the equidistribution of the convolution of two arbitrary arithmetic functions, one of which satisfies the
``Siegel--Walfisz" condition. Moreover, we also need the following notion of admissibility, which concerns the $q$-analogue of the Mellin transform of $W_q:\mathbf{Z}/q\mathbf{Z}\rightarrow\mathbf{C},$ defined by
\begin{align*}
\widetilde{W_q}(\chi)
=\frac{1}{\sqrt{q}}~\sideset{}{^*}\sum_{r\bmod q}\overline{\chi}(r)W_q(r).
\end{align*}
Here $\chi$ is a Dirichlet character mod $q.$
\begin{definition} \label{def:admissibility}
Let $q\geqslant1$ be a squarefree number, $k\in\mathbf{Z}$ and $C>0$ a constant. An arithmetic function
$\varXi_q:\mathbf{Z}/q\mathbf{Z}\rightarrow\mathbf{C}$ is said to be $(k,C)$-admissible, if
\begin{itemize}
\item $\varXi_{q_1q_2}(\cdot)=\varXi_{q_1}(q_2^k\cdot)\varXi_{q_2}(q_1^k\cdot)$ for all $q_1,q_2\geqslant1$ with $\mu^2(q_1q_2)=1;$
\item for each primitive character $\chi\bmod q$, one has $\|\varXi_q\|_\infty+|\widetilde{\varXi}_q(\chi)|\leqslant (\tau(q)\log 2q)^C.$
\end{itemize}
\end{definition}
\begin{remark}
By Lemma \ref{lm:Kloosterman-Mellin-composite}, one sees that $\varXi_q$ is $(1,B)$-admissible for some $B>0$
upon taking
\begin{align*}
\varXi_q(a)=
\begin{cases}
|\varOmega(a,q)|,\ \ &(a,q)=1,\\
0,&(a,q)>1.
\end{cases}
\end{align*}
\end{remark}
For a $(k,C)$-admissible arithmetic function $\varXi_q$ as above, the Chinese remainder theorem yields
\begin{align}\label{eq:Xi-Mellin-multiplicativity}
\widetilde{\varXi}_q(\chi)=\chi_1(q_2)^k\chi_2(q_1)^k\widetilde{\varXi}_{q_1}(\chi_1)\widetilde{\varXi}_{q_2}(\chi_2)
\end{align}
for all $q_1q_2=q,\chi_1\chi_2=\chi$ with $\chi_1\bmod{q_1}$ and $\chi_2\bmod{q_2}.$
We are now ready to state our generalization of the Barban--Davenport--Halberstam theorem.
\begin{lemma}\label{lm:bilinear-BDH}
Let $M,N,C>0$ and $q\geqslant1$ squarefree. Let $\boldsymbol\alpha=(\alpha_m)$ be a complex coefficient with support in $[M,2M]$ satisfying the above ``Siegel--Walfisz" condition, and let $\boldsymbol\beta=(\beta_n),\boldsymbol\gamma_q=(\gamma_{n,q})$ be
complex coefficients with supports in $[N,2N]$ such that $\|\boldsymbol\gamma_q\|_\infty \leqslant (\tau(q)\log2q)^C.$
For a $(k,C)$-admissible arithmetic function $\varXi_q$ with some $k\in\mathbf{Z}$, put
\begin{align*}
\mathcal{E}(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma_q;q,\varXi_q)&=\mathop{\sum\sum}_{(mn,q)=1}\alpha_m\beta_n\gamma_{n,q}
\varXi_q(mn)-\frac{1}{\varphi(q)}\sideset{}{^*}\sum_{r\bmod q}\varXi_q(r)\mathop{\sum\sum}_{(mn,q)=1}\alpha_m\beta_n\gamma_{n,q}\\
&=\sideset{}{^*}\sum_{r\bmod q}\varXi_q(r)\Big(\mathop{\sum\sum}_{mn\equiv r\bmod q}\alpha_m\beta_n\gamma_{n,q}-\frac{1}{\varphi(q)}\mathop{\sum\sum}_{(mn,q)=1}\alpha_m\beta_n\gamma_{n,q}\Big).\end{align*}
Let $r\geqslant1$ and $M\geqslant N$. For any $A>0$, there exists some constant $B=B(A,C)>0,$ such that
\begin{align*}
\sum_{q\leqslant Q}\mu^2(q)\tau(q)^r|\mathcal{E}(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma_{q};q,\varXi_q)|\ll \|\boldsymbol\alpha\| \|\boldsymbol\beta\|Q(MN)^{\frac{1}{2}}(\log MN)^{-A}\end{align*}
for $Q\leqslant MN(\log MN)^{-B},$
where the implied constant depends only on $A,C$ and $r$.
\end{lemma}
\proof
In what follows, $B_0,B_1,B_2,\cdots,B_{11}$ denote positive constants whose values we will not specify. Moreover, $q$ is always assumed to be squarefree.
By virtue of orthogonality of multiplicative characters, we may write
\begin{align*}
\mathcal{E}(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma_{q};q,\varXi_q)
&=\frac{\sqrt{q}}{\varphi(q)}\sum_{\substack{\chi\bmod{q}\\ \chi\neq\chi_0}}\widetilde{\varXi}_q(\chi)\Big(\sum_m\alpha_m\chi(m)\Big)\Big(\sum_n\beta_n\gamma_{n,q}\chi(n)\Big).
\end{align*}
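The displayed identity is plain orthogonality of Dirichlet characters and can be checked directly for a small prime modulus. The sketch below does so with $q=11$, with $\gamma_{n,q}$ absorbed into $\beta_n$ and with random test data, so the coefficients and the choice $g=2$ of primitive root are assumptions of the experiment, not taken from the text.

```python
import cmath, math, random

q, g = 11, 2                                    # prime modulus; 2 is a primitive root
dlog = {pow(g, k, q): k for k in range(q - 1)}  # discrete logarithm table mod 11

def chi(j, n):
    # the j-th Dirichlet character mod q: chi_j(g^k) = e(jk/(q-1)); j = 0 is trivial
    return cmath.exp(2j * math.pi * j * dlog[n % q] / (q - 1)) if n % q else 0.0

rng = random.Random(1)
Xi = [0.0] + [rng.uniform(-1, 1) for _ in range(q - 1)]   # Xi_q, supported on units
M = N = 8
alpha = [rng.uniform(-1, 1) for _ in range(M)]            # alpha_m, m = 1..M
beta = [rng.uniform(-1, 1) for _ in range(N)]             # beta_n (gamma absorbed)

phi = q - 1
full = sum(a * b for a in alpha for b in beta)            # all m, n <= 8 are units mod 11
# left-hand side: sum*_r Xi(r) * (sum over mn = r (mod q) minus (1/phi) * full sum)
lhs = sum(Xi[r] * (sum(alpha[m - 1] * beta[n - 1]
                       for m in range(1, M + 1) for n in range(1, N + 1)
                       if (m * n) % q == r) - full / phi)
          for r in range(1, q))

# right-hand side: (sqrt(q)/phi(q)) * sum over non-trivial chi of
# Xitilde(chi) * (sum alpha_m chi(m)) * (sum beta_n chi(n))
Xitilde = [sum(chi(j, r).conjugate() * Xi[r] for r in range(1, q)) / math.sqrt(q)
           for j in range(q - 1)]
rhs = math.sqrt(q) / phi * sum(
    Xitilde[j]
    * sum(alpha[m - 1] * chi(j, m) for m in range(1, M + 1))
    * sum(beta[n - 1] * chi(j, n) for n in range(1, N + 1))
    for j in range(1, q - 1))
assert abs(lhs - rhs.real) < 1e-9
```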
Each non-trivial character $\chi\bmod q$ is induced by some primitive character $\chi^*\bmod{q^*}$ with $q^*\mid q.$ Since $q$ is squarefree, we then have $(q^*,q/q^*)=1$ automatically. Therefore, by \eqref{eq:Xi-Mellin-multiplicativity}, we obtain
\begin{align*}
\mathcal{E}(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma_{q};q,\varXi_q)
&=\frac{\sqrt{q}}{\varphi(q)}\sum_{q^*q_0=q}~\sideset{}{^*}\sum_{\chi\bmod{q^*}}\chi_0(q^*)^k\chi^*(q_0)^k\widetilde{\varXi}_{q^*}(\chi)\widetilde{\varXi}_{q_0}(\chi_0)\\
&\ \ \ \ \ \times\Big(\sum_{(m,q_0)=1}\alpha_m\chi(m)\Big)
\Big(\sum_{(n,q_0)=1}\beta_n\gamma_{n,q}\chi(n)\Big),
\end{align*}
where $\chi_0$ denotes the trivial character mod $q_0$.
By Definition \ref{def:admissibility}, we have $|\widetilde{\varXi}_{q^*}(\chi)|\leqslant (\tau(q^*)\log2q^*)^{B_0}$ and $|\widetilde{\varXi}_{q_0}(\chi_0)|\leqslant \sqrt{q_0}(\tau(q_0)\log2q_0)^{B_0}$. It then follows that
\begin{align}
\sum_{q\leqslant Q}\mu^2(q)\tau(q)^r|\mathcal{E}(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma_{q};q,\varXi_q)|
&\ll
Q^{\frac{1}{2}}(\log Q)^{B_1}\sum_{q_0\leqslant Q}\frac{\sqrt{q_0}}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\frac{\tau(qq_0)^{B_1}}{\varphi(q)}\nonumber\\
&\ \ \ \ \ \times \sideset{}{^*}\sum_{\chi\bmod q}\Big|\sum_{(m,q_0)=1}\alpha_m\chi(m)\Big|
\Big|\sum_{(n,q_0)=1}\beta_n\gamma_{n,qq_0}\chi(n)\Big|\nonumber\\
&=Q^{\frac{1}{2}}(\log Q)^{B_1}\cdot (S_1+S_2),\label{eq:bilinear-S1+S2}
\end{align}
where $S_1$ and $S_2$ denote the corresponding contributions from $q_0\leqslant Q_1$ and $Q_1<q_0\leqslant Q,$ respectively.
By Cauchy's inequality, we find
\begin{align*}
S_1^2\leqslant S_{11}S_{12}\end{align*}
with
\begin{align*}
S_{11}&=\sum_{q_0\leqslant Q_1}\frac{1}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\frac{1}{\varphi(q)}\sideset{}{^*}\sum_{\chi\bmod q}\Big|\sum_{(m,q_0)=1}\alpha_m\chi(m)\Big|^2,\\
S_{12}&=\sum_{q_0\leqslant Q_1}\frac{q_0}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\frac{\tau(qq_0)^{2B_1}}{\varphi(q)}\sideset{}{^*}\sum_{\chi\bmod q}\Big|\sum_{(n,q_0)=1}\beta_n\gamma_{n,qq_0}\chi(n)\Big|^2.\end{align*}
We first consider $S_{11}$. We further split $S_{11}$ according to $q\leqslant Q_2$ and $q>Q_2$, and the corresponding contributions are denoted by $S_{11}'$ and $S_{11}'',$ respectively. Regarding $S_{11}'$, the Siegel--Walfisz condition for $\boldsymbol\alpha$ gives
\begin{align*}
S_{11}'&\ll\sum_{q_0\leqslant Q_1}\frac{\tau(q_0)^{B}}{\varphi(q_0)}\sum_{q\leqslant Q_2}\varphi(q)^2\|\boldsymbol\alpha\|^2M(\log M)^{-A}\ll \|\boldsymbol\alpha\|^2Q_2^3M(\log M)^{-A}(\log Q)^{B_2}.\end{align*}
For $S_{11}'',$ the dyadic device yields
\begin{align*}
S_{11}''&\ll\log Q\sum_{q_0\leqslant Q_1}\frac{1}{\varphi(q_0)}\sup_{Q_2<Q_3\leqslant Q/q_0}\frac{1}{Q_3}\sum_{q\sim Q_3}\frac{q}{\varphi(q)}\sideset{}{^*}\sum_{\chi\bmod q}\Big|\sum_{(m,q_0)=1}\alpha_m\chi(m)\Big|^2.\end{align*}
From the classical multiplicative large sieve inequality (see \cite[Theorem 7.13]{IK04} for instance), it follows that
\begin{align*}
S_{11}''&\ll\|\boldsymbol\alpha\|^2\log Q\sum_{q_0\leqslant Q_1}\frac{1}{\varphi(q_0)}\sup_{Q_2<Q_3\leqslant Q/q_0}\frac{1}{Q_3}(Q_3^2+M)\\
&\ll\|\boldsymbol\alpha\|^2(Q+M/Q_2)(\log Q)^2.\end{align*}
Collecting the above estimates for $S_{11}'$ and $S_{11}'',$ we find
\begin{align*}
S_{11}&\ll \|\boldsymbol\alpha\|^2\{Q_2^3M(\log M)^{-A}(\log Q)^{B_2}+(Q+M/Q_2)(\log Q)^2\}.\end{align*}
Taking $Q_2=(\log M)^{A/6}$, we then obtain
\begin{align*}
S_{11}&\ll \|\boldsymbol\alpha\|^2\{M(\log M)^{-A}+Q(\log Q)^{2}\}\end{align*}
by re-defining $A$.
On the other hand,
\begin{align*}
S_{12}
&\leqslant\sum_{q_0\leqslant Q_1}\frac{q_0}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\frac{\tau(qq_0)^{2B_1}}{\varphi(q)}\sum_{\chi\bmod q}\Big|\sum_{(n,q_0)=1}\beta_n\gamma_{n,qq_0}\chi(n)\Big|^2\\
&\ll(\log Q)^{B_3}\sum_{q_0\leqslant Q_1}\frac{q_0\tau(q_0)^{B_3}}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\tau(q)^{B_3}\mathop{\sum\sum}_{n_1\equiv n_2\bmod q}|\beta_{n_1}\beta_{n_2}|.\end{align*}
Note that
\begin{align*}
\sum_{q\leqslant Q/q_0}\tau(q)^{B_3}\mathop{\sum\sum}_{n_1\equiv n_2\bmod q}|\beta_{n_1}\beta_{n_2}|
&\ll \sum_{q\leqslant Q/q_0}\tau(q)^{B_3}\mathop{\sum\sum}_{n_1\equiv n_2\bmod q}|\beta_{n_1}|^2\\
&\ll \|\boldsymbol\beta\|^2Qq_0^{-1}(\log Q)^{B_4}+\sum_{n_1}|\beta_{n_1}|^2\sum_{\substack{n_2\sim N\\n_2\neq n_1}}\tau(|n_2-n_1|)^{B_4}\\
&\ll \|\boldsymbol\beta\|^2(Q/q_0+N)(\log QN)^{B_5},\end{align*}
from which we conclude that
\begin{align*}
S_{12}
&\leqslant \|\boldsymbol\beta\|^2(Q+Q_1N)(\log QN)^{B_6}.\end{align*}
Combining the above estimates for $S_{11}$ and $S_{12},$ we obtain
\begin{align*}
S_1\ll \|\boldsymbol\alpha\|\|\boldsymbol\beta\|(M(\log M)^{-A}+Q)^{\frac{1}{2}}(Q+Q_1N)^{\frac{1}{2}}(\log QN)^{B_7}.\end{align*}
Again by Cauchy's inequality, we find
\begin{align*}
S_2^2\leqslant S_{21}S_{22}\end{align*}
with
\begin{align*}
S_{21}&=\sum_{Q_1<q_0\leqslant Q}\frac{\sqrt{q_0}}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\frac{\tau(qq_0)^{2B_1}}{\varphi(q)}\sideset{}{^*}\sum_{\chi\bmod q}\Big|\sum_{(m,q_0)=1}\alpha_m\chi(m)\Big|^2,\\
S_{22}&=\sum_{Q_1<q_0\leqslant Q}\frac{\sqrt{q_0}}{\varphi(q_0)}\sum_{q\leqslant Q/q_0}\frac{1}{\varphi(q)}\sideset{}{^*}\sum_{\chi\bmod q}\Big|\sum_{(n,q_0)=1}\beta_n\gamma_{n,qq_0}\chi(n)\Big|^2.\end{align*}
As argued in estimating $S_{11}$ and $S_{12},$ we may derive that
\begin{align*}
S_{21}&\ll \|\boldsymbol\alpha\|^2\{M\sqrt{Q}(\log M)^{-A}+Q(\log Q)^{B_8}\},\\
S_{22}&\ll \|\boldsymbol\beta\|^2\Big(\frac{Q}{\sqrt{Q_1}}+\sqrt{Q}N\Big)(\log QN)^{B_9}.\end{align*}
Therefore, we arrive at
\begin{align*}
S_2\ll \|\boldsymbol\alpha\|\|\boldsymbol\beta\|(M\sqrt{Q}(\log M)^{-A}+Q)^{\frac{1}{2}}\Big(\frac{Q}{\sqrt{Q_1}}+\sqrt{Q}N\Big)^{\frac{1}{2}}(\log QN)^{B_{10}}.\end{align*}
Inserting the estimates for $S_1,S_2$ into \eqref{eq:bilinear-S1+S2}, we find
\begin{align*}
\sum_{q\leqslant Q}\mu^2(q)\tau(q)^r|\mathcal{E}(\boldsymbol\alpha,\boldsymbol\beta,\boldsymbol\gamma_q;q,\varXi_q)|&\ll \|\boldsymbol\alpha\|\|\boldsymbol\beta\|(MN)^{\frac{1}{2}}Q(\log MNQ)^{B_{11}}\varDelta(M,N,Q;Q_1),\end{align*}
where
\begin{align*}
\varDelta(M,N,Q;Q_1)^2&=\frac{Q}{MN}+\frac{\sqrt{Q}}{M}+\frac{Q_1}{M}+\Big(1+\frac{1}{N}\sqrt{\frac{Q}{Q_1}}\Big)(\log MN)^{-A}.\end{align*}
Taking
\begin{align*}
Q_1=
\begin{cases}
(M/N)^{\frac{2}{3}}Q^{\frac{1}{3}},\ \ \ & M\leqslant NQ,\\
Q(\log MN)^{-A},&M>NQ,
\end{cases}
\end{align*}
the proof is completed by noting that $\sqrt{Q}\leqslant \sqrt{MN(\log MN)^{-B}}\leqslant M(\log MN)^{-B/2}$.
\endproof
\section{Lower bound for $H_1(X)$}
Recalling the definition \eqref{eq:Omega}, we may write
\begin{align*}H_1(X)&= \sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\lambda_f(n)-\eta\cdot \varOmega(1,n)|\bigg(\sum_{d|(n,P(z))}\varrho_d\bigg)^2.\end{align*}
To seek a positive lower bound for $H_1(X),$ we need only consider those $n$ with few prime factors. To that end, we introduce the interval
\begin{align*}I(P)=~]P,P+P\mathcal{L}^{-1}],\end{align*}
and the set of products of primes
\begin{align*}\mathcal{P}_i(X;P_{i1},P_{i2},\cdots,P_{ii})&=\{p_1p_2\cdots p_i:p_j\in I(P_{ij})\text{ for each } j\leqslant i\}\end{align*}
for each positive integer $i\geqslant2.$ Furthermore, for each fixed $i$, we assume that the $P_{ij}$'s are powers of $(1+\mathcal{L}^{-1})$, decreasing in $j$, and that their product falls into $[X,2X]$; i.e.,
\begin{align}\label{eq:Pij-initial}
P_{ij}\exp(-\sqrt{\mathcal{L}})>P_{i(j+1)}\geqslant X^{\frac{1}{12}}\ \ \ (1\leqslant j<i),\ \ \prod_{1\leqslant j\leqslant i}P_{ij}\in[X,2X].\end{align}
In this way, we can bound $H_1(X)$ from below by the summation
over $\mathcal{P}_i(X;P_{i1},P_{i2},\cdots,P_{ii})$; for this, we employ the variants of
the Sato--Tate distributions stated above.
Due to the positivity of each term, we can drop those $n$'s with ``bad'' arithmetic structures. To this end, we introduce the following restrictions on the sizes of $P_{ij}$:
\begin{equation}\label{eq:Pij-restrictions}
\begin{split}\begin{cases}P_{21}^{\frac{3}{4}}X^\delta<P_{22},\ \ \delta=10^{-2018},\\
\sqrt{P_{31}}\exp(\sqrt{\mathcal{L}})<P_{32},\\
\sqrt{P_{41}}\exp(\sqrt{\mathcal{L}})<P_{42}P_{43},\\
\sqrt{P_{i1}}\exp(\sqrt{\mathcal{L}})<P_{i2}\cdots P_{i(i-1)}\text{ and }\sqrt{P_{i3}\cdots P_{ii}}\exp(\sqrt{\mathcal{L}})<P_{i2},\ \ i\geqslant5.
\end{cases}\end{split}\end{equation}
Now summing up to $i=7,$ we then have the lower bound
\begin{align}\label{eq:H1(X)-lowerbound-initial}
H_1(X)&\geqslant\sum_{2\leqslant i\leqslant7}H_{1,i}(X),
\end{align}
where
\begin{align*}H_{1,i}(X)&=\sideset{}{^\dagger}\sum_{P_{i1},P_{i2},\cdots,P_{ii}}\ \ \sum_{n\in\mathcal{P}_i(X;P_{i1},P_{i2},\cdots,P_{ii})}\varPsi\Big(\frac{n}{X}\Big)|\lambda_f(n)-\eta\cdot\varOmega(1,n)|\bigg(\sum_{d|(n,P(z))}\varrho_d\bigg)^2,\end{align*}
where $\dagger$ indicates that the $P_{ij}$'s are powers of $(1+\mathcal{L}^{-1})$ satisfying the restrictions \eqref{eq:Pij-initial} and \eqref{eq:Pij-restrictions}.
Recalling the choice (\ref{eq:varrho_d}), we find, for each $n\in\mathcal{P}_i(X;P_{i1},P_{i2},\cdots,P_{ii})$, that $n$ has no prime factors smaller than $X^{\frac{1}{12}}$. Since $z$ will be chosen such that $z\leqslant X^{\frac{1}{12}}$, we then have $(n,P(z))=1$ and
\begin{align*}\sum_{d|(n,P(z))}\varrho_d=\varrho_1=1\end{align*}
for such $n$. Hence we can write
\begin{align}\label{eq:H1i(X)-lowerbound}
H_{1,i}(X)&=(1+o(1))\mathcal{L}^{2i-1}\int_{\mathcal{R}_i}\varSigma(X,\boldsymbol\alpha_i)\mathrm{d}\boldsymbol\alpha_i,\end{align}
where for $\boldsymbol\alpha_i=(\alpha_2,\cdots,\alpha_i),$ we adopt the convention
\begin{align*}
\mathcal{P}_i(X,\boldsymbol\alpha_i)
&=\mathcal{P}_i(X;X^{1-\alpha_2-\cdots-\alpha_i},X^{\alpha_2},\cdots,X^{\alpha_i}),\\
\varSigma(X,\boldsymbol\alpha_i)
&=\sum_{n\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{n}{X}\Big)|\lambda_f(n)-\eta\cdot\varOmega(1,n)|,
\end{align*}
and
the multiple integral is over the region $\mathcal{R}_i$ given by
\begin{equation}\label{eq:Ri}
\begin{split}
\mathcal{R}_2&:=\{\alpha_2\in[\tfrac{1}{12},1[:\tfrac{3}{4}(1-\alpha_2)+\delta<\alpha_2<\tfrac{1}{2}\},\\
\mathcal{R}_3&:=\{(\alpha_2,\alpha_3)\in[\tfrac{1}{12},1[^2:\tfrac{1}{2}(1-\alpha_2-\alpha_3)<\alpha_2,\alpha_3<\alpha_2<1-\alpha_2-\alpha_3\},\\
\mathcal{R}_4&:=\{(\alpha_2,\alpha_3,\alpha_4)\in[\tfrac{1}{12},1[^3:\tfrac{1}{2}(1-\alpha_2-\alpha_3-\alpha_4)<\alpha_2+\alpha_3\}\\
&\ \ \ \ \ \cap\{(\alpha_2,\alpha_3,\alpha_4)\in[\tfrac{1}{12},1[^3:\alpha_4<\alpha_3<\alpha_2<1-\alpha_2-\alpha_3-\alpha_4\},\\
\mathcal{R}_i&:=\{(\alpha_2,\cdots,\alpha_i)\in[\tfrac{1}{12},1[^{i-1}:\tfrac{1}{2}(1-\alpha_2-\cdots-\alpha_i)<\alpha_2+\cdots+\alpha_{i-1}\}\\
&\ \ \ \ \ \cap\{(\alpha_2,\cdots,\alpha_i)\in[\tfrac{1}{12},1[^{i-1}:\tfrac{1}{2}(\alpha_3+\cdots+\alpha_i)<\alpha_2\}\\
&\ \ \ \ \ \cap\{(\alpha_2,\cdots,\alpha_i)\in[\tfrac{1}{12},1[^{i-1}:\alpha_i<\alpha_{i-1}<\cdots<\alpha_2<1-\alpha_2-\cdots-\alpha_i\}
\end{split}\end{equation}
for $i\geqslant5$ with $\delta=10^{-2018}$. Note that $\alpha_j<1/j$ for $2\leqslant j\leqslant i$ in the above coordinates.
It remains to seek a lower bound for $\varSigma(X,\boldsymbol\alpha_i)$.
\begin{proposition}\label{prop:Sigma(X,alphai)-lowerbound}
For $i\in[2,7]\cap\mathbf{Z}$ and $\boldsymbol\alpha_i:=(\alpha_2,\cdots,\alpha_i)\in\mathcal{R}_i,$ we have
\begin{align*}\varSigma(X,\boldsymbol\alpha_i)\geqslant\sqrt{l_i^3/{u_i}}\cdot (1+o(1))\cdot |\mathcal{P}_i(X,\boldsymbol\alpha_i)|\end{align*}
for all sufficiently large $X,$ where
\begin{align*}l_i=(1-4|\eta|\cdot(\tfrac{8}{3\pi})^{i-1}(\tfrac{11}{12})^i+B_i\eta^2)^+\end{align*}
and
\begin{align*}
u_i=16^i\cdot|\eta|^4+4\cdot (\tfrac{22}{3})^i\cdot |\eta|^3+6\cdot 4^i\cdot |\eta|^2+4\cdot (2\sqrt{5})^i\cdot |\eta|+2^i\end{align*}
with the convention that $x^+=\max\{0,x\}$, the constants $B_i$ being given below by $\eqref{eq:Bi-constants}.$
Consequently, for $i\in[2,7]\cap\mathbf{Z},$ we have
\begin{align}\label{eq:Sigma(X,alphai)-integrallowerbound}
\int_{\mathcal{R}_i}\varSigma(X,\boldsymbol\alpha_i)\mathrm{d}\boldsymbol\alpha_i
&\geqslant (1+o(1))\sqrt{l_i^3/{u_i}}\cdot I_i,\end{align}
where
\begin{align}\label{eq:Ii}
I_i=\int_{\mathcal{R}_i}\frac{\mathrm{d}\boldsymbol\alpha_i}{\alpha_2\cdots \alpha_i(1-\alpha_2-\cdots-\alpha_i)}.
\end{align}
\end{proposition}
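As a sanity check on \eqref{eq:Ii}, the case $i=2$ admits a closed form: ignoring the negligible shift by $\delta=10^{-2018}$, the constraint in $\mathcal{R}_2$ reads $\tfrac34(1-\alpha_2)<\alpha_2<\tfrac12$, i.e. $\alpha_2\in\,]\tfrac37,\tfrac12[$, so that $I_2=\int_{3/7}^{1/2}\frac{\mathrm{d}\alpha}{\alpha(1-\alpha)}=\log\tfrac43$, since $\log\frac{\alpha}{1-\alpha}$ is an antiderivative. A short numerical verification:

```python
import math

def I2(steps=100000):
    # I_2 = integral of 1/(a(1-a)) over ]3/7, 1/2[, midpoint rule
    a, b = 3.0 / 7.0, 0.5
    h = (b - a) / steps
    return h * sum(1.0 / (x * (1.0 - x))
                   for k in range(steps)
                   for x in [a + (k + 0.5) * h])

# antiderivative of 1/(a(1-a)) is log(a/(1-a)), giving I_2 = log(4/3)
assert abs(I2() - math.log(4.0 / 3.0)) < 1e-8
```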
The proof of Proposition \ref{prop:Sigma(X,alphai)-lowerbound} will be given in the next section. Proposition \ref{prop:H1(X)-lowerbound} then follows by substituting \eqref{eq:H1i(X)-lowerbound} and \eqref{eq:Sigma(X,alphai)-integrallowerbound} into \eqref{eq:H1(X)-lowerbound-initial}.
\section{Proof of Proposition \ref{prop:Sigma(X,alphai)-lowerbound}}
To prove Proposition \ref{prop:Sigma(X,alphai)-lowerbound}, we introduce the following averages
\begin{align*}
\mathcal{A}_{\ell}(X,\boldsymbol\alpha_i)&=\sum_{n\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{n}{X}\Big)|\lambda_f(n)|^\ell,\\
\mathcal{B}(X,\boldsymbol\alpha_i)&=\sum_{n\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{n}{X}\Big)|\varOmega(1,n)|^2,\\
\mathcal{C}(X,\boldsymbol\alpha_i)&=\sum_{n\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{n}{X}\Big)\lambda_f(n)\varOmega(1,n),\end{align*}
where $\mathcal{P}_i(X,\boldsymbol\alpha_i)$ is given as before for $i\in[2,7]\cap\mathbf{Z}.$
By H\"older's inequality, we have
\begin{align*}
\varSigma(X,\boldsymbol\alpha_i)\geqslant \varSigma_{2}(X,\boldsymbol\alpha_i)^{\frac{3}{2}}\varSigma_{4}(X,\boldsymbol\alpha_i)^{-\frac{1}{2}},\end{align*}
where
\begin{align*}\varSigma_{\ell}(X,\boldsymbol\alpha_i):=\sum_{n\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{n}{X}\Big)|\lambda_f(n)-\eta\cdot\varOmega(1,n)|^\ell.\end{align*}
To prove Proposition \ref{prop:Sigma(X,alphai)-lowerbound}, it suffices to prove that
\begin{align*}\varSigma_{2}(X,\boldsymbol\alpha_i)\geqslant l_i(1+o(1))\cdot |\mathcal{P}_i(X,\boldsymbol\alpha_i)|,\end{align*}
\begin{align*}\varSigma_{4}(X,\boldsymbol\alpha_i)\leqslant u_i(1+o(1))\cdot |\mathcal{P}_i(X,\boldsymbol\alpha_i)|\end{align*}
where $l_i,u_i$ are given as in Proposition \ref{prop:Sigma(X,alphai)-lowerbound}.
\subsection{Bounding $\varSigma_{4}(X,\boldsymbol\alpha_i)$ from above}
From Weil's bound for Kloosterman sums, we have
\begin{align}\label{eq:varSigma4(X,alphai)-upperbound-initial}
\varSigma_{4}(X,\boldsymbol\alpha_i)
&\leqslant \sum_{0\leqslant \ell\leqslant4}\binom{4}{\ell}\cdot(2^i\cdot|\eta|)^{4-\ell}\cdot \mathcal{A}_{\ell}(X,\boldsymbol\alpha_i).
\end{align}
By the definition of $\mathcal{P}_i(X,\boldsymbol\alpha_i)$ and multiplicativity of Hecke eigenvalues, it follows from Lemma \ref{lm:pi-kappa(X)-moments} that
\begin{align*}\mathcal{A}_{\ell}(X,\boldsymbol\alpha_i)\leqslant c_\ell^{i}(1+o(1))|\mathcal{P}_i(X,\boldsymbol\alpha_i)|,\end{align*}
from which and \eqref{eq:varSigma4(X,alphai)-upperbound-initial} we conclude that
\begin{align*}\varSigma_{4}(X,\boldsymbol\alpha_i)
&\leqslant \sum_{0\leqslant \ell\leqslant4}\binom{4}{\ell}\cdot(2^i\cdot|\eta|)^{4-\ell}\cdot c_\ell^{i}\cdot (1+o(1))|\mathcal{P}_i(X,\boldsymbol\alpha_i)|\\
&=u_i\cdot (1+o(1)) |\mathcal{P}_i(X,\boldsymbol\alpha_i)|,\end{align*}
provided that $X$ is large enough, where $u_i$'s are given as claimed.
\subsection{Bounding $\varSigma_{2}(X,\boldsymbol\alpha_i)$ from below}
We now turn to the lower bound for $\varSigma_{2}(X,\boldsymbol\alpha_i).$
Squaring out, we may write
\begin{align}\label{eq:Sigma2(X,alphai)}
\varSigma_{2}(X,\boldsymbol\alpha_i)
&=\mathcal{A}_2(X,\boldsymbol\alpha_i)+\eta^2\cdot\mathcal{B}(X,\boldsymbol\alpha_i)-2\eta\cdot\mathcal{C}(X,\boldsymbol\alpha_i).\end{align}
By the definition of $\mathcal{P}_i(X,\boldsymbol\alpha_i)$ and multiplicativity of Hecke eigenvalues, it follows from Lemma \ref{lm:pi-kappa(X)-moments} that
\begin{align}\label{eq:A2(X,alphai)}
\mathcal{A}_2(X,\boldsymbol\alpha_i)=(1+o(1))|\mathcal{P}_i(X,\boldsymbol\alpha_i)|.
\end{align}
The lower bound for $\mathcal{B}(X,\boldsymbol\alpha_i)$
follows from {\it joint} equidistributions of Kloosterman sums. By twisted multiplicativity, $\varOmega(1,n)$ can be expressed as the product of two Kloosterman sums, the equidistributions of which are known in a certain sense. To formulate the precise distributions, we first introduce the corresponding measures. Following \cite{FM07}, we define a measure $\mu^{(1)}$ on $[-1,1]$ to be the image of the measure $\mu_{\mathrm{ST}}$ under the mapping $\theta\mapsto \cos\theta,$ so that $\mathrm{d}\mu^{(1)}=\frac{2}{\pi}\sqrt{1-x^2}\mathrm{d} x$.
Furthermore, for $k\geqslant2$, we define a measure $\mu^{(k)}$ on $[-1,1]$ to be the image of $\mu^{(1)}\otimes \mu^{(1)}\otimes\cdots\otimes \mu^{(1)}$ under the mapping
\begin{align*}
[-1,1]^k&\rightarrow [-1,1]\\
(x_1,x_2,\cdots,x_k)&\mapsto x_1x_2\cdots x_k.
\end{align*}
Then for $x\in[0,1],$ we have the following recursive relation
\begin{align}\label{eq:mu.1}
\mu^{(1)}([-x,x])=\frac{4}{\pi}\int_0^x\sqrt{1-t^2}\mathrm{d} t,
\end{align}
\begin{align}\label{eq:mu.k}
\mu^{(k)}([-x,x])=\mu^{(1)}([-x,x])+\frac{4}{\pi}\int_x^1\mu^{(k-1)}([-x/t,x/t])\sqrt{1-t^2}\mathrm{d} t,\ \ \ k\geqslant2.
\end{align}
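The recursion $\eqref{eq:mu.1}$--$\eqref{eq:mu.k}$ can be tested numerically against the probabilistic description of $\mu^{(k)}$: sampling angles from the Sato--Tate measure $\mathrm{d}\mu_{\mathrm{ST}}=\frac{2}{\pi}\sin^2\theta\,\mathrm{d}\theta$ and multiplying the cosines should reproduce $\mu^{(2)}([-x,x])$. A minimal sketch (Monte Carlo versus the recursion, at $x=\tfrac12$):

```python
import math, random

def mu1(x):
    # mu^(1)([-x,x]) = (4/pi) * integral of sqrt(1-t^2) on [0,x], closed form
    x = min(max(x, 0.0), 1.0)
    return (2.0 / math.pi) * (x * math.sqrt(1.0 - x * x) + math.asin(x))

def mu2(x, steps=2000):
    # recursion (eq:mu.k) with k = 2, midpoint rule on [x, 1]
    if x >= 1.0:
        return 1.0
    h = (1.0 - x) / steps
    s = sum(mu1(x / t) * math.sqrt(1.0 - t * t)
            for k in range(steps)
            for t in [x + (k + 0.5) * h])
    return mu1(x) + (4.0 / math.pi) * s * h

def sato_tate_cos(rng):
    # rejection sampling from the density (2/pi) sin^2(theta) on [0, pi]
    while True:
        t = rng.uniform(0.0, math.pi)
        if rng.uniform(0.0, 2.0 / math.pi) <= (2.0 / math.pi) * math.sin(t) ** 2:
            return math.cos(t)

rng = random.Random(1)
x = 0.5
samples = [sato_tate_cos(rng) * sato_tate_cos(rng) for _ in range(100000)]
empirical = sum(abs(s) <= x for s in samples) / len(samples)
assert abs(empirical - mu2(x)) < 0.01  # Monte Carlo agrees with the recursion
```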
\begin{lemma}\label{lm:equidistribution}
With the notation as above, for $i\in[2,7]\cap\mathbf{Z}$ and $\boldsymbol\alpha_i:=(\alpha_2,\cdots,\alpha_i)\in\mathcal{R}_i$ as given by $\eqref{eq:Ri},$ the sets
\[\{2^{1-i}\varOmega(p_1,p_2\cdots p_i):n=p_1p_2\cdots p_i\in \mathcal{P}_i(X,\boldsymbol\alpha_i)\}\]
and
\[\{2^{1-i}\varOmega(p_2\cdots p_i,p_1):n=p_1p_2\cdots p_i\in\mathcal{P}_i(X,\boldsymbol\alpha_i)\}\]
equidistribute in $[-1,1]$ with respect to $\mu^{(i-1)}$ and
$\mu^{(1)}$,
respectively, as $X\rightarrow+\infty$, where the measures $\mu^{(j)}$ on $[-1,1]$ are defined recursively by
$\eqref{eq:mu.1}$ and $\eqref{eq:mu.k}.$
\end{lemma}
The original statement of Lemma \ref{lm:equidistribution}, in the case $i\in\{3,4,5\}$, can be found in \cite[Propositions 6.1, 6.2 and 6.3]{FM07} and the case $i\in\{6,7\}$ can be treated in a similar way.
The case $i=2$ follows from \cite[Theorem 1.5]{FKM14}
by taking $K(n)=\mathrm{sym}_k(\theta_p(\overline{n^2}))$ therein.
The following rearrangement type inequality, due to Matom\"{a}ki \cite{Ma11}, allows us to derive a lower bound for $\mathcal{B}(X,\boldsymbol\alpha_i)$ from the equidistributions of Kloosterman sums arising from the above factorization.
\begin{lemma}\label{lm:Matomaki}
Assume that the sequences $(a_n)_{n\leqslant N}$ and $(b_n)_{n\leqslant N}$ contained
in $[0,1]$ equidistribute with respect to some absolutely continuous measures $\mu_a$ and $\mu_b$, respectively, as $N\rightarrow+\infty$. Then
\begin{align*}(1+o(1))\int_0^1xy_l(x)\mathrm{d}\mu_a([0,x]) \leqslant\frac{1}{N}\sum_{n\leqslant N}a_nb_n\leqslant
(1+o(1))\int_0^1xy_u(x)\mathrm{d}\mu_a([0,x]),\end{align*}
where $y_l(x)$ is the smallest solution to the equation $\mu_b([y_l,1])=\mu_a([0,x])$ and
$y_u(x)$ is the largest solution to the equation $\mu_b([0,y_u])=\mu_a([0,x])$.
\end{lemma}
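To illustrate Lemma \ref{lm:Matomaki} in the simplest situation (a toy example, not the application made below): if both sequences equidistribute with respect to the Lebesgue measure on $[0,1]$, then $y_l(x)=1-x$ and $y_u(x)=x$, and the bounds read $\tfrac16\leqslant\frac1N\sum_{n\leqslant N}a_nb_n\leqslant\tfrac13$. Two independent Weyl sequences sit strictly inside this range:

```python
import math

N = 200000
g1 = (math.sqrt(5.0) - 1.0) / 2.0   # golden-ratio rotation
g2 = math.sqrt(2.0) - 1.0           # a second, independent rotation
a = [(n * g1) % 1.0 for n in range(1, N + 1)]
b = [(n * g2) % 1.0 for n in range(1, N + 1)]
mean = sum(x * y for x, y in zip(a, b)) / N

# lower bound: integral of x(1-x) on [0,1] = 1/6; upper bound: 1/3;
# jointly equidistributed sequences give the "independent" value 1/4
assert 1.0 / 6.0 <= mean <= 1.0 / 3.0
```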
We now write
\begin{align*}
\mathcal{B}(X,\boldsymbol\alpha_i)
&=\sum_{p_1p_2\cdots p_i\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{p_1p_2\cdots p_i}{X}\Big)|\varOmega(p_2\cdots p_i,p_1)|^2|\varOmega(p_1,p_2\cdots p_i)|^2.\end{align*}
By Lemma \ref{lm:Matomaki}, we have
\begin{align}\label{eq:jointequidistribution-lowerbound}
\mathcal{B}(X,\boldsymbol\alpha_i)\geqslant B_i(1+o(1))|\mathcal{P}_i(X,\boldsymbol\alpha_i)|\end{align}
with
\begin{align*}B_i=4^i\int_0^1x^2 y_i(x)^2\mathrm{d}\mu^{(1)}([-x,x]),\end{align*}
where $y_i(x)$ is the unique solution $y$ to the equation
\begin{align*}\mu^{(1)}([-x,x])=\mu^{(i-1)}([-1,-y]\cup[y,1])=1-\mu^{(i-1)}([-y,y]).\end{align*}
With the help of Mathematica 10, we can obtain
\begin{equation}\label{eq:Bi-constants}
\begin{split}
B_2\geqslant 0.233838, \ \ \ \ \ \ \ B_5&\geqslant 0.023523\\
B_3\geqslant 0.099779, \ \ \ \ \ \ \ B_6&\geqslant 0.011685\\
B_4\geqslant 0.047473, \ \ \ \ \ \ \ B_7&\geqslant 0.005567.
\end{split}
\end{equation}
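These values can be reproduced without Mathematica, at least for $i=2$: writing $F(x):=\mu^{(1)}([-x,x])$, the equation defining $y_2(x)$ is $F(x)=1-F(y)$, and the substitution $u=F(x)$ turns the defining integral into $B_2=16\int_0^1F^{-1}(u)^2F^{-1}(1-u)^2\,\mathrm{d}u$. A numerical sketch of this computation:

```python
import math

def F(x):
    # F(x) = mu^(1)([-x,x]) = (2/pi)(x sqrt(1-x^2) + arcsin x)
    x = min(max(x, 0.0), 1.0)
    return (2.0 / math.pi) * (x * math.sqrt(1.0 - x * x) + math.asin(x))

def F_inv(u):
    # bisection; F is strictly increasing on [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def B2(steps=4000):
    # B_2 = 16 * integral over [0,1] of (F^{-1}(u) F^{-1}(1-u))^2 du
    h = 1.0 / steps
    return 16.0 * h * sum(
        (F_inv((k + 0.5) * h) * F_inv(1.0 - (k + 0.5) * h)) ** 2
        for k in range(steps))

assert 0.233 < B2() < 0.236  # consistent with B_2 >= 0.233838
```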
To conclude Proposition \ref{prop:Sigma(X,alphai)-lowerbound}, it remains to control $\mathcal{C}(X,\boldsymbol\alpha_i)$ effectively.
One naturally expects that $\lambda_f(n)$ does not correlate with $\varOmega(1,n)$ as $n$ runs over primes or almost primes. Quantitatively, we expect, as discussed in Section \ref{sec:outline}, that
\begin{align*}
\mathcal{C}(X,\boldsymbol\alpha_i)&=o(|\mathcal{P}_i(X,\boldsymbol\alpha_i)|)\end{align*}
for $\boldsymbol\alpha_i\in\mathcal{R}_i$ as given by \eqref{eq:Ri} and $X\rightarrow+\infty.$
Unfortunately, this non-correlation is not yet known even as $n$ runs over consecutive integers.
Our success builds on the observation that $|\lambda_f(p)|^2$ is approximately $1$ on average; however, $|\lambda_f(p)|$ and $|\varOmega(n,p)|$ are both smaller than 1 on average in a suitable family, so that one may obtain a relatively small scalar in the upper bound of $\mathcal{C}(X,\boldsymbol\alpha_i)$, even though the sign changes of $\lambda_f(n)\varOmega(1,n)$ are not taken into account.
Precisely speaking, we are able to bound $\mathcal{C}(X,\boldsymbol\alpha_i)$ as follows.
\begin{proposition}\label{prop:C(X,alphai)-upperbound}
With the notation as above, we have, for all sufficiently large $X,$ that
\begin{align*}
|\mathcal{C}(X,\boldsymbol\alpha_i)|
&\leqslant2\cdot\Big(\frac{8}{3\pi}\Big)^{i-1}\Big(\frac{11}{12}\Big)^{i}(1+o(1))|\mathcal{P}_i(X,\boldsymbol\alpha_i)|\end{align*}
for each $i\in[2,7]\cap\mathbf{Z}.$
\end{proposition}
The lower bound for $\varSigma_{2}(X,\boldsymbol\alpha_i)$ in Proposition \ref{prop:Sigma(X,alphai)-lowerbound} then follows by combining \eqref{eq:Sigma2(X,alphai)}, \eqref{eq:A2(X,alphai)}, \eqref{eq:jointequidistribution-lowerbound} and Proposition
\ref{prop:C(X,alphai)-upperbound}, as well as
\begin{align*}
|\mathcal{P}_i(X,\boldsymbol\alpha_i)|=\frac{X\mathcal{L}^{-2i}(1+o(1))}{\alpha_2\cdots\alpha_i(1-\alpha_2-\cdots-\alpha_i)}\end{align*}
from the prime number theorem. The complete proof of Proposition $\ref{prop:C(X,alphai)-upperbound}$ will be given in the next section.
\section{Proof of Proposition $\ref{prop:C(X,alphai)-upperbound}$}
By the definition of $\mathcal{P}_i(X,\boldsymbol\alpha_i)$ and twisted multiplicativity of Kloosterman sums, we may write
\begin{align*}
\mathcal{C}(X,\boldsymbol\alpha_i)
&=\sum_{p_1p_2\cdots p_i\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{p_1p_2\cdots p_i}{X}\Big)\lambda_f(p_1p_2\cdots p_i)\varOmega(p_2\cdots p_i,p_1)\varOmega(p_1,p_2\cdots p_i).\end{align*}
Weil's bound gives
\begin{align*}
|\mathcal{C}(X,\boldsymbol\alpha_i)|
&\leqslant2\mathcal{C}^*(X,\boldsymbol\alpha_i),\end{align*}
where
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_i)
&=\sum_{p_1p_2\cdots p_i\in\mathcal{P}_i(X,\boldsymbol\alpha_i)}\varPsi\Big(\frac{p_1p_2\cdots p_i}{X}\Big)|\lambda_f(p_1p_2\cdots p_i)||\varOmega(p_1,p_2\cdots p_i)|.\end{align*}
It suffices to prove that
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_i)
&\leqslant\Big(\frac{8}{3\pi}\Big)^{i-1}\Big(\frac{11}{12}\Big)^{i}(1+o(1))|\mathcal{P}_i(X,\boldsymbol\alpha_i)|\end{align*}
for $i\in[2,7]\cap\mathbf{Z}.$
We prove these inequalities case by case. The case $i=2$ is a bit different: it relies essentially on Lemma \ref{lm:bilinearaverageoverprimes}, while the remaining cases follow from Lemmas \ref{lm:bilinear} and \ref{lm:bilinearform-twisted} amongst other things.
\subsection{Bounding $\mathcal{C}^*(X,\boldsymbol\alpha_2)$}
We first consider the case $i=2.$
From the twisted multiplicativity for Kloosterman sums and multiplicativity for Hecke eigenvalues, we may write
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_2)
&=\sum_{p_1p_2\in\mathcal{P}_2(X,\boldsymbol\alpha_2)}\varPsi\Big(\frac{p_1p_2}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)||\varOmega(p_1,p_2)|.\end{align*}
We then apply Lemma \ref{lm:bilinearaverageoverprimes} with
\[(n,p)=(p_1,p_2),\ \ \nu(n)=|\lambda_f(p_1)|,\ \ \gamma_q=|\lambda_f(p_2)|,\]
getting
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_2)&=\frac{8}{3\pi}\sum_{p_1p_2\in\mathcal{P}_2(X,\boldsymbol\alpha_2)}\varPsi\Big(\frac{p_1p_2}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)|\\
&\ \ \ \ +O\Big(\mathcal{L}^{10}\{X^{\frac{1}{2}+\alpha_2}+X^{1-\frac{1}{4}\alpha_2}+X\mathcal{L}^{-20}+X^{\frac{3+2\alpha_2}{4}}\}\Big).
\end{align*}
The desired inequality for $i=2$ now follows from Lemma \ref{lm:pi-kappa(X)-moments}
and $\frac{3}{7}<\alpha_2<\frac{1}{2}$ by \eqref{eq:Ri}.
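The factor $\frac{8}{3\pi}$ emerging here is simply the average of $|2\cos\theta|$ against the Sato--Tate measure: $\frac2\pi\int_0^\pi|2\cos\theta|\sin^2\theta\,\mathrm{d}\theta=\frac{8}{3\pi}$. A one-line numerical confirmation:

```python
import math

def st_mean_abs_2cos(steps=200000):
    # (2/pi) * integral of |2 cos t| sin^2 t over [0, pi], midpoint rule
    h = math.pi / steps
    return (2.0 / math.pi) * h * sum(
        abs(2.0 * math.cos((k + 0.5) * h)) * math.sin((k + 0.5) * h) ** 2
        for k in range(steps))

assert abs(st_mean_abs_2cos() - 8.0 / (3.0 * math.pi)) < 1e-6
```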
\subsection{Bounding $\mathcal{C}^*(X,\boldsymbol\alpha_3)$}
We now consider the case $i=3.$ By multiplicativity, we may write
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_3)
&=\sum_{p_1p_2p_3\in\mathcal{P}_3(X,\boldsymbol\alpha_3)}\varPsi\Big(\frac{p_1p_2p_3}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)||\lambda_f(p_3)||\varOmega(p_1p_3,p_2)||\varOmega(p_1p_2,p_3)|.\end{align*}
In view of the Chebyshev approximation for $|\cos\theta|$ (see Lemma \ref{lm:cos-Chebyshev}), we consider
\begin{align*}
\mathcal{C}_k^*(X,\boldsymbol\alpha_3)
&:=\sum_{p_1p_2p_3\in\mathcal{P}_3(X,\boldsymbol\alpha_3)}\varPsi\Big(\frac{p_1p_2p_3}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)||\lambda_f(p_3)||\varOmega(p_1p_3,p_2)|\mathrm{sym}_{k}(\cos\theta_{p_3}(\overline{(p_1p_2)^2})).\end{align*}
Applying Lemma \ref{lm:bilinearform-twisted} with
\begin{align*}
s=1,\ (m,n,q)=(p_1,p_2,p_3),\ (M,N)=(X^{1-\alpha_2-\alpha_3},X^{\alpha_2}),\end{align*}
\begin{align*}
\alpha_m=|\lambda_f(p_1)|,\ \beta_n=|\lambda_f(p_2)|,\ \gamma_{m,n}=|\varOmega(p_1p_3,p_2)|,\end{align*}
we obtain
\begin{align*}
\mathcal{C}_k^*(X,\boldsymbol\alpha_3)
&\ll(k+1)\sum_{p_1p_2p_3\in\mathcal{P}_3(X,\boldsymbol\alpha_3)}|\lambda_f(p_3)|(p_3^{-\frac{1}{8}}+X^{-\frac{\alpha_2}{4}}p_3^{\frac{1}{8}}+X^{\frac{2\alpha_2+\alpha_3-1}{2}})\\
&\ll(k+1)\exp(-\sqrt{\mathcal{L}})|\mathcal{P}_3(X,\boldsymbol\alpha_3)|
\end{align*}
by Cauchy's inequality and Lemma \ref{lm:pi-kappa(X)-moments}.
Therefore, it follows from Lemma \ref{lm:cos-Chebyshev} that
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_3)
&=\frac{8}{3\pi}(1+o(1))\sum_{p_1p_2p_3\in\mathcal{P}_3(X,\boldsymbol\alpha_3)}\varPsi\Big(\frac{p_1p_2p_3}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)||\lambda_f(p_3)||\varOmega(p_1p_3,p_2)|.\end{align*}
By Lemmas \ref{lm:bilinear} and \ref{lm:cos-Chebyshev}, we further have
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_3)
&=\Big(\frac{8}{3\pi}\Big)^2(1+o(1))\sum_{p_1p_2p_3\in\mathcal{P}_3(X,\boldsymbol\alpha_3)}\varPsi\Big(\frac{p_1p_2p_3}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)||\lambda_f(p_3)|.\end{align*}
Then Lemma \ref{lm:pi-kappa(X)-moments} yields
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_3)
&\leqslant\Big(\frac{8}{3\pi}\Big)^2\Big(\frac{11}{12}\Big)^3(1+o(1))|\mathcal{P}_3(X,\boldsymbol\alpha_3)|\end{align*}
as expected.
\subsection{Bounding $\mathcal{C}^*(X,\boldsymbol\alpha_i)$ for $i\in[4,7]\cap\mathbf{Z}$}
The cases $i\geqslant4$ can be treated in a similar way to the case $i=3$, and we only present the details for $i=7$ here.
By multiplicativity, we may write
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_7)
&=\sum_{p_1p_2\cdots p_7\in\mathcal{P}_7(X,\boldsymbol\alpha_7)}\varPsi\Big(\frac{p_1p_2\cdots p_7}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)|\cdots|\lambda_f(p_7)|\\
&\ \ \ \ \ \times\prod_{2\leqslant j\leqslant7}|\varOmega(p_1p_2\cdots p_7/p_j,p_j)|.\end{align*}
In view of Lemma \ref{lm:cos-Chebyshev}, we consider
\begin{align*}
\mathcal{C}_{\mathbf{k}}^*(X,\boldsymbol\alpha_7)
&=\sum_{p_1p_2\cdots p_7\in\mathcal{P}_7(X,\boldsymbol\alpha_7)}\varPsi\Big(\frac{p_1p_2\cdots p_7}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)|\cdots|\lambda_f(p_7)||\varOmega(p_1p_3p_4\cdots p_7,p_2)|\\
&\ \ \ \ \ \ \times\prod_{3\leqslant j\leqslant 7}\mathrm{sym}_{k_{j-2}}(\cos\theta_{p_{j}}(\overline{(p_1p_2\cdots p_7/p_{j})^2}))\end{align*}
for $\mathbf{k}=(k_1,\cdots,k_5)\in\mathbf{Z}_{\geqslant0}^5.$
The term with $\mathbf{k}=(0,\cdots,0)$ is expected to contribute the main term. We now assume that at least one of $k_1,k_2,\cdots,k_5$ is positive, and only consider the case $k_1k_2\cdots k_5\neq0$ without loss of generality (the remaining cases are simpler).
Applying Lemma \ref{lm:bilinearform-twisted} with
\begin{align*}
s=5,\ (m,n,q)=(p_1,p_2,p_3p_4\cdots p_7),\ (M,N)=(X^{1-\alpha_2-\cdots-\alpha_7},X^{\alpha_2}),\end{align*}
\begin{align*}
\alpha_m=|\lambda_f(p_1)|,\ \beta_n=|\lambda_f(p_2)|,\ \gamma_{m,n}=|\varOmega(p_1p_3p_4\cdots p_7,p_2)|,\end{align*}
we get
\begin{align*}
\mathcal{C}_\mathbf{k}^*(X,\boldsymbol\alpha_7)
&\ll k_1k_2\cdots k_5\sum_{p_1p_2\cdots p_7\in\mathcal{P}_7(X,\boldsymbol\alpha_7)}|\lambda_f(p_1)||\lambda_f(p_2)|\cdots|\lambda_f(p_7)|\\
&\ \ \ \ \ \times\{(p_3p_4\cdots p_7)^{-\frac{1}{8}}+X^{-\frac{\alpha_2}{4}}(p_3p_4\cdots p_7)^{\frac{1}{8}}+X^{\frac{2\alpha_2+\alpha_3+\alpha_4+\cdots+\alpha_7-1}{2}}\}\\
&\ll k_1k_2\cdots k_5\exp(-\sqrt{\mathcal{L}})|\mathcal{P}_7(X,\boldsymbol\alpha_7)|
\end{align*}
by Cauchy's inequality and Lemma \ref{lm:pi-kappa(X)-moments}. Therefore, it follows from Lemmas \ref{lm:cos-Chebyshev} and \ref{lm:bilinear} that
\begin{align*}
\mathcal{C}^*(X,\boldsymbol\alpha_7)
&=\Big(\frac{8}{3\pi}\Big)^6(1+o(1))\sum_{p_1p_2\cdots p_7\in\mathcal{P}_7(X,\boldsymbol\alpha_7)}\varPsi\Big(\frac{p_1p_2\cdots p_7}{X}\Big)|\lambda_f(p_1)||\lambda_f(p_2)|\cdots|\lambda_f(p_7)|,\end{align*}
which yields the desired upper bound in view of Lemma \ref{lm:pi-kappa(X)-moments}.
\section{Upper bound for $H_2(X)$}
First, we may write
\begin{align}\label{eq:H2(X)-upperbound-initial}
H_2(X)&\leqslant H_{21}(X)+|\eta|\cdot H_{22}(X)
\end{align}
with
\begin{align*}H_{21}(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\lambda_f(n)|\tau_{\varDelta}(n;\alpha,\beta)\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2,\\
H_{22}(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\varOmega(1,n)|\tau_{\varDelta}(n;\alpha,\beta)\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2.\end{align*}
\subsection{Dimension-reduction in $H_{22}(X)$}
We now transform $H_{22}(X)$ in the flavor of \cite{Xi18}, so that the {\it dimension}
of sifting $|\varOmega(1,n)|$ in $H_{22}(X)$ can be reduced. Unfortunately, there are some slips in the original arguments of \cite{Xi18}, which will be remedied in this section. As one may find from the definition \eqref{eq:tau(n;alpha,beta)}, the restriction $p\mid d\Rightarrow p>(\log n)^A$ in \cite{Xi18} is replaced by $d\leqslant n^\frac{1}{1+\varDelta}$ with $\varDelta>1$. This new restriction is technical, and it will be reflected in the application of Lemma \ref{lm:bilinear-BDH}.
By twisted multiplicativity, $H_{22}(X)$ becomes
\begin{align*}H_{22}(X)&=\mathop{\sum\sum}_{m^\varDelta\leqslant n}\varPsi\Big(\frac{mn}{X}\Big)\mu^2(mn)|\varOmega(m,n)||\varOmega(n,m)|\alpha^{\omega(m)}\beta^{\omega(n)}\Big(\sum_{d|(mn,P(z))}\varrho_d\Big)^2.\end{align*}
The Weil bound gives
\begin{align*}H_{22}(X)&\leqslant\mathop{\sum\sum}_{m^\varDelta\leqslant n}\varPsi\Big(\frac{mn}{X}\Big)\mu^2(mn)|\varOmega(n,m)|\alpha^{\omega(m)}(2\beta)^{\omega(n)}\Big(\sum_{d|(mn,P(z))}\varrho_d\Big)^2.\end{align*}
Following the smooth partition of unity arguments in \cite{Xi18} (see e.g., \cite{Fo85}), we have
\begin{align}\label{eq:H22(X)-partition}
H_{22}(X)&\leqslant \sum_{(M,N)}H_{22}(X;M,N),\end{align}
where $M,N$ run over powers of $1+\mathcal{L}^{-B}$ with $B$ appropriately large and
\begin{align*}H_{22}(X;M,N)&=\mathop{\sum\sum}_{m^\varDelta\leqslant n}U(m)V(n)\varPsi\Big(\frac{mn}{X}\Big)\mu^2(mn)|\varOmega(n,m)|\alpha^{\omega(m)}(2\beta)^{\omega(n)}\Big(\sum_{d|(mn,P(z))}\varrho_d\Big)^2\end{align*}
with $U,V$ being certain smooth functions supported on $]M,M(1+\mathcal{L}^{-B})]$ and
$]N,N(1+\mathcal{L}^{-B})]$, respectively. By symmetry, we may assume that
\begin{align}\label{eq:sizes-MN}
MN\asymp X,\ \ \ M^\varDelta\ll N.
\end{align}
Note that there are at most $O(\mathcal{L}^{2B+2})$ pairs $(M,N)$ in the summation.
To evaluate $H_{22}(X;M,N)$, we write
\begin{align*}H_{22}(X;M,N)&=\sum_{m}U(m)\mu^2(m)\alpha^{\omega(m)}\Xi(m,N),\end{align*}
where
\begin{align}\label{eq:xi}
\xi(n)=\mathop{\sum\sum}_{[d_1,d_2]=n}\varrho_{d_1}\varrho_{d_2}
\end{align}
and
\begin{align*}\Xi(m,N):=\mathop{\sum\sum}_{\substack{(nd,m)=1\\m^\varDelta\leqslant nd\\ d\mid P(z)}}V(nd)\varPsi\Big(\frac{mnd}{X}\Big)|\varOmega(nd,m)|\mu^2(nd)(2\beta)^{\omega(nd)}\sum_{l\mid (m,P(z))}\xi(dl).\end{align*}
Moreover, one can employ Mellin inversion to separate the variables $n,d$ subject to the restrictions in $m^\varDelta\leqslant nd$, $V(nd)$ and $\varPsi(mnd/X)$.
Due to the appearance of $\mu^2(nd)$, we can also introduce the M\"obius formula to relax the implicit restriction $(n,d)=1.$
Noting that $N\geqslant X^{\frac{\varDelta}{1+\varDelta}}\gg\sqrt{X}$ in view of \eqref{eq:sizes-MN} and $\xi$ is supported on squarefree numbers up to $\sqrt{X}\exp(-2\sqrt{\mathcal{L}})$ by the choice of $(\varrho_d)$, we are in a good position to apply Lemmas \ref{lm:bilinear-BDH}
and \ref{lm:Kloosterman-Mellin-prime}, getting
\begin{align*}H_{22}(X;M,N)&=\sum_mU(m)\mu^2(m)\Big(\frac{8\alpha}{3\pi}\Big)^{\omega(m)}\Xi^*(m,N)+O(X\mathcal{L}^{-2B-4}),\end{align*}
where
\begin{align*}\Xi^*(m,N):=\mathop{\sum\sum}_{\substack{(nd,m)=1\\m^\varDelta\leqslant n\\ d\mid P(z)}}V(nd)\varPsi\Big(\frac{mnd}{X}\Big)\mu^2(nd)(2\beta)^{\omega(nd)}\sum_{l\mid (m,P(z))}\xi(dl).\end{align*}
Rearranging all above summations, we may obtain
\begin{align*}H_{22}(X;M,N)&=\mathop{\sum\sum}_{m^\varDelta\leqslant n}U(m)V(n)\varPsi\Big(\frac{mn}{X}\Big)\mu^2(mn)\Big(\frac{8\alpha}{3\pi}\Big)^{\omega(m)}(2\beta)^{\omega(n)}\Big(\sum_{d|(mn,P(z))}\varrho_d\Big)^2\\
&\ \ \ \ +O(X\mathcal{L}^{-2B-4}).\end{align*}
Taking into account all admissible tuples $(M,N)$, we find
\begin{align*}H_{22}(X)&\leqslant\sum_{n}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)\tau_\varDelta\Big(n;\frac{8\alpha}{3\pi},2\beta\Big)\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2+O(X\mathcal{L}^{-2}).\end{align*}
We now take $\alpha,\beta>0$ such that
\begin{align}\label{eq:alpha-beta1}
\frac{8\alpha}{3\pi}+2\beta\leqslant2,
\end{align}
so that
\begin{align*}\tau_\varDelta\Big(n;\frac{8\alpha}{3\pi},2\beta\Big)\leqslant2^{\omega(n)}\end{align*}
for all squarefree $n\geqslant1$. Hence the above upper bound for $H_{22}(X)$ becomes
\begin{align}\label{eq:H22(X)-upperbound-dimreduction}
H_{22}(X)&\leqslant\frac{1}{2}\sum_{n}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)2^{\omega(n)}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2+O(X\mathcal{L}^{-2}).\end{align}
\subsection{Bounding $H_{21}(X)$ initially}
On the other hand, from the trivial inequality $\tau_{\varDelta}(n;\alpha,\beta)\leqslant(\alpha+\beta)^{\omega(n)}$ it follows that
\begin{align*}H_{21}(X)
&\leqslant\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\lambda_f(n)|(\alpha+\beta)^{\omega(n)}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2.\end{align*}
We further take $\alpha,\beta>0$ such that
\begin{align}\label{eq:alpha-beta2}
\alpha+\beta\leqslant2,
\end{align}
so that
\begin{align*}H_{21}(X)
&\leqslant\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\lambda_f(n)|2^{\omega(n)}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2.\end{align*}
By Cauchy's inequality, we have
\begin{align}\label{eq:H21(X)-Cauchy}
H_{21}(X)&\leqslant \sqrt{H_{21}'(X)H_{21}''(X)}\end{align}
with
\begin{align*}
H_{21}'(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\lambda_f(n)|^22^{\omega(n)}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2,\\
H_{21}''(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)2^{\omega(n)}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2.
\end{align*}
\subsection{Concluding an upper bound for $H_2(X)$}
The evaluations for $H_{21}'(X),H_{21}''(X)$ and $H_{22}(X)$ will rely on asymptotic computations of the average of Selberg sieve weights against some general multiplicative functions.
The latter should be of independent interest, and we state a general version as Theorem \ref{thm:asymptoticSelberg} in the appendix.
To evaluate $H_{22}(X)$ and $H_{21}''(X),$ we may take $h(n)=2^{\omega(n)}$ in Theorem \ref{thm:asymptoticSelberg}, so that
\begin{align}
H_{22}(X)&\leqslant(1+o(1))\mathfrak{S}\Big(\vartheta,\frac{\log X}{4\log z}\Big)\frac{X}{\log X}\label{eq:H22(X)-upperbound}\\
H_{21}''(X)&\leqslant(1+o(1))\mathfrak{S}\Big(\vartheta,\frac{\log X}{4\log z}\Big)\frac{X}{\log X},\label{eq:H21''(X)-upperbound}\end{align}
where $\mathfrak{S}(\cdot,\cdot)$ is given by \eqref{eq:fS(gamma,tau)}.
The evaluation of $H_{21}'(X)$ can be done by taking $h(n)=|\lambda_f(n)|^22^{\omega(n)}$ in Theorem \ref{thm:asymptoticSelberg}, and it suffices to verify
the conditions of non-vanishing, meromorphic continuation \eqref{eq:meromorphiccontinuation},
first moment \eqref{eq:convergence-firstmoment} and second moment \eqref{eq:convergence-secondmoment} with some constants $L,c_0>0.$
In fact, it is well-known (see \cite[Proposition 2.3]{RS96} for instance) that
\[\sum_{p\leqslant x}\frac{\lambda_f(p)^2\log p}{p}=\log x+O_f(1),\]
which yields \eqref{eq:convergence-firstmoment} with some $L$ depending only on $f$.
To check the condition \eqref{eq:meromorphiccontinuation} on meromorphic continuation, we may appeal to Lemma \ref{lm:pi-kappa(X)-moments} and derive that
\[\sum_{n\geqslant1}\mu^2(n)|\lambda_f(n)|^22^{\omega(n)}n^{-s}=\zeta(s)^2L(\mathrm{sym}^2f,s)^2F(s)\]
for $\Re s>1$, where $F(s)$ admits a Dirichlet series convergent absolutely in $\Re s>0.9.$ Hence the meromorphic continuation condition \eqref{eq:meromorphiccontinuation} holds with $\mathcal{H}^*(s)=L(\mathrm{sym}^2f,s)^2F(s)$ and $c_0=0.1$. The non-vanishing condition is guaranteed by the zero-free region of $L(\mathrm{sym}^2f,s)$ (see \cite[Theorem 5.44]{IK04} for instance).
After checking all above conditions, we conclude from Theorem \ref{thm:asymptoticSelberg} that
\begin{align}
H_{21}'(X)&\leqslant(1+o(1))\mathfrak{S}\Big(\vartheta,\frac{\log X}{4\log z}\Big)\frac{X}{\log X}.\label{eq:H21'(X)-upperbound}\end{align}
In conclusion, Proposition \ref{prop:H2(X)-upperbound} follows immediately by combining \eqref{eq:H2(X)-upperbound-initial}, \eqref{eq:H21(X)-Cauchy}, \eqref{eq:H22(X)-upperbound}, \eqref{eq:H21''(X)-upperbound}, \eqref{eq:H21'(X)-upperbound}.
\section{Estimate for $H_3(X)$}
We rewrite $H_3(X)$ as
\begin{align*}
H_3(X)&=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)(\lambda_f(n)-\eta\cdot\mathrm{Kl}(1,n))\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2\\
&=\sum_{\substack{d\leqslant D\\d\mid P(z)}}\mu^2(d)\xi(d)\sum_{n\equiv0\bmod d}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)(\lambda_f(n)-\eta\cdot\mathrm{Kl}(1,n))\end{align*}
with $\xi$ given by \eqref{eq:xi}.
In view of $|\xi(d)|\leqslant 3^{\omega(d)}$ for all squarefree $d\geqslant1$, Proposition \ref{prop:H3(X)-estimate} then follows from the following two lemmas.
\begin{lemma}\label{lm:BV-FM}
For any $A>0$, there exists some $B=B(A)>0$ such that
\begin{align*}
\sum_{q\leqslant \sqrt{X}\mathcal{L}^{-B}}3^{\omega(q)}\left|\sum_{n\equiv0\bmod q}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)\mathrm{Kl}(1,n)\right|\ll X\mathcal{L}^{-A},\end{align*}
where the implied constant depends on $A$ and $\varPsi.$
\end{lemma}
\begin{lemma}\label{lm:EH-eigenvalues}
For any $A>0$, there exists some $B=B(A)>0$ such that
\begin{align*}
\sum_{q\leqslant X\mathcal{L}^{-B}}3^{\omega(q)}\left|\sum_{n\equiv0\bmod q}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)\lambda_f(n)\right|\ll X\mathcal{L}^{-A},\end{align*}
where the implied constant depends on $A,f$ and $\varPsi.$
\end{lemma}
Lemma \ref{lm:BV-FM}, which can be regarded as a Bombieri--Vinogradov type equidistribution result for Kloosterman
sums, goes back to Fouvry and Michel \cite{FM07}, who derived it from the spectral theory of automorphic forms without the weights $3^{\omega(q)}$ and $\mu^2(n)$. The current version is due to Sivak-Fischler \cite{SF09} and the author \cite{Xi15} with minor modifications.
Lemma \ref{lm:EH-eigenvalues} is not surprising to readers familiar with automorphic forms, but a rigorous proof requires several extra lines. To simplify the arguments, we assume the form $f$ is of level 1. In fact, the inner sum over $n$, denoted by $T$, can be rewritten as
\begin{align*}
T&=\sum_{d\leqslant2\sqrt{X}}\mu(d)
\sum_{n\equiv 0\bmod{[q,d^2]}}\varPsi\Big(\frac{n}{X}\Big)\lambda_f(n)\\
&=\sum_{d\leqslant2\sqrt{X}}\mu(d)
\sum_{n\geqslant1}\varPsi\Big(\frac{n[q,d^2]}{X}\Big)\lambda_f(n[q,d^2]).\end{align*}
By the Hecke relation (see e.g., \cite[Formula (8.37)]{Iw02})
\[\lambda_f(mn)=\sum_{\ell\mid (m,n)}\mu(\ell)\lambda_f(m/\ell)\lambda_f(n/\ell),\]
we get
\begin{align*}
T
&=\sum_{d\leqslant2\sqrt{X}}\mu(d)\sum_{\ell\mid [q,d^2]}\mu(\ell)\lambda_f([q,d^2]/\ell)
\sum_{n\geqslant1}\varPsi\Big(\frac{n\ell [q,d^2]}{X}\Big)\lambda_f(n).\end{align*}
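As an illustrative aside (not part of the proof), the Hecke relation above can be tested numerically for the discriminant form $\Delta$ of level $1$ and weight $12$: the sketch below hardcodes the classical Ramanujan values $\tau(n)$ for $n\leqslant12$ and uses the normalization $\lambda_f(n)=\tau(n)/n^{11/2}$.

```python
import math

# Classical Ramanujan tau values tau(n), n <= 12 (f = Delta, level 1, weight 12)
TAU = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830, 6: -6048,
       7: -16744, 8: 84480, 9: -113643, 10: -115920, 11: 534612, 12: -370944}

def lam(n):
    """Normalized Hecke eigenvalue lambda_f(n) = tau(n) / n^{11/2}."""
    return TAU[n] / n ** 5.5

def moebius(n):
    """Moebius function by trial division (adequate for small n)."""
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def hecke_rhs(m, n):
    """Right-hand side sum_{l | (m,n)} mu(l) lambda_f(m/l) lambda_f(n/l)."""
    g = math.gcd(m, n)
    return sum(moebius(l) * lam(m // l) * lam(n // l)
               for l in range(1, g + 1) if g % l == 0)
```

For instance, $\lambda_f(8)=\lambda_f(2)\lambda_f(4)-\lambda_f(1)\lambda_f(2)$ holds to floating-point accuracy.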
By partial summation and the well-known estimate (see e.g., \cite[Theorem 8.1]{Iw02})
\begin{align*}
\sum_{n\leqslant N}\lambda_f(n)\ll_f N^{\frac{1}{2}}\log N,\end{align*}
we derive that
\begin{align*}
T
&\ll_{f,\varPsi}\sqrt{X}\mathcal{L}\sum_{d\leqslant2\sqrt{X}}\sum_{\ell\mid [q,d^2]}\mu^2(\ell)\frac{|\lambda_f([q,d^2]/\ell)|}{\sqrt{[q,d^2]\ell}}
\leqslant \sqrt{X}\mathcal{L}\sum_{d\leqslant2\sqrt{X}}\frac{1}{[q,d^2]}\sum_{\ell\mid [q,d^2]}\mu^2(\ell)|\lambda_f(\ell)|\sqrt{\ell}.\end{align*}
Hence the original double sum in the lemma is bounded by
\begin{align*}
&\ll\sqrt{X}\mathcal{L}\sum_{q\leqslant X\mathcal{L}^{-B}}3^{\omega(q)}\sum_{d\leqslant2\sqrt{X}}\frac{1}{[q,d^2]}\sum_{\ell\mid [q,d^2]}\mu^2(\ell)|\lambda_f(\ell)|\sqrt{\ell}\\
&\ll\sqrt{X}\mathcal{L}\sum_{\ell\leqslant 4X\mathcal{L}^{-B}}\mu^2(\ell)|\lambda_f(\ell)|\sqrt{\ell}\sum_{d\leqslant2\sqrt{X}}\frac{1}{d^2}
\sum_{\substack{q\leqslant X\mathcal{L}^{-B}\\ q\equiv0\bmod{\ell/(\ell,d^2)}}}\frac{3^{\omega(q)}(q,d^2)}{q}\\
&\ll \sqrt{X}\mathcal{L}\sum_{\ell\leqslant4X\mathcal{L}^{-B}}\frac{\mu^2(\ell)|\lambda_f(\ell)|}{\sqrt{\ell}}\sum_{d\leqslant2\sqrt{X}}\frac{3^{\omega(d)}(\ell,d)}{d^2}.\end{align*}
The lemma then follows from Cauchy's inequality and the Rankin--Selberg bound
\[\sum_{\ell\leqslant L}|\lambda_f(\ell)|^2\ll L,\]
as well as the choice $B=2A+4.$
\section{Numerical computations: concluding Theorems \ref{thm:non-identity} and \ref{thm:non-identity-general}}\label{sec:numerical}
In view of Propositions \ref{prop:H1(X)-lowerbound}, \ref{prop:H2(X)-upperbound} and \ref{prop:H3(X)-estimate}, we may conclude that
\begin{align}\label{eq:H(X)-positivelowerbound}
H^\pm(X)>\varepsilon_0 X\mathcal{L}^{-1},
\end{align}
with some absolute constant $\varepsilon_0>0$, from the inequality
\begin{align}\label{eq:inequality-determingrho}
\rho\cdot \mathfrak{A}_1(\eta)>\mathfrak{A}_2(\eta)\end{align}
by choosing $\rho,\vartheta,z$ appropriately for a given $\eta\in\mathbf{R}$, where
\begin{align}
\mathfrak{A}_1(\eta)&:=\sum_{2\leqslant i\leqslant7}I_i\cdot\sqrt{l_i^3/{u_i}},\nonumber\\
\mathfrak{A}_2(\eta)&:=(2+|\eta|)\mathfrak{S}\Big(\frac{1}{4},6\Big)=(2+|\eta|)16\mathrm{e}^{2\gamma}\Big(\frac{2c_1(6)}{3}+\frac{c_2(6)}{9}\Big)\label{eq:A2(eta)}
\end{align}
subject to the restrictions \eqref{eq:alpha-beta1}, \eqref{eq:alpha-beta2} and the choice
\begin{align}\label{eq:paremeter-choices}
\vartheta=\frac{1}{4},\ \ z=X^{\frac{1}{12}}.
\end{align}
\subsection{Upper bound for $\mathfrak{A}_2(\eta)$}
From the definitions \eqref{eq:sigma-equationsolution} and \eqref{eq:ff-equationsolution}, we find
\begin{align*}
\sigma(s)=
\begin{cases}
\dfrac{s^2}{8\mathrm{e}^{2\gamma}},\ \ &s\in~]0,2],\\\noalign{\vskip 0.9mm}
\dfrac{s^2}{8\mathrm{e}^{2\gamma}}\Big(4+\log4-2\log s-\dfrac{8s-4}{s^2}\Big),&s\in~]2,4],\\\noalign{\vskip 0.9mm}
\dfrac{s^2}{8\mathrm{e}^{2\gamma}}\Big(4\displaystyle\int_4^s\dfrac{(t-2)^2\log(t-2)}{t^3}\mathrm{d} t-(8+2\log4)\log s\\\noalign{\vskip 0.9mm}
\ \ \ \ \ \ +\dfrac{49+35\log4+8(\log4)^2}{4}-\dfrac{48+8\log4}{s}+\dfrac{32+4\log4}{s^2}\Big),&s\in~]4,6],
\end{cases}
\end{align*}
and
\begin{align*}
\mathfrak{f}(s)=
\begin{cases}
2,\ \ &s\in~]0,2],\\\noalign{\vskip 0.9mm}
4\log(s/2)+2,&s\in~]2,4],\\\noalign{\vskip 0.9mm}
8\displaystyle\int_4^s\dfrac{\log(t-2)}{t}\mathrm{d} t-(8\log2-4)\log s+16(\log2)^2-4\log2+2,&s\in~]4,6].
\end{cases}
\end{align*}
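The two piecewise solutions above can be implemented directly; the following Python sketch (an illustration only, with a simple trapezoid rule standing in for the integrals on $]4,6]$) checks that both branches of $\sigma$ and $\mathfrak{f}$ match continuously at $s=2$ and $s=4$.

```python
import math

GAMMA = 0.5772156649015329  # Euler--Mascheroni constant
E2G = math.exp(2 * GAMMA)

def _quad(fn, a, b, n=2000):
    """Composite trapezoid rule (adequate for these smooth integrands)."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n))
    return s * h

def sigma(s):
    """The piecewise formulas for sigma(s) on ]0,6]."""
    if s <= 0:
        return 0.0
    if s <= 2:
        return s * s / (8 * E2G)
    if s <= 4:
        return s*s/(8*E2G) * (4 + math.log(4) - 2*math.log(s) - (8*s - 4)/(s*s))
    L4 = math.log(4)
    I = _quad(lambda t: (t - 2)**2 * math.log(t - 2) / t**3, 4, s)
    return s*s/(8*E2G) * (4*I - (8 + 2*L4)*math.log(s)
                          + (49 + 35*L4 + 8*L4*L4)/4
                          - (48 + 8*L4)/s + (32 + 4*L4)/(s*s))

def f(s):
    """The piecewise formulas for frak-f(s) on ]0,6]."""
    if s <= 2:
        return 2.0
    if s <= 4:
        return 4*math.log(s/2) + 2
    L2 = math.log(2)
    I = _quad(lambda t: math.log(t - 2) / t, 4, s)
    return 8*I - (8*L2 - 4)*math.log(s) + 16*L2*L2 - 4*L2 + 2
```

Continuity at the breakpoints is a useful consistency check on the displayed constants.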
Note that
\begin{align*}
c_1(6)&=\frac{1}{6}\int_0^6\sigma'(6-u)\mathfrak{f}(u)^2\mathrm{d} u.
\end{align*}
From the positivity of $\sigma'$ and the monotonicity of $\mathfrak{f}$, it follows that
\begin{align*}
c_1(6)&=\frac{1}{6}\sum_{1\leqslant j\leqslant 6}\int_{j-1}^j\sigma'(6-u)\mathfrak{f}(u)^2\mathrm{d} u\leqslant \frac{1}{6}\sum_{1\leqslant j\leqslant 6}\mathfrak{f}(j)^2\int_{j-1}^j\sigma'(6-u)\mathrm{d} u\\
&=\frac{1}{6}\sum_{1\leqslant j\leqslant 6}\mathfrak{f}(j)^2(\sigma(7-j)-\sigma(6-j))\\
&=\frac{1}{6}\mathfrak{f}(1)^2\sigma(6)+\frac{1}{6}\sum_{3\leqslant j\leqslant 6}(\mathfrak{f}(j)^2-\mathfrak{f}(j-1)^2)\sigma(7-j).
\end{align*}
On the other hand,
\begin{align*}
c_2(6)&=\int_0^1\sigma'(6(1-u))\mathrm{d} u\int_0^{3u}\mathfrak{f}(6u-2v)\{2\mathfrak{f}(6u)-\mathfrak{f}(6u-2v)\}\mathrm{d} v\\
&=\frac{1}{12}\int_0^6\sigma'(6-u)\mathrm{d} u\int_0^u\mathfrak{f}(v)\{2\mathfrak{f}(u)-\mathfrak{f}(v)\}\mathrm{d} v.\end{align*}
Note that $\mathfrak{f}(v)\{2\mathfrak{f}(u)-\mathfrak{f}(v)\}\leqslant\mathfrak{f}(u)^2$ for all $v\in[0,u]$. Hence
\begin{align*}
c_2(6)&\leqslant\frac{1}{12}\int_0^6\sigma'(6-u)\mathfrak{f}(u)^2u\mathrm{d} u.\end{align*}
From the positivity of $\sigma'$ and the monotonicity of $\mathfrak{f}$, it follows that
\begin{align*}
c_2(6)
&\leqslant\frac{1}{12}\sum_{1\leqslant j\leqslant6}\mathfrak{f}(j)^2j\int_{j-1}^{j}\sigma'(6-u)\mathrm{d} u\\
&=\frac{1}{12}\sum_{1\leqslant j\leqslant6}\mathfrak{f}(j)^2j(\sigma(7-j)-\sigma(6-j))\\
&=\frac{1}{12}\mathfrak{f}(1)^2\sigma(6)+\frac{1}{12}\sum_{2\leqslant j\leqslant6}\{\mathfrak{f}(j)^2j-\mathfrak{f}(j-1)^2(j-1)\}\sigma(7-j).\end{align*}
Inserting the special values for $\sigma$ and $\mathfrak{f}$, we obtain
\begin{align*}
c_1(6)&\leqslant2.43762,\ \ \ \ c_2(6)\leqslant5.15051\end{align*}
upon the choice \eqref{eq:paremeter-choices}.
Combining the above two bounds and \eqref{eq:A2(eta)}, we conclude that
\begin{align*}
\mathfrak{A}_2(\eta)\leqslant 111.53(2+|\eta|).\end{align*}
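The constant $111.53$ is plain arithmetic from \eqref{eq:A2(eta)} and the bounds on $c_1(6),c_2(6)$; as a quick check (illustration only):

```python
import math

GAMMA = 0.5772156649015329  # Euler--Mascheroni constant

# numerical upper bounds for c1(6), c2(6) obtained above
c1_bound, c2_bound = 2.43762, 5.15051

# A_2(eta) <= (2 + |eta|) * 16 e^{2 gamma} (2 c1/3 + c2/9)
factor = 16 * math.exp(2 * GAMMA) * (2 * c1_bound / 3 + c2_bound / 9)
```

One finds `factor` just below $111.53$, and for $\eta=\pm1$ the resulting bound is $3\cdot\texttt{factor}\approx334.58$, consistent with the value $334.59$ quoted below.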
\subsection{Lower bound for $\mathfrak{A}_1(\eta)$ and concluding Theorem \ref{thm:non-identity}}
With the help of Mathematica 10, we can find
\begin{align*}
I_2\geqslant 0.28768,\ \ \ \ & I_5\geqslant 0.14893\\
I_3\geqslant 1.04781,\ \ \ \ & I_6\geqslant 0.00424\\
I_4\geqslant 0.85019,\ \ \ \ & I_7\geqslant 7.25032\times10^{-6}.\end{align*}
For $\eta=\pm1$, we obtain
$\mathfrak{A}_1(\eta)\approx 3.687\times10^{-11},$ $\mathfrak{A}_2(\eta)\leqslant 334.59$, so that
\eqref{eq:inequality-determingrho} holds by taking $\rho=9.076\times10^{12}$.
It suffices to solve the inequality
\begin{align}\label{eq:tau-upperbound}
\tau_{\varDelta}(n;\alpha,\beta)<9.076\times10^{12}.\end{align}
To conclude Theorem \ref{thm:non-identity}, we need a lower bound for
$\tau_\varDelta(n;\alpha,\beta)$ that grows as $\omega(n)$ increases.
Recall the definition \eqref{eq:tau(n;alpha,beta)} of the truncated divisor function $\tau_{\varDelta}(n;\alpha,\beta)$:
\begin{align*}
\tau_{\varDelta}(n;\alpha,\beta)=\sum_{\substack{d\mid n\\d\leqslant n^{\frac{1}{1+\varDelta}}}}\alpha^{\omega(d)}\beta^{\omega(n/d)}.\end{align*}
We would like to prove a lower bound for $\tau_{\varDelta}(n;\alpha,\beta)$ by elementary methods. To this end, let us recall a result of Soundararajan \cite{So92},
which bounds a truncated convolution of multiplicative functions from below by the {\it complete} convolution.
The following lemma can be found in \cite[Theorem 4]{So92} with minor modifications on notation.
\begin{lemma}\label{lm:Soundararajan}
Let $t>0$ be a rational number and $g$ a multiplicative function with $0<g(p)\leqslant 1/t$ for all primes $p.$ Then, for each squarefree number $n\geqslant2,$ we have
\begin{align*}
\sum_{\substack{d\mid n\\ d\leqslant n^\frac{1}{1+t}}}g(d)\geqslant\mathfrak{A}(t)\sum_{d\mid n}g(d),
\end{align*}
where, if $t$ has the continued fraction expansion $[a_0,a_1,\cdots,a_k],$
\begin{align}\label{eq:continuedfraction}
\mathfrak{A}(t):=\frac{1}{1+a_0+a_1+\cdots+a_k}.
\end{align}
In particular, if $t$ is a positive integer, then $\mathfrak{A}(t)=1/(1+t).$
\end{lemma}
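For a rational $t$, the quantity $\mathfrak{A}(t)$ in \eqref{eq:continuedfraction} is easily computed; the sketch below (illustration only) evaluates the continued fraction expansion with exact rational arithmetic.

```python
from fractions import Fraction

def continued_fraction(t):
    """Continued fraction expansion [a0; a1, ..., ak] of a positive rational t."""
    t = Fraction(t)
    terms = []
    while True:
        a = t.numerator // t.denominator
        terms.append(a)
        frac = t - a
        if frac == 0:
            return terms
        t = 1 / frac

def frakA(t):
    """A(t) = 1/(1 + a0 + a1 + ... + ak) as in the lemma."""
    return Fraction(1, 1 + sum(continued_fraction(t)))
```

In particular $14/13=[1;13]$, so $\mathfrak{A}(14/13)=1/15$, the value used below.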
We now produce a lower bound for $\tau_{\varDelta}(n;\alpha,\beta)$ by virtue of Lemma \ref{lm:Soundararajan} subject to the restrictions \eqref{eq:alpha-beta1} and \eqref{eq:alpha-beta2}. Taking $\alpha,\beta,\varDelta$ such that $\alpha\varDelta=\beta>0,\varDelta\in\mathbf{Q}~\cap~]1,+\infty[$, we conclude from Lemma \ref{lm:Soundararajan} that
\begin{align*}
\tau_{\varDelta}(n;\alpha,\beta)=\beta^{\omega(n)}\sum_{\substack{d\mid n\\d\leqslant n^{\frac{1}{1+\varDelta}}}}\Big(\frac{1}{\varDelta}\Big)^{\omega(d)}\geqslant \beta^{\omega(n)}\mathfrak{A}(\varDelta)\sum_{d\mid n}\Big(\frac{1}{\varDelta}\Big)^{\omega(d)}=\mathfrak{A}(\varDelta)(\alpha+\beta)^{\omega(n)}.\end{align*}
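This lower bound can be verified by brute force on small squarefree $n$; the following sketch (illustration only, with integer $\varDelta$ so that $\mathfrak{A}(\varDelta)=1/(1+\varDelta)$) enumerates the truncated divisor sum directly.

```python
import math
from itertools import combinations

def tau_trunc(primes, alpha, beta, Delta):
    """tau_Delta(n; alpha, beta) for squarefree n = product of the given primes."""
    n = math.prod(primes)
    cut = n ** (1.0 / (1 + Delta))
    total = 0.0
    for r in range(len(primes) + 1):
        for sub in combinations(primes, r):
            d = math.prod(sub)
            if d <= cut:  # only divisors d <= n^{1/(1+Delta)} contribute
                total += alpha ** r * beta ** (len(primes) - r)
    return total

def check(primes, alpha=0.5, Delta=2):
    """Verify tau_Delta >= A(Delta)(alpha+beta)^{omega(n)} with beta = alpha*Delta
    and integer Delta, so that A(Delta) = 1/(1+Delta)."""
    beta = alpha * Delta
    lhs = tau_trunc(primes, alpha, beta, Delta)
    rhs = (alpha + beta) ** len(primes) / (1 + Delta)
    return lhs >= rhs
```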
Following the above arguments, we are now in a position to solve the inequality
\begin{align*}
\mathfrak{A}(\varDelta)(\alpha+\beta)^{\omega(n)}<9.076\times10^{12},\end{align*}
where $\varDelta,\alpha,\beta>0$ are chosen freely subject to the following restrictions
\begin{align*}
\varDelta=\beta/\alpha\in\mathbf{Q}~\cap~]1,+\infty[,\ \ \ \ \frac{8\alpha}{3\pi}+2\beta\leqslant2,\ \ \ \ \alpha+\beta\leqslant2.
\end{align*}
In particular, we would like to take
\begin{align*}
\varDelta=\frac{14}{13},\ \ \ \ \alpha=\frac{39\pi}{52+42\pi},\ \ \ \ \beta=\frac{21\pi}{26+21\pi},
\end{align*}
in which case one has $\mathfrak{A}(\varDelta)=\frac{1}{15}.$ It now suffices to
solve the inequality
\begin{align*}
\frac{1}{15}\Big(\frac{81\pi}{52+42\pi}\Big)^{\omega(n)}<9.076\times10^{12},\end{align*}
which yields $\omega(n)<100.29,$ i.e., $\omega(n)\leqslant100.$
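The numerology here is elementary to reproduce: with the stated $\alpha,\beta$ one has $\varDelta=\beta/\alpha=14/13$, $\alpha+\beta=81\pi/(52+42\pi)$, and solving the displayed inequality for $\omega(n)$ amounts to one logarithm. A sketch (illustration only):

```python
import math

alpha = 39 * math.pi / (52 + 42 * math.pi)
beta = 21 * math.pi / (26 + 21 * math.pi)
Delta = beta / alpha              # should equal 14/13
ratio = alpha + beta              # equals 81*pi/(52+42*pi)
rho = 9.076e12

# solve (1/15) * ratio**k < rho for k
bound = math.log(15 * rho) / math.log(ratio)
```

One finds `bound` just above $100.28$, so that $\omega(n)\leqslant100$ as claimed.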
To conclude the quantitative statement in Theorem \ref{thm:non-identity}, we would like to argue as follows.
Put $\mathcal{N}(X):=\{n\in[X,2X]:\lambda_f(n)>\mathrm{Kl}(1,n),\omega(n)\leqslant 100,\mu^2(n)=1\}$. Trivially, we have
\begin{align*}
H^+(X)
&\leqslant\rho\sum_{\tau_{\varDelta}(n;\alpha,\beta)<\rho}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)\{|\psi(n)|+\psi(n)\}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2\\
&\leqslant 2\rho\sum_{\substack{\psi(n)>0\\ \omega(n)\leqslant 100}}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)|\psi(n)|\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2\end{align*}
with $\psi(n)=\lambda_f(n)-\mathrm{Kl}(1,n).$
By Cauchy's inequality, we find
\begin{align*}
H^+(X)^2
&\leqslant4\rho^2|\mathcal{N}(X)|\sum_{\substack{\psi(n)>0\\ \omega(n)\leqslant 100}}\varPsi^2\Big(\frac{n}{X}\Big)\mu^2(n)|\psi(n)|^2\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^4.\end{align*}
Note that
\begin{align*}
\Bigg|\sum_{d|(n,P(z))}\varrho_d\Bigg|\leqslant 2^{\omega(n)}\end{align*}
for each squarefree $n$, which, combined with Weil's bound for Kloosterman sums, yields
\begin{align*}
H^+(X)^2
&\leqslant4\rho^2|\mathcal{N}(X)|\sum_{\omega(n)\leqslant 100}\varPsi^2\Big(\frac{n}{X}\Big)\mu^2(n)|\psi(n)|^24^{\omega(n)}\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2\\
&\leqslant4^{101}\rho^2|\mathcal{N}(X)|\sum_{\omega(n)\leqslant 100}\varPsi^2\Big(\frac{n}{X}\Big)\mu^2(n)(|\lambda_f(n)|^2+4^{100})\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2\\
&\leqslant4^{101}\rho^2|\mathcal{N}(X)|\sum_{n\geqslant1}\varPsi^2\Big(\frac{n}{X}\Big)\mu^2(n)(|\lambda_f(n)|^2+4^{100})\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2.\end{align*}
We now proceed as in the proof of Proposition \ref{prop:H2(X)-upperbound}, so that the last sum over $n$ can be bounded by $O(X\mathcal{L}^{-1})$ with an absolute implied constant. Therefore,
\begin{align*}
H^+(X)^2\ll X\mathcal{L}^{-1}\cdot|\mathcal{N}(X)|.\end{align*}
Combining this with \eqref{eq:H(X)-positivelowerbound}, we then arrive at
\begin{align*}
|\mathcal{N}(X)|\gg X\mathcal{L}^{-1}.\end{align*}
Similar arguments can also lead to
\begin{align*}
|\{n\in[X,2X]:\lambda_f(n)<\mathrm{Kl}(1,n),\omega(n)\leqslant 100,\mu^2(n)=1\}|\gg X\mathcal{L}^{-1}.\end{align*}
This completes the proof of Theorem \ref{thm:non-identity}.
\subsection{The case of general $\eta$}
Given an $\eta\in\mathbf{R}$, one may see that those $l_i$'s in Proposition \ref{prop:Sigma(X,alphai)-lowerbound} are not always positive. To obtain a positive lower bound for $\mathfrak{A}_1(\eta)$, we need to solve the inequality $\mathfrak{A}_1(\eta)>0$, which holds provided that
\begin{align}\label{eq:A1(eta)positive-eta}
|\eta|\in[0,~1.23]\cup[11.84,~+\infty[.
\end{align}
For such $\eta$ we may choose a sufficiently large $\rho$ such that \eqref{eq:inequality-determingrho} holds, and thus we can always produce almost primes in Theorem \ref{thm:non-identity-general} for a general $\eta\in\mathbf{R}$ satisfying \eqref{eq:A1(eta)positive-eta}.
In fact, as $|\eta|$ is sufficiently large, we find from Proposition \ref{prop:Sigma(X,alphai)-lowerbound} that
$$\mathfrak{A}_1(\eta)\geqslant c_1|\eta|,\ \ \ \mathfrak{A}_2(\eta)\leqslant c_2|\eta|$$
for some constants $c_1,c_2>0.$ Therefore, a certain absolute $\rho$ can be found for all such large $|\eta|$, from which we may derive a uniform $r$ in
Theorem \ref{thm:non-identity-general}.
This is not surprising, since Kloosterman sums dominate the contributions to $H^\pm(X)$ when $|\eta|$ is quite large, so that the difficulty of Theorem
\ref{thm:non-identity-general} becomes comparable to that of detecting sign changes of Kloosterman sums with almost prime moduli, as considered in \cite{FM03b,FM07,SF09,Ma11,Xi15,Xi18}.
On the other hand, if $|\eta|$ decays to zero, we also have uniform bounds for $\mathfrak{A}_1(\eta)$ and $\mathfrak{A}_2(\eta)$. Following a similar argument,
the choice of $r$ in Theorem \ref{thm:non-identity-general} can also be made uniformly
in all such small $|\eta|$.
It remains to consider the complementary range of $\eta$ to \eqref{eq:A1(eta)positive-eta}. Recall that
\begin{align*}l_i=(1-4|\eta|\cdot(\tfrac{8}{3\pi})^{i-1}(\tfrac{11}{12})^i+B_i\eta^2)^+\end{align*}
in Proposition \ref{prop:Sigma(X,alphai)-lowerbound},
and the positivity of $l_i$ is essential to this paper. For any $\eta$ with
$|\eta|\in[1.23,~11.84],$ one may find $l_i>0.2$ as long as $i\geqslant 17.$ Therefore, one may sum up to $i=17$ in \eqref{eq:H1(X)-lowerbound-initial} with
\begin{align*}
\mathcal{R}_i&:=\{(\alpha_2,\cdots,\alpha_i)\in[\tfrac{1}{18},1[^{i-1}:\tfrac{1}{2}(1-\alpha_2-\cdots-\alpha_i)<\alpha_2+\cdots+\alpha_{i-1}\}\\
&\ \ \ \ \ \cap\{(\alpha_2,\cdots,\alpha_i)\in[\tfrac{1}{18},1[^{i-1}:\tfrac{1}{2}(\alpha_3+\cdots+\alpha_i)<\alpha_2\}\\
&\ \ \ \ \ \cap\{(\alpha_2,\cdots,\alpha_i)\in[\tfrac{1}{18},1[^{i-1}:\alpha_i<\alpha_{i-1}<\cdots<\alpha_2<1-\alpha_2-\cdots-\alpha_i\}
\end{align*}
for $i=17$. To evaluate the Selberg sieve weight, we may instead take $z=X^{\frac{1}{19}}$, so that
\begin{align*}\sum_{d|(n,P(z))}\varrho_d=\varrho_1=1\end{align*}
if $n$ is restricted to $\mathcal{P}_{17}(X,\boldsymbol\alpha_{17})$.
Following the above arguments in proving Proposition \ref{prop:H1(X)-lowerbound}, we may obtain a positive lower bound for $H_{1,17}(X)$, and thus that for $H_1(X).$
To complete the proof of Theorem \ref{thm:non-identity-general}, it remains to produce an explicit numerical upper bound for $H_2(X).$
This requires a delicate analysis of $\sigma(s)$ and $\mathfrak{f}(s)$, and the details are omitted here.
The Mathematica codes can be found at \url{http://gr.xjtu.edu.cn/web/ping.xi/miscellanea} or requested from the author.
\appendix
\section{Multiplicative functions against M\"obius}
We would like to evaluate a weighted average of general multiplicative functions against the M\"obius function.
This will be employed in the evaluation of Selberg sieve weights essentially given by \eqref{eq:varrho_d}.
Let $g$ be a non-negative multiplicative function with $0\leqslant g(p)<1$ for each $p\in\mathcal{P}$. Suppose the Dirichlet series
\begin{align}\label{eq:G(s)}
\mathcal{G}(s):=\sum_{n\geqslant1}\mu^2(n)g(n)n^{-s}
\end{align}
converges absolutely for $\mathrm{Re}\, s>1.$ Assume there exist a positive integer $\kappa$ and some constants $L,c_0>0,$ such that
\begin{align}\label{eq:meromorphiccontinuation-G}
\mathcal{G}(s)=\zeta(s+1)^\kappa\mathcal{F}(s),
\end{align}
where $\mathcal{F}(s)$ is holomorphic for $\mathrm{Re}\, s\geqslant -c_0$ and does not vanish in the region
\begin{align}\label{eq:zerofree}
\mathcal{D}:=\Big\{\sigma+it: t\in\mathbf{R},\sigma\geqslant-\frac{1}{L\cdot\log(|t|+2)}\Big\},
\end{align}
and $|1/\mathcal{F}(s)|\leqslant L$ for all $s\in\mathcal{D}.$
We also assume
\begin{align}\label{eq:convergence-firstmoment-g}
\left|\sum_{p\leqslant x}g(p)\log p-\kappa\log x\right|\leqslant L
\end{align}
holds for all $x\geqslant3$ and
\begin{align}\label{eq:convergence-secondmoment-g}
\sum_{p}g(p)^2p^{2c_0}<+\infty.
\end{align}
We are interested in the asymptotic behaviour of the sum
\begin{align*}
\mathcal{M}_\kappa(x,z;q)&=\sum_{\substack{n\leqslant x\\ n\mid P(z)\\ (n,q)=1}}\mu(n)g(n)\Big(\log\frac{x}{n}\Big)^\kappa,\end{align*}
where $q$ is a positive integer and $x,z\geqslant3$.
\begin{lemma}\label{lm:averageagainstMobius}
Let $q\geqslant1.$ Under the assumption as above, we have
\begin{align*}
\mathcal{M}_\kappa(x,z;q)&=H\cdot\prod_{p\mid q}(1-g(p))^{-1}\cdot m_\kappa(s)+O(\kappa^{\omega(q)}(\log z)^{-A})\end{align*}
for all $A>0,x\geqslant2,z\geqslant2$ with $x\leqslant z^{O(1)}$, where $s=\log x/\log z,$
\[H=\prod_{p}(1-g(p))\Big(1-\frac{1}{p}\Big)^{-\kappa},\]
and $m_\kappa(s)$ is a continuous solution to the differential-difference equation
\begin{align}\label{eq:mkappa(s)}
\begin{cases}
m_\kappa(s)=\kappa!,\ \ &s\in~]0,1],\\
sm_\kappa'(s)=\kappa m_\kappa(s-1),&s\in~]1,+\infty[.\end{cases}
\end{align}
The implied constant depends on $A,\kappa,L$ and $c_0.$
\end{lemma}
\proof
The proof is inspired by \cite[Appendix A.3]{FI10}.
Write $\mathcal{M}_\kappa(x,x;q)=\mathcal{M}_\kappa(x;q)$. By Mellin inversion, we have
\begin{align*}
\mathcal{M}_\kappa(x;q)&=\sum_{\substack{n\leqslant x\\ (n,q)=1}}\mu(n)g(n)\Big(\log\frac{x}{n}\Big)^\kappa=\frac{\kappa!}{2\pi i}\int_{2-i\infty}^{2+i\infty}\mathcal{G}(t,q)\frac{x^t}{t^{\kappa+1}}\mathrm{d} t,\end{align*}
where
\begin{align*}
\mathcal{G}(t,q)=\sum_{\substack{n\geqslant1\\(n,q)=1}}\frac{\mu(n)g(n)}{n^t},\ \ \mathrm{Re}\, t>1.\end{align*}
Note that
\begin{align*}
\mathcal{G}(t,q)=\prod_{p\nmid q}\Big(1-\frac{g(p)}{p^t}\Big)=\prod_{p\mid q}\Big(1-\frac{g(p)}{p^t}\Big)^{-1}\frac{\mathcal{G}^*(t)}{\zeta(t+1)^\kappa},\end{align*}
where
\begin{align*}
\mathcal{G}^*(t)=\prod_{p}\Big(1-\frac{g(p)}{p^t}\Big)\Big(1-\frac{1}{p^{t+1}}\Big)^{-\kappa}=\prod_{p}\Big(1-\frac{g(p)^2}{p^{2t}}\Big)\frac{1}{\mathcal{F}(t)},\end{align*}
which is absolutely convergent and holomorphic for $t\in\mathcal{D}$ with $\mathrm{Re}\, t\geqslant-c_0$, by \eqref{eq:meromorphiccontinuation-G}, \eqref{eq:convergence-firstmoment-g} and \eqref{eq:convergence-secondmoment-g}.
Hence we find
\begin{align*}
\mathcal{M}_\kappa(x;q)
&=\frac{\kappa!}{2\pi i}\int_{2-i\infty}^{2+i\infty}\prod_{p\mid q}\Big(1-\frac{g(p)}{p^t}\Big)^{-1}\frac{\mathcal{G}^*(t)x^t}{\zeta(t+1)^\kappa t^{\kappa+1}}\mathrm{d} t.\end{align*}
Shifting the $t$-contour to the left boundary of $\mathcal{D}$ and picking up the simple pole at $t=0$, we get
\begin{align*}
\mathcal{M}_\kappa(x;q)&=\kappa!\mathcal{G}^*(0)\prod_{p\mid q}(1-g(p))^{-1}+O(\kappa^{\omega(q)}(\log 2x)^{-A})\end{align*}
for any fixed $A>0$.
For $s=\log x/\log z,$ we expect that
\begin{align}\label{eq:expectation-generalz}
\mathcal{M}_\kappa(x,z;q)&=c(q)m_\kappa(s)+O(\kappa^{\omega(q)}(\log z)^{-A})\end{align}
for all $A>0,x\geqslant2,z\geqslant2$ and $q\geqslant1$, where $c(q)$ is some constant defined in terms of $g$ and depending also on $q$, and $m_\kappa(s)$ is a suitable continuous function in $s>0.$
As mentioned above, this expected asymptotic formula holds for $0<s\leqslant1,$ in which case we may take
\begin{align*}
c(q)=\mathcal{G}^*(0)\prod_{p\mid q}(1-g(p))^{-1},\ \ \ m_\kappa(s)=\kappa!.\end{align*}
We now move to the case $s>1$ and prove the asymptotic formula \eqref{eq:expectation-generalz} by induction. Since $x\leqslant z^{O(1)},$ this induction will have a bounded number of steps.
We first consider the difference $\mathcal{M}_\kappa(x,z;q)-\mathcal{M}_\kappa(x;q)$. In fact, each $n$ that contributes to this difference has a prime factor at least $z$, and we may decompose such $n$ uniquely as $n=mp$ with $z\leqslant p<x$ and $m\mid P(p).$ Hence
\begin{align}
\mathcal{M}_\kappa(x,z;q)
&=\mathcal{M}_\kappa(x;q)+\sum_{\substack{z\leqslant p<x\\(p,q)=1}}g(p)\sum_{\substack{m\leqslant x/p\\ m\mid P(p)\\ (m,q)=1}}\mu(m)g(m)\Big(\log\frac{x}{mp}\Big)^\kappa\nonumber\\
&=\mathcal{M}_\kappa(x;q)+\sum_{\substack{z\leqslant p<x\\(p,q)=1}}g(p)\mathcal{M}_\kappa(x/p,p;q).\label{eq:iteration}\end{align}
Substituting \eqref{eq:expectation-generalz} into \eqref{eq:iteration}, we get
\begin{align*}
\mathcal{M}_\kappa(x,z;q)
&=c(q)\kappa!+c(q)\sum_{\substack{z\leqslant p<x\\(p,q)=1}}g(p)m_\kappa\Big(\frac{\log(x/p)}{\log p}\Big)+O(\kappa^{\omega(q)}(\log x)^{-A})\\
&\ \ \ \ +O\Big(\kappa^{\omega(q)}\sum_{\substack{z\leqslant p<x\\(p,q)=1}}g(p)(\log (2x/p))^{-A}\Big).\end{align*}
By partial summation, we find
\begin{align*}
\mathcal{M}_\kappa(x,z;q)
&=c(q)\Big\{\kappa!+\kappa\int_1^sm_\kappa\Big(\frac{s}{u}-1\Big)\frac{\mathrm{d} u}{u}\Big\}+O(\kappa^{\omega(q)}(\log z)^{-A}).\end{align*}
Hence, by \eqref{eq:expectation-generalz}, $m_\kappa(s)$ should satisfy the equation
\begin{align*}
m_\kappa(s)=\kappa!+\kappa\int_1^sm_\kappa\Big(\frac{s}{u}-1\Big)\frac{\mathrm{d} u}{u}=\kappa!+\kappa\int_1^sm_\kappa(u-1)\frac{\mathrm{d} u}{u}\end{align*}
for $s>1$.
Taking the derivative with respect to $s$ gives \eqref{eq:mkappa(s)}.
\endproof
\begin{remark}
To extend $m_\kappa(s)$ to be defined on $\mathbf{R},$ we may put $m_\kappa(s)=0$ for $s\leqslant0$.
\end{remark}
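As a sanity check (illustration only), the delay differential equation \eqref{eq:mkappa(s)} for $\kappa=2$ can be integrated numerically and compared with the closed form $m_2(s)=2+4\log s$ on $]1,2]$, which follows directly since $m_2(s-1)=2$ there.

```python
import math

def m2_numeric(s, h=1e-3):
    """Integrate s m'(s) = 2 m(s-1), m = 2 on [0,1], by the trapezoid rule on a
    uniform grid; assumes s > 1 and 1/h integral so that t-1 lies on the grid."""
    N = round(1 / h)
    steps = round((s - 1) / h)
    m = [2.0] * (N + 1)              # m(t) = 2 for t in [0,1], grid t_i = i*h
    for i in range(N, N + steps):
        t0, t1 = i * h, (i + 1) * h
        g0 = 2 * m[i - N] / t0       # m'(t) = 2 m(t-1)/t; m(t-1) is known data
        g1 = 2 * m[i + 1 - N] / t1
        m.append(m[i] + h * (g0 + g1) / 2)
    return m[-1]
```

The same routine evaluates $m_2$ for larger $s$, where no elementary closed form is available.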
\section{A two-dimensional Selberg sieve with asymptotics}
This section presents a two-dimensional Selberg sieve that plays an essential role in proving Proposition \ref{prop:H2(X)-upperbound}.
Let $h$ be a non-negative multiplicative function. Suppose the Dirichlet series
\begin{align}\label{eq:H(s)}
\mathcal{H}(s):=\sum_{n\geqslant1}\mu^2(n)h(n)n^{-s}
\end{align}
converges absolutely for $\mathrm{Re}\, s>1.$ Assume there exist some constants $L,c_0>0,$ such that
\begin{align}\label{eq:meromorphiccontinuation}
\mathcal{H}(s)=\zeta(s)^2\mathcal{H}^*(s),
\end{align}
where $\mathcal{H}^*(s)$ is holomorphic for $\mathrm{Re}\, s\geqslant 1-c_0,$ does not vanish in the region
$\mathcal{D}$ given by \eqref{eq:zerofree}, and satisfies $|1/\mathcal{H}^*(s)|\leqslant L$ for all $s\in\mathcal{D}$.
We also assume
\begin{align}\label{eq:convergence-firstmoment}
\left|\sum_{p\leqslant x}\frac{h(p)\log p}{p}-2\log x\right|\leqslant L
\end{align}
holds for all $x\geqslant3$ and
\begin{align}\label{eq:convergence-secondmoment}
\sum_{p}h(p)^2p^{2c_0-2}<+\infty.
\end{align}
Define
\begin{align*}
S(X,z;h,\boldsymbol\varrho)=\sum_{n\geqslant1}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)h(n)\Big(\sum_{d|(n,P(z))}\varrho_d\Big)^2,
\end{align*}
where $\boldsymbol\varrho=(\varrho_d)$ is given as in \eqref{eq:varrho_d} and $\varPsi$ is a fixed non-negative smooth function supported in $[1,2]$ with normalization \eqref{eq:normalization-Psi}.
\begin{theorem}\label{thm:asymptoticSelberg}
Let $X,D,z\geqslant3$ with $X\leqslant D^{O(1)}$ and $X\leqslant z^{O(1)}.$ Put $\tau=\log D/\log z$ and $\sqrt{D}=X^\vartheta\exp(-\sqrt{\mathcal{L}}),\vartheta\in~]0,\frac{1}{2}[.$
Under the above assumptions, we have
\begin{align*}
S(X,z;h,\boldsymbol\varrho)&=(1+o(1))\mathfrak{S}(\vartheta,\tau)X\mathcal{L}^{-1},\end{align*}
where $\mathfrak{S}(\vartheta,\tau)$ is defined by
\begin{align}\label{eq:fS(gamma,tau)}
\mathfrak{S}(\vartheta,\tau)&=16\mathrm{e}^{2\gamma}\Big(\frac{c_1(\tau)}{4\tau\vartheta^2}+\frac{c_2(\tau)}{\tau^2\vartheta}\Big),\end{align}
with
\begin{align*}
c_1(\tau)&=\int_0^1\sigma'((1-u)\tau)\mathfrak{f}(u\tau)^2\mathrm{d} u,\\
c_2(\tau)&=\int_0^1\int_0^1\sigma'((1-u)\tau)\mathfrak{f}(u\tau-2v)\{2\mathfrak{f}(u\tau)-\mathfrak{f}(u\tau-2v)\}\mathrm{d} u\mathrm{d} v.\end{align*}
Here $\sigma(s)$ is the continuous solution to the differential-difference equation
\begin{align}\label{eq:sigma-equationsolution}
\begin{cases}
\sigma(s)=\dfrac{s^2}{8\mathrm{e}^{2\gamma}},\ \ &s\in~]0,2],\\\noalign{\vskip 0.9mm}
(s^{-2}\sigma(s))'=-2s^{-3}\sigma(s-2),&s\in~]2,+\infty[,
\end{cases}
\end{align}
and $\mathfrak{f}(s)=m_2(s/2)$ as given by $\eqref{eq:mkappa(s)},$ i.e., $\mathfrak{f}(s)$ is the continuous solution to the differential-difference equation
\begin{align}\label{eq:ff-equationsolution}
\begin{cases}
\mathfrak{f}(s)=2,\ \ &s\in~]0,2],\\
s\mathfrak{f}'(s)=2\mathfrak{f}(s-2),&s\in~]2,+\infty[.\end{cases}
\end{align}
\end{theorem}
\begin{remark}
Theorem \ref{thm:asymptoticSelberg} generalizes \cite[Proposition 4.1]{Xi18} to a general multiplicative function $h$ with the extra restriction $d\mid P(z)$,
but specializes $k=2$ therein. It would be rather interesting to extend this to a general $k\in\mathbf{Z}^+$, and we plan to return to this problem in the near future.
We now choose $z=\sqrt{D}$, so that the restriction $d\mid P(z)$ is redundant, in which case one has $\tau=2.$ Note that
\begin{align*}
c_1(2)&=4\int_0^1\sigma'(2u)\mathrm{d} u=\frac{1}{\mathrm{e}^{2\gamma}},\\
c_2(2)&=4\int_0^1\sigma'(2(1-u))u\mathrm{d} u=\frac{1}{3\mathrm{e}^{2\gamma}}.\end{align*}
For $\vartheta=1/4,$ we find $\mathfrak{S}(\vartheta,\tau)=\mathfrak{S}(1/4,2)=112/3$, which coincides with $4\mathfrak{c}(2,F)$ in \cite[Proposition 4.1]{Xi18} by taking $F(x)=x^2$ therein.
\end{remark}
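The value $\mathfrak{S}(1/4,2)=112/3$ can be reproduced numerically from \eqref{eq:fS(gamma,tau)}: for $\tau=2$ only the initial branches $\sigma(s)=s^2/(8\mathrm{e}^{2\gamma})$ and $\mathfrak{f}(s)=2$ enter, with $\mathfrak{f}$ extended by zero to $s\leqslant0$. A sketch (midpoint quadrature, illustration only):

```python
import math

GAMMA = 0.5772156649015329  # Euler--Mascheroni constant
E2G = math.exp(2 * GAMMA)

def sigma_prime(s):
    """sigma'(s) = s/(4 e^{2 gamma}) on ]0,2]; only this branch is needed for tau = 2."""
    return s / (4 * E2G) if 0 < s <= 2 else 0.0

def f(s):
    """frak-f(s) = 2 on ]0,2], extended by 0 for s <= 0."""
    return 2.0 if s > 0 else 0.0

def frakS(theta, tau, N=800):
    """Evaluate frak-S(theta, tau) from c1(tau), c2(tau) by midpoint quadrature."""
    c1 = sum(sigma_prime((1 - (i + 0.5) / N) * tau) * f((i + 0.5) / N * tau) ** 2
             for i in range(N)) / N
    c2 = 0.0
    for i in range(N):
        u = (i + 0.5) / N
        for j in range(N):
            v = (j + 0.5) / N
            w = f(u * tau - 2 * v)
            c2 += sigma_prime((1 - u) * tau) * w * (2 * f(u * tau) - w)
    c2 /= N * N
    return 16 * E2G * (c1 / (4 * tau * theta ** 2) + c2 / (tau ** 2 * theta))
```

This recovers $c_1(2)=\mathrm{e}^{-2\gamma}$, $c_2(2)=\tfrac{1}{3}\mathrm{e}^{-2\gamma}$, and hence $\mathfrak{S}(1/4,2)\approx112/3$.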
We now give the proof of Theorem \ref{thm:asymptoticSelberg}.
To begin with, we write by \eqref{eq:xi} that
\begin{align*}
S(X,z;h,\boldsymbol\varrho)
&=\sum_{d\mid P(z)}\xi(d)\sum_{n\equiv0\bmod d}\varPsi\Big(\frac{n}{X}\Big)\mu^2(n)h(n)\\
&=\sum_{d\mid P(z)}\xi(d)h(d)\sum_{(n,d)=1}\varPsi\Big(\frac{nd}{X}\Big)\mu^2(n)h(n).\end{align*}
By Mellin inversion,
\begin{align*}
\sum_{(n,d)=1}\varPsi\Big(\frac{nd}{X}\Big)\mu^2(n)h(n)
&=\frac{1}{2\pi i}\int_{(2)}\widetilde{\varPsi}(s)(X/d)^s\mathcal{H}^\flat(s,d)\mathrm{d} s,\end{align*}
where, for $\mathrm{Re}\, s>1,$
\[\mathcal{H}^\flat(s,d)=\sum_{\substack{n\geqslant1\\(n,d)=1}}\frac{\mu^2(n)h(n)}{n^s}.\]
For $\mathrm{Re}\, s>1,$ we first write
\begin{align*}\mathcal{H}^\flat(s,d)&=\prod_{p\nmid d}\Big(1+\frac{h(p)}{p^s}\Big)=\prod_{p\mid d}\Big(1+\frac{h(p)}{p^s}\Big)^{-1}\mathcal{H}(s)=\prod_{p\mid d}\Big(1+\frac{h(p)}{p^s}\Big)^{-1}\zeta(s)^2\mathcal{G}(s).\end{align*}
Note that
\begin{align*}
\mathcal{G}(1)=\lim_{s\rightarrow1}\frac{\mathcal{H}(s)}{\zeta(s)^2}=\prod_p\Big(1+\frac{h(p)}{p}\Big)\Big(1-\frac{1}{p}\Big)^2.\end{align*}
By \eqref{eq:meromorphiccontinuation}, $\mathcal{H}^\flat(s,d)$ admits a meromorphic continuation to $\mathrm{Re}\, s\geqslant 1-c_0.$ Shifting the $s$-contour to the left beyond $\mathrm{Re}\, s=1,$ we may obtain
\begin{align*}
\sum_{(n,d)=1}\varPsi\Big(\frac{nd}{X}\Big)\mu^2(n)h(n)
&=\mathrm{Res}_{s=1}\,\widetilde{\varPsi}(s)\mathcal{G}(s)(X/d)^s\prod_{p\mid d}\Big(1+\frac{h(p)}{p^s}\Big)^{-1}\zeta(s)^2+O((X/d)\mathcal{L}^{-100}).\end{align*}
We compute the residue as
\begin{align*}
\mathrm{Res}_{s=1}[\cdots]
&=\frac{\mathrm{d}}{\mathrm{d} s}\widetilde{\varPsi}(s)\mathcal{G}(s)(X/d)^s\prod_{p\mid d}\Big(1+\frac{h(p)}{p^s}\Big)^{-1}\zeta(s)^2(s-1)^2\Big|_{s=1}\\
&=\widetilde{\varPsi}(1)\mathcal{G}(1)\prod_{p\mid d}\Big(1+\frac{h(p)}{p}\Big)^{-1}\frac{X}{d}\Big(\log(X/d)+\sum_{p\mid d}\frac{h(p)\log p}{p+h(p)}+c\Big)\\
&=\mathcal{G}(1)\prod_{p\mid d}\Big(1+\frac{h(p)}{p}\Big)^{-1}\frac{X}{d}\Big(\log X-\sum_{p\mid d}\frac{p\log p}{p+h(p)}+c\Big),\end{align*}
where $c$ is some constant independent of $d$.
Define $\beta$ and $\beta^*$ to be multiplicative functions supported on squarefree numbers via
\begin{align*}\beta(p)=\frac{p}{h(p)}+1,\ \ \ \ \beta^*(p)=\beta(p)-1=\frac{p}{h(p)}.\end{align*}
Define $L$ to be an additive function supported on squarefree numbers via
\begin{align*}
L(p)=\frac{\beta^*(p)\log p}{\beta(p)}.\end{align*}
Therefore, for each squarefree number $d$, we have
\begin{align*}
\beta(d)=\prod_{p\mid d}\Big(\frac{p}{h(p)}+1\Big),\ \ \ \ \beta^*(d)=\frac{d}{h(d)},\ \ \ \ L(d)= \sum_{p\mid d}\frac{\beta^*(p)\log p}{\beta(p)}.\end{align*}
In this way, we may obtain
\begin{align*}S(X,z;h,\boldsymbol\varrho)
&=\mathcal{G}(1)X\{S_1(X)\cdot (\log X+c)-S_2(X)\}+O(X\mathcal{L}^{-2}),\end{align*}
where
\begin{align*}
S_1(X)
&=\sum_{d\mid P(z)}\frac{\xi(d)}{\beta(d)},\\
S_2(X)
&=\sum_{d\mid P(z)}\frac{\xi(d)}{\beta(d)}L(d).\end{align*}
Note that
\begin{align*}
S_1(X)&=\mathop{\sum\sum}_{d_1,d_2\mid P(z)}\frac{\varrho_{d_1}\varrho_{d_2}}{\beta([d_1,d_2])}\\
&=\mathop{\sum\sum}_{d_1,d_2\mid P(z)}\frac{\varrho_{d_1}\varrho_{d_2}}{\beta(d_1)\beta(d_2)}\beta((d_1,d_2))\\
&=\mathop{\sum\sum}_{d_1,d_2\mid P(z)}\frac{\varrho_{d_1}\varrho_{d_2}}{\beta(d_1)\beta(d_2)}\sum_{l\mid(d_1,d_2)}\beta^*(l).\end{align*}
Hence we may diagonalize $S_1(X)$ as
\begin{align}\label{eq:S1(X)-initial}
S_1(X)
&=\sum_{\substack{l\leqslant\sqrt{D}\\l\mid P(z)}}\beta^*(l)y_l^2,\end{align}
where, for each $l\mid P(z)$ and $l\leqslant\sqrt{D},$
\begin{align*}
y_l=\sum_{\substack{d\mid P(z)\\ d\equiv0\bmod l}}\frac{\varrho_d}{\beta(d)}.\end{align*}
From the definition of sieve weights \eqref{eq:varrho_d}, we find
\begin{align*}
y_l
&=\frac{4\mu(l)}{\beta(l)(\log D)^2}\sum_{\substack{d\leqslant\sqrt{D}/l\\dl\mid P(z)}}\frac{\mu(d)}{\beta(d)}\Big(\log\frac{\sqrt{D}/l}{d}\Big)^2.\end{align*}
Applying Lemma \ref{lm:averageagainstMobius} with $g(p)=1/\beta(p)$ and $q=l$, we have
\begin{align}\label{eq:yl}
y_l
&=\frac{4\mu(l)}{\mathcal{G}(1)\beta^*(l)(\log D)^2}m_2\Big(\frac{\log(\sqrt{D}/l)}{\log z}\Big)+O\Big(\frac{2^{\omega(l)}}{\beta(l)}(\log z)^{-A}\Big).\end{align}
Inserting this expression to \eqref{eq:S1(X)-initial}, we have
\begin{align*}
S_1(X)
&=\frac{16(1+o(1))}{\mathcal{G}(1)^2(\log D)^4}\sum_{\substack{l\leqslant\sqrt{D}\\l\mid P(z)}}\frac{1}{\beta^*(l)}m_2\Big(\frac{\log(\sqrt{D}/l)}{\log z}\Big)^2.\end{align*}
Following \cite[Lemma 6.1]{HR74}, we have
\begin{align}
\sum_{\substack{l\leqslant x\\l\mid P(z)}}\frac{1}{\beta^*(l)}=\frac{1}{W(z)}\Big\{\sigma(2\log x/\log z)+O\Big(\frac{(\log x/\log z)^5}{\log z}\Big)\Big\}\end{align}
with
\begin{align*}W(z)&=\prod_{p<z}\Big(1-\frac{1}{\beta(p)}\Big),\end{align*}
which, combined with partial summation, gives
\begin{align*}
S_1(X)
&=\frac{16\tau c_1(\tau)}{\mathcal{G}(1)^2W(z)(\log D)^4}\cdot (1+o(1))\end{align*}
with $\tau=\log D/\log z$ and
\begin{align}
c_1(\tau)=\int_0^1\sigma'((1-u)\tau)\mathfrak{f}(u\tau)^2\mathrm{d} u.\end{align}
We now turn to consider $S_2(X)$. Note that $L(d)$ is an additive function supported on squarefree numbers. We then have
\begin{align*}
S_2(X)
&=\mathop{\sum\sum}_{d_1,d_2\mid P(z)}\frac{\varrho_{d_1}\varrho_{d_2}}{\beta([d_1,d_2])}L([d_1,d_2])\\
&=\mathop{\sum\sum\sum}_{dd_1d_2\mid P(z)}\frac{\varrho_{dd_1}\varrho_{dd_2}}{\beta(dd_1d_2)}\{L(d)+L(d_1)+L(d_2)\},\end{align*}
where there is an implicit restriction that $d,d_1,d_2$ are pairwise coprime. By the M\"obius formula, we have
\begin{align*}
S_2(X)
&=\mathop{\sum\sum\sum}_{dd_1,dd_2\mid P(z)}\frac{\varrho_{dd_1}\varrho_{dd_2}}{\beta(d)\beta(d_1)\beta(d_2)}\{L(d)+L(d_1)+L(d_2)\}\sum_{l\mid(d_1,d_2)}\mu(l)\\
&=\mathop{\sum\sum\sum\sum}_{ldd_1,ldd_2\mid P(z)}\frac{\mu(l)\varrho_{ldd_1}\varrho_{ldd_2}}{\beta(l)^2\beta(d)\beta(d_1)\beta(d_2)}\{L(ldd_1)+L(ldd_2)-L(d)\}\\
&=2S_{21}(X)-S_{22}(X)\end{align*}
with
\begin{align*}
S_{21}(X)&=\sum_{l\mid P(z)}\beta^*(l)y_ly_l',\\
S_{22}(X)&=\sum_{l\mid P(z)}v(l)y_l^2,\end{align*}
where for each $l\mid P(z),l\leqslant\sqrt{D},$
\begin{align*}
y_l'=\sum_{\substack{d\mid P(z)\\ d\equiv0\bmod l}}\frac{\varrho_dL(d)}{\beta(d)}\end{align*}
and
\begin{align}\label{eq:v(d)}
v(l)&=\beta(l)\sum_{uv=l}\frac{\mu(u)L(v)}{\beta(u)}.\end{align}
Moreover, we have
\begin{align*}
y_l'&=\sum_{dl\mid P(z)}\frac{\varrho_{dl}L(dl)}{\beta(dl)}=\sum_{d\mid P(z)}\frac{\varrho_{dl}L(d)}{\beta(dl)}+L(l)y_l\\
&=\sum_{p<z}\frac{\beta^*(p)\log p}{\beta(p)}\sum_{d\mid P(z)}\frac{\varrho_{pdl}}{\beta(pdl)}+L(l)y_l\\
&=\sum_{p<z}\frac{y_{pl}\beta^*(p)\log p}{\beta(p)}+L(l)y_l.\end{align*}
It then follows that
\begin{align*}
S_{21}(X)&=\sum_{p<z}\frac{\beta^*(p)\log p}{\beta(p)}\sum_{l\mid P(z)}\beta^*(l)y_ly_{pl}+\sum_{l\mid P(z)}L(l)\beta^*(l)y_l^2\\
&=\sum_{p<z}\frac{\beta^*(p)\log p}{\beta(p)}\sum_{pl\mid P(z)}\beta^*(l)y_ly_{pl}+\sum_{p<z}\frac{\beta^*(p)^2\log p}{\beta(p)}\sum_{pl\mid P(z)}\beta^*(l)y_{pl}^2\\
&=S_{21}'(X)+S_{21}''(X),\end{align*}
say.
From \eqref{eq:yl}, it follows, by partial summation, that
\begin{align*}
S_{21}'(X)
&=-\frac{16(1+o(1))}{\mathcal{G}(1)^2(\log D)^4}\sum_{l\mid P(z)}\frac{1}{\beta^*(l)}m_2\Big(\frac{\log(\sqrt{D}/l)}{\log z}\Big)\sum_{\substack{p<z\\ p\nmid l}}\frac{\log p}{\beta(p)}m_2\Big(\frac{\log(\sqrt{D}/(pl))}{\log z}\Big).\end{align*}
Up to a minor contribution, the inner sum over $p$ can be extended to all primes $p<z.$ In fact, the terms with $p\mid l$ contribute at most
\begin{align*}
&\ll\frac{1}{(\log D)^4}\sum_{l\mid P(z)}\frac{1}{\beta^*(l)}m_2\Big(\frac{\log(\sqrt{D}/l)}{\log z}\Big)\sum_{p\mid l}\frac{\log p}{p}\\
&\ll\frac{1}{(\log D)^3\log \log D}\sum_{l\mid P(z)}\frac{1}{\beta^*(l)}m_2\Big(\frac{\log(\sqrt{D}/l)}{\log z}\Big)\\
&\ll\frac{1}{W(z)(\log D)^3\log\log D}.\end{align*}
We then derive that
\begin{align*}
S_{21}'(X)
&=-\frac{16(1+o(1))}{\mathcal{G}(1)^2(\log D)^4}\sum_{l\mid P(z)}\frac{1}{\beta^*(l)}m_2\Big(\frac{\log(\sqrt{D}/l)}{\log z}\Big)\sum_{p<z}\frac{\log p}{\beta(p)}m_2\Big(\frac{\log(\sqrt{D}/(pl))}{\log z}\Big)\\
&\ \ \ \ \ \ +O\Big(\frac{\log z}{\log \log z}\frac{1}{W(z)(\log D)^4}\Big)\\
&=-\frac{32\tau c_{21}'(\tau)\log z}{\mathcal{G}(1)^2W(z)(\log D)^4}\cdot(1+o(1)),\end{align*}
where
\begin{align*}
c_{21}'(\tau)=\int_0^1\int_0^1\sigma'((1-u)\tau)\mathfrak{f}(u\tau)\mathfrak{f}(u\tau-2v)\mathrm{d} u\mathrm{d} v.\end{align*}
In a similar manner, we can also show that
\begin{align*}
S_{21}''(X)&=\frac{32\tau c_{21}''(\tau)\log z}{\mathcal{G}(1)^2W(z)(\log D)^4}\cdot(1+o(1)),\end{align*}
where
\begin{align*}
c_{21}''(\tau)=\int_0^1\int_0^1\sigma'((1-u)\tau)\mathfrak{f}(u\tau-2v)^2\mathrm{d} u\mathrm{d} v.\end{align*}
In conclusion, we obtain
\begin{align*}
S_{21}(X)=S_{21}'(X)+S_{21}''(X)
&=\frac{32\tau(c_{21}''(\tau)-c_{21}'(\tau))\log z}{\mathcal{G}(1)^2W(z)(\log D)^4}\cdot(1+o(1)).\end{align*}
We now evaluate $S_{22}(X)$. For each squarefree $l\geqslant1$, we have
\begin{align*}
v(l)&=\beta(l)\sum_{u\mid l}\frac{\mu(u)}{\beta(u)}\sum_{p\mid l/u}\frac{\beta^*(p)\log p}{\beta(p)}\\
&=\beta(l)\sum_{p\mid l}\frac{\beta^*(p)\log p}{\beta(p)}\sum_{u\mid l/p}\frac{\mu(u)}{\beta(u)}\\
&=\beta(l)\sum_{p\mid l}\frac{\beta^*(l/p)\beta^*(p)\log p}{\beta(l/p)\beta(p)}\\
&=\beta^*(l)\log l.\end{align*}
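The identity $v(l)=\beta^*(l)\log l$ just obtained can be sanity-checked numerically. The sketch below is purely illustrative: the values $\beta(p)$ are arbitrary test data, and it encodes the structure used in the computation above, namely $\beta$ multiplicative on squarefree numbers, $\beta^*(p)=\beta(p)-1$, and $L$ additive with $L(p)=\beta^*(p)\log p/\beta(p)$.

```python
import math
from functools import reduce

# Toy multiplicative data (assumption for illustration): beta(p) > 1 arbitrary,
# beta*(p) = beta(p) - 1, and L additive with L(p) = beta*(p) log p / beta(p).
beta_p = {2: 3.0, 3: 4.5, 5: 2.2, 7: 6.0}

def divisors(l):
    """All divisors of a squarefree integer, represented as tuples of primes."""
    divs = [()]
    for p in l:
        divs += [d + (p,) for d in divs]
    return divs

def beta(d):
    return reduce(lambda a, p: a * beta_p[p], d, 1.0)

def betastar(d):
    return reduce(lambda a, p: a * (beta_p[p] - 1.0), d, 1.0)

def L(d):
    return sum((beta_p[p] - 1.0) * math.log(p) / beta_p[p] for p in d)

def mu(d):
    return (-1) ** len(d)

# v(l) = beta(l) * sum_{uv=l} mu(u) L(v) / beta(u), as in the definition of v
l = (2, 3, 5, 7)
v = beta(l) * sum(mu(u) * L(tuple(p for p in l if p not in u)) / beta(u)
                  for u in divisors(l))
assert abs(v - betastar(l) * math.log(2 * 3 * 5 * 7)) < 1e-9
```

Exactly as in the display above, the sum telescopes to $\beta^*(l)\log l$ for any choice of the values $\beta(p)$.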
Hence
\begin{align*}
S_{22}(X)&=\sum_{p<z}\beta^*(p)\log p\sum_{pl\mid P(z)}\beta^*(l)y_{pl}^2\\
&=\frac{16(1+o(1))}{\mathcal{G}(1)^2(\log D)^4}\sum_{l\mid P(z)}\frac{1}{\beta^*(l)}\sum_{\substack{p<z\\p\nmid l}}\frac{\log p}{\beta^*(p)}m_2\Big(\frac{\log(\sqrt{D}/(pl))}{\log z}\Big)^2\end{align*}
by \eqref{eq:yl}.
From partial summation, it follows that
\begin{align*}
S_{22}(X)&=\frac{32\tau c_{21}''(\tau)\log z}{\mathcal{G}(1)^2W(z)(\log D)^4}\cdot(1+o(1)).\end{align*}
Combining all the above evaluations, we find
\begin{align*}S(X,z;h,\boldsymbol\varrho)
&=\mathcal{G}(1)X\{S_1(X)\cdot (\log X+c)-2S_{21}(X)+S_{22}(X)\}+O(X\mathcal{L}^{-2})\\
&=(1+o(1))\frac{16\tau X\log z}{\mathcal{G}(1)W(z)(\log D)^4}\Big\{c_1(\tau)\frac{\log X}{\log z}+4c_{21}'(\tau)-2c_{21}''(\tau)\Big\}.\end{align*}
Hence Theorem \ref{thm:asymptoticSelberg} follows by observing that $c_2(\tau)=2c_{21}'(\tau)-c_{21}''(\tau)$
and
\begin{align*}\mathcal{G}(1)W(z)&=\prod_{p<z}\Big(1-\frac{1}{p}\Big)^2\cdot \prod_{p\geqslant z}\Big(1+\frac{h(p)}{p}\Big)\Big(1-\frac{1}{p}\Big)^2
=(1+o(1))\frac{\mathrm{e}^{-2\gamma}}{(\log z)^2}\end{align*}
by Mertens' formula.
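The last display rests on Mertens' third theorem, $\prod_{p<z}(1-1/p)=(1+o(1))\,\mathrm{e}^{-\gamma}/\log z$. A quick numerical illustration of this asymptotic (not part of the argument), using a simple sieve of Eratosthenes:

```python
import math

def primes_below(n):
    """Primes p < n by the sieve of Eratosthenes."""
    is_p = [True] * n
    is_p[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_p[p]:
            is_p[p * p::p] = [False] * len(range(p * p, n, p))
    return [p for p in range(n) if is_p[p]]

gamma = 0.5772156649015329  # Euler--Mascheroni constant

z = 10 ** 6
prod = 1.0
for p in primes_below(z):
    prod *= 1.0 - 1.0 / p

# Mertens: prod_{p<z} (1 - 1/p) ~ e^{-gamma} / log z
assert abs(prod * math.log(z) * math.exp(gamma) - 1.0) < 0.02
```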
\section{Chebyshev approximation}
A lot of statistical analysis of $GL_2$ objects relies heavily on the properties of Chebyshev polynomials $\{U_k(x)\}_{k\geqslant 0}$ with $x\in[-1,1],$ which can be defined recursively by
\[U_0(x)=1,\ \ U_1(x)=2x,\]
\[U_{k+1}(x)=2xU_k(x)-U_{k-1}(x),\ \ k\geqslant1.\]
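A minimal numerical sketch (illustrative only) of this recursion, together with the closed form $U_k(\cos\theta)=\sin((k+1)\theta)/\sin\theta$ and the orthonormality stated next:

```python
import math

def U(k, x):
    """Chebyshev polynomials of the second kind via
    U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}."""
    u_prev, u = 1.0, 2.0 * x
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

# Closed form U_k(cos t) = sin((k+1) t) / sin t
t = 0.7
for k in range(6):
    assert abs(U(k, math.cos(t)) - math.sin((k + 1) * t) / math.sin(t)) < 1e-9

# Orthonormality w.r.t. (2/pi) sqrt(1-x^2) dx, by midpoint quadrature
def inner(j, k, n=20000):
    s = 0.0
    for i in range(n):
        x = -1.0 + (2.0 * i + 1.0) / n
        s += U(j, x) * U(k, x) * math.sqrt(1.0 - x * x)
    return (2.0 / math.pi) * s * (2.0 / n)

assert abs(inner(2, 2) - 1.0) < 1e-3 and abs(inner(1, 3)) < 1e-3
```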
It is well-known that Chebyshev polynomials form an orthonormal basis of $L^2([-1, 1])$ with respect to the measure $\frac2{\pi}\sqrt{1-x^2}\mathrm{d} x$.
In fact, for any $f\in \mathcal{C}([-1,1])$, the expansion
\begin{align}\label{eq:psi-Chebyshevcoefficient}
f(x)=\sum_{k\geqslant 0}\beta_k(f)U_k(x)
\end{align}
holds with
\[\beta_k(f)=\frac2{\pi} \int_{-1}^1f(t)U_k(t)\sqrt{1-t^2}\mathrm{d} t.\]
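For orientation, the coefficients are easy to compute numerically after the substitution $t=\cos\theta$, under which $\beta_k(f)=\frac{2}{\pi}\int_0^\pi f(\cos\theta)\sin((k+1)\theta)\sin\theta\,\mathrm{d}\theta$. The illustrative sketch below recovers $\beta_0(|x|)=\frac{4}{3\pi}$, the constant that reappears in Lemma \ref{lm:|x|-Chebyshev} below.

```python
import math

def beta_k(f, k, n=100000):
    """beta_k(f) = (2/pi) * int_0^pi f(cos th) sin((k+1) th) sin th dth,
    by composite midpoint quadrature (illustrative accuracy only)."""
    h = math.pi / n
    s = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        s += f(math.cos(th)) * math.sin((k + 1) * th) * math.sin(th)
    return (2.0 / math.pi) * s * h

# Zeroth coefficient of |x|: (2/pi) * 2 * int_0^1 t sqrt(1-t^2) dt = 4/(3 pi)
assert abs(beta_k(abs, 0) - 4.0 / (3.0 * math.pi)) < 1e-6
# Odd coefficients of the even function |x| vanish
assert abs(beta_k(abs, 1)) < 1e-9
```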
In practice, the following truncated approximation is usually more effective and useful, which has its prototype in \cite[Theorem 5.14]{MH03}.
\begin{lemma}\label{lm:Chebyshevapproximation}
Suppose $f:[-1,1]\rightarrow\mathbf{R}$ has $C+1$ continuous derivatives on $[-1,1]$ with $C\geqslant2$. Then for each positive integer $K>C,$ there holds the approximation
\[f(x)=\sum_{0\leqslant k\leqslant K}\beta_k(f)U_k(x)+O\Big(K^{1-C}\|f^{(C+1)}\|_1\Big)\]
uniformly in $x\in[-1,1]$, where the implied constant depends only on $C$.
\end{lemma}
\proof
For each $K>C$, we introduce the operator $\vartheta_{K}$ mapping $f \in \mathcal{C}^{C+1}([-1,1])$ via
\[(\vartheta_{K}f)(x):=\sum_{0\leqslant k\leqslant K}\beta_k(f)U_k(x)-f(x).\]
This gives the remainder of the approximation by Chebyshev polynomials up to degree $K$. Obviously, $(\vartheta_{K}f)(\cdot)\in \mathcal{C}^{C+1}([-1,1])$; moreover, for each fixed $x$, the map $f\mapsto(\vartheta_{K}f)(x)$ is a bounded linear functional on $\mathcal{C}^{C+1}([-1,1])$ which vanishes on polynomials of degree $\leqslant K$.
Using a theorem of Peano (see \cite[Theorem 3.7.1]{Da61}), we find that
\begin{align}\label{eq:theta_K}
(\vartheta_{K}f)(x)=\frac{1}{C!}\int_{-1}^1f^{(C+1)}(t)H_K(x,t)\mathrm{d} t,
\end{align}
where
\begin{align*}H_K(x,t)=-\sum_{k>K}\lambda_k(t)U_k(x)\end{align*}
with
\begin{align*}\lambda_k(t)=\frac{2}{\pi}\int_t^1\sqrt{1-x^2}(x-t)^CU_k(x)\mathrm{d} x.\end{align*}
Put $x=\cos\theta,t=\cos\phi$, so that
\begin{align*}\lambda_k(t) = \lambda_k(\cos \phi)=\frac{2}{\pi}\int_0^\phi(\cos\theta-\cos\phi)^C\sin\theta\sin((k+1)\theta)\mathrm{d}\theta.\end{align*}
We deduce from integration by parts that
\begin{align*}\|\lambda_k\|_\infty\ll\frac{1}{k\dbinom{k-1}{C}},\end{align*}
where the implied constant is absolute.
For any $x,t\in[-1,1]$, Stirling's formula $\log \Gamma(k) = (k-1/2)\log k - k +\log\sqrt{2\pi} + O(1/k)$ gives
\begin{align*}H_K(x,t)
&\ll\sum_{k>K}\frac{1}{\dbinom{k-1}{C}}=C!\sum_{k>K}\frac{\Gamma(k-C)}{\Gamma(k)}\\
& \ll \sum_{k>K} \Big(\frac{\mathrm{e}}{k-C}\Big)^C \Big(1-\frac{C}k\Big)^{k-1/2}\\
& \ll K^{1-C},
\end{align*}
from which and \eqref{eq:theta_K} we conclude that
\begin{align*}
\|\vartheta_{K}f\|_\infty\ll K^{1-C}\|f^{(C+1)}\|_1.
\end{align*}
This completes the proof of the lemma.
\endproof
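As an illustrative numerical companion to Lemma \ref{lm:Chebyshevapproximation} (not part of the proof), one can truncate the expansion of a smooth function and watch the sup-norm error collapse as the degree grows; the quadrature parameters and tolerances below are ad hoc choices.

```python
import math

def U(k, x):
    """Chebyshev polynomials of the second kind, by recursion."""
    u_prev, u = 1.0, 2.0 * x
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def beta_coeff(f, k, n=20000):
    """beta_k(f) via the substitution t = cos(theta) and midpoint quadrature."""
    h = math.pi / n
    s = sum(f(math.cos((i + 0.5) * h)) * math.sin((k + 1) * (i + 0.5) * h)
            * math.sin((i + 0.5) * h) for i in range(n))
    return (2.0 / math.pi) * s * h

def sup_error(f, K, m=201):
    """Sup-norm error of the degree-K truncation, sampled on a grid of m points."""
    coef = [beta_coeff(f, k) for k in range(K + 1)]
    grid = (-1.0 + 2.0 * i / (m - 1) for i in range(m))
    return max(abs(f(x) - sum(c * U(k, x) for k, c in enumerate(coef)))
               for x in grid)

# The error drops sharply with the truncation degree K
assert sup_error(math.exp, 12) < 1e-3 < sup_error(math.exp, 2)
```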
We now turn to derive a truncated approximation for $|x|$ on average.
\begin{lemma}\label{lm:|x|-Chebyshev}
Let $k,J$ be two positive integers and $K>1.$ Suppose $\{x_j\}_{1\leqslant j\leqslant J}\in[-1,1]$ and $\mathbf{y}:=\{y_j\}_{1\leqslant j\leqslant J}\in\mathbf{C}$ are two sequences satisfying
\begin{align}\label{eq:Chebyshev-averageassumption}
\max_{1\leqslant j\leqslant J}|y_j|\leqslant 1,\ \ \ \Bigg|\sum_{1\leqslant j\leqslant J}y_jU_k(x_j)\Bigg|\leqslant k^BU
\end{align}
with some $B\geqslant 1$ and $U>0$. Then we have
\begin{align*}
\sum_{1\leqslant j\leqslant J}y_j|x_j|
=\frac{4}{3\pi}\sum_{1\leqslant j\leqslant J}y_j+O\Big(UK^{B-1}(\log K)^{\delta(B)}+\frac{\|\mathbf{y}\|_1^2}{UK^B}\Big),
\end{align*}
where $\delta(B)$ vanishes unless $B=1$, in which case it is equal to $1$, and the $O$-constant depends only on $B.$
\end{lemma}
\proof
In order to apply Lemma \ref{lm:Chebyshevapproximation}, we would like to introduce a smooth function
$R:[-1,1]\rightarrow[0,1]$ with $R(x)=R(-x)$
such that
\begin{align*}
\begin{cases}
R(x)=0,\ \ &x\in[-\varDelta,\varDelta],\\
R(x)=1,&x\in[-1,-2\varDelta]\cup[2\varDelta,1],\end{cases}
\end{align*}
where $\varDelta\in~]0,1[$ is a number to be fixed later.
We also assume the derivatives satisfy
\begin{align*}
R^{(j)}(x)\ll_j \varDelta^{-j}
\end{align*}
for each $j\geqslant0$
with an implied constant depending only on $j$.
Put $f(x):=R(x)|x|.$ Since $R$ vanishes identically near $x=0$, we may apply Lemma \ref{lm:Chebyshevapproximation} to $f(x)$ with $C=2$,
getting
\begin{align*}
f(x)
&=\sum_{0\leqslant k\leqslant K}\beta_k(f)U_k(x)+O(K^{-1}\|f'''\|_1).
\end{align*}
Note that $f'''(x)$ vanishes unless $x\in[-2\varDelta,-\varDelta]\cup[\varDelta,2\varDelta]$, in which case we have $f'''(x)\ll\varDelta^{-2}.$ It then follows that
\begin{align*}
f(x)
&=\sum_{0\leqslant k\leqslant K}\beta_k(f)U_k(x)+O\Big(\frac{1}{K\varDelta}\Big).
\end{align*}
Moreover, $f(x)-|x|$ vanishes unless $x\in[-2\varDelta,2\varDelta]$. This implies that $f(x)=|x|+O(\varDelta)$. In addition, $\beta_0(f)=\frac{4}{3\pi}+O(\varDelta)$. Therefore,
\begin{align*}
|x|&=\frac{4}{3\pi}+\sum_{1\leqslant k\leqslant K}\beta_k(f)U_k(x)+O\Big(\varDelta+\frac{1}{K\varDelta}\Big).
\end{align*}
We claim that
\begin{align}\label{eq:beta_k(f)-upperbound}
\beta_k(f)\ll k^{-2}
\end{align}
for all $k\geqslant1$ with an absolute implied constant.
It then follows that
\begin{align*}
\sum_{1\leqslant j\leqslant J}y_j|x_j|-\frac{4}{3\pi}\sum_{1\leqslant j\leqslant J}y_j
&=\sum_{1\leqslant k\leqslant K}\beta_{k}(f)\sum_{1\leqslant j\leqslant J}y_jU_{k}(x_j)+O\Big(\|\mathbf{y}\|_1\varDelta+\frac{\|\mathbf{y}\|_1}{K\varDelta}\Big)\\
&\ll U\sum_{1\leqslant k\leqslant K}k^{B-2}+\|\mathbf{y}\|_1\varDelta+\frac{\|\mathbf{y}\|_1}{K\varDelta}\\
&\ll UK^{B-1}(\log K)^{\delta(B)}+\|\mathbf{y}\|_1\varDelta+\frac{\|\mathbf{y}\|_1}{K\varDelta},
\end{align*}
where the implied constant depends only on $B$.
To balance the first and last terms, we take $\varDelta=\|\mathbf{y}\|_1/(UK^B)$, which yields
\begin{align*}
\sum_{1\leqslant j\leqslant J}y_j|x_j|-\frac{4}{3\pi}\sum_{1\leqslant j\leqslant J}y_j
&\ll UK^{B-1}(\log K)^{\delta(B)}+\frac{\|\mathbf{y}\|_1^2}{UK^B}
\end{align*}
as expected.
It remains to prove the upper bound \eqref{eq:beta_k(f)-upperbound}. Since $U_k(\cos\theta)=\sin((k+1)\theta)/\sin\theta$, it suffices to show that
\begin{align}\label{eq:beta_k-upperbound}
\beta_k:=\int_0^{\frac{\pi}{2}}R(\cos\theta)(\sin2\theta)\sin((k+1)\theta)\mathrm{d}\theta\ll k^{-2}
\end{align}
for all $k\geqslant3$ with an absolute implied constant.
From the elementary identity $2\sin\alpha\sin\beta=\cos(\alpha-\beta)-\cos(\alpha+\beta)$, it follows that
\begin{align*}
\beta_k=\int_0^{\arccos\varDelta}R(\cos\theta)(\sin2\theta)\sin((k+1)\theta)\mathrm{d}\theta=\frac{\alpha(k-1,R)-\alpha(k+3,R)}{2},
\end{align*}
where, for $\ell\geqslant2$ and a function $g\in\mathcal{C}^2([-1,1])$,
\begin{align*}
\alpha(\ell,g):=\int_0^{\arccos\varDelta}g(\cos\theta)\cos(\ell\theta)\mathrm{d}\theta.
\end{align*}
From integration by parts, we derive that
\begin{align*}
\alpha(\ell,g)&=\frac{1}{\ell}\int_0^{\arccos\varDelta}g'(\cos\theta)(\sin\theta)\sin(\ell\theta)\mathrm{d}\theta=\frac{\alpha(\ell-1,g')-\alpha(\ell+1,g')}{2\ell},
\end{align*}
and also
\begin{align*}
\alpha(\ell,g')&=\frac{\alpha(\ell-1,g'')-\alpha(\ell+1,g'')}{2\ell}.
\end{align*}
It then follows that
\begin{align*}
\alpha(\ell,g)&=\frac{\alpha(\ell-2,g'')-\alpha(\ell,g'')}{4\ell(\ell-1)}-\frac{\alpha(\ell,g'')-\alpha(\ell+2,g'')}{4\ell(\ell+1)}.
\end{align*}
We then further have
\begin{align*}
\beta_k&=\frac{1}{8}(\beta_{k,1}-\beta_{k,2})
\end{align*}
with
\begin{align*}
\beta_{k,1}&=\frac{\alpha(k-3,R'')-\alpha(k-1,R'')}{(k-1)(k-2)}-\frac{\alpha(k-1,R'')-\alpha(k+1,R'')}{k(k-1)},\\
\beta_{k,2}&=\frac{\alpha(k-1,R'')-\alpha(k+1,R'')}{k(k+1)}-\frac{\alpha(k+1,R'')-\alpha(k+3,R'')}{(k+1)(k+2)}.
\end{align*}
Note that
\begin{align*}
\alpha(k-3,R'')-\alpha(k-1,R'')&=\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)\{\cos((k-3)\theta)-\cos((k-1)\theta)\}\mathrm{d}\theta\\
&=2\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)(\sin(k-2)\theta)(\sin\theta)\mathrm{d}\theta
\end{align*}
and
\begin{align*}
\alpha(k-1,R'')-\alpha(k+1,R'')
&=2\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)(\sin k\theta)(\sin\theta)\mathrm{d}\theta.
\end{align*}
Hence
\begin{align*}
\beta_{k,1}
&=\frac{2}{(k-1)(k-2)}\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)(\sin(k-2)\theta)(\sin\theta)\mathrm{d}\theta\\
&\ \ \ \ \ -\frac{2}{k(k-1)}\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)(\sin k\theta)(\sin\theta)\mathrm{d}\theta\\
&=\frac{2}{(k-1)(k-2)}\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)\{\sin(k-2)\theta-\sin k\theta\}(\sin\theta)\mathrm{d}\theta\\
&\ \ \ \ \ +\frac{4}{k(k-1)(k-2)}\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)(\sin k\theta)(\sin\theta)\mathrm{d}\theta.
\end{align*}
The first term can be evaluated as
\begin{align*}
&=\frac{2}{(k-1)(k-2)}\int_{\arccos2\varDelta}^{\arccos\varDelta}R''(\cos\theta)(\sin(k-1)\theta)(\cos\theta)(\sin\theta)\mathrm{d}\theta\\
&\ll\frac{1}{k^2}\int_{\arccos2\varDelta}^{\arccos\varDelta}\varDelta^{-2}\cos\theta\mathrm{d}\theta\ll\frac{1}{k^2}.
\end{align*}
Again from the integration by parts, the second term is
\begin{align*}
=\frac{4}{(k-1)(k-2)}\int_{\arccos2\varDelta}^{\arccos\varDelta}R'(\cos\theta)(\cos k\theta)\mathrm{d}\theta
\ll\frac{1}{k^2}\int_{\arccos2\varDelta}^{\arccos\varDelta}\varDelta^{-1}\mathrm{d}\theta\ll\frac{1}{k^2}.
\end{align*}
Hence $\beta_{k,1}\ll k^{-2}$, and similarly $\beta_{k,2}\ll k^{-2}.$
These yield \eqref{eq:beta_k-upperbound}, and thus \eqref{eq:beta_k(f)-upperbound}, which completes the proof of the lemma.
\endproof
Note that $U_k(\cos\theta)=\mathrm{sym}_k(\theta).$ Taking $x_j=\cos\theta_j$ in Lemma \ref{lm:|x|-Chebyshev}, we obtain the following truncated approximation for $|\cos|$.
\begin{lemma}\label{lm:cos-Chebyshev}
Let $k,J$ be two positive integers and $K>1.$ Suppose $\{\theta_j\}_{1\leqslant j\leqslant J}\in[0,\pi]$ and $\mathbf{y}:=\{y_j\}_{1\leqslant j\leqslant J}\in\mathbf{C}$ are two sequences satisfying
\begin{align*}
\max_{1\leqslant j\leqslant J}|y_j|\leqslant 1,\ \ \ \Bigg|\sum_{1\leqslant j\leqslant J}y_j\mathrm{sym}_k(\theta_j)\Bigg|\leqslant k^BU
\end{align*}
with some $B\geqslant 1$ and $U>0.$ Then we have
\begin{align*}
\sum_{1\leqslant j\leqslant J}y_j|\cos\theta_j|
=\frac{4}{3\pi}\sum_{1\leqslant j\leqslant J}y_j+O\Big(UK^{B-1}(\log K)^{\delta(B)}+\frac{\|\mathbf{y}\|_1^2}{UK^B}\Big),
\end{align*}
where $\delta(B)$ is defined as in Lemma {\rm \ref{lm:|x|-Chebyshev}}.
\end{lemma}
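The main-term constant has a tidy interpretation: $\frac{4}{3\pi}$ is the mean of $|\cos\theta|$ against the measure $\frac{2}{\pi}\sin^2\theta\,\mathrm{d}\theta$ on $[0,\pi]$, with respect to which the functions $\mathrm{sym}_k$ are orthonormal. A quick midpoint-quadrature check (illustrative only):

```python
import math

# Mean of |cos| against the probability measure (2/pi) sin^2(theta) d(theta)
n = 200000
h = math.pi / n
s = 0.0
w = 0.0
for i in range(n):
    th = (i + 0.5) * h
    weight = (2.0 / math.pi) * math.sin(th) ** 2
    s += abs(math.cos(th)) * weight
    w += weight

assert abs(w * h - 1.0) < 1e-6                      # total mass is 1
assert abs(s * h - 4.0 / (3.0 * math.pi)) < 1e-6    # mean equals 4/(3 pi)
```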
\end{document}
\begin{document}
\begin{abstract} We provide a detailed proof of the fact that any domain which is sufficiently flat in the sense of Reifenberg is also Jones-flat, and hence it is an extension domain. We discuss various applications of this property, in particular we obtain $L^\infty$ estimates for the eigenfunctions of the Laplace operator with Neumann boundary conditions. We also compare different ways of measuring the ``distance'' between two sufficiently close Reifenberg-flat domains. These results are pivotal to the quantitative stability analysis of the spectrum of the Neumann Laplacian performed in~\cite{lms2}.
\end{abstract}
\maketitle
{\bf AMS classification.} 49Q20, 49Q05, 46E35
{\bf Key words.} Reifenberg-flat domains, Extension domains
\section{Introduction}
The main goal of the present paper is establishing extension and geometric properties for a class of domains whose boundaries satisfy a fairly weak regularity requirement introduced by Reifenberg~\cite{r}.
In particular, we show that any domain that is sufficiently flat in the sense of Reifenberg enjoys the so-called extension property and we discuss applications that are relevant for the analysis of PDEs defined in these domains. We also compare different ways of measuring the ``distance'' between two sufficiently close Reifenberg-flat domains $X$ and $Y$, in particular we discuss the relations between the Hausdorff distances $d_H(X, Y)$, $d_H(\mathbb R^N \setminus X, \mathbb R^N \setminus Y)$ and $d_H(\partial X, \partial Y)$ and the measure of the symmetric difference $|X \triangle Y|$.
Although we are confident our results can find different applications, our original motivation was the quantitative stability analysis of the spectrum of the Laplace operator with Neumann boundary conditions defined in
Reifenberg-flat domains, see~\cite{lms2}.
The notion of Reifenberg-flat sets was first introduced in 1960 by Reifenberg~\cite{r} when he was working on the Plateau problem, and has since then played an important role in the study of minimal surfaces. More recently, the works by David \cite{d1,d2} about the regularity for 2-dimensional minimal sets in $\mathbb R^N$ rely on the Reifenberg parametrization and the specific $3$-dimensional results by David, De Pauw and Toro \cite{ddpt}. Also, Reifenberg-flat sets are relevant in the study of the harmonic measure (see Kenig and Toro~\cite{hm3,hm2,hm1} and Toro~\cite{t,hm4}) and of the regularity for free boundary problems, like the minimization of the Mumford-Shah functional (see \cite{l1,l2}). Elliptic and parabolic equations defined in Reifenberg-flat domains have been recently investigated by Byun, Wang and Zhou~\cite{w1,w2,w3}, by Lemenant, Milakis and Spinolo and by Milakis and Toro~\cite{lm2,lm,lms2, mt}. Finally, we mention that Reifenberg-flat domains are in particular NTA domains in the sense of Jerison and Kenig
\cite{jk}.
We now provide the precise definition. We denote by $d_H$ the classical Hausdorff distance between two sets $X$ and $Y$,
\begin{equation}
d_H( X , Y ) : = \max \big\{ \sup_{x \in X } d(x, Y),
\sup_{y \in Y} d(y, X) \big\}.
\end{equation}
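For concreteness, $d_H$ can be computed directly for finite point sets (e.g. sampled boundaries); the following minimal sketch in the plane is illustrative and is not used anywhere in the paper.

```python
def hausdorff(X, Y):
    """d_H(X, Y) = max( sup_{x in X} d(x, Y), sup_{y in Y} d(y, X) )
    for finite point sets in the plane."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def sup_inf(A, B):
        return max(min(d(a, b) for b in B) for a in A)
    return max(sup_inf(X, Y), sup_inf(Y, X))

# Symmetric, and sensitive to one-sided discrepancies:
X = [(0.0, 0.0), (1.0, 0.0)]
Y = [(0.0, 0.0)]
assert hausdorff(X, Y) == 1.0
assert hausdorff(X, Y) == hausdorff(Y, X)
```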
\begin{defin}\label{defreif} Let $\varepsilon, r_0$ be two real numbers satisfying $0 < \varepsilon<1/2$ and $r_0 >0$. An $(\varepsilon,r_0)$-Reifenberg-flat domain $\Omega \subseteq \mathbb R^N$ is a nonempty open set satisfying the following two conditions:
\begin{itemize}
\item[$i)$] for every $x \in \partial \Omega$ and for every $r\leq r_0$, there is a hyperplane $P(x,r)$ containing $x$ which satisfies
\begin{eqnarray}
\frac{1}{r}d_H\big( \partial \Omega \cap B(x,r), P(x,r)\cap B(x,r) \big) \leq \varepsilon. \label{reif}
\end{eqnarray}
\item[$ii)$] for every $x \in \partial \Omega$, one of the connected components of
$$B(x,r_0)\cap \big\{y : \; \mathrm{dist}(y,P(x,r_0))\geq 2\varepsilon r_0\big\}$$
is contained in $\Omega$ and the other one is contained in $\mathbb R^N \setminus \Omega$.
\end{itemize}
\end{defin}
Condition $i)$ states that the boundary of $\Omega$ is an $(\varepsilon,r_0)$-Reifenberg-flat set. A Reifenberg-flat set enjoys local separability properties (see e.g. Theorem 4.1 in \cite{lihewang}), however we observe that condition $ii)$ in the definition is not in general implied by condition $i)$, as the example of $\Omega = \mathbb R^N \setminus \partial B(0,1)$ shows (here $\partial B(0, 1)$ denotes the boundary of the unit ball). However, a consequence of the analysis in David~\cite{d0} is that $i)$ implies $ii)$ under some further topological assumption; for instance, the implication holds if $\Omega$ and $\partial \Omega$ are both connected. Note furthermore that a straightforward consequence of the definition is that, if $\varepsilon_1 < \varepsilon_2$, then any $(\varepsilon_1,r_0)$-Reifenberg-flat domain is also an $(\varepsilon_2,r_0)$-Reifenberg-flat domain. Finally, note that we only impose the separation requirement $ii)$ at scale $r_0$, but it follows from the definition that it also holds at any scale $r\leq r_0$ (see \cite[Proposition 2.2]{hm3} or Lemma~\ref{l:ii} below).
In \cite{r} Reifenberg proved the so-called topological disk theorem which states that, provided $\varepsilon$ is small enough, any $(\varepsilon, r_0)$-Reifenberg-flat set in the unit $N$-ball is the bi-H\"olderian image of an $(N-1)$-dimensional disk. Also, any Lipschitz domain with sufficiently small Lipschitz constant is Reifenberg-flat for a suitable choice of the regularity parameter $\varepsilon$ (the choice depends on the Lipschitz constant). On the other hand, the ``flat'' Koch snowflake with sufficiently small angle is Reifenberg-flat (see Toro~\cite{t}) and hence it is an example of a Reifenberg-flat set which is \emph{not} Lipschitz, and with Hausdorff dimension greater than $N-1$.
The main goal of this paper is providing a complete and detailed proof of the fact that Reifenberg-flat domains are extension domains. This fact is relevant for the study of elliptic problems and was already known and used in the literature (see e.g. the introduction of \cite{w1}). However, to the best of our knowledge, an explicit proof was so far missing. We recall that the so called \textit{extension problem} can be formulated as follows: given an open set $\Omega$, we denote by $W^{1,p}$ the classical Sobolev space and we wonder whether or not one can define a bounded linear operator (the so-called extension operator)
$$
E:W^{1,p}(\Omega)\rightarrow W^{1,p}(\mathbb{R}^N)
$$
such that $E(u) \equiv u$ on $\Omega$. If $\partial\Omega$ is Lipschitz, Calderon \cite{Calderon} established the existence of an extension operator in the case when $1<p<\infty$, while Stein \cite{Stein} considered the cases $p=1,\infty$. Jones \cite{J} proved the existence of extension operators for a new class of domains, the so-called $(\varepsilon,\delta)$-Jones flat domains (the precise definition is recalled in Section~\ref{topos}). In the present work we prove that sufficiently flat Reifenberg domains are indeed Jones flat domains. Our main result concerning the extension problem is as follows.
\begin{theo}\label{jonesrei}
Any $(1/600 , r_0)$-Reifenberg flat domain is a $(1/450 , r_0/7)$-Jones flat domain.
\end{theo}
As a direct consequence of Theorem~\ref{jonesrei} we get that one can define extension operators for $(1/600 , r_0)$-Reifenberg flat domains (see Corollary \ref{ext} for a precise statement). Some relevant features of this result are the following: first, we provide an explicit and universal threshold
on the coefficient $\varepsilon$ for the extension property to hold (namely, $\varepsilon \leq 1/600$). Second, $1/600$ is fairly big compared to the usual threshold needed to apply Reifenberg's topological disk theorem (e.g. the threshold is $10^{-15}$ in \cite{ddpt}; see also \cite{lihewang2} for an interesting alternative proof).
As a consequence of the extension property, we obtain that the classical Rellich-Kondrachov Theorem applies to Reifenberg-flat domains (see Proposition~\ref{embedding}), that the Neumann Laplacian has a discrete spectrum and that the eigenfunctions are bounded (see Proposition~\ref{uestim}). Also, by combining Theorem~\ref{jonesrei} with the works by Chua \cite{Chua-indiana,Chua-illinois,Chua-Canad} and Christ \cite{Christ} we get that one can define extension operators for weighted Sobolev spaces and Sobolev spaces of fractional order (see Remark~\ref{chuachrist} in the present paper).
We conclude the paper by establishing results unrelated to the extension problem, namely we study the relation between different ways of measuring the ``distance'' between sets of $\mathbb R^N$. In particular, for two general open sets $X$ and $Y$, neither the Hausdorff distance $d_{H}(X,Y)$ nor the Hausdorff distance between the complements $d_H(\mathbb R^N \setminus X, \mathbb R^N \setminus Y)$ is, in general, controlled by the Lebesgue measure of the symmetric difference $|X \triangle Y|$. However, we prove that they are indeed controlled provided that $X$, $Y$ are Reifenberg flat and close enough, in a suitable sense. This result will be as well applied in~\cite{lms2} to the stability analysis of the spectrum of the Laplace operator with Neumann boundary conditions.
The paper is organized as follows: in Section~\ref{topos} we prove that sufficiently flat Reifenberg domains are Jones-flat, in Section~\ref{appl} we show that if these domains are also connected, then they enjoy the extension property. In Section~\ref{appl} we also discuss some applications of the extension property. In Section~\ref{connected} we investigate how to handle domains that are not connected and finally in Section~\ref{s:hausdorff} we investigate the relation between different ways of measuring the ``distance'' between Reifenberg-flat domains.
\subsection{Notations}
We denote by $C(a_1, \dots, a_h)$ a constant only depending on the variables $a_1, \dots, a_h$. Its precise value can vary from line to line.
Also, we use the following notations: \\
$\mathcal H^N$ : the $N$-dimensional Hausdorff measure.\\
$\omega_N$ : the Lebesgue measure of the unit ball in $\mathbb R^N$.\\
$|A|$ : the Lebesgue measure of the Borel set $A \subseteq \mathbb R^N$. \\
$A^c$: the complement of the set $A$, $A^c : = \mathbb R^N \setminus A.$ \\
$\bar A$: the closure of the set $A$. \\
$W^{1,p}(\Omega)$ : the Sobolev space of $L^p$ functions whose derivatives are in $L^p$.\\
$\langle x, y \rangle$: the standard scalar product between the vectors $x, y \in \mathbb R^N$. \\
$|x|$: the norm of the vector $x \in \mathbb R^N$. \\
$d(x, y)$: the distance from the point $x$ to the point $y$, $d(x, y) = |x -y|$. \\
$d(x, A)$: the distance from the point $x$ to the set $A$. \\
$d_H(A,B)$ : the Hausdorff distance from the set $A$ to the set $B$.\\
$[x, y]$: the segment joining the points $x, y \in \mathbb R^N$. \\
$B(x, r)$: the open ball of radius $r$ centered at $x$. \\
$\overline{B}(x, r)$: the closed ball of radius $r$ centered at $x$.
\section{Reifenberg-flat and Jones domains}
\label{topos}
In this section we show that any sufficiently flat Reifenberg domain is Jones-flat, in the sense of~\cite{J}. The extension property follows then as a corollary of the analysis in~\cite{J}.
First, we provide the precise definition of Jones-flatness.
\begin{defin} An open and bounded set $\Omega$ is a $(\delta, R_0)$-Jones-flat domain if for any $x,y \in \Omega$ such that $d(x,y)\leq R_0$ there is a rectifiable curve $\gamma$ which connects $x$ and $y$ and satisfies
\begin{equation}
\label{e:lenght}
{\mathcal H^1(\gamma)\leq \delta^{-1}d(x,y)}
\end{equation}
and
\begin{eqnarray}
d(z,\Omega^c)\geq \delta \, \frac{d(z,x)d(z,y)}{d(x,y)},\quad \text{ for all } z\in \gamma. \label{jonesprop3}
\end{eqnarray}
\end{defin}
To investigate the relation between Jones flatness and Reifenberg flatness we need two preliminary lemmas.
\begin{lem} \label{angleLemma}Let $\Omega \subseteq \mathbb R^N$ be an $(\varepsilon, r_0)$-Reifenberg flat domain. Given $x \in \partial \Omega$ and $r \leq r_0$, we term $\nu_r$ the unit normal vector to the hyperplane $P(x,r)$ provided by the definition of Reifenberg-flatness. Given $M \ge 1$, for every $r\leq r_0/M$ we have
\begin{equation}
\label{e:normal}
|\langle \nu_r, \nu_{M r}\rangle| \geq 1 - (M+1)\varepsilon.
\end{equation}
\end{lem}
\begin{proof} We assume with no loss of generality that $x$ is the origin. For simplicity, in the proof we denote by $B_r$ the ball $B(0,r)$ and by $P_r$ the hyperplane $P(0,r)$. From the definition of Reifenberg flatness we infer that
\begin{eqnarray}
d_H(P_{M r}\cap B_r , P_r \cap B_r ) & \leq & d_H(P_{M r}\cap B_r , \partial \Omega \cap B_r ) + d_H(\partial \Omega \cap B_r , P_r \cap B_r ) \notag \\
&\leq & M r \varepsilon + r\varepsilon \leq (M+1)r \varepsilon. \notag
\end{eqnarray}
Since $P_{Mr}$ and $P_r$ are linear spaces we deduce that
\begin{eqnarray}
d_H(P_{M r}\cap B_1 , P_r \cap B_1 )\leq (M+1)\varepsilon. \label{distP}
\end{eqnarray}
We term $\pi_r$ and $\pi_{Mr}$ the orthogonal projections onto $P_r$ and $P_{Mr}$, respectively, and we fix an arbitrary point $y \in P_r\cap B_1$. Inequality~\eqref{distP} states that there is $z\in \bar P_{Mr}\cap \bar B_1$ satisfying
$$ d(z, y)\leq (M+1)\varepsilon.$$
In particular, since $1= |\nu_{Mr} |=\inf_{z \in P_{Mr}} d( \nu_{Mr}, z)$, we get
$$d(\nu_{Mr}, y ) \geq d(\nu_{Mr}, z )- d(z, y ) \geq 1-(M+1)\varepsilon.$$
By taking the infimum for $y \in P_r\cap B_1$ we obtain
$$ |\nu_{Mr}- \pi_{r}(\nu_{Mr}) |\geq 1-(M+1)\varepsilon,$$
and the proof is concluded by recalling that $ |\langle \nu_{M r}, \nu_r \rangle| = d(\nu_{Mr}, \pi_{r}(\nu_{Mr}) )$.
\end{proof}
The following lemma discusses an observation due to Kenig and Toro~\cite[Proposition 2.2]{hm3}. Note that the difference between Lemma~\ref{l:ii} and part $ii)$ in the definition of Reifenberg flatness is that in $ii)$ we only require the separation property at scale $r_0$.
\begin{lem}
\label{l:ii} Let $\Omega \subseteq \mathbb R^N$ be an $(\varepsilon, r_0)$-Reifenberg flat domain. For every $x \in \partial \Omega$ and $r \in ]0, r_0]$, one of the connected components of
$$B(x,r)\cap \big\{y : \; \mathrm{dist}(y,P(x,r))\geq 2\varepsilon r \big\}$$
is contained in $\Omega$ and the other one is contained in $\mathbb R^N \setminus \Omega$. Here $P(x, r)$ is the same hyperplane as in part $i)$ of the definition of Reifenberg-flatness.
\end{lem}
\begin{proof}
We fix $\rho \in ]0, r_0]$ and we assume that the separation property holds at scale $\rho$, namely that one of the connected components of
$$B(x, \rho)\cap \big\{y : \; \mathrm{dist}(y,P(x,\rho))\geq 2\varepsilon \rho \big\}$$
is contained in $\Omega$ and the other one is contained in $\mathbb R^N \setminus \Omega$. We now show that the same separation property holds at scale $r$ for every $r \in ]\rho/M, \rho]$ provided that $M \leq (1- \varepsilon) / 3 \varepsilon.$ By iteration this implies that the separation property holds at any scale $r \in ]0, r_0].$
Let us fix $r \in ]\rho/M, \rho]$ and denote by $B^+ (x, r) $ one of the connected components of
$$
B(x, r)\cap \big\{y : \; \mathrm{dist}(y,P(x,r)) \geq 2\varepsilon r \big\}$$
and by $B^-(x, r)$ the other one. Also, we term $Y^+$ and $Y^-$ the points of intersection of the line passing through $x$ and perpendicular to
$P(x, r)$ with the boundary of the ball $B(x, r)$.
By recalling~\eqref{e:normal} and the inequality $r \ge \rho / M$, we get that the distance of $Y^{\pm}$ from the hyperplane $P(x, \rho)$ satisfies the following inequality:
$$
d( Y^{\pm}, P(x, \rho)) \ge r
| \langle \nu_r, \nu_{\rho} \rangle| \geq r \big[ 1 -
(M+1)\varepsilon \big] \ge \rho \frac{ 1 -
(M+1)\varepsilon }{M}.
$$
Since by assumption $M \leq (1- \varepsilon) / 3 \varepsilon$, this implies that
$d( Y^{\pm}, P(x, \rho)) \ge 2 \varepsilon \rho$ and hence that one among $Y^+$ and $Y^-$ belongs to $B^+(x, \rho)$ and the other one to $B^-(x, \rho)$. Since by assumption the separation property holds at scale $\rho$, this implies that one of them belongs to $\Omega$ and the other one to $\Omega^c$.
\begin{figure}
\caption{Notations for the proof of Theorem \ref{jonesrei}}
\label{fig1}
\end{figure}
To conclude, note that part $i)$ in the definition of Reifenberg flatness implies that
$$
B^{\pm} (x, r) \cap \partial \Omega = \emptyset
$$
and hence both $B^+(x, r)$ and $B^-(x, r)$ are entirely contained in either $\Omega$ or $\Omega^c$. By recalling that one among $Y^+$ and $Y^-$ belongs to $\Omega$ and the other one to $\Omega^c$, we conclude
the proof of the lemma.
\end{proof}
We are now ready to establish the main result of this section, namely Theorem \ref{jonesrei}.
\begin{proof}[Proof of Theorem \ref{jonesrei}]
We assume $\varepsilon \leq 1/600$, we fix an $(\varepsilon, r_0)$-Reifenberg flat domain $\Omega \subseteq \mathbb R^N$ and we proceed according to the following steps. \\
{\sc $\Diamond$ Step 1.} We first introduce some notation (see Figure~\ref{fig1} for a representation).
\begin{center}
\scalebox{0.7}
{
\begin{pspicture}(0,-6.4)(19.47,6.41)
\definecolor{color613b}{rgb}{0.8,0.8,1.0}
\definecolor{color554}{rgb}{0.4,0.4,0.0}
\pscircle[linewidth=0.04,dimen=outer](9.02,0.0){6.38}
\psline[linewidth=0.04cm](0.0,4.78)(19.38,-5.54)
\psdots[dotsize=0.2](8.9,0.04)
\usefont{T1}{ptm}{m}{n}
\rput(9.37,0.105){$x_0$}
\usefont{T1}{ptm}{m}{n}
\rput(17.83,-3.975){$P(x_0,\rho)$}
\psarc[linewidth=0.04,fillstyle=solid,fillcolor=color613b](9.02,-0.02){6.36}{342.47443}{140.24182}
\psline[linewidth=0.04](15.084785,-1.9351954)(4.130747,4.04753)
\psarc[linewidth=0.04,fillstyle=solid,fillcolor=color613b](9.02,-0.02){6.36}{163.05739}{320.33215}
\psline[linewidth=0.04](2.9360423,1.8333911)(13.915661,-4.0798163)
\pscustom[linewidth=0.06,linecolor=color554]
{
\newpath
\moveto(0.66,6.28)
\lineto(0.75,6.08)
\curveto(0.795,5.98)(0.89,5.805)(0.94,5.73)
\curveto(0.99,5.655)(1.085,5.535)(1.13,5.49)
\curveto(1.175,5.445)(1.44,5.195)(1.66,4.99)
\curveto(1.88,4.785)(2.335,4.345)(2.57,4.11)
\curveto(2.805,3.875)(3.08,3.605)(3.12,3.57)
\curveto(3.16,3.535)(3.28,3.43)(3.36,3.36)
\curveto(3.44,3.29)(3.605,3.145)(3.69,3.07)
\curveto(3.775,2.995)(3.915,2.87)(3.97,2.82)
\curveto(4.025,2.77)(4.145,2.66)(4.21,2.6)
\curveto(4.275,2.54)(4.38,2.45)(4.42,2.42)
\curveto(4.46,2.39)(4.525,2.33)(4.55,2.3)
\curveto(4.575,2.27)(4.64,2.15)(4.68,2.06)
\curveto(4.72,1.97)(4.795,1.84)(4.83,1.8)
\curveto(4.865,1.76)(4.95,1.675)(5.0,1.63)
\curveto(5.05,1.585)(5.165,1.515)(5.23,1.49)
\curveto(5.295,1.465)(5.415,1.42)(5.47,1.4)
\curveto(5.525,1.38)(5.605,1.36)(5.63,1.36)
\curveto(5.655,1.36)(5.73,1.36)(5.78,1.36)
\curveto(5.83,1.36)(5.94,1.36)(6.0,1.36)
\curveto(6.06,1.36)(6.205,1.365)(6.29,1.37)
\curveto(6.375,1.375)(6.495,1.4)(6.53,1.42)
\curveto(6.565,1.44)(6.635,1.475)(6.67,1.49)
\curveto(6.705,1.505)(6.775,1.53)(6.81,1.54)
\curveto(6.845,1.55)(6.905,1.565)(6.93,1.57)
\curveto(6.955,1.575)(7.005,1.585)(7.03,1.59)
\curveto(7.055,1.595)(7.115,1.6)(7.15,1.6)
\curveto(7.185,1.6)(7.25,1.6)(7.28,1.6)
\curveto(7.31,1.6)(7.37,1.6)(7.4,1.6)
\curveto(7.43,1.6)(7.495,1.6)(7.53,1.6)
\curveto(7.565,1.6)(7.64,1.6)(7.68,1.6)
\curveto(7.72,1.6)(7.785,1.6)(7.81,1.6)
\curveto(7.835,1.6)(7.89,1.595)(7.92,1.59)
\curveto(7.95,1.585)(8.025,1.56)(8.07,1.54)
\curveto(8.115,1.52)(8.175,1.47)(8.19,1.44)
\curveto(8.205,1.41)(8.24,1.35)(8.26,1.32)
\curveto(8.28,1.29)(8.3,1.235)(8.3,1.21)
\curveto(8.3,1.185)(8.305,1.135)(8.31,1.11)
\curveto(8.315,1.085)(8.34,1.03)(8.36,1.0)
\curveto(8.38,0.97)(8.405,0.92)(8.41,0.9)
\curveto(8.415,0.88)(8.44,0.84)(8.46,0.82)
\curveto(8.48,0.8)(8.52,0.755)(8.54,0.73)
\curveto(8.56,0.705)(8.605,0.66)(8.63,0.64)
\curveto(8.655,0.62)(8.695,0.57)(8.71,0.54)
\curveto(8.725,0.51)(8.75,0.445)(8.76,0.41)
\curveto(8.77,0.375)(8.79,0.32)(8.8,0.3)
\curveto(8.81,0.28)(8.83,0.235)(8.84,0.21)
\curveto(8.85,0.185)(8.865,0.13)(8.87,0.1)
\curveto(8.875,0.07)(8.89,0.02)(8.9,0.0)
\curveto(8.91,-0.02)(8.93,-0.065)(8.94,-0.09)
\curveto(8.95,-0.115)(8.975,-0.17)(8.99,-0.2)
\curveto(9.005,-0.23)(9.035,-0.295)(9.05,-0.33)
\curveto(9.065,-0.365)(9.105,-0.425)(9.13,-0.45)
\curveto(9.155,-0.475)(9.21,-0.52)(9.24,-0.54)
\curveto(9.27,-0.56)(9.35,-0.59)(9.4,-0.6)
\curveto(9.45,-0.61)(9.54,-0.625)(9.58,-0.63)
\curveto(9.62,-0.635)(9.705,-0.65)(9.75,-0.66)
\curveto(9.795,-0.67)(9.875,-0.68)(9.91,-0.68)
\curveto(9.945,-0.68)(10.015,-0.68)(10.05,-0.68)
\curveto(10.085,-0.68)(10.15,-0.68)(10.18,-0.68)
\curveto(10.21,-0.68)(10.28,-0.675)(10.32,-0.67)
\curveto(10.36,-0.665)(10.425,-0.655)(10.45,-0.65)
\curveto(10.475,-0.645)(10.565,-0.64)(10.63,-0.64)
\curveto(10.695,-0.64)(10.825,-0.64)(10.89,-0.64)
\curveto(10.955,-0.64)(11.055,-0.64)(11.09,-0.64)
\curveto(11.125,-0.64)(11.205,-0.655)(11.25,-0.67)
\curveto(11.295,-0.685)(11.5,-0.77)(11.66,-0.84)
\curveto(11.82,-0.91)(12.145,-1.015)(12.31,-1.05)
\curveto(12.475,-1.085)(12.69,-1.18)(12.74,-1.24)
\curveto(12.79,-1.3)(13.01,-1.475)(13.18,-1.59)
\curveto(13.35,-1.705)(13.635,-1.86)(13.75,-1.9)
\curveto(13.865,-1.94)(14.015,-2.01)(14.05,-2.04)
\curveto(14.085,-2.07)(14.15,-2.155)(14.18,-2.21)
\curveto(14.21,-2.265)(14.26,-2.35)(14.28,-2.38)
\curveto(14.3,-2.41)(14.34,-2.45)(14.36,-2.46)
\curveto(14.38,-2.47)(14.475,-2.515)(14.55,-2.55)
\curveto(14.625,-2.585)(14.825,-2.665)(14.95,-2.71)
\curveto(15.075,-2.755)(15.28,-2.785)(15.36,-2.77)
\curveto(15.44,-2.755)(15.59,-2.67)(15.66,-2.6)
\curveto(15.73,-2.53)(15.87,-2.45)(15.94,-2.44)
\curveto(16.01,-2.43)(16.175,-2.41)(16.27,-2.4)
\curveto(16.365,-2.39)(16.565,-2.365)(16.67,-2.35)
\curveto(16.775,-2.335)(16.93,-2.32)(16.98,-2.32)
\curveto(17.03,-2.32)(17.125,-2.4)(17.17,-2.48)
\curveto(17.215,-2.56)(17.425,-2.71)(17.59,-2.78)
\curveto(17.755,-2.85)(18.02,-2.95)(18.12,-2.98)
\curveto(18.22,-3.01)(18.375,-3.05)(18.43,-3.06)
\curveto(18.485,-3.07)(18.575,-3.08)(18.61,-3.08)
\curveto(18.645,-3.08)(18.73,-3.08)(18.78,-3.08)
\curveto(18.83,-3.08)(18.915,-3.09)(18.95,-3.1)
\curveto(18.985,-3.11)(19.045,-3.12)(19.07,-3.12)
\curveto(19.095,-3.12)(19.145,-3.12)(19.17,-3.12)
\curveto(19.195,-3.12)(19.245,-3.12)(19.27,-3.12)
\curveto(19.295,-3.12)(19.345,-3.12)(19.37,-3.12)
\curveto(19.395,-3.12)(19.425,-3.12)(19.44,-3.12)
}
\pscustom[linewidth=0.06,linecolor=color554]
{
\newpath
\moveto(0.7,6.24)
\lineto(0.67,6.27)
\curveto(0.655,6.285)(0.625,6.315)(0.61,6.33)
\curveto(0.595,6.345)(0.58,6.365)(0.58,6.38)
}
\usefont{T1}{ptm}{m}{n}
\rput(17.95,-1.875){$\partial \Omega$}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(17.42,-2.6)(17.96,-2.08)
\usefont{T1}{ptm}{m}{n}
\rput(16.37,-1.215){$\Omega$}
\usefont{T1}{ptm}{m}{n}
\rput(15.45,-4.515){$\Omega^c$}
\psline[linewidth=0.04cm,tbarsize=0.07055555cm 5.0]{|*-|}(2.54,2.0)(3.7,4.24)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(11.72,5.66)(8.9,0.08)
\psdots[dotsize=0.2](11.72,5.68)
\usefont{T1}{ptm}{m}{n}
\rput(12.14,6.045){$Y(x_0,\rho)$}
\usefont{T1}{ptm}{m}{n}
\rput(2.09,2.785){$4\varepsilon \rho$}
\usefont{T1}{ptm}{m}{n}
\rput(11.64,4.285){$\rho \vec \nu_\rho$}
\usefont{T1}{ptm}{m}{n}
\rput(8.05,4.465){$B^+(x_0,\rho)$}
\usefont{T1}{ptm}{m}{n}
\rput(8.19,-3.775){$B^-(x_0,\rho)$}
\end{pspicture}
}
\end{center}
For any $x_0 \in \partial \Omega$ and $\rho \leq r_0$, we denote as usual by $P(x_0,\rho)$ the hyperplane provided by the definition of Reifenberg flatness, and by $\vec \nu_\rho$ its normal. By Lemma~\ref{l:ii}, we can choose the orientation of $\vec \nu_\rho$ in such a way that
$$
B^+ (x_0, \rho) : =
\left\{ z +t \vec \nu_\rho: \; z \in P(x_0, \rho), \,
t \ge 2 \varepsilon \rho \right\} \cap B(x_0, \rho ) \subseteq \Omega
$$
and
$$
B^- (x_0, \rho) : =
\left\{ z -t \vec \nu_\rho: \; z \in P(x_0, \rho), \,
t \ge 2 \varepsilon \rho \right\} \cap B(x_0, \rho ) \subseteq \Omega^c.
$$
Also, we define the hyperplanes $ P^+ (x_0, \rho)$ and $ P^- (x_0, \rho)$ by setting
$$
P^+ (x_0, \rho) : =
\left\{ z +2 \varepsilon \rho \vec \nu_\rho: \; z \in P(x_0, \rho)
\right\}
$$
and
$$
P^- (x_0, \rho) : =
\left\{ z -2 \varepsilon \rho \vec \nu_\rho: \; z \in P(x_0, \rho)
\right\}
$$
and we denote by $Y(x_0, \rho)$ the point
$$
Y(x_0, \rho) : = x_0 + \rho \vec \nu_\rho.
$$
Finally, for any $x \in \Omega$, we denote by $x_0 \in \partial \Omega$ the point such that $d(x, \Omega^c)=d(x,x_0)$ (if there is more than one such $x_0$, we arbitrarily fix one).
{\sc $\Diamond$ Step 2.} We provide a preliminary construction: more precisely, given
\begin{itemize}
\item $x \in \Omega$ such that
$d(x, \Omega^c) \leq 2r_0/7$ and
\item $r$ satisfying $d(x, \Omega^c) /2 \leq r \leq r_0/7$,
\end{itemize}
the curve $\gamma_{x,r}$ is defined as follows.
\begin{itemize}
\item[(I)] If $d(x, \Omega^c)/2 \leq r \leq 2 d (x, \Omega^c)$, then $\gamma_{x,r}$ is simply the segment $[x, Y(x_0, r)]$.
\item[(II)] If $2 d(x, \Omega^c) < r \leq r_0/7$, we denote by $k_0 \ge 1$ the largest natural number $k$ satisfying
$ {2^{-k} r \ge d(x, \Omega^c)}$ and we set
$$
\gamma_{x,r} := [x,Y(x_0, 2^{-k_0}r)] \cup
\bigcup_{k =0}^{k_0-1} [Y(x_0,2^{-k}r), Y(x_0,2^{-(k+1)}r)].
$$
\end{itemize}
{\sc $\Diamond$ Step 3.} We prove that in both cases (I) and (II) we have
\begin{equation}
\label{jonesprop2}
\mathcal H^1(\gamma_{x,r}) \leq 4 r.
\end{equation}
To handle case (I) we observe that, by assumption, $d (x, \Omega^c) = d(x, x_0) \leq 2r$; since moreover $d(x_0, Y(x_0, r))=r$, the triangle inequality yields $\mathcal H^1(\gamma_{x,r}) \leq 3r$, so that property~\eqref{jonesprop2} follows.
To handle case (II), we first observe that, since $d(x, x_0) \leq 2^{-k_0}r$, then both $x$ and $Y(x_0, 2^{-k_0}r)$ belong to the closure of $B(x_0, 2^{-k_0}r)$. Also, by construction both $Y(x_0,2^{-k}r)$ and $Y(x_0,2^{-(k+1)}r)$ belong to the closure of
$B(x_0, 2^{-k}r)$ and by combining these observations we conclude that
\begin{eqnarray}
\mathcal H^1(\gamma_{x,r}) &\leq& d(x, Y(x_0, 2^{-k_0}r))+ \sum_{k=0}^{k_0-1} d(Y(x_0,2^{-k}r) ,Y(x_0,2^{-(k+1)}r)) \notag \\
&\leq& 2 \cdot 2^{-k_0} r + \sum_{k =0}^{k_0-1} 2 \cdot 2^{-k} r \leq 2 r \sum_{k \in \mathbb N} 2^{-k} = 4r. \notag
\end{eqnarray}
{\sc $\Diamond$ Step 4.} We prove that for every $z \in \gamma_{x, r}$
\begin{equation}
\label{jonesprop}
d(z,\Omega^c)\geq \frac{29}{240} \, d(z,x).
\end{equation}
We start by handling case (I): we work in the ball $B(x_0, 4r)$ and we recall the definitions of $B^+(x_0, 4r)$ and $B^-(x_0, 4r)$ given at {\sc Step 1}. Since $\varepsilon \leq 1/32$, we have
$$
16 \varepsilon r \leq \frac{r}{2} \leq d (x, \Omega^c) \leq d(x, B^-(x_0, 4r))
$$
and hence $x \in B^+(x_0, 4r)$. Let $\beta$ denote the angle between $\nu_r$ and $\nu_{4r}$. By Lemma \ref{angleLemma} applied with $M=4$, since $\varepsilon \leq 1/9$ we get
$
4 \varepsilon r \leq r \cos \beta,
$
so that $Y(x_0, r) \in B^+(x_0, 4r)$. By recalling that $x \in B^+(x_0, 4r)$, we conclude that $[x, Y(x_0, r) ] \subseteq B^+(x_0, 4r)$.
We are now ready to establish~\eqref{jonesprop}, so we fix $z \in [x, Y(x_0, r)]$.
To provide a bound from above on $d(z, x)$, we simply observe that, since both $x$ and $Y(x_0, r)$ belong to the closure of $B (x_0, 2r)$, then so does $z$ and hence
\begin{equation}
\label{e:above}
d(z, x) \leq 4r .
\end{equation}
Next, we provide a bound from below on $d(z, \Omega^c)$: since $z \in B^+(x_0, 4r) \subseteq \Omega$, then
\begin{equation}
\label{e:bplus}
{d(z, \Omega^c) \ge d(z, \partial B^+(x_0, 4r))} \ge
\min \Big\{ d(z, P^+ (x_0, 4r)), d (z, \partial B(x_0, 4r) ) \Big\} .
\end{equation}
First, we recall that $z \in B(x_0, 2r)$ and we provide a bound on the distance from $z$ to the spherical part of $\partial B^+(x_0, 4r)$:
$$
d ( z, \partial B(x_0, 4r) ) = 4r - d(z, x_0) \ge 4r - 2r =2 r.
$$
Next, we observe that
$$
d(z, P^+(x_0, 4r)) = d ( z, P(x_0, 4r) ) - 8 \varepsilon r
$$
and, since $z \in [x, Y(x_0, r)]$, then
$$
d ( z, P(x_0, 4r) ) \ge \min \Big\{ d ( x, P(x_0, 4r), d ( Y(x_0, r), P(x_0, 4r) ) \Big\}.
$$
Note that $d ( Y(x_0, r), P(x_0, 4r) )= r \cos \beta$ and, using Lemma \ref{angleLemma}, we conclude that $${d ( Y(x_0, r), P(x_0, 4r) )\geq r/2}$$ because $\varepsilon \leq 1/10$. Also, since $B^- (x_0, 4r) \subseteq \Omega^c$, then
$$
r/2 \leq d(x, \Omega^c) \leq d(x, B^-(x_0, 4r)) = d ( x, P(x_0, 4r) ) + 2 \varepsilon r
$$
By recalling~\eqref{e:bplus} and the inequality $\varepsilon \leq 1/600$ and by combining all the previous observations we conclude that
\begin{equation}
\label{e:below}
\begin{split}
d(z, \Omega^c) & \ge d(z, \partial B^+(x_0, 4r) ) \ge
\min \{ d(z, P^+ (x_0, 4r)), 2r \} =
\min \{ d(z, P (x_0, 4r)) - 8 \varepsilon r, 2r \} \ge \\
& \ge
\min \Big\{ \min \big\{ d \big( x, P(x_0, 4r) \big),
d \big( Y(x_0, r), P(x_0, 4r) \big) \big\}- 8 \varepsilon r , 2r \Big\} \ge \\
& \ge
\min \Big\{ \min \big\{ r / 2 - 2 \varepsilon r,
r /2 \big\}- 8 \varepsilon r , 2r \Big\} =
\frac{r}{2} - 10 \varepsilon r \ge \frac{29}{60} r. \\
\end{split}
\end{equation}
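The last inequality in~\eqref{e:below} is a direct consequence of the assumption $\varepsilon \leq 1/600$; for the reader's convenience, the computation is
$$
\frac{r}{2} - 10 \varepsilon r \ge \frac{r}{2} - \frac{10\, r}{600} = \frac{30\, r}{60} - \frac{r}{60} = \frac{29}{60}\, r.
$$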
Finally, by comparing~\eqref{e:below} and~\eqref{e:above} we obtain~\eqref{jonesprop}. \\
{\sc $\Diamond$ Step 5.} We now establish~\eqref{jonesprop} in case (II).
If $z \in [x, Y(x_0, 2^{-k_0}r)]$, then we can repeat the argument we used in {\sc Step 4} by replacing $r$ with $2^{-k_0}r$, which satisfies
$$
d(x, \Omega^c) \leq 2^{-k_0} r \leq 2 d(x, \Omega^c).
$$
Hence, we are left to consider the case when $z \in [Y(x_0,2^{-k}r), Y(x_0,2^{-(k+1)}r)]$ for some natural number $k \leq k_0 -1$. We set $\rho:= 2^{-k} r$ and we work in the ball $B(x_0, 2 \rho)$. We denote by $\alpha$ the angle between $\nu_{2 \rho}$ and $\nu_{\rho}$, and by $\beta$ the angle between $\nu_{2 \rho}$ and $\nu_{\rho/2}$. Due to Lemma \ref{angleLemma} applied with $M=2$ and $M=4$, we know that, if $\varepsilon\leq 1/13$, then
$$
\rho \cos \alpha \ge 4 \varepsilon \rho \qquad \text{and} \qquad
\frac{1}{2} \rho \cos \beta \ge 4 \varepsilon \rho,
$$
so that both $Y(x_0, \rho)$ and $Y(x_0, \rho/2)$ belong to $B^+(x_0, 2 \rho)$. Hence, given
$${z \in [Y(x_0, \rho), Y(x_0, \rho/2)] } \subseteq B^+ (x_0, 2 \rho)\subseteq \Omega,$$ we have
$
{ d(z, \Omega^c) \ge d(z, \partial B^+(x_0, 2 \rho))}.
$
The distance from $z$ to the spherical part of $\partial B^+(x_0, 2 \rho)$ is bounded from below by
$\rho$, while the distance from $z$ to $P^+(x_0, 2 \rho)$ is bounded from below by
$
\frac{1}{2} \rho - 4 \varepsilon \rho \ge \frac{1}{4} \rho
$
provided that $\varepsilon \leq 1/16.$ Hence,
$
d(z, \Omega^c) \ge \rho/4.
$
To provide an upper bound on $d(z, x)$ we observe that, since ${d(x, x_0)= d(x, \Omega^c) \leq 2^{-k}r}$, then both $z$ and $x$ belong to the closure of $B(x_0, \rho)$. Hence, ${d(x, z) \leq 2 \rho}$ and~\eqref{jonesprop} holds.\\
{\sc $\Diamond$ Step 6.} We are finally ready to show that $\Omega$ is a Jones-flat domain. Given $x, y \in \Omega$ satisfying $d(x, y) \leq r_0/7$, there are two possible cases:
\begin{enumerate}
\item if either $d(x,\Omega^c) \ge 2 d(x,y)$ or $d(y,\Omega^c) \ge 2 d(x,y)$, then we set $\gamma : =[x,y]$. To see that $\gamma$ satisfies~\eqref{jonesprop3}, let us assume that $d(x,\Omega^c) \ge 2 d(x,y)$ (the other case is completely analogous); then $y \in \overline{B}(x, d(x,y))\subseteq \Omega$ and $[x, y] \subseteq \Omega$. Also, since
\begin{eqnarray}
\sup_{z \in [x,y]} \frac{d(z,x)d(z,y)}{d(x,y)} = \frac{1}{4}d(x,y), \label{geod}
\end{eqnarray}
then for any $z \in \gamma$,
$$
d(z,\Omega^c) \geq d(x,\Omega^c) -d(z,x) \geq 2d(x,y) - d(x,y) = d(x,y) \geq 4\,d(z,x)d(z,y)/d(x,y),
$$
where the last inequality follows from~\eqref{geod}.
Hence, $\gamma$ satisfies~\eqref{jonesprop3} provided that $\delta =4$.
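For completeness, we include the short computation behind~\eqref{geod}: writing $z = x + t(y-x)$ with $t \in [0,1]$, we have
$$
\frac{d(z,x)\,d(z,y)}{d(x,y)} = t(1-t)\, d(x,y) \le \frac{1}{4}\, d(x,y),
$$
and the supremum is attained at the midpoint $t = 1/2$.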
\item We are left to consider the case when both $d(x,\Omega^c) < 2 d(x,y)$ and $d(y,\Omega^c) < 2 d(x,y)$. Denote by $x_0 \in \partial \Omega$ a point such that $d(x,\Omega^c)=d(x,x_0)$, by $y_0 \in \partial \Omega$ a point such that $d(y,y_0)=d(y,\Omega^c)$, and set
$r:=d(x,y)\leq r_0/7$. We define
\begin{equation}
\label{e:gamma}
\gamma: = \gamma_{x,r}\cup\gamma_{y,r} \cup [Y(x_0,r),Y(y_0,r)].
\end{equation}
{\sc Step 7} is devoted to showing that $\gamma$ satisfies~\eqref{e:lenght} and~\eqref{jonesprop3}.
\end{enumerate}
{\sc $\Diamond$ Step 7.} First, we establish~\eqref{e:lenght}: we observe that
$$
d \Big( Y(x_0,r),Y(y_0,r) \Big) \leq d \Big( Y(x_0,r), x_0 \Big) + d(x_0, x) + d(x, y) +
d(y, y_0) +
d \Big( y_0,Y(y_0,r) \Big) \leq 7r
$$
and hence by using~\eqref{jonesprop2}
$$
\mathcal H^1(\gamma) \leq \mathcal H^1(\gamma_{x,r}) + d \Big( Y(x_0,r),Y(y_0,r) \Big) +
\mathcal H^1(\gamma_{y,r}) \leq 15r
$$
which proves \eqref{e:lenght}.
Next, we establish~\eqref{jonesprop3}: we denote by $d_{\gamma}$ the geodesic distance on the curve $\gamma$ and we observe that
\begin{equation}
\label{e:geo}
\frac{d(z,y)}{15 d(x,y)} \leq \frac{d_\gamma(z,y)}{d_\gamma(x,y)}\leq 1.
\end{equation}
Hence, if $z \in \gamma_{x, r}$, then by using~\eqref{jonesprop} we obtain
$$ d(z,\Omega^c)\geq \frac{29}{240} d(z,x)\geq \frac{29}{240 \cdot 15}
\left(\frac{d(z,x)d(z,y)}{d(x,y)} \right)
$$
and we next observe that $29/(240 \cdot 15) \ge 20/(240 \cdot 15) = 1/ 180$. Since the same argument works in the case when $z\in \gamma_{y,r}$, we are left to establish~\eqref{jonesprop3} in the case when $z$ lies on the segment $[Y(x_0,r),Y(y_0,r)]$.
We first observe that
\begin{equation}
\label{e:dxY}
d(x_0, Y(y_0, r)) \leq
d(x_0, x) + d(x, y) + d(y, y_0) + d(y_0,Y (y_0, r) ) \leq 6r
\end{equation}
and hence $[Y(x_0,r),Y(y_0,r)] \subseteq B (x_0, 7r)$. Next, we note that $7r \leq r_0$ and we use~\eqref{e:below} to get
\begin{equation}
\label{e:dYp}
\frac{29}{60}r \leq d(Y(x_0, r), \Omega^c) \leq d(Y(x_0, r), P^-(x_0, 7r)),
\end{equation}
hence, since $\varepsilon$ is so small that $28 \varepsilon r \leq 29 r/ 60$, we have $d(Y(x_0, r), P^-(x_0, 7r)) \ge 28 \varepsilon r$, which means that $Y(x_0, r) \in B^+(x_0, 7r)$. By repeating the same argument we get ${Y(y_0, r) \in B^+(x_0, 7r)}$ and hence
${[Y(x_0,r),Y(y_0,r)] \subseteq B^+(x_0, 7r)}$.
We fix $z \in [Y(x_0,r),Y(y_0,r)] $ and we observe that
\begin{equation}
\label{e:dzx}
\begin{split}
d(z, x) & \leq d(z, Y(x_0, r)) + d(Y(x_0, r), x_0) + d(x_0, x) \\
& \leq
d(Y(y_0,r), Y(x_0, r)) + d(Y(x_0, r), x_0) + d(x_0, x)
\leq
7r + r + 2r =10r. \\
\end{split}
\end{equation}
Also,
\begin{equation}
\label{e:dzo}
d(z, \Omega^c) \ge d(z, \partial B^+(x_0, 7r)) \ge
\min \Big\{ d (z, \partial B(x_0, 7r) ); d(z, P^+(x_0, 7r)) \Big\}
\end{equation}
and by using~\eqref{e:dxY} we get
$$
d(z, \partial B(x_0, 7r) ) \ge r.
$$
Also, we have
$$
d(z, P^+(x_0, 7r)) \ge \min \Big\{ d(Y(x_0, r), P^+(x_0, 7r)), d( Y(y_0, r), P^+(x_0, 7r))
\Big\}
$$
and by recalling~\eqref{e:dYp} we get that
$$
d(Y(x_0, r), P^+(x_0, 7r)) = d(Y(x_0, r), P^-(x_0, 7r)) - 28 \varepsilon r \ge
\frac{29}{60}r - 28 \varepsilon r \ge \frac{r}{3} .
$$
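The last inequality above again follows from the smallness assumption on $\varepsilon$: indeed, since $\varepsilon \leq 1/600$,
$$
\frac{29}{60} r - 28 \varepsilon r \ge \frac{29}{60} r - \frac{28}{600} r = \frac{290 - 28}{600}\, r = \frac{262}{600}\, r \ge \frac{200}{600}\, r = \frac{r}{3}.
$$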
Since $Y(y_0,r)$ satisfies the same estimate, by recalling~\eqref{e:geo},~\eqref{e:dzx} and~\eqref{e:dzo}
we get
$$
d(z, \Omega^c) \ge \frac{r}{3} \ge \frac{1}{3 \cdot 10} d(z, x) \ge
\frac{1}{3 \cdot 10 \cdot 15} \frac{d(z, x) d(z, y)}{d(x, y)},
$$
which concludes the proof because $3 \cdot 10 \cdot 15 = 450$.
\end{proof}
\begin{remark} There are Jones-flat domains that are not Reifenberg-flat, for instance a Lipschitz domain with a sufficiently large Lipschitz constant (for example, a heavily nonconvex polygonal domain). Actually, Jones \cite[Theorem 3]{J} proved that, for a simply connected domain in dimension 2, being Jones-flat is equivalent to being an extension domain, which in turn is known to be equivalent to the fact that the boundary is a quasicircle (see the introduction of \cite{J}).
\end{remark}
\section{Extension properties of Reifenberg-flat domains and applications }
\label{appl}
In this section we combine the analysis in~\cite{J} with Theorem~\ref{jonesrei} to prove that domains that are sufficiently flat in the sense of Reifenberg satisfy the extension property. We also discuss some direct consequences. Note that in this section we always assume that $\Omega$ is connected, as Jones did in~\cite{J}. In Section~\ref{connected} we prove that the connectedness assumption can actually be removed in the case of Reifenberg-flat domains. Note also that, before providing the precise extension result, we have to introduce a preliminary lemma comparing different notions of ``radius'' of a given domain $\Omega$.
\subsection{``Inner radius'', ``outer radius'' and ``diameter'' of a given domain}
We term outer radius of a nonempty set $\Omega \subseteq \mathbb R^N$ the quantity
\begin{equation}
\label{e:outrad}
Rad (\Omega) : = \inf_{x \in \Omega} \sup_{y \in \Omega} d(x, y),
\end{equation}
and we term inner radius the quantity
\begin{equation}
\label{e:inrad}
rad (\Omega) : = \sup_{x \in \Omega} \sup \{r>0: \; B(x,r)\subset \Omega\}.
\end{equation}
The inner radius is the radius of the largest ball that fits inside $\Omega$, whereas the outer radius, as seen below, is the radius of the smallest ball, centered in $\overline{\Omega}$, that contains $\Omega$.
Also, we recall that ${\rm Diam} (\Omega)$ denotes the diameter of $\Omega$,
namely
$$
{\rm Diam} (\Omega) : = \sup_{x, y \in \Omega} d (x, y).
$$
$$
For the convenience of the reader, we collect some consequences of the definition in the following lemma.
\begin{lem}
\label{l:rad}
Let $\Omega$ be a nonempty subset of $\mathbb R^N$, then the following properties
hold:
\begin{itemize}
\item[(i)] We have the formula
\begin{equation}
\label{e:rad}
Rad(\Omega)=\inf_{x \in \Omega} \inf\{r>0: \; \Omega \subset B(x,r)\}.
\end{equation}
Also, if $Rad (\Omega) < + \infty$, then there is a point $ x \in \overline{\Omega}$ such that $\Omega \subseteq \overline{B}(x, Rad (\Omega))$.
\item[(ii)] $rad(\Omega)\leq Rad(\Omega)\leq {\rm Diam}(\Omega). $
\item[(iii)] If $\Omega$ is an $(\varepsilon, r_0)$-Reifenberg-flat domain for some $r_0 >0$ and some $\varepsilon$ satisfying $0 < \varepsilon < 1/2$, then $r_0/4 \leq rad (\Omega)\leq Rad(\Omega) \leq {\rm Diam}(\Omega) .$
\end{itemize}
\end{lem}
\begin{proof}
To establish property (i), we first observe that, if $\Omega$ is not bounded, then $Rad (\Omega) = + \infty$ and formula~\eqref{e:rad} is trivially satisfied. On the other hand, the assumption $Rad (\Omega) < + \infty$ implies that $\Omega$ is bounded and hence that the closure $\overline{ \Omega}$ is compact. Therefore, if $Rad (\Omega) < + \infty$, then
\begin{equation}
\label{e:min}
Rad (\Omega) = \min_{x \in \overline{\Omega}} \sup_{y \in \Omega} d(x, y)
\end{equation}
and if $x_0\in \overline{\Omega}$ denotes any point realizing the minimum in \eqref{e:min}, we have $\Omega \subset \overline{B}(x_0,Rad(\Omega))$. This establishes the inequality
$$Rad(\Omega)\geq \inf_{x \in \Omega} \inf\{r>0: \; \Omega \subset B(x,r)\}.$$
To establish the reverse inequality we observe that, if $x\in \Omega$ is an arbitrary point and $r>0$ is such that $\Omega \subset B(x,r)$, then $\sup_{y \in \Omega}d(x,y)\leq r$. By taking the infimum over $x$ and $r$ we conclude. This ends the proof of property (i).
To establish (ii), we focus on the case when $Rad(\Omega)<+\infty$, since otherwise $\Omega$ is unbounded and (ii) trivially holds. By relying on (i), we infer that $\Omega \subseteq \overline{B}(x_0, Rad (\Omega))$ for some point $x_0\in \overline{\Omega}$. Given $x \in \Omega$ and $r>0$ satisfying $B(x,r)\subset \Omega$, we have $B(x,r)\subset \overline{B}(x_0,Rad(\Omega))$. Hence, $ d(x, x_0)+r\leq Rad(\Omega)$ and, in particular, $r\leq Rad(\Omega)$. By taking the supremum over $r$ and $x$ we finally get $rad(\Omega)\leq Rad(\Omega)$. The inequality $Rad(\Omega)\leq {\rm Diam}(\Omega)$ directly follows from the two definitions.
Given (ii), establishing property (iii) amounts to showing that
\begin{eqnarray}
rad(\Omega)\geq r_0/4. \label{amontrer17}
\end{eqnarray}
We can assume with no loss of generality that $\partial \Omega\not = \emptyset$, otherwise $\Omega=\mathbb R^N$ and~\eqref{amontrer17} trivially holds in this case (we recall that the case $\Omega=\emptyset$ is ruled out by the definition of Reifenberg-flat domain).
Hence, we fix $y \in \partial \Omega$, denote by $P(y, r_0)$ the hyperplane in the definition and let $\vec \nu$ be its normal vector. We choose the orientation of $\vec \nu$ in such a way that
\begin{equation}
\label{e:incl}
\{ z + t \vec \nu: \; z \in P(y, r_0), \; t \ge 2 \varepsilon r_0 \} \cap B(y, r_0 ) \subseteq \Omega.
\end{equation}
Since $d_H (P(y, r_0) \cap B(y, r_0), \partial \Omega \cap B(y, r_0) ) \leq \varepsilon r_0$, then from~\eqref{e:incl} we infer that actually
$$
\{ z + t \vec \nu: \; z \in P(y, r_0), \; t \ge \varepsilon r_0 \} \cap B(y, r_0 ) \subseteq \Omega.
$$
By recalling $\varepsilon < 1/2$, we infer that there is $x\in \Omega$ such that $B(x,r_0/4)\subset \Omega$ and this establishes \eqref{amontrer17}.
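For instance, a direct computation (not spelled out in the original argument) shows that the point $x := y + \tfrac{3}{4} r_0 \vec \nu$ is admissible: every $w \in B(x, r_0/4)$ satisfies
$$
d(w, P(y,r_0)) \ge \frac{3}{4} r_0 - \frac{1}{4} r_0 = \frac{r_0}{2} > \varepsilon r_0
\qquad \text{and} \qquad
d(w, y) < \frac{3}{4} r_0 + \frac{1}{4} r_0 = r_0,
$$
so that $B(x, r_0/4) \subseteq \Omega$ by the previous inclusion.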
\end{proof}
\subsection{Extension properties and applications}
The following extension property of Reifenberg-flat domains is established by combining Theorem~\ref{jonesrei} above with Jones' analysis (Theorem 1 in~\cite{J}).
\begin{cor} \label{ext}
Let $\Omega \subseteq \mathbb R^N$ be a connected, $(\varepsilon,r_0)$-Reifenberg-flat domain. If $\varepsilon \leq 1/600$, then, for every $p \in [1,+\infty] $, there is an extension operator $E : W^{1,p}(\Omega) \to W^{1,p}(\mathbb R^N)$ satisfying
$$\|E(u)\|_{W^{1,p}(\mathbb R^N)} \leq C \|u\|_{W^{1,p}(\Omega)},$$
where the constant $C$ only depends on $N$, $p$, and $r_0$.
\end{cor}
\begin{proof}
The corollary is a direct application of~\cite[Theorem 1]{J}.
The only nontrivial point we have to address is that, in general, the norm of the extension operator $E$ depends on $Rad (\Omega)$; see for example the statements of Jones' Theorem provided in the paper by Chua~\cite{Chua-indiana} and in the very recent preprint by Brewster, D. Mitrea, I. Mitrea and M. Mitrea~\cite{BrewsterMitreas}. Note that in Jones' original statement the dependence on the radius was not mentioned because the radius was fixed (see the remark at the top of page 76 in~\cite{J}).
However, by applying for example the remarks in~\cite[pages 9 and 10]{BrewsterMitreas} to Reifenberg-flat domains, we get that the norm of $E$ is bounded by $C(N, p, r_0, M)$ if $ 1/ Rad (\Omega) \leq M$. By recalling that $r_0/4 \leq Rad(\Omega)$, we finally infer that the bound on the norm of the extension operator only depends on $N, p$ and $r_0$ and this concludes the proof.
\end{proof}
\begin{remark}
\label{chuachrist}To simplify the exposition, we chose to only state the extension property for classical Sobolev spaces. However, the extension property also applies to other classes of spaces. For instance, Chua~\cite{Chua-indiana,Chua-illinois,Chua-Canad} extended Jones' Theorem to weighted Sobolev spaces. These spaces are defined by replacing the Lebesgue measure by a weighted measure $\omega \, dx$, where $\omega$ is a function satisfying suitable growth conditions and Poincar\'e inequalities. Also, Christ~\cite{Christ} established the extension property for Sobolev spaces of fractional order.
The results of both Christ~\cite{Christ} and Chua~\cite{Chua-indiana,Chua-illinois,Chua-Canad} apply to Jones-flat domains, hence by relying on Theorem~\ref{jonesrei} we infer that they apply to $(1/600,r_0)$-Reifenberg-flat domains as well.
\end{remark}
As a consequence of Corollary \ref{ext} we get that the classical Rellich-Kondrachov Theorem holds in Reifenberg-flat domains.
\begin{prop}
\label{embedding}
Let $\Omega \subseteq \mathbb R^N$ be a bounded, connected $(\varepsilon,r_0)$-Reifenberg-flat domain and assume $0<\varepsilon \leq 1/600$.
If $1\leq p < N$, set $p^*:=\displaystyle{\frac{Np}{N-p}}$. Then the Sobolev space $W^{1,p}(\Omega)$ is continuously embedded in the space $L^{p^*}(\Omega)$ and is compactly embedded in $L^q(\Omega)$ for every $1\leq q < p^*$.
If $p \ge N$, then the Sobolev space $W^{1,p}(\Omega)$ is compactly embedded in
$L^q(\Omega)$ for every $q \in [1, + \infty[$ and, if $p > N$, it is also continuously embedded in the space $L^{\infty}(\Omega)$.
Also, the norm of the above embedding operators only depends on $N$, $r_0$, $q$, $p$ and $Rad(\Omega)$.
\end{prop}
\begin{proof} We first use the extension operator provided by Corollary \ref{ext} and then we apply the classical Embedding Theorem in a ball of radius $Rad(\Omega)$ containing $\Omega$ (see property (i) in the statement of Lemma~\ref{l:rad}).
\end{proof}
As an application of Proposition \ref{embedding}, we establish a uniform bound on the $L^{\infty}$ norm of Neumann eigenfunctions defined in Reifenberg-flat domains. We use this bound in the companion paper~\cite{lms2}. Here is the precise statement. We recall that we term ``Neumann eigenfunction'' an eigenfunction for the Laplace operator subject to homogeneous Neumann conditions on the boundary of the domain.
\begin{prop}\label{uestim} Let $\Omega \subseteq \mathbb R^N$ be a bounded, connected, $(\varepsilon,r_0)$-Reifenberg-flat domain and let $u$ be a Neumann eigenfunction associated to the eigenvalue $\mu$. If $\varepsilon \leq 1/600$, then $u$ is bounded and
\begin{eqnarray}
\label{e:linfty}
\|u \|_{L^\infty(\Omega)}\leq C (1+\sqrt{\mu})^{\gamma(N)} \|u\|_{L^{2}(\Omega)}, \label{desirein}
\end{eqnarray}
where $\gamma(N)=\max\big\{ \frac{N}{2},\frac{2}{N-1} \big\}$ and $C=C(N, r_0,Rad(\Omega))$.
\end{prop}
\begin{proof} By using classical techniques coming from the regularity theory for elliptic operators, Ross~\cite[Proposition 3.1]{marty} established~\eqref{desirein} in the case of Lipschitz domains. However, in~\cite{marty} the only reason why one needs the regularity assumption on the domain $\Omega$ is to
use the Sobolev inequality
\begin{equation}
\label{e:sobsob}
\|u\|_{L^{2^*}(\Omega)}\leq C(\|u\|_{L^2(\Omega)}+\|\nabla u\|_{L^2(\Omega)}), \qquad C = C(N,r_0, Rad(\Omega))
\end{equation}
as the starting point for a bootstrap argument. Since Proposition \ref{embedding} states that~\eqref{e:sobsob} holds if $\Omega$ is a bounded Reifenberg-flat domain, then the proof in~\cite{marty} can be extended to the case of Reifenberg-flat domains.
\end{proof}
\begin{remark}
An inequality similar to \eqref{desirein} holds for Dirichlet eigenfunctions. We emphasize that the boundedness of Dirichlet eigenfunctions, unlike that of Neumann eigenfunctions, does not require any regularity assumption on the domain $\Omega$; see for instance \cite[Lemma 3.1]{davies2} for a precise statement.
\end{remark}
\section{Connected components of Reifenberg-flat domains}
\label{connected}
In the previous section we have always assumed that the domain $\Omega$ is connected.
We now show that the results we have established can be extended to general (i.e., not necessarily connected) Reifenberg-flat domains. Although extensions of Jones' result~\cite{J} to non-connected domains were already widely known in the literature, we decided to provide here a self-contained proof. In this way, we obtain results on the structure of Reifenberg-flat domains that may be of independent interest.
We first show that any sufficiently flat Reifenberg-flat domain is finitely connected and we establish a quantitative bound on the Hausdorff distance between two connected components.
\begin{prop} \label{topol1} Let $\Omega \subseteq \mathbb R^N$ be a bounded, $(\varepsilon,r_0)$-Reifenberg-flat domain and assume $\varepsilon \leq 20^{-N}$. Then $\Omega$ has a finite number of nonempty, open and disjoint connected components $U_1, \dots, U_{n}$, where
\begin{equation}
\label{e:n}
n \leq \frac{20^N}{\omega_N} \frac{|\Omega|}{r_0^N}.
\end{equation}
Moreover, if $i \neq j$, then for every $z \in \partial U_i$ we have
\begin{equation}
\label{e:sep}
d(z, U_j) > r_0/70.
\end{equation}
\end{prop}
\begin{proof} We proceed according to the following steps.\\
{\sc $\Diamond$ Step 1.} We recall that any nonempty open set $\Omega\subseteq \mathbb R^N$ can be decomposed as
\begin{equation}
\label{e:dec}
\Omega: = \bigcup_{i \in I} U_i,
\end{equation}
where the connected components $U_i$ satisfy
\begin{itemize}
\item for every $i \in I$, $U_i$ is a nonempty, open, arcwise connected set which is also closed in $\Omega$. Hence, in particular, $\partial U_i \subseteq \partial \Omega$.
\item $U_i \cap U_j = \emptyset$ if $ i \neq j$.
\end{itemize}
Indeed, for any $x \in \Omega$ we can define
$$
U_x : = \big\{ y \in \Omega: \; \text{there is a continuous curve
$\gamma: [0, 1] \to \Omega$ such that $\gamma(0)=x$ and $\gamma(1)=y$}\big\}
$$
and observe that any $U_x$ is a nonempty, open, arcwise connected set which is also closed in $\Omega$. Also, given two points $x, y \in \Omega$, we have either $U_x =U_y$ or $U_x \cap U_y = \emptyset$. \\
{\sc $\Diamond$ Step 2.} Let $\Omega$ be as in the statement of the proposition, and let the family $\{U_i \}_{i \in I}$ be as in~\eqref{e:dec}. We fix $i \in I$ and we prove that $|U_i| \ge C(r_0, N) $. This straightforwardly implies that $\sharp I \leq C(|\Omega|, r_0, N)$.
Since $U_i$ is bounded, $\partial U_i \neq \emptyset$: hence, we can fix a point $\tilde x \in \partial U_i$ and a sequence $\{ x_n \}_{n \in \mathbb N} $ such that $x_n \in U_i$ and $x_n \to \tilde x$ as $n \to + \infty$. We recall that $\partial U_i \subseteq \partial \Omega$ and we infer that, for any $n \in \mathbb N$, the following chain of inequalities holds:
$$
d(x_n, \partial U_i) = d(x_n, U_i^c) \leq d (x_n, \Omega^c) = d(x_n, \partial \Omega) \leq d(x_n, \partial U_i),
$$
which implies $d (x_n, \Omega^c) = d(x_n, \partial U_i)$. We fix $n$ sufficiently large such that
$d(x_n, \tilde x) \leq r_0 /7$, so that
$$d(x_n, \Omega^c)= d(x_{n},\partial U_i)\leq r_0/7.$$ We denote by $\Gamma:=\gamma_{x_{n},r_0/7}$ the polygonal curve constructed as in Step 2 of the proof of Theorem \ref{jonesrei} and we observe that, if $\varepsilon \leq 1/32$, then~\eqref{jonesprop} holds and $\Gamma \subseteq \Omega$; hence, by definition of $U_i$, $\Gamma \subseteq U_i$. We use the same notation as in {\sc Step 1} of the proof of Theorem~\ref{jonesrei} and we recall that $\Gamma$ connects $x_n$ to some point $Y(x_0, r_0/7)$, defined for some $x_0\in \partial \Omega$. Hence, in particular, $Y(x_0, r_0/7) \in U_i$, and this implies that $B^+(x_0, r_0/7) \subseteq U_i$ because $B^+(x_0, r_0/7)$ is connected. This finally yields
$$
|U_i| \ge |B^+(x_0, r_0/7)| \ge \omega_N \Big( \frac{r_0}{14} (1 - 2 \varepsilon) \Big)^N\geq \omega_N \Big( \frac{9r_0}{140} \Big)^N\geq \omega_N \Big( \frac{r_0}{20} \Big)^N,
$$
because $\varepsilon\leq 20^{-N}\leq 1/20$. We deduce that
$$
\sharp I \leq \frac{20^N}{\omega_N} \frac{|\Omega|}{r_0^N}.
$$
{\sc $\Diamond$ Step 3} We establish the separation property~\eqref{e:sep}.
We set $r_1:= r_0 /70$ and we argue by contradiction, assuming that there are $z \in \partial U_i$, $y \in \partial U_j$ such that
$$
d(z, U_j) = d (z, \partial U_j) = d(z, y) \leq r_1.
$$
Let $\{ z_n \}_{n \in \mathbb N}$ and $\{y_n \}_{n \in \mathbb N}$ be sequences in $U_i$ and $U_j$ converging to $z$ and $y$, respectively. We fix $n$ sufficiently large such that
$$
d(z_n, \partial U_i) \leq d(z_n, z) \leq r_1 \leq r_0/14
$$
and we let $\bar z$ be a point in $\partial U_i$ satisfying $d(z_n, \bar z) = d(z_n, \partial U_i)$ (if there is more than one such $\bar z$, we arbitrarily fix one). By arguing as in {\sc Step 2}, we infer that $B^+ (\bar z, r_0/14) \subseteq U_i$.
Next, we do the same for $U_j$, namely we fix $m$ sufficiently large that
$$
d(y_m, \partial U_j) \leq d(y_m, y) \leq r_1 \leq r_0/7,
$$
we let $\bar y$ be a point in $\partial U_j$ satisfying $d(y_m, \bar y) = d(y_m, \partial U_j)$ and, by arguing as in {\sc Step 2}, we get that $B^+ (\bar y, r_0/7) \subseteq U_j$. Also, we note that
$$
d(\bar z, \bar y) \leq d(\bar z, z_n) + d(z_n, z) + d(z, y) + d(y, y_m) + d(y_m, \bar y)
\leq 5 r_1.
$$
Since $r_1 = r_0 /70$, we have
$B^+ (\bar z, r_0/14) \subseteq B (\bar z, r_0/14) \subseteq B (\bar y, r_0/7)$. We observe that
\begin{eqnarray}
{B^+ (\bar z, r_0/14) \cap B^- (\bar y, r_0/7)= \emptyset} \label{totor1}
\end{eqnarray}
since by construction
$B^+ (\bar z, r_0/14) \subseteq \Omega$ and $B^- (\bar y, r_0/7) \subseteq \Omega^c$. Also, by recalling that
$$
B^+ (\bar z, r_0/14) \subseteq U_i, \qquad B^+ (\bar y, r_0/7) \subseteq U_j \; \; \; \;\text{and} \; \; \; \;
U_i \cap U_j = \emptyset,
$$
we have that
\begin{eqnarray}
B^+ (\bar z, r_0/14) \cap B^+ (\bar y, r_0/7) = \emptyset. \label{totor2}
\end{eqnarray}
By combining~\eqref{totor1} and~\eqref{totor2} we get
\begin{eqnarray}
B^+ (\bar z, r_0/14) \subseteq B (\bar y, r_0/7) \setminus \big( B^+ (\bar y, r_0/7) \cup B^- (\bar y, r_0/7) \big). \label{totor3}
\end{eqnarray}
We now use the inequality
\begin{eqnarray}
\omega_{N}\geq \omega_{N-1} \frac{1}{2^{N-1}}, \label{amontrerlater}
\end{eqnarray}
which will be proven later. By relying on \eqref{amontrerlater} and by recalling that $\varepsilon\leq 20^{-N}\leq 1/20$ we obtain
\begin{eqnarray}
| B^+ (\bar z, r_0/14) | \ge \omega_N \left( \frac{r_0}{28} (1-2 \varepsilon) \right)^N \geq 2\omega_{N-1}\left( \frac{9r_0}{560} \right)^N \notag
\end{eqnarray}
and
\begin{eqnarray}
\Bigg|
B (\bar y, r_0/7) \setminus \big( B^+ (\bar y, r_0/7) \cup B^- (\bar y, r_0/7) \big)
\Bigg| \leq 4 \varepsilon \omega_{N-1} \left( \frac{r_0}{7} \right)^N\leq 2\omega_{N-1}\left( \frac{2r_0}{140} \right)^N, \notag
\end{eqnarray}
which contradicts \eqref{totor3} since $2/140< 9/560$.
To finish the proof we are thus left to establish~\eqref{amontrerlater}. To do this, we use the relation
\begin{eqnarray}
\omega_N&=&\omega_{N-1}\int_{-1}^1\big(\sqrt{1-x^2}\big)^{N-1}\,dx. \notag
\end{eqnarray}
This implies that, for any $\lambda\in (0,1)$, we have
\begin{eqnarray}
\omega_N&\geq & 2\omega_{N-1}\int_{0}^\lambda \big(\sqrt{1-x^2}\big)^{N-1}\,dx \notag \\
&\geq & 2\omega_{N-1}\, \lambda \big(\sqrt{1-\lambda^2}\big)^{N-1}. \notag
\end{eqnarray}
By choosing $\lambda={\sqrt{3}}/{2}$ we obtain the inequality
$$\omega_{N}\geq \omega_{N-1}\frac{\sqrt{3}}{2^{N-1}}\geq \omega_{N-1}\frac{1}{2^{N-1}},$$
and this concludes the proof.
\end{proof}
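The inequality~\eqref{amontrerlater} (in its sharper form with the factor $\sqrt 3$) can also be verified numerically; the snippet below is an independent sanity check, with the closed formula for $\omega_N$ as its only input, and plays no role in the argument.

```python
import math

def omega(N):
    # volume of the unit ball in R^N
    return math.pi ** (N / 2) / math.gamma(N / 2 + 1)

# check omega_N >= sqrt(3) * omega_{N-1} / 2^{N-1} in a range of dimensions
for N in range(2, 21):
    assert omega(N) >= math.sqrt(3) * omega(N - 1) / 2 ** (N - 1)
```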
By relying on Proposition~\ref{topol1} we can now remove the connectedness assumption in the statement of Proposition~\ref{ext}.
\begin{cor} Let $N\geq 2$ and $\Omega \subseteq \mathbb R^N$ be a bounded, $(\varepsilon,r_0)$-Reifenberg flat domain with $\varepsilon \leq \min(20^{-N},1/600)$. Then for every $p \in [1, + \infty]$ there is an extension operator
\begin{equation}
E: W^{1, p} (\Omega) \to W^{1, p} (\mathbb R^N)
\end{equation}
whose norm is bounded by a constant which only depends on $N$, $p$, and $r_0$.
\end{cor}
\begin{proof} We employ the same notation as in the statement of Proposition~\ref{topol1} and we fix a connected component $U_i$. By recalling that $\partial U_i \subseteq \partial \Omega$ and the separation property~\eqref{e:sep}, we infer that $U_i$ is itself an $(\varepsilon, r_0/140)$-Reifenberg flat domain. Since by definition $U_i$ is connected, we can apply Proposition~\ref{ext}, which yields, for every $p \in [1, + \infty]$, an extension operator
$$
E_i: W^{1, p} (U_i) \to W^{1, p} (\mathbb R^N)
$$
whose norm is bounded by a constant which only depends on $N$, $p$ and $r_0$.
In order to ``glue together'' the extension operators $E_1, \dots, E_n$, we proceed as follows. Given $i=1, \dots, n$, we set $\delta:=r_0/280$ and we introduce the notation
$$
U_i^{\delta}:= \big\{ x \in \mathbb R^N: \; d(x, U_i ) < \delta \big\}.
$$
Note that the separation property~\eqref{e:sep} implies that $U_i^{2 \delta} \cap U_j^{2 \delta} = \emptyset$ if $i \neq j$.
We now construct suitable cut-off functions $\varphi_i$, $i=1, \dots, n$. Let $\ell: [0, + \infty[ \to [0, 1]$ be the auxiliary function defined by setting
\begin{equation*}
\ell (t) : =
\left\{
\begin{array}{ll}
1 & \text{if $t \leq \delta$} \\
\displaystyle{1 + \frac{\delta - t }{\delta} }
& \text{if $\delta \leq t \leq 2 \delta $} \\
0 & \text{if $t \ge 2 \delta$} \\
\end{array}
\right.
\end{equation*}
We set $\varphi_i(x): = \ell \big(d(x, U_i) \big)$ and we recall that the function $x \mapsto d(x, U_i)$ is 1-Lipschitz and that $\delta = r_0 / 280$. Hence, the function $\varphi_i$ satisfies the following properties:
\begin{equation}
\label{e:varphi}
0 \leq \varphi_i(x) \leq 1, \; \;
|\nabla \varphi_i(x)| \leq C(r_0) \; \;
\forall x \in \mathbb R^N,
\quad \varphi_i \equiv 1 \; \textrm{on $U_i$}, \quad
\varphi_i \equiv 0 \; \textrm{on $\mathbb R^N \setminus U^{2 \delta}_i$}.
\end{equation}
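A minimal sketch of the cut-off profile $\ell$ (the concrete value of $\delta$ below is an arbitrary test value, not part of the construction):

```python
def ell(t, delta):
    # 1 on [0, delta], linear ramp 1 + (delta - t)/delta on [delta, 2*delta],
    # 0 beyond; ell is (1/delta)-Lipschitz, and since x -> d(x, U_i) is
    # 1-Lipschitz, so is phi_i(x) = ell(d(x, U_i), delta)
    if t <= delta:
        return 1.0
    if t <= 2 * delta:
        return 1.0 + (delta - t) / delta
    return 0.0

delta = 0.5
assert ell(0.0, delta) == 1.0 and ell(2 * delta, delta) == 0.0
assert abs(ell(1.5 * delta, delta) - 0.5) < 1e-12
```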
We then define
$
E: W^{1, p} (\Omega) \to W^{1, p} (\mathbb R^N)
$
by setting
$$
E(u)(x) : = \sum_{i=1}^n E_i(u)(x) \varphi_i(x).
$$
We recall that the sets $U_1, \dots, U_n$ are all pairwise disjoint, we focus on the case $p < + \infty$ and we get
\begin{equation*}
\begin{split}
\| E(u)\|_{L^p(\mathbb R^N)} & =
\left( \int_{\mathbb R^N} \left| \sum_{i=1}^n E_i(u)(x) \varphi_i(x) \right|^p dx \right)^{1/p} \leq
\sum_{i=1}^n \left( \int_{U_i^{2 \delta}} | E_i(u)(x) \varphi_i(x) |^p dx \right)^{1/p}
\\
& \leq
\sum_{i=1}^n \| E_i(u) \|_{L^p(\mathbb R^N)}
\leq \sum_{i=1}^nC(N, p, r_0) \|u\|_{W^{1,p}(U_i)} \\
&\leq
C(N, p, r_0) \| u \|_{W^{1, p} (\Omega)}. \phantom{\int_{\Omega}}\\
\end{split}
\end{equation*}
Also, by using the bound on $|\nabla \varphi_i|$ provided by~\eqref{e:varphi}, we get
\begin{equation*}
\begin{split}
\| \nabla E(u)\|_{L^p(\mathbb R^N)} & =
\left( \int_{\mathbb R^N} \left| \sum_{i=1}^n
\big( \nabla E_i(u)(x) \varphi_i(x) + E_i(u)(x) \nabla \varphi_i(x) \big)
\right|^p dx \right)^{1/p} \\ &
\leq
\sum_{i=1}^n \left( \int_{U_i^{2 \delta}} | \nabla E_i(u)(x) \varphi_i(x) |^p dx \right)^{1/p}
+ \sum_{i=1}^n \left(
\int_{U_i^{2 \delta}} | E_i(u)(x) \nabla \varphi_i(x) |^p dx \right)^{1/p}
\\ & \leq \sum_{i=1}^n \|\nabla E_i(u)\|_{L^p(\mathbb R^N)}
+ C(r_0) \sum_{i=1}^n \| E_i(u) \|_{L^p(\mathbb R^N)} \\
&\leq
C(N, p, r_0) \| u \|_{W^{1, p} (\Omega)}. \phantom{\int_{\Omega}}\\
\end{split}
\end{equation*}
The proof in the case $p = \infty$ is a direct consequence of the bounds on the norm of $ E_i $ and on the uniform norms of $\varphi_i$ and $\nabla \varphi_i$. This concludes the proof of the corollary.
\end{proof}
\section{On the Hausdorff distance between Reifenberg-flat domains}
\label{s:hausdorff}
We end this paper by comparing different ways of measuring the ``distance" between Reifenberg-flat domains.
\subsection{Comparison between different Hausdorff distances.}
This subsection aims at comparing the Hausdorff distances $d_H(X, Y)$, $d_H(X^c, Y^c)$ and $d_H(\partial X, \partial Y)$, where $X$ and $Y$ are subsets of $\mathbb R^N$.
First, we exhibit two examples showing that, in general, neither $d_H (X, Y)$ controls $d_H(X^c, Y^c)$ nor $d_H (X^c, Y^c)$ controls $d_H(X, Y)$.
We denote by $B:=B(\vec 0, 1)$ the unit ball and we consider the two perturbations $A$ and $C$ represented in Figure~\ref{fig2}.
\begin{figure}
\caption{The unit ball $B$ and its two perturbations $A$ and $C$.}
\label{fig2}
\end{figure}
Next, we exhibit an example showing that, in general, $d_H(\partial X, \partial Y)$ controls neither $d_H(X, Y)$ nor $d_H(X^c, Y^c)$. Let $X:=B(\vec 0, R)$ and $Y:=
B (\vec 0, R + \varepsilon) \setminus B(\vec 0, R)$; then
$$
\varepsilon = d_H (\partial X, \partial Y) \ll d_H(X, Y)= d_H(X^c, Y^c)= R.
$$
Also, note that the examples represented in Figure~\ref{fig2} show that, in general, neither $d_H(X, Y)$ nor $d_H(X^c, Y^c)$ controls $d_H(\partial X, \partial Y).$ Indeed, $d_H(\partial A, \partial B) \simeq 1$ and $d_H (\partial C, \partial B) \simeq 1$.
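A discretized one-dimensional analogue of the ball/annulus example above can be checked numerically. The grids, step size and tolerances below are illustrative assumptions (and the paper's setting is $N\ge2$); the point is only that the boundaries are $\varepsilon$-close while the sets themselves are $R$ apart.

```python
def hausdorff(P, Q):
    # Hausdorff distance between two finite point sets on the real line
    d = lambda p, S: min(abs(p - q) for q in S)
    return max(max(d(p, Q) for p in P), max(d(q, P) for q in Q))

R, eps, h = 1.0, 0.1, 0.001
X = [i * h for i in range(-1000, 1001)]                      # X = [-R, R]
Y = ([-R - eps + i * h for i in range(101)]                  # Y = annulus
     + [R + i * h for i in range(101)])
bdX, bdY = [-R, R], [-R - eps, -R, R, R + eps]

assert abs(hausdorff(bdX, bdY) - eps) < 1e-9   # boundaries are eps-close...
assert abs(hausdorff(X, Y) - R) < 1e-9         # ...while the sets are R apart
```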
However, if $X$ and $Y$ are two sufficiently close Reifenberg-flat domains, then we have the following result.
\begin{lem}\label{airport2} Let $X$ and $Y$ be two $(\varepsilon, r_0)$-Reifenberg-flat domains satisfying ${d_H(\partial X, \partial Y)\leq 2 r_0}$. Then
\begin{equation}
\label{e:june}
d_H(\partial X, \partial Y)
\leq \frac{4}{1-2 \varepsilon} \min \big\{d_{H}(X,Y),d_{H}(X^c,Y^c)\big\}.
\end{equation}
\end{lem}
\begin{proof}
To fix ideas, assume that $d_H (\partial X, \partial Y) = \sup_{x \in \partial X} d(x, \partial Y)$. Since by assumption $d_H (\partial X, \partial Y) < + \infty$, for every $h>0$ there is $x_h \in \partial X$ such that
$$
d_H (\partial X, \partial Y) - h \leq d_h : = d (x_h, \partial Y) \leq
d_H (\partial X, \partial Y).
$$
Note that $\partial Y \cap B(x_h, d_h /2) = \emptyset$ and hence either (i) ${B(x_h, d_h /2) \subseteq Y}$ or
(ii) ${B(x_h, d_h /2) \subseteq Y^c}$.
First, consider case (i): let $P(x_h, d_h/2)$ be the hyperplane prescribed by the definition of Reifenberg flatness, then by Lemma~\ref{l:ii} we can choose the orientation of the normal vector $\nu$ in such a way that
$$
B^- (x_h, d_h/2) : = \left\{ z +t \nu: \; z \in P(x_h, d_h/2), \, t \ge \varepsilon d_h \right\} \cap B(x_h, d_h/2) \subseteq X^c
$$
and
$$
B^+ (x_h, d_h/2) : =
\left\{ z -t \nu: \; z \in P(x_h, d_h/2), \, t \ge \varepsilon d_h \right\} \cap B(x_h, d_h/2) \subseteq X.
$$
Fix the point
$$
\bar z:= x_h + \frac{\big( 1+ 2\varepsilon \big) d_h}{4} \nu,
$$
then we have
$$
B \left( \bar z, \frac{\big( 1 - 2 \varepsilon \big) d_h}{4} \right) \subseteq B^- (x_h, d_h/2) \subseteq X^c \cap Y
$$
and hence
$$
d_H (X^c, Y^c) \ge \sup_{z \in X^c } d( z, Y^c) \ge d( \bar z, Y^c) \ge \frac{(1-2 \varepsilon)d_h}{4}
$$
and
$$
d_H (X, Y) \ge \sup_{z \in Y} d(z, X) \ge d(\bar z, X) \ge \frac{(1- 2\varepsilon)d_h}{4}.
$$
Since case (ii) can be tackled in an entirely similar way, by the arbitrariness of
$h$ we deduce that
\begin{equation}
\label{e:may}
d_H(\partial X, \partial Y)
\leq \frac{4}{1-2 \varepsilon} d_{H}(X,Y).
\end{equation}
The proof of~\eqref{e:june} is concluded by making the following observations:
\begin{itemize}
\item if $X$ is an $(\varepsilon, r_0)$-Reifenberg flat domain, then $X^c$ is also an $(\varepsilon, r_0)$-Reifenberg flat domain.
\item $\partial X = \partial X^c$ and $\partial Y = \partial Y^c$.
\end{itemize}
Hence, by replacing in~\eqref{e:may} $X$ with $X^c$ and $Y$ with $Y^c$ we obtain~\eqref{e:june}. \end{proof}
\subsection{Comparison between the Hausdorff distance and the measure of the symmetric difference} This subsection aims at comparing the Hausdorff distances $d_H(X, Y)$ and $d_H (X^c, Y^c)$ with the Lebesgue measure of the symmetric difference, $|X \triangle Y|$. As usual, $X$ and $Y$ are subsets of $\mathbb R^N$. The results we state are applied in~\cite{lms2} to the stability analysis of the spectrum of the Laplace operator with Neumann boundary conditions.
First, we observe that the examples illustrated in Figure~\ref{fig2} show that, in general, $|X \triangle Y|$ controls neither $d_H(X, Y)$ nor $d_H(X^c, Y^c)$. Indeed, $|A \triangle B| \simeq \varepsilon$ and $|C \triangle B| \simeq \varepsilon.$ However, if
$X$ and $Y$ are two sufficiently close Reifenberg-flat domains, then the following result holds.
\begin{lem}\label{airport1} Let $X$ and $Y$ be two $(\varepsilon, r_0)$-Reifenberg-flat domains in $\mathbb R^N$.
Then the following implications hold:
\begin{enumerate}
\item If ${d_H(X, Y)\leq 4 r_0}$, then
\begin{equation}
d_{H}(X,Y) \leq \frac{8}{(1-2 \varepsilon) } \left( \frac{ |X \triangle Y|}{\omega_N} \right)^{1/N}. \label{mardi}
\end{equation}
\item If ${d_H( X^c, Y^c )\leq 4 r_0}$, then
\begin{equation}
\label{mardi2}
d_{H}(X^c,Y^c) \leq \frac{8}{(1-2 \varepsilon) } \left( \frac{ |X \triangle Y|}{\omega_N} \right)^{1/N}.
\end{equation}
\end{enumerate}
In both the previous expressions, $\omega_N$ denotes the measure of the unit ball in $\mathbb R^N$.
\end{lem}
\begin{proof} The argument relies on ideas similar to those used in the proof of Lemma~\ref{airport2}.
We first establish~\eqref{mardi}. To fix ideas, assume that
$
d_H (X, Y) = \sup_{x \in X} d(x, Y)
$
and note that by assumption $d_H(X, Y) < + \infty$. Hence, for every $h>0$ there is $x_h \in X$ such that
$$
d_H(X, Y) - h \leq d_h : = d(x_h, Y) \leq d_H(X, Y).
$$
Note that, by the very definition of
$d(x_h, Y)$, we have
$
B \left( x_h, d_h \right) \subseteq Y^c.
$
We now separately consider two cases: if $B(x_h, d_h /2) \subseteq X$, then
$$
B (x_h, d_h /2 ) \subseteq X \cap Y^c \subseteq X \triangle Y
$$
and hence
$$
\omega_N \left( \frac{d_h}{2} \right)^N \leq |X \triangle Y|,
$$
and by the arbitrariness of $h$ this implies~\eqref{mardi}.
Hence, we are left to consider the case when there is $x_0 \in B (x_h, d_h /2 ) \cap \partial X.$ We make the following observations: first,
\begin{equation}
\label{e:iy}
B(x_0, d_h/4) \subseteq B(x_h, d_h) \subseteq Y^c.
\end{equation}
Second, since $d_h/4 \leq d_H(X, Y)/4 \leq r_0$, then we can apply the definition of Reifenberg-flatness in the ball $B(x_0, d_h/4)$. Let $P(x_0, d_h/4)$ be the hyperplane provided by property (i) in the definition, and let $\nu_0$ denote the normal vector. By relying on Lemma~\ref{l:ii}
we infer that we can choose the orientation of $\nu_0$ in such a way that
\begin{equation}
\label{e:ix}
B \left(x_0 + \frac{(1 + 2 \varepsilon)d_h}{ 8} \, \nu_0, \frac{(1 - 2 \varepsilon)d_h}{ 8 } \right) \subseteq X \cap B(x_0, d_h/4).
\end{equation}
By combining~\eqref{e:iy} and~\eqref{e:ix} we infer that
$$
\omega_N \left( \frac{(1 - 2 \varepsilon)d_h}{ 8 } \right)^N \leq |X \cap Y^c| \leq |X \triangle Y|
$$
and by the arbitrariness of $h$ this completes the proof of~\eqref{mardi}.
Estimate~\eqref{mardi2} follows from~\eqref{mardi} by relying on the following two observations:
\begin{itemize}
\item $X \triangle Y = ( X^c \cap Y ) \cup ( X \cap Y^c) = X^c \triangle Y^c$.
\item if $X$ is an $(\varepsilon, r_0)$-Reifenberg flat domain, then $X^c$ is also an $(\varepsilon, r_0)$-Reifenberg flat domain.
\end{itemize}
Hence, by replacing in~\eqref{mardi} $X$ with $X^c$ and $Y$ with $Y^c$,
we get~\eqref{mardi2}.
\end{proof}
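Estimate~\eqref{mardi} can be illustrated on the explicit family of concentric balls (the values of $\varepsilon$ and of the pairs $(R,t)$ below are arbitrary test choices; such balls are Reifenberg flat for a suitable $r_0$):

```python
# For X = B(0, R) and Y = B(0, R + t) one has d_H(X, Y) = t and
# |X \triangle Y| / omega_N = (R + t)^N - R^N, so (mardi) reduces to
# t <= 8/(1 - 2*eps) * ((R + t)^N - R^N)^{1/N}
eps = 0.01
for N in (2, 3, 4):
    for R, t in ((1.0, 0.1), (2.0, 0.5)):
        bound = 8 / (1 - 2 * eps) * ((R + t) ** N - R ** N) ** (1 / N)
        assert t <= bound
```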
\begin{tabular}{l}
Antoine Lemenant\\
Universit\'e Paris Diderot - Paris 7 - LJLL - CNRS \\
U.F.R de Math\'ematiques \\
Site Chevaleret Case 7012\\
75205 Paris Cedex 13 FRANCE\\
{e-mail : \small \tt [email protected]}
\end{tabular}
\begin{tabular}{l}
Emmanouil Milakis\\
University of Cyprus \\
Department of Mathematics \& Statistics \\
P.O. Box 20537\\
Nicosia, CY- 1678 CYPRUS\\
{e-mail : \small \tt [email protected]}
\end{tabular}
\begin{tabular}{l}
Laura V. Spinolo\\
IMATI-CNR, \\
via Ferrata 1 \\
I-27100, Pavia, ITALY \\
{e-mail : \small \tt [email protected]}
\hfill
\end{tabular}
\end{document}
\begin{document}
\title{Classification of outer actions of $\mathbb{Z}^N$ on $\mathcal{O}_2$}
\begin{abstract}
We will show that
any two outer actions of $\mathbb{Z}^N$ on $\mathcal{O}_2$
are cocycle conjugate.
\end{abstract}
\section{Introduction}
Group actions on $C^*$-algebras and von Neumann algebras are
one of the most fundamental subjects in the theory of operator algebras.
A. Connes introduced a non-commutative Rohlin property
and classified single automorphisms of von Neumann algebras
(\cite{C1},\cite{C2}),
and A. Ocneanu generalized it
to actions of discrete amenable groups (\cite{O}).
In the setting of $C^*$-algebras,
A. Kishimoto established a non-commutative Rohlin type theorem
for single automorphisms on UHF algebras
and classified them up to outer conjugacy (\cite{K1},\cite{K2}).
H. Nakamura used the same technique
for automorphisms on purely infinite simple nuclear $C^*$-algebras
and classified them up to outer conjugacy (\cite{N2}).
Furthermore, he proved a Rohlin type theorem
for $\mathbb{Z}^2$-actions on UHF algebras
and gave a classification result for product type actions
up to cocycle conjugacy (\cite{N1}).
Recently, T. Katsura and the author generalized this result
and gave a complete classification of uniformly outer actions
of $\mathbb{Z}^2$ on UHF algebras (\cite{KM}).
In the case of finite group actions,
M. Izumi introduced a notion of the Rohlin property
and classified a large class of actions (\cite{I2},\cite{I3}).
The reader may consult the survey paper \cite{I1}
for the Rohlin property of automorphisms on $C^*$-algebras.
The aim of this paper is
to extend these results to $\mathbb{Z}^N$-actions
on the Cuntz algebra $\mathcal{O}_2$.
More precisely, we will show that
any outer actions of $\mathbb{Z}^N$ on $\mathcal{O}_2$ have the Rohlin property
and that they are cocycle conjugate to each other.
The content of this paper is as follows.
In Section 2,
we collect notations and basic facts relevant to this paper.
Notions of the ultraproduct algebra $A^\omega$ and
the central sequence algebra $A_\omega$ will help our analysis.
When $A$ is isomorphic to $\mathcal{O}_2$,
it is known that $A_\omega$ contains a unital copy of $\mathcal{O}_2$.
This fact implies
strong triviality of the $K$-theory of $\mathcal{O}_2$.
For $\mathbb{Z}^N$-actions on unital $C^*$-algebras,
we recall the definition of the Rohlin property from \cite{N1}
and give a couple of remarks.
In Section 3, we establish the cohomology vanishing theorem
for $\mathbb{Z}^N$-actions on the Cuntz algebra $\mathcal{O}_2$
with the Rohlin property.
One of the difficulties
in the study of $\mathbb{Z}^N$-actions on $C^*$-algebras is that
one has to deal with homotopies of unitaries
in order to obtain the so-called cohomology vanishing theorem.
In our situation, however,
we do not need
to take care of $K_*$-classes of (continuous families of) unitaries,
because of the triviality of $K_*(\mathcal{O}_2)$.
Indeed, any continuous family of unitaries in $\mathcal{O}_2$
is homotopic to the identity by a smooth path of finite length.
This enables us to avoid $K$-theoretical arguments.
Section 4 is devoted to the Rohlin type theorem
for $\mathbb{Z}^N$-actions on $\mathcal{O}_2$.
The main idea of the proof is similar to that in \cite{N1}.
We also make use of several techniques developed in \cite{N2}.
The proof is by induction on $N$,
because we need the cohomology vanishing theorem for $\mathbb{Z}^{N-1}$-actions
in order to prove the Rohlin type theorem for $\mathbb{Z}^N$-actions.
In Section 5, the main theorem is shown.
D. E. Evans and A. Kishimoto introduced in \cite{EK}
an intertwining argument for automorphisms,
which is an equivariant version of Elliott's intertwining argument
for classification of $C^*$-algebras.
By using the Evans-Kishimoto intertwining argument,
we show that
any two outer actions of $\mathbb{Z}^N$ on $\mathcal{O}_2$ are cocycle conjugate.
The cohomology vanishing theorem is necessary
in each step of the intertwining argument.
\textbf{Acknowledgement.}
The author is grateful to Toshihiko Masuda for many helpful discussions.
\section{Preliminaries}
Let $A$ be a unital $C^*$-algebra.
We denote by $\mathcal{U}(A)$ the group of unitaries in $A$.
For $u\in\mathcal{U}(A)$, we let $\operatorname{Ad} u(a)=uau^*$ for $a\in A$ and
call it an inner automorphism on $A$.
When an automorphism $\alpha\in\operatorname{Aut}(A)$ on $A$ is not inner,
it is said to be outer.
For any $a,b\in A$, we write $[a,b]=ab-ba$
and call it the commutator of $a$ and $b$.
In this paper, we deal with central sequence algebras,
which simplify the arguments a little.
Let $A$ be a separable $C^*$-algebra and
let $\omega\in\beta\mathbb{N}\setminus\mathbb{N}$ be a free ultrafilter.
We set
\[ c^\omega(A)=\{(a_n)\in\ell^\infty(\mathbb{N},A)\mid
\lim_{n\to\omega}\lVert a_n\rVert=0\}, \]
\[ A^\omega=\ell^\infty(\mathbb{N},A)/c^\omega(A). \]
We identify $A$ with the $C^*$-subalgebra of $A^\omega$
consisting of equivalence classes of constant sequences.
We let
\[ A_\omega=A^\omega\cap A'. \]
When $\alpha$ is an automorphism on $A$ or
an action of a discrete group on $A$,
we can consider its natural extension on $A^\omega$ and $A_\omega$.
We denote it by the same symbol $\alpha$.
If $A$ is a unital separable purely infinite simple $C^*$-algebra,
then $A^\omega$ is purely infinite simple.
When $A$ is
a unital separable purely infinite simple nuclear $C^*$-algebra,
it is known that $A_\omega$ is also purely infinite simple
(\cite[Proposition 3.4]{KP}).
We would like to collect several facts
about the Cuntz algebra $\mathcal{O}_2$.
The Cuntz algebra $\mathcal{O}_2$ is
the universal $C^*$-algebra generated by two isometries $s_1$ and $s_2$
satisfying $s_1s_1^*+s_2s_2^*=1$.
It is a unital separable purely infinite simple nuclear $C^*$-algebra
with trivial $K$-groups, i.e. $K_0(\mathcal{O}_2)=K_1(\mathcal{O}_2)=0$.
By \cite[Theorem 3.6]{R1},
any two automorphisms $\alpha$ and $\beta$ on $\mathcal{O}_2$
are approximately unitarily equivalent.
Thus,
there exists a sequence of unitaries $\{u_n\}_n$ in $\mathcal{O}_2$
such that $\beta(a)=\lim_{n\to\infty}u_n\alpha(a)u_n^*$
for all $a\in \mathcal{O}_2$.
It is also known that
$\mathcal{O}_2$ is isomorphic to the infinite tensor product
$\bigotimes_{n=1}^\infty\mathcal{O}_2$
(\cite{R2} or \cite[Theorem 3.8]{KP}).
Consequently,
$(\mathcal{O}_2)_\omega=(\mathcal{O}_2)^\omega\cap\mathcal{O}_2'$
contains a unital copy of $\mathcal{O}_2$.
(Actually, its converse also holds:
if $A$ is a unital simple separable nuclear $C^*$-algebra
and $A_\omega$ contains a unital copy of $\mathcal{O}_2$,
then $A$ is isomorphic to $\mathcal{O}_2$ (\cite[Lemma 3.7]{KP}).
But, in this paper, we do not need this fact.)
Let $e$ be a projection of $(\mathcal{O}_2)_\omega$.
By the usual argument taking subsequences,
we can find a unital copy of $\mathcal{O}_2$
in the relative commutant of $e$ in $(\mathcal{O}_2)_\omega$.
Therefore, $[e]=[e]+[e]$ in $K_0((\mathcal{O}_2)_\omega)$,
and so $[e]=0$.
Since $e$ is arbitrary and
$(\mathcal{O}_2)_\omega$ is purely infinite simple,
we have $K_0((\mathcal{O}_2)_\omega)=0$.
In a similar fashion, we also have $K_1((\mathcal{O}_2)_\omega)=0$.
Indeed, for any unitary $u\in(\mathcal{O}_2)_\omega$,
we can find a unital copy of $\mathcal{O}_2$
in the relative commutant of $u$ in $(\mathcal{O}_2)_\omega$.
It follows from \cite[Lemma 5.1]{HR} that
there exists a smooth path $u(t)$ of unitaries
in $(\mathcal{O}_2)_\omega$ such that $u(0)=u$ and $u(1)=1$.
Moreover, the length of the path is not greater than $8\pi/3$.
We will use this argument in Lemma \ref{HaagerupRordam} again.
Let $N$ be a natural number and
let $\xi_1,\xi_2,\dots,\xi_N$ be the canonical basis of $\mathbb{Z}^N$,
that is,
\[ \xi_i=(0,0,\dots,1,\dots,0,0), \]
where $1$ is in the $i$-th component.
Let $\alpha$ be an action of $\mathbb{Z}^N$ on a unital $C^*$-algebra $A$.
We say that $\alpha$ is an outer action on $A$,
if $\alpha_n$ is not inner on $A$ for any $n\in\mathbb{Z}^N\setminus\{0\}$.
A family of unitaries $\{u_n\}_{n\in\mathbb{Z}^N}$ in $A$ is called
an $\alpha$-cocycle,
if
\[ u_n\alpha_n(u_m)=u_{n+m} \]
for all $n,m\in\mathbb{Z}^N$.
If a family of unitaries $u_1,u_2,\dots,u_N\in\mathcal{U}(A)$ satisfies
\[ u_i\alpha_{\xi_i}(u_j)=u_j\alpha_{\xi_j}(u_i) \]
for all $i,j=1,2,\dots,N$,
then it determines uniquely an $\alpha$-cocycle $\{u_n\}_{n\in\mathbb{Z}^N}$
such that $u_{\xi_i}=u_i$.
We may also call the family $\{u_1,u_2,\dots,u_N\}$ an $\alpha$-cocycle.
An $\alpha$-cocycle $\{u_n\}_n$ in $A$ is called a coboundary,
if there exists $v\in\mathcal{U}(A)$ such that
\[ u_n=v\alpha_n(v^*) \]
for all $n\in\mathbb{Z}^N$, or equivalently,
if
\[ u_{\xi_i}=v\alpha_{\xi_i}(v^*) \]
for all $i=1,2,\dots,N$.
When $\{u_n\}_{n\in\mathbb{Z}^N}$ is an $\alpha$-cocycle,
it turns out that
a new action $\tilde{\alpha}$ of $\mathbb{Z}^N$ on $A$ can be defined by
\[ \tilde{\alpha}_n(x)=\operatorname{Ad} u_n\circ\alpha_n(x)=u_n\alpha_n(x)u_n^* \]
for each $x\in A$.
We call $\tilde{\alpha}$
the perturbed action of $\alpha$ by $\{u_n\}_n$.
Two actions $\alpha$ and $\beta$ of $\mathbb{Z}^N$ on $A$
are said to be cocycle conjugate,
if there exists an $\alpha$-cocycle $\{u_n\}_n$ in $A$ such that
the perturbed action of $\alpha$ by $\{u_n\}_n$ is conjugate to $\beta$.
The main theorem of this paper states that
any two outer actions of $\mathbb{Z}^N$ on $\mathcal{O}_2$
are cocycle conjugate to each other.
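As a toy illustration of the cocycle identity $u_n\alpha_n(u_m)=u_{n+m}$, consider the trivial action on the commutative $C^*$-algebra $\mathbb{C}$ (only a degenerate stand-in for $\mathcal{O}_2$, where unitaries are unit complex numbers): the characters $u_n=\exp(i\langle\theta,n\rangle)$ of $\mathbb{Z}^2$ are cocycles. The parameter $\theta$ below is arbitrary.

```python
import cmath

theta = (0.7, -1.3)  # arbitrary parameters of the character n -> u_n

def u(n):
    # u_n = exp(i <theta, n>), a unitary in C for each n in Z^2
    return cmath.exp(1j * sum(t * k for t, k in zip(theta, n)))

# cocycle identity u_n * alpha_n(u_m) = u_{n+m}, with alpha the trivial action
n, m = (2, -1), (3, 4)
assert abs(u(n) * u(m) - u((n[0] + m[0], n[1] + m[1]))) < 1e-12
```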
Let $N$ be a natural number.
We would like to recall
the definition of the Rohlin property for $\mathbb{Z}^N$-actions
on unital $C^*$-algebras (see \cite[Section 2]{N1}).
Let $\xi_1,\xi_2,\dots,\xi_N$ be the canonical basis of $\mathbb{Z}^N$ as above.
For $m=(m_1,m_2,\dots,m_N)$ and $n=(n_1,n_2,\dots,n_N)$ in $\mathbb{Z}^N$,
$m\leq n$ means $m_i\leq n_i$ for all $i=1,2,\dots,N$.
For $m=(m_1,m_2,\dots,m_N)\in\mathbb{N}^N$, we let
\[ m\mathbb{Z}^N=\{(m_1n_1,m_2n_2,\dots,m_Nn_N)\in\mathbb{Z}^N
\mid (n_1,n_2,\dots,n_N)\in\mathbb{Z}^N\}. \]
For simplicity, we denote $\mathbb{Z}^N/m\mathbb{Z}^N$ by $\mathbb{Z}_m$.
Moreover, we may identify $\mathbb{Z}_m=\mathbb{Z}^N/m\mathbb{Z}^N$ with
\[ \{(n_1,n_2,\dots,n_N)\in\mathbb{Z}^N
\mid 0\leq n_i\leq m_i-1\text{ for all }i=1,2,\dots,N\}. \]
\begin{df}\label{Rohlin}
Let $\alpha$ be an action of $\mathbb{Z}^N$ on a unital $C^*$-algebra $A$.
Then $\alpha$ is said to have the Rohlin property,
if for any $m\in\mathbb{N}$ there exist $R\in\mathbb{N}$ and
$m^{(1)},m^{(2)},\dots,m^{(R)}\in\mathbb{N}^N$
with $m^{(1)},\dots,m^{(R)}\geq(m,m,\dots,m)$
satisfying the following:
For any finite subset $\mathcal{F}$ of $A$ and $\varepsilon>0$,
there exists a family of projections
\[ e^{(r)}_g \qquad (r=1,2,\dots,R, \ g\in\mathbb{Z}_{m^{(r)}}) \]
in $A$ such that
\[ \sum_{r=1}^R\sum_{g\in\mathbb{Z}_{m^{(r)}}}e^{(r)}_g=1,
\quad \lVert[a,e^{(r)}_g]\rVert<\varepsilon,
\quad \lVert\alpha_{\xi_i}(e^{(r)}_g)-e^{(r)}_{g+\xi_i}\rVert<\varepsilon \]
for any $a\in\mathcal{F}$, $r=1,2,\dots,R$, $i=1,2,\dots,N$
and $g\in\mathbb{Z}_{m^{(r)}}$,
where $g+\xi_i$ is understood modulo $m^{(r)}\mathbb{Z}^N$.
\end{df}
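A degenerate but exact instance of such a Rohlin tower is the shift action of $\mathbb{Z}$ on the finite-dimensional commutative algebra $C(\mathbb{Z}/m\mathbb{Z})$, with $R=1$ and the indicator functions as the projections $e_j$ (a toy model only, not an action on $\mathcal{O}_2$; the tower height $m$ below is arbitrary):

```python
m = 5  # arbitrary tower height

def shift(f):
    # the shift automorphism of C(Z/mZ): (alpha f)(k) = f(k - 1)
    return [f[(k - 1) % m] for k in range(m)]

# e_j = indicator of {j}: projections forming an exact Rohlin tower
e = [[1 if k == j else 0 for k in range(m)] for j in range(m)]

assert [sum(col) for col in zip(*e)] == [1] * m      # the e_j sum to 1
for j in range(m):
    assert shift(e[j]) == e[(j + 1) % m]             # alpha(e_j) = e_{j+1}
```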
\begin{rem}\label{reindex}
Clearly, we can restate the definition of the Rohlin property
as follows.
For any $m\in\mathbb{N}$ there exist $R\in\mathbb{N}$,
$m^{(1)},m^{(2)},\dots,m^{(R)}\in\mathbb{N}^N$
with $m^{(1)},\dots,m^{(R)}\geq(m,m,\dots,m)$ and
a family of projections
\[ e^{(r)}_g \qquad (r=1,2,\dots,R, \ g\in\mathbb{Z}_{m^{(r)}}) \]
in $A_\omega=A^\omega\cap A'$ such that
\[ \sum_{r=1}^R\sum_{g\in\mathbb{Z}_{m^{(r)}}}e^{(r)}_g=1,
\quad \alpha_{\xi_i}(e^{(r)}_g)=e^{(r)}_{g+\xi_i} \]
for any $r=1,2,\dots,R$, $i=1,2,\dots,N$ and $g\in\mathbb{Z}_{m^{(r)}}$,
where $g+\xi_i$ is understood modulo $m^{(r)}\mathbb{Z}^N$.
Furthermore, by the reindexation trick,
for a given separable subset $S$ of $A^\omega$,
we can make the projections $e^{(r)}_g$ commute
with all elements in $S$.
We refer the reader to \cite[Lemma 5.3]{O} for details.
In particular, the same conclusion also follows
for perturbed actions on $A^\omega$.
Let $\alpha$ be an action of $\mathbb{Z}^N$ on $A$ with the Rohlin property
and let $\{u_n\}_n\subset\mathcal{U}(A^\omega)$ be
an $\alpha$-cocycle in $A^\omega$.
We can consider
the perturbed action $\tilde{\alpha}$ of $\mathbb{Z}^N$ on $A^\omega$.
Then, for a given separable subset $S$ of $A^\omega$,
we can choose the projections $e^{(r)}_g$ in $A_\omega$
so that they commute with all elements in $S\cup\{u_n\}_n$.
It follows that we have
\[ \tilde{\alpha}_{\xi_i}(e^{(r)}_g)
=u_{\xi_i}\alpha_{\xi_i}(e^{(r)}_g)u_{\xi_i}^*
=u_{\xi_i}e^{(r)}_{g+\xi_i}u_{\xi_i}^*
=e^{(r)}_{g+\xi_i} \]
for any $r=1,2,\dots,R$, $i=1,2,\dots,N$ and $g\in\mathbb{Z}_{m^{(r)}}$,
where $g+\xi_i$ is understood modulo $m^{(r)}\mathbb{Z}^N$.
\end{rem}
\begin{rem}\label{Rem2ofN1}
We can also restate the Rohlin property as follows (\cite[Remark 2]{N2}).
For any $n,m\in\mathbb{N}$ with $1\leq n\leq N$,
there exist $R\in\mathbb{N}$,
natural numbers $m^{(1)},m^{(2)},\dots,m^{(R)}\geq m$ and
a family of projections
\[ e^{(r)}_j \qquad (r=1,2,\dots,R, \ j=0,1,\dots,m^{(r)}-1) \]
in $A_\omega=A^\omega\cap A'$ such that
\[ \sum_{r=1}^R\sum_{j=0}^{m^{(r)}-1}e^{(r)}_j=1,
\quad \alpha_{\xi_n}(e^{(r)}_j)=e^{(r)}_{j+1},
\quad \alpha_{\xi_i}(e^{(r)}_j)=e^{(r)}_j \]
for any $r=1,2,\dots,R$, $i=1,2,\dots,N$ with $i\neq n$ and
$j=0,1,\dots,m^{(r)}-1$,
where the index is understood modulo $m^{(r)}$.
\end{rem}
\begin{rem}\label{restriction}
It is also obvious that
if $\alpha$ is an action of $\mathbb{Z}^N$ on $A$ with the Rohlin property,
then the action $\alpha'$ of $\mathbb{Z}^{N-1}$ generated by
$\alpha_{\xi_2},\alpha_{\xi_3},\dots,\alpha_{\xi_N}$ also has
the Rohlin property as a $\mathbb{Z}^{N-1}$-action.
\end{rem}
\section{Cohomology vanishing}
Throughout this section,
we let $A$ denote a $C^*$-algebra which is isomorphic to $\mathcal{O}_2$.
First we need a technical lemma about homotopies of unitaries.
\begin{lem}\label{HaagerupRordam}
Let $(X,d)$ be a compact metric space and
let $z:X\to\mathcal{U}(A^\omega)$ be a map.
Suppose that there exists $C>0$ such that
$\lVert z(x)-z(x')\rVert\leq Cd(x,x')$ for any $x,x'\in X$.
Then, for any separable subset $B$ of $A^\omega$,
we can find a map $\tilde{z}:X\times[0,1]\to\mathcal{U}(A^\omega)$
such that the following are satisfied.
\begin{enumerate}
\item For any $x\in X$, $\tilde{z}(x,0)=z(x)$ and $\tilde{z}(x,1)=1$.
\item For any $x,x'\in X$ and $t,t'\in[0,1]$,
\[ \lVert\tilde{z}(x,t)-\tilde{z}(x',t')\rVert
\leq 4Cd(x,x')+\frac{8\pi}{3}\lvert t-t'\rvert. \]
\item For any $a\in B$ and $(x,t)\in X\times[0,1]$,
$\lVert[\tilde{z}(x,t),a]\rVert\leq4\lVert[z(x),a]\rVert$.
\end{enumerate}
\end{lem}
\begin{proof}
Since $\mathcal{O}_2$ is isomorphic to
the infinite tensor product $\bigotimes_{i=1}^\infty\mathcal{O}_2$
(see \cite{R2} or \cite[Theorem 3.8]{KP}),
there exists a unital $C^*$-subalgebra $D$
in $A^\omega\cap(B\cup\{z(x)\mid x\in X\})'$
such that $D\cong\mathcal{O}_2$.
We regard $z$ as a unitary in $C(X)\otimes A^\omega$.
By \cite[Lemma 5.1]{HR} and its proof (see also \cite[Lemma 6]{N2}),
we can find a unitary $\tilde{z}\in C(X)\otimes C([0,1])\otimes A^\omega$
such that the following hold.
\begin{itemize}
\item $\tilde{z}(x,0)=z(x)$ and $\tilde{z}(x,1)=1$ for all $x\in X$.
\item $\displaystyle\lVert\tilde{z}(x,t)-\tilde{z}(x,t')\rVert
\leq\frac{8\pi}{3}\lvert t-t'\rvert$ for all $x\in X$ and $t,t'\in[0,1]$.
\item $\lVert\tilde{z}(x,t)-\tilde{z}(x',t)\rVert
\leq4\lVert z(x)-z(x')\rVert$ for all $x,x'\in X$ and $t\in[0,1]$.
\item $\lVert[\tilde{z}(x,t),a]\rVert\leq4\lVert[z(x),a]\rVert$
for all $(x,t)\in X\times[0,1]$ and $a\in A$.
\end{itemize}
Then the conclusions follow immediately.
\end{proof}
Let $N$ be a natural number.
We denote the $l^\infty$-norm on $\mathbb{R}^N$ by $\lVert\cdot\rVert$.
We put
\[ E=\{t\in\mathbb{R}^N\mid\lVert t\rVert\leq1\} \]
and
\[ \partial E=\{t\in\mathbb{R}^N\mid\lVert t\rVert=1\}. \]
\begin{lem}\label{extendz}
Let $C>0$ and
let $z_0:\partial E\to\mathcal{U}(A^\omega)$ be a map
such that
$\lVert z_0(t)-z_0(t')\rVert\leq C\lVert t-t'\rVert$
for every $t,t'\in\partial E$.
Then, for any separable subset $B$ of $A^\omega$,
there exists a map $z:E\to\mathcal{U}(A^\omega)$
such that the following hold.
\begin{enumerate}
\item For $t\in\partial E$, $z(t)=z_0(t)$.
\item For every $t,t'\in E$,
$\lVert z(t)-z(t')\rVert\leq(24C+16\pi/3)\lVert t-t'\rVert$.
\item For any $a\in B$ and $t\in E$,
$\lVert[z(t),a]\rVert\leq4\sup\{\lVert[z(s),a]\rVert\mid s\in\partial E\}$.
\end{enumerate}
\end{lem}
\begin{proof}
Lemma \ref{HaagerupRordam} applies and
yields a map $\tilde{z}_0:\partial E\times[0,1]\to\mathcal{U}(A^\omega)$.
We define $z:E\to\mathcal{U}(A^\omega)$ by
\[ z(t)=\begin{cases}
1 & \text{ if }\lVert t\rVert\leq1/2 \\
\tilde{z}_0(t/\lVert t\rVert,2(1-\lVert t\rVert))
& \text{ if }\lVert t\rVert\geq1/2. \end{cases} \]
Conditions (1) and (3) are immediate from the definition.
In order to verify (2), take $t,t'\in E$.
If $\lVert t\rVert,\lVert t'\rVert\leq1/2$,
then $z(t)=z(t')=1$ and there is nothing to prove.
Let us consider the case where
$\lVert t'\rVert\leq1/2\leq\lVert t\rVert$.
Since
\[ \lVert t\rVert-1/2\leq
\lVert t\rVert-\lVert t'\rVert
\leq\lVert t-t'\rVert, \]
we have
\begin{align*}
\lVert z(t)-z(t')\rVert
&=\lVert
\tilde{z}_0(t/\lVert t\rVert,2(1-\lVert t\rVert))-1
\rVert \\
&=\lVert
\tilde{z}_0(t/\lVert t\rVert,2(1-\lVert t\rVert))
-\tilde{z}_0(t/\lVert t\rVert,1)
\rVert \\
&\leq\frac{8\pi}{3}
\lvert2(1-\lVert t\rVert)-1\rvert \\
&\leq\frac{16\pi}{3}\lVert t-t'\rVert.
\end{align*}
It remains to check the case where
$\lVert t\rVert,\lVert t'\rVert\geq1/2$.
Put $s=\displaystyle\frac{\lVert t'\rVert}{\lVert t\rVert}t$.
Then
\begin{align*}
\lVert z(t)-z(s)\rVert
&\leq\frac{8\pi}{3}
\lvert2(1-\lVert t\rVert)-2(1-\lVert t'\rVert)\rvert \\
&\leq\frac{16\pi}{3}
\lvert\lVert t\rVert-\lVert t'\rVert\rvert \\
&\leq\frac{16\pi}{3}\lVert t-t'\rVert.
\end{align*}
Besides,
\begin{align*}
\lVert z(s)-z(t')\rVert
&\leq4C\left\lVert
\frac{t}{\lVert t\rVert}-\frac{t'}{\lVert t'\rVert}
\right\rVert \\
&\leq24C\lVert t-t'\rVert.
\end{align*}
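The second estimate rests on the following elementary bound, valid because $\lVert t\rVert,\lVert t'\rVert\geq1/2$:
\begin{align*}
\left\lVert\frac{t}{\lVert t\rVert}-\frac{t'}{\lVert t'\rVert}\right\rVert
&\leq\frac{\lVert t-t'\rVert}{\lVert t\rVert}
+\lVert t'\rVert\left\lvert\frac{1}{\lVert t\rVert}
-\frac{1}{\lVert t'\rVert}\right\rvert \\
&\leq2\lVert t-t'\rVert
+\frac{\lvert\,\lVert t'\rVert-\lVert t\rVert\,\rvert}{\lVert t\rVert}
\leq4\lVert t-t'\rVert\leq6\lVert t-t'\rVert.
\end{align*}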
Combining these, we obtain
\[ \lVert z(t)-z(t')\rVert
\leq(24C+16\pi/3)\lVert t-t'\rVert. \]
\end{proof}
For each $i=1,2,\dots,N$, we let
\[ E^+_i=\{(t_1,t_2,\dots,t_N)\in E\mid t_i=1\} \]
and
\[ E^-_i=\{(t_1,t_2,\dots,t_N)\in E\mid t_i=-1\}. \]
Thus, $E^+_i$ and $E^-_i$ are $(N-1)$-dimensional faces of $E$.
Let $\sigma_i:E\to E$ be the map such that
\[ \sigma_i:(t_1,t_2,\dots,t_i,\dots,t_N)
\mapsto(t_1,t_2,\dots,-t_i,\dots,t_N) \]
for each $i=1,2,\dots,N$.
For $k=1,2,\dots,N-1$, we define $E(k)$ by
\[ E(k)=\{(t_1,t_2,\dots,t_N)\in E\mid
\#\{i\mid\lvert t_i\rvert=1\}\geq N-k\}. \]
In other words, $E(k)$ is the union of $k$-dimensional faces of $E$.
We have $\partial E=E(N-1)=\bigcup_{i=1}^N(E^+_i\cup E^-_i)$.
For each $I\subset\{1,2,\dots,N\}$, we let
\[ E^+(I)=\{(t_1,t_2,\dots,t_N)\in E\mid
t_i=1\text{ for all }i\notin I\}. \]
Thus, $E^+(I)$ is a $\#I$-dimensional face of $E$ and
$E^+(I)\cap E(\#I-1)$ is the boundary of $E^+(I)$.
We also have $E^+_i=E^+(\{1,2,\dots,N\}\setminus\{i\})$.
\begin{lem}\label{extendz2}
Let $\alpha_1,\alpha_2,\dots,\alpha_N$ be
$N$ commuting automorphisms on $A^\omega$.
Suppose that
there exists a family of unitaries $u_1,u_2,\dots,u_N$ in $A^\omega$
satisfying $u_i\alpha_i(u_j)=u_j\alpha_j(u_i)$ for all $i,j=1,2,\dots,N$.
Let $1\leq k\leq N-2$ and $C>0$.
Suppose that
$z_0:E(k)\to\mathcal{U}(A^\omega)$ satisfies the following.
\begin{itemize}
\item $z_0(1,1,\dots,1)=1$.
\item For every $i=1,2,\dots,N$ and $t\in E(k)\cap E^+_i$,
$z_0(\sigma_i(t))=u_i\alpha_i(z_0(t))$.
\item For every $t,t'\in E(k)$,
$\lVert z_0(t)-z_0(t')\rVert\leq C\lVert t-t'\rVert$.
\end{itemize}
Then, for any separable subset $B$ of $A^\omega$, we can find
an extension $z:E(k+1)\to\mathcal{U}(A^\omega)$ of $z_0$
such that the following hold.
\begin{enumerate}
\item For every $i=1,2,\dots,N$ and $t\in E(k+1)\cap E^+_i$,
$z(\sigma_i(t))=u_i\alpha_i(z(t))$.
\item For every $t,t'\in E(k+1)$,
$\lVert z(t)-z(t')\rVert\leq(48C+32\pi/3)\lVert t-t'\rVert$.
\item Let $I$ be a subset of $\{1,2,\dots,N\}$ such that $\#I=k+1$.
For every $a\in B$ and $t\in E^+(I)$, one has
\[ \lVert[z(t),a]\rVert
\leq4\sup\{\lVert[z_0(s),a]\rVert,
\lVert[z_0(s),\alpha_i^{-1}(a)]\rVert+\lVert[u_i,a]\rVert
\mid s\in E^+(I\setminus\{i\}),i\in I\}. \]
\end{enumerate}
\end{lem}
\begin{proof}
For each $I\subset\{1,2,\dots,N\}$ with $\#I=k+1$,
by using Lemma \ref{extendz},
we can extend $z_0$ on $E^+(I)\cap E(k)$
to the map $z_I:E^+(I)\to\mathcal{U}(A^\omega)$ satisfying the following.
\begin{itemize}
\item For every $t,t'\in E^+(I)$,
$\lVert z_I(t)-z_I(t')\rVert\leq(24C+16\pi/3)\lVert t-t'\rVert$.
\item For any $a\in B$ and $t\in E^+(I)$,
$\lVert[z_I(t),a]\rVert
$\lVert[z_I(t),a]\rVert
\leq4\sup\{\lVert[z_0(s),a]\rVert\mid s\in E^+(I)\cap E(k)\}$.
\end{itemize}
We define $z:E(k+1)\to\mathcal{U}(A^\omega)$ as follows.
First, for $t\in E^+(I)$, we let $z(t)=z_I(t)$.
Then, we can uniquely extend $z$ to $E(k+1)$ so that
$z(\sigma_i(t))=u_i\alpha_i(z(t))$ holds
for any $i=1,2,\dots,N$ and $t\in E(k+1)\cap E^+_i$,
because of the equality $u_i\alpha_i(u_j)=u_j\alpha_j(u_i)$.
Note that
if $t,t'\in E(k+1)$ lie on the same $(k+1)$-dimensional face of $E$,
then we still have
$\lVert z(t)-z(t')\rVert\leq(24C+16\pi/3)\lVert t-t'\rVert$.
Condition (1) is already achieved.
Let us verify (2).
Take $t=(t_1,t_2,\dots,t_N)$ and $t'=(t'_1,t'_2,\dots,t'_N)$ in $E(k+1)$.
Since any two unitaries are within distance two of each other,
we may assume that $\lVert t-t'\rVert$ is less than two.
We define $s=(s_1,s_2,\dots,s_N)\in E(k+1)$ by
\[ s_i=\begin{cases}
t_i & \text{ if }\lvert t_i\rvert\neq1\text{ and }\lvert t'_i\rvert\neq1 \\
t_i & \text{ if }\lvert t_i\rvert=1 \\
t'_i & \text{ if }\lvert t'_i\rvert=1. \end{cases} \]
It is easy to see that
$t$ and $s$ lie on the same $(k+1)$-dimensional face of $E$
and that $t'$ and $s$ lie on the same $(k+1)$-dimensional face of $E$.
In addition,
both $\lVert t-s\rVert$ and $\lVert s-t'\rVert$
are less than $\lVert t-t'\rVert$.
It follows that
\begin{align*}
\lVert z(t)-z(t')\rVert
&\leq\lVert z(t)-z(s)\rVert+\lVert z(s)-z(t')\rVert \\
&\leq(24C+16\pi/3)(\lVert t-s\rVert+\lVert s-t'\rVert) \\
&\leq(48C+32\pi/3)\lVert t-t'\rVert,
\end{align*}
which ensures condition (2).
Let us consider (3). Take $t\in E^+(I)$.
We already have
\[ \lVert[z_I(t),a]\rVert
\leq4\sup\{\lVert[z_0(s),a]\rVert\mid s\in E^+(I)\cap E(k)\}. \]
For each $s=(s_1,s_2,\dots,s_N)$ in $E^+(I)\cap E(k)$,
there exists $i\in I$ such that $\lvert s_i\rvert=1$.
If $s_i=1$, then $s$ is in $E^+(I\setminus\{i\})$.
If $s_i=-1$, then $s'=\sigma_i(s)$ is in $E^+(I\setminus\{i\})$ and
\begin{align*}
\lVert[z_0(s),a]\rVert
&=\lVert[z_0(\sigma_i(s')),a]\rVert \\
&=\lVert[u_i\alpha_i(z_0(s')),a]\rVert \\
&\leq\lVert[z_0(s'),\alpha_i^{-1}(a)]\rVert+\lVert[u_i,a]\rVert,
\end{align*}
thereby completing the proof.
\end{proof}
The following proposition is a crucial tool for cohomology vanishing.
\begin{prop}\label{Berg's}
Let $\alpha_1,\alpha_2,\dots,\alpha_N\in\operatorname{Aut}(A^\omega)$ be
$N$ commuting automorphisms on $A^\omega$.
Suppose that
there exists a family of unitaries $u_1,u_2,\dots,u_N$ in $A^\omega$
satisfying $u_i\alpha_i(u_j)=u_j\alpha_j(u_i)$ for all $i,j=1,2,\dots,N$.
Then, for any separable subset $B$ of $A^\omega$,
we can find a continuous map $z:E\to\mathcal{U}(A^\omega)$
such that the following hold.
\begin{enumerate}
\item $z(1,1,\dots,1)=1$.
\item For every $i=1,2,\dots,N$ and $t\in E^+_i$,
$z(\sigma_i(t))=u_i\alpha_i(z(t))$.
\item For every $t,t'\in E$,
$\lVert z(t)-z(t')\rVert\leq50^N\lVert t-t'\rVert$.
\item For any $a\in B$ and $t\in E$,
\[ \lVert[z(t),a]\rVert
\leq4^N\sup\sum_{k=1}^K
\lVert[u_{i_k},
(\alpha_{i_1}\alpha_{i_2}\dots\alpha_{i_{k-1}})^{-1}(a)]\rVert, \]
where the supremum is taken over
all permutations $i_1,i_2,\dots,i_K$ of elements in $\{1,2,\dots,N\}$.
\end{enumerate}
\end{prop}
\begin{proof}
Clearly we may assume that
$B$ is $\alpha_i$-invariant for every $i=1,2,\dots,N$.
By applying Lemma \ref{HaagerupRordam} to the case
where $X$ is a singleton,
for each $i=1,2,\dots,N$, we obtain
a map $z_{1,i}$ from $E^+(\{i\})\cong[0,1]$ to $\mathcal{U}(A^\omega)$
satisfying the following.
\begin{itemize}
\item $z_{1,i}(\sigma_i(1,1,\dots,1))=u_i$ and $z_{1,i}(1,1,\dots,1)=1$.
\item For any $t,t'\in E^+(\{i\})$,
$\displaystyle \lVert z_{1,i}(t)-z_{1,i}(t')\rVert
\leq\frac{8\pi}{3}\lVert t-t'\rVert$.
\item For any $a\in B$ and $t\in E^+(\{i\})$,
$\lVert[z_{1,i}(t),a]\rVert\leq4\lVert[u_i,a]\rVert$.
\end{itemize}
From these maps $z_{1,i}$'s,
by the same argument as in the previous lemma,
we can construct a map $z_1:E(1)\to\mathcal{U}(A^\omega)$
such that the following are satisfied.
\begin{itemize}
\item For every $i=1,2,\dots,N$ and $t\in E^+(\{i\})$,
$z_1(t)=z_{1,i}(t)$.
\item For every $i=1,2,\dots,N$ and $t\in E(1)\cap E^+_i$,
$z_1(\sigma_i(t))=u_i\alpha_i(z_1(t))$.
\item For every $t,t'\in E(1)$,
$\displaystyle \lVert z_1(t)-z_1(t')\rVert
\leq\frac{16\pi}{3}\lVert t-t'\rVert$.
\end{itemize}
Note that we have used the equality $u_i\alpha_i(u_j)=u_j\alpha_j(u_i)$.
We apply Lemma \ref{extendz2} to $z_1:E(1)\to\mathcal{U}(A^\omega)$
and obtain an extension $z_2:E(2)\to\mathcal{U}(A^\omega)$ of $z_1$
which satisfies the following.
\begin{itemize}
\item For every $i=1,2,\dots,N$ and $t\in E(2)\cap E^+_i$,
$z_2(\sigma_i(t))=u_i\alpha_i(z_2(t))$.
\item For every $t,t'\in E(2)$,
$\lVert z_2(t)-z_2(t')\rVert\leq50^2\lVert t-t'\rVert$.
\item Let $I$ be a subset of $\{1,2,\dots,N\}$ such that $\#I=2$.
For every $a\in B$ and $t\in E^+(I)$, one has
\[ \lVert[z_2(t),a]\rVert
\leq4\sup\{\lVert[z_1(s),a]\rVert,
\lVert[z_1(s),\alpha_i^{-1}(a)]\rVert+\lVert[u_i,a]\rVert
\mid s\in E^+(I\setminus\{i\}),i\in I\}. \]
\end{itemize}
Repeating this argument, we get a map $z_{N-1}$
from $E(N-1)=\partial E$ to $\mathcal{U}(A^\omega)$
satisfying the following.
\begin{itemize}
\item For every $i=1,2,\dots,N$ and $t\in E^+_i$,
$z_{N-1}(\sigma_i(t))=u_i\alpha_i(z_{N-1}(t))$.
\item For every $t,t'\in E(N-1)$,
$\lVert z_{N-1}(t)-z_{N-1}(t')\rVert\leq50^{N-1}\lVert t-t'\rVert$.
\item Let $I$ be a subset of $\{1,2,\dots,N\}$ such that $\#I=N-1$.
For every $a\in B$ and $t\in E^+(I)$, one has
\begin{align*}
&\lVert[z_{N-1}(t),a]\rVert \\
&\leq4\sup\{\lVert[z_{N-2}(s),a]\rVert,
\lVert[z_{N-2}(s),\alpha_i^{-1}(a)]\rVert+\lVert[u_i,a]\rVert
\mid s\in E^+(I\setminus\{i\}),i\in I\}.
\end{align*}
\end{itemize}
By using Lemma \ref{extendz},
we can extend $z_{N-1}$ to $z:E\to\mathcal{U}(A^\omega)$.
Then, clearly conditions (2) and (3) are satisfied.
As for condition (4), we have
\begin{align*}
\lVert[z(t),a]\rVert
&\leq4\sup\{\lVert[z_{N-1}(s),a]\rVert\mid s\in E(N-1)\} \\
&\leq4\sup\{\lVert[z_{N-1}(s),a]\rVert,
\lVert[z_{N-1}(s),\alpha_i^{-1}(a)]\rVert+\lVert[u_i,a]\rVert
\mid s\in E^+_i,1\leq i\leq N\}
\end{align*}
for any $t\in E$ and $a\in B$.
By estimating norms of
commutators of $z_k(s)$ with elements in $B$ inductively,
we obtain the desired inequality.
\end{proof}
Now we would like to show the cohomology vanishing theorem.
\begin{thm}\label{CVanish}
Let $\alpha$ be an action of $\mathbb{Z}^N$ on $A$ with the Rohlin property
and let $\tilde\alpha$ be a perturbed action of $\alpha$
on $A^\omega$ by an $\alpha$-cocycle in $A^\omega$.
Let $B\subset A^\omega$ be an $\tilde\alpha$-invariant separable subset.
Suppose that a family of unitaries
$\{u_n\}_{n\in\mathbb{Z}^N}$ in $A^\omega\cap B'$ is an $\tilde\alpha$-cocycle.
Then, there exists a unitary $v\in\mathcal{U}(A^\omega\cap B')$
such that
\[ u_n=v\tilde\alpha_n(v^*) \]
for each $n\in\mathbb{Z}^N$, that is, $\{u_n\}_{n\in\mathbb{Z}^N}$ is a coboundary.
\end{thm}
\begin{proof}
Evidently it suffices to show the following:
For any $\varepsilon>0$,
there exists a unitary $v\in\mathcal{U}(A^\omega\cap B')$
such that
\[ \lVert u_{\xi_i}-v\tilde\alpha_{\xi_i}(v^*)\rVert<\varepsilon \]
for every $i=1,2,\dots,N$.
Choose a natural number $M$ so that $\varepsilon(M-1)>2\cdot50^N$.
Since $\alpha$ has the Rohlin property,
there exist $R\in\mathbb{N}$ and $m^{(1)},m^{(2)},\dots,m^{(R)}\in\mathbb{N}^N$
with $m^{(1)},\dots,m^{(R)}\geq(M,M,\dots,M)$
satisfying the requirement in Definition \ref{Rohlin}.
For each $r=1,2,\dots,R$ and $i=1,2,\dots,N$,
let $m^{(r)}_i$ denote the $i$-th coordinate of $m^{(r)}$.
We put $\eta_{r,i}=m^{(r)}_i\xi_i\in\mathbb{Z}^N$.
By applying Proposition \ref{Berg's} to
$\tilde\alpha_{\eta_{r,1}},\tilde\alpha_{\eta_{r,2}},
\dots,\tilde\alpha_{\eta_{r,N}}$ and
unitaries $u_{\eta_{r,1}},u_{\eta_{r,2}},\dots,u_{\eta_{r,N}}$
in $\mathcal{U}(A^\omega\cap B')$,
we obtain a map $z^{(r)}:E\to\mathcal{U}(A^\omega\cap B')$
satisfying the following.
\begin{itemize}
\item $z^{(r)}(1,1,\dots,1)=1$.
\item For every $i=1,2,\dots,N$ and $t\in E^+_i$,
$z^{(r)}(\sigma_i(t))
=u_{\eta_{r,i}}\tilde\alpha_{\eta_{r,i}}(z^{(r)}(t))$.
\item For every $t,t'\in E$,
$\lVert z^{(r)}(t)-z^{(r)}(t')\rVert\leq50^N\lVert t-t'\rVert$.
\end{itemize}
For each $r=1,2,\dots,R$ and $g=(g_1,g_2,\dots,g_N)\in\mathbb{Z}_{m^{(r)}}$,
we define $w^{(r)}_g$ in $\mathcal{U}(A^\omega\cap B')$ by
\[ w^{(r)}_g=z^{(r)}\left(\frac{2g_1}{m^{(r)}_1-1}-1,
\frac{2g_2}{m^{(r)}_2-1}-1,\dots,\frac{2g_N}{m^{(r)}_N-1}-1\right). \]
It is easily seen that
one has the following
for any $r=1,2,\dots,R$, $i=1,2,\dots,N$
and $g=(g_1,g_2,\dots,g_N)\in\mathbb{Z}_{m^{(r)}}$.
\begin{itemize}
\item If $g_i\neq0$, then
$\lVert w^{(r)}_g-w^{(r)}_{g-\xi_i}\rVert$ is less than $\varepsilon$.
\item If $g_i=0$, then
$w^{(r)}_g$ is equal to
$u_{\eta_{r,i}}\tilde\alpha_{\eta_{r,i}}(w^{(r)}_{g+\eta_{r,i}-\xi_i})$.
\end{itemize}
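The first assertion follows from the Lipschitz property of $z^{(r)}$ together with the choice of $M$: the grid points corresponding to $g$ and $g-\xi_i$ differ only in the $i$-th coordinate, by $2/(m^{(r)}_i-1)$, so
\[ \lVert w^{(r)}_g-w^{(r)}_{g-\xi_i}\rVert
\leq50^N\cdot\frac{2}{m^{(r)}_i-1}
\leq\frac{2\cdot50^N}{M-1}<\varepsilon. \]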
By Remark \ref{reindex},
we can take a family of projections
$\{e^{(r)}_g\mid r=1,2,\dots,R, \ g\in\mathbb{Z}_{m^{(r)}}\}$
in $A^\omega\cap B'$ such that
\[ \sum_{r=1}^R\sum_{g\in\mathbb{Z}_{m^{(r)}}}e^{(r)}_g=1,
\quad \tilde\alpha_{\xi_i}(e^{(r)}_g)=e^{(r)}_{g+\xi_i} \]
for any $r=1,2,\dots,R$, $i=1,2,\dots,N$ and $g\in\mathbb{Z}_{m^{(r)}}$,
where $g+\xi_i$ is understood modulo $m^{(r)}\mathbb{Z}^N$.
Moreover, we may also assume that
$e^{(r)}_g$ commutes with $u_g$ and $\tilde\alpha_g(w^{(r)}_g)$.
Define $v\in\mathcal{U}(A^\omega\cap B')$ by
\[ v=\sum_{r=1}^R\sum_{g\in\mathbb{Z}_{m^{(r)}}}
u_g\tilde\alpha_g(w^{(r)}_g)e^{(r)}_g. \]
It is now routine to check that
$\lVert u_{\xi_i}-v\tilde\alpha_{\xi_i}(v^*)\rVert$ is
less than $\varepsilon$ for each $i=1,2,\dots,N$.
\end{proof}
The following corollary is an immediate consequence of the theorem above
and we omit the proof.
\begin{cor}\label{appCVanish}
Let $\alpha$ be an action of $\mathbb{Z}^N$ on $A$ with the Rohlin property.
For any $\varepsilon>0$ and a finite subset $\mathcal{F}$ of $A$,
there exist $\delta>0$ and a finite subset $\mathcal{G}$ of $A$
such that the following holds:
If a family of unitaries $\{u_n\}_{n\in\mathbb{Z}^N}$ in $A$
is an $\alpha$-cocycle satisfying
\[ \lVert[u_{\xi_i},a]\rVert<\delta \]
for every $i=1,2,\dots,N$ and $a\in\mathcal{G}$, then
we can find a unitary $v\in\mathcal{U}(A)$
satisfying
\[ \lVert u_{\xi_i}-v\alpha_{\xi_i}(v^*)\rVert<\varepsilon \]
and
\[ \lVert[v,a]\rVert<\varepsilon \]
for each $i=1,2,\dots,N$ and $a\in\mathcal{F}$.
Furthermore,
if $\mathcal{F}$ is an empty set,
then we can take an empty set for $\mathcal{G}$.
\end{cor}
\section{Rohlin type theorem}
Throughout this section,
we let $A$ denote a unital $C^*$-algebra
which is isomorphic to $\mathcal{O}_2$.
In this section, we would like to show the Rohlin type theorem
for $\mathbb{Z}^N$-actions on $A$
by combining techniques developed in \cite{N1} and \cite{N2}.
\begin{lem}\label{scattered}
Let $\alpha$ be an outer action of $\mathbb{Z}^N$ on $A$.
Then, for any $m\in\mathbb{N}^N$ and
any non-zero projection $p\in A_\omega=A^\omega\cap A'$,
there exists a non-zero projection $e$ in $pA_\omega p$ such that
$e\alpha_g(e)=0$ for all $g\in\mathbb{Z}_m\setminus\{0\}$.
\end{lem}
\begin{proof}
We can prove this lemma
in exactly the same way as \cite[Lemma 3]{N2}.
See also \cite[Lemma 2]{N2}.
\end{proof}
The following lemma is a generalization of \cite[Lemma 3.5]{K2}.
\begin{lem}\label{fixedisom}
Let $\alpha$ be an action of $\mathbb{Z}^N$ on $A$ with the Rohlin property.
Suppose that
one has two non-zero projections $e,f\in A_\omega$
satisfying $\alpha_n(e)=e$ and $\alpha_n(f)=f$ for any $n\in\mathbb{Z}^N$.
Then, there exists $w\in A_\omega$ such that
$w^*w=e$, $ww^*=f$ and $\alpha_n(w)=w$ for all $n\in\mathbb{Z}^N$.
\end{lem}
\begin{proof}
To simplify notation, we denote $\alpha_{\xi_i}$ by $\alpha_i$.
Since $A_\omega$ is purely infinite simple and $K_0(A_\omega)=0$,
there exists a partial isometry $u\in A_\omega$
such that $u^*u=e$, $uu^*=f$.
Put $u_i=u^*\alpha_i(u)+1-e$.
It is straightforward to verify $u_i\alpha_i(u_j)=u_j\alpha_j(u_i)$
for all $i,j=1,2,\dots,N$.
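Indeed, since $\alpha_i(e)=e$ and $\alpha_i(f)=f$,
one has $\alpha_i(u)(1-e)=\alpha_i(u(1-e))=0$
and $(1-e)\alpha_i(u^*)=\alpha_i((1-e)u^*)=0$, so that
\begin{align*}
u_i\alpha_i(u_j)
&=(u^*\alpha_i(u)+1-e)\bigl(\alpha_i(u^*)\alpha_i\alpha_j(u)+1-e\bigr) \\
&=u^*\alpha_i(uu^*)\alpha_i\alpha_j(u)+1-e
=u^*\alpha_i\alpha_j(u)+1-e,
\end{align*}
which is symmetric in $i$ and $j$ because $\alpha_i$ and $\alpha_j$ commute.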
Thus, the family $\{u_i\}$ is an $\alpha$-cocycle in $A_\omega$.
Clearly $u_i$ commutes with $e$.
By Theorem \ref{CVanish},
we can find a unitary $v\in A_\omega$ such that
$[v,e]=0$ and $u_i=v\alpha_i(v^*)$.
Then $w=uv$ satisfies the requirements.
\end{proof}
We also have an approximate version of the lemma above.
\begin{lem}\label{appfixedisom}
Let $\alpha$ be an action of $\mathbb{Z}^N$ on $A$ with the Rohlin property.
For any $\varepsilon>0$ and a finite subset $\mathcal{F}$ of $A$,
there exist $\delta>0$ and a finite subset $\mathcal{G}$ of $A$
such that the following holds:
Suppose that
$e$ and $f$ are two non-zero projections in $A$ satisfying
\[ \lVert[e,a]\rVert<\delta, \ \lVert[f,a]\rVert<\delta
\quad\text{ for all }a\in\mathcal{G} \]
and
\[ \lVert\alpha_{\xi_i}(e)-e\rVert<\delta, \
\lVert\alpha_{\xi_i}(f)-f\rVert<\delta
\quad\text{ for each }i=1,2,\dots,N. \]
Then, we can find a partial isometry $v\in A$ such that
$v^*v=e$, $vv^*=f$ and
\[ \lVert[v,a]\rVert<\varepsilon
\quad\text{ for all }a\in\mathcal{F} \]
and
\[ \lVert\alpha_{\xi_i}(v)-v\rVert<\varepsilon
\quad\text{ for each }i=1,2,\dots,N. \]
\end{lem}
\begin{proof}
This immediately follows from the lemma above.
\end{proof}
Next, we have to recall a technical result
about almost cyclic projections.
Suppose that we are given $\varepsilon>0$ and $n\in\mathbb{N}$.
Choose a natural number $k\in\mathbb{N}$ so that $2/\sqrt{k}<\varepsilon$.
Let $\alpha$ be an automorphism on $A$ and
let $p$ be a projection of $A$
which satisfies $p\alpha^i(p)=0$ for all $i=1,2,\dots,nk$.
Furthermore, let $v\in A$ be a partial isometry
such that $v^*v=p$ and $vv^*=\alpha(p)$.
Define
\[ E_{i,j}=\begin{cases}
\alpha^{i-1}(v)\alpha^{i-2}(v)\dots\alpha^j(v) & \text{ if }i>j \\
\alpha^i(p) & \text{ if }i=j \\
\alpha^i(v^*)\alpha^{i+1}(v^*)\dots\alpha^{j-1}(v^*) &
\text{ if }i<j \end{cases} \]
for each $i,j=0,1,\dots,nk$.
Then we can easily see that $\{E_{i,j}\}$ is a system of matrix units
and $\alpha(E_{i,j})=E_{i+1,j+1}$ for any $i,j=0,1,\dots,nk-1$.
We let
\[ f=\frac{1}{k}\sum_{i,j=0}^{k-1}E_{ni,nj} \]
and
\[ e_i=\alpha^i(f) \]
for all $i=0,1,\dots,n-1$.
Then $\{e_i\}$ is an orthogonal family of projections in $A$ satisfying
\[ e_0+e_1+\dots+e_{n-1}\leq\sum_{i=0}^{nk-1}E_{i,i} \]
and
\[ \lVert e_0-\alpha(e_{n-1})\rVert<\frac{2}{\sqrt{k}}<\varepsilon. \]
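One way to verify the last estimate is the standard Berg-type computation:
within the finite-dimensional algebra spanned by $\{E_{ni,nj}\}_{i,j=0}^k$,
both $e_0=f$ and $\alpha(e_{n-1})=\alpha^n(f)$ are rank-one projections,
onto the unit vectors $\xi=k^{-1/2}\sum_{i=0}^{k-1}\delta_i$
and $\eta=k^{-1/2}\sum_{i=1}^{k}\delta_i$ respectively,
where $\delta_i$ denotes the unit vector supported on $E_{ni,ni}$.
Since $\langle\xi,\eta\rangle=(k-1)/k$, we get
\[ \lVert e_0-\alpha(e_{n-1})\rVert
=\sqrt{1-\left(\frac{k-1}{k}\right)^2}
\leq\frac{\sqrt{2}}{\sqrt{k}}<\frac{2}{\sqrt{k}}. \]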
Moreover, this argument also applies when
$p$ is only almost orthogonal to $\alpha^i(p)$.
Thus, for any $\varepsilon>0$,
there exists a small positive constant $c(\varepsilon,n,k)$
such that the following holds:
if $\lVert p\alpha^i(p)\rVert<c(\varepsilon,n,k)$ for all $i=1,2,\dots,nk$,
then we can find a projection $e_0$ such that
\[ \left\lVert(e_0+\alpha(e_0)+\dots+\alpha^{n-1}(e_0))
\left(1-\sum_{i=0}^{nk-1}E_{i,i}\right)\right\rVert<\varepsilon \]
and
\[ \lVert e_0-\alpha^n(e_0)\rVert<\frac{2}{\sqrt{k}}<\varepsilon. \]
Using these estimates, we can prove the next lemma.
\begin{lem}\label{appcyclicproj}
Let $\alpha$ be an outer action of $\mathbb{Z}^N$ on $A$.
Suppose that
the action $\alpha'$ of $\mathbb{Z}^{N-1}$ generated
by $\alpha_{\xi_2},\alpha_{\xi_3},\dots,\alpha_{\xi_N}$ has
the Rohlin property.
Then, for any $n\in\mathbb{N}$, $\varepsilon>0$ and
a finite subset $\mathcal{F}$ of $A$,
there exist $m\in\mathbb{N}^N$, $\delta>0$ and
a finite subset $\mathcal{G}$ of $A$ satisfying the following:
If a non-zero projection $p$ in $A$ satisfies
\[ \lVert p\alpha_g(p)\rVert<\delta
\quad\text{ for each }g\in\mathbb{Z}_m\setminus\{0\} \]
and
\[ \lVert[\alpha_g(p),a]\rVert<\delta
\quad\text{ for all }g\in\mathbb{Z}_m\text{ and }a\in\mathcal{G}, \]
then there exists a non-zero projection $e$ in $A$ such that
\begin{enumerate}
\item $\left\lVert(e+\alpha_{\xi_1}(e)+\dots+\alpha^{n-1}_{\xi_1}(e))
\left(1-\sum_{g\in\mathbb{Z}_m}\alpha_g(p)\right)\right\rVert<\varepsilon$.
\item $\lVert[\alpha^j_{\xi_1}(e),a]\rVert<\varepsilon$
for all $j=0,1,\dots,n-1$ and $a\in\mathcal{F}$.
\item $\lVert\alpha_{\xi_i}(e)-e\rVert<\varepsilon$ for each $i=2,3,\dots,N$.
\item $\lVert\alpha^n_{\xi_1}(e)-e\rVert<\varepsilon$.
\end{enumerate}
\end{lem}
\begin{proof}
We prove the lemma by induction on $N$.
When $N=1$, the assertion follows immediately
from \cite[Lemma 4]{N2} and its proof.
Suppose that the case of $N-1$ has been shown.
We would like to consider the case of $\mathbb{Z}^N$-actions.
To simplify notation, we denote $\alpha_{\xi_i}$ by $\alpha_i$.
We are given $n\in\mathbb{N}$, $\varepsilon>0$
and a finite subset $\mathcal{F}$ of $A$.
We choose $k\in\mathbb{N}$ so that $2/\sqrt{k}<\varepsilon$.
We will eventually find
a projection $q$ and a partial isometry $v\in A$ such that
$v^*v=q$, $vv^*=\alpha_1(q)$ and
$\lVert q\alpha^j_1(q)\rVert<c(\varepsilon/2,n,k)$ for all $j=1,2,\dots,nk$.
Then, by the above mentioned technique,
we will construct a projection $e$ satisfying
\[ \left\lVert(e+\alpha_1(e)+\dots+\alpha_1^{n-1}(e))
\left(1-\sum_{j=0}^{nk-1}\alpha_1^j(q)\right)\right\rVert<\varepsilon/2 \]
and
\[ \lVert e-\alpha_1^n(e)\rVert<\frac{2}{\sqrt{k}}<\varepsilon. \]
In this construction,
we can find $\varepsilon'>0$ and a finite subset $\mathcal{F}'$ of $A$
such that the following hold:
If the projection $q$ and the partial isometry $v$ satisfy
\[ \lVert[q,a]\rVert<\varepsilon', \ \lVert[v,a]\rVert<\varepsilon'
\quad\text{ for all }a\in\mathcal{F}' \]
and
\[ \lVert\alpha_i(q)-q\rVert<\varepsilon', \ \lVert\alpha_i(v)-v\rVert<\varepsilon'
\quad\text{ for each }i=2,3,\dots,N, \]
then the obtained projection $e$ satisfies conditions (2) and (3).
By applying Lemma \ref{appfixedisom} to
the action $\alpha'$ of $\mathbb{Z}^{N-1}$, $\varepsilon'>0$ and $\mathcal{F}'$,
we get $\varepsilon''>0$ and a finite subset $\mathcal{F}''$ of $A$.
We may assume that $\varepsilon''$ is smaller than
\[ \min\left\{\varepsilon',\frac{c(\varepsilon/2,n,k)}{3},\frac{\varepsilon}{4nk}\right\} \]
and that $\mathcal{F}''$ contains $\mathcal{F}'$.
By using the induction hypothesis for
the action $\alpha'$ of $\mathbb{Z}^{N-1}$, $n=1$,
$\varepsilon''>0$ and $\mathcal{F}''\cup\alpha^{-1}_1(\mathcal{F}'')$
(see Remark \ref{restriction}),
we have $m'\in\mathbb{N}^{N-1}$, $\delta>0$ and a finite subset $\mathcal{G}$.
We define $m\in\mathbb{N}^N$ by $m=(nk,m')$.
In order to show that these items meet the requirements,
let $p$ be a projection in $A$ such that
\[ \lVert p\alpha_g(p)\rVert<\delta
\quad\text{ for each }g\in\mathbb{Z}_m\setminus\{0\} \]
and
\[ \lVert[\alpha_g(p),a]\rVert<\delta
\quad\text{ for all }g\in\mathbb{Z}_m\text{ and }a\in\mathcal{G}. \]
Then there exists a projection $q$ in $A$
which satisfies the following.
\begin{itemize}
\item $\left\lVert q(1-\sum_g\alpha_g(p))\right\rVert<\varepsilon''$,
where the summation runs over all $g=(g_1,g_2,\dots,g_N)\in\mathbb{Z}_m$
with $g_1=0$.
\item $\lVert[q,a]\rVert<\varepsilon''$ and $\lVert[\alpha_1(q),a]\rVert<\varepsilon''$
for all $a\in\mathcal{F}''$.
\item $\lVert\alpha_i(q)-q\rVert<\varepsilon''$ for each $i=2,3,\dots,N$.
\end{itemize}
In addition, by taking $\delta$ sufficiently small,
we may assume that the first condition above implies
$\lVert q\alpha_1^j(q)\rVert<c(\varepsilon/2,n,k)$ for all $j=1,2,\dots,nk$.
From Lemma \ref{appfixedisom},
there exists a partial isometry $v$ such that
$v^*v=q$, $vv^*=\alpha_1(q)$ and
\[ \lVert[v,a]\rVert<\varepsilon'
\quad\text{ for all }a\in\mathcal{F}' \]
and
\[ \lVert\alpha_i(v)-v\rVert<\varepsilon'
\quad\text{ for each }i=2,3,\dots,N. \]
Then, by the choice of $\varepsilon'>0$ and $\mathcal{F}'$,
the desired projection $e$ is obtained.
\end{proof}
By using the lemma above, we can show the following.
\begin{lem}\label{cyclicproj}
Let $\alpha$ be an outer action of $\mathbb{Z}^N$ on $A$.
Suppose that
the action $\alpha'$ of $\mathbb{Z}^{N-1}$ generated
by $\alpha_{\xi_2},\alpha_{\xi_3},\dots,\alpha_{\xi_N}$ has
the Rohlin property.
Then, for any $n\in\mathbb{N}$,
there exist non-zero mutually orthogonal projections
$e_0,e_1,\dots,e_{n-1}$ in $A_\omega=A^\omega\cap A'$
such that
\[ \alpha_{\xi_i}(e_j)=e_j \]
for each $i=2,3,\dots,N$ and $j=0,1,\dots,n-1$, and
\[ \alpha_{\xi_1}(e_j)=e_{j+1} \]
for each $j=0,1,\dots,n-1$, where the addition is understood modulo $n$.
\end{lem}
\begin{proof}
It suffices to show the following.
For any $n\in\mathbb{N}$, $\varepsilon>0$ and a finite subset $\mathcal{F}$ of $A$,
there exist non-zero almost mutually orthogonal projections
$e_0,e_1,\dots,e_{n-1}$ in $A$ such that the following are satisfied.
\begin{itemize}
\item $\lVert[e_j,a]\rVert<\varepsilon$
for any $j=0,1,\dots,n-1$ and $a\in\mathcal{F}$.
\item $\lVert\alpha_{\xi_i}(e_j)-e_j\rVert<\varepsilon$
for each $i=2,3,\dots,N$ and $j=0,1,\dots,n-1$.
\item $\lVert\alpha_{\xi_1}(e_j)-e_{j+1}\rVert<\varepsilon$
for each $j=0,1,\dots,n-1$, where the addition is understood modulo $n$.
\end{itemize}
Lemma \ref{appcyclicproj} applies to
$n\in\mathbb{N}$, $\varepsilon>0$ and $\mathcal{F}\subset A$
and yields $m\in\mathbb{N}^N$, $\delta>0$ and
a finite subset $\mathcal{G}$ of $A$.
From Lemma \ref{scattered},
there exists a non-zero projection $p$ in $A$ such that
$\lVert p\alpha_g(p)\rVert<\delta$ for all $g\in\mathbb{Z}_m\setminus\{0\}$ and
$\lVert[\alpha_g(p),a]\rVert<\delta$
for all $g\in\mathbb{Z}_m$ and $a\in\mathcal{G}$.
Then, by Lemma \ref{appcyclicproj}, we can find desired projections.
\end{proof}
The following lemma is an easy exercise and we omit the proof.
\begin{lem}\label{matrix}
For any $M\in\mathbb{N}$ and $\varepsilon>0$,
there exists a natural number $L\in\mathbb{N}$
such that the following holds.
If $u,u'$ are two unitaries in the matrix algebra $M_{LM+1}(\mathbb{C})$
satisfying
\[ \operatorname{Sp}(u)=\{1\}
\cup\left\{\exp\frac{2\pi\sqrt{-1}j}{LM}\mid j=0,1,\dots,LM-1\right\} \]
and
\begin{align*}
\operatorname{Sp}(u')=&\left\{\exp\frac{2\pi\sqrt{-1}j}{(L-1)M}
\mid j=0,1,\dots,(L-1)M-1\right\} \\
&\cup\left\{\exp\frac{2\pi\sqrt{-1}j}{M+1}\mid j=0,1,\dots,M\right\}
\end{align*}
with multiplicity,
then there exists a unitary $w$ in $M_{LM+1}(\mathbb{C})$ such that
\[ \lVert wuw^*-u'\rVert<\varepsilon. \]
\end{lem}
Now we can prove the Rohlin type theorem
for $\mathbb{Z}^N$-actions on the Cuntz algebra $\mathcal{O}_2$.
\begin{thm}\label{Rohlintype}
Let $A$ be a unital $C^*$-algebra
which is isomorphic to $\mathcal{O}_2$ and
let $\alpha$ be an action of $\mathbb{Z}^N$ on $A$.
Then the following are equivalent.
\begin{enumerate}
\item $\alpha$ is an outer action.
\item $\alpha$ has the Rohlin property.
\end{enumerate}
\end{thm}
\begin{proof}
It is obvious that (2) implies (1).
The implication from (1) to (2) is shown by induction on $N$.
When $N=1$, the conclusion follows from \cite[Theorem 3.1]{K2}.
We assume that the assertion has been shown for $N-1$.
Let us consider the case of $\mathbb{Z}^N$-actions.
To simplify notation, we denote $\alpha_{\xi_i}$ by $\alpha_i$.
It suffices to show the following (see Remark \ref{Rem2ofN1}):
For any $M\in\mathbb{N}$ and $\varepsilon>0$, there exist projections
$p_0,p_1,\dots,p_{M-1}$ and
$q_0,q_1,\dots,q_M$ in $A_\omega$ such that the following hold.
\begin{itemize}
\item $\sum_{j=0}^{M-1}p_j+\sum_{j=0}^Mq_j=1$.
\item $\alpha_i(p_j)=p_j$ and $\alpha_i(q_j)=q_j$
for each $i=2,3,\dots,N$ and $j=0,1,\dots,M-1$.
\item $\lVert\alpha_1(p_j)-p_{j+1}\rVert<\varepsilon$
for each $j=0,1,\dots,M-1$,
where $p_M$ is understood as $p_0$.
\item $\lVert\alpha_1(q_j)-q_{j+1}\rVert<\varepsilon$
for each $j=0,1,\dots,M$,
where $q_{M+1}$ is understood as $q_0$.
\end{itemize}
Let $\alpha'$ be the action of $\mathbb{Z}^{N-1}$
generated by $\alpha_2,\alpha_3,\dots,\alpha_N$.
By the induction hypothesis, $\alpha'$ has the Rohlin property.
We are given $M\in\mathbb{N}$ and $\varepsilon>0$.
By applying Lemma \ref{matrix}, we obtain a natural number $L\in\mathbb{N}$.
Let $K$ be a very large natural number.
By Lemma \ref{cyclicproj},
we can find non-zero projections $e_0,e_1,\dots,e_{LM-1}$ in $A_\omega$
such that
\[ \alpha_i(e_j)=e_j \]
for each $i=2,3,\dots,N$ and $j=0,1,\dots,LM-1$, and
\[ \alpha_1(e_j)=e_{j+1} \]
for each $j=0,1,\dots,LM-1$, where the addition is understood modulo $LM$.
Set $e=\sum_{j=0}^{LM-1}e_j$.
If $e=1$, there is nothing to prove, so we may assume $e\neq1$.
Take $\varepsilon_0>0$ and a finite subset $\mathcal{F}_0$ of $A$ arbitrarily.
By using Lemma \ref{appcyclicproj}
for $\alpha'$, $n=1$, $\varepsilon_0$ and $\mathcal{F}_0$,
we get $m'\in\mathbb{N}^{N-1}$, $\delta>0$ and a finite subset $\mathcal{G}$.
Define $m\in\mathbb{N}^N$ by $m=(KLM,m')$.
By Lemma \ref{scattered},
we can find a non-zero projection $p$ in $e_0A_\omega e_0$
such that $p\alpha_g(p)=0$ for all $g\in\mathbb{Z}_m\setminus\{0\}$.
Then, by applying Lemma \ref{appcyclicproj}
to each coordinate of
a representing sequence of $p$ in $\ell^\infty(\mathbb{N},A)$,
we can construct a non-zero projection $f$ in $A^\omega$
satisfying the following.
\begin{itemize}
\item $\lVert f(1-\sum_g\alpha_g(p))\rVert<\varepsilon_0$,
where the summation runs over all $g=(g_1,g_2,\dots,g_N)\in\mathbb{Z}_m$
with $g_1=0$.
\item $\lVert[f,a]\rVert<\varepsilon_0$ for all $a\in\mathcal{F}_0$.
\item $\lVert\alpha_i(f)-f\rVert<\varepsilon_0$ for each $i=2,3,\dots,N$.
\end{itemize}
Notice that, from the first condition,
one has $\lVert f(1-e_0)\rVert<\varepsilon_0$
and $\lVert f\alpha_1^j(f)\rVert<2\varepsilon_0$ for all $j=1,2,\dots,KLM-1$.
Since $\varepsilon_0>0$ and $\mathcal{F}_0$ were arbitrary,
by the reindexation trick, we may assume that
there exists a non-zero projection $f\in A_\omega$ such that
$f\leq e_0$,
\[ \alpha_i(f)=f\quad\text{ for each }i=2,3,\dots,N \]
and
\[ f\alpha_1^j(f)=0\quad\text{ for each }j=1,2,\dots,KLM-1. \]
By applying Lemma \ref{fixedisom} to $\alpha'$, $1-e$ and $f$
(note that $\alpha'$ has the Rohlin property),
we obtain a partial isometry $v\in A_\omega$ such that
$v^*v=1-e$, $vv^*=f$ and $\alpha_i(v)=v$ for all $i=2,3,\dots,N$.
We let
\[ w=\frac{1}{\sqrt{K}}\sum_{k=0}^{K-1}\alpha_1^{kLM}(v). \]
Then, $w$ is a partial isometry in $A_\omega$
satisfying $w^*w=1-e$ and $ww^*\leq e_0$.
In addition, $\alpha_i(w)=w$ for all $i=2,3,\dots,N$
and $\lVert\alpha_1^{LM}(w)-w\rVert<2/\sqrt{K}$.
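The last estimate holds because applying $\alpha_1^{LM}$ shifts the sum defining $w$ by one term, so the difference telescopes:
\[ \alpha_1^{LM}(w)-w
=\frac{1}{\sqrt{K}}\left(\alpha_1^{KLM}(v)-v\right),
\qquad\text{whence}\quad
\lVert\alpha_1^{LM}(w)-w\rVert\leq\frac{2}{\sqrt{K}}. \]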
We consider a $C^*$-subalgebra $D$ of $A_\omega$,
defined by
\[ D=C^*(w,\alpha_1(w),\dots,\alpha_1^{LM-1}(w)). \]
It is easy to see that
$D$ is isomorphic to the matrix algebra $M_{LM+1}(\mathbb{C})$ and
its unit is
\[ 1_D=1-e+ww^*+\alpha_1(ww^*)+\dots+\alpha_1^{LM-1}(ww^*). \]
By choosing $K$ sufficiently large, we may assume that
there exists a unitary
\[ u=\begin{bmatrix}
1 & & & & \\
 & 0 & \cdots & 0 & 1 \\
 & 1 & \ddots & & 0 \\
 & & \ddots & \ddots & \vdots \\
 & & & 1 & 0
\end{bmatrix} \]
in $D$ such that
\[ \lVert\alpha_1(x)-uxu^*\rVert\leq\varepsilon\lVert x\rVert \]
for all $x\in D$.
By the choice of $L$,
we can find a unitary $u'$ in $D$ such that
\begin{align*}
\operatorname{Sp}(u')=&\left\{\exp\frac{2\pi\sqrt{-1}j}{(L-1)M}
\mid j=0,1,\dots,(L-1)M-1\right\} \\
&\cup\left\{\exp\frac{2\pi\sqrt{-1}j}{M+1}\mid j=0,1,\dots,M\right\}
\end{align*}
and $\lVert u-u'\rVert<\varepsilon$.
It follows that
there exist non-zero mutually orthogonal projections
$p_0,p_1,\dots,p_{M-1},q_0,q_1,\dots,q_M$ in $D$ satisfying the following.
\begin{itemize}
\item $\sum_{j=0}^{M-1}p_j+\sum_{j=0}^Mq_j=1_D$.
\item $\alpha_i(p_j)=p_j$ for $j=0,1,\dots,M-1$ and $\alpha_i(q_j)=q_j$ for $j=0,1,\dots,M$,
for each $i=2,3,\dots,N$.
\item $\lVert\alpha_1(p_j)-p_{j+1}\rVert<3\varepsilon$
for each $j=0,1,\dots,M-1$,
where $p_M$ is understood as $p_0$.
\item $\lVert\alpha_1(q_j)-q_{j+1}\rVert<3\varepsilon$
for each $j=0,1,\dots,M$,
where $q_{M+1}$ is understood as $q_0$.
\end{itemize}
Finally, we define projections $p'_j$ in $A_\omega$ by
\[ p'_j=p_j+\sum_{l=0}^{L-1}\left(e_{lM+j}-\alpha_1^{lM+j}(ww^*)\right). \]
Then, we can check
\[ \sum_{j=0}^{M-1}p'_j+\sum_{j=0}^Mq_j=1, \]
\[ \alpha_i(p'_j)=p'_j \]
for all $i=2,3,\dots,N$ and $j=0,1,\dots,M-1$ and
\[ \lVert\alpha_1(p'_j)-p'_{j+1}\rVert<3\varepsilon+\frac{4}{\sqrt{K}} \]
for each $j=0,1,\dots,M-1$, where $p'_M$ is understood as $p'_0$.
This completes the proof.
\end{proof}
\section{Classification}
In this section, we would like to prove our main result Theorem \ref{main}
by using the Evans-Kishimoto intertwining argument (\cite{EK}).
\begin{lem}\label{1-cocycle}
Let $A$ be a unital $C^*$-algebra which is isomorphic to $\mathcal{O}_2$
and let $\alpha$ and $\beta$ be actions of $\mathbb{Z}^N$ on $A$.
When $\alpha$ has the Rohlin property, we have the following.
\begin{enumerate}
\item There exists an $\alpha$-cocycle $\{u_n\}_n$ in $A^\omega$
such that $\beta_n(a)=\operatorname{Ad} u_n\circ\alpha_n(a)$
for all $n\in\mathbb{Z}^N$ and $a\in A$.
\item For any $\varepsilon>0$ and any finite subset $\mathcal{F}$ of $A$,
there exists an $\alpha$-cocycle $\{u_n\}_n$ in $A$
such that
\[ \lVert\beta_{\xi_i}(a)-\operatorname{Ad} u_{\xi_i}\circ\alpha_{\xi_i}(a)\rVert<\varepsilon \]
for each $i=1,2,\dots,N$ and $a\in\mathcal{F}$.
\end{enumerate}
\end{lem}
\begin{proof}
To simplify notation,
we denote $\alpha_{\xi_i}$, $\beta_{\xi_i}$ by $\alpha_i$, $\beta_i$.
(1).
We prove this by induction on $N$.
When $N=1$, the assertion is clearly true,
because $\alpha_1,\beta_1\in\operatorname{Aut}(A)$ are approximately unitarily equivalent
(\cite[Theorem 3.6]{R1}).
Suppose that the assertion for $N-1$ has been shown.
Let us consider the case of $\mathbb{Z}^N$-actions.
Let $\alpha'$ be the action of $\mathbb{Z}^{N-1}$ on $A$
generated by $\alpha_2,\alpha_3,\dots,\alpha_N$.
By applying the induction hypothesis to $\alpha'$ and
the $\mathbb{Z}^{N-1}$-action generated by $\beta_2,\beta_3,\dots,\beta_N$,
we can find unitaries $u_2,u_3,\dots,u_N$ in $A^\omega$ such that
\[ \beta_i(a)=\operatorname{Ad} u_i\circ\alpha_i(a) \]
and
\[ u_i\alpha_i(u_j)=u_j\alpha_j(u_i) \]
for all $i,j=2,3,\dots,N$ and $a\in A$.
Since the two automorphisms $\alpha_1$ and $\beta_1$ on $A$
are approximately unitarily equivalent,
there exists a unitary $u\in A^\omega$ such that
\[ \beta_1(a)=\operatorname{Ad} u\circ\alpha_1(a) \]
for all $a\in A$.
For $i=2,3,\dots,N$, we define $x_i\in A^\omega$ by
\[ x_i=u\alpha_1(u_i)(u_i\alpha_i(u))^*. \]
It is easy to see that $x_i$ belongs to $\mathcal{U}(A_\omega)$.
Let $\tilde\alpha$ be the perturbed action of $\alpha'$
by the $\alpha'$-cocycle $\{u_2,u_3,\dots,u_N\}$.
Then, we can verify, for every $i,j=2,3,\dots,N$,
\begin{align*}
x_i\tilde\alpha_i(x_j)
&=u\alpha_1(u_i)\alpha_i(u)^*u_i^*
\tilde\alpha_i(u\alpha_1(u_j)\alpha_j(u)^*u_j^*) \\
&=u\alpha_1(u_i)\alpha_i(\alpha_1(u_j)\alpha_j(u)^*u_j^*)u_i^* \\
&=u\alpha_1(u_i\alpha_i(u_j))\alpha_i(\alpha_j(u)^*)(u_i\alpha_i(u_j))^* \\
&=u\alpha_1(u_j\alpha_j(u_i))\alpha_j(\alpha_i(u)^*)(u_j\alpha_j(u_i))^* \\
&=x_j\tilde\alpha_j(x_i),
\end{align*}
and so the family of unitaries $\{x_2,x_3,\dots,x_N\}$ is
a $\tilde\alpha$-cocycle on $A_\omega$.
It follows from Theorem \ref{CVanish} that
there exists a unitary $v\in A_\omega$ such that
\[ x_i=v\tilde\alpha_i(v^*) \]
for all $i=2,3,\dots,N$.
Put $u_1=v^*u$.
It can be easily checked that
\[ \beta_1(a)=\operatorname{Ad} u_1\circ\alpha_1(a) \]
for all $a\in A$ and
\[ u_1\alpha_1(u_i)=u_i\alpha_i(u_1) \]
for each $i=2,3,\dots,N$.
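For instance, the second identity can be seen as follows: since $x_i=v\tilde\alpha_i(v^*)$ and $\tilde\alpha_i=\operatorname{Ad} u_i\circ\alpha_i$, one has $v^*x_i=\tilde\alpha_i(v^*)=u_i\alpha_i(v^*)u_i^*$, and hence
\begin{align*}
u_1\alpha_1(u_i)
&=v^*u\,\alpha_1(u_i)
=v^*x_i\,u_i\alpha_i(u) \\
&=u_i\alpha_i(v^*)u_i^*\,u_i\alpha_i(u)
=u_i\alpha_i(v^*u)
=u_i\alpha_i(u_1).
\end{align*}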
Therefore, the family of unitaries $\{u_1,u_2,\dots,u_N\}$
induces the desired $\alpha$-cocycle in $A^\omega$.
(2).
From (1),
there exists an $\alpha$-cocycle $\{u_n\}_n$ in $A^\omega$
such that
\[ \beta_n(a)=\operatorname{Ad} u_n\circ\alpha_n(a) \]
for all $n\in\mathbb{Z}^N$ and $a\in A$.
It follows from Theorem \ref{CVanish} that
there exists a unitary $v\in A^\omega$ such that
\[ u_n=v\alpha_n(v^*) \]
for every $n\in\mathbb{Z}^N$.
Hence, for any $\varepsilon>0$ and any finite subset $\mathcal{F}$ of $A$,
we can find a unitary $w$ in $A$ such that
\[ \lVert \beta_i(a)-\operatorname{Ad}(w\alpha_i(w^*))\circ\alpha_i(a)\rVert<\varepsilon \]
for every $i=1,2,\dots,N$ and $a\in\mathcal{F}$.
Therefore, $\{w\alpha_n(w^*)\}_n$ is the desired $\alpha$-cocycle.
\end{proof}
Now we are ready to give a proof for our main theorem.
We make use of the Evans-Kishimoto intertwining argument
(\cite[Theorem 4.1]{EK}).
See also \cite[Theorem 5]{N2} or \cite[Theorem 3.5]{I2}.
\begin{thm}\label{main}
Let $\alpha$ and $\beta$ be two outer actions of $\mathbb{Z}^N$
on the Cuntz algebra $\mathcal{O}_2$.
Then, they are cocycle conjugate to each other.
\end{thm}
\begin{proof}
We denote the Cuntz algebra $\mathcal{O}_2$ by $A$.
Set $S=\{\xi_1,\xi_2,\dots,\xi_N\}\subset\mathbb{Z}^N$.
Note that, by Theorem \ref{Rohlintype},
both $\alpha$ and $\beta$ have the Rohlin property.
We choose an increasing family of finite subsets
$\mathcal{F}_1,\mathcal{F}_2,\dots$ of $A$
whose union is dense in $A$.
Put $\alpha^0=\alpha$ and $\beta^1=\beta$.
We will construct
$\mathbb{Z}^N$-actions $\alpha^{2k}$ and $\beta^{2k+1}$ on $A$,
cocycles $\{u^k_n\}_{n\in\mathbb{Z}^N}$ in $A$
and unitaries $v_k$ in $A$ inductively.
Applying Corollary \ref{appCVanish}
to $\beta^1$, $2^{-1}>0$ and $\mathcal{F}_1$,
we obtain $\delta_1>0$ and a finite subset $\mathcal{G}_1$ of $A$.
We let
\[ \mathcal{G}_1'=\bigcup_{g\in S}\beta^1_{-g}(\mathcal{G}_1)
\cup\mathcal{F}_1. \]
By Lemma \ref{1-cocycle} (2),
there exists an $\alpha^0$-cocycle $\{u^0_n\}_n$ in $A$
such that
\begin{equation}
\lVert\beta^1_g(a)-\operatorname{Ad} u^0_g\circ\alpha^0_g(a)\rVert
<\frac{\delta_1}{2}
\label{first}
\end{equation}
for every $g\in S$ and $a\in\mathcal{G}_1'$.
Let $\alpha^2$ be the perturbed action
of $\alpha^0$ by the $\alpha^0$-cocycle $\{u^0_n\}_n$.
By using Corollary \ref{appCVanish}
for $\alpha^0$ and $u^0$,
we can find a unitary $v_0$ in $A$ such that
\[ \lVert u^0_g-v_0\alpha^0_g(v_0)^*\rVert<1 \]
for each $g\in S$.
Applying Corollary \ref{appCVanish}
to $\alpha^2$, $2^{-2}$ and
\[ \mathcal{F}_2'=\mathcal{F}_2\cup\operatorname{Ad} v_0(\mathcal{F}_2), \]
we obtain $\delta_2>0$ and a finite subset $\mathcal{G}_2$ of $A$.
We may assume that $\delta_2$ is less than $\delta_1$ and $2^{-2}$.
We let
\[ \mathcal{G}_2'=\bigcup_{g\in S}\alpha^2_{-g}(\mathcal{G}_2)
\cup\bigcup_{g\in S}\beta^1_{-g}(\mathcal{G}_1)
\cup\mathcal{F}_2. \]
By Lemma \ref{1-cocycle} (2),
there exists a $\beta^1$-cocycle $\{u^1_n\}_n$ in $A$
such that
\begin{equation}
\lVert\operatorname{Ad} u^1_g\circ\beta^1_g(a)-\alpha^2_g(a)\rVert
<\frac{\delta_2}{2}
\label{second}
\end{equation}
for every $g\in S$ and $a\in\mathcal{G}_2'$.
Let $\beta^3$ be the perturbed action
of $\beta^1$ by the $\beta^1$-cocycle $\{u^1_n\}_n$.
From \eqref{first} and \eqref{second}, one has
\[ \lVert[u^1_g,a]\rVert<\delta_1 \]
for every $g\in S$ and $a\in\mathcal{G}_1$.
By using Corollary \ref{appCVanish}
for $\beta^1$ and $u^1$,
we can find a unitary $v_1$ in $A$ such that
\[ \lVert u^1_g-v_1\beta^1_g(v_1)^*\rVert<2^{-1} \]
and
\[ \lVert[v_1,a]\rVert<2^{-1} \]
for each $g\in S$ and $a\in\mathcal{F}_1$.
Applying Corollary \ref{appCVanish}
to $\beta^3$, $2^{-3}>0$ and
\[ \mathcal{F}_3'=\mathcal{F}_3\cup\operatorname{Ad} v_1(\mathcal{F}_3), \]
we obtain $\delta_3>0$ and a finite subset $\mathcal{G}_3$ of $A$.
We may assume that $\delta_3$ is less than $\delta_2$ and $2^{-3}$.
We let
\[ \mathcal{G}_3'=\bigcup_{g\in S}\beta^3_{-g}(\mathcal{G}_3)
\cup\bigcup_{g\in S}\alpha^2_{-g}(\mathcal{G}_2)
\cup\mathcal{F}_3. \]
By Lemma \ref{1-cocycle} (2),
there exists an $\alpha^2$-cocycle $\{u^2_n\}_n$ in $A$
such that
\begin{equation}
\lVert\beta^3_g(a)-\operatorname{Ad} u^2_g\circ\alpha^2_g(a)\rVert
<\frac{\delta_3}{2}
\label{third}
\end{equation}
for every $g\in S$ and $a\in\mathcal{G}_3'$.
Let $\alpha^4$ be the perturbed action
of $\alpha^2$ by the $\alpha^2$-cocycle $\{u^2_n\}_n$.
From \eqref{second} and \eqref{third}, one has
\[ \lVert[u^2_g,a]\rVert<\delta_2 \]
for every $g\in S$ and $a\in\mathcal{G}_2$.
By using Corollary \ref{appCVanish}
for $\alpha^2$ and $u^2$,
we can find a unitary $v_2$ in $A$ such that
\[ \lVert u^2_g-v_2\alpha^2_g(v_2)^*\rVert<2^{-2} \]
and
\[ \lVert[v_2,a]\rVert<2^{-2} \]
for each $g\in S$ and $a\in\mathcal{F}_2'$.
Applying Corollary \ref{appCVanish}
to $\alpha^4$, $2^{-4}$ and
\[ \mathcal{F}_4'=\mathcal{F}_4
\cup\operatorname{Ad}(v_2v_0)(\mathcal{F}_4), \]
we obtain $\delta_4>0$ and a finite subset $\mathcal{G}_4$ of $A$.
We may assume that $\delta_4$ is less than $\delta_3$ and $2^{-4}$.
We let
\[ \mathcal{G}_4'=\bigcup_{g\in S}\alpha^4_{-g}(\mathcal{G}_4)
\cup\bigcup_{g\in S}\beta^3_{-g}(\mathcal{G}_3)
\cup\mathcal{F}_4. \]
By Lemma \ref{1-cocycle} (2),
there exists a $\beta^3$-cocycle $\{u^3_n\}_n$ in $A$
such that
\begin{equation}
\lVert\operatorname{Ad} u^3_g\circ\beta^3_g(a)-\alpha^4_g(a)\rVert
<\frac{\delta_4}{2}
\label{fourth}
\end{equation}
for every $g\in S$ and $a\in\mathcal{G}_4'$.
Let $\beta^5$ be the perturbed action
of $\beta^3$ by the $\beta^3$-cocycle $\{u^3_n\}_n$.
From \eqref{third} and \eqref{fourth}, one has
\[ \lVert[u^3_g,a]\rVert<\delta_3 \]
for every $g\in S$ and $a\in\mathcal{G}_3$.
By using Corollary \ref{appCVanish}
for $\beta^3$ and $u^3$,
we can find a unitary $v_3$ in $A$ such that
\[ \lVert u^3_g-v_3\beta^3_g(v_3)^*\rVert<2^{-3} \]
and
\[ \lVert[v_3,a]\rVert<2^{-3} \]
for each $g\in S$ and $a\in\mathcal{F}_3'$.
Repeating this argument,
we obtain a sequence of $\mathbb{Z}^N$-actions
$\alpha^0,\beta^1,\alpha^2,\beta^3,\dots$,
cocycles $\{u^0_n\}_n,\{u^1_n\}_n,\dots$
and unitaries $v_0,v_1,\dots$.
Define $\sigma_{2k}$ and $\sigma_{2k+1}$ by
\[ \sigma_{2k}=\operatorname{Ad}(v_{2k}v_{2k-2}\dots v_0) \]
and
\[ \sigma_{2k+1}=\operatorname{Ad}(v_{2k+1}v_{2k-1}\dots v_1). \]
Since we have
\[ \lVert[v_k,a]\rVert<2^{-k} \]
and
\[ \lVert[v_k,\sigma_{k-2}(a)]\rVert<2^{-k} \]
for any $a\in\mathcal{F}_k$,
we can conclude that
there exist automorphisms $\gamma_0$ and $\gamma_1$ such that
\[ \gamma_0=\lim_{k\to\infty}\sigma_{2k} \]
and
\[ \gamma_1=\lim_{k\to\infty}\sigma_{2k+1} \]
in the point-norm topology (see \cite[Lemma 3.4]{I2}).
Define $w^{2k}_g,w^{2k+1}_g\in\mathcal{U}(A)$ by
\[ w^{2k}_g=u^{2k}_g\alpha^{2k}_g(v_{2k})v_{2k}^* \]
and
\[ w^{2k+1}_g=u^{2k+1}_g\beta^{2k+1}_g(v_{2k+1})v_{2k+1}^* \]
for every $k=0,1,2,\dots$ and $g\in S$.
Then $\lVert w^k_g-1\rVert$ is less than $2^{-k}$.
Furthermore, we define $\widetilde{w}^k_g\in\mathcal{U}(A)$ by
\[ \widetilde{w}^0_g=w^0_g, \quad \widetilde{w}^1_g=w^1_g \]
and
\[ \widetilde{w}^k_g=w^k_gv_k\widetilde{w}^{k-2}_gv_k^* \]
inductively.
We would like to see that
$\{\widetilde{w}^{2k}_g\}_k$ converges to a unitary for each $g\in S$.
From
\[ \widetilde{w}^{2k}_g=w^{2k}_g\cdot\operatorname{Ad}(v_{2k})(w^{2k-2}_g)
\cdot\operatorname{Ad}(v_{2k}v_{2k-2})(w^{2k-4}_g)\cdot\dots
\cdot\operatorname{Ad}(v_{2k}v_{2k-2}\dots v_2)(w^0_g), \]
we get
\[ \sigma_{2k}^{-1}(\widetilde{w}^{2k}_g)
=\sigma_{2k-2}^{-1}(w^{2k-2}_g)
\cdot\sigma_{2k-4}^{-1}(w^{2k-4}_g)\cdot\dots
\cdot\sigma_0^{-1}(w^0_g). \]
It follows from $\lVert w^{2k}_g-1\rVert<4^{-k}$ that
the right hand side converges.
Hence $\widetilde{w}^{2k}_g$ converges to a unitary $W^0_g$ in $A$,
because $\sigma_{2k}$ converges to $\gamma_0$.
Likewise,
$\widetilde{w}^{2k+1}_g$ also converges to a unitary $W^1_g$ in $A$.
We can also check that
the unitaries $\{\widetilde{w}^{2k}_g\}_g$ are a cocycle
for the $\mathbb{Z}^N$-action $\sigma_{2k}\circ\alpha\circ\sigma_{2k}^{-1}$
and that
the unitaries $\{\widetilde{w}^{2k+1}_g\}_g$ are a cocycle
for the $\mathbb{Z}^N$-action $\sigma_{2k+1}\circ\beta\circ\sigma_{2k+1}^{-1}$.
In addition, we can verify
\[ \operatorname{Ad}(\widetilde{w}^{2k}_g)\circ
\sigma_{2k}\circ\alpha_g\circ\sigma_{2k}^{-1}=\alpha^{2k+2}_g \]
and
\[ \operatorname{Ad}(\widetilde{w}^{2k+1}_g)\circ
\sigma_{2k+1}\circ\beta_g\circ\sigma_{2k+1}^{-1}=\beta^{2k+3}_g. \]
Since
\[ \lVert\beta^{2k+3}_g(a)-\alpha^{2k+2}_g(a)\rVert<2^{-2k-3} \]
for all $a\in\mathcal{F}_{2k+2}$,
we obtain
\[ \operatorname{Ad} W^1_g\circ\gamma_1\circ\beta_g\circ\gamma_1^{-1}
=\operatorname{Ad} W^0_g\circ\gamma_0\circ\alpha_g\circ\gamma_0^{-1} \]
for every $g\in S$.
Furthermore, $\{W^0_g\}_g$ is a cocycle
for the $\mathbb{Z}^N$-action $\gamma_0\circ\alpha\circ\gamma_0^{-1}$ and
$\{W^1_g\}_g$ is a cocycle
for the $\mathbb{Z}^N$-action $\gamma_1\circ\beta\circ\gamma_1^{-1}$.
Therefore, we can conclude that
$\alpha$ and $\beta$ are cocycle conjugate to each other.
\end{proof}
\begin{rem}
From Theorem \ref{main} and Theorem \ref{CVanish},
one can actually show the following:
Let $\alpha$ and $\beta$ be outer actions of $\mathbb{Z}^N$ on $\mathcal{O}_2$.
For any $\varepsilon>0$,
there exist $\gamma\in\operatorname{Aut}(\mathcal{O}_2)$ and
an $\alpha$-cocycle $\{u_n\}_{n\in\mathbb{Z}^N}$ such that
\[ \operatorname{Ad} u_n\circ\alpha_n(a)=\gamma\circ\beta_n\circ\gamma^{-1}(a) \]
and
\[ \lVert u_{\xi_i}-1\rVert<\varepsilon \]
for any $n\in\mathbb{Z}^N$, $a\in A$ and $i=1,2,\dots,N$.
\end{rem}
\end{document} | math | 62,455 |
\begin{document}
\title{Quantitative estimates for stress concentration of the Stokes flow between adjacent circular cylinders\thanks{This work is supported by NRF grants No. 2017R1A4A1014735, 2019R1A2B5B01069967 and 2020R1C1C1A01010882.}}
\author{Habib Ammari\thanks{Department of Mathematics, ETH Z\"urich, R\"amistrasse 101, CH-8092 Z\"urich, Switzerland ([email protected]).} \and Hyeonbae Kang\thanks{Department of Mathematics and Institute of Mathematics, Inha University, Incheon 22212, S. Korea ([email protected], [email protected]). } \and Do Wan Kim\footnotemark[3] \and Sanghyeon Yu\thanks{Department of Mathematics, Korea University, Seoul 02841, S. Korea (sanghyeon\[email protected]).}}
\date{}
\maketitle
\begin{abstract}
When two inclusions with high contrast material properties are located close to each other in a homogeneous medium, stress may become arbitrarily large in the narrow region between them. In this paper, we investigate such stress concentration in the two-dimensional Stokes flow when the inclusions are the two-dimensional cross sections of circular cylinders of the same radius and the background velocity field is linear. We construct two vector-valued functions which completely capture the singular behavior of the stress and derive an asymptotic representation formula for the stress in terms of these functions as the distance between the two cylinders tends to zero. We then show, using the representation formula, that the stress always blows up by proving that either the pressure or the shear stress component of the stress tensor blows up. The blow-up rate is shown to be $\delta^{-1/2}$, where $\delta$ is the distance between the two cylinders. To the best of our knowledge, this work is the first to rigorously derive the asymptotic solution in the narrow region for the Stokes flow.
\end{abstract}
\noindent {\footnotesize {\bf AMS subject classifications.} 35J40, 74J70, 76D30}
\noindent {\footnotesize {\bf Key words.} stress concentration, blow-up, Stokes flow, Stokes system, singular functions, bi-polar coordinates}
\section{Introduction and statements of the main results}
When two close-to-touching inclusions with high contrast material properties are present, the physical fields such as the stress may become arbitrarily large in the narrow region between them. Such field blow-up occurs in electro-statics and elasto-statics, and quantitative understanding of such a phenomenon is important in relation with the light confinement in the electro-static case, and with materials failure analysis in the elasto-static case. Lately, significant progress has been made in understanding the field enhancement. In the electro-static case, it is proved that the electric field, which is the gradient of the solution to the conductivity equation, blows up in the narrow region between two perfect conductors (where the conductivity is infinite) at the rate of $\delta^{-1/2}$ \cite{AKL, Yun-SIAP-07} in two dimensions and of $|\delta \log \delta|^{-1}$ in three dimensions \cite{BLY-ARMA-09}, as the distance $\delta$ between the two inclusions tends to zero. The singular term of the stress concentration is also characterized in two dimensions \cite{ACKLY-ARMA-13}. This result has been extended to the elasticity in the context of the Lam\'e system of linear elasticity, showing that the blow-up rate of the stress in between two stiff inclusions (where the shear modulus is infinite) is $\delta^{-1/2}$ in two dimensions \cite{BLL-ARMA-15, KY19}. References cited above are far from being complete. In fact, there is a long list of recent important achievements in this direction of research, for which we refer to the references in \cite{KY19,Milton-book-2001}.
In this paper, we consider the stress concentration in the two-dimensional steady Stokes system when two adjacent circular cylinders are present. Its quantitative analysis is important in understanding hydrodynamic interactions in soft matter systems. This problem is particularly interesting in comparison to the case of linear elasticity. In the linear elasticity case, the divergence of the displacement vector field blows up in general as the distance between two inclusions tends to zero, as was proved in \cite{KY19}. However, the divergence of the velocity vector in Stokes flow is confined to be zero, namely, the flow is incompressible. Thus, it is not clear whether the stress blows up or not in the case of Stokes flow, and how large it is if it actually blows up. The stress in the Newtonian fluid including the Stokes flow consists of two components, the pressure and the shear gradient of the velocity field. We investigate the blow-up rate of each component when the distance between the two cylinders tends to zero.
More precisely, suppose that two circular cylinders, denoted by $D_1$ and $D_2$, of the same radius $R$ are immersed in Stokes flow and they are separated by a distance $\delta>0$. Since $D_1$ and $D_2$ are (rigid) cylinders, the boundary values of the steady flow on $\partial D_1$ and $\partial D_2$ are given as a linear combination of three vector fields representing rigid motions $\{\mbox{\boldmath $\Gy$}_j\}_{j=1}^3$, which are defined as
\begin{equation}\label{Bpsi}
\mbox{\boldmath $\Gy$}_1=\begin{bmatrix}
1 \\ 0
\end{bmatrix}, \quad
\mbox{\boldmath $\Gy$}_2=\begin{bmatrix}
0 \\ 1
\end{bmatrix}, \quad
\mbox{\boldmath $\Gy$}_3=\begin{bmatrix}
y \\ -x
\end{bmatrix}.
\end{equation}
Thus, we consider the following Stokes system in the exterior domain $D^e := \mathbb{R}^2\setminus \overline{D_1\cup D_2}$:
\begin{equation}\label{stokes}
\begin{cases}
\mu \Delta {\bf u} = \nabla p \quad &\mbox{in }D^e,
\\
\nabla \cdot {\bf u} = 0 \quad &\mbox{in }D^e,
\\[0.3em]
\displaystyle {\bf u} = \sum_{j=1}^3 c_{ij} \mbox{\boldmath $\Gy$}_j & \mbox{on } \partial D_i, \ i=1,2,
\\[0.3em]
({\bf u}-{\bf U}, p-P) \in \mathcal{M}_0,
\end{cases}
\end{equation}
where $\mu$ represents the constant viscosity of the fluid, $c_{ij}$ are constants to be determined from the equilibrium conditions (see \eqnref{equili} below), $({\bf U},P)$ is a given background solution to the homogeneous Stokes system in $\mathbb{R}^2$, namely,
\begin{equation}
\mu \Delta {\bf U} = \nabla P \quad \mbox{in } \mathbb{R}^2,
\end{equation}
and the class $\mathcal{M}_0$ is characterized by decay conditions at $\infty$. The precise definition of $\mathcal{M}_0$ is given later in Subsection \ref{subsec:diri}. Here we just mention that the problem \eqnref{stokes} admits a unique solution.
Throughout this paper, we assume that both the gradient $\nabla {\bf U}$ of the background velocity field and the pressure $P$ are constant functions. Since the pressure is determined up to a constant, we assume that $P=0$ and
\begin{equation}\label{fieldU}
{\bf U}(x,y) = \begin{bmatrix} a & c \\ d & -a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \quad (a^2 + (c+d)^2 \neq 0)
\end{equation}
for some constants $a$, $c$ and $d$. The fields in \eqnref{fieldU} are the only divergence-free fields in the case where $\nabla {\bf U}$ is constant. The condition $a^2 + (c+d)^2 \neq 0$ is imposed because otherwise
$$
{\bf U}(x,y) = c\begin{bmatrix} y \\ -x \end{bmatrix} ,
$$
and hence ${\bf U}$ with a constant $p$ is the solution to the problem \eqnref{stokes}, and its gradient does not blow up. If we write ${\bf U}$ as
\begin{equation}
{\bf U} = a {\bf U}_{\rm ex} + \frac{c+d}{2} {\bf U}_{\rm sh} + \frac{c-d}{2} \begin{bmatrix} y \\ -x \end{bmatrix} := a \begin{bmatrix} x \\ -y \end{bmatrix} + \frac{c+d}{2} \begin{bmatrix} y \\ x \end{bmatrix} + \frac{c-d}{2} \begin{bmatrix} y \\ -x \end{bmatrix},
\end{equation}
and denote respectively by $({\bf u}_{\rm ex}, p_{\rm ex})$ and $({\bf u}_{\rm sh}, p_{\rm sh})$ the solutions to \eqnref{stokes} when ${\bf U}={\bf U}_{\rm ex}$ and ${\bf U}={\bf U}_{\rm sh}$, then the solution $({\bf u}, p)$ is given by
\begin{equation}\label{Buexpre}
{\bf u} = a {\bf u}_{\rm ex} + \frac{c+d}{2} {\bf u}_{\rm sh} + \frac{c-d}{2} \begin{bmatrix} y \\ -x \end{bmatrix}
\end{equation}
and
\begin{equation}\label{pexpre}
p = a p_{\rm ex} + \frac{c+d}{2} p_{\rm sh} + \frac{c-d}{2} \mbox{const.}
\end{equation}
The singular behavior of the stress comes solely from those corresponding to $({\bf u}_{\rm ex},p_{\rm ex})$ and $({\bf u}_{\rm sh}, p_{\rm sh})$. The flows ${\bf U}_{\rm ex}$ and ${\bf U}_{\rm sh}$ are called the extensional flow and the shear flow, respectively, which explains the subscripts ex and sh in our notation.
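The decomposition of ${\bf U}$ above can be checked componentwise:
\[
a \begin{bmatrix} x \\ -y \end{bmatrix} + \frac{c+d}{2} \begin{bmatrix} y \\ x \end{bmatrix} + \frac{c-d}{2} \begin{bmatrix} y \\ -x \end{bmatrix}
= \begin{bmatrix} ax+cy \\ dx-ay \end{bmatrix}
= \begin{bmatrix} a & c \\ d & -a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix},
\]
in accordance with \eqnref{fieldU}.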
For the solution $({\bf u}, p)$ to the Stokes system, the strain tensor, denoted by $\mathcal{E}[{\bf u}]$, is given by
\begin{equation}
\mathcal{E}[{\bf u}] = \frac{1}{2}(\nabla {\bf u} + \nabla{\bf u}^T) ,
\end{equation}
where the superscript $T$ denotes the transpose, and the corresponding stress tensor is given by
\begin{equation}\label{Gsdef}
\sigma[{\bf u},p] = - p I + 2\mu \mathcal{E}[{\bf u}],
\end{equation}
where $I$ is the identity matrix. The constants $c_{ij}$ appearing in \eqnref{stokes} are determined by the boundary integral conditions
\begin{equation}\label{equili}
\int_{\partial D_i} \mbox{\boldmath $\Gy$}_j \cdot \sigma[{\bf u},p] \nu \, dl=0, \quad i=1,2, \ j=1,2,3.
\end{equation}
Here, $\nu$ denotes the unit normal on the boundary $\partial D_i$ and $dl$ is the line element.
Physically, these integral conditions imply that each rigid inclusion is in equilibrium, namely, the net translational and rotational stress on each boundary is zero (see, e.g., \cite{Berlyand-SIMA-06}).
The following is the main result of this paper. It shows that the stress always blows up. There and in what follows, $A \lesssim B$ means that there is a constant $C$ independent of $\delta$ such that $A \le C B$, and $A \approx B$ means that both $A \lesssim B$ and $B \lesssim A$ hold. The supremum norm on $D^e$ is denoted by $\| \cdot \|_\infty$.
\begin{theorem}\label{thm:main}
Let $D_1$ and $D_2$ be disks of the same radius and let $({\bf u},p)$ be the unique solution to \eqnref{stokes} when ${\bf U}$ is of the form \eqnref{fieldU} and $P=0$. Then,
\begin{equation}
\| \sigma[{\bf u},p] \|_\infty \approx \delta^{-1/2}.
\end{equation}
\end{theorem}
In fact, we can separate the problem into cases in which either the pressure or the shear stress blows up, as the following two theorems show; the main theorem is an immediate consequence of them. To present these results clearly, we assume for convenience that the centers of $D_1$ and $D_2$ are, respectively, given by
\begin{equation}\label{config}
c_1= (-R-\delta/2,0) \quad\mbox{and}\quad c_2=(R+\delta/2,0)
\end{equation}
after applying rotation and translation if necessary, where $R$ is the common radius of the disks and $\delta$ is the distance between them. To describe the two-dimensional Stokes flow, we construct a pair of stream functions using the bipolar coordinates, and then use the stream function formulation to construct special solutions $({\bf h}_j,p_j)$, $j=1,2$, to the Stokes system (see Section \ref{sec:singular} for precise definitions of $({\bf h}_j,p_j)$). It turns out that these special solutions, called singular functions, capture precisely the singular behavior of $\sigma[{\bf u}_{\rm ex},p_{\rm ex}]$ and $\sigma[{\bf u}_{\rm sh},p_{\rm sh}]$. As a result, we are able to characterize the blow-up of the pressure and the shear stress for the different configurations of the background velocity field ${\bf U}$: when ${\bf U}={\bf U}_{\rm ex}$, the pressure blows up at the rate of $\delta^{-1/2}$ while the shear stress remains bounded; when ${\bf U}={\bf U}_{\rm sh}$, it is the other way around.
The precise statements of the results are presented in the following theorems. Here and afterwards, $\Pi_\delta$ denotes the narrow region between the two cylinders defined by
\begin{equation}\label{narrow}
\Pi_\delta := ([-R-\delta/2, R+\delta/2] \times [-\sqrt{\delta}, \sqrt{\delta}]) \cap D^e.
\end{equation}
\begin{theorem}\label{thm:main1}
Suppose that $D_1$ and $D_2$ are arranged so that \eqnref{config} holds and that ${\bf U}={\bf U}_{\rm ex}$ and $P=0$. It holds that
\begin{equation}
\| \mathcal{E}[{\bf u}_{\rm ex}] \|_\infty \lesssim 1 \quad\mbox{and}\quad \| p_{\rm ex} \|_\infty \approx \delta^{-1/2}
\end{equation}
as $\delta \to 0$. In the narrow region $\Pi_\delta$,
\begin{equation}\label{stressst}
\sigma[{\bf u}_{\rm ex},p_{\rm ex}](x,y) = 2\mu \sqrt{R} \delta^{-1/2} \frac{(y^2+3R\delta)(y^2-R\delta)}{(y^2+R\delta)^2} I + O(1).
\end{equation}
\end{theorem}
\begin{theorem}\label{thm:main2}
Suppose that $D_1$ and $D_2$ are arranged so that \eqnref{config} holds and that ${\bf U}={\bf U}_{\rm sh}$ and $P=0$. It holds that
\begin{equation}
\|\mathcal{E}[{\bf u}_{\rm sh}]\|_\infty \approx \delta^{-1/2} \quad\mbox{and}\quad
\| p_{\rm sh} \|_\infty \lesssim 1
\end{equation}
as $\delta \rightarrow 0$. In the narrow region $\Pi_\delta$,
\begin{equation}\label{stresssh}
\sigma[{\bf u}_{\rm sh},p_{\rm sh}](x,y)= 2\mu \sqrt{\frac{R}{\delta}} \frac{R\delta}{y^2+R\delta} \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix} + O(1).
\end{equation}
\end{theorem}
Let $({\bf u},p)$ be the solution to \eqnref{stokes}. According to \eqnref{Buexpre} and \eqnref{pexpre},
\begin{equation}
\sigma[{\bf u},p] = a \sigma[{\bf u}_{\rm ex},p_{\rm ex}] + \frac{c+d}{2} \sigma[{\bf u}_{\rm sh},p_{\rm sh}] +O(1).
\end{equation}
Thus, Theorem \ref{thm:main} is an immediate consequence of \eqnref{stressst} and \eqnref{stresssh}.
What is actually shown in this paper is that if the background velocity field ${\bf U}$ is of the form \eqnref{fieldU} and $P=0$, then the solution $({\bf u}, p)$ is of the following form:
\begin{equation}\label{Bpcomp}
({\bf u},p) = a \frac{2}{\sqrt{R}} \delta^{3/2} ({\bf h}_1,p_1) + \frac{c+d}{2} \sqrt{R\delta} ({\bf h}_2, p_2) + ({\bf u}_0, p_0),
\end{equation}
where $({\bf u}_0, p_0)$ is a solution to the Stokes problem whose stress tensor is bounded. See the end of Section \ref{sec:1.2} for a brief proof of this fact. Since the singular functions $({\bf h}_j,p_j)$ ($j=1,2$) are given explicitly, the decomposition formula \eqnref{Bpcomp} may shed light on the challenging problem of computing the Stokes flow in the presence of closely located rigid cylinders.
Some historical remarks on the study of the Stokes flow in presence of two circular cylinders are in order. Jeffrey developed in \cite{Jeffrey-PTRS-1921} a separable solution method based on bipolar coordinates and then analyzed in \cite{Jeffrey-PRSA-1922} the flow generated by two rotating circular cylinders. Several other authors independently developed similar methods \cite{Wannier-QAM-1950, BE-PF-1965}. Jeffrey's method has been applied to various problems of Stokes flow \cite{Schubert-JFM-1967, Wakiya-JPSJ-1975, Wakiya-JPSJ-1975-II, JO-QJMAM-1981, Smith-M-1991, Watson-M-1995, Crowdy-IJNLM-2011, IC-JFM-2017}. In particular, Raasch derived the exact analytic solution for two circular cylinders under the equilibrium condition, which represents suspended particles in a viscous fluid \cite{Raasch-PhD-1961} (see also \cite{Raasch-ZAMM-1961, DRM-CJCE-1967}). However, due to the high complexity of the solution, it is difficult to analyze the singular behavior of the solution when the cylinders are close-to-touching. In this work, this difficulty is successfully overcome by introducing the singular functions.
Other than the method of bipolar coordinates, a formal asymptotic technique called the lubrication theory was also developed for the viscous flow in the narrow region \cite{FA-CES-1967,Graham-ASR-1981,NK-JFM-1984}. Berlyand {\it et al.} \cite{Berlyand-SIMA-06} constructed a refined lubrication approximation and then derived an asymptotic formula for the effective viscosity of concentrated suspensions. We mention that the approximation \eqnref{Bpcomp} differs from the lubrication one in two respects. Firstly, it provides a rigorous pointwise approximation of the solution in the narrow region. Secondly, its singular parts satisfy the Stokes system exactly, which is a key to the development of an accurate numerical scheme.
The organization of the paper is as follows. In the next section, we introduce the bipolar coordinates and review the stream function formulation for the Stokes system. In Section \ref{sec:singular}, we construct the singular functions, which are the building blocks for describing the singular behavior of the solution to the Stokes system \eqnref{stokes} as the distance between $D_1$ and $D_2$ tends to zero. Sections \ref{sec:1.1} and \ref{sec:1.2} are devoted to the proofs of Theorems \ref{thm:main1} and \ref{thm:main2}. In Sections \ref{sec:noslip1} and \ref{sec:noslip2}, we prove that the stress does not blow up if the no-slip boundary condition is prescribed on the boundaries of the circular inclusions. Auxiliary lemmas are proved in the appendices. The paper ends with a short discussion.
\section{Preliminaries}
\subsection{Bipolar coordinates}\label{subsec:bi}
Given a positive constant $a$, the bipolar coordinates $(\zeta,\theta)$ are defined by
\begin{equation}
x + i y = a \frac{e^{\zeta - i\theta}+1}{e^{\zeta - i \theta}-1},
\end{equation}
so that
\begin{equation}\label{eq:bipolar_x_y}
x = a \frac{\sinh \zeta}{\cosh \zeta - \cos\theta}, \quad y= a \frac{\sin \theta}{\cosh \zeta - \cos\theta},
\end{equation}
or equivalently,
\begin{equation}\label{cartbipolar}
\zeta = \log \frac{\sqrt{(x+a)^2+y^2}}{\sqrt{(x-a)^2+y^2}}, \quad \theta= \arg(x-a,y)-\arg(x+a,y).
\end{equation}
The coordinate curve $\{\zeta = c\}$ represents a circle of radius $a/|\sinh c|$ centered at the point $(a/\tanh c,0)$. Similarly, the curve $\{\theta = c\}$ represents a circle of radius $a/|\sin c|$ centered at $(a/\tan c,0)$.
The point at infinity corresponds to $(\zeta,\theta)=(0,0)$. See, e.g., \cite{Jeffrey-PTRS-1921} for bipolar coordinates in relation to the Stokes system.
The geometry of two disks (the cross sections of the two circular cylinders) can be described efficiently in terms of bipolar coordinates. Let
\begin{equation}\label{adef}
a := \sqrt{\delta \left( R + \frac{\delta}{4} \right)}.
\end{equation}
Then the boundary $\partial D_i$ of the cylinder $D_i$ can be parameterized by a $\zeta$-coordinate curve as follows:
\begin{equation}
\partial D_1 = \{ \zeta = - s\}, \quad \partial D_2 = \{ \zeta = + s\},
\end{equation}
where
\begin{equation}\label{s}
s = \sinh^{-1}(a/R).
\end{equation}
We note that
\begin{equation}\label{sGd}
s = \sqrt{\frac{\delta}{R}} + O(\delta^{3/2}) \quad\mbox{as } \delta \to 0.
\end{equation}
The exterior domain $D^e$ of $D_1 \cup D_2$ is characterized in bipolar coordinates $(\zeta, \theta)$ by the rectangle
\begin{equation}
D^e= \{ (\zeta, \theta) \in (-s,s) \times [0, 2\pi) \}.
\end{equation}
In particular, $\{ (\zeta, \pi) : |\zeta| <s \}$ is the line segment connecting the two points $(-\delta/2, 0)$ and $(\delta/2,0)$.
Let $\{{\bf e}_x,{\bf e}_y \}$ be the standard unit basis vectors in $\mathbb{R}^2$ and let $\{{\bf e}_{\zeta},{\bf e}_\theta \}$ be the unit basis vectors in the bipolar coordinates, namely,
$$
{\bf e}_\zeta = \frac{\nabla \zeta}{|\nabla \zeta|}, \quad
{\bf e}_\theta = \frac{\nabla \theta}{|\nabla \theta|}.
$$
Let $[{\bf e}_\zeta, {\bf e}_\theta]$ denote the $2\times 2$ matrix whose columns are ${\bf e}_\zeta$ and ${\bf e}_\theta$. Then one can easily see from \eqnref{cartbipolar} that
\begin{equation}\label{Xione}
\Xi := [{\bf e}_\zeta, {\bf e}_\theta] = \begin{bmatrix} \alpha(\zeta,\theta) & -\beta(\zeta,\theta) \\ - \beta(\zeta,\theta) & - \alpha(\zeta,\theta) \end{bmatrix},
\end{equation}
where
\begin{equation}\label{pqdef}
\alpha(\zeta,\theta) := \frac{1-\cosh\zeta\cos\theta }{\cosh\zeta - \cos\theta} , \quad \beta(\zeta,\theta) := \frac{\sinh\zeta \sin\theta}{\cosh\zeta-\cos\theta}.
\end{equation}
Since $\alpha^2+\beta^2=1$, we have
\begin{equation}\label{Xisquare}
\Xi^2=I.
\end{equation}
This means that $\Xi$ is the transition transformation in the sense that
\begin{equation}\label{exey}
[{\bf e}_x, {\bf e}_y] = \Xi [{\bf e}_\zeta, {\bf e}_\theta].
\end{equation}
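The coordinate change and the identity $\alpha^2+\beta^2=1$ can be verified numerically. The following sketch (with arbitrary sample values of $a$, $\zeta$, $\theta$; a sanity check only, not part of the argument) tests \eqref{eq:bipolar_x_y}, \eqref{cartbipolar}, and \eqnref{Xisquare}:

```python
import numpy as np

# Sample parameters: a is the bipolar focal distance, (zeta, theta) a sample point.
a, zeta, theta = 1.3, 0.7, 2.1

# Forward map, eq. (eq:bipolar_x_y).
den = np.cosh(zeta) - np.cos(theta)
x = a * np.sinh(zeta) / den
y = a * np.sin(theta) / den

# Inverse map, eq. (cartbipolar), with arg(u, v) = atan2(v, u).
zeta_back = np.log(np.hypot(x + a, y) / np.hypot(x - a, y))
theta_back = np.arctan2(y, x - a) - np.arctan2(y, x + a)

assert abs(zeta_back - zeta) < 1e-12
assert abs(theta_back % (2 * np.pi) - theta) < 1e-12

# alpha^2 + beta^2 = 1, hence Xi^2 = I (eqs. Xione, Xisquare).
alpha = (1 - np.cosh(zeta) * np.cos(theta)) / den
beta = np.sinh(zeta) * np.sin(theta) / den
assert abs(alpha**2 + beta**2 - 1) < 1e-12
Xi = np.array([[alpha, -beta], [-beta, -alpha]])
assert np.allclose(Xi @ Xi, np.eye(2))
```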
Define the scaling function
\begin{equation}\label{h_def}
h(\zeta,\theta) : = \frac{\cosh\zeta-\cos\theta}{a}.
\end{equation}
Then, for any scalar function $f$, its gradient $\nabla f$ can be expressed as
\begin{equation}\label{gradrel}
\nabla f = h(\zeta,\theta) [\partial_\zeta f \,{\bf e}_\zeta + \partial_\theta f \,{\bf e}_\theta]
\end{equation}
(see, e.g., \cite{Smythe-1968}). Here and throughout this paper, $\partial_\zeta$ and $\partial_\theta$ denote the partial derivatives with respect to the $\zeta$ and $\theta$ variables, respectively. Moreover, the line element, denoted by $dl$, on $\partial D_2$ is given by
\begin{equation}\label{lineele}
dl = h(s,\theta)^{-1} d\theta.
\end{equation}
One can easily check that, for $i=1,2$,
\begin{equation}\label{eq:nor_deri_bipolar}
\partial_\nu f\big|_{\partial D_i} = (-1)^{i+1} h(\zeta,\theta) \partial_\zeta f\big|_{\zeta=(-1)^i s} \ ,
\end{equation}
and
\begin{equation}\label{eq:tan_deri_bipolar}
\partial_T f\big|_{\partial D_i} = (-1)^i h(\zeta,\theta) \partial_\theta f\big|_{\zeta=(-1)^i s} \ ,
\end{equation}
where $\partial_\nu$ and $\partial_T$ denote the normal and tangential derivatives, respectively.
Using \eqref{eq:bipolar_x_y} and \eqnref{adef}, one can see that
$$
\frac{\cos \theta}{\cosh\zeta - \cos \theta} = \frac{1}{2a^2}(x^2+y^2) -\frac{1}{2} = \frac{1}{2R\delta}(x^2+y^2) -\frac{1}{2} + O(\delta).
$$
If $(x,y)$ lies in the narrow region $\Pi_\delta$ defined in \eqnref{narrow}, then $|x| \lesssim \delta$, and hence
$$
\frac{1}{2R\delta}(x^2+y^2) -\frac{1}{2} = \frac{y^2}{2R\delta} -\frac{1}{2} +O(\delta).
$$
Moreover, if $(\zeta,\theta)$ lies in $\Pi_\delta$, then there is a positive constant $C< \pi$ such that $|\theta-\pi| < C$. Since $|\zeta| < s \approx \sqrt{\delta}$, we have
$$
\cosh\zeta - \cos \theta = 1 - \cos \theta + O(\zeta^2)= 1 - \cos \theta + O(\delta).
$$
Thus we have
$$
\frac{\cos \theta}{1 - \cos \theta} = \frac{y^2}{2R\delta} -\frac{1}{2} + O(\delta),
$$
or equivalently,
\begin{equation}\label{cosnarrow}
\cos\theta= \frac{y^2-R\delta}{y^2+R\delta} + O(\delta)
\end{equation}
in the region $\Pi_\delta$.
One can also easily see from \eqnref{pqdef} that in $\Pi_\delta$
$$
\alpha(\zeta,\theta) = 1+ O(\delta), \quad \beta(\zeta,\theta)=O(\sqrt{\delta}),
$$
and hence
\begin{equation}\label{Xinarrow}
\Xi =\begin{bmatrix} 1 & 0 \\ 0 & - 1 \end{bmatrix} + O(\sqrt{\delta}).
\end{equation}
Using \eqref{eq:bipolar_x_y}, one can see
$$
|{\bf x}|^2 = x^2+y^2 = a^2\,\frac{\cosh\zeta+\cos\theta}{\cosh\zeta-\cos\theta}.
$$
Since the following relation holds for large enough $|{\bf x}|$ (or small enough $\zeta$ and $\theta$)
$$
|{\bf x}|^{-2}=\frac{1}{a^2}\,\frac{\cosh\zeta-\cos\theta}{\cosh\zeta+\cos\theta} = \frac{1}{a^2}\,\frac{\frac{\zeta^2}{2} + \frac{\theta^2}{2}+O(\zeta^4+\theta^4)}{2+O(\zeta^2+\theta^2)},
$$
we obtain
\begin{equation}\label{eq:largex_smallGzGt}
\frac{1}{8a^2}({\zeta^2+\theta^2}) \leq |{\bf x}|^{-2} \leq \frac{1}{2a^2}(\zeta^2+\theta^2).
\end{equation}
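A quick numerical check (sample values only, not part of the argument) of the identity $|{\bf x}|^2=a^2(\cosh\zeta+\cos\theta)/(\cosh\zeta-\cos\theta)$ and of the two-sided bound for small $(\zeta,\theta)$:

```python
import numpy as np

a = 0.9  # sample focal distance
for zeta, theta in [(0.05, 0.04), (0.01, -0.02), (-0.03, 0.01)]:
    den = np.cosh(zeta) - np.cos(theta)
    x = a * np.sinh(zeta) / den
    y = a * np.sin(theta) / den
    r2 = x**2 + y**2
    # |x|^2 = a^2 (cosh zeta + cos theta)/(cosh zeta - cos theta):
    assert np.isclose(r2, a**2 * (np.cosh(zeta) + np.cos(theta)) / den)
    # Two-sided bound for small (zeta, theta):
    assert (zeta**2 + theta**2) / (8 * a**2) <= 1 / r2 <= (zeta**2 + theta**2) / (2 * a**2)
```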
\subsection{The stream function}
Here we review the stream function formulation for two-dimensional incompressible flow and collect some useful formulas.
It is well known that any solution $({\bf u},p)$ to the Stokes system, $\mu \Delta {\bf u} = \nabla p$ and $\nabla \cdot {\bf u} = 0$, can be written using a scalar function $\Psi$ satisfying the biharmonic equation $\Delta^2 \Psi = 0$. The function $\Psi$ is called the stream function. Once the function $\Psi$ is known, the velocity field ${\bf u}=(u_x,u_y)^T$ can be determined from the relations
\begin{equation}\label{uxPsi}
u_x = \partial_y \Psi, \quad u_y = -\partial_x \Psi,
\end{equation}
and the pressure $p$ is a harmonic conjugate of $\mu \Delta \Psi$ (see, e.g., \cite{Batchelor-1967}).
Let us write the stream function formulation in terms of bipolar coordinates. It is also known (see, e.g., \cite{Wakiya-JPSJ-1975, Wakiya-JPSJ-1975-II}) that the Laplacian in Cartesian coordinates is related to bipolar coordinates via
\begin{equation}\label{eq:Laplacian_Psi}
\Delta_{x,y} \Psi = \frac{1}{a}\left( (\cosh\zeta-\cos\theta) \Delta_{\zeta,\theta} + (\cosh\zeta + \cos\theta) - 2(\sinh\zeta \partial_\zeta + \sin\theta \partial_\theta) \right)(h\Psi),
\end{equation}
where $h$ is the function defined in \eqnref{h_def}. Using this formula, the biharmonic equation $\Delta^2\Psi=0$ can be rewritten as
\begin{equation}\label{eq:biharmonic_bipolar}
\left( \partial_\zeta^4 + 2 \partial_\zeta^2 \partial_\theta^2 + \partial_\theta^4 -2 \partial_\zeta^2 + 2 \partial_\theta^2 + 1 \right) (h\Psi) =0,
\end{equation}
and the general solution to the above equation takes the following form:
\begin{align}
& (h\Psi)(\zeta,\theta) \nonumber \\
& = K(\cosh\zeta -\cos\theta)\ln (2\cosh \zeta - 2\cos \theta)
+ a_0 \cosh \zeta + b_0 \zeta \cosh \zeta + c_0 \sinh \zeta + d_0 \zeta \sinh \zeta \nonumber\\
& + (a_1 \cosh 2\zeta + b_1 + c_1 \sinh 2\zeta + d_1 \zeta)\cos\theta \nonumber
+ (\widetilde{a}_1 \cosh 2\zeta + \widetilde{b}_1 + \widetilde{c}_1 \sinh 2\zeta + \widetilde{d}_1 \zeta)\sin\theta \nonumber
\\
&
+\sum_{n=2}^\infty \big(a_n \cosh (n+1) \zeta + b_n \cosh (n-1)\zeta + c_n \sinh (n+1)\zeta+ d_n \sinh(n-1)\zeta \big)\cos n\theta\nonumber
\\
&
+\sum_{n=2}^\infty \big(\widetilde{a}_n \cosh (n+1) \zeta + \widetilde{b}_n \cosh (n-1)\zeta +\widetilde{c}_n \sinh (n+1)\zeta+ \widetilde{d}_n \sinh(n-1)\zeta \big)\sin n\theta.
\label{eq:general_biharmonic_solution}
\end{align}
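One can verify symbolically that representative terms of \eqref{eq:general_biharmonic_solution} are annihilated by the operator in \eqref{eq:biharmonic_bipolar}. A short check (assuming SymPy is available; the evaluation point is an arbitrary rational sample):

```python
import sympy as sp

z, t = sp.symbols('zeta theta')

def jeffery(f):
    """The operator from eq. (biharmonic_bipolar) applied to f = h*Psi."""
    return (sp.diff(f, z, 4) + 2 * sp.diff(f, z, 2, t, 2) + sp.diff(f, t, 4)
            - 2 * sp.diff(f, z, 2) + 2 * sp.diff(f, t, 2) + f)

# A few representative terms of the general solution:
terms = [
    (sp.cosh(z) - sp.cos(t)) * sp.log(2 * sp.cosh(z) - 2 * sp.cos(t)),  # K-term
    z * sp.sinh(z),                   # d_0 term
    sp.sinh(2 * z) * sp.cos(t),       # c_1 term
    z * sp.sin(t),                    # tilde-d_1 term
    sp.cosh(4 * z) * sp.cos(3 * t),   # a_3 term: cosh((n+1)zeta) cos(n theta)
]
pt = {z: sp.Rational(3, 10), t: sp.Rational(11, 10)}
for f in terms:
    # Each term is an exact solution; numerically the residual is roundoff-small.
    assert abs(jeffery(f).subs(pt).evalf(30)) < 1e-20
```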
Using \eqnref{gradrel} and \eqnref{uxPsi}, one can see that the components of the velocity ${\bf u} = u_\zeta {\bf e}_\zeta + u_\theta {\bf e}_\theta$ are given as follows:
\begin{align}
u_\zeta &= - h \partial_\theta \Psi =\left(- \partial_\theta + \frac{\sin\theta}{\cosh\zeta-\cos\theta}\right)(h\Psi), \label{eq_velo1} \\
u_\theta &= +h \partial_\zeta \Psi = \left( \partial_\zeta - \frac{\sinh\zeta}{\cosh\zeta-\cos\theta}\right)(h\Psi), \label{eq_velo2}
\end{align}
and the pressure $p$ satisfies the relations
\begin{equation}\label{eq:pressure_bipolar}
\partial_\zeta p = - \mu \partial_\theta \Delta \Psi, \quad
\partial_\theta p = \mu \partial_\zeta \Delta \Psi.
\end{equation}
The entries of the strain tensor $\mathcal{E}[{\bf u}]$ when represented in terms of the basis $\{{\bf e}_\zeta, {\bf e}_\theta\}$ are given by
\begin{align}
\mathcal{E}_{\zeta\zeta}&= - h \partial_\zeta \left( h \partial_\theta \Psi \right) - h \partial_\zeta \Psi \partial_\theta h,
\label{eq:strain_bipolar1}
\\
\mathcal{E}_{\theta\theta}&= + h \partial_\theta \left( h \partial_\zeta \Psi \right) + h \partial_\theta \Psi \partial_\zeta h,
\label{eq:strain_bipolar2}
\\
\mathcal{E}_{\zeta\theta}&= \frac{1}{2} \left( \partial_\zeta \left( {h^2} \partial_\zeta \Psi \right) - \partial_\theta \left({h^2} \partial_\theta \Psi \right)\right).
\label{eq:strain_bipolar3}
\end{align}
Therefore, the following relation holds:
\begin{equation}\label{Ecalrel}
\mathcal{E}[{\bf u}]= \Xi \begin{bmatrix} \mathcal{E}_{\zeta\zeta} & \mathcal{E}_{\zeta\theta} \\ \mathcal{E}_{\zeta\theta} & \mathcal{E}_{\theta\theta} \end{bmatrix} \Xi,
\end{equation}
where $\Xi$ is the matrix given in \eqnref{Xione}. The entries of the stress tensor in bipolar coordinates are given by
\begin{equation}\label{eq:sigma_formula_bipolar}
\sigma _{\zeta\zeta} = - p + 2\mu \mathcal{E}_{\zeta\zeta},
\quad
\sigma_{\theta\theta} = - p + 2\mu \mathcal{E}_{\theta\theta},
\quad
\sigma_{\zeta\theta} = 2\mu \mathcal{E}_{\zeta\theta}.
Similarly, we have the following relation for the stress tensor:
\begin{equation}\label{Gsrel}
\sigma[{\bf u},p]= \Xi \begin{bmatrix} \sigma_{\zeta\zeta} & \sigma_{\zeta\theta} \\ \sigma_{\zeta\theta} & \sigma_{\theta\theta} \end{bmatrix} \Xi.
\end{equation}
Since each component of $\Xi$ is bounded, it follows from \eqnref{Xisquare}, \eqnref{Ecalrel} and \eqnref{Gsrel} that
\begin{equation}\label{Ecalnorm}
\| \mathcal{E}[{\bf u}] \|_{L^\infty(K)} \approx \| \mathcal{E}_{\zeta\zeta} \|_{L^\infty(K)} + \| \mathcal{E}_{\theta\theta} \|_{L^\infty(K)} + \| \mathcal{E}_{\zeta\theta} \|_{L^\infty(K)},
\end{equation}
and
\begin{equation}\label{Gsnorm}
\|\sigma[{\bf u},p] \|_{L^\infty(K)} \approx \| \sigma_{\zeta\zeta} \|_{L^\infty(K)} + \| \sigma_{\theta\theta} \|_{L^\infty(K)} + \| \sigma_{\zeta\theta} \|_{L^\infty(K)}
\end{equation}
for any subset $K$ of $D^e$.
Using integration by parts on the exterior domain $D^e$, we have, for any solutions $({\bf u},p)$ and $({\bf v},q)$ to the Stokes system such that ${\bf u}({\bf x}),{\bf v}({\bf x})\rightarrow 0$ as $|{\bf x}|\rightarrow\infty$, that
\begin{align}
\int_{ \partial D^e} {\bf u}\cdot\sigma[{\bf v},q]\nu &= -\int_{ D^e} \mathcal{E}[{\bf u}]:\sigma[{\bf v},q]\nonumber
\\
&=\int_{ D^e} (\nabla \cdot {\bf u})q -2\mu\int_{ D^e} \mathcal{E}[{\bf u}]:\mathcal{E}[{\bf v}]\nonumber
\\
&= -2\mu\int_{ D^e} \mathcal{E}[{\bf u}]:\mathcal{E}[{\bf v}].\langlebel{eq:int_parts_formula}
\end{align}
This implies in particular that the following Green's theorem holds:
\begin{equation}\label{eq:int_parts_formula2}
\int_{ \partial D^e} {\bf u}\cdot\sigma[{\bf v},q]\nu = \int_{ \partial D^e} {\bf v}\cdot\sigma[{\bf u},p]\nu.
\end{equation}
\subsection{An exterior Dirichlet problem}\label{subsec:diri}
Let ${\bf \GG}({\bf x})=(\Gamma_{ij}({\bf x}))_{i,j=1,2}$ be given by
$$
\Gamma_{ij}({\bf x}) = \frac{1}{4\pi\mu}\Big(\delta_{ij}\log|{\bf x}| - \frac{x_i x_j}{|{\bf x}|^2}\Big), \quad {\bf x}\in\mathbb{R}^2\setminus\{(0,0)\},
$$
and define ${\bf p}=(p_j)_{j=1,2}$ by
$$
{\bf p} = -\frac{1}{2\pi} \frac{{\bf x}}{|{\bf x}|^2}, \quad {\bf x}\in\mathbb{R}^2\setminus\{(0,0)\}.
$$
Then, $({\bf \GG},{\bf p})$ is the fundamental solution to the Stokes system, namely,
$$
\mu \Delta {\bf \GG} - \nabla {\bf p} = \delta({\bf x})\mathbf{I}.
$$
Let $\Gamma_\Delta$ be the fundamental solution to the Laplacian given by
$$
\Gamma_\Delta ({\bf x}) = \frac{1}{2\pi}\log|{\bf x}|.
$$
$$
The following existence and uniqueness result for the exterior Dirichlet problem is proved in \cite[Theorem 9.15]{Mitrea-book-2012}.
\begin{theorem}\label{thm:ext_diri}
Assume that $\Omega$ is a bounded Lipschitz domain. Then the exterior Dirichlet problem
\begin{equation}
\begin{cases}
\mu \Delta {\bf u} = \nabla p &\quad \mathrm{in } \ \mathbb{R}^2\setminus \overline{\Omega},
\\
\nabla \cdot {\bf u} = 0 &\quad \mathrm{in }\ \mathbb{R}^2\setminus \overline{\Omega},
\\[0.3em]
{\bf u} = {\bf g} &\quad \mathrm{on }\ {\partial \Omega},
\end{cases}
\end{equation}
with the decaying conditions
\begin{align*}
\begin{cases}
{\bf u}({\bf x}) = \mathbf{\Gamma}({\bf x}){\bf A} +{\bf C}+O(|{\bf x}|^{-1}), \\
\partial_j {\bf u}({\bf x}) = \partial_j \mathbf{\Gamma}({\bf x}){\bf A} +O(|{\bf x}|^{-2}), \\
p({\bf x}) = \nabla \Gamma_\Delta \cdot {\bf A} + O(|{\bf x}|^{-2})
\end{cases}
\end{align*}
as $|{\bf x}|\rightarrow\infty$ for some constant ${\bf C}\in\mathbb{R}^2$, has a solution, which is unique modulo adding functions to the
pressure term which are locally constant in $\mathbb{R}^2\setminus\overline{\Omega}$.
Here, ${\bf A} \in \mathbb{R}^2$ is an a priori given constant.
\end{theorem}
We shall consider the exterior Dirichlet problem with ${\bf A}=0$.
Let $\mathcal{M}$ be the set of all pairs of functions $({\bf u},p)$ satisfying
\begin{equation}\label{Mcal}
\begin{cases}
{\bf u}({\bf x}) = {\bf C}+ O(|{\bf x}|^{-1}),\\
\nabla {\bf u}({\bf x}) = O(|{\bf x}|^{-2}), \\
p({\bf x}) = O(|{\bf x}|^{-2})
\end{cases}
\end{equation}
as $|{\bf x}|\rightarrow\infty$ for some constant ${\bf C}\in\mathbb{R}^2$. We denote by $\mathcal{M}_0$ the set of all pairs of functions $({\bf u},p)$ satisfying the decay conditions \eqref{Mcal} with ${\bf C}=0$.
\section{The singular functions for the Stokes system}\label{sec:singular}
In what follows, we construct the singular functions $({\bf h}_j,p_j)$, $j=1,2$, where for each $j$ the pair $({\bf h}_j,p_j)$ is the unique solution to the following problem:
\begin{equation}\label{Bhj}
\begin{cases}
\mu \Delta {\bf h}_j = \nabla p_j &\quad \mbox{in }D^e,
\\
\nabla \cdot {\bf h}_j = 0 &\quad \mbox{in } D^e,
\\[0.3em]
{\bf h}_j = \frac{(-1)^i}{2}\mbox{\boldmath $\Gy$}_j&\quad \mbox{on } \partial D_i, \ i=1,2,
\\[0.3em]
({\bf h}_j,p_j) \in \mathcal{M}.
\end{cases}
\end{equation}
We then provide quantitative estimates of the blow-up of these functions in the subsequent propositions. We call the solutions $({\bf h}_j,p_j)$ the singular functions since they are the building blocks in describing the singular behavior, i.e., the stress tensor blow-up, of the solution to \eqnref{stokes}. In fact, we will see that the solution to \eqnref{stokes} can be expressed as a linear combination of singular functions (modulo a regular function) and the nature of the stress tensor blow-up is characterized by $({\bf h}_1,p_1)$ and $({\bf h}_2,p_2)$.
\begin{prop}\label{lem:hone}
Define two constants $A_1$ and $B_1$ by
\begin{equation}\label{AB}
A_1 := \frac{1}{2s - \tanh 2s}, \quad
B_1 := -\frac{1}{2\cosh 2s}A_1.
\end{equation}
\begin{itemize}
\item[{\rm (i)}] The stream function $\Psi_1$ associated with the singular functions $({\bf h}_1,p_1)$ is given by
\begin{equation}\label{eq:stream_singular1}
\Psi_1(\zeta,\theta) = \frac{1}{h(\zeta,\theta)}( A_1 \zeta + B_1 \sinh 2\zeta) \sin\theta.
\end{equation}
\item[{\rm (ii)}] The components of the velocity ${\bf h}_1 = h_{1\zeta}{\bf e}_\zeta + h_{1\theta} {\bf e}_\theta$ are given by
\begin{align}
h_{1\zeta} &= (A_1\zeta + B_1\sinh 2\zeta) \frac{1-\cosh\zeta\cos\theta}{\cosh\zeta-\cos\theta}, \label{honeone}
\\
h_{1\theta} &= \sin\theta \left( A_1+2B_1 \cosh 2\zeta - \frac{\sinh\zeta (A_1\zeta + B_1\sinh 2\zeta)}{\cosh\zeta-\cos\theta} \right). \label{honetwo}
\end{align}
\item[{\rm (iii)}]
The pressure $p_1$ is given by
\begin{equation}\label{eq:pressure1}
p_1 = \frac{2\mu}{a}( (A_1-2B_1)\cosh\zeta \cos\theta + B_1 \cosh 2\zeta \cos 2\theta) - \frac{2\mu}{a}(A_1-B_1).
\end{equation}
\end{itemize}
\end{prop}
\proof
The formulas \eqnref{eq:stream_singular1}-\eqnref{honetwo} are derived in the following way. We use the expansion \eqref{eq:general_biharmonic_solution} for the general solution to the Stokes system, and then determine its unknown constant coefficients by matching the boundary conditions on $\partial D^e$, given by $\{\zeta=\pm s\}$, and using the formulas \eqref{exey}, \eqref{eq_velo1}, and \eqref{eq_velo2}. Let us show that the boundary conditions are fulfilled. If $\zeta = \pm s $, we have
\begin{align*}
h_{1\zeta} |_{\zeta=\pm s} &= \pm \frac{1-\cosh s\cos\theta }{2(\cosh s - \cos\theta)}= \pm \frac{1}{2}\alpha(s,\theta), \\
h_{1\theta} |_{\zeta=\pm s} &= \mp \frac{ \sinh s \sin\theta}{2(\cosh s -\cos\theta)} = \mp \frac{1}{2}\beta(s,\theta) .
\end{align*}
One can see from the relation \eqref{exey} that the boundary conditions on $\partial D_1\cup \partial D_2$ are satisfied.
The formula \eqnref{eq:pressure1} follows from \eqref{eq:pressure_bipolar} and \eqnref{eq:stream_singular1}. In fact, applying \eqref{eq:Laplacian_Psi} to $\Psi_1$ given in \eqnref{eq:stream_singular1}, we see that
$$
\mu\Delta \Psi_1 = \frac{-2\mu}{a}( (A_1-2B_1)\sinh\zeta \sin\theta + B_1 \sinh 2\zeta \sin 2\theta).
$$
The harmonic conjugate of this function vanishing at $(\zeta,\theta)=(0,0)$ is nothing but the one given in \eqnref{eq:pressure1}.
We now prove that $({\bf h}_1, p_1)$ belongs to $\mathcal{M}$. We first prove that ${\bf h}_1({\bf x}) = O(|{\bf x}|^{-1})$ as $|{\bf x}|\rightarrow\infty$, which amounts to proving
\begin{equation}\label{eq:h1_far_claim}
{\bf h}_1(\zeta,\theta) = O(|\zeta|+|\theta|), \quad (\zeta,\theta)\rightarrow(0,0),
\end{equation}
thanks to \eqref{eq:largex_smallGzGt}.
We have from \eqref{honeone} and \eqref{honetwo} that
\begin{align}
|h_{1\zeta}|&\leq C (|\zeta| +|\zeta| \frac{|\zeta|^2+|\theta|^2}{|\zeta|^2+|\theta|^2}) \leq C |\zeta|,\nonumber
\\
|h_{1\theta}|&\leq C |\theta|\big(1 +|\zeta| \frac{|\zeta|}{|\zeta|^2+|\theta|^2}\big) \leq C |\theta|.\label{eq:h1GzGt_far_estim}
\end{align}
Here and throughout this proof, the constant $C$ may depend on $s$, but is independent of $(\zeta,\theta)$.
This proves \eqref{eq:h1_far_claim}.
Similarly, one can show that
\begin{align}
|\partial_\zeta h_{1\zeta}|\leq C, \quad |\partial_\theta h_{1\zeta}|\leq C, \quad |\partial_\zeta h_{1\theta}|\leq C, \quad |\partial_\theta h_{1\theta}|\leq C.
\label{eq:h1GzGt_grad_far_estim}
\end{align}
Since ${\bf h}_1 = h_{1\zeta}{\bf e}_\zeta + h_{1\theta} {\bf e}_\theta$, we have
\begin{align}
|\nabla {\bf h}_1| \leq C (|\nabla h_{1\zeta}| + |h_{1\zeta} \nabla {\bf e}_\zeta| + |\nabla h_{1\theta}| + |h_{1\theta} \nabla {\bf e}_\theta|).
\end{align}
It then follows from \eqref{eq:h1GzGt_far_estim} and the following lemma, whose proof will be given in Appendix \ref{appendixC}, that
\begin{align}
|\nabla {\bf h}_1| &\leq C (|\nabla h_{1\zeta}| + |\nabla h_{1\theta}| + |\zeta|^2 +|\theta|^2).
\end{align}
\begin{lemma}\label{lem:grad_eGz_eGt_estim}
It holds that
\begin{equation}
|\nabla {\bf e}_\zeta| + |\nabla {\bf e}_\theta| \lesssim |\zeta| + |\theta|.
\end{equation}
\end{lemma}
We then have from \eqref{eq:h1GzGt_grad_far_estim} that
\begin{align*}
|\nabla {\bf h}_1|
\leq C (| h \partial_\zeta h_{1\zeta} | + | h \partial_\theta h_{1\zeta} |+| h \partial_\zeta h_{1\theta} |+| h \partial_\theta h_{1\theta} |+|\zeta|^2+|\theta|^2)
\leq C ( |h| + |\zeta|^2 + |\theta|^2) .
\end{align*}
One can see from the definition of the function $h$ that
$$
|h(\zeta,\theta)| \leq C (|\zeta|^2 + |\theta|^2),
$$
and hence
$$
|\nabla {\bf h}_1| \leq C (|\zeta|^2 + |\theta|^2),
$$
or equivalently, $\nabla {\bf h}_1({\bf x}) = O(|{\bf x}|^{-2})$ as $|{\bf x}|\rightarrow\infty$.
Note that $p_1(\zeta,\theta)= O(|\zeta|^2+|\theta|^2)$ as $(\zeta,\theta) \to (0,0)$. Thus, $p_1({\bf x})=O(|{\bf x}|^{-2})$ as $|{\bf x}| \to \infty$, and hence $({\bf h}_1, p_1) \in \mathcal{M}$. This completes the proof. \qed
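The constants $A_1$ and $B_1$ in \eqnref{AB} are chosen exactly so that the boundary values match and $p_1$ vanishes at the point at infinity; indeed $A_1 s + B_1\sinh 2s = \tfrac12$ and $A_1 + 2B_1\cosh 2s = 0$. A quick numerical check (sample values only, not part of the proof):

```python
import numpy as np

s = 0.2  # sample value of s = arcsinh(a/R); any small positive value works
A1 = 1 / (2 * s - np.tanh(2 * s))
B1 = -A1 / (2 * np.cosh(2 * s))

# Boundary matching used in the proof above:
assert np.isclose(A1 * s + B1 * np.sinh(2 * s), 0.5)   # radial component
assert abs(A1 + 2 * B1 * np.cosh(2 * s)) < 1e-10       # tangential component

# p_1 vanishes at the point at infinity (zeta, theta) = (0, 0):
mu, a = 1.0, 1.0  # arbitrary normalization for the check
def p1(z, t):
    return (2 * mu / a) * ((A1 - 2 * B1) * np.cosh(z) * np.cos(t)
                           + B1 * np.cosh(2 * z) * np.cos(2 * t)) \
        - (2 * mu / a) * (A1 - B1)
assert abs(p1(0.0, 0.0)) < 1e-8
```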
It is helpful to write ${\bf h}_1$ in terms of Cartesian coordinates. By \eqref{eq:bipolar_x_y}, we have
\begin{align*}
\Psi_1 = A_1 y \zeta + B_1 y \sinh 2\zeta,
\end{align*}
and hence
\begin{align*}
\nabla\Psi_1 = A_1 \zeta {\bf e}_y + A_1 y \nabla \zeta + B_1 \sinh 2\zeta \, {\bf e}_y + 2 B_1 y \cosh 2\zeta \, \nabla \zeta.
\end{align*}
Then, since ${\bf h}_1=(\nabla \Psi_1)^\perp$, we have
\begin{equation}\label{BhoneCart}
{\bf h}_1 = ( A_1 \zeta + B_1 \sinh 2\zeta ) {\bf e}_x + (A_1 + 2 B_1 \cosh 2\zeta)\, y \,(\nabla \zeta)^\perp.
\end{equation}
Here, $(x,y)^\perp = (y,-x)$.
\begin{prop}\label{cor:h1_p1_asymp}
We have
\begin{equation}\label{honepone}
\| \mathcal{E}[{\bf h}_1] \|_\infty \lesssim \delta^{-3/2} \quad\mbox{and}\quad \| p_1\|_\infty \approx \delta^{-2}
\end{equation}
as $\delta \to 0$. In the narrow region $\Pi_\delta$,
\begin{equation}\label{ponenarrow}
\sigma[{\bf h}_1,p_1](x,y) = \frac{3\mu R^3}{(y^2+R\delta)^2}\, I + O(\delta^{-3/2}).
\end{equation}
\end{prop}
\proof
One can see from the explicit forms of the constants $A_1$ and $B_1$ in \eqnref{AB} that
\begin{align}
A_1 &= \frac{3}{8} s^{-3} + O(s^{-1}), \quad B_1 = - \frac{3}{16} s^{-3} + O(s^{-1}).
\label{eq:A1B1_asymp}
\end{align}
Using \eqref{eq:strain_bipolar1}-\eqref{eq:strain_bipolar3} and Proposition \ref{lem:hone} (i), we have
\begin{align}
\mathcal{E}_{\zeta\zeta} &= -h(\zeta,\theta)(A_1+2B_1 \cosh 2\zeta)\cos\theta, \label{Ezz} \\
\mathcal{E}_{\theta\theta} &= h(\zeta,\theta)(A_1+2B_1 \cosh 2\zeta)\cos\theta, \label{Ett} \\
\mathcal{E}_{\zeta\theta} &= h(\zeta,\theta)\, 2B_1 \sinh 2\zeta \sin \theta. \label{Ezt}
\end{align}
We first estimate $\mathcal{E}_{\zeta\zeta}$. It follows from the Taylor expansions of $\cosh2\zeta$ and $\sinh2\zeta$, and from \eqnref{eq:A1B1_asymp} that
\begin{align*}
\mathcal{E}_{\zeta\zeta} = - \frac{1-\cos\theta + O(\zeta^2)}{a} \big(A_1+ 2B_1 + O(s^{-1})\big)\cos\theta .
\end{align*}
Observe from \eqref{eq:A1B1_asymp} that $A_1+2B_1 = O(s^{-1})$. Since $|\zeta| \le s$ and $a,s \approx \sqrt\delta$,
we have
$$
|\mathcal{E}_{\zeta\zeta}| \lesssim \delta^{-1}.
$$
Estimates for $\mathcal{E}_{\theta\theta}$ and $\mathcal{E}_{\zeta\theta}$ are simpler. In fact, one can see immediately from \eqnref{Ett} and \eqnref{Ezt} that
$$
|\mathcal{E}_{\theta\theta}| = |\mathcal{E}_{\zeta\zeta}| \lesssim \delta^{-1}
$$
and
$$
|\mathcal{E}_{\zeta\theta}| \lesssim a^{-1} |B_1 \zeta| \lesssim \delta^{-3/2}.
$$
Then \eqnref{Ecalnorm} yields the first estimate in \eqnref{honepone}.
We now consider the pressure $p_1$.
Since $a \approx \sqrt{\delta}$, we have
$$
|p_1(\zeta,\theta)| \lesssim \delta^{-2} (\cosh\zeta \,|\cos\theta| +1).
$$
Since $|\zeta| \le s \approx \sqrt{\delta}$ by \eqnref{sGd} if $(\zeta,\theta) \in D^e$, we have
$$
|p_1(\zeta,\theta)| \lesssim \delta^{-2} .
$$
Using the Taylor expansions of $\cosh \zeta$ and $\cosh 2\zeta$, we see
$$
p_1 = \frac{3}{2} \mu R \delta^{-2} \left( \cos \theta- \frac{1}{2} \cos^2 \theta - \frac{1}{2} \right) +O(\delta^{-1}).
$$
In particular, we have $\| p_1\|_\infty \gtrsim \delta^{-2}$, and the second statement in \eqnref{honepone} follows. Now the expansion \eqnref{ponenarrow} in the narrow region follows from \eqnref{Gsdef} and \eqnref{cosnarrow}.
\qed
The expressions for the solution $({\bf h}_2,p_2)$ are quite involved, even though they can be written down explicitly. However, its singular part, which is all that is used in the rest of the paper, can be expressed in a rather simple way. To express the singular part, which is denoted by $(\widetilde{\bf h}_2,\widetilde p_2)$, let
\begin{equation}\label{A2C2}
A_2 = -\frac{1}{2s + \sinh 2s}.
\end{equation}
Then, the components of the velocity field $\widetilde{\bf h}_2 = \widetilde h_{2\zeta}{\bf e}_\zeta + \widetilde h_{2\theta} {\bf e}_\theta$ are given by
\begin{align}
\widetilde h_{2\zeta} &=
A_2 \zeta \beta(\zeta,\theta),
\label{eq:h2t_Gz}
\\
\widetilde h_{2\theta} &=A_2\zeta \alpha(\zeta,\theta) + A_2 \sinh \zeta, \label{eq:h2t_Gt}
\end{align}
and the pressure $\widetilde p_2$ is given by
\begin{equation}\label{eq:pressure2}
\widetilde p_2 = -\frac{2\mu}{a} A_2 \sinh \zeta\sin\theta.
\end{equation}
Then one can see easily that $(\widetilde{\bf h}_2,\widetilde p_2)$ belongs to $\mathcal{M}$ and is a solution to the Stokes system. Moreover, $\widetilde{\bf h}_2$ satisfies
\begin{equation}\label{eq:h2_pD_2}
\widetilde{{\bf h}}_2|_{\partial D_i} = \frac{(-1)^i}{2}\mbox{\boldmath $\Gy$}_2 - C_2 \mbox{\boldmath $\Gy$}_3,\quad i=1,2,
\end{equation}
where $\mbox{\boldmath $\Gy$}_3$ is the one given in \eqnref{Bpsi} and $C_2$ is the constant given by
\begin{equation}\label{C2}
C_2 = \frac{\sinh^2 s}{a} A_2.
\end{equation}
In fact, one can easily check using \eqref{exey} that
\begin{equation}\label{eq:Be_Gt_formula}
{\bf e}_\theta|_{\partial D_2} = -\cosh s \mbox{\boldmath $\Gy$}_2 - \frac{\sinh s}{a} \mbox{\boldmath $\Gy$}_3.
\end{equation}
It then follows from \eqref{eq:h2t_Gz} and \eqref{eq:h2t_Gt} that
\begin{align*}
\widetilde{{\bf h}}_2|_{\partial D_2} &= A_2 s (\beta {\bf e}_\zeta + \alpha {\bf e}_\theta) + A_2 \sinh s {\bf e}_\theta
\\
&=A_2 s (-\mbox{\boldmath $\Gy$}_2) + A_2 (-\sinh s \cosh s) \mbox{\boldmath $\Gy$}_2 - \frac{\sinh^2 s}{a} A_2 \mbox{\boldmath $\Gy$}_3.
\end{align*}
This proves \eqnref{eq:h2_pD_2} on $\partial D_2$; the identity on $\partial D_1$ can be proved in the same way. In Cartesian coordinates, $\widetilde{\bf h}_2$ is represented in the simple form
\begin{equation}
\widetilde{\bf h}_2 = -A_2 \zeta {\bf e}_y + A_2 x (\nabla\zeta)^\perp.
\end{equation}
Some words about how to derive $(\widetilde{\bf h}_2,\widetilde p_2)$ may be helpful. As in Proposition \ref{lem:hone}, we first derive the relevant stream function $\widetilde\Psi_2$ using the expansion \eqref{eq:general_biharmonic_solution} for the general solution, which turns out to be
\begin{equation}\label{eq:stream_singular2}
\widetilde\Psi_2(\zeta,\theta) = \frac{1}{h(\zeta,\theta)} A_2 \zeta \sinh \zeta.
\end{equation}
We then let $(\widetilde{\bf h}_2,\widetilde p_2)$ be its associated solution to the Stokes system.
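The coefficient matching behind \eqnref{eq:h2_pD_2}, namely $-A_2(s+\sinh s\cosh s)=\tfrac12$, and the expansion of $A_2$ can be checked numerically (sample values only, not part of the argument):

```python
import numpy as np

R, delta = 1.0, 0.01  # sample values
a = np.sqrt(delta * (R + delta / 4))   # eq. (adef)
s = np.arcsinh(a / R)                  # eq. (s)
A2 = -1 / (2 * s + np.sinh(2 * s))     # eq. (A2C2)

# Coefficient of psi_2 on the boundary: -A2 (s + sinh s cosh s) = 1/2.
assert np.isclose(-A2 * (s + np.sinh(s) * np.cosh(s)), 0.5)

# Leading-order behavior A2 = -1/(4s) + O(s):
assert abs(A2 + 1 / (4 * s)) < s
```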
Thanks to \eqnref{eq:h2_pD_2}, it is clear how to find the solution $({\bf h}_2,p_2)$. Let
$({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$ be the solution to
\begin{equation}\label{eq:def_hrot}
\begin{cases}
\mu \Delta {\bf h}_{\mathrm{rot}} = \nabla p_{\mathrm{rot}} &\quad \mbox{in }D^e,
\\
\nabla \cdot {\bf h}_{\mathrm{rot}} = 0 &\quad \mbox{in } D^e,
\\[0.3em]
{\bf h}_{\mathrm{rot}} = \mbox{\boldmath $\Gy$}_3 &\quad \mbox{on } \partial D_1\cup \partial D_2,
\\[0.3em]
({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})\in \mathcal{M}.
\end{cases}
\end{equation}
The existence and uniqueness of the solution are guaranteed by Theorem \ref{thm:ext_diri}. We will construct the stream function for $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$ explicitly in subsection \ref{subsec6.1} and prove the following theorem in section \ref{sec:noslip2}.
\begin{theorem}\label{thm:boundedstress_rot}
We have
\begin{equation}\label{rotstrain}
\|\mathcal{E}[{\bf h}_{\mathrm{rot}}]\|_\infty \lesssim 1, \quad \| p_{\mathrm{rot}} \|_\infty \lesssim 1,
\end{equation}
and
\begin{equation}\label{rotstress}
\|\sigma[\mathbf{h}_{\mathrm{rot}},p_{\mathrm{rot}}]\|_\infty \lesssim 1.
\end{equation}
\end{theorem}
We immediately have the following proposition.
\begin{prop}\label{lem:htwo}
Let $(\widetilde{\bf h}_2,\widetilde p_2)$ be as given in \eqnref{eq:h2t_Gz}-\eqnref{eq:pressure2} and $C_2$ the constant given in \eqnref{C2}. The solution $({\bf h}_2,p_2)$ to \eqnref{Bhj} is given by
\begin{equation}
({\bf h}_2,p_2)= (\widetilde{\bf h}_2,\widetilde p_2)+ C_2({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}).
\end{equation}
\end{prop}
\begin{prop}\label{cor:h2_p2_asymp}
It holds that
\begin{equation}\label{htwoptwo}
\| \mathcal{E}[{\bf h}_2] \|_\infty \approx \delta^{-1} \quad\mbox{and}\quad \| p_2\|_\infty \approx \delta^{-1/2},
\end{equation}
as $\delta \to 0$. In the narrow region $\Pi_\delta$,
\begin{equation}\label{Gstwonarrow}
\sigma[{\bf h}_2,p_2](x,y)= \mu \delta^{-1} \frac{R\delta}{y^2+R\delta} \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix} + O(\delta^{-1/2}).
\end{equation}
\end{prop}
\noindent {\sl Proof}. \
We first note that
\begin{equation}\label{Atwo}
A_2= -\frac{1}{4s} + O(s).
\end{equation}
Since $|\zeta| \le s \approx \sqrt{\delta}$ and $a \approx \sqrt{\delta}$, the second estimate in \eqnref{htwoptwo} immediately follows from \eqnref{eq:pressure2}.
Since $a \approx s$ as one can see from \eqnref{s}, it follows from \eqnref{Atwo} and the definition of $C_2$ in \eqnref{C2} that $C_2$ is bounded regardless of $\delta$. In view of \eqref{rotstrain}, we only need to derive estimates related to $(\widetilde{{\bf h}}_2, \widetilde p_2)$.
Using \eqref{eq:strain_bipolar1}-\eqref{eq:strain_bipolar3} and \eqnref{eq:stream_singular2},
we have
\begin{align}
\mathcal{E}_{\zeta\zeta}[\widetilde{\bf h}_2] &= 0, \label{Ezz2}
\\
\mathcal{E}_{\theta\theta}[\widetilde{\bf h}_2] &= 0, \label{Ett2}
\\
\mathcal{E}_{\zeta\theta}[\widetilde{\bf h}_2] &= \frac{\cosh \zeta- \cos\theta}{a} A_2\cosh \zeta. \label{Ezt2}
\end{align}
We then have from \eqnref{Ezt2} that
$$
|\mathcal{E}_{\zeta\theta}[\widetilde{\bf h}_2]| \lesssim \delta^{-1},
$$
and hence
$$
\| \mathcal{E}[\widetilde{\bf h}_2] \|_\infty \lesssim \delta^{-1}.
$$
We see from \eqnref{adef}, \eqnref{sGd}, \eqnref{Atwo} and \eqnref{Ezt2} that
$$
\mathcal{E}_{\zeta\theta}[\widetilde{\bf h}_2] = -\frac{1}{4\delta} (\cosh \zeta- \cos\theta) + O(1).
$$
In the narrow region $\Pi_\delta$, we have
$$
\mathcal{E}_{\zeta\theta}[\widetilde{\bf h}_2] = -\frac{1}{4\delta} (1- \cos\theta) + O(1).
$$
In particular, $|\mathcal{E}_{\zeta\theta}[\widetilde{\bf h}_2]| \gtrsim \delta^{-1}$ in $\Pi_\delta$, and the first estimate in \eqnref{htwoptwo} follows.
The asymptotic formula \eqnref{Gstwonarrow} follows from \eqnref{Gsdef}, \eqref{cosnarrow}, \eqnref{Xinarrow} and \eqnref{Gsrel}.
\qed
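The leading behavior of $\mathcal{E}_{\zeta\theta}[\widetilde{\bf h}_2]$ used above can be confirmed numerically: at the gap center $(\zeta,\theta)=(0,\pi)$, the exact value \eqnref{Ezt2} and the approximation $-(\cosh\zeta-\cos\theta)/(4\delta)$ differ only by $O(1)$ as $\delta\to 0$ (a sanity check only, not part of the proof):

```python
import numpy as np

R = 1.0
for delta in [1e-3, 1e-4, 1e-5]:
    a = np.sqrt(delta * (R + delta / 4))
    s = np.arcsinh(a / R)
    A2 = -1 / (2 * s + np.sinh(2 * s))
    zeta, theta = 0.0, np.pi               # center of the narrow gap
    den = np.cosh(zeta) - np.cos(theta)
    exact = den / a * A2 * np.cosh(zeta)   # eq. (Ezt2)
    approx = -den / (4 * delta)
    # The two expressions agree up to an O(1) error as delta -> 0:
    assert abs(exact - approx) < 1.0
```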
\section{Proof of Theorem \ref{thm:main1}}\label{sec:1.1}
Thanks to the symmetry of the problem \eqnref{stokes} with ${\bf U}={\bf U}_{\rm ex}=(x,-y)^T$ and $P=0$, the velocity ${\bf u}$ enjoys the following symmetry:
\begin{align*}
u_{x}(x,y)=u_{x}(x,-y)=-u_{x}(-x,y), \\
u_{y}(x,y)=-u_{y}(x,-y)=u_{y}(-x,y),
\end{align*}
and the pressure $p$ satisfies
$$
p(x,y) = p(-x,y), \quad p(x,y) = p(x,-y).
$$
Thus, we infer
$$
c_{11}=-c_{21} \quad\mbox{and}\quad c_{i2}=c_{i3}=0 \ \mbox{ for $i=1,2$}.
$$
In other words, we have
\begin{equation}\label{eq:u_bdry}
{\bf u} = - c_{21} \mbox{\boldmath $\Gy$}_1 \quad\mbox{on } \partial D_1, \quad {\bf u} = c_{21} \mbox{\boldmath $\Gy$}_1 \quad\mbox{on } \partial D_2.
\end{equation}
Therefore, the solution $({\bf u},p):=({\bf u}_{\rm ex}, p_{\rm ex})$ admits the decomposition in terms of the singular function
\begin{equation}\label{eq:decomp_simple_u}
{\bf u} = {\bf v}_1 +2 c_{21}{{\bf h}}_{1}, \quad p = q_1 +2 c_{21} p_1 \quad \mbox{in } D^e,
\end{equation}
where $({\bf v}_1,q_1)$ is the solution with the no-slip boundary condition, namely,
\begin{equation}\label{stokesv}
\begin{cases}
\mu \Delta {\bf v}_1 = \nabla q_1 \quad &\mbox{in }D^e,
\\
\nabla \cdot {\bf v}_1 = 0 \quad &\mbox{in }D^e,
\\
{\bf v}_1=0 \quad &\mbox{on } \partial D_1 \cup \partial D_2, \\
({\bf v}_1-{\bf U}_{\rm ex},q_1) \in \mathcal{M}.
\end{cases}
\end{equation}
We will construct the stream function for $({\bf v}_1,q_1)$ in subsection \ref{subsec6.1} and prove the following theorem in section \ref{sec:noslip2}.
\begin{theorem}\label{thm:boundedstress_v1q1}
Let $({\bf v}_1, q_1)$ be the solution to \eqnref{stokesv}. Then, the following estimates hold:
\begin{equation}
\|\mathcal{E}[{\bf v}_1]\|_\infty \lesssim 1, \quad \| q_1 \|_\infty \lesssim 1,
\end{equation}
and
\begin{equation}
\|\sigma[{\bf v}_1,q_1]\|_\infty \lesssim 1.
\end{equation}
\end{theorem}
It then follows from \eqref{eq:decomp_simple_u} that
\begin{align}
\mathcal{E} [{\bf u}] &= 2 c_{21 }\mathcal{E}[{\bf h}_1] +O(1),\nonumber
\\
p &= 2 c_{21}p_1 +O(1), \label{eq:pre_asymps}
\\
\sigma[{\bf u},p] &= 2 c_{21 }\sigma[{\bf h}_1,p_1] +O(1), \nonumber
\end{align}
as $\delta \to 0$. Here, $O(1)$ means that the supremum norms of the remainder terms are bounded on $D^e$ regardless of $\delta$.
Because of \eqnref{honepone}, it is now sufficient to estimate the constant $c_{21}$.
We first express $c_{21}$ in terms of boundary integrals. To do so, we let
\begin{equation}
\mathcal{I}_1 := \displaystyle\int_{\partial D_2} {\bf e}_x \cdot \sigma[ {\bf h}_1,p_1]\nu \, dl
\quad\mbox{and}\quad \mathcal{J}_1 :=
\displaystyle \int_{\partial D_2} {\bf U} \cdot \sigma [{\bf h}_1,p_1]\nu \, dl,
\end{equation}
with ${\bf U}={\bf U}_{\rm ex}=(x,-y)^T$.
\begin{lemma}\label{lem:c21}
We have
\begin{equation}
c_{21} = \frac{\mathcal{J}_1}{\mathcal{I}_1}.
\end{equation}
\end{lemma}
\proof
By Green's identity for the Stokes system on $D^e$, we obtain that
\begin{equation}\label{1000}
\int_{\partial D^e}
({\bf u}-{\bf U})\cdot {\sigma[ {\bf h}_1,p_1]} \big|_+ \nu - {\sigma[{\bf u}-{\bf U},p]} \big|_+ \nu \cdot{\bf h}_1
=0.
\end{equation}
Since ${\bf h}_1|_{\partial D_i} = (-1)^i\frac{1}{2}\mbox{\boldmath $\Gy$}_1$, it follows from the boundary integral conditions \eqnref{equili} that
$$
\int_{\partial D_i} \sigma[{\bf u},p] \big|_+ \nu \cdot{\bf h}_1=0, \quad i=1,2.
$$
Applying Green's identity on $D_i$, we have
$$
\int_{\partial D_i} \sigma[{\bf U},0] \big|_+ \nu \cdot{\bf h}_1= \int_{\partial D_i} \sigma[{\bf U},p_0] \big|_- \nu \cdot{\bf h}_1 = 0, \quad i=1,2.
$$
It then follows from \eqnref{1000} that
$$
\int_{\partial D^e} ({\bf u}-{\bf U})\cdot {\sigma[ {\bf h}_1,p_1]} \big|_+ \nu =0,
$$
or equivalently
$$
\int_{\partial D^e} {\bf u}\cdot {\sigma[ {\bf h}_1,p_1]} \big|_+ \nu
=\int_{\partial D^e} {\bf U}\cdot {\sigma[ {\bf h}_1,p_1]} \big|_+ \nu.
$$
By symmetry, we have
$$
\int_{\partial D_2} {\bf u}\cdot {\sigma[ {\bf h}_1,p_1]} \big|_+ \nu
=\int_{\partial D_2} {\bf U}\cdot {\sigma[ {\bf h}_1,p_1]} \big|_+ \nu.
$$
Then the conclusion follows from \eqref{eq:u_bdry}.
\qed
We have the following lemma whose proof is given in Appendix \ref{appendixA}.
\begin{lemma} \label{lem:asymp_I1_J1}
As $\delta\rightarrow 0$, we have
\begin{equation}\label{Icalone}
\mathcal{I}_1 = -\frac{3\pi \mu}{2} \left(\frac{R}{\delta}\right)^{3/2} + O(\delta^{-1/2}),
\end{equation}
and
\begin{equation}\label{Jcalone}
\mathcal{J}_1 = -3\pi \mu R + O(\delta).
\end{equation}
\end{lemma}
As an immediate consequence of Lemmas \ref{lem:c21} and \ref{lem:asymp_I1_J1}, we have the following corollary:
\begin{cor}\label{cor:c21_asymp}
As $\delta\rightarrow 0$, we have
\begin{equation}\label{c21}
c_{21} = \frac{2}{\sqrt{R}} \delta^{3/2} + O(\delta^{5/2}).
\end{equation}
\end{cor}
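For the reader's convenience, we record the elementary computation behind \eqnref{c21}: by Lemma \ref{lem:c21} and Lemma \ref{lem:asymp_I1_J1},
$$
c_{21} = \frac{\mathcal{J}_1}{\mathcal{I}_1}
= \frac{-3\pi\mu R + O(\delta)}{-\frac{3\pi\mu}{2}\left(\frac{R}{\delta}\right)^{3/2} + O(\delta^{-1/2})}
= \frac{2}{\sqrt{R}}\,\delta^{3/2}\bigl(1+O(\delta)\bigr),
$$
which is \eqnref{c21}.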
Now Theorem \ref{thm:main1} follows from Proposition \ref{cor:h1_p1_asymp}, \eqnref{eq:pre_asymps}, and Corollary \ref{cor:c21_asymp}.
\section{Proof of Theorem \ref{thm:main2}}\label{sec:1.2}
Assume that ${\bf U}(x,y)={\bf U}_{\rm sh}=(y,x)^T$. We write $({\bf u},p)$ for $({\bf u}_{\rm sh},p_{\rm sh})$ for ease of notation. In this case the velocity ${\bf u}$ satisfies
\begin{align}
u_{x}(x,y)=-u_{x}(x,-y)=u_{x}(-x,y),\nonumber \\
u_{y}(x,y)=u_{y}(x,-y)=-u_{y}(-x,y),
\end{align}
and the pressure $p$ satisfies:
$$
p(x,y) = -p(-x,y), \quad p(x,y) = -p(x,-y).
$$
Then, we see that $c_{22}=-c_{12}$, $c_{23}=c_{13}$ and $c_{i1}=0$ for $i=1,2$. As a result, we have
\begin{equation}\label{eq:u_bdry2}
{\bf u} = - c_{22} \mbox{\boldmath $\Gy$}_2 + c_{23} \mbox{\boldmath $\Gy$}_3 \quad\mbox{on } \partial D_1, \quad {\bf u} = c_{22} \mbox{\boldmath $\Gy$}_2 + c_{23}\mbox{\boldmath $\Gy$}_3 \quad\mbox{on } \partial D_2.
\end{equation}
Let us decompose the solution $({\bf u}, p)$ in $D^e$ as
\begin{equation}\label{eq:decomp_simple_u_two}
({\bf u}, p) = ({\bf v}_2,q_2) + 2 c_{22}({{\bf h}}_{2},p_2) + c_{23}({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}),
\end{equation}
where $({\bf v}_2,q_2)$ is the solution to
\begin{equation}\label{Bv2}
\begin{cases}
\mu \Delta {\bf v}_2 = \nabla {q}_2 &\quad \mbox{in }D^e,
\\
\nabla \cdot {\bf v}_2 = 0 &\quad \mbox{in } D^e,
\\
{\bf v}_2 = 0 &\quad \mbox{on } \partial D_1 \cup \partial D_2,
\\
({\bf v}_2-{\bf U}_{\rm sh},q_2) \in \mathcal{M},
\end{cases}
\end{equation}
and $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$ is the solution to \eqnref{eq:def_hrot}. Note that ${\bf v}_2$ also satisfies the no-slip boundary condition like ${\bf v}_1$. We will construct the stream function for $({\bf v}_2,q_2)$ together with those for $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$ and $({\bf v}_1,q_1)$ in subsection \ref{subsec6.1} and prove the following theorem in section \ref{sec:noslip2}.
\begin{theorem}\label{thm:boundedstress_v2q2}
We have
\begin{equation}
\|\mathcal{E}[{\bf v}_2]\|_\infty \lesssim 1, \quad \| q_2 \|_\infty \lesssim 1,
\end{equation}
and
\begin{equation}
\|\sigma[{\bf v}_2,q_2]\|_\infty \lesssim 1.
\end{equation}
\end{theorem}
It follows from \eqref{eq:decomp_simple_u_two} that
\begin{align}
\mathcal{E} [{\bf u}] &= 2 c_{22 }\mathcal{E}[{\bf h}_2] + c_{23 }\mathcal{E}[{\bf h}_{\mathrm{rot}}] +O(1),\nonumber
\\
p &= 2 c_{22}p_2 + c_{23}p_{\mathrm{rot}} +O(1), \label{eq:pre_asymps2}
\\
\sigma[{\bf u},p] &= 2 c_{22 }\sigma[{\bf h}_2,p_2]+c_{23 }\sigma[{\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}] +O(1), \nonumber
\end{align}
as $\delta\rightarrow 0$.
As before, we represent the constant $c_{22}$ using the integrals
\begin{align}
\mathcal{I}_{2j} &:= \displaystyle\int_{\partial D_2} \mbox{\boldmath $\Gy$}_j \cdot \sigma[ {\bf h}_2,p_2]\nu \, dl,\quad j=2,3,
\\
\mathcal{I}_{\mathrm{rot}} &:= \displaystyle\int_{\partial D_2} \mbox{\boldmath $\Gy$}_3 \cdot \sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]\nu \, dl,
\\ \mathcal{J}_2 &:=
\displaystyle \int_{\partial D_2} {\bf U} \cdot \sigma [{\bf h}_2,p_2]\nu \, dl,
\\ \mathcal{J}_{\mathrm{rot}} &:=
\displaystyle \int_{\partial D_2} {\bf U} \cdot \sigma [{\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]\nu \, dl, \label{ItwoJtwo}
\end{align}
where ${\bf U}={\bf U}_{\rm sh}=(y,x)^T$.
We have the following lemma whose proof is similar to that of Lemma \ref{lem:c21}.
\begin{lemma}
We have
$$
\begin{bmatrix}
\mathcal{I}_{22} & \mathcal{I}_{23}
\\
\mathcal{I}_{23} & \mathcal{I}_{\mathrm{rot}}
\end{bmatrix}
\begin{bmatrix}
c_{22}
\\
c_{23}
\end{bmatrix}
=
\begin{bmatrix}
\mathcal{J}_2\\ \mathcal{J}_{\mathrm{rot}}
\end{bmatrix}.
$$
\end{lemma}
\proof
As in the proof of Lemma \ref{lem:c21}, we have
$$
\int_{\partial D_2} {\bf u}\cdot {\sigma[ {\bf h}_2,p_2]} \big|_+ \nu
=\int_{\partial D_2} {\bf U}\cdot {\sigma[ {\bf h}_2,p_2]} \big|_+ \nu,
$$
and
$$
\int_{\partial D_2} {\bf u}\cdot {\sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]} \big|_+ \nu
=\int_{\partial D_2} {\bf U}\cdot {\sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]} \big|_+ \nu.
$$
Then, by \eqref{eq:u_bdry2}, we see
$$
\mathcal{I}_{22} c_{22} + \mathcal{I}_{23} c_{23} = \mathcal{J}_2,
$$
and
$$
c_{22}\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2\cdot {\sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]} \big|_+ \nu + \mathcal{I}_{\mathrm{rot}} c_{23} = \mathcal{J}_{\mathrm{rot}}.
$$
Then, Green's identity yields
\begin{align}
\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2\cdot {\sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]} \big|_+ \nu &=
\int_{\partial D^e} {\bf h}_2\cdot {\sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]} \big|_+ \nu =
\int_{\partial D^e} {\bf h}_{\mathrm{rot}}\cdot {\sigma[ {\bf h}_{2},p_{2}]} \big|_+ \nu \nonumber \\
&=
\int_{\partial D_2} \mbox{\boldmath $\Gy$}_3\cdot {\sigma[ {\bf h}_{2},p_{2}]} \big|_+ \nu = \mathcal{I}_{23},
\label{eq:I23_another}
\end{align}
and hence the conclusion follows.
\qed
We have the following lemma whose proof is given in Appendix \ref{sec:appendixB}.
Let
\begin{align}
&f_0(x) := \frac{ 4e^{-x} \sinh^2 x (\cosh x + \sinh x) -4x^2}{x^3(\sinh 2x + 2x)}, \label{def_small_f}
\\
&g_0(x) :=\frac{4x}{\sinh 2x + 2x}, \label{def_small_g}
\end{align}
and let
\begin{equation}\label{F0G0}
F_0:= \int_0^\infty f_0(x) dx, \quad G_0:=\int_0^\infty g_0(x) dx.
\end{equation}
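As a quick numerical sanity check (not part of the proof), the constants $F_0$ and $G_0$ can be approximated by quadrature; the truncation point and step count below are ad hoc choices of ours. Note that $e^{-x}(\cosh x+\sinh x)=1$, so the numerator of $f_0$ simplifies to $4\sinh^2 x-4x^2$.

```python
import math

# f0 and g0 from \eqref{def_small_f}-\eqref{def_small_g}; the numerator of f0 is
# simplified using e^{-x}(cosh x + sinh x) = 1, which also avoids 0/0 near x = 0.
def f0(x):
    return (4.0 * math.sinh(x) ** 2 - 4.0 * x * x) / (x ** 3 * (math.sinh(2 * x) + 2 * x))

def g0(x):
    return 4.0 * x / (math.sinh(2 * x) + 2 * x)

def integrate(f, a, b, n=200000):
    # composite midpoint rule; both integrands are smooth and decay on (0, infinity)
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

F0 = integrate(f0, 0.0, 40.0)  # tail beyond 40 behaves like 2/x^3, so it is ~6e-4
G0 = integrate(g0, 0.0, 40.0)
print(F0, G0)
```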
\begin{lemma}\label{lem:asymp_I2_J2}
As $\delta\rightarrow 0$, we have
\begin{align}
&\mathcal{I}_{22} = -\pi\mu\sqrt{\frac{R}{\delta}}+O(1),
\label{eq:I22}
\\
&\mathcal{I}_{23} = \frac{\pi\mu R}{F_0} + O(\sqrt{\delta}),
\label{eq:I23}
\\
&\mathcal{I}_{\mathrm{rot}} = -\frac{4\pi\mu R^2}{F_0} + O(\sqrt{\delta}),
\label{eq:Irot}
\\
&\mathcal{J}_2 = -\pi \mu R \left( 1-\frac{1-G_0}{F_0}\right) +O(\sqrt{\delta}), \label{eq:J2}
\\
&\mathcal{J}_{\mathrm{rot}} = - 4\pi\mu R^2 \frac{1-G_0}{F_0} + O(\sqrt{\delta}).
\label{eq:Jrot}
\end{align}
\end{lemma}
As an immediate consequence, the following corollary holds:
\begin{cor}\label{cor:c22_asymp}
As $\delta\rightarrow 0$, we have
\begin{equation}\label{c22}
c_{22} = \sqrt{R\delta} +O(\delta),
\quad
c_{23} = O(1).
\end{equation}
\end{cor}
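For the reader's convenience, we sketch the algebra behind \eqnref{c22}: solving the system of the previous lemma by Cramer's rule and inserting the asymptotics of Lemma \ref{lem:asymp_I2_J2}, the $G_0$-dependent terms in the numerator cancel, and
$$
c_{22} = \frac{\mathcal{J}_2\,\mathcal{I}_{\mathrm{rot}} - \mathcal{I}_{23}\,\mathcal{J}_{\mathrm{rot}}}{\mathcal{I}_{22}\,\mathcal{I}_{\mathrm{rot}} - \mathcal{I}_{23}^2}
= \frac{\frac{4\pi^2\mu^2R^3}{F_0} + O(\sqrt{\delta})}{\frac{4\pi^2\mu^2R^2}{F_0}\sqrt{\frac{R}{\delta}}\,\bigl(1+O(\sqrt{\delta})\bigr)}
= \sqrt{R\delta} + O(\delta),
$$
while $c_{23} = (\mathcal{I}_{22}\mathcal{J}_{\mathrm{rot}} - \mathcal{I}_{23}\mathcal{J}_2)/(\mathcal{I}_{22}\mathcal{I}_{\mathrm{rot}} - \mathcal{I}_{23}^2) = O(\delta^{-1/2})/O(\delta^{-1/2}) = O(1)$.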
Now, Theorem \ref{thm:main2} follows from Theorem \ref{thm:boundedstress_rot}, Proposition \ref{cor:h2_p2_asymp}, \eqref{eq:pre_asymps2} and Corollary \ref{cor:c22_asymp}.
One can also see that the decomposition formula \eqnref{Bpcomp} for the solution $({\bf u},p)$ is an immediate consequence of \eqnref{eq:pre_asymps}, \eqnref{c21}, \eqnref{eq:pre_asymps2} and \eqnref{c22}.
\section{No blow-up with no-slip boundary conditions I}\label{sec:noslip1}
In this section and the next we show that the stress tensor does not blow up under the no-slip boundary condition; that is, we prove Theorems \ref{thm:boundedstress_rot}, \ref{thm:boundedstress_v1q1} and \ref{thm:boundedstress_v2q2}. Theorem \ref{thm:boundedstress_rot} concerns the problem with the boundary condition given by $\mbox{\boldmath $\Gy$}_3$, while Theorems \ref{thm:boundedstress_v1q1} and \ref{thm:boundedstress_v2q2} concern the problems with the no-slip boundary conditions. To this end, we first construct the solutions $({\bf v}_j,q_j)$, $j=1,2$, and $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$ by using the stream function formulation and bipolar coordinates. To avoid notational confusion, in this section we denote the stream functions by $\Phi$ instead of the $\Psi$ used in previous sections.
\subsection{Construction of stream functions}\label{subsec6.1}
In the following three lemmas we present the stream functions for $({\bf v}_1-{\bf U}_{\rm ex},q_1)$, $({\bf v}_2-{\bf U}_{\rm sh},q_2)$, and $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$. Each stream function is found by taking the general form \eqref{eq:general_biharmonic_solution} and matching the boundary conditions via the formulas \eqref{eq_velo1} and \eqref{eq_velo2} for the velocity components of the solution.
\subsubsection{Stream function for $({\bf v}_1-{\bf U}_{\rm ex},q_1)$}
\begin{lemma}\label{lem:Phi_1}
Let $\Phi_1$ be the stream function associated with the solution $({\bf v}_1-{\bf U}_{\rm ex},q_1)$.
We have
\begin{align}
(h\Phi_1)(\zeta,\theta) &=
a_1 \sinh 2\zeta\sin\theta + b_1 \zeta\sin\theta \nonumber
\\
&\quad +\sum_{n=2}^\infty \big({a}_n \sinh (n+1)\zeta+ {b}_n \sinh(n-1)\zeta \big)\sin n\theta,
\label{eq:stream_v1}
\end{align}
where
\begin{align*}
a_1 &=-\frac{2ae^{-s}(\sinh s -s e^{-s})}{\sinh 2s - 2s \cosh 2s},
\\
b_1 &=\frac{4a \sinh^2 s}{\sinh 2s - 2s \cosh 2s},
\\
a_n &=- \frac{2a(e^{-ns} \sinh ns - e^{-s} n \sinh s)}{\sinh 2ns-n\sinh 2s}, \quad n\geq 2,
\\
b_n &= \frac{2a(e^{-ns} \sinh ns - e^{s} n \sinh s)}{\sinh 2ns-n\sinh 2s}, \quad n\geq 2.
\end{align*}
\end{lemma}
\proof
We need to show that the solution $({\bf v}_1, q_1)$ constructed from $\Phi_1$ satisfies the no-slip condition ${\bf v}_1=0$ on $\partial D_1$ and $\partial D_2$, and $({\bf v}_1-{\bf U}_{\rm ex},q_1) \in \mathcal{M}$.
We first observe that $\Phi^{0}_{1} := xy$ is the stream function associated to the background solution $({\bf U},0)$. Here and throughout this proof ${\bf U}={\bf U}_{\rm ex}$. In fact, ${\bf U}=(x,-y) = (\nabla \Phi_1^0)^\perp$ and, since $\mu\Delta \Phi^{0}_{1}=0$, its harmonic conjugate is constant. We see from \eqref{eq:bipolar_x_y} that
$$
\Phi^{0}_{1} = \frac{a^2 \sinh \zeta \sin \theta}{(\cosh\zeta-\cos\theta)^2}
$$
in bipolar coordinates.
Notice that $\Phi_1^0$ is odd in both $\zeta$ and $\theta$, so we look for $\Phi_1$ with the same symmetry. We assume that $\Phi_1$ is of the form \eqnref{eq:stream_v1}, which is the part of the general solution \eqref{eq:general_biharmonic_solution} with this symmetry, and determine the coefficients $a_n$ and $b_n$ from the no-slip boundary condition.
For that, define $\Phi_1^{\mathrm{tot}}$ by
$$
\Phi_1^{\mathrm{tot}} := \Phi_1^0 + \Phi_1,
$$
so that $\Phi_1^{\mathrm{tot}}$ is the stream function associated with $({\bf v}_1,q_1)$. If we write ${\bf v}_1 = v_{1\zeta} {\bf e}_\zeta + v_{1\theta} {\bf e}_\theta$, then the no-slip boundary condition becomes
\begin{align}
v_{1\zeta} = 0 \quad \mbox{on }\zeta=\pm s,\label{eq:v1Gz_zero_bd}
\\
v_{1\theta}=0 \quad \mbox{on }\zeta=\pm s.\label{eq:v1Gt_zero_bd}
\end{align}
Then, from the formula \eqref{eq:tan_deri_bipolar} for the tangential derivative and \eqref{eq_velo1} for the stream function in bipolar coordinates, we have
\begin{align*}
0={v}_{1\zeta}|_{\zeta =\pm s} &= -( h \partial_\theta \Phi_{1}^{\mathrm{tot}})|_{\zeta=\pm s}
=\mp \partial_T \Phi_{1}^{\mathrm{tot}}\big|_{\zeta=\pm s}.
\end{align*}
This amounts to $\Phi_{1}^{\mathrm{tot}}$ being constant on $\{\zeta=s\}$ and $\{\zeta=-s\}$. Since $\Phi_{1}^{\mathrm{tot}}$ is odd in $\zeta$, we further require that
$$
\Phi_{1}^{\mathrm{tot}} = 0 \quad \mbox{on } \zeta=\pm s.
$$
We also have from \eqref{eq_velo2} and \eqref{eq:v1Gt_zero_bd} that on $\{\zeta=\pm s\}$
$$
0= {v}_{1\theta} = h \partial_\zeta \Phi_{1}^{\mathrm{tot}} = \left( \partial_\zeta - \frac{\sinh\zeta}{\cosh\zeta-\cos\theta}\right)(h\Phi_{1}^{\mathrm{tot}})
=\partial_{\zeta} (h\Phi_{1}^{\mathrm{tot}}).
$$
Thus the no-slip boundary condition is fulfilled if
$$
\begin{cases}
h\Phi_{1}^{\mathrm{tot}} = 0 &\quad \mbox{on } \zeta=\pm s,
\\
\partial_{\zeta} (h\Phi_{1}^{\mathrm{tot}}) = 0&\quad \mbox{on } \zeta=\pm s,
\end{cases}
$$
or equivalently
\begin{equation}\label{noslipv1}
\begin{cases}
h\Phi_{1} = - h\Phi_1^0 &\quad \mbox{on } \zeta=\pm s,
\\
\partial_{\zeta} (h\Phi_{1}) = - \partial_\zeta(h\Phi_1^0)&\quad \mbox{on } \zeta=\pm s.
\end{cases}
\end{equation}
Note that
\begin{align}
(h \Phi_1^0)(\zeta,\theta) &= \frac{a \sinh\zeta\sin\theta}{\cosh \zeta-\cos\theta}
=2a \sinh\zeta \sum_{n=1}^\infty e^{-n |\zeta|} \sin n\theta, \quad \zeta\neq 0.
\end{align}
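As a quick numerical spot check of this expansion (not part of the proof), one can compare a truncation of the series with the closed form; the truncation order and sample points below are our own ad hoc choices.

```python
import math

# Spot check of (h Phi_1^0)(zeta, theta)
#   = a sinh(zeta) sin(theta) / (cosh(zeta) - cos(theta))
#   = 2 a sinh(zeta) sum_{n>=1} e^{-n|zeta|} sin(n theta),   zeta != 0.
def closed_form(a, z, t):
    return a * math.sinh(z) * math.sin(t) / (math.cosh(z) - math.cos(t))

def partial_sum(a, z, t, N=400):
    # truncated Fourier series; the tail decays like e^{-N|zeta|}
    s = sum(math.exp(-n * abs(z)) * math.sin(n * t) for n in range(1, N + 1))
    return 2.0 * a * math.sinh(z) * s

print(closed_form(1.0, 0.7, 1.2), partial_sum(1.0, 0.7, 1.2))
```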
We then see from \eqnref{eq:stream_v1} that \eqnref{noslipv1} is equivalent to the following linear systems for $a_n$ and $b_n$:
\begin{align}
\begin{bmatrix}
\sinh 2s & s
\\
2\cosh 2s & 1
\end{bmatrix}
\begin{bmatrix}
a_1
\\
b_1
\end{bmatrix}
=
\begin{bmatrix}
-2a \sinh s e^{-s}
\\
-2a \cosh s e^{-s} + 2a \sinh s e^{-s}
\end{bmatrix},
\end{align}
and
\begin{align}
&\begin{bmatrix}
\sinh (n+1)s & \sinh(n-1)s
\\
(n+1)\cosh (n+1)s & (n-1)\cosh (n-1)s
\end{bmatrix}
\begin{bmatrix}
a_n
\\
b_n
\end{bmatrix}
\nonumber
\\
&=
\begin{bmatrix}
-2a \sinh s e^{-ns}
\\
-2a \cosh s e^{-ns} + 2a \sinh s n e^{-ns}
\end{bmatrix}, \quad n \ge 2.
\end{align}
Solving these linear systems yields the expressions for $a_n$ and $b_n$.
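As a sanity check (not part of the proof), the closed forms above can be compared with a direct numerical solution of the $2\times 2$ systems; the sample values of $a$, $s$ and $n$ below are our own choices.

```python
import math

sh, ch = math.sinh, math.cosh

def claimed(n, s, a):
    # closed forms for a_n, b_n stated in the lemma
    if n == 1:
        den = sh(2 * s) - 2 * s * ch(2 * s)
        return (-2 * a * math.exp(-s) * (sh(s) - s * math.exp(-s)) / den,
                4 * a * sh(s) ** 2 / den)
    den = sh(2 * n * s) - n * sh(2 * s)
    return (-2 * a * (math.exp(-n * s) * sh(n * s) - math.exp(-s) * n * sh(s)) / den,
            2 * a * (math.exp(-n * s) * sh(n * s) - math.exp(s) * n * sh(s)) / den)

def solved(n, s, a):
    # Cramer's rule on the 2x2 linear systems displayed above
    if n == 1:
        m = [[sh(2 * s), s], [2 * ch(2 * s), 1.0]]
        r = [-2 * a * sh(s) * math.exp(-s),
             (-2 * a * ch(s) + 2 * a * sh(s)) * math.exp(-s)]
    else:
        m = [[sh((n + 1) * s), sh((n - 1) * s)],
             [(n + 1) * ch((n + 1) * s), (n - 1) * ch((n - 1) * s)]]
        r = [-2 * a * sh(s) * math.exp(-n * s),
             (-2 * a * ch(s) + 2 * a * n * sh(s)) * math.exp(-n * s)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((r[0] * m[1][1] - m[0][1] * r[1]) / det,
            (m[0][0] * r[1] - r[0] * m[1][0]) / det)
```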
We now show
\begin{equation}\label{voneMcal}
({\bf v}_1-{\bf U},q_1) \in \mathcal{M}.
\end{equation}
We first prove
\begin{equation}\label{eq:claim_uo_bdd}
({\bf v}_1-{\bf U})({\bf x})=O(|{\bf x}|^{-1}), \quad |{\bf x}|\rightarrow\infty.
\end{equation}
Since $|{\bf x}|\rightarrow\infty$ corresponds to $(\zeta,\theta)\rightarrow(0,0)$, this is equivalent to proving
\begin{equation}
({\bf v}_1-{\bf U})(\zeta,\theta) = O(|\zeta|+|\theta|), \quad (\zeta,\theta)\rightarrow(0,0).
\end{equation}
One can see from the explicit forms of $a_n$ and $b_n$ that there is a constant $C$ independent of $n$ (but possibly depending on $s$) such that
$$
|a_n| + |b_n| \le C n e^{-2ns}
$$
for all $n$. Thus for any positive number $k$ there is a constant $C$ such that
\begin{equation}\label{eq:anbnsum_conv}
\sum_{n=1}^\infty n^k e^{ns}(|a_n| + |b_n|) \le C.
\end{equation}
The constant $C$ may differ at each appearance.
If we write ${\bf v}_1-{\bf U} = f_{\zeta}{\bf e}_\zeta + f_{\theta}{\bf e}_\theta$, then it follows from \eqref{eq_velo1} and \eqref{eq_velo2} that
\begin{align}
f_{\zeta} &= - h \partial_\theta \Phi_1 =\left(- \partial_\theta + F \right)(h\Phi_1), \label{eq:uo_Gz} \\
f_{\theta} &= h \partial_\zeta \Phi_1 = \left( \partial_\zeta - G \right)(h\Phi_1), \label{eq:uo_Gt}
\end{align}
where
\begin{equation}
F:= \frac{\sin\theta}{\cosh\zeta-\cos\theta}, \quad G:= \frac{\sinh\zeta}{\cosh\zeta-\cos\theta}.
\end{equation}
According to \eqnref{eq:stream_v1}, $h\Phi_1$ can be written as
\begin{equation}
(h \Phi_1)(\zeta,\theta) = a_1 \sinh 2\zeta\sin\theta + b_1 \zeta\sin\theta
+\sum_{n=2}^\infty \left( {a}_n w_n^+(\zeta,\theta) + {b}_n w_n^-(\zeta,\theta) \right) ,
\end{equation}
where
\begin{equation}
w_n^\pm(\zeta,\theta):= \sinh (n\pm 1)\zeta \sin n\theta.
\end{equation}
One can see that
\begin{equation}\label{eq:def_wnpm}
|w_n^\partialm(\zeta,\theta)| \lesssim n^2e^{n s} |\zeta\theta| .
\end{equation}
It thus follows from \eqnref{eq:anbnsum_conv} that
\begin{equation}\label{eq:hPhio_estim}
|(h\Phi_1)(\zeta,\theta)| \lesssim |\zeta\theta| + |\zeta\theta| \sum_{n=2}^\infty n^2 e^{ns}(|a_n|+|b_n|) \lesssim |\zeta\theta|.
\end{equation}
Similarly, one can show that there is $C$ independent of $(\zeta,\theta)$ such that
\begin{align}
&|\partial_\zeta (h\Phi_1)| \le C |\theta|,
\quad
|\partial_\theta (h\Phi_1)| \le C |\zeta|, \nonumber
\\
&|\partial_\zeta^2 (h\Phi_1)| \le C |\zeta\theta|,
\quad
|\partial_\theta^2 (h\Phi_1)| \le C |\zeta\theta|,
\quad |\partial_\zeta\partial_\theta (h\Phi_1)| \le C .
\label{eq:hPhio_d_estim}
\end{align}
Since
\begin{equation}\label{FGest}
\left| F \right| \approx \frac{|\theta|}{\zeta^2+\theta^2}, \quad \left| G \right| \approx \frac{|\zeta|}{\zeta^2+\theta^2}
\end{equation}
as $(\zeta,\theta) \to 0$, we have from \eqref{eq:uo_Gz}, \eqref{eq:uo_Gt}, \eqnref{eq:hPhio_estim} and \eqnref{eq:hPhio_d_estim} that
\begin{equation}\label{eq:uoGz_estim}
|f_{\zeta}| + |f_\theta| \le C (|\zeta| + |\theta|)
\end{equation}
for some constant $C$ (depending on $s$, and hence on $\delta$), which implies \eqref{eq:claim_uo_bdd}.
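The local behavior \eqnref{FGest} can be spot-checked numerically; since $\cosh\zeta-\cos\theta=(\zeta^2+\theta^2)/2+O\bigl((\zeta^2+\theta^2)^2\bigr)$, the normalized ratios below tend to $1$. The sample points are our own choices.

```python
import math

def F(z, t):
    return math.sin(t) / (math.cosh(z) - math.cos(t))

def G(z, t):
    return math.sinh(z) / (math.cosh(z) - math.cos(t))

for eps in (1e-1, 1e-2, 1e-3):
    z, t = eps, 2.0 * eps
    # both printed ratios approach 1 as eps -> 0
    print(F(z, t) * (z * z + t * t) / (2.0 * t),
          G(z, t) * (z * z + t * t) / (2.0 * z))
```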
Next, we prove
\begin{equation}\label{eq:claim_grad_uo_decay}
\nabla ({\bf v}_1-{\bf U})({\bf x}) = O(|{\bf x}|^{-2}), \quad |{\bf x}|\rightarrow\infty,
\end{equation}
or equivalently
\begin{equation}
\nabla({\bf v}_1-{\bf U})(\zeta,\theta) = O(\zeta^2+\theta^2), \quad (\zeta,\theta)\rightarrow(0,0).
\end{equation}
Since ${\bf v}_1-{\bf U} = f_{\zeta}{\bf e}_\zeta + f_{\theta}{\bf e}_\theta$, we have
$$
|\nabla({\bf v}_1-{\bf U})| \leq C(| \nabla f_\zeta| + |f_\zeta \nabla {\bf e}_\zeta| + |\nabla f_\theta|+|f_\theta\nabla{\bf e}_\theta|).
$$
Lemma \ref{lem:grad_eGz_eGt_estim} and \eqref{eq:uoGz_estim} yield
$$
|\nabla({\bf v}_1-{\bf U})| \lesssim | \nabla f_\zeta| + |\nabla f_\theta| + (\zeta^2+\theta^2).
$$
We see from \eqref{eq:uo_Gz}
$$
\partial_\zeta f_{\zeta} =\left(- \partial_\zeta\partial_\theta + F \partial_\zeta + \partial_\zeta F \right)(h\Phi_1).
$$
One can see easily that $|\partial_\zeta F| \lesssim (\zeta^2+\theta^2)^{-1}$. Thus we obtain from \eqref{eq:hPhio_estim}, \eqref{eq:hPhio_d_estim} and \eqnref{FGest}
$$
\partial_\zeta f_{\zeta}= O(1) .
$$
Similarly, one can show
\begin{align*}
\partial_\theta f_{\zeta}= O(1), \quad \partial_\zeta f_{\theta}= O(1), \quad \partial_\theta f_{\theta}= O(1).
\end{align*}
Therefore, we have
\begin{align}
\nabla f_\zeta= O(|h \partial_\zeta f_\zeta| + |h \partial_\theta f_\zeta|) = O(|h|) = O(\zeta^2+\theta^2),
\\
\nabla f_\theta= O(|h \partial_\zeta f_\theta| + |h \partial_\theta f_\theta|) = O(|h|) = O(\zeta^2+\theta^2).
\end{align}
This proves \eqref{eq:claim_grad_uo_decay}.
We now prove the estimate of the pressure:
\begin{equation}\label{eq:claim_p_uo_decay}
q_1({\bf x}) = O(|{\bf x}|^{-2}), \quad |{\bf x}|\rightarrow\infty,
\end{equation}
or equivalently,
\begin{equation}
q_1 = O(\zeta^2+\theta^2), \quad (\zeta,\theta)\rightarrow(0,0).
\end{equation}
Let
\begin{equation}\label{wn}
w_n(\zeta,\theta):=\sinh n\zeta \sin n \theta, \quad
\tilde{w}_n(\zeta,\theta):=\cosh n\zeta \cos n \theta.
\end{equation}
The pressure $q_1$ is given by
\begin{align}
q_1 &=C -a_1\frac{2\mu}{a} (2 \tilde{w}_1-\tilde{w}_2 ) +b_1\frac{2\mu}{a}\tilde{w}_1
- \frac{2\mu}{a}\sum_{n=2}^\infty((n+1)a_n - (n-1)b_n )\tilde{w}_n \nonumber
\\
&\quad + \frac{2\mu}{a}\sum_{n=2}^\infty n a_n \tilde{w}_{n+1}
- \frac{2\mu}{a}\sum_{n=2}^\infty n b_n \tilde{w}_{n-1}, \label{qone}
\end{align}
for some constant $C$. In fact, one can see from \eqref{eq:Laplacian_Psi} that
\begin{align*}
\Delta \Phi_1 &= a_1\frac{2}{a} (2w_1-w_2) -b_1\frac{2}{a}w_1
+ \frac{2}{a}\sum_{n=2}^\infty((n+1)a_n - (n-1)b_n )w_n
\\&\quad - \frac{2}{a}\sum_{n=2}^\infty n a_n w_{n+1}
+ \frac{2}{a}\sum_{n=2}^\infty n b_n w_{n-1}.
\end{align*}
Since $q_1$ is a harmonic conjugate of $\mu\Delta\Phi_1$ and $-\tilde{w}_n$ is a harmonic conjugate of $w_n$, \eqnref{qone} follows.
We choose the constant $C$ to be
\begin{equation}\label{qonepinfty}
C = \frac{2\mu}{a}a_1 - \frac{2\mu}{a}b_1+\frac{2\mu}{a}\sum_{n=2}^\infty (a_n+b_n).
\end{equation}
Then, $q_1$ takes the form
\begin{align}
q_1 &=C-a_1\frac{2\mu}{a} (2 (\tilde{w}_1-1)-(\tilde{w}_2-1) ) +b_1\frac{2\mu}{a}(\tilde{w}_1-1)\nonumber
\\
&\quad - \frac{2\mu}{a}\sum_{n=2}^\infty((n+1)a_n - (n-1)b_n )(\tilde{w}_n -1)
\nonumber \\&\quad
+ \frac{2\mu}{a}\sum_{n=2}^\infty n a_n (\tilde{w}_{n+1}-1)
- \frac{2\mu}{a}\sum_{n=2}^\infty n b_n (\tilde{w}_{n-1}-1). \label{qone2}
\end{align}
Note that
\begin{equation}\label{eq:wtn_estim}
|\tilde{w}_n(\zeta,\theta) - 1| \lesssim n^2 e^{n s}(\zeta^2 + \theta^2).
\end{equation}
This together with \eqnref{eq:anbnsum_conv} yields
\begin{align*}
|q_1|&\lesssim \big( 1 + \sum_{n=2}^\infty n^3 e^{ns}(|a_n|+|b_n|)\big)(\zeta^2+\theta^2) =O(\zeta^2+\theta^2).
\end{align*}
This proves \eqref{eq:claim_p_uo_decay} and hence \eqnref{voneMcal}. The proof is complete.
\qed
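The elementary bound \eqnref{eq:wtn_estim} used above follows from $|\cosh x-1|\le \frac{x^2}{2}e^{|x|}$ and $|\cos x-1|\le \frac{x^2}{2}$; a brute-force grid check (with our own choice of $s$ and grid) confirms that the implicit constant can even be taken equal to $1$ for $|\zeta|\le s$:

```python
import math

# Check |cosh(n z) cos(n t) - 1| <= n^2 e^{n s} (z^2 + t^2) on a grid with |z| <= s.
s = 0.8
ok = True
for n in range(1, 31):
    for i in range(-8, 9):
        for j in range(-8, 9):
            z, t = s * i / 8.0, math.pi * j / 8.0
            lhs = abs(math.cosh(n * z) * math.cos(n * t) - 1.0)
            rhs = n * n * math.exp(n * s) * (z * z + t * t)
            ok = ok and lhs <= rhs + 1e-12
print(ok)
```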
\subsubsection{Stream function for $({\bf v}_2-{\bf U}_{\rm sh},q_2)$}
\begin{lemma}\label{lem:Phi_2}
The stream function $\Phi_2$ associated with the solution $({\bf v}_2-{\bf U}_{\rm sh},q_2)$ is given by
\begin{align}
(h\Phi_2)(\zeta,\theta)
&= K_v(\cosh\zeta -\cos\theta)\ln (2\cosh \zeta - 2\cos \theta) +c_0 \cosh \zeta + d_0 \zeta \sinh \zeta \nonumber
\\
&\quad
+\sum_{n=1}^\infty \big(c_n \cosh (n+1) \zeta + d_n \cosh (n-1)\zeta \big)\cos n\theta ,
\label{eq:W_biharmonic_solution}
\end{align}
where
\begin{align*}
K_v=\frac{a(1-\tanh s -\frac{2\sinh^2 s}{2s+ \sinh 2s}-M')}{\frac{1}{2} + \frac{s(\sinh 2s -2 \tanh s)}{2s +\sinh 2s} + M}
\end{align*}
with
\begin{equation} \langlebel{M}
M=
\sum_{n=2}^\infty \frac{4n \sinh s \cosh s + e^{-n s}\sinh n s - 4n^2 \sinh^2 s }{n(n^2-1)(\sinh 2n s + n \sinh 2s)},
\end{equation}
and \begin{equation} \label{M'}
M'=\sum_{n=2}^\infty \frac{4 n \sinh^2 s }{\sinh 2ns + n \sinh 2s},
\end{equation}
\begin{align*}
c_0 &=-\frac{a}{2}+\frac{a \sinh^2 s}{\sinh s \cosh s+s} + K_v\frac{-1+e^{-2s} -2s(1+s)}{2s + \sinh 2s}, \\
d_0 &= \frac{a}{\sinh s \cosh s +s} -K_v \frac{\sinh^2 s}{s+\cosh s \sinh s},
\\
c_1 &=a(-1+\coth 2s) + K_v \frac{1}{1+e^{2s}},
\\
d_1 &= \frac{a}{2}-\frac{a}{\sinh 2s} + K_v(1+s-\frac{\tanh s}{2}),
\\
c_n &=\frac{2a(e^{-ns}\cosh ns- e^{-s}n \sinh s)}{\sinh 2ns + n \sinh 2s}+2K_v \frac{e^{-ns} \sinh n s + e^{-s}n\sinh s }{n(n+1)(\sinh 2ns + n \sinh 2s)},
\\
d_n &=-\frac{2a(e^{-ns}\cosh ns- e^{s}n \sinh s)}{\sinh 2ns + n \sinh 2s} - 2K_v\frac{e^{-n s} \sinh ns + e^s n \sinh s}{n(n-1) (\sinh 2ns + n \sinh 2s)}.
\end{align*}
\end{lemma}
\proof
As in the proof of Lemma \ref{lem:Phi_1}, one can see that the stream function associated with the background solution $({\bf U},0)$ is given by
\begin{equation}
\Phi^{0}_{2} = \frac{1}{2}(-x^2+y^2).
\end{equation}
Here and throughout this proof, ${\bf U}={\bf U}_{\rm sh}$. One can see from \eqref{eq:bipolar_x_y} that
\begin{equation}\label{eq:Phi20_bipolar}
\Phi^{0}_{2} = \frac{1}{2}\frac{a^2 (-\sinh^2 \zeta+ \sin^2 \theta)}{(\cosh\zeta-\cos\theta)^2}.
\end{equation}
Since $\Phi_2^0$ is even in both $\zeta$ and $\theta$, we seek $\Phi_2$ in the form \eqnref{eq:W_biharmonic_solution}, which has the same symmetry. Let
\begin{align}
(h\Phi_{2}^K)(\zeta,\theta) &:= K_v (\cosh\zeta -\cos\theta)\ln (2\cosh \zeta - 2\cos \theta),\label{eq:Phi2K}\\
(h\Phi_{2}^F)(\zeta,\theta) &:= c_0 \cosh \zeta + d_0 \zeta \sinh \zeta
\nonumber
\\
&\quad
+\sum_{n=1}^\infty \big(c_n \cosh (n+1) \zeta + d_n \cosh (n-1)\zeta \big)\cos n\theta,
\label{eq:Phi2F_fourier}
\end{align}
so that
\begin{align}
& \Phi_{2} = \Phi_2^K + \Phi_2^F. \label{eq:Phi2_fourier}
\end{align}
Let
$$
\Phi_2^{\mathrm{tot}} := \Phi_2^0 + \Phi_2.
$$
Then, $\Phi_2^{\mathrm{tot}}$ is the stream function associated with $({\bf v}_2,q_2)$.
We determine the coefficients $c_n$ and $d_n$ from the no-slip boundary condition ${\bf v}_2=0$ on $\partial D_1$ and $\partial D_2$. One can show as in the proof of Lemma \ref{lem:Phi_1} that this condition is fulfilled if
\begin{equation}
\begin{cases}
h\Phi_{2}^{\mathrm{tot}} = 0 &\quad \mbox{on } \zeta=\pm s,
\\
\partial_{\zeta} (h\Phi_{2}^{\mathrm{tot}}) = 0&\quad \mbox{on } \zeta=\pm s.
\end{cases}
\end{equation}
In other words,
\begin{equation}\label{systemv2}
\begin{cases}
h\Phi_{2}^F = - h\Phi_2^0 - h\Phi_2^K &\quad \mbox{on } \zeta=\pm s,
\\
\partial_{\zeta} (h\Phi_{2}^F) = - \partial_\zeta(h\Phi_2^0) - \partial_\zeta(h\Phi_2^K)&\quad \mbox{on } \zeta=\pm s.
\end{cases}
\end{equation}
Note that
\begin{align}
(h \Phi_2^0)(\zeta,\theta) &= \frac{a}{2}\frac{-\sinh^2\zeta+\sin^2\theta}{\cosh \zeta-\cos\theta}\nonumber
\\
&=\frac{a}{2} \big(e^{-|\zeta|}-\sinh|\zeta|\big) + a\Big(e^{-2|\zeta|}-\frac{1}{2}\Big)\cos\theta - 2a\sinh|\zeta|\sum_{n=2}^\infty e^{-n|\zeta|}\cos n\theta\nonumber
\\
&=:\sum_{n=0}^\infty \phi_{n}^0(\zeta) \cos n\theta,
\end{align}
and
\begin{align}
(h \Phi_2^K)(\zeta,\theta)
&=K_v(|\zeta|\cosh\zeta + e^{-|\zeta|}) -K_v\big(1+\frac{e^{-2|\zeta|}}{2}+|\zeta|\big)\cos\theta
\nonumber\\
&\quad + K_v\sum_{n=2}^\infty \big(\frac{e^{-(n-1)|\zeta|}}{n-1}-2\cosh \zeta \frac{e^{-n|\zeta|}}{n} +\frac{e^{-(n+1)|\zeta|}}{n+1}\big)\cos n\theta\nonumber
\\
&=:\sum_{n=0}^\infty \phi_{n}^K(\zeta) \cos n\theta.
\end{align}
Then one can infer from \eqnref{systemv2} that the coefficients $c_n$ and $d_n$ satisfy the following systems of equations:
\begin{align*}
\begin{bmatrix}
\cosh s & s\sinh s
\\
\sinh s & \sinh s + s\cosh s
\end{bmatrix}
\begin{bmatrix}
c_0
\\
d_0
\end{bmatrix}
&=
\begin{bmatrix}
-\phi_{0}^0(s) -\phi_{0}^K(s)
\\
-(\phi_{0}^0)'(s) -(\phi_{0}^K)'(s)
\end{bmatrix},
\\
\begin{bmatrix}
\cosh 2s & 1
\\
2\sinh 2s & 0
\end{bmatrix}
\begin{bmatrix}
c_1
\\
d_1
\end{bmatrix}
&=
\begin{bmatrix}
-\phi_{1}^0(s) -\phi_{1}^K(s)
\\
-(\phi_{1}^0)'(s) -(\phi_{1}^K)'(s)
\end{bmatrix},
\end{align*}
and
\begin{align*}
&\begin{bmatrix}
\cosh (n+1)s & \cosh(n-1)s
\\
(n+1)\sinh (n+1)s & (n-1)\sinh (n-1)s
\end{bmatrix}
\begin{bmatrix}
c_n
\\
d_n
\end{bmatrix}
=
\begin{bmatrix}
-\phi_{n}^0(s) -\phi_{n}^K(s)
\\
-(\phi_{n}^0)'(s) -(\phi_{n}^K)'(s)
\end{bmatrix},
\end{align*}
for $n\geq2$. Solving these linear systems yields the expressions given in the lemma for $c_n$ and $d_n$ in terms of $K_v$.
We then determine the constant $K_v$ by imposing the condition
\begin{equation}\label{eq:cndnsum_pre_cond}
c_0 + \sum_{n=1}^\infty (c_n + d_n) = 0.
\end{equation}
This condition is required to prove
\begin{equation}\label{eq:v2q2_decay}
({\bf v}_2-{\bf U},q_2) \in \mathcal{M}.
\end{equation}
We will be brief in presenting the proof of \eqnref{eq:v2q2_decay} since it is parallel to that of \eqnref{voneMcal}. We only mention why the condition \eqnref{eq:cndnsum_pre_cond} is required, and write down the formula for the pressure term $q_2$, since it will be used in the latter part of this section.
Similarly to \eqnref{eq:anbnsum_conv}, one can show that for any positive number $k$ there is a constant $C$ such that
\begin{equation}\label{cndn}
\sum_{n=1}^\infty n^k e^{ns}(|c_n| + |d_n|) \le C.
\end{equation}
Note that
$$
(h \Phi_2^F)(\zeta,\theta) =c_0 \cosh \zeta + d_0 \zeta \sinh \zeta
+\sum_{n=1}^\infty \big( c_n \tilde{w}_n^+(\zeta,\theta) +d_n \tilde{w}_n^-(\zeta,\theta) \big),
$$
where
$$
\tilde{w}_n^\pm (\zeta,\theta):= \cosh (n\pm 1)\zeta \cos n\theta.
$$
Thanks to \eqnref{eq:cndnsum_pre_cond}, we have
$$
(h \Phi_2^F)(\zeta,\theta) =c_0 (\cosh \zeta-1) + d_0 \zeta \sinh \zeta
+\sum_{n=1}^\infty \big( c_n (\tilde{w}_n^+(\zeta,\theta)-1) +d_n (\tilde{w}_n^-(\zeta,\theta)-1) \big).
$$
We then use \eqnref{eq:wtn_estim} to obtain
$$
(h \Phi_2^F)(\zeta,\theta) = O(\zeta^2+\theta^2).
$$
We use \eqref{eq:Laplacian_Psi} to see that $\Delta\Phi_2^K=0$ and
\begin{align*}
\Delta \Phi_2 = \Delta \Phi_2^F &= \frac{2}{a} c_0 + d_0 \frac{2}{a} (1-\tilde{w}_1)
+ \frac{2}{a}\sum_{n=1}^\infty((n+1)c_n - (n-1)d_n )\tilde{w}_n \\
&\quad - \frac{2}{a}\sum_{n=1}^\infty n c_n \tilde{w}_{n+1}
+ \frac{2}{a}\sum_{n=1}^\infty n d_n \tilde{w}_{n-1}.
\end{align*}
Since the pressure $q_2$ is a harmonic conjugate of $\mu\Delta\Phi_2$, we have
\begin{align}
q_2 &=\frac{2\mu }{a} d_0 {w}_1 + \frac{2\mu}{a}\sum_{n=1}^\infty((n+1)c_n - (n-1)d_n ){w}_n \nonumber
\\&\quad - \frac{2\mu}{a}\sum_{n=1}^\infty n c_n {w}_{n+1}
+ \frac{2\mu}{a}\sum_{n=1}^\infty n d_n{w}_{n-1} + C \label{qtwo}
\end{align}
for some constant $C$. We choose $C=0$. Then, since
\begin{equation}\label{wnest}
|w_n(\zeta,\theta)| \lesssim n^2 e^{n s}(\zeta^2 + \theta^2),
\end{equation}
we have $q_2 = O(\zeta^2 + \theta^2)$ as $(\zeta, \theta) \to 0$, namely, \eqnref{eq:v2q2_decay} holds.
\qed
\subsubsection{Stream function for $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$}
\begin{lemma}\label{lem:Psi_rot_exp}
The stream function $\Phi_{\mathrm{rot}}$ associated with the solution $({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}})$ is given by
\begin{align}
(h\Phi_{\mathrm{rot}})(\zeta,\theta)
&= K_{\mathrm{rot}}(\cosh\zeta -\cos\theta)\ln (2\cosh \zeta - 2\cos \theta) +a_0' \cosh \zeta + d_0' \zeta \sinh \zeta \nonumber
\\
&\quad
+\sum_{n=1}^\infty \big(a_n' \cosh (n+1) \zeta + b_n' \cosh (n-1)\zeta \big)\cos n\theta ,
\label{eq:Psi_rot}
\end{align}
where
\begin{align*}
K_{\mathrm{rot}}=-a \left(\frac{s \sinh^2 s \tanh s}{\sinh s \cosh s +s}+\frac{1}{2} + M\right)^{-1}
\end{align*}
with $M$ given in \eqnref{M}, and
\begin{align*}
a_0' &= a-\frac{ K_{\mathrm{rot}}(s^2+s+e^{-s}\sinh s)}{\sinh s \cosh s +s},
\quad
d_0' = -\frac{ K_{\mathrm{rot}} \sinh^2 s}{\sinh s \cosh s +s} ,
\\
a_1' &= \frac{1}{2} K_{\mathrm{rot}} e^{-s} \mathrm{sech}\, s,
\quad
b_1' = K_{\mathrm{rot}}(s+1-\frac{1}{2} \tanh s),
\\
a_n' &=\frac{2K_{\mathrm{rot}} (n e^{-s} \sinh s +e^{-n s}\sinh n s)}{n(n+1)(\sinh 2ns + n \sinh 2s)},
\\
b_n' &=-\frac{2K_{\mathrm{rot}} (n e^{s} \sinh s +e^{-n s}\sinh n s)}{n(n-1)(\sinh 2ns + n \sinh 2s)}.
\end{align*}
\end{lemma}
\proof
Let
$$
(\tilde{\bf h}_\mathrm{rot},\tilde{p}_\mathrm{rot}):=({\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}) - (\mbox{\boldmath $\Gy$}_3,0).
$$
Then $(\tilde{\bf h}_\mathrm{rot},\tilde{p}_\mathrm{rot})$ is the solution to
\begin{equation}
\begin{cases}
\mu \Delta \tilde{\bf h}_\mathrm{rot} = \nabla \tilde{p}_\mathrm{rot} \quad &\mbox{in }D^e,
\\
\nabla \cdot \tilde{\bf h}_\mathrm{rot} = 0 \quad &\mbox{in }D^e,
\\
(\tilde{\bf h}_\mathrm{rot},\tilde{p}_\mathrm{rot}) - (-\mbox{\boldmath $\Gy$}_3,0) \in\mathcal{M},
\end{cases}
\end{equation}
with the no-slip boundary condition, namely,
\begin{equation}\label{eq:stream_htilde_rot_bcbc}
\tilde{\bf h}_\mathrm{rot} |_{\partial D_1}=0, \quad \tilde{\bf h}_\mathrm{rot}|_{\partial D_2}=0.
\end{equation}
Observe that the above equation is similar to the equation \eqref{Bv2} for $({\bf v}_2,q_2)$ with the only difference being that the background solution ${\bf U}_{\rm sh}$ is replaced with $-\mbox{\boldmath $\Gy$}_3$.
It is easy to see that the function $\Phi_\mathrm{rot}^0$ defined by
$$
\Phi_\mathrm{rot}^0 = -\frac{1}{2}(x^2+y^2)
$$
is a stream function associated with the solution $(-\mbox{\boldmath $\Gy$}_3,0)$. In bipolar coordinates,
\begin{equation}
\Phi_\mathrm{rot}^0 = -\frac{1}{2}\frac{a^2 (\sinh^2 \zeta+ \sin^2 \theta)}{(\cosh\zeta-\cos\theta)^2} = -\frac{a^2}{2}\frac{\cosh\zeta+\cos\theta}{\cosh\zeta-\cos\theta}.
\end{equation}
Note that $\Phi_\mathrm{rot}^0$ is even in both $\zeta$ and $\theta$.
In exactly the same way as in the proof of Lemma \ref{lem:Phi_2}, we can find the stream function associated with $(\tilde{\bf h}_\mathrm{rot},\tilde{p}_\mathrm{rot})$, which immediately yields Lemma \ref{lem:Psi_rot_exp}.
\qed
\subsection{Asymptotics of $K_v$ and $K_{\mathrm{rot}}$}
\begin{lemma}\label{lem:K_asymp}
As $\delta\rightarrow 0$, we have
\begin{align}
K_{v} &=R\frac{1-G_0}{F_0} \sqrt{\frac{R}{\delta}} + O(1), \label{eq:Kv_asymp} \\
K_{\mathrm{rot}} &= -\frac{R}{F_0} \sqrt{\frac{R}{\delta}}+O(1),\label{eq:Krot_asymp}
\end{align}
where $F_0$ and $G_0$ are the numbers given in \eqref{F0G0}.
\end{lemma}
\proof
The proof is based on a special case of the Euler-Maclaurin summation formula: if $f\in C^1(\mathbb{R}^+) \cap L^1(\mathbb{R}^+)$, then, for a small parameter $s>0$, we have
\begin{equation}\label{EM}
s \sum_{n=0}^\infty f(x_0+ n s) = \int_{x_0}^\infty f(x) dx + R_1,
\end{equation}
where the remainder term $R_1$ satisfies
$$
|R_1| \lesssim s \left( |f(x_0)| + \int_{x_0}^\infty |f'(x)|dx \right).
$$
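As a quick numerical illustration of \eqnref{EM} (outside the proof), one can test the formula on a concrete integrand such as $f(x)=e^{-x}$, for which the tail integral is explicit; all variable names below are ours, and for this particular $f$ the implicit constant in the remainder bound can be taken equal to $1$:

```python
import math

def riemann_sum(f, x0, s, N=200_000):
    # left-endpoint sum  s * sum_{n>=0} f(x0 + n*s)  from the Euler-Maclaurin
    # special case; N is large enough that the neglected tail is negligible
    return s * sum(f(x0 + n * s) for n in range(N))

f = lambda x: math.exp(-x)   # sample integrand, f in C^1 and L^1 on (0, infinity)
x0 = 0.5
tail = math.exp(-x0)         # exact value of int_{x0}^infty e^{-x} dx

# remainder bound: s * (|f(x0)| + int_{x0}^infty |f'(x)| dx) = 2 * s * e^{-x0}
for s in (0.1, 0.01):
    R1 = riemann_sum(f, x0, s) - tail
    assert abs(R1) <= s * (abs(f(x0)) + tail)
```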
We first consider the asymptotics of the series $M$ defined by (\ref{M}).
One can easily see that
\begin{align*}
M + 2\sum_{n=2}^\infty \frac{1}{n(n^2-1)} = \sum_{n=2}^\infty \frac{4e^{-ns} \sinh^2(ns)(\cosh ns + \sinh ns) - 4n^2 \sinh^2 s }{n(n^2-1)(\sinh 2n s + n \sinh 2s)},
\end{align*}
and
$$
2\sum_{n=2}^\infty \frac{1}{n(n^2-1)} = \frac{1}{2}.
$$
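The value $\frac12$ follows from the partial fractions $\frac{1}{n(n^2-1)}=\frac12\big(\frac{1}{(n-1)n}-\frac{1}{n(n+1)}\big)$, which telescope to $\frac12\cdot\frac12=\frac14$; a one-line numerical confirmation (ours):

```python
# confirm 2 * sum_{n>=2} 1/(n(n^2-1)) = 1/2; the tail beyond n = 10^5 is O(10^-10)
total = 2.0 * sum(1.0 / (n * (n * n - 1)) for n in range(2, 100_000))
assert abs(total - 0.5) < 1e-8
```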
Thus, the Euler-Maclaurin summation formula yields
\begin{align}
M + \frac{1}{2} &= s^3 \sum_{n=2}^\infty \frac{ 4e^{-ns} \sinh^2(ns)(\cosh ns + \sinh ns) - 4(n s)^2 (\sinh s/s)^2 }{(ns)((ns)^2-s^2)(\sinh 2n s + ns (\sinh 2s/s))} \nonumber
\\
&=s^2\int_{2s}^\infty f_s(x) dx + s^2R_1,
\end{align}
where
$$
f_s(x) : = \frac{ 4e^{-x} \sinh^2 x (\cosh x + \sinh x)-4x^2(\sinh s /s)^2 }{x(x^2-s^2)(\sinh 2x + (\sinh 2s/s)x)},
$$
and
$$
|R_1| \lesssim s \left( |f_s(2s)| + \int_{2s}^\infty |f_s'(x)|dx \right).
$$
By straightforward but tedious computations, one can see that
$$
|f_s(2s)|\leq C, \quad \int_{2s}^\infty|f_s'(x)|dx \leq C',
$$
where
$C$ and $C'$ are constants independent of $s>0$.
Therefore, as $s\rightarrow 0$, we obtain
$$
M + \frac{1}{2} = s^2 \int_0^\infty f_0(x) dx + O(s^3) =s^2F_0 + O(s^3).
$$
So, for small $s$, we have
$$
K_{\mathrm{rot}} = -a \left( \frac{1}{2} + M + O(s^3) \right)^{-1} = -\frac{a}{s^2 F_0} + O(1).
$$
The other quantity $K_v$ can be estimated similarly, and the proof is completed.
\qed
\section{No blow-up with no-slip boundary conditions II}\langlebel{sec:noslip2}
\subsection{Proof of Theorem \ref{thm:boundedstress_v1q1}}
We first estimate the strain tensor $\mathcal{E}[{\bf v}_1]$. Since $\mathcal{E}[{\bf U}]=O(1)$, it suffices to estimate $\mathcal{E}[{\bf v}_1-{\bf U}]$.
The following formulae are derived using the relations \eqref{eq:strain_bipolar1}--\eqref{eq:strain_bipolar3} between the strain tensor and the stream function $\Phi_1$ and \eqnref{eq:stream_v1}:
\begin{align*}
\mathcal{E}_{\zeta\zeta}[{\bf v}_1-{\bf U}] &=
-h(\zeta,\theta) 2 a_1 \cosh 2\zeta \cos \theta - h(\zeta,\theta) b_1 \cos\theta
\\
&\quad - h(\zeta,\theta)\sum_{n=2}^\infty \big(\tilde{a}_n \cosh (n+1)\zeta +\tilde{b}_n \cosh (n-1)\zeta\big)\cos n\theta, \\
\mathcal{E}_{\zeta\theta}[{\bf v}_1-{\bf U}] &= h(\zeta,\theta) 2a_1 \sinh 2\zeta \sin \theta
\\
&\quad + h(\zeta,\theta)\sum_{n=2}^\infty \big(\tilde{a}_n \sinh (n+1)\zeta +\tilde{b}_n \sinh (n-1)\zeta\big)\sin n\theta, \\
\mathcal{E}_{\theta\theta}[{\bf v}_1-{\bf U}] &= - \mathcal{E}_{\zeta\zeta}[{\bf v}_1-{\bf U}] ,
\end{align*}
where
\begin{equation}\label{tildean}
\tilde{a}_n = n(n+1) a_n, \quad \tilde{b}_n = n(n-1) b_n.
\end{equation}
Here, $a_n$ and $b_n$ are given in Lemma \ref{lem:Phi_1}.
Using the hyperbolic identities
\begin{align*}
& \cosh(n+1)\zeta + \cosh(n-1)\zeta = 2\cosh n \zeta \cosh \zeta,
\\
&\cosh(n+1)\zeta - \cosh(n-1)\zeta = 2\sinh n \zeta \sinh \zeta,
\end{align*}
we can rewrite $\mathcal{E}_{\zeta\zeta}$ and $ \mathcal{E}_{\zeta\theta}$ as
\begin{align}
& \mathcal{E}_{\zeta\zeta}[{\bf v}_1-{\bf U}] =
-h(\zeta,\theta) 2 a_1 \cosh 2\zeta \cos \theta - h(\zeta,\theta) b_1 \cos\theta\nonumber
\\
&\quad - h(\zeta,\theta)\sum_{n=2}^\infty \big( ({\tilde{a}_n+\tilde{b}_n}) \cosh n \zeta \cosh \zeta + ({\tilde{a}_n- \tilde{b}_n}) \sinh n\zeta \sinh \zeta \big)\cos n\theta,
\label{eq:Ecal_GzGz_v1_symm}
\end{align}
and
\begin{align}
&\mathcal{E}_{\zeta\theta}[{\bf v}_1-{\bf U}] =
h(\zeta,\theta) 2a_1 \sinh 2\zeta \sin \theta \nonumber
\\
&\quad + h(\zeta,\theta)\sum_{n=2}^\infty \big( ({\tilde{a}_n+\tilde{b}_n}) \sinh n \zeta \cosh \zeta + ({\tilde{a}_n- \tilde{b}_n}) \cosh n\zeta \sinh \zeta \big)\sin n\theta.
\label{eq:Ecal_GzGt_v1_symm}
\end{align}
From the expressions of $a_n$ and $b_n$ given in Lemma \ref{lem:Phi_1}, we have, for $n \geq 2$,
\begin{align*}
{\tilde{a}_n+\tilde{b}_n} &=-\frac{4a}{s}\frac{n s e^{-n s} \sinh n s - (n s)^2\eta_2 + (n s)^3 \eta_1}{\sinh 2ns - 2ns\eta_{2}},
\\
{\tilde{a}_n-\tilde{b}_n} &=-\frac{4a}{s^2}\frac{(ns)^2 e^{-n s} \sinh n s - (n s)^3 \eta_2 + s^2 (n s)^2 \eta_1}{\sinh 2ns - 2ns\eta_{2}},
\end{align*}
where
\begin{equation}\label{eq:def_eta1_eta2}
\eta_1 = \eta_{1}(s) := \frac{\sinh^2 s}{s^2} , \quad \eta_2 = \eta_{2}(s) := \frac{\sinh 2s}{2s}.
\end{equation}
If we define
\begin{align}
f_1(x) := \frac{x e^{-x}\sinh x - x^2\eta_{2} + x^3 \eta_1}{\sinh 2x - 2x\eta_{2}}, \quad
f_2(x) := \frac{x^2 e^{-x}\sinh x - x^3 \eta_2 + s^2 x^2 \eta_1 }{\sinh 2x - 2x\eta_{2}},\label{eq:def_f1f2}
\end{align}
then ${\tilde{a}_n+\tilde{b}_n}$ and ${\tilde{a}_n-\tilde{b}_n}$ can be rewritten as
\begin{align}
{\tilde{a}_n+\tilde{b}_n} =-\frac{4a}{s} f_1(ns), \quad
{\tilde{a}_n-\tilde{b}_n} =-\frac{4a}{s^2} f_2(ns) .\label{eq:anpmbn_fj}
\end{align}
It then follows from \eqref{eq:Ecal_GzGz_v1_symm} and \eqref{eq:anpmbn_fj} that
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf v}_1-{\bf U}] &=
-h(\zeta,\theta) 2 a_1 \cosh 2\zeta \cos \theta - h(\zeta,\theta) b_1 \cos\theta\nonumber
\\
&\quad + \frac{4a}{s}h(\zeta,\theta)\sum_{n=2}^\infty \big( f_1(ns) \cosh n \zeta \cosh \zeta \big)\cos n\theta\nonumber
\\
&\quad + \frac{4a}{s^2}h(\zeta,\theta)\sum_{n=2}^\infty \big( f_2(ns) \sinh n\zeta \sinh \zeta \big)\cos n\theta. \label{eq:Ecal_GzGz_pre_series}
\end{align}
Let, for $j=1,2$,
\begin{equation}
A_{j,n}^+(\zeta) = f_j(n s) \cosh n\zeta, \quad A_{j,n}^-(\zeta) = f_j(n s) \sinh n\zeta,
\end{equation}
and then define $S_j^{++}$, $S_j^{-+}$, etc., by
\begin{equation}
S_j^{\pm+} = \sum_{n=2}^\infty A_{j,n}^\pm(\zeta) ah(\zeta,\theta)\cos n\theta, \quad
S_j^{\pm-} = \sum_{n=2}^\infty A_{j,n}^\pm(\zeta) ah(\zeta,\theta)\sin n\theta.
\label{eq:Sj_pm_rewrite}
\end{equation}
Then, \eqref{eq:Ecal_GzGz_pre_series} reads
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf v}_1-{\bf U}]
&=S_0 + \frac{4}{s} \cosh \zeta S_1^{++} + \frac{4}{s^2} \sinh \zeta S_2^{-+},
\label{eq:Ecal_GzGz_v1_preasymp}
\end{align}
where
\begin{equation}\label{eq:def_S0}
S_0 = -h(\zeta,\theta) 2 a_1 \cosh 2\zeta \cos \theta - h(\zeta,\theta) b_1 \cos\theta.
\end{equation}
Similarly, one can see that $\mathcal{E}_{\zeta\theta}$ is written as
\begin{align}
\mathcal{E}_{\zeta\theta}[{\bf v}_1-{\bf U}]
&=\widetilde{S}_0 - \frac{4}{s} \cosh \zeta S_1^{--} - \frac{4}{s^2} \sinh \zeta S_2^{+-},
\label{eq:Ecal_GzGt_v1_preasymp}
\end{align}
where
\begin{equation}\label{eq:def_S0p}
\widetilde{S}_0 = h(\zeta,\theta) 2a_1 \sinh 2\zeta \sin \theta.
\end{equation}
We use the following lemma here and present its proof in Appendix \ref{appendixD}.
\begin{lemma}\label{lem:fj_v1_estim}
If $2s \le x$, then
\begin{equation}\label{eq:fj_estim}
|f_j(x)|+ |f_j'(x)| + |f_j''(x)|\lesssim (1+x^3)e^{-2x}, \quad j=1,2.
\end{equation}
If $2s \le x \le 1$, then
\begin{equation}\label{eq:Df1_DDf2_asymp}
\left| f_1'(x) - \frac{1}{2} \right| \lesssim x,
\quad
|f_2''(x) - {1}| \lesssim x.
\end{equation}
\end{lemma}
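The size estimate \eqnref{eq:fj_estim} for the functions themselves can be sampled numerically from \eqref{eq:def_f1f2} (the derivatives can be sampled in the same way); the grid and the constant $C=5$ are our own choices, since the lemma only asserts the existence of some constant:

```python
import math

s = 0.01
eta1 = math.sinh(s) ** 2 / s ** 2        # eta_1(s)
eta2 = math.sinh(2 * s) / (2 * s)        # eta_2(s)

def f1(x):
    return (x * math.exp(-x) * math.sinh(x) - x**2 * eta2 + x**3 * eta1) \
           / (math.sinh(2 * x) - 2 * x * eta2)

def f2(x):
    return (x**2 * math.exp(-x) * math.sinh(x) - x**3 * eta2 + s**2 * x**2 * eta1) \
           / (math.sinh(2 * x) - 2 * x * eta2)

C = 5.0  # illustrative constant, not claimed to be optimal
xs = [2 * s + 0.01 * k for k in range(3000)]   # sample grid on [2s, 2s + 30]
worst = max((abs(f1(x)) + abs(f2(x))) / ((1 + x**3) * math.exp(-2 * x)) for x in xs)
assert worst <= C
```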
\begin{lemma}\label{lem:pre_asymp_Sj}
The following asymptotic formulas hold for $j=1,2$:
\begin{align}
S_{j}^{+ +} &=- \frac{ 1}{2} f_j(2s)\cosh 2\zeta \cos \theta + f_j(2s)\cosh 2\zeta \cos 2\theta \nonumber \\
&\quad -\frac{1}{2} f_j(3s)\cosh 3\zeta \cos 2\theta + O(s), \label{Sj++}\\
S_{j}^{+ -}&=- \frac{ 1}{2} f_j(2s)\cosh 2\zeta \sin \theta +f_j(2s)\cosh 2\zeta \sin 2\theta \nonumber \\
&\quad -\frac{1}{2} f_j(3s)\cosh 3\zeta \sin 2\theta + O(s), \label{Sj+-} \\
S_{j}^{- \pm} &=O(s), \label{Sj-pm}
\end{align}
as $s \to 0$.
\end{lemma}
\proof
We first have the following identity:
\begin{align*}
ah(\zeta,\theta)\cos n\theta &= \cosh\zeta \cos n \theta - \cos\theta \cos n\theta
\\
&= -\frac{1}{2}\big(\cos (n+1)\theta - 2\cosh \zeta \cos n \theta + \cos (n-1)\theta \big)
\\
&=-\frac{1}{2}\big(\cos (n+1)\theta - 2 \cos n \theta + \cos (n-1)\theta \big) + (\cosh\zeta-1) \cos n \theta
\\
&=-\frac{1}{2}\big(\cos (n+1)\theta - 2 \cos n \theta + \cos (n-1)\theta \big) + 2\sinh^2(\zeta/2) \cos n \theta.
\end{align*}
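The chain of identities above, which uses $\cos\theta\cos n\theta=\frac12(\cos(n+1)\theta+\cos(n-1)\theta)$ and $\cosh\zeta-1=2\sinh^2(\zeta/2)$, can be verified numerically (the helper names are ours):

```python
import math

def lhs(z, t, n):
    # (a h)(zeta, theta) * cos(n theta), with a*h = cosh(zeta) - cos(theta)
    return (math.cosh(z) - math.cos(t)) * math.cos(n * t)

def rhs(z, t, n):
    second_diff = math.cos((n + 1) * t) - 2 * math.cos(n * t) + math.cos((n - 1) * t)
    return -0.5 * second_diff + 2 * math.sinh(z / 2) ** 2 * math.cos(n * t)

for z in (0.05, 0.4):
    for t in (0.3, 1.7):
        for n in (2, 5, 9):
            assert abs(lhs(z, t, n) - rhs(z, t, n)) < 1e-12
```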
By substituting this identity into \eqref{eq:Sj_pm_rewrite} and then rearranging indices, we arrive at
\begin{align}
S_j^{\pm+} &=
- \frac{ 1}{2} (A_{j,2}^\pm \cos \theta -2 A_{j,2}^\pm \cos 2\theta + A_{j,3}^\pm \cos 2\theta )
\nonumber \\ &\quad
-\frac{1}{2}\sum_{n=3}^\infty \big(A_{j,n+1}^\pm - 2A_{j,n}^\pm + A_{j,n-1}^\pm \big)\cos n\theta \nonumber
\\
& \quad + 2\sinh^2 (\zeta/2) \sum_{n=2}^\infty A_{j,n}^\pm \cos n\theta
=: S_{j,0}^{\pm +} + S_{j,1}^{\pm +} + S_{j,2}^{\pm +}. \label{eq:v2_Sp_fourier}
\end{align}
Note that $A_{j,n}^\pm$ is of the form
$$
A_{j,n}^\pm = F_{j,n} G^\pm_{n},
$$
where
$$
F_{j,n} = f_j(n s), \quad G_n^+ = \cosh n \zeta, \quad G_n^- = \sinh n\zeta.
$$
We then have
\begin{align*}
A_{j,n+1}^\pm - 2A_{j,n}^\pm + A_{j,n-1}^\pm & = F_{j,n+1} G_{n+1}^\pm - 2 F_{j,n} G_n^\pm + F_{j,n-1}G_{n-1}^\pm
\\
&= (F_{j,n+1}-2F_{j,n}+F_{j,n-1}) G_n^\pm + F_{j,n} (G^\pm_{n+1}-2G^\pm_n + G^\pm_{n-1})
\\
&+ (F_{j,n+1} - F_{j,n}) (G^\pm_{n+1}- G^\pm_{n})
+ (F_{j,n} - F_{j,n-1}) (G^\pm_{n}- G^\pm_{n-1}).
\end{align*}
One can easily see that
\begin{align*}
&G_{n+1}^\pm - G_n^\pm = \sinh ({\zeta}/{2}) (e^{(n+\frac{1}{2})\zeta} \mp e^{-(n+\frac{1}{2})\zeta} ),
\\
&G_{n+1}^\pm -2G_n^\pm + G_{n-1}^\pm = 2\sinh^2 (\zeta/2) (e^{n\zeta} \pm e^{-n\zeta} ).
\end{align*}
Since $|\zeta| \le s$, we have
\begin{equation}\label{Gnest}
|G_n^\pm| \lesssim e^{ns},\quad |G_{n+1}^\pm - G_n^\pm| \lesssim s e^{n s},
\quad
| G_{n+1}^\pm -2G_n^\pm + G_{n-1}^\pm | \lesssim s^2 e^{n s}.
\end{equation}
Next, we estimate $F_{j,n}$ and its finite differences.
By the mean value theorem, there exist $x_n^*\in(ns, (n+1)s), x_n^{**}\in ((n-1)s, (n+1)s) $ such that
$$
\frac{F_{j,n+1}-F_{j,n}}{s} = f_j'(x_n^*), \quad \frac{F_{j,n+1}-2F_{j,n}+F_{j,n-1}}{s^2} = f_j''( x_n^{**} ).
$$
Then, by \eqnref{eq:fj_estim}, we infer
\begin{align}
&|F_{j,n} | \lesssim ( 1+ (n s)^3) e^{-2ns},
\\
&|F_{j,n+1}-F_{j,n}| \lesssim s ( 1+ (n s)^3) e^{-2ns},
\\
&|F_{j,n+1}-2F_{j,n} + F_{j,n-1}| \lesssim s^2 ( 1+ (n s)^3) e^{-2ns}.
\label{eq:Fn_estim}
\end{align}
These estimates together with \eqnref{Gnest} lead us to
$$
|A_{j,n}^\pm |\lesssim e^{n s}|F_{j,n}| \lesssim (1+(ns)^3) e^{- n s},
$$
and
\begin{align*}
|A_{j,n+1}^\pm - 2A_{j,n}^\pm + A_{j,n-1}^\pm| & \lesssim
\big|F_{j,n+1}-2F_{j,n}+F_{j,n-1}\big| e^{n s} + |F_{j,n}| s^2 e^{n s}
\\
&\quad + |F_{j,n+1} - F_{j,n}| s e^{n s} + |F_{j,n} - F_{j,n-1}| s e^{ns}
\\
&\lesssim s^2 (1+(ns)^3)e^{-n s}.
\end{align*}
Using \eqnref{EM}, we have
\begin{align*}
|S_{j,1}^{\pm+}| &\lesssim \sum_{n=2}^\infty |A_{j,n+1}^\pm - 2A_{j,n}^\pm + A_{j,n-1}^\pm| \lesssim \sum_{n=2}^\infty s^2 (1+(ns)^3)e^{-ns}
\\
&\lesssim s\int_0^\infty (1+x^3)e^{-x} dx \lesssim s,
\end{align*}
and
\begin{align*}
|S_{j,2}^{\pm+}| &\lesssim s^2 \sum_{n=2}^\infty | A_{j,n}^\pm|\lesssim s^2\sum_{n=2}^\infty (1+(ns)^3)e^{-ns}
\\
&\lesssim s\int_0^\infty (1+x^3)e^{-x} dx \lesssim s.
\end{align*}
Therefore, from \eqref{eq:v2_Sp_fourier}, we see that
\begin{align*}
S_{j}^{+ +}&= S_{j,0}^{+ +} + O(s),
\end{align*}
which is the formula \eqnref{Sj++}.
Similarly,
\begin{align*}
S_{j}^{- +}&= S_{j,0}^{- +} + O(s)
\\
&=- \frac{ 1}{2} (A_{j,2}^- \cos \theta -2 A_{j,2}^- \cos 2\theta + A_{j,3}^- \cos 2\theta ) + O(s)
\\
&=- \frac{ 1}{2} (f_j(2s)\sinh 2\zeta \cos \theta -2 f_j(2s)\sinh 2\zeta \cos 2\theta + f_j(3s)\sinh 3\zeta \cos 2\theta ) + O(s).
\end{align*}
Since $\sinh \zeta=O(s)$ and $|f_j(2s)|+|f_j(3s)|$ is bounded thanks to \eqnref{eq:fj_estim}, the estimate \eqnref{Sj-pm} for $S_j^{-+}$ follows.
Using the identity
\begin{align*}
ah(\zeta,\theta)\sin n\theta = -\frac{1}{2}\big(\sin (n+1)\theta - 2 \sin n \theta + \sin (n-1)\theta \big) + 2\sinh^2(\zeta/2) \sin n \theta,
\end{align*}
one can see that
\begin{align}
S_j^{\pm-} &=
- \frac{ 1}{2} (A_{j,2}^\pm \sin \theta -2 A_{j,2}^\pm \sin 2\theta + A_{j,3}^\pm \sin 2\theta )
\nonumber \\ &\quad
-\frac{1}{2}\sum_{n=3}^\infty \big(A_{j,n+1}^\pm - 2A_{j,n}^\pm + A_{j,n-1}^\pm \big)\sin n\theta + 2\sinh^2 (\zeta/2) \sum_{n=2}^\infty A_{j,n}^\pm \sin n\theta.
\label{eq:v2_Sm_fourier}
\end{align}
The other formulas, namely, \eqnref{Sj+-} and \eqnref{Sj-pm} for $S_j^{--}$, can be proved in the same way.
The proof is completed.
\qed
We are now ready to estimate $\mathcal{E}_{\zeta\zeta}$ and $\mathcal{E}_{\zeta\theta}$. By applying Lemma \ref{lem:pre_asymp_Sj} to \eqref{eq:Ecal_GzGz_v1_preasymp} and \eqref{eq:Ecal_GzGt_v1_preasymp}, we have
\begin{align*}
\mathcal{E}_{\zeta\zeta}[{\bf v}_1-{\bf U}]
&=S_0 + \frac{4}{s} \cosh \zeta S_1^{++} + O(1)
\\
&=S_0 - \frac{2}{s} \cosh \zeta \big[f_1(2s)\cosh 2\zeta \cos \theta \\
&\quad -2 f_1(2s)\cosh 2\zeta \cos 2\theta+ f_1(3s)\cosh 3\zeta \cos 2\theta \big] +O(1),
\end{align*}
and
\begin{align*}
\mathcal{E}_{\zeta\theta}[{\bf v}_1-{\bf U}]
&=\widetilde{S}_0 - \frac{4}{s^2} \sinh \zeta S_2^{+-} + O(1)
\\
&= \widetilde{S}_0 +\frac{2}{s^2}\sinh \zeta \big[f_2(2s)\cosh 2\zeta \sin \theta
\\
&\quad -2 f_2(2s)\cosh 2\zeta \sin 2\theta + f_2(3s)\cosh 3\zeta \sin 2\theta \big] +O(1).
\end{align*}
By applying Taylor expansions to $f_j$ given in \eqref{eq:def_f1f2}, we see that
\begin{align*}
&f_1(2s) = s + O(s^2), \quad f_1(3s) = \frac{3}{2} s + O(s^2),
\\
&f_2(2s) = -\frac{3}{2}s + O(s^2), \quad f_2(3s) = -\frac{9}{4} s + O(s^2).
\end{align*}
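These leading coefficients can be double-checked numerically from the definitions \eqref{eq:def_f1f2} (an illustration only; the variable names are ours):

```python
import math

s = 1e-3
eta1 = math.sinh(s) ** 2 / s ** 2        # eta_1(s)
eta2 = math.sinh(2 * s) / (2 * s)        # eta_2(s)
f1 = lambda x: (x * math.exp(-x) * math.sinh(x) - x**2 * eta2 + x**3 * eta1) \
               / (math.sinh(2 * x) - 2 * x * eta2)
f2 = lambda x: (x**2 * math.exp(-x) * math.sinh(x) - x**3 * eta2 + s**2 * x**2 * eta1) \
               / (math.sinh(2 * x) - 2 * x * eta2)

# leading Taylor coefficients, with O(s) relative error at s = 10^-3
assert abs(f1(2 * s) / s - 1.0) < 0.05
assert abs(f1(3 * s) / s - 1.5) < 0.05
assert abs(f2(2 * s) / s + 1.5) < 0.05
assert abs(f2(3 * s) / s + 2.25) < 0.05
```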
So we have
\begin{align*}
\mathcal{E}_{\zeta\zeta}[{\bf v}_1] &=S_0 + O(1), \qquad \mathcal{E}_{\zeta\theta}[{\bf v}_1] =\widetilde{S}_0 +O(1).
\end{align*}
It remains to estimate $S_0$ and $\widetilde{S}_0$.
Applying Taylor expansions to $a_1$ and $b_1$ given in Lemma \ref{lem:Phi_1}, we have
\begin{equation}\label{a1b1}
a_1 = \frac{3a}{4s} +O(s), \quad b_1 = -\frac{3a}{2s} +O(s).
\end{equation}
Then, from \eqref{eq:def_S0} and \eqref{eq:def_S0p}, we have
\begin{align*}
S_0 &= -h(\zeta,\theta) (2 a_1 \cosh 2\zeta + b_1 )\cos\theta = - h(\zeta,\theta)(2a_1 +b_1)\cos\theta + O(1) = O(1),
\\
\widetilde{S}_0 &= h(\zeta,\theta) 2a_1 \sinh 2\zeta \sin \theta = O(1).
\end{align*}
Therefore, we obtain that
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf v}_1] &= O(1), \qquad \mathcal{E}_{\zeta\theta}[{\bf v}_1] = O(1).
\end{align}
We now prove that the pressure $q_1$ is bounded regardless of $\delta$. Recall from \eqnref{qone2} that $q_1$ is given by
\begin{align*}
q_1 &=-a_1\frac{2\mu}{a} (2 (\tilde{w}_1-1)-(\tilde{w}_2-1) ) +b_1\frac{2\mu}{a}(\tilde{w}_1-1)\nonumber
\\
&\quad - \frac{2\mu}{a}\sum_{n=2}^\infty((n+1)a_n - (n-1)b_n )(\tilde{w}_n -1)
\nonumber \\&\quad
+ \frac{2\mu}{a}\sum_{n=2}^\infty n a_n (\tilde{w}_{n+1}-1)
- \frac{2\mu}{a}\sum_{n=2}^\infty n b_n (\tilde{w}_{n-1}-1),
\end{align*}
where $\tilde{w}_n(\zeta,\theta)=\cosh n\zeta \cos n \theta$. Using the notation in \eqnref{tildean}, we have
\begin{align}
q_1 &=\frac{2\mu}{a} (-2a_1 + b_1-2 b_2) \tilde{w}_1 + \frac{2\mu}{a} (a_1 -3 a_2 + b_2 - 3 b_3) \tilde{w}_2 \nonumber
\\
&\quad -\frac{2\mu}{a}\sum_{n=3}^\infty \frac{1}{n}(\tilde{a}_n - \tilde{b}_n - \tilde{a}_{n-1} + \tilde{b}_{n+1}) \tilde{w}_n.
\label{q2_series}
\end{align}
Note that
\begin{align*}
\tilde{a}_n - \tilde{b}_n - \tilde{a}_{n-1} + \tilde{b}_{n+1} &=\frac{1}{2} \big((\tilde{a}_{n+1}+\tilde{b}_{n+1}) -(\tilde{a}_{n-1}+\tilde{b}_{n-1}) \big)
\\
&\quad - \frac{1}{2}\big((\tilde{a}_{n+1}-\tilde{b}_{n+1})-2(\tilde{a}_n - \tilde{b}_n) + (\tilde{a}_{n-1}-\tilde{b}_{n-1}) \big).
\end{align*}
Then we have from \eqref{eq:anpmbn_fj} that
\begin{align*}
\tilde{a}_n - \tilde{b}_n - \tilde{a}_{n-1} + \tilde{b}_{n+1} &= -\frac{2a}{s} \big( f_1((n+1)s)- f_1((n-1)s)\big)
\\
&\quad +\frac{2a}{s^2} \big(f_2((n+1)s)-2f_2(ns) + f_2((n-1)s)\big).
\end{align*}
Therefore, by \eqref{q2_series}, we have
\begin{align}
|q_1| &\lesssim \frac{1}{s} |-2a_1 + b_1-2 b_2| + \frac{1}{s} |a_1 -3 a_2 + b_2 - 3 b_3|\nonumber
\\
&\quad +\sum_{n=3}^\infty \frac{s\cosh ns}{ns}\Big|2\frac{f_1((n+1)s)- f_1((n-1)s)}{2s} \nonumber
\\
&\qquad\quad\qquad -
\frac{1}{s^2} \big(f_2((n+1)s)-2f_2(ns) + f_2((n-1)s)\big)\Big|. \label{eq:q1_prefinal_estim}
\end{align}
By applying the mean value theorem, we have
\begin{align}
|q_1| &\lesssim \frac{1}{s} |-2a_1 + b_1-2 b_2| + \frac{1}{s} |a_1 -3 a_2 + b_2 - 3 b_3|\nonumber
\\
&\quad +\sum_{n=3}^\infty \frac{s\cosh ns}{ns}\big|2 f_1'(x_n^*) -f_2''(x_n^{**})\big| \label{eq:q1_pre_estim}
\end{align}
for some $x_n^* \in ((n-1)s,(n+1)s)$ and $x_n^{**}\in ((n-1)s,(n+1)s)$.
By regarding the infinite series in \eqnref{eq:q1_pre_estim} as a Riemann sum, we infer
\begin{align*}
I:= \sum_{n=3}^\infty \frac{s\cosh ns}{ns}\big|2 f_1'(x_n^*) -f_2''(x_n^{**})\big|
\lesssim \int_0^\infty \frac{\cosh x}{x} |2f_1'(x) - f_2''(x)| dx.
\end{align*}
It then follows that
\begin{align*}
I & \lesssim \left(\int_0^1 + \int_1^\infty\right) \frac{\cosh x}{x} |2f_1'(x) -f_2''(x)| dx \\
& = \int_0^1 \frac{\cosh x}{x} |2(f_1'(x)-\frac{1}{2}) - (f_2''(x)-1)| dx + \int_1^\infty \frac{\cosh x}{x} |2f_1'(x) -f_2''(x)| dx \\
& \lesssim \int_0^1 \frac{\cosh x}{x} (x+x) dx + \int_1^\infty \frac{\cosh x}{x} x^3 e^{-2x} dx \\
& \lesssim 1 + \int_1^\infty x^2 e^{-x} dx \lesssim 1,
\end{align*}
where the third inequality follows from \eqnref{eq:Df1_DDf2_asymp}.
We next estimate the first two terms on the right-hand side of \eqref{eq:q1_prefinal_estim}. By Taylor expansions we obtain
\begin{equation}\label{eq:a1a2b1b2b3_estim}
a_2 = \frac{1}{2} \frac{a}{s} + O(s), \quad b_2 = -\frac{3}{2} \frac{a}{s} + O(s),
\quad b_3 = -\frac{3}{4} \frac{a}{s} + O(s).
\end{equation}
These together with \eqnref{a1b1} yield
\begin{align*}
-2 a_1 + b_1 -2 b_2 = O(s), \quad a_1-3a_2+ b_2 - 3 b_3 = O(s).
\end{align*}
Therefore, from \eqref{eq:q1_pre_estim}, we have
\begin{align*}
|q_1|\lesssim 1.
\end{align*}
This completes the proof. \qed
\subsection{Proofs of Theorems \ref{thm:boundedstress_rot} and \ref{thm:boundedstress_v2q2}}
We only prove Theorem \ref{thm:boundedstress_v2q2}. Thanks to the similarity between the stream functions $\Phi_{2}$ and $\Phi_{\mathrm{rot}}$, Theorem \ref{thm:boundedstress_rot} can be proved in exactly the same way.
We first estimate the strain tensor $\mathcal{E}[{\bf v}_2-{\bf U}]$. In this proof, ${\bf U}={\bf U}_{\rm sh}$.
Using \eqref{eq:strain_bipolar1}-\eqref{eq:strain_bipolar3} and \eqnref{eq:W_biharmonic_solution}, one can see that
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}] &=
-K_v \frac{\sinh\zeta}{a}\sin \theta + h(\zeta,\theta)2 c_1 \sinh 2\zeta \sin \theta\nonumber \\
&\quad + h(\zeta,\theta)\sum_{n=2}^\infty \big(\tilde{c}_n \sinh (n+1)\zeta +\tilde{d}_n \sinh (n-1)\zeta\big)\sin n\theta, \\
\mathcal{E}_{\zeta\theta}[{\bf v}_2-{\bf U}] &= K_v \frac{\cosh2\zeta -2\cosh\zeta\cos\theta+ \cos 2\theta}{2a} \nonumber
\\
&\quad + h(\zeta,\theta) d_0\cosh\zeta + h(\zeta,\theta) 2c_1\cosh 2\zeta \cos\theta\nonumber
\\
&\quad + h(\zeta,\theta)\sum_{n=2}^\infty \big(\tilde{c}_n \cosh (n+1)\zeta +\tilde{d}_n \cosh (n-1)\zeta\big)\cos n\theta, \\
\mathcal{E}_{\theta\theta}[{\bf v}_2-{\bf U}] &= -\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}],
\end{align}
where
$$
\tilde{c}_n = n(n+1) c_n, \quad \tilde{d}_n = n(n-1) d_n.
$$
Using the hyperbolic identities
\begin{align*}
&\sinh(n+1)\zeta + \sinh(n-1)\zeta = 2\sinh n \zeta \cosh \zeta,
\\
&\sinh(n+1)\zeta - \sinh(n-1)\zeta = 2\cosh n \zeta \sinh \zeta,
\end{align*}
we can rewrite $\mathcal{E}_{\zeta\zeta}$ and $ \mathcal{E}_{\zeta\theta}$ as
\begin{align}
&\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}] =
-K_v \frac{\sinh\zeta}{a}\sin \theta + h(\zeta,\theta)2 c_1 \sinh 2\zeta \sin \theta\nonumber
\\
&\quad + h(\zeta,\theta)\sum_{n=2}^\infty \big( ({\tilde{c}_n+\tilde{d}_n}) \sinh n \zeta \cosh \zeta + ({\tilde{c}_n- \tilde{d}_n}) \cosh n\zeta \sinh \zeta \big)\sin n\theta, \label{eq:Ecal_GzGz_v2_symm}
\end{align}
and
\begin{align}
&\mathcal{E}_{\zeta\theta}[{\bf v}_2-{\bf U}] =
K_v \frac{\cosh2\zeta -2\cosh\zeta\cos\theta+ \cos 2\theta}{2a} \nonumber
\\
&\quad + h(\zeta,\theta) d_0\cosh\zeta + h(\zeta,\theta) 2c_1\cosh 2\zeta \cos\theta\nonumber
\\
&\quad + h(\zeta,\theta)\sum_{n=2}^\infty \big( ({\tilde{c}_n+\tilde{d}_n}) \cosh n \zeta \cosh \zeta + ({\tilde{c}_n- \tilde{d}_n}) \sinh n\zeta \sinh \zeta \big)\cos n\theta.
\end{align}
From the expressions of $c_n$ and $d_n$ given in Lemma \ref{lem:Phi_2}, one can see that, for $n \geq 2$,
\begin{align*}
{\tilde{c}_n+\tilde{d}_n} &=\frac{4a}{s} \frac{ns e^{-ns}\cosh n s - (ns)^2\eta_{2} + (ns)^3 \eta_{1}}{\sinh 2ns + 2ns\eta_{2} } - 4K_v s \eta_{1} \frac{ n s }{\sinh 2ns + 2ns\eta_{2}},
\\
{\tilde{c}_n-\tilde{d}_n} &=\frac{4a}{s^2} \frac{(ns)^2 e^{-ns}\cosh n s - (ns)^3\eta_{2}+ (ns)^2 s^2\eta_{1}}{\sinh 2ns + 2ns\eta_{2}} + 4K_v \frac{ e^{-ns}\sinh ns + n s\eta_{2}}{\sinh 2ns + 2ns\eta_{2}},
\end{align*}
where $\eta_1$ and $\eta_2$ are the quantities given in \eqref{eq:def_eta1_eta2}.
Define, for $0<x<\infty$,
\begin{align}
g_1(x) &:= \frac{x e^{-x}\cosh x - x^2\eta_{2} + x^3 \eta_{1}}{\sinh 2x + 2x\eta_{2}}, \nonumber
\\
g_2(x) &:= \frac{x}{\sinh 2x + 2x\eta_{2}}, \nonumber
\\
g_3(x) &:=\frac{x^2 e^{-x}\cosh x - x^3\eta_{2}+ x^2 s^2\eta_{1}}{\sinh 2x + 2x\eta_{2}}, \nonumber
\\
g_4(x) &:=\frac{e^{-x}\sinh x+ x \eta_2}{\sinh 2x + 2x\eta_{2}}. \label{gjdef}
\end{align}
Then, we have
\begin{align}
{\tilde{c}_n+\tilde{d}_n} &=\frac{4a}{s} g_1(ns) - 4K_v s \eta_{1} g_2(ns),\nonumber
\\
{\tilde{c}_n-\tilde{d}_n} &=\frac{4a}{s^2} g_3(ns) + 4K_v g_4(ns).\label{an_bn_tilde_even_odd}
\end{align}
It follows from \eqref{eq:Ecal_GzGz_v2_symm} that
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}] &=
-K_v \frac{\sinh\zeta}{a}\sin \theta + h(\zeta,\theta)2 c_1 \sinh 2\zeta \sin \theta\nonumber
\\
&\quad + \frac{4a}{s}h(\zeta,\theta)\sum_{n=2}^\infty \big( g_1(ns) \sinh n \zeta \cosh \zeta \big)\sin n\theta\nonumber
\\&\quad
- 4 K_v s\eta_1 h(\zeta,\theta)\sum_{n=2}^\infty \big( g_2(ns) \sinh n \zeta \cosh \zeta \big)\sin n\theta \nonumber
\\
&\quad + \frac{4a}{s^2}h(\zeta,\theta)\sum_{n=2}^\infty \big( g_3(ns) \cosh n\zeta \sinh \zeta \big)\sin n\theta\nonumber
\\
&\quad
+ 4K_v h(\zeta,\theta)\sum_{n=2}^\infty \big( g_4(ns) \cosh n\zeta \sinh \zeta \big)\sin n\theta.
\end{align}
If we define
\begin{align*}
&T_j^{++}(\zeta,\theta) := a h\sum_{n=2}^\infty g_j(ns) \cosh n\zeta \cos n\theta,
\\
&T_j^{+-}(\zeta,\theta) := a h\sum_{n=2}^\infty g_j(ns) \cosh n\zeta \sin n\theta,
\\
&T_j^{-+}(\zeta,\theta) := a h \sum_{n=2}^\infty g_j(ns) \sinh n\zeta \cos n\theta,
\\
&T_j^{--}(\zeta,\theta) := a h \sum_{n=2}^\infty g_j(ns) \sinh n\zeta \sin n\theta,
\end{align*}
then the component $\mathcal{E}_{\zeta\zeta} $ can be rewritten as
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}]
&=T_0 + \frac{4}{s} \cosh \zeta T_1^{--} - \frac{4K_v s \eta_1}{a}\cosh \zeta T_2^{--} \nonumber
\\&\quad + \frac{4}{s^2} \sinh \zeta T_3^{+-} + \frac{4K_v}{a} \sinh \zeta T_4^{+-},
\label{eq:Ecal_GzGz_Tj_v2}
\end{align}
where
\begin{equation}\label{eq:T0_def}
T_0 =-K_v \frac{\sinh\zeta}{a}\sin\theta + h(\zeta,\theta)2 c_1 \sinh 2\zeta \sin \theta.
\end{equation}
Similarly, $\mathcal{E}_{\zeta\theta}$ can be written as
\begin{align}
\mathcal{E}_{\zeta\theta}[{\bf v}_2-{\bf U}]
&=\widetilde{T}_0 + \frac{4}{s} \cosh \zeta T_1^{++} - \frac{4K_v s \eta_1}{a}\cosh \zeta T_2^{++} \nonumber
\\&\quad + \frac{4}{s^2} \sinh \zeta T_3^{-+} + \frac{4K_v}{a} \sinh \zeta T_4^{-+},
\label{eq:Ecal_GzGt_Tj_v2}
\end{align}
where
\begin{align}
\widetilde{T}_0 &= K_v \frac{\cosh2\zeta -2\cosh\zeta\cos\theta+ \cos 2\theta}{2a} \nonumber
\\& \quad + h(\zeta,\theta) d_0\cosh\zeta + h(\zeta,\theta) 2c_1\cosh 2\zeta \cos\theta.
\label{eq:T0p_def}
\end{align}
The proof of the following lemma is given in Appendix \ref{sec:appendxE}.
\begin{lemma}\label{lem:gj_estim} For $j=1,2,3,4$, we have
\begin{equation}\label{gjest}
|g_j(x)|+ |g_j'(x)| + |g_j''(x)|\lesssim (1+x^3)e^{-2x}, \quad 0<x<\infty.
\end{equation}
\end{lemma}
We omit the proof of the following lemma since it can be proved using Lemma \ref{lem:gj_estim} in the
same way as Lemma \ref{lem:pre_asymp_Sj}:
\begin{lemma}\label{lem:pre_asymp_Tj}
For $j=1,2,3,4$,
\begin{align}
T_{j}^{+ +} &=- \frac{ 1}{2} g_j(2s)\cosh 2\zeta \cos \theta + g_j(2s)\cosh 2\zeta \cos 2\theta \nonumber \\
&\quad-\frac{1}{2} g_j(3s)\cosh 3\zeta \cos 2\theta + O(s), \\
T_{j}^{+ -} &=- \frac{ 1}{2} g_j(2s)\cosh 2\zeta \sin \theta +g_j(2s)\cosh 2\zeta \sin 2\theta \nonumber \\
&\quad -\frac{1}{2} g_j(3s)\cosh 3\zeta \sin 2\theta + O(s), \\
T_{j}^{- \pm}&=O(s).
\end{align}
\end{lemma}
We infer using the definition \eqnref{gjdef} of $g_j$ that for $k=2,3,$
\begin{align}
&g_1(ks) = \frac{1}{4} + O(s),
\quad
g_2(ks) = \frac{1}{4} + O(s), \nonumber
\\
&g_3(ks) = \frac{k}{4}s + O(s^2),
\quad
g_4(ks) = \frac{1}{2} + O(s). \label{eq:gj_zero_asymp}
\end{align}
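These limiting values can be confirmed numerically from \eqnref{gjdef} (an illustration only; the variable names are ours):

```python
import math

s = 1e-3
eta1 = math.sinh(s) ** 2 / s ** 2
eta2 = math.sinh(2 * s) / (2 * s)
D = lambda x: math.sinh(2 * x) + 2 * x * eta2   # common denominator in gjdef
g1 = lambda x: (x * math.exp(-x) * math.cosh(x) - x**2 * eta2 + x**3 * eta1) / D(x)
g2 = lambda x: x / D(x)
g3 = lambda x: (x**2 * math.exp(-x) * math.cosh(x) - x**3 * eta2 + x**2 * s**2 * eta1) / D(x)
g4 = lambda x: (math.exp(-x) * math.sinh(x) + x * eta2) / D(x)

for k in (2, 3):
    assert abs(g1(k * s) - 0.25) < 0.05
    assert abs(g2(k * s) - 0.25) < 0.05
    assert abs(g4(k * s) - 0.50) < 0.05
    assert abs(g3(k * s) / s - k / 4) < 0.05   # g_3(ks) vanishes at rate s
```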
Thus, we have
$$
T_3^{+-} = O(s)
$$
and
\begin{align*}
T_4^{+-} &= - \frac{1}{4} \cosh 2\zeta \sin \theta + \frac{1}{2}\cosh 2\zeta \sin 2\theta-\frac{1}{4} \cosh 3\zeta \sin 2\theta + O(s) \\
&= - \frac{1}{4}(\sin \theta - \sin 2\theta) + O(s),
\end{align*}
as $s \to 0$. It then follows from \eqref{eq:Ecal_GzGz_Tj_v2} that
\begin{equation}\label{Ezzest}
\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}] = T_0 + \frac{K_v \zeta}{a}(- \sin\theta + \sin 2\theta ) +O(1).
\end{equation}
Since
\begin{equation}\label{c1est}
c_1 = a(-1+\coth 2s) + K_v \frac{1}{1+e^{2s}} = \frac{1-s}{2}K_v + \frac{a}{2s} +O(s),
\end{equation}
it follows from \eqref{eq:T0_def} that
\begin{align}
T_0 &=-K_v \frac{\sinh\zeta}{a}\sin\theta + \frac{\cosh \zeta}{a} 2 c_1 \sinh 2\zeta \sin \theta - \frac{\cos\theta}{a}2 c_1 \sinh 2\zeta \sin \theta\nonumber
\\
&= - K_v \frac{\zeta}{a} \sin\theta + 2 K_v \frac{\zeta}{a}\sin\theta - K_v \frac{\zeta}{a} \sin 2\theta + O(1)\nonumber
\\
&= K_v \frac{\zeta}{a}\sin\theta - K_v \frac{\zeta}{a} \sin 2\theta + O(1). \label{eq:T0_asymp}
\end{align}
This together with \eqnref{Ezzest} yields the desired estimate
\begin{equation}\label{Ezzest2}
\mathcal{E}_{\zeta\zeta}[{\bf v}_2-{\bf U}] = O(1).
\end{equation}
Likewise, we use \eqref{eq:Ecal_GzGt_Tj_v2}, Lemma \ref{lem:pre_asymp_Tj} and \eqnref{eq:gj_zero_asymp} to ensure
$$
\mathcal{E}_{\zeta\theta}[{\bf v}_2-{\bf U}]
=\widetilde{T}_0 + (-\frac{1}{2s} + \frac{K_v s}{2a})\cos\theta + (\frac{1}{2s} - \frac{K_v s}{2a})\cos 2\theta + O(1).
$$
Then, using the estimate
\begin{equation}\label{d0est}
d_0 = \frac{a}{\sinh s \cosh s +s} -K_v \frac{\sinh^2 s}{s+\cosh s \sinh s} = \frac{a}{2s}- K_v \frac{s}{2} + O(s)
\end{equation}
in addition to \eqnref{c1est}, we obtain
\begin{align}
\widetilde{T}_0 &= \frac{K_v}{2a} - \frac{K_v}{a} \cos\theta +\frac{1}{2}\frac{K_v}{a}\cos 2\theta + \frac{1-\cos\theta}{a}(\frac{a}{2s} - \frac{K_v s}{2})\nonumber
\\&\quad + \frac{1-\cos\theta}{a} (K_v -s K_v + \frac{a}{s})\cos\theta + O(1)
\nonumber
\\
&=\frac{K_v}{a} ( \frac{1}{2}) + \frac{1}{2s}-\frac{K_v s}{2a} - \frac{1}{2}(\frac{K_v}{a} - \frac{K_v s}{a} + \frac{1}{s}) + ( - \frac{1}{2s}+\frac{K_v s}{2a} - \frac{K_v s}{a} + \frac{1}{s}) \cos\theta
\nonumber
\\
&\quad + (\frac{1}{2}\frac{K_v}{a}- \frac{1}{2}(\frac{K_v}{a} - \frac{K_v s}{a} + \frac{1}{s}) ) \cos 2\theta +O(1)
\nonumber
\\
&= ( \frac{1}{2s} - \frac{K_v s}{2a})\cos\theta + (\frac{K_v s }{2a} -\frac{1}{2s})\cos2\theta +O(1),\label{eq:T0p_asymp}
\end{align}
where we have used the following identity for the second equality:
$$
(1-\cos\theta)\cos\theta = -\frac{1}{2} + \cos\theta -\frac{1}{2}\cos2\theta.
$$
Thus, we have
\begin{align*}
\mathcal{E}_{\zeta\theta}[{\bf v}_2-{\bf U}] = O(1).
\end{align*}
So far we proved that
\begin{align*}
\| \mathcal{E}[{\bf v}_2] \|_\infty \lesssim 1.
\end{align*}
We now prove that the pressure $q_2$ is bounded independently of $\delta$. It was shown in \eqnref{qtwo} that
\begin{align}
q_2 &=\frac{2\mu }{a} d_0 {w}_1 + \frac{2\mu}{a}\sum_{n=1}^\infty((n+1)c_n - (n-1)d_n ){w}_n \nonumber
\\&\quad - \frac{2\mu}{a}\sum_{n=1}^\infty n c_n {w}_{n+1}
+ \frac{2\mu}{a}\sum_{n=1}^\infty n d_n{w}_{n-1} , \nonumber
\end{align}
where $w_n(\zeta,\theta):=\sinh n\zeta \sin n \theta$. It can be rewritten as
\begin{align}
q_2 &=
\frac{2\mu }{a} (d_0 + 2(c_1 + d_2)) w_1 +\frac{2\mu}{a}\sum_{n=2}^\infty \frac{1}{n}(\tilde{c}_n - \tilde{d}_n - \tilde{c}_{n-1} + \tilde{d}_{n+1})w_n,
\label{q2_series2}
\end{align}
where $\tilde{c}_n = n(n+1) c_n$ and $\tilde{d}_n = n(n-1) d_n$ for $n\geq 2$.
Note that
\begin{align*}
\tilde{c}_n - \tilde{d}_n - \tilde{c}_{n-1} + \tilde{d}_{n+1} &=\frac{1}{2} (\tilde{c}_{n+1}+\tilde{d}_{n+1}) -\frac{1}{2}(\tilde{c}_{n-1}+\tilde{d}_{n-1})
\\
&\quad + \frac{1}{2}\big(2(\tilde{c}_n - \tilde{d}_n) - (\tilde{c}_{n-1}-\tilde{d}_{n-1}) -(\tilde{c}_{n+1}-\tilde{d}_{n+1})\big).
\end{align*}
It then follows from \eqref{an_bn_tilde_even_odd} that
\begin{align}
\tilde{c}_n - \tilde{d}_n - \tilde{c}_{n-1} + \tilde{d}_{n+1} &= \frac{2a}{s} ( g_1((n+1)s) - g_1((n-1)s)) \nonumber
\\
&\quad -2K_v s^2 \eta_1 ( g_2((n+1)s)- g_2((n-1)s)) \nonumber
\\
&\quad -2a (g_3((n+1)s)-2g_3(ns) + g_3((n-1)s)) \nonumber
\\
&\quad
- 2K_v s^2 ( g_4((n+1)s) - 2g_4(ns) + g_4((n-1)s)). \label{+-+-cd}
\end{align}
By the mean value theorem, there are $x_{j,n} \in ((n-1)s, (n+1)s)$ such that
$$
|g_j((n+1)s) - g_j((n-1)s)| \lesssim s |g_j'(x_{j,n})|, \quad j=1,2,
$$
and
$$
|g_j((n+1)s) - 2g_j(ns) + g_j((n-1)s)| \lesssim s^2 |g_j''(x_{j,n})|, \quad j=3,4.
$$
We then infer from \eqnref{gjest} that
$$
|g_j((n+1)s) - g_j((n-1)s)| \lesssim s (1+(ns)^3)e^{-2ns}, \quad j=1,2,
$$
and
$$
|g_j((n+1)s) - 2g_j(ns) + g_j((n-1)s)| \lesssim s^2 (1+(ns)^3)e^{-2ns}, \quad j=3,4.
$$
Since $a\approx s$ and $K_v =O( s^{-1})$, it then follows from \eqnref{+-+-cd} that
$$
|\tilde{c}_n - \tilde{d}_n - \tilde{c}_{n-1} + \tilde{d}_{n+1}| \lesssim s (1+(ns)^3)e^{-2ns},
$$
and from \eqnref{q2_series2} that
\begin{equation}\label{2000}
|q_2| \lesssim (|d_0| + 2|c_1 + d_2|) + \sum_{n=2}^\infty \frac{1}{n}(1+(ns)^3)e^{-2ns} \sinh n \zeta.
\end{equation}
One can see from \eqnref{d0est} that
$$
d_0 = \frac{a}{2s}-\frac{K_v s}{2} +O(s) = O(1).
$$
One can also see from expressions of $c_1$ and $d_2$ in Lemma \ref{lem:Phi_2} that
$$
c_1 = \frac{1}{2}K_v +O(1), \quad d_2= - \frac{1}{2}K_v + O(1) ,
$$
and hence $c_1+d_2 = O(1)$. Thus we have from \eqnref{2000}
\begin{align*}
|q_2| \lesssim 1 + \sum_{n=2}^\infty \frac{s\sinh n s}{n s}(1+(ns)^3)e^{-2ns} \lesssim 1+ \int_0^\infty \frac{\sinh t }{t}(1+t^3)e^{-2t} dt \lesssim 1.
\end{align*}
This completes the proof.
\qed
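As a numerical sanity check (not part of the proof), one can confirm that the sum appearing in the last display stays bounded uniformly as $s\to0$, in accordance with the integral comparison:

```python
import math

# Truncated evaluation of sum_{n>=2} (s sinh(ns)/(ns)) (1+(ns)^3) e^{-2ns};
# by comparison with int_0^infty (sinh t / t)(1+t^3) e^{-2t} dt it should
# remain bounded independently of s.
def series_value(s):
    total, n = 0.0, 2
    while n * s <= 50:  # remaining terms are negligible
        t = n * s
        total += (s * math.sinh(t) / t) * (1 + t**3) * math.exp(-2 * t)
        n += 1
    return total

for s in (0.1, 0.01, 0.001):
    print(s, series_value(s))
```

The printed values approach the comparison integral from below and never exceed a fixed constant.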
\section*{Concluding remarks}
In this paper, we have investigated the problem of quantifying the stress concentration in the narrow region between two rigid cylinders and derived precise estimates for the stress blow-up in the Stokes system when the inclusions are circular cylinders of the same radii. We have shown that, even though the divergence of the velocity is constrained to be zero, either the pressure component or the shear stress component of the stress tensor always blows up, and that the blow-up rate is $\delta^{-1/2}$, where $\delta$ is the distance between the cylinders. This blow-up rate coincides with the ones for electro-statics and elasto-statics. In the course of deriving these results, it is proved that the blow-up of the stress tensor does not occur when the no-slip boundary condition is prescribed. We also derived an asymptotic decomposition formula which explicitly characterizes the singular behaviour of the solution. This formula may play an important role in computing the Stokes flow in the presence of closely located rigid cylinders.
Since the method of bipolar coordinates is employed, the extension of this paper's results to circular cylinders with different radii should pose no serious difficulty. However, it is quite challenging to extend them to the more general case when the cross sections of the cylinders are strictly convex. In particular, proving the absence of blow-up for the problem with the no-slip boundary condition on convex boundaries already seems quite challenging.
\appendix
\section{Proof of Lemma \ref{lem:grad_eGz_eGt_estim}}\label{appendixC}
We have from \eqref{exey} that
$$
{\bf e}_\zeta = \alpha {\bf e}_x - \beta {\bf e}_y, \quad {\bf e}_\theta = -\beta {\bf e}_x - \alpha {\bf e}_y.
$$
So we have
$$
|\nabla {\bf e}_\zeta| + |\nabla {\bf e}_\theta| = 2( |\nabla \alpha| + |\nabla \beta|) \lesssim |h \partial_\zeta \alpha| + |h\partial_\theta\alpha| + |h\partial_\zeta\beta| +|h\partial_\theta\beta| .
$$
Since
$$
h \partial_\zeta \alpha = -\frac{\sinh\zeta \sin^2\theta}{a(\cosh\zeta-\cos\theta)},
$$
one can see that
$$
|h \partial_\zeta \alpha| \lesssim |\theta| \le |\zeta| + |\theta|.
$$
Similarly one can show that
$$
|h\partial_\theta\alpha| , |h\partial_\zeta\beta| , |h\partial_\theta\beta| \lesssim |\zeta| + |\theta|.
$$
This completes the proof. \qed
\section{Proof of Lemma \ref{lem:asymp_I1_J1}}\label{appendixA}
According to the transition relation \eqnref{Gsrel}, the stress tensor $\sigma[{\bf h}_1,p_1]$ is given by
$$
\sigma[{\bf h}_1,p_1] = \Xi \begin{bmatrix}
\sigma_{1,\zeta\zeta} & \sigma_{1,\zeta\theta}
\\
\sigma_{1,\zeta\theta} & \sigma_{1,\theta\theta}
\end{bmatrix} \Xi.
$$
In particular, we have
\begin{equation}
{\bf e}_\zeta \cdot \sigma[{\bf h}_1,p_1] {\bf e}_\zeta = \sigma_{1,\zeta\zeta}, \quad {\bf e}_\zeta \cdot \sigma[{\bf h}_1,p_1] {\bf e}_\theta = \sigma_{1,\zeta\theta}.
\end{equation}
On $\partial D_2$ which is parametrized by $\{\zeta=s\}$, the outward unit normal $\nu$ is given by
$$
\nu |_{\partial D_2} = - {\bf e}_\zeta|_{\zeta=s},
$$
and, according to \eqnref{exey}, ${\bf e}_x$ is expressed as ${\bf e}_x=\alpha(\zeta,\theta) {\bf e}_{\zeta} -\beta(\zeta,\theta) {\bf e}_\theta$, where $\alpha$ and $\beta$ are defined by \eqnref{pqdef}.
So, we have
\begin{align*}
{\bf e}_x \cdot \sigma [{\bf h}_1,p_1]\nu = -\big( \alpha(s,\theta) \sigma_{1,\zeta\zeta} -\beta(s,\theta) \sigma_{1,\zeta\theta} \big).
\end{align*}
Due to \eqnref{lineele},
we have
\begin{equation}\label{A2}
\mathcal{I}_1 = - \int_{-\pi}^\pi \big( \alpha(s,\theta) \sigma_{1,\zeta\zeta} -\beta(s,\theta) \sigma_{1,\zeta\theta} \big) h(s,\theta)^{-1} d\theta.
\end{equation}
We now compute $( \alpha(s,\theta) \sigma_{1,\zeta\zeta} -\beta(s,\theta) \sigma_{1,\zeta\theta} ) h(s,\theta)^{-1}$. It follows from the formulas \eqnref{eq:strain_bipolar1} and \eqnref{eq:strain_bipolar3} of the strain in bipolar coordinates, the strain-stress relation \eqref{eq:sigma_formula_bipolar}, and the formula \eqref{eq:stream_singular1} for the stream function that
\begin{align}
\sigma_{1,\zeta\zeta}|_{\zeta=s} &= \frac{A_2 \mu}{a} (2+\mbox{sech}\, 2s - 4 \cosh^3 s\, \mbox{sech}\, 2s \cos\theta + \cos 2\theta) , \label{4000}
\\
\sigma_{1,\zeta\theta}|_{\zeta=s} &= -\frac{2A_2\mu}{a}(\cosh s-\cos\theta)\tanh 2s \sin\theta. \label{4010}
\end{align}
Using the definitions \eqnref{pqdef} of $\alpha$ and $\beta$, \eqnref{4000} and \eqnref{4010}, we arrive at
\begin{align}
& \big( \alpha(s,\theta) \sigma_{1,\zeta\zeta} -\beta(s,\theta) \sigma_{1,\zeta\theta} \big) h(s,\theta)^{-1} = 2\mu A_1 (-1+\cosh s \,\mbox{sech}\,2s \cos\theta). \label{Icalint}
\end{align}
Then by integrating both sides of \eqnref{Icalint} over $[-\pi,\pi]$, we obtain
\begin{equation}\label{Icalone2}
\mathcal{I}_1 = -4\pi \mu A_1 = -\frac{4\pi \mu}{2s-\tanh 2s} = -\frac{3\pi\mu}{2} \frac{1}{s^3} + O(s^{-1}).
\end{equation}
Thanks to the asymptotic formula \eqnref{sGd} of $s$ as $\delta$ tends to $0$, we get the asymptotic formula \eqnref{Icalone} for $\mathcal{I}_1$.
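The leading-order behaviour in \eqnref{Icalone2} can be probed numerically (with $\mu$ set to $1$; this is only an illustration of the expansion $2s-\tanh 2s=\tfrac{8}{3}s^3+O(s^5)$, not part of the argument):

```python
import math

for s in (0.1, 0.05, 0.01):
    exact = -4 * math.pi / (2 * s - math.tanh(2 * s))  # -4*pi*mu*A_1 with mu = 1
    leading = -(3 * math.pi / 2) / s**3                # leading term of the expansion
    print(s, exact, leading, exact - leading)          # difference is O(1/s)
```

The difference between the exact value and the leading term grows only like $s^{-1}$, consistent with the stated remainder.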
Next we consider $\mathcal{J}_1$. Similarly to the case of $\mathcal{I}_1$, we have
\begin{align*}
\int_{\partial D_2} {\bf U} \cdot \sigma[{\bf h}_1,p_1] \nu \, dl &
=\int_{\partial D_2} {\bf U}\cdot \sigma[{\bf h}_1,p_1] (-{\bf e}_\zeta) \, dl \\
&= - \int_{-\pi}^\pi (U_\zeta {\bf e}_\zeta + U_\theta {\bf e}_\theta)\cdot (\sigma_{1,\zeta\zeta} {\bf e}_\zeta + \sigma_{1,\zeta\theta}{\bf e}_\theta ) h(s,\theta)^{-1} d\theta \\
&=- \int_{-\pi}^\pi (U_\zeta \sigma_{1,\zeta\zeta} + U_\theta \sigma_{1,\zeta\theta})|_{\zeta=s} h(s, \theta)^{-1} d\theta .
\end{align*}
Since $U_\zeta=U \cdot {\bf e}_\zeta$ and $U_\theta=U \cdot {\bf e}_\theta$, it follows from \eqref{eq:bipolar_x_y} and \eqref{pqdef} that
\begin{align}
U_\zeta|_{\zeta=s} &=\frac{a \sinh s \left(1-\cosh s \cos \theta+\sin ^2\theta\right)}{(\cosh s - \cos \theta)^2},\nonumber \\
U_\theta|_{\zeta=s} &=\frac{a \sin \theta \left(1-\cosh s \cos \theta-\sinh^2 s\right)}{(\cosh s- \cos \theta)^2}.
\label{eq:UGzUGt_s}
\end{align}
It is convenient to use the following functions:
\begin{equation}\label{qnk}
q_n = q_n(s, \theta) := \frac{\cos n\theta}{\cosh s-\cos\theta}, \quad n=0,1,2,\ldots.
\end{equation}
We obtain, by using \eqref{4000}, \eqref{4010} and \eqref{eq:UGzUGt_s}, that
\begin{align}
&(U_\zeta \sigma_{1,\zeta\zeta} + U_\theta \sigma_{1,\zeta\theta})h(s,\theta)^{-1} \nonumber \\
&=-a\mu A_1 \,\mbox{sech}\,2s \sinh s ((-1+2\cosh 2s)q_0 -2\cosh s q_1 + q_2) . \label{eq:USh}
\end{align}
As before, by integrating both sides of the equality in \eqref{eq:USh} over $[-\pi,\pi]$, we arrive at
\begin{align}
\mathcal{J}_1 &=
-a\mu A_1 \,\mbox{sech}\,2s \sinh s ((-1+2\cosh 2s)\mathcal{Q}_0 -2\cosh s \mathcal{Q}_1 + \mathcal{Q}_2), \label{J1_pre}
\end{align}
where
\begin{equation}\label{eq:def_Qnk}
\mathcal{Q}_{n}=\mathcal{Q}_n(s) := \int_{-\pi}^\pi q_n(s, \theta) d\theta.
\end{equation}
We also obtain the following asymptotic expansion of $\mathcal{Q}_n$, whose proof will be given after the current proof is completed.
\begin{lemma}\label{Qcalest}
It holds that
\begin{equation}\label{Qn}
\mathcal{Q}_n(s) = \frac{2\pi}{s} - 2n\pi + (n^2-1/3)\pi s + O(s^2), \quad \mbox{as } s \to 0.
\end{equation}
\end{lemma}
Then, together with the asymptotic formula \eqnref{sGd} of $s$ as $\delta$ tends to $0$, applying Lemma \ref{Qcalest} to \eqref{J1_pre} yields \eqnref{Jcalone}.
\qed
\noindent{\sl Proof of Lemma \ref{Qcalest}}.
One can easily see that $\mathcal{Q}_n(s)$ is the real part of the following contour integral:
$$
2i \int_C \frac{z^n}{z^2-2 z \cosh s +1} dz,
$$
where $C$ is the unit circle. Then the residue theorem yields
\begin{equation}\label{eq:Qcal_n}
\mathcal{Q}_n(s) = \frac{2\pi e^{-n s}}{\sinh s},
\end{equation}
from which \eqnref{Qn} follows.
\qed
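The closed form \eqnref{eq:Qcal_n} can also be verified numerically by quadrature (a sanity check, independent of the residue computation); since the integrand is smooth and periodic, the trapezoidal rule converges very rapidly:

```python
import math

def Q_numeric(n, s, m=4096):
    # trapezoidal rule on [-pi, pi) for the periodic integrand of Q_n
    h = 2 * math.pi / m
    return h * sum(math.cos(n * (-math.pi + k * h))
                   / (math.cosh(s) - math.cos(-math.pi + k * h))
                   for k in range(m))

s = 0.3
for n in range(4):
    print(n, Q_numeric(n, s), 2 * math.pi * math.exp(-n * s) / math.sinh(s))
```

The two columns agree to machine precision for moderate $s$.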
\section{The asymptotics of the boundary integrals}\label{sec:appendixB}
Here we compute the asymptotics of the boundary integrals $\mathcal{I}_{22},\mathcal{I}_{23}, \mathcal{J}_2, \mathcal{I}_{\mathrm{rot}}$, and $\mathcal{J}_{\mathrm{rot}}$, and prove Lemma \ref{lem:asymp_I2_J2}.
\subsection{A lemma}
The following result will be used to prove Lemma \ref{lem:asymp_I2_J2}.
\begin{lemma} \label{lem:stream_integral}
Suppose that a solution $({\bf v},q)$ to the Stokes system on the exterior region $D^e$ satisfies $({\bf v},q)\in \mathcal{M}$, and that its corresponding stream function $\Psi$ is given by
\begin{align}
(h\Psi)(\zeta,\theta) &= K (\cosh\zeta - \cos \theta)\ln (2\cosh\zeta-2\cos\theta)
+a_0 \cosh \zeta + d_0 \zeta \sinh \zeta \nonumber
\\
& \quad +\sum_{n=1}^\infty \big(a_n \cosh (n+1) \zeta + b_n \cosh (n-1)\zeta \big)\cos n\theta.
\quad
\end{align}
Then we have the following formulas for the boundary integrals:
\begin{align}
&\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2 \cdot \sigma[{\bf v},q]\nu = d_0 4\pi \mu,
\\
&\int_{\partial D_2} \mbox{\boldmath $\Gy$}_3 \cdot \sigma[{\bf v},q]\nu = K 4\pi \mu a,
\end{align}
where $a$ is the number defined in \eqnref{adef}.
\end{lemma}
\proof
Let us write the stress tensor $\sigma[{\bf v},q]$ and the strain tensor $\mathcal{E}[{\bf v}]$ in terms of bipolar coordinates as
$$
\sigma[{\bf v},q] = \Xi \begin{bmatrix}
\sigma_{\zeta\zeta} & \sigma_{\zeta\theta}
\\
\sigma_{\zeta\theta} & \sigma_{\theta\theta}
\end{bmatrix} \Xi,\quad \mathcal{E}[{\bf v}] = \Xi \begin{bmatrix}
\mathcal{E}_{\zeta\zeta} & \mathcal{E}_{\zeta\theta}
\\
\mathcal{E}_{\zeta\theta} & \mathcal{E}_{\theta\theta}
\end{bmatrix} \Xi.
$$
One can show, in the same way as \eqnref{A2} was derived, that
\begin{align}
\mathcal{K}_2:=\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2\cdot \sigma[{\bf v},q]\nu = \int_{-\pi}^\pi \big ( \beta(s,\theta) \sigma_{\zeta\zeta} + \alpha(s,\theta) \sigma_{\zeta\theta} \big) h(s,\theta)^{-1} d\theta.
\end{align}
Using \eqref{eq:Be_Gt_formula}, one can also see that
\begin{align}
\mathcal{K}_3:=\int_{\partial D_2} \mbox{\boldmath $\Gy$}_3\cdot \sigma[{\bf v},q]\nu &= -\frac{a}{\tanh s}\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2\cdot \sigma[{\bf v},q] \nu -\frac{a}{\sinh s} \int_{\partial D_2}{\bf e}_\theta \cdot \sigma[{\bf v},q]\nu \nonumber\\
&=-\frac{a}{\tanh s}\mathcal{K}_2+ \frac{a}{\sinh s} \int_{-\pi}^\pi \sigma_{\zeta\theta}|_{\zeta=s} h(s,\theta)^{-1}d\theta. \label{eq:K3_identity}
\end{align}
We assume for a moment that the stream function $\Psi$ is given by
\begin{equation}\label{eq:Psi_temp1}
(h\Psi)(\zeta,\theta) = K (\cosh\zeta - \cos \theta)\ln (2\cosh\zeta-2\cos\theta).
\end{equation}
Applying the formula \eqnref{eq:Laplacian_Psi} for the Laplacian in bipolar coordinates, we see that $\mu\Delta\Psi=0$. Together with the relation \eqref{eq:pressure_bipolar} between the pressure and the stream function and the condition $q\rightarrow 0$ as $|{\bf x}|\rightarrow \infty$, this implies that the corresponding pressure $q=0$. Then, by \eqnref{eq:strain_bipolar1}-\eqnref{eq:strain_bipolar3} (the strain-stream function relation) and \eqref{eq:sigma_formula_bipolar} (the stress-strain relation), we obtain
\begin{align}
\sigma_{\zeta\zeta}|_{\zeta=s} &= -K\frac{2\mu}{a}\sinh s\sin\theta, \label{eq:Szz_temp1}
\\
\sigma_{\zeta\theta}|_{\zeta=s} &=\frac{K\mu}{a}(\sinh^2 s -\sin^2\theta + (\cosh s-\cos\theta)^2). \label{eq:Szt_temp1}
\end{align}
We also have
\begin{align*}
\big ( \beta(s,\theta) \sigma_{\zeta\zeta} + \alpha(s,\theta) \sigma_{\zeta\theta} \big)|_{\zeta=s}h(s,\theta)^{-1} = -2K\mu \cosh s\cos\theta.
\end{align*}
Then, by integrating over $[-\pi,\pi]$, we arrive at
$$
\mathcal{K}_2 = 0.
$$
We now consider $\mathcal{K}_3$. We see from \eqref{eq:K3_identity} and \eqref{eq:Szt_temp1} that
\begin{align*}
\mathcal{K}_3&=0 +\frac{a}{\sinh s} {K\mu} \int_{-\pi}^\pi \big(\frac{\sinh^2 s - \sin^2 \theta}{\cosh s-\cos \theta} + \cosh s - \cos\theta\big)\,d\theta
\\
&= \frac{a}{\sinh s} {K\mu}( 2\pi \sinh s - 2\pi e^{-s} + 2\pi \cosh s )
\\
&=K 4\pi\mu a,
\end{align*}
where we have used \eqref{eq:Qcal_n} for the second equality.
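The integral evaluation in the last display can be confirmed numerically (a sanity check only): the integral equals $2\pi\sinh s - 2\pi e^{-s} + 2\pi\cosh s = 4\pi\sinh s$.

```python
import math

def integrand(t, s):
    # integrand appearing in the computation of K_3
    return ((math.sinh(s)**2 - math.sin(t)**2) / (math.cosh(s) - math.cos(t))
            + math.cosh(s) - math.cos(t))

s, m = 0.4, 4096
h = 2 * math.pi / m
val = h * sum(integrand(-math.pi + k * h, s) for k in range(m))
print(val, 4 * math.pi * math.sinh(s))
```

The trapezoidal value matches $4\pi\sinh s$ to machine precision, since the integrand is smooth and periodic.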
So far, we have computed $\mathcal{K}_2$ and $\mathcal{K}_3$ when the stream function $\Psi$ is given by \eqref{eq:Psi_temp1}.
Next we assume that $\Psi$ is given by
\begin{equation}\label{eq:Psi_temp2}
(h\Psi)(\zeta,\theta)=d_0 \zeta \sinh \zeta+a_0 \cosh \zeta +\sum_{n=1}^\infty \big(a_n \cosh (n+1) \zeta + b_n \cosh (n-1)\zeta \big)\cos n\theta.
\end{equation}
By symmetry and from the fact that ${\bf h}_2|_{\partial D_i} = (-1)^i\frac{1}{2}\Psi_2$, we have
\begin{align*}
\mathcal{K}_2 &= \int_{\partial D_1} \frac{-1}{2}\Psi_2\cdot \sigma[{\bf v},q]\nu + \int_{\partial D_2} \frac{1}{2}\Psi_2\cdot \sigma[{\bf v},q]\nu
\\
&= \int_{\partial D^e} {\bf h}_2\cdot \sigma[{\bf v},q]\nu.
\end{align*}
Then, by \eqref{eq:int_parts_formula} (the divergence theorem), we have
$$
\mathcal{K}_2 = -2\mu\int_{ D^e} \mathcal{E}[{\bf h}_2]:\mathcal{E}[{\bf v}].
$$
Recall from \eqref{Ezz2}-\eqref{Ezt2} that
\begin{align}
\mathcal{E}_{\zeta\zeta}[{\bf h}_2] = 0, \quad
\mathcal{E}_{\theta\theta}[{\bf h}_2] = 0, \quad
\mathcal{E}_{\zeta\theta}[{\bf h}_2] = h(\zeta,\theta) A_2\cosh \zeta.
\end{align}
By \eqref{eq:strain_bipolar3}, one can easily check that
\begin{align}
\mathcal{E}_{\zeta\theta}[{\bf v}] &= h(\zeta,\theta) d_0 \cosh \zeta \nonumber
\\
&\quad + h(\zeta,\theta)\sum_{n=1}^\infty (n(n+1)a_n \cosh(n+1)\zeta + n(n-1)b_n \cosh(n-1)\zeta) \cos n\theta. \label{eq:strain_temp2}
\end{align}
So we obtain
\begin{align*}
\mathcal{K}_2 &=-2\mu\int_{ D^e} \mathcal{E}_{\zeta\zeta}[{\bf h}_2]\mathcal{E}_{\zeta\zeta}[{\bf v}] + 2\mathcal{E}_{\zeta\theta}[{\bf h}_2]\mathcal{E}_{\zeta\theta}[{\bf v}] + \mathcal{E}_{\theta\theta}[{\bf h}_2]\mathcal{E}_{\theta\theta}[{\bf v}]
\\
&= -2\mu \int_{-s}^s \int_{-\pi}^\pi 2\mathcal{E}_{\zeta\theta}[{\bf h}_2] \mathcal{E}_{\zeta\theta}[{\bf v}] \frac{1}{h(\zeta,\theta)^2} d\theta d\zeta
\\
&=-2\mu (2\pi) \int_{-s}^s 2A_2 d_0 \cosh^2\zeta \,d\zeta = -d_0 4\pi\mu A_2 (2s+\sinh 2s) = d_0 4\pi \mu .
\end{align*}
We now compute $\mathcal{K}_3$. We have from \eqref{eq:K3_identity} and \eqref{eq:strain_temp2} that
\begin{align}
\mathcal{K}_3 &=-\frac{a}{\tanh s}\mathcal{K}_2+ \frac{2\mu a}{\sinh s} \int_{-\pi}^\pi \mathcal{E}_{\zeta\theta}|_{\zeta=s} h(s,\theta)^{-1}d\theta \nonumber
\\
&=- \frac{a}{\tanh s} (d_0 4\pi \mu) + \frac{2\mu a}{\sinh s} \int_{-\pi}^\pi d_0 \cosh s d\theta\nonumber
\\
&= 0.
\end{align}
The proof is completed.
\qed
\subsection{Proof of Lemma \ref{lem:asymp_I2_J2}}
Now we are ready to compute the asymptotics of integrals $\mathcal{I}_{22},\mathcal{I}_{23}, \mathcal{J}_2, \mathcal{I}_{\mathrm{rot}}$, and $\mathcal{J}_{\mathrm{rot}}$.
We first consider $\mathcal{I}_{22}$. We see from Theorem \ref{thm:boundedstress_rot} and Proposition \ref{lem:htwo} that
\begin{equation}\label{eq:estim_h2_h2t}
\|\sigma[{\bf h}_2 - \widetilde{\bf h}_2, p_2 - \widetilde{p}_2] \|_\infty \lesssim 1.
\end{equation}
So we get
$$
\mathcal{I}_{22} = \int_{\partial D_2} \mbox{\boldmath $\Gy$}_2 \cdot \sigma[\widetilde{{\bf h}}_2,\widetilde{p}_2]\nu +O(1).
$$
Therefore, Lemma \ref{lem:stream_integral} with $\Psi=\widetilde{\Psi}_2$ yields
$$
\mathcal{I}_{22} =A_2 4\pi \mu
+O(1).
$$
Hence, since $s \approx \sqrt{\delta}$, the asymptotic formula \eqnref{Atwo} for $A_2$ yields \eqref{eq:I22}.
We now consider $\mathcal{I}_{23}$. Recall from \eqref{eq:I23_another} that
$$
\mathcal{I}_{23} = \int_{\partial D_2} \mbox{\boldmath $\Gy$}_2\cdot {\sigma[ {\bf h}_{\mathrm{rot}},p_{\mathrm{rot}}]} \big|_+ \nu.
$$
Then, Lemma \ref{lem:stream_integral} with $\Psi=\Psi_{\mathrm{rot}}$ yields
$$
\mathcal{I}_{23} = -\frac{ K_{\mathrm{rot}} \sinh^2 s}{\sinh s \cosh s +s} 4\pi\mu.
$$
Similarly, we have
$$
\mathcal{I}_{\mathrm{rot}} = K_{\mathrm{rot}} 4\pi\mu a.
$$
Since $s \approx \sqrt{\delta}$, the asymptotic formula \eqref{eq:Krot_asymp} for $K_{\mathrm{rot}}$ yields \eqref{eq:I23} and \eqref{eq:Irot}.
Next we consider $\mathcal{J}_2$ and $\mathcal{J}_{\mathrm{rot}}$. Using the symmetry and the fact that $({\bf v}_2-{\bf U})|_{\partial D^e} = -{\bf U}$, we have
\begin{align*}
\mathcal{J}_2 = \frac{1}{2}\int_{\partial D^e}(-1) ({\bf v}_2-{\bf U}) \cdot \sigma [{\bf h}_2,p_2]\nu .
\end{align*}
Here ${\bf U}(x,y)={\bf U}_{\rm sh}=(y,x)^T$. Thanks to Green's formula \eqref{eq:int_parts_formula2}, the following holds:
\begin{align*}
\mathcal{J}_2 &=
-\frac{1}{2}\int_{\partial D^e} {\bf h}_2 \cdot \sigma [{\bf v}_2-{\bf U},q_2]\nu = -\frac{1}{2}\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2 \cdot \sigma [{\bf v}_2-{\bf U},q_2]\nu.
\end{align*}
It can be proved in the same way as the proof of Lemma \ref{lem:c21} that
$$
\int_{\partial D^e} {\bf h}_2 \cdot \sigma [{\bf v}_2-{\bf U},q_2]\nu =0.
$$
Thus,
$$
\mathcal{J}_2 = -\frac{1}{2}\int_{\partial D_2} \mbox{\boldmath $\Gy$}_2 \cdot \sigma [{\bf v}_2-{\bf U},q_2]\nu.
$$
Similarly, we have
$$
\mathcal{J}_{\mathrm{rot}} = -\int_{\partial D_2} \mbox{\boldmath $\Gy$}_3 \cdot \sigma [{\bf v}_2-{\bf U},q_2] \nu .
$$
Since the stream function $\Psi_{v,2}$ associated with $({\bf v}_2-{\bf U},q_2)$ is given in \eqnref{eq:W_biharmonic_solution}, we may apply Lemma \ref{lem:stream_integral} to have
\begin{align*}
\mathcal{J}_2 &= -\frac{4\pi \mu}{2} \left(\frac{a}{\sinh s \cosh s +s} -K_v \frac{\sinh^2 s}{s+\cosh s \sinh s}\right),
\\
\mathcal{J}_{\mathrm{rot}} &= - K_v 4\pi\mu a.
\end{align*}
Since $s \approx a \approx \sqrt{\delta}$, the asymptotic formula \eqnref{eq:Kv_asymp} for $K_v$ yields \eqref{eq:J2} and \eqref{eq:Jrot}.
The proof is then completed.
\qed
\section{Proof of Lemma \ref{lem:fj_v1_estim}}\label{appendixD}
If $1<x<\infty$, then one can easily see that
$$
|f_j(x)|+ |f_j'(x)| + |f_j''(x)|\lesssim x^3e^{-2x}
$$
for $j=1,2$. So we consider the case when $2s \le x \le 1$, and prove
\begin{equation}\label{eq:fj_estim2}
|f_j(x)|+ |f_j'(x)| + |f_j''(x)|\lesssim 1, \quad j=1,2,
\end{equation}
and \eqnref{eq:Df1_DDf2_asymp}.
Let
\begin{align*}
\alpha_1 (x) &= x e^{-x}\sinh x - x^2\eta_{2} + x^3 \eta_1 ,
\\
\alpha_2 (x) &= x^2 e^{-x}\sinh x - x^3 \eta_2 + s^2 x^2 \eta_1 , \\
\beta (x) &= \frac{1}{\sinh 2x -2x\eta_2},
\end{align*}
so that the following relations hold:
\begin{align*}
f_1(x) = \alpha_1(x) \beta(x) , \quad f_2(x) = \alpha_2(x) \beta(x).
\end{align*}
One can see from the definition \eqnref{eq:def_eta1_eta2} of $\eta_j$ that
\begin{equation}\label{eta2}
\eta_1 = 1+ O(s^2), \quad \eta_2 = 1+\frac{2}{3}s^2 + R_1(s),
\end{equation}
where the remainder term $R_1(s)$ satisfies
\begin{equation}\label{Rone}
|R_1(s)| \le \frac{4}{15} s^4,
\end{equation}
provided that $s$ is sufficiently small.
Suppose $2s<x\leq 1$. Since
$$
\alpha_1(x) = (1-\eta_2) x^2 + (-1+\eta_1)x^3 + \frac{2}{3}x^4 + O(x^5),
$$
we have
\begin{equation}\label{Gaone}
\alpha_1 = \frac{2}{3} a + O(x^5), \quad \alpha_1'(x) = \frac{2}{3} a' + O(x^4), \quad \alpha_1''(x) = \frac{2}{3} a'' + O(x^3),
\end{equation}
where
\begin{equation}\label{aone}
a(x):= x^4- s^2 x^2 .
\end{equation}
Likewise, since
$$
\alpha_2 (x) = (1- \eta_2) x^3 + \frac{2}{3} x^5 - (x^4-s^2x^2 \eta_1) + O(x^6) ,
$$
we have
\begin{equation}\label{Gatwo}
\alpha_2 = \tilde{a} + O(x^6), \quad \alpha_2' = \tilde{a}' + O(x^5), \quad \alpha_2'' = \tilde{a}'' + O(x^4),
\end{equation}
where
\begin{equation}\label{tildea}
\tilde{a}(x):= \big( -1+ \frac{2}{3} x \big) a(x).
\end{equation}
Let
$$
w(x):= \sinh 2x -2x\eta_2,
$$
so that $\beta(x)= w(x)^{-1}$. Note that
$$
\sinh 2x = 2x + \frac{4}{3} x^3 + R_2(x),
$$
where the remainder term $R_2$ satisfies $R_2(x) = O(x^5)$ and
\begin{equation}\label{Rtwo}
R_2 (x)\geq \frac{4}{15} x^5.
\end{equation}
Then
$$
w(x) = \frac{4}{3}x^3 - \frac{4}{3}s^2 x + R,
$$
where $R:= R_2(x)-2xR_1(s)$. Since $x > 2s$, it follows from \eqnref{Rone} and \eqnref{Rtwo} that
$$
R \ge \frac{4}{15} x(x^4-2s^4) \ge C x^5
$$
for some positive constant $C$. Therefore, we have
$$
w(x)= \frac{4}{3} b(x) (1+O(x^2)),
$$
where the remainder term $O(x^2)$ is larger than $C x^2$ for some positive constant $C$ and
\begin{equation}\label{b}
b(x)= x^3 - s^2 x = \frac{a(x)}{x}.
\end{equation}
Thus we have
\begin{equation}\label{Gbone}
\beta(x)= w(x)^{-1} = \frac{3}{4} \frac{1}{b(x)} + O(x^{-1}).
\end{equation}
Since $\beta'=-\beta^2 w'$ and $\beta''= 2\beta^3 (w')^2 -\beta^2 w''$, we have
\begin{equation}\label{Gbtwo}
\beta'(x)= - \frac{3}{4} \frac{b'}{b^2} + O(x^{-2}), \quad \beta''(x)= \frac{3}{4} \frac{2(b')^2-bb''}{b^3} + O(x^{-3}).
\end{equation}
Now it is easy to see that $f_1(x)= O(x)$ and $f_1'(x)=O(1)$. To prove the first part of \eqnref{eq:Df1_DDf2_asymp}, we invoke \eqnref{Gaone}, \eqnref{Gbone} and \eqnref{Gbtwo} to derive
$$
f_1' = \frac{1}{2} \frac{a'b-ab'}{b^2} + O(x).
$$
Since $a=xb$, we have
\begin{equation}\label{abab}
\frac{a'b-ab'}{b^2} =1,
\end{equation}
which yields the first part of \eqnref{eq:Df1_DDf2_asymp}. To prove that $f_1''$ is bounded, we again use \eqnref{Gaone}, \eqnref{Gbone} and \eqnref{Gbtwo} to derive
$$
f_1''= \frac{1}{2} \frac{a''b^2-2a'b'b + 2a(b')^2 - abb''}{b^3} + O(1).
$$
One can easily see that
$$
\frac{a''b^2-2a'b'b + 2a(b')^2 - abb''}{b^3} = \left( \frac{a'b-ab'}{b^2} \right)'.
$$
Thus, thanks to \eqnref{abab}, we infer
\begin{equation}\label{ababab}
\frac{a''b^2-2a'b'b + 2a(b')^2 - abb''}{b^3} =0.
\end{equation}
This proves \eqnref{eq:fj_estim2} for $j=1$.
It is easy to see that $f_2(x)= O(x)$ and $f_2'(x)=O(1)$. On the other hand, we have
$$
f_2'' = \frac{3}{4} \frac{\tilde{a}''b^2-2\tilde{a}'b'b + 2\tilde{a}(b')^2 - \tilde{a}bb''}{b^3} + O(x).
$$
Because of \eqnref{tildea}, \eqnref{abab} and \eqnref{ababab}, we have
$$
\frac{\tilde{a}''b^2-2\tilde{a}'b'b + 2\tilde{a}(b')^2 - \tilde{a}bb''}{b^3} =
\frac{\frac{4}{3} a'b^2 - \frac{4}{3} a b'b }{b^3} = \frac{4}{3}.
$$
Thus $f_2''=1+ O(x)$, which proves the second part of \eqnref{eq:Df1_DDf2_asymp} as well as \eqnref{eq:fj_estim2} for $j=2$. This completes the proof.
\qed
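The three algebraic identities used in the proof, namely \eqnref{abab}, \eqnref{ababab}, and the value $4/3$ obtained for the $\tilde a$ combination, can be double-checked numerically with the explicit polynomials $a$, $b$, and $\tilde a$ and their derivatives:

```python
def checks(x, s):
    # a, b, and atilde = (-1 + 2x/3) a, with their explicit derivatives
    a, ap, app = x**4 - s**2 * x**2, 4 * x**3 - 2 * s**2 * x, 12 * x**2 - 2 * s**2
    b, bp, bpp = x**3 - s**2 * x, 3 * x**2 - s**2, 6 * x
    id1 = (ap * b - a * bp) / b**2                                                 # should be 1
    id2 = (app * b**2 - 2 * ap * bp * b + 2 * a * bp**2 - a * b * bpp) / b**3      # should be 0
    at = (-1 + 2 * x / 3) * a
    atp = (2 / 3) * a + (-1 + 2 * x / 3) * ap
    atpp = (4 / 3) * ap + (-1 + 2 * x / 3) * app
    id3 = (atpp * b**2 - 2 * atp * bp * b + 2 * at * bp**2 - at * b * bpp) / b**3  # should be 4/3
    return id1, id2, id3

print(checks(0.7, 0.1))
```

Any sample point with $x>s>0$ returns $(1, 0, 4/3)$ up to rounding, reflecting that these are polynomial identities.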
\section{Proof of Lemma \ref{lem:gj_estim}}\label{sec:appendxE}
The functions $g_j$ can be rewritten as
\begin{align*}
g_1(x) &= \big(e^{-x} \cosh x - \eta_2 x + \eta_1 x^2\big) v(x),
\\
g_2(x) &= v(x),
\\
g_3(x) &= \big(x e^{-x}\cosh x - x^2 \eta_2 + s^2 \eta_1 x\big)v(x),
\\
g_4(x) &=\big(\frac{e^{-x}\sinh x}{x} + \eta_2\big) v(x),
\end{align*}
where
\begin{equation}
v(x) = \frac{x}{\sinh 2x + 2x \eta_2}, \quad x>0.
\end{equation}
We estimate $v$ first. Since $\eta_2 = \sinh(2s)/(2s)\geq 1$, we have
$$
|v(x)| \leq \frac{x}{\sinh 2x + 2x},
$$
and hence
\begin{equation}\label{vest}
|v(x)| \lesssim (1+x )e^{-2x}.
\end{equation}
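The bound \eqnref{vest} can be probed numerically (an illustration only, with $\eta_2$ replaced by its lower bound $1$): on a grid, the ratio of $x/(\sinh 2x + 2x)$ to the majorant $(1+x)e^{-2x}$ stays below $2$.

```python
import math

# largest observed ratio of x/(sinh 2x + 2x), an upper bound for v
# since eta2 >= 1, to the claimed majorant (1+x) e^{-2x}
C = 0.0
x = 1e-3
while x < 30.0:
    ratio = (x / (math.sinh(2 * x) + 2 * x)) / ((1 + x) * math.exp(-2 * x))
    C = max(C, ratio)
    x += 1e-2
print(C)
```

The ratio tends to $1/4$ as $x\to0$ and to $2x/(1+x)<2$ as $x\to\infty$, so a single absolute constant works.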
By straightforward computations, one can see that
$$
v'(x) = \gamma_1(x) (v(x))^2, \quad v''(x)= \gamma_2(x) (v(x))^3,
$$
where
\begin{align*}
\gamma_1(x) &:= \frac{\sinh 2x - 2x \cosh 2x }{x^2},
\\
\gamma_2(x) &:= \frac{2(3x + x \cosh 4x - \sinh 4x) + 4\eta_2 ( 2x \cosh 2x -(1+2x^2) \sinh 2x)}{x^3}.
\end{align*}
By Taylor expansions, it is easy to see that $\gamma_1$ and $\gamma_2$ are bounded if $0<x \le 1$. It is also easy to see that $|\gamma_1(x)| \lesssim x^{-1} e^{2x}$ and $|\gamma_2(x)| \lesssim x^{-2} e^{4x}$ if $1<x<\infty$. Putting these estimates together, we have
$$
|\gamma_1 (x)|\lesssim \frac{e^{2x}}{1+x}, \quad |\gamma_2(x)| \lesssim \frac{e^{4x}}{1+x^2}, \quad 0<x<\infty.
$$
Then, from \eqref{vest}, we obtain
\begin{align}
|v'(x)| &\lesssim \frac{e^{2x}}{1+x} |v(x)|^2 \lesssim (1+x)e^{-2x}, \label{v1est}
\\
|v''(x)| &\lesssim \frac{e^{4x}}{1+x^2} |v(x)|^3 \lesssim (1+x) e^{-2x}. \label{v2est}
\end{align}
Since $g_2=v$, the estimate \eqnref{gjest} for $j=2$ is already proved. We now estimate the remaining $g_j$ and their derivatives; for simplicity we give the details only for $g_1$. We write
$$
g_1(x) = \gamma(x) v (x), \quad \mbox{where}\quad \gamma(x) = e^{-x}\cosh x - \eta_2 x + \eta_1 x^2.
$$
It is easy to show that
\begin{equation}\label{Ggest}
|\gamma(x)| \lesssim 1+x^2, \quad |\gamma'(x)|\lesssim 1+x, \quad |\gamma''(x)|\lesssim 1,
\end{equation}
and the estimate \eqnref{gjest} for $j=1$ is an easy consequence of \eqnref{vest}-\eqnref{Ggest}. The estimate \eqnref{gjest} for $j=3,4$ can be proved in the same way.
\qed
\end{document}
\begin{document}
\title{Theoretical framework for physical unclonable functions, including quantum readout}
\author{Giulio Gianfelici}
\email{[email protected]}
\author{Hermann Kampermann}
\author{Dagmar Bru\ss}
\affiliation{ Institut f\"{u}r Theoretische Physik III, Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, D-40225 D\"{u}sseldorf, Germany }
\date{\today}
\begin{abstract}
\noindent
We propose a theoretical framework to quantitatively describe Physical Unclonable Functions (PUFs), including extensions to quantum protocols, so-called Quantum Readout PUFs (QR-PUFs).
$\text{(QR-)}$ PUFs are physical systems with challenge-response behavior intended to be hard to clone or simulate. Their use has been proposed in several cryptographic protocols, with particular emphasis on authentication.
Here, we provide theoretical assumptions and definitions behind the intuitive ideas of $\text{(QR-)}$ PUFs. This allows us to quantitatively characterize the security of such devices in cryptographic protocols. First, by
generalizing previous ideas, we design a general authentication scheme, which is applicable to different physical implementations of both classical PUFs and $\text{(QR-)}$ PUFs. Then, we define the \emph{robustness} and the \emph{unclonability}, which allows us to derive security thresholds for $\text{(QR-)}$ PUF authentication and paves the way to develop further new authentication protocols.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:Intro}
\emph{Authentication} is a major task of both classical and quantum cryptography. To achieve secure communication between two parties Alice and Bob, it is necessary to ensure that no intruder may participate in the communication, pretending to be one of the legitimate parties, e.g. by a so-called \emph{man-in-the-middle attack} \cite{KM}. Authentication is ultimately classical, even in quantum protocols like quantum key distribution (QKD) \cite{SBCDLP}.
The main ingredient of an authentication protocol is a shared secret between the legitimate parties: during any authenticated communication Alice and Bob must prove the possession of this secret to confirm their identity.
One has to distinguish two types of authentication \cite{KM}. \emph{Message authentication} is the assurance that a given entity was the original source of the received data. This type of authentication can be achieved by unconditionally secure protocols \cite{WC}. \emph{Entity authentication} is the assurance that a given entity can prove its identity and its involvement in the communication session to another entity.
Entity authentication is particularly important if there is an asymmetry between the parties, e.g. when one party, namely Alice, is a trusted institution and the other one, namely Bob, is an untrusted user. The communication between Alice and Bob may happen on an authenticated channel owned by Alice, where Bob interacts through a remote terminal. In that case, a one-way entity authentication protocol will be used by Alice to authenticate Bob and to allow him to use her channel.
Such protocols are usually based on a \emph{challenge-response authentication}, a type of authentication where Alice presents a \emph{challenge} and Bob provides a valid \emph{response}, based on the common secret, to be authenticated. For instance, Alice can ask for a password (challenge) and Bob will provide the correct one (response).
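As a simple illustration of the challenge-response paradigm (this sketch is ours and not from the paper; it uses a pre-shared secret key rather than a physical token, and all names are hypothetical), a keyed-hash exchange can be implemented as follows:

```python
import hmac, hashlib, secrets

# Illustrative only: a minimal challenge-response exchange based on a
# pre-shared secret, with HMAC-SHA256 as the response function.
shared_secret = secrets.token_bytes(32)  # the secret Alice and Bob both hold

def respond(secret, challenge):
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Alice sends a fresh random challenge ...
challenge = secrets.token_bytes(16)
# ... Bob answers with the keyed hash of it ...
response = respond(shared_secret, challenge)
# ... and Alice verifies it against her own computation.
assert hmac.compare_digest(response, respond(shared_secret, challenge))
```

A fresh challenge per session prevents simple replay; a PUF-based protocol replaces the stored secret with the physical token's challenge-response behavior.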
In the case of asymmetric communication, it is useful to design authentication protocols based on something the parties possess.
The trusted Alice can still be required to have secret knowledge since she is able to conceal information from an adversary, but Bob is required only to protect a given token from theft.
A crucial condition of this approach is that the object has to be unique and an adversary, namely Eve, should not be able to copy it easily.
A \emph{Physical Unclonable Function} (PUF) \cite{RP} is a physical system which can interact in a very complex way with an external signal (which can serve as a challenge) to give an unpredictable output (which can serve as a response). Its internal disorder is exploited to make it unique, hard to clone or simulate.
PUFs are particularly suited for entity authentication because their internal structure plays the role of the shared secret. They can also be used in other protocols, like oblivious transfer \cite{UR10}, bit commitment \cite{RD} or classical key distribution \cite{BFSK}.
There is a large variety of PUFs, such as the \emph{Optical PUF} \cite{PRTG}, the \emph{Arbiter PUF} \cite{LLGSDD}, the \emph{SRAM PUF} \cite{GKST}, the \emph{Coating PUF} \cite{TSSGVW}, the \emph{Magnetic PUF} \cite{IM}, the \emph{Ring Oscillator PUF} \cite{BNCF} and so on. A more detailed description of the whole family of PUFs is given in \cite{MBWRY} and in \cite{MV}.
To ensure reliability and security it is required to post-process the PUFs' outputs \cite{DGSV, PMBHS}. The most common way to do it is by using the so-called \emph{fuzzy extractor} \cite{DORS}, a tool which combines error correction and privacy amplification.
Error correction is necessary because the PUF's output can be different each time the PUF interacts with the same challenge, even when the authentication involves the real Bob with the original PUF. This can be due to an erroneous implementation of the challenge or to noise in the physical process.
Privacy amplification is important since the outcomes of a PUF are generally non-uniform, i.e. there exist correlations between different responses that can be used by an adversary to undermine the PUF's security. Furthermore, the response, once it is mapped into a uniform key, can, in principle, be used in different protocols other than entity authentication.
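To make the role of the fuzzy extractor concrete, the following toy sketch (our own illustration, not the construction of \cite{DORS}; real fuzzy extractors use stronger error-correcting codes and proper randomness extractors) implements the code-offset idea with a 3-bit repetition code, and a hash standing in for privacy amplification:

```python
import hashlib, secrets

REP = 3  # a 3-bit repetition code corrects 1 flipped bit per block

def gen(w):
    """Enrollment: from outcome bits w, produce public helper data and a key."""
    k = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    code = [bit for bit in k for _ in range(REP)]   # repetition encoding of k
    helper = [wi ^ ci for wi, ci in zip(w, code)]   # code-offset helper data
    key = hashlib.sha256(bytes(k)).hexdigest()      # hash as "privacy amplification"
    return helper, key

def rep(w_noisy, helper):
    """Reproduction: recover the key from a noisy re-reading of w."""
    code_noisy = [wi ^ hi for wi, hi in zip(w_noisy, helper)]
    k = [int(sum(code_noisy[i * REP:(i + 1) * REP]) > REP // 2)
         for i in range(len(code_noisy) // REP)]    # majority decoding
    return hashlib.sha256(bytes(k)).hexdigest()

w = [secrets.randbelow(2) for _ in range(30)]       # toy PUF outcome
helper, key = gen(w)
noisy = list(w); noisy[4] ^= 1; noisy[17] ^= 1      # one error per affected block
assert rep(noisy, helper) == key
```

The helper data may be public: it masks the outcome with a random codeword, while the hashed key is reproducible as long as each block carries at most one bit flip.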
However, even when dealing with noise and non-uniformity, there are some issues with PUFs, because it has been shown that many of them can be actually cloned or simulated \cite{HBNS, RSSDDS, R-etal}, compromising their use in secure authentication schemes.
To solve these problems, an extension of PUFs to quantum protocols was suggested, the so-called \emph{Quantum Readout PUFs} (QR-PUFs) \cite{BS}. Such PUFs encode challenges and responses in quantum states, thus they are expected to be more secure and reliable than classical PUFs, as they add a layer of complexity given by the unclonability of the involved quantum states \cite{WZ}. Moreover, if such quantum states are non-orthogonal, an adversary cannot perfectly distinguish them, and an attempt to do it would introduce disturbances, thus exposing the presence of an intruder to the legitimate parties.
It is desirable to establish a theoretical framework in which one can perform a rigorous, quantitative, analysis of the security properties of $\text{(QR-)}$ PUFs. Several efforts have been made to formalize the intuitive ideas of PUF \cite{RSS, AMSSW, PK, PM, JD}, and they all capture some aspects of them, but a well-defined agreement about theoretical assumptions and definitions is still lacking. Moreover, the previous approaches are devoted to classical PUFs only.
In this article we propose a common theoretical framework by quantitatively characterizing the $\text{(QR-)}$ PUF properties, particularly the \emph{robustness} \cite{AMSSW} against noise and the \emph{unclonability}. This is done by generalizing ideas from previous approaches (in particular from \cite{AMSSW}) to encompass both classical and QR-PUFs.
Moreover, we introduce a generic scheme for authentication protocols with $\text{(QR-)}$ PUFs, for which security thresholds can be calculated once an experimental implementation is specified. This scheme provides an abstract formalization of existing protocols, together with new ideas such as the difference between a \textit{physical layer} and a \textit{mathematical layer} (see Sec. \ref{sec:Auth}) or the concept of the \textit{shifter} (see Secs. \ref{subsec:cenr} and \ref{subsec:qenr}).
This framework is designed to be independent of the specific experimental implementation, such that a comparison of different types of PUFs and QR-PUFs becomes possible.
In particular, all implementations use a fuzzy extractor for post-processing.
We expect that this analysis supports both theoretical and experimental research on $\text{(QR-)}$ PUFs, by promoting the implementation of such devices in existing and new secure authentication schemes.
The paper is organized as follows.
In Sec. \ref{sec:Auth} we give an introduction to entity authentication protocols with $\text{(QR-)}$ PUFs. Sec. \ref{sec:not} fixes the notation we will use in the paper; in Sec. \ref{sec:class} we describe a protocol with a generic classical PUF, and in Sec. \ref{sec:quant} we generalize it to a generic QR-PUF. The shared formalization of the theoretical properties of $\text{(QR-)}$ PUFs is stated in Sec. \ref{sec:prop}, and the formalism is applied to some examples in Sec. \ref{sec:ex}. Some final remarks and the outlook of the work are given in the Conclusion.
\section{Authentication protocols}
\label{sec:Auth}
In the following, we will always call Alice the party that has to authenticate Bob. Mutual authentication can be achieved by repeating the protocol with the roles of Alice and Bob swapped. Moreover, we stated in the Introduction that the raw output of a $\text{(QR-)}$ PUF has to be post-processed to be used in secure cryptographic protocols. Therefore, for the sake of clarity, we call \emph{outcome} the raw output, while we reserve the term \emph{response} for the post-processed uniform key.
Entity authentication protocols with $\text{(QR-)}$ PUFs consist of two phases \cite{STO}, the \emph{enrollment stage} and the \emph{verification stage} (see fig. \ref{fig:enver}).
\begin{figure}
\caption{A schematic description of the authentication scheme (colour online).}
\label{fig:enver}
\end{figure}
The enrollment stage is a part of the protocol which happens only once at the beginning, after the manufacture of the $\text{(QR-)}$ PUF and before any communication between Alice and Bob. An entity, or a group of entities, called the \emph{$\text{(QR-)}$ PUF Certifier} (which may be the $\text{(QR-)}$ PUF manufacturer, Alice herself, a third trusted party or a combination of all of them) studies the $\text{(QR-)}$ PUF's properties and evaluates the parameters needed for the implementation and the post-processing.
In particular, the Certifier selects a certain number $N$ of challenges and records the corresponding responses.
Challenges and responses form the so-called \emph{Challenge-Response pairs} (CRPs) and they are stored as a \emph{Challenge-Response Table} (CRT), together with additional information needed in the remaining part of the protocol.
After the end of this stage, the Certifier gives the CRT to Alice (which then \emph{knows} the secret) and the $\text{(QR-)}$ PUF to Bob (which then \emph{has} the secret).
The verification stage is the part of the protocol where communication between Alice and Bob is necessary. In this stage, Bob declares his identity to Alice with his $\text{(QR-)}$ PUF, remotely interacting with her through her terminal. To authenticate Bob, Alice sends a randomly chosen challenge from the CRT to the $\text{(QR-)}$ PUF and collects the outcome, which is then post-processed. The computed response is compared with the one in the CRT, i.e. the one obtained in the enrollment stage. If they match, Alice authenticates Bob.
This stage can be repeated every time Alice needs to authenticate Bob. After every round, however, the used challenge-response pair has to be eliminated from the CRT and cannot be used again \footnote{It was argued \cite{BS} that in the QR-PUF case challenge-response pairs could be reused, because an adversary is not able to gain full information about their state. Since such claims have yet to be proven quantitatively, we treat any reused CRP as insecure.}.
Depending on the different types of $\text{(QR-)}$ PUFs, the challenges could be different types of physical quantities. For instance, optical PUFs are transparent materials filled with light scattering particles: a laser that interacts with one of them is turned into a unique speckle pattern. For a classical optical PUF, the challenge is the laser orientation and the outcome is the intensity of some points in the speckle pattern \cite{PRTG}. For a QR-PUF, the challenges and the outcomes are quantum states \cite{BS}.
In both cases, however, challenges, outcomes and responses are stored in the CRT as digital binary strings, and the responses are used as authentication keys.
There are two different layers involved in this protocol: a physical one, where the actual $\text{(QR-)}$ PUF acts as a physical evolution from input systems to output systems, and a mathematical one, where a binary challenge string (which should represent the information on how to implement the input system) is mapped into an outcome string which is post-processed into a response string.
To deal with the two different layers, we denote as \emph{challenges} (\emph{outcomes}, \emph{responses}) the strings in the mathematical layer, and as \emph{challenge states} \footnote{This term clearly comes from quantum physics, where it is used to describe a vector in a Hilbert space. We will use the term \emph{classical state} in this article meaning a classical physical quantity, either scalar or vectorial.} (\emph{outcome states}, \emph{response states}) the implementations in the physical layer.
This configuration is schematized in fig. \ref{fig:sch}.
\begin{figure*}
\caption{A scheme of the two layers: the mathematical one (where the cryptographic protocol takes place) and the physical one (where the $\text{(QR-)}$ PUF operates).}
\label{fig:sch}
\end{figure*}
\section{Notation}
\label{sec:not}
In the article we will use the following conventions:
\begin{itemize}
\item Digital strings, like the challenges and the responses, are denoted by lowercase bold letters, for instance, $\mathbf{x_i}$ and $\mathbf{r_j}$ for the $i$-th challenge and the $j$-th response, respectively;
\item Sets of digital strings are denoted by the calligraphic uppercase letters, e.g. $\mathcal{X}$ and $\mathcal{R}$ for the set of challenges and responses, respectively;
\item Random variables which take values from given sets are denoted by uppercase italic letters, e.g. $X$ and $R$ for challenges and responses, respectively;
\item The physical classical states are denoted by the vector symbol (right arrow), for instance, $\vec{x}_i$ and $\vec{r}_j$ for the $i$-th challenge state and the $j$-th response state, respectively;
\item The physical quantum states are denoted by the usual ket notation, for instance, $\Ket{x_i}$ and $\Ket{r_j}$ for the $i$-th challenge state and the $j$-th response state, respectively;
\item Maps are denoted by uppercase letters with a circumflex accent, e.g. $\hat{P}$ or $\hat{\Pi}$. In particular, the Latin letters are used for maps between strings and the Greek ones for maps between states.
\end{itemize}
\section{Classical PUF}
\label{sec:class}
The realization of a challenge state may involve several different steps, each of them with different experimental complexity.
Each step involves devices with a limited, even though possibly large, number of different configurations, and such configurations can be used to parametrize the experimental system, so that the challenges can be formalized through discrete variables.
A challenge is therefore defined as the binary string $\bf x_i$ of length $n$ representing the configuration which realizes a given challenge state $\vec{x}_i$.
\subsection{Enrollment}
\label{subsec:cenr}
At the start of the enrollment stage, the PUF Certifier selects $N\leq 2^n$ different challenges ${\bf x_i}\in \mathcal{X}$, where $\mathcal{X}\subseteq\{0,1\}^n$ is the set of all chosen challenges and $|\mathcal{X}|=N$. In fact, if a challenge consists of $n$ bits, the total possible number of challenges is $2^n$. However, in practice, certain challenges could represent states which are impossible or hard to implement, or which do not lead to a set of distinguishable responses.
For security purposes, the set of challenges $\mathcal{X}$ has to be \textit{uniform}, i.e. $\hat{S}(X)=\log_2|\mathcal{X}|$, where $X$ is the random variable defined on the set $\mathcal{X}$ and $\hat{S}(X)$ is the Shannon entropy of $X$. An adversary should not be able to characterize the set of challenges by studying some of them. The Certifier is free to discard some challenges from $\mathcal{X}$ if he finds correlations among them. This affects the number $N$ of challenges and has to be quantified for given experimental implementations.
Each $\bf x_i\in\mathcal{X}$ represents a challenge state $\vec{x}_i$ which can be experimentally realized and sent to the PUF, which acts as a deterministic function $\hat{\Pi}$. Due to its complex structure, any attempt to give a full description of it is infeasible, even for the Certifier itself.
For a given challenge state $\vec{x}_i$, $\hat{\Pi}(\vec{x}_i)= \vec{y}_i$, where $\vec{y}_i$ is called the \emph{outcome state}.
The Certifier needs to map the outcome state into an outcome string, taking into account both the distribution of the outcome states and any error which may have occurred due to noise or wrong implementation of the experimental system. To do this, we introduce the notion of a \emph{shifter}.
For each outcome state $\vec{y}_i$, let $\hat{\Omega}_i$ be a state-dependent operation, which maps $\vec{y}_i$ into a \textit{reference state}, denoted by $\vec{0}$, equal for all outcome states. For $N$ outcome states $\vec{y}_i$, we obtain a set of $N$ shifters $\hat{\Omega}_i$.
The importance of using the shifters will become clearer when we discuss QR-PUFs. The shifters simplify the error verification process, as each expected outcome is identical.
Devices that can be regarded as shifters have been used in some PUF implementations: consider, for instance, the optical PUF \cite{PRTG}, where a laser beam (challenge state) is transformed into a complex speckle pattern (outcome state). In this scenario, it has been proposed \cite{GHMSP} to use spatial light modulators to transform every speckle pattern into a plane wave, which is then focused into a single point (the reference state). This happens only if the pattern is the expected one; otherwise, the outcome state is mapped into another speckle pattern.
Shifters can be designed also for other PUFs, depending on which physical quantities are involved in the outcome states. If the outcome state is already a binary value (like in the \emph{SRAM PUF} \cite{GKST}), the reference state can be the bit $0$ and the shifters can be realized by a gate implementing either the identity or a bit flip operation, depending on the expected outcome state. Whenever an outcome is determined by the frequency of a signal (like in a \emph{ring oscillator PUF} \cite{BNCF}), the shifters can be passband filters.
The Certifier can implement the corresponding shifter for every outcome state, since he can characterize $\hat{\Pi}(\vec{x}_i)$, possibly repeating the PUF evaluation for the same challenge state $\vec{x}_i$, to find a $\hat{\Omega}_i$ such that $\hat{\Omega}_i\big(\hat{\Pi}(\vec{x}_i)\big)=\vec{0}$.
We define $\vec{o}_i:=\hat{\Omega}_i\big(\hat{\Pi}(\vec{x}_i)\big)$. While in the enrollment stage, or in a noiseless verification stage, $\vec{o}_i=\vec{0}$ by definition, in reality $\vec{o}_i$ will contain errors.
This error is encoded in the Hamming weight, i.e. the number of bits that are different from $0$, of a classical string $\mathbf{o_i}$, such that $\mathbf{o_i}=\mathbf{0}_{l_o}=00\dots0$ if and only if $\vec{o}_i=\vec{0}$. The string has length $l_o$, which depends on the experimental implementation of the shifter. In the aforementioned example of an optical PUF, the plane wave is focused onto an analyzer plane with a pinhole. If $\vec{o}_i=\vec{0}$, the light passes through this pinhole and a detector clicks. Therefore the intensity of the light on the analyzer plane outside the pinhole can be used to find $\mathbf{o_i}$, and the resolution of the analyzer plane determines the length $l_o$.
The shifters convey information about the distribution of the outcome states (as they are designed on them) and therefore indirectly about the PUF.
We can represent this information in terms of binary strings in the mathematical layer, just as we did for challenge states. The shifters are implemented by an experimental device (or a collection of them) with a limited number of configurations, each one of them implementing a different $\hat{\Omega}_i$.
Parametrizing such configurations, we map each shifter $\hat{\Omega}_i$ to a string ${\bf w_i}\in\mathcal{W}\subseteq\{0,1\}^{l_w}$. This string is exact, because it represents only the correct implementation of the shifter, without taking into account any noise.
The length $l_w$ depends on the entropy of the shifters and, consequently, on the outcome states (for some implementations, methods to analyze such an entropy have been derived \cite{TSSAO, RSGD}). The entropy of $\mathcal{W}$ has to be studied also to verify the presence of non-uniformity, i.e. correlations between different outcomes or between challenges and corresponding outcomes. This entropy affects the \emph{unclonability} of the PUF (see Sec. \ref{sec:prop}).
The two strings ${\bf o_i}$ and ${\bf w_i}$ convey two different aspects of the outcome state. In fact, ${\bf o_i}$ gives information about the error only, without distinguishing different outcomes. Instead, ${\bf w_i}$ gives information about the distribution of the outcome states, but not about errors (even a single bit flip of ${\bf w_i}$ changes it into ${\bf w_{j\neq i}}$).
We combine ${\bf o_i}$ and ${\bf w_i}$ by defining as \emph{outcome} a string $\mathbf{y_i}$ of length $l= l_w+l_o$, such that
\begin{equation}
\mathbf{y_i}=\bf w_i\,\|\,o_i\,,
\end{equation}
where $\|$ is the concatenation of strings.
We designate $\mathcal{Y}\subseteq\{0,1\}^l$ as the set of all outcomes, including all possible noisy versions. Explicitly,
\begin{equation}
\label{setY}
\mathcal{Y}=\left\{\mathbf{y_i}=\mathbf{w_i}\,\|\,\mathbf{o_i},\, \mathbf{w_i}\in\mathcal{W},\, \mathbf{o_i}\in\{0,1\}^{l_o} \right\}\, ,
\end{equation}
and $|\mathcal{Y}|=2^{l_o}\,N$ (see fig. \ref{fig:setY} for a graphic representation of the set $\mathcal{Y}$).
Moreover we define a function $\hat{P}:\mathcal{X}\rightarrow\mathcal{Y}$, associating each challenge with the corresponding outcome, i.e. $\hat{P}(\bf x_i)=y_i$.
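As a toy illustration of this bookkeeping (the strings and lengths below are hypothetical, not tied to any real implementation), the outcome can be assembled by concatenating the shifter string with the error string, whose Hamming weight measures the deviation from the reference state:

```python
def hamming_weight(bits: str) -> int:
    """Number of bits different from 0."""
    return bits.count("1")

def make_outcome(w_i: str, o_i: str) -> str:
    """Outcome y_i = w_i || o_i: shifter string concatenated with error string."""
    return w_i + o_i

# Hypothetical lengths: l_w = 4, l_o = 3.
w_1 = "1011"         # shifter configuration for challenge 1
o_clean = "000"      # o_i = 0...0 when the reference state is hit exactly
o_noisy = "010"      # a single-bit error

print(make_outcome(w_1, o_clean), hamming_weight(o_clean))  # 1011000 0
print(make_outcome(w_1, o_noisy), hamming_weight(o_noisy))  # 1011010 1

# The set Y collects every noisy version of every outcome,
# so |Y| = 2**l_o * N (here N = 2 shifter strings):
W = {"1011", "0110"}
Y = {w + format(e, "03b") for w in W for e in range(2**3)}
print(len(Y))  # 16
```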
\begin{figure}
\caption{Graphic representation of the set $\mathcal{Y}$.}
\label{fig:setY}
\end{figure}
The outcome string, being noisy and not uniformly distributed, cannot be used directly as a response. The most common way to post-process it is through a \emph{fuzzy extractor} \cite{DORS}, which is a combined error correction and privacy amplification scheme:
\begin{definition}
Let $\{0,1\}^\star$ be the \textit{star closure} of $\{0,1\}$, i.e. the set of strings of arbitrary length:
\begin{equation}
\{0,1\}^\star=\bigcup_{i \ge 0 }\{0,1\}^i\, ,
\end{equation}
where $\{0,1\}^0$ contains only the empty string.
Let $\hat{H}({\bf y_i,y'_i})$ be the \textit{Hamming distance} between $\bf y_i$ and $\bf y'_i$, i.e. the Hamming weight of $\bf y_i+y'_i$ and $ s:= -\log \left(\max_k p_k\right)$ be the \emph{min-entropy} of a probability distribution $p=\Set{p_k}$. Furthermore, given two probability distributions $p_A$, $p_B$, associated to discrete random variables $A,B$ with the same domain $\mathcal{C}$, let $\hat{D}_S(p_A, p_B)$ be the \emph{statistical distance} between $p_A$ and $p_B$, i.e.
\begin{equation}
\label{statdist}
\hat{D}_S(p_A,p_B):=\frac{1}{2}\,\sum_{c\in\mathcal{C}}\,\left|Pr(A=c)-Pr(B=c)\right|\, . \end{equation}
A $(\mathcal{Y},s,m,t,\epsilon)$-\emph{fuzzy extractor} is a pair of random functions,
the \emph{generation function} $\hat{G}$ and the \emph{reproduction function} $\hat{R}$, with the following properties:
\begin{itemize}
\item $\hat{G}:\mathcal{Y}\rightarrow\{0,1\}^m\times\{0,1\}^\star$ on input $\bf y_i\in\mathcal{Y}$ outputs an extracted string ${\bf r_i}\in\mathcal{R}\subseteq\{0,1\}^m$ and a \emph{helper data} ${\bf h_i}\in\mathcal{H}\subseteq\{0,1\}^\star$. While $\bf r_i$ has to be kept secret, $\bf h_i$ can be made public (it can even be physically attached to the PUF);
\item $\hat{R}:\mathcal{Y}\times\mathcal{H}\rightarrow\{0,1\}^m$ takes an element $\bf y'_i\in\mathcal{Y}$
and a helper string $\bf h_i\in\mathcal{H}$ as inputs.
The \emph{correctness property} of a fuzzy extractor guarantees that if $\hat{H}({\bf y_i,y'_i})\leq t$ and $({\bf r_i,h_i})=\hat{G}(\bf y_i)$, then $\hat{R}({\bf y'_i,h_i})={\bf r_i}$;
\item The \emph{security property} guarantees that for any probability distribution on $\mathcal{Y}$ of min-entropy $s$,
the string $\bf r_i$ is nearly uniform even for those who observe $\bf h_i$: i.e. if $({\bf r_i,h_i})=\hat{G}\bf(y_i)$, then
\begin{equation}
\hat{D}_S(p_{RH}, p_{UH})\leq\epsilon\, ,
\end{equation}
where $p_{RH}$ ($p_{UH}$) is the joint probability distribution for $\bf r_i\in\mathcal{R}$ (for a uniformly distributed variable on $m$-bit binary strings) and $\bf h_i\in\mathcal{H}$.
\end{itemize}
\end{definition}
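A minimal sketch of how such a pair $(\hat{G},\hat{R})$ can be realized is the code-offset construction, shown here with a 3-fold repetition code. This toy version (our own illustration, not a construction from the protocol above) corrects one bit flip per block and omits the hashing step that a real fuzzy extractor would add for privacy amplification:

```python
import secrets

# Toy code-offset fuzzy extractor built on a 3-fold repetition code.
# It corrects t = 1 bit flip per 3-bit block; a real construction would
# use a stronger code and a universal hash for privacy amplification.
REP = 3

def _encode(r):
    return [b for b in r for _ in range(REP)]          # repeat each bit 3 times

def _decode(c):
    return [int(sum(c[i:i + REP]) > REP // 2)          # majority vote per block
            for i in range(0, len(c), REP)]

def generate(y):
    """G: outcome y -> (response r, helper data h); h can be made public."""
    r = [secrets.randbelow(2) for _ in range(len(y) // REP)]   # uniform response
    h = [yi ^ ci for yi, ci in zip(y, _encode(r))]             # h = y XOR code(r)
    return r, h

def reproduce(y_prime, h):
    """R: noisy outcome y' and helper h -> r, if y' is within distance t of y."""
    return _decode([yi ^ hi for yi, hi in zip(y_prime, h)])

y = [1, 0, 1, 1, 1, 0, 0, 0, 1]          # a 9-bit outcome (m = 3)
r, h = generate(y)
y_noisy = list(y); y_noisy[4] ^= 1       # one bit flip in the second block
print(reproduce(y_noisy, h) == r)        # True: the error is corrected
```

The correctness property holds here because XOR-ing $\bf y'_i$ with $\bf h_i$ recovers the codeword of $\bf r_i$ up to the channel error, which the majority vote removes as long as each block carries at most one flip.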
The generation function of a fuzzy extractor is used, in the enrollment stage, to transform the outcome $\bf y_i$ into the uniformly distributed $\bf r_i$, that is the final \emph{response}. We will see later that, in the verification stage, the reproduction function is used on a noisy version of the outcome to generate the same response.
The Certifier selects a fuzzy extractor knowing $\mathcal{Y}$ and its min-entropy $s$, and chooses $t$ such that the fuzzy extractor uniquely maps a given outcome into a response, without collisions: due to noise or an erroneous experimental setup, a challenge state $\vec{x}_i$ could be implemented as a state which is closer to $\vec{x}_j$, for some $i\neq j$.
The error ${\bf o}_{\bf i}^{(j)}$ associated with $\hat{\Omega}_i\big(\hat{\Pi}(\vec{x}_j)\big)$ for $i\neq j$ must be uncorrectable: the Certifier has to choose a maximum allowed error $t<l_o$ smaller than the minimum Hamming weight of ${\bf o}_{\bf i}^{(j)}$ over all $i\neq j$ (see Fig. \ref{fig:overlap}).
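This choice of $t$ can be sketched as follows (the cross-challenge error strings below are made up for illustration): $t$ is set strictly below the minimum Hamming weight of ${\bf o}_{\bf i}^{(j)}$ over all $i\neq j$, so that a wrongly implemented challenge can never be corrected into a valid response:

```python
def hamming_weight(bits):
    return sum(bits)

def max_allowed_t(cross_errors):
    """cross_errors[(i, j)] holds o_i^(j), the error string obtained when
    challenge j is read through shifter i (i != j). t must stay strictly
    below the smallest such weight."""
    return min(hamming_weight(o) for o in cross_errors.values()) - 1

# Hypothetical cross-challenge errors for N = 3 challenges with l_o = 6:
cross = {
    (1, 2): [1, 1, 0, 1, 0, 0],
    (2, 1): [0, 1, 1, 0, 1, 1],
    (1, 3): [1, 0, 1, 1, 1, 0],
    (3, 1): [1, 1, 1, 0, 1, 1],
    (2, 3): [0, 1, 0, 1, 0, 1],
    (3, 2): [1, 0, 1, 1, 0, 1],
}
print(max_allowed_t(cross))  # 2: only errors of weight <= 2 are correctable
```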
\begin{figure}
\caption{Graphic representation of the choice of $t$ for $N=2$ challenge-response pairs. The circle represents both $\bf o_1$ and $\bf o_2$, independently of $\bf w_1$ and $\bf w_2$. The center of the circle represents the noiseless case ${\bf o_1}={\bf o_2}=\mathbf{0}_{l_o}$.}
\label{fig:overlap}
\end{figure}
There is a trade-off between $t$ and the entropy of the shifters: a high entropy, associated with a longer length $l_w$ of $\bf w_i$, corresponds to similar states that produce only a small error in case of a wrong implementation, so $t$ has to be chosen low.
The Certifier may decide to delete challenge-response pairs from the Challenge-Response Table, in order to choose a higher $t$ and increase the resistance of the PUF against the noise.
For practical purposes we define two functions $\hat{G}_R$ and $\hat{G}_H$ such that
\begin{equation}
\hat{G}(\cdot)=(\hat{G}_R(\cdot),\hat{G}_H(\cdot))\, ,
\end{equation}
and therefore ${\bf r_i}=\hat{G}_R(\bf y_i)$ and ${\bf h_i}=\hat{G}_H\bf(y_i)$ for $\bf y_i\in\mathcal{Y}$.
Moreover, we define the function $\hat{F}_E$ to be the function mapping each challenge to the respective response in the enrollment stage, i.e.
\begin{equation}
\label{eq:fe}
\hat{F}_E(\cdot):=\hat{G}_R(\hat{P}(\cdot))\, ,
\end{equation}
for $\bf x_i\in\mathcal{X}$ and therefore ${\bf r_i}=\hat{F}_E ({\bf x_i})$.
Summarising, during the enrollment stage the Certifier creates a set of $N$ challenges $\mathcal{X}\subseteq\{0,1\}^n$ and a set of $N$ responses $\mathcal{R}\subseteq\{0,1\}^m$
\begin{equation}
\mathcal{R}=
\Set{{\bf r_i}\in\{0,1\}^m\,|\,{\bf r_i}=\hat{F}_E({\bf x_i});\quad {\bf x_i}\in\mathcal{X}}\, .
\end{equation}
They are stored into the Challenge-Response Table (CRT) together with
\begin{itemize}
\item the set of $N$ strings $\bf w_i$ representing how to set the shifter operator to get the correct outcome;
\item the parameters of the fuzzy extractor;
\item the (possibly public) set of helper data $\mathcal{H}\subseteq\{0,1\}^\star$, i.e.
\begin{equation}
\mathcal{H}=
\Set{{\bf h_i}\in\{0,1\}^\star\,|\,{\bf h_i}=\hat{G}_H(\hat{P}({\bf x_i}));\quad {\bf x_i}\in\mathcal{X}} \, .
\end{equation}
\end{itemize}
The Challenge-Response Table is given to Alice and the PUF to Bob, concluding the enrollment stage.
\subsection{Verification}
\label{subsec:cver}
In the verification stage, Bob declares his identity and allows Alice to (remotely) interact with his PUF. Alice, equipped with the CRT, retraces the steps made by the Certifier in the enrollment stage.
She picks a randomly selected challenge ${\bf x_j}\in\mathcal{X}$ (for which she knows the response ${\bf r_j}=\hat{F}_E({\bf x_j})$) and prepares the challenge state $\vec{x}_j$. The PUF transforms $\vec{x}_j$ into the outcome state $\hat{\Pi}(\vec{x}_j)$. At this point, Alice tunes the shifter $\hat{\Omega}_j$ according to the CRT and evaluates $\hat{\Omega}_j\big(\hat{\Pi}(\vec{x}_j)\big)$.
After the use of the PUF and the shifter, she may obtain a noisy version of $\vec{y}_j$, because of noise or a wrong preparation of the challenge state. Moreover, the noise could come from the PUF not being the original one, if an adversary Eve is impersonating Bob.
We call this noisy version $\vec{y'}_j= \hat{\Pi}^{(e)}(\vec{x}_j)$. In that case $\hat{\Omega}_j(\vec{y'}_j)\neq\vec{0}$, which leads to $\mathbf{o'_j}\neq \mathbf{0}_{l_o}$ such that ${\bf y'_j}={\bf w_j\,\|\,o'_j}=\hat{P}^{(e)}({\bf x_j})$ is different from the ${\bf y_j}$ obtained by the Certifier in the enrollment stage.
The outcome is then post-processed by the reproduction function of the fuzzy extractor that was used in the enrollment stage, so Alice collects $\mathbf{z_j}:= \hat{F}_V(\bf x_j)$, where the function $\hat{F}_V$ represents the map between the challenges and the corresponding responses in the verification stage, i.e.
\begin{equation}
\label{eq:fv}
\hat{F}_V:=\hat{R}\big(\hat{P}^{(e)}(\mathbf{\cdot}), \hat{G}_H(\hat{P}(\mathbf{\cdot}))\big) \, ,
\end{equation}
for $\bf x_j\in\mathcal{X}$.
The claimed response $\bf z_j$ is compared with the one in the CRT: if $\bf z_j=r_j$, Bob is authenticated, otherwise the protocol fails.
\section{QR-PUF}
\label{sec:quant}
The authentication scheme for Quantum Readout PUFs follows the structure of the classical scheme (see Sect. \ref{sec:class}) and still uses classical challenges, responses and fuzzy extractors in the mathematical layer. However, the implementation of the challenge states and outcome states in the physical layer is done via quantum states.
At the moment, the only classical PUF which was extended to a QR-PUF is an optical PUF \cite{BS, GHMSP}, for which there are some studies on side-channel attacks \cite{SMP, BS13, YGLZ}.
In this work, we study discrete qubit states, but our approach could also be generalized to continuous-variable $\text{(QR-)}$ PUFs \cite{ND, GN}.
Let us assume we work with $\lambda$ qubits, so that challenge states are elements of the Hilbert space $\mathbb{C}^{2^\lambda}$. We also assume that each qubit can be in a finite number of states. Like in the classical case, we can parametrize the configurations of the experimental system that implements the challenge states, to obtain a set $\mathcal{X}$ of classical challenges. We denote the length of such strings by $n$, to match the case of classical PUFs.
Here the challenge states are quantum and will be represented by $\ket{x_i}$.
Our QR-PUF will be described in an idealized way, as a unitary operation acting on a pure state to produce another pure state. In reality, this process will introduce noise: in our framework, this will be taken into account in the transition from the outcome state to the outcome string.
\subsection{Enrollment}
\label{subsec:qenr}
Since not all states are implementable, or they do not lead to distinguishable responses, the Certifier selects $N\leq 2^{n}$ challenges ${\bf x_i}\in\mathcal{X}\subseteq\{0,1\}^n$,
where $\mathcal{X}$ is implemented by a set of nonorthogonal states $\Set{\Ket{x_1},\dots,\Ket{x_N}}\subset\mathbb{C}^{2^\lambda}$.
The nonorthogonality is expected to be a crucial condition, since, as a consequence of the no-cloning theorem \cite{WZ}, there does not exist a measurement which perfectly distinguishes nonorthogonal states. We expect that this enhances the security of QR-PUFs compared to classical PUFs since an adversary could gain only a limited amount of information about the challenge and the outcome states.
In this work we consider separable challenge states $\Ket{x_i}$, so $\ket{x_i}=\bigotimes_{k=1}^\lambda\ket{x_{ik}}$ and we can deal with single qubit states $\ket{x_{ik}}$. The procedure can be generalized to other challenge states. The qubit states can be written in terms of some complete orthonormal basis, which we denote as $\Set{\Ket{0},\,\Ket{1}}$:
\begin{equation}
\label{chalstat}
\ket{x_{ik}}= \cos \theta_{ik} \, \ket{0} + e^{i \varphi_{ik}} \sin \theta_{ik} \,\ket{1}\, ,
\end{equation}
where $\theta_{ik}\in[0,\pi]$ and $\varphi_{ik}\in[0,2\pi]$.
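Eq. \eqref{chalstat} is easy to realize numerically; the sketch below (with arbitrary, illustrative angles) builds a separable $\lambda$-qubit challenge state as a tensor product and checks its normalization:

```python
import numpy as np

def qubit_state(theta, phi):
    """cos(theta)|0> + e^{i phi} sin(theta)|1>."""
    return np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])

def challenge_state(thetas, phis):
    """Separable lambda-qubit challenge state |x_i> as a tensor product."""
    state = np.array([1.0 + 0j])
    for theta, phi in zip(thetas, phis):
        state = np.kron(state, qubit_state(theta, phi))
    return state

psi = challenge_state([0.3, 1.1], [0.0, 2.0])   # lambda = 2, arbitrary angles
print(psi.shape)                                 # (4,)
print(np.isclose(np.linalg.norm(psi), 1.0))      # True: the state is normalized
```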
The Certifier sends all states to the QR-PUF, collecting the outcome states. The QR-PUF is formalized as a $\lambda$-fold tensor product of single-qubit unitary gates $\hat{\Phi}=\bigotimes_{k=1}^\lambda \hat{\Phi}_k$. Although its form is unknown, each $\hat{\Phi}_k$ can be parametrized as \cite{ZK}:
\begin{equation}
\label{unitmat}
\hat{\Phi}_k(\omega_k,\psi_k,\chi_k)=
\begin{pmatrix}
e^{i \psi_k}\cos \omega_k & e^{i \chi_k}\sin \omega_k \\
-e^{-i \chi_k}\sin \omega_k & e^{-i \psi_k}\cos \omega_k
\end{pmatrix}\, ,
\end{equation}
with random parameters $\psi_k, \chi_k \in [0, 2\pi]$ and $ \omega_k \in \left[0, \frac{\pi}{2}\right]$.
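The parametrization of Eq. \eqref{unitmat} can be checked directly: with arbitrary parameter values, the matrix below is unitary for any $(\omega_k,\psi_k,\chi_k)$:

```python
import numpy as np

def phi_k(omega, psi, chi):
    """Single-qubit QR-PUF gate in the parametrization of Eq. (unitmat)."""
    return np.array([
        [np.exp(1j * psi) * np.cos(omega),   np.exp(1j * chi) * np.sin(omega)],
        [-np.exp(-1j * chi) * np.sin(omega), np.exp(-1j * psi) * np.cos(omega)],
    ])

U = phi_k(0.7, 1.3, 2.1)                        # arbitrary parameters
print(np.allclose(U @ U.conj().T, np.eye(2)))   # True: unitarity
```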
The outcome state is then $\ket{y_i}=\bigotimes_{k=1}^\lambda\ket{y_{ik}}$, where
\begin{equation}
\label{outstat}
\begin{split}
&\ket{y_{ik}}=\,\hat{\Phi}_k \ket{x_{ik}}\\
&=\begin{pmatrix}
e^{i \psi_k}\cos \omega_k \cos \theta_{ik} + e^{i (\chi_k+\varphi_{ik}) }\sin \omega_k \sin \theta_{ik} \\
-e^{-i \chi_k}\sin \omega_k \cos \theta_{ik} + e^{i(\varphi_{ik}- \psi_k)}\cos \omega_k \sin \theta_{ik}
\end{pmatrix}\, .
\end{split}
\end{equation}
\begin{figure*}
\caption{A scheme for the verification stage for QR-PUFs, as described in Sec. \ref{subsec:qver}.}
\label{fig:qPUF}
\end{figure*}
Like in the classical case, the Certifier can design a state-dependent shifter, that performs a tensor product of unitary transformations, $\hat{\Omega}_{i}=\bigotimes_{k=1}^\lambda\hat{\Omega}_{ik}$, each one of them mapping a specific qubit state to the reference state $\ket{0}=(1,0)^T$. This operation is indeed unitary, because for $\ket{y_{ik}}= \cos\alpha_{ik}\Ket{0}+e^{i\beta_{ik}}\sin\alpha_{ik}\ket{1}$, it holds that $\hat{\Omega}_{ik} \ket{y_{ik}} =\Ket{0}$ for
\begin{equation}
\label{shifdef}
\hat{\Omega}_{ik} =
\begin{pmatrix}
\cos\alpha_{ik} & e^{-i\beta_{ik}}\sin\alpha_{ik}\\
e^{i\beta_{ik}}\sin\alpha_{ik} & -\cos\alpha_{ik}
\end{pmatrix}\, ,
\end{equation}
which verifies $\hat{\Omega}_{ik}\,\hat{\Omega}^\dagger_{ik}=\hat{\Omega}_{ik}^\dagger\,\hat{\Omega}_{ik}=\mathbb{I}$, where $\mathbb{I}$ is the identity operator.
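These properties of the shifter are easy to verify numerically (the angles $\alpha_{ik}$, $\beta_{ik}$ below are arbitrary): $\hat{\Omega}_{ik}$ of Eq. \eqref{shifdef} is unitary and maps the corresponding outcome qubit to $\Ket{0}$:

```python
import numpy as np

def shifter(alpha, beta):
    """Omega_ik of Eq. (shifdef)."""
    return np.array([
        [np.cos(alpha),                     np.exp(-1j * beta) * np.sin(alpha)],
        [np.exp(1j * beta) * np.sin(alpha), -np.cos(alpha)],
    ])

alpha, beta = 0.8, 1.9                                               # arbitrary
y_ik = np.array([np.cos(alpha), np.exp(1j * beta) * np.sin(alpha)])  # outcome qubit
omega = shifter(alpha, beta)

print(np.allclose(omega @ y_ik, [1, 0]))               # True: shifted to |0>
print(np.allclose(omega @ omega.conj().T, np.eye(2)))  # True: unitary
```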
The Certifier can implement $\hat{\Omega}_{i}$ for each $\hat{\Phi}\Ket{x_i}$ because he can repeat the experiment and characterize each outcome state by performing quantum state tomography or, as we work with pure states, compressed sensing \cite{GLFBE}.
Instead of having to change the single-qubit measurement basis for each qubit and each challenge, by applying the suitable shifter it is now possible to use the basis $\set{\Ket{0}, \Ket{1}}$ for all qubits of all challenges.
By definition of $\hat{\Omega}_{ik}$, if there is no error, we will measure for every qubit the state $\ket{0}$, and the results of the measurement form a string of length $\lambda$ made of all zeros, ${\bf o_i}=\mathbf{0}_{\lambda}=00\dots0$. If there is some error, which in the quantum case is introduced by either the environment or an adversary, the Hamming weight of $\bf o_i$ gives us an estimate of it.
Like in the classical case, we can parametrize the experimental system that implements the shifters in terms of the (discrete) configuration it must assume to implement a specific $\hat{\Omega}_i$. Therefore, a given $\hat{\Omega}_i$ is represented by a classical string $\bf w_i\in\mathcal{W}$ of length $ l_w$.
We again define as \emph{outcome} a classical string $\mathbf{y_i}$ of length $l=l_w+\lambda$, given by:
\begin{equation}
\mathbf{y_i}=\bf w_i\,\|\,o_i\, ,
\end{equation}
where $\|$ is the concatenation of strings.
We also define a set $\mathcal{Y}$ like in Eq. \eqref{setY} and a function $\hat{P}:\mathcal{X}\rightarrow\mathcal{Y}$ mapping every challenge to the corresponding outcome.
At this point, like for classical PUFs, the Certifier fixes the correctable amount of noise $t<l_o$ and selects a fuzzy extractor $(\hat{G}, \hat{R})$, able to correct $t$ errors and to generate a uniformly distributed response, according to the distribution of the outcome states and the entropy of the set of outcomes. The non-orthogonality of the challenge states affects $t$: when a wrong challenge state is implemented, its \textit{fidelity} with the correct one is preserved by the QR-PUF and the shifter, since they are unitary maps, and influences the results of the measurement. The maximum correctable error $t$ has to be chosen lower than the error produced by wrong implementations, which becomes small for highly non-orthogonal challenges. The Certifier may decide to delete challenge-response pairs from the Challenge-Response Table, in order to choose a higher $t$ and increase the resistance of the QR-PUF against the noise. However, this reduces the overall non-orthogonality of the quantum states, thus improving Eve's ability to distinguish them. Such a trade-off will be discussed again in the following sections.
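The fidelity argument above can be made concrete. Since the QR-PUF and the shifter are unitary, reading a wrongly implemented challenge $\Ket{x_j}$ through the shifter of $\Ket{x_i}$ leaves, for each qubit, a probability $1-|\langle x_{ik}|x_{jk}\rangle|^2$ of measuring $\Ket{1}$. The sketch below (with arbitrary angles of our choosing) estimates the resulting expected error weight, which is small for nearly parallel states and thus forces a small $t$:

```python
import numpy as np

def qubit_state(theta, phi):
    """cos(theta)|0> + e^{i phi} sin(theta)|1>."""
    return np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])

def expected_error_weight(x_i, x_j):
    """Expected Hamming weight of o_i^(j): per qubit, the probability of
    measuring |1> after the shifter is 1 - |<x_ik|x_jk>|^2, because the
    unitary QR-PUF and shifter preserve the overlap of the two states."""
    return sum(1 - abs(np.vdot(a, b))**2 for a, b in zip(x_i, x_j))

x_i = [qubit_state(0.2, 0.0), qubit_state(1.0, 0.5)]
x_j = [qubit_state(0.3, 0.0), qubit_state(1.2, 0.4)]   # a nearby wrong challenge
print(expected_error_weight(x_i, x_j))   # small for nearly parallel states
```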
The generation function of a fuzzy extractor generates a uniformly distributed response $\bf r_i \in\mathcal{R}$, together with a public helper data $\bf h_i\in\mathcal{H}$.
Again we have:
\begin{equation}
\hat{G}(\cdot)=(\hat{G}_R(\cdot),\hat{G}_H(\cdot))\, ,
\end{equation}
and
\begin{equation}
{\bf r_i}=\hat{G}_R({\bf y_i}), \quad\forall \,{\bf y_i}\in\mathcal{Y}\, .
\end{equation}
We define a function $\hat{F}_E(\cdot):=\hat{G}_R(\hat{P}(\cdot)):\mathcal{X}\rightarrow\mathcal{R}$ mapping each challenge to the corresponding response, representing the action of the QR-PUF in the enrollment stage.
Like for classical PUFs, challenges, responses and other information are stored in the Challenge-Response Table, which is given to Alice, while the QR-PUF is given to Bob.
\subsection{Verification}
\label{subsec:qver}
In the verification stage Bob allows Alice to (remotely) interact with his $\text{QR-}$PUF. She selects randomly a challenge $\bf x_j\in\mathcal{X}$ and prepares $\Ket{x_j}$.
Using the QR-PUF with the challenge state $\Ket{x_j}$, Alice may obtain $\ket{y'_j}$, different from the expected $\ket{y_j}$, because of noise or an erroneous implementation of the system or the action of a malicious intruder. Then Alice applies $\hat{\Omega}_{j}$ and measures each qubit state in the basis $\Set{\Ket{0},\Ket{1}}$, obtaining $\bf o'_j$ and hence the outcome $\mathbf{y'_j}=\bf w_j\,\|\,o'_j$.
While in the ideal noiseless case $\mathbf{o'_j}={\bf 0}_{l_o}$, here we may measure some state $\ket{1}$ for some qubits, therefore $\mathbf{y'_j}$ could be different from the $\bf y_j$ obtained by the Certifier in the enrollment stage.
The outcome is then post-processed by the reproduction function of the fuzzy extractor that was used in the enrollment stage, so Alice collects $\mathbf{z_j}:= \hat{F}_V(\bf x_j)$, where the function $\hat{F}_V$ is defined like in the classical case, $\hat{F}_V(\mathbf{\cdot}):=\hat{R}\big(\hat{P}^{(e)}(\mathbf{\cdot}), \hat{G}_H(\hat{P}(\mathbf{\cdot}))\big)$.
Authentication succeeds if $\hat{F}_E({\bf x_j})=\hat{F}_V({\bf x_j})$.
The verification stage is schematized in Fig.~\ref{fig:qPUF}.
\section{Properties and formalization}
\label{sec:prop}
In this section, we will analyze the properties of $\text{(QR-)}$ PUFs. As we have seen, both PUFs and QR-PUFs can be represented by a classical pair of functions $\hat{F}=(\hat{F}_E,\hat{F}_V)$ that describe the map between challenges and responses in the enrollment ($\hat{F}_E$, see Eq. \eqref{eq:fe}) or verification ($\hat{F}_V$, see Eq. \eqref{eq:fv}) stage. We will keep the same formalism for both PUFs and QR-PUFs, to allow our framework to compare them, but we will also specify the practical differences.
We have seen that noise can lead to false rejections in the protocols.
It is therefore important to characterize and quantify the amount of noise of a $\text{(QR-)}$ PUF, which is connected to its \emph{robustness}.
We take the definition of this concept from \cite{AMSSW}, adapting it to our framework and our formalism.
\begin{definition}
Let us consider a $\text{(QR-)}$ PUF $\hat{F}$ with a set of challenges $\mathcal{X}$, where $|\mathcal{X}|=N$.
$\hat{F}$ is $\rho$-\emph{robust} with respect to $\mathcal{X}$ if $\rho\in[0,1]$ is the greatest number for which:
\begin{equation}
\frac{1}{N}\sum_{i=1}^{N}\,
Pr\{\hat{F}_V({\bf x_i})=\hat{F}_E({\bf x_i})\} \geq \rho\, .
\end{equation}
$\rho$ is called the \emph{robustness} of the $\text{(QR-)}$ PUF with respect to $\mathcal{X}$.
\end{definition}
The robustness is the average probability that the $\text{(QR-)}$ PUF outputs the correct response in the verification stage, such that the authentication succeeds. It thus quantifies the $\text{(QR-)}$ PUF's ability to avoid false rejections and depends on many factors, e.g. on the average noise of the specific implementation and on the parameters of the fuzzy extractor.
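The average in the definition can be estimated by Monte Carlo sampling. The sketch below is purely illustrative: the parity ``PUF'' and the bit-flip verification noise are hypothetical stand-ins for $\hat{F}_E$ and $\hat{F}_V$.

```python
import random

def estimate_robustness(f_enroll, f_verify, challenges, trials=200, seed=0):
    """Monte Carlo estimate of (1/N) * sum_i Pr[F_V(x_i) = F_E(x_i)]."""
    rng = random.Random(seed)
    total = 0.0
    for x in challenges:
        hits = sum(f_verify(x, rng) == f_enroll(x) for _ in range(trials))
        total += hits / trials
    return total / len(challenges)

# Hypothetical toy PUF: the response is the parity of the challenge bits;
# verification flips the response with probability p (the "noise").
def f_enroll(x):
    return sum(x) % 2

def make_verify(p):
    def f_verify(x, rng):
        r = f_enroll(x)
        return r ^ 1 if rng.random() < p else r
    return f_verify
```

With flip probability $p$, the estimate converges to $\rho=1-p$ as the number of trials grows.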
Regarding the robustness, we do not expect a significant advantage of QR-PUFs compared to classical PUFs.
In fact, QR-PUFs may even be at a disadvantage, because of the fragility of quantum states and the necessity of a low error threshold $t$, since noise can also originate from the interaction of an adversary.
Any implementation with QR-PUFs has to pay special care to this issue.
Now we will discuss unclonability, which is the main property involved in attacks by an adversary Eve. This concept is also loosely inspired by \cite{AMSSW}, but with marked differences, mainly caused by the need to take QR-PUFs into account.
In the context of entity authentication with $\text{(QR-)}$ PUF, the purpose of an adversary Eve is to create a clone of a $\text{(QR-)}$ PUF, such that Alice can verify with it a challenge-response pair, falsely authenticating her as Bob.
When we say \emph{clone}, we need to specify if we are talking of a physical or a mathematical one.
A \emph{physical clone} is an experimental reproduction of the $\text{(QR-)}$ PUF. It has the same physical properties as the original one, even in contexts unrelated to the authentication protocol.
The requirement of \emph{physical unclonability} means that producing a physical clone is technologically or financially infeasible at the current state of technology.
A mathematical clone, instead, is an object that \emph{simulates} the challenge-response behavior of a $\text{(QR-)}$ PUF.
In this case, we cannot just state that a mathematical clone is unfeasible, because if there are some correlations between the outcome states, in principle they can be exploited to predict new challenge-response pairs. As mentioned in the introduction, several PUFs have been successfully mathematically cloned. We need to formalize this notion, in order to quantify it for different $\text{(QR-)}$ PUFs.
We assume that Eve cannot directly access the internal structure of the $\text{(QR-)}$ PUF \cite{RSS, RBK}, but can only interact with the challenge and the outcome states.
An attack consists of two phases, both carried out during the verification stage of the protocol. We require that the enrollment stage is inaccessible to Eve since this part is performed in the Certifier's lab and it involves the study of the inner structure of the $\text{(QR-)}$ PUF. During the \emph{passive phase}, Eve observes a certain number of successful authentications with the real $\text{(QR-)}$ PUF, collecting as much information as she can. Then, during the \emph{active phase} she designs a clone and gives it to Alice, claiming to be Bob. The attack succeeds if she is authenticated as Bob.
Each interaction affects one challenge-response pair. In this context, there is a crucial difference between PUFs and QR-PUFs. Classical states can be measured without introducing disturbances and can be copied perfectly. Therefore for $q\leq N$ interactions, we can assume that Eve would know exactly $q$ challenge and outcome states, possibly using this information to create a mathematical clone of the PUF.
Instead, a quantum state cannot be copied. Moreover, a quantum measurement cannot perfectly distinguish the states (since they are non-orthogonal) and any measurement can in principle introduce errors, thus potentially making passive eavesdropping a detectable action. After $q$ interactions, Eve would know less than $q$ challenge and outcome states. This is the main reason for which QR-PUFs have been introduced: we expect that, concerning unclonability, they can be superior, even far superior, to classical PUFs \footnote{As we mentioned in Sec. \ref{subsec:qenr}, highly non-orthogonal challenge states require a fuzzy extractor with a low correctable error, undermining the robustness of the QR-PUF. Therefore this feature of QR-PUFs must be used carefully, balancing robustness and unclonability.}.
\begin{definition}
Let $\hat F$ be a $\text{(QR-)}$ PUF with a set of challenges $\mathcal{X}$, where $|\mathcal{X}|=N$. Let us suppose that an adversary Eve has $q$ interactions with a $\text{(QR-)}$ PUF in the passive stage of an attack, by observing an authentication protocol between Alice and Bob. With the information she can extract, she prepares a clone $\hat{E}_q$, defined as (see Eq. \eqref{eq:fv} for a comparison)
\begin{equation}
\label{eq:Eq}
\hat{E}_q(\cdot):=\hat{R}\big(\hat{P}_E (\cdot), \hat{G}_H(\hat{P}(\cdot))\big)\:,
\end{equation}
and gives it to Alice, who selects a challenge $\bf x_i\in\mathcal{X}$ and evaluates $\hat{E}_q({\bf x_i})$.
Then $\hat{E}_q$ is a $ (\gamma,q)$-\emph{(mathematical) clone} of $\hat{F}$ if $\gamma\in[0,1]$ is the greatest number for which
\begin{equation}
\frac{1}{N}\sum_{i=1}^{N} Pr(\hat{E}_q({\bf x_i})=\hat{F}_E({\bf x_i}))\geq \gamma\, .
\end{equation}
\end{definition}
\begin{definition}
A $\text{(QR-)}$ PUF $\hat{F}$ is called $(\gamma,q)$-\emph{(mathematical) clonable} if $\gamma\in[0, 1]$ is the smallest number for which it is not possible to generate a $(\bar{\gamma},q)$ clone of the $\text{(QR-)}$ PUF for any $\bar{\gamma}>\gamma$.
Conversely, a $\text{(QR-)}$ PUF $\hat{F}$ is denoted as $(\delta,q)$-\emph{(mathematical) unclonable} if it is $(1-\delta,q)$-clonable.
\end{definition}
The unclonability of a $\text{(QR-)}$ PUF is therefore related to the average probability of false acceptance.
We expect a relation between the number of interactions $q$ and the unclonability: with knowledge of more challenge-response pairs, Eve can build an increasingly sophisticated reproduction of the $\text{(QR-)}$ PUF. Increasing $q$ thus increases her ability to make $(1-\delta,q)$-clones with a lower $\delta$. Therefore, by fixing the maximum number of uses $q=q^* $ we fix the minimum $\delta=\delta^* $,
ensuring that for $q<q^* $ the $\text{(QR-)}$ PUF is at least $(\delta^* , q)$-unclonable.
\begin{definition}
A $(\rho,\delta^* ,q^* )$-\emph{secure} $\text{(QR-)}$ PUF $\hat{F}$ is $\rho$-robust, physically unclonable and at least $(\delta^* ,q)$-mathematically unclonable up to $q^* $ uses.
\end{definition}
When manufacturing $\text{(QR-)}$ PUFs, several properties that are typically implementation-dependent are important \cite{MV}. We believe that robustness and unclonability as defined above are, from a theoretical point of view, the main and most general properties of a $\text{(QR-)}$ PUF. They are directly related to the probabilities of false rejection and false acceptance, hence describing the efficiency and the security of the entity authentication protocol.
They also describe all $\text{(QR-)}$ PUFs independently of their implementation.
\section{Examples}
\label{sec:ex}
Explicit calculation of the robustness and the unclonability of a particular $\text{(QR-)}$ PUF strongly depends on its implementation.
In this section, we illustrate the analysis for simplified examples, starting from idealized, extreme cases.
\begin{itemize}
\item Consider a physically unclonable device implementing a true random number generator; an example is a QR-PUF based on the shot noise of an integrated circuit.
This device is extremely difficult to simulate (Eve can only make a random guess), but it is also not robust at all, since it will not generate the same number in the enrollment and in the verification stage. For this device, it holds
\begin{equation}
\begin{split}
&\frac{1}{N}\sum_{i=1}^{N}\,Pr\{\hat{F}_V({\bf x_i})=\hat{F}_E({\bf x_i})\}= \frac{1}{N}\, ; \\
&\frac{1}{N}\sum_{i=1}^{N} Pr(\hat{E}_{q^*}({\bf x_i})=\hat{F}_E({\bf x_i}))= \frac{1}{N}\, .
\end{split}
\end{equation}
Therefore it is a $(1/N,1-1/N,q^*)$-secure $\text{(QR-)}$ PUF, for any $q^*$.
\item Consider a physically unclonable device that outputs a fixed signal ($\vec{0}$ for classical PUFs or $\Ket{0}$ for QR-PUFs) for any input. An example is an optical QR-PUF based on the polarization of light, for which a fixed polarizer is used as a shifter: for all outcome states, only light waves of a specific polarization would pass through. This device is perfectly robust, but also clonable. It holds
\begin{equation}
\begin{split}
&\frac{1}{N}\sum_{i=1}^{N}\,Pr\{\hat{F}_V({\bf x_i})=\hat{F}_E({\bf x_i})\}= 1\, ; \\
&\frac{1}{N}\sum_{i=1}^{N} Pr(\hat{E}_{q^*}({\bf x_i})=\hat{F}_E({\bf x_i}))= 1\, .
\end{split}
\end{equation}
Therefore it is a $(1,0,q^*)$-secure $\text{(QR-)}$ PUF, for any $q^*$.
\end{itemize}
These examples are extreme cases, while all real $\text{(QR-)}$ PUFs lie somewhere in between. We now focus on an example of a QR-PUF, to point out some features of QR-PUFs and some open points.
Let $\hat{F}$ be a QR-PUF implemented by a unitary transformation $\hat{\Phi}$, acting on $\lambda$ qubits, parametrized according to Eq. \eqref{unitmat}, with $\psi_k=\chi_k=0$, i.e.
\begin{equation}
\hat{\Phi}=\bigotimes_{k=1}^\lambda \hat{\Phi}_k=\bigotimes_{k=1}^\lambda
\begin{pmatrix}
\cos\omega_k & \sin\omega_k \\
-\sin\omega_k & \cos\omega_k
\end{pmatrix}\, .
\end{equation}
Consider a scenario in which each challenge state is a separable state of $\lambda$ qubits, $\Ket{x_i}=\bigotimes_{k=1}^\lambda \Ket{x_{ik}}$, and each qubit is in one of four possible states:
\begin{equation}
\label{exqub}
\ket{x_{ik}}=\ket{x_{ik}^{(\ell)}}:=\cos \left(\frac{\phi^{(\ell)}}{
2}\right)\Ket{0}+\sin \left(\frac{\phi^{(\ell)}}{2}\right)\Ket{1}\, ,
\end{equation}
where
\begin{equation}
\label{eq:phi}
\begin{split}
&\phi^{(1)}=\phi\, ,\qquad\qquad \phi^{(2)}=-\phi\, ,\\
&\phi^{(3)}=\phi-\pi\, ,\qquad\: \phi^{(4)}=\pi-\phi\, ,
\end{split}
\end{equation}
for a fixed angle $\phi$. Such challenge states can be parametrized by challenge strings of length $n=2\lambda$: for each qubit, the four possibilities are represented by two bits.
For simplicity of notation, from now on, we drop the indices $i$ and $k$, e.g. we write $\big|x^{(\ell)}\big\rangle:=\big| x_{ik}^{(\ell)}\big\rangle$.
The pairs $\{\Ket{x^{(1)}},\Ket{x^{(3)}}\}$ and $\{\Ket{x^{(2)}},\Ket{x^{(4)}}\}$ are orthogonal, but the overall set is non-orthogonal.
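This encoding can be sketched as follows; the specific assignment of bit pairs to the labels $\ell$ is our assumption, since the text only fixes the four angles of Eq. \eqref{eq:phi}.

```python
import math

def angle(bits, phi):
    """Map two challenge bits to one of the four angles phi^(l).

    The bit-pair-to-l assignment below is an illustrative choice."""
    return {(0, 0): phi,            # phi^(1)
            (0, 1): -phi,           # phi^(2)
            (1, 0): phi - math.pi,  # phi^(3)
            (1, 1): math.pi - phi}[tuple(bits)]

def qubit_state(bits, phi):
    """|x> = cos(phi^(l)/2)|0> + sin(phi^(l)/2)|1> as a real 2-vector."""
    a = angle(bits, phi)
    return (math.cos(a / 2), math.sin(a / 2))

def overlap(s1, s2):
    """|<s1|s2>|^2 for real single-qubit states."""
    return (s1[0] * s2[0] + s1[1] * s2[1]) ** 2
```

One checks that the pairs $\{\Ket{x^{(1)}},\Ket{x^{(3)}}\}$ and $\{\Ket{x^{(2)}},\Ket{x^{(4)}}\}$ have zero overlap, while cross pairs such as $(\ell,\ell')=(1,2)$ have overlap $\cos^2\phi$.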
We assume that the noise can be parametrized as a depolarizing channel, associated with an error probability $\tilde{p}$, equal for all qubits. The noisy challenge state reads:
\begin{equation}
\begin{split}
\tilde{\rho}_x&:=(1-\tilde{p})\Ket{x}\Bra{x}+\tilde{p}\,\frac{\hat{I}}{2} \\
&=\left[(1-\tilde{p})\cos^2\left(\frac{\phi^{(\ell)}}{2}\right)+ \frac{\tilde{p}}{2}\right] \Ket{0}\Bra{0}\\
&+\left[ (1-\tilde{p}) \sin\left(\frac{\phi^{(\ell)}}{2}\right) \cos\left(\frac{\phi^{(\ell)}}{2}\right) \right] \left(\Ket{0}\Bra{1}+ \Ket{1}\Bra{0}\right) \\
&+\left[(1-\tilde{p}) \sin^2\left(\frac{\phi^{(\ell)}}{2}\right) +\frac{\tilde{p}}{2}\right]\Ket{1}\Bra{1}\, .
\end{split}
\end{equation}
The shifter needs to map the noiseless outcome state to $\Ket{0}\dots\Ket{0}$. According to Eq.~\eqref{shifdef}, it can be chosen as a $\lambda$-fold tensor product of single-qubit gates
\begin{equation}
\begin{split}
\hat{\Omega}= &\cos\left(\frac{\phi^{(\ell)}}{2}-\omega\right)\proj{0}{0}+ \sin\left(\frac{\phi^{(\ell)}}{2}-\omega\right)\proj{0}{1}\\
+&\sin\left(\frac{\phi^{(\ell)}}{2}-\omega\right)\proj{1}{0}-\cos\left(\frac{\phi^{(\ell)}}{2}-\omega\right)\proj{1}{1}\, ,
\end{split}
\end{equation}
and it follows:
\begin{equation}
\tilde{\rho}_o:= \hat{\Omega}\,\tilde{\rho}_{y}\,\hat{\Omega}^\dagger= \left(1-\frac{\tilde{p}}{2}\right)\proj{0}{0}+ \left(\frac{\tilde{p}}{2}\right)\proj{1}{1} \, .
\end{equation}
For a single qubit, therefore, the probability of measuring $\Ket{1}$ is $\tilde{p}/2$.
For a challenge state of $\lambda$ qubits, the average Hamming weight of the string $\bf o_i$ is $\lambda\,\tilde{p}/2$.
Any fuzzy extractor is defined in terms of the maximum number of errors $t$ it can correct. With our error model, we can choose to correct the average error of the system, i.e. $t=\lceil \lambda\,\tilde{p}/2 \rceil$, where $\lceil \lambda\,\tilde{p}/2 \rceil$ is the least integer greater than or equal to $\lambda\,\tilde{p}/2$.
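The single-qubit claim above, $\Pr[\ket{1}]=\tilde p/2$ independently of the challenge angle, can be verified numerically. In this sketch the shifter is implemented as a rotation chosen via \texttt{atan2} (our assumption; it acts on the noiseless state exactly like the reflection-type shifter of the text), with plain $2\times 2$ real matrix algebra.

```python
import math

def rot(a):
    """Real 2x2 rotation [[cos a, sin a], [-sin a, cos a]], as in the QR-PUF model."""
    return [[math.cos(a), math.sin(a)], [-math.sin(a), math.cos(a)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def prob_one(phi, omega, p):
    """Pr[|1>] after the QR-PUF rotation, depolarizing noise p, and the shifter."""
    # challenge |x> = cos(phi/2)|0> + sin(phi/2)|1>, then the QR-PUF rotation Phi
    x = [math.cos(phi / 2), math.sin(phi / 2)]
    Phi = rot(omega)
    y = [Phi[0][0] * x[0] + Phi[0][1] * x[1], Phi[1][0] * x[0] + Phi[1][1] * x[1]]
    # noisy state rho = (1-p)|y><y| + p I/2
    rho = [[(1 - p) * y[i] * y[j] + (p / 2) * (i == j) for j in range(2)] for i in range(2)]
    # shifter maps the noiseless |y> back to |0> (explicit atan2 choice is ours)
    Omega = rot(math.atan2(y[1], y[0]))
    rho_o = mat_mul(mat_mul(Omega, rho), transpose(Omega))
    return rho_o[1][1]
```

For any angles $\phi$ and $\omega$ the result is $\tilde p/2$, confirming that the residual error depends only on the noise level.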
However, $t$ and the number $N$ of challenge-response pairs are related since the fuzzy extractor has to uniquely map a given outcome into a unique response, without collisions.
Consider $\big| x^{(\ell)}\big\rangle$ and $\big| x^{(\ell')}\big\rangle$ ($\ell, \ell'\in \{1,2,3,4\}$ and $\ell\neq \ell'$) and estimate the error if $\big| x^{(\ell)}\big\rangle$ is implemented as the state $\big| x^{(\ell')}\big\rangle$, by evaluating $\hat{\Omega}_\ell \,\hat{\Phi}\big|x^{(\ell')}\big\rangle$.
From
\begin{equation}
\begin{split}
&\Ket{x^{(\ell)}}= \cos \left(\frac{\phi^{(\ell)}}{2}\right)\Ket{0}+\sin \left(\frac{\phi^{(\ell)}}{2}\right)\Ket{1}\, , \\
&\Ket{x^{(\ell')}}= \cos \left(\frac{\phi^{(\ell')}}{2}\right)\Ket{0}+\sin \left(\frac{\phi^{(\ell')}}{2}\right)\Ket{1}\, ,
\end{split}
\end{equation}
it follows
\begin{equation}
\begin{split}
&\hat{\Omega}_\ell\, \hat{\Phi}\Ket{x^{(\ell')}} \\
&=\cos\left(\frac{\phi^{(\ell)}-\phi^{(\ell')}}{2}\right)\Ket{0}+ \sin\left(\frac{\phi^{(\ell)}-\phi^{(\ell')}}{2}\right)\Ket{1}\, .
\end{split}
\end{equation}
Therefore, for this case, the probability of measuring $\ket{1}$ is $\sin^2\big[\big(\phi^{(\ell)}-\phi^{(\ell')}\big)/2\big]$.
In Table~\ref{table1}, the explicit values for all combinations of the four qubit states are listed. In case of a wrong implementation, challenges with a large overlap lead to small error weights, while orthogonal challenges lead to large ones. Thus there is a trade-off between the robustness of the QR-PUF and the quantum advantage of using indistinguishable non-orthogonal states.
\begin{table}[ht]
\centering
\begin{tabular}{ c| c c c c }
& $\Ket{x^{(1)}}$ & $\Ket{x^{(2)}}$ & $\Ket{x^{(3)}}$ & $\Ket{x^{(4)}}$ \\
\hline $\Ket{x^{(1)}}$ & 0 & $\sin^2\phi$ & 1 & $\cos^2\phi$ \\
$\Ket{x^{(2)}}$ & $\sin^2\phi$ & 0 & $\cos^2\phi$ & 1 \\
$\Ket{x^{(3)}}$ & 1 & $\cos^2\phi$ & 0 & $\sin^2\phi$ \\
$\Ket{x^{(4)}}$ & $\cos^2\phi$ & 1 & $\sin^2\phi$ & 0 \\
\hline
\end{tabular}
\caption{Error induced by implementing the wrong challenge state: the entry in row $\ell$ and column $\ell'$ of the table is the probability of error when applying shifter $\ell$ to state $\ell'$. The parameter $\phi$ is defined in Eq. \eqref{eq:phi}.}
\label{table1}
\end{table}
For any pair of possible challenge states $\Ket{x_i}=\bigotimes_{k=1}^\lambda \ket{x_{ik}}$ and $\Ket{x_j}=\bigotimes_{k=1}^\lambda \ket{x_{jk}}$, the average Hamming weight of the error string $\bf o_i$, obtained by the aforementioned process, is
\begin{equation}
\begin{split}
\operatorname{err}_{i,j}&:=(n_{12}+n_{34})\sin^2\phi+ (n_{13}+n_{24}) \\
&+(n_{14}+n_{23})\cos^2\phi\, ,
\end{split}
\end{equation}
where $n_{ab}$ counts how many qubits $k$ satisfy $\ket{x_{ik}}=\Ket{x^{(a)}}$ and $\ket{x_{jk}}=\Ket{x^{(b)}}$ (or vice versa).
If $\operatorname{err}_{i,j}<\lceil \lambda\,\tilde{p}/2 \rceil$, the fuzzy extractor could not reliably distinguish the two corresponding outcomes, so the Certifier should discard one of the two challenges, either $\bf x_i$ or $\bf x_j$, thus reducing the number $N$ of possible challenge-response pairs.
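The pairwise error weight $\operatorname{err}_{i,j}$ and the discard rule can be sketched as follows, assuming challenges are represented as lists of labels $\ell\in\{1,2,3,4\}$, one per qubit, with the single-qubit error probabilities of Table \ref{table1}.

```python
import math

def pair_error(l1, l2, phi):
    """Single-qubit error probability when state l2 is implemented for l1 (Table 1)."""
    if l1 == l2:
        return 0.0
    pairs = {frozenset({1, 2}): math.sin(phi) ** 2,
             frozenset({3, 4}): math.sin(phi) ** 2,
             frozenset({1, 3}): 1.0,
             frozenset({2, 4}): 1.0,
             frozenset({1, 4}): math.cos(phi) ** 2,
             frozenset({2, 3}): math.cos(phi) ** 2}
    return pairs[frozenset({l1, l2})]

def err(xi, xj, phi):
    """Average Hamming weight of the error string when x_j is implemented for x_i."""
    return sum(pair_error(a, b, phi) for a, b in zip(xi, xj))

def should_discard(xi, xj, phi, p_noise):
    """Discard one of the pair if the confusion error is below the correctable threshold."""
    lam = len(xi)
    t = math.ceil(lam * p_noise / 2)
    return err(xi, xj, phi) < t
```

A challenge pair differing in only one weakly distinguishable qubit falls below the threshold and is discarded, while a pair differing on many qubits survives.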
After this selection is repeated for all pairs of challenges, the Certifier studies the entropy of the set of shifters, determining the strings $\bf w_i$
and the outcomes $\mathbf{y_i}=\bf w_i\,\|\,o_i$.
The \textit{Canetti's reusable fuzzy extractor} \cite{CFPRS} is able to correct up to $t=(l\ln l/m)$ bits, where $l$ is the length of the outcomes and $m$ the length of the responses. As $l=\lambda+l_w$ is fixed, $m$ has to be adapted to the noise level $\lceil \lambda\,\tilde{p}/2 \rceil$.
The correctness property of this fuzzy extractor guarantees that an error smaller than $t$ is corrected with probability $1-\tilde{\varrho}$, where
\begin{equation}
\tilde{\varrho} = \left(1-\left(1-\frac{t}{l}\right)^m\right)^{\xi_1}+\xi_1\xi_2 \,,
\end{equation}
with $\xi_1$ and $\xi_2$ being computational parameters of the fuzzy extractor (in \cite{CFPRS}, to which we refer for a precise explanation, they are called $\ell$ and $\gamma$, respectively).
Then the robustness of this QR-PUF is $1-\tilde{\varrho}$.
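The correctness bound translates directly into the robustness $1-\tilde{\varrho}$; a minimal sketch follows (any numerical parameter values used with it are purely illustrative, not taken from \cite{CFPRS}).

```python
def correctness_failure(t, l, m, xi1, xi2):
    """Failure probability: varrho = (1 - (1 - t/l)^m)^xi1 + xi1 * xi2."""
    return (1.0 - (1.0 - t / l) ** m) ** xi1 + xi1 * xi2

def robustness(t, l, m, xi1, xi2):
    """Robustness of the QR-PUF: 1 - varrho."""
    return 1.0 - correctness_failure(t, l, m, xi1, xi2)
```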
Concerning the unclonability, one should relate the amount of information Eve obtains from the (possibly correlated) challenge-response pairs to her ability to create a mathematical clone of the QR-PUF. Unfortunately, there is no general method known to provide this relation. We expect that, for some QR-PUFs, quantum unitary gate discrimination methods \cite{CH} could be used, but this line of research goes beyond the purposes of our work.
Here, we can show that QR-PUFs prevent Eve from gaining too much information about challenges and responses, thus strongly hindering her ability to learn the CRT.
Since the optimal global attack on the challenge states is not known (unless all challenge states are known), we consider here an attack that acts on individual qubits.
In particular, we consider the case in which, on each qubit, Eve can apply a $1\rightarrow 2$ cloning operator, i.e. she can intercept each qubit of a challenge state during an authentication round to produce two (imperfect) copies, one of which is given back to the legitimate parties while the other is kept for herself.
For such a set of states, the optimal cloning transformation, i.e. the transformation that achieves the highest possible fidelity between the copies and the original states, has been derived \cite{BM01}; for any challenge state $\Ket{x_i}$ and its optimal copy $\varrho^E_i$ it holds:
\begin{equation}
\begin{split}
&F(\Ket{x_{i}}\Bra{x_{i}},\varrho^E_{i} ):=\prod_{k=1}^\lambda\Braket{x_{ik}|\varrho^E_{ik}|x_{ik}}\\
&=\left(\frac{1}{2}\,\left(1+\sqrt{\sin^4\phi+\cos^4\phi}\right)\right)^\lambda\,.
\end{split}
\end{equation}
For fixed $\lambda$, the minimum value of the fidelity is reached for $\phi=\pi/4$, for which, considering a single qubit, $F\approx 0.85$.
Already for $10$ qubits the fidelity drops to $F\approx 0.20$, and for $20$ qubits to $F\approx 0.04$.
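The quoted values follow directly from the fidelity formula and can be reproduced with a few lines (a sketch; the function name is ours).

```python
import math

def clone_fidelity(phi, lam):
    """F = ((1 + sqrt(sin^4 phi + cos^4 phi)) / 2)^lam for the optimal 1->2 cloner."""
    single = 0.5 * (1.0 + math.sqrt(math.sin(phi) ** 4 + math.cos(phi) ** 4))
    return single ** lam
```

Evaluating at $\phi=\pi/4$ for $\lambda=1,10,20$ reproduces the values quoted above, and shows how the cloning fidelity decays exponentially in the number of qubits.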
Thus, Eve is not able to successfully simulate the challenge-response behavior, as she cannot even reconstruct the challenge and outcome states.
Moreover, as the fidelity is preserved by unitary matrices, this result holds also for the expected outcome state $\Ket{y_i}$ and the actual outcome state Alice obtains after challenging the QR-PUF with her (unwittingly altered by the cloning process) challenge state. The noise is too high to be corrected by the fuzzy extractor, thus aborting the authentication protocol and exposing the presence of an intruder.
For classical PUFs, instead, Eve could perfectly read the challenge and outcome states without being noticed. This provides an advantage of QR-PUFs over classical PUFs in terms of unclonability. However, as noted above, a high non-orthogonality of the challenges can, in principle, undermine the robustness. The trade-off between the advantages and disadvantages of QR-PUFs (Table \ref{table2}) has to be studied to find secure applications of them.
\pagebreak
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{QR-PUFs compared to PUFs} \\ \hline
\textbf{ Advantages} & \textbf{ Disadvantages} \\ \hline
& \\
\begin{tabular}[c]{@{}c@{}} An adversary cannot \\ copy or distinguish \\ non-orthogonal states. \\ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}Highly non-orthogonal \\ states reduce the \\ robustness. \\ \end{tabular} \\
& \\
\begin{tabular}[c]{@{}c@{}}Adversarial measurements\\ on the states introduce \\ detectable disturbances.\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Quantum states are \\ more fragile than \\ classical states.\end{tabular} \\ & \\ \hline
\end{tabular}
\caption{Advantages and disadvantages of QR-PUFs compared to classical PUFs.}
\label{table2}
\end{table}
\section{Conclusion}
In this article, we proposed a theoretical framework for the quantitative characterisation of both PUFs and QR-PUFs. After developing an authentication protocol common to both types, with the same error correction and privacy amplification scheme, we formalized the $\text{(QR-)}$ PUFs in terms of two main properties, the \emph{robustness} (connected to false rejection) and the \emph{unclonability} (connected to false acceptance). Finally, we studied some examples, motivating the possible advantages and disadvantages of QR-PUFs compared to classical PUFs.
Our framework is useful to study and to compare different implementations of $\text{(QR-)}$ PUFs and to develop new authentication schemes.
An important application would be to rigorously prove the superiority of QR-PUFs over classical PUFs.
The next step towards that goal is the development of new methods to estimate the unclonability of $\text{(QR-)}$ PUFs for different implementations.
This could open an interesting line of theoretical and experimental research about $\text{(QR-)}$ PUFs.
Furthermore, our framework can be employed to determine the level of security of using $\text{(QR-)}$ PUFs in other cryptographic protocols, like QKD, where a quantitatively secure $\text{(QR-)}$ PUF can be used for authentication, reducing the number of necessary pre-shared key bits.
\emph{Note added:} During the finalisation of this work, we became aware of a preprint on a related topic \cite{ADDK}.
\begin{thebibliography}{42}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{https://doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Martin}(2012)}]{KM}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont
{Martin}},\ }\href@noop {} {\emph {\bibinfo {title} {Everyday Cryptography:
Fundamental Principles and Applications}}}\ (\bibinfo {publisher} {OUP
Oxford},\ \bibinfo {year} {2012})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2009)\citenamefont
{Scarani}, \citenamefont {Bechmann-Pasquinucci}, \citenamefont {Cerf},
\citenamefont {Du{\v{s}}ek}, \citenamefont {L{\"u}tkenhaus},\ and\
\citenamefont {Peev}}]{SBCDLP}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Scarani}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bechmann-Pasquinucci}}, \bibinfo {author} {\bibfnamefont {N.~J.}\
\bibnamefont {Cerf}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Du{\v{s}}ek}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{L{\"u}tkenhaus}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Peev}},\ }\bibfield {title} {\bibinfo {title} {The security of practical
quantum key distribution},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo
{pages} {1301} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wegman}\ and\ \citenamefont {Carter}(1981)}]{WC}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~N.}\ \bibnamefont
{Wegman}}\ and\ \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont
{Carter}},\ }\bibfield {title} {\bibinfo {title} {New hash functions and
their use in authentication and set equality},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {J. Comput. Syst. Sci.}\ }\textbf {\bibinfo
{volume} {22}},\ \bibinfo {pages} {265} (\bibinfo {year} {1981})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Pappu}(2001)}]{RP}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Pappu}},\ }\emph {\bibinfo {title} {Physical one-way functions}},\
\href@noop {} {Ph.D. thesis},\ \bibinfo {school} {Massachusetts Institute of
Technology, USA} (\bibinfo {year} {2001})\BibitemShut {NoStop}
\bibitem [{\citenamefont {R{\"u}hrmair}(2010)}]{UR10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont
{R{\"u}hrmair}},\ }\bibfield {title} {\bibinfo {title} {Oblivious transfer
based on physical unclonable functions},\ }in\ \href@noop {} {\emph {\bibinfo
{booktitle} {International Conference on Trust and Trustworthy Computing}}}\
(\bibinfo {organization} {Springer},\ \bibinfo {year} {2010})\ pp.\ \bibinfo
{pages} {430--440}\BibitemShut {NoStop}
\bibitem [{\citenamefont {R{\"u}hrmair}\ and\ \citenamefont {van
Dijk}(2013)}]{RD}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont
{R{\"u}hrmair}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {van
Dijk}},\ }\bibfield {title} {\bibinfo {title} {On the practical use of
physical unclonable functions in oblivious transfer and bit commitment
protocols},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J.
Cryptogr. Eng.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {17}
(\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Brzuska}\ \emph {et~al.}(2011)\citenamefont
{Brzuska}, \citenamefont {Fischlin}, \citenamefont {Schr{\"o}der},\ and\
\citenamefont {Katzenbeisser}}]{BFSK}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Brzuska}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fischlin}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Schr{\"o}der}},\ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Katzenbeisser}},\
}\bibfield {title} {\bibinfo {title} {Physically uncloneable functions in
the universal composition framework},\ }in\ \href@noop {} {\emph {\bibinfo
{booktitle} {Annual Cryptology Conference}}}\ (\bibinfo {organization}
{Springer},\ \bibinfo {year} {2011})\ pp.\ \bibinfo {pages}
{51--70}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Pappu}\ \emph {et~al.}(2002)\citenamefont {Pappu},
\citenamefont {Recht}, \citenamefont {Taylor},\ and\ \citenamefont
{Gershenfeld}}]{PRTG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Pappu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Recht}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Taylor}},\ and\ \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Gershenfeld}},\ }\bibfield
{title} {\bibinfo {title} {Physical one-way functions},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {297}},\ \bibinfo {pages} {2026} (\bibinfo {year}
{2002})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document} | math | 96,578 |
\begin{document}
\begin{abstract}
For a Latt\`es map $\phi:\mathbb P^1 \to \mathbb P^1$ defined over a number field $K$, we
prove a conjecture on the integrality of points in the backward orbit of $P\in
\mathbb P^1(\overline K)$ under $\phi$.
\\
\\
\emph{Accepted for publication in the Turkish Journal of Analysis
and Number Theory}
\end{abstract}
\title{Backward Orbit Conjecture for Latt\`es Maps}
\section{Introduction}
Let $\phi:\mathbb P^1 \to \mathbb P^1$ be a rational map of degree $\ge 2$ defined over a
number field $K$, and write $\phi^n$ for the $n$th iterate of $\phi$. For a
point $P\in \mathbb P^1$, let $\phi^+(P)=\{P, \phi(P), \phi^2(P), \dots \}$ be the
\emph{forward orbit} of $P$ under $\phi$, and let $$\phi^-(P) = \bigcup_{n\ge 0}
\phi^{-n}(P)$$ be the \emph{backward orbit} of $P$ under $\phi$. We say $P$ is
$\phi$-\emph{preperiodic} if and only if $\phi^+(P)$ is finite.
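For instance, if $\phi(z)=z^2$, then
$$\phi^+(2)=\{2,4,16,256,\dots\}, \qquad \phi^-(1)=\{\zeta \in \overline{\mathbb Q} \mid \zeta^{2^n}=1 \text{ for some } n\ge 0\},$$
so $2$ is not $\phi$-preperiodic, while every point of $\phi^-(1)$ is $\phi$-preperiodic, since its forward orbit terminates at the fixed point $1$.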
Viewing the projective line $\mathbb P^1$ as $\mathbb A^1 \cup \{\infty \}$ and
taking $P\in \mathbb A^1(K)$, a theorem of Silverman \cite{sil} states that if
the second iterate $\phi^2$ is not a polynomial (equivalently, if $\infty$ is
not a totally ramified fixed point of $\phi^2$), then $\phi^+(P)$
contains at most finitely many points in $\mathcal{O}_K$, the ring of algebraic
integers in $K$. If $S$ is the set of all archimedean places of $K$, then
$\mathcal{O}_K$ is the set of points in $\mathbb P^1(K)$ which are $S$-integral
relative to $\infty$ (see Section 2). Replacing $\infty$ with any point $Q\in
\mathbb P^1(K)$ and $S$ with any finite set of places containing all the archimedean
places, Silverman's theorem can be stated as: if $Q$ is not a totally ramified
fixed point of $\phi^2$, then $\phi^+(P)$ contains at most finitely many points
which are $S$-integral relative to $Q$.
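The hypothesis on $\phi^2$ cannot be dropped: for the polynomial map
$\phi(z)=z^2$ we have $\phi^2(z)=z^4\in K[z]$, and the forward orbit
$$\phi^+(2)=\{2,4,16,256,\dots\}$$
consists entirely of rational integers.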
A conjecture for finiteness of integral points in backward orbits
was stated in \cite[Conj. 1.2]{sook}.
\begin{conj}\label{conj} If $Q\in \mathbb P^1(\overline K)$ is not $\phi$-preperiodic,
then $\phi^-(P)$ contains at most finitely many points in $\mathbb P^1(\overline K)$
which are $S$-integral relative to $Q$.
\end{conj}
In \cite{sook}, Conjecture \ref{conj} was shown to hold for the powering map
$\phi(z)=z^d$ of degree $d\ge 2$, and consequently for Chebyshev
polynomials. A generalized version of this conjecture, which is stated over a
dynamical family of maps $[\varphi]$, is given in \cite[Sec. 4]{grant_ih}.
Along those lines, our goal is to prove a general form of Conjecture \ref{conj}
where $[\varphi]$ is the family of Latt\`es maps associated to a fixed elliptic
curve $E$ defined over $K$ (see Section \ref{main}).
\section{The Chordal Metric and Integrality}
\subsection{The Chordal Metric on $\mathbb P^N$}\label{chordal}
Let $M_{K}$ be the set of places on $K$ normalized so that the product formula
holds: for all $\alpha\in K^*$, $$\prod_{v\in M_K}|\alpha|_v = 1.$$ For points
$P=[x_0:x_1:\cdots:x_N]$ and $Q=[y_0:y_1:\cdots:y_N]$ in $\mathbb
P^N(\overline{K}_v)$, define the \emph{$v$-adic chordal metric} as $$\Delta_v
(P,Q)= \frac{\max_{i,j}(|x_iy_j-x_jy_i|_v)}{\max_i(|x_i|_v)\cdot
\max_i(|y_i|_v)}.$$ Note that $\Delta_v$ is independent of choice of projective
coordinates for $P$ and $Q$, and $0\le \Delta_v(\cdot, \cdot) \le 1$ (see
\cite{ShuSil}).
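In particular, taking $Q=[1:0]=\infty$ on $\mathbb P^1$ and $P=[x:y]$, the
formula reduces to
$$\Delta_v(P,\infty)=\frac{|y|_v}{\max(|x|_v,|y|_v)},$$
which we will use in the integrality example below.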
\subsection{Integrality on Projective Curves}\label{integrality}
Let $C$ be an irreducible curve in $\mathbb P^N$ defined over $K$ and $S$ a
finite subset of $M_K$ which includes all the archimedean places. A
\emph{divisor} on $C$ defined over $\overline{K}$ is a finite formal sum $D=\sum
n_i Q_i$ with $n_i\in \mathbb Z$ and $Q_i\in C(\overline K)$. The
divisor is \emph{effective} if $n_i > 0$ for each $i$, and its
\emph{support} is the set $\mbox{Supp}(D)=\{Q_1,\dots, Q_\ell \}$.
Let $\lambda_{Q,v}(P) = -\log \Delta_v(P,Q)$ and $\lambda_{D, v}(P)= \sum
n_i\lambda_{Q_i,v}(P)$ when $D=\sum n_i Q_i$. This makes $\lambda_{D,v}$ an
arithmetic distance function on $C$ (see \cite{sil2}), and as with any arithmetic
distance function, we may use it to classify the integral points on $C$.
For an effective divisor $D = \sum n_i Q_i$ on $C$ defined over $\overline{K}$,
we say $P \in C(\overline{K})$ is \emph{$S$-integral} relative to $D$, or $P$ is
a $(D, S)$-integral point, if and only if $\lambda_{Q_i^\sigma,v}(P^\tau) = 0$
for all embeddings $\sigma, \tau:\overline{K}\to \overline{K}$ and for all
places $v\not\in S$. Furthermore, we say the set $\mathcal{R}\subset C(\overline
K)$ is $S$-integral relative to $D$ if and only if each point in $\mathcal{R}$
is $S$-integral relative to $D$.
As an example, let $C$ be the projective line $\mathbb A^1 \cup \{\infty\}$, let
$S$ consist of the archimedean place of $K=\mathbb Q$, and let $D=\infty$. For $P=x/y$
with $x$ and $y$ relatively prime in $\mathbb Z$, we have $\lambda_{D, v}(P)=-\log|y|_v$ for
each prime $v$. Therefore, $P$ is $S$-integral relative to $D$ if and only
if $y=\pm 1$; that is, $P$ is $S$-integral relative to $D$ if and only if $P\in
\mathbb Z$.
From the definition we find that if $S_1 \subset S_2$ are finite subsets of
$M_K$ which contain all the archimedean places, then every $(D,
S_1)$-integral point is also a $(D, S_2)$-integral point. Similarly,
if $\mbox{Supp}(D_1) \subset \mbox{Supp}(D_2)$, then every $(D_2,
S)$-integral point is also a $(D_1, S)$-integral point. Therefore enlarging $S$
enlarges the set of $(D, S)$-integral points on $C(\overline{K})$, while
enlarging $\mbox{Supp}(D)$ shrinks it.
For $\phi:C_1\to C_2$, a finite morphism between projective curves and $P\in
C_2$, write $$\phi^*P= \sum_ {Q\in \phi^{-1}(P)} e_{\phi}(Q)\cdot Q$$
where $e_\phi(Q) \ge 1$ is the ramification index of $\phi$ at $Q$.
Furthermore, if $D=\sum n_iQ_i$ is a divisor on $C_2$, then we define
$\phi^*D=\sum n_i\phi^*Q_i$.
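For instance, for the squaring map $\phi(z)=z^2$ on $\mathbb P^1$,
$$\phi^*[1:1]=[1:1]+[-1:1], \qquad \phi^*[0:1]=2\cdot[0:1],$$
since $\phi$ is unramified over $1$ and totally ramified over $0$.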
\begin{thm}[Distribution Relation]\label{dist}
  Let $\phi:C_1\to C_2$ be a finite morphism between irreducible smooth curves
  in $\mathbb P^N(\overline{K})$. Then for $P\in C_2$, there is a finite set of
  places $S$, depending only on $\phi$ and containing all the archimedean
  places, such that $\lambda_{P, v} \circ \phi = \lambda_{\phi^*P, v}$
  for all $v\not\in S$.
\end{thm}
\begin{proof} See \cite[Prop. 6.2b]{sil2} and note that for projective varieties
  the $\lambda_{\delta W \times V}$ term is not required, and that the big-O
constant is an $M_K$-bounded constant not depending on $P$ and $Q$.
\end{proof}
\begin{cor}\label{dist2}
  Let $\phi:C_1\to C_2$ be a finite morphism between irreducible smooth curves
  in $\mathbb P^N(\overline{K})$, let $P\in C_1(\overline K)$, and let $D$ be an
  effective divisor on $C_2$ defined over $K$. Then there is a finite set of
  places $S$, depending only on $\phi$ and containing all the archimedean
  places, such that $\phi(P)$ is $S$-integral relative to $D$ if and only if
  $P$ is $S$-integral relative to $\phi^*D$.
\end{cor}
\begin{proof}
  Extend $S$ so that the conclusion of Theorem \ref{dist} holds. Then
  for $D=\sum n_i Q_i$ with each $n_i > 0$ and $Q_i\in C_2(\overline K)$, we
  have $$\lambda_{\phi^*D,v}(P)=\lambda_{D,v}(\phi(P)) = \sum n_i
  \lambda_{Q_i, v}(\phi (P)).$$ So $\lambda_{\phi^*D,v}(P)=0$ if and only if
  $\lambda_{Q_i, v}(\phi (P))=0$ for each $i$.
\end{proof}
\section{Main Result}\label{main}
Let $E$ be an elliptic curve, $\psi: E\to E$ a morphism, and $\pi:E \to
\mathbb P^1$ a finite covering. A \emph{Latt\`es map} is a rational map $\phi:
\mathbb P^1 \to \mathbb P^1$ making the following diagram commute:
$$
\begin{CD}
E @>\psi>> E \\
@VV \pi V @VV \pi V \\
\mathbb P^1 @>\phi>> \mathbb P^1
\end{CD}
$$
For instance, if $E$ is defined by the Weierstrass equation $y^2=x^3+ax^2+bx+c$,
$\psi=[2]$ is the multiplication-by-2 endomorphism on $E$, and $\pi(x,y)=x$,
then $$\phi(x)=\frac{x^4-2bx^2-8cx+b^2-4ac}{4x^3+4ax^2+4bx+4c}.$$
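For a concrete instance, take $E: y^2=x^3+x$ (so $a=c=0$ and $b=1$); the
formula above becomes
$$\phi(x)=\frac{x^4-2x^2+1}{4x^3+4x}=\frac{(x^2-1)^2}{4x(x^2+1)},$$
the Latt\`es map attached to duplication on $E$.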
Fix an elliptic curve $E$ defined over a number field $K$, and for
$P\in\mathbb P^1(\overline K)$ define:
\begin{center}
\begin{align*}
  [\varphi] &= \Biggl\{\phi:\mathbb P^1 \to \mathbb P^1 \;\bigg|\;
  \pctext{2.5in}{there exist a $K$-morphism $\psi:E\to E$ and a finite covering
    $\pi:E\to \mathbb P^1$ such that
    $\pi\circ \psi = \phi \circ \pi$}\;\Biggr\} \\ \\
  \Gamma_0 &= \bigcup_{\phi\in[\varphi]}\phi^+(P)\\ \\
  \Gamma&= \left( \bigcup_{\phi\in[\varphi]} \phi^-(\Gamma_0) \right) \cup
  \mathbb P^1(\overline K)_{[\varphi]-\mbox{preper}}
\end{align*}
\end{center}
A point $Q$ is $[\varphi]$-preperiodic if and only if $Q$ is $\phi$-preperiodic
for some $\phi\in[\varphi]$. We write $\mathbb P^1(\overline
K)_{[\varphi]-\mbox{preper}}$ for the set of $[\varphi]$-preperiodic points in
$\mathbb P^1(\overline K)$.
\begin{thm}
  If $Q\in \mathbb P^1(\overline K)$ is not $[\varphi]$-preperiodic, then $\Gamma$ contains
at most finitely many points in $\mathbb P^1(\overline K)$ which are $S$-integral
relative to $Q$.
\end{thm}
\begin{proof}
Let $\Gamma_0'$ be the finitely generated $\mbox{End}(E)$-submodule of $E(\overline K)$
generated by the points in $\pi^{-1}(P)$, and let $$\Gamma'=\{\xi\in
E(\overline K) \mid \lambda(\xi) \in \Gamma_0' \mbox{ for some non-zero }
\lambda \in \mbox{End}(E) \}.$$ Then $\pi^{-1}(\Gamma)\subset \Gamma'$.
Indeed, if $\pi(\xi) \in \Gamma$ is not $[\varphi]$-preperiodic, then $\xi$ is
non-torsion and $(\phi_1\circ \pi)(\xi)\in \Gamma_0$ for some Latt\`es map
$\phi_1$. So $(\pi\circ \psi_1)(\xi) \in \Gamma_0$ for some morphism
$\psi_1:E\to E$, and this gives $(\pi\circ \psi_1)(\xi)= \phi_2(P)$ for some
Latt\`es map $\phi_2$. Therefore $\psi_1(\xi) \in (\pi^{-1}\circ \phi_2)(P) =
(\psi_2 \circ \pi^{-1})(P)$ for some morphism $\psi_2:E\to E$. Since any
morphism $\psi:E\to E$ is of the form $\psi(X)=\alpha(X)+T$ where $\alpha\in
\mbox{End}(E)$ and $T\in E_{\mbox{tors}}$ (see \cite[6.19]{sil3}), we find
that there is a $\lambda \in \mbox{End}(E)$ such that $\lambda(\xi)$ is in
$\Gamma_0'$, the $\mbox{End}(E)$-submodule generated by $\pi^{-1}(P)$.
If instead $\pi(\xi)\in \Gamma$ is $[\varphi]$-preperiodic, then
$\pi\left(E(\overline K)_{\mbox{tors}}\right) = \mathbb P^1(\overline
K)_{[\varphi]-\mbox{preper}}$ (\cite[Prop. 6.44]{sil3}) gives that $\xi$ is
a torsion point; again $\xi \in\Gamma'$ since $E(\overline K)_{\mbox{tors}}
\subset \Gamma'$. Hence $\pi^{-1}(\Gamma) \subset \Gamma'$.
Let $D$ be an effective divisor whose support lies entirely in $\pi^{-1}(Q)$,
let $\mathcal{R}_Q$ be the set of points in $\Gamma$ which are $S$-integral
relative to $Q$, and let $\mathcal{R}'_D$ be the set of points in $\Gamma'$
which are $S$-integral relative to $D$. Extend $S$ so that Theorem
\ref{dist} holds for the map $\pi:E\to \mathbb P^1$. Since $\mbox{Supp}(D)\subset
\mbox{Supp}(\pi^*Q)$, if $\gamma \in \Gamma$ is $S$-integral relative
to $Q$, then every point of $\pi^{-1}(\gamma)$ is $S$-integral relative to $D$. Therefore
$\pi^{-1}(\mathcal{R}_Q) \subset \mathcal{R}'_{D}$. Now $\pi$ is a finite map
and $\pi(E(\overline K)) = \mathbb P^1(\overline K)$; so to complete the proof, it
suffices to show that $D$ can be chosen so that $\mathcal{R}'_{D}$ is finite.
From \cite[Prop. 6.37]{sil3}, we find that if $\Lambda$ is a nontrivial
subgroup of $\mbox{Aut}(E)$, then $E/{\Lambda} \cong \mathbb P^1$ and the map $\pi:E
\to \mathbb P^1$ can be determined explicitly. The four possibilities for
$\pi$, namely $\pi(x,y) = x,\, x^2,\, x^3$, or $y$, correspond respectively
to the four possibilities for $\Lambda$, namely $\Lambda = \mu_2, \, \mu_4,
\, \mu_6$, or $\mu_3$, which in turn depend only on the $j$-invariant of $E$.
(Here, $\mu_N$ denotes the $N$th roots of unity in $\mathbb C$.)
First assume that $\pi(x,y)\not=y$. Since $Q$ is not $[\varphi]$-preperiodic,
take $\xi \in \pi^{-1}(Q)$ to be non-torsion. Then $-\xi \in \pi^{-1}(Q)$ since
$\Lambda = \mu_2, \, \mu_4$, or $\mu_6$, and $\xi - (-\xi) = 2\xi$ is
non-torsion. Taking $D=(\xi)+(-\xi)$, \cite[Thm. 3.9(i)]{grant_ih} gives that
$\mathcal{R}'_{D}$ is finite.
Suppose that $\pi(x,y)=y$. Then $\pi^{-1}(Q) = \{\xi, \xi', \xi''\}$ where
$\xi+\xi'+\xi''=0$ and $\xi$ is non-torsion since $Q$ is not
$[\varphi]$-preperiodic. If both $\xi-\xi'$ and $\xi-\xi''$ were torsion,
then $3\xi = (\xi-\xi')+(\xi-\xi'')$ would be torsion, contradicting the fact that $\xi$
is non-torsion. Therefore, we may assume that $\xi-\xi'$ is non-torsion.
Now taking $D=(\xi)+(\xi')$, \cite[Thm. 3.9(i)]{grant_ih} again gives that
$\mathcal{R}'_{D}$ is finite. Hence $\mathcal{R}_Q$, the set of points in
$\Gamma$ which are $S$-integral relative to $Q$, is finite.
\end{proof}
\end{document} | math | 12,480 |
\begin{document}
\sloppy
\vspace*{2cm}
{\Large Degenerations of Jordan Superalgebras}
\textbf{Mar\'ia Alejandra Alvarez$^{a}$, Isabel Hern\'andez$^{b}$, Ivan Kaygorodov$^{c}$}
{\tiny
$^{a}$ Departamento de Matem\'aticas, Universidad de Antofagasta, Chile.
$^{b}$ CONACYT - Centro de Investigaci\'on en Matem\'aticas, A.C. Unidad M\'erida, M\'exico.
$^{c}$ Universidade Federal do ABC, CMCC, Santo Andr\'{e}, Brazil.
E-mail addresses:
Mar\'ia Alejandra Alvarez ([email protected]),
Isabel Hern\'andez ([email protected]),
Ivan Kaygorodov ([email protected]).
}
\vspace*{2cm}
{\bf Abstract.}
We describe degenerations of three-dimensional Jordan superalgebras over $\mathbb{C}.$
In particular, we describe all irreducible components in the corresponding varieties.
{\bf Keywords:} Jordan superalgebra, orbit closure, degeneration, rigid superalgebra
\section{Introduction}
Contractions of Lie algebras are limiting processes between Lie algebras, which were first studied in physics \cite{13,10}. For example, classical mechanics is a limiting case of quantum mechanics as $\hbar \to 0,$ described by a contraction of the Heisenberg-Weyl Lie algebras to the Abelian Lie algebra of the same dimension. A description of contractions of low-dimensional Lie algebras was given in \cite{nesterpop}.
The study of contractions and graded contractions of binary algebras has a long history (see, for example, \cite{c1,c2,c3}).
The study of graded contractions of Jordan algebras and Jordan superalgebras was initiated in \cite{kp03}.
The first study of contractions of $n$-ary algebras was carried out in the variety of Filippov algebras \cite{deaz}.
In mathematics, a more general notion of contraction, the so-called degeneration, is often used. Degenerations are related to deformations. Degenerations of algebras are an interesting subject, which has been studied in various papers (see, for example, \cite{CKLO13,BC99,S90,GRH,GRH2,BB09,chouhy,BB14,laur03}). In particular, there are many results concerning degenerations of algebras of low dimensions in a variety defined by a set of identities. One of the important problems in this direction is the description of the so-called rigid algebras \cite{ikv17}. These algebras are of great interest, since the closures of their orbits under the action of the general linear group form irreducible components of the variety under consideration (with respect to the Zariski topology).
Fewer works provide complete information about degenerations for a given variety of algebras. This problem was solved
for two-dimensional pre-Lie algebras in \cite{BB09},
for two-dimensional Jordan algebras in \cite{jor2},
for three-dimensional Novikov algebras in \cite{BB14},
for three-dimensional Jordan algebras \cite{gkp17},
for four-dimensional Lie algebras in \cite{BC99},
for nilpotent four-dimensional Jordan algebras \cite{contr11},
for nilpotent four-dimensional Leibniz and Zinbiel algebras in \cite{kppv},
for nilpotent five- and six-dimensional Lie algebras in \cite{S90,GRH},
for nilpotent five- and six-dimensional Malcev algebras in \cite{kpv},
and for all $2$-dimensional algebras \cite{kv16}.
At the same time,
the study of degenerations of superalgebras and graded algebras was initiated in \cite{deggraa}
for the associative case and in \cite{degsulie} for Lie superalgebras.
\section{Definitions and notation}
\subsection{Jordan superalgebras}
Jordan algebras appeared as a tool for studies in quantum mechanics in
the paper of Jordan, von Neumann and Wigner \cite{jnw}.
A commutative algebra is called a {\it Jordan algebra} if it satisfies the identity
$$(x^2y)x=x^2(yx).$$
The study of the structure theory and other properties of Jordan algebras was initiated by Albert.
Jordan algebras are related to some questions in
differential equations \cite{svi91}, superstring theory \cite{superstring},
analysis, operator theory, geometry, mathematical biology, mathematical statistics and physics
(see the survey of Iordanescu \cite{radu}).
Let $G$ be the Grassmann algebra over $\mathbb{F}$ given by the generators $1, \xi_1, \ldots , \xi_n, \ldots$ and the defining relations $\xi_i^2=0$ and $\xi_i\xi_j=-\xi_j\xi_i.$
The elements $1, \xi_{i_1} \xi_{i_2} \ldots \xi_{i_k},\ i_1<i_2<\ldots <i_k,$ form a basis of the algebra $G$ over $\mathbb{F}$. Denote by $G_0$ and $G_1$ the subspaces spanned by the products
of even and odd lengths, respectively; then $G$ can be represented as the direct sum of these subspaces,
$G = G_0 \oplus G_1.$
Here the relations $G_iG_j \subseteq G_{i+j \, (\mathrm{mod}\, 2)}$, $i,j = 0, 1$, hold.
In other words, $G$ is a $\mathbb{Z}_2$-graded algebra (or a superalgebra) over $\mathbb{F}.$
Suppose now that $A = A_0 \oplus A_1$ is an arbitrary superalgebra over $\mathbb{F}$. Consider the tensor product $G \otimes A$ of $\mathbb{F}$-algebras.
The subalgebra
$$G(A) = G_0 \otimes A_0 + G_1 \otimes A_1$$
of $G\otimes A$ is referred to as the Grassmann envelope of the superalgebra $A.$
Let $\Omega$ be a variety of algebras over $\mathbb{F}.$
A superalgebra $A = A_0 \oplus A_1$ is referred to as an
$\Omega$-superalgebra if its Grassmann envelope $G(A)$ is an algebra in $\Omega.$
In particular, $A = A_0 \oplus A_1$ is
referred to as a Jordan superalgebra if its Grassmann envelope $G(A)$ is a Jordan algebra.
The study of Jordan superalgebras has very big history (for example, see \cite{MS,ELS08,MZ01,K10,K12,CK07}).
\subsection{Degenerations}
Given an $(m,n)$-dimensional vector superspace $V=V_0\oplus V_1$, the set
$$\Hom(V \otimes V,V)=(\Hom(V \otimes V,V))_0\oplus (\Hom(V \otimes V,V))_1$$ is a vector superspace of dimension $m^3+3mn^2$. This space has the structure of the affine variety $\mathbb{C}^{m^3+3mn^2}.$ If we fix a basis $\{e_1,\dots,e_m,f_1,\dots,f_n\}$ of $V$, then any $\mu\in \Hom(V \otimes V,V)$ is determined by $m^3+3mn^2$ structure constants
$\alpha_{i,j}^k,\beta_{i,p}^q,\gamma_{p,i}^q, \delta_{p,q}^k \in\mathbb{C}$ such that
$$\mu(e_i\otimes e_j)=\sum\limits_{k=1}^m\alpha_{i,j}^ke_k,
\quad \mu(e_i\otimes f_p)=\sum\limits_{q=1}^n\beta_{i,p}^qf_q,
\quad \mu(f_p\otimes e_i)=\sum\limits_{q=1}^n\gamma_{p,i}^qf_q,
\quad \mu(f_p\otimes f_q)=\sum\limits_{k=1}^m\delta_{p,q}^ke_k.$$
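The count $m^3+3mn^2$ records exactly the parity-compatible triples of basis indices; a small enumeration (our illustration, not from the paper) confirms the closed form:

```python
def even_hom_dim(m, n):
    """Number of structure constants of an even (grading-preserving)
    mu: V x V -> V, where dim V_0 = m and dim V_1 = n."""
    parity = [0] * m + [1] * n
    return sum(1
               for pi in parity for pj in parity for pk in parity
               if pk == (pi + pj) % 2)

# agrees with the closed form m^3 + 3*m*n^2 stated in the text
for m in range(5):
    for n in range(5):
        assert even_hom_dim(m, n) == m**3 + 3 * m * n**2
```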
A subset $\mathbb{L}(T)$ of $\Hom(V \otimes V,V)$ is {\it Zariski-closed} if it can be defined by a set of polynomial equations $T$ in the variables
$\alpha_{i,j}^k,\beta_{i,p}^q, \gamma_{p,i}^q, \delta_{p,q}^k$ ($1\le i,j,k\le m,\ 1\leq p,q\leq n$).
Let $\mathcal{S}^{m,n}$ be the set of all superalgebras of dimension $(m,n)$ defined by the family of polynomial super-identities $T$, understood as a subset $\mathbb{L}(T)$ of the affine variety $\Hom(V\otimes V, V)$. Then one can see that $\mathcal{S}^{m,n}$ is a Zariski-closed subset of the variety $\Hom(V\otimes V, V).$
The group $G=(\Aut V)_0\simeq\GL(V_0)\times\GL(V_1)$ acts on $\mathcal{S}^{m,n}$ by conjugation:
$$ (g * \mu )(x\otimes y) = g\mu(g^{-1}x\otimes g^{-1}y)$$
for $x,y\in V$, $\mu\in\mathbb{L}(T)$ and $g\in G$.
Thus, $\mathcal{S}^{m,n}$ is decomposed into $G$-orbits that correspond to the isomorphism classes of superalgebras. Let $O(\mu)$ denote the orbit of $\mu\in\mathbb{L}(T)$ under the action of $G$ and $\overline{O(\mu)}$ denote the Zariski closure of $O(\mu)$.
Let $J, J' \in \mathcal{S}^{m,n}$ and $\lambda,\mu\in \mathbb{L}(T)$ represent $J$ and $J'$ respectively. We say that $\lambda$ degenerates to $\mu$ and write $\lambda\to \mu$ if $\mu\in\overline{O(\lambda)}$. Note that in this case we have $\overline{O(\mu)}\subset\overline{O(\lambda)}$. Hence, the definition of a degeneration does not depend on the choice of $\mu$ and $\lambda$, and we will write interchangeably $J\to J'$ instead of $\lambda\to\mu$ and $O(J)$ instead of $O(\lambda)$. If $J\not\cong J'$, then the assertion $J\to J'$ is called a {\it proper degeneration}. We write $J\not\to J'$ if $J'\not\in\overline{O(J)}$.
Let $J$ be represented by $\lambda\in\mathbb{L}(T)$. Then $J$ is {\it rigid} in $\mathbb{L}(T)$ if $O(\lambda)$ is an open subset of $\mathbb{L}(T)$. Recall that a subset of a variety is called irreducible if it cannot be represented as a union of two non-trivial closed subsets. A maximal irreducible closed subset of a variety is called {\it irreducible component}. In particular, $J$ is rigid in $\mathcal{S}^{m,n}$ iff $\overline{O(\lambda)}$ is an irreducible component of $\mathbb{L}(T)$. It is well known that any affine variety can be represented as a finite union of its irreducible components in a unique way. We denote by $Rig(\mathcal{S}^{m,n})$ the set of rigid superalgebras in $\mathcal{S}^{m,n}$.
\subsection{Principal notations}
Let $\mathcal{JS}^{m,n}$ be the set of all Jordan superalgebras of dimension $(m,n).$
Let $J$ be a Jordan superalgebra with fixed basis $\{e_1,\dots,e_m,f_1,\dots f_n\}$, defined by
\[e_ie_j=\sum_{k=1}^m\alpha_{ij}^ke_k,\quad e_if_j=\sum_{k=1}^n\beta_{ij}^kf_k,\quad f_if_j=\sum_{k=1}^m\gamma_{ij}^ke_k.\]
We will use the following notation:
\begin{enumerate}
\item $\a(J)$ is the Jordan superalgebra with the same underlying vector superspace as $J$, defined by $f_if_j=\displaystyle\sum_{k=1}^m\gamma_{ij}^ke_k$ (all other products being zero).
\item $J^1=J$, $J^r=J^{r-1}J+J^{r-2}J^2+\dots+ JJ^{r-1}$, and in every case $J^r=(J^r)_0\oplus (J^r)_1$.
\item $c_{i,j}=\displaystyle\frac{\tr (L(x)^i)\cdot\tr(L(y)^j)}{\tr( L(x)^i\cdot L(y)^j)}$
is the Burde invariant, where $L(x)$ denotes the operator of left multiplication by $x$. This invariant $c_{i,j}$
is defined as a quotient of two polynomials in the structure constants of $J$, for all $x,y\in J$ such that both polynomials are nonzero, and $c_{i,j}$
is independent of the choice of $x,y$.
\end{enumerate}
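For instance, for the superalgebra $S^2_2$ of Table 1 below (nonzero products $e_1e_1=e_1$, $e_1f_1=f_1$), a direct symbolic computation (our own sketch, with sympy assumed available) recovers the value $c_{i,j}=2$ listed in the table:

```python
import sympy as sp

a0, b0, a1, b1 = sp.symbols('a0 b0 a1 b1')

def L(a, b):
    """Left multiplication by x = a*e1 + b*f1 + (anything)*f2 in S^2_2;
    columns are the images of e1, f1, f2 (the f2-component of x acts by 0)."""
    return sp.Matrix([[a, 0, 0],
                      [b, a, 0],
                      [0, 0, 0]])

i, j = 3, 5  # any fixed positive exponents give the same value
Lx, Ly = L(a0, b0), L(a1, b1)
c_ij = (Lx**i).trace() * (Ly**j).trace() / (Lx**i * Ly**j).trace()
assert sp.simplify(c_ij) == 2  # matches the entry for S^2_2 in Table 1
```

Since the $f_2$-component of $x$ does not affect $L(x)$, the quotient is indeed independent of the choice of $x,y$, as claimed.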
\section{Methods}
First of all, if $J\to J'$ and $J\not\cong J'$, then $\dim\Aut(J)<\dim\Aut(J')$, where $\Aut(J)$ is the group of automorphisms of $J$. Secondly, if $J\to J'$ and $J'\to J''$ then $J\to J''$. If there is no $J'$ such that $J\to J'$ and $J'\to J''$ are proper degenerations, then the assertion $J\to J''$ is called a {\it primary degeneration}. If $\dim\Aut(J)<\dim\Aut(J'')$ and there are no $J'$ and $J'''$ such that $J'\to J$, $J''\to J'''$, $J'\not\to J'''$ and one of the assertions $J'\to J$ and $J''\to J'''$ is a proper degeneration, then the assertion $J \not\to J''$ is called a {\it primary non-degeneration}. It suffices to prove only primary degenerations and non-degenerations in order to describe all degenerations in the variety under consideration. It is easy to see that any superalgebra degenerates to the superalgebra with zero multiplication; from now on we will use this fact without mentioning it.
Let us describe the methods for proving primary non-degenerations. The main tool for this is the following lemma.
\begin{Lem}\label{lema:inv}
If $J\to J'$ then the following hold:
\begin{enumerate}
\item $\dim (J^r)_i\geq\dim (J'^r)_i$, for $i\in\Z_2;$
\item $(J)_0\to (J')_0;$
\item $\a(J)\to\a(J');$
\item If the Burde invariant exists for $J$ and $J'$, then both superalgebras have the same Burde invariant$;$
\item If $J$ is associative then $J'$ must be associative. In fact, if $J$ satisfies a P.I. then $J'$ must satisfy the same P.I.
\end{enumerate}
\end{Lem}
In the cases where none of these criteria can be applied to prove $J\not\to J'$, we will define $\mathcal{R}$ by a set of polynomial equations and give a basis of $V$ in which the structure constants of $\lambda$ give a solution to all these equations. We will omit everywhere the verification of the fact that $\mathcal{R}$ is stable under the action of the subgroup of upper triangular matrices and of the fact that $\mu\not\in\mathcal{R}$ for any choice of a basis of $V$. These verifications can be done by direct calculations.
{\bf Degenerations of graded algebras}. Let
$G$ be an abelian group and let $\mathcal {V}(\mathcal{F})$ be a variety of algebras defined by a family of polynomial identities $\mathcal{F}$. It is important to notice that degeneration in the $G$-graded variety $G\mathcal{V}( \mathcal{F})$ is a more restrictive notion than degeneration in the variety $\mathcal{V}(\mathcal{F})$. In fact, for $A, A^\prime \in G\mathcal {V}(\mathcal{F})$ such that $ A, A^\prime \in \mathcal{V}(\mathcal{F})$, a degeneration between the algebras $A$ and $A^\prime$ may not give rise to a degeneration between the $G$-graded algebras $A$ and $A^\prime$, since the matrices describing the basis changes in $G\mathcal {V}(\mathcal{F})$ must preserve the $G$-grading. Hence, we have the following result.
\begin{Lem}
Let $ A, A^\prime \in G\mathcal {V}(\mathcal{F}) \cap \mathcal {V}(\mathcal{F})$. If $A \not \to A^\prime $ as algebras, then $A \not \to A^\prime $ as $G$-graded algebras.
\label{alg-nd-Galg-nd}
\end{Lem}
\section{Main result}
In this section we describe all degenerations and non-degenerations of $3$-dimensional Jordan superalgebras.
Note that
there is only one (trivial) Jordan superalgebra of type $(0,3);$
the variety of $3$-dimensional Jordan algebras (Jordan superalgebras of type $(3,0)$)
contains $19$ pairwise non-isomorphic algebras with nonzero multiplication,
of which, in particular, $5$ are rigid.
The full description of all degenerations and non-degenerations of $3$-dimensional Jordan algebras was given in \cite{gkp17}.
The rest of the section is dedicated to the study of degenerations of Jordan superalgebras of types $(1,2)$ and $(2,1).$
\subsection{Jordan Superalgebras of type $(1,2)$}
\subsubsection{Algebraic classification}
As noted, the algebraic classification of Jordan superalgebras of type $(1,2)$ was obtained in \cite{MS}.
In the next table we give this classification with some additional useful information about these superalgebras.
\begin{center}
Table 1. {\it $(1,2)$-Dimensional Jordan superalgebras.}
\begin{equation*}
\begin{array}{|c|l|c|c|l|}
\hline
\mbox{$J$} & \mbox{ multiplication tables } & \mbox{$\dim\Aut(J)$} & \mbox{$c_{i,j}$} & \mbox{type}\\
\hline \hline
U^s_1 & e_1e_1=e_1 & 4 & 1 & \text{associative}\\ \hline
S_1^2 & e_1e_1=e_1, e_1f_1=\frac{1}{2}f_1 & 2 & 2 & \text{non-associative}\\ \hline
S_1^3 & e_1f_1=f_2, \ f_1f_2=e_1 & 2 & \not\exists & \text{non-associative}\\ \hline
S^2_2 & e_1e_1=e_1, e_1f_1=f_1 & 2 & 2 & \text{associative} \\ \hline
S_2^3 & f_1f_2=e_1 & 4 & \not\exists & \text{associative} \\ \hline
S_3^3 & e_1f_1=f_2 & 3 & \not\exists & \text{associative}\\ \hline
S^3_4 & e_1e_1=e_1, e_1f_1=f_1, e_1f_2=\frac{1}{2}f_2 & 2 & \frac{\left(2+\left(\frac{1}{2}\right)^i\right)\left(2+\left(\frac{1}{2}\right)^j\right)}{\left(2+\left(\frac{1}{2}\right)^{i+j}\right)} & \text{non-associative}\\ \hline
S^3_5 & e_1e_1=e_1, e_1f_1=\frac{1}{2}f_1, e_1f_2=\frac{1}{2}f_2 & 4 & \frac{\left(1+2\left(\frac{1}{2}\right)^i\right)\left(1+2\left(\frac{1}{2}\right)^j\right)}{\left(1+2\left(\frac{1}{2}\right)^{i+j}\right)} & \text{non-associative}\\ \hline
S^3_6 & e_1e_1=e_1, e_1f_1=f_1, e_1f_2=f_2 & 4 & 3 & \text{associative}\\ \hline
S^3_7 & e_1e_1=e_1, e_1f_1=\frac{1}{2}f_1, e_1f_2=\frac{1}{2}f_2, f_1f_2=e_1 & 3 & \frac{\left(1+2\left(\frac{1}{2}\right)^i\right)\left(1+2\left(\frac{1}{2}\right)^j\right)}{\left(1+2\left(\frac{1}{2}\right)^{i+j}\right)} & \text{non-associative} \\ \hline
S^3_8 & e_1e_1=e_1, e_1f_1=f_1, e_1f_2=f_2, f_1f_2=e_1 & 3 & 3 & \text{non-associative} \\ \hline
\end{array}
\end{equation*}
\end{center}
\subsubsection{Degenerations}
\begin{Th}\label{third}\label{theorem}
The graph of primary degenerations for Jordan superalgebras of dimension $(1,2)$ has the following form:
\end{Th}
\scriptsize
\begin{center}
\begin{tikzpicture}[->,>=stealth',shorten >=0.08cm,auto,node distance=1.5cm,
thick,main node/.style={rectangle,draw,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
blue node/.style={rectangle,draw, color=blue,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
orange node/.style={rectangle,draw, color=orange,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
green node/.style={rectangle,draw, color=green,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
olive node/.style={rectangle,draw, color=olive,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
connecting node/.style={circle, draw, color=purple },
rigid node/.style={rectangle,draw,fill=black!20,rounded corners=1.5ex,font=\sffamily \tiny \bfseries },
bluerigid node/.style={rectangle,draw,color=blue,fill=black!20,rounded corners=1.5ex,font=\sffamily \tiny \bfseries }, style={draw,font=\sffamily \scriptsize \bfseries }]
\node (30) {};
\node (31)[right of=30]{};
\node (32)[right of=31]{};
\node (33)[right of=32]{};
\node (34)[right of=33]{};
\node [rigid node] (s31) [right of=30] {$S^3_1$};
\node [rigid node] (s21s11) [right of=31] {$S^2_1$};
\node [rigid node] (s34) [right of=32] {$S^3_4$};
\node [bluerigid node] (s22s11) [right of=33] {$S^2_2$};
\node (20)[below of=30]{};
\node (21)[right of=20]{};
\node (22)[right of=21]{};
\node (23)[right of=22]{};
\node (24)[right of=23]{};
\node [blue node] (s33) [right of=22] {$S^3_3$};
\node [rigid node] (s37) [right of=20] {$S^3_7$};
\node [rigid node] (s38) [right of=21] {$S^3_8$};
\node (10)[below of=20]{};
\node (11)[right of=10]{};
\node (12)[right of=11]{};
\node (13)[right of=12]{};
\node (14)[right of=13]{};
\node [blue node] (s32) [right of=11] {$S^3_2$};
\node [bluerigid node] (u1ss) [right of=13] {$U_1^s$};
\node [main node] (s35) [right of=10] {$S^3_5$};
\node [blue node] (s36) [right of=12] {$S^3_6$};
\node (00)[below of=10]{};
\node (01)[right of=00]{};
\node (02)[right of=01]{};
\node [blue node] (u2ss) [right of=01] {$\mathbb{C}^{1,2}$};
\path[every node/.style={font=\sffamily\small}]
(s31) edge [bend right=0, color=black] node{} (s33)
(s31) edge [bend right=0, color=black] node{} (s32)
(s21s11) edge [bend right=0, color=black] node{} (s33)
(s34) edge [bend right=0, color=black] node{} (s33)
(s22s11) edge [bend right=0, color=blue] node{} (s33)
(s37) edge [bend right=0, color=black] node{} (s32)
(s37) edge [bend right=0, color=black] node{} (s35)
(s38) edge [bend right=0, color=black] node{} (s32)
(s38) edge [bend right=0, color=black] node{} (s36)
(s32) edge [bend right=0, color=blue] node{} (u2ss)
(u1ss) edge [bend right=0, color=blue] node{} (u2ss)
(s35) edge [bend right=0, color=black] node{} (u2ss)
(s36) edge [bend right=0, color=blue] node{} (u2ss)
(s33) edge [bend right=0, color=blue] node{} (u2ss);
\end{tikzpicture}
\end{center}
\normalsize
\begin{Proof}
We prove all required primary degenerations in Table 2 below. Recall that an associative superalgebra can only degenerate to an associative superalgebra. To clarify the table, let us consider the first degeneration $S^2_1\to S_3^3$.
Write the nonzero products of $S^2_1$ in the basis $E_1^t,F_1^t,F_2^t$:
$$E_1^tE_1^t=tE_1^t, \ E_1^tF_1^t=te_1(f_1-2t^{-1}f_2)=\frac{t}{2}(f_1-2t^{-1}f_2)+f_2=\frac{t}{2}F_1^t+F_2^t.$$
It is easy to see that letting $t\to 0$ we obtain the multiplication table of $S_3^3.$
The remaining degenerations can be considered in the same way.
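This computation can also be verified mechanically. The following sketch (ours, not part of the paper; sympy assumed available) rewrites the products of $S^2_1$ in the parametrized basis and checks that the structure constants tend to those of $S_3^3$ as $t\to 0$:

```python
import sympy as sp

t = sp.symbols('t')
basis = ['e1', 'f1', 'f2']

# S^2_1: e1 e1 = e1 and e1 f1 = (1/2) f1 = f1 e1 (supercommutativity);
# all remaining products of basis vectors vanish
mult = {('e1', 'e1'): {'e1': sp.Integer(1)},
        ('e1', 'f1'): {'f1': sp.Rational(1, 2)},
        ('f1', 'e1'): {'f1': sp.Rational(1, 2)}}

def prod(u, v):
    """Bilinear extension of mult to column vectors in the basis (e1, f1, f2)."""
    out = sp.zeros(3, 1)
    for bi, cu in zip(basis, u):
        for bj, cv in zip(basis, v):
            for bk, cw in mult.get((bi, bj), {}).items():
                out[basis.index(bk)] += cu * cv * cw
    return out

# columns: E1 = t e1, F1 = f1 - 2 t^{-1} f2, F2 = f2 (old coordinates)
P = sp.Matrix([[t, 0, 0],
               [0, 1, 0],
               [0, -2/t, 1]])
B = [P.col(k) for k in range(3)]

# structure constants in the new basis, then the limit t -> 0
limit = {(i, j): (P.inv() * prod(B[i], B[j])).applyfunc(lambda s: sp.limit(s, t, 0))
         for i in range(3) for j in range(3)}

# only E1 F1 = F1 E1 = F2 survives: the multiplication table of S^3_3
assert limit[(0, 1)] == sp.Matrix([0, 0, 1])
assert limit[(1, 0)] == sp.Matrix([0, 0, 1])
assert all(v == sp.zeros(3, 1)
           for k, v in limit.items() if k not in [(0, 1), (1, 0)])
```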
\begin{center}\footnotesize Table 2. {\it Primary degenerations of Jordan superalgebras of dimension $(1,2)$.}
$$\begin{array}{|c|lll|}
\hline
\mbox{degenerations} & \multicolumn{3}{|c|}{\mbox{parametrized bases}} \\
\hline
\hline
S^2_1\to S_3^3 & E^t_1=te_1,& F_1^t=f_1-2t^{-1}f_2,& F_2^t=f_2 \\ \hline
S^2_2\to S_3^3 & E^t_1=te_1,& F_1^t=f_1-t^{-1}f_2,& F_2^t=f_2 \\ \hline
S^3_1\to S^3_3 & E^t_1=e_1,& F_1^t=tf_1,& F_2^t=tf_2 \\ \hline
S^3_1\to S^3_2 & E^t_1=t^2e_1,& F_1^t=tf_1,& F_2^t=tf_2 \\ \hline
S^3_4\to S^3_3 & E^t_1=te_1,& F_1^t=f_1-2t^{-1}f_2,& F_2^t=f_2 \\ \hline
S^3_7\to S^3_2 & E^t_1=te_1, & F_1^t=tf_1,& F_2^t=f_2 \\ \hline
S^3_7\to S^3_5 & E^t_1=e_1, &F_1^t=f_1,& F_2^t=tf_2 \\ \hline
S^3_8\to S^3_2 & E^t_1=te_1,& F_1^t=tf_1,& F_2^t=f_2 \\ \hline
S^3_8\to S^3_6 & E^t_1=e_1,& F_1^t=tf_1,& F_2^t=f_2 \\ \hline
\end{array}$$
\end{center}
\normalsize
The primary non-degenerations are proved in Table 3.
\begin{center}\footnotesize Table 3. {\it Primary non-degenerations of Jordan superalgebras of dimension $(1,2)$.}
$$\begin{array}{|l|c|}
\hline
\mbox{non-degenerations
} & \mbox{reason} \\
\hline
\hline
\begin{array}{l}
S^3_3\not\to S_2^3 \end{array}& \dim (J^2)_0<\dim (J'^2)_0 \\ \hline
\begin{array}{l}
S^3_3\not\to U^s_1,\
S^3_6;\quad
S^3_1\not\to
S^3_7,\
S^3_8,\
U^s_1,\
S^3_5,\
S^3_6 \end{array}
& J_0\not\to J'_0 \\ \hline
\begin{array}{l}
S^3_7\not\to U_1^s,\
S_6^3;\quad
S^3_8\not\to U_1^s,\
S_5^3;\quad
S^2_1\not\to S_7^3,\
S_8^3,\
U_1^s,\
S_5^3,\
S_6^3; \\
S^3_4\not\to S^3_7,\
S^3_8,\
U^s_1,\
S^3_5,\
S^3_6;\quad
S^2_2\not\to U_1^s,\
S_6^3
\end{array}
& c_{i,j}
\\ \hline
\begin{array}{l}
S^2_1\not\to S_2^3;\quad
S^3_4\not\to S_2^3;\quad
S^2_2\not\to S_2^3;\quad
\end{array}& \a(J)\not\to\a(J') \\ \hline
\end{array}$$
\end{center}
\end{Proof}
\subsubsection{Irreducible components and rigid algebras}
Using Theorem \ref{theorem}, we describe the irreducible components and the rigid algebras in $\mathcal{JS}^{1,2}.$
\begin{corollary}\label{ir_12} The irreducible components of $\mathcal{JS}^{1,2}$ are:
$$
\begin{aligned}
\mathcal{C}_1 &=\overline{ O(U_{1}^s) }=
\{ U_1^s, \mathbb{C}^{1,2} \}; \\
\mathcal{C}_2 &=\overline{ O(S_{1}^2) }=
\{ S_1^2, S_3^3, \mathbb{C}^{1,2} \}; \\
\mathcal{C}_3 &=\overline{ O(S_{1}^3) }=
\{ S_1^3, S_2^3, S_3^3, \mathbb{C}^{1,2} \}; \\
\mathcal{C}_4 &=\overline{ O(S_{2}^2) }=
\{ S_2^2, S_3^3, \mathbb{C}^{1,2} \}; \\
\mathcal{C}_5 &=\overline{ O(S_{4}^3) }=
\{ S_4^3, S^3_3, \mathbb{C}^{1,2} \}; \\
\mathcal{C}_6 &=\overline{ O(S_{7}^3) }=
\{ S_7^3, S_5^3, S_2^3, \mathbb{C}^{1,2} \}; \\
\mathcal{C}_7 &=\overline{ O(S_{8}^3) }=
\{ S_8^3, S_2^3, S^3_6, \mathbb{C}^{1,2} \};\\
\end{aligned}
$$
In particular, $Rig(\mathcal{JS}^{1,2})= \{ U_{1}^s, S_{1}^2, S_{1}^3, S_{2}^2, S_{4}^3, S_{7}^3, S_{8}^3 \}.$
\end{corollary}
\subsection{Superalgebras of type $(2,1)$}
In this section we describe all possible primary degenerations between Jordan superalgebras of dimension $(2,1)$.
First of all, notice that every Jordan superalgebra of dimension $(2,1)$ has trivial odd product, so it can be considered as a Jordan algebra of dimension $3$. However, the graph of primary degenerations of Jordan superalgebras of dimension $(2,1)$ is not a subgraph of the graph of primary degenerations of Jordan algebras of dimension $3$. In fact, there exist Jordan superalgebras with no degeneration
between them as superalgebras that nevertheless degenerate as Jordan algebras. Take, for example,
$S_2^2 \oplus U_1^s$ and $B_1^s$ (see Table 4). Taking the change of basis $E_1^t= e_1$,
$E_2^t=f_1$ and $F_1^t= t e_2$, it follows that $S_2^2 \oplus U_1^s \to B_1^s$ as Jordan algebras.
Notice that this change of basis does not preserve the $\mathbb{Z}_2$-grading. Moreover, we shall prove that
$S_2^2 \oplus U_1^s \not \to B_1^s$ as Jordan superalgebras.
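The ungraded degeneration just described can be checked symbolically as well; in the following sketch (ours, with sympy assumed available) the change of basis mixes the parities of $f_1$ and $e_2$, and the limit of the structure constants is the multiplication table of $B_1^s$:

```python
import sympy as sp

t = sp.symbols('t')
basis = ['e1', 'e2', 'f1']

# S_2^2 + U_1^s: e1 e1 = e1, e2 e2 = e2, e1 f1 = f1 = f1 e1
mult = {('e1', 'e1'): {'e1': 1}, ('e2', 'e2'): {'e2': 1},
        ('e1', 'f1'): {'f1': 1}, ('f1', 'e1'): {'f1': 1}}

def prod(u, v):
    """Bilinear extension of mult to column vectors in the basis (e1, e2, f1)."""
    out = sp.zeros(3, 1)
    for bi, cu in zip(basis, u):
        for bj, cv in zip(basis, v):
            for bk, cw in mult.get((bi, bj), {}).items():
                out[basis.index(bk)] += cu * cv * cw
    return out

# ungraded change of basis E1 = e1, E2 = f1, F1 = t e2 (it mixes parities)
P = sp.Matrix([[1, 0, 0],
               [0, 0, t],
               [0, 1, 0]])
B = [P.col(k) for k in range(3)]

limit = {(i, j): (P.inv() * prod(B[i], B[j])).applyfunc(lambda s: sp.limit(s, t, 0))
         for i in range(3) for j in range(3)}

# surviving products: E1 E1 = E1 and E1 E2 = E2 E1 = E2, i.e. B_1^s
assert limit[(0, 0)] == sp.Matrix([1, 0, 0])
assert limit[(0, 1)] == sp.Matrix([0, 1, 0])
assert limit[(1, 0)] == sp.Matrix([0, 1, 0])
assert all(v == sp.zeros(3, 1)
           for k, v in limit.items() if k not in [(0, 0), (0, 1), (1, 0)])
```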
\subsubsection{Algebraic classification}
In the next table we provide the classification of $(2,1)$-dimensional Jordan superalgebras with some additional useful information about these superalgebras.
\begin{center}
Table 4. {\it $(2,1)$-Dimensional Jordan superalgebras.}
\begin{equation*}
\begin{array}{|l|l|c|l|}
\hline
\mbox{$J$} & \mbox{Multiplication tables } & \mbox{$\dim\Aut(J)$} & \mbox{Type} \\
\hline \hline
2U_1^s
& e_1e_1=e_1, \; \;e_2e_2=e_2 & 1 & \mbox{associative} \\\hline
U_1^s
& e_1e_1=e_1 & 2 & \mbox{associative} \\ \hline
B_1^s
& e_1e_1=e_1, \;\; e_1e_2=e_2 & 2 & \mbox{associative} \\ \hline
B_2^s
& e_1e_1=e_1, \;\;e_1e_2= \frac{1}{2}e_2 & 3 & \mbox{non-associative} \\ \hline
B_3^s
& e_1e_1=e_2 & 3 & \mbox{associative} \\ \hline
S_1^2\oplus U_1^s & e_1e_1=e_1, \;\; e_2e_2=e_2, \;\; e_1f_1=\frac{1}{2}f_1 & 1 & \mbox{non-associative} \\ \hline
S_1^2
& e_1e_1=e_1,\;\; e_1f_1= \frac{1}{2}f_1 & 2 & \mbox{non-associative} \\ \hline
S_2^2 \oplus U_1^s & e_1e_1=e_1, \;\;e_2e_2=e_2, \;\;e_1f_1=f_1 & 1 & \mbox{associative} \\ \hline
S_2^2
& e_1e_1=e_1, \;\; e_1f_1=f_1 & 2 & \mbox{associative}\\ \hline
S_9^3 & e_1e_1=e_1,\;\; e_1e_2=e_2, \;\; e_1f_1=\frac{1}{2}f_1 & 2 & \mbox{non-associative} \\ \hline
S_{10}^3 & e_1e_1=e_1, \;\; e_1e_2=e_2, \;\;e_1f_1=f_1 & 2 & \mbox{associative} \\ \hline
S_{11}^3 & e_1e_1=e_1, \;\;e_1e_2= \frac{1}{2}e_2, \;\;e_1f_1= \frac{1}{2}f_1 & 3 & \mbox{non-associative} \\ \hline
S_{12}^3 & e_1e_1=e_1,\;\; e_1e_2= \frac{1}{2}e_2, \; e_1f_1=f_1 & 3 & \mbox{non-associative} \\ \hline
S_{13}^3 & e_1e_1=e_1, \;\;e_2e_2=e_2, \;\; e_1f_1=\frac{1}{2}f_1,\;\; e_2f_1=\frac{1}{2}f_1 & 1 & \mbox{non-associative}\\ \hline
\end{array}
\label{2-1JSA}
\end{equation*}
\end{center}
\subsubsection{Degenerations}
\begin{Th}\label{4th}\label{theorem21}
The graph of primary degenerations for Jordan superalgebras of dimension $(2,1)$ has the following form:
\end{Th}
\begin{center}
\scriptsize
\begin{tikzpicture}[->,>=stealth',shorten >=0.08cm,auto,node distance=1.5cm,
thick,main node/.style={rectangle,draw,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
blue node/.style={rectangle,draw, color=blue,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
orange node/.style={rectangle,draw, color=orange,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
green node/.style={rectangle,draw, color=green,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
olive node/.style={rectangle,draw, color=olive,fill=gray!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries },
connecting node/.style={circle, draw, color=purple },
rigid node/.style={rectangle,draw,fill=black!20,rounded corners=1.5ex,font=\sffamily \tiny \bfseries },
bluerigid node/.style={rectangle,draw,color=blue,fill=black!20,rounded corners=1.5ex,font=\sffamily \tiny \bfseries },
style={draw,font=\sffamily \scriptsize \bfseries }]
\node (30) {};
\node (31)[right of=30]{};
\node (32)[right of=31]{};
\node (33)[right of=32]{};
\node (34)[right of=33]{};
\node (35)[right of=34]{};
\node (36)[right of=35]{};
\node [rigid node] (s12u1s) [right of=30] {$S_1^2 \oplus U_1^s$};
\node [rigid node] (s133) [right of=31] {$S_{13}^3$};
\node [bluerigid node] (s22u1s) [right of=33] {$S_2^2 \oplus U_1^s$};
\node [bluerigid node] (u1su1ss11) [right of=35] {$2U_1^s
$};
\node (20)[below of=30]{};
\node (21)[right of=20]{};
\node (22)[right of=21]{};
\node (23)[right of=22]{};
\node (24)[right of=23]{};
\node (25)[right of=24]{};
\node (26)[right of=25]{};
\node (27)[right of=26]{};
\node (28)[right of=27]{};
\node [main node] (s93) [right of=20] {$S^3_9$};
\node [main node] (s12u2s) [right of=21] {$S_1^2
$};
\node [blue node] (s103) [right of=22] {$S^3_{10}$};
\node [blue node] (u1su2ss11) [right of=23] {$U_1^s
$};
\node [blue node] (s22u2s) [right of=25] {$S_2^2
$};
\node [blue node] (b1ss12) [right of=26] {$B_1^s
$};
\node (10)[below of=20]{};
\node (11)[right of=10]{};
\node (12)[right of=11]{};
\node (13)[right of=12]{};
\node (14)[right of=13]{};
\node (15)[right of=14]{};
\node (16)[right of=15]{};
\node [rigid node] (s123) [right of=10] {$S_{12}^3$};
\node [blue node] (b3ss11) [right of=13] {$B_3^s
$};
\node [rigid node] (b2ss11) [right of=16] {$B_2^s
$};
\node [rigid node] (s113) [right of=15] {$S_{11}^3$};
\node (00)[below of=10]{};
\node (01)[right of=00]{};
\node (02)[right of=01]{};
\node (03)[right of=02]{};
\node (04)[right of=03]{};
\node [blue node] (u2su2ss11) [right of=03] {$\mathbb{C}^{2,1} $};
\path[every node/.style={font=\sffamily\small}]
(s133) edge [bend right=0, color=black] node{} (s12u2s)
(s133) edge [bend right=0, color=black] node{} (s103)
(s12u1s) edge [bend right=0, color=black] node{} (s12u2s)
(s12u1s) edge [bend right=0, color=black] node{} (s93)
(s12u1s) edge [bend right=0, color=black] node{} (u1su2ss11)
(s12u2s) edge [bend right=0, color=black] node{} (b3ss11)
(s93) edge [bend right=4, color=black] node{} (b3ss11)
(b2ss11) edge [bend right=0, color=black] node{} (u2su2ss11)
(s113) edge [bend right=4, color=black] node{} (u2su2ss11)
(s123) edge [bend right=0, color=black] node{} (u2su2ss11)
(u1su1ss11) edge [bend right=0, color=blue] node{} (b1ss12)
(u1su1ss11) edge [bend right=0, color=blue] node{} (u1su2ss11)
(s22u1s) edge [bend right=0, color=blue] node{} (u1su2ss11)
(s22u1s) edge [bend right=0, color=blue] node{} (s22u2s)
(s22u1s) edge [bend right=0, color=blue] node{} (s103)
(b1ss12) edge [bend right=0, color=blue] node{} (b3ss11)
(u1su2ss11) edge [bend right=0, color=blue] node{} (b3ss11)
(s22u2s) edge [bend right=4, color=blue] node{} (b3ss11)
(s103) edge [bend right=0, color=blue] node{} (b3ss11)
(b3ss11) edge [bend right=0, color=blue] node{} (u2su2ss11);
\end{tikzpicture}
\end{center}
\begin{Proof} We prove all the required primary degenerations in Table 5 below.
To clarify how this table is read, an example was worked out in the proof of Theorem \ref{theorem}.
\begin{center}\footnotesize Table 5. {\it Primary degenerations of Jordan superalgebras of dimension $(2,1)$.}
$$\begin{array}{|l|lll|}
\hline
\mbox{degenerations} & \multicolumn{3}{|c|}{\mbox{parametrized bases}} \\
\hline
\hline
2U_1^s \to U_1^s & E^t_1=e_1,& E_2^t=te_2,& F_1^t=f_1\\ \hline
2U_1^s \to B_1^s & E^t_1=e_1+e_2,& E_2^t=te_2,& F_1^t=f_1\\ \hline
U_1^s \to B_3^s & E^t_1=te_1+e_2,& E_2^t=t^2e_1,& F_1^t=f_1\\ \hline
B_1^s \to B_3^s & E^t_1=te_1+e_2,& E_2^t=-t^2e_1,& F_1^t=f_1\\ \hline
S_1^2 \oplus U_1^s \to U_1^s & E^t_1=e_2,& E_2^t=te_1,& F_1^t=f_1\\ \hline
S_1^2 \oplus U_1^s \to S_1^2 & E^t_1=e_1,& E_2^t=te_2,& F_1^t=f_1\\ \hline
S_1^2 \oplus U_1^s \to S_9^3 & E^t_1=e_1+e_2,& E_2^t=te_2,& F_1^t=f_1\\ \hline
S_1^2 \to B_3^s & E^t_1=te_1+e_2,& E_2^t=-t^2e_1,& F_1^t=f_1\\ \hline
S_2^2 \to B_3^s & E^t_1=te_1+e_2,& E_2^t=t^2e_1,& F_1^t=f_1\\ \hline
S_2^2 \oplus U_1^s \to S_2^2 & E^t_1=e_1,& E_2^t=te_2,& F_1^t=f_1\\ \hline
S_2^2 \oplus U_1^s \to U_1^s & E^t_1= \frac{1}{t}e_2,& E_2^t=e_1,& F_1^t=f_1\\ \hline
S_2^2 \oplus U_1^s \to S_{10}^3 & E^t_1=e_1+e_2,& E_2^t=te_2,& F_1^t=f_1\\ \hline
S_{9}^3 \to B_3^s & E^t_1=te_1+e_2,& E_2^t=te_2, & F_1^t=f_1\\ \hline
S_{10}^3 \to B_3^s & E^t_1=te_1-\frac{1}{2}e_2,& E_2^t=t^2e_1- te_2,& F_1^t=f_1\\ \hline
S^3_{13}\to S_1^2 & E^t_1=e_1,& E_2^t=te_2,& F_1^t=f_1 \\ \hline
S^3_{13}\to S_{10}^3 & E^t_1=e_1+e_2,& E_2^t=te_2,& F_1^t=f_1 \\ \hline
\end{array}$$
\end{center}
Primary non-degenerations between $3$-dimensional Jordan algebras are given in \cite{gkp17}; we use this result and Lemma 2 to prove some primary non-degenerations (see Table 6).
From \cite{gkp17} it follows that $2U_1^s \to S_2^2$, $S_1^2 \to B_2^s$, $S_9^3 \to S_{12}^3$, and $S_2^2 \oplus U_1^s \to B_1^s$ as Jordan algebras; we shall prove that they do not degenerate as Jordan superalgebras.
First, suppose that $S_2^2 \oplus U_1^s \to B_1^s$ as Jordan superalgebras. Then there exists a parametrized basis
$$E_1^t= a(t)e_1+ b(t)e_2, \quad E_2^t= c(t)e_1 + d(t) e_2, \quad F_1^t= x(t)f_1$$
for $S_2^2 \oplus U_1^s$ such that for $t=0$ we obtain $B_1^s$.
Since $E_1^t F_1^t = a(t) F_1^t$ and $E_2^t F_1^t = c(t) F_1^t$, it follows that $a(0)=c(0)=0$. Now, since
$E_1^tE_1^t=E_1^t$, it follows that $a(t)=0$ and $b(t)=1$ for all $t$. Finally, since $E_2^tE_2^t =0$ at $t=0$, it follows that $d(0)=0$, showing that $E_1^t E^t_2 = 0$ for all $t$. This proves that $S_2^2 \oplus U_1^s \not \to B_1^s$. For the remaining cases see Table 6.
\begin{center}\footnotesize Table 6. {\it Primary non-degenerations of Jordan superalgebras of dimension $(2,1)$.}
$$\begin{array}{|l|c|}
\hline
\mbox{non-degenerations} & \mbox{reason} \\
\hline
\hline
\begin{array}{l}
2U_1^s \not \to S_1^2, \ S_2^2, \ S_9^3, \ S_{10}^3, \ S_{11}^3, \ S_{12}^3
\end{array}
& \dim (J^r)_1<\dim ((J^\prime)^r)_1 \\ \hline
\begin{array}{l}
S_1^2 \not \to B_2^s;\quad S_9^3 \not \to S_{12}^3;\quad S_1^2 \oplus U_1^s\not \to B_2^s, \ S_{11}^3, \ S_{12}^3; \\
S_{13}^3 \not \to B_2^s, \ S_{11}^3, \ S_{12}^3 \\
\end{array}
& J_0\not\to J'_0
\\ \hline
\begin{array}{l}
2U_1^s \not \to B_{2}^s; \quad
S_1^2 \not \to S_{11}^3,\; S_{12}^3; \quad S_9^3 \not \to B_2^s, \; S_{11}^3;\\
S_1^2\oplus U_1^s \not \to B_1^s, \; S_2^2, \; S_{10}^3; \quad
S_2^2\oplus U_1^s \not \to S_{1}^2, \ S_{9}^3; \
S_{13}^3 \not\to U_1^s,\; B_1^s, \; S_2^2, \; S_9^3
\end{array}
& J\not \to J' \text{ as Jordan algebras}
\\ \hline
\end{array}$$
\end{center}
\end{Proof}
\subsubsection{Irreducible components and rigid algebras}
Using Theorem \ref{theorem21}, we describe the irreducible components and the rigid superalgebras in $\mathcal{JS}^{2,1}.$
\begin{corollary}\label{ir_LZ} The irreducible components of $\mathcal{JS}^{2,1}$ are:
$$
\begin{aligned}
\mathcal{C}_1 &=\overline{ O(2U_1^s) }=
\{ 2U_1^s , U_1^s , B_1^s, B_3^s, \mathbb{C}^{2,1} \};\\
\mathcal{C}_2 &=\overline{ O(B_2^s) }= \{ B_2^s , \mathbb{C}^{2,1} \};\\
\mathcal{C}_3 &=\overline{ O(S_{1}^2 \oplus U_1^s) }=
\{U_1^s, B_3^s, S_{1}^2 \oplus U_1^s, S_1^2 , S_9^3, \mathbb{C}^{2,1} \};\\
\mathcal{C}_4 &=\overline{ O(S_{2}^2 \oplus U_1^s) }=
\{ U_1^s, B_3^s, S_{2}^2 \oplus U_1^s, S_2^2 , S_{10}^3, \mathbb{C}^{2,1} \};\\
\mathcal{C}_5 &=\overline{ O(S_{11}^3) }= \{ S_{11}^3, \mathbb{C}^{2,1} \};\\
\mathcal{C}_6 &=\overline{ O(S_{12}^3) }= \{ S_{12}^3, \mathbb{C}^{2,1} \};\\
\mathcal{C}_7 &=\overline{ O(S_{13}^3) }=
\{ B_3^s , S_{1}^2 , S_{10}^3, S_{13}^3, \mathbb{C}^{2,1} \}.\\
\end{aligned}
$$
In particular, $Rig(\mathcal{JS}^{2,1})=
\{ 2U_1^s, B_2^s , S_{1}^2 \oplus U_1^s,
S_{2}^2 \oplus U_1^s, S_{11}^3, S_{12}^3, S_{13}^3 \}.$
\end{corollary}
\paragraph{Acknowledgements}
This work was started during the research stay of I. Kaygorodov at the Department of Mathematics, partially funded by the {\it Coloquio de Matem\'atica} (CR 4430) of the University of Antofagasta.
The work was supported by RFBR 17-01-00258, by FOMIX-CONACYT YUC-2013-C14-221183 and 222870.
\end{document}
\begin{document}
\title{Weighted Persistent Homology}
\author{Shiquan Ren}
\address{\textit{a} School of Mathematics and Computer Science, Guangdong Ocean University, 1 Haida Road, Zhanjiang, China, 524088.\newline
\textit{b} Department of Mathematics, National University of Singapore, Singapore, 119076.}
\email{[email protected]}
\thanks{The project was supported in part by the Singapore Ministry of Education research grant (AcRF Tier 1 WBS No.~R-146-000-222-112). The first author was supported in part by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. The second author was supported in part by the President's Graduate Fellowship of National University of Singapore. The third author was supported by a grant (No.~11329101) of NSFC of China.}
\author{Chengyuan Wu$^*$}
\address{Department of Mathematics, National University of Singapore, Singapore 119076}
\email{[email protected]}
\thanks{$^*$Corresponding author}
\author{Jie Wu}
\address{Department of Mathematics, National University of Singapore, Singapore 119076}
\email{[email protected]}
\keywords{Algebraic topology, Applied topology, Persistent homology, Weighted simplicial complex, Weighted persistent homology}
\subjclass[2010]{Primary 55U10, 55-02; Secondary 55N99}
\begin{abstract}
In this paper we develop the theory of weighted persistent homology. In 1990, Robert J.\ Dawson was the first to study in depth the homology of weighted simplicial complexes. We generalize the definitions of weighted simplicial complex and the homology of weighted simplicial complex to allow weights in an integral domain $R$. Then we study the resulting weighted persistent homology. We show that weighted persistent homology can tell apart filtrations that ordinary persistent homology does not distinguish. For example, if there is a point considered as special, weighted persistent homology can tell when a cycle containing the point is formed or has disappeared.
\end{abstract}
\maketitle
\section{Introduction}
In topological data analysis, point cloud data refers to a finite set $X$ of points in the Euclidean space $\mathbb{R}^n$, and the computation of persistent homology usually begins with the point cloud data $X$. In the classical approach to the persistent homology of $X$, each point in $X$ plays an equally important role, or in other words, each point has the same weight (cf. \cite{Zomorodian2005}). Then $X$ is converted into a simplicial complex, for example, the \v{C}ech complex or the Vietoris-Rips complex (cf. \cite[Chap.~III]{Edelsbrunner2010}).

In this paper, we consider the situation where different points in $X$ may have different importance. Our point cloud data $X$ is weighted, that is, each point in $X$ has a weight. Some practical examples where it is useful to consider weighted point cloud data are described in Section~\ref{subs-m}.
Our approach is to weight the boundary map. This is different from existing techniques of introducing weights to persistent homology. For instance, in the paper \cite{Petri2013} by Petri, Scolamiero, Donato, and Vaccarino, weights are introduced via the \emph{weight rank clique filtration} with a thresholding of weights, where at each step $t$ one considers the thresholded graph containing only the links of weight larger than a threshold $\epsilon_t$. In the paper \cite{edelsbrunner2012persistent} by Edelsbrunner and Morozov, the weight of edges is also used to construct a filtration. The theory of weighted simplicial complexes we use is also significantly different from the theory of \emph{weighted alpha shapes} \cite{edelsbrunner1992weighted} by H.\ Edelsbrunner, which are polytopes uniquely determined by points, their weights, and a parameter $\alpha\in\mathbb{R}$ that controls the level of detail.
In his thesis \cite{curry2013sheaves}, J. Curry utilizes the barcode descriptor from persistent homology to interpret cellular cosheaf homology in terms of Borel-Moore homology of the barcode. In \cite[p.~244]{curry2013sheaves}, it is briefly mentioned that for applications, cosheaves should allow us to weight different models of the real world. In a subsequent work \cite{curry2016discrete} by J.\ Curry, R.\ Ghrist and V.\ Nanda, it is shown how sheaves and sheaf cohomology are powerful tools in computational topology, greatly generalizing persistent homology. An algorithm for simplifying the computation of cellular sheaf cohomology via (discrete) Morse-theoretic techniques is included in \cite{curry2016discrete}. In the recent paper \cite{kashiwara2017persistent}, M.\ Kashiwara and P.\ Schapira show that many results in persistent homology can be interpreted in the language of microlocal sheaf theory. We note that in \cite[p.~8]{kashiwara2017persistent}, a notion of weight is being used, where the closed ball $B(s;t)$ is being replaced by $B(s;\rho(s)t)$, where $\rho(s)\in\mathbb{R}_{\geq 0}$ is the weight. This notion of weight is more geometrical, which differs from our more algebraic approach of weighting the boundary operator.
In the seminal paper \cite{carlsson2009theory} by Carlsson and Zomorodian, the theory of multidimensional persistence of multidimensional filtrations is developed. In a subsequent work \cite{carlsson2009computing} by Carlsson, Singh and Zomorodian, a polynomial time algorithm for computing multidimensional persistence is presented. In \cite{cerri2013betti}, the authors Cerri, Fabio, Ferri, Frosini and Landi show that Betti numbers in multidimensional persistent homology are stable functions, in the sense that small changes of the vector-valued filtering functions imply only small changes of persistent Betti numbers functions. In \cite{xia2015multidimensional}, K. Xia and G. Wei introduce two families of multidimensional persistence, namely pseudomultidimensional persistence and multiscale multidimensional persistence, and apply them to analyze biomolecular data. The utility and robustness of the proposed topological methods are effectively demonstrated via protein folding, protein flexibility analysis, and various other applications. In \cite[p.~1509]{xia2015multidimensional}, a particle type-dependent weight function $w_j$ is introduced in the definition of the atomic rigidity index $\mu_i$. The atomic rigidity index $\mu_i$ can be generalized to a position ($\mathbf{r}$)-dependent rigidity density $\mu(\mathbf{r})$. Subsequently \cite[p.~1512]{xia2015multidimensional}, filtration is performed over the density $\mu(\mathbf{r})$.
The main aim of our paper is to construct weighted persistent homology to study the topology of weighted point cloud data. A weighted simplicial complex is a simplicial complex where each simplex is assigned with a weight. We convert a weighted point cloud data $X$ into a weighted simplicial complex.
In \cite{Dawson1990}, Robert J.\ Dawson was the first to study in depth the homology of weighted simplicial complexes. We use an adaptation of \cite{Dawson1990} to compute the homology of weighted simplicial complexes. In \cite{Dawson1990}, the weights take values in the set of non-negative integers $\{0,1,2,\dots\}$. We generalize \cite{Dawson1990} such that the weights can take values in any integral domain $R$ with multiplicative identity $1_R$. Finally, we study and analyze the weighted persistent homology of filtered weighted simplicial complexes.
\section{Background}
In this section, we review some background knowledge and give some preliminary definitions. We give some examples of weighted cloud data in Subsection~\ref{subs-m}. We review the definitions of simplicial complexes in Subsection~\ref{subs2.2} and review some properties of rings and integral domains in Subsection~\ref{subs2.3}. We give the formal definitions of weighted point cloud data and weighted simplicial complex in Subsection~\ref{subs2.4}.
\subsection{Examples of weighted cloud data}\label{subs-m}
As the motivation for this paper, we describe some practical problems involving a weight function on the data. We look at typical applications of persistent homology and consider situations where the data points may not be equally important. When some data points are more important than others, mathematically this requires a weight function to express the difference between points.
In the field of computer vision, Carlsson et al.\ \cite{Carlsson2008} develop a framework to use persistent homology to analyze natural images such as digital photographs. The natural image may be viewed as a vector in a very high-dimensional space $\mathcal{P}$. In the paper, the dimension of $\mathcal{P}$ is the number of pixels in the format used by the camera, and the image is associated to the vector whose coordinates are the grey scale values of the pixels. In certain scenarios, such as color detection in computer vision \cite{Pedreschi2006,Lu2000,Comaniciu1997}, each pixel may play a different role depending on its color. In this case, each pixel can be given a different weight depending on its color. More generally, pixels in images can be weighted depending on their wavelength in the electromagnetic spectrum, which includes infrared and ultraviolet light.
In the paper \cite{Carstens2013}, persistent homology is used to study collaboration networks, which measures how scientists collaborate on papers. In the collaboration network, there is a connection between two scientists if they are coauthors on at least one paper. Depending on the purpose of research, weights can be used to differentiate different groups of scientists, for example PhD students, postdoctoral researchers and professors, or researchers in different fields.
Lee et al.\ \cite{Lee2012} proposed a new framework for modeling brain connectivity using persistent homology. The connectivity of the human brain, also known as human connectome, is usually represented as a graph consisting of nodes and edges connecting the nodes. In this scenario, different weights could be assigned to different neurons in different parts of the brain, for example left/right brain, frontal lobe or temporal lobe.
There are many different ways to define the theory of weighted persistent homology (cf.\ \cite{Horak2013,Petri2013,Dawson1990}), and our definition is not unique. We will show that our definition satisfies some nice properties, including some properties related to category theory \cite{Lane1978} which is an important part of modern mathematics.
In the remaining subsections we review the mathematical background necessary for our work. We assume all rings have a multiplicative identity $1$. First we define the concept of weighted point cloud data (WPCD). Then, similar to the unweighted case, we can convert the WPCD to a simplicial complex, using either the \v{C}ech complex or the Vietoris-Rips complex. Then, we define a weight function on the simplices so as to compute the weighted simplicial homology.
\subsection{Simplicial Complexes}\label{subs2.2}
The following definition of simplicial complexes can be found in \cite[p. 107]{Hatcher2002}.
An \emph{(abstract) simplicial complex} is a collection $K$ of nonempty finite sets, called \emph{(abstract) simplices}, such that if $\sigma\in K$, then every nonempty subset of $\sigma$ is in $K$. Let $K$ be a simplicial complex and let $\sigma\in K$. An element $v$ of $\sigma$ is called a \emph{vertex}, and any nonempty subset of $\sigma$ is called a \emph{face}. For convenience, we do not distinguish between a vertex $v$ and the corresponding face $\{v\}$.
The definition of orientations of simplices is given in \cite[p. 105]{Hatcher2002}. Let $\sigma=\{v_0,v_1,\dots,v_n\}$ be a simplex of a simplicial complex $K$.
An orientation of $\sigma$ is given by an ordering of its vertices $v_0,v_1,\dots,v_n$, with the rule that two orderings define the same orientation if and only if they differ by an even permutation. An oriented simplex $\sigma$ is written as $[v_0,v_1,\cdots,v_n]$.
Let $\{x_\alpha\}$ be a set of points in the Euclidean space $\mathbb{R}^n$. Let $\epsilon>0$. The \emph{\v{C}ech complex}, denoted as $\mathcal{C}_\epsilon$, is the abstract simplicial complex where $k+1$ vertices span a $k$-simplex if and only if the $k+1$ corresponding closed $\epsilon/2$-ball neighborhoods of the vertices have nonempty intersection (cf. \cite[p. 72]{Edelsbrunner2010}).
The \emph{Vietoris-Rips complex}, denoted as $\mathcal{R}_\epsilon$, is the abstract simplicial complex where $k+1$ vertices span a $k$-simplex if and only if the distance between any pair of the $k+1$ vertices is at most $\epsilon$ (cf. \cite[p.~74]{Edelsbrunner2010}).
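As an informal illustration of the Vietoris-Rips construction (this sketch is not part of the formal development; the function name \texttt{rips\_complex} and its parameters are our own), a few lines of Python suffice to enumerate the simplices of $\mathcal{R}_\epsilon$ for a small point set:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def rips_complex(points, eps, max_dim=2):
    """Vietoris-Rips complex: k+1 vertices span a k-simplex iff
    all pairwise distances are at most eps."""
    n = len(points)
    simplices = []
    for k in range(max_dim + 1):
        for idx in combinations(range(n), k + 1):
            if all(dist(points[i], points[j]) <= eps
                   for i, j in combinations(idx, 2)):
                simplices.append(idx)
    return simplices

# Three points: two close together, one far away.
pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
K = rips_complex(pts, eps=1.5)
# All three vertices enter, but only the near pair spans an edge.
```

For larger point sets one would of course use a dedicated library, but this brute-force enumeration matches the definition above literally.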
\subsection{Rings}\label{subs2.3}
Throughout this section, we let $R$ be a commutative ring with multiplicative identity. A nonzero element $a\in R$ is said to \emph{divide} an element $b\in R$ (denoted $a\mid b$) if there exists $x\in R$ such that $ax=b$.
A nonzero element $a$ in a ring $R$ is called a \emph{zero divisor} if there exists a nonzero $x\in R$ such that $ax=0$.
A commutative ring $R$ with $1_R\neq 0$ and no zero divisors is called an \emph{integral domain} (cf. \cite[p.~116]{Hungerford}).
Let $R$ be an integral domain. Let $S$ be the set of all nonzero elements in $R$. Then we can construct the \emph{quotient field} $S^{-1}R$ (cf. \cite[p.~142]{Hungerford}).
\begin{prop}{\cite[p.~144]{Hungerford}}
\label{embedq}
The map $\varphi_s:R\to S^{-1}R$ given by $r\mapsto rs/s$ (for any $s\in S$) is a monomorphism. Hence, the integral domain $R$ can be embedded in its quotient field.
\end{prop}
\begin{remark}
\label{identifyfrac}
Due to Proposition \ref{embedq}, we may identify $rs/s \in S^{-1}R$ with $r\in R$. We denote this as $\varphi_s^{-1}(rs/s)=r$, or simply $rs/s=r$ if there is no danger of confusion.
\end{remark}
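For $R=\mathbb{Z}$, the identification of Remark \ref{identifyfrac} is just the usual normalization of rational numbers; this can be checked mechanically with Python's \texttt{fractions} module (an illustrative aside, not part of the paper's formal development):

```python
from fractions import Fraction

r, s = 7, 3  # r in R = Z, s a nonzero element of R
# The embedding of the proposition sends r to rs/s in S^{-1}R = Q.
image = Fraction(r * s, s)
# Fraction normalizes rs/s back to r, realizing the identification
# rs/s = r of the remark.
```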
\subsection{Weighted Simplicial Complexes}\label{subs2.4}
In the following definitions, we generalize Robert J.\ Dawson's paper \cite{Dawson1990} and define \emph{weighted point cloud data} and \emph{weighted simplicial complexes}, with weights in rings.
\begin{defn}[Weighted point cloud data]
Let $n$ be a positive integer. The \emph{point cloud data} $X$ in $\mathbb{R}^n$ is a finite subset of $\mathbb{R}^n$. Given some point cloud data $X$,
a \emph{weight} on $X$ is a function $w_0: X\to R$, where $R$ is a commutative ring. The pair $(X,w_0)$ is called \emph{weighted point cloud data}, or \emph{WPCD} for short.
\end{defn}
Next, in Definition \ref{wscdef} we generalize the definition of \emph{weighted simplicial complex} in \cite[p.~229]{Dawson1990} to allow for weights in a commutative ring.
\begin{defn}[cf.\ {\cite[p.~229]{Dawson1990}}]
\label{wscdef}
A \emph{weighted simplicial complex} (or \emph{WSC} for short) is a pair $(K,w)$ consisting of a simplicial complex $K$ and a function $w: K\to R$, where $R$ is a commutative ring, such that for any $\sigma_1, \sigma_2\in K$ with $\sigma_1\subseteq \sigma_2$, we have $w(\sigma_1)\mid w(\sigma_2)$.
\end{defn}
Given any weighted point cloud data $(X,w_0)$, we allow flexible definitions of extending the weight function $w_0$ to all higher-dimensional simplices, where the only condition to be satisfied is the divisibility condition in Definition \ref{wscdef}. One such definition is what we call the \emph{product weighting}.
\begin{defn}[Product weighting]
\label{product}
Let $(X,w_0)$ be a weighted point cloud data, with weight function $w_0: X\to R$ (where $R$ is a commutative ring). Let $K$ be a simplicial complex
whose set of vertices is $X$. We define a weight function $w: K\to R$ by
\begin{align}\label{e1}
w(\sigma)=\prod_{i=0}^k w_0(v_i),
\end{align}
where $\sigma=[v_0,v_1,\dots,v_k]$ is a $k$-simplex of $K$. We call $w$ defined as such the \emph{product weighting}.
\end{defn}
\begin{prop}
\label{prop1}
Let $(X,w_0)$ be a weighted point cloud data. Let $w$ be the product weighting defined in Definition \ref{product}. Then the following hold:
\begin{enumerate}
\item The restriction of $w$ to the vertices of $K$ is $w_0$.
\item For any $\sigma_1,\sigma_2\in K$, if $\sigma_1\subseteq\sigma_2$, then $w(\sigma_1)\mid w(\sigma_2)$.
\end{enumerate}
\end{prop}
\begin{proof}
Firstly, if $\sigma=[v_0]$ is a vertex of $K$ (0-simplex), then $w(\sigma)=w_0(v_0)$ by (\ref{e1}).
For the second assertion, suppose $\sigma_1\subseteq\sigma_2$, where $\sigma_1=[v_0,\dots,v_k]$ and $\sigma_2=[v_0,\dots,v_k,\dots,v_l]$. Then $w(\sigma_2)=w(\sigma_1)\cdot\prod_{i=k+1}^l w_0(v_i)$.
\end{proof}
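A minimal computational sketch of Definition \ref{product} and Proposition \ref{prop1}, taking weights in $R=\mathbb{Z}$ (the helper name \texttt{product\_weighting} is ours):

```python
from math import prod

def product_weighting(w0, simplex):
    """Weight of a simplex as the product of its vertex weights."""
    return prod(w0[v] for v in simplex)

# Weighted point cloud data: vertices 0, 1, 2 with integer weights.
w0 = {0: 2, 1: 3, 2: 5}
K = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
w = {s: product_weighting(w0, s) for s in K}

# Divisibility condition of a WSC: w(s1) | w(s2) whenever s1 is a face of s2.
ok = all(w[s2] % w[s1] == 0
         for s1 in K for s2 in K if set(s1) <= set(s2))
```

Here `ok` confirms assertion (2) of the proposition on this example, and the restriction of `w` to the vertices reproduces `w0`, as in assertion (1).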
For commutative rings such that every two elements have a LCM (for instance UFDs), we can use the economical weighting in \cite[p.~231]{Dawson1990} instead, where the weight of any simplex is the LCM of the weights of its faces.
\section{Properties of Weighted Simplicial Complexes}
In this section, we prove some properties of weighted simplicial complexes. We consider the case where $R$ is a commutative ring with 1.
We now consider subcomplexes given by the preimage of the weight function with values in ideals. This can be interpreted as removing part of the data according to the values of the weight function.
\begin{lemma}
\label{lemma2}
Let $I$ be an ideal of a commutative ring $R$. Let $(K,w)$ be a WSC, where $w: K\to R$ is a weight function. Let $w^{-1}(I)$ denote the preimage of $I$ under $w$. If $\sigma\in w^{-1}(I)$, then for all simplices $\tau$ containing $\sigma$, we have $\tau\in w^{-1}(I)$.
\end{lemma}
\begin{proof}
Let $\sigma\in w^{-1}(I)$, i.e.\ $w(\sigma)\in I$. By Definition \ref{wscdef}, for $\sigma\subseteq\tau$ we have $w(\sigma)\mid w(\tau)$. Hence $w(\tau)=w(\sigma)x$ for some $x\in R$. Since $I$ is an ideal, thus $w(\tau)\in I$.
\end{proof}
\begin{theorem}
\label{preimageideal}
Let $I$ be an ideal of a commutative ring $R$. Let $(K,w)$ be a WSC, where $w:K\to R$ is a weight function. Then $K\setminus w^{-1}(I)$ is a simplicial subcomplex of $K$.
\end{theorem}
\begin{proof}
If $K\setminus w^{-1}(I)=\emptyset$, then it is the empty subcomplex of $K$.
Otherwise, let $\tau\in K\setminus w^{-1}(I)$. Let $\sigma$ be a nonempty subset of $\tau$. Suppose to the contrary $\sigma\in w^{-1}(I)$. Then by Lemma \ref{lemma2}, we have $\tau\in w^{-1}(I)$, which is a contradiction. Hence $\sigma\in K\setminus w^{-1}(I)$, so we have proved that $K\setminus w^{-1}(I)$ is a simplicial complex.
\end{proof}
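Theorem \ref{preimageideal} can be sanity-checked on a small example with $R=\mathbb{Z}$ and $I=(2)$; the following sketch (our own, with an ad hoc weighting satisfying Definition \ref{wscdef}) verifies that discarding the simplices whose weight lies in $I$ leaves a set closed under taking faces:

```python
from itertools import combinations

# A WSC on vertices 0, 1, 2; weights chosen so that faces divide cofaces.
w = {(0,): 1, (1,): 1, (2,): 2,
     (0, 1): 1, (0, 2): 2, (1, 2): 2, (0, 1, 2): 2}
K = set(w)

def in_ideal(a):
    return a % 2 == 0  # membership in the ideal I = (2) of Z

L = {s for s in K if not in_ideal(w[s])}  # K \ w^{-1}(I)

def faces(s):
    """All proper nonempty faces of a simplex s."""
    return [f for k in range(1, len(s)) for f in combinations(s, k)]

# Closure under faces: every face of a simplex in L is again in L.
closed = all(set(faces(s)) <= L for s in L)
```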
\begin{prop}
Let $I$, $J$ be ideals of a commutative ring $R$. Let $(K,w)$ be a WSC. Then
\begin{equation}
\label{intersectideal}
K\setminus w^{-1}(I\cap J)=(K\setminus w^{-1}(I))\cup(K\setminus w^{-1}(J))
\end{equation}
is a simplicial subcomplex of $K$.
\end{prop}
\begin{proof}
We have that
\begin{align*}
\sigma\in K\setminus w^{-1}(I\cap J)&\iff w(\sigma)\notin I\cap J\\
&\iff w(\sigma)\notin I\ \text{or}\ w(\sigma)\notin J\\
&\iff \sigma\in(K\setminus w^{-1}(I))\cup(K\setminus w^{-1}(J)).
\end{align*}
Hence Equation \ref{intersectideal} holds.
Since $I$, $J$ are ideals, by Theorem \ref{preimageideal} both $K\setminus w^{-1}(I)$ and $K\setminus w^{-1}(J)$ are simplicial subcomplexes of $K$ and so is their union. Alternatively, we can apply Theorem \ref{preimageideal} to the ideal $I\cap J$ to conclude that $K\setminus w^{-1}(I\cap J)$ is a simplicial subcomplex of $K$.
\end{proof}
\subsection{Categorical Properties of WSC}
Let $K$ and $L$ be simplicial complexes. A map $f:K\to L$ is called a \emph{simplicial map} if it sends each simplex of $K$ to a simplex of $L$ by a linear map taking vertices to vertices. That is, if the vertices $v_0,\dots,v_n$ of $K$ span a simplex of $K$, the points $f(v_0),\dots,f(v_n)$ (not necessarily distinct) span a simplex of $L$.
Next, we will use some terminology from category theory. We recommend the book by Mac Lane \cite{Lane1978} for an introduction to the subject. The categorical properties of WSCs have been studied in \cite{Dawson1990}. Here, we show that the theory generalizes readily to the case where the weights lie in a ring, and we write out the details more fully.
\begin{defn}[{\cite[p.~13]{Lane1978}}]
Let $C$ and $B$ be categories. A \emph{functor} $T:C\to B$ with domain $C$ and codomain $B$ consists of two suitably related functions: The object function $T$, which assigns to each object $c$ of $C$ an object $Tc$ of $B$ and the arrow function (also written as $T$) which assigns to each arrow $f:c\to c'$ of $C$ an arrow $Tf: Tc\to Tc'$ of $B$, such that \[T(1_c)=1_{Tc},\qquad T(g\circ f)=Tg\circ Tf,\] where the latter holds whenever the composite $g\circ f$ is defined in $C$.
\end{defn}
In \cite[p. 229]{Dawson1990}, morphisms of weighted simplicial complexes with integral weights have been studied. In the following definition, we generalize \cite{Dawson1990} and define morphisms of weighted simplicial complexes with weights in general commutative rings.
\begin{defn}[cf.\ {\cite[p.~229]{Dawson1990}}]
\label{WSCmorphism}
Let $(K,w_K)$ and $(L,w_L)$ be WSCs. A \emph{morphism of WSCs} is a simplicial map $f: K\to L$ such that $w_L(f(\sigma))\mid w_K(\sigma)$ for all $\sigma\in K$. These form the morphisms of a category \textbf{WSC}. We may omit the subscripts in $w_K$ and $w_L$, for instance writing $w(f(\sigma))\mid w(\sigma)$, if there is no danger of confusion.
\end{defn}
The next example generalizes \cite[p. 229]{Dawson1990}.
\begin{eg}[cf.\ {\cite[p.~229]{Dawson1990}}]
For any simplicial complex $K$ and every $a\in R$, there is a WSC $(K,a)$ in which every simplex (in particular every vertex) has weight $a$. We call this construction a \emph{constant weighting}.
\end{eg}
Let \textbf{SC} denote the category of simplicial complexes.
\begin{prop}[cf.\ {\cite[p.~229]{Dawson1990}}]
Constant weightings are functorial: Let $T:\textbf{SC}\to\textbf{WSC}$ be defined by $TK=(K,a)$ for each simplicial complex $K\in\textbf{SC}$ and $Tf=f$ for each simplicial map $f\in\textbf{SC}$. Then $T$ is a functor.
\end{prop}
\begin{proof}
Straightforward verification. Note that the condition $w(f(\sigma))\mid w(\sigma)$ in Definition \ref{WSCmorphism} is trivially satisfied since $a\mid a$ for all $a\in R$.
\end{proof}
\begin{defn}[{\cite[p.~80]{Lane1978}}]
Let $A$ and $X$ be categories. An \emph{adjunction} from $X$ to $A$ is a triple $\langle F,G,\varphi\rangle$, where $F:X \to A$ and $G: A\to X$ are functors and $\varphi$ is a function which assigns to each pair of objects $x\in X$, $a\in A$ a bijection of sets \[\varphi=\varphi_{x,a}: A(Fx,a)\cong X(x,Ga)\] which is natural in $x$ and $a$.
An adjunction may also be described directly in terms of arrows. It is a bijection which assigns to each arrow $f: Fx\to a$ an arrow $\varphi f=\text{rad}\,f:x\to Ga$, the \emph{right adjunct} of $f$, such that \[\varphi(k\circ f)=Gk\circ\varphi f,\qquad \varphi(f\circ Fh)=\varphi f\circ h\] hold for all $f$ and all arrows $h: x'\to x$ and $k: a\to a'$. Given such an adjunction, the functor $F$ is said to be a \emph{left adjoint} for $G$, while $G$ is called a \emph{right adjoint} for $F$.
\end{defn}
One reason why we generalize the weights to take values in rings with 1 is to keep the following nice proposition true.
\begin{prop}[cf.\ {\cite[p.~229]{Dawson1990}}]
\label{prop:rladjoint}
The constant weighting functors $T_1:=(-,1_R)$ and $T_0:=(-,0_R)$ are respectively right and left adjoint to the forgetful functor $U$ from \textbf{WSC} to the category \textbf{SC} of simplicial complexes.
\end{prop}
\begin{proof}
Let $\varphi$ be a bijection that assigns to each arrow $f: U(K,w)\to L$ an arrow $\varphi f: (K,w)\to (L,1)$, where $\varphi f(\sigma)=f(\sigma)$. The key point is that the condition for WSC morphism (Def.\ \ref{WSCmorphism}), namely $1\mid w(\sigma)$, always holds for all $\sigma\in (K,w)$. Then for all arrows $h:(K,w)\to (K',w')$ and $k: L\to L'$, we have $\varphi(k\circ f)=k\circ f=Uk\circ\varphi f$ and $\varphi(f\circ Uh)=f\circ Uh=\varphi f\circ h$. Thus $T_1$ is the right adjoint for $U$.
Let $\psi$ be a bijection that assigns to each arrow $f': (K,0)\to (L,w')$ an arrow $\psi f': K\to U(L,w')$, where $\psi f'(\sigma)=f'(\sigma)$. The key point is that $w'(f'(\sigma))\mid 0$ always holds for all $\sigma\in (K,0)$. Similarly, we can conclude that $T_0$ is the left adjoint for $U$.
\end{proof}
\section{Homology of Weighted Simplicial Complexes}
\label{homology}
In this section, we let $R$ be an integral domain, in order to form the field of fractions (also known as quotient field) which is needed for our purposes.
\subsection{Chain complex}
A \emph{chain complex} $(C_\bullet,\partial_\bullet)$ is a sequence of abelian groups or modules $\dots, C_2, C_1, C_0, C_{-1}, C_{-2},\dots$ connected by homomorphisms (called boundary homomorphisms) $\partial_n: C_n\to C_{n-1}$, such that $\partial_n\circ\partial_{n+1}=0$ for each $n$. A chain complex is usually written out as: \[\dots\to C_{n+1}\xrightarrow{\partial_{n+1}}C_n\xrightarrow{\partial_n}C_{n-1}\to\dots\to C_1\xrightarrow{\partial_1}C_0\xrightarrow{\partial_0}C_{-1}\xrightarrow{\partial_{-1}}C_{-2}\to\dots\]
A \emph{chain map} $f$ between two chain complexes $(A_\bullet, \partial_{A,\bullet})$ and $(B_\bullet, \partial_{B,\bullet})$ is a sequence $f_\bullet$ of module homomorphisms $f_n: A_n\to B_n$ for each $n$ that commutes with the boundary homomorphisms on the two chain complexes: \[\partial_{B, n}\circ f_n=f_{n-1}\circ \partial_{A,n}.\]
\[
\begin{tikzcd}
\dots\arrow[r] &A_{n+1}\arrow[d,"f_{n+1}"]\arrow[r,"\partial_{A,n+1}"] &A_{n}\arrow[d,"f_n"]\arrow[r,"\partial_{A,n}"] &A_{n-1}\arrow[d,"f_{n-1}"]\arrow[r] &\dots\\
\dots\arrow[r] &B_{n+1}\arrow[r,"\partial_{B,n+1}"] &B_n\arrow[r,"\partial_{B,n}"] &B_{n-1}\arrow[r]&\dots
\end{tikzcd}
\]
\subsection{Homology Groups}
For a topological space $X$ and a chain complex $C(X)$, the \emph{$n$th homology group} of $X$ is $H_n(X):=\ker(\partial_n)/\Ima(\partial_{n+1})$. Elements of $B_n(X):=\Ima(\partial_{n+1})$ are called \emph{boundaries} and elements of $Z_n(X):=\ker(\partial_n)$ are called \emph{cycles}.
\begin{prop}
\label{chainmapinduce}
A chain map $f_\bullet$ between chain complexes $(A_\bullet, \partial_{A, \bullet})$ and $(B_\bullet, \partial_{B,\bullet})$ induces homomorphisms between the homology groups of the two complexes.
\end{prop}
\begin{proof}
The relation $\partial f=f\partial$ implies that $f$ takes cycles to cycles since $\partial\alpha=0$ implies $\partial(f\alpha)=f(\partial\alpha)=0$. Also $f$ takes boundaries to boundaries since $f(\partial\beta)=\partial(f\beta)$.
For $\beta\in\Ima\partial_{A,n+1}$, we have $f_n(\beta)\in\Ima\partial_{B,n+1}$, so $\pi_{B,n}f_n(\beta)=0$. Therefore $\Ima\partial_{A,n+1}\subseteq\ker(\pi_{B,n}\circ f_n)$. By the universal property of quotient groups, there exists a unique homomorphism $(f_n)_*$ such that the following diagram commutes.
\[
\begin{tikzcd}[column sep=tiny]
\ker\partial_{A,n}\arrow[r,"f_n"]\arrow[rd,"\pi_{A,n}"] &\ker\partial_{B,n}\arrow[r,"\pi_{B,n}"] &H_n(B_\bullet)=\ker\partial_{B,n}/\Ima\partial_{B,n+1}\\
&H_n(A_\bullet)=\ker\partial_{A,n}/\Ima\partial_{A,n+1}\arrow[ur,"(f_n)_*"]
\end{tikzcd}
\]
Hence $f_\bullet$ induces a homomorphism $(f_\bullet)_*: H_\bullet (A_\bullet)\to H_\bullet (B_\bullet)$.
\end{proof}
\begin{defn}
\label{chaingroup}
Let $C_n(K,w)$ (or simply $C_n(K)$ where unambiguous) be the free $R$-module with basis the $n$-simplices of $K$ with nonzero weight. Elements of $C_n(K)$, called $n$\emph{-chains}, are finite formal sums $\sum_\alpha n_\alpha\sigma_\alpha$ with coefficients $n_\alpha\in R$ and $\sigma_\alpha\in K$.
\end{defn}
\begin{defn}
\label{chain}
Given a simplicial map $f: K\to L$, the induced homomorphism $f_\sharp: C_n(K)\to C_n(L)$ is defined on the generators of $C_n(K)$ (and extended linearly) as follows. For $\sigma=[v_0,v_1,\dots,v_n]\in C_n(K)$, we define
\begin{equation}f_\sharp(\sigma)=\begin{cases}\frac{w(\sigma)}{w(f(\sigma))}f(\sigma)&\text{if $f(v_0),\dots,f(v_n)$ are distinct,}\\
0&\text{otherwise,}
\end{cases}
\end{equation}
where $\frac{w(\sigma)}{w(f(\sigma))}\in S^{-1}R$ is identified with the corresponding element in $R$ as described in Remark \ref{identifyfrac}.
Note that this is well-defined since if $w(\sigma)\neq 0$, then $w(f(\sigma))\mid w(\sigma)$ in Definition \ref{WSCmorphism} implies $w(f(\sigma))\neq 0$. So $\frac{w(\sigma)}{w(f(\sigma))}\in S^{-1}R$. Furthermore, $\frac{w(\sigma)}{w(f(\sigma))}=\frac{xw(f(\sigma))}{w(f(\sigma))}$ for some $x\in R$, so that $\frac{w(\sigma)}{w(f(\sigma))}=x\in R$.
\end{defn}
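The case analysis in this definition can be sketched in code. The following Python fragment is our own illustration, not part of the paper: simplices are modeled as vertex tuples, a simplicial map as a vertex dictionary, and the weight function as a dictionary on vertex sets, with coefficient ring $R=\mathbb{Z}$. It computes the image of a single generator under $f_\sharp$, returning the coefficient $w(\sigma)/w(f(\sigma))$ when the image vertices are distinct and $0$ otherwise.

```python
from fractions import Fraction

def f_sharp(sigma, f, w):
    """Image of one generator sigma under the induced chain map f_#.

    sigma: an n-simplex, as a tuple of vertices of K
    f:     a simplicial map, as a dict from vertices of K to vertices of L
    w:     the weight functions of K and L together, as a dict from
           frozensets of vertices to nonzero integers (R = Z)
    Returns (coefficient, image simplex), with coefficient w(sigma)/w(f(sigma)),
    or (0, None) when f(v_0), ..., f(v_n) are not distinct.
    """
    image = tuple(f[v] for v in sigma)
    if len(set(image)) < len(image):   # two image vertices collide: f_#(sigma) = 0
        return 0, None
    ratio = Fraction(w[frozenset(sigma)], w[frozenset(image)])
    # For a WSC morphism, w(f(sigma)) divides w(sigma), so the ratio lies in R.
    assert ratio.denominator == 1
    return int(ratio), image
```

For a degenerate map (two vertices sent to the same image vertex) the function returns the zero chain, matching the second case of the definition.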
\begin{defn}[cf.\ {\cite[p.~234]{Dawson1990}}]
\label{boundary}
The \emph{weighted boundary map} $\partial_n: C_n(K)\to C_{n-1}(K)$ is the map: \[\partial_n(\sigma)=\sum_{i=0}^n\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^id_i(\sigma)\] where the \emph{face maps} $d_i$ are defined as: \[d_i(\sigma)=[v_0,\dots,\widehat{v_i},\dots,v_n]\qquad\text{(deleting the vertex $v_i$)}\] for any $n$-simplex $\sigma=[v_0,\dots,v_n]$.
Again, if $w(\sigma)\neq 0$, then $w(d_i(\sigma))\neq 0$ so $\partial_n$ is well-defined. Similarly, we identify $\frac{w(\sigma)}{w(d_i(\sigma))}\in S^{-1}R$ with the corresponding element in $R$ as described in Remark \ref{identifyfrac}.
\end{defn}
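The weighted boundary formula can likewise be sketched in code. The fragment below is our own illustration (same tuple/dictionary conventions as above, not the authors' code); it applies $\partial(\sigma)=\sum_i \frac{w(\sigma)}{w(d_i\sigma)}(-1)^i d_i(\sigma)$ linearly to a chain.

```python
from fractions import Fraction
from collections import defaultdict

def boundary(chain, w):
    """Weighted boundary of a chain.

    chain: dict mapping simplices (vertex tuples) to integer coefficients
    w:     weight function, dict from frozensets of vertices to nonzero ints
    """
    out = defaultdict(int)
    for sigma, c in chain.items():
        for i in range(len(sigma)):
            face = sigma[:i] + sigma[i + 1:]          # d_i: delete vertex v_i
            ratio = Fraction(w[frozenset(sigma)], w[frozenset(face)])
            assert ratio.denominator == 1             # w(d_i sigma) | w(sigma)
            out[face] += c * (-1) ** i * int(ratio)
    return {s: c for s, c in out.items() if c}
```

With a product weighting built from vertex weights, one can check numerically that applying `boundary` twice to a $2$-simplex gives the zero chain, in line with the proposition that follows.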
Next we show that, after generalizing to weights in an integral domain, the weighted boundary map still satisfies the relation $\partial^2=0$ of \cite[p.~234]{Dawson1990}.
\begin{prop}[cf.\ {\cite[p.~234]{Dawson1990}}]
$\partial^2=0$. To be precise, the composition $C_n(K)\xrightarrow{\partial_n}C_{n-1}(K)\xrightarrow{\partial_{n-1}}C_{n-2}(K)$ is the zero map.
\end{prop}
\begin{proof}
Let $\sigma=[v_0,\dots,v_n]$ be an $n$-simplex. We have \[\partial_n(\sigma)=\sum_{i=0}^n\frac{w(\sigma)}{w([v_0,\dots,\widehat{v_i},\dots,v_n])}(-1)^i[v_0,\dots,\widehat{v_i},\dots,v_n].\] Hence
\begin{align*}
&\partial_{n-1}\partial_n(\sigma)\\
&=\sum_{j<i}\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^i\frac{w(d_i(\sigma))}{w([v_0,\dots,\widehat{v_j},\dots,\widehat{v_i},\dots,v_n])}(-1)^j[v_0,\dots,\widehat{v_j},\dots,\widehat{v_i},\dots,v_n]\\
&\quad+\sum_{j>i}\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^i\frac{w(d_i(\sigma))}{w([v_0,\dots,\widehat{v_i},\dots,\widehat{v_j},\dots,v_n])}(-1)^{j-1}[v_0,\dots,\widehat{v_i},\dots,\widehat{v_j},\dots,v_n]\\
&=\sum_{j<i}\frac{w(\sigma)}{w([v_0,\dots,\widehat{v_j},\dots,\widehat{v_i},\dots,v_n])}(-1)^{i+j}[v_0,\dots,\widehat{v_j},\dots,\widehat{v_i},\dots,v_n]\\
&\quad+\sum_{j>i}\frac{w(\sigma)}{w([v_0,\dots,\widehat{v_i},\dots,\widehat{v_j},\dots,v_n])}(-1)^{i+j-1}[v_0,\dots,\widehat{v_i},\dots,\widehat{v_j},\dots,v_n]\\
&=0.
\end{align*}
The latter two summations cancel since after switching $i$ and $j$ in the second sum, it becomes the additive inverse of the first.
\end{proof}
\begin{lemma}
\label{fdcommute}
Let $f: K\to L$ be a simplicial map and $d_i$ be the $i$th face map. Then
\begin{equation}d_i(f(\sigma))=f(d_i(\sigma))
\end{equation}
for all $\sigma=[v_0,v_1,\dots,v_n]\in K$ such that $f(v_0),\dots,f(v_n)$ are distinct.
\end{lemma}
\begin{proof}
Let $\sigma=[v_0,\dots,v_n]$. Then we have
\begin{align*}
d_i(f(\sigma))&=d_i[f(v_0),\dots,f(v_n)]\\
&=[f(v_0),\dots,\widehat{f(v_i)},\dots,f(v_n)]\\
&=f([v_0,\dots,\widehat{v_i},\dots,v_n])\\
&=f(d_i(\sigma)).
\end{align*}
\end{proof}
\begin{prop}
Let $f:K\to L$ be a simplicial map. Then $f_\sharp\partial=\partial f_\sharp$.
\end{prop}
\begin{proof}
Let $\sigma=[v_0,\dots,v_n]\in C_n(K)$. Let $\tau$ be the simplex of $L$ spanned by $f(v_0),\dots,f(v_n)$. We consider three cases.
\begin{description}
\item[Case 1. $\dim \tau=n$]
In this case, the vertices $f(v_0),\dots,f(v_n)$ are distinct. We have
\begin{align*}
f_\sharp\partial(\sigma)&=f_\sharp\left(\sum_{i=0}^n\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^id_i(\sigma)\right)\\
&=\sum_{i=0}^n\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^i f_\sharp(d_i(\sigma))\\
&=\sum_{i=0}^n\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^i\frac{w(d_i(\sigma))}{w(f(d_i(\sigma)))}f(d_i(\sigma))\\
&=\sum_{i=0}^n\frac{w(\sigma)}{w(f(d_i(\sigma)))}(-1)^if(d_i(\sigma)).
\end{align*}
On the other hand, we have
\begin{align*}
\partial f_\sharp(\sigma)&=\partial\left(\frac{w(\sigma)}{w(f(\sigma))}f(\sigma)\right)\\
&=\sum_{i=0}^n\frac{w(\sigma)}{w(f(\sigma))}\cdot\frac{w(f(\sigma))}{w(d_i(f(\sigma)))}(-1)^id_i(f(\sigma))\\
&=\sum_{i=0}^n\frac{w(\sigma)}{w(d_i(f(\sigma)))}(-1)^id_i(f(\sigma))\\
&=f_\sharp\partial(\sigma)
\end{align*}
since $d_i(f(\sigma))=f(d_i(\sigma))$ by Lemma \ref{fdcommute}.
\item[Case 2. $\dim\tau\leq n-2$]
In this case, $f_\sharp(d_i(\sigma))=0$ for all $i$, since at least two of the points $f(v_0),\dots,f(v_{i-1}),f(v_{i+1}),\dots,f(v_n)$ are the same. Thus $f_\sharp\partial(\sigma)$ vanishes. Note that $\partial f_\sharp(\sigma)$ also vanishes, since $f_\sharp(\sigma)=0$ because $f(v_0),\dots,f(v_n)$ are not distinct.
\item[Case 3. $\dim\tau=n-1$] WLOG we may assume that the vertices are ordered so that $f(v_0)=f(v_1)$ and $f(v_1),\dots,f(v_n)$ are distinct. Then $f_\sharp(\sigma)=0$, so $\partial f_\sharp(\sigma)$ vanishes. Now, \[f_\sharp\partial(\sigma)=\sum_{i=0}^n\frac{w(\sigma)}{w(d_i(\sigma))}(-1)^i f_\sharp(d_i(\sigma))\] has only two nonzero terms (those for $i=0$ and $i=1$; for $i\geq 2$ the face $d_i(\sigma)$ still contains both $v_0$ and $v_1$), which sum up to
\begin{align*}
&\frac{w(\sigma)}{w(d_0(\sigma))}\cdot\frac{w(d_0(\sigma))}{w(f(d_0(\sigma)))}f(d_0(\sigma))-\frac{w(\sigma)}{w(d_1(\sigma))}\cdot\frac{w(d_1(\sigma))}{w(f(d_1(\sigma)))}f(d_1(\sigma))\\
=&\frac{w(\sigma)}{w(f(d_0(\sigma)))}f(d_0(\sigma))-\frac{w(\sigma)}{w(f(d_1(\sigma)))}f(d_1(\sigma)).
\end{align*}
Since $f(v_0)=f(v_1)$, we have $f(d_0(\sigma))=f(d_1(\sigma))$ and hence the two terms cancel each other as desired.
\end{description}
\end{proof}
\begin{defn}
We define the weighted homology group
\begin{equation}
H_n(K,w):=\ker(\partial_n)/\Ima(\partial_{n+1}),
\end{equation}
where $\partial_n$ is the weighted boundary map defined in Definition \ref{boundary}.
\end{defn}
Since the maps $f_\sharp:C_n(K,w_K)\to C_n(L,w_L)$ satisfy $f_\sharp\partial=\partial f_\sharp$, the $f_\sharp$'s define a chain map from the chain complex of $(K,w_K)$ to that of $(L,w_L)$. By Proposition \ref{chainmapinduce}, $f_\sharp$ induces a homomorphism $f_*: H_n(K,w_K)\to H_n(L,w_L)$. We may then view the map $(K,w_K)\mapsto H_n(K,w_K)$ as a functor $H_n: \textbf{WSC}\to \textbf{R-Mod}$ from the category of weighted simplicial complexes (\textbf{WSC}) to the category of $R$-modules (\textbf{R-Mod}).
\subsection{Calculation of Homology Groups in WSC}
The homology functor we define is different from the standard simplicial homology functor. For instance, it is possible for $H_0$ of a weighted simplicial complex to have torsion when the coefficient ring is $\mathbb{Z}$, as shown in \cite[p.~237]{Dawson1990}. We illustrate this more generally in the following example.
\begin{figure}
\caption{Simplicial complex with 3 vertices $x$, $y$, $z$.}
\label{torsioneg}
\end{figure}
\begin{eg}[cf.\ {\cite[p.~237]{Dawson1990}}]
Let $R=\mathbb{Z}$. Let $(K,w)$ be the WSC shown in Figure \ref{torsioneg}, where $w$ is the product weighting determined by the vertex weights $w(x)=1$, $w(y)=n$ and $w(z)=1$ for an integer $n\geq 2$. Then
\begin{align*}
\partial_1([x,y])&=\frac{w([x,y])}{w(y)}y-\frac{w([x,y])}{w(x)}x\\
&=\frac{n}{n}y-\frac{n}{1}x\\
&=y-nx.
\end{align*}
Similarly, $\partial_1([y,z])=nz-y$. Thus
\begin{align*}
H_0(K,w)&=\ker\partial_0/\Ima\partial_1\\
&\cong\langle x,y,z\mid nx=y, y=nz\rangle\\
&\cong\langle x,z\mid nx=nz\rangle\\
&\cong\mathbb{Z}\oplus\mathbb{Z}_n.
\end{align*}
\end{eg}
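The torsion in this example can be checked mechanically. The sketch below is our own illustration (not part of the paper): a naive Smith-normal-form reduction over $\mathbb{Z}$, applied to the matrix of $\partial_1$ with columns $\partial_1([x,y])=y-nx$ and $\partial_1([y,z])=nz-y$ in the basis $x,y,z$. For $n=4$ the invariant factors are $(1,4)$, so $H_0\cong\mathbb{Z}^{3-2}\oplus\mathbb{Z}_4=\mathbb{Z}\oplus\mathbb{Z}_4$.

```python
def smith_invariant_factors(M):
    """Invariant factors (diagonal of the Smith normal form) of an
    integer matrix M, given as a list of rows. Naive reduction, fine
    for the small matrices appearing in these examples."""
    A = [row[:] for row in M]
    m, n = len(A), (len(A[0]) if A else 0)
    factors, t = [], 0
    while t < min(m, n):
        # choose the nonzero entry of smallest absolute value as pivot
        piv = min(((abs(A[i][j]), i, j)
                   for i in range(t, m) for j in range(t, n) if A[i][j]),
                  default=None)
        if piv is None:
            break
        _, pi, pj = piv
        A[t], A[pi] = A[pi], A[t]
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        p = A[t][t]
        # one round of Euclidean reduction on row t and column t
        for i in range(t + 1, m):
            q = A[i][t] // p
            A[i] = [a - q * b for a, b in zip(A[i], A[t])]
        for j in range(t + 1, n):
            q = A[t][j] // p
            for row in A:
                row[j] -= q * row[t]
        if any(A[i][t] for i in range(t + 1, m)) or \
           any(A[t][j] for j in range(t + 1, n)):
            continue                     # remainders left: repeat with smaller pivot
        # enforce divisibility of the pivot into the remaining submatrix
        bad = next(((i, j) for i in range(t + 1, m) for j in range(t + 1, n)
                    if A[i][j] % p), None)
        if bad:
            A[t] = [a + b for a, b in zip(A[t], A[bad[0]])]
            continue
        factors.append(abs(p))
        t += 1
    return factors
```

The free rank of $H_0$ is the number of generators minus the number of invariant factors, and each invariant factor $d_i>1$ contributes a torsion summand $\mathbb{Z}_{d_i}$.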
\begin{prop}[{cf.\ \cite[p.~239]{Dawson1990}}]
For the constant weighting $(K,a)$, $a\in R\setminus\{0\}$, the weighted homology functor is the same as the standard simplicial homology functor.
\end{prop}
\begin{proof}
If every simplex has weight $a\in R\setminus\{0\}$, note that the chain maps in Definition \ref{chain} and the weighted boundary maps in Definition \ref{boundary} reduce to the usual ones in standard simplicial homology. Hence the resulting weighted homology functor reduces to the standard one.
\end{proof}
\section{Weighted Persistent Homology}
After defining weighted homology, we proceed to define weighted persistent homology, following the example of the seminal paper by Zomorodian and Carlsson \cite{Zomorodian2005}. First, we give a review of persistence \cite{Zomorodian2005,Ghrist2008}, with generalizations to the weighted case.
\subsection{Persistence}
\begin{defn}
A \emph{weighted filtered complex} is an increasing sequence of weighted simplicial complexes $(\mathcal{K},w)=\{(K^i,w)\}_{i\geq 0}$, such that $K^i\subseteq K^{i+1}$ for all integers $i\geq 0$. (The weighting on $K^i$ is a restriction of that on $K^j$ for $i<j$.)
\end{defn}
Given a weighted filtered complex, for the $i$th complex $K^i$ we define the associated weighted boundary maps $\partial_k^i$ and groups $C_k^i, Z_k^i, B_k^i, H_k^i$ for all integers $i,k\geq 0$, following our development in Section \ref{homology}.
\begin{defn}
The \emph{weighted boundary map} $\partial_k^i: C_k(K^i)\to C_{k-1}(K^i)$ is the map $\partial_k: C_k(K^i)\to C_{k-1}(K^i)$ as defined in Definition \ref{boundary}. The \emph{weighted chain group} $C_k^i$ is the group $C_k(K^i,w)$ in Definition \ref{chaingroup}. The \emph{weighted cycle group} $Z_k^i$ is the group $\ker (\partial_k^i)$, while the \emph{weighted boundary group} $B_k^i$ is the group $\Ima(\partial_{k+1}^i)$. The \emph{weighted homology group} $H_k^i$ is the quotient group $Z_k^i/B_k^i$. (If the context is clear, we may omit the adjective ``weighted''.)
\end{defn}
\begin{defn}[{cf.\ \cite[p.~6]{Zomorodian2005}}]
The weighted \emph{$p$-persistent $k$th homology group} of $(\mathcal{K},w)=\{(K^i,w)\}_{i\geq 0}$ is defined as
\begin{equation}
H_k^{i,p}(\mathcal{K},w):=Z_k^i/(B_k^{i+p}\cap Z_k^i).
\end{equation}
If the coefficient ring $R$ is a PID and all the $K^i$ are finite simplicial complexes, then $H_k^{i,p}$ is a finitely generated module over a PID. We can then define the \emph{$p$-persistent $k$th Betti number} of $(K^i,w)$, denoted by $\beta_k^{i,p}$, to be the rank of the free submodule of $H_k^{i,p}$. This is well-defined by the structure theorem for finitely generated modules over a PID.
\end{defn}
Consider the homomorphism $\eta_k^{i,p}: H_k^i\to H_k^{i+p}$ that maps a homology class in $H_k^i$ to the class in $H_k^{i+p}$ containing it. To be precise,
\begin{equation}
\eta_k^{i,p}(\alpha+B_k^i)=\alpha+B_k^{i+p}.
\end{equation}
The homomorphism $\eta_k^{i,p}$ is well-defined since if $\alpha_1+B_k^i=\alpha_2+B_k^i$, then $\alpha_1-\alpha_2\in B_k^i\subseteq B_k^{i+p}$.
We prove that, as in the unweighted case (cf.\ \cite{Edelsbrunner2002,Zomorodian2005,Zomorodi2005}), we have $\Ima \eta_k^{i,p}\cong H_k^{i,p}$.
\begin{prop}[cf.\ {\cite[p.~6]{Zomorodian2005}}]
$\Ima \eta_k^{i,p}\cong H_k^{i,p}$.
\end{prop}
\begin{proof}
By the first isomorphism theorem, we have \[\Ima \eta_k^{i,p}\cong H_k^i/\ker \eta_k^{i,p}.\]
Note that
\begin{equation}
\begin{split}
&\alpha+B_k^i\in\ker\eta_k^{i,p}\\
&\iff \alpha+B_k^{i+p}=B_k^{i+p}\ \text{and}\ \alpha\in Z_k^i\\
&\iff\alpha\in B_k^{i+p}\cap Z_k^i\\
&\iff\alpha+B_k^i\in(B_k^{i+p}\cap Z_k^i)/B_k^i.
\end{split}
\end{equation} Hence \[\ker\eta_k^{i,p}=(B_k^{i+p}\cap Z_k^i)/B_k^i.\]
Therefore we have
\begin{align*}
\Ima \eta_k^{i,p}&\cong H_k^i/\ker\eta_k^{i,p}\\
&=\frac{Z_k^i/B_k^i}{(B_k^{i+p}\cap Z_k^i)/B_k^i}\\
&\cong Z_k^i/(B_k^{i+p}\cap Z_k^i) \tag{by the third isomorphism theorem}\\
&=H_k^{i,p}.
\end{align*}
\end{proof}
\section{Applications}
Weighted persistent homology can tell apart filtrations that ordinary persistent homology does not distinguish. For instance, if there is a special point, weighted persistent homology can tell when a cycle containing the point is formed or has disappeared. This is a generalization of the main feature of persistent homology which is to detect the ``birth'' and ``death'' of cycles. We illustrate this in the following example.
\begin{eg}
\label{eg:finalfigure}
\begin{figure}
\caption{$K^0$}
\caption{$K^1$}
\caption{$K^2$}
\caption{$K^3$}
\caption{The filtration $\mathcal{K}=\{K^0,K^1,K^2,K^3\}$.}
\label{fig:filtK}
\end{figure}
\begin{figure}
\caption{$L^0$}
\caption{$L^1$}
\caption{$L^2$}
\caption{$L^3$}
\caption{The filtration $\mathcal{L}=\{L^0,L^1,L^2,L^3\}$.}
\label{fig:filtL}
\end{figure}
Consider the two filtrations shown in Figures \ref{fig:filtK} and \ref{fig:filtL}. By symmetry, it is clear that the (unweighted) persistent homology groups of the two filtrations will be the same.
Suppose we consider $v_2$ as a special point and wish to tell through weighted persistent homology whether a 1-cycle containing $v_2$ is formed or has disappeared. We can achieve this by choosing $R=\mathbb{Z}$ and the following weight function $w$: all 2-dimensional (and higher) simplices containing $v_2$ have weight 2, while all other simplices have weight 1. In our example, this means $w([v_0,v_2,v_3])=2$ while $w(\sigma)=1$ for all $\sigma\neq [v_0,v_2,v_3]$.
Then for the filtration $\mathcal{K}=\{K^0,K^1,K^2,K^3\}$ we have
\begin{align*}
Z_1^1&=\ker(\partial_1^1)=\langle[v_0,v_1]-[v_0,v_3]+[v_1,v_3]\rangle\\
\partial_2^3([v_0,v_1,v_3])&=[v_1,v_3]-[v_0,v_3]+[v_0,v_1]\\
\partial_2^1&=\partial_2^2=0.
\end{align*}
Hence, we have \[H_1^{1,p}(\mathcal{K},w)=\begin{cases}\mathbb{Z} &\text{for}\ p=0,1\\
0 &\text{for}\ p=2.
\end{cases}\]
However for the filtration $\mathcal{L}=\{L^0,L^1,L^2,L^3\}$ we have
\begin{align*}
Z_1^1&=\ker(\partial_1^1)=\langle [v_2,v_3]-[v_0,v_3]+[v_0,v_2]\rangle\\
\partial_2^3([v_0,v_2,v_3])&=2[v_2,v_3]-2[v_0,v_3]+2[v_0,v_2]\\
\partial_2^1&=\partial_2^2=0
\end{align*}
so that
\begin{equation}
\label{torsionegeqn}
H_1^{1,p}(\mathcal{L},w)=\begin{cases}
\mathbb{Z} &\text{for}\ p=0,1\\
\mathbb{Z}_2 &\text{for}\ p=2.
\end{cases}
\end{equation}
\end{eg}
Referring to Equation (\ref{torsionegeqn}), we can interpret the presence of torsion in $H_1^{1,2}(\mathcal{L},w)$ to mean that a 1-cycle containing $v_2$ is formed in $L^1$, persists in $L^2$, and disappears in $L^3$.
\begin{remark}
Let $R=\mathbb{Z}$. Generalizing Example \ref{eg:finalfigure}, if there is a special point $v$, we can tell whether a $k$-cycle containing $v$ is formed or has disappeared by assigning weight $m\geq 2$ to all $(k+1)$-dimensional and higher simplices containing $v$, and weight 1 to all other simplices.
\end{remark}
\subsection{Algorithm for PIDs}
For coefficients in a PID, we show that the weighted persistent homology groups are computable. In the seminal paper \cite{Zomorodian2005}, Zomorodian and Carlsson give an algorithm for computing persistent homology over a PID. The algorithm we present in this section is a weighted modification of theirs, based on the reduction algorithm. We use Figure \ref{fig:filtL} as a running example.
Let $R$ be a PID. We represent the weighted boundary operator $\partial_n: C_n(K,w)\to C_{n-1}(K,w)$ relative to the standard bases of the respective weighted chain groups as a matrix $M_n$ with entries in $R$; here the standard basis for $C_n(K,w)$ is the set of $n$-simplices of $K$ with nonzero weight (see Definition \ref{chaingroup}). The matrix $M_n$ is called the \emph{standard matrix representation} of $\partial_n$. It has $m_n$ columns and $m_{n-1}$ rows, where $m_n$ and $m_{n-1}$ are the numbers of $n$- and $(n-1)$-simplices with nonzero weight respectively.
In general, due to the weights, the matrix $M_n$ for the weighted boundary map is \emph{different} from that of the unweighted case. For instance, for the unweighted case the matrix representation is restricted to having entries in $\{-1_R,0_R,1_R\}$, while the weighted matrix representation can have entries taking arbitrary values in the ring $R$. In particular, when performing the reduction algorithm, we need to make the modification to allow the following \emph{elementary row operations} on $M_k$:
\begin{enumerate}
\item exchange row $i$ and row $j$,
\item multiply row $i$ by a unit $u\in R\setminus\{0\}$,
\item replace row $i$ by (row $i$)+$q$(row $j$), where $q\in R\setminus\{0\}$ and $j\neq i$.
\end{enumerate}
Note that for the unweighted case \cite[p.~5]{Zomorodian2005}, the second elementary row operation was ``multiply row $i$ by $-1$''. A similar modification is also needed for the \emph{elementary column operations}.
The subsequent steps are similar to those of the unweighted case (cf.\ \cite[pp.~5,12]{Zomorodian2005}). We summarize the algorithm (Algorithm \ref{alg:reduction}) and refer the reader to \cite[p.~5]{Zomorodian2005} for more information on the reduction algorithm and the Smith normal form.
Given a weighted filtered complex $\{(K^i,w)\}_{i\geq0}$, we write $M_k^i$ to denote the standard matrix representation of $\partial_k^i$. We perform the Algorithm \ref{alg:reduction} to obtain the weighted homology groups.
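As a concrete illustration of how such a matrix is assembled, the fragment below is our own sketch (not the authors' code; simplices are encoded as vertex tuples and weights as a dictionary on vertex sets). It builds the standard matrix representation column by column from the weighted boundary formula.

```python
from fractions import Fraction

def standard_matrix(n_simplices, faces, w):
    """Standard matrix representation M_n of the weighted boundary map.

    n_simplices: ordered list of n-simplices (vertex tuples, nonzero weight)
    faces:       ordered list of (n-1)-simplices indexing the rows
    w:           weight function, dict from frozensets of vertices to nonzero ints
    """
    row = {frozenset(f): r for r, f in enumerate(faces)}
    M = [[0] * len(n_simplices) for _ in faces]
    for col, sigma in enumerate(n_simplices):
        for i in range(len(sigma)):
            face = sigma[:i] + sigma[i + 1:]      # d_i: delete vertex v_i
            ratio = Fraction(w[frozenset(sigma)], w[frozenset(face)])
            M[row[frozenset(face)]][col] = (-1) ** i * int(ratio)
    return M
```

For the weighting of the running example ($w([v_0,v_2,v_3])=2$, all edges of weight 1), the single column for $[v_0,v_2,v_3]$ comes out as $(0,2,-2,2)^T$ in the row order $[v_0,v_1],[v_0,v_2],[v_0,v_3],[v_2,v_3]$, i.e.\ the matrix $M_2^3$ computed by hand below.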
\begin{algorithm}
\caption{Weighted Persistent Homology Algorithm for PIDs (cf.\ {\cite[p.~12]{Zomorodian2005}})}
\label{alg:reduction}
\begin{flushleft}
\textbf{Input}: Weighted filtered complex $(\mathcal{K},w)=\{(K^i,w)\}_{i\geq0}$
\textbf{Output}: Weighted $p$-persistent $k$th homology group $H_k^{i,p}(\mathcal{K},w)$
\end{flushleft}
\begin{enumerate}
\item Reduce the matrix $M_k^i$ to its Smith normal form and obtain a basis $\{z^j\}$ for $Z_k^i$.
\item Reduce the matrix $M_{k+1}^{i+p}$ to its Smith normal form and obtain a basis $\{b^l\}$ for $B_k^{i+p}$.
\item Let $A=[\{b^l\}\;\{z^j\}]=[B\;Z]$, i.e.\ the columns of matrix $A$ consist of the basis elements computed in the previous steps, with respect to the standard basis of $C_k(K^{i+p},w)$. We reduce $A$ to its Smith normal form to find a basis $\{a^q\}$ for its nullspace.
\item Each $a^q=[\alpha^q\;\beta^q]$, where $\alpha^q$, $\beta^q$ are column vectors of coefficients of $\{b^l\}$, $\{z^j\}$ respectively. Since $Aa^q=B\alpha^q+Z\beta^q=0$, the element $B\alpha^q=-Z\beta^q$ belongs to the span of both bases $\{z^j\}$ and $\{b^l\}$. Hence, both $\{B\alpha^q\}$ and $\{Z\beta^q\}$ are bases for $B_k^{i,p}=B_k^{i+p}\cap Z_k^i$. Using either basis, we form the matrix $M_{k+1}^{i,p}$ whose columns express these basis elements in the standard basis of $C_k(K^{i+p},w)$. The number of columns of $M_{k+1}^{i,p}$ is the cardinality of the basis for $B_k^{i,p}$, while the number of rows is the cardinality of the standard basis for $C_k(K^{i+p},w)$.
\item We reduce $M_{k+1}^{i,p}$ to Smith normal form to read off the torsion coefficients of $H_k^{i,p}(\mathcal{K},w)$ and the rank of $B_k^{i,p}$.
\item The rank of the free submodule of $H_k^{i,p}(\mathcal{K},w)$ is the rank of $Z_k^i$ minus the rank of $B_k^{i,p}$.
\end{enumerate}
\end{algorithm}
We illustrate the algorithm using Example \ref{eg:reduction}.
\begin{eg}
\label{eg:reduction}
Consider the filtration $\mathcal{L}=\{L^0,L^1,L^2,L^3\}$ in Figure \ref{fig:filtL}. We have
\begin{equation}
\begin{split}
M^1_1&=\mleft[\begin{array}{c|ccc}
&[v_0,v_3] &[v_0,v_2] &[v_2,v_3]\\
\hline
v_0 &-1 &-1 &0\\
v_1 & 0 & 0 & 0\\
v_2 & 0 & 1 & -1\\
v_3 & 1 & 0 & 1
\end{array}
\mright]\\
&\xrightarrow{\text{reduce}}\mleft[\begin{array}{c|ccc}
&[v_0,v_3] &[v_0,v_2] &[v_2,v_3]-[v_0,v_3]+[v_0,v_2]\\
\hline
v_3-v_0 &1 &0 &0\\
v_2-v_0 & 0 &1 &0\\
v_1 & 0 &0 &0\\
v_2 & 0 & 0 & 0
\end{array}
\mright].
\end{split}
\end{equation}
Hence a basis for $Z_1^1$ is $\{[v_2,v_3]-[v_0,v_3]+[v_0,v_2]\}$.
\begin{equation}
\begin{split}
M_2^3&=\mleft[\begin{array}{c|c}
&[v_0,v_2,v_3]\\
\hline
[v_0,v_1] &0\\
{[}v_0,v_2] &2\\
{[}v_0,v_3] &-2\\
{[}v_2,v_3] &2
\end{array}
\mright]\\
&\xrightarrow{\text{reduce}}\mleft[\begin{array}{c|c}
&[v_0,v_2,v_3]\\
\hline
[v_0,v_2]-[v_0,v_3]+[v_2,v_3] &2\\
{[}v_0,v_3] &0\\
{[}v_2,v_3] &0\\
{[}v_0,v_1] &0
\end{array}
\mright].
\end{split}
\end{equation}
Hence a basis for $B_1^3$ is $\{2[v_0,v_2]-2[v_0,v_3]+2[v_2,v_3]\}$. Let $b=2[v_0,v_2]-2[v_0,v_3]+2[v_2,v_3]$ and $z=[v_0,v_2]-[v_0,v_3]+[v_2,v_3]$.
\begin{equation}
\begin{split}
A=[B\;Z]&=\mleft[\begin{array}{c|cc}
&b &z\\
\hline
{[}v_0,v_1] &0 &0\\
{[}v_0,v_2] &2 &1\\
{[}v_0,v_3] &-2 &-1\\
{[}v_2,v_3] &2 &1
\end{array}\mright]\\
&\xrightarrow{\text{reduce}}\mleft[\begin{array}{c|cc}
&z &b-2z\\
\hline
z & 1 &0\\
{[}v_0,v_3] &0 &0\\
{[}v_2,v_3] &0 &0\\
{[}v_0,v_1] &0 &0
\end{array}\mright].
\end{split}
\end{equation}
Hence a basis for the nullspace of $A$ is $\{b-2z\}$. In this context, a basis for $B_1^{1,2}$ is $\{B\alpha^q\}=\{b\}$. Hence we form a matrix
\begin{equation}
\label{eq:torsionmatrix}
\begin{split}
M_2^{1,2}&=\mleft[
\begin{array}{c|c}
&b\\
\hline
{[}v_0,v_1] &0\\
{[}v_0,v_2] &2\\
{[}v_0,v_3] &-2\\
{[}v_2,v_3] &2
\end{array}
\mright]\\
&\xrightarrow{\text{reduce}}\mleft[
\begin{array}{c|c}
&b\\
\hline
z &2\\
{[}v_0,v_1] &0\\
{[}v_0,v_3] &0\\
{[}v_2,v_3] &0
\end{array}
\mright].
\end{split}
\end{equation}
Since both $Z_1^1$ and $B_1^{1,2}$ have rank 1, the rank of the free part of $H_1^{1,2}(\mathcal{L},w)$ is $1-1=0$. We read off (\ref{eq:torsionmatrix}) and conclude that $H_1^{1,2}(\mathcal{L},w)=\mathbb{Z}_2$, which agrees with our previous computation in Example \ref{eg:finalfigure}.
\end{eg}
\end{document} | math | 48,781 |
\begin{document}
\title{Controlled photon emission and Raman transition experiments with a single trapped atom.}
\author{M.~P.~A. JONES, B. DARQUI\'{E}, J. BEUGNON, J. DINGJAN, S. BERGAMINI, Y. SORTAIS, G. MESSIN, A. BROWAEYS and P. GRANGIER}
\address{Laboratoire Charles Fabry de l'Institut d'Optique \\
B\^{a}t. 503 Centre Universitaire \\
91403 Orsay, France\\
E-mail: [email protected]}
\maketitle
\abstracts{We present recent results on the coherent control of an optical transition in a single rubidium atom, trapped
in an optical tweezer. We excite the atom using resonant light pulses that are short (4\,ns) compared with the lifetime
of the excited state (26\,ns). By varying the intensity of the laser pulses, we can observe an adjustable number of Rabi
oscillations, followed by free decay once the light is switched off. To generate the
pulses we have developed a novel laser system based on frequency doubling a telecoms laser diode at 1560\,nm. By setting
the laser intensity to make a $\pi$ pulse, we use this coherent control to make a high quality triggered source of
single photons. We obtain an average single photon rate of $9600\,\mathrm{s}^{-1}$ at the detector. Measurements of
the second-order temporal correlation function show almost perfect antibunching at zero delay. In addition,
we present preliminary results on the use of Raman transitions to couple the two hyperfine levels of the ground state of
our trapped atom. This will allow us to prepare and control a qubit formed by two hyperfine sub-levels. }
\section{Introduction}
In order to use a particular physical system for quantum computation, it is necessary to be able to perform single-qubit
operations such as rotations, and two-qubit operations such as controlled-not gates. These two basic building blocks
can then be concatenated to realise any other desired logical operation.
Neutral atoms have been proposed as a candidate physical system for quantum information processing. For example, the
alkali atoms have two hyperfine levels in the ground state which can be used to make a qubit with very long coherence
times. Single-qubit operations can be realised by using microwaves to drive the hyperfine transition directly, or by
using a two-photon Raman transition. Recently, addressable single-qubit operations have been successfully demonstrated on
a ``quantum register'' of trapped atoms using microwaves\cite{meschede}.
So far, individually addressed two-qubit gates have not been demonstrated with neutral atoms.
Deterministic gates generally require a strong interaction between the particles that are used to carry the physical
qubits\cite{Zoller},
such as the Coulomb interaction between trapped ions. Promising results have been obtained on entangling neutral atoms
using cold controlled collisions in an optical lattice\cite{bloch}, but the single-qubit operations are difficult to perform in such a system.
Another approach is to bypass the requirement for a direct
interaction between the qubits, and use instead an interference
effect and a measurement-induced state projection to create the
desired operation\cite{KLM,Dowling}. An interesting recent
development of this idea is to use photon detection events for
creating entangled states of two atoms\cite{protsenko02b,simon,duan03,barrett05}. This provides ``conditional''
quantum gates, where the success of the logical operation is
heralded by appropriate detection events. Such schemes can be
extended to realize a full controlled-not gate, or more generally to implement conditional unitary
operations\cite{protsenko02b,Beige}. These proposals require the controlled emission of indistinguishable photons by
the two atoms.
In this paper we describe our recent experiments\cite{our_science} on triggered single-photon emission from a single
rubidium atom trapped at
the focal point of a high-numerical-aperture lens (N.A. = $0.7$). We show that we have full control of the optical
transition by observing Rabi oscillations. Secondly, we describe preliminary results on the use of Raman transitions to
couple the two ground state hyperfine levels of the trapped atom with a view to performing single-qubit operations.
\section{Trapping single atoms}
We trap the single rubidium 87 atom at the focus of the lens using a far-detuned optical dipole trap (810 nm), loaded
from an optical molasses. The same lens is also used to collect the fluorescence emitted by the atom (Fig. \ref{setup}),
which is then detected using a photon counting avalanche photodiode. The overall detection and collection efficiency
for the light emitted from the atom is measured to be $0.60 \pm 0.04\%$. A crucial feature of our experiment is the
existence of a ``collisional blockade'' mechanism\cite{Schlosser01,Schlosser02} which allows only one atom at a time to be stored in
the trap: if a second atom enters the trap, both are immediately ejected. In this regime the atom statistics are
sub-Poissonian and the trap contains either one or zero (and never two) atoms, with an average atom number of $0.5$. The
experimental apparatus is described in more detail in references\cite{Schlosser01,Schlosser02}.
\begin{figure}
\caption{Schematic of the experiment. The same lens
is used to focus the dipole trap and collect the fluorescence
light. The fluorescence is separated by a dichroic mirror and
imaged onto two photon counting avalanche photodiodes (APD),
placed after a beam-splitter (BS). The insert shows the relevant
hyperfine levels and Zeeman sub-levels of rubidium $87$. The cycling
transition is shown by the arrow.}
\label{setup}
\end{figure}
\section{Triggered single photon emission}
We excite the atom with $4$\,ns pulses of laser light, resonant with the \mbox{$S_{1/2},\, F=2 \rightarrow P_{3/2},\,
F^{\prime} = 3$} transition at $780.2$\,nm (see insert in figure \ref{setup}). These pulses are much shorter than the
$26$\,ns lifetime of the upper state.
The pulsed laser beam is $\sigma^+$-polarized with respect to the quantisation axis which is defined by a $4.2$\,G
magnetic field. The trapped atom is optically pumped into the $F=2,\, m_F =+2$ ground
state by the first few laser pulses. It then cycles on the \mbox{$F=2,\, m_F=+2 \rightarrow F^{\prime}=3,\, m_F^{\prime}=+3$}
transition, which forms a closed two-level system emitting $\sigma^+$-polarized photons. Impurities in the polarisation
of the pulsed laser beam with respect to the quantisation axis, together with the large bandwidth of the exciting pulse
($250$ MHz), result in off-resonant excitation to the $F'=2$ upper state, leading to possible de-excitation to the $F=1$
ground state. To counteract this, we add a repumping laser resonant with the $F=1
\rightarrow F' = 2$ transition. We have checked that our two-level description is still valid in the presence of the
repumper by analyzing the polarisation of the emitted single photons\cite{our_science}.
\begin{figure}
\caption{Schematic of the pulsed laser system. FC: Fibre Coupler}
\label{laser_system_diagram}
\end{figure}
To generate these short laser pulses, we have developed a novel laser system\cite{tech_paper}, shown in figure \ref{laser_system_diagram}, based on frequency
doubling pulses at the
1560 nm telecoms wavelength. The pulses are generated using an electro-optic
modulator to chop
the output of a continuous-wave diode laser. A commercial fibre amplifier is used to boost the peak power of the pulses
prior to the doubling stage. The laser, modulator, and amplifier are all standard telecommunications components.
The frequency doubling is performed in a single pass using a heated periodically-poled lithium niobate (PPLN) crystal. We
monitor the centre frequency using the fluorescence in a rubidium vapour cell, and tune the system via the temperature of the diode laser. The repetition rate of the
source is $5$ MHz, and we obtain peak powers of up to 12\,W.
\subsection*{Rabi oscillations}
For a two-level atom and exactly resonant square light pulses of fixed
duration $T$, the probability for an atom in the ground state to
be transferred to the excited state is $\sin^2(\Omega T/2)$, the
Rabi frequency $\Omega$ being proportional to the square root of
the power. Therefore the excited state population and hence the
fluorescence rate oscillates as the intensity is increased. To
observe these Rabi oscillations, we illuminate the trapped atom
with the laser pulses during $1$ ms. We keep the length of each laser
pulse fixed at $4$\,ns, with a repetition rate of $5$ MHz, and
measure the total fluorescence rate as a function of the laser
power. The Rabi oscillations are clearly visible on our results
(see Fig. \ref{inter}a). From the height of the first peak and the calibrated
detection efficiency, we derive a maximum
excitation efficiency per pulse of $95\pm 5\%$.
The reduction in the contrast of the oscillations at high laser
power is mostly due to fluctuations of the pulsed laser peak
power. This is shown by the theoretical curve in Fig. \ref{inter}a,
based on a simple two-level model. This model shows
that the $10\%$ relative intensity fluctuations that we measured on
the laser beam are enough to smear out the oscillations as
observed.
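This smearing is easy to reproduce in a toy two-level model. The sketch below is our own illustration (it is not the model of the paper; the $10\%$ relative power spread is the figure quoted above, everything else is an assumption): it averages the square-pulse excitation probability $\sin^2(\Omega T/2)$ over Gaussian peak-power fluctuations, with the pulse area scaling as the square root of the power.

```python
import math
import random

def mean_excited_population(theta, rel_sigma=0.10, samples=20000, seed=1):
    """Excited-state population after a resonant square pulse of nominal
    area theta, averaged over Gaussian peak-power fluctuations.

    The Rabi frequency is proportional to sqrt(power), so a relative power
    deviation delta stretches the pulse area by sqrt(1 + delta).
    Toy two-level model with an assumed 10% relative power spread.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        delta = rng.gauss(0.0, rel_sigma)
        area = theta * math.sqrt(max(0.0, 1.0 + delta))
        total += math.sin(area / 2.0) ** 2
    return total / samples
```

In this model the first maximum (a $\pi$ pulse) stays close to unity, while the higher-order maxima (e.g.\ a nominal $7\pi$ pulse) are strongly reduced, qualitatively reproducing the loss of contrast at high power.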
The behaviour of the atom in the time domain can be studied by
using time resolved photon counting techniques to record the
arrival times of the detected photons following the excitation
pulses, thus constructing a time spectrum. By adjusting the laser
pulse intensity, we observe an adjustable number of Rabi
oscillations during the duration of the pulse, followed by the
free decay of the atom once the laser has been turned off. The
effect of pulses close to $\pi$, $2\pi$ and $3\pi$ are displayed
as inserts on Fig. \ref{inter}a.
\subsection*{Single photon emission}
In order to use this system as a single photon source, the laser
power is set to realize a $\pi$ pulse. To maximise the number of
single photons emitted before the atom is heated out of the trap,
we use the following sequence. First, the presence of an atom in
the dipole trap is detected in real-time using its fluorescence
from the molasses light. Then, the magnetic field is switched on
and we trigger an experimental sequence that alternates
$115\,\mu$s periods of pulsed excitation with $885\,\mu$s periods
of cooling by the molasses light. After $100$ excitation/cooling cycles,
the magnetic field is switched off and the molasses is turned
back on, until a new atom is recaptured and the process begins
again. On average, three atoms are captured per second under these
conditions. The average count rate during the excitation is
$9600$~s$^{-1}$, with a peak rate of $29000$~s$^{-1}$.
To characterize the statistics of the emitted light, we measure
the second order temporal correlation function, using a Hanbury
Brown and Twiss (HBT) type set-up. This is done using the beam splitter
in the imaging system (Fig. \ref{setup}), which sends the fluorescence light
to two photon-counting avalanche photodiodes that are connected to
a high-resolution counting card in a
start-stop configuration (resolution of about 1 ns). The card is
gated so that only photons scattered during the periods
of pulsed excitation are counted, and the number of coincidence
events is measured as a function of delay. The histogram obtained
after $4$ hours of continuous operation is displayed in Fig. \ref{inter}b, and
shows a series of spikes separated by the period of the excitation
pulses ($200$ ns). The $1/e$ half width of the peaks is \mbox{$27 \pm 3$\,
ns}, in agreement with the lifetime of the upper state.
After correction for a small flat background due to stray laser light and dark counts, the integrated residual area
around zero delay is \mbox{$3.4\%\, \pm\, 1.2\%$} of the area of the other peaks. This is due to a small
probability for the atom to emit a photon {\it during} the pulse, and be re-excited and emit a second photon. From our calculation\cite{our_science}, the probability for the atom to emit two photons per pulse is 0.018.
\begin{figure}
\caption{a) Total count rate as a function of average power for a pulse length of 4\,ns. Rabi oscillations are clearly
visible. The inset shows the time resolved fluorescence signal for $\pi$, $2\pi$ and $3\pi$ pulses. b) Histogram of the
time delays in the HBT experiment. The peak at zero delay is absent, showing that the source is emitting single photons.
\label{inter}
\end{figure}
\section{Raman transitions - towards a qubit}
To drive Raman transitions between the two hyperfine levels, two phase-coherent laser beams with a frequency difference
equal to the hyperfine transition frequency are required. In our experiment we use the dipole trap itself as one of the
Raman beams. The large detuning minimises problems due to spontaneous emission during the Raman pulse. Due to the very
high intensity at the waist of the dipole trap, high two-photon Rabi frequencies can still be obtained. The second
Raman beam is generated using two additional diode lasers that are phase-locked to the dipole trap by injection
locking. The 6.8 \,GHz frequency offset is generated by modulating the current of one of the diode lasers at 3.4\,GHz.
To drive Raman transitions for qubit rotations, we superimpose the second Raman beam with the dipole trap beam. As the
beams are co-propagating, this makes the transition insensitive to the external state of the atom. The beam is linearly
polarised orthogonal to the dipole trap. With the quantisation axis defined by a 4.2\,G magnetic field as described
above, this means that we can drive $\pi /\sigma^{+}$ or $\pi /\sigma^{-}$ transitions.
We perform spectroscopy of the transitions as follows. The atom is prepared in the $F=1$ hyperfine level by switching
off the repump light 1\,ms before the molasses light. This process populates all of the magnetic sublevels. Then we
transfer the atom to the $F=2$ hyperfine level by pulsing on the second Raman beam. The population in the $F=2$ level is
detected using the fluorescence from a resonant probe beam. By measuring the fluorescence as a function of the frequency
difference between the two Raman beams we obtain spectra such as those shown in figure \ref{raman_peaks}a. The width of these
peaks is limited
by the length of the Raman pulse in each case.
\begin{figure}
\caption{a) Fluorescence as a function of detuning between the Raman beams with an applied magnetic field of 4.2\,G.
Zero
detuning corresponds to the hyperfine splitting in zero applied field. Two of the possible transitions between magnetic
sublevels are shown: \mbox{$F=1,m_{F}
\label{raman_peaks}
\end{figure}
We have also made a preliminary observation of Rabi oscillations. The detuning between the Raman lasers was set resonant
with the \mbox{$F=1,m_F=-1 \rightarrow F=2,m_F=-2$} transition, and we measured the population in $F=2$ as a function of the
pulse length. The results are shown in figure \ref{raman_peaks}b. The measured Rabi frequency is $\Omega = 2 \pi \times 65$\,kHz, with
a power of only 60\,nW in the second Raman beam. As we have 10\,mW available, we should be able to attain Rabi
frequencies in the MHz range.
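This projection follows from a simple scaling estimate, under the assumption that the two-photon Rabi frequency grows as the square root of the second Raman beam's power while the dipole-trap beam power stays fixed:

```python
import math

# Measured: two-photon Rabi frequency of 2*pi x 65 kHz with 60 nW in the second Raman beam
f_meas = 65e3      # Hz
p_meas = 60e-9     # W
p_avail = 10e-3    # W available in the second Raman beam

# Assumption: Omega scales as sqrt(P2) when only the second beam's power changes
f_scaled = f_meas * math.sqrt(p_avail / p_meas)
print(f"projected Rabi frequency ~ {f_scaled / 1e6:.0f} MHz")
```

The estimate lands in the tens of MHz, well inside the MHz range quoted above.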
\section{Conclusions and outlook}
We have realized a high quality source of single photons based on the coherent excitation of an optically trapped single atom. Preliminary
results on using Raman transitions to couple the two hyperfine ground states of
the trapped atom have also been obtained. In the future, we would like to extend these experiments to several atoms. In previous work we
have shown that we can create two-dimensional arrays of dipole traps separated by distances of several microns, each
containing a single atom\cite{slmpaper}. Our goal is to see whether we can realise single-qubit and two-qubit operations in such arrays.
\section*{Acknowledgments}
This work
was supported by the European Union through the IST/FET/QIPC project ``QGATES'' and
the Research Training Network ``CONQUEST''. M. Jones was supported by EU Marie Curie fellowship MEIF-CT-2004-009819.
\end{document} | math | 16,105 |
\begin{document}
\begin{abstract}
The main result is that the qc-scalar curvature of a seven dimensional quaternionic contact Einstein manifold is constant. In addition, we characterize qc-Einstein structures by the flatness of a certain vertical connection and develop their local structure equations. Finally, regular qc-Ricci flat structures are shown to fiber over hyper-K\"ahler manifolds.
\end{abstract}
\keywords{quaternionic contact structures, qc conformal flatness, qc
conformal curvature, Einstein structures}
\subjclass[2010]{58G30, 53C17}
\title[Quaternionic contact Einstein manifolds]
{Quaternionic contact Einstein manifolds}
\date{\today}
\author{Stefan Ivanov}
\address[Stefan Ivanov]{University of Sofia, Faculty of Mathematics and Informatics,
blvd. James Bourchier 5, 1164,
Sofia, Bulgaria} \email{[email protected]}
\author{Ivan Minchev}
\address[Ivan Minchev]{University of Sofia, Faculty of Mathematics and Informatics,
blvd. James Bourchier 5, 1164,
Sofia, Bulgaria}
\textbf {e}mail{[email protected]}
\author{Dimiter Vassilev}
\address[Dimiter Vassilev]{ Department of Mathematics and Statistics\\
University of New Mexico\\
Albuquerque, New Mexico, 87131-0001}
\textbf {e}mail{[email protected]}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
Following the work of Biquard \cite{Biq1} quaternionic contact (qc) manifolds describe the Carnot-Carath\'eodory geometry on the conformal boundary at infinity of quaternionic K\"ahler manifolds. The qc geometry also became a crucial geometric tool in finding the extremals and the best constant in the $L^2$ Folland-Stein Sobolev-type embedding on the quaternionic Heisenberg groups \cite{F2,FS}, see \cite{IMV,IMV2,IMV3}. An extensively studied class of quaternionic contact structures are provided by the 3-Sasakian manifolds. From the point of view of qc geometry, 3-Sasakian structures are qc manifolds whose torsion endomorphism of the Biquard connection vanishes. In turn, the latter property is equivalent to the qc structure being qc-Einstein, i.e., the trace-free part of the qc-Ricci tensor vanishes, see \cite{IMV}. The qc-scalar curvature of a 3-Sasakian manifold is a non-zero constant. Conversely, it was shown in \cite{IMV,IV2} that the Biquard torsion is the obstruction for a given qc structure to be locally 3-Sasakian provided the qc-scalar curvature $Scal$ is a non zero constant. Furthermore, as a consequence of the Bianchi identities, \cite[Theorem 4.9]{IMV} shows that the qc-scalar curvature of a qc-Einstein manifold of dimension at least eleven is constant while the seven dimensional case was left open.
The main purpose of this paper is to show that the qc-scalar curvature of a seven dimensional qc-Einstein manifold is constant, i.e., to prove the following
\begin{thrm}\label{t:main}
If $M$ is a qc-Einstein qc manifold of dimension seven, then the qc-scalar curvature is constant, $Scal=const$.
\end{thrm}
The proof of Theorem~\ref{t:main} makes use of the qc-conformal curvature tensor \cite{IV1}, which characterizes locally qc conformally flat structures, a result of Kulkarni \cite{Kul} on algebraic properties of curvature tensors in four dimensions, and an extension of \cite[Theorem~1.21]{IMV} which describes explicitly all qc-Einstein structures defined on open sets of the quaternionic Heisenberg group that are point-wise qc-conformal to the standard flat qc structure on the quaternionic Heisenberg groups. The main application of Theorem \ref{t:main} is the removal of the a-priori assumption of constancy of the qc-scalar curvature in some previous papers concerning seven dimensional qc-Einstein manifolds, see for example Corollaries \ref{c:vert int}, \ref{c:4-form} and \ref{c:3-sasakian}.
The remaining parts of this paper are motivated by known properties of qc-Einstein manifolds with non-vanishing qc-scalar curvature, in that we prove corresponding results in the case of vanishing qc-scalar curvature. With this goal in mind and because of its independent interest, in Section~\ref{three} we define a connection on the canonical three dimensional vertical distribution of a qc manifold. We show that qc-Einstein spaces can be characterized by the flatness of this vertical connection. This allows us to write the structure equations of a qc-Einstein manifold in terms of the defining 1-forms, their exterior derivatives and the qc-scalar curvature, see Theorem~\ref{str_eq_mod_th}. The latter extends the results of \cite{IV2} and \cite[Section 4.4.2]{IV3} to the vanishing qc-scalar curvature case.
Recall that complete and regular 3-Sasakian and $nS$-spaces
(called negative 3-Sasakian here) have a canonical fibering with
fiber $Sp(1)$ or $SO(3)$, and base a quaternionic K\"ahler
manifold. This shows that if $S>0$ (resp. $S<0$), the qc-Einstein
manifolds are ``essentially'' $SO(3)$ bundles over quaternionic
K\"ahler manifolds with positive (resp. negative) scalar
curvature. In Section \ref{five} we show that in the ``regular''
case, similarly to the non-zero qc-scalar curvature cases, a
qc-Einstein manifold of zero scalar curvature fibers over a
hyper-K\"ahler manifold, see Proposition~\ref{t:hKquot}.
We conclude the paper with a brief section where we show that
every qc-Einstein manifold of non-zero scalar curvature carries
two Einstein metrics. Note that the corresponding result
concerning the 3-Sasakian case is well known, see \cite{BGN}. In
the negative qc-scalar curvature case both Einstein metrics are of
signature $(4n,3)$ of which the first is locally (negative)
3-Sasakian, while the second ``squashed'' metric is not 3-Sasakian,
see Proposition~\ref{p:einst m}.
\begin{conv}\label{conven}
Throughout the paper, unless explicitly stated otherwise, we will
use the following conventions.
\begin{enumerate}[a)]\label{e:notation}
\item The triple $(i,j,k)$ denotes any cyclic permutation of
$(1,2,3)$, while $s,t$ will denote any numbers from the set $\{1,2,3\}$.
\item For a decomposition $TM=V\oplus H$ we let
$[.]_V$ and $[.]_H$ be the corresponding projections to $V$ and $H$.
\item ${{A}}, {{B}}, {{C}}$, etc. will denote
sections of the tangent bundle of $M$, i.e., ${{A}},
{{B}}, {{C}}\in TM$.
\item $X,Y,Z,U$ will denote horizontal vector fields, i.e.,
$X,Y,Z,U\in H$.
\item $\xi,\xi',\xi''$ will denote vertical vector fields, i.e., $\xi,\xi',\xi''\in
V$.
\item $\{e_1,\dots,e_{4n}\}$ denotes an orthonormal basis of the
horizontal space $H$;
\item The summation convention over repeated vectors from the
basis $ \{e_1,\dots,e_{4n}\}$ is used. For example,
$k=P(e_b,e_a,e_a,e_b)$ means $
k=\sum_{a,b=1}^{4n}P(e_b,e_a,e_a,e_b). $
\end{enumerate}
\end{conv}
\textbf{Acknowledgments} The research is partially supported by
the Contract ``Idei", DID 02-39/21.12.2009. S.I and I.M. are
partially supported by the Contract 156/2013 with the University
of Sofia `St.Kl.Ohridski'.
\section{Preliminaries}\label{s:prelim}
It is well known that the sphere at infinity of a non-compact symmetric space $M$ of rank one carries a natural
Carnot-Carath\'eodory structure, see \cite{M,P}. Quaternionic contact (qc) structures were introduced by O. Biquard \cite{Biq1} and are modeled on the conformal boundary at infinity of the quaternionic hyperbolic space. Biquard showed that the infinite dimensional family \cite{LeB91} of complete quaternionic-K\"ahler deformations of the quaternion hyperbolic metric have conformal infinities which provide an infinite dimensional family of examples of qc structures. Conversely, according to \cite{Biq1} every real analytic qc structure on a manifold $M$ of dimension at least eleven is the conformal infinity of a unique quaternionic-K\"ahler metric defined in a neighborhood of $M$. Furthermore, \cite{Biq1} considered CR and qc structures as boundaries of infinity of Einstein metrics rather than only as boundaries at infinity of K\"ahler-Einstein and quaternionic-K\"ahler metrics, respectively. In fact, \cite{Biq1} showed that in each of the three hyperbolic cases (complex, quaternionic, octonionic) any small perturbation of the standard Carnot-Carath\'eodory structure on the boundary is the conformal infinity of an essentially unique Einstein metric on the unit ball, which is asymptotically symmetric.
We refer to \cite{Biq1}, \cite{IMV} and \cite{IV3} for a more detailed exposition of the definitions and properties of qc structures and the associated Biquard connection. Here, we recall briefly the relevant facts needed for this paper. A quaternionic contact (qc) manifold is a $4n+3$
-dimensional manifold $M$ with a codimension three distribution $H$ equipped with an $Sp(n)Sp(1)$ structure locally defined by an $\mathbb{R}^3$-valued 1-form $\eta=(\eta_1,\eta_2,\eta_3)$. Thus, $H=\cap_{s=1}^3 Ker\, \eta_s$
is equipped with a positive definite symmetric tensor $g$, called the horizontal metric, and a compatible rank-three bundle $\mathbb{Q}$
consisting of endomorphisms of $H$ locally generated by three orthogonal almost complex
structures $I_s$, $s=1,2,3$, satisfying the unit quaternion relations: (i) $I_1I_2=-I_2I_1=I_3, \quad $ $I_1I_2I_3=-id_{|_H}$; \hskip.1in (ii) $g(I_s.,I_s.)=g(.,.)$; and \hskip.1in (iii) the
compatibility conditions $2g(I_sX,Y)\ =\ d\eta_s(X,Y)$, $
X,Y\in H$ hold true.
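For concreteness, the unit quaternion relations (i)--(ii) can be checked on the flat model $H\cong\mathbb{H}$ (the case $n=1$), where $I_1,I_2,I_3$ act as left multiplication by the imaginary quaternions $i,j,k$. The following small numerical verification is purely illustrative:

```python
import numpy as np

# Left multiplication by i, j, k on quaternions x0 + x1*i + x2*j + x3*k,
# written as 4x4 real matrices in the basis (1, i, j, k).
I1 = np.array([[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]], float)
I2 = np.array([[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]], float)
I3 = np.array([[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]], float)
Id = np.eye(4)

assert np.allclose(I1 @ I2, I3) and np.allclose(I2 @ I1, -I3)  # I1 I2 = -I2 I1 = I3
assert np.allclose(I1 @ I2 @ I3, -Id)                          # I1 I2 I3 = -id
for I in (I1, I2, I3):
    assert np.allclose(I.T @ I, Id)   # each I_s is an isometry: g(I_s., I_s.) = g(.,.)
    assert np.allclose(I @ I, -Id)    # almost complex structure: I_s^2 = -id
print("unit quaternion relations verified")
```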
The transformations preserving a given quaternionic contact
structure $\eta$, i.e., $\bar\eta=\mu\Psi\eta$ for a positive smooth
function $\mu$ and an $SO(3)$ matrix $\Psi$ with smooth functions as
entries are called \emph{quaternionic contact conformal (qc-conformal) transformations}. If the function $\mu$ is constant, $\bar\eta$ is called \emph{qc-homothetic} to $\eta$, and in the case $\mu\equiv 1$ we call $\bar\eta$ \emph{qc-equivalent} to $\eta$. Notice that in the latter case, $\eta$ and $\bar\eta$ define the same qc structure. The qc conformal curvature tensor $W^{qc}$, introduced in \cite{IV1}, is the
obstruction for a qc structure to be locally qc conformal to the
standard 3-Sasakian structure on the $(4n+3)$-dimensional sphere \cite{IV1,IV3}.
Biquard showed that on a qc manifold of dimension at least eleven
there is a unique connection $\nabla$ with torsion $T$ and a
unique supplementary to $H$ in $TM$ subspace $V$, called the \emph{vertical space}, such that the following conditions are satisfied:
(i) $\nabla$ preserves the decomposition $H\oplus V$ and the $
Sp(n)Sp(1)$ structure on $H$, i.e., $\nabla g=0, \nabla\sigma \in\Gamma(
\mathbb{Q})$ for a section $\sigma\in\Gamma(\mathbb{Q})$, and its torsion on
$H$ is given by $T(X,Y)=-[X,Y]_{|V}$; \quad
(ii) for $\xi\in V$, the endomorphism $T_{\xi }=T(\xi ,\cdot ):H\rightarrow H$ of $H$ lies in $
(sp(n)\oplus sp(1))^{\bot}\subset gl(4n)$; \quad
(iii) the connection on $V$ is induced by the natural identification $
\varphi$ of $V$ with the subspace $sp(1)$ of the endomorphisms of $H$, i.e., $
\nabla\varphi=0$.
Furthermore, \cite{Biq1} also described the supplementary distribution $V$, which is (locally) generated by the so called Reeb vector fields $
\{\xi_1,\xi_2,\xi_3\}$ determined by
\begin{equation} \label{bi1}
\eta_s(\xi_t)=\delta_{st}, \qquad (\xi_s\lrcorner
d\eta_s)_{|H}=0,\quad (\xi_s\lrcorner d\eta_t)_{|H}=-(\xi_t\lrcorner
d\eta_s)_{|H},
\end{equation}
where $\lrcorner$ denotes the interior multiplication.
If the dimension of $M$ is seven, Duchemin showed in \cite{D} that if we assume, in addition, the
existence of Reeb vector fields as above, then Biquard's result holds. Henceforth, by a qc structure in dimension $7$ we shall mean a qc structure satisfying \eqref{bi1}. We shall call $\nabla$ \emph{the Biquard connection}.
Notice that equations \eqref{bi1} are invariant under the natural $SO(3)$
action. Using the triple of Reeb vector fields we extend the horizontal metric $g$ to a metric $h$ on $M$ by requiring $span\{\xi_1,\xi_2,\xi_3\}=V\perp H
\text{ and } h(\xi_s,\xi_t)=\delta_{st} $,
\begin{equation}\label{e:Riem-metric}
h|_H=g, \qquad h|_V= \eta_1\otimes\eta_1+ \eta_2\otimes\eta_2 + \eta_3\otimes\eta_3,\qquad h(\xi_s,X)=0.
\end{equation}
The Riemannian metric $h$ as well as the Biquard connection do not depend on the action of $SO(3)$ on $V$, but both change if $\textbf {e}ta$ is multiplied by a conformal factor \cite{IMV}.
The fundamental 2-forms $\omega_s$ and the { fundamental 4-form} $\Omega$ of the quaternionic structure $\mathbb{Q}$ are
defined, respectively, by
\[
2\omega_{s|H}\ =\ \, d\eta_{s|H},\quad \xi\lrcorner\omega_s=0,\qquad
\Omega=\omega_1\wedge\omega_1+\omega_2\wedge\omega_2+\omega_3\wedge\omega_3.
\]
\subsection{The torsion of the Biquard connection}
It was shown in \cite{Biq1} that the
torsion $T_{\xi }$ is completely trace-free, $tr\,T_{\xi }=tr\,T_{\xi }\circ
I_{s}=0$. Decomposing the endomorphism $T_{\xi }\in (sp(n)+sp(1))^{\perp }$
into its symmetric part $T_{\xi }^{0}$ and skew-symmetric part $b_{\xi
}$, $T_{\xi }=T_{\xi }^{0}+b_{\xi }$, we have $T_{\xi
_{i}}^{0}I_{i}=-I_{i}T_{\xi _{i}}^{0},\quad I_{2}(T_{\xi
_{2}}^{0})^{+--}=I_{1}(T_{\xi _{1}}^{0})^{-+-},\quad I_{3}(T_{\xi
_{3}}^{0})^{-+-}=I_{2}(T_{\xi _{2}}^{0})^{--+},\quad I_{1}(T_{\xi
_{1}}^{0})^{--+}=I_{3}(T_{\xi _{3}}^{0})^{+--}$, where the superscript $+++$
denotes the component commuting with all three $I_{i}$, $+--$ indicates the component commuting with $
I_{1} $ and anti-commuting with the other two, etc. Furthermore, the symmetric part
$T_\xi^0$ satisfies the identity
\begin{equation}\label{to}
g(T_\xi^0(X),Y)=\frac{1}{2}\mathcal{L}_{\xi}g(X,Y),\qquad \xi\in
V,\quad X,Y\in H,
\end{equation}
where $\mathcal{L}_{\xi}$ denotes the Lie derivative with respect to $\xi$. The skew-symmetric
part can be represented as $b_{\xi _{i}}=I_{i}u$, where $u$ is a traceless
symmetric (1,1)-tensor on $H$ which commutes with $I_{1},I_{2},I_{3}$.
Therefore we have $T_{\xi _{i}}=T_{\xi _{i}}^{0}+I_{i}u$. If $n=1$ then the
tensor $u$ vanishes identically, $u=0$, and the torsion is a symmetric
tensor, $T_{\xi }=T_{\xi }^{0}$.
Following \cite{IMV} we define the $Sp(n)Sp(1)$ components $T^0$ and $U$ of the torsion tensor by
\begin{gather*}
T^0(X,Y)\ {=}\ g((T_{\xi_1}^{0}I_1+T_{\xi_2}^{0}I_2+T_{
\xi_3}^{0}I_3)X,Y),\qquad
U(X,Y)\ {=}\ -g(uX,Y).
\end{gather*}
Then, as shown in \cite{IMV}, both $T^0$ and $U$ are trace-free,
symmetric and invariant under qc homothetic transformations.
Using the fixed horizontal metric $g$, we shall also denote by $T^0$ and $U$ the corresponding
endomorphisms of $H$, $g(T^0(X),Y)=T^0(X,Y)$ and $g(U(X),Y)=U(X,Y)$. The torsion of the Biquard connection $\nabla$
is described by the formulas \cite{Biq1} and \cite{IMV}
\begin{equation}\label{torsion}
\begin{aligned}
& T(X,Y) = -[X,Y]_V=2\sum_{s=1}^3\omega_s(X,Y)\xi_s,\qquad
T(\xi_s,X) = \frac{1}{4}(I_sT^0-T^0I_s)(X)+I_sU(X),\\
& \hskip 1in T(\xi_i,\xi_j) = -S\xi_k-[\xi_i,\xi_j]_H,
\end{aligned}
\end{equation}
where $[\xi_i,\xi_j]_H$ stands for the $H$-component of the
commutator of the vector fields $\xi_i$, $\xi_j$ and $S$ is the normalized qc-scalar curvature defined below.
\subsection{The curvature of the Biquard connection}
We denote by
$R=[\nabla,\nabla]-\nabla_{[,]}$ the curvature tensor of $\nabla$ and by the same letter
$R$ the curvature $(0,4)$-tensor
$R({A},{B},{C},{D}):=h(R_{{A},{B}}{C},{D}).$ The \emph{qc-Ricci tensor}, the \emph{qc-scalar
curvature}, and the three \emph{qc-Ricci 2-forms} are defined as follows,
\begin{equation}\label{neww}
Ric({A},{B})=R(e_a,{A},{B},e_a),\quad Scal=Ric(e_a,e_a),\quad
{\rho_s({A},{B})=\frac{1}{4n}R({A},{B},e_a,I_se_a)}.
\end{equation}
The \emph{normalized qc-scalar curvature} $S$ is defined by $8n(n+2)S=Scal$.
A fundamental fact, \cite[Theorem 3.12]{IMV}, is that
the torsion endomorphism determines the (horizontal) qc-Ricci tensor
and the (horizontal) qc-Ricci forms of the Biquard connection,
\begin{equation}\label{sixtyfour}
\begin{aligned}
& Ric(X,Y) \ =\ (2n+2)T^0(X,Y) +(4n+10)U(X,Y)+2(n+2)Sg(X,Y)\\
&\rho_s(X,I_sY) \ =\
-\frac12\Bigl[T^0(X,Y)+T^0(I_sX,I_sY)\Bigr]-2U(X,Y)-
Sg(X,Y).
\end{aligned}
\end{equation}
We say that $M$ is a \emph{qc-Einstein manifold} if the horizontal Ricci tensor is proportional to the horizontal metric $g$,
$$Ric(X,Y)=\frac{Scal}{4n}g(X,Y)=2(n+2)Sg(X,Y),$$
which taking into account \eqref{sixtyfour} is equivalent to $T^0=U=0$. Furthermore, by \cite[Theorem 4.9]{IMV} if $\dim(M)>7$ then any qc-Einstein structure has a constant qc-scalar curvature. It was left as an open question whether a qc-Einstein manifold of dimension seven has constant qc-scalar curvature. The main result of the current paper, Theorem \ref{t:main}, shows that this is indeed the case.
If the covariant derivatives with respect to $\nabla$ of the endomorphisms $I_s$, the fundamental 2-forms $\omega_s$, and the Reeb vector fields $\xi_s$ are given by
\begin{equation}\label{der}
\nabla I_i=-\alpha_j\otimes I_k+\alpha_k\otimes I_j,\qquad
\nabla\omega_i=-\alpha_j\otimes\omega_k+\alpha_k\otimes\omega_j,\qquad
\nabla\xi_i=-\alpha_j\otimes\xi_k+\alpha_k\otimes\xi_j,
\end{equation}
where $\alpha_1,\alpha_2, \alpha_3$ are the local connection 1-forms, then
\cite{Biq1} proved that $\alpha_i(X)=d\eta_k(\xi_j,X)=-d\eta_j(\xi_k,X)\quad \text{for all}\quad X\in H$. {On the other hand, as shown in \cite{IMV} the vertical and the $\mathfrak{ sp}(1)$
parts of the curvature endomorphism $R({A},{B})$ are related to the $\mathfrak{ sp}(1)$-connection 1-forms
$\alpha_s$ by }
\begin{equation} \label{sp1curv}
R({A},{B},\xi_i,\xi_j)=2\rho_k({A},{B})=(d\alpha_k+\alpha_i\wedge\alpha_j)({A},{B}).
\end{equation}
Finally, we have the following commutation relations \cite{IMV}
\begin{equation}\label{rrho}
R(B,C,I_iX,Y)+R(B,C,X,I_iY)=2\Big[-\rho_j(B,C)\omega_k(X,Y)+\rho_k(B,C)\omega_j(X,Y)\Big].
\end{equation}
In the next section we give the proof of our main result.
\section{Proof of Theorem~\ref{t:main}}
The proof of Theorem~\ref{t:main} is achieved with the help of the following Lemma \ref{lemman} where we calculate the curvature $R(Z,X,Y,V)$ at points where the horizontal gradient of the qc-scalar curvature does not vanish, $\nabla S\not=0$. The proof of Theorem~\ref{t:main} proceeds by showing that on any open set where $S$ is not locally constant $M$ is locally qc-conformally flat. In fact, on any open set where $\nabla S\not=0$ the qc-conformal curvature $W^{qc}$ defined in \cite{IV1} will be seen to vanish, hence by \cite[Theorem~1.2]{IV1} the qc manifold is locally qc-conformally flat. The final step involves a generalization of \cite [Theorem~1.1]{IMV}, which follows from the proof of \cite [Theorem~1.1]{IMV}, allowing the explicit description of all qc-Einstein structures defined on open sets of the quaternionic Heisenberg group that are point-wise qc-conformal to the standard flat qc structure on the quaternionic Heisenberg groups. It turns out that all such qc structures are of constant qc-scalar curvature, which allows the completion of the proof of Theorem \ref{t:main}.
\begin{lemma}\label{lemman}
On a seven dimensional qc-Einstein manifold we have the following formula for the horizontal curvature of the Biquard connection on any open set where the qc-scalar curvature is not constant,
\begin{equation}\label{n17}
R(Z,X,Y,V)=2S\Big[g(Z,V)g(X,Y)-g(X,V)g(Z,Y)\Big].
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemman}]
Our first goal is to show the next identity,
\begin{equation}\label{n15}
R(Z,X,Y,\nabla S)=2S\Big[dS(Z)g(X,Y)-dS(X)g(Z,Y)\Big],
\end{equation}
where $\nabla S$ is the horizontal gradient of $S$ defined by $g(X,\nabla S)=dS(X)$. For this, recall the general formula proven in \cite[Theorem 3.1, (3.6)]{IV1},
\begin{multline}\label{vert2}
R(\xi_i,\xi_j,X,Y)=(\nabla_{\xi_i}U)(I_jX,Y)-(\nabla_{\xi_j}U)(I_iX,Y)\\
-\frac14\Big[(\nabla_{\xi_i}T^0)(I_jX,Y)+(\nabla_{\xi_i}T^0)(X,I_jY)\Big]
+\frac14\Big[(\nabla_{\xi_j}T^0)(I_iX,Y)+(\nabla_{\xi_j}T^0)(X,I_iY)\Big]\\
-(\nabla_X\rho_k)(I_iY,\xi_i) -\frac{Scal}{8n(n+2)}T(\xi_k,X,Y)
-T(\xi_j,X,e_a)T(\xi_i,e_a,Y)+T(\xi_j,e_a,Y)T(\xi_i,X,e_a)
\end{multline}
where the Ricci two forms are given by, cf. \cite[Theorem 3.1]{IV1},
\begin{equation}
\begin{aligned}
& 6(2n+1)\rho_s(\xi_s,X)=(2n+1)X(S)+\frac12\Big[(\nabla_{e_a}T^0)(e_a,X)-3(\nabla_{e_a}T^0)(I_se_a,I_sX)\Big]
-2(\nabla_{e_a}U)(e_a,X),\\
& 6(2n+1)\rho_i(\xi_j,I_kX)=-6(2n+1)\rho_i(\xi_k,I_jX)=(2n-1)(2n+1)X(S)
\\
& \hskip1in -\frac12\Big[ (4n+1)(\nabla_{e_a}T^0)(e_a,X)+3(\nabla_{e_a}T^0)(I_ie_a,I_iX)\Big]-4(n+1)(
\nabla_{e_a}U)(e_a,X) .
\end{aligned} \label{d3n6}
\end{equation}
In our case $T^0=U=0$, hence \eqref{vert2} takes the form
\begin{equation}\label{n2}
R(\xi_i,\xi_j,X,Y)=-(\nabla_X\rho_k)(I_iY,\xi_i).
\end{equation}
Letting $n=1$ and $T^0=U=0$ in \eqref{d3n6}, it follows that $\rho_i(I_kY,\xi_j)=-\frac16dS(Y)$, which after a cyclic permutation of $ijk$ and a substitution of $Y$ with $I_kY$ yields
\begin{equation}\label{n1}
\rho_k(I_iY,\xi_i)=-\frac16dS(I_kY).
\end{equation}
Taking the covariant derivative of \eqref{n1} with respect to the Biquard connection and applying \eqref{der} we calculate
\begin{multline}\label{n3}
(\nabla_X\rho_k)(I_iY,\xi_i)-\alpha_i(X)\rho_j(I_iY,\xi_i)+\alpha_j(X)\rho_i(I_iY,\xi_i)-\alpha_j(X)\rho_k(I_kY,\xi_i)+\alpha_k(X)\rho_k(I_jY,\xi_i)\\-\alpha_j(X)\rho_k(I_iY,\xi_k)+\alpha_k(X)\rho_k(I_iY,\xi_j)=-\frac16\nabla^2S(X,I_kY)+\frac16\alpha_i(X)dS(I_jY)-\frac16\alpha_j(X)dS(I_iY).
\end{multline}
Applying \eqref{d3n6} with $n=1$ and $T^0=U=0$ we see that the terms involving the connection 1-forms cancel and \eqref{n3} turns into
\begin{equation}\label{n4}
(\nabla_X\rho_k)(I_iY,\xi_i)=-\frac16\nabla^2S(X,I_kY).
\end{equation}
A substitution of \eqref{n4} in \eqref{n2}, taking into account the skew-symmetry of $R(\xi_i,\xi_j,X,Y)$ with respect to $X$ and $Y$, allows us to conclude the following identity for the horizontal Hessian of $S$
\begin{equation}\label{n5}
\nabla^2S(X,I_sY)+\nabla^2S(Y,I_sX)=0.
\end{equation}
The trace of \eqref{n5} together with the Ricci identity yields
\begin{multline*}
0=2\nabla^2S(e_a,I_ke_a)=\nabla^2S(e_a,I_ke_a)-\nabla^2S(I_ke_a,e_a)
=-2\sum_{s=1}^3\omega_s(e_a,I_ke_a)dS(\xi_s)=-8dS(\xi_k),
\end{multline*}
i.e., we have
\begin{equation}\label{n6}
dS(\xi_s)=0, \qquad \nabla^2S(\xi_s,\xi_t)=0.
\end{equation}
The first equality in \eqref{n6} shows that $S$ is constant along the vertical directions, $dS(\xi_s)=0$; hence, in view of \eqref{der}, the second equation of \eqref{n6} holds as well. In addition, we have
$\nabla^2S(X,\xi_s)=XdS(\xi_s)-dS(\nabla_X\xi_s)=0$ since $\nabla$ preserves the vertical directions due to \eqref{der}.
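The coefficient $8$ in the trace computation above reflects the algebraic identity $\sum_{a}\omega_s(e_a,I_ke_a)=4\delta_{sk}$ on the $4$-dimensional horizontal space. This can be confirmed numerically on the flat model, with $I_s$ realized as left quaternion multiplications (an illustration, not part of the proof):

```python
import numpy as np

# I_s as left multiplication by i, j, k on R^4 = H, in the basis (1, i, j, k)
I = [np.array(m, float) for m in (
    [[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]],
    [[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]],
    [[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]])]
E = np.eye(4)  # orthonormal basis e_a of the horizontal space for n = 1

for s in range(3):
    for k in range(3):
        # omega_s(X, Y) = g(I_s X, Y), so sum_a omega_s(e_a, I_k e_a) is a trace
        total = sum(float((I[s] @ E[:, a]) @ (I[k] @ E[:, a])) for a in range(4))
        assert np.isclose(total, 4.0 if s == k else 0.0)
print("sum_a omega_s(e_a, I_k e_a) = 4*delta_sk")
```

Hence $-2\sum_{s}\omega_s(e_a,I_ke_a)\,dS(\xi_s)=-8\,dS(\xi_k)$, as used above.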
Moreover, the Ricci identity $$\nabla^2S(\xi_s,X)-\nabla^2S(X,\xi_s)=dS(T(\xi_s,X))=0$$ together with the above equality leads to
\begin{equation}\label{n7}
\nabla^2S(\xi_s,X)=\nabla^2S(X,\xi_s)=0.
\end{equation}
Next, we show that the horizontal Hessian of $S$ is symmetric. Indeed, we have the identity
\begin{equation}\label{n8}
\nabla^2S(X,Y)-\nabla^2S(Y,X)=d^2S(X,Y)-dS(T(X,Y)) =-2\sum_{s=1}^3\omega_s(X,Y)dS(\xi_s)=0
\end{equation}
where we applied \eqref{n6} to conclude the last equality. Now, \eqref{n5} and \eqref{n8} imply
\begin{equation}\label{n9a}
\nabla^2S(X,Y)-\nabla^2S(I_sX,I_sY)=0
\end{equation}
which shows that the $[-1]$-component of the horizontal Hessian vanishes. Hence, the horizontal Hessian of $S$ is proportional to the horizontal metric since $n=1$, i.e.,
\begin{equation}\label{n9}
\nabla^2S(X,Y)=\frac{\nabla^2S(e_a,e_a)}4g(X,Y)=-\frac{\triangle S}4g(X,Y),
\end{equation}
where $\triangle S=-\nabla^2S(e_a,e_a)$ is the sub-Laplacian of $S$. We have the following Ricci identity of order three (see e.g. \cite{IPV})
\begin{equation}\label{n10}
\nabla^3 S(X,Y,Z)-\nabla^3 S(Y,X,Z)=-R(X,Y,Z,\nabla S) - 2\sum_{s=1}^3
\omega_s(X,Y)\nabla^2S (\xi_s,Z).
\end{equation}
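The step from \eqref{n9a} to \eqref{n9} above used that for $n=1$ a symmetric endomorphism of $H$ commuting with $I_1,I_2,I_3$ (which is what the vanishing of the $[-1]$-component amounts to) must be a multiple of the identity. This linear-algebra fact can be confirmed numerically on the flat model (an illustration, not part of the proof):

```python
import numpy as np

# I_s as left multiplication by i, j, k on R^4, in the basis (1, i, j, k)
I = [np.array(m, float) for m in (
    [[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]],
    [[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]],
    [[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]])]

# Basis of the 10-dimensional space of symmetric 4x4 matrices
sym_basis = []
for p in range(4):
    for q in range(p, 4):
        B = np.zeros((4, 4))
        B[p, q] = B[q, p] = 1.0
        sym_basis.append(B)

# Linear constraints [A, I_s] = 0 for s = 1, 2, 3, stacked into one matrix
M = np.vstack([np.array([(B @ Is - Is @ B).ravel() for B in sym_basis]).T
               for Is in I])

# The null space is spanned by the identity matrix alone
null_dim = len(sym_basis) - np.linalg.matrix_rank(M)
print("dim of {symmetric A : A I_s = I_s A for all s} =", null_dim)
```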
Applying \eqref{n7} we conclude from \eqref{n10} that
\begin{equation}\label{n11}
\nabla^3 S(X,Y,Z)-\nabla^3 S(Y,X,Z)=-R(X,Y,Z,\nabla S).
\end{equation}
Combining \eqref{n11} and \eqref{n9} we obtain the next expression for the curvature
\begin{equation}\label{n12}
R(Z,X,Y,\nabla S)=\frac{\nabla^3S(X,e_a,e_a)}4g(Z,Y)-\frac{\nabla^3S(Z,e_a,e_a)}4g(X,Y).
\end{equation}
The trace of \eqref{n12} together with the first equality of \eqref{sixtyfour} computed for $n=1$, $T^0=0$ and $U=0$ yields
$$Ric(Z,\nabla S)=6SdS(Z)=-\frac34\nabla^3S(Z,e_a,e_a).$$
Thus, we have
\begin{equation}\label{n14}
\nabla^3S(Z,e_a,e_a)=-8SdS(Z).
\end{equation}
Now, a substitution of \eqref{n14} in \eqref{n12} gives \eqref{n15}.
Turning to the general formula \eqref{n17}, we note that the horizontal curvature of the Biquard connection in the qc-Einstein case
satisfies the identity
\begin{equation}\label{b1}R(X,Y,Z,V)+R(Y,Z,X,V)+R(Z,X,Y,V)=0.
\end{equation}
This follows from the first Bianchi identity since $(\nabla T)(X,Y)=0$ and $ T(T(X,Y),Z)=\sum_{s=1}^{3}2\omega_s(X,Y)T(\xi_s,Z)=0.$
Thus, the horizontal curvature has the algebraic properties of the Riemannian curvature, namely it is skew-symmetric with respect to the first and the last pairs and satisfies the Bianchi identity \eqref{b1}. Therefore it also has the fourth Riemannian curvature property,
\begin{equation}\label{riem}
R(X,Y,Z,V)=R(Z,V,X,Y).
\end{equation}
The equalities \eqref{n15} and \eqref{riem} imply
\begin{gather}\label{n16}
0=R(I_i\nabla S,I_j\nabla S,I_k\nabla S,\nabla S)=R(I_k\nabla S,\nabla S,I_i\nabla S,I_j\nabla S), \\\nonumber 0= R(I_i\nabla S,I_j\nabla S,I_j\nabla S,\nabla S)= R(I_j\nabla S,\nabla S,I_i\nabla S,I_j\nabla S).
\end{gather}
Moreover, using \eqref{rrho} and the second equality in \eqref{sixtyfour} with $T^0=U=0$, we calculate
\begin{multline}\label{scur1}
R(I_j\nabla S,I_i\nabla S,I_i\nabla S,I_k\nabla S)-R(I_j\nabla S,I_i\nabla S,\nabla S,I_j\nabla S)\\=-2\rho_j(I_j\nabla S,I_i\nabla S)\omega_k(\nabla S,I_k\nabla S)+2\rho_k(I_j\nabla S,I_i\nabla S)\omega_j(\nabla S,I_k\nabla S)=0.
\end{multline}
The second equality of \eqref{n16} together with \eqref{scur1} yields
\begin{equation}\label{scur2}
R(I_j\nabla S,I_i\nabla S,I_i\nabla S,I_k\nabla S)=0.
\end{equation}
Finally, \eqref{n15}, \eqref{n16}, \eqref{scur1} and \eqref{scur2}, together with \eqref{rrho} and \eqref{sixtyfour}, imply for any $s\not=t$ the identities
\begin{equation}\label{scur3}
R(I_s\nabla S,I_t\nabla S,I_t\nabla S,I_s\nabla S)=R(I_s\nabla S,\nabla S,\nabla S,I_s\nabla S)=2S|\nabla S|^4.
\end{equation}
In a neighborhood of any point where $\nabla S\not=0$ the quadruple $\{\frac{\nabla S}{|\nabla S|}, \frac{I_1\nabla S}{|\nabla S|},\frac{I_2\nabla S}{|\nabla S|},\frac{I_3\nabla S}{|\nabla S|}\}$ is an orthonormal basis of $H$, hence
after a small calculation taking into account \eqref{n16}, \eqref{scur2} and \eqref{scur3}, we see that for any orthonormal basis $\{Z,X,Y,V\}$ of $H$ we have
\begin{equation}\label{kul1}
R(Z,X,Y,V)=0, \qquad R(Z,X,Z,V)-R(Y,X,Y,V)=0,
\end{equation}
where the second equation follows from the first using the orthogonal basis $\{Z+Y,X,Z-Y,V\}$.
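For the reader's convenience, the polarization step can be spelled out as follows. By multilinearity it suffices to apply the first identity in \eqref{kul1} to the orthonormal quadruple $\{(Z+Y)/\sqrt2,\,X,\,(Z-Y)/\sqrt2,\,V\}$, which gives
$$0=R(Z+Y,X,Z-Y,V)=R(Z,X,Z,V)-R(Z,X,Y,V)+R(Y,X,Z,V)-R(Y,X,Y,V),$$
and the two middle terms vanish by another application of the first identity, since $\{Z,X,Y,V\}$ is orthonormal.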
For the ``sectional curvature'' $K(Z,X)=R(Z,X,Z,X)$ we then have the identities
\begin{multline*}
K(Z,X)+K(Y,V)-K(Z,V)-K(Y,X)=R(Z,X,Z,X)+R(Y,V,Y,V) -R(Z,V,Z,V) - R(Y,X,Y,X)\\
=R(Y,X,Y,X)+R(Y,X,Y,V)-R(Y,V,Y,X)+R(Y,V,Y,V)-R(Z,X,Z,X)-R(Z,X,Z,V)\\
+R(Z,V,Z,X)-R(Z,V,Z,V)=R(Z,X+V,Z,X-V)-R(Y,X+V,Y,X-V)=0
\end{multline*}
using \eqref{riem} in the second equality and \eqref{kul1} in the last equality. Now \cite[Theorem 3]{Kul} shows that the Riemannian conformal tensor of the horizontal curvature $R$ vanishes. In view of $Ric=6S\cdot g$, we conclude that the curvature restricted to the horizontal space is given by \eqref{n17}, which proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t:main}]
Let $M$ be a qc-Einstein manifold of dimension seven with a local $\mathbb{R}^3$-valued 1-form $\eta=(\eta_1,\eta_2,\eta_3)$ defining the given qc structure. Suppose the qc-scalar curvature is not a locally constant function. We shall reach a contradiction by showing that $M$ is locally qc-conformally flat, which will be shown to imply that the qc-scalar curvature is locally constant.
To prove the first claim, we show that if the qc-scalar curvature is not locally constant, then the qc-conformal curvature $W^{qc}$ of \cite{IV1} vanishes on the open set where $\nabla S\not=0$. For this we recall the formula for the qc-conformal curvature $W^{qc}$ given in \cite[Proposition~4.2]{IV1}, which under the assumptions $T^0=U=0$ simplifies to
\begin{multline}\label{qc1}
W^{qc}(Z,X,Y,V)=\frac14\Big[R(Z,X,Y,V)+\sum_{s=1}^3R(I_sZ,I_sX,Y,V)\Big]\\
+\frac{S}2\Big[g(Z,Y)g(X,V)-g(Z,V)g(X,Y)+\sum_{s=1}^3\Big(\omega_s(Z,Y)\omega_s(X,V)-
\omega_s(Z,V)\omega_s(X,Y)\Big)\Big].
\end{multline}
A substitution of \eqref{n17} in \eqref{qc1} shows $W^{qc}=0$ on the set where $\nabla S\not=0$.
Now, \cite[Theorem~1.2]{IV1} shows that the open set where $\nabla S\not=0$ is locally qc-conformally flat, i.e., every point $p$ with $\nabla S(p)\not=0$ has an open neighborhood $O$ and a qc-conformal transformation $F: O \rightarrow \boldsymbol {G\,(\mathbb{H})}$ to the quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$ equipped with the standard flat qc structure $\tilde\Theta$. Thus, $\Theta\overset{def}{=}F^*\eta= \frac{1}{2\mu} \tilde\Theta$ for some positive smooth function $\mu$ defined on the open set $F(O)$. By its definition $\Theta$ is a qc-Einstein structure, hence the proof of \cite[Theorem 1.1]{IMV} shows that, with a small change of the parameters in \cite[Theorem 1.1]{IMV}, $\mu$ is given by
\begin{equation}\label{e:Liouville conf factor}
\mu (q,\omega) \ =\ c_0\Big[ \big( \sigma+
|q+q_o|^2 \big)^2+ |\omega + \omega_o +
2\operatorname{Im} q_o\, \bar q|^2 \Big],
\end{equation}
for some fixed $(q_o,\omega_o)\in \boldsymbol {G\,(\mathbb{H})}$ and constants $c_0>0$ and $\sigma\in \mathbb{R}$. A small calculation using \eqref{e:Liouville conf factor} and the Yamabe equation \cite[(5.8)]{IMV} shows $Scal_{\Theta}=128n(n+2)c_0\sigma =const$. Since $\eta$ is qc-conformal to $\Theta$ via the map $F$, it follows that $Scal_{\eta}=const$ on $O$, which is a contradiction.
\end{proof}
An immediate consequence of Theorem \ref{t:main} and \cite[Theorem 4.9]{IMV} is the following
\begin{cor}\label{c:vert int}
The vertical space $V$ of a seven dimensional qc-Einstein manifold is integrable.
\end{cor}
We note that the integrability of the vertical distribution of a $(4n+3)$-dimensional qc-Einstein manifold in the case $n>1$, and when $S=const$ and $n=1$, was proven earlier in \cite[Theorem 4.9]{IMV}. Thus, in any dimension, the vertical distribution $V$ of a qc-Einstein manifold is
integrable and we have
\begin{equation}\label{e: some ricci of einstein}
\rho_{s}(X,Y)=-S
\omega_s(X,Y), \qquad
Ric(\xi_s,X)=\rho_s(X,\xi_t)=0, \qquad [\xi_s,\xi_t]\in V.
\end{equation}
Another corollary of Theorem \ref{t:main} and the analysis of the corresponding results in the case $n>1$ \cite{IV2} is
\begin{cor}\label{c:4-form}
If $M$ is a seven dimensional qc-Einstein manifold, then $d\Omega=0$, where $\Omega$ is the fundamental 4-form defining the quaternionic structure on the horizontal distribution.
\end{cor}
For details, we refer to the proof of the case $n>1$ in \cite[Theorem 4.4.2. c)]{IV3}, which is valid in the case $n=1$ as well, due to Theorem \ref{t:main} and Corollary \ref{c:vert int}.
We note that the converse of Corollary \ref{c:4-form} holds true when $n>1$, see \cite{IV2}, while in the case $n=1$ a counterexample to this implication was found in \cite{CFS}.
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
\section{A characterization based on vertical flat connection}\label{three}
In this section we show that for any qc manifold $M$ there is a natural linear connection $\widetilde\nabla$, defined on the vertical distribution $V$, the latter considered as a vector bundle over $M$. This connection has the remarkable property of being flat exactly when $M$ is qc-Einstein, see Theorem~\ref{flat tilde}, and will turn out to be a useful technical tool for the geometry of qc-Einstein manifolds in the sequel.
We start by introducing a cross-product on the vertical space $V$. Recall that $h$, cf. \eqref{e:Riem-metric}, is the natural extension of the horizontal metric $g$ to a Riemannian metric on $M$, which induces an inner product, denoted here by $\langle.,.\rangle$, and an orientation on the vertical distribution $V$. This allows us to introduce the
cross-product operation $\times:\Lambda^2(V)\rightarrow V$ in the
standard way: $\xi_i\times\xi_j=\xi_k$, $\xi_i\times\xi_i=0$.
The cross-product operation is parallel with respect to
any connection on $V$ preserving the inner product $\langle.,.\rangle$, in particular, with respect to the
restriction of the Biquard connection $\nabla$ to $V$. For any $\xi,\xi',\xi''\in V$, we have the standard relations
\begin{equation}\label{nablax}
\begin{aligned}
&(\xi\times \xi')\times \xi''=\langle \xi,\xi'' \rangle \xi' - \langle \xi',\xi'' \rangle \xi, \qquad
\xi\times (\xi'\times \xi'')=(\xi\times \xi')\times \xi''+\xi'\times (\xi\times \xi''),\\
& \hskip1in \nabla_{A}(\xi\times \xi')=(\nabla_{A}\xi)\times
\xi'+\xi\times (\nabla_{A}\xi') .
\end{aligned}
\end{equation}
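As a quick sanity check, the first relation in \eqref{nablax} can be verified directly on the oriented orthonormal frame $\xi_1,\xi_2,\xi_3$; for instance,
$$(\xi_1\times \xi_2)\times \xi_1=\xi_3\times\xi_1=\xi_2=\langle \xi_1,\xi_1\rangle\,\xi_2-\langle \xi_2,\xi_1\rangle\,\xi_1,$$
and since both sides are linear in each argument, the identity follows in general.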
In the next lemma we collect some formulas, which will be used in the proof of Theorem~\ref{flat tilde}.
\begin{lemma}\label{Einstein:RT}
The curvature $R$ and torsion $T$ of the Biquard connection $\nabla$ of a
qc-Einstein manifold satisfy the following
identities:
\begin{equation}\label{thri}
T(\xi,\xi')=-S\,\xi\times \xi',\quad T(\xi,X)=0, \quad R(A,B)\xi=-2S\sum_{s=1}^{3}\omega_s(A,B)\,\xi_s\times \xi.
\end{equation}
\end{lemma}
\begin{proof}
The first two identities follow directly from \eqref{torsion} and the integrability of the vertical distribution $V$, see Corollary~\ref{c:vert int} and the paragraph after it. The last identity follows from \eqref{vert2}, \eqref{neww} and \eqref{e: some ricci of einstein}. In particular, the three Ricci 2-forms $\rho_s(A,B)$ vanish unless $A$ and $B$ are both
horizontal, in which case we have \eqref{e: some ricci of einstein}. The proof is complete.
\end{proof}
\begin{dfn}
We define a connection $\widetilde{\nabla}$ on
the vertical vector bundle $V$ of a qc manifold $M$ as follows:
\begin{equation}\label{tildenabla}
\widetilde{\nabla}_X\xi:=\nabla_X\xi,\qquad
\widetilde{\nabla}_\xi\xi':=\nabla_\xi\xi'+S(\xi\times \xi').
\end{equation}
\end{dfn}
The main result of this section is
\begin{thrm}\label{flat tilde}
A qc manifold $M$ is qc-Einstein if and only if the connection $\widetilde{\nabla}$ is flat, $R^{\widetilde{\nabla}}=0$.
\end{thrm}
\begin{proof}We start by relating the curvature $R^{\widetilde{\nabla}}$ of the connection $\widetilde{\nabla}$, cf. \eqref{tildenabla}, to the curvature of the Biquard connection $\nabla$. To this end,
let $L=\widetilde{\nabla}-\nabla\in \Gamma(M,T^*M\otimes V^*\otimes
V)$ be the difference between the two connections on~$V$. Then \eqref{tildenabla} implies
$L_{A}\xi=L(A,\xi)=S[A]_V\times \xi$, where $[A]_V$ is the orthogonal projection of $A$ on $V$.
The curvature tensor $R^{\widetilde{\nabla}}$ of the new connection $\widetilde{\nabla}$ is given in terms of $R$ and $L$ by the well-known general formula
\begin{equation}\label{R-R}
R^{\widetilde{\nabla}}(A,B)\xi=R(A,B)\xi+\big(\nabla_{A}L\big)(B,\xi)-\big(\nabla_{B}L\big)(A,\xi) + \big[L_{A},L_{B}\big]\xi+L\big(T(A,B),\xi\big).
\end{equation}
We proceed by considering each of the terms on the right-hand side of \eqref{R-R} separately. We have, cf. \eqref{sp1curv},
\begin{equation}\label{pR-R4}
R(A,B)\xi=\left(\sum_{s=1}^3 2\rho_s(A,B)\xi_s\right)\times \xi.
\end{equation}
Using \eqref{nablax} and the obvious identity $\nabla_{A}
\big(\,[B]_V\big)=\big[\nabla_{A}B\big]_V$ we obtain
\begin{equation}\label{pR-R1}
\big(\nabla_{A} L\big)(B,\xi)=\nabla_{A}
\big(L(B,\xi)\big)-L\big(\nabla_{A}B,\xi\big)-L\big(B,\nabla_{A} \xi\big)=dS(A)\,[B]_V\times
\xi.
\end{equation}
From \eqref{nablax} it follows that
\begin{equation}\label{pR-R2}
\big[L_{A},L_{B}\big]\xi=\big(L_{A}\times L_{B}\big)\times \xi=S^2\Big(\,[A]_V\times [B]_V\,\Big)\times \xi.
\end{equation}
\noindent The torsion identities \eqref{torsion} imply
\begin{equation}\label{pR-R3}
L\big(T(A,B),\xi\big)=S\big[T(A,B)\big]_V\times
\xi=S\Big(\,-S[A]_V\times[B]_V+2\sum_{s=1}^3\omega_s(A,B)\xi_s\,\Big)\times
\xi.
\end{equation}
Finally, a substitution of \eqref{pR-R4}, \eqref{pR-R1}, \eqref{pR-R2} and \eqref{pR-R3} in the right-hand side of formula \eqref{R-R} gives the relation
\begin{gather}\label{R_tilda}
R^{\widetilde{\nabla}}(A,B)\xi=\left(\sum_{s=1}^3 2\rho_s(A,B)\xi_s+dS(A)[B]_V-dS(B)[A]_V+2S\sum_{s=1}^3\omega_s(A,B)\xi_s\right)\times \xi.
\end{gather}
We are now ready to complete the proof of the theorem. Suppose first that $M$ is a qc-Einstein manifold. By Theorem \ref{t:main} when $n=1$, and by \cite{IMV} when $n>1$, the qc-scalar curvature is constant. Lemma \ref{Einstein:RT} implies that
$$\sum_{s=1}^3\rho_s(A,B)\xi_s=-S\sum_{s=1}^3\omega_s(A,B)\xi_s.$$ Since $dS=0$, \eqref{R_tilda} gives $R^{\widetilde{\nabla}}=0$, and thus $\widetilde{\nabla}$ is a flat connection on $V$.
Conversely, if $\widetilde{\nabla}$ is flat, then
by applying \eqref{R_tilda} with $(A,B)=(X,Y)$ we obtain $\rho_s(X,Y)=-S\omega_s(X,Y)$. Applying the second formula of \eqref{sixtyfour} and comparing the $Sp(n)Sp(1)$ components of the obtained equalities, we derive $T^0=0$ and $U=0$. Thus, taking into account the first formula in \eqref{sixtyfour}, $(M,\eta)$ is a qc-Einstein manifold.
\end{proof}
\section{The structure equations of a qc Einstein manifold}
Let $M$ be a qc manifold with normalized qc-scalar curvature $S$. From \cite[Proposition 3.1]{IV2} we have the structure equations
\begin{equation}\label{streq}
\begin{aligned}
d\eta_i & =2\omega_i-\eta_j\wedge\alpha_k+\eta_k\wedge\alpha_j -
S\eta_j\wedge\eta_k,\\
d\omega_i & =\omega_j\wedge(\alpha_k+
S\eta_k)-\omega_k\wedge(\alpha_j+ S\eta_j)-\rho_k\wedge\eta_j+
\rho_j\wedge\eta_k+ \frac{1}{2}dS\wedge\eta_j\wedge\eta_k,
\end{aligned}
\end{equation}
where $(\eta_1,\eta_2,\eta_3)$ is a local $\mathbb{R}^3$-valued 1-form defining the given qc-structure and $\alpha_s$ are the corresponding connection 1-forms. If, locally, there is an $\mathbb{R}^3$-valued 1-form $\eta=(\eta_1,\eta_2,\eta_3)$ defining the given qc-structure such that we have the structure equations $d\eta_i=2\omega_i+S\eta_j\wedge \eta_k$ with $S=const$, or such that the connection 1-forms vanish on the horizontal space, ${\alpha_i}\vert_{H}=0$, then $M$ is a qc-Einstein manifold of normalized qc-scalar curvature $S$, see \cite[Proposition 3.1]{IV2} and \cite[Lemma 4.18]{IMV}.
Conversely, on a qc-Einstein manifold of nowhere vanishing qc-scalar curvature the structure equations \eqref{str_eq_mod} hold true by \cite{IV2} and \cite[Section 4.4.2]{IV3}, taking into account Corollary \ref{c:3-sasakian}. The purpose of this section is to give the corresponding results in the case $Scal=0$. The proof of Theorem \ref{str_eq_mod_th}, which is based on the connection defined in Section \ref{three} rather than on the cone over a 3-Sasakian manifold employed in \cite{IV2} and \cite[Theorem 4.4.4]{IV3}, works also in the case $Scal\not=0$; thus, in the statement of the theorem we make no explicit note of the condition $Scal=0$.
\begin{thrm}\label{str_eq_mod_th} Let $M$ be a qc manifold. The following conditions are equivalent:
\begin{enumerate}[a)]
\item $M$ is a qc Einstein manifold;
\item locally, the given qc-structure is defined by a 1-form $(\eta_1,\eta_2,\eta_3)$ such that for some constant $S$ we have
\begin{equation}\label{str_eq_mod}
d\eta_i=2\omega_i+S\eta_j\wedge\eta_k;
\end{equation}
\item locally, the given qc-structure is defined by a 1-form $(\eta_1,\eta_2,\eta_3)$ such that the corresponding connection 1-forms vanish on $H$, $\alpha_s=-S\eta_s$.
\end{enumerate}
\end{thrm}
\begin{proof}
As explained above, the implication c) $\Rightarrow$ a) is known, while b) $\Rightarrow$ c) is an immediate consequence of \eqref{streq}. Thus, only the implication a) $\Rightarrow$ b) needs to be proven, see also the paragraph preceding the theorem.
Assume a) holds. We will show that the structure equations in b) are satisfied. By Theorem \ref{t:main} when $n=1$, and by \cite{IMV} when $n>1$, it follows that $M$ is of constant qc-scalar curvature. Let $V$ be
the vertical distribution. Clearly, the
connection $\widetilde{\nabla}$ defined in Theorem~\ref{flat tilde} is a flat metric connection on $V$ with respect to the inner product
$\langle.,.\rangle$. Therefore the bundle $V$ admits a local orthonormal oriented frame $K_1,K_2,K_3$
which is $\widetilde{\nabla}$-parallel, i.e., we have
\begin{equation}\label{nabla K_i}
\nabla_{A}K_i=-S[A]_V\times K_i.
\end{equation}
\noindent There exists a triple of local 1-forms $(\eta_1,\eta_2,\eta_3)$ on $M$ vanishing on $H$, which satisfy
$\eta_s(K_t)=\delta_{st}$. We rewrite \eqref{nabla K_i} as
\begin{equation}\label{nabla K_i2}
\nabla_{A}K_i=S\big(\eta_j(A)K_k-\eta_k(A)K_j\big).
\end{equation}
Since $K_1,K_2,K_3$ is an orthonormal and oriented frame of $V$, we can complete the dual triple $(\eta_1,\eta_2,\eta_3)$ to one defining the given qc-structure. By differentiating the equalities $\eta_s(K_i)=\delta_{si}$ we obtain, using \eqref{nabla K_i2}, that
\begin{multline*}
0\ =\ \big(\nabla_{A}\eta_s\big)(K_i)+\eta_s\big(\nabla_{A}K_i\big)\ =\ \big(\nabla_{A}\eta_s\big)(K_i)
+\eta_s\Big( S\big(\eta_j(A)K_k-\eta_k(A)K_j\big)\Big)\\
=\ \big(\nabla_{A}\eta_s\big)(K_i)+S\Big(\eta_j(A)\delta_{sk}-\eta_k(A)\delta_{sj}\Big).
\end{multline*}
Hence,
$\big(\nabla_{A}\eta_i\big)(B)\ =\ S\,\eta_j\wedge\eta_k(A,B)$,
which together with Lemma~\ref{Einstein:RT} allows the computation of
the exterior derivative of $\eta_i$,
\begin{multline}\label{deta Ki}
d\eta_i(A,B) = \big(\nabla_{A}\eta_i\big)(B)-\big(\nabla_{B}\eta_i\big)(A)
+\eta_i\big(T(A,B)\big)=S\,\eta_j\wedge\eta_k(A,B)-S\,\eta_j\wedge\eta_k(B,A)\\
+ \eta_i\Big(-S[A]_V\times[B]_V +2\sum_s\omega_s(A,B)\xi_s\Big)
=\Big(2\omega_i+S\eta_j\wedge\eta_k\Big)(A,B),
\end{multline}
which proves
\eqref{str_eq_mod}. Now $\alpha_s|_H=0$ shows that $K_s$ satisfy \eqref{bi1} and therefore $K_s$ are the Reeb vector fields, which completes the proof of the theorem.
\end{proof}
We finish the section with another condition characterizing qc-Einstein manifolds, which is useful in some calculations.
\begin{prop} Let $M$ be a qc manifold. Then $M$ is qc-Einstein if and only if for some $\eta$ compatible with the given qc-structure we have
\begin{equation}\label{hor_domega}
d\omega_s(X,Y,Z)=0.
\end{equation}
\end{prop}
\begin{proof}
If \eqref{str_eq_mod} is satisfied, then we have
$
0\ =\ d(d\eta_i)\ =\ d\Big(2\omega_i+
S\eta_j\wedge\eta_k\Big),
$
which implies \eqref{hor_domega}.
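Indeed, since $S$ is constant and $\eta_j$, $\eta_k$ vanish on $H$, restricting the identity $0=d\big(2\omega_i+S\eta_j\wedge\eta_k\big)$ to horizontal vectors gives
$$0=2\,d\omega_i(X,Y,Z)+S\big(d\eta_j\wedge\eta_k-\eta_j\wedge d\eta_k\big)(X,Y,Z)=2\,d\omega_i(X,Y,Z),$$
because every term of $\big(d\eta_j\wedge\eta_k-\eta_j\wedge d\eta_k\big)(X,Y,Z)$ contains a factor $\eta_j$ or $\eta_k$ evaluated on a horizontal vector.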
Conversely, suppose the given qc-structure is locally defined by a 1-form $(\eta_1,\eta_2,\eta_3)$ satisfying \eqref{hor_domega}. By \eqref{streq} we have
$
\Big(\omega_j\wedge\alpha_k-\omega_k\wedge\alpha_j\Big)\big|_{H}=0,
$
which after a contraction with the endomorphism $I_i$ gives
\begin{multline*}
0=(\omega_j\wedge\alpha_k-\omega_k\wedge\alpha_j)(X,e_a,I_ie_a)\ =\omega_j(X,e_a)\alpha_k(I_ie_a)+\omega_j(e_a,I_ie_a)\alpha_k(X)+
\omega_j(I_ie_a,X)\alpha_k(e_a) \\ -\omega_k(X,e_a)\alpha_j(I_ie_a)-\omega_k(e_a,I_ie_a)\alpha_j(X)-
\omega_k(I_ie_a,X)\alpha_j(e_a)
=2\omega_j(X,e_a)\alpha_k(I_ie_a) - 2\omega_k(X,e_a)\alpha_j(I_ie_a)\\ =
2\alpha_k(I_kX)\ +\ 2\alpha_j(I_jX).
\end{multline*}
Since the above calculation is valid for any even permutation $(i,j,k)$, it follows that $\alpha_s(X)=0$, which completes the proof of the proposition.
\end{proof}
\section{The related Riemannian geometry}\label{five}
A $(4n+3)$-dimensional
(pseudo) Riemannian manifold $(M,g)$ is 3-Sasakian if the cone metric is a (pseudo) hyper-K\"ahler metric \cite{BG,BGN}. We note explicitly that in this paper 3-Sasakian manifolds are to be understood in the wider sense of positive (the usual terminology) or negative 3-Sasakian structures, cf. \cite[Section 2]{IV2} and \cite[Section 4.4.1]{IV3}, where the ``negative'' 3-Sasakian term was adopted in the case when the Riemannian cone is hyper-K\"ahler of signature $(4n,4)$. Every 3-Sasakian manifold is a qc-Einstein manifold of constant qc-scalar curvature, \cite{Biq1}, \cite{IMV} and \cite{IV2}. As is well known, a positive 3-Sasakian manifold is Einstein with positive Riemannian scalar curvature \cite{Kas} and, if complete, it is compact with finite fundamental group due to Myers'
theorem. The negative 3-Sasakian structures are Einstein with respect to the corresponding pseudo-Riemannian metric of signature $(4n,3)$ \cite{Kas,Tan}. In this case, by a simple change of signature, we obtain a positive definite $nS$ metric on $M$, \cite{Tan,Jel,Kon}.
By \cite[Theorem 1.3]{IMV} when $Scal>0$, and by \cite{IV2} and \cite[Theorem 4.4.4]{IV3} when $Scal<0$, a qc-Einstein manifold of dimension at least eleven is locally qc-homothetic to a 3-Sasakian structure. The corresponding result in the seven dimensional case was proven under the extra assumption that the qc-scalar curvature is constant. Thanks to Theorem \ref{t:main} this additional hypothesis is redundant, hence we have the following
\begin{cor}\label{c:3-sasakian}
A seven dimensional qc-Einstein manifold of nowhere vanishing qc-scalar curvature is locally qc-homothetic to a 3-Sasakian structure.
\end{cor}
There are many known examples of positive 3-Sasakian manifolds, see \cite{BG} and the references therein for a nice overview of 3-Sasakian spaces. On the other hand, certain $SO(3)$-bundles over quaternionic K\"ahler manifolds with negative scalar curvature constructed in \cite{Kon,Tan,Jel} are examples of negative 3-Sasakian manifolds.
Other explicit examples of negative 3-Sasakian manifolds are constructed in \cite{AFIV}.
Complete and regular 3-Sasakian manifolds, resp. $nS$-structures, fiber over a quaternionic K\"ahler manifold with positive, resp. negative, scalar curvature, with fiber $SO(3)$ \cite{Is, BGN, Tan, Jel}. Conversely, a quaternionic K\"ahler manifold with positive (resp. negative) scalar curvature has a canonical $SO(3)$ principal bundle, the total space of which admits a natural 3-Sasakian (resp. $nS$-) structure \cite{Is, Kon, Tan, BGN, Jel}.
In this section we describe the properties of qc-Einstein structures of zero qc-scalar curvature, which complement the well known results in the 3-Sasakian case. A common feature of the $Scal=0$ and $Scal\ne 0$ cases is the existence of Killing vector fields.
\begin{lemma}\label{l:killing}
Let $M$ be a qc-Einstein manifold with zero qc-scalar curvature. If $(\eta_1,\eta_2,\eta_3)$ is an $\mathbb{R}^3$-valued local 1-form defining the qc structure as in~\eqref{str_eq_mod}, then the corresponding Reeb vector fields $\xi_1,\xi_2,\xi_3$ are Killing vector fields for the Riemannian metric $h$, cf. \eqref{e:Riem-metric}.
\end{lemma}
\begin{proof}
By Theorem \ref{str_eq_mod_th} c) we have $\alpha_i=0$, hence
$\nabla_{A}\xi_i=0$, while Lemma~\ref{Einstein:RT} yields $T(\xi_s,\xi_t)=0$. Therefore,
$
[\xi_s,\xi_t]=\nabla_{\xi_s}\xi_t-\nabla_{\xi_t}\xi_s-T(\xi_s,\xi_t)=0,
$
which implies that for any $i,s,t \in\{1,2,3\}$ we have
$
({\mathcal
L}_{\xi_i}h)(\xi_s,\xi_t)=-h([\xi_i,\xi_s],\xi_t)-h(\xi_s,[\xi_i,\xi_t])
= 0.
$
Furthermore, using $d\eta_j(\xi_i,X)=\alpha_k(X)=0$ we compute
\begin{equation*}
({\mathcal
L}_{\xi_s}h)(\xi_t,X)=-h(\xi_t,[\xi_s,X])=d\eta_t(\xi_s,X)=0.
\end{equation*}
Finally, \eqref{to} gives $({\mathcal L}_{\xi_i}h)(X,Y)=({\mathcal
L}_{\xi_i}g)(X,Y)=2T^0_{\xi_i}(X,Y)=0$, which completes the proof.
\end{proof}
\subsection{The quotient space of a qc Einstein manifold with $S=0$}
The total space of an $\mathbb{R}^3$-bundle over a hyper-K\"ahler manifold with closed and locally exact K\"ahler forms $2\omega_s = d\eta_s$, with connection
1-forms $\eta_s$, carries a qc-structure determined by the three 1-forms $\eta_s$, which is qc-Einstein of vanishing qc-scalar curvature, see \cite{IV2}. In fact, we characterize qc-Einstein manifolds with vanishing qc-scalar curvature as $\mathbb{R}^3$-bundles over hyper-K\"ahler manifolds.
Let $M$ be a qc-Einstein manifold. As observed in Corollary \ref{c:vert int} and the paragraph after it, the vertical distribution $V$ is completely integrable, hence defines a foliation on $M$. We recall, taking into account \cite{Pal}, that the quotient space $P=M/V$ is a manifold when the foliation is regular and the quotient topology is Hausdorff.
If $P$ is a manifold and all the leaves of $V$ are compact, then by Ehresmann's fibration theorem \cite{Ehr, Pal} it follows that $\Pi:M\rightarrow P$ is a locally trivial fibration and all the leaves are isomorphic. By \cite{Pal}, examples of such foliations are given by regular foliations on compact manifolds. In the case of a qc-Einstein manifold of non-vanishing qc-scalar curvature, the leaves of the foliation generated by $V$ are Riemannian 3-manifolds of positive constant curvature. Hence, if the associated (pseudo) Riemannian metric on $M$ is complete, then the leaves of the foliation are compact. On the other hand, in the case of vanishing qc-scalar curvature, the leaves of the foliation are flat Riemannian manifolds that may fail to be compact, as is, for example, the case for the quaternionic Heisenberg group. We summarize the properties of the Reeb foliation on a qc-Einstein manifold of vanishing qc-scalar curvature in the following
\begin{prop}\label{t:hKquot} Let $M$ be a qc-Einstein manifold with zero qc-scalar curvature.
\begin{enumerate}[a)]
\item If the vertical distribution $V$ is regular and the space of leaves $P=M/V$ with the quotient topology is Hausdorff, then $P$ is a locally hyper-K\"ahler manifold.
\item If the leaves of the foliation generated by $V$ are compact, then there exists an open dense subset $M_o\subset M$ such that $P_o:=M_o/V$ is a locally hyper-K\"ahler manifold.
\end{enumerate}
\end{prop}
\begin{proof} We begin with the proof of a).
By Theorem~\ref{str_eq_mod_th} we can assume, locally, the structure equations given in Theorem~\ref{str_eq_mod_th}. This, together with \cite[Lemma 3.2 \&
Theorem 3.12]{IMV}, implies that the horizontal metric $g$, see also
\eqref{to}, and the closed local fundamental 2-forms
$\omega_s$, see \eqref{str_eq_mod} with $S=0$, are projectable. The claim of part a) then follows from
Hitchin's lemma \cite{Hit}.
We turn to the proof of part b).
Lemma \ref{l:killing} implies, in particular, that the Riemannian metric $h$ on $M$ is bundle-like, i.e., for any two horizontal vector fields $X$ and $Y$ in the normalizer of $V$ under the Lie bracket, the equation $\xi h(X,Y)=0$ holds for any vector field $\xi$ in $V$. Since all the leaves of the vertical foliation are assumed to be compact, we can apply \cite[Proposition 3.7]{Mo},
which shows that
$P=M/V$ is a $4n$-dimensional orbifold. In particular, $P$ is a Hausdorff space. The regular points of any orbifold form an open dense set. Thus, if we let $P_o$ be the set of all regular points of $P$, then $P_o$ is an open dense subset of $P$ which is also a manifold. It follows that if $M_o:=\Pi^{-1}(P_o)$, then all the leaves of the restriction of the vertical foliation to $M_o$ are regular, and hence the claim of b) follows.
\textbf {e}nd{proof}
\subsection{The Riemannian curvature}
Let $M$ be a qc-Einstein manifold. Note that, by applying an appropriate qc homothetic transformation, we can always reduce a general qc-Einstein structure to one whose normalized qc-scalar curvature $S$ equals $0$, $2$ or $-2$. Consider the one-parameter family of (pseudo) Riemannian metrics $h^{\lambda}$,
$\lambda\ne 0$, on $M$ defined by letting
$h^{\lambda}(A,B):=h(A,B)+(\lambda-1)h\big([A]_V,[B]_V\big).$ Let $\nabla^\lambda$ be the Levi-Civita connection of $h^\lambda.$
Note that $h^{\lambda}$ is a positive-definite metric when $\lambda>0$ and
has signature $(4n,3)$ when $\lambda<0$.
Let us recall that, if $S=2$ and $\lambda=1$, the Riemannian metric $h=h^\lambda$ is a 3-Sasakian metric on $M$. In particular, it is an Einstein metric of positive Riemannian scalar curvature $(4n+2)(4n+3)$ \cite{Kas}. There is also a second Einstein metric, the ``squashed'' metric, in the family $h^\lambda$, corresponding to $\lambda={1}/{(2n+3)}$, see \cite{BG}. The case $S=-2$ is completely analogous. Here we have two distinct pseudo-Riemannian Einstein metrics corresponding to $\lambda=-1$ and $\lambda=-{1}/{(2n+3)}$. The first one defines a negative 3-Sasakian structure. On the other hand, the metric $h^{\lambda}$ with $\lambda =1$ (assuming $S=-2$) gives an $nS$ structure on $M$. In \cite{Tan}, it was shown that the Riemannian
Ricci tensor of the latter has precisely two constant eigenvalues, $ - 4n - 14$ (of multiplicity $4n$) and
$4n + 2$ (of multiplicity $3$), and that the Riemannian scalar curvature is the negative constant $-16n^2 - 44n + 6$. In particular, in this case, $(M,h^{\lambda})$ is an example of an A-manifold in the terminology of \cite{Gr}.
The following proposition addresses the case $S=0$. However, the argument is valid for all values of $S$ and $\lambda\ne 0$. In particular, we obtain new proofs of the above mentioned results concerning the cases of positive and negative 3-Sasakian structures.
\begin{prop}\label{p:einst m} Let $M$ be a qc-Einstein manifold with normalized qc-scalar curvature $S$. For a vector field $A$, let $[A]_V$ denote the orthogonal projection of $A$ to the vertical space $V$.
The (pseudo) Riemannian Ricci and scalar curvatures of $h^{\lambda}$ are given by
\begin{align} \label{Ric-lambda}
Ric^{\lambda}(A,B)
&=\Big(4n\lambda+\frac{S^2}{2\lambda}\Big)h^\lambda\big([A]_V,[B]_V\big)+\Big(2S(n+2)-6\lambda\Big)h^\lambda\big([A]_H,[B]_H\big),\\
Scal^{\lambda} &= \frac{1}{\lambda}\Big(-12n\lambda^2+8n(n+2)S\lambda+\frac{3}{2}S^2\Big).
\end{align}
In particular, if $S=0$, the Ricci curvature of each metric in the family $h^{\lambda}$ has exactly two different constant eigenvalues of multiplicities $4n$ and $3$ respectively.
\end{prop}
\begin{proof}
We start by noting that the difference $L=\nabla^{\lambda}-\nabla$ between the Levi-Cevita connection $\nabla^{\lambda}$ and the Biquard connection $\nabla$ is given by
\begin{equation}\label{nabla-lambda}
L(A,B)\textbf {e}quiv \nabla^{\lambda}_{{A}}{B}-\nabla_{{A}}{B}= \frac{S}{2}[{A}]_V\times [{B}]_V + \sum_{s=1}^3{B}ig\{-\omega_s({A},{B})\xi_s+\lambda\textbf {e}ta_s({A})I_s{B}+\lambda\textbf {e}ta_s({B})I_s{A} {B}ig\}.
\textbf {e}nd{equation}
Indeed, if we let $D_{A}{B}:=\nabla_{A}{B}+L({A},{B})$, then $h^\lambda(L({A},{B}), {C})$ is skew symmetric in ${B}$ and ${C}$, hence the connection $D$ preserves the metric $h^\lambda$. Furthermore, the torsion tensor of $D$
vanishes since
$h^\lambda(L({A},{B}),{C})-h^\lambda(L({B},{A}),{C})=-h^{\lambda}(T({A},{B}),{C}).$ The latter follows from the formula for $T$ in Lemma~\ref{Einstein:RT}. Thus $D$ is the Levi-Civita connection of $h^\lambda$.
The well-known formula for the difference $R^{\lambda}-R$ between the curvature tensors of the two connections $\nabla^{\lambda}$ and $\nabla$ gives
\begin{equation}\label{R-R2}
R^{\lambda}(A,B)C - R(A,B)C = (\nabla_{A}L)(B,C) - (\nabla_{B}L)(A,C) + [L_{A},L_{B}]C+L(T(A,B),C).
\end{equation}
From \eqref{nabla-lambda} it follows that $L$ is $\nabla$-parallel. Thus,
on the right-hand side of the above formula only the last two terms are non-zero. Furthermore, we have
$[L_{A},L_{B}]C=L(A, L(B,C))-L(B, L(A,C))$. A straightforward
computation gives
\begin{multline}\label{R_lambda}
R^\lambda(A,B)C = R(A,B)C + h^{\lambda}\Big([B]_V,[C]_V\Big)\Big(\frac{S^2}{4\lambda}[A]_V+\lambda[A]_H\Big) - h^{\lambda}\Big([A]_V,[C]_V\Big)\Big(\frac{S^2}{4\lambda}[B]_V+\lambda[B]_H\Big)\\
+\sum_{(i,j,k)-\text{cyclic}}\Big\{\ \Big(\frac{S}{2}-\lambda\Big)\eta_k(A)\omega_j(B,C)\ -\ \Big(\frac{S}{2}-\lambda\Big)\eta_k(B)\omega_j(A,C)
\\
-\ \Big(\frac{S}{2}-\lambda\Big)\eta_j(A)\omega_k(B,C)\ +\ \Big(\frac{S}{2}-\lambda\Big)\eta_j(B)\omega_k(A,C)\ + \ (S+2\lambda)\eta_k(C)\omega_j(A,B)
\\
-\ (S+2\lambda)\eta_j(C)\omega_k(A,B)\ -\ \lambda\eta_i(B)h^\lambda\Big([A]_H,[C]_H\Big)\ +\ \lambda\eta_i(A) h^\lambda\Big([B]_H,[C]_H\Big)\ \Big\}\xi_i\\
\ +\sum_{(i,j,k)-\text{cyclic}} \Big\{\ \Big(\frac{\lambda S}{2}-\lambda^2 \Big)\eta_j\wedge\eta_k(B,C) I_iA
-\ \Big(\frac{\lambda S}{2}-\lambda^2 \Big)\eta_j\wedge\eta_k(A,C) I_iB
\\-\ (\lambda S-2\lambda^2)\eta_j\wedge\eta_k(A,B) I_iC
-\
\lambda\omega_i(B,C)I_iA
\ +\ \lambda\omega_i(A,C)I_iB\ +\ 2\lambda\omega_i(A,B)I_iC \ \Big\}.
\end{multline}
After taking the trace with respect to $A$ and $D$ in equation \eqref{R_lambda}, we obtain
$$Ric^{\lambda}({B},{C})=Ric\Big([{B}]_H,[{C}]_H\Big)+\Big(4n\lambda+\frac{S^2}{2\lambda}\Big)h^\lambda\Big([{B}]_V,[{C}]_V\Big)-6\lambda h^\lambda\Big([{B}]_H,[{C}]_H\Big).$$
Since $M$ is assumed to be qc Einstein, we have
\begin{align}
Ric\Big([{B}]_H,[{C}]_H\Big)=\frac{Scal}{4n}g\Big([{B}]_H,[{C}]_H\Big)=2(n+2){S} h^\lambda\Big([{B}]_H,[{C}]_H\Big),
\end{align} which yields~\eqref{Ric-lambda}. Taking one more trace in \eqref{Ric-lambda} gives the formula for the scalar curvature.
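For the reader's convenience, here is a sketch of this last trace: with respect to an $h^{\lambda}$-orthonormal basis, the vertical term of \eqref{Ric-lambda} contributes with multiplicity $3$ and the horizontal term with multiplicity $4n$, so that
\begin{align*}
Scal^{\lambda} &= 3\Big(4n\lambda+\frac{S^{2}}{2\lambda}\Big)+4n\Big(2{S}(n+2)-6\lambda\Big)\\
&= -12n\lambda+8n(n+2)S+\frac{3S^{2}}{2\lambda}
= \frac{1}{\lambda}\Big(-12n\lambda^{2}+8n(n+2)S\lambda+\frac{3}{2}{S}^{2}\Big).
\end{align*}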
\end{proof}
\end{document}
\begin{document}
\title{Closed Range Integral Operators on Hardy, BMOA and Besov Spaces}
\author{
\name{Kostas Panteris\thanks{Contact Kostas Panteris Email: [email protected], [email protected]}}
\affil{Department of Mathematics and Applied Mathematics, University of Crete, University Campus Voutes, 70013 Heraklion, Greece }
}
\maketitle
\begin{abstract}
If $g\in H^{\infty}$, the integral operator $S_{g}$ on $H^{p}$, $BMOA$ and $B^{p}$(Besov) spaces, is defined as $S_{g}f(z)=\int_{0}^{z} f^{\prime}(w) g(w) dw$. In this paper, we prove three necessary and sufficient conditions for the operator $S_{g}$ to have closed range on $H^{p}\hspace{1mm}(1 < p < \infty)$, $BMOA$ and $B^{p}\hspace{1mm}(1 < p < \infty)$.
\end{abstract}
\section{Introduction and Preliminaries}
Let $\mathbb{D}$ denote the open unit disk in the complex plane, $\mathbb{T}$ the unit circle, $A$ the normalized area Lebesgue measure in $\mathbb{D}$ and $m$ the normalized length Lebesgue measure in $\mathbb{T}$. For $1 \leq p < \infty$ the Hardy space $H^{p}$ is defined as the set of all analytic functions $f$ in $\mathbb{D}$ for which
\[
\sup \limits_{0\leq r<1} \int \limits_{\mathbb{T}} \vert f(r \zeta) \vert^{p} dm(\zeta) < +\infty
\]
and the corresponding norm in $H^{p}$ is defined by
\[
\Vert f \Vert_{H^{p}}^{p} = \sup \limits_{0 \leq r<1} \int \limits_{\mathbb{T}} \vert f(r \zeta) \vert^{p} dm(\zeta).
\]
When $p=\infty$, we define $H^{\infty}$ to be the space of bounded analytic functions $f$ in $\mathbb{D}$ and $\Vert f \Vert_{\infty} = \sup\lbrace \vert f(z)\vert:z\in\mathbb{D}\rbrace$.
In this work we will mainly make use of the following equivalent norm (see Calderon's theorem in \cite{Pavlovic}, page 213) in $H^{p}$, $1 \leq p < \infty$:
\begin{equation}\label{Stolz_norm}
\Vert f \Vert_{H^{p}}^{p} = \vert f(0) \vert^{p} + \int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(z)\vert^{2} dA(z) \Big)^{\frac{p}{2}} dm(\zeta),
\end{equation}
where $\Gamma_{\beta}(\zeta)$ is the Stolz angle at $\zeta\in\mathbb{T}$, the conelike region with aperture $\beta\in(0,1)$, which is defined as
\[
\Gamma_{\beta}(\zeta) = \lbrace z\in\mathbb{D}: \vert z\vert < \beta\rbrace \cup \bigcup_{\vert z\vert< \beta}[z,\zeta).
\]
The BMOA space is defined as the set of all analytic functions $f$ in $\mathbb{D}$ for which
\[
\sup \limits_{\beta\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{1-\vert\beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert f^{\prime}(z) \vert^{2} \log\frac{1}{\vert z \vert} dA(z) <\infty
\]
and we may define the corresponding norm in BMOA by
\[
\Vert f \Vert_{*}^{2} = \vert f(0) \vert^{2} + \sup \limits_{\beta\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{1-\vert\beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert f^{\prime}(z) \vert^{2} \log\frac{1}{\vert z \vert} dA(z).
\]
For $1 < p < \infty$ the Besov space $B^{p}$ is defined as the set of all analytic functions $f$ in $\mathbb{D}$ for which
\[
\iint \limits_{\mathbb{D}} \vert f^{\prime}(z) \vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z) < +\infty
\]
and the corresponding norm in $B^{p}$ is defined by
\[
\Vert f \Vert_{B^{p}}^{p} = \vert f(0) \vert^{p} + \iint \limits_{\mathbb{D}} \vert f^{\prime}(z) \vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z).
\]
Let $g:\mathbb{D} \rightarrow \mathbb{C}$ be an analytic function. If $X$ is a space of analytic functions $f$ in $\mathbb{D}$ (in particular, in this paper, $X=H^{p}$ or $X=BMOA$ or $X=B^{p}$) then, the integral operator $S_{g}:X\rightarrow X$, induced by $g$, is defined as
\[
S_{g}f(z) = \int_{0}^{z} f^{\prime}(w) g(w) dw, \hspace{2mm} z\in\mathbb{D},
\]
for every $f\in X$.
Let $\rho(z,w)$ denote the pseudo-hyperbolic distance between $z,w \in \mathbb{D}$,
\[
\rho(z,w) = \Big\vert \frac{z - w}{1 - \overline{z}w} \Big\vert,
\]
$D_{\eta}(a)$ denote the pseudo-hyperbolic disk of center $a \in \mathbb{D}$ and radius $\eta<1$:
\[
D_{\eta}(a) = \lbrace z \in \mathbb{D}: \rho(a,z) < \eta \rbrace,
\]
and $\Delta_{\eta}(\alpha)$ denote the Euclidean disk of center $\alpha \in \mathbb{D}$ and radius $\eta(1-\vert \alpha\vert)$, $\eta<1$:
\[
\Delta_{\eta}(\alpha) = \lbrace z\in\mathbb{D}: \vert z - \alpha\vert< \eta(1-\vert \alpha\vert) \rbrace.
\]
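Let us also record a standard fact that will be useful later: $D_{\eta}(a)$ is itself a Euclidean disk, namely
\[
D_{\eta}(a) = \Big\lbrace z\in\mathbb{D}: \Big\vert z - \frac{(1-\eta^{2})a}{1-\eta^{2}\vert a\vert^{2}} \Big\vert < \frac{\eta(1-\vert a\vert^{2})}{1-\eta^{2}\vert a\vert^{2}} \Big\rbrace,
\]
so that, with respect to the normalized area measure, $A(D_{\eta}(a)) = \frac{\eta^{2}(1-\vert a \vert^{2})^{2}}{(1-\eta^{2}\vert a \vert^{2})^{2}}$.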
In the following, $C$ denotes a positive and finite constant which may change from one occurrence to another. Moreover, by writing
$K(z) \asymp L(z)$ for non-negative quantities $K(z)$ and $L(z)$ we mean that $K(z)$ and $L(z)$ are comparable for $z$ in the relevant set: there are positive constants
$C_{1}$ and $C_{2}$, independent of $z$, such that
\[
C_{1} K(z) \leq L(z) \leq C_{2} K(z).
\]
\section{Closed range integral operators on Hardy spaces}
Let $g:\mathbb{D} \rightarrow \mathbb{C}$ be an analytic function and, for $c>0$, let $G_{c} = \lbrace z\in\mathbb{D}:\vert g(z)\vert>c \rbrace$. It is well known (see \cite{Anderson}) that the integral operator $S_{g}:H^{p}\rightarrow H^{p}$ $(1\leq p < \infty)$ is bounded if and only if $g\in H^{\infty}$.
We say that $S_{g}$, on $H^{p}$, is bounded below if there is $C>0$ such that $\Vert S_{g}f\Vert_{H^{p}} > C \Vert f\Vert_{H^{p}}$ for every $f\in H^{p}$. Since $S_{g}$ maps every constant function to the $0$ function, in order to study the property of being bounded below for $S_{g}$ we must consider spaces of analytic functions modulo the constants or, equivalently, spaces of analytic functions $f$ such that $f(0)=0$. Theorem 2.3 in \cite{Anderson} states that $S_{g}$ is bounded below on $H^{p}/\mathbb{C}$ if and only if it has closed range on $H^{p}/\mathbb{C}$. In what follows we denote $H^{p}/\mathbb{C}$ by $H_{0}^{p}$.
Corollary 3.6 in \cite{Anderson} states that
$S_{g}:H_{0}^{2}\rightarrow H_{0}^{2}$ has closed range if and only if
there exist $c>0$, $\delta > 0$ and $\eta \in (0,1)$ such that
\[
A(G_{c} \cap D_{\eta}(a)) \geq \delta A(D_{\eta}(a))
\]
for all $a \in \mathbb{D}$.
At the end of \cite{Anderson}, A. Anderson posed the question of whether the above condition for $H_{0}^{2}$ also holds for all $H_{0}^{p}$. In this paper, theorem \ref{integral_theorem} gives an affirmative answer to this question in the case $1 < p < \infty$. Although the answer in the case $p=2$ is an immediate consequence of D. Luecking's theorem (see \cite{Anderson}, Proposition 3.5), the case $1 < p < \infty$ requires much more effort.
For $\lambda\in(0,1)$ and $f\in H^{p}$ we set
$$
E_{\lambda}(\alpha) = \lbrace z\in \Delta_{\eta}(\alpha): \vert f^{\prime}(z) \vert^{2} > \lambda \vert f^{\prime}(\alpha)\vert^{2} \rbrace
$$
and
$$
B_{\lambda}f(\alpha) = \frac{1}{A(E_{\lambda}(\alpha))} \iint \limits_{E_{\lambda}(\alpha)} \vert f^{\prime}(z)\vert^{2} dA(z).
$$
Lemma \ref{Luecking_lemma1} is due to D. Luecking (see \cite{Luecking81}, lemma 1).
\begin{lemma}\label{Luecking_lemma1}
Let $f$ be analytic in $\mathbb{D}$, $\alpha\in\mathbb{D}$ and $\lambda\in(0,1)$. Then
\[
\frac{A(E_{\lambda}(\alpha))}{A(\Delta_{\eta}(\alpha))} \geq \frac{\log\frac{1}{\lambda}}{\log\frac{B_{\lambda}f(\alpha)}{\vert f^{\prime}(\alpha) \vert^{2}} + \log\frac{1}{\lambda}}.
\]
\end{lemma}
Moreover in \cite{Luecking81}, the following statement is proved: if $\alpha\in\mathbb{D}$ and $\frac{2\eta}{1+\eta^{2}}\leq r<1$, then
\begin{equation}\label{euclidean_to_pseudo}
\Delta_{\eta}(\alpha) \subseteq D_{r}(\alpha).
\end{equation}
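For completeness, we note that \eqref{euclidean_to_pseudo} can be verified directly: if $z\in\Delta_{\eta}(\alpha)$ then $\vert z\vert \leq \vert\alpha\vert+\eta(1-\vert\alpha\vert)$, hence $\vert 1-\overline{\alpha}z\vert \geq 1-\vert\alpha\vert\vert z\vert \geq (1-\vert\alpha\vert)\big(1+\vert\alpha\vert(1-\eta)\big)$ and
\[
\rho(\alpha,z) = \frac{\vert z-\alpha\vert}{\vert 1-\overline{\alpha}z\vert} < \frac{\eta(1-\vert\alpha\vert)}{(1-\vert\alpha\vert)\big(1+\vert\alpha\vert(1-\eta)\big)} \leq \eta \leq \frac{2\eta}{1+\eta^{2}} \leq r.
\]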
We proceed with the main result of this section.
\begin{theorem}\label{integral_theorem}
Let $1 < p < \infty$ and $g\in H^{\infty}$. Then the following are equivalent:
\begin{enumerate}
\item[(i)] $S_{g}:H_{0}^{p}\rightarrow H_{0}^{p}$ has closed range
\item[(ii)] There exist $c>0$, $\delta > 0$ and $\eta \in (0,1)$ such that
\begin{equation}\label{second_part}
A(G_{c} \cap D_{\eta}(a)) \geq \delta A(D_{\eta}(a))
\end{equation}
for all $a \in \mathbb{D}$.
\item[(iii)] There exist $c>0$, $\delta > 0$ and $\eta \in (0,1)$ such that
\begin{equation}\label{second_part2}
A(G_{c} \cap \Delta_{\eta}(a)) \geq \delta A(\Delta_{\eta}(a))
\end{equation}
for all $a \in \mathbb{D}$.
\end{enumerate}
\end{theorem}
We first prove two lemmas which will play an important role in the proof of theorem \ref{integral_theorem}.
For $\zeta\in\mathbb{T}$ and $0<\beta<\beta^{\prime}<1$ we consider the Stolz angles $\Gamma_{\beta}(\zeta)$ and $\Gamma_{\beta^{\prime}}(\zeta)$, where $\beta^{\prime}$ has been chosen so that $\Delta_{\eta}(\alpha) \subset \Gamma_{\beta^{\prime}}(\zeta)$ for every $\alpha\in\Gamma_{\beta}(\zeta)$.
\begin{lemma}\label{lemma_2}
Let $\varepsilon>0$, let $f$ be analytic in $\mathbb{D}$ and
\[
A = \Big\lbrace \alpha\in\mathbb{D}: \vert f^{\prime}(\alpha)\vert^{2} < \frac{\varepsilon}{A(\Delta_{\eta}(\alpha))} \iint \limits_{\Delta_{\eta}(\alpha)} \vert f^{\prime}(z)\vert^{2} dA(z) \Big\rbrace.
\]
There is $C>0$ depending only on $\eta$ such that
\[
\iint \limits_{A\cap\Gamma_{\beta}(\zeta)} \vert f^{\prime}(z)\vert^{2} dA(z) \leq \varepsilon C \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z)\vert^{2} dA(z)
\]
\end{lemma}
\begin{proof}
Integrating
\[
\vert f^{\prime}(\alpha)\vert^{2} < \frac{\varepsilon}{A(\Delta_{\eta}(\alpha))} \iint \limits_{\Delta_{\eta}(\alpha)} \vert f^{\prime}(z)\vert^{2} dA(z)
\]
over $\alpha\in A\cap\Gamma_{\beta}(\zeta)$ and using Fubini's theorem on the right side, we get
\[
\iint \limits_{A\cap\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha)\vert^{2} dA(\alpha) < \varepsilon \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z)\vert^{2} \Big[\iint \limits_{A\cap\Gamma_{\beta}(\zeta)} \frac{\chi_{\Delta_{\eta}(\alpha)}(z)}{A(\Delta_{\eta}(\alpha))} dA(\alpha)\Big] dA(z)
\]
Using \eqref{euclidean_to_pseudo} with $r=\frac{2\eta}{1+\eta^{2}}$, we have $\chi_{\Delta_{\eta}(\alpha)}(z) \leq \chi_{D_{r}(\alpha)}(z)=\chi_{D_{r}(z)}(\alpha)$. We have that $A(D_{r}(z))\asymp (1-\vert z\vert)^{2}$ and, for $\alpha\in D_{\eta}(z)$, we have $(1-\vert z\vert) \asymp (1-\vert \alpha\vert)$, where the underlying constants in these relations depend only on $\eta$. In addition, $A(\Delta_{\eta}(\alpha)) = \eta^{2}(1-\vert \alpha\vert)^{2}$.
So,
\begin{align}\label{brackets_int}
\iint \limits_{A\cap\Gamma_{\beta}(\zeta)} \frac{\chi_{\Delta_{\eta}(\alpha)}(z)}{A(\Delta_{\eta}(\alpha))} dA(\alpha) & \leq \iint \limits_{A\cap\Gamma_{\beta}(\zeta)} \frac{\chi_{D_{r}(z)}(\alpha)}{\eta^{2}(1-\vert \alpha\vert)^{2}} dA(\alpha)\nonumber\\
& \leq C \iint \limits_{D_{r}(z)} \frac{1}{\eta^{2}(1-\vert z\vert)^{2}} dA(\alpha) = C \frac{A(D_{r}(z))}{\eta^{2}(1-\vert z\vert)^{2}} \leq C,
\end{align}
where $C>0$ depends only on $\eta$. Combining this with the previous inequality yields the desired estimate.
\end{proof}
\begin{lemma}\label{lemma_3}
Let $0<\varepsilon<1$, let $f$ be analytic in $\mathbb{D}$, let $0<\lambda<\frac{1}{2}$ and
\[
B = \Big\lbrace \alpha\in\mathbb{D}: \vert f^{\prime}(\alpha)\vert^{2} < \varepsilon^{3} B_{\lambda}f(\alpha) \Big\rbrace.
\]
There is $C>0$ depending only on $\eta$ such that
\[
\iint \limits_{B\cap \Gamma_{\beta}(\zeta)} \vert f^{\prime}(z)\vert^{2} dA(z) \leq \varepsilon C \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z)\vert^{2} dA(z)
\]
\end{lemma}
\begin{proof}
We write
\[
\iint \limits_{B\cap \Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha)\vert^{2} dA(\alpha) = \iint \limits_{B\cap \Gamma_{\beta}(\zeta)\cap A} \vert f^{\prime}(\alpha)\vert^{2} dA(\alpha) + \iint \limits_{(B\cap \Gamma_{\beta}(\zeta))\setminus A} \vert f^{\prime}(\alpha)\vert^{2} dA(\alpha),
\]
where $A$ is as in lemma \ref{lemma_2}.
The first integral is estimated by lemma \ref{lemma_2}, so it remains to show the desired result for the second integral. Integrating the relation
\[
\vert f^{\prime}(\alpha)\vert^{2} < \varepsilon^{3} B_{\lambda}f(\alpha) = \varepsilon^{3} \frac{1}{A(E_{\lambda}(\alpha))} \iint \limits_{E_{\lambda}(\alpha)} \vert f^{\prime}(z)\vert^{2} dA(z)
\]
over the set $(B\cap \Gamma_{\beta}(\zeta))\setminus A$ and using Fubini's theorem on the right side, we get
\begin{align}\label{epsilon_lambda}
\iint \limits_{(B\cap \Gamma_{\beta}(\zeta))\setminus A} \vert f^{\prime}(\alpha)\vert^{2} dA(\alpha) & \leq \varepsilon^{3} \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z)\vert^{2} \Big[\iint \limits_{(B\cap \Gamma_{\beta}(\zeta))\setminus A} \frac{1}{A(E_{\lambda}(\alpha))} \chi_{E_{\lambda}(\alpha)}(z) dA(\alpha)\Big] dA(z)\nonumber\\
& \leq \varepsilon^{3} \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z)\vert^{2} \Big[\iint \limits_{(B\cap \Gamma_{\beta}(\zeta))\setminus A} \frac{1}{A(E_{\lambda}(\alpha))} \chi_{\Delta_{\eta}(\alpha)}(z) dA(\alpha)\Big] dA(z)
\end{align}
where the last inequality is justified by $E_{\lambda}(\alpha) \subseteq \Delta_{\eta}(\alpha)$. Let $\alpha \not\in A$, i.e.
\begin{equation}\label{notA}
\vert f^{\prime}(\alpha)\vert^{2} \geq \frac{\varepsilon}{A(\Delta_{\eta}(\alpha))} \iint \limits_{\Delta_{\eta}(\alpha)} \vert f^{\prime}(z)\vert^{2} dA(z).
\end{equation}
Set $r=\eta(1 - \vert \alpha \vert)$ and suppose that $\vert z-\alpha \vert < \frac{r}{4}$. We have that
\begin{align}\label{Cauchy}
\vert f^{\prime}(z)^{2} - f^{\prime}(\alpha)^{2} \vert & = \frac{1}{2\pi} \Bigg\vert \int \limits_{\vert w-\alpha\vert = \frac{r}{2}} f^{\prime}(w)^{2} \Bigg(\frac{1}{w-z} - \frac{1}{w-\alpha} \Bigg) dw \Bigg\vert \nonumber\\
& = \frac{1}{2\pi} \Bigg\vert\int \limits_{\vert w-\alpha\vert = \frac{r}{2}} f^{\prime}(w)^{2} \frac{z-\alpha}{ (w - z) (w - \alpha)} dw\Bigg\vert.
\end{align}
For $\vert w - \alpha\vert = \frac{r}{2}$, by the subharmonicity of $\vert f^{\prime}\vert^{2}$ we have
\begin{align*}
\vert f^{\prime}(w)\vert^{2} < \frac{1}{\frac{r^{2}}{4}} \iint \limits_{\vert u-w\vert\leq\frac{r}{2}} \vert f^{\prime}(u)\vert^{2} dA(u)\leq \frac{C}{A(\Delta_{\eta}(\alpha))} \iint \limits_{\Delta_{\eta}(\alpha)} \vert f^{\prime}(u)\vert^{2} dA(u).
\end{align*}
Since $\vert w - z\vert > \frac{r}{4}$ when $\vert w - \alpha\vert = \frac{r}{2}$, from \eqref{Cauchy} we get
\[
\vert f^{\prime}(z)^{2} - f^{\prime}(\alpha)^{2} \vert \leq \frac{C \vert z-\alpha\vert}{r} \frac{1}{A(\Delta_{\eta}(\alpha))} \iint \limits_{\Delta_{\eta}(\alpha)} \vert f^{\prime}(u)\vert^{2} dA(u).
\]
Since we may assume that $C>2$, if $\vert z-\alpha\vert < \frac{\varepsilon r}{2C}$ then $\vert z-\alpha\vert < \frac{r}{4}$, and we get
\begin{equation}\label{last_eq_lem}
\vert f^{\prime}(z)^{2} - f^{\prime}(\alpha)^{2} \vert \leq \frac{\varepsilon}{2A(\Delta_{\eta}(\alpha))} \iint \limits_{\Delta_{\eta}(\alpha)} \vert f^{\prime}(u)\vert^{2} dA(u).
\end{equation}
Combining \eqref{notA} and \eqref{last_eq_lem}, we get
\[
\vert f^{\prime}(z)\vert^{2} \geq \frac{1}{2} \vert f^{\prime}(\alpha)\vert^{2} > \lambda \vert f^{\prime}(\alpha)\vert^{2}.
\]
This means that if $\Delta^{\prime}=\lbrace z\in\mathbb{D}: \vert z-\alpha\vert < \frac{\varepsilon r}{2C}\rbrace$ then $\Delta^{\prime} \subset E_{\lambda}(\alpha)$ and
\[
A(E_{\lambda}(\alpha))\geq A(\Delta^{\prime})= \frac{\varepsilon^{2}}{4C^{2}}r^{2}
= \frac{\varepsilon^{2}}{4C^{2}} A(\Delta_{\eta}(\alpha)).
\]
Finally, using this last inequality in \eqref{epsilon_lambda} and arguing as in \eqref{brackets_int}, the integral in brackets is bounded by $\frac{4C^{2}}{\varepsilon^{2}}$ times a constant depending only on $\eta$; since $\varepsilon^{3}\cdot\varepsilon^{-2}=\varepsilon$, this completes the proof.
\end{proof}
\begin{proof}[Proof of theorem \ref{integral_theorem}.] $(ii) \Leftrightarrow (iii)$ This is easy and it is proved in \cite{Luecking81}.\\
$(iii) \Rightarrow (i)$ Let $\alpha\in \mathbb{D}\setminus B$, where $B$ is as in lemma \ref{lemma_3}, with $0<\varepsilon<1$ and $0<\lambda<\frac{1}{2}$. Then $\frac{B_{\lambda}f(\alpha)}{\vert f^{\prime}(\alpha)\vert^{2}} \leq \frac{1}{\varepsilon^{3}}$ and, if we choose $\lambda<\varepsilon^{\frac{6}{\delta}}$, then, from lemma \ref{Luecking_lemma1}, we get that
\begin{align}\label{lambda_rel}
\frac{A(E_{\lambda}(\alpha))}{A(\Delta_{\eta}(\alpha))} > \frac{\frac{2}{\delta}\log\frac{1}{\varepsilon^{3}}}{\log\frac{1}{\varepsilon^{3}}+\frac{2}{\delta}\log\frac{1}{\varepsilon^{3}}}> 1 - \frac{\delta}{2}.
\end{align}
Combining \eqref{second_part2} and \eqref{lambda_rel}, we get
\begin{align*}
A(G_{c} \cap E_{\lambda}(\alpha)) &= A(G_{c} \cap \Delta_{\eta}(\alpha)) - A(G_{c} \cap (\Delta_{\eta}(\alpha)\setminus E_{\lambda}(\alpha)))\\
& \geq \delta A(\Delta_{\eta}(\alpha)) - A(\Delta_{\eta}(\alpha)\setminus E_{\lambda}(\alpha))\\
& = \delta A(\Delta_{\eta}(\alpha)) - A(\Delta_{\eta}(\alpha)) + A(E_{\lambda}(\alpha))\\
& \geq \delta A(\Delta_{\eta}(\alpha)) - A(\Delta_{\eta}(\alpha)) + A(\Delta_{\eta}(\alpha)) - \frac{\delta}{2} A(\Delta_{\eta}(\alpha))\\
& = \frac{\delta}{2} A(\Delta_{\eta}(\alpha))
\end{align*}
Now let $f\in H_{0}^{p}, \zeta\in\mathbb{T}$ and $\alpha\in \Gamma_{\beta}(\zeta)\setminus B$. Then, using the last relation and $E_{\lambda}(\alpha) \subset \Delta_{\eta}(\alpha)\subset\Gamma_{\beta^{\prime}}(\zeta)$, we get
\begin{align*}
\frac{1}{A(\Delta_{\eta}(\alpha))} & \iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \chi_{\Delta_{\eta}(\alpha)}(z) \vert f^{\prime}(z) \vert^{2} dA(z) \\
& \geq \frac{\delta}{2 A(G_{c} \cap E_{\lambda}(\alpha))} \iint \limits_{G_{c}\cap E_{\lambda}(\alpha)} \chi_{\Delta_{\eta}(\alpha)}(z) \vert f^{\prime}(z) \vert^{2} dA(z)\\
& = \frac{\delta}{2 A(G_{c} \cap E_{\lambda}(\alpha))} \iint \limits_{G_{c}\cap E_{\lambda}(\alpha)} \vert f^{\prime}(z) \vert^{2} dA(z)
\geq \frac{\delta\lambda}{2} \vert f^{\prime}(\alpha) \vert^{2}.
\end{align*}
Integrating the last relation over the set $\Gamma_{\beta}(\zeta)\setminus B$ and using Fubini's theorem on the left side, we have
\begin{equation*}
\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} \Big[\hspace{1mm} \iint \limits_{\Gamma_{\beta}(\zeta)\setminus B} \frac{\chi_{\Delta_{\eta}(\alpha)}(z)}{A(\Delta_{\eta}(\alpha))} dA(\alpha)\Big] dA(z) \geq \frac{\delta\lambda}{2} \iint \limits_{\Gamma_{\beta}(\zeta)\setminus B} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha).
\end{equation*}
With similar arguments as in relation \eqref{brackets_int}, we can show that the integral in the brackets is bounded above by a constant $C>0$ depending only on $\eta$. So, we have that
\begin{align*}
\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z) & \geq \frac{C\delta\lambda}{2} \iint \limits_{\Gamma_{\beta}(\zeta)\setminus B} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\\
& = \frac{C\delta\lambda}{2} \iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha) - \frac{C\delta\lambda}{2} \iint \limits_{\Gamma_{\beta}(\zeta)\cap B} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha).
\end{align*}
Because of lemma \ref{lemma_3} we have that
\[
\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z) \geq \frac{C\delta\lambda}{2} \iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha) - \varepsilon \frac{C^{\prime}\delta\lambda}{2} \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)
\]
and so
\[
\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z) + \varepsilon \frac{C^{\prime}\delta\lambda}{2} \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha) \geq \frac{C\delta\lambda}{2} \iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha).
\]
Hence,
\begin{align*}
\Big(\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{1}{2}} & + \Big(\frac{C^{\prime}\varepsilon\delta\lambda}{2}\Big)^{\frac{1}{2}} \Big(\iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{1}{2}} \\
& \geq \Big(\frac{C\delta\lambda}{2}\Big)^{\frac{1}{2}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{1}{2}}.
\end{align*}
Applying Minkowski's inequality, we get
\begin{align*}
\Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}}dm(\zeta)\Big]^{\frac{1}{p}} & + \Big( \frac{C^{\prime}\varepsilon\delta\lambda}{2} \Big)^{\frac{1}{2}} \Big[\int \limits_{\mathbb{T}} \Big( \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}} \\
& \geq \Big(\frac{C\delta\lambda}{2}\Big)^{\frac{1}{2}} \Big[ \int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}
\end{align*}
and so
\begin{align}\label{almost_last}
\Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}}dm(\zeta) & \Big]^{\frac{1}{p}} \geq \Big(\frac{C\delta\lambda}{2}\Big)^{\frac{1}{2}} \Big[ \int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}\nonumber \\
&- \Big( \frac{C^{\prime}\varepsilon\delta\lambda}{2} \Big)^{\frac{1}{2}} \Big[\int \limits_{\mathbb{T}} \Big( \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}.
\end{align}
According to \eqref{Stolz_norm}, both integrals on the right side of \eqref{almost_last} represent equivalent norms on $H_{0}^{p}$. Due to the relation between $\beta$ and $\beta^{\prime}$, there is $C^{\prime\prime}>0$, which depends only on $\eta$, such that
\begin{equation}\label{equiv_stolz_norms}
\Big[\int \limits_{\mathbb{T}} \Big( \iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}
\leq C^{\prime\prime} \Big[ \int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}.
\end{equation}
Combining relations \eqref{almost_last} and \eqref{equiv_stolz_norms}, we get
\begin{align*}
\Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} & dm(\zeta) \Big]^{\frac{1}{p}} \geq \Big(\frac{C\delta\lambda}{2}\Big)^{\frac{1}{2}} \Big[ \int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}\\
&- \Big( \frac{C^{\prime}\varepsilon\delta\lambda}{2} \Big)^{\frac{1}{2}} C^{\prime\prime} \Big[ \int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta}(\zeta)} \vert f^{\prime}(\alpha) \vert^{2} dA(\alpha)\Big)^{\frac{p}{2}} dm(\zeta)\Big]^{\frac{1}{p}}\\
& = \Big(\frac{\delta\lambda}{2}\Big)^{\frac{1}{2}} [C^{\frac{1}{2}} - \varepsilon^{\frac{1}{2}} {C^{\prime}}^{\frac{1}{2}}C^{\prime\prime}] \Vert f \Vert_{H_{0}^{p}}.
\end{align*}
Choosing $\varepsilon$ small enough so that $C^{\frac{1}{2}} - \varepsilon^{\frac{1}{2}} {C^{\prime}}^{\frac{1}{2}}C^{\prime\prime}>0$, we have that
\begin{equation*}
\Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}}dm(\zeta)\Big]^{\frac{1}{p}} \geq C \Vert f \Vert_{H_{0}^{p}},
\end{equation*}
and since $G_{c} = \lbrace z\in\mathbb{D}: \vert g(z)\vert>c \rbrace$, we have
\begin{align*}
\Vert S_{g}f \Vert_{H_{0}^{p}} & \asymp \Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert (S_{g}f(z))^{\prime} \vert^{2} dA(z)\Big)^{\frac{p}{2}}dm(\zeta)\Big]^{\frac{1}{p}}\\
& = \Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}}dm(\zeta)\Big]^{\frac{1}{p}} \\
&\geq c \Big[\int \limits_{\mathbb{T}} \Big(\iint \limits_{G_{c}\cap\Gamma_{\beta^{\prime}}(\zeta)} \vert f^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}}dm(\zeta)\Big]^{\frac{1}{p}}\geq C \Vert f \Vert_{H_{0}^{p}}.
\end{align*}
So the integral operator $S_{g}$ has closed range.
$(i) \Rightarrow (ii)$ Let $\alpha\in\mathbb{D}$, $\zeta\in\mathbb{T}$, $\eta\in (0,1)$, $E(z_{0};r)=\lbrace z\in\mathbb{D}: \vert z-z_{0} \vert <r\rbrace $, $C(z_{0},r)=\lbrace z\in\mathbb{D}: \vert z-z_{0} \vert =r \rbrace$ and the arc $I_{\alpha}=\lbrace \zeta\in\mathbb{T}:\Gamma_{\frac{1}{2}}(\zeta)\cap D_{\eta}(\alpha)\neq \emptyset\rbrace$. It is easy to see that $\zeta\in I_{\alpha}$ is equivalent to $\alpha\in\Gamma_{\eta^{\prime}}(\zeta)$, where $\eta^{\prime}$ depends only on $\eta$. In fact, an elementary geometric argument shows that $1-\eta^{\prime} \asymp 1-\eta$, where the underlying constants are absolute.
Set $R_{0}=\frac{1+\eta^{\prime}}{2}$. We continue with the proof by considering two cases for $\alpha$: {\bf (a)} $R_{0} \leq \vert\alpha\vert<1$ and {\bf (b)} $0\leq \vert\alpha\vert\leq R_{0}$.
{\bf Case (a) $R_{0} \leq \vert\alpha\vert < 1$:} Then another simple geometric argument gives $m(I_{\alpha}) \asymp \frac{1-\vert\alpha\vert}{(1-\eta^{\prime})^{\frac{1}{2}}}$ and hence:
\begin{equation}\label{I_alpha_estimation}
m(I_{\alpha}) \asymp \frac{1-\vert\alpha\vert}{(1-\eta)^{\frac{1}{2}}}.
\end{equation}
If $S_{g}$ has closed range on $H_{0}^{p}$ then there exists $C>0$ such that for every $f \in H_{0}^{p}$ we have
\begin{equation}\label{Hardy_closed_range}
C\Vert S_{g}f \Vert_{H_{0}^{p}}^{p} \geq \Vert f \Vert_{H_{0}^{p}}^{p}.
\end{equation}
Let
\[
\psi_{\alpha}(z)=\frac{\alpha-z}{1-\overline{\alpha}z}.
\]
Then, after some calculations, we get that $\Vert \psi_{\alpha}-\alpha \Vert_{H^{p}}^{p} \asymp (1-\vert\alpha\vert)$.
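For completeness, here is one way to carry out this computation (for $p>1$, which is the relevant range here). Since
\[
\psi_{\alpha}(z)-\alpha = \frac{\alpha-z-\alpha(1-\overline{\alpha}z)}{1-\overline{\alpha}z} = -\frac{(1-\vert\alpha\vert^{2})z}{1-\overline{\alpha}z},
\]
we have $\vert \psi_{\alpha}(\zeta)-\alpha\vert = \frac{1-\vert\alpha\vert^{2}}{\vert 1-\overline{\alpha}\zeta\vert}$ for $\zeta\in\mathbb{T}$, and the standard estimate $\int_{\mathbb{T}}\vert 1-\overline{\alpha}\zeta\vert^{-p}\,dm(\zeta) \asymp (1-\vert\alpha\vert)^{1-p}$ gives
\[
\Vert \psi_{\alpha}-\alpha \Vert_{H^{p}}^{p} = (1-\vert\alpha\vert^{2})^{p}\int_{\mathbb{T}}\frac{dm(\zeta)}{\vert 1-\overline{\alpha}\zeta\vert^{p}} \asymp (1-\vert\alpha\vert^{2})^{p}(1-\vert\alpha\vert)^{1-p} \asymp 1-\vert\alpha\vert.
\]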
Setting $f=\psi_{\alpha}-\alpha$ in \eqref{Hardy_closed_range} and using $(x+y)^{p}\leq 2^{p-1}(x^{p} + y^{p})$, we get
\begin{align}\label{basic_closed_range}
1-\vert\alpha\vert & \leq C\Vert S_{g}(\psi_{\alpha}-\alpha) \Vert_{H_{0}^{p}}^{p}
= C\int \limits_{\mathbb{T}} \Big(\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} dm(\zeta)\nonumber\\
& \leq C \int \limits_{I_{\alpha}} \Big(\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)\cap G_{c} \cap D_{\eta}(\alpha)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} dm(\zeta) \nonumber\\
& + C \int \limits_{I_{\alpha}} \Big(\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)\cap (D_{\eta}(\alpha) \setminus G_{c})} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} dm(\zeta) \nonumber\\
&+ C\int \limits_{I_{\alpha}} \Big(\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta) \setminus D_{\eta}(\alpha)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} dm(\zeta) \nonumber\\
& + C \int \limits_{\mathbb{T} \setminus I_{\alpha}} \Big(\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} dm(\zeta) \nonumber\\
& = C(I_{1} + I_{2} + I_{3} + I_{4}).
\end{align}
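Recall that the pseudohyperbolic disc $D_{\eta}(\alpha)=\psi_{\alpha}(D_{\eta}(0))$ is a Euclidean disc of radius $\frac{\eta(1-\vert\alpha\vert^{2})}{1-\eta^{2}\vert\alpha\vert^{2}}$; since $A$ denotes throughout the normalized area measure (so that $A(\mathbb{D})=1$, consistently with the computation of $I_{2}$ below), this yields
\[
A(D_{\eta}(\alpha)) = \frac{(1-\vert \alpha \vert^{2})^{2}}{(1-\eta^{2}\vert \alpha \vert^{2})^{2}}\,\eta^{2}.
\]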
Using $A(D_{\eta}(\alpha)) = \frac{(1-\vert \alpha \vert^{2})^{2}}{(1-\eta^{2}\vert \alpha \vert^{2})^{2}}\eta^{2}\leq \frac{(1-\vert \alpha \vert^{2})^{2}}{(1-\eta^{2})^{2}}$, we get
\begin{align*}
I_{1} & \leq \Vert g \Vert_{\infty}^{p} \int \limits_{I_{\alpha}} \Big(\iint \limits_{G_{c} \cap D_{\eta}(\alpha)} \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert 1-\overline{\alpha}z\vert^{4}} dA(z)\Big)^{\frac{p}{2}} dm(\zeta)\\
& \leq \Vert g \Vert_{\infty}^{p} m(I_{\alpha}) \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{(1-\vert \alpha \vert^{2})^{2}}\Big)^{\frac{p}{2}}\\
& \leq \Vert g \Vert_{\infty}^{p} m(I_{\alpha}) \frac{1}{(1-\eta^{2})^{p}} \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}\Big)^{\frac{p}{2}}.
\end{align*}
Using $\vert g(z) \vert \leq c$ in $\mathbb{D} \setminus G_{c}$ and making the change of variables $w=\psi_{\alpha}(z)$, we get
\begin{align*}
I_{2} & \leq c^{p} \int \limits_{I_{\alpha}} \Big(\iint \limits_{\mathbb{D}} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} dA(z)\Big)^{\frac{p}{2}} dm(\zeta) = c^{p} \int \limits_{I_{\alpha}} \Big(\iint \limits_{\mathbb{D}} dA(w)\Big)^{\frac{p}{2}} dm(\zeta) = c^{p} m(I_{\alpha}).
\end{align*}
We bound $I_{3}$ from above by extending the inner integral over $\mathbb{D} \setminus D_{\eta}(\alpha)$ and then making the change of variables $w=\psi_{\alpha}(z)$, to get
\begin{align*}
I_{3} & \leq \Vert g \Vert_{\infty}^{p} \int \limits_{I_{\alpha}} \Big(\iint \limits_{\mathbb{D} \setminus D_{\eta}(0)} dA(w)\Big)^{\frac{p}{2}} dm(\zeta) = \Vert g \Vert_{\infty}^{p} m(I_{\alpha}) (1-\eta^{2})^{\frac{p}{2}}.
\end{align*}
In order to estimate $I_{4}$ we first have to estimate $\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} dA(z)$ for $\zeta\in\mathbb{T} \setminus I_{\alpha}$. Without loss of generality we may assume that $\alpha\in[R_{0},1)$. For $j\in\mathbb{N}$, $j\geq 2$, we define $r_{j}=1-\frac{1}{2^{j}}$ and consider the sets $\Omega_{1} = E(0;\frac{1}{2})$ and $\Omega_{j}= (E(0;r_{j})\setminus E(0;r_{j-1}))\cap \Gamma_{\frac{1}{2}}(\zeta)$. Then $\Gamma_{\frac{1}{2}}(\zeta) = \bigcup \limits_{j=1}^{+\infty}\Omega_{j}$ and $A(\Omega_{j})\asymp \frac{1}{4^{j}}$ for $j\geq 1$. We fix $z_{j}\in\Omega_{j}$ such that $Arg(z_{j}) = Arg(\zeta)$. Then, if $z\in\Omega_{j}$, we have $\vert 1-\alpha z\vert \asymp \vert \frac{1}{\alpha}-z\vert \asymp \vert \frac{1}{\alpha}-z_{j}\vert$. We also have $\vert \frac{1}{\alpha}-z_{j}\vert \asymp \frac{1}{2^{j}} + 1-\vert\alpha\vert + \vert Arg(\zeta)\vert$. In all these relations, the underlying constants are absolute. If $\zeta\in\mathbb{T}\setminus I_{\alpha}$, then $\alpha\not\in \Gamma_{\frac{1}{2}}(\zeta)$, which means that $1-\vert \alpha \vert < \vert Arg(\zeta)\vert$, so $\vert \frac{1}{\alpha}-z_{j}\vert \asymp \frac{1}{2^{j}} + \vert Arg(\zeta)\vert$. There is some $j_{0}$ such that $\frac{1}{2^{j_{0}}} \leq \vert Arg(\zeta)\vert \leq \frac{1}{2^{j_{0}-1}}$. For $j<j_{0}$ we have $\vert Arg(\zeta)\vert < \frac{1}{2^{j}}$, which implies $\vert \frac{1}{\alpha}-z_{j}\vert \asymp \frac{1}{2^{j}}$, and for $j>j_{0}$ we have $\vert Arg(\zeta)\vert > \frac{1}{2^{j}}$, which implies $\vert \frac{1}{\alpha}-z_{j}\vert \asymp \vert Arg(\zeta)\vert$. Therefore
\begin{align*}
\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2}& dA(z) = \iint \limits_{\Omega_{1}} \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert 1-\alpha z\vert^{4}} dA(z) + \sum_{j=2}^{+\infty}\iint \limits_{\Omega_{j}} \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert 1-\alpha z\vert^{4}} dA(z)\\
& \asymp (1-\vert \alpha \vert^{2})^{2} + \sum_{j=2}^{j_{0}}\iint \limits_{\Omega_{j}} \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert \frac{1}{\alpha}-z_{j}\vert^{4}} dA(z) + \sum_{j=j_{0}}^{+\infty} \iint \limits_{\Omega_{j}} \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert \frac{1}{\alpha}-z_{j}\vert^{4}} dA(z)\\
& \asymp (1-\vert \alpha \vert^{2})^{2} + \sum_{j=2}^{j_{0}} A(\Omega_{j}) (1-\vert \alpha \vert^{2})^{2}(2^{j})^{4} + \sum_{j=j_{0}}^{+\infty} A(\Omega_{j}) \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert Arg(\zeta)\vert^{4}}\\
& \asymp (1-\vert \alpha \vert^{2})^{2} + (1-\vert \alpha \vert^{2})^{2} \sum_{j=2}^{j_{0}} \frac{1}{4^{j}} 16^{j} + \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert Arg(\zeta)\vert^{4}} \sum_{j=j_{0}}^{+\infty} \frac{1}{4^{j}}.
\end{align*}
But $\sum_{j=2}^{j_{0}} 4^{j}\asymp 4^{j_{0}} \asymp \frac{1}{\vert Arg(\zeta)\vert^{2}}$ and $\sum_{j=j_{0}}^{+\infty} \frac{1}{4^{j}}\asymp \frac{1}{4^{j_{0}}}\asymp \vert Arg(\zeta)\vert^{2}$. Therefore
\begin{align*}
\iint \limits_{\Gamma_{\frac{1}{2}}(\zeta)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2}& dA(z) \asymp (1-\vert \alpha \vert^{2})^{2} + \frac{(1-\vert \alpha \vert^{2})^{2}}{\vert Arg(\zeta)\vert^{2}}.
\end{align*}
Since $\alpha$ is positive, there is $\phi_{0}$ such that $\mathbb{T}\setminus I_{\alpha}=[\phi_{0}, 2\pi-\phi_{0}]$ and $\phi_{0}\asymp m(I_{\alpha})$. Therefore
\begin{align*}
I_{4} & \asymp \int_{\phi_{0}}^{\pi} (1-\vert \alpha \vert^{2})^{p} d\phi + \int_{\phi_{0}}^{\pi} \frac{(1-\vert \alpha \vert^{2})^{p}}{\phi^{p}}d\phi\\
& \asymp (1-\vert \alpha \vert^{2})^{p} + \frac{(1-\vert \alpha \vert^{2})^{p}}{\phi_{0}^{p-1}}\\
& \asymp (1-\vert \alpha \vert^{2})^{p} + \frac{(1-\vert \alpha \vert^{2})^{p}}{m(I_{\alpha})^{p-1}}.
\end{align*}
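Here we used that, for $p>1$ and small $\phi_{0}$,
\[
\int_{\phi_{0}}^{\pi}\frac{d\phi}{\phi^{p}} = \frac{\phi_{0}^{1-p}-\pi^{1-p}}{p-1} \asymp \phi_{0}^{1-p}
\]
(for $p=1$ the integral is logarithmic instead).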
Substituting the estimates for $I_{1},I_{2},I_{3},I_{4}$ in \eqref{basic_closed_range}, we get
\begin{align*}
1-\vert \alpha \vert &\leq C\Big[\Vert g \Vert_{\infty}^{p} m(I_{\alpha}) \frac{1}{(1-\eta^{2})^{p}} \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}\Big)^{\frac{p}{2}} + c^{p} m(I_{\alpha}) \\
& + \Vert g \Vert_{\infty}^{p} m(I_{\alpha}) (1-\eta^{2})^{\frac{p}{2}} + (1-\vert \alpha \vert^{2})^{p} + \frac{(1-\vert \alpha \vert^{2})^{p}}{m(I_{\alpha})^{p-1}}\Big].
\end{align*}
Using \eqref{I_alpha_estimation} we get
\begin{align*}
1-\vert \alpha \vert &\leq C\Big[\Vert g \Vert_{\infty}^{p} \frac{1-\vert\alpha\vert}{(1-\eta)^{\frac{1}{2}}} \frac{1}{(1-\eta^{2})^{p}} \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}\Big)^{\frac{p}{2}}\\
& + c^{p} \frac{1-\vert\alpha\vert}{(1-\eta)^{\frac{1}{2}}}
+ \Vert g \Vert_{\infty}^{p} \frac{1-\vert\alpha\vert}{(1-\eta)^{\frac{1}{2}}} (1-\eta^{2})^{\frac{p}{2}} \\
& + (1-\vert \alpha \vert^{2})(1-\eta^{2})^{p-1} + (1-\vert \alpha \vert^{2})(1-\eta)^{\frac{p-1}{2}}\Big].
\end{align*}
Thus
\begin{align*}
C &\leq \Vert g \Vert_{\infty}^{p} \frac{1}{(1-\eta)^{\frac{2p+1}{2}}} \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}\Big)^{\frac{p}{2}} + \frac{c^{p}}{(1-\eta)^{\frac{1}{2}}} \\
& + \Vert g \Vert_{\infty}^{p} (1-\eta)^{\frac{p-1}{2}} + (1-\eta)^{p-1} + (1-\eta)^{\frac{p-1}{2}}.
\end{align*}
Choose $\eta$ close enough to 1 so that $\Vert g \Vert_{\infty}^{p} (1-\eta)^{\frac{p-1}{2}} + (1-\eta)^{p-1} + (1-\eta)^{\frac{p-1}{2}} <\frac{C}{4}$ and then set $C_{\eta}=\frac{1}{(1-\eta)^{\frac{1}{2}}}$. We have that
\begin{align*}
\frac{3C}{4} \leq \Vert g \Vert_{\infty}^{p} C_{\eta}^{2p+1} \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}\Big)^{\frac{p}{2}} + c^{p} C_{\eta}.
\end{align*}
Choose $c$ small enough so that $c^{p} C_{\eta} <\frac{C}{4}$. Then
\begin{align*}
\frac{C}{2} \leq \Vert g \Vert_{\infty}^{p} C_{\eta}^{2p+1} \Big(\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}\Big)^{\frac{p}{2}}
\end{align*}
and finally
\begin{align*}
\Big(\frac{C}{2\Vert g \Vert_{\infty}^{p} C_{\eta}^{2p+1}}\Big)^{\frac{2}{p}} \leq \frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))},
\end{align*}
that is,
\begin{equation*}
A(G_{c}\cap D_{\eta}(\alpha)) \geq \delta A(D_{\eta}(\alpha)) \quad\text{with}\quad \delta=\Big(\frac{C}{2\Vert g \Vert_{\infty}^{p} C_{\eta}^{2p+1}}\Big)^{\frac{2}{p}},
\end{equation*}
for every $\alpha$ with $R_{0} \leq\vert \alpha\vert<1$.
{\bf Case (b)} $0\leq \vert\alpha\vert\leq R_{0}$: There exists $\eta_{1}$, depending only on $\eta$, such that $D_{\eta}(R_{0}) \subseteq D_{\eta_{1}}(0)$. Take $\alpha^{\prime}$ so that $\vert\alpha^{\prime}\vert=R_{0}$ and $Arg(\alpha^{\prime})=Arg(\alpha)$. Then $D_{\eta}(\alpha^{\prime}) \subseteq D_{\eta_{1}}(\alpha)$. Set $\eta_{2}=\max\lbrace\eta, \eta_{1}\rbrace$. Then from case (a) for $\alpha^{\prime}$ we have
\begin{align*}
A(G_{c}\cap D_{\eta_{2}}(\alpha)) & \geq A(G_{c}\cap D_{\eta_{1}}(\alpha)) \geq A(G_{c}\cap D_{\eta}(\alpha^{\prime}))\\
& \geq \delta A(D_{\eta}(\alpha^{\prime})) \geq C\delta A(D_{\eta_1}(\alpha)) \geq C\delta A(D_{\eta_2}(\alpha)),
\end{align*}
where the constants $C>0$ depend only on $\eta$.
Moreover, when $R_{0} \leq\vert \alpha\vert<1$, we have
\begin{align*}
A(G_{c}\cap D_{\eta_{2}}(\alpha)) & \geq A(G_{c}\cap D_{\eta}(\alpha)) \geq \delta A(D_{\eta}(\alpha)) \geq C\delta A(D_{\eta_2}(\alpha)),
\end{align*}
where the constant $C>0$ depends only on $\eta$. So, we have proved that there are $\eta_{2}\in(0,1)$, $c>0$ and $C>0$ such that
\[
A(G_{c}\cap D_{\eta_{2}}(\alpha)) \geq C A(D_{\eta_2}(\alpha)),
\]
for every $\alpha\in\mathbb{D}$, which is what we had to prove.
\end{proof}
\begin{remark}
Observe that the proof of the implication $(ii) \Rightarrow (i)$ in theorem \ref{integral_theorem} remains valid in the case $p=1$.
\end{remark}
\section{Closed range integral operators on BMOA space}
Denote by $BMOA_{0}$ the space $BMOA/\mathbb{C}$. In \cite{Anderson}, A. Anderson posed the question of finding a necessary and sufficient condition for the operator $S_{g}$ to have closed range on $BMOA_{0}$.
Next, we answer this question, proving that
conditions (ii) and (iii) of theorem \ref{integral_theorem}, for $H_{0}^{p}$, are also necessary and sufficient for the integral operator $S_{g}$ to have closed range on $BMOA_{0}$.
Let $z_{0}\in\mathbb{D}$. The point evaluation functional of the derivative on $BMOA$ induced by $z_{0}$ is defined as $\Lambda_{z_{0}}f=f^{\prime}(z_{0})$, $f\in BMOA$. It is easy to check that $\Lambda_{z_{0}}$ is bounded on $BMOA$. Therefore, using Theorem 2.2 and Corollary 2.3 in \cite{Anderson}, we conclude that
the operator $S_{g}:BMOA_{0} \rightarrow BMOA_{0}$ is bounded if and only if $g\in H^{\infty}$. So, we consider $g\in H^{\infty}$ and set again $G_{c} = \lbrace z\in\mathbb{D}: \vert g(z)\vert>c\rbrace$.
The following theorem is the main result of this section.
\begin{theorem}\label{integral_theorem_bmoa}
Let $g\in H^{\infty}$. Then the following are equivalent:
\begin{enumerate}
\item[(i)] The operator $S_{g}:BMOA_{0}\rightarrow BMOA_{0}$ has closed range.
\item[(ii)] There exist $c>0$, $\delta > 0$ and $\eta \in (0,1)$ such that
\begin{equation}\label{theorem_rel}
A(G_{c} \cap D_{\eta}(a)) \geq \delta A(D_{\eta}(a))
\end{equation}
for all $a \in \mathbb{D}$.
\end{enumerate}
\end{theorem}
Recall that the weighted Bergman space $\mathbb{A}_{\gamma}^{p}, \gamma>-1$, is defined as the set of all analytic functions $f$ in $\mathbb{D}$ such that
\[
\iint \limits_{\mathbb{D}} \vert f(z)\vert^{p} (1-\vert z\vert^{2})^{\gamma}dA(z) < \infty.
\]
We will make use of the following theorem of D. Luecking (see \cite{Luecking81}).
\begin{theorem}\label{Lue2}
Let $p\geq 1$, $\gamma>-1$, and let $G\subseteq\mathbb{D}$ be measurable. The following assertions are equivalent.
\begin{enumerate}
\item[(i)] There exists $C>0$ such that
\begin{equation}\label{Lue_integrals}
\iint \limits_{G} \vert f(z) \vert^{p} (1 - \vert z \vert^{2})^{\gamma} dA(z)
\geq C \iint \limits_{\mathbb{D}} \vert f(z) \vert^{p} (1 - \vert z \vert^{2})^{\gamma} dA(z)
\end{equation}
for every $f \in \mathbb{A}_{\gamma}^{p}$.
\item[(ii)] There exist $\delta > 0$ and $\eta \in (0,1)$ such that
\[
A(G \cap D_{\eta}(a)) \geq \delta A(D_{\eta}(a))
\]
for all $a \in \mathbb{D}$.
\end{enumerate}
\end{theorem}
In the proof of theorem \ref{integral_theorem_bmoa} we will use the fact that $\log\frac{1}{\vert z \vert} \asymp 1-\vert z \vert^{2}$ for $\delta\leq\vert z\vert < 1$, with implied constants depending only on the fixed but arbitrary $\delta\in(0,1)$.
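This equivalence is elementary: for $\delta\leq t <1$,
\[
\frac{1-t^{2}}{2} \leq 1-t \leq \log\frac{1}{t} = \int_{t}^{1}\frac{ds}{s} \leq \frac{1-t}{\delta} \leq \frac{1-t^{2}}{\delta}.
\]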
\begin{proof}[Proof of theorem \ref{integral_theorem_bmoa}.]
$(ii) \Rightarrow (i)$ If \eqref{theorem_rel} holds then, because of theorem \ref{Lue2}, \eqref{Lue_integrals} also holds, with $\gamma=1$, for $G=G_{c}$. For $\beta\in\mathbb{D}$, $z\in\mathbb{D}$ and $f\in BMOA_{0}$, we consider the function $h_{\beta}(z) = \frac{(1-\vert\beta\vert^{2})^{\frac{1}{2}}}{1-\overline{\beta}z} f^{\prime}(z)$. It is easy to see that if $f\in BMOA_{0}$ then $h_{\beta}\in\mathbb{A}_{1}^{2}$.
Indeed
\begin{align*}
\Vert h_\beta \Vert_{\mathbb{A}_{1}^{2}}^{2} & = \iint \limits_{\mathbb{D}} \frac{1-\vert\beta\vert^{2}}{\vert 1-\overline{\beta}z \vert^{2}} \vert f^{\prime}(z) \vert^{2} (1-\vert z\vert^{2}) dA(z)\\
& \leq C \iint \limits_{\mathbb{D}} \frac{1-\vert\beta\vert^{2}}{\vert 1-\overline{\beta}z \vert^{2}} \vert f^{\prime}(z) \vert^{2} \log\frac{1}{\vert z\vert} dA(z)\leq C \Vert f\Vert_{BMOA_{0}}^{2}<\infty.
\end{align*}
Let $\beta\in\mathbb{D}$. We have that
\begin{align*}
\Vert S_{g}f \Vert_{BMOA_{0}}^{2}& = \sup \limits_{z_{0}\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{1-\vert z_{0}\vert^{2}}{\vert 1-\overline{z_{0}}z\vert^{2}} \vert (S_{g}f(z))^{\prime} \vert^{2} \log\frac{1}{\vert z \vert} dA(z) \\
& = \sup \limits_{z_{0}\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{1-\vert z_{0}\vert^{2}}{\vert 1-\overline{z_{0}}z\vert^{2}} \vert f^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} \log\frac{1}{\vert z \vert} dA(z) \\
& \geq \iint \limits_{\mathbb{D}} \frac{1-\vert \beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert f^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} \log\frac{1}{\vert z \vert} dA(z) \\
& \geq c^{2} \iint \limits_{G_{c}} \frac{1-\vert \beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert f^{\prime}(z) \vert^{2} \log\frac{1}{\vert z \vert} dA(z) \\
& = c^{2} \iint \limits_{G_{c}} \vert h_{\beta}(z)\vert^{2} \log\frac{1}{\vert z \vert} dA(z) \\
& \geq C \iint \limits_{G_{c}} \vert h_{\beta}(z)\vert^{2} (1 - \vert z \vert^{2}) dA(z) \\
& \geq C \iint \limits_{\mathbb{D}} \vert h_{\beta}(z)\vert^{2} (1 - \vert z \vert^{2}) dA(z),
\end{align*}
where the last inequality is justified by theorem \ref{Lue2}.
So
\begin{align*}
\Vert S_{g}f \Vert_{BMOA_{0}}^{2} & \geq C \iint \limits_{\mathbb{D}} \frac{1-\vert \beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert f^{\prime}(z) \vert^{2} \log\frac{1}{\vert z \vert} dA(z).
\end{align*}
Taking the supremum over $\beta\in\mathbb{D}$ in the last relation we get
\[
\Vert S_{g}f \Vert_{BMOA_{0}}^{2} \geq C \Vert f \Vert_{BMOA_{0}}^{2}.
\]
$(i) \Rightarrow (ii)$ If $S_{g}$ has closed range then there exists $C_{1}>0$ such that for every $f \in BMOA_{0}$ we have
\[
\Vert S_{g}f \Vert_{BMOA_{0}}^{2} \geq C_{1} \Vert f \Vert_{BMOA_{0}}^{2}.
\]
For $\alpha\in\mathbb{D}$, if we set $f=\psi_{\alpha}-\alpha$ in the last inequality, just as in the case of Hardy spaces, and observe that $\Vert\psi_{\alpha}-\alpha\Vert_{BMOA} \asymp 1$ and $\frac{(1-\vert\beta\vert^{2})(1-\vert z \vert^{2})}{\vert 1-\overline{\beta}z\vert^{2}}\leq 1$ for every $z,\beta\in\mathbb{D}$, then we have
\begin{align*}
C_{1} & \leq \Vert S_{g}(\psi_{\alpha}-\alpha) \Vert_{BMOA_{0}}^{2}\\
& = \sup \limits_{\beta\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{1-\vert\beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert (S_{g}(\psi_{\alpha}-\alpha)(z))^{\prime} \vert^{2} \log\frac{1}{\vert z \vert} dA(z)\\
& \leq C \sup \limits_{\beta\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{1-\vert\beta\vert^{2}}{\vert 1-\overline{\beta}z\vert^{2}} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2} (1-\vert z \vert^{2}) dA(z)\\
& \leq C \iint \limits_{\mathbb{D}} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} \vert g(z) \vert^{2}dA(z)\\
& \leq C\Big[\Vert g\Vert_{\infty}^{2}\iint \limits_{G_{c}\cap D_{\eta}(\alpha)} \frac{(1-\vert\alpha \vert^{2})^{2}}{\vert 1-\overline{\alpha}z \vert^{4}} dA(z)
+ c^{2} \iint \limits_{D_{\eta}(\alpha)\setminus G_{c}} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} dA(z)\\
& \hspace{75mm}+ \Vert g\Vert_{\infty}^{2} \iint \limits_{\mathbb{D}\setminus D_{\eta}(\alpha)} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} dA(z)\Big]\\
& \leq C\Big[\Vert g\Vert_{\infty}^{2} \iint \limits_{G_{c}\cap D_{\eta}(\alpha)} \frac{1}{(1-\vert\alpha \vert^{2})^{2}} dA(z) + c^{2} \iint \limits_{\mathbb{D}} \vert \psi_{\alpha}^{\prime}(z) \vert^{2} dA(z)\\
& \hspace{75mm}+ \Vert g\Vert_{\infty}^{2} \iint \limits_{\mathbb{D}\setminus D_{\eta}(\alpha)}\vert \psi_{\alpha}^{\prime}(z) \vert^{2} dA(z)\Big]\\
& = C\Big[\Vert g\Vert_{\infty}^{2} \frac{A(G_{c}\cap D_{\eta}(\alpha))}{(1-\vert\alpha \vert^{2})^{2}} + c^{2} \iint \limits_{\mathbb{D}} dA(w) + \Vert g\Vert_{\infty}^{2} \iint \limits_{\mathbb{D}\setminus D_{\eta}(0)} dA(w)\Big]\\
& \leq C\Big[C^{\prime}\Vert g\Vert_{\infty}^{2} \frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))} + c^{2} + \Vert g\Vert_{\infty}^{2}(1-\eta^{2})\Big],
\end{align*}
where $C^{\prime}$ depends only on $\eta$ and $C$ is absolute. Therefore
\begin{align*}
C_{1} \leq C^{\prime}\Vert g \Vert_{\infty}^{2} \frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))}
+ c^{2} + \Vert g \Vert_{\infty}^{2} (1-\eta^{2}).
\end{align*}
First, we choose $\eta$ close enough to 1 so that $\Vert g \Vert_{\infty}^{2} (1-\eta^{2})< \frac{C_{1}}{4}$, and then $c$ small enough so that $c^{2}< \frac{C_{1}}{4}$. So
\[
A(G_{c}\cap D_{\eta}(\alpha))\geq \frac{C_{1}}{2C^{\prime}\Vert g \Vert_{\infty}^{2}} A(D_{\eta}(\alpha))=\delta A(D_{\eta}(\alpha)),
\]
where $C^{\prime}$ depends only on $\eta$.
\end{proof}
\begin{remark}
The $Q_{p}$ space, $0<p<\infty$, is defined as the set of all analytic functions $f$ in $\mathbb{D}$ for which
\[
\sup \limits_{\beta\in\mathbb{D}} \iint \limits_{\mathbb{D}} \frac{(1-\vert\beta\vert^{2})^{p}}{\vert 1-\overline{\beta}z\vert^{2p}} \vert f^{\prime}(z) \vert^{2} (1 -\vert z \vert^{2})^{p} dA(z) <\infty.
\]
Denote by $Q_{p,0}$ the space $Q_{p}/\mathbb{C}$. For $\beta,z\in\mathbb{D}$ and $f\in Q_{p,0}$, we consider the functions $h_{\beta}(z) = \frac{(1-\vert\beta\vert^{2})^{\frac{p}{2}}}{(1-\overline{\beta}z)^{p}} f^{\prime}(z)$. It is easy to see that if $f\in Q_{p,0}$ then $h_{\beta}\in\mathbb{A}_{p}^{2}$, and using arguments similar to those in the proof of theorem \ref{integral_theorem_bmoa}, we can prove that \eqref{theorem_rel} is also necessary and sufficient for $S_{g}$ to have closed range on $Q_{p,0}\hspace{1mm}(0<p<\infty)$.
\end{remark}
\section{Closed range integral operators on Besov spaces}
Denote by $B_{0}^{p}$ the space $B^{p}/\mathbb{C}$.
By arguments similar to those used for the $BMOA$ space, one sees that
the operator $S_{g}:B_{0}^{p} \rightarrow B_{0}^{p}$ $(1<p<\infty)$ is bounded if and only if $g\in H^{\infty}$. So, we consider $g\in H^{\infty}$ and $G_{c} = \lbrace z\in\mathbb{D}: \vert g(z)\vert>c\rbrace$. We will prove that condition \eqref{theorem_rel} is also necessary and sufficient for the operator $S_{g}$ to have closed range on $B_{0}^{p}$. For the sufficiency, we observe that if $f\in B^{p}$ then $f^{\prime}\in\mathbb{A}_{p-2}^{p}$, the weighted Bergman space defined in the previous section, so we can use theorem \ref{Lue2}. We have
\begin{align*}
\Vert S_{g}f \Vert_{B_{0}^{p}}^{p}& = \iint \limits_{\mathbb{D}} \vert (S_{g}f(z))^{\prime} \vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z)\\
& \geq \iint \limits_{G_{c}} \vert f^{\prime}(z)\vert^{p} \vert g(z)\vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z) \\
& \geq c^{p} \iint \limits_{G_{c}} \vert f^{\prime}(z)\vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z) \\
&\geq C \iint \limits_{\mathbb{D}} \vert f^{\prime}(z)\vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z) \\
& = C \Vert f \Vert_{B_{0}^{p}}^{p},
\end{align*}
where the last inequality is justified by theorem \ref{Lue2}. So $S_{g}$ has closed range on $B_{0}^{p}$.
If $S_{g}$ has closed range on $B_{0}^{p}$ then there exists $C_{1}>0$ such that for every $f \in B_{0}^{p}$ we have
\[
\Vert S_{g}f \Vert_{B_{0}^{p}}^{p} \geq C_{1} \Vert f \Vert_{B_{0}^{p}}^{p}.
\]
For $\alpha\in\mathbb{D}$, if we set $f=f_{\alpha}=\frac{(1 - \vert \alpha \vert^{2})^{\frac{2}{p}} }{\frac{2\overline{\alpha}}{p} (1 - \overline{\alpha}z)^{\frac{2}{p}}}-\frac{(1 - \vert \alpha \vert^{2})^{\frac{2}{p}}}{\frac{2\overline{\alpha}}{p}}$ in the last inequality, just as in the case of $BMOA$, and observe that $\Vert f_{\alpha}\Vert_{B_{0}^{p}} \asymp 1$ and $\vert f_{\alpha}^{\prime}(z)\vert = \frac{(1 - \vert \alpha \vert^{2})^{\frac{2}{p}} }{ \vert 1 - \overline{\alpha}z \vert^{{\frac{2}{p}}+1}}$, then we have
\begin{align*}
& C_{1} \leq \Vert S_{g}f_{\alpha} \Vert_{B_{0}^{p}}^{p} = \iint \limits_{\mathbb{D}} \vert f_{\alpha}^{\prime}(z)\vert^{p} \vert g(z) \vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z)\\
&\leq \Vert g\Vert_{\infty}^{p}\iint \limits_{G_{c}\cap D_{\eta}(\alpha)} \frac{(1 - \vert \alpha \vert^{2})^{2} }{ \vert 1 - \overline{\alpha}z \vert^{2+p}} (1-\vert z \vert^{2})^{p-2} dA(z)
+ c^{p} \iint \limits_{D_{\eta}(\alpha)\setminus G_{c}} \vert f_{\alpha}^{\prime}(z)\vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z)\\
& \hspace{60mm} + \Vert g\Vert_{\infty}^{p} \iint \limits_{\mathbb{D}\setminus D_{\eta}(\alpha)} \frac{(1 - \vert \alpha \vert^{2})^{2} }{ \vert 1 - \overline{\alpha}z \vert^{2+p}} (1-\vert z \vert^{2})^{p-2} dA(z)\\
&\leq \Vert g\Vert_{\infty}^{p}\iint \limits_{G_{c}\cap D_{\eta}(\alpha)} \frac{1}{(1-\vert \alpha \vert^{2})^{2}} dA(z)
+ c^{p} \iint \limits_{\mathbb{D}} \vert f_{\alpha}^{\prime}(z)\vert^{p} (1-\vert z \vert^{2})^{p-2} dA(z)\\
& \hspace{30mm} + \Vert g\Vert_{\infty}^{p} \iint \limits_{\mathbb{D}\setminus D_{\eta}(0)} \frac{(1 - \vert \alpha \vert^{2})^{2} }{ \vert 1 - \overline{\alpha}\psi_{\alpha}(w) \vert^{2+p}} (1-\vert \psi_{\alpha}(w) \vert^{2})^{p-2} \vert \psi_{\alpha}^{\prime}(w) \vert^{2} dA(w)\\
&= \Vert g\Vert_{\infty}^{p}\iint \limits_{G_{c}\cap D_{\eta}(\alpha)} \frac{1}{(1-\vert \alpha \vert^{2})^{2}} dA(z)
+ c^{p} \Vert f_{\alpha}\Vert_{B_{0}^{p}}^{p} + \Vert g\Vert_{\infty}^{p} \iint \limits_{\mathbb{D}\setminus D_{\eta}(0)} \frac{(1 - \vert w \vert^{2})^{p-2} }{ \vert 1 - \overline{\alpha}w \vert^{p-2}} dA(w)\\
&\leq \Vert g\Vert_{\infty}^{p} \frac{A(G_{c}\cap D_{\eta}(\alpha))}{(1-\vert \alpha \vert^{2})^{2}} + c^{p} \Vert f_{\alpha}\Vert_{B_{0}^{p}}^{p} + \Vert g\Vert_{\infty}^{p} \iint \limits_{\mathbb{D}\setminus D_{\eta}(0)} dA(w)\\
& \leq C^{\prime}\Vert g\Vert_{\infty}^{p} \frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))} + C c^{p} + \Vert g\Vert_{\infty}^{p}(1-\eta^{2}),
\end{align*}
where $C^{\prime}$ depends only on $\eta$ and $C$ is absolute.
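The change of variables above rests on the following identities, which are verified directly: differentiating $f_{\alpha}$, the constant term drops out and
\[
f_{\alpha}^{\prime}(z) = \frac{(1 - \vert \alpha \vert^{2})^{\frac{2}{p}}}{(1-\overline{\alpha}z)^{\frac{2}{p}+1}},
\]
while for $z=\psi_{\alpha}(w)$,
\[
1-\vert\psi_{\alpha}(w)\vert^{2} = \frac{(1-\vert\alpha\vert^{2})(1-\vert w\vert^{2})}{\vert 1-\overline{\alpha}w\vert^{2}},
\qquad
1-\overline{\alpha}\psi_{\alpha}(w) = \frac{1-\vert\alpha\vert^{2}}{1-\overline{\alpha}w},
\qquad
\vert\psi_{\alpha}^{\prime}(w)\vert = \frac{1-\vert\alpha\vert^{2}}{\vert 1-\overline{\alpha}w\vert^{2}}.
\]
Combining them, every power of $1-\vert\alpha\vert^{2}$ cancels and the transformed integrand reduces to $\frac{(1 - \vert w \vert^{2})^{p-2} }{ \vert 1 - \overline{\alpha}w \vert^{p-2}}$, as used above.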
So we have
\begin{align*}
& C_{1} \leq C^{\prime}\Vert g\Vert_{\infty}^{p}\frac{A(G_{c}\cap D_{\eta}(\alpha))}{A(D_{\eta}(\alpha))} + C c^{p} + \Vert g\Vert_{\infty}^{p} (1-\eta^{2}).
\end{align*}
Choosing $\eta$ close enough to 1 so that $\Vert g\Vert_{\infty}^{p} (1-\eta^{2})< \frac{C_{1}}{4}$, and $c$ small enough so that $C c^{p}< \frac{C_{1}}{4}$, we get
\[
A(G_{c}\cap D_{\eta}(\alpha))\geq \frac{C_{1}}{2C^{\prime}\Vert g \Vert_{\infty}^{p}} A(D_{\eta}(\alpha))=\delta A(D_{\eta}(\alpha)).
\]
\end{document} | math | 50,170 |
\begin{document}
\date{January 30, 2017}
\title[Explicit Determination of \((N-1)\)-Dimensional Area Minimizing Surfaces]
{Explicit Determination in \(\ensuremath{\mathbb{R}}\xspace^{N}\) of \((N-1)\)-Dimensional\\
Area Minimizing Surfaces with Arbitrary Boundaries}
\author{Harold R. Parks}
\address{Department of Mathematics\\
Oregon State University\\
Corvallis, OR 97331}
\email{[email protected]}
\author{Jon T. Pitts}
\address{Department of Mathematics\\
Texas A\&M University\\
College Station, TX 77843}
\email{[email protected]}
\subjclass{49Q15, 49Q20, 49Q05}
\thanks{The work of the second author was supported in part by a grant
from the National Science Foundation.}
\begin{abstract}
Let \(N\ge3\)
be an integer and \(B\)
be a smooth, compact, oriented, \((N-2)\)-dimensional
boundary in \(\ensuremath{\mathbb{R}}\xspace^{N}\).
In 1960, H.~Federer and W.~Fleming \cite{FedererFleming60} proved
that there is an \((N-1)\)-dimensional
integral current spanning surface of least area. The proof was by
compactness methods and non-constructive. In 1970 H.~Federer
\cite{Federer70} proved the definitive regularity result for such a
codimension one minimizing surface. Thus it is a question of long
standing whether there is a numerical algorithm that will closely
approximate the area minimizing surface. The principal result of
this paper is an algorithm that solves this problem.
Specifically, given a neighborhood~\(U\) around \(B\)
in~\(\ensuremath{\mathbb{R}}\xspace^{N}\) and a tolerance~\(\epsilon>0\),
we prove that one can explicitly compute in finite time an
\((N-1)\)-dimensional integral current \(T\)
with the following approximation requirements:
\begin{enumerate}
\item \(\operatorname{spt}(\partial T)\subset U\).
\item \(B\) and \(\partial T\) are within distance \(\epsilon\) in the
Hausdorff distance.
\item \(B\) and \(\partial T\) are within distance \(\epsilon\) in the
flat norm distance.
\item \(\ensuremath{\mathbb{M}}\xspace(T)<\epsilon+\inf\{\ensuremath{\mathbb{M}}\xspace(S):\partial S=B\}\).
\item Every area minimizing current \(R\)
with \(\partial R=\partial T\)
is within flat norm distance \(\epsilon\) of~\(T\).
\end{enumerate}
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
In this paper, we will follow the notation and terminology of Federer
\cite{Federer69} except as otherwise noted. Fix a positive
integer~\(N\geq3\).
In 1960, H.~Federer and W.~Fleming \cite{FedererFleming60} proved that
for any smooth, compact, \((N-2)\)-dimensional,
oriented boundary in \(\ensuremath{\mathbb{R}}\xspace^{N}\),
there is an \((N-1)\)-dimensional
spanning surface of least area. The proof was by compactness methods
and non-constructive. In 1970 H.~Federer \cite{Federer70}
proved the definitive regularity result for such a codimension one
minimizing surface. Thus it is a question of long standing whether
there is a numerical algorithm that will closely approximate the area
minimizing surface. The principal result of this paper is an
algorithm that solves this problem:
\begin{sstheorem}[Main Result]
\label{thm:main}
Given a smooth \((N-2)\)-dimensional
integral boundary \(B\), neighborhood \(U\)
around \(B\), and \(\epsilon>0\),
we will compute in finite time an integral current \(T\)
that we can guarantee satisfies the following requirements:
\begin{enumerate}
\item\label{thm:main:1} \(\operatorname{spt}(\partial T)\subset U\).
\item\label{thm:main:2}
\(\operatorname{dist}_{H}[\operatorname{spt}(\partial T),\operatorname{spt}(B)]<\epsilon\), where
\(\operatorname{dist}_{H}[\cdot,\cdot]\) is Hausdorff distance.
\item\label{thm:main:3} \(\ensuremath{\mathcal{F}}(\partial T-B)<\epsilon\).
\item\label{thm:main:4}
\(\ensuremath{\mathbb{M}}\xspace(T)<\epsilon+\inf\{\ensuremath{\mathbb{M}}\xspace(S):\partial S=B\}\).
\item\label{thm:main:5} Every area minimizing current \(R\)
with \(\partial R=\partial T\)
is within flat norm distance
\(\epsilon\) of~\(T\).
\end{enumerate}
\end{sstheorem}
\begin{ssremarks}
We should note that there is a limit to what can be expected. For
general boundary curves, the best reasonable result is the
approximation, in both area and location, of an area minimizing
surface that has boundary near the given boundary and has area
nearly equal to the minimum of areas of surfaces spanning the given
boundary.
\begin{itemize}
\item In general, there will be little \textit{a~priori} control of
the topology of a minimizing surface.
\item In general, the area minimizing surface with a given boundary
is not unique. Even though F.~Morgan \cite{morgan81} has shown
that for a generic boundary the area minimizing surface is unique,
there are but few situations in which uniqueness can be guaranteed
\textit{a~priori}.
\item Distinct small perturbations of the boundary can result in
unique area minimizing surfaces that are widely separated even
though their boundaries are nearly identical. It was noted by
M.~Beeson \cite{Beeson77} that a consequence of such discontinuous
behavior is that, in a certain formal system, the area minimizing
surface is not computable. Thus we believe that it is essential
to seek an approximation to an area minimizing surface the
boundary of which is near to, but not necessarily identical to,
the given boundary.
\end{itemize}
\end{ssremarks}
The last two items above concerning uniqueness and non-uniqueness
present the crucial difficulties in closely approximating the location
of an area minimizing surface, because a surface of nearly minimum
area for the given boundary may be far away in location from any area
minimizer for that boundary. We deal with these difficulties by using
a sequence of more and more precise approximations in which we first
construct a surface \(T_{i}\)
of nearly minimum area, and then second consider an auxiliary
minimization problem. This auxiliary problem seeks the minimum area
among surfaces satisfying two constraints which we describe informally
as follows. The first constraint is that the boundary of each of the
surfaces considered must equal the boundary of an appropriate portion,
\(T'_{i}\),
of the surface \(T_{i}\).
The second constraint is that each of the surfaces must be relatively
far from \(T'_{i}\) in the flat norm.
Continuing our informal discussion, if \(\epsilon>0\) is specified at
the outset and if the parameters defining large and small and near and
far are chosen correctly vis-\`{a}-vis that \(\epsilon\), then in the
above sequence of constructions and minimizations, it eventually must
happen both that \(T'_{i}\) differs little from \(T_{i}\) and that the
minimum area among the surfaces considered in the auxiliary problem is
relatively large. Consequently, the \(T'_{i}\) constructed at that
iteration is such that all surfaces relatively far from \(T'_{i}\)
have relatively large area. Thus the surfaces with relatively small
area all must be relatively near to \(T'_{i}\).
In the previous papers \cite{Parks77} and \cite{Parks86}, the
theoretical basis was developed for computing approximations to area
minimizing surfaces by numerically approximating functions of least
gradient. Those papers required that the given boundary for which an
area minimizing spanning surface was sought must lie on the surface of
a convex set. An important feature of the results in those papers was
that one could be certain, at least in principle, of when sufficient
computation had been done to guarantee any desired accuracy of the
approximation in the sense of Hausdorff distance.
The method described in \cite{Parks77} and \cite{Parks86} was
implemented numerically in \cite{Parks92}. The results reported there
and later results in \cite{Parks93} showed that, in practice, the
method gives much better approximations than the theorems
of \cite{Parks77} and \cite{Parks86} guarantee.
The requirement of \cite{Parks77} and \cite{Parks86} that the boundary
lie on the surface of a convex set is often not met. Various
alternative methods are available for application in these
circumstances. These are developed in the extremely general covering
space approach of K.~Brakke \cite{Brakke95}, in the duality approach
in the thesis of J.~Sullivan \cite{Sullivan90}, in the more general
work of K.~Brakke \cite{Brakke95b}, and in the modification of the least
gradient method in our previous work \cite{ParksPitts96} and
\cite{ParksPitts97}. The results of \cite{Sullivan90} and
\cite{Brakke95b} provide a way to approximate the area of the area
minimizing surface (but not the position), and implicitly so do the
results of \cite{ParksPitts97}.
We dedicate this paper to the memory of our thesis advisor and friend
Frederick~J. Almgren, Jr.
\section{The Algorithm}
\label{sec:surfaces}
The Approximation Theorem obtained by Federer and Fleming tells us
that any integral current can be approximated arbitrarily well by an
integral polyhedral chain. Consequently, given a smooth, compact,
embedded, \((N-2)\)-dimensional
boundary in \(\ensuremath{\mathbb{R}}\xspace^{N}\),
an area minimizing surface spanning the given boundary can be obtained
as the limit of integral polyhedral chains obtained by minimizing mass
in an increasing family of finite dimensional subspaces of the vector
space of \((N-1)\)-dimensional
polyhedral chains, \(\ensuremath{\mathcal{P}}_{N-1}(\ensuremath{\mathbb{R}}\xspace^{N})\).
As a computational method, the obvious shortcoming of such an approach
is that, if one has in mind a desired level of accuracy of
approximation, there is no way to know whether one has achieved
it. What is lacking is \textit{a~priori} information on which finite-dimensional
subspace of \(\ensuremath{\mathcal{P}}_{N-1}(\ensuremath{\mathbb{R}}\xspace^{N})\)
is required to obtain the desired accuracy of approximation.
In his thesis \cite{Sullivan90}, John Sullivan has addressed this lack of
\textit{a~priori} information. Sullivan's approximation is carried
out using an appropriate cell complex obtained by slicing space
with equally spaced parallel planes in each of many directions,
a structure that he calls a ``multigrid.''
\begin{ssdefinition}
A {\em multigrid in} \(\ensuremath{\mathbb{R}}\xspace^{N}\)
is the set of chains generated by a finite family of convex
polyhedra in \(\ensuremath{\mathbb{R}}\xspace^{N}\)
and by their vertices, edges, and faces. In our implementation, we
need to include only the \((N-1)\)-dimensional
faces and the \((N-2)\)-dimensional faces.
\end{ssdefinition}
Sullivan's approximation result is the following:
\begin{sstheorem}[{\cite[Theorem~6.1]{Sullivan90}}]\label{sullivan_main}
Given \(\epsilon\) and an
\((N-1)\)-current \(T\), we can pick a multigrid \(C\) such that \(T\)
has a good approximation \(S\),
which is a chain in \(C\), is flat close to \(T\), and has not
much more mass,
\(\ensuremath{\mathbb{M}}\xspace(S) \leq (1+\epsilon)\,\ensuremath{\mathbb{M}}\xspace(T)\). In fact the choice of \(C\)
can be made merely knowing
\(\epsilon\) and bounds on \(\ensuremath{\mathbb{M}}\xspace(T)\) and on the mass of its boundary.
\end{sstheorem}
Using this last approximation result, Sullivan obtains the next result
(which we paraphrase)
regarding an algorithm for approximating the minimum area that
is required to span a given boundary cycle.
\begin{sstheorem}[{\cite[Corollary~6.2]{Sullivan90}}]\label{sullivan_boundary}
{\sl Given any boundary cycle in \(\ensuremath{\mathbb{R}}\xspace^{N}\), with some a priori lower
bound on the area of a possible area-minimizing surface,
a surface with no more than \(1+\epsilon\) times the true minimum area
can be found by solving a linear programming problem.}
\end{sstheorem}
In the statement of Theorem~\ref{sullivan_boundary}, Sullivan focuses
on the approximation of the minimum area. But we note that in
Theorem~\ref{sullivan_main} the approximating surface also
approximates the given boundary, a fact that is important in our work.
By making use of the top-dimensional polyhedra in a sequence of finer
and finer multigrids, we are able to obtain an algorithm that not
only approximates the minimum area, but that also approximates both
the area and the location (in the sense of the \(\ensuremath{\mathcal{F}}\)-norm)
of an area minimizer with boundary nearly equal to the given
boundary. This algorithm is the first to accomplish that goal.
\begin{sstheorem}
Let \(B \in \ensuremath{\mathbb{I}}\xspace_{N-2}\) with \(\partial B = 0\) and smooth support be given.
Let \(\epsilon>0\) be given.
Let an open set, \(U\), with \(\operatorname{spt}(B)\subset U\) be given.
Then there is a computation
requiring finitely many multigrid minimizations
that results in a \(T\) guaranteed to satisfy
the following requirements:
\begin{enumerate}
\item\label{it:contr.bdry}
\(\operatorname{spt}(\partial T) \subset U\),
\item\label{it:contr.hd.bdry}
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial T),\, \operatorname{spt} (B) \, ] < \epsilon\),
\item\label{it:contr.flt.bdry}
\(B = \partial S + \partial T \) with
\(\operatorname{spt}(S) \subset U\) and \(\ensuremath{\mathbb{M}}\xspace(S) < \epsilon\),
\item\label{it:contr.mass}
\(\ensuremath{\mathbb{M}}\xspace(T) < \epsilon +
\inf\{\ensuremath{\mathbb{M}}\xspace(S) : \partial S = B \}\),
\item\label{it:contr.flat} every area minimizing
current \(R\) with \(\partial R = \partial T \)
is within \(\ensuremath{\mathcal{F}}\)-distance \(\epsilon\) of~\(T\).
\end{enumerate}
\end{sstheorem}
\noindent{\bf Proof.} Let \(B \in \ensuremath{\mathbb{I}}\xspace_{N-2}\)
with \(\partial B = 0\)
and smooth support be given. Let \(\epsilon>0\)
be given. Let the open set \(U\) with \(\operatorname{spt}(B)\subset U\) be given.
For each \(0<r\), set
\[
I(r) = \{ x: \operatorname{dist}(x,\operatorname{spt} B) < r \}\,,
\qquad
O(r) = \{ x: \operatorname{dist}(x,\operatorname{spt} B) \geq r \}
\,.
\]
Let \(0< \epsilon_i\), \(i=1,2,\dots\), be a decreasing sequence with limit \(0\).
Choose \(\epsilon_1\) so that
\begin{itemize}
\item
\(\epsilon_1 < \epsilon/4\),
\item
\(\hbox{\rm Clos}[ I(2\,\epsilon_1) ] \subset U\),
\item
\(\|R\|[ I(\epsilon_{1}) ] < \epsilon/3\) holds
for any mass minimizer with \(\partial R = B\),
which we can do by \cite[Proposition~5.6]{Sullivan90}.
\end{itemize}
For each \(i\), use Sullivan's approximation method
(Theorem~\ref{sullivan_main}) to form a multigrid \(\ensuremath{\mathcal{G}}(i)\) such that
for any mass minimizer \(R\) with \(\partial R = B\)
there exists \(\widehat{R}\in \ensuremath{\mathcal{G}}(i)\cap \ensuremath{\mathcal{P}}_{N-1}\) such that
\begin{itemize}
\item
there exists \(S\) with
\(B = \partial S + \partial \widehat{R}\),
\(\operatorname{dist}_{H}[ \, \operatorname{spt}(S),\,\operatorname{spt}(B)\, ] < \epsilon_i \),
and \(\ensuremath{\mathbb{M}}\xspace(S) < \epsilon_i \),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial \widehat{R}),\, \operatorname{spt} (B) \, ] < \epsilon_i \),
\item
\(\operatorname{spt} (\partial \widehat{R}) \subset U\),
\item
\( \ensuremath{\mathbb{M}}\xspace(\widehat{R}) \leq \ensuremath{\mathbb{M}}\xspace(R) +\epsilon_i \).
\end{itemize}
Choose the multigrids so that
\(\ensuremath{\mathcal{G}}(1)\subset \ensuremath{\mathcal{G}}(2)\subset\ensuremath{\mathcal{G}}(3)\subset \cdots\).
For each \(i\), let \(\ensuremath{\mathcal{T}}(i)\subset \ensuremath{\mathcal{G}}(i)\cap\ensuremath{\mathcal{P}}_{N-1}\) be the set of
currents, \(T\), satisfying
\begin{itemize}
\item
there exists \(S\) with
\(B = \partial S + \partial T\),
\(\operatorname{dist}_{H}[ \, \operatorname{spt}(S),\,\operatorname{spt}(B)\, ] < \epsilon_i\), and
\(\ensuremath{\mathbb{M}}\xspace(S) < \epsilon_i \),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial T),\, \operatorname{spt} (B) \, ] < \epsilon_i\),
\item
\(\operatorname{spt} (\partial T) \subset U\).
\end{itemize}
\noindent
Using an appropriate algorithm, obtain \(T_i\in \ensuremath{\mathcal{T}}(i)\) such that
$$
\ensuremath{\mathbb{M}}\xspace(T_i) \leq \epsilon_i + \inf\{\ensuremath{\mathbb{M}}\xspace(T):T\in \ensuremath{\mathcal{T}}(i) \}
\,.
$$
(We are solving a linear programming problem. We are also not requiring the
exact solution; only that we be within \(\epsilon_{i}\) of the minimum
value of the objective function.)
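The kind of minimization being invoked here can be illustrated on a toy example. The following sketch is our own illustration, not the paper's implementation: it minimizes mass over integral chains with a prescribed boundary by brute-force enumeration on a tiny one-dimensional cell complex (four edges of a unit square plus one diagonal, a complex we chose only for concreteness), whereas in the multigrid setting this minimization is carried out by linear programming.

```python
import itertools
import math

# Toy cell complex: the four edges of the unit square plus one diagonal.
# Vertices: 0=(0,0), 1=(1,0), 2=(1,1), 3=(0,1).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # oriented 1-cells
lengths = [1.0, 1.0, 1.0, 1.0, math.sqrt(2.0)]     # mass of each cell

def boundary(chain):
    """Boundary of a 1-chain: edge (u, v) contributes +1 at v and -1 at u."""
    b = [0, 0, 0, 0]
    for c, (u, v) in zip(chain, edges):
        b[u] -= c
        b[v] += c
    return b

# Prescribed boundary B: the 0-cycle (vertex 2) - (vertex 0).
B = [-1, 0, 1, 0]

# Brute-force search over chains with coefficients in {-1, 0, 1};
# the paper solves a linear programming problem instead.
best_chain, best_mass = None, float("inf")
for chain in itertools.product([-1, 0, 1], repeat=len(edges)):
    if boundary(chain) == B:
        mass = sum(abs(c) * l for c, l in zip(chain, lengths))
        if mass < best_mass:
            best_chain, best_mass = chain, mass

print(best_chain, best_mass)   # the diagonal alone, with mass sqrt(2)
```

The minimizer is the single diagonal edge (mass \(\sqrt{2}\)), beating the two-edge path of mass \(2\) along the square's sides.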
\noindent{\bf Claim 1.} If \(\mu\) denotes
the mass of any mass minimizer \(R\) with \(\partial R = B\),
then
\begin{equation}\label{(*)}
\mu - \epsilon_i \leq \ensuremath{\mathbb{M}}\xspace(T_i) \leq \mu + 2\,\epsilon_i
\end{equation}
holds for each \(i\),
and the limit of any \(\ensuremath{\mathcal{F}}\)-convergent
subsequence of
\(\big\{ T_i \big\}_{i = 1}^\infty\)
is a mass minimizer with boundary equal to \(B\).
\noindent
{\bf Proof of Claim.}
Let \(R\) be a mass minimizer with \(\partial R = B\).
Since \(T_i\in \ensuremath{\mathcal{T}}(i)\), there exists \(S_i\) with
\(B = \partial S_i + \partial T_i = \partial (S_i+T_i) \)
and \(\ensuremath{\mathbb{M}}\xspace(S_i) < \epsilon_i \).
Thus we have
$$
\mu = \ensuremath{\mathbb{M}}\xspace(R) \leq \ensuremath{\mathbb{M}}\xspace(S_i+T_i) \leq \ensuremath{\mathbb{M}}\xspace(S_i) + \ensuremath{\mathbb{M}}\xspace(T_i)
\leq \epsilon_i +\ensuremath{\mathbb{M}}\xspace(T_i)
\,,
$$
giving us the left-hand inequality in (\ref{(*)}).
We have chosen the multigrid \({\ensuremath{\mathcal{G}}}(i)\)
so that for any mass minimizer \(R\)
with \(\partial R = B\)
there exists \(\widehat{R}\in \ensuremath{\mathcal{T}}(i)\) such that
\begin{itemize}
\item
there exists \(S\) with
\(B = \partial S + \partial \widehat{R}\),
\(\operatorname{dist}_{H}[ \, \operatorname{spt}(S),\,\operatorname{spt}(B)\, ] < \epsilon_i\),
and \(\ensuremath{\mathbb{M}}\xspace(S) < \epsilon_i \),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial \widehat{R}),\, \operatorname{spt} (B) \, ] < \epsilon_i\),
\item
\(\operatorname{spt} (\partial \widehat{R}) \subset U\),
\item
\( \ensuremath{\mathbb{M}}\xspace(\widehat{R}) \leq \ensuremath{\mathbb{M}}\xspace(R) +\epsilon_i\).
\end{itemize}
Then \(\widehat{R}\) satisfies the conditions for membership in \(\ensuremath{\mathcal{T}}(i)\).
By the choice of \(T_i\), we conclude that
$$
\ensuremath{\mathbb{M}}\xspace(T_i) \leq \epsilon_i + \ensuremath{\mathbb{M}}\xspace(\widehat{R})
\leq 2\,\epsilon_i + \ensuremath{\mathbb{M}}\xspace(R) = 2\,\epsilon_i + \mu
\,,
$$
giving us the right-hand inequality in (\ref{(*)}).
Now, let \(T^*\) be the limit of any \(\ensuremath{\mathcal{F}}\)-convergent
subsequence of
\(\big\{ T_i \big\}_{i = 1}^\infty\).
Passing to that subsequence, but without changing notation,
we suppose \(T_i\rightarrow T^*\).
Letting \(S_i\) be as above, we have \(B= \partial(S_i+T_i)\)
and \(\ensuremath{\mathbb{M}}\xspace(S_i)\rightarrow 0\). So \(B = \partial T^*\).
By the lower semicontinuity of mass,
\(\ensuremath{\mathbb{M}}\xspace(T^*)\leq \lim_{i\rightarrow\infty} \ensuremath{\mathbb{M}}\xspace(T_i) = \mu\).
Thus
\(T^*\)
is a mass minimizer with boundary \(B\).
\noindent{\bf Claim 1 has been proved.}
\noindent{\bf Claim 2.} For infinitely many \(i\), we
have
$$
\ensuremath{\mathbb{M}}\xspace[ T_i\elbow I(\epsilon_{1}) ] \leq \epsilon/2
\,.
$$
\noindent
{\bf Proof of Claim.} Suppose Claim~2 were false. Then
there would be but finitely many elements in
$$
J = \{ i : \ensuremath{\mathbb{M}}\xspace[ T_i\elbow I(\epsilon_{1}) ] \leq \epsilon/2 \}
\,.
$$
Set \(i_0= 1+ \max J\) (with the convention \(\max\emptyset = 0\)). Then
$$
\ensuremath{\mathbb{M}}\xspace[ T_i\elbow I(\epsilon_{1}) ] > \epsilon/2
$$
holds for all \(i \geq i_0\).
Since
$$
\ensuremath{\mathbb{M}}\xspace[T_i] = \ensuremath{\mathbb{M}}\xspace[ T_i\elbow I(\epsilon_{1}) ]
+ \ensuremath{\mathbb{M}}\xspace[ T_i\elbow O(\epsilon_1) ]
$$
we have
$$
\ensuremath{\mathbb{M}}\xspace[ T_i\elbow O(\epsilon_1) ]
= \ensuremath{\mathbb{M}}\xspace[T_i] - \ensuremath{\mathbb{M}}\xspace[ T_i\elbow I(\epsilon_{1}) ]
< \ensuremath{\mathbb{M}}\xspace[T_i] - \epsilon/2
\,.
$$
So
$$
\lim_{i\rightarrow\infty} \ensuremath{\mathbb{M}}\xspace[ T_i\elbow O(\epsilon_1) ]
\leq \mu -\epsilon/2,
$$
where, as in Claim~1, \(\mu\) denotes the mass of any minimizer with
boundary~\(B\).
Passing to an \(\ensuremath{\mathcal{F}}\)-convergent
subsequence, but without changing notation, we
may suppose \(T_i\) converges to
a mass minimizer \(R\) with \(\partial R = B\).
By the lower semicontinuity of mass,
$$
\| R\| [O(\epsilon_1)] \leq \mu -\epsilon/2
$$
holds.
Since \(\ensuremath{\mathbb{M}}\xspace[R] = \mu\), we have
$$
\| R\| [I(\epsilon_{1})] \geq \epsilon/2
\,,
$$
contradicting the requirement in the definition of \(\epsilon_1\) that
\(\|R\|[ I(\epsilon_{1}) ] < \epsilon/3\) hold.
\noindent{\bf Claim 2 has been proved.}
Let \(\ensuremath{\mathcal{K}}\) be a closed set disjoint from
\(I(\epsilon_1/2)\),
containing \(O(\epsilon_1)\), and having a
polyhedral boundary.
For each \(i=1,2,\dots\), set
$$
T'_i = T_{i}\elbow \ensuremath{\mathcal{K}}
\hbox{\rm\quad and\quad}
B_i = \partial T'_i
\,.
$$
For each \(i\),
use Sullivan's approximation method (Theorem~\ref{sullivan_main}) to form a
multigrid \(\ensuremath{\mathcal{G}}'(i)\),
with \(\ensuremath{\mathcal{G}}(i) \subset \ensuremath{\mathcal{G}}'(i)\)
and \(T'_i\in \ensuremath{\mathcal{G}}'(i)\),
such that for any mass minimizer \(R\)
with \(\partial R = B_i\)
there exists \(\widehat{R}\in \ensuremath{\mathcal{G}}'(i)\cap \ensuremath{\mathcal{P}}_{N-1}\) such that
\begin{itemize}
\item
there exists \(S\in \ensuremath{\mathcal{G}}'(i) \cap \ensuremath{\mathcal{P}}_{N-1}\) with
\(B_{i } = \partial S + \partial \widehat{R}\),
\(\ensuremath{\mathbb{M}}\xspace(S ) < \epsilon_{i} \), and
\(\operatorname{dist}_{H}[ \, \operatorname{spt}(S ),\,\operatorname{spt}(B_{i })\, ] < \epsilon_{i }\),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial \widehat{R}),
\, \operatorname{spt} (B_{i}) \, ] < \epsilon_{i}\),
\item
\(\operatorname{spt} (\partial \widehat{R}) \subset U\),
\item
\( \ensuremath{\mathbb{M}}\xspace(\widehat{R}) \leq \ensuremath{\mathbb{M}}\xspace(R) +\epsilon_{i}\),
\item
\(\widehat{R} - R = X + \partial Y\) for some \(X\) and \(Y\) with
\(
\ensuremath{\mathbb{M}}\xspace(X) + \ensuremath{\mathbb{M}}\xspace(Y) \leq \epsilon_{i }
\).
\end{itemize}
Choose the multigrids so that
\(\ensuremath{\mathcal{G}}'(1)\subset \ensuremath{\mathcal{G}}'(2)\subset\ensuremath{\mathcal{G}}'(3)\subset \cdots\).
For each \(i\),
let \(\ensuremath{\mathcal{T}}'(i)\subset \ensuremath{\mathcal{G}}'(i)\cap \ensuremath{\mathcal{P}}_{N-1}\) be the set of currents,
\(T\), satisfying
\begin{itemize}
\item
there exists \(S\in \ensuremath{\mathcal{G}}'(i) \cap \ensuremath{\mathcal{P}}_{N-1}\) with
\(B_i = \partial S + \partial T\),
\(\ensuremath{\mathbb{M}}\xspace(S) < \epsilon_{i} \),
and \(\operatorname{dist}_{H}[ \, \operatorname{spt}(S),\,\operatorname{spt}(B_i)\, ] < \epsilon_{i}\),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial T),\, \operatorname{spt} (B_i) \, ] < \epsilon_{i}\),
\item
\(\operatorname{spt} (\partial T) \subset U\).
\end{itemize}
\noindent
For each \(i\), let \(\ensuremath{\mathcal{Q}}(i)\subset \ensuremath{\mathcal{T}}'(i)\) be the set of currents,
\(Q\), satisfying
\begin{itemize}
\item
\( \ensuremath{\mathbb{M}}\xspace(W) \geq \epsilon/2\), where \(W\) satisfies \(\partial W = T'_i-S-Q\) and
\(S\) is as in the first condition for membership of \(Q\) in \(\ensuremath{\mathcal{T}}'(i)\).
\end{itemize}
Notice that if \(B_i = \partial S + \partial Q\), then
\(W\) satisfying \(\partial W = T'_i-S-Q\) is unique and
\(W\in \ensuremath{\mathcal{G}}'(i) \cap \ensuremath{\mathcal{P}}_{N}\).
\noindent
Using an appropriate algorithm, obtain \(Q_i\in \ensuremath{\mathcal{Q}}(i)\) such that
$$
\ensuremath{\mathbb{M}}\xspace(Q_i) \leq \epsilon_{i} + \inf\{\ensuremath{\mathbb{M}}\xspace(Q):Q\in \ensuremath{\mathcal{Q}}(i) \}.
$$
(We are solving a linear programming problem. We are also not
requiring the exact solution, only that we be within \(\epsilon_{i}\)
of the minimum value of the objective function.)
\noindent
{\bf Stopping Conditions:}
\begin{enumerate}[(C1)]
\item\label{eq:first.stop}
\(\ensuremath{\mathbb{M}}\xspace(Q_{i}) \geq \ensuremath{\mathbb{M}}\xspace(T'_{i }) + 3\,\epsilon_{i}\)
\item\label{eq:second.stop}
\(\ensuremath{\mathbb{M}}\xspace(T_{i}\elbow\ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}})\leq\epsilon/2\)
\end{enumerate}
\noindent{\bf Claim 3.}
If for some \(i_0\), the stopping conditions
are satisfied, then \( T'_{i_0} \) is the
desired approximation. That is,
\begin{itemize}
\item
\(B = \partial S + \partial T'_{i_0} \) with
\(\operatorname{spt}(S) \subset U\) and \(\ensuremath{\mathbb{M}}\xspace(S) < \epsilon\),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial T'_{i_0} ),\, \operatorname{spt} (B) \, ] < \epsilon\),
\item
\(\operatorname{spt}(\partial T'_{i_0} ) \subset U\),
\item
\(\ensuremath{\mathbb{M}}\xspace( T'_{i_0} ) < \epsilon +
\inf\{\ensuremath{\mathbb{M}}\xspace(S) : \partial S = B \}\),
\item
every
mass minimizing
current \(R\) with \(\partial R = \partial T'_{i_0} = B_{i_0} \)
is within \(\ensuremath{\mathcal{F}}\)-distance \(\epsilon\) of
\( T'_{i_0} \).
\end{itemize}
\noindent
{\bf Proof of Claim.}
By the choice of \(\epsilon_1\), it is immediate that
\begin{itemize}
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial T'_{i_0} ),\, \operatorname{spt} (B) \, ] < \epsilon\) ,
\item
\(\operatorname{spt}(\partial T'_{i_0} ) \subset U\)
\end{itemize}
hold.
Since \(T_{i_0} \in \ensuremath{\mathcal{T}}(i_0)\),
there exists \(S_1\) with
$$
B = \partial S_1 + \partial T_{i_0},
\ \ \operatorname{dist}_{H}[ \, \operatorname{spt}(S_1),\,\operatorname{spt}(B)\, ] <
\epsilon_{i_0}, \hbox{\rm\ \ and\ \ } \ensuremath{\mathbb{M}}\xspace(S_1) < \epsilon_{i_0}
\,.
$$
So
\begin{eqnarray*}
B &=& \partial S_1 + \partial T_{i_0} \\
&=& \partial S_1 + \partial \Big(T_{i_0} \elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}}
+ T_{i_0} \elbow \ensuremath{\mathcal{K}} \Big)\\
&=& \partial \Big( S_1 +
T_{i_0} \elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} \Big)
+ \partial T'_{i_0}
\,.
\end{eqnarray*}
We have
$$
\operatorname{spt}( S_1 +
T_{i_0} \elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} ) \subset U
$$
and
$$
\ensuremath{\mathbb{M}}\xspace\Big( S_1 +
T_{i_0} \elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} \Big)
\leq
\ensuremath{\mathbb{M}}\xspace( S_1) +
\ensuremath{\mathbb{M}}\xspace( T_{i_0} \elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} )
\leq \epsilon_{i_0} + \epsilon/2
\leq \epsilon
\,,
$$
where we have used the stopping condition
(C\ref{eq:second.stop}).
The right-hand inequality in (\ref{(*)}) gives us
$$
\ensuremath{\mathbb{M}}\xspace( T'_{i_0} ) < \epsilon +
\inf\{\ensuremath{\mathbb{M}}\xspace(S) : \partial S = B \}
\,.
$$
Suppose \(R\) is a minimizer with \(\partial R = B_{i_0}\).
Let \(\widehat{R}\) be such that
\begin{itemize}
\item
there exists \(S_2\) with
\(B_{i_0} = \partial S_2 + \partial \widehat{R}\),
\(\operatorname{dist}_{H}[ \, \operatorname{spt}(S_2),\,\operatorname{spt}(B_{i_0})\, ] < \epsilon_{i_0}\),
and \(\ensuremath{\mathbb{M}}\xspace(S_2) < \epsilon_{i_0} \),
\item
\(\operatorname{dist}_{H}[ \, \operatorname{spt} (\partial \widehat{R}),
\, \operatorname{spt} (B_{i_0}) \, ] < \epsilon_{i_0}\),
\item
\(\operatorname{spt} (\partial \widehat{R}) \subset U\),
\item
\( \ensuremath{\mathbb{M}}\xspace(\widehat{R}) \leq \ensuremath{\mathbb{M}}\xspace(R) +\epsilon_{i_0}\),
\item
\(\widehat{R} - R = X + \partial Y\) for some \(X\) and \(Y\) with
\(
\ensuremath{\mathbb{M}}\xspace(X) + \ensuremath{\mathbb{M}}\xspace(Y) \leq \epsilon_{i_0}
\).
\end{itemize}
Notice that the first three conditions above tell us that
\(\widehat{R}\in \ensuremath{\mathcal{T}}'(i_0)\).
Next, note that since \(R\) is a mass minimizer with
\(\partial R = \partial T'_{i_0}\), we have
$$
\ensuremath{\mathbb{M}}\xspace(R) \leq \ensuremath{\mathbb{M}}\xspace( T'_{i_0})
\,.
$$
Thus
$$
\ensuremath{\mathbb{M}}\xspace(\widehat{R}) \leq \ensuremath{\mathbb{M}}\xspace(R) + \epsilon_{i_0}
\leq
\ensuremath{\mathbb{M}}\xspace(T'_{i_0}) + \epsilon_{i_0}
$$
holds.
If it were the case that \(\widehat{R}\in \ensuremath{\mathcal{Q}}(i_0)\),
then the choice of \(Q_{i_0}\) would give us
$$
\ensuremath{\mathbb{M}}\xspace(Q_{i_0}) \leq \epsilon_{i_0} + \ensuremath{\mathbb{M}}\xspace(\widehat{R})
\leq
\ensuremath{\mathbb{M}}\xspace(T'_{i_0}) + 2\,\epsilon_{i_0}
\,,
$$
contradicting the stopping condition (C\ref{eq:first.stop}).
We conclude that \(\widehat{R}\in \ensuremath{\mathcal{T}}'(i_0)\setminus\ensuremath{\mathcal{Q}}(i_0)\).
Now, let \(W\) satisfy \(\partial W = T'_{i_{0}} - S_{2} - \widehat{R}\) with
\(S_2\) as above.
Because \(\widehat{R} \notin \ensuremath{\mathcal{Q}}(i_0)\), we have
$$
\ensuremath{\mathbb{M}}\xspace(W) < \epsilon/2
\,.
$$
We also have
\(\widehat{R} - R = X + \partial Y\) for some \(X\) and \(Y\) with
$$
\ensuremath{\mathbb{M}}\xspace(X) + \ensuremath{\mathbb{M}}\xspace(Y) \leq \epsilon_{i_0}
\,.
$$
Consequently, we see that
$$
T'_{i_0} - R =
S_2 + X + \partial Y + \partial W
\,,
$$
with
$$
\ensuremath{\mathbb{M}}\xspace(S_2) + \ensuremath{\mathbb{M}}\xspace(X) + \ensuremath{\mathbb{M}}\xspace(Y) +\ensuremath{\mathbb{M}}\xspace(W)
\leq 2\, \epsilon_{i_0} + \epsilon/2 \leq \epsilon
\,.
$$
That is, we have \(\ensuremath{\mathcal{F}}(T'_{i_0} - R) \leq \epsilon\).
\noindent{\bf Claim 3 has been proved.}
\noindent{\bf Claim 4.} For some \(i\), the stopping conditions
will be satisfied.
\noindent {\bf Proof of Claim.} Applying Claim~2, we pass to a
subsequence (without changing notation) for which the stopping
condition~(C\ref{eq:second.stop}) holds for all \(i\).
Arguing by contradiction, we suppose that
$$
\ensuremath{\mathbb{M}}\xspace(Q_{i}) < \ensuremath{\mathbb{M}}\xspace(T'_{i}) + 3\,\epsilon_{i}
$$
holds for every \(i\).
Since \(T_{i }\in \ensuremath{\mathcal{T}}(i )\), there exists \(S_{i }\) with
\(B = \partial S_{i } + \partial T_{i } \)
and \(\ensuremath{\mathbb{M}}\xspace(S_{i } ) < \epsilon_{i }\).
Since \(Q_i\in \ensuremath{\mathcal{Q}}(i)\), there exists
\(S'_i\) with
\(\partial T'_{i} = B_i = \partial S'_i + \partial Q_i\)
and \(\ensuremath{\mathbb{M}}\xspace(S'_i) < \epsilon_{i}\).
Set
$$
P_i = S_{i } + T_{i }\elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} + S'_i+ Q_i
\,.
$$
We have
\begin{eqnarray*}
\partial P_i &=& \partial S_i +
\partial [T_{i }\elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}}]
+ \partial S'_i + \partial Q_i\\
&=&\partial S_i +
\partial [T_{i }\elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}}] + \partial T'_i\\
&=& \partial S_i + \partial T_i
\ = \ B
\end{eqnarray*}
and
\begin{eqnarray*}
\ensuremath{\mathbb{M}}\xspace(P_i) &\leq &
\ensuremath{\mathbb{M}}\xspace( S_{i } ) + \ensuremath{\mathbb{M}}\xspace[ T_{i }\elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} ]+
\ensuremath{\mathbb{M}}\xspace(S'_i) + \ensuremath{\mathbb{M}}\xspace( Q_i )\\
&\leq & 2\,\epsilon_i + \ensuremath{\mathbb{M}}\xspace[ T_{i }\elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} ]+
\ensuremath{\mathbb{M}}\xspace(T'_i) + 3\,\epsilon_i
\ =\ \ensuremath{\mathbb{M}}\xspace(T_i) + 5\,\epsilon_i
\,.
\end{eqnarray*}
We may pass to a subsequence,
again without changing notation, such
that \(P_i\) converges to \(P^*\) and \(S_i + T_{i }\) converges to \(T^*\).
By the lower semicontinuity of mass and the right-hand
inequality in (\ref{(*)}), we see that both
\(P^*\) and \( T^*\) are mass
minimizers with boundary \( B\). By construction, \(P^*\) and \( T^*\)
are equal in \(I(\epsilon_1/2)\). Since, by the regularity theory of mass
minimizers, the singular set of a minimizer cannot disconnect the
surface, it follows that \(P^*= T^*\).
The fact that \(P^* = T^*\) tells us that
\(\ensuremath{\mathcal{F}}[ (S_i+ T_i) - P_{i} ] \rightarrow 0\), so
we can write \((S_i+ T_i) - P_{i} = X_i + \partial Y_i\)
with \(\ensuremath{\mathbb{M}}\xspace(X_i) + \ensuremath{\mathbb{M}}\xspace(Y_i) \rightarrow 0\). Then applying the isoperimetric
inequality to \(X_i\), we see that we can write
\((S_i+ T_i) - P_{i} = \partial Z_i\) with \(\ensuremath{\mathbb{M}}\xspace(Z_i)\rightarrow 0\).
On the other hand, observe that
$$
(S_i+ T_i) - P_{i} = T'_i -Q_i - S'_i
\,.
$$
By the definition of \(\ensuremath{\mathcal{Q}}(i)\), we have
\(T'_i -Q_i - S'_i = \partial W_i\) with \(\ensuremath{\mathbb{M}}\xspace(W_i) \geq \epsilon/2\).
This last inequality contradicts \(\ensuremath{\mathbb{M}}\xspace(Z_i)\rightarrow 0\),
because \(W_i\) and \(Z_i\) are \(N\)-dimensional integral currents
in \(\ensuremath{\mathbb{R}}\xspace^N\) having the same boundary, so in fact, they are equal.
\noindent{\bf Claim 4 has been proved.}
\noindent{\bf Conclusion.}
Once the sequence \(\epsilon_i\) satisfying the required
conditions has been chosen,
the algorithm proceeds as follows:
\begin{enumerate}
\item[(A1)] Set \(i = 1\).
\item[(A2)]
Compute \(T_i\).
\item[(A3)]
If the condition
\(\ensuremath{\mathbb{M}}\xspace[ T_{i }\elbow \ensuremath{\mathbb{R}}\xspace^{N}\setminus\ensuremath{\mathcal{K}} ] \leq \epsilon/2 \)
is satisfied, then advance to step (A4).
Otherwise, increment \(i\) and go to step (A2).
\item[(A4)]
Compute \(Q_{i}\).
\item[(A5)]
If the condition
\(\ensuremath{\mathbb{M}}\xspace(Q_{i}) \geq \ensuremath{\mathbb{M}}\xspace( T_{i}\elbow \ensuremath{\mathcal{K}}) + 3\,\epsilon_{i}\)
is satisfied, then return \(T'_i \) and terminate the algorithm.
Otherwise, increment \(i\) and go to step (A2).
\end{enumerate}
Claim~4 guarantees that the algorithm terminates after finitely
many steps, while Claim~3 guarantees that the returned
value \(T'_{i} \) is the desired approximation.
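The control flow of steps (A1)--(A5) can be sketched as follows. This is our own schematic, not part of the paper: the oracle functions \texttt{compute\_T}, \texttt{compute\_Q}, \texttt{mass\_outside\_K}, \texttt{restrict\_to\_K}, and \texttt{mass} are hypothetical placeholders standing in for the multigrid linear-programming minimizations and mass evaluations described above.

```python
# Schematic sketch of the control loop (A1)-(A5).  All oracle arguments
# are hypothetical placeholders for the multigrid linear-programming
# minimizations described in the text.

def approximate_minimizer(eps, eps_seq, compute_T, compute_Q,
                          mass_outside_K, restrict_to_K, mass):
    """Iterate until both stopping conditions (C1) and (C2) hold."""
    for i, eps_i in enumerate(eps_seq, start=1):
        T_i = compute_T(i)                       # (A2): near-minimizer in T(i)
        if mass_outside_K(T_i) > eps / 2.0:      # (A3): stopping condition (C2)
            continue                             # increment i, go to (A2)
        T_prime = restrict_to_K(T_i)             # T'_i = T_i restricted to K
        Q_i = compute_Q(i, T_prime)              # (A4): auxiliary minimization
        if mass(Q_i) >= mass(T_prime) + 3.0 * eps_i:   # (A5): condition (C1)
            return T_prime                       # the guaranteed approximation
    raise RuntimeError("eps_seq exhausted; Claim 4 guarantees termination "
                       "for a genuine infinite sequence eps_i -> 0")
```

With a genuine (infinite) sequence \(\epsilon_i \downarrow 0\), Claim~4 guarantees the loop returns after finitely many iterations, and Claim~3 guarantees the returned chain satisfies the five conclusions of the theorem.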
\end{document} | math | 35,020 |
\begin{document}
\section{Introduction}\label{sec:intro}
Following recent conjectures of \cite{FyodorovHiaryKeating2012} and \cite{MR3151088} about the limiting law of the {\it Gibbs measure} and the limiting law of the maximum for the Riemann zeta function on bounded random intervals of the critical line, progress has been made in the mathematics literature.
If $\tau$ is sampled uniformly in $[T,2T]$ for some large $T$, then it is expected that the limiting law of the Gibbs measure (see \eqref{def:gibbs.measure}) at low temperature for the field $(\log |\zeta(\frac{1}{2} + i(\tau + h))|, h\in [0,1])$ is a one-level Ruelle probability cascade (see e.g.\hspace{-0.3mm} \cite{MR875300}) and the law of the maximum is asymptotic to $\log \log T - \frac{3}{4} \log \log \log T + \mathcal{M}_T$ where $(\mathcal{M}_T, T\geq 2)$ is a sequence of random variables converging in distribution.
For a randomized version of the Riemann zeta function (see \eqref{def:X}), the first order of the maximum was proved in \cite{arXiv:1304.0677}, the second order of the maximum was proved in \cite{MR3619786}, and the limiting two-overlap distribution was found in \cite{arXiv:1706.08462} (see Theorem \ref{thm:limiting.two.overlap.distribution} below). The tightness of the recentered maximum is still open (see \cite{arXiv:1807.04860}).
In this short paper, we complete the analysis of \cite{arXiv:1706.08462} by proving the Ghirlanda-Guerra (GG) identities in the limit $T\to \infty$ (see Theorem \ref{thm:extended.GG.identities}).
As is well known in the spin glass literature (see e.g.\hspace{-0.3mm} Chapter 2 in \cite{MR3052333}), the limiting law of the two-overlap distribution, with a finite support, together with the GG identities allow a complete description of the limiting law of the Gibbs measure as a {\it Ruelle probability cascade} with finitely many levels (a random measure with a tree structure and Poisson-Dirichlet weights at each level).
Our main result (Theorem \ref{thm:Poisson.Dirichlet}) describes the joint law of the overlaps under the limiting mean Gibbs measure in terms of Poisson-Dirichlet weights.
It is expected that the approach presented here, which mostly stems from the work of \cite{MR3211001}, \cite{MR2070334} and \cite{MR3052333} on other models, can be adapted to prove the same result for the (true) Riemann zeta function on bounded random intervals of the critical line. At present, for the (true) Riemann zeta function, the first order of the maximum is proved conditionally on the Riemann hypothesis in \cite{doi:10.1007/s00440-017-0812-y} and unconditionally in \cite{arXiv:1612.08575}.
The paper is organised as follows.
In Section \ref{sec:definitions}, we give a few definitions.
In Section \ref{sec:main.result}, the main result is stated and shown to be a consequence of the GG identities and the main result from \cite{arXiv:1706.08462} about the limiting two-overlap distribution.
In Section \ref{sec:known.results}, we state known results from \cite{arXiv:1706.08462} that we will use to prove the GG identities.
The GG identities are proven in Section \ref{sec:proof.GG} along with other preliminary results, see the structure of the proof in Figure \ref{fig:proof.structure}.
For an explanation of the consequences of the GG identities and their conjectured universality for mean field spin glass models, we refer the reader to \cite{MR3628881}, \cite{MR3052333} and \cite{MR3024566}.
\section{Some definitions}\label{sec:definitions}
Let $(U_p, p ~\text{primes})$ be an i.i.d.\hspace{-0.3mm} sequence of uniform random variables on the unit circle in $\mathbb{C}$.
The random field of interest is
\begin{equation}\label{def:X}
X_h \circeq \sum_{p \leq T} W_p(h) \circeq \sum_{p \leq T} \frac{\text{Re}(U_p \, p^{-i h})}{p^{1/2}}, \quad h\in [0,1].
\end{equation}
This is a good model for the large values of $(\log |\zeta(\frac{1}{2} + i(\tau + h))|, h\in [0,1])$ for the following reason.
Proposition 1 in \cite{arXiv:1304.0677} proves that, assuming the Riemann hypothesis, and for $T$ large enough, there exists a set $B\subseteq [T,T+1]$, of Lebesgue measure at least $0.99$, such that
\begin{equation}
\log |\zeta(\frac{1}{2} + i t)| = \text{Re}\left(\sum_{p \leq T} \frac{1}{p^{1/2 + it}} \frac{\log(T / p)}{\log T}\right) + O(1), \quad t\in B.
\end{equation}
If we ignore the smoothing term $\log(T / p) / \log T$, the model \eqref{def:X} follows, because the process $(p^{-i\tau}\hspace{-1mm}, p ~\text{primes})$, where $\tau$ is sampled uniformly in $[T,2T]$, converges (in the sense of convergence of its finite-dimensional distributions, which can be checked by computing the moments), as $T\to\infty$, to a sequence of independent random variables distributed uniformly on the unit circle.
For more information, see Section 1.1 in \cite{MR3619786}.
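As a purely illustrative sanity check (not used in the proofs), one can simulate the field \eqref{def:X} and verify its normalization numerically: since $\mathbb{E}[X_h^2] = \sum_{p \leq T} \frac{1}{2p}$, Mertens' second theorem gives $\mathbb{E}[X_h^2] = \frac{1}{2}(\log \log T + M) + o(1)$, where $M \approx 0.2615$ is the Meissel-Mertens constant; this explains the $\frac{1}{2}\log\log T$ normalizations used throughout. A minimal Python sketch (the cutoff $T = 10^6$ is an arbitrary illustrative choice):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

T = 10 ** 6
primes = primes_up_to(T)

def sample_X(h, thetas):
    # X_h = sum_p Re(U_p p^{-ih}) / sqrt(p), with U_p = exp(i * theta_p)
    return sum(math.cos(th - h * math.log(p)) / math.sqrt(p)
               for p, th in zip(primes, thetas))

# Exact variance (independent of h): E[X_h^2] = sum_{p <= T} 1/(2p).
var_X = sum(1.0 / (2 * p) for p in primes)
mertens = 0.5 * (math.log(math.log(T)) + 0.2615)
print(var_X, mertens)  # the two values are close
```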
For simplicity, the dependence on $T$ will be implicit everywhere for $X$. Summations over $p$'s and $q$'s always mean that we sum over primes.
For $\alpha\in [0,1]$, we denote truncated sums of $X$ as follows:
\begin{equation}\label{def:X.alpha}
X_h(\alpha) \circeq \sum_{p \leq \exp((\log T)^{\alpha})} W_p(h), \quad h\in [0,1],
\end{equation}
where $\sum_{\emptyset} \circeq 0$. Define the {\it overlap} between two points of the field by
\begin{equation}\label{eq:correlation}
\rho(h,h') \circeq \frac{\mathbb{E}[X_h X_{h'}]}{\sqrt{\mathbb{E}[X_h^2] \mathbb{E}[X_{h'}^2]}}, \quad h,h'\in [0,1].
\end{equation}
For any $\alpha\in [0,1]$ and any $\beta > 0$, define the {\it (normalized) free energy of the perturbed model} by
\begin{equation}\label{eq:free.energy}
f_{\alpha,\beta,T}(u) \circeq \frac{1}{\log \log T} \log \int_0^1 e^{\beta (u X_h(\alpha) + X_h)} dh, \quad u > -1.
\end{equation}
The parameter $u$ allows perturbations of the correlation structure of the model.
When $u = 0$, we recover the {\it free energy}.
Finally, for any Borel set $A\in \mathcal{B}([0,1])$, define the {\it Gibbs measure} by
\begin{equation}\label{def:gibbs.measure}
G_{\beta,T}(A) = \int_A \frac{e^{\beta X_h}}{\int_{[0,1]} e^{\beta X_{h'}} dh'} dh.
\end{equation}
The parameter $\beta$ is called the {\it inverse temperature} in statistical mechanics.
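To make these definitions concrete, here is a minimal numerical sketch of the Gibbs measure \eqref{def:gibbs.measure} for one realization of the field; it is an illustration only, with arbitrary choices $T = 10^3$, $\beta = 3$, and a grid of $2000$ points replacing the integral over $[0,1]$:

```python
import math
import random

rng = random.Random(1)
T = 1000
primes = [p for p in range(2, T + 1)
          if all(p % q for q in range(2, int(p ** 0.5) + 1))]
thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in primes]

def X(h):
    # one realization of the field (def:X), with U_p = exp(i * theta_p)
    return sum(math.cos(th - h * math.log(p)) / math.sqrt(p)
               for p, th in zip(primes, thetas))

beta = 3.0
grid = [k / 2000.0 for k in range(2000)]          # discretization of [0, 1)
weights = [math.exp(beta * X(h)) for h in grid]
Z = sum(weights)                                   # discrete partition function
gibbs = [w / Z for w in weights]                   # discrete analogue of G_{beta,T}
```

Since the weights are proportional to $e^{\beta X_h}$, the Gibbs measure overweights the high points of the field, and the larger $\beta$ is, the stronger this concentration.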
\section{Main result}\label{sec:main.result}
The main result of this article is a complete description of the joint law of the overlaps for the model \eqref{def:X}, under the {\it limiting mean Gibbs measure}
\begin{equation}\label{eq:limiting.mean.Gibbs.measure}
\lim_{T\to \infty} \mathbb{E} G_{\beta,T}.
\end{equation}
We will show that, when $\beta > \beta_c \circeq 2$, this measure is the expectation $E$ of a random measure $\mu_{\beta}$ sampling orthonormal vectors in an infinite-dimensional separable Hilbert space, where the probability weights follow a
\begin{equation*}
\text{Poisson-Dirichlet distribution of parameter $\beta_c / \beta$.}
\end{equation*}
This is done through what is called the {\it Ghirlanda-Guerra identities}.
These identities first appeared in \cite{MR1662161}. Fifteen years later, Panchenko \cite{MR2999044} proved in a celebrated work (a simple proof is given in \cite{MR2825947} when $E \mu_{\beta}$ has a finite support) that if a random measure on the unit ball of a separable Hilbert space satisfies an extended version of the Ghirlanda-Guerra identities, then the overlaps must be ultrametric (i.e.\hspace{-0.3mm} exhibit a tree-like structure) under the mean of this random measure. This was an important step because, following the publication of \cite{MR1662161}, it was well known that the Ghirlanda-Guerra identities and ultrametricity together completely determine the joint law of the overlaps, up to the distribution of one overlap.
See, e.g., Theorem 6.1 in \cite{BaffioniRosati2000}, Section 1.2 in \cite{MR1993891} (in the context of the REM model from \cite{MR575260}) and Theorem 1.13 in \cite{MR2070334} (in the context of the GREM model from \cite{Derrida1985}).
Thus, from the work of Panchenko, proving the (extended) Ghirlanda-Guerra identities under \eqref{eq:limiting.mean.Gibbs.measure} implies ultrametricity and, consequently, determines the joint law of the overlaps, up to
the {\it limiting two-overlap distribution}
\begin{equation}\label{eq:limiting.two.overlap.distribution}
\lim_{T\to\infty} \mathbb{E} G_{\beta,T}^{\times 2}[\bb{1}_{\{\rho(h,h') \in \, \cdot \, \}}],
\end{equation}
which \cite{arXiv:1706.08462} already determined for the model \eqref{def:X}.
\begin{theorem}[Theorem 1 in \cite{arXiv:1706.08462}]\label{thm:limiting.two.overlap.distribution}
For any $\beta > \beta_c \circeq 2$ and any Borel set $A\in \mathcal{B}([0,1])$,
\begin{equation}\label{eq:thm:limiting.two.overlap.distribution.eq}
\lim_{T\to \infty} \mathbb{E} G_{\beta,T}^{\times 2} \big[\bb{1}_{\{\rho(h,h') \in A\}}\big] = \frac{2}{\beta} \bb{1}_A(0) + \left(1 - \frac{2}{\beta}\right) \bb{1}_A(1).
\end{equation}
\end{theorem}
\begin{remark}
The limiting two-overlap distribution in \eqref{eq:thm:limiting.two.overlap.distribution.eq} can be interpreted as a measure of relative distance between the extremes of the model.
\end{remark}
To state our main result, recall the definition of a Poisson-Dirichlet variable.
For $0 < \theta < 1$, let $\eta = (\eta_i)_{i\in \mathbb{N}^*}$ be the atoms of a Poisson random measure on $(0,\infty)$ with intensity measure $\theta x^{-\theta - 1} dx$.
A {\it Poisson-Dirichlet variable} $\xi$ of parameter $\theta$ is a random variable on the space of decreasing weights
\begin{equation}\label{eq:space.decreasing.weights}
\left\{(x_1,x_2,\ldots)\in [0,1]^{\mathbb{N}^*} :
\begin{array}{l}
1 \geq x_1 \geq x_2 \geq \ldots \geq 0 \\[1mm]
\text{and}~ \sum_{i=1}^{\infty} x_i = 1
\end{array}
\right\}
\end{equation}
which has the same law as
\begin{equation}
\xi \stackrel{\text{law}}{=} \left(\frac{\eta_i}{\sum_{j=1}^{\infty} \eta_j}, ~i\in \mathbb{N}^*\right)_{\downarrow},
\end{equation}
where $\downarrow$ stands for the decreasing rearrangement.
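A Poisson-Dirichlet variable is straightforward to simulate from this definition: if $\Gamma_i = E_1 + \cdots + E_i$ are the arrival times of a rate-one Poisson process, then $(\Gamma_i^{-1/\theta})_{i\in \mathbb{N}^*}$ realizes the atoms $\eta$, already in decreasing order. The Monte Carlo sketch below (the truncation at $2000$ atoms and the choice $\beta = 4$ are arbitrary) also illustrates the standard fact that $E[\sum_k \xi_k^2] = 1 - \theta$, which matches the mass at overlap $1$ in Theorem \ref{thm:limiting.two.overlap.distribution} when $\theta = \beta_c / \beta$:

```python
import random

def pd_weights(theta, n_atoms=2000, rng=random):
    # eta_i = Gamma_i^(-1/theta) realizes the atoms of the Poisson process
    # with intensity theta * x^(-theta-1) dx, in decreasing order
    # (truncated to n_atoms; the neglected atoms carry negligible mass).
    gamma = 0.0
    etas = []
    for _ in range(n_atoms):
        gamma += rng.expovariate(1.0)
        etas.append(gamma ** (-1.0 / theta))
    total = sum(etas)
    return [e / total for e in etas]

rng = random.Random(0)
beta = 4.0
theta = 2.0 / beta  # theta = beta_c / beta with beta_c = 2

xi = pd_weights(theta, rng=rng)            # one sample: decreasing, sums to 1
# Monte Carlo estimate of E[sum_k xi_k^2]; expected value: 1 - theta
reps = 1500
est = sum(sum(w * w for w in pd_weights(theta, rng=rng))
          for _ in range(reps)) / reps
print(est)
```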
Here is the main result.
\begin{theorem}[Main result]\label{thm:Poisson.Dirichlet}
Let $\beta > \beta_c \circeq 2$ and let $\xi = (\xi_k)_{k\in \mathbb{N}^*}$ be a Poisson-Dirichlet variable of parameter $\beta_c / \beta$.
Denote by $E$ the expectation with respect to $\xi$.
For any continuous function $\phi : [0,1]^{s(s-1)/2} \to \mathbb{R}$ of the overlaps of $s$ points,
\begin{equation}\label{eq:thm:Poisson.Dirichlet.eq}
\begin{aligned}
&\lim_{T\to \infty} \mathbb{E} G_{\beta,T}^{\times s} \Big[\phi\Big(\big(\rho(h_l,h_{l'})\big)_{1 \leq l,l' \leq s}\Big)\Big] \\
&\hspace{20mm}= E\left[\sum_{k_1,\ldots,k_s\in \mathbb{N}^*} \xi_{k_1} \cdots \xi_{k_s} \phi\Big(\big(\bb{1}_{\{k_l = k_{l'}\}}\big)_{1 \leq l,l' \leq s}\Big)\right].
\end{aligned}
\end{equation}
\end{theorem}
\begin{remark}
The domain of $\phi$ is $[0,1]^{s(s-1)/2}$ here because the matrix $(\rho(h_l,h_{l'}))_{1 \leq l,l' \leq s}$ is symmetric and has $1$'s on the diagonal.
\end{remark}
\begin{remark}
The proof of Theorem \ref{thm:Poisson.Dirichlet} is given in Section \ref{sec:proof.min.result}.
As mentioned earlier, it is a consequence of Theorem \ref{thm:limiting.two.overlap.distribution}, Theorem \ref{thm:extended.GG.identities} and the ultrametric structure of the overlaps under the limiting mean Gibbs measure.
To prove the extended Ghirlanda-Guerra identities in Section \ref{sec:proof.GG}, we will use the strategy developed in \cite{MR2070334,MR2070335} and used in \cite{MR3211001} and \cite{arXiv:1706.08462} (see Remark \ref{rem:beta.cases.explanation}).
For an alternative strategy (which requires a stronger control on the path of the maximal particle in the tree structure), see \cite{MR3539644}.
\end{remark}
\begin{remark}\label{rem:beta.cases.explanation}
In this paper, we state most of our results above the critical inverse temperature (i.e.\hspace{-0.3mm} at low temperature), namely when $\beta > \beta_c \circeq 2$, because it is the only interesting case: the description of the joint law of the overlaps under the limiting mean Gibbs measure turns out to be trivial when $\beta < \beta_c$, for the following reason.
When $\beta > \beta_c$, the Gibbs measure puts most of its weight on the ``particles'' $h$ whose field value is near the height of the maximum in the tree structure underlying the model \eqref{def:X}. Theorem \ref{thm:limiting.two.overlap.distribution} simply says that if two particles are sampled under the Gibbs measure, then, in the limit and on average, either they branched off ``at the last moment'' in the tree structure (there are clusters of points reaching near the level of the maximum) or they branched off in the beginning. They cannot branch at intermediate scales.
When $\beta < \beta_c$, the weights in the Gibbs measure are more spread out so that most contributions to the free energy actually come from particles reaching heights that are well below the level of the maximum in the tree structure. Hence, when two particles are selected from this larger pool of contributors that are not clustering, it can be shown that, in the limit and on average, the particles necessarily branched off in the beginning of the tree.
The proof would follow exactly the same strategy as in \cite{arXiv:1706.08462}:
\begin{itemize}
\item find the free energy of the perturbed model as a function of the perturbation parameter $u$,
\item link the expectation of the derivative of the perturbed free energy at $u = 0$ with the two-overlap distribution by using an approximate integration by parts argument and the convexity of the free energy.
\end{itemize}
(We refer to this strategy as the {\it Bovier-Kurkova technique} since it is adapted from the strategy introduced in \cite{MR2070334,MR2070335} for the GREM model.)
The computations would actually be easier in this case. One would find that
\begin{equation}\label{eq:limiting.gibbs.measure.under.2}
\lim_{T\to \infty} \mathbb{E} G_{\beta,T}^{\times 2} \big[\bb{1}_{\{\rho(h,h') \in A\}}\big] = \bb{1}_A(0).
\end{equation}
In other words, when $\beta < \beta_c$, the limiting mean Gibbs measure only samples points that are uncorrelated (and thus far from each other) in the limiting tree structure.
More generally, our main result (Theorem \ref{thm:Poisson.Dirichlet}), which describes the joint law of the overlaps under the limiting mean Gibbs measure, would say that for any continuous function $\phi : [0,1]^{s^2} \to \mathbb{R}$ of the overlaps of $s$ points,
\begin{equation}\label{eq:limiting.joint.law.overlaps.under.2}
\lim_{T\to \infty} \mathbb{E} G_{\beta,T}^{\times s} \Big[\phi\Big(\big(\rho(h_l,h_{l'})\big)_{1 \leq l,l' \leq s}\Big)\Big] = \phi(I_s),
\end{equation}
where $I_s$ denotes the identity matrix of order $s$.
In the critical case $\beta = \beta_c$, we obtain \eqref{eq:limiting.gibbs.measure.under.2} and \eqref{eq:limiting.joint.law.overlaps.under.2} with the same techniques.
\end{remark}
\section{Known results}\label{sec:known.results}
In this section, we gather the results from \cite{arXiv:1706.08462} that we will use in Section \ref{sec:proof.GG} to prove the extended Ghirlanda-Guerra identities.
The two propositions below are known convergence results for $f_{\alpha,\beta,T}$ and its derivative (with respect to $u$).
We slightly reformulate them for later use.
\begin{proposition}[Proposition 3 in \cite{arXiv:1706.08462}]\label{prop:mean.convergence.derivative.free.energy}
Let $\beta > \beta_c \circeq 2$ and $0 < \alpha < 1$.
Then,
\begin{equation}\label{eq:prop.3.Arguin.Tai.2017}
\frac{2}{\beta^2} \cdot \mathbb{E}\big[f_{\alpha,\beta,T}'(0)\big] = \int_0^{\alpha} \mathbb{E} G_{\beta,T}^{\times 2} [\bb{1}_{\{\rho(h,h') \leq y\}}] dy + o_T(1).
\end{equation}
Since $f_{\alpha,\beta,T}'(0) = \beta (\log \log T)^{-1} G_{\beta,T}[X_h(\alpha)]$, we can also write \eqref{eq:prop.3.Arguin.Tai.2017} as
\begin{equation}\label{eq:prop.3.Arguin.Tai.2017.rewrite}
\frac{1}{\beta} \cdot \frac{\mathbb{E} G_{\beta,T}[X_h(\alpha)]}{\frac{1}{2} \log \log T} = \alpha - \mathbb{E} G_{\beta,T}^{\times 2} [\int_0^{\alpha} \bb{1}_{\{y < \rho(h,h')\}} dy] + o_T(1).
\end{equation}
\end{proposition}
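For the reader's convenience, here is the short computation behind the reformulation \eqref{eq:prop.3.Arguin.Tai.2017.rewrite}. Substituting $f_{\alpha,\beta,T}'(0) = \beta (\log \log T)^{-1} G_{\beta,T}[X_h(\alpha)]$ into \eqref{eq:prop.3.Arguin.Tai.2017} gives
\begin{equation*}
\frac{1}{\beta} \cdot \frac{\mathbb{E} G_{\beta,T}[X_h(\alpha)]}{\frac{1}{2} \log \log T} = \int_0^{\alpha} \mathbb{E} G_{\beta,T}^{\times 2} [\bb{1}_{\{\rho(h,h') \leq y\}}] dy + o_T(1),
\end{equation*}
and, since $\bb{1}_{\{\rho(h,h') \leq y\}} = 1 - \bb{1}_{\{y < \rho(h,h')\}}$, the integral on the right-hand side equals $\alpha - \mathbb{E} G_{\beta,T}^{\times 2} [\int_0^{\alpha} \bb{1}_{\{y < \rho(h,h')\}} dy]$, which is \eqref{eq:prop.3.Arguin.Tai.2017.rewrite}.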
\begin{proposition}[Equation 13, Proposition 4 and Lemma 14 in \cite{arXiv:1706.08462}]\label{prop:convergence.free.energy}
Let $\beta > \beta_c \circeq 2$, $0 \leq \alpha \leq 1$ and $u > -1$.
Then,
\begin{equation}\label{eq:prop.4.Arguin.Tai.2017}
\lim_{T\to \infty} f_{\alpha,\beta,T}(u) = f_{\alpha,\beta}(u) \circeq \left\{\hspace{-1mm}
\begin{array}{ll}
\frac{\beta^2}{4} V_{\alpha,u}, &\mbox{if } u < 0, ~2 < \beta \leq 2 / \sqrt{V_{\alpha,u}}, \\[0.5mm]
\beta \sqrt{V_{\alpha,u}} - 1, &\mbox{if } u < 0, ~\beta > 2 / \sqrt{V_{\alpha,u}}, \\[0.5mm]
\beta(\alpha u + 1) - 1, &\mbox{if } u \geq 0, ~\beta > 2,
\end{array}
\right.
\end{equation}
where the limit holds in $L^1$, and where $V_{\alpha,u} \circeq (1+u)^2\alpha + (1-\alpha)$.
\end{proposition}
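The formula \eqref{eq:prop.4.Arguin.Tai.2017} can be sanity-checked numerically around $u = 0$: the branches fit together continuously, and both one-sided derivatives at $u = 0$ equal $\alpha\beta$ when $\beta > 2$. This is consistent with \eqref{eq:prop.3.Arguin.Tai.2017} and Theorem \ref{thm:limiting.two.overlap.distribution}: for $y\in (0,1)$, $\lim_T \mathbb{E} G_{\beta,T}^{\times 2}[\bb{1}_{\{\rho \leq y\}}] = 2/\beta$, so the right-hand side of \eqref{eq:prop.3.Arguin.Tai.2017} tends to $2\alpha/\beta = (2/\beta^2) \cdot \alpha\beta$. A short sketch (the values $\alpha = 1/2$, $\beta = 3$ are arbitrary):

```python
import math

def V(alpha, u):
    return (1.0 + u) ** 2 * alpha + (1.0 - alpha)

def f_limit(alpha, beta, u):
    # limiting free energy from the proposition (case beta > 2)
    if u >= 0.0:
        return beta * (alpha * u + 1.0) - 1.0
    v = V(alpha, u)
    if beta <= 2.0 / math.sqrt(v):
        return beta ** 2 / 4.0 * v
    return beta * math.sqrt(v) - 1.0

alpha, beta, h = 0.5, 3.0, 1e-6
d_left = (f_limit(alpha, beta, 0.0) - f_limit(alpha, beta, -h)) / h
d_right = (f_limit(alpha, beta, h) - f_limit(alpha, beta, 0.0)) / h
print(d_left, d_right)  # both approximately alpha * beta = 1.5
```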
\section{Proof of the extended Ghirlanda-Guerra identities}\label{sec:proof.GG}
This section is dedicated to the proof of the extended Ghirlanda-Guerra identities (Theorem \ref{thm:extended.GG.identities}).
We adopt a ``bottom-up'' style of presentation, where Theorem \ref{thm:extended.GG.identities} is the end goal.
Here is the structure of the proof:
\begin{figure}
\caption{Structure of the proof}
\label{fig:proof.structure}
\end{figure}
We start by relating the overlaps of the field $X$ to the overlaps of the truncated field $X(\alpha)$.
\begin{lemma}[Overlaps of the truncated field]\label{lem:covariance.estimates}
Let $0 \leq \alpha \leq 1$. Then, for all $h,h'\in [0,1]$,
\begin{equation}
\frac{\mathbb{E}[X_h(\alpha) X_{h'}(\alpha)]}{\frac{1}{2} \log \log T}
=
\left\{\hspace{-1mm}
\begin{array}{ll}
\rho(h,h') + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } \rho(h,h') \leq \alpha, \\[1mm]
\alpha + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } \rho(h,h') > \alpha.
\end{array}
\right.
\end{equation}
In both cases, the $O\hspace{-0.5mm}\left((\log \log T)^{-1}\right)$ term is uniform in $\alpha$.
\end{lemma}
\begin{proof}
Since $\text{Re}(z) = (z + \overline{z})/2$, $\mathbb{E}[U_p^2] = \mathbb{E}[(\overline{U_p})^2] = 0$ and $\mathbb{E}[U_p \overline{U_p}] = 1$, it is easily shown from \eqref{def:X} that, for any prime $p$,
\begin{equation}\label{eq:W.covariance}
\mathbb{E}[W_p(h) W_p(h')] = \frac{1}{2p} \cos(|h - h'| \log p), \quad h,h'\in [0,1].
\end{equation}
Thus, from the independence of the $U_p$'s and \eqref{def:X.alpha},
\begin{equation}\label{eq:X.alpha.covariance}
\mathbb{E}[X_h(\alpha) X_{h'}(\alpha)] = \sum_{p \leq \exp((\log T)^{\alpha})} \frac{1}{2p} \cos(|h - h'| \log p), \quad h,h'\in [0,1].
\end{equation}
Sums like the one on the right-hand side of \eqref{eq:X.alpha.covariance} were estimated on page 20 of Appendix A in \cite{arXiv:1304.0677} by using the prime number theorem.
In particular,
\begin{equation}\label{eq:rho.estimate}
\rho(h,h') = \frac{\frac{1}{2} \log\big((\log T) \wedge |h - h'|^{-1}\big)}{\frac{1}{2} \log \log T} + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right),
\end{equation}
and
\begin{equation}\label{eq:rho.estimate.alpha}
\frac{\mathbb{E}[X_h(\alpha) X_{h'}(\alpha)]}{\frac{1}{2} \log \log T}
=
\left\{\hspace{-1mm}
\begin{array}{ll}
\frac{\log|h - h'|^{-1}}{\log \log T} + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } 1 \leq |h - h'|^{-1} < (\log T)^{\alpha}, \\[1.2mm]
\alpha + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } |h - h'|^{-1} \geq (\log T)^{\alpha},
\end{array}
\right.
\end{equation}
where the $O\hspace{-0.5mm}\left((\log \log T)^{-1}\right)$ terms are all uniform in $\alpha$.
By comparing \eqref{eq:rho.estimate} and \eqref{eq:rho.estimate.alpha}, we get
\begin{align}
\frac{\mathbb{E}[X_h(\alpha) X_{h'}(\alpha)]}{\frac{1}{2} \log \log T}
&= \left\{\hspace{-1mm}
\begin{array}{ll}
\rho(h,h') + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } \rho(h,h') - O\hspace{-0.5mm}\left((\log \log T)^{-1}\right) < \alpha, \\[1.2mm]
\alpha + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } \rho(h,h') - O\hspace{-0.5mm}\left((\log \log T)^{-1}\right) \geq \alpha,
\end{array}
\right. \notag \\[2mm]
&=
\left\{\hspace{-1mm}
\begin{array}{ll}
\rho(h,h') + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } \rho(h,h') \leq \alpha, \\[1.2mm]
\alpha + O\hspace{-0.5mm}\left((\log \log T)^{-1}\right), &\mbox{if } \rho(h,h') > \alpha.
\end{array}
\right.
\end{align}
This ends the proof.
\end{proof}
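As an aside, the covariance formula \eqref{eq:W.covariance} is easy to confirm numerically: with $U_p = e^{i\Theta}$ and $\Theta$ uniform on $[0,2\pi)$, the expectation is the average of a trigonometric polynomial of degree two in $\Theta$, which a uniform grid computes exactly (up to floating point). A small sketch, with arbitrary illustrative values $p = 7$, $h = 0.3$, $h' = 0.8$:

```python
import math

def cov_Wp(p, h, hp, n=64):
    # E[W_p(h) W_p(h')] with W_p(h) = cos(Theta - h log p) / sqrt(p):
    # average over a uniform grid in Theta, which is exact for
    # trigonometric polynomials of degree <= 2 whenever n >= 3.
    lp = math.log(p)
    acc = 0.0
    for k in range(n):
        th = 2.0 * math.pi * k / n
        acc += math.cos(th - h * lp) * math.cos(th - hp * lp)
    return acc / (n * p)

p, h, hp = 7, 0.3, 0.8
lhs = cov_Wp(p, h, hp)
rhs = math.cos(abs(h - hp) * math.log(p)) / (2.0 * p)
print(lhs, rhs)  # agree to machine precision
```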
The next lemma is an approximate integration by parts result.
It is a straightforward generalization of Lemma 9 in \cite{arXiv:1706.08462}.
\begin{lemma}[Approximate integration by parts]\label{lem:approximate.integration.by.parts}
Let $s\in \mathbb{N}^*$ and let $\bb{\xi} \circeq (\xi_1,\xi_2,\ldots,\xi_s)$ be a random vector taking values in $\mathbb{C}^s$, such that $\mathbb{E}[|\xi_j|^3] < \infty$ and $\mathbb{E}[\xi_j] = 0$ for all $j\in \{1,\ldots,s\}$, and such that $\mathbb{E}[\xi_l \xi_j] = 0$ for all $l,j\in \{1,\ldots,s\}$.
Let $F : \mathbb{C}^s \to \mathbb{C}$ be a twice continuously differentiable function such that, for some $M > 0$,
\begin{equation*}
\max_{1 \leq j \leq s} \left\{\|\partial_{z_j}^2 F\|_{\infty} \vee \|\partial_{\overline{z_j}}^2 F\|_{\infty}\right\} \leq M,
\end{equation*}
where $\|f\|_{\infty} \circeq \sup_{\bb{z}\in \mathbb{C}^s} |f(\bb{z},\bb{\overline{z}})|$.
Then, for any $k\in \{1,\ldots,s\}$,
\begin{align}
&\Big|\mathbb{E}[\xi_k F(\bb{\xi},\overline{\bb{\xi}})] - \sum_{j=1}^s \mathbb{E}[\xi_k \overline{\xi_j}] ~ \mathbb{E}[\partial_{\overline{z_j}} F(\bb{\xi},\overline{\bb{\xi}})]\Big| \ll s^2 M \max_{1 \leq j \leq s} \mathbb{E}[|\xi_j|^3], \label{eq:lem:approximate.integration.by.parts.eq.1} \\
&\Big|\mathbb{E}[\overline{\xi_k} F(\bb{\xi},\overline{\bb{\xi}})] - \sum_{j=1}^s \mathbb{E}[\overline{\xi_k} \xi_j] ~ \mathbb{E}[\partial_{z_j} F(\bb{\xi},\overline{\bb{\xi}})]\Big| \ll s^2 M \max_{1 \leq j \leq s} \mathbb{E}[|\xi_j|^3], \label{eq:lem:approximate.integration.by.parts.eq.2}
\end{align}
where $f(\cdot) \ll g(\cdot)$ means that $|f(\cdot)| \leq C g(\cdot)$ for some universal constant $C > 0$ (the Vinogradov notation).
\end{lemma}
\begin{proof}
Fix $k\in \{1,\ldots,s\}$.
We only prove \eqref{eq:lem:approximate.integration.by.parts.eq.1} because the proof of \eqref{eq:lem:approximate.integration.by.parts.eq.2} is almost identical.
Since $\mathbb{E}[\xi_k] = 0$ and $\mathbb{E}[\xi_k \xi_j] = 0$ for all $j\in \{1,\ldots,s\}$, the left-hand side of \eqref{eq:lem:approximate.integration.by.parts.eq.1} can be written as
\begin{equation}\label{eq:lem:approximate.integration.by.parts.first}
\begin{aligned}
&\mathbb{E}\Big[\xi_k \Big(F(\bb{\xi},\overline{\bb{\xi}}) - F(\bb{0},\bb{0}) - \sum_{j=1}^s \xi_j \partial_{z_j} F(\bb{0},\bb{0}) - \sum_{j=1}^s \overline{\xi_j} \partial_{\overline{z_j}} F(\bb{0},\bb{0})\Big)\Big] \\
&\hspace{50mm}- \sum_{j=1}^s \mathbb{E}\Big[\xi_k \overline{\xi_j}\Big] \mathbb{E} \Big[\partial_{\overline{z_j}} F(\bb{\xi},\overline{\bb{\xi}}) - \partial_{\overline{z_j}} F(\bb{0},\bb{0})\Big].
\end{aligned}
\end{equation}
By Taylor's theorem in several variables and the assumptions, the following estimates hold
\begin{align}
&\Big|F(\bb{\xi},\overline{\bb{\xi}}) - F(\bb{0},\bb{0}) - \sum_{j=1}^s \xi_j \partial_{z_j} F(\bb{0},\bb{0}) - \sum_{j=1}^s \overline{\xi_j} \partial_{\overline{z_j}} F(\bb{0},\bb{0})\Big| \notag \\
&\hspace{50mm}\ll M \left(\sum_{l=1}^s |\xi_l|\right)^2 \leq M \, s \sum_{l=1}^s |\xi_l|^2, \\
&\Big|\partial_{\overline{z_j}} F(\bb{\xi},\overline{\bb{\xi}}) - \partial_{\overline{z_j}} F(\bb{0},\bb{0})\Big| \ll M \sum_{l=1}^s |\xi_l| \quad \text{for all } j\in \{1,\ldots,s\}.
\end{align}
Therefore,
\begin{align}
|\eqref{eq:lem:approximate.integration.by.parts.first}|
&\ll M \sum_{l=1}^s \Big(s \, \mathbb{E}\big[|\xi_k| \cdot |\xi_l|^2\big] + \sum_{j=1}^s \mathbb{E}\big[|\xi_k| \cdot |\xi_j|\big] \mathbb{E}\big[|\xi_l|\big]\Big) \notag \\
&\leq M \sum_{l=1}^s \Big(s \, \mathbb{E}\big[|\xi_k|^3\big]^{1/3} \mathbb{E}\big[(|\xi_l|^2)^{3/2}\big]^{2/3} + \sum_{j=1}^s \mathbb{E}\big[|\xi_k|^3\big]^{1/3} \mathbb{E}\big[|\xi_j|^3\big]^{1/3} \mathbb{E}\big[|\xi_l|^3\big]^{1/3}\Big) \notag \\
&\leq 2 s^2 M \max_{1 \leq j \leq s} \mathbb{E}[|\xi_j|^3],
\end{align}
where we used H\"older's inequality to obtain the second inequality.
\end{proof}
Here is a generalization of Proposition 10 in \cite{arXiv:1706.08462}.
It could be seen as a generalization of \eqref{eq:prop.3.Arguin.Tai.2017.rewrite} if \eqref{eq:prop.3.Arguin.Tai.2017.rewrite} were applied to $(W_p(h), h\in [0,1])$ instead of $(X_h(\alpha), h\in [0,1])$.
\begin{lemma}[Bovier-Kurkova technique - preliminary version]\label{lem:bovier.kurkova.technique.p}
Let $\beta > 0$ and $p \leq T$.
For any $s\in \mathbb{N}^*$, any $k\in \{1,\ldots,s\}$, and any bounded measurable function $\phi : [0,1]^s \rightarrow \mathbb{R}$, we have
\begin{equation}\label{eq:prop:bovier.kurkova.technique.p.eq}
\begin{aligned}
&\left|\mathbb{E} G_{\beta,T}^{\times s}[W_p(h_k) \phi(\bb{h})] \right. \\
&\hspace{10mm}- \left. \beta \cdot \left\{\hspace{-1.5mm}
\begin{array}{l}
\sum_{l=1}^s \mathbb{E} G_{\beta,T}^{\times s} \big[\mathbb{E}[W_p(h_k) W_p(h_l)] \, \phi(\bb{h})\big] \\[1mm]
- s \, \mathbb{E} G_{\beta,T}^{\times (s+1)} \big[\mathbb{E}[W_p(h_k) W_p(h_{s+1})] \, \phi(\bb{h})\big]
\end{array}
\hspace{-1.5mm}\right\}
\right|
\leq K p^{-3/2},
\end{aligned}
\end{equation}
where $\bb{h} \circeq (h_1,h_2,\ldots,h_s)$, $K \circeq s^2 C \beta^2 \|\phi\|_{\infty}$, and $C > 0$ is a universal constant.
\end{lemma}
\begin{proof}
Write for short
\begin{equation}\label{eq:def.Omega.p}
\omega_p(h) \circeq \frac{1}{2} p^{-i h - 1/2} \quad \text{and} \quad Y_p(h) \circeq \beta \sum_{\substack{q \leq T \\ q \neq p}} W_q(h).
\end{equation}
Define
\begin{equation}
F_p(\bb{z},\overline{\bb{z}}) \circeq \frac{\int_{[0,1]^s} \omega_p(h_k) \phi(\bb{h}) \prod_{l=1}^s \exp\Big(\beta (z_l \omega_p(h_l) + \overline{z_l} \overline{\omega_p(h_l)}) + Y_p(h_l)\Big) d \bb{h}}{\int_{[0,1]^s} \prod_{l=1}^s \exp\Big(\beta (z_l \omega_p(h_l) + \overline{z_l} \overline{\omega_p(h_l)}) + Y_p(h_l)\Big) d \bb{h}}.
\end{equation}
Then,
\begin{equation}\label{eq:prop:bovier.kurkova.technique.p.sum.expectations}
\mathbb{E} G_{\beta,T}^{\times s}[W_p(h_k) \phi(\bb{h})] = \mathbb{E}[U_p \cdot F_p(\bb{U}_p,\overline{\bb{U}_p})] + \mathbb{E}[\overline{U_p} \cdot \overline{F_p}(\bb{U}_p,\overline{\bb{U}_p})],
\end{equation}
where $\bb{U}_p \circeq (U_p,U_p,\ldots,U_p)$.
Since the $U_p$'s are i.i.d.\hspace{-0.3mm} uniform random variables on the unit circle in $\mathbb{C}$, we have $\mathbb{E}[|U_p|^3] < \infty$, $\mathbb{E}[U_p \overline{U_p}] = 1$ and $\mathbb{E}[U_p^2] = \mathbb{E}[U_p] = 0$.
If we apply \eqref{eq:lem:approximate.integration.by.parts.eq.1} with $F = F_p$ and $\bb{\xi} = \bb{U}_p$, and \eqref{eq:lem:approximate.integration.by.parts.eq.2} with $F = \overline{F_p}$ and $\bb{\xi} = \bb{U}_p$, we get
\begin{equation}\label{eq:prop:bovier.kurkova.technique.p.sum.expectations.next}
\begin{aligned}
\mathbb{E} G_{\beta,T}^{\times s}[W_p(h_k) \phi(\bb{h})]
&= \sum_{j=1}^s \Big\{\mathbb{E}\big[\partial_{\overline{z_j}} F_p(\bb{U}_p,\overline{\bb{U}_p})\big] + \mathbb{E}\big[\partial_{z_j} \overline{F_p}(\bb{U}_p,\overline{\bb{U}_p})\big]\Big\} \\
&+ s^2 \, O\Big(\max_{1 \leq j \leq s} \big\{\|\partial_{z_j}^2 F_p\|_{\infty} \vee \|\partial_{\overline{z_j}}^2 F_p\|_{\infty}\big\}\Big).
\end{aligned}
\end{equation}
For any bounded measurable function $H : [0,1] \to \mathbb{C}$, define
\begin{equation}
\langle H \rangle_{(z,\overline{z})} \circeq \langle H(h) \rangle_{(z,\overline{z})} \circeq \frac{\int_{[0,1]} H(h) \exp\Big(\beta (z \omega_p(h) + \overline{z} \overline{\omega_p(h)}) + Y_p(h)\Big) d h}{\int_{[0,1]} \exp\Big(\beta (z \omega_p(h) + \overline{z} \overline{\omega_p(h)}) + Y_p(h)\Big) d h},
\end{equation}
and for any bounded measurable function $H : [0,1]^s \to \mathbb{C}$, define
\begin{equation}
\langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \circeq \langle H(\bb{h}) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \circeq \frac{\int_{[0,1]^s} H(\bb{h}) \phi(\bb{h}) \prod_{l=1}^s \exp\Big(\beta (z_l \omega_p(h_l) + \overline{z_l} \overline{\omega_p(h_l)}) + Y_p(h_l)\Big) d \bb{h}}{\int_{[0,1]^s} \prod_{l=1}^s \exp\Big(\beta (z_l \omega_p(h_l) + \overline{z_l} \overline{\omega_p(h_l)}) + Y_p(h_l)\Big) d \bb{h}}.
\end{equation}
Differentiation of the above yields
\begin{equation}\label{eq:first.derivative.H}
\begin{aligned}
&\partial_{\overline{z_j}} \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} = \beta \big\{\langle H \overline{\omega_p(h_j)}\rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} - \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \overline{\omega_p(h_{s+1})} \rangle_{(z_j,\overline{z_j})}\big\}, \\
&\partial_{z_j} \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} = \beta \big\{\langle H \omega_p(h_j)\rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} - \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})}\big\}.
\end{aligned}
\end{equation}
The partial derivatives in \eqref{eq:first.derivative.H} can be used to expand the summands on the right-hand side of \eqref{eq:prop:bovier.kurkova.technique.p.sum.expectations.next}. Indeed, by using the relation $F_p(\bb{z},\overline{\bb{z}}) = \langle \omega_p(h_k) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi}$ with $\bb{z} = \bb{U}_p$,
\begin{align}\label{eq:prop:bovier.kurkova.technique.p.sum.expectations.next.2}
&\mathbb{E}\big[\partial_{\overline{z_j}} F_p(\bb{U}_p,\overline{\bb{U}_p})\big] + \mathbb{E}\big[\partial_{z_j} \overline{F_p}(\bb{U}_p,\overline{\bb{U}_p})\big] \notag \\[1mm]
&\hspace{5mm}\stackrel{\phantom{\eqref{eq:first.derivative.H}}}{=} \mathbb{E}\Big[\partial_{\overline{z_j}} \langle \omega_p(h_k) \rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi}\Big] + \mathbb{E}\Big[\partial_{z_j} \big\langle \overline{\omega_p(h_k)} \big\rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi}\Big] \notag \\[1mm]
&\hspace{5mm}\stackrel{\eqref{eq:first.derivative.H}}{=} \beta \, \mathbb{E}\Big[\big\langle \omega_p(h_k) \overline{\omega_p(h_j)}\big\rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} - \langle \omega_p(h_k) \rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} \langle \overline{\omega_p(h_{s+1})} \rangle_{(U_p,\overline{U_p})}\Big] \notag \\[1mm]
&\hspace{5mm}\quad \, \, + \beta \, \mathbb{E}\Big[\big\langle \overline{\omega_p(h_k)} \, \omega_p(h_j)\big\rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} - \langle \overline{\omega_p(h_k)} \rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} \langle \omega_p(h_{s+1}) \rangle_{(U_p,\overline{U_p})}\Big] \notag \\[1mm]
&\hspace{5mm}\stackrel{\phantom{\eqref{eq:first.derivative.H}}}{=} \beta \cdot \left\{\hspace{-1mm}
\begin{array}{l}
\mathbb{E}\Big[\big\langle 2 \text{Re}\big(\omega_p(h_k) \overline{\omega_p(h_j)}\big)\big\rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi}\Big] \\[3mm]
- \mathbb{E}\Big[2 \text{Re}\Big(\langle \omega_p(h_k) \rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} \langle \overline{\omega_p(h_{s+1})} \rangle_{(U_p,\overline{U_p})}\Big)\Big]
\end{array}
\hspace{-1mm}\right\}.
\end{align}
Since, by definition,
\begin{equation}\label{eq:bracket.U.p.equals.Gibbs}
\langle \, \cdot \, \rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} = G_{\beta,T}^{\times s}[~\cdot~ \phi(\bb{h})],
\end{equation}
and
\begin{align}\label{eq:lem:bovier.kurkova.technique.p.eq.technical.part}
&2 \text{Re}\Big(\langle \omega_p(h_k) \rangle_{(\bb{U}_p,\overline{\bb{U}_p})}^{\phi} \langle \overline{\omega_p(h_{s+1})} \rangle_{(U_p,\overline{U_p})}\Big) \notag \\
&\quad= 2 \text{Re}\left(\frac{\int_{[0,1]} \int_{[0,1]^s} \omega_p(h_k) \overline{\omega_p(h_{s+1})} \phi(\bb{h}) \prod_{l=1}^{s+1} \exp\Big(\beta \sum_{q \leq T} W_q(h_l)\Big) d \bb{h} \, d h_{s+1}}{\int_{[0,1]} \int_{[0,1]^s} \prod_{l=1}^{s+1} \exp\Big(\beta \sum_{q \leq T} W_q(h_l)\Big) d \bb{h} \, d h_{s+1}}\right) \notag \\[1mm]
&\quad= G_{\beta,T}^{\times (s+1)}\big[2 \text{Re}\big(\omega_p(h_k) \overline{\omega_p(h_{s+1})}\big) \phi(\bb{h})\big],
\end{align}
and
\begin{equation}
2 \, \text{Re} (\omega_p(h) \overline{\omega_p(h')}) = \frac{1}{2p} \cos(|h - h'| \log p) \stackrel{\eqref{eq:W.covariance}}{=} \mathbb{E}[W_p(h)W_p(h')],
\end{equation}
we can rewrite \eqref{eq:prop:bovier.kurkova.technique.p.sum.expectations.next.2} as
\begin{equation}\label{eq:prop:bovier.kurkova.technique.p.sum.expectations.next.3}
\begin{aligned}
&\mathbb{E}\big[\partial_{\overline{z_j}} F_p(\bb{U}_p,\overline{\bb{U}_p})\big] + \mathbb{E}\big[\partial_{z_j} \overline{F_p}(\bb{U}_p,\overline{\bb{U}_p})\big] \\[1mm]
&\hspace{5mm}= \beta \cdot \left\{\hspace{-1mm}
\begin{array}{l}
\mathbb{E} G_{\beta,T}^{\times s}\big[\mathbb{E}[W_p(h_k)W_p(h_j)] \phi(\bb{h})\big] \\[1mm]
- \mathbb{E} G_{\beta,T}^{\times (s+1)}\big[\mathbb{E}[W_p(h_k)W_p(h_{s+1})] \phi(\bb{h})\big]
\end{array}
\hspace{-1mm}\right\}.
\end{aligned}
\end{equation}
From \eqref{eq:prop:bovier.kurkova.technique.p.sum.expectations.next} and \eqref{eq:prop:bovier.kurkova.technique.p.sum.expectations.next.3}, we conclude \eqref{eq:prop:bovier.kurkova.technique.p.eq}, provided that, for all $j\in \{1,\ldots,s\}$,
\begin{equation}
\|\partial_{z_j}^2 F_p\|_{\infty} \vee \|\partial_{\overline{z_j}}^2 F_p\|_{\infty} \leq \widetilde{C} \beta^2 \|\phi\|_{\infty} p^{-3/2},
\end{equation}
where $\widetilde{C} > 0$ is a universal constant.
To verify this last point, note that, by differentiating in \eqref{eq:first.derivative.H},
\begin{align}\label{eq:second.derivative.z.bound}
&\partial_{z_j}^2 \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi}
= \beta \left\{\hspace{-1mm}
\begin{array}{l}
\partial_{z_j} \langle H \omega_p(h_j)\rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} - (\partial_{z_j} \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi}) \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})} \\[1mm]
- \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} (\partial_{z_j} \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})})
\end{array}
\hspace{-1mm}\right\} \notag \\[1mm]
&\hspace{10mm}= \beta^2 \left\{\hspace{-1mm}
\begin{array}{l}
\langle H \omega_p^2(h_j)\rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} - \langle H \omega_p(h_j) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})} \\[2mm]
- \Big(\langle H \omega_p(h_j)\rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} - \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})}\Big) \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})} \\[1.5mm]
- \langle H \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \Big(\langle \omega_p^2(h_{s+1})\rangle_{(z_j,\overline{z_j})} - \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})}^2\Big)
\end{array}
\hspace{-1mm}\right\}.
\end{align}
Using the relation $F_p(\bb{z},\overline{\bb{z}}) = \langle \omega_p(h_k) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi}$, \eqref{eq:second.derivative.z.bound}, and the triangle inequality, we obtain
\begin{align}\label{eq:lem:bovier.kurkova.technique.p.end.proof.Jensen}
|\partial_{z_j}^2 F_p(\bb{z},\overline{\bb{z}})|
&= \beta^2 \left|\hspace{-1mm}
\begin{array}{l}
\langle \omega_p(h_k) \omega_p^2(h_j)\rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \\[2mm]
- 2 \langle \omega_p(h_k) \omega_p(h_j) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})} \\[2mm]
+ 2 \langle \omega_p(h_k) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \omega_p(h_{s+1}) \rangle_{(z_j,\overline{z_j})}^2 \\[2mm]
- \langle \omega_p(h_k) \rangle_{(\bb{z},\overline{\bb{z}})}^{\phi} \langle \omega_p^2(h_{s+1})\rangle_{(z_j,\overline{z_j})}
\end{array}
\hspace{-1mm}\right| \notag \\[1mm]
&\leq \beta^2 \left\{\hspace{-1mm}
\begin{array}{l}
\langle |\omega_p(h_k)| \cdot |\omega_p(h_j)|^2\rangle_{(\bb{z},\overline{\bb{z}})}^{|\phi|} \\[2mm]
+ 2 \langle |\omega_p(h_k)| \cdot |\omega_p(h_j)| \rangle_{(\bb{z},\overline{\bb{z}})}^{|\phi|} \langle |\omega_p(h_{s+1})| \rangle_{(z_j,\overline{z_j})} \\[2mm]
+ 2 \langle |\omega_p(h_k)| \rangle_{(\bb{z},\overline{\bb{z}})}^{|\phi|} \langle |\omega_p(h_{s+1})| \rangle_{(z_j,\overline{z_j})}^2 \\[2mm]
+ \langle |\omega_p(h_k)| \rangle_{(\bb{z},\overline{\bb{z}})}^{|\phi|} \langle |\omega_p(h_{s+1})|^2\rangle_{(z_j,\overline{z_j})}
\end{array}
\hspace{-1mm}\right\}.
\end{align}
Since $|\omega_p(h)| = \frac{1}{2} p^{-1/2}$, $\langle 1 \rangle_{(z_j,\overline{z_j})} = 1$ and $\langle 1 \rangle_{(\bb{z},\overline{\bb{z}})}^{|\phi|} \leq \|\phi\|_{\infty}$, we deduce from \eqref{eq:lem:bovier.kurkova.technique.p.end.proof.Jensen} that
\begin{equation}
|\partial_{z_j}^2 F_p(\bb{z},\overline{\bb{z}})| \leq \frac{6}{8} \beta^2 \|\phi\|_{\infty} p^{-3/2}.
\end{equation}
We obtain the bound on $\|\partial_{\overline{z_j}}^2 F_p\|_{\infty}$ in the same manner.
\end{proof}
The next proposition is a consequence of the two previous lemmas. It generalizes \eqref{eq:prop.3.Arguin.Tai.2017.rewrite}, which corresponds to the special case $(k = 1, s = 1, \phi \equiv 1)$.
The idea for the statement originates from \cite{MR2070334}, and the idea behind the proof generalizes the special-case application in \cite{MR3211001}.
See \cite{MR3354619,MR3731796} for an application in the context of the Gaussian free field.
\begin{proposition}[Bovier-Kurkova technique]\label{eq:bovier.kurkova.technique}
Let $\beta > 0$ and $0 \leq \alpha \leq 1$.
For any $s\in \mathbb{N}^*$, any $k\in \{1,\ldots,s\}$, and any bounded measurable function $\phi : [0,1]^s \rightarrow \mathbb{R}$, we have
\begin{equation}\label{eq:prop:bovier.kurkova.technique.eq}
\begin{aligned}
&\left|\frac{1}{\beta} \cdot \frac{\mathbb{E} G_{\beta,T}^{\times s}\big[X_{h_k}(\alpha) \phi(\bb{h})\big]}{\frac{1}{2} \log \log T} \right.\\[1mm]
&\hspace{10mm}- \left.\left\{\hspace{-1mm}
\begin{array}{l}
\sum_{l=1}^s \mathbb{E} G_{\beta,T}^{\times s} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_l)\}} dy ~ \phi(\bb{h})\big] \\[1mm]
- s\, \mathbb{E} G_{\beta,T}^{\times (s+1)} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_{s+1})\}} dy ~ \phi(\bb{h})\big]
\end{array}
\hspace{-1.5mm}\right\}\right| = O\hspace{-0.5mm}\left((\log \log T)^{-1}\right),
\end{aligned}
\end{equation}
where $\bb{h} \circeq (h_1,h_2,\ldots,h_s)$.
\end{proposition}
\begin{proof}
For any $l\in \{1,\ldots,s+1\}$,
\begin{equation}\label{eq:prop:BV.technique.beginning}
\begin{aligned}
\mathbb{E}G_{\beta,T}^{\times (s+1)} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_l)\}} dy ~\phi(\bb{h})\big]
&= \mathbb{E}G_{\beta,T}^{\times (s+1)} \big[\rho(h_k,h_l) \, \bb{1}_{\{\rho(h_k,h_l) \leq \alpha\}}\, \phi(\bb{h})\big] \\
&+ \mathbb{E}G_{\beta,T}^{\times (s+1)} \big[\alpha \, \bb{1}_{\{\rho(h_k,h_l) > \alpha\}}\, \phi(\bb{h})\big].
\end{aligned}
\end{equation}
On the other hand, if we sum \eqref{eq:prop:bovier.kurkova.technique.p.eq} over the set $\{p ~\text{prime} : p \leq \exp((\log T)^{\alpha})\}$ and divide by $\frac{\beta}{2} \log \log T$, we obtain
\begin{equation}\label{eq:prop:BV.technique.end}
\begin{aligned}
&\left|\frac{1}{\beta} \cdot \frac{\mathbb{E} G_{\beta,T}^{\times s}[X_{h_k}(\alpha) \phi(\bb{h})]}{\frac{1}{2} \log \log T} \right. \\
&\hspace{10mm}-
\left.\left\{\hspace{-1.5mm}
\begin{array}{l}
\sum_{l=1}^s \mathbb{E} G_{\beta,T}^{\times s} \left[\frac{\mathbb{E}[X_{h_k}(\alpha)X_{h_l}(\alpha)]}{\frac{1}{2} \log \log T} \, \phi(\bb{h})\right] \\[2mm]
- s \, \mathbb{E} G_{\beta,T}^{\times (s+1)} \left[\frac{\mathbb{E}[X_{h_k}(\alpha) X_{h_{s+1}}(\alpha)]}{\frac{1}{2} \log \log T} \, \phi(\bb{h})\right]
\end{array}
\hspace{-1.5mm}\right\}
\right| = O\hspace{-0.5mm}\left((\log \log T)^{-1}\right).
\end{aligned}
\end{equation}
Now, one by one, take the difference in absolute value between each of the $s+1$ expectations inside the braces in \eqref{eq:prop:BV.technique.end} and the corresponding expectation on the left-hand side of \eqref{eq:prop:BV.technique.beginning}. We obtain the bound \eqref{eq:prop:bovier.kurkova.technique.eq} by using Lemma \ref{lem:covariance.estimates}.
\end{proof}
Our goal now is to combine Proposition \ref{eq:bovier.kurkova.technique} with a concentration result (Proposition \ref{prop:concentration.result}) in order to prove an approximate version of the GG identities (Theorem \ref{thm:approximate.extended.GG.identities}). We will then show that the identities must hold exactly in the limit $T\to \infty$ (Theorem \ref{thm:extended.GG.identities}).
Before stating and proving the concentration result, we show that $f_{\alpha,\beta}(\cdot)$, the limiting perturbed free energy, is differentiable in an open interval around $0$.
\begin{lemma}\label{lem:differentiability.limiting.free.energy}
Let $\beta > \beta_c \circeq 2$ and $0 \leq \alpha \leq 1$. There exists $\delta = \delta(\alpha,\beta) > 0$ small enough that $f_{\alpha,\beta}(\cdot)$ from Proposition \ref{prop:convergence.free.energy} is differentiable on $(-\delta,\delta)$. Also, we have $f_{\alpha,\beta}'(0) = \beta \alpha$.
\end{lemma}
\begin{proof}
Since $\beta > 2$ and $\lim_{u\to 0} V_{\alpha,u} = 1$, there exists $\delta = \delta(\alpha,\beta) > 0$ small enough that, for all $u\in (-\delta,\delta)$,
\begin{equation}\label{eq:limiting.free.energy.expression}
f_{\alpha,\beta}(u) =
\left\{\hspace{-1mm}
\begin{array}{ll}
\beta \sqrt{V_{\alpha,u}} - 1, &\mbox{if } u < 0, \\
\beta(\alpha u + 1) - 1, &\mbox{if } u \geq 0.
\end{array}
\right.
\end{equation}
The differentiability of $f_{\alpha,\beta}(\cdot)$ on $(-\delta,\delta)\backslash \{0\}$ is obvious.
Also,
\begin{equation}\label{eq:limiting.free.energy.derivative.linear}
\frac{f_{\alpha,\beta}(u) - f_{\alpha,\beta}(0)}{u} =
\left\{\hspace{-1mm}
\begin{array}{ll}
\beta \frac{\sqrt{V_{\alpha,u}} - 1}{u}, &\mbox{if } u < 0, \\
\beta\alpha, &\mbox{if } u \geq 0.
\end{array}
\right.
\end{equation}
Take both the left and right limits at $0$ to conclude.
\end{proof}
Here is the concentration result.
It is analogous to Theorem 3.8 in \cite{MR3052333}, which was proved for the mixed $p$-spin model.
We give the proof for completeness.
\begin{proposition}[Concentration]\label{prop:concentration.result}
Let $\beta > \beta_c \circeq 2$ and $0 < \alpha < 1$.
For any $s\in \mathbb{N}^*$, any $k\in \{1,\ldots,s\}$, and any bounded measurable function $\phi : [0,1]^s \rightarrow \mathbb{R}$, we have
\begin{equation}
\left|\frac{\mathbb{E} G_{\beta,T}^{\times s}[X_{h_k}(\alpha) \phi(\bb{h})]}{\log \log T} - \frac{\mathbb{E} G_{\beta,T}[X_{h_k}(\alpha)]}{\log \log T} \mathbb{E} G_{\beta,T}^{\times s}[\phi(\bb{h})]\right| = o_T(1),
\end{equation}
where $\bb{h} \circeq (h_1,h_2,\ldots,h_s)$.
\end{proposition}
\begin{proof}
By applying Jensen's inequality to the expectation $\mathbb{E}G_{\beta,T}^{\times s}[ \, \cdot \, ]$, followed by the triangle inequality,
\begin{align*}
&\big|\mathbb{E}G_{\beta,T}^{\times s}[X_{h_k}(\alpha) \phi(\bb{h})] - \mathbb{E}G_{\beta,T}[X_{h_k}(\alpha)] \mathbb{E}G_{\beta,T}^{\times s}[\phi(\bb{h})]\big| \notag \\[0.5mm]
&\quad\quad\leq \mathbb{E}G_{\beta,T} \big|X_{h_k}(\alpha) - \mathbb{E}G_{\beta,T} [X_{h_k}(\alpha)]\big| \cdot \|\phi\|_{\infty} \notag \\
&\quad\quad\leq \left\{\hspace{-1mm}
\begin{array}{l}
\mathbb{E}G_{\beta,T}\big|X_{h_k}(\alpha) - G_{\beta,T}[X_{h_k}(\alpha)]\big| \\[1mm]
+ \mathbb{E}\big|G_{\beta,T}[X_{h_k}(\alpha)] - \mathbb{E}G_{\beta,T}[X_{h_k}(\alpha)]\big|
\end{array}
\hspace{-1mm}\right\} \cdot \|\phi\|_{\infty} \notag \\
&\quad\quad\circeq \big\{(a) + (b)\big\} \cdot \|\phi\|_{\infty}.
\end{align*}
Below, we show that $(a)$ and $(b)$ are $o(\log \log T)$ in Step 1 and Step 2, respectively.
\noindent
{\bf Step 1}. Note that
\begin{align}\label{eq:lem:concentration.result.start.step.1}
(a)
&= \mathbb{E}G_{\beta,T}\Big|\int_0^1 (X_{h_1}(\alpha) - X_{h_2}(\alpha))\frac{e^{\beta X_{h_2}}}{\int_0^1 \hspace{-0.5mm}e^{\beta X_{z_2}} dz_2} dh_2\Big| \notag \\
&\leq \mathbb{E}G_{\beta,T}^{\times 2}\big|X_{h_1}(\alpha) - X_{h_2}(\alpha)\big|.
\end{align}
For $u \geq 0$, we define a perturbed version of the last quantity, where the Gibbs measure $G_{\beta,T,u}$ is now defined with respect to the field $(u X_h(\alpha) + X_h, h\in [0,1])$:
\begin{align}
D_{\alpha,\beta,T}(u)
&\circeq \mathbb{E}G_{\beta,T,u}^{\times 2} \big|X_{h_1}(\alpha) - X_{h_2}(\alpha)\big|.
\end{align}
We can easily verify that
\begin{equation}\label{eq:lem:concentration.result.derivative.D}
D_{\alpha,\beta,T}'(y) = \beta ~\mathbb{E}G_{\beta,T,y}^{\times 3}\Big[\big|X_{h_1}(\alpha) - X_{h_2}(\alpha)\big| \cdot \big(X_{h_1}(\alpha) + X_{h_2}(\alpha) - 2X_{h_3}(\alpha)\big)\Big].
\end{equation}
If we separate the expectation in \eqref{eq:lem:concentration.result.derivative.D} into two parts and apply the Cauchy-Schwarz inequality to each of them, followed by an application of the elementary inequality $(c + d)^2 \leq 2c^2 + 2d^2$, we find, for $y \geq 0$,
\begin{align}\label{eq:lem:IGFF.ghirlanda.guerra.restricted.2.a.bound.derivative}
\left|D_{\alpha,\beta,T}'(y)\right|
&\leq \beta \cdot
\left\{\hspace{-1mm}
\begin{array}{l}
\mathbb{E}G_{\beta,T,y}^{\times 3} \big|X_{h_1}(\alpha) - X_{h_2}(\alpha)\big| \big|X_{h_1}(\alpha) - X_{h_3}(\alpha)\big| \\[1mm]
+\, \mathbb{E}G_{\beta,T,y}^{\times 3} \big|X_{h_1}(\alpha) - X_{h_2}(\alpha)\big| \big|X_{h_2}(\alpha) - X_{h_3}(\alpha)\big|
\end{array}
\hspace{-1mm}\right\} \notag \\[1mm]
&\leq \beta \cdot 2\, \mathbb{E}G_{\beta,T,y}^{\times 2} [(X_{h_1}(\alpha) - X_{h_2}(\alpha))^2] \notag \\
&\leq \beta \cdot 8\, \mathbb{E}G_{\beta,T,y} [\big(X_h(\alpha) - G_{\beta,T,y}[X_h(\alpha)]\big)^2].
\end{align}
Note that $\beta^{-2} (\log \log T) f_{\alpha,\beta,T}''(y) = G_{\beta,T,y} [\big(X_h(\alpha) - G_{\beta,T,y}[X_h(\alpha)]\big)^2]$ and apply inequality \eqref{eq:lem:IGFF.ghirlanda.guerra.restricted.2.a.bound.derivative} in the identity $u D_{\alpha,\beta,T}(0) = \int_0^u D_{\alpha,\beta,T}(y) dy - \int_0^u \int_0^x D_{\alpha,\beta,T}'(y) dy dx$. We obtain, for $u > 0$,
\begin{align}\label{eq:lem:concentration.result.D.0.bound}
D_{\alpha,\beta,T}(0)
&\leq \frac{1}{u} \int_0^u D_{\alpha,\beta,T}(y) dy + \int_0^u \left|D_{\alpha,\beta,T}'(y)\right| dy \notag \\
&\leq 2 \left(\frac{1}{u} \int_0^u \beta^{-2} (\log \log T) \mathbb{E} [f_{\alpha,\beta,T}''(y)] dy\right)^{1/2} \notag \\
&+ 8 \beta \int_0^u \beta^{-2} (\log \log T) \mathbb{E}[f_{\alpha,\beta,T}''(y)] dy.
\end{align}
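The integral identity invoked before the last display is elementary: since $D_{\alpha,\beta,T}(0) = D_{\alpha,\beta,T}(x) - \int_0^x D_{\alpha,\beta,T}'(y)\, dy$ for every $x \in [0,u]$, integrating in $x$ over $[0,u]$ gives

```latex
u\, D_{\alpha,\beta,T}(0)
  = \int_0^u D_{\alpha,\beta,T}(y)\, dy
  - \int_0^u \!\! \int_0^x D_{\alpha,\beta,T}'(y)\, dy\, dx .
```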
In order to bound $\frac{1}{u}\int_0^u D_{\alpha,\beta,T}(y) dy$, we split $D_{\alpha,\beta,T}(y)$ into two parts (with the triangle inequality) and applied the Cauchy-Schwarz inequality to the two resulting expectations $\frac{1}{u} \int_0^u \mathbb{E}G_{\beta,T,y}[\, \cdot\, ]\, dy$.
Now, on the right-hand side of \eqref{eq:lem:concentration.result.D.0.bound}, use the convexity of $f_{\alpha,\beta,T}(\cdot)$ and the mean convergence of $f_{\alpha,\beta,T}(z), ~z > -1$, from Proposition \ref{prop:convergence.free.energy}. We get, for all $u > 0$ and all $y\in (0,1)$,
\begin{align}
\limsup_{T\rightarrow \infty} \frac{(a)}{\log \log T}
&\stackrel{\eqref{eq:lem:concentration.result.start.step.1}}{\leq} \limsup_{T\rightarrow \infty} \frac{D_{\alpha,\beta,T}(0)}{\log \log T} \notag \\[0.5mm] &\stackrel{\eqref{eq:lem:concentration.result.D.0.bound}}{\leq} \frac{8}{\beta} \cdot \left(\frac{f_{\alpha,\beta}(u + y) - f_{\alpha,\beta}(u)}{y} - \frac{f_{\alpha,\beta}(0) - f_{\alpha,\beta}(-y)}{y}\right).
\end{align}
From Lemma \ref{lem:differentiability.limiting.free.energy}, there exists $\delta = \delta(\alpha,\beta) > 0$ such that $f_{\alpha,\beta}(\cdot)$ is differentiable on $(-\delta,\delta)$. Therefore, take $u \rightarrow 0^+$ and then $y \rightarrow 0^+$ in the above equation to conclude Step 1.
\noindent
{\bf Step 2}. For all $u\in (0,1)$, let
\begin{equation}
\begin{aligned}
\eta_{\alpha,\beta,T}(u)
&\circeq \big|f_{\alpha,\beta,T}(-u) - \mathbb{E}[f_{\alpha,\beta,T}(-u)]\big| + \big|f_{\alpha,\beta,T}(0) - \mathbb{E}[f_{\alpha,\beta,T}(0)]\big| \\[1mm]
&\quad+ \big|f_{\alpha,\beta,T}(u) - \mathbb{E}[f_{\alpha,\beta,T}(u)]\big|.
\end{aligned}
\end{equation}
Differentiation of the free energy gives $f_{\alpha,\beta,T}'(0) = \beta (\log \log T)^{-1} G_{\beta,T}[X_{h_k}(\alpha)]$.
Then, from the convexity of $f_{\alpha,\beta,T}(\cdot)$,
\begin{align}
\beta \cdot \frac{(b)}{\log \log T}
&= \mathbb{E}\big|f_{\alpha,\beta,T}'(0) - \mathbb{E}[f_{\alpha,\beta,T}'(0)]\big| \notag \\[1mm]
&\leq \left|\frac{\mathbb{E}[f_{\alpha,\beta,T}(u)] - \mathbb{E}[f_{\alpha,\beta,T}(0)]}{u} - \mathbb{E}[f_{\alpha,\beta,T}'(0)]\right| \notag \\[1mm]
&+ \left|\frac{\mathbb{E}[f_{\alpha,\beta,T}(0)] - \mathbb{E}[f_{\alpha,\beta,T}(-u)]}{u} - \mathbb{E}[f_{\alpha,\beta,T}'(0)]\right| + \frac{\mathbb{E}[\eta_{\alpha,\beta,T}(u)]}{u}.
\end{align}
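The convexity step in the last display can be sketched as follows (shorthand $f_T := f_{\alpha,\beta,T}$): convexity squeezes the derivative between difference quotients,

```latex
\frac{f_T(0) - f_T(-u)}{u} \;\leq\; f_T'(0) \;\leq\; \frac{f_T(u) - f_T(0)}{u},
\qquad u \in (0,1),
```

and replacing $f_T(\pm u)$ and $f_T(0)$ by their expectations costs at most $\eta_{\alpha,\beta,T}(u)/u$ on each side, which bounds $|f_T'(0) - \mathbb{E}[f_T'(0)]|$ by the three terms displayed; taking $\mathbb{E}$ preserves the bound.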
Using the $L^1$ convergence of $f_{\alpha,\beta,T}(z), ~z > -1$, from Proposition \ref{prop:convergence.free.energy}, and the mean convergence of $f_{\alpha,\beta,T}'(0)$ from Proposition \ref{prop:mean.convergence.derivative.free.energy} (the limit is $f_{\alpha,\beta}'(0)$ by Lemma \ref{lem:differentiability.limiting.free.energy}, the convexity of $\mathbb{E}[f_{\alpha,\beta,T}(\cdot)]$ and $f_{\alpha,\beta}(\cdot)$, and by Theorem 25.7 in \cite{MR0274683}), we deduce that for all $u\in (0,1)$,
\begin{equation*}
\limsup_{T\rightarrow \infty} \frac{(b)}{\log \log T}
\leq \frac{1}{\beta} \cdot \left\{\hspace{-1mm}
\begin{array}{l}
\left|\frac{f_{\alpha,\beta}(u) - f_{\alpha,\beta}(0)}{u} - f_{\alpha,\beta}'(0)\right| \\[2mm]
+ \left|\frac{f_{\alpha,\beta}(0) - f_{\alpha,\beta}(-u)}{u} - f_{\alpha,\beta}'(0)\right|
\end{array}
\hspace{-1mm}\right\}.
\end{equation*}
Take $u\rightarrow 0^+$ in the last equation; the differentiability of $f_{\alpha,\beta}(\cdot)$ at $0$ (from Lemma \ref{lem:differentiability.limiting.free.energy}) then concludes Step 2.
\end{proof}
\begin{theorem}[Approximate extended Ghirlanda-Guerra identities]\label{thm:approximate.extended.GG.identities}
Let $\beta > \beta_c \circeq 2$ and $0 < \alpha < 1$.
For any $s\in \mathbb{N}^*$, any $k\in \{1,\ldots,s\}$, and any bounded measurable function $\phi : [0,1]^s \rightarrow \mathbb{R}$, we have
\begin{equation}
\begin{aligned}
&\left|
\mathbb{E} G_{\beta,T}^{\times (s+1)} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_{s+1})\}} dy \, \phi(\bb{h})\big] \right.\\[1mm]
&\hspace{10mm}- \left.
\left\{\hspace{-1mm}
\begin{array}{l}
\frac{1}{s} \mathbb{E} G_{\beta,T}^{\times 2} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_1,h_2)\}} dy\big] \mathbb{E} G_{\beta,T}^{\times s}[\phi(\bb{h})] \\[2mm]
+ \frac{1}{s} \sum_{l \neq k}^s \mathbb{E} G_{\beta,T}^{\times s} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_l)\}} dy \, \phi(\bb{h})\big]
\end{array}
\hspace{-1mm}\right\}
\right| = o_T(1),
\end{aligned}
\end{equation}
where $\bb{h} \circeq (h_1,h_2,\ldots,h_s)$.
\end{theorem}
\begin{proof}
From Proposition \ref{eq:bovier.kurkova.technique}, Proposition \ref{prop:concentration.result} and the triangle inequality, we get
\begin{equation}\label{thm:approximate.extended.GG.identities.eq.1}
\begin{aligned}
&\left|
\frac{1}{\beta} \cdot \frac{\mathbb{E} G_{\beta,T}[X_{h_k}(\alpha)]}{\frac{1}{2} \log \log T} \mathbb{E} G_{\beta,T}^{\times s}[\phi(\bb{h})] \right. \\[1mm]
&\hspace{10mm}- \left.
\left\{\hspace{-1mm}
\begin{array}{l}
\sum_{l=1}^s \mathbb{E} G_{\beta,T}^{\times s} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_l)\}} dy ~ \phi(\bb{h})\big] \\[1mm]
- s \, \mathbb{E} G_{\beta,T}^{\times (s+1)} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_{s+1})\}} dy ~ \phi(\bb{h})\big]
\end{array}
\hspace{-1.5mm}\right\}
\right| = o_T(1).
\end{aligned}
\end{equation}
Furthermore, from Proposition \ref{eq:bovier.kurkova.technique} in the special case $(s=1,k=1,\phi \equiv 1)$,
\begin{equation}\label{thm:approximate.extended.GG.identities.eq.2}
\begin{aligned}
&\left|
\frac{1}{\beta} \cdot \frac{\mathbb{E} G_{\beta,T}[X_{h_k}(\alpha)]}{\frac{1}{2} \log \log T} \right. \\[1mm]
&\hspace{10mm}- \left.
\left\{\hspace{-1mm}
\begin{array}{l}
\mathbb{E} G_{\beta,T}^{\times s} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_k,h_k)\}} dy\big] \\[1mm]
- \mathbb{E} G_{\beta,T}^{\times (s+1)} \big[\int_0^{\alpha} \bb{1}_{\{y < \rho(h_1,h_2)\}} dy\big]
\end{array}
\hspace{-1.5mm}\right\}
\right| = O\hspace{-0.5mm}\left((\log \log T)^{-1}\right).
\end{aligned}
\end{equation}
By combining \eqref{thm:approximate.extended.GG.identities.eq.1} and \eqref{thm:approximate.extended.GG.identities.eq.2}, we get the conclusion.
\end{proof}
By the representation theorem of Dovbysh and Sudakov \cite{MR666087} (for an accessible proof, see \cite{MR2679002}), we can show (see e.g.\hspace{-0.3mm} the reasoning on page 1459 of \cite{MR3211001} or page 101 of \cite{MR3052333}) that there exists a subsequence $\{T_m\}_{m\in \mathbb{N}^*}$ converging to $+\infty$ such that for any $s\in \mathbb{N}^*$ and any continuous function $\phi : [0,1]^{s(s-1)/2} \rightarrow \mathbb{R}$, we have
\begin{equation}\label{eq:gibbs.measure.limit}
\lim_{m\to\infty} \mathbb{E} G_{\beta,T_m}^{\times \infty}\big[\phi((\rho(h_l,h_{l'}))_{1 \leq l,l' \leq s})\big] = E \mu_{\beta}^{\times \infty} \big[\phi((R_{l,l'})_{1 \leq l,l' \leq s})\big],
\end{equation}
where $R$ is a random element of some probability space with measure $P$ (and expectation $E$), generated by the random matrix of scalar products
\begin{equation}\label{eq:matrix.scalar.products.H}
(R_{l,l'})_{l,l'\in \mathbb{N}^*} = \big((\rho_l,\rho_{l'})_{\mathcal{H}}\big)_{l,l'\in \mathbb{N}^*},
\end{equation}
where $(\rho_l)_{l\in \mathbb{N}^*}$ is an i.i.d.\hspace{-0.3mm} sample from some random measure $\mu_{\beta}$ concentrated a.s.\hspace{-0.3mm} on the unit sphere of a separable Hilbert space $\mathcal{H}$.
In particular, from Theorem \ref{thm:limiting.two.overlap.distribution}, we have
\begin{equation}\label{eq:thm:limiting.two.overlap.distribution.eq.mu}
E \mu_{\beta}^{\times 2} \big[\bb{1}_{\{R_{1,2} \in A\}}\big] = \frac{2}{\beta} \bb{1}_A(0) + \left(1 - \frac{2}{\beta}\right) \bb{1}_A(1), \quad A\in \mathcal{B}([0,1]).
\end{equation}
Next, we show the consequence of taking the limit \eqref{eq:gibbs.measure.limit} in the statement of Theorem \ref{thm:approximate.extended.GG.identities}.
Note that a function $\phi : \{0,1\}^{s(s-1)/2} \rightarrow \mathbb{R}$ can always be extended to a continuous function defined on $[0,1]^{s(s-1)/2}$.
Here is the main result of this section.
\begin{theorem}[Extended Ghirlanda-Guerra identities in the limit]\label{thm:extended.GG.identities}
Let $\beta > \beta_c \circeq 2$ and $0 < \alpha < 1$. Also, let $\mu_{\beta}$ be a subsequential limit of $\{G_{\beta,T}\}_{T\geq 2}$ in the sense of \eqref{eq:gibbs.measure.limit}.
For any $s\in \mathbb{N}^*$, any $k\in \{1,\ldots,s\}$, and any functions $\psi : \{0,1\} \rightarrow \mathbb{R}$ and $\phi : \{0,1\}^{s(s-1)/2} \rightarrow \mathbb{R}$, we have
\begin{equation}\label{thm:extended.GG.identities.to.prove}
\begin{aligned}
E \mu_{\beta}^{\times (s+1)} \big[\psi(R_{k,s+1}) \phi((R_{i,i'})_{1\leq i,i' \leq s})\big]
&= \frac{1}{s} E \mu_{\beta}^{\times 2} \big[\psi(R_{1,2})\big] E \mu_{\beta}^{\times s}\big[\phi((R_{i,i'})_{1\leq i,i' \leq s})\big] \\
&+ \frac{1}{s} \sum_{l \neq k}^s E \mu_{\beta}^{\times s} \big[\psi(R_{k,l}) \phi((R_{i,i'})_{1\leq i,i' \leq s})\big].
\end{aligned}
\end{equation}
\end{theorem}
\begin{remark}
The functions $\psi$ and $\phi$ have $\{0,1\}$ and $\{0,1\}^{s(s-1)/2}$ as their domain, respectively, because $R_{l,l'}\in \{0,1\}$ $E \mu_{\beta}^{\times 2}$-almost-surely by \eqref{eq:thm:limiting.two.overlap.distribution.eq.mu}, and because the matrix $(R_{l,l'})_{1 \leq l,l' \leq s}$ is symmetric with diagonal elements equal to $1$ $E \mu_{\beta}^{\times s}$-almost-surely by \eqref{eq:matrix.scalar.products.H}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:extended.GG.identities}]
From \eqref{eq:gibbs.measure.limit} and Theorem \ref{thm:approximate.extended.GG.identities} (in the particular case where $\phi$ is a function of the overlaps), we deduce
\begin{equation}
\begin{aligned}
&E \mu_{\beta}^{\times (s+1)} \big[\int_0^{\alpha} \bb{1}_{\{y < R_{k,s+1}\}} dy \, \phi((R_{i,i'})_{1 \leq i,i' \leq s})\big] \\[1mm]
&\hspace{15mm}= \frac{1}{s} E \mu_{\beta}^{\times 2} \big[\int_0^{\alpha} \bb{1}_{\{y < R_{1,2}\}} dy\big] E \mu_{\beta}^{\times s}\big[\phi((R_{i,i'})_{1 \leq i,i' \leq s})\big] \\
&\hspace{15mm}+ \frac{1}{s} \sum_{l \neq k}^s E \mu_{\beta}^{\times s} \big[\int_0^{\alpha} \bb{1}_{\{y < R_{k,l}\}} dy \, \phi((R_{i,i'})_{1 \leq i,i' \leq s})\big].
\end{aligned}
\end{equation}
From \eqref{eq:thm:limiting.two.overlap.distribution.eq.mu}, we know that $\bb{1}_{\{y < R_{i,i'}\}}$ is $E \mu_{\beta}^{\times 2}$-a.s.\hspace{-0.3mm} constant in $y$ on $[-1,0)$ and $[0,1)$ respectively. Therefore, for any $x\in \{-1,0\}$,
\begin{equation}\label{thm:approximate.extended.GG.identities.finish}
\begin{aligned}
&E \mu_{\beta}^{\times (s+1)} \big[\bb{1}_{\{x < R_{k,s+1}\}} \phi((R_{i,i'})_{1 \leq i,i' \leq s})\big] \\[2mm]
&\hspace{15mm}= \frac{1}{s} E \mu_{\beta}^{\times 2} \big[\bb{1}_{\{x < R_{1,2}\}}\big] E \mu_{\beta}^{\times s}\big[\phi((R_{i,i'})_{1 \leq i,i' \leq s})\big] \\
&\hspace{15mm}+ \frac{1}{s} \sum_{l \neq k}^s E \mu_{\beta}^{\times s} \big[\bb{1}_{\{x < R_{k,l}\}} \phi((R_{i,i'})_{1 \leq i,i' \leq s})\big].
\end{aligned}
\end{equation}
But, any function $\psi : \{0,1\} \rightarrow \mathbb{R}$ can be written as a linear combination of the indicator functions $\bb{1}_{\{0 < \, \cdot \, \}}$ and $\bb{1}_{\{-1 < \, \cdot \, \}}$, so we get the conclusion by the linearity of \eqref{thm:approximate.extended.GG.identities.finish}.
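Explicitly (a direct check): for $r \in \{0,1\}$,

```latex
\psi(r) = \psi(0)\, \bb{1}_{\{-1 < r\}} + \big( \psi(1) - \psi(0) \big)\, \bb{1}_{\{0 < r\}} ,
```

as both sides agree at $r=0$ and at $r=1$; applying \eqref{thm:approximate.extended.GG.identities.finish} with $x=-1$ and $x=0$ and taking this linear combination yields \eqref{thm:extended.GG.identities.to.prove}.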
\end{proof}
\section{Proof of Theorem \ref{thm:Poisson.Dirichlet}}\label{sec:proof.min.result}
Once we have Theorem \ref{thm:limiting.two.overlap.distribution} and the Ghirlanda-Guerra identities from Theorem \ref{thm:extended.GG.identities}, the proof follows exactly the same steps as in the proof of Theorem 1.5 in \cite{MR3211001}.
We can show that any subsequential limit $\mu_{\beta}$ of $\{G_{\beta,T}\}_{T\geq 2}$ in the sense of \eqref{eq:gibbs.measure.limit} must satisfy
\begin{equation}\label{eq:1.RPC}
\mu_{\beta} = \sum_{k\in \mathbb{N}^*} \xi_k \delta_{e_k}, \quad P\text{-a.s.},
\end{equation}
where $\delta$ is the Dirac measure, $(e_k)_{k\in \mathbb{N}^*}$ is a sequence of orthonormal vectors in $\mathcal{H}$ and $\xi$ is a Poisson-Dirichlet variable of parameter $\beta_c / \beta$. Since the space of probability measures on $[0,1]^{\mathbb{N}^* \times \mathbb{N}^*}$ (the space of overlap matrices) is a metric space under the weak topology, the limit in \eqref{eq:gibbs.measure.limit} must hold for the original sequence. Then, \eqref{eq:thm:Poisson.Dirichlet.eq} is a direct consequence of \eqref{eq:1.RPC}.
\ACKNO{I would like to thank the anonymous referees and my advisor, Louis-Pierre Arguin, for their valuable comments that led to improvements in the presentation of this paper.}
\end{document} | math | 65,751 |
\begin{document}
\title{On indices of 1-forms on determinantal singularities}
\begin{abstract}
We consider 1-forms on, so called, essentially isolated determinantal singularities (a natural generalization of isolated ones), show how to define analogues of the Poincar\'e--Hopf index for them, and describe relations between these indices and the radial index. For isolated determinantal singularities, we discuss properties of the homological index of a holomorphic 1-form and its relation with the Poincar\'e--Hopf index.
\end{abstract}
\section*{Introduction}
There are several generalizations of the notion of the index of a singular point (zero) of a vector field or of a 1-form on a smooth manifold (real or complex analytic) to vector fields or 1-forms on singular varieties. Some of them (the radial index, the Euler obstruction) are defined for arbitrary varieties. The homological index is defined for holomorphic vector fields or 1-forms on isolated singularities. The GSV (G\'omez-Mont--Seade--Verjovsky) index or PH (Poincar\'e--Hopf) index is defined for isolated complete intersection singularities: {ICIS}. For definitions and properties of these indices see \cite{EGSch} and the references therein. This paper emerged from an attempt to generalize the notion(s) of the GSV- or PH-indices from complete intersections to more general varieties. It is not clear how this can be done in the general setting.
The GSV-index of a vector field or of a 1-form on an ICIS can be considered as a (duly understood) localization of the top Chern-Fulton (or Chern-Fulton-Johnson) class of a variety with isolated complete intersection singularities. It seems that in general such a localization is not well-defined (at least as an appropriate index of a vector field or of a 1-form).
A more general class than complete intersection singularities is the class of determinantal singularities which are defined by vanishing of all the minors of a certain size of an $m \times n$-matrix. We consider so called essentially isolated determinantal singularities (EIDS) as a natural generalization of isolated ones. For a 1-form on an EIDS, we define several analogues of the Poincar\'e--Hopf index using natural resolutions of deformations of the EIDS. We determine relations between these indices and the radial index. For isolated determinantal singularities, we discuss properties of the homological index of a holomorphic 1-form and its relation with the Poincar\'e--Hopf index.
\section{Determinantal singularities}
A determinantal variety $X$ (of type $(m,n,t)$) in an open domain $U\subset{\Bbb C}^N$ is a variety of dimension $d:=N-(n-t+1)(m-t+1) $ defined by the condition ${\rm rk\,} F(x) < t$ where $t\le \min(m, n)$, $F(x)=\left(f_{ij}(x)\right)$ is an $m\times n$-matrix ($i=1,\ldots, m$, $j=1,\ldots, n$), whose entries $f_{ij}(x)$ are complex analytic functions on $U$. In other words, $X$ is defined by the equations $m^t_{IJ}(x)=0$ for all subsets $I\subset\{1, \ldots, m\}$, $J\subset\{1, \ldots, n\}$ with $t$ elements, where $m^t_{IJ}(x)$ is the corresponding $t\times t$-minor of the matrix $F(x)$.
This definition can be reformulated in the following way. Let $M_{m,n} \cong {\Bbb C}^{mn}$ be the space of $m \times n$-matrices with complex entries and let $M_{m,n}^t$ be the subset of $M_{m,n}$ consisting of matrices of rank less than $t$, i.e.\ of matrices all $t \times t$-minors of which vanish. The variety $M_{m,n}^t$ has codimension $(m-t+1)(n-t+1)$ in $M_{m,n}$. It is singular. The singular locus of $M_{m,n}^t$ coincides with $M_{m,n}^{t-1}$. The singular locus of the latter one coincides with $M_{m,n}^{t-2}$, etc.\ (see, e.g., \cite{ACGH}).
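To illustrate the definitions in the simplest non-trivial case (an example added here for concreteness): for $(m,n,t)=(2,2,2)$, the variety $M_{2,2}^2$ is the determinantal hypersurface

```latex
M_{2,2}^{2} = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in M_{2,2}
  \ \middle| \ ad - bc = 0 \right\} \subset {\Bbb C}^{4} ,
```

of codimension $(2-2+1)(2-2+1)=1$; its singular locus $M_{2,2}^{1}$ consists of the matrices of rank less than $1$, i.e.\ of the zero matrix alone, in agreement with the stratification described above.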
The representation of the variety $M_{m,n}^{t}$ as the union of $M_{m,n}^{i}\setminus M_{m,n}^{i-1}$, $i=1, \ldots, t$, is a Whitney stratification of $M_{m,n}^{t}$.
The matrix $F(x)=(f_{ij}(x))$, $x \in {\Bbb C}^N$, determines a map $F: U \to M_{m,n}$ and the determinantal variety $X$ is the preimage $F^{-1}(M_{m,n}^t)$ of the variety $M_{m,n}^t$ (subject to the condition that ${\rm codim}\, X = {\rm codim}\, M_{m,n}^t$).
For $1\le i\le t$, let $X_i=F^{-1}(M_{m,n}^{i})$.
The image of a generic map $F: U \to M_{m,n}$ may intersect the varieties $M_{m,n}^{i}$ for $i < t$ (therefore the corresponding determinantal variety $F^{-1}(M_{m,n}^t)$ will have singularities). However, a generic map $F$ intersects the strata $M_{m,n}^{i} \setminus M_{m,n}^{i-1}$ of the variety $M_{m,n}^t$ transversally. This means that, at the corresponding points, the determinantal variety has ``standard'' singularities whose analytic type only depends on $i= {\rm rk}\, F(x) +1$. This inspires the following definition:
\begin{definition} A point $x \in X=F^{-1}(M_{m,n}^t)$ is called {\it essentially non-singular} if, at the point $x$, the map $F$ is transversal to the corresponding stratum of the variety $M_{m,n}^t$ (i.e., to $M_{m,n}^{i} \setminus M_{m,n}^{i-1}$ where $i= {\rm rk} \, F(x)+1$).
\end{definition}
At an essentially non-singular point $x_0\in X$ with ${\rm rk} \, F(x_0)=i-1$,
$${\rm rk} \{dm_{IJ}^{i} \} = (m-i+1)(n-i+1)$$
($\{dm_{IJ}^{i} \}$ is the set of the differentials of all the $i \times i$-minors of the matrix $F(x)$) and, in a neighbourhood of the point $x_0$,
the subvariety $X_i \setminus X_{i-1} = \{ x \in X \, | \, {\rm rk} \, F(x)=i-1\}$ is non-singular of dimension $N-(m-i+1)(n-i+1)$. At an essentially singular point $x \in X$ one has
$${\rm rk} \{dm_{IJ}^{i} \} < (m-i+1)(n-i+1)\,.$$
\begin{definition}
A germ $(X,0)\subset({\Bbb C}^N,0)$ of a determinantal variety of type $(m,n,t)$ has an {\em isolated essentially singular point} at the origin (or is an {\it essentially isolated determinantal singularity}: {EIDS}) if it has only essentially non-singular points in a punctured neighbourhood of the origin in $X$.
\end{definition}
An example of an {EIDS} is an isolated complete intersection singularity ({ICIS}): it is an {EIDS} of type $(1,n,1)$.
From now on we shall suppose that a determinantal singularity $(X,0)$ of type $(m,n,t)$ is defined by a map $F: ({\Bbb C}^N,0) \to M_{m,n}$ with $F(0)=0$. This is not a restriction because if $F(0) \neq 0$ and therefore ${\rm rk} \, F(0)=s>0$, $(X,0)$ is a determinantal singularity of type $(m-s,n-s,t-s)$ defined by a map $F': ({\Bbb C}^N,0) \to M_{m-s,n-s}$ with $F'(0)=0$. We also suppose that $\dim X >0$, i.e.\ $N>(m-t+1)(n-t+1)$. Let $\varepsilon>0$ be such that, for all positive $\varepsilon'\le \varepsilon$, each stratum $X_i\setminus X_{i-1}$ of the variety $X$, $1\le i \le t$, is transversal to the sphere $S_{\varepsilon'}=\partial B_{\varepsilon'}$ of radius $\varepsilon'$ centred at the origin in ${\Bbb C}^N$. We shall assume that deformations $\widetilde X$ of the EIDS
$(X,0)$ are defined in a neighbourhood $U$ of the ball $B_\varepsilon$ and are such that the corresponding strata stay transversal to the sphere $S_\varepsilon$.
An essentially isolated determinantal singularity
$(X,0) \subset ({\Bbb C}^N,0)$ of type $(m,n,t)$ (defined by a map $F: ({\Bbb C}^N,0) \to (M_{m,n}, 0)$) has an isolated singularity at the origin if and only if $N \leq (m-t+2)(n-t+2)$.
In the sequel we shall consider deformations (in particular, smoothings) of determinantal singularities which are themselves determinantal ones, i.e., they are defined by perturbations of the map $F$ defining the singularity.
\begin{remark} Determinantal singularities (in particular isolated ones) may have deformations (even smoothings) which are not determinantal (see an example in \cite{Pi}). These smoothings may have different topology, say, different Euler characteristics.
\end{remark}
Let $(X,0) \subset ({\Bbb C}^N,0)$ be an EIDS defined by a map $F: ({\Bbb C}^N,0) \to (M_{m,n},0)$ ($X=F^{-1}(M_{m,n}^t)$, $F$ is transversal to $M_{m,n}^{i} \setminus M_{m,n}^{i-1}$ at all points $x$ from a punctured neighbourhood of the origin in ${\Bbb C}^N$ and for all $i\le t$).
\begin{definition} An {\em essential smoothing} $\widetilde{X}$ of the EIDS $(X,0)$ is a subvariety of a neighbourhood $U$ of the origin in ${\Bbb C}^N$ defined by a perturbation $\widetilde{F}: U \to M_{m,n}$ of the germ $F$ transversal to all the strata $M_{m,n}^{i} \setminus M_{m,n}^{i-1}$ with $i \leq t$.
\end{definition}
A generic deformation $\widetilde{F}$ of the map $F$ defines an essential smoothing of the EIDS $(X,0)$ (according to the Thom Transversality Theorem). An essential smoothing is in general not smooth (for $N \geq (m-t+2)(n-t+2)$). Its singular locus is $\widetilde{F}^{-1}(M_{m,n}^{t-1})$, the singular locus of the latter one is $\widetilde{F}^{-1}(M_{m,n}^{t-2})$, etc. The representation of $\widetilde{X}$ as the union
$$\widetilde{X} = \bigcup_{1 \leq i \leq t} \widetilde{F}^{-1}(M_{m,n}^{i} \setminus M_{m,n}^{i-1})$$
is a Whitney stratification of it. An essential smoothing of an EIDS $(X,0)$ of type $(m,n,t)$ is a genuine smoothing if and only if $N < (m-t+2)(n-t+2)$.
One can say that the variety $M_{m,n}^t$ has three natural resolutions.
One of them is constructed in the following way (see, e.g., \cite{ACGH}).
Let ${\rm Gr}(k,n)$ be the Grassmann manifold of $k$-dimensional vector subspaces in ${\Bbb C}^n$.
Consider $m \times n$-matrices as linear maps from ${\Bbb C}^n$ to ${\Bbb C}^m$.
Let $Y_1$
be the subvariety of the product $M_{m,n}\times {\rm Gr}(n-t+1,n)$ which consists of pairs $(A, W)$ such that $A(W)=0$. The variety $Y_1$
is smooth and connected. Its projection to the first factor defines a resolution $\pi_1: Y_1 \to M_{m,n}^t$ of the variety $M_{m,n}^t$.
If one considers $m \times n$-matrices as linear maps from ${\Bbb C}^m$ to ${\Bbb C}^n$, one gets in the same way another resolution $\pi_2: Y_2 \to M_{m,n}^t$ of the variety $M_{m,n}^t$. In this case one has $Y_2 \subset M_{m,n} \times {\rm Gr}(m-t+1,m)$.
The third natural resolution is given by the Nash transform $\widehat{M_{m,n}^t}$ of the variety $M_{m,n}^t \subset M_{m,n}$. The Nash transform $\widehat{Z}$ of a variety $Z \subset U \subset {\Bbb C}^N$ of pure dimension $d$ is the closure in the product $Z \times {\rm Gr}(d,N)$ of the set
$$\{ (x,W) \, | \, x \in Z_{\rm reg}, W=T_xZ_{\rm reg} \}\,,$$
where $Z_{\rm reg}$ is the set of non-singular points of the variety $Z$. The Nash transform may in general be singular. The tautological vector bundle ${\Bbb T}$ over the Nash transform $\widehat{Z}$ is the pull-back of the tautological bundle (of rank $d$) over the Grassmann manifold ${\rm Gr}(d,N)$ (under the projection $Z \times {\rm Gr}(d,N) \to {\rm Gr}(d,N)$). Over the preimage of the non-singular part $Z_{\rm reg}$ of the variety $Z$ the tautological bundle is in a natural way isomorphic to the tangent bundle. However, even if the Nash transform $\widehat{Z}$ happens to be non-singular, the tautological bundle and the tangent bundle are in general different.
Let $d_{m,n}^t:=\dim M_{m,n}^t = mn-(m-t+1)(n-t+1)$ and let ${\rm Gr}(d_{m,n}^t,M_{m,n})$ be the Grassmann manifold of $d_{m,n}^t$-dimensional subspaces of $M_{m,n}$. The Nash transform $\widehat{M_{m,n}^t}$ is the closure in $M_{m,n}^t \times {\rm Gr}(d_{m,n}^t,M_{m,n})$ of the set
$$\{(A,T) \, | \, A \in M_{m,n}^t \setminus M_{m,n}^{t-1}=(M_{m,n}^t)_{\rm reg}, T=T_AM_{m,n}^t \}.$$
As above, let us consider matrices $A \in M_{m,n}^t$ as operators $A: {\Bbb C}^n \to {\Bbb C}^m$. The tangent space to $M_{m,n}^t$ at a point $A$ with ${\rm rk}\, A =t-1$ is
$$T_AM_{m,n}^t=\{ B \in M_{m,n} \, | \, B(\ker A) \subset {\rm im}\, A\}$$
(see, e.g., \cite{ACGH}). Let $\alpha: {\rm Gr}(n-t+1,n) \times {\rm Gr}(t-1,m) \to {\rm Gr}(d_{m,n}^t,M_{m,n})$ be the map which sends a pair $(W_1, W_2)$ ($W_1 \subset {\Bbb C}^n$, $W_2 \subset {\Bbb C}^m$, $\dim W_1=n-t+1$, $\dim W_2=t-1$) to the $d_{m,n}^t$-dimensional vector subspace
$$\{ B \in M_{m,n} \, | \, B(W_1) \subset W_2 \} \subset M_{m,n}.$$
This map is an embedding. The closure of the set of matrices (operators) $A$ with $\ker A=W_1$, ${\rm im}\, A=W_2$ ($(W_1,W_2) \in {\rm Gr}(n-t+1,n) \times {\rm Gr}(t-1,m)$) consists of all operators $A$ with $\ker A \supset W_1$ and ${\rm im}\, A \subset W_2$. Therefore the Nash transform $\widehat{M_{m,n}^t}$ of $M_{m,n}^t$ consists of pairs $(A,W)$ such that $W=\alpha(W_1,W_2)$, $\ker A \supset W_1$, ${\rm im}\, A \subset W_2$. This means that the projection of $\widehat{M_{m,n}^t} \subset M_{m,n}^t \times {\rm Gr}(d_{m,n}^t,M_{m,n})$ to the second factor is the projection of a vector bundle of rank $(t-1)^2$ over $\alpha({\rm Gr}(n-t+1,n) \times {\rm Gr}(t-1,m))$. Thus the Nash transform gives a resolution $\pi_3: Y_3:=\widehat{M_{m,n}^t} \to M_{m,n}^t$ of the variety $M_{m,n}^t$.
\section{Poincar\'e--Hopf indices of 1-forms on EIDS}
Let $(X,0)=F^{-1}(M_{m,n}^t) \subset ({\Bbb C}^N,0)$ be an EIDS and let $\omega$ be a germ of a (complex) 1-form on $({\Bbb C}^N,0)$ whose restriction to $(X,0)$ has an isolated singular point (zero) at the origin. This means that the restrictions of the 1-form $\omega$ to the strata $X_i\setminus X_{i-1}$, $i\le t$,
have no zeros in a punctured neighbourhood of the origin.
If $(X,0)$ is an ICIS (i.e. an EIDS of type $(1,n,1)$), the PH-index ${\rm ind}_{\rm PH} \, \omega$ of the 1-form on it is defined as the sum of the indices of the zeros of the restriction of the 1-form $\omega$ to a smoothing $\widetilde{X}$ of the ICIS $(X,0)$ in a neighbourhood of the origin.
An essential smoothing $\widetilde{X} \subset U$ of the EIDS $(X,0)$ (in a neighbourhood $U$ of the origin in ${\Bbb C}^N$) is in general not smooth. To define an analogue of the PH-index one has to construct a substitute of the tangent bundle to $\widetilde{X}$. There are two natural ways to do this.
One possibility is to use a resolution of the variety $\widetilde{X}$ connected with one of the three resolutions of the variety $M_{m,n}^t$ described above. Let $\pi_i: Y_i \to M_{m,n}^t$ be one of the described resolutions of the determinantal variety $M_{m,n}^t$ and let $\overline{X}_i = Y_i \times_{M_{m,n}^t} \widetilde{X}$, $i=1,2,3$, be the fibre product of the spaces $Y_i$ and $\widetilde{X}$ over the variety $M_{m,n}^t$:
$$\diagram
& \overline{X}_i \ar[dl]_{\Pi_i} \ar[dd] \ar[dr] &\\
\widetilde{X} \ar[dr]_{\widetilde{F}|_{\widetilde{X}} } & & Y_i \ar[dl]^{\pi_i} \\
& M_{m,n}^t &
\enddiagram
$$
The map $\Pi_i : \overline{X}_i \to \widetilde{X}$ is a resolution of the variety $ \widetilde{X}$. The lifting $\omega_i:= (j \circ \Pi_i)^\ast \omega$ ($j$ is the inclusion map $\widetilde{X} \hookrightarrow U \subset {\Bbb C}^N$) of the 1-form $\omega$ is a 1-form on a (non-singular) complex analytic manifold $\overline{X}_i$ without zeros outside of the preimage of a small neighbourhood of the origin. In general, the 1-form $\omega_i$ has non-isolated zeros.
\begin{definition}
The {\em Poincar\'e--Hopf index} ({\em PH-index})
${\rm ind}_{\rm PH}^i \, \omega = {\rm ind}_{\rm PH}^i \, (\omega;X,0)$, $i=1,2,3$,
of the 1-form $\omega$ on the EIDS $(X,0) \subset ({\Bbb C}^N,0)$ is the sum of the indices of the zeros of a generic perturbation $\widetilde{\omega}_i$ of the 1-form $\omega_i$ on the manifold $\overline{X}_i$ (in the preimage of a neighbourhood of the origin in ${\Bbb C}^N$).
\end{definition}
In other words, the PH-index ${\rm ind}_{\rm PH}^i \, \omega$ is the obstruction to extend the non-zero 1-form $\omega_i$ from the preimage (under the map $j \circ \Pi_i$) of a neighbourhood of the sphere $S_\varepsilon= \partial B_\varepsilon$ to the manifold $\overline{X}_i$.
The other possibility is to take into account that the space $\overline{X}_3$ of the third resolution is the Nash transform of the variety $\widetilde{X}$ and to use the Nash bundle $\widehat{{\Bbb T}}$ over it instead of the tangent bundle. This brings the idea of the Euler obstruction into play.
The 1-form $\omega$ defines a non-vanishing section $\widehat{\omega}$ of the dual bundle $\widehat{{\Bbb T}}^\ast$ over the preimage of the intersection $\widetilde{X} \cap S_\varepsilon$ of the variety $\widetilde{X}$ with the sphere $S_\varepsilon$.
\begin{sloppypar}
\begin{definition}
The {\em Poincar\'e--Hopf-Nash index} ({\em PHN-index}) ${\rm ind}_{\rm PHN}\, \omega = {\rm ind}_{\rm PHN}\, (\omega;X,0)$ of the 1-form $\omega$ on the EIDS $(X,0)$ is the obstruction to extend the non-zero section $\widehat{\omega}$ of the dual Nash bundle $\widehat{{\Bbb T}}^\ast$ from the preimage of the boundary $S_\varepsilon=\partial B_\varepsilon$ of the ball $B_\varepsilon$ to the preimage of its interior, i.e.\ to the manifold $\overline {X}_3$, more precisely, its value (as an element of
$H^{2d}(\Pi_3^{-1}(\widetilde{X} \cap B_\varepsilon), \Pi_3^{-1}(\widetilde{X} \cap S_\varepsilon))$) on the fundamental class of the pair $(\Pi_3^{-1}(\widetilde{X} \cap B_\varepsilon), \Pi_3^{-1}(\widetilde{X} \cap S_\varepsilon))$.
\end{definition}
\end{sloppypar}
There is another possible definition of the PHN-index which follows from the property of the Euler obstruction described in \cite[Proposition~2.3]{STV} (see also \cite[Proposition~2.1]{EGChern}).
\begin{proposition}
The PHN-index ${\rm ind}_{\rm PHN}\, \omega$ of the 1-form $\omega$ on the {EIDS} $(X,0)$ is equal to the number of non-degenerate singular points of a generic deformation $\widetilde\omega$ of the 1-form $\omega$ on the non-singular part $\widetilde{X}_{\rm reg}=\widetilde{F}^{-1}(M_{m,n}^t \setminus M_{m,n}^{t-1})$ of the variety $\widetilde X$.
\end{proposition}
All the defined indices (as well as the radial one and the local Euler obstruction) satisfy the law of conservation of number (see, e.g., \cite{EGSch}).
Let $\chi(X,0):=\chi(\widetilde{X} \cap B_\varepsilon)$ for an essential smoothing $\widetilde{X}$ of the singularity $(X,0)$. Let us recall that $(X_i,0)=F^{-1}(M_{m,n}^i)$, $i=1, \ldots , t$. The variety $\widetilde{X}_i \subset \widetilde{X}$ is an essential smoothing of the EIDS $(X_i,0)$ (of type $(m,n,i)$).
Let us denote by $G_i^k$ the preimage of a point from $M_{m,n}^i \setminus M_{m,n}^{i-1}$ under the resolution $\pi_k$, $k=1,2,3$, of the variety $M_{m,n}^t$, i.e.
\begin{eqnarray*}
G_i^1 & = & {\rm Gr}(n-t+1,n-i+1),\\
G_i^2 & = & {\rm Gr}(m-t+1,m-i+1), \\
G_i^3 & = & {\rm Gr}(n-t+1,n-i+1) \times {\rm Gr}(m-t+1,m-i+1).
\end{eqnarray*}
One has
\begin{eqnarray*}
\chi(G_i^1) & = & \binom{n-i+1}{t-i}, \quad \chi(G_i^2) = \binom{m-i+1}{t-i}, \\
\chi(G_i^3) & = & \binom{n-i+1}{t-i} \binom{m-i+1}{t-i}.
\end{eqnarray*}
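These values are instances of the cell count $\chi({\rm Gr}(k,n))=\binom{n}{k}$ together with the symmetry $\binom{n-i+1}{n-t+1}=\binom{n-i+1}{t-i}$. The following numerical sketch is ours, not part of the original (function names are illustrative):

```python
from math import comb

def chi_gr(k, n):
    """Euler characteristic of Gr(k, n): the number of Schubert cells."""
    return comb(n, k)

def chi_fibre(i, res, m, n, t):
    """chi of the fibre G_i^res of the resolution pi_res (res = 1, 2, 3)
    over a point of the stratum M_{m,n}^i \\ M_{m,n}^{i-1}."""
    g1 = chi_gr(n - t + 1, n - i + 1)  # fibre of pi_1
    g2 = chi_gr(m - t + 1, m - i + 1)  # fibre of pi_2
    return {1: g1, 2: g2, 3: g1 * g2}[res]

# Over the open stratum (i = t) all three fibres are single points,
# consistent with pi_k being an isomorphism there.
```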
\begin{proposition} \label{PropPH}
One has
\begin{eqnarray*}
{\rm ind}_{\rm PH}^k \, (\omega;X,0) & = & (-1)^{\dim X} \sum_{i=1}^t \left( (-1)^{\dim X_i} {\rm ind}_{\rm rad}\, (\omega; X_i,0) \right. \\
& & {} - (-1)^{\dim X_{i-1}} {\rm ind}_{\rm rad}\, (\omega; X_{i-1},0) \\
& & \left. {} + (\chi(X_i,0) - \chi(X_{i-1},0)) \right) \cdot \chi(G_i^k),
\end{eqnarray*}
where we set $X_0=\{ 0\}$ $($and therefore ${\rm ind}_{\rm rad}\, (\omega; X_0,0)=1${}$)$ and $\chi(X_0,0)=0$.
\end{proposition}
\begin{remark}
1. If $N < mn$ several summands in the formula of Proposition~\ref{PropPH} vanish. \\
2. Note that
$\dim X_i$ is equal to
$$\max\{ 0, N-(m-i+1)(n-i+1) \}.$$
\end{remark}
\begin{proof}
There exists a 1-form $\widetilde{\omega}$ which coincides with $\omega$ in a neighbourhood of the sphere $S_\varepsilon$ such that it is radial in a neighbourhood of the origin and, in a neighbourhood of each singular point $x_0$ on $X_i \setminus X_{i-1}$, it looks as follows. There exists a local biholomorphism $h: {\Bbb C}^{\dim X_i} \times M_{m-i+1,n-i+1} \to ({\Bbb C}^N, x_0)$ which sends ${\Bbb C}^{\dim X_i} \times M_{m-i+1,n-i+1}^{t-i+1}$ onto $X$ (and therefore ${\Bbb C}^{\dim X_i} \times \{ 0 \}$ onto $X_i$) and one has $h^\ast \widetilde{\omega} = p_1^\ast \omega' + p_2^\ast \omega''$ where $p_1$ and $p_2$ are the projections of ${\Bbb C}^{\dim X_i} \times M_{m-i+1,n-i+1}$ to the first and second factor respectively, $\omega'$ is a 1-form on $({\Bbb C}^{\dim X_i},0)$ with a non-degenerate zero at the origin (and therefore ${\rm ind}_{\rm rad}\, (\omega'; {\Bbb C}^{\dim X_i},0)= \pm 1$) and $\omega''$ is a radial 1-form on $(M_{m-i+1,n-i+1},0)$. The sum of the indices of the zeros of the 1-form $\widetilde{\omega}$ on $X_i \setminus X_{i-1}$ is equal to
$${\rm ind}_{\rm rad}\, (\omega; X_i,0) - (-1)^{\dim X_i -\dim X_{i-1}} {\rm ind}_{\rm rad}\, (\omega;X_{i-1},0).$$
Let $\widetilde{X}= \widetilde{F}^{-1}(M_{m,n}^t)$ be an essential smoothing of the singularity $(X,0)$ ($\widetilde{F}$ is a small generic perturbation of $F$). Since outside of a small neighbourhood $B_{\varepsilon'}$ of the origin (such that $\widetilde{\omega}$ is radial on its boundary $S_{\varepsilon'}$), $X$ and $\widetilde{X}$ are diffeomorphic to each other (as stratified spaces), we can suppose $\widetilde{\omega}$ to have the same singular points on $\widetilde{X} \setminus B_{\varepsilon'}$. Changing $\widetilde{\omega}$ inside the ball $B_{\varepsilon'}$, we can suppose that all the singular points of the 1-form $\widetilde{\omega}$ on $\widetilde{X}$ are of the type described above. The sum of the indices of the zeros of the 1-form $\widetilde{\omega}$ on $(\widetilde{X}_i \setminus \widetilde{X}_{i-1}) \cap B_{\varepsilon'}$ is equal to
$$(-1)^{\dim \widetilde{X}_i} (\chi(\widetilde{X}_i \cap B_{\varepsilon'}) - \chi(\widetilde{X}_{i-1} \cap B_{\varepsilon'}))=
(-1)^{\dim \widetilde{X}_i}(\chi(X_i,0) - \chi(X_{i-1},0))$$
(the sign $(-1)^{\dim \widetilde{X}_i}$ reflects the difference between the radial index of a complex 1-form and of its real part, see, e.g., \cite{EGGD}). Therefore the sum of the indices of the zeros of the 1-form $\widetilde{\omega}$ on the entire stratum $\widetilde{X}_i \setminus \widetilde{X}_{i-1}$ is equal to
\begin{eqnarray*}
\lefteqn{{\rm ind}_{\rm rad}\, (\omega; X_i,0) - (-1)^{\dim X_i - \dim X_{i-1}} {\rm ind}_{\rm rad}\, (\omega;X_{i-1},0)} \\
&+ & (-1)^{\dim \widetilde{X}_i} (\chi(X_i,0) - \chi(X_{i-1},0)).
\end{eqnarray*}
Applying one of the three possible resolutions of the variety $\widetilde{X}$ over a point of $\widetilde{X}_i \setminus \widetilde{X}_{i-1}$, we glue in a standard fibre $G_i^k$ (a Grassmannian or a product of two of them). The sum of the indices of the zeros of a generic perturbation of the radial 1-form on the corresponding resolution of $M_{m-i+1,n-i+1}^{t-i+1}$ (the normal slice to $\widetilde{X}_i$ in $\widetilde{X}$) is equal to $(-1)^{\dim \widetilde{X} - \dim \widetilde{X}_i} \chi(G_i^k)$. Therefore the corresponding PH-index of the 1-form is given by the formula of Proposition~\ref{PropPH}.
\end{proof}
In the formula expressing the radial index of a 1-form in terms of the Euler obstructions of the 1-form on the different strata \cite{EGGD}, certain integer coefficients $n_i$ appear. Up to sign they are equal to the reduced Euler characteristics of generic hyperplane sections of normal slices of the variety at points of different strata of a Whitney stratification. (The reduced Euler characteristic $\overline{\chi}(Z)$ of a topological space $Z$ is $\chi(Z)-1$.) For a determinantal singularity $X=F^{-1}(M_{m,n}^t)$ these slices are standard ones outside of the origin: a normal slice to the stratum $X_i \setminus X_{i-1}$ ($X_i=F^{-1}(M_{m,n}^i)$) is isomorphic to $(M_{m-i+1,n-i+1}^{t-i+1},0)$. This leads to the problem of computing the Euler characteristic of a generic hyperplane section of the variety $M_{m,n}^t$.
\begin{proposition}
Let $\ell: M_{m,n} \to {\Bbb C}$ be a generic linear form and let $L_{m,n}^t = M_{m,n}^t \cap \ell^{-1}(1)$. Then, for $t \leq m \leq n$, one has
$$\overline{\chi}(L_{m,n}^t) = (-1)^t \binom{m-1}{t-1}.$$
\end{proposition}
\begin{proof}
The linear form $\ell$ can be written as $\ell(A)= {\rm tr}\, CA$ for a certain (generic) $n \times m$-matrix $C$. Since
${\rm tr}\, CA = {\rm tr}\, S_1^{-1} C S_2 S_2^{-1} A S_1$
for invertible matrices $S_1$ and $S_2$ of size $n \times n$ and $m \times m$ respectively, one can suppose that
$$C = \left( \begin{array}{c} I_m \\ 0 \end{array} \right)$$
where $I_m$ is the unit $m \times m$-matrix and $0$ is the zero matrix of size $(n-m) \times m$. Thus one has $\ell(A) = \sum_{i=1}^m a_{ii}$ for $A=(a_{ij})$.
For $I \subset \{1, \ldots , m\}$, $I \neq \emptyset$, let
$$L_I= \{ A \in L_{m,n}^t \, | \, a_{ii} \neq 0 \mbox{ for } i \in I, a_{ii}=0 \mbox{ for } i \not\in I \}.$$
The space $L_{m,n}^t $ is the union of the spaces $L_I$ and therefore $\chi(L_{m,n}^t)=\sum\limits_I \chi(L_I)$.
Let $I$ as above be fixed and let $k = \# I$ be the number of elements of $I$. Without loss of generality we can suppose that $I=\{1, \ldots , k\}$. Let
\begin{eqnarray*}
B_I & = & \{ (\alpha_1, \ldots , \alpha_k) \in {\Bbb C}^k \, | \, \alpha_i \neq 0 , \sum \alpha_i =1 \}, \\
C_I & = & \left\{ A \in L_I \, \left| \, a_{ii}= \frac{1}{k} \mbox{ for } i=1, \ldots , k \right. \right\}.
\end{eqnarray*}
One can see that the space $L_I$ is the direct product of the spaces $B_I$ and $C_I$. The corresponding projections to $B_I$ and $C_I$ are defined by
\begin{eqnarray*}
p_1(A) & = & (a_{11}, \ldots , a_{kk}), \\
p_2(A) & = & D\left(\frac{1}{ka_{11}}, \ldots , \frac{1}{ka_{kk}}, 1, \ldots , 1\right) A
\end{eqnarray*}
where $D(\alpha_1, \ldots , \alpha_s)$ is the diagonal $s \times s$-matrix with diagonal entries $\alpha_1, \ldots , \alpha_s$.
The inclusion-exclusion formula gives
$$\chi(B_I) = 1 - \binom{k}{1} + \binom{k}{2} \pm \ldots +(-1)^{k-1} \binom{k}{k-1} = (-1)^{k-1}.$$
To compute the Euler characteristic of the space $C_I$, let us consider the action of the group $({\Bbb C}^\ast)^n$ on $C_I$ defined by
$$(\lambda_1, \ldots , \lambda_n) \ast A = D(\lambda_1^{-1}, \ldots , \lambda_m^{-1}) A D(\lambda_1, \ldots , \lambda_n)$$
for $\lambda_i \in {\Bbb C}^\ast$. The Euler characteristic of the space $C_I$ is equal to the Euler characteristic of the set of fixed points of the action. One can see that, if $k < t$, the only fixed point of this action is the $m \times n$-matrix
$$\left( \begin{array}{cc} \frac{1}{k} I_k & 0 \\0 & 0 \end{array} \right),$$
while if $k \geq t$ the action has no fixed points. Therefore
$$\chi(C_I) = \left\{ \begin{array}{cl} 1 & \mbox{for } k < t, \\
0 & \mbox{for } k \geq t. \end{array} \right.$$
Summarizing we have
$$\overline{\chi}(L_{m,n}^t) = \sum_{k=1}^{t-1} (-1)^{k-1} \binom{m}{k}-1 = \sum_{k=0}^{t-1} (-1)^{k-1} \binom{m}{k} = (-1)^t \binom{m-1}{t-1}.$$
\end{proof}
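The alternating sum at the end of the proof can be checked numerically against the closed form of the proposition. The following sketch is ours, not part of the original (Python used purely as a calculator):

```python
from math import comb

def chi_bar_L(m, t):
    """chi-bar(L_{m,n}^t) as assembled in the proof: chi(L_I) is
    (-1)^(k-1) * chi(C_I), with chi(C_I) = 1 for k = #I < t and 0 otherwise,
    summed over the comb(m, k) subsets I of size k; subtracting 1 gives the
    reduced Euler characteristic.  (The result does not depend on n.)"""
    return sum((-1) ** (k - 1) * comb(m, k) for k in range(1, t)) - 1

def chi_bar_L_closed(m, t):
    """The closed form from the proposition."""
    return (-1) ** t * comb(m - 1, t - 1)

# The two expressions agree for all t <= m.
for m in range(1, 9):
    for t in range(1, m + 1):
        assert chi_bar_L(m, t) == chi_bar_L_closed(m, t)
```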
Following \cite{EGGD}, for $i \leq j$, let
\begin{eqnarray*}
n_{ij} & := & {\rm ind}_{\rm rad}\, (d\ell; M_{m-i+1,n-i+1}^{j-i+1},0) = (-1)^{d_{ij}-1} \overline{\chi}(L_{m-i+1,n-i+1}^{j-i+1}) \\ & = & (-1)^{(m+n)(j-i)} \binom{m-i}{m-j},
\end{eqnarray*}
where $d_{ij}$ is the dimension of $M_{m-i+1,n-i+1}^{j-i+1}$ equal to $(m-i+1)(n-i+1)-(m-j+1)(n-j+1)$.
\begin{proposition} \label{Propnit}
One has
$${\rm ind}_{\rm rad}\, (\omega;X,0) = \sum_{i=1}^t n_{it}\, {\rm ind}_{\rm PHN}\, (\omega; X_i,0) + (-1)^{\dim X -1} \overline{\chi}(X,0).$$
\end{proposition}
\begin{proof} Consider the 1-form $\omega$ on the essential smoothing $\widetilde{X}$ of the singularity $(X,0)$. There exists a 1-form $\widetilde{\omega}$ which coincides with $\omega$ in a neighbourhood of the sphere $S_\varepsilon$ such that in a neighbourhood of each singular point $x_0$ on $\widetilde{X}_i \setminus \widetilde{X}_{i-1}$ it looks as follows. There exists a local biholomorphism $h : ({\Bbb C}^{\dim \widetilde{X}_i} \times M_{m-i+1,n-i+1},0) \to ({\Bbb C}^N,x_0)$ which sends ${\Bbb C}^{\dim \widetilde{X}_i} \times M_{m-i+1,n-i+1}^{t-i+1}$ onto $\widetilde{X}$ (and therefore ${\Bbb C}^{\dim \widetilde{X}_i} \times \{ 0\}$ onto $\widetilde{X}_i$) and one has $h^\ast \widetilde{\omega} = p_1^\ast \omega' + p_2^\ast d \ell$ where $p_1$ and $p_2$ are the projections of ${\Bbb C}^{\dim \widetilde{X}_i} \times M_{m-i+1,n-i+1}$ to the first and second factor respectively, $\omega'$ is a 1-form on $({\Bbb C}^{\dim \widetilde{X}_i},0)$ with a non-degenerate zero at the origin (and therefore ${\rm ind}_{\rm rad}\, (\omega';{\Bbb C}^{\dim \widetilde{X}_i},0)= \pm 1$),
and $\ell$ is a generic linear form on $M_{m-i+1,n-i+1}$. Note that the local form of the singular points of the 1-form $\widetilde{\omega}$ here is different from that in the proof of Proposition~\ref{PropPH}. The sum of the indices of the zeros of the 1-form $\widetilde{\omega}$ on $\widetilde{X}_i \setminus \widetilde{X}_{i-1}$ coincides with the PHN-index ${\rm ind}_{\rm PHN}\, (\omega;X_i,0)$ of the 1-form $\omega$ on the variety $X_i$. Therefore the sum of the radial indices of the singular points of the 1-form $\widetilde{\omega}$ on $\widetilde{X}$ is equal to
$$\sum_{i=1}^t {\rm ind}_{\rm PHN}\, (\omega;X_i,0) \cdot {\rm ind}_{\rm rad}\, (d \ell; M_{m-i+1,n-i+1}^{t-i+1},0) = \sum_{i=1}^t {\rm ind}_{\rm PHN}\, (\omega;X_i,0)\, n_{it}.$$
On the other hand it is equal to
$${\rm ind}_{\rm rad}\, (\omega;X,0) + (-1)^{\dim X}(\chi(\widetilde{X})-1) = {\rm ind}_{\rm rad}\, (\omega;X,0) +(-1)^{\dim X} \overline{\chi}(X,0).$$
\end{proof}
Let $\cal N$ be the upper triangular $t \times t$-matrix with the entries $n_{ij}$ for $i \leq j \leq t$, and let ${\cal M}=(m_{ij})= {\cal N}^{-1}$. One can see that, for $i \leq j \leq t$,
$$m_{ij} = (-1)^{(m+n+1)(j-i)} \binom{m-i}{m-j}.$$
(This follows from the known formula $\sum\limits_{k=r}^s (-1)^k \binom{s}{k} \binom{k}{r}=0$ for $0 \leq r <s$, see, e.g., \cite[(2.1.4)]{Ha}.)
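The fact that the matrices with these entries are indeed mutually inverse can be confirmed by direct multiplication. The sketch below is ours, not part of the original (names illustrative); it builds both upper triangular matrices and checks ${\cal N}{\cal M}=I$ for small $m$, $n$, $t$:

```python
from math import comb

def entry_N(i, j, m, n):
    """n_{ij} = (-1)^{(m+n)(j-i)} * binom(m-i, m-j), for 1 <= i <= j."""
    return (-1) ** ((m + n) * (j - i)) * comb(m - i, m - j)

def entry_M(i, j, m, n):
    """m_{ij} = (-1)^{(m+n+1)(j-i)} * binom(m-i, m-j), for 1 <= i <= j."""
    return (-1) ** ((m + n + 1) * (j - i)) * comb(m - i, m - j)

def build(entry, t, m, n):
    """Upper triangular t x t matrix with the given entry function."""
    return [[entry(i, j, m, n) if i <= j else 0 for j in range(1, t + 1)]
            for i in range(1, t + 1)]

def matmul(A, B):
    t = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(t)) for j in range(t)]
            for i in range(t)]

# N * M should be the identity for every admissible (m, n, t).
for m in range(2, 6):
    for n in range(m, 7):
        for t in range(1, m + 1):
            N_mat = build(entry_N, t, m, n)
            M_mat = build(entry_M, t, m, n)
            I = [[int(i == j) for j in range(t)] for i in range(t)]
            assert matmul(N_mat, M_mat) == I
```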
The inverse to Proposition~\ref{Propnit} is the following statement.
\begin{proposition} \label{PropPHN}
One has
$${\rm ind}_{\rm PHN}\, (\omega;X,0) = \sum_{i=1}^t m_{it} \left( {\rm ind}_{\rm rad}\, (\omega; X_i,0) + (-1)^{\dim X_i} \overline{\chi}(X_i,0) \right).$$
\end{proposition}
\section{Isolated determinantal singularities}
For isolated determinantal singularities, the relations between the PH-, the PHN- and the radial indices simplify but are somewhat different in the (determinantally) smoothable and in the non-smoothable cases (i.e.\ for $N < (m-t+2)(n-t+2)$ and $N=(m-t+2)(n-t+2)$ respectively).
For isolated smoothable singularities all Poincar\'e--Hopf indices (including the Poincar\'e--Hopf-Nash index) coincide and they are equal to
$${\rm ind}_{\rm PH}\, (\omega; X,0) = {\rm ind}_{\rm rad}\, (\omega; X,0) + (-1)^{\dim X} \overline{\chi}(X,0).$$
Let $(X,0)=F^{-1}(M_{m,n}^t) \subset ({\Bbb C}^N,0)$ be an isolated non-smoothable determinantal singularity, i.e.\ $N=(m-t+2)(n-t+2)$. The singular locus $\widetilde{X}_{t-1}= \widetilde{F}^{-1}(M_{m,n}^{t-1})$ of an essential smoothing $\widetilde{X}= \widetilde{F}^{-1}(M_{m,n}^t)$ of the singularity $(X,0)$ consists of isolated points. The number of these points is the Euler characteristic $\chi(X_{t-1},0)$ of the stratum $\widetilde{X}_{t-1}$. Therefore the relations from Propositions~\ref{PropPH} and \ref{PropPHN} reduce to the following ones.
\begin{proposition}
For $N=(m-t+2)(n-t+2)$, the relation between the PH- and the radial index reduces to
$$
{\rm ind}_{\rm PH}^k \, (\omega;X,0)= {\rm ind}_{\rm rad}\, (\omega;X,0) + (-1)^{\dim X} \left( \overline{\chi}(X,0)+ \chi(X_{t-1},0)\overline{\chi}(G_{t-1}^k) \right)
$$
where
$$\overline{\chi}(G_{t-1}^1)=n-t+1, \quad \overline{\chi}(G_{t-1}^2)=m-t+1, \quad \overline{\chi}(G_{t-1}^3)=(m-t+2)(n-t+2)-1$$
and $k=1,2,3$.
The relation between the PHN- and the radial index reduces to
\begin{eqnarray*}
\lefteqn{{\rm ind}_{\rm PHN}\, (\omega;X,0)} \\
& = & {\rm ind}_{\rm rad}\, (\omega;X,0) + (-1)^{\dim X} \overline{\chi}(X,0)+ (-1)^{m+n+1}(m-t+1) \chi(X_{t-1},0).
\end{eqnarray*}
\end{proposition}
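The three values $\overline{\chi}(G_{t-1}^k)$ quoted above are the case $i=t-1$ of the fibre Euler characteristics of the resolutions. A quick numerical sketch (ours, not part of the original; names illustrative) verifying them:

```python
from math import comb

def chi_bar_fibre(k, m, n, t):
    """Reduced Euler characteristic of the fibre G_{t-1}^k over an isolated
    singular point of the essential smoothing (stratum i = t - 1)."""
    g1 = comb(n - t + 2, n - t + 1)  # chi of Gr(n-t+1, n-t+2)
    g2 = comb(m - t + 2, m - t + 1)  # chi of Gr(m-t+1, m-t+2)
    return {1: g1, 2: g2, 3: g1 * g2}[k] - 1

# Check the stated closed forms for a range of (m, n, t).
for m in range(2, 7):
    for n in range(m, 8):
        for t in range(2, m + 1):
            assert chi_bar_fibre(1, m, n, t) == n - t + 1
            assert chi_bar_fibre(2, m, n, t) == m - t + 1
            assert chi_bar_fibre(3, m, n, t) == (m - t + 2) * (n - t + 2) - 1
```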
One has the following formula for $\chi(X_{t-1},0)$ in this case.
\begin{proposition} Let $I_F^{t-1}$ be the ideal in the ring ${\cal O}_{{\Bbb C}^N,0}$ generated by all the $(t-1) \times (t-1)$-minors of the matrix $F(x)$. Then
$$\chi(X_{t-1},0) = \dim {\cal O}_{{\Bbb C}^N,0}/I_F^{t-1}.$$
\end{proposition}
\begin{proof} This follows from the fact that $\chi(X_{t-1},0)$ is the intersection number of the image $F({\Bbb C}^N)$ of the map $F$ with the Cohen-Macaulay variety $M_{m,n}^{t-1} \subset M_{m,n}$ defined by the vanishing of the $(t-1) \times (t-1)$-minors.
\end{proof}
\section{The homological index for IDS}
For an isolated singular point of a holomorphic 1-form $\omega$ on an isolated singularity $(X,0)\subset ({\Bbb C}^N,0)$ (of pure dimension $d$) the homological index was defined in \cite{EGS}. The homological index ${\rm ind}_{\rm hom}\,\omega={\rm ind}_{\rm hom}\,(\omega; X,0)$ is $(-1)^d$ times the Euler characteristic of the complex
$$
0 \to \Omega^0_{X,0}
\stackrel{\wedge \omega}{\to} \Omega^1_{X,0} \stackrel{\wedge \omega}{\to} \ldots \stackrel{\wedge \omega}{\to} \Omega^d_{X,0} \to 0\,,
$$
where $\Omega_{X,0}^k$ is the module of holomorphic differential $k$-forms on $(X,0)$, i.e.\ the module $\Omega_{{\Bbb C}^N,0}^k$ factorized by the equations of the variety $X$ and by the wedge products of their differentials with the module $\Omega_{{\Bbb C}^N,0}^{k-1}$. For an isolated complete intersection singularity
$(X,0)=\{f_1=\ldots=f_{k}=0\}\subset ({\Bbb C}^N,0)$ ($k:=N-d$) the homological index ${\rm ind}_{\rm hom}\,\omega$ is equal to $\dim_{\Bbb C} \Omega_{X,0}^d/\omega\wedge \Omega_{X,0}^{d-1}$; it coincides with the PH- (or GSV-) index and is equal to the dimension of the factor ring ${\cal A}_{X,\omega}$ of the ring ${\cal O}_{{\Bbb C}^N,0}$
of germs of functions on $({\Bbb C}^N, 0)$ by the ideal generated by the functions $f_i$ and the $(k+1)\times(k+1)$-minors of the matrix
$$
\left( \begin{array}{ccc} \frac{\partial f_1}{\partial x_1} & \cdots &
\frac{\partial f_1}{\partial x_N} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_k}{\partial x_1} & \cdots & \frac{\partial f_k}{\partial
x_N}\\
A_1 & \cdots & A_N
\end{array} \right).
$$
\begin{proposition}
For a holomorphic 1-form $\omega$ on an IDS $(X,0)$ with an isolated singular point (zero) at the origin one has
$$
{\rm ind}_{\rm hom}\,\omega =\dim_{\Bbb C} \Omega_{X,0}^d/\omega\wedge \Omega_{X,0}^{d-1}\,.
$$
\end{proposition}
\begin{proof}
We use the following version of a particular case of the de Rham Lemma from \cite[Lemma 1.6]{Gr}.
\begin{statement}
Let $\omega$ be a holomorphic 1-form on an isolated singularity $(X,0)$.
Then the kernel of the mapping
$$
\wedge\omega:\Omega_{X,0}^p\to \Omega_{X,0}^{p+1}
$$
is equal to $\omega\wedge\Omega_{X,0}^{p-1}$ if $0\le p < \mbox{\rm codh\,}{\cal O}_{X,0}-\dim C(\omega, X)$, where $\mbox{\rm codh\,}{\cal O}_{X,0}$ is the homological codimension of the ring ${\cal O}_{X,0}$ and $C(\omega, X)$ is the set of zeros of the 1-form $\omega$ on $X$.
\end{statement}
In \cite{Gr} the statement is formulated for $\omega=df$ for a holomorphic function $f:({\Bbb C}^N,0)\to({\Bbb C},0)$. However (as G.-M.~Greuel pointed out to us) the proof is also valid for an arbitrary holomorphic 1-form $\omega$.
In our case the 1-form $\omega$ has an isolated singular point on $(X,0)$ and therefore $C(\omega, X)=\{0\}$. Moreover $(X,0)$ (being a determinantal singularity) is Cohen-Macaulay \cite{HE}. Therefore $\mbox{\rm codh\,}{\cal O}_{X,0} =\dim X$.
\end{proof}
For a reduced space curve singularity (which is always determinantal of type $(m, m+1, m)$) V.V.~Goryunov \cite{Gor} has defined a Milnor number for a holomorphic function $f$ with an isolated singularity on the curve. This notion was extended to arbitrary reduced curve singularities by D.~Mond and D.~van~Straten \cite{MvS}. Almost the same definition applies to a holomorphic 1-form with an isolated singular point on the curve, giving a number which can be considered as an index (the GMvS-index ${\rm ind}_{\rm GMvS}\,\omega$) of the 1-form $\omega$. If the curve singularity is smoothable, the GMvS-index coincides with the Poincar\'e--Hopf index defined by a smoothing. (All smoothings of a curve singularity have the same Euler characteristic and therefore the Poincar\'e--Hopf index is independent of the smoothing.)
\begin{sloppypar}
The homological index of a 1-form on a curve singularity $(C,0)$ is, in general, different from the GMvS-index. Let $(\overline C, \overline 0)$ be the normalization of the curve singularity $(C,0)$ and let $\tau=\dim_{\Bbb C}\ker (\Omega_{C,0}^1\to \Omega_{\overline C, \overline 0}^1)$, $\lambda=\dim_{\Bbb C} \omega_{C,0}/c(\Omega_{C,0}^1)$, where $\omega_{C,0}$ is the dualizing module of Grothendieck, $c:\Omega_{C,0}^1\to \omega_{C,0}$ is the class map. Then one has
$$
{\rm ind}_{\rm hom}\,\omega={\rm ind}_{\rm GMvS}\,\omega -\lambda+\tau
$$
(see, e.g., \cite{EGS}).
\end{sloppypar}
By \cite{MvS}, the GMvS-index of a 1-form $\omega=\sum\limits_{i=1}^3 A_idx_i$ on a space curve singularity $(C,0)$ defined by vanishing of the $m\times m$-minors $\Delta_1$, \dots, $\Delta_{m+1}$ of an $m\times (m+1)$-matrix $F(x)$ is equal to the dimension of the factor algebra ${\cal A}_{C,\omega}$ of the algebra ${\cal O}_{{\Bbb C}^3,0}$ by the ideal generated by the equations $\Delta_1$, \dots, $\Delta_{m+1}$ of the curve and by the $3\times 3$-minors of the matrix
$$
\left( \begin{array}{ccc} \frac{\partial \Delta_1}{\partial x_1} & \frac{\partial \Delta_1}{\partial x_2}&
\frac{\partial \Delta_1}{\partial x_3} \\
\vdots & \vdots & \vdots \\
\frac{\partial \Delta_{m+1}}{\partial x_1} & \frac{\partial \Delta_{m+1}}{\partial x_2}&
\frac{\partial \Delta_{m+1}}{\partial x_3} \\
A_1 & A_2 & A_3
\end{array} \right) =: \left( \begin{array}{c} d\Delta_i \\ \omega \end{array} \right).
$$
In \cite{MvS} the statement is formulated for a 1-form $\omega$ being the differential of a holomorphic function, however, the proof can be adapted to the general case. In fact the statement for arbitrary 1-forms can be deduced from the one for differentials of functions.
A natural generalization of the algebras ${\cal A}_{X,\omega}$ for an ICIS $(X,0)$ and ${\cal A}_{C,\omega}$ for a space curve singularity
$(C,0)$ to a 1-form $\omega=\sum\limits_{i=1}^N A_idx_i$ on an IDS $(X,0)=F^{-1}(M_{m,n}^t)\subset({\Bbb C}^N, 0)$ of type $(m,n,t)$ is the algebra ${\cal A}_{X,\omega}$
defined as the factor algebra of the algebra ${\cal O}_{{\Bbb C}^N,0}$ by the ideal $I_{X,\omega}$ generated by the
$t \times t$-minors $m_{IJ}^t$ of the matrix $F(x)$ and by the $(c+1)\times(c+1)$-minors of the matrix
$$
\left( \begin{array}{c} dm_{IJ}^t \\ \omega \end{array} \right)
$$
whose rows are the components of the differentials of the minors $m_{IJ}^t$ ($\# I=\# J=t$) and of the 1-form $\omega$, where $c= (m-t+1)(n-t+1)$ is the codimension of the IDS $(X,0)$. The algebra ${\cal A}_{X,\omega}$ has finite dimension as a complex vector space.
\begin{proposition} \label{PropIneq}
For a (determinantally) smoothable IDS $(X,0)$ one has
$$
\dim_{\Bbb C} \Omega_{X,0}^d/\omega\wedge \Omega_{X,0}^{d-1}\ge {\rm ind}_{\rm PH}\, \omega\,,
$$
$$
\dim_{\Bbb C} {\cal A}_{X,\omega}\ge {\rm ind}_{\rm PH}\, \omega\,.
$$
\end{proposition}
\begin{proof}
Let $F_\lambda:U\to M_{m,n}$ be a 1-parameter deformation of the map $F$ ($F_0=F$) such that $X_\lambda=F_\lambda^{-1}(M_{m,n}^t)$ is a smoothing of the IDS $(X,0)$, and let $\omega_\lambda$ be a
deformation of the 1-form $\omega$ such that, for $\lambda\in{\Bbb C}\setminus\{0\}$, the 1-form $\omega_\lambda$ has only non-degenerate zeros on the manifold $X_\lambda$ (the number of these points is the PH-index ${\rm ind}_{\rm PH}\, \omega$ of the 1-form $\omega$). The family $\omega_\lambda$ defines a 1-form $\check{\omega}$ on $Y=U\times {\Bbb C}$ (without the differential $d\lambda$). The ${\cal O}_{U\times {\Bbb C}}$-modules
$$
\Omega_{Y/{\Bbb C}}^d/\langle m_{IJ} \Omega_{Y/{\Bbb C}}^d, dm_{IJ}\wedge\Omega_{Y/{\Bbb C}}^{d-1}, \check{\omega}\wedge\Omega_{Y/{\Bbb C}}^{d-1} \rangle \quad \mbox{and} \quad
{\cal O}_Y/I_{Y, \check{\omega}}
$$
($\Omega_{Y/{\Bbb C}}^d:=\Omega_{Y}^d/d\lambda \wedge \Omega_{Y}^{d-1}$) have support on the curve consisting of singular points of the 1-forms $\omega_\lambda$ on the fibres $X_\lambda$. For each of these sheaves ${\cal F}$ and for
$y \in Y$, let $\nu(y)$ be the dimension of the vector space ${\Bbb C} \otimes_{{\cal O}_{{\Bbb C}, \lambda}} {\cal F}_y$. For $\lambda \neq 0$, $\nu(\lambda)= \sum_{x \in U} \nu(x,\lambda)$ is equal to the PH-index ${\rm ind}_{\rm PH}\, \omega$ of the 1-form $\omega$. Now the statement follows from the semicontinuity of the function $\nu(\lambda)$.
\end{proof}
One can show that, in general, the inequalities in Proposition~\ref{PropIneq} may be strict and the left hand sides of them may also be different.
\begin{example} Let $(X,0) \subset ({\Bbb C}^4,0)$ be the surface determinantal singularity of type $(2,3,2)$ defined by the matrix
$$\left( \begin{array}{ccc} z & y+u & x \\
u & x & y \end{array} \right)$$
($x,y,z,u$ are the coordinates in ${\Bbb C}^4$), and let $\omega = du$. Then one has
$${\rm ind}_{\rm PH}\, \omega =3.$$
This follows from the fact that $X$ is the space of a 1-parameter deformation (with parameter $u$) of the space curve singularity $(C,0)$ defined by the matrix
$$\left( \begin{array}{ccc} z & y & x \\
0 & x & y \end{array} \right).$$
A versal deformation of the singularity $(C,0)$ has the form
$$\left( \begin{array}{ccc} z & y+b & x+c \\
a & x & y \end{array} \right)$$
with the discriminant $a(b^2-c^2)=0$ (see, e.g., \cite{FK}). A smoothing of $X$ can be given by taking $a=b=u$, $c=\lambda \neq 0$. The corresponding 1-parameter family in $u$ intersects the discriminant at 3 points where the function $u$ has non-degenerate critical points.
On the other hand
\begin{eqnarray*}
\dim_{\Bbb C} \Omega^2_{X,0}/\omega \wedge \Omega^1_{X,0} & = & 6, \\
\dim_{\Bbb C} {\cal A}_{X,\omega} & = & 5.
\end{eqnarray*}
(The calculations were made with the help of the computer algebra system {\sc Singular} \cite{Si}.)
\end{example}
\begin{remark} For a 1-form on a smoothable IDS $(X,0) \subset ({\Bbb C}^N,0)$ with an isolated singular point on $(X,0)$, there is a well defined quadratic form on the module $\Omega^d_{X,0}/\omega \wedge \Omega^{d-1}_{X,0}$ constructed in a way similar to \cite{EGMZ}.
\end{remark}
\noindent Leibniz Universit\"{a}t Hannover, Institut f\"{u}r Algebraische Geometrie,\\
Postfach 6009, D-30060 Hannover, Germany \\
E-mail: [email protected]\\
\noindent Moscow State University, Faculty of Mechanics and Mathematics,\\
Moscow, GSP-1, 119991, Russia\\
E-mail: [email protected]
\end{document} | math | 41,931 |
\begin{document}
\title{Tridiagonal canonical matrices of bilinear or sesquilinear forms and of pairs of symmetric, skew-symmetric, or Hermitian forms\thanks{Preprint RT-MAT 2006-16, Universidade de S\~ao Paulo, 2006, 21 p.}}
\renewcommand{\le}{\leqslant}
\renewcommand{\ge}{\geqslant}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}{Lemma}[section]
\theoremstyle{remark}
\newtheorem{remark}{Remark}[section]
\newcommand{\dia}{\,\diagdown\,}
\newcommand{\ddd}{
\text{\begin{picture}(12,8)
\put(-2,-4){$\cdot$}
\put(3,0){$\cdot$}
\put(8,4){$\cdot$}
\end{picture}}}
\begin{abstract}
Tridiagonal canonical
forms of square
matrices under
congruence or
*congruence, pairs of
symmetric or
skew-symmetric
matrices under
congruence, and pairs
of Hermitian matrices
under *congruence are
given over an
algebraically closed
field of
characteristic
different from~$2$.

{\it AMS
classification:}
15A21; 15A57.

{\it Keywords:}
Tridiagonal form;
Canonical matrices;
Congruence; Bilinear
forms, symmetric
forms, and Hermitian
forms.
\end{abstract}
\section{Introduction}
We give tridiagonal
canonical forms of
matrices of
\begin{itemize}
\item[\rm(i)]
bilinear forms and
sesquilinear forms,
\item[\rm(ii)]
pairs of forms, in
which each form is
either symmetric or
skew-symmetric, and
\item[\rm(iii)]
pairs of Hermitian
forms
\end{itemize}
over an algebraically
closed field of
characteristic
different from $2$.
Our canonical forms
are direct sums of
matrices or pairs of
matrices of the form
\begin{equation}
\label{kdu}
\begin{bmatrix}
\varepsilon &a&&&&0\\
a'&0&b\\
&b'&0&a\\
&&a'&0&b\\
&&&b'&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix};
\end{equation}
they employ relatively
few different types of
canonical direct
summands.
Let $\mathbb F$ be a
field of
characteristic
different from $2$.
The problem of
classifying bilinear
or sesquilinear forms
over $\mathbb F$ was
reduced by Gabriel,
Riehm, and
Shrader-Frechette
\cite{gab_form,rie,rie1}
to the problem of
classifying Hermitian
forms over finite
extensions of $\mathbb
F$. In \cite{ser_izv}
this reduction was
extended to
selfadjoint
representations of
linear categories with
involution, and
canonical matrices of
(i)--(iii) were
obtained over $\mathbb
F$ up to
classification of
Hermitian forms over
finite extensions of
$\mathbb F$. Canonical
matrices were found in
a simpler form in
\cite{hor-ser2} when
$\mathbb F=\mathbb C$.
Canonical matrices of
bilinear forms over an
algebraically closed
field of
characteristic $2$
were given in
\cite{ser_char2}. The
problem of classifying
pairs of symmetric,
skew-symmetric, or
Hermitian forms was
studied by many
authors; we refer the
reader to Thompson's
classical work
\cite{thom} with a
bibliography of 225
items, and to the
recent papers by
Lancaster and Rodman
\cite{lan-rod,lan-rod1}.
Each $n\times n$
matrix $A$ over
$\mathbb F$ defines a
bilinear form $x^TAy$
on ${\mathbb F}^n$. If
$\mathbb F$ is a field
with a fixed
nonidentity involution
$a\mapsto \bar a$,
then $A$ defines a
sesquilinear form
$\bar x^TAy$ on
${\mathbb F}^n$. Two
square matrices $A$
and $A'$ give the same
bilinear
(sesquilinear) form
with respect to
different bases if and
only if they are
\emph{congruent}
({\it\!*congruent});
this means that there
is a nonsingular $S$
such that $S^TAS=A'$
($S^*AS=A'$ with
$S^*:=\bar S^T$,
respectively). Two
matrix pairs $(A,B)$
and $(A',B')$ are
\emph{congruent}
(\!\emph{*congruent})
if there is a
nonsingular $S$ such
that $S^TAS=A'$ and
$S^TBS=B'$ ($S^*AS=A'$
and $S^*BS=B'$,
respectively). A
matrix $A$ is
\emph{Hermitian} if
$A=A^*$.
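The defining property behind congruence can be checked numerically: if $A'=S^TAS$, the form computed with $A'$ in the new coordinates equals the form computed with $A$ in the old ones. The following NumPy sketch (an editorial illustration, not part of the paper; all names are ours) verifies this for a random integer matrix and a unit upper-triangular change of basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.integers(-3, 4, (n, n)).astype(float)   # matrix of a bilinear form x^T A y
# Unit upper-triangular S is always nonsingular, so it is a valid change of basis.
S = np.eye(n) + np.triu(rng.integers(-2, 3, (n, n)).astype(float), 1)
Ap = S.T @ A @ S                                 # the congruent matrix A' = S^T A S

# The form in the new coordinates equals the old form at x = S x', y = S y'.
x, y = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(x @ Ap @ y, (S @ x) @ A @ (S @ y))
```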
Thus, the canonical
form problem for
(i)--(iii) is the
canonical form problem
for
\begin{itemize}
\item[\rm(i$'$)]
matrices under
congruence or
*congruence (their
tridiagonal canonical
matrices are given in
Theorems \ref{t_matr}
and \ref{t_matr1});
\item[\rm(ii$'$)]
pairs of matrices
under congruence, in
which each matrix is
either symmetric or
skew-symmetric
(Theorems
\ref{th1a}--\ref{t_skew});
and
\item[\rm(iii$'$)]
pairs of Hermitian
matrices under
*congruence (Theorem
\ref{t_her}).
\end{itemize}
The problem of finding
tridiagonal canonical
forms of (ii$'$) or
(iii$'$) is connected
with the problem of
tridiagonalizing
matrices by orthogonal
or unitary similarity:
two pairs $(I_n,B)$
and $(I_n,B')$ are
congruent or
*congruent if and only
if $B$ and $B'$ are
orthogonally or
unitarily similar,
respectively. The
well-known algorithm
for reducing symmetric
real matrices to
tridiagonal form by
orthogonal similarity
\cite[Section 5]{wil}
cannot be extended to
symmetric complex
matrices. However,
Ikramov
\cite{ikr_trid} showed
that every symmetric
complex matrix is
orthogonally similar
to a tridiagonal
matrix. Each $4\times
4$ complex matrix is
unitarily similar to a
tridiagonal matrix
\cite{dav-dok,pati},
but there is a
$5\times 5$ matrix
that is not unitarily
similar to a
tridiagonal matrix
\cite{dav-dok,fon-wu,lon,stu}.
Our paper was inspired
by
\cite{dok-zhao_tridiag},
in which
\DJ okovi\'{c}
and Zhao gave a
tridiagonal canonical
form of symmetric
matrices for
orthogonal similarity
over an algebraically
closed field of
characteristic
different from $2$ (we
use it in Theorem
\ref{th1b} of our
paper). In a
subsequent article,
and for the same type
of field,
\DJ okovi\'{c},
Rietsch, and Zhao
\cite{dok-zhao_skew}
found a $4$-diagonal
canonical form of
skew-symmetric
matrices for
orthogonal similarity.
Matrix pairs $(A,B)$
and $(A',B')$ are
\emph{equivalent} if
there are nonsingular
$R$ and $S$ such that
$RAS=A'$ and $RBS=B'$.
We denote equivalence
of pairs by $\approx$.
Kronecker's theorem on
pencils of matrices
\cite[Section XII,
Theorem 5]{gan}
ensures that each pair
of matrices of the
same size is
equivalent to a direct
sum, determined
uniquely up to
permutation of
summands, of pairs of
the form
\begin{equation*}\label{ksca}
(I_n,J_n(\lambda)),\quad
(J_{n}(0),I_{n}),\quad
(F_n,G_n),\quad
(F_n^T,G_n^T),
\end{equation*}
in which $I_n$ is the
$n\times n$ identity
matrix,
\[
J_n(\lambda) :=
\begin{bmatrix}
\lambda&1&&0\\
&\lambda&\ddots&\\
&&\ddots&1\\
0&&&\lambda
\end{bmatrix}
\text{ is $n$-by-$n$,}
\]
and
\begin{equation*}\label{1.4}
F_n:=\begin{bmatrix}
1&0&&0\\&\ddots&\ddots&\\0&&1&0
\end{bmatrix}\text{
and }\
G_n:=\begin{bmatrix}
0&1&&0\\&\ddots&\ddots&\\0&&0&1
\end{bmatrix}\text{
are $n$-by-$(n+1)$.}
\end{equation*}
Thus, $F_0=G_0$ is the
$0$-by-$1$ matrix,
which represents the
linear mapping
${\mathbb F}\to 0$.
In the following two
theorems (proved in
Sections \ref{s_pr}
and \ref{s_pr1}) we
give tridiagonal
canonical forms of a
square matrix $A$
under congruence and
*congruence. We also
give the Kronecker
canonical form of
$(B^T,B)$ and,
respectively,
$(B^*,B)$ for each
canonical direct
summand $B$, which
permits us to
construct the
canonical form of $A$
for congruence using
the Kronecker
canonical form of
$(A^T,A)$, and to
construct, up to signs
of the direct
summands, the
canonical form of $A$
for *congruence using
the Kronecker
canonical form of
$(A^*,A)$.
\begin{theorem}
\label{t_matr}
{\rm(a)} Each square
matrix $A$ over an
algebraically closed
field $\mathbb F$ of
characteristic
different from $2$ is
congruent to a direct
sum, determined
uniquely up to
permutation of
summands, of
tridiagonal matrices
of three types:
\begin{equation}\label{cm1}
\begin{bmatrix}
0&1 &&&0\\
\lambda&0&1 \\
&\lambda &0&\ddots&\\
&&\ddots&\ddots&1 \\
0&&&\lambda &0
\end{bmatrix}_{2k},\qquad
\lambda
\in\mathbb F,\
\lambda\ne \pm 1,
\end{equation}
in which each nonzero
$\lambda$ is
determined up to
replacement by
$\lambda^{-1}$
$($i.e., the matrices
\eqref{cm1} with
$\lambda$ and
$\lambda^{-1}$ are
congruent$)$;
\begin{equation}\label{cm2}
\begin{bmatrix}
\varepsilon &1&&&&0\\
-1&0&1&\\
&1&0&1&\\
&&-1&0&1&\\
&&&1&0&\ddots&\\
0&&&& \ddots&\ddots
\end{bmatrix}_{n},
\quad
\begin{aligned}
&\text{$\varepsilon
=1$ if $n$ is a
multiple
of $4$,}\\
&\text{$\varepsilon
\in\{0,1\}$
otherwise};
\end{aligned}
\end{equation}
and
\begin{equation}\label{cm3}
\begin{bmatrix}
0 &1&&&&0\\
1&0&1&\\
&-1&0&1&\\
&&1&0&1\\
&&&-1&0&\ddots&\\
0&&&& \ddots&\ddots\\
\end{bmatrix}_{4k}.
\qquad\qquad
\qquad\qquad
\qquad
\end{equation}
The subscripts $2k,\
n$, and $4k$ $($with
$k,n\in\mathbb N)$
designate the sizes of
the corresponding
matrices.
{\rm(b)} The direct
sum asserted in
{\rm(a)} is determined
uniquely up to
permutation of
summands by the
Kronecker canonical
form of $(A^T,A)$ for
equivalence. For each
direct summand $B$ of
types
\eqref{cm1}--\eqref{cm3},
the Kronecker
canonical form of
$(B^T,B)$ is given in
the following table:
\begin{equation}\label{tab1}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline $B$& Kronecker
canonical form of $
(B^T,B)$\\
\hline\hline Matrix
\eqref{cm1}&
$(I_k,J_k(\lambda))\oplus
(J_k(\lambda),I_k)$\\\hline
Matrix \eqref{cm2}&
$(F_k,G_k)\oplus
(F_k^T,G_k^T)\ $ if
$n=2k+1$\\
with $\varepsilon=0$&
$(I_k,J_k(-1))\oplus
(I_k,J_k(-1))\ $ if
$n=2k$ $(k$ is
odd$)$\\\hline
$\begin{matrix}
\text{Matrix
\eqref{cm2}}\\
\text{with
$\varepsilon=1$}
\end{matrix}$
&
$(I_n,J_n((-1)^{n+1}))$
\\\hline
Matrix \eqref{cm3}&
$(I_{2k},J_{2k}(1))\oplus
(I_{2k},J_{2k}(1))$\\\hline
\end{tabular}
\end{equation}
\end{theorem}
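The first row of table \eqref{tab1} admits a quick numerical sanity check (our addition, not in the paper): if $(B^T,B)$ is equivalent to $(I_k,J_k(\lambda))\oplus(J_k(\lambda),I_k)$, then the eigenvalues of $B^{-1}B^T$ are $\lambda$ and $\lambda^{-1}$, each of multiplicity $k$, so the pencil determinant factors as $\det(B^T-tB)=\det(B)\,(t-\lambda)^k(t-\lambda^{-1})^k$. The sketch below tests this identity at a sample point $t$:

```python
import numpy as np

def cm1(lam, k):
    """Matrix (cm1): 1's on the superdiagonal, lam on the subdiagonal, size 2k."""
    n = 2 * k
    B = np.zeros((n, n))
    for i in range(n - 1):
        B[i, i + 1] = 1.0
        B[i + 1, i] = lam
    return B

lam, k = 3.0, 2
B = cm1(lam, k)

# Kronecker form (I_k, J_k(lam)) + (J_k(lam), I_k) for (B^T, B) forces
# det(B^T - t B) = det(B) (t - lam)^k (t - 1/lam)^k for every t.
t = 2.0
lhs = np.linalg.det(B.T - t * B)
rhs = np.linalg.det(B) * (t - lam)**k * (t - 1/lam)**k
assert np.isclose(lhs, rhs)
```

Testing the determinant at a point avoids the numerical splitting of the defective double eigenvalues of $B^{-1}B^T$.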
Let $\mathbb F$ be an
algebraically closed
field with nonidentity
involution. Fix
$i\in\mathbb F$ such
that $i^2=-1$. It is
known (see Lemma
\ref{l_real}) that
each element of
$\mathbb F$ is
uniquely representable
in the form $a+bi$
with $a,b$ in $
\mathbb P:=\{\lambda
\in\mathbb F\,|\,\bar
\lambda=\lambda\}$,
and the involution on
$\mathbb F$ is
``complex
conjugation'':
$\overline{a+bi}=a-bi$.
Moreover, $\mathbb P$
is ordered and
$a^2+b^2$ has a unique
positive real root,
which is called the
\emph{modulus} of
$a+bi$ and is denoted
by $|a+bi|$.
\begin{theorem}
\label{t_matr1}
{\rm(a)} Each square
matrix $A$ over an
algebraically closed
field $\mathbb F$ with
nonidentity involution
is *congruent to a
direct sum, determined
uniquely up to
permutation of
summands, of
tridiagonal matrices
of two types:
\begin{equation}\label{cmi1}
\begin{bmatrix}
0&1 &&&0\\
\lambda&0&1 \\
&\lambda &0&\ddots&\\
&&\ddots&\ddots&1 \\
0&&&\lambda &0
\end{bmatrix}_{n},\quad
\begin{matrix}
\lambda\in\mathbb F,\
|\lambda| \ne 1,\\
\text{each nonzero
$\lambda$ is
determined}\\
\text{ up to
replacement by
$\bar\lambda^{-1}$,}\\
\text{$\lambda= 0$ if
$n$
is odd}\\
\end{matrix}
\end{equation}
$($one can take
$|\lambda|<1$ if $n$
is even$)$; and
\begin{equation}\label{cmi2}
\mu
\begin{bmatrix}
1 &1&&&&0\\
-1&0&1&\\
&1&0&1&\\
&&-1&0&1&\\
&&&1&0&\ddots&\\
0&&&& \ddots&\ddots
\end{bmatrix}_{n},
\qquad
\mu\in\mathbb F,\
|\mu |=1.
\end{equation}
{\rm(b)} The Kronecker
canonical form of
$(A^*,A)$ under
equivalence determines
the direct sum
asserted in {\rm(a)}
uniquely up to
permutation of
summands and
multiplication of any
direct summand of type
\eqref{cmi2} by $-1$.
For each direct
summand $B$ of type
\eqref{cmi1} or
\eqref{cmi2}, the
Kronecker canonical
form of $(B^*,B)$ is
given in the following
table:
\begin{equation}\label{tab2}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline $B$& Kronecker
canonical form of $
(B^*,B)$\\
\hline\hline Matrix
\eqref{cmi1}&
$\begin{array}{rl}
(F_k,G_k)\oplus
(F_k^T,G_k^T)
&\text{if $n=2k+1$}\\
(J_k(\bar\lambda),I_k)
\oplus
(I_k,J_k(\lambda))
&\text{if $n=2k$}
\end{array}$
\\\hline
Matrix \eqref{cmi2}&
$(I_n,J_n((-1)^{n+1}
\bar\mu^{-1}\mu)
)$\\\hline
\end{tabular}
\end{equation}
\end{theorem}
\section{Four lemmas}
\label{s1a}
In this section we
prove four lemmas that
we use in later
sections. In the first
lemma we collect known
results about
algebraically closed
fields with
involution; i.e., a
bijection $a\mapsto
\bar{a}$ satisfying
$\overline{a+b}=\bar
a+ \bar b$,
$\overline{ab}=\bar a
\bar b$ and $\bar{\bar
a}=a$.
\begin{lemma}\label{l_real}
Let\/ $\mathbb F$ be
an algebraically
closed field with
nonidentity involution
$\lambda\mapsto\bar\lambda$,
and let
\begin{equation}\label{123}
\mathbb
P:=\bigl\{\lambda
\in{\mathbb
F}\,\bigr|\,
\bar{\lambda }=\lambda
\bigr\}.
\end{equation}
Then $\mathbb F$ has
characteristic $0$,
\begin{equation}\label{1pp11}
\mathbb F={\mathbb
P}+{\mathbb P}i,\qquad
i^2=-1,
\end{equation}
and the involution has
the form
\begin{equation}\label{1ii}
\overline{a+bi}=a-bi,\qquad
a,b\in\mathbb P.
\end{equation}
Moreover, the field\/
${\mathbb P}$ has a
unique linear ordering
$\leqslant$ such that
\begin{equation*}\label{slr}
\text{$a>0$ and\,
$b>0$}
\quad\Longrightarrow\quad
\text{$a+b>0$ and\,
$ab>0$}.
\end{equation*}
The positive elements
of\/ $\mathbb P$ with
respect to this
ordering are the
squares of nonzero
elements. Every
algebraically closed
field of
characteristic $0$
possesses a
nonidentity
involution.
\end{lemma}
\begin{proof}
If $\mathbb F$ is an
algebraically closed
field with nonidentity
involution $\lambda
\mapsto \bar{\lambda
}$, then this
involution is an
automorphism of order
2. Hence ${\mathbb F}$
has degree $2$ over
the field ${\mathbb
P}$ defined in
\eqref{123}. By
Corollary 2 in
\cite[Chapter VIII, \S
9]{len}, $\mathbb P$
has characteristic $0$
and every element of
${\mathbb F}$ is
uniquely representable
in the form $a+bi$
with $a,b\in{\mathbb
P}$. Since the
involution is an
automorphism of
${\mathbb F}$,
$\bar{i}^2=-1$. So
$\bar{i}=-i$ and the
involution is
\eqref{1ii}. Due to
Proposition 3 in
\cite[Chapter XI, \S
2]{len}, $\mathbb P$
is a real closed
field, and so the
statements about the
ordering $\leqslant$ follow
from \cite[Chapter XI,
\S 2, Theorem 1]{len}.
By \cite[\S 82,
Theorem 7c]{wan},
every algebraically
closed field of
characteristic $0$
contains at least one
real closed subfield
and hence it can be
represented in the
form \eqref{1pp11} and
possesses the
involution
\eqref{1ii}.
\end{proof}
The canonical form
problem for pairs of
symmetric or
skew-symmetric
matrices under
congruence reduces to
the canonical form
problem for matrix
pairs under
equivalence due to the
following lemma, which
was proved in
\cite[\S\,95, Theorem
3]{mal} for complex
matrices. Roiter
\cite{roi} (see also
\cite{ser1dir,ser_izv})
extended this lemma to
arbitrary systems of
linear mappings and
bilinear forms over an
algebraically closed
field of
characteristic
different from $2$.
\begin{lemma}
\label{l_mal} Let $(A,
B)$ and $(A',B')$ be
given pairs of
$n\times n$ matrices
over an algebraically
closed field $\mathbb
F$ of characteristic
different from $2$.
Suppose that $A$ and
$A'$ are either both
symmetric or both
skew-symmetric, and
also that $B$ and $B'$
are either both
symmetric or both
skew-symmetric. Then
$(A, B)$ and $(A',B')$
are congruent if and
only if they are
equivalent.
\end{lemma}
\begin{proof}
If $(A, B)$ and
$(A',B')$ are
congruent then they
are equivalent.
Conversely, let $(A,
B)$ and $(A',B')$ be
equivalent; i.e.,
$R^TAS=A'$ and
$R^TBS=B'$ for some
nonsingular $R$ and
$S$. Then
\[
R^TAS=A' =\varepsilon
(A')^T=\varepsilon
S^TA^TR=S^TAR,
\]
in which $\varepsilon
=1$ if $A$ and $A'$
are symmetric and
$\varepsilon =-1$ if
$A$ and $A'$ are
skew-symmetric. Write
$M:=SR^{-1}$. Then
\[
AM=M^TA,\quad
AM^2=(M^T)^2A,\ \ldots
\]
and so $Af(M)=f(M)^TA$
for every polynomial
$f\in\mathbb F[x]$. If
there exists
$f\in\mathbb F[x]$
such that $f(M)^2=M$,
then for $N:=f(M)R$ we
have
\[
N^TAN=R^Tf(M)^TAf(M)R
=R^TAf(M)^2R=R^TAMR
=R^TAS=A'.
\]
Repeating the argument
for the matrix $B$, we
obtain $N^TBN=B'$.
Consequently, $(A, B)$
and $(A',B')$ are
congruent.
It remains to find
$f\in\mathbb F[x]$
such that $f(M)^2=M$.
Let
\[
(x-\lambda_1)^{k_1}\cdots
(x-\lambda_t)^{k_t},\qquad
\lambda_i\ne\lambda_j
\text{ if }i\ne j,
\]
be the characteristic
polynomial of $M$. We
can reduce $M$ to
Jordan canonical form
and obtain
\begin{equation*}\label{kmh}
M=J_1\oplus\dots
\oplus J_t,\qquad
J_i=\lambda_iI_{k_i}
+F_i,\quad
F_i^{k_i}=0.
\end{equation*}
For the polynomial
\[\varphi_i(x):
=\prod_{j\ne
i}(x-\lambda
_j)^{k_j}\] we have
\begin{equation}\label{hhg}
\varphi_i(M)
=0_{k_1+\dots+k_{i-1}}
\oplus\varphi_i(J_i)
\oplus
0_{k_{i+1}+\dots+k_t}
\end{equation}
($0_k$ denotes the
$k\times k$ zero
matrix). The field
$\mathbb F$ is
algebraically closed
of characteristic not
$2$, all $\lambda_i$
and $\varphi_i(\lambda
_i)$ are nonzero, so
for each $i=1,\dots,t$
there exist
polynomials $\psi_i,
\tau_i\in\mathbb F[x]$
such that
\begin{equation*}\label{kjh}
\psi_i(x)^2\equiv
\lambda _i+x,\quad
\varphi_i(\lambda
_i+x)\tau_i(x)\equiv
\psi_i(x) \mod x^{k_i}
\end{equation*}
(the coefficients of
$\psi_i$ and $\tau_i$
are determined
successively from
these congruences).
Then
$f(x):=\sum_i\varphi
_i(x)\tau_i(x-\lambda
_i)$ is the required
polynomial. Indeed, by
\eqref{hhg}
\[
f(M)=
\bigoplus_i\varphi
_i(J_i)\tau_i(J_i-\lambda
_iI_{k_i})=\bigoplus_i\varphi
_i(\lambda
_iI_{k_i}+F_i)\tau_i(F_i)=
\bigoplus_i\psi_i(F_i)
\]
and so
$$
f(M)^2=\bigoplus_i
\psi_i(F_i)^2=
\bigoplus_i (\lambda_i
I_{k_i}+F_i)=
\bigoplus_i J_i=M.
\eqno\qedhere
$$
\end{proof}
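The heart of this proof is the polynomial square root $f(M)^2=M$. A tiny worked instance (ours, in exact rational arithmetic): for a single Jordan block $M=J_2(4)$, the truncated Taylor series of $\sqrt{x}$ at $x=4$, namely $f(x)=2+(x-4)/4$, already satisfies $f(M)^2=M$.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Exact matrix product of square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# One Jordan block M = J_2(4); the polynomial f(x) = 2 + (x - 4)/4 is the
# degree-1 truncation of the Taylor series of sqrt(x) at 4.
M = [[F(4), F(1)], [F(0), F(4)]]
fM = [[F(2) * (i == j) + (M[i][j] - F(4) * (i == j)) / F(4)
       for j in range(2)] for i in range(2)]

assert matmul(fM, fM) == M   # f(M)^2 = M, exactly
```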
For each matrix of the
form
\begin{equation}
\label{vdv}
A=\begin{bmatrix}
\varepsilon &a_1&&&&0\\
a'_1&0&b_1\\
&b'_1&0&a_2\\
&&a'_2&0&b_2\\
&&&b'_2&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{n},
\end{equation}
define
\begin{equation*}
\label{vdv1} {\cal
P}(A):=\begin{bmatrix}
b_k &a'_{k}&&&&&0\\
&\ddots&\ddots\\
&&b_1&a'_1\\
&&&\varepsilon&a_1 \\
&&&&b'_1&\ddots\\
&&&&&\ddots&a_{k} \\
0&&&&&&b'_k
\end{bmatrix}_{2k+1}
\quad
\text{if $n=2k+1$}
\end{equation*}
and
\begin{equation*}
\label{vdv2} {\cal
P}(A):=\begin{bmatrix}
a_k &b'_{k-1}&&&&&&0\\
&a_{k-1}&\ddots\\
&&\ddots&b'_1\\
&&&a_1&\varepsilon \\
&&&&a'_1&b_1\\
&&&&&a'_2&\ddots\\
&&&&&&\ddots&b_{k-1} \\
0&&&&&&&a'_k
\end{bmatrix}_{2k}\quad
\text{if $n=2k$.}
\end{equation*}
\begin{lemma}
\label{l_per} Every
pair $(A,B)$ of
$n\times n$ matrices
of the form
\eqref{vdv} is
equivalent to $({\cal
P}(A),{\cal P}(B))$.
\end{lemma}
\begin{proof}
If $n=2k+1$, then we
rearrange rows
$1,2,\dots,2k+1$ in
$A$ and in $B$ as
follows:
\[
2k,\ 2k-2,\ \dots,\
2,\ 1,\ 3,\ \dots,\
2k-1,\ 2k+1,
\]
and their columns in
the inverse order:
\[
2k+1,\ 2k-1,\ \dots,\
3,\ 1,\ 2,\ \dots,\
2k-2,\ 2k.
\]
If $n=2k$, then we
rearrange the rows of
$A$ and $B$ as
follows:
\[
2k-1,\ 2k-3,\ \dots,\
3,\ 1,\ 2,\ \dots,\
2k-2,\ 2k,
\]
and their columns in
the inverse order:
\[
2k,\ 2k-2,\ \dots,\
2,\ 1,\ 3,\ \dots,\
2k-3,\ 2k-1.
\]
The pair that we
obtain is $({\cal
P}(A),{\cal P}(B))$.
\end{proof}
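The rearrangement in this proof can be carried out mechanically. The sketch below (ours, not from the paper) applies the stated row and column orders for $n=2k+1$ with $k=2$ to a matrix of the form \eqref{vdv} with distinct placeholder entries, and recovers exactly the upper-bidiagonal matrix ${\cal P}(A)$ displayed above:

```python
import numpy as np

# Placeholder entries of a 5x5 matrix of form (vdv): eps, a_i, a'_i, b_i, b'_i.
eps, a1, A1, b1, B1, a2, A2, b2, B2 = 10, 1, 2, 3, 4, 5, 6, 7, 8
A = np.array([[eps, a1, 0, 0, 0],
              [A1, 0, b1, 0, 0],
              [0, B1, 0, a2, 0],
              [0, 0, A2, 0, b2],
              [0, 0, 0, B2, 0]])

rows = [3, 1, 0, 2, 4]   # rows 4, 2, 1, 3, 5 of the proof (0-based indices)
cols = [4, 2, 0, 1, 3]   # columns 5, 3, 1, 2, 4
P = A[np.ix_(rows, cols)]

# P(A) for n = 2k+1: diagonal (b_k,...,b_1, eps, b'_1,...,b'_k),
# superdiagonal (a'_k,...,a'_1, a_1,...,a_k).
expected = np.array([[b2, A2, 0, 0, 0],
                     [0, b1, A1, 0, 0],
                     [0, 0, eps, a1, 0],
                     [0, 0, 0, B1, a2],
                     [0, 0, 0, 0, B2]])
assert (P == expected).all()
```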
For a sign $\sigma
\in\{+,-\}$ and a
nonnegative integer
$k$, define the
$2k$-by-$2k$ matrix
\begin{equation*}\label{ndyd}
M^{\sigma }_k
:=\begin{bmatrix}
0 & 1 \\
\sigma
1 & 0
\end{bmatrix}\oplus\dots
\oplus \begin{bmatrix}
0 & 1\\
\sigma
1 & 0
\end{bmatrix}\qquad
\text{($k$ summands).}
\end{equation*}
Thus, $M^{\sigma }_0$
is $0$-by-$0$.
\begin{lemma}
\label{l_ge} Let
$\sigma ,\tau
\in\{+,-\}$ and
$k\in\mathbb N$. Then
the following pairs
are equivalent:
\begin{align}
\label{2}
(0_1\oplus M^{\sigma
}_{k},\: M^{\tau}
_{k}\oplus
0_1)&\approx
(F_{k},G_{k})
\oplus
(F_{k}^T,G_{k}^T),
\\
\label{1} (I_1\oplus
M^{\sigma }_{k},\:
M^{\tau} _{k}\oplus
0_1)&\approx
(I_{2k+1},J_{2k+1}(0)),
\\ \label{4}
(0_1\oplus
M^{\sigma}_{k-1}\oplus
0_1,\:
M^{\tau}_{k})&\approx
(J_k(0),I_k)\oplus
(J_k(0),I_k),
\\ \label{3}
(I_1\oplus M^{\sigma
}_{k-1}\oplus 0_1,\:
M^{\tau}_{k})&\approx
(J_{2k}(0),I_{2k}).
\end{align}
\end{lemma}
\begin{proof}
Let $\varepsilon
\in\{0,1\}.$ By Lemma
\ref{l_per},
\begin{equation*}\label{jdg}
([\varepsilon]\oplus
M^{\sigma }_{k},
M^{\tau }_{k}\oplus
0_1)\approx (I_{k}
\oplus[\varepsilon]
\oplus I_{k},\
J_{2k+1}(0)),
\end{equation*}
which proves \eqref{2}
and \eqref{1}, and
\[
([\varepsilon]\oplus
M^{\sigma
}_{k-1}\oplus 0_1,
M^{\tau}_{k})\approx
\left(\begin{bmatrix}
0&1&&&&0\\
&0&\cdot&&&\\
&&\cdot&\varepsilon &&\\
&&&\cdot&\cdot&
\\
&&&&\cdot&1\\
0&&&&&0
\end{bmatrix},\
\begin{bmatrix}
1&&&&&0\\
&1&&&&\\
&&\cdot&&&\\
&&&\cdot&&
\\
&&&&\cdot&\\
0&&&&&1
\end{bmatrix}\right),
\] which proves
\eqref{4} and
\eqref{3}.
\end{proof}
\section{Pairs of
symmetric matrices}
\label{s1}
In this section, we
give two tridiagonal
canonical forms of
pairs of symmetric
matrices under
congruence.
\subsection{First
canonical form}
\label{sub1}
\begin{theorem}\label{th1a}
{\rm(a)} Over an
algebraically closed
field $\mathbb F$ of
characteristic
different from $2$,
every pair $(A,B)$ of
symmetric matrices of
the same size is
congruent to a direct
sum, determined
uniquely up to
permutation of
summands, of
tridiagonal pairs of
two types:
\begin{equation}\label{sss1n}
\left(\begin{bmatrix}
0 &1&&&&0\\
1&0&0\\
&0&0&1\\
&&1&0&0\\
&&&0&0&\ddots\\
0&&&&\ddots&\ddots\\
\end{bmatrix}_{n},\
\begin{bmatrix}
\varepsilon &
\lambda &&&&0\\
\lambda &0&1\\
&1&0&\lambda \\
&&\lambda &0&1\\
&&&1 &0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{n}\right),
\ \
\begin{matrix}
\lambda\in\mathbb F,\\
\varepsilon
\in\{0,1\},
\end{matrix}
\end{equation}
in which
$\varepsilon=1$ if $n$
is even and
$\lambda=0$ if $n$ is
odd; and
\begin{equation}\label{ss2n}
\left(\begin{bmatrix}
1&0&&&&0\\
0&0 &1&&&\\
&1&0&0\\
&&0&0&1\\
&&&1&0&\ddots\\
0&&&&\ddots&\ddots \\
\end{bmatrix}_{n},\
\begin{bmatrix}
\lambda &1&&&&0\\
1&0&\lambda &&&\\
&\lambda &0&1\\
&&1 &0&\lambda\\
&&&\lambda&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{n}\right),
\end{equation}
in which $\lambda=0$
if $n$ is even and
$\lambda \in\mathbb F$
if $n$ is odd.
{\rm(b)} This direct
sum is determined
uniquely up to
permutation of
summands by the
Kronecker canonical
form of $(A,B)$ under
equivalence. The
Kronecker canonical
form of each of the
direct summands is
given in the following
table:
\begin{equation}\label{tab3}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline
Pair& Kronecker
canonical form of
the pair
\\
\hline\hline
\eqref{sss1n}&
$\begin{array}{rl}
(F_k,G_k)
\oplus
(F_k^T,G_k^T)
&\text{if $n=2k+1$ and
$\varepsilon=0$}\\
(J_{n}(0),I_{n})
&\text{if $n$ is odd
and
$\varepsilon=1$}\\
(I_{n},J_{n}(\lambda))
&\text{if $n$ is even}
\end{array}$
\\\hline
\eqref{ss2n}&
$\begin{array}{rl}
(I_{n},
J_{n}(\lambda))
&\text{if $n$ is odd}
\\
(J_{n}(0),I_{n})
&\text{if $n$ is even}
\end{array}$\\\hline
\end{tabular}
\end{equation}
\end{theorem}
\begin{proof}
Let the Kronecker
canonical form of
$(A,B)$ be
\begin{equation*}\label{ksc}
\bigoplus_i
(I_{m_i},J_{m_i}
(\lambda_i))
\oplus\bigoplus_j
(J_{n_j}(0),I_{n_j})
\oplus
\bigoplus_l
(F_{s_l},G_{s_l})
\oplus
\bigoplus_r
(F_{t_r}^T,G_{t_r}^T).
\end{equation*}
Since $A$ and $B$ are
symmetric,
\[
(A,B)\approx\bigoplus_i
(I_{m_i},J_{m_i}
(\lambda_i))
\oplus\bigoplus_j
(J_{n_j}(0),I_{n_j})
\oplus
\bigoplus_l
(F_{s_l}^T,G_{s_l}^T)
\oplus
\bigoplus_r
(F_{t_r},G_{t_r}).
\]
Thus, we can make
$s_1=t_1,\
s_2=t_2,\dots$ by
reindexing $\{t_r\}$,
and obtain that the
Kronecker canonical
form of $(A,B)$ is
\begin{equation}\label{kscw}
\bigoplus_i
(I_{m_i},J_{m_i}
(\lambda_i))
\oplus\bigoplus_j
(J_{n_j}(0),I_{n_j})
\oplus
\bigoplus_l
\Big((F_{s_l},G_{s_l})
\oplus
(F_{s_l}^T,G_{s_l}^T)\Big).
\end{equation}
This sum is determined
by $(A,B)$ uniquely up
to permutation of
summands. In view of
Lemma \ref{l_mal}, it
remains to prove
\eqref{tab3}.
The pair \eqref{sss1n}
with $n=2k+1$ and
$\varepsilon=0$ has
the form
$(M^+_{k}\oplus
0_1,0_1\oplus
M^+_{k})$; by
\eqref{2} it is
equivalent to
$(F_k,G_k)
\oplus
(F_k^T,G_k^T)$.
The pair \eqref{sss1n}
with $n=2k+1$ and
$\varepsilon=1$ has
the form $
(M^+_{k}\oplus
0_1,\:I_1\oplus
M^+_{k})$; by
\eqref{1} it is
equivalent to
$(J_{2k+1}(0),I_{2k+1})$.
The pair \eqref{sss1n}
with $n=2k$ has the
form $(M^+_k,\:\lambda
M^+_k+(I_1\oplus
M^+_{k-1}\oplus
0_1))$; it is
equivalent to
$(I_{2k}, \lambda
I_{2k}+ J_{2k}(0))
=(I_{2k},J_{2k}(\lambda))$
since \eqref{3}
ensures that
\begin{equation}\label{jdd1}
(M^+_k,\: I_1\oplus
M^+_{k-1}\oplus
0_1)\approx
(I_{2k},J_{2k}(0)).
\end{equation}
The pair \eqref{ss2n}
with $n=2k+1$ has the
form $(I_1\oplus
M^+_{k},\:\lambda
(I_1\oplus M^+_{k})+
(M^+_{k}\oplus 0_1))$;
by \eqref{1} it is
equivalent to
$(I_{2k+1},
J_{2k+1}(\lambda))$.
The pair \eqref{ss2n}
with $n=2k$ has the
form $(I_1\oplus
M^+_{k-1}\oplus
0_1,\:M^+_k)$; by
\eqref{3} it is
equivalent to
$(J_{2k}(0),I_{2k})$.
\end{proof}
\subsection{Second
canonical form}
\label{sub2}
In this section, we
give another
tridiagonal canonical
form of pairs of
symmetric matrices for
congruence. This form
is not a direct sum of
tridiagonal matrices
of the form
\eqref{kdu}. It is
based on the tridiagonal canonical form of symmetric matrices for orthogonal similarity due to \DJ okovi\'{c} and Zhao \cite{dok-zhao_tridiag}
and resembles the
Kronecker canonical
form of matrix pairs
for equivalence.
For each positive
integer $n$, let $N_n$
denote any fixed
$n\times n$
tridiagonal symmetric
matrix over $\mathbb
F$ that is similar to
$J_n(0)$. Following
\cite[p.\,79]{dok-zhao_tridiag},
we can take as $N_n$
the value
$N(a_1,\dots,a_n,b)$
of the polynomial
matrix
\begin{equation*}\label{kgk}
N(x_1,\dots,x_n,y):=\begin{bmatrix}
x_1&y&&0\\
y&x_2&\ddots&\\
&\ddots&\ddots&y\\
0&&y&x_n
\end{bmatrix}
\end{equation*}
at any nonzero
solution
$(a_1,\dots,a_n,b)\in
\mathbb F^{n+1}$ of
the system
\[
c_1(x_1,\dots,x_n,y)=0,\
\dots,\
c_n(x_1,\dots,x_n,y)=0
\]
of equations whose
left parts are the
coefficients of the
characteristic
polynomial
$t^n+c_1t^{n-1}+
\dots+c_n$ of
$N(x_1,\dots,x_n,y)$.
Then $0$ is the only
eigenvalue of $N_n$,
$b\ne 0$, $\operatorname{rank} N_n=n-1$, and $N_n$ is
similar to $J_n(0)$.
If $\mathbb F$ has characteristic $0$,
then
\cite[p.\,81]{dok-zhao_tridiag}
ensures that we can
also take
\begin{equation*}\label{luf}
N_n=\begin{bmatrix}
n-1&id_1&&&0\\
id_1&n-3&id_2&&\\
&id_2&n-5&\ddots&\\
&&\ddots&\ddots&id_{n-1}\\
0&&&id_{n-1}&1-n
\end{bmatrix},\quad
\begin{matrix}
d_l:=\sqrt{l(n-l)},\\
i^2=-1.
\end{matrix}
\end{equation*}
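For characteristic $0$ this explicit $N_n$ can be checked directly. The sketch below (ours, not from the paper) verifies for $n=3$ that the matrix is nilpotent of rank $n-1$, hence similar to $J_n(0)$:

```python
import numpy as np

# Explicit tridiagonal N_n in characteristic 0, here n = 3:
# diagonal (n-1, n-3, ..., 1-n), off-diagonal entries i*d_l, d_l = sqrt(l(n-l)).
n = 3
d = [np.sqrt(l * (n - l)) for l in range(1, n)]
N = np.diag([n - 1 - 2 * l for l in range(n)]).astype(complex)
for l in range(n - 1):
    N[l, l + 1] = N[l + 1, l] = 1j * d[l]

# N_n is symmetric and similar to J_n(0): nilpotent with a single Jordan block.
assert np.allclose(np.linalg.matrix_power(N, n), 0)
assert np.linalg.matrix_rank(N) == n - 1
```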
\begin{theorem}\label{th1b}
{\rm(a)} Over an algebraically
closed field $\mathbb
F$ of characteristic
different from $2$,
every pair $(A,B)$ of
symmetric matrices of
the same size is
congruent to a direct
sum, determined
uniquely up to
permutation of
summands, of
tridiagonal pairs of
three types:
\begin{equation}\label{lyg}
(I_n,\lambda I_n+N_n)
\text{ with $\lambda
\in\mathbb F$};\qquad
(N_n, I_n);
\end{equation}
and
\begin{equation}\label{ssnew}
\left(\begin{bmatrix}
0&1&&&&0\\
1&0 &0&&&\\
&0&0&1\\
&&1&0&0\\
&&&0&0&\ddots\\
0&&&&\ddots&\ddots \\
\end{bmatrix}_{2k+1},\
\begin{bmatrix}
0 &0&&&&0\\
0&0&1 &&&\\
&1 &0&0\\
&&0 &0&1\\
&&&1&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{2k+1}\right).
\end{equation}
{\rm(b)} This direct
sum is determined
uniquely up to
permutation of
summands by the
Kronecker canonical
form of $(A,B)$ for
equivalence. The
Kronecker canonical
form of each of the
direct summands is
given in the following
table:
\begin{equation}\label{tab4}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline
Pair& Kronecker
canonical form of the
pair
\\
\hline\hline
$(I_n,\lambda
I_n+N_n)$&$(I_{n},J_{n}
(\lambda))$\\\hline
$(N_n,
I_n)$&$(J_{n}(0),I_{n})$
\\\hline
\eqref{ssnew}& $
(F_{k},G_{k})
\oplus
(F_{k}^T,G_{k}^T)$\\\hline
\end{tabular}
\end{equation}
\end{theorem}
\begin{proof}
In view of
\eqref{kscw} and Lemma
\ref{l_mal}, it
suffices to prove
\eqref{tab4}. The
equivalences
\[
(I_{n},\lambda
I_{n}+N_{n}) \approx
(I_{n},J_{n}
(\lambda))\ \text{ and
}\ (N_{n}, I_{n})
\approx
(J_{n}(0),I_{n})
\]
are valid since $N_n$
is similar to
$J_n(0)$. The pair
\eqref{ssnew} is
\eqref{sss1n} with
$n=2k+1$ and
$\varepsilon=0$; by
\eqref{tab3} it is
equivalent to $
(F_{k},G_{k})
\oplus
(F_{k}^T,G_{k}^T)$.
\end{proof}
\section{Pairs of
matrices, in which the
first is symmetric and
the second is
skew-symmetric}
\label{s2}
\begin{theorem}\label{t_sc}
{\rm(a)} Over an algebraically
closed field $\mathbb
F$ of characteristic
different from $2$,
every pair $(A,B)$ of
matrices of the same
size, in which $A$ is
symmetric and $B$ is
skew-symmetric, is
congruent to a direct
sum, determined
uniquely up to
permutation of
summands, of
tridiagonal pairs of
three types:
\begin{equation}\label{sc1}
\left(\begin{bmatrix}
0&1&&&0\\
1&0&1\\
&1&0&\ddots
\\
&&\ddots&\ddots&1\\
0&&&1&0
\end{bmatrix}_{2k},\
\begin{bmatrix}
0&\lambda&&&0\\
-\lambda&0&\lambda\\
&-\lambda&0&\ddots
\\
&&\ddots&\ddots&\lambda\\
0&&&-\lambda&0
\end{bmatrix}_{2k}\right),
\quad
\begin{matrix}
\lambda\in\mathbb F,\\
\lambda \ne 0,
\end{matrix}
\end{equation}
in which $\lambda$ is
determined up to
replacement by
$-\lambda$;
\begin{equation}\label{sc2}
\left(\begin{bmatrix}
\varepsilon &0&&&&0\\
0&0&1\\
&1&0&0\\
&&0&0&1\\
&&&1&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n,\
\begin{bmatrix}
0&1&&&&0\\
-1&0&0\\
&0&0&1\\
&&-1&0&0\\
&&&0&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n\right),
\end{equation}
in which $\varepsilon
=1$ if $n$ is a
multiple of $4$, and
$\varepsilon
\in\{0,1\}$ otherwise;
and
\begin{equation}\label{sc3}
\left(
\begin{bmatrix}
0 &1&&&&0\\
1&0&0\\
&0&0&1\\
&&1&0&0\\
&&&0&0&\ddots\\
0&&&&\ddots&\ddots\\
\end{bmatrix}_{4k},\
\begin{bmatrix}
0&0&&&&0\\
0&0&1\\
&-1&0&0\\
&&0&0&1\\
&&&-1&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{4k}\right).
\end{equation}
{\rm(b)} This direct
sum is determined
uniquely up to
permutation of
summands by the
Kronecker canonical
form of $(A,B)$ under
equivalence. The
Kronecker canonical
form of each of the
direct summands is
given in the following
table:
\begin{equation}\label{tab5}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline
Pair& Kronecker
canonical form of the
pair
\\
\hline\hline
\eqref{sc1}&$(I_k,J_k(\lambda
))\oplus
(I_k,J_k(-\lambda))$
with $\lambda \ne 0$
\\\hline
\eqref{sc2} with
$\varepsilon=0$
&
$\begin{array}{rl}
(F_k,G_k)
\oplus
(F_k^T,G_k^T)
&\text{if
$n=2k+1$}\\
(J_k(0),I_k)\oplus
(J_k(0),I_k) &\text{if
$n=2k$ $(k$ is
odd$)$}
\end{array}$\\\hline
\eqref{sc2} with
$\varepsilon=1$
&
$\begin{array}{rl}
(I_n,J_n(0)) &\text{if
$n$ is odd}\\
(J_n(0),I_n) &\text{if
$n$ is even}
\end{array}$\\\hline
\eqref{sc3}&
$(I_{2k},J_{2k}(0))\oplus
(I_{2k},J_{2k}(0))$\\\hline
\end{tabular}
\end{equation}
\end{theorem}
\begin{proof}
The Kronecker
canonical form of
$(A,B)$ is a direct
sum of pairs of the
types:
\begin{itemize}
\item[\rm(i)]
$(I_k,J_k(\lambda
))\oplus
(I_k,J_k(-\lambda))$,
in which $\lambda\ne
0$ if $k$ is odd,
\item[\rm(ii)]
$(I_n,J_n(0))$ with
odd $n$,
\item[\rm(iii)]
$(J_k(0),I_k)\oplus
(J_k(0),I_k)$ with
odd $k$,
\item[\rm(iv)]
$(J_n(0),I_n)$ with
even $n$,
\item[\rm(v)]
$(F_k,G_k)
\oplus
(F_k^T,G_k^T)$.
\end{itemize}
This statement was
proved in
\cite[Section 4]{thom}
for pairs of complex
matrices and goes back
to Kronecker's 1874
paper; see the
historical remark at
the end of Section 4
in \cite{thom}. The
proof remains valid
for matrix pairs over
$\mathbb F$ (or see
\cite[Theorem
4]{ser_izv}).
In view of Lemma
\ref{l_mal}, it
suffices to prove
\eqref{tab5}.
By Lemma \ref{l_per},
\eqref{sc1} is
equivalent to
\[
(J_k(1),-\lambda
J_k(-1))\oplus
(J_k(1),\lambda
J_k(-1)),
\]
which is equivalent to
(i) with $\lambda\ne
0$.
The pair \eqref{sc2}
with $n=2k+1$ has the
form $([\varepsilon]
\oplus M^+_{k},
M^-_{k}\oplus 0_1)$;
by \eqref{2} and
\eqref{1} this pair is
equivalent to (v) if
$\varepsilon=0$ or
(ii) if
$\varepsilon=1$.
The pair \eqref{sc2}
with $n=2k$ has the
form
\begin{equation}\label{hdt}
([\varepsilon]\oplus
M^+_{k-1}\oplus 0_1,
M^-_{k}),
\end{equation}
in which $\varepsilon
\in\{0,1\}$ if $k$ is
odd and $\varepsilon
=1$ if $k$ is even.
Due to \eqref{4} and
\eqref{3}, \eqref{hdt}
is equivalent to (iii)
if $\varepsilon=0$ or
to (iv) if
$\varepsilon=1$.
The pair \eqref{sc3}
has the form
$(M^+_{2k}, 0_1\oplus
M^-_{2k-1}\oplus
0_1)$, and by
\eqref{4} it is
equivalent to (i) with
$\lambda =0$ and $k$
replaced by $2k$.
\end{proof}
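The first row of table \eqref{tab5} also admits a numerical sanity check (our addition, not in the paper): equivalence of \eqref{sc1} to $(I_k,J_k(\lambda))\oplus(I_k,J_k(-\lambda))$ forces the pencil determinant identity $\det(B-tA)=\det(A)\,(t-\lambda)^k(t+\lambda)^k$, which the sketch below tests at a sample point:

```python
import numpy as np

k, lam = 2, 2.0
n = 2 * k
A = np.zeros((n, n)); B = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0          # symmetric matrix of the pair (sc1)
    B[i, i + 1], B[i + 1, i] = lam, -lam     # skew-symmetric matrix of the pair

# Kronecker form (I_k, J_k(lam)) + (I_k, J_k(-lam)) forces
# det(B - t A) = det(A) (t - lam)^k (t + lam)^k for every t.
t = 1.0
lhs = np.linalg.det(B - t * A)
rhs = np.linalg.det(A) * (t - lam)**k * (t + lam)**k
assert np.isclose(lhs, rhs)
```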
\section{Pairs of
skew-symmetric
matrices} \label{ss1}
\begin{theorem}\label{t_skew}
{\rm(a)} Over an algebraically
closed field $\mathbb
F$ of characteristic
different from $2$,
every pair $(A,B)$ of
skew-symmetric
matrices of the same
size is congruent to a
direct sum, determined
uniquely up to
permutation of
summands, of
tridiagonal pairs of
two types:
\begin{equation}\label{cc1}
\left(
\begin{bmatrix}
0&1&&&&0\\
-1&0&0\\
&0&0&1\\
&&-1&0&0\\
&&&0&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{2k},\
\begin{bmatrix}
0&\lambda &&&&0\\
-\lambda &0&1\\
&-1&0&\lambda\\
&&-\lambda&0&1\\
&&&-1 &0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_{2k}\right)
\end{equation}
and
\begin{equation}\label{cc23}
\left(\begin{bmatrix}
0 &0&&&&0\\
0&0&1\\
&-1&0&0\\
&&0&0&1\\
&&&-1&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n,\
\begin{bmatrix}
0&1&&&&0\\
-1&0&0\\
&0&0&1\\
&&-1&0&0\\
&&&0&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n\right)
\end{equation}
in which
$k,n\in\mathbb N$ and
$\lambda \in\mathbb
F$.
{\rm(b)} This direct
sum is determined
uniquely up to
permutation of
summands by the
Kronecker canonical
form of $(A,B)$ under
equivalence. The
Kronecker canonical
form of each of the
direct summands is
given in the following
table:
\begin{equation}\label{tab6}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline
Pair& Kronecker
canonical form of the
pair
\\
\hline\hline
\eqref{cc1}&$(I_k,J_k(\lambda
))\oplus
(I_k,J_k(\lambda))$
\\\hline
\eqref{cc23}
&
$\begin{array}{rl}
(F_k,G_k)
\oplus
(F_k^T,G_k^T)
&\text{if
$n=2k+1$}\\
(J_k(0),I_k)\oplus
(J_k(0),I_k) &\text{if
$n=2k$}
\end{array}$\\\hline
\end{tabular}
\end{equation}
\end{theorem}
\begin{proof}
The Kronecker
canonical form of
$(A,B)$ under
equivalence is a
direct sum of pairs of
three types:
\begin{gather*}\label{ksqw}
(I_k,J_k(\lambda))\oplus
(I_k,J_k(\lambda)),\qquad
(J_k(0),I_k)\oplus
(J_k(0),I_k),
\\
(F_k,G_k)
\oplus
(F_k^T,G_k^T).
\end{gather*}
This statement was
proved in
\cite[Section 4]{thom}
for pairs of complex
matrices, but the
proof remains valid
for pairs over
$\mathbb F$ (or see
\cite[Theorem
4]{ser_izv}). In view
of Lemma \ref{l_mal},
it suffices to prove
\eqref{tab6}.
The pair \eqref{cc1}
has the form
$(M^-_k,\lambda
M^-_k+(0_1\oplus
M^-_{k-1}\oplus 0_1))$
and by \eqref{4} it is
equivalent to
\[
(I_{k},\lambda I_k+
J_{k} (0))\oplus
(I_{k},\lambda I_k+
J_{k} (0)) =
(I_{k},J_{k} (\lambda
))\oplus (I_{k},J_{k}
(\lambda )).
\]
The pair \eqref{cc23}
with $n=2k+1$ has the
form $ (0_1\oplus
M^-_{k},M^-_{k}\oplus
0_1)$; by \eqref{2} it
is equivalent to
$(F_{k},G_{k})
\oplus
(F_{k}^T,G_{k}^T)$.
The pair \eqref{cc23}
with $n=2k$ has the
form $(0_1\oplus
M^-_{k-1}\oplus
0_1,M^-_k)$; by
\eqref{4} it is
equivalent to
$(J_{k}(0),I_{k})\oplus
(J_{k}(0),I_{k})$.
\end{proof}
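The first row of table \eqref{tab6} can also be sanity-checked numerically (this is an illustration, not part of the proof): the first matrix $A$ of the pair \eqref{cc1} is invertible, so the Kronecker canonical form is determined by the Jordan structure of $A^{-1}B$, which should be $J_k(\lambda)\oplus J_k(\lambda)$, i.e., $A^{-1}B-\lambda I$ should have rank $2k-2$ and nilpotency index $k$. A short sketch, assuming NumPy:

```python
import numpy as np

def cc1_pair(k, lam):
    # Pair (cc1): A has blocks [[0,1],[-1,0]] on the diagonal; the
    # superdiagonal of B alternates lam, 1, lam, 1, ... (skew-symmetrically).
    n = 2 * k
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    for j in range(0, n, 2):
        A[j, j + 1], A[j + 1, j] = 1.0, -1.0
    for j in range(n - 1):
        val = lam if j % 2 == 0 else 1.0
        B[j, j + 1], B[j + 1, j] = val, -val
    return A, B

k, lam = 3, 2.5
A, B = cc1_pair(k, lam)
M = np.linalg.inv(A) @ B          # A is invertible, so (A,B) ~ (I, A^{-1}B)
D = M - lam * np.eye(2 * k)
assert np.linalg.matrix_rank(D) == 2 * k - 2           # two Jordan blocks ...
assert np.allclose(np.linalg.matrix_power(D, k), 0)    # ... of size k each
assert not np.allclose(np.linalg.matrix_power(D, k - 1), 0)
```

The rank and nilpotency conditions together force the Jordan form $J_k(\lambda)\oplus J_k(\lambda)$, matching the table.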
\section{Matrices with
respect to congruence}
\label{s_pr}
In this section we
prove Theorem
\ref{t_matr}.
(a) Each square matrix
$A$ can be expressed
uniquely as the sum of
a symmetric and a
skew-symmetric matrix:
\begin{equation*}\label{pair16}
A=A_{\text{sym}}
+A_{\text{sk}},\qquad
A_{\text{sym}}:
=\frac{A+A^{T}}2
,\quad
A_{\text{sk}}:=\frac{A-A^{T}}2.
\end{equation*}
Two matrices $A$ and
$B$ are congruent if
and only if the
corresponding pairs
$(A_{\text{sym}},
A_{\text{sk}})$ and
$(B_{\text{sym}},
B_{\text{sk}})$ are
congruent. Therefore,
adding the first and
the second matrices in
each of the canonical
pairs from Theorem
\ref{t_sc} gives three
types of canonical
matrices for
congruence:
\begin{equation}\label{cm1a}
\begin{bmatrix}
0&1+\mu &&0\\
1-\mu &\ddots&\ddots&\\
&\ddots&0&1
+\mu \\
0&&1-\mu &0
\end{bmatrix}_{2k},\qquad
\begin{matrix}
\text{$\mu\ne 0,$}\\
\text{$\mu$ is
determined up}\\
\text{to replacement
by $-\mu$};
\end{matrix}
\end{equation}
\eqref{cm2}; and
\eqref{cm3}. We can
assume that $\mu \ne
-1$ because the
congruence
transformation
\begin{equation}\label{hfe}
X\mapsto S^TXS,\qquad
S:=\begin{bmatrix}0&&1\\
&\ddd&\\1&&0
\end{bmatrix},
\end{equation}
maps \eqref{cm1a} with
$\mu= -1$ into
\eqref{cm1a} with
$\mu= 1$. If we
multiply all the odd
columns and rows of
\eqref{cm1a} by
$(1+\mu)^{-1}$ (this
is a transformation of
congruence), we obtain
\eqref{cm1} with
\begin{equation}\label{hvf}
\lambda=\frac{1-\mu}
{1+\mu}.
\end{equation}
The parameter $\mu$ is
determined up to
replacement by $-\mu$,
so each $\lambda\ne 0$
is determined up to
replacement by
$\lambda^{-1}$,
whereas $\lambda=0$ is
determined uniquely
since it corresponds
to $\mu = 1$ and we
assume that $\mu \ne
-1$. We have
$\lambda\ne 1$ because
$\mu \ne 0$, and
$\lambda\ne -1$ because
$\mu-1\ne \mu+1$. The
parameter $\lambda$ is
an arbitrary element
of $\mathbb F$ except
for $\pm 1$ since
substituting
$\mu=(1-\lambda)/
(1+\lambda)$ into
\eqref{hvf} gives the
identity.
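The bookkeeping around \eqref{hvf} can be verified in exact rational arithmetic; the sketch below (an illustration only) checks that the substitution is an involution, that $\mu\mapsto-\mu$ corresponds to $\lambda\mapsto\lambda^{-1}$, and that $\lambda=\pm1$ never occurs for admissible $\mu$:

```python
from fractions import Fraction

def mu_to_lam(mu):
    # eq. (hvf): lambda = (1 - mu) / (1 + mu); the map is its own inverse
    return (1 - mu) / (1 + mu)

mus = [Fraction(p, q) for p in range(-4, 5) for q in (1, 2, 3)
       if Fraction(p, q) not in (Fraction(-1), Fraction(0))]
for mu in mus:
    lam = mu_to_lam(mu)
    assert mu_to_lam(lam) == mu            # involution: mu -> lam -> mu
    if lam != 0:
        assert mu_to_lam(-mu) == 1 / lam   # mu -> -mu  <=>  lam -> 1/lam
    assert lam not in (Fraction(1), Fraction(-1))   # lam = +-1 never occurs
```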
(b) Let $A$ be the
matrix \eqref{cm1}. By
Lemma \ref{l_per}, the
pair $(A^T,A)$ is
equivalent to
\begin{equation}\label{hhy}
\left(
\left[\begin{array}
{c|c}
\begin{matrix}
\lambda &1\\&\lambda
&\ddots\\&&\ddots&1\\
&&&\lambda
\end{matrix}
&0\\ \hline 0&
\begin{matrix}
1 &\lambda\\&1
&\ddots\\
&&\ddots&\lambda\\
&&&1
\end{matrix}
\end{array}
\right]\!,
\left[\begin{array}
{c|c}
\begin{matrix}
1 &\lambda\\&1
&\ddots\\
&&\ddots&\lambda\\
&&&1\end{matrix} &0\\
\hline 0&
\begin{matrix}
\lambda &1\\&\lambda
&\ddots\\&&\ddots&1\\
&&&\lambda
\end{matrix}
\end{array}
\right] \right)\!,
\end{equation}
which is equivalent to
$(J_k(\lambda), I_k)
\oplus
(I_k,J_k(\lambda))$
since $\lambda \ne \pm
1$. This verifies the
assertion about the
matrix \eqref{cm1} in
table \eqref{tab1}.
The remaining
assertions about the
matrices \eqref{cm2}
and \eqref{cm3} in
table \eqref{tab1}
follow from the
corresponding
assertions about the
matrices \eqref{sc2}
and \eqref{sc3} in
table \eqref{tab5}:
the matrices
\eqref{cm2} and
\eqref{cm3} have the
form $A=B+C$ in which
$(B,C)$ is \eqref{sc2}
or \eqref{sc3}, and so
$(A^T,A)=(B-C,B+C)$.
For example, if $A$ is
\eqref{cm2} with
$\varepsilon =1$, then
by \eqref{tab5}
\[
(B,C)\approx
\begin{cases}
(I_n,J_n(0))
& \text{if $n$ is odd}, \\
(J_n(0),I_n)
& \text{if $n$ is
even},
\end{cases}
\]
and we have
\[
(A^T,A)\approx
\begin{cases}
(I_n-J_n(0), I_n+J_n(0))
\approx
(I_n,J_n(1))
& \text{if $n$ is odd},
\\
(J_n(0)-I_n, J_n(0)+
I_n)
\approx
(I_n,J_n(-1)) &
\text{if $n$ is even}.
\end{cases}
\]
The proof of Theorem
\ref{t_matr} is
complete.
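Both displayed equivalences can be confirmed numerically (a sanity check only, not part of the proof): writing $N=J_n(0)$, the matrix $(I_n-N)^{-1}(I_n+N)$ in the odd case must be a single Jordan block with eigenvalue $1$, and $(N-I_n)^{-1}(N+I_n)$ in the even case a single block with eigenvalue $-1$. A NumPy sketch:

```python
import numpy as np

def check(n):
    N = np.diag(np.ones(n - 1), 1)                 # J_n(0)
    if n % 2 == 1:   # (I - N, I + N) ~ (I, J_n(1))
        M = np.linalg.inv(np.eye(n) - N) @ (np.eye(n) + N)
        eig = 1.0
    else:            # (N - I, N + I) ~ (I, J_n(-1))
        M = np.linalg.inv(N - np.eye(n)) @ (N + np.eye(n))
        eig = -1.0
    D = M - eig * np.eye(n)
    assert np.linalg.matrix_rank(D) == n - 1             # one Jordan block ...
    assert np.allclose(np.linalg.matrix_power(D, n), 0)  # ... of size n

check(5)
check(6)
```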
\section{Matrices with
respect to
*congruence}
\label{s_pr1}
In this section we
prove Theorem
\ref{t_matr1}.
Let $\mathbb F$ be an
algebraically closed
field with nonidentity
involution represented
in the form
\eqref{1pp11}. A
canonical form of a
square matrix $A$ over
$\mathbb F$ for
*congruence was given
in \cite{ser_izv} and
was improved in
\cite{hor-ser_transp}
(a direct proof that
the matrices in
\cite{hor-ser_transp}
are canonical is given
in
\cite{hor-ser_regul,
hor-ser2}): $A$ is
*congruent to a direct
sum, determined
uniquely up to
permutation of
summands, of matrices
of three types:
\begin{equation}\label{eqq}
\begin{bmatrix}0&I_k\\
J_k(\lambda) &0
\end{bmatrix}\
(\lambda\ne 0,\
|\lambda |\ne 1),\quad
\mu\begin{bmatrix}
0&&&1
\\
&&\ddd&i\\
&1&\ddd&\\
1&i&&0
\end{bmatrix}\ (|\mu|=1),
\quad
J_n(0),
\end{equation}
in which $\lambda$ is
determined up to
replacement by
$\bar\lambda^{-1}$. It
follows from the proof
of Theorem 3 in
\cite{ser_izv} that
instead of \eqref{eqq}
one can take any set
of matrices
\begin{equation*}\label{vdf}
P_{2k}(\lambda),\qquad
\mu Q_n,\qquad J_n(0)
\end{equation*}
(with the same
conditions on $\lambda
$ and $\mu $) such
that
\begin{equation}\label{azs1}
(P_{2k}(\lambda)^*,
P_{2k}(\lambda))
\approx
(J_k(\bar\lambda),
I_k) \oplus
(I_k,J_k(\lambda))
\end{equation}
and
\begin{equation}\label{azs2}
(Q_n^*, Q_n) \approx
(I_n,J_n(\nu_n)),
\end{equation}
in which
$\nu_1,\nu_2,\dots$
are any elements of
$\mathbb F$ with
modulus one.
\begin{proof}[Proof of
Theorem \ref{t_matr1}]
Let $P_{2k}(\lambda)$
be the matrix
\eqref{cmi1} with
$\lambda \ne 0$ and
let $Q_n$ be the
matrix \eqref{cmi2}
with $\mu=1$. Since
the matrix
\eqref{cmi1} with
$\lambda=0$ is $
J_n(0)$, it suffices
to prove that
\eqref{azs1} and
\eqref{azs2} are
fulfilled.
By Lemma \ref{l_per},
$(P_{2k}(\lambda)^*,
P_{2k}(\lambda))$ is
equivalent to the pair
\eqref{hhy} with
$\bar\lambda$ instead
of $\lambda$ in the
first matrix. This
proves \eqref{azs1}
since $|\lambda |\ne
1$.
The matrix $Q_n$ is
\eqref{cm2} with
$\varepsilon =1$. Due
to \eqref{tab1},
\[
(Q_n^*,Q_n)=(Q_n^T,Q_n)
\approx
(I_n,J_n((-1)^{n+1}));
\]
this ensures
\eqref{azs2} with
$\nu_n:=(-1)^{n+1}$.
The assertion about
the matrix
\eqref{cmi1} with
$\lambda=0$ in table
\eqref{tab2} follows
from the equivalence
\[
(J_n(0)^T,J_n(0))\approx
\begin{cases}
(F_k,G_k)\oplus
(F_k^T,G_k^T)
&\text{if $n=2k+1$,}\\
(J_k(0),I_k) \oplus
(I_k,J_k(0)) &\text{if
$n=2k$},
\end{cases}
\]
which was established
in the proof of
Theorem 3 in
\cite{ser_izv}.
\end{proof}
\section{Pairs of
Hermitian matrices}
\label{s_her}
\begin{theorem}\label{t_her}
{\rm(a)} Over an
algebraically closed
field $\mathbb F$ with
nonidentity involution
represented in the
form \eqref{1pp11},
every pair $(A,B)$ of
Hermitian matrices of
the same size is
*congruent to a direct
sum, determined
uniquely up to
permutation of
summands, of
tridiagonal pairs of
two types:
\begin{equation}\label{he1}
\left(\begin{bmatrix}
0&1&&&0\\
1&0&1\\
&1&0&\ddots\\
&&\ddots&\ddots&1\\
0&&&1&0
\end{bmatrix}_{n},\
\begin{bmatrix}
0&\mu &&&0\\
\bar\mu &0&\mu \\
&\bar\mu &0&\ddots\\
&&\ddots&\ddots&\mu \\
0&&&\bar\mu &0
\end{bmatrix}_{n}\right),
\end{equation}
in which
$\mu\in\mathbb
F\smallsetminus\mathbb
P$ if $n$ is even,
$\mu=\pm i$ if $n$ is
odd, and $\mu$ is
determined up to
replacement by
$\bar\mu$; and
\begin{equation}\label{he2}
\left(\begin{bmatrix}
a &b&&&&0\\
b&0&a\\
&a&0&b\\
&&b&0&a\\
&&&a&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n,\
\begin{bmatrix}
b&-a&&&&0\\
-a&0&b\\
&b&0&-a\\
&&-a&0&b\\
&&&b&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n\;\right),
\end{equation}
in which
$a,b\in\mathbb P$ and
$a^2+b^2=1$.
{\rm(b)} The Kronecker
canonical form of
$(A,B)$ under
equivalence determines
this direct sum
uniquely up to
permutation of
summands and
multiplication by $-1$ of
any direct summand of
type \eqref{he2}. The
Kronecker canonical
form of each of the
direct summands is
given in the following
table:
\begin{equation}\label{tab2a}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|}
\hline
Pair& Kronecker
canonical form of the
pair
\\
\hline\hline
\eqref{he1}&
$\begin{array}{rl}
(F_k,G_k)\oplus
(F_k^T,G_k^T)
&\text{if $n=2k+1$}\\
(I_k,J_k(\mu)) \oplus
(I_k,J_k(\bar\mu))
&\text{if $n=2k$}
\end{array}$
\\\hline
\eqref{he2}&
$\begin{array}{rl}
(I_n,J_n( b/a))
&\text{if $n$ is odd
and $a\ne 0$}\\
(I_n,J_n(-a/b))
&\text{if $n$ is even
and $b\ne 0$}\\
(J_n(0),I_n)
&\text{otherwise}
\end{array}$
\\\hline
\end{tabular}
\end{equation}
\end{theorem}
\begin{proof}
(a) Each square matrix
$A$ over $\mathbb F$
has a
\textit{Cartesian
decomposition}
\begin{equation*}
\label{pair6}
A=B+iC,
\qquad
B:=\frac{A+A^*}{2}
,\quad C:=\frac{i(
A^*-A)}{2},
\end{equation*}
in which both $B$ and
$C$ are Hermitian. Two
square matrices $A$
and $A'$ are
*congruent if and only
if the corresponding
pairs $(B,C)$ and
$(B',C')$ are
*congruent. Therefore,
if we apply the
Cartesian
decomposition to the
canonical matrices for
*congruence from
Theorem \ref{t_matr1},
we obtain canonical
pairs of Hermitian
matrices for
*congruence. To
simplify these
canonical pairs, we
multiply \eqref{cmi1}
by $2$ (this is a
transformation of
*congruence), and
using \eqref{1pp11}
take $\mu$ in
\eqref{cmi2} to have
the form $a+bi$ with
$a,b\in\mathbb P$.
Thus, every pair
$(A,B)$ of Hermitian
matrices of the same
size is *congruent to
a direct sum,
determined uniquely up
to permutation of
summands, of pairs of
two types:
\begin{equation}\label{hemm1}
\left(\begin{bmatrix}
0&\!1+\bar\lambda
\!&&0\\
\!1+\lambda\!&0
&\ddots\\
&\ddots&\ddots&\!1
+\bar\lambda\! \\
0&&\!1+\lambda\! &0
\end{bmatrix}_{n},\
i\begin{bmatrix}
0 &
\!\bar\lambda-1\!&&0\\
\! 1-\lambda\!
&0&\ddots\\
&\ddots&\ddots&
\!\bar\lambda-1\! \\
0&&\!1-\lambda\! &0
\end{bmatrix}_{n}\right),
\end{equation}
in which
$\lambda\in\mathbb F$,
$|\lambda| \ne 1$,
each nonzero $\lambda$
is determined up to
replacement by
$\bar\lambda^{-1}$,
and $\lambda= 0$ if
$n$ is odd; and
\begin{equation}
\label{hemm2}
\left(\begin{bmatrix}
a&bi&&&&0\\
-bi&0&a\\
&a&0&bi\\
&&-bi&0&a\\
&&&a&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n,\
\begin{bmatrix}
b&-ai&&&&0\\
ai&0&b\\
&b&0&-ai\\
&&ai&0&b\\
&&&b&0&\ddots\\
0&&&&\ddots&\ddots
\end{bmatrix}_n\;\right),
\end{equation}
in which $a^2+b^2=1$.
Let us prove that the
pairs \eqref{hemm1}
and \eqref{hemm2} are
*congruent to the
pairs \eqref{he1} and
\eqref{he2}.
We obtain \eqref{he2}
if we apply the
*congruence
transformation
$X\mapsto S^*XS$ with
\[
S:=\operatorname{diag}(1,-i,-i,-1,-1,
i,i,1,1,-i,-i,-1,-1,\ldots)
\]
to the matrices of
\eqref{hemm2}.
The pair \eqref{hemm1}
with $\lambda=0$ is
the pair \eqref{he1}
with $\mu=-i$, which
is *congruent to
\eqref{he1} with
$\mu=i$ via the
transformation
\eqref{hfe}.
It remains to consider
\eqref{hemm1} with
$\lambda\ne 0$. Then
$n$ is even. Applying
to the matrices of
\eqref{hemm1} the
*congruence
transformation
$X\mapsto S^*XS$ with
\[
S:=\operatorname{diag}\left(1,\
\frac{1}{1+\bar\lambda},\
\frac{1+\lambda}{1
+\bar\lambda},\
\frac{1+\lambda}{(1
+\bar\lambda)^2},\
\frac{(1+\lambda)^2}{(1
+\bar\lambda)^2},\
\frac{(1+\lambda)^2}{(1
+\bar\lambda)^3},\
\ldots \right),
\]
(the denominator is
nonzero since
$|\lambda| \ne 1$), we
obtain \eqref{he1}
with
\begin{equation}\label{nkl}
\mu:=\frac{\bar\lambda-1}{
\bar\lambda+1}i.
\end{equation}
Since $\lambda$ is
nonzero and is
determined up to
replacement by
$\bar\lambda ^{-1}$,
we have that $\mu\ne
-i$ and $\mu$ is
determined up to
replacement by
\[
\frac{\lambda^{-1}-1}
{\lambda^{-1}+1}i=
\frac{1-\lambda}
{1+\lambda}i=\bar\mu.
\]
Every $\mu\in\mathbb
F$ except for $i$ can
be represented in the
form \eqref{nkl} with
$\lambda=(i-\bar\mu)/
(i+\bar\mu)$. We do
not impose the
condition $\mu\ne\pm
i$ in \eqref{he1}
because \eqref{he1}
with $\mu=\pm i$ is
*congruent to
\eqref{hemm1} with
$\lambda=0$.
Let us prove that the
condition
$|\lambda|\ne 1$ is
equivalent to the
condition $\mu\notin
\mathbb P$. If
$|\lambda|= 1$ and
$\lambda=a+bi \ne-1$
with $a,b\in\mathbb
P$, then
\begin{equation}\label{yyy}
\mu=\frac{(\bar\lambda-1)
(\lambda+1)}{(\bar\lambda
+1)(\lambda+1)}i=
\frac{\bar\lambda\lambda-
\lambda+ \bar\lambda
-1}
{\bar\lambda\lambda
+\lambda+\bar\lambda
+1}i= \frac{-bi}{1+a}\,i
=\frac{b}{1+a}
\in\mathbb P.
\end{equation}
Each $\mu\in\mathbb P$
can be represented in
the form \eqref{yyy}
as follows:
$\mu=b/(1+a)$, in
which
\[
a:=\frac{1-\mu^2}{1+\mu^2}
\quad\text{and}\quad
b:=\frac{2\mu}{1+\mu^2}
\qquad (\text{then
}a^2+b^2=1).
\]
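Both directions of this correspondence are easy to check numerically. The sketch below (floating point, offered only as an illustration) verifies that \eqref{nkl} sends every unit-circle $\lambda\ne-1$ to a real $\mu=b/(1+a)$, and that the parametrization $a=(1-\mu^2)/(1+\mu^2)$, $b=2\mu/(1+\mu^2)$ recovers $\mu$ with $a^2+b^2=1$:

```python
import cmath

def mu_of(lam):
    # eq. (nkl): mu = i (conj(lam) - 1) / (conj(lam) + 1)
    return (lam.conjugate() - 1) / (lam.conjugate() + 1) * 1j

# |lam| = 1, lam != -1  =>  mu is real and equals b / (1 + a)
for theta in (0.3, 1.1, 2.0, -0.7):
    lam = cmath.exp(1j * theta)
    mu = mu_of(lam)
    a, b = lam.real, lam.imag
    assert abs(mu.imag) < 1e-12
    assert abs(mu.real - b / (1 + a)) < 1e-12

# conversely, each real mu arises from a = (1-mu^2)/(1+mu^2), b = 2mu/(1+mu^2)
for m in (0.5, -1.7, 3.0):
    a = (1 - m * m) / (1 + m * m)
    b = 2 * m / (1 + m * m)
    assert abs(a * a + b * b - 1) < 1e-12
    assert abs(b / (1 + a) - m) < 1e-12
```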
(b) Lemma \ref{l_per}
ensures the assertion
about the pair
\eqref{he1} in table
\eqref{tab2a}.
The pair \eqref{he2}
has the form $(aX+bY,
bX-aY)$, in which
$(X,Y)$ is
\eqref{ss2n} with
$\lambda=0$. By
\eqref{tab3},
$(X,Y)\approx
(I_n,J_n(0))$ if $n$
is odd, and
$(X,Y)\approx
(J_n(0),I_n) $ if $n$
is even. Therefore,
\[
\text{Pair
\eqref{he2}}\approx
\begin{cases}
(aI_n+bJ_n(0),bI_n-aJ_n(0))
& \text{if $n$
is odd}, \\
(aJ_n(0)+bI_n,bJ_n(0)-aI_n)
& \text{if $n$ is
even}.
\end{cases}
\]
This validates the
assertion about the
pair \eqref{he2} in
table \eqref{tab2a}.
\end{proof}
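The $n=2k$ row of table \eqref{tab2a} for the pair \eqref{he1} can also be checked numerically (a sanity check, not part of the proof): for even $n$ the first matrix $A$ is invertible, and the Jordan form of $A^{-1}B$ should be $J_k(\mu)\oplus J_k(\bar\mu)$, so that $P=(A^{-1}B-\mu I)(A^{-1}B-\bar\mu I)$ has rank $2$ and squares to zero when $k=2$. A NumPy sketch:

```python
import numpy as np

def he1_pair(n, mu):
    # Pair (he1): ones off the diagonal in A; mu above / conj(mu) below in B.
    off = np.ones(n - 1)
    A = np.diag(off, 1) + np.diag(off, -1)
    B = mu * np.diag(off, 1) + np.conjugate(mu) * np.diag(off, -1)
    return A.astype(complex), B

n, mu = 4, 1 + 2j                  # n = 2k with k = 2, nonreal mu
A, B = he1_pair(n, mu)
M = np.linalg.inv(A) @ B           # A is invertible for even n
P = (M - mu * np.eye(n)) @ (M - np.conjugate(mu) * np.eye(n))
# Jordan form J_2(mu) + J_2(conj(mu)): P has rank 2 and P^2 = 0
assert np.linalg.matrix_rank(P) == 2
assert np.allclose(P @ P, 0)
```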
\begin{remark}\label{rem2}
The pair \eqref{he2}
with two dependent
parameters, which was
obtained from the
Cartesian
decomposition of
\eqref{cmi2}, can be
replaced by $0$- and
$1$-parameter matrices
as follows. The
matrices \eqref{cmi2}
have the form $\mu A$,
in which $\mu=a+bi$,
$a,b\in\mathbb P$, and
$a^2+b^2=1$. If
$\mu\ne\pm i$, then
$a\ne 0$ and $\mu A$
is *congruent to
$|a|^{-1}\mu
A=\pm(1+ci)A$ with
$c\in\mathbb P$. Now
apply the Cartesian
decomposition to $\pm
iA$ and $ \pm(1+ci)A$
with $c\in\mathbb P$.
\end{remark}
\end{document} | math | 48,728 |
\begin{document}
\title[Dichotomy for strictly increasing bisymmetric maps]{A dichotomy result for strictly increasing bisymmetric maps}
\author{P\'al Burai, Gergely Kiss and Patricia Szokol}
\address{P\'al Burai, \newline Budapest University of Technology and Economics, \newline 1111 Budapest,
Műegyetem rkp. 3., HUNGARY}
\email{[email protected]}
\address{Gergely Kiss, \newline Alfr\'ed R\'enyi Institute of Mathematics,\newline 1053 Budapest, Re\'altanoda street 13-15, HUNGARY}
\email{[email protected]}
\address{Patricia Szokol, \newline University of Debrecen,\newline
Faculty of Informatics, University of Debrecen,
MTA-DE Research Group “Equations, Functions and Curves”, \newline
4028, Debrecen, 26 Kassai road, Hungary}
\email{[email protected]}
\keywords{Bisymmetry, quasi-arithmetic mean, reflexivity, symmetry, regularity of bisymmetric maps}
\subjclass[2010]{26E60, 39B05, 39B22, 26B99}
\maketitle
\begin{abstract}
In this paper we present some remarkable consequences of the method used to prove that every bisymmetric, symmetric, reflexive, strictly monotonic binary map on a proper interval is continuous, and in particular a quasi-arithmetic mean. We demonstrate that this result can be refined: the symmetry condition can be weakened to symmetry at a single pair of distinct points of the interval.
\end{abstract}
\section{Introduction}
The role of bisymmetry in the characterization of binary quasi-a\-rith\-metic means goes back to the research of J\'anos Acz\'el (see \cite{Aczel1948}). This led him to a new approach, which is basically different from the earlier multivariate characterization of quasi-arithmetic means by Kolmogoroff \cite{Kolmogoroff1930}, Nagumo \cite{Nagumo1930} and de Finetti \cite{deFinetti1931}. Since that time quasi-arithmetic means have become central objects in the theory of functional equations, especially in the theory of means (see e.g. \cite{Burai2013c, Daroczy2013,Duc2020,Glazowska2020,Kiss2018,LPZ2020,Nagy2019,Pales2011,Pasteczka2020},
in particular \cite{Daroczy2002} and \cite{Jar2018} and the references therein).
In the proof of Aczél's characterization (Theorem \ref{Aczel1}, for details see \cite[Theorem 1 on p. 287]{Aczel1989}) the assumption of continuity was used essentially. It seemed that continuity could not be omitted from the conditions of Theorem \ref{Aczel1}, until recently the authors showed that the characterization of two-variable quasi-arithmetic means is possible without the assumption of continuity (Theorem \ref{T:bisymmetryimpliescontinuity}, for details see \cite[Theorem 8]{BKSZ2021}). It was proved that every bisymmetric, symmetric, reflexive, strictly monotonic binary mapping $F$ on a proper interval $I$ is continuous, and in particular a quasi-arithmetic mean.
In this paper we show a nontrivial consequence of Theorem \ref{T:bisymmetryimpliescontinuity}.
We prove a dichotomy result of bisymmetric, reflexive, strictly monotonic operations on an interval concerning symmetry (Corollary \ref{cor1}). Namely, such functions are either symmetric everywhere or nowhere symmetric.
In this sense this paper can be seen as the next step toward the characterization of bisymmetric, partially strictly monotone operations (see Open Problem \ref{op1}).
The remaining part of this paper is organized as follows.
In Section \ref{S:preliminary} we introduce the basic definitions and preliminary results. Section \ref{s3} is devoted to our main result (Theorem \ref{T:bisymmetryimpliescontinuity2}) and its consequences; there we also present some illustrative examples showing the sharpness of our main result.
The proof of Theorem \ref{T:bisymmetryimpliescontinuity2} is quite lengthy and technical. Therefore, we first introduce the needed concepts and prove some important lemmata in Section \ref{s31}, while Section \ref{s32} is devoted to the proof of Theorem \ref{T:bisymmetryimpliescontinuity2}. We finish this short note with some concluding remarks.
\section{Notations}\label{S:preliminary}
We keep the following notations throughout the text. Let $I\subseteq\mathbb{R}$ be a proper interval (i.e. the interior of $I$ is nonempty)
and $F\colon I\times I\to\mathbb{R}$ be a map.
Then $F$ is said to be
\begin{enumerate}[(i)]
\item {\it reflexive}, if $F(x,x)=x$ for every $x\in I$;
\item {\it partially strictly increasing}, if the functions $$x\mapsto F(x,y_0),\quad y\mapsto F(x_0,y)$$ are strictly increasing for every fixed $x_0\in I$ and $y_0\in I$. One can define partially strictly monotone, partially monotone, partially increasing functions similarly;
\item {\it symmetric}, if $F(x,y)=F(y,x)$ for every $x,y\in I$;
\item {\it bisymmetric}, if
\begin{equation*}\label{E:bisymmetry}
F\big(F(x,y),F(u,v)\big)=F\big(F(x,u),F(y,v)\big)
\end{equation*}
for every $x,y,u,v\in I$;
\item \emph{left / right cancellative}, if $F(x,a)=F(y,a)$ / $F(a,x)=F(a,y)$ implies $x=y$ for every $x,y,a\in I$. If $F$ is both left and right cancellative, then we simply say that $F$ is {\it cancellative};
\item \emph{mean}, if \begin{equation*}
\min\{x,y\}\leq F(x,y)\leq\max\{x,y\}
\end{equation*} for every $x,y\in I$. $F$ is a {\it strict mean} if the previous inequalities are strict whenever $x\ne y$.
\end{enumerate}
\begin{obvs}\label{o1}
If a map $F:I^2\to I$ is partially strictly increasing, then it is cancellative.
\end{obvs}
The following fundamental result is due to Acz\'el \cite{Aczel1948} (see also \cite[Theorem 1 on p. 287]{Aczel1989}).
\begin{thm}\label{Aczel1}
A function $F:I^2\to I$ is a continuous, reflexive, partially strictly monotonic, symmetric and bisymmetric mapping if and only if there is a continuous, strictly increasing function $f$
that satisfies
\begin{equation}\label{eqa1}
F(x,y)=f \left(\frac{f^{-1}(x)+f^{-1}(y)}{2}\right),\qquad x,y\in I.
\end{equation}
\end{thm}
A function $F$ which satisfies \eqref{eqa1} is called a {\it quasi-arithmetic mean}. In other words, a quasi-arithmetic mean is a conjugate of the arithmetic mean by a continuous bijection $f$.
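For instance, taking $f=\exp$ in \eqref{eqa1} gives the geometric mean. The following sketch (Python standard library only; an illustration, not part of the theory) checks reflexivity, symmetry, the mean property and bisymmetry numerically for this quasi-arithmetic mean:

```python
import math, itertools

def qam(f, finv):
    # quasi-arithmetic mean generated by f, as in eq. (eqa1)
    return lambda x, y: f((finv(x) + finv(y)) / 2)

F = qam(math.exp, math.log)   # f = exp: F(x, y) = sqrt(x*y), the geometric mean

pts = [0.5, 1.0, 2.0, 3.7]
for x in pts:
    assert math.isclose(F(x, x), x)                           # reflexive
for x, y in itertools.product(pts, repeat=2):
    assert math.isclose(F(x, y), F(y, x))                     # symmetric
    assert min(x, y) - 1e-12 <= F(x, y) <= max(x, y) + 1e-12  # mean property
for x, y, u, v in itertools.product(pts, repeat=4):
    assert math.isclose(F(F(x, y), F(u, v)), F(F(x, u), F(y, v)))  # bisymmetric
```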
In \cite{BKSZ2021} the authors showed that the assumption of continuity for $F$ in Theorem \ref{Aczel1} can be omitted, as it is a consequence of the remaining conditions.
\begin{thm}\label{T:bisymmetryimpliescontinuity}
A function $F\colon I^2\to I$ is a reflexive, partially strictly increasing, symmetric and bisymmetric mapping if and only if there is a continuous, strictly monotonic function $f$ such that
\begin{equation}\label{Eq_foalak}
F(x,y)=f\left(\frac{f^{-1}(x)+f^{-1}(y)}{2}\right),\qquad x,y\in I.
\end{equation}
In particular every reflexive, partially strictly increasing, symmetric and bisymmetric binary mapping defined on $I$ is continuous.
\end{thm}
\section{Dichotomy result on the symmetry of bisymmetric, strictly monotone, reflexive binary functions}\label{s3}
We prove that a reflexive, bisymmetric, partially strictly increasing map is either symmetric everywhere or nowhere symmetric on its whole domain.
The main result of this section runs as follows:
\begin{thm}\label{T:bisymmetryimpliescontinuity2}
Let us assume that $I$ is a proper interval and $F\colon I^2\to I$ is a reflexive, partially strictly increasing and bisymmetric map. Suppose that there are $\alpha,\beta\in I$ ($\alpha\ne \beta$) such that $F(\alpha,\beta)=F(\beta,\alpha)$. Then $F$ is symmetric on $I$ and continuous, i.e., $F$ is a quasi-arithmetic mean.
\end{thm}
As an immediate consequence of Theorem \ref{T:bisymmetryimpliescontinuity2} we get the following dichotomy result.
\begin{cor}\label{cor1}
Let $I$ be a proper interval, then
every bisymmetric, partially strictly increasing, reflexive, binary function $F\colon I^2\to I$ is
either symmetric everywhere on $I$ or nowhere symmetric on $I$.
\end{cor}
We illustrate the sharpness of our main results with some examples.
The map
\[
F\colon[0,1]^2\to[0,1],\quad F(x,y):=\begin{cases}
\frac{2xy}{x+y}&\mbox{if } x\in[0,1], \ \mbox{and } y\in [0,\tfrac12[\\
\sqrt{xy}&\mbox{if } x\in [0,\tfrac12],\ \mbox{and } y\in [\tfrac12,1]\\
\frac{x+y}{2}&\mbox{otherwise}
\end{cases}
\]
is reflexive, partially strictly increasing and not bisymmetric, and it is neither symmetric everywhere nor non-symmetric everywhere on $[0,1]^2$.
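These properties of the map above can be confirmed by direct computation. The sketch below implements the piecewise definition (setting $F(0,0):=0$, the limiting value of the harmonic branch) and exhibits a concrete quadruple at which bisymmetry fails, as well as a pair at which symmetry holds and a pair at which it fails:

```python
import math

def F(x, y):
    # the piecewise map above on [0,1]^2; F(0,0) := 0 (limit of the first branch)
    if 0 <= y < 0.5:
        return 2 * x * y / (x + y) if x + y > 0 else 0.0
    if x <= 0.5 <= y:
        return math.sqrt(x * y)
    return (x + y) / 2

assert all(math.isclose(F(t, t), t) for t in (0.0, 0.3, 0.5, 0.9))  # reflexive
assert math.isclose(F(0.1, 0.2), F(0.2, 0.1))      # symmetric at this pair ...
assert not math.isclose(F(0.6, 0.2), F(0.2, 0.6))  # ... but not at this one
# bisymmetry fails, e.g. at (x, y, u, v) = (0.6, 0.2, 0.6, 0.6):
lhs = F(F(0.6, 0.2), F(0.6, 0.6))   # = sqrt(0.18)
rhs = F(F(0.6, 0.6), F(0.2, 0.6))   # = harmonic mean of 0.6 and sqrt(0.12)
assert not math.isclose(lhs, rhs)
```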
The map
\[
F\colon[0,1]^2\to[0,1],\quad F(x,y):=\begin{cases}
y&\mbox{if } x,y\in[\tfrac12,1]\\
\min\{x,y\}&\mbox{otherwise}
\end{cases}
\]
is reflexive, bisymmetric, partially monotone increasing but not strictly, and it is neither symmetric everywhere nor non-symmetric everywhere on $[0,1]^2$.
Concerning the relaxation of reflexivity condition we can formulate the following open problem.
\begin{open} \label{op1}
Is it true or not that every bisymmetric, partially strictly increasing map is automatically continuous?
\end{open}
If the answer is affirmative, then the resulting map can be written in the following form (see \cite[Exercise 2, p. 296]{Aczel1989}).
\[
F(x,y)=k^{-1}(ak(x)+bk(y)+c),
\]
where $k$ is an invertible, continuous function, and $a,b,c$ are arbitrary real constants such that $ab\not=0$. In this case $F$ is either symmetric or non-symmetric everywhere. It is reflexive only if $c=0$ and $a+b=1$.
We could not find a map which is bisymmetric, partially strictly increasing, not reflexive, and neither symmetric everywhere nor non-symmetric everywhere on $I^2$.
\subsection{Auxiliary results and needed concepts}\label{s31}
\phantom{nnn}
Let $(u,v,F)_n$ denote the set of all expressions that can be built as $n$-fold compositions of $F$ by using $u$ and $v$.
For instance
\begin{align*}
(u,v,F)_0=&\{u,v\}\\
(u,v,F)_1=&\{F(u,u),F(u,v), F(v,u), F(v,v)\}\\
(u,v,F)_2=&\{F(F(u,u),u),F(F(u,v),u), F(F(v,u),u), F(F(v,v),u),\\ F(F(u,u),v)&,F(F(u,v),v), F(F(v,u),v), F(F(v,v),v), F(u,F(u,u)), \\ F(u,F(u,v))&, F(u,F(v,u)), F(v,F(v,v)),F(v,F(u,u)),F(v,F(u,v)), \\ F(v,F(v,u))&, F(v,F(v,v))\}.
\end{align*}
Moreover, let $(u,v,F)_{\infty}$ denote the set of all expressions that can be built as any number of compositions of $F$ by using $u$ and $v$. Formally, $$(u,v,F)_{\infty}=\bigcup_{n=1}^{\infty}(u,v,F)_{n}. $$
Reflexivity implies that $(u,v,F)_k\subset (u,v,F)_n$, if $k<n$. Hence, for the sake of convenience, we can introduce the notion of the length of expressions of $(u,v,F)_{\infty}$ as follows.
Let $x\in (u,v,F)_{\infty}$, such that
\[
\min\{k\in\mathbb{N} : x\in (u,v,F)_k\}=k_0.
\]
Then $k_0$ is called the length of $x$. Notation: $\mathcal{L}(x)=k_0$.
For example the length of $x = F(F(u,u),v)=F(u,v)$ is $1$.
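The sets $(u,v,F)_n$ can be generated mechanically for a concrete $F$. The toy sketch below (arithmetic mean, exact rational arithmetic; the depth-based generation used here is an upper bound for $(u,v,F)_n$ but yields the same union $(u,v,F)_{\infty}$) shows that for $F(a,b)=(a+b)/2$ the construction produces exactly the dyadic points of $[0,1]$:

```python
from fractions import Fraction

def level_sets(u, v, F, n):
    # depth-based closure: levels[j+1] = levels[j] together with all F(a, b);
    # reflexivity makes the levels nested, and their union is (u,v,F)_infinity
    levels = [{u, v}]
    for _ in range(n):
        prev = levels[-1]
        levels.append(prev | {F(a, b) for a in prev for b in prev})
    return levels

F = lambda a, b: (a + b) / 2                       # arithmetic mean as a toy case
levels = level_sets(Fraction(0), Fraction(1), F, 3)
assert levels[1] == {Fraction(0), Fraction(1, 2), Fraction(1)}
assert levels[3] == {Fraction(k, 8) for k in range(9)}   # dyadics, denominator 8
```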
We go on this subsection with the proof of some technical lemmata.
\begin{lem} If $F(u,v)=F(v,u)$, then $F$ is symmetric for any pair $(t,s)$, where $t,s\in (u,v, F)_{\infty}$. \end{lem}
\begin{proof}
We prove it by induction with respect to the length of the elements of $(u,v, F)_{\infty}$. It is easy to check that $F$ is symmetric for any pair of elements of $(u,v,F)_k$ if $k=0,1$. For instance $F$ is symmetric for the pair $(u,F(u,v))$.
Indeed, applying the reflexivity and bisymmetry of $F$ and $F(u,v)=F(v,u)$, we get
\begin{eqnarray*}
F(u,F(u,v))=F(F(u,u),F(v,u))=F(F(u,v),u).
\end{eqnarray*}
Similarly, $F$ is symmetric for $\{(u,F(v,u)), (v,F(u,v)), (v,F(v,u)) \}$, and hence for any pair of elements of $(u,v,F)_1$.
Now, we prove that $F$ is symmetric for any pair of elements of $(u,v,F)_k$, where $k\le n+1$, under the assumption that $F$ is symmetric for any pair of elements of $(u,v,F)_k$, where $k\le n$.
Let $x$ and $y$ be two elements of $(u,v, F)_{\infty}$ such that
$\mathcal{L}(x)=k$, $\mathcal{L}(y)=l$ where $k,l \le n+1$. Then there exist $a,b,c,d \in (u,v, F)_{\infty}$ such that $\mathcal{L}(a), \mathcal{L}(b), \mathcal{L}(c), \mathcal{L}(d)\leq n$ and
\[
x=F(a,b), \qquad y=F(c,d).
\]
By the inductive hypothesis we get that $F$ is symmetric for each pair of the set $\{a,b,c,d\}$. Applying the bisymmetry of $F$ we obtain
\begin{eqnarray*}
F(x,y)&=&F(F(a,b),F(c,d))=F(F(a,c),F(b,d))\\
&=&F(F(c,a),F(d,b))=F(F(c,d),F(a,b))=F(y,x).
\end{eqnarray*}
\end{proof}
\begin{lem}\label{l1}
Let $I$ be a proper interval, and $F:I^2\to I$ be a bisymmetric, partially strictly increasing and reflexive function. Suppose that
there are $u,v\in I,\ u<v$ such that $F(u,v)=F(v,u)$. Then there is an invertible, continuous function $f\colon[0,1]\to [u,v]$ such that $F$ can be written in the form
\begin{equation}\label{eqam}
F(x,y)=f\left(\frac{f^{-1}(x)+f^{-1}(y)}{2}\right),\qquad x,y\in [u,v].
\end{equation}
In particular, $F(s,t)=F(t,s)$ holds for all $s,t\in [u,v]$.
\end{lem}
\begin{proof}
The argument is similar to the proof\footnote{for the details see \cite[proof of Theorem 8 on page 479]{BKSZ2021}} of Theorem \ref{T:bisymmetryimpliescontinuity}. The main observation is that the proof
uses only the symmetry of $F$ on the images of dyadic numbers, which in our case is exactly the set $(u,v,F)_{\infty}$. For convenience we briefly sketch the crucial steps of the proof.
\begin{itemize}
\item Define $f\colon[0,1]\to[u,v]$ recursively on the set of dyadic numbers $\mathcal{D}$ to the set $(u,v,F)_{\infty}$, so that $f(0)=u,f(1)=v, f(\tfrac{1}{2})=F(u,v)$ and $f$ satisfies the identity \begin{equation}\label{identity_on_dyadics}
f\left(\frac{d_1+d_2}{2}\right)=F(f(d_1), f(d_2))
\end{equation}
for every $d_1,d_2\in\mathcal{D}$. It can be proved that such an $f$ is well-defined and strictly increasing. In this argument we crucially use the fact that $F$ is symmetric on $(u,v,F)_{\infty}$. By its recursive definition, it is clear that $f(\mathcal{D})=(u,v,F)_{\infty}$. (See also Acz\'el and Dhombres \cite{Aczel1989} on pages 287--290.)
\item The closure of $f(\mathcal{D})$ has uncountably many two-sided accumulation points\footnote{A point $\alpha$ in a set $H$ is a {\it two-sided accumulation point} if for every
$\varepsilon>0$, we have
\[
]\alpha-\varepsilon,\alpha[~\cap~H\not=\emptyset\quad\mbox{ and }\quad ]\alpha,\alpha+\varepsilon[~\cap~H\not=\emptyset.
\].}.
\item If $f(\mathcal{D})$ is not dense in $[u,v]$, i.e., there are $X,Y\in [u,v]$ such that $]X,Y[~\cap ~ f(\mathcal{D})= \emptyset$, then one can show that for arbitrary two-sided accumulation points $s\not=t$ we have
\[
]F(X,s),F(Y,s)[~\cap~ ]F(X,t),F(Y,t)[ ~=\emptyset.
\]
Hence there are uncountably many pairwise disjoint nonempty open intervals in $[u,v]$, which is a contradiction. Thus, $f(\mathcal{D})$ has to be dense in $[u,v]$.
\item If $f(\mathcal{D})$ is dense in $[u,v]$, then $f$ can be defined strictly increasingly on $[0,1]$, so that $f$ is continuous and satisfies \eqref{eqam}.
\end{itemize}
\end{proof}
\begin{lem}\label{l:union}
Let $I_1, I_2\subseteq I$ be two intervals such that $F$ is symmetric on $I_1$ and $I_2$. Then $F$ is symmetric on $F(I_1, I_2):=\{\ F(x_1,x_2)\ |\ x_1\in I_1,\ x_2\in I_2\ \}$.
Furthermore, if $I_1\cap I_2\not=\emptyset$, then $F$ is symmetric on $I_1\cup I_2$.
\end{lem}
\begin{proof}
We have to show that $F(z_1, z_2)=F(z_2, z_1)$ for $z_1, z_2\in F(I_1, I_2)$.
Let $x_1,x_2\in I_1$ and $y_1,y_2\in I_2$ such that $F(x_1, y_1)=z_1$ and $F(x_2, y_2)=z_2$.
Then \begin{align*}
&F(z_1, z_2)=F(F(x_1, y_1), F(x_2, y_2))=F(F(x_1, x_2), F(y_1, y_2))=\\&F(F(x_2, x_1),F(y_2, y_1))=F(F(x_2,y_2),F(x_1, y_1))=F(z_2, z_1),
\end{align*}
where in the second and fourth equalities we use bisymmetry and the third equality holds by the symmetry of $F$ on $I_1$ and $I_2$.
Now, let us assume that $z\in I_1\cap I_2$ and let $x\in I_1$, $y\in I_2$ be arbitrary. We have to show, that $F(x,y)=F(y,x)$.
Using $F(x,z)=F(z,x)$, $F(y,z)=F(z,y)$ and the bisymmetry of $F$, we get
\begin{eqnarray*}
F(F(x,y),F(z,z))&=&F(F(x,z),F(y,z))=\\F(F(z,x),F(y,z))&=&F(F(z,y),F(x,z))=\\
F(F(y,z),F(x,z))&=&F(F(y,x),F(z,z)).
\end{eqnarray*}
Since $F$ is partially strictly increasing, by Observation \ref{o1}, it is cancellative and hence $F(x,y)=F(y,x)$.
\end{proof}
Now, we are in the position to prove our main theorem.
\subsection{Proof of Theorem \ref{T:bisymmetryimpliescontinuity2}}\label{s32}
\phantom{nnn}
Let us assume first that $I$ is a proper compact interval.
Let $\sim$ be defined on $I$ such that for any $a,b\in I$ we have $a\sim b$ if and only if $F(a,b)=F(b,a)$. Then $\sim$ is an equivalence relation.
Indeed, $\sim$ is clearly reflexive and symmetric. Transitivity is a direct consequence of Lemma \ref{l:union}.
Lemma \ref{l1} guarantees that if two points are in the same equivalence class, then the interval between them belongs to the same class. Combining this fact with the transitivity of $\sim$, we can obtain that every equivalence class is an interval.
One can introduce an ordering $<$ between the equivalence classes of $\sim$ as follows.
For two equivalence classes $I_1, I_2$ ($I_1\ne I_2$) we say that $I_1$ is smaller than $I_2$ (denoted by $I_1<I_2$) if every element of $I_1$ is smaller than every element of $I_2$. As every equivalence class is an interval, this definition is meaningful and gives a natural total ordering on the equivalence classes of $\sim$ in $I$.
\textbf{Step 1:} {\it Let $I_1,I_2\subseteq I$ be two equivalence classes such that $I_1 < I_2$, then $F(I_1, I_3)< F(I_2, I_3)$ (resp. $F(I_3, I_1)< F(I_3, I_2)$) for every equivalence class $I_3$. In particular, if $I_1<I_2$, then $I_1<F(I_1, I_2)<I_2$.}
By Lemma \ref{l:union}, we get that $F$ is symmetric on $F(I_1,I_3)$ and on $F(I_2,I_3)$. If these sets are disjoint, then $F(I_1, I_3)< F(I_2, I_3)$, since $F$ is partially strictly increasing. Now, assume that there exists a common element of $F(I_1,I_3)$ and $F(I_2,I_3)$, i.e., there exist $x_1\in I_1$, $x_2\in I_2$ and $y_1,y_2 \in I_3$ such that $F(x_1,y_1)=F(x_2, y_2)$. Hence,
\[
F(F(x_1,y_1),F(x_2,y_2))=F(F(x_2,y_2),F(x_1,y_1)).
\]
By bisymmetry, the left-hand side is equal to $F(F(x_1,x_2),F(y_1,y_2))$. Concerning the right-hand side, bisymmetry and the fact that $y_1\sim y_2$ imply that it equals $F(F(x_2,x_1),F(y_1,y_2))$. Thus
\[
F(F(x_1,x_2),F(y_1,y_2))=F(F(x_2,x_1),F(y_1,y_2)).
\]
Moreover, $F$ is partially strictly increasing and hence, by Observation \ref{o1}, it is cancellative. Consequently, $F(x_1,x_2)=F(x_2,x_1)$, which is a contradiction, since $x_1$ and $x_2$ belong to two different equivalence classes.
Similarly, we can get that $F(I_3, I_1)< F(I_3, I_2)$ for any $I_3$, if $I_1< I_2$.
In particular, the choice $I_1=I_3$ gives that $F(I_1, I_1)=I_1<F(I_1, I_2)$.
Analogously, substituting $I_3=I_2$ into $F(I_1, I_3)< F(I_2, I_3)$ we have $F(I_1, I_2)<F(I_2, I_2)=I_2$.
Thus, if $I_1<I_2$, then $I_1<F(I_1, I_2)<I_2$.
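The betweenness conclusion of Step 1 mirrors, at the level of points, the fact that a reflexive, partially strictly increasing operation is internal: $x<y$ forces $x=F(x,x)<F(x,y)<F(y,y)=y$. A quick numerical sketch with a weighted mean (our own toy example; names ad hoc):

```python
# Sketch (ad hoc example): for a reflexive, partially strictly
# increasing operation, x < y forces x = F(x,x) < F(x,y) < F(y,y) = y,
# mirroring Step 1's conclusion I_1 < F(I_1, I_2) < I_2 for classes.
from fractions import Fraction

p = Fraction(2, 5)

def F(x, y):                      # reflexive: F(x, x) == x
    return p * x + (1 - p) * y

for x, y in [(Fraction(0), Fraction(1)), (Fraction(-3), Fraction(7))]:
    assert F(x, x) == x and F(y, y) == y          # reflexivity
    assert x < F(x, y) < y                        # internality
```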
\textbf{Step 2:} \emph{Every equivalence class is a closed interval.}
As we have seen, every equivalence class is a (not necessarily proper) interval. Let $I_1$ be an equivalence class with endpoints $a$ and $b$. If $a=b$, then $I_1$ is a singleton and we are done. Now we assume that $a\ne b$. Suppose that $b\not\in I_1$. Then there is an equivalence class $I_2$ containing $b$.
However, Step 1 implies that $I_1<F(I_1,I_2)<I_2$, which is a contradiction, since $b$ is on the common boundary of $I_1$ and $I_2$, so there is no space for $F(I_1, I_2)$. Thus $b\in I_1$. A similar argument shows that $a\in I_1$ and hence $I_1$ is closed.
It is important to note that the equivalence classes can be singletons, but according to our assumption $F(\alpha,\beta)=F(\beta,\alpha)$ holds for given $\alpha,\beta\in I$, hence $\alpha\sim \beta$ and there is at least one equivalence class $I_{\alpha\beta}$ which is a proper interval that contains $\alpha$ and $\beta$.
\textbf{Step 3:} \emph{Let $I$ be a proper interval and $J$ an arbitrary interval. Then
$F(I, J)$ (resp. $F(J, I)$) is contained in an equivalence class which is a proper interval.}
If $I$ is proper, then $F(I, J)$ has at least two elements. Hence, the equivalence class containing $F(I, J)$ is a proper interval.
If the intersection of $I$ and $J$ is nonempty, then the statement comes immediately from Lemma \ref{l:union}.
If the intersection is empty, then we can deduce the statement from Step 1 and Step 2.
\textbf{Step 4:} \emph{The whole interval $I$ where $F$ is defined constitutes one equivalence class.}
Let us assume that we have at least two different equivalence classes $I_1$ and $I_2$. Without loss of generality, we can assume that $I_1<I_2$ and that at least one of the intervals is proper (e.g.\ $I_1=I_{\alpha\beta}$).
Iterating the fact (from Step 1) that $I_1<I_2$ implies $I_1<F(I_1, I_2)<I_2$, we obtain infinitely many equivalence classes that are proper intervals. Indeed, the sequence
\[
I_{\alpha\beta},\ F(I_{\alpha\beta}, I_2),\ F(F(I_{\alpha\beta}, I_2), I_2),\ F(F(F(I_{\alpha\beta}, I_2), I_2), I_2),\ \ldots
\]
gives such equivalence classes, where $I_{\alpha\beta}$ was defined in Step 2.
Let us denote the cardinality of the set of equivalence classes by $\kappa$, and index the equivalence classes as $I_{j}$ for $j<\kappa$.
We distinguish two cases:
\begin{enumerate}
\item $\kappa=\aleph_0$: In this case we have countably infinitely many closed, disjoint intervals that cover the closed interval $I$. This is not possible by the following theorem of Sierpi\'nski \cite{Si} (see also \cite[p. 173]{Ku}).
\begin{thm*}[Sierpi\'nski]
Let $X$ be a compact connected Hausdorff space (i.e. continuum). If $X$ has a countable cover $\{X_i\}_{i=1}^{\infty}$ by pairwise disjoint closed subsets, then at most one of the sets $X_i$ is non-empty.
\end{thm*}
\noindent In our case this implies that one of the equivalence classes must be the whole interval $I$.
\item $\kappa >\aleph_0$: In this case we consider $\{F(I_{\alpha\beta},I_j): j<\kappa\}$. By Step 2 and Step 3, for all $j<\kappa$ the sets $F(I_{\alpha\beta},I_j)$ are contained in equivalence classes that are pairwise disjoint proper intervals. So we can find uncountably many disjoint proper intervals in $\mathbb{R}$, which is impossible.
\end{enumerate}
Thus we get that every point of $I$ is in one equivalence class, hence $F$ is symmetric on $I$. In particular, $F$ is a quasi-arithmetic mean on the compact, proper interval $I$.
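Conversely, a quasi-arithmetic mean $f^{-1}\big(\tfrac{f(x)+f(y)}{2}\big)$, for $f$ continuous and strictly monotone, is symmetric, reflexive and bisymmetric. The following sketch checks this numerically for one illustrative generator (our own choice $f=\exp$; all names ad hoc):

```python
# Sketch: a quasi-arithmetic mean M(x, y) = f^{-1}((f(x) + f(y)) / 2),
# here with the illustrative generator f = exp, is symmetric,
# reflexive and bisymmetric; checked up to floating-point error.
import math

def M(x, y):
    return math.log((math.exp(x) + math.exp(y)) / 2)

x1, y1, x2, y2 = 0.3, 1.7, -0.4, 2.2
assert abs(M(x1, y1) - M(y1, x1)) < 1e-9             # symmetry
assert abs(M(x1, x1) - x1) < 1e-9                    # reflexivity
lhs = M(M(x1, y1), M(x2, y2))
rhs = M(M(x1, x2), M(y1, y2))
assert abs(lhs - rhs) < 1e-9                         # bisymmetry
```

For this particular $f$ both sides of the bisymmetry identity collapse to $\ln\big(\tfrac{e^{x_1}+e^{y_1}+e^{x_2}+e^{y_2}}{4}\big)$, which explains the agreement.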
\textbf{Step 5:} \emph{If $F$ is a quasi-arithmetic mean on every compact subinterval of an arbitrary interval $I$, then it is a quasi-arithmetic mean on $I$.}
The proof is based on a standard compact exhaustion argument. The interested reader is referred to the proof of Theorem 1 in \cite[p. 287]{Aczel1989}.
This finishes the
proof of Theorem \ref{T:bisymmetryimpliescontinuity2}.
\section{Concluding remarks}
One of the main goals concerning the characterization of bisymmetric operations without any regularity assumption is formalized in Open Problem \ref{op1}, which asks whether bisymmetry together with strict increasingness implies continuity. In the first joint paper of the authors (\cite{BKSZ2021}) it was proved that this is true if the operation is also symmetric and reflexive (see Theorem \ref{T:bisymmetryimpliescontinuity}). Following Acz\'el's idea in the investigation of bisymmetric, strictly increasing maps (\cite{Aczel1948}), the next step would be to verify the case where the symmetry condition is not assumed.
\begin{open}
Is it true or not that every bisymmetric, partially
strictly increasing, reflexive map is automatically continuous?
\end{open}
At this moment we do not know the exact answer to this question, although we believe it to be affirmative. In this direction our present investigation is an initial step, showing the dichotomy of symmetry for bisymmetric, strictly increasing, reflexive operations.
Furthermore, it is important to note that reflexivity has not been used centrally in the proof of Theorem \ref{T:bisymmetryimpliescontinuity2}, only implicitly in Lemma \ref{l1}. This observation leads to the following open question.
\begin{open}
Is it true or not that every bisymmetric, partially
strictly increasing, symmetric map is automatically continuous?
\end{open}
If the answer is affirmative, then its proof may yield the analogue of Lemma \ref{l1} without reflexivity.
Moreover, it would automatically imply the analogues of Theorem \ref{T:bisymmetryimpliescontinuity2} and of the dichotomy result Corollary \ref{cor1} without the assumption of reflexivity.
\end{document} | math | 24,170 |
\begin{document}
\title{Galois groups associated to generic Drinfeld modules and a conjecture of Abhyankar \\ {\bf Notice of Replacement}}
\author{Florian Breuer\\
Stellenbosch University, Stellenbosch, South Africa\\
{\em [email protected]}}
\maketitle
The previous version of this paper, as disseminated on arXiv on 17 March 2013, contains an important gap at the end of the proof. It is superseded by the paper
\cite{new}.
The erroneous claim is that $K_T = K_{T,0}(\lambda(v))$ is a transcendental extension of $K_{T,0}$, whereas in fact $\lambda(v)$ satisfies the equation
\[
\lambda(v)^{q^r-1} = T\prod_{0\neq w\in V}\frac{\lambda(v)}{\lambda(w)} \in K_{T,0}.
\]
To correct this error, one effectively needs to show that $K_T\cap K_{NT,0} = K_{T,0}$ inside $K_{NT}$. This is achieved in \cite{new} by studying the fixed field of $G(NT,T)\cap\mathop{\rm SL}\nolimits_r(A/NTA)$ inside $K_{TN,0}$ via Anderson's determinant morphism from $M_{TN}^r$ to $M_{TN}^1$, and applying some group theory.
The paper \cite{new} also contains an explicit construction of $M_{TN}^r$, extending the construction of Richard Pink, which may be of independent interest.
Many thanks to an alert anonymous referee at the {\em Journal of Number Theory} for pointing out this error.
\begin{thebibliography}{99}
\bibitem{new} F. Breuer, Explicit Drinfeld moduli schemes and Abhyankar's generalized iteration conjecture, {\em arXiv:1503.06420 [math.NT]}.
\end{thebibliography}
\end{document}
\section{Introduction}
Let ${\mathbb{F}}_q$ be the finite field of $q$ elements, $k/{\mathbb{F}}_q$ an algebraic extension and let $T,g_1,g_2,\ldots,g_{r-1}$ be algebraically independent over ${\mathbb{F}}_q$, for some $r\geq 2$.
Let $A:={\mathbb{F}}_q[T]$ be the polynomial ring over ${\mathbb{F}}_q$, $F:=k(T)$ the rational function field over $k$ and $K:=k(T,g_1,\ldots,g_{r-1})$.
Consider the Drinfeld $A$-module $\phi$ of rank $r$ defined over $K$ by
\[
\phi_T(X) = TX + g_1X^q + \cdots + g_{r-1}X^{q^{r-1}} + X^{q^r}.
\]
See \cite[Chap. 4]{GossBS} for a background on Drinfeld modules. For any $N=a_0+a_1T+\cdots+a_nT^n\in A$, the polynomial $\phi_N(X)\in K[X]$ is constructed by iterating $\phi_T$ as follows:
\[
\phi_N(X) = a_0X + a_1\phi_T(X) + a_2(\phi_T\circ\phi_T)(X) + \cdots + a_n\underbrace{(\phi_T\circ\cdots\circ\phi_T)}_{\text{$n$ times}}(X).
\]
Denote by $K_N$ the splitting field of $\phi_N(X)$ over $K$. Equivalently, $K_N$ is obtained by adjoining the elements of the $N$-torsion module $\phi[N]$ of $\phi$ to $K$. Since $\phi[N]\cong (A/NA)^r$ as $A$-modules, and $\mathop{\rm Gal}\nolimits(K_N/K)$ commutes with the $A$-module structure, we see that $\mathop{\rm Gal}\nolimits(K_N/K)\into \mathop{\rm GL}\nolimits_r(A/NA)$ (the embedding depends on a choice of basis of $\phi[N]$).
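The iteration above can be carried out mechanically. Here is a minimal computational sketch (our own illustration, with ad hoc data structures) in the simplest case $r=1$, $q=2$, where $\phi$ specializes to the Carlitz module $\phi_T(X)=TX+X^2$: an ${\mathbb{F}}_q$-linear polynomial $\sum_i c_i(T)X^{q^i}$ is stored as a dictionary, and composition uses the rule $\big(\sum_i a_iX^{q^i}\big)\circ g = \sum_i a_i\,g^{(q^i)}$, where $g^{(q^i)}$ raises the coefficients of $g$ to the $q^i$-th power.

```python
# Sketch (rank r = 1 over F_2): building phi_N(X) by iterating phi_T.
# The Carlitz module phi_T(X) = T X + X^q stands in for the generic phi.
q = 2

# Polynomials in T over F_q: dict {power of T: nonzero coeff mod q}.
def padd(f, g):
    h = dict(f)
    for k, c in g.items():
        h[k] = (h.get(k, 0) + c) % q
    return {k: c for k, c in h.items() if c}

def pmul(f, g):
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = (h.get(i + j, 0) + a * b) % q
    return {k: c for k, c in h.items() if c}

def pfrob(f):  # c(T) |-> c(T)^q; valid since coefficients lie in F_q
    return {k * q: c for k, c in f.items()}

# An F_q-linear polynomial sum_i c_i(T) X^(q^i) is a dict {i: c_i}.
def comp(f, g):  # composition f(g(X)) of F_q-linear polynomials
    h = {}
    for i, a in f.items():
        gi = g
        for _ in range(i):  # coefficients of g raised to the power q^i
            gi = {j: pfrob(c) for j, c in gi.items()}
        for j, b in gi.items():
            h[i + j] = padd(h.get(i + j, {}), pmul(a, b))
    return {k: c for k, c in h.items() if c}

def aadd(f, g):  # sum of F_q-linear polynomials
    h = dict(f)
    for k, c in g.items():
        h[k] = padd(h.get(k, {}), c)
    return {k: c for k, c in h.items() if c}

phi_T = {0: {1: 1}, 1: {0: 1}}          # T X + X^2
phi_T2 = comp(phi_T, phi_T)             # phi_{T^2}(X)
phi_N = aadd(phi_T2, phi_T)             # N = T^2 + T

# phi_{T^2}(X) = T^2 X + (T + T^2) X^2 + X^4:
assert phi_T2 == {0: {2: 1}, 1: {1: 1, 2: 1}, 2: {0: 1}}
# phi_{T^2+T}(X) = (T + T^2) X + (1 + T + T^2) X^2 + X^4,
# of degree q^{r deg N} = 4, as expected:
assert phi_N == {0: {1: 1, 2: 1}, 1: {0: 1, 1: 1, 2: 1}, 2: {0: 1}}
```

For a general $N=a_0+a_1T+\cdots+a_nT^n$ one takes the corresponding ${\mathbb{F}}_q$-linear combination of the iterates, exactly as in the displayed formula.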
The goal of this note is to prove the following.
\begin{Thm}\label{Main}
$\mathop{\rm Gal}\nolimits(K_N/K) \cong \mathop{\rm GL}\nolimits_r(A/NA)$.
\end{Thm}
This is S.S.~Abhyankar's ``Generalized Iteration Conjecture'', see \cite[\S19]{Ab01}. It is also the Drinfeld module analogue of the following theorem of Weber, see \cite[Cor. 1, p68]{LangEF}. Let $E$ be an elliptic curve with transcendental $j$-invariant $j$, defined over ${\mathbb{Q}}(j)$, and $n>1$, then $\mathop{\rm Gal}\nolimits({\mathbb{Q}}(j,E[n])/{\mathbb{Q}}(j)) \cong \mathop{\rm GL}\nolimits_2({\mathbb{Z}}/n{\mathbb{Z}})$.
When $N=T$, Theorem \ref{Main} can be traced back to E.H.~Moore \cite{Moore}:
\begin{Thm}[Moore]\label{ThmMoore}
$\mathop{\rm Gal}\nolimits(K_T/K)\cong\mathop{\rm GL}\nolimits_r(A/TA)=\mathop{\rm GL}\nolimits_r({\mathbb{F}}_q)$.
\end{Thm}
\begin{Proof}
See \cite[\S3]{AS1} for a particularly simple proof.
\end{Proof}
A number of other special cases of Theorem \ref{Main} are known, for example the case $r=1$ is due to Carlitz \cite{Carlitz}, the case $r=2$ is closely related to a result of Joshi \cite{Joshi}, and the case $N=T^n$ with $r\geq 1$ arbitrary was proved independently by Thiery \cite{Thiery} and by Abhyankar and Sundaram \cite{AS1}. Theorem \ref{Main} has also been proved under a variety of hypotheses on $N\in A$ by Abhyankar and his students, see \cite[\S19]{Ab01} and the references therein.
\section{Relative Galois groups}
Let $N,M\in A$ and define
\begin{eqnarray*}
G(MN,M) & := & \{ g\in\mathop{\rm GL}\nolimits_r(A/MNA) \;|\; g\equiv 1 \bmod M\} \\
& = & \ker\big(\mathop{\rm GL}\nolimits_r(A/MNA) \longto \mathop{\rm GL}\nolimits_r(A/MA)\big).
\end{eqnarray*}
Our approach will be to show the following.
\begin{Thm}\label{Relative}
$\mathop{\rm Gal}\nolimits(K_{MN}/K_M) \cong G(MN,M)$.
\end{Thm}
Clearly, Theorem \ref{Relative} follows from Theorem \ref{Main}. Conversely, we show that Theorem \ref{Main} follows if Theorem \ref{Relative} holds for $M=T$.
\begin{Proofof}{Theorem \ref{Main}}
\begin{minipage}[u]{5cm}
\[
\xymatrix{
& K_{NT}\ar@{-}[dl]_{G(NT,T)}\ar@{-}[dr]^{G(NT,N)} & \\
K_T\ar@{-}[dr]_{\mathop{\rm GL}\nolimits_r(A/TA)} & & K_N\ar@{-}[dl] \\
& K &
}
\]
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[u]{9cm}
{Suppose that $\mathop{\rm Gal}\nolimits(K_{NT}/K_T)\cong G(NT,T)$. Consider the field extensions in the diagram. We have $\mathop{\rm Gal}\nolimits(K_T/K)\cong\mathop{\rm GL}\nolimits_r(A/TA)$ by Theorem \ref{ThmMoore}, thus $[K_{NT}:K] = \#G(NT,T)\cdot\#\mathop{\rm GL}\nolimits_r(A/TA) = \#\mathop{\rm GL}\nolimits_r(A/NTA)$ and it follows that $\mathop{\rm Gal}\nolimits(K_{NT}/K)\cong\mathop{\rm GL}\nolimits_r(A/NTA)$. Now from the action of $\mathop{\rm Gal}\nolimits(K_{NT}/K)$ on $\phi[N]$ we see that $\mathop{\rm Gal}\nolimits(K_{NT}/K_N)\cong G(NT,N)$ and finally $\mathop{\rm Gal}\nolimits(K_N/K)\cong\mathop{\rm GL}\nolimits_r(A/NTA)/G(NT,N)\cong\mathop{\rm GL}\nolimits_r(A/NA)$.}
\end{minipage}
\end{Proofof}
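The order count in the diagram argument can be illustrated numerically in a toy stand-in (our own; the tower $\mathbb{Z}/4\to\mathbb{Z}/2$ plays the role of $A/NTA\to A/TA$, and all names are ad hoc): the kernel of reduction multiplies out exactly as used above.

```python
# Toy analogue: #GL_2(Z/4) = #ker(GL_2(Z/4) -> GL_2(Z/2)) * #GL_2(Z/2),
# and the reduction map is surjective.
from itertools import product
from math import gcd

def det2(m, n):                    # determinant of a flat 2x2 matrix mod n
    return (m[0] * m[3] - m[1] * m[2]) % n

def GL2(n):                        # brute-force list of invertible matrices
    return [m for m in product(range(n), repeat=4)
            if gcd(det2(m, n), n) == 1]

G4, G2 = GL2(4), GL2(2)
kernel = [m for m in G4 if tuple(x % 2 for x in m) == (1, 0, 0, 1)]
assert len(G4) == len(kernel) * len(G2)              # 96 == 16 * 6
# reduction mod 2 is surjective onto GL_2(Z/2):
assert {tuple(x % 2 for x in m) for m in G4} == set(G2)
```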
\section{Drinfeld moduli schemes}
The key point is to recognize the groups $G(MN,M)$ as the Galois groups of \'etale morphisms between Drinfeld moduli schemes, so we introduce these next. A useful reference is the careful article of Hubschmid \cite{Hubschmid}.
Let $N\in A$ be non-constant, and consider the functor ${\cal F}_N^r : \text{$F$-Schemes} \to \text{Sets}$, which maps an $F$-scheme $S$ to the set of isomorphism classes of rank $r$ Drinfeld $A$-modules with level-$N$ structure over $S$. Then ${\cal F}_N^r$ is representable by a non-singular affine scheme $M_N^r$ over $\mathop{\rm Spec}\nolimits(F)$, called a Drinfeld moduli scheme; see \cite[\S2]{Hubschmid} for details.
\begin{Thm}[Drinfeld]\label{etale}
Let $M,N\in A$ be non-constant. Then the natural projection $M_{MN}^r\to M_M^r$ is an \'etale Galois cover over $F$ with group $G(MN,M)$.
\end{Thm}
\begin{Proof}
The ideas are due to Drinfeld \cite{Drinfeld}, see \cite[Prop. 3.1.3]{Hubschmid} for a detailed proof.
\end{Proof}
Next, we need the following result.
\begin{Thm}\label{Irr}
Let $N\in A$ be non-constant. Then $M_N^r$ is irreducible as a scheme over $\mathop{\rm Spec}\nolimits(F)$.
\end{Thm}
\begin{Proof}
This is \cite[Cor. 3.4.5]{Hubschmid} in the case where $k={\mathbb{F}}_q$. In our situation, the same proof still works, namely there exists a morphism $M_N^r\to M_N^1$ defined over $F$ by \cite[Cor. 3.4.4]{Hubschmid}, whose geometric fibres are precisely the geometrically irreducible components of $M_N^r$ by \cite[Prop. 3.4.2]{Hubschmid}. It remains to check that $M_N^1$ is irreducible. For this we recall \cite[Thm 1]{Drinfeld} that $M_N^1=\mathop{\rm Spec}\nolimits(R_N^1\otimes_AF)$ where $R_N^1$ is the integral closure of $A$ in a certain class field over ${\mathbb{F}}_q(T)$ which splits completely above the place of ${\mathbb{F}}_q(T)$ with uniformizer $\frac{1}{T}$. As a result, this class field is a purely geometric extension of ${\mathbb{F}}_q(T)$, whereas $F$ is a constant field extension of ${\mathbb{F}}_q(T)$, and so $R_N^1\otimes_AF$ is a field, and $M_N^1$ is irreducible.
\end{Proof}
\section{The case $N=T$}
When $N=T$, the moduli scheme $M_T^r$ has a nice explicit description, due to Pink \cite{Pink,PinkSchieder}, since a Drinfeld $A$-module is uniquely determined by its level-$T$ structure.
Let $V={\mathbb{F}}_q^r$, $S_V=F[v \;|\; v\in V]$ the symmetric algebra of $V$ over $F$, and $R_V=F[v,\frac{1}{v} \;|\; v\in V\smallsetminus\{0\}]$ the localization of $S_V$ at $\prod_{v\in V\smallsetminus\{0\}}v$. We turn $R_V$ into a graded ring by defining $\mathop{\rm deg}\nolimits(v)=1$ and $\mathop{\rm deg}\nolimits(\frac{1}{v})=-1$ for all $v\in V\smallsetminus\{0\}$, and $\mathop{\rm deg}\nolimits(x)=0$ for all $0\neq x\in F$. Let $R_{V,0}$ denote the degree 0 component of $R_V$, then from \cite[\S7]{Pink} we obtain the following result.
\begin{Thm}[Pink]
$M_T^r = \mathop{\rm Spec}\nolimits(R_{V,0})$.
\end{Thm}
Moreover, the Drinfeld module $\phi$ endowed with a level-$T$ structure $\lambda : V \stackrel{\sim}{\longto} \phi[T]$ defines a $K_T$-valued point of $M_T^r$, denoted $\eta_T : \mathop{\rm Spec}\nolimits(K_T) \to M_T^r$, corresponding to
\[
\eta_T^\sharp : R_{V,0} \longto K_T, \quad\text{induced by}\quad \frac{v_1}{v_2} \longmapsto \frac{\lambda(v_1)}{\lambda(v_2)}, \quad \forall v_1,v_2 \in V\smallsetminus\{0\}.
\]
Since $\eta_T^\sharp$ is injective, $\eta_T$ maps onto the generic point of $M_T^r$.
By Theorem \ref{Irr}, $M_{NT}^r$ is irreducible, hence has a unique generic point. Now $\phi$ endowed with a level-$NT$ structure defines a $K_{NT}$-valued point $\eta_{NT}$ of $M_{NT}^r$, which lies above $\eta_T$, and hence also maps onto the generic point of $M_{NT}^r$.
Denote by $K_{T,0}\subset K_T$ the quotient field of the image of $\eta_T^\sharp$, then $K_T = K_{T,0}(\lambda(v))$ for any $0\neq v\in V$, so $K_T$ is a purely transcendental extension of $K_{T,0}$. Similarly, we denote by $K_{NT,0}\subset K_{NT}$ the quotient field of the image of $\eta_{NT}^\sharp$. It follows from Theorem \ref{etale} that $\mathop{\rm Gal}\nolimits(K_{NT,0}/K_{T,0}) \cong G(NT,T)$. Therefore, the compositum $K_{NT,0}\cdot K_T$ in $K_{NT}$ is Galois over $K_T$ with group $G(NT,T)$, so $[K_{NT}:K_T]\geq \#G(NT,T)$. Since $\mathop{\rm Gal}\nolimits(K_{NT}/K_T)\into G(NT,T)$ it follows that $\mathop{\rm Gal}\nolimits(K_{NT}/K_T)\cong G(NT,T)$. This proves Theorem \ref{Relative} for the case $M=T$, and so Theorem \ref{Main} follows. \qed
\paragraph{Acknowledgement.} I wish to thank the anonymous referee who read an earlier draft of this paper, in which a more complicated proof of Theorem \ref{Main} was proposed, and who outlined the proof presented here. This work was supported by NRF grant number BS2008100900027.
\begin{thebibliography}{99}
\bibitem{Ab01} S.S.~Abhyankar, Resolution of singularities and modular Galois theory. {\em Bull. Amer. Math. Soc. (N.S.)} {\bf 38} (2001), no. 2, 131--169.
\bibitem{AS1} S.S.~Abhyankar and G.S.~Sundaram, Galois theory of Moore-Carlitz-Drinfeld modules. {\em C. R. Acad. Sci. Paris S\'er. I Math.} {\bf 325} (1997), no. 4, 349--353.
\bibitem{Carlitz} L.~Carlitz, A class of polynomials. {\em Trans. Amer. Math. Soc.} {\bf 43} (1938), no. 2, 167--182.
\bibitem{Drinfeld} V.~L.~Drinfeld, Elliptic modules (Russian), {\em Math. Sbornik} {\bf 94} (1974), 594--627. Translated in {\em Math. USSR-Sb.} {\bf 23} (1974), 561--592.
\bibitem{GossBS} D.~Goss, Basic structures in function field arithmetic,
Springer-Verlag, 1996.
\bibitem{Hubschmid} P.~Hubschmid, The Andr\'e-Oort conjecture for Drinfeld modular varieties, to appear in {\em Compos. Math.}; arXiv:1201.5556v1[math.NT]
\bibitem{Joshi} K.~Joshi, A family of \'etale coverings of the affine line, {\em J. Number Theory} {\bf 59} (1996), 414--418.
\bibitem{LangEF} S.~Lang, Elliptic Functions, 2nd edition, {\em Graduate Texts in Mathematics} {\bf 112}, Springer-Verlag, 1987.
\bibitem{Moore} E.H.~Moore, A two-fold generalization of Fermat's theorem. {\em Bull. Amer. Math. Soc.} {\bf 2} (1896), no. 7, 189--199.
\bibitem{Pink}R.~Pink, Compactification of Drinfeld modular varieties and Drinfeld Modular Forms of Arbitrary Rank, to appear in {\em Manuscripta Math.}; arXiv:1008.0013v4[math.AG]
\bibitem{PinkSchieder}R.~Pink, S.~Schieder, Compactification of a Drinfeld Period Domain over a Finite Field, to appear in {\em J. Algebraic Geometry}; arXiv:1007.4796v3[math.AG]
\bibitem{Thiery} A.~Thiery, ${\mathbb{F}}_q$-linear Galois theory, {\em J. London Math. Soc.} (2) {\bf 53} (1996), 441--454.
\end{thebibliography}
\end{document}
Let $A={\mathbb{F}}_q[T]$ and $r\in{\mathbb{N}}$. Let $g_1,\ldots,g_{r-1}$ be algebraically independent over $k={\mathbb{F}}_q(T)$, and let $B={\mathbb{F}}_q[T,g_1,\ldots,g_{r-1}]$, with quotient field $K={\mathbb{F}}_q(T,g_1,\ldots,g_{r-1})$.
Let $\phi: A \to \mathop{\rm End}\nolimits_{{\mathbb{F}}_q}({\mathbb{G}}_{\mathrm{a},K})$ be the rank $r$ Drinfeld $A$-module determined by
\[
\phi_T(X) = TX + g_1X^q + \cdots + g_{r-1}X^{q^{r-1}} + X^{q^r}.
\]
We think of $\phi$ as the {\em monic generic Drinfeld $A$-module of rank $r$.} For an introduction to Drinfeld modules, see \cite[Chapter 4]{GossBS} or \cite[Chapter 12]{Rosen}.
Let $N\in A$ be a polynomial of degree $n$, then $\phi_N(X)\in B[X]$ is a separable ${\mathbb{F}}_q$-linear polynomial of degree $q^{rn}$, and S.~S.~Abhyankar conjectured in \cite{Ab99} that its Galois group over $K$ is isomorphic to $\mathop{\rm GL}\nolimits_r(A/NA)$.
When $N=T$, this was already proved by E.~H.~Moore in 1896 \cite{Moore}.
Various special cases of this conjecture were proved in \cite{Ab02,Ab02b,AKe,AS1,AS2}, see also \cite[\S 19]{Ab01}.
Denote by $K^{\rm sep}$ the separable closure of $K$, and by $G_K:=\mathop{\rm Gal}\nolimits(K^{\rm sep}/K)$ the absolute Galois group of $K$. The set of roots $\phi[N]$ of $\phi_N(X)$ in $K^{\rm sep}$ forms an $A$-module isomorphic to $(A/NA)^r$, via the $A$-action induced by $\phi$, and the action of $G_K$ on $\phi[N]$ induces the Galois representation
\[
\rho_N : G_K \longto \mathop{\rm Aut}\nolimits(\phi[N]) \cong \mathop{\rm GL}\nolimits_r(A/NA).
\]
Our main result is the following, which settles Abhyankar's conjecture.
\begin{Thm}\label{main}
The Galois representation $\rho_N$ is surjective for every $N\in A$.
\end{Thm}
\section{Some group theory}
Let $P\in A$ be irreducible, so ${\mathbb{F}}_P:=A/PA$ is a finite field of $q^{\mathop{\rm deg}\nolimits P}$ elements. Denote by
\[
A_P := \lim_{\longleftarrow}A/P^nA
\]
the completion of $A$ at $P$, and by
\[
T_P(\phi) := \lim_{\longleftarrow}\phi[P^n] \cong A_P^r
\]
the Tate module of $\phi$. The action of $G_K$ on $T_P(\phi)$ induces a continuous representation
\[
\rho_{P^\infty} : G_K \longto \mathop{\rm Aut}\nolimits(T_P(\phi)) \cong \mathop{\rm GL}\nolimits_r(A_P).
\]
The following result is standard.
\begin{Prop}\label{equivalence}
The following are equivalent:
\begin{enumerate}
\item $\rho_N : G_K \to \mathop{\rm GL}\nolimits_r(A/NA)$ is surjective for every $N\in A$.
\item $\rho_{P^n} : G_K \to \mathop{\rm GL}\nolimits_r(A/P^nA)$ is surjective for every irreducible $P\in A$ and $n\in{\mathbb{N}}$.
\item $\rho_{P^\infty} : G_K \to \mathop{\rm GL}\nolimits_r(A_P)$ is surjective for every irreducible $P\in A$.\qed
\end{enumerate}
\end{Prop}
We start with the following useful partial result.
\begin{Prop}[Abhyankar-Sundaram]\label{SmallP}
The Galois representation $\rho_{P^n}$ is surjective for all $n\in{\mathbb{N}}$ if $\mathop{\rm deg}\nolimits(P)=1$.
\end{Prop}
\begin{Proof}
We have $P = \alpha T + \beta \in{\mathbb{F}}_q[T]$, with $\alpha\neq 0$. Now $T\mapsto \alpha T + \beta$ induces an automorphism of $B$ which sends $\phi_{T^n}(X)$ to $\phi_{(\alpha T + \beta)^n}(X)$ in $B[X]$, so it suffices to prove the result for the case $P=T$. This was done by Abhyankar and Sundaram in \cite{AS1}.
\end{Proof}
We denote by
\[
\bar{\rho}_N : G_K \longto \mathop{\rm PGL}\nolimits_r(A/NA)
\]
the projective Galois representation, obtained by composing $\rho_N$ with the canonical epimorphism $\mathop{\rm GL}\nolimits_r(A/NA)\to\mathop{\rm PGL}\nolimits_r(A/NA)$, where $\mathop{\rm PGL}\nolimits_r$ is the quotient of $\mathop{\rm GL}\nolimits_r$ by scalars. This representation describes the action of $G_K$ on the rank 1 $(A/NA)$-submodules of $\phi[N]\cong (A/NA)^r$.
We will prove Theorem \ref{main} by combining the following results.
\begin{Prop}\label{det}
The determinant representation $\det\circ\rho_N : G_K \to \mathop{\rm GL}\nolimits_1(A/NA)$ is surjective.
\end{Prop}
\begin{Prop}\label{PSL}
The image of $\bar{\rho}_{N}$ in $\mathop{\rm PGL}\nolimits_r(A/NA)$ contains $\mathop{\rm PSL}\nolimits_r(A/NA)$.
\end{Prop}
These will be proved in Sections \ref{DetSection}, \ref{InvSection} and \ref{AnalSection}. We begin our proof of Theorem~\ref{main} by combining Propositions \ref{det} and \ref{PSL} into:
\begin{Prop}\label{PGL}
The projective representation $\bar{\rho}_N : G_K \to \mathop{\rm PGL}\nolimits_r(A/NA)$ is surjective.
\end{Prop}
\begin{Proof}
Let $H={\mathrm {Im}}(\rho_N)\subset\mathop{\rm GL}\nolimits_r(A/NA)$ and $\bar{H} = {\mathrm {Im}}(\bar{\rho}_N)\subset\mathop{\rm PGL}\nolimits_r(A/NA)$. By Proposition~\ref{PSL}, $\mathop{\rm PSL}\nolimits_r(A/NA)\triangleleft\bar{H}$, so the determinant induces an isomorphism
\[
\bar{H}/\mathop{\rm PSL}\nolimits_r(A/NA) \cong \det(H)/(A/NA)^{* r} \cong (A/NA)^*/(A/NA)^{* r},
\]
by Proposition \ref{det}. Thus $(\bar{H} : \mathop{\rm PSL}\nolimits_r(A/NA)) = (\mathop{\rm PGL}\nolimits_r(A/NA) : \mathop{\rm PSL}\nolimits_r(A/NA))$, which concludes the proof.
\end{Proof}
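The index computation can be sanity-checked by brute force in a small case (our own illustration with $\mathop{\rm GL}\nolimits_2({\mathbb{F}}_5)$; variable names are ad hoc): the index of $\mathop{\rm PSL}\nolimits_2$ in $\mathop{\rm PGL}\nolimits_2$ equals the number of square classes in ${\mathbb{F}}_5^*$.

```python
# Toy check: in GL_2(F_5), (PGL_2 : PSL_2) = #(F_5^* / (F_5^*)^2).
from itertools import product

p = 5
GL = [m for m in product(range(p), repeat=4)
      if (m[0] * m[3] - m[1] * m[2]) % p != 0]
SL = [m for m in GL if (m[0] * m[3] - m[1] * m[2]) % p == 1]
scalars_GL = p - 1                                     # nonzero scalars
scalars_SL = sum(1 for c in range(1, p) if (c * c) % p == 1)  # {1, p-1}
pgl, psl = len(GL) // scalars_GL, len(SL) // scalars_SL
squares = {(c * c) % p for c in range(1, p)}
assert pgl // psl == (p - 1) // len(squares)           # both equal 2
```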
\begin{Prop}\label{FP}
If $P\in A$ is irreducible, then the representation $\rho_P : G_K \to \mathop{\rm GL}\nolimits_r({\mathbb{F}}_P)$ is surjective.
\end{Prop}
\begin{Proof}
We continue with the notation in the proof of Proposition \ref{PGL}.
By \cite[Transvection Lemma 2.3]{Ab94}, we have $\mathop{\rm SL}\nolimits_r({\mathbb{F}}_P) \subset H$ if and only if $\mathop{\rm PSL}\nolimits_r({\mathbb{F}}_P)\subset\bar{H}$. Thus, by Proposition~\ref{PSL}, $\mathop{\rm SL}\nolimits_r({\mathbb{F}}_P)\subset H$. Now the same argument as before gives $H=\mathop{\rm GL}\nolimits_r({\mathbb{F}}_P)$.
\end{Proof}
Theorem \ref{main} is now equivalent to the following result.
\begin{Prop}\label{P-adic}
The $P$-adic Galois representation $\rho_{P^\infty} : G_K \to \mathop{\rm GL}\nolimits_r(A_P)$ is surjective for every irreducible $P\in A$.
\end{Prop}
\begin{Proof}
We start with the following result, due to Pink and R\"utsche.
\begin{Prop}[Prop. 4.1 in \cite{PR}]\label{PR}
Let $H$ be a closed subgroup of $\mathop{\rm GL}\nolimits_r(A_P)$. Assume that $|{\mathbb{F}}_P|\geq 4$, that $\det(H)=\mathop{\rm GL}\nolimits_1(A_P)$, that $H$ surjects onto $\mathop{\rm GL}\nolimits_r({\mathbb{F}}_P)$ under reduction modulo $P$, and that $H$ contains a matrix which is the identity modulo $P$ but is non-scalar modulo $P^2$. Then $H=\mathop{\rm GL}\nolimits_r(A_P)$.
\end{Prop}
We assume for the moment that $|{\mathbb{F}}_P|\geq 4$, and let $H={\mathrm {Im}}(\rho_{P^\infty})\subset\mathop{\rm GL}\nolimits_r(A_P)$. This is closed, since it is the continuous image of $G_K$. The group $H$ satisfies the remaining hypotheses of Proposition \ref{PR} by Propositions \ref{det}, \ref{FP} and \ref{PGL}, respectively, so $H=\mathop{\rm GL}\nolimits_r(A_P)$.
It remains to consider the cases where $|{\mathbb{F}}_P| \leq 3$, but then $\mathop{\rm deg}\nolimits(P)=1$, and everything follows from Proposition \ref{SmallP}.
\end{Proof}
Lastly, we will need the following result later.
\begin{Prop}\label{Der}
Let $P\in A$ be irreducible, $n\in{\mathbb{N}}$ and $r\geq 2$. If $r=2$, assume further that $|{\mathbb{F}}_P|\geq 4$. Let $H$ be a subgroup of $\mathop{\rm PGL}\nolimits_r(A/P^nA)$ which contains $\mathop{\rm PSL}\nolimits_r(A/P^nA)$. Then the derived subgroup of $H$ is $H^{\rm der} = \mathop{\rm PSL}\nolimits_r(A/P^nA)$.
\end{Prop}
\begin{Proof}
This actually holds with $A/P^nA$ replaced by any commutative local ring $R$, with the assumption that, if $r=2$, then its residue field $k_R$ contains at least $4$ elements.
It suffices to show that $\mathop{\rm SL}\nolimits_r(R)^{\rm der} = \mathop{\rm SL}\nolimits_r(R)$. As usual, $\mathop{\rm SL}\nolimits_r(R)$ is generated by elementary matrices, i.e. matrices $E_{ij}(c)$, which have $c\in R$ in position $(i,j)$ ($i\neq j$), $1$'s on the diagonal, and zeros everywhere else.
We need only show that each $E_{ij}(c)$ is a commutator of elements in $\mathop{\rm SL}\nolimits_r(R)$. When $r\geq 3$, we choose $k\neq i,j$, and compute
\[
E_{ij}(c) = E_{ik}(c)E_{kj}(1)E_{ik}(-c)E_{kj}(-1) \in \mathop{\rm SL}\nolimits_r(R)^{\rm der}.
\]
When $r=2$, we let $a,b\in R$ and compute
\[
\left(\begin{matrix}a & 0 \\ 0 & a^{-1}\end{matrix}\right)
\left(\begin{matrix}1 & b \\ 0 & 1\end{matrix}\right)
\left(\begin{matrix}a & 0 \\ 0 & a^{-1}\end{matrix}\right)^{-1}
\left(\begin{matrix}1 & b \\ 0 & 1\end{matrix}\right)^{-1}
= E_{12}\big(b(a^2-1)\big) \in\mathop{\rm SL}\nolimits_r(R)^{\rm der}
\]
and
\[
\left(\begin{matrix}a^{-1} & 0 \\ 0 & a\end{matrix}\right)
\left(\begin{matrix}1 & 0 \\ b & 1\end{matrix}\right)
\left(\begin{matrix}a^{-1} & 0 \\ 0 & a\end{matrix}\right)^{-1}
\left(\begin{matrix}1 & 0 \\ b & 1\end{matrix}\right)^{-1}
= E_{21}\big(b(a^2-1)\big) \in\mathop{\rm SL}\nolimits_r(R)^{\rm der}.
\]
Since $|k_R|\geq 4$, we can find an element $\bar{a}\in k_R^*$ such that $\bar{a}^2\neq 1$, and this lifts to $a\in R^*$ such that $a^2-1\in R^*$. Now for suitable choices of $b\in R$ we obtain $E_{ij}(c)\in \mathop{\rm SL}\nolimits_r(R)^{\rm der}$ for every $c\in R$.
\end{Proof}
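The two commutator identities in the proof can be verified mechanically; the following sketch (our own, with ad hoc helpers) checks them over the local ring $\mathbb{Z}/25\mathbb{Z}$, whose residue field ${\mathbb{F}}_5$ has more than $4$ elements.

```python
# Verify the commutator identities from the proof over Z/25.
n = 25

def mul(A, B):
    r = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(r)) % n
                       for j in range(r)) for i in range(r))

def E(r, i, j, c):                 # elementary matrix E_ij(c), size r
    return tuple(tuple((1 if a == b else 0) + (c if (a, b) == (i, j) else 0)
                       for b in range(r)) for a in range(r))

# r >= 3:  E_ij(c) = E_ik(c) E_kj(1) E_ik(-c) E_kj(-1), (i,j,k) = (0,1,2)
c = 7
lhs = mul(mul(E(3, 0, 2, c), E(3, 2, 1, 1)),
          mul(E(3, 0, 2, -c), E(3, 2, 1, -1)))
assert lhs == E(3, 0, 1, c)

# r = 2:  [diag(a, a^-1), E_12(b)] = E_12(b (a^2 - 1)),  with a = 2
a, ainv, b = 2, 13, 3              # 2 * 13 = 26 == 1 mod 25
D = ((a, 0), (0, ainv))
Dinv = ((ainv, 0), (0, a))
lhs2 = mul(mul(D, E(2, 0, 1, b)), mul(Dinv, E(2, 0, 1, -b)))
assert lhs2 == E(2, 0, 1, (b * (a * a - 1)) % n)
```

Here $a=2$ gives $a^2-1=3$, a unit modulo $25$, so the second identity really produces all of $E_{12}(R)$ as $b$ varies, in line with the argument above.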
\section{The determinant representation}\label{DetSection}
\begin{Proofof}{Proposition \ref{det}}
Let $\psi$ be the rank 1 Drinfeld module determined by
\[
\psi_T(X) = TX + (-1)^{r-1}X^q.
\]
Then $\psi$ is the ``determinant'' of $\phi$, and there exists a Galois equivariant isomorphism (see \cite{Anderson} and \cite[Ex. 2.6.3]{Goss94})
\[
\wedge^r\phi[N] \stackrel{\sim}{\longto} \psi[N].
\]
This can be seen as an analogue of the Weil pairing for elliptic curves. It follows that $\det\circ\rho_N : G_K \to \mathop{\rm GL}\nolimits_1(A/NA)$ coincides with the rank one Galois representation on $\psi[N]$.
Thus Proposition \ref{det} follows from the surjectivity of
\[
G_K \longto \mathop{\rm Aut}\nolimits(\psi[N])\cong\mathop{\rm GL}\nolimits_1(A/NA).
\]
But this was already proved by Carlitz \cite{Carlitz}. The sign $(-1)^{r-1}$ in $\psi_T(X)$ causes no problems; indeed, the original Carlitz module was defined as $C_T(X)=TX-X^q$, see \cite[Chapter 12]{Rosen}.
\end{Proofof}
\section{Invariants}\label{InvSection}
It remains to prove Proposition \ref{PSL}. To this end we will introduce invariants of Drinfeld modules, and view them as rigid analytic functions.
First, we remark that since $\mathop{\rm Gal}\nolimits(K^{\rm sep}/{\mathbb{F}}_{q^r}K) \subset \mathop{\rm Gal}\nolimits(K^{\rm sep}/K)$, it suffices to prove Proposition \ref{PSL} with $K$ replaced by the constant field extension
\[
K':={\mathbb{F}}_{q^r}K = {\mathbb{F}}_{q^r}(T,g_1,\ldots,g_{r-1}).
\]
Consider the action of ${\mathbb{F}}_{q^r}^\times$ on $K'$ induced by:
\[
\epsilon * g_i := \epsilon^{q^i-1}g_i,\qquad \epsilon\in{\mathbb{F}}_{q^r}^\times,\;\; i=1,2,\ldots,r-1.
\]
We denote by $L = K'^{{\mathbb{F}}_{q^r}^\times}$ the fixed field under this action, and by $C=(B\otimes{\mathbb{F}}_{q^r})^{{\mathbb{F}}_{q^r}^\times}$ the fixed ring.
The ring $C$ is a ring of isomorphism invariants of rank $r$ Drinfeld modules in the following sense (see \cite{Potemine}).
Let $\psi$ be a Drinfeld module over a field $F$ containing ${\mathbb{F}}_{q^r}$. Then it is isomorphic over $\bar{F}$ to a Drinfeld module $\tilde{\psi}$ determined by $\tilde{\psi}_T(X) = TX + a_1X^q + \cdots + a_rX^{q^r}$ with $a_r=1$, and the remaining $a_i\in\bar{F}$ are determined by $\psi$ up to a factor in ${\mathbb{F}}_{q^r}^\times$. For each $J\in C$, denote by $J(\psi)\in\bar{F}$ the image of $J$ under the homomorphism sending $g_i$ to $a_i$. This is well-defined, since $J$ is invariant under ${\mathbb{F}}_{q^r}^\times$. Now two Drinfeld modules $\psi$ and $\psi'$ are isomorphic if and only if $J(\psi)=J(\psi')$ for all $J\in C$.
We return to our generic Drinfeld module $\phi$, and we denote by ${\mathbb{P}}\phi[N]$ the set of all submodules $H\subset \phi[N]\cong (A/NA)^r$ with $H\cong (A/NA)$. For each $H\in{\mathbb{P}}\phi[N]$ the quotient Drinfeld module $\phi/H$ is determined by the isogeny $\phi \to \phi/H$ with kernel $H$ (see \cite[Prop. 4.7.11]{GossBS}). We denote by $K'_N$ the splitting field of $\phi[N]$ over $K'$, and by $L_N\subset K'_N$ the subfield generated by $J(\phi/H)$ for all $H\in{\mathbb{P}}\phi[N]$. The extension $L_N/L$ is Galois, and its Galois group embeds into $\mathop{\rm PGL}\nolimits_r(A/NA)$ once a basis of $\phi[N]\cong (A/NA)^r$ has been chosen.
Our principal ingredient is the following, which we will prove in Section \ref{AnalSection}.
\begin{Prop}\label{Anal}
$\mathop{\rm Gal}\nolimits(L_N/L)$ contains $\mathop{\rm PSL}\nolimits_r(A/NA)$.
\end{Prop}
From this we readily deduce Proposition \ref{PSL}.
\begin{Proofof}{Proposition \ref{PSL}}
Write $N=\prod_{P|N}P^{n_P}$ as a product of prime powers. Since $\bar{\rho}_N$ decomposes into
\[
\prod_{P|N} \bar{\rho}_{P^{n_P}} : G_K \longto \prod_{P|N} \mathop{\rm PGL}\nolimits_r(A/P^{n_P}A) \cong \mathop{\rm PGL}\nolimits_r(A/NA),
\]
it suffices to prove Proposition \ref{PSL} for $N=P^n$. If $|{\mathbb{F}}_P|<4$ then $\mathop{\rm deg}\nolimits(P)=1$, and the result follows from Proposition \ref{SmallP}. Hence we assume that $|{\mathbb{F}}_P|\geq 4$.
Now $K'/L$ is abelian, hence $K'/K'\cap L_N$ and $K'\cap L_N/L$ are also abelian. The Galois group $\mathop{\rm Gal}\nolimits(L_N/L)$ contains $\mathop{\rm PSL}\nolimits_r(A/NA)$, by Proposition \ref{Anal}, and $\mathop{\rm Gal}\nolimits(L_N/K'\cap L_N)$ is a subgroup of $\mathop{\rm Gal}\nolimits(L_N/L)$ with abelian quotient. Hence, by Proposition \ref{Der}, it must contain $\mathop{\rm PSL}\nolimits_r(A/NA)$ also, and Proposition \ref{PSL} follows, since $\mathop{\rm Gal}\nolimits(K'L_N/K')\cong\mathop{\rm Gal}\nolimits(L_N/K'\cap L_N)$.
\end{Proofof}
\section{Analytic methods}\label{AnalSection}
\begin{Proofof}{Proposition \ref{Anal}}
Here we will interpret the elements of $C$ as analytic functions.
Let $k_\infty={\mathbb{F}}_q((\frac{1}{T}))$ be the completion of $k={\mathbb{F}}_q(T)$ at $1/T$, and let ${\BC}_\infty = \hat{\bar{k}}_{\infty}$ denote the completion of an algebraic closure of $k_\infty$. We denote by
\[
\Omega^r := {\mathbb{P}}^{r-1}({\BC}_\infty)\smallsetminus\{\text{$k_\infty$-rational hyperplanes}\}
\]
Drinfeld's symmetric space\footnote{This is more commonly called Drinfeld's ``upper half-space'', but is neither upper nor half.}, on which $\mathop{\rm GL}\nolimits_r(k_\infty)$ acts from the left. The points of $\Omega^r$ correspond to rank $r$ Drinfeld modules over ${\BC}_\infty$, and two points $\omega,\omega'\in\Omega^r$ correspond to isomorphic (respectively, isogenous) Drinfeld modules $\phi^\omega,\phi^{\omega'}$ if and only if $\omega' = \gamma(\omega)$ for some $\gamma\in\mathop{\rm GL}\nolimits_r(A)$ (respectively, $\gamma\in\mathop{\rm GL}\nolimits_r(k)$).
The ring $C$ may be realized as a ring of $\mathop{\rm GL}\nolimits_r(A)$-invariant functions on $\Omega^r$, defined by $J : \Omega^r\to{\BC}_\infty, \omega\mapsto J(\phi^\omega)$.
Denote by $H_N\subset\mathop{\rm GL}\nolimits_r(k)$ a set of coset representatives of
\[
\mathop{\rm GL}\nolimits_r(A)\backslash\mathop{\rm GL}\nolimits_r(A)\mathop{\rm diag}\nolimits(N,\ldots,N,1)\mathop{\rm GL}\nolimits_r(A).
\]
Then for $\omega,\omega'\in\Omega^r$, the corresponding Drinfeld modules are linked by a cyclic isogeny $f:\phi^{\omega}\to\phi^{\omega'},\; \ker f\cong A/NA$ if and only if $\omega' = h(\omega)$ for some $h\in H_N$.
Thus we see that $\{J(\phi/H) \;|\; H\in{\mathbb{P}}\phi[N]\}$ corresponds to the set of functions $\{J\circ h \;|\; h\in H_N\}$ on $\Omega^r$. Now $\mathop{\rm GL}\nolimits_r(A)$ acts on this set from the right via composition $(J\circ h)*\gamma := J\circ h\circ \gamma$, since the right action of $\mathop{\rm GL}\nolimits_r(A)$ permutes the cosets represented by $H_N$. This extends to an action of $\mathop{\rm GL}\nolimits_r(A)$ on $L_N$ which fixes $L$, since the elements of $L$ correspond to $\mathop{\rm GL}\nolimits_r(A)$-invariant functions on $\Omega^r$. We thus obtain a group homomorphism
\[
\rho : \mathop{\rm GL}\nolimits_r(A) \longto \mathop{\rm Gal}\nolimits(L_N/L)\subset\mathop{\rm PGL}\nolimits_r(A/NA).
\]
We claim that
\[
\ker\rho\subset Z_N := \{\gamma\in\mathop{\rm GL}\nolimits_r(A) \;|\; \text{$\gamma \bmod N$ is a scalar in $\mathop{\rm GL}\nolimits_r(A/NA)$}\}.
\]
Indeed, suppose $\gamma\in\ker\rho$. Then
\begin{eqnarray*}
& & J\circ h\circ\gamma = J\circ h \quad\forall J\in C,\; h\in H_N\\
& \Leftrightarrow & J(\phi^{h\circ\gamma(\omega)}) = J(\phi^{h(\omega)})\quad\forall J\in C,\; h\in H_N, \; \omega\in\Omega^r\\
& \Leftrightarrow & \phi^{h\circ\gamma(\omega)} \cong \phi^{h(\omega)} \quad\forall h\in H_N, \; \omega\in\Omega^r\\
& \Leftrightarrow & \gamma\in h^{-1}\mathop{\rm GL}\nolimits_r(A)h \quad\forall h\in H_N.
\end{eqnarray*}
Let $h_0:=\mathop{\rm diag}\nolimits(N,1,N,\ldots,N)\in H_N$ and $h_1:=\mathop{\rm diag}\nolimits(1,N,\ldots,N)\in H_N$. Now $\gamma\in h_0^{-1}\mathop{\rm GL}\nolimits_r(A)h_0\cap h_1^{-1}\mathop{\rm GL}\nolimits_r(A)h_1$ implies that $\gamma$ must be a diagonal matrix modulo $N$. Next, for $i=2,\ldots,r$, let $h_i\in H_N$ be the matrix with diagonal $(1,N,\ldots,N)$, a $1$ in position $i$ of the first row, and zeros everywhere else. Given that $\gamma$ is diagonal modulo $N$, one readily computes that $\gamma\in\cap_{i=2}^r h_i^{-1}\mathop{\rm GL}\nolimits_r(A)h_i$ implies that $\gamma$ is a scalar modulo $N$. This proves the claim.
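To make the first step explicit in the smallest case $r=2$ (so $h_0=\mathop{\rm diag}\nolimits(N,1)$ and $h_1=\mathop{\rm diag}\nolimits(1,N)$), write $\gamma=\left(\begin{array}{cc}a&b\\c&d\end{array}\right)$. Then
\[
h_0^{-1}\gamma h_0=\left(\begin{array}{cc}a& b/N\\ Nc& d\end{array}\right),
\qquad
h_1^{-1}\gamma h_1=\left(\begin{array}{cc}a& Nb\\ c/N& d\end{array}\right),
\]
and both matrices lie in $\mathop{\rm GL}\nolimits_2(A)$ only if $N$ divides both $b$ and $c$, i.e.\ only if $\gamma$ is diagonal modulo $N$.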
Thus the image of $\rho$ contains the image of $\mathop{\rm GL}\nolimits_r(A)$ in $\mathop{\rm PGL}\nolimits_r(A/NA)$. By Strong Approximation, $\mathop{\rm SL}\nolimits_r(A)$ surjects onto $\mathop{\rm SL}\nolimits_r(A/NA)$, so the image of $\rho$ contains $\mathop{\rm PSL}\nolimits_r(A/NA)$.
\end{Proofof}
We remark that we have essentially computed the Galois group of a Drinfeld modular polynomial $P_{J,N}(X)$ for a suitable $J\in C$ (see \cite{BR}). This is certainly not a ``nice equation for a nice group'', in the spirit of \cite{Ab94}, as can be seen from the computational example filling the last nine pages of \cite{BR}.
\paragraph{Acknowledgement.} This paper grew out of a separate joint project with Hans-Georg R\"uck, and would never have been possible without him. I would also like to thank Dinesh Thakur for pointing me towards Abhyankar's work, which provided the inspiration needed to complete the proof of the main result.
\end{document} | math | 30,803 |
\begin{document}
\title[Products of open manifolds with $\mathbb{R}$ ]{Products of open manifolds with $\mathbb{R}$}
\author{Craig R. Guilbault }
\address{Department of Mathematical Sciences, University of Wisconsin-Milwaukee,
Milwaukee, Wisconsin 53201}
\email{[email protected]}
\date{December 20, 2005}
\subjclass{Primary 57N15, 57Q12}
\keywords{manifold, end, stabilization, Siebenmann's thesis}
\begin{abstract}
In this note we present a characterization of those open $n$-manifolds
($n\geq5$), whose products with the real line are homeomorphic to interiors of
compact $\left( n+1\right) $-manifolds with boundary.
\end{abstract}
\maketitle
\section{Introduction}
One often wishes to know whether a given open manifold can be compactified by
the addition of a manifold boundary. In other words, for an open manifold
$M^{n}$, we ask if there exists a compact manifold $C^{n}$ with $int\left(
C^{n}\right) \approx M^{n}$. Since $int\left( C^{n}\right) \hookrightarrow
C^{n}$ is a homotopy equivalence, and because every compact manifold has the
homotopy type of a finite CW complex (see \cite{KS}), a necessary condition is
that $M^{n}$ have finite homotopy type. This condition is not sufficient. One
of the most striking illustrations of that fact occurs in a famous
contractible (thus, homotopy equivalent to a point) $3$-manifold constructed
by J.H.C. Whitehead \cite{Wh}. That example is best known for not being
homeomorphic to $\mathbb{R}^{3}$ (or, equivalently, to $int\left( B^{3}\right)
$), but a little additional thought reveals that it is not homeomorphic to the
interior of \emph{any} compact $3$-manifold.
Somewhat surprisingly, the product of the Whitehead manifold with a line is
homeomorphic to $\mathbb{R}^{4}$. In fact, it is now known that the product of
\emph{any} contractible $n$-manifold with a line is homeomorphic to
$\mathbb{R}^{n+1}$. That fact was obtained through the combined efforts of
several researchers; see, for example, \cite{Gl}, \cite{Mc}, \cite{St},
\cite{Lu1}, \cite{Lu2} and \cite{Fr}. In this note we prove the following
generalization of that result:
\begin{theorem}
\label{main theorem}For an open manifold $M^{n}$ ($n\geq5$), $M^{n}
\times\mathbb{R}$ is homeomorphic to the interior of a compact $\left(
n+1\right) $-manifold with boundary if and only if $M^{n}$ has the homotopy
type of a finite complex.
\end{theorem}
I wish to acknowledge Igor Belegradek for motivating this work by asking me
the question:
\begin{quotation}
\emph{If }$M^{n}$\emph{ is an open manifold homotopy equivalent to an embedded
compact submanifold, say a torus, must }$M^{n}\times\mathbb{R}$\emph{ be homeomorphic
to the interior of a compact manifold?}
\end{quotation}
\noindent Initially, I was surprised that the question was open. The fairly
obvious approach---application of the main result of Siebenmann's
thesis---works nicely for $M^{n}\times\mathbb{R}^{2}$. In fact, Siebenmann
himself addressed that situation in his thesis \cite[Th.6.12]{Si}, where a
key ingredient was the straightforward observation that (for any connected
open manifold $M^{n}$), $M^{n}\times\mathbb{R}^{2}$ has stable fundamental
group at infinity. We too obtain our result by applying Siebenmann's thesis;
but unlike the `cross $\mathbb{R}^{2}$ situation', stability of the
fundamental group at infinity for $M^{n}\times\mathbb{R}$ is not so easy. In
fact, there exist open manifolds $M^{n}$ for which $M^{n}\times\mathbb{R}$
fails to have stable fundamental group at infinity. An example of that
phenomenon will be provided in Section \ref{Section: Proof}. However, under
the (already necessary) hypothesis of finite homotopy type---or even a weaker
hypothesis of finite domination---we are able to obtain $\pi_{1}$-stability in
$M^{n}\times\mathbb{R}$. That is the main step in our proof. A key ingredient
is the adaptation of a recent technique from \cite{GuTi}.
\section{Definitions and Background}
Throughout this paper, we work in the PL\ category. Proofs can be modified in
the usual ways to obtain equivalent results in the smooth or topological categories.
\subsection{Neighborhoods of infinity, ends, and finite dominations}
A manifold $M^{n}$ is \emph{open} if it is noncompact and has no boundary. A
subset $N$ of $M^{n}$ is a \emph{neighborhood of infinity} if $\overline
{M^{n}-N}$ is compact. We say that $M^{n}$ is \emph{one-ended} if each
neighborhood of infinity contains a connected neighborhood of infinity; in
other words, $M^{n}$ contains `arbitrarily small' connected neighborhoods of
infinity. More generally, $M^{n}$ is $k$\emph{-ended} ($k\in\mathbb{N}$) if it
contains arbitrarily small neighborhoods of infinity consisting of exactly $k$
components, each of which has noncompact closure. If no such $k$ exists, we
say $M^{n}$ has \emph{infinitely many ends}.
A neighborhood of infinity is \emph{clean} if it is a closed subset of $M^{n}$
and a codimension $0$ submanifold with a boundary that is bicollared in
$M^{n}$. By discarding compact components and drilling out arcs,
we can find within any clean neighborhood of infinity $N$, an \emph{improved
}clean neighborhood of infinity $N^{\prime}$ having the properties:
\begin{itemize}
\item $N^{\prime}$ contains no compact components, and
\item each component of $N^{\prime}$ has connected boundary.
\end{itemize}
\noindent If $M^{n}$ is $k$-ended, then there exist arbitrarily small improved
neighborhoods of infinity containing exactly $k$ components. Such a
neighborhood is called a $0$\emph{-neighborhood of infinity}. In this situation,
we may choose a sequence
\[
N_{1}\supseteq N_{2}\supseteq N_{3}\supseteq\cdots
\]
of $0$-neighborhoods of infinity such that $N_{i+1}\subseteq int\left(
N_{i}\right) $ for all $i$ and $\cap_{i=1}^{\infty}N_{i}=\varnothing$. A
sequence of this sort will be referred to as \emph{neat}. Then for each $i$,
the components may be indexed as $N_{i}^{1},N_{i}^{2},\cdots,N_{i}^{k}$;
furthermore, these indices may be chosen coherently so that for all $i<i^{\prime
}$ and all $1\leq j\leq k$ we have $N_{i^{\prime}}^{j}\subseteq int(N_{i}
^{j})$. When all of the above has been accomplished, we will refer to
$\left\{ N_{i}\right\} _{i=1}^{\infty}$ as a \emph{well-indexed neat
sequence of }$0$\emph{-neighborhoods of infinity. }For a fixed $j$, we say
that the nested sequence of components $\left\{ N_{i}^{j}\right\}
_{i=1}^{\infty}$ \emph{represents the }$j^{\emph{th}}$ \emph{end of }$M^{n}$.
A space has \emph{finite homotopy type }if it is homotopy equivalent to a
finite CW complex. A space $X$ is \emph{finitely dominated} if there exists a
finite complex $L$ and maps $u:X\rightarrow L$ and $d:L\rightarrow X$ such
that $d\circ u\simeq id_{X}$. It is a standard fact that a polyhedron (or
complex) $X$ is finitely dominated if and only if there is a homotopy
$H:X\times\left[ 0,1\right] \rightarrow X$ such that $H_{0}=id_{X}$ and
$\overline{H_{1}(X)}$ is compact. (We say $H$ \emph{pulls }$X$ \emph{into a
compact set}.) For later use, we prove a mild refinement of this latter characterization.
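For example (an illustration not needed in the sequel), $\mathbb{R}^{n}$ is finitely dominated---indeed, dominated by a point: the straight-line homotopy $H\left( x,t\right) =\left( 1-t\right) x$ satisfies $H_{0}=id$ and pulls $\mathbb{R}^{n}$ into the compact set $\left\{ 0\right\} $.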
\begin{lemma}
\label{improved-domination}A polyhedron $X$ is finitely dominated if and only
if, for any compactum $C\subseteq X$, there is a homotopy $J:X\times\left[
0,1\right] \rightarrow X$ such that
\begin{enumerate}
\item[i)] $J_{0}=id_{X}$,
\item[ii)] $\overline{J_{1}(X)}$ is compact, and
\item[iii)] $\left. J\right\vert _{C\times\left[ 0,1\right] }
=id_{C\times\left[ 0,1\right] }$.
\end{enumerate}
\begin{proof}
We need only prove the forward implication, as the converse is obvious. Begin
with a homotopy $H:X\times\left[ 0,1\right] \rightarrow X$ satisfying the
analogues of conditions i) and ii). Choose a compact polyhedral neighborhood
$D$ of $C$ in $X$. Then define $J$ on the union of $X\times\left\{ 0\right\}
$ and $\left( C\cup\left( X-intD\right) \right) \times\left[ 0,1\right]
$ as follows:
\[
J\left( x,t\right) =\left\{
\begin{tabular}
[c]{rr}
$x$ & if $t=0$\\
$x$ & if $x\in C$\\
$H\left( x,t\right) $ & if $x\in X-intD$
\end{tabular}
\ \ \ \right. \text{.}
\]
Apply the Homotopy Extension Theorem \cite[\S IV.2]{Hu} to extend $J$ to all
of $X\times\left[ 0,1\right] $. Condition ii) follows from compactness of
$D$.
\end{proof}
\end{lemma}
If a space is finitely dominated, one often wishes to know whether it has
finite homotopy type. This issue was resolved by Wall in \cite{Wa} where, to
every finitely dominated $X$, there is defined an obstruction $\sigma\left(
X\right) $ lying in the reduced projective class group $\widetilde{K}
_{0}\left( \mathbb{Z[}\pi_{1}(X)]\right) $. This obstruction vanishes if and
only if $X$ has finite homotopy type.
A space having finite homotopy type may have infinitely many ends. One example
is the universal cover of a figure-eight. However, within the realm of open
manifolds, this does not happen. In fact, we have
\begin{lemma}
The number of ends of a finitely dominated open $n$-manifold $M^{n}$ is a
finite integer bounded above by $rank\left( H_{n-1}\left( M^{n}
,\mathbb{Z}_{2}\right) \right) +1$.
\begin{proof}
[Sketch of Proof]If $M^{n}$ is dominated by a finite complex $L$, then each
homology group of $L$ surjects onto the corresponding homology group of
$M^{n}$. It follows that $rank\left( H_{n-1}\left( M^{n},\mathbb{Z}
_{2}\right) \right) <\infty$. Next observe that, for an improved
neighborhood of infinity $N$, the collection of boundary components of $N$
forms a nearly independent collection of elements of $H_{n-1}\left(
M^{n},\mathbb{Z}_{2}\right) $. (This is where we use the fact that $M^{n}$ is
an open manifold.) So if $M^{n}$ contained improved neighborhoods of infinity
with arbitrarily large numbers of components, $H_{n-1}\left( M^{n}
,\mathbb{Z}_{2}\right) $ would be infinitely generated. See \cite[Prop.3.1]
{GuTi} for details.
\end{proof}
\end{lemma}
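For example, $M^{n}=S^{n-1}\times\mathbb{R}$ ($n\geq2$) is two-ended, while $rank\left( H_{n-1}\left( S^{n-1}\times\mathbb{R},\mathbb{Z}_{2}\right) \right) +1=1+1=2$; so the bound of the lemma is attained.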
\subsection{Fundamental group at infinity and Siebenmann's thesis}
For an inverse sequence
\[
G_{0}\overset{\lambda_{1}}{\longleftarrow}G_{1}\overset{\lambda_{2}
}{\longleftarrow}G_{2}\overset{\lambda_{3}}{\longleftarrow}\cdots
\]
of groups and homomorphisms, a \emph{subsequence} of $\left\{ G_{i}
,\lambda_{i}\right\} $ is an inverse sequence of the form
\[
G_{i_{0}}\overset{\phi_{1}}{\longleftarrow}G_{i_{1}}\overset{\phi_{2}
}{\longleftarrow}G_{i_{2}}\overset{\phi_{3}}{\longleftarrow}\cdots,
\]
where, for each $j$, the homomorphism $\phi_{j}$ is the obvious composition
$\lambda_{i_{j-1}+1}\circ\cdots\circ\lambda_{i_{j}}$ of homomorphisms from the
original sequence. We say that $\left\{ G_{i},\lambda_{i}\right\} $ is
\emph{stable}, if it contains a subsequence $\left\{ G_{i_{j}},\phi
_{j}\right\} $ that induces a sequence of isomorphisms
\begin{equation}
im\left( \phi_{1}\right) \overset{\cong}{\longleftarrow}im\left( \phi
_{2}\right) \overset{\cong}{\longleftarrow}im\left( \phi_{3}\right)
\overset{\cong}{\longleftarrow}\cdots. \tag{*}
\end{equation}
If a sequence (*) exists where the bonding maps are simply injections, we say
that $\left\{ G_{i},\lambda_{i}\right\} $ is \emph{pro-injective}; if one
exists where the bonding maps are surjections, we say that $\left\{
G_{i},\lambda_{i}\right\} $ is \emph{pro-surjective} or (more commonly)
\emph{semistable.}
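For example, the inverse sequence $\mathbb{Z}\overset{\times2}{\longleftarrow
}\mathbb{Z}\overset{\times2}{\longleftarrow}\mathbb{Z}\overset{\times
2}{\longleftarrow}\cdots$ is pro-injective, since every bonding map is
injective; but it is neither stable nor semistable, because for any
subsequence the induced maps between images are proper inclusions of
subgroups of the form $2^{k}\mathbb{Z}$, hence never isomorphisms or
surjections.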
For a one-ended open manifold $M^{n}$ and a neat sequence $\left\{
N_{i}\right\} _{i=1}^{\infty}$ of $0$-neighborhoods of infinity, choose
basepoints $p_{i}\in N_{i}$, and paths $\alpha_{i}\subset N_{i}$ connecting
$p_{i}$ to $p_{i+1}$. Then construct an inverse sequence of groups:
\begin{equation}
\pi_{1}\left( N_{0},p_{0}\right) \overset{\lambda_{1}}{\longleftarrow}
\pi_{1}\left( N_{1},p_{1}\right) \overset{\lambda_{2}}{\longleftarrow}
\pi_{1}\left( N_{2},p_{2}\right) \overset{\lambda_{3}}{\longleftarrow}
\cdots. \tag{\dag}
\end{equation}
by letting $\lambda_{i+1}:\pi_{1}\left( N_{i+1},p_{i+1}\right)
\rightarrow\pi_{1}\left( N_{i},p_{i}\right) $ be the homomorphism induced by
inclusion followed by the change of basepoint isomorphism determined by
$\alpha_{i}$. The obvious singular ray obtained by piecing together the
$\alpha_{i}$'s is often referred to as the \emph{base ray }for this inverse
sequence. This inverse sequence (or more precisely the `pro-equivalence class'
of this sequence) is referred to as the \emph{fundamental group at infinity}
for $M^{n}$ and is denoted by $\pi_{1}\left( \varepsilon\left( M^{n}\right)
\right) $.
\begin{remark}
For the purposes of this paper, we only need to consider the fundamental group
at infinity for \textbf{one-ended} manifolds. (Even though we often begin with
a multi-ended manifold.) In multi-ended situations, one may associate a
different inverse sequence to each end. For example, if $M^{n}$ is a $k$-ended
open manifold and $\left\{ N_{i}\right\} _{i=1}^{\infty}$ is a well-indexed
neat sequence of\emph{ }$0$-neighborhoods of infinity, then, for each
$j\in\left\{ 1,2,\cdots,k\right\} $, we can construct an inverse sequence
\[
\pi_{1}\left( N_{0}^{j}\right) \overset{\lambda_{1}^{j}}{\longleftarrow}
\pi_{1}\left( N_{1}^{j}\right) \overset{\lambda_{2}^{j}}{\longleftarrow}
\pi_{1}\left( N_{2}^{j}\right) \overset{\lambda_{3}^{j}}{\longleftarrow
}\cdots.
\]
which is called the \emph{fundamental group at the }$j^{th}$ \emph{end of
}$M^{n}$. (Here we have omitted reference to basepoints only to simplify notation.)
\end{remark}
For a more thorough discussion of inverse sequences and the fundamental group
system at infinity, see \cite{Gu}.
As indicated in the introduction, Theorem \ref{main theorem} will be obtained
as a consequence of the main result of \cite{Si}. For easy reference, we state
that result and provide some necessary definitions.
\begin{theorem}
[Siebenmann, 1965]\label{siebenmann}A one-ended open $n$-manifold $M^{n}$
($n\geq6$) is homeomorphic to the interior of a compact manifold with boundary iff:
\begin{enumerate}
\item $M^{n}$ is inward tame at infinity,
\item $\pi_{1}$ is stable at infinity, and
\item $\sigma_{\infty}\left( M^{n}\right) \in\widetilde{K}_{0}\left(
\mathbb{Z[}\pi_{1}(\varepsilon(M^{n}))]\right) $ is trivial.
\end{enumerate}
\end{theorem}
In the above, \emph{inward tame at infinity }(or simply `inward tame') means
that for any neighborhood $N$ of infinity, there exists a homotopy (sometimes
called a \emph{taming homotopy}) $H:N\times\left[ 0,1\right] \rightarrow N$
such that $H_{0}=id$ and $\overline{H_{1}(N)}$ is compact. Equivalently
$M^{n}$ is inward tame if all clean neighborhoods of infinity are finitely
dominated. If $N\supseteq N^{\prime}$ are clean neighborhoods of infinity,
then any taming homotopy for $N^{\prime}$ can be extended to a taming homotopy
for $N$. Thus, in order to prove inward tameness for $M^{n}$, it suffices to
show the existence of arbitrarily small finitely dominated clean neighborhoods
of infinity.
Given conditions 1) and 2) above, one may choose a $0$-neighborhood of
infinity $N$ with the `correct' fundamental group---as determined by 2). Then
$\sigma_{\infty}\left( M^{n}\right) $ is the Wall finiteness obstruction of
$N$. With some additional work, one sees that $\sigma_{\infty}\left(
M^{n}\right) $ is trivial if and only if \emph{all} clean neighborhoods of
infinity in $M^{n}$ (or equivalently, arbitrarily small clean neighborhoods of
infinity) have finite homotopy type. For more details see \cite{Si} or
\cite{Gu}.
\begin{remark}
By giving a more general definition of $\sigma_{\infty}\left( M^{n}\right)
$, it is possible to separate Conditions 2) and 3); this has been done in
\cite{Gu}. However, in the current context, it seems better to keep Theorem
\ref{siebenmann} in its traditional form.
\end{remark}
\subsection{Combinatorial group theory and the Generalized Seifert-VanKampen
Theorem}
The last bit of background information we wish to comment on is primarily
combinatorial group theory. Given groups $G_{0},G_{1}$ and $G_{2}$ and
homomorphisms $i_{1}:G_{0}\rightarrow G_{1}$ and $i_{2}:G_{0}\rightarrow
G_{2}$ we call $G$ the \emph{pushout} of $\left( i_{1},i_{2}\right) $ if
there exist homomorphisms $j_{1}$ and $j_{2}$ completing a commutative diagram
\begin{equation}
\begin{array}
[c]{ccc}
G_{0} & \overset{i_{1}}{\rightarrow} & G_{1}\\
i_{2}\downarrow\quad & & \quad\downarrow j_{2}\\
G_{2} & \overset{j_{1}}{\rightarrow} & G
\end{array}
\tag{$\diamondsuit$}
\end{equation}
and satisfying the following `universal mapping property':
\begin{gather*}
\text{\emph{If homomorphisms} }k_{1}:G_{1}\rightarrow H\text{ \emph{and}
}k_{2}:G_{2}\rightarrow H\text{ \emph{allow for a similar}}\\
\text{\emph{commutative diagram, then there exists a unique homomorphism}}\\
\varphi:G\rightarrow H\text{ \emph{such that} }j_{1}\circ\varphi=k_{1}\text{
\emph{and} }j_{2}\circ\varphi=k_{2}.
\end{gather*}
In this case, $G$ is uniquely determined up to isomorphism.
In the special case where the above homomorphisms $i_{1}$ and $i_{2}$ are
injective, the pushout is called a \emph{free product with amalgamation of
}$G_{1}$ and $G_{2}$ along $G_{0}$ and is denoted by $G_{1}\ast_{G_{0}}G_{2}$.
With this terminology, we are implicitly viewing $G_{0}$ as a subgroup of both
$G_{1}$ and $G_{2}$. Then $G_{1}\ast_{G_{0}}G_{2}$ is the result of `gluing'
$G_{1}$ to $G_{2}$ along $G_{0}$. More precisely, if $\left\langle \left.
A_{1}\ \right\vert \ R_{1}\right\rangle $ and $\left\langle \left.
A_{2}\ \right\vert \ R_{2}\right\rangle $ are presentations for $G_{1}$ and
$G_{2}$ and $B$ is a generating set for $G_{0}$, then $\left\langle \left.
A_{1}\cup A_{2}\ \right\vert \ R_{1},R_{2},S\right\rangle $ is a presentation
for $G_{1}\ast_{G_{0}}G_{2}$ where
\[
S=\left\{ \left. i_{1}\left( y\right) ^{-1}i_{2}\left( y\right)
\ \right\vert \ y\in B\right\} .
\]
(The same procedure produces presentations for arbitrary pushouts.)
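For a concrete example, let $G_{1}=\left\langle x\right\rangle $ and
$G_{2}=\left\langle y\right\rangle $ be infinite cyclic, let $G_{0}
=\left\langle z\right\rangle \cong\mathbb{Z}$, and let $i_{1}\left( z\right)
=x^{2}$ and $i_{2}\left( z\right) =y^{3}$. Both homomorphisms are injective,
and the above recipe gives the presentation
\[
G_{1}\ast_{G_{0}}G_{2}=\left\langle \left. x,y\ \right\vert \ x^{-2}
y^{3}\right\rangle \cong\left\langle \left. x,y\ \right\vert \ x^{2}
=y^{3}\right\rangle ,
\]
which is the fundamental group of the trefoil knot complement.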
In topology, the most common application of `pushout diagrams' is found in the
Seifert-VanKampen Theorem \cite[Ch.4]{Ma}, which may be stated as follows: if a
space $X$ is expressed as a union $X=U\cup V$ of path connected open sets such
that $U\cap V$ is also path connected and $x\in U\cap V$, then $\pi_{1}\left(
X,x\right) $ is the pushout of
\[
\begin{array}
[c]{ccc}
\pi_{1}\left( U\cap V,x\right) & \overset{\theta_{1}}{\rightarrow} & \pi
_{1}\left( U,x\right) \\
\theta_{2}\downarrow & & \\
\pi_{1}\left( V,x\right) & &
\end{array}
\]
where $\theta_{1}$ and $\theta_{2}$ are induced by inclusion. In most cases,
$\theta_{1}$ and $\theta_{2}$ are not injective, so $\pi_{1}\left(
X,x\right) $ is not necessarily a free product with amalgamation.
The above group theoretic constructions can be extended to \emph{generalized
graphs of groups} and the more restrictive \emph{graphs of groups. }For either
of these constructions, we begin with an oriented graph $\Delta$. (Here a
`graph' is simply a $1$-dimensional CW complex.) Then, to each vertex $v$ we
associate a `vertex group' $G_{v}$ and to each edge $e$ we associate an `edge
group' $G_{e}$. In addition, for each edge group $G_{e}$ we need `edge
homomorphisms' $\varphi_{e}^{+}$ and $\varphi_{e}^{-}$ mapping $G_{e}$ into
the vertex group at the `positive' and `negative' end of $e$, respectively.
(If $e$ is a loop in $\Delta$, then $\varphi_{e}^{+}$ and $\varphi_{e}^{-}$
can be different homomorphisms into the same vertex group.) Let $\left(
\mathcal{G},\Delta\right) $ represent this setup. If each edge homomorphism
is injective, call $\left( \mathcal{G},\Delta\right) $ a `graph of groups';
otherwise it is just a `generalized graph of groups'.
Our next task is to assign, to an arbitrary generalized graph of groups
$\left( \mathcal{G},\Delta\right) $, a single group that generalizes the
pushout construction for the simple case. This could be done with a universal
mapping property. Instead, we describe a specific construction of the group.
Let $V$ [resp., $E$] denote the collection of vertices [resp., edges] of
$\Delta$. Choose a maximal tree $\Upsilon$ in $\Delta$. Then the
\emph{fundamental group of }$\left( \mathcal{G},\Delta\right) $ \emph{based
at} $\Upsilon$ is the group
\[
\pi_{1}\left( \mathcal{G},\Delta;\Upsilon\right) =((\ast_{v\in V}G_{v})\ast
F_{E})/N
\]
where $\ast_{v\in V}G_{v}$ is the free product of all vertex groups, $F_{E}$
is the free group generated by the set $E$ and $N$ is the smallest normal
subgroup of $(\ast_{v\in V}G_{v})\ast F_{E}$ generated by the set
\[
\left\{ \left. e^{-1}\cdot\varphi_{e}^{-}\left( x\right) \cdot
e\cdot\left( \varphi_{e}^{+}\left( x\right) \right) ^{-1}\ \right\vert
\ e\in E\text{ and }x\in G_{e}\right\} \cup\left\{ \left. e\ \right\vert
\ e\text{ an edge of }\Upsilon\right\} .
\]
\begin{example}
Diagram ($\diamondsuit$) determines a generalized graph of groups where the
graph is simply an oriented interval; moreover, the fundamental group of that
generalized graph of groups is precisely the pushout of that diagram. When
$i_{1}$ and $i_{2}$ are injective we have a genuine graph of groups whose
fundamental group is a free product with amalgamation.
A similar special case---this time, a graph of groups with just one vertex and
one edge---leads to another well-known construction in combinatorial group
theory---the HNN extension.
\end{example}
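Explicitly, in that one-vertex, one-edge case, let $G$ be the vertex group,
$H$ the edge group, and $\varphi^{-},\varphi^{+}:H\rightarrow G$ the
(injective) edge homomorphisms. The maximal tree $\Upsilon$ is the single
vertex, so (only edges of $\Upsilon$ being killed) the edge $e=t$ survives in
the quotient, and the construction yields the usual HNN presentation
\[
\left\langle \left. G,t\ \right\vert \ t^{-1}\varphi^{-}\left( x\right)
t=\varphi^{+}\left( x\right) \text{ for all }x\in H\right\rangle .
\]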
See \cite{Co} for details on the above ideas.
In group theory, it is preferable to study free products with amalgamation
over arbitrary pushouts. Similarly, genuine graphs of groups are preferable to
generalized graphs of groups. However, as noted above, arbitrary pushouts
occur naturally in topology via the classical Seifert-VanKampen Theorem.
Similarly, the following Generalized Seifert-VanKampen Theorem frequently
leads to a generalized graph of groups.
\begin{theorem}
[Generalized Seifert-VanKampen]Suppose a path connected space $X$ may be
expressed as a union of path connected open subsets $\left\{ U_{\alpha
}\right\} _{\alpha\in A}$ so that no point of $X$ lies in more than two of
the $U_{\alpha}$'s. Let $\Delta$ be the graph having vertex set $A$, and one
edge between $U_{\alpha}$ and $U_{\alpha^{\prime}}$ for each path component
$V_{\alpha\alpha^{\prime}\beta}$ of $U_{\alpha}\cap U_{\alpha^{\prime}}$
($\alpha\neq\alpha^{\prime}\in A$). Place an arbitrarily chosen orientation on
each edge; then choose a base point from each $U_{\alpha}$ [resp., each
$V_{\alpha\alpha^{\prime}\beta}$] and associate to the corresponding vertex
[resp., edge] the fundamental group of $U_{\alpha}$ [resp., $V_{\alpha
\alpha^{\prime}\beta}$]. For each $V_{\alpha\alpha^{\prime}\beta}$, choose
paths in $U_{\alpha}$ and $U_{\alpha^{\prime}}$ respectively, connecting the
base point of $V_{\alpha\alpha^{\prime}\beta}$ to the base points of
$U_{\alpha}$ and $U_{\alpha^{\prime}}$. Let the two edge homomorphisms for
$V_{\alpha\alpha^{\prime}\beta}$ be those induced by inclusion followed by
change of basepoints along these paths. If $\left( \mathcal{G},\Delta\right)
$ denotes this generalized graph of groups, then $\pi_{1}(X,x)$ is isomorphic to $\pi
_{1}\left( \mathcal{G},\Delta;\Upsilon\right) $ where $\Upsilon$ is an
arbitrarily chosen maximal tree in $\Delta.$
\begin{proof}
See Chapter 2 of \cite{Ge}.
\end{proof}
\end{theorem}
\section{Proof of Theorem \ref{main theorem}\label{Section: Proof}}
In order to prove Theorem \ref{main theorem} we need only show that, if an
open manifold $M^{n}$ has finite homotopy type, then $M^{n}\times\mathbb{R}$
is a one-ended open manifold satisfying all three conditions of Theorem
\ref{siebenmann}. Proposition \ref{main-prop} begins that process; Part a)
asserts that $M^{n}\times\mathbb{R}$ is one-ended and open, while Part b)
ensures that Condition 1) holds. Strictly speaking, the `end obstruction'
$\sigma_{\infty}\left( M^{n}\times\mathbb{R}\right) $ found in Theorem
\ref{siebenmann} cannot be defined until it is known that Condition 2) holds.
Even so, it is possible to address Condition 3) before Condition 2) by proving
that \emph{all} clean neighborhoods of infinity in $M^{n}\times\mathbb{R}$
have finite homotopy type. This will be done (under the assumption that
$M^{n}$ has finite homotopy type) in Part c). Therefore, in the context of
Theorem \ref{main theorem}, once Condition 2) is verified, Condition 3)
follows immediately. To simplify the discussion, we refer to a manifold in
which all clean neighborhoods of infinity have finite homotopy type as
\emph{super-inward tame at infinity.}
\subsection{Conditions 1) and 3) of Theorem \ref{siebenmann}}
Before stating Proposition \ref{main-prop}, we introduce some terminology and
notation to used through the reest of this paper. Given a connected open
manifold $M^{n}$, a clean neighborhood of infinity $N\subseteq M^{n}$, and
$m>0$, the \emph{associated neighborhood of infinity} in $M^{n}\times
\mathbb{R}$ is the set
\[
W\left( N,m\right) =(N\times\mathbb{R})\cup(M^{n}\times((-\infty
,-m]\cup\lbrack m,\infty))).
\]
It is easy to see that $W\left( N,m\right) $ is indeed a neighborhood of
infinity, that $W\left( N,m\right) $ is always connected, and that $W\left(
N,m\right) $ is a $0$-neighborhood of infinity in $M^{n}\times\mathbb{R}$
whenever $N$ is a $0$-neighborhood of infinity in $M^{n}$.
In addition, let
\begin{align*}
W^{+}\left( N,m\right) & =(N\times\mathbb{R})\cup(M^{n}\times\lbrack
m,\infty))\text{ and}\\
W^{-}\left( N,m\right) & =(N\times\mathbb{R})\cup(M^{n}\times(-\infty,-m]).
\end{align*}
Then $W^{+}\left( N,m\right) $ deformation retracts onto $M^{n}
\times\left\{ m\right\} $ and $W^{-}\left( N,m\right) $ deformation
retracts onto $M^{n}\times\left\{ -m\right\} $, so both are homotopy
equivalent to $M^{n}$. Moreover, $W^{+}\left( N,m\right) \cap W^{-}\left(
N,m\right) =N\times\mathbb{R}\simeq N$.
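Note also that, after slightly thickening $W^{+}\left( N,m\right) $ and
$W^{-}\left( N,m\right) $ to open sets, the classical Seifert-VanKampen
Theorem applies to the decomposition $W\left( N,m\right) =W^{+}\left(
N,m\right) \cup W^{-}\left( N,m\right) $: whenever $N$ is connected,
$\pi_{1}\left( W\left( N,m\right) \right) $ is the pushout of the diagram
$\pi_{1}\left( M^{n}\right) \leftarrow\pi_{1}\left( N\right) \rightarrow
\pi_{1}\left( M^{n}\right) $, with both homomorphisms induced by inclusion.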
\begin{proposition}
\label{main-prop}Let $M^{n}$ be a connected open $n$-manifold. Then
\begin{enumerate}
\item[a)] $M^{n}\times\mathbb{R}$ is a one-ended open $\left( n+1\right) $-manifold.
\item[b)] $M^{n}\times\mathbb{R}$ is inward tame at infinity iff $M^{n}$ is
finitely dominated.
\item[c)] $M^{n}\times\mathbb{R}$ is super-inward tame at infinity iff $M^{n}$
has finite homotopy type.
\end{enumerate}
\begin{proof}
As noted above, all neighborhoods of infinity of the type $W\left(
N,m\right) $ are connected; moreover, they can be made arbitrarily small by
choosing $N$ to be small and $m$ large. Therefore, $M^{n}\times\mathbb{R}$ is one-ended.
The forward implications of Assertions b) and c) are immediate. In particular,
since $M^{n}\times\mathbb{R}$ itself is a clean neighborhood of infinity in
$M^{n}\times\mathbb{R}$, both implications can be deduced from the homotopy
equivalence $M^{n}\simeq M^{n}\times\mathbb{R}$. Thus, we turn our attention
to the two converses.
Given $W\left( N,m\right) $, let $C=M^{n}-int\left( N\right) $ (a compact
codimension $0$ submanifold of $M^{n}$). Let $C^{\prime}$ denote a
second `copy' of $C$ disjoint from $M^{n}$, and let $K_{N}$ be the adjunction
space
\[
K_{N}=M^{n}\cup_{\varphi}C^{\prime}
\]
obtained by attaching $C^{\prime}$ to $M^{n}$ along its boundary via the
`identity map' $\varphi:\partial C^{\prime}\rightarrow\partial C$.
It is easy to see that $K_{N}$ is homotopy equivalent to $W\left( N,m\right)
$; indeed, $W\left( N,m\right) $ deformation retracts onto the subset
\[
(M^{n}\times\left\{ m\right\} )\cup\left( \partial C\times\left[
-m,m\right] \right) \cup\left( C\times\left\{ -m\right\} \right)
\]
which is homeomorphic to $K_{N}$. Thus, to show that $M^{n}\times\mathbb{R}$
is inward tame at $\infty$, [resp., $M^{n}\times\mathbb{R}$ is super-inward
tame at $\infty$], it suffices to show that $K_{N}$ is finitely dominated
[resp., has finite homotopy type].
\begin{claim}
If $M^{n}$ is finitely dominated, then $K_{N}$ is finitely
dominated.
\end{claim}
By Lemma \ref{improved-domination}, we may choose a homotopy $J:M^{n}
\times\left[ 0,1\right] \rightarrow M^{n}$ such that $\left. J\right\vert
_{(M^{n}\times\left\{ 0\right\} )\cup\left( C\times\left[ 0,1\right]
\right) }$ is the identity, and $\overline{J_{1}\left( M^{n}\right) }$ is
compact. Extend $J$ to a homotopy $J^{\ast}:K_{N}\times\left[ 0,1\right]
\rightarrow K_{N}$ by letting $J^{\ast}$ be the identity over $C^{\prime}$.
Then $J_{0}^{\ast}$ is the identity, and $J_{1}^{\ast}\left( K_{N}\right) $
has compact closure; so $K_{N}$ is finitely dominated.
\begin{claim}
If $M^{n}$ has finite homotopy type, then $K_{N}$ has finite homotopy
type.
\end{claim}
Let $f:M^{n}\rightarrow L$ be a homotopy equivalence, where $L$ is a finite
complex. Then $K_{N}=M^{n}\cup_{\varphi}C^{\prime}$ is homotopy equivalent to
the adjunction space
\[
L\cup_{f\circ\varphi}C^{\prime}
\]
where $f\circ\varphi$ maps $\partial C^{\prime}$ into $L$. This latter
adjunction space is homotopy equivalent to a finite complex. In fact, if we
begin with a triangulation of $M^{n}$ so that $C$ and $\partial C$ are
subcomplexes and choose $f$ to be a cellular map, then $f\circ\varphi$ is also
cellular and $L\cup_{f\circ\varphi}C^{\prime}$ is a finite complex.
\end{proof}
\end{proposition}
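\begin{remark}
As a quick illustration of the constructions above (included only as a sanity
check), let $M^{n}=\mathbb{R}^{n}$ and let $N$ be the complement of a closed
round ball $C$. Then $K_{N}=\mathbb{R}^{n}\cup_{\varphi}C^{\prime}$ is
obtained by capping off $\partial C\approx S^{n-1}$ with a second $n$-ball;
collapsing the contractible subcomplex $\mathbb{R}^{n}$ shows that
$K_{N}\simeq S^{n}$. This matches the fact that the corresponding
neighborhoods of infinity $W\left( N,m\right) $ in $\mathbb{R}^{n+1}$ are
complements of bounded convex sets, hence homotopy equivalent to $S^{n}$.
\end{remark}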
\subsection{Main Step: Stability of $\pi_{1}$ at infinity}
The following will show that $M^{n}\times\mathbb{R}$ satisfies Condition 2) of
Theorem \ref{siebenmann}, and thus complete our proof of Theorem
\ref{main theorem}.
\begin{proposition}
\label{pi1-stable}If a connected open manifold $M^{n}$ is finitely dominated,
then $M^{n}\times\mathbb{R}$ has stable fundamental group at infinity.
\end{proposition}
Let $M^{n}$ be a $k$-ended open $n$-manifold and $P$ and $Q$ be $0$
-neighborhoods of infinity with $Q\subseteq int\left( P\right) $. Index the
components $P^{1},P^{2},\cdots,P^{k}$ of $P$ and $Q^{1},Q^{2},\cdots,Q^{k}$ of
$Q$ so that $Q^{j}\subseteq P^{j}$ for $j=1,\cdots,k$. For each $j$, let
$A^{j}=\overline{P^{j}-Q^{j}}$.
If $M^{n}$ is finitely dominated, choose $P$ sufficiently small that there is
a homotopy $H:M^{n}\times\left[ 0,1\right] \rightarrow M^{n}$ pulling
$M^{n}$ into $M^{n}-P$. In addition (by Lemma \ref{improved-domination})
arrange that $H$ is fixed over some non-empty open set $U$. To simplify
notation, we focus on a single end; in particular, let $j\in\left\{
1,\cdots,k\right\} $ be fixed. Choose basepoints $p_{\ast}\in U$,
$p\in\partial P^{j}$ and $q\in\partial Q^{j}$; then choose a proper embedding
$r:[0,\infty)\rightarrow M^{n}$ such that $r\left( 0\right) =p_{\ast}$,
$r\left( 1\right) =p$, $r\left( 2\right) =q$, and so that the image ray
$R=r\left( [0,\infty\right) )$ intersects each of $\partial P^{j}$ and
$\partial Q^{j}$ transversely once and only once at the points $p$ and $q$,
respectively. Let $\alpha=R\cap A^{j}$ denote the corresponding arc in $A^{j}$
between $p$ and $q$.
Let $t:B^{n-1}\times\lbrack-1,\infty)\rightarrow M^{n}$ be a homeomorphism
onto a regular neighborhood $T$ of $R$ so that $\left. t\right\vert
_{\left\{ \overline{0}\right\} \times\lbrack0,\infty)}=r$, and so that
$T\cap A^{j}$ is a relative regular neighborhood of $\alpha$ in $A^{j}$
intersecting $\partial P^{j}$ and $\partial Q^{j}$ in $\left( n-1\right)
$-disks $D$ and $D^{\prime}$, with $D=t\left( B^{n-1}\times\left\{
1\right\} \right) $ and $D^{\prime}=t\left( B^{n-1}\times\left\{
2\right\} \right) $. Then choose an $\left( n-1\right) $-ball
$B_{0}\subseteq intB^{n-1}$, centered at $\overline{0}$; and let
$T_{0}=t\left( B_{0}\times\lbrack-1,\infty)\right) $ be the corresponding
smaller regular neighborhood of $R$, with corresponding subdisks $D_{0}$ and
$D_{0}^{\prime}$ contained in $intD$ and $intD^{\prime}$, respectively. We now
utilize the `homotopy refinement procedure' developed on pages 267-268 of
\cite{GuTi} to replace $H$ with a new homotopy $K:M^{n}\times\left[
0,1\right] \rightarrow M^{n}$ which, in addition to pulling $M^{n}$ into
$M^{n}-P$, has the properties:
\begin{itemize}
\item[i)] $K$ is `canonical' over $T_{0}$, and
\item[ii)] tracks of points lying outside $T_{0}$ do not pass through the
interior of $T_{0}$.
\end{itemize}
The first of these properties arranges that all tracks of points in $R$
proceed monotonically in $R$ to $p$; and that $\left. K\right\vert
_{D_{0}^{\prime}\times\left[ 0,1\right] }$ takes $D_{0}^{\prime
}\times\left[ 0,\frac{1}{2}\right] $ homeomorphically onto $t\left(
B_{0}\times\left[ 0,2\right] \right) $, with $D_{0}^{\prime}\times\left[
\frac{1}{2},1\right] $ being flattened onto $t\left( B_{0}\times\left\{
0\right\} \right) $. We may also arrange that $K\left( D_{0}^{\prime}
\times\left\{ \frac{1}{4}\right\} \right) =D_{0}$. See \cite{GuTi} for details.
\begin{proposition}
\label{pushing-loops}Assume the above setup, with $M^{n}$ finitely dominated,
$j\in\left\{ 1,\cdots,k\right\} $ fixed, and all previous notation
unchanged. Then every loop $\tau$ in $P^{j}$ based at $p$ is homotopic (rel
$p$) in $M^{n}$ to a loop of the form $\alpha\ast\tau^{\prime}\ast\alpha^{-1}
$, where $\tau^{\prime}$ is a loop in $Q^{j}$ based at $q$.
\begin{proof}
Every loop in $P^{j}$ based at $p$ is homotopic (rel $p$) to a product
$\tau_{1}\ast\tau_{2}\ast\cdots\ast\tau_{u}$ where, for each $v$, either $\tau
_{v}$ lies entirely in $A^{j}$ or $\tau_{v}$ is (already) a loop of the form
$\alpha\ast\tau_{v}^{\prime}\ast\alpha^{-1}$ where $\tau_{v}^{\prime}$ is a
loop in $Q^{j}$ based at $q$. So, without loss of generality, we assume that
$\tau$ lies entirely in $A^{j}$.
Consider the map $L=\left. K\right\vert _{\partial Q^{j}\times\left[
0,1\right] }:\partial Q^{j}\times\left[ 0,1\right] \rightarrow M^{n}$.
Choose triangulations $\Delta_{1}$ and $\Delta_{2}$ of the domain and range,
respectively. Without changing its definition on $(\partial Q^{j}
\times\left\{ 0\right\} )\cup(D_{0}^{\prime}\times\left[ 0,\frac{1}
{2}\right] )$, adjust $L$ (up to a small homotopy) to a non-degenerate
simplicial map. Then adjust $\tau$ (rel $p$) to an embedded circle in general
position with respect to $\Delta_{2}$, lying entirely in $intA^{j}$, except at
its basepoint $p$, which lies in $\partial A^{j}$. Then $L^{-1}\left(
\tau\right) $ is a closed $1$-manifold lying in $\partial Q^{j}\times(0,1)$.
Let $\sigma$ be the component of $L^{-1}\left( \tau\right) $ containing the
point $\left( q,\frac{1}{4}\right) $. Since $L$ takes a neighborhood of
$\left( q,\frac{1}{4}\right) $ homeomorphically onto a neighborhood of $p$,
and since no other points of $\sigma$ are taken near $p$ (use property ii)
above), then $L$ takes $\sigma$ onto $\tau$ in a degree $1$ fashion. Now the
natural deformation retraction of $\partial Q^{j}\times\left[ 0,1\right] $ onto
$\partial Q^{j}\times\left\{ 0\right\} $ pushes $\sigma$ into $\partial
Q^{j}\times\left\{ 0\right\} $, while sliding $\left( q,\frac{1}{4}\right) $
along the arc $\left\{ q\right\} \times\left[ 0,\frac{1}{4}\right] $.
Composing this push with $L$ provides a homotopy of $\tau$ to a loop
$\tau^{\prime}$ in $\partial Q^{j}$, whereby the basepoint $p$ is slid along
$\alpha$ to $q$. This provides the desired (basepoint preserving) homotopy
from $\tau$ to $\alpha\ast\tau^{\prime}\ast\alpha^{-1}$.
\end{proof}
\end{proposition}
\begin{corollary}
\label{isomorphic images}Assume the full setup for Proposition
\ref{pushing-loops} and let
\begin{align*}
\Gamma_{P^{j}} & =im\left( \pi_{1}\left( P^{j},p\right) \rightarrow
\pi_{1}\left( M^{n},p\right) \right) \text{, and}\\
\Gamma_{Q^{j}} & =im\left( \pi_{1}\left( Q^{j},q\right) \rightarrow
\pi_{1}\left( M^{n},q\right) \right) .
\end{align*}
Then the change of basepoint isomorphism $\widehat{\alpha}:\pi_{1}\left(
M^{n},q\right) \rightarrow\pi_{1}\left( M^{n},p\right) $ takes
$\Gamma_{Q^{j}}$ isomorphically onto $\Gamma_{P^{j}}$.
\begin{proof}
Since $\alpha\cup Q^{j}\subseteq P^{j}$, it is clear that $\widehat{\alpha}$
takes $\Gamma_{Q^{j}}$ into $\Gamma_{P^{j}}$. Injectivity is immediate, and
Proposition \ref{pushing-loops} assures surjectivity.
\end{proof}
\end{corollary}
We now turn our attention back to the manifold $M^{n}\times\mathbb{R}$. In
order to understand the fundamental group system at infinity, it will suffice
to understand `special' neighborhoods of infinity of the sort $W\left(
N,m\right) $ (along with corresponding bonding maps). To simplify the
exposition, we first consider the special case where $M^{n}$ itself is
one-ended. Afterwards we upgrade the proof so that it includes the general case.
\begin{proposition}
\label{technical prop-one-ended}Suppose $M^{n}$ is a one-ended open
$n$-manifold and $P$ and $Q$ are $0$-neighborhoods of infinity in $M^{n}$ with
$Q\subseteq intP$. Choose $p\in\partial P$, $q\in\partial Q$ and $\alpha$ a
path in $\overline{P-Q}$ connecting $p$ to $q$. For $0<m<m^{\prime}<\infty$,
let $W\left( P,m\right) \supseteq W\left( Q,m^{\prime}\right) $ be
corresponding neighborhoods of infinity in $M^{n}\times\mathbb{R}$ and
$\lambda:\pi_{1}\left( W\left( Q,m^{\prime}\right) ,(q,0)\right)
\rightarrow\pi_{1}\left( W\left( P,m\right) ,(p,0)\right) $ the
homomorphism induced by inclusion followed by a change of basepoints along
$\alpha\times0$. Then
\begin{enumerate}
\item $\pi_{1}\left( W\left( P,m\right) ,(p,0)\right) \cong\pi_{1}\left(
M^{n},p\right) \ast_{\Gamma_{P}}\pi_{1}\left( M^{n},p\right) $, where
\[
\Gamma_{P}=im\left( \pi_{1}\left( P,p\right) \rightarrow\pi_{1}\left(
M^{n},p\right) \right) ,
\]
\item $\pi_{1}\left( W\left( Q,m^{\prime}\right) ,(q,0)\right) \cong
\pi_{1}\left( M^{n},q\right) \ast_{\Gamma_{Q}}\pi_{1}\left( M^{n},q\right)
$, where
\[
\Gamma_{Q}=im\left( \pi_{1}\left( Q,q\right) \rightarrow\pi_{1}\left(
M^{n},q\right) \right) ,
\]
\item the homomorphism $\lambda$ is surjective, and
\item if there exists a homotopy pulling $M^{n}$ into $M^{n}-P$, then
$\lambda$ is an isomorphism.
\end{enumerate}
\begin{proof}
Using our earlier notation, the Seifert-VanKampen Theorem establishes
$\pi_{1}\left( W\left( P,m\right) ,(p,0)\right) $ as the pushout
of the diagram
\[
\begin{array}
[c]{ccc}
\pi_{1}\left( P\times\mathbb{R},(p,0)\right) & \rightarrow
&
\pi_{1}\left( W^{+}\left( P,m\right) ,(p,0)\right) \\
\downarrow
& & \\
\pi_{1}\left( W^{-}\left( P,m\right) ,(p,0)\right) & &
\end{array}
\]
where both homomorphisms are induced by inclusion. Homotopy equivalences
\begin{align*}
\left( P\times\mathbb{R},(p,0)\right) & \simeq\left( P,p\right) \text{,
and}\\
\left( W^{+}\left( P,m\right) ,(p,0)\right) & \simeq\left(
M^{n},p\right) \simeq\left( W^{-}\left( P,m\right) ,(p,0)\right)
\end{align*}
allow us to replace the above with a simpler diagram
\[
\begin{array}
[c]{ccc}
\pi_{1}\left( P,p\right)
& \overset{i_{\ast}}{\rightarrow} &
\pi_{1}\left( M^{n},p\right) \\
i_{\ast}\downarrow
& & \\
\pi_{1}\left( M^{n},p\right) & &
\end{array}
.
\]
This diagram does not define a free product with amalgamation, since $i_{\ast}$
needn't be injective; however, the pushout is identical to that of
\begin{equation}
\begin{array}
[c]{ccc}
\Gamma_{P}
& \rightarrow & \pi_{1}\left( M^{n},p\right) \\
\downarrow
& & \\
\pi_{1}\left( M^{n},p\right) & &
\end{array}
\tag{\#}
\end{equation}
(both homomorphisms are inclusions) which determines the free product with
amalgamation promised in 1).
Of course, assertion 2) is identical to 1). Then, from (\#) and the analogous
diagram for $\pi_{1}\left( W\left( Q,m^{\prime}\right) ,(q,0)\right) $, we
see that $\lambda$ is induced by the isomorphism $\pi_{1}\left(
M^{n},q\right) \ast\pi_{1}\left( M^{n},q\right) \rightarrow\pi_{1}\left(
M^{n},p\right) \ast\pi_{1}\left( M^{n},p\right) $ by quotienting out (in
the domain and range) by relations induced by $\Gamma_{Q}$ and $\Gamma_{P}$,
respectively---according to the definition of free product with amalgamation.
Thus, $\lambda$ is necessarily surjective. Moreover, if there is a homotopy
pulling $M^{n}$ into $M^{n}-P$, Corollary \ref{isomorphic images} ensures that
$\lambda$ is an isomorphism.
\end{proof}
\end{proposition}
\begin{corollary}
Let $M^{n}$ be a connected, one-ended open $n$-manifold. Then $M^{n}
\times\mathbb{R}$ is a one-ended open $\left( n+1\right) $-manifold with
semistable fundamental group at infinity. If $M^{n}$ is finitely dominated,
then $M^{n}\times\mathbb{R}$ has stable fundamental group at infinity which is
pro-isomorphic to the system
\[
\pi_{1}\left( M^{n}\right) \ast_{\Gamma}\pi_{1}\left( M^{n}\right)
\overset{id}{\longleftarrow}\pi_{1}\left( M^{n}\right) \ast_{\Gamma}\pi
_{1}\left( M^{n}\right) \overset{id}{\longleftarrow}\pi_{1}\left(
M^{n}\right) \ast_{\Gamma}\pi_{1}\left( M^{n}\right) \overset
{id}{\longleftarrow}\cdots
\]
where $\Gamma$ is the image (translated by an appropriate change of basepoint
isomorphism) in $\pi_{1}\left( M^{n}\right) $ of the fundamental group of any
sufficiently small $0$-neighborhood of infinity in $M^{n}$.
\begin{proof}
This corollary is almost immediate. One simply chooses a neat sequence
$\left\{ N_{i}\right\} _{i=1}^{\infty}$ of $0$-neighborhoods of infinity in
$M^{n}$, then applies the previous proposition (repeatedly) to the sequence
$\left\{ W\left( N_{i},i\right) \right\} _{i=1}^{\infty}$. If $M^{n}$ is
finitely dominated, $N_{1}$ should be chosen sufficiently small that $M^{n}$ can
be pulled into $M^{n}-N_{1}$.
\end{proof}
\end{corollary}
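\begin{remark}
To illustrate the corollary in the simplest possible case (included only as a
sanity check), let $M^{n}=\mathbb{R}^{n}$ with $n\geq3$, and let $N$ be the
complement of a closed ball. Then $\pi_{1}\left( N\right) \cong\pi_{1}\left(
S^{n-1}\right) =1$, so $\Gamma=1$ and the fundamental group at infinity of
$\mathbb{R}^{n}\times\mathbb{R}=\mathbb{R}^{n+1}$ is pro-isomorphic to the
constant system $1\ast_{1}1=1$. This recovers the standard fact that
$\mathbb{R}^{n+1}$ is simply connected at infinity.
\end{remark}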
In the introduction, we noted that, without the hypothesis of finite
domination on $M^{n}$, the fundamental group at infinity in $M^{n}
\times\mathbb{R}$ needn't be stable. This is now easy to exhibit.
\begin{example}
[An $M^{n}\times\mathbb{R}$ with nonstable $\pi_{1}$ at infinity]Let
$T_{1}=B^{n-1}\times S^{1}\subseteq S^{n-1}\times S^{1}$ where $B^{n-1}
\subseteq S^{n-1}$ is a tamely embedded $\left( n-1\right) $-ball. Then let
$T_{2}\subseteq int(T_{1})$ be another (thinner) copy of $B^{n-1}\times S^{1}$
that winds around twice in the $S^{1}$ direction. Inside $T_{2}$ choose a
third (even thinner) copy of $B^{n-1}\times S^{1}$ that winds through $T_{2}$
twice in the $S^{1}$ direction---and thus, four times through $T_{1}$ in the
original $S^{1}$ direction. Continue this infinitely to get a nested sequence
$T_{1}\supseteq T_{2}\supseteq T_{3}\supseteq\cdots$ so that $T_{\infty
}=\bigcap_{i=1}^{\infty}T_{i}\subseteq S^{n-1}\times S^{1}$ is the
\emph{dyadic solenoid}. Then
$M^{n}=(S^{n-1}\times S^{1})-T_{\infty}$ is a one-ended open $n$-manifold and
each $N_{i}=T_{i}-T_{\infty}$ is a $0$-neighborhood of infinity. Provided
$n>3$, it is easy to see that $N_{i}\hookrightarrow T_{i}$ induces a $\pi_{1}
$-isomorphism. Therefore, the inverse sequence
\[
\pi_{1}\left( N_{1},p_{1}\right) \overset{\lambda_{1}}{\longleftarrow}
\pi_{1}\left( N_{2},p_{2}\right) \overset{\lambda_{2}}{\longleftarrow}
\pi_{1}\left( N_{3},p_{3}\right) \overset{\lambda_{3}}{\longleftarrow}
\cdots
\]
is isomorphic to the sequence
\[
\mathbb{Z}\overset{\times2}{\longleftarrow}\mathbb{Z}\overset{\times
2}{\longleftarrow}\mathbb{Z}\overset{\times2}{\longleftarrow}\mathbb{Z}
\overset{\times2}{\longleftarrow}\cdots\text{.}
\]
A more descriptive form of the above inverse sequence is:
\[
\mathbb{Z}\hookleftarrow2\mathbb{Z}\hookleftarrow4\mathbb{Z}\hookleftarrow
8\mathbb{Z}\hookleftarrow\cdots\text{.}
\]
It follows that, for the corresponding sequence $\left\{ W\left(
N_{i},i\right) \right\} _{i=1}^{\infty}$ of neighborhoods of infinity in
$M^{n}\times\mathbb{R}$ we obtain a representation of the fundamental group at
infinity for $M^{n}\times\mathbb{R}$ isomorphic to the sequence
\[
\mathbb{Z}\ast_{\mathbb{Z}}\mathbb{Z}\leftarrow\mathbb{Z}\ast_{2\mathbb{Z}
}\mathbb{Z}\leftarrow\mathbb{Z}\ast_{4\mathbb{Z}}\mathbb{Z}\leftarrow
\mathbb{Z}\ast_{8\mathbb{Z}}\mathbb{Z}\leftarrow\cdots,
\]
where each bonding map is induced by the identity $\mathbb{Z}\ast\mathbb{Z}
\rightarrow\mathbb{Z}\ast\mathbb{Z}$. Thus, each bond is surjective but not
injective. It is easy to see
that such a system cannot be stable.
\end{example}
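\begin{remark}
For the reader's convenience, we sketch one way to confirm the instability
asserted at the end of the example: abelianize. Writing
\[
G_{k}=\mathbb{Z}\ast_{2^{k}\mathbb{Z}}\mathbb{Z}=\left\langle x,y\mid
x^{2^{k}}=y^{2^{k}}\right\rangle ,
\]
the substitution $u=x-y$, $v=y$ identifies the abelianization as
$H_{1}\left( G_{k}\right) \cong\mathbb{Z}/2^{k}\oplus\mathbb{Z}$, and each
bond (being induced by the identity on $x$ and $y$) abelianizes to the
reduction $\mathbb{Z}/2^{k+1}\oplus\mathbb{Z}\rightarrow\mathbb{Z}
/2^{k}\oplus\mathbb{Z}$, a surjection with kernel of order $2$. Hence no
composition of bonds is injective, so no subsequence of the system can be
pro-isomorphic to a constant system.
\end{remark}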
\begin{remark}
It is interesting to see that `crossing with $\mathbb{R}$' takes examples with
nonstable but pro-injective fundamental groups at infinity and produces
examples with nonstable but pro-surjective (semistable) fundamental groups at
infinity.
\end{remark}
We are now prepared to address the general situation where $M^{n}$ is
$k$-ended ($1\leq k<\infty$). For $k>1$, calculation of $\pi_{1}\left(
W\left( N,m\right) \right) $ is complicated by the fact that $W^{+}\left(
N,m\right) \cap W^{-}\left( N,m\right) =N\times\mathbb{R}$ is not
connected. In this situation, $\pi_{1}\left( W\left( N,m\right) \right) $
is most effectively described using a (generalized or actual) graph of groups.
In particular, let $\Theta_{k}$ denote the oriented graph consisting of two
vertices $v^{+}$ and $v^{-}$ and $k$ oriented edges $e^{1},e^{2},\cdots,e^{k}$
each running from $v^{-}$ to $v^{+}$. If $N^{1},N^{2},\cdots,N^{k}$ are the
components of $N$ with basepoints $p^{1},p^{2},\cdots,p^{k}$
respectively, we associate the following groups and homomorphisms to
$\Theta_{k}$:
\begin{itemize}
\item $G\left( v^{+}\right) =\pi_{1}\left( W^{+}\left( N,m\right)
,\left( p^{1},0\right) \right) $ and $G\left( v^{-}\right) =\pi
_{1}\left( W^{-}\left( N,m\right) ,\left( p^{1},0\right) \right) ,$
\item for each $j\in\left\{ 1,2,\cdots,k\right\} $, $G\left( e^{j}\right)
=\pi_{1}\left( N^{j}\times\mathbb{R},\left( p^{j},0\right) \right) ,$
\item For each $j\in\left\{ 1,2,\cdots,k\right\} $ the homomorphism
$\varphi_{j}^{+}:G\left( e^{j}\right) \rightarrow G\left( v^{+}\right) $
is the composition
\[
\pi_{1}\left( N^{j}\times\mathbb{R},\left( p^{j},0\right) \right)
\overset{i_{\ast}}{\longrightarrow}\pi_{1}\left( W^{+}\left( N,m\right)
,\left( p^{j},0\right) \right) \overset{\widehat{\beta^{j}}}
{\longrightarrow}\pi_{1}\left( W^{+}\left( N,m\right) ,\left(
p^{1},0\right) \right)
\]
where $\beta^{j}$ is an appropriately chosen path in $W^{+}\left( N,m\right)
$ from $\left( p^{j},0\right) $ to $\left( p^{1},0\right) $.
\item The homomorphisms $\varphi_{j}^{-}:G\left( e^{j}\right) \rightarrow
G\left( v^{-}\right) $ are defined similarly to the above, but with $\beta^{j}$
chosen to lie in $W^{-}\left( N,m\right) $.
\end{itemize}
\noindent Since $\varphi_{j}^{+}$ and $\varphi_{j}^{-}$ needn't be injective,
the above setup is just a \emph{generalized} graph of groups. Let it be
denoted by $(\mathcal{G}\left( N\right) ,\Theta_{k})$.
Note that the edge $e^{1}$, by itself, is a maximal tree in $\Theta_{k}$. By
the Generalized Seifert-VanKampen Theorem, $\pi_{1}\left( W\left( N,m\right)
,\left( p^{1},0\right) \right) \cong\pi_{1}\left( \mathcal{G}\left( N\right)
,\Theta_{k};e^{1}\right) $.
We may obtain a similar---but simpler---graph of groups as follows. Again we
start with the graph $\Theta_{k}$. Motivated by the homotopy equivalences
$W^{+}\left( N,m\right) \simeq M^{n}\simeq W^{-}\left( N,m\right) $,
define both $G^{\prime}\left( v^{+}\right) $ and $G^{\prime}\left(
v^{-}\right) $ to be $\pi_{1}\left( M^{n},p^{1}\right) $. Then, in order to
obtain injective edge homomorphisms, for each $j\in\left\{ 1,2,\cdots
,k\right\} $, let
\[
G^{\prime}\left( e^{j}\right) =\Gamma_{N^{j}}=im\left( \pi_{1}\left(
N^{j},p^{j}\right) \overset{i_{\ast}}{\longrightarrow}\pi_{1}\left(
M^{n},p^{j}\right) \overset{\widehat{\beta^{j\prime}}}{\longrightarrow}
\pi_{1}\left( M^{n},p^{1}\right) \right)
\]
and let all edge homomorphisms be inclusions. Here $\beta^{j\prime}$ is a path
in $M^{n}$ `parallel' to the path $\beta^{j}$ used above. This new graph of
groups will be denoted $\left( \mathcal{G}^{\prime}\left( N\right)
,\Theta_{k}\right) $. It is easy to see that there is a canonical isomorphism between
$\pi_{1}\left( \mathcal{G}\left( N\right) ,\Theta_{k};e^{1}\right) $ and
$\pi_{1}\left( \mathcal{G}^{\prime}\left( N\right) ,\Theta_{k}
;e^{1}\right) $.
We are now ready to state a more general version of Proposition
\ref{technical prop-one-ended}, suitable for multi-ended $M^{n}$.
\begin{proposition}
\label{technical prop-k-ended}Suppose $M^{n}$ is a $k$-ended open $n$-manifold
and $P$ and $Q$ are $0$-neighborhoods of infinity in $M^{n}$ with components
$P^{1},P^{2},\cdots,P^{k}$ and $Q^{1},Q^{2},\cdots,Q^{k}$, such that
$Q^{j}\subseteq int(P^{j})$ for each $j$. Choose basepoints $p^{j}
\in\partial P^{j}$ and $q_{j}\in\partial Q^{j}$, paths $\alpha^{j}$ in
$\overline{P^{j}-Q^{j}}$ connecting $p^{j}$ to $q_{j}$, and paths $\beta^{j}$
in $M^{n}$ connecting $q_{j}$ to $q_{1}$ for each $j\in\left\{ 1,2,\cdots
,k\right\} $. For $0<m<m^{\prime}<\infty$, let $W\left( P,m\right)
\supseteq W\left( Q,m^{\prime}\right) $ be corresponding neighborhoods of
infinity in $M^{n}\times\mathbb{R}$, and $\lambda:\pi_{1}\left( W\left(
Q,m^{\prime}\right) ,(q_{1},0)\right) \rightarrow\pi_{1}\left( W\left(
P,m\right) ,(p^{1},0)\right) $ the homomorphism induced by inclusion
followed by a change of basepoints along $\alpha^{1}\times0$. Then
\begin{enumerate}
\item $\pi_{1}\left( W\left( P,m\right) ,(p^{1},0)\right) \cong\pi
_{1}\left( \mathcal{G}^{\prime}\left( P\right) ,\Theta_{k};e^{1}\right) $,
where $\left( \mathcal{G}^{\prime}\left( P\right) ,\Theta_{k}\right) $ is
the graph of groups pictured below with
\[
\Gamma_{P^{j}}=im\left( \pi_{1}\left( P^{j},p^{j}\right) \overset{i_{\ast}
}{\longrightarrow}\pi_{1}\left( M^{n},p^{j}\right) \overset{\widehat
{\gamma^{j}}}{\longrightarrow}\pi_{1}\left( M^{n},p^{1}\right) \right) ,
\]
and $\gamma^{j}=\alpha^{j}\ast\beta^{j}\ast(\alpha^{1})^{-1}$;
\item $\pi_{1}\left( W\left( Q,m^{\prime}\right) ,(q_{1},0)\right)
\cong\pi_{1}\left( \mathcal{G}^{\prime}\left( Q\right) ,\Theta_{k}
;e^{1}\right) $ ($\left( \mathcal{G}^{\prime}\left( Q\right) ,\Theta
_{k}\right) $ analogous to the figure below) where
\[
\Gamma_{Q^{j}}=im\left( \pi_{1}\left( Q^{j},q_{j}\right) \overset{i_{\ast}
}{\longrightarrow}\pi_{1}\left( M^{n},q_{j}\right) \overset{\widehat
{\beta^{j}}}{\longrightarrow}\pi_{1}\left( M^{n},q_{1}\right) \right) ,
\]
\item the homomorphism $\lambda$ is surjective, and
\item if there exists a homotopy pulling $M^{n}$ into $M^{n}-P$, then $\lambda$ is
an isomorphism.
\end{enumerate}
\hspace*{1in}
{\parbox[b]{3.8052in}{\begin{center}
\includegraphics[
trim=0.000000in -0.117993in 0.000000in 0.000000in,
height=1.4062in,
width=3.8052in
]
{Bedlewo-fig1.eps}
\\
$\left( \mathcal{G}^{\prime}\left( P\right) ,\Theta_{k}\right) $
\end{center}}}
\begin{proof}
As noted in the comments preceding this Proposition, 1) and 2) are essentially
just applications of the Generalized Seifert-VanKampen Theorem. Assertion 3)
is valid for nearly the same reason as Assertion 3) of Proposition
\ref{technical prop-one-ended}; in this case, $\lambda:\pi_{1}\left(
\mathcal{G}^{\prime}\left( Q\right) ,\Theta_{k};e^{1}\right) \rightarrow
\pi_{1}\left( \mathcal{G}^{\prime}\left( P\right) ,\Theta_{k};e^{1}\right)
$ is induced by the natural isomorphism
\[
\pi_{1}\left( M^{n},q_{1}\right) \ast\pi_{1}\left( M^{n},q_{1}\right) \ast
F_{E}\rightarrow\pi_{1}\left( M^{n},p^{1}\right) \ast\pi_{1}\left(
M^{n},p^{1}\right) \ast F_{E}
\]
where $F_{E}$ is the free group on generators $E=\left\{ e^{1},e^{2}
,\cdots,e^{k}\right\} $, by taking appropriate quotients in the domain and
the range (as prescribed by the definition of the fundamental group of a graph
of groups). If there exists a homotopy pulling $M^{n}$ into $M^{n}-P$, Corollary
\ref{isomorphic images} makes it clear that this homomorphism is an isomorphism.
\end{proof}
\end{proposition}
\begin{corollary}
\label{Corollary-k-ended}Let $M^{n}$ be a connected, $k$-ended open
$n$-manifold. Then $M^{n}\times\mathbb{R}$ is a one-ended open $\left(
n+1\right) $-manifold with semistable fundamental group at infinity. If
$M^{n}$ is finitely dominated, then $M^{n}\times\mathbb{R}$ has stable
fundamental group at infinity which is pro-isomorphic to the system
\[
\pi_{1}\left( \mathcal{G}^{\prime},\Theta_{k};e^{1}\right) \overset
{id}{\longleftarrow}\pi_{1}\left( \mathcal{G}^{\prime},\Theta_{k}
;e^{1}\right) \overset{id}{\longleftarrow}\pi_{1}\left( \mathcal{G}^{\prime
},\Theta_{k};e^{1}\right) \overset{id}{\longleftarrow}\cdots
\]
where $\left( \mathcal{G}^{\prime},\Theta_{k}\right) $ is the graph of
groups pictured below. Here each $\Gamma_{j}$ is the image---translated by an
appropriate change of basepoint isomorphism---in $\pi_{1}\left( M^{n}\right) $
of the fundamental group of the $j^{th}$ component of any sufficiently small
$0$-neighborhood of infinity in $M^{n}$; and the edge homomorphisms are all inclusions.
\hspace*{1in}
{\parbox[b]{3.4592in}{\begin{center}
\includegraphics[
trim=0.000000in -0.118065in 0.000000in 0.000000in,
height=1.4183in,
width=3.4592in
]
{Bedlewo-fig2.eps}
\\
$\left( \mathcal{G}^{\prime},\Theta_{k}\right) $
\end{center}}}
\begin{proof}
Choose a well-indexed neat sequence\emph{ }$\left\{ N_{i}\right\}
_{i=1}^{\infty}$ of $0$-neighborhoods of infinity in $M^{n}$. Then apply the
above Proposition repeatedly to the sequence $\left\{ W\left( N_{i}
,i\right) \right\} _{i=1}^{\infty}$ of associated neighborhoods of infinity
in $M^{n}\times\mathbb{R}$. If $M^{n}$ is finitely dominated, $N_{1}$ should
be chosen sufficiently small that $M^{n}$ can be pulled into $M^{n}-N_{1}$.
\end{proof}
\end{corollary}
\section{Closing comments}
As indicated in the `easy part' of Theorem \ref{main theorem}, if $M^{n}$ is a
finitely dominated open manifold that is not homotopy equivalent to a finite
complex, then $M^{n}\times\mathbb{R}$ is not compactifiable by the addition of
a manifold boundary. By comparing Propositions \ref{main-prop} and
\ref{pi1-stable} with Theorem \ref{siebenmann}, it must be the case that
$\sigma_{\infty}\left( M^{n}\times\mathbb{R}\right) $ is non-trivial. Since
$M^{n}$ does not have finite homotopy type, its Wall obstruction
$\sigma\left( M^{n}\right) $ is a nontrivial element of $\widetilde{K}
_{0}\left( \mathbb{Z[}\pi_{1}(M^{n})]\right) $. As one might expect, there
is a relationship between $\sigma_{\infty}\left( M^{n}\times\mathbb{R}
\right) $ and $\sigma\left( M^{n}\right) $.
By Proposition \ref{technical prop-k-ended} and Corollary
\ref{Corollary-k-ended}, $\sigma_{\infty}\left( M^{n}\times\mathbb{R}\right)
$ may be viewed as the Wall finiteness obstruction of $W\left( N,m\right) $,
where $N$ is any sufficiently small $0$-neighborhood of infinity
in $M^{n}$; this obstruction $\sigma\left( W\left( N,m\right) \right)
$ lies in $\widetilde{K}_{0}\left( \mathbb{Z[}\pi_{1}\left( W\left(
N,m\right) \right) ]\right) $. The retraction of $W\left( N,m\right) $
onto $M^{n}\times\left\{ m\right\} $ obtained by projection shows that
$\pi_{1}(M^{n})$ is a retract of $\pi_{1}\left( W\left( N,m\right)
\right) $; so, by functoriality, $\widetilde{K}_{0}\left( \mathbb{Z[}\pi
_{1}(M^{n})]\right) $ is a retract of $\widetilde{K}_{0}\left(
\mathbb{Z[}\pi_{1}\left( W\left( N,m\right) \right) ]\right) $. As a
consequence, the inclusion induced homomorphism $i_{\ast}:\widetilde{K}
_{0}\left( \mathbb{Z[}\pi_{1}(M^{n})]\right) \rightarrow\widetilde{K}
_{0}\left( \mathbb{Z[}\pi_{1}\left( W\left( N,m\right) \right) ]\right)
$ is injective. By applying the Sum Theorem for Wall's finiteness obstruction
\cite[Ch.VI]{Si} to the homotopy equivalence $W\left( N,m\right) \simeq
K_{N}=M^{n}\cup_{\varphi}C^{\prime}$ utilized in the proof of Proposition
\ref{main-prop}, it is easy to see that $\sigma\left( W\left( N,m\right)
\right) $ is precisely $i_{\ast}\left( \sigma\left(
M^{n}\right) \right) $.
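\begin{remark}
For the reader's convenience, the Sum Theorem computation alluded to above can
be spelled out as follows. Applied to the decomposition $K_{N}=M^{n}
\cup_{\varphi}C^{\prime}$, with $M^{n}\cap C^{\prime}=\partial C$, the Sum
Theorem gives
\[
\sigma\left( K_{N}\right) =i_{\ast}\sigma\left( M^{n}\right) +i_{\ast
}^{\prime}\sigma\left( C^{\prime}\right) -i_{\ast}^{\prime\prime}
\sigma\left( \partial C\right) ,
\]
where all homomorphisms are induced by inclusion into $K_{N}$. Since
$C^{\prime}$ and $\partial C$ are compact manifolds, they have finite homotopy
type and their finiteness obstructions vanish, leaving $\sigma\left(
K_{N}\right) =i_{\ast}\sigma\left( M^{n}\right) $.
\end{remark}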
As a consequence of all of the above, we have a recipe for creating open
manifolds that are reasonably nice at infinity (inward tame with stable
fundamental group), but are not compactifiable via the addition of a manifold
boundary. Specifically: build a finite dimensional complex $K$ that is
finitely dominated but has $\sigma\left( K\right) \neq0$; properly embed $K$ in
$\mathbb{R}^{n}$ and let $M^{n}$ be the interior of a proper regular
neighborhood of $K$; then $M^{n}\times\mathbb{R}$ satisfies Conditions 1) and
2) of Theorem \ref{siebenmann}, but $\sigma_{\infty}(M^{n}\times
\mathbb{R})$ is non-trivial and equal to $i_{\ast}\left( \sigma\left(
K\right) \right) $.
\end{document}
\begin{document}
\begin{abstract}
The study of the 2D Euler equation with non Lipschitzian velocity was initiated by Yudovich in \cite{Y1} where a result of global well-posedness for essentially bounded vorticity is proved. A lot of works have been since dedicated to the extension of this result to more general spaces. To the best of our knowledge all these contributions lack the proof of at least one of the following three fundamental properties: global existence, uniqueness and regularity persistence. In this paper we introduce a Banach space containing unbounded functions for which all these properties are shown to be satisfied.
\end{abstract}
\maketitle
\section{Introduction}
We consider
the Euler system related to an incompressible inviscid fluid with constant
density, namely
\begin{equation}
\label{E}
\left\{
\begin{array}{ll}
\partial_t u+u\cdot\nabla u+\nabla P=0,\qquad x\in \mathbb R^d, t>0, \\
\nabla.u=0,\\
u_{\mid t=0}= u_{0}.
\end{array} \right.
\end{equation}
Here, the vector field \mbox{$u=(u_1,u_2,...,u_d)$} is a function of \mbox{$(t,x)\in \mathbb R_+\times\mathbb R^d$} denoting the velocity of the fluid and the scalar function
\mbox{$P$} stands for the pressure.
The second equation of the system \mbox{$\nabla.u=0$} is the
condition of incompressibility.
Mathematically, it guarantees the preservation of Lebesgue measure by the particle-trajectory mapping (the classical flow associated to the
velocity vector field).
It is worthy of noting that the pressure can be recovered from the velocity via an explicit Calder\'on-Zygmund type operator (see \cite{Ch1} for instance).
The question of local well-posedness of \eqref{E} with smooth data was resolved by many authors in different spaces (see for instance \cite{Ch1,Maj}). In this context, the vorticity \mbox{$\omega={\rm curl}\, u$} plays a fundamental role. In fact, the well-known BKM criterion \cite{Beale} ensures that the development of finite-time singularities for these solutions is related to the blow-up of the \mbox{$L^\infty$} norm of the vorticity near the maximal existence time. A direct consequence of this result is the global well-posedness of the two-dimensional Euler solutions with smooth initial data, since the vorticity satisfies
the transport equation
\begin{equation}
\label{tourbillon}
\partial_t\omega+(u \cdot \nabla)\omega=0,
\end{equation}
and then all its \mbox{$L^p$} norms are conserved.
Another class of solutions requiring lower regularity on the velocity can be considered: the weak solutions (see for instance \cite[Chap 4]{lions1}). They solve a
weak form of the equation in the distribution sense, placing the equations in large
spaces and using duality. The divergence form of the Euler equations allows one to put all the derivatives on the test functions and so to obtain
$$
\int_0^\infty\int_{{\mathbb R}^d}(\partial_t\varphi+(u \cdot \nabla)\varphi)\cdot u\,dxdt+\int_{{\mathbb R}^d}\varphi(0,x)\cdot u_0(x)\,dx=0,
$$
for all \mbox{$\varphi\in C^\infty_0({\mathbb R}_+\times{\mathbb R}^d, {\mathbb R}^d)$} with \mbox{$\nabla\cdot\varphi=0$}. In two space dimensions, when the regularity is sufficient to make sense of the Biot-Savart law, one can consider an alternative weak formulation: the vorticity-stream weak formulation. It consists in solving the weak form of \eqref{tourbillon} supplemented with the Biot-Savart law:
\begin{equation}
\label{bs}
u=K\ast\omega,\quad \hbox{with}\quad K(x)=\frac{x^\perp}{2\pi|x|^2}.
\end{equation}
In this case, \mbox{$(v,\omega)$} is a weak solution to the vorticity-stream formulation of the 2D Euler equation with initial data \mbox{$\omega_0$} if \eqref{bs} is satisfied and
$$
\int_0^\infty\int_{{\mathbb R}^2}(\partial_t\varphi+u\cdot\nabla\varphi)\,\omega(t,x)\, dxdt+\int_{{\mathbb R}^2}\varphi(0,x)\omega_0(x)\,dx=0,
$$
for all \mbox{$\varphi\in C^\infty_0({\mathbb R}_+\times{\mathbb R}^2,{\mathbb R})$}.
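As an illustration of the Biot-Savart law \eqref{bs} (a numerical sanity check, not part of the text: the explicit vortex-patch velocity below is the classical solution for \mbox{$\omega={\bf 1}_{\{|x|\leq 1\}}$}), one can verify by finite differences that this field is divergence-free and that its curl reproduces \mbox{$\omega$} away from the patch boundary:

```python
def u(x, y):
    # Classical vortex-patch velocity for omega = 1_{|x|<=1}: the Biot-Savart
    # integral u = K * omega gives rigid rotation x^perp/2 inside the patch
    # and the point-vortex field x^perp/(2|x|^2) outside (x^perp = (-y, x)).
    r2 = x * x + y * y
    if r2 <= 1.0:
        return -y / 2.0, x / 2.0
    return -y / (2.0 * r2), x / (2.0 * r2)

def curl_div(x, y, h=1e-6):
    # Central differences: curl u = d_x u_2 - d_y u_1, div u = d_x u_1 + d_y u_2.
    u1xp, u2xp = u(x + h, y)
    u1xm, u2xm = u(x - h, y)
    u1yp, u2yp = u(x, y + h)
    u1ym, u2ym = u(x, y - h)
    curl = (u2xp - u2xm) / (2 * h) - (u1yp - u1ym) / (2 * h)
    div = (u1xp - u1xm) / (2 * h) + (u2yp - u2ym) / (2 * h)
    return curl, div

curl_in, div_in = curl_div(0.3, 0.2)    # inside the patch: curl = omega = 1
curl_out, div_out = curl_div(2.0, 1.0)  # outside the patch: curl = 0
print(curl_in, div_in, curl_out, div_out)
```

The check confirms \mbox{$\nabla\cdot u=0$} everywhere and \mbox{${\rm curl}\, u=\omega$} at sample points on both sides of the patch boundary.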
The questions of existence/uniqueness of weak solutions have been extensively studied and a detailed
account can be found in the books \cite{Ch1, Maj, lions1}. We emphasize that, unlike the fixed-point argument, the compactness method does not guarantee the uniqueness of the solutions, and so the two issues (existence/uniqueness) are usually dealt with separately. These questions were originally addressed by Yudovich in \cite{Y1}, where the existence and uniqueness of the weak solution to the 2D Euler system (in a bounded domain) are proved under the assumptions \mbox{$u_0\in L^2$} and \mbox{$\omega_0\in L^\infty$}.
Serfati \cite{Ser} proved the uniqueness and existence of a solution with initial velocity and vorticity which are only bounded (without any integrability condition). There is an extensive literature on the existence of weak solutions to the Euler system, possibly without uniqueness, with unbounded vorticity. DiPerna-Majda \cite{DM} proved the existence of weak solutions for \mbox{$\omega_0\in L^1\cap L^p$} with \mbox{$2<p<\infty$}. The \mbox{$L^1$} assumption in DiPerna-Majda's paper was removed by Giga-Miyakawa-Osada \cite{GMO}.
Chae \cite{Ch} proved an existence result for \mbox{$\omega_0$} in \mbox{$L\ln^+L$} with compact support.
More recently, Taniuchi \cite{tan} has proved the global existence (possibly without uniqueness or regularity persistence) for \mbox{$(u_0,\omega_0)\in L^\infty\times {\rm BMO}$}. The papers \cite{Vishik1} and \cite{Y2} are concerned with the questions of existence and uniqueness of weak solutions for larger classes of vorticity. Both have intersections with the present paper and we will come back to them at the end of this section (Remark \ref{r2}). A framework for measure-valued
solutions can be found in \cite{De} and \cite{FLX} (see also \cite{Ger} for more detailed references).
Roughly speaking, the proof of uniqueness of weak solutions requires a uniform-in-time bound of the \mbox{$\log$}-Lipschitzian norm of the velocity. This ``almost'' Lipschitzian regularity of the velocity is enough to ensure the existence and uniqueness of the associated flow (and then of the solution). Initial conditions of the type \mbox{$\omega_0 \in L^\infty({\mathbb R}^2)$} (or \mbox{$\omega_0 \in{\rm BMO}, B_{\infty,\infty}^0,...$}) guarantee the \mbox{$\log$}-Lipschitzian regularity of \mbox{$u_0$}. However, the persistence of such regularity as time varies requires an {\it a priori} bound of these quantities for the approximate-solution sequences. This is trivially done (via the conservation law) in the \mbox{$L^\infty$} case, but it is not at all clear in the other cases. The main issue in this context is the action of Lebesgue measure preserving homeomorphisms on these spaces. In fact, it is easy to prove that all these spaces are invariant under the action of such a class of homeomorphisms, but the optimal form of the constants (depending on the homeomorphism and important for the application) is not easy to find. It is worth mentioning, in this context, that the proof by Vishik \cite{Vishik2} of the global existence for \eqref{E} in the borderline Besov spaces is based on a refined result on the action of Lebesgue measure preserving homeomorphisms on \mbox{$B_{\infty,1}^0$}.
In this paper we place ourselves in a Banach space which is strictly intermediate between \mbox{$L^\infty$} and \mbox{${\rm BMO}$}. Although located beyond the reach of the conservation laws of the vorticity, this space has many nice properties (namely with respect to the action of the group of Lebesgue measure preserving homeomorphisms), allowing us to derive the above-mentioned {\it a priori} estimates for the approximate-solution sequences.
Before going any further, let us introduce this functional space (details about \mbox{${\rm BMO}$} spaces can be found in the book of Grafakos \cite{GR}).
\begin{Defin} For a complex-valued locally integrable function on \mbox{${\mathbb R}^2$}, set
$$
\|f\|_{{{\rm LBMO}}}:=\|f\|_{{\rm BMO}}+\sup_{B_1,B_2}\frac{|{\rm Avg}_{B_2}(f)-{\rm Avg}_{B_1}(f)|}{1+\ln\big(\frac{ 1-\ln r_2 }{1-\ln r_1 }\big)},
$$
where the supremum is taken over all pairs of balls \mbox{$B_2=B(x_2,r_2)$} and \mbox{$B_1=B(x_1,r_1)$} in \mbox{${\mathbb R}^2$} with \mbox{$0<r_1\leq 1$} and \mbox{$2B_2\subset B_1$}.
Here and subsequently, we denote
$$
{\rm Avg}_{D}(g):=\frac1{|D|}\int_Dg(x)dx,
$$
for every \mbox{$g\in L^1_{\text{loc}}$} and every non-negligible set \mbox{$D\subset {\mathbb R}^2$}.
Also, for a ball \mbox{$B$} and \mbox{$\lambda>0$}, \mbox{$\lambda B$} denotes the ball that is concentric with \mbox{$B$} and whose radius is \mbox{$\lambda$} times the radius of \mbox{$B$}.
\end{Defin}
We recall that
$$
\|f\|_{{\rm BMO}}:=\sup_{{\rm ball}\,\, B}{\rm Avg}_{B}|f-{\rm Avg}_{B}(f)|.
$$
It is worth noting that if \mbox{$B_2$} and \mbox{$B_1$} are two balls such that \mbox{$2B_2\subset B_1$} then\footnote{ Throughout this paper the notation \mbox{$A \lesssim B$} means that there exists a positive universal constant \mbox{$C$} such that \mbox{$A\le CB$}. }
\begin{equation}
\label{22}
{|{\rm Avg}_{B_2}(f)-{\rm Avg}_{B_1}(f)|} \lesssim {\ln(1+\frac{r_1}{r_2})} \|f\|_{{\rm BMO}}.
\end{equation}
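For the reader's convenience, we recall the standard chaining argument behind \eqref{22} (a routine computation; all constants are universal). If \mbox{$B\subset B'$} are balls in \mbox{${\mathbb R}^2$} with \mbox{$r_{B'}\leq 2r_{B}$}, then
$$
|{\rm Avg}_{B}(f)-{\rm Avg}_{B'}(f)|\leq {\rm Avg}_{B}|f-{\rm Avg}_{B'}(f)|\leq \frac{|B'|}{|B|}\,{\rm Avg}_{B'}|f-{\rm Avg}_{B'}(f)|\leq 4\|f\|_{{\rm BMO}}.
$$
Comparing \mbox{$B_2$} with \mbox{$B_1$} through a chain of \mbox{$N\simeq \ln\big(1+\frac{r_1}{r_2}\big)$} intermediate balls, each of radius at most twice the previous one, and paying a universal multiple of \mbox{$\|f\|_{{\rm BMO}}$} at each step, yields \eqref{22}.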
In the definition of \mbox{${{\rm LBMO}}$} we replace the term \mbox{$\ln(1+\frac{r_1}{r_2})$} by the smaller quantity \mbox{$\ln\big(\frac{ 1-\ln r_2 }{1-\ln r_1 }\big)$}. This puts more constraints on the functions belonging to this space\footnote{ Here, we identify all functions whose difference is a constant. In Section 2, we will prove that \mbox{${{\rm LBMO}}$} is complete and strictly intermediate between \mbox{${\rm BMO}$} and \mbox{$L^\infty$}. {The ``$L$'' in \mbox{${{\rm LBMO}}$} stands for ``logarithmic''.}
} and allows us to derive a crucial property concerning their composition with Lebesgue measure preserving homeomorphisms, which is the heart of our analysis.
The following statement is the main result of the paper.
\begin{Theo}
\label{main}Assume \mbox{$\omega_0\in L^p\cap {{\rm LBMO}}$} with \mbox{$p\in ]1,2[$}. Then there exists a unique global weak solution \mbox{$(v,\omega)$} to the vorticity-stream formulation of the 2D Euler equation. Besides, there exists a constant \mbox{$C_0$} depending only on the \mbox{$L^p\cap {{\rm LBMO}}$}-norm of \mbox{$\omega_0$} such that
\begin{equation}
\label{bound}
\|\omega(t)\|_{ L^p\cap {{\rm LBMO}} }\leq C_0\exp({C_0t}),\qquad\forall\, t\in{\mathbb R}_{+}.
\end{equation}
\end{Theo}
Some remarks are in order.
\begin{rema} {\rm The proof gives more, namely \mbox{$
\omega\in \mathcal C({\mathbb R}_+, L^q)$} for all \mbox{$p\leq q<\infty$}.
Combined with the Biot-Savart law\footnote{If \mbox{$\omega_0\in L^p$} with \mbox{$p\in ]1,2[$} then a classical Hardy-Littlewood-Sobolev inequality gives \mbox{$u\in L^q$} with \mbox{$\frac1q=\frac1p-\frac12$}. } this yields
\mbox{$
u\in \mathcal C({\mathbb R}_+, W^{1,r})\cap \mathcal C({\mathbb R}_+, L^\infty)$} for all \mbox{$\frac{2p}{2-p}\leq r<\infty$}.
}
\end{rema}
\begin{rema}
\label{r2}
{\rm The essential point of Theorem \ref{main} is that it provides an initial space which is strictly larger than \mbox{$L^p\cap L^\infty$} (it contains unbounded elements) and which is a space of existence, uniqueness and persistence of regularity at once. We emphasize that the bound \eqref{bound} is crucial, since it implies that \mbox{$u$} is, uniformly in time, \mbox{$\log$}-Lipschitzian, which is the main ingredient for the uniqueness. Once this bound is established, the uniqueness follows from the work by Vishik \cite{Vishik1}. In this paper Vishik also gave an existence result (possibly without regularity persistence) in some large space characterized by the growth of the partial sums of the \mbox{$L^\infty$}-norms of its dyadic blocks.
We should also mention the result by Yudovich \cite{Y2}, which establishes uniqueness (in a bounded domain) for a space which contains unbounded functions. Note also that the example of an unbounded function given in \cite{Y2} actually belongs to the space \mbox{${{\rm LBMO}}$} (see Proposition \ref{pro3} below). Our approach is different from those in \cite{Vishik1} and \cite{Y2} and uses classical harmonic analysis ``\`a la Stein'', without making appeal to Fourier analysis (para-differential calculus). }
\end{rema}
\begin{rema} {\rm The main ingredient of the proof of \eqref{bound} is a logarithmic estimate in the space \mbox{$L^p\cap {{\rm LBMO}}$} (see Theorem \ref{decom} below). It would be desirable to prove this result for \mbox{${\rm BMO}$} instead of \mbox{${{\rm LBMO}}$}.
Unfortunately, as proved in \cite{BK}, the corresponding estimate for \mbox{${\rm BMO}$} is optimal (with the bi-Lipschitzian norm instead of the \mbox{$\log$}-Lipschitzian norm of the homeomorphism), and so the argument presented here does not seem to extend to \mbox{${\rm BMO}$}. }
\end{rema}
The remainder of this paper is organized as follows. In the next two sections we introduce some functional spaces and prove a logarithmic estimate which is crucial for the proof of Theorem \ref{main}. The fourth and last section is dedicated to the proof of Theorem \ref{main}.
\section{Functional spaces}
Let us first recall that the set of \mbox{$\log$}-Lipschitzian vector fields on \mbox{${\mathbb R}^2$}, denoted by \mbox{$LL$}, is the
set of bounded vector fields \mbox{$v$} such that
$$
\|v\|_{LL}:=\sup_{x\neq y}\frac{|v(x)-v(y)|}{|x-y|\big(1+\big|\ln|x-y|\big|\big)}<\infty.
$$
The importance of this notion lies in the fact that if the vorticity belongs to a Yudovich-type space (say \mbox{$L^1\cap L^\infty$}) then the velocity is no longer Lipschitzian, but \mbox{$\log$}-Lipschitzian. In this case we still have existence and uniqueness of the flow, but a
loss of regularity may occur. Actually, this loss of regularity is unavoidable, and its degree is
related to the \mbox{$L^1_t(LL)$}-norm of the velocity. The reader is referred to Section 3.3 in \cite{bah-ch-dan} for more details about this issue.
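The mechanism behind this loss is visible on the model ODE \mbox{$\dot z=z(1-\ln z)$}, \mbox{$0<z\leq 1$}, which saturates the \mbox{$\log$}-Lipschitz bound with \mbox{$\|u\|_{LL}=1$}: setting \mbox{$w=1-\ln z$} gives \mbox{$\dot w=-w$}, hence \mbox{$z(t)=\exp\big(1-(1-\ln z_0)e^{-t}\big)$}, so an initial separation \mbox{$z_0$} grows to one of order \mbox{$z_0^{e^{-t}}$}: the H\"older exponent of the flow decays in time. A quick numerical confirmation (illustrative only, not from the text):

```python
import math

def z_exact(z0, t):
    # Closed-form solution of z' = z(1 - ln z): with w = 1 - ln z, w' = -w.
    return math.exp(1.0 - (1.0 - math.log(z0)) * math.exp(-t))

def z_rk4(z0, t, n=20000):
    # Fourth-order Runge-Kutta integration of z' = z(1 - ln z).
    f = lambda z: z * (1.0 - math.log(z))
    h, z = t / n, z0
    for _ in range(n):
        k1 = f(z)
        k2 = f(z + h * k1 / 2)
        k3 = f(z + h * k2 / 2)
        k4 = f(z + h * k3)
        z += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return z

# A separation of 1e-6 grows to about 0.37 by time t = 2.
print(z_exact(1e-6, 2.0), z_rk4(1e-6, 2.0))
```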
To capture this behavior, and
overcome the difficulty generated by it, we introduce the following definition.
\begin{Defin} For every homeomorphism \mbox{$\psi$}, we set
$$
\|\psi\|_*:=\sup_{x\neq y}\Phi\big(|\psi(x)-\psi(y)|, |x-y|\big),
$$
where \mbox{$\Phi$} is defined on \mbox{$]0,+\infty[\times]0,+\infty[$} by
\begin{equation*}
\Phi(r,s)=\left\{
\begin{array}{ll}
\max\{\frac{1+|\ln(s)| }{ 1+|\ln r | };\frac{ 1+|\ln r | }{1+|\ln(s)| }\},\quad {\rm if}\quad (1-s)(1-r)\geq 0, \\
{(1+|\ln s|) }{ (1+|\ln r|) },\quad {\rm if}\quad (1-s)(1-r)\leq 0.
\end{array} \right.
\end{equation*}
\end{Defin}
Since \mbox{$\Phi$} is symmetric, we have \mbox{$\|\psi\|_*=\|\psi^{-1}\|_*\geq 1$}. It is also clear that every homeomorphism \mbox{$\psi$} satisfying
$$
\frac{1}C|x-y|^\alpha\leq |\psi(x)-\psi(y)|\leq C|x-y|^\beta,
$$
for some \mbox{$\alpha,\beta,C>0$} has finite \mbox{$\|\psi\|_*$} (see Proposition \ref{p1} for a converse property).
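A small numerical illustration of the two-branch function \mbox{$\Phi$} (a sketch based only on the definition above): \mbox{$\Phi$} is symmetric, equals \mbox{$1$} on the diagonal, and switches to the product branch when \mbox{$r$} and \mbox{$s$} lie on opposite sides of \mbox{$1$}. In particular, for an isometry \mbox{$\psi$} one has \mbox{$|\psi(x)-\psi(y)|=|x-y|$}, hence \mbox{$\|\psi\|_*=1$}.

```python
import math

def phi(r, s):
    # The two-branch function Phi from the definition of ||psi||_*.
    a = 1.0 + abs(math.log(s))
    b = 1.0 + abs(math.log(r))
    if (1.0 - s) * (1.0 - r) >= 0.0:  # r and s on the same side of 1
        return max(a / b, b / a)
    return a * b                       # r and s on opposite sides of 1

print(phi(0.5, 0.5), phi(3.0, 3.0))  # both 1: Phi = 1 on the diagonal
print(phi(0.3, 2.0), phi(2.0, 0.3))  # product branch, symmetric in (r, s)
```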
The definition above is motivated by this proposition (and by Theorem \ref{decom} below as well).
\begin{Prop} \label{prop} Let \mbox{$u$} be a smooth divergence-free vector field and \mbox{$\psi$} its flow:
$$
\partial_t{\psi}(t,x)=u(t,\psi(t,x)),\qquad {\psi}(0,x)=x.
$$
Then, for every \mbox{$t\geq 0$}
$$
\|\psi(t,\cdot)\|_*\leq \exp\Big(\int_0^t\|u(\tau)\|_{LL}\,d\tau\Big).
$$
\end{Prop}
\begin{proof} It is well-known that for every \mbox{$t\geq 0$} the mapping \mbox{$ x\mapsto \psi(t,x)$} is a Lebesgue measure preserving homeomorphism (see \cite{Ch1} for instance). We fix \mbox{$t\geq 0$} and \mbox{$x\neq y$} and set
$$
z(t)=|\psi(t,x)-\psi(t,y)|.
$$
Clearly the function \mbox{$z$} is strictly positive and satisfies
$$
|\dot{z}(t)|\leq \|u(t)\|_{LL}(1+|\ln z(t)|)z(t).
$$
Accordingly, we infer
$$
|g(z(t))-g(z(0))|\leq \int_0^t\|u(\tau)\|_{LL}d\tau
$$
where
\begin{equation*}
g(\tau):=\left\{
\begin{array}{ll}
\ln(1+\ln(\tau)),\quad {\rm if}\quad \tau\geq 1, \\
-\ln(1-\ln(\tau)),\quad {\rm if}\quad 0<\tau<1.
\end{array} \right.
\end{equation*}
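For the reader's convenience, let us verify this implication: on each of the two regions one computes \mbox{$g'(\tau)=\frac{1}{\tau(1+|\ln \tau|)}$}, so the chain rule and the differential inequality satisfied by \mbox{$z$} give
$$
\Big|\frac{d}{dt}\,g(z(t))\Big|=\frac{|\dot{z}(t)|}{z(t)\big(1+|\ln z(t)|\big)}\leq \|u(t)\|_{LL},
$$
and the estimate above follows by integration in time.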
This yields in particular that
\mbox{$
\frac{\exp(g(z(t)))}{\exp(g(z(0)))}$} and \mbox{$\frac{\exp(g(z(0)))}{\exp(g(z(t)))}$} are both controlled by \mbox{$\exp\big(\int_0^t\|u(\tau)\|_{LL}\,d\tau\big)$}, leading to
$$
\Phi(z(t), z(0))\leq \exp\Big(\int_0^t\|u(\tau)\|_{LL}\,d\tau\Big),
$$
as claimed.
\end{proof}
The following proposition follows directly from the definition by a straightforward computation.
\begin{Prop}
\label{p1}
Let \mbox{$\psi$} be a homeomorphism with \mbox{$\|\psi\|_{*}<\infty$}. Then for every \mbox{$(x,y)\in \mathbb R^2\times\mathbb R^2$} one has
\begin{enumerate}
\item If \mbox{$|x-y|\geq 1$} and \mbox{$|\psi(x)-\psi(y)|\geq 1$}
$$
e^{-1}|x-y|^{\frac{1}{\|\psi\|_*}}\leq |\psi(x)-\psi(y)|\leq e^{\|\psi\|_*}|x-y|^{\|\psi\|_*}.
$$
\item If \mbox{$|x-y|\leq 1$} and \mbox{$|\psi(x)-\psi(y)|\leq 1$}
$$
e^{-\|\psi\|_*}|x-y|^{{\|\psi\|_*}}\leq |\psi(x)-\psi(y)|\leq e |x-y|^{\frac{1}{\|\psi\|_*}}.
$$
\item In the other cases
$$
e^{-\|\psi\|_*}|x-y|\leq |\psi(x)-\psi(y)|\leq e^{\|\psi\|_*}|x-y|.
$$
\end{enumerate}
\end{Prop}
As an application we obtain the following useful lemma.
\begin{Lemm}
\label{g}
For every \mbox{$r>0$} and a homeomorphism \mbox{$\psi$} one has
$$4\psi(B(x_0,r))\subset B(\psi(x_0), g_\psi(r)),
$$
where\footnote{This notation means that for every ball \mbox{$B\subset\psi(B(x_0,r))$} we have \mbox{$4B \subset B(\psi(x_0), g_\psi(r))$}.},
\begin{equation*}
g_\psi(r):=\left\{
\begin{array}{ll} 4e^{ \|\psi\|_{*}}r^{ \|\psi\|_{*}},\quad {\rm if}\quad r\geq 1, \\
4\max\{ er^{\frac{1}{\|\psi\|_{*}}}; e^{\|\psi\|_{*}}r\}, \quad {\rm if}\quad 0<r<1.
\end{array} \right.
\end{equation*}
In particular,
\begin{equation}
\label{ss}
\Big|\ln\Big(\frac{ 1+|\ln g_\psi(r)| }{1+|\ln r|} \Big)\Big|\lesssim 1+\ln\big(1+\|\psi\|_{*}\big).
\end{equation}
\end{Lemm}
\begin{proof}
The first inclusion follows from Proposition \ref{p1} and the definition of \mbox{$g_\psi$}. Let us check (\ref{ss}).
This comes from an easy computation using the following trivial fact: if \mbox{$\alpha,\beta,\gamma>0$} then
$$
\sup(\beta, \frac1\beta)\leq \alpha^\gamma \Longleftrightarrow |\ln(\beta)|\leq\gamma \ln(\alpha).
$$
\mbox{$\bullet$} If \mbox{$r\geq 1$} then
$$
1\leq \frac{ 1+|\ln g_\psi(r)| }{1+|\ln r|}=\frac{ 1+\ln4+\|\psi\|_{*}+\|\psi\|_{*}\ln r }{1+\ln r}\leq 3+\|\psi\|_{*}.
$$
\mbox{$\bullet$} If \mbox{$r< 1$} then we have to deal with two possible values of \mbox{$g_\psi(r)$}.
{\tt u}nderline{\it Case 1:} If \mbox{$g_\psi(r)=4er^{\frac{1}{\|\psi\|_{*}}}$} then
$$
|\ln g_\psi(r)| =|\ln 4+1+\|\psi\|_{*}^{-1}\ln(r)|.
$$
Since \mbox{$\|\psi\|_{*}\geq 1$}, we get
$$
\frac{ 1+|\ln g_\psi(r)| }{1+|\ln r|}\leq \frac{ 3+\frac1{ \|\psi\|_{*} }|\ln r| }{1+|\ln r|}\leq \frac{ 3+|\ln r| }{1+|\ln r|}\leq 3.
$$
To estimate \mbox{$\frac{1+|\ln r|}{ 1+|\ln g_\psi(r)| }$} we consider two possibilities.
- If \mbox{$|\ln(r)|\leq 8 \|\psi\|_{*}$} then
$$
\frac{1+|\ln r|}{ 1+|\ln g_\psi(r)| }\leq 1+|\ln r|\leq 1+8 \|\psi\|_{*}.
$$
- If \mbox{$|\ln(r)|\geq 8 \|\psi\|_{*}$} then
$$
|\ln(4)+1+\|\psi\|_{*}^{-1}\ln(r)|\geq \frac12 \|\psi\|_{*}^{-1} |\ln(r)|,
$$
and so
$$
\frac{1+|\ln r|}{ 1+|\ln g_\psi(r)| }\leq \frac{1+|\ln r|}{1+\frac12 \|\psi\|_{*}^{-1} |\ln(r)|}\leq 2(1+\|\psi\|_{*}).
$$
{\tt u}nderline{\it Case 2:} If \mbox{$g_\psi(r)=4e^{\|\psi\|_{*}}r$} then
$$
|\ln g_\psi(r)| =|\ln 4+\|\psi\|_{*}+\ln(r)|.
$$
Thus,
$$
\frac{ 1+|\ln g_\psi(r)| }{1+|\ln r|}\leq \frac{ 3+\|\psi\|_{*} +|\ln r| }{1+|\ln r|}\leq 3+\|\psi\|_{*}.
$$
As previously for estimating \mbox{$\frac{1+|\ln r|}{ 1+|\ln g_\psi(r)| }$}, we consider two possibilities.
- If \mbox{$|\ln(r)|\leq 2(\ln(4)+ \|\psi\|_{*})$} then
$$
\frac{1+|\ln r|}{ 1+|\ln g_\psi(r)| }\leq 1+|\ln r|\leq 5+2 \|\psi\|_{*}.
$$
- If \mbox{$|\ln(r)|\geq 2(\ln 4+ \|\psi\|_{*})$} then \mbox{$|\ln(4)+\|\psi\|_{*}+\ln r|\geq \frac12|\ln(r)|$}
and so
$$
\frac{1+|\ln r|}{ 1+|\ln g_\psi(r)| } \leq \frac{1+|\ln r|}{1+\frac12|\ln(r)|}\leq 2.
$$
\end{proof}
\begin{rema}
\label{sss} The estimate \eqref{ss} remains valid when we multiply \mbox{$g_\psi(r)$} by any positive constant.
\end{rema}
\section{ The \mbox{${{\rm LBMO}}$} space}
Let us now detail some properties of the space \mbox{${{\rm LBMO}}$} introduced in the first section of this paper.
\begin{Prop}
\label{pro3}
The following properties hold true.\\
{\rm (i)} The space \mbox{${{\rm LBMO}}$} is a Banach space included in \mbox{${\rm BMO}$} and strictly containing \mbox{$L^\infty({\mathbb R}^2)$}.\\
{\rm (ii)} For every \mbox{$g\in \mathcal C^\infty_0({\mathbb R}^2)$} and \mbox{$f\in {{\rm LBMO}}$} one has
\begin{equation}
\|g\ast f\|_{{{\rm LBMO}}}\leq \|g\|_{L^1}\| f\|_{{{\rm LBMO}}}.
\label{eq:comp} \end{equation}
\end{Prop}
\begin{proof}
(i) Completeness of the space. Let \mbox{$(f_n)_n$} be a Cauchy sequence in \mbox{${{\rm LBMO}}$}. Since \mbox{${\rm BMO}$} is complete, this sequence converges in \mbox{${\rm BMO}$} and hence in \mbox{$L^1_{\text{loc}}$}.
Using the definition and the convergence in \mbox{$L^1_{\text{loc}}$}, we get that the convergence holds in \mbox{${{\rm LBMO}}$}.
It remains to check that \mbox{$L^\infty \subsetneq {{\rm LBMO}}$}. Since \mbox{$L^\infty$} is obviously embedded into \mbox{${{\rm LBMO}}$}, we just have to exhibit an unbounded function belonging to \mbox{${{\rm LBMO}}$}. Take
\begin{equation*}
f(x)=\left\{
\begin{array}{ll} \ln(1-\ln|x|), \qquad {\rm if}\quad |x|\leq 1,\\
0,\qquad \qquad {\rm if}\quad |x|\geq 1.
\end{array} \right.
\end{equation*}
It is clear that both \mbox{$f$} and \mbox{$\nabla f$} belong to \mbox{$L^2({\mathbb R}^2)$} meaning that \mbox{$f\in H^1({\mathbb R}^2)\subset {\rm BMO}$}.
Before going further three preliminary remarks are necessary.
\mbox{$\bullet$} Since \mbox{$f$} is radially symmetric and decreasing then, for every \mbox{$r>0$}, the mapping \mbox{$x\mapsto {\rm Avg}_{B(x,r)}f$} is radial and decreasing.
\mbox{$\bullet$} For the same reasons the mapping \mbox{$r\mapsto {\rm Avg}_{B(0,r)}(f)$} is decreasing.
\mbox{$\bullet$} Take \mbox{$(r,\rho) \in ]0,+\infty[^2$} and consider the problem of maximizing
\mbox{${\rm Avg}_{B(x_1,r)}(f)-{\rm Avg}_{B(x_2,r)}(f)$} over \mbox{$|x_1-x_2|=\rho$}. The convexity of \mbox{$f$} implies that
\mbox{$x_1=0$} and \mbox{$|x_2|=\rho$} solve this problem.
\
We fix \mbox{$r_1$} and \mbox{$r_2$} such that \mbox{$r_1\leq 1$} and \mbox{$2r_2\leq r_1$}.
For every \mbox{$x_1\in{\mathbb R}^2$} one defines \mbox{$\tilde x_1$} and \mbox{$\hat x_1$} as follows:
\begin{equation*}
\tilde x_1=\left\{
\begin{array}{ll} x_1(1-\frac{r_2+r_1}{|x_1|}) \qquad {\rm if}\quad |x_1|\geq r_2+r_1\\
0,\qquad \qquad {\rm if}\quad |x_1|\leq r_2+r_1,
\end{array} \right.
\end{equation*}
and
\begin{equation*}
\hat x_1=\left\{
\begin{array}{ll} x_1(1+\frac{r_2+r_1}{|x_1|}) \qquad {\rm if}\quad |x_1|\neq 0\\
({r_2+r_1},0)\qquad \qquad {\rm if}\quad |x_1|=0.
\end{array} \right.
\end{equation*}
Let
\mbox{$A(x_1)$} be the set of admissible \mbox{$x_2$}: the set of \mbox{$x_2$} such that \mbox{$2B(x_2,r_2)\subset B(x_1,r_1)$}.
Using the two preliminary remarks above, we see that
$$
\sup_{ x_2\in A(x_1)}|{\rm Avg}_{B(x_2,r_2)}(f)-{\rm Avg}_{B(x_1,r_1)}(f)|\leq \max\{J_{1},J_{2}\}.
$$
with
\begin{eqnarray*}
J_{1}&=&{\rm Avg}_{B(\tilde x_1,r_2)}(f)-{\rm Avg}_{B(x_1,r_1)}(f),
\\
J_{2}&=&{\rm Avg}_{B(x_1,r_1)}(f)-{\rm Avg}_{B(\hat{x}_1,r_2)}(f).
\end{eqnarray*}
In fact, if \mbox{${\rm Avg}_{B(x_2,r_2)}(f)-{\rm Avg}_{B(x_1,r_1)}(f)$} is positive (resp. negative) then it is obviously dominated by \mbox{$J_{1}$} (resp. \mbox{$J_{2}$}).
Thus, we obtain
$$
\sup_{ x_2\in A(x_1)}|{\rm Avg}_{B(x_2,r_2)}(f)-{\rm Avg}_{B(x_1,r_1)}(f)|\leq J_{1}+J_{2}= {\rm Avg}_{B(\tilde x_1,r_2)}(f)-{\rm Avg}_{B(\hat{x}_1,r_2)}(f).
$$
The right-hand side is maximal in the configuration where \mbox{$\tilde x_1=0$} and \mbox{$\hat{x}_1$} is farthest from \mbox{$0$},
that is, when
\mbox{$|x_1|=r_1+r_2$}, \mbox{$\tilde x_1=0$} and \mbox{$|\hat{x}_1|=2(r_1+r_2)$}.
Since \mbox{$f$} increases toward the origin,
$$
{\rm Avg}_{B(\hat{x}_1,r_2)}(f)\geq f(4r_1).
$$
Finally, we get for all \mbox{$x_1\in\mathbb R^2$} and \mbox{$ x_2\in A(x_1)$}
\begin{eqnarray*}
|{\rm Avg}_{B(x_2,r_2)}(f)-{\rm Avg}_{B(x_1,r_1)}(f)|\leq {\rm Avg}_{B(0,r_2)}(f)- f(4r_1).
\end{eqnarray*}
Now it is easy to see that
$$
f(4r_1)= \ln(1-\ln(r_1))+ {\mathcal O}(1),
$$
and (with an integration by parts)
\begin{eqnarray*}
{\rm Avg}_{B(0,r_2)}(f)
& =& \ln(1-\ln(r_2)) + \frac{1}{r_2^2}\int_0^{r_2} \frac{\rho}{1-\ln(\rho)}\, d\rho
\\
& =& \ln(1-\ln(r_2)) + {\mathcal O}(1).
\end{eqnarray*}
This yields,
$$
|{\rm Avg}_{B(x_2,r_2)}(f)-{\rm Avg}_{B(x_1,r_1)}(f)|\leq \ln\Big(\frac{1-\ln(r_2)}{1-\ln(r_1)}\Big)+ {\mathcal O}(1),
$$
as desired.
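Both expansions can be checked numerically (a quick sanity check, not part of the proof; the quadrature below is a plain midpoint rule):

```python
import math

def f(rho):
    # Radial profile of the example: f(x) = ln(1 - ln|x|) for |x| <= 1.
    return math.log(1.0 - math.log(rho))

def avg_ball(r, n=100000):
    # Avg_{B(0,r)}(f) = (2/r^2) * int_0^r f(rho) rho drho (midpoint rule).
    h = r / n
    return 2.0 * h * sum(f((k + 0.5) * h) * (k + 0.5) * h for k in range(n)) / (r * r)

def remainder(r, n=100000):
    # The integration-by-parts term (1/r^2) * int_0^r rho/(1 - ln rho) drho,
    # which is bounded by 1/2 uniformly in r <= 1 (the O(1) above).
    h = r / n
    return h * sum((k + 0.5) * h / (1.0 - math.log((k + 0.5) * h)) for k in range(n)) / (r * r)

for r in (0.5, 1e-2, 1e-4):
    # Avg_{B(0,r)}(f) = ln(1 - ln r) + O(1), remainder in (0, 1/2).
    print(r, avg_ball(r), math.log(1.0 - math.log(r)) + remainder(r))
```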
\
(ii) Stability under convolution. Estimate (\ref{eq:comp}) follows from the fact that for all \mbox{$r>0$}
$$
{\rm Avg}_{B(x,r)}(g\ast f)=\big(g\ast{\rm Avg}_{B(\cdot,r)}(f)\big)(x).
$$
\end{proof}
The advantage of using the space \mbox{${{\rm LBMO}}$} lies in the following logarithmic estimate which is the main ingredient for proving Theorem \ref{main}.
\begin{Theo}
\label{decom}
There exists a universal constant \mbox{$C>0$} such that
$$
\|f\circ\psi\|_{{{\rm LBMO}}\cap L^p}\leq C\ln(1+\|\psi\|_*)\|f\|_{{{\rm LBMO}}\cap L^p},
$$
for any Lebesgue measure preserving homeomorphism \mbox{$\psi$}.
\end{Theo}
\begin{proof}[Proof of Theorem \ref{decom}]
Of course, we are only concerned with \mbox{$\psi$} such that \mbox{$\|\psi\|_*$} is finite (otherwise the inequality is trivial).
Without loss of generality one can assume that \mbox{$\|f\|_{{{\rm LBMO}}\cap L^p}=1$}. Since \mbox{$\psi$} preserves Lebesgue measure then the \mbox{$L^p$}-part of the norm is conserved. For the two other parts of the norm, we will proceed in two steps. In the first step we consider the \mbox{${\rm BMO}$} term of the norm and in the second one we deal with the other term.
\subsection*{ Step 1} Let \mbox{$B=B(x_0,r)$} be a given ball of \mbox{${\mathbb R}^2$}.
By using the \mbox{$L^p$}-norm we need only to deal with balls whose radius is smaller than a universal constant \mbox{$\delta_0$} (we want \mbox{$r$} to be small with respect to the constants appearing in the Whitney covering lemma below). Since \mbox{$\psi$} is a Lebesgue measure preserving homeomorphism, \mbox{$\psi(B)$} is an open connected\footnote{We have also that \mbox{$ \psi(B)^c=\psi(B^c)$} and \mbox{$\psi(\partial B)=\partial(\psi(B)).$} } set with \mbox{$|\psi(B)|=|B|$}. By the Whitney covering lemma, there exists a collection of balls \mbox{$(O_j)_j$} such that:
- The collection of doubled balls is a bounded covering:
$$
\psi(B)\subset \bigcup 2O_j.
$$
- The collection is disjoint and, for all \mbox{$j$},
$$
O_j\subset \psi(B).
$$
- The Whitney property is verified:
$$
r_{O_j}\simeq d(O_j, \psi(B)^c).
$$
\
\mbox{$\bullet$} {\it Case 1}: \mbox{$r\leq\frac14 e^{-\|\psi\|_*}$}. In this case
$$
g_\psi(r)\leq 1.
$$
We set \mbox{$\tilde B:= B(\psi(x_0), g_\psi(r))$}.
Since \mbox{$\psi$} preserves Lebesgue measure we get
\begin{eqnarray*}
{\rm Avg}_{B}|f\circ\psi- {\rm Avg}_{B}(f\circ\psi)|&=&{\rm Avg}_{\psi(B)}|f- {\rm Avg}_{\psi(B)}(f)|
\\
&\leq & 2 {\rm Avg}_{\psi(B)}|f- {\rm Avg}_{\tilde B}(f)|.
\end{eqnarray*}
Using the notations above
\begin{eqnarray*}
{\rm Avg}_{\psi(B)}|f- {\rm Avg}_{\tilde B}(f)|&\lesssim & \frac{1}{|B|}\sum_j |O_j|{\rm Avg}_{2O_j}\big|f- {\rm Avg}_{\tilde B}(f)\big|
\\
&\lesssim & I_1+I_2,
\end{eqnarray*}
with
\begin{eqnarray*}
I_1&=& \frac{1}{|B|}\sum_j |O_j|\,{\rm Avg}_{2O_j}\big|f- {\rm Avg}_{2O_j}(f)\big|\\
I_2&= & \frac{1}{|B|}\sum_j |O_j|\big |{\rm Avg}_{2O_j}(f)- {\rm Avg}_{\tilde B}(f)\big |.
\end{eqnarray*}
On one hand, since \mbox{$\sum|O_j|\leq |B|$} then
\begin{eqnarray*}
I_1&\leq& \frac{1}{|B|}\sum_j |O_j|\|f\|_{{\rm BMO}}
\\
&\leq & \|f\|_{{\rm BMO}}.
\end{eqnarray*}
On the other hand, since \mbox{$4O_j\subset \tilde B$} (remember Lemma \ref{g}) and \mbox{$r_{\tilde B}\leq 1$}, it follows that
\begin{eqnarray*}
I_2&\lesssim& \frac{1}{|B|}\sum_j |O_j|\big(1+\ln\Big(\frac{1-\ln 2r_j}{ 1-\ln g_\psi(r) } \Big)\big)
\\
&\lesssim& \frac{1}{|B|}\sum_j |O_j|(1+\ln\big(\frac{1-\ln r_j}{ 1-\ln g_\psi(r) } \big)).
\end{eqnarray*}
Thanks to {\tt v}arepsilonqref{ss} we get
\begin{eqnarray}
\nonumber
\ln\Big(\frac{1-\ln r_j}{ 1-\ln g_\psi(r) } \Big)&\leq& \ln\Big(\frac{1-\ln r_j}{ 1-\ln r } \Big)+\ln\Big(\frac{1-\ln r}{ 1-\ln g_\psi(r) } \Big)
\\
\label{s}
&\lesssim& 1+ \ln\Big(\frac{1-\ln r_j}{ 1-\ln r } \Big)+\ln(1+\|\psi\|_{*}).
\end{eqnarray}
Thus it remains to prove that
\begin{eqnarray}
\label{ef}
II:=\frac{1}{|B|}\sum_j |O_j|(1+\ln\big(\frac{1-\ln r_j} { 1-\ln r }\big))\lesssim 1+\ln(1+\|\psi\|_{*}).
\end{eqnarray}
For every \mbox{$k\in\mathbb N$} we set
$$
u_k:=\sum_{e^{-(k+1)}r< r_j\leq e^{-k}r}|O_j|,
$$
so that
\begin{eqnarray}
\label{eff}
II\leq \frac{1}{|B|}\sum_{k\geq 0} u_k\big(1+\ln\big(\frac{k+2-\ln r} { 1-\ln r }\big)\big).
\end{eqnarray}
We need the following lemma.
\begin{Lemm}
\label{equivalence}
There exists a universal constant \mbox{$C>0$} such that
$$
u_k\leq Ce^{-\frac{k}{\|\psi\|_*}}r^{1+\frac{1}{\|\psi\|_*}},
$$
for every \mbox{$k\in\mathbb N$}.
\end{Lemm}
\begin{proof}[Proof of Lemma \ref{equivalence}]
If we denote by \mbox{$C\geq 1$} the implicit constant appearing in Whitney Lemma, then
$$
u_k\leq |\{ y\in \psi(B): d(y, \psi(B)^c)\leq Ce^{-k}r\}|.
$$
The preservation of Lebesgue measure by \mbox{$\psi$} yields
$$
|\{ y\in \psi(B): d(y, \psi(B)^c)\leq Ce^{-k}r\}|=|\{ x\in B: d(\psi(x), \psi(B)^c)\leq Ce^{-k}r\}|.
$$
Since \mbox{$ \psi(B)^c=\psi(B^c)$} then
$$
u_k\leq |\{ x\in B: d(\psi(x), \psi(B^c))\leq Ce^{-k}r\}|.
$$
We set
$$D_k=\{ x\in B: d(\psi(x), \psi(B^c))\leq Ce^{-k}r\}.
$$
Since \mbox{$\psi(\partial B)$} is the frontier of \mbox{$\psi(B)$} and \mbox{$d(\psi(x), \psi(B^c))=d(\psi(x), \partial \psi(B))$} then
$$
D_k\subset \{ x\in B: {\tt v}arepsilonxists y\in \partial B \;{\rm with}\; |\psi(x)- \psi(y)|\leq Ce^{-k}r\}.
$$
The condition on \mbox{$\delta_0$} is just to ensure that \mbox{$Cr\leq 1$} for all \mbox{$r\leq\delta_0$}.
In this case Proposition \ref{p1} gives
$$
D_k\subset \{ x\in B: {\tt v}arepsilonxists y\in \partial B: |x- y|\leq Ce^{1-\frac{k}{\|\psi\|_*}}r^{\frac{1}{\|\psi\|_*}}\}.
$$
Thus, \mbox{$D_k$} is contained in the annulus \mbox{$\mathcal A=\{ x\in B: d(x,\partial B) \leq Ce^{1-\frac{k}{\|\psi\|_*}}r^{\frac{1}{\|\psi\|_*}}\}$} and so
$$
u_k\leq |D_k|\lesssim e^{-\frac{k}{\|\psi\|_*}}r^{1+\frac{1}{\|\psi\|_*}},
$$
as claimed.
\end{proof}
Coming back to \eqref{eff}, let \mbox{$N$} be a large integer to be chosen later. We split the sum on the right-hand side of \eqref{eff} into two parts:
$$
II\lesssim \sum_{k\leq N}(\cdots)+\sum_{k> N}(\cdots)=:II_{1}+II_{2}.
$$
Since \mbox{$\sum u_k\leq |B|$} then
\begin{eqnarray}
\label{ff}
II_{1}\leq 1+\ln\Big(\frac{N+2-\ln r} { 1-\ln r }\Big).
\end{eqnarray}
On the other hand
$$
II_{2}\leq \sum_{k> N}e^{-\frac{k}{\|\psi\|_*}}r^{\frac{1}{\|\psi\|_*}-1}(1+\ln\big(\frac{k+2-\ln r} { 1-\ln r }\big)).
$$
The parameter \mbox{$N$} will be taken larger than \mbox{$\|\psi\|_*$}, so that the function of \mbox{$k$} inside the sum is decreasing, and an easy comparison with an integral yields
\begin{eqnarray}
\label{fff}
II_{2}\lesssim e^{-\frac{N}{\|\psi\|_*}}\|\psi\|_*^2r^{\frac{1}{\|\psi\|_*}-1}\big(1+\ln\Big(\frac{N+2-\ln r} { 1-\ln r }\Big)\big).
\end{eqnarray}
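To make the comparison with an integral explicit (a routine computation; write \mbox{$a:=\|\psi\|_*$} and \mbox{$c:=2-\ln r$}): since the summand is decreasing for \mbox{$k\geq N\geq a$}, an integration by parts gives
$$
\sum_{k> N}e^{-\frac{k}{a}}\Big(1+\ln\frac{k+c}{1-\ln r}\Big)\leq\int_N^{\infty}e^{-\frac{t}{a}}\Big(1+\ln\frac{t+c}{1-\ln r}\Big)\,dt=a\,e^{-\frac{N}{a}}\Big(1+\ln\frac{N+c}{1-\ln r}\Big)+a\int_N^{\infty}\frac{e^{-\frac{t}{a}}}{t+c}\,dt,
$$
and the last integral is at most \mbox{$a\,e^{-N/a}$}; since \mbox{$a\geq1$}, multiplying by the factor \mbox{$r^{\frac{1}{a}-1}$} coming from Lemma \ref{equivalence} and \mbox{$|B|\simeq r^2$} yields \eqref{fff}.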
Putting \eqref{ff} and \eqref{fff} together gives
\begin{eqnarray*}
II\lesssim \big(1+ e^{-\frac{N}{\|\psi\|_*}}\|\psi\|_*^2r^{\frac{1}{\|\psi\|_*}-1}\big)\big(1+\ln\Big(\frac{N+2-\ln r} { 1-\ln r }\Big)\big).
\end{eqnarray*}
Taking \mbox{$N= [\|\psi\|_*(\|\psi\|_*-\ln r)]+1$} we obtain
\begin{eqnarray*}
II&\lesssim& (1+ e^{-\|\psi\|_*}\|\psi\|_*^2r^{\frac{1}{\|\psi\|_*}})\big(1+\ln\big(\frac{\|\psi\|_*(\|\psi\|_*-\ln r)+2-\ln r} { 1-\ln r }\big)\big)
\\
&\lesssim& 1+\ln(1+\|\psi\|_*),
\end{eqnarray*}
where we have used the fact that \mbox{$r\leq 1$} and the obvious inequality
$$
\frac{\|\psi\|_*(\|\psi\|_*-\ln r)+2-\ln r} { 1-\ln r }\lesssim (1+\|\psi\|_*)^2.
$$
This ends the proof of \eqref{ef}.
\mbox{$\bullet$} {\it Case 2:} \mbox{$\delta_0\geq r \geq \frac14 e^{-\|\psi\|_*}$}. In this case
$$
|\ln r|\lesssim \|\psi\|_*.
$$
Since \mbox{$\psi$} preserves Lebesgue measure, we get
\begin{eqnarray*}
I&:=&{\rm Avg}_{B}|f\circ\psi- {\rm Avg}_{B}(f\circ\psi)|
\\
&\leq & 2 {\rm Avg}_{\psi(B)}|f|.
\end{eqnarray*}
Let \mbox{$\tilde O_j$} denote the ball concentric to \mbox{$O_j$} whose radius is equal to \mbox{$1$} (we use the same Whitney covering as above). Without loss of generality we can assume \mbox{$\delta_0\leq\frac14$}. This guarantees \mbox{$4O_j\subset\tilde O_j$} and yields, by definition,
\begin{eqnarray*}
I&\lesssim & \frac{1}{|B|}\sum_j |O_j|{\rm Avg}_{2O_j}|f-{\rm Avg}_{\tilde O_j}(f)|+ \frac{1}{|B|}\sum_j |O_j| |{\rm Avg}_{\tilde O_j}(f)|
\\
&\lesssim& \frac{1}{|B|}\sum_j |O_j|\Big(1+\ln\big({1-\ln 2r_j} \big)\Big)\|f\|_{{{\rm LBMO}}}+\frac{1}{|B|}\sum_j |O_j| \|f\|_{L^p}
\\
&\lesssim&
1+ \frac{1}{|B|}\sum_j |O_j|\big(1+\ln\big({1-\ln r_j} \big)\big).
\end{eqnarray*}
As before one writes
\begin{eqnarray*}
I&\lesssim& \frac{1}{|B|}\sum_{k\geq 0} u_k\big(1+\ln\big(k+2-\ln r\big)\big)
\\
&\lesssim&1+\ln\big(N+2-\ln r\big)+ e^{-\frac{N}{\|\psi\|_*}} \|\psi\|_*^2 r^{\frac{1}{\|\psi\|_*}-1}\big(1+\ln\big(N+2-\ln r)\big).
\end{eqnarray*}
Taking \mbox{$N=[ \|\psi\|_*(\|\psi\|_*-\ln r)]+1$} and using the fact that \mbox{$|\ln r|\lesssim \|\psi\|_*$} leads to the desired result.
The outcome of this first step of the proof is
$$
\|f\circ\psi\|_{{\rm BMO}\cap L^p}\lesssim\ln(1+\|\psi\|_*)\|f\|_{{{\rm LBMO}}\cap L^p}.
$$
\subsection*{ Step 2} This step of the proof deals with the second term in the \mbox{${{\rm LBMO}}$}-norm. It is shorter than the first step because it makes use of the arguments developed above.
Take \mbox{$B_2=B(x_2,r_2)$} and \mbox{$B_1=B(x_1,r_1)$} in \mbox{${\mathbb R}^2$} with \mbox{$r_1\leq 1$} and \mbox{$2B_2\subset B_1$}.
There are three cases to consider.
\mbox{$\bullet$} {\it Case 1:} \mbox{$ r_1\lesssim e^{-\|\psi\|_*}$} (so that \mbox{$g_\psi(r_2)\leq g_\psi(r_1) \leq \frac12$}).
We set \mbox{$\tilde B_i:= B(\psi(x_i), g_\psi(r_i)), i=1,2$} and
$$
J:=\frac{|{\rm Avg}_{B_2}(f\circ\psi)-{\rm Avg}_{B_1}(f\circ\psi)|}{1+ \ln\big(\frac{ 1-\ln r_2 }{1-\ln r_1}\big)}.
$$
Since the denominator is bigger than \mbox{$1$} one gets
$$
J\leq J_{1}+J_{2}+J_3,
$$
with
\begin{eqnarray*}
J_{1}&=& |{\rm Avg}_{\psi(B_2)}(f)-{\rm Avg}_{\tilde B_2}(f)|+ |{\rm Avg}_{\psi(B_1)}(f)-{\rm Avg}_{\tilde B_1}(f)| \\
J_{2}&=&\frac{|{\rm Avg}_{\tilde B_2}(f)-{\rm Avg}_{2\tilde B_1}(f)|}{1+ \ln\Big(\frac{ 1-\ln r_2 }{1-\ln r_1}\Big)}
\\
J_{3}&=&|{\rm Avg}_{\tilde B_1}(f)-{\rm Avg}_{2\tilde B_1}(f)|.
\end{eqnarray*}
Since \mbox{$2\tilde B_2\subset 2\tilde B_1$} and \mbox{$r_{2\tilde B_1}\leq1$}, we have
$$
J_{2}\leq \frac{1+ \ln\big(\frac{ 1-\ln g_\psi(r_2) }{1-\ln(2g_\psi(r_1))}\big)}{1+ \ln\big(\frac{ 1-\ln r_2 }{1-\ln r_1}\big)}\|f\|_{{{\rm LBMO}}}.
$$
Using a similar argument to \eqref{s} (and recalling Remark \ref{sss}) we infer
\begin{eqnarray*}
\ln\Big(\frac{ 1-\ln g_\psi(r_2) }{1-\ln (2g_\psi(r_1)) }\Big)
\lesssim 1+\ln(1+\|\psi\|_{*})+\ln\Big(\frac{ 1-\ln r_2 }{1-\ln r_1}\Big).
\end{eqnarray*}
Thus,
$$
J_{2}\lesssim1+\ln(1+\|\psi\|_{*}).
$$
The estimate \eqref{22} yields
$$
J_3\lesssim \|f\|_{{\rm BMO}}.
$$
The term \mbox{$J_{1}$} can be handled exactly as in the analysis of \mbox{case 1} of \mbox{step 1}.
\
\mbox{$\bullet$} {\it Case 2:} \mbox{$e^{-\|\psi\|_*}\lesssim r_2$}. In this case we write
$$
J\leq {\rm Avg}_{\psi(B_2)}|f|+{\rm Avg}_{\psi(B_1)}|f|.
$$
Both terms can be handled as in the analysis of \mbox{case 2} of the proof of \mbox{${\rm BMO}$}-part in \mbox{step 1.}
\mbox{$\bullet$} {\it Case 3:} \mbox{$r_2\lesssim e^{-\|\psi\|_*}$} and \mbox{$r_1\gtrsim e^{-\|\psi\|_*} $}. Again since the denominator is bigger than \mbox{$1$} we get
$$
J\leq {\rm Avg}_{\psi(B_2)}|f- {\rm Avg}_{\tilde B_2}(f) |+\frac{|{\rm Avg}_{\tilde B_2}(f)|}{1+\ln\big(\frac{ 1-\ln r_2 }{1-\ln r_1}\big)} +{\rm Avg}_{\psi(B_1)}|f|=J_{1}+J_{2}+J_3.
$$
The terms \mbox{$J_{1}$} and \mbox{$J_3$} can be controlled as before. The second term is controlled as follows (we introduce the average on \mbox{$B(\psi(x_2),1)$} and use Lemma \ref{g} with \mbox{$\|f\|_{L^p}\leq 1$})
\begin{eqnarray*}
J_{2}&\leq& \frac{1+ \ln(1-\ln r_2) }{1+ \ln\Big(\frac{ 1-\ln r_2 }{1-\ln r_1} \Big)}
\\
&\leq& {1+ \ln(1+|\ln r_1|) }
\\
&\leq& {1+ \ln(1+\|\psi\|_*) }.
\end{eqnarray*}
\end{proof}
\section{Proof of Theorem \ref{main}}
The proof falls naturally into three parts.
\subsection{{ A priori} estimates}
The following estimates follow directly from Proposition \ref{prop} and Theorem \ref{decom}.
\begin{Prop}
\label{apriori} Let \mbox{$u$} be a smooth solution of \eqref{E} and \mbox{$\omega$} its vorticity. Then, there exists a constant \mbox{$C_0$} depending only on the \mbox{$L^p\cap {{\rm LBMO}}$} norm of \mbox{$\omega_0$} such that
$$
\|u(t)\|_{LL}+\|\omega(t)\|_{{{\rm LBMO}}}\leq C_0\exp(C_0t),
$$
for every \mbox{$t\geq 0$}.
\end{Prop}
\begin{proof} One has \mbox{$\omega(t,x) =\omega_0(\psi_t^{-1}(x))$} where \mbox{$\psi_t$} is the flow associated to the velocity \mbox{$u$}. Since \mbox{$u$} is smooth, \mbox{$\psi_t^{\pm 1}$} is Lipschitz for every \mbox{$t\geq 0$}. This implies in particular that
\mbox{$\|\psi_t^{\pm 1}\|_*$} is finite for every \mbox{$t\geq 0$}.
Theorem \ref{decom} and Proposition \ref{prop} yield together
\begin{eqnarray*}
\|\omega(t)\|_{{{\rm LBMO}}}&\leq& C\|\omega_0\|_{{{\rm LBMO}}\cap L^p}\ln(1+\|\psi_t^{-1}\|_*)
\\
&\leq& C\|\omega_0\|_{{{\rm LBMO}}\cap L^p}\ln(1+\exp(\int_0^t\|u(\tau)\|_{LL}d\tau))
\\
&\leq& C_0(1+\int_0^t\|u(\tau)\|_{LL}d\tau).
\end{eqnarray*}
On the other hand, one has
\begin{eqnarray*}
\|u(t)\|_{LL}&\leq&\|\omega(t)\|_{L^2}+ \|\omega(t)\|_{B_{\infty,\infty}^0}
\\
&\leq& C(\|\omega_0\|_{L^2}+ \|\omega(t)\|_{{\rm BMO}}).
\end{eqnarray*}
The first estimate is classical (see \cite{bah-ch-dan} for instance) and the second one is just the conservation of the \mbox{$L^2$}-norm of the vorticity and the continuity of the embedding \mbox{${\rm BMO}\hookrightarrow B_{\infty,\infty}^0$}.
Consequently, we deduce that
\begin{eqnarray*}
\|u(t)\|_{LL}\leq C_0(1+\int_0^t\|u(\tau)\|_{LL}d\tau),
\end{eqnarray*}
and by Gronwall's Lemma
$$
\|u(t)\|_{LL}\leq C_0\exp(C_0t),\qquad\forall\, t\geq 0.
$$
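For completeness, the Gronwall step can be spelled out: setting \mbox{$y(t):=\|u(t)\|_{LL}$} and \mbox{$Y(t):=1+\int_0^t y(\tau)d\tau$}, the integral inequality above reads \mbox{$Y'(t)=y(t)\leq C_0Y(t)$}, whence
$$
Y(t)\leq Y(0)e^{C_0t}=e^{C_0t}\qquad{\rm and}\qquad \|u(t)\|_{LL}=y(t)\leq C_0Y(t)\leq C_0e^{C_0t}.
$$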
This yields in particular
$$
\|\omega(t)\|_{{{\rm LBMO}}}\leq C_0\exp(C_0t),\qquad\forall\, t\geq 0,
$$
as claimed.
\end{proof}
\subsection{ Existence} Let \mbox{$\omega_0\in L^p\cap {{\rm LBMO}}$} and $u_0=K\star \omega_0$,
with $K(x)=\frac{x^\perp}{2\pi|x|^2}$. We take \mbox{$\rho\in \mathcal C^\infty_0$}, with \mbox{$\rho\geq 0$} and \mbox{$\int\rho(x)dx=1$} and set
$$
\omega_0^n=\rho_n\star \omega_0,\qquad u_0^n= \rho_n\star u_0,
$$
where \mbox{$\rho_n(x)=n^2\rho(nx)$}. Obviously, \mbox{$\omega_0^n$} is a \mbox{$C^\infty$} bounded function for every \mbox{$n\in\mathbb N^*$}. Furthermore, thanks to \eqref{eq:comp},
$$
\|\omega_0^n\|_{L^p}\leq \|\omega_0\|_{L^p}\qquad{\rm and}\qquad \|\omega_0^n\|_{{{\rm LBMO}}}\leq \|\omega_0\|_{{{\rm LBMO}}}.
$$
The classical interpolation result between Lebesgue and \mbox{${\rm BMO}$} spaces (see \cite{GR} for more details) implies that
$$
\|\omega_0^n\|_{L^q}\leq \|\omega_0^n\|_{L^p\cap {\rm BMO}}\leq \|\omega_0\|_{L^p\cap {\rm BMO}} , \qquad \forall\, q\in[p,+\infty[.
$$
Since \mbox{$\omega_0^n\in L^p\cap L^\infty$}, the classical result of Yudovich \cite{Y1} provides a unique weak solution \mbox{$u^n$} with
$$
\omega^n\in L^\infty({\mathbb R}_+, L^p\cap L^\infty).
$$
By Proposition \ref{apriori} one has
\begin{eqnarray}
\label{44}
\|u^n(t)\|_{LL}+ \|\omega^n(t)\|_{L^p\cap{{\rm LBMO}}}\leq C_0\exp(C_0t),\qquad\forall\, t\in{\mathbb R}_+.
\end{eqnarray}
With this uniform estimate in hand, we can perform the same analysis as in the case \mbox{$\omega_0\in L^p\cap L^\infty$} (see paragraph 8.2.2 in \cite{Maj} for more explanation). For the convenience of the reader we briefly outline the main arguments of the proof.
If one denotes by \mbox{$\psi_n(t,x)$} the associated flow to \mbox{$u^n$} then
\begin{equation}
\label{tt}
\|\psi_n^{\pm1}(t)\|_{*}\leq C_0\exp(C_0t),\qquad\forall\, t\in{\mathbb R}_+.
\end{equation}
This yields the existence of explicit time continuous functions \mbox{$\beta(t)>0$} and \mbox{$C(t)$} such that
$$
|\psi_n^{\pm1}(t,x_2)-\psi_n^{\pm1}(t,x_1)|\leq C(t)|x_2-x_1|^{\beta(t)},\qquad \forall\, (x_1,x_2)\in{\mathbb R}^2\times{\mathbb R}^2.
$$
Moreover,
$$
|\psi_n^{\pm1}(t_2,x)-\psi_n^{\pm1}(t_1,x)|\leq |t_2-t_1|\|u^n\|_{L^\infty}\leq C_0|t_2-t_1|,\qquad \forall\, (t_1,t_2)\in{\mathbb R}_+\times{\mathbb R}_+.
$$
Here, we have used the Biot-Savart law to get
$$
\|u^n(t)\|_{L^\infty}\lesssim \|\omega^n(t)\|_{L^p\cap L^3}\leq\|\omega_0\|_{L^p\cap L^3}.
$$
The family \mbox{$\{\psi_n,\, n\in\mathbb N\}$} is bounded and equicontinuous on every compact set \mbox{$[0,T]\times \bar B(0,R)\subset {\mathbb R}_+\times{\mathbb R}^2$}. The Arzel\`a-Ascoli
theorem then implies
the existence of a limiting particle-trajectory map \mbox{$\psi(t,x)$}. Performing the same analysis for \mbox{$\{\psi_n^{-1},\, n\in\mathbb N\}$} we find that \mbox{$\psi(t,x)$} is a Lebesgue measure preserving homeomorphism. Also, passing to the limit\footnote{ We take the pointwise limit in the definition formula and then take the supremum.} in \eqref{tt} leads to
$$
\|\psi_t\|_{*}=\|\psi^{-1}_t\|_{*}\leq C_0\exp(C_0t),\qquad \forall\, t\in{\mathbb R}_+.
$$
One defines
$$
\omega(t,x)=\omega_0(\psi^{-1}_t(x)),\qquad u(t,x)=(K\star_x \omega(t,.))(x).
$$
We easily check that for every \mbox{$q\in [p,+\infty[$} one has
\begin{eqnarray*}
\omega^n(.,t)&\longrightarrow& \omega(.,t)\quad {\rm in }\,\, L^q.
\\
u^n(.,t)&\longrightarrow& u(.,t)\quad {\rm uniformly}.
\end{eqnarray*}
The last claim follows from the fact that
$$
\|u^n(t)-u(t)\|_{L^\infty}\lesssim \|\omega^n(t)-\omega(t)\|_{ L^p\cap L^3}.
$$
All this allows us to pass to the limit in the integral equation on \mbox{$\omega^n$} and then to prove that \mbox{$(u,\omega)$} is a weak solution to the vorticity-stream formulation of the 2D Euler system. Furthermore, the convergence of
\mbox{$\{\omega^n(t)\}$} in \mbox{$L^1_{\text{loc}}$} and \eqref{44} imply together that
$$
\|\omega(t)\|_{L^p\cap{{\rm LBMO}}}\leq C_0\exp(C_0t),\qquad \forall\,t\in{\mathbb R}_+,
$$
as claimed.
The continuity of \mbox{$\psi$} and the preservation of Lebesgue measure imply that \mbox{$t\mapsto \omega(t)$} is continuous\footnote{ By approximation we are reduced to the following situation: \mbox{$g_n(x)\to g(x)$} pointwise and
$$\|g_n\|_{L^q}=\|g\|_{L^q}.
$$
This is enough to deduce that \mbox{$g_n\to g$} in \mbox{$L^q$} (see Theorem 1.9 in \cite{LL} for instance). } with values in \mbox{$L^q$} for all \mbox{$q\in [p,+\infty[$}. This implies in particular that
\mbox{$u\in \mathcal C([0,+\infty[, L^r({\mathbb R}^d))$} for every \mbox{$r\in [\frac{2p}{2-p},+\infty]$}.
\subsection{ Uniqueness} Since the vorticity remains bounded in \mbox{${\rm BMO}$}, the uniqueness of the solution follows from Theorem 7.1 in \cite{Vishik1}.
Alternatively, one can add the information \mbox{$u\in \mathcal C([0,+\infty[, L^\infty({\mathbb R}^d))$} (which is satisfied by the solution constructed above) to the theorem, in which case the uniqueness follows from Theorem 7.17 in \cite{bah-ch-dan}.
\
\begin{thebibliography}{9999}
\bibitem{bah-ch-dan} H.~ Bahouri, J-Y.~ Chemin and R.~ Danchin, {\it Fourier Analysis
and Nonlinear Partial Differential Equations}, Grundlehren der mathematischen Wissenschaften 343.
\bibitem{BK} F.~ Bernicot and S.~ Keraani, {\it Sharp constants for composition with a measure-preserving map}, preprint 2012.
\bibitem{Beale}
J. T.~ Beale, T.~ Kato and A.~ Majda, {\it Remarks on the Breakdown of Smooth Solutions
for the \mbox{$3D$} Euler Equations}, Comm. Math. Phys. {\bf 94} (1984) 61--66.
\bibitem{Ch} D.~ Chae, {\it Weak solutions of 2D Euler equations with initial vorticity in $L\ln L$}. J. Diff. Eqs., {\bf 103} (1993), 323--337.
\bibitem{Ch1} J.-Y. Chemin, {\it Fluides Parfaits Incompressibles}, Ast\'erisque 230 (1995); {\it Perfect Incompressible Fluids},
transl. by I. Gallagher and D. Iftimie, Oxford Lecture Series in Mathematics and Its Applications, Vol. {\bf 14},
Clarendon Press-Oxford University Press, New York (1998).
\bibitem{Maj} A. J.~ Majda and A. L.~ Bertozzi, {\it Vorticity and incompressible flow}, Cambridge Texts
in Applied Mathematics, vol. {\bf 27}, Cambridge University Press, Cambridge, 2002.
\bibitem{De} J.-M.~ Delort, {\it Existence de nappes de tourbillon en dimension deux}, J. Amer. Math. Soc., Vol. {\bf 4} (1991)
pp. 553--586.
\bibitem{DM} R.~ DiPerna and A.~ Majda, {\it Concentrations in regularization for 2D incompressible flow}. Comm. Pure Appl.
Math. {\bf 40} (1987), 301--345.
\bibitem{FLX} M. C. Lopes Filho, H. J. Nussenzveig Lopes and Z. Xin, {\it Existence of
vortex sheets with reflection symmetry in two space dimensions}, Arch. Ration. Mech. Anal., {\bf 158}(3) (2001), 235--257.
\bibitem{Ger} P. G\'erard, {\it R\'esultats r\'ecents sur les fluides parfaits incompressibles bidimensionnels [d'apr\`es J.-Y.
Chemin et J.-M. Delort]}, S\'eminaire Bourbaki, 1991/1992, no. 757, Ast\'erisque, Vol. {\bf 206}, 1992, 411--444.
\bibitem{GMO} Y.~ Giga, T. ~ Miyakawa and H.~ Osada, {\it \mbox{$2D$} Navier-Stokes flow with measures as initial vorticity}. Arch. Rat.
Mech. Anal. {\bf 104} (1988), 223--250.
\bibitem{GR} L.~ Grafakos, {\it Classical and Modern Fourier Analysis}, Prentice Hall, New York, 2006.
\bibitem{LL} E. H. Lieb and M. Loss, { \it Analysis}, Grad. Studies in Math. {\bf 14}, Amer. Math. Soc., Providence,
RI, 1997.
\bibitem{lions1} P.-L.~ Lions, {\it Mathematical topics in fluid mechanics. Vol. 1}. The Clarendon Press Oxford University Press, New York, 1996.
\bibitem{Ser} P.~ Serfati, {\it Structures holomorphes \`a faible r\'egularit\'e spatiale en m\'ecanique des fluides}. J. Math. Pures Appl. {\bf 74} (1995), 95--104.
\bibitem{tan} Y.~ Taniuchi, {\it Uniformly local $L^p$ estimate for 2D vorticity equation and its application to Euler equations with initial vorticity in {\rm BMO}}. Comm. Math. Phys., {\bf 248} (2004), 169--186.
\bibitem{Vishik1} M.~ Vishik, {\it Incompressible flows of an ideal fluid with vorticity in borderline spaces of Besov type}. (English, French summary)
Ann. Sci. \'Ecole Norm. Sup. (4) {\bf 32} (1999), no. 6, 769--812.
\bibitem{Vishik2} M. Vishik, {\it Hydrodynamics in Besov Spaces}, Arch. Rational Mech. Anal. {\bf{145}} (1998), 197--214.
\bibitem{Y1} Y.~ Yudovich, {\it Nonstationary flow of an ideal incompressible liquid}. Zh. Vych. Mat., {\bf 3} (1963), 1032--1066.
\bibitem{Y2} Y.~ Yudovich, {\it Uniqueness theorem for the basic nonstationary problem in the dynamics of an ideal incompressible fluid}. Math. Res. Lett., {\bf 2} (1995), 27--38.
\end{thebibliography}
\end{document}
\begin{document}
\title{On isotropic divisors on irreducible symplectic manifolds}
\begin{abstract}
Let $ X $ be an irreducible symplectic manifold and $ L $ a divisor on $ X $. Assume that
$ L $ is isotropic with respect to the Beauville-Bogomolov quadratic form.
We define the rational Lagrangian locus and the movable locus
on the universal deformation space of the pair $ (X,L) $.
We prove that the rational Lagrangian locus is
empty or coincides with the movable locus of the universal deformation space.
\end{abstract}
\section{Introduction}
We start with recalling the definition of an irreducible symplectic manifold.
\begin{defn}[{{\cite[Th\'{e}or\`{e}me 1]{Beauville}}}]
Let $ X $ be a compact K\"ahler manifold. The manifold $ X $ is said to be irreducible symplectic
if $ X $ satisfies the following three properties.
\begin{itemize}
\item[(1)] $ X $ carries a symplectic form.
\item[(2)] $ X $ is simply connected.
\item[(3)] $ \dim H^{0}(X,\Omega_{X}^{2}) = 1$.
\end{itemize}
\end{defn}
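The model examples, going back to \cite{Beauville}, may be kept in mind: a $ K3 $ surface is irreducible symplectic of dimension two, and if $ S $ is a $ K3 $ surface, then the Hilbert scheme of $ n $ points
\[
S^{[n]}:=\mathrm{Hilb}^{n}(S), \qquad \dim_{\mathbb{C}} S^{[n]}=2n,
\]
is irreducible symplectic; the generalized Kummer varieties provide a second series of examples.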
Together with Calabi-Yau manifolds and complex tori, irreducible symplectic manifolds
form the building blocks of compact K\"ahler manifolds with $ c_{1} = 0 $.
It is shown in \cite{fibrespace}, \cite{addendum_fibrespace} and \cite{MR2453602}
that a fibre space structure of an irreducible symplectic manifold
is very restricted. To state the result, we recall the definition of a Lagrangian fibration.
\begin{defn}\label{rational_Lag_def}
Let $X$ be an irreducible symplectic manifold and $L$
a line bundle on $X$. A surjective morphism
$ g : X \to S$ is said to be Lagrangian
if a general fibre is connected and Lagrangian. A dominant map
$ g: X \dashrightarrow S $ is said to be rational Lagrangian
if there exists a birational map $\phi : X\dashrightarrow X'$ such that
the composite map $g\circ \phi^{-1} : X' \to S$
is Lagrangian.
We say that $L$ defines a
Lagrangian fibration if the linear system $ |L| $ defines a Lagrangian fibration.
Also we say that $L$ defines a rational Lagrangian fibration if $|L|$ defines a rational Lagrangian fibration.
\end{defn}
\begin{theorem}[\cite{addendum_fibrespace}, \cite{fibrespace} and \cite{MR2453602}]
Let $ X $ be a projective irreducible symplectic manifold. Assume that $ X $ admits a
surjective morphism $ g : X \to S $ over a smooth projective manifold $ S $.
Assume that $ 0 < \dim S < \dim X $ and $ g $ has connected fibres.
Then
$ g $ is Lagrangian and $ S \cong \mathbb{P}^{\frac{1}{2}\dim X}$.
\end{theorem}
It is a natural question when a line bundle $ L $ defines a Lagrangian fibration. If
$ L $ defines a rational Lagrangian fibration, then $ L $ is isotropic with respect to
the Beauville-Bogomolov quadratic form. Moreover the first Chern class $ c_{1}(L) $ of $ L $
belongs to the birational K\"ahler cone which is defined in \cite[Definition 4.1]{MR1992275}.
\begin{conjecture}[D.~Huybrechts and J.~Sawon]\label{conjecture}
Let $ X $ be an irreducible symplectic manifold and $ L $ a line bundle on $ X $.
Assume that $ L $ is
isotropic with respect to the Beauville-Bogomolov quadratic form on $ H^{2}(X,\mathbb{C}) $.
We also assume that $ c_{1}(L) $ belongs to the birational K\"ahler cone of $ X $.
Then $ L $ will define a rational Lagrangian fibration.
\end{conjecture}
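In dimension two this is a classical fact: on a $ K3 $ surface the Beauville-Bogomolov form coincides with the intersection form, and a nontrivial nef line bundle $ L $ with $ L^{2}=0 $ is a multiple of the fibre class of an elliptic fibration
\[
g : X \longrightarrow \mathbb{P}^{1},
\]
whose fibres are one dimensional and hence Lagrangian.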
At the moment, only partial results are known about Conjecture \ref{conjecture};
we refer the reader to \cite{MR2400885}, \cite{MR2739808},
\cite{MR2357635} and \cite{MR2585581}. In this note,
we consider the above conjecture by a different approach. To state the result,
we recall the basic facts of a deformation of a pair which consists of a symplectic manifold and
a line bundle.
\begin{defn}
Let $X$ be a K\"{a}hler manifold and $L$ a line bundle on $X$.
A deformation of the pair $(X,L)$ consists of a smooth
morphism $\mathfrak{X}\to S$ over a smooth manifold $S$
with a reference point $o$ and a line
bundle $\mathfrak{L}$ on $\mathfrak{X}$
such that the fibre $\mathfrak{X}_{o}$ at $o$ is isomorphic to $X$
and the restriction $\mathfrak{L}|_{\mathfrak{X}_{o}}$ is
isomorphic to $L$.
\end{defn}
If $ X $ is an irreducible symplectic manifold, it is known that there exists a universal
family of deformations of the pair $ (X,L) $.
\begin{prop}[{{\cite[(1.14)]{MR1664696}}}]
Let $X$ be an irreducible symplectic manifold and
$L$ a line bundle on $X$. We also let $\mathfrak{X} \to \mathrm{Def}(X)$
be the Kuranishi family of $X$. Then there exists a smooth hypersurface
$\mathrm{Def}(X,L)$ of $\mathrm{Def}(X)$ such that the restriction family
$\mathfrak{X}_{L}:= \mathfrak{X}\times_{\mathrm{Def}(X)}\mathrm{Def}(X,L)
\to \mathrm{Def}(X,L)$ forms the universal family of deformations of the pair
$(X,L)$. Namely, $\mathfrak{X}_{L}$ carries a line bundle $\mathfrak{L}$
and every deformation $\mathfrak{X}_{S} \to S$ of $(X,L)$
is isomorphic to the pull back of $(\mathfrak{X}_{L},\mathfrak{L})$
via a uniquely determined map
$S \to \mathrm{Def}(X,L)$.
\end{prop}
Now we can state the result.
\begin{theorem}\label{main_result}
Let $X$ be an irreducible symplectic manifold
and $L$ a line bundle on $X$. We also let $\pi : \mathfrak{X}_{L} \to \mathrm{Def}(X,L)$
be the universal family of deformations of the pair $(X,L)$
and $\mathfrak{L}$ the universal bundle. We denote by $q$
the Beauville-Bogomolov form on $H^{2}(X,\mathbb{C})$.
Assume that $q(L)=0$. We define the movable locus $ \mathrm{Def}(X,L)_{\mathrm{mov}} $ by
\[
\{
t \in \mathrm{Def}(X,L);\mbox{$c_{1}(\mathfrak{L}_{t})$ belongs to the birational K\"ahler cone of $ \mathfrak{X}_{t} $.}
\}
\]
We also define two more subsets of $ \mathrm{Def}(X,L) $. The first is
the
locus of rational Lagrangian fibrations $V$, which is defined by
\[
\{
t \in \mathrm{Def}(X,L);\mbox{$\mathfrak{L}_{t}$ defines a rational Lagrangian
fibration over the projective space. }
\}
\]
The second is the locus of Lagrangian fibrations $V_{\mathrm{reg}}$, which is defined by
\[
\{
t \in \mathrm{Def}(X,L);\mbox{$\mathfrak{L}_{t}$ defines a Lagrangian
fibration over the projective space. }
\}
\]
Then $V=\emptyset$ or $V=\mathrm{Def}(X,L)_{\mathrm{mov}}$. Moreover if $V\ne \emptyset$, then $V_{\mathrm{reg}}$
is a dense open subset of $\mathrm{Def}(X,L)$ and
$ \mathrm{Def}(X,L)\setminus V_{\mathrm{reg}} $ is contained in
a union of countably many hypersurfaces of $ \mathrm{Def}(X,L) $.
\end{theorem}
\noindent
\begin{remark}
L.~Kamenova and M.~Verbitsky proved that $ V_{\mathrm{reg}} $ is a dense open subset of
$ \mathrm{Def}(X,L) $ under the assumption $ V_{\mathrm{reg}} \ne \emptyset $ in \cite[Theorem 3.4]{1208.4626}.
\end{remark}
To state an application of Theorem \ref{main_result}, we need the following two definitions.
\begin{defn}
Two compact K\"ahler manifolds $ X $ and $ X' $ are
said to be deformation equivalent if there exists a proper smooth
morphism $\pi : \mathfrak{X} \to S $ over a smooth connected complex manifold $ S $
such that both $ X $ and $ X' $ form fibres of $ \pi $.
\end{defn}
\begin{defn}
An irreducible symplectic manifold $ X $ is said to be of $ K3^{[n]} $-type if
$ X $ is deformation equivalent to the Hilbert scheme of $ n $ points on a $ K3 $ surface.
An irreducible symplectic manifold $ X $
is also said to be of type generalized Kummer
if $ X $ is deformation
equivalent to
a generalized Kummer variety which
is defined in \cite[Th\'{e}or\`{e}me 4]{MR785234}.
\end{defn}
\noindent
It was shown in
\cite{1301.6584}, \cite{1301.6968} and \cite{1206.4838}
that if $ X $ is isomorphic to the Hilbert scheme of $ n $ points on a $ K3 $ surface
or a generalized Kummer variety, then Conjecture \ref{conjecture} holds.
Combining these results with Theorem \ref{main_result}, we obtain the following result.
\begin{corollary}\label{application}
Let $ X $ be an irreducible symplectic manifold of type $ K3^{[n]} $
or of type generalized Kummer.
We also let $ L $
be a nontrivial line bundle on $ X $ which is isotropic
with respect to the Beauville-Bogomolov quadratic form on $ H^{2}(X,\mathbb{C}) $ and
whose first Chern class $ c_{1}(L) $ belongs to the birational K\"ahler cone of $ X $.
Then $ L $ defines a
rational Lagrangian fibration
over the projective space.
\end{corollary}
\section{Birational correspondence of deformation families}
In this section we study a relationship between
deformation families. We begin by introducing the following Lemma.
\begin{lemma}[{{\cite[Lemma 2.6]{MR1664696}}}]\label{isometry}
Let $ X $ and $ X' $ be irreducible symplectic manifolds. Assume that there
exists a bimeromorphic map $ \phi : X \dashrightarrow X' $.
Then $ \phi $ induces an isomorphism
\[
\phi_{*} : H^{2}(X,\mathbb{C}) \cong H^{2}(X',\mathbb{C})
\]
which is compatible with the Hodge structures and the Beauville-Bogomolov quadratic forms.
\end{lemma}
We consider the relationship of the Kuranishi families
of bimeromorphic irreducible symplectic manifolds.
\begin{prop}\label{birational_kuranishi}
Let $X$ and $X'$ be irreducible symplectic manifolds. We denote
by $\pi : \mathfrak{X}\to\mathrm{Def}(X)$ the universal family of deformations
of $X$. We also denote by $\pi' : \mathfrak{X}' \to \mathrm{Def}(X')$ the universal family of
deformations of $X'$. Assume that $X$ and $X'$ are bimeromorphic.
Then there exist dense open subsets $U$ of $\mathrm{Def}(X)$
and $ U' $ of $ \mathrm{Def}(X') $ which satisfy the following three properties.
\begin{itemize}
\item[(1)] The set $ \mathrm{Def}(X)\setminus U $ is contained in
a union of countably many hypersurfaces in $ \mathrm{Def}(X) $, and
$ \mathrm{Def}(X')\setminus U'$ is also contained in a union
of countably many hypersurfaces in $ \mathrm{Def}(X') $.
\item[(2)] They satisfy the following diagram:
\begin{equation*}\label{birational_diagram}
\xymatrix{
\mathfrak{X}\times_{\mathrm{Def}(X)}U
\ar[r]^{\cong}_{\tilde{\phi}} \ar[d] & \mathfrak{X}'\times_{\mathrm{Def}(X')}{U'}
\ar[d] \\
U \ar[r]_{\cong}^{\varphi} & U' ,
}
\end{equation*}
where $ \tilde{\phi} $ and $ \varphi $ are isomorphisms.
\item[(3)] Let $ s $ be a point of $ U $ and $ s' $ the point $ \varphi (s) $. We also let
$ \phi_{s} : \mathfrak{X}_{s} \cong \mathfrak{X}'_{s'
} $ be
the restriction of the isomorphism
$\tilde{\phi} : \mathfrak{X}\times_{\mathrm{Def}(X)} {U} \cong \mathfrak{X}' \times_{\mathrm{Def}(X')} U'$
in the above diagram to the fibre $ \mathfrak{X}_{s} $ at $s$
and the fibre $ \mathfrak{X}'_{s'} $ at $ s' $.
We denote by $ \eta $ a parallel transport in the local system
$ R^{2}\pi_{*} \mathbb{C}$ along a path from the reference point to $ s $.
We also denote by $ \eta' $ a parallel transport in the local system
$ R^{2}\pi'_{*} \mathbb{C}$ along a path from the reference point to $ s' $.
Then the composition of the isomorphisms
\[
H^{2}(X,\mathbb{C}) \stackrel{\eta}{\cong} H^{2}(\mathfrak{X}_{s},\mathbb{C})
\stackrel{\phi_{s}}{\cong} H^{2}(\mathfrak{X}'_{s'}, \mathbb{C})
\stackrel{\eta'^{-1}}{\cong} H^{2}(X', \mathbb{C})
\]
coincides with $ \phi_{*} $ which is the isomorphism induced by
$ \phi : X \dashrightarrow X' $.
\end{itemize}
\end{prop}
\begin{proof}
The proof of this proposition mimics that of
\cite[Theorem 5.9]{MR1664696}.
The proof consists of two steps. First, we show that there exist
open sets $ U $ of $ \mathrm{Def}(X) $ and $ U' $ of $ \mathrm{Def}(X') $
which satisfy assertions (2) and (3) of Proposition \ref{birational_kuranishi}.
Since $ X $ and $ X' $ are bimeromorphic,
we have a deformation $\mathfrak{X}_{S}\to S$
of $X$ and a deformation $\mathfrak{X}'_{S}\to S$ of $X'$ over
a small disk $S$ which are
isomorphic to each other over the
punctured disk $S\setminus\{0\}$ by \cite[Theorem 4.6]{MR1664696}.
By the universality,
$ \mathfrak{X}_{S} \to S$ is isomorphic
to the base change $ \mathfrak{X} \to \mathrm{Def}(X) $
by a uniquely determined morphism
$ S \to \mathrm{Def}(X) $. The family $ \mathfrak{X}'_{S} \to S $ is also
isomorphic to the base change of
$ \mathfrak{X}' \to \mathrm{Def}(X') $
by a uniquely determined morphism
$ S \to \mathrm{Def}(X') $.
Thus
there exist points $t \in\mathrm{Def}(X)$ and
$ t' \in \mathrm{Def}(X') $
such that the fibres $\mathfrak{X}_{t}$ and $\mathfrak{X}'_{t'}$
are isomorphic.
Let $ \eta $ be a parallel transport in $ R^{2}\pi_{*}\mathbb{C} $ along
a path from the reference point to $ t $ and
$ \eta' $ a parallel transport in $ R^{2}\pi'_{*}\mathbb{C} $ along
a path from the reference point to $ t' $.
To consider
the composition of the isomorphisms
\begin{equation}\label{parallel_transport}
H^{2}(X,\mathbb{C}) \stackrel{\eta}{\cong}
H^{2}(\mathfrak{X}_{t},\mathbb{C})
\cong H^{2}(\mathfrak{X}'_{t'},\mathbb{C})
\stackrel{\eta'^{-1}}{\cong}
H^{2}(X',\mathbb{C})
\end{equation}
we need more information on the construction of the two families $ \mathfrak{X}_{S} \to S $ and
$ \mathfrak{X}'_{S} \to S $. By the last paragraph of the proof of \cite[Theorem 4.6]{MR1664696},
the construction is due to \cite[Proposition 4.5]{MR1664696}.
According to \cite[Proposition 4.5]{MR1664696}, $ \mathfrak{X}'_{S} \to S $ is constructed as follows.
Let $ H' $ be an ample divisor on $ X' $ and $ H := (\phi_{*})^{-1}H'$. We consider a deformation
$\pi_{S} : (\mathfrak{X}_{S},\mathfrak{H}) \to S $ of $ (X,H) $. Then the closure of the image of the rational
map $ \mathfrak{X}_{S} \dashrightarrow \mathbb{P}((\pi_{S})_{*}\mathfrak{H}) $ gives the desired deformation
$ \mathfrak{X}'_{S} \to S $.
Hence we have a birational map $ \mathfrak{X}_{S} \dasharrow \mathfrak{X}'_{S} $ which commutes with
the two projections. Moreover the restriction of this birational map
coincides with $ \phi $. Thus the composition of the isomorphisms (\ref{parallel_transport})
coincides with $ \phi_{*} $.
By \cite[Th\'{e}or\`{e}me 5 (b)]{Beauville}, we can extend the isomorphism
$ \mathfrak{X}_{t} \cong \mathfrak{X}'_{t'} $ over open sets of
$ \mathrm{Def}(X) $ and $ \mathrm{Def}(X') $, that is,
there exist open
sets $U$ of $\mathrm{Def}(X)$ and $ U' $ of $ \mathrm{Def}(X') $
such that the restriction families
$\mathfrak{X} \times_{\mathrm{Def}(X)}U$
and $\mathfrak{X}'\times_{\mathrm{Def}(X')}U'$ are isomorphic and this isomorphism is compatible
with the two projections
$\mathfrak{X}\to\mathrm{Def}(X)$ and $\mathfrak{X}'\to\mathrm{Def}(X')$.
By this construction, the restriction of the
isomorphism $ \tilde{\phi} : \mathfrak{X}\times_{\mathrm{Def}(X)}U \cong \mathfrak{X}'\times_{\mathrm{Def}(X')}U' $
to the fibres
satisfies the assertion (3) of Proposition \ref{birational_kuranishi}.
Next we show that $ U $ and $ U' $ satisfy assertion (1) of
Proposition \ref{birational_kuranishi}.
Let $s$ be a point of $\bar{U}$. By \cite[Theorem 4.3]{MR1664696},
the fibres $\mathfrak{X}_{s}$ and $\mathfrak{X}'_{s}$ are bimeromorphic.
If $\dim H^{1,1}(\mathfrak{X}_{s},\mathbb{Q})=0$,
then $\mathfrak{X}_{s}$ and $\mathfrak{X}'_{s}$
carry neither curves nor effective divisors. Thus $ \mathfrak{X}_{s} $ and $ \mathfrak{X}'_{s} $
are isomorphic by \cite[Proposition 2.1]{MR1992275} and $ s \in U $.
Thus if $s \in \bar{U}\setminus U$
then $ \dim H^{1,1}(\mathfrak{X}_{s},\mathbb{Q}) \ge 1$. This implies
that $ \bar{U}\setminus U $ is contained in a union of
countably many hypersurfaces.
\end{proof}
For the proof of Theorem \ref{main_result},
we need also a correspondence of deformation families of pairs.
Before we state the assertion,
we give a proof of
the following Lemma.
\begin{lemma}\label{Picard_number_one}
Let $X$ and $X'$ be irreducible symplectic manifolds. Assume that
there exists a bimeromorphic map $\phi : X \dashrightarrow X'$. We
also assume that
$\dim H^{1,1}(X,\mathbb{Q})=1$ and
$q_{X}(\beta) \ge 0$ for every element $ \beta $ of $H^{1,1}(X,\mathbb{Q})$,
where $ q_{X} $ is the Beauville-Bogomolov quadratic form
on $ H^{2}(X,\mathbb{C}) $.
Then $X$ and $X'$ are isomorphic.
\end{lemma}
\begin{proof}
Since $ X $ and $ X' $ are bimeromorphic, we have an isomorphism
\[
\phi_{*} : H^{2}(X,\mathbb{C}) \cong H^{2}(X' , \mathbb{C})
\]
by Lemma \ref{isometry}.
Since $ \phi_{*} $ respects the Beauville-Bogomolov quadratic forms
and the Hodge structures, $ \dim H^{1,1}(X', \mathbb{Q}) = 1$ and
$ H^{1,1} (X',\mathbb{Q})$ is generated by a class $ \gamma \in H^{1,1}(X',\mathbb{Q}) $
such that $ q_{X'}(\gamma) \ge 0$, where $ q_{X'} $
is the Beauville-Bogomolov quadratic form on $ H^{2}(X',\mathbb{C}) $.
Let $ \mathcal{C}_{X} $ and $ \mathcal{C}_{X'} $ be the positive cones in $ H^{1,1}(X,\mathbb{R}) $
and $ H^{1,1}(X',\mathbb{R}) $, respectively.
By \cite[Corollary 7.2]{MR1664696}, $ \mathcal{C}_{X} $ and $ \mathcal{C}_{X'} $
coincide with the K\"ahler cones of $ X $ and $ X' $, respectively.
Since $ \phi_{*} $ maps $ \mathcal{C}_{X} $ to $ \mathcal{C}_{X'} $,
$ \phi_{*}\alpha $ is K\"ahler for every K\"ahler class $ \alpha \in H^{1,1}(X,\mathbb{R}) $.
By \cite[Corollary 3.3]{MR642659},
$ \phi $ can be extended to an isomorphism.
\end{proof}
Now
we can state a correspondence of deformation families of pairs.
\begin{prop}\label{Birationarity-of-Kuranishi}
Let $X$ be an irreducible symplectic manifold and $L$ a line bundle on $ X $.
We also let $X'$ be an irreducible symplectic manifold and
$L'$ a line bundle on $X'$.
We denote the universal
family of deformations of the pair
$(X,L)$ by $(\mathfrak{X}_{L},\mathfrak{L})$
and the
parameter space
by $\mathrm{Def}(X,L)$.
We also denote
the universal family of deformations of the pair $(X',L')$ by
$(\mathfrak{X}'_{L'},\mathfrak{L}')$ and
the parameter space
by
$ \mathrm{Def}(X',L') $.
Assume that there exists
a birational map $\phi : X\dashrightarrow X'$
such that $\phi_* L \cong L'$ and $ q_{X}(L) \ge 0 $, where
$ q_{X} $ is the Beauville-Bogomolov quadratic form
on $ H^{2}(X,\mathbb{C}) $. Then we have the following.
\begin{itemize}
\item[(1)] There exist open subsets $U_{L}$ of $\mathrm{Def}(X,L)$
and $ U'_{L'} $ of $ \mathrm{Def}(X',L') $ such that
they satisfy the following diagram
\begin{equation*}\label{birational_diagram_2}
\xymatrix{
\mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)}U_{L}
\ar[r]^{\cong} \ar[d] &
\mathfrak{X}'_{L'}\times_{\mathrm{Def}(X',L')}U'_{L'} \ar[d] \\
U_{L} \ar[r]_{\cong}^{\varphi} & U'_{L'} },
\end{equation*}
where $\varphi $ is the isomorphism in the diagram of the assertion (2)
of Proposition \ref{birational_kuranishi}.
Moreover $ \mathrm{Def}(X,L)\setminus U_{L} $ is contained in a union of countably
many hypersurfaces of $ \mathrm{Def}(X,L) $, and
$ \mathrm{Def}(X',L')\setminus U'_{L'} $ is also contained in a union of countably
many hypersurfaces of $ \mathrm{Def}(X',L') $.
\item[(2)] For every point $ s\in U_{L} $,
$ (\phi_{s})_{*}\mathfrak{L}_{s} \cong \mathfrak{L}'_{s'}$, where
$ s' = \varphi (s)$ and
$ \phi_{s} $ is the restriction of the isomorphism
$
\mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)}U_{L}
\to
\mathfrak{X}'_{L'}\times_{\mathrm{Def}(X',L')}U'_{L'}
$
to the fibres $ \mathfrak{X}_{L,s} $ and $ \mathfrak{X}'_{L',s'} $.
\end{itemize}
\end{prop}
\begin{proof}
We use the same notation as in the statement and
the proof of Proposition \ref{birational_kuranishi}.
If $ U\cap \mathrm{Def}(X,L) \ne \emptyset $,
then $ U_{L} := U\cap \mathrm{Def}(X,L) $
and $ U'_{L'} := U'\cap \mathrm{Def}(X',L') $ satisfy assertion (1), and
every point $ s \in U_{L} $ satisfies assertion (2) because the restricted
isomorphism satisfies assertion (3) of Proposition \ref{birational_kuranishi}.
Let $s$
be a point of $\mathrm{Def}(X,L)$ such that
$\dim H^{1,1}(\mathfrak{X}_{s},\mathbb{Q})=1$,
where $\mathfrak{X}_{s}$ is the
fibre at $s$. We will prove that $ s \in U $.
Since $ U $ is dense and open,
there exists
a small disk $ S $ in $ \mathrm{Def}(X) $
such that $ s \in S $ and
$ S\setminus \{s\} \subset U$.
We denote $ \varphi (s) $ by $ s' $ and $ \varphi (S) $ by $ S' $.
If we consider the base changes of $ \mathfrak{X} \to \mathrm{Def}(X)$ by
$ S $ and of $ \mathfrak{X'}\to \mathrm{Def}(X') $ by
$ S' $,
we obtain the following diagram:
\[
\xymatrix{
\mathfrak{X}_{S\setminus \{s\}} \ar[d] \ar[r]^{\cong}
& \mathfrak{X}'_{S'\setminus \{s'\}} \ar[d] \\
S \setminus \{s\} \ar[r]^{\varphi}_{\cong} & S'\setminus \{s'\}
}
\]
By \cite[Theorem 4.3]{MR1664696}, there exists a birational map
$\mathfrak{X}_{s} \dasharrow \mathfrak{X}'_{s'}$.
By the definition of the Beauville-Bogomolov quadratic form \cite[Page 772]{Beauville}, the function
\[
\mathrm{Def}(X,L) \ni t \mapsto q_{\mathfrak{X}_{t}}(\mathfrak{L}_{t}) \in \mathbb{Z}
\]
is constant, where $ q_{\mathfrak{X}_{t}} $ stands for the Beauville-Bogomolov quadratic form
on $ H^{2}(\mathfrak{X}_{t},\mathbb{C}) $. Thus we have
$ q_{X}(L) = q_{\mathfrak{X}_{s}}(\mathfrak{L}_{s}) \ge 0 $.
By Lemma \ref{Picard_number_one},
$\phi_{s}$ is an isomorphism. This implies that $ s \in U $.
\end{proof}
\section{Proof of Theorem \ref{main_result}}
We begin by giving a numerical criterion for the existence of Lagrangian
fibrations.
\begin{lemma}\label{numerical_property}
Let $X$ be an irreducible symplectic manifold and $L$ a line bundle on $X$.
The linear system $|L|$ defines a
Lagrangian fibration over the projective space
if and only if $L$ is nef and $L$ has the following property:
\begin{equation}\label{global_section}
\dim H^{0}(X,L^{\otimes k}) = \dim H^{0}(\mathbb{P}^{1/2\dim X}, \mathcal{O}(k))
\end{equation}
for every positive integer $k$.
\end{lemma}
\begin{proof}
If $|L|$ defines a Lagrangian fibration over the projective space,
then $L$ is trivially nef and
the dimension of the space of global sections of $L^{\otimes k}$
satisfies equation (\ref{global_section})
by Definition \ref{rational_Lag_def}. Conversely, we prove that $|L|$ defines
a Lagrangian fibration under the assumption that $L$ is nef and $\dim H^{0}(X,L^{\otimes k})$
satisfies equation (\ref{global_section}).
By the assumption, the linear system $|L|$ defines
a rational map $X \dasharrow \mathbb{P}^{1/2\dim X}$. Let $\nu : Y \to X$ be
a resolution of indeterminacy and let $g : Y \to \mathbb{P}^{1/2\dim X}$
be the induced morphism.
Comparing $\nu^{*}L$ and $g^{*}\mathcal{O}(1)$, we have
\[
\nu^{*}L \cong g^{*}\mathcal{O}(1)+F,
\]
where $F$ is a $\nu$-exceptional divisor. Multiplying both sides by $k$,
we have
\[
k\nu^{*}L \cong g^{*}\mathcal{O}(k) + kF.
\]
If $F \ne 0$, then the above isomorphism and the equality (\ref{global_section})
imply that $L$ is not semiample.
By the assumption,
$L$ is nef.
If $ L^{\dim X} \ne 0 $, then $ L $ is also big and
$ \dim H^{0}(X,L^{\otimes k}) $ does not satisfy the
equation (\ref{global_section}). Thus $ L^{\dim X} = 0 $.
By \cite[Theorem 4.7]{MR946237}, we obtain
\[
c_{X}q_{X}(kL + \alpha)^{1/2\dim X} = (kL + \alpha)^{\dim X},
\]
where $ q_{X} $ is the Beauville-Bogomolov quadratic form
on $ H^{2}(X,\mathbb{C}) $,
$ c_{X} $ is a positive constant depending only on $ X $ and $ \alpha $
is a K\"ahler class in $ H^{1,1}(X,\mathbb{C}) $.
Comparing the degrees in $ k $ of both sides of the above equation,
we obtain that the numerical Kodaira dimension $ \nu (L) $ is $ (1/2)\dim X $.
By the equation (\ref{global_section}), the Kodaira dimension
$ \kappa (L) $ is also equal to $ (1/2)\dim X $.
Since $ K_{X} $ is trivial, the equality $ \nu (L) = \kappa (L) $ implies
that $ L $ is semiample by
\cite[Theorem 6.1]{kawamata-freeness}
and \cite[Theorem 1.1]{fujino-freeness}.
Thus $F = 0$ and
the linear system $|L|$ defines a morphism
$
f: X \to \mathbb{P}^{1/2\dim X} $.
The linear system $|lL|$ defines
a morphism
\[
f_{l} : X \to \mathrm{Proj}\bigoplus_{m\ge 0} H^{0}(X,L^{\otimes ml}) \cong
\mathbb{P}^{\binom{n+l}{n} - 1},
\]
where $ n = (1/2)\dim X $.
This morphism has
connected fibres if $ l $ is sufficiently large. By the above
expression,
$ f_{l} $ is the composition of $f$ and the Veronese embedding. This implies that
$f$ has connected fibres.
\end{proof}
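The dimension count in equation (\ref{global_section}) rests on the elementary fact that $ \dim H^{0}(\mathbb{P}^{n},\mathcal{O}(k)) $ equals the number of degree-$k$ monomials in the $n+1$ homogeneous coordinates, i.e. $ \binom{n+k}{n} $, which also underlies the Veronese embedding used above. As a small illustration outside the proof (the function name below is ours, not from the text), this count can be checked numerically:

```python
from itertools import combinations_with_replacement
from math import comb

def h0_projective_space(n, k):
    """Dimension of H^0(P^n, O(k)): count the degree-k monomials in the
    n+1 homogeneous coordinates, i.e. multisets of size k from n+1 symbols."""
    return sum(1 for _ in combinations_with_replacement(range(n + 1), k))

# The monomial count agrees with the binomial coefficient C(n+k, n).
for n in range(1, 5):
    for k in range(1, 6):
        assert h0_projective_space(n, k) == comb(n + k, n)
```

For instance, on $\mathbb{P}^2$ the cubics form a $\binom{5}{2} = 10$-dimensional space.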
We introduce a criterion which ensures local freeness of
direct images of line bundles.
\begin{lemma}\label{nakayama_freeness}
Let $ \pi : \mathfrak{X}_{S} \to S $ be a smooth morphism over
a small disk $ S $ with the reference point $ o $.
We also let $ \mathfrak{L}_{S} $ be a line bundle on
$ \mathfrak{X}_{S} $.
Assume that $ \mathfrak{X}_{S} $ and $ \mathfrak{L}_{S} $
satisfy the following conditions.
\begin{itemize}
\item[(1)] The canonical bundle of every fibre is trivial.
\item[(2)] For every point $ t $ of $ S \setminus \{o\}$,
the restriction $ \mathfrak{L}_{S,t} $ of $ \mathfrak{L}_{S} $
to
the fibre $ \mathfrak{X}_{S,t} $ at $ t $
is semiample.
\item[(3)] The restriction $ \mathfrak{L}_{S,o} $
of $ \mathfrak{L}_{S} $ to
the fibre $ \mathfrak{X}_{S,o} $ at $ o $
is nef.
\end{itemize}
Then the higher direct images $ R^{q}\pi_{*}\mathfrak{L}^{\otimes k}_{S} $
are locally free for all $ q \ge 0 $ and $ k \ge 1$. Moreover
the morphisms
\begin{equation}\label{base_change}
R^{q}\pi_{*}\mathfrak{L}^{\otimes k}_{S}\otimes k(o) \to
H^{q}(\mathfrak{X}_{S,o}, \mathfrak{L}_{S,o}^{\otimes k})
\end{equation}
are isomorphisms for all $ q \ge 0 $ and $ k \ge 1 $.
\end{lemma}
\begin{proof}
The first part is a special case of \cite[Corollary 3.14]{nakayama-freeness}.
By the criterion of cohomological flatness in \cite[page 134]{MR0463470},
if $ R^{q}\pi_{*}\mathfrak{L}^{\otimes k}_{S}$ is locally free and the morphism
(\ref{base_change}) is an isomorphism, then the morphism
\[
R^{q-1}\pi_{*}\mathfrak{L}^{\otimes k}_{S}\otimes k(o) \to
H^{q-1}(\mathfrak{X}_{S,o}, \mathfrak{L}_{S,o}^{\otimes k})
\]
is also an isomorphism for every $ k \ge 1 $. If $ q \ge \dim \mathfrak{X}_{S,o} + 1 $,
both sides of the morphism (\ref{base_change}) are zero.
By descending induction on $ q $, we obtain the last part of the assertions of the Lemma.
\end{proof}
We need one more lemma to prove Theorem \ref{main_result}.
\begin{lemma}\label{nefness_of_non_projective}
Let $ X $ be an irreducible symplectic manifold. Assume that
$ X $ is not projective. Then any line bundle $ L $ with
$ q_{X}(L) = 0$ is nef, where $ q_{X} $ is the
Beauville-Bogomolov quadratic form on $ H^{2}(X,\mathbb{C}) $.
\end{lemma}
\begin{proof}
Assume that $ L $ is not nef.
By \cite[Theorem 7.1]{MR1664696}, there exists a line bundle $ M $ on $ X $
such that $ q_{X}(M,L) < 0 $ and $ q_{X}(M,\alpha) \ge 0 $ for every K\"ahler
class $ \alpha \in H^{2}(X,\mathbb{C})$.
If we choose a suitable rational number $ \lambda $, we have $ q_{X}(L + \lambda M) > 0 $.
This implies that $ X $ is projective by \cite[Theorem 2]{MR1965365}. That
is a contradiction.
\end{proof}
Now we prove that if $ V_{\mathrm{reg}} \ne \emptyset $ then $ V_{\mathrm{reg}} $ is
a dense open subset of $ \mathrm{Def}(X,L) $.
\begin{lemma}\label{nef_locus}
We use the same notation as in Theorem \ref{main_result}. If
$V_{\mathrm{reg}} \ne \emptyset$, then
$V_{\mathrm{reg}}$ is dense and open in $\mathrm{Def}(X,L)$. Moreover
$ \mathrm{Def}(X,L)\setminus V_{\mathrm{reg}} $ is contained in
a union of countably many hypersurfaces of $ \mathrm{Def}(X,L) $.
\end{lemma}
\begin{proof}
Let $t$ be a point of $V_{\mathrm{reg}}$,
and denote by $\mathfrak{X}_{t}$ the fibre at $ t $
and by $\mathfrak{L}_{t}$
the restriction of $\mathfrak{L}$
to $ \mathfrak{X}_{t} $.
First we prove that $ V_{\mathrm{reg}} $ is open.
By the definition of $ V_{\mathrm{reg}} $ in Theorem \ref{main_result},
the linear system $|\mathfrak{L}_{t}|$
defines a Lagrangian fibration
$f_{t} : \mathfrak{X}_{t} \to \mathbb{P}^{1/2\dim \mathfrak{X}_{t}}$.
Let us consider the Leray spectral sequence
\[
E^{p,q}_{2} = H^{p}(\mathbb{P}^{1/2 \dim \mathfrak{X}_{t}}, R^{q}(f_{t})_{*}f_{t}^{*}\mathcal{O}(1))
\Longrightarrow
E^{p+q}= H^{p+q}(\mathfrak{X}_{t}, f_{t}^{*}\mathcal{O}(1)).
\]
The edge sequence of the above spectral sequence is
\[
0 \to H^{1}(\mathbb{P}^{1/2\dim \mathfrak{X}_{t}}, \mathcal{O}(1)) \to
H^1(\mathfrak{X}_{t},f_{t}^{*}\mathcal{O}(1))
\to H^{0}(\mathbb{P}^{1/2\dim \mathfrak{X}_{t}},
R^{1}(f_{t})_{*}\mathcal{O}_{\mathfrak{X}_{t}}\otimes \mathcal{O}(1)).
\]
By \cite[Theorem 1.2]{higher-direct-image},
\[
R^{1}(f_{t})_{*}\mathcal{O}_{\mathfrak{X}_{t}} \cong
\Omega^{1}_{\mathbb{P}^{1/2 \dim \mathfrak{X}_{t}}}.
\]
Since $H^{1}(\mathbb{P}^{1/2\dim \mathfrak{X}_{t}},
\Omega_{\mathbb{P}^{1/2\dim \mathfrak{X}_{t}}} (1))=0$,
we have $H^{1}(\mathfrak{X}_{t},f_{t}^{*}\mathcal{O}(1))=
H^{1}(\mathfrak{X}_{t},\mathfrak{L}_{t}) =
0$.
By \cite[Corollary III. 3.9]{MR0463470},
$\pi_{*}\mathfrak{L}$ is locally free in an open neighbourhood of $ t $
and the morphism
\[
\pi_{*}\mathfrak{L}\otimes k(t) \to H^{0}(\mathfrak{X}_{t}, \mathfrak{L}_{t})
\]
is bijective.
Combining this with the fact that $ \mathfrak{L}_{t} $ is free, the morphism
\[
\pi^{*}\pi_{*}\mathfrak{L} \to \mathfrak{L}
\]
is surjective
over an open neighborhood of $ t $. This implies that $ V_{\mathrm{reg}} $ is open.
Next we prove that
$ \mathrm{Def}(X,L)\setminus V_{\mathrm{reg}} $ is contained in a union
of countably many hypersurfaces of $ \mathrm{Def}(X,L) $.
Since a union of real codimension two subsets cannot separate two non-empty open
subsets, this implies that $V_{\mathrm{reg}}$ is dense.
Let $t'$ be a point of the closure of $V_{\mathrm{reg}}$ such that
$\dim H^{1,1}(\mathfrak{X}_{t'} ,\mathbb{Q}) = 1$,
where $ \mathfrak{X}_{t'} $ is the fibre at $ t' $. We denote by
$ \mathfrak{L}_{t'} $ the restriction of $ \mathfrak{L} $ to $ \mathfrak{X}_{t'} $.
By the definition of the Beauville-Bogomolov quadratic form in \cite[page 772]{Beauville},
the function
\[
\mathrm{Def}(X,L)\ni t \mapsto q_{\mathfrak{X}_{t}}(\mathfrak{L}_{t}) \in \mathbb{Z}
\]
is a constant function, where $ q_{\mathfrak{X}_{t}} $ is the Beauville-Bogomolov
quadratic form on $ H^{2}(\mathfrak{X}_{t},\mathbb{C}) $.
Thus $ q_{\mathfrak{X}_{t'}} (\mathfrak{L}_{t'}) = 0$.
Since $ H^{1,1}(\mathfrak{X}_{t'},\mathbb{Q}) $ is spanned by $ \mathfrak{L}_{t'} $,
$ \mathfrak{X}_{t'} $ is not projective by \cite[Theorem 2]{MR1965365}. Thus
$ \mathfrak{L}_{t'} $ is nef by Lemma \ref{nefness_of_non_projective}.
We choose a small disk $S$ in $\mathrm{Def}(X,L)$ such that $ t' \in S $ and
$S\setminus \{t'\} \subset V_{\mathrm{reg}}$. We also
consider the restriction
family $\pi_S :\mathfrak{X}_L\times_{\mathrm{Def}(X,L)}S \to S$.
Then $\mathfrak{L}^{\otimes k}_{t''}$ is free for every point $t''$ of $S\setminus \{t'\}$
and $ k \ge 1 $, where $ \mathfrak{L}_{t''} $ is the restriction of $ \mathfrak{L} $ to
the fibre $ \mathfrak{X}_{t''} $ at $ t'' $.
By Lemma \ref{nakayama_freeness},
\( (\pi_{S})_{*} \mathfrak{L}^{\otimes k}
\)
is locally free and
the morphism
\[
(\pi_{S})_{*}\mathfrak{L}^{\otimes k}\otimes k(t') \to
H^{0}(\mathfrak{X}_{t'}, \mathfrak{L}_{t'}^{\otimes k})
\]
is bijective for every $ k \ge 1$. By Lemma \ref{numerical_property},
$t' \in V_{\mathrm{reg}}$.
Let $ W $ be the subset of $ \mathrm{Def}(X,L) $ defined by
\[
W := \{
t \in \mathrm{Def}(X,L); \dim H^{1,1}(\mathfrak{X}_{t},\mathbb{Q}) \ge 2
\}.
\]
By the above argument,
$ \mathrm{Def}(X,L)\setminus V_{\mathrm{reg}} \subset W$.
By \cite[(1.14)]{MR1664696}, $ W $
is contained in a union of countably many hypersurfaces of $ \mathrm{Def}(X,L)$
and we are done.
\end{proof}
We give a proof of Theorem \ref{main_result}.
\begin{proof}[Proof of Theorem \ref{main_result}]
The proof consists of three parts. We start by proving the following claim.
\begin{claim}\label{first_step_of_proof_of_Theorem}
If $ V \ne \emptyset$, then $ V_{\mathrm{reg}} \ne \emptyset $.
\end{claim}
\begin{proof}
We may assume that the reference point $ o $ of $ \mathrm{Def}(X,L) $ is contained
in $ V $.
By Definition \ref{rational_Lag_def},
there exists a birational map
$\phi : X \dasharrow X'$
such that the linear system $|\phi_{*}L|$
defines a Lagrangian fibration $X' \to
\mathbb{P}^{1/2\dim X}$.
Let $ L' := \phi_{*}L $ and
$(\mathfrak{X}'_{L'}, \mathfrak{L}')$ be the universal family of
deformations of
the pair $ (X',L') $.
Let $ V'_{\mathrm{reg}} $ be the locus of Lagrangian fibrations
in $ \mathrm{Def}(X',L') $. Then the reference point $ o' $ of
$ \mathrm{Def}(X',L') $ is contained in $ V'_{\mathrm{reg}} $.
By Lemma \ref{nef_locus}, $V'_{\mathrm{reg}}$ is a dense open set of
$ \mathrm{Def}(X',L') $.
By Proposition \ref{Birationarity-of-Kuranishi},
we also have dense open sets
$U'_{L'}$
of $\mathrm{Def}(X',L')$
and $ U_{L} $ of $ \mathrm{Def}(X,L) $ which satisfy the
following diagram:
\[
\xymatrix{
\mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)} U_{L} \ar[r]^{\cong} \ar[d] &
\mathfrak{X}'_{L'}\times_{\mathrm{Def}(X',L')} U'_{L'} \ar[d] \\
U_{L} \ar[r]_{\cong}^{\varphi} & U'_{L'}
}
\]
By the assertion (2) of Proposition \ref{Birationarity-of-Kuranishi},
$ \varphi^{-1} ( U'_{L'} \cap V'_{\mathrm{reg}} ) \subset V_{\mathrm{reg}}$.
Since $ U'_{L'} \cap V'_{\mathrm{reg}} \ne \emptyset$, we obtain
$ V_{\mathrm{reg}} \ne \emptyset$.
\end{proof}
By Claim \ref{first_step_of_proof_of_Theorem} and Lemma \ref{nef_locus},
$ \mathrm{Def}(X,L) $ coincides with the closure of $ V_{\mathrm{reg}} $
under the assumption that $ V \ne \emptyset $.
\begin{claim}\label{the_second_step_of_the_Proof}
Assume that the reference point $ o $ of $ \mathrm{Def}(X,L) $
is contained in the closure of $ V_{\mathrm{reg}} $ and $ L $ is nef. Then
$ o \in V_{\mathrm{reg}} $.
\end{claim}
\begin{proof}
By the assumption that $ o \in \overline{V}_{\mathrm{reg}} $, we choose a small disk $ S $ in
$ \mathrm{Def}(X,L) $
which has the following properties:
\begin{itemize}
\item[(1)] $ o \in S $.
\item[(2)] $ S\setminus \{o\} \subset V_{\mathrm{reg}} $.
\end{itemize}
Let $ \pi_{S} : \mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)}S \to S $ be
the restriction family
and $ \mathfrak{L}_{S} $ the restriction of the universal bundle $ \mathfrak{L} $ to
$ \mathfrak{X}_{L} \times_{\mathrm{Def}(X,L)} S $. Then
$ \pi_{S} : \mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)} S \to S$
and $ \mathfrak{L}_{S} $
satisfy all assumptions of Lemma \ref{nakayama_freeness}. Hence
$ (\pi_{S})_{*} \mathfrak{L}_{S}^{\otimes k}$ is locally free and
the morphisms
\[
(\pi_{S})_{*} \mathfrak{L}_{S}^{\otimes k}\otimes k(s) \to
H^{0}(\mathfrak{X}_{s}, \mathfrak{L}_{s}^{\otimes k})
\]
are isomorphisms for all $ k \ge 1 $ and all points $ s \in S $.
This implies that the pair $ (X,L) $ satisfies all the assumptions of Lemma \ref{numerical_property}
and we obtain
$ o \in V_{\mathrm{reg}} $.
\end{proof}
\begin{claim}\label{the_third_step_of_the_Proof}
Assume that the reference point $ o $ of $ \mathrm{Def}(X,L) $
is contained in the closure of $ V_{\mathrm{reg}} $ and $ c_{1}(L) $ belongs to
the birational K\"ahler cone. Then
$ o \in V $.
\end{claim}
\begin{proof}
We remark that
$ X $ is projective by Lemma \ref{nefness_of_non_projective}.
We consider the same restriction family
$ \pi : \mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)} S \to S$
as in the proof of Claim \ref{the_second_step_of_the_Proof}.
By the upper semicontinuity of the function
\[
s \in S \mapsto \dim H^{0}(\mathfrak{X}_{s},\mathfrak{L}_{s}),
\]
$ \mathfrak{L}_{o} = L $ is effective.
By \cite[Theorem 1.2]{0907.5311},
there exists a birational map
\( \phi: X \dasharrow X' \) such that $ L' $ is nef, where
$ L' = \phi_{*}L$.
By Proposition \ref{Birationarity-of-Kuranishi},
we have dense open sets
$U'_{L'}$
of $\mathrm{Def}(X',L')$
and $ U_{L} $ of $ \mathrm{Def}(X,L) $ which satisfy the
following diagram:
\[
\xymatrix{
\mathfrak{X}_{L}\times_{\mathrm{Def}(X,L)} U_{L} \ar[r]^{\cong} \ar[d] &
\mathfrak{X}'_{L'}\times_{\mathrm{Def}(X',L')} U'_{L'} \ar[d] \\
U_{L} \ar[r]_{ \cong}^{\varphi} & U'_{L'}
}
\]
Let $ V'_{\mathrm{reg}}$ be the locus of Lagrangian fibrations of
$ \mathrm{Def}(X',L') $. Then $ V'_{\mathrm{reg}} \ne \emptyset$ because
the image $\varphi( V_{\mathrm{reg}} \cap U_{L})$ is contained
in $ V'_{\mathrm{reg}} $ by Proposition \ref{Birationarity-of-Kuranishi} (2).
By Lemma \ref{nef_locus}, $ V'_{\mathrm{reg}} $ is dense. Hence the reference
point $ o' $ of $ \mathrm{Def}(X',L') $ is contained in the closure of
$ V'_{\mathrm{reg}} $. By Claim \ref{the_second_step_of_the_Proof},
$ o' \in V'_{\mathrm{reg}} $. This implies that $ o\in V $.
\end{proof}
We finish the proof of Theorem \ref{main_result}. If $ V \ne \emptyset $, then $ V_{\mathrm{reg}} $
is open and dense in $ \mathrm{Def}(X,L) $ by Claim \ref{first_step_of_proof_of_Theorem} and Lemma \ref{nef_locus}.
Thus every point $ s $ of $ \mathrm{Def}(X,L)_{\mathrm{mov}} $
is contained in the closure of $ V_{\mathrm{reg}} $.
Then $ s \in V $ by Claim \ref{the_third_step_of_the_Proof} and we are done.
\end{proof}
\begin{proof}[Proof of Corollary \ref{application}]
We use the same notation as in Theorem \ref{main_result} and Corollary \ref{application}.
We also define
the subset $ \Lambda $ of $ \mathrm{Def}(X) $ by
\begin{eqnarray*}
\Lambda := \{
s \in \mathrm{Def}(X) ; &\mathfrak{X}_{s}& \mbox{is isomorphic to
the $ n $-pointed
Hilbert scheme of a $ K3 $ surface}\\
&& \mbox{or a generalized Kummer variety }\}.
\end{eqnarray*}
If $ \Lambda \cap \mathrm{Def}(X,L)_{\mathrm{mov}} \ne \emptyset$,
then the restriction of the universal bundle $ \mathfrak{L}_{s} $ to the
fibre $ \mathfrak{X}_{s} $ at $ s \in \Lambda\cap \mathrm{Def}(X,L)_{\mathrm{mov}} $
defines a rational Lagrangian fibration by \cite[Conjecture 1.4, Theorem 1.5]{1301.6968}, \cite[Theorem 1.3]{1301.6584} and
\cite[Proposition 3.36]{1206.4838}.
First we prove that $ \Lambda \cap \mathrm{Def}(X,L) $ is dense in $ \mathrm{Def}(X,L) $.
The subset $ \Lambda $
is dense in $ \mathrm{Def}(X) $ by
\cite[Theorem 1.1, Theorem 4.1]{2012arXiv1201.0031M}. Moreover,
by \cite[Th\'eor\`emes 6, 7]{Beauville}, each irreducible component of
$ \Lambda $
is a smooth hypersurface of $ \mathrm{Def}(X) $. Therefore
$ \Lambda \cap \mathrm{Def}(X,L)$ is dense in $ \mathrm{Def}(X,L) $.
Next we will prove the following Lemma.
\begin{lemma}\label{density_of_movable}
Under the same assumptions and notation of Theorem \ref{main_result},
the closure of $\mathrm{Def}(X,L)\setminus \mathrm{Def}(X,L)_{\mathrm{mov}} $ is a proper closed subset
of $ \mathrm{Def}(X,L) $.
\end{lemma}
\begin{proof}
We derive a contradiction assuming that the closure of
$ \mathrm{Def}(X,L)\setminus \mathrm{Def}(X,L)_{\mathrm{mov}} $ coincides with
$ \mathrm{Def}(X,L) $. For a point $ s \in \mathrm{Def}(X,L)\setminus \mathrm{Def}(X,L)_{\mathrm{mov}} $,
we denote by $ \mathfrak{L}_{s} $ the restriction of the universal bundle $ \mathfrak{L} $
to the fibre $ \mathfrak{X}_{s} $ at $ s $. We will prove that $ \mathfrak{L}_{s} $ is big.
By \cite[Corollary 3.10]{MR1664696},
the interior of the positive cone of an irreducible symplectic manifold is contained
in the effective cone. By the assumption,
$ L $ belongs to the closure of the positive cone of $ X $.
Hence
$ \mathfrak{L}_{s} $ also belongs to the closure of the positive cone of $ \mathfrak{X}_{s} $.
Thus $ \mathfrak{L}_{s} $ is pseudo-effective.
By \cite[Theorem 3.1]{0907.5311},
we obtain the $ q $-Zariski decomposition
\[
\mathfrak{L}_{s} = P_{s} + N_{s}.
\]
By \cite[Theorem 3.1 (I) (iii)]{0907.5311},
\[
0 = q_{\mathfrak{X}_{s}}(\mathfrak{L}_{s}) = q_{\mathfrak{X}_{s}}(P_{s}+N_{s}) =
q_{\mathfrak{X}_{s}}(P_{s}) + q_{\mathfrak{X}_{s}}(N_{s}).
\]
Since $ \mathfrak{L}_{s} $ does not belong to the birational K\"ahler cone of $ \mathfrak{X}_{s} $,
$ N_{s} \ne 0 $. This implies that $ q_{\mathfrak{X}_{s}}(P_{s}) > 0 $.
We deduce $ P_{s} $ is big by \cite[Corollary 3.10]{MR1664696}.
Hence $ \mathfrak{L}_{s} $ is also big.
Let us consider the following function
\[
\mathrm{Def}(X,L) \ni s \mapsto h_{n}(s):= \dim H^{0}(\mathfrak{X}_{s},\mathfrak{L}_{s}^{n}) \in \mathbb{Z}.
\]
By the upper semicontinuity of $ h_{n}(s) $, there exists an open set $ W $ of $ \mathrm{Def}(X,L) $
such that for every point $ s $ of $ W $,
\[
h_{n}(t) \ge h_{n}(s)
\]
for all points $ t \in \mathrm{Def}(X,L) $. By the assumption
that the closure of $ \mathrm{Def}(X,L)\setminus \mathrm{Def}(X,L)_{\mathrm{mov}} $
coincides with $ \mathrm{Def}(X,L) $,
$ W \cap (\mathrm{Def}(X,L)\setminus \mathrm{Def}(X,L)_{\mathrm{mov}}) \ne \emptyset$.
In the first half of the proof of this Lemma, we proved that
$ \mathfrak{L}_{s} $ is big for every point
$ s\in \mathrm{Def}(X,L)\setminus \mathrm{Def}(X,L)_{\mathrm{mov}}$.
This implies that $ \mathfrak{L}_{t} $ is big for every point $ t $ of $ \mathrm{Def}(X,L) $.
Let $ t $ be a point of $ \mathrm{Def}(X,L) $ such
that $ \dim H^{1,1}(\mathfrak{X}_{t},\mathbb{Q}) = 1 $. Then
$ \mathfrak{L}_{t} $
is nef by Lemma \ref{nefness_of_non_projective}.
Since $ \mathfrak{L}_{t} $ is nef and big, the higher cohomologies of $ \mathfrak{L}_{t} $ vanish.
By the Riemann-Roch formula in \cite[(1.11)]{MR1664696},
we obtain
\[
\dim H^{0}(\mathfrak{X}_{t},\mathfrak{L}_{t}^{m}) =
\sum_{j=0}^{\dim \mathfrak{X}_{t}/2}\frac{a_{j}}{(2j)!}m^{2j}q_{\mathfrak{X}_{t}}(\mathfrak{L}_{t})^{j}
= \chi (\mathcal{O}_{\mathfrak{X}_{t}}),
\]
because $ q_{\mathfrak{X}_{t}}(\mathfrak{L}_{t}) = q_{X}(L)= 0 $.
This contradicts the bigness of $ \mathfrak{L}_{t} $.
\end{proof}
We finish the proof of Corollary \ref{application}.
If $ \Lambda\cap \mathrm{Def}(X,L)_{\mathrm{mov}} = \emptyset$, then
$ \mathrm{Def}(X,L) \setminus \mathrm{Def}(X,L)_{\mathrm{mov}} $ contains a dense subset
of $ \mathrm{Def}(X,L) $.
This contradicts Lemma \ref{density_of_movable}.
\end{proof}
\end{document}
\begin{document}
\title{Implementing the sine transform of fermionic modes as a tensor network}
\author{Hannes Epple} \email{[email protected]}
\author{Pascal Fries}
\author{Haye Hinrichsen}
\affiliation{
Fakult\"at f\"ur Physik und Astronomie, \\
Julius-Maximilians Universit\"at W\"urzburg, \\
Am Hubland, 97074 W\"urzburg, Germany
}
\date{\today}
\begin{abstract}
Based on the algebraic theory of signal processing, we recursively decompose the discrete sine
transform of first kind {(${\opn{DST-I}}$)} into small orthogonal block operations. Using a diagrammatic
language, we then second-quantize this decomposition to construct a tensor network implementing
the {${\opn{DST-I}}$} for fermionic modes on a lattice. The complexity of the resulting network is shown to
scale as $\frac 54 n \log n$ (not considering swap gates), where $n$ is the number of lattice
sites. Our method provides a systematic approach of generalizing Ferris' spectral tensor network
for non-trivial boundary conditions.
\end{abstract}
\maketitle
\section{Introduction}
The study of tensor networks is currently a topic of growing interest both in condensed matter
physics and quantum computation. In condensed matter physics, tensor networks can be used to model
the entanglement structure of quantum states and are therefore well suited for the study of ground
states of strongly correlated systems \cite{Eisert:13,Orus:14a,Orus:14b}. Specifically, the
formulation of the \emph{multiscale entanglement renormalization ansatz} (MERA) in this framework
has proven to be very fruitful \cite{Vidal:08,Evenbly:09,Carboz:09,Orus:14b} and can be understood as
a lattice realization of the \emph{holographic principle} \cite{Susskind:95,Swingle:12},
contributing to a better understanding of gauge-gravity type dualities \cite{Maldacena:98}. In
quantum computation, on the other hand, tensor networks are known as \emph{quantum circuits}
\cite{Nielsen:00} and provide a natural framework for the decomposition of a large, complicated
unitary operation into a sequence of small local building blocks -- a simple example being the
factorization of the \emph{quantum Fourier transform} \cite{Nielsen:00} into a sequence of Hadamard
and phase gates, which underlies the efficiency of Shor's algorithm \cite{Shor:97}.
A unification of both subjects---namely, the simulation of strongly correlated quantum systems by a
quantum circuit---was already proposed by Feynman in 1982 \cite{Feynman:82} and has recently been
realized by a circuit \cite{Verstraete:09} for implementing the \emph{exact} dynamics of free
fermions on a circle. More recently, Ferris refined this idea \cite{Ferris:14}, giving it the
interpretation of a \emph{spectral tensor network}, which implements the Fourier transform of
\emph{fermionic modes}, hence diagonalizes the Hamiltonian of free fermions
\cite{Lieb:61}. Additionally, the geometry of this network generalizes to higher dimensions, while
always staying easily contractible, such that it can be used for the \emph{classical simulation} of
phase transitions in more than one dimension -- a feature that is absent in the MERA
\cite{Evenbly:14a,Evenbly:14b}.
The starting point for the construction of Ferris' spectral tensor network is a recursive algorithm
for the discrete Fourier transform (DFT), widely known as the fast Fourier transform (FFT)
\cite{Cooley:65}. The network is then chosen such that it implements the FFT in the one-particle
sector of the theory \cite{Ferris:14}. This means that \emph{every particle} is transformed
separately, subject to the constraint of their respective indistinguishability. Note that this is
entirely different from the quantum Fourier transform, which calculates a single DFT on the space
spanned by all particles.
A closely related transformation, on which we will focus in the present work, is the discrete sine
transform of the first kind {(${\opn{DST-I}}$)}. This is a variant of the DFT for vanishing Dirichlet boundary
conditions and thus indispensable for diagonalization procedures of systems on an open chain
\cite{Lieb:61}. However, the boundary conditions break the convenient cyclic
translational symmetry vital to the decomposition of the DFT, meaning that the original FFT
algorithm can no longer be used.
The \emph{algebraic theory of signal processing} \cite{Moura:08a,Moura:08b} tackles this problem by
constructing an algebraic framework for spectral transformations. Based on these notions, decompositions
of whole classes of transformations were derived in \cite{Moura:03,Moura:08c}, including the {${\opn{DST-I}}$}
as a special case. However, in contrast to the simple case of the standard FFT, the resulting
decomposition almost entirely consists of non-unitary elementary transformations. While this is
acceptable for numerical recipes restricted to a single particle, we run into problems when
translating the decomposition into a quantum circuit, implementing the transformation for
indistinguishable particles.
In this paper we unitarize a recursive algorithm for the {${\opn{DST-I}}$}, originally given in \cite{Moura:08c},
and show how to second-quantize the resulting algorithm, obtaining a spectral tensor network for a
fermionic chain with open ends. To this end we reorganize the network in a non-trivial way. To keep
the formal ballast as small as possible, the present paper explains most of the steps in terms of
diagrams, showing selected parts of the decomposition and an explicit example of a complete algorithm.
\section{The fast Fourier transform and diagram notation}
The conventional discrete Fourier transform (DFT) converts a sequence of $n$ complex numbers
$x_0, x_1, \ldots, x_{n-1}$ into another sequence of complex numbers
$\tilde x_0, \tilde x_1 ,\ldots, \tilde x_{n-1}$ by means of the linear transformation
\[
\tilde x_a := \sum_{b=0}^{n-1} \mathrm{e}^{-2 \pi \mathrm{i} a b / n} x_b.
\]
Defining the phase factor $\omega_n := \mathrm{e}^{-2\pi \mathrm{i} / n}$, this transformation can
be written as
\[
\tilde x = {\opn{DFT}}_n x
\qtext{with}
{\opn{DFT}}_n^{ab} = \omega_n^{a \cdot b}.
\]
Here and in the following, we use the convention that the lower index $n$ denotes the dimension of
the respective vector space, while upper indices $0 \leq a,b < n$ label components of the matrix.
For even $n=2m$, the recursion formula
\begin{subequations}
\begin{multline} \label{eq:fft-formula}
{\opn{DFT}}_{2m} = L_{2m} ({\opn{DFT}}_m \oplus {\opn{DFT}}_m) \\
\times \parens{{\mathbb{I}}_m \oplus \opn{diag}(\omega_{2m}^0,\ldots, \omega_{2m}^{m-1})}
({\opn{DFT}}_2 \otimes {\mathbb{I}}_m)
\end{multline}
with
\begin{gather*}
{\mathbb{I}}_m^{ab} := \delta_{ab}, \quad
L_{2m} := \bar{L}_{2m-1} \oplus 1, \quad \text{and} \\
\bar{L}_{2m-1}^{ab} :=
\begin{cases}
1 &\text{iff} \quad b \equiv am \mod (2m - 1) \\
0 &\text{otherwise}
\end{cases}
\end{gather*}
is known as the radix-2 fast Fourier transform (FFT) \cite{Cooley:65}. It factorizes the ${\opn{DFT}}_{2m}$
into three pieces: The rightmost factor ${\opn{DFT}}_2 \otimes {\mathbb{I}}_m$ is a basis transformation,
consisting of $m$ copies of the matrix
\[
F := {\opn{DFT}}_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\]
acting on the pairs of components $(\ell, m+\ell)$, $\ell=0,\ldots,m-1$. The next factor,
\[
{\opn{DFT}}_m \oplus \parens{{\opn{DFT}}_m \opn{diag}(\omega_{2m}^0,\ldots, \omega_{2m}^{m-1})},
\]
is a direct sum of two ${\opn{DFT}}_m$, one of which is modified by multiplication with a diagonal
matrix of so called \emph{twiddle factors}. The last factor $L_{2m}$ just permutes the basis
vectors, sorting them into odd and even portions.
Diagrammatically, the recursion relation \eqref{eq:fft-formula} for $n=8$ can be represented as
\begin{equation} \label{eq:fft-diagram}
{\opn{DFT}}_8 \; =
\begin{gathered}
\includegraphics[scale=1.0]{DFT_8.pdf}
\end{gathered}
\end{equation}
\end{subequations}
running from bottom to top. Here, blocks represent matrices, the ingoing lines are columns, while
the outgoing lines are rows. The composition rules for such diagrams are
\begin{equation}
\label{eq:single-particle-diagram-rules}
\begin{gathered}
\includegraphics[scale=1.0]{single_particle_rules.pdf}
\end{gathered}
\end{equation}
i.e., vertical composition couples rows of the lower block to columns of the upper block and
therefore represents ordinary matrix multiplication, while horizontal composition gives rise to the
direct sum of matrices \footnote{Note that this is in strict contrast to the usual diagrammatics of
tensor networks and stems from the fact that we are not considering multiple particles yet. This
will change in \cref{sec:diagram-quantization}, where horizontal composition will give rise to the
tensor product. We thank the referee for suggesting to emphasize this point.}. Since the matrix
$F$ is applied to non-neighboring lines, we do not draw it as a box but rather use the shorthand
notation
\begin{equation}
\label{eq:shorthand-hline}
\begin{gathered}
\includegraphics[scale=1.0]{shorthand_hline.pdf}
\end{gathered}
\end{equation}
with two bullets to indicate on which lines the matrix acts. Note that lines can be crossed
arbitrarily, allowing us to move the remaining unaffected lines freely.
Now, if $n$ is a power of $2$, we can use \cref{eq:fft-formula}, anchored at ${\opn{DFT}}_2 = F$, to
implement the ${\opn{DFT}}_n$ using only $2 \times 2$-matrices. In the above diagrammatic language, this
amounts to having no blocks act on more than two lines. By construction, the number of blocks in
such a diagram then scales as $n \log_2 n$, which therefore serves as an upper bound for the
computational complexity of the DFT.
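As a quick sanity check of this recursive scheme, the radix-2 even/odd split can be sketched in a few lines of code (NumPy assumed; the sign convention $\opn{DFT}_n^{ab}=e^{-2\pi\mathrm{i}ab/n}$ and the helper names are ours):

```python
import numpy as np

def dft_matrix(n):
    # Unnormalized DFT matrix with entries omega^(a*b), omega = exp(-2*pi*i/n).
    a, b = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * a * b / n)

def fft(x):
    # Radix-2 Cooley-Tukey: sort into even and odd portions, recurse, recombine.
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(8)
assert np.allclose(fft(x), dft_matrix(8) @ x)
```

Counting the $2\times2$ recombination steps in this sketch reproduces the $n\log_2 n$ scaling quoted above.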
In order to interpret the DFT as a change of orthonormal bases in a single-particle Hilbert space,
it needs to be unitary. This is achieved by the normalization
\[
\widehat{\opn{DFT}}_n := \frac1{\sqrt n} {\opn{DFT}}_n
\]
and, as can be easily checked, \cref{eq:fft-formula,eq:fft-diagram} remain valid under the
replacement ${\opn{DFT}} \to \widehat{\opn{DFT}}$, provided we also normalize $F$ by
\begin{equation}
\label{eq:def-f-hat}
F \to \hat{F} := \frac 1{\sqrt 2} F.
\end{equation}
We thus obtain a decomposition of the large unitary $\smash{\widehat{\opn{DFT}}_n}$ into small unitary building
blocks, which again carry the interpretation of basis transformations in the single particle Hilbert
space.
The process of second quantization \cite{Berezin:66} then maps the $\smash{\widehat{\opn{DFT}}_n}$ to a basis
change in the many-particle Hilbert space. Remarkably, the FFT scheme \eqref{eq:fft-diagram} is
still valid in the many-particle case, provided that we slightly change the composition rules
\eqref{eq:single-particle-diagram-rules} and replace the blocks by their respective second
quantizations. We will discuss second quantization of Fourier transforms in more detail later in
\cref{sec:diagram-quantization}.
\section{Decomposing the discrete sine transform}
The discrete sine transform (DST) is a real linear transformation which captures essentially the
imaginary part of the DFT. As it expands the data in sinusoids, this transformation is particularly
useful for discrete systems with Dirichlet boundary conditions. However, depending on the
implementation of the boundary conditions and the respective periodic continuation, one
distinguishes various types of DSTs, which are usually labeled by Roman numerals from I to VIII
\cite{Moura:08b}.
We focus on the {${\opn{DST-I}}$} here, which corresponds to a periodic continuation that is odd around both
$x_{-1}$ and $x_n$. Thus, we have $x_{-1}=x_n=0$, making the {${\opn{DST-I}}$} suitable for systems with open
boundary conditions. As we will see below, for a suitable recursion scheme we also need the
{${\opn{DST-III}}$}. The two DSTs are defined by the matrices
\begin{align}
\label{eq:dst-1-matrix}
{\opn{DST-I}}_{n}^{ab} &= \sin \frac{(a+1)(b+1)\pi}{n+1} \quad \text{and} \\
\label{eq:dst-3-matrix}
{\opn{DST-III}}_{n}^{ab} &= \sin \frac{(a+\frac{1}{2})(b+1)\pi}{n},
\end{align}
with $0 \leq a,b < n$, which are non-orthogonal. An advantage of considering the {${\opn{DST-I}}$} is that
we only need the {${\opn{DST-III}}$} and the {${\opn{DST-I}}$} itself in the corresponding recursion. In principle,
it is possible to consider other types of sine (or cosine) transforms and orthogonalize them in a
similar way as described in the next section, though the corresponding recursions may be more
complicated. Specifically, the {${\opn{DST-I}}$} of odd size $n=2m-1$ can be expressed recursively in terms
of {${\opn{DST-I}}$} and {${\opn{DST-III}}$} as \cite{Moura:08c}
\begin{subequations}
\begin{multline}
\label{eq:dst-1-alg}
{\opn{DST-I}}_{2m-1} = \\
\bar{L}_{2m-1} \big( {\opn{DST-III}}_m \oplus {\opn{DST-I}}_{m-1} \big) B_{2m-1}.
\end{multline}
Here, the rightmost factor is a base change matrix
\begin{equation*}
B_{2m-1} :=
\begin{pmatrix}
{\mathbb{I}}_{m-1} & & \phantom{-}{\mathbb{J}}_{m-1} \\
& 1 & \\
{\mathbb{I}}_{m-1} & & -{\mathbb{J}}_{m-1}
\end{pmatrix}
\end{equation*}
with the $(m-1)\times(m-1)$ identity matrix ${\mathbb{I}}_{m-1}$ and
\begin{equation*}
{\mathbb{J}}_{m-1} :=
\begin{pmatrix}
& & 1 \\
& {\mathpalette\reflectbox{$\ddots$}\relax} & \\
1 & &
\end{pmatrix}
.
\end{equation*}
Note that $B_{2m-1}$ can be split into an interaction part and a permutation,
\[
B_{2m-1} =
\begin{pmatrix}
{\mathbb{I}}_{m-1} & & \phantom{-}{\mathbb{I}}_{m-1} \\
& 1 & \\
{\mathbb{I}}_{m-1} & & -{\mathbb{I}}_{m-1}
\end{pmatrix}
\cdot
\begin{pmatrix}
{\mathbb{I}}_{m-1} & & \\
& 1 & \\
& & {\mathbb{J}}_{m-1}
\end{pmatrix},
\]
where the interaction part acts on pairs of components $(\ell,m+\ell)$, $\ell=0,\dots,m-2$ via $F$.
The middle factor is just a direct sum of a {${\opn{DST-III}}$} and a {${\opn{DST-I}}$} of smaller sizes $m$ and
$m-1$, respectively, while the leftmost factor is a permutation defined in the previous section.
In the diagrammatic notation established before, the recursion relation \eqref{eq:dst-1-alg} for
$n=7$ can be represented as
\begin{equation}
\label{eq:diag-dst-1-alg}
{\opn{DST-I}}_7 =
\begin{gathered}
\includegraphics[scale=1.0]{DST_1_7.pdf}
\end{gathered}.
\end{equation}
\end{subequations}
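The recursion \eqref{eq:dst-1-alg} can be verified numerically without knowing the permutation $\bar{L}_{2m-1}$ explicitly: solving for the leftmost factor must yield a permutation matrix. A sketch (NumPy assumed; helper names ours):

```python
import numpy as np

def dst1(n):
    # DST-I matrix: entries sin((a+1)(b+1)*pi/(n+1)).
    a, b = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sin((a + 1) * (b + 1) * np.pi / (n + 1))

def dst3(n):
    # DST-III matrix: entries sin((a+1/2)(b+1)*pi/n).
    a, b = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sin((a + 0.5) * (b + 1) * np.pi / n)

def base_change(m):
    # B_{2m-1}: sums/differences of components mirrored around the middle one.
    I, J = np.eye(m - 1), np.fliplr(np.eye(m - 1))
    out = np.zeros((2 * m - 1, 2 * m - 1))
    out[: m - 1, : m - 1] = I
    out[: m - 1, m:] = J
    out[m - 1, m - 1] = 1.0
    out[m:, : m - 1] = I
    out[m:, m:] = -J
    return out

m = 8
middle = np.zeros((2 * m - 1, 2 * m - 1))
middle[:m, :m] = dst3(m)          # DST-III of size m
middle[m:, m:] = dst1(m - 1)      # direct sum with DST-I of size m-1
# The leftover factor must be the permutation L-bar: entries 0/1, one per row.
P = dst1(2 * m - 1) @ np.linalg.inv(middle @ base_change(m))
assert np.allclose(P @ P.T, np.eye(2 * m - 1))
assert np.allclose(P * (1 - P), 0, atol=1e-8)
```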
The {${\opn{DST-III}}$} appearing in this recursion can be further reduced by means of another
recursion relation. For the {${\opn{DST-III}}$} of size $n=2m$ we use
\begin{subequations}
\begin{multline}
\label{eq:dst-3-alg}
{\opn{DST-III}}_{2m} = K_{2m} \big( {\opn{DST-III}}_m \oplus {\opn{DST-III}}_m \big) \\
\times \big( X_m^- \oplus X_m^+ \big) \big( {\opn{DST-III}}'_2 \otimes \, {\mathbb{I}}_m \big) \bar{B}_{2m},
\end{multline}
which is a special case of a more general decomposition for $n=qm$ \cite{Moura:08c}. Note that in
contrast to the binary recursion \eqref{eq:dst-1-alg}, the above formula recurses on two copies of
the {${\opn{DST-III}}$} itself. Again, the rightmost factor is a base change matrix
\begin{equation*}
\bar{B}_{2m} :=
\begin{pmatrix}
{\mathbb{I}}_{m-1} & & {\mathbb{J}}_{m-1} \\
& 1 & \\
& & {\mathbb{I}}_{m-1}
\end{pmatrix}
\oplus 1
,
\end{equation*}
which leaves the components $m-1$ and $2m-1$ unaffected and applies the matrix
\begin{equation*}
A :=
\begin{pmatrix}
1 & 1 \\
0 & 1
\end{pmatrix}
\end{equation*}
to pairs of components $(\ell,2m-2-\ell)$, $\ell=0,\dots,m-2$. Because of its tensor product
structure, the next factor ${\opn{DST-III}}'_2 \otimes \, {\mathbb{I}}_m$ applies a scaled {${\opn{DST-III}}$} to pairs of
components $(\ell,m+\ell)$, $\ell=0,\dots,m-1$, given by the matrix
\[
C := {\opn{DST-III}}'_2 = F \opn{diag}(1,\sqrt{2}).
\]
The third factor in \cref{eq:dst-3-alg} is a direct sum of two matrices $X_m^\pm$, which will be
discussed later on. The next factor consists of a direct sum of two smaller {${\opn{DST-III}}$s}, while the
leftmost factor is again a permutation, defined by
\[
K_{2m} := ( {\mathbb{I}}_2 \oplus {\mathbb{J}}_2 \oplus {\mathbb{I}}_2 \oplus {\mathbb{J}}_2 \oplus \dots ) L_{2m}.
\]
We can represent the recursion relation \eqref{eq:dst-3-alg} diagrammatically as
\begin{equation}
\label{eq:diag-dst-3-alg}
{\opn{DST-III}}_8 =
\begin{gathered}
\includegraphics[scale=1.0]{DST_3_8.pdf}
\end{gathered}.
\end{equation}
\end{subequations}
The remaining parts to consider are the matrices $X_m^\pm$. For even size $m$, they are given by
\begin{equation}
\label{eq:x-matrix}
X_m^\pm :=
\begin{pmatrix}
c_m^1 & & & & \hspace{-.2cm} s_m^{\pm (m-1)} & 0 \\
& \hspace{-.2cm} \ddots & & \hspace{-.4cm} {\mathpalette\reflectbox{$\ddots$}\relax} & & \\
& & \hspace{-.2cm} c_m^{m/2}+s_m^{\pm m/2} & & & \vdots \\
& \hspace{-.2cm} {\mathpalette\reflectbox{$\ddots$}\relax} & & \hspace{-.4cm} \ddots & & \\
s_m^{\pm1} & & & & \hspace{-.2cm} c_m^{m-1} & 0 \\
0 & & \hspace{-.3cm} \cdots & & 0 & c_m^m \\
\end{pmatrix}
\end{equation}
with
\[
c_m^\ell := \cos \frac{\ell\pi}{4m} \qtext{and}
s_m^\ell := \sin \frac{\ell\pi}{4m}.
\]
Clearly, these matrices can also be decomposed into blocks acting only on pairs of components
$(\ell-1,m-1-\ell)$, $\ell=1,\dots,m/2-1$ via
\begin{equation}
\label{eq:y-matrix}
Y^\pm_{\ell,m} :=
\begin{pmatrix}
c_m^\ell & s_m^{\pm (m-\ell)} \\
s_m^{\pm \ell} & c_m^{m-\ell}
\end{pmatrix}
.
\end{equation}
Further, in the center and the right lower corner of the matrix \eqref{eq:x-matrix}, we have the
factors
\[
y^\pm := c_m^{m/2}+s_m^{\pm m/2} = \sqrt{1 \pm \frac{1}{\sqrt{2}}}
\qtext{and} c_m^m = \frac{1}{\sqrt{2}},
\]
acting on the components $m/2-1$ and $m-1$. For example, the matrix $X_m^\pm$ of size $m=8$ can be
drawn in our diagrammatic notation as
\begin{equation*}
X_8^\pm =
\begin{gathered}
\includegraphics[scale=1.0]{X_8.pdf}
\end{gathered}
,
\end{equation*}
where the indices $\ell$ and $m$ of the $2 \times 2$ matrices $Y_{\ell,m}^\pm$ are given on the left
and right of the corresponding symbol, respectively.
Putting all this together, the two mutually dependent recursion relations \eqref{eq:dst-1-alg} and
\eqref{eq:dst-3-alg} together with the closing conditions
\[
{\opn{DST-I}}_1 = 1 \qtext{and}
{\opn{DST-III}}_2 = F \opn{diag} \Big( \frac{1}{\sqrt{2}},1 \Big)
\]
allow us to calculate the {${\opn{DST-I}}$} of size $n=2^k-1$ using only $2 \times 2$ matrices. However, in
the existing formulation, all the occurring matrices, except for permutations, are still
non-orthogonal. This is a problem in the corresponding quantum version, since non-orthogonal
building blocks cannot be interpreted as elementary changes of orthonormal bases. Fortunately, it
is possible to reformulate the recursion relations in an orthogonal way, as will be shown in the
next section.
\section{Orthogonalization of the recursion relations}
\begin{figure*}
\caption{Orthogonalization of the recursion relation \eqref{eq:dst-1-alg}.}
\label{fig:dst-1-orthogonal}
\end{figure*}
We now show step by step how to obtain an orthogonal version of the recursion relations
\eqref{eq:dst-1-alg} and \eqref{eq:dst-3-alg} discussed in the previous section. For convenience,
we label all orthogonal matrices by a hat symbol.
As stated before, the DSTs defined by \cref{eq:dst-1-matrix,eq:dst-3-matrix} are non-orthogonal.
However, all DSTs can be made orthogonal by a suitable scaling of rows and columns. Let us start
with the {${\opn{DST-I}}$}. Multiplying the corresponding matrix \eqref{eq:dst-1-matrix} by its transpose, we
find that the correct scaling is given by
\begin{subequations} \label{eq:dst-1-ortho}
\begin{equation}
\widehat{\opn{DST-I}}_n := {\opn{DST-I}}_n \sqrt{\frac{2}{n+1}},
\end{equation}
meaning that all matrix entries are rescaled identically. Representing multiplication of a
component by a small diamond, this relation may be drawn as
\begin{equation}
\begin{gathered}
\includegraphics[scale=1.0]{DST_1o_def.pdf}
\end{gathered}
\end{equation}
\end{subequations}
in our graphical notation. Now we can try to recast the recursion \eqref{eq:dst-1-alg} in terms of
orthogonal matrices. The procedure is shown in \cref{fig:dst-1-orthogonal} for a {${\opn{DST-I}}$} of size
${n=2^{k+1}-1}$.
The diagram on the left is just \cref{eq:diag-dst-1-alg} with the proper scaling factors according
to \cref{eq:dst-1-ortho} added at the bottom. In the first step, we split up the factors for each
component and move factors of $1/\sqrt{2^{k-1}}$ above the matrices~$F$. This is possible since we
have the same factor at every component. As no matrix $F$ is acting on the component in the middle,
we can also move the remaining factor of $1/\sqrt{2}$ for this component further up, as indicated by
the arrow. Now, all we have to do is to absorb all factors into the matrices directly above them.
We obtain orthogonal matrices $\hat{F}$ as defined in \cref{eq:def-f-hat}. Further, the factors
absorbed into the {${\opn{DST-I}}$} are just the ones from \cref{eq:dst-1-ortho}, so we obtain an orthogonal
version of this transform. Since we know that the whole transform in \cref{fig:dst-1-orthogonal} is
orthogonal and all other building blocks are orthogonal, too, we can conclude that the {${\opn{DST-III}}$}
together with the factors in the dotted box must also be orthogonal. Defining the orthogonalized
{${\opn{DST-III}}$} as
\begin{subequations}
\label{eq:dst-3-ortho}
\begin{equation}
\widehat{\opn{DST-III}}_n :=
{\opn{DST-III}}_n \sqrt{\frac 2n} \opn{diag} \Big( 1,\dots,1,\frac{1}{\sqrt{2}} \Big)
\end{equation}
or diagrammatically as
\begin{equation}
\begin{gathered}
\includegraphics[scale=1.0]{DST_3o_def.pdf}
\end{gathered},
\end{equation}
\end{subequations}
we obtain the right diagram of \cref{fig:dst-1-orthogonal}, where all occurring matrices are
orthogonal.
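Both scalings can be confirmed numerically; a minimal sketch (NumPy assumed, helper names ours):

```python
import numpy as np

def dst1(n):
    # DST-I matrix: entries sin((a+1)(b+1)*pi/(n+1)).
    a, b = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sin((a + 1) * (b + 1) * np.pi / (n + 1))

def dst3(n):
    # DST-III matrix: entries sin((a+1/2)(b+1)*pi/n).
    a, b = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sin((a + 0.5) * (b + 1) * np.pi / n)

# DST-I: uniform rescaling by sqrt(2/(n+1)) makes the matrix orthogonal.
for n in (3, 7, 15, 31):
    D = np.sqrt(2 / (n + 1)) * dst1(n)
    assert np.allclose(D @ D.T, np.eye(n))

# DST-III: the last column additionally needs a factor 1/sqrt(2).
for n in (2, 4, 8, 16):
    scale = np.ones(n)
    scale[-1] = 1 / np.sqrt(2)
    D = np.sqrt(2 / n) * dst3(n) @ np.diag(scale)
    assert np.allclose(D.T @ D, np.eye(n))
```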
\begin{figure*}
\caption{Orthogonalization of the recursion relation \eqref{eq:dst-3-alg}.}
\label{fig:dst-3-orthogonal}
\end{figure*}
Knowing the orthogonal version of the {${\opn{DST-III}}$}, we now turn to the corresponding recursion
relation \eqref{eq:dst-3-alg}. In a first step, we replace the {${\opn{DST-III}}$} by its orthogonal
version, as shown in \cref{fig:dst-3-orthogonal} for a {${\opn{DST-III}}$} of size $n=2^{k+1}$. We
start with the recursion relation from \cref{eq:diag-dst-3-alg} with scaling factors according to
\cref{eq:dst-3-ortho} added at the bottom of the diagram. In the first step, we use
\cref{eq:dst-3-ortho} to replace the two smaller DSTs by their orthogonal versions and the
corresponding inverse scaling factors. Further, we express $C = F \opn{diag} (1, \sqrt 2)$ in terms
of $\hat{F}$ using \cref{eq:def-f-hat}. All factors except the ones in the dotted boxes cancel out.
In the second step, we absorb those into the matrices below, obtaining the matrices
\begin{gather}
{X'}_{\!\!m}^{\pm} := \opn{diag} (1,\dots,1,\sqrt{2})\, X_m^\pm \label{eq:x'-def}\\
\text{and} \quad A' := \opn{diag} (1,\sqrt{2})\, A, \nonumber
\end{gather}
which unfortunately are still non-orthogonal. The part of the recursion relation that remains to be
orthogonalized is indicated by a dashed box in the right diagram of \cref{fig:dst-3-orthogonal}.
Let us have a look at ${X'}_{\!\!m}^\pm$ first. The diagonal matrix in \cref{eq:x'-def} just cancels
the factor $c_m^m=1/\sqrt{2}$ in the lower right corner of $X_m^\pm$. Further, the occurring
matrices $Y_{\ell,m}^\pm$, defined in \cref{eq:y-matrix}, can be decomposed as
\[
Y_{\ell,m}^\pm = \hat{R}_{\pm\ell,m} Z^\pm,
\]
with the rotation matrix
\[
\hat{R}_{\ell,m} :=
\begin{pmatrix}
\cos \frac{\ell\pi}{4m} & -\sin \frac{\ell\pi}{4m} \\
\sin \frac{\ell\pi}{4m} & \phantom{-}\cos \frac{\ell\pi}{4m}
\end{pmatrix}
\]
and the non-orthogonal matrix
\[
Z^\pm :=
\begin{pmatrix}
1 & \pm1/\sqrt{2} \\
0 & \phantom{\pm}1/\sqrt{2}
\end{pmatrix}.
\]
Thus, the diagrammatic representation of ${X'}_{\!\!m}^{\pm}$, drawn below for $m=8$, is
\begin{equation*}
{X'}_{\!\!8}^\pm =
\begin{gathered}
\includegraphics[scale=1.0]{X_prime_8.pdf}
\end{gathered}.
\end{equation*}
Using this representation, the operations in the dashed box in the right diagram of
\cref{fig:dst-3-orthogonal} can be redrawn as shown in the left diagram of
\cref{fig:dst-3-orthogonal-detail}. We have doubled the number of components in this diagram to
show all relevant structures and used the abbreviation $m=2^k$. Again, the non-orthogonal part is
highlighted by a dotted box.
In order to obtain the expression made up from orthogonal matrices given by the right diagram in
\cref{fig:dst-3-orthogonal-detail}, we observe that the part in the dotted box can be decomposed
into three sorts of blocks for any size $n=2^{k+1}$. On the pair of components $(2^k-1,2^{k+1}-1)$,
we have a trivial block
\begin{equation*}
\begin{gathered}
\includegraphics[scale=1.0]{DST_3_orthogonal_block_1.pdf}
\end{gathered},
\end{equation*}
which is already orthogonal. Further, the pair of components
$(2^{k-1}-1,2^k+2^{k-1}-1)$ is coupled by
\begin{equation*}
\begin{gathered}
\includegraphics[scale=1.0]{DST_3_orthogonal_block_2.pdf}
\end{gathered},
\end{equation*}
where the reformulation on the right hand side only contains matrices which are orthogonal. All
other operations decompose into blocks acting on four components
\[
(\ell,2^k-2-\ell,2^k+\ell,2^{k+1}-2-\ell)
\]
with $\ell=0,\dots,2^{k-1}-2$. These blocks may be orthogonalized by the relation
\begin{equation*}
\begin{gathered}
\includegraphics[scale=1.0]{DST_3_orthogonal_block_3.pdf}
\end{gathered},
\end{equation*}
where we introduced the matrix
\begin{equation*}
\hat{G} := \hat{F} {\mathbb{J}}_2 = \frac{1}{\sqrt{2}}
\begin{pmatrix}
\phantom{-}1 & 1 \\
-1 & 1
\end{pmatrix}
.
\end{equation*}
Replacing the operations in the dashed box in the right diagram of \cref{fig:dst-3-orthogonal} by
the right hand side of \cref{fig:dst-3-orthogonal-detail} (in the appropriate size), we obtain a
recursion relation for the {${\opn{DST-I}}ii$} that only contains orthogonal operations.
\begin{figure*}
\caption{Detail for the orthogonalization of the recursion relation \eqref{eq:dst-3-alg}.}
\label{fig:dst-3-orthogonal-detail}
\end{figure*}
This completes all steps that are required to obtain a completely orthogonal recursion relation for
the ${\opn{DST-I}}$ of size $n=2m-1=2^{k+1}-1$. Let us summarize our final results: For the {${\opn{DST-I}}_n$}, we
arrive at a binary recursion, which can be represented diagrammatically, e.g.\ for ${n=7}$, as
\begin{subequations} \label{eq:dst-1o-alg}
\begin{equation}
\widehat{\opn{DST-I}}_7 =
\begin{gathered}
\includegraphics[scale=1.0]{DST_1o_7.pdf}
\end{gathered}.
\end{equation}
For arbitrary odd size $n=2m-1$, this can also be expressed as
\begin{multline}
\widehat{\opn{DST-I}}_{2m-1} = \\
\bar{L}_{2m-1} \big( \widehat{\opn{DST-III}}_m \oplus \widehat{\opn{DST-I}}_{m-1} \big) \hat{M}_{2m-1}.
\end{multline}
\end{subequations}
Here, we defined the matrix
\[
\hat{M}_{2m-1} = \frac{1}{\sqrt{2}}
\begin{pmatrix}
{\mathbb{I}}_{m-1} & & \phantom{-}{\mathbb{J}}_{m-1} \\
& \sqrt{2} & \\
{\mathbb{I}}_{m-1} & & -{\mathbb{J}}_{m-1}
\end{pmatrix},
\]
which is just the orthogonalized version of the matrix $B_{2m-1}$ from the non-orthogonal relation
\eqref{eq:dst-1-alg}.
For the {${\opn{DST-III}}_n$}, we found the diagram
\begin{subequations} \label{eq:dst-3o-alg}
\begin{equation}
\widehat{\opn{DST-III}}_8 =
\begin{gathered}
\includegraphics[scale=1.0]{DST_3o_8.pdf}
\end{gathered},
\end{equation}
which is given here as an example for $n=8$. Again, this has a general expression for $n=2m$, given
by
\begin{multline}
\widehat{\opn{DST-III}}_{2m} = K_{2m} \big( \widehat{\opn{DST-III}}_m \oplus \widehat{\opn{DST-III}}_m \big) \\
\times \big( \hat{Q}_m^- \oplus \hat{Q}_m^+ \big) \big( \widehat{\opn{DST-III}}_2 \otimes \, {\mathbb{I}}_m \big)
\hat{N}_{2m}.
\end{multline}
\end{subequations}
The newly introduced matrix $\hat{Q}_m^\pm$ acts on pairs of components $(\ell-1,m-1-\ell)$,
$\ell=1,\dots,m/2-1$ via $\hat{R}_{\pm\ell,m}$ and therefore has a structure similar to that of $X_m^\pm$
defined in \cref{eq:x-matrix}. The other new matrix $\hat{N}_{2m}$ couples the pair of components
$(m/2-1,3m/2-1)$ by $\hat{R}_{-m,2m}$ and the pairs of components $(m/2-1+\ell,3m/2-1-\ell)$,
$\ell=1,\dots,m/2-1$ via $\hat{G}$.
The corresponding closing conditions for \cref{eq:dst-1o-alg,eq:dst-3o-alg} are
\begin{equation} \label{eq:closing-conditions}
\widehat{\opn{DST-I}}_1 = 1 \qtext{and} \widehat{\opn{DST-III}}_2 = \hat{F},
\end{equation}
such that the complete recursion leads to a well-defined network of operations. As an example, we
drew the complete network for the $\smash{\widehat{\opn{DST-I}}_{31}}$ in \cref{fig:dst-1-complete}.
\begin{figure*}
\caption{Diagrammatic representation of the $\widehat{\opn{DST-I}}_{31}$.}
\label{fig:dst-1-complete}
\end{figure*}
To calculate the computational complexity of the derived algorithm, denote by $\mathcal{C}^{\mathrm{I}}_n$ and $\mathcal{C}^{\mathrm{III}}_n$
the number of elementary operations in the $\widehat{\opn{DST-I}}_n$ and $\widehat{\opn{DST-III}}_n$, respectively, neglecting
permutations. From \cref{eq:dst-1o-alg}, we find
\[
\mathcal{C}^{\mathrm{I}}_{2^{k+1}-1} = \mathcal{C}^{\mathrm{I}}_{2^k-1} + \mathcal{C}^{\mathrm{III}}_{2^k} + (2^k - 1),
\]
where the last summand is the number of $\hat F$ matrices in each recursion step. On the other hand,
\cref{eq:dst-3o-alg} implies
\[
\mathcal{C}^{\mathrm{III}}_{2^{k+1}} = 2 \mathcal{C}^{\mathrm{III}}_{2^k} + (2^k-1) + 2^k + (2^{k-1}-1),
\]
the additional summands being the number of $\hat R_{\ell,m}$, $\hat F$, and $\hat G$ matrices per
recursion step, in that order. Evaluating these formulae, anchored at $\mathcal{C}^{\mathrm{I}}_1=0$ and $\mathcal{C}^{\mathrm{III}}_2=1$, we
obtain
\begin{subequations} \label{eq:complexity}
\begin{gather}
\mathcal{C}^{\mathrm{I}}_n = \frac 54 n \log_2 (n+1) - \frac {13}4 n + \frac 94 \log_2 (n+1) - \frac 14 \\
\text{and} \quad \mathcal{C}^{\mathrm{III}}_n = \frac 54 n \log_2 n - \frac 74 n + 2.
\end{gather}
\end{subequations}
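The recurrences can be checked against the closed forms \eqref{eq:complexity}; a small sketch in Python (the per-step gate counts $2^k-1$, $2^k$, and $2^{k-1}-1$ for $\hat R$, $\hat F$, and $\hat G$ follow from the structure of $\hat Q_m^\pm$, the tensor factor, and $\hat N_{2m}$):

```python
import math

C1, C3 = {1: 0}, {2: 1}  # closing conditions
for k in range(1, 11):
    m = 2**k
    # Per recursion step: (m - 1) rotations, m copies of F-hat, (m/2 - 1) G-hats.
    C3[2 * m] = 2 * C3[m] + (m - 1) + m + (m // 2 - 1)
    C1[2 * m - 1] = C1[m - 1] + C3[m] + (m - 1)

for n, c in C1.items():
    k = math.log2(n + 1)
    assert abs(c - (1.25 * n * k - 3.25 * n + 2.25 * k - 0.25)) < 1e-6
for n, c in C3.items():
    assert abs(c - (1.25 * n * math.log2(n) - 1.75 * n + 2)) < 1e-6
```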
\section{Second quantization of diagrams}
\label{sec:diagram-quantization}
We will now outline a general method for performing second quantization of diagrammatic algorithms.
We shall discuss fermions only -- all results, however, extend naturally to the bosonic case
\cite{Ferris:14,Derezinski:13}.
Let us start with a basis transformation $U$ on the single-particle Hilbert space. Its matrix
representation is given by
\begin{equation} \label{eq:single-particle-matrix}
U \ket{a} = \sum_b \ket{b} \braopket{b}{U}{a} =: \sum_b U^{ba} \ket{b}.
\end{equation}
We can extend $U$ to a basis transformation $\Gamma_{\!U}$ of the multi-particle Hilbert space, by
having it leave the vacuum $\ket{\Omega}$ invariant and act linearly on creation operators
$c^\dagger$ \cite{Bogoliubov:58,Derezinski:13}. This means that in the occupation number basis
\[
\ket{k} = (c^\dagger_0)^{k_0} \!\cdots (c^\dagger_{n-1})^{k_{n-1}} \ket{\Omega},
\quad k_a \in \braces{0,1},
\]
we define the \emph{second quantization} $\Gamma_{\!U}$ of $U$ by
\begin{equation} \label{eq:multi-particle-transformation}
\Gamma_{\!U} \ket{k} :=
\big(c^\dagger_{U\ket{0}}\big)^{k_0} \!\cdots \big(c^\dagger_{U\ket{n-1}}\big)^{k_{n-1}}
\ket{\Omega},
\end{equation}
where the transformed mode
\begin{equation} \label{eq:mode-transformation}
c^\dagger_{U\ket{a}} := \sum_b U^{ba} c^\dagger_b
\end{equation}
creates a fermion in the state $U\ket{a}$.
\begin{figure*}
\caption{Second quantization of the FFT diagram \eqref{eq:fft-diagram}.}
\label{fig:fft-2nd-quantization}
\end{figure*}
Similarly to \cref{eq:single-particle-matrix}, we then have
\[
\Gamma_{\!U} \ket{k} = \sum_l \Gamma_{\!U}^{lk} \ket{l},
\]
where \cref{eq:multi-particle-transformation,eq:mode-transformation} can be used to obtain
\begin{multline} \label{eq:full-vacuum-expectation-value}
\Gamma_{\!U}^{lk} = \Big\langle \Omega \Big| c^{l_{n-1}}_{n-1} \cdots c^{l_0}_0
\Big(\sum_{b_0} U^{b_0 0} c^\dagger_{b_0} \Big)^{\!k_0} \cdots \\
\times \Big(\!\sum_{b_{n-1}} U^{b_{n-1} n-1} c^\dagger_{b_{n-1}} \Big)^{\!k_{n-1}}
\Big| \Omega \Big\rangle.
\end{multline}
Denote now by
\begin{align*}
L &= (L_0,\ldots, L_{p_l-1}) := \braces{a | l_a = 1} \qtext{and} \\
K &= (K_0,\ldots, K_{p_k-1}) := \braces{a | k_a = 1}
\end{align*}
the (ordered) lists of occupied modes in $\ket{l}$ and $\ket{k}$, respectively, where $p_l$ and
$p_k$ are their total numbers. Obviously, we have $\Gamma_{\!U}^{lk} = 0$ for $p_l \neq p_k$, since
then the modes in \cref{eq:full-vacuum-expectation-value} do not match up in pairs. Let us thus
consider the case where $p_k = p_l =: p$. Here, we have
\begin{multline*}
\Gamma_{\!U}^{lk} = \!\!\!\!\!\!\sum_{\smash{b_0,\ldots,b_{p-1}}}\!\!\!\!\!
U^{b_0 K_0} \!\cdots U^{b_{p-1} K_{p-1}} \\
\times \braopket{\!\Omega}{c^{}_{L_{p-1}} \!\!\!\!\!\cdots c^{}_{L_0} c^\dagger_{b_0} \!\!\cdots
c^\dagger_{b_{p-1}}}{\Omega\!},
\end{multline*}
where the expectation value on the right-hand side again vanishes unless $(b_0,\ldots,b_{p-1})$ equals $L$ as a
set. Since the summand also changes sign under odd permutations of $(b_0,\ldots,b_{p-1})$, we obtain
\begin{equation} \label{eq:multi-particle-matrix}
\begin{split}
\Gamma_{\!U}^{lk} &= \sum_{\pi \in S_p} {\opn{sgn}}(\pi) U^{L_{\pi(0)} K_0} \!\cdots U^{L_{\pi(p-1)} K_{p-1}} \\
&= \det \parens{(U^{L_b K_a})_{0 \leq a,b < p}},
\end{split}
\end{equation}
which shall serve as a recipe for the calculation of $\Gamma_{\!U}$.
The Slater determinant expression \eqref{eq:multi-particle-matrix} enables us to check the
functorial relations \cite{Derezinski:13}
\begin{equation} \label{eq:functoriality}
\Gamma_{\!UU'} = \Gamma_{\!U}\Gamma_{\!U'} \qtext{and}
\Gamma_{\!U \oplus U'} = \Gamma_{\!U} \otimes \Gamma_{\!U'}.
\end{equation}
Also, we see that second quantization preserves unitarity and orthogonality, since
\[
\Gamma_{\!U^\dagger} = \Gamma_{\!U}^\dagger, \quad \Gamma_{\!U^{\mathrm T}} =
\Gamma_{\!U}^{\mathrm T}, \qtext{and} \Gamma_{\!{\mathbb{I}}_n} = {\mathbb{I}}_{2^n}.
\]
We can now use \cref{eq:functoriality} to second-quantize the diagrams from the preceding
sections. This amounts to replacing all single particle matrices $U$ by their respective second
quantizations $\Gamma_U$. Each vertical line then represents an occupation number, hence the
vertical composition of blocks as on the left hand side of \cref{eq:single-particle-diagram-rules}
turns into a tensor contraction
\[
\sum_{k_3, k_4} \Gamma_{\!B}^{k_1 k_2, k_3 k_4} \Gamma_{\!A}^{k_3 k_4, k_5 k_6}
= (\Gamma_{\!B} \Gamma_{\!A})^{k_1 k_2, k_5 k_6},
\]
while the right hand side turns into the tensor product
\[
\Gamma_{\!C}^{k_1 k_2, k_3 k_4} \Gamma_{\!D}^{k_5 k_6, k_7 k_8}
= (\Gamma_{\!C} \otimes \Gamma_{\!D})^{k_1 k_2 k_5 k_6, k_3 k_4 k_7 k_8}.
\]
Note that this does not affect the \emph{structure} of the diagrams at all, but just amounts to a
reinterpretation of what they are supposed to mean.
Since only scalars $\alpha$ and $2 \times 2$-matrices $U$ appear in the diagrams we use, we can
explicitly evaluate \cref{eq:multi-particle-matrix} to obtain
\begin{gather}
\Gamma_{\!\alpha} =
\begin{pmatrix}
1 &0 \\
0 &\alpha
\end{pmatrix} \qtext{and} \label{eq:scalar-extension} \\
\Gamma_{\!U} =
\begin{pmatrix}
1 & & &\\
&U^{11} &U^{10} &\\
&U^{01} &U^{00} &\\
& & &\det U
\end{pmatrix} \label{eq:matrix-extension}
\end{gather}
in the Kronecker basis $\braces{\ket{00},\ket{01},\ket{10},\ket{11}}$.
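The $4 \times 4$ formula \eqref{eq:matrix-extension} itself already satisfies the functorial product rule and preserves orthogonality, which can be checked directly (NumPy assumed, helper names ours):

```python
import numpy as np

def gamma2(U):
    # 4x4 second quantization of a 2x2 matrix in the basis {|00>,|01>,|10>,|11>}.
    G = np.zeros((4, 4))
    G[0, 0] = 1.0
    G[1:3, 1:3] = [[U[1, 1], U[1, 0]], [U[0, 1], U[0, 0]]]
    G[3, 3] = np.linalg.det(U)
    return G

rng = np.random.default_rng(0)
U, V = rng.standard_normal((2, 2, 2))
assert np.allclose(gamma2(U @ V), gamma2(U) @ gamma2(V))

# An orthogonal single-particle block stays orthogonal after second quantization.
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
G = gamma2(Q)
assert np.allclose(G @ G.T, np.eye(4))
```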
Finally, we have to give a meaning to the crossings of vertical lines, as in
\cref{eq:shorthand-hline}. For a single particle, these are represented by the swap matrix
\[
S := \begin{pmatrix} 0 &1 \\ 1 &0 \end{pmatrix},
\]
hence in the multi-particle case, we can apply \cref{eq:matrix-extension} to find
\begin{equation} \label{eq:fermion-swap}
\Gamma_{\!S} = \begin{pmatrix} 1&&&\\ &0&1&\\ &1&0&\\
&&&-1 \end{pmatrix},
\end{equation}
which correctly picks up a negative sign if two fermions switch places.
As an example, consider the normalized version of the FFT diagram in \cref{eq:fft-diagram}. We can
recursively break it down, so that we only need the second quantization of $\hat F$ from
\cref{eq:def-f-hat}, given by the matrix
\[
\Gamma_{\!\!\hat{F}} =
\begin{pmatrix}
1 & & &\\
&- 1/\sqrt{2} &1/\sqrt{2} &\\
&1/\sqrt{2} &1/\sqrt{2} &\\
& & &-1
\end{pmatrix}
\]
and local terms as in \cref{eq:scalar-extension}. The entire process can be found in
\cref{fig:fft-2nd-quantization}, resulting in a diagram that exactly reproduces the \emph{spectral
tensor network} of \cite{Ferris:14,Verstraete:09}. Correspondingly, we can use the same rules on
the unitary decompositions \eqref{eq:dst-1o-alg} and \eqref{eq:dst-3o-alg} derived in the preceding
section, to obtain a quantum circuit implementing the {${\opn{DST-I}}$} for fermions, thus generalizing the
spectral tensor network to open boundary conditions. Therefore, \cref{eq:complexity} gives an upper
bound for the \emph{quantum computational complexity} \cite{Nielsen:00} of the {${\opn{DST-I}}$} and {${\opn{DST-III}}$}
of fermionic modes. Note, however, that we omitted any permutations in the calculation leading to
\cref{eq:complexity}. While this is fine if we deal with just a single particle, exchange statistics
need to be incorporated for multiple fermions, hence permutations need to be decomposed into gates
of type \eqref{eq:fermion-swap}. This leads to an additional $\sim \frac 76 n^2$ gates, where the
coefficient arises from the most naive decomposition and could likely be reduced.
\section{Discussion}
The present work is based on previous work by P\"uschel and Moura, who gave a purely algebraic
framework for the study of spectral transformations for various types of boundary conditions
\cite{Moura:08a,Moura:08b} and their recursive decomposition into sparse matrices
\cite{Moura:03,Moura:08c}. Although this work is remarkable and very general, its practical use for
quantum mechanics is limited in so far as the building blocks of the resulting decomposition are not
unitary -- a crucial property of any transformation which is to be interpreted as a change of
orthonormal bases in a Hilbert space. Since unitary variants of these Fourier transforms exist, it
is natural to expect that the corresponding recursion relations can likewise be
reformulated in terms of unitary building blocks, but carrying out such a unitarization can be
quite cumbersome.
In this paper, we have used a diagrammatic language to demonstrate this unitarization explicitly for the
example of a discrete sine transformation. This led to a decomposition whose sparse matrices
are direct sums of unitary elementary operations, hence well suited for parallelization. We expect
that other generalized Fourier transformations can be made unitary in a similar way, although then
the technicalities are probably even more involved.
The fact that the resulting recursion relations consist of block-diagonal unitaries is particularly
important when turning to many particles in the context of second quantization. We have shown that
such a second-quantized version of a sine transformation for fermions can be obtained by replacing
the unitary building blocks of the diagram by appropriate second-quantized counterparts and suitable
modifications for line crossings, to implement the particle statistics \cite{Orus:14b}. Doing so, we
arrive at a tensor network, whose structure is essentially the same as that of the original diagram,
a circumstance that has already been noted in the context of the ordinary DFT on a circle by Ferris
\cite{Ferris:14}.
Another great advantage of unitary building blocks becomes apparent when calculating local
expectation values with the resulting tensor network: as in the case of the MERA, a causal structure
emerges \cite{Vidal:08} and parts of the network that are not causally connected to the considered
region can be contracted to trivial operations. Therefore, the effective complexity of the
computation of one- and two-point functions drops even further, scaling just linearly with the
system size.
The network thus makes it possible to numerically study boundary effects in one-dimensional free-fermion
models, which are exactly solvable by means of a spectral transformation. The Jordan--Wigner
transformation extends the applicability to spin-$\frac{1}{2}$ models, such as the XY model
\cite{Lieb:61}. Applying the variational method to the tensor coefficients, while fixing the overall
topology, the proposed network could also provide a starting point for the simulation of a wider
class of models with non-cyclic boundary conditions. Furthermore, since Ferris' similar construction
for the DFT naturally generalizes to higher dimensions, we expect that this also holds for the
${\opn{DST-I}}$.
Finally, the observation that second quantization preserves the structure of diagrams, which can be
seen as some kind of \emph{Bohr correspondence principle} on a higher level, seems to be very
fundamental and is linked to the underlying category theory, as we shall discuss in a forthcoming
paper.
\end{document} | math | 42,190 |
\begin{document}
\title{Thin position for a connected sum of small knots}
\authors{Yo'av Rieck\\Eric Sedgwick}
\address{Department of Mathematics, Nara Women's University\\
Kitauoya Nishimachi, Nara 630-8506, Japan, and\\
Department of Mathematics, University of Arkansas\\
Fayetteville, AR 72701, USA}
\email{[email protected]}
\secondaddress{DePaul University,
Department of Computer Science\\243 S. Wabash Ave. - Suite 401,
Chicago, IL 60604, USA}
\secondemail{[email protected]}
\asciiaddress{Department of Mathematics, Nara Women's University\\
Kitauoya Nishimachi, Nara 630-8506, Japan, and\\
Department of Mathematics, University of Arkansas\\
Fayetteville, AR 72701, USA\and\\DePaul University,
Department of Computer Science\\243 S. Wabash Ave. - Suite 401,
Chicago, IL 60604, USA}
\asciiemail{[email protected], [email protected]}
\begin{abstract}
We show that every thin position for a connected sum of small
knots is obtained in an obvious way: place each summand in thin
position so that no two summands intersect the same level surface,
then connect the lowest minimum of each summand to the highest
maximum of the adjacent summand below. See Figure
\ref{fig:thin-position}.
\end{abstract}
\asciiabstract{
We show that every thin position for a connected sum of small
knots is obtained in an obvious way: place each summand in thin
position so that no two summands intersect the same level surface,
then connect the lowest minimum of each summand to the highest
maximum of the adjacent summand below.}
\primaryclass{57M25} \secondaryclass{57M27}
\keywords{3--manifold, connected sum of knots, thin position}
\asciikeywords{3-manifold, connected sum of knots, thin position}
\maketitle
\section{Introduction}
In \cite{gabai} Gabai introduced {\it thin position}, a tool since
used for many results on knots and graphs in $S^3$: Gabai's proof
of Property $R$ \cite{gabai}, Gordon and Luecke's \cite{g-l} proof
of the Knot Complement Theorem, and Scharlemann and Thompson's
\cite{s-t} proof of Waldhausen's Theorem that irreducible Heegaard
splittings of $S^3$ are unique, among others. Although thin
position has been used in more general settings (eg,\ by Rieck
\cite{rieck:topology} and Rieck and Sedgwick \cite{rs.daisy} to
study the behavior of Heegaard Surfaces under Dehn surgery) it has
been most fruitful when applied to the study of knots in $S^3$. We
examine thin position in $S^3$ for a connected sum of knots.
\begin{figure}
\caption{Thin position for a connected
sum of small knots is a stack of the summands.}
\label{fig:thin-position}
\end{figure}
Given a connected sum of knots, say $K=K_1\#...\#K_n$, there are
obvious candidates for thin position of $K$. Choose an ordering
of the summands $K_1, ..., K_n$ and put each in a thin position
so that successive pairs are separated by level $2$-spheres. Form
the connected sum $K$ by connecting the lowest minimum of $K_i$ to
the highest maximum of $K_{i+1}$. Each level sphere is punctured
twice and becomes a decomposing annulus which appears in this
presentation as a {\it thin level} surface, a surface with a
minimum of the knot immediately above and a maximum of the knot
immediately below. See Figure \ref{fig:thin-position}. We call
this position for $K$ a {\it stack} on the summands $K_1, ...,
K_n$. The width of any stack on these summands is $\Sigma_{i=1}^n
w(K_i)-2(n-1)$, which gives an upper bound for the width of $K$.
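As a quick numerical illustration of this formula (a Python sketch with a hypothetical helper name; the paper itself works only with embeddings):

```python
def stack_width(widths):
    # width of a stack on summands of the given individual widths:
    # sum of the widths minus 2 for each of the n-1 connecting arcs,
    # as in the formula above
    return sum(widths) - 2 * (len(widths) - 1)

# e.g. three 2-bridge summands, each of width 8 in thin position:
print(stack_width([8, 8, 8]))  # 20
```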
Is this thin position for $K$? In fact, it is not immediately
obvious that thin position for a connected sum has any thin levels
whatsoever; thin position could be bridge position. While
Thompson has shown that small knots do not have thin levels
\cite{thompson:ThinBridge}, the converse is simply false. In
\cite{heath-kobayashi} Heath and Kobayashi give an example of a
knot (originally considered by Morimoto \cite{morimoto}) for which
thin position is bridge position even though its exterior contains
an essential four times punctured sphere. Our first theorem gives
a converse to Thompson's theorem in the case of a connected sum:
\begin{qthm}[\ref{thm:TPneqB}]
Let $K$ be a connected sum of non-trivial knots, $ K\! =\! K_1 \#
K_2$. Then thin position for $K$ is not bridge position for $K$.
\end{qthm}
We then analyze thin levels. For $mp$--small knots (inclusive of
small knots, see Section \ref{sec:prelims} for a definition), we
demonstrate that each thin level is a decomposing annulus. Indeed,
stacks are thin:\eject
\begin{qthm}[\ref{thm:connect-small-knots}]
Let $K=\#_{i=1}^n K_i$ be a connected sum of $mp$--small knots. If
$K$ is in thin position, then there is an ordering of the summands
$K_{i_1}, K_{i_2},...,K_{i_n}$ and a collection of leveled
decomposing annuli $A_{i_1}, A_{i_2}, ..., A_{i_{n-1}}$ so that
the thin levels of the presentation are precisely the annuli
$\{A_{i_j}\}$ occurring in order, where the annulus $A_{i_j}$
separates the connected sum $K_{i_1}\#K_{i_2}\#...\#K_{i_j}$ from
the connected sum $K_{i_{j+1}}\#...\#K_{i_n}$. {\rm (See Figure
\ref{fig:thin-position}.)}
\end{qthm}
For $mp$--small knots this yields the equality $w(K) = \Sigma_{i=1}^n
w(K_i) -2(n-1)$. Finally, since $mp$--small knots in thin position
are also in bridge position \cite{thompson:ThinBridge}, each
component of the stack is in bridge position. After connecting
these knots, the number of maxima is $\Sigma_{i=1}^n b(K_i) -
(n-1)$. Thus, while thin position is not bridge position for these
knots, it realizes the minimal bridge number as given by Schubert
\cite {schubert} (see Schultens \cite{schultens:bridge} for a
modern proof).
\rk{Acknowledgements} The authors would like to thank Marc
Lackenby and Tsuyoshi Kobayashi for helpful conversations, and
RIMS of Kyoto University and Nara Women's University for their
kind hospitality. The first named author was supported by JSPS
grant number P00024.
\section{Preliminaries}
\label{sec:prelims}
Most of the definitions we use are standard; however we need:
\begin{dfn}
\label{dfn:mps} A knot $K \subset S^3$ is called {\it meridionally
planar small} if there is no meridional essential planar surface
in its complement. We use the notation {\it $mp$--small}.
\end{dfn}
The set of knots under consideration is substantial and inclusive
of small knots: by CGLS \cite{cgls} small knots in $S^3$ are
meridionally small, and by definition meridionally small knots are
$mp$--small.
The width of a presentation of a link $L$ (ie,\ the width of an
embedding of a compact 1-manifold into $S^3$) is defined as
follows: let $h:S^3 \to [-\infty,\infty]$ be a height
function that sends one point to $-\infty$, another to $\infty$
and has all other level sets diffeomorphic to $S^2$ (ie,\ a Morse
function with one minimum, one maximum and no other critical
points). Suppose $h|_L$ is Morse (else the width is not defined).
Pick a regular value between every two consecutive critical levels
of $h|_L$ and count the number of times $L$ intersects that level.
The sum of these numbers is the {\it width of the presentation}.
{\it Thin position} for $L$ is any embedding ambient isotopic to
$L$ that minimizes the width in that isotopy class. The {\it
width of the link $L$}, denoted $w(L)$, is the width of a thin
position for $L$.
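To make this definition concrete, the following sketch (Python; the encoding of the critical points of $h|_L$ as a bottom-to-top list is hypothetical, introduced only for illustration) computes the width of a presentation:

```python
def width(events):
    # events: critical points of h restricted to the link, listed from
    # bottom to top; a 'min' adds two strands, a 'max' removes two.
    # The width sums the strand count at one regular level between each
    # pair of consecutive critical levels, as in the definition above.
    strands, total = 0, 0
    for i, e in enumerate(events):
        strands += 2 if e == 'min' else -2
        if i < len(events) - 1:   # no regular level above the top point
            total += strands
    return total

# a 2-bridge presentation (both minima below both maxima):
print(width(['min', 'min', 'max', 'max']))  # 2 + 4 + 2 = 8
```

A presentation with a thin level, such as `['min', 'min', 'max', 'min', 'max', 'max']`, is scored by the same rule.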
A {\it thin level} is a level set of a regular value so that the
first critical point of $h|_L$ above is a minimum and the first
critical point below is a maximum. An embedding has no thin levels
if and only if each maximum of $L$ occurs above each minimum of
$L$. Among all embeddings of $L$ without thin levels, one with a
minimal number of maxima (equivalently minima) is called a {\it
bridge position for $L$}. The {\it bridge number of $L$},
denoted $b(L)$, is the number of maxima (equivalently minima) in a
bridge position for $L$. By pulling maxima up and minima down, it is
easy to see that this is the minimal number of maxima in the isotopy
class of the knot.
Given two knots their connected sum is described schematically in
Figure \ref{fig:connect-sum}. For more detail, see
\cite{burde-zie}.
\begin{figure}
\caption{Connected sum of knots.}
\label{fig:connect-sum}
\end{figure}
We recall that every knot has a unique representation as a
connected sum of prime knots, ie,\ any knot $K$ can be written
uniquely up to reordering as $\#_{i=1}^n K_i$ for some $K_i$, and
$K_i$ contains no meridional essential embedded annuli in its
exterior. Since any meridional essential embedded annulus
decomposes the knot as a connected sum (the factors are not
necessarily prime) we call such annulus a {\it decomposing
annulus}. Note that the uniqueness referred to above is
uniqueness of factors, not of the decomposing annuli. A {\it
split link} on knots $K_1$ and $K_2$ is an embedding of the two
knots that can be separated by an embedded $S^2$.
\section{\!Essential meridional planar surfaces in a
connected sum of $mp$--small knots}
\begin{lem}
\label{thm:EssMPsurfaces} Let $K = \#_{i=1}^n K_i$ be a connected
sum of $mp$--small knots $K_i,\; i=1, \dots ,n$. Then any essential
meridional planar surface in the exterior of $K$ is a decomposing
annulus.
\end{lem}
\begin{proof} By way of contradiction, let $n \geq 1$ be the smallest $n$
for which there is a connected sum with $n$ components that is a
counterexample to the lemma. Let $P$ be a non-annular planar essential
surface in the exterior of $K$. Since each $K_i$ is $mp$--small we have $n>1$, so we can choose an
annulus $A$ that decomposes $K$ as a connected sum $K=K_1 \# K_2$. The
annulus $A$ is properly embedded in $X = S^3 \setminus N(K)$ and divides
$X$ into manifolds $X_1$ and $X_2$, the exterior of $K_1$ and $K_2$
respectively.
Since $A$ and $P$ are essential surfaces in $X$, $P$ may be
isotoped to intersect $A$ essentially. Since both surfaces have
meridional boundary, the intersection will be a (perhaps empty)
collection of simple closed curves, each curve essential in both
surfaces. Among all such positions for $P$, choose one that minimizes
the number of curves in the intersection $A \cap P$. Let $P_i = P
\cap X_i,\; i=1,2$.
Each component of the surface $P_i$ is
properly embedded in the knot exterior $X_i$ and is planar. Since
$P$ is essential and each curve of $P \cap A$ is essential in $P$,
each component of $P_i$ is necessarily incompressible in $X_i,\;
i=1,2$.
In knot exteriors ($X_1$ and $X_2$ in our case) boundary
compression implies compression for any
surface that is not an annulus. If either $P_1$ or $P_2$ contains
a component that is not an annulus, then that component is also
boundary incompressible. This yields an essential surface which is
not an annulus, with its boundary on $A$, a meridional slope. Such
a component would contradict our assumption that $n$ was the least
$n$ so that a connected sum of $n$ components contained an
essential planar surface with meridional slope.
We conclude that each $P_i$ consists entirely of annuli.
This contradicts our assumption that $P$ was a planar surface
that is not an annulus.
\end{proof}
\section{Thin position is not bridge position}
\begin{thm}
\label{thm:TPneqB} Let $K$ be a connected sum of non-trivial
knots, $ K = K_1 \# K_2$. Then any thin position for $K$ is not bridge
position for $K$.
\end{thm}
\begin{proof}
Let $K$ be $K_1 \# K_2$, where $K_1$ and $K_2$ are not unknots. We
will prove the theorem by showing that $K$ has a position that has
lower width than its bridge position. Denote the bridge number of
$K$ by $b$ and that of $K_i$ by $b_i$ ($i=1,2$). By Schubert's
Theorem (\cite{schubert}, see \cite{schultens:bridge} for a modern
proof) $b = b_1 + b_2 - 1$.
For a knot in bridge position with $n$ maxima the width of the
presentation is easily seen to be
\begin{eqnarray}
w(K)&=& 2+4+...+(2n-2)+2n+(2n-2)+... +4+2 \nonumber \\ &=&
2(\Sigma_{i=1}^n 2i) - 2n \nonumber \\ & =& 4\frac{n(n+1)}{2} -
2n \nonumber \\& = & 2n^2. \nonumber
\end{eqnarray}
Applying this to $K$, all we need is to show that $2b^2$ is not
the minimal width for $K$.
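Both the closed form $2n^2$ and the algebraic identity used at the end of this proof can be checked mechanically; the sketch below (Python, illustrative only) builds the list of level counts for a knot in bridge position with $n$ maxima and verifies the identity on sample bridge numbers:

```python
def bridge_width(n):
    # level counts 2, 4, ..., 2n-2, 2n, 2n-2, ..., 4, 2 summed,
    # exactly as in the displayed computation
    levels = list(range(2, 2 * n + 1, 2)) + list(range(2 * n - 2, 0, -2))
    return sum(levels)

assert all(bridge_width(n) == 2 * n * n for n in range(1, 20))

# sanity check of the algebra in this proof, on sample bridge numbers:
for b1 in range(2, 6):
    for b2 in range(2, 6):
        b = b1 + b2 - 1          # Schubert's theorem
        assert 2*b**2 - (2*b1**2 + 2*b2**2 - 2) == 4*(b1-1)*(b2-1)
print(bridge_width(2), bridge_width(3))  # 8 18
```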
Consider a presentation of the split link of $K_1$ and $K_2$ with
$K_1$ above the level 0 and $K_2$ below the level 0. If we put
both knots in bridge position, the width of this presentation is
$2b_1^2 + 2b_2^2$. We obtain a presentation of $K=K_1 \# K_2$ by
connecting the lowest minimum of $K_1$ to the highest maximum of
$K_2$ without introducing new critical points, which lowers the
width by 2. Thus, it suffices to show that $2b^2 - (2b_1^2 +
2b_2^2 - 2) > 0$. Applying Schubert's Theorem we get:
\begin{eqnarray}
2b^2 - (2b_1^2 + 2b_2^2 - 2) & = &2(b_1 + b_2 - 1)^2 - (2b_1^2 +
2b_2^2 -2) \nonumber \\ & = &
[2b_1^2+2b_2^2+4b_1b_2-4(b_1+b_2)+2] - [2b_1^2 + 2b_2^2 - 2]
\nonumber
\\ & = & 4b_1b_2 - 4(b_1+b_2)+4 \nonumber
\\ & = & 4 [b_1(b_2-1) - (b_2-1)] \nonumber
\\ & = & 4 (b_1-1)(b_2-1). \nonumber
\end{eqnarray}
Since $K_1$ and $K_2$ are both non-trivial knots, $b_1 > 1$ and
$b_2 > 1$ so the above product is positive.
\end{proof}
\section{Connected sum of small knots}
\label{sec:connect-small-knots}
\begin{thm}
\label{thm:connect-small-knots} Let $K=\#_{i=1}^n K_i$ be a
connected sum of $mp$--small knots. If $K$ is in thin position, then
there is an ordering of the summands $K_{i_1},
K_{i_2},...,K_{i_n}$ and a collection of decomposing annuli
$A_{i_1}, A_{i_2}, ..., A_{i_{n-1}}$ so that the thin levels of
the presentation are precisely the annuli $\{A_{i_j}\}$ occurring
in order, where the annulus $A_{i_j}$ separates the connected sum
$K_{i_1}\#K_{i_2}\#...\#K_{i_j}$ from the connected sum
$K_{i_{j+1}}\#...\#K_{i_n}$. {\rm(See Figure \ref{fig:thin-position}.)}
\end{thm}
\begin{proof}
We induct on $n$. Our goal is to show that there exists a leveled
decomposing annulus, ie,\ an essential annulus that is $h^{-1}(t)$
for some $t$. The theorem will follow since above (and below) the
leveled annulus we see summands of $K$ in thin position (else we
can reduce the width of $K$). Since the prime summands are
$mp$--small, Thompson's result \cite{thompson:ThinBridge} guarantees
that their thin positions are bridge positions, thus there are no
thin level surfaces other than the decomposing annuli.
Existence of a leveled annulus follows in two steps given by the
lemmata below.
\begin{lem}
\label{lem:arc-on-thin-level} Let $K$ be the connected sum of
$mp$--small knots and $T$ be any thin level in a thin presentation of
$K$. Then a neighborhood of a spanning arc of a decomposing
annulus is isotopic onto $T$ by an isotopy preserving $K$ setwise.
\end{lem}
\begin{figure}
\caption{$D'$ is incompressible.}
\label{fig:CompressT}
\end{figure}
The existence of a thin level is guaranteed by Theorem
\ref{thm:TPneqB}.
\begin{lem}
\label{lem:level-is-annulus} Let $K$ be a connected sum in thin
position, and $T$ a thin level onto which a spanning arc of a
decomposing annulus is isotopic. Then $T$ is a decomposing
annulus.
\end{lem}
\begin{rmk}
Lemma \ref{lem:level-is-annulus} does not require the assumption
that the knots are $mp$--small. This assumption is only used when,
in the proof of Lemma \ref{lem:arc-on-thin-level}, we claim that
the essential surface we find is (some) decomposing annulus.
\end{rmk}
We may assume that there is a thin level $T$ that is inessential,
for otherwise both lemmata follow from Lemma \ref{thm:EssMPsurfaces}.
\begin{proof}[Proof of Lemma \ref{lem:arc-on-thin-level}](Cf\
Thompson \cite{thompson:ThinBridge}.)\qua Let $T=h^{-1}(t_0)$ be a
thin level. We assumed that $T$ is compressible. Let $D$ be a
compressing disk for $T$, and $D'$ a punctured disk that $\partial D$
bounds on $T$. Passing to an innermost compression on $D'$ we
replace $D'$ by the innermost disk and accordingly change $D$.
(Note: we may not assume that $D$ is a compressing disk for $T$ as
it may cross the level $T$ in its interior, see Figure
\ref{fig:CompressT}; we allow $D$ to compress $D'$ from above or
from below).
We show that $D' \cup D$ is essential. If not, it either
compresses or boundary compresses. That $D' \cup D$ does not
compress follows from our innermost choice of $D'$ and the fact
that the boundary of any compressing disk can be isotoped to be
disjoint from $D$. The boundary of any boundary compressing disk
that lies in $D' \cup D$ can be isotoped to lie entirely in $D'$
and is hence a boundary compression for the thin level $T$. A
boundary compression that joins a boundary component of $D'$ to
itself easily implies a compression, which we have already noted
does not occur. And, a boundary compression that connects two
distinct boundary components can never occur on a thin level. Say
that one does, and that it starts above the thin level $T$. By
definition of thin level, the first critical point on the knot
above $T$ is a minimum. The boundary compressing disk can be used
to isotope an arc of the knot to lie below $T$ with just a single
maximum. This either pulls a maximum on the arc below the minimum
lying above $T$, or eliminates both. Either reduces the width of
the presentation. If the arc contains additional critical points
of the knot, they are eliminated in pairs, further reducing the
width.
Thus, by Theorem \ref{thm:EssMPsurfaces}, $D' \cup D$ is a
decomposing annulus. A spanning arc for this annulus
can be isotoped to be disjoint from $D$
and therefore this arc and its neighborhood lie in $D'$, hence in
$T$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:level-is-annulus}]
This lemma follows from a width calculation. By\break Lem\-ma
\ref{lem:arc-on-thin-level} when we cut $K$ open along a leveled
spanning arc on $T$ we get a decomposition of $K$ into $K_+$ and
$K_-$, where $K_+$ and $K_-$ are non-trivial knots.
See Figure \ref{fig:decomposingK}.
\begin{figure}
\caption{Splitting $K$ to form $K_+$ and $K_-$.}
\label{fig:decomposingK}
\end{figure}
\begin{figure}
\caption{Separating $K_+$ and $K_-$ and reconnecting to obtain $K$.}
\label{fig:Thinner}
\end{figure}
The lemma (and theorem) would follow once we show that $K_+ \cap
T$ and $K_- \cap T$ are both empty. Denote the number of times
$K_+$ intersect $T$ by $m_+$, and the number of times $K_-$
intersect $T$ by $m_-$. Denoting $m=m_++m_-$, our goal is to show
that $m=0$. We do this by separating $K_+$ and $K_-$ and
reconnecting to obtain a new presentation of $K$. See Figure
\ref{fig:Thinner}. If $m>0$ we will demonstrate that this
manipulation reduces width.
Since $T$ is a thin level for $K$ the first critical point above
it is a minimum and the first critical point below it is a
maximum. Denote these points by {\tt min} and {\tt max}, see
Figure \ref{fig:decomposingK}. After cutting $K$ open there is
another minimum above $T$, which belongs to $K_+$, and another
maximum below $T$ belonging to $K_-$. (We do not know to which of
the knots {\tt max} and {\tt min} belong.)
We pair each maximum of $K_+$ with a minimum of $K_+$ so that for each
max\-imum-minimum pair the minimum is below the maximum. That such a
correspondence exists follows easily from the fact that above each level
there are no fewer maxima than minima: label the maxima and label the
minima, both in descending order, and pair the $i^{th}$ maximum with the
$i^{th}$ minimum, for example. Similarly we pair up maxima and minima of
$K_-$.
The following are well known and easy facts about the calculus of
width. While isotoping a link, if two maxima change relative
heights the width does not change, and similarly for minima.
However, if we move a maximum above a minimum the width is raised
by four, while isotoping a minimum above a maximum the width is
lowered by four, see Figure \ref{fig:move-rel-extrema}.
\begin{figure}
\caption{Exchanging relative extrema.}
\label{fig:move-rel-extrema}
\end{figure}
We now begin our width calculation: our first move was cutting $K$
and obtaining the split link on $K_+$ and $K_-$. By doing so, we
removed the level $T$ (with $m+2$ punctures) and replaced it by
three critical levels, two of width $m+2$ and one of width $m$.
Thus we raised the width by $2m+2$. Next we isotope $K_+$ rigidly
to lie above $T$ and $K_-$ to lie below it. See Figure
\ref{fig:Thinner}. During this isotopy $K_+$ and $K_-$ may
intersect each other, but by the end of the process we will once
again have the split link on $K_+$ and $K_-$. The isotopy may also
temporarily increase the width as a maximum of $K_+$ is isotoped
past a minimum of $K_-$; however that contribution will be
canceled when the corresponding minimum of $K_+$ is isotoped past
the corresponding maximum of $K_-$. Finally, we will connect the
lowest minimum of $K_+$ to the highest maximum of $K_-$, obtaining
again a presentation of $K$.
For the next definition, see Figure \ref{fig:split-pair}.
\begin{dfn} \label{dfn:split-pair}
Let $X_+,\;x_+,\;Y_-,\;y_-$ be four critical points so that $X_+$
(resp. $Y_-$) is a maximum of $K_+$ (resp. $K_-$) and $x_+$ (resp.
$y_-$) is a minimum of $K_+$ (resp. $K_-$). Assume further that
$X_+$ is paired with $x_+$ and $Y_-$ is paired with $y_-$. Then
the pair $((X_+,x_+),(Y_-,y_-))$ is called a {\it split pair} if
$h(X_+)>h(y_-)$ and $h(x_+) < h(Y_-)$.
\end{dfn}
\begin{figure}
\caption{Two split pairs and a non-split pair.}
\label{fig:split-pair}
\end{figure}
Split pairs are exactly those pairs which lower the width when we
separate $K_+$ and $K_-$: the maximum $X_+$ is already higher than
the minimum $y_-$, so the width is not raised, but the minimum
$x_+$ is below the maximum $Y_-$, so the width is lowered by four.
In Figure \ref{fig:split-pair} one of the knots (either dashed
or solid) is $K_+$ and the other is $K_-$. Figures (a) and (b) are
both split pairs and (c) is not, independent of which knot we
choose as $K_+$ and which $K_-$. Being a split pair is a
geometric property: both maxima have to be above both minima.
Separating a split pair (by moving one knot up and the other down)
reduces the width by four regardless of the direction we move the
knots.
To get a lower bound on the width reduction we achieve, we must
estimate the number of split pairs. There are four possibilities:
\begin{enumerate}
\item $\mbox{\tt min} \in K_+$ and $\mbox{\tt max} \in K_-$
\item $\mbox{\tt min} \in K_-$ and $\mbox{\tt max} \in K_+$
\item $\mbox{\tt min} \in K_+$ and $\mbox{\tt max} \in K_+$
\item $\mbox{\tt min} \in K_-$ and $\mbox{\tt max} \in K_-$
\end{enumerate}
$K_+$ has exactly $\frac{m_+}{2}$ maximum-minimum pairs in which
the maximum is above the level $T$ ($\frac{m_+}{2}$ is the number
of arcs $K_+$ has in the ball above $T$ and in the ball below it);
similarly $K_-$ has exactly $\frac{m_-}{2}$ pairs separated by
$T$. Since all of these minima are below $T$ and the maxima
above, we note that the two minima adjacent to $T$ above it and
the two maxima below it are not members of these pairs. (We make
this comment to ensure that no split pair is counted twice.) With
this in mind we are ready to treat each of the cases above:
\begin{enumerate}
\item In this case $K_+$ has two minima directly above $T$.
Each of these minima (together with its corresponding maximum)
will be involved in a split pair with each of the pairs of $K_-$
separated by $T$, a total of $\frac{m_-}{2}$ split pairs.
Similarly, the two maxima of $K_-$ below $T$ will be involved in
$\frac{m_+}{2}$ split pairs each. Since the minima above $T$ and
the maxima below it are not members of pairs separated by $T$ no
split pair is counted twice. The width is lowered by four per
split pair, yielding a reduction of at least $4(m_++m_-)=4m$.
\item As in Case (1) the minimum directly above $T$, which is a point of
$K_+$, is involved in $\frac{m_-}{2}$ split pairs. But it is also
involved in a split pair with {\tt min} and its corresponding
maximum, and is thus involved in a total of $\frac{m_-}{2} + 1$
split pairs. Similarly, the maximum directly below $T$ (a point of
$K_-$) is involved in $\frac{m_+}{2} + 1$ split pairs. Each of the
$\frac{m_++m_-}{2}+2$ split pairs reduces the width by 4, giving a
total of $2m+8$.
\item As in Case (1) the minimum directly above $T$ and {\tt min} are
involved in $\frac{m_-}{2}$ split pairs each. As in Case (2) the
maximum directly below $T$ (a point of $K_-$) is involved in
$\frac{m_+}{2} + 1$ split pairs. Together we get $m_- +
\frac{m_+}{2} + 1$ split pairs, yielding a reduction in width of
at least $4 m_- + 2 m_+ + 4 \geq 2 m + 4$.
\item Symmetric to case (3) we get a reduction of at least $2m+4$.
\end{enumerate}
Next we reattach the lowest minimum of $K_+$ to the highest
maximum in $K_-$ to obtain a presentation of $K$. This will
reduce width by exactly two. See Figure \ref{fig:Thinner}.
Splitting to form the link increased the width by $2m+2$ and the
final reattachment reduced it by $2$, a net increase of $2m$. In
cases 2, 3, and 4 the separation of $K_+$ and $K_-$ reduced the
width by at least $2m+8$ (case 2) or $2m+4$ (cases 3 and 4). In
these cases, we obtain an overall reduction of at least 4,
contradicting thin position. In Case 1 we lowered the width by
$4m$ which yields an overall reduction unless $m=0$, our desired
conclusion. (Note that $m=0$ means that $K_+$ and $K_-$ do not
cross the level $T$, which forces us to be in Case (1).)
\end{proof}
This completes the proof of Theorem \ref{thm:connect-small-knots}.
\end{proof}
\Addresses\recd
\end{document}
\begin{document}
\title{Generalized Perron--Frobenius Theorem for Nonsquare Matrices}
\author{
Chen Avin
\thanks{
\hbox{Ben Gurion University, Beer-Sheva, Israel. Email:}
{\tt \{avin,}
{\tt borokhom,}
{\tt zvilo\}@cse.bgu.ac.il,}
{\tt [email protected].}}
\and
Michael Borokhovich $^*$
\and
Yoram Haddad
\thanks{Jerusalem College of Technology, Jerusalem, Israel.
}
$^*$
\and
Erez Kantor
\thanks{The Technion, Haifa, Israel.
Email: {\tt [email protected]}. Supported by Eshkol fellowship,
the Israel Ministry of Science and Technology.}
\and
Zvi Lotker $^*$
\thanks{Supported by a grant of the Israel Science Foundation.}
\and
Merav Parter
\thanks{
The Weizmann Institute of Science, Rehovot, Israel.
Email: {\tt \{merav.parter,david.peleg\}@weizmann.ac.il}.}
\thanks{Recipient of the Google Europe Fellowship in distributed computing;
research supported in part by this Google Fellowship.}
\and
David Peleg $^\ddag$\thanks{Supported in part by the Israel Science Foundation (grant 894/09),
the United States-Israel Binational Science Foundation
(grant 2008348),
the I-CORE program of the Israel PBC and ISF (grant 4/11),
the Israel Ministry of Science and Technology
(infrastructures grant), and the Citi Foundation.}
}
\maketitle
\begin{abstract}
The celebrated Perron--Frobenius (PF) theorem is stated for irreducible nonnegative square matrices, and provides a simple characterization of their eigenvectors and eigenvalues.
The importance of this theorem stems from the fact that eigenvalue problems
on such matrices arise in many fields of science and engineering, including dynamical systems theory, economics, statistics and optimization.
However, many real-life scenarios give rise to nonsquare matrices.
Despite the extensive development of spectral theories for nonnegative matrices, the applicability of such theories to non-convex optimization problems is not clear. In particular, a natural question is whether the \PFT~(along with its applications) can be generalized to a nonsquare setting.
Our paper provides a generalization of the \PFT{ }
to nonsquare matrices.
The extension can be interpreted as representing client-server systems with additional
degrees of freedom, where each client may choose between multiple
servers that can cooperate in serving it
(while potentially interfering with other clients).
This formulation is motivated by applications to power control
in wireless networks, economics and others,
all of which extend known examples for the use of the original \PFT.
We show that the option of cooperation between servers does not
improve the situation, in the sense that in the optimal solution
no cooperation is needed, and only one server needs to serve each client.
Hence, the additional power of having several potential servers per client
translates into \emph{choosing} the best single server and not into \emph{sharing} the load between the servers in some way, as one might have expected.
The two main contributions of the paper are
(i) a generalized \PFT { }that characterizes the optimal solution
for a non-convex nonsquare problem, and
(ii) an algorithm for finding the optimal solution in polynomial time.
Towards achieving those goals, we extend the definitions of irreducibility and largest eigenvalue
of square matrices to nonsquare ones in a novel and non-trivial way,
which turns out to be necessary and sufficient for our generalized theorem
to hold.
The analysis performed to characterize the optimal solution uses techniques from a wide range of areas and exploits combinatorial
properties of polytopes, graph-theoretic techniques and analytic tools
such as spectral properties of nonnegative matrices and root characterization
of integer polynomials.
\end{abstract}
\section{Introduction}
\paragraph{Motivation and main results.}
This paper presents a generalization of the well known Perron--Frobenius (PF)
Theorem \cite{PF_Frobenius,PF_Perron}.
As a motivating example, let us consider the \emph{Power control problem}, one of the most fundamental problems in wireless networks.
The input to this problem consists of $n$ receiver-transmitter pairs and their physical locations.
All transmitters are set to transmit at the same time with the same frequency,
thus causing interference to the other receivers.
Therefore, receiving and decoding a message at each receiver depends on the
transmitting power of its paired transmitter as well as the power of the rest
of the transmitters.
If the \emph{signal to interference ratio} at a receiver, namely, the signal strength received by a receiver divided by the interfering
strength of other simultaneous transmissions,
is above some \emph{reception threshold} $\beta$, then the
receiver successfully receives the message, otherwise it does not \cite{R96}.
The power control problem is then to find an optimal power assignment
for the transmitters, so as to make the reception threshold $\beta$
as high as possible and ease the decoding process.
As it turns out, this power control problem can be solved elegantly by casting it as an optimization program and using the Perron--Frobenius (PF) Theorem \cite{Zander92b}.
The theorem can be formulated as dealing with
the following optimization problem (where $A \in \mathbb{R}^{n \times n}$):
\begin{eqnarray}\label{eq:basic}
&&\text{maximize $\beta$ subject to: }\\
&&A \cdot \overline{X} \leq 1/\beta \cdot \overline{X},~~
\mathrm{d}isplaystyle ||\overline{X}||_{1}=1,~~
\mathrm{d}isplaystyle \overline{X} \geq \overline{0}.\nonumber
\end{eqnarray}
Let $\beta^*$ denote the optimal solution for Program (\ref{eq:basic}).
The Perron--Frobenius (PF) Theorem characterizes the solution to this
optimization problem and shows the following:
\begin{theorem} {\sc (\PFT, short version, \cite{PF_Frobenius,PF_Perron})}
Let $A$ be an irreducible nonnegative matrix. Then $\beta^* = 1/\PFEigenValue$, where $\PFEigenValue \in \mathbb{R}_{>0}$
is the largest eigenvalue of $A$,
called the \emph{Perron--Frobenius (PF) root} of $A$.
There exists a unique (eigen-)vector $\PFEigenVector>0$,
$||\PFEigenVector||_{1}=1$, such that
$A \cdot \PFEigenVector = \PFEigenValue \cdot \PFEigenVector$,
called the \emph{Perron vector} of $A$.
(The pair $(\PFEigenValue,\PFEigenVector)$ is hereafter referred to as an {\em eigenpair}
of $A$.)
\end{theorem}
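For readers who want to experiment, the eigenpair in the theorem can be approximated by plain power iteration, a standard method that is not part of this paper. The sketch below is pure Python; it assumes the matrix is primitive (so the iteration converges), and the $2\times 2$ example matrix is arbitrary.

```python
def perron_eigenpair(A, iters=200):
    # Power iteration for the PF root r and Perron vector of an
    # irreducible nonnegative square matrix A (list of rows).
    # Sketch only: assumes A is primitive so the iteration converges.
    n = len(A)
    x = [1.0 / n] * n          # start positive, L1-normalized
    r = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = sum(y)             # ||A x||_1 tends to r as x tends to the Perron vector
        x = [v / r for v in y]
    return r, x

r, v = perron_eigenpair([[2.0, 1.0], [1.0, 2.0]])
print(r, v)   # 3.0 [0.5, 0.5]
# For Program (1), the optimal reception threshold is then beta* = 1/r.
```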
Returning to our motivating example, let us consider a more complicated
variant of the power control problem,
where each receiver has several transmitters that can transmit to it
(and only to it) synchronously.
Since these transmitters are located at different places, it may conceivably
be better to divide the power (or work) among them, to increase
the reception threshold at their common receiver. Again, the question
concerns finding the best power assignment among all transmitters.
In this paper
we extend Program (\ref{eq:basic}) to \emph{nonsquare matrices} and consider
the following extended optimization problem,
which in particular captures the multiple transmitters scenario.
(Here $A, B \in \mathbb{R}^{n \times m}$, $n \leq m$.)
\begin{eqnarray}\label{eq:extended}
&&\text{maximize $\beta$ subject to: }~~\\
&&A \cdot \overline{X} \leq 1/\beta \cdot B \cdot \overline{X},
\displaystyle ~~~||\overline{X}||_{1}=1,~~~
\displaystyle \overline{X} \geq \overline{0}.\nonumber
\end{eqnarray}
We interpret the nonsquare matrices $A,B$ as representing some additional
freedom given to the system designer. In this setting,
each \emph{entity} (receiver, in the power control example) has
several \emph{affectors} (transmitters, in the example),
referred to as its \emph{supporters}, which can cooperate in serving it
and share the workload. In such a general setting, we would like
to find the best way to organize the cooperation between the supporters
of each entity.
The original problem was defined for a square matrix, so the appearance of
eigenvalues in the characterization of its solution seems natural. In contrast, in the generalized setting the
situation seems more complex. Our main result is an extension of the
\PFT~to nonsquare matrices and systems that give rise to an optimization problem in the form of (\ref{eq:extended}), with optimal solution $\beta^*$.
\begin{theorem}
{\sc (Nonsquare \PFT, short version)}
Let $\langle A, B \rangle$ be an irreducible nonnegative system
(to be made formal later).
Then $\beta^* = 1/\PFEigenValue$, where $\PFEigenValue \in \mathbb{R}_{>0}$ is
the smallest \emph{Perron--Frobenius (PF) root} of all ${n \times n}$ square
sub-systems (defined formally later).
There exists a vector $\PFEigenVector \ge 0$ such that
$A \cdot \PFEigenVector = \PFEigenValue \cdot B \cdot \PFEigenVector$ and $\PFEigenVector$ has $n$
entries greater than 0 and $m-n$ zero entries
(referred to as a $\ZeroStar$ solution).
\end{theorem}
In other words, the theorem implies that the option of cooperation
does not improve the situation, in the sense that in the optimum solution,
no cooperation is needed and only one supporter per entity needs to work.
Hence, the additional power of having several potential supporters per entity
translates into \emph{choosing} the best single supporter and not into \emph{sharing} the load between the supporters in some way, as one might have expected.
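For a concrete feel of this structure, the following Python sketch brute-forces an invented $2 \times 3$ instance of Program (\ref{eq:extended}) over the whole simplex, and over each support of size $n=2$; the best $\ZeroStar$ support matches the unrestricted optimum (for these hypothetical matrices, $\beta^*=\sqrt{3}$, attained with the third affector inactive).

```python
import itertools
import numpy as np

# Invented 2-entity, 3-affector instance of the extended program:
# entity 1 can be served by affectors 1 or 3, entity 2 by affector 2 only.
B = np.array([[3.0, 0.0, 1.0],    # supporters side of the constraints
              [0.0, 2.0, 0.0]])
A = np.array([[0.0, 1.0, 0.0],    # repressors side of the constraints
              [2.0, 0.0, 1.0]])

def beta_of(x):
    """Largest beta with A x <= (1/beta) B x at this fixed x."""
    num, den = B @ x, A @ x
    den = np.where(den == 0.0, 1e-300, den)   # zero repression: vacuous row
    return float(np.min(num / den))

def best_on_support(idx, steps):
    """Grid-search beta over the simplex restricted to coordinates idx."""
    best = 0.0
    for c in itertools.product(range(steps + 1), repeat=len(idx) - 1):
        if sum(c) > steps:
            continue
        x = np.zeros(3)
        x[list(idx[:-1])] = np.array(c) / steps
        x[idx[-1]] = 1.0 - sum(c) / steps
        best = max(best, beta_of(x))
    return best

best_full = best_on_support((0, 1, 2), steps=200)
best_zero_star = max(best_on_support(p, steps=2000)
                     for p in ((0, 1), (0, 2), (1, 2)))
```

Up to grid resolution, restricting to two active affectors loses nothing, as the theorem predicts.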
\par As it turns out, the lion's share of our analysis involves such a characterization
of the optimal solution for (the non-convex) problem
of Program (\ref{eq:extended}).
The main challenge is to show that at the optimum, there exists a solution
in which only one supporter per entity is required to work;
we call such a solution a $\ZeroStar$ solution.
Namely, the structure that we establish is that the optimal solution
for our nonsquare
system is in fact the optimal solution of an \emph{embedded} square PF system.
Interestingly,
it turned out to be relatively easy to show that there exists an optimal
``almost $\ZeroStar$'' solution, in which each entity
\emph{except at most one} has a single active
supporter and the remaining entity has at most \emph{two} active supporters.
Despite the presumably large ``improvement'' of decreasing the number of servers from $m$ to $n+1$, this still leaves us in the frustrating situation of a nonsquare $n \times (n+1)$ system, where no spectral characterization for optimal solutions exists. In order to allow us to characterize the optimal solution using
the eigenpair of the best square matrix embedded within the nonsquare system,
one must overcome this last hurdle, and reach the ``phase transition'' point of $n$ servers, in which the system is \emph{square}.
Our main efforts went into showing that the remaining entity, too,
can select just one supporter while maintaining optimality, ending with
a \emph{square} $n\times n$ irreducible system where the traditional \PFT\
can be applied.
Proving the existence of an optimal $\ZeroStar$ solution requires techniques
from a wide range of areas to come into play and provide a rich understanding
of the system on various levels. In particular, the analysis exploits
combinatorial properties of polytopes, graph-theoretic techniques and
analytic tools such as spectral properties of nonnegative matrices and
root characterization of integer polynomials.
In the context of the above example of power control in a wireless network with multiple
transmitters per receiver, a $\ZeroStar$ solution means that the best
reception threshold is achieved when only a single transmitter transmits
to each receiver.
Other known applications of the \PFT~can also be extended in a similar manner.
An example of such an application is the {\em input-output economic model}
\cite{pillai2005pft}.
In this economic model, each industry produces a commodity and buys commodities
(raw materials) from other industries. The percentage
profit margin of an industry is the ratio
between its total income and its total expenses (for buying raw materials). The goal is to find a pricing that maximizes the ratio between the total income and the total expenses of all industries. The extended PF variant of the problem concerns the case
where an industry can produce multiple commodities instead of just one.
In this example, the same general phenomenon holds:
each industry should charge money only for \emph{one} of the commodities it produces. That is, in the optimal pricing, one commodity per industry has nonzero price, therefore the optimum is a $\ZeroStar$ solution.
For a more detailed discussion of applications, see Sec. \ref{short:sec:Applications}. In addition, in Sec. \ref{sec:limit}, we provide a characterization of systems in which a $\ZeroStar$ solution does not exist.
\par While in the original setting the \PFR\ and \PFV\ can be computed in polynomial time,
this is not clear in the extended case, since the problem is not convex
\cite{Boyd-Conv-Opt-Book} (and not even log-convex) and there are
exponentially many choices in the system even if we know that
the optimal solution is $\ZeroStar$
and each entity has only two supporters
to choose from. Our second main contribution is providing a polynomial time
algorithm to find $\beta^*$ and $\PFEigenVector$.
The algorithm uses the fact that fixing $\beta$ yields a relaxed problem
which is convex (actually it becomes a linear program). This allows us to employ
the well known interior point method \cite{Boyd-Conv-Opt-Book},
for testing a specific $\beta$ for feasibility.
Hence, the problem reduces to finding the maximum feasible $\beta$,
and the algorithm does so by applying binary search on $\beta$.
Clearly, the search results in an approximate solution, in fact yielding
a fully polynomial time approximation scheme (FPTAS) for program (\ref{eq:extended}). This, however, leaves open the
intriguing question of whether program (\ref{eq:extended}) is polynomial.
Obtaining an exact optimal $\beta^*$, along with an appropriate vector
$\PFEigenVector$, is thus another challenging aspect of the problem.
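The binary-search scheme just described can be sketched in a few lines; the code below uses SciPy's \texttt{linprog} as the LP feasibility oracle, and the $2 \times 3$ matrices are invented for illustration (for them, $\beta^*=\sqrt{3}$ can be verified by hand).

```python
import numpy as np
from scipy.optimize import linprog

def feasible(beta, A, B):
    """Is there X >= 0 with ||X||_1 = 1 and A X <= (1/beta) B X?
    For a fixed beta this is a linear feasibility program."""
    n, m = A.shape
    res = linprog(c=np.zeros(m),
                  A_ub=A - B / beta, b_ub=np.zeros(n),  # (A - (1/beta)B) X <= 0
                  A_eq=np.ones((1, m)), b_eq=[1.0],
                  bounds=[(0, None)] * m, method="highs")
    return res.status == 0

def max_beta(A, B, lo=1e-6, hi=1e6, iters=60):
    """Binary search for the largest feasible beta; feasibility is
    monotone in beta, so bisection applies."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid, A, B):
            lo = mid
        else:
            hi = mid
    return lo

# Invented 2x3 instance whose optimum is beta* = sqrt(3).
A_ex = np.array([[0.0, 1.0, 0.0], [2.0, 0.0, 1.0]])
B_ex = np.array([[3.0, 0.0, 1.0], [0.0, 2.0, 0.0]])
```

As in the text, this yields an approximation of $\beta^*$, not an exact algebraic answer.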
\par A central notion in the generalized PF theorem is the \emph{irreducibility}
of the system. While irreducibility is a well-established concept for square
systems, it is less obvious how to define irreducibility for a nonsquare matrix
or system as in Program \eqref{eq:extended}. We provide a suitable definition
based on the property that every maximal square (legal) subsystem is
irreducible, and show that our definition is necessary and sufficient
for the theorem to hold.
A key tool in our analysis is what we call the \emph{constraint graph} of
the system, whose vertex set is the set of $n$ constraints (one per entity)
and whose edges represent direct influence between the constraints.
For a square system, irreducibility is equivalent to the constraint graph
being strongly connected, but for nonsquare systems the situation is more
delicate. Essentially, although the matrices are not square, the notion of
constraint graph is well defined and provides a valuable \emph{square}
representation of the nonsquare system (i.e., the adjacency matrix of
the graph). In \cite{PF_Irred,PF_Archive}, we also present a polynomial-time algorithm for testing the irreducibility of a given system, which exploits the properties
of the constraint graph.
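A minimal sketch of the strong-connectivity test is given below; note that the edge rule used in \texttt{constraint\_graph} (two constraints are adjacent whenever they share an affector with nonzero gain) is only a plausible stand-in for the formal definition in \cite{PF_Irred,PF_Archive}.

```python
from collections import deque

def _reachable(adj, s):
    """Set of vertices reachable from s (BFS)."""
    seen, q = {s}, deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                q.append(u)
    return seen

def strongly_connected(adj):
    """A digraph is strongly connected iff every vertex is reachable
    from vertex 0 in both the graph and its reverse (Kosaraju-style)."""
    n = len(adj)
    radj = [[] for _ in range(n)]
    for v, outs in enumerate(adj):
        for u in outs:
            radj[u].append(v)
    return len(_reachable(adj, 0)) == n and len(_reachable(radj, 0)) == n

def constraint_graph(G):
    """A stand-in constraint graph for an n x m gain matrix G: an edge
    i -> j whenever some affector has nonzero gain at both entities.
    (The paper's formal edge rule may differ.)"""
    n, m = len(G), len(G[0])
    return [[j for j in range(n) if j != i and
             any(G[i][k] != 0 and G[j][k] != 0 for k in range(m))]
            for i in range(n)]
```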
\par\noindent{\bf Related work.}
The \PFT~establishes the following two important ``PF properties'' for a nonnegative square matrix $A \in \mathbb{R}^{n \times n}$:
(1) the \emph{Perron--Frobenius property}:
$A$ has a maximal nonnegative eigenpair.
If in addition the matrix $A$ is \emph{irreducible} then its maximal eigenvalue is strictly positive, dominant and with a strictly positive eigenvector. Thus, a nonnegative irreducible matrix $A$ is said to enjoy
the \emph{strong Perron--Frobenius property} \cite{PF_Frobenius,PF_Perron}.
(2) the \emph{Collatz--Wielandt property} (a.k.a. min-max characterization):
the maximal eigenpair is the optimal solution of Program (\ref{eq:basic}) \cite{PF_Collatz, PF_Wielandt}.
\par Matrices with these properties have played
an important role in a wide variety of applications.
The wide applicability of the \PFT, as well as the fact that the necessary and sufficient properties required of a matrix $A$ for the PF properties to hold are still not fully understood, have led to the emergence of many generalizations. We note that whereas all generalizations concern the Perron--Frobenius property, the Collatz--Wielandt property is not always established.
The long series of existing PF extensions includes \cite{PF_with_some_negative_entries,PF_evNONNEG,PF_complex_matrices,PF_for_non_linear_mapping, PF_nonlinear_more,PF_concave_mappings,PF_for_matrix_polynomials,AXBX}.
We next discuss these extensions in comparison to the current work.
Existing PF extensions can be broadly classified into four classes.
The first concerns matrices that do not satisfy the irreducibility and nonnegativity requirements. For example, \cite{PF_with_some_negative_entries,PF_evNONNEG} establish the Perron--Frobenius property for \emph{almost} nonnegative matrices or \emph{eventually} nonnegative matrices.
A second class of generalizations concerns square matrices over different domains. For example, in \cite{PF_complex_matrices}, the \PFT~was established for complex matrices $A \in \mathbb{C}^{n \times n}$.
In the third type of generalization, the linear transformation obtained by applying the nonnegative irreducible matrix $A$ is generalized to a nonlinear mapping \cite{PF_for_non_linear_mapping, PF_nonlinear_more}, a concave mapping \cite{PF_concave_mappings} or a matrix polynomial mapping \cite{PF_for_matrix_polynomials}.
Last, a much less well studied generalization deals with nonsquare matrices,
i.e., matrices in $\mathbb{R}^{n \times m}$ for $m \neq n$.
Note that when considering a nonsquare system, the notion of eigenvalues
requires definition. There are several possible definitions for eigenvalues
in nonsquare matrices.
One possible setting for this type of generalizations considers a pair
of nonsquare ``pencil'' matrices $A, B \in \mathbb{R}^{n \times m}$,
where the term ``pencil'' refers to the expression $A- \lambda \cdot B$,
for $\lambda \in \mathbb{C}$. Of special interest here are the values
that reduce the pencil rank, namely, the $\lambda$ values satisfying
$(A -\lambda B) \cdot \overline{X}=\overline{0}$
for some nonzero $\overline{X}$.
This problem is known as the \emph{generalized eigenvalue problem} \cite{AXBX,NonSQPencil,boelgomi05,Kres11},
which can be stated as follows:
Given matrices $A, B \in \mathbb{R}^{n \times m}$, find a vector $\overline{X}\neq \overline{0}$, $\lambda \in \mathbb{C}$, so that $A \cdot\overline{X}=\lambda B \cdot \overline{X}$. The complex number $\lambda$ is said to be an \emph{eigenvalue of $A$ relative to $B$} iff $A \overline{X}=\lambda \cdot B
\cdot \overline{X}$ for some nonzero $\overline{X}$ and $\overline{X}$ is called the \emph{eigenvector of $A$ relative to $B$}. The set of all eigenvalues of $A$ relative to $B$ is called the \emph{spectrum of $A$ relative to $B$}, denoted by $sp(A_{B})$.
Using the above definition, \cite{AXBX} considered
pairs of nonsquare matrices $A,B$ and was the first to characterize
the relation between $A$ and $B$
required to establish their PF property,
i.e., guarantee that the generalized eigenpair is nonnegative.
Essentially, this is done by generalizing the notions of positivity and
nonnegativity in the following manner. A matrix $A$ is said to be
\emph{positive} (respectively,\emph{nonnegative}) with respect to $B$,
if $B^{T} \cdot \overline{Y} \geq 0$ implies that $A^{T} \cdot \overline{Y}>0$
(resp., $A^{T} \cdot \overline{Y}\geq 0$). Note that for $B=I$,
these definitions coincide with the classical definitions of a positive
(resp., nonnegative) matrix. Let $A, B \in \mathbb{R}^{n \times m}$, for $n \geq m$,
be such that the rank of $A$ or the rank of $B$ is $m$.
It is shown in \cite{AXBX} that if $A$ is positive (resp., nonnegative)
with respect to $B$, then the generalized eigenvalue problem
$A \cdot\overline{X}=\lambda \cdot B \cdot \overline{X}$ has a discrete and finite spectrum,
the eigenvalue with the largest absolute value is real and positive
(resp., nonnegative), and the corresponding eigenvector is positive
(resp., nonnegative). Observe that under the definition used therein,
the case where $m > n$ (which is the setting studied here) is uninteresting,
as the columns of $A -\lambda \cdot B$ are linearly dependent for any real
$\lambda$, and hence the spectrum $sp(A_{B})$ is unbounded.
Despite the significance of \cite{AXBX} and its pioneering generalization of
the \PFT~to nonsquare systems, it is not clear what are the applications
of such a generalization, and no specific implications are known for the
traditional applications of the PF theorem.
Moreover, although \cite{AXBX} established the PF
property for a class of pairs of nonsquare matrices,
the Collatz--Wielandt property, which provides the algorithmic power for the
\PFT, does not necessarily hold with the spectral definition of \cite{AXBX}.
In addition, since no notion of irreducibility was defined in \cite{AXBX},
the spectral radius of a nonnegative system (in the sense of the definition
of \cite{AXBX}) might be zero, and the corresponding eigenvector might be
nonnegative in the strong sense (with some zero coordinates).
These degenerations can be handled only by considering irreducible
nonnegative matrices, as was done
by Frobenius in \cite{PF_Frobenius}.
In contrast, the goal of the current work is to develop the spectral theory for a pair of
nonnegative matrices in a way that is both necessary and sufficient for both
the
PF property and the Collatz--Wielandt property to hold
(allowing the nonsquare system to be of the
``same power'' as the square systems considered by Perron and Frobenius).
Towards this we define the eigenvalues and eigenvectors of pairs of $n \times m$ matrices $A$ and $B$ in a novel manner. Such an eigenpair $(\lambda, \overline{X})$ satisfies $A \cdot \overline{X}=\lambda \cdot B \cdot \overline{X}$. In \cite{AXBX}, alternative spectral definitions for pairs of nonsquare matrices $A$ and $B$ are provided. We note that whereas in the formulation of \cite{AXBX} the maximum eigenvalue is not bounded if $n < m$, with our definition it is bounded.
\par Let us note that although the generalized eigenvalue problem has been studied for many years, and multiple approaches for nonsquare spectral theory in general have been developed, the algorithmic aspects of such theories with respect to the Collatz--Wielandt property have been neglected when concerning nonsquare matrices (and also in other extensions). This paper is the first, to the best of our knowledge, to provide spectral definitions for nonsquare systems that have the same algorithmic power as those made for square systems (in the context of the \PFT).
The extended optimization problem that corresponds to this nonsquare setting is a nonconvex problem (which is also not log-convex), therefore its polynomial solution and characterization are of interest.
Another way to extend the notion of eigenvalues and eigenvectors of a square
matrix to a nonsquare matrix is via \emph{singular value decompositions (SVD)}
\cite{meyer2000matrix}. Formally, the singular value decomposition of an
$n\times m $ real matrix $M$ is a factorization of the form $M=U\Sigma V^{*}$,
where $U$ is an $n\times n$ real or complex unitary matrix, $\Sigma$ is an
$n \times m$ diagonal matrix with nonnegative reals on the diagonal,
and $V^{*}$ (the conjugate transpose of $V$) is an $m\times m$ real or complex
unitary matrix.
The diagonal entries $\Sigma_{i,i}$ of $\Sigma$ are known as the singular
values of $M$. Expanding the product $U \Sigma V^{*}$ shows that
$M$ depends linearly on its singular values. When all the entries of $M$
are positive, one may add absolute
values, and thus the SVD has the flavor of an $L^1$ dependence. In contrast
to the SVD definition,
here we are interested in finding a maximum, so our interpretation has
the flavor of $L^\infty$.
In a recent paper \cite{Vazirani12}, Vazirani defined the notion of
{\em rational convex programs} as problems that have a rational number as
a solution. Our paper can be considered as an example of
{\em algebraic programming},
since we show that a solution to our problem is an algebraic number.
\section{Preliminaries}
\label{sec:per}
\subsection{Definitions and terminology}
Consider a directed graph $G=(V,E)$. A subset of the vertices $W \subseteq V$ is called a \emph{strongly connected component}
if $G$ contains a directed path from $v$ to $u$ for every $v, u \in W$. $G$ is said to be \emph{strongly connected} if $V$ is a strongly connected component.
\par Let $A \in \mathbb{R}^{n \times n}$ be a square matrix.
Let $\EigenValue(A)= \{\lambda_1, \ldots, \lambda_k\}$, $k \leq n$,
be the set of real eigenvalues of $A$.
The \emph{characteristic polynomial} of $A$, denoted by $\CP(A,t)$,
is a polynomial whose real roots are precisely the real eigenvalues
$\EigenValue(A)$ of $A$, and it is given by
\begin{equation}
\label{eq:CP}
\CP(A,t) = \det(t \cdot I -A)
\end{equation}
where $I$ is the $n \times n$ identity matrix.
Note that, for real $t$, $\CP(A,t)=0$ iff $t \in \EigenValue(A)$.
The {\em spectral radius} of $A$ is defined as
$\SpectralRatio(A) =
\displaystyle \max\limits_{\lambda \in \EigenValue(A)} |\lambda|.$
The $i^{th}$ element of a vector $\overline{X}$ is given by $X(i)$, and
the $i,j$ entry of a matrix $A$ is denoted $A(i,j)$.
Let $A_{i,0}$ (respectively, $A_{0,i}$) denote the $i$-th row (resp., column)
of $A$. Vector and matrix inequalities are interpreted in the component-wise sense. $A$ is \emph{positive} (respectively, \emph{nonnegative})
if all its entries are.
$A$ is \emph{primitive} if there exists a natural number $k$ such that
$A^{k}>0$. $A$ is \emph{irreducible} if for every $i,j$,
there exists a natural $k_{i,j}$ such that $(A^{k_{i,j}})_{i,j} >0.$
An \emph{irreducible} matrix $A$ is \emph{periodic} with period $\Period$ if
$(A^{t})_{ii}=0$ for every $i$ and every $t$ that is not a multiple of $\Period$.
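For a square matrix, irreducibility can be tested directly via the standard characterization that a nonnegative $n \times n$ matrix $A$ is irreducible iff $(I+A)^{n-1}>0$; a Python sketch:

```python
import numpy as np

def is_irreducible(A):
    """Standard characterization: a nonnegative n x n matrix A is
    irreducible iff (I + A)^(n-1) is entrywise positive."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool(np.all(M > 0))
```

Only the zero pattern of $A$ matters, which is why the sketch works with the Boolean mask $A>0$.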
\subsection{Algebraic Preliminaries}
\label{sec:algper}
\paragraph{Generalization of Cramer's rule to homogeneous linear systems.}
Let $A_{i,0}$ (respectively, $A_{0,i}$) denote the $i$-th row (resp., column)
of $A$. Let $A_{-(i,j)}$ denote the matrix that results from $A$ by removing
the $i$-th row and the $j$-th column. Similarly, $A_{-(i,0)}$ and $A_{-(0,j)}$
denote the matrix after removing the $i$-th row (respectively, $j$-th column)
from $A$. Let $\widetilde{A}_{i}=\left(A(1,i), \ldots, A(n-1,i) \right)^T$,
i.e., the $i$-th column of $A$ without the last element $A(n,i)$.
For $\overline{X}=(X(1), \ldots, X(n)) \in \mathbb{R}^{n}$, denote
$\overline{X}_{i}=(X(1), \ldots, X(i)) \in \mathbb{R}^{i}$.
We make use of the following extension of Cramer's rule to homogeneous square linear systems.
\begin{claim}
\label{cl:cramer_square}
Let $A \in \mathbb{R}^{n \times n}$ and $\overline{X} \in \mathbb{R}^{n}$ satisfy $A \cdot \overline{X} = \overline{0}$, and suppose that $A_{-(n,n)}$ is invertible.
Then,
\begin{description}
\item{(a)}
$\displaystyle
X(i) ~=~ (-1)^{n-i} \cdot X(n) \cdot \frac{\det(A_{-(n,i)})}{\det(A_{-(n,n)})}~.$
\item{(b)}
$\displaystyle
X(n) \cdot \frac{\det(A)}{\det(A_{-(n,n)})}=0~.$
\end{description}
\end{claim}
\par\noindent{\bf Proof:~}
Since $A \cdot \overline{X} = \overline{0}$, it follows that
$A_{-(n,n)} \cdot \overline{X}_{n-1}=-X(n) \cdot \widetilde{A}_{n}$.
As $A_{-(n,n)}$ is invertible, we can apply Cramer's rule to express $X(i)$.
Let $M_{i}=[\widetilde{A}_{1}, \ldots, \widetilde{A}_{i-1},\widetilde{A}_{n},
\widetilde{A}_{i+1}, \ldots, \widetilde{A}_{n-1}] \in \mathbb{R}^{(n-1) \times (n-1)}$,
for $i>1$ and
$M_{1}=[\widetilde{A}_{n}, \widetilde{A}_{2},\ldots, \widetilde{A}_{n-1}]$.
By Cramer's rule, it then follows that
$X(i)=-X(n) \cdot \det(M_{i}) / \det(A_{-(n,n)})$.
We next claim that $\det(M_{i})=(-1)^{n-1-i} \cdot \det(A_{-(n,i)})$.
To see this, note that $A_{-(n,i)}$ and $M_{i}$ are composed of the same set of
columns up to order. In particular, $M_{i}$ can be transformed to
$A_{-(n,i)}=[\widetilde{A}_{1}, \ldots, \widetilde{A}_{i-1}, \widetilde{A}_{i+1},
\ldots, \widetilde{A}_{n-1},\widetilde{A}_{n}]$
by a sequence of $n-1-i$ swaps of consecutive columns starting from the
$i$-th column of $M_{i}$. It therefore follows that
$X(i)=(-1)^{n-1-i} \cdot (-1) \cdot X(n) \cdot
\frac{\det(A_{-(n,i)})}{\det(A_{-(n,n)})}
=(-1)^{n-i} \cdot X(n) \cdot \frac{\det(A_{-(n,i)})}{\det(A_{-(n,n)})}~,$
establishing part (a) of the claim.
We continue with part (b).
Since $A \cdot \overline{X} = \overline{0}$, it follows that
$A_{n,0} \cdot \overline{X}=0$, or that
\begin{eqnarray*}
A_{n,0} ~\cdot~ \overline{X} &=&\sum_{i=1}^{n} A(n,i) \cdot X(i)\nonumber
\\&=&
X(n) \cdot \sum_{i=1}^{n-1} \left( (-1)^{n-i} \cdot A(n,i) \cdot
\frac{\det(A_{-(n,i)})}{\det(A_{-(n,n)})} \right)+A(n,n) \cdot X(n) \nonumber
\\&=&
X(n) \cdot \frac{ \sum_{i=1}^{n-1} \left((-1)^{n-i} \cdot A(n,i) \cdot
\det(A_{-(n,i)}) \right)+ A(n,n) \cdot \det(A_{-(n,n)})}{\det(A_{-(n,n)})}
\\&=&
X(n) \cdot \frac{\det(A)}{\det(A_{-(n,n)})}=0~. ~~~\quad\quad\blackslug
\nonumber
\end{eqnarray*}
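Claim \ref{cl:cramer_square}(a) is also easy to verify numerically: generate a random singular matrix with a prescribed null vector and compare both sides. (The construction below, which projects the rows of a random matrix to be orthogonal to $X$, is ours and serves testing purposes only.)

```python
import numpy as np

def minor_det(A, i, j):
    """det of A with row i and column j removed (0-indexed)."""
    return np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))

def cramer_residual(n, seed=0):
    """Build a random singular A with a known null vector X (each row of
    a random matrix is projected to be orthogonal to X, so A @ X = 0)
    and return the largest deviation from part (a) of the claim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(1.0, 2.0, size=n)
    A = rng.normal(size=(n, n))
    A -= np.outer(A @ X, X) / (X @ X)     # now A @ X = 0
    d = minor_det(A, n - 1, n - 1)        # det(A_{-(n,n)}), generically nonzero
    return max(abs(X[i] - (-1) ** (n - 1 - i) * X[n - 1]
                   * minor_det(A, n - 1, i) / d)
               for i in range(n - 1))
```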
We now turn to a nonsquare matrix $A \in \mathbb{R}^{n \times (n+1)}$.
The matrix $B=B(A) = [\widetilde{A}_{1}, \ldots, \widetilde{A}_{n-1}] \in
\mathbb{R}^{(n-1) \times (n-1)}$ is the upper left $(n-1) \times (n-1)$
square submatrix of $A$. Let $C^{1}=[A_{0,1}, \ldots, A_{0,n}]$, i.e.,
$C^{1}=A_{-(0,n+1)}$, and let $C^{2}=A_{-(0,n)}$. Note that
$C^{1}, C^{2} \in \mathbb{R}^{n \times n}$, i.e., both are square matrices.
\begin{claim}
\label{cl:cramer_non_square}
Let $A \cdot \overline{X}=\overline{0}$, and suppose that $B=B(A)$ is invertible. Then,
\begin{description}
\item{(a)}
$\displaystyle
X(i) ~=~ (-1)^{n-i} \cdot \left( \frac{\det(C^{1}_{-(n,i)})}{\det(B)} \cdot
X(n) +\frac{\det \left(C^{2}_{-(n,i)} \right)}{\det \left(B \right)} \cdot X(n+1) \right) ~,$
\item{(b)}
$\displaystyle
X(n) \cdot \frac{\det \left(C^1 \right)}{\det(B)} ~=~
-X(n+1) \cdot \frac{\det \left(C^2 \right)}{\det(B)}~.$
\end{description}
\end{claim}
\par\noindent{\bf Proof:~}
Since $A \cdot \overline{X}= \overline{0}$, it follows that
$B \cdot \overline{X}_{n-1}= - \left(X(n) \cdot \widetilde{A}_{n} + X(n+1)
\cdot \widetilde{A}_{n+1} \right)$.
As $B$ is invertible, we can apply Cramer's rule to express $X(i)$.
Let $M_{i}=[\widetilde{A}_{1}, \ldots, \widetilde{A}_{i-1},X(n) \cdot
\widetilde{A}_{n}+X(n+1) \cdot \widetilde{A}_{n+1}, \widetilde{A}_{i+1},
\ldots, \widetilde{A}_{n-1}] \in \mathbb{R}^{(n-1) \times (n-1)}$.
Let $M_{i}^{1}=[\widetilde{A}_{1}, \ldots, \widetilde{A}_{i-1},
\widetilde{A}_{n} , \widetilde{A}_{i+1}, \ldots, \widetilde{A}_{n-1}]$ and
\\
$M_{i}^{2}=[\widetilde{A}_{1}, \ldots, \widetilde{A}_{i-1}, \widetilde{A}_{n+1} ,
\widetilde{A}_{i+1}, \ldots, \widetilde{A}_{n-1}]$.
By Cramer's rule and the multilinearity of the determinant, it follows that
$$X(i)=-\left(X(n) \cdot \frac{\det\left(M_{i}^{1}\right)}{\det\left(B\right)}+
X(n+1) \cdot \frac{\det\left(M_{i}^{2}\right)}{\det\left(B\right)}\right).$$
We now turn to the connection between $\det(M_{i}^{1})$ and
$\det(C_{-(n,i)}^{1})$. Note that $M_{i}^{1}$ and $C_{-(n,i)}^{1}$ consist of
the same columns up to order. Specifically, we can employ
the same argument as in Claim \ref{cl:cramer_square} and show that
$\det(M_{i}^{1})=(-1)^{n-1-i} \cdot \det(C_{-(n,i)}^{1})$
(informally, the square matrix of Claim \ref{cl:cramer_square} is replaced by
a ``combination'' of $C^{1}$ and $C^{2}$). In a similar way, one can show that
$\det(M_{i}^{2})=(-1)^{n-1-i} \cdot \det(C_{-(n,i)}^{2})$.
Part (a) follows by combining these equalities with the expression for $X(i)$ above.
We now turn to prove part (b) of the claim.
Since $A_{n,0} \cdot \overline{X}=0$, by part (a)
we get that
\begin{eqnarray*}
A_{n,0} ~\cdot~ \overline{X}&=&
\sum_{i=1}^{n-1} A(n,i) \cdot X(i) +A(n,n) \cdot X(n)+ A(n,n+1) \cdot X(n+1)
\nonumber
\\&=&
X(n) \cdot \left(\sum_{i=1}^{n-1} (-1)^{n-i} \cdot A(n,i) \cdot
\frac{\det \left(C^{1}_{-(n,i)} \right)}{\det(B)} +A(n,n) \right)
\nonumber
\\&&
+ X(n+1) \cdot \left( \sum_{i=1}^{n-1} (-1)^{n-i} \cdot A(n,i) \cdot
\frac{\det \left(C^{2}_{-(n,i)} \right)}{\det(B)} +A(n,n+1)\right)
\nonumber
\\&=&
X(n) \cdot \frac{\sum_{i=1}^{n-1}(-1)^{n-i} \cdot A(n,i) \cdot
\det\left(C^{1}_{-(n,i)} \right) + A(n,n) \cdot \det(B)}{\det(B)}
\nonumber
\\&&
+ X(n+1) \cdot \frac{\sum_{i=1}^{n-1} (-1)^{n-i} \cdot A(n,i) \cdot
\det \left(C^{2}_{-(n,i)} \right) + A(n,n+1) \cdot \det(B)}{\det(B)}
\nonumber
\\&=&
X(n) \cdot \frac{\det(C^{1})}{\det(B)}+X(n+1) \cdot
\frac{\det(C^{2})}{\det(B)}=0~.
\nonumber
\end{eqnarray*}
The claim follows.
\quad\blackslug\lower 8.5pt\null\par
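Part (b) of Claim \ref{cl:cramer_non_square} can likewise be checked numerically on random $n \times (n+1)$ matrices; the sketch below extracts a null vector from the last right-singular vector of $A$.

```python
import numpy as np

def pencil_residual(n, seed=0):
    """For a random A in R^{n x (n+1)} and a null vector X (A @ X = 0),
    return the deviation from the identity of part (b) multiplied
    through by det(B): X(n) det(C^1) + X(n+1) det(C^2) = 0."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, n + 1))
    X = np.linalg.svd(A)[2][-1]           # spans the null space generically
    C1 = A[:, :n]                         # drop the last column of A
    C2 = np.delete(A, n - 1, axis=1)      # drop the second-to-last column
    return abs(X[n - 1] * np.linalg.det(C1) + X[n] * np.linalg.det(C2))
```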
\paragraph{Separation theorem for nonsymmetric matrices.}
We make use of the following fact due to Hall and Porsching \cite{HallInterlacing}, which is an extension of the Cauchy Interlacing Theorem for symmetric matrices.
\begin{lemma}[\cite{HallInterlacing}]
\label{lem:sep_thm}
Let $A$ be a nonnegative matrix with eigenvalues $\EigenValue(A)=\{\lambda_i(A) \mid i \in \{1, \ldots, n\}\}$. Let $A_i$ be the $i^{th}$ principal $(n-1) \times (n-1)$ minor of $A$, with eigenvalues $\lambda_j(A_i)$, $j \in \{1, \ldots, n-1\}$.
If $\lambda_p(A)$ is any real eigenvalue of $A$ different from $\lambda_1(A)$, then
$$\lambda_p(A) \leq \lambda_1(A_i) \leq \lambda_1(A)$$
for every $i \in \{1, \ldots, n\}$, with strict inequality on the right if $A$ is irreducible.
\end{lemma}
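One consequence that is easy to test numerically is that, for a nonnegative matrix, the spectral radius of every principal $(n-1)\times(n-1)$ minor is at most the spectral radius of the full matrix, strictly below it in the irreducible case. A sketch on invented matrices:

```python
import numpy as np

def radius(A):
    """Spectral radius of a square matrix."""
    return float(np.max(np.abs(np.linalg.eigvals(A))))

def principal_minor_radii(A):
    """Spectral radius of A together with those of its (n-1) x (n-1)
    principal minors A_i (delete row and column i)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return radius(A), [radius(np.delete(np.delete(A, i, 0), i, 1))
                       for i in range(n)]
```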
\subsection{\PFT~for square nonnegative irreducible matrices}
The \PFT~states the following.
\begin{theorem} [PF Theorem, \cite{PF_Frobenius,PF_Perron}]
\label{thm:pf_full}
Let $A \in \mathbb{R}_{\geq 0}^{n \times n}$ be a nonnegative irreducible matrix with
spectral radius $\SpectralRatio(A)$. Then $\max \EigenValue(A)>0$.
There exists an eigenvalue $\PFEigenValue \in \EigenValue(A)$ such that
$\PFEigenValue=\SpectralRatio(A)$, called the
\emph{Perron--Frobenius (PF) root} of $A$.
The algebraic multiplicity of $\PFEigenValue$ is one.
There exists an eigenvector $\overline{X}>0$ such that
$A \cdot \overline{X}=\PFEigenValue \cdot \overline{X}$.
The unique normalized vector $\PFEigenVector$ defined by
$A \cdot \PFEigenVector=\PFEigenValue \cdot \PFEigenVector$
and $||\PFEigenVector||_{1}=1$
is called the \emph{Perron--Frobenius (PF) vector}.
There are no nonnegative eigenvectors of $A$ associated with $\PFEigenValue$ except for positive multiples
of $\PFEigenVector$. If $A$ is a nonnegative irreducible periodic matrix
with period $\Period$, then $A$ has exactly $\Period$ eigenvalues of maximum modulus,
$\lambda_j= \SpectralRatio(A) \cdot e^{2 \pi i \cdot j/\Period}$ for
$j =1,2, \ldots, \Period,$
and all other eigenvalues of $A$ are of strictly smaller magnitude
than $\SpectralRatio(A)$.
\end{theorem}
\paragraph{Collatz--Wielandt characterization (the min-max ratio).}
Collatz and Wielandt \cite{PF_Collatz, PF_Wielandt} established the following
formula for the \PFR, also known as the min-max ratio characterization.
\begin{lemma}[Collatz--Wielandt \cite{PF_Collatz, PF_Wielandt}]
\label{lem:Collatz-Wielandt}
$\PFEigenValue=\min_{\overline{X} \in \mathcal{N}} \{\mathfrak{f}(\overline{X})\}$
~where~
$$\mathfrak{f}(\overline{X})= \max\limits_{1 \leq i \leq n, X(i)\neq 0}
\left \{\frac{(A \cdot \overline{X})_{i}}{X(i)} \right \} \mbox{~~and~~}
\mathcal{N}=\{\overline{X} \geq \overline{0},||\overline{X}||_{1}=1\}.$$
\end{lemma}
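The min-max characterization is easy to probe numerically: $\mathfrak{f}$ evaluated at the Perron vector returns the PF root, while any other positive vector can only increase the ratio. A Python sketch on an invented $2 \times 2$ matrix:

```python
import numpy as np

def cw_ratio(A, x):
    """Collatz--Wielandt function f(x) = max_i (Ax)_i / x_i over the
    positive coordinates of x."""
    x = np.asarray(x, dtype=float)
    Ax = np.asarray(A, dtype=float) @ x
    return max(Ax[i] / x[i] for i in range(len(x)) if x[i] > 0)

A = np.array([[2.0, 1.0], [1.0, 3.0]])    # irreducible and nonnegative
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
r = float(vals[k].real)                   # PF root
v = np.abs(vecs[:, k].real)
v /= v.sum()                              # Perron vector, ||v||_1 = 1
```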
Alternatively, this can be written as the following optimization problem.
\begin{equation}\label{LP:Stand_Perron}
\mbox{maximize} ~~ \beta \text{~~~subject to:~~~} \displaystyle A \cdot \overline{X} \leq 1/\beta
\cdot \overline{X},~~~ \displaystyle ||\overline{X}||_{1}=1,~~~
\displaystyle \overline{X} \geq \overline{0}.
\end{equation}
Let $\beta^{*}$ be the optimal solution of Program (\ref{LP:Stand_Perron}) and let
$\overline{X}^{*}$ be the corresponding optimal vector.
Using the representation of Program (\ref{LP:Stand_Perron}),
Lemma \ref{lem:Collatz-Wielandt} translates into the following.
\begin{theorem}
\label{thm:pf}
The optimal solution of \eqref{LP:Stand_Perron} satisfies $\beta^{*}=1/\PFEigenValue$, where $\PFEigenValue \in \mathbb{R}_{>0}$ is the maximal
eigenvalue of $A$, and $\overline{X}^{*}$ is given by the eigenvector
$\PFEigenVector$ corresponding to $\PFEigenValue$.
Hence for $\beta^{*}$, the $n$ constraints given by
$A \cdot \overline{X}^{*} \leq 1/\beta^{*} \cdot \overline{X}^{*}$ of
Program (\ref{LP:Stand_Perron}) hold with equality.
\end{theorem}
This can be interpreted as follows. Consider the ratio
$Y(i)= (A\cdot \overline{X})_{i}/X(i)$, viewed as the ``repression factor''
for entity $i$. The task is to find the input vector $\overline{X}$
that minimizes the maximum repression factor over all $i$,
thus achieving balanced growth.
In the same manner, one can characterize the $\max$-$\min$ ratio.
Again, the optimal value (resp., point) corresponds to the PF eigenvalue
(resp., eigenvector) of $A$.
In summary, when taking
$\overline{X}$ to be
the PF eigenvector $\PFEigenVector$ and $\beta^{*}=1/\PFEigenValue$,
all repression factors are equal, and both the $\max$-$\min$ and the
$\min$-$\max$ ratios are optimized.
\section{A generalized \PFT~for nonsquare systems}
\subsection{The Problem}
\paragraph{System definitions.}
Our framework consists of a set
$\EntitySet=\{\Entity_{1}, \ldots, \Entity_{n}\}$
of entities whose growth is regulated by a set of \emph{affectors}
$\Affectors=\{\Affectors_1, \Affectors_2, \ldots, \Affectors_m\}$,
for some $m \geq n$.
As part of the solution, each affector is set to be either {\em passive} or
{\em active}.
If an affector $\Affectors_j$ is set to be active, then it affects
each entity $\Entity_i$, by either increasing or decreasing it by a certain
amount $g(i,j)$, which is specified as part of the input.
If $g(i,j) >0$ (resp., $g(i,j) < 0$), then $\Affectors_j$ is referred to as a
\emph{supporter} (resp., \emph{repressor}) of $\Entity_i$.
For clarity we may write $g(\Entity_i,\Affectors_j)$ for $g(i,j)$.
The affector-entity relation is described by two matrices,
the \emph{supporters gain} matrix $\SupportersMatrix \in \mathbb{R}^{n \times m}$ and
the \emph{repressors gain} matrix $\RepressorsMatrix \in \mathbb{R}^{n \times m}$,
given by
\begin{equation*}
\SupportersMatrix(i,j) =
\begin{cases}
g(\Entity_i,\Affectors_j), & \text{if $g(\Entity_i,\Affectors_j) >0$;}\\
0, & \text{otherwise.}
\end{cases}
\end{equation*}
\begin{equation*}
\RepressorsMatrix(i,j) =
\begin{cases}
-g(\Entity_i,\Affectors_j), & \text{if $g(\Entity_i,\Affectors_j) <0$;}\\
0, & \text{otherwise.}
\end{cases}
\end{equation*}
Again, for clarity we may write $\RepressorsMatrix(\Entity_i,\Affectors_j)$
for $\RepressorsMatrix(i,j)$, and similarly for $\SupportersMatrix$.
We can now formally define a {\em system} as
$\System=\langle \SupportersMatrix, \RepressorsMatrix \rangle$, where
$\SupportersMatrix, \RepressorsMatrix \in \mathbb{R}^{n \times m}_{\geq 0}$,
$n=|\EntitySet|$ and $m=|\Affectors|$.
We denote the supporter (resp., repressor) set of $\Entity_i$ by
\begin{eqnarray*}
\Supporters_{i}(\System) &=& \{\Affectors_j \mid
\SupportersMatrix(\Entity_i,\Affectors_j)>0\},
\\
\Repressors_{i}(\System) &=& \{\Affectors_j \mid
\RepressorsMatrix(\Entity_i,\Affectors_j)>0\}.
\end{eqnarray*}
When $\System$ is clear from the context, we may omit it and simply write
$\Supporters_{i}$ and $\mathbb{R}epressors_{i}$.
Throughout, we restrict attention to systems in which $|\Supporters_i|\geq 1$
for every $\Entity_i \in \EntitySet$.
We classify the systems into three types:
\begin{description}
\item{(a)}
$\SquareSystemFamily=\{\System ~\mid~ m \leq n, |\Supporters_i|=1 \text{~for every~} \Entity_i \in \EntitySet\}$
is the family of \emph{Square Systems}.
\item{(b)}
$\WeakSystemFamily=\{\System \mid m \leq n+1, \exists j
\text{~s.t~} |\Supporters_j|=2 \text{~and~}|\Supporters_i|=1
\text{~for every~}\Entity_i \in \EntitySet \setminus \{\Entity_j\} \}$
is the family of \emph{Weakly Square Systems}, and
\item{(c)}
$\SystemFamily=\{\System \mid m>n+1\}$ is the family of
\emph{Nonsquare Systems}.
\end{description}
\paragraph{The generalized PF optimization problem.}
Consider a set of $n$ entities and gain matrices
$\SupportersMatrix,\mathbb{R}epressorsMatrix \in \mathbb{R}^{n \times m}$, for $m \geq n$.
The main application of the generalized \PFT~is the following optimization
problem, which is an extension of Program (\ref{LP:Stand_Perron}).
\begin{align}
\label{LP:Ext_Perron}
\mbox{maximize~~} ~& \beta \mbox{~~subject to:~~}
\\
& \displaystyle \mathbb{R}epressorsMatrix \cdot \overline{X} ~\leq~
1/\beta \cdot \SupportersMatrix \cdot \overline{X} ~,&
\label{eq:SR} \\
& \displaystyle \overline{X} \geq \overline{0}~, &
\label{eq:Ineq}\\
& \displaystyle ||\overline{X}||_{1}=1~. &
\nonumber
\end{align}
We begin with a simple observation.
An affector $\Affectors_j$ is \emph{redundant} if
$\SupportersMatrix(\Entity_i,\Affectors_j)=0$ for every $i$.
\begin{observation}
\label{obs:only_positive}
If $\Affectors_j$ is \emph{redundant}, then $X(j)=0$ in any optimal solution
$\overline{X}$.
\end{observation}
In view of Obs. \ref{obs:only_positive}, we may hereafter restrict attention
to the case where there are no redundant affectors in the system,
as any redundant affector $\Affectors_j$ can be removed and simply assigned
$X(j)=0$.
We now proceed with some definitions.
Let $X(\Affectors)$ denote the value of $\Affectors$ in $\overline{X}$, i.e., $X(\Affectors)=X(k)$, where the $k$-th entry in $\overline{X}$ corresponds to $\Affectors$.
An affector $\Affectors$ is \emph{active} in a solution $\overline{X}$ if $X(\Affectors)>0$.
Denote the set of affectors taken to be active in a solution $\overline{X}$
by $NZ(\overline{X})=\{\Affectors_j \mid X(\Affectors_j)> 0\}$.
Let $\beta^{*}(\System)$ denote the optimal value of Program
(\ref{LP:Ext_Perron}), i.e., the maximal positive value $\beta$ for which there exists
a nonnegative, nonzero vector $\overline{X}$ satisfying the constraints of
Program (\ref{LP:Ext_Perron}).
When the system $\System$ is clear from the context
we may omit it and simply write $\beta^*$.
A vector $\overline{X}_{\widetilde{\beta}}$ is \emph{feasible} for
$\widetilde{\beta} \in (0,\beta^{*}]$ if it satisfies all the constraints of
Program (\ref{LP:Ext_Perron}) with $\beta=\widetilde{\beta}$.
A vector $\overline{X}^{*}$ is \emph{optimal} for $\System$ if it is feasible
for $\beta^{*}(\System)$, i.e., $\overline{X}^{*}=\overline{X}_{\beta^{*}}$.
The system $\System$ is \emph{feasible} for $\beta$ if
$\beta\leq \beta^{*}(\System)$, i.e., there exists a feasible
$\overline{X}_{\beta}$ solution for Program (\ref{LP:Ext_Perron}).
\par For a vector $\overline{X}$, the \emph{total repression} on $\Entity_{i}$ in
$\System$ is
$\TotR(\overline{X}, \System)_{i}=(\mathbb{R}epressorsMatrix \cdot \overline{X})_{i}$.
Analogously, the \emph{total support} for $\Entity_{i}$ is
$\TotS(\overline{X}, \System)_{i}=(\SupportersMatrix \cdot \overline{X})_{i}$.
We now have the following alternative formulation for the constraints of Eq. (\ref{eq:SR}), stated individually for each entity $\Entity_i$.
\begin{equation}
\label{eq:SR_ind}
\TotR(\overline{X}, \System)_{i} ~\leq~
1/\beta \cdot \TotS(\overline{X}, \System)_{i}~\text{~for every~} i \in \{1, \ldots, n\}~.
\end{equation}
\begin{fact}
\label{fc:feasible_tots_totr}
Eq. (\ref{eq:SR}) holds iff Eq. (\ref{eq:SR_ind}) holds.
\end{fact}
We classify the $m+n$ linear
inequality constraints of Program (\ref{LP:Ext_Perron}) into two types of constraints:
\begin{description}
\item{(1)}
SR (Support-Repression) constraints:
the $n$ constraints of Eq. (\ref{eq:SR}) or alternatively of Eq. (\ref{eq:SR_ind}).
\item{(2)}
Nonnegativity constraints:
the $m$ constraints of Eq. (\ref{eq:Ineq}).
\end{description}
When $\System$ is clear from context, we may omit it and simply write
$\TotR(\overline{X})_{i}$ and $\TotS(\overline{X})_{i}$.
As a direct application of the generalized PF Theorem, there is an exact
polynomial time algorithm for solving Program (\ref{LP:Ext_Perron})
for irreducible systems, as defined next.
\subsection{Irreducibility of PF systems}
\paragraph{Irreducibility of square systems.}
A square system $\System=\langle \SupportersMatrix,\mathbb{R}epressorsMatrix\rangle\in \SquareSystemFamily$ is {\em irreducible} iff
(a) $\SupportersMatrix$ is nonsingular and
(b) $\mathbb{R}epressorsMatrix$ is irreducible.
Given an irreducible square $\System$,
let
\begin{equation*}
Z(\System) ~=~ \left(\SupportersMatrix \right)^{-1} \cdot \mathbb{R}epressorsMatrix~.
\end{equation*}
Note the following two observations.
\begin{observation}
\label{cl:irreducible_supporter}
(a) If $\SupportersMatrix$ is nonsingular, then
$\Supporters_i \cap \Supporters_j = \emptyset$ for every $i \neq j$.\\
(b) If $\System$ is an irreducible system,
then $Z(\System)$ is an irreducible matrix as well.
\end{observation}
\par\noindent{\bf Proof:~}
Consider part (a). Since $\System$ is square, $|\Supporters_i|=1$ for every $i$. Combined with the fact that $\SupportersMatrix$ is nonsingular, this implies that $\SupportersMatrix$ is equivalent (up to column permutations) to a diagonal matrix with a strictly positive diagonal, hence $\Supporters_i \cap \Supporters_j= \emptyset$ for every $i \neq j$. Part (b) follows by definition.
\quad\blackslug\lower 8.5pt\null\par
Throughout, when considering square systems, it is convenient to assume that
the entities and affectors are ordered in such a way that $\SupportersMatrix$
is a diagonal matrix, i.e., in $\SupportersMatrix$
(as well as in $\mathbb{R}epressorsMatrix$) the $i^{th}$ column corresponds to
$\Affectors_{k} \in \Supporters_i$, the unique supporter of $\Entity_i$.
\paragraph{Selection matrices.}
To define a notion of irreducibility for a nonsquare system
$\System \notin \SquareSystemFamily$, we first present the notion of a
{\em selection matrix}.
A selection matrix $\FilterMatrix \in \{0,1\}^{m \times n}$ is \emph{legal}
for $\System$ iff for every entity $\Entity_i \in \EntitySet$ there exists
exactly one supporter $\Affectors_j \in \Supporters_i$ such that
$\FilterMatrix(j,i)=1$.
Such a matrix $\FilterMatrix$ can be thought of as representing a selection
performed on $\Supporters_i$ by each entity $\Entity_i$, picking exactly one
of its supporters. Let $\System(\FilterMatrix)$ be the square system corresponding to the legal
selection matrix $\FilterMatrix$, namely,
$\System(\FilterMatrix)=\langle \SupportersMatrix \cdot
\FilterMatrix, \mathbb{R}epressorsMatrix \cdot \FilterMatrix\rangle.$ In the resulting system there are $m' \leq n$ non-redundant affectors.
Since
redundant affectors can be discarded from the system (by Obs. \ref{obs:only_positive}), it follows that
the number of active affectors becomes at most the number of entities,
resulting in a square system.
Denote the family of legal selection matrices,
capturing the ensemble of all square systems hidden in $\System$, by
\begin{equation}
\label{eq:FilterMatrixFamily}
\FilterMatrixFamily(\System) ~=~
\{\FilterMatrix \mid \FilterMatrix \text{~is legal for~} \System \}.
\end{equation}
When $\System$ is clear from the context, we simply write $\FilterMatrixFamily$.
Let $\overline{X}_{\beta} \in \mathbb{R}^{n}$ be a solution for the square system $\System(\FilterMatrix)$ for some $\FilterMatrix$. The \emph{natural extension} of $\overline{X}_{\beta} \in \mathbb{R}^{n}$ into a solution $\overline{X}^{m}_{\beta} \in \mathbb{R}^m$ of the original system $\System$ is defined by letting $X^{m}_{\beta}(\Affectors_k)=X_{\beta}(\Affectors_k)$
if $\sum_{\Entity_i \in \EntitySet}\FilterMatrix(\Affectors_k, \Entity_i)>0$
and $X^{m}_{\beta}(\Affectors_k)=0$ otherwise.
\begin{observation}
\label{obs:filter_to_square}
(a) $\System(\FilterMatrix) \in \SquareSystemFamily$
for every $\FilterMatrix \in \FilterMatrixFamily$.\\
(b) For every solution $\overline{X}_{\beta} \in \mathbb{R}^{n}$ for system $\System(\FilterMatrix)$, for some matrix $\FilterMatrix \in \FilterMatrixFamily$,
its natural extension $\overline{X}_{\beta}^{m}$ is a feasible solution for the original $\System$.\\
(c) $\beta^{*}(\System) \geq \beta^{*}(\System(\FilterMatrix))$ for every selection matrix $\FilterMatrix \in \FilterMatrixFamily$.
\end{observation}
\paragraph{Irreducibility of nonsquare systems.}
We are now ready to define the notion of irreducibility for nonsquare systems, as follows.
A nonsquare system $\System$ is \emph{irreducible} iff $\System(\FilterMatrix)$ is irreducible for every selection matrix $\FilterMatrix \in \FilterMatrixFamily$.
Note that this condition is the ``minimal'' \emph{necessary} condition
for our theorem to hold, as explained next.
Our theorem states that the optimum solution for the nonsquare system is the optimum solution for the best \emph{embedded} square system. It is easy to see that for any nonsquare system $\System=\langle \SupportersMatrix, \mathbb{R}epressorsMatrix \rangle$,
one can increase or decrease any entry $g(i,j)$ in the matrices, while maintaining the sign of each entry in the matrices, such that a particular
selection matrix $\FilterMatrix^{*} \in \FilterMatrixFamily$ would correspond to the optimal square system. With an optimal embedded square system at hand, which is also guaranteed to be irreducible (by the definition of irreducible nonsquare systems), our theorem can then apply the traditional \PFT, where a spectral characterization for the solution of Program (\ref{LP:Stand_Perron}) exists. Note that irreducibility is a \emph{structural} property of the system, in the sense that it does not depend on the exact gain values, but rather on the sign of the gains, i.e., to determine irreducibility, it is sufficient to observe the binary matrices $\SupportersMatrix_{B}, \mathbb{R}epressorsMatrix_{B}$, treating $g(i,j) \neq 0$ as $1$. On the other hand, deciding which of the embedded square systems has the maximal eigenvalue (and hence is optimal), depends on the \emph{precise} values of the entries of these matrices. It is therefore necessary that the structural property of irreducibility would hold for any specification of gain values (while maintaining the binary representation of $\SupportersMatrix_{B}, \mathbb{R}epressorsMatrix_{B}$).
Indeed, consider a reducible nonsquare system, for which there exists an embedded square system $\System(\FilterMatrix)$ that is reducible. It is not hard to see that there exists a specification of gain values that would render this square system $\System(\FilterMatrix)$ optimal (i.e., with the maximal eigenvalue among all other embedded square systems). But since $\System(\FilterMatrix)$ is reducible, the \PFT\ cannot be applied, and in particular, the corresponding eigenvector is no longer guaranteed to be \emph{positive}.
\begin{claim}
\label{cor:distinct}
In an irreducible system $\System$,
$\Supporters_i \cap \Supporters_j=\emptyset$ for every $\Entity_i, \Entity_j$.
\end{claim}
\par\noindent{\bf Proof:~}
Assume, toward contradiction, that there exists some affector
$\Affectors_k \in \Supporters_i \cap \Supporters_j$, and consider a selection
matrix $\FilterMatrix$ for which $\FilterMatrix(k,i)=1$ and
$\FilterMatrix(k,j)=1$. It then follows
by Obs. \ref{cl:irreducible_supporter}(a)
that $\SupportersMatrix \cdot \FilterMatrix$ is singular.
But the irreducibility of $\System$ implies that
$\SupportersMatrix \cdot \FilterMatrix$ is nonsingular for every
$\FilterMatrix \in \FilterMatrixFamily$; contradiction.
\quad\blackslug\lower 8.5pt\null\par
\paragraph{Constraint graphs: a graph theoretic representation.}
We now provide a graph theoretic characterization of irreducible systems $\System$.
Let $\ConstraintsGraph_{\System}(V,E)$ be the directed \emph{constraint graph} for the system $\System$, defined as follows:
$V= \EntitySet$, and the rule for a directed edge $e_{i,j}$ from $\Entity_i$ to $\Entity_j$ is
\begin{equation}
\label{eq:cg_condition}
e_{i,j} \in E ~~~~\mbox{~iff~}~~~~
\Supporters_{i} \cap \mathbb{R}epressors_j \neq \emptyset.
\end{equation}
Note that it is possible that
$\ConstraintsGraph_{\System} \nsubseteq \ConstraintsGraph_{\System(\FilterMatrix)}$ for some $\FilterMatrix \in \FilterMatrixFamily$.
A graph $\ConstraintsGraph_{\System}(V,E)$ is \emph{robustly strongly connected}
if $\ConstraintsGraph_{\System(\FilterMatrix)}(V,E)$ is strongly connected
for every $\FilterMatrix \in \FilterMatrixFamily$.
\begin{observation}
\label{obs:reducible_graph_connected}
Let $\System$ be an irreducible system.
\begin{description}
\item{(a)}
If $\System$ is square, then
$\ConstraintsGraph_{\System}(V,E)$ is strongly connected.
\item{(b)}
If $\System$ is nonsquare, then $\ConstraintsGraph_{\System}(V,E)$ is
robustly strongly connected.
\end{description}
\end{observation}
\par\noindent{\bf Proof:~}
Starting with part (a), in a square system $|\Supporters_i|=1$ and therefore
by definition, the two graphs coincide.
Next note that for a diagonal $\SupportersMatrix$
(as can be achieved by column reordering), $\ConstraintsGraph_{\System}(V,E)$
corresponds to $(\mathbb{R}epressorsMatrix)^{T}$ (by treating positive entries as $1$).
Since $\mathbb{R}epressorsMatrix$ is irreducible (and hence corresponds to a
strongly connected digraph), it follows that the matrix
$(\mathbb{R}epressorsMatrix)^{T}$ is irreducible, and hence
$\ConstraintsGraph_{\System}(V,E)$ is strongly connected.
To prove part (b), consider an arbitrary
$\FilterMatrix \in \FilterMatrixFamily$.
Since $\System(\FilterMatrix)$ is irreducible, it follows that
$\mathbb{R}epressorsMatrix \cdot \FilterMatrix$ is irreducible, and by
Obs. \ref{obs:reducible_graph_connected}(a),
$\ConstraintsGraph_{\System(\FilterMatrix)}(V,E)$ is strongly connected.
The claim follows.
\quad\blackslug\lower 8.5pt\null\par
\paragraph{Partial selection for irreducible systems.}
Let $\SelectionVec' \subseteq \Affectors$ be a subset of affectors in an irreducible system $\System$.
Then $\SelectionVec'$ is a \emph{partial selection}
if there exists a subset of entities $V' \subseteq \EntitySet$ such that (a) $|\SelectionVec'|=|V'|$, and (b) for every $\Entity_i \in V'$, $|\Supporters_i \cap \SelectionVec'|=1$.
\\
That is, every entity in $V'$ has a single representative supporter
in $\SelectionVec'$. We refer to $V'$ as the set of entities \emph{determined} by $\SelectionVec'$.
In the system $\System(\SelectionVec')$, the supporters $\Affectors_k$
of any $\Entity_i \in V'$ that were not selected by $\Entity_i$, i.e.,
$\Affectors_k \notin \SelectionVec' \cap \Supporters_i$, are discarded.
In other words, the system's affectors set consists of the selected supporters
$\SelectionVec'$, and the supporters of entities that have not made up
their selection in $\SelectionVec'$.
We now turn to describe $\System(\SelectionVec')$ formally. The set of affectors in $\System(\SelectionVec')$ is given by
$\Affectors(\System(\SelectionVec')) =
\SelectionVec' \cup
\bigcup_{\Supporters_i \cap \SelectionVec'=\emptyset} \Supporters_i$.
The number of affectors in $\System(\SelectionVec')$ is denoted by
$m(\SelectionVec')=|\Affectors(\System(\SelectionVec'))|$.
Recall that the $j^{th}$ column of the
matrices $\SupportersMatrix ,\mathbb{R}epressorsMatrix$ corresponds to $\Affectors_j$.
Let $ind(\Affectors_j) =
j-|\{\Affectors_\ell \notin \Affectors(\System(\SelectionVec')), \ell\leq j-1\}|$
be the index of the affector $\Affectors_j$ in the new system,
$\System(\SelectionVec')$ (i.e, the $ind(\Affectors_j)^{th}$ column in the contracted matrices $\SupportersMatrix(\SelectionVec'),\mathbb{R}epressorsMatrix(\SelectionVec')$ corresponds to $\Affectors_j$).
Define the partial selection matrix $\FilterMatrix(\SelectionVec') \in \{0,1\}^{m \times m(\SelectionVec')}$
such that $\FilterMatrix(\SelectionVec')_{j,ind(\Affectors_j)}=1$ for every
$\Affectors_j \in \Affectors(\System(\SelectionVec'))$, and all other entries
are $0$.
Finally, let
$\System(\SelectionVec') = \langle
\SupportersMatrix(\SelectionVec'),\mathbb{R}epressorsMatrix(\SelectionVec')\rangle,$
where $\SupportersMatrix(\SelectionVec') =
\SupportersMatrix \cdot \FilterMatrix(\SelectionVec')
\mbox{~~and~~} \mathbb{R}epressorsMatrix(\SelectionVec') =
\mathbb{R}epressorsMatrix \cdot \FilterMatrix(\SelectionVec')$.
Note that $\SupportersMatrix(\SelectionVec'), \mathbb{R}epressorsMatrix(\SelectionVec')
\in \mathbb{R}^{n \times m(\SelectionVec')}$.
Observe that if the selection $\SelectionVec'$ is a complete legal selection,
then $|\SelectionVec'|=n$ and the system $\System(\SelectionVec')$ is a square
system. In summary, we have two equivalent representations for square systems
in the nonsquare system $\System$:
\\
(a) by specifying a complete selection $\SelectionVec$, $|\SelectionVec|=n$,
and
\\
(b) by specifying the selection matrix,
$\FilterMatrix \in \FilterMatrixFamily$.
\\
Representations (a) and (b) are equivalent in the sense that the two square systems
$\System(\FilterMatrix(\SelectionVec))$ and $\System(\SelectionVec)$
are the same.
We now show that if the system $\System$ is irreducible, then so must be
any $\System(\SelectionVec')$, for any partial selection $\SelectionVec'$.
\begin{observation}
\label{obs:irreducible_selection}
Let $\System$ be an irreducible system. Then $\System(\SelectionVec')$ is also
irreducible, for every partial selection $\SelectionVec'$.
\end{observation}
\par\noindent{\bf Proof:~}
Recall that a system is irreducible iff every hidden square system is
irreducible. I.e., the square system $\System(\FilterMatrix)$ is irreducible
for every $\FilterMatrix \in \FilterMatrixFamily(\System)$.
We now show that if
$\FilterMatrix \in \FilterMatrixFamily(\System(\SelectionVec'))$,
then $\FilterMatrix \in \FilterMatrixFamily(\System)$.
This follows immediately by Eq. (\ref{eq:FilterMatrixFamily}) and the fact that
$\Supporters_i(\System(\SelectionVec')) \subseteq \Supporters_i(\System)$.
\quad\blackslug\lower 8.5pt\null\par
\par\noindent{\bf Agreement of partial selections.}
Let $\SelectionVec_1, \SelectionVec_2 \subseteq \Affectors$ be partial
selections for $V_1, V_2 \subseteq \EntitySet$ respectively.
Then we denote by $\SelectionVec_1 \sim \SelectionVec_2$ the property that the partial selections \emph{agree}, namely,
$\SelectionVec_1 \cap \Supporters_j=\SelectionVec_2 \cap \Supporters_j$
for every $\Entity_j \in V_1 \cap V_2$.
\begin{observation}
\label{obs:chain_sym}
Consider $V_1,V_2, V_3 \subseteq \EntitySet$ determined by the partial selections
$\SelectionVec_1,\SelectionVec_2,\SelectionVec_3$ respectively, such that
$V_1 \subset V_2$, $\SelectionVec_1 \sim \SelectionVec_2$ and
$\SelectionVec_2 \sim \SelectionVec_3$.
Then also $\SelectionVec_3 \sim \SelectionVec_1$.
\end{observation}
\par\noindent{\bf Proof:~}
Since $V_1 \subset V_2$ and $\SelectionVec_1 \sim \SelectionVec_2$, the selection
$\SelectionVec_2$ is at least as restrictive as $\SelectionVec_1$: it prescribes
the same choices on the strictly larger entity set $V_2$. Hence every partial
selection $\SelectionVec_3$ that agrees with $\SelectionVec_2$ on $V_2 \cap V_3$
also agrees with $\SelectionVec_1$ on $V_1 \cap V_3$.
\quad\blackslug\lower 8.5pt\null\par
\paragraph{Generalized \PFT~for nonnegative irreducible systems.}
Recall that the root of a square system $\System \in \SquareSystemFamily$ is
$\PFEigenValue(\System)=\max \left \{\EigenValue(Z(\System)) \right\}.$
$\PFEigenVector(\System)$ is the eigenvector of $Z(\System)$
corresponding to $\PFEigenValue(\System)$.
We now turn to define the \emph{generalized Perron--Frobenius (PF) root} of
a nonsquare system $\System \notin \SquareSystemFamily$, which is given by
\begin{equation}
\label{eq:general_pf_root}
\PFEigenValue(\System) ~=~ \min_{\FilterMatrix \in \FilterMatrixFamily}
\left \{\PFEigenValue(\System(\FilterMatrix)) \right\}.
\end{equation}
Let $\FilterMatrix^*$ be the selection matrix that achieves the minimum in
Eq. (\ref{eq:general_pf_root}). We now describe the corresponding eigenvector
$\PFEigenVector(\System)$. Note that $\PFEigenVector(\System) \in \mathbb{R}^{m}$,
whereas $\PFEigenVector(\System(\FilterMatrix^*)) \in \mathbb{R}^{n}$.
Consider $\overline{X}'=\PFEigenVector(\System(\FilterMatrix^*))$ and let
$\PFEigenVector(\System)=\overline{X}$, where
\begin{equation}
\label{eq:general_pf_vector}
X(\Affectors_j) ~=~
\begin{cases}
X'(\Affectors_j), & \text{if $\sum_{i=1}^{n}\FilterMatrix^{*}(j,i)>0$;}
\\
0, & \text{otherwise.}
\end{cases}
\end{equation}
We next state our main result, which is a generalized variant of the \PFT\ for every nonnegative nonsquare irreducible system.
\begin{theorem}
\label{thm:pf_ext}
Let $\System$ be an irreducible and nonnegative nonsquare system. Then
\begin{description}
\item{(Q1)}
$\PFEigenValue(\System)>0$,
\item{(Q2)}
$\PFEigenVector(\System) \geq 0$,
\item{(Q3)}
$|NZ(\PFEigenVector(\System))|=n$,
\item{(Q4)}
$\PFEigenVector(\System)$ is not unique.
\item{(Q5)}
The generalized Perron root of $\System$ satisfies
$\displaystyle \PFEigenValue = \min\limits_{\overline{X} \in \mathcal{N}}
\left\{ \mathfrak{f}(\overline{X}) \right\}$, where
$$\mathfrak{f}(\overline{X}) ~=~ \max\limits_{1 \leq i \leq n, \left(\SupportersMatrix \cdot \overline{X} \right)_{i}\neq 0}
\left\{ \frac{\left(\mathbb{R}epressorsMatrix \cdot \overline{X} \right)_{i}}
{ \left(\SupportersMatrix \cdot \overline{X} \right)_{i}} \right\}$$
and $\mathcal{N}=\{\overline{X} \geq 0,||\overline{X}||_{1}=1,
\SupportersMatrix \cdot \overline{X}\neq 0\}.$
I.e., the Perron-Frobenius (PF) eigenvalue is $1/\beta^{*}$ where $\beta^{*}$
is the optimal value of Program (\ref{LP:Ext_Perron}),
and the \PFE~ is the corresponding optimal point. Hence for $\beta^{*}$, the $n$ constraints of
Eq. (\ref{eq:SR}) hold with equality.
\end{description}
\end{theorem}
\paragraph{The difficulty: Lack of log-convexity.}
Before plunging into a description of our proof, we first discuss
a natural approach one may consider for proving Thm. \ref{thm:pf_ext}
in general and solving Program (\ref{LP:Ext_Perron}) in particular,
and explain why this approach fails in this case.
A non-convex program can often be turned into an equivalent convex one
by performing a standard variable exchange.
This allows the program to be solved by convex optimization techniques (see \cite{TanFL11} for more information). An example of a program that is amenable to this technique is
Program (\ref{LP:Stand_Perron}), which is \emph{log-convex}
(see Claim \ref{cl:non_convex}(a)), namely, it becomes convex
after certain term replacements.
Unfortunately, in contrast with Program (\ref{LP:Stand_Perron}), the generalized
Program (\ref{LP:Ext_Perron}) is not log-convex
(see Claim \ref{cl:non_convex}(b)),
and hence cannot be handled in this manner.
More formally, for a vector $\overline{X}=(X(1), \ldots, X(m))$ and $\alpha\in \mathbb{R}$, denote the component-wise $\alpha$-power of $\overline{X}$ by $\overline{X}^{\alpha}=(X(1)^{\alpha}, \ldots, X(m)^{\alpha})$. An optimization program $\Pi$ is \emph{log-convex} if given two feasible solutions $\overline{X}_1, \overline{X}_2$ for $\Pi$, their log-convex combination
$\overline{X}_{\delta}= \overline{X}_1^{\delta} \cdot \overline{X}_2^{(1-\delta)}$ (where ``$\cdot$'' represents component-wise multiplication)
is also a solution for $\Pi$, for every $\delta \in [0,1]$.
In the following we ignore the constraint $||\overline{X}||_{1}=1$, since we only
validate the feasibility of nonzero nonnegative vectors; this constraint
can be established afterwards by normalization.
\begin{claim}
\label{cl:non_convex}
(a) Program (\ref{LP:Stand_Perron}) is log-convex (without the $||\overline{X}||_{1}=1$ constraint). \\
(b) Program (\ref{LP:Ext_Perron}) is not log-convex (even without the $||\overline{X}||_{1}=1$ constraint).
\end{claim}
\par\noindent{\bf Proof:~}
We start with (a). In \cite{LogConvex} it is shown that the power-control problem is log-convex.
The log-convexity of Perron-Frobenius eigenvalue is also discussed in \cite{Boyd-Conv-Opt-Book}, for completeness we prove it here.
We use the same technique as \cite{LogConvex} and show it directly for
Program (\ref{LP:Stand_Perron}). Let $A$ be a nonnegative irreducible matrix and let $\overline{X}_1,\overline{X}_2$ be two feasible solutions for Program (\ref{LP:Stand_Perron}) with values $\beta_1$ and $\beta_2$, respectively. We now show that $\overline{X}_3=\overline{X}_1^{\alpha} \cdot \overline{X}_2^{(1-\alpha)}$ (where ``$\cdot$'' represents entry-wise multiplication) is a feasible solution for $\beta_3=\beta_1^{\alpha} \cdot \beta_2^{1-\alpha}$, for any $\alpha \in [0,1]$, i.e., that $A \cdot \overline{X}_3 \leq 1/\beta_3 \cdot \overline{X}_3$. Let $\eta_i=X_1(i)/(A \cdot \overline{X}_1)_{i}$, $\gamma_i=X_2(i)/(A \cdot \overline{X}_2)_{i}$, and $\delta_i=X_3(i)/(A \cdot \overline{X}_3)_{i}$. By the feasibility of $\overline{X}_1$ (resp., $\overline{X}_2$) it follows that $\eta_i \geq \beta_1$ (resp., $\gamma_i \geq \beta_2$) for every $i \in \{1, \ldots, n\}$.
It then follows that
\begin{equation}
\label{eq:log_con}
\frac{\delta_i}{\eta_i^{\alpha} \cdot \gamma_i^{1-\alpha}}= \frac{\left(\sum_{j}A(i,j) \cdot X_1(j)\right)^{\alpha} \cdot \left(\sum_{j}A(i,j) \cdot X_2(j)\right)^{1-\alpha} }{\sum_{j}A(i,j) \cdot X_1(j)^{\alpha} \cdot X_2(j)^{1-\alpha}}~.
\end{equation}
Let $p_j=\left(A(i,j)X_1(j)\right)^{\alpha}$ and $q_j=\left(A(i,j)X_2(j)\right)^{1-\alpha}$. Then Eq. (\ref{eq:log_con}) becomes
\begin{eqnarray*}
\label{eq:log_con2}
\frac{\delta_i}{\eta_i^{\alpha} \cdot \gamma_i^{1-\alpha}}&=& \frac{\left(\sum_{j} p_{j}^{1/\alpha}\right)^{\alpha} \cdot \left( \sum_{j} q_{j}^{1/(1-\alpha)}\right)^{1-\alpha} }{\sum_{j}p_j \cdot q_j}~\geq~ 1
\end{eqnarray*}
where the last inequality follows from H\"{o}lder's inequality, which can be safely applied since $p_j,q_j \geq 0$ for every $j \in \{1, \ldots,n\}$. We therefore get that
for every $i$, $\delta_i \geq \eta_i^{\alpha} \cdot \gamma_i^{1-\alpha} \geq \beta_3$,
concluding that $X_3(i)/(A \cdot \overline{X}_3)_{i}\geq \beta_3$ and $A \cdot \overline{X}_3 \leq 1/\beta_3 \cdot \overline{X}_3$ as required. Part (a) is established. We now consider (b).
For vector $\overline{Y} \in \mathbb{R}^{m}$, $m \geq i$, recall that $\overline{Y}_{i}=(Y(1), \ldots, Y(i))$, the $i$ first coordinates of $\overline{Y}$.
For given repressor and supporter matrices
$\mathbb{R}epressorsMatrix,\SupportersMatrix\in\mathbb{R}^{n\times m}$,
define the following program. For $\overline{Y}\in \mathbb{R}^{m+1}$:
\begin{eqnarray}
\label{LP:not convex}
&&\max ~ Y({m+1}) ~
\mathrm{s.t.}~\\
&&
\displaystyle Y(m+1)\cdot \mathbb{R}epressorsMatrix\cdot(\overline{Y}_{m})^T
\leq \SupportersMatrix \cdot(\overline{Y}_m)^T \nonumber
\\
&& \displaystyle \overline{Y} \geq \overline{0} \nonumber
\\
&& \displaystyle \overline{Y}_m \neq \overline{0} \nonumber
\end{eqnarray}
\noindent
This program is equivalent to Program (\ref{LP:Ext_Perron}).
An optimal solution $\overline{Y}$ for Program (\ref{LP:not convex})
``includes" an optimal solution for Program (\ref{LP:Ext_Perron}),
where $\beta=Y(m+1)$ and $\overline{X}=\overline{Y}_m$.
We prove that Program (\ref{LP:not convex}) is not log-convex by showing
the following example. Consider the repressor and supporters matrices
\begin{eqnarray*}
\mathbb{R}epressorsMatrix=
\begin{pmatrix}
0 & 2 & 1\\
1 & 0 & 0
\end{pmatrix}
\mbox{ ~~~and~~~ }
\SupportersMatrix=
\begin{pmatrix}
1/2 & 0 & 0 \\
0 & 4 & 4
\end{pmatrix}.
\end{eqnarray*}
It can be verified that $Y_1=(2, 1/2, 0, 1)$ and $Y_2= (4, 0, \sqrt{2}, \sqrt{2})$ are feasible. However, their log-convex combination $Y=Y_1^{1/2} \cdot Y_2^{1/2}$ is not a feasible solution for this system. The claim follows.
\quad\blackslug\lower 8.5pt\null\par
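The counterexample above can be checked numerically; a short sketch (assuming numpy; the helper \texttt{feasible} simply encodes the constraints of Program (\ref{LP:not convex})):

```python
import numpy as np

R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 0.0]])
S = np.array([[0.5, 0.0, 0.0],
              [0.0, 4.0, 4.0]])

def feasible(Y, tol=1e-9):
    """Constraints of the program: Y >= 0, Y_m != 0, and
    Y(m+1) * R Y_m <= S Y_m (here m = 3)."""
    Ym, beta = np.asarray(Y[:3], dtype=float), Y[3]
    if np.any(Ym < -tol) or not np.any(Ym > tol):
        return False
    return bool(np.all(beta * (R @ Ym) <= S @ Ym + tol))

Y1 = np.array([2.0, 0.5, 0.0, 1.0])
Y2 = np.array([4.0, 0.0, np.sqrt(2.0), np.sqrt(2.0)])
Y  = np.sqrt(Y1 * Y2)   # log-convex combination with delta = 1/2
```

Both $Y_1$ and $Y_2$ pass the constraints (the latter with equality), while their log-convex combination violates the second SR constraint.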
\subsubsection{Algorithm for testing irreducibility}
In this subsection, we provide a polynomial-time algorithm for testing the irreducibility
of a given nonnegative system $\System$. Note that if $\System$ is a square
system, then irreducibility can be tested in a straightforward manner
by checking that $\mathbb{R}epressorsMatrix$ is irreducible and that $\SupportersMatrix$ is nonsingular.
However, recall that a nonsquare system $\System$ is irreducible iff every hidden square system $\System(\FilterMatrix)$,
$\FilterMatrix \in \FilterMatrixFamily$, is irreducible.
Since $\FilterMatrixFamily$ might be exponentially large, a brute-force testing of $\System(\FilterMatrix)$ for every $\FilterMatrix$ is too costly, hence another approach is needed.
Before presenting the algorithm, we provide some notation.
\par Consider a directed graph $G=(V,E)$.
Denote the set of incoming neighbors of a node $\Entity_k$ in $G$ by $\Gamma^{in}(\Entity_k,G)=\{ \Entity_j \mid e_{j,k} \in E(G)\}$. The set of incoming neighbors of a set of nodes $V' \subseteq V$ is denoted by $\Gamma^{in}(V',G)=\bigcup_{\Entity_k \in V'}\Gamma^{in}(\Entity_k,G)$.
\paragraph{Algorithm Description.}
To test irreducibility, Algorithm \TestIrred\ (see Fig. \ref{figure:irreducibility_tester}) must verify that the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix)}$ of every $\FilterMatrix \in \FilterMatrixFamily$ is strongly connected.
The algorithm consists of at most $n-1$ rounds.
In round $t$, it is given as input a partition $\mathcal{C}^{t}=\{C^{t}_{1}, \ldots, C^{t}_{k_t}\}$ of $\EntitySet$ into $k_t$ disjoint clusters such that $\bigcup_{i} C^{t}_{i}=\EntitySet$. For round $t=0$, the input is a partition $\mathcal{C}^{0}=\{C^{0}_{1}, \ldots, C^{0}_{n}\}$ of the entity set $\EntitySet$ into $n$ singleton clusters $C^{0}_{i}=\{\Entity_i\}$.
The output at round $t$ is a coarser partition $\mathcal{C}^{t+1}$, in which at least two clusters of $\mathcal{C}^{t}$ were merged into a single cluster in $\mathcal{C}^{t+1}$. The partition $\mathcal{C}^{t+1}$ is formed as follows.
The algorithm first forms a graph $D_t=(\mathcal{C}^{t}, E_t)$ on the clusters of the input partition $\mathcal{C}^{t}$, treating each cluster $C^{t}_i \in \mathcal{C}^{t}$ as a node, and including in $E_t$ a directed edge $(i,j)$ from $C^{t}_i$ to $C^{t}_j$ if and only if there exists an entity node $\Entity_k \in C^{t}_{i}$ such that \emph{each} of its supporters $\Affectors_i \in \Supporters_k$ is a repressor of \emph{some} entity $\Entity_{k'} \in C^{t}_{j}$, i.e., $\Supporters_k \subseteq \bigcup_{\Entity_{k'} \in C^{t}_{j}} \mathbb{R}epressors_{k'}$.
\par The partition $\mathcal{C}^{t+1}$ is now formed by merging clusters $C^{t}_j$ that belong to the same \SCC~ in $D_{t}$ into a single cluster $C^{t+1}_{k'}$ in $\mathcal{C}^{t+1}$.
Each cluster of $\mathcal{C}^{t+1}$ corresponds to a unique \SCC~ in $D_t$. If $D_t$ contains no \SCC~ except for singletons, which implies that no two cluster nodes of $D_t$ can be merged, then the algorithm declares the system $\System$ as reducible and halts. Otherwise, it proceeds with the new partition $\mathcal{C}^{t+1}$. Importantly, in $\mathcal{C}^{t+1}$ there are at least two entity subsets that belong to distinct clusters in $\mathcal{C}^{t}$ but to the same cluster node in $\mathcal{C}^{t+1}$. If none of the rounds ends with the algorithm declaring the system reducible (due to clusters ``merging" failure), then the procedure proceeds with the cluster merging until at some round $t^* \leq n-1$ the remaining partition $\mathcal{C}^{t^*}=\{\{\EntitySet\}\}$ consists of a single cluster node that encompasses the entire entity set.
\begin{figure*}
\caption{Algorithm \TestIrred~ for testing irreducibility.}
\label{figure:irreducibility_tester}
\end{figure*}
\paragraph{Analysis.}
We first provide some high level intuition for the correctness of the algorithm.
Recall that the goal of the algorithm is to test whether the entire entity set $\EntitySet$ resides in a single \SCC~ in the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix)}$ for every selection matrix $\FilterMatrix \in \FilterMatrixFamily$. This test is performed by the algorithm in a gradual manner by monotonically increasing the subsets of nodes that belong to the same \SCC~ in every $\ConstraintsGraph_{\System(\FilterMatrix)}$. At the beginning of the execution, the most one can claim is that every entity $\Entity_k$ is in its own \SCC. Over time, clusters are merged while maintaining the invariant that all entities of the same cluster belong to the same \SCC~ in every $\ConstraintsGraph_{\System(\FilterMatrix)}$.
More formally, the following invariant is maintained in every round $t$: the entities of each cluster $C^t_i \subseteq \EntitySet$ of the graph $D_t$ are guaranteed to be in the same \SCC~ in the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix)}$ for every selection matrix
$\FilterMatrix \in \FilterMatrixFamily$.
We later show that if the system $\System$ is irreducible, then the merging process never fails and therefore the last partition $\mathcal{C}^{t^*}=\{\EntitySet\}$ consists of a single cluster node that contains all entities, and by the invariant, all entities are guaranteed to be in the same \SCC~ in the constraint graph of any hidden square subsystem.
\par We now provide some high level explanation for the validity of this invariant. Starting with round $t=0$, each cluster node $C^0_i=\{\Entity_i\}$ is a singleton and every singleton entity is trivially in its own \SCC~ in any constraint graph $\ConstraintsGraph_{\System(\FilterMatrix)}$. Assume the invariant holds up to round $t$, and consider round $t+1$.
The key observation in this context is that the new partition $\mathcal{C}^{t+1}$ is defined based on the graph $D_t=(\mathcal{C}^{t}, E_{t})$, whose edges are independent of the specific supporter selection that is made by the entities (and that determines the resulting hidden square subsystem). This holds due to the fact that a directed edge $(i,j) \in E_{t}$ between the clusters $C^{t}_{i}, C^{t}_{j} \in \mathcal{C}^{t}$ exists if and only if there exists an entity node $\Entity_k \in C^{t}_{i}$ such that \emph{each} of its supporters $\Affectors_i \in \Supporters_k$ is a repressor of \emph{some} entity $\Entity_{k'} \in C^{t}_{j}$. Therefore, if the edge $(i,j)$ exists in $D_{t}$, then it exists also in the cluster graph corresponding to the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix)}$ (i.e., the graph formed by representing every \SCC~ of $\ConstraintsGraph_{\System(\FilterMatrix)}$ by a single node) for \emph{every} hidden square subsystem $\System(\FilterMatrix)$, no matter which supporter $\Affectors_i \in \Supporters_k$ was selected by $\FilterMatrix$ for $\Entity_k$. Hence, under the assumption that the invariant holds for $\mathcal{C}^{t}$, the coarse-grained representation of the clusters of $\mathcal{C}^t$ in $\mathcal{C}^{t+1}$ is based on their membership in the same \SCC~ in the ``selection invariant" graph $D_{t}$, and thus the invariant holds also for $t+1$.
We next formalize this argument. We say that round $t$ is \emph{successful} if $D_t$ contains a \SCC~ of size greater than 1. We begin by proving the following.
\begin{claim}
\label{cl:partition_induc}
For every successful round $t$, the partition
$\mathcal{C}^{t+1}$ satisfies the following properties.
\begin{description}
\item{(A1)}
$\mathcal{C}^{t+1}$ is a partition of $\EntitySet$, i.e.,
$C^{t+1}_i \subseteq \EntitySet$, $C^{t+1}_{j} \cap C^{t+1}_{i}=\emptyset$
for every distinct $i,j \in [1,k_{t+1}]$, and $\bigcup_{j\leq k_{t+1}} C^{t+1}_{j}=\EntitySet$.
\item{(A2)}
Every $C^{t+1}_{j} \in \mathcal{C}^{t+1}$ is a \SCC~ in the constraint graph
$\ConstraintsGraph_{\System(\FilterMatrix)}$ for every selection matrix
$\FilterMatrix \in \FilterMatrixFamily$.
\end{description}
\end{claim}
\par\noindent{\bf Proof:~}
By induction on $t$.
Clearly, since $C^{0}_{i}=\{\Entity_i\}$ for every $i$,
Properties (A1) and (A2) trivially hold for $\mathcal{C}^{0}$.
We now show that if round $t=0$ is successful, then (A1) and (A2) hold for $\mathcal{C}^{1}$. Since the edges of $D_0$ exist also in the corresponding cluster graph of $\ConstraintsGraph_{\System(\FilterMatrix)}$ under any selection $\FilterMatrix$ of the entities, the clusters of $\mathcal{C}^{0}$ that are merged into a single \SCC~ in $\mathcal{C}^{1}$, belong also to the same \SCC~ in the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix)}$ of every $\FilterMatrix \in \FilterMatrixFamily$.
Next, assume these properties to hold for every round up to $t-1$ and consider round $t$.
Since round $t$ is successful, any prior round $t' <t$ was successful as well, and thus the induction assumption can be applied to round $t-1$. In particular, since $\mathcal{C}^{t+1}$ corresponds to \SCC s of $D_t$, it represents a partition of the clusters of
$\mathcal{C}^{t}$. By the induction assumption for round $t-1$, Property (A1) holds for $\mathcal{C}^{t}$ and therefore $\mathcal{C}^{t}$ is a partition of the entity set $\EntitySet$. Since $\mathcal{C}^{t+1}$ corresponds to a partition of $\mathcal{C}^t$, it is a partition of $\EntitySet$ as well, so (A1) is established. Property (A2) holds for $\mathcal{C}^{t+1}$ by the same argument provided for the induction base.
The claim follows.
\quad\blackslug\lower 8.5pt\null\par
We next show that the algorithm returns ``yes" for every irreducible system.
Specifically, we show that for an irreducible system, if $|\mathcal{C}^{t}|>1$ then round $t$ is \emph{successful}, i.e., the merging operation of the cluster graph $D_t$ succeeds. Once $\mathcal{C}^t$ contains a single cluster (containing all entities), the algorithm terminates and returns ``yes".
We first provide an auxiliary claim.
\begin{claim}
\label{cl:aux}
If $\System$ is irreducible and $|\mathcal{C}^{t}|>1$, then
$|\Gamma^{in}(C^{t}_{j},D_{t})| \geq 1$ for every $C^{t}_{j} \in \mathcal{C}^{t}$.
\end{claim}
\par\noindent{\bf Proof:~}
First note that if $\mathcal{C}^{t}$ is defined, then round $t-1$ was successful. Therefore, by Property (A1) of Cl. \ref{cl:partition_induc}, $\mathcal{C}^{t}$
is a partition of the entity set $\EntitySet$.
Assume, towards contradiction, that the claim does not hold, and let $C^{t}_{j}\in \mathcal{C}^{t}$ be such that
$\Gamma^{in}(C^{t}_{j},D_{t})=\emptyset$. Denote the set of incoming neighbors
of component $C^{t}_{j}$ in the constraint graph $\ConstraintsGraph_{\System}$ by $W=\Gamma^{in}(C^{t}_{j},\ConstraintsGraph_{\System}) \setminus C^{t}_{j}$.
Since $\ConstraintsGraph_{\System}$ is irreducible, the vertices of $C^{t}_{j}$ are reachable from the outside, so $W \neq \emptyset$.
Let the repressor set of $C^{t}_{j}$ be
$\Repressors(C^{t}_{j})=\bigcup_{\Entity_k \in C^{t}_{j}} \Repressors_{k}$.
We now construct a hidden square system $\System(\FilterMatrix^*)$ which is reducible, in contradiction to the irreducibility of $\System$. Specifically, we look for a selection matrix $\FilterMatrix^*$ satisfying that for every entity $\Entity_k \in W$, its selected supporter $\Affectors_k$ in $\System(\FilterMatrix^*)$ (i.e., the one for which $\FilterMatrix^*( \Affectors_k, \Entity_k)=1$) is not a repressor of any of the entities in $C^{t}_{j}$, i.e., $\Affectors_k
\in \Supporters_k \setminus \Repressors(C^{t}_{j})$.
Recall that since $\System$ is irreducible, the supporter sets $\Supporters_i$ and $\Supporters_j$ are disjoint for every $i \neq j$ (see Claim \ref{cor:distinct}).
Note that since $\Gamma^{in}(C^{t}_{j},D_{t})=\emptyset$,
such a selection matrix $\FilterMatrix^*$ exists.
To see this, assume, towards contradiction, that $\FilterMatrix^*$ does not exist. This implies that there exists an entity $\Entity_{k} \in W$ such that $\Supporters_{k} \setminus \Repressors(C^{t}_{j})=\emptyset$, and therefore an affector in $\Supporters_k \setminus \Repressors(C^{t}_{j})$ could not be selected for $\FilterMatrix^*$. Hence, $\Supporters_k \subseteq \Repressors(C^{t}_{j})$.
Let $C^{t}_{i} \in \mathcal{C}^{t}$ be the cluster such that $\Entity_k \in C^{t}_{i}$. Since $\mathcal{C}^{t}$ is a partition of the entity set $\EntitySet$, such $ C^{t}_{i}$ exists. Since $\Supporters_k \subseteq \Repressors(C^{t}_{j})$, the edge $(i,j)$ exists in $D_{t}$, in contradiction to the fact that $C^{t}_{j}$ has no incoming neighbors in $D_{t}$.
We therefore conclude that $\FilterMatrix^*$ exists.
\par We now show that $\System(\FilterMatrix^*)$ is reducible. In particular, we show that
the set of incoming neighbors of the component $C^{t}_{j}$
(from entities in other components) in the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix^*)}$ of the square system $\System(\FilterMatrix^*)$ is empty, i.e.,
$\Gamma^{in}(C^{t}_{j},\ConstraintsGraph_{\System(\FilterMatrix^*)})=\emptyset$.
Assume, towards contradiction, that there exists a directed edge $e_{x,y}$ from entity $\Entity_x \in \EntitySet \setminus C^{t}_{j}$ to some $\Entity_y \in C^{t}_{j}$ in $\ConstraintsGraph_{\System(\FilterMatrix^*)}$. This implies that $e_{x,y} \in \ConstraintsGraph_{\System}$ exists in the constraint graph of the original (nonsquare) system $\System$ and thus $\Entity_x$ is in $W$. Let $\Affectors_{x'} \in \Supporters_x$ be the selected supporter
of $\Entity_x$ in $\FilterMatrix^*$. By construction of $\FilterMatrix^*$,
$\Affectors_{x'} \notin \Repressors(C^{t}_{j})$, in contradiction to the fact that the edge $e_{x,y} \in \ConstraintsGraph_{\System(\FilterMatrix^*)}$ exists.
Since there exists a node in $\ConstraintsGraph_{\System(\FilterMatrix^*)}$ with no incoming neighbors, this graph is not strongly connected, implying that $\System(\FilterMatrix^*)$ is reducible.
Finally, as $\System$ is irreducible, it holds that every hidden square system is irreducible, in particular $\System(\FilterMatrix^*)$, hence, contradiction. The claim follows.
\quad\blackslug\lower 8.5pt\null\par
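The construction of $\FilterMatrix^*$ in the proof above can be illustrated by a small sketch (Python; `W` is the list of entities in $\Gamma^{in}(C^{t}_{j},\ConstraintsGraph_{\System})\setminus C^{t}_{j}$, `supporters` maps each of them to its supporter-index set, and `bad` plays the role of $\Repressors(C^{t}_{j})$; all names are hypothetical stand-ins):

```python
def adversarial_selection(W, supporters, bad):
    """For every entity k in W, pick a supporter outside `bad`.
    Returns the partial selection, or None if some entity has all of
    its supporters in `bad` -- exactly the case in which the edge
    (i, j) would already exist in the cluster graph D_t."""
    sel = {}
    for k in W:
        candidates = supporters[k] - bad
        if not candidates:
            return None  # Supporters_k subset of Repressors(C_j)
        sel[k] = min(candidates)  # any candidate works
    return sel
```

When the selection exists, extending it arbitrarily to the remaining entities yields a hidden square system in which $C^{t}_{j}$ has no incoming edges, as argued in the proof.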
\begin{lemma}
\label{cl:pos}
If $\System$ is irreducible then
Algorithm ~\TestIrred($\System$) returns ``yes".
\end{lemma}
\par\noindent{\bf Proof:~}
By Cl. \ref{cl:aux}, we have that if $\System$ is irreducible and
$|\mathcal{C}^{t}|>1$, then every node in $D_t$ has an incoming edge, which implies that there exists a (directed) cycle
$C=(C_{i_1}, \ldots, C_{i_k})$, $k \geq 2$, in $D_{t}$. Since the nodes in such a cycle $C$ are strongly connected, they can be merged in $\mathcal{C}^{t+1}$, and therefore round $t$ is successful.
Moreover, since at least two clusters of $\mathcal{C}^t$ are merged into a single cluster in $\mathcal{C}^{t+1}$, we have that
$|\mathcal{C}^{t+1}|<|\mathcal{C}^{t}|$.
This means that the merging never fails as long as $|\mathcal{C}^{t}|>1$, so $k_{t}=|\mathcal{C}^{t}|$ is monotonically decreasing.
It follows that the algorithm terminates within at most $n-1$ rounds with a ``yes". The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
We now consider a reducible system $\System$ and show
that ~\TestIrred($\System$) returns ``no".
\begin{lemma}
\label{lem:neg}
If $\System$ is reducible, then
Algorithm ~\TestIrred($\System$) returns ``no".
\end{lemma}
\par\noindent{\bf Proof:~}
Towards contradiction, assume otherwise, i.e., suppose that the algorithm accepts $\System$.
This implies that every round $t \in [1, t^*]$ in which $|\mathcal{C}^{t}|>1$ is successful.
\par The reducibility of $\System$ implies that there exists (at least one) hidden square system $\System(\FilterMatrix)$ which is reducible, namely, its constraint graph $\widehat{D}=\ConstraintsGraph_{\System(\FilterMatrix)}$ is not strongly connected. Thus $\widehat{D}$ contains at least two nodes $\Entity_i$ and $\Entity_j$ that belong to distinct \SCC s in $\widehat{D}$.
Note that $\Entity_i$ and $\Entity_j$ are in distinct clusters in $\mathcal{C}^{0}$, but belong to the same cluster in the final partition $\mathcal{C}^{t^*}$. Therefore, there must exist a round $t' \in (0, t^*)$
in which the cluster $C^{t'}_{i'}$ that contains $\Entity_i$ and the cluster $C^{t'}_{j'}$ that contains $\Entity_j$ appeared in the same \SCC~ in $D_{t'}$ and were merged into a single \SCC~ in $\mathcal{C}^{t'+1}$.
(Note that since $t'-1$ is a successful round,
$\mathcal{C}^{t'}$ is a partition of the entity set (Prop. (A1) of Cl. \ref{cl:partition_induc}) and therefore $C^{t'}_{i'}$ and $C^{t'}_{j'}$ exist.)
Since round $t'$ is successful (otherwise the algorithm would terminate with ``no"), by Property (A2) of Cl. \ref{cl:partition_induc}, it follows that the entity subset of the unified cluster
$\mathcal{C} \in \mathcal{C}^{t'+1}$ is in the same \SCC~ in the constraint graph $\ConstraintsGraph_{\System(\FilterMatrix')}$ for every $\FilterMatrix' \in \FilterMatrixFamily$. Since $\FilterMatrix \in \FilterMatrixFamily$ as well, it holds that $\Entity_i$ and $\Entity_j$ are in the same \SCC~ in $\widehat{D}$. Hence, contradiction. The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
By Lemmas \ref{cl:pos} and \ref{lem:neg} it follows that
Algorithm ~\TestIrred($\System$) returns ``yes" iff the system $\System$ is irreducible, which establish the correctness of the algorithm.
\begin{claim}
\label{cl:runtime}
Algorithm ~\TestIrred~ terminates in $O(m \cdot n^2)$ time.
\end{claim}
\par\noindent{\bf Proof:~}
The algorithm consists of at most $n-1$ rounds.
In each round $t$, it constructs the cluster graph
$D_t=(\mathcal{C}^{t}, E_t)$ in time $O(n \cdot m)$.
The decomposition into \SCC s can be done
in $O(|D_t|)=O(n^2)$ time. The claim follows.
\quad\blackslug\lower 8.5pt\null\par
\begin{theorem}
\label{lem:alg_irred}
There exists a polynomial time algorithm
for deciding irreducibility on nonnegative systems.
\end{theorem}
\section{Proof of the generalized \PFT}
\subsection{Proof overview and roadmap}
Our main challenge is to show that the optimal value of Program
(\ref{LP:Ext_Perron}) is related to an \emph{eigenvalue} of some hidden
square system $\System^{*}$ in $\System$ (where ``hidden" implies that
there is a selection on $\System$ that yields $\System^{*}$).
The flow of the analysis is as follows.
In Subsec. \ref{sec:geometry_n_1}, we consider a convex relaxation of Program (\ref{LP:Ext_Perron}) and show that the set of feasible solutions
of Program (\ref{LP:Ext_Perron}), for every $\beta \in (0, \beta^{*}]$, corresponds to a bounded polytope.
By dimension considerations, we then show that the vertices of such a polytope correspond to feasible solutions with at most $n+1$ nonzero entries.
In Subsec. \ref{sec:weak}, we show that for irreducible systems, each vertex of such a polytope corresponds to a hidden \emph{weakly square} system
$\System^{*} \in \WeakSystemFamily$. That is, there exists a hidden weakly square system in $\System$ that achieves $\beta^{*}$. Note that a solution for such a hidden system can be extended to a solution for the original $\System$ (see Obs. \ref{obs:filter_to_square}).
Next, in Subsec. \ref{sec:zerostar},
we exploit the generalization of Cramer's rule for homogeneous linear systems (Cl. \ref{cl:cramer_non_square})
as well as a separation theorem for nonnegative matrices to show that there is
a hidden optimal \emph{square} system in $\System$ that achieves $\beta^{*}$,
which establishes the lion's share of the theorem.
Arguably, the most surprising conclusion of our generalized theorem is that
although the given system of matrices is not square, and eigenvalues cannot
be straightforwardly defined for it, the nonsquare system contains
a \emph{hidden optimal} square system, optimal in the sense that a solution $\overline{X}$ for this system can be translated into a solution $\overline{X}^m$ to the original system
(see Obs. \ref{obs:filter_to_square}) that satisfies
Program (\ref{LP:Ext_Perron}) with the optimal value $\beta^{*}$.
The power of a nonsquare system is thus not in the ability to create a solution
better than \emph{any} of its hidden square systems, but rather in
the \emph{option} to \emph{select} the best hidden square system
out of the possibly exponentially many candidates.
\subsection{Existence of a solution with $n+1$ affectors}
\label{sec:geometry_n_1}
We now turn to characterize the feasible solutions of
Program (\ref{LP:Ext_Perron}).
The following is a convex variant of Program (\ref{LP:Ext_Perron}).
\begin{align}
\label{LP:Ext_Perron_convex}
\mbox{maximize~~} ~& 1 \mbox{~~subject to:~~}
\\
& \displaystyle \RepressorsMatrix \cdot \overline{X} ~\leq~
1/\beta \cdot \SupportersMatrix \cdot \overline{X} ~,&
\label{eq:SR-convex} \\
& \displaystyle \overline{X} \geq \overline{0}~, &
\label{eq:Ineq-convex}\\
& \displaystyle ||\overline{X}||_{1}=1~. &
\label{eq:eq-one-convex}
\end{align}
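For a fixed $\beta$, checking whether a candidate vector satisfies the constraints of Program (\ref{LP:Ext_Perron_convex}) is purely mechanical; the following sketch (Python, with $\RepressorsMatrix$ and $\SupportersMatrix$ encoded as hypothetical dense row lists) is an illustration of the constraint set, not part of the formal development:

```python
def is_feasible(R, S, X, beta, eps=1e-9):
    """Check R.X <= (1/beta) S.X, X >= 0, and ||X||_1 = 1
    for a candidate solution X of the convex program."""
    if any(x < -eps for x in X):
        return False                      # violates X >= 0
    if abs(sum(X) - 1.0) > eps:
        return False                      # violates ||X||_1 = 1
    for r_row, s_row in zip(R, S):
        r = sum(a * x for a, x in zip(r_row, X))
        s = sum(a * x for a, x in zip(s_row, X))
        if r > s / beta + eps:
            return False                  # violates an SR constraint
    return True
```

Note that shrinking $\beta$ only relaxes the SR constraints, which is the monotonicity underlying the restriction to $\beta \in (0, \beta^{*}]$.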
Note that Program (\ref{LP:Ext_Perron_convex}) has the same set of constraints
as those of Program (\ref{LP:Ext_Perron}). However, due to the fact that
$\beta$ is no longer a variable, we get the following.
\begin{claim}
\label{cl:convex}
Program (\ref{LP:Ext_Perron_convex}) is convex.
\end{claim}
To characterize the set of feasible solutions $(\overline{X}, \beta)$, $\beta>0$ of Program (\ref{LP:Ext_Perron}), we fix some $\beta>0$, and characterize the solution set of Program (\ref{LP:Ext_Perron_convex}) with this $\beta$.
It is worth noting at this point that using the above convex relaxation,
one may apply a binary search for finding a {\em near-optimal} solution
for Program (\ref{LP:Ext_Perron_convex}), up to any predefined accuracy.
In contrast, our approach, which is based on exploiting the special
geometric characteristics of the optimal solution,
enjoys the theoretically pleasing (and mathematically interesting) advantage
of leading to
an efficient algorithm for computing the optimal solution precisely, and thus establishing the polynomiality of the problem.
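The binary-search alternative mentioned above can be sketched as follows (Python; `feasible` is a hypothetical oracle deciding Program (\ref{LP:Ext_Perron_convex}) for a given $\beta$, and we assume a known upper bound `hi` $\geq \beta^{*}$; feasibility is monotone, since shrinking $\beta$ only relaxes the constraints):

```python
def approx_beta_star(feasible, hi, tol=1e-9):
    """Approximate beta* = sup { beta : feasible(beta) } by binary
    search over (0, hi], assuming feasible() is monotone in beta."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            lo = mid   # mid is feasible: beta* lies in [mid, hi]
        else:
            hi = mid   # mid is infeasible: beta* lies in [lo, mid]
    return lo
```

This yields only a near-optimal value up to the prescribed tolerance, which is exactly why the exact, geometry-based approach of this section is preferable.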
Throughout, we restrict attention to values of $\beta \in (0, \beta^{*}]$.
Let $\Polytope(\beta)$ be the polyhedron corresponding to Program
(\ref{LP:Ext_Perron_convex}) and denote by $V(\Polytope(\beta))$
the set of vertices of $\Polytope(\beta)$.
\begin{claim}
\label{cl:n_zero_polytope}
(a) $\Polytope(\beta)$ is bounded (hence a polytope).
(b) For every $\overline{X} \in V(\Polytope(\beta))$,
$|NZ(\overline{X})| \leq n+1$. This holds even for reducible systems.
\end{claim}
\par\noindent{\bf Proof:~}
Part (a) follows from the equality constraint (\ref{eq:eq-one-convex}), which enforces
$||\overline{X}||_{1}~=~1$. We now prove Part (b).
Every vertex $\overline{X} \in \mathbb{R}^{m}$ is defined by a set of $m$ linearly
independent equalities. Recall that one equality is imposed by the constraint
$||\overline{X}||_{1}~=~1$ (Eq. (\ref{eq:eq-one-convex})).
Therefore, the remaining $m-1$ linearly independent equalities must be chosen among
the $n+m$ (possibly dependent) inequalities of Program
(\ref{LP:Ext_Perron_convex}). Hence, even if all
the (at most $n$) linearly independent SR constraints (\ref{eq:SR-convex})
become equalities, at least $m-1-n$ equalities
must still be taken from the remaining $m$ nonnegativity constraints (\ref{eq:Ineq-convex}). Thus at least $m-n-1$ entries of $\overline{X}$
are fixed to zero, so at most $n+1$ entries are nonzero, which establishes the claim.
\quad\blackslug\lower 8.5pt\null\par
\subsection{Existence of a weak $\ZeroStar$-solution}
\label{sec:weak}
We now consider the case where the system $\System$ is irreducible and
a more delicate characterization of $V(\Polytope(\beta))$ can be deduced.
We begin with some definitions.
A solution $\overline{X}$ is called a {\em $\Zero$ solution}
(for Program (\ref{LP:Ext_Perron}))
if it is a feasible solution
$\overline{X}_{\widetilde{\beta}}$, $\widetilde{\beta} \in (0,\beta^{*}]$,
in which for each $\Entity_i \in \EntitySet$ only one affector has a non-zero
assignment, i.e., $|NZ(\overline{X}) \cap \Supporters_i|=1$ for every $i$.
A solution $\overline{X}$ is called a {\em $\WeakZero$ solution},
or a {\em ``weak'' $\Zero$ solution}, if it is a feasible vector
$\overline{X}_{\widetilde{\beta}}$, $\widetilde{\beta} \in (0,\beta^{*}]$,
in which $|NZ(\overline{X}) \cap \Supporters_i| = 1$ for every
$\Entity_i \in \EntitySet$ except for at most one entity
$\Entity_\ell \in \EntitySet$, for which
$|NZ(\overline{X}) \cap \Supporters_\ell| = 2$.
A solution $\overline{X}$ is called a {\em $\ZeroStar$ solution} if it is
an optimal $\Zero$ solution.
Similarly, a {\em $\WeakZeroStar$ solution} is an optimal $\WeakZero$ solution.
For a feasible vector $\overline{X}$, we say that $\Affectors_k$ is
\emph{active} in $\overline{X}$ iff $X(\Affectors_k)>0$.
A subgraph $\GCGraph$ of a constraint graph $\ConstraintsGraph_{\System}$ is \emph{active} in $\overline{X}$ iff every edge
in $\GCGraph$ can be associated with (or ``explained by") an active affector, namely,
$$e(i, j) \in E(\GCGraph) ~~~\mbox{~iff~}~~~
\Supporters_{i} \cap \Repressors_{j} \cap NZ(\overline{X}) \neq \emptyset.$$
\par Towards the end of this section, we prove the following lemma which holds for every feasible solution of
Program (\ref{LP:Ext_Perron_convex}).
\begin{lemma}
\label{cl:entity_one_nonzero}
Let $\System$ be an irreducible system with a feasible solution $\overline{X}_{\beta}$ of Program (\ref{LP:Ext_Perron}). For every entity $\Entity_i$ there exists
an active affector $\Affectors_{\IndS(i)} \in \Supporters_i$, such that
$X_{\beta}(\Affectors_{\IndS(i)}) >0$, or in other words,
$\Supporters_{i} \cap NZ(\overline{X}_{\beta})\neq \emptyset$.
\end{lemma}
Let $\SelectionVec'$ be a partial selection determining $V' \subseteq \EntitySet$.
Define the collection of constraint graphs agreeing with $\SelectionVec'$ as
\begin{equation}
\label{eq:graph_family_selection}
\mathfrak{G}(\SelectionVec') ~=~ \{\ConstraintsGraph_{\System(\SelectionVec)} \mid
\SelectionVec \text{~is a complete selection satisfying~} \SelectionVec \sim \SelectionVec'\}.
\end{equation}
Note that by Obs. \ref{obs:reducible_graph_connected}(b), every
constraint graph $\GCGraph \in \mathfrak{G}(\SelectionVec')$ for every partial selection
$\SelectionVec'$ is strongly connected.
That is, $\mathfrak{G}(\SelectionVec')$ contains the constraint graphs
of all square systems that respect the partial selection dictated by
$\SelectionVec'$ on $V'$.
Note that when $|\SelectionVec'|=n$, $\SelectionVec'$ is a complete selection,
i.e., $\FilterMatrix(\SelectionVec') \in \FilterMatrixFamily$, and
$\mathfrak{G}(\SelectionVec')$ contains a single graph
$\ConstraintsGraph_{\System(\SelectionVec')}$ corresponding to the square system
$\System(\SelectionVec')$.
Given a feasible vector $\overline{X}$ and an irreducible system $\System$, the main challenge is to find an active (in $\overline{X}$) irreducible spanning
subgraph of $\ConstraintsGraph_{\System}$. Finding such a subgraph is crucial for both
Lemma \ref{cl:entity_one_nonzero} and Lemma \ref{lem:strict_equality} later on.
We begin by showing that given just one active affector $\Affectors_{p_1}$ in $\overline{X}$, it is possible to ``bootstrap" it and construct an active irreducible
spanning subgraph of $\ConstraintsGraph_{\System}$ (in $\overline{X}$).
Let $\Entity_{i_1}$ be an entity satisfying that $\Affectors_{p_1} \in \Supporters_{i_1}$.
(Such an entity $\Entity_{i_1}$ must exist, since there are no redundant affectors.)
In what follows, we build an ``influence tree" starting at $\Entity_{i_1}$ and
spanning the entire set of entities $\EntitySet$.
For a directed graph $G$ and vertex $v \in G$ let
$\BFS(G, v)$ be the \emph{breadth-first search} tree of $G$ rooted at $v$, obtained by placing vertex $w$ at level $i$ of the tree if the shortest directed path from $v$ to $w$ is of length $i$.
Given a constraint graph $\GCGraph$, let $L_{i}(\GCGraph)$ be the $i^{th}$ level of $\BFS(\GCGraph,\Entity_{i_1})$.
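The level sets $L_{i}$ can be computed by a standard breadth-first search (Python sketch; `adj` is an assumed adjacency-list encoding of the constraint graph, mapping each vertex to its immediate outgoing neighbors):

```python
from collections import deque

def bfs_levels(adj, root):
    """levels[i] = vertices at shortest directed distance i from root,
    i.e., level i of the BFS tree rooted at `root`."""
    dist = {root: 0}
    levels = [[root]]
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:           # first (shortest) visit of v
                dist[v] = dist[u] + 1
                if dist[v] == len(levels):
                    levels.append([])   # open a new level
                levels[dist[v]].append(v)
                q.append(v)
    return levels
```

Since the queue is FIFO, all vertices of level $i$ are processed before any vertex of level $i+1$, so each level is complete before the next one opens.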
We now describe an iterative process for constructing a complete selection
$\SelectionVec^{*}$ of $n$ supporters with positive entries in $\overline{X}_{\beta}$,
i.e., such that $\SelectionVec^{*} \subseteq NZ(\overline{X}_{\beta})$ and
$|\Supporters_i \cap \SelectionVec^{*}| =1$ for every $\Entity_i$.
At step $t$, we start from the partial selection $\SelectionVec_{t-1}$ constructed in the previous step, and extend it to $\SelectionVec_{t}$.
The partial selection $\SelectionVec_{t}$ should satisfy the following four properties.
\begin{description}
\item{(A1)}
$\SelectionVec_{t} \subseteq NZ(\overline{X}_{\beta})$ (i.e., it consists of strictly positive supporters).
\end{description}
Consider the graph family $\mathfrak{G}(\SelectionVec_{t})$
defined in Eq. (\ref{eq:graph_family_selection}), consisting of all constraint
graphs for square systems induced by a selection that agrees with
$\SelectionVec_{t}$.
\begin{description}
\item{(A2)}
For every $i \in \{0, \ldots, t-1\}$ it holds that $L_{i}(\GCGraph_1) = L_{i}(\GCGraph_2), \text{~for every~} \GCGraph_1, \GCGraph_2 \in \bigcup_{j=i}^{t} \mathfrak{G}(\SelectionVec_{j})$, i.e., from step $i$ onward, the first $i$ levels coincide.
\item{(A3)}
$L_{t}(\GCGraph_1) = L_{t}(\GCGraph_2), \text{~for every~} \GCGraph_1, \GCGraph_2 \in
\mathfrak{G}(\SelectionVec_{t})$, (i.e., level $t$ coincides as well).
\end{description}
Denote $\CGLevel_{i}=L_{i}(\GCGraph)$, $\GCGraph \in \mathfrak{G}(\SelectionVec_{t})$,
for $i \in \{0, \ldots, t\}$ (by (A2) and (A3) this is well-defined). Let $Q_{-1}=\emptyset$, and let $Q_{t}=\bigcup_{i=0}^{t} \CGLevel_{i}$ for $t \geq 0$ be the set of entities in levels $0, \ldots, t$ of the $\mathfrak{G}(\SelectionVec_{t})$ graphs.
\begin{description}
\item{(A4)}
$\SelectionVec_{t}$ is a partial selection determining the entities in $Q_{t-1}$,
(i.e., $|\SelectionVec_{t}|=|Q_{t-1}|$ and $|\SelectionVec_t \cap \Supporters_i|=1$ for every $\Entity_i \in Q_{t-1}$).
\end{description}
Let us now describe the construction process of $\SelectionVec^{*}$ in more detail.
At step $t=0$, let $\SelectionVec_{0}=\emptyset$. Note that in this case
$$\mathfrak{G}(\SelectionVec_{0}) ~=~
\{\ConstraintsGraph_{\System(\FilterMatrix)} ~\mid~ \FilterMatrix \in
\FilterMatrixFamily\}.$$
It is easy to see that Properties (A1)-(A4) are satisfied.
For $t=1$, let $\SelectionVec_{1}=\{\Affectors_{p_1}\}$. As $L_0(\GCGraph)=\{\Entity_{i_1}\}$ and $L_1(\GCGraph)=\{\Entity_{i_2}~\mid~ \Affectors_{p_1} \in \Repressors_{i_2}\}$ for every $\GCGraph \in \mathfrak{G}(\SelectionVec_{1})$, Properties (A2) and (A3) hold. Property (A4) holds as well since $\SelectionVec_{1}$ determines $Q_0=\{\Entity_{i_1}\}$.
\par Now assume that Properties (A1)-(A4) hold after step $t$ (for $t \geq 1$), and
consider step $t+1$. We show how to construct $\SelectionVec_{t+1}$ given
$\SelectionVec_{t}$, and then show that it satisfies Properties (A1)-(A4).
Note that by definition
$\CGLevel_{t} \subseteq \EntitySet \setminus Q_{t-1}$.
Our goal is to find a partial selection $\Delta_t$ determining $\CGLevel_{t}$ such that $\Delta_t \subseteq NZ(\overline{X}_{\beta})$.
Once such a set $\Delta_t$ is found, the partial selection $\SelectionVec_{t+1}$ is taken to be
$\SelectionVec_{t+1}=\SelectionVec_{t} \cup \Delta_t$, where $\SelectionVec_{t}$ is the partial selection determining nodes in $Q_{t-1}$ by Property (A4) for step $t$. Note that since $Q_{t-1} \cap \CGLevel_{t}=\emptyset$, the corresponding
selections $\SelectionVec_{t}$ and $\SelectionVec_{t+1}$ agree.
We now show that such $\Delta_t$ exists. This follows by the next claim.
\begin{claim}
\label{cl:aux1}
For every $t>1$, every entity $\Entity_j \in \CGLevel_t$ has an active repressor in $\overline{X}_{\beta}$, i.e.,
$\Repressors_j \cap NZ(\overline{X}_{\beta}) \neq \emptyset$.
\end{claim}
\par\noindent{\bf Proof:~}
We prove the claim by showing a slightly stronger statement, namely, that for every
$\Entity_j \in \CGLevel_t$ there exists an affector
$\Affectors_k \in \Repressors_j \cap \SelectionVec_{t}$.
For ease of analysis, let us focus on one specific $\GCGraph \in \mathfrak{G}(\SelectionVec_{t})$.
Since $\Entity_j \in \CGLevel_{t}$, it follows that there exists some $\Entity_i \in \CGLevel_{t-1}$
such that $(\Entity_i, \Entity_j) \in E(\GCGraph)$. Since $\SelectionVec_{t}$ determines $Q_{t-1}$ and $\Entity_i \in Q_{t-1}$, there exists a unique affector
$\Affectors_{\IndS(i)}=\SelectionVec_{t} \cap \Supporters_{i}$.
In addition, by Property (A1) for step $t$, $X_{\beta}(\Affectors_{\IndS(i)})>0$.
Therefore, since $\Entity_j$ is an immediate outgoing neighbor of $\Entity_i$, it holds by Eq. (\ref{eq:cg_condition}) that $\Affectors_{\IndS(i)} \in \Repressors_j$,
which establishes the claim.
\quad\blackslug\lower 8.5pt\null\par
We now complete the proof for the existence of $\Delta_t$.
By Claim \ref{cl:aux1}, each entity $\Entity_i \in \CGLevel_{t}$ has a strictly
positive repression, or, $\TotR(\overline{X}_{\beta}, \System)_{i}>0$.
Since $\overline{X}_{\beta}$ is feasible,
it follows by Fact \ref{fc:feasible_tots_totr} that also $\TotS(\overline{X}_{\beta}, \System)_{i}>0$. Therefore we get that for every
$\Entity_i \in \CGLevel_{t}$, there exists an affector
$\Affectors_{\IndS(i)} \in \Supporters_{i} \cap NZ(\overline{X}_{\beta})$.
Consequently, set $\Delta_t=\{\Affectors_{\IndS(i)} ~\mid~ \Entity_i \in \CGLevel_t \}$ and let $\SelectionVec_{t+1}= \SelectionVec_{t} \cup \Delta_t$.
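The level-by-level assembly of $\SelectionVec^{*}$ just described admits a direct sketch (Python; following Eq. (\ref{eq:cg_condition}), the selected supporter of $\Entity_i$ being a repressor of $\Entity_j$ creates the edge $(\Entity_i,\Entity_j)$; `nz` stands for $NZ(\overline{X}_{\beta})$ and `p1` for the seed affector $\Affectors_{p_1}$, all hypothetical encodings):

```python
def build_selection(supporters, repressors, nz, p1):
    """Grow a selection of active supporters, starting from one active
    affector p1 and expanding along BFS levels of the induced
    constraint graph.  Sketch only: it assumes, as guaranteed in the
    text for irreducible systems and feasible X, that every frontier
    entity has an active supporter."""
    # the entity whose supporter set contains the seed affector
    i1 = next(i for i, sup in enumerate(supporters) if p1 in sup)
    sel = {i1: p1}
    frontier = [i1]
    while frontier:
        nxt = []
        for i in frontier:
            a = sel[i]  # the supporter selected for entity i
            for j, rep in enumerate(repressors):
                if a in rep and j not in sel:
                    # pick any active supporter of the new-level entity
                    sel[j] = min(supporters[j] & nz)
                    nxt.append(j)
        frontier = nxt
    return sel
```

When the system is irreducible, every entity is eventually reached, so the returned map is a complete selection contained in $NZ(\overline{X}_{\beta})$.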
\begin{observation}
\label{obs:sim_relation}
$\SelectionVec_{t} \sim \SelectionVec_{t+1}$.
\end{observation}
\par\noindent{\bf Proof:~}
By definition, $\SelectionVec_{t}$ determines
$Q_{t-1}=\bigcup_{j=0}^{t-1} L_{j}(\GCGraph)$, for every $\GCGraph \in \mathfrak{G}(\SelectionVec_{t})$.
The selection $\SelectionVec_{t+1}$ consists of $\SelectionVec_{t}$ and a new selection for the new layer $\CGLevel_{t}$ such that
$\CGLevel_{t} \cap Q_{t-1}=\emptyset$ and therefore $\SelectionVec_{t}$ and
$\SelectionVec_{t+1}$ agree on their common part.
\quad\blackslug\lower 8.5pt\null\par
We now turn to prove Properties (A1)-(A4) for step $t+1$. Property (A1) follows immediately
by the construction of $\SelectionVec_{t+1}$. We next consider (A2).
\begin{claim}
\label{cl:aux4}
$\mathfrak{G}(\SelectionVec_{t+1}) \subseteq \mathfrak{G}(\SelectionVec_{t})$.
\end{claim}
\par\noindent{\bf Proof:~}
Consider some $\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$. By Eq. (\ref{eq:graph_family_selection}), there exists
a complete selection $\SelectionVec^{*}$, where
$\GCGraph=\ConstraintsGraph_{\System(\SelectionVec^{*})}$, such that
$\SelectionVec^{*} \sim \SelectionVec_{t+1}$.
Recall that $\CGLevel_{i}=L_{i}(\GCGraph')$ for every $\GCGraph' \in \mathfrak{G}(\SelectionVec_{t})$ and
for every $i \in \{0, \ldots, t\}$
and that $Q_{t-1}=\bigcup_{i=0}^{t-1} \CGLevel_{i}$ and $Q_{t}=Q_{t-1} \cup \CGLevel_{t}$
where $Q_{t-1} \cap \CGLevel_{t}=\emptyset$. Therefore $Q_{t-1} \subset Q_{t}$. By the inductive assumption,
$\SelectionVec_{t}$ determines $Q_{t-1}$ and by construction
$\SelectionVec_{t+1}$ determines $Q_{t}$.
Combining all the above with Obs. \ref{obs:sim_relation}, we get
$\SelectionVec_{t+1} \sim \SelectionVec_{t}$. Obs. \ref{obs:chain_sym} then implies that
$\SelectionVec^{*} \sim \SelectionVec_{t}$.
$\SelectionVec^{*} \sim \SelectionVec_{t}$.
Therefore, by Eq. (\ref{eq:graph_family_selection}) again, $\GCGraph \in \mathfrak{G}(\SelectionVec_{t})$.
\quad\blackslug\lower 8.5pt\null\par
Due to Claim \ref{cl:aux4} and Properties (A2) and (A3) for step $t$, Property (A2) follows for step $t+1$.
It is therefore possible to fix some $\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$ and define $\CGLevel_{i}=L_{i}(\GCGraph)$ for every
$i \in \{0,\ldots, t\}$ (by (A2) for $t+1$ this is well-defined).
We consider now Property (A3) and show that $L_{t+1}(\GCGraph_1)=L_{t+1}(\GCGraph_2)$ for every $\GCGraph_1, \GCGraph_2 \in \mathfrak{G}(\SelectionVec_{t+1})$.
For every graph $\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$,
define $W(\GCGraph)$ as the set of all
immediate outgoing neighbors of $\CGLevel_{t}$ in $\GCGraph$, namely, $W(\GCGraph) = \{\Entity_k \mid \exists \Entity_i \in \CGLevel_{t} \text{~such that~}
(\Entity_i,\Entity_k) \in E(\GCGraph)\}$.
\begin{observation}
\label{obs:w}
$W(\GCGraph_1)=W(\GCGraph_2)$ for every $\GCGraph_1, \GCGraph_2 \in \mathfrak{G}(\SelectionVec_{t+1})$.
\end{observation}
\par\noindent{\bf Proof:~}
Let $\GCGraph_1=\ConstraintsGraph_{\System(\SelectionVec_1)}$ and
$\GCGraph_2=\ConstraintsGraph_{\System(\SelectionVec_2)}$, where
$\SelectionVec_1, \SelectionVec_2$ correspond to complete legal selections.
Since $\GCGraph_1, \GCGraph_2 \in \mathfrak{G}(\SelectionVec_{t+1})$, it follows that
$\SelectionVec_1, \SelectionVec_2 \sim \SelectionVec_{t+1}$.
Since $\Delta_t$ determines $\CGLevel_t$, every entity $\Entity_i \in \CGLevel_t$
has the same unique supporter
$\Affectors_{\IndS(i)} \in \SelectionVec_{t+1} \cap \Supporters_i$ in both
$\SelectionVec_1, \SelectionVec_2$. By the definition of the constraint graph in Eq. (\ref{eq:cg_condition}),
it then follows that for every graph $\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$, the immediate outgoing neighbors $W(\GCGraph)$ of $\CGLevel_t$ are fully determined by the partial selection $\Delta_t$. The observation follows.
\quad\blackslug\lower 8.5pt\null\par
Hereafter, let $W=W(\GCGraph)$, $\GCGraph\in \mathfrak{G}(\SelectionVec_{t+1})$, be the set
of immediate neighbors of $\CGLevel_t$ in $\GCGraph$ (by Obs. \ref{obs:w}, this is well-defined).
Finally, note that $L_{t+1}(\GCGraph)=W \setminus \left(\bigcup_{i=0}^{t} L_{i}(\GCGraph) \right)$, for every
$\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$. By Property (A2), $\CGLevel_{i}=L_{i}(\GCGraph)$ for every
$\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$ and $i \in \{0, \ldots,t\}$. Hence, $L_{t+1}(\GCGraph)=W \setminus Q_t$ and by Obs. \ref{obs:w}, Property (A3) is established.
Finally, it remains to consider Property (A4). First, note that by Properties (A2) and (A3) for step $t+1$, we get that $Q_{t}=Q_{t-1} \cup L_{t}(\GCGraph)$ for every $\GCGraph \in \mathfrak{G}(\SelectionVec_{t+1})$. By Property (A4) for step $t$ and Properties (A2) and (A3) for step $t+1$, it follows that the selection $\SelectionVec_{t+1}$ determines $Q_{t}$.
We now turn to discuss the stopping criterion. Let $t^{*}$ be the first step at which
$\SelectionVec_{t^{*}}=\SelectionVec_{t^{*}-1}$. (Since $\SelectionVec_{t} \subseteq \SelectionVec_{t+1}$ for every $t\geq 0$ and there are finitely many affectors, such a $t^*$ exists.) We then have the following.
\begin{lemma}
$|\SelectionVec_{t^{*}}|=n$, hence
$\System(\SelectionVec_{t^{*}})$ is a square system, and $\mathfrak{G}(\SelectionVec_{t^{*}})=\{\ConstraintsGraph_{\System(\SelectionVec_{t^{*}})}\}$.
\end{lemma}
\par\noindent{\bf Proof:~}
Recall that for every $i \in \{0, \ldots, t^{*}\}$, by Eq. (\ref{eq:graph_family_selection}), every graph $\GCGraph' \in \mathfrak{G}(\SelectionVec_{i})$ represents a square system, and therefore, by Obs. \ref{obs:reducible_graph_connected}, it is strongly connected.
Fix some arbitrary $\GCGraph \in \mathfrak{G}(\SelectionVec_{t^{*}})$ and let $\CGLevel_i=L_i(\GCGraph)$ for every $i \in \{0, \ldots, t^{*}\}$ (by Properties (A2) and (A3), this is well-defined).
By Property (A4) it holds that the partial selection $\SelectionVec_{t^{*}-1}$ (resp., $\SelectionVec_{t^{*}}$) determines $Q_{t^*-2}$ (resp., $Q_{t^*-1}$).
As $\SelectionVec_{t^{*}-1}=\SelectionVec_{t^{*}}$, we have that $Q_{t^*-2}=Q_{t^*-1}$. Hence, $Q_{t^*-1} \setminus Q_{t^*-2}=\CGLevel_{t^*-1}=\emptyset$.
This implies that the BFS graph $BFS(\GCGraph, \Entity_{i_1})$ consists of the $t^*-1$ levels comprising $Q_{t^*-2}$. In addition, since $\GCGraph$ is strongly connected, it follows that $Q_{t^*-2}=\EntitySet$.
By Property (A4), $\SelectionVec_{t^{*}}$ determines $Q_{t^*-1}=\EntitySet$, hence $|\SelectionVec_{t^{*}}|=n$, meaning that $\SelectionVec_{t^{*}}$ is a complete selection, so $\System(\SelectionVec_{t^{*}})$
corresponds to a unique square system. Finally, since the $t^*-1$ levels of every $\GCGraph \in \mathfrak{G}(\SelectionVec_{t^*})$ are the same (Properties (A2) and (A3)) and span all the entities, it follows that $\mathfrak{G}(\SelectionVec_{t^*})$ consists of a single constraint graph. The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
In summary, we end with a complete selection $\SelectionVec_{t^{*}}$ that spans the $n$ entities.
Every affector $\Affectors_k \in \SelectionVec_{t^{*}}$ is active and therefore
the constraint graph $\ConstraintsGraph_{\System(\SelectionVec_{t^{*}})}$ is
active in $\overline{X}_{\beta}$.
This establishes the following lemma.
\begin{lemma}
\label{lem:one_active}
For every feasible point $\overline{X}_{\beta}$ for Program (\ref{LP:Ext_Perron_convex}) and every active affector $\Affectors_{p_1}$ in $\overline{X}_{\beta}$,
there exists a complete selection $\SelectionVec^{*}$ for $\EntitySet$
such that $\SelectionVec^{*} \subseteq NZ(\overline{X}_{\beta})$,
hence the corresponding constraint subgraph $\ConstraintsGraph_{\System(\SelectionVec^{*})}$
is active in $\overline{X}_{\beta}$.
\end{lemma}
The following is an interesting implication.
\begin{corollary}
\label{cor:active}
For every feasible vector there exists an active spanning irreducible graph.
\end{corollary}
\par\noindent{\bf Proof:~}
Since every feasible vector is non-negative, there exists at least one active
affector in it, from which an active spanning irreducible graph can be constructed by Lemma \ref{lem:one_active}.
\quad\blackslug\lower 8.5pt\null\par
Finally, we are ready to complete the proof of Lemma \ref{cl:entity_one_nonzero}
for any irreducible system $\System$.
\par\noindent{\bf Proof:~}[Lemma \ref{cl:entity_one_nonzero}]
Since $\sum_{i} X_{\beta}(i)>0$, it follows that there exists at least one
affector $\Affectors_{p_1}$ such that $X_{\beta}(\Affectors_{p_1})>0$.
By Lemma \ref{lem:one_active}, there is a complete selection vector
$\SelectionVec^{*} \subseteq NZ( X_{\beta})$. The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
We end this subsection by showing that every vertex
$\overline{X} \in V(\Polytope(\beta))$ is a $\WeakZero$ solution.
\begin{lemma}
\label{cl:weak_zero_polytope}
If the system of Program (\ref{LP:Ext_Perron_convex}) is irreducible, then
every $\overline{X} \in V(\Polytope(\beta))$ is a $\WeakZero$ solution for it, and in particular every optimal solution $\overline{X}^* \in V(\Polytope(\beta^*))$ is a $\WeakZeroStar$ solution.
\end{lemma}
\par\noindent{\bf Proof:~}
By Claim \ref{cl:n_zero_polytope}, for every
$\overline{X} \in V(\Polytope(\beta))$, $|NZ(\overline{X})| \leq n+1$.
By Lemma \ref{cl:entity_one_nonzero}, for every $1 \leq i \leq n$,
$|NZ(\overline{X}) \cap \Supporters_i| \geq 1$. Therefore there exists
at most one entity $\Entity_{i}$ such that
$|NZ(\overline{X}) \cap \Supporters_i|=2$, and $|NZ(\overline{X}) \cap \Supporters_j|=1$ for every $j \neq i$, i.e., the solution is $\WeakZero$. The above holds for every $\beta \in (0, \beta^*]$. In particular, for the optimal $\beta$ value, $\beta^*$, it holds that $\overline{X}^* \in V(\Polytope(\beta^*))$ is a $\WeakZeroStar$ solution. \quad\blackslug\lower 8.5pt\null\par
\subsection{Existence of a $\ZeroStar$ solution}
\label{sec:zerostar}
In the previous subsection we established that when $\System$ is irreducible, every vertex
$\overline{X} \in V(\Polytope(\beta))$ corresponds to a $\WeakZero$ solution for Program (\ref{LP:Ext_Perron_convex}).
In particular, this statement holds for $\beta=\beta^{*}(\System)$, the optimal $\beta$ for $\System$.
By the feasibility of the system for $\beta^{*}$, the corresponding polytope
is non-empty and bounded
(and each of its vertices is a $\WeakZeroStar$ solution),
hence there exist $\WeakZeroStar$ solutions for the problem.
The goal of this subsection is to establish the existence of a $\ZeroStar$
solution for the problem and thus complete the proof of Thm. \ref{thm:pf_ext}. In particular, we consider Program (\ref{LP:Ext_Perron_convex}) for an irreducible system $\System$ and $\beta=\beta^{*}$, i.e., the optimal value of Program (\ref{LP:Ext_Perron}) for $\System$, and show that {\em every}
optimal $\overline{X} \in V(\Polytope(\beta^{*}))$ solution is in fact a $\ZeroStar$ solution.
We begin by showing that for $\beta^{*}$,
the $n$ SR inequalities (Eq. (\ref{eq:SR-convex}))
hold with equality for every optimal solution $\overline{X}^{*}$, including ones that are not $\WeakZeroStar$ solutions.
\begin{lemma}
\label{lem:strict_equality}
If $\System=\langle \SupportersMatrix,\mathbb{R}epressorsMatrix \rangle$ is irreducible, then
$\mathbb{R}epressorsMatrix \cdot \overline{X}^{*} =
1/\beta^{*}(\System) \cdot \SupportersMatrix \cdot \overline{X}^{*}$ for every optimal solution $\overline{X}^{*}$ of Program (\ref{LP:Ext_Perron_convex}).
\end{lemma}
\par\noindent{\bf Proof:~}
Consider an irreducible system $\System$.
By Lemma \ref{cl:entity_one_nonzero}, every entity $\Entity_i$ has at least
one active supporter in $NZ(\overline{X}^{*})$. Select, for every $i$, one such
supporter $\Affectors_{\IndS(i)} \in \Supporters_i \cap NZ(\overline{X}^{*})$.
Let $\SelectionVec^{*}= \{\Affectors_{\IndS(i)} \mid 1\le i\le n \}$.
By definition, $\SelectionVec^{*} \subseteq NZ(\overline{X}^{*})$. Also, by Claim \ref{cor:distinct}
the sets $\Supporters_i$ are disjoint. Therefore $\SelectionVec^{*}$ is a complete
selection
(i.e., for every $\Entity_i$, $|\Supporters_i \cap \SelectionVec^{*}|=1$), and hence $\System^{*}=\System(\SelectionVec^{*})$ is a square irreducible system.
Let $\GCGraph^*=\ConstraintsGraph_{\System^{*}}$ be the constraint graph of $\System^{*}$.
By Obs. \ref{obs:reducible_graph_connected}(a), $\GCGraph^*$ is strongly connected. In addition, since $\System^*$ has exactly one affector $\Affectors_{\IndS(i)}$ for every $\Entity_i$, and this affector is active, it follows that every edge
$e(\Entity_i,\Entity_j) \in E(\GCGraph^*)$ corresponds to an \emph{active} affector
in $\overline{X}^{*}$, i.e.,
$\Supporters_i \cap \mathbb{R}epressors_j \cap NZ(\overline{X}^{*}) \neq \emptyset$, and hence $\GCGraph^*$ is active.
\par Therefore, for an edge $(v_i,v_j)$ in $\GCGraph^*$,
if we reduce the power of the active supporter of $v_i$, which,
by the definition of $\GCGraph^*$ (see Eq. (\ref{eq:cg_condition})), is a repressor of $v_j$, then $v_j$'s inequality
can be made strict. Such a reduction is possible
only because we consider active affectors. This intuition
is used next in order to prove the lemma.
For a feasible solution $\overline{X}$ of Program (\ref{LP:Ext_Perron_convex}) and value $\beta$, let us formulate the SR constraints in terms of total support and total repression as in Eq. (\ref{eq:SR_ind}), and let
\begin{equation}
\label{eq:residual}
R_{i}(\overline{X}) = 1/\beta \cdot
\TotS(\overline{X})_{i}-\TotR(\overline{X})_{i}
\end{equation}
be the residual of the $i$th SR constraint
of Eq. (\ref{eq:SR_ind}) (hence $R_{i}(\overline{X})>0$ implies that the $i$th constraint holds with strict inequality under $\overline{X}$).
Then the lemma claims that for the optimal solution $\overline{X}^*$ and $\beta^*$, $R_{i}(\overline{X}^*)=0$ for every $i$.
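To make the residual bookkeeping concrete, the following Python sketch computes $R_{i}(\overline{X})$ from supporter and repressor gain matrices. The matrices `S`, `R`, the assignment `X`, and the value `beta` below are hypothetical illustrations, not data from the paper.

```python
# Sketch: residuals R_i(X) = (1/beta) * TotS_i - TotR_i, as in the residual
# definition above. S[i][j] (resp. R[i][j]) is the supporting (resp.
# repressing) gain of affector j on entity i; all values here are hypothetical.
def residuals(S, R, X, beta):
    n = len(S)
    tot_s = [sum(S[i][j] * X[j] for j in range(len(X))) for i in range(n)]
    tot_r = [sum(R[i][j] * X[j] for j in range(len(X))) for i in range(n)]
    return [tot_s[i] / beta - tot_r[i] for i in range(n)]

S = [[1.0, 0.0], [0.0, 1.0]]   # hypothetical supporter gains
R = [[0.0, 0.5], [0.5, 0.0]]   # hypothetical repressor gains
X = [1.0, 1.0]
print(residuals(S, R, X, beta=2.0))  # [0.0, 0.0]: both SR constraints tight
```

A residual of zero for every entity is exactly the equality case asserted by the lemma.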
\par Assume, toward contradiction, that there exists at least one entity, w.l.o.g. $\Entity_{0}$, for which $R_{0}(\overline{X}^*)>0$.
In what follows, we gradually construct a new assignment $\overline{X}^{**}$
that achieves a strictly positive residual $R_{i}(\overline{X}^{**})>0$, i.e., a strict inequality in the SR constraint of Eq. (\ref{eq:SR_ind}),
for all $\Entity_i \in \EntitySet$.
Clearly, if all SR constraints are satisfied with strict inequality,
then there exists some larger $\beta^{**}>\beta^{*}(\System)$ that still satisfies all the constraints, in contradiction to the optimality of $\beta^{*}(\System)$.
To construct $\overline{X}^{**}$, we trace paths of
influence in the strongly connected (and active) constraint graph $\GCGraph^*$.
Think of $\Entity_0$ as the root, and let $L_{j}(\GCGraph^*)$ be the $j^{th}$ level of
$\BFS(\GCGraph^*, \Entity_{0})$ (with $L_0=\{\Entity_{0}\}$).
Let $Q_{-1}=\emptyset$, and $Q_{t}=\bigcup_{i=0}^{t} L_{i}(\GCGraph^*)$ for $t \geq 0$.
Let $\SelectionVec_{t}=\{\Affectors_{\IndS(i)} ~\mid~ \Entity_{i} \in Q_{t-1}\} \subseteq \SelectionVec^{*}$ be the partial selection
determining the entities in $Q_{t-1}$. I.e., $|\SelectionVec_{t}|=|Q_{t-1}|$ and
for every $\Entity_i \in Q_{t-1}$, $|\SelectionVec_{t} \cap \Supporters_{i}|=1$.
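The level sets $L_{j}$ and prefixes $Q_{t}$ above are the standard BFS layers of the constraint graph. As a minimal sketch (the adjacency structure below is a hypothetical graph, not one derived from the paper's systems):

```python
# Sketch: BFS levels L_0, L_1, ... of a directed graph from a root,
# so that Q_t is the union of the first t+1 levels.
def bfs_levels(adj, root):
    levels, seen, frontier = [], {root}, [root]
    while frontier:
        levels.append(frontier)
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return levels

# Hypothetical strongly connected constraint graph on entities 0..3.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [0]}
print(bfs_levels(adj, 0))  # [[0], [1, 2], [3]]
```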
The process of constructing $\overline{X}^{**}$ consists of $d$ steps,
where $d$ is the depth of $\BFS(\GCGraph^*, \Entity_{0})$.
At step $t$, we are given $\overline{X}_{t-1}$ and use it to construct
$\overline{X}_{t}$.
Essentially, $\overline{X}_{t}$ should satisfy the following properties.
\begin{description}
\item{(B1)}
The set of SR inequalities corresponding to $Q_{t-1}$ entities hold with strict
inequality with $\overline{X}_{t}$. That is, for every $\Entity_i \in Q_{t-1}$, $R_{i}(\overline{X}_{t})>0$, i.e.,
$$ 1/\beta^{*} \cdot \TotS(\overline{X}_t)_{i} ~>~
\TotR(\overline{X}_t)_{i} ~. $$
\item{(B2)}
$\overline{X}_{t}$ is an optimal solution, i.e., it satisfies Program
(\ref{LP:Ext_Perron}) with $\beta^{*}(\System)$.
\item{(B3)}
$X_{t}(\Affectors)=X^{*}(\Affectors)$
for every $\Affectors \notin \SelectionVec_{t}$ and
$X_{t}(\Affectors)<X^{*}(\Affectors)$
for every $\Affectors \in \SelectionVec_{t}$.
\end{description}
Let us now describe the construction process in more detail.
Let $\overline{X}_0=\overline{X}^{*}$.
Consider step $t=1$ and recall that $R_{0}(\overline{X}_0)>0$.
Let $\Affectors_{k_0}$ be the active supporter of
$\Entity_{0}$, i.e.,
$\Affectors_{k_0} \in \Supporters_{0} \cap \SelectionVec^{*}$.
Then it is possible to slightly reduce the value of
$\Affectors_{k_0}$ in $\overline{X}_0$ while still maintaining feasibility, yielding $\overline{X}_1$.
Formally, let
$X_{1}(\Affectors_{k_0}) = X_0(\Affectors_{k_0})-
\min\{X_0(\Affectors_{k_0}),R_{0}(\overline{X}_0)\}/2$ and leave the rest
of the entries unchanged, i.e.,
$X_{1}(\Affectors_{k})=X^{*}(\Affectors_{k})$ for every $k \neq k_0$.
We now show that Properties (B1)-(B3) are satisfied for $t\in \{0,1\}$ and then
proceed to consider the construction of $\overline{X}_{t}$ for $t>1$.
Since $L_{0}(\GCGraph^*)=\{\Entity_{0}\}$, and $Q_{-1}=\emptyset$, also $\SelectionVec_{0}=\emptyset$, so (B1) holds vacuously, and (B2) and (B3) follow by the fact that
$\overline{X}_{0}=\overline{X}^*$.
Next, consider $\overline{X}_{1}$. By the irreducibility of the system
(in particular, see Cl. \ref{cor:distinct}), since only $\Affectors_{k_0}$
was reduced in $\overline{X}_1$ (compared to $\overline{X}^{*}$), only the
constraint of $\Entity_0$ could have been damaged (i.e., become unsatisfied).
Yet, it is easy to verify that the constraint of $\Entity_{0}$ still holds
with strict inequality for $\overline{X}_{1}$, so Property (B2) holds. As $Q_{0}=\{\Entity_0\}$, Property (B1) needs to be verified only for $\Entity_0$, and indeed the new value of $X_1(\Affectors_{k_0})$ ensures $R_0(\overline{X}_{1})>0$, so (B1) is satisfied. Finally, $\SelectionVec_{1}=\{\Affectors_{k_0}\}$, and Property (B3) holds as well.
Next, we describe the general construction step. Assume that we are given solution
$\overline{X}_{r}$ satisfying Properties (B1)-(B3)
for each $r\leq t$. We now describe the construction of $\overline{X}_{t+1}$
and then show that it satisfies the desired properties.
We begin by showing that the set of SR inequalities of Eq. (\ref{eq:SR_ind}) on the entities $\Entity_i$ in $L_{t}(\GCGraph^*)$
hold with strict inequality with $\overline{X}_{t}$.
\begin{claim}
\label{cl:strict_inequality_induc}
$R_j(\overline{X}_{t})>0$, i.e., $\TotR(\overline{X}_{t})_{j} < 1/\beta^{*} \cdot \TotS(\overline{X}_{t})_{j}$,
for every entity $\Entity_j \in L_{t}(\GCGraph^*)$.
\end{claim}
\par\noindent{\bf Proof:~}
Consider some $\Entity_j \in L_{t}(\GCGraph^*)$. By definition of $L_{t}(\GCGraph^*)$,
there exists an entity $\Entity_i \in L_{t-1}(\GCGraph^*)$ such that
$e(i,j) \in E(\GCGraph^*)$.
Since $\Entity_{i} \in Q_{t-1}$ and $\SelectionVec_{t}$ is a partial selection determining
$Q_{t-1}$, a (unique) supporter
$\Affectors_{\IndS(i)} \in \SelectionVec_{t} \cap \Supporters_{i}$
is guaranteed to exist.
By the definition of $\GCGraph^*$, $e(\Entity_i,\Entity_j) \in E(\GCGraph^*)$
implies that $\Affectors_{\IndS(i)} \in \mathbb{R}epressors_{j}$.
Finally, note that by Property (B3),
$X_{t}(\Affectors_{\IndS(i)})<X^*(\Affectors_{\IndS(i)})$ and
$X_{t}(\Affectors)=X^{*}(\Affectors)=X_{t-1}(\Affectors)$ for every
$\Affectors \in \Supporters_j$ (since $\SelectionVec_{t} \cap \Supporters_j =\emptyset$). I.e.,
\begin{equation}
\label{eq:tot_sup_rep_ineq}
\TotS(\overline{X}_{t})_{j}=\TotS(\overline{X}_{t-1})_{j} \text{~and~}
\TotR(\overline{X}_{t})_{j}<\TotR(\overline{X}_{t-1})_{j},
\end{equation}
which, by Eq. (\ref{eq:residual}), implies that
\begin{equation}
\label{eq:residual_step}
R_j(\overline{X}_{t-1})<R_j(\overline{X}_{t})~.
\end{equation}
By the optimality of $\overline{X}_{t-1}$
(Property (B2) for step $t-1$), we have that
$R_j(\overline{X}_{t-1}) \geq 0$.
Combining this with Eq. (\ref{eq:residual_step}),
$0 \leq R_j(\overline{X}_{t-1})<R_j(\overline{X}_{t})$,
which establishes the claim for $\Entity_j$. The same argument can be applied
for every $\Entity_j \in L_{t}(\GCGraph^*)$, thus the claim is established.
\quad\blackslug\lower 8.5pt\null\par
Let $\Delta_t \subseteq \SelectionVec^{*}$ be the partial selection that determines $L_{t}(\GCGraph^*)$. In the solution $\overline{X}_{t+1}$ constructed below, only the entries of $\Delta_t$ are reduced; the other entries remain as in $\overline{X}_{t}$.
Recall that by construction, $\SelectionVec^{*} \subseteq NZ(\overline{X}^{*})$
and therefore also $\SelectionVec^{*} \subseteq NZ(\overline{X}_{t})$.
By Claim \ref{cl:strict_inequality_induc}, the constraints of $L_{t}(\GCGraph^*)$ nodes
hold with strict inequality, and therefore it is possible to slightly reduce
the value of their positive supporters while still maintaining the strict
inequality (although with a smaller residual). Formally, for every
$\Entity_k \in L_{t}(\GCGraph^*)$, consider its unique supporter in $\Delta_t$,
$\Affectors_{i_k}\in \Delta_t \cap \Supporters_k$.
By Claim \ref{cl:strict_inequality_induc}, $R_{k}(\overline{X}_{t})>0$.
Set $X_{t+1}(\Affectors_{i_k}) =
X_{t}(\Affectors_{i_k})-\min(X_{t}(\Affectors_{i_k}),R_{k}(\overline{X}_{t}))/2$.
In addition, $X_{t+1}(\Affectors)=X_{t}(\Affectors)$ for every other
affector $\Affectors \notin \Delta_t$.
It remains to show that $\overline{X}_{t+1}$ satisfies the Properties (B1)-(B3).
(B1) follows by construction. To see (B2), note that since
$\Supporters_{i} \cap \Supporters_{j} = \emptyset$ for every
$\Entity_i, \Entity_j \in \EntitySet$, only the constraints of $L_{t}(\GCGraph^*)$ nodes
might have been violated by the new solution $\overline{X}_{t+1}$.
Formally, $\TotS(\overline{X}_{t+1})_{i}=\TotS(\overline{X}_{t})_{i}$ and
$\TotR(\overline{X}_{t+1})_{i} \leq \TotR(\overline{X}_{t})_{i}$ for every
$\Entity_i \notin L_{t}(\GCGraph^*)$. In contrast, for $\Entity_i \in L_{t}(\GCGraph^*)$,
we get that $\TotS(\overline{X}_{t+1})_{i}<\TotS(\overline{X}_{t})_{i}$
(while $\TotR(\overline{X}_{t+1})_{i} = \TotR(\overline{X}_{t})_{i}$);
however, this reduction in the total support of $L_{t}(\GCGraph^*)$ nodes was performed
in a controlled manner,
guaranteeing that the corresponding $L_{t}(\GCGraph^*)$ inequalities
still hold with \emph{strict} inequality. Finally, (B3) follows immediately.
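The controlled reduction applied at each step can be sketched as follows; the selection map and residual values below are hypothetical placeholders.

```python
# Sketch of one construction step: for each entity k in the current BFS level,
# reduce its selected supporter's value by min(value, residual)/2, which keeps
# the value nonnegative and the residual R_k strictly positive.
def reduce_step(X, selection, residual):
    # selection: entity k -> index of its selected supporter in X
    # residual: entity k -> current residual R_k(X) > 0
    X = list(X)
    for k, a in selection.items():
        X[a] -= min(X[a], residual[k]) / 2
    return X

X_next = reduce_step([4.0, 2.0], {0: 0}, {0: 1.0})
print(X_next)  # [3.5, 2.0]: the selected supporter shrinks, others untouched
```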
After $d+1$ steps, by Property (B1) all inequalities hold with strict inequality
(as $Q_{d}=\EntitySet$) with the solution $\overline{X}_{d+1}$.
Thus, it is possible to find some $\beta^{**}>\beta^{*}(\System)$ that would
contradict the optimality of $\beta^{*}$.
Formally, let $R^{*}=\min_i R_{i}(\overline{X}_{d+1})$. Since $R^{*}>0$,
we get that $\overline{X}_{d+1}$ is feasible with
$\beta^{**}=\beta^{*}(\System)+R^{*}>\beta^{*}(\System)$,
contradicting the optimality of $\beta^{*}(\System)$.
Lemma \ref{lem:strict_equality} follows.
\quad\blackslug\lower 8.5pt\null\par
We proceed by considering a vertex
$\overline{X}^{*} \in V(\Polytope(\beta^{*}))$.
By Lemma \ref{cl:weak_zero_polytope}, $\overline{X}^{*}$ is a $\WeakZeroStar$ solution.
To complete the proof of Thm. \ref{thm:pf_ext}, we have to prove that it is a $\ZeroStar$ solution.
To do that, we first transform $\System$ into a weakly square system $\WeakSystem$. If $m=n+1$, then the system is already weakly square.
Otherwise, without loss of generality, let the $i^{th}$ entry in
$\overline{X}^{*}$ correspond to $\Affectors_i$ where
$\{\Affectors_i\}=NZ(\overline{X}^{*}) \cap \Supporters_i$ for $i \in \{1,\ldots, n-1\}$ and
the $n^{th}$ and $(n+1)^{st}$ entries correspond to $\Affectors_n$ and $\Affectors_{n+1}$
respectively such that
$\{\Affectors_n, \Affectors_{n+1}\}=NZ(\overline{X}^{*}) \cap \Supporters_n$.
It then follows that $X^{*}(i) \neq 0$ for every $i \in \{1,\ldots,n+1\}$ and
$X^{*}(i) = 0$ for every $i \in \{n+2, \ldots, m\}$.
Let $\overline{X}^{**}=\left(X^{*}(1),\ldots, X^{*}(n+1)\right)$.
Let $\WeakSupportersMatrix \in \mathbb{R}^{n \times (n+1)}$ where
$\WeakSupportersMatrix(i,j)=\SupportersMatrix(i,j)$ for every
$i \in \{1,\ldots, n\}$ and every $j \in \{1,\ldots, n+1\}$, and define $\WeakRepressorsMatrix$ analogously.
From now on, we restrict attention to the weakly square system
$\WeakSystem=\langle \WeakSupportersMatrix,\WeakRepressorsMatrix\rangle$ where
$|\Supporters_{n}|=2$. Note that this system results from $\System$
by discarding the corresponding entries of
$\Affectors \setminus NZ(\overline{X}^{*})$.
Therefore,
$\beta^{*}(\System)=\beta^{*}(\WeakSystem)$.
Let $\SupportersMatrix_{n-1}$ correspond to the upper left
$(n-1) \times (n-1)$ submatrix of $\WeakSupportersMatrix$.
Let $\SupportersMatrix_{n}$ be obtained from $\WeakSupportersMatrix$ by
removing the $(n+1)^{st}$ column. Finally, $\SupportersMatrix_{n+1}$
is obtained from $\WeakSupportersMatrix$ by removing the $n^{th}$ column.
The matrices
$\mathbb{R}epressorsMatrix_{n-1},\mathbb{R}epressorsMatrix_{n},\mathbb{R}epressorsMatrix_{n+1}$
are defined analogously.
To study the weakly square system $\WeakSystem$, we consider the following three
\emph{square} systems:
\begin{eqnarray}
\label{eqn:ssystem_def}
\System_{n-1} &=& \langle \SupportersMatrix_{n-1}, \mathbb{R}epressorsMatrix_{n-1}\rangle~,
\\
\System_{n} &=& \langle \SupportersMatrix_{n}, \mathbb{R}epressorsMatrix_{n}\rangle~, \nonumber
\\
\System_{n+1} &=& \langle \SupportersMatrix_{n+1}, \mathbb{R}epressorsMatrix_{n+1}\rangle~. \nonumber
\end{eqnarray}
Note that
a feasible solution
$\overline{X}_{n+b}$ for the system $\System_{n+b}$, for $b \in \{0,1\}$,
corresponds to a feasible solution for $\WeakSystem$ by setting
$X_{w}(\Affectors_j)=X_{n+b}(\Affectors_j)$ for every $j \neq n+(1-b)$
and $X_{w}(\Affectors_{n+(1-b)})=0$.
For ease of notation, let
$\CP_{n}(\lambda) = \CP(Z(\System_{n}), \lambda)$,
$\CP_{n+1}(\lambda) = \CP(Z(\System_{n+1}), \lambda)$ and
$\CP_{n-1}(\lambda) = \CP(Z(\System_{n-1}), \lambda)$,
where $\CP$ is the characteristic polynomial defined in Eq. (\ref{eq:CP}).
Let $\beta^{*}_{n+b}=\beta^*(\System_{n+b})$ be the optimal
value of Program (\ref{LP:Ext_Perron}) for the system
$\System_{n+b}$.
Let $\beta^*=\beta^*(\System)$ and let
\begin{eqnarray*}
\lambda^{*}&=&1/\beta^{*} ,\\
\lambda^{*}_{n+b}&=&1/\beta^{*}_{n+b},
\mbox{~for~} b \in \{-1,0,1\}~.
\end{eqnarray*}
\begin{claim}
\label{cl:n_1_beta_optimal}
$\max\{\beta^{*}_{n}, \beta^{*}_{n+1}\} \leq \beta^{*} < \beta^{*}_{n-1}$.
\end{claim}
\par\noindent{\bf Proof:~}
The left inequality follows as any optimal solution $\overline{X}^{*}$ for
$\System_{n}$ (respectively, $\System_{n+1}$) can be achieved in the weakly square system
$\WeakSystem$ by setting $X^{*}(\Affectors_{n+1})=0$
(resp., $X^{*}(\Affectors_{n})=0$).
\par Assume towards contradiction that $\beta^*=\beta^*_{n-1}$ and let $\overline{X}'$ be an optimal solution for $\WeakSystem$.
By Lemma \ref{cl:entity_one_nonzero}, it holds that $X'(\Affectors_{n})+ X'(\Affectors_{n+1}) >0$.
Without loss of generality, assume that $X'(\Affectors_{n})>0$.
By Obs. \ref{obs:reducible_graph_connected}(a) and
the irreducibility of $\WeakSystem$, $\Entity_{n}$ is strongly connected to the rest of the graph for every selection of one of its two supporters. Thus there exists at least one entity $\Entity_{j}$, $j \in [1,n-1]$
such that $\Affectors_{n} \in \mathbb{R}epressors_{j}$.
Let $\overline{X}'' \in \mathbb{R}^{n-1}$ be obtained by taking the values of the first $n-1$ affectors as in $\overline{X}'$ and discarding the values of $\Affectors_{n}$ and $\Affectors_{n+1}$. We have the following.
\begin{eqnarray}
\label{eqn:totstotr}
\TotS(\overline{X}'', \System_{n-1})_{j}=\TotS(\overline{X}', \WeakSystem)_{j} \mbox{~and~}
\TotR(\overline{X}'', \System_{n-1})_{j}<\TotR(\overline{X}', \WeakSystem)_{j}~,
\end{eqnarray}
where strict inequality follows by the assumption that $X'(\Affectors_{n})>0$ and $\Affectors_{n}$ is a repressor of $\Entity_j$.
Since $\overline{X}'$ is an optimal solution for the system $\WeakSystem$, by Lemma \ref{lem:strict_equality} it holds that $\TotR(\overline{X}', \WeakSystem)_{j}=1/\beta^{*} \cdot \TotS(\overline{X}', \WeakSystem)_{j}$. Combining with Eq. (\ref{eqn:totstotr}), we get that $\TotR(\overline{X}'', \System_{n-1})_{j} < 1/\beta^{*} \cdot \TotS(\overline{X}'', \System_{n-1})_{j}$. Since $\overline{X}''$ is an optimal solution for $\System_{n-1}$ (recall that $\beta^{*}=\beta^{*}_{n-1}$ by assumption), we reach a contradiction to Lemma \ref{lem:strict_equality}, concluding that $\beta^* <\beta^*_{n-1}$. The claim follows.
\quad\blackslug\lower 8.5pt\null\par
Our goal in this subsection is to show that the optimal $\beta^{*}$ value for
$\WeakSystem$ can be achieved by setting either $X^{*}(\Affectors_{n})=0$ or
$X^{*}(\Affectors_{n+1})=0$, essentially showing that the optimal
$\WeakZeroStar$ solution corresponds to a $\ZeroStar$ solution.
This is formalized in the following lemma.
\begin{lemma}
\label{thm:0_solution}
$\beta^{*}=\max\{\beta^{*}_{n}, \beta^{*}_{n+1}\}$.
\end{lemma}
The following observation holds for every $b \in \{-1,0,1\}$ and follows
immediately by the definitions of feasibility and irreducibility and
the \PFT~\ref{thm:pf_full}.
\begin{observation}
\label{obs:perron_application}
\begin{description}
\item{(1)}
$\lambda^*_{n+b}>0$ is the maximal eigenvalue of $Z(\System_{n+b})$.
\item{(2)}
For an irreducible system $\System$, $\lambda^*_{n+b}=1/\beta^*_{n+b}$.
\item{(3)}
If the system is feasible then $\lambda^*_{n+b}>0$.
\end{description}
\end{observation}
For a square system $\System \in \SquareSystemFamily$, let $W^1$ be a modified
form of the matrix $Z$, defined as follows.
$$W^1(\System, \beta) ~=~ Z(\System)-1/\beta \cdot I ~~~\text{for}~~~ \beta \in (0, \beta^{*}].$$
More explicitly,
$$W^{1}(\System, \beta)_{i,j} ~=~
\begin{cases}
-1/\beta, & \text{if $i=j$;}\\
-g(\Entity_i,\Affectors_j)/g(i,i), & \text{otherwise.}
\end{cases}
$$
Clearly, $W^1(\System, \beta)$ cannot be defined for a nonsquare
system $\System \notin \SquareSystemFamily$. Instead, a generalization $W^2$ of $W^1$
for any (nonsquare) $m \geq n$ system $\System$ is given by
$$W^2(\System, \beta) ~=~
\mathbb{R}epressorsMatrix- 1/\beta \cdot \SupportersMatrix,
~~~\text{for}~~~ \beta \in (0, \beta^{*}],$$
or explicitly,
$$W^{2}(\System, \beta)_{i,j} ~=~
\begin{cases}
-g(i,i)/\beta, & \text{if $i=j$;}\\
-g(\Entity_i,\Affectors_j), & \text{otherwise.}
\end{cases}
$$
Note that if $\overline{X}_{\beta}$ is a feasible solution for $\System$,
then $W^{2}(\System, \beta) \cdot \overline{X}_{\beta} \leq 0$.
If $\System \in \SquareSystemFamily$, it also holds that
$W^{1}(\System, \beta) \cdot \overline{X}_{\beta} \leq 0$.
For $\System \in \SquareSystemFamily$, where both $W^{1}(\System, \beta)$ and
$W^{2}(\System, \beta)$ are well-defined, the following connection
becomes useful in our later argument. Recall that $\CP(Z(\System),t)$
is the characteristic polynomial of $Z(\System)$ (see Eq. (\ref{eq:CP})).
\begin{observation}
\label{obs:x_y_relation}
For a square system $\System$,\\
(a) $\mathrm{d}et(-W^{1}(\System, \beta))=\CP(Z(\System),1/\beta)$ and \\
(b) $\mathrm{d}et(-W^{2}(\System, \beta)) ~=~
\CP(Z(\System),1/\beta) \cdot \prod_{i=1}^{n} g(i,i)$.
\end{observation}
\par\noindent{\bf Proof:~}
The observation follows immediately by noting that
$W^{2}(\System, \beta)_{i,j}=W^{1}(\System, \beta)_{i,j} \cdot g(i,i)$ for every $i$ and $j$ (i.e., each row $i$ of $W^{2}$ is the corresponding row of $W^{1}$ scaled by $g(i,i)$), and by Eq. (\ref{eq:CP}).
\quad\blackslug\lower 8.5pt\null\par
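The determinant identity in Obs. \ref{obs:x_y_relation} rests on the elementary fact that scaling row $i$ of a matrix by $g(i,i)$ multiplies its determinant by $\prod_i g(i,i)$. A quick numerical check on a hypothetical $2 \times 2$ example (all values invented for illustration):

```python
# Sketch: row-scaling a matrix multiplies its determinant by the product of
# the scaling factors -- the fact behind det(-W^2) = CP(Z, 1/beta) * prod g(i,i).
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

W1 = [[-0.5, -0.2], [-0.3, -0.5]]  # hypothetical -W^1-style matrix
g = [2.0, 4.0]                     # hypothetical diagonal gains g(i,i)
W2 = [[g[i] * W1[i][j] for j in range(2)] for i in range(2)]
print(abs(det2(W2) - det2(W1) * g[0] * g[1]) < 1e-12)  # True
```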
The next equality plays a key role in our analysis.
\begin{lemma}
\label{lem:P_n_n+1}
$\mathrm{d}isplaystyle \frac{g(n,n) \cdot X^{*}(n) \cdot \CP_{n}(\lambda^{*})}
{\CP_{n-1}(\lambda^{*})} +
\frac{g(n,n+1) \cdot X^{*}(n+1) \cdot \CP_{n+1}(\lambda^{*})}
{\CP_{n-1}(\lambda^{*})} = 0.$
\end{lemma}
\par\noindent{\bf Proof:~}
By Lemma \ref{lem:strict_equality}, it follows that
$-W^{2}(\WeakSystem,\beta^{*}) \cdot \overline{X}^{*}=0$,
or
$$
\begin{pmatrix}
g(1,1)/\beta^{*} & g(1,2) & \ldots &
g(1,n)&
g(1,n+1)\\
g(2,1) & g(2,2)/\beta^{*} & \ldots &
g(2,n)&
g(2,n+1)\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
g(n,1) & g(n,2) & \ldots &
g(n,n)/\beta^{*}&
g(n,n+1)/\beta^{*}\\
\end{pmatrix}
\cdot
\begin{pmatrix}
X^{*}(1)\\
X^{*}(2)\\
\vdots\\
X^{*}(n)\\
X^{*}(n+1)
\end{pmatrix}
=
\begin{pmatrix}
0\\
\vdots\\
0\\
0
\end{pmatrix}
$$
Next, we need to apply Claim \ref{cl:cramer_non_square}(b).
To do that, we first need to verify that
$W^{2}(\System_{n-1},\beta^{*})$, i.e., the $(n-1) \times (n-1)$ upper left
submatrix of $W^{2}(\WeakSystem,\beta^{*})$, is nonsingular.
This follows by noting that $\lambda^{*} \in \mathbb{R}_{>0}$ and
by Claim \ref{cl:n_1_beta_optimal}, $\lambda^{*}>\lambda^{*}_{n-1}$.
Moreover, note that $\lambda^{*}_{n-1}$ is the largest real root of
$\CP_{n-1}(\lambda)$, hence
\begin{equation}
\label{eq:cpnz}
\CP_{n-1}(\lambda^{*}) \neq 0~.
\end{equation}
Combining with Obs. \ref{obs:x_y_relation}(b), it follows that
$\mathrm{d}et(-W^{2}(\System_{n-1}, \beta^{*})) \neq 0$ or that
$W^{2}(\System_{n-1}, \beta^{*})$ is nonsingular.
Now we can safely apply Claim \ref{cl:cramer_non_square}(b),
yielding
\begin{eqnarray*}
\label{eq:Cramer_SINR_mid}
X^{*}(n) \cdot \frac{\mathrm{d}et \left(-W^{2}(\System_{n},\beta^{*}) \right)}
{\mathrm{d}et \left(-W^{2}(\System_{n-1},\beta^{*}) \right)}+ X^{*}(n+1) \cdot \frac{\mathrm{d}et (-W^{2}(\System_{n+1},\beta^{*}))}
{\mathrm{d}et \left(-W^{2}(\System_{n-1},\beta^{*}) \right)} = 0~.
\end{eqnarray*}
By plugging in Obs. \ref{obs:x_y_relation}(b) and simplifying, the lemma follows.
\quad\blackslug\lower 8.5pt\null\par
Our work plan from this point on is as follows. We first define a range of
`candidate' values for $\beta^{*}$. Essentially, our interest is in
\emph{real} positive $\beta^{*}$.
Recall that $Z(\WeakSystem), Z(\System_{n})$ and $Z(\System_{n+1})$ are nonnegative
irreducible square matrices and therefore Theorem \ref{thm:pf_full} can be applied
throughout the analysis.
Without loss of generality, assume that $\beta^{*}_{n} \geq \beta^{*}_{n+1}$
(and thus $\lambda_{n}^{*}\leq \lambda_{n+1}^{*}$) and let
$Range_{\beta^*}=(\beta^{*}_{n}, \beta^{*}_{n-1}) \subseteq \mathbb{R}_{>0}$.
Let the corresponding range of $\lambda^{*}$ be
\begin{equation}
\label{eq:lambda_range}
Range_{\lambda^{*}}=(\lambda^{*}_{n-1},\lambda^{*}_{n})=(1/\beta^{*}_{n-1}, 1/\beta^{*}_{n}).
\end{equation}
To complete the proof of Lemma \ref{thm:0_solution} we assume, towards contradiction, that
$\beta^{*} > \beta^{*}_{n}$.
According to Claim \ref{cl:n_1_beta_optimal},
it then follows that $\beta^{*}_{n} < \beta^{*} < \beta^{*}_{n-1}$, hence $\lambda^*< \lambda^*_{n} \leq \lambda^*_{n+1}$ and $\CP_{n}(\lambda^*),\CP_{n+1}(\lambda^*)\neq 0$.
In addition, $\beta^{*} \in Range_{\beta^*}$.
Note that since $Range_{\beta^{*}} \subseteq \mathbb{R}_{>0}$, also
$Range_{\lambda^{*}}\subseteq \mathbb{R}_{>0}$, namely, the corresponding $\lambda^*$ is real and positive as well.
This is important mainly in the context of nonnegative irreducible matrices
$Z(\System')$ for $\System' \in \SquareSystemFamily$.
In contrast to nonnegative primitive matrices
(where $\Period=1$), for irreducible matrices such as $Z(\System')$,
by Thm. \ref{thm:pf_full} there are $\Period \geq 1$ eigenvalues,
$\lambda_i \in \EigenValue(\System')$, for which
$|\lambda_i|=\PFEigenValue(\System')$.
However, note that only one of these, namely, $\PFEigenValue(\System')$,
might belong to $Range_{\lambda^{*}}\subseteq \mathbb{R}_{>0}$.
(This follows as by Thm. \ref{thm:pf_full}, every other such $\lambda_i$ is either real but negative
or with a nonzero complex component).
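This uniqueness is easy to observe numerically. The following sketch (illustrative data, not from the paper) checks it for a small irreducible matrix of period $\Period=2$:

```python
import numpy as np

# A nonnegative irreducible matrix with period 2: it has two eigenvalues
# of maximal modulus (+r and -r), but only +r, the PF eigenvalue, is real
# and positive -- so only it can fall inside a real positive range.
Z = np.array([[0.0, 2.0],
              [3.0, 0.0]])
eigs = np.linalg.eigvals(Z)
r = max(abs(l) for l in eigs)           # spectral radius, here sqrt(6)
max_mod = [l for l in eigs if abs(abs(l) - r) < 1e-9]
real_pos = [l for l in max_mod
            if abs(complex(l).imag) < 1e-9 and complex(l).real > 0]
```

Here the two eigenvalues $\pm\sqrt{6}$ both attain the spectral radius, yet only $+\sqrt{6}$ is real and positive.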
Fix $b \in \{-1,0,1\}$ and let $k_{n+b}$ be the number of real and positive
eigenvalues of $Z(\System_{n+b})$. Let
$0<\lambda_{n+b}^{1} \leq \lambda_{n+b}^{2} \leq \ldots \leq \lambda_{n+b}^{k_{n+b}}$
be the ordered set of \emph{real and positive} eigenvalues for
$Z(\System_{n+b})$, i.e., real positive roots of $\CP_{n+b}(\lambda)$.
Note that $\lambda_{n+b}^{k_{n+b}}=\lambda_{n+b}^{*}$.
By Theorem \ref{thm:pf_full}, we have that for every $b \in \{-1,0,1\}$
\\
(a) $\lambda_{n+b}^{*} \in \mathbb{R}_{>0}$, and
\\
(b) $\lambda_{n+b}^{*} > |\lambda_{n+b}^{p}|$, $p \in \{1,\ldots, k_{n+b}-1\}$.
We proceed by showing that the potential range for $\lambda^{*}$, namely,
$Range_{\lambda^{*}}$, can contain no root of $\CP_{n}(\lambda)$ and
$\CP_{n+1}(\lambda)$.
Since $Range_{\lambda^{*}}$ is real and positive, it is sufficient to consider
only real and positive roots of $\CP_{n}(\lambda)$ and $\CP_{n+1}(\lambda)$
(or real and positive eigenvalues of $Z(\System_{n})$ and $Z(\System_{n+1})$).
\begin{figure}
\caption{Real positive roots of $\CP_{n}(\lambda)$ and $\CP_{n+1}(\lambda)$.}
\label{fig:eigenval}
\end{figure}
\begin{claim}
\label{cl:no_root_in_range}
$\lambda_{n}^{p_0}, \lambda_{n+1}^{p_1} \notin Range_{\lambda^{*}}$ for every real
$\lambda_{n}^{p_0}, \lambda_{n+1}^{p_1}$, for $p_0 <k_{n}, p_1<k_{n+1}$.
\end{claim}
\par\noindent{\bf Proof:~}
Note that $Z(\System_{n-1})$ is the principal $(n-1)$ minor of both
$Z(\System_{n})$ and $Z(\System_{n+1})$. By the separation theorem of Hall and Porsching (Lemma \ref{lem:sep_thm}), we get that
$\lambda_{n}^{p_0}, \lambda_{n+1}^{p_1} \leq \lambda_{n-1}^{*}$ for every
$p_0 <k_{n}$ and $p_1 < k_{n+1}$, concluding by Eq. (\ref{eq:lambda_range}) that
$\lambda_{n}^{p_0}, \lambda_{n+1}^{p_1} \notin Range_{\lambda^{*}}$.
\quad\blackslug\lower 8.5pt\null\par
We proceed by showing that $\CP_{n}(\lambda)$ and $\CP_{n+1}(\lambda)$
have the same sign in $Range_{\lambda^{*}}$. See Fig. \ref{fig:eigenval} for a schematic description of the system.
\begin{claim}
\label{cl:same_sign}
$\Sign (\CP_{n}(\lambda)) =\Sign (\CP_{n+1}(\lambda))$
for every $\lambda \in Range_{\lambda^{*}}$.
\end{claim}
\par\noindent{\bf Proof:~}
Fix $b \in \{0,1\}$. By Claim \ref{cl:no_root_in_range}, $\CP_{n+b}$ has no roots in $Range_{\lambda^{*}}$, so $\Sign (\CP_{n+b}(\lambda_1)) =\Sign(\CP_{n+b}(\lambda_2))$ for every
$\lambda_1, \lambda_2 \in Range_{\lambda^{*}}$. Also note that by Thm. \ref{thm:pf_full}, $\Sign(\CP_{n+b}(\lambda_1) ) =\Sign(\CP_{n+b}(\lambda_2))$,
for every $\lambda_1, \lambda_2 > \lambda_{n+b}^{*}$.
We now make two crucial observations. First, as $\CP_{n}(\lambda)$ and
$\CP_{n+1}(\lambda)$ are characteristic polynomials of
$n \times n$ matrices, they have the same leading coefficient (every characteristic polynomial is monic, i.e., has leading coefficient 1 and degree $n$) and therefore
$\Sign(\CP_{n}(\lambda))=\Sign(\CP_{n+1}(\lambda))$ for
$\lambda > \lambda_{n+1}^{*}$ (recall that we assume that
$\lambda_{n+1}^{*}\geq\lambda_{n}^{*}$).
Second, due to the \PFT, the maximal roots of
$\CP_{n}(\lambda)$ and $\CP_{n+1}(\lambda)$ are of multiplicity one and therefore
the polynomial $\CP_{n}(\lambda)$ (resp., $\CP_{n+1}(\lambda)$) necessarily changes its sign when $\lambda$ passes through its maximal real positive root $\lambda_{n}^{*}$ (respectively, $\lambda_{n+1}^{*}$).
Using these two observations, we now prove the claim via contradiction.
Assume, toward contradiction, that
$\Sign (\CP_{n}(\lambda)) \neq \Sign (\CP_{n+1}(\lambda))$ for
some $\lambda \in Range_{\lambda^{*}}$. Then
$\Sign (\CP_{n}(\lambda_1)) \neq \Sign (\CP_{n}(\lambda_2))$
for every $\lambda_1 > \lambda_{n}^{*}$ and $\lambda_2 \in Range_{\lambda^{*}}$, and similarly
$\Sign (\CP_{n+1}(\lambda_1) )\neq \Sign (\CP_{n+1}(\lambda_2))$
for every $\lambda_1 > \lambda_{n+1}^{*}$ and $\lambda_2 \in Range_{\lambda^{*}}$.
(This holds since the sign necessarily flips when $\lambda$ passes through a root
of multiplicity one.) In particular, this implies that
$\Sign (\CP_{n}(\lambda))\neq \Sign (\CP_{n+1}(\lambda))$
for every $\lambda \geq \lambda_{n+1}^{*}$, in contradiction to the fact that
$\Sign(\CP_{n}(\lambda))=\Sign(\CP_{n+1}(\lambda))$
for every $\lambda>\lambda_{n+1}^{*}$. The claim follows.
\quad\blackslug\lower 8.5pt\null\par
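The sign flip at the simple maximal root, on which the proof above hinges, can be illustrated numerically (toy matrix, not from the paper):

```python
import numpy as np

# The monic characteristic polynomial of a nonnegative irreducible matrix
# changes sign when lambda passes through its simple maximal real root.
Z = np.array([[1.0, 2.0],
              [1.0, 1.0]])
coeffs = np.poly(Z)                      # char. polynomial: l^2 - 2l - 1
pf = max(np.linalg.eigvals(Z).real)      # maximal root: 1 + sqrt(2)
below = np.polyval(coeffs, pf - 1e-3)    # just below the PF root
above = np.polyval(coeffs, pf + 1e-3)    # just above the PF root
```

Since the polynomial is monic, it is positive above the maximal root and negative just below it.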
We now complete the proof of Lemma \ref{thm:0_solution}.
\par\noindent{\bf Proof:~}
By Eqs. (\ref{eq:cpnz}) and
(\ref{eq:lambda_range}),
$\CP_{n-1}(\lambda) \neq 0$ for every $\lambda \in Range_{\lambda^{*}}$.
We can safely apply Claim \ref{cl:same_sign} to Lemma \ref{lem:P_n_n+1}
and get that $\Sign (X^{*}(n)) \neq \Sign (X^{*}(n+1))$.
Since $X^{*}(n),X^{*}(n+1)$ and $g(n,n),g(n,n+1)$ are nonnegative, it follows that
$X^{*}(n)=0$ and $X^{*}(n+1)=0$, in contradiction to Lemma \ref{cl:entity_one_nonzero}.
We conclude that $\beta^{*}=\beta^{*}_{n}$.
\quad\blackslug\lower 8.5pt\null\par
We complete the geometric characterization of the generalized \PFT~by noting the following.
\begin{lemma}
\label{lem:no_weak_vertex}
Every vertex $\overline{X}\in V(\Polytope(\beta^{*}))$ is a $\ZeroStar$ solution.
\end{lemma}
\par\noindent{\bf Proof:~}
By Lemma \ref{cl:weak_zero_polytope}, it is sufficient to show that there
exists no $\overline{X}\in V(\Polytope(\beta^{*}))$ that is weak, namely,
which is a $\WeakZeroStar$ solution but not a $\ZeroStar$ solution.
Assume, towards contradiction, that there exists a weak vertex $\overline{X}\in V(\Polytope(\beta^{*}))$,
i.e., one with both $X(n) >0$ and $X(n+1)>0$. From now on, we replace
$\overline{X} \in \mathbb{R}^{m}$ by its truncated sub-vector in
$\mathbb{R}^{n+1}$, i.e., we discard the $m-n-1$ zero entries in $\overline{X}$.
Let $\System_{n-1},\System_{n}$ and $\System_{n+1}$ be defined as in Eq. (\ref{eqn:ssystem_def}). Recalling the notation of Sec. \ref{sec:per}, where for a matrix $A$ we denote by $A_{-(i,j)}$ the matrix that results from $A$ by removing the $i$-th row and the $j$-th column, define
$$a_i=(-1)^{n-i} \cdot \frac{\det \left(W^{2}(\System_{n},\beta^{*})_{-(n,i)}\right)}{\det\left( W^{2}(\System_{n-1},\beta^{*})\right)}$$
and
$$b_i=(-1)^{n-i} \cdot \frac{\det \left(W^{2}(\System_{n+1},\beta^{*})_{-(n,i)} \right)}{\det\left( W^{2}(\System_{n-1},\beta^{*})\right)}$$
for $i \in \{1, \ldots, n\}$.
By Eq. (\ref{eq:CP}), Claim \ref{cl:cramer_non_square}(a)
and the proof of Lemma \ref{lem:P_n_n+1},
every optimal solution, and in particular every $\overline{X}\in V(\Polytope(\beta^{*}))$, satisfies
\begin{equation}
\label{eq:i_entries_pf}
X(i)=a_{i} \cdot X(n)+ b_{i} \cdot X(n+1)
\end{equation}
for $i \in \{1,\ldots,n-1\}$. This implies that our weak solution $\overline{X}$ is given by
$$
\overline{X}~=~ X(n) \cdot [a_1, \ldots, a_{n-1},1,0]^{T}+X(n+1)
\cdot [b_1, \ldots, b_{n-1},0,1]^{T}.
$$
Let
$$
c_n ~=~ X(n) \cdot \left(1+\sum_{i=1}^{n-1}a_{i} \right)
\qquad\text{and}\qquad
c_{n+1} ~=~ X(n+1) \cdot \left(1+\sum_{i=1}^{n-1}b_{i} \right),
$$
where the feasibility of $\overline{X}$ implies $c_{n}+c_{n+1}=1$.
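For completeness, the identity $c_{n}+c_{n+1}=1$ follows directly from the normalization constraint: substituting Eq. (\ref{eq:i_entries_pf}) into $||\overline{X}||_{1}=1$ and using $\overline{X}\geq\overline{0}$ gives

```latex
\|\overline{X}\|_{1}
~=~ \sum_{i=1}^{n-1} X(i) + X(n) + X(n+1)
~=~ X(n)\left(1+\sum_{i=1}^{n-1}a_{i}\right)
   + X(n+1)\left(1+\sum_{i=1}^{n-1}b_{i}\right)
~=~ c_{n}+c_{n+1} ~=~ 1.
```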
Next, consider Lemma \ref{lem:P_n_n+1}. Since $\overline{X}$ is optimal,
with both $X(n) >0$ and $X(n+1)>0$, it follows that
$\det \left(W^{2}(\System_{n},\beta^{*})\right)=\det \left( W^{2}(\System_{n+1},\beta^{*})\right)=0$.
This means that when constructing an optimal solution $\overline{Y}$,
one has complete freedom to select any $Y(n),Y(n+1) \geq 0$ and the
rest of the coordinates are determined by Eq. (\ref{eq:i_entries_pf}).
In particular, setting $Y(n)=X(n)/c_{n}$ and
$Y(n+1)=X(n+1)/c_{n+1}$ yields the following two optimal solutions:
$\overline{Y}_1=X(n)/c_{n}\cdot [a_1, \ldots, a_{n-1},1,0]^{T}$ and
$\overline{Y}_2=X(n+1)/c_{n+1}\cdot[b_1, \ldots, b_{n-1},0,1]^{T}$.
Note that $\overline{X}$ can be described as a convex combination of
$\overline{Y}_1$ and $\overline{Y}_2$, i.e.,
$\overline{X}=c_{n} \cdot \overline{Y}_1+ c_{n+1}\cdot \overline{Y}_2$
(recall that $c_{n}+c_{n+1}=1$). This is in contradiction to the fact
that $\overline{X}$ is a vertex of a polytope. The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
\begin{lemma}
\label{lem:zero_star}
There exists a selection
$\FilterMatrix^{*} \in \FilterMatrixFamily$
such that $\PFEigenValue(\System(\FilterMatrix^{*}))=1/\beta^{*}$.
\end{lemma}
\par\noindent{\bf Proof:~}
Recall that our $\ZeroStar$ solution,
$\overline{X}^{*}$, is a solution for the weak subsystem $\WeakSystem$, and
therefore $\overline{X}^{*} \in \mathbb{R}^{n+1}$. In addition, $|NZ(\overline{X}^{*})|=n$
and due to Lemma \ref{cl:entity_one_nonzero}, $|NZ(\overline{X}^{*}) \cap \Supporters_i|=1$ for every $\Entity_{i}$, or in other words,
$\SelectionVec'=NZ(\overline{X}^{*})$ is a complete selection for $\EntitySet$
such that $|\SelectionVec'|=n$. Taking $\FilterMatrix^{*}=\FilterMatrix(\SelectionVec')$ yields the desired claim. The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
Note that Eq. (\ref{eq:i_entries_pf}) illustrates the additional degrees
of freedom at the optimum point of Program (\ref{LP:Ext_Perron}).
Specifically, to obtain an optimum solution for $\beta^{*}$,
one has the freedom to set $X(n)\geq 0$ and $X(n+1) \geq 0$
(as long as at least one of them is positive), and the rest of the coordinates are determined accordingly.
We are now ready to complete the proof of Thm. \ref{thm:pf_ext}.
\par\noindent{\bf Proof:~} [Thm. \ref{thm:pf_ext}]
Let $\FilterMatrix^{*}$ be the selection such that
$\PFEigenValue(\System)=\PFEigenValue(\System(\FilterMatrix^{*}))$.
Note that by the irreducibility of $\System$, the square system
$\System(\FilterMatrix^{*})$ is irreducible as well and therefore the \PFT~
for irreducible matrices can be applied.
In particular, by Thm. \ref{thm:pf_full}, it follows that
$\PFEigenValue(\System(\FilterMatrix^{*})) \in \mathbb{R}_{>0}$ and that
$\PFEigenVector(\System(\FilterMatrix^{*}))>0$.
Therefore, by Eqs. (\ref{eq:general_pf_root}) and (\ref{eq:general_pf_vector}),
Claims (Q1)-(Q3) of Thm. \ref{thm:pf_ext} follow.
We now turn to claim (Q4) of the theorem. Note that for a symmetric system, in which
$g(i,j_1)=g(i,j_2)$ for every
$\Affectors_{j_1},\Affectors_{j_2} \in \Supporters_k$ and every $k,i \in \{1,\ldots,n\}$,
the system is invariant to the selection matrix and therefore
$\PFEigenValue(\System(\FilterMatrix_1)) =
\PFEigenValue(\System(\FilterMatrix_2))$
for every $\FilterMatrix_1,\FilterMatrix_2 \in \FilterMatrixFamily$.
Finally, it remains to consider claim (Q5) of the theorem. Note that the optimization problem
specified by Program (\ref{LP:Ext_Perron}) is an alternative formulation
to the generalized Collatz-Wielandt formula given in (Q5).
We now show that $\PFEigenValue(\System)$ (respectively,
$\PFEigenVector(\System)$) is the optimum value (resp., point)
of Program (\ref{LP:Ext_Perron}).
By Lemma \ref{lem:zero_star}, there exists an optimal point
$\overline{X}^{*}$ for Program (\ref{LP:Ext_Perron}) which is a $\ZeroStar$
solution. Note that a $\ZeroStar$ solution corresponds to a unique hidden
square system, given by $\System^{*}=\System(NZ(\overline{X}^{*}))$
($\System^{*}$ is square since $|NZ(\overline{X}^{*})|=n$).
Therefore, by Thm. \ref{thm:pf} and Lemma \ref{lem:zero_star},
we get that
\begin{equation}
\label{eq:put_all_val}
\PFEigenValue(\System^{*}) ~=~ 1/\beta^{*}(\System^{*}) ~=~ 1/\beta^{*}(\System).
\end{equation}
Next, by Observation \ref{obs:filter_to_square}(b), we have that
$\PFEigenValue(\System(\FilterMatrix)) \geq
\PFEigenValue(\System)$ for every $\FilterMatrix \in \FilterMatrixFamily$. It therefore follows that
\begin{equation}
\label{eq:put_all}
\PFEigenValue(\System^{*}) ~=~
\min_{\FilterMatrix \in \FilterMatrixFamily} \PFEigenValue(\System(\FilterMatrix)).
\end{equation}
Combining Eq. (\ref{eq:put_all_val}), (\ref{eq:put_all}) and
(\ref{eq:general_pf_root}),
we get that the PF eigenvalue of the system $\System$
satisfies $\PFEigenValue(\System)=1/\beta^{*}(\System)$ as required.
Finally, note that
by Thm. \ref{thm:pf}, $\PFEigenVector(\System^{*})$ is the optimal
point for Program (\ref{LP:Ext_Perron}) with the square system $\System^{*}$.
By Eq. (\ref{eq:general_pf_vector}), $\PFEigenVector(\System)$ is an extension
of $\PFEigenVector(\System^{*})$ with zeros (i.e., a $\ZeroStar$ solution).
It can easily be checked that $\PFEigenVector(\System)$ is a feasible solution
for the original system $\System$ with
$\beta=\beta^{*}(\System^{*})=\beta^{*}(\System)$, hence it is optimal.
Note that by Lemma \ref{lem:strict_equality}, it indeed follows that $\RepressorsMatrix \cdot \PFEigenVector(\System) =
1/\beta^{*}(\System) \cdot \SupportersMatrix \cdot \PFEigenVector(\System)$,
as is the case for every optimal solution $\overline{X}^{*}$.
Theorem \ref{thm:pf_ext} follows.
\quad\blackslug\lower 8.5pt\null\par
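To make claim (Q5) concrete, here is a small brute-force sketch (in Python, with a hypothetical toy system; `R`, `S`, and `supporters` are illustrative, not from the paper): it enumerates all complete selections, forms each hidden square system $Z=\SupportersMatrix^{-1}\cdot\RepressorsMatrix$, and takes the minimum PF eigenvalue, which by the theorem equals $1/\beta^{*}(\System)$.

```python
import numpy as np
from itertools import product

# Hypothetical toy system: n = 2 entities, m = 3 affectors.
# R[i, j] = repression gain of affector j on entity i,
# S[i, j] = support gain (nonzero only if affector j supports entity i).
# Entity 0 is supported by affectors {0, 1}; entity 1 by affector {2}.
R = np.array([[0.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
S = np.array([[1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
supporters = [[0, 1], [2]]              # candidate supporters per entity

best = None
for sel in product(*supporters):        # all complete selections
    cols = list(sel)
    Ssq, Rsq = S[:, cols], R[:, cols]
    Z = np.linalg.inv(Ssq) @ Rsq        # hidden square system
    r = max(abs(np.linalg.eigvals(Z)))  # its PF eigenvalue
    if best is None or r < best[0]:
        best = (r, sel)

pf_value, pf_sel = best                 # generalized PF value = min over selections
```

On this toy instance the selection $(1,2)$ wins, with PF eigenvalue $1/\sqrt{3}$, i.e., $\beta^{*}=\sqrt{3}$.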
\section{Computing the generalized PF vector}
\label{short:sec:Algorithm}
In this section we present a polynomial time algorithm for computing the generalized Perron eigenvector $\PFEigenVector(\System)$ of an irreducible
system $\System$.
\paragraph{The method.}
By Property (Q5) of Thm. \ref{thm:pf_ext}, computing $\PFEigenVector(\System)$
is equivalent to finding a $\ZeroStar$ solution for
Program (\ref{LP:Ext_Perron}) with $\beta=\beta^{*}(\System)$.
For ease of analysis, we assume throughout that the gains are integral,
i.e., $g(i,j) \in \mathbb{Z}$, for every $i \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, m\}$.
If this does not hold, then the gains can be rounded or scaled to achieve this.
Let
\begin{equation}
\label{eq:gmax}
\MaxGain(\System) ~=~ \max_{i \in \{1,\ldots, n\}, j \in \{1, \ldots, m\}} \left\{|g(i,j)| \right\},
\end{equation}
and define $\LPRunTime$ as the running time of an LP solver such as the
interior point algorithm \cite{Boyd-Conv-Opt-Book} for Program (\ref{LP:Ext_Perron_convex}).
Recall that we look for an exact optimal solution for a non-convex optimization
problem (see Program (\ref{LP:Ext_Perron})). Using the convex relaxation of
Program (\ref{LP:Ext_Perron_convex}), a binary search can be applied
for finding an approximate solution up to a predefined accuracy.
The main challenge is then to find (a) an optimal solution
(and not an approximate one), and (b) among all the optimal solutions,
to find one that is a $\ZeroStar$ solution.
Let $\FilterMatrix_1, \FilterMatrix_2 \in \FilterMatrixFamily$ be two
selection matrices for $\System$. By Thm. \ref{thm:pf_ext}, there exists
a selection matrix $\FilterMatrix^{*}$ such that
$\PFEigenValue(\System)=\PFEigenValue(\System(\FilterMatrix^{*}))$ and
$\PFEigenVector(\System)$ is a $\ZeroStar$ solution corresponding to
$\PFEigenVector(\System(\FilterMatrix^{*}))$
(in addition $\beta^{*}=1/\PFEigenValue(\System(\FilterMatrix^{*}))$).
Our goal then is to find a selection matrix $\FilterMatrix^{*} \in \FilterMatrixFamily$ where
$|\FilterMatrixFamily|$ might be exponentially large.
\begin{theorem}
\label{thm:algorithm}
Let $\System$ be an irreducible system. Then $\PFEigenVector(\System)$ can be
computed in time
$O(n^{3} \cdot \LPRunTime \cdot
\left(\log \left(n \cdot \MaxGain \right) +n \right))$.
\end{theorem}
Let
\begin{equation}
\label{eq:delta_beta}
\Delta_{\beta} ~=~ (n\MaxGain)^{-8n^3}.
\end{equation}
The key observation in this context is the following ``minimum gap" observation.
\begin{lemma}
\label{lem:apart_in_range}
Consider a selection matrix $\FilterMatrix \in \FilterMatrixFamily$. If
$\beta^*(\System)-1/\PFEigenValue(\System(\FilterMatrix)) \leq \Delta_{\beta}$,
then $\beta^{*}(\System)=1/\PFEigenValue(\System(\FilterMatrix))$.
\end{lemma}
By performing a polynomial number of steps of binary search for the optimal
$\beta^{*}(\System)$, one can converge to a value $\beta^{-}$ that is at most
$\Delta_{\beta}$ far from $\beta^{*}(\System)$, i.e.,
$\beta^{*}(\System)-\beta^{-}<\Delta_\beta$.
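This binary-search phase can be sketched as follows. As a simplifying stand-in (an assumption of this sketch, not the paper's method), for a square irreducible system the LP feasibility oracle reduces to the spectral test $\beta \leq 1/\PFEigenValue$, which we use here in place of the interior-point solver:

```python
import numpy as np

# Phase-1 sketch: binary search for beta^*. For a *square* irreducible
# system the feasibility oracle reduces to a spectral test (feasible iff
# beta <= 1/rho(Z)); it stands in here for the LP-based oracle fff.
def feasible(beta, Z):
    rho = max(abs(np.linalg.eigvals(Z)))
    return beta <= 1.0 / rho + 1e-12

def binary_search_beta(Z, gmax, delta):
    lo, hi = 0.0, gmax                  # beta^* is at most gmax
    while hi - lo > delta:
        mid = (lo + hi) / 2.0
        if feasible(mid, Z):
            lo = mid                    # mid is feasible: move lower bound up
        else:
            hi = mid                    # mid is infeasible: move upper bound down
    return lo                           # beta^-, with beta^* - beta^- <= delta

Z = np.array([[0.0, 0.5],
              [2.0, 0.0]])              # rho(Z) = 1, so beta^* = 1
beta_minus = binary_search_beta(Z, gmax=2.0, delta=1e-6)
```

The search maintains a feasible lower bound and an infeasible upper bound, so it halts with $\beta^{*}-\beta^{-}\leq\Delta_{\beta}$ after logarithmically many oracle calls.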
Let $Range_{\beta^{*}}=[\beta^{-}, \beta^{*}]$.
Then by Lemma \ref{lem:apart_in_range}, we are guaranteed that
$\PFEigenValue(\System(\FilterMatrix))=1/\beta^{*}$
for any selection matrix $\FilterMatrix \in \FilterMatrixFamily$ such that
$1/\PFEigenValue(\System(\FilterMatrix)) \in Range_{\beta^{*}}$
(there could be many such matrices $\FilterMatrix$, but in this case,
they all correspond to systems with PF value $1/\beta^{*}$).
To prove Lemma \ref{lem:apart_in_range}, we first establish a lower
bound on the difference between \emph{any} two different PF eigenvalues of any
two irreducible square systems, i.e., we show that the PF roots
$\PFEigenValue(\System^{s}_1)$ and $\PFEigenValue(\System^{s}_2)$
of any two irreducible square systems
$\System^{s}_1, \System^{s}_2 \in \SquareSystemFamily$
cannot be too close if they are different.
Recall that for an irreducible square system $\System^{s}$,
$Z(\System^{s})=(\SupportersMatrix)^{-1} \cdot \RepressorsMatrix$,
where $\SupportersMatrix$ can be considered to be diagonal
with a strictly positive diagonal.
We begin the analysis by scaling the entries of $Z(\System^{s})$ to obtain
an integer-valued matrix $Z^{\mathrm{int}}$. The scaling is needed in order to
employ a well-known bound due to Bugeaud and Mignotte \cite{OnDistBetwRoots}
on the minimal distance between the roots
of integer polynomials (Lemma \ref{lemma:distance_of_roots}).
The guaranteed distance on
$\PFEigenValue(\System^{s}_1)$ and $\PFEigenValue(\System^{s}_2)$ is later
translated into a minimal bound on distance for their reciprocals
$1/\PFEigenValue(\System^{s}_1)$ and $1/\PFEigenValue(\System^{s}_2)$,
which correspond to $\beta$ values of Program (\ref{LP:Ext_Perron}),
i.e., optimal $\beta$ values of two different irreducible square systems
for Program (\ref{LP:Ext_Perron}).
Specifically, we show that for any given
sufficiently small range of $\beta$ values, $Range_\beta=[\beta_1, \beta_2]$
such that $|\beta_1-\beta_2| \leq \Delta_{\beta}$, there cannot be two
selection matrices
$\FilterMatrix_1,\FilterMatrix_2 \in \FilterMatrixFamily$ such that
$\PFEigenValue(\System(\FilterMatrix_1)) \neq
\PFEigenValue(\System(\FilterMatrix_2))$
and yet both
$1/\PFEigenValue(\System(\FilterMatrix_1)),
1/\PFEigenValue(\System(\FilterMatrix_2)) \in Range_\beta$.
\par The \emph{na\"{\i}ve height} of an integer polynomial $P$, denoted $H(P)$, is the maximum of the absolute
values of its coefficients.
\begin{lemma}[Bugeaud and Mignotte \cite{OnDistBetwRoots}]
\label{lemma:distance_of_roots}
Let $P(X)$ and $Q(X)$ be nonconstant integer polynomials of degree $n$ and $m$,
respectively. Denote by $r_P$ and $r_Q$ a zero of $P(X)$ and $Q(X)$, respectively.
Assuming that $P(r_Q)\ne 0$, we have
\begin{eqnarray*}
|\ r_P - r_Q |\ \ge
2^{1-n}(n+1)^{\frac{1}{2}-m}(m+1)^{-\frac{n}{2}}H(P)^{-m}H(Q)^{-n}.
\end{eqnarray*}
\end{lemma}
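Lemma \ref{lemma:distance_of_roots} can be sanity-checked numerically on a small instance (this merely illustrates the bound, it does not prove anything):

```python
# Root-separation bound of Bugeaud and Mignotte on a toy pair:
# P(X) = X^2 - 2  (degree n = 2, naive height H(P) = 2),
# Q(X) = X - 1    (degree m = 1, naive height H(Q) = 1).
n, m, HP, HQ = 2, 1, 2, 1
bound = (2 ** (1 - n) * (n + 1) ** (0.5 - m) * (m + 1) ** (-n / 2)
         * HP ** (-m) * HQ ** (-n))
rP = 2 ** 0.5                           # a root of P
rQ = 1.0                                # the root of Q
gap = abs(rP - rQ)                      # actual distance between the roots
```

Here the guaranteed lower bound is $1/(8\sqrt{3})\approx 0.072$, well below the actual gap $\sqrt{2}-1\approx 0.414$.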
We first show the following.
\begin{lemma}
\label{lem:new_height}
$|\PFEigenValue(\System^{s}_1)-\PFEigenValue(\System^{s}_2)| \ge
(n\MaxGain)^{-6n^3}$ for every
$\System^{s}_1, \System^{s}_2\in \SquareSystemFamily$.
\end{lemma}
\par\noindent{\bf Proof:~}
Recall that for an irreducible square system $\System^{s}$,
$Z(\System^{s})=(\SupportersMatrix)^{-1} \cdot \RepressorsMatrix$,
where
$\SupportersMatrix$ can be considered to be diagonal with strictly positive
diagonal. Therefore, $Z(\System^{s})_{i,j}=|g(i,j)|/g(i,i)$ where $g(i,i)$
corresponds to the gain of the unique supporter of $\Entity_i$.
For ease of notation, let $Z_1=Z(\System^{s}_1)$, $Z_2=Z(\System^{s}_2)$,
$r_1=\PFEigenValue(\System^{s}_1)$ and $r_2=\PFEigenValue(\System^{s}_2)$.
For each entity $\Entity_{i}$, let $i_1$ (resp., $i_2$) be the index of its unique
supporter in the square system $\System^{s}_1$ (resp., $\System^{s}_2$).
To employ Lemma \ref{lemma:distance_of_roots}, we first scale $Z_1$ and $Z_2$
to obtain two integer-valued matrices $Z^{\mathrm{int}}_1$ and $Z^{\mathrm{int}}_2$.
The new matrix $Z_b^{\mathrm{int}}$, for $b \in \{1,2\}$, is constructed by multiplying each entry of
$Z_b$ by the common denominator of its entries,
i.e., $Z_{b}^{\mathrm{int}}(i,j)=Z_b(i,j)\cdot\prod_{k} \left(|g(k,k_1)| \cdot |g(k,k_2)| \right)$.
Thus all entries of $Z_b^{\mathrm{int}}$ are integers and bounded by $\MaxGain^{2n}$
(since $|g(i,j)|\le \MaxGain$).
Let $P_1(x)=\CP(Z^{\mathrm{int}}_1,x)$ and $P_2(x)=\CP(Z^{\mathrm{int}}_2,x)$ be the
characteristic polynomials of the matrices $Z^{\mathrm{int}}_1$ and
$Z^{\mathrm{int}}_2$ respectively, see Eq. (\ref{eq:CP}).
Note that $P_1(x)$ and $P_2(x)$ are integer polynomials of degree $n$,
and $H(P_1),H(P_2) \leq \MaxGain^{2n^2}$ (since $|\det(Z)|\le (\MaxGain^{2n})^n$).
Let $r_1^{\mathrm{int}}$ and $r_2^{\mathrm{int}}$ correspond to the PF eigenvalues
of $Z_{1}^{\mathrm{int}}$ and $Z_{2}^{\mathrm{int}}$ respectively.
Lemma \ref{lemma:distance_of_roots} yields
\begin{eqnarray*}
|r_1^{\mathrm{int}}-r_2^{\mathrm{int}}| &\geq&
2^{1-n}(n+1)^{\frac{1}{2}-n}(n+1)^{-\tfrac{n}{2}}
(\MaxGain^{2n^2})^{-n}(\MaxGain^{2n^2})^{-n}
= 2^{1-n}(n+1)^{\frac{1-3n}{2}}\MaxGain^{-4n^3}.
\end{eqnarray*}
Finally, by definition of $Z^{\mathrm{int}}_1$ and $Z^{\mathrm{int}}_2$,
\begin{align*}
|r_1^{\mathrm{int}}-r_2^{\mathrm{int}}| ~=~ |r_1-r_2|\prod_{i} \left(|g(i,i_1)| \cdot |g(i,i_2)|\right) ,
\end{align*}
and thus
\begin{eqnarray*}
|r_1-r_2| &\geq&
\frac{2^{1-n}(n+1)^{\frac{1-3n}{2}}\MaxGain^{-4n^3}}{\prod_{i}\left(|g(i,i_1)| \cdot |g(i,i_2)| \right)}
~\geq~
\frac{2^{1-n}(n+1)^{\frac{1-3n}{2}}\MaxGain^{-4n^3}}{\MaxGain^{2n}}
\geq (n\MaxGain)^{-6n^3}.
\end{eqnarray*}
\quad\blackslug\lower 8.5pt\null\par
We now turn to translate the distance between $r_1$ and $r_2$ into a distance
between $1/r_1$ and $1/r_2$ (corresponding to the optimal $\beta$ values of
Program (\ref{LP:Ext_Perron}) with $\System^{s}_1$ and $\System^{s}_2$,
respectively).
The next auxiliary claim gives a bound for $\lambda \in \EigenValue(Z)$
as a function of $\MaxGain$.
\begin{lemma}
\label{claim:lambd_up_bound}
Let $\lambda$ be an eigenvalue of an $n\times n$ matrix $Z$ such that
$|Z(i,j)|\le \MaxGain$. Then $|\lambda|\le n \MaxGain$.
\end{lemma}
\par\noindent{\bf Proof:~}
Let $\overline{X}$ be an eigenvector of $Z$ corresponding to $\lambda$ and assume that
$||\overline{X}||_{2}=1$. Since
$\overline{X}^{T} \cdot Z \cdot \overline{X} =
\lambda \overline{X}^{T} \cdot \overline{X}=\lambda$, we have:
\begin{eqnarray*}
|\lambda | &=& |\overline{X}^{T}Z \overline{X}|
~=~ \Big|\sum_i\sum_j X(i) Z(i,j) X(j)\Big|
~\le~ \MaxGain\cdot \sum_i\sum_j |X(i)| \cdot |X(j)|\\
&=& \MaxGain\cdot \Big(\sum_i |X(i)|\Big)^{2}
~=~ \MaxGain\cdot\|\overline{X} \|_1^2
~\le~ \MaxGain\cdot(\sqrt{n}\|\overline{X}\|_2)^2 ~=~ n\MaxGain~.
\end{eqnarray*}
\quad\blackslug\lower 8.5pt\null\par
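Lemma \ref{claim:lambd_up_bound} is likewise easy to check empirically (illustrative code; the random matrix and seed are arbitrary):

```python
import numpy as np

# Empirical check of |lambda| <= n * gmax for a matrix whose entries are
# bounded by gmax in absolute value.
rng = np.random.default_rng(0)
n, gmax = 5, 3.0
Z = rng.uniform(-gmax, gmax, size=(n, n))
rho = max(abs(np.linalg.eigvals(Z)))    # largest eigenvalue modulus
```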
We now turn to prove Lemma \ref{lem:apart_in_range}.
\par\noindent{\bf Proof:~} [of Lemma \ref{lem:apart_in_range}]
\\By Lemmas \ref{lem:new_height} and \ref{claim:lambd_up_bound},
$$\left|\frac{1}{r_2}-\frac{1}{r_1}\right| =
\left|\frac{r_1-r_2}{r_1 r_2}\right| \ge
\frac{\left|r_1-r_2\right|}{(n\MaxGain)^2} \ge
(n\MaxGain)^{-8n^3}.$$
So far, we proved that if
$\PFEigenValue(\System(\FilterMatrix_1)) \not=
\PFEigenValue(\System(\FilterMatrix_2))$,
then
$|1/\PFEigenValue(\System(\FilterMatrix_1))
- 1/\PFEigenValue(\System(\FilterMatrix_2))| \geq \Delta_{\beta}$,
for every $\FilterMatrix_1,\FilterMatrix_2 \in \FilterMatrixFamily$.
By Thm. \ref{thm:pf_ext}, there exists a selection
$\FilterMatrix^{*} \in \FilterMatrixFamily$ such that
$\PFEigenValue(\System(\FilterMatrix^{*}))=1/\beta^{*}(\System)$.
Assume, toward contradiction, that there exists some
$\FilterMatrix^{'} \in \FilterMatrixFamily$ such that
$\PFEigenValue(\System(\FilterMatrix^{'}))\neq 1/\beta^{*}(\System)$ but
$|\beta^{*}(\System)-1/\PFEigenValue(\System(\FilterMatrix^{'}))| \leq
\Delta_{\beta}$. Let $r_1=\PFEigenValue(\System(\FilterMatrix^{*}))$ and
$r_2=\PFEigenValue(\System(\FilterMatrix^{'}))$. In this case, we get that
$|1/r_1-1/r_2|\leq \Delta_{\beta}$, a contradiction.
Lemma \ref{lem:apart_in_range} follows.
\quad\blackslug\lower 8.5pt\null\par
\paragraph{Algorithm description.}
We now describe Algorithm $\AlgoName$ for $\PFEigenVector(\System)$ computation.
Consider some partial selection $\SelectionVec'\subseteq\Affectors$ for
$V'\subseteq \EntitySet$.
For ease of notation, let
$\System(\SelectionVec')=\langle \RepressorsMatrix(\SelectionVec'), \SupportersMatrix(\SelectionVec') \rangle$,
where
$\RepressorsMatrix(\SelectionVec')=\RepressorsMatrix \cdot \FilterMatrix(\SelectionVec')$ and
$\SupportersMatrix(\SelectionVec')=\SupportersMatrix \cdot \FilterMatrix(\SelectionVec')$.
Consider the Program
\begin{align}
\label{eq:alg}
\text{maximize} & ~\beta \text{~subject to:~}
\\
&
\displaystyle \RepressorsMatrix(\SelectionVec') \cdot\overline{X} \leq
1/\beta \cdot \SupportersMatrix(\SelectionVec') \cdot\overline{X} , & \nonumber
\\
& \displaystyle \overline{X} \geq \overline{0} ,& \nonumber
\\
& \displaystyle ||\overline{X}||_{1}~=~1~. & \nonumber
\end{align}
Note that if $\SelectionVec'=\emptyset$, then
Program (\ref{eq:alg}) is equivalent to Program
(\ref{LP:Ext_Perron}), i.e., $\System(\SelectionVec')=\System$.
Define
\begin{eqnarray*}
\fff(\beta,\System(\SelectionVec'))=
\left\{\begin{array}{ll}
1, & \mbox{if there exists an~} \overline{X} \mbox{ such that }
||\overline{X}||_{1}~=~1, ~\overline{X}\geq \overline{0}, \mbox{ and }
\\ &
\displaystyle \RepressorsMatrix(\SelectionVec')\cdot\overline{X} \leq
1/\beta \cdot \SupportersMatrix(\SelectionVec')\cdot\overline{X},
\\
0, & \hbox{otherwise}.
\end{array}\right.
\end{eqnarray*}
Note that $\fff(\beta,\System(\SelectionVec'))=1$ iff $\System(\SelectionVec')$ is feasible for
$\beta$ and that $\fff$ can be computed in polynomial time using the interior point method.
Algorithm $\AlgoName$ is composed of two main phases.
In the first phase it finds, using binary search, an estimate $\beta^-$ such that
$\beta^{*}(\System)-\beta^- \leq \Delta_{\beta}$.
In the second phase, it finds a hidden square system
$\System(\FilterMatrix^{*})$, $\FilterMatrix^{*} \in \FilterMatrixFamily$,
corresponding to a complete selection vector $\SelectionVec_n$ of size $n$ for $\EntitySet$.
By Lemma \ref{lem:apart_in_range}, it follows that
$\PFEigenValue(\System(\FilterMatrix^{*}))=1/\beta^{*}(\System)$.
We now describe the construction of $\SelectionVec_n$ in more detail.
The second phase consists of $n$ iterations. Iteration $t$ obtains a partial
selection $\SelectionVec_{t}$ for $\Entity_1, \ldots, \Entity_t$ such that
$\fff(\beta^-,\System(\SelectionVec_t))=1$. The final step achieves the desired $\SelectionVec_n$,
where $\System(\SelectionVec_n) \in \SquareSystemFamily$ and
$\fff(\beta^-,\System(\SelectionVec_n))=1$
(therefore also $\fff(\beta^-,\System(\FilterMatrix(\SelectionVec_n)))=1$).
Initially, $\SelectionVec_0$ is empty.
The $t$'th iteration sets $\SelectionVec_t=\SelectionVec_{t-1} \cup \{\Affectors_j\}$ for some supporter
$\Affectors_j \in\Supporters_{t}$
such that $\fff(\beta^-,\System(\SelectionVec_{t-1} \cup \{\Affectors_j\}))=1$.
We later show (in the proof of Thm. \ref{thm:algorithm}) that such a supporter
$\Affectors_j$ exists.
Finally, we use $\PFEigenVector(\System(\SelectionVec_n))$ to construct
the Perron vector $\PFEigenVector(\System)$. This vector contains zeros
for the $m-n$ non-selected affectors, and the values of the $n$ selected affectors
are as in $\PFEigenVector(\System(\SelectionVec_n))$.
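The greedy second phase can be sketched on the same kind of toy data as before (a hypothetical example; the brute-force `feasible` below stands in for the LP oracle $\fff$):

```python
import numpy as np
from itertools import product

# Phase-2 sketch: greedily fix one supporter per entity, keeping the
# partial selection feasible for beta^-. The brute-force oracle checks
# whether *some* completion of the partial selection has spectral radius
# at most 1/beta (a stand-in for the LP oracle fff).
R = np.array([[0.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
S = np.array([[1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
supporters = [[0, 1], [2]]              # candidate supporters per entity

def rho(cols):
    Z = np.linalg.inv(S[:, cols]) @ R[:, cols]
    return max(abs(np.linalg.eigvals(Z)))

def feasible(beta, partial):
    rest = supporters[len(partial):]
    return any(rho(list(partial) + list(tail)) <= 1.0 / beta + 1e-9
               for tail in product(*rest))

beta_minus = 3 ** 0.5 - 1e-7            # just below beta^* = sqrt(3)
selection = []
for cands in supporters:                # one greedy step per entity
    for j in cands:
        if feasible(beta_minus, selection + [j]):
            selection.append(j)
            break
```

Each iteration keeps the invariant that the partial selection is feasible for $\beta^{-}$, so the final complete selection yields a square system with PF eigenvalue $1/\beta^{*}$.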
The pseudocode is presented formally next.
\begin{figure*}
\caption{The pseudocode of Algorithm $\AlgoName$.}
\label{alg:while loop}
\label{alg:for loop}
\end{figure*}
To establish Theorem \ref{thm:algorithm}, we prove the correctness of
Algorithm $\AlgoName$ and bound its runtime.
We begin with two auxiliary claims.
\begin{claim}
\label{cl:max_beta}
$\beta^{*}(\System) \leq \MaxGain$.
\end{claim}
\par\noindent{\bf Proof:~}
Let $\overline{X}^{*}=\PFEigenVector(\System)$ and let
$\SelectionVec^{*}=NZ(\overline{X}^{*})$.
Then by claims (Q3) and (Q5) of Thm. \ref{thm:pf_ext} we have that $|\SelectionVec^{*}|=n$.
Define $\FilterMatrix^{*}= \FilterMatrix(\SelectionVec^{*})$.
Since $\SelectionVec^{*}$ is a complete selection vector
(see Claim \ref{cl:entity_one_nonzero}), we have that
$\FilterMatrix^{*}\in \FilterMatrixFamily$.
Let $\Affectors_{\IndS(i)}$ be the supporter of entity
$\Entity_i$ in $\SelectionVec^{*}$, for every $i \in \{1,\ldots, n\}$.
Let $D=\ConstraintsGraph_{\System(\FilterMatrix^{*})}$. Since $\System$ is
irreducible, it follows by Obs. \ref{obs:reducible_graph_connected} that $D$ is strongly connected.
Let $C=(\Entity_{i_1}, \ldots, \Entity_{i_k})$ be a directed cycle in $D$, i.e.,
$(\Entity_{i_j},\Entity_{i_{j+1}}) \in E(D)$ for every $j \in \{1, \ldots, k-1\}$ and
$(\Entity_{i_k},\Entity_{i_{1}}) \in E(D)$.
For ease of notation, let $\Entity_{i_0}=\Entity_{i_{k}}$.
Since $D$ is strongly connected, such a cycle $C$ exists.
By the optimality of $\overline{X}^{*}$ we have that
$$\beta^{*}(\System) \cdot \TotR(\overline{X}^{*}, \System)_{i} ~=~
\TotS(\overline{X}^{*}, \System)_{i}$$
for every $\Entity_i$. Note that by definition
$|g(\Entity_{i_j},\Affectors_{\IndS(i_{j-1})})| \cdot X^{*}(\Affectors_{\IndS(i_{j-1})}) \leq
\TotR(\overline{X}^{*}, \System)_{i_j}$ for every $j \in \{1, \ldots, k\}$,
and by the graph definition, $\Affectors_{\IndS(i_{j-1})} \in \Repressors_{i_j}$ or
$g(\Entity_{i_j},\Affectors_{\IndS(i_{j-1})})<0$, for every $j \in \{1, \ldots, k\}$.
Combining this with Fact \ref{fc:feasible_tots_totr}, we get that
\begin{eqnarray*}
\beta^{*}(\System)
|g(\Entity_{i_j},\Affectors_{\IndS(i_{j-1})})|
X^{*}(\Affectors_{\IndS(i_{j-1})}) ~\leq~
g(\Entity_{i_j},\Affectors_{\IndS(i_j)}) \cdot X^{*}(\Affectors_{\IndS(i_j)})
\end{eqnarray*}
for every $j \in \{1, \ldots, k\}$, and therefore
$$\beta^{*}(\System) ~\leq~ \min_{j \in \{1, \ldots, k\}}\left\{
\frac{g(\Entity_{i_j},\Affectors_{\IndS(i_j)})}{|g(\Entity_{i_j},\Affectors_{\IndS(i_{j-1})})|}
\cdot \frac{X^{*}(\Affectors_{\IndS(i_j)})}{X^{*}(\Affectors_{\IndS(i_{j-1})})} \right\}.
$$
It is easy to verify that
$\min_{j \in \{1, \ldots, k\}}
\left \{\frac{X^{*}(\Affectors_{\IndS(i_j)})}{X^{*}(\Affectors_{\IndS(i_{j-1})})}\right\} \leq 1$.
Therefore, by Eq. (\ref{eq:gmax})
we get that $\beta^{*}(\System) \leq \MaxGain$, as required.
\quad\blackslug\lower 8.5pt\null\par
\begin{lemma}
\label{cl:phase_1}
Phase 1 of Alg. $\AlgoName$ finds $\beta^{-}$ such that
$\beta^{*}(\System)-\beta^{-} \leq \Delta_{\beta}$.
\end{lemma}
\par\noindent{\bf Proof:~}
By Property (Q5) of Thm. \ref{thm:pf_ext}, $\PFEigenVector(\System)$
is an optimal solution for Program (\ref{LP:Ext_Perron}) and
$\PFEigenValue(\System)=1/\beta^{*}(\System)$. Therefore
$\fff(\beta,\System)=1$ for every $\beta \in (0,\beta^{*}]$.
Steps 3 and 5(b) in Alg. $\AlgoName$
yield
$\fff(\beta^-,\System)=1$. Therefore $\beta^{-} \leq \beta^{*}(\System)$.
By the stopping criterion of step 5, it ends with
$\fff(\beta^{+},\System)=0$, $\fff(\beta^{-},\System)=1$ and
$\beta^{+}-\beta^{-} \leq \Delta_{\beta}$. The first two conditions imply that
$\beta^{*} \in [\beta^{-},\beta^{+})$ as required. The lemma follows.
\quad\blackslug\lower 8.5pt\null\par
Let $Range_{\beta^{*}}=[\beta^{-},\beta^{+})$.
\begin{lemma}
\label{cl:selection_opt}
By the end of phase 2, the selection $\SelectionVec_n$ satisfies
$\PFEigenValue(\System(\SelectionVec_n))=1/\beta^{*}(\System)$.
\end{lemma}
\par\noindent{\bf Proof:~}
Let $\SelectionVec_t$ be the partial selection obtained at step $t$,
$\System_{t}=\System(\SelectionVec_{t})$ be the corresponding system for step $t$
and $\beta_{t}=\beta^{*}(\System_{t})$ the optimal solution of Program
(\ref{LP:Ext_Perron}) for system $\System_{t}$.
We claim that $\SelectionVec_{t}$ satisfies the following properties for each
$t \in \{0, \ldots, n\}$:
\begin{description}
\item{(C1)}
$\SelectionVec_{t}$ is a partial selection vector of length $t$,
such that $\SelectionVec_t \sim \SelectionVec_{t-1}$.
\item{(C2)}
$\System(\SelectionVec_{t})$ is feasible for $\beta^{-}$.
\end{description}
The proof is by induction.
Beginning with $\SelectionVec_{0}=\emptyset$, it is easy to see that (C1) and (C2)
are satisfied (since $\System(\SelectionVec_0)=\System$). Next, assume that (C1) and (C2)
hold for $\SelectionVec_{i}$ for $i \leq t$
and consider $\SelectionVec_{t+1}$. Let $V_{t}\subseteq \EntitySet$ be such that $\SelectionVec_{t}$ is
a partial selection for $V_{t}$ (i.e., $|V_{t}|=|\SelectionVec_{t}|$, and
$|\Supporters_{i}(\System) \cap \SelectionVec_{t}|=1$
for every $\Entity_i \in V_{t}$).
Given that $\SelectionVec_{t}$ is a selection for nodes $\Entity_1, \ldots, \Entity_t$
that satisfies (C1) and (C2), we show that $\SelectionVec_{t+1}$ satisfies
(C1) and (C2) as well.
In particular, it is required to show that there exists at least one supporter
of $\Entity_{t+1}$, namely, $\Affectors_k \in \Supporters_{t+1}(\System)$,
such that $\fff(\beta^-,\System(\SelectionVec_{t} \cup \{\Affectors_k\}))=1$.
This will imply that step 7(a) of the algorithm always succeeds in expanding $\SelectionVec_{t}$.
By Observation \ref{obs:irreducible_selection} and Property (C2) for step $t$,
the system $\System(\SelectionVec_{t})$ is irreducible with $\beta_{t} \geq \beta^{-}$.
In addition, note that
$\FilterMatrixFamily(\System_{t}) \subseteq \FilterMatrixFamily(\System)$
(as every square system of $\System_t$ is also a square system of $\System$).
By Theorem \ref{thm:pf_ext}, there exists a square system
$\System_{t}(\FilterMatrix^{*}_{t})$,
$\FilterMatrix^{*}_{t} \in \FilterMatrix(\System_{t})$, such that
$\PFEigenValue(\System_{t}(\FilterMatrix^{*}_{t}))=1/\beta_{t}$.
In addition, $\PFEigenVector(\System_{t}(\FilterMatrix^{*}_{t}))$ is a feasible
solution for Program (\ref{LP:Ext_Perron_convex}) with the system
$\System_{t}(\FilterMatrix^{*}_{t})$ and $\beta=\beta_{t}$.
By Eq. (\ref{eq:FilterMatrixFamily}), the square system
$\System_{t}(\FilterMatrix^{*}_{t})$ corresponds to a complete selection
$\SelectionVec^{**}$, where $|\SelectionVec^{**}|=n$ and $\SelectionVec_{t} \subseteq \SelectionVec^{**}$,
i.e., $\System_{t}(\FilterMatrix^{*}_{t})=\System(\SelectionVec^{**})$.
Observe that by Property (Q5) of Thm. \ref{thm:pf_ext} for the system
$\System_{t}$,
there exists a $\ZeroStar$ solution for Program (\ref{LP:Ext_Perron_convex})
that achieves $\beta_{t}$. This $\ZeroStar$ solution is constructed from
$\PFEigenVector(\System_{t}(\SelectionVec^{**}))$,
the PF eigenvector of $\System_{t}(\SelectionVec^{**})$.
Let $\Affectors_k \in \Supporters_{t+1}(\System_{t}) \cap \SelectionVec^{**}$.
Note that by the choice of $\SelectionVec^{**}$, such an affector
$\Affectors_k$ exists.
We now show that $\SelectionVec_{t+1}=\SelectionVec_{t} \cup \{\Affectors_k\}$
satisfies Property (C2), thus establishing the existence of
$\Affectors_k \in \Supporters_{t+1}(\System_{t})$ in step 7(a).
We show this by constructing a feasible solution $X^{*}_{\beta^-} \in \mathbb{R}^{m(\SelectionVec_{t+1})}$
for $\System_{t+1}$. By the definition of $\SelectionVec^{**}$,
$\fff(\beta^-,\System(\SelectionVec^{**}))=1$ and therefore there exists
a feasible solution $\overline{X}^{t+1}_{\beta^-} \in \mathbb{R}^{n}$
for $\System(\SelectionVec^{**})$.
Since $\SelectionVec_{t+1} \subseteq \SelectionVec^{**}$, it is possible to extend
$\overline{X}^{t+1}_{\beta^-} \in \mathbb{R}^{n}$ to a feasible solution $X^{*}_{\beta^{-}}$
for system $\System_{t+1}$,
by setting $X^{*}_{\beta^{-}}(\Affectors_q)=\overline{X}^{t+1}_{\beta^{-}}(\Affectors_q)$ for every
$\Affectors_q \in \SelectionVec^{**}$ and $X^{*}_{\beta^{-}}(\Affectors_q)=0$ otherwise.
It is easy to verify that this is indeed a feasible solution for $\beta^{-}$,
concluding that
$\fff(\beta^-,\System_{t+1})=1$.
So far, we have shown that there exists
an affector $\Affectors_k \in \Supporters_{t+1}(\System_t)$
such that $\fff(\beta^-,\System_{t+1})=1$.
We now claim that for any $\Affectors_k \in \Supporters_{t+1}(\System_t)$
such that $\fff(\beta^-,\System_{t+1})=1$,
Properties (C1) and (C2) are satisfied. This holds trivially,
relying on the criterion for selecting $\Affectors_k$, since $\Supporters_{t+1}(\System_t) \cap \SelectionVec_{t}=\emptyset$.
After $n$ steps, we get that $\SelectionVec_n$ is a complete selection,
$\FilterMatrix(\SelectionVec_n) \in \FilterMatrixFamily(\System_{n-1})$, and therefore
by Property (C1) for steps $t=1,\ldots,n$, it also holds that
$\FilterMatrix(\SelectionVec_n) \in \FilterMatrixFamily(\System)$.
In addition, by Property (C2), $\fff(\beta^-,\System_n)=1$.
Since $\System_n$ is equivalent to
$\System(\SelectionVec_n) \in \SquareSystemFamily$
(obtained by removing the $m-n$ columns corresponding to the affectors not selected
by $\SelectionVec_n$), it is easy to verify that
$\fff(\beta^-,\System(\SelectionVec_n))=1$.
Next, by Thm. \ref{thm:pf} we have that
$1/\PFEigenValue(\System(\SelectionVec_n)) \in Range_{\beta^{*}}$.
It remains to show that
$1/\PFEigenValue(\System(\SelectionVec_n))= \beta^{*}(\System)$.
By Theorem \ref{thm:pf_ext}, there exists a square system
$\System(\FilterMatrix^{*})$, $\FilterMatrix^{*} \in \FilterMatrix(\System)$,
such that
$\PFEigenValue(\System(\FilterMatrix^{*}))=1/\beta^{*}$.
Assume, toward contradiction, that
$1/\PFEigenValue(\System(\SelectionVec_n)) \neq 1/\beta^{*}$.
Obs. \ref{obs:filter_to_square}(b) implies that
$\PFEigenValue(\System(\FilterMatrix^{*})) <
\PFEigenValue(\System(\SelectionVec_n))$.
It therefore follows that $\System(\FilterMatrix^{*})$ and
$\System(\SelectionVec_n)$ are two non-equivalent hidden square systems
of $\System$ such that
$1/\PFEigenValue(\System(\FilterMatrix^{*})),
1/\PFEigenValue(\System(\SelectionVec_n)) \in Range_{\beta^{*}}$, or, that
$1/\PFEigenValue(\System(\SelectionVec_n))-
1/\PFEigenValue(\System(\FilterMatrix^{*})) \leq \Delta_{\beta}$,
in contradiction to Lemma \ref{lem:apart_in_range}.
This completes the proof of Lemma \ref{cl:selection_opt}.
\quad\blackslug\lower 8.5pt\null\par
We are now ready to complete the proof of Thm. \ref{thm:algorithm}.
\par\noindent{\bf Proof:~} [Theorem \ref{thm:algorithm}]
We show that Alg. $\AlgoName$ satisfies
the requirements of the theorem.
By Obs. \ref{obs:filter_to_square}(b),
$\min_{\FilterMatrix \in \FilterMatrixFamily} \left\{\PFEigenValue(\System(\FilterMatrix)) \right\}
\geq 1/\beta^{*}(\System)$.
Therefore, since
$\PFEigenValue(\System(\SelectionVec_n))=1/\beta^{*}(\System)$,
the square system $\System(\SelectionVec_n)$ constructed in step 7 of the algorithm indeed yields the Perron value (by Eq. (\ref{eq:general_pf_root})), hence the correctness of the algorithm is established.
\par
Finally, we analyze the runtime of the algorithm.
Note that there are $O(\log \left(\beta^{*}(\System)/\Delta_{\beta}\right)+n)$
calls to the interior point method (computing $\fff(\beta^-,\System_{i})$),
namely,
$O(\log \left(\beta^{*}(\System)/\Delta_{\beta}\right))$ calls in the first
phase and $n$ calls in the second phase. By plugging Eq. (\ref{eq:gmax})
in Claim \ref{cl:max_beta}, Thm. \ref{thm:algorithm} follows.
\quad\blackslug\lower 8.5pt\null\par
\section{Limitations for the existence of a $\ZeroStar$}
\label{sec:limit}
In this section we provide a characterization of systems in which
a $\ZeroStar$ solution does not exist.
\paragraph{Bounded value systems.}
Let $\MaxPower$ be a fixed constant.
For a nonnegative vector $\overline{X}$, let
$$\max(\overline{X}) ~=~
\max\left\{X(j)/X(i) \mid 1 \leq i,j \leq n, X(i)>0\right\}.$$
A system $\System$ is called
a \emph{bounded value system} if every admissible power vector $\overline{X}$ satisfies $\max(\overline{X}) \leq \MaxPower$.
\begin{lemma}
\label{lem:bounded_system}
There exists a bounded value system $\System$ such that no optimal solution
$\overline{X}^{*}$ for $\System$ is a $\ZeroStar$ solution.
\end{lemma}
\par\noindent{\bf Proof:~}
Consider the optimization problem (\ref{LP:Ext_Perron}), and the following
system $\System=\langle \SupportersMatrix,\mathbb{R}epressorsMatrix\rangle$:
\begin{align*}
\SupportersMatrix &=
\left(\begin{array}{cccc}a&a&0&0\\0&0&a&a\end{array}\right),
~~~~~~~~~~~
\mathbb{R}epressorsMatrix =
\left(\begin{array}{cccc}0&0&4c\MaxPower^2&4c\MaxPower^2\\c&c&0&0\end{array}\right),
\end{align*}
for constants $a,c>0$. We first show that it is impossible to attain
the optimal value $\beta^{*}$
if $\overline{X}$ is a $\ZeroStar$ solution. Then, we show that there exists
a non-$\ZeroStar$ solution $\overline{X}$ that attains $\beta^{*}$.
Thus, for a given system, no $\ZeroStar$ solution is optimal.
Assume, by contradiction, that we have a $\ZeroStar$ solution that achieves
$\beta^{*}$ on $\System$. Due to symmetry, every $\ZeroStar$ solution will
yield the same $\beta^{*}$, so without loss of generality assume that $X(2)=0$
and $X(4)=0$, and thus the corresponding square system is
\begin{align*}
\widehat{\SupportersMatrix} &=
\left(\begin{array}{cc}a&0\\0&a\end{array}\right),
~~~~~~~~~~
\widehat{\mathbb{R}epressorsMatrix} =
\left(\begin{array}{cc}0&4c\MaxPower^2\\c&0\end{array}\right).
\end{align*}
By Lemma \ref{lem:strict_equality}, at the optimum value $\beta^{*}$,
the inequality constraints of Eq. (\ref{eq:SR}) hold with equality, namely,
$\left(\widehat{\mathbb{R}epressorsMatrix}-\frac{1}{\beta^{*}}\widehat{\SupportersMatrix}\right)
\cdot \overline{X}=0$. Plugging in the chosen values, we get
\begin{align*}
\left(\begin{array}{cc}
-\frac{a}{\beta}&4c\MaxPower^2
\\
c&-\frac{a}{\beta}
\end{array}\right)
\cdot
\left(\begin{array}{c}
X(1)
\\
X(3)
\end{array}\right)
=0~,
\end{align*}
leading to the equations
$-\frac{a}{\beta}X(1)+4c\MaxPower^2X(3)=0$
and
$cX(1)-\frac{a}{\beta}X(3)=0$.
Rewriting these two equations as
$\frac{X(1)}{X(3)} = 4c\MaxPower^2 / (a/\beta)$
and
$\frac{X(1)}{X(3)} = (a/\beta) / c$~,
we get that $\left(\frac{X(1)}{X(3)}\right)^2 =
4\MaxPower^2$, or, $\frac{X(1)}{X(3)} =
2\MaxPower$.
But this contradicts the assumption that $\System$ is a bounded value
system, namely, $\max(\overline{X}) \leq \MaxPower$.
It follows that there is no optimal $\ZeroStar$ solution for such a system.
Now we show that there exists a non-$\ZeroStar$ solution $\overline{X}$ for
$\System$ that achieves $\beta^{*}$. Consider some $\overline{X}$ satisfying
$X(2)=0,X(1)>0,X(3)>0$ and $X(4)>0$. Similar to the above steps, we derive that
$\frac{X(1)}{X(3)+X(4)} = \frac{\beta c}{a}$ and
$\frac{X(3)+X(4)}{X(1)} = \frac{4\beta c \MaxPower^2}{a}~$, hence
$\left(\frac{X(3)+X(4)}{X(1)}\right)^2 = 4\MaxPower^2$,
or, $\frac{X(3)+X(4)}{X(1)} =2\MaxPower$.
Clearly, the last equation does not contradict the value boundedness of
$\System$, since $\max(\overline{X}) \leq \MaxPower$ only imposes the
constraint $\frac{X(3)+X(4)}{X(1)}\le2\MaxPower$. It follows that there exists
a non-$\ZeroStar$ solution that attains $\beta^{*}$.
\quad\blackslug\lower 8.5pt\null\par
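The contradiction above can be checked numerically. The sketch below instantiates the square system from the proof with the made-up constants $a=c=1$ and $\MaxPower=2$ (any $a,c>0$ behave the same): the equality constraint forces $a/\beta$ to be an eigenvalue of $\widehat{\mathbb{R}epressorsMatrix}$, and the eigenvalue with a nonnegative eigenvector (the Perron root) indeed yields the ratio $X(1)/X(3)=2\MaxPower$.

```python
import numpy as np

# Numerical check of the 2x2 square system from the proof, with made-up
# constants a = c = 1 and M = 2.
a, c, M = 1.0, 1.0, 2.0
R_hat = np.array([[0.0, 4 * c * M**2],
                  [c,   0.0]])
# Since S_hat = a*I, the constraint (R_hat - (a/beta) I) X = 0 says that
# a/beta is an eigenvalue of R_hat; the Perron root has the nonnegative X.
vals, vecs = np.linalg.eig(R_hat)
i = int(np.argmax(vals.real))
beta_star = a / vals.real[i]           # optimal beta of the square system
X = np.abs(vecs[:, i].real)
ratio = X[0] / X[1]                    # X(1)/X(3) = 2M > M: violates boundedness
```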
\paragraph{Second eigenvalue maximization.}
One of the most common applications of the \PFT~is the existence of the
stationary distribution for a transition matrix
(representing a random process).
The stationary distribution is the eigenvector of the largest eigenvalue of
the transition matrix. We remark that if the transition matrix is stochastic,
i.e., the sum of each row is $1$, then the largest eigenvalue is equal to $1$.
So this case does not give rise to any optimization problem.
However, in many cases we are interested in processes with fast mixing time.
Assuming the process is ergodic, the mixing time is determined by the
difference between the largest eigenvalue and the second largest eigenvalue.
So we can try to solve the following problem. Imagine that there is some rumor
that we are interested in spreading over two or more social networks.
Each node can be a member of several social networks. We would like to merge
all the networks into one large social network in a way that will result in fast
mixing time. This problem looks very similar to the one solved in this paper.
Indeed, one can use similar techniques and get an approximation.
But interestingly, this problem does not have the $\ZeroStar$ solution property,
as illustrated in the following example.
Assume we are given $n$ nodes. Consider the $n!$ different social
networks that arise by taking, for each permutation $\pi \in S(n)$, the path $P_\pi$ corresponding to the permutation $\pi$. Clearly, the best mixing graph
we can get is the complete graph $K_n$. We can get this graph if each node
chooses each permutation with probability $\frac{1}{n!}$.
We remind the reader that the mixing time of the graph $K_n$ is 1.
On the other hand, any $\ZeroStar$ solution has mixing time $O(n^2)$.
This example shows that when optimizing the second largest eigenvalue,
the optimal solution is not always a $\ZeroStar$ solution.
\section{Applications}
\label{short:sec:Applications}
We have considered several applications for our generalized \PFT.
All these examples concern generalizations of well-known applications
of the standard \PFT.
In this section, we illustrate applications
for power control in wireless networks
and for an input--output economic model.
(In fact, our initial motivation for the study of generalized \PFT~arose
while studying algorithmic aspects of wireless networks
in the SIR model \cite{Avin2009PODC,KLPP2011STOC, Avin2012SINR}.)
\subsection{Power control in wireless networks.}
\label{subsec:power_control_app}
The rules governing the availability and quality of wireless connections
can be described by {\em physical} or {\em fading channel} models
(cf. \cite{PL95,B96,R96}). Among those, a commonly studied one is the
{\em signal-to-interference ratio (SIR)} model\footnote{This is
a special case of the {\em signal-to-interference \& noise ratio (SINR)} model
where the noise is zero.}.
In the SIR model, the energy of a signal fades with the distance
to the power of the {\em path-loss parameter} $\alpha$.
If the signal strength received by a device divided by the interfering
strength of other simultaneous transmissions
is above some \emph{reception threshold} $\beta$, then the
receiver successfully receives the message, otherwise it does not. Formally,
let $\mathrm{d}ist{p,q}$ be the Euclidean distance between $p$ and $q$,
and assume that each transmitter $t_i$ transmits with power $X_i$.
At an arbitrary point $p$, the transmission of station $t_i$ is
correctly received if
\begin{align}\label{eq:sinr}
\frac{X_i \cdot \mathrm{d}ist{p, t_i}^{-\alpha}}
{\sum_{j \neq i} X_j \cdot \mathrm{d}ist{p, t_j}^{-\alpha}}
~ \geq ~ \beta ~ .
\end{align}
In the basic setting, known as the SISO (Single Input, Single Output) model,
we are given a network of $n$ receivers $\{r_i\}$ and transmitters $\{t_i\}$
embedded in $\mathbb{R}^d$ where each transmitter is assigned to a single
receiver. The main question is then to find the optimal
(i.e., largest) $\beta^*$ and the power assignment $\overline{X}^{*}$ that
achieves it when we consider Eq. (\ref{eq:sinr}) at each receiver $r_i$.
The larger $\beta$, the simpler (and cheaper) the hardware implementation
required to decode messages in a wireless device. In a seminal and elegant
work, Zander \cite{Zander92b} showed how to compute $\beta^*$ and
$\overline{X}^{*}$, which are essentially the \PFR~ and \PFV,
if we generate a square
matrix $A$ that captures the signal and interference for each station.
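A minimal numerical sketch of this computation, in the spirit of Zander's approach, follows; the station positions and the path-loss parameter $\alpha$ are made up for illustration. Normalizing each row of the gain matrix by the own-signal gain yields a nonnegative matrix $B$ whose Perron root $\rho(B)$ determines $\beta^{*}=1/\rho(B)$, with the PF eigenvector as the optimal power assignment.

```python
import numpy as np

# Hedged sketch: positions and alpha below are made up for illustration.
alpha = 2.0
tx = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # transmitter positions
rx = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # receiver i paired with tx i

n = len(tx)
G = np.array([[np.linalg.norm(rx[i] - tx[j]) ** (-alpha) for j in range(n)]
              for i in range(n)])                      # gain of tx j at rx i
B = G / np.diag(G)[:, None]                            # normalize by own-signal gain
np.fill_diagonal(B, 0.0)

# SIR_i >= beta for all i is feasible iff beta <= 1/rho(B); the PF eigenvector
# of B is the power assignment that equalizes all SIRs at beta*.
vals, vecs = np.linalg.eig(B)
k = int(np.argmax(vals.real))
beta_star = 1.0 / vals.real[k]
x_star = np.abs(vecs[:, k].real)
```

At the optimum all receivers attain exactly the same SIR $\beta^{*}$, which matches the max-min characterization in the text.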
The motivation for the general \PFT\ appears when we consider Multiple Input
Single Output (MISO) systems.
In the MISO setting, a set of multiple synchronized transmitters,
located at different places, can transmit at the same time to the same receiver.
Formally, for each receiver $r_i$ we have a set of $k_i$ transmitters,
to a total of $m$ transmitters. Translating this to the generalized \PFT,
the $n$ receivers are the entities and the $m$ transmitters are affectors.
For each receiver, its supporter set consists of its $k_i$ transmitters and
its repressor set contains all other transmitters.
The SIR equation at receiver $r_i$ is then:
\begin{align}\label{eq:sinr2}
\frac{\sum_{\ell \in \Supporters_i} X_{\ell} \cdot \mathrm{d}ist{r_i, t_{\ell}}^{-\alpha}}
{\sum_{\ell \in \mathbb{R}epressors_i} X_{\ell} \cdot \mathrm{d}ist{r_i, t_{\ell}}^{-\alpha}}
~ \geq ~ \beta ~,
\end{align}
where $\Supporters_i$ and $\mathbb{R}epressors_i$ are the sets of supporters and
repressors of $r_i$, respectively.
As before, the gain $g(i,j)$ is proportional to $\mathrm{d}ist{r_i, t_j}^{-\alpha}$
(where the sign depends on whether $t_j$ is a supporter or repressor of $r_i$).
Using the generalized \PFT~we can again find the optimal reception threshold
$\beta^*$ and the power assignment $\overline{X}^{*}$ that achieves it.
An interesting observation is that, since our optimal power assignment is
a $\ZeroStar$ solution, using several transmitters at once for a receiver
is not necessary and will not help to improve $\beta^*$; i.e.,
only the ``best'' transmitter of each receiver needs to transmit
(where ``best'' is with respect to the entire set of receivers).
\paragraph{Related work on MISO power control.}
We next highlight the differences between our proposed
MISO power-control algorithm and the existing approaches to this problem.
The vast literature on power control in MISO and MIMO systems
considers mostly the joint optimization of power control with beamforming
(which is represented by a precoding and shaping matrix).
In the commonly studied {\em downlink scenario}, a single transmitter
with $m$ antennae sends independent information signals to $n$ decentralized
receivers. With this formulation, the goal is to find an optimal power vector
of length $n$ and an $n \times m$ beamforming matrix.
The standard heuristic applied to this problem is an iterative strategy
that alternately repeats a {\em beamforming} step
(i.e., optimizing the beamforming matrix while fixing the powers)
and a {\em power control} step
(i.e., optimizing powers while fixing the beamforming matrix)
until convergence \cite{CaiQT11,Cai2011,Chiang2007,Schu2004,Chee11}.
In \cite{CaiQT11}, the geometric convergence of such a scheme has been
established. In addition, \cite{WieselES06} formalizes the problem
as a conic optimization program that can be solved numerically.
In summary, the current algorithms for MIMO power-control (with beamforming)
are of numeric and iterative flavor, though with good convergence guarantees.
In contrast, the current work considers the simplest MISO setting
(without coding techniques) and aims at \emph{characterizing} the mathematical
\emph{structure} of the optimum solution. In particular, we establish the fact
that the optimal max-min SIR value is an algebraic number
(i.e., the root of a characteristic polynomial) and the optimum power vector
is a $\ZeroStar$ solution. Equipped with this structure, we design
an efficient algorithm which is more accurate than off-the-shelf
numeric optimization packages that were usually applied in this context.
Needless to say, the structural properties of the optimum solution are of
theoretical interest in addition to their applicability.
We note that our results are (somewhat) in contradiction to the well-established
fact that MISO and MIMO (Multiple Input Multiple Output) systems, where
transmitters transmit in parallel, do improve the capacity of wireless networks,
which corresponds to increasing $\beta^*$ \cite{Foschini98onlimits}.
There are several reasons for this apparent dichotomy, but they are all
related to the simplicity of our SIR model. For example, if the ratio
between the maximal power to the minimum power is bounded, then our result
does not hold any more (as discussed in Section \ref{sec:limit}).
In addition, our model does not capture random noise and small scale fading
and scattering \cite{Foschini98onlimits}, which are essential for the benefits
of a MIMO system to manifest themselves.
\subsection{Input--output economic model.}
Consider a group of $n$ industries that each produce (output) one type of commodity,
but requires inputs from other industries
\cite{meyer2000matrix,pillai2005pft}.
Let $a_{ij}$ represent the number of $j$th industry commodity units that need to be purchased
by the $i$th industry to operate its factory for one time unit divided by the number of
commodity units produced by the $i$th industry in one time unit, where $a_{ij} \ge 0$.
Let $X_j$ represent the unit price of the $j$th commodity, to be determined
by the solution.
In the following profit model (variant of Leontief's Model \cite{pillai2005pft}), the percentage
profit margin of an industry for a time unit is:
$$\beta_i ~=~ \text{Profit} ~=~ \text{Total income}/\text{Total expenses}.$$
That is, $\beta_i = X_i /\left(\sum_{j=1}^n a_{ij}X_j\right)$.
Maximizing the minimum profit over the industries can be solved
via Program (\ref{LP:Stand_Perron}), where $\beta^*$ is the optimal minimum profit and $\overline{X}^{*}$ is the optimal pricing.
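A toy numerical instance of this basic (square) model is sketched below; the requirement matrix is made up. Since the PF eigenvector satisfies $A\overline{X}=\rho(A)\overline{X}$, every industry attains exactly the profit $\beta^{*}=1/\rho(A)$ under the PF pricing.

```python
import numpy as np

# Made-up requirement matrix (a_ij) for three industries; beta_i = X_i/(AX)_i.
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.2, 0.4],
              [0.3, 0.1, 0.2]])
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
beta_star = 1.0 / vals.real[k]     # best achievable minimum profit ratio
prices = np.abs(vecs[:, k].real)   # optimal unit prices (PF eigenvector)

profits = prices / (A @ prices)    # every industry attains exactly beta*
```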
Consider now a similar model where the $i$th industry can produce $k_i$
alternative commodities in a time unit and requires input commodities from other industries.
The industries are then the entities in the generalized Perron--Frobenius
setting, and for each industry, its own commodities are the supporters and
input commodities are optional repressors.
The repression gain $\mathbb{R}epressorsMatrix(i,j)$ of industry $i$ and commodity $j$ (produced by some other industry $i'$) is the number of $j$th commodity units required by the $i$th industry to produce (i.e., operate) for one unit of time. Thus, $(\mathbb{R}epressorsMatrix \cdot \overline{X})_i$ is the total expenses of industry $i$ in one time unit.
The supporter gain $\SupportersMatrix(i,j)$ of industry $i$ to its commodity $j$ is the number of units it can produce in one time unit.
Thus, $(\SupportersMatrix \cdot \overline{X})_i$ is the total income of industry $i$ in one time unit.
Now, similar to the basic case, $\beta^*$ is the best minimum percentage profit for an industry and $\overline{X}^{*}$ is the optimal pricing for the commodities. The existence of a $\ZeroStar$ solution implies that it is sufficient for each industry to charge a nonzero cost for only \emph{one} of its commodities and produce the rest for free.
\section{Discussion and open problems}
Our results concern the generalized eigenpair of a nonsquare system
of dimension $n \times m$, for $m \geq n$.
We provide a definition, as well as a geometric and a graph theoretic characterization
of this eigenpair, and present a centralized algorithm for computing it.
A natural question for future study is whether there exists an iterative
method with good convergence guarantees for this task, as there is for
(the maximal eigenpair of) a square system.
In addition, another research direction involves studying the other eigenpairs
of a nonsquare irreducible system. In particular, what might be the meaning
of the second eigenvalue of this spectrum?
Yet another interesting question involves studying the relation of our
spectral definitions with existing spectral theories for nonsquare matrices.
Specifically, it would be of interest to characterize the relation between
the generalized eigenpairs of irreducible systems according to our definition
and the eigenpair provided by the SVD approach. Finally, we note that
a setting in which $n < m$ might also be of practical use (e.g., for the
power control problem in \emph{Single Input Multiple Output} systems), and therefore deserves exploration.
\end{document}
\begin{document}
\title{Moreau Envelope Augmented Lagrangian Method for Nonconvex Optimization with Linear Constraints
\thanks{We thank Kaizhao Sun for discussions that help us complete this paper, as well as presenting to us an additional approach to ensure boundedness. The work of J. Zeng is partly supported by National Natural Science Foundation of China (No. 61977038) and the Thousand Talents Plan of Jiangxi Province
(No. jxsq2019201124). The work of D.-X. Zhou is partly supported by Research Grants Council of Hong Kong (No. CityU 11307319), Laboratory for AI-powered Financial Technologies, and the Hong Kong Institute
for Data Science.}
}
\author{Jinshan Zeng \and
Wotao Yin \and
Ding-Xuan Zhou
}
\institute{J. Zeng \at
School of Computer and Information Engineering, Jiangxi Normal University, Nanchang, China.\\
Liu Bie Ju Centre for Mathematical Sciences, City University of Hong Kong, Hong Kong.\\
\email{[email protected]}
\and
W. Yin \at
Department of Mathematics, University of California, Los Angeles, CA. \\
\email{[email protected]}
\and
D.X. Zhou \at
School of Data Science, Department of Mathematics, and Liu Bie Ju Centre for Mathematical Sciences, City University of Hong Kong, Hong Kong. \\
\email{[email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
The augmented Lagrangian method (ALM) is one of the most useful methods for constrained optimization. Its convergence has been well established under convexity assumptions or smoothness assumptions, or under both assumptions. ALM may experience oscillations and divergence when the underlying problem is simultaneously nonconvex and nonsmooth. In this paper, we consider the linearly constrained problem with a nonconvex (in particular, weakly convex) and nonsmooth objective. We modify ALM to use a Moreau envelope of the augmented Lagrangian and establish its convergence under conditions that are weaker than those in the literature. We call it the \textit{Moreau envelope augmented Lagrangian (MEAL)} method. We also show that the iteration complexity of MEAL is $o(\varepsilon^{-2})$ to yield an $\varepsilon$-accurate first-order stationary point. We establish its whole sequence convergence (regardless of the initial guess) and a rate when a Kurdyka-{\L}ojasiewicz property is assumed. Moreover, when the subproblem of MEAL has no closed-form solution and is difficult to solve, we propose two practical variants of MEAL, an inexact version called \textit{iMEAL} with an approximate proximal update, and a linearized version called \textit{LiMEAL} for the constrained problem with a composite objective. Their convergence is also established.
\keywords{Nonconvex nonsmooth optimization \and augmented Lagrangian method \and Moreau envelope \and proximal augmented Lagrangian method \and Kurdyka-{\L}ojasiewicz inequality
}
\end{abstract}
\section{Introduction}
In this paper, we consider the following optimization problem with linear constraints
\begin{equation}
\label{Eq:problem}
\begin{array}{ll}
\mathrm{minimize}_{x\in \mathbb{R}^n} & f(x) \\
\mathrm{subject \ to} & Ax=b,
\end{array}
\end{equation}
where
$f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a proper, lower-semicontinuous \textit{weakly convex} function, which is possibly nonconvex
and nonsmooth,
and $A\in \mathbb{R}^{m\times n}$ and $b\in \mathbb{R}^m$ are a given matrix and vector, respectively.
A function $f$ is said to be \textit{weakly convex} with a modulus $\rho>0$ if $f(x)+\frac{\rho}{2}\|x\|^2$ is convex on $\mathbb{R}^n$,
where $\|\cdot\|$ is the Euclidean norm.
The class of weakly convex functions is broad \cite{Nurminskii73}, including all convex functions, smooth but nonconvex functions with Lipschitz continuous gradient, and their composite forms (say, $f(x)=h(x)+g(x)$ with both $h$ and $g$ being weakly convex, and $f(x)=g(h(x))$ with $g$ being convex and Lipschitz continuous and $h$ being a smooth mapping with Lipschitz Jacobian \cite[Lemma 4.2]{Drusvyatskiy-Paquette19}).
The augmented Lagrangian method (ALM) is a well-known algorithm for constrained optimization by Hestenes \cite{Hestenes69} and Powell \cite{Powell69}.
ALM has been extensively studied and has a large body of literature (\cite{Bertsekas73,Birgin10,Conn91,Conn96,Rockafellar73-ALM} just to name a few),
yet \emph{no ALM algorithm can solve the underlying problem (\ref{Eq:problem}) without at least one of the following assumptions}: convexity \cite{Bertsekas73,Bertsekas76,Fernadez12,Polyak-Tretyakov73,Rockafellar73-ALM}, or smoothness \cite{Andreani08,Andreani10,Andreani19,Andreani18,Curtis15}, or solving nonconvex subproblems to their global minima \cite{Birgin10,Birgin18}, or an auto-updated penalty sequence staying bounded on the problem at hand \cite{Birgin20,Grapiglia-Yuan19}.
Indeed, without these assumptions, ALM may oscillate and even diverge unboundedly on simple quadratic programs with weakly convex objectives~\cite{Wang19,Zhang-Luo18}. An example is given in Sec. \ref{sc:exp1} below.
At a high level, we introduce a Moreau-envelope modification of the ALM for solving \eqref{Eq:problem} and show the method can converge under weaker conditions.
In particular, convexity is relaxed to weak convexity; nonsmooth functions are allowed; the subproblems can be solved inexactly to some extent; linearization can be applied to the Lipschitz-differentiable component of the objective; and there is no assumption on the rank of $A$. On the other hand, we introduce two alternative subgradient properties in Definition \ref{Def:implicit-Lip-bounded-subgrad} below
as our main assumption. By also assuming either a bounded energy sequence or bounded primal-dual sequence, we derive certain subsequence rates of convergence.
We introduce a novel way to establish those boundedness properties based on a feasible coercivity assumption and a local-stability assumption on the subproblem. Finally, under the additional assumption of the Kurdyka-{\L}ojasiewicz (K{\L}) inequality, we establish global convergence.
Overall, this paper shows that the Moreau envelope technique makes ALM applicable to more problems.
\subsection{Proposed Algorithms}
\label{sc:algorithms}
To present our algorithm, define the augmented Lagrangian:
\begin{equation}
\label{Eq:augmented-Lagrangian}
{\cal L}_{\beta} (x,\lambda) := f(x)+\langle \lambda, Ax-b \rangle + \frac{\beta}{2}\|Ax-b\|^2,
\end{equation}
and the \emph{Moreau envelope}
of ${\cal L}_{\beta}(x,\lambda)$:
\begin{equation}
\label{Eq:Moreauenvelope_AL}
\phi_{\beta}(z,\lambda) = \min_{x} \left\{{\cal L}_{\beta}(x,\lambda)+\frac{1}{2\gamma}\|x-z\|^2\right\},
\end{equation}
where $\lambda \in \mathbb{R}^m$ is a multiplier vector, $\beta>0$ is a penalty parameter, and $\gamma>0$ is a proximal parameter. The Moreau envelope applies to the primal variable $x$ for each fixed dual variable $\lambda$.
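Before stating the algorithm, it may help to recall the envelope/prox relation on a one-dimensional stand-in: for $f(x)=|x|$ (in place of ${\cal L}_{\beta}(\cdot,\lambda)$), the proximal map is soft-thresholding and the Moreau envelope is differentiable with gradient $(z-\mathrm{prox}(z))/\gamma$, which is the identity exploited by MEAL. A minimal sketch:

```python
import numpy as np

# One-dimensional illustration with f(x) = |x|: the Moreau envelope is
# differentiable and its gradient equals (z - prox(z)) / gamma.
def prox_abs(z, gamma):
    return np.sign(z) * max(abs(z) - gamma, 0.0)      # soft-thresholding

def envelope(z, gamma):
    x = prox_abs(z, gamma)
    return abs(x) + (x - z) ** 2 / (2 * gamma)

gamma, z = 0.5, 2.0
grad = (z - prox_abs(z, gamma)) / gamma               # envelope gradient at z
num = (envelope(z + 1e-6, gamma) - envelope(z - 1e-6, gamma)) / 2e-6
```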
We introduce the \textit{Moreau Envelope Augmented Lagrangian method} (dubbed \textit{MEAL}) as follows: given an initialization $(z^0,\lambda^0)$, $\gamma>0$, a sequence of penalty parameters $\{\beta_k\}$ and a step size $\eta \in (0,2)$, for $k=0,1,\ldots,$ run
\begin{equation}
\label{alg:MEAL}
\mathrm{(MEAL)} \quad
\left\{
\begin{array}{l}
z^{k+1} = z^k - \eta \gamma \nabla_z \phi_{\beta_k}(z^k,\lambda^k),\\
\lambda^{k+1} = \lambda^k + \beta_k \nabla_\lambda \phi_{\beta_k}(z^k,\lambda^k).
\end{array}
\right.
\end{equation}
The penalty parameter $\beta_k$ can either vary or be fixed.
Introduce
\begin{equation*}
x^{k+1}= \mathrm{Prox}_{\gamma,{\cal L}_{\beta_k}(\cdot,\lambda^{k})}(z^k) := \argmin_x \left\{ {\cal L}_{\beta_k}(x,\lambda^k)+\frac{1}{2\gamma}\|x-z^k\|^2\right\}, \ \forall k\in \mathbb{N},
\end{equation*}
which yields $\nabla_z \phi_{\beta_k}(z^k,\lambda^k) = \gamma^{-1} (z^k-x^{k+1})$ and $\nabla_\lambda \phi_{\beta_k}(z^k,\lambda^k) = A x^{k+1} - b$.
Then, MEAL \eqref{alg:MEAL} is equivalent to:
\begin{equation}
\label{alg:MEAL-reformulation}
\mathrm{(MEAL\ Reformulated)} \quad
\left\{
\begin{array}{l}
x^{k+1} = \mathrm{Prox}_{\gamma,{\cal L}_{\beta_k}(\cdot,\lambda^{k})}(z^k),\\
z^{k+1} = z^k -\eta (z^k - x^{k+1}),\\
\lambda^{k+1} = \lambda^k + \beta_k (Ax^{k+1}-b).
\end{array}
\right.
\end{equation}
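To illustrate the reformulated iteration, here is a minimal numerical sketch on a made-up two-dimensional instance: $f(x)=\frac{1}{2}x^{\top}Qx$ with $Q$ indefinite, so $f$ is weakly convex but not convex, and the proximal step reduces to a linear solve.

```python
import numpy as np

# Toy instance (made up): minimize (1/2) x^T Q x subject to Ax = b, with Q
# indefinite, hence f weakly convex but not convex.
Q = np.array([[1.0, 0.0], [0.0, -0.5]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

beta, gamma, eta = 10.0, 0.5, 1.0
z, lam = np.zeros(2), np.zeros(1)

# For this f, Prox_{gamma, L_beta(., lam)}(z) solves the linear system
# (Q + beta A^T A + I/gamma) x = beta A^T b - A^T lam + z/gamma.
M = Q + beta * A.T @ A + np.eye(2) / gamma
for _ in range(500):
    x = np.linalg.solve(M, beta * A.T @ b - A.T @ lam + z / gamma)
    z = z - eta * (z - x)
    lam = lam + beta * (A @ x - b)

# The iterates approach the KKT point x* = (-1, 2), lambda* = 1.
```

On this instance ${\cal L}_{\beta}(\cdot,\lambda)$ is strongly convex in $x$ for the chosen $\beta$, so with $\eta=1$ and fixed $\beta_k=\beta$ the iterates converge linearly to the KKT point.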
Next, we provide two practical variants of MEAL that do not require an accurate computation of $\mathrm{Prox}_{\gamma,{\cal L}_{\beta}}$.
\paragraph{Inexact MEAL (iMEAL)}
We call $x^{k+1}$ an $\epsilon_k$-accurate stationary point of the $x$-subproblem in \eqref{alg:MEAL-reformulation} if there exists
\begin{equation}\label{iMealCond}
s^k \in \partial_x {\cal L}_{\beta_k}(x^{k+1},\lambda^k) + \gamma^{-1}(x^{k+1}-z^k)\quad\text{such that}~
\|s^k\| \leq \epsilon_k.
\end{equation}
\textit{iMEAL} is described as follows: given an initialization $(z^0,\lambda^0)$, $\gamma>0$, $\eta \in (0, 2)$, and two positive sequences $\{\epsilon_k\}$ and $\{\beta_k\}$, for $k=0,1,\ldots,$ run
\begin{equation}
\label{alg:iMEAL}
\mathrm{(iMEAL)} \quad
\left\{
\begin{array}{l}
\mathrm{find \ an} \ x^{k+1} \ \mathrm{to\ satisfy} \ \eqref{iMealCond},\\
z^{k+1} = z^k -\eta (z^k - x^{k+1}),\\
\lambda^{k+1} = \lambda^k + \beta_k (Ax^{k+1}-b).
\end{array}
\right.
\end{equation}
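A hypothetical instantiation of iMEAL on the same toy quadratic may clarify the accuracy condition \eqref{iMealCond}: since the $x$-subproblem is smooth and strongly convex, one way (an implementation choice of ours) to produce an $\epsilon_k$-accurate stationary point is to run plain gradient descent until the subproblem gradient drops below $\epsilon_k = 1/(k+1)^2$, a square-summable sequence.

```python
# Hypothetical iMEAL sketch (pure Python) on the toy problem
#   minimize f(x) = 0.5*||x||^2  subject to  a^T x = b.
# The x-subproblem is smooth and strongly convex, so the eps_k-accurate
# condition is enforced by gradient descent until the subproblem
# gradient norm drops below eps_k = 1/(k+1)^2 (square-summable).

def imeal(a, b, gamma=1.0, eta=1.0, beta=10.0, outer=300):
    n = len(a)
    x = [0.0] * n
    z = [0.0] * n
    lam = 0.0
    # Lipschitz constant of the subproblem gradient: 1 + beta*||a||^2 + 1/gamma.
    lips = 1.0 + beta * sum(ai * ai for ai in a) + 1.0 / gamma
    for k in range(outer):
        eps_k = 1.0 / (k + 1) ** 2
        # Gradient descent on L_beta(., lam) + ||. - z||^2/(2*gamma)
        # until an eps_k-accurate stationary point is found.
        for _ in range(10000):
            res = sum(ai * xi for ai, xi in zip(a, x)) - b
            grad = [x[i] + lam * a[i] + beta * a[i] * res
                    + (x[i] - z[i]) / gamma for i in range(n)]
            if sum(g * g for g in grad) ** 0.5 <= eps_k:
                break
            x = [x[i] - grad[i] / lips for i in range(n)]
        z = [z[i] - eta * (z[i] - x[i]) for i in range(n)]
        lam = lam + beta * (sum(ai * xi for ai, xi in zip(a, x)) - b)
    return x, lam

x, lam = imeal(a=[1.0, 2.0], b=3.0)
```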
\paragraph{Linearized MEAL (LiMEAL)}
When problem \eqref{Eq:problem} has the following form
\begin{equation}
\label{Eq:problem-CP}
\begin{array}{ll}
\mathop{\mathrm{minimize}}_{x\in \mathbb{R}^n} & f(x):= h(x) + g(x)\\
\mathrm{subject \ to} & Ax=b,
\end{array}
\end{equation}
where $h:\mathbb{R}^n \rightarrow \mathbb{R}$ is differentiable with a Lipschitz-continuous gradient and $g:\mathbb{R}^{n} \rightarrow \mathbb{R}$ is weakly convex and has an easy proximal operator (in particular, one admitting a closed-form solution) \cite{Hajinezhad-Hong19,Wang19,Xu-Yin-BCD13,Zeng-DGD18},
we shall linearize $h$ at the current iterate and use its gradient $\nabla h$.
Write $f^k(x):= h(x^k)+\langle \nabla h(x^k), x - x^k\rangle + g(x)$
and
$
{\cal L}_{\beta,{f^k}}(x,\lambda):= f^k(x)+\langle \lambda, Ax-b \rangle + \frac{\beta}{2}\|Ax-b\|^2.
$
We describe
\textit{LiMEAL} for \eqref{Eq:problem-CP} as: given $(z^0,\lambda^0)$, $\gamma>0$, $\eta \in (0,2)$ and $\{\beta_k\}$, for $k=0,1,\ldots,$ run
\begin{equation}
\label{alg:LiMEAL}
\mathrm{(LiMEAL)} \quad
\left\{
\begin{array}{l}
x^{k+1} = \mathrm{Prox}_{\gamma,{\cal L}_{\beta_k,{f^k}}(\cdot,\lambda^k)}(z^k),\\
z^{k+1} = z^k - \eta(z^k-x^{k+1}),\\
\lambda^{k+1} = \lambda^k + \beta_k (Ax^{k+1}-b).
\end{array}
\right.
\end{equation}
Since one may take $h \equiv 0$ in \eqref{Eq:problem-CP}, in which case LiMEAL reduces to MEAL, LiMEAL is more general than MEAL.
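A minimal LiMEAL sketch may help fix ideas. The instance below is ours for illustration only: $h(x)=\frac12\|x-c\|^2$ (Lipschitz gradient), $g=\mu\|\cdot\|_1$ (convex, hence $0$-weakly convex, with soft-thresholding as its prox), and one linear constraint $a^Tx=b$. The strongly convex $x$-subproblem is solved by a short inner proximal-gradient loop, which is an implementation choice, not part of the method's statement.

```python
# Hypothetical LiMEAL sketch (pure Python) for the composite problem
#   minimize h(x) + g(x)  subject to  a^T x = b,
# with h(x) = 0.5*||x - c||^2 and g = mu*||x||_1. The x-subproblem of
# LiMEAL (with h linearized at x^k) is strongly convex; it is solved
# here by an inner proximal-gradient loop with step 1/L.

def soft(v, tau):
    # Soft-thresholding: prox of tau*|.| in 1-D.
    return max(v - tau, 0.0) if v > 0 else min(v + tau, 0.0)

def limeal(a, b, c, mu, gamma=0.5, eta=1.0, beta=10.0,
           outer=500, inner=100):
    n = len(a)
    x = [0.0] * n
    z = [0.0] * n
    lam = 0.0
    aa = sum(ai * ai for ai in a)
    t = 1.0 / (beta * aa + 1.0 / gamma)   # inner step size 1/L
    for _ in range(outer):
        gk = [x[i] - c[i] for i in range(n)]   # grad h at x^k (frozen)
        # Inner loop: approximate Prox_{gamma, L_{beta_k, f^k}(., lam)}(z^k).
        y = x[:]
        for _ in range(inner):
            res = sum(a[i] * y[i] for i in range(n)) - b
            grad = [gk[i] + lam * a[i] + beta * a[i] * res
                    + (y[i] - z[i]) / gamma for i in range(n)]
            y = [soft(y[i] - t * grad[i], t * mu) for i in range(n)]
        x = y
        z = [z[i] - eta * (z[i] - x[i]) for i in range(n)]
        lam = lam + beta * (sum(a[i] * x[i] for i in range(n)) - b)
    return x, lam

x, lam = limeal(a=[1.0, 1.0], b=1.0, c=[2.0, 0.0], mu=0.1)
```

For these data the KKT system can be solved by hand: $x^*=(1.4,-0.4)$ and $\lambda^*=0.5$, which the iterates approach.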
\subsection{Relation to ALM and Proximal ALM}
\label{sc:relation-existing-methods}
Like ALM, MEAL alternately updates the primal and dual variables; but unlike ALM, MEAL applies the updates to the Moreau envelope of the augmented Lagrangian.
By \cite{Rockafellar-var97}, the Moreau envelope $\phi_{\beta_k}(\cdot,\lambda^k)$ provides a smooth approximation of ${\cal L}_{\beta_k}(\cdot,\lambda^k)$ from below and shares the same minima.
The smoothness of the Moreau envelope alleviates the oscillations that may arise when ALM is applied to certain nonconvex optimization problems.
For the problems satisfying the conditions in this paper, ALM may require a sequence of possibly unbounded $\{\beta_k\}$. When $\beta_k$ is large, the ALM subproblem is ill-conditioned.
Therefore, bounding $\beta_k$ is practically desirable \cite{Birgin-book14,Conn91}.
MEAL and its practical variants can use a fixed penalty parameter under a novel subgradient assumption in Definition \ref{Def:implicit-Lip-bounded-subgrad} later.
Proximal ALM was introduced in \cite{Rockafellar76-PALM}. Its variants were recently studied in \cite{Hajinezhad-Hong19,Hong17-Prox-PDA,Zhang-Luo20,Zhang-Luo18}.
These methods
add a proximal term to the augmented Lagrangian.
Under the reformulation \eqref{alg:MEAL-reformulation},
proximal ALM \cite{Rockafellar76-PALM} for problem \eqref{Eq:problem} is a special case of MEAL with the step size $\eta =1$.
In \cite{Hong17-Prox-PDA}, a proximal primal-dual algorithm called \textit{Prox-PDA} was proposed for problem \eqref{Eq:problem}.
Certain non-Euclidean matrix norms were adopted in Prox-PDA to guarantee the strong convexity of the ALM subproblem.
A proximal linearized version of Prox-PDA for the composite optimization problem \eqref{Eq:problem-CP} was studied in \cite{Hajinezhad-Hong19}. These methods are closely related to MEAL, but their convergence conditions in the literature are stronger.
Recently, \cite{Zhang-Luo20,Zhang-Luo18} modified proximal inexact ALM for the linearly constrained problems with an additional bounded box constraint set or polyhedral constraint set, denoted by ${\cal C}$. Our method is partially motivated by their methods.
Their problems are equivalent to the composite optimization problems \eqref{Eq:problem-CP} with $g(x) = \iota_{\cal C}(x)$, where $\iota_{\cal C}(x)=0$ when $x\in {\cal C}$ and $+\infty$ otherwise.
In this setting, the methods in \cite{Zhang-Luo20,Zhang-Luo18} can be regarded as prox-linear versions of LiMEAL \eqref{alg:LiMEAL}: they yield $x^{k+1}$ via a prox-linear scheme \cite{Xu-Yin-BCD13} instead of the minimization scheme used in LiMEAL \eqref{alg:LiMEAL}, together with an additional dual step size and a sufficiently small primal step size. Specifically, in the case of $g(x) = \iota_{\cal C}(x)$, the update of $x^{k+1}$ in the methods of \cite{Zhang-Luo20,Zhang-Luo18} is
\begin{align*}
x^{k+1} = \mathrm{Proj}_{\cal C}(x^k - s\nabla_x K(x^k,z^k,\lambda^k)),
\end{align*}
where $K(x,z^k,\lambda^k) = {\cal L}_{\beta_k,f}(x,\lambda^k) + \frac{1}{2\gamma}\|x-z^k\|^2$, $s>0$ is a primal step size, and $\mathrm{Proj}_{\cal C}(x)$ is the projection of $x$ onto ${\cal C}$.
Besides these differences, LiMEAL can handle proximal functions beyond indicator functions and permits the wider step-size range $\eta \in (0,2)$.
\subsection{Other Related Literature}
On convex and constrained problems,
locally linear convergence\footnote{Locally linear convergence means exponentially fast convergence to a local minimum from a sufficiently close initial point.} of ALM has been extensively studied in the literature \cite{Bertsekas73,Bertsekas76,Bertsekas82,Conn00,Fernadez12,Nocedal99,Polyak-Tretyakov73}, mainly under the second order sufficient condition (SOSC) and constraint conditions such as the linear independence constraint qualification (LICQ).
Global convergence (i.e., convergence regardless of the initial guess) of ALM and its variants was studied in~\cite{Andreani07,Armand17,Birgin05,Birgin12,Birgin10,Conn91,Conn96,Rockafellar73-ALM,Tretykov73}, mainly under constraint qualifications and assumed boundedness of nondecreasing penalty parameters.
On nonconvex and constrained problems, convergence of ALM was recently studied in \cite{Andreani08,Andreani10,Andreani19,Andreani18,Birgin10,Birgin18,Curtis15}, mainly under the following assumptions: solving nonconvex subproblems to their approximate global minima or stationary points \cite{Birgin10,Birgin18}, or
boundedness of the nondecreasing penalty sequence \cite{Birgin20,Grapiglia-Yuan19}.
Most of them require \textit{Lipschitz differentiability} of the objective.
Convergence of proximal ALM and its variants was established under the assumptions of either convexity in \cite{Rockafellar76-PALM} or smoothness (in particular, Lipschitz differentiability) in \cite{Hajinezhad-Hong19,Hong17-Prox-PDA,Jiang19,Xie-Wright19,Zhang-Luo20,Zhang-Luo18}.
Besides proximal ALM, other related works for nonconvex and constrained problems include \cite{Bian15,Haeser19,Nouiehed18,ONeill20}, which also assume smoothness of the objective, plus either gradient or Hessian information.
\subsection{Contribution and Novelty}
MEAL, iMEAL and LiMEAL achieve the same order of iteration complexity $o({\varepsilon^{-2}})$ to reach an $\varepsilon$-accurate first-order stationary point, slightly better than those in the ALM literature~\cite{Hajinezhad-Hong19,Hong17-Prox-PDA,Xie-Wright19,Zhang-Luo18,Zhang-Luo20} while also requiring weaker conditions. Our methods have convergence guarantees for a broader class of objective functions, for example, nonsmooth and nonconvex functions like
the smoothly clipped absolute deviation (SCAD) regularization \cite{Fan-SCAD} and the minimax concave penalty (MCP) regularization \cite{Zhang-MCP}, which underlie applications in
statistical learning and beyond \cite{Wang19}.
Note that we only assume the feasibility of $Ax=b$, which is weaker than commonly used hypotheses such as: the strict complementarity condition in \cite{Zhang-Luo18}, certain rank assumptions (such as $\mathrm{Im}(A)\subseteq \mathrm{Im}(B)$ when considering the two- (multi-)block case $Ax+By=0$) in \cite{Wang19}, and the linear independence constraint qualification (LICQ) in \cite{Bertsekas82,Nocedal99} (which implies the full-rank assumption in the linear constraint case).
Our analysis is noticeably different from those in the literature~\cite{Rockafellar76-PALM,Hajinezhad-Hong19,Hong17-Prox-PDA,Jiang19,Zhang-Luo18,Zhang-Luo20,Xie-Wright19,Wang19}. We base our analysis on new potential functions. The Moreau envelope in the potential functions is partially motivated by~\cite{Davis-Drusvyatskiy19}. Our overall potential functions are new and tailored for MEAL, iMEAL, and LiMEAL and include the augmented Lagrangian with additional terms. The technique of analysis may have its own value for further generalizing and improving ALM-type methods.
\subsection{Notation and Organization}
We let $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real and natural numbers, respectively. Given a matrix $A$, $\mathrm{Im}(A)$ denotes its image, and $\tilde{\sigma}_{\min}(A^TA)$ denotes the smallest positive eigenvalue of $A^TA$. $\|\cdot\|$ is the Euclidean norm for a vector.
Given any two nonnegative sequences $\{\xi_k\}$ and $\{\zeta_k\}$, we write $\xi_k = o(\zeta_k)$ if $\lim_{k\rightarrow \infty} \frac{\xi_k}{\zeta_k}=0$, and $\xi_k = {\cal O}(\zeta_k)$ if there exists a positive constant $c$ such that $\xi_k \leq c \zeta_k$ for all sufficiently large $k$.
In the rest of this paper,
Section \ref{sc:preliminary} presents background and preliminary techniques.
Section \ref{sc:convergence-MEAL} states convergence results of MEAL and iMEAL.
Section \ref{sc:LiMEAL} presents the results of LiMEAL.
Section \ref{sc:main-proofs} includes main proofs.
Section \ref{sc:discussion} provides sufficient conditions for certain boundedness assumptions in above results along with comparisons with the related work. Section \ref{sc:experiment} provides some numerical experiments to demonstrate the effectiveness of proposed methods.
We conclude this paper in Section \ref{sc:conclusion}.
\section{Background and Preliminaries}
\label{sc:preliminary}
This paper uses extended-real-valued functions, for example, $h:\mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$.
Write the domain of $h$ as $\mathrm{dom}(h):=\{x\in \mathbb{R}^n: h(x)<+\infty\}$ and its range as $\mathrm{ran}(h):= \{h(x): x\in \mathrm{dom}(h)\}$.
For each $x\in \mathrm{dom}(h)$, the \textit{Fr\'{e}chet subdifferential} of $h$ at $x$, written as $\widehat{\partial}h(x)$, is the set of vectors $v\in \mathbb{R}^n$ satisfying
\[
\liminf_{u\neq x, u\rightarrow x} \ \frac{h(u)-h(x)-\langle v,u-x\rangle}{\|x-u\|} \geq 0.
\]
When $x\notin \mathrm{dom}(h),$ we define
$\widehat{\partial} h(x) = \emptyset.$
The \emph{limiting-subdifferential} (or simply \emph{subdifferential}) of $h$~\cite{Mordukhovich-2006} at $x\in \mathrm{dom}(h)$ is defined
as
\begin{equation}
\label{Def:limiting-subdifferential}
\partial h(x) := \{v\in \mathbb{R}^n: \exists x^t \to x,\; h(x^t)\to h(x), \; \widehat{\partial} h(x^t) \ni v^t \to v\}.
\end{equation}
A necessary (but not sufficient) condition for $x\in \mathbb{R}^n$ to be a minimizer of $h$ is $0 \in \partial h(x)$.
A point that satisfies this inclusion is called \textit{limiting-critical} or simply \textit{critical}.
The distance between a point $x$ and a subset ${\cal S}$ of $\mathbb{R}^n$ is defined
as $\mathrm{dist}(x, {\cal S}) = \inf_u \{\|x-u\|: u\in {\cal S}\}$.
\subsection{Moreau Envelope}
\label{sc:moreau-envelope}
Given a function $h: \mathbb{R}^n \rightarrow \mathbb{R}$, define its \textit{Moreau envelope} \cite{Moreau65,Rockafellar-var97}:
\begin{equation}
\label{Eq:Moreau-envelope}
{\cal M}_{\gamma,h}(z) = \min_{x} \left\{ h(x) + \frac{1}{2\gamma}\|x-z\|^2\right\},
\end{equation}
where $\gamma>0$ is a parameter. Define its associated proximity operator
\begin{equation}
\label{Eq:prox-operator}
\mathrm{Prox}_{\gamma,h}(z) = \argmin_{x} \left\{h(x) + \frac{1}{2\gamma}\|x-z\|^2\right\}.
\end{equation}
If $h$ is $\rho$-weakly convex and $\gamma\in (0,\rho^{-1})$, then $\mathrm{Prox}_{\gamma,h}$ is monotone, single-valued, and Lipschitz, and ${\cal M}_{\gamma,h}$ is differentiable with
\begin{equation}
\label{Eq:Moreau-gradient}
\nabla {\cal M}_{\gamma,h}(z) = \gamma^{-1}\left(z-\mathrm{Prox}_{\gamma,h}(z)\right)\in \partial h(\mathrm{Prox}_{\gamma,h}(z));
\end{equation}
see \cite[Proposition 13.37]{Rockafellar-var97}.
From~\cite{Drusvyatskiy18,Drusvyatskiy-Paquette19}, we also have
\begin{align*}
&{\cal M}_{\gamma,h}(z) \leq h(z), \nonumber\\
&\|\mathrm{Prox}_{\gamma,h}(z)-z\| = \gamma \|\nabla {\cal M}_{\gamma,h}(z)\|, \nonumber\\
&\mathrm{dist}(0,\partial h(\mathrm{Prox}_{\gamma,h}(z))) \leq \|\nabla {\cal M}_{\gamma,h}(z)\|.
\end{align*}
The first relation above presents the Moreau envelope as a smooth lower approximation of $h$. By the second and third relations, a small $\|\nabla {\cal M}_{\gamma,h}(z)\|$ implies that $z$ is \textit{near} its proximal point $\mathrm{Prox}_{\gamma,h}(z)$ and that $z$ is \textit{nearly stationary} for $h$ \cite{Davis-Drusvyatskiy19}.
Therefore, $\|\nabla {\cal M}_{\gamma,h}(z)\|$ can be used as a \textit{continuous stationarity measure}.
Hence, replacing the augmented Lagrangian with its Moreau envelope
not only generates a strongly convex subproblem
but also yields a stationarity measure.
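These relations can be checked numerically on a simple example of our choosing: $h(x)=|x|$ in one dimension, whose proximity operator is soft-thresholding and whose Moreau envelope is the Huber function.

```python
# Numerical illustration (pure Python) of the Moreau envelope relations
# for the convex (hence 0-weakly convex) function h(x) = |x| in 1-D.

def prox_abs(z, gamma):
    # argmin_x |x| + (x - z)^2 / (2*gamma)  (soft-thresholding)
    return max(z - gamma, 0.0) if z > 0 else min(z + gamma, 0.0)

def moreau_abs(z, gamma):
    # min_x |x| + (x - z)^2 / (2*gamma)  (Huber function)
    x = prox_abs(z, gamma)
    return abs(x) + (x - z) ** 2 / (2.0 * gamma)

gamma = 0.5
for z in [-2.0, -0.3, 0.0, 0.1, 1.7]:
    m = moreau_abs(z, gamma)
    x = prox_abs(z, gamma)
    g = (z - x) / gamma                      # = grad M_{gamma,h}(z)
    # (i) smooth lower approximation: M(z) <= h(z)
    assert m <= abs(z) + 1e-12
    # (ii) gradient formula checked against a central finite difference
    fd = (moreau_abs(z + 1e-6, gamma) - moreau_abs(z - 1e-6, gamma)) / 2e-6
    assert abs(g - fd) < 1e-4
    # (iii) grad M(z) lies in the subdifferential of |.| at prox(z)
    assert abs(g) <= 1.0 + 1e-12
```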
\subsection{Implicit Regularity Properties}
\label{sc:implicit-property}
Let $h$ be a proper, lower semicontinuous, $\rho$-weakly convex function.
Given a $\gamma \in (0,\rho^{-1})$, define the \textit{generalized inverse mapping} of $\mathrm{Prox}_{\gamma,h}$:
\begin{align}
\label{Eq:prox-operator-inverse}
\mathrm{Prox}_{\gamma,h}^{-1}(x):=\{w:\mathrm{Prox}_{\gamma,h}(w) = x\}, \quad \forall x\in \mathrm{ran}(\mathrm{Prox}_{\gamma,h}).
\end{align}
In the definition below, we introduce two important regularity properties.
\begin{definition}
\label{Def:implicit-Lip-bounded-subgrad}
Let $h$ be a proper, lower semicontinuous and $\rho$-weakly convex function.
\begin{enumerate}
\item[(a)] We say $h$ satisfies the \textbf{implicit Lipschitz subgradient} property if for any $\gamma \in (0,\rho^{-1})$, there exists $L>0$ (depending on $\gamma$) such that for any $u,v\in \mathrm{ran}(\mathrm{Prox}_{\gamma,h})$,
\[\|\nabla {\cal M}_{\gamma,h}(w) - \nabla {\cal M}_{\gamma,h}(w') \| \leq L\|u-v\|, \ \forall w\in \mathrm{Prox}_{\gamma,h}^{-1}(u), w'\in \mathrm{Prox}_{\gamma,h}^{-1}(v);\]
\item[(b)] We say $h$ satisfies the \textbf{implicit bounded subgradient} property if for any $\gamma \in (0,\rho^{-1})$, there exists $\hat{L}>0$ (depending on $\gamma$) such that for any $u\in \mathrm{ran}(\mathrm{Prox}_{\gamma,h})$,
\[\|\nabla {\cal M}_{\gamma,h}(w)\| \leq \hat{L}, \ \forall w\in \mathrm{Prox}_{\gamma,h}^{-1}(u).\]
\end{enumerate}
\end{definition}
Since $\nabla {\cal M}_{\gamma,h}(x) \in \partial h(\mathrm{Prox}_{\gamma,h}(x))$ for any $x\in \mathbb{R}^n$,
we have $\nabla {\cal M}_{\gamma,h}(w) \in \partial h(u), \forall u \in \mathrm{ran}(\mathrm{Prox}_{\gamma,h})$ and $w\in \mathrm{Prox}_{\gamma,h}^{-1}(u)$.
Hence, the \textit{implicit Lipschitz subgradient} and \textit{implicit bounded subgradient} imply, respectively, the \textit{Lipschitz continuity} and \textit{boundedness} only on the components of $\partial h$ that are Moreau envelope gradients, but not on other components of $\partial h$.
When $h$ is differentiable, \textit{implicit Lipschitz subgradient} implies \textit{Lipschitz gradient}.
Having \textit{implicit bounded subgradients} is weaker than having bounded $\partial h$,
which is commonly assumed in the analysis of nonconvex algorithms (cf. \cite{Davis-Drusvyatskiy19,Hajinezhad-Hong19,Zeng-DGD18}). Nonsmooth and nonconvex functions like
the SCAD regularization and MCP regularization which appear in statistical learning \cite{Wang19}, have \textit{implicit bounded subgradients}.
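As a sanity check of the implicit bounded subgradient property, the sketch below (with parameter values chosen by us) treats the one-dimensional MCP, which is $(1/\theta)$-weakly convex; its proximal points are found by a brute-force grid search, and the resulting Moreau-envelope gradients $(w-\mathrm{Prox}_{\gamma,h}(w))/\gamma$ stay bounded by the MCP parameter $\mu$.

```python
# Numerical check (pure Python) that the 1-D minimax concave penalty
# (MCP) has bounded Moreau-envelope gradients: (w - prox(w))/gamma
# never exceeds mu in magnitude. MCP is (1/theta)-weakly convex, so
# gamma = 1 < theta = 2 below keeps the prox subproblem strongly convex.

def mcp(x, mu=1.0, theta=2.0):
    ax = abs(x)
    if ax <= theta * mu:
        return mu * ax - x * x / (2.0 * theta)
    return theta * mu * mu / 2.0

def prox_mcp_grid(w, gamma=1.0, lo=-10.0, hi=10.0, step=1e-3):
    # Brute-force prox: argmin_x mcp(x) + (x - w)^2 / (2*gamma).
    best_x, best_v = lo, float("inf")
    x = lo
    while x <= hi:
        v = mcp(x) + (x - w) ** 2 / (2.0 * gamma)
        if v < best_v:
            best_x, best_v = x, v
        x += step
    return best_x

grads = []
for w in [-6.0, -3.0, -1.0, -0.25, 0.0, 0.25, 1.0, 3.0, 6.0]:
    xh = prox_mcp_grid(w)
    grads.append((w - xh) / 1.0)   # gamma = 1: Moreau envelope gradient
max_grad = max(abs(g) for g in grads)
# max_grad should not exceed mu = 1 (up to the grid resolution).
```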
\subsection{Kurdyka-{\L}ojasiewicz Inequality}
\label{sc:KL-ineq}
The Kurdyka-{\L}ojasiewicz (K{\L}) inequality \cite{Bolte-KL2007a,Bolte-KL2007b,Kurdyka-KL1998,Lojasiewicz-KL1963,Lojasiewicz-KL1993}
is a property that leads to global convergence of nonconvex algorithms in the literature (see, \cite{Attouch13,Bolte2014,Wang19,Xu-Yin-BCD13,Zeng-BCD19,Zeng-ADMM19}).
The following definition of Kurdyka-{\L}ojasiewicz property is adopted from \cite{Bolte-KL2007a}.
\begin{definition}
\label{Def-KLProp}
A function $h:\mathbb{R}^n \rightarrow \mathbb{R}\cup \{+\infty\}$ is said to have the {Kurdyka-{\L}ojasiewicz property} at $x^*\in \mathrm{dom}(\partial h)$ if there exist a neighborhood ${\cal U}$ of $x^*$, a constant $\nu>0$, and a continuous concave function $\varphi(s) = cs^{1-\theta}$ for some $c>0$ and $\theta \in [0,1)$ such that, for all $x \in {\cal U} \cap \mathrm{dom}(\partial h)$ with $h(x^*) < h(x) < h(x^*)+\nu$, the Kurdyka-{\L}ojasiewicz inequality holds:
\begin{equation}
\varphi'(h(x)-h(x^*)) \cdot\mathrm{dist}(0,\partial h(x))\geq 1, \label{Eq:KLIneq}
\end{equation}
(we use the conventions: $0^0=1, \infty/\infty=0/0=0$),
where $\theta$ is called the K{\L} exponent of $h$ at $x^*$. Proper lower semicontinuous functions satisfying the K{\L} inequality at every point of $\mathrm{dom}(\partial h)$ are called K{\L} functions.
\end{definition}
This property was first introduced in \cite{Lojasiewicz-KL1993} for real analytic functions \cite{Krantz2002-real-analytic} with $\theta \in \left[ \tfrac{1}{2},1\right)$, was then extended to functions definable in an o-minimal structure in \cite{Kurdyka-KL1998}, and was later extended to nonsmooth subanalytic functions in \cite{Bolte-KL2007a}.
K{\L} functions include real analytic functions \cite{Krantz2002-real-analytic},
semialgebraic functions \cite{Bochnak-semialgebraic1998},
tame functions defined in some o-minimal structures \cite{Kurdyka-KL1998}, continuous subanalytic functions \cite{Bolte-KL2007a}, definable functions \cite{Bolte-KL2007b}, locally strongly convex functions \cite{Xu-Yin-BCD13}, as well as many
deep-learning training models~\cite{Zeng-BCD19,Zeng-ADMM19}.
\section{Convergence of MEAL}
\label{sc:convergence-MEAL}
This section presents the convergence results of MEAL and iMEAL. We postpone their proofs to Section \ref{sc:main-proofs}.
\subsection{Assumptions and Stationarity Measure}
\label{sc:MEAL-assump}
\begin{assumption}
\label{Assump:feasibleset}
The set ${\cal X}:=\{x:Ax=b\}$ is nonempty.
\end{assumption}
\begin{assumption}
\label{Assump:MEAL}
The objective $f$ in problem \eqref{Eq:problem} satisfies:
\begin{enumerate}
\item[(a)] $f$ is proper lower semicontinuous and $\rho$-weakly convex; and for any $\gamma\in (0,\rho^{-1})$, \textbf{either (b) or (c):}
\item[(b)] $f$ satisfies the \textbf{implicit Lipschitz subgradient} property with a constant $L_f>0$ (possibly depending on $\gamma$); or,
\item[(c)] $f$ satisfies the \textbf{implicit bounded subgradient} property with a constant $\hat{L}_f>0$ (possibly depending on $\gamma$).
\end{enumerate}
\end{assumption}
We do not assume the following hypotheses:
the strict complementarity condition used in \cite{Zhang-Luo18}, any rank assumption (such as $\mathrm{Im}(A)\subseteq \mathrm{Im}(B)$ when considering the two- (multi-)block case $Ax+By=0$) used in \cite{Wang19}, or the linear independence constraint qualification (LICQ) used in \cite{Bertsekas82,Nocedal99} (implying the full-rank assumption in the linear constraint case).
Assumption \ref{Assump:MEAL} is mild as discussed in Section \ref{sc:implicit-property}.
According to \eqref{Eq:Moreauenvelope_AL} and the update \eqref{alg:MEAL} of MEAL, we have
\begin{align}
\label{Eq:stationary-MEAL}
\nabla \phi_{\beta_k}(z^k,\lambda^k)=
\left(
\begin{array}{c}
(\eta\gamma)^{-1} (z^k - z^{k+1})\\
\beta_k^{-1}(\lambda^{k+1}-\lambda^k)
\end{array}
\right)
\in
\left(
\begin{array}{c}
\partial f(x^{k+1}) + A^T\lambda^{k+1}\\
Ax^{k+1}-b
\end{array}
\right).
\end{align}
Let
\begin{align}
\label{Eq:measure-MEAL}
\xi_{\mathrm{meal}}^k := \min_{0\leq t \leq k} \|\nabla \phi_{\beta_t}(z^t,\lambda^t)\| , \ \forall k\in \mathbb{N}.
\end{align}
Then according to \eqref{Eq:stationary-MEAL}, the bound $\xi_{\mathrm{meal}}^k \leq \varepsilon$ implies
\begin{align*}
\min_{0\leq t \leq k} \mathrm{dist}\left\{0,\left(
\begin{array}{c}
\partial f(x^{t+1}) + A^T\lambda^{t+1}\\
Ax^{t+1}-b
\end{array}
\right)\right\} \leq \xi^k_{\mathrm{meal}} \leq \varepsilon,
\end{align*}
that is, MEAL achieves $\varepsilon$-accurate first-order stationarity for problem \eqref{Eq:problem} within $k$ iterations.
Hence, $\xi_{\mathrm{meal}}^k$ is a valid stationarity measure of MEAL.
Define iteration complexity:
\begin{align}
\label{Eq:itercomplexity-meal}
T_{\varepsilon} = \inf\left\{t\geq 1: \|\nabla \phi_{\beta_t}(z^t,\lambda^t)\| \leq \varepsilon \right\}.
\end{align}
Comparing $T_{\varepsilon}$ to the common iteration complexity
\begin{align*}
\hat{T}_{\varepsilon}= \inf\left\{t\geq 1: \mathrm{dist}(0,\partial f(x^t)+A^T\lambda^t) \leq \varepsilon \ \text{and}\ \|Ax^t-b\| \leq \varepsilon \right\},
\end{align*}
we get $T_{\varepsilon}\ge \hat{T}_{\varepsilon}$.
If $f$ is differentiable, $\mathrm{dist}(0,\partial f(x^t)+A^T\lambda^t)$ reduces to $\|\nabla f(x^t)+A^T\lambda^t\|$.
\subsection{Convergence Theorems of MEAL}
\label{sc:MEAL-convergence}
We present the quantities used to state the convergence results of MEAL.
Let
\begin{align}
\label{Eq:function-P}
{\cal P}_{\beta}(x,z,\lambda) = {\cal L}_{\beta}(x,\lambda) + \frac{1}{2\gamma}\|x-z\|^2,
\end{align}
for some $\beta, \gamma>0.$
Then according to \eqref{alg:MEAL-reformulation}, MEAL can be interpreted as a primal-dual update with respect to ${\cal P}_{\beta_k}(x,z,\lambda)$ at the $k$-th iteration, that is, updating $x^{k+1}$, $z^{k+1}$, and $\lambda^{k+1}$ by minimization, gradient descent, and gradient ascent respectively.
Based on \eqref{Eq:function-P}, we introduce the following \textit{Lyapunov functions} for MEAL:
\begin{align}
\label{Eq:Lyapunov-seq-MEAL-S1}
{\cal E}_{\mathrm{meal}}^k
:= {\cal P}_{\beta_k}(x^k,z^k,\lambda^k)+ 2\alpha_k \|z^k - z^{k-1}\|^2, \ \forall k\geq 1,
\end{align}
associated with the \textit{implicit Lipschitz subgradient} assumption and
\begin{align}
\label{Eq:Lyapunov-seq-MEAL-S2}
\tilde{\cal E}_{\mathrm{meal}}^k := {\cal P}_{\beta_k}(x^k,z^k,\lambda^k) + 3\alpha_k \|z^k-z^{k-1}\|^2, \ \forall k\geq 1,
\end{align}
associated with the \textit{implicit bounded subgradient} assumption,
where
\begin{align}
\label{Eq:alphak}
\alpha_k := \frac{\beta_k+\beta_{k+1} + \gamma\eta(1-\eta/2) }{2c_{\gamma,A}\beta_k^2}, \ \forall k \in \mathbb{N},
\end{align}
and $c_{\gamma,A}:= \gamma^2 \tilde{\sigma}_{\min}(A^TA)$.
When $\beta$ is fixed, we also fix
\begin{align}
\label{Eq:alpha}
\alpha := \frac{2\beta+\gamma \eta(1-\eta/2)}{2c_{\gamma,A}\beta^2}.
\end{align}
\begin{theorem}[Iteration Complexity of MEAL]
\label{Theorem:Convergence-MEAL}
Suppose that Assumptions \ref{Assump:feasibleset} and \ref{Assump:MEAL}(a) hold. Pick $\gamma \in (0,\rho^{-1})$ and $\eta \in (0,2)$.
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by MEAL \eqref{alg:MEAL-reformulation}.
The following claims hold:
\begin{enumerate}
\item[(a)]
Set $\beta$
sufficiently large such that in \eqref{Eq:alpha}, $\alpha < \min\left\{\frac{1-\gamma \rho}{4\gamma(1+\gamma L_f)^2}, \frac{1}{8\gamma}(\frac{2}{\eta}-1)\right\}$.
Under Assumption \ref{Assump:MEAL}(b), if $\{{\cal E}_{\mathrm{meal}}^k\}$ is lower bounded, then $\xi^k_{\mathrm{meal}} = o(1/\sqrt{k})$ for $\xi_{\mathrm{meal}}^k$ in \eqref{Eq:measure-MEAL}.
\item[(b)] Pick any $K\geq 1$. Set $\{\beta_k\}$ so that in \eqref{Eq:alphak}, $\alpha_k \equiv \frac{\alpha^*}{K}$ for some positive constant $\alpha^* \leq \min\left\{ \frac{1-\rho \gamma}{6\gamma}, \frac{1}{12\gamma}\left(\frac{2}{\eta} -1\right)\right\}$.
Under Assumption \ref{Assump:MEAL}(c), if $\{\tilde{\cal E}_{\mathrm{meal}}^k\}$ is lower bounded, then
$\xi_{\mathrm{meal}}^K \leq \tilde{c}_1/\sqrt{K}$ for some constant $\tilde{c}_1>0$.
\end{enumerate}
\end{theorem}
Section \ref{sc:discussion-boundedness} provides sufficient conditions for the lower-boundedness assumptions. Let us interpret the theorem.
To achieve an $\varepsilon$-accurate stationary point, the iteration complexity of MEAL is $o(\varepsilon^{-2})$ assuming the implicit Lipschitz subgradient property and ${\cal O}(\varepsilon^{-2})$ assuming the implicit bounded subgradient property.
Both iteration complexities are consistent with the existing results of ${\cal O}(\varepsilon^{-2})$ in~\cite{Hajinezhad-Hong19,Hong17-Prox-PDA,Xie-Wright19,Zhang-Luo20}.
The established results of MEAL also hold for proximal ALM by setting $\eta = 1$.
We note that it is not our goal to pursue any better complexity (e.g., using momentum) in this paper.
\begin{remark}
Let $\bar{\alpha}:= \min\left\{\frac{1-\gamma \rho}{4\gamma(1+\gamma L_f)^2},\frac{1}{8\gamma}(\frac{2}{\eta}-1)\right\}$. By \eqref{Eq:alpha}, the requirement $0<\alpha < \bar{\alpha}$ in Theorem \ref{Theorem:Convergence-MEAL}(a) is met by setting
\begin{align}
\label{Eq:cond-beta-MEAL-S1}
\beta > \frac{1+\sqrt{1+\eta(2-\eta)\gamma c_{\gamma,A}\bar{\alpha}}}{2c_{\gamma,A}\bar{\alpha}}.
\end{align}
Similarly,
the assumption $\alpha_k = \frac{\alpha^*}{K}$ in Theorem \ref{Theorem:Convergence-MEAL}(b) is met by setting
\begin{align}
\label{Eq:cond-beta-MEAL-S2}
\beta_k = \frac{K\left(1+\sqrt{1+\eta(2-\eta)\gamma c_{\gamma,A}\alpha^*/K}\right)}{2c_{\gamma,A}\alpha^*}, \ k=1,\ldots, K.
\end{align}
\end{remark}
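The threshold \eqref{Eq:cond-beta-MEAL-S1} admits a quick numerical sanity check: it is exactly the point where $\alpha$ in \eqref{Eq:alpha} crosses $\bar{\alpha}$. The values of $\gamma,\eta,\rho,L_f$ and $\tilde{\sigma}_{\min}(A^TA)$ below are arbitrary illustrative choices of ours satisfying $\gamma\rho<1$.

```python
# Numerical sanity check (pure Python) of the penalty bound in the
# remark: at beta = beta_min from \eqref{Eq:cond-beta-MEAL-S1}, alpha
# in \eqref{Eq:alpha} equals bar_alpha, and any larger beta gives
# alpha < bar_alpha (alpha is decreasing in beta > 0).
import math

gamma, eta, rho, L_f, sigma = 0.5, 1.0, 1.0, 2.0, 4.0
c = gamma ** 2 * sigma                       # c_{gamma,A}
bar_alpha = min((1 - gamma * rho) / (4 * gamma * (1 + gamma * L_f) ** 2),
                (2 / eta - 1) / (8 * gamma))

def alpha_of(beta):
    # alpha = (2*beta + gamma*eta*(1 - eta/2)) / (2*c*beta^2)
    return (2 * beta + gamma * eta * (1 - eta / 2)) / (2 * c * beta ** 2)

beta_min = (1 + math.sqrt(1 + eta * (2 - eta) * gamma * c * bar_alpha)) \
           / (2 * c * bar_alpha)
# At the threshold alpha equals bar_alpha; any larger beta is admissible.
```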
Next, we establish global convergence (whole sequence convergence regardless of initial points) and its rate for MEAL under the K{\L} inequality (Definition \ref{Def-KLProp}).
Let
$\hat{z}^k := z^{k-1}$, $y^k:= (x^k,z^k,\lambda^k,\hat{z}^k), \ \forall k\geq 1,$
$y:= (x,z,\lambda,\hat{z}) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n,$ and
\begin{align}
\label{Eq:Lyapunov-fun-MEAL}
{\cal P}_{\mathrm{meal}}(y) := {\cal P}_{\beta}(x,z,\lambda) + 3 \alpha \|z-\hat{z}\|^2
\end{align}
where $\alpha$ is defined in \eqref{Eq:alpha}.
\begin{proposition}[Global convergence and rate of MEAL]
\label{Proposition:globalconv-MEAL}
Suppose that the assumptions required for Theorem \ref{Theorem:Convergence-MEAL}(a) hold and that $\{(x^k,z^k,\lambda^k)\}$ generated by MEAL \eqref{alg:MEAL-reformulation} is bounded.
If ${\cal P}_{\mathrm{meal}}$ satisfies the K{\L} property at some point $y^*:= (x^*,x^*,\lambda^*,x^*)$ with an exponent of $\theta \in [0,1)$, where $(x^*,\lambda^*)$ is a limit point of $\{(x^k,\lambda^k)\}$, then
\begin{enumerate}
\item[(a)] the whole sequence $\{\hat{y}^k:=(x^k,z^k,\lambda^k)\}$ converges to $\hat{y}^*:=(x^*,x^*,\lambda^*)$; and
\item[(b)] the following rate-of-convergence results hold: (1) if $\theta =0$, then $\{\hat{y}^k\}$ converges within a finite number of iterations; (2) if $\theta \in (0,\frac{1}{2}]$, then $\|\hat{y}^k- \hat{y}^*\| \leq c \tau^k$ for all $k\geq k_0$, for certain $k_0>0, c>0, \tau \in (0,1)$; and (3) if $\theta \in (\frac{1}{2},1)$, then $\|\hat{y}^k - \hat{y}^*\| \leq c k^{-\frac{1-\theta}{2\theta-1}}$ for all $k\geq k_0$, for certain $k_0>0, c>0$.
\end{enumerate}
\end{proposition}
In Proposition \ref{Proposition:globalconv-MEAL}, the K{\L} property of ${\cal P}_{\mathrm{meal}}$ defined in \eqref{Eq:Lyapunov-fun-MEAL}
plays a central role in the establishment of global convergence of MEAL. The K{\L} exponent determines the convergence speed of MEAL; particularly, the exponent $\theta=1/2$ implies linear convergence so it is most desirable. Below we give some results on $\theta$, which are obtainable from~\cite[page 43]{Shiota1997}, \cite[Theorem 3.1]{Bolte-KL2007a}, \cite[Lemma 5]{Zeng-BCD19}, and \cite[Theorem 3.6 and Corollary 5.2]{Li-Pong-KLexponent18}.
\begin{proposition}
\label{Propos:KL-property-Lyapunov}
The following claims hold:
\begin{enumerate}
\item[(a)] If $f$ is subanalytic with a closed domain and continuous on its domain, then ${\cal P}_{\mathrm{meal}}$ defined in \eqref{Eq:Lyapunov-fun-MEAL} is a K{\L} function;
\item[(b)] If ${\cal L}_{\beta}(x,\lambda)$ defined in \eqref{Eq:augmented-Lagrangian} has the K{\L} property at some point $(x^*,\lambda^*)$ with exponent $\theta \in [1/2,1)$, then ${\cal P}_{\mathrm{meal}}$ has the K{\L} property at $(x^*,x^*,\lambda^*,x^*)$ with exponent $\theta$;
\item[(c)] If $f$ has the following form:
\begin{align}
\label{Eq:f-KL-1/2}
f(x) = \min_{1\leq i\leq r}
\left\{ \frac{1}{2}x^TM_ix+u_i^Tx + c_i + P_i(x)\right\},
\end{align}
where $P_i$ are proper closed polyhedral functions, $M_i$ are symmetric matrices of size $n$, $u_i \in \mathbb{R}^n$ and $c_i \in \mathbb{R}$ for $i=1,\ldots,r$, then ${\cal L}_{\beta}$ is a K{\L} function with an exponent of $\theta =1/2$.
\end{enumerate}
\end{proposition}
Claim (a) can be obtained as follows.
The terms in ${\cal P}_{\mathrm{meal}}$
besides $f$ are polynomial functions, which are both real analytic and semialgebraic
\cite{Bochnak-semialgebraic1998}.
Since $f$ is subanalytic with a closed domain and continuous on its domain, by \cite[Lemma 5]{Zeng-BCD19}, ${\cal P}_{\mathrm{meal}}$ is also subanalytic with a closed domain and continuous on its domain.
By \cite[Theorem 3.1]{Bolte-KL2007a}, ${\cal P}_{\mathrm{meal}}$ is a K{\L} function.
Claim (b) can be verified by applying \cite[Theorem 3.6]{Li-Pong-KLexponent18} to ${\cal P}_{\mathrm{meal}}$.
Claim (c) can be established as follows.
The class of functions $f$ defined by \eqref{Eq:f-KL-1/2} is weakly convex with modulus $\rho = 2 \max_{1\leq i\leq r} \|M_i\|$. According to \cite[Sec. 5.2]{Li-Pong-KLexponent18}, this class covers many nonconvex functions such as SCAD \cite{Fan-SCAD} and MCP \cite{Zhang-MCP} in statistical learning.
By \cite[Corollary 5.2]{Li-Pong-KLexponent18}, the function ${\cal L}_{\beta}(x,\lambda) = \frac{\beta}{2} \|Ax+\beta^{-1}\lambda-b\|^2 + \left(f(x) - \frac{1}{2\beta}\|\lambda\|^2\right)$ is a K{\L} function with an exponent of $1/2$.
More results on the K{\L} functions with exponent $1/2$ can be found in~\cite{Li-Pong-KLexponent18,Yu-Li-Pong-KLexponent21} and the references therein.
\subsection{Convergence of iMEAL}
When considering iMEAL, the Lyapunov functions need to be slightly modified into
\begin{align}
\label{Eq:Lyapunov-seq-iMEAL-S1}
{\cal E}_{\mathrm{imeal}}^k:= {\cal P}_{\beta_k}(x^k,z^k,\lambda^k)+ 3\alpha_k \|z^k-z^{k-1}\|^2, \ \forall k\geq 1,
\end{align}
associated with the implicit Lipschitz subgradient assumption,
and
\begin{align}
\label{Eq:Lyapunov-seq-iMEAL-S2}
\tilde{\cal E}_{\mathrm{imeal}}^k:= {\cal P}_{\beta_k}(x^k,z^k,\lambda^k)+ 4\alpha_k \|z^k-z^{k-1}\|^2, \ \forall k\geq 1,
\end{align}
associated with the implicit bounded subgradient assumption, where $\alpha_k$ is defined in \eqref{Eq:alphak}.
\begin{theorem}[Iteration Complexity of iMEAL]
\label{Theorem:Convergence-iMEAL}
Let Assumptions \ref{Assump:feasibleset} and \ref{Assump:MEAL}(a) hold, $\gamma \in (0,\rho^{-1})$, and $\eta \in (0,2)$.
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by iMEAL \eqref{alg:iMEAL} with $\sum_{k=0}^{\infty} \epsilon_k^2 <\infty$.
The following claims hold:
\begin{enumerate}
\item[(a)]
Set $\beta$
sufficiently large
such that in \eqref{Eq:alpha},
$\alpha < \min\left\{\frac{1-\gamma \rho}{6\gamma(1+\gamma L_f)^2}, \frac{1}{12\gamma}(\frac{2}{\eta}-1)\right\}
$.
Under Assumption \ref{Assump:MEAL}(b), if $\{{\cal E}_{\mathrm{imeal}}^k\}$ is lower bounded, then $\xi^k_{\mathrm{meal}} = o(1/\sqrt{k})$ (cf. \eqref{Eq:measure-MEAL}).
\item[(b)] Pick $K\geq 1$. Set $\{\beta_k\}$ such that in \eqref{Eq:alphak}, $\alpha_k\equiv\frac{\hat{\alpha}^*}{K}$ for some positive constant $\hat{\alpha}^* \leq \min\left\{ \frac{1-\rho \gamma}{8\gamma}, \frac{1}{16\gamma}(\frac{2}{\eta}-1) \right\}$.
Under Assumption \ref{Assump:MEAL}(c), if $\{\tilde{\cal E}_{\mathrm{imeal}}^k\}$ is lower bounded, then
$\xi_{\mathrm{meal}}^K \leq \tilde{c}_2/\sqrt{K}$ for some constant $\tilde{c}_2>0$.
\end{enumerate}
\end{theorem}
By Theorem \ref{Theorem:Convergence-iMEAL}, the iteration complexity of iMEAL is the same as that of MEAL and is also consistent with that of the inexact proximal ALM~\cite{Xie-Wright19} (when the stationarity accuracies $\epsilon_k$ are square summable).
Moreover, if the condition on $\epsilon_k$ is strengthened to $\sum_{k=0}^{\infty} \epsilon_k <+\infty$, as required in the literature~\cite{Rockafellar76-PALM,Wang19}, then, following a proof similar to that of Proposition \ref{Proposition:globalconv-MEAL}, the global convergence and rate results of MEAL also hold for iMEAL under the assumptions of Theorem \ref{Theorem:Convergence-iMEAL}(a) and the K{\L} property.
\section{Convergence of LiMEAL for Composite Objective}
\label{sc:LiMEAL}
This section presents the convergence results of LiMEAL \eqref{alg:LiMEAL} for the constrained problem with a composite objective \eqref{Eq:problem-CP}. The proofs are postponed to Section \ref{sc:main-proofs} below.
Similar to Assumption \ref{Assump:MEAL}, we make the following assumptions.
\begin{assumption}
\label{Assump:LiMEAL}
The objective $f(x)=h(x)+g(x)$ in problem \eqref{Eq:problem-CP} satisfies:
\begin{enumerate}
\item[(a)] $h$ is differentiable and $\nabla h$ is Lipschitz continuous with a constant $L_h>0$;
\item[(b)] $g$ is proper lower-semicontinuous and $\rho_g$-weakly convex; and \textbf{either}
\item[(c)] $g$ has the \textbf{implicit Lipschitz subgradient} property with a constant $L_g>0$; \textbf{or}
\item[(d)] $g$ has the \textbf{implicit bounded subgradient} property with a constant $\hat{L}_g>0$.
\end{enumerate}
In (c) and (d), $L_g$ and $\hat{L}_g$ may depend on $\gamma$.
\end{assumption}
By the update \eqref{alg:LiMEAL} of LiMEAL, some simple derivations show that
\begin{align}
\label{Eq:xk+1-proxform-LiMEAL}
x^{k+1} = \mathrm{Prox}_{\gamma,g}(z^k - \gamma (\nabla h(x^k)+A^T\lambda^{k+1}))
\end{align}
and
\begin{align}
\label{Eq:stationary-LiMEAL}
g_{\mathrm{limeal}}^k:=
\left(
\begin{array}{c}
\gamma^{-1} (z^k - x^{k+1}) + (\nabla h(x^{k+1})-\nabla h(x^k))\\
\beta_k^{-1}(\lambda^{k+1}-\lambda^k)
\end{array}
\right)
\in
\left(
\begin{array}{c}
\partial f(x^{k+1}) + A^T\lambda^{k+1}\\
Ax^{k+1}-b
\end{array}
\right).
\end{align}
Actually, the term $\gamma^{-1}(z^k-x^{k+1})$ represents the \textit{prox-gradient sequence} frequently used in the analysis of algorithms for unconstrained composite optimization (e.g., \cite{Davis-Drusvyatskiy19}).
Thus, let
\begin{align}
\label{Eq:measure-LiMEAL}
\xi_{\mathrm{limeal}}^k := \min_{0\leq t \leq k} \|g_{\mathrm{limeal}}^t\|, \ \forall k\in \mathbb{N},
\end{align}
which can be taken as an effective stationarity measure of LiMEAL for problem \eqref{Eq:problem-CP}.
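To make the measure concrete, the following minimal sketch (the residual norms below are hypothetical inputs, not produced by LiMEAL) maintains the running minimum in \eqref{Eq:measure-LiMEAL} in a single pass:

```python
import math

def running_min_measure(residual_norms):
    """Given ||g^0||, ||g^1||, ..., return xi^k = min_{0<=t<=k} ||g^t|| for each k."""
    xi, best = [], math.inf
    for r in residual_norms:
        best = min(best, r)
        xi.append(best)
    return xi

# Example with a hypothetical non-monotone residual sequence.
norms = [1.0, 0.5, 0.8, 0.3, 0.4]
print(running_min_measure(norms))  # -> [1.0, 0.5, 0.5, 0.3, 0.3]
```

The measure is nonincreasing by construction, which is why bounding $\min_{0\leq t\leq k}\|g_{\mathrm{limeal}}^t\|$ suffices for an iteration-complexity statement.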
In the following, we present the iteration complexity of LiMEAL for problem \eqref{Eq:problem-CP}. Since the update of $x^{k+1}$ in LiMEAL \eqref{alg:LiMEAL} adopts a prox-linear scheme, the proximal term $\|x^k-x^{k-1}\|^2$ should in general be included in the associated Lyapunov functions of LiMEAL, which read as follows:
\begin{align}
\label{Eq:Lyapunov-seq-LiMEAL-S1}
{\cal E}^k_{\mathrm{limeal}}
&:= {\cal P}_{\beta_k}(x^k,z^k,\lambda^k) + 3\alpha_k (\gamma^2L_h^2 \|x^k-x^{k-1}\|^2 + \|z^k - z^{k-1}\|^2)
\end{align}
associated with the \textit{implicit Lipschitz subgradient} assumption, and
\begin{align}
\label{Eq:Lyapunov-seq-LiMEAL-S2}
\tilde{\cal E}_{\mathrm{limeal}}^k:= {\cal P}_{\beta_k}(x^k,z^k,\lambda^k) + 4{\alpha}_{k}(\gamma^2L_h^2 \|x^k-x^{k-1}\|^2 + \|z^{k}-z^{k-1}\|^2),
\end{align}
associated with the \textit{implicit bounded subgradient} assumption,
where $\alpha_k$ is defined in \eqref{Eq:alphak}.
The iteration complexity of MEAL can be similarly generalized to LiMEAL as follows.
\begin{theorem}[Iteration Complexity of LiMEAL]
\label{Theorem:Convergence-LiMEAL}
Take Assumptions \ref{Assump:feasibleset} and \ref{Assump:LiMEAL}(a)-(b). Pick $\eta \in (0,2)$ and $0<\gamma<\frac{2}{(\rho_g+L_h)\left(1+\sqrt{1+\frac{2(2-\eta)\eta L_h^2}{(\rho_g+L_h)^2}} \right)}$.
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by LiMEAL \eqref{alg:LiMEAL}.
The following claims hold:
\begin{enumerate}
\item[(a)] Set $\beta$
sufficiently large such that $\alpha < \min\left\{\frac{1}{12\gamma}(\frac{2}{\eta}-1), \frac{1-\gamma(\rho_g+L_h) - \eta(1-\eta/2)\gamma^2L_h^2}{6\gamma \left((1+\gamma L_g)^2 + \gamma^2 L_h^2 \right)}\right\}$.
Under Assumption \ref{Assump:LiMEAL}(c), if $\{{\cal E}^k_{\mathrm{limeal}}\}$ is lower bounded,
then
$\xi^k_{\mathrm{limeal}} = o(1/\sqrt{k})$.
\item[(b)]
Pick $K\geq 1$. Set $\{\beta_k\}$ such that $\alpha_k \equiv \frac{\bar{\alpha}^*}{K}$ for some positive constant $\bar{\alpha}^*\leq\min\Big\{\frac{1-\gamma(\rho_g+L_h)-\eta(1-\eta/2)\gamma^2 L_h^2}{8\gamma(1+\gamma^2L_h^2)} $, $\frac{1}{16\gamma}\left(\frac{2}{\eta} -1 \right)\Big\}$.
Under Assumption \ref{Assump:LiMEAL}(d), if $\{\tilde{\cal E}^k_{\mathrm{limeal}}\}$ is lower bounded,
then $\xi_{\mathrm{limeal}}^K \leq \tilde{c}_3/\sqrt{K}$ for some constant $\tilde{c}_3>0$.
\end{enumerate}
\end{theorem}
Similar to the discussions following Theorem \ref{Theorem:Convergence-MEAL}, to yield an $\varepsilon$-accurate first-order stationary point, the iteration complexity of LiMEAL is $o(\varepsilon^{-2})$ under the \textit{implicit Lipschitz subgradient} assumption and ${\cal O}(\varepsilon^{-2})$ under the \textit{implicit bounded subgradient} assumption,
as demonstrated by Theorem \ref{Theorem:Convergence-LiMEAL}. The conditions on $\beta$ and $\beta_k$ in these two cases can be derived similarly to \eqref{Eq:cond-beta-MEAL-S1} and \eqref{Eq:cond-beta-MEAL-S2}, respectively.
In the following, we establish the global convergence and rates of LiMEAL under assumptions required for Theorem \ref{Theorem:Convergence-LiMEAL}(a) and the K{\L} property.
Specifically, let
$
\hat{x}^k:= x^{k-1}, \ \hat{z}^k := z^{k-1}, \ {y}^k:= (x^k,z^k,\lambda^k,\hat{x}^k,\hat{z}^k), \ \forall k\geq 1,
$
${y}:= (x,z,\lambda,\hat{x},\hat{z}) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n\times \mathbb{R}^n,$ and
\begin{align}
\label{Eq:Lyapunov-fun-LiMEAL}
{\cal P}_{\mathrm{limeal}}({y}) := {\cal P}_{\beta}(x,z,\lambda) + 4\alpha \left(\|z-\hat{z}\|^2 + \gamma^2 L_h^2 \|x-\hat{x}\|^2\right).
\end{align}
\begin{proposition}[Global convergence and rates of LiMEAL]
\label{Proposition:globalconv-LiMEAL}
Suppose that Assumptions \ref{Assump:feasibleset} and \ref{Assump:LiMEAL}(a)-(c) hold and that the sequence $\{(x^k,z^k,\lambda^k)\}$ generated by LiMEAL \eqref{alg:LiMEAL} is bounded.
If $\gamma \in (0,\frac{1}{\rho_g+L_h})$, $\eta \in (0,2)$, $0<\alpha < \min \left\{\frac{1}{8\gamma}\left(\frac{2}{\eta}-1\right), \frac{1-\gamma(\rho_g+L_h)}{8\gamma \left((1+\gamma L_g)^2+\gamma^2 L_h^2 \right)} \right\}$,
and ${\cal P}_{\mathrm{limeal}}$ satisfies the K{\L} property at some point $y^*:= (x^*,x^*,\lambda^*,x^*,x^*)$ with an exponent of $\theta \in [0,1)$, where $(x^*,\lambda^*)$ is a limit point of $\{(x^k,\lambda^k)\}$, then
\begin{enumerate}
\item[(a)] the whole sequence $\{\hat{y}^k:=(x^k,z^k,\lambda^k)\}$ converges to $\hat{y}^*:=(x^*,x^*,\lambda^*)$; and
\item[(b)] all the rates of convergence results in Proposition \ref{Proposition:globalconv-MEAL}(b) also hold for LiMEAL.
\end{enumerate}
\end{proposition}
\begin{remark}
The results established in this section are more general than those in \cite{Zhang-Luo18}, being obtained under weaker assumptions on $h$ and for a more general class of functions $g$. Specifically, as discussed in Section \ref{sc:relation-existing-methods}, the algorithm studied in \cite{Zhang-Luo18} is a prox-linear version of LiMEAL with $g$ being the indicator function of a box constraint set.
In~\cite{Zhang-Luo18}, global convergence and a linear rate of the proximal inexact ALM were proved for quadratic programming, exploiting the fact that the augmented Lagrangian satisfies the K{\L} inequality with exponent $1/2$.
Besides, the strict complementarity condition required in \cite{Zhang-Luo18} is removed in this paper for LiMEAL.
\end{remark}
\section{Main Proofs}
\label{sc:main-proofs}
In this section, we first prove some lemmas and then present the proofs of our main convergence results.
\subsection{Preliminary Lemmas}
\label{sc:preliminary-lemmas}
\subsubsection{Lemmas on Iteration Complexity and Global Convergence}
The first lemma concerns the convergence speed of a nonnegative sequence $\{\xi_k\}$ satisfying the following relation
\begin{align}
\label{Eq:sequence-fixed}
\tilde{\eta} \xi_k^2 \leq ({\cal E}_k - {\cal E}_{k+1}) + \tilde{\epsilon}_k^2, \ \forall k\in \mathbb{N},
\end{align}
where $\tilde{\eta}>0$, $\{{\cal E}_k\}$ and $\{\tilde{\epsilon}_k\}$ are two nonnegative sequences, and $\sum_{k=1}^{\infty} \tilde{\epsilon}_k^2 < +\infty$.
\begin{lemma}
\label{Lemma:sequence-fixed}
For any sequence $\{\xi_k\}$ satisfying \eqref{Eq:sequence-fixed}, $\tilde{\xi}_k := \min_{1\leq t\leq k} \xi_t = o(1/\sqrt{k})$.
\end{lemma}
\begin{proof}
Summing \eqref{Eq:sequence-fixed} over $k$ from $1$ to $K$ and letting $K\rightarrow +\infty$ yields
\begin{align*}
\sum_{k=1}^{\infty} \xi_k^2 \leq \tilde{\eta}^{-1}\left({\cal E}_1 + \sum_{k=1}^{\infty}\tilde{\epsilon}_k^2\right) <+\infty,
\end{align*}
which implies the desired convergence speed by $\frac{k}{2} \tilde{\xi}_k^2 \leq \sum_{\frac{k}{2}\leq j \leq k} {\xi}_j^2 \rightarrow 0$ as $k\rightarrow \infty$, as proved in \cite[Lemma 1.1]{Deng-parallelADMM17}.
\end{proof}
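A numerical illustration of Lemma \ref{Lemma:sequence-fixed} (a toy instance, assuming $\tilde{\eta}=1$, $\tilde{\epsilon}_k=0$, and the hypothetical choice ${\cal E}_k = 1/k$, so that $\xi_k^2 = {\cal E}_k - {\cal E}_{k+1}$ holds with equality):

```python
import math

# Toy sequence satisfying (Eq:sequence-fixed) with tilde_eta = 1 and
# tilde_eps_k = 0: take E_k = 1/k and xi_k^2 = E_k - E_{k+1} exactly.
def xi(k):
    return math.sqrt(1.0 / k - 1.0 / (k + 1))  # = 1/sqrt(k(k+1))

def xi_min(k):
    # tilde_xi_k = min_{1<=t<=k} xi_t; here xi is decreasing in k.
    return min(xi(t) for t in range(1, k + 1))

# sqrt(k) * tilde_xi_k tends to 0, matching the o(1/sqrt(k)) rate of the lemma.
for k in (10, 100, 1000):
    print(k, math.sqrt(k) * xi_min(k))
```

Here $\sqrt{k}\,\tilde{\xi}_k = 1/\sqrt{k+1}$ exactly, which vanishes as $k\to\infty$.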
Then we provide a lemma on the convergence speed of a nonnegative sequence $\{\xi_k\}$ satisfying, instead of \eqref{Eq:sequence-fixed}, the relation
\begin{align}
\label{Eq:sequence-varying}
\tilde{\eta} \xi_k^2 \leq ({\cal E}_k - {\cal E}_{k+1}) + \tilde{\epsilon}_k^2 + {\alpha}_k \tilde{L}, \ \forall k\in \mathbb{N},
\end{align}
where $\tilde{\eta}>0,$ $\tilde{L}>0$, $\{{\cal E}_k\}$, $\{{\alpha}_k\}$ and $\{\tilde{\epsilon}_k\}$ are nonnegative sequences, and $\sum_{k=1}^{\infty} \tilde{\epsilon}_k^2 < +\infty$.
\begin{lemma}
\label{Lemma:sequence-varying}
Pick $K \ge 1$. Let $\{\xi_k\}$ be a nonnegative sequence satisfying \eqref{Eq:sequence-varying}. Set $\alpha_k \equiv \frac{\tilde{\alpha}}{K}$ for some $\tilde{\alpha}>0$. Then
$\tilde{\xi}_K := \min_{1\leq k\leq K} \xi_k \leq \tilde{c}/\sqrt{K}$ for some constant $\tilde{c}>0$.
\end{lemma}
\begin{proof}
Summing \eqref{Eq:sequence-varying} over $k$ from $1$ to $K$ yields
\begin{align*}
\sum_{k=1}^K \xi_k^2 \leq \frac{{\cal E}_1 + \sum_{k=1}^K \tilde{\epsilon}_k^2 + \tilde{L}\sum_{k=1}^K \alpha_k}{ \tilde{\eta}}.
\end{align*}
From $\sum_{k=1}^{\infty} \tilde{\epsilon}_k^2 < +\infty$ and $\sum_{k=1}^K{\alpha}_k=\tilde{\alpha}$, we get $K\tilde{\xi}_K^2\le \sum_{k=1}^K \xi_k^2\le\frac{{\cal E}_1 + \sum_{k=1}^\infty \tilde{\epsilon}_k^2 + \tilde{L}\tilde{\alpha}}{\tilde{\eta}}<+\infty$.
The result follows with $\tilde{c} := \sqrt{{\cal E}_1 + \sum_{k=1}^\infty \tilde{\epsilon}_k^2 + \tilde{L}\tilde{\alpha}}/\sqrt{{\tilde{\eta}}}$.
\end{proof}
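The bound in Lemma \ref{Lemma:sequence-varying} can also be checked numerically (a minimal sketch with $\tilde{\eta}=1$, $\tilde{\epsilon}_k=0$, and hypothetical values of ${\cal E}_1$, $\tilde{L}$, $\tilde{\alpha}$; the sequence $\{{\cal E}_k\}$ is generated randomly and $\xi_k$ is chosen to attain equality in \eqref{Eq:sequence-varying}):

```python
import math
import random

# Check the O(1/sqrt(K)) bound: build a sequence achieving equality in
# (Eq:sequence-varying) with tilde_eta = 1, tilde_eps_k = 0, alpha_k = tilde_alpha/K.
def check_bound(K, E1=1.0, tilde_L=2.0, tilde_alpha=0.5, seed=0):
    rng = random.Random(seed)
    # A random nonincreasing nonnegative sequence E_1, ..., E_{K+1} starting at E1.
    E = [E1]
    for _ in range(K):
        E.append(E[-1] * rng.uniform(0.5, 1.0))
    xi = [math.sqrt(E[k] - E[k + 1] + tilde_alpha * tilde_L / K) for k in range(K)]
    tilde_xi_K = min(xi)
    c = math.sqrt(E1 + tilde_L * tilde_alpha)  # the constant tilde_c from the proof
    return tilde_xi_K <= c / math.sqrt(K)

print(all(check_bound(K) for K in (1, 10, 100, 1000)))  # -> True
```

The telescoping sum $\sum_{k=1}^K \xi_k^2 = {\cal E}_1 - {\cal E}_{K+1} + \tilde{L}\tilde{\alpha} \leq \tilde{c}^2$ makes the check pass for every $K$.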
In both Lemmas \ref{Lemma:sequence-fixed} and \ref{Lemma:sequence-varying}, the nonnegativity assumption on the sequence $\{{\cal E}_k\}$ can be relaxed to lower boundedness.
The following lemma presents the global convergence and rate of a sequence generated by an algorithm for a nonconvex optimization problem, based on the Kurdyka-{\L}ojasiewicz inequality; the global convergence result is from \cite[Theorem 2.9]{Attouch13}, while the rate results are from \cite[Theorem 5]{Attouch-Bolte09}.
\begin{lemma}[Existing global convergence and rate]
\label{Lemma:existing-global-converg}
Let ${\cal L}$ be a proper, lower semicontinuous function, and $\{u^k\}$ be a sequence that satisfies the following three conditions:
\begin{enumerate}
\item[(P1)]
(\textit{Sufficient decrease condition}) there exists a constant $a_1>0$ such that
${\cal L}(u^{k+1}) + a_1 \|u^{k+1}-u^k\|^2 \leq {\cal L}(u^{k}), \ \forall k\in \mathbb{N};$
\item[(P2)] (\textit{Bounded subgradient condition}) for each $k\in \mathbb{N}$, there exists $v^{k+1} \in \partial {\cal L}(u^{k+1})$ such that $\|v^{k+1}\| \leq a_2\|u^{k+1}-u^k\|$ for some constant $a_2>0$;
\item[(P3)] (\textit{Continuity condition}) there exist a subsequence $\{u^{k_j}\}$ and $\tilde{u}$ such that $u^{k_j} \rightarrow \tilde{u}$ and ${\cal L}(u^{k_j}) \rightarrow {\cal L}(\tilde{u})$ as $j\rightarrow \infty$.
\end{enumerate}
If ${\cal L}$ satisfies the K{\L} inequality at $\tilde{u}$ with an exponent of $\theta$, then
\begin{enumerate}
\item[(1)] $\{u^k\}$ converges to $\tilde{u}$; and
\item[(2)] depending on $\theta$,
(i) if $\theta =0$, then $\{u^k\}$ converges within a finite number of iterations; (ii) if $\theta \in (0,\frac{1}{2}]$, then $\|u^k- \tilde{u}\| \leq c \tau^k$ for all $k\geq k_0$, for certain $k_0>0, c>0, \tau \in (0,1)$; and (iii) if $\theta \in (\frac{1}{2},1)$, then $\|u^k - \tilde{u}\| \leq c k^{-\frac{1-\theta}{2\theta-1}}$ for all $k\geq k_0$, for certain $k_0>0, c>0$.
\end{enumerate}
\end{lemma}
\subsubsection{Lemmas on controlling dual ascent by primal descent}
In the following, we establish several lemmas showing that the dual ascent quantities of the proposed algorithms can be controlled by the primal descent quantities.
\begin{lemma}[MEAL: controlling dual by primal]
\label{Lemma:dual-control-primal-MEAL}
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by MEAL \eqref{alg:MEAL-reformulation}. Take $\gamma \in (0,\rho^{-1})$.
\begin{enumerate}
\item[(a)]
Under Assumptions \ref{Assump:feasibleset}, \ref{Assump:MEAL}(a), and \ref{Assump:MEAL}(b), we have for any $k\geq 1$,
\begin{align}
&\|A^T(\lambda^{k+1}-\lambda^{k})\| \leq (L_f+\gamma^{-1})\|x^{k+1}-x^k\|+\gamma^{-1}\|z^k-z^{k-1}\|, \label{Eq:A-lambda-MEAL}\\
&\|\lambda^{k+1}-\lambda^{k}\|^2 \leq 2c_{\gamma,A}^{-1}\left[(\gamma L_f+1)^2\|x^{k+1}-x^k\|^2+\|z^k-z^{k-1}\|^2\right], \label{Eq:lambda-MEAL}
\end{align}
where $c_{\gamma,A} = \gamma^2 \tilde{\sigma}_{\min}(A^TA)$.
\item[(b)]
Alternatively, under Assumptions \ref{Assump:feasibleset}, \ref{Assump:MEAL}(a), and \ref{Assump:MEAL}(c), we have for any $k\geq 1$,
\begin{align}
\|\lambda^{k+1}-\lambda^{k}\|^2 \leq 3c_{\gamma,A}^{-1}\left[4\gamma^2 \hat{L}^2_f + \|x^{k+1}-x^k\|^2+\|z^k-z^{k-1}\|^2 \right]. \label{Eq:lambda-MEAL-S2}
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
The update \eqref{alg:MEAL-reformulation} of $x^{k+1}$ implies
\begin{align*}
x^{k+1} = \argmin_x \left\{f(x)+\langle \lambda^{k}, Ax-b \rangle + \frac{\beta_k}{2}\|Ax-b\|^2 + \frac{1}{2\gamma} \|x-z^k\|^2 \right\}.
\end{align*}
Its optimality condition and the update \eqref{alg:MEAL-reformulation} of $\lambda^{k+1}$ in MEAL together give us
\begin{align}
\label{Eq:optcond-x}
0 \in \partial \left(f+\frac{1}{2\gamma}\|\cdot-(z^k-\gamma A^T\lambda^{k+1})\|^2\right)(x^{k+1}).
\end{align}
Let $w^{k+1}:= z^k-\gamma A^T\lambda^{k+1}, \ \forall k\in \mathbb{N}.$
The above inclusion implies
\begin{align}
\label{Eq:xk+1-prox-wk+1}
x^{k+1} = \mathrm{Prox}_{\gamma,f}(w^{k+1}),
\end{align}
and thus by \eqref{Eq:Moreau-gradient},
\begin{align}
\label{Eq:A-lambda}
A^T\lambda^{k+1}
&= -\nabla {\cal M}_{\gamma,f}(w^{k+1})-\gamma^{-1}(x^{k+1}-z^k),
\end{align}
which further implies
\begin{align*}
\|A^T(\lambda^{k+1}-\lambda^{k})\|
&=\|(\nabla {\cal M}_{\gamma,f}(w^{k+1})-\nabla {\cal M}_{\gamma,f}(w^{k})) + \gamma^{-1}(x^{k+1}-x^k) - \gamma^{-1}(z^k-z^{k-1})\|.
\end{align*}
\textbf{(a)}
With Assumption \ref{Assump:MEAL}(b), the above equality yields
\begin{align*}
\|A^T(\lambda^{k+1}-\lambda^{k})\| \leq (L_f+\gamma^{-1})\|x^{k+1}-x^k\|+\gamma^{-1}\|z^k-z^{k-1}\|,
\end{align*}
which leads to \eqref{Eq:A-lambda-MEAL}.
By Assumption \ref{Assump:feasibleset} and the relation $\lambda^{k+1}-\lambda^k = \beta_k(Ax^{k+1}-b)$, $(\lambda^{k+1}-\lambda^k) \in \mathrm{Im}(A)$. Thus, from the above inequality, we deduce
\begin{align*}
\|\lambda^{k+1}-\lambda^{k}\| \leq \tilde{\sigma}_{\min}^{-1/2}(A^TA)\left[(L_f+\gamma^{-1})\|x^{k+1}-x^k\|+\gamma^{-1}\|z^k-z^{k-1}\| \right],
\end{align*}
and, further by $(u+v)^2 \leq 2(u^2+v^2)$ for any $u,v\in \mathbb{R}$,
\begin{align*}
\|\lambda^{k+1}-\lambda^{k}\|^2 \leq 2\tilde{\sigma}_{\min}^{-1}(A^TA)\left[(L_f+\gamma^{-1})^2\|x^{k+1}-x^k\|^2+\gamma^{-2}\|z^k-z^{k-1}\|^2\right].
\end{align*}
This is exactly \eqref{Eq:lambda-MEAL}, since $c_{\gamma,A}=\gamma^2\tilde{\sigma}_{\min}(A^TA)$.
\textbf{(b)}
From Assumption \ref{Assump:MEAL}(c), we have
\begin{align*}
\|A^T(\lambda^{k+1}-\lambda^{k})\| \leq 2\hat{L}_f+\gamma^{-1}(\|x^{k+1}-x^k\|+\|z^k-z^{k-1}\|),
\end{align*}
which implies
\begin{align*}
\|\lambda^{k+1}-\lambda^{k}\| \leq \tilde{\sigma}_{\min}^{-1/2}(A^TA)\left[2\hat{L}_f+\gamma^{-1}(\|x^{k+1}-x^k\|+\|z^k-z^{k-1}\|) \right],
\end{align*}
and further by $(a+c+d)^2 \leq 3(a^2+c^2+d^2)$ for any $a,c,d\in \mathbb{R}$,
\begin{align*}
\|\lambda^{k+1}-\lambda^{k}\|^2 \leq 3\tilde{\sigma}_{\min}^{-1}(A^TA)\left[4\hat{L}^2_f + \gamma^{-2}(\|x^{k+1}-x^k\|^2+\|z^k-z^{k-1}\|^2)\right].
\end{align*}
This is exactly \eqref{Eq:lambda-MEAL-S2}, since $c_{\gamma,A}=\gamma^2\tilde{\sigma}_{\min}(A^TA)$.
\end{proof}
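The analytic ingredient of the proof is the Moreau-envelope gradient identity \eqref{Eq:Moreau-gradient}. As a sanity check, the sketch below verifies it numerically for the one-dimensional convex choice $f=|\cdot|$ (so $\rho=0$ and any $\gamma>0$ is admissible), whose proximity operator is soft-thresholding; the value of $\gamma$ and the test points are arbitrary:

```python
import math

gamma = 0.7  # any gamma > 0 works since f = |.| is convex (rho = 0)

def prox_abs(w, g=gamma):
    # Prox_{gamma,f} for f(x) = |x| is soft-thresholding.
    return math.copysign(max(abs(w) - g, 0.0), w)

def moreau_abs(w, g=gamma):
    # M_{gamma,f}(w) = min_x { |x| + |x - w|^2 / (2 gamma) }, the Huber function.
    x = prox_abs(w, g)
    return abs(x) + (x - w) ** 2 / (2 * g)

# Check grad M_{gamma,f}(w) = gamma^{-1} (w - Prox_{gamma,f}(w))
# by central finite differences, away from the kinks at |w| = gamma.
h = 1e-6
for w in (-2.0, -0.3, 0.0, 0.5, 1.9):
    fd = (moreau_abs(w + h) - moreau_abs(w - h)) / (2 * h)
    closed = (w - prox_abs(w)) / gamma
    assert abs(fd - closed) < 1e-5, (w, fd, closed)
print("identity verified")
```

This is precisely the identity used to pass from \eqref{Eq:xk+1-prox-wk+1} to \eqref{Eq:A-lambda}.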
A similar lemma holds for iMEAL, as follows.
\begin{lemma}[iMEAL: controlling dual by primal]
\label{Lemma:dual-control-primal-iMEAL}
Let $(x^k,z^k,\lambda^k)$ be a sequence generated by iMEAL \eqref{alg:iMEAL}. Take $\gamma \in (0,\rho^{-1})$.
\begin{enumerate}
\item[(a)]
Under Assumptions \ref{Assump:feasibleset}, \ref{Assump:MEAL}(a), and \ref{Assump:MEAL}(b), for any $k\geq 1$,
\begin{align*}
\|\lambda^{k+1}-\lambda^{k}\|^2 \leq 3 c_{\gamma,A}^{-1}\left[(\gamma L_f+1)^2\|x^{k+1}-x^k\|^2+\|z^k-z^{k-1}\|^2 + \gamma^2(\epsilon_k + \epsilon_{k-1})^2\right].
\end{align*}
\item[(b)]
Alternatively, under Assumptions \ref{Assump:feasibleset}, \ref{Assump:MEAL}(a), and \ref{Assump:MEAL}(c), for any $k\geq 1$,
\begin{align*}
\|\lambda^{k+1}-\lambda^{k}\|^2 \leq 4c_{\gamma,A}^{-1}\left[4\gamma^2\hat{L}^2_f + \|x^{k+1}-x^k\|^2+\|z^k-z^{k-1}\|^2 + \gamma^2(\epsilon_k + \epsilon_{k-1})^2\right].
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{Lemma:dual-control-primal-MEAL}, but with \eqref{Eq:optcond-x} being replaced by
\begin{align*}
0 \in \partial \left(f+\frac{1}{2\gamma}\left\|\cdot-\Big(z^k-\gamma (A^T\lambda^{k+1}-s^k)\Big)\right\|^2\right)(x^{k+1}),
\end{align*}
and thus $w^{k+1}:= z^k-\gamma (A^T\lambda^{k+1}-s^k).$
\end{proof}
\begin{lemma}[LiMEAL: controlling dual by primal]
\label{Lemma:dual-control-primal-LiMEAL}
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by LiMEAL \eqref{alg:LiMEAL}.
Take $\gamma \in (0,\rho_g^{-1})$.
\begin{enumerate}
\item[(a)]
Under Assumptions \ref{Assump:feasibleset}, \ref{Assump:LiMEAL}(a)-(b), and \ref{Assump:LiMEAL}(c), for any $k\geq 1$,
\begin{align}
&\|A^T(\lambda^{k+1}-\lambda^{k})\| \label{Eq:A-lambda-LiMEAL}\\
&\leq (L_g+\gamma^{-1})\|x^{k+1}-x^k\|+L_h\|x^k-x^{k-1}\|+\gamma^{-1}\|z^k-z^{k-1}\|, \nonumber\\
&\|\lambda^{k+1}-\lambda^{k}\|^2 \label{Eq:lambda-LiMEAL}\\
&\leq 3c_{\gamma,A}^{-1}\left[(\gamma L_g+1)^2\|x^{k+1}-x^k\|^2+\gamma^2 L_h^2\|x^k-x^{k-1}\|^2+ \|z^k-z^{k-1}\|^2\right]. \nonumber
\end{align}
\item[(b)]
Alternatively, under Assumptions \ref{Assump:feasibleset}, \ref{Assump:LiMEAL}(a)-(b), and \ref{Assump:LiMEAL}(d), for any $k\geq 1$,
\begin{align}
&\|\lambda^{k+1}-\lambda^{k}\|^2 \label{Eq:lambda-LiMEAL-S2}\\
&\leq 4c_{\gamma,A}^{-1}\left[4\gamma^2\hat{L}^2_g + \|x^{k+1}-x^k\|^2+ \gamma^2 L_h^2 \|x^k-x^{k-1}\|^2 +\|z^k-z^{k-1}\|^2\right]. \nonumber
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is also similar to that of Lemma \ref{Lemma:dual-control-primal-MEAL}, but \eqref{Eq:optcond-x} needs to be modified to
\begin{align*}
0 \in \partial \left(g+\frac{1}{2\gamma}\|\cdot-\Big(z^k-\gamma (A^T\lambda^{k+1}+\nabla h(x^k))\Big)\|^2\right)(x^{k+1}),
\end{align*}
and thus $w^{k+1}:= z^k-\gamma (A^T\lambda^{k+1}+\nabla h(x^k)).$
\end{proof}
\subsubsection{Lemmas on One-step Progress}
Here, we provide several lemmas characterizing the progress achieved by a single iteration of the proposed algorithms.
\begin{lemma}[MEAL: one-step progress]
\label{Lemma:1-step-progress-MEAL}
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by MEAL \eqref{alg:MEAL}. Take Assumption \ref{Assump:MEAL}(a), $\gamma \in (0,\rho^{-1})$, and $\eta\in (0,2)$. Then for any $k\in \mathbb{N}$,
\begin{align}
\label{Eq:1-step-progress-MEAL}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{(1-\gamma\rho)}{2\gamma}\|x^{k+1}-x^k\|^2 \\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2
+\frac{1}{4}\gamma \eta(2-\eta)\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2
-\alpha_k c_{\gamma,A}\|\lambda^{k+1}-\lambda^k\|^2, \nonumber
\end{align}
where $\alpha_k$ is presented in \eqref{Eq:alphak}
and $c_{\gamma,A} = \gamma^2 \tilde{\sigma}_{\min}(A^TA)$.
\end{lemma}
\begin{proof}
By the update \eqref{alg:MEAL-reformulation} of $x^{k+1}$ in MEAL, $x^{k+1}$ minimizes the function ${\cal P}_{\beta_k} (x,z^k,\lambda^{k})$, which is strongly convex in $x$ with modulus at least $\gamma^{-1}-\rho$; hence
\begin{align}
\label{Eq:x-descent}
{\cal P}_{\beta_k}(x^k,z^k,\lambda^{k}) - {\cal P}_{\beta_k}(x^{k+1},z^k,\lambda^{k}) \geq \frac{\gamma^{-1}-\rho}{2} \|x^{k+1}-x^k\|^2.
\end{align}
Next, recall from \eqref{alg:MEAL-reformulation} that $z^{k+1} = z^k + \eta(x^{k+1}-z^k)$, which implies
\begin{align}
\label{Eq:zk+zk+1}
2x^{k+1}-z^k-z^{k+1} = (2\eta^{-1}-1)(z^{k+1}-z^k).
\end{align}
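In detail, \eqref{Eq:zk+zk+1} follows by expressing $x^{k+1} = z^k + \eta^{-1}(z^{k+1}-z^k)$ and substituting:
\begin{align*}
2x^{k+1}-z^k-z^{k+1} = 2\eta^{-1}(z^{k+1}-z^k) + (z^k-z^{k+1}) = (2\eta^{-1}-1)(z^{k+1}-z^k).
\end{align*}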
So we have
\begin{align*}
&{\cal P}_{\beta_k}(x^{k+1},z^k,\lambda^{k}) - {\cal P}_{\beta_k}(x^{k+1},z^{k+1},\lambda^{k})
=\frac{1}{2\gamma}(\|x^{k+1}-z^k\|^2 - \|x^{k+1}-z^{k+1}\|^2)\\
&=\frac{1}{2\gamma} \langle z^{k+1}-z^k, 2x^{k+1}-z^k-z^{k+1}\rangle
=\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2.
\end{align*}
Moreover, by the update $\lambda^{k+1}=\lambda^k +\beta_k(Ax^{k+1}-b)$, we have
\begin{align*}
{\cal P}_{\beta_k}(x^{k+1},z^{k+1},\lambda^k) - {\cal P}_{\beta_k}(x^{k+1},z^{k+1},\lambda^{k+1}) = -\beta_k^{-1}\|\lambda^{k+1}-\lambda^k\|^2,
\end{align*}
and
\begin{align*}
{\cal P}_{\beta_k}(x^{k+1},z^{k+1},\lambda^{k+1}) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1}) = \frac{\beta_k-\beta_{k+1}}{2\beta_k^2}\|\lambda^{k+1}-\lambda^k\|^2.
\end{align*}
Combining the above four terms of estimates yields
\begin{align}
\label{Eq:primal-descent-MEAL}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1}) \\
&\geq \frac{(1-\rho\gamma)}{2\gamma}\|x^{k+1}-x^k\|^2 + \frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 -\frac{\beta_k+\beta_{k+1}}{2\beta_k^2}\|\lambda^{k+1}-\lambda^k\|^2. \nonumber
\end{align}
Then, we establish \eqref{Eq:1-step-progress-MEAL} from \eqref{Eq:primal-descent-MEAL}.
By the definition \eqref{Eq:stationary-MEAL} of $\nabla \phi_{\beta_k}(z^k,\lambda^k)$, we have
\begin{align*}
\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2 = (\eta \gamma)^{-2} \|z^k - z^{k+1}\|^2 + \beta_k^{-2} \|\lambda^{k+1}-\lambda^k\|^2,
\end{align*}
which implies
\begin{align*}
(\eta \gamma)^{-2} \|z^k - z^{k+1}\|^2 = \|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2 - \beta_k^{-2} \|\lambda^{k+1}-\lambda^k\|^2.
\end{align*}
Substituting this into \eqref{Eq:primal-descent-MEAL} yields
\begin{align*}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{(1-\gamma\rho)}{2\gamma}\|x^{k+1}-x^k\|^2 \nonumber \\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2
+\frac{1}{4}\gamma \eta(2-\eta)\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2
-\alpha_k c_{\gamma,A}\|\lambda^{k+1}-\lambda^k\|^2,
\end{align*}
where $\alpha_k = \frac{\beta_k+\beta_{k+1}+\gamma \eta(1-\eta/2)}{2c_{\gamma,A}\beta_k^2}$.
This finishes the proof.
\end{proof}
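The elementary identity underlying the proof (the polarization step combined with \eqref{Eq:zk+zk+1}) can be sanity-checked numerically; the sketch below uses random vectors and arbitrary step sizes $\eta\in(0,2)$:

```python
import random

# Numerical check of the algebra in the proof: with z_next = z + eta*(x - z),
#   ||x - z||^2 - ||x - z_next||^2 = (2/eta - 1) * ||z_next - z||^2.
def check(n=5, eta=0.8, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    z = [rng.gauss(0, 1) for _ in range(n)]
    z_next = [zi + eta * (xi - zi) for xi, zi in zip(x, z)]
    sq = lambda u: sum(ui * ui for ui in u)
    lhs = sq([a - b for a, b in zip(x, z)]) - sq([a - b for a, b in zip(x, z_next)])
    rhs = (2.0 / eta - 1.0) * sq([a - b for a, b in zip(z_next, z)])
    return abs(lhs - rhs) < 1e-10

print(all(check(eta=e, seed=s) for e in (0.1, 0.8, 1.5, 1.9) for s in range(3)))  # -> True
```

Both sides equal $\eta(2-\eta)\|x-z\|^2$, so the check holds for every $\eta\in(0,2)$.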
Next, we provide a lemma for iMEAL \eqref{alg:iMEAL}.
\begin{lemma}[iMEAL: one-step progress]
\label{Lemma:1-step-progress-iMEAL}
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by iMEAL \eqref{alg:iMEAL}.
Take Assumptions \ref{Assump:MEAL}(a) and (b), $\gamma \in (0,\rho^{-1})$, and $\eta \in (0,2)$. It holds that
\begin{align}
\label{Eq:1-step-progress-iMEAL}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\\
&\geq \frac{(1-\gamma\rho)}{2\gamma}\|x^{k+1}-x^k\|^2 +\langle s^k, x^k - x^{k+1} \rangle
+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 \nonumber \\
&+\frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2
-\alpha_k c_{\gamma,A}\|\lambda^{k+1}-\lambda^k\|^2, \ \forall k\in \mathbb{N}. \nonumber
\end{align}
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{Lemma:1-step-progress-MEAL}, differing only in the descent estimate along the update of $x^{k+1}$.
By the update \eqref{alg:iMEAL} of $x^{k+1}$ in iMEAL and noticing that ${\cal L}_{\beta_k}(x,\lambda^k)+ \frac{\|x-z^k\|^2}{2\gamma}$ is strongly convex with modulus at least $\gamma^{-1}-\rho$, we have
\begin{align*}
{\cal P}_{\beta_k}(x^k,z^k,\lambda^k)
& \geq {\cal P}_{\beta_k}(x^{k+1},z^k,\lambda^k)+\langle s^k, x^k - x^{k+1} \rangle+ \frac{\gamma^{-1}-\rho}{2}\|x^{k+1}-x^k\|^2.
\end{align*}
By replacing \eqref{Eq:x-descent} in the proof of Lemma \ref{Lemma:1-step-progress-MEAL} with the above inequality and following the rest of that proof, we obtain
\begin{align*}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{1-\gamma\rho}{2\gamma}\|x^{k+1}-x^k\|^2 +\langle s^k, x^k - x^{k+1} \rangle \nonumber \\
&+ \frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 -\frac{\beta_k+\beta_{k+1}}{2\beta_k^2}\|\lambda^{k+1}-\lambda^k\|^2.
\end{align*}
We can establish \eqref{Eq:1-step-progress-iMEAL} with a derivation similar to that in the proof of Lemma \ref{Lemma:1-step-progress-MEAL}.
\end{proof}
Also, we state a similar lemma for one-step progress of LiMEAL \eqref{alg:LiMEAL} as follows.
\begin{lemma}[LiMEAL: one-step progress]
\label{Lemma:1-step-progress-LiMEAL}
Let $\{(x^k,z^k,\lambda^k)\}$ be a sequence generated by LiMEAL \eqref{alg:LiMEAL}.
Take Assumptions \ref{Assump:LiMEAL}(a) and (b), $\gamma \in (0,\rho_g^{-1})$, and $\eta \in (0,2)$. We have
\begin{align}
\label{Eq:1-step-progress-LiMEAL}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\\
&\geq \left(\frac{1-\gamma(\rho_g+L_h)}{2\gamma} - \frac{1}{4}\gamma (2-\eta)\eta L_h^2\right)\|x^{k+1}-x^k\|^2 \nonumber\\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2
+ \frac{1}{4}\gamma (1-\eta/2)\eta \|g_{\mathrm{limeal}}^k\|^2- \alpha_k c_{\gamma,A}\|\lambda^{k+1}-\lambda^k\|^2, \ \forall k\in \mathbb{N}. \nonumber
\end{align}
\end{lemma}
\begin{proof}
The proof of this lemma is similar to that of Lemma \ref{Lemma:1-step-progress-MEAL}.
By the update \eqref{alg:LiMEAL} of $x^{k+1}$ in LiMEAL,
$x^{k+1}$ minimizes the $(\gamma^{-1}-\rho_g)$-strongly convex function ${\cal L}_{\beta_k,f^k}(x,\lambda^k)+ \frac{\|x-z^k\|^2}{2\gamma}$, so
\begin{align*}
{\cal L}_{\beta_k,f^k}(x^k,\lambda^k)+ \frac{\|x^k-z^k\|^2}{2\gamma}
\geq {\cal L}_{\beta_k,f^k}(x^{k+1},\lambda^k)+ \frac{\|x^{k+1}-z^k\|^2}{2\gamma} + \frac{\gamma^{-1}-\rho_g}{2}\|x^{k+1}-x^k\|^2.
\end{align*}
\end{align*}
By definition, ${\cal L}_{\beta_k,f^k}(x,\lambda) = h(x^k)+\langle \nabla h(x^k),x-x^k\rangle + g(x) + \langle \lambda, Ax-b\rangle + \frac{\beta_k}{2}\|Ax-b\|^2$ and ${\cal P}_{\beta_k}(x,z,\lambda) = h(x)+g(x)+ \langle \lambda, Ax-b\rangle + \frac{\beta_k}{2}\|Ax-b\|^2+\frac{\|x-z\|^2}{2\gamma}$, so the above inequality implies
\begin{align*}
{\cal P}_{\beta_k}(x^k,z^k,\lambda^k)
& \geq {\cal P}_{\beta_k}(x^{k+1},z^k,\lambda^k)+ \frac{\gamma^{-1}-\rho_g}{2}\|x^{k+1}-x^k\|^2 \nonumber\\
&- (h(x^{k+1}) - h(x^k) - \langle \nabla h(x^k),x^{k+1}-x^k\rangle) \nonumber\\
&\geq {\cal P}_{\beta_k}(x^{k+1},z^k,\lambda^k)+ \frac{\gamma^{-1}-\rho_g-L_h}{2}\|x^{k+1}-x^k\|^2,
\end{align*}
where the second inequality is due to the $L_h$-Lipschitz continuity of $\nabla h$.
By replacing \eqref{Eq:x-descent} in the proof of Lemma \ref{Lemma:1-step-progress-MEAL} with the above inequality and following the rest of that proof, we obtain
\begin{align}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1}) \label{Eq:primal-descent-LiMEAL}\\
&\geq \frac{1-\gamma(\rho_g+L_h)}{2\gamma}\|x^{k+1}-x^k\|^2 + \frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 -\frac{\beta_k+\beta_{k+1}}{2\beta_k^2}\|\lambda^{k+1}-\lambda^k\|^2. \nonumber
\end{align}
Next, based on the above inequality, we establish \eqref{Eq:1-step-progress-LiMEAL}.
By the definition \eqref{Eq:stationary-LiMEAL} of $g_{\mathrm{limeal}}^k$ and noticing that $z^k-x^{k+1} = -\eta^{-1}(z^{k+1}-z^k) $ by the update \eqref{alg:LiMEAL} of $z^{k+1}$, we have
\[
\|g_{\mathrm{limeal}}^k\|^2 \leq 2L_h^2 \|x^{k+1}-x^k\|^2 + 2 (\gamma\eta)^{-2}\|z^{k+1}-z^k\|^2 + \beta_k^{-2}\|\lambda^{k+1}-\lambda^k\|^2,
\]
which implies
\[
(\gamma\eta)^{-2}\|z^{k+1}-z^k\|^2 \geq \frac{1}{2} \|g_{\mathrm{limeal}}^k\|^2 - \frac{1}{2}\beta_k^{-2}\|\lambda^{k+1}-\lambda^k\|^2 - L_h^2 \|x^{k+1}-x^k\|^2.
\]
Substituting this inequality into \eqref{Eq:primal-descent-LiMEAL} yields
\begin{align*}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\\
&\geq \left(\frac{1-\gamma(\rho_g+L_h)}{2\gamma} - \frac{1}{4}\gamma (2-\eta)\eta L_h^2\right)\|x^{k+1}-x^k\|^2 \\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2
+ \frac{1}{4}\gamma (1-\eta/2)\eta \|g_{\mathrm{limeal}}^k\|^2 - \alpha_k c_{\gamma,A}\|\lambda^{k+1}-\lambda^k\|^2.
\end{align*}
This finishes the proof of this lemma.
\end{proof}
\subsection{Proofs for Convergence of MEAL}
Based on the above lemmas, we give proofs of Theorem \ref{Theorem:Convergence-MEAL} and Proposition \ref{Proposition:globalconv-MEAL}.
\subsubsection{Proof of Theorem \ref{Theorem:Convergence-MEAL}}
\begin{proof}
We first establish the $o(1/\sqrt{k})$ rate of convergence under the \textit{implicit Lipschitz subgradient} assumption (Assumption \ref{Assump:MEAL}(b)) and then the convergence rate result under the \textit{implicit bounded subgradient} assumption (Assumption \ref{Assump:MEAL}(c)).
\textbf{(a)}
In the first case, $\beta_k = \beta$ and $\alpha_k = \alpha$.
Substituting \eqref{Eq:lambda-MEAL} into \eqref{Eq:1-step-progress-MEAL} yields
\begin{align*}
&{\cal P}_{\beta}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta}(z^k,\lambda^k)\|^2\\
&+\left(\frac{(1-\gamma\rho)}{2\gamma} - 2\alpha (1+\gamma L_f)^2 \right)\|x^{k+1}-x^k\|^2
+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 \\
&- 2\alpha \|z^k-z^{k-1}\|^2.
\end{align*}
By the definition \eqref{Eq:Lyapunov-seq-MEAL-S1} of ${\cal E}_{\mathrm{meal}}^k$, the above inequality implies
\begin{align}
\label{Eq:descent-MEAL-S1*}
{\cal E}_{\mathrm{meal}}^k - {\cal E}_{\mathrm{meal}}^{k+1}
&\geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta}(z^k,\lambda^k)\|^2 + \left(\frac{1}{4\gamma}(\frac{2}{\eta}-1) -3\alpha \right)\|z^{k+1}-z^k\|^2 \nonumber\\
&+\left(\frac{1-\gamma\rho}{2\gamma} - 2\alpha (1+\gamma L_f)^2 \right)\|x^{k+1}-x^k\|^2 \\
&\geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta}(z^k,\lambda^k)\|^2, \nonumber
\end{align}
where the second inequality holds due to the condition on $\alpha$.
Thus, claim (a) follows from the above inequality, Lemma \ref{Lemma:sequence-fixed} with $\tilde{\epsilon}_k=0$ and the lower boundedness of $\{{\cal E}_{\mathrm{meal}}^k\}$.
\textbf{(b)}
Similarly,
substituting \eqref{Eq:lambda-MEAL-S2} into \eqref{Eq:1-step-progress-MEAL} and using the definition \eqref{Eq:Lyapunov-seq-MEAL-S2} of $\tilde{\cal E}_{\mathrm{meal}}^k$, we have
\begin{align*}
&\tilde{\cal E}_{\mathrm{meal}}^k - \tilde{\cal E}_{\mathrm{meal}}^{k+1} \geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2 - 12 \alpha_k \gamma^2 \hat{L}_f^2\\
&+\left(\frac{1-\gamma\rho}{2\gamma} - 3\alpha_k \right)\|x^{k+1}-x^k\|^2
+ \left(\frac{1}{4\gamma}(\frac{2}{\eta}-1) - 4\alpha_{k+1} \right)\|z^{k+1}-z^k\|^2.
\end{align*}
With $\alpha_k = \frac{\alpha^*}{K}$,
\begin{align*}
&\tilde{\cal E}_{\mathrm{meal}}^k - \tilde{\cal E}_{\mathrm{meal}}^{k+1}
\geq \frac{1}{2}\gamma (1-\eta/2)\eta\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2 - 12 \alpha_k \gamma^2 \hat{L}_f^2,
\end{align*}
which yields claim (b) by Lemma \ref{Lemma:sequence-varying} with $\tilde{\epsilon}_k=0$ and the lower boundedness of $\{\tilde{\cal E}_{\mathrm{meal}}^k\}$.
\end{proof}
\subsubsection{Proof of Proposition \ref{Proposition:globalconv-MEAL}}
\begin{proof}
With Lemma \ref{Lemma:existing-global-converg}, we only need to check that conditions $(P1)$-$(P3)$ hold for MEAL.
\textbf{(a) Establishing $(P1)$}: With $a:= \frac{\gamma \eta(2-\eta)}{4\beta}$, we have $\frac{1+a}{\beta c_{\gamma,A}}=\alpha$ for $\alpha$ in \eqref{Eq:alpha}.
Substituting \eqref{Eq:lambda-MEAL} into \eqref{Eq:primal-descent-MEAL} with fixed $\beta_k$ yields
\begin{align*}
&{\cal P}_{\beta}(x^k,z^k,\lambda^k) - {\cal P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1}) \geq (\frac{1-\rho\gamma}{2\gamma}-2\alpha (\gamma L_f+1)^2)\|x^{k+1}-x^k\|^2\\
& + \frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 - 2\alpha \|z^k-z^{k-1}\|^2 + a\beta^{-1}\|\lambda^{k+1}-\lambda^k\|^2.
\end{align*}
By the definition \eqref{Eq:Lyapunov-fun-MEAL} of ${\cal P}_{\mathrm{meal}}$ and the assumption on $\alpha$, we deduce from the above inequality that
\begin{align}
\label{Eq:sufficient-descent-MEAL}
&{\cal P}_{\mathrm{meal}}(y^k) - {\cal P}_{\mathrm{meal}}(y^{k+1})\geq (\frac{1-\rho\gamma}{2\gamma}-2\alpha (\gamma L_f+1)^2)\|x^{k+1}-x^k\|^2 \nonumber\\
& + \left( \frac{1}{2\gamma}(\frac{2}{\eta}-1) - 3\alpha \right)\|z^{k+1}-z^k\|^2 + \alpha \|z^k-z^{k-1}\|^2 + a\beta^{-1}\|\lambda^{k+1}-\lambda^k\|^2 \nonumber\\
&\geq c_1\|y^{k+1}-y^k\|^2,
\end{align}
where $c_1:= \min\left\{\frac{1-\rho\gamma}{2\gamma}-2\alpha (\gamma L_f +1)^2, \alpha, a\beta^{-1}\right\}$ by $\frac{1}{2\gamma}(\frac{2}{\eta}-1) -3\alpha \geq \alpha$. This yields $(P1)$ for MEAL.
\textbf{(b) Establishing $(P2)$}:
Note that ${\cal P}_{\mathrm{meal}}(y) = f(x)+\langle \lambda,Ax-b\rangle + \frac{\beta}{2}\|Ax-b\|^2+\frac{1}{2\gamma}\|x-z\|^2 + 3\alpha \|z - \hat{z}\|^2$.
The optimality condition from the update of $x^{k+1}$ in \eqref{alg:MEAL-reformulation} is
\begin{align*}
0\in \partial f(x^{k+1}) + A^T\lambda^{k+1} + \gamma^{-1}(x^{k+1}-z^k),
\end{align*}
which implies $\gamma^{-1}(z^k - z^{k+1}) + A^T(\lambda^{k+1}-\lambda^{k}) \in \partial_x {\cal P}_{\mathrm{meal}}(y^{k+1}).$
From the update of $z^{k+1}$ in \eqref{alg:MEAL-reformulation}, $z^{k+1}-x^{k+1}=-(1-\eta)\eta^{-1}(z^{k+1}-z^k)$ and thus
\begin{align*}
\partial_z {\cal P}_{\mathrm{meal}}(y^{k+1})
= \gamma^{-1}(z^{k+1}-x^{k+1}) + 6\alpha (z^{k+1}-z^k)
=\left(6\alpha - \frac{1-\eta}{\eta\gamma}\right)(z^{k+1}-z^k).
\end{align*}
The update of $\lambda^{k+1}$ in \eqref{alg:MEAL-reformulation} yields $\partial_\lambda {\cal P}_{\mathrm{meal}}(y^{k+1}) = Ax^{k+1}-b = \beta^{-1}(\lambda^{k+1}-\lambda^k).$
Moreover, it is easy to show $\partial_{\hat{z}} {\cal P}_{\mathrm{meal}}(y^{k+1}) = 6\alpha (z^{k}-z^{k+1}).$
Thus, let
\[
v^{k+1}:=
\left(
\begin{array}{c}
\gamma^{-1}(z^k - z^{k+1}) + A^T(\lambda^{k+1}-\lambda^{k})\\
\left(6\alpha - \frac{1-\eta}{\eta\gamma}\right)(z^{k+1}-z^k)\\
\beta^{-1}(\lambda^{k+1}-\lambda^k)\\
6\alpha (z^{k}-z^{k+1})
\end{array}
\right),
\]
which obeys $v^{k+1} \in \partial {\cal P}_{\mathrm{meal}}(y^{k+1})$ and
\begin{align*}
\|v^{k+1}\|
&\leq \left(\gamma^{-1}+\left|6\alpha - \frac{1-\eta}{\eta\gamma} \right| + 6\alpha \right)\|z^{k+1}-z^k\| + \beta^{-1}\|\lambda^{k+1}-\lambda^k\| + \|A^T(\lambda^{k+1}-\lambda^k)\|\\
&\leq \left(\gamma^{-1}+\left|6\alpha - \frac{1-\eta}{\eta\gamma} \right| + 6\alpha \right)\|z^{k+1}-z^k\| + \beta^{-1}\|\lambda^{k+1}-\lambda^k\| \\
&+(L_f+\gamma^{-1})\|x^{k+1}-x^k\| + \gamma^{-1}\|\hat{z}^{k+1}-\hat{z}^k\|,
\end{align*}
where the second inequality is due to \eqref{Eq:A-lambda-MEAL}. This yields $(P2)$ for MEAL.
\textbf{(c) Establishing $(P3)$}: $(P3)$ follows from the boundedness assumption of $\{y^k\}$, and the convergence of $\{{\cal P}_{\mathrm{meal}}(y^k)\}$ is implied by $(P1)$.
This finishes the proof.
\end{proof}
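For concreteness, the following self-contained numerical sketch runs the three MEAL updates of \eqref{alg:MEAL-reformulation} ($x$-minimization, $z$-relaxation, multiplier ascent) on an illustrative instance with the strongly convex quadratic $f(x)=\tfrac12\|x-c\|^2$, chosen only so that the $x$-subproblem admits a closed-form solution; the data $(A,b,c)$ and all parameter values are assumptions made for illustration, not part of the analysis.

```python
import numpy as np

# Minimal sketch of a MEAL-type recursion, specialized (as an assumption,
# purely for illustration) to f(x) = 0.5*||x - c||^2, so the x-subproblem
#   min_x f(x) + <lam, Ax-b> + (beta/2)||Ax-b||^2 + (1/(2*gamma))||x-z||^2
# has the closed-form solution H x = c + z/gamma - A^T lam + beta A^T b.
rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

gamma, eta, beta = 0.5, 1.0, 10.0
x = np.zeros(n); z = np.zeros(n); lam = np.zeros(m)

H = (1.0 + 1.0 / gamma) * np.eye(n) + beta * A.T @ A
for k in range(500):
    x = np.linalg.solve(H, c + z / gamma - A.T @ lam + beta * A.T @ b)
    z = z + eta * (x - z)             # relaxation step on z
    lam = lam + beta * (A @ x - b)    # multiplier (dual ascent) step

print(np.linalg.norm(A @ x - b))  # feasibility residual
```

With $\eta=1$ the $z$-update collapses to $z^{k+1}=x^{k+1}$, so this run is a proximal-ALM-type special case; other values $\eta\in(0,2)$ relax the update.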
\subsection{Proof for Convergence of iMEAL}
In this subsection, we present the proof of Theorem \ref{Theorem:Convergence-iMEAL} for iMEAL \eqref{alg:iMEAL}.
\begin{proof}[Proof of Theorem \ref{Theorem:Convergence-iMEAL}]
We first show the $o(1/\sqrt{k})$ rate of convergence under Assumption \ref{Assump:MEAL}(b) and then the convergence rate result under Assumption \ref{Assump:MEAL}(c).
\textbf{(a)}
In this case, we use a fixed $\beta_k = \beta$ and thus $\alpha_k = \alpha$.
Substituting the inequality in Lemma \ref{Lemma:dual-control-primal-iMEAL}(a) into \eqref{Eq:1-step-progress-iMEAL} in Lemma \ref{Lemma:1-step-progress-iMEAL} yields
\begin{align}
\label{Eq:primal-descent-iMEAL-S1}
&{\cal P}_{\beta}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta}(z^k,\lambda^k)\|^2\\
&+\left(\frac{1-\gamma\rho}{2\gamma} - 3\alpha (1+\gamma L_f)^2 \right)\|x^{k+1}-x^k\|^2 +\langle s^k, x^k - x^{k+1} \rangle - 3\alpha \gamma^2 (\epsilon_k+\epsilon_{k-1})^2 \nonumber\\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 - 3\alpha \|z^k-z^{k-1}\|^2. \nonumber
\end{align}
Let $\delta := 2\left(\frac{(1-\gamma\rho)}{2\gamma} - 3\alpha (1+\gamma L_f)^2 \right)$.
By the assumption $0<\alpha < \min\left\{\frac{1-\gamma \rho}{6\gamma(1+\gamma L_f)^2}, \frac{1}{12\gamma}\left(\frac{2}{\eta}-1\right)\right\}$, we have $\delta>0$ and further
\begin{align*}
\langle s^k, x^k - x^{k+1} \rangle \geq -\frac{\delta}{2}\|x^{k+1}-x^k\|^2 - \frac{1}{2\delta} \|s^k\|^2 \geq -\frac{\delta}{2}\|x^{k+1}-x^k\|^2 - \frac{1}{2\delta} (\epsilon_k + \epsilon_{k-1})^2.
\end{align*}
Substituting this into \eqref{Eq:primal-descent-iMEAL-S1} and
noting the definition \eqref{Eq:Lyapunov-seq-iMEAL-S1} of ${\cal E}_{\mathrm{imeal}}^k$,
we have
\begin{align*}
{\cal E}_{\mathrm{imeal}}^k - {\cal E}_{\mathrm{imeal}}^{k+1} \geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta}(z^k,\lambda^k)\|^2 - (3\alpha \gamma^2 + \frac{1}{2\delta})(\epsilon_k + \epsilon_{k-1})^2,
\end{align*}
which yields claim (a) by the assumption $\sum_{k=1}^{\infty} (\epsilon_k)^2 <+\infty$ and Lemma \ref{Lemma:sequence-fixed}.
\textbf{(b)} We next establish claim (b) under Assumption \ref{Assump:MEAL}(c).
Substituting the inequality in Lemma \ref{Lemma:dual-control-primal-iMEAL}(b) into \eqref{Eq:1-step-progress-iMEAL} in Lemma \ref{Lemma:1-step-progress-iMEAL} yields
\begin{align}
\label{Eq:primal-descent-iMEAL-S2}
&{\cal P}_{\beta_k}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{1}{2}\gamma \eta(1-\eta/2)\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2 \nonumber\\
&+\left(\frac{(1-\gamma\rho)}{2\gamma} - 4\alpha_k \right)\|x^{k+1}-x^k\|^2 +\langle s^k, x^k - x^{k+1} \rangle - 4\gamma^2 \alpha_k (\epsilon_k+\epsilon_{k-1})^2 \nonumber\\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 - 4\alpha_k \|z^k-z^{k-1}\|^2 - 16 \alpha_k \gamma^2 \hat{L}_f^2.
\end{align}
Let $\hat{\alpha}^*:= \min\left\{ \frac{1-\rho \gamma}{8\gamma}, \frac{1}{16\gamma}(\frac{2}{\eta}-1)\right\}$ and $\tilde{\delta}:= 2\left(\frac{(1-\gamma\rho)}{2\gamma} - 4\hat{\alpha}^* \right)>0$. We have
\begin{align*}
\langle s^k, x^k - x^{k+1} \rangle \geq -\frac{\tilde{\delta}}{2}\|x^{k+1}-x^k\|^2 - \frac{1}{2\tilde{\delta}} \|s^k\|^2 \geq -\frac{\tilde{\delta}}{2}\|x^{k+1}-x^k\|^2 - \frac{1}{2\tilde{\delta}} (\epsilon_k + \epsilon_{k-1})^2.
\end{align*}
Substituting this into \eqref{Eq:primal-descent-iMEAL-S2}, and by the definition \eqref{Eq:Lyapunov-seq-iMEAL-S2} of $\tilde{\cal E}_{\mathrm{imeal}}^k$
and setting of $\alpha_k$, we have
\begin{align*}
&\tilde{\cal E}_{\mathrm{imeal}}^k - \tilde{\cal E}_{\mathrm{imeal}}^{k+1} \\
&\geq \frac{1}{2}\gamma (1-\eta/2) \eta\|\nabla \phi_{\beta_k}(z^k,\lambda^k)\|^2 - (4\alpha_k \gamma^2 + \frac{1}{2\tilde{\delta}})(\epsilon_k + \epsilon_{k-1})^2 - 16 \alpha_k \gamma^2 \hat{L}_f^2,
\end{align*}
which yields claim (b) by the assumption $\sum_{k=1}^{\infty} (\epsilon_k)^2 <+\infty$ and Lemma \ref{Lemma:sequence-varying}.
\end{proof}
\subsection{Proofs for Convergence of LiMEAL}
We now present the proofs of the main convergence theorems for LiMEAL \eqref{alg:LiMEAL}.
\subsubsection{Proof of Theorem \ref{Theorem:Convergence-LiMEAL}}
\begin{proof}
We first establish claim (a) and then claim (b) under the associated assumptions.
\textbf{(a)} In this case, a fixed $\beta_k$ is used. Substituting \eqref{Eq:lambda-LiMEAL} into \eqref{Eq:1-step-progress-LiMEAL} yields
\begin{align*}
&{\cal P}_{\beta}(x^{k},z^k,\lambda^k) - {\cal P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})
\geq \frac{1}{4}\gamma (1-\eta/2)\eta \|g_{\mathrm{limeal}}^k\|^2\\
&+ \left(\frac{1-\gamma(\rho_g+L_h)}{2\gamma} - \frac{1}{4}\gamma (2-\eta)\eta L_h^2 - 3(1+\gamma L_g)^2 \alpha\right)\|x^{k+1}-x^k\|^2 \\
&+ \frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 - 3\alpha (\gamma^2L_h^2 \|x^k-x^{k-1}\|^2 + \|z^k - z^{k-1}\|^2).
\end{align*}
By the definition \eqref{Eq:Lyapunov-seq-LiMEAL-S1} of ${\cal E}^k_{\mathrm{limeal}}$, the above inequality implies
\begin{align}
\label{Eq:descent-LiMEAL-S1*}
&{\cal E}_{\mathrm{limeal}}^k - {\cal E}_{\mathrm{limeal}}^{k+1}
\geq \frac{1}{4}\gamma (1-\eta/2)\eta \|g_{\mathrm{limeal}}^k\|^2 + \left(\frac{1}{4\gamma}(\frac{2}{\eta}-1) -3\alpha \right)\|z^{k+1}-z^k\|^2\\
&+ \left(\frac{1-\gamma(\rho_g+L_h)}{2\gamma} - \frac{1}{4}\gamma (2-\eta)\eta L_h^2 - 3\alpha \left((1+\gamma L_g)^2 + \gamma^2L_h^2 \right)\right)\|x^{k+1}-x^k\|^2 \nonumber\\
& \geq \frac{1}{4}\gamma (1-\eta/2)\eta \|g_{\mathrm{limeal}}^k\|^2, \nonumber
\end{align}
where the second inequality holds under the conditions in Theorem \ref{Theorem:Convergence-LiMEAL}(a).
This shows the claim (a) by Lemma \ref{Lemma:sequence-fixed} and the lower boundedness of $\{{\cal E}_{\mathrm{limeal}}^k\}$.
\textbf{(b)} Similarly,
substituting \eqref{Eq:lambda-LiMEAL-S2} into \eqref{Eq:1-step-progress-LiMEAL} and using the definitions of ${\alpha}_k$ in \eqref{Eq:alphak} and $\tilde{\cal E}^k_{\mathrm{limeal}}$ in \eqref{Eq:Lyapunov-seq-LiMEAL-S2}, we obtain
\begin{align*}
&\tilde{\cal E}_{\mathrm{limeal}}^k - \tilde{\cal E}_{\mathrm{limeal}}^{k+1}\\
&\geq \frac{1}{4}\gamma (1-\eta/2)\eta_k \|g_{\mathrm{limeal}}^k\|^2 - 16{\alpha}_k \gamma^2 \hat{L}_g^2 + \left(\frac{1}{4\gamma}(\frac{2}{\eta}-1) -4{\alpha}_{k+1}\right)\|z^{k+1}-z^k\|^2\\
&+\left(\frac{(1-\gamma(\rho_g+L_h))}{2\gamma} - \frac{1}{4}\gamma (2-\eta)\eta L_h^2- 4 \alpha_k - 4\gamma^2L_h^2 \alpha_{k+1} \right) \|x^{k+1}-x^k\|^2 \nonumber\\
&\geq \frac{1}{4}c\gamma \eta \|g_{\mathrm{limeal}}^k\|^2 - 16{\alpha}_k \gamma^2 \hat{L}_g^2,
\end{align*}
where the second inequality is due to the settings of parameters presented in Theorem \ref{Theorem:Convergence-LiMEAL}(b).
This inequality shows claim (b) by Lemma \ref{Lemma:sequence-varying} and the lower boundedness of $\{\tilde{\cal E}_{\mathrm{limeal}}^k\}$.
\end{proof}
\subsubsection{Proof of Proposition \ref{Proposition:globalconv-LiMEAL}}
\begin{proof}
By Lemma \ref{Lemma:existing-global-converg}, we only need to verify that conditions $(P1)$-$(P3)$ hold for LiMEAL.
\textbf{(a) Establishing $(P1)$}: Similar to the proof of Theorem \ref{Theorem:Convergence-MEAL}, let $a:= \frac{\gamma \eta(2-\eta)}{4\beta}$. Then $\frac{1+a}{\beta c_{\gamma,A}}=\alpha$, where $\alpha$ is defined in \eqref{Eq:alpha}.
Substituting \eqref{Eq:lambda-LiMEAL} into \eqref{Eq:primal-descent-LiMEAL} with fixed $\beta_k$ yields
\begin{align*}
&{\cal P}_{\beta}(x^k,z^k,\lambda^k) - {\cal P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})\\
&\geq \left(\frac{1-\gamma(\rho_g+L_h)}{2\gamma} - 3\alpha(1+\gamma L_g)^2 \right)\|x^{k+1}-x^k\|^2 - 3\alpha \gamma^2L_h^2 \|x^k-x^{k-1}\|^2 \\
&+\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^k\|^2 - 3\alpha \|z^k-z^{k-1}\|^2 + a\beta^{-1}\|\lambda^{k+1}-\lambda^k\|^2.
\end{align*}
By the definition \eqref{Eq:Lyapunov-fun-LiMEAL} of ${\cal P}_{\mathrm{limeal}}$, the above inequality implies
\begin{align*}
&{\cal P}_{\mathrm{limeal}}(y^k) - {\cal P}_{\mathrm{limeal}}(y^{k+1}) \\
&\geq \left(\frac{1-\gamma(\rho_g+L_h)}{2\gamma} - 4\alpha \left((1+\gamma L_g)^2 + \gamma^2 L_h^2 \right) \right) \|x^{k+1}-x^k\|^2\\
&+\left(\frac{1}{2\gamma}\left(\frac{2}{\eta} -1\right) - 4\alpha \right) \|z^{k+1}-z^k\|^2 + a\beta^{-1}\|\lambda^{k+1}-\lambda^k\|^2\\
&+\alpha \left(\gamma^2L_h^2 \|\hat{x}^{k+1}-\hat{x}^k\|^2 + \|\hat{z}^{k+1} - \hat{z}^k\|^2 \right),
\end{align*}
which, with the assumptions on the parameters, implies $(P1)$ for LiMEAL.
\textbf{(b) Establishing $(P2)$}:
Note that ${\cal P}_{\mathrm{limeal}}(y) = f(x)+\langle \lambda,Ax-b\rangle + \frac{\beta}{2}\|Ax-b\|^2+\frac{1}{2\gamma}\|x-z\|^2 + 4\alpha\gamma^2L_h^2 \|x-\hat{x}\|^2 + 4\alpha \|z - \hat{z}\|^2$.
The update of $x^{k+1}$ in \eqref{alg:LiMEAL} has the optimality condition
\begin{align*}
0\in \partial g(x^{k+1}) +\nabla h(x^k) + A^T\lambda^{k+1} + \gamma^{-1}(x^{k+1}-z^k),
\end{align*}
which implies
\begin{align*}
&(\nabla h(x^{k+1})-\nabla h(x^k)) + 8\gamma^2 L_h^2 \alpha (x^{k+1}-x^k)\\
&+\gamma^{-1}(z^k - z^{k+1}) + A^T(\lambda^{k+1}-\lambda^{k}) \in \partial_x {\cal P}_{\mathrm{limeal}}(y^{k+1}).
\end{align*}
The derivations for the other terms are straightforward and similar to those in the proof of Proposition \ref{Proposition:globalconv-MEAL}.
We directly show the final estimate: for some $v^{k+1} \in \partial {\cal P}_{\mathrm{limeal}}(y^{k+1})$,
\begin{align*}
\|v^{k+1}\|
&\leq \left(L_h+L_g+\gamma^{-1} + 16\alpha\gamma^2L_h^2\right)\|x^{k+1}-x^k\| \\
&+ \left(\gamma^{-1} + \left| 8\alpha - \frac{1-\eta}{\eta} \right| + 8\alpha \right) \|z^{k+1}-z^k\|\\
&+ \beta^{-1} \|\lambda^{k+1}-\lambda^k\| + L_h \|\hat{x}^{k+1} - \hat{x}^k\| + \gamma^{-1}\|\hat{z}^{k+1}-\hat{z}^k\|,
\end{align*}
which yields $(P2)$ for LiMEAL.
\textbf{(c) Establishing $(P3)$}: $(P3)$ follows from the boundedness assumption of $\{y^k\}$ and the convergence of $\{{\cal P}_{\mathrm{limeal}}(y^k)\}$ by $(P1)$.
This finishes the proof.
\end{proof}
\section{Discussions on Boundedness and Related Work}
\label{sc:discussion}
In this section, we discuss how to guarantee the boundedness of the generated sequences and then compare our results with related work.
\subsection{Discussions on Boundedness of Sequence}
\label{sc:discussion-boundedness}
Theorem \ref{Theorem:Convergence-MEAL} imposes the lower boundedness of $\{{\cal E}_{\mathrm{meal}}^k\}$, while Proposition \ref{Proposition:globalconv-MEAL} requires the boundedness of the generated sequence $\{(x^k,z^k,\lambda^k)\}$.
In this section, we provide some sufficient conditions that guarantee these two boundedness properties.
Besides the $\rho$-weak convexity of $f$ (which implies that the curvature of $f$ is lower bounded by $-\rho$), we impose the following coercivity condition on the constrained problem \eqref{Eq:problem}.
\begin{assumption}[Coercivity]
\label{Assumption:coercive}
The minimal value $f^*:= \inf_{x\in {\cal X}} f(x) $ is finite (recall ${\cal X}:=\{x: Ax=b\}$), and $f$ is coercive over the set ${\cal X}$, that is, $f(x) \rightarrow \infty$ if $x\in {\cal X}$ and $\|x\| \rightarrow \infty$.
\end{assumption}
Coercivity is a common condition used to obtain the boundedness of the generated sequence; it is used, for example, in~\cite[Assumption A1]{Wang19} for the nonconvex ADMM.
In particular, let $(x^0,z^0,\lambda^0)$ be a finite initial guess of MEAL and
\begin{align}
\label{Eq:def-E0}
{\cal E}^0:={\cal E}_{\mathrm{meal}}^1 < +\infty.
\end{align}
By Assumption \ref{Assumption:coercive}, if $x\in {\cal X}$ and $f(x) \leq {\cal E}^0$, then there exists a positive constant ${\cal B}_0$ (possibly depending on ${\cal E}^0$) such that
$
\|x\| \leq {\cal B}_0.
$
Define another positive constant as
\begin{align}
\label{Eq:boundedness-B1}
{\cal B}_1 := {\cal B}_0 + \sqrt{2\rho^{-1} \cdot \max\{0,{\cal E}^0 - f^*\}}.
\end{align}
Given $\gamma \in (0, 1/\rho)$, $z\in \mathbb{R}^n$ with $\|z\|\leq {\cal B}_1$, and $u\in \mathrm{Im}(A)$, we define
\begin{align}
\label{Eq:def-x-u-z}
x(u;z):= \argmin_{\{x:Ax=u\}} \left\{f(x)+\frac{1}{2\gamma}\|x-z\|^2\right\}.
\end{align}
Since $f$ is $\rho$-weakly convex by Assumption \ref{Assump:MEAL}(a), for any $\gamma \in (0, 1/\rho)$ the function $f(x)+\frac{1}{2\gamma}\|x-z\|^2$ is strongly convex in $x$, and thus $x(u;z)$ above is well defined and unique for any given $z \in \mathbb{R}^n$ and $u\in \mathrm{Im}(A)$.
Motivated by \cite[Ch 5.6.3]{Boyd04}, we impose some \textit{local stability} on $x(u;z)$ defined in \eqref{Eq:def-x-u-z}.
\begin{assumption}[Local stability]
\label{Assumption:Lipschitz-min-path}
For any given $z\in \mathbb{R}^n$ with $\|z\|\leq {\cal B}_1$, there exist a $\delta>0$ and a finite positive constant $\bar{M}$ (possibly depending on $A$, ${\cal B}_1$ and $\delta$) such that
\[
\|x(u;z)-x(b;z)\| \leq \bar{M} \|u-b\|, \ \forall u \in \mathrm{Im}(A) \cap \{v: \|v-b\| \leq \delta\}.
\]
\end{assumption}
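To illustrate Assumption \ref{Assumption:Lipschitz-min-path} in a case where everything is computable, the sketch below evaluates $x(u;z)$ from \eqref{Eq:def-x-u-z} in closed form for the illustrative quadratic $f(x)=\tfrac12\|x-c\|^2$ (an assumption made only for this example). For this model, one can take the stability constant $\bar{M} = 1/s_{\min}(A)$, where $s_{\min}(A)$ is the smallest singular value of a full-row-rank $A$.

```python
import numpy as np

# Closed-form x(u; z) for the illustrative quadratic f(x) = 0.5*||x - c||^2:
# the equality-constrained proximal subproblem in Eq:def-x-u-z is solved via
# its KKT system.  All data below are assumptions for illustration.
rng = np.random.default_rng(1)
n, m = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
z = rng.standard_normal(n)
gamma = 0.5
q = 1.0 + 1.0 / gamma   # the objective's Hessian is q * I for this f

def x_of(u):
    # KKT solution of min_x f(x) + ||x - z||^2/(2*gamma)  s.t.  A x = u
    nu = np.linalg.solve(A @ A.T, A @ (c + z / gamma) - q * u)
    return (c + z / gamma - A.T @ nu) / q

M_bar = 1.0 / np.linalg.svd(A, compute_uv=False).min()
u = b + 1e-3 * rng.standard_normal(m)
lhs = np.linalg.norm(x_of(u) - x_of(b))
print(lhs <= M_bar * np.linalg.norm(u - b) + 1e-12)  # prints True
```

In this quadratic case $x(u;z)-x(b;z)=A^T(AA^T)^{-1}(u-b)$, so the stability bound actually holds globally in $u$, not just locally near $b$.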
The above \textit{local stability} assumption is also related to the \textit{Lipschitz sub-minimization path} assumption suggested in \cite[Assumption A3]{Wang19}.
As discussed in \cite{Wang19}, the \textit{Lipschitz sub-minimization path} assumption relaxes the more stringent \textit{full-rank} assumption used in the literature (see the discussions in \cite[Sections 2.2 and 4.1]{Wang19} and references therein).
As $\{z\in \mathbb{R}^n: \|z\|\leq {\cal B}_1\}$ is a compact set, $\bar{M}$ can be taken as the supremum of these
stability constants over this compact set.
Based on Assumption \ref{Assumption:Lipschitz-min-path}, we have the following lemma.
\begin{lemma}
\label{Lemma:A-control}
Let $\{(x^k,z^k,\lambda^k)\}$ be the sequence generated by MEAL \eqref{alg:MEAL-reformulation} with fixed $\beta>0$ and $\eta>0$. If $\gamma \in (0,1/\rho)$, $\|z^k\|\leq {\cal B}_1$ and $\|Ax^{k+1}-b\|\leq \delta$, then
\[
\|x^{k+1}-x(b;z^k)\| \leq \bar{M}\|Ax^{k+1}-b\|, \ \forall k \in \mathbb{N}.
\]
\end{lemma}
\begin{proof}
Let $u^{k+1} = Ax^{k+1}$. By the update of $x^{k+1}$ in \eqref{alg:MEAL-reformulation}, there holds
\[
{\cal P}_{\beta}(x^{k+1},z^k,\lambda^{k}) \leq {\cal P}_{\beta}(x(u^{k+1};z^k),z^k,\lambda^{k}).
\]
Noting that $Ax(u^{k+1};z^k) = Ax^{k+1}$ due to its definition in \eqref{Eq:def-x-u-z}, the above inequality implies
\[
f(x^{k+1}) + \frac{1}{2\gamma}\|x^{k+1}-z^k\|^2 \leq f(x(u^{k+1};z^k)) + \frac{1}{2\gamma}\|x(u^{k+1};z^k)-z^k\|^2.
\]
By the definition of $x(u^{k+1};z^k)$ in \eqref{Eq:def-x-u-z} again and noting that $Ax^{k+1} = u^{k+1}$, we have
\[
f(x^{k+1}) + \frac{1}{2\gamma}\|x^{k+1}-z^k\|^2 \geq f(x(u^{k+1};z^k)) + \frac{1}{2\gamma}\|x(u^{k+1};z^k)-z^k\|^2.
\]
These two inequalities imply
\[
f(x^{k+1}) + \frac{1}{2\gamma}\|x^{k+1}-z^k\|^2 = f(x(u^{k+1};z^k)) + \frac{1}{2\gamma}\|x(u^{k+1};z^k)-z^k\|^2,
\]
which yields
\[
x^{k+1} = x(u^{k+1};z^k) = x(Ax^{k+1};z^k)
\]
by the strong convexity of function $f(x)+\frac{1}{2\gamma}\|x-z^k\|^2$ for any $\gamma \in (0,1/\rho)$ and thus the uniqueness of $x(u^{k+1};z^k)$.
Then the desired result follows from Assumption \ref{Assumption:Lipschitz-min-path}.
\end{proof}
Based on the above assumptions, we establish the lower boundedness of $\{{\cal E}_{\mathrm{meal}}^k\}$ and the boundedness of $\{(x^k, z^k, \lambda^k)\}$ as follows.
\begin{proposition}
\label{Propos:boundedness-x-z}
Let $\{(x^k, z^k, \lambda^k)\}_{k\in \mathbb{N}}$ be a sequence generated by MEAL \eqref{alg:MEAL-reformulation} with a finite initial guess $(x^0,z^0,\lambda^0)$ such that $\|z^0\|\leq {\cal B}_1$, where ${\cal B}_1$ is defined in \eqref{Eq:boundedness-B1}.
Suppose that Assumptions \ref{Assump:feasibleset}, \ref{Assump:MEAL}(a)-(b) and \ref{Assumption:coercive} hold and further Assumption \ref{Assumption:Lipschitz-min-path} holds with some $0<\bar{M}<\frac{2}{\sqrt{\sigma_{\min}(A^TA)}}$. If $\gamma \in ( 0,\rho^{-1})$, $\eta \in (0,2)$ and
$\beta > \max\left\{\frac{1+\sqrt{1+\eta(2-\eta)\gamma c_{\gamma,A}\alpha_{\max}}}{2c_{\gamma,A}\alpha_{\max}}, \frac{a_2+\sqrt{a_2^2+4a_1a_3}}{2a_1} \right\},$
where $\alpha_{\max} := \min\left\{\frac{1-\gamma \rho}{4\gamma(1+\gamma L_f)^2}, \frac{1}{8\gamma}(\frac{2}{\eta}-1)\right\}$, $c_{\gamma,A} = \gamma^2 \sigma_{\min}(A^TA)$, $a_1 = 4-\bar{M}^2\sigma_{\min}(A^TA)$, $a_2 = 4(\bar{L}+\gamma^{-1})\bar{M}^2 - \gamma \eta(2-\eta)$, $a_3 = (1+\gamma \bar{L})\eta(2-\eta)\bar{M}^2$ and $\bar{L}=\rho+2L_f$,
then the following hold:
\begin{enumerate}
\item[(a)] $\{{\cal E}_{\mathrm{meal}}^k\}$ is lower bounded;
\item[(b)] $\{(x^k,z^k)\}$ is bounded; and
\item[(c)] if further $\lambda^0 \in \mathrm{Null}(A^T)$ (the null space of $A^T$) and $\|\nabla {\cal M}_{\gamma,f}(w^1)\|$ is finite with $w^1 = z^0 - \gamma A^T\lambda^1$, then $\{\lambda^k\}$ is bounded.
\end{enumerate}
\end{proposition}
\begin{proof}
In order to prove this proposition, we first establish the following claim for sufficiently large $k$:
\textbf{Claim A:}
\textit{If $\|z^{k-1}\|\leq {\cal B}_1$ and $\|Ax^k-b\| \leq \delta$ for all $k\geq k_0$ with some sufficiently large $k_0$, then ${\cal E}_{\mathrm{meal}}^k \geq f^*$, $\|z^{k}\|\leq {\cal B}_1$, and $\|x^k\| \leq {\cal B}_2$.}
By Theorem \ref{Theorem:Convergence-MEAL}(a), such a $k_0$ exists: $\{{\cal E}_{\mathrm{meal}}^k\}$ is lower bounded for all finite $k$, so $\xi_{\mathrm{meal}}^k \leq \hat{c}/\sqrt{k}$ for some constant $\hat{c}>0$, which implies that $\|Ax^k-b\|$ is sufficiently small for all sufficiently large $k$.
Next, we show \textbf{Claim A}.
By the definition \eqref{Eq:Lyapunov-seq-MEAL-S1} of ${\cal E}_{\mathrm{meal}}^k$, we have
\begin{align*}
{\cal E}_{\mathrm{meal}}^k
&= f(x^k) + \langle \lambda^{k}, Ax^k-b \rangle + \frac{\beta}{2}\|Ax^k-b\|^2 + \frac{1}{2\gamma}\|x^k-z^k\|^2 + 2\alpha \|z^k-z^{k-1}\|^2\\
&=f(x^k) + \langle A^T\lambda^{k}, x^k-\bar{x}^k \rangle + \frac{\beta}{2}\|Ax^k-b\|^2 + \frac{1}{2\gamma}\|x^k-z^k\|^2 + 2\alpha \|z^k-z^{k-1}\|^2,
\end{align*}
where
\[\bar{x}^k:=x(b;z^{k-1})
\]
as defined in \eqref{Eq:def-x-u-z}. Let $\bar{\lambda}^k$ be the associated optimal Lagrangian multiplier of $\bar{x}^k$ and $\bar{w}^k = z^{k-1} - \gamma A^T \bar{\lambda}^k$. Then we have
\[
\bar{x}^k = \mathrm{Prox}_{\gamma,f}(\bar{w}^k),
\]
and $\nabla {\cal M}_{\gamma,f}(\bar{w}^k) \in \partial f(\bar{x}^k)$.
By \eqref{Eq:A-lambda} in the proof of Lemma \ref{Lemma:dual-control-primal-MEAL}, we have
\begin{align*}
A^T\lambda^{k} = -\nabla {\cal M}_{\gamma,f}(w^k) - \gamma^{-1}(x^k-z^{k-1}),
\end{align*}
and $\nabla {\cal M}_{\gamma,f}(w^k) \in \partial f(x^k)$, where $w^k = z^{k-1} - \gamma A^T \lambda^{k}$.
Substituting the above equation into the previous equality yields
\begin{align}
{\cal E}_{\mathrm{meal}}^k
&= f(x^k) + \langle \nabla {\cal M}_{\gamma,f}(w^k),\bar{x}^k - x^k \rangle + \frac{\beta}{2}\|Ax^k-b\|^2 \label{Eq:key-ineq1}\\
&+\gamma^{-1}\langle x^k-z^{k-1}, \bar{x}^k - x^k \rangle + \frac{1}{2\gamma}\|x^k-z^k\|^2 + 2 \alpha \|z^k - z^{k-1}\|^2. \nonumber
\end{align}
Noting that $\nabla {\cal M}_{\gamma,f}(\bar{w}^k) \in \partial f(\bar{x}^k)$ and by the $\rho$-weak convexity of $f$, we have
\[f(x^k) \geq f(\bar{x}^k) + \langle \nabla{\cal M}_{\gamma,f}(\bar{w}^k),x^k-\bar{x}^k\rangle - \frac{\rho}{2}\|x^k-\bar{x}^k\|^2,
\]
which implies
\begin{align*}
&f(x^k) + \langle \nabla{\cal M}_{\gamma,f}(w^k),\bar{x}^k-x^k\rangle \\
&\geq f(\bar{x}^k) - \frac{\rho}{2}\|\bar{x}^k - x^k\|^2 - \langle \nabla{\cal M}_{\gamma,f}(\bar{w}^k)-\nabla{\cal M}_{\gamma,f}(w^k),\bar{x}^k-x^k\rangle\\
&\geq f(\bar{x}^k) - \frac{\rho}{2}\|\bar{x}^k - x^k\|^2 - \|\nabla{\cal M}_{\gamma,f}(\bar{w}^k)-\nabla{\cal M}_{\gamma,f}(w^k)\|\cdot \|\bar{x}^k-x^k\|.
\end{align*}
By the implicit Lipschitz subgradient assumption (i.e., Assumption \ref{Assump:MEAL} (b)) and the definition of $\bar{L}:= \rho+2L_f$, the above inequality yields
\begin{align}
\label{Eq:key-ineq2}
f(x^k)+\langle \nabla {\cal M}_{\gamma,f}(w^k), \bar{x}^k - x^k \rangle \geq f(\bar{x}^k) - \frac{\bar{L}}{2}\|\bar{x}^k-x^k\|^2.
\end{align}
Moreover, it is easy to show that
\begin{align}
\label{Eq:key-ineq3}
&\gamma^{-1}\langle x^k - z^{k-1}, \bar{x}^k - x^k \rangle + \frac{1}{2\gamma}\|x^k-z^k\|^2 + 2 \alpha \|z^k - z^{k-1}\|^2 \\
& = \frac{1}{2\gamma}\|\bar{x}^k-z^k\|^2 - \frac{1}{2\gamma}\|\bar{x}^k-x^k\|^2 + \gamma^{-1} \langle z^k - z^{k-1}, \bar{x}^k - x^k\rangle + 2\alpha \|z^k - z^{k-1}\|^2 \nonumber\\
&=\frac{1}{2\gamma}\|\bar{x}^k-z^k\|^2 - \left(\frac{1}{2\gamma} + \frac{1}{8 \alpha \gamma^2}\right)\|\bar{x}^k-x^k\|^2
+2\alpha \left\|(z^k - z^{k-1}) + \frac{1}{4 \alpha \gamma} (\bar{x}^k - x^k)\right\|^2. \nonumber
\end{align}
Substituting \eqref{Eq:key-ineq2}-\eqref{Eq:key-ineq3} into \eqref{Eq:key-ineq1} and by Lemma \ref{Lemma:A-control}, we have
\begin{align}
{\cal E}_{\mathrm{meal}}^k
&\geq f(\bar{x}^k) + \frac{1}{2\gamma}\|\bar{x}^k - z^k\|^2
+2\alpha \left\|(z^k - z^{k-1}) + \frac{1}{4\alpha \gamma} (\bar{x}^k - x^k)\right\|^2 \nonumber\\
&+ \frac{1}{2} \left[\beta - \left(\frac{1}{4\alpha\gamma^2}+\bar{L}+\gamma^{-1} \right)\bar{M}^2\right] \|Ax^k-b\|^2 \nonumber\\
&\geq f(\bar{x}^k) + \frac{1}{2\gamma}\|\bar{x}^k - z^k\|^2
+2\alpha \left\|(z^k - z^{k-1}) + \frac{1}{4\alpha\gamma} (\bar{x}^k - x^k)\right\|^2 \label{Eq:key-ineq4}\\
&\geq f^* + \frac{1}{2\gamma}\|\bar{x}^k - z^k\|^2
+2\alpha \left\|(z^k - z^{k-1}) + \frac{1}{4 \alpha \gamma} (\bar{x}^k - x^k)\right\|^2 \label{Eq:key-ineq5}\\
&> -\infty, \label{Eq:key-ineq6}
\end{align}
where the second inequality follows from the definition of $\alpha = \frac{2\beta+\gamma \eta(1-\eta/2)}{2\gamma^2\sigma_{\min}(A^TA)\beta^2}$ and the condition on $\beta$, the third inequality holds for $\bar{x}^k:= x(b;z^{k-1})$ and thus $A\bar{x}^k = b$ and $f(\bar{x}^k) \geq f^*$, and the final inequality is due to Assumption \ref{Assumption:coercive}.
The above inequality yields the lower boundedness of $\{{\cal E}_{\mathrm{meal}}^k\}$ in \textbf{Claim A}. Thus, claim (a) of this proposition holds.
Then, we show the boundedness of $\{(x^k,z^k)\}$ in \textbf{Claim A}.
By \eqref{Eq:key-ineq4} and \eqref{Eq:descent-MEAL-S1*}, we have
\begin{align*}
f(\bar{x}^k) \leq {\cal E}^0 := {\cal E}_{\mathrm{meal}}^1,
\end{align*}
which implies $\|\bar{x}^k\| \leq {\cal B}_0$
by Assumption \ref{Assumption:coercive}. By \eqref{Eq:key-ineq5} and the condition $\gamma \in (0, \rho^{-1})$, we have $f^* + \frac{\rho}{2}\|\bar{x}^k - z^k\|^2 \leq f^* + \frac{1}{2\gamma}\|\bar{x}^k - z^k\|^2 \leq {\cal E}^0,$
which implies
\begin{align*}
\|z^k\| \leq {\cal B}_0 + \sqrt{2({\cal E}^0-f^*)/\rho} = {\cal B}_1.
\end{align*}
By \eqref{Eq:key-ineq5} again, we have $\left\|(z^k - z^{k-1}) + \frac{1}{4 \alpha \gamma} (\bar{x}^k - x^k)\right\|^2 \leq \frac{{\cal E}^0 - f^*}{2 \alpha},$
which, together with these existing bounds $\|z^{k-1}\|\leq {\cal B}_1$, $\|z^{k}\|\leq {\cal B}_1$ and $\|\bar{x}^k\| \leq {\cal B}_0$, yields
\begin{align}
\label{Eq:boundedness-B2}
\|x^k\| \leq {\cal B}_0 + 4\alpha\gamma \left(2{\cal B}_1 + \sqrt{\frac{{\cal E}^0 - f^*}{2\alpha}} \right) =: {\cal B}_2.
\end{align}
Thus, we have shown \textbf{Claim A}. Recursively, we can show that $\{x^k\}$ and $\{z^k\}$ are respectively bounded by ${\cal B}_2$ and ${\cal B}_1$ for any $k\geq 1$, that is, claim (b) in this proposition holds.
In the following, we show claim (c) of this proposition.
By the update of $\lambda^{k+1}$ in \eqref{alg:MEAL-reformulation}, it is easy to show $\lambda^k = \lambda^0 + \hat{\lambda}^k$, where $\hat{\lambda}^k = \beta \sum_{t=1}^{k} (Ax^t-b) \in \mathrm{Im}(A)$ by Assumption \ref{Assump:feasibleset}. Furthermore, by the assumption that $\lambda^0 \in \mathrm{Null}(A^T)$, we have
\begin{align}
\label{Eq:lambda-orth}
\langle \lambda^0, \hat{\lambda}^k \rangle =0, \ \forall k \geq 1.
\end{align}
By \eqref{Eq:A-lambda}, for any $k\geq 1$, we have
\begin{align*}
A^T\lambda^{k}= -(\nabla {\cal M}_{\gamma,f}(w^{k})-\nabla {\cal M}_{\gamma,f}(w^{1}))-\nabla {\cal M}_{\gamma,f}(w^{1})-\gamma^{-1}(x^{k}-z^{k-1}),
\end{align*}
where $w^k = z^{k-1} - \gamma A^T\lambda^k$. By Assumption \ref{Assump:MEAL}(b) and the boundedness of $\{(x^k,z^k)\}$ shown before, the above equation implies
\begin{align*}
\|A^T {\lambda}^k\|
&\leq L_f \|x^k - x^1\| + \|\nabla {\cal M}_{\gamma,f}(w^{1})\| + \gamma^{-1}\|x^{k}-z^{k-1}\| \\
&\leq \gamma^{-1}{\cal B}_1 + (2L_f+\gamma^{-1}){\cal B}_2 + \|\nabla {\cal M}_{\gamma,f}(w^{1})\| <+\infty.
\end{align*}
By the relation $\lambda^k = \lambda^0 + \hat{\lambda}^k$ and \eqref{Eq:lambda-orth}, the above inequality implies
\begin{align*}
\|A^T\hat{\lambda}^k\| \leq \gamma^{-1}{\cal B}_1 + (2L_f+\gamma^{-1}){\cal B}_2 + \|\nabla {\cal M}_{\gamma,f}(w^{1})\|.
\end{align*}
Since $\hat{\lambda}^k \in \mathrm{Im}(A)$, the above inequality implies
\begin{align*}
\|\hat{\lambda}^k\| \leq \tilde{\sigma}_{\min}^{-1/2}(A^TA) \|A^T\hat{\lambda}^k\| \leq \tilde{\sigma}_{\min}^{-1/2}(A^TA) \left[\gamma^{-1}{\cal B}_1 + (2L_f+\gamma^{-1}){\cal B}_2 + \|\nabla {\cal M}_{\gamma,f}(w^{1})\|\right],
\end{align*}
which yields the boundedness of $\{\lambda^k\}$ by the triangle inequality.
This finishes the proof.
\end{proof}
The proof idea of claim (c) of this proposition is motivated by the proof of \cite[Lemma 3.1]{Zhang-Luo18}.
Proposition \ref{Propos:boundedness-x-z} thus establishes the lower boundedness of the Lyapunov function sequence and the boundedness of the sequence generated by MEAL.
Following a similar analysis, one can obtain analogous boundedness results for both iMEAL and LiMEAL.
\subsection{Discussions on Related Work}
\label{sc:related-work}
Compared with the closely related works \cite{Hajinezhad-Hong19,Hong17-Prox-PDA,Jiang19,Rockafellar76-PALM,Xie-Wright19,Zhang-Luo20,Zhang-Luo18}, this paper provides slightly stronger convergence results under weaker conditions. Detailed discussions and comparisons are given below and summarized in Tables \ref{Tab:comp-alg1} and \ref{Tab:comp-alg2}.
\begin{table}
\caption{Convergence results of our and related algorithms for problem \eqref{Eq:problem}}
\footnotesize
\begin{center}
\begin{tabular}{c|c|c|c|c}\hline
Algorithm & MEAL (our) & iMEAL (our) & Prox-PDA \cite{Hong17-Prox-PDA} & Prox-ALM \cite{Xie-Wright19} \\\hline\hline
Assumption &\multicolumn{2}{|c|}{$f$: weakly convex, \textit{imp-Lip} or \textit{imp-bound}}
& \multicolumn{2}{|c|}{$\nabla f$: Lipschitz}
\\\hline
{Iteration} & \textit{imp-Lip}: $o(\varepsilon^{-2})$ & \textit{imp-Lip}: $o(\varepsilon^{-2})$ & \multirow{2}*{${\cal O}(\varepsilon^{-2})$} & \multirow{2}*{${\cal O}(\varepsilon^{-2})$} \\
complexity & \textit{imp-bound}: ${\cal O}(\varepsilon^{-2})$ & \textit{imp-bound}: ${\cal O}(\varepsilon^{-2})$ & ~ &~ \\\hline
{Global} & \multirow{2}*{$\checkmark$ under K{\L}} & \multirow{2}*{--} & \multirow{2}*{--} & \multirow{2}*{--} \\
Convergence & ~ & ~ & ~ &~ \\\hline
\end{tabular}
\end{center}
$\bullet$~\textit{imp-Lip}: the \textit{implicit Lipschitz subgradient} assumption \ref{Assump:MEAL}(b);\\
$\bullet$~\textit{imp-bound}: the \textit{implicit bounded subgradient} assumption \ref{Assump:MEAL}(c);\\
$\bullet$~\cite{Xie-Wright19} considers nonlinear equality constraints $c(x)=0$, where $\nabla c$ is Lipschitz and bounded.
\label{Tab:comp-alg1}
\end{table}
\begin{table}
\caption{Convergence results of our and related algorithms for the composite optimization problem \eqref{Eq:problem-CP}.}
\footnotesize
\begin{center}
\begin{tabular}{c|c|c|c|c}\hline
Algorithm & LiMEAL (our) & PProx-PDA \cite{Hajinezhad-Hong19} & Prox-iALM \cite{Zhang-Luo18} & S-prox-ALM \cite{Zhang-Luo20} \\\hline\hline
\multirow{3}*{Assumption} &$\nabla h$: Lipschitz,
& $\nabla h$: Lipschitz, & $\nabla h$: Lipschitz, & $\nabla h$: Lipschitz,
\\
~ & $g$: weakly convex, & $g$: convex, & $g:\iota_{\cal C}(x)$, & $g:\iota_{\cal P}(x)$, \\
~ & \textit{imp-Lip} or \textit{imp-bound} &$\partial g$: bounded &${\cal C}$: box constraint &${\cal P}$: polyhedral set \\\hline
{Iteration} & \textit{imp-Lip}: $o(\varepsilon^{-2})$ & \multirow{2}*{${\cal O}(\varepsilon^{-2})$} & \multirow{2}*{${\cal O}(\varepsilon^{-2})$} & \multirow{2}*{${\cal O}(\varepsilon^{-2})$} \\
complexity & \textit{imp-bound}: ${\cal O}(\varepsilon^{-2})$ & ~ & ~ &~ \\\hline
{Global} & \multirow{2}*{$\checkmark$ under K{\L}} & \multirow{2}*{--} & $\checkmark$ for quadratic & \multirow{2}*{--} \\
Convergence & ~ & ~ & programming &~ \\\hline
\end{tabular}
\end{center}
\label{Tab:comp-alg2}
\end{table}
When reduced to the case of linear constraints, the proximal ALM suggested in \cite{Rockafellar76-PALM} is a special case of MEAL with $\eta = 1$. Moreover, the Lipschitz continuity at the origin of a certain fundamental mapping \cite[p. 100]{Rockafellar76-PALM} generally implies that the proximal augmented Lagrangian satisfies the K{\L} property with exponent $1/2$ at some stationary point; hence the linear convergence of the proximal ALM follows directly from Proposition \ref{Proposition:globalconv-MEAL}(b).
Moreover, the proposed algorithms still work (in terms of convergence) for some constrained problems with nonconvex objectives and a fixed penalty parameter.
In \cite{Hong17-Prox-PDA}, a proximal primal-dual algorithm (named \textit{Prox-PDA}) was proposed for the linearly constrained problem \eqref{Eq:problem} with $b=0$.
Prox-PDA is shown as follows:
\begin{align*}
\text{(Prox-PDA)} \
\left\{
\begin{array}{l}
x^{k+1} = \argmin_{x\in \mathbb{R}^n} \ \left\{ f(x)+\langle \lambda^k,Ax\rangle + \frac{\beta}{2}\|Ax\|^2 + \frac{\beta}{2}\|x-x^k\|^2_{B^TB}\right\},\\
\lambda^{k+1} = \lambda^k + \beta Ax^{k+1},
\end{array}
\right.
\end{align*}
where $B$ is chosen such that $A^TA + B^TB \succeq \mathrm{I}_n$ (the identity matrix of size $n$).
To achieve a $\sqrt{\varepsilon}$-accurate stationary point,
the iteration complexity of Prox-PDA is ${\cal O}(\varepsilon^{-1})$ under the Lipschitz differentiability of $f$ (that is, $f$ is differentiable and has Lipschitz gradient) and the assumption that there exists some $\underline{f}>-\infty$ and some $\delta>0$ such that $f(x)+\frac{\delta}{2}\|Ax\|^2 \geq \underline{f}$ for any $x\in \mathbb{R}^n$. Such iteration complexity of Prox-PDA is consistent with the order of ${\cal O}(\varepsilon^{-2})$ to achieve an $\varepsilon$-accurate stationary point.
On the one hand, if we take $B = \mathrm{I}_n$ in Prox-PDA, then it reduces to MEAL with $\gamma = \beta^{-1}$ and $\eta=1$.
On the other hand, by Theorem \ref{Theorem:Convergence-MEAL}(a), MEAL achieves an iteration complexity of order $o(\varepsilon^{-2})$, slightly better than that of Prox-PDA, under weaker conditions (see Assumption \ref{Assump:MEAL}(a)-(b)).
Moreover, we established the global convergence and rate of MEAL under the K{\L} inequality, while such a global convergence result is missing (though obtainable) for Prox-PDA in \cite{Hong17-Prox-PDA}.
A prox-linear variant of Prox-PDA (there dubbed \textit{PProx-PDA}) was proposed in the recent paper \cite{Hajinezhad-Hong19} for the linearly constrained problem \eqref{Eq:problem-CP} with a composite objective. Besides Lipschitz differentiability of $h$, the nonsmooth function $g$ is assumed to be convex with bounded subgradients. These assumptions used in \cite{Hajinezhad-Hong19} are stronger than ours in Assumption \ref{Assump:LiMEAL}(a), (b) and (d), while the yielded iteration complexity of LiMEAL (Theorem \ref{Theorem:Convergence-LiMEAL}(b)) is consistent with that of PProx-PDA in \cite[Theorem 1]{Hajinezhad-Hong19}.
Moreover, we establish the global convergence and rate of LiMEAL (Proposition \ref{Proposition:globalconv-LiMEAL}), which are missing (though obtainable) for PProx-PDA.
In \cite{Xie-Wright19}, an ${\cal O}(\varepsilon^{-2})$-iteration complexity of proximal ALM was established for the constrained problem with nonlinear equality constraints, under assumptions that the objective is differentiable and its gradient is both Lipschitz continuous and bounded, and that the Jacobian of the constraints is also Lipschitz continuous and bounded and satisfies a \textit{full-rank} property (see \cite[Assumption 1]{Xie-Wright19}). If we reduce their setting to linear constraints, their iteration complexity is slightly worse than ours and their assumptions are stronger (of course, except for the part on nonlinear constraints).
In \cite{Zhang-Luo18}, a very related algorithm (called \textit{Proximal Inexact Augmented Lagrangian Multiplier method}, dubbed Prox-iALM) was introduced for the following linearly constrained problem
\begin{align*}
\min_{x\in \mathbb{R}^n} \ h(x) \quad \mathrm{subject \ to} \quad Ax=b, \ x\in {\cal C},
\end{align*}
where ${\cal C}$ is a box constraint set. Subsequence convergence to a stationary point was established under
the following assumptions:
(a) the origin is in the relative interior of the set $\{Ax-b: x\in {\cal C}\}$;
(b) the strict complementarity condition \cite{Nocedal99} holds for the above constrained problem;
(c) $h$ is differentiable and has Lipschitz continuous gradient.
Moreover, the global convergence and linear rate of this algorithm were established for quadratic programming, in which case the augmented Lagrangian satisfies the K{\L} inequality with exponent $1/2$, by noticing
the connection between Luo-Tseng error bound and K{\L} inequality \cite{Li-Pong-KLexponent18}.
According to Theorem \ref{Theorem:Convergence-LiMEAL} and Proposition \ref{Proposition:globalconv-LiMEAL}, the convergence results established in this paper are more general and stronger than those in \cite{Zhang-Luo18}, and they hold under weaker assumptions. In particular, besides the weaker assumption on $h$, LiMEAL does not require the strict complementarity condition (b).
The algorithm studied in \cite{Zhang-Luo18} has been recently generalized to handle the linearly constrained problem with the polyhedral set in \cite{Zhang-Luo20} (dubbed \textit{S-prox-ALM}). Under the Lipschitz differentiability of the objective, the iteration complexity of the order ${\cal O}(\varepsilon^{-2})$ was established in \cite{Zhang-Luo20} for the S-prox-ALM algorithm. Such iteration complexity is consistent with LiMEAL as shown in Theorem \ref{Theorem:Convergence-LiMEAL}.
Besides these major differences between this paper and \cite{Zhang-Luo20,Zhang-Luo18}, the step sizes $\eta$ are more flexible for both MEAL and LiMEAL (only requiring $\eta \in (0,2)$), while the step sizes used in the algorithms in \cite{Zhang-Luo20,Zhang-Luo18} should be sufficiently small to guarantee convergence.
Meanwhile, the Lyapunov function used in this paper is motivated by the Moreau envelope of the augmented Lagrangian, which is very different from the Lyapunov function used in \cite{Zhang-Luo20,Zhang-Luo18}.
Based on the defined Lyapunov function, our analysis is much simpler than that in \cite{Zhang-Luo20,Zhang-Luo18}.
\section{Numerical Experiments}
\label{sc:experiment}
We use two experiments to demonstrate the effectiveness of the proposed algorithms:
\begin{enumerate}
\item The first experiment is based on a nonconvex quadratic program on which ALM with any bounded penalty parameter diverges~\cite[Proposition 1]{Wang19} but LiMEAL converges.
\item The second experiment borrows a general quadratic program from~\cite[Sec. 6.2]{Zhang-Luo18} and LiMEAL outperforms \textit{Prox-iALM} suggested in \cite{Zhang-Luo18}.
\end{enumerate}
The source codes can be accessed at \url{https://github.com/JinshanZeng/MEAL}.
\subsection{ALM vs LiMEAL}
\label{sc:exp1}
Consider the following optimization problem from \cite[Proposition 1]{Wang19}:
\begin{align}\label{Eq:exp1}
\min_{x,y\in \mathbb{R}} \ x^2-y^2,
\quad \text{subject to} \ x=y, \ x\in [-1,1].
\end{align}
ALM with any bounded penalty parameter $\beta$ diverges on this problem. By Theorem \ref{Theorem:Convergence-LiMEAL} and Proposition \ref{Proposition:globalconv-LiMEAL}, LiMEAL converges exponentially fast since its augmented Lagrangian is a K{\L} function with exponent $1/2$. For both ALM and LiMEAL, we set the penalty parameter $\beta$ to $50$. We set LiMEAL's proximal parameter $\gamma$ to $1/2$ and test three values of $\eta$: $0.5$, $1$, and $1.5$. The curves of the objective $f(x^k,y^k)=(x^k)^2 - (y^k)^2$, the constraint violation $|x^k-y^k|$, the multiplier sequence $\{\lambda^k\}$, and the norm of the gradient of the Moreau envelope in \eqref{Eq:stationary-LiMEAL}, which serves as the stationarity measure, are depicted in Fig. \ref{Fig:Exp1}.
Observe that ALM diverges: its multiplier sequence $\{\lambda^k\}$ oscillates between two distinct values (Fig. \ref{Fig:Exp1} (a)) and the constraint violation converges to a positive value (Fig. \ref{Fig:Exp1} (b)). Also observe that LiMEAL converges exponentially fast (Fig. \ref{Fig:Exp1} (c)--(e)) and achieves the optimal objective value of $0$ in about 10 iterations (Fig. \ref{Fig:Exp1} (f)) for all tested values of $\eta$. This verifies Proposition \ref{Proposition:globalconv-LiMEAL}.
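The ALM divergence on problem \eqref{Eq:exp1} is easy to reproduce. The following sketch (pure Python; the closed-form subproblem solutions are our own derivation, valid for $\beta>2$, and are not taken from the paper) shows the multiplier settling into a 2-cycle while the constraint violation stays bounded away from zero:

```python
def alm_step(lam, beta=50.0):
    """One exact ALM step for min x^2 - y^2 s.t. x = y, x in [-1, 1].

    For beta > 2 the joint subproblem has a closed form: eliminating y via
    y = (lam + beta*x)/(beta - 2) reduces the augmented Lagrangian to
    -(2*x + lam)^2 / (2*(beta - 2)), which is minimized over x in [-1, 1]
    at x = sign(lam)  (x = 1 when lam = 0).
    """
    x = 1.0 if lam >= 0.0 else -1.0
    y = (lam + beta * x) / (beta - 2.0)
    return x, y, lam + beta * (x - y)   # standard multiplier update

lam = 0.0
for _ in range(100):
    x, y, lam = alm_step(lam)

# The multiplier oscillates between +-50/23 and |x - y| tends to 2/23 > 0:
# ALM never satisfies the constraint, matching panels (a)-(b) of the figure.
print(lam, abs(x - y))
```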
\begin{figure}
\caption{ALM and LiMEAL applied to problem \eqref{Eq:exp1}.}
\label{Fig:Exp1}
\end{figure}
\subsection{Quadratic Programming}
\label{sc:exp2}
Consider the quadratic program with box constraints:
\begin{align}\label{Eq:exp2}
\min_{x\in \mathbb{R}^n} \ \frac{1}{2} x^TQx + r^Tx \quad \mathrm{subject \ to} \quad Ax=b, \ \ell_i \leq x_i \leq u_i, \ i=1,\ldots,n,
\end{align}
where $Q\in \mathbb{R}^{n\times n}$, $r\in \mathbb{R}^n$, $A\in \mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$, and $\ell_i, u_i \in \mathbb{R}$, $i=1,\ldots, n$. Let ${\cal C}:=\{x: \ell_i \leq x_i \leq u_i, i=1,\ldots,n\}$.
Applying LiMEAL to \eqref{Eq:exp2} yields the following iteration: initialize $(x^0, z^0,\lambda^0)$ and parameters $\gamma>0$, $\eta \in (0,2)$, $\beta>0$; for $k=0,1,\ldots,$ run
\begin{equation*}
\mathrm{(LiMEAL)} \quad
\left\{
\begin{array}{l}
\tilde{x}^k = (\beta A^TA+\gamma^{-1}{\bf I}_n)^{-1}(\gamma^{-1}z^k+\beta A^Tb-r-Qx^k-A^T\lambda^k),\\
x^{k+1} = \mathrm{Proj}_{\cal C}(\tilde{x}^k),\\
z^{k+1} = z^k - \eta(z^k-x^{k+1}),\\
\lambda^{k+1} = \lambda^k + \beta (Ax^{k+1}-b).
\end{array}
\right.
\end{equation*}
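A minimal run of the LiMEAL iteration above on a tiny convex instance of \eqref{Eq:exp2} (our own toy data, not the paper's random test): $Q=\mathrm{I}_2$, $r=0$, $A=[1 \ 1]$, $b=1$, box $[-10,10]^2$, with KKT point $x^\star=(0.5,0.5)$, $\lambda^\star=-0.5$.

```python
def limeal_toy_qp(beta=50.0, gamma=0.5, eta=1.0, iters=100):
    """LiMEAL for min 0.5*||x||^2 s.t. x1 + x2 = 1, -10 <= xi <= 10
    (Q = I2, r = 0, A = [1 1], b = 1); KKT point x* = (0.5, 0.5),
    lambda* = -0.5."""
    g = 1.0 / gamma                              # gamma^{-1}
    x, z, lam = [0.0, 0.0], [0.0, 0.0], 0.0
    # M = beta*A^T A + g*I is 2x2 with equal diagonal; invert it once.
    a, c = beta + g, beta                        # M = [[a, c], [c, a]]
    det = a * a - c * c
    minv = [[a / det, -c / det], [-c / det, a / det]]
    clip = lambda t: min(10.0, max(-10.0, t))
    for _ in range(iters):
        # rhs_i = g*z_i + beta*(A^T b)_i - r_i - (Q x)_i - (A^T lam)_i
        rhs = [g * z[i] + beta - x[i] - lam for i in range(2)]
        xt = [minv[i][0] * rhs[0] + minv[i][1] * rhs[1] for i in range(2)]
        x = [clip(xt[0]), clip(xt[1])]           # projection onto the box
        z = [z[i] - eta * (z[i] - x[i]) for i in range(2)]
        lam += beta * (x[0] + x[1] - 1.0)        # multiplier update
    return x, lam

x, lam = limeal_toy_qp()
print(x, lam)   # approaches ([0.5, 0.5], -0.5)
```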
Applying \textit{Prox-iALM} from~\cite[Algorithm 2.2]{Zhang-Luo18} yields:
initialize $(x^0, z^0,\lambda^0)$, parameters $\beta, p, \alpha, s, \eta>0$, for $k=0,1,\ldots,$ run
\begin{equation*}
\mathrm{(Prox-iALM)} \quad
\left\{
\begin{array}{l}
\bar{x}^k = (\beta A^TA+p{\bf I}_n)x^k +Qx^k+A^T\lambda^k-p z^k-(\beta A^Tb-r),\\
x^{k+1} = \mathrm{Proj}_{\cal C}(x^k-s\bar{x}^k),\\
z^{k+1} = z^k - \eta(z^k-x^{k+1}),\\
\lambda^{k+1} = \lambda^k + \beta (Ax^{k+1}-b).
\end{array}
\right.
\end{equation*}
When $\eta=1$, Prox-iALM reduces to \textit{Algorithm 2.1} in \cite{Zhang-Luo18}, which we refer to as \textit{iALM}.
The experimental settings are similar to~\cite[Sec. 6.2]{Zhang-Luo18}: we set $m=5, n=20$, generate the entries of $Q$, $A$, and $\tilde{x}$ by sampling from the uniform distribution, and set $b=A\tilde{x}$. For LiMEAL, we set $\beta = 50$ and $\gamma=\frac{1}{2\|Q\|_2}$, and test three values of $\eta$: $0.5$, $1$, and $1.5$. For Prox-iALM, we use the parameter settings in \cite[Sec. 6.2]{Zhang-Luo18}: $p = 2\|Q\|_2$, $\beta = 50$, $\alpha = \frac{\beta}{4}$, $s = \frac{1}{2(\|Q\|_2+p+\beta \|A\|_2^2)}$, and test two values of $\eta$: $1$ and $0.5$; recall that Prox-iALM with $\eta=1$ reduces to iALM. The curves of the objective sequence, $\|Ax^k-b\|$, $\|x^{k+1}-z^k\|$ and the norm of the gradient of the Moreau envelope are depicted in Fig. \ref{Fig:exp2}.
We observe that LiMEAL converges faster than both iALM and Prox-iALM. By Fig. \ref{Fig:exp2}(d), LiMEAL converges exponentially fast for all three values of $\eta$. These observations verify Proposition \ref{Proposition:globalconv-LiMEAL}(b), since the augmented Lagrangian of problem \eqref{Eq:exp2} is a K{\L} function with exponent $1/2$.
\begin{figure}
\caption{Performance of LiMEAL and Prox-iALM on the quadratic programming problem \eqref{Eq:exp2}.}
\label{Fig:exp2}
\end{figure}
\section{Conclusion}
\label{sc:conclusion}
This paper suggests a Moreau envelope augmented Lagrangian (MEAL) method for the linearly constrained weakly convex optimization problem.
By leveraging the \textit{implicit smoothing property} of Moreau envelope, the proposed MEAL generalizes the ALM and proximal ALM to the nonconvex and nonsmooth case.
To yield an $\varepsilon$-accurate first-order stationary point, the iteration complexity of MEAL is $o(\varepsilon^{-2})$ under the \textit{implicit Lipschitz subgradient} assumption and ${\cal O}(\varepsilon^{-2})$ under the \textit{implicit bounded subgradient} assumption.
The global convergence and rate of MEAL are also established under the further Kurdyka-{\L}ojasiewicz inequality.
Moreover, an inexact variant (called \textit{iMEAL}), and a prox-linear variant (called \textit{LiMEAL}) for the composite objective case are suggested and analyzed for different practical settings.
The convergence results established in this paper for MEAL and its variants are generally stronger than the existing ones, but under weaker assumptions.
One future direction is to remove the \textit{implicit Lipschitz subgradient} and \textit{implicit bounded subgradient} assumptions, which to some extent limit the applications of the suggested algorithms, even though they are respectively weaker than the \textit{Lipschitz differentiability} and \textit{bounded subgradient} assumptions commonly used in the literature.
Another direction is to generalize this work to problems with nonlinear constraints.
The third direction is to develop more practical variants of the proposed methods as well as establish their convergence results.
One possible application of our study is robustness and convergence of stochastic gradient descent in training parameters of structured deep neural networks such as deep convolutional neural networks \cite{Zhou20}, where linear constraints can be used to impose convolutional structures.
We leave these directions for future work.
\end{document}
\begin{document}
\date{ }
\title{\vspace*{12mm}}
\vspace*{-2mm}
\pagestyle{fancy}
\fancyhead{}
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\fancyhead[CE]{Frank Uhlig}
\fancyhead[CO]{Convergent look-ahead Difference Formulas}
\fancyhead[RO]{\thepage}
\fancyhead[LE]{\thepage}
\thispagestyle{empty}
\vspace*{-6mm}
{\normalsize
\noindent
{\bf Abstract : }
\noindent
Zhang Neural Networks rely on convergent 1-step ahead finite difference formulas of which very few are known. Those which are known have been constructed in ad-hoc ways and suffer from low truncation error orders. This paper develops a constructive method to find convergent look-ahead finite difference schemes of higher truncation error orders. The method consists of seeding the free variables of a linear system comprised of Taylor expansion coefficients followed by a minimization algorithm for the maximal magnitude root of the formula's characteristic polynomial. This helps us find new convergent 1-step ahead finite difference formulas of any truncation error order. Once a polynomial has been found with roots inside the complex unit circle and no repeated roots on it, the associated look-ahead ZNN discretization formula is convergent and can be used for solving any discretized ZNN based model. Our method recreates and validates the few known convergent formulas, all of which have truncation error orders at most 4. It also creates new convergent 1-step ahead difference formulas with truncation error orders 5 through 8.\\[2mm]
{\bf Subject Classifications :} \ 65Q10, 65L12, 92B20 \\[2mm]
{\bf Key Words :} finite difference formula, look-ahead difference formula, Taylor expansion, linear systems, free variable, characteristic polynomial, convergent multistep method, Zhang neural network, truncation error order}
\section{\Large Introduction}
Finite difference formulas have a long history of over 200 years in computational mathematics. They came about after the development of Calculus in the late 17th century and were introduced to estimate the behavior and slope of functions or to approximate areas and volumes. In the differentiation realm, one of the first such formulas that is still relevant today is Euler's forward finite difference formula, written here in its symmetric form as
\begin{equation} \label{Eulerappr} \dot{y}_j \approx \dfrac{y_{j+1} - y_{j-1}}{2 \tau}
\end{equation}
where $ y_j = y(t_j)$ with $t_j = t_0 + j \tau$ for a constant step size $\tau$, an initial time instance $t_0$ and any $j \ge 1$. If we solve (\ref{Eulerappr}) for $y_{j+1}$ we obtain the symmetric 1-step ahead finite difference formula of Euler
\begin{equation} \label{EulerFD} y_{j+1} = y_{j-1} + 2 \tau \dot{y}_j \ .
\end{equation}
Assembling the $y_j$ terms on the left hand side of (\ref{EulerFD}) leads to
\begin{equation}\label{charpolyE} y_{j+1} - y_{j-1} = 2 \tau \dot{y}_j \ .
\end{equation}
Next we interpret the left hand side of equation (\ref{charpolyE}) as a polynomial $p$ of smallest degree in a variable $x$, where the subscripts in (\ref{charpolyE}) become the powers of $x$, namely $p(x) = x^2 - 1$. The polynomial $p$ is called the characteristic polynomial of the difference equation (\ref{charpolyE}). This process is familiar to anyone who has taken a first course in numerical analysis and is described in every elementary textbook on the subject. We have explained this fundamental process here in full detail because we will use it repeatedly in what follows.\\[1mm]
The roots of the characteristic polynomial $p$ for Euler's finite difference formula (\ref{EulerFD}) are $+1$ and $-1$. They both lie on the periphery of the unit circle in $\mathbb{C}$ and are distinct. Therefore the 1-step Euler method is convergent. The requirement that all roots of $p$ must lie inside the closed unit disk and no repeated roots may lie on the unit circle is necessary and sufficient for convergence, see e.g. \cite[ch. 17.6.2]{EMU96}. Convergent finite difference schemes can be used repeatedly to trace solutions of differential equations in a look-ahead way, albeit for Euler with a low order of accuracy. This was initially done by Bunse-Gerstner et al in \cite{BBMN91} in 1991 for computing time-varying SVDs efficiently via a look-ahead Euler based integrator and subsequently in many other papers.\\[-3mm]
Zhang Neural Networks were first developed early in the new millennium by Yunong Zhang and others, then called Zeroing Neural Networks, \cite{ZJW2002}. The idea was taken up by engineers and implemented in many time-varying applications, with well over three hundred articles published, mostly in engineering and applied math journals. Recently there have been impulses from numerical analysts, but ZNN is still largely unknown in numerical circles. ZNN based methods have been used to find time-varying reciprocals, square roots, generalized inverses and pseudoinverses, to solve linear, Sylvester, Lyapunov and other equations or inequalities, to compute matrix eigenvalues and eigenvectors and almost anything matrix related, and in optimal design and control, minimization, and so forth. The list of real-world applications goes on and on, from robotics to autonomous cars and experimental aircraft, where sensor data arrives at frequencies such as 50 Hz and an objective must be met accurately, 1-step ahead, in real-time and for real-world situations. ZNN methods can be easily implemented in on-chip designs for practical control applications, see e.g. \cite{ZG2015}.
They currently have many industrial uses and give systems and machines improved performance. See e.g.
\cite{LMUZa2018,LMUZb2018,QZY2019,XL2016,ZJW2002,ZKXY2010,ZLYL2012,ZYLHW2017,ZYLUH2018,ZYGZ2011} for a short glimpse of their vast potential with time-varying systems.
Most recently there have been substantial advances on the numerical behavior of continuous-time ZNN methods for matrix problems, such as by Lin Xiao et al in \cite{XLLTK} on stability and robustness in control applications, in \cite{XZZLL} on robot applications, and on general time-varying matrix inverse problems when using time-varying decay constants $\eta(t) > 0$ in \cite{WWS16,WSW18}, for example. \\[2mm]
Zhang Neural Networks (ZNN) are designed for and most efficiently used to solve time-varying multi-dimensional equations $f(t) = b(t)$ predictively with high accuracy and quickly in real-time. There they use look-ahead finite difference formulas to solve a problem specific error differential equation. All ZNN methods are based on the error equation $e(t) = f(t) - b(t) \stackrel{!}{=}0$ and the stipulation that $e(t)$ should decay exponentially fast to zero, i.e.,
\begin{equation}\label{eDE}
\dot{e}(t) = -\eta e(t) \quad \text{for } \eta > 0\ .
\end{equation}
When associated with a given continuous-time $f(t) = b(t)$ model, discretized ZNN methods can easily solve the associated discrete time-varying problem $f(t_k) = b(t_k)$ where the time steps $t_k$ are equidistant and the input data is derived from repeated sensor readings. The discretized method predicts the model's solution at time $t_{k+1}$ from a certain subset of the previous iterates $f(t_j)$ and $b(t_j)$ with $j \leq k$ and does so shortly after time $t_k$ by using certain derivatives that the error DE (\ref{eDE}) requires. Discrete ZNN methods must construct the next iterate well before the next time instance $t_{k+1}$ arrives. They succeed with high accuracy and a truncation error order of $O(\tau^{m+1})$ if the chosen 1-step ahead discretization formula has truncation error order $m+1$ for the constant sampling gap $\tau$. \\[-3mm]
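As a quick numerical illustration (ours, not from the paper), a forward-Euler discretization of the error ODE $\dot e = -\eta e$ above reproduces the stipulated exponential decay up to the expected discretization error:

```python
import math

# e' = -eta*e discretized by forward Euler with sampling gap tau:
# e_{k+1} = (1 - eta*tau)*e_k, versus the exact decay e(t) = e(0)*exp(-eta*t).
eta, tau, steps = 1.0, 0.01, 100          # integrate up to t = 1
e = 1.0
for _ in range(steps):
    e *= 1.0 - eta * tau
exact = math.exp(-eta * tau * steps)
print(e, exact)                            # ~0.366 vs ~0.368
```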
Section 2 below describes how to set up the mechanics for finding high order convergent look-ahead finite difference formulas via Linear Algebra and elementary Matrix Theory. Section 3 then describes a characteristic polynomial root minimization process that helps us find look-ahead finite difference schemes which satisfy the convergence conditions. Section 4 provides a list of new and high truncation order convergent 1-step ahead finite difference schemes of truncation error orders up to $O(\tau^8)$, as well as open problems. Several of our newly found convergent 1-step ahead finite difference formulas are tested regarding their accuracy and convergence behavior in \cite{FUFoV} on the parameter-varying complex matrix field of values problem.
\section{Construction of general look-ahead Schemes from Seeds via Taylor Polynomials and elementary Linear Algebra}
A recent literature search for known convergent look-ahead finite difference formulas found less than half a dozen such formulas, all of which had truncation error orders less than or equal to 4. This is so despite hundreds of potentially known look-ahead discretization formulas in the literature. Most of them fail the characteristic roots condition and therefore they are unusable for ZNN type methods, such as the look-ahead but unstable formulas (27) to (30) in \cite{LMUZa2018}. With repeated roots on the unit circle, oscillations will eventually set in; with roots outside the unit circle, divergence to infinity will occur whenever corresponding fundamental solutions creep into the current states. With this paper we more than double the range of available convergent look-ahead methods from truncation error orders 2, 3, and 4, up to error orders 5, 6, 7 and 8. The new high error order formulas speed up convergence and improve accuracy to near machine constant error levels when properly implemented. \\[1mm]
This linear algebraic section develops the basis for an algorithm to determine look-ahead finite difference formulas of any truncation order, regardless of convergence or not. The computed look-ahead methods will then be used in the next section to act as initial guesses or seeds for starting a roots minimization process that may find convergent look-ahead schemes or it might not. Note that the previously found convergent methods were all (except for Euler) computed by ad hoc methods with lucky guesses and clever schemes. Here the process is formalized and developed into a computer code that needs no luck, no sweat and no tears.\\[-3mm]
Let us consider a discrete time-varying state vector $x_j = x(t_j) = x(j \cdot \tau)$ for a constant sampling gap $\tau$ and $j = 0,1,2,...$ and write out $\ell + 1$ explicit Taylor expansions for $x_{j+1}, x_{j-1}, ..., x_{j-\ell}$ around $x_j$ as follows:\\[-5mm]
\begin{eqnarray}
x_{j+1} & = & x_j ~ +~ \tau \dot{x}_j \ \overbrace { + \ \dfrac{\tau^2}{2!} ~ \ddot{x}_j \ \ ~ + ~ \ \ \dfrac{\tau^3}{3!} ~ \overset{\dots}{x}_j ~ \ \ ... \ \ \ \ \ \ \ + \ \dfrac{\tau^m}{m!} ~ \overset{m}{\dot{x}}_j \ \ \ \ \ \ \ \ \ } \ ~ + \ O(\tau^{m+1})\\[1mm]
x_{j-1} & = & x_j ~ - ~ \tau \dot{x}_j \ + \ \dfrac{\tau^2}{2!} ~ \ddot{x}_j \ \ ~ - ~ \ \ \dfrac{\tau^3}{3!} ~ \overset{\dots}{x}_j ~ \ \ ... \ \ \ + (-1)^m \dfrac{\tau^m}{m!} ~ \overset{m}{\dot{x}}_j \ \ \ + \ O(\tau^{m+1})\\[1mm]
x_{j-2} & = & x_j - 2\tau \dot{x}_j \ + \dfrac{(2\tau)^2}{2!} ~ \ddot{x}_j - \dfrac{(2\tau)^3}{3!} ~ \overset{\dots}{x}_j ~ ... + (-1)^m \dfrac{(2\tau)^m}{m!} ~ \overset{m}{\dot{x}}_j +O(\tau^{m+1})\\[1mm]
x_{j-3} & = & x_j - 3\tau \dot{x}_j \ + \dfrac{(3\tau)^2}{2!} ~ \ddot{x}_j - \dfrac{(3\tau)^3}{3!} ~ \overset{\dots}{x}_j ~ ... + (-1)^m \dfrac{(3\tau)^m}{m!} ~ \overset{m}{\dot{x}}_j +O(\tau^{m+1})\\
& \vdots& \\[2mm]
x_{j-\ell} & = & x_j - \ell\tau \dot{x}_j \ \ \underbrace { + \dfrac{(\ell\tau)^2}{2!} ~ \ddot{x}_j - \dfrac{(\ell\tau)^3}{3!} ~ \overset{\dots}{x}_j ~ ... \ + (-1)^m \dfrac{(\ell\tau)^m}{m!} ~ \overset{m}{\dot{x}}_j ~ } + ~ O \ (\tau^{m+1})
\end{eqnarray}
Each right hand side of the Taylor expansion rows or equations above contains $m+2$ terms. The central under- and overbraced $m-1$ 'column terms' on the right hand side of the equal signs each contain a factor of identical powers of $\tau$ and identical varying order partial derivatives of $x_j$. Our interest lies only in the remaining 'rational number' factors in the third through $(m+1)$st 'columns' on the right hand side of equations (5) through (10), i.e., for the moment we omit the powers of $\tau$ and the derivatives $\overset{r}{\dot{x}}_j$ for $r = 2, ..., m$ throughout what immediately follows. If we can find a linear combination of the $\ell+1$ equations (5) through (10) that makes the 'braced' $m-1$ number terms in each of these 'columns' disappear or become 0, we have found an equation for $x_{j+1}$ in terms of the already known values of $x_j$, $x_{j-1}$, ..., $x_{j-\ell}$, the derivative ${\dot{x}}_j$ with an overall error term of order $O(\tau^{m+1})$. These are the only items that are left once the braced region's linear row combination has become 0. To zero out the braced region, we now collect the relevant rational numbers factors in the constant matrix $A_{\ell+1,m-1}$
\begin{equation}\label{Amatrix}
A = \begin{pmat} \dfrac{1}{2!} & \dfrac{1}{3!} & \dfrac{1}{4!} & \cdots & \dfrac{1}{m!}\\[3mm]
\dfrac{1}{2!} & -\dfrac{1}{3!} & \dfrac{1}{4!} & \cdots & (-1)^m~ \dfrac{1}{m!}\\[3mm]
\dfrac{2^2}{2!} & - \dfrac{2^3}{3!} & \dfrac{2^4}{4!} & \cdots & (-1)^m~ \dfrac{2^m}{m!}\\[1.5mm]
\vdots & \vdots & \vdots & & \\[1mm]
\dfrac{\ell^2}{2!} & - \dfrac{\ell^3}{3!} & \dfrac{\ell^4}{4!} & \cdots & (-1)^m~ \dfrac{\ell^m}{m!}
\end{pmat} \in \mathbb{R}^{\ell+1,m-1} \ .
\end{equation}
$A$'s entry $a_{u,v}$ in row $u$ and column $v$ is $\dfrac{(-1)^{v+1} ~ (u-1)^{v+1}}{(v+1)!}$ for $2 \leq u \leq \ell +1$ and $a_{1,v} = \dfrac{1}{(v+1)!}$ for all $v = 1, ..., m-1$. The complete over- and underbraced summed terms in equations (5) through (10) have the matrix-vector product form
\begin{equation}\label{ADx}
A_{\ell+1,m-1} \cdot taudx = \begin{pmat} \dfrac{1}{2!} & \dfrac{1}{3!} & \dfrac{1}{4!} & \cdots & \dfrac{1}{m!}\\[3mm]
\dfrac{1}{2!} & -\dfrac{1}{3!} & \dfrac{1}{4!} & \cdots & (-1)^m~ \dfrac{1}{m!}\\[3mm]
\dfrac{2^2}{2!} & - \dfrac{2^3}{3!} & \dfrac{2^4}{4!} & \cdots & (-1)^m~ \dfrac{2^m}{m!}\\[1.5mm]
\vdots & \vdots & \vdots & & \\[1mm]
\dfrac{\ell^2}{2!} & - \dfrac{\ell^3}{3!} & \dfrac{\ell^4}{4!} & \cdots & (-1)^m~ \dfrac{\ell^m}{m!}
\end{pmat}
\begin{pmat}
\tau^2 \ \ddot{x}_j \\[1mm] \tau^3 \ \overset{\dots}{x}_j \\[1mm] \tau^4 \ \overset{4}{\dot{x}}_j \\[1mm] \vdots \\[1mm] \tau^{m-1} \ \overset{m-1}{\dot{x}}_j \\[1mm] \tau^m \ \overset{m}{\dot{x}}_j
\end{pmat}
\end{equation}
where the vector $taudx \in \mathbb{R}^{m-1}$ contains the increasing powers of $\tau$ multiplied by the respective higher derivatives of $x_j$ as entries. If we can find a left kernel row vector $x \in \mathbb{R}^{\ell+1}$ for $A$ with $x\cdot A = o_{m-1}$, the zero row vector in $\mathbb{R}^{m-1}$, then $x \cdot A \cdot taudx = 0 \in \mathbb{R}$ as well.
If we then form the linear combination of the $\ell+1$ equations in rows (5) through (10) as prescribed by the coefficients of $x$, the linear combination of the sums in the under- and overbraced columns in (5) through (10) will vanish and we obtain a single look-ahead formula, involving only $x_{j+1}, x_j, x_{j-1}, ..., x_{j-\ell}$, and $\dot{x}_j$ with an error term of order $O(\tau^{m+1})$. A non-zero left kernel vector $x$ for $A_{\ell+1,m-1}$ exists as soon as the number of rows $\ell +1$ of $A$ exceeds the number $m-1$ of columns of $A_{\ell+1,m-1}$. \\[-3mm]
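The construction can be sanity-checked in the smallest case $\ell = 1$, $m = 2$, where the left null vector $(1,-1)$ of $A$ recovers Euler's look-ahead formula $x_{j+1} = x_{j-1} + 2\tau\dot{x}_j$. The sketch below (exact rational arithmetic; helper names are ours) rebuilds $A$, verifies the null vector, and assembles the characteristic polynomial $p(x)=x^2-1$ and the derivative factor $c=2$:

```python
from fractions import Fraction
from math import factorial

def taylor_matrix(rows, m):
    """Entries of the Taylor-coefficient matrix A as in the text:
    a_{1,v} = 1/(v+1)! and, for u >= 2,
    a_{u,v} = (-1)^(v+1) * (u-1)^(v+1) / (v+1)!,  v = 1, ..., m-1."""
    return [[Fraction((1 if u == 1 else (-1) ** (v + 1) * (u - 1) ** (v + 1)),
                      factorial(v + 1))
             for v in range(1, m)]
            for u in range(1, rows + 1)]

# l = 1, m = 2: two expansion rows, one braced column, A = [[1/2], [1/2]].
A = taylor_matrix(2, 2)
q = [Fraction(1), Fraction(-1)]      # normalized left null vector (seed -1)
assert all(sum(q[u] * A[u][v] for u in range(2)) == 0 for v in range(1))

# p = [1; -sum(q); q(2:end)] and c = p applied to [1; 0; -(1:2m-3)'].
p = [q[0], -sum(q)] + q[1:]          # -> [1, 0, -1], i.e. x^2 - 1
w = [1, 0] + [-k for k in range(1, len(p) - 1)]
c = sum(pi * wi for pi, wi in zip(p, w))   # -> 2, the 2*tau factor
print(p, c)
```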
A left null row vector $x$ of $A$, when transposed, is a right null column vector of $A^T$, and vice versa.
Column null vectors can be found easily from a reduced row echelon form $R$ of $A^T$ by setting the free variables of $R_{m-1,\ell+1}$ equal to a nonzero seed vector $y$ of length $\ell + 1 - (m-1) = \ell -m+2$. In this case, which will be called the regular case from now on, $R$ is a single row block matrix with the identity matrix $I_{m-1}$ appearing in the first block position, because the coefficient matrix $A$ always has full rank $m-1$ and the linear system is underdetermined. In general, a dense $m-1$ by $m-1$ matrix $B$ appears in the second block position of $R$, i.e., $R$ has the form
\begin{equation}\label{Bdef}
R = \begin{pmat} I_{m-1} \ , \ B_{m-1,m-1} \end{pmat}_{m-1,2(m-1)} \ .
\end{equation}
This dimensional situation works very well here. In fact the method works for any matrix $A_{k+s,k}$ and any nonzero seed vector $y \in \mathbb{R}^s$: every such seed spawns a null vector $q \in \mathbb{R}^{k+s}$ for $A^T$ as $q =$ {\tt [-R*[zeros(k,1);y];y]} in Matlab notation. We then replace the vector $q$ by $q/q(1)$ in order to arrive at a normalized characteristic polynomial $p$ for the associated look-ahead finite difference equation.\\
Once such a null vector $q\in \mathbb{R}^{2(m-1)}$ has been computed from a seed vector $y \in \mathbb{R}^{m-1}$ in the regular $A_{2(m-1),m-1}$ case where $\ell+1 = 2(m-1)$, our algorithm forms the specific linear combination of the set of equations (5) through (10) that $q$ suggests, in order to zero out all contributions from the entries in the under- and overbraced third through $(m+1)$st columns on the right hand side of equations (5) to (10). Then we separate the 1-step ahead state $x_{j+1}$ on the left hand side, with the accumulations for the current and earlier states $x_j, x_{j-1}, ..., x_{j-2m+3}$ and the first time derivative $\dot{x}_j$ according to $q$ on the right hand side of the linearly combined equations. This process yields a finite difference multistep formula for $2m-1$ equidistant state vectors. Its characteristic polynomial is the normalized polynomial
$p =$ {\tt [1;-sum(q);q(2:2(m-1))]} $\in \mathbb{R}^{2m-1}$, again in Matlab notation. The polynomial $p$ describes a 1-step ahead difference equation for $2m-1$ contiguous instances, i.e., a $(2m-1)$-IFD formula in our abbreviation for '$(2m-1)$-Instance Difference Formula':
\begin{equation}\label{findiffe}
x_{j+1} + p_2 x_{j} + ... + p_{2m-1} x_{j-2m+3} = c \dot{x}_j \ .
\end{equation}
We solve (\ref{findiffe}) for $x_{j+1}$ by rearranging terms and obtain the associated look-ahead rule
\begin{equation}\label{laheadr}
x_{j+1} = -(p_2 x_{j} + ... +p_{ 2m -1} x_{j-2m+3}) + c \dot{x}_{\tilde j}
\end{equation}
where we have incorporated the linear combination of the first derivative $\dot{x}_j$ terms in equations (5) through (10) in the constant $c =$ {\tt p*[1;0;-(1:2m-3)']} (in Matlab notation). Here $p$ (with leading coefficient $p_1$ normalized to 1) is in row vector form so that the matrix product gives the dot product $c$. And $ \dot{x}_{\tilde j}$ is a backward approximation of the derivative $\dot{x}_{ j} $ at time $t_j$ of sufficiently high truncation error order that can be computed from previous $x_{..}$ state data. To use formula (\ref{laheadr}) in the $A_{2m-2,m-1}$ regular case we clearly need $2m-2$ starting values for the $x_{..}$ terms. And we need $k+s$ starting values $x_{..}$ in the more general $A_{k+s,k}$ situation.
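To illustrate the bookkeeping, the following Python sketch (a hedged translation of the Matlab notation above; the actual codes are in Matlab) forms $p$ and the derivative coefficient $c$ from a null vector $q$ of $A^T$. The sample null vector below is the one consistent with the known 5-IFD formula listed in Section 4 and is used for illustration only.

```python
import numpy as np

def lookahead_from_nullvector(q):
    """Build the normalized characteristic polynomial p and the derivative
    coefficient c from a null vector q of A^T, following
    p = [1; -sum(q); q(2:end)] and c = p*[1;0;-(1:2m-3)']."""
    q = np.asarray(q, dtype=float)
    q = q / q[0]                          # normalize so that q(1) = 1
    p = np.concatenate(([1.0, -q.sum()], q[1:]))
    # weights 1, 0, -1, -2, ... multiply the x_{j+1}, x_j, x_{j-1}, ... slots
    w = np.concatenate(([1.0, 0.0], -np.arange(1, len(p) - 1, dtype=float)))
    c = p @ w                             # a dot product, a single number
    return p, c

# null vector reproducing the 5-IFD formula (E) of Section 4 (illustration only)
p, c = lookahead_from_nullvector([8, -6, -5, 2])
print(p)   # coefficients 1, 0.125, -0.75, -0.625, 0.25
print(c)   # 2.25
```

The resulting coefficients and the value $c = 2.25$ agree with the {\tt TPOLY} output reproduced in Section 4.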
\\[2mm]
Our next task is to find convergent finite difference 1-step ahead formulas of a form such as (\ref{laheadr}). Convergence of such multistep formulas depends on the location of their characteristic polynomial's roots: these must all lie in the closed unit disk in $\mathbb{C}$, and any roots that lie on the unit circle must be simple. See e.g. \cite[Ch. 17.6.2 and Definition 17.17, p. 475]{EMU96}. How to achieve convergence behavior here is the subject of our next Section.
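For reference, this root condition can be checked mechanically. The sketch below (Python with NumPy; our own naming, not part of the original codes) flags a characteristic polynomial as admissible when all roots lie in the closed unit disk and any root of magnitude one is simple, up to a numerical tolerance.

```python
import numpy as np

def satisfies_root_condition(p, tol=1e-9):
    """Check the convergence (root) condition for a characteristic
    polynomial p, given with highest coefficient first."""
    r = np.roots(p)
    if np.any(np.abs(r) > 1 + tol):
        return False                       # a root outside the unit disk
    on_circle = r[np.abs(np.abs(r) - 1) <= tol]
    for z in on_circle:                    # roots on the circle must be simple
        if np.sum(np.abs(r - z) <= tol) > 1:
            return False
    return True

print(satisfies_root_condition([1, 0, -1]))   # x^2 - 1: roots +1, -1, both simple
print(satisfies_root_condition([2, -5, 2]))   # roots 2 and 1/2: fails the condition
```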
\vspace*{-2mm}
\section{A Root Minimization Approach to find convergent look-ahead Difference Schemes of high Error Orders from general look-ahead Schemes}
In Section 2 we have described how to start from a short seed vector and construct a 1-step ahead finite difference scheme for possible use in general ZNN processes. The behavior of the associated characteristic polynomial determines convergence or divergence. If the constructed scheme is not convergent for the chosen seed $y$ then there are roots of its characteristic polynomial that exceed 1 in magnitude or there are repeated roots on the unit circle. In our experience, the former, i.e., roots outside the unit circle, is seemingly always the cause for non-convergence; we have never encountered the latter. Note that the matrix $A_{k+s,k}$ has very special rational number entries of integer powers divided by factorials. Therefore the reduced row echelon form $R$ of $A^T$ contains mostly integers or rational numbers with small integer denominators in its second block $B$ as defined in (\ref{Bdef}). \\[1mm]
In this section we propose a minimizing algorithm for the maximal magnitude characteristic polynomial root in terms of a given seed vector $y$. For $A_{k+s,k}$ the seed vector space is $\mathbb{R}^s$ and any seed $y$ therein spawns a unique look-ahead finite difference scheme and an associated characteristic polynomial as was shown in Section 2. However, the set of possible characteristic polynomials itself is not a linear space since sums of such polynomials may or may not be representatives of look-ahead difference schemes. Therefore we can only vary the seed and not the intermediate polynomials in the minimization process and we will have to search indirectly for a characteristic polynomial with smaller maximal magnitude root in a neighborhood of the starting seed $y$. We implement this minimization process by using the multidimensional built-in Matlab {\tt fminsearch} minimizer function until we have found a seed with an associated characteristic polynomial that is convergent, or until we conclude that no convergent formula arises from the chosen seed. {\tt fminsearch} uses the Nelder-Mead downhill simplex method \cite{NM65} that finds local minima for non-linear functions such as ours without using derivatives. It mimics the method of steepest descent and performs a local minimizing search via multiple function evaluations. The main task is to discover generating seed vectors $y$ that can start the minimizing iteration and find a characteristic polynomial that is convergent according to the root conditions above. Our seed selection process is currently based on random entry seeds since we know of no better way.\\[1mm]
For the general $A_{k+s,k}$ case and after many different approaches for choosing our random entry seed vectors $ y \in \mathbb{R}^s$, we decided to start from seeds $y$ with normally distributed entries and run {\tt fminsearch} to try to find a local minimum of the maximal magnitude root of the associated characteristic polynomial. Our minimization algorithm runs through a double do loop: an outside loop for a number (5 to 20 or 100 ...) of random entry starting seed vectors and an inner loop for a number (4 to 7 or 15 ...) of randomized restarts from a previously computed {\tt fminsearch} polynomial that is non-convergent. We project its seed onto a point with newly randomized entries nearby and use this new seed for a subsequent inner loop run several (2, 5 or 8) times.\\[1mm]
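The double loop idea can be sketched as follows in Python. This is only a schematic stand-in for our Matlab code: the matrix {\tt B} below is a randomly chosen placeholder for the actual rref block of $A^T$ (not the true Taylor expansion data), and the {\tt fminsearch} Nelder-Mead step is replaced by a naive random perturbation step.

```python
import numpy as np

rng = np.random.default_rng(1)
k, s = 2, 2
B = rng.standard_normal((k, s))        # placeholder for the rref block B of A^T

def max_root(y):
    """Seed y -> null vector q = [-B*y; y] -> characteristic polynomial
    -> largest root magnitude (the quantity we try to push down to 1)."""
    q = np.concatenate((-B @ y, y))
    q = q / q[0]
    p = np.concatenate(([1.0, -q.sum()], q[1:]))
    return np.max(np.abs(np.roots(p)))

best_y, best_val = None, np.inf
for outer in range(20):                # outer loop: fresh random entry seeds
    y = rng.standard_normal(s)
    for inner in range(5):             # inner loop: randomized restarts nearby
        cand = y + 0.3 * rng.standard_normal(s)
        if max_root(cand) < max_root(y):
            y = cand
    if max_root(y) < best_val:
        best_y, best_val = y, max_root(y)
print(best_val)                        # best largest root magnitude found
```

With the true rref block $B$ and {\tt fminsearch} in place of the perturbation step, this is exactly the search structure of {\tt runconv1step.m}.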
The whole MATLAB code fits onto 80 lines of code, plus 30 lines of comments and two dozen lines of known convergent 1-step ahead discretization formulas of varying truncation error orders between $O(\tau^2)$ and $O(\tau^8)$. Previously, convergent look-ahead methods were completely unknown for truncation error orders above $O(\tau^4)$. Now we can compute many convergent polynomials of higher error orders quickly.
\vspace*{-2mm}
\section{List of all known and new convergent look-ahead finite Difference\\ Schemes and some computed Results}
Here is the complete short list of the six known 1-step ahead discretization formulas and their properties ordered by ascending truncation error order.\\[1mm]
{\bf (A) \ Symmetric Euler Differentiation and Discretization Formula with ZNN truncation error order $O(\tau^2)$} :\\
\hspace*{14mm} $ \dot{y}_j = \dfrac{1}{2\tau} y_{j+1} - \dfrac{1}{2\tau} y_{j-1} + O(\tau)$ or\\[1mm]
\hspace*{14mm} $y_{j+1} = y_{j-1} + O(\tau^2) + ... \text{ problem specific terms from model's right hand side and } \dot{x}_j $\\[2mm]
\hspace*{6mm} Characteristic Polynomial : $p(x) = x^2 - 1$\\
Formally and according to multistep theory, the Euler formula should lead to a convergent look-ahead discretization formula, but the symmetric Euler formula is not convergent as such in practice. Why this is so remains a surprising mystery.\\ [2mm]
{\bf (B) \ 4-IFD Formula from \cite[equations (10), (12)]{LMUZb2018} with ZNN truncation error order $O(\tau^3)$} : \\[1mm]
\hspace*{6mm} $ (10) \ \ \dot{y}_j = \dfrac{1}{\tau} y_{j+1} - \dfrac{3}{2\tau} y_j + \dfrac{1}{\tau} y_{j-1} - \dfrac{1}{2\tau} y_{j-2} + O(\tau^2) $ or\\[1mm]
\hspace*{6mm} $ (12) \ \ y_{j+1} = \dfrac{3}{2} y_j - y_{j-1} + \dfrac{1}{2} y_{j -2} + O(\tau^3) + ... \text{ problem specific terms from model's rhs and } \dot{x}_j$\\[2mm]
\hspace*{6mm} Characteristic Polynomial : $p(x) = 2x^3 - 3 x^2 + 2x -1$, (not normalized)\\[2mm]
{\bf (C) \ 4-IFD Formula from \cite[equation (11)]{LMUZb2018} with ZNN truncation error order $O(\tau^3)$} : \\[1mm]
\hspace*{6mm} $ (11) \ \ \dot{y}_j = \dfrac{3}{5\tau} y_{j+1} - \dfrac{3}{10\tau} y_j - \dfrac{1}{5\tau} y_{j-1} - \dfrac{1}{10\tau} y_{j-2} + O(\tau^2) $ or\\[1mm]
\hspace*{14mm} $ y_{j+1} = \dfrac{1}{2} y_j + \dfrac{1}{3} y_{j-1} + \dfrac{1}{6} y_{j -2} + O(\tau^3) + ... \text{ problem specific terms from ...}$\\[2mm]
\hspace*{6mm} Characteristic Polynomial : $p(x) = 6x^3 - 3 x^2 - 2x -1$, (not normalized)\\[2mm]
{\bf (D) \ \ FIFD Formula from \cite[equations (14), (21)]{LMUZa2018} with ZNN truncation error order $O(\tau^3)$} : \\[1mm]
\hspace*{6mm} $ (14) \ \ \dot{y}_j = \dfrac{5}{8\tau} y_{j+1} - \dfrac{3}{8\tau} y_j - \dfrac{1}{8\tau} y_{j-1} - \dfrac{1}{8\tau} y_{j-2} + O(\tau^2) $ or \\[1mm]
\hspace*{6mm} $ (21) \ \ y_{j+1} = \dfrac{3}{5} y_j + \dfrac{1}{5} y_{j-1} + \dfrac{1}{5} y_{j -2} + O(\tau^3) + ... \text{ problem specific terms from ...}$\\[2mm]
\hspace*{6mm} Characteristic Polynomial : $p(x) = 5x^3 - 3 x^2 - x -1$, (not normalized)\\[2mm]
{\bf (E) \ \ 5-IFD Formula from \cite[equations (23), (27)]{LMUZb2018} with ZNN truncation error order $O(\tau^4)$} : \\[1mm]
\hspace*{6mm} $ (23) \ \ \dot{y}_j = \dfrac{4}{9\tau} y_{j+1} + \dfrac{1}{18\tau} y_j - \dfrac{1}{3\tau} y_{j-1} - \dfrac{5}{18\tau} y_{j-2} + \dfrac{1}{9\tau} y_{j-3} + O(\tau^3) $ or \\[1mm]
\hspace*{6mm} $ (27) \ \ y_{j+1} = -\dfrac{1}{8} y_j + \dfrac{3}{4} y_{j-1} + \dfrac{5}{8} y_{j -2} - \dfrac{1}{4} y_{j-3} + O(\tau^4) + ... \text{ problem specific terms from ...}$\\[2mm]
\hspace*{6mm} Characteristic Polynomial : $p(x) = 8x^4 +x^3 - 6 x^2 - 5x +2$, (not normalized)\\[2mm]
{\bf (F) \ \ 6N$\tau$CD Formula from \cite[equations (16), (18)]{QZGYL2018} with ZNN truncation error order $O(\tau^4)$} : \\[1mm]
\hspace*{6mm} $ (16) \ \ \dot{y}_j = \dfrac{13}{24\tau} y_{j+1} - \dfrac{1}{4\tau} y_j - \dfrac{1}{12\tau} y_{j-1} - \dfrac{1}{6\tau} y_{j-2} - \dfrac{1}{8\tau} y_{j-3} + \dfrac{1}{12\tau} y_{j-4} + O(\tau^3) $ \ or \\[1mm]
\hspace*{6mm} $ (18) \ \ y_{j+1} = \dfrac{6}{13} y_j + \dfrac{2}{13} y_{j-1} + \dfrac{4}{13} y_{j -2} + \dfrac{3}{13} y_{j-3} - \dfrac{2}{13} y_{j-4} + O(\tau^4) + ... \text{ problem specific terms ...}$ \\[2mm]
\hspace*{6mm} Characteristic Polynomial : $p(x) = 13x^5 -6x^4 -2x^3 - 4 x^2 - 3x +2$, (not normalized)\\[-2mm]
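All characteristic polynomials listed above can be validated numerically. The following Python snippet (our own check, not part of the original codes) confirms consistency, $p(1)=0$, and the root condition for each of (A) through (F); for (E) it also reproduces the second largest root magnitude $0.9025$ shown in the {\tt TPOLY} output below.

```python
import numpy as np

# characteristic polynomials (A)-(F), highest coefficient first
polys = {
    "A": [1, 0, -1],
    "B": [2, -3, 2, -1],
    "C": [6, -3, -2, -1],
    "D": [5, -3, -1, -1],
    "E": [8, 1, -6, -5, 2],
    "F": [13, -6, -2, -4, -3, 2],
}

for name, p in polys.items():
    r = np.roots(p)
    assert abs(np.polyval(p, 1.0)) < 1e-9      # consistency: x = 1 is a root
    assert np.max(np.abs(r)) <= 1 + 1e-9       # no roots outside the unit disk
    print(name, np.sort(np.abs(r))[::-1])      # root magnitudes, largest first
```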
Next we mention a list of new convergent look-ahead discretization formulas that is detailed within our MATLAB codes.
We start with some results obtained using our {\tt runconv1step.m} file \cite{Uconstrcode2018} in Matlab. There are 4 integer inputs to
{\tt runconv1step.m} : the first input indicates how many outer loop runs are desired and the second input indicates how many separate repeats with altered seed inputs should be performed in an inner do loop for each outer run. A call of {\tt runconv1step(40,10,k,s)} would thus require 40 outer loop runs with 10 separate seeds each (400 runs of {\tt fminsearch} in total). This call tries to find convergent polynomials of truncation error order $k+2$, i.e., polynomials of degree $k+s$ with all roots in the closed unit disk in $\mathbb{C}$ and only simple roots on the unit circle. Here $k$ can be any integer up to 6 and $s$ should be at least equal to $k$ so that the rational entry matrix $A_{k+s,k}$ has a nontrivial left nullspace. Here $s$ is the number of real entries in the seed vector $y$ and our highest tested-for truncation error order $O(\tau^{k+2})$ is $k+2 = 6+2 =8$ for $k = 6$. We see no real need to go beyond $k = 6$ since at a 50 Hz sampling rate, i.e., $\tau = 0.02$ s, a truncation error order of $\tau^8 = 0.02^8 \approx 2.56 \cdot 10^{-14}$ seems close enough to machine precision to wonder about further improvements.\\[-3mm]
The MATLAB output for validating the known convergent 5-IFD formula (E) above from \cite{LMUZb2018}, which is listed in the examples list in our code {\tt runconv1step.m} \ \cite{Uconstrcode2018}, is as follows, with $y$ denoting the seed vector of length $s$ that was used.
\begin{verbatim}
format short, tic, TPOLY = runconv1step(1,1,2,2), toc
Truncation_error_order =
4
y =
-5 2
TPOLY = 1.0000 0.1250 -0.7500 -0.6250 0.2500 0 -0.0000 0.9025 2.2500
Elapsed time is 0.008423 seconds.
\end{verbatim}
\noindent
The first $k+s+1$ entries in {\tt TPOLY} are the computed coefficients of the normalized convergent polynomial $p(x) = x^4 +0.125x^3 - 0.75 x^2 - 0.625x + 0.25$ of degree $k+s = 4$ in decreasing exponent order. These are followed by a data separating zero, the deviation of the maximal magnitude root of $p$ from 1 (which is nearly zero), the magnitude of the second largest magnitude root of $p$ and finally the coefficient of $\tau$ that is to be used inside the ZNN discretization. The reader can sample any of the ZNN papers in the bibliography to learn how to set up and implement a discrete ZNN method from any known convergent polynomial.\\
Next is a similar example for a convergent truncation error order 5 formula from the seed {\tt y = [a,110,-40]} with variable constant $1 \leq a \leq 2.5$, also given in our formula list inside {\tt runconv1step.m}. We have set $a = 1$ to obtain the output below.\\[-4mm]
\begin{verbatim}
format rat, tic, TPOLY = runconv1step(1,1,3,3), toc
Truncation_error_order =
5
y =
1 110 -40
TPOLY =
1 80/237 -182/237 -206/237 1/237 110/237 -40/237 0 0.0000 446/465 196/79
Elapsed time is 0.007906 seconds.
\end{verbatim}
Note that the resulting normalized characteristic polynomial has again only rational coefficients. This occurrence is rather rare.
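As a sanity check (ours, in Python), the rational polynomial in the {\tt TPOLY} row above indeed satisfies $p(1)=0$ exactly and the root condition numerically; the second largest root magnitude agrees with the reported value $446/465$ up to the {\tt format rat} display accuracy.

```python
import numpy as np
from fractions import Fraction

# normalized characteristic polynomial from the TPOLY output above
p = [Fraction(c, 237) for c in (237, 80, -182, -206, 1, 110, -40)]
assert sum(p) == 0                      # consistency: x = 1 is an exact root

mags = np.sort(np.abs(np.roots([float(c) for c in p])))[::-1]
print(mags[0])                          # largest magnitude root, equal to 1
print(mags[1])                          # second largest, close to 446/465
```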
If we run {\tt runconv1step(20,10,5,5)} with $20 \cdot 10 = 200$ individual searches we typically capture 4 to 6 convergent polynomials with discrete ZNN model truncation error orders $O(\tau^7)$. When doing the same with $k = 4$ and $s = 4$ in 200 searches, our code discovers around 40 convergent polynomials with associated truncation error order $O(\tau^6)$ in ZNN applications. It seems advantageous to increase $s$ beyond $s = k$ to find more higher truncation error order polynomials quickly. For example {\tt runconv1step(20,10,5,6)} with $s = k+1 = 6$ exhibits around a dozen good polynomials and {\tt runconv1step(20,10,5,7)} with $s$ set to $k+2 = 7$ around 18. These success numbers are influenced by the random nature of our seeds $y$ of any fixed length $s\geq k$. Note that if the call of {\tt runconv1step(runs,jend,k,s)} generates feasible polynomials, these will all be of degree $k+s$. Therefore an implementation inside any discrete ZNN model must compute $k+s$ starting values before an associated 1-step forward discrete ZNN iteration can be run.\\[1mm]
We have found very few high order convergent polynomials with integer coefficients. The usual output {\tt TPOLY} from {\tt runconv1step(...,...,k,s)} comes in 16 digit exponent 10 Matlab notation. The look-ahead convergent finite difference formulas of high truncation error orders $O(\tau^5)$ to $O(\tau^8)$ at the bottom of our list in
{\tt runconv1step(...,...,k,s)} can be generated from the given seed information there. We have not tested the quality of the newly computed convergent polynomials and whether there are any noticeable differences in real-world problems between them. For example we do not know whether those polynomials with well separated largest and second largest magnitude roots perform better than those with multiple very near 1 magnitude roots. To study this issue, we have included the magnitude of the second largest root in the last but one column of {\tt TPOLY}.\\[1mm]
This paper describes how we can now create high error order convergent look-ahead finite difference schemes for any $k$ and $s \geq k$. Our newly found \verb+k_s+ designated polynomials \verb+3_3+, \verb+4_5+, \verb+5_5+ and \verb+5_6+ of truncation orders 5 through 7 were tested recently on the matrix field of values problem and compared in \cite{FUFoV}.\\[-2mm]
We conclude with a more theoretical aspect and an open question encountered with finding convergent polynomials for 1-step ahead ZNN processes.\\[-2mm]
\noindent
{\bf Remark} and {\bf Open Question :} \\[2mm]
\hspace*{5.5mm} \begin{minipage}{154mm}{ Every polynomial that we have constructed from any seed vector $y \in \mathbb{R}^s$ by our method has had at least one root on the unit circle (within $10^{-15}$ numerical accuracy). This is so even for non-convergent polynomials with some roots outside the unit disk. Is it true in general that all such Taylor expansion matrix $A$ based polynomials have at least one root on the unit circle in $\mathbb{C}$? Or are there some such convergent polynomials whose roots all lie inside the open unit disk?}
\end{minipage}
\section{Assessment and Outlook}
Our method to construct convergent look-ahead finite difference equations
hinges on two separate ideas and subsequently two constructive steps. The first problem, part one, is essentially linear. We want to eliminate the second to $m$th derivatives in the Taylor expansion equations (5) to (10) around $x_j$ and find a certain linear combination of the $\ell +1$ equations that can give us a candidate finite difference scheme. This linear first branch of our task starts from a short seed vector $y$. It completes the seed of length $s$ to a full difference equation of length $k+s$ and the associated characteristic polynomial coefficients. The resulting difference equation relates the 1-step ahead $x_{j+1}$ value to earlier $x_i$ values with $i \leq j$ and to the derivative at $x_j$ with an error term of order $O(\tau^{m+1})$. The second problem, part two, is highly nonlinear. It tries to select those finite difference equations whose characteristic polynomials satisfy certain root conditions for convergence. To solve the second, the nonlinear polynomial roots problem, we have chosen a multidimensional minimization function that starts from a candidate finite difference scheme and its associated set of coefficients and varies the original seed with an eye on minimizing the largest magnitude root of the associated varying characteristic polynomial. It is the nature of the very first seed from the linear part one that eventually determines the convergence qualities of the method at the end of each minimizing part two process. Only then do we know whether a given seed succeeds.\\[1mm]
In most of our trial test runs from a seed to a possibly convergent finite difference scheme, the original maximal magnitude characteristic polynomial root does not dip down to 1 or below. Instead the maximal magnitude root usually settles around 1.008.., 1.01.. or larger and no longer budges, indicating that the original seed vector does not allow our algorithm to find a usable difference scheme and that we must end this run. Clearly not all seeds can solve our problem; their surrounding maximal root `valleys' simply may not dip low enough to be of use to us. But often enough, our randomized seed minimization algorithm leads to success in finding convergent look-ahead finite difference formulas with truncation error orders up to 8 where none were previously known.\\[2mm]
\end{document} | math | 36,973 |
\begin{document}
\title{
Billiard dynamics of bouncing dumbbell }
\author{Y. Baryshnikov, V. Blumen, K. Kim, V. Zharnitsky }
\address{Department of Mathematics, University of Illinois, Urbana, IL 61801}
\begin{abstract}
A system of two masses connected with a weightless rod (called a dumbbell in this paper) interacting with a flat boundary is considered. A sharp bound on the number of collisions with the boundary is found using billiard techniques. In the case when the ratio of masses is large and the dumbbell rotates fast, an adiabatic invariant is obtained.
\end{abstract}
\maketitle
\section{Introduction}
Coin flipping was already known to the ancient Romans as a way to decide an outcome \cite{telegraph}. More recently, scientists inspired by this old question of how unbiased a real (physical) coin is have been studying coin dynamics, see e.g. \cite{keller, mahadevan, diaconis}.
Previous studies have mainly focused on the dynamics of the flying coin assuming that it does not bounce and finding the effects of angular momentum
on the final orientation. Partial analysis in combination with numerical simulations of the bouncing effects has been done by Vulovic and Prange
\cite{vulovic}. It appears that this is the only reference that addressed the effect of bouncing on coin tossing.
On the other hand, there is a well developed theory of mathematical billiards: classical dynamics of a particle moving inside a bounded domain.
The particle moves along a straight line until it hits the boundary. Next, the particle reflects from the boundary according to Fermat's law.
The billiard problem originally appeared in the context of the Boltzmann ergodic hypothesis \cite{sinai} to verify physical assumptions about
ergodicity of a gas of elastic spheres.
However, various techniques in billiard dynamics turned out to be useful beyond the original physical problem. The so-called unfolding technique
(which is used in this paper) allows one to obtain estimates on the maximal number of bounces of a particle in a wedge.
One could expect that the bouncing coin dynamics could be interpreted as a billiard ball problem.
In this paper we consider a simpler system (with fewer degrees of freedom) which we call the dumbbell. The bouncing coin on a flat surface, restricted to have its axis of rotation always pointing in the same direction, can be modeled as a system of two masses connected with a weightless
rod.
The dumbbell dynamics that is studied in this article is a useful model to initiate investigation of this potentially useful relation.
Another motivation for the dumbbell dynamics comes from robotics exploratory problems, see {\em e.g.} \cite{lavalle}.
Consider an automated system that moves in a bounded domain and interacts with the boundary according to some simple laws.
In many applications, it is important to cover the whole region as {\em e.g.} in automated vacuum cleaners such as Roomba.
Then, a natural question arises: {\em what simple mechanical system can generate a dense coverage of a certain subset of the given
configuration space}.
The dumbbell, compared to a material point, has an extra degree of freedom which can generate more chaotic behavior as {\em e.g.} in Sinai billiards.
Indeed, a rapidly rotating dumbbell will quickly ``forget'' its initial orientation before the next encounter with the boundary
raising some hope for stronger ergodicity.
In this paper, we study the interaction of a dumbbell with the flat boundary. This is an important first step before understanding
the full dynamics of the dumbbell in some simple domains. By appropriately rescaling the variables, we obtain an associated single particle
billiard problem with the boundary corresponding to the collision curve (which is piecewise smooth) in the configuration space.
The number of collisions of the dumbbell with the boundary before scattering out depends on the mass ratio $m_1/m_2$. If this ratio
is far from 1, then the notion of adiabatic invariance can be introduced as there is sufficient separation of time scales.
We prove an adiabatic invariant type theorem and we describe under what conditions it can be used.
Finally, we estimate the maximal number of bounces of the dumbbell with the flat boundary. \\
\noindent
{\bf Notation:}
We use some standard notation when dealing with asymptotic expansions in order to avoid cumbersome use of implicit constants. \\
$ f \lesssim g \Leftrightarrow f= O(g) \Leftrightarrow f \leq Cg $ for some $C>0$\\
$f \gtrsim g \Leftrightarrow g = O(f)$ \\
$f \sim g \Leftrightarrow f \lesssim g \,\, {\rm and} \,\, f \gtrsim g $ \\
\section{Collision Laws}
\subsection{Dumbbell-like System}
Let us consider a dumbbell-like system, which consists of two point masses $\mo$, $\mt$, connected by a weightless rigid rod of length $1$ in the two-dimensional space with coordinates $(x,y)$. The coordinates of $\mo$, $\mt$, and the center of mass of the system are given by $(\xo, \yo)$, $(\xt, \yt)$, and $(x, y)$, respectively. Let $\phi$ be the angle measured in the counterclockwise direction from the horizontal base line through $\mo$ to the rod. We also define the mass ratios $\ds \bo = \frac{\mo}{\mo+\mt}$ and $\ds \bt=\frac{\mt}{\mo+\mt}$, which equal the distances from the center of mass to $\mt$ and to $\mo$, respectively.
\begin{figure}
\caption{The dumbbell system.}
\end{figure}
The dumbbell moves freely in the space until it hits the floor. In this system, the velocity of the center of mass in $x$ direction is constant since there is no force acting on the system in $x$ direction. Thus, we may assume without loss of generality that the center of mass does not move in $x$ direction. With this reduction, the dumbbell configuration space is two dimensional with the natural choice of coordinates $(y,\phi)$.
The moment of inertia of the dumbbell is given by
\[I=\mo \bt^2 + \mt \bo^2 = \bo\bt (\mo + \mt). \]
Introducing the total mass $m = \mo + \mt$, we can write the kinetic energy of the system as
\begin{align}\label{eq:egy}
K= \frac{1}{2}m\yd^2 + \frac{1}{2}\bo\bt m \pd^2.
\end{align}
Using the relations,
\[
\yo = y + \bt \sin \phi
\]
\[
\yt = y - \bo \sin \phi,
\]
we find the velocities of each mass
\[
\yod = \yd + \bt \pd \cos \phi
\]
\[
\ytd = \yd - \bo \pd \cos \phi.
\]
\subsection{Derivation of Collision Laws}
By rescaling $y=\sqrt{\frac{I}{m}}Y$, we rewrite the kinetic energy
\[
K=\frac{m}{2}\yd^2 + \frac{I}{2}\pd^2=\frac{I}{2}\left( \Yd^2 + \pd^2 \right).
\]
By Hamilton's principle of least action, true orbits extremize
\[
\int_{t_0, Y_0, \phi_0}^{t_1, Y_1, \phi_1} K( \Yd, \pd) \mathrm{d}t.
\]
Since the kinetic energy is equal to that of a free particle,
the trajectories are straight lines between two collisions. When the dumbbell hits the boundary, the collision law is the same as in the classical billiard since in $(Y, \phi)$ coordinates the action is the same. Using the relations
\begin{equation*}
\begin{split}
y_1 &=y + \bt\sin \phi \geq 0\\
y_2 &=y - \bo\sin \phi \geq 0,
\end{split}
\end{equation*}
we find the boundaries for the dumbbell dynamics in the $Y$-$\phi$ plane:
\begin{equation}\tag{2a}\label{bd1}
Y= -\sqrt{\frac{m}{I}}\bt\sin\phi =-\sqrt{\frac{\bt}{\bo}}\sin \phi
\end{equation}
\begin{equation}\tag{2b}\label{bd2}
Y= \;\;\, \sqrt{\frac{m}{I}}\bo\sin\phi=\;\;\,\sqrt{\frac{\bo}{\bt}} \sin \phi.
\end{equation}
The dumbbell hits the floor if one of the above inequalities becomes an equality. Therefore, we take the maximum of two equations to get the boundaries:
\begin{align}\tag{2c}\label{bd}
Y={\rm max} \left\{-\sqrt{\bt/\bo}\sin\phi, \sqrt{\bo/\bt}\sin\phi \right\} \text{ for } \phi \in [0, 2\pi].
\end{align}
Note that this boundary has non-smooth corners at $\phi = 0, \pi$. This is the case when the dumbbell's two masses hit the floor at the same time. We will not consider this degenerate case in our paper.
Now we will derive the collision law for the case when only $\mo$ hits the boundary. We recall that given a vector $\vv_-$ and a normal vector $\vn$, the reflection of $\vv_-$ about the hyperplane orthogonal to $\vn$ is given by
\begin{equation}\label{eq:ref}\tag{3}
\vv_{+} = -2 \frac{\vv_-\cdot \vn }{\vn \cdot \vn} \vn + \vv _-.
\end{equation}
Here and in the remainder of the paper, $x_-, y_-, ...$ denote the corresponding values right before the collision and $x_+, y_+, ...$ the corresponding values right before the next collision.
According to the collision law, the angle of reflection is equal to the angle of incidence. In our case, $\vn$ is the normal vector to the boundary
\[
Y =\ds -\sqrt{\frac{m}{I}}\bt \sin \phi
\]
so that
\begin{equation*}
\begin{split}
\vn &=\left[1,\sqrt{m/I} \, \bt \cos \phi\right]\\
\vv_-&= \left[\Yd_-, \pdm\right]=\left[\sqrt{m/I} \, \ydm, \pdm\right].
\end{split}
\end{equation*}
Then, using (\ref{eq:ref}), we compute $\ds \vv_{+} = \left[\Yd_+, \pdp\right]$. In this way, we express the translational and the angular velocities after the collision in terms of the velocities before $\mo$ hits the floor. Changing back to the original coordinates, we have
\begin{align}\label{eq:law}\tag{4}
\left( \begin{array}{r}
\ydp \\
\\
\pdp\\
\end{array} \right)
&=
\left( \begin{array}{c}
\ds \sqrt{\frac{I}{m}} \Yd_+ \\
\\
\pdp\\
\end{array} \right)
=\notag
\left( \begin{array}{l}
\ds \ydm \left ( -1 + \frac{2 \bt\cos^2 \phi}{\bo + \bt \cos^2\phi} \right ) - \pdm \left( \frac{2 \bo\bt\cos \phi}{\bo + \bt \cos^2\phi} \right)\\
\\
\ds \pdm \left( 1 - \frac{2 \bt\cos^2 \phi}{\bo + \bt \cos^2\phi} \right) - \ydm \left ( \frac{2\cos\phi}{\bo + \bt \cos^2 \phi} \right)
\end{array}\right).
\end{align}
\begin{rmk}
The bouncing law for the other case, when $\mt$ hits the boundary can be obtained in a similar manner: we switch $\bo$ and $\bt$, replace $\cos \phi$ and $\sin \phi$ with $-\cos \phi$ and $-\sin \phi$, and replace $\yo$ with $\yt$.
\end{rmk}
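Since the collision law (\ref{eq:law}) is an orthogonal reflection in the rescaled coordinates, it must conserve the kinetic energy (\ref{eq:egy}) and be an involution. The short Python check below (our own verification, with arbitrarily chosen numerical values) confirms both properties for the case when $\mo$ hits the floor.

```python
import numpy as np

def collide_m1(ydot, phidot, phi, b1, b2):
    """Collision law (4): velocities after m1 hits the floor."""
    D = b1 + b2 * np.cos(phi)**2
    ydot_p = ydot * (-1 + 2*b2*np.cos(phi)**2 / D) \
             - phidot * (2*b1*b2*np.cos(phi) / D)
    phidot_p = phidot * (1 - 2*b2*np.cos(phi)**2 / D) \
               - ydot * (2*np.cos(phi) / D)
    return ydot_p, phidot_p

m1, m2 = 0.3, 1.7
b1, b2 = m1/(m1+m2), m2/(m1+m2)
E = lambda yd, pd: 0.5*(m1+m2)*yd**2 + 0.5*b1*b2*(m1+m2)*pd**2  # energy (1)

ydot, phidot, phi = -0.8, 2.3, 4.9     # arbitrary pre-collision data, sin(phi) < 0
ydot_p, phidot_p = collide_m1(ydot, phidot, phi, b1, b2)
print(E(ydot, phidot), E(ydot_p, phidot_p))   # equal: the collision is elastic
```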
\section{Adiabatic Invariant}
Consider the case when $\mo \ll \mt$ and $\mo$ rotates around $\mt$ with high angular velocity $\pd$ and assume that the center of mass has slow downward velocity compared to $\pd$. Since multiplying the velocities $(\dot \phi, \dot y)$ by a constant does not change the orbit, we normalize $\dot \phi$ to be of order 1; then $\dot y$ is small. Consider such a dumbbell
slowly approaching the floor, rotating with angular velocity of order 1, {\em i.e.}
$\dot \phi \sim 1$.
At some moment the small mass $\mo$ will hit the floor. If the angle $\phi = 3\pi/2$
(or sufficiently close to it), then the dumbbell will bounce away without experiencing any more collisions. This situation is rather exceptional.
A simple calculation shows that $|\phi-3\pi/2|$ will generically be of order $\sqrt{|\dot y|}$ in the limit $\dot y \rightarrow 0$. In this section we assume this favorable scenario.
For the corresponding set of initial conditions, we obtain an adiabatic invariant (nearly conserved quantity). We start by deriving approximate map between two consecutive bounces.
\begin{figure}
\caption{The light mass bounces many times off the floor while the large
mass slowly approaches the floor.}
\label{fig2}
\end{figure}
\begin{lemma}
\label{lemma_bounce}
Let $\bo = \e \ll 1$, $\pdm \neq 0$ and assume $\mo$ bounces off the floor and hits the floor next before $\mt$ does.
Then there exists a sufficiently small $ \de \gg \e$ such that if $ -\de < \ydm <0$ and $|\phi - \frac{3\pi}{2} | \gtrsim \sqrt \de$,
the collision map is given by
\begin{align}\label{eq:phi}\tag{5}
\pdp&=-\pdm - \frac{2}{\sqrt{1-\ym^2}}\ydm + O\left(\frac{\e}{\de}\right)
\end{align}
\begin{align}\label{eq:dist}\tag{6}
\yp&=\ym - \frac{2 \pi - 2 \arccos \ds \ym }{\pdm} \ydm + O\left(\de^{3/2}\right) + O\left(\frac{\e}{\sqrt{\de}}\right).
\end{align}
\end{lemma}
\begin{proof}
We prove (\ref{eq:phi}) in two steps. We first show that
\[
\pdp = -\pdm - \frac{2}{\cos \phi} \ydm + O\left(\frac{\e}{\de}\right)
\]
using the expression for $\pdp$ in (\ref{eq:law}). We have,
\begin{align*}
\pdp + \Big(\pdm &+ \frac{2}{\cos \phi} \ydm \Big)\\
&=\left( 1 - \frac{2 (1-\bo)\cos^2 \phi}{\bo + (1-\bo) \cos^2\phi} \right) \pdm+ \left( \frac{2\cos\phi}{\bo + (1-\bo) \cos^2 \phi}\right)\ydm + \left(\pdm + \frac{2}{\cos \phi} \ydm \right) \\
&=\frac{(\bo-(1-\bo)\cos^2 \phi)\pdm-(2\cos \phi )\ydm} {\bo \sin^2 \phi +\cos^2 \phi}+\left(\pdm+\frac{2}{\cos \phi} \ydm \right)\\
&=\ds {\bo}\left(\frac{2\pdm+2{\ydm}\frac{\sin^2{\phi}}{\cos{\phi}}}{{\bo}\sin^2{\phi}+\cos^2{\phi}}\right).
\end{align*}
For sufficiently small $\de$, $\left|\phi-\frac{3\pi}{2}\right| \gtrsim \sqrt{\de}$ implies $\cos \phi \gtrsim \sqrt{\de}$. It follows that
\begin{align*}
\left|\pdp + \left(\pdm + \frac{2}{\cos \phi} \ydm\right) \right| &\lesssim
\e \left|\frac{1+\frac{2\de}{\sqrt\de}}{\de}\right| = O\left(\frac{\e}{\de}\right).
\end{align*}
Observe from Figure 2 that $\ds \phi = \frac{3\pi}{2} - \arccos \left( \frac{\ym}{1-\bo} \right)$.
Thus,
\begin{align*}
\pdp+\left(\pdm + \frac{2}{\cos \phi}\ydm \right)
=\pdp+\pdm + \frac{2}{\cos \left(\ds \frac{3\pi}{2} - \arccos \left(\frac{\ym}{1-\bo}\right) \right)}\ydm\\
=\pdp+\pdm + \frac{2}{\sqrt{1-\left(\ds \frac{\ym}{1-\bo}\right)^2}}\ydm
=\pdp+ \pdm + \frac{2}{\sqrt{1-\ym^2}} \ydm+ R_1,
\end{align*}
where
\[\left|R_1\right| \leq \bo\left|\frac{2\ydm\ym}{\sqrt{((1-\bo)^2 - \ym^2)^3}}\right|.\]
Using that $\sqrt{(1-\bo)^2- \ym^2}=(1-\bo) \cos \phi$, we obtain
\[\left|R_1\right| \lesssim \e\left|\frac{2\de}{((1-\bo)\sqrt \de)^3}\right|.\]
Combining the results, we have
\begin{align*}
\left|\pdp+ \pdm + \frac{2}{\sqrt{1-\ym^2}} \ydm \right|&\leq\left|\pdp+\left(\pdm + \frac{2}{\cos \phi}\ydm \right) \right| + \left|R_1 \right|\\
&\lesssim\e \left(\left|\frac{1+\frac{2\de}{\sqrt\de}}{\de}\right|+ \left|\frac{2\de} {(1-\bo)^3 \de^{3/2}}\right|\right) = O\left(\frac{\e}{\de}\right).
\end{align*}
This completes the proof of (\ref{eq:phi}). \\
Let $t$ be the time between the two consecutive collisions of $\mo$. Then $\yp = \ym - \ydm t$. The angular distance that $\mo$ traveled is given by
\begin{align*}
\psi &= 2\pi - \arccos\left(\frac{\ym}{1-\bo}\right) - \arccos(\yp)
= 2\pi - \arccos\left(\frac{\ym}{1-\bo}\right) - \arccos\left(\frac{\ym - \ydm t}{1-\bo}\right)\\
&= 2\pi - 2\arccos\left(\frac{\ym}{1-\bo}\right) + R_2 =2\pi - 2\arccos \ym + R_3+ R_2,
\end{align*}
where $R_2$ and $R_3$ are the error estimates for the Taylor series expansion and are given explicitly by
\begin{align*}
|R_2| &\leq \left|\frac{\ydm t}{\sqrt{(1-\bo)^2 - \ym^2} }\right|\\
|R_3| & \leq \left| \frac{\bo\ym}{(1-\bo)\sqrt{(1-\bo)^2 - \ym^2}} \right|.
\end{align*}
Therefore, we have
\begin{align*}
\ds \yp &= \ym - \ydm t =\ym - \ydm\left( \frac{\psi}{\pdm}\right)\\
&= \ym - \frac{\ydm}{\pdm} \left(2\pi - 2\arccos \ym + R_2 + R_3 \right)\\
&= \ym - \frac{2 \pi - 2 \arccos \ym}{\pdm} \ydm + \frac{\ydm}{\pdm}(R_2 + R_3).
\end{align*}
Since $\mo$ can travel at most $2\pi$ between two collisions, $t$ is bounded by $\ds |t| < \frac{2 \pi}{\pd}$. Also note that $R_2$ and $R_3$ contain the factors $\ydm$ and $\bo$, respectively. We finish the proof of (\ref{eq:dist}) by computing
\begin{align*}
\ds \Big|\yp - \ym &+ \frac{2 \pi - 2 \arccos \ds \ym }{\pdm}\ydm \Big|
\lesssim \left|\frac{\ydm}{\pdm} (R_2 + R_3)\right|\\
& \lesssim \ydm^2\left|\frac{2\pi \ym}{\pdm \sqrt{(1-\bo)^2 - \ym^2}} \right| + \bo \left|\frac{2\pi \ym}{\pdm(1-\bo)\sqrt{(1-\bo)^2 - \ym^2}} \right| \\
& \lesssim \de^2\left|\frac{2\pi}{\sqrt \de} \right| + \e \left|\frac{2\pi}{(1-\bo)^2\sqrt\de} \right| = O\left(\de^{3/2}\right) + O\left(\frac{\e}{\sqrt\de}\right).
\end{align*}
\end{proof}
\begin{corollary}
Under the same assumptions as in Lemma \ref{lemma_bounce}, except that
$\left|\phi - \frac{3\pi}{2} \right|\gtrsim {\delta^k}$ for some $0\leq k \leq 1$ and $\e \ll \de^{2k}$, the variables after the collision satisfy equations analogous to (\ref{eq:phi}) and (\ref{eq:dist}), but with different error terms:
\begin{align}\tag{5a}\label{eq:phi'}
\pdp=-\pdm - \frac{2}{\sqrt{1-\ym^2}}\ydm + O\left(\frac{\e}{\de^{2k}}\right)
\end{align}
\begin{align}\tag{6a}\label{eq:dist'}
\yp=\ym - \frac{2 \pi - 2 \arccos \ds \ym }{\pdm} \ydm + O\left(\frac{\de^2}{\de^k}\right) + O\left(\frac{\e}{\de^{k}}\right).
\end{align}
\end{corollary}
\begin{proof}
When computing the error terms, use $\cos \phi \gtrsim \de^k$.
\end{proof}
Now, we can state the adiabatic invariance theorem for the special case when the light mass hits the floor and the dumbbell is far away from the vertical position $\phi = 3\pi/2$.
\begin{theorem}
\label{ai_theo}
Suppose right before the collision $\pdm \neq 0$ and $\phi-\frac{3\pi}{2} \neq 0$. Then there is $\de >0$ such that if
$0< \e =\de^2$, $ -\de < \dot y_0 < 0$,
then there exists an adiabatic invariant of the dumbbell system, given by $I = |\pd|f(y)$, where $f(y) = \pi -\arccos y $. In other words, $|\dot{\phi}_{N}| f(y_{N}) - |\dot{\phi}_{0}| f(y_{0})= O(\de)$ after $N= O(\de^{-1})$ collisions.
\end{theorem}
\begin{proof}
We prove this by finding $f(y)$ that satisfies
\[
|\pdp| f(\yp) - |\pdm| f(\ym)= O(\de^2).
\]
When $\e = \de^2$ and $\de$ is sufficiently small, it follows from (\ref{eq:dist'}) that,
\[
f(\yp)=f(\ym) - \left(\frac{2 \pi - 2 \arccos \ym }{\pdm} \ydm + O(\de^2) \right)f'(\ym).
\]
Then, we have
\begin{align*}
|\pdp| f(\yp)&= \left| \pdm + \frac{2\ydm}{\ds \sqrt{1-\ym^2}} +O(\de^2) \right| \left( f(\ym) -\left( \frac{2 \pi - 2 \arccos \ym }{\pdm} \ydm + O(\de^2)\right) f'(\ym) \right)\\
&=|\pdm| f(\ym) - \left(2 \pi - 2 \arccos\ym\right)\ydm f'(\ym) + \left|\frac{2}{\ds \sqrt{1-\ym^2}} \ydm\right| f(\ym)+ O(\de^2).
\end{align*}
Therefore, $f(y)$ satisfies $|\pdp| f(\yp) - |\pdm| f(\ym) = O(\de^2)$ provided
\[
- (2 \pi - 2 \arccos\ym ) \ydm f'(\ym) + \frac{2}{\ds \sqrt{1-\ym^2}} \ydm f(\ym)= 0.
\]
The solution of the above equation is given by
\[
f(\ym ) = \pi - \arccos \ym.
\]
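As a quick sanity check (assuming the SymPy library is available, which is not part of the paper), one can verify symbolically that this $f$ annihilates the left-hand side of the ODE above, after dividing out the common factor $\ydm$:

```python
import sympy as sp

y = sp.symbols('y')
f = sp.pi - sp.acos(y)  # candidate solution f(y) = pi - arccos(y)

# ODE residual (with the common factor ydot divided out):
# -(2*pi - 2*arccos(y)) * f'(y) + 2/sqrt(1 - y^2) * f(y)
residual = (-(2*sp.pi - 2*sp.acos(y)) * sp.diff(f, y)
            + 2/sp.sqrt(1 - y**2) * f)
assert sp.simplify(residual) == 0
```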
Let $\ds N = O(\de^{-1})$ and let $\dot{\phi}_{N}$ and $y_{N}$ be the angular velocity and the distance after the $N$-th collision. Then, we have
\begin{align*}
|\dot{\phi}_{N}| f(y_{N}) - |\dot{\phi}_{0}| f(y_{0}) &= \sum_{k=1}^{N} \left(|\dot{\phi}_{k}| f(y_{k}) - |\dot{\phi}_{k-1}| f(y_{k-1}) \right)
\lesssim N \de^2 \lesssim \de.
\end{align*}
\end{proof}
\begin{rmk}
The adiabatic invariant has a natural geometric meaning: it is the angular velocity times the distance traveled by the light mass
between two consecutive collisions.
\end{rmk}
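The one-bounce near-invariance of $I=|\pd|f(y)$ can also be illustrated numerically: iterating the leading-order collision map from (\ref{eq:phi'}) and (\ref{eq:dist'}) with the error terms dropped (the sample values below are chosen only for illustration), the change of $I$ per bounce is $O(\de^2)$, much smaller than the $O(\de)$ change of $|\pd|$ itself:

```python
import math

def bounce(phi_dot, y, y_dot):
    """One collision of the light mass, using the leading-order map from
    (5a)-(6a) with the error terms dropped."""
    phi_dot_new = -phi_dot - 2.0 * y_dot / math.sqrt(1.0 - y**2)
    y_new = y - (2.0 * math.pi - 2.0 * math.acos(y)) * y_dot / phi_dot
    return phi_dot_new, y_new

def I(phi_dot, y):
    # candidate adiabatic invariant |phi_dot| * (pi - arccos y)
    return abs(phi_dot) * (math.pi - math.acos(y))

delta = 1e-3
phi_dot, y, y_dot = 1.0, 0.3, -delta   # illustrative sample values
phi_dot_new, y_new = bounce(phi_dot, y, y_dot)
dI = I(phi_dot_new, y_new) - I(phi_dot, y)
# per-bounce change of I is O(delta^2)
assert abs(dI) < 100 * delta**2
```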
Now, we state the theorem for a realistic scenario when a rapidly rotating dumbbell scatters
off the floor.
\begin{theorem}
Let the dumbbell approach the floor from infinity with \mbox{$\dot \phi_- \neq 0$}.
There exists $\de > 0$ such that if $0<\e =\de^2 $, $-\de < \dot y_- <0$, $|\phi_0-\frac{3\pi}{2}| \sim \sqrt{\delta}$
then after $N = O(\delta^{-1})$ bounces, the final bounce being by $\mo$, the dumbbell leaves the floor with $I_N = I_0 + O(\sqrt\de)$. The adiabatic invariant is defined as
above $I = |\dot \phi | f(y)$.
\end{theorem}
\begin{rmk}
The condition on the angle $|\phi_0-\frac{3\pi}{2}|\sim \sqrt{\delta}$ comes naturally from the following argument. If $\yd =- \de$, the dumbbell approaching from infinity will naturally hit the floor when $y \gtrsim 1-\bo - \de$. Since $\de$ is small, this implies $|\phi_0-\frac{3\pi}{2}|\lesssim \sqrt{\de}$. If $\phi_0$ happens to be too close to $3\pi/2$, then there is no hope of
obtaining an adiabatic invariant, and we exclude such a set of initial conditions. In the limit $\delta\rightarrow 0$ the relative measure of the set where $|\phi_0-\frac{3\pi}{2}| = o(\sqrt{\de})$ tends to zero.
\end{rmk}
\begin{proof}
We will split the iterations (bounces) into two parts: before the $n$-th iteration and after it,
where $n = \lfloor\mu/\sqrt{\de}\rfloor$ and $\mu$ is sufficiently small (to be chosen later). We claim that after $n$ bounces, $|\phi_{n} - \frac{3\pi}{2}| \gtrsim \sqrt[4]\de$. To prove this claim, we use energy conservation of the dumbbell system (\ref{eq:egy}), and (\ref{eq:phi'}). We have
\begin{align}\label{ydp}\tag{7}
\e(1-\e)\left(\pdm + \frac{2}{\sqrt{1-\ym^2}}\ydm + O\left(\frac{\e}{\de}\right)\right) ^2 + \ydp^2 = \e(1-\e) \pdm^2 + \ydm^2.
\end{align}
Next,
\begin{align*}
|\ydp^2 - \ydm^2 | =\left| \e(1-\e)\pdm^2 - \e(1-\e)\left(\pdm + \frac{2}{\sqrt{1-\ym^2}}\ydm + O\left(\frac{\e}{\de}\right)\right) ^2 \right|\\
\leq\left|\e \left( \frac{4\ydm^2}{1-\ym^2}+ \frac{4\pdm \ydm}{\sqrt{1-\ym^2}}+ 2\pdm O\left(\frac{\e}{\de}\right) + \frac{4\ydm}{\sqrt{1-\ym^2}}O\left(\frac{\e}{\de}\right)+O\left(\frac{\e}{\de}\right)^2 \right) \right|
\end{align*}
By our assumptions, $1-\ym \gtrsim \delta$ so it follows that
\begin{align*}
|\ydp^2 - \ydm^2| \lesssim \de^2 \left(\frac{\de^2}{\de} + \frac{\de}{\sqrt \de}+ \de + \frac{\de^2}{\sqrt \de} + \de^2 \right) \lesssim \de^{5/2},
\end{align*}
which, since $|\ydp + \ydm| \sim \de$, implies
\[
|\ydp - \ydm| \lesssim \delta^{3/2}.
\]
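For concreteness, this bound can be checked numerically from the energy relation (\ref{ydp}), dropping the $O(\e/\de)$ correction; the sample values below are illustrative only:

```python
import math

delta = 1e-3
eps = delta**2                         # epsilon = delta^2, as in the theorem
phi_dot, y, y_dot = 1.0, 0.3, -delta   # sample pre-collision values
s = math.sqrt(1.0 - y**2)

# energy relation (7), with the O(eps/delta) correction dropped
y_dot_plus_sq = (eps * (1 - eps) * phi_dot**2 + y_dot**2
                 - eps * (1 - eps) * (phi_dot + 2.0 * y_dot / s)**2)
y_dot_plus = math.sqrt(y_dot_plus_sq)

# the vertical speed changes only by O(delta^{3/2}) in one bounce
assert abs(y_dot_plus - abs(y_dot)) < delta**1.5
```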
After $n=\lfloor \mu/\sqrt\de \rfloor$ bounces, $|\yd_n - \yd_0 | \leq \de/2$ if $\mu$ is sufficiently small, and the vertical velocity is still of the same order, {\em i.e.} $\yd_n \sim \yd_0 \sim \de$. Then, at the
$n^{th}$ collision, the center of mass will be located at $y_n \lesssim 1 - \sqrt \de$,
which will imply $|\phi_{n} - \frac{3\pi}{2}| \gtrsim \sqrt[4]\de$. Now using Lemma 3.1, Corollary 3.2, and Theorem 3.3, we compute the error term of the adiabatic invariant under the assumption that the total number of collisions is bounded by
$N\lesssim \delta^{-1}$ and the heavy mass does not hit the floor.
\begin{align*}
|\dot{\phi}_{N}| f(y_N) &- |\dot{\phi}_{0}| f(y_{0}) \\
&=\sum_{k=1}^{n} \left(|\dot{\phi}_{k}| f(y_{k}) - |\dot{\phi}_{k-1}| f(y_{k-1}) \right)+ \sum_{n}^{N} \left(|\dot{\phi}_{k}| f(y_{k}) - |\dot{\phi}_{k-1}| f(y_{k-1}) \right)\\
&= \frac{\mu}{\sqrt\de} O\left(\de\right) + \left(\frac{C}{\de} \right) O\left(\de^{3/2}\right) =O(\sqrt\de).
\end{align*}
By the theorem proved in the next section there is indeed a uniform bound on the number of bounces.
\begin{comment}
By our assumption, the initial angle
$\phi_0$ is far enough from $\pi/2$, so Lemma \ref{lemma_bounce} and Theorem \ref{ai_theo}
can be applied multiple times for all bounces.
The only problem in directly applying
Theorem \ref{ai_theo} is that the total number of bounces can be larger than allowed by
the implicit constant in the Theorem \ref{ai_theo}. To get around this
issue, we estimate the change of adiabatic invariant if the number of bounces is given by $N = O(1/\delta^{1+\nu})$,
where $\nu>0$ is arbitrarily small constant. Clearly, this is asymptotically larger than the actual number
of bounces, provided $\delta$ is taken sufficiently small.
Assume also, for a moment, that the large mass never hits the floor. Then, we have
\[
|I_N -I_0| \leq \frac{1}{\delta^{1+\nu}} \, O(\delta^2) = O(\delta^{1-\nu})
\]
which proves this Theorem (under the assumption that the heavy mass did not hit the floor).
\end{comment}
If the heavy mass does hit the floor it can do so only once as shown in the next section.
We claim that the corresponding change in the adiabatic invariant will be only of order $\delta$.
Indeed, using formula \eqref{eq:law} and the comment after that, we obtain
\begin{align*}
\dot y_{+} &= -\dot y_{-} + O(\epsilon) \\
\dot \phi_{+} &= \dot \phi_{-} + O(\epsilon) + O(\delta),
\end{align*}
where subscripts $\pm$ denote the variables just after and before the larger mass hits the floor.
Let the pairs $(y_m,\dot \phi_m)$ and $(y_{m+1},\dot \phi_{m+1})$ denote the corresponding values of
$(y, \dot \phi)$ when the light mass hits the floor right before and after the large mass hits the floor.
Then, since $\dot y = O(\delta)$, we find that $y_{m+1}-y_m = O(\delta)$ and
$\dot \phi_{m+1}-\dot \phi_m = O(\delta)$. As a consequence,
\[
|\dot \phi_{m+1}| f(y_{m+1}) - |\dot \phi_{m}| f(y_{m}) = O(\delta)
\]
and the change in the adiabatic invariant due to the large mass hitting the floor is sufficiently small: $\Delta I = O(\delta)$.
\end{proof}
\section{Estimate of maximal number of collisions}
In this section, we estimate the maximal number of collisions of the dumbbell with the floor as a function of the mass ratios. As we have seen in Section 2.2, on the $(Y-\phi)$ plane the dumbbell reduces to a mass point with unit velocity and elastic reflection. We use the classical billiard result which states that the number of collisions inside a straight wedge with inner angle $\gamma$ is bounded by $N_{\g}=\lceil \pi/\g\rceil$, see e.g. \cite{tabachnikov}.
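This classical bound can be illustrated with a minimal numerical sketch of the angle recursion $\theta_{i+1}=\theta_i-\g$ behind the unfolding argument (the sampled angles below are illustrative only):

```python
import math

def wedge_collision_count(theta0, gamma):
    """Count reflections in a straight wedge of angle gamma via the unfolding
    recursion theta_{i+1} = theta_i - gamma: the trajectory keeps hitting the
    walls as long as the incidence angle exceeds gamma."""
    n, theta = 0, theta0
    while theta > gamma:
        theta -= gamma
        n += 1
    return n

# the count never exceeds the classical bound ceil(pi/gamma)
for gamma in [0.1, 0.3, math.pi / 4, 1.0]:
    bound = math.ceil(math.pi / gamma)
    assert all(wedge_collision_count(k * math.pi / 50, gamma) <= bound
               for k in range(1, 50))
```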
\subsection{Boundaries on $Y-\phi$ plane}
First, we discuss the properties of the boundaries of the dumbbell system on the $Y-\phi$ plane. \\
When $m_1=m_2$, we have the mass ratios $\bo = \bt = 1/2$. Recall from (\ref{bd}) that the boundaries are given by
\begin{align*}
Y={\rm max} \left\{-\sqrt{\bt/\bo}\sin\phi, \sqrt{\bo/\bt}\sin\phi \right\}= |\sin \phi | \text{ for } \phi \in [0, 2\pi].
\end{align*}
Note that the angle between the two sine waves is $\gamma = \pi/2$.\\
When $\mo \ne \mt$, it follows from (\ref{bd}) that the boundaries consist of two sine curves with different heights. We will assume $\mo < \mt$, since the case $\mt < \mo$ is symmetric. It is easy to see that, generically, in the limit $\mo/\mt \rightarrow 0$ most of the repeated collisions occur between the two peaks of (\ref{bd1}). In Section 4.2, we will find an upper bound on the number of collisions of the mass point with the boundaries. To start the proof, let us consider the straight wedge formed by the tangent lines to (\ref{bd1}) at $\phi=0$ and $\pi$. We call these tangent lines $\ell_0$ and $\ell_\pi$ respectively, and denote the angle of the straight wedge by $\g$, see Figure 3. We call the union of the sine waves for $Y>0$ with the tangent lines $\ell_0$ and $\ell_\pi$ for $Y\le 0$ the hybrid wedge.
\begin{figure}
\caption{Construction of the straight wedge and the hybrid wedge on $Y-\phi$ plane.}
\end{figure}
\subsection{The upper bound for the number of collisions}
We first introduce some notation. Denote the trajectory bouncing from the hybrid wedge by $v'$, and denote the approximating trajectory bouncing from the straight wedge by $v''$. When $v'$ or $v''$ is written with the subscript $i$, it denotes the segment of the corresponding trajectory between the $i$-th and the $(i+1)$-st bounces. Let $\theta_i'$ be the angle from the straight wedge to $v_i'$, and let $\theta_i''$ denote the angle from the straight wedge to $v_i''$ after the $i$-th collision. Define $\rho_i$ as the angle difference between the straight wedge and the curved wedge at the $i$-th collision of $v'$.
The trajectory terminates when the sequence of angles terminates (due to the absence of the next bounce), or when there are no more intersections with the straight wedge. This happens when the angle of intersection ($\theta$, $\theta'$ or $\theta''$) with the tangent line is less than or equal to $\g$.
\begin{figure}
\caption{Two different base cases for Lemma 4.1.}
\end{figure}
\begin{lemma}
Consider the hybrid wedge and the straight wedge described above. The sequence of angles $\theta_i''$, $i\ge 1$, terminates at the same index as, or after, the sequence of angles $\theta_i'$, $i\ge 1$.
\end{lemma}
\begin{proof}
Suppose that the initial segment $v_0'$ (of the full trajectory) crosses the straight wedge before it hits the hybrid wedge, as shown on the right panel of Figure 4. Then
\begin{align*}
\theta_1'=\pi-\g-2\rho_1'<\pi-\g=\theta_1''.
\end{align*}
When the initial segment $v_0'$ hits the hybrid wedge before crossing the straight wedge, as shown on the left panel, then set $\theta_1'=\theta_1''$.
Now, if $\theta_i'> \g$ and $\theta_i''>\g$ and the sequence $\theta_i'$ has not terminated, we can proceed by induction:
\begin{align*}
\theta_{i+1}'&=\theta_i'-\g-2\rho_{i+1}'\\
\theta_{i+1}''&=\theta_i''-\g
\end{align*}
which implies that $\theta_{i+1}'\le\theta_{i+1}''$.\\
Since $\theta_{i}'\le\theta_{i}''$ for all $i$, $v'$ terminates at the same time as or before $v''$.
\end{proof}
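The comparison argument in the proof can be sketched numerically: with the recursions $\theta_{i+1}'=\theta_i'-\g-2\rho_{i+1}'$ and $\theta_{i+1}''=\theta_i''-\g$, nonnegative corrections $\rho'$ can only make the primed sequence terminate earlier. The correction values below are made up for illustration:

```python
def termination_index(theta1, gamma, rho):
    """Index at which the sequence theta_{i+1} = theta_i - gamma - 2*rho_{i+1}
    first drops to <= gamma; rho is the list of corrections (all zero for the
    straight wedge, nonnegative for the hybrid wedge)."""
    i, theta = 1, theta1
    while theta > gamma and i <= len(rho):
        theta -= gamma + 2.0 * rho[i - 1]
        i += 1
    return i

gamma = 0.5
rho_hybrid = [0.25, 0.25, 0.0, 0.0, 0.0, 0.0]        # illustrative corrections
hybrid = termination_index(3.0, gamma, rho_hybrid)
straight = termination_index(3.0, gamma, [0.0] * 6)  # theta_1' = theta_1''
assert hybrid <= straight
```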
Define the bridge as the smaller sine wave created by $Y=-\sqrt{\bt/\bo}\sin\phi$ when $m_1 \ll m_2$ from $\phi=0$ to $\phi=\pi$. The union of the bridge with the hybrid wedge will create the boundary as it is actually defined by the dumbbell dynamics.
\begin{lemma} The presence of the bridge in the hybrid wedge increases the number of collisions of the dumbbell by at most one, compared with the number of collisions with the hybrid wedge alone.
\end{lemma}
\begin{proof}
Consider the true trajectory (denoted by $v$) that ``sees'' the bridge. Recall the definition of angle $\theta_i'$, which is the angle from the straight wedge to $v'_i$. Similarly, we let $\theta_i$ be the angle to $v_i$. Before $v$ intersects the bridge, by Lemma 4.1 we have
\begin{align*}
v_i &=v_i'\\
\theta_i&=\theta_i'\\
\theta_i&=\theta_{i-1}-\gamma-2\rho_{i}.
\end{align*}
Now define $\tau$ to be the angle measured from the horizontal line to the tangent line at the point where $v$ hits the bridge. Note that $\tau$ takes a positive value if the dumbbell hits the left half of the bridge, and $\tau$ takes a negative value if $v$ hits the right half of the bridge. We express $\theta_{i+1}$ after the bounce from the bridge in terms of $\theta_i$. By this convention, the bounce from the bridge does not increase the index count but we will have to add $+1$ in the end.\\
Then, we have
\begin{align*}\tag{8}\label{8}
\theta_{i+1}&=\pi-\theta_i-2\tau - 2\rho_{i+1}\\
\theta_{i+1}'&=\theta_i'-\gamma-2\rho_{i+1}'
\end{align*}
We may assume that $v$ hits the bridge with non-positive velocity in $Y$. If the dumbbell hits the bridge with positive velocity in $Y$, it will continue to move in the positive $Y$ direction after reflection from the bridge; in that case, we consider the reversed trajectory to bound the number of collisions. This allows us to restrict $\theta_i$. Moreover, $v_i$ naturally hits a higher point of the hybrid wedge than $v'_i$. We also assume that $v$ hits the left half of the bridge. Otherwise, we can reflect the orbit around the vertical line passing through the middle point of the bridge.
Utilizing the above arguments, we have the inequalities
\begin{align*}\tag{9} \label{9}
\frac{\pi+\g}{2}&<\theta_i \\
0<\tau &< \frac{\g}{2}\\
\rho_{i+1}' &< \rho_{i+1}.
\end{align*}
It is straightforward to verify that (\ref{8}) and (\ref{9}) imply $\theta_{i+1} \leq \theta_{i+1}'$. From the $(i+2)$-nd bounce on, if the sequence $\theta_i$ has not terminated, we can apply an induction argument similar to the proof of Lemma 4.1. We have the base case
\begin{align*}
\theta_{i+1} &\leq \theta_{i+1}'\\
\rho_{i+1} & \geq \rho_{i+1}'.
\end{align*}
Note that the $\rho$'s indicate the relative position of a collision point in the hybrid wedge. That is, if $\rho_{i+1} \geq \rho_{i+1}'$, then the starting point of $v_{i+1}$ is located at or above that of $v_{i+1}'$. Since $\theta_{i+1} \leq \theta_{i+1}'$ and $v_{i+1}$ starts above $v_{i+1}'$, we know that $v_{i+2}$ will start on the hybrid wedge higher than $v_{i+2}'$. This implies
\[
\rho_{i+2} \leq \rho_{i+2}'.
\]
Then using the recursive relationship,
\begin{align*}
\theta_{i+2}=\theta_{i+1}-\gamma-2\rho_{i+2}\\
\theta_{i+2}'=\theta_{i+1}'-\gamma-2\rho_{i+2}',
\end{align*}
we obtain $\theta_{i+2} \leq \theta_{i+2}'$. By induction $\theta_i \leq \theta_i '$ for all $i$. Taking into account the bounce on the bridge, we conclude that the number of bounces of $v$ will increase at most by one relative to that of $v'$. Note that in most cases, the number of bounces of $v$ will be less than the number of bounces of $v'$.
\end{proof}
Now we are ready to prove the main theorem.
\begin{theorem}
The number of collisions of the dumbbell is bounded above by $\ds N_{\g}=\big \lceil \pi/\g\big\rceil+1$, where $\ds \g = \pi - 2 \arctan \sqrt{\bt /\bo}$.
\end{theorem}
\begin{proof}
When $\mo =\mt$, as we found in Section 4.1, the boundaries form identical hybrid wedges which intersect at angle $\pi/2$. Using Lemma 4.1, we conclude that the upper bound for the number of collisions is $\lceil\pi/\g\rceil= 2$, which is less than $N_{\g}=3$.\\
When $\mo < \mt$, we consider the true boundaries, which consist of a hybrid wedge with the bridge. Using Lemma 4.1 and Lemma 4.2, we conclude that the maximal number of collisions with the true boundary is bounded above by $N_\g=\lceil\pi/\g\rceil+1$. Since $\g = \pi-2\arctan\sqrt{\bt/\bo}$, this completes the proof.
\end{proof}
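For illustration, the bound can be evaluated numerically for sample mass ratios (the values below are examples only):

```python
import math

def max_collisions(beta1, beta2):
    # N_gamma = ceil(pi / gamma) + 1 with gamma = pi - 2*arctan(sqrt(beta2/beta1))
    gamma = math.pi - 2.0 * math.atan(math.sqrt(beta2 / beta1))
    return math.ceil(math.pi / gamma) + 1

assert max_collisions(0.5, 0.5) == 3   # equal masses: gamma = pi/2
# a much lighter m1 allows many more bounces
assert max_collisions(0.01, 0.99) > max_collisions(0.5, 0.5)
```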
\section*{Acknowledgment}
The authors acknowledge support from National Science Foundation
grant DMS 08-38434 ``EMSW21-MCTP: Research Experience for Graduate Students.''
YMB and VZ were also partially supported by NSF grant DMS-0807897. The authors
would also like to thank Mark Levi for a helpful discussion.
\end{document}
\begin{document}
\makeRR
\section{Introduction}
Hydrodynamical phenomena can be described by a wide variety of mathematical and numerical models, spanning a large range of possible levels of complexity and realism. When dealing with the representation of a complex fluid system, such as an ensemble of rivers and channels or a human blood system, the dynamical behavior of the flow is often spatially heterogeneous. This means that it is generally not necessary to use the most complex model everywhere, but that one can adapt the choice of the model to the local dynamics. One then has to couple several different models, corresponding to different areas. Such an approach is generally efficient from a computational point of view, since it avoids heavy computations with a full complex model in areas where a simpler model is able to represent the dynamics quite accurately. This makes it possible to build a hybrid numerical representation of an entire complex system, whereas its simulation with a single model would be either not accurate enough with a simple one or too expensive with a complex one.\par
In such a hierarchy of models, the simplest ones are often simplifications of the more complex ones. Let us mention for instance the so-called ``primitive equations'', which are widely used to represent the large scale ocean circulation, and are obtained by making some assumptions in the Navier-Stokes equations. It is important to note that such simplifications may involve a change in the geometry and in the dimension of the physical domain, thus leading to simplified models which are $m$-D while the original one was $n$-D, with $n>m$.
An obvious example is given by the shallow water equations, which are derived from the Navier-Stokes equations by integration along the vertical axis, see for instance
\cite{gerbeauperthame} for a rigorous mathematical derivation using asymptotic analysis techniques in the 2-D to 1-D case. Such a coupling between dimensionally heterogeneous models has been used in several applications. Formaggia, Gerbeau, Nobile and Quarteroni \cite{formaggia1} have coupled 1-D and 3-D Navier-Stokes equations for studying blood flows in compliant vessels. In the context of river dynamics, Miglio, Perotto and Saleri
\cite{miglio}, Marin and Monnier \cite{monnier}, Finaud-Guyot, Delenne, Guinot and Llovel \cite{finaud-guyot}, Malleron, Zaoui, Goutal and Morel
\cite{malleron} have coupled 1-D and 2-D shallow water models. Leiva, Blanco and Buscaglia \cite{leiva2011} present also several such applications for Navier-Stokes equations. \par
Several techniques can be used to couple different models, either based on variational, algebraic, or domain decomposition approaches. In the context of dimensionally heterogeneous models, in addition to the previously mentioned references, there exist also a number of papers on this subject for purely hyperbolic problems (e.g. \cite{Godlewski2004,Godlewski2005}, \cite{bouttin}), but we will not elaborate on these studies since our focus is much more on hyperbolic/parabolic problems.
\par
In our study, we will focus on the design of an efficient Schwarz-like iterative coupling method. The possibility of performing iterations between both models, i.e. of using a Schwarz method, is already considered in Miglio, Perotto and Saleri \cite{miglio} and Malleron, Zaoui, Goutal and Morel \cite{malleron}. This kind of algorithm has several practical advantages. In particular, it is simple to develop and operate, and it does not require heavy changes in the numerical codes to be coupled: each model can be run separately, the interaction between subdomains being ensured through boundary conditions only. These are important aspects in view of complex realistic applications.
\par
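To fix ideas, here is a toy version of such an iterative algorithm, independent of the specific models coupled in this work: an alternating Schwarz iteration for $-u''=f$ on two overlapping 1-D subdomains, where each subdomain is solved independently and the subdomains interact through Dirichlet data exchanged at the interfaces. All numerical values below are illustrative:

```python
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, n):
    # standard second-order finite-difference solve of -u'' = f on (a, b)
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    A = (np.diag(np.full(n - 1, 2.0)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return x, np.concatenate(([ua], np.linalg.solve(A, rhs), [ub]))

# alternating Schwarz for -u'' = 1, u(0) = u(1) = 0,
# with overlapping subdomains (0, 0.6) and (0.4, 1)
f = lambda x: np.ones_like(x)
g1 = g2 = 0.0                      # initial guesses at the interfaces
for _ in range(40):
    x1, u1 = solve_dirichlet(f, 0.0, 0.6, 0.0, g1, 60)
    g2 = np.interp(0.4, x1, u1)    # pass Dirichlet data to the right subdomain
    x2, u2 = solve_dirichlet(f, 0.4, 1.0, g2, 0.0, 60)
    g1 = np.interp(0.6, x2, u2)    # and back to the left subdomain

exact_mid = 0.5 * 0.5 * (1 - 0.5)  # exact solution u(x) = x(1-x)/2 at x = 0.5
assert abs(np.interp(0.5, x2, u2) - exact_mid) < 1e-6
```

For this model problem each full sweep contracts the interface error by a fixed factor depending on the overlap, which is why a moderate number of iterations suffices.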
Our final objective is to design an efficient algorithm for the coupling of a 1-D/2-D shallow water model with a 2-D/3-D Navier-Stokes model. As a first step in this direction, the present study aims at identifying the main questions that we will have to face, as well as an adequate mathematical framework and possible ways to address these questions. We will perform this preliminary stage on a very simple testcase, coupling a 2-D Laplacian equation with a corresponding simplified 1-D equation. Seemingly similar testcases were addressed by Blanco,
Discacciati and Quarteroni \cite{blanco} and Leiva, Blanco and Buscaglia \cite{leiva}, but with different coupling methodologies
(variational approach in \cite{blanco} and Dirichlet-Neumann coupling in \cite{leiva}). Moreover we have chosen to use non symmetrical boundary conditions in our 2-D model, in order to develop a fully two dimensional solution, and our 1-D model is obtained by integration of the 2-D equation along one direction, by analogy with the link between the shallow water system and the Navier-Stokes system. The rigorous mathematical derivation of the 1-D model clearly highlights its validity conditions.\par
Section 2 is devoted to the presentation of the 2-D Laplacian model, and to the derivation of the corresponding reduced 1-D model. Then a Schwarz iterative coupling algorithm is presented in Section 3, and its theoretical convergence properties are analyzed. In particular, the influence of the interface location is discussed. Finally numerical tests are presented in Section 4, which fully validate the previous analytical results.
\section{Derivation of the reduced model}\label{sec:derivation}
We are interested in the following boundary-valued problem in a domain $\Omega\subset\mathbb{R}^2$:
\begin{subequations}
\label{eq:full2-D0}
\begin{empheq}[left=\empheqlbrace\;]{align}
&- \Delta u(x,z) = F(x,z),\quad\forall (x,z)\in \Omega,\label{eq:full2-D0a}\\
& \alpha\dn{u}(x,z)+\kappa u(x,z)= 0,\quad\forall (x,z)\in \partial\Omega,
\end{empheq}
\end{subequations}
where $\alpha$ and $\kappa$ are nonnegative numbers, allowing for Dirichlet, Neumann or Robin boundary conditions.\\
\noindent In this section, we want to take advantage of the shallowness of the domain $\Omega$ (or some subdomain of $\Omega$) to derive a reduced model. As it is done for the derivation of the shallow water model (see \textit{e.g.} \cite{gerbeauperthame}), we want to replace the complete 2-D model \eqref{eq:full2-D0} by a (simpler) 1-D equation wherever it is possible (and keep the original 2-D model everywhere else). We will finally obtain the coupled 1-D / 2-D system \eqref{eq:1-Dmodel}-\eqref{eq:2-Dmodel} which we will analyze and simulate in the coming sections.\\
In order to discriminate between 1-D and 2-D regions, we introduce the following definition:
\begin{definition}\label{def1}
Let $\Omegaoned$ be the subset of $\Omega$ in which 2-D effects may be neglected, and $\Omegatwod=\Omega\backslash\Omegaoned$ the subset of $\Omega$ in which 2-D effects cannot be neglected.
\end{definition}
\begin{remark}\label{rem1}
Naturally the definition of $\Omegaoned$ depends on several features, such as the domain aspect ratio, the considered system of equations, forcing terms (including boundary conditions), etc.
\end{remark}
\noindent For the sake of clarity (see Figure \ref{fig:racket}), we will assume that there exist $H$ and $L_1$ such that
$$\Omegaoned=\Omega\cap\{x<L_{1}\}=(0,L_1) \times (0,H)\mbox{ and } \Omegatwod=\Omega\cap\{x> L_{1}\}. $$
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=14cm]{Figs/racket.eps}
\caption{\label{fig:racket}Typical computational domain $\Omega$, including a zone with a true 2-D behavior ($x\geq L_{1}$) together with a shallow zone where we intend to use a 1-D model ($x<L_{1}$).}
\end{center}
\end{figure}
Let us now consider equation \eqref{eq:full2-D0} with the following boundary conditions (see Figure \ref{fig:racket} for the notations):
\begin{subequations}
\label{eq:full2-D}
\begin{empheq}[left=\empheqlbrace\;]{align}
- \Delta u(x,z) &= F(x,z),\quad\forall (x,z)\in \Omega,\\
\dn{u}(x,z)&= 0,\quad\forall (x,z)\in \Gamma_{T},\\
\dn{u}(x,z)+\kappa u(x,z)&= 0,\quad\forall (x,z)\in \Gamma_{B},\\
u(x,z) &= \gamma_1(x,z),\quad\forall (x,z)\in \Gamma_{L},\\
u(x,z) &= \gamma_2(x,z),\quad\forall (x,z)\in \Gamma_{R}.
\end{empheq}
\end{subequations}
In order to derive the 1-D model in $\Omegaoned=\Omega\cap\{x<L_1\}$, we introduce the following dimensionless variables and numbers:
\begin{eqnarray}
&\varepsilon=\displaystyle\frac{H}{L_{1}},\\
&\tilde{x} = \displaystyle\frac{x}{L_{1}},\quad\tilde{z} = \displaystyle\frac{z}{H},\quad \tilde{u}(\tilde{x}, \tilde{z})
= \displaystyle\frac{u(x,z)}{U},\quad\tilde{F}(\tilde{x}, \tilde{z}) = F(x,z)\displaystyle\frac{ L_{1}^2}{U} \mbox{ and } \tilde{\kappa} = \kappa L_{1},
\end{eqnarray}
where $L_{1}$ (resp $H$) is the characteristic length (resp height) of $\Omegaoned$, $\varepsilon$ is called the aspect ratio, and $U$ is a characteristic value for $u(x,z)$.\\
The nondimensional form of equations \eqref{eq:full2-D} in $\Omegaoned$ reads\footnote{Since $\Omegaoned=(0,L_1)\times(0,H)$ we have $\vec n=\pm e_z$ in \eqref{eq:full2-Dndimb} and \eqref{eq:full2-Dndimc}.}:
\begin{subequations}
\label{eq:full2-Dndim}
\begin{empheq}[left=\empheqlbrace\;]{align}
&- \displaystyle\frac{\partial^2 \tilde{u}}{\partial \tilde{x}^2} -
\displaystyle\frac{1}{\varepsilon^2} \displaystyle\frac{\partial^2 \tilde{u}}{\partial \tilde{z}^2} = \tilde{F}\quad
\mbox{in}\quad \Omegaoned \label{eq:full2-Dndima} \\
&\displaystyle\frac{\partial \tilde{u}}{\partial \tilde{z}}= 0\quad \mbox{on}\quad \Gamma_{T}^{1} = \Gamma_T \cap \partial \Omegaoned \label{eq:full2-Dndimb}\\
&- \displaystyle\frac{1}{\varepsilon}\displaystyle\frac{\partial \tilde{u}}{\partial \tilde{z}} + \tilde{\kappa} \tilde{u}= 0\quad \mbox{on}\quad \Gamma_{B}^{1}
= \Gamma_B \cap \partial \Omegaoned\label{eq:full2-Dndimc}\\[0.3cm]
&\tilde{u} = \displaystyle\frac{\gamma_1}{U}\quad \mbox{on}\quad \Gamma_L^{1} = \Gamma_L.
\end{empheq}
\end{subequations}
We assume (see \cite{REF-KAPPA} for the scaling of $\tilde\kappa$) that
\begin{equation}\label{eq:SW}
\tilde{F}= O(1), \displaystyle\frac{\partial^2 \tilde{u}}{\partial \tilde{x}^2} =O(1)\; \mbox{ and }\; \tilde{\kappa} = O(\varepsilon),
\end{equation}
which is a sufficient condition to ensure that the 2-D effects are negligible in $\Omegaoned$. Indeed we deduce from equation \eqref{eq:full2-Dndima} that
\begin{equation}
\displaystyle\frac{\partial^2 \tilde{u}}{\partial \tilde{z}^2} = O(\varepsilon^2).
\end{equation}
By vertical integration on $(\tilde{z}, 1)$, and accounting for the boundary condition \eqref{eq:full2-Dndimb}, we find:
\begin{equation} \label{eq:dzordre2}
\displaystyle\frac{\partial \tilde{u}}{\partial \tilde{z}} = O(\varepsilon^2)
\end{equation}
and finally:
\begin{equation}\label{eq:order2}
\tilde{u}(\tilde{x},\tilde{z}) = \tilde{u}(\tilde{x},0) + O(\varepsilon^2).
\end{equation}
Going back to original variables, we have:
\begin{equation} \label{eqn : relation_ordre_2}
u(x,z) = u(x,0) + O(\varepsilon^2), \;\forall \;(x,z) \in\Omegaoned.
\end{equation}
We now introduce the averaging operator in the vertical direction. For any function $f$ of $z$, we set:
\begin{equation}
\av{f}=\displaystyle\frac{1}{H}\int_{0}^Hf(z)\,dz.
\end{equation}
We average equation \eqref{eqn : relation_ordre_2} over $z \in (0, H)$ and obtain:
\begin{equation} \label{eqn : relation_ordre_2_bis}
\bar{u}(x) = u(x,0) + O(\varepsilon^2),\;\forall x \in [0, L_{1}].
\end{equation}
We now average equation \eqref{eq:full2-Dndima} in the vertical direction, taking into account the Robin boundary condition \eqref{eq:full2-Dndimc} on $\Gamma_B^{1}$, and find:
\begin{equation}
-\displaystyle\frac{\partial^2 \bar{u}}{\partial x^2} + \displaystyle\frac{\kappa}{H} u(x,0) = \bar{F}.
\end{equation}
For every $x \in [0, L_{1}]$ we may use approximation (\ref{eqn : relation_ordre_2_bis}) to introduce the new problem:
\begin{equation} \label{eqn : chaleur1-D_robin}
-\displaystyle\frac{\partial^2 u_1}{\partial x^2} + \displaystyle\frac{\kappa}{H} u_1 = \bar{F} \quad \mbox{in}\quad [0, L_{1}].
\end{equation}
It will replace \eqref{eq:full2-D0a} in $\Omegaoned$. As mentioned in Remark \ref{rem1}, it is particularly difficult to guess the value of $L_{1}$ a priori. Indeed one has to specify the criteria that define 2-D effects, and in practical situations we may only be able to define $L_{2}$ such that $\big(\Omega\cap\{x\geq L_{2}\}\big)\subset\Omegatwod$, or in other words $L_{2}\geq L_{1}$. In this work we consider two different situations:
\begin{list}{-}{}
\item a funnel-shaped domain (see Figure \ref{fig:entonnoir}) with a thin left part, so that we anticipate 2-D effects on the right (wide) part of the domain. In this case the definition of $L_{2}$ is based on a geometrical criterion.
\item a rectangular domain (see Figure \ref{fig:rectangle}) with a small aspect ratio $\varepsilon=H/L$ (so that we can anticipate weak 2-D effects), but with some 2-D forcing terms occurring in the right end of the domain. In that case the definition of $L_{2}$ is based on the support of the forcing terms.
\end{list}
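Since the reduced equation \eqref{eqn : chaleur1-D_robin} has constant coefficients, its general solution is $u_1(x) = H\bar{F}/\kappa + C_1\cosh(ax) + C_2\sinh(ax)$ with $a=\sqrt{\kappa/H}$, the constants being fixed by the boundary conditions. A minimal numerical sanity check of this expression (all parameter values and constants below are illustrative):

```python
import math

kappa, H, Fbar = 2.0, 0.5, 1.0         # illustrative parameter values
a = math.sqrt(kappa / H)               # a = sqrt(kappa/H)
C1, C2 = 0.3, -0.7                     # arbitrary integration constants

def u1(x):                             # particular solution + homogeneous part
    return H * Fbar / kappa + C1 * math.cosh(a * x) + C2 * math.sinh(a * x)

def residual(x, h=1e-4):               # -u1'' + (kappa/H)*u1 - Fbar, by central differences
    upp = (u1(x + h) - 2.0 * u1(x) + u1(x - h)) / h**2
    return -upp + (kappa / H) * u1(x) - Fbar

for x in (0.1, 0.5, 1.0):
    assert abs(residual(x)) < 1e-5     # the ODE is satisfied up to FD error
```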
\begin{figure}[h]
\begin{center}
\includegraphics[width=8cm]{Figs/entonnoir.eps}
\caption{\label{fig:entonnoir}Funnel-shaped computational domain. The domain is shallow for $x<L_{2}$.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=15cm]{Figs/Rectangle.eps}
\caption{\label{fig:rectangle}Rectangular computational domain $\Omega$. The domain is shallow: $H/L\ll 1$, and we assume that the forcing terms are supported in $\{x>L_{2}\}$.}
\end{center}
\end{figure}
\noindent At this point we have defined an upper bound $L_{2}\geq L_{1}$, but the exact value of $L_{1}$ remains unknown. From now on we choose an interface position $L_{0}$ without any \textit{a priori} information (other than $L_{0}<L_{2}$) and decide to consider the model reduction \eqref{eq:1-Dmodel} on $\Omega_{1}=\Omega\cap\{x<L_{0}\}$, while we keep the 2-D model in $\Omega_{2}=\Omega\cap\{x>L_{0}\}$. Finally we obtain the following two systems:
\begin{eqnarray} \label{eq:1-Dmodel}
\mbox{1-D model: } \qquad\left\{
\begin{array}{rcl}
\displaystyle -\displaystyle\frac{\partial^2 u_1}{\partial x^2} + \displaystyle\frac{\kappa}{H} u_1 &=& F_1 \mbox{ in } (0, L_0),\\
u_1(0)&=& \bar{\gamma}_1.
\end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray} \label{eq:2-Dmodel}
\mbox{2-D model: } \qquad \left\{
\begin{array}{rcl}
\displaystyle - \Delta u_2 &=& F_2 \mbox{ in } \Omega_2,\\
\displaystyle \displaystyle\frac{\partial u_2}{\partial n}= 0 \mbox{ on } \Gamma^2_{T} &=& \Gamma_T \cap \partial \Omega_2, \\
\displaystyle \displaystyle\frac{\partial u_2}{\partial n} + \kappa u_2= 0 \mbox{ on }\Gamma^2_{B} &=& \Gamma_B \cap \partial \Omega_2, \\
u_2 &=& \gamma_2 \mbox{ on } \Gamma_R.
\end{array}
\right.
\end{eqnarray}
where $F_1 = \overline{F}$ and $F_2 = F_{|\Omega_2}$.\\
Two cases may occur:
\begin{list}{-}{}
\item Favourable case: $L_{0}<L_{1}$, so that $\Omega_{1}\subset\Omegaoned$ and the model reduction is relevant. In particular, hypothesis \eqref{eq:SW} holds, so that Theorem \ref{th:errorcontrol} applies;
\item Unfavourable case: $L_{0}\geq L_{1}$, so that $\Omega_{1}\not\subset\Omegaoned$ and the 1-D model will not be able to reproduce the 2-D reality (in particular, hypothesis \eqref{eq:SW} does not hold).
\end{list}
We now want to evaluate this model reduction in the favourable case $L_{0}<L_{1}$.
\section{Coupling algorithm}\label{algorithm}
Let us consider the two models (\ref{eq:1-Dmodel}) and (\ref{eq:2-Dmodel}) to be coupled respectively through the interfaces $x = L_0$ and $\Gamma$ as shown in Figure \ref{fig : domaine_couplage_1-D_2-D}. \\
\begin{figure}
\begin{center}
\includegraphics[width=14cm]{Figs/racket_couplage.eps}
\caption{Computational domains for the 1-D/2-D reduced model}
\label{fig : domaine_couplage_1-D_2-D}
\end{center}
\end{figure}
In coupling problems, the first difficulty lies in defining the coupling notion by itself, \textit{i.e.} defining the quantities or values to be exchanged between the two
models through the coupling interfaces. In our case and from a physical point of view, one may propose the following conditions, see \cite{blanco}, \cite{leiva}:
\begin{subequations}
\label{contraintes}
\begin{empheq}[left=\empheqlbrace\;]{align}
u_1(L_0) &= \frac{1}{H} \int_{0}^{H} u_2(L_0, z) dz \label{eqn : contrainte1} \\
\frac{\partial u_1}{\partial x}(L_0) &=\frac{1}{H} \int_{0}^{H} \frac{\partial u_2}{\partial x}(L_0, z) dz \label{eqn : contrainte2}
\end{empheq}
\end{subequations}
which correspond to the conservation of $u$ and its flux through the interface.\\
Unfortunately these two constraints do not ensure the well-posedness of the 2-D model or of the coupled problem. They are called \textit{defective boundary conditions} in the
literature, see \cite{formaggia1}, \cite{formaggia2}, \cite{leiva}.\\
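The defect is easy to visualize at the discrete level: infinitely many traces of $u_2$ on $\Gamma$ share the same vertical average, so conditions (\ref{eqn : contrainte1})--(\ref{eqn : contrainte2}) alone cannot determine $u_2$ on $\Gamma$. A minimal sketch (the two traces below are hypothetical):

```python
import numpy as np

H, nz = 1.0, 100                      # hypothetical channel height and grid
z = np.linspace(0.0, H, nz + 1)
dz = H / nz

def vavg(g):                          # vertical average (trapezoidal rule)
    return (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1]) * dz / H

g1 = np.ones_like(z)                  # two distinct traces on Gamma ...
g2 = 1.0 + 0.5 * np.cos(2.0 * np.pi * z / H)
assert abs(vavg(g1) - vavg(g2)) < 1e-12   # ... with the same vertical average
assert np.max(np.abs(g1 - g2)) > 0.4      # yet far apart pointwise
```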
One should then rather apply a coupling method ensuring the following points:
\begin{itemize}
\item [(i)] the well-posedness of the 1-D and 2-D models;
\item [(ii)] the satisfaction of the physical constraints;
\item [(iii)] the control of the difference between the coupled solution and the reference one (corresponding to the 2-D model over the whole domain $\Omega$). Indeed,
due to the nature of the problem, one cannot expect the solution of the 2-D model
to coincide with the restriction of the reference solution to $\Omega_2$.
\end{itemize}
The coupling problem with (\ref{eqn : contrainte1}) and (\ref{eqn : contrainte2}) has been studied in \cite{blanco} and \cite{leiva} using variational and algebraic approaches.
In this section, we propose an iterative coupling method based on classical Schwarz algorithms. These iterative methods were used for the first time in
the context of dimensionally heterogeneous coupling in \cite{formaggia1} and \cite{miglio} to study
a nonlinear hyperbolic coupling problem.\\
We will prove the convergence of these algorithms given an appropriate choice of boundary conditions at $x=L_0$ and on $\Gamma$.
Then we will study the solutions obtained after convergence
and compare them to the global reference solution $u$ defined by (\ref{eq:full2-D}). Finally we will give some results regarding the choice of the coupling interface position.\\
\subsection{Schwarz algorithms}
Let us first introduce Schwarz algorithms in the context of dimensionally homogeneous coupling.
Consider the following two systems, defined on the domains $\Omega_1$ and $\Omega_2$ shown in Figure \ref{fig : domaine_couplage}:
\begin{figure}
\begin{center}
\includegraphics{Figs/DDM.eps}
\caption{Computational domains for the dimensionally homogeneous coupling problem}
\label{fig : domaine_couplage}
\end{center}
\end{figure}
\begin{eqnarray}\label{eqn : modele_mono}
\left\{
\begin{array}{ll}
\mathcal{L}_u u = f_u \;\;\; \mbox{in}\;\;\; \Omega_1 \subset \mathbb{R}^n\\
B_u^{out} u = g_u \;\;\; \mbox{on} \;\;\; \partial \Omega_1^{out} \\
\end{array}
\right.
\;\;\;\mbox{and}\;\;\;\;\;
\left\{
\begin{array}{ll}
\mathcal{L}_v v = f_v \;\;\; \mbox{in}\;\;\; \Omega_2 \subset \mathbb{R}^n \\
B_v^{out} v = g_v \;\;\; \mbox{on} \;\;\; \partial \Omega_2^{out} \\
\end{array}
\right.
\end{eqnarray}
The operators $\mathcal{L}_u$ and $\mathcal{L}_v$ are different. We assume that $u$ and $v$ have to satisfy the following constraints derived from the physics:
\begin{subequations}
\label{eq:mono}
\begin{empheq}[left=\empheqlbrace\;]{align}
C_1 u &= C_2 v \label{eqn : condition_couplage1}\\
C'_1 u &= C'_2 v \label{eqn : condition_couplage2}
\end{empheq}
\end{subequations}
through the interface $\Gamma$, where $C_1$, $C'_1$, $C_2$ and $C'_2$ are differential operators.\\
To couple these two models we can implement the following iterative algorithm:\\
{\it For a given $ v^0 $ and at each iteration $k \geq 0$, solve:
$$
\left\{
\begin{array}{lll}
\mathcal{L}_u u^{k+1} &=& f_u \quad \mbox{in}\quad \Omega_1 \\
B_u^{out} u^{k+1} &=& g_u \quad \mbox{on} \quad \partial \Omega_1^{out} \\
B_u u^{k+1} &=& B_v v^k \quad \mbox{on}\quad \Gamma
\end{array}
\right.
\quad \mbox{then}\quad
\left\{
\begin{array}{lll}
\mathcal{L}_v v^{k+1} &=& f_v \quad \mbox{in}\quad \Omega_2 \\
B_v^{out} v^{k+1} &=& g_v \quad \mbox{on} \quad \partial \Omega_2^{out} \\
B'_v v^{k+1} & =& B'_u u^{k+1} \quad \mbox{on}\quad \Gamma.
\end{array}
\right.
$$}
Once convergence is achieved, the physical constraints (\ref{eqn : condition_couplage1}) and (\ref{eqn : condition_couplage2}) have to be satisfied. Then
care should be taken to choose
the operators $B_u$, $B'_u$, $B_v$ and $B'_v$ in order to ensure convergence toward the unique solution
defined by (\ref{eqn : modele_mono}), (\ref{eqn : condition_couplage1}) and
(\ref{eqn : condition_couplage2}), see \cite{quarteroni} and \cite{vero}. \\
This method can be generalized to the dimensionally heterogeneous coupling case.
Let us assume that we have to solve the following 1-D model/2-D model coupled problem:
$$
\left\{
\begin{array}{ll}
\mathcal{L}_1 u_1 = f_1 \quad \mbox{in}\quad \Omega_1 \subset \mathbb{R}\\
B_1^{out} u_1 = g_1 \quad \mbox{on} \quad \partial \Omega_1^{out} \\
\end{array}
\right.
\quad\mbox{and}\quad
\left\{
\begin{array}{ll}
\mathcal{L}_2 u_2 = f_2 \quad \mbox{in}\quad \Omega_2 \subset \mathbb{R}^2\\
B_2^{out} u_2 = g_2 \quad \mbox{on} \quad \partial \Omega_2^{out} \\
\end{array}
\right.
$$
and we suppose that we have the following coupling constraints to satisfy at $x=L_0$ and on $\Gamma$:
\begin{subequations}
\label{mono}
\begin{empheq}[left=\empheqlbrace\;]{align}
C_1 u_1(L_0) &= C_2 \left(\mathcal{R}u_2\right) (L_0) \label{eq : contrainte_generale1}\\
C'_2 u_2(L_0,z) &= C'_1 \left(\mathcal{E}u_1\right) (L_0,z)\quad\mbox{on}\quad \Gamma \label{eq : contrainte_generale2}
\end{empheq}
\end{subequations}
Here $\mathcal{R}u_2$ is a restriction of $u_2$ at $x=L_0$ and $\mathcal{E}u_1$ is an extension of $u_1(L_0)$ along $\Gamma$. More generally
we can define the operators $\mathcal{R}$ and $\mathcal{E}$ as in \cite{blanco} by:
\begin{eqnarray*}
\mathcal{R} : \Lambda& \longrightarrow &\Lambda_0 \\
u_{2|\Gamma} &\longmapsto& \mathcal{R}u_{2|\Gamma}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{E} : \Lambda_0& \longrightarrow &\Lambda \\
u_{1|x=L_0} &\longmapsto& \mathcal{E}u_{1|x = L_0}
\end{eqnarray*}
The spaces $\Lambda_0$ and $\Lambda$ are the trace spaces on the interface $x=L_0$ for 1-D functions and on the interface $\Gamma$
for 2-D functions. As mentioned in \cite{blanco}, these two operators are not invertible.\\
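This non-invertibility can be seen at the discrete level: with $\mathcal{R}$ the vertical average (the choice made below) and $\mathcal{E}$ the constant extension, $\mathcal{R}\circ\mathcal{E}$ is the identity on 1-D interface values, but $\mathcal{E}\circ\mathcal{R}$ is not the identity on 2-D traces. A minimal sketch (grid and trace are illustrative):

```python
import numpy as np

H, nz = 1.0, 50                       # illustrative channel height and grid
z = np.linspace(0.0, H, nz + 1)
dz = H / nz

def restrict(trace):                  # R: vertical (trapezoidal) average on Gamma
    return (0.5 * trace[0] + trace[1:-1].sum() + 0.5 * trace[-1]) * dz / H

def extend(value):                    # E: constant extension along Gamma
    return np.full_like(z, value)

# R o E is the identity on 1-D interface values ...
assert abs(restrict(extend(2.5)) - 2.5) < 1e-12
# ... but E o R is not: a non-constant trace cannot be recovered.
trace = 1.0 + 0.3 * np.cos(np.pi * z / H)   # hypothetical 2-D trace on Gamma
assert np.max(np.abs(extend(restrict(trace)) - trace)) > 0.25
```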
One may thus
implement the following algorithm:\\
{\it For a given $ u_2^0 $ and at each iteration $k \geq 0$, solve}:
$$
\left\{
\begin{array}{lll}
\mathcal{L}_1 u_1^{k+1} &=& f_1 \quad \mbox{in}\quad \Omega_1 \\
B_1^{out} u_1^{k+1} &=& g_1 \quad \mbox{on} \quad \partial \Omega_1^{out} \\
B_1 u_1^{k+1} &=& B_2 \mathcal{R}u_2^k \quad \mbox{at}\quad x=L_0
\end{array}
\right.
\quad \mbox{then}\quad
\left\{
\begin{array}{lll}
\mathcal{L}_2 u_2^{k+1} &=& f_2 \quad \mbox{in}\quad \Omega_2 \\
B_2^{out} u_2^{k+1} &=& g_2 \quad \mbox{on} \quad \partial \Omega_2^{out} \\
B'_2 u_2^{k+1} & =& B'_1 \mathcal{E}u_1^{k+1} \quad \mbox{on}\quad \Gamma.
\end{array}
\right.
$$
In practice, we do not have conditions such as (\ref{eq : contrainte_generale1}) (at $x=L_0$) and (\ref{eq : contrainte_generale2}) (for $(x,z)\mbox{ on }\Gamma$), but only conditions at $x=L_0$ such as (\ref{eqn : contrainte1}) and (\ref{eqn : contrainte2}).
This leads us to make a choice of the operators $\mathcal{R}$, $\mathcal{E}$, $C_1$, $C_2$, $C'_1$ and $C'_2$.\\
In general the choice of the restriction operator $\mathcal{R}$ is more straightforward than that of the extension operator $\mathcal{E}$. In this
study, since the 1-D model is obtained after some approximations and by averaging the 2-D model, it is reasonable to define $\mathcal{R}$ as the vertical average. On the other hand, the question of the choice of the operator $\mathcal{E}$ remains open. \\
In \cite{blanco} and \cite{leiva}, the authors propose a constant extension of (\ref{eqn : contrainte1}) and (\ref{eqn : contrainte2}) along $\Gamma$, and
impose the following strong coupling constraints:
$$
\left\{
\begin{array}{lll}
u_2(L_0, z) &= & u_1(L_0)\quad \mbox{on} \quad \Gamma\\
\displaystyle \frac{\partial u_1}{\partial x}(L_0) &=& \displaystyle \frac{1}{H} \int_{0}^{H} \frac{\partial u_2}{\partial x}(L_0, z) dz
\end{array}
\right.
\;\;\;\mbox{or}\;\;\;\;\;
\left\{
\begin{array}{lll}
u_1(L_0) &= & \displaystyle \frac{1}{H}\int_{0}^{H}u_2(L_0,z) dz\\
\displaystyle \frac{\partial u_2}{\partial x}(L_0,z) &=& \displaystyle \frac{\partial u_1}{\partial x}(L_0)\quad \mbox{on}\quad \Gamma
\end{array}
\right.
$$
This is one choice among many others: one may choose a multitude of operators $C_1$, $C_2$, $C'_1$ and $C'_2$ ensuring that the physical constraints are satisfied.\\
The strategy that we adopt here is to
choose, due to relations (\ref{eqn : relation_ordre_2}) and (\ref{eqn : relation_ordre_2_bis}), a constant extension of $u_1$ along $\Gamma$
and then to implement a family of Schwarz algorithms with appropriate boundary
conditions at $x = L_0$ and on $\Gamma$.
In this case the Schwarz coupling algorithm reads:\\
{\it For a given $ u_2^0 $ and at each iteration $k \geq 0$, solve :
\begin{eqnarray} \label{eqn : iteration_1-D}
\left\{
\begin{array}{lllll}
\displaystyle - \dxx{u_1^{k+1}} + \frac{\kappa}{H} u_1^{k+1}= F_1 \quad \mbox{in}\quad (0, L_0) \\
\\
u_1^{k+1}(0) = \bar{\gamma}_1 \\
\\
\displaystyle B_1 u_1^{k+1}(L_0) = B_1\bar{u}_2^k
\end{array}
\right.
\end{eqnarray}
and then solve
\begin{eqnarray} \label{eqn : iteration_2-D}
\left\{
\begin{array}{lllllll}
-\Delta u_2^{k+1} = F_2 \quad \mbox{in}\quad \Omegatwo \\
\\
\displaystyle \dn{u_2^{k+1}}= 0 \quad \mbox{on}\quad \Gamma_{T}^2 \\
\\
\displaystyle \dn{u_2^{k+1}}+ \kappa u_2^{k+1}= 0 \quad \mbox{on}\quad \Gamma_{B}^2 \\
\\
u_2^{k+1} = \gamma_2 \quad \mbox{on} \quad \Gamma_R\\
\\
\displaystyle B_2 u_2^{k+1} = B_2 u_1^{k+1} \quad \mbox{on}\quad \Gamma
\end{array}
\right.
\end{eqnarray}
}
The linear operators $B_1$ and $B_2$ will be defined such that the points (i), (ii) and (iii) (see introduction of Section \ref{algorithm}) are satisfied and such that the algorithm converges.\\
We will first study the convergence of the coupling algorithm. Subsequently we will move on to point (iii).\\
To ensure the convergence of Schwarz algorithms in the case of classical domain decomposition without overlapping, it is proposed in \cite{lions}
to use Robin operators. We will extend the use of these operators to our coupling problem. We define the operators $B_1$ and $B_2$ for a given $\lambda > 0$ as follows:
\begin{eqnarray} \label{eqn : robin_operator1}
B_1= \frac{\partial}{\partial n_1} + \lambda I\hspace*{-0.6mm}d
\end{eqnarray}
and
\begin{eqnarray} \label{eqn : robin_operator2}
B_2 = \dntwo{} + \lambda I\hspace*{-0.6mm}d
\end{eqnarray}
where $n_1$ and $n_2$ are the outward unit normals to the 1-D and 2-D domains, respectively.
We note that the operators $B_1$ and $B_2$ ensure the well-posedness of the problem at each iteration. Let us study the convergence of the Schwarz
algorithm with this family of operators.
\subsection{Algorithm convergence}
\begin{prop}
For each $ \lambda > 0$, the sequence $(u_1^k, u_2^k)$ produced by the Schwarz algorithm converges in $H^1(\Omega_1) \times H^1(\Omega_2)$
to a limit $(u_1^{\lambda}, u_2^{\lambda})$
that satisfies the physical constraints (\ref{eqn : contrainte1}) and
(\ref{eqn : contrainte2}).
\end{prop}
\noindent {\bf Proof:}\\
Let us define the differences between two successive iterations:
$$
e_1^{k+1}(x) = u_1^{k+1}(x) - u_1^k(x), \quad\forall x \in (0, L_0)
$$
and
$$
e_2^{k+1}(x,z) = u_2^{k+1}(x,z) - u_2^k(x,z),\quad \forall (x,z) \in \Omegatwo.
$$
These functions satisfy the following systems:
\begin{eqnarray} \label{eqn : error_1-D}
\left\{
\begin{array}{lllll}
\displaystyle - \dxx{e_1^{k+1}} + \frac{\kappa}{H} e_1^{k+1}= 0 \quad \mbox{in}\quad (0, L_0) \\
\\
e_1^{k+1}(0) = 0 \\
\\
\displaystyle \frac{\partial e_1^{k+1}}{\partial x}(L_0) + \lambda e_1^{k+1}(L_0) =
\frac{\partial \bar{e}_2^{k}}{\partial x}(L_0) + \lambda \bar{e}_2^{k}(L_0)
\end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray} \label{eqn : error_2-D}
\left\{
\begin{array}{lllllll}
-\Delta e_2^{k+1} = 0 \quad \mbox{in}\quad \Omegatwo \\
\\
\displaystyle \dn{e_2^{k+1}}= 0 \quad \mbox{on}\quad \Gamma_{T}^2 \\
\\
\displaystyle \dn{e_2^{k+1}}+ \kappa e_2^{k+1}= 0 \quad \mbox{on}\quad \Gamma_{B}^2 \\
\\
e_2^{k+1} = 0 \quad \mbox{on} \quad \Gamma_R\\
\\
\displaystyle -\frac{\partial e_2^{k+1}}{\partial x}(L_0,z) + \lambda e_2^{k+1}(L_0,z)
= -\frac{\partial e_1^{k+1}}{\partial x}(L_0) + \lambda e_1^{k+1}(L_0)\quad \mbox{on}\quad \Gamma.
\end{array}
\right.
\end{eqnarray}
The first two equations of (\ref{eqn : error_1-D}) lead to:
\begin{equation} \label{eqn : error1D}
e_1^{k+1}(x) = \alpha_{k+1} \sinh(a x), \;\;\; \forall x \in (0, L_0)
\end{equation}
where $\alpha_{k+1} \in \mathbb{R}$ and $\displaystyle a = \sqrt{\frac{\kappa}{H}}$.\\
If we take the vertical average of the boundary condition on $\Gamma$ in (\ref{eqn : error_2-D}), and use the Robin boundary condition at $x=L_0$ in (\ref{eqn : error_1-D}), we obtain:
$$
\left\{
\begin{array}{ccc}
\displaystyle -\frac{\partial \bar{e}_2^{k+1}}{\partial x}(L_0) + \lambda \bar{e}_2^{k+1}(L_0)&= &
\displaystyle-\frac{\partial e_1^{k+1}}{\partial x}(L_0) + \lambda e_1^{k+1}(L_0)\\
\\
\displaystyle \frac{\partial \bar{e}_2^{k+1}}{\partial x}(L_0) + \lambda \bar{e}_2^{k+1}(L_0)& = &
\displaystyle \frac{\partial e_1^{k+2}}{\partial x}(L_0) + \lambda e_1^{k+2}(L_0).
\end{array}
\right.
$$
This implies:
\begin{eqnarray} \label{eqn : moyenne_e2}
\bar{e}_2^{k+1}(L_0) &= & \frac{1}{2\lambda}\left(A \alpha_{k+2} + B \alpha_{k+1} \right)
\end{eqnarray}
and
\begin{eqnarray} \label{eqn : moyenne_derivee_e2}
\dx{\bar{e}_2^{k+1}}(L_0) = \frac{1}{2} \left( A \alpha_{k+2} - B \alpha_{k+1}\right)
\end{eqnarray}
where $A = a \cosh(a L_0) + \lambda \sinh(a L_0)$ and $B = -a \cosh(a L_0) + \lambda \sinh(a L_0)$.\\
Now by multiplying the first equation of (\ref{eqn : error_2-D}) by $e_2^{k+1}$ and by integrating in $\Omega_2$, we obtain:
\begin{eqnarray*}
\int_{\Omega_2}^{} |\nabla e_2^{k+1}|^2 dx dz - \int_{\partial \Omega_2}^{} \frac{\partial e_2^{k+1}}{\partial n} e_2^{k+1} d \sigma = 0
\end{eqnarray*}
then, using the boundary conditions on $\Gamma_T^2$, $\Gamma_B^2$ and $\Gamma_R$, we obtain:
\begin{eqnarray} \label{eqn : FV_e2}
\int_{\Omega_2}^{} |\nabla e_2^{k+1}|^2 dx dz + \int_{\Gamma_B^2}^{} \kappa |e_2^{k+1} |^2 dx &= &
-\int_{\Gamma}^{} \frac{\partial e_2^{k+1}}{\partial x}(L_0,z) e_2^{k+1}(L_0,z) dz.
\end{eqnarray}
We replace $\displaystyle \frac{\partial e_2^{k+1}}{\partial x}(L_0,z) $ in (\ref{eqn : FV_e2}) by its value obtained by using Robin boundary condition on $\Gamma$:
\begin{eqnarray} \label{eqn : FV_e2bis}
\int_{\Omega_2}^{} |\nabla e_2^{k+1}|^2 dxdz + \int_{\Gamma_B^2}^{} \kappa |e_2^{k+1}|^2 dx &= & \int_{\Gamma}^{}
\left( - \frac{\partial e_1^{k+1}}{\partial x}(L_0) + \lambda e_1^{k+1}(L_0) - \lambda e_2^{k+1}(L_0,z)\right) e_2^{k+1}(L_0,z) dz \nonumber\\
&=& B \alpha_{k+1} H \bar{e}_2^{k+1}(L_0) - \lambda \int_{0}^{H}|e_2^{k+1} |^2(L_0,z) d z \nonumber\\
& = & \frac{B \alpha_{k+1} H}{2 \lambda}\left(A \alpha_{k+2} + B \alpha_{k+1} \right) -
\lambda \int_{0}^{H}|e_2^{k+1} |^2(L_0,z) d z.
\end{eqnarray}
We now replace $e_2^{k+1}(L_0,z)$ in (\ref{eqn : FV_e2}) using the same Robin boundary condition:
\begin{eqnarray*}
\int_{\Omega_2}^{} |\nabla e_2^{k+1}|^2 dxdz + \int_{\Gamma_B^2}^{} \kappa |e_2^{k+1} |^2 d \sigma &= & -\int_{\Gamma}^{}
\frac{1}{\lambda}\left( - \frac{\partial e_1^{k+1}}{\partial x}(L_0) + \lambda e_1^{k+1}(L_0)\right)\frac{\partial e_2^{k+1}}{\partial x}(L_0,z) dz\\
&& - \frac{1}{\lambda}
\int_{0}^{H}\left|\frac{\partial e_2^{k+1}}{\partial x}(L_0,z) \right|^2 d z \\
&=& - \frac{B \alpha_{k+1}H}{\lambda} \frac{\partial \bar{e}_2^{k+1}}{\partial x}(L_0) - \frac{1}{\lambda}\int_{0}^{H}\left|
\frac{\partial e_2^{k+1}}{\partial x}(L_0,z) \right|^2 d z \\
& = & -\frac{B \alpha_{k+1} H}{2 \lambda}\left(A \alpha_{k+2} - B \alpha_{k+1} \right) - \frac{1}{\lambda}
\int_{0}^{H}\left|\frac{\partial e_2^{k+1}}{\partial x}(L_0,z) \right|^2 d z.
\end{eqnarray*}
Since $\lambda > 0$ and the left-hand sides of the two identities above are non-negative, we deduce that:
$$
\frac{B \alpha_{k+1} H}{2 \lambda}\left(A \alpha_{k+2} + B \alpha_{k+1} \right) \geq 0
$$
and
$$
-\frac{B \alpha_{k+1} H}{2 \lambda}\left(A \alpha_{k+2} - B \alpha_{k+1} \right) \geq 0.
$$
Thus:
$$
A^2 \alpha_{k+2}^2 - B^2 \alpha_{k+1}^2 \leq 0.
$$
Then we obtain:
$$
\frac{\alpha_{k+2}^2}{\alpha_{k+1}^2} \leq \frac{B^2}{A^2} =
\left|\frac{-a \cosh(a L_0) + \lambda \sinh(a L_0)}{a \cosh(a L_0) + \lambda \sinh(a L_0)} \right|^2 < 1
$$
and finally
\begin{equation} \label{eqn : suite_geo}
\left| \frac{\alpha_{k+2}}{\alpha_{k+1}} \right| \leq \left| \frac{B}{A} \right| < 1.
\end{equation}
Hence the sequence $(\alpha_k)_{k \in \mathbb{N}}$ converges geometrically to zero.\\
Let us now remark that for all $k \geq 0$, $n \geq 0$, we have:
\begin{eqnarray*}
u_1^{k+n} - u_1^{k} &=& \sum_{p=0}^{n-1} e_1^{k+p+1}\\
\end{eqnarray*}
and
\begin{eqnarray*}
\frac{ \partial \left( u_1^{k+n} - u_1^{k} \right)}{\partial x} &=& \sum_{p=0}^{n-1} \frac{\partial e_1^{k+p+1} }{\partial x}\\
\end{eqnarray*}
Using relation (\ref{eqn : error1D}) and the geometric convergence of the sequence $(\alpha_k)_{k \in \mathbb{N}}$, we can
prove that $(u_1^k)_{k \in \mathbb{N}}$ and $\displaystyle (\frac{\partial u_1^k}{\partial x})_{k \in \mathbb{N}}$ are Cauchy sequences in
$L^2(\Omega_1)$, so that $(u_1^k)_{k \in \mathbb{N}}$ is a Cauchy sequence in $H^1(\Omega_1)$.\\
In the same way we observe that for all $k \geq 0$, $n \geq 0$, we have:
\begin{eqnarray*}
\nabla (u_2^{k+n} - u_2^{k}) &=& \sum_{p=0}^{n-1} \nabla e_2^{k+p+1}\\
\end{eqnarray*}
Using (\ref{eqn : FV_e2bis}), we deduce that:
$$
\int_{\Omega_2}^{} |\nabla e_2^{k+1}|^2 dxdz \leq \frac{B \alpha_{k+1} H}{2 \lambda}\left(A \alpha_{k+2} + B \alpha_{k+1} \right)
$$
and then we can prove that $(\nabla u_2^k)_{k \in \mathbb{N}}$ is a Cauchy sequence in $L^2(\Omega_2)$; by the Poincar\'e inequality,
$(u_2^k)_{k \in \mathbb{N}}$ is then also a Cauchy sequence in $L^2(\Omega_2)$, and hence in $H^1(\Omega_2)$.\\
To conclude, we have proved that the Schwarz iterates $(u_1^k, u_2^k)$ converge in $H^1(\Omega_1) \times H^1(\Omega_2)$.
Moreover, at convergence the limit $(u_1^{\lambda}, u_2^{\lambda})$ verifies $B_2 u_2^{\lambda} = B_2 u_1^{\lambda}$
and $B_1 u_1^{\lambda} = B_1 \bar{u}_2^{\lambda}$. Taking the
vertical average on $\Gamma$ gives two linear combinations of the constraints (\ref{eqn : contrainte1}),
(\ref{eqn : contrainte2}).$\Box$
\begin{remark}~\\
\begin{itemize}
\item The Schwarz algorithms converge for all $\lambda$ positive, but we remark that for $\displaystyle \lambda = a \coth(a L_0)$, we have exact
convergence in two iterations. Indeed we have in this case:
$$
\displaystyle -\frac{\partial e_1^{k+1}}{\partial x} + a \coth(a L_0) e_1^{k+1} =0 \;\;\; \forall k \geq 0,
$$
and then:
$$
B_2 e_2^{k+1} = -\frac{\partial e_2^{k+1}}{\partial x} + a \coth(a L_0) e_2^{k+1} =-\frac{\partial e_1^{k+1}}{\partial x} + a \coth(a L_0) e_1^{k+1}=0 \;\;\; \forall k \geq 0.
$$
The operator $\displaystyle \dnone{} + a \coth(aL_0) I\hspace*{-0.6mm}d$ corresponds to the {\it absorbing} operator of the 1-D model.\\
We denote by $\lambda_{opt}$ this value of $\lambda$, see for example \cite{japhetnataf}.
\item If we take $\lambda_1 \neq \lambda_2$, we have {\it a priori} $(u_1^{\lambda_1}, u_2^{\lambda_1}) \neq (u_1^{\lambda_2}, u_2^{\lambda_2})$.
This is in accordance with the ill-posedness of the coupling problem defined by (\ref{eq:1-Dmodel}), (\ref{eq:2-Dmodel}), (\ref{eqn : contrainte1}) and (\ref{eqn : contrainte2}).
\end{itemize}
For the sake of clarity we will denote by $(u_1, u_2)$ the limit of the Schwarz algorithm instead of $(u_1^{\lambda}, u_2^{\lambda})$.
\end{remark}
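Both the contraction estimate (\ref{eqn : suite_geo}) and the optimal parameter $\lambda_{opt}=a\coth(aL_0)$ of the remark above are easy to check numerically; a minimal sketch (the parameter values are illustrative):

```python
import math

def contraction_factor(a, L0, lam):
    """|B/A| with A = a*cosh(a*L0) + lam*sinh(a*L0) and
    B = -a*cosh(a*L0) + lam*sinh(a*L0): the ratio bounding
    |alpha_{k+2}/alpha_{k+1}| in the convergence proof."""
    A = a * math.cosh(a * L0) + lam * math.sinh(a * L0)
    B = -a * math.cosh(a * L0) + lam * math.sinh(a * L0)
    return abs(B / A)

a, L0 = 2.0, 1.5                       # a = sqrt(kappa/H); illustrative values
for lam in (0.1, 1.0, 10.0):           # any lambda > 0 gives a contraction
    assert contraction_factor(a, L0, lam) < 1.0
lam_opt = a / math.tanh(a * L0)        # a*coth(a*L0): B vanishes identically
assert contraction_factor(a, L0, lam_opt) < 1e-10
```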
\subsection{Control of the difference between the coupled solution and the global reference solution}
Unlike the case of domain decomposition, at convergence of the Schwarz algorithm we have $u_2 \neq u|_{\Omega_2}$, due to the model reduction. But as mentioned above, we have chosen the family of {\it Robin} operators in order to get some control of
the difference between $u_2$ and $u|_{\Omega_2}$. In fact we have the following result:
\begin{theorem}\label{th:errorcontrol}
For each $\lambda > 0$, let $(u_1^{\lambda}, u_2^{\lambda})$ denote the limit of the Schwarz algorithm. If $L_0 < L_1$ then
there exists $M(\lambda) > 0$ such that:
\begin{eqnarray} \label{eq : error_control}
\| u_{|\Omega_2} - u_2 \|_{H^1(\Omega_2)} \leq M(\lambda) \varepsilon\sqrt{1+ \delta^2}
\end{eqnarray}
where $\displaystyle \delta = \frac{L_1}{L_1-L_0}$.
\end{theorem}
{\bf Proof}\\
The function $u|_{\Omega_2} - u_2$ is the solution of the system:
\begin{eqnarray*}
\left\{
\begin{array}{lllllllll}
\displaystyle - \Delta (u -u_2) = 0\quad \mbox{in}\quad \Omega_2 \\
\\
\displaystyle \frac{\partial (u-u_2)}{\partial n}= 0 \quad \mbox{on}\quad \Gamma^2_{T}\\
\\
\displaystyle \frac{\partial (u-u_2)}{\partial n} + \kappa (u-u_2)= 0 \quad \mbox{on}\quad \Gamma^2_{B} \\
\\
u-u_2 = 0 \quad \mbox{on} \quad \Gamma_R.
\end{array}
\right.
\end{eqnarray*}
By multiplying the first equation by $u-u_2$ and using the boundary conditions
on $\Gamma_T^2 \cup \Gamma_B^2 \cup \Gamma_R$, we obtain:
\begin{equation} \label{eqn : etoile1}
\int_{\Omega_2}^{} |\nabla\left( u- u_2 \right)|^2 dx dz + \int_{\Gamma_B^2}^{} \kappa |u-u_2|^2 dx
- \int_{\Gamma}^{} \frac{\partial (u-u_2)}{\partial n}\left( u - u_2 \right) dz = 0. \tag{$*$}
\end{equation}
The integral term on $\Gamma$ is reformulated using the boundary condition
$\displaystyle -\frac{ \partial u_2}{\partial x} + \lambda u_2 = -\frac{\partial u_1}{\partial x} + \lambda u_1$ satisfied by the limit $u_2$:
\begin{eqnarray*}
\int_{\Gamma}^{} \frac{\partial (u-u_2)}{\partial n}\left( u - u_2 \right) dz &=& - \int_{0}^{H} \frac{\partial (u-u_2)}{\partial x}(L_0,z)\left( u - u_2 \right)(L_0,z) dz \\
&= & - \int_{0}^{H} \frac{\partial u}{\partial x}(L_0,z)\left( u - u_2 \right)(L_0,z) dz \\
& & -\int_{0}^{H} \left(- \frac{\partial u_1}{\partial x}(L_0) + \lambda u_1(L_0) - \lambda u_2(L_0,z) \right)\left( u - u_2 \right) (L_0,z)dz \\
&=& \int_{0}^{H} \left(- \frac{\partial u}{\partial x}(L_0,z) + \lambda u(L_0,z) \right)\left( u - u_2 \right) (L_0,z)dz \\
& & -\int_{0}^{H} \left(- \frac{\partial u_1}{\partial x}(L_0) + \lambda u_1(L_0) \right)\left( u - u_2 \right)(L_0,z) dz \\
&& - \lambda\int_{0}^{H} \left(u - u_2 \right)^2(L_0,z) dz.
\end{eqnarray*}
$\bullet$ The first term reads:
\begin{eqnarray*}
\int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u \right)(L_0,z)\left( u - u_2 \right) (L_0,z)dz &=&
\int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u\right) (L_0,z)\left( u(L_0,z) - \bar{u}(L_0) \right) dz \\
&& + \int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u\right)(L_0,z)\left( \bar{u}(L_0) - u_1(L_0) \right) dz \\
&& + \int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u \right)(L_0,z)\left( u_1(L_0) - u_2(L_0,z) \right) dz. \\
\end{eqnarray*}
Due to the relations (\ref{eqn : relation_ordre_2}) and (\ref{eqn : relation_ordre_2_bis}) and to the fact that $\displaystyle -\frac{\partial u}{\partial x}(L_0,z) + \lambda u(L_0,z)= O(1) $, we deduce:
$$
\int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u\right) (L_0,z)\left( u(L_0,z) - \bar{u}(L_0) \right) dz = O(\varepsilon^2).
$$
In the same way, if we assume that $L_0 < L_1$, so that 2-D effects are insignificant in $\Omega_2 \cap \{ L_0 \leq x \leq L_1\}$, and if we apply
an asymptotic analysis similar to that of the first section to the 2-D model defined in $\Omega_2$, we can deduce that:
\begin{eqnarray} \label{eqn : relation_u2_u1}
u_2(L_0,z) &=& \bar{u}_2(L_0) + O(\delta^2 \varepsilon^2) \nonumber \\
&= & u_1(L_0) + O(\delta^2 \varepsilon^2),\quad \forall z \in [0, H]
\end{eqnarray}
So that:
$$
\int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u\right) (L_0,z)\left( u_1(L_0) - u_2(L_0,z) \right) dz = O(\delta^2 \varepsilon^2)
$$
Finally:
\begin{eqnarray} \label{eqn : dem_erreur1}
\int_{0}^{H} \left(- \frac{\partial u}{\partial x} + \lambda u\right) (L_0,z)\left( \bar{u}(L_0) - u_1(L_0) \right) dz =
H \left(- \frac{\partial \bar{u}}{\partial x} + \lambda \bar{u}\right) (L_0)\left( \bar{u}(L_0) - u_1(L_0)\right)
\end{eqnarray}
$\bullet$ Since $u_1(L_0) = \overline{u}_2(L_0)$, the second term reads:
\begin{align} \label{eqn : etoile2}
-\int_{0}^{H} \left(- \frac{\partial u_1}{\partial x}(L_0) + \lambda u_1(L_0) \right)\left( u - u_2 \right)(L_0,z) dz &=
-H \left(- \frac{\partial( u_1 - \bar u)}{\partial x} + \lambda (u_1 -\bar{u} )\right)(L_0)\left( \bar{u} - u_1 \right)(L_0) \notag\\
&- H \left(- \frac{\partial \bar{u}}{\partial x}
+ \lambda \bar{u}\right) (L_0)\left( \bar{u} - u_1\right)(L_0) \tag{$**$}
\end{align}
We reformulate the first term on the right-hand side. Note that the function $u_1 - \bar{u}$ satisfies the equation:
$$
-\frac{\partial^2 (u_1 - \bar{u})}{\partial x^2}(x) + a^2(u_1 -\bar{u})(x) = a^2(\bar{u}(x) - u(x,0)),\;\;\; \forall x \in (0, L_0)
$$
So that by multiplying this equation by $u_1 - \bar{u}$, after integration on $(0, L_0)$ and use of the boundary condition
$u_1(0) = \overline{u}(0)$, we obtain:
\begin{eqnarray*}
\int_{0}^{L_0} \left( \frac{\partial (u_1 - \bar{u})}{\partial x}\right)^2(x)dx + a^2 \int_{0}^{L_0}\left( u_1 - \bar{u}\right)^2(x) dx
- \frac{\partial (u_1 - \bar{u})}{\partial x}(L_0)(u_1 - \bar{u})(L_0) =
\\
\int_{0}^{L_0} a^2 (\bar{u}(x) - u(x,0))(u_1(x)-\bar{u}(x)) dx
\end{eqnarray*}
thus:
\begin{eqnarray*}
- \frac{\partial (u_1 - \bar u)}{\partial x}(L_0)(u_1 - \bar{u})(L_0) =
\int_{0}^{L_0} a^2 (\bar{u}(x) - u(x,0))(u_1(x)-\bar{u}(x)) dx - \mathcal{A}_1(u_1-\bar{u}, u_1-\bar{u})
\end{eqnarray*}
where $\displaystyle\mathcal{A}_1(u_1-\bar{u}, u_1-\bar{u}) = \int_{0}^{L_0} \left( \frac{\partial (u_1 - \bar{u})}{\partial x}\right)^2(x)dx + a^2 \int_{0}^{L_0}\left( u_1 - \bar{u}\right)^2(x) dx $.\\
And then (\ref{eqn : etoile2}) becomes:
\begin{eqnarray} \label{eqn : dem_erreur3}
-\int_{0}^{H} \left(- \frac{\partial u_1}{\partial x}(L_0) + \lambda u_1(L_0) \right)\left( u - u_2 \right) dz
&=& H\int_{0}^{L_0} a^2 (\bar{u}(x) - u(x,0))(u_1(x)-\bar{u}(x)) dx \nonumber\\
&&- H\mathcal{A}_1(u_1-\bar{u}, u_1-\bar{u})
+ \lambda H \left( \bar{u} - u_1 \right)^2(L_0) \nonumber \\
&& - H \left(- \frac{\partial \bar{u}}{\partial x}
+ \lambda \bar{u}\right) (L_0)\left( \bar{u} - u_1\right)(L_0)
\end{eqnarray}
$\bullet$ To recap, the boundary term on $\Gamma$ in (\ref{eqn : etoile1}) becomes:
\begin{eqnarray*}
\int_{\Gamma}^{} \frac{\partial (u-u_2)}{\partial n}\left( u - u_2 \right) dz
&=& O(\varepsilon^2) + O(\delta^2 \varepsilon^2) + H\int_{0}^{L_0} a^2 (\bar{u}(x) - u(x,0))(u_1(x)-\bar{u}(x)) dx \\
&& -H \mathcal{A}_1(u_1-\bar{u}, u_1-\bar{u})
+ \lambda H \left( \bar{u} - u_1 \right)^2(L_0)\\
&& - \lambda\int_{0}^{H} \left(u - u_2 \right)^2 dz
\end{eqnarray*}
We first observe that:
$$
\int_{0}^{L_0} a^2 (\bar{u}(x) - u(x,0))(u_1(x)-\bar{u}(x)) dx \leq \frac{a^2}{2} \int_{0}^{L_0} (\bar{u}(x) - u(x,0))^2dx + \frac{a^2}{2} \int_{0}^{L_0} (u_1(x)-\bar{u}(x))^2dx.
$$
It follows that:
\begin{eqnarray*}
\int_{\Gamma}^{} \frac{\partial (u-u_2)}{\partial n}\left( u - u_2 \right) dz & \leq & C(1+ \delta^2)\varepsilon^2 +
H \frac{a^2}{2} \int_{0}^{L_0} (\bar{u}(x) - u(x,0))^2dx \\
&& - H\mathcal{A}_1(u_1-\bar{u}, u_1-\bar{u})
+H \frac{a^2}{2} \int_{0}^{L_0} (u_1(x)-\bar{u}(x))^2dx \\
&& + \lambda H \left( \bar{u} - u_1 \right)^2(L_0) - \lambda\int_{0}^{H} \left(u - u_2 \right)^2(L_0,z) dz
\end{eqnarray*}
where $C$ is a positive constant. \\
Then we have:
$$\displaystyle - \mathcal{A}_1(u_1-\bar{u}, u_1-\bar{u})
+ \frac{a^2}{2} \int_{0}^{L_0} (u_1(x)-\bar{u}(x))^2dx \leq 0$$
and finally, using the definition of $\overline{u}(L_0)$ and the relation $u_1(L_0) = \overline{u}_2(L_0)$, we obtain:
\begin{eqnarray*}
\lambda H \left( \bar{u} - u_1 \right)^2(L_0) - \lambda\int_{0}^{H} \left(u - u_2 \right)^2 dz &=& \lambda H \left( \frac{1}{H}\int_{0}^{H}(u -u_2)dz \right)^2- \lambda\int_{0}^{H} \left(u - u_2 \right)^2 dz\\
&= & \lambda \frac{1}{H} \left( \int_{0}^{H}(u -u_2)dz \right)^2 - \lambda\int_{0}^{H} \left(u - u_2 \right)^2 dz\\
&\leq& \lambda \frac{1}{H}\left( \int_{0}^{H}1 dz \right)\left( \int_{0}^{H}(u -u_2)^2dz \right) - \lambda\int_{0}^{H} \left(u - u_2 \right)^2 dz\\
&\leq& 0.
\end{eqnarray*}
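The last step above is the Cauchy--Schwarz inequality: the squared mean of $u-u_2$ over the interface is dominated by its mean square. The following sketch (purely illustrative, not part of the proof; the sampled profile is an arbitrary choice) checks the discrete analogue of this inequality numerically.

```python
import random

def mean_square_bound(f_vals, H):
    """Discrete analogue of H*((1/H) int f)^2  vs  int f^2 on [0, H]."""
    n = len(f_vals)
    dz = H / n
    integral_f = sum(f_vals) * dz                 # int_0^H f dz
    integral_f2 = sum(v * v for v in f_vals) * dz # int_0^H f^2 dz
    return H * (integral_f / H) ** 2, integral_f2

random.seed(0)
profile = [random.uniform(-2.0, 2.0) for _ in range(1000)]
lhs, rhs = mean_square_bound(profile, H=0.5)
print(lhs <= rhs)  # True for any profile; equality for constant profiles
```

The inequality holds exactly at the discrete level as well, since $(\sum_i f_i)^2 \le n \sum_i f_i^2$.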
We now come back to (\ref{eqn : etoile1}), which gives:
\begin{eqnarray*}
\int_{\Omega_2}^{} |\nabla\left( u- u_2 \right)|^2 dx dz + \int_{\Gamma_B^2}^{} \kappa |u-u_2|^2 dx
& \leq & M(1 + \delta^2) \varepsilon^2
\end{eqnarray*}
and thus:
$$
\int_{\Omega_2}^{} |\nabla\left( u- u_2 \right)|^2 dx dz \leq M (1 + \delta^2) \varepsilon^2
$$
where $M$ denotes a positive constant.\\
Finally, since $u - u_2 = 0$ on $\Gamma_R$, the Poincar\'e inequality yields inequality (\ref{eq : error_control}).$\Box$\\
\begin{remark}~\\
\begin{itemize}
\item This proposition fails if we choose the interface position in a zone where 2-D effects are significant. In this case, relation (\ref{eqn : relation_u2_u1})
no longer holds.
\item The right-hand side of (\ref{eq : error_control}) is also an upper bound for $\|u_2^{\lambda_1} - u_2^{\lambda_2}\|_{H^1(\Omega_2)}$
for all positive $\lambda_1$ and $\lambda_2$.
\end{itemize}
\end{remark}
\section{Numerical results}
The test cases presented in this section illustrate the coupling method of 1-D and 2-D elliptic equations based on the Schwarz algorithm. All the computations have been done using the software package Freefem++ \cite{Freefem}, with a $P_2$ finite element discretization. \\
In the first part of this section, the two test cases are described in detail. In the second and third parts, we focus on the convergence of the Schwarz algorithm on the one hand, and on the comparison of the coupled solution with the reference solution on the other hand, in order to illustrate the theoretical results obtained in the previous paragraphs.
\subsection{Description of the test cases}
\subsubsection{Test \#1:}
The first test case is concerned with the solution of the 2-D problem (\ref{eq:full2-D}) where the domain is a rectangle $\Omega=[0, L] \times [0, H]$ which is assumed to be uniformly shallow: $H \ll L$.
Let us consider that the right-hand side term $F(x,z)$ of the full 2-D problem is $\displaystyle F(x,z) = m\exp(-(x-x^*)^2) \sin(\frac{2\pi z}{H})$, where $x^* < L$.\\
The global reference solution $u$ is displayed in Figure \ref{fig: global solution test1 }.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=14cm]{Figs/sol_rectangle_L20_H0pt5_K0pt001}
\caption{2-D reference solution for the first test case where $L=20$, $x^*=19$, $H=0.5$ and $\kappa=0.001$.}
\label{fig: global solution test1 }
\end{center}
\end{figure}
We notice that the 2-D effects are due to the particular form of the forcing term $F$ and are located around $x^*$.\\
Now let us define the coupled model. \\
The interface is located at $x=L_{0}< x^*$ as shown in Figure \ref{fig: domain test1 }. In the part of the domain $\Omega_{1}=[0, L_{0}] \times [0, H]$, we assume \textit{a priori} that the 2-D effects are negligible and consequently we replace the full 2-D equations by the 1-D model (see (\ref{eq:1-Dmodel})). \\
\begin{figure}[h!]
\begin{center}
\subfloat[Computational domain for the 2-D reference model]
{\includegraphics{Figs/domaine_rectangulaire.eps} \label{fig : rectangulaire_entier}}\\
\subfloat[Computational domain for the 1-D/2-D reduced model]
{\includegraphics{Figs/domaine_rectangulaire_c.eps} \label{fig : rectangulaire_couplage}}
\caption{Computational domains for both the reference and reduced models in test case \#1. For the reduced model (b), the 1-D/2-D interface $\Gamma$ is located in $x=L_0$.}
\label{fig: domain test1 }
\end{center}
\end{figure}
\subsubsection{Test \#2:}
In this second test case, the 2-D effects are due to the funnel-shaped geometry of the domain (see Figure \ref{fig : entonnoir}), and the forcing term is constant ($=1$). \\
\begin{figure}[h!]
\begin{center}
\subfloat[Computational domain for the 2-D reference model]{
\includegraphics{Figs/Entonnoir_antoine.eps}\label{fig : entonnoir}}\\
\subfloat[Computational domain for the 1-D/2-D reduced model]{
\includegraphics{Figs/entonnoir_couplage.eps}\label{fig : entonnoir_couplage}}
\caption{Computational domains for both the reference and reduced models in test case \#2. For the reduced model (b), the 1-D/2-D interface $\Gamma$ is located in $x=L_0$.}
\label{fig: domain test2 }
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{Figs/sol_ent_L2_H0pt05_l3_K0pt001}
\caption{2-D reference solution for the second test case where $L=2$, $H=0.05$, $l=3$ and $\kappa=0.001$.}
\label{fig: global solution test2 }
\end{center}
\end{figure}
The reference solution in the whole domain is displayed in Figure \ref{fig: global solution test2 }.\\
The coupled model is defined by splitting the domain in two parts. The interface $\Gamma$ is located at $x = L_0$, $0 < L_0 < L_2$ as shown in Figure \ref{fig: domain test2 }.
\subsection{Convergence of the Schwarz algorithm}
In this section we provide numerical results to assess the theoretical results of \S \ref{algorithm}. We are interested in illustrating the optimal convergence of the Schwarz algorithm for the parameter $\lambda=\lambda_{opt}$. Figure \ref{fig: Schwarz_test_rect} shows the difference between the iterates of the Schwarz algorithm in the $L^{\infty}$ norm for the two test cases.\\
As demonstrated in \S \ref{algorithm}, the Schwarz algorithm converges in two iterations for the optimal parameter $\lambda=\lambda_{opt}$. It is important to notice that this result is independent of the interface location.\\
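This two-iteration behaviour can be reproduced on a toy analogue of the coupled problem. The sketch below is our illustration, not the Freefem++ setup used for the figures: it runs a nonoverlapping Schwarz iteration with Robin transmission conditions for $-u''=1$ on $[0,1]$, $u(0)=u(1)=0$, split at $c=0.5$. Each subdomain problem is solved exactly (the solutions are quadratics). For this toy problem, $\lambda=2$ is the interface Dirichlet-to-Neumann coefficient and plays the role of $\lambda_{opt}$: the interface values become exact after two iterations.

```python
def schwarz_interface_values(lam, c=0.5, iterations=2):
    """Jacobi-type nonoverlapping Schwarz with Robin data g1, g2 at x = c."""
    g1 = g2 = 0.0  # initial Robin data at the interface
    for _ in range(iterations):
        # Omega_1 = [0, c]: u1 = -x^2/2 + a x, u1(0) = 0, u1'(c) + lam*u1(c) = g1
        a = (g1 + c + lam * c**2 / 2) / (1 + lam * c)
        # Omega_2 = [c, 1]: u2 = -x^2/2 + p x + q, u2(1) = 0, -u2'(c) + lam*u2(c) = g2
        p = (g2 - c - lam * (1 - c**2) / 2) / (lam * (c - 1) - 1)
        q = 0.5 - p
        u1c = -c**2 / 2 + a * c
        u2c = -c**2 / 2 + p * c + q
        # exchange Robin data: u2'(c) = p - c, u1'(c) = a - c
        g1, g2 = (p - c) + lam * u2c, (c - a) + lam * u1c
    return u1c, u2c

exact = 0.5 * (1 - 0.5) / 2  # u(x) = x(1-x)/2 evaluated at the interface
u1c, u2c = schwarz_interface_values(lam=2.0)
print(abs(u1c - exact), abs(u2c - exact))  # both vanish after two iterations
```

For a suboptimal $\lambda$, the same iteration still converges, but only geometrically.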
\begin{figure}[h!]
\centering
\subfloat{
\includegraphics[width=7cm]{Figs/DDMConvergence_rect_L20_H05_L0eq16_k0pt001} \label{fig: Schwarz_test_rect}
\includegraphics[width=7cm]{Figs/DDMConvergence_entonnoir_L2_L0eq1pt5_H005_l3}
} \label{fig: Schwarz_test_ent}
\caption{Convergence of Schwarz algorithm with various values of $\lambda$. Left: test case \#1 with $L=20$, $L_0=16$, $H=0.5$ and $\kappa=0.001$. Right: test case \#2 with $L=2$, $L_0=1.5$, $H=0.05$, $l=3$ and $\kappa=0.001$.}
\end{figure}
\subsection{Difference between the coupled solution and the full 2-D solution}
One important point in the analysis of the accuracy of the coupling procedure is the comparison of the coupled solution with the reference solution as a function of the interface location $x=L_{0}$.
Unlike in classical domain decomposition problems, here there is a difference between the (converged) coupled solution and the full 2-D solution; this difference is due to the model reduction performed in the 1-D part of the domain, and it depends on the location chosen to discriminate between the 1-D and 2-D regions. Figures \ref{fig: Erreur_sol_ref_rect1} and \ref{fig: Erreur_sol_ref_ent1} (left) show the $H^1$ error between the coupled and reference solutions as a function of the interface location for the two test cases.
Figures \ref{fig: Erreur_sol_ref__eps_rect1} and \ref{fig: Erreur_sol_ref_eps_ent1} (right) show the $H^1$ error in $\Omega_2$ between the coupled and reference solutions as a function of $\displaystyle \varepsilon =\frac{H}{L}$ for the two test cases.\\
\begin{figure}[h!]
\centering
\subfloat[\underline{Test case \# 1}: {\scriptsize (left): $L=20$, $H=0.5$ and $\kappa=0.001$. (right): $L=20$, $L_0=14$ and $\kappa=0.01$.}]{
\includegraphics[width=7.5cm]{Figs/erreur_L0_rect_L20_H05_xstar19_kappa0pt001_bis} \label{fig: Erreur_sol_ref_rect1}
\includegraphics[width=7cm]{Figs/erreur_eps_rect_L20_L0eq14_K0pt01} \label{fig: Erreur_sol_ref__eps_rect1}
}\\
\subfloat[\underline{Test case \# 2}: {\scriptsize (left): $L=2$, $H=0.05$ and $\kappa=0.001$. (right): $L=2$, $L_0=1.5$ and $\kappa=0.001$.}]{
\includegraphics[width=7.5cm]{Figs/erreur_L0_ent_L2_H0pt05_l3_kappa0pt001_bis} \label{fig: Erreur_sol_ref_ent1}
\includegraphics[width=7cm]{Figs/erreur_eps_ent_L2_L0eq1pt5_K0pt001_l3} \label{fig: Erreur_sol_ref_eps_ent1}
}
\caption{Relative error as a function of $L_0$ (left) and $\varepsilon$ (right) between the coupled solution and the 2-D reference solution in test case \#1 (top) and \#2 (bottom).
In the left column, the red curves correspond (for both test cases) to the RHS of estimate (\ref{eq : error_control}).}
\label{fig:all}
\end{figure}
It is interesting to notice that for both test cases there is a discontinuity in the curve representing the error as a function of $L_{0}$ (see Figure \ref{fig:all}, left column). This discontinuity occurs both for the numerical difference between $u_2$ and $u_{\Omega_2}$ (black curve) and for the theoretical curve (in red) corresponding to the right-hand side of estimate (\ref{eq : error_control}). Indeed, if $L_{0}$ is greater than a certain threshold, the error grows very rapidly (and $\delta\longrightarrow\infty$ in estimate (\ref{eq : error_control})). This could be an indication of the real (a priori unknown) value of $L_{1}$ (see the discussion at the end of Section \ref{sec:derivation} above).
\section{Conclusion}
In this paper we studied a linear boundary value problem set in a 2-D domain, assuming that the solution may be approximated by a 1-D function in some part of the computational domain. We derived a reduced model that consists in coupling a 1-D model (wherever we think it is legitimate) with the original 2-D system (everywhere else).\\
The model reduction is performed thanks to a small aspect ratio hypothesis, with an integration in the shallow direction (mimicking the derivation of the shallow water equations). After this derivation we introduced an iterative method that couples the 1-D and 2-D systems, and we proved some convergence results. One original aspect of this work
is the particular attention paid to the location of the 1-D/2-D interface. These theoretical results are illustrated with numerical simulations that underline the importance of the interface position, but also of the way the 1-D and 2-D models are coupled (boundary conditions at this interface). All these aspects, studied here for a linear model, will be considered in a forthcoming study of dimensionally heterogeneous modelling in fluid dynamics.
\section*{Acknowledgments}
This work was supported by the research department of the French national electricity company,
\textit{EDF R}\&\textit{D}.
\begin{thebibliography}{10}
\bibitem{gerbeauperthame} {\sc J.~F. Gerbeau, B. Perthame}, {\em Derivation of Viscous Saint-Venant System for Laminar Shallow Water; Numerical Validation},
Discrete and Continuous Dynamical Systems, Series B 1, (2001), pp.~89--102.
\bibitem{formaggia1} {\sc L. Formaggia, J.~F. Gerbeau, F. Nobile and A. Quarteroni}, {\em On the coupling of 3D and 1D Navier-Stokes
equations for flows problem in compliant vessels},
Computer Methods in Applied Mechanics and Engineering, 191, 6-7 (2001), pp.~561--582.
\bibitem{miglio} {\sc E. Miglio, S. Perotto and F. Saleri}, {\em Model coupling techniques for free-surface flow problems: Part I},
Nonlinear Analysis,
63, (2005), pp.~1885--1896.
\bibitem{monnier} {\sc J. Marin and J. Monnier}, {\em Superposition of local zoom models and
simultaneous calibration for 1D-2D shallow
water flows},
Mathematics and Computers in Simulation, Volume 80 Issue 3, (2009), pp.~547--560.
\bibitem{finaud-guyot} {\sc P. Finaud-Guyot, C. Delenne, V. Guinot and C. Llovel}, {\em 1D--2D coupling for river flow modeling},
Comptes-Rendus de l'Acad\'emie des Sciences, Vol. 339, (2011), pp.~226--234.
\bibitem{malleron} {\sc N. Malleron, F. Zaoui, N. Goutal and T. Morel}, {\em On the use of a high-performance framework
for efficient model coupling in hydroinformatics},
Environmental Modelling and Software, 26, (2011), pp.~1747--1758.
\bibitem{leiva2011} {\sc J. Leiva, P. Blanco and G. Buscaglia},
{\em Partitioned analysis for dimensionally-heterogeneous hydraulic networks},
SIAM Multiscale Model. Simul., vol. 9, (2011), pp.~872--903.
\bibitem{Godlewski2004} {\sc E. Godlewski and P.A. Raviart}, {\em The numerical interface coupling of nonlinear
hyperbolic systems of conservation laws. The scalar case},
Numerische Mathematik, vol. 97, (2004), pp.~ 81--130.
\bibitem{Godlewski2005} {\sc E. Godlewski, K.C. Le Thanh and P.A. Raviart},
{\em The numerical interface
coupling of nonlinear hyperbolic systems of conservation laws. The case of
systems},
Math. Mod. Num. Anal., vol. 39(4), (2005), pp.~649--692.
\bibitem{bouttin} {\sc B. Bouttin},
{\em Mathematical and numerical study of nonlinear hyperbolic equations: model coupling and nonclassical shocks.},
Ph.D. thesis, Universit\'e Paris 6, 2009.
\bibitem{blanco} {\sc P.~J. Blanco, M. Discacciati and A. Quarteroni},
{\em Modeling dimensionally-heterogeneous problems: analysis, approximation and applications},
Numer. Math., vol. 119, Number 2, (2011), pp.~299--335.
\bibitem{leiva} {\sc J. Leiva, P. Blanco and G. Buscaglia},
{\em Iterative strong coupling of dimensionally-heterogeneous models},
International Journal for Numerical Methods in Engineering, Vol 81, (2010), pp.~1558--1580.
\bibitem{REF-KAPPA} {\sc Y. \c{C}engel},
{\em Introduction to thermodynamics and heat transfer},
McGraw-Hill Higher Education, 1997.
\bibitem{formaggia2} {\sc L. Formaggia, J.~F. Gerbeau, F. Nobile and A. Quarteroni},
{\em Numerical treatment of defective boundary conditions for the Navier-Stokes equations},
SIAM Journal on Numerical Analysis, Volume 40, Number 1, (2002), pp.~376--401.
\bibitem{quarteroni} {\sc A. Quarteroni and A. Valli},
{\em Domain Decomposition Methods for Partial Differential Equations},
Oxford University Press, New York, 2005.
\bibitem{vero} {\sc V. Martin},
{\em M\'ethodes de d\'ecomposition de domaine de type relaxation d'ondes pour des \'equations de l'oc\'eanographie.},
Ph.D thesis, Universit\'e Paris 13, 2003.
\bibitem{lions} {\sc P.~L Lions},
{\em On the Schwarz alternating method. III. A variant for nonoverlapping subdomains},
in Third International Symposium on Domain Decomposition Methods for Partial Differential Equations (Houston, TX, 1989), SIAM,
Philadelphia, PA, (1990), pp.~202--223.
\bibitem{japhetnataf} {\sc C. Japhet and F. Nataf},
{\em The best interface conditions for domain decomposition methods: absorbing boundary conditions},
in Absorbing boundaries and layers, domain decomposition methods, Applications to Large Scale Computations,
L. Tourrette and L. Halpern, eds.,
Nova Science Publishers, Inc., New York, 2001, pp.~348--373.
\bibitem{Freefem}
F.~Hecht, O.~Pironneau, and A.~Le~Hyaric.
\newblock {FreeFem++ manual}.
\newblock 2004.
\end{thebibliography}
\end{document} | math | 60,694 |
\begin{document}
\title{The Schur-Wielandt theory for central S-rings}
\author{Gang Chen}
\address{School of Mathematics and Statistics, Central China Normal University, Wuhan, China}
\email{[email protected]}
\author{Mikhail Muzychuk}
\address{Netanya Academic College, Netanya, Israel}
\email{[email protected]}
\author{Ilya Ponomarenko}
\address{Steklov Institute of Mathematics at St. Petersburg, Russia}
\email{[email protected]}
\thanks{The work of the first author was financially supported by self-determined research funds of CCNU (No.~CCNU15A02031) from the colleges' basic research and operation of MOE. The work of the third author was partially supported by the RFBR Grant 14-01-00156.}
\date{}
\maketitle
\begin{abstract}
Two basic results on the S-rings over an abelian group are the Schur theorem on multipliers and the Wielandt
theorem on primitive S-rings over groups with a cyclic Sylow subgroup. None of these theorems is directly generalized
to the non-abelian case. Nevertheless, we prove that they are true for the central S-rings, i.e., for those which are
contained in the center of the group ring of the underlying group (such S-rings naturally arise in the supercharacter
theory). We also generalize the concept of a B-group introduced by Wielandt, and show that any Camina group is
a generalized B-group whereas with few exceptions, no simple group is of this type.
\end{abstract}
\section{Introduction}\label{sec:1}
A {\it Schur ring} or {\it S-ring} over a finite group $G$ can be defined as a subring of the group ring ${\mathbb Z} G$
that is a free ${\mathbb Z}$-module spanned by a partition of $G$ closed under taking inverse and containing the identity $e$
of $G$ as a class (see Section~\ref{150315x} for details). The S-ring theory was initiated by Schur~\cite{S} and then
developed by Wielandt~\cite{Wie} who wrote in \cite{Wie69} that S-rings provide one ``of three major tools'' to study
a group action.\footnote{The two other tools are the representation theory and the method of invariant relations.}
Until recently, the focus was on studying S-rings over abelian groups and the main applications of this theory were
connected with algebraic combinatorics problems~\cite{MP}. However, as it was observed in \cite{He10}, the
supercharacter theory developed to study group representations, is nothing else than the theory of commutative
S-rings of a special form that we call here {\it central}.
\begin{definition}
An S-ring over a group $G$ is said to be {\it central} if it is contained in the center ${\cal Z}({\mathbb Z} G)$
of the group ring ${\mathbb Z} G$.
\end{definition}
An example of such a ring is obtained from any permutation group $K$ such that
$$
G\Inn(G)\le K\le \sym(G),
$$
where $\Inn(G)$ is the inner automorphism group of $G$; the corresponding
partition of~$G$ is formed by the orbits of the stabilizer of $e$ in~$K$. In the special case when $K=\sym(G)$,
this produces the {\it trivial} central S-ring ${\mathbb Z} e+{\mathbb Z}\und{G}$, where $\und{G}$ is the sum of all elements of $G$.
On the other hand, if $K=G\Inn(G)$, the orbits are
the conjugacy classes of $G$; this shows that ${\cal Z}({\mathbb Z} G)$ is a central S-ring.
In particular, any S-ring over an abelian group is central. The main goal of the present paper is to extend the basic results
on S-rings from the abelian case to the central one.
The Schur theorem on multipliers is a fundamental statement in the theory of S-rings over abelian groups. To explain it,
given an integer $m$ coprime to $|G|$, we define a permutation on the elements of the group $G$ by
$$
\sigma_m:G\to G,\ x\mapsto x^m.
$$
It permutes also the conjugacy classes of $G$, and so induces a linear isomorphism of the ring ${\cal Z}({\mathbb Z} G)$. If
the group~$G$ is abelian, then $\sigma_m\in\aut(G)$, ${\cal Z}({\mathbb Z} G)={\mathbb Z} G$ and the Schur theorem on multipliers states
that $\sigma_m$ is a Cayley automorphism of every S-ring over $G$. Our first result shows that in the nonabelian case,
$\sigma_m$ is still an automorphism (but not a Cayley one) of any central S-ring over~$G$.
\begin{theorem}\label{100315v}
Let ${\cal A}$ be a central S-ring over a group $G$, and let $m$ be an integer coprime to $|G|$. Then $\sigma_m({\cal A})={\cal A}$ and
$\sigma_m|_{\cal A}\in\aut({\cal A})$.
\end{theorem}
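To make the statement concrete, the following sketch (our illustration; $G=S_3$ is an arbitrary small nonabelian choice) verifies that $x\mapsto x^m$ permutes the conjugacy classes of $G$ for several $m$ coprime to $|G|=6$, which is exactly what makes $\sigma_m$ a linear isomorphism of ${\cal Z}({\mathbb Z} G)$.

```python
from itertools import permutations

def compose(s, t):
    """(s*t)(i) = s(t(i)) for permutations written as tuples."""
    return tuple(s[t[i]] for i in range(len(t)))

def power(s, m):
    r = tuple(range(len(s)))
    for _ in range(m):
        r = compose(r, s)
    return r

G = list(permutations(range(3)))  # S_3, a nonabelian group of order 6
classes = set()
for x in G:
    conjugates = set()
    for g in G:
        g_inv = tuple(g.index(i) for i in range(3))
        conjugates.add(compose(compose(g, x), g_inv))  # g x g^{-1}
    classes.add(frozenset(conjugates))

for m in (5, 7, 11):  # integers coprime to |G| = 6
    image = {frozenset(power(x, m) for x in cls) for cls in classes}
    assert image == classes  # sigma_m permutes cla(G)
print(len(classes))  # S_3 has 3 conjugacy classes
```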
Based on this result in the abelian case, Wielandt generalized the Schur theorem on primitive groups having a regular cyclic subgroup. In fact,
Wielandt's proof shows that if $G$ is an abelian group of composite order that has a cyclic Sylow subgroup, then
no proper S-ring over $G$ is primitive.\footnote{The primitivity concept in S-ring theory plays the same role as the simplicity in group theory.}
The following statement establishes ``a central version'' of the Wielandt theorem.
\begin{theorem}\label{070215a}
Let ${\cal A}$ be a nontrivial central S-ring over a group $G$ of composite order. Suppose that $G$ has a normal cyclic
Sylow $p$-subgroup. Then ${\cal A}$ is imprimitive.
\end{theorem}
Following \cite{Wie}, a finite group $G$ is called a B-group if every primitive group containing a regular subgroup
isomorphic to~$G$ is 2-transitive. It should be remarked that most of the B-groups $G$ mentioned in~\cite{Wie}
satisfy a priori a stronger condition: no nontrivial S-ring over $G$ is primitive. In this sense, the following definition seems to be quite natural.
In what follows, we say that a central S-ring over $G$ is {\it proper} if it lies strictly between ${\cal Z}({\mathbb Z} G)$ and the trivial S-ring over~$G$.
\begin{definition}
A group $G$ is called a generalized B-group if no proper central S-ring over $G$ is primitive.
\end{definition}
Clearly, every B-group is also a generalized one. The converse statement is not true; see Subsection~\ref{190315a}.
A nontrivial example of a generalized B-group is given in Theorem~\ref{070215a}.
The following statement gives a family of generalized B-groups; we don't know whether they
are B-groups. Below, under a {\it Camina} group, we mean a group $G$ that has a proper
nontrivial normal subgroup~$H$ such that each $H$-coset distinct from $H$ is contained in a conjugacy
class of~$G$ (in other terms, $(G,H)$ is a Camina pair).\footnote{To simplify the presentation, we use the term ``Camina group" not only in the case
where $(G,G')$ is a Camina pair.}
\begin{theorem}\label{100315a}
Any Camina group is a generalized B-group.
\end{theorem}
The class of the Camina groups includes, in particular, all Frobenius and extra-special groups; see~\cite{Ca}.
Thus, by Theorem~\ref{100315a}, we obtain the following statement.
\begin{corollary}\label{100315u}
Any Frobenius or extra-special group is a generalized B-group.
\vrule height .9ex width .8ex depth -.1ex
\end{corollary}
The last result of the present paper shows that with a few possible exceptions, no simple
group is a generalized B-group. The proof is based on the Schur theorem on multipliers and
the characterization of rational simple groups given in~\cite{FS}.
\begin{theorem}\label{070215b}
A generalized B-group $G$ is not simple unless $|G|\le 3$, or $G\cong\SP(6,2)$
or $\ORT^+(8,2)'$.\footnote{In fact, we do not know whether two simple groups from Theorem~\ref{070215b} are generalized B-groups.}
\end{theorem}
For the reader's convenience, we collect the basic facts on S-rings in Section~\ref{150315x}. The proofs of
Theorems~\ref{100315v} and~\ref{070215a} are contained in Sections~\ref{190315t} and \ref {190315u}, respectively.
The results concerning generalized B-groups are in Section~\ref{190315v}.
{\bf Notation.}
As usual, ${\mathbb Z}$, ${\mathbb Q}$ and ${\mathbb C}$ denote the ring of integers and the fields of rationals and complex numbers, respectively.
The identity of a group $G$ is denoted by $e$; the set of non-identity elements in $G$ is denoted by $G^\#$.
The set of conjugacy classes of $G$ is denoted by $\cla(G)$.
Let $X\subseteq G$. The subgroup of $G$ generated by $X$ is denoted by $\grp{X}$;
we also set $\rad(X)=\{g\in G:\ gX=Xg=X\}$.
The element $\sum_{x\in X}x$ of the group ring ${\mathbb Z} G$ is denoted by $\und{X}$.
For an integer $m$, we set $X^{(m)}=\{x^m:\ x\in X\}$ and $\und{X}^{(m)}=\und{X^{(m)}}$.
The group of all permutations of the elements of $G$ is denoted by $\sym(G)$.
The additive and multiplicative groups of the ring ${\mathbb Z}/(n)$ are denoted by ${\mathbb Z}_n$ and ${\mathbb Z}^*_n$, respectively.
\section{Preliminaries}\label{150315x}
Let $G$ be a finite group. A subring~${\cal A}$ of the group ring~${\mathbb Z} G$ is called a {\it Schur
ring} ({\it S-ring}, for short) over~$G$ if there exists a partition ${\cal S}={\cal S}({\cal A})$ of~$G$
such that
\begin{enumerate}
\tm{S1} $\{e\}\in{\cal S}$,
\tm{S2} $X\in{\cal S}\ \Rightarrow\ X^{-1}\in{\cal S}$,
\tm{S3} ${\cal A}=\Span\{\und{X}:\ X\in{\cal S}\}$.
\end{enumerate}
In particular, for all $X,Y,Z\in{\cal S}$ there is a nonnegative integer $c_{XY}^Z$ such that
$$
\und{X}\,\und{Y}=\sum_{Z\in{\cal S}}c_{XY}^Z\und{Z},
$$
these integers are the structure constants of ${\cal A}$ with respect to the linear base $\{\und{X}:\ X\in{\cal S}\}$.
The number $\rk({\cal A})=|{\cal S}|$ is called the {\it rank} of~${\cal A}$.
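For a concrete instance of the structure constants, one can compute in ${\cal Z}({\mathbb Z} G)$ for $G=S_3$ (an illustrative choice, not an example from the text): with $X=Y$ the class of transpositions, one finds $\und{X}\,\und{X}=3\und{Z_1}+3\und{Z_3}$, where $Z_1=\{e\}$ and $Z_3$ is the class of $3$-cycles. A minimal sketch:

```python
from itertools import permutations
from collections import Counter

def compose(s, t):
    return tuple(s[t[i]] for i in range(len(t)))

G = list(permutations(range(3)))  # S_3
inv = {g: tuple(g.index(i) for i in range(3)) for g in G}
classes, seen = [], set()
for x in G:
    if x not in seen:
        cls = sorted({compose(compose(g, x), inv[g]) for g in G})
        classes.append(cls)
        seen.update(cls)
# iteration order gives classes = [{e}, transpositions, 3-cycles]

def structure_constants(X, Y):
    """Coefficients of the product of two class sums, one per class Z."""
    prod = Counter()
    for x in X:
        for y in Y:
            prod[compose(x, y)] += 1
    return [prod[Z[0]] for Z in classes]  # the coefficient is constant on Z

T = next(C for C in classes if len(C) == 3)  # the three transpositions
print(structure_constants(T, T))  # -> [3, 0, 3]
```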
Let ${\cal A}'$ be an S-ring over a group $G'$. Under a {\it Cayley isomorphism} from ${\cal A}$ to~${\cal A}'$, we mean
a group isomorphism $f:G\to G'$ such that ${\cal S}({\cal A})^f={\cal S}({\cal A}')$. This is a special case of the ordinary {\it isomorphism};
by definition, it is a bijection $f:G\to G'$ that induces a ring isomorphism from ${\cal A}$ to ${\cal A}'$ taking
$\und{X}$ to $\und{X'}$ for all $X\in{\cal S}$, where $X'=X^f$.
The classes of the partition ${\cal S}$ are called the {\it basic sets} of the S-ring~${\cal A}$. Any union of them
is called an {\it ${\cal A}$-set}. Thus, $X\subseteq G$ is an ${\cal A}$-set if and only if $\und{X}\in{\cal A}$. The set of all ${\cal A}$-sets
is closed with respect to taking inverse and product. Any subgroup of~$G$ that is an ${\cal A}$-set, is called an {\it ${\cal A}$-subgroup}
of~$G$ or {\it ${\cal A}$-group}. With each ${\cal A}$-set $X$, one can naturally associate two ${\cal A}$-groups,
namely $\grp{X}$ and $\rad(X)$ (see Notation). The S-ring ${\cal A}$ is called {\it primitive} if the only ${\cal A}$-groups are $e$ and $G$,
otherwise this ring is called {\it imprimitive}.
We will use the following statement proved in \cite[Proposition~22.3]{Wie}. Below for a function $f:{\mathbb Z}\to {\mathbb Z}$ and an
element $\xi=\sum_ga_gg$ of the ring ${\mathbb Z} G$, we set $f[\xi]=\sum_gf(a_g)g$.
\begin{lemma}\label{110315c}
Let ${\cal A}$ be an S-ring, $f:{\mathbb Z}\to {\mathbb Z}$ an arbitrary function and $\xi\in{\cal A}$. Then $f[\xi]\in{\cal A}$.
\vrule height .9ex width .8ex depth -.1ex
\end{lemma}
The important special case is when $f(a)=1$ or $0$ depending on whether $a\ne 0$ or $a=0$. Then
$f[\xi]=\und{X}$, where $X$ is the support of~$\xi$; in this case we refer to Lemma~\ref{110315c}
as the Schur--Wielandt principle.
\section{The Schur theorem on multipliers}\label{190315t}
{\bf Proof of Theorem~\ref{100315v}.} Since obviously $\sigma_{mm'}=\sigma_m\sigma_{m'}$ for
all $m$ and $m'$, without loss of generality, we can assume that $m$ is a prime. We need the following auxiliary
lemma.
\begin{lemma}\label{100315b}
Let $X\in\cla(G)$ and $p$ an arbitrary prime. Then
$$
\und{X}^p=\sum_{Y\in\cla(G)}a_Y\und{Y}
$$
for some nonnegative integers $a_Y$'s. Moreover,
$$
a_Y|Y|=\begin{cases}
|X|\ (\text{\rm mod}\hspace{2pt}p), &\text{if $Y=X^{(p)}$,}\\
\ 0\quad (\text{\rm mod}\hspace{2pt}p), &\text{if $Y\ne X^{(p)}$.}\\
\end{cases}
$$
\end{lemma}
\noindent{\bf Proof}.\ The first statement follows from the fact that ${\cal Z}({\mathbb Z} G)$ is an S-ring. To prove the second one, set
$$
T_Y=\{(x_1,\ldots,x_p)\in X^p:\ x_1\cdots x_p\in Y\}.
$$
Clearly, $(T_Y)^G=T_Y$. Since also $(x_1x_2\cdots x_p)^{x_1}=x_2\cdots x_px_1$, the set $T_Y$ is invariant
with respect to the cyclic shift $\pi:(x_1,x_2,\ldots,x_p)\mapsto (x_2,\ldots,x_p,x_1)$. Moreover,
since $p$ is prime, we have
$$
|(x_1,\ldots,x_p)^{\grp{\pi}}|=1\ \text{or}\ p.
$$
However, $|(x_1,\ldots,x_p)^{\grp{\pi}}|=1$ if and only if $x_1=\cdots=x_p$, which is possible only if $Y=X^{(p)}$;
in the latter case, the group $\grp{\pi}$ has exactly $|X|$ orbits of the form $\{(x,...,x)\}$, $x\in X$. Taking into
account that $T_Y$ is a disjoint union of $\grp{\pi}$-orbits, we have
$$
|T_Y|=|X|\delta+pu
$$
where $\delta=\delta_{Y,X^{(p)}}$ is the Kronecker delta and $u$ is the number of the $\grp{\pi}$-orbits of size $p$.
Thus, the required statement follows because $a_Y|Y|=|T_Y|$.
\vrule height .9ex width .8ex depth -.1ex
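The congruence of the lemma can be checked numerically on a small example; the choices $G=S_3$, $X$ the class of transpositions, and $p=5$ below are ours, made for illustration only. The sketch expands the class sum power in ${\mathbb Z} G$ and tests $a_Y|Y|\equiv |X|\,\delta_{Y,X^{(p))}}\ (\mathrm{mod}\ p)$.

```python
from itertools import permutations
from collections import Counter

def compose(s, t):
    return tuple(s[t[i]] for i in range(len(t)))

def power(s, m):
    r = tuple(range(len(s)))
    for _ in range(m):
        r = compose(r, s)
    return r

def convolve(u, v):
    """Product in the group ring ZG of two elements stored as Counters."""
    w = Counter()
    for a, ca in u.items():
        for b, cb in v.items():
            w[compose(a, b)] += ca * cb
    return w

G = list(permutations(range(3)))
inv = {g: tuple(g.index(i) for i in range(3)) for g in G}
classes, seen = [], set()
for x in G:
    if x not in seen:
        cls = sorted({compose(compose(g, x), inv[g]) for g in G})
        classes.append(cls)
        seen.update(cls)

p = 5                                         # a prime coprime to |G| = 6
X = next(C for C in classes if len(C) == 3)   # the transpositions
Xp = sorted({power(x, p) for x in X})         # X^(p) (here equal to X)

pw = Counter({tuple(range(3)): 1})            # (class sum of X)^p
for _ in range(p):
    pw = convolve(pw, Counter({g: 1 for g in X}))

for Y in classes:                             # a_Y|Y| = |X|*delta (mod p)
    assert pw[Y[0]] * len(Y) % p == (len(X) % p if Y == Xp else 0)
print(pw[Xp[0]])  # -> 81, and indeed 81*3 = 243 = 3 (mod 5) = |X| (mod 5)
```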
Let us continue the proof of the theorem. Since $p:=m$ is coprime to $|G|$, the mapping
$$
\und{X}\mapsto\und{X}^{(p)},\quad X\in\cla(G),
$$
is a bijection. It
induces a linear isomorphism of the ring ${\cal Z}({\mathbb Z} G)$; the image of the element $\xi$ under this isomorphism is
denoted by $\xi^{(p)}$. From Lemma~\ref{100315b}, it follows that $\mmod{\und{X}^p}{\und{X}^{(p)}}{p}$; here,
we make use of the fact that $|X^{(p)}|=|X|$ for all~$X$. Therefore,
\begin{equation}\label{110315a}
\xi^{(p)}=f[\xi^p]\quad\text{for all}\quad\xi\in{\cal Z}({\mathbb Z} G),
\end{equation}
where $f(a)$ is the remainder in the division of $a$ by $p$.
To prove the first part of the theorem, let $X\in{\cal S}({\cal A})$. Then $X$ is a union of some classes $X_i\in\cla(G)$, $i\in I$. Thus,
by~\eqref{110315a}, we have
\begin{equation}\label{120315c}
\sigma_p(\und{X})=\und{X}^{(p)}=\sum_{i\in I}\und{X_i}^{(p)}=\sum_{i\in I}f[\und{X_i}^p]=f[\sum_{i\in I}\und{X_i}^p]=
f[(\sum_{i\in I}\und{X_i})^p]=f[\und{X}^p].
\end{equation}
However, by Lemma~\ref{110315c}, the right-hand side belongs to~${\cal A}$. Therefore $\sigma_p(\und{X})\in{\cal A}$ and $X^{(p)}$
is an ${\cal A}$-set. Moreover, suppose that it contains a proper basic set $Y$. By the Dirichlet theorem, one can find a
prime $p'$ such that $\mmod{pp'}{1}{n}$, where $n=|G|$. Now, the above argument shows that
$Y^{(p')}$ is a proper ${\cal A}$-subset of $X$, a contradiction. Thus $X^{(p)}\in{\cal S}({\cal A})$ and so $\sigma_p({\cal A})={\cal A}$.
To prove the second part of the theorem, it suffices to verify that $\sigma_m$ induces a ring isomorphism of ${\cal Z}({\mathbb Z} G)$:
then it is, obviously, an S-ring isomorphism of ${\cal Z}({\mathbb Z} G)$ that takes ${\cal A}$ to itself, and hence it is an isomorphism
of ${\cal A}$, as required. To do this, without loss of generality, we can assume that $p>2n$ (for otherwise,
by the Dirichlet theorem, there exists a prime $q>2n$ such that $\mmod{q}{p}{n}$, and then, obviously,
$\xi^{(q)}=\xi^{(p)}$ for all $\xi\in{\cal A}$). We have to prove that
\begin{equation}\label{120315a}
c_{X^{(p)}Y^{(p)}}^{Z^{(p)}}=c_{X^{}Y^{}}^{Z^{}}
\end{equation}
for all $X,Y,Z\in\cla(G)$, where the numbers on both sides are the structure constants of the S-ring ${\cal Z}({\mathbb Z} G)$.
Since this ring is commutative and $p$ is prime, formula~\eqref{120315c} implies that
$$
\und{X}^{(p)}\,\und{Y}^{(p)}\equiv\und{X}^p\und{Y}^p=
(\und{X}\,\und{Y})^p=(\sum_Zc_{XY}^Z\und{Z})^p\equiv
\sum_Zc_{XY}^Z\und{Z}^{(p)}\ (\text{\rm mod}\hspace{2pt}p).
$$
Thus the relation \eqref{120315a} is true modulo~$p$. Since $p>2n$, we are done.
\vrule height .9ex width .8ex depth -.1ex
There is an alternative way to prove the second part of Theorem~\ref{100315v}. It is related to the action of the group ${\mathbb Z}_n^*$ on the
set $\Irr({\cal A})$ of all irreducible ${\mathbb C}$-characters of the S-ring~${\cal A}$, where as before, we can assume that ${\cal A}={\cal Z}({\mathbb Z} G)$.
Let $\varepsilon$ be an $n$-th primitive complex root of unity.
Then each $m\in{\mathbb Z}_n^*$ determines an automorphism~$\tau_m$ of the cyclotomic field ${\mathbb Q}(\varepsilon)$,
which sends $\varepsilon$ to $\varepsilon^m$. It follows that for any $\chi\in\Irr(G)$, the function
$\chi^{\tau_m}(g):=(\chi(g))^{\tau_m}$, $g\in G$, is also an irreducible character of $G$ and
$$
\chi^{\tau_m}(g) = \chi(g^m)
$$
(see \cite[Proposition~3.16]{Hu}). The primitive idempotents of ${\cal A}$ coincide with the central primitive idempotents of the group algebra ${\mathbb Q}(\varepsilon)[G]$ which, in turn, are in a one-to-one correspondence with the irreducible characters of $G$. More precisely, if
$e_\chi$ is the idempotent corresponding to $\chi\in \Irr(G)$, then
$$
e_\chi = \frac{\chi(1)}{|G|}\sum_{g\in G}\chi(g)g^{-1}.
$$
A direct computation shows that $\sigma_m(e_\chi) = e_{\chi^{\tau_m}}$. Thus, $\sigma_m$ permutes the primitive idempotents of ${\cal A}$.
This implies that $\sigma_m$ is an automorphism of ${\cal A}$, as required. We note that the above formula shows that there is a natural
one-to-one correspondence between $\Irr(G)$ and $\Irr({\cal A})$. More precisely,
\begin{equation}\label{170415a}
\Irr({\cal A})=\{\frac{1}{\chi(1)}\chi |_{{\cal A}}:\ \chi\in\Irr(G)\}.
\end{equation}
Given a set $X\subseteq G$, denote by $\tr(X)$ the union of the sets~$X^{(m)}$, where $m$ runs over the integers coprime to~$n=|G|$;
it is called the {\it trace} of $X$.
Let ${\cal A}$ be a central S-ring over $G$. Then from Theorem~\ref{100315v}, it follows that $\und{\tr(X)}\in{\cal A}$ for all
$X\in{\cal S}({\cal A})$. Therefore,
$$
\tr({\cal A})=\Span\{\und{\tr(X)}:\ X\in{\cal S}({\cal A})\}
$$
is a submodule of ${\cal A}$. It is easily seen that it consists of all fixed points of the natural action of the group
$\{\sigma_m:\ (m,n)=1\}$ on ${\cal A}$. Thus, $\tr({\cal A})$ is an S-ring, which is obviously central; it is called the {\it rational closure}
of the S-ring~${\cal A}$. It should be noted that our definitions agree with the relevant definitions in the abelian case.
The following statement immediately follows from the fact that $\tr(H)=H$ for any group $H\le G$.
\begin{proposition}\label{120315d}
Let ${\cal A}$ be a central S-ring over $G$. Then ${\cal A}$ is primitive if and only if so is $\tr({\cal A})$.
\vrule height .9ex width .8ex depth -.1ex
\end{proposition}
We say that a central S-ring is {\it rational} if it coincides with its rational closure, or equivalently,
if each of its basic sets is rational. The following statement justifies the term ``rational''.
\begin{theorem}\label{120315x}
Let ${\cal A}$ be a central S-ring over a group $G$. Then it is rational if and only if $\pi(\und{X})\in{\mathbb Q}$ for
all $\pi\in\Irr({\cal A})$ and all $X\in{\cal S}({\cal A})$.
\end{theorem}
\noindent{\bf Proof}.\ Let $m$ be an integer coprime to $n=|G|$. Since any character $\pi\in\Irr({\cal A})$ is equal to the restriction to ${\cal A}$
of a suitable character $\chi\in\Irr(G)$, from relation~\eqref{170415a} it follows that
\begin{equation}\label{210315a}
\pi(\und{X})^{\tau_m}=\pi(\und{X}^{(m)}),\quad X\in{\cal S}({\cal A}),
\end{equation}
where $\tau_m$ is the above defined automorphism of the field ${\mathbb Q}(\varepsilon)$.
If the S-ring ${\cal A}$ is rational, then the right-hand side of this equality does not depend on the choice of~$m$. Hence
the number $\pi(\und{X})^\tau$ does not depend on the automorphism $\tau$ of ${\mathbb Q}(\varepsilon)$. Thus, $\pi(\und{X})\in{\mathbb Q}$.
Assume now that $\pi(\und{X})\in{\mathbb Q}$ for all $\pi\in\Irr({\cal A})$ and $X\in{\cal S}({\cal A})$. Then from~\eqref{210315a} it follows that
$\pi(\und{X}) = \pi(\und{X}^{(m)})$ for all integers $m$ coprime to $n$ and all characters $\pi\in\Irr({\cal A})$. This implies that
$$
\und{X}=\sum_{\pi\in\Irr({\cal A})}\pi(\und{X})e_\pi=\sum_{\pi\in\Irr({\cal A})}\pi(\und{X}^{(m)})e_\pi=\und{X}^{(m)},
$$
where $e_\pi$ is the primitive idempotent corresponding to the character~$\pi$. Thus, the S-ring ${\cal A}$ is rational.
\vrule height .9ex width .8ex depth -.1ex
\section{Proof of Theorem~\ref {070215a}}\label{190315u}
By the theorem hypothesis, $G$ has a normal Sylow $p$-subgroup $P\cong{\mathbb Z}_{p^n}$. So by the Schur-Zassenhaus theorem,
$G=PK$, where $K$ is a Hall $p'$-subgroup of $G$. In what follows, we denote by $H$ the unique subgroup
of $P$ of order~$p$.
\begin{lemma}\label{180315a}
Let $x\in G$ be such that $Hx\not\subset x^G$. Then $x\in C_G(P)$.
\end{lemma}
\noindent{\bf Proof}.\ The element~$x$ acts by conjugation as an automorphism of the cyclic group~$P$. Therefore,
there exists an integer~$m$ coprime to $p$ such that $h^x=h^m$ for all $h\in P$.
Rewriting this equality as $x^h =x h^{1-m}$, we obtain
$$
x^G\supseteq x^P\supseteq P^{(1-m)}x.
$$
Since $P^{(1-m)}$ is a subgroup of $P$ and $x^G\not\supseteq xH$, this implies that $P^{(1-m)}=e$. Thus,
$x^h = x$ for all $h\in P$, which means that $x\in C_G(P)$.
\vrule height .9ex width .8ex depth -.1ex
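To illustrate Lemma~\ref{180315a}, let for a moment $G$ be the dihedral group of order $2p$ with $p$ an odd prime, so that $P\cong{\mathbb Z}_p$ and $H=P$. For a reflection $x$ we have $h^x=h^{-1}$ for all $h\in P$, i.e., $m=-1$; since $p$ is odd, $P^{(1-m)}=P^{(2)}=P$, and the argument above gives $x^G\supseteq Px$. Thus $Hx\subset x^G$, in accordance with the fact that $C_G(P)=P$ contains no reflection.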
Suppose on the contrary that the S-ring ${\cal A}$ is primitive. Take a nontrivial basic set $X$
which intersects $H$ nontrivially. Then $X\setminus H\ne\varnothing$: indeed, by the primitivity
of ${\cal A}$ we have $\grp{X}=G$, whereas $X\subseteq H$ would give $G=\grp{X}\le H$, so that
$n=p$ would be a prime, in contrast to the hypothesis. This proves the second
part of the following relations (the first one follows from the choice of~$X$):
\begin{equation}\label{140315c}
X\cap H\ne\varnothing\quad\text{and}\quad X\setminus H\ne\varnothing\quad\text{and}\quad \grp{X\cap H}\le\rad(X\setminus H).
\end{equation}
To prove the third one, set $X_0=\{x\in X:\ xH\not\subset X\}$. Then from Lemma~\ref{180315a} it follows that
\begin{equation}\label{130315b}
(X_0)^{(p)}\subseteq P^{(p)}K\subsetneq G.
\end{equation}
Moreover, it is easily seen that the sets $X_0$ and $X\setminus X_0$ are unions of some conjugacy classes of $G$. For
these classes, we can refine Lemma~\ref{100315b} as follows.
\begin{lemma}\label{250315a}
For any class $Y\in\cla(G)$, we have
$$
\und{Y}^p\equiv\begin{cases}
\und{Y}^{(p)}\ (\text{\rm mod}\hspace{2pt}p), &\text{if $Y\subseteq C_G(P)$,}\\
\ 0\quad\ (\text{\rm mod}\hspace{2pt}p), &\text{if $Y\not\subseteq C_G(P)$.}\\
\end{cases}
$$
\end{lemma}
\noindent{\bf Proof}.\ The group $C:=C_G(P)$ is obviously normal in $G$. Therefore,
$$
Y\subseteq C\quad\text{or}\quad Y\cap C=\varnothing.
$$
Suppose first
that $Y\cap C=\varnothing$. Since $H\trianglelefteq G$, we have $y\und{H}=\und{H}y$ for all~$y\in Y$.
Denote by $S$ a full system of representatives of the family $\{Hy:\ y\in Y\}$. Then, since $|H|=p$, we have
$$
\und{Y}^p=\Bigl(\sum_{y\in S}\und{H}y\Bigr)^p\equiv\und{H}^p\,\und{S}^p\equiv 0\ (\text{\rm mod}\hspace{2pt}p),
$$
as required; here the last congruence holds because $\und{H}^2=p\,\und{H}$, whence $\und{H}^p=p^{\,p-1}\,\und{H}\equiv 0\ (\text{\rm mod}\hspace{2pt}p)$. Let now $Y\subseteq C$. Then $Y$ is a normal subset of $C$, i.e., $Y^G=Y$. Since the group~$C$ is a direct product of $P$
and $O_{p'}(C)$, each normal subset of $C$ is the disjoint union of $gY_g$, $g\in P$, where $Y_g$ is a normal subset of $O_{p'}(C)$.
Now
\begin{equation}\label{180415a}
\underline{Y}^{(p)}=\sum_{g\in P} \underline{gY_g}^{(p)}
=\sum_{g\in P} g^p\,\underline{Y_g}^{(p)}.
\end{equation}
Moreover, since $Y_g$ is contained in the $p'$-subgroup $O_{p'}(C)$, by Lemma~\ref{100315b} we obtain
\begin{equation}\label{180415b}
\underline{Y_g}^{(p)} = \underline{Y_g^{(p)}}\equiv \underline{Y_g}\,^p\ (\text{\rm mod}\hspace{2pt}p).
\end{equation}
Thus, from \eqref{180415a} and \eqref{180415b}, it follows that
$$
\underline{Y}^p\equiv\sum_{g\in P} (g\underline{Y_g})^p=\sum_{g\in P} g^p \underline{Y_g}^p\equiv
\sum_{g\in P} g^p\,\underline{Y_g}^{(p)}=\underline{Y}^{(p)}\ (\text{\rm mod}\hspace{2pt}p)
$$
as required.
\vrule height .9ex width .8ex depth -.1ex
To complete the proof of the third relation in~\eqref{140315c}, suppose on the contrary that the set~$X_0$ is not empty.
Write $X$ as the union of conjugacy classes $X_i$, $i\in I$. Then, by Lemma~\ref{250315a}, we have
\begin{equation}\label{250315r}
\und{X}^p=\Bigl(\sum_{i\in I}\und{X_i}\Bigr)^p\equiv\sum_{i\in I}\und{X_i}^p\equiv\sum_{i\in I_0}\und{X_i}^{(p)}\ (\text{\rm mod}\hspace{2pt}p),
\end{equation}
where $I_0=\{i\in I:\ X_i\subseteq X_0\}$.
Moreover, by Lemma~\ref{180315a}, given $x,y\in X_0$, the equality $x^p=y^p$ holds if and only if $y\in Hx$. Since
also
$$
1\le |Hx\cap X_0|\le p-1
$$
for all $x\in X_0$, the coefficient of $x^p\in G$ in the right-hand side of~\eqref{250315r} lies
between $1$ and $p-1$. Thus,
$$
\xi:=f[\und{X}^p]
$$
is a non-zero element of the S-ring~${\cal A}$, where $f$ is the function used
in~\eqref{110315a}. By the Schur-Wielandt principle, this implies that the support~$Y$ of the element~$\xi$ is an ${\cal A}$-set.
Therefore, $\grp{Y}$ is an ${\cal A}$-subgroup of~$G$. This subgroup is proper: $\grp{Y}\ne G$
by~\eqref{130315b} and $\grp{Y}\ne e$, because $X_0\ne\varnothing$.
But this contradicts the primitivity of the S-ring~${\cal A}$.
Thus, all the relations in~\eqref{140315c} are true. To complete the proof, we make use of the following theorem on separating subgroups
proved in~\cite{EP}.
\begin{theorem}\label{t100703}
Let ${\cal A}$ be an S-ring over a group $G$. Suppose that $X\in{\cal S}({\cal A})$ and $H\le G$ satisfy relations~\eqref{140315c}.
Then $X=\grp{X}\setminus\rad(X)$ and $\rad(X)\le H\le\grp{X}$.
\vrule height .9ex width .8ex depth -.1ex
\end{theorem}
Now, since $\rad(X)$ and $\grp{X}$ are ${\cal A}$-groups, the primitivity assumption implies that
$\rad(X)=e$ and $\grp{X}=G$. By Theorem~\ref{t100703}, this implies that $X=G\setminus e$. This means that
$\rk({\cal A})=2$, i.e., the S-ring ${\cal A}$ is trivial. Contradiction.
\section{Generalized B-groups}\label{190315v}
\subsection{Proof of Theorem~\ref{100315a}}
Let $G$ be a Camina group. Then it has a normal subgroup $H$ such that $(G,H)$ is a Camina pair.
Let ${\cal A}$ be a proper central primitive S-ring over $G$. Take a set $X\in{\cal S}({\cal A})$ that contains
a nonidentity element of~$H$. It follows from the primitivity of ${\cal A}$ that
\begin{equation}\label{250315f}
\rad(X)=e\quad\text{and}\quad \grp{X}=G.
\end{equation}
In particular, the first two relations in~\eqref{140315c} hold. Next, the set
$X$ is a union of some conjugacy classes of~$G$ as the S-ring ${\cal A}$ is central. By the definition
of a Camina pair, we have
$$
xH=Hx\subseteq X\setminus H
$$
for all $x\in X\setminus H$. This proves the third relation in~\eqref{140315c}. Thus, $X=\grp{X}\setminus\rad(X)$
by Theorem~\ref{t100703}. By~\eqref{250315f}, this implies that $X=G\setminus e$ and hence $\rk({\cal A})=2$. The latter
means that the S-ring~${\cal A}$ is not proper. Contradiction.
\vrule height .9ex width .8ex depth -.1ex
\subsection{A generalized B-group which is not a B-group}\label{190315a}
Let $p>3$ be a prime congruent to $3$ modulo $4$, and let $G$ be the extraspecial group of order $p^3$ and exponent $p$.
Then there exists a skew Hadamard difference set $X$ in the group $G$; see~\cite{Fe}. This exactly means that $Y:=X^{-1}$
is equal to $G^\#\setminus X$ and
$$
\und{X}\und{Y}=|X|e + \frac{|X|-1}{2}(\und{X}+\und{Y}).
$$
Therefore, the module ${\cal A}=\Span\{e,\und{X},\und{Y}\}$ is a subring of ${\mathbb Z} G$ that satisfies the conditions
(S1), (S2), and (S3) with ${\cal S}=\{e,X,Y\}$. Thus, ${\cal A}$ is an S-ring of rank~$3$ over $G$. This S-ring is, obviously,
primitive. Since it is also proper, $G$ is not a B-group. On the other hand, it is a generalized B-group by Corollary~\ref{100315u}.
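For completeness, we indicate the computation behind the claim that the above module is closed under multiplication: since $\und{X}+\und{Y}=\und{G}-e$ and $\und{X}\,\und{G}=|X|\,\und{G}$, the displayed identity yields
$$
\und{X}^2=\und{X}(\und{G}-e-\und{Y})=|X|\,\und{G}-\und{X}-\und{X}\,\und{Y}=\frac{|X|-1}{2}\,\und{X}+\frac{|X|+1}{2}\,\und{Y},
$$
and the products $\und{Y}^2$ and $\und{Y}\,\und{X}$ are computed similarly.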
\subsection{Simple groups}
According to~\cite{FS}, a group $G$ is said to be {\it rational} if the number $\chi(g)$ is rational for all
$\chi\in\Irr(G)$ and all $g\in G$. Finite simple rational groups were characterized in Corollary~B1 of that paper
as follows: a noncyclic simple group $G$ is rational if and only if $G\cong\SP(6,2)$ or $\ORT^+(8,2)'$.
\noindent{\bf Proof of Theorem~\ref{070215b}.}\ Let $G$ be a finite simple group other than $\SP(6,2)$ or $\ORT^+(8,2)'$. Without loss of
generality, we can assume that $G$ is not cyclic. Then by the above characterization of rational groups, $G$ is not rational and has
two elements $x$ and $y$ of distinct orders. Then the orders of $x^m$ and $y^m$ are also distinct for all integers $m$ coprime to~$|G|$.
This implies that the order of any element of $\tr(x^G)$ does not equal the order of any element of $\tr(y^G)$. So,
\begin{equation}\label{250315s}
\tr(x^G)\ne \tr(y^G).
\end{equation}
Therefore, the rational closure $\tr({\cal A})$ of the S-ring ${\cal A}={\cal Z}({\mathbb Z} G)$ is
of rank at least~$3$. On the other hand, $\tr({\cal A})\ne{\cal A}$,
for otherwise the irreducible characters of ${\cal A}$ are rational valued (Theorem~\ref{120315x}) and then $G$ is a rational group.
Thus, $\tr({\cal A})$ is a proper central S-ring. It is primitive because so is ${\cal A}$ (Proposition~\ref{120315d}). Therefore $G$
cannot be a generalized B-group.
\vrule height .9ex width .8ex depth -.1ex
\subsection{AS-free groups}
According to~\cite{ABC}, a transitive permutation group is called {\it AS-free} if it preserves no nontrivial
symmetric association scheme. From Theorem~17 of that paper, it follows that given a nonabelian simple group $G$,
the permutation group on $G$ defined by
$$
K=\grp{G_{\mathrm{right}},\aut(G),\sigma}
$$
is AS-free, where $G_{\mathrm{right}}$ is the group of all
right translations of $G$ and $\sigma$ is a permutation of $G$ that takes $g$ to $g^{-1}$, $g\in G$. It is easily
seen that the orbits of the stabilizer of $e$ in $K$ are the basic sets of a central S-ring ${\cal A}$ over $G$. Thus, using the
above result one can get another proof that $G$ is not a generalized B-group whenever ${\cal A}\ne {\cal Z}({\mathbb Z} G)$. However,
in general, the latter inequality is not true, e.g., for the group $\SP(6,2)$.
\subsection{Miscellaneous}
Let $G$ be a finite group having a relatively prime conjugacy class (examples of such groups can be found, e.g., in~\cite{DMN}).
Denote by ${\cal X}$ the association scheme of the permutation group $G\Inn(G)\le\sym(G)$ (see also \cite[Theorem~7.2]{BI}).
Then one can see that $\cla(G)$ forms a relatively prime equitable partition for ${\cal X}$ in the sense of~\cite{HKK}.
By Theorem~3.1 of that paper, any primitive fusion of the scheme ${\cal X}$ must have rank~$2$. So, using the correspondence
between the Cayley schemes and S-rings over $G$, one can show that $G$ is a generalized B-group.
Let $G=G_1\times G_2$ where $G_1$ and $G_2$ are groups of the same order $n>1$. Then $G$ is not a generalized B-group.
Indeed, set $X_0=\{(e_1, e_2)\}$ where $e_i$ is the identity of $G_i$, and
$$
X_1=\{e_1\}\times (G_2)^\#\ \cup\ (G_1)^\#\times \{e_2\}.
$$
Denote by ${\cal A}$ the span of the set $\{\und{X_i}:\ i=0,1,2\}$, where $X_2$ is the complement of $X_0\cup X_1$ in $G$.
Then ${\cal A}$ is, obviously, a central S-ring of rank~$3$ over $G$. Since it is also primitive, we are done.
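Let us indicate the (routine) primitivity check, assuming for simplicity that $n>2$. A proper nontrivial ${\cal A}$-subgroup would coincide, as a set, with $X_0\cup X_1$ or $X_0\cup X_2$. Neither is a subgroup: $(g_1,e_2)(e_1,g_2)=(g_1,g_2)\in X_2$ for nontrivial $g_1,g_2$, while $(g_1,g_2)(h_1,g_2^{-1})=(g_1h_1,e_2)\in X_1$ whenever $h_1\notin\{e_1,g_1^{-1}\}$, and such an $h_1$ exists because $n>2$.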
\centerline{\bf Acknowledgment.}
The paper was started during the visit of the third author to the Central China Normal University, Wuhan, China. He would like
to thank the faculty members of the School of Mathematics and Statistics for their hospitality.
\end{document}
\begin{document}
\title{Isoperimetric and Sobolev inequalities
on hypersurfaces in sub-Riemannian Carnot groups}
\begin{abstract}
The geometric setting of this paper\footnote{\bf Warning: \rm In this last version, we have corrected some mistakes and imprecisions. In particular, we have improved the estimate of the quantity $\mathcal B_2(t)$; see Section \ref{perdindirindina}. This allows us to state a more precise formulation of the main results. We refer the reader to Section \ref{mike0} for more detailed comments on this final version and, in particular, to Warning \ref{cpzzo}.} is that of smooth sub-manifolds immersed in a sub-Riemannian
$k$-step Carnot group $\mathbb{G}$ of homogeneous dimension $Q$. Our main
result is an isoperimetric-type inequality for the $\HH$-perimeter measure $\per$ in the case of a
compact hypersurface $S$ of class $\cont^2$ with (or without)
boundary $\partial S$; see Theorem \ref{ghaioio}. This result generalizes an inequality
involving the mean curvature of the hypersurface, proven by
Michael and Simon \cite{MS} and Allard \cite{Allard},
independently. Finally, we prove some related Sobolev-type inequalities; see Section \ref{sobineqg}.
\\{\noindent \scriptsize
\sc Key words and phrases:} {\scriptsize{\textsf {Carnot groups;
Sub-Riemannian Geometry; Hypersurfaces; Isoperimetric Inequality;
Sobolev Inequalities; Blow-up; Coarea
Formula.}}}\\{\scriptsize\sc{\noindent Mathematics Subject
Classification:}}\,{\scriptsize \,49Q15, 46E35, 22E60.}
\end{abstract}
\tableofcontents
\date{}
\normalsize
\section{Introduction}
In the last decades considerable efforts have been made to extend
to the general setting of metric spaces the methods of Analysis
and Geometric Measure Theory. This philosophy, in a sense already
contained in Federer's treatise \cite{FE}, has been pursued, among
other authors, by Ambrosio \cite{A2}, Ambrosio and Kirchheim
\cite{AK1}, Capogna, Danielli and Garofalo \cite{CDG}, Cheeger
\cite{Che}, Cheeger and Kleiner \cite{Cheeger1}, David and Semmes
\cite{DaSe}, De Giorgi \cite{DG}, Gromov \cite{Gr1}, Franchi,
Gallot and Wheeden \cite{FGW}, Franchi and Lanconelli
\cite{FLanc}, Franchi, Serapioni and Serra Cassano \cite{FSSC3,
FSSC5}, Garofalo and Nhieu \cite{GN}, Heinonen and Koskela
\cite{HaKo}, Kor\'anyi and Reimann \cite{KR}, Pansu \cite{P1, P2},
but the list is far from being complete.
In this respect, {\it sub-Riemannian} or {\it
Carnot-Carath\'eodory} geometries have become a subject of great
interest also because of their connections with many different
areas of Mathematics and Physics, such as PDE's, Calculus of
Variations, Control Theory, Mechanics and Theoretical Computer
Science. For references, comments and other perspectives, we refer
the reader to Montgomery's book \cite{Montgomery} and the surveys
by Gromov \cite{Gr1} and by Vershik and Gershkovich \cite{Ver}. We
also mention, specifically for sub-Riemannian geometry,
\cite{Stric} and \cite{P4}. More recently, the
so-called Visual Geometry has also received new impulses from this
field; see \cite{SCM}, \cite{CMS} and references therein.
The setting of the sub-Riemannian geometry is that of a smooth
manifold $N$, endowed with a smooth non-integrable distribution
$\HH\subset\TT N$ of $\DH$-planes, or {\it horizontal subbundle}
($\DH\leq\mathrm{dim}N$), where a metric $g_{^{_\HH}}$ is defined. The
manifold $N$ is said to be a {\it Carnot-Carath\'eodory space} or
{\it CC-space} when one introduces the so-called {\it CC-metric}
$\dc$; see Definition \ref{dccar}. With respect to such a metric,
the only paths on $N$ which have finite length are
tangent to $\HH$ and therefore called {\it
horizontal}. Roughly speaking, in connecting two points we are
only allowed to follow horizontal paths joining them.
A $k$-{\it{step Carnot group}}
$(\mathbb{G},\bullet)$ is an $n$-dimensional, connected, simply
connected, nilpotent and stratified Lie group (with respect to the
group multiplication $\bullet$) whose Lie algebra $\mathfrak{g}\cong\TT_0\mathbb{G}$
satisfies the following:\[ {\mathfrak{g}}={\HH}_1\oplus...\oplus {\HH}_k,\quad
[{\HH}_1,{\HH}_{i-1}]={\HH}_{i}\qquad\forall\,\,i=2,...,k,\quad
{\HH}_{k+1}=\{0\}.\]We also set $\HH:=\HH_1$ and $\VV:={\HH}_2\oplus...\oplus {\HH}_k$. In the sequel, we shall refer to $\HH$ and $\VV$ as the \it horizontal space \rm and the \it vertical space\rm, respectively. Note that they also have a natural bundle structure, in which the basis is
the group $\mathbb{G}$. Let
$\underline{X_{^{_\HH}}}:=\{X_1,...,X_{\DH}\}$ be a frame of left-invariant vector
fields for the horizontal layer $\HH$. This frame can be completed to a global
left-invariant frame $\underline{X}:=\{X_1,...,X_n\}$ for $\mathfrak{g}$.
In fact, the standard basis $\{\ee_i:i=1,...,n\}$ of $\Rn$ can be
relabeled to be {\it graded} or {\it adapted to the
stratification}. Every Carnot group $\mathbb{G}$ is endowed with
a one-parameter group of positive dilations (adapted to the grading of $\mathfrak{g}$)
making it a {\it homogeneous group} of homogeneous dimension
$Q:=\sum_{i=1}^{k}i\,\DH_i$, where $\DH_i=\mathrm{dim}\,\HH_i$, in the sense of Stein's definition; see \cite{Stein}.
The number $Q$ coincides with the {\it Hausdorff dimension} of
$(\mathbb{G},\dc)$ as a metric space with respect to the CC-distance. Carnot groups are of special
interest for many reasons and, in particular, because they
constitute a wide class of examples of sub-Riemannian geometries.
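The simplest nonabelian example is the first Heisenberg group $\mathbb{H}^1$: here $n=3$, $k=2$ and $\mathfrak{g}=\HH_1\oplus\HH_2$, where $\HH_1=\mathrm{span}\{X_1,X_2\}$, $\HH_2=\mathrm{span}\{X_3\}$ and the only nontrivial bracket is $[X_1,X_2]=X_3$; the dilations act as $\delta_\lambda(x_1,x_2,x_3)=(\lambda x_1,\lambda x_2,\lambda^2 x_3)$ and the homogeneous dimension is $Q=1\cdot 2+2\cdot 1=4$.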
Note that, by a well-known result due to Mitchell \cite{Mi} (see
also \cite{Montgomery}), the {\it
Gromov-Hausdorff tangent cone} at any regular point of a
sub-Riemannian manifold turns out to be a suitable Carnot group. This fact motivates
the interest towards Carnot groups which play for
sub-Riemannian geometries an analogous role to that of
Euclidean spaces in Riemannian geometry. The initial development of Analysis in this setting was motivated
by some works published in the early eighties. Among others, we
cite the paper by Fefferman and Phong \cite{FePh} about the
so-called ``sub-elliptic estimates'' and that of Franchi and
Lanconelli \cite{FLanc}, where a H\"{o}lder regularity theorem was
proven for a class of degenerate elliptic operators in divergence
form. Meanwhile, the starting point of Geometric Measure Theory in this setting was
perhaps the intrinsic isoperimetric inequality proven by Pansu in
his thesis \cite{P1} for the {\it Heisenberg group}
$\mathbb{H}^1$. For further results about isoperimetric inequalities
on Lie groups and Carnot-Carath\'eodory spaces, see also
\cite{Varo}, \cite{Gr1}, \cite{P4}, \cite{GN}, \cite{CDG},
\cite{FGW}, \cite{HaKo}. For results on these topics, and for more
detailed bibliographic references, we refer the reader to
\cite{A2}, \cite{CDG}, \cite{FSSC3, FSSC5}, \cite{DGN3},
\cite{G}, \cite{GN}, \cite{Mag, Mag2}, \cite{Montea, Monteb},
\cite{HP}. We also quote \cite{CCM}, \cite{vari}, \cite{G}, \cite{Pauls},
\cite{RR}, for some results about minimal and constant
mean-curvature hypersurfaces immersed in Heisenberg groups.
In this paper we are concerned with
hypersurfaces immersed in Carnot groups, endowed with the
so-called {\it $\HH$-perimeter measure} $\per$; see Definition
\ref{sh}. We first study some technical
tools. In particular, we extend to hypersurfaces with
non-empty characteristic sets the first variation of $\per$, proved in \cite{Monte, Monteb}
for the non-characteristic case; see Section \ref{prvar0}. We then discuss a blow-up theorem, which
also holds for characteristic points and a horizontal Coarea
Formula for smooth functions on hypersurfaces; see Section
\ref{blow-up} and Section \ref{COAR}. In Section
\ref{mike}, these results will be used to investigate the validity in this context of a monotonicity inequality for the $\HH$-perimeter and of a related isoperimetric inequality. These results were proved by Michael and Simon in \cite{MS}
for a general setting including Riemannian geometries and,
independently,
by Allard in \cite{Allard} for varifolds; see below for a more precise statement.
In Section \ref{sobineqg}, we shall deduce some related Sobolev-type
inequalities, following a classical pattern by Federer-Fleming
\cite{FedererFleming} and Mazja \cite{MAZ}. We here observe that similar results in this
direction have been obtained by Danielli, Garofalo and Nhieu
in \cite{DGN3}, where a monotonicity estimate for the
$\HH$-perimeter has been proven for graphical strips in the
Heisenberg group $\mathbb{H}^1$.
Now we would like to make a short comment about the Isoperimetric
Inequality for compact hypersurfaces immersed in the Euclidean
space $\Rn$.
\begin{teo}[Euclidean Isoperimetric Inequality for $S\subset\Rn$]\label{w33w}Let
$S\subset\Rn\,(n>2)$ be a compact hypersurface of class $\cont^2$
with (or without) piecewise $\cont^1$ boundary. Then
\[\left(\sigma^{n-1}_{^{_\mathit{R}}}(S)\right)^{\frac{n-2}{n-1}}\leq C_{Isop}\left(\int_S|\mathcal{H}_{^{_\mathit{R}}}|\,\sigma^{n-1}_{^{_\mathit{R}}}+\sigma^{n-2}_{^{_\mathit{R}}}(\partial
S)\right),\]where $C_{Isop}>0$ is a dimensional constant.\end{teo}
In the above statement, $\mathcal{H}_{^{_\mathit{R}}}$ is the mean curvature and $\sigma^{n-1}_{^{_\mathit{R}}}$ and $\sigma^{n-2}_{^{_\mathit{R}}}$ denote, respectively, the Riemannian
measures on $S$ and $\partial S$.
The first step in the proof is a linear isoperimetric inequality. More
precisely, one proves that
\[\sigma^{n-1}_{^{_\mathit{R}}}(S)\leq r\left(\int_S|\mathcal{H}_{^{_\mathit{R}}}|\,\sigma^{n-1}_{^{_\mathit{R}}}+\sigma^{n-2}_{^{_\mathit{R}}}(\partial
S)\right),\]where $r$ is the radius of a Euclidean ball $B(x, r)$
containing $S$. Starting from this linear inequality and using Coarea Formula,
one gets the so-called {\it monotonicity inequality}, that is,
\[-\frac{d}{dt}\frac{\sigma^{n-1}_{^{_\mathit{R}}}(S_t)}{t^{n-1}}\leq
\frac{1}{t^{n-1}}\left(\int_{S_t}|\mathcal{H}_{^{_\mathit{R}}}|\,\sigma^{n-1}_{^{_\mathit{R}}}
+ \sigma^{n-2}_{^{_\mathit{R}}}(\partial S\cap B(x,t))\right)
\]for every $x\in {\rm Int}\,S$, for $\mathcal{L}^1$-a.e. $t>0$, where $S_t=S\cap B(x, t)$. (Note
that every interior point of a $\cont^2$ hypersurface $S$
is a {\it density-point}, that is, $\lim_{t\searrow
0^+}\frac{\sigma_{^{_\mathit{R}}}^{n-1}(S_t)}{t^{n-1}}=\omega_{n-1}$, where
$\omega_{n-1}$ denotes the measure of the unit ball in
$\R^{n-1}$).
By applying the monotonicity inequality along with a contradiction argument, one
obtains a calculus lemma
which, together with a standard Vitali-type
covering theorem, allows one to complete the proof of Theorem \ref{w33w}. We also remark that the monotonicity inequality is equivalent to an asymptotic
exponential estimate, that is, \[\sigma^{n-1}_{^{_\mathit{R}}}(S_t)\geq
\omega_{n-1}\, t^{n-1} e^{-\mathcal{H}^0 t}\]for $t\rightarrow
0^+$, where $x\in {\rm Int}\,S$ and $\mathcal{H}^0$ is any positive constant
such that $|\mathcal{H}_{^{_\mathit{R}}}|\leq\mathcal{H}^0$. In case of
minimal hypersurfaces (that is, $\mathcal H_{^{_\mathit{R}}}=0$), this implies that
$\sigma^{n-1}_{^{_\mathit{R}}}(S_t)\geq \omega_{n-1}\, t^{n-1}$ as $t\rightarrow
0^+$.
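Let us sketch one implication, assuming $\partial S=\emptyset$ and $|\mathcal{H}_{^{_\mathit{R}}}|\leq\mathcal{H}^0$ on $S$. Setting $\phi(t):=t^{1-n}\sigma^{n-1}_{^{_\mathit{R}}}(S_t)$, the monotonicity inequality gives $-\phi'(t)\leq \mathcal{H}^0\phi(t)$ for $\mathcal{L}^1$-a.e. $t>0$, that is, $\bigl(e^{\mathcal{H}^0 t}\phi(t)\bigr)'\geq 0$. Hence $e^{\mathcal{H}^0 t}\phi(t)\geq\lim_{s\searrow 0^+}\phi(s)=\omega_{n-1}$, which is precisely the exponential estimate.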
We now give a quick overview of the paper.
Section \ref{0prelcar} introduces Carnot groups, immersed hypersurfaces and submanifolds. In particular,
we describe some geometric structures and basic facts about
stratified Lie groups, Riemannian and
sub-Riemannian geometries, intrinsic measures and connections.
If $S\subset\mathbb{G}$ is a hypersurface of class $\mathbf{C}^1$, then
$x\in S$ is a {\it characteristic point} if $\HH_x\subset\TT_x S$.
If $S$ is non-characteristic, the {\it unit $\HH$-normal} along
$S$ is given by $\nn:=\frac{\PH\nu}{|\PH\nu|}$, where $\nu$ is
the Riemannian unit normal of $S$ and $\PH:\mathfrak{g}\longrightarrow\HH$ is the orthogonal projection operator onto $\HH$. By means of the {\it
contraction operator} $\LL$ on differential
forms\footnote{\label{contraction}Recall that $\LL:
\Om^{k}(\mathbb{G})\rightarrow\Om^{k-1}(\mathbb{G})$
is defined, for $X\in\XX(\TG)$ and
$\alpha\in\Om^k(\mathbb{G})$, by
\begin{eqnarray*}(X \LL \alpha)
(Y_1,...,Y_{k-1}):=\alpha(X,Y_1,...,Y_{k-1}).\end{eqnarray*}This operator extends, in a simple way, to $p$-vectors; see
\cite{Helgason}, \cite{FE}.}, we can define a differential $(n-1)$-form $\per\in\Om^{n-1}(S)$ as
\[\per:=(\nn
\LL \Vol)|_S,\]where
$\Vol:=\bigwedge_{i=1}^n\omega_i\in
\Om^n(\mathbb{G})$ denotes the Riemannian (left-invariant)
volume form on $\mathbb{G}$ (obtained by wedging together the elements of the \textquotedblleft dual\textquotedblright
basis $\underline{\omega}=\{\omega_1,...,\omega_n\}$ of
$\mathit{g}\ccg$, where $\omega_i=X_i^\ast\in\Om^1(\mathbb{G})$, for every $i=1,...,n$). Notice that this $(n-1)$-form is $(Q-1)$-homogeneous. By integrating the $(n-1)$-form $\per$ along $S$ we obtain the so-called $\HH$-perimeter measure. Note that the characteristic set $C_S$ of $S$ can
be seen as the set of all points at which the horizontal
projection of the unit normal vanishes, that is, $C_S=\{x\in S:
|\PH\nu|=0\}$.
Analogously, we can define a $(Q-2)$-homogeneous measure $\nis$ on
any $(n-2)$-dimensional smooth submanifold $N$ of $\mathbb{G}$. To this aim, let
$\nn=\nn^1\wedge\nn^2$ be a horizontal unit normal $2$-vector to $N$; see Definition
\ref{dens}. Then, we obtain a $(Q-2)$-homogeneous
measure by integrating the differential $(n-2)$-form $\nis:=(\nn
\LL \Vol)|_N$. The measures $\per$ and $\nis$ turn out to be
equivalent (up to bounded densities called {\it metric
factors}; see \cite{Mag, Mag2}), respectively, to the $(Q-1)$-dimensional and $(Q-2)$-dimensional
spherical Hausdorff measures $\mathcal{S}_\varrho^{Q-1}$ and
$\mathcal{S}_\varrho^{Q-2}$ associated with a
homogeneous distance $\varrho$ on $\mathbb{G}$; see, for instance, Section \ref{blow-up}.
\begin{oss}The stratification of $\mathfrak{g}$ induces a natural decomposition
of the tangent space of any smooth hypersurface $S\subset \mathbb{G}$. More precisely,
we intersect $\TT_x S\subset\TT_x\mathbb{G}$ with
$\TT_x^i\mathbb{G}=\oplus_{j=1}^i(\HH_j)_x$. Setting
$\TT^iS:=\TS\cap\TT^i\mathbb{G},$
$\HH_iS:=\TT^iS\setminus \TT^{i-1}S$, $\HS=\HH_1S$ and $n'_i:=\dim\HH_iS$, yields
$$\TS:=\oplus_{i=1}^k\HH_iS$$ and $\sum_{i=1}^kn'_i=n-1.$
Henceforth, we shall set $\VS:=\oplus_{i=2}^k\HH_iS$.
\end{oss}
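For instance, if $S\subset\mathbb{H}^1$ is a smooth hypersurface (so that $n=3$ and $k=2$), at a non-characteristic point one has $\dim\HS=1$ and $\dim\HH_2S=1$, while at a characteristic point $\HH_x\subset\TT_xS$ yields $\dim\HS=2$ and $\HH_2S=\{0\}$; in both cases the dimensions sum to $2=n-1$, in accordance with $\TS=\oplus_{i=1}^k\HH_iS$.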
Section \ref{Preliminaries} contains some technical preliminaries.
In Section \ref{COAR} we state a smooth Coarea Formula for the $\HS$-gradient. More precisely, let $S\subset\mathbb{G}$ be a
compact hypersurface of class $\cont^2$ and let
$\varphi\in\mathbf{C}^1(S)$. Then
\[\int_{S}\psi(x)|\nabla_{\HS}\varphi(x)|\,\per(x)=\int_{\R}ds\int_{\varphi^{-1}[s]\cap
S}\psi(y)\,\nis(y)\]for every $\psi\in L^1(S; \per)$.
In Section \ref{prvar0} we discuss the 1st variation formula of $\per$; see Theorem \ref{1vg}. This result, proved in \cite{Monte, Monteb} for non-characteristic hypersurfaces, is generalized to the case of non-empty
characteristic sets. Roughly speaking, we shall show that the \textquotedblleft infinitesimal\textquotedblright 1st variation of $\per$ is given by
$$\Lie_W\per=\left(-\MS\langle W,\nu\rangle +\div_{^{_{\TS}}} \left( W\ot|\PH\nu|-\langle W,\nu\rangle\nn\ot \right)\right)\,\sigma_{^{_\mathit{R}}}^{n-1}.$$(Here $\Lie_W\per$ denotes the Lie derivative of $\per$ with respect to the initial velocity $W$ of the variation, $\MS=-\div_{^{_\HH}}\nn$ is the so-called \it horizontal mean curvature \rm of $S$; moreover, the symbols $W\op,\,W\ot$ denote the normal and tangential components of $W$, respectively). If $\MS$ is $L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$, then the function $\Lie_W\per$ turns out to be integrable on $S$ and the integral of $\Lie_W\per$ on $S$ gives the 1st variation of $\per$. Note that the
third term in the previous formula depends on the normal component of $W$. We stress that this term was omitted in \cite{Monteb}. Using a generalized divergence-type formula, the divergence term can be integrated on the boundary.
It is worth observing that a central point of this paper is to make an appropriate choice of the variation vector field in the 1st variation formula \eqref{fva2formula}; see Theorem \ref{1vg}.
In Section \ref{blow-up} we state a blow-up theorem for the
horizontal perimeter $\per$. In other words, we study the density of $\per$ at $x\in S$, or the
limit\[\lim_{r\rightarrow 0^+}\frac{\per (S\cap
B_{\varrho}(x,r))}{r^{Q-1}},\]where $B_{\varrho}(x,r)$ is a homogeneous
$\varrho$-ball of center $x\in S$ and radius $r$. We first discuss the blow-up procedure at
non-characteristic points of a $\cont^1$ hypersurface $S$; see,
for instance, \cite{FSSC3, FSSC5}, \cite{balogh}, \cite{Mag,
Mag2}. Then, under more regularity assumptions
on $S$, we treat the characteristic case; see
Theorem \ref{BUP}. A similar result can be found in \cite{Mag8} for submanifolds of
$2$-step groups.
Section \ref{mike} is devoted to our main results, that are a (global) monotonicity inequality for the $\HH$-perimeter and an isoperimetric-type inequality for compact
hypersurfaces with (or without) boundary, depending on the
horizontal mean curvature $\MS$. This
extends to Carnot groups an inequality proved by Michael and
Simon \cite{MS} and Allard \cite{Allard}, independently.
\begin{no}Set ${\bf r}(S):=\sup_{x\in {\rm Int}(S\setminus C_S)} r_0(x),$ where $r_0(x)=2\left(\frac{\per({S})}{k_\varrho(\nn(x))}\right)^{1/(Q-1)}$ and $k_\varrho(\nn(x))$ denotes the \rm metric factor \it at $x$; see Section \ref{dere} and Section \ref{blow-up}.
\end{no}
\begin{teo}[Isoperimetric-type Inequality]\label{ghaioiuuo0} Let
$S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ with
boundary $\partial S$ (piecewise) $\cont^1$, and assume that the horizontal mean curvature $\MS$ of $S$ is integrable, that is, $\MS\in L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$. Then there exists $C_{Isop}>0$, depending only
on $\mathbb{G}$ and on the homogeneous metric $\varrho$, such that \begin{eqnarray}\label{2gha}\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left(\int_S
|\MS|\,\per +\nis(\partial S)+ \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_{\partial
S }|\P_{^{_{\HH_i S}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2} \right),\end{eqnarray}where $\eta$ denotes the outward-pointing unit normal along $\partial S$ and $\P_{^{_{\HH_i S}}}$ denotes the orthogonal projection onto $\HH_i S$.
In particular, if $\partial S=\emptyset$, it follows that \begin{eqnarray}\label{2ghabis}\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop} \int_S
|\MS|\,\per.\end{eqnarray}
\end{teo}
In order to better understand this result, we refer the reader to Section \ref{mike0}; see, in particular, Example \ref{zazaz1} and Warning \ref{cpzzo}. We also formulate an open problem which is intimately connected with the previous result; see Problem \ref{pope}.
The proof of this result is heavily inspired by the classical
one, for which we refer the reader to the book by Burago and
Zalgaller \cite{BuZa}. A similar strategy was useful in
proving isoperimetric and Sobolev inequalities in abstract metric
setting such as weighted Riemannian manifolds and graphs; see
\cite{CGY}.
The starting point is a linear isoperimetric inequality (see
Proposition \ref{correctdimin}) that is used to obtain a {\it
monotonicity formula} for the $\HH$-perimeter; see Theorem
\ref{rmonin}. This formula is one of our main results. We remark that, exactly as in the Euclidean/Riemannian case, the
monotonicity inequality is an ordinary differential inequality,
concerning the first derivative of the density-quotient
$$\frac{\per (S_t)}{t^{Q-1}},$$ where $S_t:=S\cap B_{\varrho}(x,t)$ and $x\in {\rm Int}\,(S\setminus C_S)$; see Section
\ref{wlineq}. We observe that, in the case of smooth hypersurfaces without boundary, we shall show that for every $x\in {\rm Int}(S\setminus C_S)$ the
following ordinary differential inequality
$$ -\frac{d}{dt}\frac{\per(S_t)}{t^{Q-1}}\leq
\frac{1}{t^{Q-1}} \int_{S_t}|\MS|\,\per$$
holds for $\mathcal{L}^1$-a.e. $t>0$. Hence, if $\MS=0$, it follows that $\frac{d}{dt}\frac{\per(S_t)}{t^{Q-1}}\geq 0$ for
$\mathcal{L}^1$-a.e. $t>0$.
In Section \ref{perdindirindina} we will discuss some explicit
estimates and then, in
Section \ref{isopineq1}, we will prove the Isoperimetric Inequality.
In Section \ref{asintper}
we give some straightforward applications of the monotonicity
formula. More precisely, let $S\subset\mathbb{G}$ be a
hypersurface of class $\cont^2$, let $x\in
{\rm Int} (S\setminus C_S)$ and, without loss of generality, assume that, near $x$, the horizontal mean curvature $\MS$
is bounded by a positive constant $\MS^0$. Then, we will show that
\[\per({S}_t)\geq k_{\varrho}(\nn(x))\,t^{Q-1}
e^{-t\,\MS^0}\]as $t\searrow 0^+$, where
$k_{\varrho}(\nn(x))$ denotes the metric factor at
$x$; see Corollary \ref{asynt}.
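Let us briefly sketch how this asymptotic estimate formally follows from the monotonicity inequality. If $|\MS|\leq\MS^0$ on $S_t$ for every small $t>0$, then $\int_{S_t}|\MS|\,\per\leq\MS^0\,\per(S_t)$, and the differential inequality above yields
\[-\frac{d}{dt}\frac{\per(S_t)}{t^{Q-1}}\leq \MS^0\,\frac{\per(S_t)}{t^{Q-1}},\qquad\mbox{that is,}\qquad \frac{d}{dt}\left(e^{t\,\MS^0}\,\frac{\per(S_t)}{t^{Q-1}}\right)\geq 0\]
for $\mathcal{L}^1$-a.e. $t>0$. Hence the function $t\longmapsto e^{t\,\MS^0}\,\per(S_t)/t^{Q-1}$ is non-decreasing and, since the blow-up of Section \ref{blow-up} gives $\lim_{t\to 0^+}\per(S_t)/t^{Q-1}=k_{\varrho}(\nn(x))$, the previous lower bound follows.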
We also consider the case where $x\in C_S$;
see Corollary \ref{2asynt} and Corollary \ref{hasynt}.
Finally, in Section \ref{sobineqg} we
discuss some related inequalities which can
be deduced from the Isoperimetric Inequality, following a
classical argument by Federer-Fleming
\cite{FedererFleming} and Mazja \cite{MAZ}. The main result is a Sobolev-type inequality for compact hypersurfaces without boundary.\begin{teo}Let $\mathbb{G}$ be a $k$-step Carnot group endowed with a homogeneous metric $\varrho$ as in
Definition \ref{2iponhomnor1}. Let
$S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ without
boundary. Let $\MS$ be the horizontal mean curvature of
$S$ and assume that $\MS\in L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$.
Then\[\left(\int_S|\psi|^{\frac{Q-1}{Q-2}}\,\per\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left( \int_{S}\left(|\psi|\,|\MS|
+|\textit{grad}\ss\psi|\right)\,\per+ \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_S |\textit{grad}_{^{_{\HH_i S}}}\psi|\,\sigma_{^{_\mathit{R}}}^{n-1}\right) \]for every
$\psi\in\cont^1(S)$, where $C_{Isop}$ is the constant
appearing in Theorem \ref{ghaioiuuo0}.
\end{teo}
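Note that the Sobolev-type inequality contains the isoperimetric one: choosing $\psi\equiv 1$, all the gradient terms on the right-hand side vanish, and one recovers
\[\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq C_{Isop}\int_S|\MS|\,\per,\]
that is, the boundaryless case of Theorem \ref{ghaioiuuo0}.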
\section{Carnot
groups, submanifolds and measures}\label{0prelcar}
\subsection{Sub-Riemannian Geometry of Carnot groups}\label{prelcar}
In this section we introduce basic definitions and main
features of Carnot groups.
References for this large subject can be found in
\cite{CDG}, \cite{GN}, \cite{Gr1}, \cite{Mag}, \cite{Mi},
\cite{Montgomery}, \cite{P1, P2, P4}, \cite{Stric}. Let $N$ be a
$\cont^\infty$-smooth connected $n$-dimensional manifold and let
$\HH\subset \TT N$ be an $\DH$-dimensional smooth subbundle of
$\TT N$. For any $x\in N$, let $\TT^{k}_x$ denote the vector
subspace of $\TT_x N$ spanned by a local basis of smooth vector
fields $X_{1}(x),...,X_{\DH}(x)$ for $\HH$ around $x$, together
with all commutators of these vector fields of order $\leq k$. The
subbundle $\HH$ is called {\it generic} if, for all $x\in N$,
$\mathrm{dim}\TT^{k}_x$ is independent of the point $x$ and {\it
horizontal} if $\TT^{k}_x = \TT_x N$, for some $k\in \N$. The pair
$(N,\HH)$ is a {\it $k$-step CC-space} if it is generic and
horizontal and if $k=\inf\{r: \TT^{r}_x = \TT_x N \}$. In this case
\begin{equation*}0=\TT^{0}\subset
\HH=\TT^{\,1}\subset\TT^{2}\subset...\subset \TT^{k}=\TT
N\end{equation*} is a strictly increasing filtration of {\it
subbundles} of constant dimensions $n_i:=\dim
\TT^{i}$, $i=1,...,{k}$. Setting $(\HH_i)_x:=\TT^i_x\setminus
\TT^{i-1}_x,$ then $\mathrm{gr}(\TT_x N)=\oplus_{i=1}^k (\HH_i)_x$ is the
associated {\it graded Lie algebra} at $x\in N$, with respect to
the Lie product $[\cdot,\cdot]$. We set $\DH_i:=\dim
{\HH}_{i}=n_i-n_{i-1}\,(n_0=\DH_0=0)$ and, for simplicity,
$\DH:=\DH_1=\dim\HH$. The $k$-vector
$\overline{\DH}=(\DH,\DH_2,...,\DH_{k})$ is the {\it growth
vector} of $\HH$.
\begin{Defi}[Graded frame] We say that $\underline{X}=\{X_1,...,X_n\}$
is a { graded frame} for $N$ if $\{{X}_{i_j}(x): n_{j-1}<i_j\leq
n_j\}$ is a basis for ${\HH_j}_x$ for any
$j=1,...,k$ and for any $x\in N$.\end{Defi}
\begin{no}Let $E\subset\TT N$ be any smooth subbundle of $\TT N$. Throughout this paper, we denote by $\XX^r(E)$ the space of sections of class $\cont^r$ of $E$ $(r\geq 0)$. When $r=+\infty$, we simply write $\XX(E)$. Furthermore, the space of differential $p$-forms on $N$ is denoted by $\Om^p(N)$.\end{no}
\begin{Defi}\label{dccar} A { sub-Riemannian metric} $g_{^{_\HH}}=\langle\cdot,\cdot\rangle_{^{_\HH}}$ on $N$ is a
symmetric positive bilinear form on $\HH$. If $(N,\HH)$ is a
{CC}-space, the { {CC}-distance} $\dc(x,y)$ between $x, y\in N$ is
defined by
$$\dc(x,y):=\inf \int\sqrt{\langle
\dot{\gamma},\dot{\gamma}\rangle_{^{_\HH}}}\, dt,$$
where the infimum is taken over all piecewise-smooth horizontal
paths $\gamma$ joining $x$ to $y$.
\end{Defi}
In fact, Chow's Theorem implies that $\dc$ is a metric on $N$
and that any two points can be joined by at least one horizontal
curve. The topology induced on $N$ by the {CC}-metric is equivalent
to the standard manifold topology; see \cite{Gr1},
\cite{Montgomery}.
This is the setting of sub-Riemannian geometry. An important class of these geometries is represented by {\it Carnot groups} which,
for many reasons, play in sub-Riemannian geometry an analogous
role to that of Euclidean spaces in Riemannian geometry. For the geometry of Lie groups
we refer the reader to Helgason's book \cite{Helgason} and
Milnor's paper \cite{3}, while for sub-Riemannian
geometry, to Gromov \cite{Gr1}, Pansu \cite{P1, P4}, and
Montgomery \cite{Montgomery}.
A $k$-{\it{step Carnot group}} $(\mathbb{G},\bullet)$ is an
$n$-dimensional, connected, simply connected, nilpotent and
stratified Lie group (with respect to the multiplication
$\bullet$). Let $0$ be the identity of $\mathbb{G}$. The Lie algebra $\mathfrak{g}\cong\TT_0\mathbb{G}$ of $\mathbb{G}$ is an $n$-dimensional vector space such that:\begin{equation*} {\mathfrak{g}}={\HH}_1\oplus...\oplus
{\HH}_k,\quad
[{\HH}_1,{\HH}_{i-1}]={\HH}_{i}\quad\forall\,\,i=2,...,k,\,\,\,
{\HH}_{k+1}=\{0\}.\end{equation*}
The first layer
${\HH}_1$ of the stratification of $\mathfrak{g}$ is called
{\it horizontal} and denoted by $\HH$. Let
${\VV}:={\HH}_2\oplus...\oplus {\HH}_k$ be the {\it
vertical subspace} of $\mathfrak{g}$. We set
$\DH_i=\dim{{\HH}_i},$
$n_i:=\DH_1+...+\DH_i$, for every $i=1,...,k$ ($\DH_1=\DH,\,n_k=n$). We assume that $\HH$
is generated by a frame $\underline{X_{^{_\HH}}}:=\{X_1,...,X_{\DH}\}$ of
left-invariant vector fields. This frame can always be completed to a global,
graded, left-invariant frame $\underline{X}:=\{X_i:
i=1,...,n\}$ for $\mathfrak{g}$, in a way that
${\HH}_l={\mathrm{span}}_\R\big\{X_i: n_{l-1}< i \leq
n_{l}\big\}$ for $l=1,...,k$. In fact, the standard basis $\{\ee_i:i=1,...,n\}$ of $\Rn\cong\TT_0\mathbb{G}$ can be
relabeled to be {\it adapted to the stratification}.
Note that each left-invariant vector field of the frame $\underline{X}$ is given by
${X_i}(x)={L_x}_\ast\ee_i\,(i=1,...,n)$, where ${L_x}_\ast$
is the differential of the left-translation by $x$.
\begin{no}\label{1notlne0}We denote by $\P_{^{_{\HH_i}}}:\mathfrak{g}\longrightarrow\HH_i$ the orthogonal projection map from $\mathfrak{g}$ onto $\HH_i$ for any $i=1,...,k$. In particular, we set $\P_{^{_\HH}}:=\P_{^{_{\HH_1}}}$. Analogously, we denote by $\P_{^{_\VV}}:\mathfrak{g}\longrightarrow\VV$ the orthogonal projection map from $\mathfrak{g}$ onto $\VV$.
\end{no}
\begin{no}\label{1notazione}We set $I_{^{_\HH}}:=\{1,...,h\}$,
$I_{^{_{\HH_2}}}:=\{n_1+1,...,n_2\}$,..., $I_{^{_\VV}}:=\{h +1,...,n\}$. Unless
otherwise specified, Latin letters $i, j, k,...$ are used for
indices belonging to $I_{^{_\HH}}$ and Greek letters
$\alpha, \beta, \gamma,...$ for indices belonging to $I_{^{_\VV}}$. The
function \textquotedblleft order\textquotedblright\, $\mathrm{ord}:\{1,...,n\}\longrightarrow\{1,...,k\}$
is defined by $\mathrm{ord}(a):= i $, whenever $n_{i-1}<a\leq
n_{i}$, $i=1,...,k$.
\end{no}
We use exponential coordinates of the first
kind, so that $\mathbb{G}$ will be identified with its Lie algebra $\mathfrak{g}$,
via the (Lie group) exponential map $\exp:\mathfrak{g}\longrightarrow\mathbb{G}$; see \cite{Vara}.
The {\it Baker-Campbell-Hausdorff
formula} gives the group law $\bullet$ of the group
$\mathbb{G}$, starting from a corresponding operation on the Lie algebra $\mathfrak{g}$.
In fact, one has
$$\exp(X)\bullet\exp(Y)=\exp(X\star Y)$$ for any $X,\,Y \in\mathfrak{g}$, where
${\star}:\mathfrak{g} \times \mathfrak{g}\longrightarrow \mathfrak{g}$ is the
{\it Baker-Campbell-Hausdorff product} defined by \begin{eqnarray}\label{CBHf}X\star Y= X +
Y+ \frac{1}{2}[X,Y] + \frac{1}{12} [X,[X,Y]] -
\frac{1}{12} [Y,[X,Y]] + \mbox{brackets of length} \geq 4.\end{eqnarray}
In
exponential coordinates,
the group law $\bullet$ on $\mathbb{G}$ is polynomial and explicitly computable; see
\cite{Corvin}. Note that $0=\exp(0,...,0)$ and the inverse of each point
$x=\exp(x_1,...,x_{n})\in\mathbb{G}$ is given by ${x}^{-1}=\exp(-{x}_1,...,-{x}_{n})$.
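For instance, if $\mathbb{G}$ is a $2$-step Carnot group, all the brackets of length $\geq 3$ in \eqref{CBHf} vanish, and hence, in exponential coordinates,
\[x\bullet y=x+y+\frac{1}{2}[x,y]\qquad\forall\,\,x,\,y\in\mathbb{G}\cong\mathfrak{g};\]
compare with the Heisenberg group law of Example \ref{epocase}.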
When $\HH$ is endowed with a metric
$g_{^{_\HH}}=\langle\cdot,\cdot\rangle_{^{_\HH}}$, we say that $\mathbb{G}$
has a {\it sub-Riemannian structure}. It is always possible to define a left-invariant Riemannian metric
$g =\langle\cdot,\cdot\rangle$ on $\mathfrak{g}$ such that $\underline{X}$ is {\it
orthonormal} and $g_{|\HH}=g_{^{_\HH}}$. Note that, if we fix a Euclidean metric on
$\Rn\cong \TT_0\mathbb{G}$ such that $\{\ee_i: i=1,...,n\}$ is an orthonormal
basis, this metric extends to each $\TT_x\mathbb{G}$ ($x\in\mathbb{G}$)
by left-translations. Since Chow's Theorem trivially holds true for Carnot groups, the {\it
Carnot-Carath\'eodory distance} $\dc$ associated with $g_{^{_\HH}}$ can
be defined and the pair $(\mathbb{G},\dc)$ turns out to be a complete metric
space where every couple of points can be joined by at least one $\dc$-geodesic.
Carnot groups are {\it homogeneous groups}, in the sense that they
admit a 1-parameter group of automorphisms
$\delta_t:\mathbb{G}\longrightarrow\mathbb{G}$ $(t\geq 0)$ defined by
\begin{eqnarray*}\delta_t x
:=\exp\left(\sum_{j=1}^k\sum_{i_j\in I_{^{_{\HH_j}}}}t^j\,x_{i_j}\ee_{i_j}\right),\end{eqnarray*}
for any $x=\exp\left(\sum_{j,i_j}x_{i_j}\ee_{i_j}\right)\in\mathbb{G}.$ By definition, the
{\it homogeneous dimension} of $\mathbb{G}$ is the positive integer $\Qdim,$
coinciding with the {\it Hausdorff dimension} of $(\mathbb{G},\dc)$ as a
metric space;
see \cite{Mi}, \cite{Gr1}.
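We recall that $Q=\sum_{i=1}^{k}i\,\DH_i$. Indeed, in exponential coordinates, $\delta_t$ acts on the $i$-th layer as multiplication by $t^i$, so that its Jacobian determinant equals $t^{\sum_{i=1}^k i\,\DH_i}=t^Q$; in particular,
\[\mathcal{L}^n(\delta_t E)=t^{Q}\,\mathcal{L}^n(E)\qquad\forall\,\,E\in\mathcal{B}or(\mathbb{G}),\,\,\forall\,\,t>0,\]
that is, the Haar measure of $\mathbb{G}$ is $Q$-homogeneous with respect to the dilations $\{\delta_t\}_{t>0}$.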
\begin{Defi}\label{hometr}A continuous distance
$\varrho:\mathbb{G}\times\mathbb{G}\longrightarrow\R_+$ is called {\rm homogeneous}
if, and only if, the following hold:
\begin{itemize}\item[{\rm(i)}]$\varrho(x,y)=\varrho(z\bullet x,z\bullet
y)$ for every $x,\,y,\,z\in\mathbb{G}$;
\item[{\rm(ii)}]$\varrho(\delta_tx,\delta_ty)=t\varrho(x,y)$ for
all $t\geq 0$. \end{itemize}\end{Defi}
The CC-distance $\dc$ is an example of homogeneous distance.
Another example can be found in \cite{FSSC5}.\\
Any
Carnot group admits a smooth, subadditive, homogeneous
norm (see \cite{HeSi}), that is, there exists a continuous function
$\|\cdot\|_\varrho:\mathbb{G}\longrightarrow\R_+\cup \{0\}$ which is smooth on
$\mathbb{G}\setminus\{0\}$ and such
that:\begin{itemize}\item[{\rm(i)}]$\|x\bullet
y\|_\varrho\leq\|x\|_\varrho+\|y\|_\varrho$;
\item[{\rm(ii)}]$\|\delta_tx\|_\varrho=t\|x\|_\varrho\quad (t>
0)$;\item[{\rm(iii)}]$\|x\|_\varrho=0\Leftrightarrow x=0$;
\item[{\rm(iv)}]$\|x\|_\varrho=\|x^{-1}\|_\varrho$.
\end{itemize}
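Note that any such homogeneous norm induces a homogeneous distance in the sense of Definition \ref{hometr} by setting
\[\varrho(x,y):=\|x^{-1}\bullet y\|_\varrho\qquad\forall\,\,x,\,y\in\mathbb{G}.\]
Indeed, left-invariance is immediate, the triangle inequality follows from {\rm(i)}, the symmetry of $\varrho$ from {\rm(iv)}, and the homogeneity from {\rm(ii)}, since each dilation $\delta_t$ is a group automorphism, that is, $(\delta_tx)^{-1}\bullet\delta_ty=\delta_t(x^{-1}\bullet y)$.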
\begin{Defi}\label{2iponhomnor1}Let $\varrho:\mathbb{G}\times \mathbb{G}\longrightarrow\R_+$
be a homogeneous distance such that:
\begin{itemize}\item[{\rm(i)}]$\varrho$ is (piecewise)
$\cont^1$;\item[{\rm(ii)}]$|\textit{grad}_{^{_\HH}}\varrho|\leq 1$ at each
regular point of $\varrho$;
\item[{\rm(iii)}]${|x_{^{_\HH}}|}\leq{\varrho(x)}$ for every $x\in\mathbb{G}$, where
$\varrho(x)=\varrho(0,x)=\|x\|_\varrho$. Furthermore, we shall
assume that there exist constants ${\bf c}_i\in\R_+$ such that
$|x_{^{_{\HH_i}}}|\leq {\bf c}_i
\varrho^i(x)$ for any $i=2,...,k.$\end{itemize}
\end{Defi}
\begin{es}\label{distusata}
A smooth homogeneous norm $\varrho$ on
$\mathbb{G}\setminus \{0\}$ can be defined by setting
\begin{equation}\label{distusata0}
\|x\|_\varrho:=\left(|x_{^{_\HH}}|^{\lambda}+C_2|x_{^{_{\HH_2}}}|^{\lambda/2}+C_3|x_{^{_{\HH_3}}}|^{\lambda/3}+...+C_k|x_{^{_{\HH_k}}}|^{\lambda/k}
\right)^{1/\lambda},
\end{equation}where $\lambda$ is any positive number
divisible by each $i=1,...,k$, and $|x_{^{_{\HH_i}}}|$ denotes the
Euclidean norm of the projection $x_{^{_{\HH_i}}}$ of $x$ onto the $i$-th layer
$\HH_i$ of $\mathfrak{g}$.
\end{es}
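Let us check the homogeneity of the norm \eqref{distusata0}. Since $(\delta_tx)_{^{_{\HH_i}}}=t^i\,x_{^{_{\HH_i}}}$ for every $i=1,...,k$, each summand in \eqref{distusata0} scales by the factor $t^\lambda$, and hence
\[\|\delta_t x\|_\varrho=\left(t^\lambda|x_{^{_\HH}}|^{\lambda}+C_2\,t^{\lambda}|x_{^{_{\HH_2}}}|^{\lambda/2}+...+C_k\,t^{\lambda}|x_{^{_{\HH_k}}}|^{\lambda/k}\right)^{1/\lambda}=t\,\|x\|_\varrho\qquad\forall\,\,t>0.\]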
\begin{es}\label{Kor}Let us consider the case of Heisenberg groups
$\mathbb{H}^r$; see Example \ref{epocase}. It can be shown that the CC-distance $\dc$ satisfies all the assumptions of Definition \ref{2iponhomnor1}. Another example is the so-called {\it Koranyi
norm}, defined by
$$\|y\|_\varrho:=\varrho(y)=\sqrt[4]{|y_{^{_\HH}}|^4+16t^2}\quad(y=\exp(y_{^{_\HH}},t)\in\mathbb{H}^r),$$ which is homogeneous and
$\cont^\infty$-smooth outside
$0\in\mathbb{H}^r$ and satisfies \rm (ii) \it and \rm (iii) \it of Definition
\ref{2iponhomnor1}.
\end{es}
Since we have fixed a Riemannian metric on $\mathfrak{g}$, we may define the left-invariant
co-frame $\underline{\omega}:=\{\omega_i:i=1,...,n\}$ dual to
$\underline{X}$. In fact, the {\it left-invariant 1-forms}
\footnote{That is, $L_p ^{\ast}\omega_i=\omega_i$ for every
$p\in\mathbb{G}.$} $\omega_i$ are uniquely determined by the condition:
$$\omega_i(X_j)=\langle X_i,X_j\rangle=\delta_i^j\quad \mbox{for every}\,\,i,
j=1,...,n,$$ where $\delta_i^j$ denotes the \textquotedblleft Kronecker delta\textquotedblright.
Recall that the {\it structural constants} of the Lie algebra
$\mathfrak{g}$ associated with the left-invariant frame $\underline{X}$ are
defined by
$${C^{\gg}}^r_{ij}:=\langle [X_i,X_j],
X_r\rangle\quad\mbox{for every}\,\, i, j, r=1,...,n. $$
\noindent {They satisfy the following:
\begin{itemize}\item [{\rm (i)}]\, ${C^{\gg}}^r_{ij} +{C^{\gg}}^r_{ji}=0$\,\,\, (skew-symmetry); \item
[{\rm(ii)}]\, $\sum_{j=1}^{n} {C^{\gg}}^i_{jl}{C^{\gg}}^{j}_{rm} +
{C^{\gg}}^i_{jm}{C^{\gg}}^{j}_{lr} + {C^{\gg}}^i_{jr}{C^{\gg}}^{j}_{ml}=0$\,\,\, (Jacobi's
identity).\end{itemize} \noindent The stratification of
the Lie algebra implies the following structural property:
\[ X_i\in {\HH}_{l},\, X_j \in
{\HH}_{m}\Longrightarrow [X_i,X_j]\in {\HH}_{l+m}.\]
\begin{Defi}[Matrices of structural constants]\label{nota}We set\begin{itemize}\item[{\rm(i)}]
$C^\alpha_{^{_\HH}}:=[{C^{\gg}}^\alpha_{ij}]_{i,j\in
I_{^{_\HH}}}\in\mathcal{M}_{h\times h}(\R)\quad\forall\,\,\alpha\in I_{^{_{\HH_2}}}$;
\item[{\rm(ii)}] $ C^\alpha:=[{C^{\gg}}^\alpha_{ij}]_{i, j=1,...,n}\in
\mathcal{M}_{n\times n}(\R)\quad\forall\,\,\alpha\in
I_{^{_\VV}}.$\end{itemize}The linear operators associated with these matrices are denoted in the same way.
\end{Defi}
\begin{Defi}\label{parzconn}
Let $\nabla$ be the (unique) left-invariant
Levi-Civita connection on $\mathbb{G}$ associated with the metric $g$. If
$X, Y\in\XH:=\cont^\infty(\mathbb{G},\HH)$, we set $\nabla^{_{\HH}}_X Y:=\PH(\nabla_X
Y).$
\end{Defi}
\begin{oss} The operation $\nabla^{_{\HH}}$ is called the {\it horizontal $\HH$-connection}; see
\cite{Monteb} and references therein. By using the properties of the structural constants of
the Levi-Civita connection, one can show that $\nabla^{_{\HH}}$ is {\rm flat}, that is,
$\nabla^{_{\HH}}_{X_i}X_j=0$ for every $i,j\in I_{^{_\HH}}$. Moreover, $\nabla^{_{\HH}}$ turns out to be
{\rm compatible with the sub-Riemannian metric} $g_{^{_\HH}}$, that is,
$X\langle Y, Z \rangle=\langle \nabla^{_{\HH}}_X Y, Z \rangle
+ \langle Y, \nabla^{_{\HH}}_X Z \rangle$ for all $X, Y, Z\in
\XH$, and {\rm torsion-free}, that is,
$\nabla^{_{\HH}}_X Y - \nabla^{_{\HH}}_Y X-\PH[X,Y]=0$ for all $X, Y\in \XH$. All these properties easily follow from the very definition of $\nabla^{_{\HH}}$
together with the corresponding properties of the Levi-Civita connection
$\nabla$ on $\mathbb{G}$. Finally, we recall a fundamental property of $\nabla$, that is,
\begin{equation*}\nabla_{X_i} X_j =
\frac{1}{2}\sum_{r=1}^n\left( {C^{\gg}}_{ij}^r - {C^{\gg}}_{jr}^i +
{C^{\gg}}_{ri}^j\right) X_r\quad \forall\,\,i,\,j=1,...,n.\end{equation*}
\end{oss}
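\begin{es}As an illustration, consider the first Heisenberg group $\mathbb{H}^1$, whose only non-trivial structural constants are ${C^{\gg}}^3_{12}=-{C^{\gg}}^3_{21}=1$; see Example \ref{epocase}. The previous formula yields
\[\nabla_{X_1}X_2=-\nabla_{X_2}X_1=\frac{1}{2}X_3,\quad \nabla_{X_1}X_3=\nabla_{X_3}X_1=-\frac{1}{2}X_2,\quad\nabla_{X_2}X_3=\nabla_{X_3}X_2=\frac{1}{2}X_1,\]
while all the remaining covariant derivatives vanish. In particular, $\PH(\nabla_{X_i}X_j)=0$ for every $i,j\in I_{^{_\HH}}$, in accordance with the flatness of the horizontal $\HH$-connection.\end{es}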
\begin{Defi}
If $\psi\in\cont^\infty({\mathbb{G}})$ we define the horizontal gradient of
$\psi$ as the unique horizontal vector field $\textit{grad}\cc \psi$ such
that
$\langle\textit{grad}\cc \psi,X \rangle= d \psi (X) = X
\psi$ for all $X\in \XH.$ The horizontal
divergence of $X\in\XH$, $\div\cc X$, is defined, at each point
$x\in \mathbb{G}$, by
$$\div\cc X(x):= \mathrm{Trace}\big(Y\longrightarrow \nabla^{_{\HH}}_{Y} X
\big)(x)\quad(Y\in \HH_x).$$
\end{Defi}
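Since the horizontal $\HH$-connection is flat, both operators can be computed componentwise with respect to the frame $\underline{X_{^{_\HH}}}$: if $X=\sum_{i\in I_{^{_\HH}}}a_i\,X_i\in\XH$, then
\[\textit{grad}\cc\psi=\sum_{i\in I_{^{_\HH}}}(X_i\psi)\,X_i,\qquad \div\cc X=\sum_{i\in I_{^{_\HH}}}X_i\,a_i.\]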
\begin{es}[Heisenberg groups $\mathbb{H}^r$] \label{epocase}Let $\mathfrak{h}_r:=\TT_0\mathbb{H}^r=\R^{2r + 1}$
denote the Lie algebra of $\mathbb{H}^r$. The only non-trivial algebraic
rules are given by $[\ee_{i},\ee_{i+1}]=\ee_{2r + 1}$ for every $i=2k+1$ where
$k=0,...,r-1$. We have
$\mathfrak{h}_r=\HH\oplus \R\ee_{2r+1},$ where $\HH={\rm
span}_{\R}\{\ee_i:i=1,...,2r\}$ and the second layer turns out to be the 1-dimensional center of $\mathfrak{h}_r$. The Baker-Campbell-Hausdorff formula determines the group
law $\bullet$. For every
$x=\exp\left(\sum_{i=1}^{2r+1}x_iX_i\right),\,
y=\exp\left(\sum_{i=1}^{2r+1}y_i X_i\right)\in \mathbb{H}^r$ one has\begin{center}$x\bullet y =\exp \left(x_1 + y_1,x_2+y_2,...,x_{2r} +
y_{2r}, x_{2r+1} + y_{2r+1} + \frac{1}{2}\sum_{k=1}^{r} (x_{2k-1}
y_{2k}- x_{2k} y_{2k-1})\right).$\end{center} The matrix of structural constants is given by
$$C_{^{_\HH}}^{2r+1}:=\left|
\begin{array}{ccccc}
0 & 1 & 0 & 0 & \cdots\\
-1 & 0 & 0 & 0 & \cdots\\
0 & 0 & 0 & 1 & \cdots\\
0 & 0 & -1 & 0 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right|.$$\end{es}
\subsection{Hypersurfaces, homogeneous measures and geometric structures}\label{dere}
Hereafter, $\mathcal{H}^m_{\varrho}$ and $\mathcal{S}^m_{\varrho}$ will denote
the Hausdorff measure and the spherical
Hausdorff measure, respectively, associated with a homogeneous distance
$\varrho$ on $\mathbb{G}$\footnote{We recall
that:\begin{itemize}\item[{\rm(i)}]
$\mathcal{H}_{{\varrho}}^{m}({S})=\lim_{\delta\to 0^+}\mathcal
{H}_{{\varrho},\delta}^{m}({S})$, where
$${\mathcal{H}}_{{\varrho},\delta}^{m}({S})=
\inf\left\{\sum_i\left(\mathrm{diam}_\varrho ({C}_i)\right)^{m}:\;{S}
\subset\bigcup_i
{C}_i;\;\mathrm{diam}_\varrho({C}_i)<\delta\right\}$$ and the
infimum is taken with respect to any non-empty family of closed
subsets $\{{C}_i\}_i\subset\mathbb{G}$;\item[{\rm(ii)}]
$\mathcal{S}_{{\varrho}}^{m}({S})=\lim_{\delta\to 0^+}\mathcal
{S}_{{\varrho},\delta}^{m}({S})$, where
$${\mathcal{S}}_{{\varrho},\delta}^{m}({S})=
\inf\left\{\sum_i\left(\mathrm{diam}_\varrho({B}_i)\right)^{m}:\;{S}
\subset\bigcup_i {B}_i;\;\mathrm{diam}_\varrho
({B}_i)<\delta\right\}$$ and the infimum is taken with respect to
closed $\varrho$-balls ${B}_i$.\end{itemize}}
The Riemannian left-invariant volume form on $\mathbb{G}$ is defined by
$\sigma_{^{_\mathit{R}}}^n:=\bigwedge_{i=1}^n\omega_i\in
\Om^n(\mathbb{G})$. Hereafter we will set $\Vol:=\sigma_{^{_\mathit{R}}}^n$. This is the Haar
measure of $\mathbb{G}$ and equals (the push-forward of) the
$n$-dimensional Lebesgue measure $\mathcal{L}^n$ on $\Rn\cong\TT_0\mathbb{G}$.
\begin{Defi}\label{caratt}
Let $S\subset\mathbb{G}$ be a hypersurface of class $\cont^1$. We
say that $x\in S$ is a {\rm characteristic point} if
$\dim\,\HH_x = \mathrm{dim}(\HH_x \cap \TT_x S)$ or, equivalently, if
$\HH_x\subset\TT_x S$. The {\rm characteristic set} $C_S$ of $S$ is the set of all characteristic points, that is, $ C_S:=\{x\in S : \dim\,\HH_x = \mathrm{dim}(\HH_x \cap
\TT_x S)\}.$
\end{Defi}
Note that a hypersurface $S\subset\mathbb{G}$ oriented by the outward-pointing normal
vector $\nu$ turns out to be {\it non-characteristic} if, and only if, the
horizontal space $\HH$ is {\it transversal} to $S$. We remark that the $(Q-1)$-dimensional CC-Hausdorff measure of $C_S$ vanishes, that is,
$\mathcal{H}_{CC}^{Q-1}(C_S)=0$; see \cite{Mag}. The
$(n-1)$-dimensional {\it Riemannian measure} along $S$ can be defined
by setting $\sigma^{n-1}_{^{_\mathit{R}}}:=(\nu\LL\Vol)|_{S},$ where $\LL$ denotes
the \it contraction operator \rm (or, \it interior product\rm) on differential
forms; see footnote \ref{contraction}. Just as in \cite{Monte, Monteb} (see also
\cite{vari}, \cite{HP}, \cite{RR}), since we are studying \textquotedblleft smooth\textquotedblright\,
hypersurfaces, instead of the variational definition \`a la De Giorgi (see, for instance,
\cite{FSSC3, FSSC5}, \cite{GN}, \cite{Monte} and bibliographies
therein) we define an $(n-1)$-differential form which, by
integration on smooth boundaries, yields the usual $\HH$-perimeter
measure.
\begin{Defi}[$\per$-measure]\label{sh}
Let $S\subset\mathbb{G}$ be a $\mathbf{C}^1$ non-characteristic
hypersurface and $\nu$ the outward-pointing unit normal vector. We call {\rm
unit $\HH$-normal} along $S$ the normalized projection of $\nu$
onto $\HH$, that is, $\nn:=\frac{\PH\nu}{|\PH\nu|}$. The
{\rm $\HH$-perimeter} along ${S}$ is the homogeneous measure associated
with the $(n-1)$-differential form $\per$ on $S$ given by $\per:=(\nn \LL
\Vol)|_S.$ \end{Defi}If we allow $S$ to have characteristic
points, one extends $\per$ by setting $\per\res C_{S}= 0$. It
turns out that $\per = |\PH \nu |\,\sigma^{n-1}_{^{_\mathit{R}}} $ and that $C_S=\{x\in S : |\PH\nu|=0\}$. We also remark that
$ \per(S\cap B)=k_{\varrho}(\nn)\,\mathcal{S}_{\varrho}^{Q-1}({S}\cap B)$ for all $B\in \mathcal{B}or(\mathbb{G})$, where the bounded density-function $k_{\varrho}(\nn)$, called {\it metric factor}, only
depends on $\nn$ and on the (fixed) homogeneous metric $\varrho$ on
$\mathbb{G}$; see \cite{Mag}; see also
Section \ref{blow-up}.
\begin{Defi}\label{carca}Setting $\mathit{H}_x S:=\HH_x\cap\TT_x S$ for every $x\in
S\setminus C_S$, yields $\HH_x=\mathit{H}_x
S\oplus{\rm span}_\R\{\nn(x)\}$ and this uniquely defines the subbundles
$\HS$ and $\nn S$, called, respectively, {\rm horizontal tangent
bundle} and {\rm horizontal normal bundle}. Note that $\mathrm{dim}\,\HH_x S=\mathrm{dim}\,\HH_x-1=\DH-1$ at each non-characteristic point $x\in S\setminus C_S$.
\end{Defi}
The horizontal tangent space is well defined even at the characteristic set $C_S$. More precisely, in this case $\HH_x S=\HH_x$ for every $x\in C_S$. Hence $\dim\HH_xS=\dim\HH_x=\DH$ for any $x\in C_S$.
For the sake of simplicity, unless otherwise mentioned, \it we assume that $S\subset\mathbb{G}$ is a $\cont^2$ non-characteristic hypersurface. \rm We first remark
that, if $\nabla^{_{\TS}}$ is the connection induced on $\TT S$ from the
Levi-Civita connection $\nabla$ on $\TG$\footnote{$\nabla^{_{\TS}}$ is the Levi-Civita connection on $S$; see \cite{Ch1}.},
then $\nabla^{_{\TS}}$ induces a \textquotedblleft partial connection\textquotedblright\, $\nabla^{_{\HS}}$ on
$\HS\subset\TT{S}$, which is defined by\footnote{The map
$\P_{^{_{\HS}}}:\TT{S}\longrightarrow\HS$ denotes the orthogonal projection
onto $\HS$.}
$$\nabla^{_{\HS}}_XY:=\P_{^{_{\HS}}}(\nabla^{_{\TS}}_XY)\quad\,\forall\,\,X,Y\in\XX^1(\HS):=\cont^1(S, \HS).$$
Note that the orthogonal decomposition
$\HH=\HS\oplus\nn S$ enables us to define $\nabla^{_{\HS}}$ in analogy with the
definition of ``connection on submanifolds''; see
\cite{Ch1}. More precisely, one has$$\nabla^{_{\HS}}_XY=\nabla^{_{\HH}}_X Y-\langle\nabla^{_{\HH}}_X
Y,\nn\rangle\,\nn\quad\forall\,\,X,Y\in\XX^1(\HS).$$
\begin{Defi}Let $S\subset\mathbb{G}$ be a $\cont^2$ non-characteristic hypersurface and $\nu$ the outward-pointing unit normal vector.
The $\HS$-{gradient} $\textit{grad}\ss\psi$ of $\psi\in \cont^1({S})$ is the unique
horizontal tangent vector field such that
$\langle\textit{grad}\ss\psi,X \rangle= d \psi (X) = X
\psi$ for all $X\in \XX^1(\HS)$. We denote by $\div_{^{_{\HS}}}$
the divergence operator on $\HS$, that is, if $X\in\XX^1(\HS)$ and $x\in
{S}$, then
$$\div_{^{_{\HS}}} X (x) := \mathrm{Trace}\big(Y\longrightarrow
\nabla^{_{\HS}}_Y X \big)(x)\quad\,(Y\in \HH_xS).$$The \rm horizontal 2{nd}
fundamental form \it of ${S}$ is the continuous map given by
${B_{^{_\HH}}}(X,Y):=\langle\nabla^{_{\HH}}_X Y, \nn\rangle$ for every $X, Y\in\XX^1(\HS)$.
The {\rm horizontal mean curvature} $\MS$ is the trace of the linear operator ${{B}_{^{_\HH}}}$,
that is, $\MS:={\rm
Tr}B_{^{_\HH}}=-\div_{^{_\HH}}\nn.$ We set
\begin{itemize}\item[{\rm(i)}]$\varpi_\alpha:=\frac{\langle X_\alpha, \nu\rangle}{|\PH\nu|}\quad (\nu_\alpha:=\langle X_\alpha, \nu\rangle)\quad \forall\,\,\alpha\in
I_{^{_\VV}}$;\item[{\rm(ii)}]$\varpi:=\frac{\P_{^{_\VV}}\nu}{|\PH\nu|}=\sum_{\alpha\in I_{^{_\VV}}}\varpi_\alpha
X_\alpha$;\item[{\rm(iii)}] $C_{^{_\HH}}:=\sum_{\alpha\in {I_{^{_{\HH_2}}}}}
\varpi_\alpha\,C^\alpha_{^{_\HH}}$.\end{itemize}
\end{Defi}
Note that $\frac{\nu}{|\PH\nu|}=\nn+\varpi$. The horizontal 2nd
fundamental form ${B_{^{_\HH}}}(X,Y)$ is a (continuous) bilinear form of
$X$ and $Y$. However, in general, $B_{^{_\HH}}$ {\it is not symmetric}, and so it can be written as the sum of a
symmetric and a skew-symmetric matrix, that is, $B_{^{_\HH}}= S_{^{_\HH}} + A_{^{_\HH}}.$
It turns out that $A_{^{_\HH}}=\frac{1}{2}\,C_{^{_\HH}}\big|_{\HS}$; see
\cite{Monteb}.
\begin{Defi}\label{movadafr}Let $S\subset\mathbb{G}$ be a $\cont^2$ hypersurface with boundary $\partial S$. We call {\rm adapted frame to $S$} any graded orthonormal frame
$\underline{\tau}:=\{\tau_1,...,\tau_n\}$ for $\mathit{g}\ccg$ such that:
\begin{itemize}
\item $ \tau_1(x)=\nn(x) \quad\forall\,\,x\in \overline{S}\setminus C_S;$\item $\HH_x S=\mathrm{span}\{ \tau_2(x),...,\tau_{\DH}(x)\}\quad\forall\,\,x\in \overline{S}\setminus C_S;$\item $\tau_\alpha:=
X_\alpha\quad\forall\,\,\alpha\in I_{^{_\VV}}.$
\end{itemize}
Furthermore set
$\TB_\alpha:=\tau_\alpha -
\varpi_\alpha\tau_1$ for every $\alpha\in I_{^{_\VV}}$. Note that
$\VV_xS= \mathrm{span}_{\R}\{\TB_\alpha(x): \alpha\in I_{^{_\VV}}\},$
where $\VV_x S$ is the orthogonal complement of $\HH_x S$ in
$\TT_x S$, that is, $\TT_x S=\HH_x S\oplus\VV_x S.$\end{Defi}
It is worth remarking that
$$\underline{\tau}=\{\underbrace{\tau_1}_{=\nn},
\underbrace{\tau_2,...,\tau_{\DH}}_{\mbox{\tiny{o.n. basis of}}\,\HS},\underbrace{\tau_{\DH+1},...,\tau_n}_{\mbox{\tiny o.n. basis of}\,\VV}\}.$$
\begin{oss}[Induced stratification on $\TS$; see \cite{Gr1}]\label{indbun}The stratification of $\mathfrak{g}$ induces a natural decomposition
of the tangent space of any smooth submanifold of $\mathbb{G}$. Let us
analyze the case of a hypersurface $S\subset\mathbb{G}$. To this aim, at
each point $x\in S$, we intersect $\TT_x S\subset\TT_x\mathbb{G}$ with
$\TT_x^i\mathbb{G}=\oplus_{j=1}^i(\HH_j)_x$. Setting
$\TT^iS:=\TS\cap\TT^i\mathbb{G},$ $n'_i:=\dim\TT^iS$,
$\HH_iS:=\TT^iS\setminus \TT^{i-1}S$ and $\HS=\HH_1S$, yields
$\TS:=\oplus_{i=1}^k\HH_iS$ and $\sum_{i=1}^kn'_i=n-1.$
Henceforth, we shall set $\VS:=\oplus_{i=2}^k\HH_iS$. It turns out
that the Hausdorff dimension of any smooth hypersurface $S$ is
$Q-1=\sum_{i=1}^k i\,n'_i$; see \cite{Gr1}, \cite{P4},
\cite{FSSC5}, \cite{Mag, Mag3}. Furthermore, if the horizontal
tangent bundle $\HS$ is generic and horizontal, then the couple
$(S, \HS)$ turns out to be a $k$-step CC-space; see Section
\ref{prelcar}.\end{oss}
\begin{es}Let $S\subset\mathbb{H}^n$ be a smooth hypersurface. If $n=1$,
the horizontal tangent bundle $\HS$ is $1$-dimensional at each non-characteristic point. On the contrary, if $n>1$, then $\HS$ turns out to be generic and horizontal along any non-characteristic domain
$\mathcal{U}\subseteq S$.
\end{es}
\begin{Defi}\label{iuoi}Let $N\subset\mathbb{G}$ be an $(n-2)$-dimensional submanifold of class $\cont^1$.
At each point $x\in N$, the horizontal tangent space is given by
$\HH_x N:=\HH_x\cap\TT_x N$. We say that $N$ is {\rm non-characteristic} at $x\in N$ if there exist two linearly
independent vectors $\nn^1,\,\nn^2\in\HH_x$ which are transversal
to $N$ at $x$. Without loss of generality, we can always assume that
$\nn^1,\,\nn^2$ are orthonormal and such that $|\nn^1 \wedge\nn^2
|=1$. If this condition holds for every $x\in N$, we say
that $N$ is {\rm non-characteristic}. In
this case, we define in the obvious way the associated vector
bundles $\HH N(\subset \TT N)$ and $\nn N$, called, respectively,
{\rm horizontal tangent bundle} and {\rm horizontal normal
bundle}. Note that $\HH_x:=\HH_x N\oplus{\nn}_xN$, where ${\nn}_xN\cong {\rm
span}_\R\{\nn^1(x)\wedge\nn^2(x)\}.$
\end{Defi}
\begin{Defi}[see \cite{Mag}]\label{carsetgen} Let $N\subset \mathbb{G}$ be an
$(n-2)$-dimensional submanifold of class $\cont^1$. The {\rm characteristic set}
$C_N$ is the set of all characteristic points of $N$.
Equivalently, one has $C_N:=\{x\in N : \dim\,\HH_x -\mathrm{dim}(\HH_x
\cap \TT_x N)\leq 1\}.$
\end{Defi}
Let $N\subset\mathbb{G}$ be an $(n-2)$-dimensional submanifold of class $\cont^1$; then the $(Q-2)$-dimensional Hausdorff measure (associated
with a homogeneous metric $\varrho$ on $\mathbb{G}$) of $C_N$ is $0$,
that is, $\mathcal{H}_{\varrho}^{Q-2}(C_N)=0$; see \cite{Mag}.
\begin{Defi}[$\nis$-measure]\label{dens}
Let $N\subset\mathbb{G}$ be a $(n-2)$-dimensional
non-characteristic submanifold of class $\mathbf{C}^1$; let $\nn^1, \nn^2\in\XX^0(\nn N):=\cont(N,\nn N)$ be as in
Definition \ref{iuoi} and set $\nn:=\nn^1\wedge \nn^2$. Equivalently, we are assuming that $\nn$ is a horizontal unit normal
$2$-vector field along $N$. Then, we define a $(Q-2)$-homogeneous measure
$\nis$ on $N$ by setting
$\nis:=(\nn \LL \Vol)|_N$.
\end{Defi}
The measure $\nis$ is the
contraction\footnote{For the most general
definition of $\LL$, see \cite{FE}, Ch.\,1.} of the top-dimensional volume form $\Vol$ by
the\footnote{It is unique, up to the sign.} horizontal unit normal
$2$-vector $\nn=\nn^1\wedge \nn^2$ which spans $\nn N$. Hence, $\nis$
can be represented in terms of the
$(n-2)$-dimensional Riemannian measure $\sigma_{^{_\mathit{R}}}^{n-2}$. In fact, let $\nu N$ denote the normal bundle of $N$
and let $\nu_1\wedge \nu_2\in \XX^0(\nu N)$ be a unit normal $2$-vector
field orienting $N$. By standard Linear Algebra, we get
that
$\nn=\frac{\PH\nu_1\wedge \PH\nu_2}{|\PH\nu_1\wedge
\PH\nu_2|}$. Moreover, it turns out that\[\nis = |\PH
\nu_1\wedge\PH \nu_2 |\,\sigma^{n-2}_{^{_\mathit{R}}}. \]Note that if $C_N\neq
\emptyset$, then $C_N=\{x\in N : |\PH \nu_1\wedge\PH \nu_2 |=0\}$
and $\nis$ can be extended up to $C_N$ just by setting $\nis\res
C_{N}= 0$. By construction, $\nis$ is $(Q-2)$-homogeneous with
respect to Carnot dilations $\{\delta_t\}_{t>0}$, that is,
$\delta_t^{\ast}\nis=t^{Q-2}\nis$. Furthermore, $\nis$ is
equivalent, up to a bounded density-function called metric-factor,
to the $(Q-2)$-dimensional Hausdorff
measure associated with a
homogeneous distance $\varrho$ on $\mathbb{G}$; see \cite{Mag3}. For the sake of completeness, we recall some
results obtained by Magnani and Vittone \cite{Mag3} and Magnani
\cite{Mag8}.
\begin{teo}[Blow-up for $(n-2)$-dimensional submanifolds; see
\cite{Mag3}]\label{Blowupfor}
Let $N\subset\mathbb{G}$ be a $(n-2)$-dimensional
submanifold of class $\cont^{1, 1}$, let $x\in N$ be a
non-characteristic point and let $\delta^x_t:\mathbb{G}\longrightarrow\mathbb{G}$ be the Carnot homothety centered at $x$. Then
\begin{equation}\label{bkaza}
\delta^x_{\frac{1}{r}}N\cap B_\varrho(0,1)\longrightarrow
\mathcal{I}^2(\nn(x))\cap B_\varrho(0,1)
\end{equation}as $r\rightarrow
0^+$, where $\mathcal{I}^2(\nn(x))$ denotes the
$(n-2)$-dimensional subgroup of $\mathbb{G}$ given by
$$\mathcal{I}^2(\nn(x)):=\left\{y\in\mathbb{G} : y=\exp (Y)\,\, \mbox{for some} \,\,Y\in\mathfrak{g} \,\,\mbox{such that} \,\,Y\wedge
\nn(x)=0\right\}$$ and $\nn=\nn^1\wedge\nn^2$ is a unit horizontal
normal $2$-vector along $N$. If $\nu=\nu_1\wedge\nu_2$
is a unit normal $2$-vector field orienting $N$, then
\[\lim_{r\rightarrow 0^+}\frac{\sigma_{^{_\mathit{R}}}^{n-2}(N\cap B_\varrho(x,
r))}{r^{Q-2}}=\frac{\kappa(\nn(x))}{|\PH \nu
(x)|},\]where $\kappa(\nn(x)):=\nis\left(
\mathcal{I}^2(\nn(x))\cap B_\varrho(0,1)\right)$ is a positive and bounded
density-function, called {\rm metric factor} and $\PH$ is the orthogonal projection operator extended to horizontal $2$-vectors.
\end{teo}
It is worth observing that the convergence in \eqref{bkaza} is understood with respect to the
Hausdorff distance of sets. We finally recall a recent
result about the
size of \it horizontal tangencies \rm (that is, characteristic sets, in our terminology) to non-involutive distributions; see \cite{Bal3}. The following theorem can be regarded as a generalized version of Derridj's Theorem; see Theorem 4.5 in \cite{Bal3}.
\begin{teo} \label{baloghteo}Let $\mathbb{G}$ be a $k$-step Carnot group.
\begin{itemize}\item[{\rm (i)}]If $S\subset \mathbb{G}$ is a hypersurface
of class $\cont^2$, then the Euclidean-Hausdorff dimension of the
characteristic set $C_S$ of $S$ satisfies the inequality
$\dim_{\rm Eu-Hau}(C_S)\leq n-2.$\item[{\rm (ii)}]Let
$\VV=\HH^\perp\subset\TG$ be such that $\mathrm{dim}\VV\geq 2$. If
$N\subset \mathbb{G}$ is a $(n-2)$-dimensional submanifold of class
$\cont^2$, then the Euclidean-Hausdorff dimension of the
characteristic set $C_N$ of $N$ satisfies the inequality
$\dim_{\rm Eu-Hau}(C_N)\leq n-3.$\end{itemize}
\end{teo}
It is worth observing, however, that more precise results can be obtained only with a further analysis of the algebraic structure of the Lie algebra of the given group.
\begin{oss}[What happens if $\mathrm{dim}\VV=1?$] \label{11baloghteo}First note that if $\mathrm{dim}\VV=1$, then $\mathbb{G}$ is a Carnot group of step $2$. In addition, the codimension of $\HH$ is $1$ so that $n=\dim\,\mathfrak{g}=h+1$. The most important example of such a Carnot group is, of course, the Heisenberg group $\mathbb H^r\,(r\geq 1)$. In this case, by using the results in \cite{Bal3},
we infer that: \begin{itemize}
\item if
$N\subset \mathbb H^r$ is a $(n-2)$-dimensional submanifold of class
$\cont^2$, then the Euclidean-Hausdorff dimension of the
characteristic set $C_N$ of $N$ satisfies the inequality
$\dim_{\rm Eu-Hau}(C_N)\leq r$, where $n=2r+1$.
\end{itemize}Hence $\dim_{\rm Eu-Hau}(C_N)\leq n-3$ if, and only if, $r>1$. Furthermore, it is not difficult to show that the same assertion holds for the direct product $\mathbb H^r\times \R^m$ of the Heisenberg group $\mathbb H^r$ with a Euclidean group $\R^m$.
\end{oss}
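\begin{es}The bound in Remark \ref{11baloghteo} is sharp when $r=1$. Consider, for instance, in $\mathbb{H}^1$ (with exponential coordinates $(x,y,t)$ and horizontal frame $X_1=\partial_x-\frac{y}{2}\partial_t$, $X_2=\partial_y+\frac{x}{2}\partial_t$) the $1$-dimensional submanifold $N=\{(x,0,0) : x\in\R\}$. Along $N$ one has $\TT_pN={\rm span}_\R\{\partial_x\}={\rm span}_\R\{X_1(p)\}$, since $y=t=0$ there. Hence $\dim(\HH_p\cap\TT_pN)=1$ at every point of $N$, so that every point of $N$ is characteristic and $\dim_{\rm Eu-Hau}(C_N)=1=r$. In particular, the inequality $\dim_{\rm Eu-Hau}(C_N)\leq n-3$ fails for $r=1$.
\end{es}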
\begin{es}[A \textquotedblleft bad\textquotedblright\ example with $\mathrm{dim}\VV=1$] We here construct an elementary example of a $4$-dimensional $2$-step Carnot group $\mathbb{G}$ in which there exist smooth $2$-dimensional horizontal submanifolds. To this end, let $\underline{X}=\{X_1, X_2, X_3, X_4\}$ be a basis of $\mathfrak{g}$ and assume that $[X_1, X_2]=[X_2, X_3]=X_4$ are the only non-trivial commutation relations. Obviously, we have $\HH=\mathrm{span}_\R\{X_1, X_2, X_3\}$, $\HH_2=\mathrm{span}_\R\{X_4\}$. In fact, since $[X_1, X_3]=0$, by applying
Frobenius' Theorem it follows that the $2$-plane $\{(x_1,...,x_4)\in\mathbb{G}: x_2=x_4=0\}$ is an integrable horizontal plane.
This example can be realized, for instance, by the following vector fields: $X_1=\partial_{x_1}-\frac{x_2}{2}\partial_{x_4}$, $X_2=\partial_{x_2}+\frac{(x_1+x_3)}{2}\partial_{x_4}$, $X_3=\partial_{x_3}+\frac{3x_2}{2}\partial_{x_4}$, $X_4=\partial_{x_4}$. \end{es}
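\begin{oss}One can verify the key relation in the previous example directly: since the $\partial_{x_4}$-coefficients of $X_1$ and $X_3$ depend only on the variable $x_2$, we get
$$[X_1,X_3]=\Big(X_1\big(\mbox{$\partial_{x_4}$-coeff.\ of }X_3\big)-X_3\big(\mbox{$\partial_{x_4}$-coeff.\ of }X_1\big)\Big)\,\partial_{x_4}=0.$$
Moreover, along the plane $\{x_2=x_4=0\}$ one has $X_1=\partial_{x_1}$ and $X_3=\partial_{x_3}$, so this horizontal distribution is involutive and the plane itself is its integral surface, that is, a smooth $2$-dimensional horizontal submanifold.
\end{oss}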
\section{Preliminary tools}\label{Preliminaries}
\subsection{Coarea Formula for the
$\HS$-gradient}\label{COAR}
\begin{teo}\label{TCOAR}Let $S\subset\mathbb{G}$ be a compact hypersurface of class $\mathbf{C}^2$ and
let $\varphi\in\cont^1(S)$. Then
\begin{equation}\label{1coar}\int_{S}\psi(x)|\mathrm{grad}_{^{_{\HS}}}\varphi(x)|\,\per(x)=\int_{\R}ds \int_{\varphi^{-1}[s]\cap S}\psi(y)\,\nis(y)
\end{equation}
for every $\psi\in L^1(S,\per)$.\end{teo}
\begin{proof}
This result can be deduced by the Riemannian Coarea
Formula. Indeed, we have
\begin{equation*}\int_{S}\phi(x)|
\mathrm{grad}_{^{_{\TS}}}\varphi(x)|\,\sigma^{n-1}_{^{_\mathit{R}}}(x)=\int_{\R}ds
\int_{\varphi^{-1}[s]\cap
S}\phi(y)\,\sigma^{n-2}_{^{_\mathit{R}}}(y)\end{equation*} for every $\phi\in
L^1(S,\sigma^{n-1}_{^{_\mathit{R}}})$; see
\cite{BuZa}, \cite{FE}. Choosing
$\phi=\psi\frac{|\mathrm{grad}_{^{_{\HS}}}\varphi|}{|\mathrm{grad}_{^{_{\TS}}}\varphi|}|\P_{^{_\HH}}\nu|,$ for some $\psi\in
L^1(S,\per)$, yields
\begin{eqnarray*}\int_{S}\phi|
\mathrm{grad}_{^{_{\TS}}}\varphi|\,\sigma^{n-1}_{^{_\mathit{R}}}=\int_{S}\psi\frac{|\mathrm{grad}_{^{_{\HS}}}\varphi|}{|\mathrm{grad}_{^{_{\TS}}}\varphi|}
|\mathrm{grad}_{^{_{\TS}}}\varphi|\underbrace{|\P_{^{_\HH}}\nu|\,\sigma^{n-1}_{^{_\mathit{R}}}}_{=\per}=\int_{S}\psi|\mathrm{grad}_{^{_{\HS}}}\varphi|\,\per.
\end{eqnarray*}Since
$\eta=\frac{\mathrm{grad}_{^{_{\TS}}}\varphi}{|\mathrm{grad}_{^{_{\TS}}}\varphi|}$ along
$\varphi^{-1}[s]$, it follows that
$|\P_{^{_{\HS}}}\eta|=\frac{|\mathrm{grad}_{^{_{\HS}}}\varphi|}{|\mathrm{grad}_{^{_{\TS}}}\varphi|}$. Therefore
\begin{eqnarray*}\int_{\R}ds
\int_{\varphi^{-1}[s]\cap S}\phi(y)\,\sigma^{n-2}_{^{_\mathit{R}}}&=&\int_{\R}ds
\int_{\varphi^{-1}[s]\cap
S}\psi\frac{|\mathrm{grad}_{^{_{\HS}}}\varphi|}{|\mathrm{grad}_{^{_{\TS}}}\varphi|}|\P_{^{_\HH}}\nu|\,\sigma^{n-2}_{^{_\mathit{R}}}\\&=&\int_{\R}ds\int_{\varphi^{-1}[s]\cap
S}\psi\underbrace{|\P_{^{_{\HS}}}\eta||\P_{^{_\HH}}\nu|\,\sigma^{n-2}_{^{_\mathit{R}}}}_{=\nis}\\&=&\int_{\R}ds\int_{\varphi^{-1}[s]\cap
S}\psi\,\nis.\end{eqnarray*}
\end{proof}
\subsection{First variation of $\per$}\label{prvar0}
Below we shall discuss a general integral formula, the so-called first variation formula for the $\HH$-perimeter, which is the key-tool of this paper.
\begin{oss}[The measure $\nis$ along $\partial S$]\label{measonfr}
Let $S\subset\mathbb{G}$ be a hypersurface of class $\cont^2$ with
(piecewise) $\cont^1$ boundary $\partial{S}$. Let
$\eta\in\XX^1(\TS)$ be the outward-pointing unit normal vector along $\partial S$ and denote by $\sigma^{n-2}_{^{_\mathit{R}}}$
the Riemannian measure on $\partial{S}$, given by
$\sigma^{n-2}_{^{_\mathit{R}}}=(\eta\LL\sigma^{n-1}_{^{_\mathit{R}}})|_{\partial{S}}$.
We recall that $(X\LL\per)|_{\partial{S}}=\langle X, \eta\rangle
|\PH\nu|\, \sigma^{n-2}_{^{_\mathit{R}}} $ for every $X\in\XX^1(\TS)$. The
characteristic set $C_{\partial{S}}$ of ${\partial{S}}$ turns out to be given
by $C_{\partial{S}}=\{p\in{\partial{S}}: |\PH\nu||\P_{^{_{\HS}}}\eta|=0\}$.
Furthermore, by applying Definition \ref{dens}, one has
$${\nis} =
\left(\frac{\P_{^{_{\HS}}}\eta}{|\P_{^{_{\HS}}}\eta|}\LL\per\right)\bigg|_{\partial{S}},$$
or, equivalently ${\nis} =
|\PH\nu||\P_{^{_{\HS}}}\eta|\,\sigma^{n-2}_{^{_\mathit{R}}} $. The {\rm unit horizontal normal} along $\partial{S}$ is given
by
$\eta_{^{_{\HS}}}:=\frac{\P_{^{_{\HS}}}\eta}{|\P_{^{_{\HS}}}\eta|}$. Note that $(X\LL\per)|_{\partial{S}}=\langle X, \eta_{^{_{\HS}}}\rangle\,
{\nis} $ for every $X\in\XX^1(\HS)$.
\end{oss}
\begin{Defi} \label{leibniz}Let $S\subset\mathbb{G}$ be a hypersurface of class $\cont^2$. Let $\imath:S\rightarrow\mathbb{G}$ be the inclusion of $S$ in $\mathbb{G}$
and let
$\vartheta: ]-\epsilon,\epsilon[\times S
\rightarrow \mathbb{G}$ be a ${\cont}^2$-smooth map. We say that $\vartheta$ is a {\rm
variation} of $\imath$ if, and only if:
\begin{itemize}
\item[{\rm(i)}] every
$\vartheta_t:=\vartheta(t,\cdot):S\rightarrow\mathbb{G}$ is an
immersion;\item[{\rm(ii)}] $\vartheta_0=\imath$.
\end{itemize}
The {\rm variation vector} of $\vartheta$ is defined by
$X:=\frac{\partial \vartheta}{\partial
t}\big|_{t=0}$ and we also set $\widetilde{X}=\frac{\partial \vartheta}{\partial
t}$.
\end{Defi}
\begin{no}Let $S\subset\mathbb{G}$ be a hypersurface of class $\cont^2$. Let $X\in\XG$ and let $\nu$ be the outward-pointing unit normal vector along $S$. Hereafter, we shall denote by $X\op$ and $X\ot$ the standard decomposition of $X$ into its normal and tangential components, that is, $X\op=\langle X,\nu\rangle\nu$ and $X\ot=X-X\op$.
\end{no}
By definition, the 1st variation formula of $\per$ along $S$ is given by
\begin{equation}\label{nome}I_S(\per):=\frac{d}{dt}\left(\int_{S}\vartheta_t^\ast\pert\right) \Bigg|_{t=0},\end{equation}
where $\vartheta_t^\ast$ denotes the pull-back by $\vartheta_t$ and $\pert$ denotes the $\HH$-perimeter along $S_t:=\vartheta_t(S)$.
A natural question arises: is it possible to bring the time-derivative inside the integral sign?
Clearly, if we assume that $\overline{S}$ is non-characteristic, then the answer is affirmative.
In the general case, we can argue as follows. We first note that
$$ \int_{S}\vartheta_t^\ast\pert=\int_{S}|\P_{^{_{\HH_t}}}\nu^t|\,\mathcal{J}ac\,\vartheta_t\,\sigma_{^{_\mathit{R}}}^{n-1},$$where $\mathcal{J}ac\,\vartheta_t $ denotes the usual Jacobian of the map $\vartheta_t$; see \cite{Simon}, Ch. 2, $\S$ 8, pp. 46-48. Indeed, by definition, we have $\pert=|\P_{^{_{\HH_t}}}\nu^t|(\sigma_{^{_\mathit{R}}}^{n-1})_t$ and hence the previous formula follows from the well-known Area formula of Federer; see \cite{FE} or \cite{Simon}. Let us set $ f:]-\epsilon, \epsilon[\times S\longrightarrow\R$, \begin{equation}\label{faz}
f(t, x):=|\P_{^{_{\HH_t}}}\nu^t(x)|\,\mathcal{J}ac\,\vartheta_t(x).
\end{equation}In this case, we also set $C_{S}:=\left\lbrace x\in S: |\P_{^{_{\HH}}}\nu(x)|=0 \right\rbrace$. With this notation, our original question can be solved by applying to $f$ the Theorem of Differentiation under the integral; see, for instance, \cite{Jost}, Corollary 1.2.2, p.124. More precisely, let us compute
\begin{eqnarray}\label{ujh}\frac{d f}{dt}&=&\frac{d\,|\P_{^{_{\HH_t}}}\nu^t|}{dt}\,\mathcal{J}ac\,\vartheta_t + |\P_{^{_{\HH_t}}}\nu^t|\frac{d\,\mathcal{J}ac\,\vartheta_t}{dt}\\\nonumber &=&\left\langle\widetilde{X},\mathrm{grad}\,|\P_{^{_{\HH_t}}}\nu^t|\right\rangle\,\mathcal{J}ac\,\vartheta_t + |\P_{^{_{\HH_t}}}\nu^t|\frac{d\,\mathcal{J}ac\,\vartheta_t }{dt}\\\nonumber &=&\left( \left\langle\widetilde{X}\op,\mathrm{grad}\,|\P_{^{_{\HH_t}}}\nu^t|\right\rangle+\left\langle\widetilde{X}\ot,\mathrm{grad}\,|\P_{^{_{\HH_t}}}\nu^t|\right\rangle + |\P_{^{_{\HH_t}}}\nu^t|\div_{^{_{\TT S_t}}}\widetilde{X}\right) \mathcal{J}ac\,\vartheta_t\\\nonumber &=&\left( \left\langle\widetilde{X}\op,\mathrm{grad}\,|\P_{^{_{\HH_t}}}\nu^t|\right\rangle+ \div_{^{_{\TT S_t}}}\left(\widetilde{X}|\P_{^{_{\HH_t}}}\nu^t|\right)\right) \mathcal{J}ac\,\vartheta_t,
\end{eqnarray}where we have used the very definition of tangential divergence and the well-known calculation of $\frac{d\,\mathcal{J}ac\,\vartheta_t}{dt}$, which can be found in Chavel's book \cite{Ch2}; see Ch.\,2, p.\,34. Now, since $|\P_{^{_{\HH_t}}}\nu^t|$ is a Lipschitz continuous function, it follows that $\frac{d f}{dt}$ is bounded on $S\setminus C_{S}$ and so belongs to $L^1_{loc}(S; \sigma_{^{_\mathit{R}}}^{n-1})$. This shows that \it we can pass the time-derivative through the integral sign. \rm
At this point the 1st variation formula follows from the calculation of the Lie derivative of $\per$ with respect to the initial velocity $X$ of the flow $\vartheta_t$.
\begin{oss}Let $M$ be a smooth manifold, let $\omega\in\Om^k(M)$ be a differential $k$-form on $M$ and let $X\in\XX(\TT M)$ be a differentiable vector field on $M$, with associated flow $\phi_t:M\longrightarrow M$. We recall that the Lie derivative of $\omega$ with respect to $X$ is defined by $\Lie_X\omega:=\frac{d}{dt}\phi_t^\ast\omega\big|_{t=0},$ where $\phi_t^\ast\omega$ denotes the pull-back of $\omega$ by $\phi_t$. In other words, the Lie derivative of $\omega$ under the flow generated by $X$ can be seen as the \textquotedblleft infinitesimal 1st variation\textquotedblright\ of $\omega$ with respect to $X$. Then, Cartan's identity says that \[\Lie_X\omega= (X\LL d\omega) +d(X\LL\omega).\]This formula is a very useful tool in proving variational formulas. For the case of Riemannian volume forms, we refer the reader to Spivak's book \cite{Spiv}; see Ch. 9, pp. 411-426 and 513-535.
\end{oss}
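As a consistency check of Cartan's identity, one may recover the classical first variation of the Riemannian volume: since $\Vol$ is a top-dimensional form, $d\Vol=0$, and hence
$$\Lie_X\Vol=(X\LL d\Vol)+d(X\LL\Vol)=d(X\LL\Vol)=\div X\,\Vol,$$
which is the well-known formula $\frac{d}{dt}\big(\phi_t^\ast\Vol\big)\big|_{t=0}=\div X\,\Vol$.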
The Lie derivative of the differential $(n-1)$-form $\per$ with respect to $X$ can be calculated elementarily as follows.
We have$$ X\LL d \per=X\LL d(\nn\LL\Vol)= X\LL\left(\div\,\nn \Vol\right)=\langle X,\nu\rangle\,\div\,\nn\,\sigma_{^{_\mathit{R}}}^{n-1}.$$Note that $\div\,\nn =\div_{^{_\HH}}\nn=-\MS$. In fact, one has $$\div\,\nn =\sum_{i=1}^n\langle\nabla_{X_i}\nn,
{X_i}\rangle=\sum_{i=1}^\DH X_i({\nn}_i)=\div_{^{_\HH}}\nn=-\MS.$$
Now the second term in Cartan's identity can be computed using the following:\begin{lemma}\label{fondam}If $X\in\XX^1(\TG)$, then $(X\LL\per)|_S=\left( \left(X\ot|\PH\nu|-\langle X,\nu\rangle\nn\ot\right)\LL\sigma_{^{_\mathit{R}}}^{n-1}\right)\big|_S$ and at each non-characteristic point of $S$, we have $$d(X\LL\per)|_S=\div_{^{_{\TS}}} \left( X\ot|\PH\nu|-\langle X,\nu\rangle\nn\ot \right) \,\sigma_{^{_\mathit{R}}}^{n-1}.$$
\end{lemma}
\begin{proof}We have
\begin{eqnarray*}d(X\LL\per)|_S &=& d(X\LL\nn\LL\Vol)|_S\\&=&d\left( \left( X\ot + X\op\right)\LL \left( \nn\ot + \nn\op\right)\LL\Vol\right)\big|_S\\&=&d\left( X\ot \LL \nn\op \LL\Vol\right)\big|_S-
d\left( \nn\ot\LL X\op \LL\Vol\right)\big|_S\\&=&d \left(X\ot\LL\per\right)\big|_S-d \left(\nn\ot\LL\langle X,\nu\rangle\sigma_{^{_\mathit{R}}}^{n-1}\right)\big|_S\\&=&\div_{^{_{\TS}}} \left( X\ot|\PH\nu|-\langle X,\nu\rangle\nn\ot \right) \,\sigma_{^{_\mathit{R}}}^{n-1}.\end{eqnarray*}
\end{proof}
\begin{oss}\label{mistake} The previous calculation corrects a mistake in \cite{Monteb}, where the normal component of the vector field $X$
was omitted and this caused the loss of some divergence-type terms in some of the variational formulas proved there.
\end{oss}
Thus, we can conclude that
\begin{equation}\label{9}
\Lie_X\per=\left(-\MS\langle X,\nu\rangle +\div_{^{_{\TS}}} \left( X\ot|\PH\nu|-\langle X,\nu\rangle\nn\ot \right)\right)\,\sigma_{^{_\mathit{R}}}^{n-1},
\end{equation}at each non-characteristic point of $S$.
Furthermore, if $\MS\in L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$, we can integrate this formula over all of $S$.
Indeed, in this case, all terms in the formula above turn out to be in $L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$; see also \cite{MonteStab}.
\begin{teo}[1st variation of $\per$] \label{1vg}Let $S\subset\mathbb{G}$ be a compact
hypersurface of class ${\cont}^2$ with (or without) boundary $\partial S$ and let
$\vartheta: ]-\epsilon,\epsilon[\times S
\rightarrow \mathbb{G}$ be a ${\cont}^2$-smooth variation of $S$. Let $X=\frac{d\,\vartheta_t}{dt}\big|_{t=0}$ be the variation vector field and denote by $X\op$ and $X\ot$ the normal and tangential components of $X$ along $S$, respectively. If $\MS\in L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$, then \begin{eqnarray}\label{fva}I_S(X,\per)=
\int_{S}\left(-\MS\langle X\op,\nu\rangle +\div_{^{_{\TS}}}\left( X\ot|\PH\nu|-\langle X\op,\nu\rangle\nn\ot \right)\right)\,\sigma_{^{_\mathit{R}}}^{n-1}\\\label{fva2formula}=\int_{S}-\MS\frac{\langle X\op,\nu\rangle}{|\PH\nu|}\,\per+ \int_{\partial S}\left\langle \left( X\ot-\frac{\langle X\op,\nu\rangle}{|\PH\nu|}\nn\ot \right),\frac{\eta }{|\P_{^{_{\HS}}}\eta|}\right\rangle \,\underbrace{|\PH\nu||\P_{^{_{\HS}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2}}_{=\nis}.
\end{eqnarray}
\end{teo}
Note that the second equality follows by applying the following generalized Stokes' formula to the differential $(n-2)$-form $\alpha:=(X\LL \per)|_S\in\Om^{n-2}(S)$.
\begin{Prop}\label{ST} Let $M$ be an oriented $k$-dimensional manifold of class $\cont^2$ with boundary
$\partial M$. Then $\int_M d\alpha=\int_{\partial M}\alpha$ for every compactly supported $(k-1)$-form $\alpha$ such that $\alpha\in L^\infty(M)$, $d\alpha\in L^1(M)$ (or $d\alpha\in L^\infty(M)$) and $\imath_M^\ast\alpha\in L^\infty(\partial M)$, where $\imath_M:\partial M\longrightarrow\overline{M}$ is the natural inclusion.
\end{Prop}
\begin{oss}
The previous result can be deduced by applying a standard procedure\footnote{See, for instance, Federer's book \cite{FE}, paragraph 3.2.46, p. 280; see also \cite{Pfeffer}, Remark 5.3.2, p. 197.} from a divergence-type theorem proved by Anzellotti; see, more precisely, Theorem 1.9 in \cite{Anze}. More recent and more general results can be found in the paper by Chen, Torres and Ziemer \cite{variZ}. See also \cite{Taylor}, formula (G.38), Appendix G.
\end{oss}
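\begin{oss}As an immediate consequence of Theorem \ref{1vg}, if $S$ is compact without boundary and the variation is normal, say $X=\varphi\,\nu$ for some $\varphi\in\cont^1(S)$, then the boundary integral disappears and \eqref{fva2formula} reduces to
$$I_S(X,\per)=-\int_S \MS\,\frac{\varphi}{|\PH\nu|}\,\per.$$
In particular, $S$ is stationary for the $\HH$-perimeter under every such variation if, and only if, $\MS=0$, that is, if $S$ is $\HH$-minimal.
\end{oss}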
\subsection{Blow-up of the horizontal perimeter
$\per$ up to $C_S$}\label{blow-up}Let $S\subset\mathbb{G}$ be a smooth
hypersurface. In this section we shall study the following density-limit:
\begin{equation}\label{limit} \lim_{r\rightarrow
0^+}\frac{\per (S\cap
B_{\varrho}(x,r))}{r^{Q-1}},\end{equation}where $B_{\varrho}(x,r)$
is the $\varrho$-ball of center $x\in{\rm Int}\,S$ and radius $r$.
It is worth observing that the point
$x$ is {\it not necessarily non-characteristic}. For
a similar analysis, we refer the reader to \cite{Mag, Mag3,
Mag4} and to \cite{Mag8}, for the characteristic
case in the setting of $2$-step Carnot groups; see also
\cite{balogh, BTW}, \cite{FSSC3, FSSC5}.
We stress that the second part of the following Theorem \ref{BUP} (which, to the best of our knowledge, is
new) will be used only in Section \ref{perdindirindina} and Section \ref{asintper} in order to prove some monotonicity estimates for the $\HH$-perimeter of the intersection $S_t$ of a smooth hypersurface $S$ with a homogeneous $\varrho$-ball $B_\varrho(x, t)$ centered at an interior characteristic point $x\in S\cap C_S$.
\begin{teo}\label{BUP}Let $\mathbb{G}$ be a $k$-step Carnot group.
\begin{itemize}
\item [{\rm Case (i)}]\,\,Let $S$ be a
hypersurface of class $\cont^1$ and let $x\in {\rm Int}(S\setminus C_S)$;
then \begin{eqnarray}\label{BUP1}\per(S\cap B_\varrho(x,r))\sim
\kappa_\varrho(\nn(x))\, r^{Q-1}\quad\mbox{for}\quad r\rightarrow
0^+,\end{eqnarray}where the density-function $\kappa_\varrho(\nn(x))$ is
called {\rm metric factor}. It turns out that
$$\kappa_\varrho(\nn(x))=\per\left(\mathcal{I}(\nn(x))\cap
B_\varrho(x,1)\right),$$where $\mathcal{I}(\nn(x))$
denotes the vertical hyperplane\footnote{\label{piedipag}Note that $\mathcal{I}(\nn(x))$
corresponds to an ideal of the Lie algebra $\mathfrak{g}$. We also remark
that the $\HH$-perimeter on a vertical hyperplane equals the
Euclidean-Hausdorff measure $\Ar$ on the hyperplane.} through $x$ and orthogonal
to $\nn(x)$.
\item [{\rm Case (ii)}]\,\, Let $x\in {\rm Int}(S\cap C_S)$ and
let $\alpha\in I_{^{_\VV}}$ be such that, locally around $x$, $S$ can
be represented as an
$X_\alpha$-graph of class $\cont^i$, where $i={\rm ord}(\alpha)\in\{2,...,k\}$. In this case, we have
$$S\cap
B_\varrho(x,r)\subset\exp\left\{\left(\zeta_1,...,\zeta_{\alpha-1},
\psi(\zeta),\zeta_{\alpha+1},...,\zeta_n \right)\, : \,
\zeta:=(\zeta_1,...,\zeta_{\alpha-1},
0,\zeta_{\alpha+1},...,\zeta_n )\in \ee_\alpha^\perp\right\},$$
for some function $\psi:\ee_\alpha^{\perp}\cong\R^{n-1}\rightarrow\R$ of class $\cont^i$. Without loss of generality,
we may assume that $x=0\in\mathbb{G}$ and $\psi(0)=0$. If
\begin{equation}\label{0dercond}\frac{\partial^{\scriptsize(l)}
\psi}{\partial\zeta_{j_1}...\partial\zeta_{j_l}}(0)=0\quad\mbox{whenever}\quad{\rm
ord}(j_1)+...+{\rm ord}(j_l)< i,
\end{equation}then
\begin{eqnarray}\label{BUPcarcase}\per(S\cap B_\varrho(x,r))\sim
\kappa_\varrho(C_S(x))\, r^{Q-1}\quad\mbox{as}\quad r\rightarrow
0^+,\end{eqnarray}where the function $\kappa_\varrho(C_S(x))$ can
be computed by integrating the measure $\per$ along a polynomial
hypersurface, which is the graph of the Taylor's expansion up to
$i={\rm ord}(\alpha)$ of $\psi$ at
$\zeta=0\in\ee_\alpha^\perp$. More precisely, one has$$\kappa_\varrho(C_S(x))=\per(S_\infty\cap
B_\varrho(x,1)),$$where
$S_\infty=
\{(\zeta_1,...,\zeta_{\alpha-1},\widetilde{\psi}
(\zeta),\zeta_{\alpha+1},...,\zeta_n)\, :\,
\zeta\in\ee_\alpha^\perp\}$ and\begin{eqnarray}\nonumber\widetilde{\psi}(\zeta)=\sum_{\stackrel{{j_1}}{\scriptsize{\rm
ord}(j_1)=i}}\frac{\partial\psi}{\partial\zeta_{j_1}}(0)\,\zeta_{j_1}+\ldots+
\sum_{\stackrel{{j_1,...,j_l}}{\scriptsize{{\rm ord}(j_1)+...+{\rm
ord}(j_l)=i}}}\frac{\partial^{\scriptsize(l)}\psi}{\partial\zeta_{j_1}...
\partial\zeta_{j_l}}(0)\,\zeta_{j_1}...\zeta_{j_l}.\end{eqnarray}
Finally, if \eqref{0dercond} does not hold, then $S_\infty=\emptyset$ and $\kappa_\varrho(C_S(x))=0$.
\end{itemize}
\end{teo}
\begin{oss}[Order of $x\in S$]\label{fraccicarla}The rescaled hypersurfaces $\delta_{\frac{1}{r}}S$
locally converge to a limit-set $S_\infty$, that is,
$\delta_{\frac{1}{r}}S\longrightarrow S_\infty$ as $r\rightarrow0^+$, where the
convergence is understood with respect to the Hausdorff convergence
of sets; see \cite{Mag3, Mag8}. At every $x\in {\rm Int}(S\setminus
C_S)$ the limit-set $S_\infty$ is the vertical
hyperplane $\mathcal{I}(\nn(x))$. Otherwise, $S_\infty$ is a
polynomial hypersurface.
Assume that $S$ is smooth enough near its characteristic set $C_S$,
say of class $\cont^k$. Then, there exists a minimum integer $i={\rm
ord}(\alpha)$ such that \eqref{0dercond} holds. The number
${\rm ord}(x)=Q-i$ is called the \rm order \it of the
characteristic point $x\in C_S$. \end{oss}
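\begin{es}As a simple instance of Case {\rm (ii)}, consider in $\mathbb{H}^1$ (exponential coordinates $(x,y,t)$, horizontal frame $X_1=\partial_x-\frac{y}{2}\partial_t$, $X_2=\partial_y+\frac{x}{2}\partial_t$, homogeneous dimension $Q=4$) the hyperplane $S=\{t=0\}$ at $x=0$, which is a characteristic point since $X_1(0)=\partial_x$ and $X_2(0)=\partial_y$ are tangent to $S$ there. Here $S$ is a graph with respect to the vertical direction $\partial_t$ with $\psi\equiv 0$ and $i={\rm ord}(\alpha)=2$, so condition \eqref{0dercond} is trivially satisfied; hence $S_\infty=S$ and
$$\kappa_\varrho(C_S(0))=\per\big(\{t=0\}\cap B_\varrho(0,1)\big)>0,$$
so that $\per(S\cap B_\varrho(0,r))\sim \kappa_\varrho(C_S(0))\,r^{Q-1}=\kappa_\varrho(C_S(0))\,r^{3}$.
\end{es}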
\begin{proof}[Proof of Theorem \ref{BUP}]
We preliminarily note that the limit \eqref{limit} can be
computed, without loss of generality, at $0\in\mathbb{G}$,
just by left-translating $S$. We have
$${\per \left(S\cap B_{\varrho}(x,r)\right)}={\per \left(x^{-1}\bullet\left(S\cap
B_{\varrho}(x,r)\right)\right)}={\per \left(\left(x^{-1}\bullet
S\right)\cap B_{\varrho}(0,r)\right)}$$for any $x\in{\rm Int}\,S$,
where the first equality follows from the left-invariance of $\per$
and the second one from the left-invariance of the homogeneous distance $\varrho$.
\begin{no}We shall set:\begin{itemize}
\item[\rm (i)] $S_r(x):=S\cap B_{\varrho}(x,r)$;\item[\rm (ii)]
$\widetilde{S}:=x^{-1}\bullet S$;
\item[\rm (iii)] $\widetilde{S}_r:=x^{-1}\bullet
S_r(x)=\widetilde{S}\cap B_{\varrho}(0,r).$\end{itemize}\end{no}By
using the homogeneity of $\varrho$ and the invariance of $\per$
under positive Carnot dilations\footnote{This means that
$\delta_t^\ast\per=t^{Q-1}\per$, $t\in \R_+$; see Section
\ref{prelcar}.}, it follows that
$$\per(\widetilde{S}_r)=r^{Q-1}\per\left(\delta_\frac{1}{r}\widetilde{S}\cap B_\varrho(0,1)\right)$$for all $r> 0$.
Therefore $\frac{\per(
\widetilde{S}_r)}{r^{Q-1}}=\per\left(\delta_\frac{1}{r}\widetilde{S}\cap
B_\varrho(0,1)\right)$, and it remains to compute
\begin{equation}\label{dens1}
\lim_{r\rightarrow 0^+}\per\left(\delta_{\frac{1}{r}}\widetilde{S}\cap
B_\varrho(0, 1)\right).\end{equation}
We begin by studying the non-characteristic case; see also
\cite{Mag8, Mag4}.\\
\\\noindent{\rm Case (i).\,\,\rm Blow-up for non-characteristic points.}\,\,{\it Let $S\subset\mathbb{G}$
be a hypersurface of class $\cont^1$ and let $x\in {\rm Int}\,S$ be
non-characteristic}. Locally around $x$, the hypersurface $S$ is
oriented by unit $\HH$-normal $\nn(x)$,
that is, $\nn(x)$ is transversal\footnote{We say that $X$ is transversal to $S$ at $x$, in symbols $X\pitchfork \TT_xS$, if
$\langle X, \nu\rangle\neq 0$ at $x$, where $\nu$ is a
unit normal vector along $S$.} to $S$ at $x$. Thus, at least locally
around $x$, we may think of $S$ as a
$\cont^1$-graph with respect to the horizontal direction
$\nn(x)$. Moreover, we can find an orthonormal change of
coordinates on
$\Rn\cong\TT_0\mathbb{G}$ such that
$$\ee_1=X_1(0)=(L_{x^{-1}})_\ast\nn(x).$$With no loss of generality, by the Implicit Function Theorem we can write
$\widetilde{S}_r=x^{-1}\bullet S_r(x)$, for some (small enough)
$r>0$, as the exponential image in $\mathbb{G}$ of a
$\cont^1$-graph\footnote{Actually, since the argument is local,
$\psi$ can be defined just on a suitable neighborhood of $0\in
\ee_1^\perp\cong \R^{n-1}$.}. So let
$$\Psi=\{(\psi(\xi),\xi)\,:\, \xi\in\R^{n-1}\}\subset\mathfrak{g},$$ where
$\psi:\ee_1^{\perp}\cong\R^{n-1}\longrightarrow\R$ is a
$\cont^1$-function satisfying:
\begin{itemize}\item[${\rm(i)}$]$\psi(0)=0$;
\item[${\rm(ii)}$]${\partial\psi}/{\partial\xi_j} (0)=0$ for every
$j=2,...,\DH\,(=\dim\HH)$,
\end{itemize}for $\xi\in\ee_1^\perp\cong\R^{n-1}$. Therefore $\widetilde{S}_r=\exp\Psi\cap
B_\varrho(0,r),$ for all (small enough) $r>0$. This remark can be
used to compute the limit \eqref{dens1}. So let us fix a positive
$r_0$ satisfying the previous assumptions and let $0\leq r\leq
r_0$. Then
\begin{eqnarray}\label{dens2}\delta_\frac{1}{r}\widetilde{S}\cap
B_\varrho(0, 1)=\exp\left(\widehat{\delta}_\frac{1}{r}\Psi\right)\cap
B_\varrho(0,1),\end{eqnarray}where
$\{\widehat{\delta}_{t}\}_{t> 0}$ are the induced dilations on
$\mathfrak{g}$, that is, $\delta_t=\exp\circ\widehat{\delta}_{t}$ for every
$t> 0$. Henceforth, we will consider the restriction of
$\widehat{\delta}_{t}$ to the hyperplane
$\ee_1^\perp\cong\R^{n-1}$. So, with a slight
abuse of notation, instead of
$(\widehat{\delta}_{t})\big|_{\ee_1^{\perp}}(\xi)$ we shall
write $\widehat{\delta}_{t}\xi$. Moreover, we shall assume
$\R^{n-1}=\R^{\DH-1}\oplus\R^{n-\DH}$. Note that the induced
dilations $\{\widehat{\delta}_{t}\}_{t> 0}$ make
$\ee_1^\perp\cong\R^{n-1}$ a {\it graded vector space}, whose
grading respects that of $\mathfrak{g}$. We have
$$\widehat{\delta}_\frac{1}{r}\Psi=\widehat{\delta}_\frac{1}{r}\left\{(\psi(\xi),\xi)\, :\, \xi\in\R^{n-1}\right\}
=\left\{\left(\frac{\psi(\xi)}{r},\widehat{\delta}_\frac{1}{r}\xi\right)\,
:\, \xi\in\R^{n-1}\right\}.$$By using the change of variables
$\zeta:=\widehat{\delta}_{{1}/{r}}\xi$, we get that
$$\widehat{\delta}_\frac{1}{r}\Psi=
\left\{\left(\frac{\psi\left(\widehat{\delta}_{r}\zeta\right)}{r},\zeta\right)\,:\,
\zeta\in\R^{n-1}\right\}.$$By hypothesis $\psi\in \cont^1(U_0)$,
where $U_0$ is a suitable open neighborhood of $0\in\R^{n-1}$.
Using a Taylor's expansion of $\psi$ at $0\in\R^{n-1}$ and the
assumptions
(i) and (ii), yields
\begin{eqnarray*}\psi(\xi)=\psi(0) + \langle\mathrm{grad}_{\R^{n-1}}\psi(0), \xi \rangle_{\R^{n-1}} + {\rm o}(\|\xi\|_{\R^{n-1}})
=\langle\mathrm{grad}_{\R^{n-\DH}}\psi(0), \xi_{\R^{n-\DH}} \rangle_{\R^{n-\DH}}
+{\rm o}(\|\xi\|_{\R^{n-1}}),\end{eqnarray*}as $\xi\rightarrow
0\in \R^{n-1}$. Note that $\widehat{\delta}_{r}\zeta\longrightarrow
0\in\R^{n-1}$ for $r\rightarrow 0^+$. By the
previous change of variables, we get that
\[\psi\left(\widehat{\delta}_{r}\zeta\right)
= \left\langle\mathrm{grad}_{\R^{n-\DH}}\psi(0),
\widehat{\delta}_{r}\left(\zeta_{\R^{n-\DH}}\right)
\right\rangle_{\R^{n-\DH}} + {\rm o}\left(r \right)\]for
$r\rightarrow 0^+$. Since $\left\langle\mathrm{grad}_{\R^{n-\DH}}\psi(0),
\widehat{\delta}_{r}\left(\zeta_{\R^{n-\DH}}\right)
\right\rangle_{\R^{n-\DH}}={\rm o}(r)$ for $r\rightarrow 0^+$, we
easily get that the limit-set (obtained by blowing-up
$\widetilde{S}$ at the non-characteristic point $0$) is given by
\begin{equation}\label{dens3}\Psi_\infty=\lim_{r\rightarrow
0^+}\widehat{\delta}_\frac{1}{r}\Psi=\exp(\ee_1^\perp)=\mathcal{I}(X_1(0)),\end{equation}
where $\mathcal{I}(X_1(0))$ denotes the vertical hyperplane
through the identity $0\in\mathbb{G}$ and orthogonal to $X_1(0)$. We
have shown that \eqref{dens1} can be computed by means of
\eqref{dens2} and \eqref{dens3}. More precisely
\begin{equation*}\lim_{r\rightarrow 0^+}\per\left(\delta_\frac{1}{r}\widetilde{S}\cap
B_\varrho(0, 1)\right)=\per\left(\mathcal{I}(X_1(0))\cap
B_\varrho(0,1)\right).\end{equation*}Recalling the previous change of
variables, it follows that $S_\infty=\mathcal{I}(\nn(x))$ and that
$$\kappa_\varrho(\nn(x))=\lim_{r\rightarrow 0^+}\frac{\per (S\cap
B_{\varrho}(x, r))}{r^{Q-1}}=\per\left(\mathcal{I}(\nn(x))\cap
B_\varrho(x,1)\right),$$which was to be proven.\\
\noindent{\rm Case (ii).\,\rm Blow-up at the characteristic set.}
{\it We are now assuming that $S\subset\mathbb{G}$ is a
hypersurface of class $\cont^i$ for some $i\geq 2$ and that $x\in {\rm Int}(S\cap C_S)$}.
Near $x$ the hypersurface $S$ is oriented by some vertical
vector. Hence, at least locally around $x$, we may think of $S$ as the exponential image of a $\cont^i$-graph
with respect to
some vertical direction $X_\alpha$ transversal to $S$ at $x$. Note that $X_\alpha$ is a vertical
left-invariant vector field of the fixed left-invariant frame
$\underline{X}=\{X_1,...,X_n\}$ and $\alpha\in
I_{^{_\VV}}=\{h+1,..., n\}$ denotes a ``vertical'' index; see Definition
\ref{1notazione}. {\it Furthermore, we are assuming that}
$\mathrm{ord}(\alpha)= i$,
for some $i=2,...,k$. As in the non-characteristic case,
for the sake of simplicity, we
left-translate $S$ in such a way that $x$
coincides with $0\in\mathbb{G}$. To this end, it is sufficient to
replace $S$ by $\widetilde{S}=x^{-1}\bullet S$. At the level of
the Lie algebra $\mathfrak{g}$, let us consider the hyperplane
$\ee_\alpha^\perp$ through the origin $0\in\mathfrak{g}\cong\R^{n}$ and
orthogonal to $\ee_\alpha=X_{\alpha}(0)$. Note that
$\ee_\alpha^\perp$ is the natural ``parameter space'' of a $\ee_\alpha$-graph. By the classical Implicit Function
Theorem, we may write
$\widetilde{S}_r=x^{-1}\bullet S_r(x)$ as the exponential image in
$\mathbb{G}$ of a $\cont^i$-graph. We have
$$\Psi=\left\{\left(\xi_1,...,\xi_{\alpha-1}\underbrace{,
\psi(\xi),}_{\scriptsize{\alpha\mbox{-th place}}}\xi_{\alpha+1},...,\xi_n \right)\, :\,
\xi:=(\xi_1,...,\xi_{\alpha-1}, 0,\xi_{\alpha+1},...,\xi_n )\in
\ee_\alpha^\perp\cong \R^{n-1}\right\}$$ where
$\psi:\ee_\alpha^{\perp}\cong\R^{n-1}\longrightarrow\R$ is a
$\cont^i$-smooth function satisfying:
\begin{itemize}\item[${\rm(j)}$]$\psi(0)=0$;
\item[${\rm(jj)}$]${\partial\psi}/{\partial\xi_j} (0)=0$ for every
$j=1,...,\DH\,(=\dim\HH)$.
\end{itemize}Thus we get that $\widetilde{S}_r=\exp\Psi\cap
B_\varrho(0,r),$ for every (small enough) $r>0$. Hence, we can use the above remarks to compute \eqref{dens1} and, as in the non-characteristic case, we use
\eqref{dens2}. We have
\begin{eqnarray*}\widehat{\delta}_{\frac{1}{r}}\Psi&=&
\widehat{\delta}_{\frac{1}{r}}\left\{\left(\xi_1,...,\xi_{\alpha-1},
\psi(\xi),\xi_{\alpha+1},...,\xi_n \right)\, :\, \xi\in
\ee_\alpha^\perp\right\}\\
&=&\left\{\left(\frac{{\xi_1}}{r},...,\frac{\xi_{\alpha-1}}{r^{{\rm
ord}(\alpha-1)}},
\frac{\psi(\xi)}{r^i},\frac{\xi_{\alpha+1}}{r^{{\rm
ord}(\alpha+1)}},...,\frac{\xi_n}{r^k} \right)\, :\, \xi\in
\ee_\alpha^\perp\right\}.\end{eqnarray*}Setting
$$\zeta:=\widehat{\delta}_{\frac{1}{r}}
\xi=\left(\frac{{\xi_1}}{r},...,\frac{\xi_{\alpha-1}}{r^{{\rm
ord}(\alpha-1)}}, 0,\frac{\xi_{\alpha+1}}{r^{{\rm
ord}(\alpha+1)}},...,\frac{\xi_n}{r^k} \right),$$where
$\zeta=(\zeta_1,...,\zeta_{\alpha-1},0,\zeta_{\alpha+1},...,\zeta_n)\in\ee_\alpha^\perp$,
yields
$$\widehat{\delta}_{\frac{1}{r}}\Psi=
\left\{\left(\zeta_1,...,\zeta_{\alpha-1},
\frac{\psi\left(\widehat{\delta}_r\zeta\right)}{r^i},\zeta_{\alpha+1},...,\zeta_n\right)\,:\,
\zeta\in\ee_\alpha^\perp\right\}.$$By hypothesis $\psi\in
\cont^i(U_0)$, where $U_0$ is an open neighborhood of
$0\in\ee_\alpha^{\perp}\cong\R^{n-1}$. Furthermore, one has
$\widehat{\delta}_{r}\zeta\longrightarrow 0$ as $r\rightarrow 0^+$. So
we have to study the following limit
\begin{equation}\label{lim}\widetilde{\psi}(\zeta):=\lim_{r\rightarrow
0^+}\frac{\psi\left(\widehat{\delta}_r\zeta\right)}{r^i},\end{equation}
whenever it exists. The first remark is that, when this limit equals
$+\infty$, we have
$$\lim_{r\rightarrow
0^+}\frac{\per( \widetilde{S}_r)}{r^{Q-1}}=\lim_{r\rightarrow
0^+}\per\left(\exp\left(\widehat{\delta}_{\frac{1}{r}}\Psi\right)\cap
B_\varrho(0,1)\right)=0,$$ because $\exp\left(\widehat{\delta}_{\frac{1}{r}}\Psi\right)\cap
B_\varrho(0,1)\longrightarrow\emptyset$
as $r\rightarrow 0^+$.
At this point, a Taylor expansion of $\psi$ together with
$\rm(j)$ and $\rm(jj)$ yields
\begin{eqnarray*}\psi\left(\widehat{\delta}_{r}\zeta\right)&=
&\psi(0) +\sum_{j_1}r^{{\rm
ord}(j_1)}\frac{\partial\psi}{\partial\zeta_{j_1}}(0)\,\zeta_{j_1}
+\frac{1}{2!}\sum_{j_1, j_2}r^{{\rm ord}(j_1)+{\rm
ord}(j_2)}\frac{\partial^{(2)}\psi}{\partial\zeta_{j_1}
\partial\zeta_{j_2}}(0)\,\zeta_{j_1}\zeta_{j_2}\\&&+...+
\frac{1}{i!}\sum_{j_1,..., j_i}r^{{\rm ord}(j_1)+...+{\rm
ord}(j_i)}\frac{\partial^{(i)}\psi}{\partial\zeta_{j_1}...
\partial\zeta_{j_i}}(0)\,\zeta_{j_1}\cdot...\cdot\zeta_{j_i}+{\rm
o}\left(r^i\right)\\&=& \sum_{j_1}r^{{\rm
ord}(j_1)}\frac{\partial\psi}{\partial\zeta_{j_1}}(0)\,\zeta_{j_1}
+\frac{1}{2!}\sum_{j_1, j_2}r^{{\rm ord}(j_1)+{\rm
ord}(j_2)}\frac{\partial^{(2)}\psi}{\partial\zeta_{j_1}\partial\zeta_{j_2}}(0)\,\zeta_{j_1}\zeta_{j_2}\\&&+...+
\frac{1}{i!}\sum_{j_1,..., j_i}r^{{\rm ord}(j_1)+...+{\rm
ord}(j_i)}\frac{\partial^{(i)}\psi}{\partial\zeta_{j_1}...
\partial\zeta_{j_i}}(0)\,\zeta_{j_1}\cdot...\cdot\zeta_{j_i}+{\rm
o}\left(r^i\right)\end{eqnarray*}as $r\rightarrow 0^+$. Therefore
\begin{eqnarray*}\frac{\psi\left(\widehat{\delta}_{r}\zeta\right)}{r^i}&=
& \sum_{j_1}r^{{\rm
ord}(j_1)-i}\frac{\partial\psi}{\partial\zeta_{j_1}}(0)\,\zeta_{j_1}
+\frac{1}{2!}\sum_{j_1, j_2}r^{{\rm ord}(j_1)+{\rm
ord}(j_2)-i}\frac{\partial^{(2)}\psi}{\partial\zeta_{j_1}\partial\zeta_{j_2}}(0)\,
\zeta_{j_1}\zeta_{j_2}\\&&+...+ \frac{1}{i!}\sum_{j_1,..., j_i}r^{{\rm
ord}(j_1)+...+{\rm
ord}(j_i)-i}\frac{\partial^{(i)}\psi}{\partial\zeta_{j_1}...
\partial\zeta_{j_i}}(0)\,\zeta_{j_1}\cdot...\cdot\zeta_{j_i}+{\rm
o}\left(1\right)\end{eqnarray*}as $r\rightarrow 0^+$. By applying the
hypothesis
$\frac{\partial^{(l)}
\psi}{\partial\zeta_{j_1}...\partial\zeta_{j_l}}(0)=0$ whenever ${\rm
ord}(j_1)+...+{\rm ord}(j_l)<i$, it follows that
\eqref{lim} exists. Setting
\begin{equation*}\Psi_\infty=\lim_{r\rightarrow
0^+}\widehat{\delta}_{\frac{1}{r}}\Psi=
\left\{\left(\zeta_1,...,\zeta_{\alpha-1},\widetilde{\psi}
(\zeta),\zeta_{\alpha+1},...,\zeta_n\right)\,:\,
\zeta\in\ee_\alpha^\perp\right\},\end{equation*}where
$\widetilde{\psi}$ is the polynomial function of homogeneous degree
$i={\rm ord}(\alpha)$ given by
\begin{eqnarray*}\widetilde{\psi}(\zeta)=\sum_{\stackrel{{j_1}}{\scriptsize{\rm
ord}(j_1)=i}}\frac{\partial\psi}{\partial\zeta_{j_1}}(0)\,\zeta_{j_1}+\ldots
+ \frac{1}{l!}\sum_{\stackrel{{j_1,...,j_l}}{\scriptsize{{\rm
ord}(j_1)+...+{\rm
ord}(j_l)=i}}}\frac{\partial^{(l)}\psi}{\partial\zeta_{j_1}...
\partial\zeta_{j_l}}(0)\,\zeta_{j_1}\cdot...\cdot\zeta_{j_l},\end{eqnarray*}
yields $S_\infty=x\bullet \Psi_\infty$, and the claim
follows.
\end{proof}
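\begin{es}As a simple illustration of the characteristic blow-up just performed (with our own choice of the function $\psi$), consider the first Heisenberg group $\mathbb H^1$, so that $n=3$, $k=2$, $Q=4$, and a surface $S$ which, near the characteristic point $0$, is the exponential image of the graph $t=\psi(\zeta_1,\zeta_2)$, where $\psi$ is any $\cont^2$ function with $\psi(0)=0$ and ${\rm d}\psi(0)=0$. Here $\alpha=3$, $i={\rm ord}(3)=2$ and, since ${\rm ord}(1)={\rm ord}(2)=1$, the anisotropic dilations give
\[
\widetilde{\psi}(\zeta_1,\zeta_2)=\lim_{r\rightarrow 0^+}\frac{\psi(r\zeta_1, r\zeta_2)}{r^2}=\frac{1}{2!}\sum_{j_1, j_2=1}^{2}\frac{\partial^{(2)}\psi}{\partial\zeta_{j_1}\partial\zeta_{j_2}}(0)\,\zeta_{j_1}\zeta_{j_2},
\]
that is, the blow-up limit is the exponential image of the graph of the second-order Taylor polynomial of $\psi$ at $0$.
\end{es}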
\begin{oss}\label{boundonmetricfactor}The metric factor is not constant, in general. It
turns out to be constant, for instance, by assuming that $\varrho$
is symmetric on all layers; see \cite{Mag3}.
Nevertheless, it is {\rm uniformly bounded} by two positive constants $K_1$
and $K_2$. This can be seen by using the \textquotedblleft ball-box metric\textquotedblright \footnote{By definition one has ${\rm Box}(x,r)=x\bullet {\rm Box}(0,r)$ for every $x\in\mathbb{G}$, where
$${\rm Box}(0,r)=\left\{y=\exp\left(\sum_{i=1}^k y_{^{_{\HH_i}}}\right)\in\mathbb{G}\,:\,
\|y_{^{_{\HH_i}}}\|_\infty\leq r^i\right\}.$$We stress that $y_{^{_{\HH_i}}}=\sum_{j_i\in
I_{^{_{\HH_i}}}}y_{j_i}\ee_{j_i}$ and that $\|y_{^{_{\HH_i}}}\|_\infty$ denotes the sup-norm on
the $i$-th layer of $\mathfrak{g}$; see \cite{Gr1},
\cite{Montgomery}.} and a homogeneity argument. Let $S$ be as in Theorem \ref{BUP}, Case (i). Let $B_\varrho(x,1)$ be the unit $\varrho$-ball centered at $x\in {\rm Int} (S\setminus C_S)$ and let $r_1, r_2>0$ be such that
$r_1\leq 1\leq r_2$ and ${\rm Box}(x,r_1)\subseteq
B_\varrho(x,1)\subseteq{\rm Box}(x,r_2)$. Recall that
$$\kappa_\varrho(\nn(x))=\per(\mathcal{I}(\nn(x))\cap B_\varrho(x,1))=\Ar(\mathcal{I}(\nn(x))\cap B_\varrho(x,1)),$$
where $\mathcal{I}(\nn(x))$ denotes the vertical hyperplane
orthogonal to $\nn(x)$. By homogeneity, one has $\delta_{t}{\rm
Box}(0,1/2)={\rm Box}(0,t/2)$ for every $t> 0$ and by an elementary
computation\footnote{The unit box ${\rm Box}(x,1/2)$ is the
left-translated at $x$ of ${\rm Box}(0,1/2)$ and so, by
left-invariance of $\per$, the computation can be done at
$0\in\mathbb{G}$. Since ${\rm Box}(0,1/2)$ is the unit hypercube of
$\R^n\cong\mathfrak{g}$, it remains to estimate the
$\per$-measure of the intersection of ${\rm Box}(0,1/2)$ with a
generic vertical hyperplane through the origin
$0\in\R^n$. If $\mathcal{I}(X)$
is the vertical hyperplane through $0\in\Rn$ and
orthogonal to $X\in\HH$, we get that
$$1\leq\Ar({\rm Box}(0,1/2)\cap\mathcal{I}(X))\leq\sqrt{n-1},$$where
$\sqrt{n-1}$ is the diameter of any
face of the unit hypercube of $\Rn$. Therefore
\begin{eqnarray*}\big(\delta_{2r_1}{\rm
Box}(0,1/2)\cap\mathcal{I}(X)\big)\subseteq
\big(B_\varrho(0,1)\cap\mathcal{I}(X)\big)\subseteq
\big(\delta_{2r_2}{\rm Box}(0,1/2)\cap\mathcal{I}(X)\big)
\end{eqnarray*}and so\begin{eqnarray*}{(2r_1)}^{Q-1}&\leq&{(2r_1)}^{Q-1}\Ar({\rm Box}(0,1/2)\cap\mathcal{I}(X))
\leq
\Ar(B_\varrho(0,1)\cap\mathcal{I}(X))\\&=&\kappa_\varrho(X)\leq{(2r_2)}^{Q-1}\Ar({\rm
Box}(0,1/2)\cap\mathcal{I}(X))\leq\sqrt{n-1}{(2r_2)}^{Q-1}.\end{eqnarray*}}
we get that
$$(2r_1)^{Q-1}\leq \kappa_\varrho(\nn(x))\leq
\sqrt{n-1}\,(2r_2)^{Q-1}.$$Set
$K_1:=(2r_1)^{Q-1}$,
$K_2:=\sqrt{n-1}\,{(2r_2)}^{Q-1}$. The previous argument shows that one can always
choose two positive constants $K_1,\,K_2$, independent of $S$, such
that
\begin{eqnarray}\label{emfac}K_1\leq \kappa_\varrho(\nn(x))\leq
K_2\qquad \forall\,\,x\in {\rm Int}({S}\setminus C_{S}).
\end{eqnarray} \end{oss}
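For instance, in the first Heisenberg group $\mathbb H^1$ (where $n=3$ and $Q=4$), once radii $r_1\leq 1\leq r_2$ with ${\rm Box}(x,r_1)\subseteq B_\varrho(x,1)\subseteq{\rm Box}(x,r_2)$ have been fixed (their precise values depend on the chosen homogeneous norm $\varrho$), the previous estimate reads
\[
(2r_1)^{3}\leq \kappa_\varrho(\nn(x))\leq\sqrt{2}\,(2r_2)^{3}.
\]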
\section{Isoperimetric Inequality on hypersurfaces}\label{mike}
\subsection{Statement of the main result and further remarks}\label{mike0}
\begin{no} Set $${\bf r}(S):=\sup_{x\in {\rm Int}(S\setminus C_S)} r_0(x),$$ where $r_0(x)=2\left(\frac{\per({S})}{\kappa_\varrho(\nn(x))}\right)^{{1}/{(Q-1)}}$; see Lemma \ref{lem} and Notation \ref{klkl}.\end{no}
\begin{teo}[Isoperimetric-type Inequality]\label{ghaioio}Let
$S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ with
boundary $\partial S$ (piecewise) $\cont^1$ and assume that the horizontal mean curvature $\MS$ of $S$ is integrable, that is, $\MS\in L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$. Then there exists $C_{Isop}>0$, depending
only on $\mathbb{G}$ and on the homogeneous metric $\varrho$, such that \begin{eqnarray}\label{2gha}\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left(\int_S
|\MS|\,\per +\nis(\partial S)+ \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_{\partial
S }|\P_{^{_{\HH_i S}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2} \right).\end{eqnarray}In particular, if $\partial S=\emptyset$, it follows that \begin{eqnarray}\label{2ghabis}\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop} \int_S
|\MS|\,\per.\end{eqnarray}
\end{teo}
In general, we have $C_{Isop}=\max\{C_1, C_2\}$, where $C_1= {2^{Q}}/{K_1^{\frac{1}{Q-1}}}$ and $C_2=\max_{i=2,...,k} i\,{\bf c}_i\,h_i$. Here $K_1$ denotes a (universal) lower bound on the metric factor $\kappa_\varrho(\nn(x))$; see Remark \ref{boundonmetricfactor}. Furthermore, the constants ${\bf c}_i\,(i=2,...,k)$ have been introduced in Definition \ref{2iponhomnor1}. Note that if $\partial S=\emptyset$, we can take $C_{Isop}= C_1$. The next example can be helpful in understanding our result.
\begin{es}[Key example] \label{zazaz1}Let $\mathbb H^1$ be the first Heisenberg group. In particular, let $\{X, Y, T\}$ be the standard left invariant frame for the Lie algebra $\mathfrak h^1=\HH\oplus \mathrm{span}_\R T$ of $\mathbb H^1$. We recall that $X=\partial_x-\frac{y}{2}\partial_t$, $Y=\partial_y+\frac{x}{2}\partial_t$ and $T=\partial_t$, where $(x, y, t)$ are exponential coordinates of the generic point of $\mathbb H^1$. Let $\mathcal{I}(X(0))=\{(x, y, t)\in\mathbb H^1: x=0\}$. The plane $\mathcal{I}(X(0))$ is a \textquotedblleft vertical plane\textquotedblright\ passing through the identity $0\in\mathbb H^1$. More precisely, $\mathcal{I}(X(0))$ turns out to be a maximal ideal of the Lie algebra $\mathfrak h^1$. It is well known that the horizontal mean curvature of any vertical plane vanishes. Now let us consider a rectangle $R_{\texttt{h,v}}\subsetneq \mathcal{I}(X(0))$ with sides parallel to the directions $Y$ and $T$, respectively. In other words, we are assuming that $$R_{\texttt{h,v}}=\left\lbrace (x, y, t)\in\mathbb H^1: x=0,\, |y|\leq \texttt{h},\, |t|\leq \texttt{v} \right\rbrace.$$The $\HH$-perimeter of $R_{\texttt{h,v}}$ coincides with the Euclidean area and hence is obtained by multiplying the (horizontal) base and the (vertical) height, that is, $\sigma_{^{_\HH}}^2(R_{\texttt{h,v}})=\texttt{h}\cdot \texttt{v}$. It is not difficult to see that the only non-zero contributions to the homogeneous measure $\sigma_{^{_\HH}}^1$ of $\partial R_{\texttt{h,v}}$ come from the vertical sides. In fact $\sigma_{^{_\HH}}^1\left(\partial R_{\texttt{h,v}}\setminus\{x=0,\,|y|<\texttt{h}\}\right)=2\texttt{v}$. However, the \textquotedblleft horizontal sides\textquotedblright\,
have a non-zero Riemannian $1$-dimensional measure and, up to a normalization constant, their $1$-dimensional intrinsic Hausdorff measure is given by $\mathcal{H}_{\varrho}^1\left(\partial R_{\texttt{h,v}}\setminus\{x=0,\,|t|<\texttt{v}\}\right)=2\texttt{h}$. Hence, even if we fix the $\sigma_{^{_\HH}}^1$-measure of $\partial R_{\texttt{h,v}}$, we can indefinitely increase the $\HH$-perimeter of $ R_{\texttt{h,v}}$ by increasing the size of the horizontal sides.
\end{es}
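The failure described in the previous example can be made quantitative. Since $Q=4$ in $\mathbb H^1$, an isoperimetric inequality of the form $\left(\sigma_{^{_\HH}}^2(R_{\texttt{h,v}})\right)^{\frac{Q-2}{Q-1}}\leq C\, \sigma_{^{_\HH}}^1(\partial R_{\texttt{h,v}})$, with the boundary measured by $\sigma_{^{_\HH}}^1$ only, would force
\[
(\texttt{h}\cdot\texttt{v})^{2/3}\leq 2C\,\texttt{v},\qquad\mbox{that is,}\qquad \texttt{h}^{2/3}\leq 2C\,\texttt{v}^{1/3},
\]
which fails as soon as $\texttt{h}\rightarrow+\infty$ with $\texttt{v}$ fixed. This is why the extra boundary term appearing in Theorem \ref{ghaioio} cannot be dispensed with.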
Let
$S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ with
boundary $\partial S$. Notice that the previous example shows that in order to bound the $\HH$-perimeter $\per$ in terms of the $(Q-2)$-homogeneous measure $\nis$ \underline{only}, we need some extra assumptions on the characteristic set $C_{\partial S}$ of the boundary.
More precisely, we need -at least- to assume that $\sigma_{^{_\mathit{R}}}^{n-2}(C_{\partial S})=0$. We also stress that assuming enough regularity on $\partial S$ can be sufficient for the validity of the last condition (in fact, we have already seen that if $\partial S$ is of class $\cont^2$ and if $\mathrm{dim}\VV\geq 2$, then $\dim_{\rm Eu-Hau}(C_N)\leq n-3$; see Theorem
\ref{baloghteo} and Remark \ref{11baloghteo}. We also recall that the same assertion holds true for Heisenberg groups $\mathbb H^r$ with $r>1$).
\begin{war}\label{cpzzo} The present version of Theorem \ref{ghaioio} corrects some previous formulations posted on arXiv. We would like to comment briefly on this new version. First of all, observe that the Isoperimetric Inequality stated in Theorem \ref{ghaioio} will be proved by following the classical scheme already discussed in the Introduction. In particular, the starting point is the so-called \rm Monotonicity Inequality; \it see Theorem \ref{rmonin}. More precisely, let
$S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ with (piecewise) $\cont^1$
boundary $\partial S$. We shall show that for every $x\in {\rm Int}(S\setminus C_S)$ the
following ordinary differential inequality
holds\[-\frac{d}{dt}\frac{\per({S}_t)}{t^{Q-1}}
\leq \frac{\mathcal{A}(t)+{\mathcal{B}}_2(t)}{t^{Q-1}}
\]for $\mathcal{L}^1$-a.e. $t>0$, where $S_t=S\cap B_\varrho(x, t)$ and $B_\varrho(x, t)$ denotes the homogeneous $\varrho$-ball centered at $x$ and of radius $t$; for the very definition of the integrals $\mathcal{A}(t),\,{\mathcal{B}}_2(t)$ we refer the reader to Definition \ref{lsd34} below. The key fact in order to prove this inequality, will be a density type estimate; see Lemma \ref{kr}. The proof of the Isoperimetric Inequality can then be done once we estimate the integrals $\mathcal{A}(t),\,{\mathcal{B}}_2(t)$. The first term can again be estimated by using a blow-up method and it turns out that $\mathcal{A}(t)\leq \int_{S_t}|\MS|\per$; see Lemma \ref{crux00}. Nevertheless, in order to estimate the integral ${\mathcal{B}}_2(t)$, \underline{we cannot use local estimates} and/or blow-up results. More precisely, we stress that ${\mathcal{B}}_2(t)=\int_{\partial S\cap B_\varrho(x, t)} f\,\nis$, for a suitable function $f:\partial S\longrightarrow \R_+$; see Definition \ref{lsd34}. Below, we shall show that
${\mathcal{B}}_2(t)\leq \nis(\partial
S\cap B_\varrho(x, t)) + \widetilde{{\mathcal{B}}_2}(t)$, where $$\widetilde{{\mathcal{B}}_2}(t)\lesssim \sum_{i=2}^k t^{i-1}\int_{\partial S\cap B_\varrho(x, t) }|\P_{^{_{\HH_i S}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2};$$see Lemma \ref{417}. Note that the right-hand side turns out to be $(Q-2)$-homogeneous with respect to Carnot dilations but, in general, cannot be expressed in terms of the measure $\nis$ only. It is important to observe that no blow-up method can be profitably used here: the reason is that the center of the $\varrho$-ball belongs to ${\rm Int}(S\setminus C_S)$. Hence there can be large balls intersecting a small portion of $\partial S$ and, on the contrary, small balls lying very close to $\partial S$.
\end{war}
Taken all together, the previous remarks suggest that,
in order to prove a weaker formulation of the Isoperimetric Inequality for the $\HH$-perimeter $\per$, which only uses the homogeneous measure $\nis$ on the boundary, we need -at least- some extra assumptions on the characteristic set $C_{\partial S}$ of the boundary and, in particular, \it is necessary that $\sigma^{n-2}_{^{_\mathit{R}}}(C_{\partial S})=0$. \rm
We end this introductory section by formulating an interesting related open question.
\begin{Problem}\label{pope}Let $\Sigma^{n-1}$ denote the class of all compact hypersurfaces $S\subset\mathbb{G}$ with (piecewise) $\cont^1$ boundary $\partial S$ such that $\sigma^{n-2}_{^{_\mathit{R}}}(C_{\partial S})=0$. Furthermore, let
us set
$$\mu(\partial S):=\sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_{\partial
S }|\P_{^{_{\HH_i S}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2}.$$Is there a dimensional constant $C_{dim}<+\infty$ such that $\frac{\mu(\partial S)
}{\nis(\partial S)}\leq C_{dim}$ for every $S\in \Sigma^{n-1}$?
\end{Problem}
In other words, we are asking whether $\sup_{S\in \Sigma^{n-1}}\frac{\mu(\partial S)
}{\nis(\partial S)}<\infty.$ Notice that the ratio $\frac{\mu(\partial S)
}{\nis(\partial S)}$ is $0$-homogeneous with respect to Carnot dilations. Furthermore, $\frac{\mu(\partial S)
}{\nis(\partial S)}$ can always be estimated by\footnote{Roughly speaking, this assertion can be proved by using the fact that, if $\sigma_{^{_\mathit{R}}}^{n-2}(C_{\partial S})=0$, then $\sigma_{^{_\mathit{R}}}^{n-2}\left(\left\{x\in\partial S: |\PH (\nu\wedge\eta)|\leq \epsilon\right\}\right)\longrightarrow 0$ as $\epsilon\rightarrow 0$, where $\nu\wedge\eta$ is any unit normal $2$-vector orienting $\partial S$.} a constant which depends on the characteristic set of $\partial S$ for any $S\in \Sigma^{n-1}$. As a matter of fact,
Problem \ref{pope} is equivalent to \underline{understanding whether such an estimate holds} \underline{with a universal constant}.
Clearly, a positive answer to this problem would automatically imply the following inequality:
$$\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C'_{Isop}\left(\int_S
|\MS|\,\per +\nis(\partial S)\right),$$with $C'_{Isop}=C_{Isop}(1+C_{dim})$. Note also that a (purely) horizontal Sobolev-type inequality can be proved \underline{only if} the last inequality holds true.
\begin{oss}\label{ues}An equivalent formulation of Problem \ref{pope} is the following:\begin{itemize}
\item are there dimensional constants $0<C_i<+\infty$, $i\in\{2,...,k\}$, such that $$\left(\per(S)\right)^{\frac{i-1}{Q-1}} \leq C_i\frac{\nis(\partial S)}{ \int_{\partial
S }|\P_{^{_{\HH_i S}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2}
} $$ whenever $S\in \Sigma^{n-1}$?
\end{itemize}
\end{oss}
\begin{es}[The case of the Heisenberg group $\mathbb H^1$] In the first Heisenberg group $\mathbb H^1$, the problem just formulated in Remark \ref{ues} becomes: is there a constant $0<C<+\infty$ such that $$\left(\sigma_{^{_\HH}}^3(S)\right)^{\frac{1}{3}} \leq C \frac{\sigma_{^{_\HH}}^1(\partial S)}{ \int_{\partial
S }|\P_{^{_{\HH_2 S}}}\eta|\,\sigma_{^{_\mathit{R}}}^{1}
} $$for any $S\in \Sigma^{n-1}$? Here $\HH_2 S$ corresponds to the tangential direction $\mathbf t=|\PH \nu|\, T-\langle T, \nu \rangle \nn$. It is not difficult to show that $\sigma_{^{_\HH}}^1(\partial S)$ coincides with the integral over $\partial S$ of the contact form $\theta=T^\ast=dt+\frac{y\,dx-x\,dy}{2}$ and hence equals the Euclidean area of the projection of $S$ onto the $xy$-plane. Moreover, the integral in the denominator can be regarded (up to a normalization constant) as the $1$-dimensional intrinsic Hausdorff measure $\mathcal H^1_{\varrho}$ of $\partial S$. Obviously, in order to prove this inequality, the assumption that $\sigma^{1}_{^{_\mathit{R}}}(C_{\partial S})=0$ \underline{cannot} be removed.
\end{es}
The next sections are devoted to proving Theorem \ref{ghaioio}. Finally,
in Section \ref{sobineqg} we shall discuss some related Sobolev-type
inequalities.
\subsection{Linear isoperimetric inequality and monotonicity
formula}\label{wlineq}
Let $S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ with
boundary $\partial{S}$. Let $\nu$ denote the outward-pointing unit
normal vector along $S$ and $\varpi=\frac{\P_{^{_\VV}}\nu}{|\PH\nu|}$. Furthermore, we shall
set$$\varpi_{^{_{\HH_i}}}:=\P_{^{_{\HH_i}}}\varpi=\sum_{\alpha\in I_{^{_{\HH_i}}}}\varpi_\alpha
X_\alpha$$for $i=2,...,k$. Note that
$\frac{\nu}{|\PH\nu|}=\nn+\sum_{i=2}^k\varpi_{^{_{\HH_i}}}$.
\begin{no}Let
$\eta$ be the outward-pointing unit normal vector along $\partial
S$. Note that, at each point $x\in\partial S$,
$\eta(x)\in\TT_xS$. In the sequel, we shall set
$\chi:=\frac{\P_{^{_{\VS}}}\eta}{|\P_{^{_{\HS}}}\eta|}$ and $\chi_{^{_{\HH_i S}}}:=\P_{^{_{\HH_i S}}}\chi$ for any
$i=2,...,k$; see Remark \ref{indbun}.\end{no}
We have
$\chi=\sum_{i=2}^k\chi_{^{_{\HH_i S}}}$ and
$\frac{\eta}{|\P_{^{_{\HS}}}\eta|}=\eta_{^{_{\HS}}}+\chi$; see also Remark
\ref{measonfr}.
\begin{Defi}\label{berlu}Fix a point $x\in\mathbb{G}$ and consider the \textquotedblleft Carnot homothety\textquotedblright\ centered
at $x$, that is, $\delta^x(t,y):=x\bullet\delta_t (x^{-1}\bullet
y)$. The variation vector of
$\delta^x_t(y):=\delta^x(t,y)$ at $t=1$ is given by
$Z_x:=\frac{\partial \delta^x_t}{\partial
t}\bigg|_{t=1}.$ \end{Defi}
Let us apply the 1st variation formula of $\per$ with a special
choice of the variation vector. To this end, fix
a point $x\in\mathbb{G}$ and consider the Carnot homothety
$\delta_t^x(y):=x\bullet\delta_t (x^{-1}\bullet y)$ centered at
$x$.\begin{oss}Without loss of generality, by using group translations, we can choose $x=0\in\mathbb{G}$. In this case, we have
$$\delta^0(t,y)=\delta_ty=\esp \left( ty_{^{_\HH}},t^2y_{^{_{\HH_2}}},
t^3y_{^{_{\HH_3}}},...,t^iy_{^{_{\HH_i}}},...,t^ky_{^{_{\HH_k}}} \right) \qquad \forall\,\,t\in
\R,$$where $y_{^{_{\HH_i}}}=\sum_{j_i\in I_{^{_{\HH_i}}}} y_{j_i}\ee_{j_i}$ and $\esp$
is the Carnot exponential mapping; see Section \ref{prelcar}.
Thus the variation vector related to
$\delta^0_t(y):=\delta^0(t,y)$, at $t=1$, is simply given by
$$Z_0:=\frac{\partial \delta^0_t}{\partial
t}\Big|_{t=1}=\frac{\partial \delta_t}{\partial
t}\Big|_{t=1}=y_{^{_\HH}}+ 2y_{^{_{\HH_2}}}+...+ky_{^{_{\HH_k}}}.$$
\end{oss}
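For instance, in $\mathbb H^1$ with exponential coordinates $(x, y, t)$, one has $\delta_s(x,y,t)=(sx, sy, s^2t)$, and hence
\[
Z_0(x,y,t)=\frac{\partial}{\partial s}\Big|_{s=1}\left(sx, sy, s^2t\right)=x\,\partial_x+y\,\partial_y+2t\,\partial_t,
\]
in accordance with the formula $Z_0=y_{^{_\HH}}+2y_{^{_{\HH_2}}}$.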
By invariance of $\per$ under Carnot dilations, one
gets$$\frac{d}{dt}\delta_t^\ast\per\Big|_{t=1}=(Q-1)\,\per({S}).$$Furthermore,
by using the 1st variation formula, it follows that
\begin{equation*}(Q-1)\,\per({S})=-\int_{{S}}\MS\left\langle Z_{x}, \frac{\nu}{|\PH\nu|}\right\rangle\,\per +
\int_{\partial{S}} \left\langle \left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right),\frac{\eta}{|\P_{^{_{\HS}}}\eta|}\right\rangle\,\nis.\end{equation*}
\begin{lemma}The following holds \begin{eqnarray}\label{inA}\frac{1}{\varrho_{x}}\left|\left\langle Z_{x},
\frac{\nu}{|\PH\nu|}\right\rangle\right| \leq
\left(1+\sum_{i=2}^k i\,{\bf c}_i\,\varrho_{x}^{i-1}|\varpi_{^{_{\HH_i}}}|\right).\end{eqnarray}Furthermore, we have $\left(1+\sum_{i=2}^k i\,{\bf c}_i\,\varrho_{x}^{i-1}|\varpi_{^{_{\HH_i}}}|\right)\leq 1+O\left(\frac{\varrho_x}{|\P_{^{_\HH}}\nu|}\right) $ as $\varrho_x\rightarrow 0^+$.
\end{lemma}
Here and elsewhere, we use the standard \textquotedblleft big O\textquotedblright\ and \textquotedblleft little o\textquotedblright\ notation.
\begin{proof}Without loss of generality, by left-invariance, let $x=0\in\mathbb{G}$.
Note
that
$$\left\langle Z_{0}, \frac{\nu}{|\PH\nu|}\right\rangle=
\langle Z_{0}, (\nn+\varpi)\rangle=\langle
y_{^{_\HH}},\nn\rangle+\sum_{i=2}^k i\,\langle
y_{^{_{\HH_i}}},\varpi_{^{_{\HH_i}}}\rangle.$$
By the Cauchy--Schwarz inequality, we immediately get that
\[\left|\left\langle Z_{0},
\frac{\nu}{|\PH\nu|}\right\rangle\right|\leq |y_{^{_\HH}}|+\sum_{i=2}^k
i\,|y_{^{_{\HH_i}}}||\varpi_{^{_{\HH_i}}}|.\] According to Definition
\ref{2iponhomnor1}, let ${\bf c}_i\in\R_+$ be constants such that
$|y_{^{_{\HH_i}}}|\leq {\bf c}_i \varrho^i(y)$ for $i=2,...,k.$ Using the last
inequality yields
\[\left|\left\langle Z_{0},
\frac{\nu}{|\PH\nu|}\right\rangle\right| \leq
\varrho\left(1+\sum_{i=2}^k i\,{\bf c}_i\varrho^{i-1}|\varpi_{^{_{\HH_i}}}|\right)\leq\varrho\left(1+O\left(\frac{\varrho}{|\P_{^{_\HH}}\nu|} \right) \right)\] as $\varrho\rightarrow 0^+$.
\end{proof}
\begin{Defi}\label{lsd34}Let $\mathbb{G}$ be a $k$-step Carnot group and $S\subset\mathbb{G}$ be a
hypersurface of class $\cont^2$ with (piecewise)
$\cont^1$ boundary $\partial S$. Moreover, let $S_r:=S\cap B_\varrho(x,r)$, where
$B_\varrho(x,r)$ is the open $\varrho$-ball centered at $x\in \mathbb{G}$
and of radius $r>0$. We shall set
\begin{eqnarray*}\mathcal{A}(r)&:=&\int_{S_r}|\MS|\left(1+\sum_{i=2}^k
i\,{\bf c}_i\varrho_x^{i-1}|\varpi_{^{_{\HH_i}}}|\right)\,\per,\\\mathcal{B}_0(r)&:=&\int_{\partial
S_r}\frac{1}{\varrho_x}\left|\left\langle
\left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right),\frac{\eta}{|\P_{^{_{\HS}}}\eta|}\right\rangle\right|\,\nis,\\\mathcal{B}_1(r)&:=&\int_{\partial
B_\varrho(x, r)\cap S}\frac{1}{\varrho_x}\left|\left\langle
\left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right),\frac{\eta}{|\P_{^{_{\HS}}}\eta|}\right\rangle\right|\,\nis,\\{\mathcal{B}}_2(r)&:=&\int_{\partial
S\cap B_\varrho(x, r)}\frac{1}{\varrho_x}\left|\left\langle
\left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right),\frac{\eta}{|\P_{^{_{\HS}}}\eta|}\right\rangle\right|\,\nis,\end{eqnarray*}where
$\varrho_x(y):=\varrho(x,y)$ for $y\in S$, that is, $\varrho_x$
denotes the $\varrho$-distance from a fixed point $x\in\mathbb{G}$.
\end{Defi}
Note that $\mathcal{B}_0(r)=\mathcal{B}_1(r)+\mathcal{B}_2(r)$. We clearly have the following:
\begin{Prop}[Linear Inequality]\label{correctdimin}Let ${S}\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$
with (piecewise) $\cont^1$ boundary $\partial{S}$. Let
$r$ be the radius of a $\varrho$-ball centered at $x\in\mathbb{G}$. Then
\begin{eqnarray*}(Q-1)\,\per({S}_r)\leq
{r}\left(\mathcal{A}(r)+\mathcal{B}_0(r)\right).\end{eqnarray*}
\end{Prop}\begin{proof}Immediate from the 1st variation formula above together with \eqref{inA}.\end{proof}
\begin{oss}In the sequel we will need
the following property: \rm there exists
$r_S>0$ such that
\begin{equation}\label{0key} \int_{S_{r+h}\setminus S_r}\frac{1}{\varrho_x}\left|\left\langle
\left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right), \nabla_{^{_{\TS}}}\varrho_x\right\rangle\right|\,\per \leq \per(S_{r+h}\setminus S_r)
\end{equation}for $\per$-a.e. $x\in{\rm Int}\,S$, for
$\mathcal{L}^1$-a.e. $r,\, h>0$ such that $r+h\leq r_S $. \it
In the classical setting the previous inequality easily follows from a key property of the Euclidean metric $d\Eu$, that is, the
{\rm eikonal equation} $|\nabla\Eu d\Eu|=1$. In fact, $Z_x(y)=y-x$
and, since $\nn$ coincides with $\nu$, one has $\nn\ot=0$. Thus setting $\varrho_x(y):=d\Eu(x, y)=|x-y|$ yields
\[\frac{\left|\left\langle Z_x(y),
\nabla_{^{_{\TS}}}\varrho_x(y)\right\rangle\right|}{\varrho_x(y)}=1-\left\langle\frac{y-x}{|y-x|},\textsl{n}_{\ee}\right\rangle^2\leq
1,\]where $\textsl{n}_{\ee}$ denotes the Euclidean unit normal of $S$. In particular, we may take $r_S=+\infty$.
A stronger version of \eqref{0key} is a natural assumption in the
Riemannian setting. In this regard we refer the reader to a paper by Chung,
Grigor'yan and Yau, where this hypothesis is the
starting point of a general theory of isoperimetric
inequalities on weighted Riemannian manifolds and graphs; see \cite{CGY}.\end{oss}
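For the reader's convenience, we spell out the elementary computation behind the Euclidean identity in the previous remark. Since $\nabla_{^{_{\TS}}}\varrho_x=\nabla\varrho_x-\langle\nabla\varrho_x, \textsl{n}_{\ee}\rangle\textsl{n}_{\ee}$ and $\nabla\varrho_x(y)=\frac{y-x}{|y-x|}$, one has
\[
\left\langle Z_x(y), \nabla_{^{_{\TS}}}\varrho_x(y)\right\rangle=|y-x|-\langle y-x, \textsl{n}_{\ee}\rangle\left\langle \frac{y-x}{|y-x|}, \textsl{n}_{\ee}\right\rangle=\varrho_x(y)\left(1-\left\langle \frac{y-x}{|y-x|}, \textsl{n}_{\ee}\right\rangle^2\right).
\]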
\noindent It is worth observing that\footnote{\label{19}We stress that \eqref{0kinawa2} holds true
for every (smooth enough) homogeneous distance
on any Carnot group $\mathbb{G}$.
\begin{lemma}\label{cardcaz}Let $\mathbb{G}$ be a $k$-step Carnot group and let
$\varrho:\mathbb{G}\times\mathbb{G}\longrightarrow\R_+$ be any $\cont^1$-smooth
homogeneous norm. Then $\frac{1}{\varrho_x}\langle Z_x,
\nabla\,\varrho_x\rangle=1$ for every $x\in\mathbb{G}$.
\end{lemma}\begin{proof}This follows by homogeneity and left-invariance of $\varrho$. More
precisely, we have
$t\varrho(z)=\varrho(\delta_t z)$ for all $t>0$ and for every
$z\in\mathbb{G}$. Setting $z:=x^{-1}\bullet y$, we get that $t\varrho(x,
y)=t\varrho(z)=\varrho(\delta_t z)=\varrho(x, x\bullet\delta_t z)$
for every $x, y \in\mathbb{G}$ and for all $t>0$. Hence
$\varrho(x,y)=\frac{d}{dt}\varrho(x,
x\bullet\delta_t (x^{-1}\bullet y))\big|_{t=1}=\langle\nabla\,\varrho_x(y),Z_x(y)
\rangle,
$ and the claim follows.\end{proof}}
\begin{equation}\label{0kinawa2}\frac{1}{\varrho_x}\langle Z_x, \nabla\,\varrho_x\rangle=1\end{equation}for every $x\in\mathbb{G}$.
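In the Abelian case $\mathbb{G}=\R^n$, identity \eqref{0kinawa2} reduces to a familiar statement: with $\varrho_x(y)=|y-x|$ and $Z_x(y)=y-x$, one simply has
\[
\langle Z_x(y), \nabla\varrho_x(y)\rangle=\left\langle y-x, \frac{y-x}{|y-x|}\right\rangle=|y-x|=\varrho_x(y).
\]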
The last identity can be used to rewrite \eqref{0key}. More precisely, we have
\[\frac{\left|\left\langle\left( Z_x\ot(y)-\frac{\langle Z_x\op(y),\nu\rangle}{|\PH\nu|}\nn\ot \right),
\nabla_{^{_{\TS}}}\varrho_x(y)\right\rangle\right|}{\varrho_x(y)}=\left|1-\frac{\left\langle
\nabla_{^{_\HH}}\,\varrho_x(y),\nn(y)\right\rangle \left\langle Z_x(y),
\frac{\nu(y)}{|\PH\nu|}\right\rangle}{\varrho_x(y)}\right|.\]
Hence \eqref{0key} can be formulated as follows:\\
\noindent $(\spadesuit)$\,\it There exists
$r_S>0$ such that:
\begin{eqnarray}\label{0key2}\int_{S_{r+h}\setminus S_r}\left|1-\frac{\left\langle
\nabla_{^{_\HH}}\,\varrho_x(y),\nn(y)\right\rangle \left\langle Z_x(y),\left(\nn(y)+\varpi(y)\right) \right\rangle}{\varrho_x(y)}\right|\,\per(y)\leq \per \left(S_{r+h}\setminus S_r\right)
\end{eqnarray} for $\per$-a.e. $x\in{\rm Int}\,S$, for
$\mathcal{L}^1$-a.e. $r,\, h>0$ such that $r+h\leq r_S $.\\\rm
\begin{lemma}[Key result]\label{kr}Let $x \in {\rm Int}(S\setminus C_S)$. Set $$\pi(S_t):=\int_{ S_t}\left|1-\frac{\left\langle
\nabla_{^{_\HH}}\,\varrho_x(y),\nn(y)\right\rangle \left\langle Z_x(y),\left(\nn(y)+\varpi(y)\right) \right\rangle}{\varrho_x(y)}\right|\,\per(y)$$for $t>0$. Then $$\lim
_{t\rightarrow 0^+}\frac{\pi(S_t)}{\per(S_t)}=1.$$
\end{lemma}
\begin{oss}\label{kr1}
By using standard results about differentiation of measures, it follows from Lemma \ref{kr} that $ \pi(S_t)=\per(S_t)$ for $\per$-a.e. $x\in S$ and for all $t>0$; see, for instance, Theorem 2.9.7 in \cite{FE}. Thus, we get that
$\pi\left(S_{t+h}\setminus S_t\right)=\per\left(S_{t+h}\setminus S_t\right)$ for $\per$-a.e. $x\in S$ and for every $t, h\geq 0$. \underline{In particular, we may choose} $r_S=+\infty$.
\end{oss}
\begin{proof}[Proof of Lemma \ref{kr}]Let $x \in {\rm Int}(S\setminus C_S)$ and note that $S_t=\vartheta_{t}^{x}\left( \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)\right)$ for all $t>0$. So we have
\begin{eqnarray*}\frac{ \pi(S_t)}{\per \left(S_t\right)}&=&\frac{\int_{\vartheta_{t}^{x}\left( \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)\right) }\left|1-\frac{\left\langle
\grad_{^{_\HH}}\,\varrho_x(y),\nn(y)\right\rangle \left\langle Z_x(y),\left(\nn(y)+\varpi(y)\right) \right\rangle}{\varrho_x(y)}\right|\,\per(y)}{\per \left(\vartheta_{t}^{x}\left( \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)\right)\right)} \\&=&\frac{\int_{ \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1) }\left|1-\frac{\left\langle
\grad_{^{_\HH}}\,\varrho_x(\delta_{t}^{x}(z)),\nn(\delta_{t}^{x}(z))\right\rangle \left\langle Z_x(\delta_{t}^{x}(z)),
\left(\nn(\delta_{t}^{x}(z))+\varpi(\delta_{t}^{x}(z))\right) \right\rangle}{\varrho_x(\delta_{t}^{x}(z))}\right|\,\per(z)}{\per \left( \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1) \right)} \\&=&\frac{\int_{ \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)}\left|1-\frac{\left\langle
\grad_{^{_\HH}}\,\varrho_x(\delta_{t}^{x}(z)),\nn(\delta_{t}^{x}(z))\right\rangle \left\langle t\left( [Z_x(z)]_{^{_\HH}}+ \overrightarrow{O}(t)\right),
\left(\nn(\delta_{t}^{x}(z))+\varpi(\delta_{t}^{x}(z))\right) \right\rangle}{t\varrho_x(z)}\right|\,\per(z)}{\per \left( \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)\right)} \\&=&\frac{\int_{ \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)}\left|1-\frac{\left\langle
\grad_{^{_\HH}}\,\varrho_x(\delta_{t}^{x}(z)),\nn(\delta_{t}^{x}(z))\right\rangle \left\langle [Z_x(z)]_{^{_\HH}},
\, \nn(\delta_{t}^{x}(z))\right\rangle}{\varrho_x(z)}\right|\,\per(z)}{\per \left( \delta_{\frac{1}{t}}^{x} S\cap B_{\varrho}(x, 1)\right)} + O(t),
\end{eqnarray*}as $t\rightarrow 0^+$.
It is worth observing that the horizontal gradient of $\varrho_x$ is homogeneous of degree $0$, and hence independent of $t$. Note that $[Z_x(z)]_{^{_\HH}}=\PH(Z_x)(z)=z_{^{_\HH}}-x_{^{_\HH}}$. Therefore
\[\lim_{t\rightarrow 0^+}\frac{ \pi(S_t)}{\per \left(S_t\right)}=\frac{\int_{S_{\infty}\cap B_{\varrho}(x, 1) }\left|1-\frac{\left\langle
\grad_{^{_\HH}}\,\varrho_x(z),\nn(x)\right\rangle \left\langle \left(z_{^{_\HH}}-x_{^{_\HH}}\right),
\, \nn(x)\right\rangle}{\varrho_x(z)}\right|\,\per(z)}{\per \left(S_{\infty}\cap B_{\varrho}(x, 1) \right)} =:L.
\]Recall that, by Theorem \ref{BUP}, we have $S_{\infty}=\mathcal{I}(\nn(x))$. This implies that $\left\langle \left(z_{^{_\HH}}-x_{^{_\HH}}\right),
\, \nn(x)\right\rangle=0$ whenever $z\in \mathcal{I}(\nn(x))$ and hence $L=1$, as wished.
\end{proof}
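\begin{oss}
For the reader's convenience, we make explicit the homogeneity property used in the previous proof. The function $\varrho_x$ is $1$-homogeneous with respect to the dilations centered at $x$, while horizontal derivatives of a function composed with $\delta^x_t$ rescale by a factor $t$; hence
\[\varrho_x(\delta^x_t z)=t\,\varrho_x(z)\quad\Longrightarrow\quad \grad_{^{_\HH}}\,\varrho_x(\delta^x_t z)=\grad_{^{_\HH}}\,\varrho_x(z)\qquad\forall\,\,t>0,\]
that is, $\grad_{^{_\HH}}\,\varrho_x$ is homogeneous of degree $0$.
\end{oss}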
At this point, starting from Proposition \ref{correctdimin}, we may
prove a monotonicity formula for the $\HH$-perimeter
$\per$, which is one of our main results. We shall set ${S}_t:={S}\cap
{B_\varrho}(x,t)$, for $t>0$.
\begin{teo}[The monotonicity inequality for the measure $\per$]\label{rmonin} Let ${S}\subset\mathbb{G}$ be a
compact hypersurface of class $\cont^2$ with (piecewise) $\cont^1$ boundary $\partial S$. Then for every $x\in {\rm Int}(S\setminus C_S)$ the
following ordinary differential inequality
holds\begin{eqnarray}\label{rmytn}-\frac{d}{dt}\frac{\per({S}_t)}{t^{Q-1}}
\leq \frac{\mathcal{A}(t)+{\mathcal{B}}_2(t)}{t^{Q-1}}
\end{eqnarray}for $\mathcal{L}^1$-a.e. $t>0$.
\end{teo}\begin{proof}By
applying Sard's Theorem we get that ${S}_t$ is a
manifold of class $\cont^2$ with boundary for $\mathcal{L}^1$-a.e. $t>0$. From the
inequality in
Proposition \ref{correctdimin} we have
\[(Q-1)\,\per({S}_t)\leq
t\left(\mathcal{A}(t)+\mathcal{B}_0(t)\right)\]for
$\mathcal{L}^1$-a.e. $t>0$, where $t$ is the radius of a
$\varrho$-ball centered at $x\in {\rm Int}\,S$. Since
$$\partial{S}_t=\{\partial
B_\varrho(x,t)\cap {S}\}\cup\{\partial{S}\cap B_\varrho(x,t)\},$$
we get that
\begin{eqnarray*}(Q-1)\,\per({S}_t)\leq t\,\left(\mathcal{A}(t) +
\mathcal{B}_1(t)+\mathcal{B}_2(t)\right).\end{eqnarray*}We estimate
$\mathcal{B}_1(t)$ by using \eqref{0key} and the Coarea Formula. For
every $t, h>0$ one has
\begin{eqnarray*}\int_{t}^{{t+h}}\mathcal{B}_1(s)\,ds&=&\int_{t}^{{t+h}}
\int_{\partial B_\varrho(x,s)\cap
{S}}\frac{1}{\varrho_{x}}\left|\left\langle \left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right),
\frac{\eta}{|\P_{^{_{\HS}}}\eta|}\right\rangle\right|\nis \\&=&
\int_{S_{t+h}\setminus S_t}\frac{1}{\varrho_{x}}\left|\left\langle
\left( Z_x\ot-\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\ot \right), \frac{\grad_{^{_{\TS}}}\varrho_{x}}{|\grad_{^{_{\HS}}}\varrho_{x}
|}\right\rangle\right||\grad_{^{_{\HS}}}
\varrho_x|\,\per\\&=&\int_{S_{t+h}\setminus
S_t}\per,\end{eqnarray*}where we have used the following facts:
\begin{itemize}
\item $\eta=\frac{\grad_{^{_{\TS}}}{\varrho_x}}{|\grad_{^{_{\TS}}}{\varrho_x}|}$ and
$\eta_{^{_{\HS}}}=\frac{\grad_{^{_{\HS}}}\varrho_x}{|\grad_{^{_{\HS}}}\varrho_x|}$ along $\partial
B_\varrho(x,s)\cap {S}$ for $\mathcal{L}^1$-a.e. $s\in]t,
t+h[$;\item the Coarea
Formula \eqref{1coar} together with Lemma \ref{kr} and Remark \ref{kr1}.\end{itemize}Therefore $$\frac{\int_{t}^{{t+h}}\mathcal{B}_1(s)\,ds}{h}=
\frac{\per(S_{t+h}\setminus S_t)}{h} $$for every $h>0$; letting $h\rightarrow
0^+$ yields $\mathcal{B}_1(t)=\frac{d}{dt}\,\per({S}_t) $
for $\mathcal{L}^1$-a.e. $t>0$. So we get that
\[(Q-1)\,\per({S}_t)\leq t\left(\mathcal{A}(t)+{\mathcal{B}}_2(t)+\frac{d}{dt}\,\per({S}_t)\right)\]which
is equivalent to \eqref{rmytn}.
\end{proof}
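\begin{oss}
The equivalence invoked in the last step of the previous proof is a direct computation: for $\mathcal{L}^1$-a.e. $t>0$ one has
\[-\frac{d}{dt}\,\frac{\per({S}_t)}{t^{Q-1}}=\frac{(Q-1)\,\per({S}_t)-t\,\frac{d}{dt}\,\per({S}_t)}{t^{Q}},\]
so that the inequality $(Q-1)\,\per({S}_t)\leq t\left(\mathcal{A}(t)+{\mathcal{B}}_2(t)+\frac{d}{dt}\,\per({S}_t)\right)$ turns into \eqref{rmytn} after dividing by $t^{Q}$.
\end{oss}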
\subsection{Further estimates}\label{perdindirindina}
In this section we study the integrals
$\mathcal{A}(t)$ and $\mathcal{B}_2(t)$ appearing in the
right-hand side of the monotonicity formula
\eqref{rmytn}.\\
\noindent{\bf \large Estimate of $\mathcal{A}(t)$.}
\begin{lemma}\label{perdopo}Let $S\subset\mathbb{G}$ be a
hypersurface of class $\cont^k$, let $x\in {\rm Int}\,S$ and let $S_t=S\cap B_{\varrho}(x,
t)$ for some $t>0$. Then there exists a constant ${\bf b}_\varrho>0$, only
dependent on $\varrho$ and $\mathbb{G}$, such that
\begin{eqnarray}\label{margheritacarosio}
\lim_{t\rightarrow
0^+}\frac{{\int_{{S}_t}|\varpi_{^{_{\HH_i}}}|\,\per}}{t^{Q-i}}\leq
\DH_i\,{\bf b}_\varrho\quad\mbox{for every }\,\, i=2,..., k\end{eqnarray}where $\DH_i=\dim\HH_i$.
\end{lemma}
\begin{proof} For any
$\alpha=\DH+1,...,n$, we have
$\left(X_\alpha\LL\Vol\right)\big|_S= \langle X_\alpha, \nu\rangle\,\sigma^{n-1}_{^{_\mathit{R}}} \big|_S=
\ast\omega_\alpha |_S$, where $\ast$ denotes the Hodge star
operator; see \cite{Helgason}. Moreover
$\delta_t^\ast(\ast\omega_\alpha)=t^{Q-{\rm
ord}(\alpha)}(\ast\omega_\alpha)$ for every $t>0$. So we get that
$${\int_{{S}_t}|\varpi_{^{_{\HH_i}}}|\,\per}=\int_{{S}_t}|\P_{^{_{\HH_i}}}\nu|\,\sigma^{n-1}_{^{_\mathit{R}}} \leq \sum_{{\rm ord}(\alpha)=i}\int_{S_t} |X_\alpha\LL\Vol|=
\sum_{{\rm ord}(\alpha)=i}t^{Q-i}\int_{\delta^x_{\frac{1}{t}}S\cap
B_\varrho(x,
1)}\left|(\ast\omega_\alpha)\circ\delta^x_t\right|.$$Since
\[\int_{\delta^x_{\frac{1}{t}}S\cap
B_\varrho(x,
1)}\left|(\ast\omega_\alpha)\circ\delta^x_t\right|\leq
\sigma^{n-1}_{^{_\mathit{R}}}\left(\delta^x_{\frac{1}{t}}S\cap B_\varrho(x,
1)\right),\]by using Theorem \ref{BUP} we may pass to the limit in the right-hand side as
$t\rightarrow 0^+$. More precisely, if $x\in
{\rm Int}(S\setminus C_S)$ the rescaled hypersurfaces
$\delta^x_{\frac{1}{t}}S$ converge to the vertical hyperplane
$\mathcal{I}(\nn(x))$ as $t\rightarrow 0^+$. Otherwise, $x\in {\rm Int}(S\cap C_S)$ and we can assume that ${\rm ord}(x)=Q-i$, for
some $i=2,..., k$. We also recall that the limit-set $S_\infty$ is a polynomial
hypersurface of homogeneous degree $i$ passing through $x$; see
Remark \ref{fraccicarla}. So let
us
set $b_1:=\sup_{X\in\HH,\,|X|=1}\sigma^{n-1}_{^{_\mathit{R}}}(\mathcal{I}(X)\cap
B_\varrho(0, 1))$, where $\mathcal{I}(X)$ denotes the
vertical hyperplane through $0\in\mathbb{G}$ and orthogonal to $X$.
In order to study the characteristic case, let $b_2:=\sup_{\Psi\in\mathcal{P}ol^{k}_0}\sigma^{n-1}_{^{_\mathit{R}}}(\Psi\cap
B_\varrho(0, 1))$, where $\mathcal{P}ol^{k}_0$ denotes
the class of all graphs of polynomial functions of homogeneous degree $\leq k$, passing through
$0\in\mathbb{G}$. Using the left-invariance of
$\sigma^{n-1}_{^{_\mathit{R}}}$ and setting
\begin{equation}\label{tonda3}{\bf b}_\varrho:=\max\{b_1,\,b_2\},\end{equation} yields
$\lim_{t\rightarrow0^+}\sigma^{n-1}_{^{_\mathit{R}}}\left(\delta^x_{\frac{1}{t}}S\cap B_\varrho(x,
1)\right)\leq {\bf b}_\varrho$. Therefore
\begin{eqnarray*}
\lim_{t\rightarrow0^+}\frac{{\int_{{S}_t}|\varpi_{^{_{\HH_i}}}|\,\per}}{t^{Q-i}}\leq\lim_{t\rightarrow0^+} \DH_i\,
\sigma^{n-1}_{^{_\mathit{R}}}\left(\delta^x_{\frac{1}{t}}S\cap B_\varrho(x,
1)\right)\leq \DH_i\, {\bf b}_\varrho\end{eqnarray*} which completes
the proof of \eqref{margheritacarosio}.
\end{proof}
\begin{oss}If $S$ is
just of class $\cont^2$, then \eqref{margheritacarosio} holds
for every $x\in {\rm Int}(S\setminus C_S)$. The same assertion holds if $x\in C_S$ has order
${\rm ord}(x)=Q-i$ for some $i=2,...,k$ and $S$ is of class
$\cont^i$.
\end{oss}
\indent Let $S\subset\mathbb{G}$ be of class $\cont^2$, let $x\in{\rm
Int}(S\setminus C_S)$ and $S_t=S\cap B_\varrho(x,
t)$. Moreover, let $\mathcal{A}(t)$ be as in Definition
\ref{lsd34}. By applying Theorem \ref{baloghteo}, we get that
$\dim_{\rm Eu-Hau}(C_{ S})\leq n-2$. In particular $\sigma_{^{_\mathit{R}}}^{n-1}$-a.e. interior point of $S$ is non-characteristic.
\begin{lemma}\label{crux00}Under the previous assumptions, one has
\begin{equation}\label{cruxii}\mathcal{A}(t)\leq \int_{S_t}|\MS|\,\per.\end{equation}
\end{lemma}
\begin{proof}First, note that $\varrho_x(y)=\varrho(x, y)\rightarrow 0^+$ uniformly for $y\in S_t$
as $t\rightarrow 0^+$. Hence
\begin{eqnarray*}\mathcal{A}(t)=\int_{S_t}|\MS|\left(1+\sum_{i=2}^k
i\,{\bf c}_i\varrho_x^{i-1}|\varpi_{^{_{\HH_i}}}|\right)\,\per\leq
\int_{S_t}|\MS|\left(1+\frac{2
{\bf c}_2\varrho_x\left(1 + o(1)\right)}{|\PH\nu|}\right)\,\per
\end{eqnarray*}as $t\rightarrow
0^+$. Note that $\frac{1}{|\PH\nu|}$ is continuous near $x\in{\rm
Int}(S\setminus C_S)$. Since $\MS$ turns out to be continuous near
every non-characteristic point, by using standard differentiation
results in Measure Theory (see Theorem 2.9.7 in \cite{FE}), we get that
\[\lim_{t\rightarrow
0^+}\frac{ \int_{S_t}|\MS|\left(\frac{2 {\bf c}_2 \varrho_x\left(1 +
o(1)\right)}{|\PH\nu|}\right) \per }{\per(S_t)}=0 \] and \eqref{cruxii}
follows.\end{proof}
Actually, a similar result holds true even if $x\in {\rm
Int}(S\cap C_S)$, at least whenever $\MS$ is bounded and $S$ is smooth enough near
$C_S$. Below we shall make use of Theorem
\ref{BUP}, Case (ii).
\begin{lemma}\label{cux0}Let $S$ be a hypersurface of class $\cont^k$ and assume that $\MS$ is bounded on $S$. Let
$x\in {\rm Int}(S\cap C_S)$ be an interior characteristic point
such that ${\rm ord}(x)=Q-i$, for some $i=2,...,k$. This means that
there exists $\alpha\in\{\DH+1,...,n\}$ with ${\rm ord}(\alpha)=i$, such
that $S$ can be represented, locally around $x$, as a
$X_\alpha$-graph for which
\eqref{0dercond} holds. Then there exists a constant
${\bf d}_\varrho>0$, only dependent on $\varrho$ and $\mathbb{G}$, such that
$\mathcal{A}(t)\leq
\|\MS\|_{L^\infty(S)}\,\,\left(\kappa_\varrho(C_S(x))+
{\bf d}_\varrho\right)\,t^{Q-1}$ as $t\rightarrow
0^+$. In particular, we have$$\mathcal{A}(t)\leq \|\MS\|_{L^\infty(S)}\,\left(\kappa_\varrho(C_S(x))+
{\bf d}_\varrho\right)\mathcal{S}_{\varrho}^{Q-1}(S_t)$$for all $t>0$, where $\mathcal{S}_{\varrho}^{Q-1}$ denotes the
spherical Hausdorff measure computed with respect to the homogeneous distance $\varrho$.
\end{lemma}
\begin{proof}Using Lemma \ref{perdopo} we obtain \begin{eqnarray*}\frac{\sum_{i=2}^k\int_{S_t}
i\,{\bf c}_i\varrho_x^{i-1}|\varpi_{^{_{\HH_i}}}|\,\per}{t^{Q-1}}&\leq&{\sum_{i=2}^k
i\,{\bf c}_i\DH_i\,{\bf b}_\varrho}
\end{eqnarray*}as $t\rightarrow 0^+$,
where ${\bf b}_\varrho$ is the constant defined by \eqref{tonda3}. Finally, the thesis follows by setting ${\bf d}_\varrho:={\sum_{i=2}^k
i\,{\bf c}_i\DH_i {\bf b}_\varrho}$ and by using a well-known density estimate; see Theorem 2.10.17 in \cite{FE}.
\end{proof}
The previous result will be applied only in Section \ref{asintper}.\\
\noindent{\bf \large Estimate of
${\mathcal{B}}_2(t)$.}\\
We define a family of (homogeneous) measures on $(n-2)$-dimensional submanifolds of $\mathbb{G}$.
\begin{Defi}\label{xazax} Let $N\subset\mathbb{G}$ be an $(n-2)$-dimensional submanifold of class
$\cont^1$. Let $\xi\in\XX^0(\nn N)$ be a unit horizontal normal vector field to $N$ and let $\alpha\in I_{^{_\VV}}=\{h+1,...,n\}$ be such that ${\rm ord}(\alpha)=i$. Then we define a $(Q-i-1)$-homogeneous measure $\mu_\alpha\equiv\mu_\alpha(\xi)$ on $N$ by setting$$\mu_\alpha(N\cap B):= \int_{N\cap B}|(\xi\wedge X_\alpha)\LL\Vol|\quad \forall \,\,B\in\mathcal{B}or(\mathbb{G}).$$
\end{Defi}Note that $|(\xi\wedge X_\alpha)\LL\Vol|$ is the norm of the differential $(n-2)$-form $(\xi\wedge X_\alpha)\LL\Vol\big|_N$; see \cite{FE}, pp. 31-32. Now let $\underline{\tau}=\{\tau_1,...,\tau_n\}$ be a \it graded orthonormal frame \rm on an open neighborhood of $N\subset \mathbb{G}$. This means that $\{\tau_{j_i}: j_i\in I_{^{_{\HH_i}}}\}$ is an orthonormal basis of the $i$-th layer $\HH_i$ for every $i=1,...,k$. Furthermore, let us denote by $\underline{\phi}=\{\phi_1,...,\phi_n\}$ the \it dual coframe \rm of $\underline{\tau}$ defined by duality as $\phi_i(\tau_j)=\delta_{i}^{j}$, where $\delta_{i}^{j}$ is the
{\it Kronecker delta}, that is, $\delta_{i}^{j}=1$ if $i=j$ and $0$ otherwise. For simplicity, we also assume that $\tau_1=\xi$, where $\xi\in\XX^0(\nn N)$ is a unit horizontal normal vector field to $N$. Using these new coordinates, it follows that $\xi\wedge X_\alpha=\tau_1\wedge\tau_\alpha$ and we get that $(\xi\wedge X_\alpha)\LL\Vol\big|_N=\ast \phi_1\wedge\phi_\alpha\big|_N$, where $\ast$ denotes the \it Hodge star operator \rm on differential forms; see \cite{Lee}, \cite{Helgason}. Clearly, the homogeneity degree (or \it weight\rm; see \cite{P4}) of the differential $(n-2)$-form $\ast \phi_1\wedge\phi_\alpha\big|_N$ is given by $Q-({\rm ord}(\alpha)+1)=Q-i-1$.
\begin{lemma}\label{417}
Let $x\in {\rm Int}(S\setminus C_S)$. Then \begin{eqnarray*}{\mathcal{B}}_2(t)&\leq&\nis(\partial
S\cap B_\varrho(x, t))+ \sum_{i=2}^k i\,{\bf c}_i\, t^{i-1} \sum_{j_i\in I_{^{_{\HH_i}}}}\int_{\partial
S\cap B_\varrho(x, t)}|\nn\wedge X_{j_i} \LL\Vol| \end{eqnarray*}for every $t>0$; see Definition \ref{2iponhomnor1}.
\end{lemma}
\begin{no}\label{nuota1}
For the sake of brevity, hereafter we shall adopt the notation introduced in Definition \ref{xazax}. In particular, we assume that $\xi=\nn$, where $\nn$ is the horizontal unit normal to $\overline{S}=S\cup\partial S$. We shall set $\mu_{j_i}(\partial S\cap B):=\int_{\partial S\cap B}|\nn\wedge X_{j_i} \LL\Vol|$ for all $B\in\mathcal{B}or(\mathbb{G})$ and $$\mu(x, t):=\sum_{i=2}^k i\,{\bf c}_i\, t^{i-1} \sum_{j_i\in I_{^{_{\HH_i}}}}\mu_{j_i}(\partial S\cap B_\varrho(x, t))\quad \forall\,\, x\in {\rm Int}(S\setminus C_S)\quad \forall\,\,t>0.$$
\end{no}
\begin{proof}[Proof of Lemma \ref{417}] Let us set
\begin{eqnarray*} \widetilde{{\mathcal B}_2}(t)&:=&\int_{\partial
S\cap B_\varrho(x, t)}
\left|\left\langle \frac{\left([Z_x]_{^{_\VV}}-{\langle [Z_x]_{^{_\VV}} ,\varpi\rangle}\nn\right)\ot}{\varrho_x}, \chi\right\rangle\right|\nis.\end{eqnarray*}Furthermore, let $x\in {\rm Int}(S\setminus C_S)$ be fixed and let $f:\partial S\setminus C_{\partial S}\longrightarrow\R_+$ be defined as$$f(y):=\frac{1}{\varrho_x(y)}\left|\left\langle
\left( Z_x\ot(y)-\frac{\langle Z_x\op(y),\nu(y)\rangle}{|\PH\nu(y)|}\nn\ot(y) \right),\frac{\eta(y)}{|\P_{^{_{\HS}}}\eta(y)|}\right\rangle\right|.$$Then \begin{eqnarray*}f &\equiv&\frac{1}{\varrho_x}\left|\left\langle
\left( Z_x -\frac{\langle Z_x\op,\nu\rangle}{|\PH\nu|}\nn\right)\ot ,\left(\eta_{^{_{\HS}}}+\chi\right)\right\rangle\right|\\&=&\frac{1}{\varrho_x}\left|\left\langle
\left( [Z_x]_{^{_\HH}} -\frac{\langle [Z_x]_{^{_\HH}}\op,\nu\rangle}{|\PH\nu|}\nn+ \left([Z_x]_{^{_\VV}}-\frac{\langle [Z_x]_{^{_\VV}} ,\nu \rangle}{|\PH\nu |}\nn\right)\right)\ot ,\left(\eta_{^{_{\HS}}}+\chi\right)\right\rangle\right|\\&=&\frac{1}{\varrho_x}\left|\left\langle
\left( [Z_x]_{^{_\HH}} -\frac{\langle [Z_x]_{^{_\HH}}\op,\nu\rangle}{|\PH\nu|}\nn+ \left([Z_x]_{^{_\VV}}- {\langle [Z_x]_{^{_\VV}} ,\varpi\rangle} \nn\right)\right)\ot ,\left(\eta_{^{_{\HS}}}+\chi\right)\right\rangle\right|\\&=&\frac{1}{\varrho_x}\left|\left\langle
\left( [Z_x]_{^{_\HH}} - {\langle [Z_x]_{^{_\HH}}\op,\nn\rangle}\nn+ \left([Z_x]_{^{_\VV}}-{\langle [Z_x]_{^{_\VV}} ,\varpi\rangle}\nn\right)\right)\ot ,\left(\eta_{^{_{\HS}}}+\chi\right)\right\rangle\right|\\&=&\frac{1}{\varrho_x}\left|\left\langle
\left( [Z_{^{_{\HS}}}]_x +\left([Z_x]_{^{_\VV}}-{\langle [Z_x]_{^{_\VV}} ,\varpi\rangle}\nn\right)\right)\ot ,\left(\eta_{^{_{\HS}}}+\chi\right)\right\rangle\right|\\ &=&\frac{|\langle[Z_{^{_{\HS}}}]_x,\eta_{^{_{\HS}}}\rangle|}{\varrho_x} + \left|
\left\langle \frac{\left([Z_x]_{^{_\VV}}-{\langle [Z_x]_{^{_\VV}} ,\varpi\rangle}\nn\right)\ot}{\varrho_x}, \chi\right\rangle\right|\\&\leq& 1+\left|\left\langle \frac{\left([Z_x]_{^{_\VV}}-{\langle [Z_x]_{^{_\VV}} ,\varpi\rangle}\nn\right)\ot}{\varrho_x}, \chi\right\rangle\right|.\end{eqnarray*}So we have shown that
$${\mathcal{B}}_2(t) \leq \nis(\partial
S\cap B_\varrho(x, t))+\widetilde{{\mathcal B}_2}(t)\quad\forall\,\,t>0.$$
Note that $[Z_x]_{^{_\VV}}=\sum_{i=2}^k \sum_{j_i\in I_{^{_{\HH_i}}}}\langle[Z_x]_{^{_\VV}}, X_{j_i}\rangle X_{j_i}$. Using the assumptions on $\varrho$ we get that $\left|\frac{\langle[Z_x]_{^{_\VV}}, X_{j_i}\rangle}{\varrho_x}\right|\leq \textbf{c}_i\varrho_x^{i-1}$; see Definition \ref{2iponhomnor1}. Therefore
\begin{eqnarray*} \widetilde{{\mathcal B}_2}(t)&\leq&\sum_{i=2}^k i\,{\bf c}_i\,t^{i-1}\sum_{j_i\in I_{^{_{\HH_i}}}}\int_{\partial
S\cap B_\varrho(x, t)}
\left|\left\langle {\left(X_{j_i}-{\langle X_{j_i} ,\varpi\rangle}\nn\right)\ot}, \chi\right\rangle\right|\nis\\&=&\sum_{i=2}^k i\,{\bf c}_i\,t^{i-1}\sum_{j_i\in I_{^{_{\HH_i}}}}\int_{\partial
S\cap B_\varrho(x, t)}
\left|\left\langle \left( |\PH \nu|X_{j_i}-\nu_{j_i}\nn\right) , \P_{^{_{\VS}}}\eta\right\rangle\right|\sigma_{^{_\mathit{R}}}^{n-2}. \end{eqnarray*} Now let $\underline{\tau}$ be an adapted moving frame to $S$; see Definition \ref{movadafr}.
Using this frame we get that
\begin{eqnarray*}\int_{\partial
S\cap B_\varrho(x, t)}
\left|\left\langle \left( |\PH \nu|X_{j_i}-\nu_{j_i}\nn\right) , \P_{^{_{\VS}}}\eta\right\rangle\right|\sigma_{^{_\mathit{R}}}^{n-2}&=&\int_{\partial
S\cap B_\varrho(x, t)}
\left|\nu_1\eta_{j_i}-\nu_{j_i}\eta_1\right|\sigma_{^{_\mathit{R}}}^{n-2}\\&=&\int_{\partial
S\cap B_\varrho(x, t)}
\left|\tau_1\wedge \tau_{j_i}\LL\Vol\right|\\&=&\int_{\partial
S\cap B_\varrho(x, t)}|\nn\wedge X_{j_i} \LL\Vol|.\end{eqnarray*}The thesis easily follows.
\end{proof}
\subsection{Proof of the Isoperimetric Inequality}\label{isopineq1}
By applying the results of Section \ref{perdindirindina} together with
Theorem \ref{rmonin} we get the following version of the
monotonicity inequality:
\begin{corollario} \label{rmonin2}Let ${S}\subset\mathbb{G}$ be a
hypersurface of class $\cont^2$ with (piecewise) $\cont^1$ boundary $\partial S$.
Then, for every
$x\in {\rm Int}(S\setminus C_S)$ we have
\begin{eqnarray}\label{rmytn2}-\frac{d}{dt}\frac{\per({S}_t)}{t^{Q-1}}
\leq\frac{1}{t^{Q-1}} \left(\int_{S_t}|\MS|\,\per
+\nis(\partial S\cap B_\varrho(x, t))+ \mu(x, t) \right)
\end{eqnarray}for $\mathcal{L}^1$-a.e. $t>0$; see Notation \ref{nuota1}.
\end{corollario}
\begin{proof}The proof follows by applying Theorem
\ref{rmonin}, Lemma \ref{crux00} and Lemma \ref{417}.
\end{proof}
\begin{no}Let $x\in {\rm Int}(S\setminus C_S)$. Henceforth, we shall set
\[\mathcal{D}(t):= \int_{S_t}|\MS|\,\per
+\nis(\partial S\cap B_\varrho(x, t))+ \mu(x, t) \quad \forall\,\,t>0.\]\end{no}
\begin{lemma}\label{lem}Let ${S}\subset\mathbb{G}$ be a
hypersurface of class $\cont^2$ with (piecewise) $\cont^1$ boundary $\partial S$. Let $x\in {\rm Int}\,(S\setminus C_S)$ and let \begin{equation}r_0(x):=
2\left(\frac{\per({S})}{k_\varrho(\nn(x))}\right)^{\frac{1}{Q-1}}.\end{equation}
For every $\lambda\geq 2$ there exists $r\in ]0, r_0(x)]$
such that\begin{eqnarray*}\label{condlem}\per({S}_{\lambda r})\leq
\lambda^{Q-1}\,r_0(x)\,\mathcal{D}(r).\end{eqnarray*}\end{lemma}
Due to Remark \ref{boundonmetricfactor}, the number $r_0(x)>0$ can be globally estimated from above and below.
\begin{no}\label{klkl}We set ${\bf r}(S):=\sup_{x\in {\rm Int}(S\setminus C_S)} r_0(x)$.
\end{no}
\begin{proof}[Proof of Lemma \ref{lem}]Fix $r\in]0,r_0(x)]$ and note that $\per({S}_t)$
is a monotone non-decreasing function of $t$ on $]r,r_0(x)]$. We start from the identity
$$\per({S}_t)/t^{Q-1}=\left(\per({S}_t)-\per\left({S}_{r_0(x)}\right)\right)/t^{Q-1}+\per\left({S}_{r_0(x)}\right)/t^{Q-1}.$$The first addend is an increasing function of $t$, while
the second one is an absolutely continuous function of $t$.
Therefore, by integrating the differential inequality
\eqref{rmytn}, we get that
\begin{equation}\label{ppp1}\frac{\per({S}_{r})}{r^{Q-1}}
\leq\frac{\per\left({S}_{r_0(x)}\right)}{\left(r_0(x)\right)^{Q-1}}+\int_r^{r_0(x)}\mathcal{D}(t)\,{t^{-(Q-1)}}dt.\end{equation}Therefore
\begin{eqnarray*}\beta&:=&\sup_{r\in]0,r_0(x)]}\frac{\per({S}_{r})}{r^{Q-1}}
\leq\frac{\per\left({S}_{r_0(x)}\right)}{\left(r_0(x)\right)^{Q-1}}+
\int_0^{r_0(x)}\mathcal{D}(t)\,{t^{-(Q-1)}}dt.\end{eqnarray*} Now
we argue by contradiction. If the lemma is false, it follows that
for every $r\in]0,r_0(x)]$\begin{equation*}\per({S}_{\lambda
r})>\lambda^{Q-1}r_0(x)\,\mathcal{D}(r).\end{equation*}From the
last inequality we infer that
\begin{eqnarray*}\int_0^{r_0(x)}\mathcal{D}(t)
\,{t^{-(Q-1)}}dt&\leq&\frac{1}{\lambda^{Q-1}r_0(x)}\int_{0}^{r_0(x)}\per({S}_{\lambda
t})\,t^{-(Q-1)}dt\\&=&\frac{1}{\lambda\,r_0(x)}\int_{0}^{\lambda
r_0(x)}\per({S}_{s})\,s^{-(Q-1)}ds\\&=&\frac{1}{\lambda\,
r_0(x)}\int_{0}^{r_0(x)}\per({S}_{s})\,s^{-(Q-1)}ds+
\frac{1}{\lambda r_0(x)}\int_{r_0(x)}^{\lambda
r_0(x)}\per({S}_{s})\,s^{-(Q-1)}ds\\&\leq
&\frac{\beta}{\lambda}+\frac{\lambda-1}{\lambda}\frac{\per({S})}{\left(r_0(x)\right)^{Q-1}}.\end{eqnarray*}Therefore,
using \eqref{ppp1} yields
$$\beta\leq\frac{\per\left({S}_{r_0(x)}\right)}{\left(r_0(x)\right)^{Q-1}}+
\frac{\beta}{\lambda}+\frac{\lambda-1}{\lambda}\frac{\per({S})}{\left(r_0(x)\right)^{Q-1}}$$and so
$$\frac{\lambda-1}{\lambda}\beta\leq\frac{2\lambda-1}{\lambda}\left(\frac{\per({S})}{\left(r_0(x)\right)^{Q-1}}\right)
\leq\frac{2\lambda-1}
{\lambda}\left(\frac{k_\varrho(\nn(x))}{2^{Q-1}}\right).$$By its
own definition, one has
$$k_\varrho(\nn(x))=\lim_{r\searrow
0^+}\frac{\per({S}_r)}{r^{Q-1}}\leq\beta.$$Furthermore,
since\footnote{Indeed, the first non-abelian Carnot group is the
Heisenberg group $\mathbb{H}^1$ for which $Q=4$.} $Q-1\geq 3$, we get
that $\lambda-1\leq \frac{2\lambda-1}{8},$ or equivalently
$\lambda\leq\frac{7}{6}$, which contradicts the hypothesis
$\lambda\geq 2$.
\end{proof}
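\begin{oss}
We spell out the elementary final step of the previous proof. Since $0<k_\varrho(\nn(x))\leq\beta$, dividing the last displayed inequality by $\beta$ gives
\[\frac{\lambda-1}{\lambda}\leq\frac{2\lambda-1}{\lambda}\cdot\frac{1}{2^{Q-1}}\leq\frac{2\lambda-1}{8\lambda},\]
because $Q-1\geq 3$; hence $8(\lambda-1)\leq 2\lambda-1$, that is, $\lambda\leq\frac{7}{6}$.
\end{oss}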
The next covering lemma is well known and can be found in \cite{BuZa}; see also \cite{FE}.
\begin{lemma}[Vitali's Covering Lemma]\label{cov}Let $(X,\varrho)$ be a compact metric space and
let $A\subseteq X$. Moreover, let ${\textsf{C}}$ be a covering of
$A$ by closed $\varrho$-balls with centers in $A$. We also assume
that each point $x$ of $A$ is the center of at least one closed
$\varrho$-ball belonging to $\textsf{C}$ and that the radii of the
balls of the covering ${\textsf{C}}$ are uniformly bounded by some
positive constant. Then, for every $\lambda> 2$ there exists a no
more than countable subset
${\textsf{C}}_\lambda\subsetneq{\textsf{C}}$ of pairwise
non-intersecting closed balls
$\overline{B}_\varrho(x_l,r_l),\,l\in \mathbb{N},$ such that
$$A\subset\bigcup_{l\in \mathbb{N}}{B}_\varrho(x_l,\lambda\,r_l).$$\end{lemma}
We are now in a position to prove our main result.
\begin{proof}[Proof of Theorem \ref{ghaioio}]We apply Lemma \ref{lem}. So let
$\lambda>2$ and, for every $x\in{\rm Int}({S}\setminus
C_{{S}})$, let $r(x)\in]0, {\bf r}(S)]$ be such
that\begin{equation}\label{fifiin}\per({S}_{r(x)})\leq\lambda^{Q-1}\,{\bf r}(S)\,
\mathcal{D}(r(x)).\end{equation}Let
$\textsf{C}=\left\{\overline{B_\varrho}(x,r(x)): x\in
{\rm Int}(S\setminus C_S)\right\}$ be a covering of
${S}$. By Lemma \ref{cov}, there
exists a no more than countable subset $\textsf{C}_\lambda\subsetneq\textsf{C}$ of pairwise
non-intersecting closed balls $\overline{B}_\varrho(x_k,r_k)$, $k\in \mathbb{N}$, such that
$${S}\setminus
C_{S}\subset\bigcup_{l\in\mathbb{N}}{B}_\varrho(x_l,\lambda\,r_l),$$
where we have set $r_l:=r(x_l)$. We
therefore get
\begin{eqnarray*}\per({S})&\leq&\sum_{l\in\mathbb{N}}\per({S}\cap
B_\varrho(x_l,\lambda\,r_l))\\&\leq&
\lambda^{Q-1}\,{\bf r}(S)\sum_{l\in\mathbb{N}}\,\mathcal{D}(r_l)\quad\,\,\,
\mbox{(by \eqref{fifiin})}
\\&=&\lambda^{Q-1}\,{\bf r}(S)\,\sum_{l\in\mathbb{N}} \left(\int_{S_{r_l}}|\MS|\,\per
+\nis(\partial S\cap B_\varrho(x_l, {r_l})) + \mu(x_l, r_l) \right)
\\&\leq&\lambda^{Q-1}\,{\bf r}(S)\, \left( \int_S |\MS|\,\per +\nis(\partial S) +\sum_{l\in\mathbb{N}} \mu(x_l, r_l)\right).
\end{eqnarray*}
By letting $\lambda\searrow 2$, we get that
\begin{eqnarray*}\per({S})\leq
2^{Q-1}{\bf r}(S)\,\left(\int_S |\MS|\,\per + \nis(\partial S)+ \sum_{l\in\mathbb{N}} \mu(x_l, r_l)\right).\end{eqnarray*}
Since$$2^{Q-1}{\bf r}(S)\leq 2^{Q-1}\sup_{x\in {\rm Int}(S\setminus C_S)}2
\left(\frac{\per({S})}{k_\varrho(\nn(x))}\right)^{\frac{1}{Q-1}}=2^Q\sup_{x\in
{\rm Int}(S\setminus C_S)}\frac{\left(\per({S})\right)^{\frac{1}{Q-1}}}{\left(k_\varrho(\nn(x))\right)^{\frac{1}{Q-1}}},$$
using \eqref{emfac} yields
$$2^{Q-1}{\bf r}(S)\leq 2^Q\,
\frac{\left(\per({S})\right)^{\frac{1}{Q-1}}}{K_1^{\frac{1}{Q-1}}};$$ see Remark \ref{boundonmetricfactor}. Therefore\begin{eqnarray}\label{finineqf}\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C_1\left(\int_S
|\MS|\,\per +\nis(\partial S)+\sum_{l\in\mathbb{N}} \mu(x_l, r_l)\right) \end{eqnarray}where we have set $C_{1}:={2^{Q}}/{K_1^{\frac{1}{Q-1}}}.$
Furthermore, we have
\begin{eqnarray*}\sum_{l\in\mathbb{N}} \mu(x_l, r_l)&=&\sum_{l\in\mathbb{N}}\sum_{i=2}^k i\,{\bf c}_i\,r_l^{i-1}\sum_{j_i\in I_{^{_{\HH_i}}}}\int_{\partial
S\cap B_\varrho(x_l, r_l)}|\nn\wedge X_{j_i} \LL\Vol|\\&=&\sum_{l\in\mathbb{N}}\sum_{i=2}^k i\,{\bf c}_i\,r_l^{i-1}\sum_{j_i\in I_{^{_{\HH_i}}}}\mu_{j_i}(\partial
S\cap B_\varrho(x_l, r_l))\end{eqnarray*}where we have
used Definition \ref{xazax} with $\xi=\nn$; see Notation \ref{nuota1}. Then $$\sum_{l\in\mathbb{N}} \mu(x_l, r_l)\leq \sum_{i=2}^k i\,{\bf c}_i\, \left( {\bf r}(S) \right)^{i-1}\sum_{j_i\in I_{^{_{\HH_i}}}}\mu_{j_i}(\partial
S);$$see Notation \ref{klkl}. Furthermore, since $$\mu_{j_i}(\partial S)=\int_{\partial
S }|\nn\wedge X_{j_i} \LL\Vol|\leq \int_{\partial
S }|\P_{^{_{\HH_i}}}\eta|\sigma_{^{_\mathit{R}}}^{n-2},$$ we have
$$\sum_{j_i\in I_{^{_{\HH_i}}}}\mu_{j_i}(\partial S)\leq h_i \int_{\partial
S }|\P_{^{_{\HH_i}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2},$$where $h_i=\mathrm{dim}\,\HH_i$.
Therefore, we conclude that
\begin{eqnarray*}\sum_{l\in\mathbb{N}} \mu(x_l, r_l)&\leq& \sum_{i=2}^k i\,{\bf c}_i\left( {\bf r}(S) \right)^{i-1}\sum_{j_i\in I_{^{_{\HH_i}}}}\mu_{j_i}(\partial S)\\&\leq & \sum_{i=2}^k i\,{\bf c}_i\,h_i\left( {\bf r}(S)\right)^{i-1} \int_{\partial
S }|\P_{^{_{\HH_i}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2}\\&\leq & C_{2}\sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_{\partial
S }|\P_{^{_{\HH_i}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2},\end{eqnarray*}where we have set $C_2:=\max_{i=2,...,k} i\,{\bf c}_i\,h_i$.
Using the last inequality it follows that\begin{eqnarray}\label{finineqf222}\left(\per({S})\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left(\int_S
|\MS|\,\per +\nis(\partial S)+ \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_{\partial
S }|\P_{^{_{\HH_i}}}\eta|\,\sigma_{^{_\mathit{R}}}^{n-2} \right) \end{eqnarray}where we have set $C_{Isop}:=\max\left\lbrace C_1, C_2 \right\rbrace$. This completes the proof.
\end{proof}
\subsection{An application of the monotonicity formula: asymptotic
behavior of $\per$}\label{asintper}
The monotonicity formula
\eqref{rmytn} (see Theorem \ref{rmonin}) can be formulated as
follows:
\begin{eqnarray}\label{zoccowa}\frac{d}{dt}\left(\frac{\per({S}_t)}{t^{Q-1}}\,\exp
\left(\int_0^t\frac{\mathcal{A}(s)+\mathcal{B}_2(s)}{\per({S}_s)}ds\right)\right)\geq
0\end{eqnarray} for $\mathcal{L}^1$-a.e. $t\in[0, r_S]$ and for
every $x\in {\rm Int}\,(S\setminus C_S)$. For the sake of simplicity, let $\partial S=\emptyset$ (and hence
$\mathcal{B}_2(s)=0$). By Theorem
\ref{BUP}, Case (i), we may pass to the limit as $t\searrow 0^+$
in the previous inequality; see Section \ref{blow-up}. Hence
\begin{eqnarray}\label{zoccona}\per({S}_t)\geq
\kappa_{\varrho}(\nn(x))\,t^{Q-1}\exp
\left(-\int_0^t\frac{\mathcal{A}(s)}{\per({S}_s)}ds\right),\end{eqnarray}for
every $x\in{\rm Int}({S}\setminus C_S)$.
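For the reader's convenience we justify the reformulation \eqref{zoccowa}: setting $I(t):=\int_0^t\frac{\mathcal{A}(s)+\mathcal{B}_2(s)}{\per({S}_s)}\,ds$, one computes
\[\frac{d}{dt}\left(\frac{\per({S}_t)}{t^{Q-1}}\,e^{I(t)}\right)=e^{I(t)}\left(\frac{d}{dt}\,\frac{\per({S}_t)}{t^{Q-1}}+\frac{\mathcal{A}(t)+\mathcal{B}_2(t)}{t^{Q-1}}\right),\]
which is nonnegative precisely by \eqref{rmytn}.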
\begin{corollario}\label{asynt}Let $\mathbb{G}$ be a $k$-step Carnot group and let
${S}\subset\mathbb{G}$ be a hypersurface of class $\cont^2$ without boundary. Assume
that $|\MS|\leq\MS^0<+\infty$. Then, for every $x\in{\rm
Int}({S}\setminus C_S)$, one has
\begin{eqnarray}\label{ak}
\per({S}_t)\geq \kappa_{\varrho}(\nn(x))\,t^{Q-1} e^{-t\,
\MS^0}\end{eqnarray}as $t
\rightarrow 0^+$.
\end{corollario}
\begin{proof}We just have to bound $\int_0^t
\frac{\mathcal{A}(s)}{\per({S}_s)}\,ds$ from above. Using Lemma
\ref{crux00} yields\[\int_0^t
\frac{\mathcal{A}(s)}{\per({S}_s)}\,ds\leq
\MS^0\,t\left(1+o(1)\right)\] as $t\rightarrow 0^+$, and
\eqref{ak} follows from \eqref{zoccona}.
\end{proof}
\begin{war}
If $S$ is smooth enough near its characteristic set $C_S$ and $\MS$ is globally bounded, the
previous asymptotic estimate can be generalized by applying the results of
Section \ref{perdindirindina}. In the following corollaries, however, \underline{we need to assume that condition \eqref{0key2} holds} at the point where the monotonicity inequality has to be proved.
\end{war}
\begin{corollario}\label{2asynt}Let $\mathbb{G}$ be a $k$-step Carnot group. Let
${S}\subset\mathbb{G}$ be a hypersurface without boundary and such that $|\MS|\leq\MS^0<+\infty$. Let
$x\in {\rm Int}(S\cap C_S)$ and assume that ${\rm ord}(x)=Q-i$, for
some $i=2,..., k$.
With no loss of generality, we suppose that
there
exists $\alpha\in\{\DH+1,...,n\}$, ${\rm ord}(\alpha)=i$, such that $S$
can be represented, locally around $x$, as $X_\alpha$-graph of
a $\cont^i$ function satisfying \eqref{0dercond}. We further assume that condition \eqref{0key2} holds at the point $x$. Then
\begin{eqnarray}\label{2asintotica}
\per({S}_t)\geq \kappa_{\varrho}(C_S(x))\,t^{Q-1} e^{-t\, \MS^0\left(\kappa_\varrho(C_S(x))+
{\bf d}_\varrho\right)}\end{eqnarray}as $t \rightarrow
0^+$.
\end{corollario}We recall that the function $\kappa_{\varrho}(C_S(x))$ has been defined in
Theorem \ref{BUP}; see Case (ii). Notice that
${\bf d}_\varrho={\sum_{i=2}^k i\,{\bf c}_i\DH_i\,{\bf b}_\varrho}$, where
${\bf b}_\varrho$ is the constant
defined by \eqref{tonda3}.
\begin{proof}By arguing as above, we may pass to the
limit in \eqref{zoccowa}
as $t\searrow 0^+$ and we get that
\begin{eqnarray*}\per({S}_t)\geq
\kappa_{\varrho}(C_S(x))\,t^{Q-1}\exp
\left(-\int_0^t\frac{\mathcal{A}(s)}{\per({S}_s)}\,ds\right).\end{eqnarray*}
By using Lemma \ref{cux0}, we get that
\begin{eqnarray*}\int_0^t\frac{\mathcal{A}(s)}{\per({S}_s)}\,ds\leq
\|\MS\|_{L^\infty(S)}\,\left(\kappa_\varrho(C_S(x))+
{\bf d}_\varrho\right)\,t\leq \MS^0\left(\kappa_\varrho(C_S(x))+
{\bf d}_\varrho\right)\,t
\end{eqnarray*}as $t\rightarrow
0^+$. This completes the proof. \end{proof}
In particular, in the case of Heisenberg
groups
$\mathbb{H}^r$, the following holds:
\begin{corollario}\label{hasynt}Let
$(\mathbb{H}^r, \varrho)$ be the Heisenberg group endowed with
the Koranyi distance; see Example \ref{Kor}. Let
$S\subset\mathbb{H}^r$ be a hypersurface of class $\cont^2$ without boundary and assume
that $|\MS|\leq\MS^0<+\infty$. Furthermore, let $x\in{\rm Int} (S \cap C_S)$ be such that condition \eqref{0key2} holds. Then
\begin{eqnarray}\label{2asintotica}
\perh({S}_t)\geq \kappa_{\varrho}(C_S(x))\,t^{Q-1} e^{-t\, \MS^0
\left(\kappa_\varrho(C_S(x))+{\bf b}_\varrho\right)}\end{eqnarray}as $t \rightarrow
0^+$.\end{corollario}The density function $\kappa_{\varrho}(C_S(x))$ has
been defined in Theorem \ref{BUP}; see Case (ii).
Moreover, ${\bf b}_\varrho$ is the constant defined by
\eqref{tonda3}.
\begin{proof}By arguing as for the non-characteristic case, we may pass to the
limit in \eqref{zoccowa}
as $t\searrow 0^+$. As above, we have
\begin{eqnarray*}\perh({S}_t)\geq
\kappa_{\varrho}(C_S(x))\,t^{Q-1}\exp
\left(-\int_0^t\frac{\mathcal{A}(s)}{\perh({S}_s)}ds\right),\end{eqnarray*}as
$t\searrow 0^+$, for every $x\in{\rm Int}(S\cap C_S)$ as above. By applying Lemma
\ref{perdopo}, we get
\begin{eqnarray*}\frac{\mathcal{A}(s)}{\perh({S}_s)}\leq
\MS^0\left(\kappa_\varrho(C_S(x))+ 2\,{\bf c}_2\,{\bf b}_\varrho \right)=\MS^0\left(\kappa_\varrho(C_S(x))+
{\bf b}_\varrho\right),
\end{eqnarray*}for every sufficiently small
$s>0$, since in this case ${\bf c}_2=\frac{1}{2}$. The conclusion then follows as in the proof of Corollary \ref{2asynt}.
\end{proof}
\begin{es} Let $(\mathbb{H}^r, \varrho)$ be the Heisenberg group endowed with
the Koranyi distance, so that $Q=2r +2$. Let
$$S=\{\exp(x_{^{_\HH}}, t)\in \mathbb{H}^r : t=0\}.$$We have
$C_S=\{0\}\subset\mathbb{H}^r $ and
$\nn=-\frac{1}{2}C^{2r+1}_{^{_\HH}} x_{^{_\HH}}$. Furthermore
\[-\MS=\div_{^{_\HH}} \nn=\frac{1}{2}\div_{\R^{2r}}(-x_2, x_1,...,-x_{2r},x_{2r-1})=0.\]Note that
$\kappa_\varrho(C_S)=\frac{O_{2r}}{4r}$, where $O_{2r}$
denotes the surface measure of the unit sphere
$\mathbb{S}^{2r-1}\subset\R^{2r}$. Thus \eqref{2asintotica} says
that $\perh({S}_t)\geq \frac{O_{2r}}{4r}\,t^{Q-1}$. This inequality can also be proved by using the formula $\perh=\frac{|x_{^{_\HH}}|}{2}\,d
\mathcal{L}^{2r}$ and then by introducing spherical coordinates on $\R^{2r}$.
\end{es}
\section{Sobolev-type inequalities on hypersurfaces}\label{sobineqg}
The isoperimetric inequality \eqref{2gha} turns out to be equivalent to
a Sobolev-type inequality. The proof is analogous to that of the
equivalence between the (Euclidean) Isoperimetric Inequality and the
Sobolev one; see \cite{BuZa}. Below we shall assume that $S$ is a compact $\cont^2$ hypersurface without boundary.
\begin{teo}\label{sobolev2}Let $\mathbb{G}$ be a $k$-step Carnot group endowed with a homogeneous metric $\varrho$ as in
Definition \ref{2iponhomnor1}. Let
$S\subset\mathbb{G}$ be a compact hypersurface of class $\cont^2$ without
boundary. Let $\MS$ be the horizontal mean curvature of
$S$ and assume that $\MS\in L^1(S; \sigma_{^{_\mathit{R}}}^{n-1})$.
Then\begin{eqnarray}\label{sersobolev1}\left(\int_S|\psi|^{\frac{Q-1}{Q-2}}\,\per\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left(\int_{S}\left(|\psi|\,|\MS|
+|\textit{grad}\ss\psi|\right)\,\per+ \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_S |\grad_{^{_{\HH_i}}}\ss\psi|\,\sigma_{^{_\mathit{R}}}^{n-1}\right) \end{eqnarray}for every
$\psi\in\cont^1(S)$, where $C_{Isop}$ is the same constant
appearing in Theorem \ref{ghaioio}.
\end{teo}
\begin{proof}The proof follows a classical argument;
see \cite{FedererFleming}, \cite{MAZ}. Since
$|\textit{grad}\ss\psi|\leq|\textit{grad}\ss|\psi||$, without loss of generality we
may assume $\psi\geq 0$. Set
$S_t:=\{x\in S: \psi(x)>t\}$. The set $S_t$ is a bounded open subset of $S$ and, by applying
Sard's Lemma, we see that its boundary $\partial S_t$ is $\cont^1$
for $\mathcal{L}^1$-a.e. $t\geq 0$. Furthermore, $S_t=\emptyset$
for every sufficiently large $t>0$. The main tools
are {\it Cavalieri's
principle}\footnote{\label{CavPrin}The following lemma, also known
as {\it Cavalieri's principle}, is a simple consequence of Fubini's
Theorem:
\begin{lemma}\label{CPrin}Let $X$ be an abstract space, $\mu$ a measure on $X$,
$\alpha>0$, $\varphi\geq 0$ and $A_t=\{x\in X: \varphi(x)>t\}$. Then
$$\int_0^{+\infty}t^{\alpha-1}\mu(A_t)\,dt=\frac{1}{\alpha}\int_{A_0}\varphi^\alpha\,d\mu.$$\end{lemma}}
and the Riemannian Coarea Formula; see \cite{BuZa}, \cite{Ch2}, \cite{FE}. We start from the
identity
\begin{equation}\label{concavp}\int_S|\psi|^{\frac{Q-1}{Q-2}}
\per=\frac{Q-1}{Q-2}\int_0^{+\infty}t^{\frac{1}{Q-2}}\,\per(S_t)\,dt\end{equation}
which follows from Lemma \ref{CPrin} with
$\alpha=\frac{Q-1}{Q-2}$. We also recall that, if
$\varphi:\R_+\longrightarrow\R_+$ is a positive {\it decreasing}
function and $\alpha\geq 1$,
then$$\alpha\int_0^{+\infty}t^{\alpha-1}\varphi(t)^\alpha\,dt\leq\left(\int_0^{+\infty}\varphi(t)\,dt\right)^\alpha.$$
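For the reader's convenience, we sketch the standard proof of this last inequality. Setting $F(t):=\int_0^t\varphi(s)\,ds$, the monotonicity of $\varphi$ yields $F(t)\geq t\,\varphi(t)$ for every $t>0$, so that
$$\frac{d}{dt}\,F(t)^\alpha=\alpha\,F(t)^{\alpha-1}\varphi(t)\geq\alpha\,\left(t\,\varphi(t)\right)^{\alpha-1}\varphi(t)=\alpha\,t^{\alpha-1}\varphi(t)^\alpha\qquad\mbox{for a.e. } t>0,$$
and the claimed inequality follows by integrating over $(0,+\infty)$.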
Using \eqref{concavp} and the last inequality yields
\begin{eqnarray*}&&\int_S\psi^{\frac{Q-1}{Q-2}}\,\per\\&=&\frac{Q-1}{Q-2}
\int_0^{+\infty}t^{\frac{1}{Q-2}}\,\per(S_t)\,dt \\&\leq&
\left[\int_0^{+\infty}\left(\per(S_t)\right)^{\frac{Q-2}{Q-1}}\,dt\right]^{\frac{Q-1}{Q-2}}\\&\leq&
\left[\int_0^{+\infty}C_{Isop}\,\left(\int_{S_t} |\MS|\,\per
+\nis(\partial S_t)+ \sum_{i=2}^k \left( {\bf r}(S_t)\right)^{i-1} \int_{\partial
S_t}|\P_{^{_{\HH_i}}}\ss \eta|\,\sigma_{^{_\mathit{R}}}^{n-2}
\right)\,dt\right]^{\frac{Q-1}{Q-2}}\\&\leq& \left[C_{Isop}\left(\int_{S}\left(|\psi|\,|\MS|
+|\textit{grad}\ss\psi|\right)\,\per+ \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_S |\grad_{^{_{\HH_i}}}\ss\psi|\,\sigma_{^{_\mathit{R}}}^{n-1}\right)\right]^{\frac{Q-1}{Q-2}},\end{eqnarray*}where
we have used \eqref{2gha}
with $S=S_t$ and the (Riemannian) Coarea formula together with the obvious estimate ${\bf r}(S_t)\leq {\bf r}(S)$.
\end{proof}
\begin{no}For any $p>0$, set
$\frac{1}{p^\ast}=\frac{1}{p}-\frac{1}{Q-1}$. Furthermore, we
denote by $p'$ the H\"{o}lder conjugate of $p$, that is, $\frac{1}{p}+\frac{1}{p'}=1$. \end{no}
Henceforth, we shall assume that $\MS$ is globally
bounded on $S$ and set
$\MS^0:=\|\MS\|_{L^\infty(S)}$.
\begin{corollario}\label{corsob1}\small Under the assumptions of Theorem \ref{sobolev2},
one has
$$\|\psi\|_{L^{p^\ast}(S)}\leq C_{Isop}\left[\MS^0 \|\psi\|_{L^p\left(S, \per\right)}+ c_{p^{\ast}}\left( \|\textit{grad}\ss\psi\|_{L^p\left(S, \per\right)} +\sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \|\grad_{^{_{\HH_i}}}\ss\psi\|_{L^p\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right) \right]$$ for every
$\psi\in \cont^1(S)$, where
$c_{p^{\ast}}:=p^\ast\,\frac{Q-2}{Q-1}$. Thus, there exists
$C_{p^{\ast}}=C_{p^{\ast}}(\MS^0, {\bf r}(S), \varrho, \mathbb{G})$ such
that$$\|\psi\|_{L^{p^\ast}\left(S, \per\right)}\leq C_{p^{\ast}}\left(
\|\psi\|_{L^p\left(S, \per\right)}+ \|\textit{grad}\ss\psi\|_{L^p\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^p\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right)$$ for every
$\psi\in \cont^1(S)$.
\end{corollario}
\begin{proof}Let us apply \eqref{sersobolev1} with
$\psi$ replaced by $\psi|\psi|^{t-1}$, for some $t>1$. It follows
that
\begin{eqnarray}\nonumber\left(\int_S|\psi|^{t\,\frac{Q-1}{Q-2}}\,\per\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left[ \int_{S}\left(\MS^0\,|\psi|^t+ t|\psi|^{t-1}
|\textit{grad}\ss\psi|\right)\per\right.\\\label{wfiha}\left. +\sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \int_{S} t|\psi|^{t-1}
|\grad_{^{_{\HH_i}}}\ss\psi|\,\sigma_{^{_\mathit{R}}}^{n-1} \right] .
\end{eqnarray}If we put $(t-1)p'=p^\ast$, one gets $p^\ast=t\,\frac{Q-1}{Q-2}$. Using the H\"{o}lder inequality
yields
\begin{eqnarray*}\left(\int_S|\psi|^{p^\ast}\per\right)^{\frac{Q-2}{Q-1}}\leq
C_{Isop}\left(\int_S|\psi|^{p^\ast}\per\right)^{\frac{1}{p'}}\times\\\times\left(\MS^0\,\|\psi\|_{L^p\left(S, \per\right)}
+ t\,\|\textit{grad}\ss\psi\|_{L^p\left(S, \per\right)} + \sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} t\,\|\grad_{^{_{\HH_i}}}\ss\psi\|_{L^p\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right).
\end{eqnarray*}Since the left-hand side equals $\|\psi\|^{t}_{L^{p^\ast}\left(S, \per\right)}$ while the first factor on the right equals $\|\psi\|^{t-1}_{L^{p^\ast}\left(S, \per\right)}$, dividing both sides by $\|\psi\|^{t-1}_{L^{p^\ast}\left(S, \per\right)}$ and recalling that $t=p^\ast\,\frac{Q-2}{Q-1}=c_{p^{\ast}}$ yields the first inequality; the second one follows at once.
\end{proof}
\begin{corollario}\label{corsob12}Under the assumptions of Theorem \ref{sobolev2}, let
$p\in[1, Q-1[$. For all $q\in[p, p^\ast]$ one has
\begin{eqnarray*}\|\psi\|_{L^{q}\left(S, \per\right)}\leq \left(1+\MS^0\,C_{Isop}\right)
\|\psi\|_{L^p\left(S, \per\right)}\\+c_{p^\ast}\,C_{Isop}\left( \|\textit{grad}\ss\psi\|_{L^p\left(S, \per\right)}+\sum_{i=2}^k \left( {\bf r}(S)\right)^{i-1} \|\grad_{^{_{\HH_i}}}\ss\psi\|_{L^p\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right)\end{eqnarray*}for
every $\psi\in \cont^1(S)$. In particular, there exists
$C_q={C}_q(\MS^0, {\bf r}(S), \varrho, \mathbb{G})$ such that
$$\|\psi\|_{L^{q}\left(S, \per\right)}\leq C_q\left(
\|\psi\|_{L^p\left(S, \per\right)}+ \|\textit{grad}\ss\psi\|_{L^p\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^p\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right)$$ for every
$\psi\in \cont^1(S)$.
\end{corollario}
\begin{proof}For any given $q\in[p, p^\ast]$ there exists $\alpha\in[0,1]$ such that
$\frac{1}{q}=\frac{\alpha}{p}+\frac{1-\alpha}{p^\ast}.$ Hence$$\|\psi\|_{L^q\left(S, \per\right)}\leq
\|\psi\|^\alpha_{L^p\left(S, \per\right)}\|\psi\|^{1-\alpha}_{L^{p^\ast}\left(S, \per\right)}
\leq\|\psi\|_{L^p\left(S, \per\right)}+\|\psi\|_{L^{p^\ast}\left(S, \per\right)},$$where we have
used the {\it interpolation inequality} and Young's inequality.
The thesis follows from Corollary \ref{corsob1}.\end{proof}
\begin{corollario}[Limit case: $p=Q-1$]\label{corsob13}Under the assumptions of Theorem \ref{sobolev2}, let
$p=Q-1$. For every $q\in[Q-1, +\infty[$ there exists
$C_q={C}_q(\MS^0, {\bf r}(S), \varrho, \mathbb{G})$ such that
$$\|\psi\|_{L^{q}\left(S, \per\right)}\leq C_q
\left(\|\psi\|_{L^p\left(S, \per\right)}+\|\textit{grad}\ss\psi\|_{L^p\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^p\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right)$$ for every
$\psi\in \cont^1(S)$.
\end{corollario}
\begin{proof}By using \eqref{wfiha} we easily get that there exists
${C}_1={C}_1(\MS^0, {\bf r}(S), t, \varrho, \mathbb{G})>0$ such that
\begin{eqnarray*}\left(\int_S|\psi|^{t\,\frac{Q-1}{Q-2}}\per\right)^{\frac{Q-2}{Q-1}}&\leq&
{C}_1\left[ \int_{S}\left(|\psi|^t+ |\psi|^{t-1} |\textit{grad}\ss\psi|\right)\,\per+ \int_S|\psi|^{t-1} |\grad_{^{_{\VS}}}\psi|\,\sigma_{^{_\mathit{R}}}^{n-1}\right]
\end{eqnarray*}for every $\psi\in \cont^1(S)$. From now on we assume that $t\geq 1$. Using the H\"{o}lder
inequality with $p=Q-1$ yields
\begin{eqnarray*}\|\psi\|_{L^{t\,\frac{Q-1}{Q-2}}\left(S, \per\right)}^t \leq
{C}_1\left[\|\psi\|_{{L^t}\left(S, \per\right)}^t +
\|\psi\|_{L^{\frac{(t-1)(Q-1)}{Q-2}}\!\left(S, \per\right)}^{t-1}\right.\times \\\left. \times\left( \|\textit{grad}\ss\psi\|_{L^{Q-1}\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^{Q-1}\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right) \right]
\end{eqnarray*}for every $\psi\in \cont^1(S)$ and $t\geq 1$. By means of Young's
inequality, we get that there exists another constant
${C_2}={C_2}(\MS^0, {\bf r}(S), t, \varrho, \mathbb{G})$ such that
\begin{eqnarray*}\|\psi\|_{L^{t\,\frac{Q-1}{Q-2}}\left(S, \per\right)} \leq
{C_2}\left(\|\psi\|_{L^t\left(S, \per\right)}+
\|\psi\|_{L^{\frac{(t-1)(Q-1)}{Q-2}}\!\left(S, \per\right)}+\right.\\\left.
\|\textit{grad}\ss\psi\|_{L^{Q-1}\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^{Q-1}\left(S,\sigma_{^{_\mathit{R}}}^{n-1}\right)}\right).
\end{eqnarray*}By setting $t=Q-1$ in the last inequality we get
that
\begin{eqnarray*}\|\psi\|_{L^{\frac{(Q-1)^2}{Q-2}}\!\left(S, \per\right)}\leq
{C_2}\left(2\|\psi\|_{L^{Q-1}\left(S, \per\right)}+ \|\textit{grad}\ss\psi\|_{L^{Q-1}\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^{Q-1}\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)}\right).
\end{eqnarray*}By reiterating this procedure for $t=Q, Q+1,...$ (note that the exponents $t\,\frac{Q-1}{Q-2}$ tend to $+\infty$, while the intermediate values of $q$ are handled by interpolation, as in the proof of Corollary \ref{corsob12})
one can show that for all $q\geq Q-1$ there exists
$C_q={C}_q(\MS^0, {\bf r}(S), \varrho, \mathbb{G})$ such
that$$\|\psi\|_{L^q\left(S, \per\right)}\leq C_q\left(\|\psi\|_{L^{Q-1}\left(S, \per\right)}+
\|\textit{grad}\ss\psi\|_{L^{Q-1}\left(S, \per\right)}+ \|\grad_{^{_{\VS}}}\psi\|_{L^{Q-1}\left(S, \sigma_{^{_\mathit{R}}}^{n-1}\right)} \right)$$for every
$\psi\in \cont^1(S)$, as wished.
\end{proof}
{\footnotesize \noindent Francescopaolo Montefalcone:\\
Dipartimento di Matematica\\
Universit\`{a} degli Studi di Padova\\
Address: Via Trieste, 63,\,\,
35121 Padova (Italy)
\\ {\it E-mail}: {\textsf{[email protected]}}}
\end{document}
\begin{document}
\title[Twisted actions and regular Fell bundles over inverse semigroups]{Twisted actions and regular Fell bundles \\ over inverse semigroups}
\author{Alcides Buss}
\email{[email protected]}
\author{Ruy Exel}
\email{[email protected]}
\address{Departamento de Matemática\\
Universidade Federal de Santa Catarina\\
88.040-900 Florianópolis-SC\\
Brasil}
\begin{abstract}
We introduce a new notion of twisted actions of inverse semigroups and show
that they correspond bijectively to certain \emph{regular} Fell bundles over
inverse semigroups, yielding in this way a structure classification of such
bundles. These include as special cases all the \emph{stable} Fell bundles.
Our definition of twisted actions properly generalizes a previous one introduced
by Sieben and corresponds to Busby-Smith twisted actions in the group case. As
an application we describe twisted \'etale groupoid \cstar{}algebras in terms of
crossed products by twisted actions of inverse semigroups and show that Sieben's
twisted actions essentially correspond to twisted \'etale groupoids with
topologically trivial twists.
\end{abstract}
\subjclass[2000]{46L55, 20M18.}
\keywords{Fell bundle, inverse semigroup, ternary ring, twisted action, twisted groupoid, crossed product.}
\thanks{This research was partially supported by CNPq.}
\maketitle
\tableofcontents
\section{Introduction}
\label{sec:introduction}
In \cite{Exel:twisted.partial.actions} the second named author introduced a method for constructing a Fell bundle
(also called a \cstar{}algebraic bundle \cite{fell_doran}) over a group $G$, starting
from a \emph{twisted partial action} of $G$ on a \cstar{}algebra. The relevance of
this construction is due to the fact that a very large number of Fell bundles, including all stable,
second countable ones, arise from a twisted partial action of the base group on the unit fiber algebra \cite[Theorem~7.3]{Exel:twisted.partial.actions}.
It is the purpose of the present article to extend the above ideas in order to
embrace Fell bundles over \emph{inverse semigroups}.
The notion of Fell bundles over inverse semigroups was introduced by Sieben \cite{SiebenFellBundles}
and further developed in \cite{Exel:noncomm.cartan} and \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids}.
Among its important occurrences one has that every twisted \'etale groupoid or, more generally, every Fell bundle over an \'etale groupoid
(in the sense of Kumjian \cite{Kumjian:fell.bundles.over.groupoids}) gives rise to a Fell bundle
over the inverse semigroup of its open bisections (see \cite[Example~2.11]{BussExel:Fell.Bundle.and.Twisted.Groupoids}).
Given a Fell bundle $\A =\{\A_s\}_{s\in S}$ over an inverse
semigroup $S$, one says that $\A$ is \emph{saturated} if $\A_{st}$
coincides with the closed linear span of $\A_s\A_t$ for all $s$
and $t$ in $S$. The requirement that a Fell bundle over an inverse
semigroup be saturated is not as severe as in the case of groups because in
the former situation one may apply the process of \emph{refinement} (see
Section~\ref{sec:Refinements}), transforming any Fell bundle into a saturated one (albeit
over a different inverse semigroup). We will therefore mostly restrict our
attention to saturated Fell bundles.
Both in the case of groups and of inverse semigroups each fiber of a Fell
bundle possesses the structure of a \emph{\tro} (see \cite{Zettl:TROs}, \cite[Section~4]{Exel:twisted.partial.actions}),
namely a mathematical structure isomorphic to a closed linear space of
operators on a Hilbert space which is invariant under the ternary operation
\begin{equation*}
(x,y,z) \mapsto xy^*z.
\end{equation*}
Under special circumstances (see Definition~\ref{def:RegularTRO} below or \cite[Section~5]{Exel:twisted.partial.actions}) a {\tro}
$M$ admits a partial isometry $u$ (acting on the Hilbert space where $M$ is
represented) such that $M^*u=M^*M$, and $uM^*=MM^*$, in which case we say that
$u$ is \emph{associated to} $M$ and that $M$ is regular.
Likewise, given a Fell bundle $\A =\{\A_s\}_{s\in S}$ over an
inverse semigroup $S$, we say that $\A$ is \emph{regular} if every
$\A_s$ is regular as a {\tro}. Assuming this is the case, one may
consequently choose a family of partial isometries $\{u_s\}_{s\in S}$,
where each $u_s$ is associated to $\A_s$. These give rise to two
important families of objects, namely the automorphisms
\begin{equation}\label{FormulaForBeta}
\beta_s\colon a\in \A_{s^*s} \mapsto u_s a u_s^*\in \A_{ss^*},
\end{equation}
and the \emph{cocycle} $\omega=\{\omega(s,t)\}_{s, t\in S}$, given by
\begin{equation}\label{FormulaForOmega}
\omega(s,t) = u_su_tu_{st}^*.
\end{equation}
Although the partial isometries $u_s$ live outside our Fell bundle, the
presence of the $\beta_s$ and of the $\omega(s,t)$ is felt at the level of the
fibers over idempotent elements of $S$: this is obviously so with respect to
$\beta_s$, as its domain and range are fibers over idempotents, and one may show
that $\omega(s,t)$ is nothing but a unitary multiplier of the fiber over the
idempotent element $st(st)^*$.
The main point of departure for our research was the challenge of identifying a
suitable set of properties satisfied by the $\beta_s$ and the $\omega(s,t)$
which, when taken as axioms referring to abstractly given collections
$\{\beta_s\}_{s\in S}$ and $\{\omega(s,t)\}_{s,t\in S}$, could be used to
construct a Fell bundle over $S$. The properties we decided to pick are to be
found in Definition~\ref{def:twisted action} below, describing our notion of a \emph{twisted action of an inverse semigroup}.
Returning to the $\beta_s$ and the $\omega(s,t)$ of \eqref{FormulaForBeta} and
\eqref{FormulaForOmega}, there is in fact a myriad of algebraic relations which
can be proven for these, a sample of which is listed under Proposition~\ref{prop:properties of twisted action coming from partial isometries}.
Having as our main goal to generalize the group case of \cite{Exel:twisted.partial.actions},
we were happy to welcome into our definition two of the main axioms adopted in the group case \cite[Definition 2.1]{Exel:twisted.partial.actions},
namely properties~\eqref{prop:properties of twisted action coming from partial isometries:item:beta_r beta_s=Ad_omega(r,s)beta_rs} and
\eqref{prop:properties of twisted action coming from partial isometries:item:CocycleCondition}
of Proposition~\ref{prop:properties of twisted action coming from partial isometries}.
However, choosing the appropriate replacement for \cite[Definition 2.1.d]{Exel:twisted.partial.actions}, namely
\begin{equation}\label{SiebenCondition}
\omega(t,e)=\omega(e,t)=1,
\end{equation}
where $e$ denotes the unit of the group, turned out to be a major difficulty.
Nonetheless this dilemma proved to be the gateway to interesting and profound
phenomena relating, among other things, to topological aspects of twisted
groupoids, as we hope to be able to convey below.
Sieben \cite{SiebenTwistedActions} has considered a similar notion of twisted action, where he postulates
\eqref{SiebenCondition} for every $t$ and $e$ in $S$, such that $e$ is
idempotent. Although this may be justified by the fact that the idempotents of
$S$ play a role similar to the unit in a group, it cannot be proved in the model
situation where the $\omega(s,t)$ are given by \eqref{FormulaForOmega}, so we
decided not to adopt it.
After groping our way in the dark for quite some time (at one point we had
an untold number of strange looking axioms) we settled for~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} and~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} in Definition~\ref{def:twisted action}
which the reader may still find a bit awkward but which precisely fulfills our expectations.
Needless to say, regular Fell bundles over an inverse semigroup $S$ are then
shown to be in a one-to-one correspondence with twisted actions of $S$, as
proven in Corollary~\ref{cor:CorrespondenceRegularFellBundlesAndTwistedActions},
hence providing the desired generalization of the main result of \cite{Exel:twisted.partial.actions}.
Sieben's condition~\eqref{SiebenCondition}, although not satisfactory for our
goals, is quite relevant in a number of ways. Among other things it implies our
axioms and hence Sieben's definition \cite{SiebenTwistedActions} of twisted actions
is a special case of ours (see Proposition~\ref{prop:SiebensTwistedActions}).
As our main application we consider an important class of Fell bundles over
inverse semigroups, obtained from twisted groupoids \cite[Section~2]{Kumjian:cstar.diagonals}, \cite[Section~4]{RenaultCartan}.
Given an \'etale groupoid $\G$ equipped with a twist $\Sigma$,
one may consider the associated complex line bundle $L =(\Sigma\times\C)/\Torus$ \cite[Example 5.5]{Deaconi_Kumjian_Ramazan:Fell.Bundles},
which is a Fell bundle over $\G$ in the sense of \cite{Kumjian:fell.bundles.over.groupoids}.
Therefore, applying the procedure outlined in \cite[Example 2.11]{BussExel:Fell.Bundle.and.Twisted.Groupoids}
to $(\G, L)$, we may form a Fell bundle $\{\A_s\}_s$, over the inverse
semigroup of all open bisections $s\subseteq \G$.
En passant, the class of such Fell bundles consists precisely of the \emph{semi-abelian} ones, namely those having commutative fibers over idempotent
semigroup elements \cite[Theorem~3.36]{BussExel:Fell.Bundle.and.Twisted.Groupoids}.
Recall from \cite[Example 2.11]{BussExel:Fell.Bundle.and.Twisted.Groupoids} that for each bisection $s\subseteq \G$,
$\A_s$ is defined as the space of all continuous sections of $L$ over $s$ vanishing at infinity. Regularity
being the key hypothesis of our main Theorem, one should ask whether or not $\A_s$ is
regular. The answer is affirmative when $L$ is topologically trivial
over $s$, because one may then choose a nonvanishing continuous section over $s$
which turns out to be associated to $\A_s$.
Restricting our Fell bundle to a subsemigroup consisting of \emph{small}
bisections, i.e.~bisections where $L$ is topologically trivial (recall that $L$ is necessarily locally trivial), we get a
regular one to which we may apply our main result, describing the bundle in question
in terms of a twisted inverse semigroup action.
As an immediate consequence (Theorem~\ref{theo:CorrespondenceTwistedGroupoidsAndTwistedActions}) we prove that the twisted groupoid
\cstar{}algebra is isomorphic to the crossed product of $\contz\big(\G^{(0)}\big)$ by a
twisted inverse semigroup action. This simultaneously generalizes
\cite[Theorem~3.3.1]{Paterson:Groupoids}, \cite[Theorem~8.1]{Quigg.Sieben.C.star.actions.r.discrete.groupoids.and.inverse.semigroups} and \cite[Theorem~9.9]{Exel:inverse.semigroups.comb.C-algebras}.
Still speaking of Fell bundles arising from a twisted groupoid, we found a very
nice characterization of Sieben's condition (Proposition~\ref{prop:SiebenTwistedActions=TopologicalTrivial}): it holds if and only if the line
bundle $L$ is topologically trivial (although it need not be trivial as a Fell bundle).
Our method sheds new light over the historic evolution of the notion of \emph{twisted groupoids}.
Indeed, recall that Renault \cite{RenaultThesis} first viewed twisted groupoids as those equipped with continuous two-cocycles,
although more recently the most widely accepted notion of a twist over a groupoid is that of a groupoid extension
\begin{equation*}
\Torus\times\G^{(0)} \to \Sigma \to \G
\end{equation*}
\cite[Section~2]{Kumjian:cstar.diagonals}, \cite[Section~2]{Muhly.Williams.Continuous.Trace.Groupoid}, \cite[Section~4]{RenaultCartan}.
While a two-cocycle is known to give rise to a groupoid extension,
the converse is not true, the obstruction being that $\Sigma$ may be
topologically nontrivial as a circle bundle over $\G$ \cite[Example~2.1]{Muhly.Williams.Continuous.Trace.Groupoid}.
Our point of view, based on inverse semigroups, may be seen as unifying the \emph{cocycle} and the
\emph{extension} points of view, because the Fell bundle over
the inverse semigroup of \emph{small} bisections may always be described by a
\emph{cocycle}, namely the $\omega(s,t)$ of our twisted action, even when the
twist itself is topologically nontrivial and hence does
not come from a 2-cocycle in Renault's sense. Nevertheless, when a twisted
\'etale groupoid does come from a 2-cocycle $\tau$ in Renault's sense, then
there is a close relationship between $\tau$ and our cocycle $\omega$ (Proposition~\ref{prop:RelationRenaultAndSiebensCocycles}).
Most of our results are based on the regularity property of Fell bundles but in
some cases we can do without it. The key idea behind this generalization is
based on the notion of \emph{refinement} (see Definition~\ref{def:refinement}). In topology it
is often the case that assertions fail to hold globally, while holding locally.
In many of these circumstances one may successfully proceed after restricting
oneself to a covering of the space formed by small open sets where the assertion
in question holds. The idea of Fell bundle refinement is the inverse semigroup equivalent
of this procedure.
Although the process of refining a Fell bundle changes everything, including the
base inverse semigroup, the cross-sectional algebras are unchanged (Theorem~\ref{theo:RefinementPreserveFullC*Algebras}),
as one would expect. Moreover, if a semi-abelian Fell bundle $\A$ admits
$\B$ as a refinement, then the underlying twisted groupoids $(\G_\A,\Sigma_\A)$ and
$(\G_\B, \Sigma_\B)$ obtained by \cite[Section~3.2]{BussExel:Fell.Bundle.and.Twisted.Groupoids}
are isomorphic (Proposition~\ref{prop:RefinementPreserveTwistedGroupoids}).
A {\tro} is said to be \emph{locally regular} if it is generated by its regular
ideals (Definition~2.8). Likewise a Fell bundle over an inverse semigroup is
said to be \emph{locally regular} if its fibers are locally regular as {\tros}.
Our main reason for considering locally regular Fell bundles is that all semi-abelian
ones possess this property (Proposition~\ref{prop:commutative=>loc.regular}). Furthermore, our motivation for
studying the notion of refinement is that every locally regular bundle admits a
saturated regular refinement (Proposition~\ref{prop:RefinementForLocRegularFellBundle}), even if the original bundle is
not saturated. Therefore the scope of our theory is extended to include all locally
regular bundles.
\section{Regular ternary rings and imprimitivity bimodules}
\label{sec:regular TROs}
In this section we shall study a special class of ternary rings and imprimitivity bimodules named \emph{regular}.
This notion, which is the basis for our future considerations, has been first introduced in \cite{Exel:twisted.partial.actions}
building on ideas from \cite{BGR:MoritaEquivalence}.
Let $\hils$ be a Hilbert space, and let $\bound(\hils)$ denote the space of all bounded linear operators on $\hils$.
Recall that a ternary ring of operators is a closed subspace $\troa\sbe\bound(\hils)$ such that $\troa\troa^*\troa\sbe \troa$. It is a consequence that $\troa\troa^*\troa=\troa$ (see \cite[Corollary~4.10]{Exel:twisted.partial.actions}), where $\troa\troa^*\troa$ means the closed linear span of $\{xy^*z\colon x,y,z\in \troa\}$. Moreover, it is easy to see that $\troa^*\troa$ and $\troa\troa^*$ (closed linear spans) are \cstar{}subalgebras of $\bound(\hils)$. We refer to \cite{Exel:twisted.partial.actions} and \cite{Zettl:TROs} for more details on ternary rings.
\begin{lemma}\label{lem:ElementsAssociatedToTRO}
Let $\troa \sbe\bound(\hils)$ be a {\tro} and let $u\in \bound(\hils)$. Consider the following properties\textup:
\begin{enumerate}[(a)]
\item $\troa^*u=\troa^*\troa$;\label{lem:ElementsAssociatedToTRO:item:M*u=M*M}
\item $u\troa^*=\troa\troa^*$;\label{lem:ElementsAssociatedToTRO:item:uM*=MM*}
\item $uu^*\troa=\troa$;\label{lem:ElementsAssociatedToTRO:item:uu*M=M}
\item $\troa u^*u=\troa$.\label{lem:ElementsAssociatedToTRO:item:Mu*u=M}
\end{enumerate}
Then
\begin{enumerate}[(i)]
\item~\eqref{lem:ElementsAssociatedToTRO:item:M*u=M*M} and~\eqref{lem:ElementsAssociatedToTRO:item:uM*=MM*} imply \eqref{lem:ElementsAssociatedToTRO:item:uu*M=M};
\item~\eqref{lem:ElementsAssociatedToTRO:item:M*u=M*M} and~\eqref{lem:ElementsAssociatedToTRO:item:uM*=MM*} imply \eqref{lem:ElementsAssociatedToTRO:item:Mu*u=M};
\item~\eqref{lem:ElementsAssociatedToTRO:item:M*u=M*M} and~\eqref{lem:ElementsAssociatedToTRO:item:uu*M=M} imply \eqref{lem:ElementsAssociatedToTRO:item:uM*=MM*};
\item~\eqref{lem:ElementsAssociatedToTRO:item:uM*=MM*} and~\eqref{lem:ElementsAssociatedToTRO:item:Mu*u=M} imply \eqref{lem:ElementsAssociatedToTRO:item:M*u=M*M}.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item $uu^*\troa=u\troa^*\troa=\troa\troa^*\troa=\troa$.
\item $\troa u^*u=\troa \troa^* u=\troa\troa^*\troa=\troa$.
\item $u\troa^*=u\troa^*\troa\troa^*=uu^*\troa\troa^*=\troa\troa^*$.
\item $\troa^*u=\troa^*\troa\troa^*u=\troa^*\troa u^*u=\troa^*\troa$.
\end{enumerate}
\vskip-14pt
\end{proof}
\begin{definition}\label{def:RegularTRO}
We say that a {\tro} $\troa\sbe\bound(\hils)$ is regular if there is $u\in \bound(\hils)$ satisfying the
properties~\eqref{lem:ElementsAssociatedToTRO:item:M*u=M*M}-\eqref{lem:ElementsAssociatedToTRO:item:Mu*u=M} of the lemma above.
In this case, we say that $u$ is \emph{associated to $\troa$} and write $u\sim \troa$.
\end{definition}
By \cite[Proposition~5.3]{Exel:twisted.partial.actions}, the above definition is equivalent to Definition~5.1 in \cite{Exel:twisted.partial.actions}.
\begin{example}
Essentially, every regular {\tro} $\troa\sbe \bound(\hils)$ has the following form (this will become clearer as we proceed):
suppose $B\sbe \bound(\hils)$ is a \cstar{}subalgebra and suppose that $m$ is an element of $\bound(\hils)$
such that $m^*mB$ (closed linear span) is equal to $B$. Then it is easy to see that $\troa\defeq mB$ (closed linear span) is a regular {\tro}.
Note that $\troa^*\troa=B$ and that $A\defeq \troa\troa^*=mBm^*$ is a \cstar{}subalgebra of $\bound(\hils)$ which is isomorphic to $B$ (see also Corollary~\ref{cor:RegularIFFPartialIsometry} below).
\end{example}
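To verify the claim in the example above, note that, with all products denoting closed linear spans,
\begin{equation*}
\troa\troa^*\troa=mBB^*m^*mB=m(Bm^*mB)=mB=\troa,
\end{equation*}
using $BB^*=B$ and the hypothesis $m^*mB=B$ (together with its adjoint version $Bm^*m=B$). The same identities give $m\troa^*\troa=\troa$ and $\troa\troa^*m=\troa$, so that $m$ is associated to $\troa$ and $\troa$ is regular.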
\begin{proposition}\label{prop:PolarDecompositionAssociated}
Let $\troa\sbe\bound(\hils)$ be a {\tro}. Suppose $m\in \troa$ is associated to $\troa$ and let $m=u|m|$ be the polar decomposition of $m$.
Then $u$ is associated to $\troa$, $m^*$ and $u^*$ are associated to $\troa^*$,
$|m|$ and $m^*m$ are associated to $\troa^*\troa$, and $|m^*|$ and $mm^*$ are associated to $\troa\troa^*$.
\end{proposition}
\begin{proof}
Since $m\troa^*\troa=\troa$, we get $m^*m\troa^*\troa=m^*\troa=\troa^*\troa$. Taking adjoints, we also get $\troa^*\troa m^*m=\troa^*\troa$.
This means that $m^*m$ is associated to $\troa^*\troa$. This implies, in particular, that $m^*m$ and hence also $|m|=(m^*m)^{\frac{1}{2}}$
are multipliers of the \cstar{}algebra $\troa^*\troa$. Thus
\begin{equation*}
|m|\troa^*\troa\sbe \troa^*\troa= m^*m \troa^*\troa=|m|^2\troa^*\troa = |m||m|\troa^*\troa\sbe |m|\troa^*\troa.
\end{equation*}
This yields $\troa^*\troa|m|=|m|\troa^*\troa=\troa^*\troa$, so that $|m|$ is associated to $\troa^*\troa$.
Applying the involution to the properties~\eqref{lem:ElementsAssociatedToTRO:item:M*u=M*M}-\eqref{lem:ElementsAssociatedToTRO:item:Mu*u=M} of Lemma~\ref{lem:ElementsAssociatedToTRO}, it is easy to see that $n\in \bound(\hils)$ is associated to $\troa$ if and only if $n^*$ is associated to $\troa^*$. Thus $m^*$ is associated to $\troa^*$, and hence by the argument above,
$mm^*$ and $|m^*|$ are associated to $\troa\troa^*$. It remains to see that $u$ is associated to $\troa$. We have
\begin{equation*}
u^*\troa=u^*\troa\troa^*\troa=u^*m\troa^*\troa=|m|\troa^*\troa=\troa^*\troa
\end{equation*}
and (using that $um^*=|m^*|$)
\begin{equation*}
u\troa^*=u\troa^*\troa\troa^*=um^*\troa\troa^*=|m^*|\troa\troa^*=\troa\troa^*.
\end{equation*}
\vskip-14pt
\end{proof}
Notice that if $u\sim \troa$, then
\begin{equation*}
\troa H=u\troa^*\troa H\sbe uH\quad\mbox{and}\quad \troa^*H=u^*\troa\troa^*H\sbe u^*H.
\end{equation*}
If both inclusions above are equalities, we say that $u$ is \emph{strictly associated to $\troa$} and write $u\ssim \troa$.
We can always assume that $u$ is strict by replacing $u$ by $up$, where
$p$ is the projection onto $\troa^*\troa H$. In fact, $upH=u\troa^*\troa H=\troa H$ and $up\sim \troa$ because
\begin{equation*}
up\troa^*=up\troa^*\troa\troa^*=u\troa^*=\troa\troa^*\quad\mbox{and}\quad \troa^*up=\troa^*\troa p=\troa^*\troa.
\end{equation*}
Given a \cstar{}subalgebra $A\sbe \bound(\hils)$, we write $1_A$ for the unit of the multiplier \cstar{}algebra $\mult(A)$ of $A$
(which we also represent in $\bound(\hils)$ in the usual way).
\begin{proposition}\label{prop:PartialIsometryStrictlyAssociated}
Let $\troa\sbe\bound(\hils)$ be a {\tro}, and let $u\in \bound(\hils)$ be a partial isometry associated to $\troa$.
Then the following assertions are equivalent\textup:
\begin{enumerate}[(i)]
\item $u$ is strictly associated to $\troa$;\label{prop:PartialIsometryStrictlyAssociated:item:uStrictlyAssociatedToM}
\item $\troa H=uH$ or $\troa^*H=u^*H$;\label{prop:PartialIsometryStrictlyAssociated:item:MH=uH_or_M*H=u*H}
\item $u^*u=1_{\troa^*\troa}$ and $uu^*=1_{\troa \troa^*}$;\label{prop:PartialIsometryStrictlyAssociated:item:u*u=1_and_uu*=1}
\item $u^*u=1_{\troa^*\troa}$ or $uu^*=1_{\troa \troa^*}$.\label{prop:PartialIsometryStrictlyAssociated:item:u*u=1_or_uu*=1}
\end{enumerate}
\end{proposition}
\begin{proof}
If $uH=\troa H$, then $p=u^*u$ is the projection onto
\begin{equation*}
u^*uH=u^*H=\troa^*H=\troa^*\troa H.
\end{equation*}
Thus, $px=x=xp$ for all $x\in \troa^*\troa$, so that $p=1_{\troa^*\troa}$.
Conversely, if $u^*u=1_{\troa^*\troa}$, then
\begin{equation*}
uH=uu^*uH=u1_{\troa^*\troa}H=u\troa^*\troa H=\troa\troa^*\troa H=\troa H.
\end{equation*}
Similarly, $u^*H=\troa^*H$ if and only if $uu^*=1_{\troa\troa^*}$.
Now if $\troa H=uH$, then
\begin{equation*}
u^*H=u^*uu^*H=u^*\troa\troa^*H=\troa^*\troa\troa^*H=\troa^*H.
\end{equation*}
Similarly, if $u^*H=\troa^*H$, then $u H=\troa H$. All these considerations imply the
equivalences~\eqref{prop:PartialIsometryStrictlyAssociated:item:uStrictlyAssociatedToM}$\Leftrightarrow$
~\eqref{prop:PartialIsometryStrictlyAssociated:item:MH=uH_or_M*H=u*H}$\Leftrightarrow$
~\eqref{prop:PartialIsometryStrictlyAssociated:item:u*u=1_and_uu*=1}$\Leftrightarrow$
~\eqref{prop:PartialIsometryStrictlyAssociated:item:u*u=1_or_uu*=1}.
\end{proof}
If $m=u|m|$ is the polar decomposition of an element $m\in \bound(\hils)$ associated to a {\tro} $\troa\sbe\bound(\hils)$, then
$m$ is strictly associated to $\troa$ if and only if $u$ is strictly associated to $\troa$. In fact, this follows from the equalities
\begin{equation*}
uH=u|m|H=mH\quad\mbox{and}\quad u^*H=u^*|m^*|H=m^*H.
\end{equation*}
As a consequence of Propositions~\ref{prop:PolarDecompositionAssociated} and~\ref{prop:PartialIsometryStrictlyAssociated}, we get the following:
\begin{corollary}\label{cor:RegularIFFPartialIsometry}
A {\tro} $\troa\sbe\bound(\hils)$ is regular if and only if there is a partial isometry $u\in \bound(\hils)$ satisfying
\begin{equation*}
u\troa^*\troa=\troa,\quad \troa\troa^*u=\troa,\quad u^*u=1_{\troa^*\troa}\quad\mbox{and}\quad uu^*=1_{\troa\troa^*}.
\end{equation*}
In particular, the map $a\mapsto u^*au$ is a \Star{}isomorphism from $\troa\troa^*$ to $\troa^*\troa$.
\end{corollary}
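The last assertion of the corollary can be checked directly: for $a,a'\in \troa\troa^*$ we have
\begin{equation*}
(u^*au)(u^*a'u)=u^*a1_{\troa\troa^*}a'u=u^*(aa')u\quad\mbox{and}\quad(u^*au)^*=u^*a^*u,
\end{equation*}
so $a\mapsto u^*au$ is a \Star{}homomorphism; it maps $\troa\troa^*$ onto $u^*\troa\troa^*u=u^*(\troa\troa^*u)=u^*\troa=\troa^*\troa$, and $b\mapsto ubu^*$ is its inverse.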
Later, we shall need the following fact:
\begin{proposition}\label{prop:strictlyAssociated=>weakClosure}
Let $\troa\sbe\bound(\hils)$ be a {\tro} and let $u\in \bound(\hils)$ be strictly associated to $\troa$.
Then $u$ lies in the weak closure of $\troa$ within $\bound(\hils)$.
\end{proposition}
\begin{proof} Let $\{e_i\}_i$ be an approximate unit for the \cstar{}algebra $\troa\troa^*$. We claim that
$u =\lim\limits_i e_iu$ in the strong topology of $\bound(\hils)$. In order to prove this let $\xi\in\hils$. Then
\begin{equation*}
u\xi\in u\hils = \troa\hils = \troa\troa^*\troa\hils\sbe \troa\troa^*\hils,
\end{equation*}
so by Cohen's Factorization Theorem, we may write $u\xi = a\eta$, with $a \in \troa\troa^*$ and $\eta\in \hils$. Therefore
\begin{equation*}
\lim\limits_i e_iu\xi =\lim\limits_i e_ia\eta = a\eta = u\xi,
\end{equation*}
thus proving our claim. It follows that $u$ belongs to the weak closure of $\troa\troa^*u=\troa\troa^*\troa = \troa$.
\end{proof}
Let $\troa$ be a {\tro}. An \emph{ideal} of $\troa$ is
a closed subspace $\trob\sbe\troa$ satisfying $\trob\troa^*\troa\sbe \trob$
and $\troa\troa^*\trob\sbe\trob$. Alternatively, $\trob$ is an ideal of $\troa$ if and only if $\trob$ is a sub-{\tro} of $\troa$
for which $\trob^*\trob$ is an ideal of $\troa^*\troa$ and $\trob\trob^*$ is an ideal of $\troa\troa^*$.
It is easy to see that our definition of ideals in {\tros} is equivalent to Definition~6.1 in \cite{Exel:twisted.partial.actions}. By Proposition~6.3 in \cite{Exel:twisted.partial.actions},
if $\troa$ is a regular {\tro}, then so is any ideal $\trob\sbe\troa$. Moreover, if $u\sim \troa$, then $u\sim\trob$.
\begin{definition}\label{def:TROLocallyRegular}
We say that a {\tro} $\troa$ is \emph{locally regular} if it is generated by regular ideals
in the sense that there is a family $\{\troa_i\}_{i\in I}$ of ideals $\troa_i\sbe\troa$ such that each $\troa_i$ is a regular {\tro} and
$$\troa=\overline{\sum\limits_{i\in I}\troa_i}.$$
\end{definition}
\begin{proposition}\label{prop:commutative=>loc.regular}
Let $\troa\sbe\bound(\hils)$ be a {\tro}. If $A\defeq\troa\troa^*$ and $B\defeq\troa^*\troa$
are commutative \cstar{}algebras, then $\troa$ is locally regular.
\end{proposition}
\begin{proof}
Given $m\in \troa$, we show that the ideal $\gen{m}\sbe\troa$ generated by $m$,
namely $\gen{m}\defeq\troa\troa^*m\troa^*\troa$ (closed linear span) is regular.
Since $B=\troa^*\troa$ is commutative, we have
$$\gen{m}=AmB=\troa(\troa^*m)(\troa^*\troa)=\troa(\troa^*\troa)(\troa^*m)=\troa\troa^*m=Am.$$
Similarly, since $A=\troa\troa^*$ is commutative, we have $\gen{m}=mB$. Since $m\in \gen{m}$, this implies that $\gen{m}$ is regular.
\end{proof}
Although we have chosen to work with {\tros} concretely represented in Hilbert spaces, there is an abstract approach free of representations.
In fact, note that a {\tro} $\troa\sbe\bound(\hils)$ may be viewed as an imprimitivity Hilbert $A,B$-bimodule, where $A=\troa\troa^*$ and
$B=\troa^*\troa$ and the Hilbert bimodule structure (left $A$-action, right $B$-action and inner products) is given by the operations in $\bound(\hils)$. Conversely, any imprimitivity Hilbert $A,B$-bimodule $\F$ is isomorphic to some {\tro} $\troa\sbe\bound(\hils)$ (viewed as a bimodule). We refer the reader to \cite{Echterhoff.et.al.Categorical.Imprimitivity}
for more details on Hilbert modules.
Given Hilbert $B$\nb-modules $\F_1$ and $\F_2$,
we write $\K(\F_1,\F_2)$ and $\Ls(\F_1,\F_2)$ for the spaces of compact and adjointable operators $\F_1\to \F_2$, respectively.
The \emph{multiplier bimodule} of $\F$ is defined as $\mult(\F)\defeq\Ls(B,\F)$. This is, indeed, a Hilbert $\mult(A),\mult(B)$-bimodule with respect
to the canonical structure (see \cite[Section~1.5]{Echterhoff.et.al.Categorical.Imprimitivity} for details).
The notion of (local) regularity defined previously may be translated to the setting of Hilbert bimodules as follows.
\begin{definition}\label{def:RegularBimodule}
Let $\F$ be an imprimitivity Hilbert $A,B$-bimodule. We say that $\F$ is \emph{regular} if there is a unitary multiplier
$u\in \mult(\F)$, that is, an adjointable operator $u\colon B\to \F$ such that $u^*u=1_{B}$ and $uu^*=1_{A}$ (here we tacitly identify
$\mult(A)\cong\Ls(\F)\cong\mult(\K(\F))$ in the usual way). We say that $\F$ is \emph{locally regular} if it is generated
by regular sub-$A,B$-bimodules, that is, if there is a family $\{\F_i\}_{i\in I}$ of sub-$A,B$-bimodules $\F_i\sbe\F$ such that
each $\F_i$ is regular as an imprimitivity $A_i,B_i$-bimodule,
where $A_i=\cspn{_A}\braket{\F_i}{\F_i}$ and $B_i=\cspn\braket{\F_i}{\F_i}_B$, and
\begin{equation*}
\F=\overline{\sum\limits_{i\in I}\F_i}.
\end{equation*}
\end{definition}
If a (locally) regular Hilbert bimodule is concretely represented as a {\tro} on some Hilbert space, then this {\tro} is also
(locally) regular. Conversely, if a (locally) regular {\tro} is viewed as a Hilbert bimodule, this Hilbert bimodule is (locally) regular.
Of course, every result on (local) regularity of ternary rings of operators can be translated into an equivalent result on (local) regularity of
imprimitivity bimodules. For instance, a Hilbert sub-$A,B$-bimodule of a regular imprimitivity Hilbert $A,B$-bimodule $\F$ is again regular, and
if $A$ and $B$ are commutative, then $\F$ is locally regular.
We end this section discussing some examples. Given a \cstar{}algebra $B$, we may consider the \emph{trivial} $B,B$-imprimitivity bimodule $\F=B$
with the obvious structure. It is easy to see that it is regular. More generally, given \cstar{}algebras $A$ and $B$ and an isomorphism
$\phi\colon A\to B$, we may give $\F=B$ the structure of an imprimitivity Hilbert $A,B$-bimodule with the canonical structure:
\begin{align*}
a\cdot \xi & =\phi(a)\xi,\quad \mbox{ (product in $B$) },\quad _A\braket{\xi}{\eta}=\phi\inv(\xi\eta^*)\quad\mbox{and}\\
\xi\cdot b & =\xi b\quad \mbox{and}\quad \braket{\xi}{\eta}_B=\xi^*\eta \mbox{ (product and involution of $B$) }
\end{align*}
for all $a\in A$ and $\xi,\eta,b\in B$. We write $_\phi B$ for this $A,B$-bimodule.
The associated multiplier $\mult(A),\mult(B)$-bimodule is just $\mult(\F)=\mult(B)$,
the multiplier algebra of $B$ viewed as a $\mult(A),\mult(B)$-bimodule using the (unique) strictly continuous extension
$\bar\phi\colon\mult(A)\to\mult(B)$ of $\phi$. In other words, $\mult(_\phi B)={_{\bar\phi}}\mult(B)$. From this, we
see that $_\phi B$ is regular. In fact, it is enough to take a unitary multiplier of $B$ (for instance, one may take the unit $1_B\in \mult(B)$)
and view it as a unitary multiplier of $_\phi B$. Moreover, up to isomorphism, every regular $A,B$-bimodule has the
form $_\phi B$ for some isomorphism $\phi\colon A\to B$:
\begin{proposition}\label{prop:Regular=>ABisomorphicAndBimoduleTrivial}
An imprimitivity $A,B$-bimodule $\F$ is regular if and only if there is an isomorphism
$\phi\colon A\to B$ such that $\F\cong {_\phi}B$ as $A,B$-bimodules.
\end{proposition}
\begin{proof}
Let $u$ be a unitary multiplier of $\F$ such that $\F=A\cdot u=u\cdot B$. Then it easy to see that
the map $\phi\colon A\to B$ defined by $\phi(a)=u^*au$ is an isomorphism of \cstar{}algebras.
Moreover, the map $b\mapsto ub$ from $B$ to $\F$ induces an isomorphism $_\phi B\congto\F$ of $A,B$-bimodules.
\end{proof}
The above result says that the structure of a regular $A,B$-bimodule is, up to isomorphism, essentially trivial.
Notice that the imprimitivity $A,B$-bimodule $_\phi B$ (induced from an isomorphism $\phi\colon A\to B$) is isomorphic to
the trivial imprimitivity $B,B$-bimodule $B=_{\id}\!\!B$ (induced from the identity map $\id\colon B\to B$).
More precisely, the identity map $\id\colon B\to B$ together with the coefficient homomorphisms $\phi\colon A\to B$ and $\id\colon B\to B$
defines an isomorphism $_\phi\id_\id$ from the $A,B$-bimodule $_AB_B=_\phi B$ to the $B,B$-bimodule $_\id B={_B}B_B$
(see \cite[Definition~1.16]{Echterhoff.et.al.Categorical.Imprimitivity} for the precise meaning of homomorphisms between bimodules with possibly different coefficient algebras).
From the above characterization of regular bimodules, we can also easily give examples of non-regular bimodules,
simply taking any imprimitivity $A,B$-bimodule for which $A$ and $B$ are non-isomorphic \cstar{}algebras.
For instance, one can take a Hilbert space $\hils$ and view it as an imprimitivity $\K(\hils),\C$-bimodule.
Then $\hils$ is regular if and only if it is one-dimensional, since by Proposition~\ref{prop:Regular=>ABisomorphicAndBimoduleTrivial} regularity forces $\K(\hils)\cong\C$ and hence $\dim\hils=1$.
There is one special case where regularity is always present: this is the case where the coefficient algebras are \emph{stable}.
Recall that a \cstar{}algebra $A$ is stable if $A\otimes\K\cong A$, where $\K=\K(l^2\N)$. If $\F$ is an imprimitivity
Hilbert $A,B$-bimodule with $A$ and $B$ separable (or, more generally, $\sigma$-unital) stable \cstar{}algebras,
then $\F$ is regular (this follows from \cite[Theorem~3.4]{BGR:MoritaEquivalence}).
Hence, after stabilization (tensoring with $\K$), we can make any separable (or, more generally, countably generated)
imprimitivity Hilbert bimodule into a regular one.
Let us now consider the case where $A=B=\contz(X)$ is a commutative \cstar{}algebra,
where $X$ is some locally compact Hausdorff space. It is well-known that imprimitivity $\contz(X),\contz(X)$-bimodules correspond bijectively
to (Hermitian) complex line bundles over $X$ (see \cite[Appendix~A]{Raeburn:PicardGroup}):
given a complex line bundle $L$ over $X$, it can be endowed with
an Hermitian structure, that is, there is a continuous family of inner products $\braket{\cdot}{\cdot}_x$ (which we assume to be linear on the
second variable) on the fibers $L_x$. Then the space $\contz(L)$ of continuous sections of $L$ vanishing at infinity has a canonical structure of an
imprimitivity $\contz(X),\contz(X)$-bimodule with the obvious actions of $\contz(X)$ and the inner products:
\begin{equation*}
_{\contz(X)}\braket{\xi}{\eta}(x)=\braket{\eta(x)}{\xi(x)}_x\quad\mbox{and}\quad\braket{\xi}{\eta}_{\contz(X)}(x)=\braket{\xi(x)}{\eta(x)}_x.
\end{equation*}
When is $\contz(L)$ regular? To answer this question, let us first observe that the associated multiplier $\contb(X),\contb(X)$-bimodule
is (isomorphic to) $\contb(L)$, the space of continuous bounded sections of $L$ endowed with the canonical bimodule structure as above.
Thus $\contz(L)$ is regular if and only if there is a unitary element $u\in \U\contb(L)$, that is,
a continuous unitary section for $L$. In other words, we have proved the following:
\begin{proposition}\label{prop:Contz(L)RegularIFFLTopologicallyTrivial}
The Hilbert $\contz(X),\contz(X)$-bimodule $\contz(L)$ described above is regular if and only if the line bundle $L$ is topologically trivial.
\end{proposition}
The above result illustrates again the already mentioned triviality restriction imposed by regularity.
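For a concrete instance, one may take $X=S^2\cong\mathbb{CP}^1$ and let $L$ be the tautological line bundle. Since $L$ has nonzero first Chern class, it is not topologically trivial, so the bimodule $\contz(L)$ is not regular; it is nevertheless locally regular by Proposition~\ref{prop:commutative=>loc.regular}.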
From this point of view, the study of regular bimodules seems to be trivial. However, the theory becomes interesting when one
studies bundles of such bimodules. In this work we are interested in regular Fell bundles (over inverse semigroups), meaning
Fell bundles for which all the fibers are regular bimodules. Although each fiber, taken in isolation, is then isomorphic to a trivial bimodule,
this isomorphism is not canonical, and it is the interplay between these isomorphisms on the different fibers that makes the object interesting.
On the other hand, local regularity is much more flexible. For instance, the bimodule $\contz(L)$ is always locally regular.
In fact, notice that we already know this from Proposition~\ref{prop:commutative=>loc.regular},
but essentially this is a manifestation of the fact that $L$ is locally trivial. It is also easy to give examples of bimodules that are not locally
regular. It is enough to take an imprimitivity Hilbert $A,B$-bimodule $\F$ for which $A$ and $B$ are non-isomorphic simple \cstar{}algebras.
In this case $\F$ is \emph{simple}, that is, there is no Hilbert sub-$A,B$-bimodule of $\F$, except for the trivial ones $\{0\}$ and $\F$.
Hence $\F$ is locally regular if and only if it is regular, and in this case $A$ and $B$ are isomorphic by Proposition~\ref{prop:Regular=>ABisomorphicAndBimoduleTrivial}.
As a simple example, consider any Hilbert space $\hils$ and view it as an imprimitivity $\K(\hils),\C$-bimodule.
This bimodule is not locally regular unless $\hils$ is one-dimensional.
\section{Regular Fell bundles}
\label{sec:regular Fell bundles}
First, let us recall the definition of Fell bundles over inverse semigroups
(a concept first introduced by Nándor Sieben in~\cite{SiebenFellBundles} and later used by the
second author in~\cite{Exel:noncomm.cartan}).
\begin{definition}\label{def:Fell bundles over ISG}
Let $S$ be an inverse semigroup. A \emph{Fell bundle} over $S$ is a
collection $\A=\{\A_s\}_{s\in S}$ of Banach spaces $\A_s$
together with a \emph{multiplication} $\cdot\colon\A\times\A\to \A$, an
\emph{involution} $^{*}\colon\A\to \A$, and isometric linear maps
$j_{t,s}\colon\A_s\to \A_t$ whenever $s\leq t$, satisfying the following properties\textup:
\begin{enumerate}[(i)]
\item $\A_s\cdot\A_t\sbe\A_{st}$ and the multiplication
is bilinear from $\A_s\times\A_t$ to $\A_{st}$ for all $s,t\in S$;\label{def:Fell bundles over ISG:item:MultiplicationBilinear}
\item the multiplication is associative, that is, $a\cdot(b\cdot c)=(a\cdot b)\cdot c$
for all $a,b,c\in \A$;\label{def:Fell bundles over ISG:item:MultiplicationAssociative}
\item $\|a\cdot b\|\leq \|a\|\|b\|$ for all $a,b\in \A$;\label{def:Fell bundles over ISG:item:MultiplicationBounded}
\item $\A_s^*\sbe\A_{s^*}$ and the involution is conjugate linear
from $\A_s$ to $\A_{s^*}$;\label{def:Fell bundles over ISG:item:InvolutionConjugateLinear}
\item $(a^*)^*=a$, $\|a^*\|=\|a\|$ and
$(a\cdot b)^*=b^*\cdot a^*$;\label{def:Fell bundles over ISG:item:InvolutionIdempotentIsometricAntiMultiplicative}
\item $\|a^*a\|=\|a\|^2$ and $a^*a$ is a positive element of the \cstar{}algebra $\A_{s^*s}$
for all $s\in S$ and $a\in \A_s$;\label{def:Fell bundles over ISG:item:C*-Condition}
\item if $r\leq s\leq t$ in $S$, then $j_{t,r}=j_{t,s}\circ j_{s,r}$;\label{def:Fell bundles over ISG:item:CompatibilityInclusions}
\item if $s\leq t$ and $u\leq v$ in $S$, then $j_{t,s}(a)\cdot j_{v,u}(b)=j_{tv,su}(a\cdot b)$
for all $a\in \A_s$ and $b\in \A_u$. In other words, the following diagram commutes:
\begin{equation*}
\xymatrix{
\A_s\times\A_u \ar[r]^{\mu_{s,u}}\ar[d]_{j_{t,s}\times j_{v,u}} & \A_{su}\ar[d]^{j_{tv,su}}\\
\A_t\times\A_v \ar[r]_{\mu_{t,v}} & \A_{tv}}
\end{equation*}
where $\mu_{s,u}$ and $\mu_{t,v}$
denote the multiplication maps.\label{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions}
\item if $s\leq t$ in $S$, then $j_{t,s}(a)^*=j_{t^*,s^*}(a^*)$ for all $a\in \A_s$, that is, the diagram
\begin{equation*}
\xymatrix{
\A_s\ar[r]^{^{*}}\ar[d]_{j_{t,s}} & \A_{s^*}\ar[d]^{j_{t^*,s^*}}\\
\A_t\ar[r]_{^{*}} & \A_{t^*}
}
\end{equation*}
commutes.\label{def:Fell bundles over ISG:item:CompatibilityInvolutionWithInclusions}
\end{enumerate}
If $\A_s\cdot\A_t$ spans a dense subspace of $\A_{st}$ for all $s,t\in S$, we say that $\A$ is \emph{saturated}.
\end{definition}
Later, we shall need the following technical result:
\begin{lemma}\label{lem:TechnicalLemmaDefFellBundle}
Let $S$ be an inverse semigroup and let $\A=\{\A_s\}_{s\in S}$ be a collection of Banach spaces satisfying
all the conditions of Definition~\ref{def:Fell bundles over ISG} except,
possibly for axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions}. Then the following
assertions are equivalent:
\begin{enumerate}[(a)]
\item axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions}
of Definition~\ref{def:Fell bundles over ISG}
holds (and hence $\A$ is a Fell bundle);\label{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusions}
\item $j_{t,s}(a)\cdot b=j_{tu,su}(a\cdot b)$ whenever $s,t,u\in S$, $s\leq t$, $a\in \A_s$ and $b\in \A_u$
(that is, axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions}
holds for $u=v$);\label{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsForu=v}
\item $a\cdot j_{v,u}(b)=j_{sv,su}(a\cdot b)$ whenever $s,u,v\in S$, $u\leq v$, $a\in \A_s$ and $b\in \A_u$ (that is,
axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions} holds
for $s=t$);\label{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsFors=t}
\end{enumerate}
\end{lemma}
\begin{proof}
Of course,~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusions}
implies~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsForu=v}
and~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsFors=t}
(note that $j_{s,s}$ is the identity map for all $s\in S$).
Applying the involution and using axioms~\eqref{def:Fell bundles over ISG:item:InvolutionIdempotentIsometricAntiMultiplicative}
and~\eqref{def:Fell bundles over ISG:item:CompatibilityInvolutionWithInclusions} of Definition~\ref{def:Fell bundles over ISG},
it follows that~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsForu=v}
and~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsFors=t} are equivalent.
Finally, suppose that~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsForu=v}
and (hence also)~\eqref{lem:TechnicalLemmaDefFellBundle:item:CompatibilityMultiplicationWithInclusionsFors=t} hold.
This means that the diagrams
\begin{equation*}
\xymatrix{
\A_s\times\A_u \ar[r]^{\mu_{s,u}}\ar[d]_{j_{t,s}\times \id} & \A_{su}\ar[d]^{j_{tu,su}}\\
\A_t\times\A_u \ar[r]_{\mu_{t,u}} & \A_{tu}}
\quad\quad
\xymatrix{
\A_t\times\A_u \ar[r]^{\mu_{t,u}}\ar[d]_{\id\times j_{v,u}} & \A_{tu}\ar[d]^{j_{tv,tu}}\\
\A_t\times\A_v \ar[r]_{\mu_{t,v}} & \A_{tv}}
\end{equation*}
commute for all $s,t,u,v\in S$ with $s\leq t$ and $u\leq v$. Gluing these diagrams, we get the commutative diagram
\begin{equation*}
\xymatrix{
\A_s\times\A_u \ar[r]^{\mu_{s,u}}\ar[d]_{j_{t,s}\times \id} & \A_{su}\ar[d]^{j_{tu,su}}\\
\A_t\times\A_u \ar[r]_{\mu_{t,u}}\ar[d]_{\id\times j_{v,u}} & \A_{tu}\ar[d]^{j_{tv,tu}}\\
\A_t\times\A_v \ar[r]_{\mu_{t,v}} & \A_{tv}}
\end{equation*}
Since $(\id\times j_{v,u})\circ (j_{t,s}\times \id)=j_{t,s}\times j_{v,u}$ and
$j_{tv,tu}\circ j_{tu,su}=j_{tv,su}$ (see axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityInclusions}
in Definition~\ref{def:Fell bundles over ISG}), this yields the desired commutativity of the diagram
in Definition~\ref{def:Fell bundles over ISG}\eqref{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions}.
\end{proof}
We shall say that a Fell bundle $\A=\{\A_s\}_{s\in S}$ is \emph{concrete} if all the Banach spaces
$\A_s$ are concretely represented as operators on some Hilbert space $\hils$ in such a way that
all the algebraic operations of $\A$ are realized by the operations in $\bound(\hils)$ and, in addition, the inclusion maps
$\A_s\into\A_t$ for $s\leq t$ become real inclusions $\A_s\sbe\A_t$ in $\bound(\hils)$.
In other words, $\A$ is a concrete Fell bundle in $\bound(\hils)$ if $\A_s\sbe\bound(\hils)$ for all $s\in S$ and
the inclusion map $\A\to \bound(\hils)$ defines a \emph{representation} of $\A$ (see
Section~\ref{sec:RepresentationsCrossedProducts} below and \cite{Exel:noncomm.cartan}
for more details on representations of Fell bundles). Note that, in this case, each fiber $\A_s\sbe\bound(\hils)$ is a {\tro}.
Observe that every Fell bundle is isomorphic to some concrete Fell bundle.
In fact, it is enough to take a faithful representation of $C^*(\A)$ on some Hilbert space $\hils$ and
consider the universal representation $\pi^u$ of $\A$ into $C^*(\A)\sbe\bound(\hils)$. By Corollary~8.9 in \cite{Exel:noncomm.cartan},
the components $\pi^u_s\colon\A_s\to\bound(\hils)$ of $\pi^u$ are injective. It follows that the collection $\{\pi^u_s(\A_s)\}_{s\in S}$ is a concrete Fell bundle in $\bound(\hils)$ which is isomorphic to $\A$. Thus there is no loss of generality in assuming that a Fell bundle is concrete.
\begin{definition}
Let $S$ be an inverse semigroup, and let $\A=\{\A_s\}_{s\in S}$ be a (concrete) Fell bundle.
We say that $\A$ is \emph{(locally) regular} if each fiber $\A_s\sbe\bound(\hils)$ is (locally) regular as an
imprimitivity Hilbert $I_s,J_s$-bimodule (or, equivalently, as a {\tro} if $\A$ is concrete),
where $I_s=\A_s\A_s^*$ and $J_s=\A_s^*\A_s$ (closed linear spans).
\end{definition}
\begin{remark}
We shall work almost exclusively with saturated Fell bundles in this work.
This is not a strong restriction since we are going to see later (see Proposition~\ref{prop:RefinementForLocRegularFellBundle}) that there is a way to
replace $\A$ by another Fell bundle $\B$ (over a different inverse semigroup) such that $\B$ is saturated and $\A$ and $\B$ have isomorphic
cross-sectional \cstar{}algebras. Note that for a saturated Fell bundle $\A$,
we have $\A_s\A_s^*=\A_{ss^*}$ and $\A_s^*\A_s=\A_{s^*s}$.
\end{remark}
\begin{example}
Consider the inverse semigroup $S=\{s,s^*,s^*s,ss^*,0\}$ with five elements,
where $s^2=0$ and $0$ is a zero for $S$.
A Fell bundle $\A$ over $S$ is essentially the same as a Hilbert bimodule. In fact, if $\F$ is a Hilbert $A,B$-bimodule, we define
$\A_s=\F$, $\A_{s^*}=\F^*$ (the Hilbert $B,A$-bimodule \emph{dual} of $\F$; see \cite[Example~1.6(3)]{Echterhoff.et.al.Categorical.Imprimitivity}
for the precise definition), $\A_{s^*s}=B$, $\A_{ss^*}=A$ and $\A_0=\{0\}$.
Note that the only nontrivial inequalities in the canonical order relation of $S$ are $0\leq t$ for all $t\in S$.
Thus the inclusion maps are trivial: they do not play any important role.
Conversely, if $\A$ is a Fell bundle over $S$, then $\F=\A_s$
is a Hilbert $A,B$-bimodule in the natural way, where $A=\A_{ss^*}$ and $B=\A_{s^*s}$.
All the aspects of the Fell bundle $\A$ can be described in terms of the Hilbert bimodule $\F$.
For instance, $\A$ is saturated iff $\F$ is an imprimitivity Hilbert $A,B$-bimodule
and $\A$ is (locally) regular iff $\F$ is.
\end{example}
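For the reader's convenience, the multiplication of $S$ in the example above is completely determined by $s^2=0$ together with the inverse semigroup identities $ss^*s=s$ and $s^*ss^*=s^*$: writing $e=s^*s$ and $f=ss^*$, the nonzero products are
\begin{equation*}
ss^*=f,\quad s^*s=e,\quad se=s,\quad fs=s,\quad es^*=s^*,\quad s^*f=s^*,\quad e^2=e,\quad f^2=f,
\end{equation*}
while all the remaining products (for instance $ef$, $fe$, $sf$ and $es$) are equal to $0$.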
Let $\A=\{\A_s\}_{s\in S}$ be a regular, saturated, concrete Fell bundle in $\bound(\hils)$. Then each fiber $\A_s$ becomes a
regular {\tro} in $\bound(\hils)$, so that, for each $s\in S$, we can choose a partial isometry $u_s\in\bound(\hils)$
which is strictly associated to $\A_s$. Moreover, for every idempotent $e\in E(S)$,
we may choose $u_e$ to be the unit of $\mult(\A_e)$ (the multiplier algebra of $\A_e\sbe\bound(\hils)$).
We are going to denote the unit of $\mult(\A_e)$ by $1_e$ to simplify the notation.
Let $A\defeq C^*(\A)$ be the (full) $C^*$-algebra of $\A$ and let $B$ be the $C^*$-algebra $C^*(\E)$, where $\E=\A|_{E}$ is the restriction of $\A$
to the idempotent semilattice $E=E(S)$ of $S$. We may assume that $B\sbe A\sbe\bound(\hils)$ (we could have taken $\hils$ to be the Hilbert space of a faithful representation of $A$ to start with) and that $\A_s\sbe A$ for all $s\in S$. Note that $\A_e$ is an ideal of $B$ for all $e\in E$.
With the assumptions above, we define the ideals $\D_s\defeq\A_s\A_s^*=\A_{ss^*}$ in $B$, $s\in S$, and the maps
\begin{equation*}
\beta_s\colon\D_{s^*}\to \D_s\quad\mbox{by }\beta_s(a)\defeq u_sau_s^* \mbox{ for all }a\in \D_{s^*}.
\end{equation*}
It is easy to see that $\beta_s$ is well-defined and, since $u_s^*u_s$ is the unit of $\mult(\D_{s^*})$, $\beta_s$ is a \Star{}isomorphism.
Given $s,t\in S$, we also define $\omega(s,t)\defeq u_su_tu_{st}^*\in \bound(\hils)$. Note that $\omega(s,t)\in \uni\mult(\D_{st})$ (the set of unitary multipliers). In fact, by \cite[Proposition~6.5]{Exel:twisted.partial.actions} $\omega(s,t)$ is a unitary multiplier of
\begin{equation*}
\A_s\A_t\A_t^*\A_s^*=\A_{stt^*s^*}=\D_{st}.
\end{equation*}
\begin{proposition}\label{prop:properties of twisted action coming from partial isometries}
With the notations above, the following properties hold for all $r,s,t\in S$ and $e,f\in E(S)$\textup:
\begin{enumerate}[(i)]
\item
the linear span of $\cup_{e\in E}\D_e$ is dense in $B$;
\label{prop:properties of twisted action coming from partial isometries:item:D_e_Generate_B}
\item
$\beta_e\colon\D_e\to\D_e$
is the identity map;\label{prop:properties of twisted action coming from partial isometries:item:beta_e=Identity}
\item
$\beta_r(\D_{r^*}\cap\D_s)=\D_{rs}$;\label{prop:properties of twisted action coming from partial isometries:item:beta_r(D_r*inter D_s)=D_rs}
\item
$\beta_r\circ\beta_s=\Ad_{\omega(r,s)}\circ\beta_{rs}$;
\label{prop:properties of twisted action coming from partial isometries:item:beta_r beta_s=Ad_omega(r,s)beta_rs}
\item
$\beta_r(x\omega(s,t))\omega(r,st)=\beta_r(x)\omega(r,s)\omega(rs,t)$ whenever $x\in\D_{r^*}\cap\D_{st}$;
\label{prop:properties of twisted action coming from partial isometries:item:CocycleCondition}
\item
$\omega(e,f)=1_{ef}$ and $\omega(r,r^*r)=\omega(rr^*,r)=1_{rr^*}$;
\label{prop:properties of twisted action coming from partial isometries:item:omega(e,f)=1_ef AND and omega(r,r^*r)=omega(rr^*,r)=1_rr*}
\item
$\omega(s^*,e)\omega(s^*e,s)x=\omega(s^*,s)x$ for all $x\in \D_{s^*es}$;
\label{prop:properties of twisted action coming from partial isometries:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x}
\item
$\omega(t^*,s)=\omega(t^*,ss^*)\omega(s^*,s)$ whenever $s\leq t$;
\label{prop:properties of twisted action coming from partial isometries:item:omega(t*,s)=omega(t*,ss*)omega(s*,s)}
\item
$\omega(t,r^*r)=\omega(t,s^*s)\omega(s,r^*r)$ whenever $r\leq s\leq t$.
\label{prop:properties of twisted action coming from partial isometries:item:omega(t,r^*r)=omega(t,s^*s)omega(s,r^*r)}
\end{enumerate}
\end{proposition}
\begin{proof}
Property~\eqref{prop:properties of twisted action coming from partial isometries:item:D_e_Generate_B}
is clear because $\D_e=\A_e$ for all $e\in E$. Since $u_e=1_e$ for all $e\in E$,~\eqref{prop:properties of twisted action coming from partial isometries:item:beta_e=Identity}
is also clear. To prove~\eqref{prop:properties of twisted action coming from partial isometries:item:beta_r(D_r*inter D_s)=D_rs}, note that
\begin{multline*}
\beta_r(\D_{r^*}\cap\D_s)=u_r\D_{r^*}\D_su_r^*=u_r\D_{r^*}\D_s\D_{r^*}u_r^*
\\=u_r\A_{r^*r}\A_{ss^*}\A_{r^*r}u_r^*=\A_r\A_{ss^*}\A_r^*=\A_{rss^*r^*}=\D_{rs}.
\end{multline*}
Note that $1_e1_f=1_{ef}$, so that
\begin{equation*}
\omega(e,f)=u_eu_fu_{ef}^*=1_e1_f1_{ef}^*=1_{ef}.
\end{equation*}
This together with the following calculation proves~\eqref{prop:properties of twisted action coming from partial isometries:item:omega(e,f)=1_ef AND and omega(r,r^*r)=omega(rr^*,r)=1_rr*}:
\begin{multline*}
\omega(r,r^*r)=u_ru_{r^*r}u_{rr^*r}^*=u_r1_{r^*r}u_r^*=u_ru_r^*=1_{rr^*}\\
=1_{rr^*}1_{rr^*}=1_{rr^*}u_ru_r^*=u_{rr^*}u_ru_{rr^*r}^*=\omega(rr^*,r).
\end{multline*}
To prove~\eqref{prop:properties of twisted action coming from partial isometries:item:beta_r beta_s=Ad_omega(r,s)beta_rs},
first note that the domains of $\beta_r\circ\beta_s$ and $\beta_{rs}$ coincide. In fact, by definition,
$\dom(\beta_{rs})=\D_{(rs)^*}=\A_{s^*r^*rs}$ and, on the other hand,
\begin{equation*}
\dom(\beta_r\circ\beta_s)=\{x\in \D_{s^*}\colon \beta_s(x)\in \D_s\cap\D_{r^*}\}=\beta_s\inv(\D_s\cap\D_{r^*}).
\end{equation*}
Now observe that
\begin{multline*}\beta_s\inv(\D_s\cap\D_{r^*})=\beta_s\inv(\D_s\D_{r^*})=u_s^*\A_{ss^*}\A_{r^*r}u_s=\A_{s^*}\A_{r^*r}u_s\\
=\A_{s^*}\A_{ss^*}\A_{r^*r}u_s=\A_{s^*}\A_{r^*r}\A_{ss^*}u_s=\A_{s^*r^*r}\A_s=\A_{s^*r^*rs}=\D_{(rs)^*}.
\end{multline*}
Thus $\dom(\beta_r\circ\beta_s)=\dom(\beta_{rs})=\dom(\Ad_{\omega(r,s)}\circ\beta_{rs})=\D_{(rs)^*}$.
Moreover, since $\omega(r,s)u_{rs}=u_ru_s$, for all $x\in \D_{(rs)^*}$ we get
\begin{equation*}
\beta_r(\beta_s(x))=u_r(u_sxu_s^*)u_r^*=\omega(r,s)u_{rs}xu_{rs}^*\omega(r,s)^*=\omega(r,s)\beta_{rs}(x)\omega(r,s)^*.
\end{equation*}
Item~\eqref{prop:properties of twisted action coming from partial isometries:item:CocycleCondition} is equivalent to the equality
\begin{equation}\label{eq:cocycle identity for partial isometries}
u_rxu_su_tu_{st}^*u_r^*u_ru_{st}u_{rst}^*=u_rxu_r^*u_ru_su_{rs}^*u_{rs}u_tu_{rst}^*.
\end{equation}
The left hand side of this equation equals $u_rxu_su_tu_{st}^*1_{r^*r}u_{st}u_{rst}^*$. Since $re\leq r$, where $e=(st)(st)^*$, we have
\begin{equation*}
u_rxu_su_tu_{st}^*\in u_r\D_{r^*}\D_{st}\mult(\D_{st})=\A_r\D_{st}=\A_{re}\sbe\A_r.
\end{equation*}
Thus
\begin{equation*}
u_rxu_su_tu_{st}^*1_{r^*r}u_{st}u_{rst}^*=u_rxu_su_tu_{st}^*u_{st}u_{rst}^*=u_rxu_su_t1_{(st)^*(st)}u_{rst}^*.
\end{equation*}
Now note that
\begin{equation*}
xu_su_t\in \D_{st}u_su_t=\A_{st}\A_{t^*}\A_{s^*}u_su_t=\A_{st}\A_{t^*s^*s}u_t\sbe \A_{st}\A_{t^*}u_t=\A_{st}\A_{t^*t}=\A_{st}.
\end{equation*}
Therefore, the left hand side of Equation~\eqref{eq:cocycle identity for partial isometries} equals
\begin{equation*}
u_rxu_su_t1_{(st)^*(st)}u_{rst}^*=u_rxu_su_tu_{rst}^*.
\end{equation*}
Similarly, the right hand side of Equation~\eqref{eq:cocycle identity for partial isometries} equals
\begin{multline*}
u_rxu_r^*u_ru_s1_{(rs)^*(rs)}u_tu_{rst}^*=u_rxu_r^*u_ru_su_tu_{rst}^*\\=u_rx1_{r^*r}u_su_tu_{rst}^*=u_rxu_su_tu_{rst}^*.
\end{multline*}
In order to prove~\eqref{prop:properties of twisted action coming from partial isometries:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x},
take $s\in S$, $e\in E(S)$ and $x\in \D_{s^*es}$. Note that
\begin{equation*}
u_sx\in u_s\D_{s^*es}=u_s\A_{s^*es}=u_s\A_{s^*}\A_{es}=\A_{ss^*}\A_{es}=\A_{es}=\A_e\A_s
\end{equation*}
so that $1_eu_sx=u_sx$. Therefore
\begin{multline*}
\omega(s^*,e)\omega(s^*e,s)x=u_{s^*}u_eu_{s^*e}^*u_{s^*e}u_su_{s^*es}^*x=u_{s^*}1_e1_{ess^*}u_s1_{s^*es}x\\
=u_{s^*}1_e1_{ss^*}u_sx=u_{s^*}1_eu_sx=u_{s^*}u_sx=u_{s^*}u_s1_{s^*s}x=\omega(s^*,s)x.
\end{multline*}
To prove~\eqref{prop:properties of twisted action coming from partial isometries:item:omega(t*,s)=omega(t*,ss*)omega(s*,s)},
suppose $s\leq t$. Then $t^*s=s^*s$ and $t^*ss^*=s^*ss^*=s^*$, so that
\begin{multline*}
\omega(t^*,ss^*)\omega(s^*,s)=u_{t^*}u_{ss^*}u_{t^*ss^*}^*u_{s^*}u_su_{s^*s}^*=u_{t^*}1_{ss^*}u_{s^*}^*u_{s^*}u_s 1_{s^*s}=\\
u_{t^*}1_{ss^*}1_{ss^*}u_s1_{s^*s}=u_{t^*}u_{s}=u_{t^*}u_{s}1_{s^*s}^*=u_{t^*}u_{s}u_{s^*s}^*=u_{t^*}u_{s}u_{t^*s}^*=\omega(t^*,s).
\end{multline*}
Finally, to prove~\eqref{prop:properties of twisted action coming from partial isometries:item:omega(t,r^*r)=omega(t,s^*s)omega(s,r^*r)},
assume that $r\leq s\leq t$. Note that $\omega(t,s^*s)=u_tu_{s^*s}u_{ts^*s}^*=u_t1_{s^*s}u_s^*=u_tu_s^*$ and
$1_{s^*s}u_r^*=1_{s^*s}1_{r^*r}u_r^*=1_{r^*r}u_r^*=u_r^*$. Therefore
\begin{equation*}
\omega(t,s^*s)\omega(s,r^*r)=u_tu_s^*u_su_r^*=u_t1_{s^*s}u_r^*=u_tu_r^*=\omega(t,r^*r).
\end{equation*}
\vskip-16pt
\end{proof}
\section{Twisted actions}
\label{sec:TwistedActions}
In this section, we give a relatively simple definition of inverse semigroup twisted action
generalizing Busby-Smith twisted actions \cite{Busby-Smith:Representations_twisted_group} and closely related to twisted partial actions of
\cite{Exel:twisted.partial.actions} for (discrete) groups.
It also generalizes the twisted actions in the sense of Sieben \cite{SiebenTwistedActions} for (unital) inverse semigroups.
We then extend the main result of \cite{Exel:twisted.partial.actions}
by proving that our twisted actions give rise to regular, saturated Fell bundles,
thereby obtaining a structure classification of such bundles.
\begin{definition}\label{def:twisted action}
A \emph{twisted action} of an inverse semigroup $S$ on a \cstar{algebra} $B$ is
a triple $\big(\{\D_s\}_{s\in S},\{\beta_s\}_{s\in S},\{\omega(s,t)\}_{s,t\in S}\big)$ consisting of a family of
(closed, two-sided) ideals $\D_s$ of $B$ whose linear span is dense in $B$, a family of
\Star{}isomorphisms $\beta_s\colon\D_{s^*}\to \D_{s}$, and a family $\{\omega(s,t)\}_{s,t\in S}$
of unitary multipliers $\omega(s,t)\in \U\mult(\D_{st})$ satisfying properties~\eqref{prop:properties of twisted action coming from partial isometries:item:beta_r beta_s=Ad_omega(r,s)beta_rs}-\eqref{prop:properties of twisted action coming from partial isometries:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} of Proposition~\ref{prop:properties of twisted action coming from partial isometries}, that is,
for all $r,s,t\in S$ and $e,f\in E=E(S)$ we have:
\begin{enumerate}[(i)]
\item $\beta_r\circ\beta_s=\Ad_{\omega(r,s)}\circ\beta_{rs}$;\label{def:twisted action:item:beta_r beta_s=Ad_omega(r,s)beta_rs}
\item $\beta_r(x\omega(s,t))\omega(r,st)=\beta_r(x)\omega(r,s)\omega(rs,t)$
whenever $x\in\D_{r^*}\cap\D_{st}$;\label{def:twisted action:item:CocycleCondition}
\item $\omega(e,f)=1_{ef}$ and $\omega(r,r^*r)=\omega(rr^*,r)=1_{r}$, where $1_r$ is the unit of $\mult(\D_r)$;
\label{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1}
\item $\omega(s^*,e)\omega(s^*e,s)x=\omega(s^*,s)x$
for all $x\in \D_{s^*es}$.\label{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x}
\end{enumerate}
We sometimes write $(\beta,\omega)$ to refer to a twisted action, implicitly assuming that $\beta$ is a family of \Star{}isomorphisms
$\{\beta_s\}_{s\in S}$ between ideals $\D_{s^*}=\dom(\beta_s)$ and $\D_s=\ran(\beta_s)$, and
$\omega$ is a family $\{\omega(s,t)\}_{s,t\in S}$ of unitary multipliers $\omega(s,t)\in \U\mult(\D_{st})$.
\end{definition}
\begin{remark}
If $S$ has a unit $1$, then the condition in the above definition
that the closed linear span of the ideals $\D_e$ for $e\in E(S)$ is equal to $B$ is equivalent to the requirement $\D_1=B$
because $e\leq 1$ so that $\D_e\sbe\D_1$ for all $e\in E(S)$ (see Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:D_r sbe D_s} below).
For a discrete group $G$, the only idempotent is the unit element $1\in G$. In this case,
axiom~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} in the above definition is
equivalent to the condition $\omega(1,s)=\omega(s,1)=1_s$ for all $s\in G$.
Moreover, it is easy to see that this implies axiom~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x}.
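Indeed, in the group case the only choice for the idempotent is $e=1$, so that axiom~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} reads
\begin{equation*}
\omega(s\inv,1)\omega(s\inv,s)x=\omega(s\inv,s)x\quad\mbox{for all }x\in \D_{s\inv s},
\end{equation*}
which holds because $\omega(s\inv,1)=1_{s\inv}$.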
Hence, in the group case, our definition of twisted action is the same as the one
studied by Busby and Smith in \cite{Busby-Smith:Representations_twisted_group} (see also \cite{Packer-Raeburn:Stabilisation}).
It can also be seen as a special case of the twisted partial actions defined by the second named author in \cite{Exel:twisted.partial.actions},
where the condition $\beta_r(\D_{r\inv}\cap\D_s)=\D_r\cap\D_{rs}$ (axiom (b) in \cite[Definition~2.1]{Exel:twisted.partial.actions})
is replaced by $\beta_r(\D_{r\inv}\cap\D_s)=\D_{rs}$ (see Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_r(D_r* inter D_s)=D_rs} below).
\end{remark}
\begin{remark}
In \cite{SiebenTwistedActions}, Nándor Sieben has
considered a similar definition of twisted action (for unital inverse semigroups) in which the axiom
\begin{equation}\label{eq:SiebensCondition}
\omega(s,t)=1_{st}\quad\mbox{whenever }s\mbox{ or }t\mbox{ is an idempotent}
\end{equation}
is included, but we will see later (Section~\ref{sec:RelationToSiebensTwistedActions})
that this is too strong in general, that is, it cannot be derived
from the axioms in Definition~\ref{def:twisted action}, and that our definition strictly generalizes Sieben's.
In some sense, the axioms~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} and~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} appearing in Definition~\ref{def:twisted action}
are designed to replace~\eqref{eq:SiebensCondition} in a compatible way.
Let us now say a few words about axiom~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} in Definition~\ref{def:twisted action}.
First, note that it can be rewritten as
\begin{equation*}
\omega(s^*,e)\omega(s^*e,s)=\Res_{s^*es}(\omega(s^*,s)),
\end{equation*}
where $\Res_{s^*es}\colon\mult(\D_{s^*s})\to \mult(\D_{s^*es})$ is the restriction homomorphism.
Observe that $\D_{s^*es}$ is an ideal of $\D_{s^*s}$ because $s^*es\leq s^*s$ (see
Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:D_r sbe D_s} below).
It is not clear to us whether axiom~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} is a
consequence of the others. The closest property we were able to prove is the following equality, very similar
to~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x}:
\begin{equation}\label{eq:omega(e,s^*)omega(es*,s)x=omega(s*,s)x}
\omega(e,s^*)\omega(es^*,s)x=\omega(s^*,s)x\quad\mbox{for all }e\in E(S),\, s\in S\mbox{ and }x\in \D_{es^*s}.
\end{equation}
To prove this, we only need axioms~\eqref{def:twisted action:item:CocycleCondition},~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} and the fact that $\beta_e$ is the identity on $\D_e$
(see Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_e=identity}).
In fact, note that $\omega(e,s^*)\omega(es^*,s)x$ and $\omega(s^*,s)x$ belong to $\D_{es^*s}=\D_e\D_{s^*s}\sbe\D_e$
(see Lemma~\ref{lem:ConsequencesDefTwistedAction} below). Moreover, if $y\in \D_{es^*s}$, then
\begin{multline*}
y\omega(s^*,s)x=\beta_e\big(y\omega(s^*,s)\big)1_{es^*s}x=\beta_e\big(y\omega(s^*,s)\big)\omega(e,s^*s)x\\
=\beta_e(y)\omega(e,s^*)\omega(es^*,s)x=y\omega(e,s^*)\omega(es^*,s)x.
\end{multline*}
Since $y$ was arbitrary, this implies~\eqref{eq:omega(e,s^*)omega(es*,s)x=omega(s*,s)x}.
As we have seen in Proposition~\ref{prop:properties of twisted action coming from partial isometries}, all the
axioms in Definition~\ref{def:twisted action} (and many others) are satisfied in case the cocycles $\omega(s,t)$ come from partial isometries $u_s$ associated to a regular, saturated Fell bundle as in Section~\ref{sec:regular Fell bundles}. Later, we are going to see (Proposition~\ref{prop:SiebensTwistedActions}) that both axioms~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} and~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} are automatically satisfied in the presence of~\eqref{def:twisted action:item:beta_r beta_s=Ad_omega(r,s)beta_rs},~\eqref{def:twisted action:item:CocycleCondition}
and, in addition, Sieben's condition~\eqref{eq:SiebensCondition}.
\end{remark}
The following result gives some consequences of the axioms of twisted action (compare with Proposition~\ref{prop:properties of twisted action coming from partial isometries}).
\begin{lemma}\label{lem:ConsequencesDefTwistedAction}
If $\bigl(\{\D_s\}_{s\in S},\{\beta_s\}_{s\in S},\{\omega(s,t)\}_{s,t\in S}\bigr)$ is a twisted action of $S$ on $B$, then
the following properties hold for all $r,s,t\in S$ and $e,f\in E(S)$\textup:
\begin{enumerate}[(i)]
\item $\D_s=\D_{ss^*}$;\label{lem:ConsequencesDefTwistedAction:item:Ds=Dss*}
\item $\beta_e\colon\D_e\to\D_e$ is the identity map;\label{lem:ConsequencesDefTwistedAction:item:beta_e=identity}
\item $\beta_r(\D_{r^*}\cap\D_s)=\D_{rs}$;\label{lem:ConsequencesDefTwistedAction:item:beta_r(D_r* inter D_s)=D_rs}
\item $\D_r\sbe\D_s$ if $r\leq s$;\label{lem:ConsequencesDefTwistedAction:item:D_r sbe D_s}
\item $\D_r\D_s=\D_{rr^*s}=\D_{ss^*r}$ and $\D_{rs}=\D_{rss^*}$.
In particular, $\D_e\D_f=\D_{ef}$;\label{lem:ConsequencesDefTwistedAction:item:D_rD_s=...}
\item $\beta_{s^*}=\Ad_{\omega(s^*,s)}\circ\beta_{s}\inv$;\label{lem:ConsequencesDefTwistedAction:item:beta_s*=Ad_omega(s*,s)beta_s inv}
\item $\beta_t\rest{\D_s}=\Ad_{\omega(t,s^*s)}\circ\beta_s$
whenever $s\leq t$;\label{lem:ConsequencesDefTwistedAction:item:beta_t restriction=...}
\item $\beta_s(\omega(s^*,s))=\omega(s,s^*)$. Here we have implicitly extended
$\beta_s\colon\D_{s^*}\to\D_{s}$ to the multiplier algebras $\beta_s\colon\mult(\D_{s^*})\to\mult(\D_{s})$;\label{lem:ConsequencesDefTwistedAction:item:beta_s(omega(s*,s))=omega(s,s*)}
\item $\beta_r(x\omega(s,t)^*)\omega(r,s)=\beta_r(x)\omega(r,st)\omega(rs,t)^*$ for all $x\in\D_{r^*}\cap\D_{st}$;\label{lem:ConsequencesDefTwistedAction:item:CocycleConditionWith*}
\item $\omega(s,e)=\omega(s,s^*se)$ and $\omega(e,s)=\omega(ess^*,s)$;\label{lem:ConsequencesDefTwistedAction:item:omega(s,e)=w(s,s*se)...}
\item $\omega(r,e)=1_{rr^*}$ whenever $e\geq r^*r$,
and $\omega(f,s)=1_{ss^*}$ whenever $f\geq ss^*$;\label{lem:ConsequencesDefTwistedAction:item:omega(r,e)=1_If_eGEQr*r}
\item $\omega(t^*,s)=\omega(t^*,ss^*)\omega(s^*,s)$ whenever $s\leq t$;\label{lem:ConsequencesDefTwistedAction:item:omega(t*,s)=...sLEQt}
\item $\omega(t,r^*r)x=\omega(t,s^*s)\omega(s,r^*r)x$ whenever $r\leq s\leq t$
and $x\in \D_r$.\label{lem:ConsequencesDefTwistedAction:item:omega(t,r*r)x=omega(t,s*s)omega(s,r*r)x}
\end{enumerate}
\end{lemma}
\begin{proof}
Note that $\D_s=\dom\big(\beta_s\circ\beta_{s^*}\big)=\dom\big(\Ad_{\omega(s,s^*)}\circ\beta_{ss^*}\big)=\D_{ss^*}$,
which proves~\eqref{lem:ConsequencesDefTwistedAction:item:Ds=Dss*}.
To prove~\eqref{lem:ConsequencesDefTwistedAction:item:beta_e=identity},
observe that $\beta_e\circ\beta_e=\Ad_{\omega(e,e)}\circ\beta_e=\beta_e$ because $\omega(e,e)=1_e$.
Since $\beta_e\colon\D_e\to \D_e$ is an automorphism, composing with $\beta_e\inv$ shows that it must be the identity map.
To check~\eqref{lem:ConsequencesDefTwistedAction:item:beta_r(D_r* inter D_s)=D_rs}, notice that
\begin{equation*}
\beta_r(\D_{r^*}\cap\D_s)=\ran(\beta_r\circ\beta_s)=\ran(\Ad_{\omega(r,s)}\circ\beta_{rs})=\D_{rs}.
\end{equation*}
To prove~\eqref{lem:ConsequencesDefTwistedAction:item:D_r sbe D_s},
first observe that $sr^*r=r$ because $r\leq s$. Using (the just checked)
property~\eqref{lem:ConsequencesDefTwistedAction:item:beta_r(D_r* inter D_s)=D_rs}, we get
\begin{equation*}
\D_r=\D_{sr^*r}=\beta_s(\D_{s^*}\cap\D_{r^*r})\sbe\beta_s(\D_{s^*})=\D_s.
\end{equation*}
To prove~\eqref{lem:ConsequencesDefTwistedAction:item:D_rD_s=...} we use that $\beta_{rr^*}$ is the identity
on $\D_{rr^*}$, and also~\eqref{lem:ConsequencesDefTwistedAction:item:Ds=Dss*}
and~\eqref{lem:ConsequencesDefTwistedAction:item:beta_r(D_r* inter D_s)=D_rs} to get
\begin{equation*}
\D_r\D_s=\D_{rr^*}\D_s=\beta_{rr^*}(\D_{rr^*}\D_s)=\D_{rr^*s}.
\end{equation*}
Since $\D_r\D_s=\D_r\cap\D_s=\D_s\D_r$ (this holds for ideals of a \cstar{}algebra), we also have $\D_r\D_s=\D_{ss^*r}$.
And by~\eqref{lem:ConsequencesDefTwistedAction:item:Ds=Dss*}, we have $\D_{rss^*}=\D_{rss^*ss^*r^*}=\D_{rss^*r^*}=\D_{rs}$.
To prove~\eqref{lem:ConsequencesDefTwistedAction:item:beta_s(omega(s*,s))=omega(s,s*)},
we use axioms~\eqref{def:twisted action:item:CocycleCondition}
and~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} in Definition~\ref{def:twisted action} to get
\begin{multline*}
\beta_s(\omega(s^*,s))=\beta_s(\omega(s^*,s))1_s=\beta_s(\omega(s^*,s))\omega(s,s^*s)\\
=\omega(s,s^*)\omega(ss^*,s)=\omega(s,s^*)1_s=\omega(s,s^*).
\end{multline*}
Property~\eqref{lem:ConsequencesDefTwistedAction:item:CocycleConditionWith*} is a consequence
of~\eqref{def:twisted action:item:CocycleCondition} in Definition~\ref{def:twisted action}. In fact,
applying Definition~\ref{def:twisted action}\eqref{def:twisted action:item:CocycleCondition} with $x\omega(s,t)^*$ in place of $x$, we get
\begin{equation*}
\beta_r(x)\omega(r,st)=\beta_r(x\omega(s,t)^*)\omega(r,s)\omega(rs,t).
\end{equation*}
Multiplying this last equation by $\omega(rs,t)^*$ on the right one arrives at~\eqref{lem:ConsequencesDefTwistedAction:item:CocycleConditionWith*}.
In order to check~\eqref{lem:ConsequencesDefTwistedAction:item:omega(s,e)=w(s,s*se)...},
take $y\in \D_{s^*se}$ and define $x\defeq \beta_s(y)\in \D_{ses^*}$ (note that every element
of $\D_{ses^*}$ has this form for some $y\in \D_{s^*se}$). Note that $\omega(s,s^*se)$ and $\omega(s,e)$ are unitary
multipliers of $\D_{se}=\D_{ses^*}$. Using axioms~\eqref{def:twisted action:item:CocycleCondition}
and~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} in Definition~\ref{def:twisted action}, we get
\begin{multline*}
x\omega(s,s^*se)=\beta_s(y)\omega(s,s^*se)=\beta_s(y1_{s^*se})\omega(s,s^*se)\\
=\beta_s(y\omega(s^*s,e))\omega(s,s^*se)=\beta_s(y)\omega(s,s^*s)\omega(ss^*s,e)=x1_{ss^*}\omega(s,e)=x\omega(s,e).
\end{multline*}
Since $x$ is an arbitrary element of $\D_{ses^*}$, we must have $\omega(s,s^*se)=\omega(s,e)$. Similarly, we
can check the second part of~\eqref{lem:ConsequencesDefTwistedAction:item:omega(s,e)=w(s,s*se)...}: note that
$\omega(e,s)$ and $\omega(ess^*,s)$ are multipliers of $\D_{es}=\D_{ess^*}$.
Take an arbitrary element $x\in \D_{ess^*}=\D_e\cap\D_{ss^*}$. Then, using
again~\eqref{def:twisted action:item:CocycleCondition} and~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1}
in Definition~\ref{def:twisted action} and that $\beta_e$ is the identity map on $\D_e$, we get
\begin{multline*}
x\omega(e,s)=\beta_e(x1_{ss^*})\omega(e,s)=\beta_e(x\omega(ss^*,s))\omega(e,s)\\
=\beta_e(x)\omega(e,ss^*)\omega(ess^*,s)=x1_{ess^*}\omega(ess^*,s)=x\omega(ess^*,s).
\end{multline*}
Therefore $\omega(e,s)=\omega(ess^*,s)$. To prove~\eqref{lem:ConsequencesDefTwistedAction:item:omega(r,e)=1_If_eGEQr*r},
take $r\in S$ and $e\in E(S)$ with $e\geq r^*r$. Then, by
Definition~\ref{def:twisted action}\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} and the property~\eqref{lem:ConsequencesDefTwistedAction:item:omega(s,e)=w(s,s*se)...} just checked, we
have $\omega(r,e)=\omega(r,r^*re)=\omega(r,r^*r)=1_{r}=1_{rr^*}$. The second part of~\eqref{lem:ConsequencesDefTwistedAction:item:omega(r,e)=1_If_eGEQr*r} is proved similarly.
In order to prove~\eqref{lem:ConsequencesDefTwistedAction:item:omega(t*,s)=...sLEQt}, assume that $s\leq t$ and take
$x\in \D_s\sbe\D_t$. Note that $y\defeq \beta_{t^*}(x)\in \D_{s^*}=\D_{s^*s}$ and every element of $\D_{s^*s}$ has this form.
Observe that $\omega(t^*,s)$, $\omega(t^*,ss^*)$ and $\omega(s^*,s)$ are unitary multipliers of
$\D_{t^*s}=\D_{s^*s}$ and by axioms~\eqref{def:twisted action:item:CocycleCondition}
and~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1}, we have
\begin{multline*}
y\omega(t^*,s)=\beta_{t^*}(x1_{ss^*})\omega(t^*,s)=\beta_{t^*}(x\omega(ss^*,s))\omega(t^*,s)\\
=\beta_{t^*}(x)\omega(t^*,ss^*)\omega(s^*,s)=y\omega(t^*,ss^*)\omega(s^*,s).
\end{multline*}
Thus $\omega(t^*,s)=\omega(t^*,ss^*)\omega(s^*,s)$. Finally, to prove~\eqref{lem:ConsequencesDefTwistedAction:item:omega(t,r*r)x=omega(t,s*s)omega(s,r*r)x},
assume that $r\leq s\leq t$ and $x\in \D_r=\D_{rr^*}$. Observe that $\omega(t,r^*r)x$ and $\omega(t,s^*s)\omega(s,r^*r)x$ are elements
of $\D_r$, and every element of $\D_r$ has the form $z=\beta_t(y)$ for some $y\in \D_{r^*}=\D_{r^*r}$.
By axioms~\eqref{def:twisted action:item:CocycleCondition} and~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} in
Definition~\ref{def:twisted action}, we have
\begin{multline*}
z\omega(t,r^*r)x=\beta_t(y1_{r^*r})\omega(t,r^*r)x=\beta_t(y\omega(s^*s,r^*r))\omega(t,r^*r)x\\
=\beta_t(y)\omega(t,s^*s)\omega(ts^*s,r^*r)x=z\omega(t,s^*s)\omega(s,r^*r)x.
\end{multline*}
Therefore $\omega(t,r^*r)x=\omega(t,s^*s)\omega(s,r^*r)x$ as desired.
\end{proof}
Given a twisted action $\big(\{\D_s\}_{s\in S},\{\beta_s\}_{s\in S},\{\omega(s,t)\}_{s,t\in S}\big)$,
we would like to define a Fell bundle $\B$ over $S$ as follows:
\begin{equation}\label{eq:DefFellBundleFromTwistedAction}
\B\defeq \{(b,s)\in B\times S\colon b\in \D_s\}.
\end{equation}
Writing $b\delta_s$ for $(b,s)\in \B$, we define the operations:
\begin{equation}\label{eq:DefProductFellBundleFromTwistedAction}
(b_s\delta_s)\cdot(b_t\delta_t)\defeq\beta_s\big(\beta_s^{-1}(b_s)b_t\big)\omega(s,t)\delta_{st}
\end{equation}
and
\begin{equation}\label{eq:DefInvolutionFellBundleFromTwistedAction}
(b_s\delta_s)^*\defeq \beta_s^{-1}(b_s^*)\omega(s^*,s)^*\delta_{s^*}
\end{equation}
for all $b_s\in \D_s$ and $b_t\in \D_t$. The product~\eqref{eq:DefProductFellBundleFromTwistedAction} is well-defined because
$b_s\in \D_s=\dom(\beta_s\inv)$ and hence $\beta_s\inv(b_s)b_t\in \D_{s^*}\D_t\sbe\D_{s^*}=\dom(\beta_s)$. Moreover,
axiom~\eqref{def:twisted action:item:beta_r beta_s=Ad_omega(r,s)beta_rs} of Definition~\ref{def:twisted action} shows that
\begin{equation*}
\beta_s\big(\beta_s^{-1}(b_s)b_t\big)\in \beta_s(\D_{s^*}\D_t)=\beta_s(\D_{s^*}\cap\D_t)=\D_{st}.
\end{equation*}
Since $\omega(s,t)\in \mult(\D_{st})$, we get $\beta_s\big(\beta_s^{-1}(b_s)b_t\big)\omega(s,t)\in \D_{st}$.
It is easy to see that the involution~\eqref{eq:DefInvolutionFellBundleFromTwistedAction} is also well-defined.
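Indeed, since $\beta_s\inv(b_s^*)\in \D_{s^*}$ and $\omega(s^*,s)^*$ is a unitary multiplier of $\D_{s^*s}=\D_{s^*}$, we have
\begin{equation*}
\beta_s\inv(b_s^*)\omega(s^*,s)^*\in \D_{s^*}\mult(\D_{s^*})\sbe\D_{s^*}.
\end{equation*}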
Moreover, by Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_s*=Ad_omega(s*,s)beta_s inv}, $\beta_s\inv(a)=\omega(s^*,s)^*\beta_{s^*}(a)\omega(s^*,s)$, so that
\begin{equation}\label{eq:OtherFormulaForInvolution}
(a\delta_s)^*=\beta_s\inv(a^*)\omega(s^*,s)^*\delta_{s^*}=\omega(s^*,s)^*\beta_{s^*}(a^*)\delta_{s^*}.
\end{equation}
Observe that the formulas~\eqref{eq:DefProductFellBundleFromTwistedAction} and~\eqref{eq:DefInvolutionFellBundleFromTwistedAction}
are exactly the same ones appearing in \cite[Section~2]{Exel:twisted.partial.actions}
for a Fell bundle defined from a twisted partial action of a group.
The main difficulty is to define the inclusion maps $j_{t,s}\colon\B_s\hookrightarrow \B_t$ whenever $s\leq t$.
A first obvious choice would be $j_{t,s}(b_s\delta_s)=b_s\delta_t$, but this does not
work in general, although it would if we had Sieben's condition~\eqref{eq:SiebensCondition}.
The problem is to prove that the inclusion maps $j_{t,s}$ are compatible with the operations above.
To motivate the correct definition, let us temporarily assume that
the twisted action comes from partial isometries $u_s$ associated to a regular, saturated, concrete Fell bundle $\A=\{\A_s\}_{s\in S}$
in $\bound(H)$ as in Section~\ref{sec:regular Fell bundles}. In this case, if $s\leq t$ in $S$,
then $\A_s\sbe \A_t\sbe\bound(H)$. And by regularity, $\A_s=\D_su_s$ and $\A_t=\D_tu_t$.
So, any element $x\in \A_s$ can be written in two ways: $x=au_s=bu_t$ for some $a\in \D_s$ and $b\in \D_t$. The relation between $a$ and $b$ is
$b=au_su_t^*$. Now note that $u_su_t^*=u_su_{s^*s}u_t^*=(u_tu_{s^*s}u_{ts^*s}^*)^*=\omega(t,s^*s)^*$, so that $b=a\omega(t,s^*s)^*$. Thus
the inclusion $\A_s\sbe\A_t$ determines a map $\D_s\to \D_t$ by the rule $a\mapsto a\omega(t,s^*s)^*$. Hence, it is natural to define
\begin{equation}\label{eq:DefInclusionsFellBundleFromTwistedAction}
j_{t,s}\colon\B_s\to \B_t\quad\mbox{by}\quad j_{t,s}(a\delta_s)\defeq a\omega(t,s^*s)^*\delta_t.
\end{equation}
Note that $\omega(t,s^*s)\in \mult(\D_{ts^*s})=\mult(\D_s)$ and hence $a\omega(t,s^*s)^*\in \D_s\sbe\D_t$ by
Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:D_r sbe D_s}, so that $j_{t,s}$ is well-defined.
\begin{theorem}\label{theo:FellBundleFromTwistedAction}
The bundle $\B$ defined by Equation~\eqref{eq:DefFellBundleFromTwistedAction} with the obvious
projection $\B\onto S$, with the linear and norm structure on each fiber $\B_s$ inherited from $\D_s$, with the algebraic
operations~\eqref{eq:DefProductFellBundleFromTwistedAction} and~\eqref{eq:DefInvolutionFellBundleFromTwistedAction},
and the inclusion maps~\eqref{eq:DefInclusionsFellBundleFromTwistedAction} is a saturated, regular Fell bundle over $S$.
\end{theorem}
\begin{proof}
Of course, the multiplication~\eqref{eq:DefProductFellBundleFromTwistedAction} is a bilinear map from $\B_s\times\B_t$ to $\B_{st}$
and the involution~\eqref{eq:DefInvolutionFellBundleFromTwistedAction} is a conjugate-linear map from $\B_s$ to $\B_{s^*}$ for all $s,t\in S$.
The associativity of the multiplication is proved in the same way as in \cite[Proposition~2.4]{Exel:twisted.partial.actions} (see
also the proof of Proposition~3.1 in \cite{SiebenTwistedActions}; it is also possible to use the idea appearing in \cite[Theorem~2.4]{DokuchaevExelSimon:twisted.partial.actions} where no approximate unit is needed).
Moreover, it is easy to see that $\|(a\delta_s)\cdot(b\delta_t)\|\leq\|a\delta_s\|\|b\delta_t\|$ for all $s,t\in S$, $a\in \D_s$ and $b\in \D_t$.
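Indeed, since $\beta_s$ is isometric and $\omega(s,t)$ is a unitary multiplier of $\D_{st}$,
\begin{equation*}
\|(a\delta_s)\cdot(b\delta_t)\|=\big\|\beta_s\big(\beta_s\inv(a)b\big)\omega(s,t)\big\|=\big\|\beta_s\inv(a)b\big\|\leq\|\beta_s\inv(a)\|\|b\|=\|a\|\|b\|.
\end{equation*}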
All this together gives us the axioms~\eqref{def:Fell bundles over ISG:item:MultiplicationBilinear},~\eqref{def:Fell bundles over ISG:item:MultiplicationAssociative},~\eqref{def:Fell bundles over ISG:item:MultiplicationBounded} and~\eqref{def:Fell bundles over ISG:item:InvolutionConjugateLinear} in Definition~\ref{def:Fell bundles over ISG}.
Let us check that $\big((a\delta_s)^*\big)^*=a\delta_s$ for all $s\in S$ and $a\in \D_s$. By
Equation~\eqref{eq:OtherFormulaForInvolution}, we have
\begin{align*}
\big((a\delta_s)^*\big)^*&=\big(\beta_s\inv(a^*)\omega(s^*,s)^*\delta_{s^*}\big)^*
=\omega(s,s^*)^*\beta_s(\omega(s^*,s)\beta_s\inv(a))\delta_s\\
&=\omega(s,s^*)^*\beta_s(\omega(s^*,s))a\delta_s
=\omega(s,s^*)^*\omega(s,s^*)a\delta_s=a\delta_s.
\end{align*}
Next, we prove that the involution on $\B$ is anti-multiplicative, that is, we show that
$\big((a\delta_s)\cdot(b\delta_t)\big)^*=(b\delta_t)^*\cdot (a\delta_s)^*$ for all $a\in \D_s$ and $b\in \D_t$.
We have
\begin{multline*}
\big((a\delta_s)\cdot(b\delta_t)\big)^*=\big(\beta_s(\beta_s\inv(a)b)\omega(s,t)\delta_{st}\big)^*\\
=\beta_{st}\inv\big(\omega(s,t)^*\beta_s(b^*\beta_s\inv(a^*))\big)\omega(t^*s^*,st)^*\delta_{t^*s^*}=\ldots
\end{multline*}
Let $x\defeq b^*\beta_s\inv(a^*)\in \D_t\cap\D_{s^*}$, so that $\beta_t\inv(x)\in \D_{t^*}\cap\D_{t^*s^*}$ and hence
\begin{equation*}
\beta_s(x)=\beta_s(\beta_t(\beta_t\inv(x)))=\omega(s,t)\beta_{st}(\beta_t\inv(x))\omega(s,t)^*.
\end{equation*}
Thus, the above equals
\begin{multline*}
\ldots=\beta_{st}\inv\big(\beta_{st}(\beta_t\inv(x))\omega(s,t)^*\big)\omega(t^*s^*,st)^*\delta_{t^*s^*}\\
=\omega(t^*s^*,st)^*\beta_{t^*s^*}\big(\beta_{st}(\beta_t\inv(x))\omega(s,t)^*\big)\delta_{t^*s^*}=\ldots
\end{multline*}
which by Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:CocycleConditionWith*} is equal to
\begin{align*}
\ldots=\omega(t^*s^*,st)^*&\beta_{t^*s^*}\big(\beta_{st}(\beta_t\inv(x))\big)\omega(t^*s^*,st)\omega(t^*s^*s,t)^*\omega(t^*s^*,s)^*\delta_{t^*s^*}\\
&=\beta_{st}\inv\big(\beta_{st}(\beta_t\inv(x))\big)\omega(t^*s^*s,t)^*\omega(t^*s^*,s)^*\delta_{t^*s^*}\\
&=\beta_t\inv(x)\omega(t^*s^*s,t)^*\omega(t^*s^*,s)^*\delta_{t^*s^*}\\
&=\beta_t\inv(b^*\beta_s\inv(a^*))\omega(t^*s^*s,t)^*\omega(t^*s^*,s)^*\delta_{t^*s^*}.
\end{align*}
On the other hand,
\begin{align*}
(b\delta_t)^*\cdot (a\delta_s)^*&=\big(\beta_t\inv(b^*)\omega(t^*,t)^*\delta_{t^*}\big)\cdot\big(\beta_s\inv(a^*)\omega(s^*,s)^*\delta_{s^*}\big)\\
&=\beta_{t^*}\Big(\beta_{t^*}\inv\big(\beta_t\inv(b^*)\omega(t^*,t)^*\big)\beta_s\inv(a^*)\omega(s^*,s)^*\Big)\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\beta_{t^*}\Big(\beta_{t^*}\inv\big(\omega(t^*,t)^*\beta_{t^*}(b^*)\big)\beta_s\inv(a^*)\omega(s^*,s)^*\Big)\omega(t^*,s^*)\delta_{t^*s^*}=\ldots
\end{align*}
Let $(u_i)$ be an approximate unit for $\D_t$ and define $y\defeq \beta_{t^*}(b^*)\in \D_{t^*}$ and
$z\defeq\beta_s\inv(a^*)\omega(s^*,s)^*\in \D_{s^*}$. Then
\begin{align*}
\ldots&=\beta_{t^*}\Big(\beta_{t^*}\inv\big(\omega(t^*,t)^*y\big)z\Big)\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\lim\limits_{i}\beta_{t^*}\Big(\beta_{t^*}\inv\big(\omega(t^*,t)^*y\big)u_iz\Big)\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\lim\limits_{i}\omega(t^*,t)^*y\beta_{t^*}(u_iz)\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\lim\limits_{i}\omega(t^*,t)^*\beta_{t^*}(b^*u_iz)\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\omega(t^*,t)^*\beta_{t^*}(b^*z)\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\omega(t^*,t)^*\beta_{t^*}\big(b^*\beta_s\inv(a^*)\omega(s^*,s)^*\big)\omega(t^*,s^*)\delta_{t^*s^*}=\ldots
\end{align*}
Using Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:CocycleConditionWith*} again, the above equals
\begin{align*}
\ldots&=\omega(t^*,t)^*\beta_{t^*}\big(b^*\beta_s\inv(a^*)\big)\omega(t^*,s^*s)\omega(t^*s^*,s)^*\omega(t^*,s^*)^*\omega(t^*,s^*)\delta_{t^*s^*}\\
&=\beta_t\inv\big(b^*\beta_s\inv(a^*)\big)\omega(t^*,t)^*\omega(t^*,s^*s)\omega(t^*s^*,s)^*\delta_{t^*s^*}.
\end{align*}
We conclude that $\big((a\delta_s)\cdot(b\delta_t)\big)^*=(b\delta_t)^*\cdot (a\delta_s)^*$ if and only if
\begin{equation*}
c\,\omega(t^*s^*s,t)^*\omega(t^*s^*,s)^*=c\,\omega(t^*,t)^*\omega(t^*,s^*s)\omega(t^*s^*,s)^*,
\end{equation*}
where $c\defeq \beta_t\inv(b^*\beta_s\inv(a^*))\in\D_{t^*s^*}$. Multiplying the above equation on the right by $\omega(t^*s^*,s)\in\U\mult(\D_{t^*s^*})$ and taking adjoints, we see that it is equivalent to
\begin{equation*}
\omega(t^*,s^*s)\omega(t^*s^*s,t)c^*=\omega(t^*,t)c^*.
\end{equation*}
And this last equation is a consequence of axiom~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x}
in Definition~\ref{def:twisted action}. Therefore the involution on $\B$ is
anti-multiplicative. It is easy to see that the involution is isometric, that is, $\|(a\delta_s)^*\|=\|a\delta_s\|$ for all $a\in \D_s$.
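Indeed, since $\omega(s^*,s)^*$ is a unitary multiplier and $\beta_s\inv$ is isometric,
\begin{equation*}
\|(a\delta_s)^*\|=\|\beta_s\inv(a^*)\omega(s^*,s)^*\|=\|\beta_s\inv(a^*)\|=\|a^*\|=\|a\|.
\end{equation*}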
Thus, we have checked axiom~\eqref{def:Fell bundles over ISG:item:InvolutionIdempotentIsometricAntiMultiplicative}
in Definition~\ref{def:Fell bundles over ISG}.
To prove axiom~\eqref{def:Fell bundles over ISG:item:C*-Condition}
in Definition~\ref{def:Fell bundles over ISG}, take $s\in S$ and $a\in \D_s$. Then
\begin{align*}
(a\delta_s)^*\cdot(a\delta_s)&=\big(\omega(s^*,s)^*\beta_{s^*}(a^*)\delta_{s^*}\big)\cdot (a\delta_s)\\
&=\beta_{s^*}\Big(\beta_{s^*}\inv\big(\omega(s^*,s)^*\beta_{s^*}(a^*)\big)a\Big)\omega(s^*,s)\delta_{s^*s}\\
&=\beta_{s^*}\big(\beta_{s^*}\inv(\omega(s^*,s)^*)a^*a\big)\omega(s^*,s)\delta_{s^*s}\\
&=\omega(s^*,s)^*\beta_{s^*}(a^*a)\omega(s^*,s)\delta_{s^*s}.
\end{align*}
Now note that $\omega(s^*,s)^*\beta_{s^*}(a^*a)\omega(s^*,s)$ is a positive element of $\D_{s^*s}$ and its norm equals $\|a^*a\|=\|a\|^2$.
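In more detail, writing $u\defeq\omega(s^*,s)$ and $p\defeq\beta_{s^*}(a^*a)$, we have
\begin{equation*}
u^*pu=(p^{1/2}u)^*(p^{1/2}u)\geq 0\quad\mbox{and}\quad \|u^*pu\|=\|p\|=\|a^*a\|=\|a\|^2
\end{equation*}
because $u$ is unitary and $\beta_{s^*}$ is isometric.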
In order to prove axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityInclusions}
in Definition~\ref{def:Fell bundles over ISG}, take $r,s,t\in S$ with $r\leq s\leq t$ and let $a\in \D_r$.
Then we have
\begin{align*}
(j_{t,s}\circ j_{s,r})(a\delta_r)&=j_{t,s}(j_{s,r}(a\delta_r))=j_{t,s}(a\omega(s,r^*r)^*\delta_s)\\
&=a\omega(s,r^*r)^*\omega(t,s^*s)^*\delta_t\\
&=(\omega(t,s^*s)\omega(s,r^*r)a^*)^*\delta_t=a\omega(t,r^*r)^*\delta_t,
\end{align*}
where in the last equation we have used
Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:omega(t,r*r)x=omega(t,s*s)omega(s,r*r)x}.
Next, let us check that $j_{t,s}(a\delta_s)^*=j_{t^*,s^*}\big((a\delta_s)^*\big)$ for all $a\in \D_s$ and $s\leq t$ in $S$. We have
\begin{multline}\label{eq:PartCalculationInvolution}
j_{t,s}(a\delta_s)^*=\big(a\omega(t,s^*s)^*\delta_t\big)^*\\=\omega(t^*,t)^*\beta_{t^*}(\omega(t,s^*s)a^*)\delta_{t^*}
=\omega(t^*,s)^*\beta_{t^*}(a^*)\delta_{t^*},
\end{multline}
where the last equation follows from the identity
\begin{equation}\label{eq:identityConseq.CocycleCondition}
\beta_{t^*}(a)\omega(t^*,s)=\beta_{t^*}(a\omega(t,s^*s)^*)\omega(t^*,t)
\end{equation}
which in turn is a consequence of axiom~\eqref{def:twisted action:item:CocycleCondition}
in Definition~\ref{def:twisted action}. In fact, by Definition~\ref{def:twisted action}\eqref{def:twisted action:item:CocycleCondition}, we have
\begin{equation*}
\beta_{t^*}(a\omega(t,s^*s))\omega(t^*,s)=\beta_{t^*}(a)\omega(t^*,t)\omega(t^*t,s^*s)=\beta_{t^*}(a)\omega(t^*,t)1_{s^*s}.
\end{equation*}
By Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_t restriction=...}, $\beta_{t^*}(a)\in \D_{s^*s}$ so that $\beta_{t^*}(a)\omega(t^*,t)1_{s^*s}=\beta_{t^*}(a)\omega(t^*,t)$. Replacing $a$ by $a\omega(t,s^*s)^*$, this yields~\eqref{eq:identityConseq.CocycleCondition}
and hence also~\eqref{eq:PartCalculationInvolution}. On the other hand, using again Equation~\eqref{eq:OtherFormulaForInvolution}
and Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_t restriction=...},
\begin{align*}
j_{t^*,s^*}\big((a\delta_s)^*\big)&=j_{t^*,s^*}\big(\omega(s^*,s)^*\beta_{s^*}(a^*)\delta_{s^*}\big)\\
&=\omega(s^*,s)^*\beta_{s^*}(a^*)\omega(t^*,ss^*)^*\delta_{t^*}\\
&=\omega(s^*,s)^*\omega(t^*,ss^*)^*\beta_{t^*}(a^*)\omega(t^*,ss^*)\omega(t^*,ss^*)^*\delta_{t^*}\\
&=\omega(s^*,s)^*\omega(t^*,ss^*)^*\beta_{t^*}(a^*)\delta_{t^*}.
\end{align*}
Comparing this last equation with~\eqref{eq:PartCalculationInvolution}, we see that they are equal by
Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:omega(t*,s)=...sLEQt}.
This proves axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityInvolutionWithInclusions} in Definition~\ref{def:Fell bundles over ISG}.
To prove the missing axiom~\eqref{def:Fell bundles over ISG:item:CompatibilityMultiplicationWithInclusions}
in Definition~\ref{def:Fell bundles over ISG}, we shall use Lemma~\ref{lem:TechnicalLemmaDefFellBundle} and prove
that $a\delta_s\cdot j_{v,u}(b\delta_u)=j_{sv,su}(a\delta_s\cdot b\delta_u)$ for all $s,v,u\in S$ with $u\leq v$, $a\in \D_s$ and $b\in \D_u$.
Defining $c=\beta_s\inv(a)b\in \D_{s^*}\cap\D_u$ and using Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:CocycleConditionWith*}, we get
\begin{align*}
a\delta_s\cdot j_{v,u}(b\delta_u)&=(a\delta_s)\cdot (b\omega(v,u^*u)^*\delta_v)\\
&=\beta_{s}\big(\beta_s\inv(a)b\omega(v,u^*u)^*\big)\omega(s,v)\delta_{sv}\\
&=\beta_{s}\big(c\,\omega(v,u^*u)^*\big)\omega(s,v)\delta_{sv}\\
&=\beta_{s}(c)\omega(s,vu^*u)\omega(sv,u^*u)^*\delta_{sv}\\
&=\beta_{s}(c)\omega(s,u)\omega(sv,u^*u)^*\delta_{sv},
\end{align*}
where in the last equation we have used that $u\leq v$ so that $vu^*u=u$. On the other hand,
\begin{align*}
j_{sv,su}(a\delta_s\cdot b\delta_u)&=j_{sv,su}\big(\beta_s(\beta_s\inv(a)b)\omega(s,u)\delta_{su}\big)\\
&=\beta_s(c)\omega(s,u)\omega(sv,u^*s^*su)^*\delta_{sv}.
\end{align*}
Thus, to see that $a\delta_s\cdot j_{v,u}(b\delta_u)=j_{sv,su}(a\delta_s\cdot b\delta_u)$, it is enough to check
the equality $\omega(sv,u^*u)=\omega(sv,u^*s^*su)$. Since $u\leq v$, we have $v^*s^*svu^*u=v^*s^*su=v^*s^*suu^*u=v^*uu^*s^*su=u^*s^*su$.
Using Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:omega(s,e)=w(s,s*se)...}, we conclude that
\begin{equation*}
\omega(sv,u^*u)=\omega(sv,v^*s^*svu^*u)=\omega(sv,u^*s^*su).
\end{equation*}
Therefore $\B$ is a Fell bundle over $S$. Note that $\B$ is saturated because the element $\beta_s(\beta_s\inv(a)b)\omega(s,t)$
appearing in the product
\begin{equation*}
(a\delta_s)\cdot (b\delta_t)=\beta_s(\beta_s\inv(a)b)\omega(s,t)\delta_{st}
\end{equation*}
is an arbitrary element of $\D_{st}$. In fact, $\beta_s\inv(a)b$ for $a\in \D_s$ and $b\in \D_t$ defines an arbitrary element of $\D_{s^*}\cap \D_t$,
and by Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_r(D_r* inter D_s)=D_rs}, $\beta_s(\D_{s^*}\cap\D_t)=\D_{st}$. The conclusion follows because
$\omega(s,t)$ is a unitary multiplier of $\D_{st}$. Finally, we leave it to the reader to check that $\B$ is regular with respect to the unitary multipliers
$v_s\in \mult(\B_s)$ defined by
\begin{equation}\label{eq:definitionOfu_s}
v_s\cdot (a\delta_{s^*s})\defeq \beta_s(a)\delta_s\quad\mbox{for all }a\in \D_{s^*s}.
\end{equation}
Here $\mult(\B_s)=\Ls(\B_{s^*s},\B_s)$ denotes the multipliers of $\B_s$, considered as an imprimitivity
Hilbert $\B_{ss^*},\B_{s^*s}$-bimodule.
Observe that, formally, we have $v_s=\delta_s=1_{ss^*}\delta_s$.
Recall that $1_{e}$ denotes the unit of the multiplier algebra of $\D_e$.
\end{proof}
Summarizing our results, we have shown that there is a correspondence between twisted actions and regular, saturated Fell bundles.
Given a twisted action $(\beta,\omega)$ of $S$ on a \cstar{}algebra $B$, we have constructed above a regular, saturated Fell
bundle $\B$. Moreover, the original twisted action $(\beta,\omega)$ can be recovered from the Fell bundle $\B$ using the
unitary multipliers $v_s=1_{ss^*}\delta_s\in\mult(\B_s)$ defined by Equation~\eqref{eq:definitionOfu_s}. More precisely, starting
with $\B$ and the unitary multipliers $v_s$, and proceeding as in Section~\ref{sec:regular Fell bundles}, we get a
twisted action $(\tilde\beta,\tilde\omega)$ of $S$ on the \cstar{}algebra $C^*(\E)$, where $\E=\{\B_e\}_{e\in E}$
is the restriction of $\B$ to $E=E(S)$. The \cstar{}algebra $\B_e=\D_e\delta_e$ is canonically isomorphic to the ideal $\D_e\sbe B$
and the sum of these ideals is dense in $B$. By Proposition~4.3 in \cite{Exel:noncomm.cartan}, this yields a canonical
isomorphism $C^*(\E)\cong B$ extending the isomorphisms $\B_e\cong \D_e$. Under these isomorphisms, $\tilde\beta_s\colon\B_{s^*s}\to \B_{ss^*}$
corresponds to $\beta_s\colon\D_{s^*s}\to \D_{ss^*}$ and the unitary
multipliers $\tilde\omega(s,t)\in \U\mult(\B_{stt^*s^*})$ correspond to $\omega(s,t)\in \U\mult(\D_{st})=\U\mult(\D_{stt^*s^*})$.
In fact, first note that (by Equation~\eqref{eq:DefInvolutionFellBundleFromTwistedAction})
\begin{equation*}
v_s^*=(1_{ss^*}\delta_s)^*=\omega(s^*,s)^*\delta_{s^*}=\omega(s^*,s)^*1_{s^*s}\delta_{s^*}=\omega(s^*,s)^*v_{s^*}\in \mult(\B_{s^*}).
\end{equation*}
Thus, using Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_s(omega(s*,s))=omega(s,s*)}, we get
\begin{align*}
\tilde\beta_s(a\delta_{s^*s})&=v_s\cdot a\delta_{s^*s}\cdot v_s^*=(1_{ss^*}\delta_s)\cdot(a\delta_{s^*s})\cdot(\omega(s^*,s)^*\delta_{s^*})\\
&=(\beta_s(a)\delta_s)\cdot(\omega(s^*,s)^*\delta_{s^*})=\beta_s(a\omega(s^*,s)^*)\omega(s,s^*)\delta_{ss^*}\\
&=\beta_s(a)\omega(s,s^*)^*\omega(s,s^*)\delta_{ss^*}=\beta_s(a)\delta_{ss^*}
\end{align*}
and, using that $\beta_s(1_{s^*s}1_e)=1_{ses^*}$ for all $e\in E(S)$,
\begin{align*}
\tilde\omega(s,t)&=v_sv_tv_{st}^*=(1_{ss^*}\delta_s)\cdot (1_{tt^*}\delta_t)\cdot(1_{stt^*s^*}\delta_{st})^*\\
&=\big(\beta_s(\beta_s\inv(1_{ss^*})1_{tt^*})\omega(s,t)\delta_{st}\big)\cdot \big(\omega(t^*s^*,st)^*\delta_{t^*s^*}\big)\\
&=\big(1_{stt^*s^*}\omega(s,t)\delta_{st}\big)\cdot \big(\omega(t^*s^*,st)^*\delta_{t^*s^*}\big)\\
&=\beta_{st}\big(\beta_{st}\inv(\omega(s,t))\omega(t^*s^*,st)^*\big)\omega(st,t^*s^*)\delta_{stt^*s^*}\\
&=\omega(s,t)\omega(st,t^*s^*)^*\omega(st,t^*s^*)\delta_{stt^*s^*}=\omega(s,t)\delta_{stt^*s^*}.
\end{align*}
This shows that the twisted action $(\tilde\beta,\tilde\omega)$ of $S$ on $C^*(\E)$ is isomorphic to the
twisted action $(\beta,\omega)$ of $S$ on $B$. Now let us assume that $(\beta,\omega)$ already comes from a
regular, saturated Fell bundle $\A=\{\A_s\}_{s\in S}$ as in Section~\ref{sec:regular Fell bundles} for some family
$u=\{u_s\}_{s\in S}$ of unitary multipliers $u_s\in\mult(\A_s)$ with $u_e=1_e$ for all $e\in E(S)$. In other words,
we are assuming that $\beta_s(a)=u_sau_s^*$ and $\omega(s,t)=u_su_tu_{st}^*$. Then it is not difficult to see that
the map $\B\to\A$ given on the fibers $\B_s\to \A_s$ by $a\delta_s\mapsto au_s$ is an isomorphism of Fell bundles $\B\cong \A$.
Moreover, under this isomorphism, the unitary multiplier $v_s\in \mult(\B_s)$ corresponds to $u_s\in \mult(\A_s)$.
All this together essentially proves the following result:
\begin{corollary}\label{cor:CorrespondenceRegularFellBundlesAndTwistedActions}
There is a bijective correspondence between isomorphism classes of twisted actions $(\beta,\omega)$ of $S$
and isomorphism classes of pairs $(\A,u)$ consisting of regular, saturated Fell bundles $\A$ over $S$ and families $u=\{u_s\}_{s\in S}$ of
unitary multipliers $u_s\in \mult(\A_s)$ satisfying $u_e=1_e$ for all $e\in E(S)$.
\end{corollary}
An isomorphism of pairs $(\B,v)\cong (\A,u)$ has the obvious meaning: it is an isomorphism $\B\cong \A$ of the Fell bundles
under which $v_s\in \mult(\B_s)$ corresponds to $u_s\in \mult(\A_s)$ for all $s\in S$.
As already observed in Section~\ref{sec:regular TROs}, imprimitivity bimodules over $\sigma$\nb-unital stable \cstar{}algebras are automatically regular. This immediately implies the following:
\begin{corollary}
Let $\A=\{\A_s\}_{s\in S}$ be a saturated Fell bundle for which all the fibers $\A_e$ with $e\in E(S)$ are
$\sigma$\nb-unital and stable. Then $\A$ is isomorphic to a Fell bundle associated to some twisted action of $S$.
\end{corollary}
\begin{remark}
Let $\E$ be the restriction of $\A$ to $E(S)$. If $B=C^*(\E)$ is stable, then so are the fibers $\A_e$ for all $e\in E(S)$ because
each $\A_e$ may be viewed as an ideal of $B$. The converse is not clear and is related to Question~2.6 in \cite{Rordam:StableExtensions},
which asks whether the sum of stable ideals in a \cstar{}algebra is again stable.
\end{remark}
\section{Relation to Sieben's twisted actions}
\label{sec:RelationToSiebensTwistedActions}
In this section we are going to see that our notion of twisted action generalizes Sieben's definition appearing in \cite{SiebenTwistedActions}.
\begin{proposition}\label{prop:SiebensTwistedActions}
Let $S$ be an inverse semigroup, let $B$ be a \cstar{}algebra, and consider the data $\bigl(\{\D_s\}_{s\in S},\{\beta_s\}_{s\in S},\{\omega(s,t)\}_{s,t\in S}\bigr)$ satisfying axioms~\eqref{def:twisted action:item:beta_r beta_s=Ad_omega(r,s)beta_rs} and~\eqref{def:twisted action:item:CocycleCondition}
as in Definition~\ref{def:twisted action}, and in addition assume that
\begin{equation}\label{eq:SiebenAxiomTwistedAction}
\omega(s,e)=1_{se}\quad\mbox{and}\quad\omega(e,s)=1_{es}\quad\mbox{for all }s\in S\mbox{ and }e\in E(S).
\end{equation}
Then $\bigl(\{\D_s\}_{s\in S},\{\beta_s\}_{s\in S},\{\omega(s,t)\}_{s,t\in S}\bigr)$ is a twisted action of $S$ on $B$.
\end{proposition}
\begin{proof}
If \eqref{eq:SiebenAxiomTwistedAction} holds, then axiom~\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1} in Definition~\ref{def:twisted action} is trivially satisfied.
Moreover, axiom~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} follows from the relation
\begin{equation}\label{eq:SiebenAxiomConsequences}
\omega(s^*,s)1_{s^*es}=\omega(s^*e,s)=\omega(s^*,es)
\end{equation}
for all $s\in S$ and $e\in E(S)$. This is a consequence of properties (i) and (j) of Lemma~2.3 in \cite{SiebenTwistedActions}.
Now, the first equation in \eqref{eq:SiebenAxiomConsequences} immediately
implies~\eqref{def:twisted action:item:omega(s*,e)omega(s*e,s)x=omega(s*,s)x} in Definition~\ref{def:twisted action} because
$\omega(s^*,e)=1_{s^*e}=1_{s^*es}$.
\end{proof}
\begin{definition}\label{def:SiebensTwistedAction}
A twisted action $(\beta,\omega)$ satisfying \emph{Sieben's condition}~\eqref{eq:SiebenAxiomTwistedAction}
is called a \emph{Sieben's twisted action} (see \cite[Definition~2.2]{SiebenTwistedActions}).
\end{definition}
The following result gives conditions under which a twisted action coming from a regular Fell bundle is a Sieben's twisted action.
\begin{proposition}\label{prop:CoherentHomogeneous}
Let $\A=\{\A_s\}_{s\in S}$ be a concrete, saturated, regular Fell bundle in $\bound(\hils)$. Given $s\in S$,
let $u_s\in \bound(\hils)$ be a partial isometry strictly associated to the {\tro} $\A_s$ such that $u_e=1_e$ for all $e\in E(S)$.
Given $s,t\in S$, we write $\omega(s,t)\defeq u_su_tu_{st}^*$.
Then the following assertions are equivalent\textup:
\begin{enumerate}[(i)]
\item $\omega(s,e)=1_{se}$ for all $s\in S$ and $e\in E(S)$;\label{prop:CoherentHomogeneous:item:omega(s,e)=1_se}
\item $u_su_e=u_{se}$ for all $s\in S$ and $e\in E(S)$;
\item $u_rx=u_sx$ for all $r,s\in S$ with $r\leq s$ and $x\in \A_{r^*r}$;
\item $u_r\leq u_s$ \textup(as partial isometries of $\bound(\hils)$\textup) for all $r,s\in S$ with $r\leq s$;
\label{prop:CoherentHomogeneous:item:u_r leq u_s}
\item $yu_r=yu_s$ for all $r,s\in S$ with $r\leq s$ and $y\in \A_{rr^*}$;
\item $u_eu_s=u_{es}$ for all $s\in S$ and $e\in E(S)$;
\item $\omega(e,s)=1_{es}$ for all $s\in S$ and $e\in E(S)$.\label{prop:CoherentHomogeneous:item:omega(e,s)=1_es}
\end{enumerate}
\end{proposition}
\begin{proof}
Given $s,t\in S$, note that $u_su_t$ is associated to $\A_{st}$ because
\begin{equation*}
u_su_t\A_{t^*s^*st}=u_su_t\A_t^*\A_{s^*st}=u_s\A_{tt^*}\A_{s^*st}=u_s\A_{s^*st}=u_s\A_{s^*s}\A_t=\A_s\A_t=\A_{st}
\end{equation*}
and similarly $\A_{stt^*s^*}u_su_t=\A_{st}$. Moreover $u_su_t$ is strictly associated to $\A_{st}$ because
$(u_su_t)^*(u_su_t)=u_t^*u_s^*u_su_t=u_t^*u_{s^*s}u_t=1_{t^*s^*st}$ (see Remark~\ref{rem:CoherenceOfPartialIsometries} and Proposition~\ref{prop:PartialIsometryStrictlyAssociated}).
Thus, we may view both $u_su_t$ and $u_{st}$ as unitary multipliers of the Hilbert bimodule $\A_{st}$.
The equation $\omega(s,t)=1_{st}$ is the same as $u_su_tu_{st}^*=1_{st}$ which is equivalent to
$u_su_t1_{t^*s^*st}=u_{st}$ because $u_{st}^*u_{st}=1_{t^*s^*st}$ and $1_{st}u_{st}=u_{st}$.
Moreover, since $u_su_t$ is a multiplier of $\A_{st}$, we have $u_su_t1_{t^*s^*st}=u_su_t$. Therefore the equation $\omega(s,t)=1_{st}$
is equivalent to $u_su_t=u_{st}$. From this we see that (i) is equivalent to (ii) and (vi) is equivalent to (vii).
Now, if $r\leq s$ in $S$, then $r=se$ for $e=r^*r$. Assuming (ii) and remembering that $u_e=1_e$, we get
\begin{equation*}
u_sx=u_su_ex=u_{se}x=u_rx\quad\mbox{for all }x\in \A_{r^*r}.
\end{equation*}
Thus (ii) implies (iii). Conversely, if $s\in S$ and $e\in E(S)$, and if we apply (iii) for $r=se$, then we get
$u_{se}x=u_rx=u_sx=u_su_ex$ for all $x\in \A_{r^*r}=\A_{s^*se}=\A_{s^*s}\A_e$. This implies that $u_{se}=u_su_e$ because both $u_{se}$ and $u_su_e$
are multipliers of $\A_{se}$. Hence (ii) is equivalent to (iii). In the same way one can prove that (v) is equivalent to (vi).
Finally, condition (iv) is equivalent to $u_su_{r^*r}=u_su_r^*u_r=u_r$ whenever $r\leq s$ in $S$. Applying this for $r=se$ for some fixed $e\in E(S)$,
and using that $u_{r^*r}=u_{s^*se}=u_{s^*s}u_e$ and $u_su_{s^*s}=u_s$, we get condition (ii). Conversely, it is easy to see that (ii) implies (iv).
Similarly, observing that (iv) is equivalent to $u_{rr^*}u_s=u_{r}u_r^*u_s=u_r$ for all $r\leq s$, one proves that (iv) is equivalent to (vi).
\end{proof}
\begin{remark}\label{rem:CoherenceOfPartialIsometries}
Let notation be as in Proposition~\ref{prop:CoherentHomogeneous}.
Observe that the family of partial isometries $\{u_s\}_{s\in S}$ generates an inverse semigroup $\SSS$
under the product and involution of $\bound(\hils)$. Recall that the product $uv$ of two partial isometries
$u,v\in \bound(\hils)$ is again a partial isometry provided $u^*u$ and $vv^*$ commute.
This happens for the partial isometries $\{u_s\}_{s\in S}$ because $u_s^*u_s=1_{s^*s}$, $u_su_s^*=u_{ss^*}$
and $1_e1_f=1_{ef}=1_{fe}=1_{f}1_e$ for all $e,f\in E(S)$.
Thus the product $u_su_t$ is a partial isometry for all $s,t\in S$. Moreover, any finite product involving the partial isometries
$\{u_s\}_{s\in S}$ and their adjoints $\{u_s^*\}_{s\in S}$ is again a partial isometry. The idea is that for any such product $u$, say,
we have $u^*u=1_e$ and $uu^*=1_f$ for some $e,f\in E(S)$, and as we already said, $1_e$ and $1_f$ commute.
For example, to prove that $u_ru_su_t$ is a partial isometry,
one observes that $(u_ru_s)^*(u_ru_s)$ commutes with $u_tu_t^*=1_{tt^*}$. In fact, note that
\begin{equation*}
(u_ru_s)^*(u_ru_s)=u_s^*u_r^*u_ru_s=u_s^*1_{r^*r}u_s
\end{equation*}
and for every $e\in E(S)$, we have $u_s^*1_eu_s=1_{s^*es}$ because, for all $x\in \D_{s^*es}=\A_{s^*es}=\A_{s^*}\A_{es}$ we have
$u_sx\in u_s\A_{s^*}\A_{es}=\A_{ss^*}\A_{es}=\A_{es}=\A_e\A_s$, so that
\begin{equation*}
u_s^*1_eu_sx=u_s^*u_sx=1_{s^*s}x=x,
\end{equation*}
where in the last equation we have used that $\D_{s^*es}\sbe\D_{s^*s}$ because $s^*es\leq s^*s$.
More generally, one can prove that
\begin{equation*}
(u_{s_1}\ldots u_{s_n})^*(u_{s_1}\ldots u_{s_n})=1_{s_n^*\ldots s_1^*s_1\ldots s_n}.
\end{equation*}
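This general formula follows by induction on $n$ from the identity $u_s^*1_eu_s=1_{s^*es}$ established above; the induction step reads
\begin{align*}
(u_{s_1}\ldots u_{s_n})^*(u_{s_1}\ldots u_{s_n})
&=u_{s_n}^*\big((u_{s_1}\ldots u_{s_{n-1}})^*(u_{s_1}\ldots u_{s_{n-1}})\big)u_{s_n}\\
&=u_{s_n}^*1_{s_{n-1}^*\ldots s_1^*s_1\ldots s_{n-1}}u_{s_n}=1_{s_n^*\ldots s_1^*s_1\ldots s_n}.
\end{align*}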
The inequality appearing in Proposition~\ref{prop:CoherentHomogeneous}\eqref{prop:CoherentHomogeneous:item:u_r leq u_s}
can be interpreted as the order relation of $\SSS$.
This justifies the argument at the end of the proof of Proposition~\ref{prop:CoherentHomogeneous} where we have used
that for $s\leq t$ in $S$, we have $u_s\leq u_t$ if and only if $u_s=u_tu_s^*u_s$ if and only if $u_s=u_su_s^*u_t$.
Note that the cocycles $\omega(s,t)$ belong to the inverse semigroup $\SSS$.
Moreover, if the partial isometries $\{u_s\}_{s\in S}$ satisfy the equivalent conditions~\eqref{prop:CoherentHomogeneous:item:omega(s,e)=1_se}-\eqref{prop:CoherentHomogeneous:item:omega(e,s)=1_es} of
Proposition~\ref{prop:CoherentHomogeneous}, then the cocycles $\{\omega(s,t)\}_{s,t\in S}$ satisfy
\begin{equation}\label{eq:CoherenceCocycles}
\omega(s,t)\leq\omega(s',t')\quad\mbox{for all }s,s',t,t'\in S\mbox{ with }s\leq s'\mbox{ and }t\leq t'.
\end{equation}
Here we view the unitaries $\omega(s,t)$ and $\omega(s',t')$ as elements of $\SSS$ in
order to give a meaning to the inequality~\eqref{eq:CoherenceCocycles}. Note that this follows directly from
condition~\eqref{prop:CoherentHomogeneous:item:u_r leq u_s} in Proposition~\ref{prop:CoherentHomogeneous} because the
product and the involution in $\SSS$ preserve the order relation. As in Proposition~\ref{prop:CoherentHomogeneous},
condition~\eqref{eq:CoherenceCocycles} can be also rewritten as
\begin{equation*}
\omega(s,t)x=\omega(s',t')x\quad\mbox{for all }s,s',t,t'\in S\mbox{ with }s\leq s'\mbox{ and }t\leq t'\mbox{ and }x\in \D_{st},
\end{equation*}
or equivalently as
\begin{equation*}
x\omega(s,t)=x\omega(s',t')\quad\mbox{for all }s,s',t,t'\in S\mbox{ with }s\leq s'\mbox{ and }t\leq t'\mbox{ and }x\in \D_{st}.
\end{equation*}
\end{remark}
\section{Representations and crossed products}
\label{sec:RepresentationsCrossedProducts}
In this section we prove that the correspondence between regular Fell bundles and twisted actions
obtained in the previous sections extends to the level of representations and yields an isomorphism of the associated
universal \cstar{}algebras.
We start by recalling the definition of representations of Fell bundles (see \cite{Exel:noncomm.cartan}):
\begin{definition}\label{def:RepresentationFellBundle}
Let $\A=\{\A_s\}_{s\in S}$ be a Fell bundle over an inverse semigroup $S$.
A \emph{representation} of $\A$ on a Hilbert space $\hils$ is a family $\pi=\{\pi_s\}_{s\in S}$
of linear maps $\pi_s\colon \A_s\to \Ls(\hils)$ satisfying
\begin{enumerate}[(i)]
\item $\pi_s(a)\pi_t(b)=\pi_{st}(ab)$ and $\pi_s(a)^*=\pi_{s^*}(a^*)$ for all $s,t\in S$, $a\in \A_s$, $b\in \A_t$;\label{def:RepresentationFellBundle:item:AlgebraicOperations}
\item $\pi_t(j_{t,s}(a))=\pi_s(a)$ for all $s,t\in S$ with $s\leq t$ and $a\in \A_s$. Recall that $j_{t,s}\colon\A_s\to \A_t$
denotes the inclusion map as in Definition~\ref{def:Fell bundles over ISG}.\label{def:RepresentationFellBundle:item:InclusionMaps}
\end{enumerate}
We usually view a representation of $\A$ as a map $\pi\colon\A\to \Ls(\hils)$ whose restriction to $\A_s$ gives the maps $\pi_s$ satisfying
the conditions above.
\end{definition}
Next, we need a notion of covariant representation for twisted crossed products.
It turns out that, although our definition of twisted action is more general than Sieben's
(see Section~\ref{sec:RelationToSiebensTwistedActions}), the notion of representation remains the same (see \cite[Definition~3.2]{SiebenTwistedActions}):
\begin{definition}\label{def:CovariantRepresentation}
Let $(\beta,\omega)$ be a twisted action of an inverse semigroup $S$ on a \cstar{}algebra $B$.
A \emph{covariant representation} of $(\beta,\omega)$ on a Hilbert space $\hils$
is a pair $(\rho,v)$ consisting of a $*$-homomorphism $\rho\colon B\to \Ls(\hils)$ and a family $v=\{v_s\}_{s\in S}$ of
partial isometries $v_s\in \Ls(\hils)$ satisfying
\begin{enumerate}[(i)]
\item $\rho(\beta_s(b))=v_s\rho(b)v_s^*$ for all $s\in S$ and $b\in B$;\label{def:CovariantRepresentation:item:rho(beta_s(b))=v_s rho(b) v_s*}
\item $\rho(\omega(s,t))=v_sv_tv_{st}^*$ for all $s,t\in S$; and \label{def:CovariantRepresentation:item:rho(omega(s,t))=v_sv_tv_st*}
\item $v_s^*v_s=\rho(1_{s^*s})$ and $v_sv_s^*=\rho(1_{ss^*})$. \label{def:CovariantRepresentation:item:v_s*v_s=pi(1_s*s)}
\end{enumerate}
Recall that $1_e$ denotes the unit of the multiplier algebra of $\D_e=\dom(\beta_e)$ for all $e\in E(S)$. The third condition above is equivalent
to the requirements $\rho(\D_{s^*s})\hils=v_s^*v_s\hils$ and $\rho(\D_{ss^*})\hils=v_sv_s^*\hils$. In both axioms (ii) and (iii) above we have
tacitly extended $\rho$ to the enveloping von Neumann algebra $B''$ of $B$ in order to give a meaning to $\rho(\omega(s,t))$ and $\rho(1_e)$.
\end{definition}
To each notion of representation is attached a universal \cstar{}algebra which encodes all the representations.
In the case of a Fell bundle $\A$, it is the so called \emph{(full) cross-sectional \cstar{}algebra} $C^*(\A)$
defined in \cite{Exel:noncomm.cartan}. For a twisted action
$(B,S,\beta,\omega)$, the \emph{(full) crossed product} $B\rtimes_{\beta,\omega}S$ defined in
\cite{SiebenTwistedActions} can still be used although our definition of twisted action is more general.
Our aim in this section is to relate these notions when
$\A$ is the Fell bundle associated to $(B,S,\beta,\omega)$ as in the previous sections.
\begin{theorem}\label{theo:CorresponceRepresentations}
Let $(\beta,\omega)$ be a twisted action of an inverse semigroup $S$ on a \cstar{}algebra $B$,
and let $\A=\{\A_s\}_{s\in S}$ be the associated Fell bundle as in Section~\ref{sec:TwistedActions}.
Then there is a bijective correspondence between covariant representations of $(\beta,\omega)$ and representations of $\A$.
Moreover, there exists an isomorphism $B\rtimes_{\beta,\omega}S\cong C^*(\A)$.
\end{theorem}
\begin{proof}
Recall that the fiber $\A_s=\D_s\delta_s$ is a copy of $\D_s=\D_{ss^*}=\ran(\beta_s)$. During the proof we
write $u_s$ for the unitary multiplier $1_{ss^*}\delta_s\in \mult(\A_s)$ (here $\A_s$ is viewed as an imprimitivity Hilbert $\A_{ss^*},\A_{s^*s}$\nb-bimodule). With this notation, we have $\A_s=\D_{ss^*}u_s$ for all $s\in S$.
In particular, $\A_e=\D_e\delta_e\cong\D_e$ as \cstar{}algebras and this induces an isomorphism $C^*(\E)\cong B$,
where $\E$ is the restriction of $\A$ to $E(S)$. Moreover, as we have seen
in the previous section, the unitaries $u_s$ can be used to recover the twisted action $(\beta,\omega)$ through the formulas:
\begin{equation}\label{eq:CharacterizationOfBetaViaU_s}
\beta_s(b)\delta_{ss^*}=u_s(b\delta_{s^*s})u_s^*,
\end{equation}
\begin{equation}\label{eq:CharacterizationOfOmegaViaU_s}
\omega(s,t)\delta_{stt^*s^*}=u_su_tu_{st}^*.
\end{equation}
Let $\pi\colon \A\to \Ls(\hils)$ be a representation of $\A$. This representation integrates to a \Star{}homomorphism
$\tilde\pi\colon C^*(\A)\to \Ls(\hils)$. We may view the fibers $\A_e$ for $e\in E(S)$, as well as $C^*(\E)$, as subalgebras of $C^*(\A)$.
Restricting $\tilde\pi$ to $C^*(\E)$, we get a \Star{}homomorphism $\rho$ from $B\cong C^*(\E)$ into $\Ls(\hils)$, which is characterized
by
\begin{equation*}
\rho(b)=\tilde\pi(b\delta_e)\quad\mbox{whenever }e\in E(S)\mbox{ and }b\in \D_e.
\end{equation*}
By Proposition~\ref{prop:strictlyAssociated=>weakClosure},
the unitary multipliers $u_s$ may be viewed as elements of the enveloping von Neumann algebra $A''$ of $A\defeq C^*(\A)$.
Considering the unique weakly continuous extension of $\tilde\pi$ to $A''$ (and still using the same notation for it), we define
the partial isometries $v_s\defeq \tilde\pi(u_s)\in \Ls(\hils)$. The pair $(\rho,v)$ is a covariant representation of $(\beta,\omega)$.
In fact, using Equation~\eqref{eq:CharacterizationOfBetaViaU_s}, we get
\begin{align*}
\rho(\beta_s(b))&=\tilde\pi(\beta_s(b)\delta_{ss^*})=\tilde\pi(u_s(b\delta_{s^*s})u_s^*)\\
&=\tilde\pi(u_s)\tilde\pi(b\delta_{s^*s})\tilde\pi(u_s)^*=v_s\rho(b)v_s^*
\end{align*}
for all $s\in S$ and $b\in \D_{s^*s}$. Since the ideals $\D_e\sbe B$ span a dense subspace of $B$, this proves~\eqref{def:CovariantRepresentation:item:rho(beta_s(b))=v_s rho(b) v_s*} in Definition~\ref{def:CovariantRepresentation}.
Definition~\ref{def:CovariantRepresentation}\eqref{def:CovariantRepresentation:item:rho(omega(s,t))=v_sv_tv_st*} follows in the same way
using Equation~\eqref{eq:CharacterizationOfOmegaViaU_s}:
\begin{equation*}
\rho(\omega(s,t))=\tilde\pi(\omega(s,t)\delta_{stt^*s^*})=\tilde\pi(u_su_tu_{st}^*)
=\tilde\pi(u_s)\tilde\pi(u_t)\tilde\pi(u_{st})^*=v_sv_tv_{st}^*.
\end{equation*}
Since $u_s^*u_s=1_{s^*s}\delta_{s^*s}$ and $u_su_s^*=1_{ss^*}\delta_{ss^*}$, axiom~\eqref{def:CovariantRepresentation:item:v_s*v_s=pi(1_s*s)}
in Definition~\ref{def:CovariantRepresentation} also follows easily. Hence we have a map $\pi\mapsto (\rho,v)$ from
the set of representations of $\A$ on $\hils$ to the set of covariant representations of $(\beta,\omega)$ on $\hils$.
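For completeness, the verification of axiom~\eqref{def:CovariantRepresentation:item:v_s*v_s=pi(1_s*s)} in Definition~\ref{def:CovariantRepresentation} amounts to the (somewhat formal) computation
\begin{equation*}
v_s^*v_s=\tilde\pi(u_s)^*\tilde\pi(u_s)=\tilde\pi(u_s^*u_s)=\tilde\pi(1_{s^*s}\delta_{s^*s})=\rho(1_{s^*s}),
\end{equation*}
and similarly $v_sv_s^*=\tilde\pi(u_su_s^*)=\tilde\pi(1_{ss^*}\delta_{ss^*})=\rho(1_{ss^*})$; here $\tilde\pi$ and $\rho$ are applied to multipliers via their weakly continuous extensions.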
Conversely, let us now start with a covariant representation $(\rho,v)$ of $(\beta,\omega)$ on $\hils$. Then we can
define $\pi\colon\A\to \Ls(\hils)$ by $\pi(a\delta_s)\defeq \rho(a)v_s$ for all $s\in S$ and $a\in \D_{ss^*}$.
Before we prove that $\pi$ is a representation of $\A$, observe that $v_e=\rho(1_e)$ for all $e\in E(S)$ (see the proof of Proposition~3.5 in \cite{SiebenTwistedActions}) so that $v_s^*v_s=\rho(1_{s^*s})=v_{s^*s}$ and $v_sv_s^*=\rho(1_{ss^*})=v_{ss^*}$ for all $s\in S$.
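For the reader's convenience, this observation can also be derived directly from axioms~\eqref{def:CovariantRepresentation:item:rho(omega(s,t))=v_sv_tv_st*} and~\eqref{def:CovariantRepresentation:item:v_s*v_s=pi(1_s*s)} in Definition~\ref{def:CovariantRepresentation}: since $\omega(e,e)=1_e$ and $v_ev_e^*=\rho(1_e)=v_e^*v_e$, we have
\begin{equation*}
\rho(1_e)=\rho(\omega(e,e))=v_ev_ev_e^*=v_e\rho(1_e)=v_ev_e^*v_e=v_e.
\end{equation*}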
By Lemma~\ref{lem:ConsequencesDefTwistedAction}\eqref{lem:ConsequencesDefTwistedAction:item:beta_s*=Ad_omega(s*,s)beta_s inv}, we have
\begin{align*}
v_s\rho(\beta_s\inv(a))&=v_s\rho(\omega(s^*,s))^*\rho(\beta_{s^*}(a))\rho(\omega(s^*,s))\\
&=v_sv_{s^*s}v_s^*v_{s^*}^*v_{s^*}\rho(a)v_{s^*}^*v_{s^*}v_sv_{s^*s}=\rho(a)v_s=\pi(a\delta_s).
\end{align*}
Thus, for all $s,t\in S$, $a\in \D_{ss^*}$ and $b\in \D_{tt^*}$,
\begin{align*}
\pi\big((a\delta_s)\cdot(b\delta_t)\big)&=\pi\big(\beta_s(\beta_s\inv(a)b)\omega(s,t)\delta_{st}\big)\\
&=\rho\big(\beta_s(\beta_s\inv(a)b)\big)\rho(\omega(s,t))v_{st}\\
&=v_s\rho(\beta_s\inv(a))\rho(b)v_s^*v_sv_tv_{st}^*v_{st}\\
&=v_s\rho(\beta_s\inv(a))\rho(b)v_t=\pi(a\delta_s)\pi(b\delta_t).
\end{align*}
Similarly,
\begin{align*}
\pi\big((a\delta_s)^*\big)&=\pi(\beta_s\inv(a^*)\omega(s^*,s)^*\delta_{s^*})=\rho(\beta_s\inv(a^*))\rho(\omega(s^*,s)^*)v_{s^*}\\
&=\rho(\beta_s\inv(a^*))(v_{s^*}v_sv_{s^*s})^*v_{s^*}=\rho(\beta_s\inv(a^*))v_s^*\\
&=v_s^*\rho(a^*)v_sv_s^*=(\rho(a)v_s)^*=\pi(a\delta_s)^*.
\end{align*}
This proves axiom~\eqref{def:RepresentationFellBundle:item:AlgebraicOperations} in Definition~\ref{def:RepresentationFellBundle}.
To prove Definition~\ref{def:RepresentationFellBundle}\eqref{def:RepresentationFellBundle:item:InclusionMaps},
take $s,t\in S$ with $s\leq t$ and $a\in \D_{ss^*}$. Note that
\begin{equation*}
v_{s^*s}v_t^*v_t=\rho(1_{s^*s})\rho(1_{t^*t})=\rho(1_{s^*st^*t})=\rho(1_{s^*s})=v_{s^*s}.
\end{equation*}
Hence
\begin{multline*}
\pi(j_{t,s}(a\delta_s))=\pi(a\omega(t,s^*s)^*\delta_t)=\rho(a\omega(t,s^*s)^*)v_t=\rho(a)(v_tv_{s^*s}v_{ts^*s}^*)^*v_t\\
=\rho(a)v_sv_{s^*s}v_t^*v_t=\rho(a)v_sv_{s^*s}=\rho(a)v_s=\pi(a\delta_s).
\end{multline*}
Therefore $\pi$ is a representation of $\A$ on $\hils$. It is not difficult to see that the assignments $\pi\mapsto (\rho,v)$
and $(\rho,v)\mapsto \pi$ between representations $\pi$ of $\A$ and covariant representations $(\rho,v)$ of $(\beta,\omega)$
are inverse to each other and hence give the desired bijective
correspondence as in the assertion. The isomorphism $B\rtimes_{\beta,\omega}S\cong C^*(\A)$ now follows from the universal properties of
$B\rtimes_{\beta,\omega}S$ and $C^*(\A)$ with respect to (covariant) representations
(see \cite{Exel:noncomm.cartan,SiebenTwistedActions} for details).
\end{proof}
In \cite{Exel:noncomm.cartan} the second named author also defines the \emph{reduced cross-sectional algebra} $C^*_\red(\A)$ of a Fell bundle $\A$.
It is the image of $C^*(\A)$ by (the integrated form of) a certain special representation of $\A$, the so called
\emph{regular representation} of $\A$. Using Theorem~\ref{theo:CorresponceRepresentations} we can now also define reduced crossed products
for twisted actions (this was not defined in \cite{SiebenTwistedActions}):
\begin{definition}
Let $(B,S,\beta,\omega)$ be a twisted action and let $\A$ be the associated Fell bundle.
Let $\Lambda$ denote the regular representation of $\A$ as defined in \cite[Section~8]{Exel:noncomm.cartan}.
The \emph{regular covariant representation} of $(B,S,\beta,\omega)$ is the representation $(\lambda,\upsilon)$ corresponding to $\Lambda$
as in Theorem~\ref{theo:CorresponceRepresentations}.
The \emph{reduced crossed product} is
\begin{equation*}
B\rtimes_{\beta,\omega}^\red S\defeq \lambda\rtimes\upsilon\big(B\rtimes_{\beta,\omega}S\big),
\end{equation*}
where $\lambda\rtimes\upsilon$ is the \emph{integrated form} of $(\lambda,\upsilon)$ defined in \cite[Definition~3.6]{SiebenTwistedActions}.
\end{definition}
Notice that, by definition, we have a canonical isomorphism
\begin{equation}\label{eq:IsomorphismReducedCrossedProductAndReducedGrupoideAlgebra}
B\rtimes_{\beta,\omega}^\red S\cong C^*_\red(\A).
\end{equation}
\section{Relation to twisted groupoids}
\label{sec:motivating example}
Let $\G$ be a locally compact étale groupoid with unit space $X=\Gz$ (see
\cite{Exel:inverse.semigroups.comb.C-algebras, KhoshkamSkandalis:CrossedProducts, Paterson:Groupoids, RenaultThesis} for
further details on étale groupoids). Recall that a \emph{twist} over $\G$ is a topological
groupoid $\Sigma$ that fits into a groupoid extension of the form
\begin{equation}\label{eq:ExtensionTwistedGroupoid}
\Torus\times X\into \Sigma \onto \G.
\end{equation}
The pair $(\G,\Sigma)$ is then called a \emph{twisted étale groupoid}.
We refer the reader to \cite{Deaconi_Kumjian_Ramazan:Fell.Bundles,Muhly.Williams.Continuous.Trace.Groupoid,RenaultCartan} for more details.
Twists over $\G$ can be alternatively described as principal circle bundles (these are the $\Torus$\nb-groupoids defined
in \cite[Section~2]{Muhly.Williams.Continuous.Trace.Groupoid}) and the classical passage to (complex) line bundles enables us to view
twists over $\G$ as \emph{Fell line bundles} over $\G$, that is,
(locally trivial) one-dimensional Fell bundles over the groupoid $\G$ in the sense of Kumjian \cite{Kumjian:fell.bundles.over.groupoids}.
As we have seen in \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids},
there is a correspondence between twisted étale groupoids and \emph{semi-abelian} saturated Fell bundles over inverse semigroups, that is, saturated Fell bundles $\A=\{\A_s\}_{s\in S}$ for which $\A_e$ is an abelian \cstar{}algebra for all $e\in E(S)$.
This enables us to apply our previous results and describe twisted étale groupoids from the point of view of twisted actions of
inverse semigroups on abelian \cstar{}algebras (compare with \cite[Theorem~3.3.1]{Paterson:Groupoids}, \cite[Theorem~8.1]{Quigg.Sieben.C.star.actions.r.discrete.groupoids.and.inverse.semigroups} and \cite[Theorem~9.9]{Exel:inverse.semigroups.comb.C-algebras}):
\begin{theorem}\label{theo:CorrespondenceTwistedGroupoidsAndTwistedActions}
Given a twisted étale groupoid $(\G,\Sigma)$, there is an inverse semigroup $S$ consisting of bisections of $\G$
and a twisted action $(\beta,\omega)$ of $S$ on $\contz\big(\Gz\big)$ such that the (reduced) groupoid \cstar{}algebra $C^*_\red(\G,\Sigma)$
is isomorphic to the (reduced) crossed product $\contz\big(\Gz\big)\rtimes_{\beta,\omega}^\red S$. Conversely, if $(\beta,\omega)$ is a twisted action of
an inverse semigroup $S$ on a commutative \cstar{}algebra $\contz(X)$ for some locally compact Hausdorff space $X$, then
there is a twisted étale groupoid $(\G,\Sigma)$ with $\Gz=X$ such that $C^*_\red(\G,\Sigma)\cong \contz(X)\rtimes^\red_{\beta,\omega} S$.
If, in addition, the groupoid $\G$ is Hausdorff or second countable, then we also have an isomorphism of full \cstar{}algebras
$C^*(\G,\Sigma)\cong \contz(X)\rtimes_{\beta,\omega} S$.
\end{theorem}
\begin{proof}
Let $(\G,\Sigma)$ be a twisted étale groupoid and let $L$ be the associated Fell line bundle over $\G$. Recall that an open subset $s\sbe \G$ is a \emph{bisection} (also called \emph{slice} in \cite{Exel:inverse.semigroups.comb.C-algebras}) if the restrictions of the source and range maps $\domain, \range\colon \G\to \Gz$ to $s$ are homeomorphisms onto their images. The set $S(\G)$ of all bisections of $\G$ forms an inverse semigroup with respect to the product $st\defeq\{\alpha\beta\colon \alpha\in s,\beta\in t,\domain(\alpha)=\range(\beta)\}$ and the involution $s^*\defeq\{\alpha\inv\colon \alpha\in s\}$ (see \cite{Exel:inverse.semigroups.comb.C-algebras}). Given $s\in S(\G)$, let $L_s$ be the restriction of $L$ to $s$. Let $S$ be the subset of $S(\G)$ consisting
of all bisections $s\in S(\G)$ for which the line bundle $L_s$ is trivial.
If $L_s$ and $L_t$ are trivial, then so are $L_{st}$ and $L_{s^*}$ because the (convolution) product
$\xi\cdot\eta$ (defined by $(\xi\cdot\eta)(\g)\defeq\xi(\alpha)\eta(\beta)$ whenever $\g=\alpha\beta\in st$)
is a unitary section of $L_{st}$ provided $\xi$ and $\eta$ are unitary sections of $L_s$ and $L_t$, respectively (note
that in a Fell line bundle $L$ we have $\|ab\|=\|a\|\|b\|$ for every composable pair $a,b\in L$); and the involution $\xi^*(\g)\defeq \xi(\g\inv)^*$
also provides a bijective correspondence between unitary sections of $L_s$ and $L_{s^*}$.
Thus $S$ is an inverse sub-semigroup of $S(\G)$.
Since $L$ is locally trivial, $S$ is a covering for $\G$ and we obviously have $s\cap t\in S$ for $s,t\in S$. In particular,
$S$ is \emph{wide} in the sense of \cite[Definition~2.14]{BussExel:Fell.Bundle.and.Twisted.Groupoids}, that is, it satisfies the condition:
\begin{equation}\label{eq:BasisCondition}
\mbox{for all }s,t\in S \mbox{ and } \g\in s\cap t,
\mbox{ there is } r\in S\mbox{ such that }\g\in r\sbe s\cap t.
\end{equation}
This condition also appears in \cite[Proposition~5.4.ii]{Exel:inverse.semigroups.comb.C-algebras}.
For each $s\in S$, we define $\A_s$ to be the space $\contz(L_s)$ of continuous sections of $L_s$ vanishing at infinity.
Then, with respect to the (convolution) product and the involution of sections defined above, the family $\A=\{\A_s\}_{s\in S}$ is a saturated Fell bundle over $S$ (see \cite[Example~2.11]{BussExel:Fell.Bundle.and.Twisted.Groupoids} for details). Moreover, by Proposition~\ref{prop:Contz(L)RegularIFFLTopologicallyTrivial},
the Hilbert $\contz(ss^*),\contz(s^*s)$\nb-bimodule $\A_s=\contz(L_s)$
is regular because $L$ is trivial over every $s\in S$. Thus $\A$ is a saturated, regular Fell bundle.
By our construction in Section~\ref{sec:regular Fell bundles}, $\A$ gives rise to a twisted action $(\beta,\omega)$ of $S$ on $B=C^*(\E)$, where $\E$ is the restriction of $\A$ to $E(S)$. Observe that $E(S)$ is a covering for $\Gz$ and since $L$
is trivial over $\Gz$ (because it is a one-dimensional continuous \cstar{}bundle)
we have $\contz(L_e)\cong\contz(e)$ for every $e\in E(S)$ (note that $e\sbe \Gz$).
This implies that $B\cong\contz\big(\Gz\big)$ by an application of Proposition~4.3 in \cite{Exel:noncomm.cartan}.
By Theorem~\ref{theo:CorresponceRepresentations} and Equation~\eqref{eq:IsomorphismReducedCrossedProductAndReducedGrupoideAlgebra}, we have canonical isomorphisms $\contz\big(\Gz\big)\rtimes_{\beta,\omega}S\cong C^*(\A)$ and $\contz\big(\Gz\big)\rtimes_{\beta,\omega}^\red S\cong C^*_\red(\A)$. Moreover,
by \cite[Theorem~4.11]{BussExel:Fell.Bundle.and.Twisted.Groupoids}, $C^*_\red(\A)\cong C^*_\red(\G,\Sigma)$ and by \cite[Proposition~2.18]{BussExel:Fell.Bundle.and.Twisted.Groupoids}, $\contz\big(\Gz\big)\rtimes_{\beta,\omega}S\cong C^*(\A)$ provided $\G$ is Hausdorff
or second countable.
Conversely, starting with a twisted action $(\beta,\omega)$ of an inverse semigroup $S$ on a commutative \cstar{}algebra $\contz(X)$,
let $\A=\{\A_s\}_{s\in S}$ be the associated Fell bundle as in Section~\ref{sec:TwistedActions}.
Then $\A$ is saturated and semi-abelian, so the construction in
\cite[Section~3]{BussExel:Fell.Bundle.and.Twisted.Groupoids} provides a twisted étale groupoid $(\G,\Sigma)$ with $\Gz=X$ together with isomorphisms
$C^*_\red(\G,\Sigma)\cong C^*_\red(\A)\cong \contz\big(\Gz\big)\rtimes_{\beta,\omega}^\red S$ again by \cite[Theorem~4.11]{BussExel:Fell.Bundle.and.Twisted.Groupoids} and Equation~\eqref{eq:IsomorphismReducedCrossedProductAndReducedGrupoideAlgebra};
and if $\G$ is Hausdorff or second countable then
$C^*(\G,\Sigma)\cong C^*(\A)\cong \contz\big(\Gz\big)\rtimes_{\beta,\omega}S$ by \cite[Proposition~3.40]{BussExel:Fell.Bundle.and.Twisted.Groupoids}, \cite[Proposition~2.18]{BussExel:Fell.Bundle.and.Twisted.Groupoids} and Theorem~\ref{theo:CorresponceRepresentations}.
\end{proof}
We shall next study the question of whether or not the twisted action constructed from a twisted groupoid as above
satisfies Sieben's condition (see Definition~\ref{def:SiebensTwistedAction}).
\begin{proposition}\label{prop:SiebenTwistedActions=TopologicalTrivial}
Let $(\G,\Sigma)$ be a twisted groupoid and let $(\contz\big(\Gz\big),S,\beta,\omega)$ be a twisted action associated to $(\G,\Sigma)$
as in Theorem~\ref{theo:CorrespondenceTwistedGroupoidsAndTwistedActions}.
Then the cocycles $\omega(s,t)$ can be chosen to satisfy Sieben's condition~\eqref{eq:SiebensCondition}
if and only if the twist $\Sigma$ is topologically trivial \textup(that is,
$\Sigma\cong\Torus\times\G$ as circle bundles\textup) or, equivalently, if the Fell line bundle $L$ associated to $(\G,\Sigma)$
is topologically trivial \textup(that is, $L\cong \C\times\G$ as complex line bundles\textup).
\end{proposition}
\begin{proof}
We shall use the same notation as in the proof of Theorem~\ref{theo:CorrespondenceTwistedGroupoidsAndTwistedActions}.
By definition, $(\beta,\omega)$ is the twisted action associated to the regular Fell bundles $\A$ as in Section~\ref{sec:regular Fell bundles}.
Thus, the cocycles $\omega(s,t)$ are given by $u_su_tu_{st}^*$
for a certain choice of unitary multipliers $u_s$ of $\A_s=\contz(L_s)$.
Since $\mult(\contz(L_s))\cong \contb(L_s)$, $u_s$ may be viewed as a unitary section of $L_s$.
Suppose the cocycles $\omega(s,t)$ satisfy~\eqref{eq:SiebensCondition}. We are going to prove that $L$ is topologically trivial, that is, that
there is a global continuous unitary section of $L$. By Proposition~\ref{prop:CoherentHomogeneous},
the family $\{u_s\}_{s\in S}$ of unitary sections satisfies $u_s\leq u_t$ whenever $s,t\in S$ with $s\leq t$.
This means that $u_s$ is the restriction of $u_t$ whenever $s\sbe t$ (this is the order relation of $S\sbe S(\G)$).
Now if $s,t$ are arbitrary elements of $S$,
the sections $u_s$ and $u_t$ have to coincide on the intersection $s\cap t$. In fact, since $S$ satisfies~\eqref{eq:BasisCondition},
given $\g\in s\cap t$, there is $r\in S$ such that $\g\in r\sbe s\cap t$. Hence $r\sbe s$ and $r\sbe t$, so that $u_s(\g)=u_r(\g)=u_t(\g)$.
Thus the map $u\colon\G\to L$ defined by
\begin{equation*}
u(\g)\defeq u_s(\g) \quad\mbox{whenever $s$ is an element of $S$ containing }\g\in \G
\end{equation*}
is a well-defined continuous unitary section of $L$ and therefore $L$ is topologically trivial.
Conversely, if $L$ is topologically trivial and $u\colon \G\to L$ is a continuous unitary section, then we may take $u_s$ to
be the restriction of $u$ to the bisection $s\sbe\G$. In this way $u_s$ is the restriction of $u_t$ whenever $s\sbe t$. Again by Proposition~\ref{prop:CoherentHomogeneous}, the cocycles $\omega(s,t)=u_su_tu_{st}^*$ satisfy Sieben's condition~\eqref{eq:SiebensCondition}.
\end{proof}
It is well-known (see \cite[Section 2]{Muhly.Williams.Continuous.Trace.Groupoid})
that the topologically trivial twists are exactly those associated to
a $2$-cocycle $\tau\colon \Gt\to \Torus$ in the sense of Renault \cite{RenaultThesis}.
Moreover, by Example~2.1 in \cite{Muhly.Williams.Continuous.Trace.Groupoid} there are twisted groupoids that are not topologically trivial.
This example shows that Sieben's condition~\eqref{eq:SiebenAxiomTwistedAction} cannot be expected to hold in general and therefore our notion of twisted action (Definition~\ref{def:twisted action}) properly generalizes Sieben's \cite[Definition~2.2]{SiebenTwistedActions}.
In what follows we briefly describe the twists associated to $2$-cocycles and relate them to our cocycles $\omega(s,t)$.
Let $\tau\colon \Gt\to \Torus$ be a $2$-cocycle. Recall that $\tau$ satisfies the cocycle condition
\begin{equation*}
\tau(\alpha,\beta)\tau(\alpha\beta,\gamma)=\tau(\beta,\gamma)\tau(\alpha,\beta\gamma)\quad\mbox{for all }
(\alpha,\beta),(\beta,\gamma)\in \Gt.
\end{equation*}
It is interesting to observe the similarity between this condition and the cocycle condition
appearing in Definition~\ref{def:twisted action}\eqref{def:twisted action:item:CocycleCondition}.
In addition, we assume that the $2$-cocycle $\tau$ is \emph{normalized} in the sense that
(compare with Definition~\ref{def:twisted action}\eqref{def:twisted action:item:omega(r,r*r)=omega(e,f)=1_ef_and_omega(rr*,r)=1})
\begin{equation*}
\tau(\alpha,\s(\alpha))=\tau(\r(\alpha),\alpha)=1\quad\mbox{for all }\alpha\in \G.
\end{equation*}
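A basic family of examples is provided by coboundaries: every continuous function $c\colon \G\to\Torus$ with $c(x)=1$ for all $x\in\Gz$ gives rise to a normalized $2$-cocycle
\begin{equation*}
\tau(\alpha,\beta)\defeq c(\alpha)c(\beta)\overline{c(\alpha\beta)}\quad\mbox{for all }(\alpha,\beta)\in\Gt,
\end{equation*}
as one checks directly from the two conditions above; the twists these cocycles determine (see the construction below) are isomorphic to the trivial one.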
Given a $2$-cocycle $\tau$ on $\G$ as above, the associated twisted groupoid $(\G,\Sigma)$ is defined as follows. Topologically, $\Sigma$ is the trivial circle bundle $\Torus\times\G$, and the operations are defined by
\begin{equation}\label{eq:multiplicationTwistedGroupoid}
(\lambda,\alpha)\cdot (\mu,\beta)=(\lambda\mu\tau(\alpha,\beta),\alpha\beta)\quad\mbox{for all }\lambda,\mu\in \Torus\mbox{ and }(\alpha,\beta)\in \Gt
\end{equation}
\begin{equation}\label{eq:inversionTwistedGroupoid}
(\lambda,\alpha)\inv =(\overline{\lambda \tau(\alpha\inv,\alpha)},\alpha\inv)\quad\mbox{for all }\lambda\in\Torus\mbox{ and }\alpha\in \G.
\end{equation}
In this way, $\Sigma$ is a topological groupoid and the trivial maps $\Torus\times \Gz\into \Sigma$ and
$\Sigma\onto \G$ give us a groupoid extension as
in~\eqref{eq:ExtensionTwistedGroupoid}. The twisted groupoid $(\G,\Sigma)$ corresponds
to the (topologically trivial) Fell line bundle $L=\C\times\G$ with algebraic
operations of multiplication and involution given by the same formulas as in equations~\eqref{eq:multiplicationTwistedGroupoid} and~\eqref{eq:inversionTwistedGroupoid} (only replacing $\Torus$ by $\C$ and the inversion $\inv$ on the
left-hand side of~\eqref{eq:inversionTwistedGroupoid} by the involution $^*$).
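For instance, when $\G$ is a discrete group $G$ (so that $\Gz$ consists of a single unit), the formulas above recover the usual central extension determined by a group $2$-cocycle $\tau\colon G\times G\to\Torus$,
\begin{equation*}
(\lambda,g)\cdot(\mu,h)=(\lambda\mu\tau(g,h),gh)\quad\mbox{for all }\lambda,\mu\in\Torus\mbox{ and }g,h\in G,
\end{equation*}
and the \cstar{}algebra $C^*(\G,\Sigma)$ is then the twisted group \cstar{}algebra $C^*(G,\tau)$.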
\begin{proposition}\label{prop:RelationRenaultAndSiebensCocycles}
Let $(\G,\Sigma)$ be the twisted groupoid associated to a $2$-cocycle $\tau$ on $\G$ as above,
and let $(\contz\big(\Gz\big),S,\beta,\omega)$ be the twisted action associated to $(\G,\Sigma)$
as in Theorem~\ref{theo:CorrespondenceTwistedGroupoidsAndTwistedActions}
through the unitary sections $u_s$ of $L_s=\C\times s$ given by $u_s(\gamma)=(1,\gamma)$ for all $\gamma\in s$.
Then
\begin{equation}
\beta_s(a)(\r(\g))=a(\s(\g))\quad\mbox{and}\quad \omega(s,t)(\r(\alpha\beta))=\tau(\alpha,\beta)
\end{equation}
for all $s,t\in S$, $a\in \contz(s^*s)$, $\alpha,\g\in s$ and $\beta\in t$ with $\s(\alpha)=\r(\beta)$.
\end{proposition}
\begin{proof}
Note that $\dom(\beta_s)=\D_{s^*s}=\contz(s^*s)$ and $\ran(\beta_s)=\D_{ss^*}=\contz(ss^*)$.
Using the definitions \eqref{eq:multiplicationTwistedGroupoid} and \eqref{eq:inversionTwistedGroupoid}
and the (easily verified) relation $\tau(\g\inv,\g)=\tau(\g,\g\inv)$, and using the canonical identification $\contz(L_{s^*s})\cong\contz(s^*s)$
to view $a$ as a continuous section of $L_{s^*s}$, we get
\begin{align*}
\beta_s(a)(\r(\g))&=(u_sau_s^*)(\g\s(\g)\g\inv)=u_s(\g)\cdot a(\s(\g))\cdot u_s(\g)^*\\
&= (1,\g)\cdot(a(\s(\g)),\s(\g))\cdot(1,\g)^*\\
&=(a(\s(\g))\tau(\g,\s(\g)),\g)\cdot (\overline{\tau(\g\inv,\g)},\g\inv)\\
&=(a(\s(\g))\overline{\tau(\g\inv,\g)}\tau(\g,\g\inv),\s(\g))=a(\s(\g)).
\end{align*}
To obtain the relation between the cocycles $\omega(s,t)$ and $\tau(\alpha,\beta)$,
first observe that $\omega(s,t)$ is a unitary element of $\contb(stt^*s^*)\cong\contz(L_{stt^*s^*})$, that is,
a continuous function $\omega(s,t)\colon stt^*s^*\to \Torus$. Now, since $s,t$ are bisections, every element of $stt^*s^*$
can be uniquely written as $\alpha\beta\beta\inv\alpha\inv=\r(\alpha\beta)$ for $\alpha\in s$ and $\beta\in t$. Thus
\begin{align*}
\omega(s,t)(\r(\alpha\beta))&=(u_su_tu_{st}^*)(\alpha\beta\beta\inv\alpha\inv)\\
&=u_s(\alpha)\cdot u_t(\beta)\cdot u_{st}(\alpha\beta)^*\\
&=(1,\alpha)\cdot (1,\beta)\cdot (1,\alpha\beta)^*\\
&=(\tau(\alpha,\beta),\alpha\beta)\cdot (\overline{\tau((\alpha\beta)\inv,\alpha\beta)},(\alpha\beta)\inv)\\
&=(\tau(\alpha,\beta)\overline{\tau((\alpha\beta)\inv,\alpha\beta)}\tau(\alpha\beta,(\alpha\beta)\inv),\alpha\beta\beta\inv\alpha\inv)\\
&=(\tau(\alpha,\beta),\r(\alpha\beta))=\tau(\alpha,\beta).
\end{align*}
\vskip-18pt
\end{proof}
Summarizing the results of this section, we have seen how to describe twisted étale groupoids in terms of inverse semigroup twisted
actions. While the $2$-cocycles on groupoids can only describe topologically trivial twisted groupoids, our twisted actions have no such limitation and
allow us to describe arbitrary twisted étale groupoids. As we have seen above, the topologically trivial twisted étale groupoids
essentially correspond to Sieben's twisted actions, a special case of our theory.
\section{Refinements of Fell bundles}
\label{sec:Refinements}
In this section, we introduce a notion of \emph{refinement} for Fell bundles and prove that several
constructions from Fell bundles, including cross-sectional \cstar{}algebras and twisted groupoids (in the semi-abelian case),
are preserved under refinements. Our main point in this section is to prove that locally regular Fell bundles admit a regular, saturated refinement.
This puts every locally regular Fell bundle into the setting of
Section~\ref{sec:regular Fell bundles} and enables us to describe it as a twisted action.
\begin{definition}\label{def:refinement}
Let $S,T$ be inverse semigroups and let $\A=\{\A_s\}_{s\in S}$ and $\B=\{\B_t\}_{t\in T}$ be Fell bundles.
A \emph{morphism} from $\B$ to $\A$ is a pair $(\phi,\psi)$, where $\phi\colon T\to S$ is a semigroup homomorphism,
and $\psi\colon\B\to \A$ is a map satisfying:
\begin{enumerate}[(i)]
\item $\psi(\B_t)\sbe \A_{\phi(t)}$ and the restriction $\psi_t\colon\B_t\to \A_{\phi(t)}$ is a linear map;
\item $\psi$ respects product and involution: $\psi(ab)=\psi(a)\psi(b)$ and $\psi(a^*)=\psi(a)^*$, for all $a,b\in \B$;
\item $\psi$ commutes with the inclusion maps: whenever $t\leq t'$ in $T$, we get a commutative diagram \label{def:refinement:inclusionmaps}
\[
\xymatrix{
\B_t\ar[dd]_{\psi_t}\ar[rr]^{j_{t',t}^{\B}} & & \B_{t'}\ar[dd]^{\psi_{t'}} \\
& & \\
\A_{\phi(t)}\ar[rr]_{j_{\phi(t'),\phi(t)}^\A} & & \A_{\phi(t')}
}
\]
\end{enumerate}
We say that $\B$ is a \emph{refinement} of $\A$ if there is a morphism $(\phi,\psi)$ from $\B$ to $\A$ with
$\phi\colon T\to S$ surjective and \emph{essentially injective}, in the sense that $\phi(t)\in E(S)$ implies $t\in E(T)$,
with $\psi_t\colon\B_t\to\A_{\phi(t)}$ injective for all $t\in T$, and such that
\begin{equation}\label{eq:ConditionRefinement}
\A_s=\overline{\sum\limits_{t\in \phi^{-1}(s)}\!\!\psi(\B_t)}\quad\mbox{for all }s\in S.
\end{equation}
\end{definition}
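For instance, every Fell bundle $\A=\{\A_s\}_{s\in S}$ is a refinement of itself through the identity morphism $(\mathrm{id}_S,\mathrm{id}_\A)$: the identity on $S$ is clearly surjective and essentially injective, each $\psi_s$ is injective, and condition~\eqref{eq:ConditionRefinement} reduces to the trivial identity
\begin{equation*}
\A_s=\overline{\sum\limits_{t\in \phi^{-1}(s)}\!\!\psi(\B_t)}=\overline{\A_s}=\A_s\quad\mbox{for all }s\in S.
\end{equation*}
The interesting refinements are, of course, those for which $T$ is strictly larger than $S$, as in Proposition~\ref{prop:RefinementForLocRegularFellBundle} below.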
\begin{remark}\label{rem:Refinement}
{\bf (1)} If $\B=\{\B_t\}_{t\in T}$ is a refinement of $\A=\{\A_s\}_{s\in S}$ via some morphism $(\phi,\psi)$,
then $\psi(\B_t)$ is an ideal of $\A_{\phi(t)}$ (as {\tros}) for all $t\in T$ (see comments before Definition~\ref{def:TROLocallyRegular}).
In fact, it is enough to check
that $\psi(\B_f)$ is an ideal of $\A_{e}$ for every idempotent $f\in T$ with $\phi(f)=e$.
Equation~\eqref{eq:ConditionRefinement} implies
\begin{equation*}
\A_e=\overline{\sum\limits_{\phi(g)=e}\!\!\psi(\B_g)}.
\end{equation*}
Since $\phi$ is essentially injective, each $g\in T$ with $\phi(g)=e$ is necessarily idempotent.
Hence,
\begin{equation*}
\psi(\B_f)\A_e=\overline{\sum\limits_{\phi(g)=e}\!\!\psi(\B_f)\psi(\B_g)}=\overline{\sum\limits_{\phi(g)=e}\!\!\psi(\B_f\B_g)}\sbe
\overline{\sum\limits_{\phi(g)=e}\!\!\psi(\B_{fg})}.
\end{equation*}
Given $g\in E(T)$ with $\phi(g)=e$, we have $fg\leq f$. Definition~\ref{def:refinement}\eqref{def:refinement:inclusionmaps} yields
\begin{equation*}
\psi(\B_{fg})=j_{e,e}^\A\big(\psi(\B_{fg})\big)=\psi\big(j_{f,fg}^\B(\B_{fg})\big)\sbe \psi(\B_f).
\end{equation*}
It follows that $\psi(\B_f)\A_e\sbe\psi(\B_f)$. Similarly, $\A_e\psi(\B_f)\sbe\psi(\B_f)$.
{\bf (2) }In the realm of (discrete) groups the notion of refinement is not interesting.
Indeed, if $S,T$ are groups and $\phi\colon T\to S$ is an essentially injective, surjective homomorphism, then
it is automatically an isomorphism because the only idempotents are the group identities.
Hence for groups $S,T$ and Fell bundles $\A$ and $\B$ over $S$ and $T$, respectively, a refinement $(\phi,\psi)$ from $\B$ to $\A$
is the same as an isomorphism $(\phi,\psi)\colon(T,\B)\congto (S,\A)$.
However, a Fell bundle $\A=\{\A_s\}_{s\in S}$ over a group $S$ may admit an interesting refinement
$\B=\{\B_t\}_{t\in T}$ once $T$ is allowed to be an inverse semigroup. For instance, we are going to prove (Proposition~\ref{prop:RefinementForLocRegularFellBundle}) that every Fell bundle admits a saturated refinement; this applies, of course, to Fell bundles over groups, but one has to allow the refinement itself to be a Fell bundle over an
inverse semigroup.
\end{remark}
First, we show that refinements preserve the Fell bundle cross-sectional \cstar{}algebras (see \cite{Exel:noncomm.cartan}
for details on the construction of these algebras).
\begin{theorem}\label{theo:RefinementPreserveFullC*Algebras}
Let $\B=\{\B_t\}_{t\in T}$ be a refinement of $\A=\{\A_s\}_{s\in S}$ through a morphism $(\phi,\psi)$,
and let $\E_\B$ and $\E_\A$ be the restrictions of $\B$ and $\A$ to the idempotent parts of $T$ and $S$, respectively.
Then there is a \textup(unique\textup) isomorphism $\Psi\colon C^*(\B)\congto C^*(\A)$ satisfying $\Psi(b)=\psi(b)$ for all $b\in \B$.
Here we view each fiber $\B_t$ \textup(resp. $\A_s$\textup) as a subspace
of $C^*(\B)$ \textup(resp. $C^*(\A)$\textup) via the universal representation.
Moreover, $\Psi$ factors through an isomorphism $C^*_\red(\B)\congto C^*_\red(\A)$ and
restricts to an isomorphism $\Psi\rest{}\colon C^*(\E_\B)\congto C^*(\E_\A)$.
\end{theorem}
\begin{proof}
Since $(\phi,\psi)$ is a morphism from $\B$ to $\A$, it induces a (unique) \Star{}ho\-mo\-mor\-phism
$\Psi\colon C^*(\B)\to C^*(\A)$ satisfying $\Psi(b)=\psi(b)$ for all $b\in \B$.
Moreover, it also induces a map from $\Rep(\A)$ to $\Rep(\B)$ (the classes of representations of $\A$ and $\B$, respectively)
that takes $\pi\in \Rep(\A)$ to the representation $\tilde\pi\defeq \pi\circ\psi\in \Rep(\B)$.
All this holds for any morphism of Fell bundles. Now, to prove that the induced map $\Psi$ is an isomorphism,
we need to use the extra properties of refinements. The surjectivity of $\Psi$ follows from
Equation~\eqref{eq:ConditionRefinement}. In fact, this equation implies that
$\Psi$ has dense image, and since any \Star{}homomorphism between $C^*$-algebras has closed image,
the surjectivity of $\Psi$ follows. To show that $\Psi$ is injective, it is enough to show that
any representation $\rho\in \Rep(\B)$ has the form $\rho=\tilde\pi$ for some representation $\pi\in \Rep(\A)$
(necessarily unique by Equation~\eqref{eq:ConditionRefinement}).
Given $\rho\in \Rep(\B)$ and $s\in S$, we define
\begin{equation*}
\pi_s(b)\defeq \sum\limits_{t\in \phi^{-1}(s)}\rho(b_t),
\end{equation*}
whenever $b\in \A_s$ is a finite sum of the form
\begin{equation*}
b=\sum\limits_{t\in \phi^{-1}(s)}\psi(b_t)
\end{equation*}
with all but finitely many of the $b_t\in \B_t$ equal to zero. All we have to show is that $\pi_s$ is well-defined and extends
to $\A_s$. Since $\A_s$ is the closure of the set of elements $b$ of this form, it is enough to show that
\begin{equation*}
\left\|\sum\limits_{t\in \phi^{-1}(s)}\rho(b_t)\right\|\leq \|b\|.
\end{equation*}
First, note that
\begin{equation*}
\|b\|^2=\left\|\sum\limits_{t,r\in \phi^{-1}(s)}\psi(b_t^*b_r)\right\|.
\end{equation*}
Given $t,r\in \phi^{-1}(s)$, we have $\phi(t^*r)=s^*s$, and because $\phi$ is essentially injective, this implies that $t^*r$ is idempotent.
Consequently, we may view $\sum\limits_{t,r\in \phi^{-1}(s)}b_t^*b_r$ as an element of $C^*(\E_\B)$.
Using \cite[Proposition 4.3]{Exel:noncomm.cartan}, it is easy to see that the \Star{}homomorphism
$\Psi\colon C^*(\B)\to C^*(\A)$ is injective (hence isometric) on $C^*(\E_\B)$.
Therefore,
\begin{multline*}
\|b\|^2=\left\|\sum\limits_{t,r\in \phi^{-1}(s)}\psi(b_t^*b_r)\right\|=\left\|\psi\left(\sum\limits_{t,r\in \phi^{-1}(s)}b_t^*b_r\right)\right\|
\\ =\left\|\sum\limits_{t,r\in \phi^{-1}(s)}b_t^*b_r\right\|\geq \left\|\sum\limits_{t,r\in \phi^{-1}(s)}\rho(b_t^*b_r)\right\|
=\left\|\sum\limits_{t\in \phi^{-1}(s)}\rho(b_t)\right\|^2.
\end{multline*}
Thus $\pi_s$ is well defined and extends to an (obviously linear) map $\pi_s\colon\A_s\to \bound(\hils_\rho)$.
Since $s$ is arbitrary, we get a map $\pi\colon\A\to \bound(\hils_\rho)$ which is easily seen to be a representation because $\rho$ is.
And, of course, we have $\tilde\pi=\rho$. This shows that $\Psi$ is injective and, therefore,
an isomorphism $C^*(\B)\to C^*(\A)$. It is clear that it restricts to an isomorphism
$\Psi\rest{}\colon C^*(\E_\B)\to C^*(\E_\A)$.
Finally, we show that $\Psi$ factors through an isomorphism $C^*_\red(\B)\congto C^*_\red(\A)$.
First, let us recall that the reduced cross-sectional \cstar{}algebra $C^*_\red(\A)$ is the image of $C^*(\A)$ by the
regular representation $\Lambda_\A\colon C^*(\A)\to C^*_\red(\A)$ of $\A$ (see \cite[Proposition~8.6]{Exel:noncomm.cartan}),
which is defined as the direct sum of all GNS-representations associated
to states $\tilde\varphi$ of $C^*(\A)$, where $\varphi$ runs over the set of all pure states of $C^*(\E_\A)$
and $\tilde\varphi$ is the canonical extension of $\varphi$ as defined in \cite[Section~7]{Exel:noncomm.cartan}.
Of course, the same is true for the regular representation $\Lambda_\B\colon C^*(\B)\to C^*_\red(\B)$ of $\B$.
Since $\Psi\rest{}\colon C^*(\E_\B)\to C^*(\E_\A)$ is an isomorphism, the assignment $\varphi\mapsto \varphi\circ\Psi\rest{}$ defines
a bijective correspondence between the pure states of $C^*(\E_\A)$ and those of $C^*(\E_\B)$. To prove that $\Psi$ factors through an
isomorphism $C^*_\red(\B)\to C^*_\red(\A)$ it is enough to show that the canonical extension of $\varphi\circ\Psi\rest{}$ coincides with
$\tilde{\varphi}\circ\Psi$, that is, $\widetilde{\varphi\circ\Psi\rest{}}=\tilde{\varphi}\circ\Psi$ for all pure states $\varphi$ of $C^*(\E_\A)$.
Let $t\in T$ and $b\in \B_t$. According to Proposition~7.4(i) in \cite{Exel:noncomm.cartan}, we have two cases to consider:
\emph{Case 1.} Assume there is an idempotent $f\in E(T)$ lying in $\supp(\varphi\circ\Psi\rest{})$, the support
of $\varphi\circ\Psi\rest{}$ (see \cite[Definition~7.1]{Exel:noncomm.cartan}), with $f\leq t$. Let $e\defeq \phi(f)$ and $s\defeq\phi(t)$.
Note that $e\leq s$. Moreover, since $\Psi\rest{}\colon C^*(\E_\B)\to C^*(\E_\A)$ is an isomorphism and $\varphi\circ\Psi\rest{}$ is
supported on $\B_f$, it follows that $\varphi$ is supported on $\psi(\B_f)$ and hence on $\A_e$ (by \cite[Proposition~5.3]{Exel:noncomm.cartan})
because $\psi(\B_f)$ is an ideal of $\A_e$ by Remark~\ref{rem:Refinement}(1). This implies that if $(u_i)$ is an approximate
unit for $\B_f$, then
\begin{equation*}
\lim_i\varphi\big(\psi(b)\psi(u_i)\big)=\tilde\varphi_e^s(\psi(b)).
\end{equation*}
Here we use the same notation as in \cite{Exel:noncomm.cartan} and write $\varphi_e$ for the restriction of
$\varphi$ to $\A_e$ and $\tilde\varphi_e^s$ for the canonical extension of $\varphi_e$ to $\A_s$ (see \cite[Proposition~6.1]{Exel:noncomm.cartan}).
It follows that
\begin{multline*}
\widetilde{\varphi\circ\Psi\rest{}}(b)=\big(\widetilde{\varphi\circ\Psi\rest{}}\big)_f^t(b)=\lim_i(\varphi\circ\Psi\rest{})(bu_i)\\
=\lim_i\varphi\big(\psi(b)\psi(u_i)\big)=\tilde\varphi_e^s(\psi(b))=\tilde\varphi\big(\psi(b)\big)=(\tilde\varphi\circ\Psi)(b).
\end{multline*}
\emph{Case 2.} Suppose there is no $f\in \supp(\varphi\circ\Psi\rest{})$ with $f\leq t$. In this
case, we have $\widetilde{\varphi\circ\Psi\rest{}}(b)=0$ by \cite[Proposition~7.4(i)]{Exel:noncomm.cartan}.
If, on the other hand, there is no $e\in \supp(\varphi)$ with $e\leq s\defeq \phi(t)$, then we also have $(\tilde\varphi\circ\Psi)(b)=0$.
Assume there is $e\in \supp(\varphi)$ such that $e\leq s$. Since $\phi$ is surjective, there is $g\in E(T)$ with
$\phi(g)=e$. Since $\phi(tg)=se=e$, we have $tg\in E(T)$ because $\phi$ is essentially injective.
Note that $tg\leq t$. By assumption, $tg\notin\supp(\varphi\circ\Psi\rest{})$, so that $(\varphi\circ\Psi\rest{})(\B_{tg})=\{0\}$ by
\cite[Proposition~5.5]{Exel:noncomm.cartan}. Let $(v_j)$ be an approximate unit for $\A_e$. Then
\begin{equation}\label{eq:tildeVarphiCircPsi}
(\tilde\varphi\circ\Psi)(b)=\tilde\varphi_e^s(\psi(b))=\lim_j\varphi(\psi(b)v_j).
\end{equation}
By~\eqref{eq:ConditionRefinement}, $\A_e$ is the closed linear span of $\psi(\B_g)$ with $\phi(g)=e$.
Thus each $v\in \A_e$ is a limit of finite sums of the form $\sum \psi(u_n)$ with $u_n\in \B_{g_n}$, where $g_n\in E(T)$ and $\phi(g_n)=e$.
Since $(\varphi\circ\Psi\rest{})(\B_{tg_n})=\{0\}$, we have $\varphi(\psi(bu_n))=0$ for all $n$.
It follows that $\varphi(\psi(b)v)=0$ for all $v\in \A_e$ and
hence Equation~\eqref{eq:tildeVarphiCircPsi} yields $(\tilde\varphi\circ\Psi)(b)=0$.
Therefore, in any case we have $\widetilde{\varphi\circ\Psi\rest{}}(b)=(\tilde{\varphi}\circ\Psi)(b)$ for all $b\in \B$
and hence for all $b\in C^*(\B)$, and this concludes the proof.
\end{proof}
Next, we show that any locally regular Fell bundle admits a regular, saturated refinement.
\begin{proposition}\label{prop:RefinementForLocRegularFellBundle}
\begin{enumerate}[(a)]
\item Every Fell bundle admits a saturated refinement.
\item Every locally regular Fell bundle admits a regular, saturated refinement.
\item If a Fell bundle admits a regular refinement, then it is locally regular.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) Let $\A=\{\A_s\}_{s\in S}$ be a Fell bundle over an inverse semigroup $S$.
We may assume that $\A$ is a concrete Fell bundle in $\bound(\hils)$ and the $C^*$-algebras $B=C^*(\E)$ and $A=C^*(\A)$ are all realized as operators in $\bound(\hils)$ for some Hilbert space $\hils$. Consider the set
$S_B$ of all {\tros} $\troa\sbe\bound(\hils)$ satisfying
\begin{equation}\label{eq:InverseSemigroupOfTROs}
\troa B,B\troa \sbe \troa \quad\mbox{and}\quad\troa^*\troa ,\troa \troa^*\sbe B.
\end{equation}
Note that $S_B$ is an inverse semigroup with respect to the multiplication $\troa \cdot \trob \defeq \troa\trob =\cspn(\troa\trob)$. In fact,
first we have to show that the multiplication is well-defined. For this, take $\troa ,\trob \in S_B$. It is easy to see that $\troa\trob$
satisfies~\eqref{eq:InverseSemigroupOfTROs}. To see that $\troa\trob$ is again a {\tro},
observe that $I=\troa^*\troa $ and $J=\trob\trob^*$ are ideals in $B$, and
ideals of any $C^*$-algebra always commute as sets ($IJ=I\cap J=JI$). Hence
\begin{equation*}
(\troa\trob)(\troa\trob)^*(\troa\trob)=\troa (\trob \trob^*)(\troa^*\troa )\trob
=\troa (\troa^*\troa )(\trob \trob^*)\trob =\troa\trob.
\end{equation*}
Thus $\troa\trob$ is a {\tro}. Now, because each $\troa \in S_B$ is a {\tro},
we have $\troa \troa^*\troa =\troa $ and $\troa^*=\troa^*\troa \troa^*$. Thus $\troa^*$ is an inverse of $\troa $ in $S_B$.
To show the uniqueness of inverses, it suffices to show that idempotents commute (see Theorem~3 in \cite[Chapter~1]{Lawson:InverseSemigroups}).
Let $\troa \in S_B$ be an idempotent, that is, $\troa^2=\troa \troa =\troa $.
We have $\troa^*=\troa^*\troa \troa^*=(\troa^*\troa )(\troa \troa^*)=(\troa \troa^*)(\troa^*\troa )=\troa \troa^*\troa^*\troa =\troa \troa^*\troa =\troa $. Thus, idempotents of $S_B$ have the form $\troa^*\troa $ with $\troa \in S_B$, and these commute because they are ideals of $B$. Hence $S_B$ is an inverse semigroup.
Observe that each $\troa =\A_s$ is a {\tro} in $\bound(\hils)$ that satisfies $\troa B,B\troa \sbe \troa $ and $\troa^*\troa ,\troa \troa^*\sbe B$
so that $\A_s\in S_B$. Consider
\begin{equation*}
T\defeq \{(s,\troa )\in S\times S_B\colon \troa \sbe \A_s\}.
\end{equation*}
Note that $T$ is an inverse sub-semigroup of $S\times S_B$ and we have a canonical
surjective homomorphism $\phi\colon T\to S$ given by the projection onto the first coordinate. Moreover, $\phi$ is essentially injective.
Indeed, suppose that $\phi(e,\troa )=e$ is idempotent in $S$. Since $(e,\troa )\in T$, we have $\troa \sbe \A_e$. Thus
\begin{equation*}
\A_e\troa^*\troa \sbe \A_e\A_e^*\troa =\A_e\troa \sbe \troa =\troa \troa^*\troa \sbe \A_e\troa^*\troa .
\end{equation*}
Therefore $\A_e\troa^*\troa =\troa $ and because $\troa^*\troa $ is an ideal of $\A_e$ we have $\A_e\troa^*\troa =\troa^*\troa $, that is, $\troa =\troa^*\troa $ is an idempotent of $S_B$.
We conclude that $(e,\troa )$ is an idempotent in $T$, whence $\phi$ is essentially injective.
Now we define a Fell bundle $\B=\{\B_t\}_{t\in T}$ over $T$ with fibers $\B_t\defeq \troa $ whenever $t=(s,\troa )$.
Notice that the order relation in $S_B$ is just the inclusion, that is, $\troa \leq \trob $ in $S_B$ if and only if $\troa \sbe \trob $. In fact,
if $\troa \leq \trob $, then $\troa =\trob \troa^*\troa \sbe \trob B\sbe \trob$. And if $\troa \sbe \trob $, then
\begin{equation*}
\trob \troa^*\troa \sbe \trob \trob^*\troa \sbe B\troa \sbe \troa =\troa \troa^*\troa \sbe \trob \troa^*\troa ,
\end{equation*}
so that $\trob \troa^*\troa =\troa $, that is, $\troa \leq \trob $. This allows us to define inclusion maps $j_{t',t}\colon\B_{t}\to \B_{t'}$ for $\B$ whenever $t\leq t'$ in $T$.
Of course, the algebraic operations of $\B$ are inherited from $\bound(\hils)$. With this structure, $\B$ is a saturated Fell bundle over $T$.
Moreover, by construction, it is a concrete Fell bundle in $\bound(\hils)$ and we have $\B_t\sbe \A_{\phi(t)}$ for all $t\in T$.
Thus, we get a canonical map $\psi\colon\B\to \A$ whose restriction to $\B_t$ is the inclusion $\B_t\into \A_{\phi(t)}$.
The pair $(\phi,\psi)$ is a morphism from $\B$ to $\A$ and through this morphism $\B$ is a refinement of $\A$. Therefore every Fell bundle
has a saturated refinement.
(b) Now we assume that $\A$ is locally regular. Then we redefine the $S_B$ of part (a), taking only the \emph{regular} {\tros} $\troa\sbe\bound(\hils)$ satisfying~\eqref{eq:InverseSemigroupOfTROs}.
If $\troa ,\trob $ are regular {\tros},
there are $u\in \troa $ and $v\in \trob $ with $u\sim \troa $ and $v\sim \trob $. We have
\begin{multline*}
uv(\troa\trob)^*(\troa\trob)=uv\trob^*\troa^*\troa\trob =u(\trob\trob^*)(\troa^*\troa )\trob\\
=u(\troa^*\troa )(\trob \trob^*)\trob =\troa\trob .
\end{multline*}
Analogously, $(\troa\trob)(\troa\trob)^*uv=\troa\trob$, so that $uv\sim \troa\trob$ and therefore $\troa\trob$ is a regular {\tro} in $\bound(\hils)$.
It follows that $S_B$ is also an inverse semigroup. With the same definition for $T$, $\B$ and $(\phi,\psi)$ as above, we get the desired regular refinement of $\A$. In fact, by construction, $\B$ is a regular, saturated Fell bundle which is a refinement of $\A$ through the morphism $(\phi,\psi)$. The only non-trivial axiom to be checked is~\eqref{eq:ConditionRefinement}. But this follows from the definition of local regularity.
(c) If $\A$ has a regular refinement $\B=\{\B_t\}_{t\in T}$ via some morphism $(\phi,\psi)$, then each $\psi(\B_t)$ is a regular ideal
in $\A_{\phi(t)}$ (see Remark~\ref{rem:Refinement}(1)) and therefore $\A_{\phi(t)}$ is locally regular by~\eqref{eq:ConditionRefinement}. Since $\phi$ is surjective, this shows that $\A_s$ is locally regular for all $s\in S$, that is, $\A$ is locally regular.
\end{proof}
Observe that Theorem~\ref{theo:RefinementPreserveFullC*Algebras} and Proposition~\ref{prop:RefinementForLocRegularFellBundle}
enable us to apply our main results to every (not necessarily saturated)
locally regular Fell bundle $\A$ and to describe its cross-sectional \cstar{}algebras $C^*(\A)$ and $C^*_\red(\A)$ as
(full or reduced) twisted crossed products.
\begin{remark}
Let $\A=\{\A_s\}_{s\in G}$ be a regular (not necessarily saturated) Fell bundle over a (discrete) group $G$.
By Theorem~7.3 in \cite{Exel:twisted.partial.actions}, this corresponds to a \emph{twisted partial action} $(\alpha,\upsilon)$ of $G$ on the unit fiber $A\defeq\A_1$. On the other hand, applying Proposition~\ref{prop:RefinementForLocRegularFellBundle}, and taking a saturated, regular refinement $\B=\{\B_t\}_{t\in T}$ of $\A$ (here $T$ is an inverse semigroup), we may describe the same system as a twisted action of $T$.
More precisely, by Corollary~\ref{cor:CorrespondenceRegularFellBundlesAndTwistedActions}, there is a twisted action $(\beta,\omega)$ of $T$ on $C^*(\E_\B)\cong A$ (by Theorem~\ref{theo:RefinementPreserveFullC*Algebras}) such that the crossed product
$A\rtimes_{\beta,\omega}^{(\red)}T\cong C^*_{(\red)}(\B)$ is (again by Theorem~\ref{theo:RefinementPreserveFullC*Algebras}) isomorphic to the crossed product $A\rtimes_{\alpha,\upsilon}^{(\red)}G\cong C^*_{(\red)}(\A)$.
It should also be possible to define \emph{twisted partial actions} of inverse semigroups, generalizing simultaneously our
Definition~\ref{def:twisted action} of twisted action and that of \cite[Definition~2.1]{Exel:twisted.partial.actions} for groups. As in our main result (Corollary~\ref{cor:CorrespondenceRegularFellBundlesAndTwistedActions}), twisted partial actions should correspond to regular (not necessarily saturated) Fell bundles. However, the (full or reduced) cross-sectional \cstar{}algebra
of any such Fell bundle can also be described as a (full or reduced) crossed product by
some twisted action in our sense (again by Theorem~\ref{theo:RefinementPreserveFullC*Algebras} and Proposition~\ref{prop:RefinementForLocRegularFellBundle}). This is the reason why we have chosen not to consider twisted partial actions.
\end{remark}
Recall from \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids} that a Fell bundle $\A=\{\A_s\}_{s\in S}$ is called \emph{semi-abelian} if the fibers $\A_e$ are commutative \cstar{}algebras for every idempotent $e\in S$.
\begin{corollary}\label{cor:RefinementForSemiAbelianFellBundle}
Every semi-abelian Fell bundle has a regular, saturated \textup(necessarily semi-abelian\textup) refinement.
\end{corollary}
\begin{proof}
By Proposition~\ref{prop:commutative=>loc.regular}, every semi-abelian Fell bundle is locally regular.
Hence the assertion follows from Proposition~\ref{prop:RefinementForLocRegularFellBundle}.
\end{proof}
In \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids} we have shown that there is a close relationship between
saturated, semi-abelian Fell bundles and twisted étale groupoids, or equivalently, Fell line bundles over étale groupoids.
The following result shows that the twisted étale groupoids associated to semi-abelian Fell bundles are not changed under refinements:
\begin{proposition}\label{prop:RefinementPreserveTwistedGroupoids}
If $\B=\{\B_t\}_{t\in T}$ is a saturated refinement of a saturated, semi-abelian Fell bundle $\A=\{\A_s\}_{s\in S}$,
then the twisted étale groupoids associated to $\B$ and $\A$ as in \cite[Section~3.2]{BussExel:Fell.Bundle.and.Twisted.Groupoids} are isomorphic.
\end{proposition}
\begin{proof}
During the proof we shall write $(\G,L)$ and $(\G',L')$ for the twisted groupoids associated to $\B$ and $\A$, respectively,
as constructed in \cite[Section~3.2]{BussExel:Fell.Bundle.and.Twisted.Groupoids}. Here $\G$ and $\G'$ are étale groupoids
and $L$ and $L'$ are Fell line bundles over $\G$ and $\G'$, respectively. Let $(\phi,\psi)$ be the morphism from
$\B$ to $\A$ that gives $\B$ as a refinement of $\A$ as in Definition~\ref{def:refinement}.
The map $\psi\colon\B\to \A$ preserves all the algebraic operations and inclusion maps of $\B$ and $\A$ and
restricts to an injective map $\B_t\into \A_{\phi(t)}$ for all $t\in T$. So, we may suppress $\psi$ and identify each $\B_t$
as a closed subspace of $\A_{\phi(t)}$. By Theorem~\ref{theo:RefinementPreserveFullC*Algebras},
these inclusions extend to the level of $C^*$-algebras yielding an isomorphism $C^*(\B)\cong C^*(\A)$
which restricts to an isomorphism of the underlying commutative $C^*$-algebras
$C^*(\E_\B)$ and $C^*(\E_\A)$. Let us say that $C^*(\E_\B)\cong C^*(\E_\A)\cong \cont_0(X)$ for some locally compact space $X$.
Through these identifications, $\B_f\sbe\A_e\sbe\cont_0(X)$ becomes an inclusion of ideals whenever $e\in E(S)$, $f\in E(T)$ and $\phi(f)=e$.
Moreover,
\begin{equation}\label{eq:U_f is the union of U_f's}
\A_e=\overline{\sum\limits_{f\in \phi^{-1}(e)}\B_f}\quad \Longrightarrow\quad\U_e=\bigcup\limits_{f\in \phi^{-1}(e)}\U_f,
\end{equation}
where $\U_e$ is the open subset of $X$ that corresponds to the ideal $\A_e\sbe\cont_0(X)$, that is, $\A_e=\cont_0(\U_e)$.
By definition, $\G$ and $\G'$ are the groupoids of germs for certain canonical actions $\theta$ and $\theta'$ of $T$ and $S$ on $X$, respectively
(see \cite[Proposition~3.5]{BussExel:Fell.Bundle.and.Twisted.Groupoids} for the precise definition).
We shall use the same notation as in \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids}
and write elements of $\G$ as equivalence classes $\germ tx$ of pairs $(t,x)$ where $t\in T$ and $x\in \dom(\theta_t)=\U_{t^*t}$,
and elements of $ L$ as equivalence classes $\qtrip btx$ of triples $(b,t,x)$ with $b\in \B_t$ and $x\in \dom(b)=\{y\in X\colon (b^*b)(y)>0\}$.
See \cite[Section~3.2]{BussExel:Fell.Bundle.and.Twisted.Groupoids} for further details.
Of course, we also use similar notations for elements of $\G'$ and $ L'$.
Now, consider the maps $\Phi\colon\G\to\G'$ and $\Psi\colon L\to L'$ defined by
$\Phi\germ ty=\germ{\phi(t)}{y}$ and $\Psi\qtrip bty=\qtrip{\psi(b)}{\phi(t)}{y}$. We are going to show that these maps
give us the desired isomorphism $(\G, L)\cong (\G', L')$.
It is easy to see that the definitions of $\Phi$ and $\Psi$ do not depend on the choice of representatives of the equivalence classes,
that is, $\Phi$ and $\Psi$ are well-defined maps. Since $\phi$ is surjective, so is $\Phi$.
By \cite[Proposition~3.5]{BussExel:Fell.Bundle.and.Twisted.Groupoids}, $\theta_t\colon\U_{t^*t}\to \U_{tt^*}$ is the union of the partial homeomorphisms $\theta_b$ with $b\in \B_t$ defined in
\cite[Lemma~3.3]{BussExel:Fell.Bundle.and.Twisted.Groupoids}. Of course, the same holds for the partial homeomorphisms $\theta_s'\colon\U_{s^*s}\to \U_{ss^*}$, $s\in S$, associated to the Fell bundle $\A$. By~\eqref{eq:ConditionRefinement},
each $\theta_s'$ is the union of the partial homeomorphisms $\theta_t$ with $t\in \phi^{-1}(s)$.
This implies that $\Psi$ is surjective.
To show that $\Phi$ is injective, suppose that $\germ ty,\germ{t'}{y}\in \G$ and
$\germ{\phi(t)}{y}=\germ{\phi(t')}{y}$ in $\G'$, that is, there is $e\in E(S)$ such that $y\in \U_e$
and $\phi(t)e=\phi(t')e$. Equation~\eqref{eq:U_f is the union of U_f's} yields $f\in E(T)$ such that $y\in \U_f$ and $\phi(f)=e$.
Thus $\phi(t_1)=\phi(t_2)$, where $t_1=tf$ and $t_2=t'f$. Since $\phi$ is essentially injective, $g\defeq t_1^*t_2$ belongs to $E(T)$. Note that $t_1g=ht_2$, where $h\defeq t_1t_1^*\in E(T)$. Moreover,
\begin{equation*}
ht_2=ht_2t_2^*t_2=t_2t_2^*ht_2=t_2(t_2^*t_1)(t_1^*t_2)=t_2g^*g=t_2g.
\end{equation*}
Hence $t_1g=t_2g$ and since $y\in \U_{t_1^*t_1}\cap\U_{t_2^*t_2}$ and $g=t_1^*t_2$ is idempotent, it follows that $y\in \U_g$
(see proof of \cite[Lemma~3.4]{BussExel:Fell.Bundle.and.Twisted.Groupoids}). We conclude that $tfg=t'fg$ and $y\in \U_{f}\cap\U_g=\U_{fg}$, so that
$\germ ty=\germ{t'}{y}$. This shows that $\Phi$ is injective. We now prove that $\Psi$ is injective. Assume that $\qtrip{\psi(b)}{\phi(t)}{y}=\qtrip{\psi(b')}{\phi(t')}{y}$
in $ L'$, that is, there are $c,c'\in\E_\A$ such that $c(y),c'(y)>0$ and $\psi(b)c=\psi(b')c'$. Equation~\eqref{eq:U_f is the union of U_f's}
implies the existence of $d,d'\in\E_\B$ with $c=\psi(d)$ and $c'=\psi(d')$. Since $\psi$ induces the isomorphism $C^*(\E_\B)\cong C^*(\E_\A)\cong \cont_0(X)$, we must have $d(y),d'(y)>0$. Let $a\defeq bd$ and $a'\defeq b'd'$. Then $a\in \B_s$, $a'\in \B_{s'}$ for some $s,s'\in T$,
and we have $\psi(a)=\psi(a')$. Note that $\qtrip{a}{s}{y}=\qtrip bty$ and $\qtrip{a'}{s'}{y}=\qtrip{b'}{t'}{y}$.
The injectivity of $\Psi$ will follow if we show that $\qtrip{a}{s}{y}=\qtrip{a'}{s'}{y}$.
By definition of refinement, $\psi$ is injective when restricted to the fibers of $\B$. The only small problem is that $a$ and $a'$
might not lie in the same fiber. However, we may circumvent this problem as follows. Since $\psi(a)=\psi(a')$, $\psi(a)\in \A_{\phi(s)}$ and $\psi(a')\in \A_{\phi(s')}$, we must have $\phi(s)=\phi(s')$.
Hence $\Phi(\germ sy)=\Phi(\germ{s'}{y})$. The (already checked) injectivity of $\Phi$ yields $f\in E(T)$ with $y\in \U_f$ and $r\defeq sf=s'f$.
Now take any function $a_f\in \B_f=\cont_0(\U_f)$ with $a_f(y)>0$. Note that $\qtrip{a}{s}{y}=\qtrip{aa_f}{r}{y}$,
$\qtrip{a'}{s'}{y}=\qtrip{a'a_f}{r}{y}$ and $\psi(aa_f)=\psi(a'a_f)$. Since both $aa_f$ and $a'a_f$ lie in $\B_r$, the injectivity of $\psi_r\colon\B_r\to \A_{\phi(r)}$ implies that $aa_f=a'a_f$.
Therefore $\qtrip{a}{s}{y}=\qtrip{a'}{s'}{y}$, whence the injectivity of $\Psi$ follows.
It is easy to see that $\Phi\colon \G\to \G'$ and $\Psi\colon L\to L'$ preserve all the algebraic operations involved.
Moreover, $\Phi$ and $\Psi$ are homeomorphisms: given $t\in T$ and an open subset $U\sbe\U_{t^*t}$,
the basic neighborhood $\open(t,U)=\{\germ ty\colon y\in U\}$ in $\G$
(see Equation~(3.6) in \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids}) is mapped by $\Phi$ to the
basic neighborhood $\open(s,U)$ in $\G'$, where $s=\phi(t)$ (note that $\U_{t^*t}\sbe\U_{s^*s}$).
Moreover, since any $\U_e$ with $e\in E(S)$ is the union of $\{\U_f\colon f\in \phi^{-1}(e)\}$,
the neighborhoods of the form $\open(s,U)$ with $s=\phi(t)$ and $U$ an open subset of $\U_{t^*t}$, generate the topology of $\G'$.
This implies that $\Phi$ is a homeomorphism. To prove that $\Psi$ is a homeomorphism, we apply \cite[Proposition~II.13.17]{fell_doran}.
For this it is enough to check that $\Psi$ is continuous and isometric on the fibers. But, since $L$ and $L'$ are both Fell bundles, the
injectivity of $\Psi$ (already checked above) implies that it is isometric. To see that $\Psi$ is continuous, let us recall from
\cite[Proposition~3.25]{BussExel:Fell.Bundle.and.Twisted.Groupoids} that the topology on $L$ is generated by the local sections
$\hat{b}\germ ty\defeq\qtrip bty$ for $b\in \B_t$, and similarly for $L'$. Now the continuity of
$\Psi$ follows from the equality $\Psi\circ \hat{b}=\widehat{\psi(b)}\circ\Phi$.
The bundle projections $\pi\colon L\onto \G$ and $\pi'\colon L'\onto \G'$
are defined by $\pi\qtrip bty= \germ ty$ and $\pi'\qtrip asx=\germ sx$.
From this, it is clear that the following diagram is commutative:
\begin{equation*}
\xymatrix{
L \ar[d]_{\Psi}\ar[r]^{\pi} & \G \ar[d]_{\Phi} \\
L' \ar[r]_{\pi'} & \G'
}
\end{equation*}
Therefore, the pair $(\Phi,\Psi)\colon (\G, L)\to(\G', L')$ is an isomorphism of twisted groupoids.
\end{proof}
The semi-abelian Fell bundle associated to a twisted étale groupoid as in \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids}
is automatically saturated. On the other hand, if $\A$ is a non-saturated, semi-abelian Fell bundle, we may apply
Corollary~\ref{cor:RefinementForSemiAbelianFellBundle} to find a saturated, semi-abelian refinement $\B$ of $\A$
and then apply the results of \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids}
to find the associated twisted étale groupoid $(\G,\Sigma)$. By Theorem~\ref{theo:RefinementPreserveFullC*Algebras}
and Propositions~2.18 and~3.40 in \cite{BussExel:Fell.Bundle.and.Twisted.Groupoids}, we have isomorphisms
\begin{equation*}
C^*_{(\red)}(\A)\cong C^*_{(\red)}(\B)\cong C^*_{(\red)}(\G,\Sigma).
\end{equation*}
For the isomorphism between the full \cstar{}algebras above it is necessary to assume that
$\G$ is Hausdorff or second countable, because this is part of the hypotheses of \cite[Proposition~2.18]{BussExel:Fell.Bundle.and.Twisted.Groupoids}.
\begin{bibdiv}
\begin{biblist}
\bib{BGR:MoritaEquivalence}{article}{
AUTHOR = {Brown, Lawrence G.},
AUTHOR = {Green, Philip},
AUTHOR = {Rieffel, Marc A.},
TITLE = {Stable isomorphism and strong {M}orita equivalence of
{$C\sp*$}-algebras},
JOURNAL = {Pacific J. Math.},
VOLUME = {71},
YEAR = {1977},
NUMBER = {2},
PAGES = {349--363},
}
\bib{Busby-Smith:Representations_twisted_group}{article}{
author={Busby, Robert C.},
author={Smith, Harvey A.},
title={Representations of twisted group algebras},
journal={Trans. Amer. Math. Soc.},
volume={149},
date={1970},
pages={503--537},
}
\bib{BussExel:Fell.Bundle.and.Twisted.Groupoids}{article}{
author={Buss, Alcides},
author={Exel, Ruy},
title={Fell bundles over inverse semigroups and twisted étale groupoids},
journal={Preprint (to appear in Journal of Operator Theory)},
date={2009},
note={\arxiv{0903.3388}},
}
\bib{Deaconi_Kumjian_Ramazan:Fell.Bundles}{article}{
author={Deaconu, Valentin},
author={Kumjian, Alex},
author={Ramazan, Birant},
title={Fell bundles associated to groupoid morphisms},
journal={Math. Scand.},
volume={102},
number={2},
date={2008},
pages={305--319},
issn={0025-5521},
}
\bib{DokuchaevExelSimon:twisted.partial.actions}{article}{
AUTHOR = {Dokuchaev, M.},
AUTHOR = {Exel, R.},
AUTHOR = {Sim{\'o}n, J. J.},
TITLE = {Crossed products by twisted partial actions and graded
algebras},
JOURNAL = {J. Algebra},
VOLUME = {320},
YEAR = {2008},
NUMBER = {8},
PAGES = {3278--3310},
}
\bib{Echterhoff.et.al.Categorical.Imprimitivity}{article}{
AUTHOR = {Echterhoff, Siegfried},
AUTHOR = {Kaliszewski, Steven P.},
AUTHOR = {Quigg, John},
AUTHOR = {Raeburn, Iain},
TITLE = {A categorical approach to imprimitivity theorems for \cstar{}dynamical systems},
JOURNAL = {Mem. Amer. Math. Soc.},
VOLUME = {180},
YEAR = {2006},
NUMBER = {850},
PAGES = {viii+169},
}
\bib{Exel:twisted.partial.actions}{article}{
author={Exel, Ruy},
title={Twisted partial actions, a classification of regular $C^*$\nobreakdash-algebraic bundles},
journal={Proc. London Math. Soc.},
volume={74},
date={1997},
pages={417--443},
}
\bib{Exel:inverse.semigroups.comb.C-algebras}{article}{
author={Exel, Ruy},
title={Inverse semigroups and combinatorial $C^*$\nobreakdash-algebras},
journal={Bull. Braz. Math. Soc. (N.S.)},
volume={39},
date={2008},
number={2},
pages={191--313},
issn={1678-7544},
review={\MRref{2419901}{}},
}
\bib{Exel:noncomm.cartan}{article}{
author={Exel, Ruy},
title={Noncommutative cartan sub-algebras of $C^*$\nobreakdash-algebras},
journal={Preprint},
date={2008},
}
\bib{fell_doran}{book}{
author={Fell, James M.G.},
author={Doran, Robert S.},
title={Representations of \Star{}Algebras, Locally Compact Groups, and Banach \Star{}Algebraic Bundles Vol.1},
volume={126},
series={Pure and Applied Mathematics},
publisher={Academic Press Inc.},
date={1988},
pages={xviii + 746},
isbn={0-12-252721-6},
review={\MRref{936628}{90c:46001}},
}
\bib{KhoshkamSkandalis:CrossedProducts}{article}{
AUTHOR = {Khoshkam, Mahmood},
AUTHOR = {Skandalis, Georges},
TITLE = {Crossed products of {$C\sp *$}-algebras by groupoids and
inverse semigroups},
JOURNAL = {J. Operator Theory},
VOLUME = {51},
YEAR = {2004},
NUMBER = {2},
PAGES = {255--279},
}
\bib{Kumjian:cstar.diagonals}{article}{
author={Kumjian, Alexander},
title={On $C^*$\nobreakdash-diagonals},
journal={Can. J. Math.},
volume={XXXVIII},
number={4},
date={1986},
pages={969--1008},
}
\bib{Kumjian:fell.bundles.over.groupoids}{article}{
author={Kumjian, Alexander},
title={Fell bundles over groupoids},
journal={Proc. Amer. Math. Soc.},
volume={126},
number={4},
date={1998},
pages={1115--1125},
}
\bib{Lawson:InverseSemigroups}{book}{
author={Lawson, Mark V.},
title={Inverse semigroups: the theory of partial symmetries},
publisher={World Scientific Publishing Co.},
place={River Edge, NJ},
date={1998},
pages={xiv+411},
isbn={981-02-3316-7},
}
\bib{Muhly.Williams.Continuous.Trace.Groupoid}{article}{
AUTHOR = {Muhly, Paul S.},
AUTHOR = {Williams, Dana P.},
TITLE = {Continuous trace groupoid \cstar{}algebras. {II}},
JOURNAL = {Math. Scand.},
VOLUME = {70},
YEAR = {1992},
NUMBER = {1},
PAGES = {127--145},
ISSN = {0025-5521},
}
\bib{Packer-Raeburn:Stabilisation}{article}{
author={Packer, Judith A.},
author={Raeburn, Iain},
title={Twisted crossed products of $C^*$\nobreakdash-algebras},
journal={Math. Proc. Cambridge Philos. Soc.},
volume={106},
date={1989},
pages={293--311},
}
\bib{Paterson:Groupoids}{book}{
author={Paterson, Alan L. T.},
title={Groupoids, inverse semigroups, and their operator algebras},
series={Progress in Mathematics},
volume={170},
publisher={Birkh\"auser Boston Inc.},
place={Boston, MA},
date={1999},
pages={xvi+274},
}
\bib{Quigg.Sieben.C.star.actions.r.discrete.groupoids.and.inverse.semigroups}{article}{
author={Quigg, John},
author={Sieben, Nándor},
title={$C^*$-actions of r-discrete groupoids and inverse semigroups},
journal={J. Austral. Math. Soc.},
volume={66},
number={Series A},
date={1999},
pages={143--167},
}
\bib{Raeburn:PicardGroup}{article}{
AUTHOR = {Raeburn, Iain},
TITLE = {On the Picard group of a continuous trace \cstar{}algebra},
JOURNAL = {Trans. Amer. Math. Soc.},
VOLUME = {263},
YEAR = {1981},
NUMBER = {1},
PAGES = {183--205},
ISSN = {0002-9947},
}
\bib{RenaultThesis}{book}{
AUTHOR = {Renault, Jean},
TITLE = {A groupoid approach to {$C\sp{\ast} $}-algebras},
SERIES = {Lecture Notes in Mathematics},
VOLUME = {793},
PUBLISHER = {Springer},
ADDRESS = {Berlin},
YEAR = {1980},
PAGES = {ii+160},
ISBN = {3-540-09977-8},
}
\bib{RenaultCartan}{article}{
author = {Renault, Jean},
title = {Cartan subalgebras in {$C\sp *$}-algebras},
journal = {Irish Math. Soc. Bull.},
volume = {61},
date = {2008},
pages = {29--63},
issn = {0791-5578},
}
\bib{Rordam:StableExtensions}{article}{
AUTHOR = {R{\o}rdam, Mikael},
TITLE = {Extensions of stable {$C\sp *$}-algebras},
JOURNAL = {Doc. Math.},
VOLUME = {6},
YEAR = {2001},
PAGES = {241--246 (electronic)},
}
\bib{SiebenTwistedActions}{article}{
AUTHOR = {Sieben, N{\'a}ndor},
TITLE = {{$C\sp *$}-crossed products by twisted inverse semigroup actions},
JOURNAL = {J. Operator Theory},
VOLUME = {39},
YEAR = {1998},
NUMBER = {2},
PAGES = {361--393},
}
\bib{SiebenFellBundles}{article}{
author={Sieben, Nándor},
title={Fell bundles over $r$\nb-discrete groupoids and inverse semigroups},
journal={Unpublished preprint},
date={1998},
note={\href{http://jan.ucc.nau.edu/~ns46/bundle.ps.gz}{http://jan.ucc.nau.edu/\~{}ns46/bundle.ps.gz}},
}
\bib{Zettl:TROs}{article}{
AUTHOR = {Zettl, Heinrich},
TITLE = {A characterization of ternary rings of operators},
JOURNAL = {Adv. in Math.},
VOLUME = {48},
YEAR = {1983},
NUMBER = {2},
PAGES = {117--143},
}
\end{biblist}
\end{bibdiv}
\end{document} | math | 168,928 |
\begin{document}
\title[Higher order parallel surfaces in Bianchi-Cartan-Vranceanu spaces]
{Higher order parallel surfaces in Bianchi-Cartan-Vranceanu spaces}
\author[J. Van der Veken]{Joeri Van der Veken}
\address{Katholieke Universiteit Leuven\\ Departement
Wiskunde\\ Celestijnenlaan 200 B\\ B-3001 Leuven\\ Belgium}
\email{[email protected]}
\thanks{The author is a postdoctoral researcher supported by the Research Foundation-Flanders (FWO)}
\begin{abstract}
We give a full classification of higher order parallel surfaces in
three-dimen\-sional homogeneous spaces with four-dimensional
isometry group, i.e. in the so-called Bianchi-Cartan-Vranceanu
family. This gives a positive answer to a conjecture formulated in
\cite{3}. As a partial result, we prove that totally umbilical
surfaces only exist if the ambient Bianchi-Cartan-Vranceanu space is
a Riemannian product of a surface of constant Gaussian curvature and
the real line, and we give a local parametrization of all totally
umbilical surfaces.
\end{abstract}
\keywords{Higher order parallel, totally umbilical, surface, second
fundamental form, three-dimensional homogeneous space}
\subjclass[2000]{Primary: 53B25; Secondary: 53C40}
\maketitle
\section{Introduction}
A Riemannian manifold $(M,g)$ is said to be homogeneous if for every
two points $p$ and $q$ in $M$, there exists an isometry of $M$,
mapping $p$ into $q$. The classification of simply connected
3-dimensional homogeneous spaces is well-known. The dimension of the
isometry group must equal 6, 4 or 3. If the isometry group is of
dimension 6, $M$ is a complete real space form, i.e. Euclidean space
$\mathbb{E}^3$, a sphere $\mathbb{S}^3(\kappa)$, or a hyperbolic space
$\mathbb{H}^3(\kappa)$. If the dimension of the isometry group is 4, $M$ is
isometric to $\mathrm{SU}(2)$, the special unitary group, to
$[\mathrm{SL}(2,\mathbb{R})]^{\sim}$, the universal covering of the real
special linear group, to $\mathrm{Nil}_3$, the Heisenberg group, all with a
certain left-invariant metric, or to a Riemannian product
$\mathbb{S}^2(\kappa)\times\mathbb{R}$ or $\mathbb{H}^2(\kappa)\times\mathbb{R}$. Finally, if the
dimension of the isometry group is 3, $M$ is isometric to a general
simply connected Lie group with left-invariant metric. As will
become clear in the next section, Bianchi-Cartan-Vranceanu spaces
are in fact the spaces with 4-dimensional isometry group mentioned
above, together with $\mathbb{E}^3$ and $\mathbb{S}^3(\kappa)$.
The classification above contains the eight ``model geometries''
appearing in the famous conjecture of Thurston on the classification
of 3-manifolds, namely $\mathbb{E}^3$, $\mathbb{S}^3$, $\mathbb{H}^3$, $\mathbb{S}^2\times\mathbb{R}$,
$\mathbb{H}^2\times\mathbb{R}$, $[\mathrm{SL}(2,\mathbb{R})]^{\sim}$, $\mathrm{Nil}_3$ and $\mathrm{Sol}_3$. See
for example \cite{17a}. In theoretical cosmology, the metrics on
these spaces are known as Bianchi-Kantowski-Sachs type metrics, used
to construct spatially homogeneous spacetimes, see for example
\cite{13a}.
Immersions of curves and surfaces in 3-dimensional real space forms
are extensively studied and it is now very natural to allow the
other 3-dimensional homogeneous manifolds as ambient spaces. Initial
work in this direction can be found in \cite{12} and
\cite{7}.
An important class of surfaces to study are parallel surfaces. These
immersions have a parallel second fundamental form and hence their
extrinsic invariants ``are the same'' at every point. Parallel
submanifolds in real space forms are classified in \cite{1}. In
\cite{8}, \cite{9}, \cite{10} and \cite{15}, the notion of higher
order parallelism is introduced and a classification for
hypersurfaces in real space forms is obtained. In \cite{3}, a
classification of parallel surfaces in 3-dimensional homogeneous
spaces with 4-dimensional isometry group is given, whereas the
classification of higher order parallel surfaces is formulated as a
conjecture. In this article we will prove this conjecture (Theorem
\ref{theo7}). For an overview of the theory of parallel and higher
order parallel submanifolds we refer to \cite{16}.
Another important class of surfaces are totally umbilical ones. From
an extrinsic viewpoint, these surfaces are curved equally in every
direction. We will give a full local classification of totally
umbilical surfaces in 3-dimensional homogeneous spaces with
4-dimensional isometry group (Theorems \ref{theo}, \ref{T:3.3.2} and
\ref{T:3.3.3}). Although in a real space form a totally umbilical
surface is automatically parallel, this will no longer be the case
in the
spaces under consideration.
\section{Examples of three-dimensional homogeneous spaces}
\subsection{The Heisenberg group $\mathrm{Nil}_3$ with left-invariant metric}
The Heisenberg group $\mathrm{Nil}_3$ is a Lie group which is diffeomorphic
to $\mathbb{R}^3$ and the group operation is defined by
$$(x,y,z)\ast(\overline{x},\overline{y},\overline{z})=\left(x+\overline{x},\ y+\overline{y},\ z+\overline{z}+\frac{x\overline{y}}{2}-\frac{\overline{x}y}{2}\right).$$
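As a quick consistency check of this group law (a routine computation, not spelled out in the original), the identity element is $(0,0,0)$ and inversion is simply negation:

```latex
(x,y,z)\ast(0,0,0)=(x,y,z),
\qquad
(x,y,z)\ast(-x,-y,-z)=\left(0,\ 0,\ \frac{x(-y)}{2}-\frac{(-x)y}{2}\right)=(0,0,0),
```

so $(x,y,z)^{-1}=(-x,-y,-z)$.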
Remark that the mapping
$$\mathrm{Nil}_3\rightarrow\left\{\left.\left(\begin{array}{ccc}1&a&b\\0&1&c\\0&0&1\end{array}\right)\ \right|\ a,b,c\in\mathbb{R}\right\} : (x,y,z)\mapsto\left(\begin{array}{ccc}1&x&z+\frac{xy}{2}\\0&1&y\\0&0&1\end{array}\right)$$
is an isomorphism between $\mathrm{Nil}_3$ and a subgroup of
$\mathrm{GL}(3,\mathbb{R})$. For every non-zero real number $\tau$ the
following metric on $\mathrm{Nil}_3$ is left-invariant:
$$ds^2=dx^2+dy^2+4\tau^2\left(dz+\frac{y\,dx-x\,dy}{2}\right)^2.$$
After the change of coordinates $(x,y,2\tau z)\mapsto(x,y,z)$, this
metric is expressed as
\begin{equation} \label{2.1}
ds^2=dx^2+dy^2+\left(dz+\tau(y\,dx-x\,dy)\right)^2.
\end{equation}
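To check this change of coordinates explicitly, write $\tilde z=2\tau z$, so that $2\tau\,dz=d\tilde z$, and substitute into the previous expression for $ds^2$:

```latex
dx^2+dy^2+4\tau^2\left(dz+\frac{y\,dx-x\,dy}{2}\right)^2
 = dx^2+dy^2+\left(2\tau\,dz+\tau(y\,dx-x\,dy)\right)^2
 = dx^2+dy^2+\left(d\tilde z+\tau(y\,dx-x\,dy)\right)^2,
```

which is (\ref{2.1}) after renaming $\tilde z$ back to $z$.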
\subsection{The projective special linear group $\mathrm{PSL}(2,\mathbb{R})$ with left-invariant metric}
Consider the following subgroup of $\mathrm{GL}(2,\mathbb{R})$:
$$\mathrm{SL}(2,\mathbb{R})=\left\{\left.\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\ \right|\ ad-bc=1\right\}.$$
First note that this group is isomorphic to the
following subgroup of $\mathrm{GL}(2,\mathbb{C})$:
$$G=\left\{\left.\left(\begin{array}{cc}\alpha&\beta\\\overline{\beta}&\overline{\alpha}\end{array}\right)\ \right|\ |\alpha|^2-|\beta|^2=1\right\},$$
via the isomorphism
$$\mathrm{SL}(2,\mathbb{R})\rightarrow
G:\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\mapsto\frac{1}{2}\left(\begin{array}{cc}i&1\\1&i\end{array}\right)\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\left(\begin{array}{cc}-i&1\\1&-i\end{array}\right).$$
Now consider the Poincar\'e disc-model for the hyperbolic plane
$\mathbb{H}^2(\kappa)$ of constant Gaussian curvature $\kappa<0$:
\begin{eqnarray} \label{2.2}
\mathbb{H}^2(\kappa) &\cong& \left(\left\{(x,y)\in\mathbb{R}^2\ \left|\ x^2+y^2<-\frac{4}{\kappa}\right.\right\},\ \frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}\right)\\
&\cong& \left(\left\{z\in\mathbb{C}\ \left|\ |z|^2<-\frac{4}{\kappa}\right.\right\},\ \frac{dz\,d\overline{z}}{(1+\frac{\kappa}{4}|z|^2)^2}\right)\nonumber
\end{eqnarray}
and define
$$F_{\left(\alpha\ \beta\atop\overline{\beta}\ \overline{\alpha}\right)}(z)=\frac{2}{\sqrt{-\kappa}}\frac{\alpha\sqrt{-\kappa}z + 2\beta}{\overline{\beta}\sqrt{-\kappa}z + 2\overline{\alpha}}.$$
Note that for $\kappa=-4$, this M\"obius transformation simplifies
to $z\mapsto\frac{\alpha z+\beta}{\overline{\beta}
z+\overline{\alpha}}$. The mapping
$$G\times\mathbb{H}^2(\kappa)\rightarrow\mathbb{H}^2(\kappa):(A,z)\mapsto F_A(z)$$
is a transitive, isometric action with stabilizers isomorphic to the
circle group $\mathrm{U}(1)=\{\alpha\in\mathbb{C}\ |\ |\alpha|=1\}.$ This
action induces the following transitive action on the unit
tangent bundle $\mathrm{U}\mathbb{H}^2(\kappa)$:
\begin{equation} \label{2.3}
G\times\mathrm{U}\mathbb{H}^2(\kappa)\rightarrow\mathrm{U}\mathbb{H}^2(\kappa):(A,(z,v))\mapsto(F_A(z),(F_A)_{\ast}v),
\end{equation}
with stabilizers of order two. Hence we can identify
$\mathrm{U}\mathbb{H}^2(\kappa)$ with
$$\mathrm{PSL}(2,\mathbb{R})=\frac{\mathrm{SL}(2,\mathbb{R})}{\left\{\left(1\ 0\atop 0\ 1\right),-\left(1\ 0\atop 0\ 1\right)\right\}}.$$
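To make the claim that the action (\ref{2.3}) has stabilizers of order two concrete, here is the (routine, not in the original) verification at the point $(0,v)$ for $\kappa=-4$:

```latex
F_A(0)=\frac{\beta}{\overline{\alpha}}=0
\;\Longrightarrow\; \beta=0,\ |\alpha|=1,
\qquad\text{and then}\qquad
F_A(z)=\frac{\alpha z}{\overline{\alpha}}=\alpha^2 z.
```

Hence $(F_A)_{\ast}v=\alpha^2 v$ equals $v$ only for $\alpha=\pm1$, that is, $A=\pm\left(1\ 0\atop 0\ 1\right)$, which is exactly the kernel divided out in $\mathrm{PSL}(2,\mathbb{R})$.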
Let us now define a metric on $\mathrm{U}\mathbb{H}^2(\kappa)$. If
$\gamma:I\subseteq\mathbb{R}\rightarrow\mathrm{U}\mathbb{H}^2(\kappa):t\mapsto
(z(t),v(t))$ is a curve, with $z(t)$ a curve in $\mathbb{H}^2(\kappa)$ and
for every $t\in I$, $v(t)\in T_{z(t)}\mathbb{H}^2(\kappa)$ and $\|v(t)\|=1$,
we put
\begin{equation} \label{2.4}
\|\gamma'(t_0)\|^2 = \|z'(t_0)\|^2 +
\left(\frac{2\tau}{\kappa}\right)^2\|(\nabla_{z'}v)_{z(t_0)}\|^2,\quad
\tau\in\mathbb{R}\setminus\{0\},
\end{equation}
where $\nabla_{z'}v$ is the covariant derivative of the vector field
$v$ along the curve $z(t)$. For $\tau=\pm\frac{\kappa}{2}$, this
metric is induced from the standard metric on the tangent bundle. By
varying the parameter $\tau$, we distort the length of the fibres.
It is clear that the action (\ref{2.3}) is now isometric and hence
the induced metric on $\mathrm{PSL}(2,\mathbb{R})$ via the identification is
left-invariant. The metric (\ref{2.4}) can be computed explicitly,
in a way analogous to \cite{7}, in the coordinate system
\begin{multline*}
\mathbb{D}^2\left(\frac{2}{\sqrt{-\kappa}}\right)\times\mathbb{S}^1(1)\rightarrow\mathrm{U}\mathbb{H}^2(\kappa):\\
((x,y),\theta)\mapsto\left((x,y),\
\left(1+\frac{\kappa}{4}(x^2+y^2)\right)\left(\cos\left(\frac{\kappa}{2\tau}\theta\right)\frac{\partial}{\partial
x}+\sin\left(\frac{\kappa}{2\tau}\theta\right)\frac{\partial}{\partial
y}\right)\right),
\end{multline*}
where $\mathbb{D}^2\left(\frac{2}{\sqrt{-\kappa}}\right)$ is the disc of
radius $\frac{2}{\sqrt{-\kappa}}$, yielding the following result:
\begin{equation} \label{2.5}
ds^2=\frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}+\left(d\theta+\tau\frac{y\,dx-x\,dy}{1+\frac{\kappa}{4}(x^2+y^2)}\right)^2,\quad
\kappa<0.
\end{equation}
\subsection{The special orthogonal group $\mathrm{SO}(3)$ with left-invariant metric}
Consider the following subgroup of $\mathrm{GL}(2,\mathbb{C})$:
$$\mathrm{SU}(2)=\left\{\left.\left(\begin{array}{cc}\alpha&\beta\\-\overline{\beta}&\overline{\alpha}\end{array}\right)\ \right|\ |\alpha|^2+|\beta|^2=1\right\}.$$
Using stereographic projection, we have for an arbitrary
$\kappa>0$:
\begin{equation} \label{2.8}
\mathbb{S}^2(\kappa)\setminus\{\infty\}\cong\left(\mathbb{R}^2,\
\frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}\right)\cong\left(\mathbb{C},\
\frac{dz\,d\overline{z}}{(1+\frac{\kappa}{4}|z|^2)^2}\right).
\end{equation}
The analogy with the previous case is clear and we could now
proceed in the same way as above, putting
$$F_{\left(\alpha\ \ \beta\atop-\overline{\beta}\ \overline{\alpha}\right)}(z)=\frac{2}{\sqrt{\kappa}}\frac{\alpha\sqrt{\kappa}z + 2\beta}{-\overline{\beta}\sqrt{\kappa}z + 2\overline{\alpha}},$$
and being careful in calculations involving the symbol $\infty$. In
this way we would find that $\mathrm{U}\mathbb{S}^2(\kappa)$ can be
identified with
$$\mathrm{PSU}(2)=\frac{\mathrm{SU}(2)}{\left\{\left(1\ 0\atop 0\ 1\right),-\left(1\ 0\atop 0\ 1\right)\right\}}.$$
But since $\mathrm{PSU}(2)$ is isomorphic to $\mathrm{SO}(3)$, see
for example \cite{18}, there is an easier way to construct the
desired group action. Looking at $\mathbb{S}^2(\kappa)$ as a hypersphere in
$\mathbb{E}^3$ centered at the origin, we can identify both points of the
surface and tangent vectors to it with elements of $\mathbb{R}^3$ and we
define
$$\mathrm{SO}(3)\times\mathrm{U}\mathbb{S}^2(\kappa)\rightarrow\mathrm{U}\mathbb{S}^2(\kappa):(A,(p,v))\mapsto (Ap,Av).$$
This is a transitive action with trivial stabilizers and a metric on
$\mathrm{U}\mathbb{S}^2(\kappa)$ analogous to (\ref{2.4}) turns it into an
isometric action. This means that the induced metric on
$\mathrm{SO}(3)$ will be left-invariant and in the local coordinates
$$\mathbb{R}^2\times\mathbb{S}^1(1)\rightarrow\mathrm{U}\mathbb{S}^2(\kappa):((x,y),\theta)\mapsto\left((x,y),\ \left(1+\frac{\kappa}{4}(x^2+y^2)\right)\left(\cos\left(\frac{\kappa}{2\tau}\theta\right)\frac{\partial}{\partial x}+\sin\left(\frac{\kappa}{2\tau}\theta\right)\frac{\partial}{\partial y}\right)\right)$$
it is expressed as
\begin{equation} \label{2.9}
ds^2=\frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}+\left(d\theta+\tau\frac{y\,dx-x\,dy}{1+\frac{\kappa}{4}(x^2+y^2)}\right)^2,\quad
\kappa>0.
\end{equation}
\subsection{The Riemannian product spaces $\mathbb{H}^2(\kappa)\times\mathbb{R}$ and $\mathbb{S}^2(\kappa)\times\mathbb{R}$}Using respectively
the models (\ref{2.2}) and (\ref{2.8}) for $\mathbb{H}^2(\kappa)$ and
$\mathbb{S}^2(\kappa)$, one sees that the Riemannian product metric on these
spaces can be expressed (locally) as
\begin{equation}\label{2.10}
ds^2 = \frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}+ dz^2.
\end{equation}
\subsection{Bianchi-Cartan-Vranceanu spaces}
Remark that the metrics (\ref{2.1}), (\ref{2.5}), (\ref{2.9}) and
(\ref{2.10}) of the homogeneous spaces above are of the same type.
Cartan classified all 3-dimensional spaces with 4-dimensional
isometry group in \cite{6}. In particular, he proved that they are
all homogeneous and obtained the following two-parameter family of
spaces, which are now known as the {\em Bianchi-Cartan-Vranceanu
spaces} or {\em BCV spaces} for short. For $\kappa,\tau\in\mathbb{R}$, we
define $\widetilde{M}^3(\kappa,\tau)$ as the following open subset of $\mathbb{R}^3$:
$$\left\{(x,y,z)\in\mathbb{R}^3\ \left|\ 1+\frac{\kappa}{4}(x^2+y^2)>0\right.\right\},$$
equipped with the metric
\begin{equation}\label{2.11}
ds^2=\frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}+\left(dz+\tau\frac{y\,dx-x\,dy}{1+\frac{\kappa}{4}(x^2+y^2)}\right)^2.
\end{equation}
See also \cite{4}, \cite{5} and \cite{19}. The result of Cartan
shows that the examples above cover in fact all possible
3-dimensional homogeneous spaces with 4-dimensional isometry group.
The BCV family also includes two real space forms, which have
6-dimensional isometry group. The full classification of these
spaces is as follows:
\begin{itemize}
\item if $\kappa=\tau=0$, then $\widetilde{M}^3(\kappa,\tau)\cong \mathbb{E}^3$;
\item if $\kappa=4\tau^2\neq 0$, then $\widetilde{M}^3(\kappa,\tau)\cong \mathbb{S}^3\left(\frac{\kappa}{4}\right)\setminus\{\infty\}$;
\item if $\kappa >0$ and $\tau=0$, then $\widetilde{M}^3(\kappa,\tau)\cong (\mathbb{S}^2(\kappa)\setminus\{\infty\})\times\mathbb{R}$;
\item if $\kappa <0$ and $\tau=0$, then $\widetilde{M}^3(\kappa,\tau)\cong \mathbb{H}^2(\kappa)\times\mathbb{R}$;
\item if $\kappa >0$ and $\tau\neq 0$, then $\widetilde{M}^3(\kappa,\tau)\cong [U(\mathbb{S}^2(\kappa)\setminus\{\infty\})]^{\sim}\cong\mathrm{SU}(2)\setminus\{\infty\}$;
\item if $\kappa <0$ and $\tau\neq 0$, then $\widetilde{M}^3(\kappa,\tau)\cong [U\mathbb{H}^2(\kappa)]^{\sim}\cong[\mathrm{SL}(2,\mathbb{R})]^{\sim}$;
\item if $\kappa =0$ and $\tau\neq 0$, then $\widetilde{M}^3(\kappa,\tau)\cong \mathrm{Nil}_3$.
\end{itemize}
To end this section, we discuss the geometry of these spaces. The
following vector fields form an orthonormal frame on
$\widetilde{M}^3(\kappa,\tau)$:
\begin{eqnarray*}
e_1 =
\left(1+\frac{\kappa}{4}(x^2+y^2)\right)\frac{\partial}{\partial
x}-\tau y\frac{\partial}{\partial z},\quad e_2 =
\left(1+\frac{\kappa}{4}(x^2+y^2)\right)\frac{\partial}{\partial
y}+\tau x\frac{\partial}{\partial z},\quad e_3 =
\frac{\partial}{\partial z}.
\end{eqnarray*}
It is clear that these vector fields satisfy the following
commutation relations:
\begin{equation}\label{Lie-haken}
[e_1,e_2]=-\frac{\kappa}{2}ye_1+\frac{\kappa}{2}xe_2+2\tau e_3,
\qquad [e_2,e_3]=0, \qquad [e_3,e_1]=0.
\end{equation}
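The first relation can be checked directly from the coordinate expressions of the frame, $e_1=\mu\frac{\partial}{\partial x}-\tau y\frac{\partial}{\partial z}$ and $e_2=\mu\frac{\partial}{\partial y}+\tau x\frac{\partial}{\partial z}$, where we abbreviate $\mu=1+\frac{\kappa}{4}(x^2+y^2)$, so that $\mu_x=\frac{\kappa}{2}x$, $\mu_y=\frac{\kappa}{2}y$ and $\mu\frac{\partial}{\partial x}=e_1+\tau y\,e_3$, $\mu\frac{\partial}{\partial y}=e_2-\tau x\,e_3$:

```latex
\begin{align*}
[e_1,e_2]
&=\left[\mu\frac{\partial}{\partial x}-\tau y\frac{\partial}{\partial z},\
\mu\frac{\partial}{\partial y}+\tau x\frac{\partial}{\partial z}\right]
=-\mu\mu_y\frac{\partial}{\partial x}+\mu\mu_x\frac{\partial}{\partial y}
+2\tau\mu\frac{\partial}{\partial z}\\
&=-\frac{\kappa}{2}y\,e_1+\frac{\kappa}{2}x\,e_2+2\tau e_3.
\end{align*}
```

The other two brackets vanish because none of the coefficients of $e_1$, $e_2$, $e_3$ depends on $z$.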
The Levi Civita connection of $\widetilde{M}^3(\kappa,\tau)$ can then be
computed using Koszul's formula:
\begin{equation} \label{Levi Civita}
\begin{array}{ccc} \widetilde{\nabla}_{e_1}e_1=\frac{\kappa}{2}ye_2, & \widetilde{\nabla}_{e_1}e_2=-\frac{\kappa}{2}ye_1+\tau e_3, & \widetilde{\nabla}_{e_1}e_3=-\tau e_2, \\
\widetilde{\nabla}_{e_2}e_1=-\frac{\kappa}{2}xe_2-\tau e_3, & \widetilde{\nabla}_{e_2}e_2=\frac{\kappa}{2}xe_1, & \widetilde{\nabla}_{e_2}e_3=\tau e_1, \\
\widetilde{\nabla}_{e_3}e_1=-\tau e_2, & \widetilde{\nabla}_{e_3}e_2=\tau e_1, & \widetilde{\nabla}_{e_3}e_3=0. \end{array}
\end{equation}
Remark that $\widetilde{\nabla}_Xe_3=\tau(X\times e_3)$ for every $X\in
T\widetilde{M}^3(\kappa,\tau)$, where the cross product is defined as an
anti-symmetric bilinear operation, satisfying $e_1\times e_2=e_3$,
$e_2\times e_3=e_1$ and $e_3\times e_1=e_2$. The equations in
(\ref{Levi Civita}) yield the following expression for the curvature
tensor of $\widetilde{M}^3(\kappa,\tau)$:
\begin{multline}
\widetilde{R}(X,Y)Z=(\kappa-3\tau^2)(\langle Y,Z\rangle X-\langle X,Z\rangle Y)\\
-(\kappa-4\tau^2)(\langle Y,e_3 \rangle\langle Z,e_3 \rangle X - \langle X,e_3 \rangle\langle Z,e_3 \rangle Y
+ \langle X,e_3 \rangle\langle Y,Z \rangle e_3 - \langle Y,e_3 \rangle\langle X,Z \rangle e_3)
\end{multline}
for $p\in \widetilde{M}^3(\kappa,\tau)$ and $X,Y,Z\in
T_p\widetilde{M}^3(\kappa,\tau)$.
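Substituting the frame vectors $e_i$ into this expression gives the sectional curvatures of the coordinate $2$-planes:

```latex
\begin{equation*}
\langle\widetilde{R}(e_1,e_2)e_2,e_1\rangle=\kappa-3\tau^2,
\qquad
\langle\widetilde{R}(e_1,e_3)e_3,e_1\rangle
=(\kappa-3\tau^2)-(\kappa-4\tau^2)=\tau^2.
\end{equation*}
```

As a consistency check, for $\kappa=4\tau^2\neq 0$ both values reduce to $\frac{\kappa}{4}$, as they must for the space form $\mathbb{S}^3\left(\frac{\kappa}{4}\right)$.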
Consider the following Riemannian surface with constant Gaussian
curvature $\kappa$:
$$\widetilde{M}^2(\kappa)=\left(\left\{(x,y)\in\mathbb{R}^2\ \left|\ 1+\frac{\kappa}{4}(x^2+y^2)>0\right.\right\}\ ,\ \frac{dx^2+dy^2}{(1+\frac{\kappa}{4}(x^2+y^2))^2}\right).$$
Then the mapping
$$\pi:\widetilde{M}^3(\kappa,\tau)\rightarrow\widetilde{M}^2(\kappa):(x,y,z)\mapsto (x,y)$$
is a Riemannian submersion, referred to as the
\emph{Hopf-fibration}. For $\kappa=4\tau^2\neq 0$, this mapping
coincides with the ``classical'' Hopf-fibration
$\pi:\mathbb{S}^3\left(\frac{\kappa}{4}\right)\rightarrow\mathbb{S}^2(\kappa)$. In
the following, by a \emph{Hopf-cylinder} we mean the inverse image
of a curve in $\widetilde{M}^2(\kappa)$ under $\pi$. By a \emph{leaf} of the
Hopf-fibration, we mean a surface which is everywhere orthogonal to
the fibres. From Frobenius' theorem and (\ref{Lie-haken}), it is
clear that such a leaf can exist only if $\tau=0$.
\section{Surfaces immersed in BCV spaces}
Let us start with recalling the basic formulas from the theory of
submanifolds. Suppose that $F:M^n\rightarrow\widetilde{M}^{n+k}$ is an
isometric immersion of Riemannian manifolds and denote by $\nabla$
the Levi Civita connection of $M^n$ and by $\widetilde{\nabla}$ that
of $\widetilde{M}^{n+k}$. With the appropriate identifications, the formulas of
Gauss and Weingarten state respectively
\begin{eqnarray}
\widetilde{\nabla}_X Y &=& \nabla_X Y + \alpha(X,Y), \label{3.1} \\
\widetilde{\nabla}_X \xi &=& -S_{\xi}X + \nabla^{\perp}_X\xi,
\label{3.2}
\end{eqnarray}
where $X$ and $Y$ are vector fields tangent to $M^n$ and $\xi$ is a
normal vector field along $M^n$. The symmetric (1,2)-tensor field
$\alpha$, taking values in the normal bundle, is called the
\emph{second fundamental form}, the symmetric (1,1)-tensor field
$S_{\xi}$ on $M^n$ is the \emph{shape operator associated to $\xi$}
and $\nabla^{\perp}$ is a connection in the normal bundle. From
these formulas the equations of Gauss and Codazzi can be deduced:
\begin{eqnarray}
\mathrm{tan}(\widetilde{R}(X,Y)Z) &=& R(X,Y)Z+S_{\alpha(X,Z)}Y-S_{\alpha(Y,Z)}X, \label{3.3} \\
\mathrm{tan}(\widetilde{R}(X,Y)\xi) &=&
(\nabla_YS)_{\xi}X-(\nabla_XS)_{\xi}Y, \label{3.4}
\end{eqnarray}
for $p\in M^n$ and $X,Y,Z\in T_pM^n$, $\xi\in T_p^{\perp}M^n$. Here
$R$ is the Riemann-Christoffel curvature tensor of $M^n$,
$\widetilde{R}$ that of $\widetilde{M}^{n+k}$, ``tan'' denotes the projection
on the tangent space to $M^n$ and
$(\nabla_XS)_{\xi}Y=\nabla_X(S_{\xi}Y)-S_{\xi}(\nabla_XY)-S_{\nabla_X^{\perp}\xi}Y$.
Now let $F:M^2\rightarrow\widetilde{M}^3(\kappa,\tau)$ be an isometric
immersion of an oriented surface in a BCV space, with unit normal
$\xi$ and associated shape operator $S$. We denote by $\theta$ the
angle between $e_3$ and $\xi$ and by $T$ the projection of $e_3$ on
the tangent plane to $M^2$, i.e. the vector field $T$ on $M^2$ such
that $F_{\ast}T+\cos\theta\,\xi=e_3$. If we work locally, we may
assume $\theta\in [0,\frac{\pi}{2}]$. The equations of Gauss
(\ref{3.3}) and Codazzi (\ref{3.4}) give respectively
\begin{multline}\label{Gaussvgl}
R(X,Y)Z = (\kappa-3\tau^2)(\langle Y,Z\rangle X-\langle X,Z\rangle
Y)
-(\kappa-4\tau^2)(\langle Y,T \rangle\langle Z,T \rangle X - \langle X,T \rangle\langle Z,T \rangle Y\\
+\langle X,T \rangle\langle Y,Z \rangle T - \langle Y,T \rangle\langle X,Z \rangle T)
+\langle SY,Z \rangle SX - \langle SX,Z \rangle SY
\end{multline}
and
\begin{equation}
\nabla_XSY-\nabla_YSX-S[X,Y]=(\kappa-4\tau^2)\cos\theta(\langle
Y,T\rangle X-\langle X,T\rangle Y)\label{Codazzivgl}
\end{equation}
for $p\in M^2$ and $X,Y,Z\in T_pM^2$. From (\ref{Gaussvgl}) it
follows moreover that the Gaussian curvature of $M^2$ is given by
\begin{equation}
K=\det S+\tau^2+(\kappa-4\tau^2)\cos^2\theta.\label{Gaussvgl2}
\end{equation}
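Indeed, evaluating (\ref{Gaussvgl}) on an orthonormal tangent basis $\{X,Y\}$ and using $\langle X,T\rangle^2+\langle Y,T\rangle^2=\|T\|^2=\sin^2\theta$, one finds

```latex
\begin{align*}
K=\langle R(X,Y)Y,X\rangle
&=(\kappa-3\tau^2)-(\kappa-4\tau^2)\sin^2\theta+\det S\\
&=\det S+\tau^2+(\kappa-4\tau^2)\cos^2\theta.
\end{align*}
```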
Finally, we remark that the following structure equations hold for
$p\in M^2$ and $X\in T_pM^2$:
\begin{eqnarray}
\nabla_XT &=& \cos\theta(SX-\tau JX),\label{structuurvgl1}\\
X[\cos\theta] &=& -\langle SX-\tau JX,T\rangle,\label{structuurvgl2}
\end{eqnarray}
where $J$ denotes the rotation over $\frac{\pi}{2}$ in $T_pM^2$.
These equations can be verified straightforwardly by comparing the
tangential and normal components of both sides of the equality
$\widetilde{\nabla}_X(T+\cos\theta\,\xi)= \tau(X\times(T+\cos\theta\,\xi))$.\\
The following theorem is proven in \cite{7}:
\begin{theorem}\label{stelling Daniel}\cite{7}
Let $M^2$ be a simply connected, oriented Riemannian surface with
metric $\langle\cdot,\cdot\rangle$, Levi Civita connection $\nabla$
and curvature tensor $R$. Let $J$ denote the rotation over
$\frac{\pi}{2}$ in $TM^2$ and $S$ a field of symmetric operators on
$TM^2$. Finally, let $T$ be a vector field on $M^2$ and let
$\cos\theta$ be a differentiable function, satisfying $\langle
T,T\rangle + \cos^2\theta =1$. Then there exists an isometric
immersion $F$ of $M^2$ in $\widetilde{M}^3(\kappa,\tau)$ with unit normal
$\xi$, such that $S$ is the shape operator and
$e_3=F_{\ast}T+\cos\theta\,\xi$ if and only if the equations
(\ref{Gaussvgl}), (\ref{Codazzivgl}), (\ref{structuurvgl1}) and
(\ref{structuurvgl2}) are satisfied. In this case the immersion is
moreover unique up to a global isometry of $\widetilde{M}^3(\kappa,\tau)$,
preserving both the orientations of the base space $\widetilde{M}^2(\kappa)$
and the fibres of $\pi$.
\end{theorem}
\section{Parallel, semi-parallel and higher order parallel hypersurfaces}
Let $F:M^n\rightarrow\widetilde{M}^{n+1}$ be an isometric immersion
of Riemannian manifolds and $p\in M^n$. If $\alpha$ is the second
fundamental form and $\xi$ is a unit normal vector field on the
hypersurface, we define the scalar valued second fundamental form
$h$ to be the (0,2)-tensor field satisfying $\alpha(X,Y)=h(X,Y)\,\xi$
for all $p\in M^n$ and $X,Y\in T_pM^n$. The covariant derivative of
$h$ is defined by
$$(\nabla h)(X,Y,Z) = X[h(Y,Z)] - h(\nabla_XY,Z) - h(Y,\nabla_XZ),$$
for all $X,Y,Z\in T_pM^n$ with $\nabla$ the Levi Civita connection
of $M^n$. If $R$ is the curvature tensor of $M^n$, we also define
$$(R\cdot h)(X,Y,Z_1,Z_2)=-h(R(X,Y)Z_1,Z_2)-h(Z_1,R(X,Y)Z_2),$$
for all $X,Y,Z_1,Z_2\in T_pM^n$. If $\nabla h=0$, we say that $M^n$
has parallel second fundamental form or, for short, that it is a
\emph{parallel} hypersurface. If $R\cdot h=0$, we say that $M^n$ is
a \emph{semi-parallel} hypersurface.\\
For any integer $k\geq2$, we define recursively
\begin{multline*}(\nabla^kh)(X_1,\ldots,X_k,Y,Z) = X_1[(\nabla^{k-1}h)(X_2,\ldots,X_k,Y,Z)]\\
-(\nabla^{k-1}h)(\nabla_{X_1}X_2,\ldots,X_k,Y,Z) - \ldots
-(\nabla^{k-1}h)(X_2,\ldots,X_k,Y,\nabla_{X_1}Z)\end{multline*} for
$X_1,\ldots, X_k,Y,Z\in T_pM^n$. We call a hypersurface satisfying
$\nabla^kh=0$ a \emph{$k$-parallel} hypersurface or a \emph{higher
order parallel} hypersurface. With slight modifications, all these
notions can also be defined for
submanifolds with arbitrary codimension.\\
The classification of parallel hypersurfaces in real space forms is
proven in \cite{14}, whereas for the classification of $k$-parallel
hypersurfaces in real space forms we refer to \cite{8}, \cite{9} and
\cite{10}:
\begin{theorem} \cite{14} A parallel hypersurface in a simply connected, complete real
space form of constant sectional curvature $c$ is one of the
following. In $\mathbb{E}^{n+1}$: an open part of a product immersion
$\mathbb{E}^k\times\mathbb{S}^{n-k}$, $k\in\{0,\ldots, n\}$. In $\mathbb{S}^{n+1}(c)$: an
open part of a product immersion $\mathbb{S}^k\times\mathbb{S}^{n-k}$,
$k\in\{0,\ldots, n\}$. In $\mathbb{H}^{n+1}(c)$: an open part of a product
immersion $\mathbb{H}^k\times\mathbb{S}^{n-k}$, $k\in\{0,\ldots, n\}$ or of a
horosphere.
\end{theorem}
\begin{theorem} \cite{8}, \cite{9}, \cite{10}
A $k$-parallel hypersurface in a simply connected, complete real
space form of constant sectional curvature $c$ is one of the
following. In $\mathbb{E}^{n+1}$: an open part of a parallel hypersurface or
of a cylinder on a plane curve, whose curvature is a polynomial
function of degree at most $k-1$ of the arc length. In
$\mathbb{S}^{n+1}(c)$: an open part of a parallel hypersurface or, for
$n=2$, of the inverse image under the Hopf-fibration
$\mathbb{S}^3(c)\rightarrow\mathbb{S}^2(4c)$ of a spherical curve in $\mathbb{S}^2(4c)$
whose geodesic curvature is a polynomial of degree at most $k-1$ of
the arc length. In $\mathbb{H}^{n+1}(c)$: an open part of a parallel
hypersurface.
\end{theorem}
In \cite{3} the following classification for parallel surfaces in
BCV spaces is proven:
\begin{theorem} \label{theo4} \cite{3}
A parallel surface in $\widetilde{M}^3(\kappa,\tau)$, with $\kappa\neq
4\tau^2$, is an open part of a Hopf cylinder over a Riemannian
circle in $\widetilde{M}^2(\kappa)$ or of a totally geodesic leaf of the Hopf
fibration, the latter case only occurring for $\tau=0$.
\end{theorem}
The technique used in the proof of this theorem is based on the fact
that for parallel surfaces the left-hand side of Codazzi's equation
(\ref{Codazzivgl}) is zero. For $k$-parallel
surfaces another approach is needed.\\
We refer to \cite{11} for a proof of the following lemma:
\begin{lemma}\label{lemma1} \cite{11}
A $k$-parallel surface immersed in a three-dimensional Riemannian
manifold is semi-parallel, or equivalently, it is flat or totally
umbilical.
\end{lemma}
This means that in our search for $k$-parallel surfaces in BCV
spaces, we can focus on totally umbilical surfaces (meaning that at
every point the shape operator is a scalar multiple of the identity)
and flat surfaces (meaning that the Gaussian curvature at every
point is zero). In the next section we will give a complete
classification of totally umbilical surfaces in BCV spaces and in
the last section we will classify all flat, $k$-parallel surfaces in
BCV spaces.
\section{Totally umbilical surfaces}
In \cite{17}, it was proven that there are no totally umbilical
surfaces in the Heisenberg group $\mathrm{Nil}_3$. The following lemma
generalizes this result.
\begin{lemma}\label{lemma2}
Let $M^2\rightarrow\widetilde{M}^3(\kappa,\tau)$, with $\kappa\neq 4\tau^2$, be a totally umbilical surface
with shape operator $S=\lambda\,\mathrm{id}$. Then $\tau=0$ and the
following equations hold:
\begin{equation}\label{5.1}
T[\lambda]=-\kappa\cos\theta\sin^2\theta, \quad (JT)[\lambda]=0,
\quad T[\theta]=\lambda\sin\theta, \quad (JT)[\theta]=0,
\end{equation}
\begin{equation}\label{5.2}
\nabla_T T=\lambda\cos\theta\,T, \quad
\nabla_{JT}T=\lambda\cos\theta\,JT, \quad \nabla_T
JT=\lambda\cos\theta\,JT, \quad \nabla_{JT}JT=-\lambda\cos\theta\,T.
\end{equation}
\end{lemma}
\noindent\emph{Proof.} First assume that $\theta$ is identically
zero. Then with the notations of section 2 we have
$TM^2=\mathrm{span}\{e_1,e_2\}$. But according to Frobenius' theorem
and (\ref{Lie-haken}), this distribution is only integrable if
$\tau=0$. Now $T=JT=0$ and, since $Se_1=-\widetilde{\nabla}_{e_1}e_3=0$ and
$Se_2=-\widetilde{\nabla}_{e_2}e_3=0$, also $\lambda=0$. All equations stated in
the lemma are satisfied.
We now work on an open subset of $M^2$ where $\theta$ is nowhere
zero. From Codazzi's equation (\ref{Codazzivgl}) for $X=T$ and
$Y=JT$, we get
\begin{equation}\label{5.3}
T[\lambda]=-(\kappa-4\tau^2)\cos\theta\sin^2\theta, \quad
JT[\lambda]=0.
\end{equation}
The structure equations (\ref{structuurvgl1}) and
(\ref{structuurvgl2}) yield
\begin{equation}\label{5.4}
\nabla_T T=\cos\theta(\lambda T - \tau JT), \quad
\nabla_{JT}T=\cos\theta(\tau T + \lambda JT), \quad
T[\theta]=\lambda\sin\theta, \quad (JT)[\theta]=\tau\sin\theta.
\end{equation}
Using orthonormal expansion and $\langle T,JT\rangle=0$, $\langle
T,T\rangle = \langle JT,JT\rangle = \sin^2\theta$, we get
\begin{equation}\label{5.5}
\nabla_T JT = \cos\theta(\tau T + \lambda JT), \quad
\nabla_{JT}JT=\cos\theta(-\lambda T + \tau JT).
\end{equation}
Remark that $[T,JT]=\nabla_T JT - \nabla_{JT} T = 0$ and hence
$$0=[T,JT][\lambda]=T[(JT)[\lambda]]-(JT)[T[\lambda]]=(\kappa-4\tau^2)\tau\sin^2\theta(2\cos^2\theta-\sin^2\theta).$$
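The middle step can be expanded using $(JT)[\lambda]=0$, $T[\lambda]=-(\kappa-4\tau^2)\cos\theta\sin^2\theta$ and $(JT)[\theta]=\tau\sin\theta$:

```latex
\begin{align*}
-(JT)\bigl[T[\lambda]\bigr]
&=(\kappa-4\tau^2)\,(JT)\bigl[\cos\theta\sin^2\theta\bigr]\\
&=(\kappa-4\tau^2)\,(JT)[\theta]\,\sin\theta\,(2\cos^2\theta-\sin^2\theta)\\
&=(\kappa-4\tau^2)\,\tau\sin^2\theta\,(2\cos^2\theta-\sin^2\theta).
\end{align*}
```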
Since we assume $\kappa-4\tau^2\neq 0$ and $\sin\theta\neq 0$,
either $\tau=0$ or $2\cos^2\theta-\sin^2\theta=0$. But the latter
implies that $\theta$ is a constant and then from the last equation
of (\ref{5.4}) we also get $\tau=0$. The equations stated in the
lemma follow easily from (\ref{5.3}), (\ref{5.4}) and (\ref{5.5}).
By a continuity argument, these will hold on the whole of $M^2$.
$\square$
The following is an immediate corollary of Lemma \ref{lemma2}.
\begin{theorem} \label{theo} The only BCV spaces admitting totally umbilical
surfaces are the Riemannian products
$(\mathbb{S}^2(\kappa)\setminus\{\infty\})\times\mathbb{R}$ and
$\mathbb{H}^2(\kappa)\times\mathbb{R}$.
\end{theorem}
It is now sufficient to study totally umbilical surfaces in
$\mathbb{S}^2(\kappa)\times\mathbb{R}$ and $\mathbb{H}^2(\kappa)\times\mathbb{R}$. To do this, we
consider these spaces as hypersurfaces of the four-dimensional
Euclidean space $\mathbb{E}^4$ and the four-dimensional Lorentzian space
$\mathbb{L}^4=(\mathbb{R}^4,-dx_1^2+dx_2^2+dx_3^2+dx_4^2)$ respectively:
$$\mathbb{S}^2(\kappa)\times\mathbb{R}=\left\{(x_1,x_2,x_3,x_4)\in\mathbb{E}^4\ \left|\ x_1^2+x_2^2+x_3^2=\frac{1}{\kappa}\right.\right\}$$
and
$$\mathbb{H}^2(\kappa)\times\mathbb{R}=\left\{(x_1,x_2,x_3,x_4)\in\mathbb{L}^4\ \left|\ -x_1^2+x_2^2+x_3^2=\frac{1}{\kappa},\ x_1>0\right.\right\}.$$
Remark that in both cases the vector field $\widetilde{\xi}$,
defined by
$\widetilde{\xi}(x_1,x_2,x_3,x_4)=\sqrt{|\kappa|}(x_1,x_2,x_3,0)$,
is orthogonal to the hypersurface and that
$\langle\widetilde{\xi},\widetilde{\xi}\rangle=1$ in the first case
and $\langle\widetilde{\xi},\widetilde{\xi}\rangle=-1$ in the second
case.\\
First, we remark that the only totally umbilical surfaces that are
also higher order parallel are trivial:
\begin{proposition} \label{prop1}
A $k$-parallel, totally umbilical surface in $\mathbb{S}^2(\kappa)\times\mathbb{R}$,
respectively $\mathbb{H}^2(\kappa)\times\mathbb{R}$, is totally geodesic and an open
part of $\mathbb{S}^2(\kappa)\times\{t_0\}$ or $\mathbb{S}^1(\kappa)\times\mathbb{R}$,
respectively $\mathbb{H}^2(\kappa)\times\{t_0\}$ or $\mathbb{H}^1(\kappa)\times\mathbb{R}$.
Moreover, these surfaces are the only totally geodesic ones.
\end{proposition}
\noindent\emph{Proof.} If $\theta$ is identically zero, the surface
is an open part of $\mathbb{S}^2(\kappa)\times\{t_0\}$ or
$\mathbb{H}^2(\kappa)\times\{t_0\}$. Hence we may assume that $\theta\neq
0$. Putting $U=\frac{T}{\|T\|}=\frac{T}{\sin\theta}$ and $V=JT$, we
have $[U,V]=0$, so we can take coordinates $(u,v)$ with
$U=\frac{\partial}{\partial u}$ and $V=\frac{\partial}{\partial v}$.
Remark that $\lambda$ and $\theta$ only depend on $u$ and
\begin{equation} \label{5.6}
\lambda'=-\kappa\cos\theta\sin\theta=-\frac{\kappa}{2}\sin(2\theta),
\quad \theta'=\lambda.
\end{equation}
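Equation \eqref{5.6} follows from \eqref{5.1} with $\tau=0$, since $U=T/\sin\theta$ simply rescales the $T$-derivatives:

```latex
\begin{equation*}
\lambda'=U[\lambda]=\frac{T[\lambda]}{\sin\theta}
=-\kappa\cos\theta\sin\theta,
\qquad
\theta'=U[\theta]=\frac{T[\theta]}{\sin\theta}=\lambda.
\end{equation*}
```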
Since
$$\nabla_U U=\frac{1}{\sin\theta}\left(T\left[\frac{1}{\sin\theta}\right]T+\frac{1}{\sin\theta}\nabla_T T\right)=0,$$
we have
$$0=(\nabla^kh)(U,U,\ldots,U,U)=U[U[\ldots U[h(U,U)]\ldots]]=\lambda^{(k)}(u),$$
which implies that $\lambda$ is a polynomial of degree at most $k-1$
in $u$. Now from (\ref{5.6}), we see that both $\sin(2\theta)$ and
$\theta$ are polynomials in $u$. The only possibility is that
$\theta$ is a constant and thus, again from (\ref{5.6}), $\lambda=0$
and $\cos\theta=0$. So $\theta=\frac{\pi}{2}$ and the surface is an
open part of $\gamma\times\mathbb{R}$, with $\gamma$ a curve in
$\mathbb{S}^2(\kappa)$ or $\mathbb{H}^2(\kappa)$.
It remains to prove that $\gamma$ is a geodesic. We continue the
proof for $\kappa>0$, but the other case is completely similar.
Assume that $\gamma$ is parametrized by arc length and denote the
immersion by
$$F:M^2\rightarrow\mathbb{S}^2(\kappa)\times\mathbb{R}\subset\mathbb{E}^4:(s,t)\mapsto(\gamma(s),t).$$
Denoting by `` $\cdot$ '' the inner product on $\mathbb{E}^3$ and by ``
$\times$ '' the cross product, we have that $F_s=(\gamma',0)$ and
$F_t=(0,1)$ span the tangent space, that
$\widetilde{\xi}=\sqrt{\kappa}(\gamma,0)$ is a unit vector
orthogonal to the surface and orthogonal to $\mathbb{S}^2(\kappa)\times\mathbb{R}$
and $\xi=\sqrt{\kappa}(\gamma\times\gamma',0)$ is a unit vector
orthogonal to $M^2$, tangent to $\mathbb{S}^2(\kappa)\times\mathbb{R}$. Moreover
\begin{eqnarray*}
\left\langle S\frac{\partial}{\partial s},\frac{\partial}{\partial s}\right\rangle &=& \langle F_{ss},\xi\rangle = \sqrt{\kappa}\,(\gamma\times\gamma')\cdot\gamma'',\\
\left\langle S\frac{\partial}{\partial s},\frac{\partial}{\partial t}\right\rangle &=& \langle F_{st},\xi\rangle = 0,\\
\left\langle S\frac{\partial}{\partial t},\frac{\partial}{\partial
t}\right\rangle &=& \langle F_{tt},\xi\rangle = 0,
\end{eqnarray*}
and thus the surface is totally umbilical (and automatically totally
geodesic) if and only if $(\gamma\times\gamma')\cdot\gamma''=0$, or
equivalently, if and only if $\gamma''$ is proportional to $\gamma$.
This means that $\gamma''$ has no component tangent to
$\mathbb{S}^2(\kappa)$ and hence has to be a geodesic, i.e. a great circle.
The fact that these surfaces are the only totally geodesic ones
follows immediately from the first equation of (\ref{5.1}).
$\square$
Before proceeding with the full classification, we develop some
machinery to study surfaces in $\mathbb{S}^2(\kappa)\times\mathbb{R}$ and
$\mathbb{H}^2(\kappa)\times\mathbb{R}$.
Consider an isometric immersion
$F:M^2\rightarrow\mathbb{S}^2(\kappa)\times\mathbb{R}$. Denoting by $\xi$ a unit
vector tangent to $\mathbb{S}^2(\kappa)\times\mathbb{R}$ and normal to $M^2$, one
easily sees that the fourth components of $F_{\ast}T$, $F_{\ast}JT$
and $\xi$ in $\mathbb{E}^4$ satisfy
\begin{equation}\label{5.6a}
(F_{\ast}T)_4=\sin^2\theta, \quad (F_{\ast}JT)_4=0, \quad
\xi_4=\cos\theta.
\end{equation}
Take $\widetilde{\xi}$ as above and let $X$ be a tangent vector to
$M^2$. Then $\langle
\nabla^{\perp}_X\widetilde{\xi},\xi\rangle=\langle
D_X\widetilde{\xi},\xi\rangle=X_1\xi_1+X_2\xi_2+X_3\xi_3=-X_4\xi_4=-\langle
X,T\rangle\cos\theta,$ where $D$ denotes the Euclidean connection.
Thus, the normal connection of $M^2$ as a submanifold of $\mathbb{E}^4$ is
given by
\begin{equation*}
\nabla^{\perp}_X\widetilde{\xi} = -\langle
X,T\rangle\cos\theta\,\xi, \quad \nabla^{\perp}_X\xi = \langle
X,T\rangle\cos\theta\,\widetilde{\xi}.
\end{equation*}
Using Weingarten's formula, we see that the shape operator
associated to $\widetilde{\xi}$, which we denote by $\widetilde{S}$,
must satisfy
\begin{eqnarray*}
F_{\ast}(\widetilde{S}T) &=& (-(F_{\ast}T)_1,-(F_{\ast}T)_2,-(F_{\ast}T)_3,0)-\cos\theta\sin^2\theta\,(\xi_1,\xi_2,\xi_3,\cos\theta),\\
F_{\ast}(\widetilde{S}(JT)) &=&
(-(F_{\ast}JT)_1,-(F_{\ast}JT)_2,-(F_{\ast}JT)_3,0)=-F_{\ast}JT.
\end{eqnarray*}
The second equation implies that the matrix of $\widetilde{S}$ with
respect to the basis $\{T,JT\}$ takes the form
$$\widetilde{S}=\left(\begin{array}{cc} a&0 \\ 0&-1 \end{array}\right)$$
and looking at the fourth component of the first equation we get
$a=-\cos^2\theta$. Hence
\begin{equation} \label{5.7}
\widetilde{S}=\left(\begin{array}{cc} -\cos^2\theta&0 \\ 0&-1
\end{array}\right).
\end{equation}
Remark that the other components of the first equation give
\begin{equation} \label{5.8}
(F_{\ast}T)_j=-\cos\theta\,\xi_j,\qquad j=1,2,3.
\end{equation}
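The entry $a=-\cos^2\theta$ can be read off by comparing fourth components in the first Weingarten relation above: since $\widetilde{S}T=aT$, we have $(F_{\ast}(\widetilde{S}T))_4=a\,(F_{\ast}T)_4=a\sin^2\theta$, while the right hand side gives

```latex
\begin{equation*}
a\sin^2\theta
=0-\cos\theta\sin^2\theta\cdot\cos\theta
=-\cos^2\theta\sin^2\theta,
\end{equation*}
```

so $a=-\cos^2\theta$ wherever $\sin\theta\neq 0$; the remaining three components then yield \eqref{5.8} directly.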
We can do exactly the same for $\mathbb{H}^2(\kappa)\times\mathbb{R}$. The equations
(\ref{5.6a}) remain the same. The normal connection changes to
\begin{equation*}
\nabla^{\perp}_X\widetilde{\xi}=-\langle X,T\rangle\cos\theta\,\xi,
\quad \nabla^{\perp}_X\xi = -\langle
X,T\rangle\cos\theta\,\widetilde{\xi},
\end{equation*}
but the shape operator associated to $\widetilde{\xi}$, (\ref{5.7}),
and formula (\ref{5.8}) remain the same.\\
We will now classify totally umbilical surfaces in $\mathbb{S}^2(1)\times\mathbb{R}$
and $\mathbb{H}^2(-1)\times\mathbb{R}$ and for arbitrary $\kappa$ the totally
umbilical surfaces will then be homothetic to these.
\begin{theorem} \label{T:3.3.2}
Let $F:M^2\rightarrow\mathbb{S}^2(1)\times\mathbb{R}\subset\mathbb{E}^4$ be a totally
umbilical surface with shape operator $S=\lambda\,\mathrm{id}$ and
angle function $\theta$, which is not totally geodesic. Then one can
choose local coordinates $(u,v)$ on $M^2$ such that $\lambda$ and
$\theta$ only depend on $u$ and
\begin{equation}\label{3.3.13}\theta(u)=\arctan\left(\frac{2ce^{\pm cu}}{1-c^2+e^{\pm 2cu}}\right), \qquad \lambda(u)=\frac{\theta'(u)}{\sin\theta(u)},\end{equation}
for some real constant $c>0$. Moreover, the immersion is, up to an
isometry, locally given by
\begin{equation}\label{3.3.14}
F(u,v)=\frac{1}{c}\left(\lambda,\,\sin\theta\,\cos
v,\,\sin\theta\,\sin v,\,c\int\sin^2\theta\,du\right).
\end{equation}
\end{theorem}
\noindent\emph{Proof.} It follows from \eqref{5.2} that $[T,JT]=0$.
Hence, we can take local coordinates $(u,v)$ on $M^2$, such that
$T=\frac{\partial}{\partial u}$, $JT=\frac{\partial}{\partial v}$.
From \eqref{5.1} we see that $\lambda$ and $\theta$ only depend on $u$
and that they satisfy
\begin{equation}\label{3.3.15}
\lambda^2+\sin^2\theta=c^2, \qquad \theta'=\lambda\sin\theta,
\end{equation}
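The first integral in \eqref{3.3.15} is a direct consequence of \eqref{5.1} with $\kappa=1$ and $T=\frac{\partial}{\partial u}$:

```latex
\begin{equation*}
\frac{d}{du}\bigl(\lambda^2+\sin^2\theta\bigr)
=2\lambda\,T[\lambda]+2\sin\theta\cos\theta\,T[\theta]
=-2\lambda\cos\theta\sin^2\theta+2\lambda\sin^2\theta\cos\theta=0.
\end{equation*}
```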
for some strictly positive real constant $c$.
From the formula of Gauss, \eqref{5.1}, \eqref{5.7} and \eqref{5.8}, we obtain
for $j=1,2,3$
\begin{eqnarray}
(F_j)_{uu} &=& \lambda\cos\theta(F_j)_u-\lambda\frac{\sin^2\theta}{\cos\theta}(F_j)_u-\cos^2\theta\sin^2\theta\,F_j, \label{3.3.16}\\
(F_j)_{uv} &=& \lambda\cos\theta(F_j)_v, \label{3.3.17}\\
(F_j)_{vv} &=& -\frac{\lambda}{\cos\theta}(F_j)_u-\sin^2\theta\,F_j.
\label{3.3.18}
\end{eqnarray}
The equations for the fourth component are trivially satisfied. The
solution of (\ref{3.3.17}) is
\begin{equation}\label{3.3.19}
F_j=(A_j(u)+B_j(v))\exp\left(\int\lambda\cos\theta\,du\right),
\end{equation}
where $A_j$ and $B_j$ are real-valued functions in one variable.
Substituting this in (\ref{3.3.16}) yields
\begin{equation}\label{3.3.20}
A_j=a_j\int\exp\left(-\int\frac{\lambda}{\cos\theta}\,du\right)\,du
+ \alpha_j
\end{equation}
with $a_j$, $\alpha_j\in\mathbb{R}$, and substituting it in (\ref{3.3.18})
gives $B_j''+(\lambda^2+\sin^2\theta)B_j=
A_j''-(\lambda^2+\sin^2\theta)A_j$, or equivalently $B_j''+c^2B_j=
A_j''-c^2A_j$. It is easy to check that the right hand side of this
equation is constant and thus the solution for $B_j$ is
\begin{equation}\label{3.3.21}
B_j=b_j\cos(cv)+\beta_j\sin(cv)+\frac{A_j''}{c^2}-A_j,
\end{equation}
with $b_j$, $\beta_j\in\mathbb{R}$. By substituting \eqref{3.3.20} and
\eqref{3.3.21} in \eqref{3.3.19}, we conclude that the functions $F_j$ take
the form
\begin{multline} \label{3.3.22}
F_j=\left(-a_j\frac{\lambda}{c^2\cos\theta}\exp\left(-\int\frac{\lambda}{\cos\theta}\,du
\right)+b_j\cos(cv)\right.\\
\left.+\beta_j\sin(cv)\right)\,
\exp\left(\int\lambda\cos\theta\,du\right), \quad j=1,2,3
\end{multline}
and from (\ref{5.6a}):
\begin{equation} \label{3.3.23}
F_4=\int\sin^2\theta\,du.
\end{equation}
There are some conditions on $F$ which we have neglected so far,
namely $F\in\mathbb{S}^2(1)\times\mathbb{R}$, $\langle \xi,F_u\rangle=\langle
\xi,F_v \rangle=0$, $\langle \widetilde{\xi},F_u \rangle=\langle
\widetilde{\xi},F_v \rangle=0$, $\langle F_u,F_u \rangle = \langle
F_v,F_v \rangle = \sin^2\theta$, $\langle \xi,\xi \rangle = \langle
\widetilde{\xi},\widetilde{\xi} \rangle =1$ and $\langle
\xi,\widetilde{\xi} \rangle = \langle F_u,F_v \rangle = 0$. These
are equivalent to
\begin{equation} \label{3.3.24}
\sum_{j=1}^3 F_j^2 = 1, \quad \sum_{j=1}^3 (F_j)_u^2 =
\cos^2\theta\sin^2\theta, \quad \sum_{j=1}^3 (F_j)_v^2 =
\sin^2\theta, \quad \sum_{j=1}^3 (F_j)_u(F_j)_v = 0.
\end{equation}
Now looking at $a=(a_1,a_2,a_3)$,
$b=(b_1,b_2,b_3)$ and $\beta=(\beta_1,\beta_2,\beta_3)$ as vectors
in $\mathbb{R}^3$ with the Euclidean inner product `` $\cdot$ '', the
conditions (\ref{3.3.24}) are equivalent to
\begin{equation*}
a\cdot b = a\cdot\beta = b\cdot\beta=0,
\end{equation*}
\begin{equation*}
\|a\|^2=a\cdot a=
c^2\cos^2\theta\,\exp\left(2\int\frac{\lambda\sin^2\theta}{\cos\theta}\,du\right),
\end{equation*}
\begin{equation*}
\|b\|^2=b\cdot b=\beta\cdot\beta
=\frac{\sin^2\theta}{c^2}\,\exp\left(-2\int\lambda\cos\theta\,du\right).
\end{equation*}
Remark that the right hand sides of these equations are constant.
They imply that after a suitable isometry of $\mathbb{S}^2(1)\times\mathbb{R}$ we
may assume that
\begin{equation*}
a =
\left(-c\cos\theta\exp\left(\int\frac{\lambda\sin^2\theta}{\cos\theta}\,du\right),\,0,\,0\right),
\end{equation*}
\begin{equation*}
b =
\left(0,\,\frac{\sin\theta}{c}\exp\left(-\int\lambda\cos\theta\,du\right),\,0\right),
\end{equation*}
\begin{equation*}
\beta
=\left(0,\,0,\,\frac{\sin\theta}{c}\exp\left(-\int\lambda\cos\theta\,du\right)\right).
\end{equation*}
Now the reparametrization $cv\mapsto v$ gives the result \eqref{3.3.14}.
To conclude, we solve the equations \eqref{3.3.15} explicitly. Putting
$\theta=\arctan(f)$, we obtain
$$\left(\frac{\theta'}{\sin\theta}\right)^2+\sin^2\theta=c^2
\Leftrightarrow
\left(\frac{f'}{f\sqrt{1+f^2}}\right)^2+\frac{f^2}{1+f^2}=c^2
\Leftrightarrow \frac{(f')^2}{f^2(c^2+(c^2-1)f^2)}=1.$$ From the
last equation we see that $c^2+(c^2-1)f^2$ has to be positive and
hence we can proceed by integration:
\begin{eqnarray*}
\frac{f'}{f\sqrt{c^2+(c^2-1)f^2}}=\pm 1 &\Leftrightarrow& \ln\left(\frac{c+\sqrt{c^2+(c^2-1)f^2}}{f}\right)=\pm cu+d\\
&\Leftrightarrow& f=\frac{2c\,e^{\pm cu+d}}{1-c^2+e^{2(\pm cu+d)}},
\end{eqnarray*}
for some $d\in\mathbb{R}$. After a change of the $u$-coordinate, which does
not change $\frac{\partial}{\partial u}$, we obtain the result
\eqref{3.3.13}.
$\square$
\begin{remark}\label{R:3.3.1} We can write \eqref{3.3.14} in a more explicit form. After the reparametrization $e^{\pm cu}\mapsto u$ and,
if necessary, an isometry switching the sign of some of the
components, (\ref{3.3.14}) is given by
\begin{equation*}
F(u,v)=\left(\frac{2u\cos v}{p(u)q(u)},\ \frac{2u\sin v}{p(u)q(u)},\
\frac{1-c^2-u^2}{p(u)q(u)},\
\ln\left(\frac{p(u)}{q(u)}\right)\right),
\end{equation*}
where $p(u)=\sqrt{u^2+(c-1)^2}$ and $q(u)=\sqrt{u^2+(c+1)^2}$.
\end{remark}
\begin{theorem}\label{T:3.3.3}
Let $F:M^2\rightarrow\mathbb{H}^2(-1)\times\mathbb{R}\subset\mathbb{L}^4$ be a totally
umbilical surface with shape operator $S=\lambda\,\mathrm{id}$ and
angle function $\theta$, which is not totally geodesic. Then one can
choose local coordinates $(u,v)$ on $M^2$ such that $\lambda$ and
$\theta$ only depend on $u$ and we are in one of the following three
cases:
\begin{itemize}
\item[$\mathrm{(i)}$] $\theta(u)$ and $\lambda(u)$ are given by
\begin{equation}\label{3.3.26}\theta(u)=\arctan\left(\frac{2ce^{\pm cu}}{1+c^2-e^{\pm 2cu}}\right),
\qquad \lambda(u)=\frac{\theta'(u)}{\sin\theta(u)},\end{equation}
for some real constant $c>0$, and the immersion is, up to an
isometry, locally given by
\begin{equation} \label{3.3.27}
F(u,v)=\frac{1}{c}\left(\lambda,\,\sin\theta\,\cos
v,\,\sin\theta\,\sin v,\,c\int\sin^2\theta\,du\right),
\end{equation}
\item[$\mathrm{(ii)}$] $\theta(u)$ and $\lambda(u)$ are given by
\begin{equation}\label{3.3.28}\theta(u)=\mathrm{arccot}\,(\pm u),
\qquad \lambda(u)=\frac{\mp 1}{\sqrt{1+u^2}},\end{equation} and the
immersion is, up to an isometry, locally given by
\begin{equation} \label{3.3.29}
F(u,v)=\frac{1}{\sqrt{1+u^2}}\left(\frac{u^2+v^2}{2}+1,\,v,\,\frac{u^2+v^2}{2},\,\sqrt{1+u^2}\arctan
u\right),
\end{equation}
\item[$\mathrm{(iii)}$] $\theta(u)$ and $\lambda(u)$ are given by
\begin{equation}\label{3.3.30}\theta(u)=\arctan\left(\frac{\tan c}{\sin(\pm u\sin c)}\right),
\qquad \lambda(u)=\frac{\theta'(u)}{\sin\theta(u)},\end{equation}
for some real constant $c\neq 0$, and the immersion is, up to an
isometry, locally given by
\begin{equation}\label{3.3.31}
F(u,v)=\frac{1}{\sin c}\left(\sin\theta\,\cosh v,\,\sin\theta\,\sinh
v,\,\lambda,\,\sin c\int\sin^2\theta\,du\right).
\end{equation}
\end{itemize}
\end{theorem}
\noindent\emph{Proof.} We use again the coordinates $(u,v)$ such
that $T=\frac{\partial}{\partial u}$ and
$JT=\frac{\partial}{\partial v}$. From \eqref{5.1}, we obtain that
$\lambda$ and $\theta$ only depend on $u$ and that they satisfy
$\lambda^2-\sin^2\theta=C$, $\theta'=\lambda\sin\theta$, for some
real constant $C>-1$. The formula of Gauss yields for $j=1,2,3$:
\begin{eqnarray*}
(F_j)_{uu} &=& \lambda\cos\theta(F_j)_u-\lambda\frac{\sin^2\theta}{\cos\theta}(F_j)_u+\cos^2\theta\sin^2\theta F_j,\\
(F_j)_{uv} &=& \lambda\cos\theta(F_j)_v,\\
(F_j)_{vv} &=& -\frac{\lambda}{\cos\theta}(F_j)_u+\sin^2\theta F_j,
\end{eqnarray*}
so that $F_j$ again takes the form (\ref{3.3.19}), with $A_j$
again equal to (\ref{3.3.20}). The differential equation for $B_j$
becomes $B_j''+(\lambda^2-\sin^2\theta)B_j=
A_j''-(\lambda^2-\sin^2\theta)A_j$, or equivalently
$B_j''+CB_j=A_j''-CA_j$. The right hand side of this equation is
again constant. We now consider three cases.
\vskip.05in\emph{Case} (A): $C>0$. This case corresponds to the
first case of the theorem. We can put $C=c^2$ for some strictly
positive real constant $c$. The rest of the proof is similar to the
one above and we will therefore omit it.
\vskip.05in\emph{Case} (B): $C=0$. The solution of the equations
$\lambda^2=\sin^2\theta$ and $\theta'=\lambda\sin\theta$ is given by
\eqref{3.3.28}. Substituting this in \eqref{3.3.20} yields that $A_j$ takes
the form $A_j(u)=p_ju^2+q_j$ for some $p_j,q_j\in\mathbb{R}$. The equation
for $B_j$ becomes $B_j''=A_j''$. From this equation and \eqref{3.3.19},
we obtain
$$F_j=\frac{1}{\sqrt{1+u^2}}(a_j(u^2+v^2)+b_jv+c_j), \qquad
j=1,2,3,$$ where $a_j,b_j,c_j\in\mathbb{R}$. Moreover, from \eqref{5.6a} and
\eqref{3.3.28}, we have
$$F_4=\int\sin^2\theta\,du=\arctan u.$$
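Explicitly, since $\theta=\mathrm{arccot}\,(\pm u)$ by \eqref{3.3.28}, the integrand simplifies as

```latex
\sin^2\theta=\frac{1}{1+\cot^2\theta}=\frac{1}{1+u^2},
\qquad\text{so}\qquad
\int\sin^2\theta\,du=\int\frac{du}{1+u^2}=\arctan u .
```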
The conditions analogous to (\ref{3.3.24}) now read
\begin{equation}\label{3.3.32}
\begin{aligned}
& -F_1^2+F_2^2+F_3^2=-1, \\
& -(F_1)_u^2+(F_2)_u^2+(F_3)_u^2=\cos^2\theta\sin^2\theta,\\
& -(F_1)_v^2+(F_2)_v^2+(F_3)_v^2=\sin^2\theta, \\
& -(F_1)_u(F_1)_v+(F_2)_u(F_2)_v+(F_3)_u(F_3)_v=0,
\end{aligned}
\end{equation}
and looking at $a$, $b$ and $c$ as vectors in $\mathbb{R}^3$, but now
equipped with the standard Lorentzian inner product `` $\cdot$ '',
these are equivalent to $a\cdot a=a\cdot b=b\cdot c=0$, $a\cdot
c=-\frac{1}{2}$, $b\cdot b=1$, $c\cdot c=-1$. After a suitable
isometry of $\mathbb{H}^2(-1)\times\mathbb{R}$, we may assume that
$a=(\frac{1}{2},0,\frac{1}{2})$, $b=(0,1,0)$ and $c=(1,0,0)$. This
gives the result \eqref{3.3.29}.
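Indeed, with the Lorentzian product $x\cdot y=-x_1y_1+x_2y_2+x_3y_3$, these choices satisfy all the required relations:

```latex
a\cdot a=-\tfrac14+0+\tfrac14=0,\qquad
a\cdot b=0,\qquad b\cdot c=0,\qquad
a\cdot c=-\tfrac12\cdot 1+0+\tfrac12\cdot 0=-\tfrac12,\qquad
b\cdot b=1,\qquad c\cdot c=-1.
```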
\vskip.05in\emph{Case} (C): $C<0$. Clearly, we have $C>-1$ and hence
we may put $C=-\sin^2c$, for some real number $c$. The equation for
$B_j$ becomes $B_j''-\sin^2c\,B_j=A_j''+\sin^2c\,A_j$, with solution
\begin{equation}\label{3.3.33}
B_j=b_j\cosh(v\sin c)+\beta_j\sinh(v\sin
c)-\frac{A_j''}{\sin^2c}-A_j.
\end{equation}
Hence $F$ is given by
\begin{multline} \label{3.3.34}
F_j=\left(-a_j\frac{\lambda}{(\sin^2c)\cos\theta}\exp\left(-\int\frac{\lambda}{\cos\theta}\,du
\right)+b_j\cosh(v\sin c)\right.\\
\left.+\beta_j\sinh(v\sin c)\right)\,
\exp\left(\int\lambda\cos\theta\,du\right), \quad j=1,2,3
\end{multline}
and $F_4$ takes the form \eqref{3.3.23}.
Looking at $a$, $b$ and $\beta$ as vectors in $\mathbb{R}^3$ with the
standard Lorentzian inner product, the conditions \eqref{3.3.32} yield
\begin{equation*}
a\cdot b = a\cdot\beta = b\cdot\beta=0,
\end{equation*}
\begin{equation*}
a\cdot a
=\sin^2c\,\cos^2\theta\,\exp\left(2\int\frac{\lambda\sin^2\theta}{\cos\theta}\,du\right),
\end{equation*}
\begin{equation*}
b\cdot b=-\beta\cdot\beta
=-\frac{\sin^2\theta}{\sin^2c}\,\exp\left(-2\int\lambda\cos\theta\,du\right).
\end{equation*}
Remark that the right hand sides are again constant and that $b$ is
a timelike vector, whereas $a$ and $\beta$ are spacelike. A suitable
isometry of $\mathbb{H}^2(-1)\times\mathbb{R}$, followed by the reparametrization
$v\sin c\mapsto v$, transforms the immersion given by (\ref{3.3.34})
and (\ref{3.3.23}) into \eqref{3.3.31}.
Finally, we solve the equations $\lambda^2-\sin^2\theta=-\sin^2c$,
$\theta'=\lambda\sin\theta$ explicitly. Putting $\theta=\arctan(f)$,
we obtain
$$\left(\frac{\theta'}{\sin\theta}\right)^2-\sin^2\theta=-\sin^2c \Leftrightarrow \frac{(f')^2}{f^2(f^2\cos^2c-\sin^2c)}=1.$$
We see that $f^2\cos^2c-\sin^2c>0$, and by integration, we obtain
$$\arctan\left(\frac{\sin
c}{\sqrt{f^2\cos^2c-\sin^2c}}\right)=\pm u\sin c+d \Leftrightarrow
f=\frac{\tan c}{\sin(\pm u\sin c+d)},$$ for some $d\in\mathbb{R}$. After a
translation in the $u$-coordinate, we obtain \eqref{3.3.30}.
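One can also verify directly that this $f$ solves the equation above. Writing $w=\pm u\sin c+d$ and using $\tan^2c\,\cos^2c=\sin^2c$,

```latex
(f')^2=\frac{\tan^2c\,\sin^2c\,\cos^2w}{\sin^4w},
\qquad
f^2\bigl(f^2\cos^2c-\sin^2c\bigr)
=\frac{\tan^2c}{\sin^2w}\cdot\frac{\sin^2c\,(1-\sin^2w)}{\sin^2w}
=(f')^2.
```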
$\square$
\begin{remark}
We can write the immersions of the first and the last case of
Theorem \ref{T:3.3.3} more explicitly. After the substitution
$e^{\pm cu}\mapsto u$, the immersion \eqref{3.3.27} becomes
\begin{equation*}
F(u,v)=\left(\frac{1+c^2+u^2}{p(u)q(u)},\ \frac{2u\cos
v}{p(u)q(u)},\ \frac{2u\sin v}{p(u)q(u)},\
\frac{1}{4c^2}\arctan\left(\frac{u^2-1+c^2}{2c}\right)\right),
\end{equation*}
with $p(u)=\sqrt{(u-1)^2+c^2}$ and $q(u)=\sqrt{(u+1)^2+c^2}$.
The immersion \eqref{3.3.31} is, after the substitution $\pm u\sin
c\mapsto u$, given by
\begin{equation*}
F(u,v)=\left(\frac{\cosh v}{p(u)},\ \frac{\sinh v}{p(u)},\
\frac{-\cos c\,\cos u}{p(u)},\ \arctan\left(\frac{\tan u}{\sin
c}\right)\right),
\end{equation*}
with $p(u)=\sqrt{1-\cos^2c\,\cos^2u}$.
\end{remark}
\begin{remark} Totally umbilical surfaces in BCV spaces and in the
Lie group $\mathrm{Sol}_3$ were independently studied in \cite{17aa}, from a
global viewpoint.
\end{remark}
\section{Higher order parallel surfaces}
The following example shows that every Hopf-cylinder in a BCV space
is flat.
\begin{example} Consider a Hopf-cylinder in $\widetilde{M}^3(\kappa,\tau)$. Let
$\{E_1=ae_1+be_2, E_2=e_3\}$, with $a^2+b^2=1$, be an orthonormal
frame field along the surface, then $N=E_1\times E_2=be_1-ae_2$ is a
unit normal. Using the equations in (\ref{Levi Civita}), one
computes
\begin{eqnarray*}
\widetilde{\nabla}_{E_1}N &=& \left(aE_1[b]-bE_1[a]+\frac{\kappa}{2}(ay-bx)\right)E_1-\tau E_2,\\
\widetilde{\nabla}_{E_2}N &=& (aE_2[b]-bE_2[a]-\tau)E_1.
\end{eqnarray*}
This means that the shape operator with respect to the basis
$\{E_1,E_2\}$ takes the form
\begin{eqnarray*}
S &=& \left(\begin{array}{cc} -aE_1[b]+bE_1[a]-\frac{\kappa}{2}(ay-bx) & -aE_2[b]+bE_2[a]+\tau \\ \tau & 0\end{array}\right)\\
&=& \left(\begin{array}{cc} -aE_1[b]+bE_1[a]-\frac{\kappa}{2}(ay-bx) & \tau \\ \tau &
0\end{array}\right),
\end{eqnarray*}
where the last equality follows from the symmetry of $S$. From this
symmetry we have $aE_2[b]=bE_2[a]$, which, together with $a^2+b^2=1$,
implies that $a$ and $b$ are constant along the fibres of the
Hopf-fibration. From Gauss' equation (\ref{Gaussvgl2}), we have
$$K=\det S + \tau^2 + (\kappa-4\tau^2)\cos^2\theta = -\tau^2 +
\tau^2 + (\kappa-4\tau^2)\cos^2\frac{\pi}{2} = 0.$$
\end{example}
Now consider an arbitrary flat surface $M^2$ in $\widetilde{M}^3(\kappa,\tau)$.
Every $p\in M^2$ has an open neighbourhood $U$, which is isometric
to an open part of $\mathbb{E}^2$. Denote by $(u,v)$ the Euclidean
coordinates on $U$. Suppose $T=T_1\frac{\partial}{\partial
u}+T_2\frac{\partial}{\partial v}$ and $S=\left(S_{ij}\right)_{1\leq
i,j\leq 2}$ with respect to the orthonormal basis
$\left\{\frac{\partial}{\partial u}, \frac{\partial}{\partial
v}\right\}$. We consider $S_{11}$, $S_{12}$, $S_{22}$, $\cos\theta$,
$T_1$ and $T_2$ as functions of the Euclidean coordinates $(u,v)$ on
$U$.
\begin{lemma}\label{lemma3}
The functions $S_{11}$, $S_{12}$, $S_{22}$, $\cos\theta$, $T_1$ and
$T_2$ satisfy the following system of equations:
\begin{eqnarray}
T_1^2+T_2^2+\cos^2\theta=1;\label{1}\\
S_{11}S_{22}-S_{12}^2+\tau^2+(\kappa-4\tau^2)\cos^2\theta=0;\label{2}\\
\frac{\partial S_{12}}{\partial u}-\frac{\partial S_{11}}{\partial
v}=(\kappa-4\tau^2)T_2\cos\theta,\label{3}\\
\quad \frac{\partial S_{22}} {\partial u}-\frac{\partial S_{12}}{\partial v}=-(\kappa-4\tau^2)T_1\cos\theta;\nonumber\\
\frac{\partial T_1}{\partial u}=S_{11}\cos\theta, \quad
\frac{\partial T_1}{\partial v}=(S_{12}+\tau)\cos\theta,\label{4}\\
\frac{\partial T_2}{\partial u}=(S_{12}-\tau)\cos\theta,
\quad \frac{\partial T_2}{\partial v}=S_{22}\cos\theta;\nonumber\\
\frac{\partial\cos\theta}{\partial u}=-S_{11}T_1-S_{12}T_2+\tau T_2,\label{5}\\
\frac{\partial\cos\theta}{\partial v}=-S_{12}T_1-S_{22}T_2-\tau
T_1.\nonumber
\end{eqnarray}
\end{lemma}
\noindent\emph{Proof.} Equation (\ref{1}) follows immediately from
the definitions of $T$ and $\theta$. Equation (\ref{2}) expresses
Gauss' equation (\ref{Gaussvgl2}), while the equations (\ref{3})
express the equation of Codazzi (\ref{Codazzivgl}). The equations in
(\ref{4}) and (\ref{5}) are nothing but the structure equations
(\ref{structuurvgl1}) and (\ref{structuurvgl2}).
$\square$
The following result is the last step to obtain a full
classification of higher order parallel surfaces in BCV spaces.
\begin{proposition}\label{prop2}
A $k$-parallel, flat surface $M^2$ in a BCV space
$\widetilde{M}^3(\kappa,\tau)$, with $\kappa\neq4\tau^2$, is an open part of a
Hopf-cylinder over a curve in $\widetilde{M}^2(\kappa)$, whose curvature is a
polynomial function of degree at most $k-1$ of the arc length.
\end{proposition}
\noindent\emph{Proof.} Since $M^2$ is $k$-parallel and flat, the
functions $S_{11}$, $S_{12}$ and $S_{22}$ have to be polynomials of
degree at most $k-1$ in $u$ and $v$. First one can show that the
equations in Lemma \ref{lemma3} then imply that $\theta$ has to be
a constant. This proof is very similar to the proof of the Main
Theorem in \cite{11} and we will therefore omit it.
Now it follows from (\ref{3}) that the functions $T_1$ and $T_2$ are
polynomial functions in $u$ and $v$. Since $T_1$ and $T_2$ satisfy
$T_1^2+T_2^2=1-\cos^2\theta$ and $\theta$ is a constant, they have
to be constant. Then the equations in (\ref{4}) imply that either
$\cos\theta=0$ or $\tau=0$ and $S=0$. Totally geodesic surfaces in
BCV-spaces with $\tau=0$ are classified in Proposition \ref{prop1}
and it is clear that the only flat ones are Hopf-cylinders. Hence we
may conclude that $M^2$ is an open part of a Hopf-cylinder.
To finish, we prove the assertion about the curvature of the base
curve. Taking $E_1$ and $E_2$ as in the example above, one can verify that
$\nabla_{E_i}E_j=0$ and hence we can take Euclidean coordinates
$(u,v)$ such that $E_1=\frac{\partial}{\partial u}$ and
$E_2=\frac{\partial}{\partial v}$. As we remarked before, $a$ and
$b$ will only depend on $u$ and we write $a'$ and $b'$ for the
derivatives with respect to $u$. The base curve
$\gamma(u)=(x(u),y(u))$ satisfies
$\gamma'=\pi_{\ast}E_1=(1+\frac{\kappa}{4}(x^2+y^2))(a,b)$, such
that $u$ is an arc length parameter. We compute
$$\kappa_{\gamma}=(1+\frac{\kappa}{4}(x^2+y^2))\frac{x'y''-x''y'}{((x')^2+(y')^2)^{\frac{3}{2}}}
+\frac{\kappa}{2}\frac{x'y-xy'}{((x')^2+(y')^2)^{\frac{1}{2}}}=ab'-a'b+\frac{\kappa}{2}(ay-bx)=-S_{11}.$$
Looking at the expression for $S$, we see that the surface is
$k$-parallel if and only if $S_{11}$ is a polynomial of degree at
most $k-1$ in $u$ and $v$. This is equivalent to $\kappa_{\gamma}$
being a polynomial of degree at most $k-1$ in $u$.
$\square$
From Lemma \ref{lemma1}, Proposition \ref{prop1} and Proposition
\ref{prop2} we obtain a full classification of higher order parallel
surfaces in 3-dimensional homogeneous spaces with 4-dimensional
isometry group:
\begin{theorem}\label{theo7} A $k$-parallel surface in a BCV space
$\widetilde{M}^3(\kappa,\tau)$, with $\kappa\neq4\tau^2$, is one of
the following:
\begin{itemize}
\item[(i)] an open part of a Hopf-cylinder on a curve whose geodesic curvature is a polynomial function of
degree at most $k-1$ of the arc length;
\item[(ii)] an open part of a totally geodesic leaf of the Hopf-fibration;
\end{itemize}
the latter case only occurring when $\tau=0$.
\end{theorem}
\begin{thebibliography}{20}
\bibitem{1} E. Backes and H. Reckziegel, {\em On
symmetric submanifolds of spaces of constant curvature}, Math. Ann.
\textbf{263} (1983), 419--433
\bibitem{3} M. Belkhelfa, F. Dillen and J. Inoguchi, {\em Surfaces with parallel second fundamental form in
Bianchi-Cartan-Vranceanu spaces}, in: PDE's, Submanifolds and Affine
Differential Geometry, Banach Center Publications, Vol. \textbf{57},
pp. 67--87, Acad. Sci., Warsaw, 2002
\bibitem{4} L. Bianchi, Lezioni di Geometria Differenziale I,
E. Spoerri, Pisa, 1894
\bibitem{5} L. Bianchi, Lezioni sulla Teoria dei Gruppi Continui e Finiti di
Transformazioni, E. Spoerri, Pisa, 1918
\bibitem{6} \'E. Cartan, Le\c{c}ons sur la G\'eom\'etrie des Espaces de
Riemann, Gauthier-Villars, Paris, 1928
\bibitem{7} B. Daniel, {\em Isometric immersions into 3-dimensional homogeneous manifolds},
Comment. Math. Helv. \textbf{82} (2007), 87--131
\bibitem{8} F. Dillen, {\em The classification of hypersurfaces of a Euclidean space with parallel higher
order fundamental form}, Math. Z. \textbf{203} (1990), 635--643
\bibitem{9} F. Dillen, {\em Sur les hypersurfaces parall\`eles d'ordre
sup\'erieur}, Comptes Rendus de l'Aca\-d\'e\-mie des Sciences de
Paris \textbf{311} (1990), 185--187
\bibitem{10} F. Dillen, {\em Hypersurfaces of a real space form with parallel higher order fundamental form},
Soochow J. Math. \textbf{18} (1992), 321--338
\bibitem{11} F. Dillen and J. Van der Veken, {\em Higher order parallel surfaces in the Heisenberg
group}, Differ. Geom. Appl. \textbf{26} (2008), 1--8
\bibitem{12} J. Inoguchi, T. Kumamoto, N. Ohsugi and Y. Suyama, {\em Differential geometry of curves and surfaces in 3-dimensional
homogeneous spaces I--IV}, Fukuoka Univ. Sci. Reports \textbf{29}
(1999), 155--182, \textbf{30} (2000), 17--47, 131--160, 161--168
\bibitem{13a} M. Lachi\`eze-Rey and J. Luminet, {\em Cosmic
topology}, Phys. Rep. \textbf{254} (1995), 135--214
\bibitem{14} H. B. Lawson, {\em Local rigidity theorems for minimal hypersurfaces}, Ann. of Math. \textbf{89}
(1969), 187--197
\bibitem{15} \"U. Lumiste, {\em Submanifolds with a Van der
Waerden-Bortolotti plane connection and parallelism of the third
fundamental form}, Izv. Vyssh. Uchebn. Mat. \textbf{31} (1987),
18--27
\bibitem{16} \"U. Lumiste, {\em Submanifolds with parallel fundamental form}, in: Handbook of Differential Geometry
Vol. 1, pp. 779--864, Elsevier Science B.V., Amsterdam, 2000
\bibitem{17} A. Sanini, {\em Gauss map of a surface of Heisenberg group},
Bollettino U.M.I. B(7) \textbf{11} (1997), 79--93
\bibitem{17aa} R. Souam and E. Toubiana, {\em Totally umbilic surfaces in homogeneous
3-manifolds}, preprint (2006), arXiv:math/0604391v1
\bibitem{17a} W. P. Thurston, Three-dimensional Geometry and
Topology Vol. I, Princeton Math. Series, Vol. \textbf{35}, Princeton
University Press, 1997
\bibitem{18} V. S. Varadarajan, Lie Groups, Lie Algebras and their
Representations, Springer-Verlag New York Inc., 1984
\bibitem{19} G. Vranceanu, Le\c{c}ons de G\'eom\'etrie Diff\'erentielle
I, Ed. Acad. Rep. Roum., Bucarest, 1947
\end{thebibliography}
\end{document}
\begin{document}
\title{Distinguishability of Hyper-Entangled Bell States by Linear Evolution and Local Projective Measurement}
\author{N. Pisenti}
\author{C.P.E. Gaebler}
\author{T.W. Lynn}
\email{[email protected]}
\affiliation{Department of Physics, Harvey Mudd College, 301 Platt Blvd., Claremont, California 91711, USA}
\date{\today }
\begin{abstract}
Measuring an entangled state of two particles is crucial to many quantum communication protocols. Yet Bell state distinguishability using a finite apparatus obeying linear evolution and local measurement is theoretically limited. We extend known bounds for Bell-state distinguishability in one and two variables to the general case of entanglement in $n$ two-state variables. We show that at most $2^{n+1}-1$ classes out of $4^n$ hyper-Bell states can be distinguished with one copy of the input state. With two copies, complete distinguishability is possible. We present optimal schemes in each case.
\end{abstract}
\pacs{03.67.-a,03.67.Hk,42.50.Dv}
\maketitle
\section{Introduction}
\label{Section:Intro}
Entangled systems are ubiquitous in quantum information science, playing key roles in teleportation~\cite{TeleportationProtocol}, quantum repeaters~\cite{QuantumRepeaters}, dense coding~\cite{DenseCoding}, entanglement swapping~\cite{entswapping,entswappingexpt}, and fault tolerant quantum computing~\cite{QuantumComputing}. Typical barriers to efficiently realizing these applications are twofold---first, the reliable generation of entangled pairs in a particular Bell state, and second, complete Bell-state measurement between two particles \cite{BellMeasurements,MethodsTeleportation}. Entangled pair creation can be achieved via numerous methods; for example, with photons it is possible through the non-linear interactions involved in spontaneous parametric downconversion \cite{KwiatSPDC}. However, a complete, deterministic Bell-state measurement is impossible within the broad class of apparatus obeying linear evolution and local measurement (LELM)~\cite{BellMeasurements,MethodsTeleportation}. Much focus is placed on these devices nonetheless, due to their ease of implementation. The inability to perform a complete, deterministic Bell-state measurement with LELM has limited the unconditional fidelity achieved in numerous experimental settings \cite{entswappingexpt,densecodingexpt,superdenseexpt,tpexpt}. A deeper understanding of the exact bounds placed on Bell-state distinguishability by LELM devices thus has implications for quantum communication protocols and other applications in quantum information science.
Recent experimental developments have opened the arena of entanglement between two particles in multiple degrees of freedom, a circumstance known as hyper-entanglement \cite{hyperE}. Existing bounds on nonlocal state distinguishability have involved systems entangled in two or fewer two-state variables \cite{BellMeasurements,MethodsTeleportation,HE}, or in one three-state or $n$-state variable \cite{Calsa-general,vanLoockLut,Carollo1,Carollo2}, yet experiments to date have achieved entanglement in up to three variables~\cite{hyperEexpt}. Thus there is considerable motivation for more general theoretical bounds, for instance to establish channel capacities for superdense coding. In this paper, we consider the general case of two particles entangled in $n$ two-state variables. Our analysis offers an $n$-variable distinguishability limit based on a simple understanding of the restrictions imposed by LELM; we further describe a straightforward apparatus which will always achieve maximum distinguishability between hyper-entangled Bell states.
\section{Notation and representation of LELM apparatus}
An apparatus constrained by ``linear evolution'' acts on each input particle independently of the other, so it can be represented as a unitary transformation over the space of single-particle input states. Consequently, the single-particle output modes of the device are linear combinations of the single-particle input modes. ``Local projective measurement'' means the detection event projects the system into a product state of two single-particle output modes. We consider measurement in a Fock state basis of output modes, corresponding to annihilation of particles in the two
detectors which register clicks.
The system of interest consists of two particles whose states are described by $n$ two-state variables, each of which is represented in the basis $\{|0\rangle, |1\rangle\}$. Some examples include the following: for photonic systems, the linear polarization states $\{H, V\}$, the subset $\{ +\hbar, -\hbar\}$ of orbital angular momentum states, or time bins $\{t_s, t_l\}$; for atomic systems, two ground or metastable electronic states $\{g_1,g_2\}$; for electronic spin qubits, the states $\{\uparrow_z,\downarrow_z\}$; and many more two-state quantum systems.
The two particles enter the LELM apparatus via separate spatial channels, designated \textit{L} and \textit{R}, as shown schematically in Fig.~\ref{schematic}. Each single-particle input undergoes unitary evolution to the set of orthogonal output modes. Each detector is capable of resolving number states in its associated mode, so two particles in a single detector can be reliably detected. A complete single-particle input is specified by particular values for all $n$ variables as well as the spatial channel. It has been shown that, for projective measurements with linear evolution, distinguishability between signal states cannot be improved by the use of auxiliary modes as long as the signal states are of definite particle number \cite{vanLoockLut,Carollo2}. Thus we may restrict our discussion to a space of single-particle input modes with dimension $2^{n+1}$, and a corresponding $2^{n+1}$-dimensional space of output modes. Finally, a complete detection event consists of annihilating particles in two output modes (possibly the same mode twice).
\begin{figure}
\caption{A pair of particles enters the measurement apparatus via separate channels (Left and Right). Each particle evolves independently of the other (linear evolution), hence the unitary evolution of single-particle input modes to output modes. Local measurement registers two clicks in the detectors, projecting the system into a product state of two single-particle output modes (possibly the same mode twice).}
\label{schematic}
\end{figure}
A useful basis for the input states consists of kets $|\varphi_m\rangle$, each representing a particle in one of the $2^{n+1}$ possible input modes: either the $|0\rangle$ or $|1\rangle$ eigenstate of each variable, and either the left or right input channel. We assign odd indices $m$ to \textit{L}-channel states and even indices to \textit{R}-channel states, such that $|\varphi_{2s-1}\rangle = |\chi_s,L\rangle$ and $|\varphi_{2s}\rangle = |\chi_s,R\rangle$ are identical to one another except for the choice of left vs. right input channel; $s$ ranges from $1$ to $2^n$ and $\{\chi_s\}$ is the set of all binary strings of length $n$. For example, for $n=1$ (one variable), the input-state basis is:
\begin{equation}
|\varphi_1\rangle = |0,L\rangle, ~~
|\varphi_2\rangle = |0,R\rangle, ~~
|\varphi_3\rangle = |1,L\rangle, ~~
|\varphi_4\rangle = |1,R\rangle. \label{eq:1dinputbasis}
\end{equation}
The two-particle, or overall, input states are spanned by the set of tensor product states $|\varphi_m\rangle |\varphi_k\rangle$ with the restriction that $m \neq k~(\mathrm{mod}~2)$, since the input states of interest are limited to those with one particle in each of the left and right input channels. For indistinguishable particles 1 and 2, the \mbox{(anti)symmetrized} version $\frac{1}{\sqrt{2}}(|\varphi_m\rangle_1|\varphi_k\rangle_2 \pm |\varphi_k\rangle_1|\varphi_m\rangle_2)$ is understood instead.
We can describe a click in detector $i$ as a projection of the input state onto the single-particle output mode $|i\rangle$. The relationship between input and output modes depends on the apparatus, but without loss of generality we can write
\begin{equation}
|i\rangle = \sum_{m} U_{im}|\varphi_m\rangle \label{U-transform}
\end{equation}
where the LELM apparatus is represented by the unitary matrix $\mathbf{U}$. Each output mode thus takes the form
\begin{equation}
|i\rangle = \alpha_i|l_i\rangle + \beta_i|r_i\rangle,
\label{det-even-odd}
\end{equation}
where $|l_i\rangle$ is a superposition of left-channel input states and $|r_i\rangle$ is a superposition of right-channel input states.
A complete detection signature corresponds to a projection of the two-particle input state onto the tensor product state $|i\rangle|j\rangle$; for indistinguishable particles, the \mbox{(anti)symmetrized} version $\frac{1}{\sqrt{2}}(|i\rangle_1|j\rangle_2 \pm |j\rangle_1|i\rangle_2)$ is understood instead. Furthermore, if we constrain the inputs to include just one particle in the left channel and one particle in the right, we should consider the projection of $|i\rangle|j\rangle$ onto the subspace of two-particle input states spanned by $|\varphi_m\rangle|\varphi_k\rangle$ with $m\neq k~(\mathrm{mod}~2)$. We call this projection the detection signature, denoted $P_{LR}|i\rangle|j\rangle$.
Finally, we turn to a description of the Bell states themselves, which form an entangled basis for the two-particle system. In a single variable, the Bell basis consists of four maximally-entangled states given by
\begin{align}
|\Phi^{\pm}\rangle &=\frac{1}{\sqrt{2}}\Big(|0,L\rangle|0,R\rangle\pm |1,L\rangle|1,R\rangle\Big) \label{Equation:PhiBell}\\
|\Psi^{\pm}\rangle &=\frac{1}{\sqrt{2}}\Big(|0,L\rangle|1,R\rangle \pm |1,L\rangle|0,R\rangle\Big), \label{Equation:PsiBell}
\end{align}
or rather the (anti)symmetrized versions of Eqs.~\ref{Equation:PhiBell}~and~\ref{Equation:PsiBell}. To generalize this basis for hyper-entanglement in $n$ variables, we simply take a tensor product between the Bell states for each individual variable. Thus, the hyper-Bell states are $\{|\Phi^+\rangle,|\Phi^-\rangle,|\Psi^+\rangle,|\Psi^-\rangle\}^{\otimes n}$; for $n$ variables, these are $4^n$ mutually orthogonal entangled states.
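As a concrete check of this counting (a numerical sketch of ours, not part of the derivation), the following builds the hyper-Bell basis for $n=2$ as tensor products of the four single-variable Bell states and confirms that the resulting $4^n$ states are mutually orthonormal. The per-variable basis ordering $\{|0L,0R\rangle,|0L,1R\rangle,|1L,0R\rangle,|1L,1R\rangle\}$ is an illustrative choice.

```python
import itertools

import numpy as np

# Single-variable Bell states in the product basis {|0L,0R>, |0L,1R>, |1L,0R>, |1L,1R>}.
s = 1 / np.sqrt(2)
bell = {
    "Phi+": np.array([s, 0.0, 0.0, s]),
    "Phi-": np.array([s, 0.0, 0.0, -s]),
    "Psi+": np.array([0.0, s, s, 0.0]),
    "Psi-": np.array([0.0, s, -s, 0.0]),
}

def hyper_bell_basis(n):
    """All 4**n hyper-Bell states as n-fold tensor products of Bell states."""
    states = []
    for combo in itertools.product(bell.values(), repeat=n):
        v = combo[0]
        for w in combo[1:]:
            v = np.kron(v, w)   # tensor product over the n variables
        states.append(v)
    return np.array(states)

n = 2
B = hyper_bell_basis(n)              # 4**n = 16 states of dimension 4**n
gram = B @ B.T                       # matrix of pairwise inner products
assert B.shape == (4**n, 4**n)
assert np.allclose(gram, np.eye(4**n))   # mutually orthonormal
```

Since tensor products of orthonormal families are orthonormal, the same check passes for any $n$.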
\section{Maximum number of distinguishable Bell-state classes}
As discussed above, there are $2^{n+1}$ mutually orthogonal output modes, or $2^{n+1}$ detectors. Each hyper-Bell state, due to its maximal entanglement, is capable of producing at least one click in any detector $i$. To see this, suppose that detector $i$ is never triggered by hyper-Bell state $|B\rangle$. Then
$\langle B|\frac{1}{\sqrt{2}}(|j\rangle_1|i\rangle_2 \pm |i\rangle_1|j\rangle_2)=0$ for all output modes $|j\rangle$, or equivalently,
\begin{equation}
\langle B_{\text{sym}}|\big(|j\rangle_1|i\rangle_2\big)=0 ~~ \forall j
\label{eq:todisprove}
\end{equation}
where $|B_{\text{sym}}\rangle$ is the Bell state symmetrized or antisymmetrized under exchange of particles 1 and 2. For example, the symmetrized version of the single-variable Bell state $|\Phi^+\rangle$ is
\begin{align}
|\Phi^+_{\text{sym}}\rangle = \frac{1}{2} \Big( &|0,L\rangle_1 |0,R\rangle_2 + |1,L\rangle_1|1,R\rangle_2 \notag
\\ &+ |0,R\rangle_1|0,L\rangle_2 + |1,R\rangle_1|1,L\rangle_2\Big).
\label{eq:phiplussym}
\end{align}
If Eq. \ref{eq:todisprove} holds, it follows that
\begin{equation}
\sum_{j} \big( \leftsub{2}{\langle i|} \leftsub{1}{\langle j|} \big) |B_{\text{sym}}\rangle \langle B_{\text{sym}}|
\big( |j\rangle_1|i\rangle_2\big)=0.
\end{equation}
The left-hand side of the last expression is simply $_2\langle i|Tr_1(|B_{\text{sym}}\rangle\langle B_{\text{sym}}|)|i\rangle_2$, where the trace is taken over the states of particle 1. However, this quantity cannot be zero, since the reduced density matrix $Tr_1(|B_{\text{sym}}\rangle\langle B_{\text{sym}}|)$ is a multiple of the identity on the space of particle~2 states, including both left- and right-channel states. (Consider, for example, $Tr_1(|\Phi^+_{\text{sym}}\rangle\langle \Phi^+_{\text{sym}}|)$ calculated using Eq. \ref{eq:phiplussym}.) Thus our supposition fails: Eq. \ref{eq:todisprove} cannot hold, and so every Bell state can in fact trigger every detector. A single detector click cannot discriminate between any of the Bell states.
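The reduced-density-matrix claim can be checked numerically in the $n=1$ case. In this sketch (ours, for illustration), a two-particle state is stored as a $4\times4$ real array $\psi[m,k]$ over the input basis of Eq.~\ref{eq:1dinputbasis} (zero-indexed), so that $Tr_1|\psi\rangle\langle\psi|$ reduces to the matrix product $\psi^{T}\psi$.

```python
import numpy as np

d = 4  # input modes: phi_1=|0,L>, phi_2=|0,R>, phi_3=|1,L>, phi_4=|1,R> (0-indexed below)

def ket(m, k):
    """|phi_m>_1 |phi_k>_2 represented as an array psi with psi[m, k] = 1."""
    psi = np.zeros((d, d))
    psi[m, k] = 1.0
    return psi

def bell_sym(pairs, signs, eta=1):
    """Normalized (anti)symmetrized Bell state; eta=+1 for bosons, -1 for fermions."""
    psi = sum(sg * (ket(a, b) + eta * ket(b, a)) for (a, b), sg in zip(pairs, signs))
    return psi / np.linalg.norm(psi)

# The four n=1 Bell states pair the L-modes {0, 2} with the R-modes {1, 3}.
bell_states = {
    "Phi+": bell_sym([(0, 1), (2, 3)], [1, 1]),
    "Phi-": bell_sym([(0, 1), (2, 3)], [1, -1]),
    "Psi+": bell_sym([(0, 3), (2, 1)], [1, 1]),
    "Psi-": bell_sym([(0, 3), (2, 1)], [1, -1]),
}

for name, psi in bell_states.items():
    rho2 = psi.T @ psi           # Tr_1 |psi><psi|, acting on particle-2 states
    assert np.allclose(rho2, np.eye(d) / d), name

# The antisymmetrized (fermionic) case behaves the same way:
psi_f = bell_sym([(0, 1), (2, 3)], [1, 1], eta=-1)
assert np.allclose(psi_f.T @ psi_f, np.eye(d) / d)
```

In every case the reduced density matrix comes out as $I/4$, a multiple of the identity with support on both left- and right-channel modes, exactly as the argument requires.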
An alternate demonstration of this key point proceeds as follows. If the initial two-particle state is an arbitrary Bell state $|B\rangle$, the state following a single click in some detector is proportional to $\hat{c}|B\rangle$, where $\hat{c}$ is the annihilation operator associated with that output mode. The statement that the Bell state can cause the detector to click is equivalent to the statement that the norm of this post-click state is nonzero. Thus we must consider the quantity $\langle B|\hat{c}^{\dagger} \hat{c}|B\rangle$. We can rewrite the expression in terms of the annihilation operators $\hat{a}_m$ associated with the single-particle input modes $|\varphi_m\rangle$: $\hat{c} = \sum_{m} C_m \hat{a}_m$. (Recall from Sec. II that $m$ odd or even denotes left- or right-channel modes, respectively. Further, if we are considering the output mode associated with detector $i$, then $C_m = U^{\ast}_{im}$ in the notation of Eq. \ref{U-transform}.) Thus
\begin{align}
\langle B|\hat{c}^{\dagger} \hat{c}|B\rangle = &\sum_{m,k} C^{\ast}_m C_k \langle B|\hat{a}^{\dagger}_m \hat{a}_k|B\rangle \notag \\
= &\sum_{m} |C_m|^2 \langle B|\hat{a}^{\dagger}_m\hat{a}_m|B\rangle \notag \\
&+\sum_{m,k\neq m} C^{\ast}_m C_k \langle B|\hat{a}^{\dagger}_m \hat{a}_k|B\rangle.
\label{eq:postnorm}
\end{align}
To further evaluate this expression, we write the Bell state as
\begin{equation}
|B\rangle = \frac{1}{\sqrt{2^n}} \sum^{2^n}_{s=1} (-1)^{\sigma_B(s)}\hat{a}^{\dagger}_{2s-1} \hat{a}^{\dagger}_{2r_B(s)} |\mathbf{0}\rangle
\label{eq:Bellfromvac}
\end{equation}
where $|\mathbf{0}\rangle$ is the vacuum state, $\sigma_B(s)$ can take values 0 or 1, and $\{r_B(s)\}$ is a permutation of $\{s\}$, so each left-channel mode of index $2s-1$ is uniquely paired in this Bell state with a right-channel mode of index $2r_B(s)$ (and vice versa). From this form it is easy to see that, for any $k\neq m$, $\hat{a}_k|B\rangle$ and $\hat{a}_m|B\rangle$ are orthogonal to each other, so $\langle B| \hat{a}^{\dagger}_m \hat{a}_k |B\rangle=0$. The final sum in Eq. \ref{eq:postnorm} vanishes by this reasoning. In the remaining sum of Eq. \ref{eq:postnorm}, $\langle B| \hat{a}^{\dagger}_m\hat{a}_m|B\rangle = \frac{1}{2^n}$ for any input mode $m$ and any Bell state $|B\rangle$, and so the sum simply evaluates to $\frac{1}{2^n}\sum_m |C_m|^2 = \frac{1}{2^n}$. Thus in the end we have $\langle B|\hat{c}^{\dagger} \hat{c} |B\rangle = \frac{1}{2^n}$, a nonzero value independent of the particular Bell state and output mode. In particular, any output mode is compatible with all Bell states: a single detector click does not discriminate between Bell states.
\vskip1em
Because a single detector event provides no information about which hyper-Bell state the particles occupy, distinguishability must come from identifying one of the $2^{n+1}$ orthogonal outcomes for the second detector event. The $2^{n+1}$ possibilities form a simple upper bound on distinguishable Bell-state classes from LELM devices, obtainable also by considering the Schmidt number of at most 2 for any detection signature \cite{Calsa-general}. We will now show that the actual maximum is one less than the simple upper bound, namely, $2^{n+1}-1$. This general result agrees with previous results for $n=1$ (3 out of 4) and $n=2$ (7 out of 16) \cite{MethodsTeleportation,BellMeasurements,HE}.
For fermions, the amplitude to observe two clicks in detector $i$ must always be zero, since $|i\rangle|i\rangle$ is inherently symmetric under particle exchange. Thus for any detector $i$, at most $2^{n+1}-1$ detection signatures $P_{LR}|i\rangle|j\rangle$ are nonzero. Since all the Bell states are represented in these $2^{n+1}-1$ signatures, there are at most $2^{n+1}-1$ distinguishable classes of hyper-Bell states for two fermions.
For bosons, consider a single output mode $|i\rangle$ as represented in Eq. \ref{det-even-odd}. If either coefficient $\alpha_i$ or $\beta_i$ is zero, the detection signature $P_{LR}|i\rangle|i\rangle$ is zero, and we have at most $2^{n+1}-1$ distinguishable classes of Bell states.
If $|i\rangle$ is a nontrivial superposition of left- and right-channel inputs as in Eq. \ref{det-even-odd}, then some linear combination of output modes must satisfy:
\begin{equation}
|X\rangle = \sum_{j} \epsilon_j|j\rangle = \alpha_i|l_i\rangle - \beta_i|r_i\rangle.
\label{otherstate}
\end{equation}
The hypothetical detection signature $P_{LR}|i\rangle|X\rangle$ is zero, giving $\sum_{j} \epsilon_j P_{LR}|i\rangle|j\rangle = 0$. Consider some $j$ such that $\epsilon_j \neq 0$; any Bell state represented in the detection signature $P_{LR}|i\rangle|j\rangle$ must also be represented in at least one other detection signature $P_{LR}|i\rangle|k\rangle$. Thus it is not possible to reliably distinguish between a class of Bell states that can produce clicks in detectors $(i,j)$ and a class that can produce clicks in detectors $(i,k)$. Therefore the number of distinguishable Bell-state classes must be less than the full number of detection signatures involving detector $i$, and there can be at most $2^{n+1}-1$ distinguishable classes of hyper-Bell states for two bosons.
If the left and right input channels are not brought together in the apparatus, \textit{e.g.}, for experimental convenience, each output mode $|i\rangle$ of Eq. \ref{det-even-odd} is simply equal to $|l_i\rangle$ or to $|r_i\rangle$.
Thus $2^n$ output modes are superpositions of the left-channel inputs, and the other $2^n$ output modes are superpositions of the right-channel inputs. For any detector $i$, only $2^n$ detection signatures $P_{LR}|i\rangle|j\rangle$ are nonzero. Since all the Bell states are represented in these $2^n$ signatures, there are at most $2^n$ distinguishable classes of Bell states for this case.
\section{APPARATUS FOR MAXIMAL DISTINGUISHABILITY}
A best-case apparatus for separate \textit{L} and \textit{R} measurement is obtained by measuring the \textit{L}- and \textit{R}-channel inputs each in the $\{|0\rangle,|1\rangle\}^{\otimes n}$ basis. This is a projective measurement in the $|\varphi_m\rangle$ basis, so detection signatures are of the form $|\chi_s,L\rangle|\chi_t,R\rangle$. Bell states represented in one detection signature must be tensor products of $|\Phi^\pm\rangle$ in variables where $|\chi_s\rangle$ and $|\chi_t\rangle$ share their eigenvalue, and $|\Psi^\pm\rangle$ in variables where they do not. Specifying $\Phi$ vs. $\Psi$ in this way yields $2^n$ classes of $2^n$ Bell states each.
A unitary transformation realizing the maximal \mbox{$2^{n+1}-1$} Bell-state classes for fermionic or bosonic inputs is given by ($s=1$ to $2^n$):
\begin{align}
|2s-1\rangle &= \frac{1}{\sqrt{2}}(|\varphi_{2s-1}\rangle + |\varphi_{2s}\rangle) =\frac{1}{\sqrt{2}}(|\chi_s,L\rangle + |\chi_s,R\rangle) \notag \\
|2s\rangle &= \frac{1}{\sqrt{2}}(|\varphi_{2s-1}\rangle - |\varphi_{2s}\rangle)=\frac{1}{\sqrt{2}}(|\chi_s,L\rangle - |\chi_s,R\rangle).
\label{eq:besttransform}
\end{align}
This is a Hadamard transform between the \textit{L} and \textit{R} channels for each $n$-variable eigenstate $|\chi_s\rangle$.
\begin{figure}
\caption{Apparatus for optimal hyper-Bell state distinguishability of photon pairs, as in Eq. \ref{eq:besttransform}.}
\label{fig:optimaldet}
\end{figure}
For bosons with linear evolution governed by Eq. \ref{eq:besttransform}, a detection signature $P_{LR}|2s-1\rangle|2s-1\rangle$ or $P_{LR}|2s\rangle|2s\rangle$ identifies the two-particle input state $|\chi_s,L\rangle|\chi_s,R\rangle$. These detection signatures thus all identify the class of $2^n$ hyper-entangled Bell states $|\Phi^\pm\rangle^{\otimes n}$. Detection signatures of the form $P_{LR}|2s-1\rangle|2s\rangle$ or $P_{LR}|2s\rangle|2s-1\rangle$, however, are antisymmetric under particle exchange and do not occur. For fermions the roles are reversed; detection signatures $P_{LR}|2s-1\rangle|2s\rangle$ or $P_{LR}|2s\rangle|2s-1\rangle$ identify the class of $2^n$ hyper-entangled Bell states $|\Phi^\pm\rangle^{\otimes n}$, while detection signatures $P_{LR}|2s-1\rangle|2s-1\rangle$ or $P_{LR}|2s\rangle|2s\rangle$ are symmetric and do not occur.
Any detection signature not of the forms already discussed will give
\begin{equation}
P_{LR}|i\rangle|j\rangle = \frac{1}{\sqrt{2}}(|\chi_s,L\rangle|\chi_t,R\rangle \pm |\chi_t,L\rangle|\chi_s,R\rangle)
\label{smallclasses}
\end{equation}
with $t\neq s$. Bell states represented in such a detection signature have a well-defined sequence of $\Phi$ vs. $\Psi$ in the $n$ variables. Furthermore, the sign of the superposition in Eq. \ref{smallclasses} gives the symmetry of the overall state with respect to exchange of \textit{L} and \textit{R}; the $|\Psi^-\rangle$ Bell state is antisymmetric in this way while the others are all symmetric, so a $+$ ($-$) sign in Eq. \ref{smallclasses} restricts that detection signature to hyper-entangled Bell states with $|\Psi^-\rangle$ in an even (odd) number of variables. Thus each such detection signature identifies a class of $2^{n-1}$ Bell states. There are $2^{n+1}-2$ classes of this type and one $|\Phi^\pm\rangle^{\otimes n}$ class, so exactly $2^{n+1}-1$ classes are reliably distinguished.
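The counting above can be verified arithmetically: the $2^{n+1}-2$ classes of $2^{n-1}$ states each, together with the single $|\Phi^\pm\rangle^{\otimes n}$ class of $2^n$ states, must exhaust all $4^n$ hyper-Bell states. A short Python check (our sketch, not part of the original derivation; the function name is illustrative):

```python
# Consistency check: (2^(n+1) - 2) classes of 2^(n-1) states each, plus the
# single |Phi^+->^{(x)n} class of 2^n states, account for all 4^n Bell states,
# giving 2^(n+1) - 1 distinguishable classes in total.
def class_count_check(n):
    small_classes = 2 ** (n + 1) - 2          # classes of the form in Eq. (smallclasses)
    total_states = small_classes * 2 ** (n - 1) + 2 ** n
    return total_states == 4 ** n, small_classes + 1

for n in (1, 2, 3):
    ok, num_classes = class_count_check(n)
    assert ok and num_classes == 2 ** (n + 1) - 1
```

For $n=1,2,3$ this reproduces the quoted bounds of 3, 7, and 15 distinguishable classes.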
An optimal apparatus for photons is depicted in Fig.~\ref{fig:optimaldet}. A 50/50 beamsplitter performs (up to overall phase shifts) the \textit{L}/\textit{R} Hadamard transform of Eq. \ref{eq:besttransform}; the input modes are then separated according to the value of each variable, so detector clicks project into the $\{|0\rangle,|1\rangle\}^{\otimes n}$ basis. Previous optimal distinguishability schemes for $n=1$ are of this form \cite{Innsbruck1,Innsbruck2,Innsbruck3,MethodsTeleportation,BellMeasurements}.
The schemes above can be varied by performing projective measurement in the diagonal $\{\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right),\frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)\}$ basis rather than the $\{|0\rangle,|1\rangle\}$ basis for one or more variables. For example, for $n=1$, an optimal unitary transformation can be written in the basis of Eq. \ref{eq:1dinputbasis} as
\begin{equation}
\label{Equation:UMatrix2}
\mathbf{U_{opt}}=\frac{1}{2}\begin{pmatrix}1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}.
\end{equation}
For polarization-entangled photons, Eq. \ref{Equation:UMatrix2} is realized by a 50/50 beamsplitter and polarizing beamsplitters at $45^{\circ}$.
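Up to basis-ordering conventions, the matrix in Eq. \ref{Equation:UMatrix2} is simply the tensor product of two $2\times2$ Hadamard transforms, one mixing the \textit{L}/\textit{R} channels and one mixing the variable values. A quick numerical check (a sketch, assuming the standard tensor-product basis ordering):

```python
import numpy as np

# The optimal n=1 transformation of Eq. (UMatrix2) equals the tensor product
# of two 2x2 Hadamard transforms and is therefore unitary.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U_opt = 0.5 * np.array([[1,  1,  1,  1],
                        [1, -1,  1, -1],
                        [1,  1, -1, -1],
                        [1, -1, -1,  1]])

assert np.allclose(U_opt, np.kron(H, H))              # U_opt = H (x) H
assert np.allclose(U_opt @ U_opt.conj().T, np.eye(4))  # unitarity
```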
With two copies of a hyper-Bell state, complete distinguishability can be achieved by measuring one copy in the apparatus of Eq. \ref{eq:besttransform}, and the second in a version where detectors project into the diagonal basis for all $n$ variables. $|\Phi^+\rangle$ and $|\Psi^-\rangle$ in each variable retain their forms in the diagonal basis, while the other Bell states exchange forms, $|\Phi^-\rangle \leftrightarrow |\Psi^+\rangle$. States indistinguishable via the first apparatus share a common sequence of $\Phi$ vs. $\Psi$ in the $n$ variables, and differ by $+$ vs. $-$ in one or more variables. All these states will have distinct $\Phi$ vs. $\Psi$ sequences in the diagonal basis, so they will give distinct measurement outcomes in the second apparatus. In fact, the same principle gives two-copy complete distinguishability even by measuring L- and R-channel inputs separately in the $\{|0\rangle,|1\rangle\}^{\otimes n}$ basis for one copy and the $\{\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right),\frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)\}^{\otimes n}$ basis for the second copy. Other optimal schemes exist, such as those already known for $n=2$ \cite{KW,KWexpt,HE}.
\section{Conclusion}
We have shown that devices constrained by linear evolution and local measurement cannot reliably distinguish more than $2^{n+1}-1$ Bell states or Bell-state classes for two particles entangled in $n$ degrees of freedom. This bound holds even for conditional measurements; after a click in detector $i$, no conditional evolution of the remaining state to output channels will avoid the limitations presented above. However, two copies of the hyper-Bell state allow complete distinguishability. We have constructed unitary transformations of input states to output states which achieve the upper bound on distinguishability for one and two copies of the hyper-Bell state.
This work illustrates the potential and limitations for manipulation and measurement of entangled systems with inherently linear, unentangling devices. It relies on a very physical approach to consider cases in which previous $n=1,2$ methods are computationally unattractive; checking the $n=3$ bound by previous methods involves searching for solutions to ${64\choose16}\approx 4.9\times10^{14}$ systems of 16 equations. Our approach gives another way of understanding the $n=1,2$ bounds, and may provide a framework for further bounds, perhaps on LELM distinguishability of hyper-entangled states involving qutrit or higher-dimensional variables.
\section{Acknowledgments}
The authors thank M. Orrison and D. Skjorshammer for useful conversations, and an anonymous referee for thoughtful feedback and suggestions. This work was supported by Research Corporation Cottrell College Science Grant No. 10598.
\end{document} | math | 25,650 |
\begin{document}
\title{Experimental Implementation of Quantum Walks on IBM Quantum Computers}
\begin{abstract}
The development of universal quantum computers has achieved remarkable success in recent years, culminating in the quantum supremacy reported by Google. It is now possible to implement short-depth quantum circuits with dozens of qubits and to obtain results with significant fidelity. Quantum walks are good candidates for implementation on the available quantum computers. In this work, we implement discrete-time quantum walks with one and two interacting walkers on cycles, two-dimensional lattices, and complete graphs on IBM quantum computers. We are able to obtain meaningful results using the cycle, the two-dimensional lattice, and the complete graph with 16 nodes each, which require 4-qubit quantum circuits up to depth 100.
\noindent
Keywords: Quantum computing, quantum walks, quantum circuits, IBM Q Experience, Qiskit
\end{abstract}
\section{Introduction}
Quantum walks are considered the quantum analogue of classical walks, which are useful for developing classical randomized algorithms~\cite{MR96}. Quantum walks have already proved useful for designing quantum algorithms~\cite{Amb07a}. The most general definition of a quantum walk on a graph demands that its time evolution obey the laws of quantum mechanics and is constrained by graph locality~\cite{Por18book}.
A quantum computer able to perform computational tasks that the largest supercomputers available today cannot simulate has recently been built~\cite{GoogleQC}, opening a broad highway to efficient implementations of quantum walks. Implementing a quantum walk on a quantum computer requires $\log_2 N$ qubits, where $N$ is the number of nodes of the graph on which the walkers move. In many laboratory implementations of quantum walks, by contrast, the number of devices the experimenter puts on the table scales with the number of nodes of the graph~\cite{DW09}, although the required resources can be reduced by resorting to classical realizations of quantum walks with optical systems~\cite{Regensburger2011photon,Schetal12,LMNPGBJS19}.
Quantum computers thus provide an exponential advantage in memory. However, as the name of the noisy intermediate-scale quantum (NISQ) era indicates, the currently available quantum computers are prone to error rates above the threshold required to implement quantum error-correcting codes~\cite{Preskill2018quantum}. This obviously limits the depth of the circuits that can simulate a quantum walk reasonably well.
The time evolution of quantum walks on graphs can be continuous or discrete~\cite{ADZ93,FG98}. In the discrete-time case, the very first model~\cite{ADZ93}, originally called ``random quantum walks'' and nowadays known as coined quantum walks, has an internal coin space, which was considered mandatory for many years until two alternative coinless models were proposed: Szegedy's~\cite{Sze04a} and Patel et~al.'s~\cite{PRR05}. Szegedy's model is defined on bipartite graphs, and Patel et~al.'s model on hypercubic lattices. Both are particular cases of the staggered quantum walk model~\cite{PSFG16}, which is defined on arbitrary graphs. To define discrete-time quantum walks on arbitrary graphs without resorting to internal spaces, the number of local unitary operators whose product is the evolution operator must be larger than two, depending on the graph, because locality demands that the vertex set be partitioned into cliques; this leads to the notion of graph tessellation cover and to the results proved in~\cite{ACFKMPP20}. For instance, the evolution operator of a discrete-time quantum walk on a two-dimensional lattice must be the product of at least four local operators~\cite{PF17}.
In this work, we implement staggered quantum walks (SQWs)~\cite{PSFG16} on cycles, two-dimensional lattices with cyclic boundary conditions, and complete graphs on IBM quantum computers. The evolution operator of a SQW on a graph is obtained using a graph tessellation cover, where a tessellation is a partition of the vertex set into cliques and a tessellation cover is a set of tessellations that covers all the edges of the graph. When implementing on IBM quantum computers, the evolution operator must be decomposed in terms of basic gates: CNOTs and 1-qubit rotations. The most important gate in quantum walk implementations is the multi-controlled Toffoli gate, whose decomposition has been widely studied~\cite{HLZWW17,LL16,NC00}. In our case, we use an alternative version of this gate to shorten its decomposition, which is crucial in NISQ systems. We are able to implement one step of the quantum walk on a 16-node cycle, two steps of two interacting walkers on a 4-node cycle, and one step of the quantum walk on a 16-node two-dimensional lattice with cyclic borders. These results improve on earlier attempts using IBM quantum computers~\cite{BCS18,GZ19,Sha19} and are comparable with the size of cycles used in direct laboratory experiments~\cite{matjeschk2012experimental,flurin2017observing,dadras2019experimental}. We have also implemented quantum walk-based search algorithms on complete graphs with 8 and 16 vertices, which are more efficient than their classical counterparts in terms of oracle calls. As far as we know, ours is the first implementation of a modified version of Grover's algorithm with four qubits on public-access quantum computers with high fidelity (72.1\%) and faster than classical random search algorithms (see also the discussion in the concluding section of~\cite{SOM20}).
The structure of this paper is as follows. Sec.~\ref{sec:cycle} describes the dynamics of one walker and two interacting walkers on the $N$-cycle and their implementation on IBM quantum computers using 4 qubits. Sec.~\ref{sec:grid} describes the dynamics of one walker on a $N$-torus (cyclic two-dimensional lattice) and its implementation on IBM quantum computers using 4 qubits. Sec.~\ref{sec:search} presents the implementation of quantum walk-based search algorithms on complete graphs. Sec.~\ref{sec:conc} describes our conclusions.
\section{Quantum walk on the cycle}\label{sec:cycle}
Consider a $N$-cycle whose vertices are labeled by $0, \ldots, N-1$ and assume that $N$ is even. A tessellation cover $\{\mathcal{T}_\alpha,\mathcal{T}_\beta\}$ for this graph is depicted in Fig.~\ref{fig:non-uniform-tiles-N=8}, where $\mathcal{T}_\alpha=\{\alpha_x:0\le x\le N/2-1\}$, $\mathcal{T}_\beta=\{\beta_x:0\le x\le N/2-1\}$, $\alpha_x=\{2x,2x+1\}$, and $\beta_x=\{2x+1,2x+2\}$. The arithmetic is performed modulo $N$.
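The two tessellations are easy to generate programmatically. The following sketch (our illustration, not from the paper; the function name is ours) builds $\mathcal{T}_\alpha$ and $\mathcal{T}_\beta$ for an even $N$ and checks that together they cover every edge of the cycle exactly once:

```python
# Build the tessellation cover {T_alpha, T_beta} of the N-cycle (N even) and
# verify that every edge {v, v+1 mod N} belongs to exactly one tile.
def tessellation_cover(N):
    T_alpha = [frozenset({2 * x, 2 * x + 1}) for x in range(N // 2)]
    T_beta = [frozenset({(2 * x + 1) % N, (2 * x + 2) % N}) for x in range(N // 2)]
    return T_alpha, T_beta

N = 8
T_alpha, T_beta = tessellation_cover(N)
edges = {frozenset({v, (v + 1) % N}) for v in range(N)}
assert set(T_alpha) | set(T_beta) == edges      # all cycle edges are covered
assert set(T_alpha).isdisjoint(T_beta)          # each edge lies in exactly one tile
```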
Each vertex $v$ is associated with a canonical basis vector $\ket{v}$ in a Hilbert space $\mathscr{H}^{N}$, whose computational basis is $\{\ket{x}:x=0,\ldots,N-1 \}$. Each tile $\alpha_x$ ($\beta_x$) of tessellation $\mathcal{T}_\alpha$ ($\mathcal{T}_\beta$) is associated with a unit vector $\ket{\alpha_x}$ ($\ket{\beta_x}$) in $\mathscr{H}^{N}$ as follows
\begin{align}
\left|\alpha_{x}\right\rangle &= \frac{\ket{2x}+\ket{2x+1}}{\sqrt{2}}, \label{eq:sm_alpha_latt} \\
\left|\beta_{x}\right\rangle &= \frac{\ket{2x+1}+\ket{2x+2}}{\sqrt{2}}. \label{eq:sm_beta_latt}
\end{align}
Using these vectors, we define projectors $\sum_x\ket{\alpha_x}\bra{\alpha_x}$ and $\sum_x\ket{\beta_x}\bra{\beta_x}$, which allow us to define the following Hermitian and unitary operators:
\begin{align}
H_0 &= 2\sum_{x=0}^{{N}/{2}-1} \ket{\alpha_x}\bra{\alpha_x}-\mathds{I}, \label{eq:sm_H0} \\
H_1 &= 2\sum_{x=0}^{{N}/{2}-1} \ket{\beta_x}\bra{\beta_x}-\mathds{I}, \label{eq:sm_H1}
\end{align}
where $\mathds{I}$ is the identity operator in $\mathscr{H}^N$, whose dimension should be clear from the context.
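As a numerical sanity check (our sketch, using numpy), one can build $H_0$ directly from the tile vectors for $N=8$ and verify that it is a Hermitian, unitary involution whose matrix is block-diagonal with $2\times2$ blocks equal to $X$:

```python
import numpy as np

# Build H_0 = 2 * sum_x |alpha_x><alpha_x| - I from the tile vectors of
# Eq. (sm_alpha_latt) for N = 8 and check its structure.
N = 8

def tile_vector(v, w):
    e = np.zeros(N); e[v] = 1.0
    f = np.zeros(N); f[w] = 1.0
    return (e + f) / np.sqrt(2)

H0 = -np.eye(N)
for x in range(N // 2):
    a = tile_vector(2 * x, 2 * x + 1)
    H0 += 2 * np.outer(a, a)

X = np.array([[0, 1], [1, 0]])
assert np.allclose(H0, H0.T)                         # Hermitian
assert np.allclose(H0 @ H0, np.eye(N))               # unitary involution
assert np.allclose(H0, np.kron(np.eye(N // 2), X))   # block-diagonal X structure
```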
The evolution operator for the SQW on the cycle is given by
\begin{equation}
{{U}}=\textrm{e}^{{-i}\theta H_1}\textrm{e}^{{-i}\theta H_0},\label{eq:unit}
\end{equation}
where $\theta$ is an angle~\cite{POM17}. The quantum walk dynamics is generated by repeatedly applying the evolution operator, at discrete time steps, on an initial state.
\begin{figure}
\caption{Tessellation cover of the 8-cycle showing the vectors associated with each tile $\{v,w\}$.}
\label{fig:non-uniform-tiles-N=8}
\end{figure}
In matrix form, operators $H_0$ and $H_1$ are given by
\begin{align}
H_0 &= \mathds{I} \otimes X,\label{eq:sm_H0_matrix} \\
H_1 &= \begin{bmatrix}
0 & & 1 \\
 & \mathds{I} \otimes X & \\
1 & & 0
\end{bmatrix},\label{eq:sm_H1_matrix}
\end{align}
where $X=\left(\begin{smallmatrix}0&1 \\1 & 0 \end{smallmatrix} \right)$ and the empty entries are 0. Using that
\begin{equation}\label{eq:R_x}
R_x(\theta)=\exp(-i\theta X/2) = \begin{bmatrix} \cos \frac{\theta}{2} & -i\sin \frac{\theta}{2} \\ -i\sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix},
\end{equation}
we obtain the evolution generated by $H_0$ and $H_1$, respectively, as
\begin{equation}\label{eq:U_0}
{{U}}_0 = \mathds{I} \otimes R_x(2\theta),
\end{equation}
which is a block-diagonal matrix, and
\begin{equation}\label{eq:U_1}
{{U}}_1 = \begin{bmatrix}
\cos\theta & & -i\sin\theta \\
& \mathds{I} \otimes R_x(2\theta) & \\
-i\sin\theta & & \cos\theta
\end{bmatrix},
\end{equation}
which is a permutation of the rows and columns of ${{U}}_0$.
In fact, the permutation (and circulant) matrix
\begin{equation}\label{eq:permutation}
P \,=\,
\sum_x \ket{x+1}\bra{x}
\,=\,
\begin{bmatrix}
0 & & & & 1 \\
1 & 0 & & & \\
& 1 & & & \\
& & \ddots & \ddots & \\
0 & & & 1 & 0
\end{bmatrix},
\end{equation}
transforms ${{U}}_0$ to ${{U}}_1$ via the similarity transformation ${{U}}_1=P^{-1} {{U}}_0 P$.
The SQW evolution operator can, therefore, be written as ${{U}}=P^{-1}{{U}}_0P\,{{U}}_0$. $P$ shifts the walker to the right and $P^{-1}$ shifts it to the left. This dynamics is similar to the split-step protocol introduced by Kitagawa \textit{et al.}~\cite{KRBD10}, whose implementation using photonic technology was described in~\cite{KBFRBKADW12}. In the split-step protocol, the shift to the right (left) occurs only if the particle spin is up (down); otherwise the particle stays put. In the staggered dynamics, the spin plays no role and the shift to the right or left is unconditional.
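These relations are straightforward to verify numerically. The sketch below (our illustration) builds ${{U}}_0$ and $P$ for $N=8$, checks the similarity transformation ${{U}}_1=P^{-1}{{U}}_0P$ against the wrap-around block of Eq. (\ref{eq:U_1}), and confirms that one step of the walk is unitary:

```python
import numpy as np

# Numerical check of U_1 = P^{-1} U_0 P and of the unitarity of the step
# operator U = P^{-1} U_0 P U_0, for N = 8 and theta = pi/3 (a sketch).
N, theta = 8, np.pi / 3
Rx = np.array([[np.cos(theta), -1j * np.sin(theta)],
               [-1j * np.sin(theta), np.cos(theta)]])   # R_x(2*theta)
U0 = np.kron(np.eye(N // 2), Rx)
P = np.roll(np.eye(N), 1, axis=0)                        # P|x> = |x+1 mod N>
U1 = P.T @ U0 @ P                                        # similarity transform
U = U1 @ U0                                              # one walk step

assert np.allclose(U @ U.conj().T, np.eye(N))            # the step is unitary
# corner entries of U1 reproduce the wrap-around block of Eq. (U_1)
assert np.isclose(U1[0, 0], np.cos(theta))
assert np.isclose(U1[0, N - 1], -1j * np.sin(theta))
```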
In the staggered model, the unit vectors associated with the tiles can be different from the ones described by Eqs.~(\ref{eq:sm_alpha_latt}) and~(\ref{eq:sm_beta_latt}). In this case, the new local evolution operators ${{U}}_0$ and ${{U}}_1$ have the same structure described in Eqs.~(\ref{eq:U_0}) and~(\ref{eq:U_1}), but with new $2\times 2$ matrices in place of $R_x(2\theta)$. For instance, if the unit vector associated with the first tile of tessellation ${\mathcal T}_\beta$ is $(\ket{1}\pm i\ket{2})/\sqrt 2$, the corresponding block $R_x(2\theta)$ in the expression of ${{U}}_1$ is replaced by $R_y(\pm2\theta)$, where $R_y(2\theta)=\left(\begin{smallmatrix}\cos\theta&-\sin\theta \\ \sin\theta & \cos\theta \end{smallmatrix} \right)$. We use such tiles to shorten the decomposition of ${{U}}_1$ in terms of basic gates.
\subsubsection*{Two interacting quantum walkers}
Let us address the dynamics of a 2-particle quantum walk on a cycle with a special type of interaction between the walkers. The evolution operator of two independent quantum walks on a cycle is the tensor product
$${{U}}_\text{free}=\left({{U}}^{(1)}_1 {{U}}^{(1)}_0\right) \otimes \left({{U}}^{(2)}_1 {{U}}^{(2)}_0\right)$$
of two 1-particle quantum walks. The resulting operator belongs to the Hilbert space $\mathscr{H}^{N} \otimes\mathscr{H}^{N}$.
Now, suppose the walkers interact when they are simultaneously at the same vertex of the cycle, and consider the interaction described by a phase shift $\phi$ on top of the free evolution operator. The modified evolution operator is
\begin{equation}
{{U}} = {{U}}_\text{free} \, R,
\end{equation}
where
\begin{equation}
\label{eq:interaction_operator}
R \,\ket{x_1}\ket{x_2} = \begin{cases}
e^{i\phi}\ket{x_1}\ket{x_2}, &\text{if}\;\;\; x_1=x_2,\\
\ket{x_1}\ket{x_2}, &\text{otherwise.}
\end{cases}
\end{equation}
$R$ is a diagonal matrix, whose diagonal entries are either 1 or $e^{i\phi}$.
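For two walkers on an $N$-node cycle, $R$ can be built directly as a diagonal $N^2\times N^2$ matrix. The following sketch is our illustration (the function name is ours, not from the paper):

```python
import numpy as np

# Diagonal interaction operator R of Eq. (interaction_operator): the basis
# state |x1>|x2> (index x1*N + x2) picks up e^{i*phi} when x1 == x2.
def interaction_R(N, phi):
    diag = np.array([np.exp(1j * phi) if x1 == x2 else 1.0
                     for x1 in range(N) for x2 in range(N)])
    return np.diag(diag)

R = interaction_R(4, np.pi / 2)
assert R.shape == (16, 16)
assert np.isclose(R[0, 0], 1j)     # x1 = x2 = 0 picks up e^{i*pi/2} = i
assert np.isclose(R[1, 1], 1.0)    # x1 = 0, x2 = 1: no phase
```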
An alternative interacting model, similar to the one used when designing quantum search algorithms on graphs, is
\begin{equation}
\label{eq:interaction_operator_2}
R \,\ket{x_1}\ket{x_2} = \begin{cases}
e^{i\phi}\ket{x_1}\ket{x_2}, &\text{if}\;\;\; x_1=x_2=x^0,\\
\ket{x_1}\ket{x_2}, &\text{otherwise,}
\end{cases}
\end{equation}
where $x^0$ is a marked vertex. The decomposition of operator $R$ in the alternative model in terms of basic gates is shorter than the original one.
\subsection{Decomposition of the evolution operator}\label{sec:decomp}
In this section, we present the decomposition of the SQW evolution operators assuming that $N=2^n$ for an integer $n$. The Hilbert space $\mathscr{H}^N$ is spanned by the computational basis of $n$ qubits.
Note that each vertex of the cycle is represented by a computational basis vector $\ket{q_0\ldots q_{n-1}}$, where $q_i$ are qubits.
\subsubsection*{Decomposition of the permutation matrix $P$}
The matrix representation of the operator ${{U}}_0$, given by Eq.~(\ref{eq:U_0}), has the decomposition ${{U}}_0=\mathds{I}_2^{\otimes n-1} \otimes R_x(2\theta)$, that is, $(n-1)$ $2\times 2$-identity operators acting on the first $n-1$ qubits and a rotation $R_x(2\theta)$ on the last qubit (see the central part of Fig.~\ref{fig:U_1}). Fig.~\ref{fig:U_1} also depicts the circuit that implements ${{U}}_1$. The circuit of $P$ is shown at the left-hand part and its inverse $P^{-1}$ at the right-hand part.
\begin{figure}
\caption{Circuit of the operator ${{U}}_1$, with $P$ on the left, ${{U}}_0$ in the center, and $P^{-1}$ on the right.}
\label{fig:U_1}
\end{figure}
The decomposition of matrix $P$ uses multi-controlled Toffoli gates~\cite{DW09}, which are defined in the following way. Suppose that $C_{i_1,i_2,\ldots}(X_j)$ represents a multi-controlled Toffoli gate with control qubits ${q_{i_1}},{q_{i_2}},\ldots$ and the target qubit ${q_j}$. The action of $C_{i_1,i_2,\ldots}(X_j)$ is nontrivial only on $\ket{q_j}$, which is given by
\begin{equation}
\label{eq:generalized_Toff}
C_{i_1,i_2,\ldots}(X_j)\ket{q_{i_1},q_{i_2}...}\ket{q_j}=\ket{q_{i_1},q_{i_2}...}X^{q_{i_1}\cdot q_{i_2} \cdots}\ket{q_j}=\ket{q_{i_1},q_{i_2}...}\ket{q_j\oplus (q_{i_1}\cdot q_{i_2}\cdots)},
\end{equation}
that is, the state of qubit ${q_j}$ changes only if $q_{i_1}$, $q_{i_2},\ldots$ are all set to 1. Note that, in Fig.~\ref{fig:U_1}, the first (top) qubit of the circuit is ${q_0}$ and the last (bottom) one is ${q_{n-1}}$. The correctness proof of this decomposition is shown in Appendix~\ref{appen:proof}.
\subsubsection*{Decomposition of the multi-controlled Toffoli gate}
To decompose the multi-controlled Toffoli gate $C_{0,...,n-2}(X_{n-1})$, which has $n-1$ control qubits $q_0$, ..., $q_{n-2}$ and one target qubit $q_{n-1}$, we initially use the identity
$$C_{0,...,n-2}(X_{n-1})=H_{n-1} C_{0,...,n-2}(Z_{n-1})H_{n-1},$$
where $H_{n-1}$ is the Hadamard gate acting on qubit $q_{n-1}$, and then we focus on the method to decompose $C_{0,...,n-2}(Z_{n-1})$. Fig.~\ref{fig:economicdecomp} shows how to decompose $C_{0,...,n-2}(Z_{n-1})$ in terms of a sequence of multi-controlled $R_z(\theta)$,
\begin{equation}\label{eq:R_z}
R_z(\theta)=\exp(-i\theta Z/2) =
\begin{bmatrix} \text{e}^{-i\theta/2} & 0 \\ 0 & \text{e}^{i\theta/2} \end{bmatrix},
\end{equation}
where $\theta=\pi/2^{n-j}$, $j=1,...,n$.
\begin{figure}
\caption{Decomposition of $C_{0,...,n-2}(Z_{n-1})$.}
\label{fig:economicdecomp}
\end{figure}
An example of the decomposition of the multi-controlled $R_z(\pi)$ gate for $n=4$ is depicted in Fig.~\ref{fig:mcRz}. This is the last multi-controlled gate in the decomposition of the CCCZ gate. The generic decomposition of the multi-controlled $R_z(\pi/2^j)$ gate is given in terms of an alternated sequence of CNOT and $u_1(\pm\pi/2^{j-n})$ gates as described by function \verb|new_mcrz| in Appendix~\ref{appen_1}, where
\begin{equation}\label{eq:gate_U_1}
u_1(\theta)=
\begin{bmatrix} 1 & 0 \\ 0 & \text{e}^{i\theta} \end{bmatrix}.
\end{equation}
Note that $R_z(\theta)$ and $u_1(\theta)$ differ by a global phase and sometimes can be interchanged.
The positions of the CNOT controls (except for the first CNOT) in the decomposition of the multi-controlled $R_z(\pi/2^j)$ gate are given by function
\begin{equation}\label{a(k)}
a(k)\,=\,\log_2[k-k\&(k-1)],
\end{equation}
where \& is the bitwise AND operator. For instance, the positions of the CNOT controls in Fig.~\ref{fig:mcRz} starting from the second is 0,1,0,2,0,1,0, which correspond to $a(1)$, ..., $a(7)$. The decomposition used in this work is useful only when the number of qubits is small, since the decomposition size increases as an exponential function in terms of the number of qubits. When the number of qubits is large, it is recommended to use ancilla qubits~\cite{HLZWW17}.
\begin{figure}
\caption{Decomposition of the multi-controlled $R_z(\pi)$ gate for $n=4$.}
\label{fig:mcRz}
\end{figure}
\subsubsection*{Alternative version to matrix $P$}
In this subsection we describe an alternative version to matrix $P$, whose decomposition in terms of basic gates is shorter. The strategy is to use a sequence of vectors $\ket{\pm}=(\ket{v}\pm \ket{w})/\sqrt 2$ and $\ket{\pm i}=(\ket{v}\pm i\ket{w})/\sqrt 2$ as the unit vectors associated with the tiles of tessellation $\mathcal{T}_\beta$, where $v$ and $w$ are the vertices of the tile. The sequence is described by function $a(k)$ modulo 4 starting with $k=1$, where $a(k)$ is given by (\ref{a(k)}), and by Table~\ref{table:ak}, which associates each value $a(k) \mod 4$ with a unit vector in the set $\{\ket{\pm},\ket{\pm i}\}$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|l|c|l|}
\hline
$a(k)\mod 4$ & vector & Hamiltonian & evolution \\
\hline
0 & $\ket{+}$ & X &$R_x(2\theta)$ \\
\hline
1 & $\ket{-i}$ & -Y & $R_y(-2\theta)$ \\
\hline
2 & $\ket{-}$ & -X &$R_x(-2\theta)$ \\
\hline
3 & $\ket{+i}$ & Y &$R_y(2\theta)$ \\
\hline
\end{tabular}
\end{center}
\caption{Association between the set of tiles of tessellation $\mathcal{T}_\beta$ and the unit vectors $\ket{\pm}=(\ket{v}\pm \ket{w})/\sqrt 2$, $\ket{\pm i}=(\ket{v}\pm i\ket{w})/\sqrt 2$. The third and fourth columns describe the sub-matrices of $H_1$ and ${{U}}_1$, respectively.}\label{table:ak}
\end{table}
In order to obtain the new local operator ${{U}}_1$, which uses the new unit vectors, we replace all multi-controlled Toffoli gates (with 2 or more controls) by multi-controlled $C(R_x(\pi))$ gates. Fig.~\ref{fig:new_P} shows the circuit of the new version for 4 qubits. Note that the multi-controlled $C(R_x(\pi))$ gates can be expressed as $HC(R_z(\theta))H$, where $H$ is the Hadamard gate. The decomposition of the new version in terms of basic gates can be accomplished using the technique shown in Fig.~\ref{fig:mcRz}. The number of CNOT gates in our decomposition of the alternative version to $P$ is 13 for $n=4$, fewer than the 21 CNOTs required by the original $P$.
\begin{figure}
\caption{Circuit of the alternative version to matrix $P$. }
\label{fig:new_P}
\end{figure}
In the one-dimensional case, there is a straightforward equivalence between the SQW and coined models. The alternative version to $P$ in ${{U}}_1$ represents a nonhomogeneous coin, that is, a different coin for each vertex. ${{U}}_0$ on the other hand represents a lazy (when $\theta<\pi/2$) flip-flop shift operator.
\subsection{Implementations on IBM quantum computers}
We use Qiskit\footnote{\url{https://qiskit.org/}} to build the circuits of the evolution operators and to run the experiments shown in this section. The experiments must be run when the error rates of the quantum computers are as low as possible; otherwise, the results are useless.
\subsubsection*{Results for one quantum walker}
Fig.~\ref{fig:4q1pqx2} depicts the probability distribution after one step of a staggered quantum walk with the modified tiles that are associated with the alternative version to matrix $P$. The walker's initial position is the origin. The action of the evolution operator spreads the position among vertices 0, 1, 2, and 15.
\begin{figure}
\caption{Probability distribution after one step using simulation (blue), ibmqx2 (red), and vigo (salmon) quantum computers employing 4 qubits. }
\label{fig:4q1pqx2}
\end{figure}
The blue bars represent the simulated probability distribution, and the red and salmon bars represent the results of the experiments on the ibmqx2 and ibmq\_vigo quantum computers, respectively. The fidelities between the simulated and actual results are given in Table~\ref{table:fid1}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
fidelity & ibmqx2 & vigo \\
\hline
$1-d$ & 0.519 & 0.547 \\
\hline
$1-h$ & 0.468 & 0.486 \\
\hline
\end{tabular}
\end{center}
\caption{Fidelities between the probability distributions generated by the quantum computer $p$ and the exact simulation $q$ for one walker on a 16-vertex cycle, where the total variation distance $d$ and the Hellinger distance $h$ are given by $d=\frac{1}{2}\sum_x |p_x-q_x|$ and $h^2=\frac{1}{2} \sum_x \left(\sqrt{p_x}-\sqrt{q_x}\right)^2$.}\label{table:fid1}
\end{table}
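The two fidelity measures used throughout this section can be computed directly from the measured distribution $p$ and the exact distribution $q$ (a sketch; the function name \verb|fidelities| is our illustration, not from the paper):

```python
import numpy as np

# Fidelity measures of Table (fid1): 1 - d (total variation distance) and
# 1 - h (Hellinger distance), for probability distributions p and q.
def fidelities(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = 0.5 * np.sum(np.abs(p - q))
    h = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return 1 - d, 1 - h

# Identical distributions give perfect fidelity in both measures.
fd, fh = fidelities([0.25] * 4, [0.25] * 4)
assert np.isclose(fd, 1.0) and np.isclose(fh, 1.0)
```

Both measures equal 1 for identical distributions and 0 for distributions with disjoint support.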
\subsubsection*{Results for two interacting quantum walkers}
Fig.~\ref{fig:interacting2} depicts the probability distribution of two interacting quantum walkers up to two steps on a 4-node cycle. We have used the interaction given by~(\ref{eq:interaction_operator_2}), taking the node with label 3 as marked. The initial state is $(x_1,x_2)=(0,2)$, which is obtained with high fidelity. After the first step, the positions of both walkers spread along the whole cycle, and the walkers interact at the node with label 3. After the second step, we have an entangled state showing that the walkers are at $(x_1,x_2)=(1,3)$ with high probability (77.6\%).
\begin{figure}
\caption{Probability distribution of the initial state (i.s.), first and second steps using simulation (blue) and vigo quantum computer (red) for two interacting quantum walkers on a 4-vertex cycle using 4 qubits. }
\label{fig:interacting2}
\end{figure}
The fidelities between the exact calculations (blue) and the results generated by the quantum computer (red) are given in Table~\ref{table:fid2}. Although the fidelity of the second step is not high, the position of the largest peak of probability distribution obtained from the quantum computer coincides with the correct position.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
fidelity & i.s. & step 1 & step 2 \\
\hline
$1-d$ & 0.950 & 0.885 & 0.433 \\
\hline
$1-h$ & 0.814 & 0.890 & 0.572 \\
\hline
\end{tabular}
\end{center}
\caption{Fidelities between the probability distributions generated by the quantum computer and by the exact calculations.}\label{table:fid2}
\end{table}
\section{Quantum walk on the torus}\label{sec:grid}
Consider a two-dimensional square lattice with cyclic boundary conditions and ${\sqrt{N}}\times {\sqrt{N}}$ vertices labeled by $0, \ldots, {N}-1$, where $N$ is a square number. At least four tessellations are required to define the evolution operator of a SQW on the lattice~\cite{ACFKMPP20}. There are infinitely many ways to tessellate the lattice. The simplest non-trivial choice is depicted in Fig.~\ref{fig:lattice}.
\begin{figure}
\caption{A tessellation cover of the two-dimensional lattice with cyclic borders in the form of a 64-vertex torus. Each tessellation is associated with a local unitary operator.}
\label{fig:lattice}
\end{figure}
Using the one-dimensional SQW evolution operators (\ref{eq:U_0}) and (\ref{eq:U_1}), and labeling the vertices row by row from the top-left to the bottom-right corner, we obtain the matrix form of the two-dimensional SQW operators
\begin{align}
{{U}}_{00} &= \left(\mathds{I} \otimes \begin{bmatrix}1 & 0\\0 & 0\\ \end{bmatrix}\right) \otimes {{U}}_0 + \left(\mathds{I} \otimes \begin{bmatrix}0 & 0\\0 & 1\\ \end{bmatrix}\right) \otimes {{U}}_1 \label{eq:U_0_hor},\\
{{U}}_{10} &= \left(\mathds{I} \otimes \begin{bmatrix}1 & 0\\0 & 0\\ \end{bmatrix}\right) \otimes {{U}}_1 + \left(\mathds{I} \otimes \begin{bmatrix}0 & 0\\0 & 1\\ \end{bmatrix} \right) \otimes {{U}}_0 \label{eq:U_1_hor},
\end{align}
corresponding to the blue and red (first and second) tessellations in Fig.~\ref{fig:lattice}, and
\begin{align}
{{U}}_{01} &= {{U}}_0 \otimes \left(\mathds{I} \otimes \begin{bmatrix}1 & 0\\0 & 0\\ \end{bmatrix}\right) + {{U}}_1 \otimes \left(\mathds{I} \otimes \begin{bmatrix}0 & 0\\0 & 1\\ \end{bmatrix}\right) \label{eq:U_0_ver},\\
{{U}}_{11} &= {{U}}_1 \otimes \left(\mathds{I} \otimes \begin{bmatrix}1 & 0\\0 & 0\\ \end{bmatrix}\right) + {{U}}_0 \otimes \left(\mathds{I} \otimes \begin{bmatrix}0 & 0\\0 & 1\\ \end{bmatrix}\right) \label{eq:U_1_ver},
\end{align}
for the brown and green (third and fourth) tessellations in Fig.~\ref{fig:lattice}.
The evolution operator for the SQW with Hamiltonians on the lattice is given by \cite{POM17,moqadam2018boundary}
\begin{equation}
{{U}}^{\mathrm{2D}} = {{U}}_{11}{{U}}_{10}{{U}}_{01}{{U}}_{00}.
\label{eq:2DSQW}
\end{equation}
The same evolution operator (with $\theta=\pi/4$) was used in \cite{PF17} to describe a quantum walk-based search algorithm, and it is related to the alternate two-step model proposed in \cite{DMB11}.
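The tensor structure of Eqs.~(\ref{eq:U_0_hor})--(\ref{eq:U_1_ver}) and the unitarity of $U^{\mathrm{2D}}$ can be checked numerically. The one-dimensional operators $U_0$, $U_1$ are defined earlier in the text; the sketch below uses random unitaries as stand-ins, since the block structure does not depend on their specific form:

```python
import numpy as np

sqrtN = 4                                  # 4x4 lattice, N = 16 vertices
Ihalf = np.eye(sqrtN // 2)
P00 = np.diag([1.0, 0.0])                  # |0><0|
P11 = np.diag([0.0, 1.0])                  # |1><1|

# stand-ins for the one-dimensional SQW operators U_0, U_1
rng = np.random.default_rng(0)
def rand_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q
U0, U1 = rand_unitary(sqrtN), rand_unitary(sqrtN)

# horizontal operators U_00, U_10 and vertical operators U_01, U_11
U00 = np.kron(np.kron(Ihalf, P00), U0) + np.kron(np.kron(Ihalf, P11), U1)
U10 = np.kron(np.kron(Ihalf, P00), U1) + np.kron(np.kron(Ihalf, P11), U0)
U01 = np.kron(U0, np.kron(Ihalf, P00)) + np.kron(U1, np.kron(Ihalf, P11))
U11 = np.kron(U1, np.kron(Ihalf, P00)) + np.kron(U0, np.kron(Ihalf, P11))

U2D = U11 @ U10 @ U01 @ U00                # one step of the 2D walk
assert np.allclose(U2D @ U2D.conj().T, np.eye(sqrtN ** 2))
```

Each summand acts on a disjoint block selected by the projectors $\ket{0}\!\bra{0}$ and $\ket{1}\!\bra{1}$, which is why every $U_{ij}$, and hence $U^{\mathrm{2D}}$, is unitary.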
\subsection{Decomposition of the 2D SQW evolution}
Assume that $\sqrt{N}$ is a power of two. The matrix representation of the operators given by Eqs.~(\ref{eq:U_0_hor}) and (\ref{eq:U_1_hor}) has the decomposition
\begin{align}
{{U}}_{00} &= \mathds{I}\otimes \ket{0}\bra{0} \otimes {{U}}_0 + \mathds{I}\otimes \ket{1}\bra{1} \otimes P^{-1}{{U}}_0 P \nonumber\\
&= Q_x^{-1} ( \mathds{I}\otimes {{U}}_0 ) {Q_x}, \label{eq:U_0_hor_decomp} \\
{{U}}_{10} &= \mathds{I} \otimes \ket{1}\bra{1} \otimes {{U}}_0 + \mathds{I} \otimes \ket{0}\bra{0} \otimes P^{-1}{{U}}_0 P \nonumber \\
&= (\mathds{I}\otimes X \otimes \mathds{I}) {{U}}_{00} (\mathds{I}\otimes X \otimes \mathds{I}) \label{eq:U_1_hor_decomp},
\end{align}
where $\ket{0}\bra{0}=\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$, $\ket{1}\bra{1}=\left(\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right)$, $\{\ket{0},\ket{1}\}$ is the computational basis for the Hilbert space corresponding to a single qubit, and ${Q_x}$ is given by
\begin{align}
{Q_x} &= \mathds{I}\otimes\ket{0}\bra{0} \otimes \mathds{I} + \mathds{I}\otimes\ket{1}\bra{1} \otimes P. \label{eq:Q_0}
\end{align}
Operator ${Q_x}$ is a controlled-$P$ gate with the control qubit $\ket{q_{n/2-1}}$ and target qubits $\ket{q_{n/2}\ldots q_{n-1}}$. The inverse of ${Q_x}$ is obtained by replacing $P$ with $P^{-1}$ in Eq.~(\ref{eq:Q_0}).
Similarly, we find
\begin{align}
{{U}}_{01} &= {{U}}_0 \otimes \mathds{I} \otimes \ket{0}\bra{0} + P^{-1}{{U}}_0 P \otimes \mathds{I} \otimes \ket{1}\bra{1} \nonumber\\
&= Q_y^{-1} \Bigl( {{U}}_0\otimes \mathds{I} \Bigr) Q_y \label{eq:U_0_ver_decomp},\\
{{U}}_{11} &= P^{-1}{{U}}_0 P \otimes \mathds{I} \otimes \ket{0}\bra{0} + {{U}}_0 \otimes \mathds{I} \otimes \ket{1}\bra{1} \nonumber\\
&= (\mathds{I} \otimes X) {{U}}_{01} (\mathds{I} \otimes X) \label{eq:U_1_ver_decomp},
\end{align}
corresponding to operators (\ref{eq:U_0_ver}) and (\ref{eq:U_1_ver}), where
\begin{align}
Q_y &= \mathds{I} \otimes \mathds{I} \otimes \ket{0}\bra{0} + P \otimes \mathds{I} \otimes \ket{1}\bra{1}, \label{eq:R_0}
\end{align}
which is a controlled-$P$ gate with the control qubit $\ket{q_{n-1}}$ and target qubits $\ket{q_{0}\ldots q_{n/2-1}}$.
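The decomposition identities (\ref{eq:U_0_hor_decomp}) and (\ref{eq:U_0_ver_decomp}) can be verified by brute force, taking $P$ as the cyclic shift $P\ket{q}=\ket{q+1 \bmod \sqrt{N}}$ (cf. the Appendix) and an arbitrary stand-in unitary for $U_0$:

```python
import numpy as np

sqrtN = 4
I2, I4 = np.eye(2), np.eye(4)
P00 = np.diag([1.0, 0.0])
P11 = np.diag([0.0, 1.0])

# cyclic shift P|q> = |q+1 mod sqrtN> and its inverse
P = np.roll(np.eye(sqrtN), 1, axis=0)
Pinv = P.T

rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.normal(size=(sqrtN, sqrtN)) + 1j * rng.normal(size=(sqrtN, sqrtN)))

# Q_x = I (x) |0><0| (x) I + I (x) |1><1| (x) P, and its inverse
Qx  = np.kron(np.kron(I2, P00), np.eye(sqrtN)) + np.kron(np.kron(I2, P11), P)
Qxi = np.kron(np.kron(I2, P00), np.eye(sqrtN)) + np.kron(np.kron(I2, P11), Pinv)
lhs = np.kron(np.kron(I2, P00), U0) + np.kron(np.kron(I2, P11), Pinv @ U0 @ P)
assert np.allclose(lhs, Qxi @ np.kron(I4, U0) @ Qx)   # Eq. (U_0_hor_decomp)

# Q_y = I (x) I (x) |0><0| + P (x) I (x) |1><1|, and its inverse
Qy  = np.kron(np.eye(sqrtN), np.kron(I2, P00)) + np.kron(P, np.kron(I2, P11))
Qyi = np.kron(np.eye(sqrtN), np.kron(I2, P00)) + np.kron(Pinv, np.kron(I2, P11))
lhs = np.kron(U0, np.kron(I2, P00)) + np.kron(Pinv @ U0 @ P, np.kron(I2, P11))
assert np.allclose(lhs, Qyi @ np.kron(U0, I4) @ Qy)   # Eq. (U_0_ver_decomp)
```

The projectors absorb the cross terms, so conjugation by $Q_x$ ($Q_y$) applies $P^{-1}U_0P$ exactly on the subspace selected by $\ket{1}\!\bra{1}$.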
Fig.~\ref{fig:2dsqw} shows the circuit that implements the SQW evolution operator given by Eq.~(\ref{eq:2DSQW}) on a 16-vertex two-dimensional lattice.
\begin{figure}
\caption{Circuit of the two-dimensional SQW evolution operator.}
\label{fig:2dsqw}
\end{figure}
\subsection{Implementations on IBM quantum computers}
In the construction of the controlled-$P$ gate, we use the alternative version to $P$ and add the control qubit to all its components, as depicted in Fig.~\ref{fig:new_cP}.
As discussed earlier, the alternative version to $P$ has a shorter decomposition in terms of basic gates at the cost of changing the unit vectors associated with the tiles of the tessellations shown in Fig.~\ref{fig:lattice}.
The corresponding sequence of unit vectors introduced by the controlled gate is described by the function $a(k) \bmod 4$ of Eq.~(\ref{a(k)}), as for the alternative version to $P$, after interchanging $\ket{\pm}\leftrightarrow\ket{\pm i}$ in Table~\ref{table:ak}.
\begin{figure}
\caption{Circuit of the alternative version to controlled-$P$ obtained from the alternative version to $P$ described in Fig.~\ref{fig:new_P}.}
\label{fig:new_cP}
\end{figure}
\begin{figure}
\caption{Probability distribution after one step of the two-dimensional SQW, with a non-local initial state, using simulation (blue) and vigo quantum computer (red) employing 4 qubits.}
\label{fig:2dsqw-vigo}
\end{figure}
Fig.~\ref{fig:2dsqw-vigo} depicts the probability distribution after one step of the two-dimensional SQW with the modified tiles. The walker's initial position is an equal superposition of all vertices with even labels, each with amplitude $1/\sqrt{8}$.
The fidelities between the simulated and actual results are given in Table~\ref{table:fid1_2D}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|}
\hline
fidelity & vigo \\
\hline
$1-d$ & 0.515 \\
\hline
$1-h$ & 0.468\\
\hline
\end{tabular}
\end{center}
\caption{Fidelities between the probability distributions generated by the vigo quantum computer and the exact simulation for one walker on a 16-vertex two-dimensional lattice.}\label{table:fid1_2D}
\end{table}
\section{Quantum walk-based spatial search}\label{sec:search}
In this section, we describe the implementation of a quantum walk-based spatial search algorithm on complete graphs with 8 ($K_8$) and 16 ($K_{16}$) vertices. Since the complete graph $K_N$ is 1-tessellable, the evolution operator of a staggered quantum walk on $K_N$ is the Grover operator $G$, given by $G=-H^{\otimes n}RH^{\otimes n}$, where $n=\log_2 N$ and
\begin{equation}
R = I-2\ket{0}\bra{0}.
\end{equation}
A quantum walk-based search algorithm uses a modified evolution operator ${{U}}'$~\cite{Por18book}, given by
\begin{equation}
{{U}}' = G\,R,
\end{equation}
when the marked vertex has label 0. The initial state $\ket{\psi_0}$ is the uniform superposition of all states of the computational basis, $\ket{\psi_0}=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}\ket{j}$, and the optimal number of steps is the closest integer to $(\pi/4)\sqrt{N}$. Note that the quantum walk-based spatial search on the complete graph is equivalent to Grover's algorithm~\cite{Gro97,Por18book}.
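Since the search on $K_N$ is equivalent to Grover's algorithm, its ideal dynamics is easy to simulate with dense matrices; a minimal sketch (the function name is ours) for the marked vertex 0:

```python
import numpy as np

def search_success(N, steps):
    # success probability at the marked vertex 0 after applying
    # U' = G R `steps` times to the uniform initial state (N a power of 2)
    n = int(np.log2(N))
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)        # H^{(x) n}
    R = np.eye(N)
    R[0, 0] = -1.0                 # R = I - 2|0><0|
    G = -Hn @ R @ Hn               # Grover operator of the 1-tessellable K_N
    U = G @ R
    psi = np.full(N, 1.0 / np.sqrt(N))
    for _ in range(steps):
        psi = U @ psi
    return abs(psi[0]) ** 2

print(round(search_success(8, 2), 3))   # prints 0.945, near (pi/4)*sqrt(8) ~ 2 steps
```

The probability follows the familiar $\sin^2((2k+1)\theta)$ law with $\sin\theta=1/\sqrt{N}$, peaking at the closest integer to $(\pi/4)\sqrt{N}$ steps.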
\subsection{Implementation on IBM quantum computers}
Since ${{U}}'= -(H^{\otimes n}R)^2$, the missing task is to find the decomposition of $R$. It is straightforward to check that
\begin{equation}
R=X^{\otimes n}C_{0,...,n-2}(Z_{n-1})X^{\otimes n}.
\end{equation}
The decomposition of $C_{0,...,n-2}(Z_{n-1})$ is depicted in Fig.~\ref{fig:economicdecomp}. As we have discussed in Sec.~\ref{sec:decomp}, the number of basic gates is reduced if we replace $Z$ with $R_z(\pi)$ in $C_{0,...,n-2}(Z_{n-1})$. So, instead of using $R$ in our implementations, we use $R'$, which is given by
\begin{equation}
R'=X^{\otimes n}C_{0,...,n-2}(R_z(\pi))X^{\otimes n}.
\end{equation}
The success probability using $R'$ is not as high as in the original algorithm, but it is high enough for complete graphs with up to 16 vertices.
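The decomposition of $R$ is easy to verify numerically, since $X^{\otimes n}$ maps $\ket{0\ldots0}$ to $\ket{1\ldots1}$ and the multi-controlled $Z$ flips only the phase of $\ket{1\ldots1}$; a check for $n=3$:

```python
import numpy as np

n = 3
N = 2 ** n
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Xn = X
for _ in range(n - 1):
    Xn = np.kron(Xn, X)            # X^{(x) n}

CZ = np.eye(N)
CZ[N - 1, N - 1] = -1.0            # C_{0,...,n-2}(Z_{n-1}): phase flip on |1...1>

R = np.eye(N)
R[0, 0] = -1.0                     # I - 2|0><0|
assert np.allclose(Xn @ CZ @ Xn, R)
```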
\begin{figure}
\caption{Probability distribution of a quantum walk-based search algorithm on $K_8$ (left) and $K_{16}$ (right).}
\label{fig:spatialsearch}
\end{figure}
Fig.~\ref{fig:spatialsearch} depicts the probability distribution after three steps of the quantum walk-based search algorithm on the complete graph $K_8$ (left panel) and $K_{16}$ (right panel); Table~\ref{table:fid1_2D_search} shows the corresponding fidelities using the total variation distance $d$ and the Hellinger distance $h$. The success probability of finding the marked vertex for the $K_8$ case using a quantum computer is $0.674$ and for the $K_{16}$ case is $0.218$. These results are better than a 3-attempt random search, which has success probability $3/8=0.375$ for 8 elements and $3/16=0.187$ for 16 elements.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
fidelity & $K_8$ & $K_{16}$ \\
\hline
$1-d$ & 0.825 & 0.639 \\
\hline
$1-h$ & 0.821 & 0.721 \\
\hline
\end{tabular}
\end{center}
\caption{Fidelities between the exact probability distribution (blue) and the probability distribution output by IBM quantum computers (red).}\label{table:fid1_2D_search}
\end{table}
\section{Conclusions}\label{sec:conc}
In this work, we have implemented the evolution operator of staggered quantum walks on cycles, two-dimensional lattices, and complete graphs on IBM quantum computers. We have shown how to decompose each local unitary operator in terms of basic gates. The multi-controlled Toffoli gate is an important building block, whose decomposition can be analyzed independently of the remaining circuit.
We have implemented the first step of a quantum walk on the 16-node cycle and 16-node two-dimensional lattice, and obtained results with fidelity around 50\%. We have implemented two steps of two interacting quantum walkers on the 4-node cycle and obtained results with fidelity around 57\%. Appendix~\ref{appen_2} describes further results for a quantum walk on the 8-node cycle, with high fidelity up to eight steps. Although high, the fidelity is not a good measure of the quality of the results for the final steps, because the exact probability distributions are almost flat, and error-dominated outputs also have almost-flat associated probability distributions.
We have implemented a quantum walk-based search algorithm on complete graphs with 8 and 16 vertices; the results are more efficient than the equivalent classical algorithms. Since the quantum walk version is equivalent to Grover's algorithm (the Qiskit program is the same), we have implemented on IBM quantum computers a modified version of Grover's algorithm that is more efficient than classical brute-force search on unsorted lists of 8 and 16 elements.
We conclude that IBM quantum computers are able to produce non-trivial results in terms of quantum walk implementations, which includes non-classical aspects of quantum mechanics, such as the entanglement between quantum walkers.
We also conclude that the staggered model is suitable for NISQ computers because the implementation of coinless models requires fewer qubits than the coined model. For instance, the implementation of two interacting quantum walkers on a 4-cycle in the coined model needs six qubits, in contrast with four qubits in the coinless case.
\section*{Acknowledgments}
The authors thank J.~Valardan, M.~A.~V.~Macedo Jr., I.~J.~Ara\'ujo Jr.,~and M.~Paredes for useful discussions.
JKM acknowledges financial support from CNPq grant PCI-DA No.~304865/2019-2.
RP acknowledges financial support from CNPq grant No.~303406/2015-1 and Faperj grant CNE No. E-26/202.872/2018.
\appendix
\section{Appendix}\label{appen:proof}
\begin{proposition}
The decomposition of $P$ given by Eq.~(\ref{eq:permutation}) in terms of multi-controlled Toffoli gates is
\[
P = X_{k-1}\,C_{k-1}(X_{k-2})C_{k-2,k-1}(X_{k-3})\cdots C_{1,\ldots,k-1}\,(X_{0}),
\]
where $k$ is the number of qubits.
\end{proposition}
\begin{proof}
From Eq.~(\ref{eq:permutation}), we have
\begin{equation}
P\ket{q}=\begin{cases} \ket{q+1}, & \text{if }q<N-1 \\ \ket{0}, &\text{if }q=N-1, \end{cases}
\end{equation}
where $\ket{q}$ is a generic state of the computational basis in decimal notation. The binary representation of $q$ is $(q_0\ldots q_{k-1})_2$. In the case $q=N-1$, the binary representation of the state is $\ket{(1\ldots1)_2}$ and it is straightforward to verify that the circuit of $P$ in Fig.~\ref{fig:U_1} generates the desired output state $\ket{(0\ldots 0)_2}$. Suppose that $q<N-1$. The action of $P$ on a generic qubit state is
\begin{equation}
\begin{aligned}
P\ket{q_0\cdots q_{k-1}} &=
C_{1,\ldots,k-1}(X_{0})\ket{q_{0}}\cdots C_{k-2,k-1}(X_{k-3})\ket{q_{k-3}}\,\,C_{k-1}(X_{k-2})\ket{q_{k-2}}\,\,X_{k-1}\ket{q_{k-1}}. \nonumber\\
\end{aligned}
\end{equation}
Simplifying the right-hand side by using Eq.~(\ref{eq:generalized_Toff}) gives
\[
\ket{q_{0}\oplus (q_1\cdots q_{k-1})}\ldots \ket{q_{k-3}\oplus(q_{k-2}\cdot q_{k-1})}\ket{q_{k-2}\oplus q_{k-1}}\ket{q_{k-1}\oplus 1}.
\]
On the other hand, the addition $q+1$ in the binary representation, namely $(q_0 \cdots q_{k-1})_2\oplus 1$, yields
\[
\begin{tabular}{cccccc}
${\color{gray}q_1\cdots q_{k-1}}$ & & ${\color{gray}q_{k-2}\cdot q_{k-1}}$ & ${\color{gray}q_{k-1}}$ & &\\
$q_{0}$ & $\ldots$ & $q_{k-3}$ & $q_{k-2}$ & $q_{k-1}$ & \\
& & & & 1&$\oplus$ \\
\hline
$q_{0}\oplus (q_1\cdots q_{k-1})$ & $\ldots$ & $q_{k-3}\oplus(q_{k-2}\cdot q_{k-1})$ & $q_{k-2}\oplus q_{k-1}$ & $q_{k-1}\oplus 1$
\end{tabular}
\]
where the gray colored bits in the first line of the table show the carries. The result (given in the fourth line of the table) is obtained by performing the addition of the rightmost bits of the table, that is, adding bits $q_{k-1}$ and 1. The result is $q_{k-1}\oplus 1$ and the carry is $q_{k-1}$, which is placed over $q_{k-2}$ as a gray colored bit. Then, bits $q_{k-1}$ and $q_{k-2}$ are added, which gives $q_{k-2}\oplus q_{k-1}$ with carry $q_{k-2}\cdot q_{k-1}$, which is placed over $q_{k-3}$. The addition goes on until the leftmost bit is reached. The final result coincides with the action of $P$ on $\ket{q_0\ldots q_{k-1}}$, which proves the proposition.
\end{proof}
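The proposition can also be checked by brute force: build each multi-controlled $X$ as a permutation matrix (with $q_0$ the most significant bit, and the rightmost gate in the product acting first) and compare the product with the cyclic shift $\ket{q}\mapsto\ket{q+1 \bmod N}$. A sketch for $k=3$:

```python
import numpy as np

k = 3
N = 2 ** k

def bits(q):
    # q -> [q_0, ..., q_{k-1}] with q_0 the most significant bit
    return [(q >> (k - 1 - j)) & 1 for j in range(k)]

def to_int(b):
    return sum(bit << (k - 1 - j) for j, bit in enumerate(b))

def mcx(controls, target):
    # multi-controlled X: flip bit `target` iff all `controls` are 1
    M = np.zeros((N, N))
    for q in range(N):
        b = bits(q)
        if all(b[c] for c in controls):
            b[target] ^= 1
        M[to_int(b), q] = 1.0
    return M

# P = X_{k-1} C_{k-1}(X_{k-2}) ... C_{1,...,k-1}(X_0); the rightmost
# factor acts first, so we multiply from target 0 up to target k-1
P = np.eye(N)
for target in range(k):
    P = mcx(list(range(target + 1, k)), target) @ P

shift = np.roll(np.eye(N), 1, axis=0)      # |q> -> |q+1 mod N>
assert np.allclose(P, shift)
```

Each gate reads only higher-index bits, which are modified later, so every control sees the original carry bits, exactly as in the proof.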
\section{Appendix}\label{appen_1}
This appendix describes function \verb|new_mcrz|, which decomposes the multi-controlled $R_z$ gate, and function \verb|new_mcz|, which decomposes the multi-controlled $Z$ gate. Those functions use the same syntax as the functions \verb|mcrz| and \verb|mcz| implemented in Qiskit. Note that our implementation uses fewer CNOTs.
\begin{verbatim}
from qiskit import *
from math import pi,log
q = QuantumRegister(4)
qc = QuantumCircuit(q)
def new_mcrz(qc,theta,q_controls,q_target):
    n = len(q_controls)
    newtheta = -theta/2**n
    a = lambda n: log(n-(n&(n-1)),2)
    qc.cx(q_controls[n-1],q_target)
    qc.u1(newtheta,q_target)
    for i in range(1,2**(n)):
        qc.cx(q_controls[int(a(i))],q_target)
        qc.u1((-1)**i*newtheta,q_target)
QuantumCircuit.new_mcrz = new_mcrz
qc.new_mcrz(pi,[q[0],q[1],q[2]],q[3])
print(qc.draw())
\end{verbatim}
\begin{verbatim}
qc = QuantumCircuit(q)
def new_mcz(qc,q_controls,q_target):
    L = q_controls + [q_target]
    n = len(L)
    qc.u1(pi/2**(n-1),L[0])
    for i in range(2,n+1):
        qc.new_mcrz(pi/2**(n-i),L[0:i-1],L[i-1])
QuantumCircuit.new_mcz = new_mcz
qc.new_mcz([q[0],q[1],q[2]],q[3])
print(qc.draw())
\end{verbatim}
\section{Appendix}\label{appen_2}
Fig.~\ref{fig:8-cycle} depicts our results for a staggered quantum walk on the 8-cycle with $\theta=\pi/4$ and initial condition $(\ket{3}+\ket{4})/\sqrt 2$ using three qubits of the ourense quantum computer. The two high peaks moving in opposite directions display the well-known signature of quantum walks on the one-dimensional lattice.
\begin{figure}
\caption{Probability distribution of eight steps using exact calculations (blue) and the ourense quantum computer (red) using 3 qubits. The first plot refers to the preparation of the initial state (i.s.) $(\ket{3}+\ket{4})/\sqrt{2}$.}
\label{fig:8-cycle}
\end{figure}
Table~\ref{table:fid3q} shows the corresponding fidelities, where $d$ and $h$ are the total variation and Hellinger distances, respectively. After the sixth step, the fidelity is high but the output of the quantum computer is worthless. This shows that the fidelity is not a good measure when the exact probability distribution is almost flat.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
fidelity & i.s. & step 1 & step 2 & step 3 & step 4 & step 5 & step 6 & step 7 & step 8 \\
\hline
$1-d$ & 0.927 & 0.891 & 0.864 & 0.896 & 0.823 & 0.965 & 0.956 & 0.710 & 0.639 \\
\hline
$1-h$ & 0.806 & 0.783 & 0.895 & 0.916 & 0.850 & 0.973 & 0.973 & 0.614 & 0.736 \\
\hline
\end{tabular}
\end{center}
\caption{Fidelities between the probability distributions generated by the ourense quantum computer and the exact simulation for one walker on an 8-vertex cycle up to step 8.}\label{table:fid3q}
\end{table}
\end{document}
\begin{document}
\title{Detecting an Itinerant Optical Photon Twice without Destroying It}
\author{Emanuele~Distante$^{1}$}
\author{Severin~Daiss$^{1}$}
\author{Stefan~Langenfeld$^{1}$}
\author{Lukas~Hartung$^{1}$}
\author{Philip~Thomas$^{1}$}
\author{Olivier~Morin$^{1}$}
\author{Gerhard~Rempe$^{1}$}
\author{Stephan~Welte$^{1}$}
\email[To whom correspondence should be addressed. Email: ]{[email protected]}
\affiliation{$^{1}$Max-Planck-Institut f{\"u}r Quantenoptik, Hans-Kopfermann-Strasse 1, 85748 Garching, Germany}
\begin{abstract}
\noindent Nondestructive quantum measurements are central for quantum physics applications ranging from quantum sensing to quantum computing and quantum communication. Employing the toolbox of cavity quantum electrodynamics, we here concatenate two identical nondestructive photon detectors to repeatedly detect and track a single photon propagating through a $60\,\mathrm{m}$ long optical fiber. By demonstrating that the combined signal-to-noise ratio of the two detectors surpasses each single one by about two orders of magnitude, we experimentally verify a key practical benefit of cascaded non-demolition detectors compared to conventional absorbing devices.
\end{abstract}
\maketitle \nocite{apsrev41Control}
\noindent Quantum physics distinguishes between two kinds of measurements. Following Pauli \cite{pauli1933}, a measurement of the first kind projects the state of a system onto an eigenstate of the measured observable, with subsequent measurements of the same kind giving the same result. A measurement of the second kind, in contrast, exerts a random back-action on the complementary observable so that repeated measurements lead to different results. This distinction has been refined by introducing the concept of a quantum non-demolition (QND) measurement \cite{braginsky1980,braginsky1996}, where Pauli's measurement of the first kind is formally defined by requiring that the operator corresponding to the measured observable commutes with the Hamiltonian of the system. In practice, QND measurements thus allow for repeated observations without changing the outcome, a unique property that has (at least) two benefits. First, it allows one to track the evolution of the observable without any back-action. Second, it allows one to concatenate several measurements, each with a non-perfect detection sensitivity (e.g., signal-to-noise ratio), and in this way enhance the overall sensitivity. Repeatability has therefore been identified as a powerful advantage, as was emphasized by Caves \textit{et al.} \cite{caves1980}: ``The key feature of such a non-demolition measurement is \textit{repeatability} -- once is not enough!".
The application potential of QND measurements was realized early on in the field of gravitational wave detection \cite{braginsky1980,braginsky1996} and has later on sparked large interest in a vast number of other fields, ranging from astronomy \cite{kellerer2014quantum} to high-precision metrology \cite{giovanetti2004, ma2011} and quantum-information processing \cite{ralph2006}. In the laboratory, QND measurements have been implemented in different matter-based experimental platforms such as ions \cite{hume2007}, superconducting qubits \cite{lupacscu2007}, solid state systems \cite{neumann2010}, and atomic ensembles \cite{kuzmich2000}. QND measurements of single photons, however, turned out to be comparatively difficult to implement, as they require the development of detectors capable of observing single photons without absorbing them \cite{braginsky1989}. Nevertheless, landmark experiments were proposed and performed in the microwave domain with photons stored in high-quality resonators \cite{brune1992, nogues1999seeing, guerlin2007} and later extended to the detection of itinerant microwave photons \cite{kono2018quantum, besse2018}. In the optical domain \cite{friberg1992, grangier1998}, a single nondestructive detection of a flying photon was achieved as well \cite{reiserer2013}. Although this experiment demonstrated the principle of a single QND detection, an experimental verification of the feasibility of repeatedly measuring and thereby tracking a flying optical photon has so far remained elusive.
\begin{figure}
\caption{\label{fig:setup}}
\end{figure}
Here we verify the repeatability and the increased sensitivity by using two QND detectors to observe one optical photon twice. The two devices, each made from a single atom coupled to an optical cavity, are distributed along an optical fiber in which the photon propagates. Once the latter has interacted with each detector, we observe correlations between the detection events. These correlations are the key element behind our demonstration that the combined system of two detectors outperforms both individual devices in terms of signal-to-noise ratio. Furthermore, we demonstrate that our QND detectors are specifically suited to detect single-photon states. To this end, we employ the first QND detector as a state-preparation device to generate single photons out of weak coherent states \cite{daiss2019}. We then use such single photons to probe the second QND detector. As they are eigenstates of the measurement, we show that they are detected with higher probability than the input coherent states and that their single-photon character is unaffected by the QND detection. Our protocol is widely applicable and could also be implemented with single ions \cite{mundt2002, takahashi2020}, superconducting qubits \cite{kono2018quantum, besse2018}, quantum dots \cite{fushman2008, kim2013, desantis2017}, silicon vacancy centers \cite{bhaskar2020} or rare-earth ions \cite{chen2020} coupled to cavities.
The setup consists of a concatenation of QND detectors, lined up along an optical fiber in which a photon propagates. An artist's view of the experiment is shown in Fig. \ref{fig:setup}(a) where the QND detectors are depicted as bulbs that light up as soon as the photon passes by. An external observer can see correlations between the successively illuminated bulbs, and photon loss manifests itself as dark bulbs downstream. Fig. \ref{fig:setup}(b) shows a more detailed sketch of our experiment. The two detectors, named QND1 and QND2 in the following, are separated by a distance of $60\,\mathrm{m}$. Each detector is made of a single \isotope[87]{Rb} atom trapped in a high-finesse cavity. The systems are both single-sided and operate in the strong-coupling regime of cavity quantum electrodynamics (CQED). The atoms are initialized in $\ket{\uparrow_z}=\ket{5\isotope[2]{S}_{1/2}, F=2, m_F=2}$ via optical pumping with light resonant with the $\ket{\uparrow_z}\leftrightarrow \ket{e}$ transition, where $\ket{e}=\ket{5\isotope[2]{P}_{3/2}, F=3, m_F=3}$. The cavities are both stabilized to this transition frequency. Following the optical pumping, we employ a pair of Raman lasers in each of the setups to prepare both atoms in the
superposition state $\ket{\uparrow_x}=\frac{1}{\sqrt{2}}(\ket{\uparrow_z}+\ket{\downarrow_z})$, where $\ket{\downarrow_z}=\ket{5\isotope[2]{S}_{1/2}, F=1, m_F=1}$ \cite{reiserer2013}. A state-detection laser resonant with the $\ket{\uparrow_z}\leftrightarrow\ket{e}$ transition allows to deterministically distinguish between the states $\ket{\uparrow_z}$ and $\ket{\downarrow_z}$ with a fidelity of $>99\%$.
Instead of injecting single photons into the fiber, we perform our experiments with weak coherent laser pulses $\ket{\alpha}$ that contain a mean photon number $\vert\alpha\vert^2=\braket{n}$ in front of the first QND detector. The choice of coherent pulses ($\lambda=780\,\mathrm{nm}$) allows us to study the application of our QND detectors as quantum state preparation devices of single photons. Additionally, in the limit of $\braket{n}\rightarrow 0$, we can approximate a single-photon input by eliminating the vacuum contribution through a post selection on a detection event (`click') in a standard absorbing detector at the far end of the propagation line. The light is injected into the fiber and, after the reflection from the QND detectors, hits two absorbing detectors arranged in Hanbury Brown-Twiss configuration that allows us to measure the second-order intensity auto-correlation function $g^{(2)}(\tau)$. The light pulses have a Gaussian shape with a full width at half maximum of $1\,\mathrm{\mu s}$ and are resonant with the transition $\ket{\uparrow_z}\leftrightarrow\ket{e}$.
Our protocol, depicted as a quantum circuit diagram in Fig. \ref{fig:setup}(c), starts by initializing each atom in the state $\ket{\uparrow_x}$. As a next step, light injected into the fiber successively interacts with the two QND detectors. In case of an odd number of photons in the light pulse, as for a single photon, a phase shift of $\pi$ in the combined atom-light state leads to a sign change in both atomic superposition states \cite{duan2004, xiao2004, reiserer2013, tiecke2014}. Each of the QND detectors then occupies the state $\ket{\downarrow_x}=\frac{1}{\sqrt{2}}(\ket{\uparrow_z}-\ket{\downarrow_z})$. For an even number of photons, such as for the vacuum state, the two atoms remain in $\ket{\uparrow_x}$. In the quantum-information language, the interaction of the photon and each of the atoms can be expressed as a controlled-Z gate. After reflection of the light from the two detectors, a $\pi/2$ pulse is applied which maps $\ket{\downarrow_x}$ $(\ket{\uparrow_x})$ onto $\ket{\uparrow_z}$ $(\ket{\downarrow_z})$. Therefore, a final atomic state detection of $\ket{\uparrow_z}$ ($\ket{\downarrow_z}$) heralds the presence of an odd (even) photon number at the corresponding detector.
Since the passing photon is ideally detected by both QND detectors, correlations between both of them must be observable. In practice, however, the setup exhibits substantial losses between the two detectors (optical fiber coupling, fiber losses and limited circulator transmission; total transmission: $53\%$). To ensure the presence of light in front of each cavity, we can condition our data on the successful transmission of the photon through the entire system by postselecting on a click of one of our absorbing detectors downstream.
In a first experiment, we individually characterize the two QND detectors by measuring the probability of a successful QND photon detection conditioned on a click in the absorbing detectors. For the measurement, the mean photon number $\braket{n}$ in the impinging coherent pulse is scanned. For $\braket{n}=0.084$, the maximum probabilities $\text{P}(\uparrow_{z, \text{QND1}}\vert\text{click})=81.3\%$ and $\text{P}(\uparrow_{z, \text{QND2}}\vert\text{click})=87.0\%$ are observed. The difference in the measured maximum probabilities $\text{P}(\uparrow_{z, \text{QND1/2}}\vert\text{click})$ is due to slightly different experimental parameters, mainly coherence time and the quality of the Raman pulses, in the two QND detectors. A detailed description of the individual characterization is given in the \hyperref[supplement]{Supplemental Material}.
In the next experiment, we concatenate the QND devices and remove the conditioning on the classical detector click. In a first step, we measure the click probabilities $\text{P}(\uparrow_{z,\text{QND1/2}})$ of the two detectors when probing them with a coherent state containing a mean photon number $\braket{n}$. Since the detectors are sensitive to the parity of the photon number, the respective click probability monotonically increases from zero to $50\%$ as $\braket{n}$ is increased starting from zero. The saturation behavior results from the equal contributions of even and odd photon numbers in the limit of high $\braket{n}$. Due to the optical losses between the two setups, the respective mean photon number is different in front of the two detectors resulting in the observed scaling of the respective curves. Data are shown in Fig. \ref{fig:correlations}.
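The shape of these curves follows from photon-number parity alone: an ideal detector clicks exactly on the odd-photon-number weight of the impinging coherent state, which equals $(1-e^{-2\braket{n}})/2$. A toy numerical check (our illustration, not the experimental analysis):

```python
from math import exp, factorial

def p_click_coherent(nbar, nmax=60):
    # ideal parity detector: a "click" (atom ends in |up_z>) heralds an
    # odd photon number, so for a coherent pulse with mean photon number
    # nbar the click probability is the odd-photon-number Poisson weight
    return sum(exp(-nbar) * nbar ** n / factorial(n) for n in range(1, nmax, 2))

# closed form: exp(-nbar) * sinh(nbar) = (1 - exp(-2*nbar)) / 2,
# which rises from 0 and saturates at 50% for large nbar
for nbar in (0.1, 0.5, 2.0):
    assert abs(p_click_coherent(nbar) - (1 - exp(-2 * nbar)) / 2) < 1e-9
```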
\begin{figure}
\caption{\label{fig:correlations}}
\end{figure}
Since the QND detectors are nondestructive, we can condition each of them on a click in the other detector. The respective click probabilities $\text{P}(\uparrow_{z,\text{QND1/2}}\vert\uparrow_{z,\text{QND2/1}})$ show a different behavior compared to $\text{P}(\uparrow_{z,\text{QND1/2}})$. When conditioning the downstream detector on the upstream detector, an increase in the click probability is observable compared to the case where no conditioning is applied. The effect, however, is strongly suppressed due to the inevitable losses between the setups. In the other case, where the upstream detector is conditioned on the downstream detector, the effect of the losses is suppressed as, for $\braket{n} \ll 1$, a click downstream selects the events when no losses occurred between the setups. For increasing $\braket{n}$, we observe a correlation maximum of $\text{P}(\uparrow_{z, \text{QND1}}\vert \uparrow_{z, \text{QND2}})=(68.4\pm0.7)\%$, clearly surpassing the $50\%$ threshold for uncorrelated random click events. For $\braket{n}$ approaching zero, these correlations decrease due to intrinsic dark counts in the QND detectors. The dark counts of QND1(2) stem from imperfect Raman rotations that leave $\text{dc}_{1(2)}=1.4\%(0.4\%)$ residual population in the state $\ket{\uparrow_z}$ when performing two consecutive $\pi/2$ pulses after initializing the atoms in $\ket{\uparrow_z}$. Due to higher photon-number contributions for increasing $\braket{n}$, the correlations asymptotically approach $50\%$.
A benefit of nondestructive photon detectors compared to conventional destructive devices is that they can be concatenated to enhance the overall detection efficiency or the signal-to-noise ratio (SNR). In the following, we show that introducing a logical ``or" ($\lor$) connection between the QND detector clicks enhances the overall efficiency, while a logical ``and" ($\land$) connection increases the signal-to-noise ratio. We start with the first case and measure the probability that at least one detector, QND1 or QND2, detects a photon conditioned on an absorbing-detector click downstream. A maximum probability of $95.1\%$ is observed, which surpasses the maximal capabilities of both individual QND devices ($81.3\%$ and $87.0\%$) and therefore demonstrates the enhancement of the detection efficiency. Data as a function of $\braket{n}$ are shown in Fig.~\ref{fig:efficiency}. A summary of the total measurement time, number of experimental runs, and coincidence rates is given in the \hyperref[supplement]{Supplemental Material}.
Although the efficiency is an important parameter of a single-photon detector, the signal-to-noise ratio is even more relevant in practice. In the limit of small $\braket{n}$, we define the SNR of the individual detectors as $\text{SNR}_{1(2)}=\text{P}(\uparrow_{\text{z,QND1(2)}}\vert\text{click})/\text{dc}_{1(2)}$ and the SNR of the concatenated detectors as $\text{SNR}_{1\land2}=\text{P}(\uparrow_{\text{z,QND1}}\land\uparrow_{\text{z,QND2}}\vert\text{click})/\text{dc}_{1\land2}$. In the last expression, $\text{dc}_{1\land2}$ is the probability of finding both QND1 and QND2 in $\ket{\uparrow_z}$ at the end of the protocol when no light is injected into the fiber. By employing the logical ``and" connection, we exploit the fact that the signal in both detectors is correlated while the dark counts are uncorrelated to enhance $\text{SNR}_{1\land2}$ compared to $\text{SNR}_{1}=59$ and $\text{SNR}_{2}=218$. While in the limit of small $\braket{n}$ the probability $\text{P}(\uparrow_{\text{z,QND1}}\land\uparrow_{\text{z,QND2}}\vert\text{click})$ is only slightly lowered compared to the individual devices (see Fig.~\ref{fig:efficiency}), we find that $\text{SNR}_{1\land2}$ is a factor of 61 higher than $\text{SNR}_2$ and a factor of $227$ higher than $\text{SNR}_1$.
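Under the idealizing assumption of perfectly correlated signal clicks and fully independent dark counts, the ``and" enhancement has a simple closed form: the gain of $\text{SNR}_{1\land2}$ over $\text{SNR}_2$ equals $\text{SNR}_1$ and vice versa, close to the measured factors of 61 and 227. A sketch of this bookkeeping:

```python
# SNR enhancement from a logical "and" between two QND detectors,
# assuming correlated signal clicks and independent dark counts
# (an idealization; the measured factors differ slightly).
p1, p2 = 0.813, 0.870        # signal click probabilities (small <n> limit)
dc1, dc2 = 0.014, 0.004      # dark-count probabilities quoted in the text

snr1 = p1 / dc1              # individual SNR of QND1
snr2 = p2 / dc2              # individual SNR of QND2
snr_and = (p1 * p2) / (dc1 * dc2)  # dark counts must coincide

print(f"gain over SNR2: {snr_and / snr2:.0f}, "
      f"gain over SNR1: {snr_and / snr1:.0f}")
```

In this idealization the two enhancement factors reduce algebraically to $\text{SNR}_1$ and $\text{SNR}_2$ themselves, which is why concatenating detectors pays off so strongly in noise rejection.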
\begin{figure}
\caption{\label{fig:efficiency}}
\end{figure}
We now demonstrate that increasing the state overlap of the impinging light, $\ket{\alpha}$, with a single-photon Fock state, $\ket{1}$, increases the click probability of QND2. This increase stems from the fact that single-photon states are eigenstates of our detector and would be detected with unit efficiency in an ideal scenario. Starting with a weak coherent state at the input of the fiber, we employ two conditions ensuring that the light impinging on QND2 is approximately described by a single-photon state. First, we condition on a click in the downstream absorbing detectors. This condition removes the vacuum contribution from the coherent state and additionally ensures that light is not lost in the fiber and therefore impinges on QND2. The limitations of this condition are that all photon-number contributions $n\ge1$ remain in the pulse, and that for too low values of $\braket{n}$ the dark counts in the absorbing detectors add a small vacuum contribution. Nevertheless, for appropriately chosen low values of $\braket{n}$, this technique allows us to approximate single photons. Second, and as an additional condition, we employ QND1 as a state-preparation device. Since it is sensitive to the parity of the impinging photon number \cite{wang2005, hacker2019}, a click in this detector removes all even photon-number contributions \cite{daiss2019}. Therefore, in the limit of vanishing $\braket{n}$, employing both conditions applies a selection window around the desired single-photon Fock state and approximates a single photon better than conditioning on the absorbing detector alone. As a result, for a proper single-photon detector, the following inequality must hold: $\text{P}(\uparrow_{\text{z, QND2}}\vert\uparrow_{\text{z, QND1}}\land\text{click})>\text{P}(\uparrow_{\text{z, QND2}}\vert\text{click})$.
As shown in Fig. \ref{fig:distillation}, we can indeed verify this inequality.
\begin{figure}
\caption{\label{fig:distillation}}
\end{figure}
Our data show that the click probability of QND2 increases if the impinging state has a higher overlap with an ideal single-photon state. Maximally, a probability $\text{P}(\uparrow_{\text{z, QND2}}\vert\uparrow_{\text{z, QND1}}\land\text{click})=90.7\%$ is observed.
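The effect of the two conditions on the photon-number statistics can be illustrated with ideal Poisson statistics, assuming a lossless link and unit-efficiency detectors (a simplification, not the full experimental model): conditioning on a click keeps $n\ge1$, while the parity condition keeps only odd $n$, further boosting the single-photon weight.

```python
import math

# Single-photon weight of a weak coherent pulse after the two heralding
# conditions (idealized: lossless link, unit-efficiency detectors).
def poisson(n, nbar):
    return math.exp(-nbar) * nbar**n / math.factorial(n)

nbar = 0.45                                    # mean photon number at input
p_click = 1 - math.exp(-nbar)                  # P(n >= 1): "click" condition
p_odd = math.exp(-nbar) * math.sinh(nbar)      # P(n odd): parity condition

p1_given_click = poisson(1, nbar) / p_click    # weight of |1> after click
p1_given_odd = poisson(1, nbar) / p_odd        # weight of |1> after parity

print(f"P(n=1|click) = {p1_given_click:.3f}, P(n=1|odd) = {p1_given_odd:.3f}")
```

In this idealization the parity condition lifts the single-photon fidelity from $\braket{n}/(e^{\braket{n}}-1)$ to $\braket{n}/\sinh\braket{n}$, consistent with the inequality verified above.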
To verify the preparation of single photons with our two QND detectors from the initial coherent pulse, we employ two absorbing detectors in a Hanbury Brown-Twiss configuration as shown in Fig. \ref{fig:setup}(b). This allows us to extract the second-order photon-correlation function $g^{(2)}(\tau)$. For the measurement, we use an initial coherent pulse with an average photon number of $\braket{n}=0.45$. We extract $g^{(2)}(0)$ and $\overline{g}^{(2)}(\tau\ne 0)$ averaged over all $\tau\ne0$. The second-order correlation function is conditioned on clicks in the two QND measurements. Table \ref{tab:g2data} contains the obtained data.
\begin{table}[t]
\caption{\label{tab:g2data} Measurements of the second-order photon-correlation function $g^{(2)}(\tau)$. The obtained data is conditioned on different combinations of QND detection events. Error bars are statistical.}
\begin{tabular}{ c c c }
\hline\hline
Condition & $g^{(2)}(\tau=0)$ & $\overline{g}^{(2)}(\tau\ne 0)$ \\
\hline
\text{None} & $1.005^{+0.253}_{-0.206}$ & $0.995^{+0.006}_{-0.005}$ \\
$\uparrow_{z,\text{QND} 1}$ & $0.354^{+0.121}_{-0.091}$ & $1.010^{+0.004}_{-0.004}$ \\
$\uparrow_{z,\text{QND} 2}$ & $0.047^{+0.021}_{-0.015}$ & $1.000^{+0.002}_{-0.002}$ \\
$\uparrow_{z,\text{QND} 1} \land \uparrow_{z,\text{QND} 2}$ & $0.038^{+0.023}_{-0.014}$ & $1.000^{+0.002}_{-0.002}$\\
\hline\hline
\end{tabular}
\end{table}
When not conditioning on any of the QND detection results, we verify the coherent character of our light since $g^{(2)}(\tau)$ is close to unity for all values of $\tau$. As a next step, we condition our data on a click of QND1 and observe a reduction of $g^{(2)}(0)$ from unity to $0.354^{+0.121}_{-0.091}$. Similarly, when conditioning the data on a click of QND2, we obtain $g^{(2)}(0)=0.047^{+0.021}_{-0.015}$. Both results show the single-photon character of the light after conditioning on the respective detector. The difference in the two obtained values for $g^{(2)}(0)$ stems from the different mean photon number in the coherent pulse impinging onto the respective QND detector due to the losses between the detectors. Finally, when the data is conditioned on a click of both QND detectors, we obtain $g^{(2)}(0)=0.038^{+0.023}_{-0.014}$. A comparison between the conditions $\uparrow_{\text{z,QND1}}$ and $\uparrow_{\text{z,QND1}}\land\uparrow_{\text{z,QND2}}$ shows that the single-photon character of the light after successful state preparation with QND1 is preserved by QND2.
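For an ideal, lossless realization, the conditioned values of $g^{(2)}(0)$ follow directly from filtered Poisson statistics. The sketch below is an idealization that neglects the losses and dark counts which raise the measured values (and the lower $\braket{n}$ actually impinging on QND2), but it reproduces the hierarchy seen in the table:

```python
import math

# Idealized g2(0) of a coherent pulse (nbar = 0.45) conditioned on
# (a) any click, i.e. n >= 1, and (b) odd photon-number parity.
# The Poisson sum is truncated at n = 20, ample for nbar < 1.
def g2_zero(nbar, keep):
    w = [nbar**n / math.factorial(n) if keep(n) else 0.0 for n in range(21)]
    Z = sum(w)
    mean_n = sum(n * wi for n, wi in enumerate(w)) / Z
    mean_nn = sum(n * (n - 1) * wi for n, wi in enumerate(w)) / Z
    return mean_nn / mean_n**2

nbar = 0.45
g2_click = g2_zero(nbar, keep=lambda n: n >= 1)     # equals 1 - exp(-nbar)
g2_odd = g2_zero(nbar, keep=lambda n: n % 2 == 1)   # parity-filtered, lower

print(f"g2(0): click {g2_click:.3f}, odd parity {g2_odd:.3f}")
```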
In conclusion, we have shown the feasibility of repeatedly detecting an optical photon. This feature of a QND measurement allows us to enhance the detection efficiency or the signal-to-noise ratio. By adding time resolution to the QND detector, it should be possible to gain information about the propagation direction of a photon, a piece of information only accessible with QND detectors. Moreover, an improved setup with smaller losses between the two detectors could serve as a heralded source of photonic Fock states. Such a \textit{gedanken} device, described in detail in the \hyperref[supplement]{Supplemental Material}, is capable of decomposing a coherent state into its number-state constituents \cite{guerlin2007}. Arguably most fascinating is an extension to several nondestructive detectors for photonic qubits. This could speed up a plethora of quantum-network \cite{kimble2008,reiserer2015,wehner2018} protocols such as entanglement distribution, where a new transmission attempt could be started immediately after a spatially resolved loss detection between sender and receiver \cite{niemietz2021}.
\begin{acknowledgments}
This work was supported by the Bundesministerium f\"{u}r Bildung und Forschung via the Verbund Q.Link.X (16KIS0870), by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy – EXC-2111 – 390814868, and by the European Union’s Horizon 2020 research and innovation programme via the project Quantum Internet Alliance (QIA, GA No. 820445). E.D. acknowledges support by the Cellex-ICFO-MPQ postdoctoral fellowship program in the early stages of the experiment.
\end{acknowledgments}
\begin{thebibliography}{99}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem{pauli1933}
\bibinfo{author}{W. Pauli}.
\newblock \bibinfo{title}{Die allgemeinen Prinzipien der Wellenmechanik, Handbuch der Physik, 2. Auflage. Bd. 24, 1. Teil. S.83-272.}
\bibinfo{publisher}{Springer},
\bibinfo{address}{Berlin} \bibinfo{year}{1933}.
\bibitem{braginsky1980}
\bibinfo{author}{V.~B. Braginsky}, \bibinfo{author}{Y.~I. Vorontsov}, and
\bibinfo{author}{K.~S. Thorne}.
\newblock \emph{\bibinfo{title}{Quantum Nondemolition Measurements}}.
\newblock
\href{https://science.sciencemag.org/content/209/4456/547}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{209}}, \bibinfo{pages}{547--557}
(\bibinfo{year}{1980})}.
\bibitem{braginsky1996}
\bibinfo{author}{V.~B. Braginsky} and
\bibinfo{author}{F.~Ya. Khalili}.
\newblock \emph{\bibinfo{title}{Quantum nondemolition measurements: the route from toys to tools}}.
\newblock
\href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.68.1}{\bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{68}}, \bibinfo{pages}{1--11}
(\bibinfo{year}{1996})}.
\bibitem{caves1980}
\bibinfo{author}{C.~M. Caves},
\bibinfo{author}{K.~S. Thorne},
\bibinfo{author}{R.~W.~P. Drever},
\bibinfo{author}{V.~D. Sandberg}, and
\bibinfo{author}{M. Zimmermann}.
\newblock \emph{\bibinfo{title}{On the measurement of a weak classical force coupled to a quantum-mechanical oscillator. I. Issues of principle}}.
\newblock
\href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.52.341}{\bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{341}
(\bibinfo{year}{1980})}.
\bibitem{kellerer2014quantum}
\bibinfo{author}{A. Kellerer}.
\newblock \emph{\bibinfo{title}{Quantum telescopes}}.
\newblock
\href{https://academic.oup.com/astrogeo/article/55/3/3.28/239181}{\bibinfo{journal}{Astronomy $\&$ Geophysics} \textbf{\bibinfo{volume}{55}}, \bibinfo{pages}{3.28--3.32}
(\bibinfo{year}{2014})}.
\bibitem{giovanetti2004}
\bibinfo{author}{V. Giovannetti},
\bibinfo{author}{S. Lloyd}, and
\bibinfo{author}{L. Maccone}.
\newblock \emph{\bibinfo{title}{Quantum-Enhanced Measurements: Beating the Standard Quantum Limit}}.
\newblock
\href{https://science.sciencemag.org/content/306/5700/1330.abstract}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{306}}, \bibinfo{pages}{1330--1336}
(\bibinfo{year}{2004})}.
\bibitem{ma2011}
\bibinfo{author}{J. Ma},
\bibinfo{author}{X. Wang},
\bibinfo{author}{C.~P. Sun}, and
\bibinfo{author}{F. Nori}.
\newblock \emph{\bibinfo{title}{Quantum spin squeezing}}.
\newblock
\href{https://www.sciencedirect.com/science/article/pii/S0370157311002201}{\bibinfo{journal}{Physics Reports} \textbf{\bibinfo{volume}{509}}, \bibinfo{pages}{89--165}
(\bibinfo{year}{2011})}.
\bibitem{ralph2006}
\bibinfo{author}{T.~C. Ralph},
\bibinfo{author}{S.~D. Bartlett},
\bibinfo{author}{J.~L. O'Brien},
\bibinfo{author}{G.~J. Pryde}, and
\bibinfo{author}{H.~M. Wiseman}.
\newblock \emph{\bibinfo{title}{Quantum nondemolition measurements for quantum information}}.
\newblock
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.73.012113}{\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{012113}
(\bibinfo{year}{2006})}.
\bibitem{hume2007}
\bibinfo{author}{D.~B. Hume},
\bibinfo{author}{T. Rosenband}, and
\bibinfo{author}{D.~J. Wineland}.
\newblock \emph{\bibinfo{title}{High-Fidelity Adaptive Qubit Detection through Repetitive Quantum Nondemolition Measurements}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.99.120502}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}}, \bibinfo{pages}{120502}
(\bibinfo{year}{2007})}.
\bibitem{lupacscu2007}
\bibinfo{author}{A. Lupa\c{s}cu},
\bibinfo{author}{S. Saito},
\bibinfo{author}{T. Picot},
\bibinfo{author}{P.~C. de Groot},
\bibinfo{author}{C.~J.~P.~M. Harmans}, and
\bibinfo{author}{J.~E. Mooij}.
\newblock \emph{\bibinfo{title}{Quantum non-demolition measurement of a superconducting two-level system}}.
\newblock
\href{https://www.nature.com/articles/nphys509}{\bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{119--123}
(\bibinfo{year}{2007})}.
\bibitem{neumann2010}
\bibinfo{author}{P. Neumann},
\bibinfo{author}{J. Beck},
\bibinfo{author}{M. Steiner},
\bibinfo{author}{F. Rempp},
\bibinfo{author}{H. Fedder},
\bibinfo{author}{P.~R. Hemmer},
\bibinfo{author}{J. Wrachtrup}, and
\bibinfo{author}{F. Jelezko}.
\newblock \emph{\bibinfo{title}{Single-Shot Readout of a Single Nuclear Spin}}.
\newblock
\href{https://science.sciencemag.org/content/329/5991/542}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{329}}, \bibinfo{pages}{542--544}
(\bibinfo{year}{2010})}.
\bibitem{kuzmich2000}
\bibinfo{author}{A. Kuzmich},
\bibinfo{author}{L. Mandel}, and
\bibinfo{author}{N.~P. Bigelow}.
\newblock \emph{\bibinfo{title}{Generation of Spin Squeezing via Continuous Quantum Nondemolition Measurement}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.85.1594}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{85}}, \bibinfo{pages}{1594}
(\bibinfo{year}{2000})}.
\bibitem{braginsky1989}
\bibinfo{author}{V.~B. Braginsky}.
\newblock \emph{\bibinfo{title}{Quantum-Non-Demolition Measurements of the Energy of Standing and Flying Photons}}.
\newblock
\href{https://dcc.ligo.org/LIGO-T890016/public}{\bibinfo{journal}{Proc. 3rd Int. Symp. Foundations of Quantum Mechanics, Tokyo} \textbf{\bibinfo{volume}{}}, \bibinfo{pages}{135--139}
(\bibinfo{year}{1989})}.
\bibitem{brune1992}
\bibinfo{author}{M. Brune},
\bibinfo{author}{S. Haroche},
\bibinfo{author}{J.~M. Raimond},
\bibinfo{author}{L. Davidovich}, and
\bibinfo{author}{N. Zagury}.
\newblock \emph{\bibinfo{title}{Manipulation of photons in a cavity by dispersive atom-field coupling: Quantum-nondemolition measurements and generation of 'Schr{\"o}dinger cat' states}}.
\newblock
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.5193}{\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{45}}, \bibinfo{pages}{5193}
(\bibinfo{year}{1992})}.
\bibitem{nogues1999seeing}
\bibinfo{author}{G. Nogues},
\bibinfo{author}{A. Rauschenbeutel},
\bibinfo{author}{S. Osnaghi},
\bibinfo{author}{M. Brune},
\bibinfo{author}{J.~M. Raimond}, and
\bibinfo{author}{S. Haroche}.
\newblock \emph{\bibinfo{title}{Seeing a single photon without destroying it}}.
\newblock
\href{https://www.nature.com/articles/22275}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{400}}, \bibinfo{pages}{239--242}
(\bibinfo{year}{1999})}.
\bibitem{guerlin2007}
\bibinfo{author}{C. Guerlin},
\bibinfo{author}{J. Bernu},
\bibinfo{author}{S. Del\'{e}glise},
\bibinfo{author}{C. Sayrin},
\bibinfo{author}{S. Gleyzes},
\bibinfo{author}{S. Kuhr},
\bibinfo{author}{M. Brune},
\bibinfo{author}{J.~M. Raimond}, and
\bibinfo{author}{S. Haroche}.
\newblock \emph{\bibinfo{title}{Progressive field-state collapse and quantum non-demolition photon counting}}.
\newblock
\href{https://www.nature.com/articles/nature06057}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{448}}, \bibinfo{pages}{889--893}
(\bibinfo{year}{2007})}.
\bibitem{kono2018quantum}
\bibinfo{author}{S. Kono},
\bibinfo{author}{K. Koshino},
\bibinfo{author}{Y. Tabuchi},
\bibinfo{author}{A. Noguchi}, and
\bibinfo{author}{Y. Nakamura}.
\newblock \emph{\bibinfo{title}{Quantum non-demolition detection of an itinerant microwave photon}}.
\newblock
\href{https://www.nature.com/articles/s41567-018-0066-3}{\bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{546--549}
(\bibinfo{year}{2018})}.
\bibitem{besse2018}
\bibinfo{author}{J.~-C. Besse},
\bibinfo{author}{S. Gasparinetti},
\bibinfo{author}{M.~C. Collodo},
\bibinfo{author}{T. Walter},
\bibinfo{author}{P. Kurpiers},
\bibinfo{author}{M. Pechal},
\bibinfo{author}{C. Eichler}, and
\bibinfo{author}{A. Wallraff}.
\newblock \emph{\bibinfo{title}{Single-Shot Quantum Nondemolition Detection of Individual Itinerant Microwave Photons}}.
\newblock
\href{https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.021003}{\bibinfo{journal}{Phys. Rev. X} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{021003}
(\bibinfo{year}{2018})}.
\bibitem{friberg1992}
\bibinfo{author}{S.~R. Friberg},
\bibinfo{author}{S. Machida}, and
\bibinfo{author}{Y. Yamamoto}.
\newblock \emph{\bibinfo{title}{Quantum-nondemolition measurement of the photon number of an optical soliton}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.69.3165}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{69}}, \bibinfo{pages}{3165}
(\bibinfo{year}{1992})}.
\bibitem{grangier1998}
\bibinfo{author}{P. Grangier},
\bibinfo{author}{J.~A. Levenson}, and
\bibinfo{author}{J.~-P. Poizat}.
\newblock \emph{\bibinfo{title}{Quantum non-demolition measurements in optics}}.
\newblock
\href{https://www.nature.com/articles/25059}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{396}}, \bibinfo{pages}{537--542}
(\bibinfo{year}{1998})}.
\bibitem{reiserer2013}
\bibinfo{author}{A. Reiserer},
\bibinfo{author}{S. Ritter}, and
\bibinfo{author}{G. Rempe}.
\newblock \emph{\bibinfo{title}{Nondestructive Detection of an Optical Photon}}.
\newblock
\href{https://science.sciencemag.org/content/342/6164/1349}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{342}}, \bibinfo{pages}{1349--1351}
(\bibinfo{year}{2013})}.
\bibitem{daiss2019}
\bibinfo{author}{S. Daiss},
\bibinfo{author}{S. Welte},
\bibinfo{author}{B. Hacker},
\bibinfo{author}{L. Li}, and
\bibinfo{author}{G. Rempe}.
\newblock \emph{\bibinfo{title}{Single-Photon Distillation via a Photonic Parity Measurement Using Cavity QED}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.133603}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{122}}, \bibinfo{pages}{133603}
(\bibinfo{year}{2019})}.
\bibitem{mundt2002}
\bibinfo{author}{A.~B. Mundt},
\bibinfo{author}{A. Kreuter},
\bibinfo{author}{C. Becher},
\bibinfo{author}{D. Leibfried},
\bibinfo{author}{J. Eschner},
\bibinfo{author}{F. Schmidt-Kaler}, and
\bibinfo{author}{R. Blatt}.
\newblock \emph{\bibinfo{title}{Coupling a Single Atomic Quantum Bit to a High Finesse Optical Cavity}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.89.103001}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{103001}
(\bibinfo{year}{2002})}.
\bibitem{takahashi2020}
\bibinfo{author}{H. Takahashi},
\bibinfo{author}{E. Kassa},
\bibinfo{author}{C. Christoforou}, and
\bibinfo{author}{M. Keller}.
\newblock \emph{\bibinfo{title}{Strong Coupling of a Single Ion to an Optical Cavity}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.013602}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{124}}, \bibinfo{pages}{013602}
(\bibinfo{year}{2020})}.
\bibitem{fushman2008}
\bibinfo{author}{I. Fushman},
\bibinfo{author}{D. Englund},
\bibinfo{author}{A. Faraon},
\bibinfo{author}{N. Stoltz},
\bibinfo{author}{P. Petroff}, and
\bibinfo{author}{J. Vu\v{c}kovi\'{c}}.
\newblock \emph{\bibinfo{title}{Controlled Phase Shifts with a Single Quantum Dot}}.
\newblock
\href{https://science.sciencemag.org/content/320/5877/769.abstract}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{320}}, \bibinfo{pages}{769--772}
(\bibinfo{year}{2008})}.
\bibitem{kim2013}
\bibinfo{author}{H. Kim},
\bibinfo{author}{R. Bose},
\bibinfo{author}{T.~S. Shen},
\bibinfo{author}{G.~S. Solomon}, and
\bibinfo{author}{E. Waks}.
\newblock \emph{\bibinfo{title}{A quantum logic gate between a solid-state quantum bit and a photon}}.
\newblock
\href{https://www.nature.com/articles/nphoton.2013.48}{\bibinfo{journal}{Nat. Photonics} \textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{373--377}
(\bibinfo{year}{2013})}.
\bibitem{desantis2017}
\bibinfo{author}{L. De Santis}, \bibinfo{author}{C. Ant\'{o}n},
\bibinfo{author}{B. Reznychenko},
\bibinfo{author}{N. Somaschi},
\bibinfo{author}{G. Coppola},
\bibinfo{author}{J. Senellart},
\bibinfo{author}{C. G\'{o}mez},
\bibinfo{author}{A. Lema\^{i}tre},
\bibinfo{author}{I. Sagnes},
\bibinfo{author}{A.~G. White},
\bibinfo{author}{L. Lanco},
\bibinfo{author}{A. Auff\`{e}ves}, and
\bibinfo{author}{P. Senellart}.
\newblock \emph{\bibinfo{title}{A solid-state single-photon filter}}.
\newblock
\href{https://www.nature.com/articles/nnano.2017.85}{\bibinfo{journal}{Nat. Nanotechnol.} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{663--667}
(\bibinfo{year}{2017})}.
\bibitem{bhaskar2020}
\bibinfo{author}{M.~K. Bhaskar}, \bibinfo{author}{R. Riedinger},
\bibinfo{author}{B. Machielse},
\bibinfo{author}{D.~S. Levonian},
\bibinfo{author}{C.~T. Nguyen},
\bibinfo{author}{E.~N. Knall},
\bibinfo{author}{H. Park},
\bibinfo{author}{D. Englund},
\bibinfo{author}{M. Lon\v{c}ar},
\bibinfo{author}{D. Sukachev}, and
\bibinfo{author}{M.~D. Lukin}.
\newblock \emph{\bibinfo{title}{Experimental demonstration of memory-enhanced quantum communication}}.
\newblock
\href{https://www.nature.com/articles/s41586-020-2103-5}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{580}}, \bibinfo{pages}{60--64}
(\bibinfo{year}{2020})}.
\bibitem{chen2020}
\bibinfo{author}{S. Chen},
\bibinfo{author}{M. Raha},
\bibinfo{author}{C.~M. Phenicie},
\bibinfo{author}{S. Ourari}, and
\bibinfo{author}{J.~D. Thompson}.
\newblock \emph{\bibinfo{title}{Parallel single-shot measurement and coherent control of solid-state spins below the diffraction limit}}.
\newblock
\href{https://science.sciencemag.org/content/370/6516/592}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{370}}, \bibinfo{pages}{592--595}
(\bibinfo{year}{2020})}.
\bibitem{duan2004}
\bibinfo{author}{L.-M. Duan} and
\bibinfo{author}{H.~J. Kimble}.
\newblock \emph{\bibinfo{title}{Scalable Photonic Quantum Computation through Cavity-Assisted Interactions}}.
\newblock
\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.92.127902}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{127902}
(\bibinfo{year}{2004})}.
\bibitem{xiao2004}
\bibinfo{author}{Y.-F. Xiao},
\bibinfo{author}{X.-M. Lin},
\bibinfo{author}{J. Gao},
\bibinfo{author}{Y. Yang}, \bibinfo{author}{Z.-F. Han}, and
\bibinfo{author}{G.-C. Guo}.
\newblock \emph{\bibinfo{title}{Realizing quantum controlled phase flip through cavity QED}}.
\newblock
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.70.042314}{\bibinfo{journal}{Phys.
Rev. A} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{042314}
(\bibinfo{year}{2004})}.
\bibitem{tiecke2014}
\bibinfo{author}{T.~G. Tiecke},
\bibinfo{author}{J.~D. Thompson},
\bibinfo{author}{N.~P. de Leon},
\bibinfo{author}{L.~R. Liu},
\bibinfo{author}{V. Vuleti\'c}, and
\bibinfo{author}{M.~D. Lukin}.
\newblock \emph{\bibinfo{title}{Nanophotonic quantum phase switch with a single atom}}.
\newblock
\href{https://www.nature.com/articles/nature13188?page=1}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{508}}, \bibinfo{pages}{241--244}
(\bibinfo{year}{2014})}.
\bibitem{wang2005}
\bibinfo{author}{B. Wang} and
\bibinfo{author}{L.-M. Duan}.
\newblock \emph{\bibinfo{title}{Engineering superpositions of coherent states in coherent optical pulses through cavity-assisted interaction}}.
\newblock
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.72.022320}{\bibinfo{journal}{Phys.
Rev. A} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{022320}
(\bibinfo{year}{2005})}.
\bibitem{hacker2019}
\bibinfo{author}{B. Hacker},
\bibinfo{author}{S. Welte},
\bibinfo{author}{S. Daiss},
\bibinfo{author}{A. Shaukat},
\bibinfo{author}{S. Ritter},
\bibinfo{author}{L. Li}, and
\bibinfo{author}{G. Rempe}.
\newblock \emph{\bibinfo{title}{Deterministic creation of entangled atom–light Schr{\"o}dinger-cat states}}.
\newblock
\href{https://www.nature.com/articles/s41566-018-0339-5}{\bibinfo{journal}{Nat. Photonics} \textbf{\bibinfo{volume}{13}}, \bibinfo{pages}{110--115}
(\bibinfo{year}{2019})}.
\bibitem{kimble2008}
\bibinfo{author}{H.~J. Kimble}.
\newblock \emph{\bibinfo{title}{The quantum internet}}.
\newblock
\href{https://www.nature.com/articles/nature07127}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{453}}, \bibinfo{pages}{1023--1030}
(\bibinfo{year}{2008})}.
\bibitem{reiserer2015}
\bibinfo{author}{A. Reiserer} and
\bibinfo{author}{G. Rempe}.
\newblock \emph{\bibinfo{title}{Cavity-based quantum networks with single atoms and optical photons}}.
\newblock
\href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.87.1379}{\bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{87}}, \bibinfo{pages}{1379--1418}
(\bibinfo{year}{2015})}.
\bibitem{wehner2018}
\bibinfo{author}{S. Wehner},
\bibinfo{author}{D. Elkouss}, and
\bibinfo{author}{R. Hanson}.
\newblock \emph{\bibinfo{title}{Quantum internet: A vision for the road ahead}}.
\newblock
\href{https://science.sciencemag.org/content/362/6412/eaam9288}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{362}}, \bibinfo{pages}{eaam9288}
(\bibinfo{year}{2018})}.
\bibitem{niemietz2021}
\bibinfo{author}{D. Niemietz},
\bibinfo{author}{P. Farrera},
\bibinfo{author}{S. Langenfeld}, and
\bibinfo{author}{G. Rempe}.
\newblock \emph{\bibinfo{title}{Nondestructive detection of photonic qubits}}.
\newblock
\href{https://www.nature.com/articles/s41586-021-03290-z}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{591}}, \bibinfo{pages}{570--574}
(\bibinfo{year}{2021})}.
\end{thebibliography}
\section{SUPPLEMENTAL MATERIAL}
\label{supplement}
\section{Theoretical modelling with the QuTiP toolkit}
\noindent We model the experimental results with the QuTiP toolkit for Python \cite{johansson2012}. The model is based on cavity input-output theory as outlined in \cite{kuhn2015}. In general, the light reflected from each QND detector can be scattered into four different output modes: reflection or transmission through the cavity, scattering via the atom, and scattering on the cavity mirrors. We model the entire system by a concatenation of beam splitters describing the scattering into the respective modes. To account for the phase shift of the reflected light, the corresponding beam splitter is implemented with an additional phase shifter. A lossy depolarizing quantum channel between the two setups is also taken into account in our simulation. After propagating the light through the entire system of beam splitters and phase shifters, we trace out the loss channels and consider only the mode reflected from both cavities. Eventually, the light is detected with absorbing single-photon detectors, which are modelled with another beam splitter to account for the limited detection efficiency. Dark counts in the detectors are modelled by supplying one port of this beam splitter with thermal noise. In our simulation, we propagate the atom-atom-photon state through the series of beam splitters and eventually obtain a final density matrix.
Parameters extracted in independent characterization measurements are used in the theoretical model. These parameters comprise the fidelity of state preparation and readout of the atoms ($99\%$ at each of the nodes), their coherence times ($420\,\mathrm{\mu s}$ and $470\,\mathrm{\mu s}$, respectively), the transmission losses through the entire system ($60\%$ and $55\%$ intensity reflection of the two cavities, $53\%$ transmission between the setups and $50\%$ detection efficiency after QND2), the background-count rate of the absorbing detectors ($40\,\mathrm{Hz}$) and the relevant cavity QED parameters ($g_1(g_2)=7.6(7.6)\,\mathrm{MHz},\kappa_1(\kappa_2)=2.5(2.8)\,\mathrm{MHz},\gamma_1(\gamma_2)=3.0(3.0)\,\mathrm{MHz}$). Here, $g_{1}(g_{2})$ denotes the respective atom-cavity coupling constant (half the vacuum Rabi frequency) while $\kappa_{1}(\kappa_{2})$ and $\gamma_{1}(\gamma_{2})$ denote the respective cavity field decay rates and the atomic polarization decay rates. The given atom-cavity coupling constants are achieved on the cycling transition between the states $\ket{\uparrow_z}=\ket{5\isotope[2]{S}_{1/2}, F=2, m_F=2}$ and
$\ket{e}=\ket{5\isotope[2]{P}_{3/2}, F=3, m_F=3}$. Both of our atom-cavity systems operate in the strong-coupling regime and we achieve a cooperativity of $C_1(C_2)=3.9(3.4)$ where $C_{1(2)}=g_{1(2)}^2/(2\kappa_{1(2)}\gamma_{1(2)})$. Besides the aforementioned experimental parameters, our theoretical model also takes the depolarization in the connection fiber as well as a residual fiber birefringence into account.
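In the beam-splitter picture used for this simulation, a coherent state remains coherent under loss ($\alpha \to \sqrt{T}\,\alpha$), so mean photon numbers simply multiply by the intensity transmissions. A small illustrative budget using the transmissions quoted above (the grouping of losses into these two factors is a simplification):

```python
# Coherent-state loss budget: under beam-splitter loss a coherent state
# stays coherent, so <n> just multiplies by each intensity transmission.
def nbar_after(nbar_in, *transmissions):
    for T in transmissions:
        nbar_in *= T
    return nbar_in

nbar_in = 0.45            # pulse used for the g2 measurement
R1, T_link = 0.60, 0.53   # QND1 cavity reflection, fiber transmission
nbar_at_qnd2 = nbar_after(nbar_in, R1, T_link)
print(f"<n> at QND2 = {nbar_at_qnd2:.3f}")  # roughly a third of the input
```

This simple scaling is why the two QND detectors effectively see different mean photon numbers, which in turn explains their different conditioned $g^{(2)}(0)$ values in the main text.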
\section{Information on measurement time, total number of experimental runs, and coincidence-click rates}
Depending on the employed mean photon number $\braket{n}$, we use a different total measurement time to obtain an error bar on the order of $1\%$ on the respective measured click probabilities of our two QND detectors. For the lowest employed photon number of $\braket{n}=0.04$, we measured for $199.8\,\mathrm{min}$ while for the highest $\braket{n}=3.11$, we only measured for $13.4\,\mathrm{min}$. In total, the measurement time for the entire scan of $\braket{n}$ amounts to $6.7$ hours. In some cases, the obtained data is conditioned on the detection event on the classical HBT photon detectors. Depending on the mean photon number employed, we achieve click rates between $0.12\,\mathrm{Hz}$ for $\braket{n}=0.04$ and $8.09\,\mathrm{Hz}$ for $\braket{n}=3.11$ on these classical detectors. These click rates are raw rates and not conditioned on the presence of a well pumped atom in each of the cavities. We start an experimental run of our double QND detection only if one atom is available in each of the cavities. The total number of experimental runs we perform until the error bars on the measured probabilities are on the order of $1\%$ depends on the mean photon number employed. In our experimental realization, the total number of runs varied between $4.7\times10^5$ for $\braket{n}=0.04$ and $3.0\times10^4$ for $\braket{n}=3.11$. Of particular importance in the experiment is the rate of events in which both QND detectors click conditioned on the detection click in the HBT detector. For the lowest employed mean photon number $\braket{n}=0.04$, we observed 1068 such events in $199.8\,\mathrm{min}$ which corresponds to a rate of $0.09\,\mathrm{Hz}$. For the highest photon number of $\braket{n}=3.11$, we observed 2155 such events in $13.4\,\mathrm{min}$ which gives a coincidence-click rate of $2.68\,\mathrm{Hz}$.
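The quoted coincidence rates follow directly from the event counts and measurement times:

```python
# Sanity check of the quoted coincidence-click rates: events per second.
def rate_hz(events, minutes):
    return events / (minutes * 60.0)

low = rate_hz(1068, 199.8)   # <n> = 0.04
high = rate_hz(2155, 13.4)   # <n> = 3.11
print(f"{low:.2f} Hz, {high:.2f} Hz")  # matches the quoted 0.09 Hz and 2.68 Hz
```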
\section{Individual characterization of the two QND detectors}
\noindent We characterize our two QND devices individually. For the measurements, both atoms are initialized in the state $\ket{\uparrow_z}$ via optical pumping for $50\,\mathrm{\mu s}$ with right-circularly polarized light resonant with the $\ket{\uparrow_z}\leftrightarrow \ket{e}$ transition. This pumping light impinges along the cavity axis and is transmitted through the cavity. After the pumping, the atoms are prepared in $\ket{\uparrow_x}$ by applying a $4\,\mathrm{\mu s}$ long Raman pulse with an area of $\pi/2$. Subsequently, a coherent pulse $\ket{\alpha}$ is reflected successively from the two atom-cavity systems and then impinges on absorbing photon detectors. The employed absorbing detectors are superconducting nanowire devices with a quantum efficiency of $\eta=90\%$ at our wavelength of $\lambda=780\,\mathrm{nm}$. They are connected to the output of the circulator at QND2 with a 40-m-long optical fiber. The optical pulse $\ket{\alpha}$ propagates in a $60\,\mathrm{m}$ single mode fiber between the two resonator systems. The spatial extent of the pulse in the fiber is $200\,\mathrm{m}$, larger than the separation of the two QND detectors. Due to an engineered AC Stark shift of the excited states in our atoms, our scheme is capable of detecting photons in a specific polarization state, namely in a right-circular polarization \cite{hacker2019supplement}. To ensure that this particular polarization is maintained throughout the fiber, the birefringence of the latter is actively stabilized with piezoelectric fiber squeezers \cite{rosenfeld2008supplement} such that a one-to-one polarization mapping between the two cavities is established. After reflecting the light from the two cavities, we apply a $\pi/2$ pulse to the atoms such that they end up in the state $\ket{\uparrow_z}$ if an odd number of photons impinges. In case of an even photon number, the atoms end up in the state $\ket{\downarrow_z}$.
Atomic state detection eventually allows us to distinguish between $\ket{\uparrow_z}$ and $\ket{\downarrow_z}$ with a high fidelity ($>99\%$). After the atomic readout, we apply cooling light to the atoms for $660\,\mathrm{\mu s}$ and repeat the entire protocol with a rate of $1\,\mathrm{kHz}$.
For vanishing $\braket{n}$, an atom in the state $\ket{\uparrow_z}$ heralds the presence of a single photon. In the range of low values of $\braket{n}$, we achieve maximal detection efficiencies of $\text{P}(\uparrow_{\text{z,QND1}}\vert\text{click})=81.3\%$ and $\text{P}(\uparrow_{\text{z,QND2}}\vert\text{click})=87.0\%$ with our two QND detectors. With increasing $\braket{n}$, the probabilities $\text{P}(\uparrow_{\text{z,QND1(2)}}\vert\text{click})$ decrease, due to higher photon-number contributions in the coherent pulse, and eventually saturate at a level of $50\%$. The saturation behavior stems from the weight of even and odd photon-number contributions in the coherent pulse becoming equal when $\braket{n}$ is increased.
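The saturation level follows from the photon-number statistics of the coherent pulse alone. As a minimal numerical sketch (ignoring losses, detector imperfections and the finite QND fidelity, which the measured data of course include), the odd-photon-number weight of a coherent state with mean photon number $\braket{n}$ is $e^{-\braket{n}}\sinh\braket{n}=(1-e^{-2\braket{n}})/2$, which tends to $1/2$ as $\braket{n}$ grows:

```python
import math

def p_odd(nbar):
    """Weight of odd photon numbers in a coherent state with mean photon number nbar."""
    return (1 - math.exp(-2 * nbar)) / 2

# ~ nbar for small nbar; saturates at 1/2 for large nbar
assert abs(p_odd(0.01) - 0.0099) < 1e-3
assert abs(p_odd(5.0) - 0.5) < 1e-4
```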
\renewcommand{\thefigure}{S1}
\begin{figure}
\caption{Characterization of the two QND detectors.}
\label{fig:data1}
\end{figure}
Since considerable losses occur between the two systems ($53\%$ transmission between the two QND detectors), the mean number of photons impinging on the second cavity is considerably lower than on the first cavity. The data characterizing the two QND detectors is shown in Fig. \ref{fig:data1}. Due to the photon losses between the setups, we observe a different scaling behavior of the probability $\text{P}(\uparrow_z\vert \text{click})$ for the two different detectors.
\section{Proposal: A heralded source of photon-number states}
\noindent As an outlook, we describe how our setup could be employed as a heralded source of photonic number states (Fock states) by decomposing incoming coherent states. The described scheme works up to photon numbers of $n=3$ with two atom-cavity systems, but can be extended to higher photon numbers with additional atom-cavity systems. In general, coherent states containing up to $2^{k}-1$ photons can be decomposed into Fock states with $k$ concatenated systems. In the following discussion, we concentrate on the case $k=2$.
In our experiment, a weak coherent pulse $\ket{\alpha}$ is reflected successively from the two cavities. We assume that the pulses are weak and that photon-number contributions with $n>3$ are negligible. Thus, the input state can be written as $\alpha\ket{0}+\beta\ket{1}+\gamma\ket{2}+\delta\ket{3}$ with $\vert\alpha\vert^2+\vert\beta\vert^2+\vert\gamma\vert^2+\vert\delta\vert^2\approx 1$. After optical pumping of the atoms in QND1 and QND2 into the state $\ket{\uparrow_z}$, a qubit rotation around the y axis is applied that brings them into the state $\ket{\uparrow_x}$. This qubit rotation can be expressed in matrix form as \cite{nielsen2002}
\begin{equation}
\text{R}_y(\theta)=\begin{pmatrix}\cos\theta/2 & -\sin\theta/2 \\\sin\theta/2 & \cos\theta/2\\\end{pmatrix}\overset{\theta=\pi/2}{=}\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -1 \\1 & 1\\\end{pmatrix}.
\end{equation}
The cavity of the first node is tuned into resonance with the $\ket{\uparrow_z}\leftrightarrow \ket{e}$ transition such that even/odd photon numbers result in a phase shift of even/odd multiples of $\pi$, toggling the atomic superposition into $\ket{\uparrow_x}$ or $\ket{\downarrow_x}$, respectively.
\renewcommand{\thefigure}{S2}
\begin{figure}
\caption{Dichotomy tree and quantum circuit diagram of the devised protocol.}
\label{fig:photonsorter}
\end{figure}
After the reflection of the light from the first node, the atom in this node is subject to another $\text{R}_\text{y}(\pi/2)$ rotation which maps $\ket{\uparrow_x}$ $(\ket{\downarrow_x})$ to $\ket{\downarrow_z}$ $(\ket{\uparrow_z})$. A final state detection of the atom thus heralds an even (odd) photon number, if the atom is found in $\ket{\downarrow_z}$ $(\ket{\uparrow_z})$ \cite{hacker2019supplement}.
The purpose of the second cavity is to discriminate between the two possible even ($n=0$ or $2$) and odd ($n=1$ or $3$) photon numbers. For that, the second cavity is detuned \cite{hacker2019supplement} such that a phase shift of $\pi/2$ ($\pi$) results for a single photon (two photons). The light coming from QND1 is reflected from this cavity and after the reflection, a qubit rotation pulse is applied. This pulse has an area of $\pi/2$ and its rotation axis depends on the measurement outcome of the state detection on the first atom. If an even photon number is detected at the first detector, a $\text{R}_\text{y}(\pi/2)$ rotation is applied to the second atom and maps $\ket{\uparrow_x}$ $(\ket{\downarrow_x})$ to $\ket{\downarrow_z}$ $(\ket{\uparrow_z})$. A final state detection of this atom in $\ket{\downarrow_z}$ $(\ket{\uparrow_z})$ therefore heralds the photonic Fock states $\ket{0}$ $(\ket{2})$.
If an odd photon number is detected at the first node, a $\pi/2$ qubit rotation around the x axis is applied to the second atom. It can be expressed as
\begin{equation}
\text{R}_x(\theta)=\begin{pmatrix}\cos\theta/2 & -i\sin\theta/2 \\ -i\sin\theta/2 & \cos\theta/2\\\end{pmatrix}\overset{\theta=\pi/2}{=}\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -i \\ -i & 1\\\end{pmatrix}.
\end{equation}
This pulse maps $\ket{\uparrow_y}=\frac{1}{\sqrt{2}}(\ket{\uparrow_z}+i\ket{\downarrow_z})$ to $\ket{\uparrow_z}$ and $\ket{\downarrow_y}=\frac{1}{\sqrt{2}}(\ket{\uparrow_z}-i\ket{\downarrow_z})$ to $\ket{\downarrow_z}$. A final state detection of this atom in $\ket{\uparrow_z}$ $(\ket{\downarrow_z})$ therefore heralds the Fock states $\ket{1}$ $(\ket{3})$. The final light state can be characterized by employing a homodyne detection to reconstruct its Wigner function \cite{hacker2019supplement}.
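The logic of the two-stage sorting protocol can be condensed into a small simulation. This is an idealized sketch, not a model of the experiment: losses and infidelities are neglected, and the photon-number-dependent reflection is modeled simply as a phase $n\pi$ (first cavity) or $n\pi/2$ (second, detuned cavity) imprinted between the two qubit components; the function names are ours.

```python
import math
import cmath

UP, DOWN = 0, 1  # amplitudes of |up_z>, |down_z>

def ry(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def rx(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def apply(m, psi):
    return [m[0][0] * psi[0] + m[0][1] * psi[1],
            m[1][0] * psi[0] + m[1][1] * psi[1]]

def reflect(psi, n, phase_per_photon):
    # model assumption: n reflected photons imprint phase n*phi on |down_z>
    return [psi[0], cmath.exp(1j * n * phase_per_photon) * psi[1]]

def sort_photon_number(n):
    # QND1: parity measurement (phase pi per photon), R_y(pi/2) readout
    a1 = apply(ry(math.pi / 2), [1, 0])          # |up_z> -> |up_x>
    a1 = reflect(a1, n, math.pi)
    a1 = apply(ry(math.pi / 2), a1)
    odd = abs(a1[UP]) ** 2 > 0.5                 # |up_z> heralds odd n
    # QND2: detuned cavity, phase pi/2 per photon; the final rotation
    # axis is conditioned on the parity outcome of QND1
    a2 = apply(ry(math.pi / 2), [1, 0])
    a2 = reflect(a2, n, math.pi / 2)
    a2 = apply(rx(math.pi / 2) if odd else ry(math.pi / 2), a2)
    up2 = abs(a2[UP]) ** 2 > 0.5
    if odd:
        return 1 if up2 else 3
    return 2 if up2 else 0

# the two atomic readouts together resolve n = 0, 1, 2, 3
assert [sort_photon_number(n) for n in range(4)] == [0, 1, 2, 3]
```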
Fig. \ref{fig:photonsorter} shows the dichotomy tree and the quantum circuit diagram of the devised protocol. It should be noted that the losses between the two setups need to be lowered substantially for this protocol. Furthermore, improved cavities with higher reflectivities are necessary. Nevertheless, the protocol is appealing since it can provide an elegant tool to decompose a coherent state into its different Fock-state contributions in a heralded fashion.
\end{document} | math | 53,248 |
\begin{document}
\title{Graded Frobenius cluster categories}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{Email: \url{[email protected]}. Website: \url{http://www.maths.lancs.ac.uk/~grabowsj/}}
\footnotetext[3]{Email: \url{[email protected]}. Website: \url{http://www.iaz.uni-stuttgart.de/LstAGeoAlg/Pressland/}}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\begin{abstract} Recently the first author studied multi-gradings for generalised cluster categories, these being 2-Calabi--Yau triangulated categories with a choice of cluster-tilting object. The grading on the category corresponds to a grading on the cluster algebra without coefficients categorified by the cluster category and hence knowledge of one of these structures can help us study the other.
In this work, we extend the above to certain Frobenius categories that categorify cluster algebras with coefficients. We interpret the grading K-theoretically and prove similar results to the triangulated case, in particular obtaining that degrees are additive on exact sequences.
We show that the categories of Buan, Iyama, Reiten and Scott, some of which were used by Gei\ss, Leclerc and Schr\"oer to categorify cells in partial flag varieties, and those of Jensen, King and Su, categorifying Grassmannians, are examples of graded Frobenius cluster categories.
\noindent MSC (2010): 13F60 (Primary), 18E30, 16G70 (Secondary)
\end{abstract}
\section{Introduction}
Gradings for cluster algebras have been introduced in various ways by a number of authors and for a number of purposes. The evolution of the notion started with the foundational work of Fomin and Zelevinsky \cite{FZ-CA1}, who consider $\ensuremath{\mathbb{Z}}^{n}$-gradings where $n$ is precisely the rank of the cluster algebra. Shortly afterwards, in the course of considering Poisson structures compatible with cluster algebras, Gekhtman, Shapiro and Vainshtein \cite{GSV-CA-Poisson} gave a definition of a toric action on a cluster algebra, which dualises to that of a $\ensuremath{\mathbb{Z}}^{m}$-grading, where $m$ can now be arbitrary.
In \cite{GradedCAs} the first author examined the natural starting case of finite type cluster algebras without coefficients. A complete classification of the integer multi-gradings that occur was given and it was observed that the gradings so obtained were all \emph{balanced}, that is, there exist bijections between the set of variables of degree $\underline{d}$ and those of degree $-\underline{d}$.
This phenomenon was explained by means of graded generalised cluster categories, where---following \cite{DominguezGeiss}---by generalised cluster category we mean a 2-Calabi--Yau triangulated category $\curly{C}$ with a basic cluster-tilting object $T$. The definition made in \cite{GradedCAs} associates an integer vector (the multi-degree) to an object in the category in such a way that the vectors are additive on distinguished triangles and transform naturally under mutation. This is done via the key fact that every object in a generalised cluster category has a well-defined associated integer vector-valued datum called the index with respect to $T$; in order to satisfy the aforementioned two properties, degrees are necessarily linear functions of the index.
The categorical approach has the advantage that it encapsulates the global cluster combinatorics, or more accurately the set of indices does. Another consequence is an explanation for the observed balanced gradings in finite type: the auto-equivalence of the cluster category given by the shift functor induces an automorphism of the set of cluster variables that reverses signs of degrees. Hence any cluster algebra admitting a (triangulated) cluster categorification necessarily has all its gradings being balanced (providing the set of reachable rigid indecomposable objects, which is in bijection with the set of cluster variables, is closed under the shift functor). This is the case for finite type or, more generally, acyclic cluster algebras having no coefficients.
Our main goal is to provide a version of the above in the Frobenius, i.e.\ exact category, setting, similarly to the triangulated one. A Frobenius category is exact with enough projective objects and enough injective objects, and these classes of objects coincide. From work of Fu and Keller \cite{FuKeller} and the second author \cite{Pressland}, we have a definition of a Frobenius cluster category and objects in such a category also have indices.
From this we may proceed along similar lines to \cite{GradedCAs} to define gradings and degrees, except that we elect to work (a) over an arbitrary abelian group $\mathbb{A}$ and (b) in a more basis-free way by working K-theoretically and with the associated Euler form. We prove the foundational properties of gradings for Frobenius cluster categories: that degrees are compatible with taking the cluster character, that they are additive on exact sequences and that they are compatible with mutation.
Furthermore, we prove an analogue of a result of Palu \cite{Palu-Groth-gp} in which we show that the space of gradings for a graded Frobenius cluster category $\curly{E}$ is closely related to the Grothendieck group, namely that the former is isomorphic to $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_{0}(\curly{E})}{\mathbb{A}}$. This enables one to show that some categorical datum is a grading by seeing that it respects exact sequences, and conversely that from the cluster algebra categorified by $\curly{E}$ we may deduce information about $\mathrm{K}_{0}(\curly{E})$. We exhibit this on examples, notably the categories of Buan, Iyama, Reiten and Scott \cite{BIRS1} corresponding to Weyl group elements, also studied by Gei\ss, Leclerc and Schr\"oer \cite{GLS-PFV} in the context of categorifying cells in partial flag varieties.
The homogeneous coordinate rings of Grassmannians are an example of particular importance in this area. They admit a graded cluster algebra structure but beyond the small number of cases when this structure is of finite type, little is known about the cluster variables. A first step towards a better understanding is to describe how the degrees of the cluster variables are distributed: are the degrees unbounded? does every natural number occur as a degree? are there finitely many or infinitely many variables in each occurring degree? By using the Frobenius cluster categorification of Jensen, King and Su \cite{JKS} and the grading framework here, we can hope to begin to examine these questions.
\section{Preliminaries}
\label{preliminaries}
The construction of a cluster algebra of geometric type from an initial seed $(\underline{x},B)$, due to Fomin and Zelevinsky \cite{FZ-CA1}, is now well-known. Here $\underline{x}=(x_1,\dotsc,x_n)$ is a transcendence base for a certain field of fractions of a polynomial ring, called a cluster, and $B$ is an $n\times r$ integer matrix whose uppermost $r\times r$ submatrix (the principal part of $B$) is skew-symmetrisable. If the principal part of $B$ is skew-symmetric, then $B$ is often replaced by the (ice) quiver $Q=Q(B)$ it defines in the natural way.
We refer the reader who is unfamiliar with this construction to the survey of Keller \cite{Keller-CASurvey} and the books of Marsh \cite{Marsh-book} and of Gekhtman, Shapiro and Vainshtein \cite{GSV-Book} for an introduction to the topic and summaries of the main related results in this area.
We set some notation for later use. For each index $1\leqslant k\leqslant r$, set
\begin{align*}
\underline{b}_{k}^{+} & = -\underline{\boldsymbol{e}}_{k}+\sum_{b_{ik}>0}b_{ik}\underline{\boldsymbol{e}}_{i} \qquad \text{and} \\
\underline{b}_{k}^{-} & = -\underline{\boldsymbol{e}}_{k}-\sum_{b_{ik}<0}b_{ik}\underline{\boldsymbol{e}}_{i},
\end{align*}
where the vector $\underline{\boldsymbol{e}}_{i}\in \ensuremath{\mathbb{Z}}^{n}$ ($n$ being the number of rows of $B$) is the $i$th standard basis vector. Note that the $k$th column of $B$ may be recovered as $B_{k}=\underline{b}_{k}^{+}-\underline{b}_{k}^{-}$.
Then for $1\leqslant k\leqslant r$, the exchange relation for mutation of the seed $(\underline{x},B)$ in the direction $k$ is given by
\[ x_{k}^{\prime}=\underline{x}^{\underline{b}_{k}^{+}}+\underline{x}^{\underline{b}_{k}^{-}}, \]
where for $\underline{a}=(a_{1},\dotsc ,a_{n})$ we set
\[ \underline{x}^{\underline{a}} = \prod_{i=1}^{n} x_{i}^{a_{i}}. \]
If $(\underline{x},B)$ is a seed, we call the elements of $\underline{x}$ cluster variables. The variables $x_1,\dotsc,x_r$ are called mutable, and $x_{r+1},\dots,x_n$ (which appear in the cluster of every seed related to $(\underline{x},B)$ by mutations) are called frozen. Note that while some authors do not consider frozen variables to be cluster variables, it will be convenient for us to do so. We will sometimes also refer to the indices $1,\dotsc,r$ and $r+1,\dotsc,n$ as mutable and frozen respectively.
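To make the notation concrete, the following pure-Python sketch implements the exchange relation $x_{k}^{\prime}=\underline{x}^{\underline{b}_{k}^{+}}+\underline{x}^{\underline{b}_{k}^{-}}$ on numerical cluster values, together with the standard Fomin--Zelevinsky matrix mutation rule (which is not restated above), and checks the well-known fact that in type $A_2$ alternating mutations return to the initial seed, this cluster algebra having exactly five cluster variables.

```python
from fractions import Fraction

def b_vectors(B, k):
    """The vectors b_k^+ and b_k^- defined above (B square, no frozen rows)."""
    n = len(B)
    bp, bm = [0] * n, [0] * n
    bp[k] -= 1
    bm[k] -= 1
    for i in range(n):
        if B[i][k] > 0:
            bp[i] += B[i][k]
        elif B[i][k] < 0:
            bm[i] -= B[i][k]
    return bp, bm

def monomial(x, a):
    """x^a for a Laurent exponent vector a."""
    val = Fraction(1)
    for xi, ai in zip(x, a):
        val *= Fraction(xi) ** ai
    return val

def mutate(x, B, k):
    bp, bm = b_vectors(B, k)
    x = list(x)
    x[k] = monomial(x, bp) + monomial(x, bm)  # the exchange relation
    # Fomin-Zelevinsky matrix mutation (standard rule, not restated above)
    n = len(B)
    B = [[-B[i][j] if k in (i, j)
          else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
          for j in range(n)] for i in range(n)]
    return x, B

# Type A2: alternating mutations recover the initial seed after 10 steps.
x, B = [Fraction(2), Fraction(3)], [[0, 1], [-1, 0]]
x0, B0 = list(x), [row[:] for row in B]
for step in range(10):
    x, B = mutate(x, B, step % 2)
assert x == x0 and B == B0
```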
Throughout, for simplicity, we will assume that all algebras and categories are defined over $\mathbb{C}$. All modules are left modules. For a Noetherian algebra $A$, we denote the abelian category of finitely generated $A$-modules by $\fgmod{A}$. If $B$ is a matrix, we denote its transpose by $B^{t}$.
\subsection{Graded seeds, cluster algebras and cluster categories}\label{s:gradedCAs}
Let $\mathbb{A}$ be an abelian group. The natural definition for an $\mathbb{A}$-graded seed is as follows.
\begin{definition} A multi-graded seed is a triple $(\underline{x},B,G)$ such that
\begin{enumerate}[label=(\alph*)]
\item $(\underline{x}=(x_{1},\dotsc ,x_{n}),B)$ is a seed, and
\item $G\in\mathbb{A}^n$, thought of as a column vector, satisfies $B^{t}G=0$.
\end{enumerate}
The matrix multiplication in (b) makes sense since $\mathbb{A}$ is a $\ensuremath{\mathbb{Z}}$-module. This is most transparent when $\mathbb{A}=\ensuremath{\mathbb{Z}}^m$, so that $G$ is an $n\times m$ integer matrix.
\end{definition}
From now on, unless we particularly wish to emphasise $\mathbb{A}$, we will drop it from the notation and simply use the term ``graded''.
The above data defines $\underline{\degsave}_{G}(x_{i})=G_{i}\in\mathbb{A}$ (the $i$th component of $G$) and this can be extended to rational expressions in the generators $x_{i}$ in the obvious way. Condition (b) ensures that for each $1\leqslant k\leqslant r$, we have $\underline{b}_k^+\cdot G=\underline{b}_k^-\cdot G$, making sense of these dot products via the $\ensuremath{\mathbb{Z}}$-module structure of $\mathbb{A}$, so every exchange relation is homogeneous, and
\[G_k':=\underline{\degsave}(x_k')=\underline{b}_k^+\cdot G=\underline{b}_k^-\cdot G.\]
Thus we can also mutate our grading, and repeated mutation propagates a grading on an initial seed to every cluster variable and to the associated cluster algebra. Hence we obtain the following well-known result, given in various forms in the literature.
\begin{proposition}\label{p:gradedCA} The cluster algebra $\curly{A}(\underline{x},B,G)$ associated to an initial graded seed $(\underline{x},B,G)$ is an $\mathbb{A}$-graded algebra. Every cluster variable of $\curly{A}(\underline{x},B,G)$ is homogeneous with respect to this grading. \qed
\end{proposition}
We refer the reader to \cite{GradedCAs} for a more detailed discussion of the above and further results regarding the existence of gradings, relationships between gradings and a study of $\ensuremath{\mathbb{Z}}$-gradings for cluster algebras of finite type with no coefficients.
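A small numerical sketch of these definitions, with a made-up seed having one frozen variable ($n=3$, $r=2$): condition (b) pins down the grading up to scale, each exchange relation is homogeneous, and the common degree of the two exchange monomials is the degree of the new variable $x_k'$.

```python
# Made-up example seed: B is n x r with rows indexed by all variables
# and columns by the mutable ones; B^t G = 0 forces G = (0, 1, 1) up to scale.
B = [[0, 1], [-1, 0], [1, 0]]
G = [0, 1, 1]
n, r = 3, 2

# grading condition (b): B^t G = 0
assert all(sum(B[i][k] * G[i] for i in range(n)) == 0 for k in range(r))

def exchange_degrees(B, G, k):
    """Degrees of the two exchange monomials x^{b_k^+} and x^{b_k^-}."""
    n = len(B)
    bp, bm = [0] * n, [0] * n
    bp[k] -= 1
    bm[k] -= 1
    for i in range(n):
        if B[i][k] > 0:
            bp[i] += B[i][k]
        elif B[i][k] < 0:
            bm[i] -= B[i][k]
    return (sum(bp[i] * G[i] for i in range(n)),
            sum(bm[i] * G[i] for i in range(n)))

# homogeneity: both monomials in each exchange relation share a degree
for k in range(r):
    dp, dm = exchange_degrees(B, G, k)
    assert dp == dm

assert exchange_degrees(B, G, 0) == (1, 1)    # deg x_1' = 1
assert exchange_degrees(B, G, 1) == (-1, -1)  # deg x_2' = -1
```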
\subsection{Graded triangulated cluster categories}\label{s:graded-triang-cluster-categories}
Our interest here is in generalising the categorical parts of \cite{GradedCAs}, which refer to models of cluster algebras without frozen variables given by $2$-Calabi--Yau triangulated categories, and explain how to interpret the data of a grading on the cluster algebra at this categorical level. Our main goal is to provide a similar theory for stably $2$-Calabi--Yau Frobenius categories, which may be used to model cluster algebras that do have frozen variables.
In order to motivate what will follow for the Frobenius setting, we give the key definitions and statements from the triangulated case, without proofs as these may be found in \cite{GradedCAs}.
\begin{definition}[{\cite{DominguezGeiss}}] Let $\curly{C}$ be a triangulated 2-Calabi--Yau category with suspension functor $\Sigma$ and let $T\in \curly{C}$ be a basic cluster-tilting object. We will call the pair $(\curly{C},T)$ a generalised cluster category.
\end{definition}
Write $T=T_{1}\ensuremath{ \oplus} \dotsm \ensuremath{ \oplus} T_{r}$. Setting $\Lambda=\op{\End{\curly{C}}{T}}$, the functor\footnote{This functor is replaced by $E=F\Sigma$ in \cite{DominguezGeiss}, \cite{GradedCAs}; we use $F$ here, as in \cite{FuKeller}, for greater compatibility with the Frobenius case.} $F=\curly{C}(T,-)\colon \curly{C} \to \fgmod{\Lambda}$ induces an equivalence $\curly{C}/\text{add}(\Sigma T)\to\fgmod{\Lambda}$. We may also define an exchange matrix associated to $T$ by
\[ (B_{T})_{ij}=\dim \text{Ext}_{\Lambda}^{1}(S_{i},S_{j})-\dim \text{Ext}_{\Lambda}^{1}(S_{j},S_{i}). \]
Here the $S_{i}=FT_{i}/\mathop{\mathrm{rad}} FT_{i}$, $i=1,\dotsc ,r$ are the simple $\Lambda$-modules. Thus, if the Gabriel quiver of the algebra $\Lambda$ has no loops or $2$-cycles, $B_T$ is its corresponding skew-symmetric matrix.
For each $X\in \curly{C}$ there exists a distinguished triangle
\[ \bigoplus_{i=1}^{r} T_{i}^{m(i,X)} \to \bigoplus_{i=1}^{r} T_{i}^{p(i,X)} \to X \to \Sigma \left( \bigoplus_{i=1}^{r} T_{i}^{m(i,X)} \right). \]
Define the index of $X$ with respect to $T$, denoted $\ind{T}{X}$, to be the integer vector with $\ind{T}{X}_{i}=p(i,X)-m(i,X)$. By \cite[\S 2.1]{Palu}, $\ind{T}{X}$ is well-defined and we have a cluster character
\begin{align*} C_{?}^{T}\colon \text{Obj}(\curly{C}) &\to \mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{r}^{\pm 1}] \\
X & \mapsto \underline{x}^{\ind{T}{X}}\sum_{\underline{e}} \chi(\mathrm{Gr}_{\underline{e}}(F\Sigma X))\underline{x}^{B_{T}\cdot\underline{e}}
\end{align*}
Here $\mathrm{Gr}_{\underline{e}}(F\Sigma X)$ is the quiver Grassmannian of $\Lambda$-submodules of $F\Sigma X$ of dimension vector $\underline{e}$ and $\chi$ is the topological Euler characteristic. We use the same monomial notation $\underline{x}^{\underline{a}}$ as previously.
We recall that for any cluster-tilting object $U$ of $\curly{C}$, and any indecomposable summand $U_k$ of $U$, there are non-split exchange triangles
\[ U_{k}^{*} \to M \to U_{k} \to \Sigma U_{k}^{*} \qquad \text{and} \qquad U_{k} \to M' \to U_{k}^{*} \to \Sigma U_{k} \]
with $M,M' \in \operatorname{add}(U)$, that glue together to form an Auslander--Reiten $4$-angle
\[U_k\to M'\to M\to U_k\]
in $\curly{C}$ \cite[Definition~3.8]{IyamaYoshino}. If the quiver of $\op{\End{\curly{C}}{U}}$ has no loops or $2$-cycles incident with the vertex corresponding to $U_k$, then $M,M'\in\operatorname{add}(U/U_k)$ and $U^{*}=(U/U_{k})\ensuremath{ \oplus} U_{k}^{*}$ is again cluster-tilting. In the generality of our setting, these results are all due to Iyama and Yoshino \cite{IyamaYoshino}.
The natural definition of a graded generalised cluster category is then the following.
\begin{definition}[{\cite[Definition~5.2]{GradedCAs}}]\label{d:graded-gen-cl-cat} Let $(\curly{C},T)$ be a generalised cluster category and let $G\in\mathbb{A}^r$ such that $B_{T}G=0$. We call the tuple $(\curly{C},T,G)$ a graded generalised cluster category.
\end{definition}
Note that, in this context, $B_{T}$ is square and skew-symmetric, so we may suppress taking the transpose in the equation $B_{T}G=0$.
\begin{definition}[{\cite[Definition~5.3]{GradedCAs}}]\label{d:degree-graded-gen-cl-cat} Let $(\curly{C},T,G)$ be a graded generalised cluster category. For any $X\in \curly{C}$, we define $\underline{\degsave}_{G}(X)=\ind{T}{X}\cdot G$.\end{definition}
The main results about graded generalised cluster categories are summarised in the following Proposition, the most significant of these being \ref{p:prop-of-gen-cc-additive-on-triang}. The proofs in \cite{GradedCAs} are given for $\mathbb{A}=\ensuremath{\mathbb{Z}}^m$, but remain valid in the more general setting.
\begin{proposition}[{\cite[\S5]{GradedCAs}}]\label{p:prop-of-gen-cc} Let $(\curly{C},T,G)$ be a graded generalised cluster category.
\begin{enumerate}[label=(\roman*)]
\item Let $\mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{r}^{\pm 1}]$ be $\mathbb{A}$-graded by $\underline{\degsave}_{G}(x_{i})=G_{i}$ (the $i$th component of $G$). Then for all $X \in \curly{C}$, the cluster character $C_{X}^{T}\in \mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{r}^{\pm 1}]$ is homogeneous of degree $\underline{\degsave}_G(X)$.
\item\label{p:prop-of-gen-cc-additive-on-triang} For any distinguished triangle $X\to Y \to Z \to \Sigma X$ of $\curly{C}$, we have
\[ \underline{\degsave}_{G}(Y)=\underline{\degsave}_{G}(X)+\underline{\degsave}_{G}(Z).\]
\item\label{p:prop-of-gen-cc-mutation} The degree $\underline{\degsave}_{G}$ is compatible with mutation in the sense that for every cluster-tilting object $U$ of $\curly{C}$ with indecomposable summand $U_{k}$ we have
\[ \underline{\degsave}_{G}(U_{k}^{*})=\underline{\degsave}_{G}(M)-\underline{\degsave}_{G}(U_{k})=\underline{\degsave}_{G}(M')-\underline{\degsave}_{G}(U_{k}), \]
where $U_{k}^{*}$, $M$ and $M'$ are as in the above description of exchange triangles in $\curly{C}$.
\item\label{p:prop-of-gen-cc-Groth-gp} The space of gradings for a generalised cluster category $(\curly{C},T)$ may be identified with $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{C})}{\mathbb{A}}$, where $\mathrm{K}_0(\curly{C})$ is the Grothendieck group of $\curly{C}$ as a triangulated category.\footnote{This statement corrects \cite[Proposition~5.5]{GradedCAs} for the case $\mathbb{A}=\ensuremath{\mathbb{Z}}$, which replaces $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{C})}{\ensuremath{\mathbb{Z}}}$ by $\mathrm{K}_0(\curly{C})$ itself. The proof given in \cite{GradedCAs} proves the statement given here for an arbitrary abelian group essentially without modification. An example of $\curly{C}$ for which $\mathrm{K}_0(\curly{C})$ and $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{C})}{\ensuremath{\mathbb{Z}}}$ are non-isomorphic is provided by \cite[Thm.~1.3]{BKL}.}
\item For each $X\in \curly{C}$, $\underline{\degsave}_{G}(\Sigma X)=-\underline{\degsave}_{G}(X)$. That is, for each $d\in \mathbb{A}$, the shift automorphism $\Sigma$ on $\curly{C}$ induces a bijection between the objects of $\curly{C}$ of degree $d$ and those of degree $-d$. \qed
\end{enumerate}
\end{proposition}
Part~\ref{p:prop-of-gen-cc-mutation} of the preceding proposition shows how to mutate the data of $G$ when mutating the cluster-tilting object $T$, to obtain a new grading vector compatible with the exchange matrix of the new cluster-tilting object, defining the same grading on the cluster algebra.
However, we may obtain an even stronger conclusion from part~\ref{p:prop-of-gen-cc-Groth-gp}, since this provides a ``base-point free'' definition of a grading, depending only on the category $\curly{C}$ and not on the cluster-tilting object $T$. Read differently, this shows that if $(\curly{C},T,G)$ is a graded generalised cluster category, then for any cluster-tilting object $T'\in\curly{C}$, there is a unique $G'\in\mathbb{A}^r$ such that $(\curly{C},T',G')$ is a graded generalised cluster category and $\underline{\degsave}_G(X)=\underline{\degsave}_{G'}(X)$ for all $X\in\curly{C}$. We will explain this in more detail below in the case of Frobenius categories.
If $\curly{H}$ is the category of coherent sheaves on a weighted projective line with all weights odd, then the Grothendieck group of the cluster category $\curly{C}$ of $\curly{H}$ is a non-zero quotient of $\ensuremath{\mathbb{Z}}_2\oplus\ensuremath{\mathbb{Z}}_2$ \cite[Theorem~1.3]{BKL}. (If one only imposes relations coming from triangles obtained by projecting triangles of the derived category of $\curly{C}$ to $\curly{H}$, then one obtains exactly $\ensuremath{\mathbb{Z}}_2\oplus\ensuremath{\mathbb{Z}}_2$ \cite[Proposition~3.7(ii)]{BKL}, but $\curly{C}$ may have more triangles than these.) By part~\ref{p:prop-of-gen-cc-Groth-gp} of the preceding proposition, this cluster category $\curly{C}$ admits no $\ensuremath{\mathbb{Z}}$-gradings, but does admit $\ensuremath{\mathbb{Z}}_2$-gradings. In fact \cite[Proposition 3.10(ii)]{BKL}, any such grading is a linear combination of the functions giving the degree and rank of a sheaf modulo $2$.
Part (v) of Proposition~\ref{p:prop-of-gen-cc} shows that for cluster algebras admitting a categorification by a generalised cluster category $(\curly{C},T)$ such that the mutation class of $T$ is closed under the shift functor $\Sigma$, all gradings must be balanced, meaning that for any $d\in\mathbb{A}$, the cluster variables of degree $d$ are in bijection with those of degree $-d$.
If $Q$ admits a nondegenerate Jacobi-finite potential $W$, then the corresponding cluster algebra is categorified by the Amiot cluster category $\curly{C}_{Q,W}$, which has a cluster-tilting object $T$ whose endomorphism algebra is the Jacobian algebra of $(Q,W)$ \cite{Amiot}. If $Q$ admits a maximal green sequence, then it provides a sequence of mutations from $T$ to $\Sigma T$ in $\curly{C}_{Q,W}$, so the mutation class of $T$ is closed under $\Sigma$ \cite[Proposition~5.17]{Keller-QD}. It follows that all gradings of the cluster algebra associated to $Q$ are balanced. All of these assumptions hold, for example, when $Q$ is a finite acyclic quiver (so $W=0$); for the statement about maximal green sequences, see Br\"{u}stle, Dupont and P\'{e}rotin \cite[Lemma~2.20]{BDP}.
Conversely, we can use gradings to show that certain cluster algebras cannot admit a categorification as above. For example, the Markov cluster algebra, all of whose exchange matrices are given by
\[B=\begin{pmatrix}0&2&-2\\-2&0&2\\2&-2&0\end{pmatrix}\]
or its negative, admits the grading $(1,1,1)$. This is an integer grading under which all cluster variables have strictly positive degrees, so it is not balanced. While the Markov quiver associated to $B$ has a non-degenerate potential for which the resulting (completed) Jacobian algebra is finite dimensional, and thus has an associated Amiot cluster category $\curly{C}$, this category has exactly two mutation classes of cluster-tilting objects. (One can also realise this Jacobian algebra as that coming from a tagged triangulation of the once-punctured torus; such triangulations can include tagged arcs or not, but it is not possible to mutate a triangulation without tagged arcs into one with tagged arcs, giving another explanation for the existence of these two mutation classes.) The shift functor on $\curly{C}$ takes rigid indecomposable objects appearing as summands in one mutation class (which correspond to cluster variables) to rigid indecomposables from the other class (which do not), allowing the existence of a non-balanced grading on the cluster algebra.
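This can be checked numerically. In the sketch below (using the standard matrix mutation rule, not restated in this paper), the degree of the new variable at each step is computed as the common degree $\underline{b}_k^+\cdot G$ of the two exchange monomials; for the Markov matrix with $G=(1,1,1)$ the grading vector is unchanged by any mutation, so every cluster variable has degree $1$ and the grading is indeed not balanced.

```python
def mutate_BG(B, G, k):
    """Mutate the exchange matrix and grading vector in direction k."""
    n = len(B)
    bp = [0] * n
    bp[k] -= 1
    for i in range(n):
        if B[i][k] > 0:
            bp[i] += B[i][k]
    G2 = list(G)
    G2[k] = sum(bp[i] * G[i] for i in range(n))  # degree of the new variable
    # standard Fomin-Zelevinsky matrix mutation
    B2 = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    return B2, G2

B = [[0, 2, -2], [-2, 0, 2], [2, -2, 0]]  # the Markov matrix
G = [1, 1, 1]
assert all(sum(B[i][k] * G[i] for i in range(3)) == 0 for k in range(3))

for k in [0, 1, 2, 0, 2, 1, 0, 0, 1]:  # an arbitrary mutation sequence
    B, G = mutate_BG(B, G, k)
    assert G == [1, 1, 1]  # every cluster variable has degree 1 > 0
```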
It has been shown by Ladkani that many of these properties hold more generally for quivers arising from triangulations of punctured surfaces \cite{Ladkani}.
\section{Graded Frobenius cluster categories}
In this section, we provide the main technical underpinnings for the Frobenius version of the above theory, in which we consider exact categories rather than triangulated ones. Background on exact categories, and homological algebra in them, can be found in B\"{u}hler's survey \cite{Buehler}.
An exact category $\curly{E}$ is called a Frobenius category if it has enough projective objects and enough injective objects, and these two classes of objects coincide. A typical example of such a category is the category of finite dimensional modules over a finite dimensional self-injective algebra. More generally, if $B$ is a Noetherian algebra with finite left and right injective dimension as a module over itself (otherwise known as an Iwanaga--Gorenstein algebra), the category
\[\operatorname{GP}(B)=\{X\in\fgmod{B}:\Ext{i}{B}{X}{B}=0,\ i>0\},\]
is Frobenius \cite{Buchweitz}. (Here $\operatorname{GP}(B)$ is equipped with the exact structure in which the exact sequences are precisely those that are exact when considered in the abelian category $\fgmod{B}$.) The initials ``GP'' are chosen for ``Gorenstein projective''.
Given a Frobenius category $\curly{E}$, its stable category $\underline{\curly{E}}$ is formed by taking the quotient of $\curly{E}$ by the ideal of morphisms factoring through a projective-injective object. By a famous result of Happel \cite[Theorem~2.6]{Happelbook}, $\underline{\curly{E}}$ is a triangulated category with shift functor $\Omega^{-1}$, where $\Omega^{-1}X$ is defined by the existence of an exact sequence
\[0\to X\to Q \to\Omega^{-1}X\to0\]
in which $Q$ is injective. The distinguished triangles of $\underline{\curly{E}}$ are isomorphic to those of the form
\[X\to Y\to Z\to\Omega^{-1}X\]
where
\[0\to X\to Y\to Z\to 0\]
is a short exact sequence in $\curly{E}$.
\begin{definition}
A Frobenius category $\curly{E}$ is stably $2$-Calabi--Yau if the stable category $\underline{\curly{E}}$ is Hom-finite and there is a functorial duality
\[\mathrm{D}\Ext{1}{\curly{E}}{X}{Y}=\Ext{1}{\curly{E}}{Y}{X}\]
for all $X,Y\in\curly{E}$.
\end{definition}
\begin{remark} The above definition is somewhat slick---it is equivalent to requiring that $\underline{\curly{E}}$ is $2$-Calabi--Yau as a triangulated category (that is, that $\underline{\curly{E}}$ is Hom-finite and $\Omega^{-2}$ is a Serre functor), as one might expect.
\end{remark}
Let $\curly{E}$ be a stably $2$-Calabi--Yau Frobenius category. If $U$ is cluster-tilting in $\curly{E}$, then it is also cluster-tilting in the $2$-Calabi--Yau triangulated category $\underline{\curly{E}}$, and a summand $U_k$ of $U$ is indecomposable in $\underline{\curly{E}}$ if and only if it is indecomposable and non-projective in $\curly{E}$. Thus for any cluster-tilting object $U$ of $\curly{E}$ and for any non-projective indecomposable summand $U_{k}$ of $U$, we can lift the exchange triangles involving $U_k$ from $\underline{\curly{E}}$ to $\curly{E}$, and obtain exchange sequences
\[0\to U_{k}^{*} \to M \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to M' \to U_{k}^{*} \to 0 \]
with $M,M' \in\operatorname{add}{(U)}$. If the quiver of $\op{\End{\curly{E}}{U}}$ has no loops or $2$-cycles incident with the vertex corresponding to $U_k$, then $U_k'=U/U_k\oplus U_k^*$ is again cluster-tilting, just as in the triangulated case.
Fu and Keller \cite{FuKeller} give the following definition of a cluster character on a stably $2$-Calabi--Yau Frobenius category.
\begin{definition}[{\cite[Definition~3.1]{FuKeller}}]
Let $\curly{E}$ be a stably $2$-Calabi--Yau Frobenius category, and let $R$ be a commutative ring. A cluster character on $\curly{E}$ is a map $\varphi$ on the set of objects of $\curly{E}$, taking values in $R$, such that
\begin{itemize}
\item[(i)]if $M\cong M'$ then $\varphi_{M}=\varphi_{M'}$,
\item[(ii)]$\varphi_{M\oplus N}=\varphi_{M}\varphi_{N}$, and
\item[(iii)]if $\dim\Ext{1}{\curly{E}}{M}{N}=1$ (equivalently, $\dim\Ext{1}{\curly{E}}{N}{M}=1$), and
\begin{align*}
&0\to M\to X\to N\to 0,\\
&0\to N\to Y\to M\to 0
\end{align*}
are non-split sequences, then
\[\varphi_{M}\varphi_{N}=\varphi_{X}+\varphi_{Y}.\]
\end{itemize}
\end{definition}
Let $\curly{E}$ be a stably $2$-Calabi--Yau Frobenius category, and assume there exists a cluster-tilting object $T\in\curly{E}$. Assume without loss of generality that $T$ is basic, and let $T=\bigoplus_{i=1}^nT_i$ be a decomposition of $T$ into pairwise non-isomorphic indecomposable summands. We number the summands so that $T_i$ is projective if and only if $r<i\leqslant n$. Let $\Lambda=\op{\End{\curly{E}}{T}}$, and $\underline{\Lambda}=\op{\End{\underline{\curly{E}}}{T}}=\Lambda/\Lambda e\Lambda$, where $e$ is the idempotent given by projection onto the maximal projective-injective summand $\bigoplus_{i=r+1}^nT_i$ of $T$.
We assume that $\Lambda$ is Noetherian, as with this assumption the forms discussed below will be well-defined. The examples that concern us later will have Noetherian $\Lambda$, but we acknowledge that this assumption is somewhat unsatisfactory, given that it is often difficult to establish.
Fu and Keller \cite{FuKeller} show that such a $T$ determines a cluster character on $\curly{E}$, as we now explain; while the results of \cite{FuKeller} are stated in the case that $\curly{E}$ is Hom-finite, the assumption that $\Lambda$ is Noetherian is sufficient providing one is careful to appropriately distinguish between the two Grothendieck groups $\mathrm{K}_0(\fgmod{\Lambda})$ and $\mathrm{K}_0(\fd{\Lambda})$ of finitely generated and finite dimensional $\Lambda$-modules respectively.
We write
\begin{align*}
F&=\Hom{\curly{E}}{T}{-}\colon\curly{E}\to\fgmod{\Lambda},\\
E&=\Ext{1}{\curly{E}}{T}{-}\colon\curly{E}\to\fgmod{\Lambda}.
\end{align*}
Note that $E$ may also be expressed as $\Hom{\underline{\curly{E}}}{T}{\Omega^{-1}(-)}$, meaning it takes values in $\fgmod{\underline{\Lambda}}$. For $M\in\fgmod{\Lambda}$ and $N\in\fd{\Lambda}$, we write
\begin{align*}
\ip{M}{N}_1&=\dim\Hom{\Lambda}{M}{N}-\dim\Ext{1}{\Lambda}{M}{N},\\
\ip{M}{N}_3&=\dim\Hom{\Lambda}{M}{N}-\dim\Ext{1}{\Lambda}{M}{N}+\dim\Ext{2}{\Lambda}{M}{N}-\dim\Ext{3}{\Lambda}{M}{N}.
\end{align*}
The algebra $\underline{\Lambda}=\op{\End{\underline{\curly{E}}}{T}}$ is finite dimensional since $\underline{\curly{E}}$ is Hom-finite, so $\fgmod{\underline{\Lambda}}\subseteq\fd\Lambda$. Fu and Keller show \cite[Proposition~3.2]{FuKeller} that if $M\in\fgmod{\underline{\Lambda}}$, then $\ip{M}{N}_3$ depends only on the dimension vector $(\dim\Hom{\Lambda}{P_i}{M})_{i=1}^n$, where the $P_i=FT_i$ are a complete set of indecomposable projective $\Lambda$-modules. Thus if $v\in\ensuremath{\mathbb{Z}}^r$, we define
\[\ip{v}{N}_3:=\ip{M}{N}_3\]
for any $M\in\fgmod{\underline{\Lambda}}$ with dimension vector $v$.
Let $R=\mathbb{C}[x_1^{\pm1},\dotsc,x_n^{\pm1}]$ be the ring of Laurent polynomials in $x_1,\dotsc,x_n$. Define a map $X\mapsto C^T_{X}$ on objects of $\curly{E}$, taking values in $R$, via the formula
\[C^T_{X}=\prod_{i=1}^nx_i^{\ip{FX}{S_i}_1}\sum_{v\in\ensuremath{\mathbb{Z}}^r}\chi(\text{Gr}_v(EX))\prod_{i=1}^nx_i^{-\ip{v}{S_i}_3}.\]
Here, as before, $\text{Gr}_v(EX)$ denotes the projective variety of submodules of $EX$ with dimension vector $v$, and $\chi(\text{Gr}_v(EX))$ denotes its Euler characteristic. The modules $S_i=FT_i/\mathop{\mathrm{rad}} FT_i$ are the simple tops of the projective modules $P_i$. By \cite[Theorem~3.3]{FuKeller}, the map $X\mapsto C^T_{X}$ is a cluster character, with the property that $C^T_{T_i}=x_i$.
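Indeed, this last property can be verified directly from the formula: since $T$ is cluster-tilting we have $ET_j=\Ext{1}{\curly{E}}{T}{T_j}=0$, so only $v=0$ contributes to the sum, with $\text{Gr}_0(0)$ a point and $\ip{0}{S_i}_3=0$, while $\ip{FT_j}{S_i}_1=\dim\Hom{\Lambda}{P_j}{S_i}=\delta_{ij}$ since $P_j$ is projective. Thus
\[C^T_{T_j}=\prod_{i=1}^nx_i^{\delta_{ij}}=x_j.\]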
The cluster-tilting object $T$ also determines an index for each object $X\in\curly{E}$. To see that this quantity is well-defined we will use the following lemma, the proof of which is included for the convenience of the reader.
\begin{lemma}
\label{approximations-are-admissible}
Let $\curly{E}$ be an exact category, and let $M,T\in\curly{E}$.
\begin{itemize}
\item[(i)]If there exists an admissible epimorphism $T'\to M$ for $T'\in\operatorname{add}{T}$, then any right $\operatorname{add}{T}$-approximation of $M$ is an admissible epimorphism.
\item[(ii)]If there exists an admissible monomorphism $M\to T'$ for $T'\in\operatorname{add}{T}$, then any left $\operatorname{add}{T}$-approximation of $M$ is an admissible monomorphism.
\end{itemize}
\end{lemma}
\begin{proof}
We prove only (i), as (ii) is dual. Pick an admissible epimorphism $\pi\colon T'\to M$ with $T'\in\operatorname{add}{T}$ and a right $\operatorname{add}{T}$-approximation $f\colon R\to M$. Consider the pullback square
\[\begin{tikzcd}[column sep=20pt]
X\arrow{r}{g}\arrow{d}{\pi'}&T'\arrow{d}{\pi}\\
R\arrow{r}{f}&M
\end{tikzcd}\]
As $f$ is a right $\operatorname{add}{T}$-approximation, there is a map $h\colon T'\to R$ such that the square
\[\begin{tikzcd}[column sep=20pt]
T'\arrow{r}{1}\arrow{d}{h}&T'\arrow{d}{\pi}\\
R\arrow{r}{f}&M
\end{tikzcd}\]
commutes, and so by the universal property of pullbacks, there is $g'\colon T'\to X$ such that $gg'=1$. Thus $g$ is a split epimorphism, fitting into an exact sequence
\[\begin{tikzcd}[column sep=20pt]
0\arrow{r}&K\arrow{r}{i}&X\arrow{r}{g}&T'\arrow{r}&0.
\end{tikzcd}\]
It then follows, again by the universal property of pullbacks, that $\pi'i$ is a kernel of $f$. Since $f\pi'=\pi g$ is the composition of two admissible epimorphisms, $f$ is itself an admissible epimorphism by the obscure axiom \cite[A.1]{KellerCC}, \cite[(Dual of) Proposition~2.16]{Buehler}.
\end{proof}
Given an object $X\in\curly{E}$, we may pick a minimal right $\operatorname{add}{T}$-approximation $R_X\to X$, where $R_X$ is determined up to isomorphism by $X$ and the existence of such a morphism. Let $P\to X$ be an admissible epimorphism with $P$ projective, which exists since $\curly{E}$ has enough projectives, and note that $P\in\operatorname{add}{T}$ since $T$ is cluster-tilting. Thus by Lemma~\ref{approximations-are-admissible}, the approximation $R_X\to X$ is an admissible epimorphism, and so there is an exact sequence
\[0\to K_X\to R_X\to X\to0\]
in $\curly{E}$. Since $T$ is cluster-tilting, $K_X\in\operatorname{add}{T}$, and we define $\ind{T}{X}=[R_X]-[K_X]\in\mathrm{K}_0(\operatorname{add}{T})$. It is crucial here that $\ind{T}{X}$ is defined in $\mathrm{K}_0(\operatorname{add}{T})$, rather than in $\mathrm{K}_0(\curly{E})$ where it would simply be equal to $[X]$.
We also associate to $T$ the exchange matrix $B_T$ given by the first $r$ columns of the antisymmetrisation of the incidence matrix of the quiver of $\Lambda$. By definition, $B_T$ has entries
\[(B_T)_{ij}=\dim\Ext{1}{\Lambda}{S_i}{S_j}-\dim\Ext{1}{\Lambda}{S_j}{S_i}\]
for $1\leqslant i\leqslant n$ and $1\leqslant j\leqslant r$.
\begin{definition}[{cf.\ \cite[Definition~3.3]{Pressland}}]\label{d:Frob-cl-cat}
A Frobenius category $\curly{E}$ is a Frobenius cluster category if it is Krull--Schmidt, stably $2$-Calabi--Yau and satisfies $\operatorname{gldim}(\op{\End{\curly{E}}{T}})\leqslant 3$ for all cluster-tilting objects $T\in\curly{E}$, of which there is at least one.
\end{definition}
Note that a Frobenius cluster category $\curly{E}$ need not be Hom-finite, but the stable category $\underline{\curly{E}}$ must be, since this is part of the definition of being stably $2$-Calabi--Yau.
Let $\curly{E}$ be a Frobenius cluster category. Let $T=\bigoplus_{i=1}^n T_{i}\in \curly{E}$ be a basic cluster-tilting object, where each $T_i$ is indecomposable and is projective-injective if and only if $i>r$, let $\Lambda=\op{\End{\curly{E}}{T}}$ be its endomorphism algebra, and let $\underline{\Lambda}=\op{\End{\underline{\curly{E}}}{T}}$ be its stable endomorphism algebra. We continue to write $F=\Hom{\curly{E}}{T}{-}\colon \curly{E}\to\fgmod{\Lambda}$ and $E=\Ext{1}{\curly{E}}{T}{-}\colon\curly{E}\to\fgmod{\underline{\Lambda}}$. Since $\underline{\curly{E}}$ is Hom-finite, $\underline{\Lambda}$ is a finite dimensional algebra.
The Krull--Schmidt property for $\curly{E}$ is equivalent to $\curly{E}$ being idempotent complete and having the property that the endomorphism algebra $A$ of any of its objects is a semiperfect ring \cite[Corollary~4.4]{KrauseKS}, meaning there is a complete set $\{e_i:i\in I\}$ of pairwise orthogonal idempotents of $A$ such that $e_iAe_i$ is local for each $i\in I$. For many representation-theoretic purposes, semiperfect $\mathbb{K}$-algebras behave in much the same way as finite dimensional ones; for example, if $A$ is semiperfect then the quotient $A/\mathop{\mathrm{rad}}{A}$ is semi-simple, and its idempotents lift to $A$. For more background on semiperfect rings, see, for example, Anderson and Fuller \cite[Chapter~27]{AndersonFuller-Book}.
For us, a key property of a semiperfect ring $A$ is that the $A$-modules $Ae_i/\mathop{\mathrm{rad}}{Ae_i}$ (respectively, their projective covers $Ae_i$) form a complete set of finite dimensional simple $A$-modules (respectively indecomposable projective $A$-modules) up to isomorphism \cite[Proposition~27.10]{AndersonFuller-Book}. As we will require this later, we include being Krull--Schmidt in our definition of a Frobenius cluster category, noting that other work in this area---notably the original definition in \cite{Pressland}---requires only idempotent completeness.
Since $\Lambda$ is Noetherian and $\operatorname{gldim}{\Lambda}\leqslant 3$, the Euler form
\[\ip{M}{N}_e=\sum_{i\geqslant0}(-1)^i\dim\Ext{i}{\Lambda}{M}{N}\]
is well-defined as a map $\mathrm{K}_0(\fgmod\Lambda)\times \mathrm{K}_0(\fd\Lambda)\to\ensuremath{\mathbb{Z}}$, and coincides with the form $\ip{-}{-}_3$ introduced earlier.
\begin{remark}
\label{exchangematrixgivesip}
By a result of Keller and Reiten \cite[\S4]{KellerReiten} (see also \cite[Theorem~3.4]{Pressland}), $\fgmod{\Lambda}$ has enough $3$-Calabi--Yau symmetry for us to deduce that $\dim\Ext{k}{\Lambda}{S_i}{S_j}=\dim\Ext{3-k}{\Lambda}{S_j}{S_i}$ when $1\leqslant j\leqslant r$. It follows that
\[(-B_T)_{ij}=\ip{S_i}{S_j}_3=\ip{S_i}{S_j}_e,\]
so the matrix of $\ip{-}{-}_e$, when restricted to the span of the simple modules in the first entry and the span of the first $r$ simple modules in the second entry, is given by $-B_T$.
\end{remark}
One can show by taking projective resolutions that the classes $[P_i]$ of indecomposable projective $\Lambda$-modules span $\mathrm{K}_0(\fgmod{\Lambda})$. Moreover, since $\ip{P_i}{S_j}_e=\delta_{ij}$, any $x\in \mathrm{K}_0(\fgmod{\Lambda})$ has a unique expression
\[x=\sum_{i=1}^n\ip{x}{S_{i}}_{e}[P_i]\]
as a linear combination of the $[P_i]$, and so these classes in fact freely generate $\mathrm{K}_0(\fgmod{\Lambda})$.
Recall from the definition of the index that if $X\in\curly{E}$, there is an exact sequence
\[0\to K_X\to R_X\to X\to 0\]
in which $K_X$ and $R_X$ lie in $\operatorname{add}{T}$. Since $E$ vanishes on $\operatorname{add}{T}$, the functor $F$ takes the above sequence to a projective resolution
\[0\to FK_X\to FR_X\to FX\to 0\]
of $FX$ in $\fgmod{\Lambda}$. Thus $FX$ has projective dimension at most $1$, and so $\ip{FX}{-}_1=\ip{FX}{-}_e$. We can therefore rewrite the cluster character of $X$ as
\[C^T_{X}=\prod_{i=1}^nx_i^{\ip{FX}{S_i}_e}\sum_{v\in\ensuremath{\mathbb{Z}}^r}\chi(\text{Gr}_v(EX))\prod_{i=1}^nx_i^{-\ip{v}{S_i}_e}.\]
We now proceed to define gradings for Frobenius cluster categories. We can follow the same approach as in the triangulated case, using the index. Moreover, by \cite{FuKeller}, we have the following expansion of the index in terms of the classes of the indecomposable summands of $T$:
\[ \ind{T}{X}=\sum_{i=1}^n \ip{FX}{S_i}_{e}[T_{i}]\in \mathrm{K}_0(\operatorname{add}{T}). \]
Since $\Ext{1}{\curly{E}}{T}{T}=0$, there are no non-split exact sequences in $\operatorname{add}{T}$, and so $\mathrm{K}_0(\operatorname{add}{T})$ is freely generated by the $[T_i]$. For the same reason, the functor $F$ is exact when restricted to $\operatorname{add}{T}$, and so induces a map $F_{\ast}\colon \mathrm{K}_{0}(\operatorname{add}{T}) \to \mathrm{K}_{0}(\fgmod{\Lambda})$, which takes $[T_i]$ to $[P_i]$, and so is an isomorphism. Applying this isomorphism to the above formula, we obtain $F_{\ast}(\ind{T}{X})=\sum \ip{FX}{S_{i}}_{e}[P_{i}]=[FX]$.
From this we see that, if one wishes to work concretely with matrix and vector entries, the index can be computed explicitly. For the general theory, however, the equivalent K-theoretic expression is cleaner, and so we shall phrase our definition of grading in those terms; the above observation shows that this is equivalent to the approach in \cite{GradedCAs}.
We will define our $\mathbb{A}$-gradings to be certain elements of $\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$. To state a suitable compatibility condition, it will be necessary to extend the Euler form to a $\ensuremath{\mathbb{Z}}$-bilinear form $\mathrm{K}_0(\fgmod{\Lambda})\times(\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A})\to\mathbb{A}$. In the by now familiar way, we do this using the $\ensuremath{\mathbb{Z}}$-module structure on $\mathbb{A}$, and, abusing notation, define
\[\ip{x}{\sum y_i\otimes a_i}_e=\sum \ip{x}{y_i}_ea_i.\]
It is straightforward to check that this form is well-defined and $\ensuremath{\mathbb{Z}}$-linear in each variable.
Thus we arrive at the following definition of a graded Frobenius cluster category, exactly analogous to Definitions~\ref{d:graded-gen-cl-cat} and \ref{d:degree-graded-gen-cl-cat} in the triangulated case.
\begin{definition} Let $\curly{E}$ be a Frobenius cluster category and $T$ a cluster-tilting object of $\curly{E}$ such that $\Lambda=\op{\End{\curly{E}}{T}}$ is Noetherian. We say that $G\in\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$ is a grading for $\curly{E}$ if $\ip{M}{G}_{e}=0$ for all $M\in\fgmod{\underline{\Lambda}}$. We call $(\curly{E},T,G)$ a graded Frobenius cluster category.
\end{definition}
\begin{definition} Let $(\curly{E},T,G)$ be a graded Frobenius cluster category. Define $\underline{\degsave}_{G}\colon \curly{E} \to \mathbb{A}$ by $\underline{\degsave}_{G}(X)=\ip{FX}{G}_{e}$.
\end{definition}
We record some straightforward consequences of the above definitions.
\begin{remark}\label{grading-remarks} {\ }
\begin{enumerate}[label=(\roman*)]
\item When considering $\ensuremath{\mathbb{Z}}$-gradings, we may use the natural isomorphism $\mathrm{K}_0(\fd\Lambda)\otimes_\ensuremath{\mathbb{Z}}\ensuremath{\mathbb{Z}}\stackrel{\sim}{\to}\mathrm{K}_0(\fd\Lambda)$ to think of a grading as an element of the Grothendieck group itself. Similarly, we can think of $\ensuremath{\mathbb{Z}}^m$-gradings as elements of $\mathrm{K}_0(\fd\Lambda)^m$.
\item Using the basis of simples for $\mathrm{K}_0(\fd{\Lambda})$, we can write $G=\sum_{i=1}^n[S_i]\otimes G_i$ for some unique $G_i\in\mathbb{A}$. Writing $\underline{G}\in\mathbb{A}^n$ for the column vector with entries $G_i$, the grading condition is equivalent to requiring $B_T^t\underline{G}=0$, by Remark~\ref{exchangematrixgivesip} and the fact that $\underline{\Lambda}$ is finite dimensional.
\item Let $G_i$ be as in (ii). Since $FT_{i}=P_{i}$ and $\ip{P_i}{S_j}_{e}=\delta_{ij}$, we may compute
\[\underline{\degsave}_{G}(T_{i})=\ip{FT_{i}}{G}_{e}=G_i,\]
as expected.
\end{enumerate}
\end{remark}
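For later use, we note that the degree can be computed explicitly from the expansion of $[FX]$ in the basis of projectives: by the formula $[FX]=\sum_{i=1}^n\ip{FX}{S_i}_{e}[P_i]$ above and the identity $\ip{P_i}{G}_{e}=G_i$, we have
\[\underline{\degsave}_{G}(X)=\ip{FX}{G}_{e}=\sum_{i=1}^n\ip{FX}{S_i}_{e}\,G_i,\]
which recovers Remark~\ref{grading-remarks}(iii) on taking $X=T_i$.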
The K-theoretic phrasing of the above definition leads us to the following observation.
\begin{lemma}\label{l:proj-inj-grading} Let $\curly{E}$ be a Hom-finite Frobenius cluster category, let $T\in\curly{E}$ be a cluster-tilting object with endomorphism algebra $\Lambda$ and let $V\in\curly{E}$ be projective-injective. Write $F=\Hom{\curly{E}}{T}{-}$. Then $[FV]\in \mathrm{K}_{0}(\fd \Lambda)$ is a $\ensuremath{\mathbb{Z}}$-grading for $\curly{E}$, and $\underline{\degsave}_{[FV]}(X)=\dim\Hom{\curly{E}}{X}{V}$.
\end{lemma}
\begin{proof} Letting $M\in \fgmod \underline{\Lambda}$, we need to check that $\ip{M}{FV}_{e}=0$. By the internal Calabi--Yau property of $\fgmod \Lambda$ (see Remark~\ref{exchangematrixgivesip}), we may instead check that $\ip{FV}{M}_{e}=0$. Firstly, $\Ext{i}{\Lambda}{FV}{M}=0$ for $i>0$ since $FV$ is projective.
Recall from above that there is an idempotent $e\in\Lambda$, given by projecting onto a maximal projective summand of $T$, such that $\underline{\Lambda}=\Lambda/\Lambda e\Lambda$. Using this, $FV\in\operatorname{add}\Lambda e$ by the definition of $e$, and $\Hom{\Lambda}{\Lambda e}{M}=eM=0$ since $M$ is a $\underline{\Lambda}$-module. Hence $\Hom{\Lambda}{FV}{M}=0$ also, so that $\ip{FV}{M}_{e}=\ip{M}{FV}_{e}=0$ as required.
By definition, $\underline{\degsave}_{[FV]}(X)=\dim\Hom{\Lambda}{FX}{FV}$ for $X\in\curly{E}$. Since $T$ is cluster-tilting, we have the short exact sequence
\[0\to K_X\to R_X\to X\to 0,\]
with $K_X,R_X\in\operatorname{add}{T}$, used to define the index. Applying $\Hom{\curly{E}}{-}{V}$, we obtain the exact sequence
\[0\to\Hom{\curly{E}}{X}{V}\to\Hom{\curly{E}}{R_X}{V}\to\Hom{\curly{E}}{K_X}{V}.\]
Alternatively, we can apply $\Hom{\Lambda}{F{-}}{FV}$ to obtain the exact sequence
\[0\to\Hom{\Lambda}{FX}{FV}\to\Hom{\Lambda}{FR_X}{FV}\to\Hom{\Lambda}{FK_X}{FV}.\]
Since $F$ restricts to an equivalence on $\operatorname{add}{T}$, and $V\in\operatorname{add}{T}$ since it is projective-injective, the right-hand maps in these two exact sequences are isomorphic, yielding an isomorphism $\Hom{\curly{E}}{X}{V}\cong\Hom{\Lambda}{FX}{FV}$ of their kernels, from which the result follows.
\end{proof}
This gives us a family of $\ensuremath{\mathbb{Z}}$-gradings canonically associated to any Hom-finite Frobenius cluster category; note that in fact we only need $FV=\Hom{\curly{E}}{T}{V}\in \fd \Lambda$, so for some specific Hom-infinite $\curly{E}$ and specific $V$ and $T$ the result may still hold.
We will give some more examples of gradings later but first give the main results regarding graded Frobenius cluster categories, analogous to those in Proposition~\ref{p:prop-of-gen-cc} for the triangulated case. We treat the straightforward parts first.
\begin{proposition} Let $(\curly{E},T,G)$ be a graded Frobenius cluster category.
\begin{enumerate}[label=(\roman*)]
\item Let $\mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{n}^{\pm 1}]$ be graded by $\underline{\degsave}_{G}(x_{i})=G_i$, where $G_i$ is defined as in Remark~\ref{grading-remarks}(ii). Then for all $X \in \curly{E}$, the cluster character $C_{X}^{T}\in \mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{n}^{\pm 1}]$ is homogeneous of degree $\underline{\degsave}_G(X)$.
\item\label{p:prop-of-gen-cc-additive-on-exact-seq} For any exact sequence $0\to X\to Y \to Z \to 0$ in $\curly{E}$, we have
\[ \underline{\degsave}_{G}(Y)=\underline{\degsave}_{G}(X)+\underline{\degsave}_{G}(Z). \]
\item The degree $\underline{\degsave}_{G}$ is compatible with mutation in the sense that for every cluster-tilting object $U$ of $\curly{E}$ with indecomposable summand $U_{k}$ we have
\[ \underline{\degsave}_{G}(U_{k}^{*})=\underline{\degsave}_{G}(M)-\underline{\degsave}_{G}(U_{k})=\underline{\degsave}_{G}(M')-\underline{\degsave}_{G}(U_{k}), \]
where $U_{k}^{*}$, $M$ and $M'$ are as in the above description of exchange sequences in $\curly{E}$. It follows that $\underline{\degsave}_G(M)=\underline{\degsave}_G(M')$, which is the categorical version of the claim that all exchange relations in a graded cluster algebra are homogeneous.
\end{enumerate}
\end{proposition}
\begin{proof} {\ }
\begin{enumerate}[label=(\roman*)]
\item As usual, for $v\in\ensuremath{\mathbb{Z}}^n$ we write $\underline{x}^v=\prod_{i=1}^nx_i^{v_i}$. Then if $\underline{\degsave}_{G}(x_i)=G_i$, we have
\[\underline{\degsave}_{G}(\underline{x}^v)=\sum_{i=1}^nv_iG_i=\ip{\sum_{i=1}^nv_i[P_i]}{G}_{e}.\]
Each term of $C_X^T$ may be written in the form $\lambda \underline{x}^v$, where
\[v_i=\ip{FX}{S_i}_{e}-\ip{M}{S_i}_{e}\]
for some $M\in\fgmod{\underline{\Lambda}}$, and $\lambda$ is a constant. It follows that
\[\sum_{i=1}^nv_i[P_i]=[FX]-[M],\]
so the degree of $\underline{x}^v$ is
\[\ip{FX}{G}_{e}-\ip{M}{G}_{e}=\ip{FX}{G}_{e}=\underline{\degsave}_{G}(X),\]
since $\ip{M}{G}_{e}=0$ by the definition of a grading. In particular, this is independent of $M$, so $C^T_X$ is homogeneous of degree $\underline{\degsave}_{G}(X)$.
\item Applying $F$ to the exact sequence $0\to X\to Y\to Z\to 0$ and truncating gives an exact sequence
\[0\to FX\to FY\to FZ\to M\to0\]
for some $M\subseteq EX$. In particular, $M\in\fgmod{\underline{\Lambda}}$. In $\mathrm{K}_0(\fgmod{\Lambda})$, we have
\[[FX]+[FZ]=[FY]+[M],\]
so applying $\ip{-}{G}_e$ gives
\[\underline{\degsave}_G(X)+\underline{\degsave}_G(Z)=\underline{\degsave}_G(Y)+\ip{M}{G}_e=\underline{\degsave}_G(Y)\]
since $M\in\fgmod{\underline{\Lambda}}$.
\item This follows directly from (ii) applied to the exchange sequences
\[0\to U_{k}^{*} \to M \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to M' \to U_{k}^{*} \to 0.\qedhere\]
\end{enumerate}
\end{proof}
Since the shift functor on $\underline{\curly{E}}$ does not typically lift to an automorphism of $\curly{E}$, and projective-injective objects of $\curly{E}$ may have non-zero degrees, we have no natural analogue of Proposition~\ref{p:prop-of-gen-cc}(v) in the Frobenius setting. It remains to give an analogue of part~\ref{p:prop-of-gen-cc-Groth-gp}, concerning the relationship between gradings and the Grothendieck group of a graded Frobenius cluster category. The first part of the following theorem is directly analogous to \cite[Theorem 10]{Palu-Groth-gp} for the triangulated case.
\begin{theorem}\label{t:grading-Groth-gp}
Let $\curly{E}$ be a Frobenius cluster category with a cluster-tilting object $T$ such that $\Lambda=\op{\End{\curly{E}}{T}}$ is Noetherian.
\begin{enumerate}[label=(\roman*)]
\item\label{t:grading-Groth-gp-relns} The Grothendieck group $\mathrm{K}_{0}(\curly{E})$, as an exact category, is isomorphic to the quotient of $\mathrm{K}_{0}(\operatorname{add}_{\curly{E}} T)$ by the relations $[X_{k}]-[Y_{k}]$, for $1\leqslant k\leqslant r$, where \[0\to U_{k}^{*} \to Y_k \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to X_k \to U_{k}^{*} \to 0\]
are the exchange sequences associated to the summand $U_k$ of $T$.
\item\label{t:grading-Groth-gp-grading-space} The space of $\mathbb{A}$-gradings of $\curly{E}$, defined above as a subspace of $\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$, is isomorphic to $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{E})}{\mathbb{A}}$, via the map $G\mapsto\underline{\degsave}_G$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\curly{H}^b(\operatorname{add}_{\curly{E}}{T})$ denote the bounded homotopy category of complexes with terms in $\operatorname{add}_{\curly{E}}{T}$, and let $\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T})$ denote the full subcategory of $\curly{E}$-acyclic complexes. By work of Palu \cite[Lemma~2]{Palu-Groth-gp}, there is an exact sequence
\[\begin{tikzcd}[column sep=20pt]
0\arrow{r}&\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\curly{H}^b(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\curly{D}^b(\curly{E})\arrow{r}&0,
\end{tikzcd}\]
of triangulated categories, to which we may apply the right exact functor $\mathrm{K}_0$ to obtain
\[\begin{tikzcd}[column sep=20pt]
\mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\arrow{r}&\mathrm{K}_0(\curly{H}^b(\operatorname{add}_{\curly{E}}{T}))\arrow{r}&\mathrm{K}_0(\curly{D}^b(\curly{E}))\arrow{r}&0.
\end{tikzcd}\]
By \cite[Proof of Lemma~9]{Palu-Groth-gp}, there is a natural isomorphism $\mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\stackrel{\sim}{\to}\mathrm{K}_0(\fgmod{\underline{\Lambda}})$. Moreover, since $T$ is cluster-tilting, there are no non-split exact sequences in $\operatorname{add}{T}$, and so $\mathrm{K}_0(\operatorname{add}{T})$ is freely generated by the indecomposable summands of $T$. Thus taking the alternating sum of terms gives an isomorphism $\mathrm{K}_0(\curly{H}^b(\operatorname{add}_{\curly{E}}{T}))\stackrel{\sim}{\to}\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})$ \cite{Rose-Note}.
These isomorphisms induce a commutative diagram
\[\begin{tikzcd}[column sep=20pt]
\mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\arrow{r}\arrow{d}&\mathrm{K}_0(\curly{H}^b(\operatorname{add}_{\curly{E}}{T}))\arrow{r}\arrow{d}&\mathrm{K}_0(\curly{D}^b(\curly{E}))\arrow{r}\arrow{d}&0\\
\mathrm{K}_0(\fgmod{\underline{\Lambda}})\arrow{r}{\varphi}&\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\mathrm{K}_0(\curly{E})\arrow{r}&0
\end{tikzcd}\]
with exact rows. Since the two leftmost vertical maps are isomorphisms, the induced map $\mathrm{K}_0(\curly{D}^b(\curly{E}))\to\mathrm{K}_0(\curly{E})$, which is again given by taking the alternating sum of terms, is also an isomorphism.
We claim that the map $\varphi$ in the above diagram is given by composing the map from $\mathrm{K}_0(\fgmod{\underline{\Lambda}})$ to $\mathrm{K}_0(\fgmod{\Lambda})$ induced by the inclusion of categories with the inverse of the isomorphism $F_*\colon\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})\stackrel{\sim}{\to}\mathrm{K}_0(\fgmod{\Lambda})$. Since $\underline{\Lambda}$ is finite dimensional, the Grothendieck group $\mathrm{K}_0(\fgmod{\underline{\Lambda}})$ is spanned by the classes of the simple $\underline{\Lambda}$-modules $S_k$ for $1\leqslant k\leqslant r$, so it suffices to check that $\varphi$ acts on these classes as claimed. Let
\[0\to U_{k}^{*} \to Y_k \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to X_k \to U_{k}^{*} \to 0\]
be the exchange sequences associated to the summand $U_k$ of $T$. Then there is an exact sequence
\[0\to FU_k\to FX_k\to FY_k\to FU_k\to S_k\to0.\]
From this we see that $[S_k]=[FX_k]-[FY_k]=F_*([X_k]-[Y_k])$ in $\mathrm{K}_0(\fgmod{\Lambda})$, and so we want to show that $\varphi[S_k]=[X_k]-[Y_k]$. On the other hand, $[S_k]$ is the image of the class of the $\curly{E}$-acyclic complex
\[\cdots\to0\to U_k\to X_k\to Y_k\to U_k\to0\to\cdots\]
under Palu's isomorphism $\mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\stackrel{\sim}{\to}\mathrm{K}_0(\fgmod{\underline{\Lambda}})$ (cf.\ \cite[Proof of Theorem~10]{Palu-Groth-gp}), and the image $\varphi[S_k]$ of this complex in $\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})$ is $[X_k]-[Y_k]$, as we wanted. This yields \ref{t:grading-Groth-gp-relns}.
Now applying $\Hom{\ensuremath{\mathbb{Z}}}{-}{\mathbb{A}}$ to the exact sequence
\[\begin{tikzcd}[column sep=20pt]
\mathrm{K}_0(\fgmod{\underline{\Lambda}})\arrow{r}{\varphi}&\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\mathrm{K}_0(\curly{E})\arrow{r}&0
\end{tikzcd}\]
shows that $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{E})}{\mathbb{A}}$ is isomorphic to the kernel of $\varphi^t=\Hom{\ensuremath{\mathbb{Z}}}{\varphi}{\mathbb{A}}$, which we will show coincides with the space of gradings. Indeed, we may identify $\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})$ with $\mathrm{K}_0(\fgmod{\Lambda})$ via $F_*$, and then use the Euler form to identify $\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$ with $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\fgmod{\Lambda})}{\mathbb{A}}$, the map
\[x\mapsto\ip{-}{x}_e\]
being an isomorphism as usual. Under this identification, we have $\varphi^tG=\ip{-}{G}_e|_{\mathrm{K}_0(\fgmod{\underline{\Lambda}})}$, and so $G\in\ker{\varphi^t}$ if and only if it is a grading.
The claim that the isomorphism is given explicitly by $G\mapsto\underline{\degsave}_G=\ip{F(-)}{G}_e$ can be seen by diagram chasing, and hence \ref{t:grading-Groth-gp-grading-space} is proved.
\end{proof}
The significance of this theorem is that, as in the triangulated case, it provides a basis-free method to identify gradings on Frobenius cluster categories and the cluster algebras they categorify. In the latter context, basis-free essentially means free of the choice of a particular cluster.
Specifically, as explained in more detail below, to establish that some categorical datum gives a grading, one only needs to check that it respects exact sequences. This is potentially significantly easier than checking the vanishing of the product $B_{T}^t\underline{G}$, where $B_{T}$ is given in terms of dimensions of $\operatorname{Ext}$-spaces over the endomorphism algebra $\Lambda$ of some cluster-tilting object $T$.
On the other hand, given some knowledge of the cluster algebra being categorified---in particular, knowing a seed---one can use the above theorem to deduce information about the Grothendieck group of the Frobenius cluster category.
As promised in Section~\ref{preliminaries}, we can use Theorem~\ref{t:grading-Groth-gp} to see how the grading in a graded Frobenius cluster category is independent of the cluster-tilting object. Precisely, let $(\curly{E},T,G)$ be a graded Frobenius cluster category, and let $\underline{\degsave}_G$ be the corresponding function on $\mathrm{K}_0(\curly{E})$. Let $T'=\bigoplus_{i=1}^nT_i'$ be another cluster-tilting object, with $\Lambda'=\op{\End{\curly{E}}{T'}}$, and denote the simple $\Lambda'$-modules by $S_i'$ for $1\leqslant i\leqslant n$. Using the inverse of the isomorphism of Theorem~\ref{t:grading-Groth-gp}, we see that if $G'$ in $\mathrm{K}_0(\fd\Lambda')$ is given by
\[G'=\sum_{i=1}^n\underline{\degsave}_G(T_i')[S_i'],\]
then $(\curly{E},T',G')$ is a graded Frobenius cluster category with $\underline{\degsave}_G=\underline{\degsave}_{G'}$, as one should expect. Note that this statement holds even if, as can happen, there is no sequence of mutations from $T$ to $T'$.
As was remarked about the triangulated case in \cite{GradedCAs}, these observations highlight how the categorification of a cluster algebra is able to see global properties, whereas the algebraic combinatorial mutation process is local.
The following example shows the theorem in action, although again we need the additional assumption of Hom-finiteness of $\curly{E}$.
\begin{lemma}\label{l:dim-vector} Assume that $\curly{E}$ is Hom-finite and let $P$ be a projective-injective object. Then $\dim \Hom{\curly{E}}{P}{-}$ and $\dim\Hom{\curly{E}}{-}{P}$ define $\ensuremath{\mathbb{Z}}$-gradings for $\curly{E}$.
\end{lemma}
\begin{proof} Since $P$ is projective and injective, both $\Hom{\curly{E}}{P}{-}$ and $\Hom{\curly{E}}{-}{P}$ are exact functors, and so in each case taking the dimension yields a function in $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{E})}{\ensuremath{\mathbb{Z}}}$. Then the result follows immediately from Theorem~\ref{t:grading-Groth-gp}.
\end{proof}
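As an illustration, combining this with the additivity of degrees: if $U$ is a cluster-tilting object of $\curly{E}$ with non-projective indecomposable summand $U_k$, then applying the exact functor $\Hom{\curly{E}}{P}{-}$ to the exchange sequences
\[0\to U_{k}^{*} \to M \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to M' \to U_{k}^{*} \to 0\]
yields
\[\dim\Hom{\curly{E}}{P}{M}=\dim\Hom{\curly{E}}{P}{U_{k}}+\dim\Hom{\curly{E}}{P}{U_{k}^{*}}=\dim\Hom{\curly{E}}{P}{M'},\]
so the exchange relations are visibly homogeneous with respect to this grading.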
In sufficiently nice cases, applying this result with a complete set of indecomposable projectives will yield that the dimension vector of a module is a (multi-)grading.
However, we remark that some care may be needed regarding which algebra we measure ``dimension vector'' over. If $\curly{E}\subset\fgmod{\Pi}$ for some algebra $\Pi$ (as in most examples), then we may consider the $\Pi$-dimension vector of $X\in\curly{E}$, defined in the usual way. On the other hand, any Hom-finite Frobenius cluster category $\curly{E}$ is equivalent to $\GP(B)\subset\fgmod{B}$ for $B$ the opposite endomorphism algebra of a basic projective generator $P=\bigoplus_{i=1}^nP_i$ of $\curly{E}$, by \cite[Theorem~2.7]{KIWY}. Re-interpreting all of the objects of $\curly{E}$ as $B$-modules, the projective-injectives will now be precisely the projective $B$-modules, and $(\dim\Hom{\curly{E}}{P_i}{X})$ is the $B$-dimension vector of $X$ (tautologically, since the equivalence $\curly{E}\to\GP(B)$ takes $X$ to $\Hom{\curly{E}}{P}{X}$). Note that $B$ may not be the same as the algebra $\Pi$ from which $\curly{E}$ originated, and the $B$-dimension vector of a module may differ from the $\Pi$-dimension vector.
Given a complete set of projectives, it is natural to ask whether the associated grading might be standard, as defined in \cite{GradedCAs}; we briefly recall this definition and some related facts.
\begin{definition} Let $(\underline{x},B)$ be a seed. We call a multi-grading $G$ whose columns are a basis for the kernel of $B$ a standard multi-grading, and call $(\underline{x},B,G)$ a standard graded seed.
\end{definition}
It is straightforward to see, from rank considerations, that mutation preserves the property of being standard. Moreover, as shown in \cite{GradedCAs}, if $(\underline{x},B,G)$ is a standard graded seed and $H$ is any grading for $(\underline{x},B)$, then there exists an integer matrix $M=M(G,H)$ such that for any cluster variable $y$ in $\curly{A}(\underline{x},B,H)$ we have
\[ \underline{\degsave}_{H}(y)=\underline{\degsave}_{G}(y)M, \]
where on the right-hand side we regard $y$ as a cluster variable of $\curly{A}(\underline{x},B,G)$ in the obvious way.
That is, to describe the degree of a cluster variable of a graded cluster algebra $\curly{A}(\underline{x},B,H)$, it suffices to know its degree with respect to some standard grading $G$ and the matrix $M=M(G,H)$ transforming $G$ to $H$. In particular, to understand the distribution of the degrees of cluster variables, it suffices to know this for standard gradings.
Since the statement applies in the particular case when $G$ and $H$ are both standard, we see that from one choice of basis for the kernel of $B$, we obtain complete information. For if we chose a second basis, the change of basis matrix tells us how to transform the degrees. Hence up to a change of basis, there is essentially only one standard grading for each seed.
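In symbols: if $G$ and $G'$ are standard gradings for the same seed $(\underline{x},B)$, then their columns are two bases of $\ker B$, so that $G'=GM$ for some invertible integer matrix $M$, and
\[ \underline{\degsave}_{G'}(y)=\underline{\degsave}_{G}(y)M \]
for every cluster variable $y$.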
Then, depending on the particular Frobenius cluster category at hand, if we have knowledge of the rank of the exchange matrix, we may be able to examine categorical data such as the number of projective-injective modules or dimension vectors and hence try to find a basis for the space of gradings.
For example, let $T$ be a basic cluster-tilting object in a Hom-finite Frobenius cluster category $\curly{E}$, so that $T$ has $n-r$ projective-injective summands. If the exchange matrix $B_{T}$ has full rank, then a basis for the space of gradings has size $n-r$, and so, via Lemma~\ref{l:proj-inj-grading}, a canonical standard grading is given by the set $\{ [FT_{i}] \mid i>r \}$; this set is linearly independent since it is a subset of the basis of projectives for $\mathrm{K}_0(\fd\Lambda)=\mathrm{K}_0(\fgmod{\Lambda})$.
From knowledge of this standard grading, we then obtain any other grading by means of some linear transformation. In the next section, we do this for two important examples.
\section{Examples of graded Frobenius cluster categories}
\subsection{Frobenius cluster categories associated to partial flag varieties}
Let $\mathfrak{g}$ be the Kac--Moody algebra associated to a symmetric generalised Cartan matrix. Let $\Delta$ be the associated Dynkin graph and pick an orientation $\vec{\Delta}$. Let $Q$ be the quiver obtained from $\vec{\Delta}$ by adding an arrow $\alpha^*\colon j\to i$ for each arrow $\alpha\colon i\to j$ of $\vec{\Delta}$. Then the preprojective algebra of $\Delta$ is
\[\Pi=\ensuremath \mathbb{C} Q/\sum_{\alpha\in\vec{\Delta}}[\alpha,\alpha^*],\]
which is, up to isomorphism, independent of the choice of orientation $\vec{\Delta}$.
For each $w\in W$, the Weyl group of $\mathfrak{g}$, Buan, Iyama, Reiten and Scott \cite{BIRS1} have introduced a category $\curly{C}_{w}$; the following version of its construction follows \cite{GLS-KacMoody}, and is dual to the original.
Assume $w$ has finite length and set $l(w)=n$; we do this for consistency with the notation used above but note that other authors (notably \cite{GLS-KacMoody}, \cite{GLS-QuantumPFV}) use $r$ and their $n$ is our $n-r$.
Set $\hat{I}_{i}$ to be the indecomposable injective $\Pi$-module with socle $S_{i}$, the 1-dimensional simple module supported at the vertex $i$ of $Q$.
Given a module $W$ in $\fgmod \Pi$, we define
\begin{itemize}
\item $\mathrm{soc}_{(l)}(W):= {\displaystyle \sum_{\substack{U\leqslant W \\ U\ensuremath \cong S_{l}}} U}$ and
\item $\mathrm{soc}_{(l_{1},l_{2},\ldots,l_{s})}(W):= W_{s}$ where the chain of submodules $0=W_0\subseteq W_{1} \subseteq \cdots \subseteq W_{s} \subseteq W$ is such that $W_{p}/W_{p-1} \ensuremath \cong \mathrm{soc}_{(l_{p})}(W/W_{p-1})$.
\end{itemize}
Let $\mathbf{i}=(i_n,\dotsc,i_1)$ be a reduced expression for $w$. Then for $1\leqslant k \leqslant n$, we define $V_{\mathbf{i},k} := \mathrm{soc}_{(i_{k},i_{k-1},\ldots,i_{1})}(\hat{I}_{i_{k}})$. Set $V_{\mathbf{i}}=\bigoplus_{k=1}^{n} V_{\mathbf{i},k}$ and let $I$ be the subset of $\{ 1,\dotsc ,n\}$ such that the modules $V_{\mathbf{i},i}$ for $i\in I$ are $\curly{C}_{w}$-projective-injective. Set $I_{\mathbf{i}}=\bigoplus_{i\in I} V_{\mathbf{i},i}$ and $n-r=\card{I}$. Note that this is also the number of distinct simple reflections appearing in $\mathbf{i}$.
Define
\[ \curly{C}_{\mathbf{i}}=\operatorname{Fac}(V_{\mathbf{i}})\subseteq \text{nil}\ \Pi. \]
That is, $\curly{C}_{\mathbf{i}}$ is the full subcategory of $\fgmod \Pi$ consisting of quotient modules of direct sums of finitely many copies of $V_{\mathbf{i}}$.
Then $\curly{C}_{\mathbf{i}}$ and $I_{\mathbf{i}}$ are independent of the choice of reduced expression $\mathbf{i}$ (although $V_{\mathbf{i}}$ is not), so that we may write $\curly{C}_{w}:=\curly{C}_{\mathbf{i}}$ and $I_w:= I_{\mathbf{i}}$. It is shown in \cite{BIRS1} that $\curly{C}_{w}$ is a stably 2-Calabi--Yau Frobenius category. Moreover $\curly{C}_{w}$ has cluster-tilting objects: $V_{\mathbf{i}}$ is one such. Indeed, cluster-tilting objects are maximal rigid, and vice versa. The indecomposable $\curly{C}_{w}$-projective-injective modules are precisely the indecomposable summands of $I_{w}$, and $\curly{C}_{w}=\text{Fac}(I_{w})$.
Furthermore, it is also shown in \cite[Proposition~2.19]{GLS-KacMoody} that the global dimension condition of Definition~\ref{d:Frob-cl-cat} also holds, leaving only the Krull--Schmidt condition. By \cite[Corollary~4.4]{KrauseKS}, it suffices to check that the endomorphism algebras of objects of $\curly{C}_w$ are semiperfect, and that this category is idempotent complete. The first of these properties holds since $\curly{C}_w$ is Hom-finite. The second follows from the fact that $\curly{C}_w$ is a full subcategory of the idempotent complete category $\fgmod(\Pi/\operatorname{Ann}{I_w})$, and that if $M$ is an object of $\operatorname{Fac}(I_w)$, then so are all direct summands of $M$.
We conclude that $\curly{C}_{w}$ is a Frobenius cluster category, in the sense of Definition~\ref{d:Frob-cl-cat}.
Let $\Lambda=\op{\End{\curly{C}_{w}}{V_{\mathbf{i}}}}$ and $F=\Hom{\curly{C}_{w}}{V_{\mathbf{i}}}{-}$. Then, as above, the modules $P_{k}:= FV_{\mathbf{i},k}$ for $1\leqslant k\leqslant n$ are the indecomposable projective $\Lambda$-modules and the tops of these, $S_{k}$, are the simple $\Lambda$-modules. Recall that the exchange matrix obtained from the quiver of $\Lambda$, which we shall call $B_{\mathbf{i}}$, has entries
\[(B_{\mathbf{i}})_{ij}=\dim\Ext{1}{\Lambda}{S_i}{S_j}-\dim\Ext{1}{\Lambda}{S_j}{S_i}\]
for $1\leqslant i\leqslant n$ and $j\notin I$, so that the $r$ columns of $B_{\mathbf{i}}$ correspond to the mutable summands $V_{\mathbf{i},j}$, $j\notin I$, of $V_{\mathbf{i}}$.
Let $L_{\mathbf{i}}$ be the $n\times n$ matrix with entries
\[ (L_{\mathbf{i}})_{jk}=\dim\Hom{\Pi}{V_{\mathbf{i},j}}{V_{\mathbf{i},k}}-\dim\Hom{\Pi}{V_{\mathbf{i},k}}{V_{\mathbf{i},j}}. \]
\noindent By \cite[Proposition~10.1]{GLS-QuantumPFV} we have
\[ \sum_{l=1}^{n} (B_{\mathbf{i}})_{lk}(L_{\mathbf{i}})_{lj}=2\delta_{jk}, \]
and hence the matrix $B_{\mathbf{i}}$ has maximal rank, namely $r$.
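Indeed, the rank claim follows directly from this identity: for $j,k\notin I$ it states that
\[ \big(L_{\mathbf{i}}^{\mathsf{T}}B_{\mathbf{i}}\big)_{jk}=\sum_{l=1}^{n}(L_{\mathbf{i}})_{lj}(B_{\mathbf{i}})_{lk}=2\delta_{jk}, \]
so that the $r\times r$ submatrix of $L_{\mathbf{i}}^{\mathsf{T}}B_{\mathbf{i}}$ on the rows $j\notin I$ is $2I_{r}$. Hence $\operatorname{rank} B_{\mathbf{i}}\geqslant r$, and since $B_{\mathbf{i}}$ has only $r$ columns, its rank is exactly $r$.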
It follows that there exists some standard integer multi-grading $G_{\mathbf{i}}=(G_{1},\dotsc ,G_{n-r})\in \mathrm{K}_{0}(\fgmod{\Lambda})^{n-r}$ for $\curly{C}_{w}$ and $(\curly{C}_{w},V_{\mathbf{i}},G_{\mathbf{i}})$ is a graded Frobenius cluster category. As discussed above, such a standard grading can be used to construct all other gradings, so our goal is to identify one.
We have additional structure on $\curly{C}_{w}$ that we may make use of. Namely, $\curly{C}_{w}$ is Hom-finite and we may apply Lemma~\ref{l:proj-inj-grading} with respect to the $\curly{C}_{w}$-projective-injective modules $V_{\mathbf{i},i}$ that are the indecomposable summands of $I_{\mathbf{i}}$.
The resulting grading $[FV_{\mathbf{i},i}]$, $i\in I$, is standard, since its $n-r$ components are a subset of the basis of projectives for $\mathrm{K}_0(\fgmod{\Lambda})$, and so in particular are linearly independent. By Theorem~\ref{t:grading-Groth-gp}, the existence of this standard grading implies that the Grothendieck group $\mathrm{K}_0(\curly{C}_w)$ has rank $n-r$.
We wish to understand this standard grading more explicitly. Note that the objects of $\curly{C}_{w}$ are $\Pi$-modules and we may consider dimension vectors with respect to the $\Pi$-projective modules.
Then we notice that in fact the grading by $([FV_{\mathbf{i},i}])_{i\in I}$ is equal to the $\Pi$-dimension vector grading in the case at hand. This is because, by Lemma~\ref{l:proj-inj-grading}, the degree of $X$ with respect to $[FV_{\mathbf{i},i}]$ is $\dim\Hom{\Pi}{X}{V_{\mathbf{i},i}}$, and each $V_{\mathbf{i},i}$ is both a submodule and a minimal right $\curly{C}_w$-approximation of an indecomposable injective $\hat{I}_{i}$ for $\Pi$, so $\Hom{\Pi}{X}{V_{\mathbf{i},i}}=\Hom{\Pi}{X}{\hat{I}_{i}}$, the dimensions of the latter giving the $\Pi$-dimension vector of $X$.
In \cite[Corollary~9.2]{GLS-KacMoody}, Gei\ss, Leclerc and Schr\"{o}er have shown that
\[ \ensuremath \mbox{\underline{dim}}_{\Pi} V_{\mathbf{i},k}=\omega_{i_{k}}-s_{i_{1}}s_{i_{2}}\dotsm s_{i_{k}}(\omega_{i_{k}})\]
for all $1\leqslant k\leqslant n$, where the $\omega_{j}$ are the fundamental weights for $\mathfrak{g}$ and the $s_{j}$ the Coxeter generators for $W$. This enables us to construct the above grading purely combinatorially.
\begin{example}
We consider the following seed associated to $\mathfrak{g}$ of type $A_{5}$ with\[ \mathbf{i}=(3,2,1,4,3,2,5,4,3), \] as given in \cite[Example~12.11]{GLS-QuantumPFV}. The modules $V_{k}:= V_{\mathbf{i},k}$, in terms of the usual representation illustrating their composition factors as $\Pi$-modules, are
\begin{align*}
V_1&=\begin{smallmatrix}3\end{smallmatrix}&
V_2&=\begin{smallmatrix}3\\&4\end{smallmatrix}&
V_3&=\begin{smallmatrix}3\\&4\\&&5\end{smallmatrix}\\\\
V_4&=\begin{smallmatrix}&3\\2\end{smallmatrix}&
V_5&=\begin{smallmatrix}&3\\2&&4\\&3\end{smallmatrix}&
V_6&=\begin{smallmatrix}&3\\2&&4\\&3&&5\\&&4\end{smallmatrix}\\\\
V_7&=\begin{smallmatrix}&&3\\&2\\1\end{smallmatrix}&
V_8&=\begin{smallmatrix}&&3\\&2&&4\\1&&3\\&2\end{smallmatrix}&
V_9&=\begin{smallmatrix}&&3\\&2&&4\\1&&3&&5\\&2&&4\\&&3\end{smallmatrix}
\end{align*}
The exchange quiver for this seed is
\begin{center}
\scalebox{1}{\input{initialseedforMat33.tikz}}
\end{center}
It is straightforward to see that $\Pi$-dimension vectors yield a grading: for example, looking at the vertex corresponding to $V_{1}$, the sums of the dimension vectors of incoming and outgoing arrows are $[0,1,2,1,0]$ and $[0,1,1,0,0]+[0,0,1,1,0]$ respectively.
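One can check this against the dimension-vector formula of Gei\ss, Leclerc and Schr\"{o}er recalled above. Here $(i_1,\dotsc,i_9)=(3,4,5,2,3,4,1,2,3)$, so for $k=5$, writing $\omega_{3}=e_{1}+e_{2}+e_{3}$ in the usual coordinates for type $A_{5}$,
\[ \ensuremath \mbox{\underline{dim}}_{\Pi} V_{5}=\omega_{3}-s_{3}s_{4}s_{5}s_{2}s_{3}(\omega_{3})=(e_{1}+e_{2}+e_{3})-(e_{1}+e_{4}+e_{5})=e_{2}+e_{3}-e_{4}-e_{5}, \]
whose coordinates with respect to the simple roots $e_{j}-e_{j+1}$ are $[0,1,2,1,0]$, in agreement with the composition factors of $V_{5}$ displayed above.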
\end{example}
\subsection{Grassmannian cluster categories}
Let $\Pi$ be the preprojective algebra of type $\mathsf{A}_{n-1}$, with vertices numbered sequentially, and let $Q_k$ be the injective module at the $k$th vertex. In \cite{GLS-PFV}, Gei\ss, Leclerc and Schr\"oer show that the category $\operatorname{Sub}\, Q_{k}$ of submodules of direct sums of copies of $Q_k$ ``almost'' categorifies the cluster algebra structure on the homogeneous coordinate ring of the Grassmannian of $k$-planes in $\ensuremath \mathbb{C}^n$, but is missing a single indecomposable projective object corresponding to one of the frozen variables of this cluster algebra. The category $\Sub{Q_k}$ is in fact dual to one of the categories $\curly{C}_w$ introduced in the previous section, for $\Delta=\mathsf{A}_{n-1}$ and $w$ a particular Weyl group element depending on $k$, so it is a Frobenius cluster category in the same way.
Jensen, King and Su \cite{JKS} complete the categorification via the category $\CM(A)$ of maximal Cohen--Macaulay modules for a Gorenstein order $A$ (depending on $k$ and $n$) over $Z=\powser{\mathbb{C}}{t}$. One description of $A$ is as follows. Let $\Delta$ be the graph (of affine type $\tilde{\mathsf{A}}_{n-1}$) with vertex set given by the cyclic group $\ensuremath{\mathbb{Z}}_n$, and edges between vertices $i$ and $i+1$ for all $i$. Let $\Pi$ be the completion of the preprojective algebra on $\Delta$ with respect to the arrow ideal. Write $x$ for the sum of ``clockwise'' arrows $i\to i+1$, and $y$ for the sum of ``anti-clockwise'' arrows $i\to i-1$. Then we have
\[A=\Pi/\langle x^k-y^{n-k}\rangle.\]
In this description, $Z$ may be identified with the centre $\powser{\mathbb{C}}{xy}$ of $A$.
Jensen, King and Su also show \cite[Theorem~4.5]{JKS} that there is an exact functor \linebreak $\pi\colon \CM(A) \to \operatorname{Sub}\, Q_{k}$, corresponding to the quotient by the ideal generated by $P_{n}$, and that for any $N\in \operatorname{Sub}\, Q_{k}$, there is a unique (up to isomorphism) minimal $M$ in $\CM(A)$ with $\pi M\ensuremath \cong N$ and $M$ having no summand isomorphic to $P_{n}$. Such an $M$ satisfies $\mathrm{rk}(M)=\dim \mathrm{soc}\ \pi M$, where $\mathrm{rk}(M)$ is the rank of each vertex component of $M$, thought of as a $Z$-module.
We now show that $\CM(A)$ is again a Frobenius cluster category. Properties of the algebra $A$ mean that an $A$-module is maximal Cohen--Macaulay if and only if it is free and finitely generated as a $Z$-module. Since $Z$ is a principal ideal domain, and hence Noetherian, any submodule of a free and finitely generated $Z$-module is also free and finitely generated, and so $\CM(A)$ is closed under subobjects. In particular, $\CM(A)$ is closed under kernels of epimorphisms. Moreover \cite[Corollary~3.7]{JKS}, $A\in\CM(A)$, and so $\Omega(\fgmod{A})\subseteq\CM(A)$.
As a $Z$-module, any object $M\in\CM(A)$ is isomorphic to $Z^k$ for some $k$, so we have that $\op{\End{Z}{M}}\cong Z^{k^2}$ is a finitely generated $Z$-module. Since $Z$ is Noetherian, the algebra \linebreak $\op{\End{A}{M}}\subseteq\op{\End{Z}{M}}$ is also finitely generated as a $Z$-module. Thus $\op{\End{A}{M}}$ is Noetherian, as it is finitely generated as a module over the commutative Noetherian ring $Z$. We may now apply \cite[Proposition~3.6]{Pressland} to see that any cluster-tilting object $T\in\CM(A)$ satisfies $\operatorname{gldim}{\op{\End{A}{T}}}\leqslant 3$. Moreover \cite[Corollary~4.6]{JKS}, $\underline{\CM}(A)=\underline{\operatorname{Sub}}\,{Q_k}$, so $\underline{\CM}(A)$ is $2$-Calabi--Yau, and $\CM(A)$ is a Frobenius cluster category.
Unlike $\operatorname{Sub}\, Q_{k}$ and the $\curly{C}_w$, the category $\CM(A)$ is not Hom-finite. However, as already observed, the endomorphism algebras of its objects are Noetherian, so we may apply our general theory to this example.
In their study of the category $\CM(A)$, Jensen, King and Su show the following. Let
\[ \ensuremath{\mathbb{Z}}^{n}(k)=\{ x\in \ensuremath{\mathbb{Z}}^{n} \mid k\ \text{divides} \textstyle\sum_{i} x_{i} \} \] with basis $\alpha_{1},\dotsc ,\alpha_{n-1},\beta_{[n]}$, where the $\alpha_{j}=e_{j+1}-e_j$ are the negative simple roots for $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$ and $\beta_{[n]}=e_{1}+\dotsm +e_{k}$ is the highest weight for the representation $\bigwedge^{k}(\ensuremath \mathbb{C}^{n})$.
Then by \cite[\S 8]{JKS} we have that $\mathrm{K}_{0}(\CM (A))\ensuremath \cong \mathrm{K}_{0}(A)\ensuremath \cong \ensuremath{\mathbb{Z}}^{n}(k)$; let $G\colon \mathrm{K}_{0}(\CM(A))\to \ensuremath{\mathbb{Z}}^{n}(k)$ denote the composition of these isomorphisms. The $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$-weight of the cluster character of $M\in \CM(A)$ (called $\tilde{\psi}_{M}$ in \cite{JKS}) is given by the coefficients in an expression for $G[M]\in\ensuremath{\mathbb{Z}}^{n}(k)$ in terms of the basis of $\ensuremath{\mathbb{Z}}^n(k)$ given above \cite[Proposition~9.3]{JKS}, and thus this weight defines a group homomorphism $\mathrm{K}_0(\CM(A))\to\ensuremath{\mathbb{Z}}^n$.
Said in the language of this paper, $\CM(A)$ is a graded Frobenius cluster category with respect to $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$-weight, this giving a standard integer multi-grading.
Let $\delta\colon \ensuremath{\mathbb{Z}}^{n}(k)\to \ensuremath{\mathbb{Z}}$ be the (linear) function $\delta(x)=\frac{1}{k}\sum_{i} x_{i}$. By the linearity of gradings, composing $G$ with $\delta$ yields a $\ensuremath{\mathbb{Z}}$-grading on $\CM (A)$ also. Explicitly, $\delta(x)$ is the $\beta_{[n]}$-coefficient of $x$ in our chosen basis, and is also equal to the dimension of the socle of $\pi M$, which is equal to $\mathrm{rk}(M)$, which is equal to the degree of the cluster character of $M\in \CM(A)$ as a homogeneous polynomial in the Pl\"{u}cker coordinates of the Grassmannian.
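For instance, the rank $1$ modules $M_{I}\in\CM(A)$, indexed by the $k$-subsets $I\subseteq\{1,\dotsc,n\}$, have the Pl\"{u}cker coordinates $\Delta_{I}$ as their cluster characters \cite{JKS}, and correspondingly
\[ \delta(G[M_{I}])=\mathrm{rk}(M_{I})=1=\deg\Delta_{I}. \]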
It is well known that the cluster structure on the Grassmannian is graded with respect to either the $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$-weight (also called the content of a minor, and, by extension, of a product of minors) or the natural grading associated to the Pl\"{u}cker embedding. The results of \cite{JKS} show that these gradings are indeed naturally reflected in the categorification of that cluster structure. This opens the possibility of attacking some questions on, for example, the number of cluster variables of a given degree by examining rigid indecomposable modules in $\CM(A)$ of the corresponding rank, say. We hope to return to this application in the future.
Of course, one can also argue directly that $\mathrm{rk}(M)$ yields a grading on $\CM (A)$, considering it as a function on $\mathrm{K}_{0}(\CM(A))$. Note that the socle dimension of $\pi M$ is not a grading on $\operatorname{Sub}\, Q_{k}$, but rather it is the datum within $\operatorname{Sub}\, Q_{k}$ that specifies how one should lift $\pi M$ to $M$ (see \cite[\S 2]{JKS} for an illustration of this). As described in the previous section, $\operatorname{Sub}\, Q_{k}$ (in its guise as one of the $\curly{C}_{w}$) does admit gradings, such as the grading describing the degree of the cluster character of $\pi M\in \operatorname{Sub}\, Q_{k}$ (called $\psi_{\pi M}$ in \cite{JKS}) with respect to the standard matrix generators.
\small
\label{references}
\normalsize
\end{document}
\begin{document}
\begin{flushleft}
{\Large\bf A quantitative formula for the imaginary part of a Weyl coefficient\\[5mm]
}
\textsc{
Jakob Reiffenstein
\hspace*{-14pt}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\setcounter{footnote}{2}
\footnote{
Department of Mathematics, University of Vienna \\
Oskar-Morgenstern-Platz 1, 1090 Wien, AUSTRIA \\
email: [email protected]}
} \\[1ex]
\end{flushleft}
{\small
\textbf{Abstract.}
We investigate two-dimensional canonical systems $y'=zJHy$ on an interval, with positive semi-definite Hamiltonian $H$. Let $q_H$ be the Weyl coefficient of the system. We prove a formula that determines the imaginary part of $q_H$ along the imaginary axis up to multiplicative constants, which are independent of $H$. We also provide versions of this result for Sturm-Liouville operators and Krein strings. \\
Using classical Abelian-Tauberian theorems, we deduce characterizations of spectral properties such as integrability of a given comparison function w.r.t. the spectral measure $\mu_H$, and boundedness of the distribution function of $\mu_H$ relative to a given comparison function. \\
We study in depth Hamiltonians for which $\arg q_H(ir)$ approaches $0$ or $\pi$ (at least on a subsequence). It turns out that this behavior of $q_H(ir)$ imposes a substantial restriction on the growth of $|q_H(ir)|$. Our results in this context are interesting also from a function theoretic point of view.
\\[3mm]
\textbf{AMS MSC 2020:} 30E99, 34B20, 34L05, 34L40
\\
\textbf{Keywords:} Canonical system, Weyl coefficient, growth estimates, high-energy behaviour
}
\pagenumbering{arabic}
\setcounter{page}{1}
\setcounter{footnote}{0}
\section[{Introduction}]{Introduction}
\noindent We study two-dimensional \textit{canonical systems}
\begin{align}
\label{A33}
y'(t)=zJH(t)y(t), \quad \quad t \in [a,b) \, \text{ a.e.},
\end{align}
where $-\infty < a <b \leq \infty$, $z \in \bb C$ is a spectral parameter and $J:=\smmatrix 0{-1}10$. The \textit{Hamiltonian} $H$ is assumed to be a locally integrable, $\bb R^{2 \times 2}$-valued function on $[a,b)$ that further satisfies
\begin{itemize}
\item[$\rhd$] $H(t) \geq 0$ and $H(t) \neq 0$, \quad \quad $t \in [a,b)$ a.e.;
\item[$\rhd$] $H$ is definite, i.e., if $v \in \mathbb{C}^2$ is s.t. $H(t)v \equiv 0$ on $[a,b)$, then $v=0$;
\item[$\rhd$] $\int_a^b \tr H(t) \mkern4mu\mathrm{d} t=\infty$ (limit point case at $b$).
\end{itemize}
Together with a boundary condition at $a$, the equation (\ref{A33}) becomes the eigenvalue equation of a self-adjoint (possibly multi-valued) operator $A_H$ in a Hilbert space $L^2(H)$ associated with $H$. Throughout this paper, we fix the boundary condition $(1,0)y(a)=0$, which is no loss of generality. \\
Many classical second-order differential operators such as Schr\"odinger and Sturm-Liouville operators, Krein strings, and Jacobi operators can be transformed to the form (\ref{A33}), see, e.g., \cite{remling:2018,teschl:2009,behrndt.hassi.snoo:2020,kaltenbaeck.winkler.woracek:2007,kac:1999}. Canonical systems thus form a unifying framework. \\
All of the above operators have in common that their spectral theory is centered around the Weyl coefficient $q$ of the operator (also referred to as Titchmarsh-Weyl $m$-function). This function is constructed by Weyl's nested disk method and is a Herglotz function, i.e., it is holomorphic on $\bb C \setminus \bb R$ and satisfies there $\frac{\IM q(z)}{\IM z} \geq 0$ as well as $q(\overline{z})=\overline{q(z)}$. It can thus be represented as
\begin{align}
\label{A17}
q(z)=\alpha + \beta z + \int_{\bb R} \bigg(\frac{1}{t-z}-\frac{t}{1+t^2} \bigg) \mkern4mu\mathrm{d} \mu(t), \quad \quad z \in \bb C \setminus \bb R
\end{align}
with $\alpha \in \bb R$, $\beta \geq 0$, and $\mu$ a positive Borel measure on $\bb R$ satisfying $\int_{\bb R} \frac{d\mu(t)}{1+t^2} <\infty$. The measure $\mu$ in the integral representation (\ref{A17}) of the Weyl coefficient is a spectral measure of the underlying operator model if $\beta =0$ (if $\beta > 0$, a one-dimensional component has to be added). The importance of canonical systems in this context lies in the Inverse Spectral Theorem of L. de Branges, stating that each Herglotz function $q$ is the Weyl coefficient of a unique (suitably normalized) canonical system. \\
\noindent Given a Hamiltonian $H$, we are ultimately interested in the description of properties of its spectral measure $\mu_H$ in terms of $H$. The correspondence between $H$ and $\mu_H$ can be best understood using the Weyl coefficient $q_H$, whose imaginary part $\IM q_H$ determines $\mu_H$ via the Stieltjes inversion formula. \\
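Recall that, for $\alpha<\beta$, the inversion formula reads
\begin{align*}
\mu_H\big((\alpha,\beta)\big)+\tfrac{1}{2}\mu_H(\{\alpha\})+\tfrac{1}{2}\mu_H(\{\beta\})=\lim_{\varepsilon \downarrow 0}\frac{1}{\pi}\int_\alpha^\beta \IM q_H(t+i\varepsilon)\mkern4mu\mathrm{d} t.
\end{align*}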
In their recent paper \cite{langer.pruckner.woracek:heniest}, Langer, Pruckner, and Woracek gave a two-sided estimate for $\IM q_H(ir)$ in terms of the coefficients of $H$:
\begin{align}
\label{A43}
L(r) \lesssim \IM q_H(ir) \lesssim A(r), \quad \quad r>0,
\end{align}
where $L,A$ are explicit in terms of $H$, and we used the notation $f(r) \lesssim g(r)$ to state that $f(r) \leq Cg(r)$ for a constant $C>0$. Moreover, in (\ref{A43}) the constants implicit in $\lesssim$ are independent of $H$. The exact formulation of this result will be recalled in \Cref{Y98}. \\
It may happen that $L(r)={\rm o} (A(r))$, and $\IM q_H(ir)$ is not determined by (\ref{A43}). A toy example for this is the Hamiltonian
\begin{align*}
H(t)=t \left(\begin{matrix}
|\log t|^{\color{white} 1} & |\log t|^2 \\
|\log t|^2 & |\log t|^3 \\
\end{matrix} \right), \quad \quad t \in [0,\infty).
\end{align*}
For $r \to \infty$, a calculation shows that
\begin{align*}
L(r) &\asymp (\log r)^{-3}, \quad \quad A(r) \asymp (\log r)^{-1},
\end{align*}
where $f(r) \asymp g(r)$ means that both $f(r) \lesssim g(r)$ and $g(r) \lesssim f(r)$.
\newline
\noindent The following theorem, which is our main result, improves the estimate (\ref{A43}) by giving a formula for $\IM q_H(ir)$ up to universal multiplicative constants.
\begin{theorem}
\label{T1}
Let $H$ be a Hamiltonian on $[a,b)$, and denote\footnote{When there is no risk of ambiguity, we write $\Omega$ and $\omega_j$ instead of $\Omega_H$ and $\omega_j^{(H)}$ for short.}
\begin{equation}\label{Y08}
H(t) = \begin{pmatrix} h_1(t) & h_3(t) \\ h_3(t) & h_2(t) \end{pmatrix},\quad
\Omega_H(t) = \begin{pmatrix} \omega_1^{(H)}(t) & \omega_3^{(H)}(t) \\ \omega_3^{(H)}(t) & \omega_2^{(H)}(t) \end{pmatrix}
\mathrel{\mathop:}=\int_a^t H(s)\mkern4mu\mathrm{d} s.
\end{equation}
Let $\hat t : (0,\infty) \to (a,b)$ be a function satisfying\footnote{We will see later that the equation $\det \Omega_H(t)=\frac{1}{r^2}$ has a unique solution for every $r>0$. A possible choice of $\hat t$ is thus the function that maps $r>0$ to this solution. }
\begin{align}
\label{A49}
\det \Omega_H(\hat t(r)) \asymp \frac{1}{r^2}, \quad \quad r \in (0,\infty).
\end{align}
Then
\begin{align}
\label{A2}
\IM q_H(ir) &\asymp \bigg|q_H(ir)-\frac{\omega_3^{(H)}(\hat t(r))}{\omega_2^{(H)}(\hat t(r))} \bigg| \asymp \frac{1}{r\omega_2^{(H)}(\hat t(r))}, \\[1.7ex]
\label{A3}
\frac{\IM q_H(ir)}{|q_H(ir)|^2} &\asymp \frac{1}{r\omega_1^{(H)}(\hat t(r))},
\end{align}
for $r \in (0,\infty)$. The constants implicit in $\asymp$ in (\ref{A2}) and (\ref{A3}) depend on the constants hidden in $\asymp$ in (\ref{A49}), but not on $H$. \\
If, in addition, $\IM q_H(ir)={\rm o} (|q_H(ir)|)$ for $r \to \infty$ (or $r \to 0$), then\footnote{With $f(r) \sim g(r)$ meaning $\lim \frac{f(r)}{g(r)}=1.$}
\begin{align}
\label{A11}
q_H(ir) \sim \frac{\omega_3^{(H)}(\hat t(r))}{\omega_2^{(H)}(\hat t(r))}, \quad \quad r \to \infty \quad ( r \to 0).
\end{align}
\end{theorem}
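\noindent To illustrate the theorem in a completely explicit case, take $H\equiv\smmatrix 1001$ on $[0,\infty)$, for which it is classical that $q_H(z)=i$. Then $\Omega_H(t)=\smmatrix t00t$, so $\det \Omega_H(t)=t^2$ and we may choose $\hat t(r)=\frac{1}{r}$ in (\ref{A49}). The right-hand sides in (\ref{A2}) and (\ref{A3}) become
\[ \frac{1}{r\omega_2(\hat t(r))}=\frac{1}{r\omega_1(\hat t(r))}=1, \qquad \bigg|q_H(ir)-\frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))}\bigg|=|i-0|=1, \]
in accordance with $\IM q_H(ir)=|q_H(ir)|=1$.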
\noindent The two-sided estimate (\ref{A2}) has some useful features: its pointwise nature, its applicability for $r \to \infty$ and $r \to 0$, and the universality of the constants hidden in $\asymp$. However, it is rather different from an asymptotic formula: it does not capture small oscillations of $\IM q_H(ir)$ around $\frac{1}{r\omega_2^{(H)}(\hat t(r))}$. \\
Note also that the first relation in (\ref{A2}) can be seen as a statement about the real part of $q_H(ir)$: indeed, $\IM q_H(ir)=\big|q_H(ir)-\RE q_H(ir)\big|$. It is an open question whether $\RE q_H(ir)$ can be described more directly in terms of $H$.
\newline
\noindent A most important class of operators is that of Sturm-Liouville (in particular, Schr\"odinger) operators. Let us provide a reformulation of \Cref{T1} for these operators right away.
\subsection*{Sturm-Liouville operators}
We provide a version of \Cref{T1} for Sturm-Liouville equations
\begin{align}
\label{A44}
-(py')'+qy=zwy
\end{align}
on $(a,b)$, where $1/p, q,w \in L^1_{loc}(a,b)$, $w>0$ and $p,q$ are real-valued. Suppose that $a$ is in limit circle case and $b$ is in limit point case. Impose a Dirichlet boundary condition at $a$, i.e., $y(a)=0$. The Weyl coefficient for this problem is the unique number $m(z)$ with
\[
c(z,\cdot)+m(z)s(z,\cdot) \in L^2((a,b),w(x)\mkern4mu\mathrm{d} x)
\]
where $c(z,\cdot)$ and $s(z,\cdot)$ are solutions of (\ref{A44}) with initial values
\[
\binom{p(a)c'(z,a)}{c(z,a)}=\binom{0}{1}, \quad \binom{p(a)s'(z,a)}{s(z,a)}=\binom{1}{0}.
\]
\begin{theorem}
\label{T9}
For each $t \in (a,b)$, let $(.,.)_t$ and $\|.\|_t$ denote the scalar product and norm on $L^2((a,t),w(x)\mkern4mu\mathrm{d} x)$, i.e.,
\[
(f,g)_t=\int_a^t f(x)\overline{g(x)} w(x) \mkern4mu\mathrm{d} x.
\]
For $\xi \in \mathbb{R}$, let $\hat t_\xi : (0,\infty) \to (a,b)$ be a function satisfying
\begin{align}
\label{A51}
\|c(\xi,\cdot)\|_{\hat t_\xi(r)}^2 \|s(\xi,\cdot)\|_{\hat t_\xi(r)}^2 - (c(\xi,\cdot),s(\xi,\cdot))_{\hat t_\xi(r)}^2 \asymp \frac{1}{r^2}, \quad \,\, r \in (0,\infty).
\end{align}
Then
\begin{align}
\label{A45}
\IM m(\xi+ir) &\asymp \frac{1}{r \|s(\xi,\cdot)\|_{\hat t_\xi(r)}^2}, \\
\label{A46}
\frac{\IM m(\xi+ir)}{|m(\xi+ir)|^2} &\asymp \frac{1}{r \|c(\xi,\cdot)\|_{\hat t_\xi(r)}^2},
\end{align}
for $r \in (0,\infty)$. The constants implicit in $\asymp$ are independent of $p,q,w$ as well as $\xi$, but do depend on the constants pertaining to $\asymp$ in (\ref{A51}).
\end{theorem}
\noindent In fact, \Cref{T9} is a direct consequence of \Cref{T1} upon employing a transformation (cf. \cite{remling:2018} for $p=w=1$ and $\xi=0$) that maps solutions of (\ref{A44}) to solutions of the canonical system $y'=(z-\xi)JH_\xi y$, where
\[
H_\xi (t) = w(t) \cdot \begin{pmatrix}
c(\xi,t)^2 & -s(\xi,t)c(\xi,t) \\
-s(\xi,t)c(\xi,t) & s(\xi,t)^2
\end{pmatrix}, \quad \quad t \in [a,b).
\]
The Weyl coefficients then satisfy $m(z)=q_{H_\xi}(z-\xi)$.
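As a simple illustration of \Cref{T9}, take $p=w=1$ and $q=0$ on $(0,\infty)$ with $\xi=0$, so that $m(z)=i\sqrt{z}$ and $\IM m(ir)=\sqrt{r/2}$. Here $c(0,t)=1$ and $s(0,t)=t$, whence
\[ \|c(0,\cdot)\|_{t}^{2}\,\|s(0,\cdot)\|_{t}^{2}-(c(0,\cdot),s(0,\cdot))_{t}^{2}=t\cdot \frac{t^{3}}{3}-\frac{t^{4}}{4}=\frac{t^{4}}{12}, \]
so (\ref{A51}) allows the choice $\hat t_{0}(r)=r^{-1/2}$. Accordingly, (\ref{A45}) and (\ref{A46}) yield
\[ \IM m(ir) \asymp \frac{3}{r\,\hat t_{0}(r)^{3}} \asymp \sqrt{r}, \qquad \frac{\IM m(ir)}{|m(ir)|^{2}} \asymp \frac{1}{r\,\hat t_{0}(r)} \asymp \frac{1}{\sqrt{r}}, \]
in agreement with $\IM m(ir)=\sqrt{r/2}$ and $|m(ir)|^{2}=r$.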
\subsection*{Historical remarks}
\noindent The origins of the Weyl coefficient in the theory of the Sturm-Liouville differential equation are well summarized in Everitt's paper \cite{everitt:2004}. We give a short account specifically on the history of estimates for the growth of the Weyl coefficient, which date back at least to the 1950s. Particular attention was often given to the deduction of asymptotic formulae for the Weyl coefficient \cite{marchenko:1952,kac:1973a,everitt:1972,kasahara:1975,atkinson:1981,bennewitz:1989}. However, asymptotic results usually depend on rather strong assumptions on the data. When weakening these assumptions, one can still ask for explicit estimates for $q(z)$ as $z \to \infty$ nontangentially in the upper half-plane. There are a number of rather early results that determine $|q(z)|$ up to $\asymp$, e.g., \cite{hille:1963,atkinson:1988,bennewitz:1989}, although these still depend on data subject to additional restrictions. Fundamental progress has been made by Jitomirskaya and Last \cite{jitomirskaya.last:1999}, who considered Schr\"odinger operators with arbitrary (real-valued and locally integrable) potentials. They found a formula up to $\asymp$ for $|q(z)|$, which also covers the case $z \to 0$. An analog of this formula for canonical systems was given in \cite{hassi.remling.snoo:2000}. \\
When it comes to $\IM q(z)$, however, no such formula was available. Only the very recent estimate (\ref{A43}) from \cite[Theorem 1.1]{langer.pruckner.woracek:heniest} made it possible to obtain our main result that determines $\IM q(z)$ up to $\asymp$.
\subsection*{Structure of the paper}
\noindent The proof of \Cref{T1}, together with some immediate corollaries, makes up \Cref{S2}. In \Cref{S5}, we continue with a first application, a criterion for integrability of a given comparison function with respect to $\mu_H$. We also characterize boundedness of the distribution function of $\mu_H$ relative to a given comparison function.
\newline
\noindent \Cref{S3} is dedicated to the boundary behavior of Herglotz functions. Cauchy integrals and the relative behavior of its imaginary and real part have been intensively studied. For example, for a Herglotz function $q$ it is known \cite{poltoratski:2003} that the set of $\xi \in \mathbb{R}$ for which
\begin{align}
\label{A47}
\lim_{r \to 0} \frac{\IM q(\xi+ir)}{|q(\xi+ir)|}=0
\end{align}
is a zero set w.r.t. $\mu$. In contrast to measure theoretic results like this, we use the de Branges correspondence $H \leftrightarrow q_H$ to investigate this behavior pointwise w.r.t. $\xi$. In \Cref{T7} we show that if $\xi$ is such that (\ref{A47}) holds, then $|q(\xi+ir)|$ is slowly varying (cf. \Cref{A48}). \Cref{T8} is a partial converse of this statement.
\newline
\noindent In \Cref{S4} we turn to a finer study of $\IM q_H(ir)$ in the context of the geometric origins of (\ref{A43}) and (\ref{A2}). Namely, the functions $L$ and $A$ describe the imaginary parts of bottom and top of certain Weyl disks containing $q_H(ir)$. We show that there are restrictions on the possible location of $q_H(ir)$ within the disks, and construct a Hamiltonian $H$ for which $q_H(ir)$ oscillates back and forth between the bottoms and tops of the disks. This construction allows us to answer several open problems that were posed in \cite{langer.pruckner.woracek:heniest}.
\newline
\noindent We conclude our work with a reformulation of \Cref{T1} for the principal Titchmarsh-Weyl coefficient $q_S$ of a Krein string. This reformulation is the content of \Cref{S6}.
\subsection*{Notation associated to Hamiltonians}
\noindent Let $H$ be a Hamiltonian on $[a,b)$.
\newline
\noindent An interval $(c,d) \subseteq [a,b)$ is called $H$-\textit{indivisible} if $H(t)$ takes the form $h(t)\binom{\cos \varphi}{\sin \varphi}\binom{\cos \varphi}{\sin \varphi}^*$ a.e. on $(c,d)$, with scalar-valued $h$ and fixed $\varphi \in [0,\pi)$. The angle $\varphi$ is then called the \textit{type} of the interval.
\begin{definition}
Let \begin{align}
\mathring a (H) &:=\inf \Big\{t > a \,\Big| \, (a,t) \text{ is not }H\text{-indivisible of type } 0 \text{ or } \frac{\pi}{2} \Big\}, \\
\hat a (H) &:=\inf \Big\{t > a \,\Big|\, (a,t) \text{ is not }H\text{-indivisible} \Big\}.
\end{align}
Usually, we write $\mathring a$ and $\hat a$ for short. Since $H$ is assumed to be definite, both of these numbers are smaller than $b$.
\end{definition}
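\noindent For instance, if $H(t)=\frac 12 \smmatrix 1111$ for $t \in (a,c)$ and $H(t)=I$ for $t \geq c$, then every interval $(a,t)$ with $t \leq c$ is $H$-indivisible of type $\frac{\pi}{4}$, and hence $\mathring a =a$ while $\hat a =c$. If instead $H(t)=\smmatrix 1000$ on $(a,c)$ and $H(t)=I$ for $t \geq c$, then $\mathring a =\hat a =c$.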
\noindent Note that $(\omega_1 \omega_2)(t)>0$ if and only if $(a,t)$ is not $H$-indivisible of type $0$ or $\frac{\pi}{2}$, i.e., $t>\mathring a$. Using the assumption $\int_a^b \tr H(t) \mkern4mu\mathrm{d} t=\infty$, we infer that $\omega_1 \omega_2$ is an increasing bijection from $(\mathring a,b)$ to $(0,\infty)$. \\
Similarly, $\det \Omega (t)>0$ is equivalent to $t>\hat a$. We have
\[
\frac{d}{dt} \Big(\frac{\det \Omega (t)}{\omega_1(t)} \Big)=\omega_1(t)^{-2} \binom{-\omega_3(t)}{\omega_1(t)}^* H(t)\binom{-\omega_3(t)}{\omega_1(t)} \geq 0
\]
and (by symmetry) $\frac{d}{dt} \big(\frac{\det \Omega}{\omega_2}\big) \geq 0$. Since at least one of $\omega_1$ and $\omega_2$ is unbounded, $\det \Omega$ is an increasing bijection from $(\hat a,b)$ to $(0,\infty)$.
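\noindent For the reader's convenience, we verify the displayed derivative formula. Writing $\det \Omega =\omega_1\omega_2-\omega_3^2$ and using $\omega_j'=h_j$, we compute
\begin{align*}
\frac{d}{dt} \Big(\frac{\det \Omega}{\omega_1} \Big)
&=\frac{(h_1\omega_2+\omega_1 h_2-2\omega_3 h_3)\,\omega_1-(\omega_1\omega_2-\omega_3^2)\,h_1}{\omega_1^2} \\
&=\frac{\omega_3^2 h_1-2\omega_1\omega_3 h_3+\omega_1^2 h_2}{\omega_1^2},
\end{align*}
and the numerator of the last expression is precisely $\binom{-\omega_3(t)}{\omega_1(t)}^* H(t)\binom{-\omega_3(t)}{\omega_1(t)}$, which is non-negative since $H(t) \geq 0$.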
\begin{definition}
\label{A35}
For a Hamiltonian $H$ and a number $\eta >0$, set \\
\begin{minipage}{.5\linewidth}
\begin{equation*}
\mathring r_{\eta,H} : \left\{\begin{array}{ccc}
(\mathring a,b) &\to &(0,\infty) \\[0.5ex]
t &\mapsto & \frac{\eta}{ 2\sqrt{(\omega_1 \omega_2)(t)}},
\end{array}\right.
\end{equation*}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{equation*}
\hat r_{\eta,H} : \left\{\begin{array}{ccc}
(\hat a,b) &\to &(0,\infty) \\[0.5ex]
t &\mapsto & \frac{\eta}{ 2\sqrt{\det \Omega (t)}}.
\end{array}\right.
\end{equation*}
\end{minipage}
\noindent Both of these functions are decreasing and bijective. We define their inverse functions,
\begin{align}
\mathring t_{\eta,H}:=\mathring r_{\eta,H}^{-1} \,:\, (0,\infty) \to (\mathring a,b), \quad \quad \hat t_{\eta,H}:=\hat r_{\eta,H}^{-1} \,:\, (0,\infty) \to (\hat a,b).
\end{align}
Note that the functions $\hat t_{\eta,H}$, for any $\eta>0$, satisfy (\ref{A49}). Functions of this form will be the default choice of $\hat t$ for the sake of \Cref{T1}. We will often fix $\eta$ and $H$ and write $\mathring r$, $\mathring t$, $\hat r$, $\hat t$ for short. If $\eta$ is fixed but the Hamiltonian is ambiguous, we may write $\mathring r_H$, $\mathring t_H$, $\hat r_H$, $\hat t_H$ to indicate dependence on $H$.
\end{definition}
\section{On the imaginary part of the Weyl coefficient}
\label{S2}
\noindent We start by providing the details of the estimate (\ref{A43}), which is the central result in \cite{langer.pruckner.woracek:heniest}.
\begin{theorem}[{\cite[Theorem 1.1]{langer.pruckner.woracek:heniest}}]
\label{Y98}
Let $H$ be a Hamiltonian on $[a,b)$, and let $\eta \in (0,1-\frac{1}{\sqrt 2})$ be fixed. For $r>0$, let $\mathring t(r)$ be the unique number satisfying
\begin{align}
\label{A50}
(\omega_1^{(H)} \omega_2^{(H)})(\mathring t(r))=\frac{\eta^2}{4r^2},
\end{align}
cf. \Cref{A35}. Set\footnote{If $\eta$ and $H$ are clear from the context, we may write $A$ and $L$ for short.}
\[
A_{\eta,H}(r):=\frac{\eta}{2r\omega_2^{(H)}(\mathring t(r))}, \quad \quad L_{\eta,H}(r):= \frac{\det \Omega_H (\mathring t(r))}{(\omega_1^{(H)} \omega_2^{(H)})(\mathring t(r))} \cdot A_{\eta,H}(r).
\]
Then the Weyl coefficient $q_H$ associated with the Hamiltonian $H$ satisfies
\begin{align}
|q_H(ir)| &\asymp A_{\eta,H}(r),
\label{Y35} \\[1.5ex]
L_{\eta,H}(r) \lesssim \IM q_H(ir) &\lesssim A_{\eta,H}(r)
\label{Y96}
\end{align}
for $r \in (0,\infty)$. The constants implicit in these relations are independent of $H$. Their dependence on $\eta$ is continuous.
\end{theorem}
\noindent In the following proof of \Cref{T1}, we will also show that \Cref{Y98} still holds if $\mathring t: (0,\infty) \to (a,b)$ is a function satisfying $(\omega_1 \omega_2)(\mathring t(r)) \asymp \frac{1}{r^2}$, and
\[
A(r):=\frac{1}{r\omega_2^{(H)}(\mathring t(r))}, \quad \quad L(r):= \frac{\det \Omega_H (\mathring t(r))}{(\omega_1^{(H)} \omega_2^{(H)})(\mathring t(r))} \cdot A(r).
\]
In particular, we can choose any $\eta>0$ in (\ref{A50}).
\begin{proof}[Proof of \Cref{T1}]
Let $\hat t_{\eta,H}$ be defined as in \Cref{A35}. We show that for any $\eta>0$, \Cref{T1} holds for $\hat t_{\eta,H}$ in place of $\hat t$, and that the dependence on $\eta$ of the constants hidden in $\asymp$ in (\ref{A2}) and (\ref{A3}) is continuous. This then implies that \Cref{T1} holds for any function $\hat t$ satisfying (\ref{A49}).
\newline
\noindent The proof is divided into steps. \\
\item[\textbf{Step 1.}]
We introduce a family of transformations of $H$ that leave the imaginary part of the Weyl coefficient unchanged. If $p \in \bb R$ and
\[
H_p(t):=\smmatrix 1p01 H(t) \smmatrix 10p1 =
\begin{pmatrix}
h_1(t)+2p h_3(t)+p^2 h_2(t) & h_3(t)+ph_2(t) \\
h_3(t)+ph_2(t) & h_2(t)
\end{pmatrix},
\]
an easy calculation shows that the Weyl coefficient $q_p$ of $H_p$ is given by $q_p(z)=q_0(z)+p=q_H(z)+p$. \\
\item[\textbf{Step 2.}]
We prove (\ref{A2})-(\ref{A11}) for fixed $\eta \in (0,1-\frac{1}{\sqrt{2}})$. The following abbreviations are used only in Step 2: \\
\begin{tabular}{|r|l||r|l||r|l|@{}m{0pt}@{}}
\hline
short form & meaning & short form & meaning &short form & meaning & \\[10pt]
\hline \hline
\rule{0pt}{3ex}$\mathring t$ & \rule{0pt}{3ex} $\mathring t_{\eta,H}$ & \rule{0pt}{3ex} $\mathring t_p$ & \rule{0pt}{3ex} $\mathring t_{\eta,H_p}$ & \rule{0pt}{3ex} $\Omega_p$ & \rule{0pt}{3ex} $\Omega_{H_p}$& \\[3pt]
\hline
\rule{0pt}{3ex}$\hat t$ & \rule{0pt}{3ex}$\hat t_{\eta,H}$ & \rule{0pt}{3ex}$\hat t_p$ & \rule{0pt}{3ex}$\hat t_{\eta,H_p}$ & \rule{0pt}{3ex}$\omega_j^{(p)}$ & \rule{0pt}{3ex}$\omega_j^{(H_p)}$ & \\[3pt]
\hline
\rule{0pt}{3ex}$L_p$& \rule{0pt}{3ex}$L_{\eta,H_p}$ & \rule{0pt}{3ex}$A_p$& \rule{0pt}{3ex}$A_{\eta,H_p}$ & \rule{0pt}{3ex}$\Omega$ & \rule{0pt}{3ex}$\Omega_H$
& \\[3pt]
\hline
\end{tabular}
\\[4pt]
\noindent Let $r>0$ be fixed (this is important). Our first observation is that $\hat t_p(r)=\hat t(r)$ for any $p$ since $\det \Omega_p(t)=\det \Omega (t)$ does not depend on $p$. If we can find $p$ such that $\mathring t_p(r)=\hat t_p(r)=\hat t(r)$, then clearly
\[
\frac{L_p(r)}{A_p(r)}=\frac{\det \Omega_p(\mathring t_p(r))}{(\omega_1^{(p)}\omega_2^{(p)})(\mathring t_p(r))}=\frac{\det \Omega_p(\hat t_p(r))}{(\omega_1^{(p)}\omega_2^{(p)})(\mathring t_p(r))}=1.
\]
We apply \Cref{Y98} with $\eta$ and $H_p$. The estimate (\ref{Y96}) then takes the form
\begin{equation}
\label{P1}
A_p(r) = L_p(r) \lesssim \IM q_H(ir) \lesssim A_p(r)
\end{equation}
while (\ref{Y35}) turns into
\begin{equation}
\label{P2}
|q_H(ir)+p| \asymp A_p(r),
\end{equation}
where
\[
A_p(r)= \frac{\eta}{2r\omega_2^{(p)}(\mathring t_p(r))} = \frac{\eta}{2r\omega_2(\hat t(r))}.
\]
The right choice of $p$ is
\[
p=- \frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))},
\]
leading to $\omega_3^{(p)}(\hat t(r))=0$ and thus
\[
(\omega_1^{(p)}\omega_2^{(p)})(\hat t(r))=\det \Omega_p (\hat t(r))=\det \Omega (\hat t(r))=\frac{\eta^2}{4r^2}.
\]
Consequently, $\mathring t_p(r)=\hat t(r)$. Observe that the implicit constants in (\ref{Y35}) and (\ref{Y96}) are independent of $H$ and $r$ and depend continuously on $\eta$. This shows that (\ref{A2}) holds, with constants depending continuously on $\eta$. \\
\item[\textbf{Step 3.}] (\ref{A3}) follows by applying (\ref{A2}) to $\tilde H:=J^{\top}HJ=\smmatrix {h_2}{-h_3}{-h_3}{h_1}$ and noting that $\hat t_{\eta,\tilde H}=\hat t_{\eta,H}$. Thus
\[
\frac{\IM q_H(ir)}{|q_H(ir)|^2}=\IM \Big(- \frac{1}{q_H(ir)} \Big)= \IM q_{\tilde H}(ir) \asymp \frac{1}{r \omega_2^{(\tilde H)}(\hat t_{\eta,\tilde H}(r))}=\frac{1}{r \omega_1^{(H)}(\hat t_{\eta,H}(r))}.
\]
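Here, the first equality is the elementary identity
\[
\IM \Big(-\frac{1}{q}\Big)=\IM \Big(-\frac{\bar q}{|q|^2}\Big)=\frac{\IM q}{|q|^2}, \qquad q \in \bb C \setminus \bb R,
\]
while the second equality expresses the standard fact that the Weyl coefficient of $\tilde H=J^{\top}HJ$ is $-\frac{1}{q_H}$.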
Formula (\ref{A11}) follows if we divide (\ref{A2}) by $|q_H(ir)|$. Hence, we have proved the assertion for $\eta \in (0,1-\frac{1}{\sqrt{2}})$.
\newline
\noindent In the remaining steps we treat the missing case $\eta \geq 1-\frac{1}{\sqrt{2}}$.
\item[\textbf{Step 4.}] Let $k>0$. For use in Step 5, we show that
\begin{align}
\label{A31}
\IM q_H(ir) \asymp \IM q_H (ikr), \quad \quad |q_H(ir)| \asymp |q_H (ikr )|
\end{align}
for $r \in (0,\infty)$, where the constants in $\asymp$ depend continuously on $k$ and are independent of $H$. \\
For the imaginary part, the statement is easy to see from the integral representation (\ref{A17}). For the absolute value, we use the Hamiltonian $\tilde H$ from Step 3 to obtain
\[
\frac{\IM q_H(ir)}{|q_H(ir)|^2}=\IM q_{\tilde H}(ir) \asymp \IM q_{\tilde H}(ikr)=\frac{\IM q_H(ikr)}{|q_H(ikr)|^2}.
\]
This shows that $|q_H(ir)| \asymp |q_H (ikr )|$ as well.
\item[\textbf{Step 5.}]
\noindent Fix a Hamiltonian $H$, and let $\eta_0 \geq 1-\frac{1}{\sqrt{2}}$. Then
\[
\mathring t_{\eta_0,H}(r)=\mathring t_{\frac 14,\frac{1}{4\eta_0}H}(r), \quad \quad \hat t_{\eta_0,H}(r)=\hat t_{\frac 14,\frac{1}{4\eta_0}H}(r)
\]
and
\[
A_{\eta_0,H}(r)= A_{\frac 14,\frac{1}{4\eta_0}H}(r), \quad \quad L_{\eta_0,H}(r)= L_{\frac 14,\frac{1}{4\eta_0}H}(r).
\]
Since $\frac 14$ is less than $1-\frac{1}{\sqrt{2}}$, we can use \Cref{Y98} with $\eta:=\frac 14$ to obtain
\begin{align}
\label{A30}
L_{\eta_0,H}(r) = L_{\frac 14,\frac{1}{4\eta_0}H}(r) \lesssim &\IM q_{\frac{1}{4\eta_0}H} (ir) \\
\leq &|q_{\frac{1}{4\eta_0}H}(ir)| \asymp A_{\frac 14,\frac{1}{4\eta_0}H}(r) = A_{\eta_0,H}(r) \nonumber
\end{align}
for $r \in (0,\infty)$. Since $q_{\frac{1}{4\eta_0}H}(z)=q_H \big(\frac{z}{4\eta_0} \big)$, Step 4 shows that \Cref{Y98} holds for arbitrary $\eta >0$. It is easy to check that the continuous dependence of the constants on $\eta$ is retained. Repeating Steps $1-3$ now shows that \Cref{T1} also holds for $\hat t_{\eta,H}$ for any $\eta >0$. Moreover, it is not hard to see that everything still works if $\hat t$ is a function satisfying (\ref{A49}).
\end{proof}
\begin{remark}
\label{A36}
\Cref{Y98} and \Cref{T1}, in the form we stated them, give information about $q_H(z)$ for $z=ir$. However, if $\vartheta \in (0,\pi )$ is fixed, these theorems also hold
\begin{itemize}
\item [$\rhd$] for $z=re^{i\vartheta}$ uniformly for $r \in (0,\infty )$ and
\item [$\rhd$] for $z=re^{i\varphi}$ uniformly for $r \in (0,\infty)$ and $|\frac{\pi}{2}-\varphi | \leq |\frac{\pi}{2}-\vartheta |$.
\end{itemize}
We restate the explicit constants coming from \cite{langer.pruckner.woracek:heniest}. Fix $\eta \in (0,1-\frac{1}{\sqrt{2}})$ and set $\sigma :=(1-\eta )^{-2}-1 \in (0,1)$. With
\begin{align*}
c_-(\eta ,\vartheta)=\frac{\eta \sin \vartheta}{2(1+|\cos \vartheta |)} \cdot \frac{1-\sigma}{1+\sigma}, \quad \quad c_+(\eta ,\vartheta)=\frac{\sigma+\frac{2}{\eta \sin \vartheta}}{1-\sigma},
\end{align*}
we have\footnote{Since $c_-$ and $c_+$ are clearly monotonic in $\vartheta $, (\ref{A37}) and (\ref{A38}) still hold when $q_H(re^{i\vartheta})$ is replaced by $q_H(re^{i\varphi})$, where $|\frac{\pi}{2}-\varphi | \leq |\frac{\pi}{2}-\vartheta |$.}
\begin{align}
\label{A37}
c_-(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_2(\hat t_{\eta ,H}(r))} &\leq \IM q_H(re^{i\vartheta}) \leq c_+(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_2(\hat t_{\eta ,H}(r))}, \\[1ex]
\label{A38}
c_-(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_1(\hat t_{\eta ,H}(r))} &\leq \frac{\IM q_H(re^{i\vartheta})}{|q_H(re^{i\vartheta})|^2} \leq c_+(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_1(\hat t_{\eta ,H}(r))}.
\end{align}
In order to show (\ref{A37}), we need to slightly adapt the proof of \Cref{T1} by replacing $ir$ with $re^{i\vartheta}$ in (\ref{P1}) and taking into account the constants provided in
\cite[Theorem 1.1]{langer.pruckner.woracek:heniest}. Then (\ref{A38}) follows as in Step 3 of the proof. \\
For $\vartheta =\frac{\pi}{2}$, the optimal choice of $\eta$ is around $0.13833$, for which the constants appearing in (\ref{A37}) evaluate to
\[
\frac{\eta}{2} \, c_+\Big(0.13833, \frac{\pi}{2}\Big) \approx 1.568, \quad \frac{\eta}{2} \, c_-\Big(0.13833, \frac{\pi}{2}\Big) \approx 0.002, \quad \frac{c_+(0.13833, \frac{\pi}{2})}{c_-(0.13833, \frac{\pi}{2})} \approx 675.772 .
\]
While it is possible to derive explicit constants also for $\eta \geq 1-\frac{1}{\sqrt{2}}$, doing so does not result in an improvement of the quotient $c_+ / c_-$.
\end{remark}
\subsection*{Immediate consequences of \Cref{T1}}
\noindent \textit{In order to simplify calculations, unless specified otherwise, we will always assume that $\mathring t(r)$ and $\hat t(r)$ are defined implicitly by}
\begin{align}
\label{A12}
(\omega_1 \omega_2)(\mathring t(r))=\frac{1}{r^2}, \quad \det \Omega (\hat t(r))=\frac{1}{r^2},
\end{align}
and similarly for $\mathring r$ and $\hat r$ (cf. \Cref{A35} with $\eta=2$).
\noindent We revisit the example from the introduction in more generality. The following example was communicated by Matthias Langer. The calculations can be found in the appendix. \\
\begin{example}
\label{A24}
Let $\alpha > 0$ and $\beta_1, \beta_2 \in \bb R$ where $\beta_1 \neq \beta_2$. Set $\beta_3 := \frac{\beta_1 + \beta_2}{2}$ and define, for $t \in (0,\infty)$,
\begin{align*}
H(t)=
t^{\alpha -1}\left(\begin{matrix}
|\log t|^{\beta_1} & |\log t|^{\beta_3} \\
|\log t|^{\beta_3} & |\log t|^{\beta_2} \\
\end{matrix} \right).
\end{align*}
Then for $r \to \infty$, we have
\begin{itemize}
\item[] $L(r) \asymp (\log r)^{\frac{\beta_1-\beta_2}{2}-2}$ and
\item[] $A(r) \asymp |q_H(ir)| \asymp (\log r)^{\frac{\beta_1-\beta_2}{2}}$,
\end{itemize}
i.e., $L(r) = {\rm o} ( A(r))$. Using \Cref{T1}, we can now continue the calculations, leading to
\[
\IM q_H(ir) \asymp (\log r)^{\frac{\beta_1-\beta_2}{2}-1} \asymp \sqrt{L(r)A(r)}.
\]
\end{example}
\noindent It is an immediate consequence of \Cref{T1} that $\IM q_H$ depends monotonically on the off-diagonal of $H$.
\begin{corollary}
\label{T1+}
Let $H=\smmatrix {h_1}{h_3}{h_3}{h_2}$ and $\tilde{H}=\smmatrix {h_1}{\tilde h_3}{\tilde h_3}{h_2}$ be two Hamiltonians on $[a,b)$. If $t>\hat a (H)$ is such that
\[
\Big|\int_a^t h_3(s) \mkern4mu\mathrm{d} s \Big| \geq \Big|\int_a^t \tilde h_3(s) \mkern4mu\mathrm{d} s \Big|,
\]
then
\[
\IM q_H(i \hat r_H(t)) \lesssim \IM q_{\tilde H}(i \hat r_H(t))
\]
with a constant independent of $t$, $H$, and $\tilde H$.
\end{corollary}
\begin{proof}
Our condition states that $|\omega_3(t)| \geq |\tilde \omega_3(t)|$. Taking into account that $t>\hat a(H)$, this means that $0<\det \Omega (t) \leq \det \tilde \Omega (t)$. Hence $\hat r_H(t) \geq \hat r_{\tilde H}(t)$, and further $\hat t_{\tilde H}(\hat r_H(t)) \leq t$. Now, by (\ref{A2}),
\[
\IM q_H(i\hat r_H(t)) \asymp \frac{1}{\hat r_H(t)\omega_2(t)} \leq \frac{1}{\hat r_H(t)\omega_2(\hat t_{\tilde H}(\hat r_H(t)))} \asymp \IM q_{\tilde H}(i\hat r_H(t)).
\]
\end{proof}
\noindent The following result elaborates on the relative behavior of $\IM q_H$ and $|q_H|$. We obtain a quantitative and pointwise relation between $\frac{\IM q_H}{|q_H|}$ and $\frac{\det \Omega}{\omega_1 \omega_2}$, leading to the equivalence
\begin{align}
\label{A14}
\lim_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|}=0 \,\, \Longleftrightarrow \,\, \lim_{t \to \hat a} \frac{\det \Omega (t)}{(\omega_1 \omega_2)(t)}=0.
\end{align}
The relation between $\frac{\det \Omega}{\omega_1 \omega_2}$ and $\frac{\IM q_H(ir)}{|q_H(ir)|}$ has been investigated also in \cite{langer.pruckner.woracek:gapsatz-arXiv}. Their proof of (\ref{A14})\footnote{In \cite{langer.pruckner.woracek:gapsatz-arXiv}, $\lim_{t \to a}$ was considered instead of $\lim_{t \to \hat a}$.} is based on compactness arguments. \\
Note that our result shows that (\ref{A14}) holds true for $r \to 0$ and $t \to b$ as well.
\begin{proposition}
\label{A4}
Let $H$ be a Hamiltonian on $[a,b)$. Then\footnote{$\mathring r(\hat t(r))$ is well-defined because of $\hat t(r) \in (\hat a,b) \subseteq (\mathring a,b)$.}
\begin{align}
\label{A9}
\frac{\IM q_H(ir)}{|q_H(ir)|} \asymp \frac{\mathring r(\hat t(r))}{r} = \sqrt{\frac{\det \Omega (\hat t(r))}{(\omega_1 \omega_2)(\hat t(r))}}
\end{align}
for $r \in (0,\infty)$. Moreover,
\begin{align}
\label{A10}
\big|q_H \big( i \mathring r(\hat t(r)) \big) \big| \asymp |q_H(ir)|, \quad \quad r \in (0,\infty).
\end{align}
All constants implicit in $\asymp$ do not depend on $H$.
\end{proposition}
\begin{proof}
By definition of $\mathring r$ and using (\ref{A2}) and (\ref{A3}),
\[
\mathring r(\hat t(r)) = \frac{1}{\sqrt{(\omega_1 \omega_2)(\hat t(r))}} \asymp r \frac{\IM q_H(ir)}{|q_H(ir)|}.
\]
We also have
\[
\sqrt{\frac{\det \Omega (\hat t(r))}{(\omega_1 \omega_2)(\hat t(r))}} = \frac{1}{\sqrt{r^2(\omega_1 \omega_2)(\hat t(r))}} = \frac{\mathring r(\hat t(r))}{r},
\]
and (\ref{A9}) follows. \\
For the proof of (\ref{A10}), we need the formula
\[
\omega_1(\mathring t(r)) \asymp \frac{|q_H(ir)|}{r}
\]
which we get from \Cref{Y98} applied to $J^{\top}HJ$. Combine this with (\ref{Y35}) to get
\[
|q_H(ir)|^2 \asymp \frac{\omega_1(\mathring t(r))}{\omega_2(\mathring t(r))}.
\]
On the other hand, (\ref{A2}) and (\ref{A3}) give
\[
|q_H(ir)|^2 \asymp \frac{\omega_1(\hat t(r))}{\omega_2(\hat t(r))}=\frac{\omega_1 \Big(\mathring t \big( \mathring r(\hat t(r))\big)\Big)}
{\omega_2 \Big(\mathring t \big( \mathring r(\hat t(r))\big)\Big)} \asymp \big|q_H \big( i \mathring r(\hat t(r)) \big) \big|^2.
\]
\end{proof}
\noindent The freedom in the choice of $\eta$ leads to the following formula that we will refer to later on.
\begin{corollary}
\label{T5}
Let $H$ be a Hamiltonian on $[a,b)$. Then, for any $k>0$,
\begin{align}
\label{A21}
\IM q_H(ikr) \asymp \bigg|q_H(ikr)-\frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))} \bigg| &\asymp \bigg|q_H(ir)-\frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))} \bigg|
\end{align}
with constants depending on $k$, but not on $H$. \\
If $\IM q_H(ir)={\rm o} (|q_H(ir)|)$ for $r \to \infty$ \emph{[}$r \to 0$\emph{]}, then
\begin{align}
\label{A20}
q_H(ikr) \sim \frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))}, \quad \quad r \to \infty \quad [r \to 0].
\end{align}
\end{corollary}
\begin{proof}
Apply \Cref{T1} to $H$ using $\hat t_{1,H}$, and to $kH$ using $\hat t_{k,kH}$. Then $\hat t_{1,H}(r)=\hat t_{k,kH}(r)$, and we write $\hat t(r)$ for short. Keeping in mind that $q_{kH}(z)=q_H(kz)$, this leads to
\[
\IM q_H(ir) \asymp \bigg|q_H(ir)-\frac{\omega_3^{(H)}(\hat t(r))}{\omega_2^{(H)}(\hat t(r))} \bigg| \asymp \frac{1}{r\omega_2^{(H)}(\hat t(r))}
\]
as well as
\[
\IM q_H(ikr) \asymp \bigg|q_H(ikr)-\frac{k\omega_3^{(H)}(\hat t(r))}{k\omega_2^{(H)}(\hat t(r))} \bigg| \asymp \frac{1}{kr \cdot \omega_2^{(H)}(\hat t(r))}.
\]
(\ref{A21}) follows. Now (\ref{A20}) is obtained by dividing (\ref{A21}) by $|q_H(ikr)|$.
\end{proof}
\section{Behavior of tails of the spectral measure}
\label{S5}
\Cref{T1}, which determines the imaginary part of $q_H(ir)$ up to $\asymp$, allows us to describe the growth of the spectral measure $\mu_H$ relative to suitable comparison functions. Let us introduce the measure $\tilde\mu_H$ on $[0,\infty)$ by
\begin{equation}\label{Y140}
\tilde\mu_H([0,r)) := \tilde\mu_H(r) := \mu_H((-r,r)), \quad \quad r>0.
\end{equation}
In \Cref{Y190}, we give equivalent conditions for the function $r \mapsto \tilde \mu_H(r)$ to be integrable w.r.t. a given weight function, and also for the measure $\tilde \mu_H$ to be finite w.r.t. a given rescaling function. \\
On the other hand, we can view $\tilde\mu_H$ as a function of the positive real parameter $r$, and compare this to a given function $\ms g$. This is what we do in \Cref{Y02}. \\
We note that the content of this section is analogous to \cite[Section 4]{langer.pruckner.woracek:heniest}. The availability of formula (\ref{A2}) leads to improved results in the present article; however, we provide less detail than in \cite{langer.pruckner.woracek:heniest}.
\newline
\noindent The proofs in this section are based on standard theorems of Abelian-Tauberian type, relating $\mu_H$ to its Poisson integral
\begin{align}
\mc P [\mu_H](z):= \int_{\bb R} \IM \Big( \frac{1}{t-z} \Big) \mkern4mu\mathrm{d} \mu_H(t).
\end{align}
By (\ref{A17}), we have $\mc P [\mu_H](z) = \IM q_H(z) - \beta \IM z $. If $\beta =0$, we can proceed with the application of Abelian-Tauberian theorems without problems. The case $\beta >0$ is equivalent to $a$ being the left endpoint of an $H$-indivisible interval of type $0$, i.e., $\mathring a(H) >a$ and $h_2$ vanishes a.e. on $[a,\mathring a(H))$. The restricted Hamiltonian $H_-:=H\big|_{[\mathring a(H),b)}$ then has the Weyl coefficient $q_{H_-}(z)=q_H(z)-\beta z$ and thus $\IM q_{H_-}(z) = \mc P [\mu_H](z)$. Hence, we can investigate $\mu_H$ by applying the theorems from this section to $H_-$.
\subsection[{Finiteness of the spectral measure w.r.t. given weight functions}]{Finiteness of the spectral measure w.r.t. given weight functions}
\label{Y190}
\begin{theorem}
\label{AT0}
Let $H$ be a Hamiltonian defined on $[a,b)$, and assume that $h_2$ does not vanish identically in a neighborhood of $a$. Let $\ms f$ be a continuous, non-decreasing function,
and denote by $\mu_H$ the spectral measure of $H$.
\noindent Then the following statements are equivalent:
\begin{Enumerate}
\item
\begin{equation}
\label{Y161}
\int_1^{\infty} \tilde\mu_H(r)\frac{\ms f(r)}{r^3}\mkern4mu\mathrm{d} r<\infty;
\end{equation}
\item
There is $ a' \in (\hat a,b)$ such that
\[
\int_{\hat a}^{a'}
\frac{1}{\omega_2(t)^2}\binom{\omega_2(t)}{-\omega_3(t)}^*H(t)\binom{\omega_2(t)}{-\omega_3(t)}
\cdot\ms f\bigl(\det \Omega (t)^{-\frac12}\bigr)\mkern4mu\mathrm{d} t<\infty.
\]
\end{Enumerate}
\noindent If, in addition, $\ms f$ is differentiable,
then the above conditions hold if and only if there is $ a'\in(\hat a,b)$ such that
\begin{align*}
\int_{\hat a}^{a'}
\frac{(\det \Omega)'(t)}{\omega_2(t)\det \Omega (t)^{\frac 12}} \ms f'\bigl(\det \Omega (t)^{-\frac12}\bigr)\mkern4mu\mathrm{d} t < \infty.
\end{align*}
\end{theorem}
\begin{proof}
First note that finiteness of the integrals in the statement clearly
does not depend on the choice of $a'\in(\hat a,b)$.
\noindent Let $\xi$ be the measure on $[1,\infty)$ such that $\ms f(r)=\xi([1,r))$, $r\ge1$.
It follows from \cite[Lemma~4]{kac:1982} that
\[
\int_{[1,\infty)}\frac{\Poi{\mu_H}(ir)}{r}\mkern4mu\mathrm{d}\xi(r) < \infty
\quad\Longleftrightarrow\quad
\int_1^\infty \frac{\tilde\mu_H(r)\ms f(r)}{r^3}\mkern4mu\mathrm{d} r < \infty.
\]
Since $h_2$ does not vanish identically in a neighborhood of $a$, we have $\Poi{\mu_H}=\IM q_H$. By \Cref{T1}, we have
\[
\frac{\Poi{\mu_H}(ir)}{r}
\asymp \frac{1}{r^2 \omega_2(\hat t(r))}
\asymp \frac{\det \Omega (\hat t(r))}{\omega_2(\hat t(r))}.
\]
Hence
\begin{equation}\label{Y162}
\int_1^{\infty} \tilde\mu_H(r)\frac{\ms f(r)}{r^3}\mkern4mu\mathrm{d} r<\infty
\;\;\Longleftrightarrow\;\;
\int_{[1,\infty)}\frac{\det \Omega\bigl(\hat t(r)\bigr)}{\omega_2\bigl(\hat t(r)\bigr)}\mkern4mu\mathrm{d}\xi(r)
< \infty.
\end{equation}
We define a measure $\nu$ on $(0,\infty)$ via $\nu((r,\infty))=\frac{\det \Omega (\hat t(r))}{\omega_2(\hat t(r))}$, $r>0$. Let $\hat\nu$ be the measure on $(\hat a,b)$ satisfying $\hat\nu((\hat a,t))=\nu((\hat r(t),\infty))=\frac{\det \Omega (t)}{\omega_2(t)}$, $t>\hat a$.
Integrating by parts (see, e.g., \cite[Lemma~2]{kac:1965}), we can rewrite the second integral in (\ref{Y162}) as follows:
\begin{align*}
& \int_{[1,\infty)}\frac{\det \Omega (\hat t(r))}{\omega_2(\hat t(r))} \mkern4mu\mathrm{d}\xi(r)
= \int_{[1,\infty)}\nu\bigl((r,\infty)\bigr)\mkern4mu\mathrm{d}\xi(r)
\\
&= \int_{[1,\infty)}\!\ms f(r)\mkern4mu\mathrm{d}\nu(r) = \int_{(\hat a,\hat t(1)]}\!\ms f(\hat r(t))\mkern4mu\mathrm{d} \hat\nu(t)
= \int_{(\hat a,\hat t(1)]}\ms f(\hat r(t))\mkern4mu\mathrm{d} \bigg(\frac{\det \Omega}{\omega_2} \bigg)(t)
\\
&= \int_{\hat a}^{\hat t(1)}\ms f\bigl(\hat r(t)\bigr)\cdot \frac{1}{\omega_2(t)^2}\binom{\omega_2(t)}{-\omega_3(t)}^*H(t)\binom{\omega_2(t)}{-\omega_3(t)}\mkern4mu\mathrm{d} t.
\end{align*}
To prove the additional statement, let us assume that $\ms f$ is differentiable. Using a substitution we can rewrite
the second integral in \eqref{Y162} differently:
\begin{align*}
&\int_{[1,\infty)}\frac{\det \Omega\bigl(\hat t(r)\bigr)}{\omega_2\bigl(\hat t(r)\bigr)}\mkern4mu\mathrm{d}\xi(r)
= \int_1^\infty\frac{\det \Omega\bigl(\hat t(r)\bigr)}{\omega_2\bigl(\hat t(r)\bigr)} \ms f'(r)\mkern4mu\mathrm{d} r
\\
&= \int_{\hat t(1)}^{\hat a} \!\frac{\det \Omega (t)}{\omega_2(t)}\ms f'(\hat r(t))\hat r'(t)\mkern4mu\mathrm{d} t
= \frac 12 \int_{\hat a}^{\hat t(1)}\frac{\det \Omega (t)}{\omega_2(t)}\ms f'(\hat r(t))
\frac{(\det \Omega)'(t)}{\det \Omega (t) ^{\frac32}}\mkern4mu\mathrm{d} t.
\end{align*}
\end{proof}
\noindent
The following result provides, in particular, information on when the measure $\tilde \mu_H$ is finite w.r.t. a regularly varying rescaling function $\ms g$.
\begin{corollary}
Let $H$ be a Hamiltonian on $[a,b)$, and assume that $h_2$ does not vanish identically in a neighborhood of $a$.
Let $\ms g$ be a continuous function that is regularly varying with index $\alpha \in [0,2]$, and denote by $\mu_H$ the spectral measure of $H$ as in (\ref{A17}).
Then, for $\alpha \in (0,2)$ and every $a'\in (\hat a,b)$, the following statements are equivalent:
\begin{alignat*}{2}
&\rm (i)\, && \int_{[1,\infty)} \frac{\mkern4mu\mathrm{d}\tilde\mu_H(r)}{\ms g(r)} < \infty; \\[1.5ex]
&\rm (ii)\, &&\int_{\hat a}^{a'}\mkern-10mu
\frac{1}{\omega_2(t)^2}\binom{\omega_2(t)}{-\omega_3(t)}^* H(t)\binom{\omega_2(t)}{-\omega_3(t)}
\frac{\mkern4mu\mathrm{d} t}{\det \Omega (t)\ms g\bigl(\det \Omega (t)^{-\frac 12}\bigr)} < \infty; \\[1.5ex]
&\rm (iii)\, &&\int_{\hat a}^{a'}\frac{(\det \Omega)'(t)}{\omega_2(t)\det \Omega (t)\ms g\bigl(\det \Omega (t)^{-\frac 12}\bigr)}\mkern4mu\mathrm{d} t
< \infty.
\end{alignat*}
If $\alpha =0$, then $(iii) \Rightarrow (i)$ and $(iii) \Leftrightarrow (ii)$, while for $\alpha = 2$ we have $(iii) \Rightarrow (i)$ and $(iii) \Rightarrow (ii)$.
\end{corollary}
\begin{proof}
The increasing function $\ms f(r):=\int_1^r \frac{t}{\ms g(t)} \mkern4mu\mathrm{d} t$ is regularly varying by Karamata's Theorem (see \cite[Propositions 1.5.8 and 1.5.9a]{bingham.goldie.teugels:1989}). Moreover,
\begin{equation}
\label{Y137}
\ms f(r)\;
\begin{cases}
\; \asymp\frac{r^2}{\ms g(r)}, & 0 \leq \alpha<2,
\\[2ex]
\; \gg\frac{r^2}{\ms g(r)}, & \alpha=2.
\end{cases}
\end{equation}
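A short justification of (\ref{Y137}): since $\ms g$ is regularly varying with index $\alpha$, the integrand $\frac{t}{\ms g(t)}$ is regularly varying with index $1-\alpha$. For $\alpha <2$ this index exceeds $-1$, and Karamata's theorem gives
\[
\ms f(r)=\int_1^r \frac{t}{\ms g(t)} \mkern4mu\mathrm{d} t \sim \frac{1}{2-\alpha} \cdot \frac{r^2}{\ms g(r)}, \qquad r \to \infty,
\]
while for $\alpha =2$ the integrand is regularly varying with index $-1$, in which case $\ms f$ is slowly varying and $r \cdot \frac{r}{\ms g(r)}={\rm o} (\ms f(r))$.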
Clearly $(iii)$ is equivalent to
\[
\int_{\hat a}^{a'}
\frac{(\det \Omega)'(t)}{\omega_2(t)\det \Omega (t)^{\frac 12}} \ms f'\bigl(\det \Omega (t)^{-\frac12}\bigr)\mkern4mu\mathrm{d} t < \infty
\]
which is the condition appearing in the additional statement of \Cref{AT0}.
Applying \Cref{AT0} and using (\ref{Y137}), this is equivalent to (for $\alpha \in [0,2)$) or implies (for $\alpha =2$) both $(ii)$ and
\[
\int_1^{\infty} \tilde\mu_H(r)\frac{dr}{r\ms g(r)}<\infty.
\]
By \cite[Proposition 4.5]{langer.pruckner.woracek:heniest}, this is further equivalent to (for $\alpha \in (0,2]$), or implies (for $\alpha =0$), condition $(i)$.
\end{proof}
\subsection{Comparative growth of the distribution function}
\label{Y02}
In this section we investigate $\limsup$-conditions for the
quotient $\frac{\tilde\mu_H(r)}{\ms g(r)}$ instead of integrability conditions.
Let us introduce the corresponding classes of measures.
\begin{definition}\label{Y77}
Let $\ms g(r)$ be a regularly varying function with index $\alpha \in [0,2]$
and $\lim_{r\to\infty}\ms g(r)=\infty$.
Then we set
\[
\mc F_{\ms g} \mathrel{\mathop:}= \big\{\mu\mid\mkern3mu \tilde\mu(r) \lesssim \ms g(r), \, r \to \infty \big\},\qquad
\mc F_{\ms g}^0 \mathrel{\mathop:}= \big\{\mu\mid\mkern3mu \tilde\mu(r) = {\rm o} ( \ms g(r)), \, r \to \infty\big\},
\]
where again $\tilde\mu(r)\mathrel{\mathop:}=\mu((-r,r))$.
\end{definition}
\noindent It should be mentioned that, for non-decreasing $\ms g$, if
\[
\int_{[1,\infty)} \frac{\mkern4mu\mathrm{d}\tilde\mu(r)}{\ms g(r)} < \infty ,
\]
then $\mu \in \mc F_{\ms g}^0 \subseteq \mc F_{\ms g}$. For further discussion of this relation, the reader is referred to \cite{langer.pruckner.woracek:heniest}.
\begin{theorem}\label{Y74}
Let $H$ be a Hamiltonian on $[a,b)$, and assume that $h_2$ does not vanish identically in a neighborhood of $a$.
Let $\ms g(r)$ be a regularly varying function with index $\alpha \in [0,2]$ and $\lim_{r\to\infty}\ms g(r)=\infty$. Denote by $\mu_H$ the spectral measure of $H$. For $\alpha < 2$, the following statements hold:
\begin{alignat*}{4}
&\rm (i)\quad &&\mu_H \in \mathcal{F}_{\ms g} \quad &&
\Leftrightarrow
&& \quad \limsup_{t \to \hat a} \frac{1}{\omega_2(t)\ms g \big( \det \Omega (t)^{-\frac 12} \big)}<\infty; \\[1ex]
&\rm (ii)\quad &&\mu_H \in \mathcal{F}_{\ms g}^0 \quad &&
\Leftrightarrow
&&\quad \lim_{t \to \hat a} \frac{1}{\omega_2(t)\ms g \big( \det \Omega (t)^{-\frac 12} \big)}=0.
\end{alignat*}
If $\alpha =2$, then in each of $(i)$ and $(ii)$ the right-hand side still implies the left-hand side.
\end{theorem}
\begin{proof}
We use \cite[Lemma 4.16]{langer.pruckner.woracek:heniest} which, adapted to our situation, reads as
\[
c_{\alpha}\limsup_{r\to\infty}\biggl(\frac{r}{\ms g(r)}\Poi{\mu_H}(ir)\biggr) \leq \limsup_{r\to\infty}\frac{\tilde\mu_H(r)}{\ms g(r)}
\leq c_{\alpha}' \limsup_{r\to\infty}\biggl(\frac{r}{\ms g(r)}\Poi{\mu_H}(ir)\biggr),
\]
and the second inequality holds even for $\alpha =2$. Since $h_2$ does not vanish identically in a neighborhood of $a$, we have $\Poi{\mu_H}=\IM q_H$. Therefore, the assertion follows from \Cref{T1} and a substitution $r=\hat r(t)$.
\end{proof}
\section{Weyl coefficients with tangential behavior}
\label{S3}
In this section, we investigate the scenario
\begin{align}
\label{A16}
\lim_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|}=0 \quad \quad \text{or} \quad \quad \liminf_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|}=0.
\end{align}
This is equivalent to tangential behavior of $q_H(ir)$, i.e.,
\[
\lim_{r \to \infty} \arg q_H(ir) \in \{0,\pi\} \quad \text{or} \quad \liminf_{r \to \infty} \min \big\{\arg q_H(ir),\, \pi-\arg q_H(ir) \big\}=0.
\]
From \Cref{A4} we get that
\begin{align}
\label{A25}
\lim_{n \to \infty} \frac{\IM q_H(i r_n)}{|q_H(i r_n)|}=0 \,\, \Longleftrightarrow \,\, \lim_{n \to \infty} \frac{\det \Omega(\hat t(r_n))}{(\omega_1 \omega_2)(\hat t(r_n))}=0
\end{align}
for every sequence $r_n \to \infty$.
All results in this section can be seen from the canonical systems perspective as well as from the Herglotz functions perspective. \\
\noindent To start with, we observe that the second assertion in (\ref{A16}) implies the first unless the limit inferior is attained only along very sparse sequences. We formulate this fact in the language of Herglotz functions, and prove it within the canonical systems setting. However, we do not know of a purely function theoretic proof (which may very well exist in the literature).
\begin{lemma}
Let $q$ be a Herglotz function. Suppose there is a sequence $(r_n)_{n \in \bb N}$ with $r_n \to \infty$, $\sup_{n \in \bb N} \frac{r_{n+1}}{r_n} < \infty$, and
\[
\lim_{n \to \infty} \frac{\IM q(ir_n)}{|q(ir_n)|}=0.
\]
Then $\lim_{r \to \infty} \frac{\IM q(ir)}{|q(ir)|}=0$.
\end{lemma}
\begin{proof}
Let $H$ be a Hamiltonian (on $[0,\infty)$) such that $q=q_H$. \\
Let $d(t):=\frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)}$. Set $t_n := \hat t(r_n)$, then by (\ref{A9}),
\[
d(t_n) \asymp \bigg( \frac{\IM q(ir_n)}{|q(ir_n)|} \bigg)^2 \xrightarrow{n \to \infty} 0.
\]
Suppose the assertion were false, i.e., there is a sequence $\xi_1 > \xi_2 > \dots$ converging to $0$ such that $d(\xi_k) \geq C>0$ for all $k$. For $k \in \bb N$, set $n(k):=\max \{n \in \bb N \mid t_n > \xi_k \}$. We obtain
\begin{align*}
&\Big(\frac{r_{n(k)+1}}{r_{n(k)}} \Big)^2 = \frac{\det \Omega(t_{n(k)})}{\det \Omega (t_{n(k)+1})} \geq \frac{\det \Omega(\xi_k)}{\det \Omega (t_{n(k)+1})} \\
&=\frac{d(\xi_k)}{d(t_{n(k)+1})} \cdot \frac{(\omega_1 \omega_2)(\xi_k)}{(\omega_1 \omega_2)(t_{n(k)+1})} \geq \frac{C}{d(t_{n(k)+1})} \xrightarrow{k \to \infty} \infty
\end{align*}
which contradicts the assumption that $\sup_{n \in \bb N} \frac{r_{n+1}}{r_n} < \infty$.
\end{proof}
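\noindent For illustration (an immediate consequence of the lemma, not needed in the sequel): every geometric sequence $r_n=\rho^n$ with $\rho > 1$ satisfies the hypothesis
\[
\sup_{n \in \bb N} \frac{r_{n+1}}{r_n} = \rho < \infty .
\]
On the other hand, the sparseness restriction cannot be dropped: for the Hamiltonian constructed in \Cref{T6}, the quotient $\frac{\IM q_H(ir)}{|q_H(ir)|}$ tends to $0$ along a sequence of radii which, by the formulae in \Cref{R4}, grow like $e^{cn^2}$ with $c=-\frac{\log (pl)}{2}>0$ up to lower order terms, while the full limit does not exist.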
\noindent Recall formulae (\ref{A10}) and (\ref{A9}). On an intuitive level, they tell us that in the case that $\IM q_H(ir) \not\asymp |q_H(ir)|$, the growth of $|q_H(ir)|$ is restricted since $\mathring r(\hat t(r))$ is then far away from $r$. If read in the other direction, this means that if $|q_H(ir)|$ grows quickly and without oscillating too much, then $\mathring r(\hat t(r))$ and $r$ should be close to each other, and hence the quotient $\frac{\IM q_H(ir)}{|q_H(ir)|}$ should not decay.\\
The following definition introduces the notions needed in Theorems \ref{T7} and \ref{T8}, which confirm this intuition.
\begin{definition}
\label{A48}
\begin{itemize}
\item[$\rhd$] A measurable function $f: (0,\infty) \to (0,\infty)$ is called \textit{regularly varying (at infinity) with index $\alpha \in \bb R$} if, for any $\lambda >0$,
\begin{align}
\lim_{r \to \infty}\frac{f(\lambda r)}{f(r)} = \lambda^{\alpha}.
\end{align}
If $\alpha =0$, then $f$ is also called \textit{slowly varying (at infinity)}.
\item[$\rhd$] A measurable function $f: (0,\infty) \to (0,\infty)$ is \textit{positively increasing (at infinity)} if there is $\lambda \in (0,1)$ such that
\begin{align}
\limsup_{r \to \infty} \frac{f(\lambda r)}{f(r)} <1.
\end{align}
Let us say explicitly that we do not require $f$ to be monotone.
\end{itemize}
\end{definition}
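\noindent For illustration (these observations follow directly from the definitions): the function $f(r)=r^{\alpha}$ is regularly varying with index $\alpha$, and for $\alpha>0$ it is positively increasing, since $\frac{f(\lambda r)}{f(r)}=\lambda^{\alpha}<1$ for every $\lambda \in (0,1)$. The slowly varying function $f(r)=\log r$ is not positively increasing, since $\lim_{r \to \infty}\frac{\log (\lambda r)}{\log r}=1$ for all $\lambda>0$. Conversely, a positively increasing function need not be monotone: for
\[
f(r)=r^2 \big(2+\sin r \big), \quad \quad \frac{f(\lambda r)}{f(r)} \leq 3\lambda^2 <1 \quad \text{whenever } \lambda < \tfrac{1}{\sqrt 3},
\]
while $f$ fails to be monotone on any neighborhood of $\infty$.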
\stepcounter{lemma}
\begin{subtheorem}
\label{T7}
Let $q \neq 0$ be a Herglotz function. If $|q(ir)|$ or $\frac{1}{|q(ir)|}$ is positively increasing at infinity (in particular, if $|q(ir)|$ is regularly varying with index $\alpha \neq 0$), then $\IM q(ir) \asymp |q(ir)|$ as $r \to \infty$.
\end{subtheorem}
\begin{subtheorem}
\label{T8}
Let $q \neq 0$ be a Herglotz function. If $\IM q(ir) = {\rm o} (|q(ir)|)$ as $r \to \infty$, then, for every $\delta \in [0,1)$,
\begin{align}
\label{A19}
\lim_{r \to \infty} \frac{ q \Big(ir \Big[\frac{\IM q(ir)}{|q(ir)|}\Big]^{\delta} \Big)}{q(ir)} =1.
\end{align}
For $k>0$, we also have $\lim_{r \to \infty} \frac{q(ikr)}{q(ir)}=1$, in particular, $|q(ir)|$ is slowly varying at infinity.
\end{subtheorem}
\begin{remark}
In \Cref{T7}, the requirement that $|q(ir)|$ be positively increasing cannot be replaced by a mere growth condition. It is not enough that $|q(ir)|$ grows sufficiently fast, say, $|q(ir)| \gtrsim r^{\delta}$ for $r \to \infty$ and some $\delta > 0$. \\
In fact, for any given $\delta \in (0,1)$, we construct in \Cref{T6} a Hamiltonian\footnote{Choose suitable parameters $p,l \in (0,1)$, such that $\delta = \frac{\log l}{\log (pl)}$, i.e., $p=l^{\delta ^{-1}-1}$.} $H$ whose Weyl coefficient $q_H$ satisfies (see \Cref{R2}) $|q_H(ir)| \gtrsim r^{\delta}$ as $r \to \infty$, but
\[
\liminf_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|} = 0.
\]
In other words, $\IM q_H(ir) \not\asymp |q_H(ir)|$. \\
Note also that for the above-mentioned $H$, certainly $|q_H(ir)|$ is not slowly varying \cite[Proposition 1.3.6]{bingham.goldie.teugels:1989}. Hence, in \Cref{T8} it is not enough to require $\IM q(ir_n) = {\rm o} (|q(ir_n)|)$ on some sequence $r_n \to \infty$.
\end{remark}
\begin{example}
Let $q(z)=\log z$. Then $|q(ir)| = \big[(\log r)^2+\frac{\pi^2}{4}\big]^{1/2}$, which is increasing for $r \geq 1$. However, $\IM q(ir)=\frac{\pi}{2}$ is constant, and hence $\IM q(ir) = {\rm o} (|q(ir)|)$ as $r \to \infty$. \Cref{T7} does not apply because $|q(ir)|$, although increasing, is not positively increasing.
\end{example}
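\noindent Conversely, the conclusion of \Cref{T7} is easily verified by hand for $q(z)=z^{\alpha}$ with $\alpha \in (0,1)$ (principal branch): here $|q(ir)|=r^{\alpha}$ is regularly varying with index $\alpha \neq 0$, and indeed
\[
\IM q(ir) = r^{\alpha} \sin \Big(\frac{\alpha \pi}{2}\Big) \asymp |q(ir)|, \quad r \to \infty .
\]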
\begin{proof}[Proof of \Cref{T7}]
Assume first that $|q(ir)|$ is positively increasing. Then there are $\lambda, \sigma \in (0,1)$ and $R > 0$ such that
\begin{align}
\label{A18}
\frac{|q(i\lambda r)|}{|q(ir)|} \leq \sigma, \quad \quad r \geq R.
\end{align}
Let $H$ be a Hamiltonian with Weyl coefficient $q_H=q$, allowing us to use (\ref{A10}). \\
Suppose the assertion were false. Then there is a (w.l.o.g. increasing) sequence $r_n \to \infty$ with $\lim_{n \to \infty} \frac{\IM q(ir_n)}{|q(ir_n)|}=0$. Let $m(n)$ be such that
\[
\lambda^{m(n)+1} \leq \frac{\mathring r(\hat t(r_n))}{r_n} < \lambda^{m(n)}.
\]
Note that $m(n) \to \infty$ because of (\ref{A9}). \\
Furthermore, (\ref{A10}) ensures that there is $\beta >0$ with
\[
\beta \leq \frac{|q(i \mathring r(\hat t(r)))|}{|q(ir)|}, \quad r \in (0,\infty).
\]
We will also need that for $0<r<r'$,
\[
\frac{|q(ir)|}{|q(ir')|} \asymp \frac{r' \omega_2(\mathring t(r'))}{r \omega_2(\mathring t(r))} \leq \frac{r'}{r}
\]
because $\omega_2$ is nondecreasing and $\mathring t$ is nonincreasing. \\
Choosing $n$ so big that $\mathring r(\hat t(r_n)) \geq R$, we get the contradiction
\begin{align*}
\beta &\leq \frac{\big|q \big(i \mathring r(\hat t(r_n)) \big) \big|}{\big|q \big(ir_n \big)\big|} = \frac{\big|q \big(i \mathring r(\hat t(r_n)) \big) \big|}{\big|q \big(i\lambda^{m(n)}r_n \big)\big|} \cdot \prod_{j=0}^{m(n)-1}\frac{\big|q \big(i\lambda^{j+1} r_n \big)\big|}{\big|q \big(i\lambda^j r_n \big)\big|} \\
&\lesssim \frac{\lambda^{m(n)}r_n}{\mathring r(\hat t(r_n))} \sigma^{m(n)}
\leq \frac{\sigma^{m(n)}}{\lambda} \xrightarrow{n \to \infty} 0.
\end{align*}
This proves the theorem in the case that $|q(ir)|$ is positively increasing. \\
If, on the other hand, $\frac{1}{|q(ir)|}$ is positively increasing, we may set $\tilde q :=-\frac 1q$, for which $|\tilde q(ir)|$ is positively increasing. We obtain
\[
\frac{\IM q(ir)}{|q(ir)|} = \frac{\IM \tilde q(ir)}{|\tilde q(ir)|} \asymp 1.
\]
Finally, we note that if $|q(ir)|$ is regularly varying with index $\alpha >0$, then it is also positively increasing. If $|q(ir)|$ is regularly varying with index $\alpha < 0$, then $\frac{1}{|q(ir)|}$ is regularly varying with index $-\alpha > 0$ and thus positively increasing.
\end{proof}
\noindent Our proof of \Cref{T8} is elementary: only folklore facts that follow from the Herglotz integral representation (\ref{A17}) are needed. We would also be interested in an elementary proof of \Cref{T7}, which so far we have not found. \\
\noindent We will need the following fact: for any Herglotz function $q$ and any $z \in \bb C_+$, we have
\begin{equation}
\label{A5}
|q'(z)| \leq \frac{\IM q(z)}{\IM z}.
\end{equation}
This can be seen using the representation (\ref{A17}): we write
\[
q'(z)=b+\int_{\bb R} \frac{d\sigma (t)}{(t-z)^2}
\]
and obtain
\[
|q'(z)| \leq b+\int_{\bb R} \frac{d\sigma (t)}{|t-z|^2}= \frac{\IM q(z)}{\IM z}.
\]
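\noindent The bound (\ref{A5}) is sharp: for the Herglotz function $q(z)=-\frac 1z$ one computes
\[
|q'(is)| = \frac{1}{s^2} = \frac{1/s}{s} = \frac{\IM q(is)}{s},
\]
so equality holds along the imaginary axis.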
\begin{proof}[Proof of \Cref{T8}]
Let $k \in (0,1)$. Then
\begin{align}
\label{A15}
&|\log q \big(ikr \big)-\log q(ir)|=\Big|\int_{kr}^r i(\log q)'(is) \mkern4mu\mathrm{d} s \Big| \leq \int_{kr}^r \big|(\log q)'(is)\big| \mkern4mu\mathrm{d} s.
\end{align}
Apply (\ref{A5}) to $\log q$ and to $i\pi-\log q$ to obtain
\begin{align*}
&|(\log q)'(is)| \leq \frac 1s \min \big\{\IM [\log q(is)],\pi - \IM [\log q(is)] \big\} \\
&= \frac 1s \min \big\{\arg q(is), \pi-\arg q(is) \big\} \asymp \frac{\IM q(is)}{s|q(is)|}
\end{align*}
for all $s>0$, where the last step uses that $\min \{\theta,\pi-\theta \} \asymp \sin \theta$ for $\theta \in (0,\pi)$ and $\sin \arg q(is) = \frac{\IM q(is)}{|q(is)|}$. We will also need monotonicity in $s$ of $s\frac{\IM q(is)}{|q(is)|}$. In fact, from the representation (\ref{A17}) we obtain $s \IM q(is)=bs^2+\int_{\bb R} \frac{s^2}{t^2+s^2} \mkern4mu\mathrm{d} \sigma(t)$, which is nondecreasing in $s$. Now we can write
\[
s\frac{\IM q(is)}{|q(is)|} = \sqrt{s \IM q(is) \cdot s \IM \Big(-\frac{1}{q(is)} \Big)}
\]
and hence $s\frac{\IM q(is)}{|q(is)|}$ is nondecreasing in $s$.
Putting this together and continuing the estimate in (\ref{A15}), we obtain
\begin{align}
&|\log q \big(ikr \big)-\log q(ir)| \lesssim \int_{kr}^r \frac{\IM q(is)}{s|q(is)|} \mkern4mu\mathrm{d} s \leq r \frac{\IM q(ir)}{|q(ir)|} \cdot \int_{kr}^r \frac{ds}{s^2} \nonumber \\
&= r \frac{\IM q(ir)}{|q(ir)|}\Big(\frac{1}{kr}-\frac 1r \Big) \asymp \frac{\IM q(ir)}{|q(ir)|} \xrightarrow{r \to \infty} 0. \label{A28}
\end{align}
This shows $\lim_{r \to \infty} \frac{q(ikr)}{q(ir)}=1$ for $k \in (0,1)$; for $k>1$, exchange the roles of $r$ and $kr$. To prove (\ref{A19}), set $k(r):= \frac{\IM q(ir)}{|q(ir)|}$ and repeat the calculations up to the second-to-last term in (\ref{A28}), but with $k$ replaced by $k(r)^{\delta}$, where $\delta \in [0,1)$. Since
\[
r k(r) \Big(\frac{1}{rk(r)^{\delta}}-\frac 1r \Big) \asymp k(r)^{1-\delta} \xrightarrow{r \to \infty} 0,
\]
we arrive at (\ref{A19}).
\end{proof}
\noindent Note that $\lim_{r \to \infty} \frac{q(ikr)}{q(ir)}=1$ is also a consequence of (\ref{A20}). The preceding proof, in addition to being elementary, is needed to show (\ref{A19}) which, upon taking absolute values, can be seen as slow variation with a rate.
\section{Maximal oscillation within Weyl disks}
\label{S4}
\noindent In order to explain the aim of this section, let us first recall the notion of Weyl disks. Let $W(t,z) \in \bb C^{2 \times 2}$ be the fundamental solution of
\begin{equation}
\frac{d}{dt} W(t,z)J=zW(t,z)H(t),
\end{equation}
with initial condition $W(a,z)=I$, solving the transpose of equation (\ref{A33}). We define the \textit{Weyl disks}
\begin{equation}
\label{A34}
D_{t,z}:=\Big\{ \frac{w_{11}(t,z)\tau + w_{12}(t,z)}{w_{21}(t,z)\tau + w_{22}(t,z)} \Big| \tau \in \overline{\bb C_+} \Big\} \subseteq \overline{\bb C_+},
\end{equation}
where $\bb C_+=\{z \in \bb C \mid \IM z>0\}$, and the closure is taken in the Riemann sphere $\overline{\bb C}=\bb C \cup \{ \infty \}$. For fixed $z \in \bb C_+$ and $t_1 \leq t_2$, we have $D_{t_1,z} \supseteq D_{t_2,z}$, and the disks shrink down to a single point, which is $q_H(z)$:
\[
\bigcap_{t \in [a,b)} D_{t,z} = \{ q_H(z) \}.
\]
\noindent Now we review the estimate (\ref{A43}), which has a geometric interpretation. Namely, the functions $L(r)$ and $A(r)$ give, up to $\asymp$, the imaginary parts of the bottom and top points of $D_{\mathring t(r),ir}$, respectively. The size of $\IM q_H(ir)$ relative to $L(r)$ and $A(r)$ thus corresponds to the vertical position of $q_H(ir)$ within the disk $D_{\mathring t(r),ir}$. \\
In this section we give answers to several questions from \cite{langer.pruckner.woracek:heniest}. For instance, the question was raised whether there is a Hamiltonian $H$ for which $L(r) \asymp \IM q_H(ir) \not\asymp A(r)$ for $r \to \infty$. The answer to this particular question is no, cf. \Cref{T10}. However\footnote{In this section, we use the more transparent notation $f(r) \ll g(r)$ instead of $f(r) = {\rm o} (g(r))$.}, $L(r_n) \asymp \IM q_H(ir_n) \ll A(r_n)$ on a subsequence $r_n \to \infty$ is possible, and we provide examples for this in \Cref{T6} and in \Cref{R3}. The Weyl coefficient of the Hamiltonian constructed in \Cref{T6} exhibits ``maximal'' oscillatory behaviour in the sense that it goes back and forth between the bottoms and tops of the disks $D_{\mathring t(r),ir}$.
\begin{proposition}
\label{T10}
Let $H$ be a Hamiltonian on $(a,b)$. The following statements hold:
\begin{itemize}
\item[$(i)$] Suppose that $L(r) \not\asymp A(r)$ as $r \to \infty$. Then there exists a sequence $(r_n)_{n \in \bb N}$ such that $r_n \to \infty$, $L(r_n) \ll A(r_n)$, and
\[
\IM q_H(ir_n) \gtrsim \sqrt{L(r_n)A(r_n)}.
\]
\item[$(ii)$] Suppose that $L(r) \not\asymp A(r)$, but not $L(r) \ll A(r)$ as $r \to \infty$. Then there is also $(r_n')_{n \in \bb N}$ with $r_n' \to \infty$, $L(r_n') \ll A(r_n')$, and
\begin{equation}
\label{A32}
\IM q_H(ir_n') \asymp \sqrt{L(r_n')A(r_n')}.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
We shorten notation by setting $d(t):=\frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)}$. By assumption, $\liminf_{t \to \hat a} d(t)=0$. Let $c \in (\hat a,b)$ be fixed and set $t_n := \max\{t \leq c \mid d(t) \leq \frac 1n \}$. With $t_n^+:= \hat t(\mathring r(t_n)) \geq t_n$, we have $d(t_n^+) \geq \frac 1n = d(t_n)$ if $n$ is large enough for $t_n^+ \leq c$ to hold. Using (\ref{A9}), we obtain
\[
\bigg(\frac{\IM q_H(i \mathring r(t_n))}{A(\mathring r(t_n))} \bigg)^2 \asymp d(t_n^+) \geq d(t_n)=\frac{L(\mathring r(t_n))}{A(\mathring r(t_n))}.
\]
Note that $L(\mathring r(t_n)) \ll A(\mathring r(t_n))$ because of $d(t_n) \to 0$. \\
Suppose now that $s:=\limsup_{r \to \infty} \frac{L(r)}{A(r)}>0$. Set $\xi_n := \max \{t \leq t_n \mid d(t)=\frac s2\}$ and find $\tau_n$ between $\xi_n$ and $t_n$ such that $d(\tau_n) = \min \{d(t) \mid t \in [\xi_n, t_n]\}$. Certainly, $d(\tau_n) \leq d(t_n)=\frac 1n$ and $d(\tau_n) \leq d(t)$ for all $t \in [\xi_n,c]$. Also note that by the same arguments as above,
\begin{align}
\label{A26}
\IM q_H(i \mathring r(\tau_n)) \gtrsim \sqrt{L(\mathring r(\tau_n))A(\mathring r(\tau_n))}.
\end{align}
\noindent We prove next that $\hat r(\tau_n) \ll \mathring r(\xi_n)$. Note that by passing to a subsequence and possibly replacing $H$ by $J^{\top}HJ$ (which switches the sign of $\omega_3$), we can assume that
\[
\lim_{n \to \infty} \frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}=1.
\]
A calculation shows that $\sqrt{(\omega_1 \omega_2)(t)}-\omega_3(t)$ is nondecreasing: its derivative is $\frac{h_1\omega_2+h_2\omega_1}{2\sqrt{\omega_1 \omega_2}}-h_3 \geq \sqrt{h_1h_2}-h_3 \geq 0$ by the inequality between arithmetic and geometric means and $\det H \geq 0$. Hence
\begin{align}
\label{A27}
&\bigg(\frac{\hat r(\tau_n)}{\mathring r(\xi_n)} \bigg)^2= \frac{(\omega_1 \omega_2)(\xi_n)}{\det \Omega(\tau_n)} \\
&=\frac{(\omega_1 \omega_2)(\xi_n)\Big(1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)}{\Big(\sqrt{(\omega_1 \omega_2)(\tau_n)}-\omega_3(\tau_n)\Big)^2 \Big(1+\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)} \nonumber\\
&\leq \frac{(\omega_1 \omega_2)(\xi_n)\Big(1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)}{\Big(\sqrt{(\omega_1 \omega_2)(\xi_n)}-\omega_3(\xi_n)\Big)^2 \Big(1+\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)} \nonumber\\
&= \frac{\Big(1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)}{\Big(1-\frac{\omega_3(\xi_n)}{\sqrt{(\omega_1 \omega_2)(\xi_n)}}\Big)^2 \Big(1+\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)} \lesssim 1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}} \to 0. \nonumber
\end{align}
Let $\tau_n^- := \mathring t(\hat r(\tau_n))$. By the calculation above, $\mathring r(\tau_n^-)=\hat r(\tau_n) < \mathring r(\xi_n)$ for large enough $n$, implying $\tau_n^- > \xi_n$ and hence $d(\tau_n^-) \geq d(\tau_n)$. Consequently,
\[
\frac{L(\hat r(\tau_n))}{A(\hat r(\tau_n))} =d(\tau_n^-) \geq d(\tau_n) \asymp \bigg(\frac{\IM q_H(i\hat r(\tau_n))}{A(\hat r(\tau_n))} \bigg)^2.
\]
This means that
\[
\IM q_H(i \hat r(\tau_n)) \leq C \sqrt{L(\hat r(\tau_n)) A(\hat r(\tau_n))}
\]
for some $C>0$ and all large $n$. Recall (\ref{A26}) and choose $C'>0$, w.l.o.g. $C'<C$, such that
\[
\IM q_H(i \mathring r(\tau_n)) \geq C' \sqrt{L(\mathring r(\tau_n)) A(\mathring r(\tau_n))}
\]
for large $n$. By continuity, we find, for each large $n$, an $r_n' \in [\mathring r(\tau_n),\hat r(\tau_n)]$ with
\[
\frac{\IM q_H(ir_n')}{\sqrt{L(r_n')A(r_n')}} \in [C',C],
\]
such that $(r_n')_{n \in \bb N}$ satisfies (\ref{A32}). The only thing left to prove is that $L(r_n') \ll A(r_n')$. \\
Suppose not; then, along a subsequence, we would have $L(r_n') \asymp A(r_n')$. Consider $\xi_n':=\mathring t(r_n') \leq \tau_n$, which would then satisfy $d(\xi_n') \gtrsim 1$ and hence
\[
1-\frac{\omega_3(\xi_n')}{\sqrt{(\omega_1 \omega_2)(\xi_n')}} \gtrsim 1.
\]
Now look at (\ref{A27}), but with $\xi_n$ replaced by $\xi_n'$. It follows that, for large $n$, $\hat r(\tau_n) < r_n'$, contradicting the choice of $r_n'$.
\end{proof}
\noindent In the following definition, we construct a Hamiltonian by prescribing $f:=\frac{\omega_3}{\sqrt{\omega_1 \omega_2}}$ and choosing $f$ to be a highly oscillating function. It should be mentioned that the method we use for prescription works in full generality: any locally absolutely continuous function with values in $(-1,1)$ occurs as $\frac{\omega_3}{\sqrt{\omega_1 \omega_2}}$ for some Hamiltonian. Details can be found in the appendix.
\begin{definition}
\label{T6}
Let $(t_n)_{n \in \bb N}, (\xi_n)_{n \in \bb N}$ be sequences of positive numbers converging to zero, where $\xi_{n+1}<t_n<\xi_n$ for all $n \in \bb N$. Choose $p,l \in (0,1)$ and set
\[
f(t_n)=1-p^n, \quad \quad f(\xi_n)=l^n
\]
and interpolate between consecutive points monotonically and absolutely continuously (e.g., by linear interpolation). Set
\[
\alpha_1(t):= \begin{cases}
\frac{f'(t)}{1-f(t)}, & t \in (\xi_{n+1},t_n), \\
0, & t \in (t_n,\xi_n),
\end{cases}
\]
and
\[
\alpha_2(t):= \begin{cases}
\frac{f'(t)}{1-f(t)}, & t \in (\xi_{n+1},t_n), \\
-2\frac{f'(t)}{f(t)}, & t \in (t_n,\xi_n).
\end{cases}
\]
For $t \in [0,t_1]$, let $\omega_i(t):=\exp \Big(-\int_t^{t_1} \alpha_i(s) \mkern4mu\mathrm{d} s \Big)$, $i=1,2$, and $\omega_3(t):=\sqrt{(\omega_1 \omega_2)(t)} \cdot f(t)$. Set $h_i(t)=\omega_i'(t)$, $i=1,2,3$, $t \in [0,t_1]$. For $t \in (t_1,\infty)$, let $h_1(t):=1$ and $h_2(t):=h_3(t):=0$. Finally, define
\[
H_{p,l} :=\begin{pmatrix}
h_1 & h_3 \\
h_3 & h_2
\end{pmatrix}.
\]
\end{definition}
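\noindent For concreteness, one admissible choice of the sequences is
\[
\xi_n := 4^{-n}, \quad \quad t_n := \tfrac 12 \cdot 4^{-n},
\]
together with linear interpolation; indeed, $\xi_{n+1} = \frac 14 \cdot 4^{-n} < t_n < \xi_n$. Note that the asymptotics established in \Cref{R4} below depend only on $p$, $l$, and the prescribed values of $f$, and not on the particular choice of $(t_n)_{n \in \bb N}$ and $(\xi_n)_{n \in \bb N}$.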
\begin{lemma}
$H_{p,l}$ is a Hamiltonian on $[0,\infty)$, and $\omega_i(t)=\int_0^t h_i(s) \mkern4mu\mathrm{d} s$ for $i=1,2,3$ and $t \in [0,t_1]$. Moreover, $0$ is not the left endpoint of an $H_{p,l}$-indivisible interval.
\end{lemma}
\begin{proof}
We write $H$ instead of $H_{p,l}$ for short. First we show that $H(t) \geq 0$ for all $t \in [0,t_1]$. Start by noting that, for $i=1,2$,
\[
\frac{h_i(t)}{\omega_i(t)}=(\log \omega_i)'(t)=\alpha_i(t),
\]
and calculate
\begin{align*}
&\frac{h_3(t)^2}{(\omega_1 \omega_2)(t)}=\frac{\big[(\sqrt{\omega_1 \omega_2}f )'(t)\big]^2}{(\omega_1 \omega_2)(t)} = \Big(f'(t)+\frac 12 \Big[\frac{h_1(t)}{\omega_1(t)}+\frac{h_2(t)}{\omega_2(t)} \Big]f(t) \Big)^2 \\
&=\Big(f'(t)+\frac{\alpha_1(t)+\alpha_2(t)}{2} f(t) \Big)^2.
\end{align*}
If $t \in (t_n,\xi_n)$, then this expression equals $0$, as does
\[
\frac{(h_1h_2)(t)}{(\omega_1 \omega_2)(t)}=\alpha_1(t)\alpha_2(t)=0.
\]
For $t \in (\xi_{n+1},t_n)$,
\[
\Big(f'(t)+\frac{\alpha_1(t)+\alpha_2(t)}{2} f(t) \Big)^2=\Big(\frac{f'(t)}{1-f(t)}\Big)^2=\alpha_1(t)\alpha_2(t)=\frac{(h_1h_2)(t)}{(\omega_1 \omega_2)(t)}.
\]
In both cases, $\det H(t)=0$. For $i=1,2$, as $\alpha_i(t) \geq 0$, $t \in [0,t_1]$, certainly $\omega_i(t)$ is increasing and thus $h_i(t) \geq 0$. This suffices to show that $H(t) \geq 0$. \\
$H$ is in limit point case since, for $t>t_1$, the trace of $H(t)$ equals $1$. To show that $\omega_i(t)=\int_0^t h_i(s) \mkern4mu\mathrm{d} s$, $i=1,2,3$, $t \in [0,t_1]$, we need to check that $\lim_{t \to 0} \omega_i(t)=0$. For $i=1$, this follows from
\begin{equation}
\int_0^{t_1} \alpha_1(s) \mkern4mu\mathrm{d} s = \sum_{n=1}^{\infty} \int_{\xi_{n+1}}^{t_n} \frac{f'(s)}{1-f(s)} \mkern4mu\mathrm{d} s=\sum_{n=1}^{\infty} \big[\log (1-l^{n+1})-\log(p^n) \big]=\infty.
\end{equation}
For $i=2$, it follows from the fact that $\alpha_2(t) \geq \alpha_1(t)$ for all $t \in [0,t_1]$, and for $i=3$ it follows from the definition of $\omega_3$ and the fact that $f(t) <1$, $t \in [0,t_1]$. \\
Finally, $0$ is not the left endpoint of an $H$-indivisible interval because
\[
\det \Omega(t)=(\omega_1 \omega_2)(t) \big(1-f(t)^2 \big) >0
\]
for all $t \in (0,t_1]$.
\end{proof}
\noindent We investigate the behaviour for $r \to \infty$ of $\IM q_{H_{p,l}}(ir)$ as well as $L(r)$ and $A(r)$. A rough description of the situation is:
\begin{center}
\label{tikz1}
\begin{tikzpicture}
\draw[scale=0.5, loosely dashed, domain=1:3.65, smooth, variable=\x] plot ({(\x+2)*(\x+2)}, {4*(\x+2)});
\draw[out=30, in=240] (4.5,6.3) to (6,8.5);
\draw[out=60, in=190] (6,8.5) to (6.7,9);
\draw[out=10, in=175] (6.7,9) to (7.5,8.9);
\draw[out=-5, in=190] (7.5,8.9) to (8,9);
\draw[out=10, in=205] (8,9) to (9.5,9);
\draw[out=25, in=240] (9.5,9) to (11.5,11.5);
\draw[out=60, in=185] (11.5,11.5) to (12.2,12.1);
\draw[out=5, in=205] (12.2,12.1) to (15,10.8);
\draw[out=25, in=245] (15,10.8) to (15.95,11.95);
\draw[out=30, in=190] (4.5,5.9) to (5.8,6.5);
\draw[out=10, in=200] (5.8,6.5) to (8,6.5);
\draw[out=20, in=220] (8,6.5) to (10.2,9.2);
\draw[out=40, in=170] (10.2,9.2) to (12,9.6);
\draw[out=-10, in=185] (12,9.6) to (13.4,9.4);
\draw[out=5, in=225] (13.4,9.4) to (14.8,10.3);
\draw[out=45, in=190] (14.8,10.3) to (15.95,10.95);
\draw[out=30, in=245] (4.5,6.05) to (5.7,7.5);
\draw[out=65, in=150] (5.7,7.5) to (7.5,8);
\draw[out=-30, in=240] (7.5,8) to (9.35,8.2);
\draw[out=60, in=221] (9.35,8.2) to (10.03,9.13);
\draw[out=41, in=237] (10.03,9.13) to (11.04,10.4);
\draw[out=57, in=180] (11.04,10.4) to (11.7,10.9);
\draw[out=0, in=140] (11.7,10.9) to (12.3,10.7);
\draw[out=-40, in=172] (12.3,10.7) to (13.52,9.75);
\draw[out=-8, in=218] (13.52,9.75) to (15,10.59);
\draw[out=38, in=241] (15,10.59) to (15.95,11.67);
\node (A) at (4.5,5) {$\mathring r(\xi_n)$};
\node (B) at (5.1,4.55) {$\hat r(\xi_n)$};
\node (C) at (5.85,5) {$\mathring r(t_n)$};
\node (D) at (9.15,4.55) {$\hat r(t_n)$};
\node (E) at (9.9,5) {$\mathring r(\xi_{n+1})$};
\node (F) at (10.5,4.55) {$\hat r(\xi_{n+1})$};
\node (G) at (11.25,5) {$\mathring r(t_{n+1})$};
\node (H) at (13.7,4.55) {$\hat r(t_{n+1})$};
\node (I) at (14.8,5) {$\mathring r(\xi_{n+2})$};
\node (J) at (15.4,4.55) {$\hat r(\xi_{n+2})$};
\draw[loosely dotted] (4.5,5.3) to (4.5,12.2);
\draw[loosely dotted] (5.1,4.85) to (5.1,12.2);
\draw[loosely dotted] (5.85,5.3) to (5.85,12.2);
\draw[loosely dotted] (9.15,4.85) to (9.15,12.2);
\draw[loosely dotted] (9.9,5.3) to (9.9,12.2);
\draw[loosely dotted] (10.5,4.85) to (10.5,12.2);
\draw[loosely dotted] (11.25,5.3) to (11.25,12.2);
\draw[loosely dotted] (13.7,4.85) to (13.7,12.2);
\draw[loosely dotted] (14.8,5.3) to (14.8,12.2);
\draw[loosely dotted] (15.55,4.85) to (15.55,12.2);
\node[scale=0.8] (J) at (7,6.28) {$L(r)$};
\node[scale=1.15] (K) at (7.1,7.15) {$r^{\frac{\log l}{\log (pl)}}$};
\node[scale=0.8] (L) at (7,8.48) {$\IM q_H(ir)$};
\node[scale=0.8] (M) at (7,9.18) {$A(r)$};
\end{tikzpicture}
A sketch of the behaviour of $q_{H_{p,l}}$
\end{center}
\noindent Formal details are given in the following theorem as well as in \Cref{R2}.
\begin{theorem}
\label{T2}
Let $p,l \in (0,1)$. For the Hamiltonian $H=H_{p,l}$ from \Cref{T6} and for all sufficiently large $n \in \bb N$, we have
\begin{equation}
\label{A8}
\mathring r(\xi_n) < \hat r(\xi_n) <\mathring r(t_n) < \hat r(t_n) < \mathring r(\xi_{n+1}).
\end{equation}
On the intervals delimited by the terms in (\ref{A8}), the functions $L(r)$, $\IM q_H(ir)$, and $A(r)$ behave in the following way:
\begin{itemize}
\item[$(i)$] $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\mathring r(\xi_n),\hat r(\xi_n)]$, $n \in \bb N$.
\item[$(ii)$] $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\hat r(\xi_n),\mathring r(t_n)]$, $n \in \bb N$. \\[1ex]
Moreover, $L(\mathring r(t_n)) \ll A(\mathring r(t_n))$.
\item[$(iii)$] $L(r) \ll A(r)$ uniformly for $r \in [\mathring r(t_n), \hat r(t_n)]$, $n \in \bb N$. \\[1ex]
In addition, $L(\mathring r(t_n)) \ll \IM q_H(i \mathring r(t_n)) \asymp A(\mathring r(t_n))$ as well as $L(\hat r(t_n)) \asymp \IM q_H(i \hat r(t_n)) \ll A(\hat r(t_n))$.
\item[$(iv)$] $L(r) \asymp \IM q_H(ir) \ll A(r)$ uniformly for $r \in [\hat r(t_n),\mathring r(\xi_{n+1})]$, $n \in \bb N$.
\end{itemize}
\end{theorem}
\noindent The proof of this theorem involves some (at times tedious) computations, which are partly contained in the following lemma. \\
The symbol $\approx$ denotes equality up to an additive term that is bounded in $n$ and $t$.
\begin{lemma}
\label{R4}
For the Hamiltonian $H_{p,l}$, the following formulae hold.
\newline
\begin{tabular}{|lcl|}
\hline$\log \mathring r(t_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}$ \begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\[1ex]
\hline
$\log \mathring r(\xi_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}$ \begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
\end{tabular}
$\mkern-12mu$
\begin{tabular}{|lcl|}
\hline
$\log \hat r(t_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log (pl)}{2}-n \frac{\log l}{2}$
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \hat r(\xi_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}$
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
\end{tabular}
\begin{tabular}{|lcll|}
\hline
$\log \mathring r(t)$ & $\mkern-10mu \approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(t)$, & $\mkern-10mu t \in [t_n,\xi_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \mathring r(t)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log (1-f(t))$, & $\mkern-10mu t \in [\xi_{n+1},t_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \hat r(t)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(t)-\frac {\log (1-f(t))}{2}$, & $\mkern-10mu t \in [t_n,\xi_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \hat r(t)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\frac {\log (1-f(t))}{2}$, & $\mkern-10mu t \in [\xi_{n+1},t_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
\end{tabular}
\newline
\end{lemma}
\begin{proof}
First we calculate
\begin{align}
&\log(\mathring r(t_n))=-\frac 12 \log [(\omega_1 \omega_2)(t_n)] =\frac 12 \int_{t_n}^{t_1} (\alpha_1(s)+\alpha_2(s)) \mkern4mu\mathrm{d} s \nonumber\\
&= \sum_{k=1}^{n-1} \Big( \int_{t_{k+1}}^{\xi_{k+1}} \frac{-f'(s)}{f(s)} \mkern4mu\mathrm{d} s + \int_{\xi_{k+1}}^{t_k} \frac{f'(s)}{1-f(s)} \mkern4mu\mathrm{d} s \Big) \nonumber\\
&= \sum_{k=1}^{n-1} \Big(\log(1-p^{k+1})-(k+1)\log l + \log(1-l^{k+1}) -k\log p \Big) \nonumber\\
\label{A23}
&\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}.
\end{align}
This also leads to
\begin{align*}
\log \hat r(t_n) &=-\frac 12 \log (1-f(t_n)^2)+\log \mathring r(t_n) \approx -\frac 12 \log (1-f(t_n))+\log \mathring r(t_n) \\
&\approx -n^2 \frac{\log (pl)}{2}-n \frac{\log l}{2}.
\end{align*}
If $t \in [t_n,\xi_n]$, then
\begin{align*}
\log \mathring r(t)=\log \mathring r(t_n)-\int_{t_n}^t \frac{-f'(s)}{f(s)} \mkern4mu\mathrm{d} s \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(t).
\end{align*}
If $t \in [\xi_{n+1},t_n]$, then
\begin{align*}
&\log \mathring r(t)=\log \mathring r(t_n)+\int_t^{t_n} \frac{f'(s)}{1-f(s)} \mkern4mu\mathrm{d} s \\
&\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log (1-f(t)).
\end{align*}
By adding $-\frac 12 \log (1-f(t)^2) \approx -\frac 12 \log (1-f(t))$, the analogous formula for $\hat r(t)$ follows. Lastly,
\begin{align*}
\log \mathring r(\xi_n) &\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\xi_n) \\
&\approx -n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}
\end{align*}
and
\begin{align*}
\log \hat r(\xi_n) &=-\frac 12 \log (1-f(\xi_n)^2)+\log \mathring r(\xi_n) \approx \log \mathring r(\xi_n).
\end{align*}
\end{proof}
\begin{proof}[Proof of \Cref{T2}]
It follows from \Cref{R4} that $\hat r(\xi_n)<\mathring r(t_n)$ and $\hat r(t_n) < \mathring r(\xi_{n+1})$ for large enough $n$. The remaining two inequalities in (\ref{A8}) follow from the basic fact that $\mathring r(t) < \hat r(t)$ for all $t \in (0,\infty)$. \\
We will now prove $(i)-(iv)$ in reverse order.
\\[1.7ex]
\underline{$(iv)$:} $\xi_{n+1} \leq \mathring t(r) \leq t_n$ and $\xi_{n+1} \leq \hat t(r) \leq t_n$. By \Cref{R4},
\begin{align*}
&-n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r))=\log r \\
&=\log \mathring r(\mathring t(r)) \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log \big(1-f(\mathring t(r))\big).
\end{align*}
Hence,
\begin{align*}
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f \big(\hat t(r)\big)^2} \asymp 1-f \big(\mathring t(r)\big)^2 =\frac{L(r)}{A(r)}.
\end{align*}
In addition,
\[
\frac{L \big( \mathring r(\xi_{n+1}) \big)}{A \big( \mathring r(\xi_{n+1}) \big)} \asymp 1-f(\xi_{n+1})^2 \asymp 1,
\]
while
\[
\frac{L \big(\hat r(t_n) \big)}{A \big(\hat r(t_n) \big)} \asymp 1-f \big(\mathring t(\hat r(t_n)) \big)^2 \asymp \sqrt{1-f(t_n)^2} \asymp p^{\frac n2} \ll 1.
\]
\\[1.7ex]
\underline{$(iii)$:} $\xi_{n+1} \leq \mathring t(r) \leq t_n$ and $t_n \leq \hat t(r) \leq \xi_n$. Thus
\begin{align*}
&-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\hat t(r))-\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r)) \\
&=\log \mathring r(\mathring t(r)) \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log \big(1-f(\mathring t(r))\big).
\end{align*}
Consequently,
\[
\frac 12 \log \big(1-f(\hat t(r))\big) \approx n \log p + \log f(\hat t(r))-\log \big(1-f(\mathring t(r))\big),
\]
which implies
\[
\sqrt{1-f(\hat t(r))} \asymp p^n \frac{f(\hat t(r))}{1-f(\mathring t(r))}.
\]
Let us check that the term $f(\hat t(r))$ can be neglected. Using that $f(\mathring t(r)) \leq 1-p^n$, we get
\[
\sqrt{1-f(\hat t(r))} \lesssim f(\hat t(r))
\]
which is only possible if $f(\hat t(r))$ stays away from $0$. As $f(\hat t(r)) <1$, this means that $f(\hat t(r)) \asymp 1$, leading to
\[
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f(\hat t(r))} \asymp \frac{p^n}{1-f(\mathring t(r))}.
\]
Hence, $\IM q_H(i\mathring r(t_n)) \asymp A(\mathring r(t_n))$. Looking back at case $(iv)$, we know that $\IM q_H(i\hat r(t_n)) \asymp L(\hat r(t_n)) \ll A(\hat r(t_n))$. In particular, since $\frac{L(r)}{A(r)}=1-f(\mathring t(r))^2$ is increasing for $r$ in $[\mathring r(t_n),\hat r(t_n)]$, we have $L(r) \ll A(r)$ uniformly on this interval.\\[1.7ex]
\underline{$(ii)$:} $t_n \leq \mathring t(r) \leq \xi_n$ and $t_n \leq \hat t(r) \leq \xi_n$, leading to
\begin{align*}
&-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\hat t(r))-\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r)) \\
&=\log \mathring r(\mathring t(r)) \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\mathring t(r)).
\end{align*}
Hence
\begin{align*}
\sqrt{1-f(\hat t(r))} \asymp \frac{f(\hat t(r))}{f(\mathring t(r))} > f(\hat t(r)).
\end{align*}
In particular, $1-f(\hat t(r))$ stays away from $0$, which means that
\[
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f(\hat t(r))} \asymp 1.
\]
In other words, $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\hat r(\xi_n), \mathring r(t_n)]$. As we already know, $L(\mathring r(t_n)) \ll \IM q_H(i\mathring r(t_n)) \asymp A(\mathring r(t_n))$.\\[1.7ex]
\underline{$(i)$:} $t_n \leq \mathring t(r) \leq \xi_n$ and $\xi_n \leq \hat t(r) \leq t_{n-1}$. In this case
\begin{align*}
&-n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}+\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r))=\log \mathring r(\mathring t(r)) \\
&\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\mathring t(r)).
\end{align*}
Taking into account that $f(\mathring t(r)) \geq l^n$ by definition, it follows that
\[
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f(\hat t(r))} \asymp \frac{f(\mathring t(r))}{l^n} \asymp 1.
\]
Therefore, $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\mathring r(\xi_n),\hat r(\xi_n)]$. At the left end of this interval, we even have $L(\mathring r(\xi_n)) \asymp A(\mathring r(\xi_n))$ by case $(iv)$.
\end{proof}
\noindent Before we state our next result, we note that by definition of $H_{p,l}$,
\begin{equation}
\label{A13}
\liminf_{t \to 0} \frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)}=\liminf_{t \to 0} \big(1-f(t)^2 \big)=0.
\end{equation}
In view of (\ref{A9}), we have $\liminf_{r \to \infty} \frac{\IM q_{H_{p,l}}(ir)}{|q_{H_{p,l}}(ir)|}=0$ and hence $\IM q_{H_{p,l}}(ir) \not\asymp |q_{H_{p,l}}(ir)|$. \\
Nevertheless, the following lemma shows that $|q_{H_{p,l}}(ir)|$ grows faster than a power. Recalling \Cref{T7}, this means that $|q(ir)| \gtrsim r^{\delta}$ for $r \to \infty$ is not a sufficient condition for $\IM q(ir) \asymp |q(ir)|$ as $r \to \infty$. Instead, we see that $|q(ir)|$ being positively increasing means not only that $|q(ir)|$ grows sufficiently fast, but also that it does so without oscillating too much.
\begin{lemma}
\label{R2}
Let $\delta := \frac{\log l}{\log (pl)} \in (0,1)$. Then
\begin{itemize}
\item[$\rhd$] $|q_{H_{p,l}}(ir)| \gtrsim r^{\delta}$, $r \to \infty$,
\item[$\rhd$] $|q_{H_{p,l}}(i\mathring r(\xi_n))| \asymp \mathring r(\xi_n)^{\delta}$.
\end{itemize}
\end{lemma}
\begin{proof}
We start the proof by calculating, for $t \in [t_n,\xi_n]$,
\begin{align*}
&\log \sqrt{ \frac{\omega_1(t)}{\omega_2(t)} }=\frac 12 \log \Big(\frac{\omega_1(t)}{\omega_2(t)} \Big)=\sum_{k=1}^{n-1} \int_{t_{k+1}}^{\xi_{k+1}} \frac{-f'(s)}{f(s)} \mkern4mu\mathrm{d} s + \int_{t_n}^t \frac{f'(s)}{f(s)} \mkern4mu\mathrm{d} s\\
&=\sum_{k=1}^{n-1} \Big(\log (1-p^{k+1})-(k+1)\log l \Big)+\log f(t)-\log (1-p^n) \\
&\approx -(n^2+n)\frac{\log l}{2}+\log f(t).
\end{align*}
Now we use our formula for $\log \mathring r(t)$:
\begin{align}
&\log \sqrt{\frac{\omega_1(t)}{\omega_2(t)} } \nonumber \\
&\approx \frac{\log l}{\log(pl)}\log \mathring r(t)+\frac 12 \bigg(\frac{\log(l)\log(\frac lp)}{\log(pl)} -\log l \bigg)n + \bigg(1-\frac{\log l}{\log(pl)} \bigg)\log f(t)\nonumber\\
\label{A22}
&=\frac{\log l}{\log(pl)}\log \mathring r(t)+\frac{\log p}{\log(pl)}\big(\log f(t)-n \log l\big), \quad t \in [t_n,\xi_n].
\end{align}
Since $f$ was assumed to be monotone decreasing on $[t_n,\xi_n]$, and $\log f(\xi_n)=n \log l$,
\[
\log \sqrt{\frac{\omega_1(t)}{\omega_2(t)} } \gtrapprox \frac{\log l}{\log(pl)}\log \mathring r(t)=\delta \log \mathring r(t),
\]
where $\gtrapprox$ indicates that the inequality holds up to an additive term that is bounded in $n$ and $t$. Therefore
\[
|q_{H_{p,l}}(i \mathring r(t))| \asymp \sqrt{\frac{\omega_1(t)}{\omega_2(t)}} \gtrsim \mathring r(t)^{\delta}, \quad t \in [t_n,\xi_n].
\]
Observing that $\frac{\omega_1}{\omega_2}$ is constant on $[\xi_{n+1},t_n]$ (since $\alpha_1-\alpha_2=0$ there), we obtain this estimate also for $t \in [\xi_{n+1},t_n]$:
\[
|q_{H_{p,l}}(i\mathring r(t))| \asymp \sqrt{\frac{\omega_1(t)}{\omega_2(t)}}=\sqrt{\frac{\omega_1(\xi_{n+1})}{\omega_2(\xi_{n+1})}} \gtrsim \mathring r(\xi_{n+1})^{\delta} \geq \mathring r(t)^{\delta}.
\]
Finally, setting $t=\xi_n$ in (\ref{A22}) yields $|q_{H_{p,l}}(i\mathring r(\xi_n))| \asymp \mathring r(\xi_n)^{\delta}$.
\end{proof}
\begin{example}
\label{R3}
Let $H$ be as in \Cref{T6}, but with $f(\xi_n)=1-l^{n-1}$ instead, where $l > \sqrt{p}$. Similarly to \Cref{T2}, one can show that
\[
L(\hat r(t_n)) \asymp \IM q_H(i \hat r(t_n)) \ll A(\hat r(t_n)).
\]
However, for our new Hamiltonian,
\[
\lim_{t \to 0} \frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)} = \limsup_{t \to 0} \frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)} = \limsup_{t \to 0} \big( 1-f(t)^2 \big) = 0
\]
as opposed to (\ref{A13}).
\end{example}
\section{Reformulation for Krein strings}
\label{S6}
Recall that a \textit{Krein string} is a pair $S[L,\mathfrak{m}]$ consisting of a number $L \in (0,\infty ]$ and a nonnegative Borel measure $\mathfrak{m}$ on $[0,L]$, such that $\mathfrak{m}([0,t])$ is finite for every $t \in [0,L)$, and $\mathfrak{m}(\{L\})=0$. To this pair we associate the equation
\begin{equation}
y_+'(x)+z\int_{[0,x]} y(t)\mkern4mu\mathrm{d} \mathfrak{m}(t)=0, \quad \quad x \in [0,L),
\end{equation}
where $y_+'$ denotes the right-hand derivative of $y$, and $z$ is a complex spectral parameter. \\
For each string, we can construct a function $q_S$ called the \textit{principal Titchmarsh-Weyl coefficient} of the string (\cite{langer.winkler:1998} following \cite{kac.krein:1968}). This function belongs to the Stieltjes class, i.e., it is analytic on $\bb C \setminus [0,\infty)$, its imaginary part is nonnegative on $\bb C_+$, and its values on $(-\infty ,0)$ are positive. The correspondence between Krein strings and functions of the Stieltjes class is bijective, as was shown by M.~G.~Krein. \newline
\noindent \Cref{A41} below is the reformulation of \Cref{T1} for the Krein string case.
\begin{theorem}
\label{A41}
Let $S[L,\mathfrak{m}]$ be a Krein string and set
\begin{equation}
\delta (t) := \bigg(\int_{[0,t)} \xi^2 \mkern4mu\mathrm{d} \mathfrak{m}(\xi) \bigg)\cdot \bigg(\int_{[0,t)} \mkern4mu\mathrm{d} \mathfrak{m}(\xi) \bigg)-\bigg(\int_{[0,t)} \xi \mkern4mu\mathrm{d} \mathfrak{m}(\xi) \bigg)^2
\end{equation}
for $t \in [0,L)$. Let
\[
\hat \tau (r):=\inf \big\{t>0 \, \big| \, \frac{1}{r^2} \leq \delta (t) \big\}, \quad \quad r \in (0,\infty).
\]
We set
\begin{align}
f(r) :=\mathfrak{m}([0,\hat \tau (r)))+ \mathfrak{m}(\{\hat \tau (r)\}) \frac{\frac{1}{r^2}-\delta (\hat \tau (r))}{\delta (\hat \tau (r)+)-\delta (\hat \tau (r))}
\end{align}
if $\delta$ is discontinuous at $\hat \tau(r)$, and $f(r):=\mathfrak{m}([0,\hat \tau (r)))$ otherwise. Then
\begin{align}
\IM q_S(ir) \asymp \frac{1}{rf(r)}, \quad \quad r \in (0,\infty ),
\end{align}
with constants independent of the string.
\end{theorem}
\noindent Before proving \Cref{A41}, we need to introduce the concept of dual strings as well as a Hamiltonian associated to a string. Writing
\[
m(t) := \mathfrak{m}([0,t)), \quad \quad t \in [0,L)
\]
we can define the dual string $S[\hat L , \hat{\mathfrak{m}}]$ of $S[L,\mathfrak{m}]$ by setting
\[
\hat L :=\left\{ \begin{array}{ll}
m(L) & \text{if } L+m(L)=\infty ,\\
\infty & \text{else}
\end{array} \right.
\]
and
\[
\hat m (\xi):=\inf \{t >0 \, | \, \xi \leq m (t)\}.
\]
The function $\hat m$ is increasing and left-continuous and thus gives rise to a nonnegative Borel measure $\hat{\mathfrak{m}}$. \newline
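\noindent As a simple illustration (an example added for concreteness, not from the source), consider a string of infinite length carrying a single point mass: $L=\infty$ and $\mathfrak{m}=\mu \delta_{x_0}$ with $\mu ,x_0>0$. Then $m(t)=0$ for $t \leq x_0$ and $m(t)=\mu$ for $t>x_0$, so $L+m(L)=\infty$ and hence $\hat L=m(L)=\mu$. Moreover,
\[
\hat m(\xi)=\inf \{t>0 \, | \, \xi \leq m(t)\}=x_0, \quad \quad \xi \in (0,\mu ],
\]
so the dual string has length $\mu$ and its measure $\hat{\mathfrak{m}}=x_0 \delta_0$ consists of a single point mass $x_0$ at the origin.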
\noindent The Hamiltonian defined by
\begin{equation}
\label{A39}
H(t) := \left\{ \begin{array}{ll} \begin{pmatrix}
\hat m(t)^2 & \hat m(t) \\
\hat m(t) & 1
\end{pmatrix} & \text{if } t \in [0,\hat L], \\[3ex]
\begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix} & \text{if } \hat L + \int_0^{\hat L} \hat m(t)^2 \mkern4mu\mathrm{d} t <\infty ,\,\, \hat L<t<\infty
\end{array} \right.
\end{equation}
then satisfies $q_S=q_H$, see e.g. \cite{kaltenbaeck.winkler.woracek:2007}.
\begin{proof}[Proof of \Cref{A41}]
In view of \Cref{T1} and the fact that $q_S=q_H$ for the Hamiltonian $H$ defined in (\ref{A39}), our task is to express $\hat t_H(r)$ in terms of the string. If $\delta (\hat \tau (r)) = \frac{1}{r^2}$, this is easy because of \cite[Corollary 3.4]{kaltenbaeck.winkler.woracek:2007} giving
\begin{align*}
\det \Omega_H(m (\hat \tau (r))) = \delta (\hat \tau (r))=\frac{1}{r^2}
\end{align*}
and hence $\hat t_H(r)=m (\hat \tau (r))$. \\
Otherwise, we have $\delta (\hat \tau (r))<\frac{1}{r^2}$ and $\delta (\hat \tau (r)+) \geq \frac{1}{r^2}$. Using again \cite[Corollary 3.4]{kaltenbaeck.winkler.woracek:2007}, we have
\begin{equation}
\label{A40}
\det \Omega_H(m (\hat \tau (r)))= \delta (\hat \tau (r)) < \frac{1}{r^2}, \quad \quad \det \Omega_H(m (\hat \tau (r)+))= \delta (\hat \tau (r)+) \geq \frac{1}{r^2}
\end{equation}
which tells us that $\hat t_H(r) \in \big(m (\hat \tau (r)),m (\hat \tau (r)+) \big]$. By \cite[Lemma 3.1]{kaltenbaeck.winkler.woracek:2007}, $\hat m$ is constant on this interval. Therefore, for $t \in \big(m (\hat \tau (r)),m (\hat \tau (r)+) \big]$,
\begin{align*}
\det \Omega_H(t) &=\bigg(\int_0^{m(\hat \tau (r))} \hat m (x)^2 \mkern4mu\mathrm{d} x +\big(t-m(\hat \tau (r))\big)\hat m (t)^2 \bigg)\cdot t \\
&-\bigg(\int_0^{m(\hat \tau (r))} \hat m (x) \mkern4mu\mathrm{d} x +\big(t-m(\hat \tau (r))\big)\hat m (t) \bigg)^2 = c_1(r)t+c_2(r)
\end{align*}
for some constants $c_1(r),c_2(r)$. Using (\ref{A40}), this leads to
\[
\det \Omega_H(t)=\delta (\hat \tau (r)) + \frac{t-m(\hat \tau (r))}{m(\hat \tau (r)+)-m(\hat \tau (r))} \big(\delta (\hat \tau (r)+)-\delta (\hat \tau (r))\big).
\]
If we equate this to $\frac{1}{r^2}$, we find that
\[
\hat t_H(r)=m(\hat \tau (r))+ \big(m(\hat \tau (r)+)-m(\hat \tau (r)) \big) \frac{\frac{1}{r^2}-\delta (\hat \tau (r))}{\delta (\hat \tau (r)+)-\delta (\hat \tau (r))}=f(r).
\]
Since $\omega_{H;2}(t)=\int_0^t h_2(s) \mkern4mu\mathrm{d} s =t$, \Cref{T1} now shows
\[
\IM q_S(ir)=\IM q_H(ir) \asymp \frac{1}{r\hat t_H(r)}=\frac{1}{rf(r)}.
\]
\end{proof}
\appendix
\appendixpage
\section{A construction method for Hamiltonians with prescribed angle of $\boldsymbol{q_H}$}
\label{APP}
\setcounter{lemma}{0}
Let $H$ be a Hamiltonian on $(a,b)$. Assume for simplicity that $\hat a=a$, i.e., $a$ is not the left endpoint of an $H$-indivisible interval. As discussed at the beginning of \Cref{S3}, the behavior of
\[
\frac{\det \Omega (t)}{(\omega_1\omega_2)(t)}=1 - \frac{\omega_3(t)^2}{(\omega_1\omega_2)(t)} >0
\]
towards the left endpoint $a$ corresponds to the angle of $q_H(ir)$ for $r \to \infty$. It is thus desirable to be able to construct examples of Hamiltonians with prescribed $\frac{\omega_3(t)}{\sqrt{(\omega_1\omega_2)(t)}}$, which is what we did in \Cref{T6}. We now give a general version of this idea. \\
The following result is formulated for Hamiltonians in limit circle case, making the statement cleaner. When we made use of this construction method in \Cref{T6}, we obtained a Hamiltonian in limit point case by simply appending an infinitely long indivisible interval.
\begin{proposition}
\label{T3}
Let $f$ be locally absolutely continuous on $(a,b]$ and such that $f(t) \in (-1,1)$ for all $t \in (a,b]$. Then there is a Hamiltonian, in limit circle case at $b$, with the properties
\begin{itemize}
\item[(i)] $a=\hat a$ is not the left endpoint of an $H$-indivisible interval, and
\item[(ii)] $f(t)=\frac{\omega_3(t)}{\sqrt{(\omega_1\omega_2)(t)}}$ for all $t \in (a,b]$.
\end{itemize}
In addition, let
\[
\Delta(f):=\frac{2|f'|}{1-\sgn (f')f}
\]
which is in $L_{loc}^1((a,b])$.
Then all possible choices for $(\omega_1\omega_2)(t)$ are given by functions of the form
\begin{align*}
\exp \Big(c-\int_t^b g(s) \mkern4mu\mathrm{d} s \Big)
\end{align*}
where $c \in \bb R$, $g \in L_{loc}^1((a,b]) \setminus L^1((a,b])$ with $g(t) \geq \Delta(f)(t)$ and $g(t)>0$ for $t \in (a,b]$ a.e.
\end{proposition}
\begin{proof}
If $H$ is given and such that $(i),(ii)$ hold, then clearly $f$ is locally absolutely continuous and takes values in $(-1,1)$. \\
Let $f$ be as in the statement. Then clearly $f' \in L_{loc}^1((a,b])$. Also, the denominator of $\Delta (f)$ is locally bounded below by a positive number, and hence $\Delta(f) \in L_{loc}^1((a,b])$. We check the conditions that $\omega_1(t),\omega_2(t)$ must satisfy in order that they, together with $\omega_3(t):=\sqrt{(\omega_1\omega_2)(t)}f(t)$, give rise to a Hamiltonian through $h_i(t):=\omega_i'(t)$, $i=1,2,3$. Clearly, $\omega_1,\omega_2$ have to be increasing, absolutely continuous on $[a,b]$ and satisfy $\omega_1(0)=\omega_2(0)=0$. Moreover, we want
\[
(h_1h_2)(t) \geq h_3(t)^2=\Big(\sqrt{(\omega_1\omega_2)(t)}f'(t)+\frac{h_1(t)\omega_2(t)+\omega_1(t)h_2(t)}{2 \sqrt{(\omega_1\omega_2)(t)}}f(t) \Big)^2.
\]
This is equivalent to
\[
\frac{(h_1h_2)(t)}{(\omega_1\omega_2)(t)} \geq \bigg(f'(t)+\frac 12 \Big(\frac{h_1(t)}{\omega_1(t)}+\frac{h_2(t)}{\omega_2(t)} \Big) f(t)\bigg)^2.
\]
Setting
\[
\alpha_i(t):=\frac{h_i(t)}{\omega_i(t)}, \quad i=1,2, \quad \quad g:=\alpha_1+\alpha_2 \in L_{loc}^1((a,b]),
\]
the inequality takes the form
\[
\alpha_i (g-\alpha_i) \geq \Big(f'+\frac 12 gf \Big)^2
\]
which is equivalent to
\begin{align}
\label{A7}
\alpha_i \in \Bigg[\frac g2-\sqrt{\frac{g^2}{4}-\Big(f'+\frac 12 gf \Big)^2},\frac g2+\sqrt{\frac{g^2}{4}-\Big(f'+\frac 12 gf \Big)^2}\Bigg].
\end{align}
In particular,
\begin{align*}
&\frac{g^2}{4}-\Big(f'+\frac 12gf \Big)^2 \geq 0 \, \Longleftrightarrow \, \frac g2 \geq \Big|f'+\frac 12gf \Big| \\
&\, \Longleftrightarrow g \geq \frac{2f'}{1-f} \text{ and } g \geq \frac{-2f'}{1+f} \\
&\, \Longleftrightarrow g \geq \frac{2|f'|}{1-\sgn(f')f} =\Delta(f).
\end{align*}
Since $\Delta(f) \in L_{loc}^1((a,b])$, we can find $g \in L_{loc}^1((a,b])$, $g(t) \geq \Delta (f)(t)$ a.e. on $(a,b]$, and additionally, $g \not\in L^1((a,b])$.
Choose measurable functions $\alpha_1,\alpha_2$ such that
\begin{itemize}
\item[$\rhd$] $\alpha_1+\alpha_2=g$,
\item[$\rhd$] (\ref{A7}) holds for $\alpha_1$ at almost all $t \in (a,b)$ (and hence for $\alpha_2$), and
\item[$\rhd$] $\alpha_1,\alpha_2 \not\in L^1((a,b])$.
\end{itemize}
Note that $\alpha_1$ and $\alpha_2$ belong to $L_{loc}^1((a,b])$ since $g$ does, and that such a choice is possible because one can always take $\alpha_1=\alpha_2=\frac g2$. \\
From the construction it is clear that for a Hamiltonian $H$ with $\frac{d}{dt} [\log \omega_i(t)]=\alpha_i(t)$, $i=1,2$, there is $c \in \bb R$ such that
\[
\omega_i(t)=\exp \Big(c-\int_t^b \alpha_i(s) \mkern4mu\mathrm{d} s \Big), \quad \quad i=1,2.
\]
\end{proof}
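\noindent A concrete admissible choice of $g$ (an illustration added here, not from the source) when $a$ is finite is
\[
g(t)=\Delta(f)(t)+\frac{1}{t-a},
\]
which is positive, dominates $\Delta(f)$, and belongs to $L_{loc}^1((a,b])$ but not to $L^1((a,b])$, since $\int_a^b \frac{\mkern4mu\mathrm{d} t}{t-a}=\infty$; with this $g$ one may simply take $\alpha_1=\alpha_2=\frac g2$.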
\section{Calculations for \Cref{A24}}
\label{APPB}
Let $H$ be the Hamiltonian from \Cref{A24},
\begin{align*}
H(t)=
t^{\alpha -1}\left(\begin{matrix}
|\log t|^{\beta_1} & |\log t|^{\beta_3} \\
|\log t|^{\beta_3} & |\log t|^{\beta_2} \\
\end{matrix} \right), \quad \quad t \in (0,\infty),
\end{align*}
where $\alpha > 0$, $\beta_1, \beta_2 \in \bb R$ such that $\beta_1 \neq \beta_2$, and $\beta_3 := \frac{\beta_1 + \beta_2}{2}$. We carry out the calculations to justify the claimed asymptotics from the example. They were communicated by Matthias Langer. \\
In order to calculate $\mathring t(r)$ and $\hat t(r)$, two lemmas are needed.
\begin{lemma}
\label{approx_inv}
Let $f: (0,\varepsilon) \to (0,\infty)$ be increasing and $f(t) \sim ct^a |\log t|^b$ as $t \to 0$, for $a>0$, $c>0$, $b \in \bb R$. Then
\[
f^{-1}(s) \sim \Big(c^{-1}a^{b}s|\log s|^{-b} \Big)^{\frac 1a}, \quad s \to 0.
\]
\end{lemma}
\begin{proof}
We have
\begin{align}
\label{sim}
\lim_{t \to 0} \,\frac{f(t)}{t^a|\log t|^b} = c.
\end{align}
Therefore,
\[
\lim_{t \to 0} \Big[\log f(t)-a\log t-b \log |\log t| \Big]= \log c
\]
and further
\[
\lim_{t \to 0} \Big[\frac{\log f(t)}{\log t} - a \Big]= \lim_{t \to 0} \Big[\frac{\log f(t)}{\log t} - a - b \frac{\log |\log t|}{\log t} \Big]= 0.
\]
In other words, $|\log t| \sim \frac 1a |\log f(t)|$. At the same time, by (\ref{sim}),
\[
t \sim \Big(c^{-1} f(t) |\log t|^{-b} \Big)^{\frac 1a} \sim \Big(c^{-1} f(t) [\frac 1a |\log f(t)|]^{-b} \Big)^{\frac 1a}
\]
which implies the assertion.
\end{proof}
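\noindent To illustrate the lemma (an example added for clarity, not from the source), take $f(t)=t^2(-\log t)$, i.e., $a=2$, $b=1$, $c=1$. With $t_s:=\big(\frac{2s}{|\log s|}\big)^{\frac 12}$ we have $-\log t_s \sim \frac 12 |\log s|$ as $s \to 0$, and hence
\[
f(t_s)=t_s^2\,(-\log t_s) \sim \frac{2s}{|\log s|} \cdot \frac{|\log s|}{2}=s,
\]
which confirms $f^{-1}(s) \sim \big(\frac{2s}{|\log s|}\big)^{\frac 12}$.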
\begin{lemma}
\label{int_asy}
Let $a>-1$ and $b \in \bb R$. Then
\begin{align*}
\int_0^t &s^a (-\log s)^b \mkern4mu\mathrm{d} s = \frac{1}{a+1}t^{a+1} (-\log t)^b \\
&\cdot \Big[1+\frac{b}{a+1}(-\log t)^{-1} + \frac{b(b-1)}{(a+1)^2}(-\log t)^{-2}+{\rm O} \big((-\log t)^{-3} \big) \Big].
\end{align*}
\end{lemma}
\begin{proof}
\begin{align}
&\int_0^t s^a (-\log s)^b \mkern4mu\mathrm{d} s=\frac{1}{a+1}s^{a+1} (-\log s)^b \Big|_0^t + \frac{1}{a+1} \int_0^t s^a b (-\log s)^{b-1} \mkern4mu\mathrm{d} s \nonumber \\
\label{part_int_sa_logs_b}
&= \frac{1}{a+1}t^{a+1} (-\log t)^b + \frac{b}{a+1} \int_0^t s^a (-\log s)^{b-1} \mkern4mu\mathrm{d} s.
\end{align}
Using (\ref{part_int_sa_logs_b}) two more times:
\begin{align*}
&= \frac{1}{a+1}t^{a+1} (-\log t)^b \\
&\quad+ \frac{b}{a+1} \Big[\frac{1}{a+1}t^{a+1} (-\log t)^{b-1} + \frac{b-1}{a+1}\int_0^t s^a (-\log s)^{b-2} \mkern4mu\mathrm{d} s \Big] \\
&=\frac{1}{a+1}t^{a+1} (-\log t)^b \Big[1+\frac{b}{a+1}(-\log t)^{-1}+\frac{b(b-1)}{(a+1)^2}(-\log t)^{-2}\Big] \\
&\quad+c(a,b)\int_0^t s^a (-\log s)^{b-3} \mkern4mu\mathrm{d} s.
\end{align*}
The assertion follows using Karamata's Theorem \cite[Prop. 1.5.8 and 1.5.9a]{bingham.goldie.teugels:1989}.
\end{proof}
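\noindent As a quick consistency check (added here, not part of the original argument), for $b=1$ integration by parts yields the exact identity
\[
\int_0^t s^a (-\log s) \mkern4mu\mathrm{d} s=\frac{1}{a+1}t^{a+1} \Big[(-\log t)+\frac{1}{a+1}\Big],
\]
which exhibits the first-order correction term $\frac{1}{a+1}(-\log t)^{-1}$ explicitly.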
We are now in a position to determine $L(r)$, $\IM q_H(ir)$, and $A(r)$.
\newline
\noindent \underline{Calculation of $\mathring t(r)$:}
By Karamata's Theorem we have
\[
\omega_i(t) \sim \frac{1}{\alpha}t^{\alpha} (-\log t)^{\beta_i}, \quad \quad i=1,2,3.
\]
Hence
\begin{align}
\label{m1m2}
(\omega_1\omega_2)(t) \sim \frac{1}{\alpha ^2}t^{2\alpha} (-\log t)^{\beta_1+\beta_2}.
\end{align}
Applying \Cref{approx_inv} yields
\[
(\omega_1\omega_2)^{-1}(s) \sim c \cdot s^{\frac{1}{2\alpha}} (-\log s)^{-\frac{\beta_3}{\alpha}}.
\]
We arrive at
\begin{align}
\label{tring}
\mathring t(r)=(\omega_1\omega_2)^{-1}(r^{-2}) \sim c' \cdot r^{-\frac{1}{\alpha}} (\log r)^{-\frac{\beta_3}{\alpha}}.
\end{align}
\noindent \underline{Calculation of $A(r)$:}
\begin{align*}
A(r)&= \sqrt{\frac{\omega_1(\mathring t(r))}{\omega_2(\mathring t(r))}} \sim (-\log \mathring t(r))^{\frac{\beta_1-\beta_2}{2}} \sim \Big[-\log \Big(r^{-\frac{1}{\alpha}} (\log r)^{-\frac{\beta_3}{\alpha}} \Big) \Big]^{\frac{\beta_1-\beta_2}{2}} \\
&\sim \Big(\frac{1}{\alpha} \log r\Big)^{\frac{\beta_1-\beta_2}{2}}.
\end{align*}
\noindent \underline{Calculation of $\hat t(r)$:} We use \Cref{int_asy} to calculate $\omega_i(t)$ with more precision:
\begin{align*}
\omega_i(t)=&\int_0^t s^{\alpha-1} (-\log s)^{\beta_i} \mkern4mu\mathrm{d} s = \frac{1}{\alpha}t^{\alpha} (-\log t)^{\beta_i} \\
&\cdot \Big[1+\frac{\beta_i}{\alpha}(-\log t)^{-1}+\frac{\beta_i (\beta_i -1)}{\alpha ^2}(-\log t)^{-2}+{\rm O} \big((-\log t)^{-3} \big) \Big].
\end{align*}
We get
\begin{align}
&\det \Omega (t)=\frac{1}{\alpha^2}t^{2\alpha} (-\log t)^{\beta_1+\beta_2} \nonumber\\
&\cdot \Big[1+\frac{\beta_1+\beta_2}{\alpha}(-\log t)^{-1}+\frac{\beta_1 (\beta_1-1)+\beta_2 (\beta_2-1) + \beta_1 \beta_2}{\alpha^2}(-\log t)^{-2} \nonumber\\
&-\Big(1+\frac{2\beta_3}{\alpha}(-\log t)^{-1}+\frac{2\beta_3 (\beta_3-1) + \beta_3^2}{\alpha^2}(-\log t)^{-2}+{\rm O} \big((-\log t)^{-3} \big) \Big)\Big] \nonumber\\
&=\frac{1}{\alpha^2}t^{2\alpha} (-\log t)^{\beta_1+\beta_2} \Big[\Big(\frac{\beta_1-\beta_2}{2\alpha} \Big)^2 (-\log t)^{-2} + {\rm O} \big((-\log t)^{-3} \big)\Big] \nonumber\\
\label{detm}
&\sim c \cdot t^{2\alpha} (-\log t)^{2(\beta_3-1)}.
\end{align}
By \Cref{approx_inv},
\[
\big(\det \Omega \big)^{-1}(s) \sim c' \cdot s^{\frac{1}{2\alpha}} (-\log s)^{-\frac{\beta_3-1}{\alpha}}
\]
and further
\[
\hat t(r) = \big(\det \Omega \big)^{-1}(r^{-2}) \sim c'' \cdot r^{-\frac{1}{\alpha}} (\log r)^{-\frac{\beta_3-1}{\alpha}}.
\]
\noindent \underline{Calculation of $\IM q_H(ir)$:}
\begin{align*}
\IM q_H(ir)&\asymp \frac{1}{r\omega_2(\hat t(r))} \sim \frac{\alpha}{r} \big(\hat t(r) \big)^{-\alpha} \big(-\log \hat t(r) \big)^{-\beta_2} \\
&\sim c''' \cdot \frac{\alpha}{r} r (\log r)^{\beta_3-1} (\log r)^{-\beta_2} = c''' \cdot (\log r)^{\frac{\beta_1-\beta_2}{2}-1}.
\end{align*}
\noindent \underline{Calculation of $L(r)$:} Using (\ref{detm}), (\ref{m1m2}) and (\ref{tring}), we have
\[
\frac{\det \Omega (\mathring t(r))}{(\omega_1\omega_2)(\mathring t(r))} \sim \Big(\frac{\beta_1-\beta_2}{2\alpha} \Big)^2 \Big(\frac{1}{\alpha} \log r\Big)^{-2}.
\]
Multiplying by $A(r)$, we obtain
\[
L(r) \sim c (\log r)^{\frac{\beta_1-\beta_2}{2}-2}.
\]
\subsection*{Acknowledgements.}
\noindent This work was supported by the Austrian Science Fund and the Russian Foundation for Basic Research (grant number I-4600). Furthermore, I would like to thank my supervisor Harald Woracek for his support and the expertise he provided.
\printbibliography
\end{document}
\begin{document}
\title[Theta Hypergeometric Series]
{Theta Hypergeometric Series}
\author{V.P. Spiridonov}
\address{Bogoliubov Laboratory of Theoretical Physics, Joint
Institute for Nuclear Research, Dubna, Moscow Region 141980,
Russia. }
\thanks{
Work supported in part by the Russian Foundation for Basic
Research (RFBR) grant no. 00-01-00299.
\\ \indent Published in:
Proceedings of the NATO ASI {\em Asymptotic Combinatorics with
Applications to Mathematical Physics} (St. Petersburg, July 9--22, 2001),
Eds. V. Malyshev and A. Vershik, Kluwer, Dordrecht, 2002, pp. 307--327.
}
\begin{abstract}
We formulate general principles of building hypergeometric type
series from the Jacobi theta functions that generalize the plain
and basic hypergeometric series. Single and
multivariable elliptic hypergeometric series are considered
in detail. A characterization theorem for a single variable
totally elliptic hypergeometric series is proved.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
This note is a mostly conceptual work reflecting partially the content
of a lecture on special functions of hypergeometric type associated with
elliptic beta integrals presented by the author at the NATO Advanced Study
Institute ``Asymptotic Combinatorics with Applications to Mathematical
Physics" (St. Petersburg, July 9--22, 2001). It precedes a
forthcoming extended and technically more elaborate review
\cite{spi:memoir}. Here we describe general principles of building
hypergeometric type series associated with the Jacobi theta functions
\cite{mum:tata}. Some other essential results of \cite{spi:memoir}
were briefly presented in \cite{spi:elliptic,spi:special}.
We discuss only single variable and multivariable series built out of
the Jacobi theta functions though some generalizations based upon the
multidimensional Riemann theta functions are possible.
Moreover, the main attention will be paid to such series obeying certain
ellipticity conditions, i.e. to elliptic hypergeometric series.
We start from a description of the Jacobi theta functions properties
\cite{abr-ste:handbook,whi-wat:course}.
Let us take two complex variables $p$ and $q$ lying inside the
unit disk, i.e. $|p|, |q|<1$. The {\em modular parameters}
$\sigma,\, \text{Im} (\sigma ) > 0,$ and $\tau,\, \text{Im} (\tau)>0,$
are introduced through an exponential representation
\begin{equation}\label{mod-par}
p=e^{2\pi i\tau }, \qquad q=e^{2\pi i\sigma}.
\end{equation}
Define the $p$-shifted factorials \cite{gas-rah:basic}
$$
(a;p)_\infty=\prod_{n=0}^\infty(1-ap^n),\qquad
(a;p)_s=\frac{(a;p)_\infty}{(ap^s;p)_\infty}.
$$
For a positive integer $n$ one has
$$ (a;p)_n=(1-a)(1-ap)\cdots(1-ap^{n-1}) $$
and
$$(a;p)_{-n}=\frac{1}{(ap^{-n};p)_n}. $$
It is convenient to use the following shorthand notations
$$
(a_1, \dots, a_k;p)_\infty \equiv (a_1;p)_\infty\cdots(a_k;p)_\infty.
$$
Let us introduce a Jacobi-type theta function
\begin{equation}\label{theta}
\theta(z;p)=(z,pz^{-1};p)_\infty.
\end{equation}
It obeys the following simple transformation properties
\begin{equation}\label{fun-rel}
\theta(pz;p)=\theta(z^{-1};p)=-z^{-1}\theta(z;p).
\end{equation}
One has also $\theta(p^{-1}z;p)=-p^{-1}z\theta(z;p)$.
Evidently, $\theta(z;p)=0$ for $z=p^{-M},\, M\in\mathbb{Z}$,
and $\theta(z;0)=1-z$.
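For instance, the first equality in (\ref{fun-rel}) can be verified directly from the product representation (\ref{theta}):
\begin{align*}
\theta(pz;p)&=(pz;p)_\infty (z^{-1};p)_\infty=(1-z^{-1})\, (pz;p)_\infty (pz^{-1};p)_\infty \\
&=\frac{1-z^{-1}}{1-z}\, \theta(z;p)=-z^{-1}\theta(z;p).
\end{align*}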
The standard Jacobi's $\theta_1$-function \cite{whi-wat:course}
is expressed through $\theta(z;p)$ as follows
\begin{eqnarray} \nonumber
\theta_1(u;\sigma,\tau) &=&
-i\sum_{n=-\infty}^\infty (-1)^np^{(2n+1)^2/8}q^{(n+1/2)u}
\\ \nonumber
&=& 2\sum_{n=0}^\infty(-1)^n e^{\pi i\tau(n+1/2)^2}\sin\pi(2n+1)\sigma u
\\ \nonumber
&=& 2p^{1/8}\sin\pi \sigma u\:(p,pe^{2\pi i\sigma u},
pe^{-2\pi i\sigma u};p)_\infty \\
&=& p^{1/8} iq^{-u/2}\: (p;p)_\infty\: \theta(q^{u};p), \quad
u\in\mathbb{C}.
\label{theta1}\end{eqnarray}
We have artificially introduced the second modular parameter $\sigma$
into the definition of the $\theta_1$-function---the variable
$u$ will often take integer values and it is convenient to
make an appropriate rescaling from the very beginning.
Note that other Jacobi theta functions $\theta_{2,3,4}(u)$
can be obtained from $\theta_1(u)$ by a simple shift of the
variable $u$ \cite{whi-wat:course}, i.e. these functions
structurally do not differ much from $\theta_1(u)$.
In the following considerations we shall employ
convenient notations, replacing the $\theta_1$-symbol by
the elliptic numbers $[u]$ used in \cite{djkmo:exactly}:
\begin{eqnarray*}
&& [u] \equiv \theta_1(u) \qquad \text{or} \qquad
[u;\sigma,\tau]\equiv \theta_1(u;\sigma,\tau), \\
&& \makebox[4em]{}
[u_0,\ldots,u_k]\equiv \prod_{m=0}^k[u_m].
\end{eqnarray*}
Dependence on $\sigma$ and $\tau$ will be indicated explicitly only if
it is necessary. The function $[u]$ is entire,
odd $[-u]=-[u]$, and doubly quasiperiodic
\begin{eqnarray}\nonumber
&& [u+\sigma^{-1}] = -[u], \\
&& [u+\tau\sigma^{-1}] = -e^{-\pi i\tau-2\pi i\sigma u}[u].
\label{quasi}\end{eqnarray}
It is well-known that the theta function $[u]$ can be derived
uniquely (up to a constant factor) from the transformation properties
(\ref{quasi}) and the demand of entireness.
Modular transformations are described by the following $SL(2,\mathbb{Z})$
group action upon the modular parameters $\sigma$ and $\tau$
\begin{equation}\label{sl2}
\tau\to \frac{a\tau+b}{c\tau+d},\qquad \sigma\to \frac{\sigma}{c\tau+d},
\end{equation}
where $a,b,c,d \in\mathbb{Z}$ and $ad-bc=1$. This group is
generated by two simple transformations: $\tau\to\tau+1$,
$\sigma\to\sigma,$ and $\tau\to -\tau^{-1},$ $\sigma\to\sigma\tau^{-1}$.
In these two cases one has
\begin{eqnarray}\label{modular-tr1}
&& [u;\sigma,\tau+1] = e^{\pi i/4} [u;\sigma,\tau],\qquad
\\ \label{modular-tr2}
&& [u;\sigma/{\tau},-1/\tau]
= i(-i\tau)^{1/2}e^{\pi i\sigma^2u^2/\tau} [u;\sigma,\tau],
\end{eqnarray}
where the square root sign of $(-i\tau)^{1/2}$ is fixed from the
condition that the real part of this expression is positive.
\section{Theta hypergeometric series $_rE_s$ and $_rG_s$}
Now we are going to introduce formal power series $_rE_s$ and $_rG_s$
built out from the Jacobi theta functions (we will not consider their
convergence properties here). They generalize the
plain hypergeometric series $_rF_s$ and basic hypergeometric series
$_r\Phi_s$ together with their bilateral partners $_rH_s$ and $_r\Psi_s$.
The definitions given below follow the general
spirit of qualitative (but constructive) definitions of the
plain and basic hypergeometric series going back to Pochhammer and
Horn \cite{aar:special,gas-rah:basic,ggr:general}. In order to
define theta hypergeometric series we use as a key property
the quasiperiodicity of Jacobi theta functions (\ref{quasi}).
\begin{definition}
The series $\sum_{n\in\mathbb{N}} c_n$ and $\sum_{n\in\mathbb{Z}} c_n$
are called {\em theta hypergeometric series of elliptic type}
if the function $h(n)=c_{n+1}/{c_n}$ is a meromorphic doubly quasiperiodic
function of $n$ considered as a complex variable. More precisely, for
$x\in \mathbb{C}$ the function $h(x)$ should obey the following properties:
\begin{equation}\label{h(n)-period}
h(x+\sigma^{-1})=ah(x),\qquad
h(x+\tau\sigma^{-1})=be^{2\pi i\sigma \gamma x}h(x),
\end{equation}
where $\sigma^{-1}, \tau\sigma^{-1}$ are quasiperiods of the theta function
$[u]$ (\ref{quasi}) and $a, b, \gamma$ are some complex numbers.
\end{definition}
\begin{theorem}
Let a meromorphic function $h(n)$ satisfy the properties (\ref{h(n)-period}).
Then it has the following general form in terms of the Jacobi theta functions
\begin{equation}
h(n)=\frac{[n+u_1,\ldots, n+u_r]}{[n+v_1,\ldots, n+v_s]}\, q^{\beta n} y,
\label{h(n)-quasi}\end{equation}
where $r, s$ are arbitrary non-negative integers and
$u_1,\ldots, u_r, v_1,\ldots,v_s, \beta, y$ are arbitrary complex
parameters restricted by the condition of non-singularity of $h(n)$
and related to the quasiperiodicity multipliers $a, b, \gamma$ as follows:
\begin{eqnarray}\nonumber
&& a = (-1)^{r-s} e^{2\pi i\beta}, \qquad \gamma= s-r, \\
&& b=(-1)^{r-s}e^{\pi i\tau (s-r+2\beta)}
e^{2\pi i\sigma(\sum_{m=1}^sv_m-\sum_{m=1}^ru_m)}.
\label{multipliers}\end{eqnarray}
\end{theorem}
\begin{proof}
Let us tile the complex plane of $n$ by parallelograms whose edges
are formed by the theta function quasiperiods $\sigma^{-1}$ and $\tau\sigma^{-1}$.
Because the multipliers in the quasiperiodicity conditions (\ref{h(n)-period})
are entire functions of $n$ without zeros, the meromorphic function
$h(n)$ has the same finite number of zeros and poles in each
parallelogram on the plane.
Let us denote as $-u_1,\ldots, -u_r$ the zeros of $h(n)$ in one
of such parallelograms and as $-v_1,\ldots, -v_s$ its poles.
For simplicity we assume that these zeros and poles are simple---
for creating multiple zeros or poles it is sufficient to set some
of the numbers $u_m$ or $v_m$ to be equal to each other.
Let us represent the ratio of theta hypergeometric series
coefficients as follows: $c_{n+1}/c_n=h(n)g(n)$, where $h(n)$
has the form (\ref{h(n)-quasi}) with some unfixed parameter $\beta$.
Since all the zeros and poles of $c_{n+1}/c_n$ are sitting in
$h(n)$, the function $g(n)$ must be an entire function without
zeros satisfying the constraints $g(n+\sigma^{-1})=a'g(n),$
$g(n+\tau\sigma^{-1})=b'e^{2\pi i\sigma \gamma' n}g(n)$ for some
complex numbers $a', b', \gamma'$. However, the only function satisfying
such demands is the exponential $q^{\beta' n}$, where $\beta'$
is a free parameter. Since a factor of such type is already present in
(\ref{h(n)-quasi}) we may set $g(n)=1$ and this proves that the
most general $c_{n+1}/c_n$ has the form (\ref{h(n)-quasi}) (note
that $y$ is just an arbitrary proportionality constant). Direct
application of the properties (\ref{quasi}) yields the connection
between multipliers $a, b, \gamma$ and the parameters $u_m,\,
m=1,\ldots,r,$ $v_k,\, k=1,\ldots,s,$ $\beta$ as stated in
(\ref{multipliers}).
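Explicitly, for the first multiplier one shifts $x$ by $\sigma^{-1}$ in (\ref{h(n)-quasi}) and uses $[u+\sigma^{-1}]=-[u]$ together with $q^{\beta \sigma^{-1}}=e^{2\pi i\beta}$:
\[
h(x+\sigma^{-1})=\frac{(-1)^r [x+u_1,\ldots ,x+u_r]}{(-1)^s [x+v_1,\ldots ,x+v_s]}\, q^{\beta (x+\sigma^{-1})} y=(-1)^{r-s} e^{2\pi i\beta} h(x),
\]
whence $a=(-1)^{r-s}e^{2\pi i\beta}$; the expressions for $b$ and $\gamma$ follow in the same manner from the second relation in (\ref{quasi}).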
\end{proof}
Resolving the first order recurrence relation for the series coefficients
$c_n$ and normalizing $c_0=1$ we get the following explicit ``additive"
expression for the general theta hypergeometric series of elliptic type
\begin{equation}
\sum_{n\in\mathbb{N}\, \text{or}\, \mathbb{Z}}
\frac{[u_1,\ldots,u_{r}]_n}
{[v_1,\ldots,v_s]_n}\,q^{\beta n(n-1)/2} y^n,
\label{_rE_s-1}\end{equation}
where we used the elliptic shifted factorials defined
for $n\in\mathbb{N}$ as follows:
\begin{eqnarray}
&& [u_1,\ldots,u_k]_{\pm n}=\prod_{m=1}^k[u_m]_{\pm n},
\nonumber \\
&& [u]_n=[u][u+1]\cdots[u+n-1],\qquad [u]_{-n}=\frac{1}{[u-n]_n}.
\label{ell-shift}\end{eqnarray}
In order to simplify the trigonometric degeneration limit
$\text{Im}(\tau)\to+\infty$ (or $p\to 0$) in the series (\ref{_rE_s-1})
we renormalize $y$ and introduce as a main series argument another variable $z$:
$$
y \equiv \big(ip^{1/8}(p;p)_\infty\big)^{s-r}q^{(u_1+\ldots+u_r-v_1-\ldots-v_s)/2}z.
$$
Let us replace the parameter $\beta$ by another parameter $\alpha$ as well
through the relation $\beta\equiv \alpha + (r-s)/2$.
Then, we can rewrite the function (\ref{h(n)-quasi}) in the following
``multiplicative" form using the functions $\theta(tq^n;p)$:
\begin{equation}
h(n)=\frac{\theta(t_1q^n,\ldots,t_{r}q^n;p)}
{\theta(w_1q^n,\ldots, w_{s}q^n;p)}\, q^{\alpha n}z,
\label{h(n)-2}\end{equation}
where $t_m=q^{u_m},\, m=1,\ldots, r,$ $w_k=q^{v_k},\, k=1,\ldots, s,$ and
the following shorthand notations are employed:
$$
\theta(t_1,\ldots,t_k;p)=\prod_{m=1}^k\theta(t_m;p).
$$
Now we are in a position to introduce the unilateral theta hypergeometric
series $_rE_s$. In its definition we follow the standard plain and basic
hypergeometric series conventions. Namely, in the expression (\ref{h(n)-2})
we replace $s$ by $s+1$ and set $u_r\equiv u_0$ and $v_{s+1}=1$. This does
not restrict generality of consideration since one can remove such a
constraint by fixing one of the numerator parameters $u_m$ to be equal to 1.
We then define
\begin{eqnarray}\nonumber
\lefteqn{ _rE_s\left({t_0,\ldots, t_{r-1}\atop w_1,\ldots,w_s};q,p;\alpha,
z\right) } && \\ && \makebox[4em]{}
= \sum_{n=0}^\infty \frac{\theta(t_0,t_1,\ldots,t_{r-1};p;q)_n}
{\theta(q,w_1,\ldots,w_s;p;q)_n}\, q^{\alpha n(n-1)/2} z^n,
\label{_rE_s-2}\end{eqnarray}
where we have introduced new notations for the elliptic shifted factorials
$$
\theta(t;p;q)_n=\prod_{m=0}^{n-1}\theta(tq^m;p)
$$
and
$$
\theta(t_0,\ldots,t_k;p;q)_n=\prod_{m=0}^k\theta(t_m;p;q)_n.
$$
We draw attention to the particular ordering of $q,p$
used in the notations for theta hypergeometric series (in the
previous papers we were ordering $q$ after $p$ which does not match
with the ordering in terms of modular parameters $\sigma, \tau$
in $[u;\sigma,\tau]$).
An important fact is that theta hypergeometric series do not admit confluence
limits. Indeed, because of
the quasiperiodicity of theta functions the limits of parameters
$t_m, w_m\to 0$ or $t_m, w_m \to\infty$ are not well defined, and it is
therefore impossible to pass in this way from the $_rE_s$-series to similar
series with smaller values of the indices $r$ and $s$.
For the bilateral theta hypergeometric series we introduce
different notations:
\begin{eqnarray}\nonumber
\lefteqn{ _rG_s\left({t_1,\ldots, t_{r}\atop w_1,\ldots,w_{s}};q,p;\alpha, z
\right) } && \\ && \makebox[4em]{}
=\sum_{n=-\infty}^\infty\frac{\theta(t_1,\ldots,t_{r};p;q)_n}
{\theta(w_1,\ldots,w_{s};p;q)_n}\, q^{\alpha n(n-1)/2} z^n.
\label{_rG_s-2}\end{eqnarray}
This expression follows directly from (\ref{h(n)-2})
without any changes. The elliptic shifted factorials for negative
indices are defined in the following way:
$$
\theta(t;p;q)_{-n}=\frac{1}{\theta(tq^{-n};p;q)_n},\quad n\in\mathbb{N}.
$$
Due to the property $1/\theta(q;p;q)_{-n}=0$ (or $1/[1]_{-n}=0$) for $n>0$,
the choice $w_{s+1}=q$ (or $v_{s+1}=1$) in the $_rG_{s+1}$ series leads
to its termination from one side. After denoting $t_r\equiv t_0$ (or
$u_{r}\equiv u_0$) one recovers in this way the general $_rE_s$-series.
Since the bilateral
series are more general than the unilateral ones, it is sufficient to
prove key properties of theta hypergeometric series in the
bilateral case without further specification to the unilateral one.
Consider the limit $\text{Im}(\tau)\to+\infty$ or $p\to 0$.
In a straightforward manner one gets
\begin{eqnarray}\nonumber
\lefteqn{ \lim_{p\to 0} {_rE_s} =
{_r\Phi_s}\left({t_0,t_1,\ldots, t_{r-1}
\atop w_1,\ldots,w_s }; q;\alpha, z\right) } && \\ && \makebox[3em]{}
=\sum_{n=0}^\infty \frac{(t_0,t_1,\ldots,t_{r-1};q)_n}
{(q,w_1,\ldots,w_s;q)_n}\, q^{\alpha n(n-1)/2} z^n.
\label{Phi-2}\end{eqnarray}
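This reduction is immediate from the standard product representation
$\theta(t;p)=(t;p)_\infty\,(pt^{-1};p)_\infty$ with
$(t;p)_\infty=\prod_{k=0}^\infty(1-tp^k)$:

```latex
% as p -> 0 every p-dependent factor in the products tends to 1:
$$
\lim_{p\to 0}\theta(t;p)=1-t,
\qquad
\lim_{p\to 0}\theta(t;p;q)_n=\prod_{m=0}^{n-1}(1-tq^m)=(t;q)_n,
$$
```

so that each elliptic shifted factorial in (\ref{_rE_s-2}) degenerates to
the corresponding $q$-shifted factorial.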
This basic hypergeometric series differs from the standard one by
the presence of an additional parameter $\alpha$. The definition of the
$_r\Phi_s$ series suggested in \cite{sla:generalized} uses $\alpha=0$.
The definition given in \cite{gas-rah:basic} looks as follows
\begin{equation}
{_r\Phi_s}=\sum_{n=0}^\infty \frac{(t_0,t_1,\ldots,
t_{r-1};q)_n}{(q,w_1,\ldots,w_s;q)_n}
\left((-1)^nq^{n(n-1)/2}\right)^{s+1-r}\, z^n,
\label{Phi-3}\end{equation}
which matches with (\ref{Phi-2}) for $\alpha=s+1-r$ after the
replacement of $z$ by $(-1)^{s+1-r}z$. Actually, the $\alpha=0$
and $\alpha=s+1-r$ choices are related to each other through the
inversion transformation $q\to q^{-1}$ with the subsequent
redefinition of parameters $t_m, w_m, z$.
One of the characterizations of the basic hypergeometric series $\sum_nc_n$
is the requirement that $c_{n+1}/c_n$ be a general rational function
of $q^n$, which is satisfied by (\ref{Phi-2}) only for integer $\alpha$.
In order to recover the standard $q$-hypergeometric series in the limit
$p\to 0$ we fix $\alpha=0$. It is not clear at the moment whether this choice
is the most ``natural'' one or whether it does not play a fundamental
role---this question can be answered only after the discovery of good
applications of the series $_rE_s$ in pure mathematical or mathematical
physics problems.
From the point of view of elliptic beta integrals \cite{spi:elliptic,spi:memoir}
this is the most natural choice indeed.
In the bilateral case we fix $\alpha=0$ as well, so that in the $p\to 0$ limit
the $_rG_s$ series are reduced to the general $_r\Psi_s$-series:
\begin{eqnarray}\nonumber
\lefteqn{{_r\Psi_s}\left({t_1,\ldots, t_{r}
\atop w_1,\ldots,w_s }; q;z\right) } && \\ && \makebox[3em]{}
=\sum_{n=-\infty}^\infty \frac{(t_1,\ldots,t_{r};q)_n}
{(w_1,\ldots,w_s;q)_n}\, z^n.
\label{Psi}\end{eqnarray}
\begin{definition}
The series $_{r+1}E_r$ and $_{r}G_{r}$ are called {\em balanced} if
their parameters satisfy the constraints, in the additive form,
\begin{equation}
u_0+\ldots+u_{r}=1+v_1+\ldots+v_r
\label{balance1}\end{equation}
and
\begin{equation}
u_1+\ldots+u_{r}=v_1+\ldots+v_r
\label{balance-g}\end{equation}
respectively.
In the multiplicative form these restrictions read
$\prod_{m=0}^r t_m=q\prod_{k=1}^rw_k$ and
$\prod_{m=1}^r t_m=\prod_{k=1}^rw_k$, respectively.
\end{definition}
\begin{remark}
In the limit $p\to 0$ the series $_{r+1}E_r$ goes to $_{r+1}\Phi_r$
provided the parameters $u_m$ (or $t_m$), $m=0,\ldots,r,$ and
$v_k$ (or $w_k$), $k=1,\ldots,r,$ {\em remain fixed}.
Then our condition of balancing does not
coincide with the one given in \cite{gas-rah:basic}, where $_{r+1}\Phi_r$
is called balanced provided $q\prod_{m=0}^r t_m=\prod_{k=1}^rw_k$
(simultaneously one usually assumes also that $z=q$, but we drop this
requirement). This discrepancy in the definitions will be resolved
after imposing some additional constraints upon the series parameters
(see the very-well-poisedness condition below).
\end{remark}
\section{Elliptic hypergeometric series}
From the author's point of view the following definition plays a
fundamental role in the whole theory of hypergeometric type series,
since it explains the origins of some known peculiarities of the plain
and basic hypergeometric series.
\begin{definition}
The series $\sum_{n\in\mathbb{N}} c_n$ and $\sum_{n\in\mathbb{Z}} c_n$
are called {\em elliptic hypergeometric series}
if $h(n)=c_{n+1}/{c_n}$ is an elliptic function of the argument $n$
considered as a complex variable, i.e. $h(x)$ is a meromorphic
doubly periodic function of $x\in\mathbb{C}$.
\end{definition}
\begin{theorem}
Let $\sigma^{-1}$ and $\tau\sigma^{-1}$ be two periods of the
elliptic function $h(x)$, i.e. $h(x+\sigma^{-1})=h(x)$
and $h(x+\tau\sigma^{-1})=h(x)$. Let $r+1$ be the order of the elliptic
function $h(x)$, i.e. the number of its poles (or zeros), counted with
multiplicity, in the parallelogram of periods. Then the unilateral (or bilateral) elliptic
hypergeometric series coincides with the balanced theta hypergeometric
series $_{r+1}E_r$ (or $_{r+1}G_{r+1}$).
\end{theorem}
\begin{proof}
It is well known that any elliptic function $h(x),\, x\in\mathbb{C}$,
of the order $r+1$ with the periods $\sigma^{-1}$ and $\tau\sigma^{-1}$
can be written as a ratio of $\theta_1$-functions as follows
\cite{whi-wat:course}:
\begin{equation}
h(x)=z\prod_{m=0}^r\frac{[x+\alpha_m;\sigma,\tau]}{[x+\beta_m;\sigma,\tau]},
\label{factorization}\end{equation}
where the zeros $\alpha_0,\ldots,\alpha_r$ and the poles
$\beta_0,\ldots,\beta_r$ satisfy the following constraint:
\begin{equation}\label{balance2}
\sum_{m=0}^r\alpha_m=\sum_{m=0}^r\beta_m.
\end{equation}
Now the identification of the unilateral elliptic hypergeometric
series with the balanced $_{r+1}E_r$-series is evident. One just has
to shift $x\to x-\beta_0+1$, set $x\in \mathbb{N}$, denote
$u_m=\alpha_m-\beta_0+1,\, v_m=\beta_m-\beta_0+1$, and resolve
the recurrence relation $c_{n+1}=h(n)c_n$. After this, the condition
(\ref{balance2}) becomes the balancing condition for the $_{r+1}E_r$
series. A similar situation evidently takes place in the case of the
bilateral series $_{r+1}G_{r+1}$, where $\alpha_m$ and $\beta_m$ simply
coincide with $u_m$ and $v_m$, respectively.
Note that because of the balancing condition (\ref{balance2})
the function (\ref{factorization}) can be rewritten as a simple
ratio of $\theta(t;p)$-functions:
$$
h(x)=z\prod_{m=0}^r\frac{\theta(t_mq^x;p)}{\theta(w_mq^x;p)},
$$
where $t_m=q^{\alpha_m}$ and $w_m=q^{\beta_m}$.
\end{proof}
\begin{definition}
Theta hypergeometric series of elliptic type are called modular
hypergeometric series if they are invariant with respect to the
$SL(2,\mathbb{Z})$ group action (\ref{sl2}).
\end{definition}
Let us consider which constraints must be imposed upon the parameters
of the $_rE_s$ and $_rG_s$ series in order to obtain modular
hypergeometric series. Evidently, it is sufficient to establish modularity
of the function $h(n)=c_{n+1}/c_n$. From its explicit form (\ref{h(n)-quasi})
and the transformation laws (\ref{modular-tr1}) and (\ref{modular-tr2})
it is easy to see that in the unilateral case one must have
$$
\sum_{m=0}^{r-1}(x+u_m)^2=(x+1)^2+\sum_{m=1}^s(x+v_m)^2,
$$
which is possible only if a) $s=r-1$, b) the parameters satisfy the
balancing condition (\ref{balance1}), and c) the following constraint
is valid:
\begin{equation}
u_0^2+\ldots+u_{r-1}^2=1+v_1^2+\ldots+v_{r-1}^2.
\label{modular-E}\end{equation}
Under these conditions the $_{r}E_{r-1}$-series becomes modular invariant.
Let us note that modularity of theta hypergeometric series implies
their ellipticity. The converse is not true, but a stronger
demand of ellipticity, to be formulated below, automatically leads
to modular invariance. Modular hypergeometric series represent particular
examples of Jacobi modular functions in the sense of Eichler and Zagier
\cite{eic-zag:theory}.
In the bilateral case one must have $r=s$, the balancing condition
(\ref{balance-g}), and the constraint
\begin{equation}
u_1^2+\ldots+u_{r}^2=v_1^2+\ldots+v_r^2
\label{modular-G}\end{equation}
for the $_{r}G_{r}$-series to be modular invariant.
\begin{definition}
The theta hypergeometric series $_{r+1}E_r$ is called {\em well-poised}
if its parameters satisfy the following constraints
\begin{equation}\label{well-poised-1}
u_0+1=u_1+v_1=\ldots=u_{r}+v_r
\end{equation}
in the additive form or
\begin{equation}\label{well-poised-2}
qt_0=t_1w_1=\ldots=t_{r}w_r
\end{equation}
in the multiplicative form.
Similarly, the series $_{r}G_{r}$ is called well-poised if
$u_1+v_1=\ldots=u_r+v_r$ or $t_1w_1=\ldots=t_rw_r$.
\end{definition}
This definition of well-poised series matches the
one used in the theory of plain and basic hypergeometric series
\cite{gas-rah:basic}. Note that it does not imply the balancing
condition.
\begin{definition}
The series $_{r+1}E_r$ is called {\em very-well-poised}
if, in addition to the constraints (\ref{well-poised-1})
or (\ref{well-poised-2}), one imposes the restrictions
\begin{eqnarray}\nonumber
&& u_{r-3}=\frac{1}{2}u_0+1,\quad u_{r-2}=\frac{1}{2}u_0+1-\frac{1}{2\sigma}, \\
&& u_{r-1}=\frac{1}{2}u_0+1-\frac{\tau}{2\sigma},
\quad u_{r} = \frac{1}{2}u_0+1+\frac{1+\tau}{2\sigma},
\label{very-well-poised-1}\end{eqnarray}
or, in the multiplicative form,
\begin{eqnarray}\nonumber
&& t_{r-3}=t_0^{1/2}q,\quad t_{r-2}=-t_0^{1/2}q, \\
&& t_{r-1}=t_0^{1/2}qp^{-1/2}, \quad t_{r} =- t_0^{1/2}qp^{1/2}.
\label{very-well-poised-2}\end{eqnarray}
\end{definition}
Let us derive a simplified form of the very-well-poised series.
First, we notice that
$$
\theta(zp^{-1/2};p)=-zp^{-1/2}\theta(zp^{1/2};p)
$$
and
$$
\theta(z,-z,zp^{1/2},-zp^{1/2};p)=\theta(z^2;p).
$$
After application of these relations, one can find that
$$
\frac{\theta(t_{r-3},\ldots,t_{r};p;q)_n}
{\theta(qt_0/t_{r-3},\ldots,qt_0/t_{r};p;q)_n}=
\frac{\theta(t_0q^{2n};p)}{\theta(t_0;p)}\, (-q)^n.
$$
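As a consistency check, for $n=1$ this relation can be verified directly
with the two theta identities above: setting $z=t_0^{1/2}q$ for the numerator
parameters and $w=t_0^{1/2}$ for the denominator parameters,

```latex
% theta(z p^{-1/2};p) = -z p^{-1/2} theta(z p^{1/2};p) is used in both
% the numerator (with z) and the denominator (with -w):
$$
\frac{\theta(z,-z,zp^{-1/2},-zp^{1/2};p)}
     {\theta(w,-w,wp^{1/2},-wp^{-1/2};p)}
=\frac{-zp^{-1/2}\,\theta(z^2;p)}{wp^{-1/2}\,\theta(w^2;p)}
=-q\,\frac{\theta(t_0q^2;p)}{\theta(t_0;p)},
$$
```

in agreement with the right-hand side at $n=1$.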
As a result, one gets
\begin{eqnarray}\nonumber
\lefteqn{
_{r+1}E_r\left({t_0,t_1,\ldots, t_{r-4},qt_0^{1/2},-qt_0^{1/2},
qp^{-1/2}t_0^{1/2},-qp^{1/2}t_0^{1/2} \atop
qt_0/t_1,\ldots,qt_0/t_{r-4},t_0^{1/2},-t_0^{1/2},
p^{1/2}t_0^{1/2},-p^{-1/2}t_0^{1/2} };q,p;z\right) } &&
\\ && \makebox[5em]{}
= \sum_{n=0}^\infty \frac{\theta(t_0q^{2n};p)}{\theta(t_0;p)}
\prod_{m=0}^{r-4}\frac{\theta(t_m;p;q)_n}{\theta(qt_0/t_m;p;q)_n}\, (-qz)^n.
\label{vwp-1}\end{eqnarray}
For convenience we introduce separate notations for the very-well-poised
series, since they contain an essentially smaller number of parameters
than the general theta hypergeometric series $_{r+1}E_r$.
For this we replace $z$ by $-z$ and all the parameters $t_m,\, m=0,
\ldots,r-4,$ by $t_0t_m$ (in particular, this replaces $t_0$ by $t_0^2$).
Then we write
\begin{eqnarray} \nonumber
\lefteqn{
_{r+1}E_r(t_0;t_1,\ldots,t_{r-4};q,p;z) }&&
\\ && \makebox[3em]{}
\equiv \sum_{n=0}^\infty \frac{\theta(t_0^2q^{2n};p)}{\theta(t_0^2;p)}
\prod_{m=0}^{r-4}\frac{\theta(t_0t_m;p;q)_n}{\theta(qt_0t_m^{-1};p;q)_n}\,
(qz)^n. \label{vwp-2}\end{eqnarray}
In terms of the elliptic numbers this series takes the form:
\begin{eqnarray} \nonumber
\lefteqn{
_{r+1}E_r(u_0;u_1,\ldots,u_{r-4};\sigma,\tau;z) }&&
\\ && \makebox[3em]{}
\equiv \sum_{n=0}^\infty \frac{[2u_0+2n]}{[2u_0]}\prod_{m=0}^{r-4}
\frac{[u_0+u_m]_n}{[u_0+1-u_m]_n}\, z^nq^{n(\sum_{m=0}^{r-4}u_m-(r-7)/2)}.
\label{vwp-3}\end{eqnarray}
We use the same symbol $_{r+1}E_r$ in (\ref{vwp-2}) and (\ref{vwp-3})
since these series can be easily distinguished from
the general $_{r+1}E_r$ series by the number of free parameters.
For $p=0$ the theta hypergeometric series (\ref{vwp-2}) reduce to the
very-well-poised basic hypergeometric series:
\begin{eqnarray} \nonumber
\lefteqn{
_{r-1}\Phi_{r-2}(t_0;t_1,\ldots,t_{r-4};q;qz) }&&
\\ \nonumber && \makebox[3em]{}
= \sum_{n=0}^\infty \frac{1-t_0^2q^{2n}}{1-t_0^2}
\prod_{m=0}^{r-4}\frac{(t_0t_m;q)_n}{(qt_0t_m^{-1};q)_n}\, (qz)^n,
\label{vwp-phi}\end{eqnarray}
which differ from their counterparts in \cite{gas-rah:basic}
by the replacement of $z$ by $qz$ and of $t_m$ by $t_0t_m$
(a standard notation for these series would be $_{r-1}W_{r-2}$,
but we do not use it here).
Recall that the balancing condition is not involved in the definition
of the very-well-poised theta hypergeometric series. Imposing
the corresponding constraint
$$
\sum_{m=0}^r(u_0+u_m)=1+\sum_{m=1}^r(u_0+1-u_m)
$$
upon (\ref{vwp-3}) we get
$$
\sum_{m=0}^{r-4} u_m= \frac{r-7}{2}.
$$
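We indicate the short computation behind this claim. Collecting the terms
linear in $u_0$ on both sides of the constraint (using that the $m=0$
numerator parameter is $u_0+u_0$, i.e. $t_0t_0=t_0^2$) gives

```latex
% left side: (r+1)u_0 + u_0 + sum_{m=1}^r u_m;
% right side: 1 + r(u_0+1) - sum_{m=1}^r u_m:
$$
(r+2)u_0+\sum_{m=1}^r u_m = ru_0+r+1-\sum_{m=1}^r u_m
\quad\Longrightarrow\quad
u_0+\sum_{m=1}^r u_m=\frac{r+1}{2}.
$$
```

In this parametrization the four very-well-poised parameters are,
multiplicatively, $q,\,-q,\,qp^{-1/2},\,-qp^{1/2}$, whose product is $q^4$,
i.e. $u_{r-3}+u_{r-2}+u_{r-1}+u_r=4$; subtracting this contribution yields
$u_0+\sum_{m=1}^{r-4}u_m=(r+1)/2-4=(r-7)/2$, as stated.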
In the multiplicative notation this condition reads
$$
\prod_{m=0}^{r-4}t_m= q^{(r-7)/2}.
$$
But this is precisely
the balancing condition for the very-well-poised series
appearing in the theory of basic hypergeometric series
\cite{gas-rah:basic}. Thus for the very-well-poised series
there is no discrepancy between the definitions of the balancing
condition given in \cite{gas-rah:basic} and in this paper. This happens
because the constraints (\ref{very-well-poised-1}) taken separately
are not well defined in the limit $\text{Im} (\tau)\to+\infty$.
Note that for the balanced series an extra factor standing in
(\ref{vwp-3}) to the right of $z^n$ disappears.
Summarizing these considerations, we conclude that the very natural
condition of ellipticity of the function $h(n)=c_{n+1}/c_n$
in theta hypergeometric series provides a substantial
meaning to the (unnatural) balancing condition for the standard
basic hypergeometric series.
If we impose the balancing condition in the multiplicative form,
then an ambiguity appears.
Indeed, substituting into the condition $\prod_{m=0}^r(t_0t_m)=
q\prod_{m=1}^r(qt_0/t_m)$ the constraints
$t_{r-3}=q,\, t_{r-2}=-q,\, t_{r-1}=qp^{-1/2},\, t_{r} = -qp^{1/2}$
(these are the restrictions (\ref{very-well-poised-2}) after the shift
$t_m\to t_0t_m$) we get $\prod_{m=0}^{r-4}t_m^2=q^{r-7}$, which
yields $\prod_{m=0}^{r-4}t_m=\pm q^{(r-7)/2}$; it is known that
only the plus sign corresponds to the correct balancing condition
for odd $r$ (the even $r$ cases remain ambiguous, but even $r$
does not appear in known examples of summation formulae for basic
hypergeometric series).
In the same way, the bilateral theta hypergeometric series
$_{r}G_{r}$ are called very-well-poised if the constraints
(\ref{very-well-poised-1}) or (\ref{very-well-poised-2})
are satisfied, where $u_0$ or $t_0=q^{u_0}$ is a free parameter.
Following the unilateral series case, we
replace $z$ by $-z$, shift the parameters $t_m\to t_0t_m,\,
m=0,\ldots, r-4,$ and introduce the following shorthand notations
for the simplified form of these series:
\begin{eqnarray} \nonumber
\lefteqn{
_{r}G_{r}(t_0;t_1,\ldots,t_{r-4};q,p;z) }&&
\\ && \makebox[3em]{}
= \sum_{n=-\infty}^\infty \frac{\theta(t_0^2q^{2n};p)}{\theta(t_0^2;p)}
\prod_{m=1}^{r-4}\frac{\theta(t_0t_m;p;q)_n}{\theta(qt_0t_m^{-1};p;q)_n}\,
(qz)^n \label{vwp-g2}\end{eqnarray}
or in terms of the elliptic numbers
\begin{eqnarray} \nonumber
\lefteqn{
_{r}G_{r}(u_0;u_1,\ldots,u_{r-4};\sigma,\tau ;z) }&&
\\ && \makebox[3em]{}
= \sum_{n=-\infty}^\infty \frac{[2u_0+2n]}{[2u_0]}\prod_{m=1}^{r-4}
\frac{[u_0+u_m]_n}{[u_0+1-u_m]_n}\, z^nq^{n(\sum_{m=1}^{r-4}u_m-(r-8)/2)}.
\label{vwp-g3}\end{eqnarray}
Repeating these considerations for the bilateral series, we find
the following compact form of the balancing condition for the
$_rG_r$-series (\ref{vwp-g2}):
$$
\sum_{m=1}^{r-4}u_m=\frac{r-8}{2}, \quad
\text{or} \quad \prod_{m=1}^{r-4}t_m=q^{(r-8)/2}.
$$
Under the constraint $u_{r-3}=u_0$ (or $t_{r-3}=t_0$) the
$_{r+1}G_{r+1}$-series is converted into the $_{r+1}E_r$ series.
A general connection between the very-well-poised series of $E$
and $G$ types looks as follows:
\begin{eqnarray}\nonumber
\lefteqn{ _{r}G_{r}(t_0;t_1,\ldots,t_{r-4};q,p;z)=
{_{r+2}E_{r+1}}(t_0;t_1,\ldots,t_{r-4},qt_0^{-1};q,p;z)} && \\ \label{G/E}
&& \makebox[4em]{}
+\frac{q^{r-7}}{z\prod_{m=1}^{r-4}t_m^2}
\frac{\theta(t_0^{-2}q^2;p)}{\theta(t_0^{-2};p)}
\prod_{m=1}^{r-4}\frac{\theta(t_mt_0^{-1};p)}{\theta(qt_0^{-1}t_m^{-1};p)}
\\ \nonumber && \makebox[4em]{}
\times {_{r+2}E_{r+1}}\left(\frac{q}{t_0};t_1,\ldots,
t_{r-4},t_0;q,p; \frac{q^{r-8}}{z\prod_{m=1}^{r-4}t_m^2}\right).
\end{eqnarray}
\begin{remark}
Within our classifications, an elliptic generalization of basic
hypergeometric series $_{r+1}\Phi_r$ introduced by Frenkel and Turaev
in \cite{fre-tur:elliptic} coincides with the very-well-poised
balanced theta hypergeometric series $_{r+1}E_r$
of the unit argument $z=1$. Such series have their origins in
elliptic solutions of the Yang-Baxter equation \cite{bax:exactly,abf:eight,
djkmo:exactly,fre-tur:elliptic} and biorthogonal rational functions with
self-similar spectral properties \cite{spi-zhe:spectral,spi-zhe:classical,
spi-zhe:gevp,spi:special}.
\end{remark}
\begin{definition}
The series $\sum_{n\in\mathbb{N}}c_n$ and $\sum_{n\in\mathbb{Z}}c_n$
are called {\em totally elliptic} hypergeometric series if
$h(n)=c_{n+1}/c_n$ is an elliptic function of {\em all free
parameters} entering it (except for the parameter $z$, by which one can
always multiply $h(n)$) with equal periods of double periodicity.
\end{definition}
\begin{theorem}
The most general (in the sense of a maximal number of independent
free parameters) totally elliptic theta hypergeometric series coincide with
the well-poised balanced theta hypergeometric series $_{r}E_{r-1}$
(in the unilateral case) and $_rG_r$ (in the bilateral case) for $r>2$.
Totally elliptic series are automatically modular invariant.
\end{theorem}
\begin{proof}
It is sufficient to prove this theorem for the bilateral series since
the unilateral series can be obtained afterwards by a simple reduction.
Ellipticity in $n$ leads to $h(n)$ of the form
$$
h(n)=\frac{[n+u_1,\ldots, n+u_r]}{[n+v_1,\ldots, n+v_r]}\, z,
$$
with the free parameters $u_1,\ldots,u_r$ and
$v_1,\ldots,v_{r}$ satisfying the
balancing condition $u_1+\ldots+u_r=v_1+\ldots+v_{r}$. From such a
representation it is evident that there is a freedom in the shift of
parameters by an arbitrary constant: $u_m\to u_m+u_0$, $v_m\to v_m+u_0$,
$m=1,\ldots, r$, which does not spoil the balancing condition.
Let us now determine the maximal possible number of independent
variables in the totally elliptic hypergeometric series. In general one
can denote by $a_l,\, l=1,\ldots, L,$ a set of free parameters of the
elliptic hypergeometric series in which the series is doubly periodic
with some periods.
Then $u_m=\sum_{k=1}^L \alpha_{mk}a_k+\beta_{m}$ and
$v_m=\sum_{k=1}^L \gamma_{mk}a_k+\delta_{m}$ are some linear combinations
of $a_l$ with integer
coefficients $\alpha_{mk}, \gamma_{mk}$. However, owing to the possibility of
changing variables, we can take some of the $u_m$ (starting, say, from $u_1$)
and $v_m$ as $a_1,\ldots,a_L$ and demand double periodicity in these
parameters themselves. Since the minimal order of an elliptic function
is equal to 2, the function $h(n)$ should have at least two zeros and two
poles (or one double zero or pole) in $u_1$. Double zeros and poles call for
additional constraints, i.e. for a reduction of the number of free
parameters, and we discard such a possibility.
Let us assume that $u_r$ depends linearly on $u_1$ and suppose that
$u_1,\ldots,u_{r-1}$ are independent variables. Then it is evident that
the denominator parameters $v_m,\, m=1,\ldots,r,$ cannot contain
additional independent variables. Indeed, if this were so, then such a
parameter would inevitably show up in at least one $\theta$-function
in the numerator, which cannot happen by assumption.
So, $L=r-1$ is the maximal possible number of independent variables in
the totally elliptic hypergeometric series and
$u_r$ together with $v_m,\, m=1,\dots,r,$ depend linearly on
$u_k,\, k=1,\ldots,r-1.$ Because of the permutational invariance in
the latter variables one must have $u_r=\alpha\sum_{k=1}^{r-1}u_k+\beta$,
where $\alpha, \beta$ are some numerical coefficients to be determined
(evidently, $\alpha$ must be an integer).
As to the choice of $v_m$ the unique option guaranteeing
permutational invariance of the product $\prod_{m=1}^r[x+v_m]$ in
$u_1,\ldots,u_{r-1}$ is the following one
$$
v_m=\gamma\sum_{k=1}^{r-1}u_k+\delta u_m+\rho,
\quad m=1,\ldots,r-1,
$$
and $v_r=\mu\sum_{k=1}^{r-1}u_k+\nu$,
where $\gamma, \delta, \rho, \mu, \nu$ are some numerical parameters
($\gamma, \delta, \mu$ must be integers).
This is the most general choice of $v_m$ since all other permutationally
invariant combinations of $u_m$ require products of more than $r$
theta functions. Substituting this ansatz into the balancing
condition yields $1+\alpha=(r-1)\gamma+\delta+\mu$ and
$\beta=(r-1)\rho+\nu$, which guarantees invariance of $h(x)$ under the
shift $u_k\to u_k+\sigma^{-1}$ and cancels the sign factor emerging
from the shift $u_k\to u_k+\tau\sigma^{-1}$. A bit cumbersome but
technically straightforward analysis of the condition of cancellation
of the factors of the form $e^{-2\pi i\sigma u}$ yields the equations
$$
\delta^2=1,\qquad \alpha^2=(r-1)\gamma^2+2\gamma\delta+\mu^2
$$
and
$$
\alpha\beta=(r-1)\gamma\rho+\rho\delta+\mu\nu.
$$
The constraint generated by the cancellation of the factors of the form
$e^{-\pi i\tau}$ appears to be irrelevant and will not be indicated.
Let $\delta=1$. Then the two equations upon the coefficients $\alpha,\gamma,
\mu$ yield that either $\gamma=0$ or $\gamma(r-1)(r-2)/2+\mu(r-1)=1.$ Since
$\gamma$ and $\mu$ are integers, the second case cannot be valid
(the integers on the left-hand side are proportional to $r-1$, whereas on
the right-hand side such proportionality does not take place for $r>2$).
The choice $\gamma=0$ leads to $\alpha=\mu$ and, from the other two
equations, one gets $\rho=0$ and $\beta=\nu$. As a result, $h(n)=1$,
i.e. we get a trivial solution, which is discarded.
Let $\delta=-1$. Solving the above equations uniquely gives
$$
\alpha=\frac{\gamma r}{2}, \quad
\mu=1-\frac{\gamma (r-2)}{2}, \quad
\beta=\frac{\rho r}{2},\quad \nu=-\frac{\rho (r-2)}{2},
$$
where $\gamma$ is an integer (for odd $r$ it must be an even
number) and $\rho$ is an arbitrary parameter. Arbitrariness of $\rho$
seems to contradict the statement that there are no new independent
parameters in the variable $u_r$. This paradox is resolved in the following
way. Let us first make the shift $\rho\to \rho -\gamma\sum_{k=1}^{r-1}u_k$.
It is easy to see that this removes $\gamma$
from $h(n)$, i.e. $\gamma$ is a spurious parameter. Denote now $\rho\equiv 2u_0$
and make the shifts $u_m\to u_m+u_0$, $m=1,\ldots,r-1$. As a result, one gets
\begin{equation}\label{vwp-theorem}
h(n)=\prod_{m=1}^{r-1}\frac{[n+u_0+u_m]}{[n+u_0-u_m]}\,
\frac{[n+u_0-\sum_{k=1}^{r-1}u_k]}{[n+u_0+\sum_{k=1}^{r-1}u_k]}\, z,
\end{equation}
i.e. the parameter $u_0$ plays the same role as $n$ and ellipticity
in it is evident. It is not difficult to recognize in (\ref{vwp-theorem})
the most general expression for $c_{n+1}/c_n$ of well-poised and balanced
theta hypergeometric series. Thus we have proved that ellipticity in
all free parameters in $h(n)$ leads uniquely to the balancing and
well-poisedness conditions.
Let us prove now that the totally elliptic hypergeometric series are
automatically modular invariant. For this it is sufficient to check
that the sums of squares of the parameters $u$ entering the elliptic numbers
$[n+u]$ in the numerator and in the denominator of $h(n)$ (\ref{vwp-theorem})
coincide. The numerator parameters generate the sum
$$
\sum_{k=1}^{r-1}u_k^2 +\Bigl(-\sum_{k=1}^{r-1} u_k\Bigr)^2
$$
which is trivially equal to the sum appearing from the denominator:
$$
\sum_{k=1}^{r-1}(-u_k)^2+\Bigl(\sum_{k=1}^{r-1} u_k\Bigr)^2,
$$
i.e. modular invariance is automatic. (Note that well-poised theta
hypergeometric series without the balancing condition are not modular
invariant.)
All the considerations given above were designed for the bilateral
$_{r}G_r$ series case, but a passage to the unilateral well-poised
and balanced $_rE_{r-1}$-series is done by a simple specification
of one of the parameters. One has to set $u_{r-1}=u_0-1$ and then
shift $u_0\to u_0+1/2,\, u_m\to u_m-1/2,\, m=1,\ldots, r-2$. This brings
$h(n)$ to the form $h(n)=z\prod_{m=0}^{r-1}[n+u_0+u_m]/[n+1+u_0-u_m]$,
where we introduced anew the $u_{r-1}$ parameter through the
relation $\sum_{m=0}^{r-1}u_m=r/2$.
\end{proof}
\begin{remark}
We have replaced $t_0$ by $t_0^2$ in the definition of very-well-poised
elliptic hypergeometric series $_{r+1}E_r$ in order to have $p$-shift
invariance in the variable $t_0$ (or ellipticity in $u_0$). Otherwise
there would be $p$-shift invariance in $t_0^{1/2}$.
\end{remark}
Thus we have found interesting origins of the balancing and well-poisedness
conditions for the plain and basic hypergeometric series, calling for a
revision of these notions. However, the origins of the very-well-poisedness
condition remain unknown. Probably elliptic functions $h(n)$ obeying
such a constraint have some particular arithmetic properties. An indication
of this is given by the transformation and summation formulas for some
of the very-well-poised balanced theta hypergeometric series of the
unit argument.
The theta hypergeometric series $_{r}E_s$ and $_rG_s$
are defined as formal infinite series. However, because of the
quasiperiodicity of the theta functions it is not a simple task
to determine their convergence, and this problem will not be considered
here. It can be shown that for some choices of parameters the radius of
convergence $R$ of the balanced $_{r+1}E_r$-series is equal to 1.
If $R$ is the radius of convergence of the very-well-poised
$_{r+2}E_{r+1}$-series without balancing condition,
i.e. if these infinite series are well defined for $|z|<R$, then from
the representation (\ref{G/E}) it follows that the $_rG_r$-series
converge for $\left|{q^{r-8}}/{R\prod_{m=1}^{r-4}t_m^2}\right|<|z|<R.$
A rigorous meaning to the $_rE_s$-series can be given by imposing some
truncation conditions.
The theta hypergeometric series truncates if for some $m$,
\begin{equation}\label{e-trunc1}
u_m=-N-K\sigma^{-1}-M\tau\sigma^{-1}, \qquad N\in\mathbb{N},\quad
K, M\in\mathbb{Z},
\end{equation}
or in the multiplicative form
\begin{equation}\label{e-trunc2}
t_m=q^{-N}p^{-M}, \qquad N\in\mathbb{N},\quad M\in\mathbb{Z}.
\end{equation}
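Indeed, if $t_m=q^{-N}p^{-M}$, then for $n>N$ the corresponding shifted
factorial vanishes:

```latex
% the k = N factor is theta(p^{-M};p), which is zero for every integer M,
% since theta(1;p)=0 and theta(pz;p) = -z^{-1} theta(z;p):
$$
\theta(q^{-N}p^{-M};p;q)_n=\prod_{k=0}^{n-1}\theta(q^{k-N}p^{-M};p)=0
\quad \text{for} \quad n>N,
$$
```

so all terms of the series with $n>N$ are equal to zero and the sum
terminates at $n=N$.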
The well-poised elliptic hypergeometric series are doubly periodic in
their parameters with the periods $\sigma^{-1}$ and $\tau\sigma^{-1}$.
Therefore, these truncated series do not depend on the integers $K, M$.
The top-level identity in the theory of basic hypergeometric series is
the four term Bailey identity for non-terminating $_{10}\Phi_9$
very-well-poised balanced series of the unit argument \cite{gas-rah:basic}.
In the terminating case only two terms remain.
In \cite{fre-tur:elliptic} Frenkel and Turaev have proved an
elliptic generalization of the Bailey identity in the terminating case.
In our notations it looks as follows
\begin{eqnarray}\nonumber
{_{12}E_{11}}(t_0;t_1,\dots,t_7;q,p;1) &=&
\frac{\theta(qt_0^2,qs_0/s_4,qs_0/s_5,q/t_4t_5;p;q)_N}
{\theta(qs_0^2,qt_0/t_4,qt_0/t_5,q/s_4s_5;p;q)_N} \\
&& \times {_{12}E_{11}}(s_0;s_1,\dots,s_7;q,p;1),
\label{ft-bailey}\end{eqnarray}
where it is assumed that $\prod_{m=0}^7t_m=q^2$, $t_0t_6=q^{-N}$,
$N\in\mathbb{N}$, and
$$
s_0^2=\frac{qt_0}{t_1t_2t_3},\quad s_1=\frac{s_0t_1}{t_0},\quad
s_2=\frac{s_0t_2}{t_0},\quad s_3=\frac{s_0t_3}{t_0},
$$
$$
s_4=\frac{t_0t_4}{s_0},\quad s_5=\frac{t_0t_5}{s_0},\quad
s_6=\frac{t_0t_6}{s_0},\quad s_7=\frac{t_0t_7}{s_0}.
$$
If one sets $t_2t_3=q$, then the left-hand side
of (\ref{ft-bailey}) becomes a terminating $_{10}E_9$-series, and in the
series on the right-hand side one gets $s_1=1$, i.e. only its first
term is different from zero. This gives the Frenkel-Turaev sum---an
elliptic generalization of Jackson's sum for terminating
very-well-poised balanced $_8\Phi_7$-series \cite{gas-rah:basic}.
After diminishing the indices of $t_{4,5,6,7}$ by two
it takes the following form:
\begin{eqnarray}\nonumber
\lefteqn{ {_{10}E_9}(t_0;t_1,\dots, t_5;q,p;1) } && \\ && \makebox[4em]{}
= \frac{\theta (qt_0^2;p;q)_N\prod_{1\leq r<s\leq 3}
\theta (q/t_rt_s;p;q)_N}{\theta (q/t_0t_1t_2t_3;p;q)_N
\prod_{r=1}^3\theta (qt_0/t_r;p;q)_N},
\label{ft-sum}\end{eqnarray}
where the parameters $t_r$ are assumed to satisfy the balancing
condition $\prod_{r=0}^5 t_r =q$ and the truncation
condition $t_0t_4=q^{-N}$, $N\in \mathbb{N}$.
\begin{remark}
Due to the clarification of the relation of the very-well-poisedness
condition to the general structure of theta hypergeometric series,
starting from this paper we change the notations for the elliptic
hypergeometric series in the generalizations of the Bailey and
Jackson identities. The symbols $_{10}E_9$ and $_8E_7$ in the papers
\cite{spi-zhe:spectral,spi-zhe:classical,spi-zhe:gevp,spi:solitons,
spi:special,die-spi:elliptic,die-spi:selberg,die-spi:modular,
die-spi:elliptic2} read in the current notations as $_{12}E_{11}$
and $_{10}E_9$ respectively.
\end{remark}
\begin{remark}
Despite the double periodicity, infinite totally elliptic hypergeometric
functions are not elliptic functions of $u_r$ since they have infinitely
many poles in the parallelogram of periods. Indeed, some
of the poles in $t_s,\; s=1,\ldots,r-4$, are located at $t_s=t_0q^{n+1}p^m,$
where $n\in\mathbb{N}$ and $m\in\mathbb{Z}$.
If $q^k\neq p^l$ for any $k, l\in\mathbb{N},$ then, evidently, there are
infinitely many integers $n$ and $m$ such that $t_s$ stays, say, within
the bounds $|p|< |t_s|< 1.$ This means that there are infinitely many
poles in the parameters $t_s$ in this annulus.
\end{remark}
\begin{remark}
The symbols $_rE_s$ and $_rG_s$ were chosen for denoting the one-sided
and bilateral theta hypergeometric series in order to make them as
close as possible to the standard notations $_{r}F_s$ and $_rH_s$ used for
one-sided and bilateral plain hypergeometric series, respectively.
The letter ``$E$'' also refers to the word ``elliptic''. To the author's taste,
the Greek symbols $_{r}\Phi_s$ and $_r\Psi_s$ used for denoting basic
hypergeometric series fit well enough into the aesthetics created
by the sequence of letters $E, \Phi, F, G, \Psi, H$.
\end{remark}
\section{Multiple elliptic hypergeometric series}
Following the definition of theta hypergeometric series
of a single variable one can consider formal multiple sums of
quasiperiodic combinations of Jacobi $\theta_1$-functions depending
on more than one summation index. However, we shall limit ourselves
only to the multiple elliptic hypergeometric series.
\begin{definition}
The formal series
$$
\sum_{\lambda_1,\ldots,\lambda_n=0}^\infty c(\lambda_1,\ldots,\lambda_n)
\quad \text{or} \quad
\sum_{\lambda_1,\ldots,\lambda_n=-\infty}^\infty
c(\lambda_1,\ldots,\lambda_n),
$$
and
$$
\sum_{\lambda_1,\ldots,\lambda_n=0
\atop \lambda_1\leq \ldots\leq \lambda_n}^\infty
c(\lambda_1,\ldots,\lambda_n)
\quad \text{or} \quad
\sum_{\lambda_1,\ldots,\lambda_n=-\infty
\atop \lambda_1\leq \ldots\leq \lambda_n}^\infty
c(\lambda_1,\ldots,\lambda_n)
$$
are called {\em multiple elliptic hypergeometric series} if:
a) the coefficients $c(\mathbf{\lambda})$ are symmetric with respect
to the action of the permutation group $\mathcal{S}_n$ upon the
summation variables $\lambda_1,\ldots,\lambda_n$ and the free
parameters entering $c(\mathbf{\lambda})$;
b) for all $k=1,\ldots,n$ the functions
$$
h_k(\mathbf{\lambda})=\frac{
c(\lambda_1,\ldots,\lambda_k+1,\ldots,\lambda_n)
}{c(\lambda_1,\ldots,\lambda_k,\ldots,\lambda_n)},
$$
are elliptic in $\lambda_k,\, k=1,\ldots,n,$
considered as complex variables. These series are called totally elliptic
if, in addition, the functions $h_k(\mathbf{\lambda})$ are
elliptic in all free parameters except for the free multiplication factors.
\end{definition}
Suppose that $h_k(\mathbf{\lambda})$ is symmetric in $\lambda_1,\ldots,
\lambda_{k-1},\lambda_{k+1},\ldots,\lambda_n$. Then, using the results
of the one-variable analysis, it is not difficult to see that the most
general expression for the coefficients $c(\mathbf{\lambda})$ is:
\begin{equation}\label{c-general}
c(\mathbf{\lambda})=
\prod_{k=1}^n\Biggl(\prod_{1\leq i_1<\ldots<i_k\leq n} \prod_{m=1}^{r_k}
\frac{[u_{km}]_{\lambda_{i_1}+\ldots+\lambda_{i_k} } }
{[v_{km}]_{\lambda_{i_1}+\ldots+\lambda_{i_k} } }\Biggr)
z_1^{\lambda_1}\cdots z_n^{\lambda_n},
\end{equation}
where
$$
\sum_{k=1}^n C_{n-1}^{k-1} \sum_{m=1}^{r_k} (u_{km}-v_{km})=0.
$$
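For orientation, in the simplest multiple case $n=2$ formula (\ref{c-general})
reduces to
$$
c(\lambda_1,\lambda_2)=\prod_{m=1}^{r_1}
\frac{[u_{1m}]_{\lambda_1}[u_{1m}]_{\lambda_2}}
{[v_{1m}]_{\lambda_1}[v_{1m}]_{\lambda_2}}
\prod_{m=1}^{r_2}
\frac{[u_{2m}]_{\lambda_1+\lambda_2}}{[v_{2m}]_{\lambda_1+\lambda_2}}\,
z_1^{\lambda_1}z_2^{\lambda_2},
$$
with the constraint $\sum_{m=1}^{r_1}(u_{1m}-v_{1m})
+\sum_{m=1}^{r_2}(u_{2m}-v_{2m})=0$, since $C_1^0=C_1^1=1$.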
However, if the action of $\mathcal{S}_n$ permutes $\lambda_1,\ldots,
\lambda_n$ and simultaneously the free parameters entering
$c(\mathbf{\lambda})$ other than $z_1,\ldots,z_n$, then the situation is
richer; e.g., more general combinations of the $\lambda_k$ are allowed than
indicated in (\ref{c-general}). We shall not go into further analysis
of the general situation but pass to some particular examples.
Currently there are two known examples of multiple elliptic hypergeometric
series leading to some constructive identities (multivariable analogues of
the Frenkel-Turaev $_{10}E_9$-summation formula). The first one corresponds
to an elliptic extension of the Aomoto-Ito-Macdonald type of series
\cite{aom:elliptic,ito:theta,mac:constant}.
Its structure is read off from the following multivariable generalization
of the Frenkel-Turaev summation formula considered in \cite{war:summation,
die-spi:elliptic,ros:proof}.
Let $N\in\mathbb{N}$ and let the parameters $t, t_r\in\mathbb{C},\,
r=0,\ldots,5,$ be constrained by the balancing condition
$t^{2n-2}\prod_{r=0}^5t_r=q$ and the truncation condition
$t^{n-1}t_0t_4=q^{-N}.$
Then one has the following theta-functions identity
\begin{eqnarray}\nonumber
&&
\sum_{0\leq \lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n\leq N}
q^{\sum_{j=1}^n\lambda_j} t^{2 \sum_{j=1}^n (n-j)\lambda_j}
\\ \nonumber && \makebox[8em]{}\times
\prod_{1\leq j<k\leq n} \Biggl(
\frac{\theta(\tau_k\tau_jq^{\lambda_k+\lambda_j},
\tau_k\tau_j^{-1}q^{\lambda_k-\lambda_j};p)}
{\theta(\tau_k\tau_j,\tau_k\tau_j^{-1};p)} \\ \nonumber
&& \makebox[8em]{}\times
\frac{\theta(t\tau_k\tau_j;p;q)_{\lambda_k+\lambda_j}}
{\theta(qt^{-1}\tau_k\tau_j;p;q)_{\lambda_k+\lambda_j}}
\frac{\theta(t\tau_k\tau_j^{-1};p;q)_{\lambda_k-\lambda_j}}
{\theta(qt^{-1}\tau_k\tau_j^{-1};p;q)_{\lambda_k-\lambda_j}}
\Biggr) \\ \nonumber
&&\makebox[3em]{} \times \prod_{j=1}^n
\Biggl( \frac{\theta(\tau_j^2q^{2\lambda_j};p)}{\theta(\tau_j^2;p)}
\prod_{r=0}^5 \frac{\theta(t_r\tau_j;p;q)_{\lambda_j}}
{\theta(qt_r^{-1}\tau_j;p;q)_{\lambda_j}}\Biggr) \\
&&\makebox[3em]{}
=\prod_{j=1}^n\frac{\theta(qt^{n+j-2}t_0^2;p;q)_N
\prod_{1\leq r <s \leq 3} \theta(qt^{1-j} t_r^{-1}t_s^{-1};p;q)_N}
{\theta(qt^{2-n-j}\prod_{r=0}^3t_r^{-1};p;q)_N
\prod_{r=1}^3 \theta(qt^{j-1}t_0t_r^{-1};p;q)_N}.
\label{multi-1}\end{eqnarray}
Here the parameters $\tau_j$ are related to $t_0$ and $t$ as follows:
$\tau_j=t_0t^{j-1}$, $j=1,\ldots ,n$.
Note that the series coefficients $c(\mathbf{\lambda})$ are symmetric
with respect to the simultaneous permutation of the variables
$\lambda_j$ and $\lambda_k$ and the parameters $\tau_j$ and $\tau_k$
for arbitrary $j\neq k$ (for this one has to assume that $\tau_j$ are
independent variables).
\begin{theorem}
The series standing on the left hand side of (\ref{multi-1}) is a totally
elliptic multiple hypergeometric series.
\end{theorem}
\begin{proof}
The ratios of the successive series coefficients yield
\begin{eqnarray}\nonumber
h_l(\mathbf{\lambda})=\prod_{j=1}^{l-1}
\frac{\theta(\tau_j\tau_lq^{\lambda_j+\lambda_l+1},\tau_j^{-1}\tau_l
q^{\lambda_l+1-\lambda_j}, t\tau_j\tau_lq^{\lambda_j+\lambda_l},
t\tau_j^{-1}\tau_lq^{\lambda_l-\lambda_j};p)}
{\theta(\tau_j\tau_lq^{\lambda_j+\lambda_l},\tau_j^{-1}\tau_l
q^{\lambda_l-\lambda_j}, t^{-1}\tau_j\tau_lq^{\lambda_j+\lambda_l+1},
t^{-1}\tau_j^{-1}\tau_lq^{\lambda_l+1-\lambda_j};p)}
\\ \nonumber
\times \prod_{k=l+1}^n\frac{\theta(\tau_k\tau_lq^{\lambda_k+\lambda_l+1},
\tau_k\tau_l^{-1}q^{\lambda_k-\lambda_l-1},t\tau_k\tau_lq^{\lambda_k+
\lambda_l},t^{-1}\tau_k\tau_l^{-1}q^{\lambda_k-\lambda_l};p)}
{\theta(\tau_k\tau_lq^{\lambda_k+\lambda_l},\tau_k\tau_l^{-1}
q^{\lambda_k-\lambda_l}, t^{-1}\tau_k\tau_lq^{\lambda_k+\lambda_l+1},
t\tau_k\tau_l^{-1} q^{\lambda_k-\lambda_l-1};p)}
\\
\times qt^{2(n-l)}
\frac{\theta(\tau_l^2q^{2\lambda_l+2};p)}{\theta(\tau_l^2;p)}
\prod_{m=0}^5\frac{\theta(t_m\tau_lq^{\lambda_l};p)}
{\theta(t_m^{-1}\tau_lq^{\lambda_l+1};p)}.
\label{h-AIM}\end{eqnarray}
Using the equalities (\ref{fun-rel}) one can easily check the ellipticity
of this $h_l(\mathbf{\lambda})$ in $\lambda_i$ for $i< l$ and $i>l$
(for this it suffices to check that $h_l$ does not change after
the replacement of $q^{\lambda_i}$ by $pq^{\lambda_i}$). The
ellipticity in $\lambda_l$ itself requires an essentially longer computation.
The replacement of $q^{\lambda_l}$ by $pq^{\lambda_l}$ in the
product $\prod_{j=1}^{l-1}$ yields a multiplier $t^{-4(l-1)}$.
The product $\prod_{k=l+1}^n$ yields the multiplier $t^{-4(n-l)}$. The
remaining part of $h_l$ generates the factor $q^{-4}\prod_{m=0}^5
qt_m^{-2}$. The product of all these three factors takes the form
$q^2t^{-4(n-1)}\prod_{m=0}^5 t_m^{-2}$ and it is equal to 1 due to the
balancing condition.
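Spelled out, the check runs as follows. Rewriting the balancing condition
$t^{2n-2}\prod_{r=0}^5t_r=q$ as $\prod_{r=0}^5 t_r=qt^{2-2n}$, one has
$$
t^{-4(l-1)}\cdot t^{-4(n-l)}\cdot q^{-4}\prod_{m=0}^5 qt_m^{-2}
=q^2t^{-4(n-1)}\prod_{m=0}^5 t_m^{-2}
=q^2t^{-4(n-1)}\left(qt^{2-2n}\right)^{-2}=1.
$$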
So, we have found that the series under consideration is indeed a multiple
elliptic hypergeometric series. Let us now prove its total ellipticity, that is,
$p$-shift invariance in the parameters $t_m,\, m=0,\ldots,4,$ and $t$.
The $p$-shift invariance in the parameters $t_1,\ldots,t_4$ follows
from the balancing condition in the same way as in the single variable
series case. Consider the $t_0\to pt_0$ shift.
The product $\prod_{j=1}^{l-1}$ yields a multiplier $t^{-4(l-1)}$
and the product $\prod_{k=l+1}^n$ yields the factor $t^{-4(n-l)}$. The
remaining part of $h_l$ generates the factor $q^2\prod_{m=0}^5
t_m^{-2}$. The product of all these three multipliers is equal to 1.
Finally, the shift $t\to pt$ calls for the most complicated computation.
The product $\prod_{j=1}^{l-1}$ yields a complicated multiplier
$(q^{2\lambda_l+1}t^{2(l-1)}\tau_l^2p^{2l-3})^{2(1-l)}$.
The product $\prod_{k=l+1}^n$ generates no less complicated
expression $(q^{2\lambda_l+1}t^{2(l-1)}\tau_l^2p^{2(l-1)})^{2(l-n)}$.
The remaining part of $h_l$ leads to the following factor (after the use
of the balancing condition): $(q^{2\lambda_l+1}t^{2(l-1)}\tau_l^2)^{2(n-1)}
p^{(4n-6)(l-1)}$. The product of all these three multipliers yields 1.
Thus we have proved the total ellipticity of this type of series.
\end{proof}
The second example of multiple series corresponds to an elliptic
generalization of the Milne-Gustafson type multiple basic hypergeometric
series \cite{mil:multidimensional,gus:macdonald,den-gus:q-beta},
which are, in turn, $q$-analogues of the Holman, Biedenharn, and Louck
plain multiple hypergeometric series \cite{hbl:hypergeometric}.
Its structure is read off from the following summation formula suggested
in \cite{die-spi:modular}.
Let $q^n\neq p^m$ for $n,m\in\mathbb{N}$. Then
for the parameters $t_0,\ldots ,t_{2n+3}$ subject to the balancing condition
$q^{-1}\prod_{r=0}^{2n+3}t_r=1$ and the truncation conditions
$q^{N_j}t_jt_{n+j}=1,\, j=1,\ldots ,n,$ where $N_j\in\mathbb{N},$
one has the identity
\begin{eqnarray}\nonumber
&& \sum_{\stackrel{0\leq \lambda_j\leq N_j}{j=1,\ldots ,n}}
q^{\sum_{j=1}^n j\lambda_j}
\prod_{1\leq j<k\leq n}
\frac{\theta (t_jt_kq^{\lambda_j+\lambda_k},
t_jt_k^{-1}q^{\lambda_j-\lambda_k} ;p)}
{\theta (t_jt_k,t_jt_k^{-1} ;p)} \\
\label{multi-2}
&& \quad\qquad\qquad\qquad\times \prod_{1\leq j\leq n} \Biggl(
\frac{\theta (t_j^2q^{2\lambda_j};p)}{\theta (t_j^2;p)}
\prod_{0\leq r\leq 2n+3} \frac{\theta (t_jt_r;p;q)_{\lambda_j}}
{\theta (qt_jt_r^{-1};p;q)_{\lambda_j}}\Biggr) \\
&&= \theta (qa^{-1}b^{-1},qa^{-1}c^{-1},qb^{-1}c^{-1};p;q)_{N_1+\cdots +N_n}
\nonumber\\
&&\qquad\times\prod_{1\leq j<k\leq n}
\frac{\theta (qt_jt_k;p;q)_{N_j}\theta (qt_jt_k ;p;q)_{N_k}}
{\theta (qt_jt_k;p;q)_{N_j+N_k}} \nonumber\\
&&\qquad \times \prod_{1\leq j\leq n}
\frac{\theta (qt_j^2;p;q)_{N_j}}
{\theta (qt_ja^{-1},qt_jb^{-1},
qt_jc^{-1},q^{1+N_1+\cdots+N_n-N_j}t_j^{-1}a^{-1}b^{-1}c^{-1};p;q)_{N_j}},
\nonumber\end{eqnarray}
where $a\equiv t_{2n+1}$, $b\equiv t_{2n+2}$, $c\equiv t_{2n+3}$.
Note that the coefficients $c(\mathbf{\lambda})$ of this series are symmetric
with respect to simultaneous permutation of the variables
$\lambda_j$ and $\lambda_k$ together with the parameters $t_j$ and $t_k$
for arbitrary $j,k=1,\ldots,n,\, j\neq k$.
\begin{theorem}
The series standing on the left hand side of (\ref{multi-2}) is a
totally elliptic hypergeometric series.
\end{theorem}
\begin{proof}
Ratios of the successive series coefficients yield
\begin{eqnarray}\nonumber
h_l(\mathbf{\lambda})= \prod_{j=1}^{l-1}
\frac{\theta(t_jt_lq^{\lambda_j+\lambda_l+1},t_jt_l^{-1}
q^{\lambda_j-\lambda_l-1};p)}{\theta(t_jt_lq^{\lambda_j+\lambda_l},
t_jt_l^{-1}q^{\lambda_j-\lambda_l};p)} \\ \nonumber
\times
\prod_{k=l+1}^{n}
\frac{\theta(t_lt_kq^{\lambda_l+\lambda_k+1},t_lt_k^{-1}
q^{\lambda_l+1-\lambda_k};p)}{\theta(t_lt_kq^{\lambda_l+\lambda_k},
t_lt_k^{-1}q^{\lambda_l-\lambda_k};p)} \\ \times
q^l\frac{\theta(t_l^2q^{2\lambda_l+2};p)}
{\theta(t_l^2q^{2\lambda_l};p)} \prod_{m=0}^{2n+3}
\frac{\theta(t_lt_mq^{\lambda_l};p)}{\theta(t_lt_m^{-1}q^{\lambda_l+1};p)}.
\label{h-MG}\end{eqnarray}
It is easy to check the ellipticity of this expression in $\lambda_j$
for $j<l$ and $j>l$. Ellipticity in $\lambda_l$ itself follows from
a more complicated computation. Namely, the change of $q^{\lambda_l}$
to $pq^{\lambda_l}$ leads to additional multipliers in
$h_l(\mathbf{\lambda})$: $q^{2(1-l)}$---from the product
$\prod_{j=1}^{l-1}$, $q^{2(l-n)}$---from the product $\prod_{k=l+1}^n$, and
$q^{-4}\prod_{m=0}^{2n+3}qt_m^{-2}$---from the rest of
$h_l(\mathbf{\lambda})$. Multiplication of these three
expressions gives 1 due to the balancing condition.
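Explicitly, since the balancing condition gives $\prod_{r=0}^{2n+3}t_r=q$,
$$
q^{2(1-l)}\cdot q^{2(l-n)}\cdot q^{-4}\prod_{m=0}^{2n+3}qt_m^{-2}
=q^{2(1-l)+2(l-n)-4+(2n+4)}\prod_{m=0}^{2n+3}t_m^{-2}
=q^{2}\cdot q^{-2}=1.
$$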
Ellipticity in the parameters $t_m,\, m=0,\ldots,2n+3,$ is checked
separately for $m<l$, $m>l$, and $m=l$. The first two cases are easy
enough and do not merit special consideration. The replacement
$t_l\to pt_l$ leads to the following multipliers:
$q^{2(1-l)}$---from the product $\prod_{j=1}^{l-1}$,
$q^{2(l-n)}$---from the product $\prod_{k=l+1}^n$, and
$q^{-2n}\prod_{m=0}^{2n+3}t_m^{-2}$---from the rest of
$h_l(\mathbf{\lambda})$.
The balancing condition guarantees again that the total multiplier is
equal to 1. Thus we have proved total ellipticity of this series as well.
\end{proof}
\begin{remark}
Modular invariance of the series (\ref{multi-1}) and (\ref{multi-2})
has been established in \cite{die-spi:elliptic} and \cite{die-spi:modular}
respectively. We conjecture that all totally elliptic multiple
hypergeometric series are automatically modular invariant similar to
the one-variable series situation. One can introduce general notions of
well-poised and very-well-poised multiple theta hypergeometric series,
with the given above examples being counted as very-well-poised series,
but we shall not discuss this topic in the present paper.
\end{remark}
The author is indebted to J.F. van Diejen and A.S. Zhedanov for
a collaboration in the work on elliptic hypergeometric series
and for useful discussions of this paper. Valuable comments and
encouragement from G.E. Andrews, R. Askey, and A. Berkovich
are highly appreciated as well.
\end{document} | math | 56,786 |
\begin{document}
\title{A stochastic differential equation with a sticky point}
\author{Richard F. Bass}
\date{\today}
\maketitle
\begin{abstract}
\noindent {\it Abstract:}
We consider a degenerate stochastic differential equation that has a sticky
point in the Markov process sense. We prove that weak existence and weak
uniqueness hold, but that pathwise uniqueness does not hold nor
does a strong solution exist.
\vskip.2cm
\noindent \emph{Subject Classification: Primary 60H10; Secondary 60J60, 60J65}
\end{abstract}
\section{Introduction}\label{S:intro}
The one-dimensional stochastic differential equation
\begin{equation}\label{intro-E0}
dX_t=\sigma(X_t)\, dW_t
\end{equation}
has been the subject of intensive study for well over half a century.
What can one say about pathwise uniqueness when $\sigma$ is allowed
to be zero at certain points?
Of course, a large amount is known, but there are
many unanswered questions remaining.
Consider the case where $\sigma(x)=|x|^{\alpha}$ for ${\alpha}\in (0,1)$.
When ${\alpha} \ge 1/2$, it is known there is pathwise uniqueness by the
Yamada-Watanabe criterion (see, e.g., \cite[Theorem 24.4]{stoch}) while if
${\alpha}<1/2$, it is known there are at least two solutions, the zero solution
and one that can be constructed by a non-trivial time change of Brownian
motion.
However, that is not the end of the story. In \cite{xtoal}, it was
shown
that there is in fact pathwise uniqueness when
${\alpha}<1/2$ provided one restricts attention to the class of solutions that spend zero time at 0.
This can be better understood by using ideas from Markov process theory.
The continuous strong Markov processes on the real line that are
on natural scale can be characterized by their speed measure. For the
example in the preceding paragraph, the speed measure $m$ is given by
$$m(dy)=1_{(y\ne 0)} |y|^{-2{\alpha}}\, dy+\gamma\delta_0(dy),$$
where $\gamma\in [0,\infty]$ and $\delta_0$ is point mass at 0.
When $\gamma=\infty$, we get the 0 solution, or more precisely, the
solution that stays at 0 once it hits 0. If we set $\gamma=0$, we get
the situation considered in \cite{xtoal} where the amount of time
spent at 0 has Lebesgue measure zero, and pathwise uniqueness
holds among such processes.
In this paper we study an even simpler equation:
\begin{equation}\label{intro-E1}
dX_t=1_{(X_t\ne 0)}\, dW_t,\qq X_0=0,
\end{equation}
where $W$ is a one-dimensional Brownian motion.
One solution is $X_t=W_t$, since Brownian motion spends zero time
at 0. Another is the identically 0 solution.
We take $\gamma\in (0,\infty)$ and consider the class of solutions
to \eqref{intro-E1} which spend a positive amount of time at 0,
with the amount of time parameterized by $\gamma$.
We give a precise description of what we mean by this in Section \ref{S:SMM}.
Representing diffusions on the line as the solutions to stochastic
differential equations has a long history, going back to It\^o in the 1940's,
and this paper is a small step in that program. For this reason we characterize
our solutions in terms of occupation times determined by a speed measure. Other
formulations that are purely in terms of stochastic calculus are possible; see
the system \eqref{EPeq1}--\eqref{EPeq2} below.
We start by proving weak existence of solutions to \eqref{intro-E1} for
each $\gamma\in (0,\infty)$. We in fact consider a much more
general situation. We let $m$ be any measure that
gives finite positive mass to each open interval and define the
notion of continuous local martingales with speed measure $m$.
We prove weak uniqueness, or equivalently, uniqueness in law, among
continuous local martingales with speed measure $m$.
The fact that we have uniqueness in law not only within the class of
strong Markov processes but also within the class of continuous
local martingales with a given speed measure may be of independent interest.
We then restrict our attention to \eqref{intro-E1} and look at the
class of continuous martingales that solve \eqref{intro-E1} and
at the same time have speed measure $m$, where now
\begin{equation}\label{intro-E3}
m(dy)=1_{(y\ne 0)}\, dy+\gamma\delta_0(dy)
\end{equation}
with $\gamma\in (0,\infty)$.
Even when we fix $\gamma$ and restrict attention to solutions to \eqref{intro-E1}
that have speed measure $m$ given by \eqref{intro-E3}, pathwise uniqueness does not hold.
The proof of this fact is the main result of this paper.
The reader familiar with excursions will recognize some ideas
from that theory in the proof.
Finally, we prove that for each $\gamma\in (0,\infty)$, no strong solution
to \eqref{intro-E1} among the class of continuous martingales with
speed measure $m$ given by \eqref{intro-E3} exists. Thus, given $W$, one cannot
find a continuous martingale $X$ with speed measure $m$ satisfying
\eqref{intro-E1} such that $X$ is adapted to the
filtration of $W$. A consequence of this is that certain natural
approximations to the solution of \eqref{intro-E1}
do not converge in probability,
although they do converge weakly.
Besides increasing the versatility of \eqref{intro-E0}, one can easily imagine
a practical application of
sticky points. Suppose a corporation has a takeover offer at \$10.
The stock price is then likely to spend a great deal of time
precisely at \$10
but is not constrained to stay at \$10. Thus \$10 would
be a sticky point for the solution of the stochastic differential equation that describes the stock
price.
Regular continuous strong Markov processes on the line which are
on natural scale and have speed measure given by \eqref{intro-E3} are
known as sticky Brownian motions. These were first studied by Feller in the
1950's and It\^o and McKean in the 1960's.
A posthumously published paper by Chitashvili (\cite{Chitashvili})
in 1997, based on a technical report produced in 1988, considered
processes on the non-negative real line that satisfied the stochastic
differential equation
\begin{equation}\label{one-sided}
dX_t=1_{(X_t\ne 0)}\, dW_t+\theta 1_{(X_t=0)}\, dt, \qq X_t\ge 0,
\quad X_0=x_0,
\end{equation}
with $\theta\in (0,\infty)$. Chitashvili proved weak uniqueness for the
pair $(X,W)$ and showed that no strong solution exists.
Warren (see \cite{Warren1} and also \cite{Warren2}) further investigated
solutions to \eqref{one-sided}. The process $X$ is not adapted
to the filtration generated by $W$ and has some ``extra randomness,''
which Warren characterized.
While this paper was under review, we learned of a preprint by Engelbert and Peskir
\cite{Engelbert-Peskir} on the subject of sticky
Brownian motions. They considered the system of equations
\begin{align}
dX_t&=1_{(X_t\ne 0)}\, dW_t, \label{EPeq1}\\
1_{(X_t=0)}\, dt&=\frac{1}{\mu}\, d\ell^0_t(X),\label{EPeq2}
\end{align}
where $\mu\in (0,\infty)$ and $\ell^0_t$ is the local time in the
semimartingale sense at 0 of $X$. (Local times in the Markov process sense
can be different in general.) Engelbert and Peskir proved weak uniqueness
of the joint law of $(X,W)$ and proved that no strong solution exists.
They also considered a one-sided version of this equation, where $X\ge 0$,
and showed that it is equivalent to \eqref{one-sided}. Their results thus
provide a new proof of those of Chitashvili.
It is interesting to compare the system \eqref{EPeq1}--\eqref{EPeq2}
investigated by \cite{Engelbert-Peskir} with the SDE considered in this paper.
Both include the equation \eqref{EPeq1}. In this paper, however, in place of
\eqref{EPeq2} we use a side condition whose origins come from Markov process
theory, namely:
\begin{align}
X &\mbox{\rm is a continuous martingale with speed measure }\label{RBeq2}\\
&~~~~~ m(dx)=
dx+\gamma \delta_0(dx),{\nonumber}
\end{align}
where $\delta_0$ is point mass at 0 and ``continuous martingale with speed
measure $m$'' is defined in \eqref{SMM-E1}. One can show that a solution to the system studied
by \cite{Engelbert-Peskir} is a solution to the formulation considered in this paper and vice versa, and we sketch the argument in
Remark \ref{comparison}. However, we did not see a way of proving this without
first proving the uniqueness results of this paper and using the uniqueness
results of \cite{Engelbert-Peskir}.
Other papers that show no strong solution exists for stochastic differential equations that are closely related include
\cite{Barlow-skew}, \cite{Barlow-LMS}, and \cite{Karatzasetal}.
After a short section of preliminaries, Section \ref{S:prelim}, we
define speed measures for local martingales in Section \ref{S:SMM} and
consider the existence of such local martingales. Section \ref{S:WU}
proves weak uniqueness, while in Section \ref{S:SDE} we prove that
continuous martingales with speed measure $m$ given by \eqref{intro-E3}
satisfy \eqref{intro-E1}.
Sections \ref{S:approx}, \ref{S:est}, and \ref{S:PU} prove that
pathwise uniqueness and strong existence fail. The first of these
sections considers some approximations to a solution to \eqref{intro-E1},
the second proves some needed estimates, and the proof is completed
in the third.
\noindent {\bf Acknowledgment.} We would like to thank Prof.\ H.\ Farnsworth
for suggesting a mathematical finance interpretation of a sticky point.
\section{Preliminaries}\label{S:prelim}
For information on martingales and stochastic calculus,
see \cite{stoch}, \cite{KaratzasShreve} or \cite{RevuzYor}.
For background on continuous Markov processes on the line,
see the above references and also
\cite{Ptpde}, \cite{ItoMcKean}, or \cite{Knight}.
We start with an easy lemma concerning continuous local martingales.
\begin{lemma}\label{prelim-L1} Suppose $X$ is a continuous local martingale
which exits a finite non-empty interval $I$ a.s. If the endpoints
of the interval are $a$ and $b$, $a<x<b$, and $X_0=x$ a.s.,
then
$${{\mathbb E}\,} \angel{X}_{\tau_I}=(x-a)(b-x),$$
where $\tau_I$ is the first exit time of $I$ and $\angel{X}_t$ is
the quadratic variation process of $X$.
\end{lemma}
\begin{proof} Any such local martingale is a time change of a Brownian motion,
at least up until the time of exiting the interval $I$. The result
follows by performing a change of variables
in the corresponding result for Brownian motion; see, e.g., \cite[Proposition
3.16]{stoch}.
\end{proof}
Let $I$ be a finite non-empty interval with endpoints $a<b$. Each
of the endpoints
may be in $I$ or in $I^c$.
Define $g_I(x,y)$ by
$$g_I(x,y)=\begin{cases} 2(x-a)(b-y)/(b-a),\phantom{\Big]}& a\le x<y\le b;\\
2(y-a)(b-x)/(b-a),& a\le y\le x\le b. \end{cases}$$
Let $m$ be a measure
such that $m$ gives finite strictly positive measure to every
finite open interval. Let
$$G_I(x)=\int_I g_I(x,y)\, m(dy).$$
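For example, for the sticky speed measure $m(dy)=1_{(y\ne 0)}\,dy
+\gamma\delta_0(dy)$ of \eqref{intro-E3} and $I=[-R,R]$, a direct computation
gives
$$
G_I(0)=\int_{-R}^R g_I(0,y)\,dy+\gamma\, g_I(0,0)
=2\int_0^R (R-y)\,dy+\gamma R=R^2+\gamma R,
$$
so the point mass at $0$ adds $\gamma R$ to the Brownian value $R^2$ for
the expected exit time from $[-R,R]$ started at $0$.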
If $X$ is a real-valued process adapted to a filtration $\{\sF_t\}$
satisfying the
usual conditions, we let
\begin{equation}\label{prelim-E301}
\tau_{I}=\inf\{t>0: X_t\notin I\}.
\end{equation}
When we want to have
exit times for more than one process at once, we write $\tau_{I}(X)$,
$\tau_{I}(Y)$, etc.
Define
\begin{equation}\label{prelim-E302}
T_x=\inf\{t>0: X_t=x\}.
\end{equation}
A continuous strong Markov process $(X,{\mathbb P}^x)$ on the real line is regular
if ${\mathbb P}^x(T_y<\infty)>0$ for each $x$ and $y$. Thus, starting at $x$, there
is positive probability of hitting $y$ for each $x$ and $y$.
A regular continuous strong Markov process
$X$ is on natural scale if whenever $I$ is a finite non-empty interval
with endpoints $a<b$, then
$${\mathbb P}^x(X_{\tau_I}=a)=\frac{b-x}{b-a}, \qq {\mathbb P}^x(X_{\tau_I}=b)=\frac{x-a}{b-a}$$
provided $a<x<b$. A continuous regular strong Markov process on the line
on natural scale has speed measure $m$ if for each finite non-empty interval $I$ we have
$${{\mathbb E}\,}^x \tau_I=G_I(x)$$
whenever $x$ is in the interior of $I$.
It is well known that if $(X,{\mathbb P}^x)$ and $(Y,\bQ^x)$ are continuous
regular strong Markov processes on the line on
natural scale with the same speed measure
$m$, then the law of $X$ under ${\mathbb P}^x$ is equal to the law of $Y$ under $\bQ^x$
for each $x$.
In addition, $X$ will be a local martingale under ${\mathbb P}^x$ for each
$x$.
Let $W_t$ be a one-dimensional Brownian motion and let $\{L^x_t\}$
be the jointly continuous local times. If we define
\begin{equation}\label{cE21}
{\alpha}_t=\int L_t^y \, m(dy),
\end{equation}
then ${\alpha}_t$ will be continuous and strictly increasing. If we let
$\beta_t$ be the inverse of ${\alpha}_t$ and set
\begin{equation}\label{prelim-E1}
X^M_t=x_0+W_{\beta_t},
\end{equation}
then $X^M$ will be a continuous regular strong Markov process on natural
scale with speed measure $m$ starting at $x_0$.
See the references listed above for a proof, e.g., \cite[Theorem 41.9]{stoch}.
We denote the law of $X^M$ started at $x_0$ by ${\mathbb P}^{x_0}_M$.
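As a sanity check of this construction, if $m(dy)=dy$, then by the
occupation time formula ${\alpha}_t=\int L_t^y\, dy=t$, so $\beta_t=t$ and
$X^M_t=x_0+W_t$: Lebesgue speed measure recovers Brownian motion itself.
For the measure \eqref{intro-E3} one gets instead
$$
{\alpha}_t=t+\gamma L^0_t,
$$
so $X^M$ moves like a Brownian motion away from $0$, while its clock is
slowed at $0$ in proportion to $\gamma$.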
If $(\Omega, \sF, {\mathbb P})$ is a probability space and $\sG$ a $\sigma$-field
contained in $\sF$, a regular
conditional probability $\bQ$ for ${\mathbb P}(\cdot\mid \sG)$ is a map
from $\Omega\times \sF$ to $[0,1]$ such that\\
(1) for each $A\in \sF$, $\bQ(\cdot, A)$ is measurable with respect
to $\sF$;\\
(2) for each $\omega\in \Omega$, $\bQ(\omega,\cdot)$ is a probability
measure on $\sF$;\\
(3) for each $A\in \sF$, ${\mathbb P}(A\mid \sG)(\omega)=\bQ(\omega,A)$
for almost every $\omega$.
Regular conditional probabilities do not always exist, but will if
$\Omega$ has sufficient structure; see \cite[Appendix C]{stoch}.
The filtration $\{\sF_t\}$ generated by a process $Z$ is the smallest
filtration to which $Z$ is adapted and
which satisfies the usual conditions.
We use the letter $c$ with or without subscripts to denote finite
positive constants whose value may change from place to place.
\section{Speed measures for local martingales}\label{S:SMM}
Let $a:{\mathbb R}\to {\mathbb R}$ and $b:{\mathbb R}\to {\mathbb R}$ be Borel measurable functions
with $a(x)\le b(x)$ for all $x$. If $S$ is a finite stopping time,
let
$$\tau^S_{[a,b]}=\inf\{t>S: X_t\notin [a(X_S),b(X_S)]\}.$$
We say a continuous local martingale $X$ started at $x_0$ has speed measure $m$ if
$X_0=x_0$ and
\begin{equation}\label{SMM-E1}
{{\mathbb E}\,}[\tau^S_{[a,b]}-S\mid \sF_S]=G_{[a(X_S),b(X_S)]}(X_S), \qq \mbox{\rm a.s.}
\end{equation}
whenever $S$ is a finite stopping time and $a$ and $b$ are as above.
\begin{remark}\label{R-speed}{\rm
We remark that if $X$ were a strong Markov
process, then the left hand side of \eqref{SMM-E1} would be equal to
${{\mathbb E}\,}^{X_S} \tau^0_{[a,b]}$, where $\tau^0_{[a,b]}=\inf\{t\ge 0: X_t\notin [a,b]\}$. Thus the above definition of speed measure for a martingale is a generalization of the one for one-dimensional diffusions on natural scale.
}
\end{remark}
\begin{theorem}\label{SMM-T1} Let $m$ be a measure that is finite and positive on
every finite open interval. There exists a continuous local martingale
$X$ with $m$ as its speed measure.
\end{theorem}
\begin{proof} Set $X$ equal to $X^M$ as defined in \eqref{prelim-E1}.
We only need show that \eqref{SMM-E1} holds.
Since $X$ is a Markov
process and has associated with it probabilities ${\mathbb P}^x$ and
shift operators ${\theta}eta_t$, then
$$\tau^S_{[a,b]}-S=\sigmagma_{[a(X_0),b(X_0)]}\circ {\theta}eta_S,$$
where $\sigmagma_{[a(X_0),b(X_0)]}=\inf\{t>0: X_t\notin [a(X_0),b(X_0)]\}$.
By the strong Markov property,
\begin{equation}\label{SMM-E2}
{{\mathbb E}\,}[\tau^S_{[a,b]}-S\mid \sF_S]
={{\mathbb E}\,}^{X_S} \sigma_{[a(X_0), b(X_0)]} \qq \mbox{\rm a.s.}
\end{equation}
For each $y$, $\sigma_{[a(X_0),b(X_0)]}=\tau_{[a(y),b(y)]} $
under ${\mathbb P}^y$, and therefore
$${{\mathbb E}\,}^y \sigma_{[a(X_0),b(X_0)]}=G_{[a(y),b(y)]}(y).$$
Replacing $y$ by $X_S(\omega)$ and substituting in \eqref{SMM-E2}
yields \eqref{SMM-E1}.
\end{proof}
\begin{theorem}\label{SMM-P1} Let $X$ be any continuous local martingale that has speed
measure $m$ and let $f$ be a non-negative Borel measurable function.
Suppose $X_0=x_0$, a.s.
Let $I=[a,b]$ be a finite interval with $a<b$ such that $m$ does not give positive
mass to either end point.
Then
\begin{equation}\label{SMM-E3m}
{{\mathbb E}\,}\int_0^{\tau_I} f(X_s)\, ds=\int_I g_I(x,y) f(y)\, m(dy).
\end{equation}
\end{theorem}
\begin{proof} It suffices to suppose that $f$ is continuous and
equal to 0 at the
boundaries of $I$ and then
to approximate an arbitrary non-negative Borel measurable function by
continuous functions that are 0 on the boundaries of $I$.
The main step is to prove
\begin{equation}\label{SMM-E4}
{{\mathbb E}\,}\int_0^{\tau_I(X)} f(X_s)\, ds=
{{\mathbb E}\,}\int_0^{\tau_I(X^M)} f(X^M_s)\, ds.
\end{equation}
Let $\varepsilon>0$. Choose $\delta$ such that
$|f(x)-f(y)|<\varepsilon$ if $|x-y|<\delta$ with $x,y\in I$.
Set $S_0=0$ and $$S_{i+1}=\inf\{t>S_i: |X_t-X_{S_i}|\ge \delta\}.$$
Then
$${{\mathbb E}\,}\int_0^{\tau_I} f(X_s)\, ds={{\mathbb E}\,}\sum_{i=0}^\infty \int_{S_i\land \tau_I}^{S_{i+1}\land \tau_I} f(X_s)\, ds$$
differs by at most $\varepsilon {{\mathbb E}\,}\tau_I$ from
\begin{align}
{{\mathbb E}\,}\sum_{i=0}^\infty f(X_{S_i\land \tau_I})& (S_{i+1}\land \tau_I-S_i\land \tau_I)\label{SMM-E2a}\\
&={{\mathbb E}\,}\Big[\sum_{i=0}^\infty f(X_{S_i\land \tau_I}){{\mathbb E}\,}[S_{i+1}\land \tau_I
-S_i\land \tau_I\mid \sF_{S_i\land \tau_I}]\,\Big].{\nonumber}
\end{align}
Let $a(x)=a\lor (x-\delta)$ and $b(x)=b\land (x+\delta)$.
Since $X$ is a continuous local martingale with speed measure $m$,
the last line in \eqref{SMM-E2a} is equal to
\begin{equation}\label{SMM-E2b}
{{\mathbb E}\,}\sum_{i=0}^\infty f(X_{S_i\land \tau_I})
G_{[a(X_{S_i\land \tau_I}),b( X_{S_i\land \tau_I})]}(X_{S_i\land \tau_I}).
\end{equation}
Because ${{\mathbb E}\,} \tau_{[-N,N]}<\infty$ for all $N$, $X$ is a time change
of a Brownian motion. It follows that the distribution of $\{X_{S_i\land \tau_I(X)}, i\ge 0\}$ is that of a simple random walk on the lattice $\{x_0+k\delta\}$
stopped the first time it exits $I$, and thus
is the same as the distribution of $\{X^M_{S_i\land \tau_I(X^M)}, i\ge 0\}$. Therefore
the expression in \eqref{SMM-E2b} is equal to the corresponding expression
with $X$ replaced by $X^M$. This in turn differs by at most
$\varepsilon\, {{\mathbb E}\,} \tau_I(X^M)$ from
$${{\mathbb E}\,}\int_0^{\tau_I(X^M)} f(X^M_s)\, ds.$$
Since $\varepsilon$ is arbitrary, we have
\eqref{SMM-E4}.
Finally, the right hand side of \eqref{SMM-E4}
is equal to the right hand side of \eqref{SMM-E3m}
by \cite[Corollary IV.2.4]{Ptpde}.
\end{proof}
\section{Uniqueness in law}\label{S:WU}
In this section we show that if $X$ is a continuous local martingale under
${\mathbb P}$ with speed measure $m$, then $X$ has the same law as $X^M$.
Note that we do not suppose \emph{a priori} that $X$ is a strong
Markov process. We remark that the results of \cite{ES3} do not apply, since
in that paper a generalization of the system \eqref{EPeq1}--\eqref{EPeq2}
is studied rather than the formulation given by \eqref{EPeq1} together with
\eqref{RBeq2}.
\begin{theorem}\label{WU-T1} Suppose ${\mathbb P}$ is a probability measure and $X$
is a continuous local martingale with respect to ${\mathbb P}$. Suppose that
$X$ has speed measure $m$
and $X_0=x_0$ a.s. Then the law of $X$ under ${\mathbb P}$ is equal
to the law of $X^M$ under ${\mathbb P}^{x_0}_M$.
\end{theorem}
\begin{proof} Let $R>0$ be such that $m(\{-R\})=m(\{R\})=0$ and set $I=[-R,R]$.
Let $\overline X_t=X_{t\land \tau_I(X)}$ and $\overline X^M_t=X^M_{t\land \tau_I(X^M)}$,
the processes $X$ and $X^M$ stopped on exiting $I$.
For $f$ bounded and measurable let
$$H_{\lambda} f={{\mathbb E}\,} \int_0^{\tau_I({\overline X})} e^{-{\lambda} t} f({\overline X}_t)\, dt$$
and
$$H_{\lambda}^M f(x)={{\mathbb E}\,}^{x} \int_0^{\tau_I({\overline X}^M)} e^{-{\lambda} t} f({\overline X}^M_t)\, dt$$
for ${\lambda}\ge 0$. Since ${\overline X}$ and ${\overline X}^M$ are stopped at times $\tau_I({\overline X})$
and $\tau_I({\overline X}^M)$, resp., we can replace $\tau_I({\overline X})$
and $\tau_I({\overline X}^M)$ by $\infty$ in both of the above integrals
without affecting $H_{\lambda}$ or $H_{\lambda}^M$ as
long as $f$ is 0 on the boundary of $I$.
Suppose $f(-R)=f(R)=0$. Then $H_{\lambda}^M f(-R)$ and $H_{\lambda}^M f(R)$ are
also 0, since we are working with the stopped process.
We want to show
\begin{equation}\label{WU-E2}
H_{\lambda} f=H_{\lambda}^M f(x_0), \qq {\lambda}\ge 0.
\end{equation}
By Theorem \ref{SMM-P1} we know \eqref{WU-E2} holds for ${\lambda}=0$.
Let $K={{\mathbb E}\,} \tau_I(X)$. We have ${{\mathbb E}\,}^{x_0} \tau_I(X^M)=K$ as well
since both $X$ and $X^M$ have speed measure $m$.
Let ${\lambda}=0$ and $\mu\le 1/2K$. Let $t>0$ and let $Y_s={\overline X}_{s+t}$.
Let $\bQ_t$ be a regular conditional probability for ${\mathbb P}(Y\in \cdot\mid
\sF_t)$.
It is easy to see that for almost every $\omega$,
$Y$ is a continuous local martingale under $\bQ_t(\omega, \cdot)$ started at ${\overline X}_t$ and $Y$ has speed measure
$m$. Cf.\ \cite[Section I.5]{Ptpde} or \cite{xtoal}. Therefore by Theorem \ref{SMM-P1}
$${{\mathbb E}\,}_{\bQ_t}\int_0^\infty f(Y_s)\, ds=H^M_0 f({\overline X}_t).$$
This can be rewritten as
\begin{equation}\label{WU-E31}
{{\mathbb E}\,}\Big[\int_0^\infty f({\overline X}_{s+t})\, ds\mid \sF_t\Big]=H^M_0f({\overline X}_t), \qq
\mbox{\rm a.s.}
\end{equation}
as long as $f$ is 0 on the endpoints of $I$.
Therefore, recalling that ${\lambda}=0$,
\begin{align}
H_\mu H_{\lambda}^M f&={{\mathbb E}\,}\int_0^\infty e^{-\mu t} H_{\lambda}^M f({\overline X}_t)\, dt\label{WU-E3}\\
&={{\mathbb E}\,}\int_0^\infty e^{-\mu t} {{\mathbb E}\,}\Big[\int_0^\infty e^{-{\lambda} s}f({\overline X}_{s+t})\, ds\mid
\sF_t\Big]\, dt{\nonumber}\\
&={{\mathbb E}\,}\int_0^\infty e^{-\mu t}e^{{\lambda} t}\int_t^\infty e^{-{\lambda} s}f({\overline X}_s)\, ds\, dt{\nonumber}\\
&={{\mathbb E}\,}\int_0^\infty \int_0^s e^{-(\mu-{\lambda})t}\, dt\, e^{-{\lambda} s}f({\overline X}_s)\, ds{\nonumber}\\
&={{\mathbb E}\,}\int_0^\infty \frac{1-e^{-(\mu-{\lambda})s}}{\mu-{\lambda}}e^{-{\lambda} s}f({\overline X}_s)\, ds{\nonumber}\\
&=\frac{1}{\mu-{\lambda}}{{\mathbb E}\,}\int_0^\infty e^{-{\lambda} s}f({\overline X}_s)\, ds
-\frac{1}{\mu-{\lambda}}{{\mathbb E}\,}\int_0^\infty e^{-\mu s} f({\overline X}_s)\, ds{\nonumber}\\
&=\frac{1}{\mu-{\lambda}}H_{\lambda}^M f(x_0)
-\frac{1}{\mu-{\lambda}}{{\mathbb E}\,}\int_0^\infty e^{-\mu s} f({\overline X}_s)\, ds.{\nonumber}
\end{align}
We used \eqref{WU-E31} in the second equality.
Rearranging,
\begin{equation}\label{SMM-E32}
H_\mu f=H_{\lambda}^M f(x_0)+({\lambda}-\mu)H_\mu(H_{\lambda}^Mf).
\end{equation}
Since ${\overline X}$ and ${\overline X}^M$ are stopped upon exiting $I$, then $H^M_{\lambda} f=0$
at the endpoints of $I$. We now take \eqref{SMM-E32} with $f$
replaced by $H_{\lambda}^M f$, use this to evaluate the last term in
\eqref{SMM-E32}, and obtain
$$H_\mu f=H_{\lambda}^Mf({ x}_0)+({\lambda}-\mu)H^M_{\lambda}(H^M_{\lambda} f)(x_0)+({\lambda}-\mu)^2 H_\mu(H^M_{\lambda}(H^M_{\lambda} f)).$$
We continue. Since $$|H_\mu g|\le \norm{g} {{\mathbb E}\,}\tau_I(X)=\norm{g} K$$
and $$\norm{H^M_{\lambda} g} \le \norm{g} {{\mathbb E}\,} \tau_I(X^M)=
\norm{g} K$$
for each bounded $g$, where $\norm{g}$ is the supremum norm of $g$,
we can iterate and get convergence as long as $\mu\le 1/2K$ and obtain
$$H_\mu f=H^M_{\lambda} f(x_0)+\sum_{i=1}^\infty (({\lambda}-\mu) H^M_{\lambda})^i H_{\lambda}^M f(x_0).$$
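In detail, the iteration converges because each application of $({\lambda}-\mu)H^M_{\lambda}$ loses at most a factor $|{\lambda}-\mu|K\le \tfrac12$ (here ${\lambda}=0$ and $\mu\le 1/2K$): the remainder after $i$ steps satisfies
$$\big|({\lambda}-\mu)^i H_\mu\big((H^M_{\lambda})^i f\big)\big|\le (|{\lambda}-\mu|K)^i K\norm{f}\le 2^{-i}K\norm{f}\to 0,$$
and the series is dominated by the geometric series $\sum_i 2^{-i}K\norm{f}$.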
The above also holds when ${\overline X}$ is replaced by ${\overline X}^M$, so that
$$H_\mu^M f(x_0)=H^M_{\lambda} f(x_0)+\sum_{i=1}^\infty (({\lambda}-\mu) H^M_{\lambda})^iH_{\lambda}^M f(x_0).$$
We conclude $H_\mu f=H^M_\mu f(x_0)$ as long as $\mu\le 1/2K$ and
$f$ is 0 on the endpoints of $I$.
This holds for every starting point. If $Y_s={\overline X}_{s+t}$ and $\bQ_t$ is
a regular conditional probability for the law of $Y_s$ under ${\mathbb P}^x$
given $\sF_t$,
then we asserted above that
$Y$ is a continuous local martingale started at ${\overline X}_t$ with speed measure $m$
under $\bQ_t(\omega, \cdot)$
for almost every $\omega$. We replace $x_0$ by ${\overline X}_t(\omega)$ in the preceding
paragraph and derive
$${{\mathbb E}\,}\Big[\int_0^\infty e^{-\mu s}f({\overline X}_{s+t})\, ds\mid \sF_t\Big]=H^M_\mu f({\overline X}_t),
\qq \mbox{\rm a.s.}$$
if $\mu\le 1/2K$ and $f$ is 0 on the endpoints of $I$.
We now take ${\lambda}=1/2K$ and $\mu\in(1/2K, 2/2K]$. The same argument
as above shows that $H_\mu f=H^M_\mu f(x_0)$ as long as $f$ is 0 on the
endpoints of $I$. This is true for every starting point. We continue,
letting ${\lambda}=n/2K$ and using induction, and obtain
$$H_\mu f=H_\mu^M f(x_0)$$
for every $\mu\ge 0$.
Now suppose $f$ is continuous with
compact support and $R$ is large enough so that $(-R,R)$
contains the support of $f$. We have
that
$${{\mathbb E}\,}\int_0^{\tau_{[-R,R]}({\overline X})} e^{-\mu t} f({\overline X}_t)\, dt
={{\mathbb E}\,}^{x_0}\int_0^{\tau_{[-R,R]}({\overline X}^M)} e^{-\mu t} f({\overline X}^M_t)\, dt$$
for all $\mu> 0$.
This can be rewritten as
\begin{equation}\label{WU-E501}
{{\mathbb E}\,}\int_0^\infty e^{-\mu t} f({ X}_{t\land \tau_{[-R,R]}(X)})\, dt
={{\mathbb E}\,}^{x_0}\int_0^\infty e^{-\mu t} f({X}^M_{t\land\tau_{[-R,R]}(X^M)})\, dt.
\end{equation}
If we hold $\mu$ fixed and let $R\to \infty$
in \eqref{WU-E501}, we obtain
$${{\mathbb E}\,}\int_0^\infty e^{-\mu t} f(X_t)\, dt
={{\mathbb E}\,}^{x_0}\int_0^\infty e^{-\mu t} f(X^M_t)\, dt$$
for all $\mu>0$.
By the uniqueness of the Laplace transform and the continuity of $f, X,$
and $X^M$,
$${{\mathbb E}\,} f(X_t)={{\mathbb E}\,}^{x_0} f(X^M_t)$$
for all $t$. By a limit argument, this holds whenever $f$ is a bounded Borel
measurable function.
The starting point $x_0$ was arbitrary. Using regular conditional
probabilities as above,
$${{\mathbb E}\,}[f(X_{t+s})\mid \sF_t]={{\mathbb E}\,}^{x_0} [f(X_{t+s}^M)\mid \sF_t].$$
By the Markov property, the right hand side is equal to
$${{\mathbb E}\,}^{X^M_t} f(X_s)=P_s f(X^M_t),$$
where $P_s$ is the transition probability kernel for $X^M$.
To prove that the finite dimensional distributions of
$X$ and $X^M$ agree, we use induction.
We have
\begin{align*}
{{\mathbb E}\,}\prod_{j=1}^{n+1} f_j(X_{t_j})
&={{\mathbb E}\,} \prod_{j=1}^{n} f_j(X_{t_j}){{\mathbb E}\,}[f_{n+1}(X_{t_{n+1}})\mid \sF_{t_n}]\\
&={{\mathbb E}\,} \prod_{j=1}^{n} f_j(X_{t_j})P_{t_{n+1}-t_n}f_{n+1}(X_{t_n}).
\end{align*}
We use the induction hypothesis to see that this is equal to
$${{\mathbb E}\,}^{x_0} \prod_{j=1}^{n} f_j(X^M_{t_j})P_{t_{n+1}-t_n}f_{n+1}(X^M_{t_n}).$$
We then use the Markov property to see that this in turn is equal to
$${{\mathbb E}\,}^{x_0}\prod_{j=1}^{n+1} f_j(X^M_{t_j}).$$
Since $X$ and $X^M$ have continuous paths and the same finite dimensional
distributions, they have the same law.
\end{proof}
\section{The stochastic differential equation}\label{S:SDE}
We now discuss the particular stochastic differential equation we want
our martingales to solve. We specialize to the following
speed measure. Let $\gamma\in (0,\infty)$ and
let
\begin{equation}\label{SDE-E31}
m(dx)= dx+\gamma\delta_0(dx),
\end{equation}
where $\delta_0$ is the point mass at 0.
We consider the stochastic differential equation
\begin{equation}\label{SMM-E200}
X_t=x_0+\int_0^t 1_{(X_s\ne 0)}\, dW_s.
\end{equation}
A triple $(X,W,{\mathbb P})$ is a weak solution to \eqref{SMM-E200}
with $X$ starting at $x_0$ if ${\mathbb P}$ is a probability measure,
there exists a filtration $\{\sF_t\}$ satisfying the usual conditions,
$W$ is a Brownian motion under ${\mathbb P}$
with respect to $\{\sF_t\}$,
and $X$ is a continuous martingale adapted to $\{\sF_t\}$ with $X_0=x_0$ and satisfying \eqref{SMM-E200}.
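We record for later use that any weak solution satisfies
$$\angel{X}_t=\int_0^t 1_{(X_s\ne 0)}^2\, d\angel{W}_s=\int_0^t 1_{(X_s\ne 0)}\, ds,$$
so that $t-\angel{X}_t$ is the Lebesgue measure of $\{s\le t: X_s=0\}$.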
We now show that any martingale with $X_0=x_0$ a.s.\ that has speed measure $m$ is the first
element of a triple that is a weak solution to \eqref{SMM-E200}.
Although $X$ has the same law as $X^M$ started at $x_0$, here we only have
one probability measure and we cannot assert that $X$ is a strong Markov
process.
We point out that \cite[Theorem 5.18]{ES3} does not apply here, since they study a generalization of the system \eqref{EPeq1}--\eqref{EPeq2}, and we do not know at this stage that this formulation is equivalent to the one used here.
\begin{theorem}\label{SMM-T3} Let ${\mathbb P}$ be a probability measure on a space that supports a Brownian motion and let
$X$ be a continuous martingale which has speed measure $m$ with $X_0=x_0$ a.s.
Then there
exists a Brownian motion $W$ such that $(X,W,{\mathbb P})$ is a weak
solution to \eqref{SMM-E200} with $X$ starting at $x_0$.
Moreover
\begin{equation}\label{sde-E323}
X_t=x_0+\int_0^t 1_{(X_s\ne 0)}\, dX_s.
\end{equation}
\end{theorem}
\begin{proof}
Let
$$W'_t=\int_0^t 1_{(X_s\ne 0)}\, dX_s.$$
Hence
$$d\angel{W'}_t=1_{(X_t\ne 0)}\, d\angel{X}_t.$$
Let $0<\eta<\delta$. Let $S_0=\inf\{t: |X_t|\ge \delta\}$,
$T_i=\inf\{t>S_i: |X_t|\le \eta\}$, and $S_{i+1}=\inf\{t>T_i:
|X_t|\ge \delta\}$ for $i=0,1,\ldots$.
The speed measure of
$X$ is equal to $m$, which in turn is equal
to Lebesgue measure on ${\mathbb R}\setminus\{0\}$,
hence $X$ has the same law as $X^M$ by Theorem \ref{WU-T1}. Since $X^M$ behaves
like a Brownian motion when it is away from zero,
we conclude $1_{[S_i,T_i]}\,d\angel{X}_t=1_{[S_i,T_i]}\, dt$.
Thus
for each $N$,
$$\int_0^t 1_{\cup_{i=0}^N [S_i,T_i]}(s)\, d\angel{X}_s
=\int_0^t 1_{\cup_{i=0}^N [S_i,T_i]}(s)\, ds.$$
Letting $N\to \infty$, then $\eta\to 0$, and finally $\delta\to 0$,
we obtain
$$\int_0^t 1_{(X_s\ne 0)}\, d\angel{X}_s=\int_0^t 1_{(X_s\ne 0)}\, ds.$$
Let $V_t$ be an independent Brownian motion and let
$$W''_t=\int_0^t 1_{(X_s=0)}\, dV_s.$$
Let $W_t=W'_t+W''_t$. Clearly $W'$ and $W''$ are orthogonal martingales, so
$$d\angel{W}_t=d\angel{W'}_t+d\angel{W''}_t
=1_{(X_t\ne 0)}\, dt+1_{(X_t=0)}\, dt=dt.$$
By L\'evy's theorem (see \cite[Theorem 12.1]{stoch}), $W$ is a Brownian motion.
If
$$M_t=\int_0^t 1_{(X_s=0)}\, dX_s,$$
by the occupation times formula (\cite[Corollary VI.1.6]{RevuzYor}),
$$\angel{M}_t=\int_0^t 1_{(X_s=0)}\, d\angel{X}_s
=\int 1_{\{0\}}(x) \ell^x_t(X)\, dx=0$$
for all $t$,
where $\{\ell^x_t(X)\}$ are the local times of $X$ in the semimartingale sense.
This implies that $M_t$
is identically zero, and hence $X_t=x_0+W'_t$.
Using the definition of $W$, we deduce
\begin{equation}\label{sde-E5.95}
1_{(X_t\ne 0)} \, dW_t=1_{(X_t\ne 0)}\, dX_t=dW'_t=dX_t,
\end{equation}
as required.
\end{proof}
We now show weak uniqueness, that is, if $(X,W,{\mathbb P})$ and $(\widetilde X,\widetilde W,\widetilde {\mathbb P})$
are two weak solutions to \eqref{SMM-E200} with $X$ and $\widetilde X$ starting
at $x_0$ and in addition $X$ and $\widetilde X$ have
speed measure $m$, then the joint law of
$(X,W)$ under ${\mathbb P}$ equals the joint law of $(\widetilde X,\widetilde W)$ under $\widetilde {\mathbb P}$.
This holds even though $W$ will not in general be adapted
to the filtration of $X$.
We know that the law of $X$ under ${\mathbb P}$ equals the law of $\widetilde X$ under
$\widetilde {\mathbb P}$ and also that
the law of $W$ under ${\mathbb P}$ equals the law of $\widetilde W$ under
$\widetilde {\mathbb P}$, but the issue here is the joint law.
Cf.\ \cite{Cherny}. See also \cite{Engelbert-Peskir}.
\begin{theorem}\label{WU-T21} Suppose $(X,W,{\mathbb P})$ and $(\widetilde X,\widetilde W,\widetilde {\mathbb P})$
are two weak solutions to \eqref{SMM-E200} with $X_0=\widetilde X_0=x_0$
and that $X$ and $\widetilde X$
are both continuous martingales with speed measure $m$.
Then the joint law of
$(X,W)$ under ${\mathbb P}$ equals the joint law of $(\widetilde X,\widetilde W)$ under $\widetilde {\mathbb P}$.
\end{theorem}
\begin{proof}
Recall the construction of $X^M$ from Section \ref{S:prelim}.
With $U_t$ a Brownian motion with jointly continuous local times $\{L^x_t\}$
and $m$ given by \eqref{SDE-E31}, we define ${\alpha}_t$ by \eqref{cE21}, let
$\beta_t$ be the right continuous inverse of ${\alpha}_t$, and let $X^M_t=x_0+U_{\beta_t}$.
Since $m$ is greater than or equal to Lebesgue measure but is finite on every
finite interval, we see that ${\alpha}_t$ is strictly increasing, continuous,
and $\lim_{t \to \infty} {\alpha}_t=\infty$. It follows that $\beta_t$ is
continuous and tends
to infinity almost surely as $t\to \infty$.
Given any stochastic process $\{N_t, t\ge 0\}$, let $\sF^N_\infty$ be the
$\sigma$-field generated by the collection of random variables $\{N_t, t\ge 0\}$ together with the null sets.
We have $\beta_t=\angel{X^M}_t$ and $U_t=X^M_{{\alpha}_t}-x_0$.
Since $\beta_t$ is measurable with respect to $\sF^{X^M}_\infty$
for each $t$, then ${\alpha}_t$ is also, and hence so is $U_t$. In fact, we
can give a recipe to construct
a Borel measurable map $F:C[0,\infty)\to C[0,\infty)$ such that $U=F(X^M)$.
Note
also that $X^M_t$ is measurable with respect
to $\sF^U_\infty$ for each $t$ and there exists a Borel measurable map
$G:C[0,\infty)\to C[0,\infty)$ such that $X^M=G(U)$. In addition observe that $\angel{X^M}_\infty=\infty$ a.s.
Since $X$ and $X^M$ have the
same law, then $\angel{X}_\infty=\infty$ a.s. If $Z_t$ is a Brownian
motion with $X_t=x_0+Z(\zeta_t)$ for a continuous increasing process $\zeta$, then
$\zeta_t=\angel{X}_t$ is measurable with respect to $\sF^X_\infty$,
its inverse $\rho_t$ is also, and therefore $Z_t=X_{\rho_t}-x_0$ is as
well.
Moreover the recipe for constructing $Z$ from $X$ is exactly the same as the one for
constructing $U$ from $X^M$, that is,
$Z=F(X)$.
Since $X$ and $X^M$ have the same law, then the joint law of $(X,Z)$ is equal to the joint law of $(X^M,U)$.
We can therefore conclude that $X$ is measurable with respect to
$\sF^Z_\infty$ and $X=G(Z)$.
Let $$Y_t=\int_0^t 1_{(X_s=0)}\, dW_s.$$
Then $Y$ is a martingale with $$\angel{Y}_t=\int_0^t 1_{(X_s=0)}\, ds
=t-\angel{X}_t.$$
Observe that $\angel{X,Y}_t=\int_0^t 1_{(X_s\ne 0)}1_{(X_s=0)}\, ds=0$.
By a theorem of Knight (see \cite{Knight2} or \cite{RevuzYor}), there exists
a two-dimensional process $V=(V_1,V_2)$ such that $V$ is a two-dimensional
Brownian motion under ${\mathbb P}$ and
$$(X_t,Y_t)=(x_0+V_1(\angel{X}_t),V_2(\angel{Y}_t)), \qq \mbox{\rm a.s.}$$
(It turns out that $\angel{Y}_\infty=\infty$, but that is not needed in Knight's
theorem.)
By the third paragraph of this proof, $X_t=x_0+V_1(\angel{X}_t)$ implies that $X_t$ is measurable with
respect to $\sF^{V_1}_\infty$, and in fact $X=G(V_1)$.
Since $\angel{Y}_t=t-\angel{X}_t$,
then
$(X_t,Y_t)$ is measurable with respect to $\sF^V_\infty$ for each $t$ and there
exists a Borel measurable map $H: C([0,\infty), {\mathbb R}^2)\to C([0,\infty), {\mathbb R}^2)$,
where $C([0,\infty), {\mathbb R}^2)$ is the space of continuous functions from
$[0,\infty)$ to ${\mathbb R}^2$, such that $(X,Y)=H(V)$.
Thus $(X,Y)$ is the image under $H$ of a two-dimensional Brownian motion. If
$(\widetilde X, \widetilde W, \widetilde {\mathbb P})$ is another weak solution, then we can define $\widetilde Y$
analogously and find a two-dimensional Brownian motion $\widetilde V$ such that
$(\widetilde X, \widetilde Y)=H(\widetilde V)$. The key point is that the same $H$ can be used.
We conclude that the law of $(X,Y)$ is
uniquely determined.
Since $$(X,W)=(X,X+Y-x_0),$$ this proves that the joint law
of $(X,W)$ is uniquely determined.
\end{proof}
\begin{remark}\label{comparison}{\rm
In Section \ref{S:prelim} we constructed the continuous strong Markov process $(X^M,{\mathbb P}_M^x)$
and we now know that $X$ started at $x_0$ is equal in law to $X^M$ under
${\mathbb P}^{x_0}_M$. We pointed out in Remark \ref{R-speed} that in the strong Markov case the notion of speed
measure for a martingale reduces to that of speed measure for a one dimensional diffusion. In \cite{Engelbert-Peskir} it is shown that the solution to the
system \eqref{EPeq1}--\eqref{EPeq2} is unique in law and thus the solution
started at $x_0$ is equal in law to that of a diffusion on ${\mathbb R}$ started at $x_0$; let $\widetilde m$ be
the speed measure for this strong Markov process. Thus to show the equivalence
of the system \eqref{EPeq1}--\eqref{EPeq2} to the one given by \eqref{EPeq1} and
\eqref{RBeq2},
it suffices to show that $\widetilde m=m$
if and only if \eqref{EPeq2} holds, where $m$ is
given by \eqref{SDE-E31} and $\gamma=1/\mu$. Clearly both $\widetilde m$ and $m$ are equal to Lebesgue measure on ${\mathbb R}\setminus
\{0\}$, so it suffices to compare the atoms of $\widetilde m$ and $m$ at 0.
Suppose \eqref{EPeq2} holds and $\gamma=1/\mu$. Let $A_t=\int_0^t 1_{\{0\}}(X_s)\, ds$. Thus
\eqref{EPeq2} asserts that $A_t=\frac{1}{\mu}\ell_t^0$. Let $I=[a,b]=[-1,1]$,
$x_0=0$, and $\tau_I$ the first time that $X$ leaves the interval
$I$. Setting $t=\tau_I$ and taking expectations starting from 0, we
have
$${{\mathbb E}\,}^0 A_{\tau_I}=\frac1{\mu} {{\mathbb E}\,}^0 \ell^0_{\tau_I}.$$
Since $\ell^0_t$ is the increasing part of the submartingale $|X_t-x_0|-|x_0|$
and $X_{\tau_I}$ is equal to either 1 or $-1$, the right hand side is equal to
$$\frac{1}{\mu} {{\mathbb E}\,}^0|X_{\tau_I}|=\frac1{\mu}.$$
On the other hand, by \cite[(IV.2.11)]{Ptpde},
$${{\mathbb E}\,}^0 A_{\tau_I}=\int_{-1}^1 g_I(0,y) 1_{\{0\}}(y)\, \widetilde m(dy)
=\widetilde m(\{0\}).$$
Thus $\widetilde m=m$ if $\gamma=1/\mu$.
Now suppose we have a solution to the pair \eqref{EPeq1} and \eqref{RBeq2}
and $\gamma=1/\mu$; we will
show \eqref{EPeq2} holds. Let $R>0$, $I=[-R,R]$, and $\tau_I$ the first exit
time from $I$. Set $B_t=\frac{1}{\mu} \ell^0_t$. For any $x\in I$, we have
by \cite[(IV.2.11)]{Ptpde} that
\begin{equation}\label{Rc1}
{{\mathbb E}\,}^x A_{\tau_I}=\int_{I} g_I(x,y)1_{\{0\}}(y)\, m(dy)
=\gamma g_I(x,0).
\end{equation}
Taking expectations,
\begin{equation}\label{Rc2}
{{\mathbb E}\,}^x B_{\tau_I}=\frac{1}{\mu}{{\mathbb E}\,}^x[\,|X_{\tau_I}|-|x|\,].
\end{equation}
Since $X$ is a time change of a Brownian motion that exits $I$ a.s., the distribution of $X_{\tau_I}$ started at $x$ is the same as that of a Brownian motion
started at $x$ upon exiting $I$. A simple computation shows that the
right hand side of \eqref{Rc2} agrees with the right hand side of \eqref{Rc1}.
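In detail: $X$ exits $I$ with the Brownian exit distribution, so ${\mathbb P}^x(X_{\tau_I}=R)=\frac{R+x}{2R}$ and ${{\mathbb E}\,}^x|X_{\tau_I}|=R$. By Tanaka's formula, ${{\mathbb E}\,}^x\ell^0_{\tau_I}={{\mathbb E}\,}^x|X_{\tau_I}|-|x|=R-|x|$, and hence ${{\mathbb E}\,}^x B_{\tau_I}=\frac{1}{\mu}(R-|x|)=\gamma(R-|x|)$. On the other hand, $g_I(x,0)=R-|x|$ with the normalization used above (for which $g_{[-1,1]}(0,0)=1$), so the right hand side of \eqref{Rc1} is also $\gamma(R-|x|)$.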
By the strong Markov property,
$${{\mathbb E}\,}^0[A_{\tau_I}-A_{\tau_I\land t}\mid \sF_t]=
{{\mathbb E}\,}^{X_t} A_{\tau_I}={{\mathbb E}\,}^{X_t}B_{\tau_I}
={{\mathbb E}\,}^0[B_{\tau_I}-B_{\tau_I\land t}\mid \sF_t]$$
almost surely on the set $(t\le \tau_I)$.
Observe that
if $U_t={{\mathbb E}\,}^0[A_{\tau_I}-A_{\tau_I\land t}\mid \sF_t]$, then we can write
$$U_t
={{\mathbb E}\,}^0[A_{\tau_I}-A_{\tau_I\land t}\mid \sF_t]
={{\mathbb E}\,}^0[A_{\tau_I}\mid \sF_t] -A_{\tau_I\land t}$$
and $$U_t={{\mathbb E}\,}^0[B_{\tau_I}-B_{\tau_I\land t}\mid \sF_t]
={{\mathbb E}\,}^0[B_{\tau_I}\mid \sF_t] -B_{\tau_I\land t}$$
for $t\le \tau_I$.
This expresses the supermartingale $U$ as a martingale minus an increasing process
in two different ways.
By the uniqueness of the Doob decomposition for supermartingales, we conclude
$A_{\tau_I\land t}=B_{\tau_I\land t}$
for $t\le \tau_I$. Since $R$ is arbitrary, this establishes \eqref{EPeq2}.
(The argument that the potential of an increasing process determines the process
is well known.)
}\end{remark}
\begin{remark}\label{eitherpaper}{\rm
In the remainder of the paper we prove that there does not exist a
strong solution to the pair \eqref{EPeq1} and \eqref{RBeq2} nor does pathwise
uniqueness hold. In \cite{Engelbert-Peskir}, the authors prove that there is no
strong solution to the pair \eqref{EPeq1} and \eqref{EPeq2} and that pathwise
uniqueness does not hold. Since we now know there is an equivalence between the pair
\eqref{EPeq1} and \eqref{RBeq2} and the pair \eqref{EPeq1} and \eqref{EPeq2},
one could at this point use the argument of \cite{Engelbert-Peskir} in place of the argument of this paper.
Alternatively, in the paper of \cite{Engelbert-Peskir} one could use our argument in place of theirs to establish the non-existence of a strong solution and
that pathwise uniqueness does not hold.
}\end{remark}
\section{Approximating processes}\label{S:approx}
Let $\widetilde W$ be a Brownian motion adapted to a filtration $\{\sF_t, t\ge 0\}$, let $\varepsilon\le \gamma$, and let $X_t^\varepsilon$ be the solution to
\begin{equation}\label{approx-E671}
dX^\varepsilon_t=\sigma_\varepsilon(X_t^\varepsilon)\, d\widetilde W_t, \qq X^\varepsilon_0=x_0,
\end{equation}
where
$$\sigma_\varepsilon(x)=\begin{cases} 1,& |x|>\varepsilon;\\
\sqrt{\varepsilon/\gamma},& |x|\le \varepsilon.\end{cases}$$
For
each $x_0$ the solution to the stochastic differential
equation is pathwise unique by \cite{LeGall} or \cite{Nakao}.
We also know that if ${\mathbb P}^x_\varepsilon$ is the law of $X^\varepsilon$ starting
from $x$, then $(X^\varepsilon, {\mathbb P}^x_\varepsilon)$ is a continuous regular
strong Markov process on natural scale. The speed measure
of $X^\varepsilon$ will be
$$m_\varepsilon(dy)=dy+\frac{\gamma}{\varepsilon} 1_{[-\varepsilon,\varepsilon]}(y)\, dy.$$
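Although no numerics are used in the proofs, the regularized diffusion \eqref{approx-E671} is easy to simulate; the following Euler--Maruyama sketch (the helper names \texttt{sigma\_eps} and \texttt{simulate} are ours, not taken from the references) illustrates how paths move slowly on $[-\varepsilon,\varepsilon]$:

```python
import math
import random

def sigma_eps(x, eps, gamma):
    """Diffusion coefficient of the regularized SDE (approx-E671):
    equal to 1 away from the origin and sqrt(eps/gamma) on [-eps, eps]."""
    return 1.0 if abs(x) > eps else math.sqrt(eps / gamma)

def simulate(x0, eps, gamma, t0=1.0, n=10_000, seed=0):
    """Euler-Maruyama approximation of dX = sigma_eps(X) dW on [0, t0].

    Returns the discretized path [X_0, X_{dt}, ..., X_{t0}].
    """
    rng = random.Random(seed)
    dt = t0 / n
    sqrt_dt = math.sqrt(dt)
    x = x0
    path = [x]
    for _ in range(n):
        # Gaussian increment scaled by the (regularized) diffusion coefficient
        x += sigma_eps(x, eps, gamma) * rng.gauss(0.0, sqrt_dt)
        path.append(x)
    return path
```

For small $\varepsilon$ the path lingers in $[-\varepsilon,\varepsilon]$, mirroring the stickiness encoded by the atom $\gamma\delta_0$ of the speed measure.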
Let $Y^\varepsilon$ be the solution to
\begin{equation}\label{approx-E672}
dY^{\varepsilon}_t=\sigma_{2\varepsilon}(Y^\varepsilon_t)\, d\widetilde W_t, \qq Y^\varepsilon_0=x_0.
\end{equation}
Since $\sigma_\varepsilon\le 1$, then $d\angel{X^\varepsilon}_t\le dt$. By
the Burkholder-Davis-Gundy inequalities
(see, e.g., \cite[Section 12.5]{stoch}),
\begin{equation}\label{c6.2A}
{{\mathbb E}\,}|X^\varepsilon_t-X^\varepsilon_s|^{2p}\le c|t-s|^p
\end{equation}
for each $p\ge 1$, where the constant $c$ depends on $p$. It follows
(for example, by Theorems 8.1 and 32.1 of \cite{stoch}) that the law of $X^\varepsilon$ is tight in $C[0,t_0]$ for each
$t_0$. The same is of course true
for $Y^\varepsilon$ and $\widetilde W$, and so the triple $(X^\varepsilon,Y^\varepsilon,\widetilde W)$
is tight in $(C[0,t_0])^3$ for each $t_0>0$.
Let $P_t^\varepsilon$ be the transition probabilities for the Markov process
$X^\varepsilon$.
Let $C_0$ be the set of continuous functions on ${\mathbb R}$ that vanish at
infinity and let
$$L=\{f\in C_0: |f(x)-f(y)|\le |x-y|, x,y\in {\mathbb R}\},$$
the set of Lipschitz functions with Lipschitz constant 1 that vanish at
infinity.
One of the main results of \cite{Lsg} (see Theorem 4.2) is that $P_t^\varepsilon$
maps $L$ into $L$ for each $t$ and each $\varepsilon<1$.
\begin{theorem}\label{approx-T101}
If $f\in C_0$, then $P_t^\varepsilon f$ converges uniformly as $\varepsilon\to 0$ for each $t\ge 0$.
If we denote the limit by $P_tf$, then $\{P_t\}$ is a family of transition
probabilities for a continuous regular strong Markov process $(X, {\mathbb P}^x)$ on natural
scale with speed measure given by \eqref{SDE-E31}.
For each $x$, ${\mathbb P}^x_{\varepsilon}$ converges weakly to ${\mathbb P}^x$ with respect to
$C[0,N]$ for each $N$.
\end{theorem}
\begin{proof} \noindent \emph{Step 1.}
Let $\{g_j\}$ be a countable collection of $C^2$ functions in $L$ with
compact support such that the set of finite linear combinations of
elements of $\{g_j\}$ is dense in $C_0$ with respect to the supremum
norm.
Let $\varepsilon_n$ be a sequence converging to 0.
Suppose $g_j$ has support contained in $[-K,K]$ with $K>1$. Since
$X_t^\varepsilon$ is a Brownian motion outside $[-1,1]$, if $|x|>2K$, then
$$|P_t^\varepsilon g_j(x)|=|{{\mathbb E}\,}^x g_j(X_t^\varepsilon)|\le \norm{g_j}\,
{\mathbb P}^x(|X^\varepsilon|\mbox{ hits $|x|/2$ before time } t),$$
which tends to 0 uniformly over $\varepsilon<1$ as $|x|\to \infty$.
Here $\norm{g_j}$ is the supremum norm of $g_j$.
By the equicontinuity of the $P_t^{\varepsilon} g_j$, using the diagonalization
method there exists a subsequence, which we continue to denote
by $\varepsilon_n$, such that $P_t^{\varepsilon_n} g_j$
converges uniformly on ${\mathbb R}$ for every rational $t\ge 0$
and every $j$. We denote the limit by $P_tg_j$.
Since $g_j\in C^2$,
\begin{align*}
P_t ^\varepsilon g_j(x)-P_s^\varepsilon g_j(x)&={{\mathbb E}\,}^x g_j(X_t^\varepsilon)-{{\mathbb E}\,}^x g_j(X_s^\varepsilon)\\
&={{\mathbb E}\,}^x \int_s^t \sigma_\varepsilon(X_r^\varepsilon)g'_j(X_r^\varepsilon)\, d\widetilde W_r+\tfrac12 {{\mathbb E}\,}^x \int_s^t \sigma_\varepsilon(X_r^\varepsilon)^2g''_j(X_r^\varepsilon)\, dr\\
&=\tfrac12 {{\mathbb E}\,}^x \int_s^t \sigma_\varepsilon(X_r^\varepsilon)^2g''_j(X_r^\varepsilon)\, dr,
\end{align*}
where we used It\^o's formula. Since $\sigma_\varepsilon$ is bounded by 1, we obtain
$$|P_t^\varepsilon g_j(x)-P_s^\varepsilon g_j(x)|\le c_j |t-s|,$$
where the constant $c_j$ depends on $g_j$. With this fact, we can deduce
that $P_t^{\varepsilon_n} g_j$ converges uniformly in $C_0$ for every $t\ge 0$.
We again call the limit $P_t g_j$. Since linear combinations of the
$g_j$'s are dense in $C_0$, we conclude that $P_t^{\varepsilon_n} g$ converges
uniformly to a limit, which we call $P_tg$, whenever $g\in C_0$. We note
that $P_t$ maps $C_0$ into $C_0$.
\noindent \emph{Step 2.} Each $X_t^\varepsilon$ is a Markov process, so $P_s^\varepsilon(P_t^\varepsilon g)=P_{s+t}^\varepsilon g$.
By the uniform convergence and equicontinuity and the fact
that $P_s^\varepsilon$ is a contraction, we see that $P_s(P_tg)=P_{s+t}g$
whenever $g\in C_0$.
Let $s_1<s_2<\cdots< s_j$ and let $f_1, \ldots, f_j$ be elements of $L$.
Define inductively $g_j=f_j$, $g_{j-1}=f_{j-1}(P_{s_j-s_{j-1}}g_j)$,
$g_{j-2}=f_{j-2}(P_{s_{j-1}-s_{j-2}}g_{j-1})$, and so on. Define $g_j^\varepsilon$
analogously where we replace $P_t$ by $P_t^\varepsilon$. By the Markov property
applied repeatedly,
$${{\mathbb E}\,}^x[f_1(X^\varepsilon_{s_1})\cdots f_j(X^\varepsilon_{s_j})] =P^\varepsilon_{s_1} g_1^\varepsilon(x).$$
Suppose $x$ is fixed for the moment
and let $f_1, \ldots, f_j\in L$. Suppose there is a subsequence $\varepsilon_{n'}$ of $\varepsilon_n$ such
that $X^{\varepsilon_{n'}}$ converges weakly, say to $X$,
and let ${\mathbb P}'$ be the limit law with corresponding expectation ${{\mathbb E}\,}'$. Using the
uniform convergence, the equicontinuity, and the fact that $P_t^\varepsilon$ maps $L$ into
$L$, we obtain
\begin{equation}\label{Approx-E31}
{{\mathbb E}\,}'[f_1(X_{s_1})\cdots f_j(X_{s_j})] =P_{s_1} g_1(x).
\end{equation}
We can conclude several things from this. First, since the limit is the same no
matter what subsequence $\{\varepsilon_{n'}\}$ we use, then the full sequence ${\mathbb P}^x_{\varepsilon_n}$
converges weakly.
This holds for each starting point $x$.
Secondly, if we denote the weak limit of the ${\mathbb P}^x_{\varepsilon_n}$ by ${\mathbb P}^x$,
then \eqref{Approx-E31} holds with ${{\mathbb E}\,}'$ replaced by ${{\mathbb E}\,}^x$.
From this
we deduce that $(X,{\mathbb P}^x)$ is a Markov process with transition semigroup
given by $P_t$.
Thirdly, since ${\mathbb P}^x$ is the weak limit of probabilities on $C[0,\infty)$,
we conclude that $X$ under ${\mathbb P}^x$ has continuous paths for each $x$.
\noindent \emph{Step 3.} Since $P_t$ maps $C_0$ into $C_0$ and $P_tf(x)={{\mathbb E}\,}^x f(X_t)\to f(x)$
by the continuity of paths if $f\in C_0$, we conclude by \cite[Theorem 20.9]{stoch} that
$(X,{\mathbb P}^x)$ is in fact a strong Markov process.
Suppose $f_1, \ldots, f_j$ are in $L$ and $s_1<s_2<\cdots < s_j<t<u$.
Since $X_t^\varepsilon$ is a martingale,
$${{\mathbb E}\,}^x_\varepsilon\Big[X_u^\varepsilon\prod_{i=1}^j f_i(X_{s_i}^\varepsilon) \Big]=
{{\mathbb E}\,}^x_\varepsilon\Big[X_t^\varepsilon\prod_{i=1}^j f_i(X_{s_i}^\varepsilon) \Big].$$
Moreover, $X_t^\varepsilon$ and $X_u^\varepsilon$ are uniformly integrable due to
\eqref{c6.2A}.
Passing to the limit along the sequence $\varepsilon_n$, we have the equality
with $X^\varepsilon$ replaced by $X$ and ${{\mathbb E}\,}^x_\varepsilon$ replaced by ${{\mathbb E}\,}^x$.
Since the collection of random variables of the form $\prod_i f_i(X_{s_i})$ generates
$\sigma(X_r; r\le t)$, it follows that $X$ is a martingale under
${\mathbb P}^x$ for each $x$.
\noindent \emph{Step 4.}
Let $\delta, \eta>0$. Let $I=[q,r]$ and
$I^*=[q-\delta,r+\delta]$.
In this step we show that
\begin{equation}\label{approx-E102}
{{\mathbb E}\,} \tau_I(X)= \int_I g_I(0,y)\, m(dy).
\end{equation}
First we obtain a uniform bound on $\tau_{I^*}(X^\varepsilon)$.
If
$A^\varepsilon_t=t\land \tau_{I^*}(X^\varepsilon)$, then
$${{\mathbb E}\,}[A^\varepsilon_\infty-A^\varepsilon_t\mid \sF_t]={{\mathbb E}\,}^{X_t^\varepsilon} A^\varepsilon_\infty
\le \sup_x {{\mathbb E}\,}^x \tau_{I^*}(X^\varepsilon).$$
The last term is equal to
$$\sup_x \int_{I^*} g_{I^*}(x,y) \Big(1+\frac{\gamma}{\varepsilon}1_{[-\varepsilon,\varepsilon]}(y)\Big)\, dy.$$
A simple calculation shows that this is bounded by $$c(r-q+2\delta)^2+c\gamma (r-q+2\delta),$$
where $c$ does not depend on $r, q, \delta$, or $\varepsilon$.
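Indeed, $g_{I^*}(x,y)\le c(r-q+2\delta)$ for $x,y\in I^*$, while $m_\varepsilon$ has density $1+\frac{\gamma}{\varepsilon}1_{[-\varepsilon,\varepsilon]}$, so
$$\int_{I^*} g_{I^*}(x,y)\, dy\le c(r-q+2\delta)^2, \qq
\frac{\gamma}{\varepsilon}\int_{-\varepsilon}^{\varepsilon} g_{I^*}(x,y)\, dy\le 2c\gamma(r-q+2\delta),$$
uniformly in $x\in I^*$ and $\varepsilon$.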
By Theorem I.6.10 of \cite{pta}, we then deduce that
$$ {{\mathbb E}\,} \tau_{I^*}(X^\varepsilon)^2
={{\mathbb E}\,} (A^\varepsilon_\infty)^2<c<\infty,$$
where $c$ does not depend on $\varepsilon$.
By Chebyshev's inequality, for each $t$,
$${\mathbb P}(\tau_{I^*}(X^\varepsilon)\ge t)\le c/t^2.$$
Next we obtain an upper bound on ${{\mathbb E}\,} \tau_I(X)$ in terms of $g_{I^*}$.
We have
\begin{align*}
{\mathbb P}(\tau_I(X)>t)&={\mathbb P}(\sup_{s\le t} |X_s|\le r,\inf_{s\le t}|X_s|\ge q)\\
&\le
\limsup_{\varepsilon_n\to 0} {\mathbb P}(\sup_{s\le t}|X_s^{\varepsilon_n}|\le r,\inf_{s\le t}|X_s^{\varepsilon_n}|\ge q)\\
&\le \limsup_{\varepsilon_n\to 0} {\mathbb P}(\tau_{I^*}(X^{\varepsilon_n})>t)\le c/t^2.
\end{align*}
Choose $u_0$ such that
$$\int_{u_0}^\infty {\mathbb P}(\tau_I(X)>t)\, dt<\eta, \qq
\int_{u_0}^\infty {\mathbb P}(\tau_{I^*}(X^{\varepsilon_n})>t)\, dt<\eta$$
for each $\varepsilon_n$.
Let $f$ and $g$ be continuous functions taking values in $[0,1]$
such that $f$ is equal
to $1$ on $(-\infty,r]$ and $0$ on $[r+{\partial}ta, \infty)$
and $g$ is equal to 1 on $[q,\infty)$ and 0 on $(-\infty,q-{\partial}ta]$.
We have
\begin{align*}
{\mathbb P}(\sup_{s\le t}|X_s|\le r,\inf_{s\le t}|X_s|\ge q)
&\le {{\mathbb E}\,}[ f(\sup_{s\le t}|X_s|)g(\inf_{s\le t}|X_s|)]\\
&=\lim_{\varepsilon_n\to 0} {{\mathbb E}\,}[ f(\sup_{s\le t}|X_s^{\varepsilon_n}|)g(\inf_{s\le t}|X_s^{\varepsilon_n}|)].
\end{align*}
Then
{\allowdisplaybreaks
\begin{align*}
\int_0^{u_0} {\mathbb P}(\tau_I(X)>t)\, dt&=\int_0^{u_0} {\mathbb P}(\sup_{s\le t}|X_s|\le r,\inf_{s\le t}|X_s|\ge q)\, dt\\
&\le \int_0^{u_0}{{\mathbb E}\,}[ f(\sup_{s\le t}|X_s|)g(\inf_{s\le t}|X_s|)]\, dt\\
&=\int_0^{u_0} \lim_{\varepsilon_n\to 0} {{\mathbb E}\,} [f(\sup_{s\le t}|X_s^{\varepsilon_n}|)g(\inf_{s\le t}|X^{\varepsilon_n}_s|)]\, dt\\
&=\lim_{\varepsilon_n\to 0} \int_0^{u_0} {{\mathbb E}\,} [f(\sup_{s\le t}|X_s^{\varepsilon_n}|)g(\inf_{s\le t}|X^{\varepsilon_n}_s|)]\, dt\\
&\le \limsup_{\varepsilon_n\to 0}\int_0^{u_0}{\mathbb P}(\sup_{s\le t}|X_s^{\varepsilon_n}|\le r+\delta,\inf_{s\le t}|X_s^{\varepsilon_n}|\ge q-\delta)\, dt\\
&\le \limsup_{\varepsilon_n\to 0} \int_0^{u_0}{\mathbb P}(\tau_{I^*}(X^{\varepsilon_n})\ge t)\, dt\\
&\le \limsup_{\varepsilon_n\to 0} {{\mathbb E}\,} \tau_{I^*}(X^{\varepsilon_n}).
\end{align*}
}
Hence
$${{\mathbb E}\,} \tau_I(X)\le \int_0^{u_0}{\mathbb P}(\tau_I(X)>t)\, dt+\eta
\le \limsup_{\varepsilon_n\to 0} {{\mathbb E}\,} \tau_{I^*}(X^{\varepsilon_n})+\eta.$$
We now use the fact that $\eta$ is arbitrary and let $\eta\to 0$.
Then
\begin{align*}
{{\mathbb E}\,} \tau_I(X)&\le \limsup_{\varepsilon_n\to 0} {{\mathbb E}\,} \tau_{I^*}(X^{\varepsilon_n})\\
&=\limsup_{\varepsilon_n\to 0} \int_{I^*} g_{I^*}(0,y)\Big(1+\frac{\gamma}{\varepsilon_n}1_{[-\varepsilon_n,\varepsilon_n]}(y)\Big)\, dy\\
&=\int_{I^*} g_{I^*}(0,y)m(dy).
\end{align*}
We next use
the joint continuity of $g_{[q,r]}(x,y)$ in the variables $q, r, x$, and $y$.
Letting $\delta\to 0$, we obtain
$${{\mathbb E}\,} \tau_I(X)\le \int_I g_I(0,y)\, m(dy).$$
The lower bound for ${{\mathbb E}\,} \tau_I(X)$ is done similarly,
and we obtain
\eqref{approx-E102}.
\noindent \emph{Step 5.}
Next we show that $X$ is a regular strong Markov process. This means
that if $x\ne y$, ${\mathbb P}^x(X_t=y\mbox{ for some }t)>0$. To show this,
assume without loss of generality that $y<x$. Suppose $X$ starting from
$x$ does not hit $y$ with positive probability.
Let $z=x+4|x-y|$. Since ${{\mathbb E}\,}^x\tau_{[y,z]}<\infty$, then with probability
one $X$ hits $z$ and does so before hitting $y$.
Hence $T_z=\tau_{[y,z]}<\infty$ a.s. Choose $t$ large
so that ${\mathbb P}^x(\tau_{[y,z]}>t)<1/16$.
By the optional stopping theorem,
$${{\mathbb E}\,}^x X_{T_z\land t}\ge z{\mathbb P}^x(T_z\le t)+y{\mathbb P}^x(T_z> t)
=z-(z-y){\mathbb P}^x(T_z>t).$$
By our choice of $z$, this is greater than $x$, which contradicts that
$X$ is a martingale. Hence $X$ must hit $y$ with positive probability.
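As a numerical sanity check on the optional stopping computation above (purely illustrative, not part of the proof), one can verify that with $y<x$, $z=x+4|x-y|$, and ${\mathbb P}^x(T_z>t)\le 1/16$, the lower bound $z-(z-y){\mathbb P}^x(T_z>t)$ always exceeds $x$:

```python
# Illustrative check: with y < x, z = x + 4|x - y| and p = P(T_z > t) <= 1/16,
# the optional-stopping bound z - (z - y) * p strictly exceeds x, which
# contradicts E^x X_{T_z ^ t} = x for a martingale.
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = x - random.uniform(0.1, 5.0)      # y < x
    z = x + 4 * abs(x - y)
    p = random.uniform(0.0, 1.0 / 16)     # the choice of t makes p < 1/16
    assert z - (z - y) * p > x
print("bound exceeds x in all trials")
```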
Therefore $X$ is a regular continuous strong Markov process on the
real line. Since it is a martingale, it is on natural scale. Since
its speed measure is the same as that of $X^M$ by \eqref{approx-E102}, we conclude from
\cite[Theorem IV.2.5]{Ptpde} that $X$ and $X^M$ have the same law.
In particular, $X$ is a martingale with speed measure $m$.
\noindent \emph{Step 6.} Since we obtain the same limit law no matter what sequence
$\varepsilon_n$ we started with, the full sequence $P_t^\varepsilon$ converges to $P_t$
and ${\mathbb P}^x_\varepsilon$ converges weakly to ${\mathbb P}^x$ for each $x$.
All of the above applies equally well to $Y$ and its transition probabilities
and laws.
\end{proof}
Recall that the sequence $(X^\varepsilon, Y^\varepsilon, \widetilde W)$ is tight with respect to
$(C[0,N])^3$ for each $N$.
Take a subsequence $(X^{\varepsilon_n}, Y^{\varepsilon_n}, \widetilde W)$ that
converges weakly, say to the triple $(X,Y,W)$, with respect to $(C[0,N])^3$
for
each $N$.
The last task of this section is to prove that $X$ and $Y$ satisfy \eqref{SMM-E200}.
\begin{theorem}\label{approx-T55}
$(X,W)$ and $(Y,W)$ each satisfy \eqref{SMM-E200}.
\end{theorem}
\begin{proof}
We prove this for $X$ as the proof for $Y$ is exactly the same.
Clearly $W$ is a Brownian motion.
Fix $N$.
We will first show
\begin{equation}\label{Approx-E64}
\int_0^t 1_{(X_s\ne 0)}\, dX_s=\int_0^t 1_{(X_s\ne 0)}\, dW_s
\end{equation}
if $t\le N.$
Let $\delta>0$ and let $g$ be a continuous function taking values
in $[0,1]$ such that $g(x)=0$ if $|x|<\delta$ and $g(x)=1$ if $|x|\ge
2\delta$.
Since $g$ is bounded and continuous and $(X^{\varepsilon_n},\widetilde W)$ converges
weakly to $(X,W)$, then $(X^{\varepsilon_n}, \widetilde W, g(X^{\varepsilon_n}))$ converges
weakly to $(X,W, g(X))$. Moreover, since $g$ is 0 on $(-\delta, \delta)$, then
\begin{equation}\label{CE1}
\int_0^t g(X^{\varepsilon_n}_s)\, d\widetilde W_s= \int_0^t g(X^{\varepsilon_n}_s)\, dX^{\varepsilon_n}_s
\end{equation}
for ${\varepsilon_n}$ small enough.
By Theorem 2.2 of \cite{Kurtz-Protter}, we have
$$\Big(\int_0^t g(X^{\varepsilon_n}_s)\, d\widetilde W_s, \int_0^t g(X^{\varepsilon_n}_s)\, dX^{\varepsilon_n}_s\Big)$$
converges weakly to
$$\Big(\int_0^t g(X_s)\, dW_s, \int_0^t g(X_s)\, dX_s\Big).$$
Then
\begin{align*}{{\mathbb E}\,}&\arctan\Big(\Big|\int_0^t g(X_s)\, dW_s- \int_0^t g(X_s)\, dX_s\Big|\Big)\\
&=\lim_{n\to \infty}
{{\mathbb E}\,}\arctan\Big(\Big|\int_0^t g(X^{\varepsilon_n}_s)\, d\widetilde W_s- \int_0^t g(X^{\varepsilon_n}_s)\, dX^{\varepsilon_n}_s\Big|\Big)=0,
\end{align*}
or
$$\int_0^t g(X_s)\, dW_s= \int_0^t g(X_s)\, dX_s, \qquad\mbox{\rm a.s.}$$
Letting $\delta\to 0$ proves \eqref{Approx-E64}.
We know $$X^M_t=\int_0^t 1_{(X^M_s\ne 0)}\, dX^M_s.$$
Since $X^M$ and $X$ have the same law, the same is true if we replace $X^M$ by $X$.
Combining with \eqref{Approx-E64} proves
\eqref{SMM-E200}.
\end{proof}
\section{Some estimates}\label{S:est}
Let $$j^\varepsilon(s)=\begin{cases} 1,&X^\varepsilon_s\in [-\varepsilon,\varepsilon] \mbox{ or }
Y^\varepsilon_s\in [-2\varepsilon,2\varepsilon] \mbox{ or both};\\
0,& \mbox{otherwise}.\end{cases}$$ Let
$$J^\varepsilon_t=\int_0^t j^\varepsilon(s)\, ds.$$
Set $$Z_t^\varepsilon=X_t^\varepsilon-Y^\varepsilon_t,$$
suppose $Z_0^\varepsilon=0$,
and define $\psi_\varepsilon(x,y)=\sigma_\varepsilon(x)-\sigma_{2\varepsilon}(y)$.
Then
$$dZ^\varepsilon_t=\psi_\varepsilon(X^\varepsilon_t, Y^\varepsilon_t)\, d\widetilde W_t.$$
Let
\begin{align}
S_1&=\inf\{t: |Z_t^\varepsilon|\ge 6\varepsilon\},\label{est-E21}\\
T_i&=\inf\{t\ge S_i: |Z_t^\varepsilon|\notin [4\varepsilon,b]\},{\nonumber}\\
S_{i+1}&=\inf\{t\ge T_i: |Z_t^\varepsilon|\ge 6\varepsilon\}, \qquad\mbox{\rm and}{\nonumber}\\
U_b&=\inf\{t: |Z_t^\varepsilon|=b\}.{\nonumber}
\end{align}
\begin{proposition}\label{est-P1}
For each $n$,
$${\mathbb P}(S_n< U_b)\le \Big(1-\frac{2\varepsilon}{b}\Big)^n.$$
\end{proposition}
\begin{proof}
Since $X^\varepsilon$ is a recurrent diffusion, $\int_0^t 1_{[-\varepsilon,\varepsilon]}(X_s^\varepsilon)
\, ds$ tends to infinity a.s.\ as $t\to \infty$.
When $x\in [-\varepsilon,\varepsilon]$, then $|\psi_{\varepsilon}(x,y)|\ge c\varepsilon$, and we
conclude that $\langle Z^\varepsilon\rangle_t\to \infty$ as $t\to \infty$.
Let $\{\mathcal{F}_t\}$ be the filtration generated by $\widetilde W$.
$Z^\varepsilon_{t+S_n}-Z^\varepsilon_{S_n}$ is a martingale started at 0 with respect
to the regular conditional probability for the law of $(X^\varepsilon_{t+S_n}, Y^\varepsilon_{t+S_n})$ given $\mathcal{F}_{S_n}$.
The conditional probability that
it hits $4\varepsilon$ before $b$
if $Z^\varepsilon_{S_n}=6\varepsilon$ is
the same as the conditional probability it hits $-4\varepsilon$ before $-b$ if
$Z_{S_n}^\varepsilon=-6\varepsilon$ and is equal to
$$\frac{b-6\varepsilon}{b-4\varepsilon}\le 1-\frac{2\varepsilon}{b}.$$
Since this is independent of $\omega$, we have
$${\mathbb P}\Big(|Z^\varepsilon_{t+S_n}-Z^\varepsilon_{S_n}|\mbox{ hits }4\varepsilon \mbox{ before hitting } b
\mid \mathcal{F}_{S_n}\Big)\le 1-\frac{2\varepsilon}{b}.$$
Let $V_n=\inf\{t> S_n: |Z_t^\varepsilon|=b\}$.
Then
\begin{align*}
{\mathbb P}(S_{n+1}<U_b)&\le {\mathbb P}(S_n<U_b, T_n<V_n)\\
&={{\mathbb E}\,}[{\mathbb P}(T_n<V_n\mid \mathcal{F}_{S_n}); S_n<U_b]\\
&\le \Big(1-\frac{2\varepsilon}{b}\Big){\mathbb P}(S_n<U_b).
\end{align*}
Our result follows by induction.
\end{proof}
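The hitting probability estimate used in the proof rests on the elementary inequality $(b-6\varepsilon)/(b-4\varepsilon)\le 1-2\varepsilon/b$; the following snippet (an illustration only) verifies it on a grid of values:

```python
# Check (b - 6*eps) / (b - 4*eps) <= 1 - 2*eps / b whenever 0 < 6*eps < b.
for b in (0.5, 1.0, 2.0, 10.0):
    for eps in (1e-4, 1e-3, 1e-2):
        if 6 * eps < b:
            assert (b - 6 * eps) / (b - 4 * eps) <= 1 - 2 * eps / b + 1e-15
print("inequality verified on the grid")
```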
\begin{proposition}\label{est-P2} There exists a constant $c_1$ such that
$${{\mathbb E}\,} J^\varepsilon_{T_n}\le c_1n\varepsilon$$
for each $n$.
\end{proposition}
\begin{proof} For $t$ between times $S_n$ and $T_n$ we know that
$|Z_t^\varepsilon|$ lies between
$4\varepsilon$ and $b$. Then at least one of $X^\varepsilon_t\notin [-\varepsilon,\varepsilon]$
and $Y^\varepsilon_t\notin [-2\varepsilon,2\varepsilon]$ holds.
If exactly one holds, then $|\psi_\varepsilon(X_t^\varepsilon,Y_t^\varepsilon)|\ge
1-\sqrt{2\varepsilon/\gamma}\ge 1/2$ if $\varepsilon$ is small enough. If
both hold, we can only say that $d\langle Z^\varepsilon\rangle_t\ge 0$. In any case,
$$d\langle Z^\varepsilon\rangle_t\ge \tfrac14\,dJ^\varepsilon_t$$
for $S_n\le t\le T_n$.
$Z_t^\varepsilon$ is a martingale, and by Lemma \ref{prelim-L1}
and an argument using regular conditional probabilities similar to those
we have done earlier,
\begin{equation}\label{est-E301}
{{\mathbb E}\,}[J^\varepsilon_{T_n}-J^\varepsilon_{S_n}]\le 4{{\mathbb E}\,}[\langle Z^\varepsilon\rangle_{T_n}
-\langle Z^\varepsilon\rangle_{S_n}]\le 4(b-6\varepsilon)(2\varepsilon)=c\varepsilon.
\end{equation}
Between times $T_n$ and $S_{n+1}$ it is possible that
$\psi_\varepsilon(X_t^\varepsilon,Y_t^\varepsilon)$ can be 0 or it can be
larger than $c\sqrt{\varepsilon/\gamma}$. However if either $X_t^\varepsilon\in [-\varepsilon,\varepsilon]$
or $Y_t^\varepsilon\in [-2\varepsilon,2\varepsilon]$, then
$\psi_\varepsilon(X_t^\varepsilon,Y_t^\varepsilon)\ge c\sqrt{\varepsilon/\gamma}$.
Thus
$$d\langle Z^\varepsilon\rangle_t\ge c\varepsilon\,dJ^\varepsilon_t$$
for $T_n\le t\le S_{n+1}$. By Lemma \ref{prelim-L1}
\begin{equation}\label{est-E302}
{{\mathbb E}\,}[J^\varepsilon_{S_{n+1}}-J^\varepsilon_{T_n}]\le c\varepsilon^{-1}{{\mathbb E}\,}[\langle Z^\varepsilon\rangle_{S_{n+1}}
-\langle Z^\varepsilon\rangle_{T_n}]\le c\varepsilon^{-1}(2\varepsilon)(10\varepsilon)=c\varepsilon.
\end{equation}
Summing each of \eqref{est-E301} and \eqref{est-E302} over the first $n$ indices
and combining yields the proposition.
\end{proof}
\begin{proposition}\label{est-P3} Let $K>0$ and $\eta>0$. There exists $R$ depending on
$K$ and $\eta$ such that
$${\mathbb P}(J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)}<K)\le \eta, \qquad \varepsilon\le 1/2.$$
\end{proposition}
\begin{proof} Fix $\varepsilon\le 1/2$. We will see that our estimates are
independent of $\varepsilon$. Note
$$J^\varepsilon_t\ge H_t=\int_0^t 1_{[-\varepsilon,\varepsilon]}(X_s^\varepsilon)\,ds.$$
Therefore to prove the proposition it is enough to prove that
$${\mathbb P}^0_\varepsilon(H_{\tau_{[-R,R]}(X^\varepsilon)}<K)\le \eta$$
if $R$ is large enough.
Let $I=[-1,1]$.
We have
$${{\mathbb E}\,}^0_\varepsilon H_{\tau_I(X^\varepsilon)}\ge \int_{-1}^1 g_{I} (0,y)\frac{\gamma}{\varepsilon} 1_{[-\varepsilon,\varepsilon]}(y)
\, dy\ge c_1.$$
On the other hand, for any $x\in I$,
$${{\mathbb E}\,}^x_\varepsilon H_{\tau_{I}(X^\varepsilon)}=\int_I g_I(x,y) \frac{\gamma}{\varepsilon}1_{[-\varepsilon,\varepsilon]}(y) \,
dy\le c_2.$$
Combining this with
$${{\mathbb E}\,}^0_\varepsilon[H_{\tau_I(X^\varepsilon)}-H_t\mid \mathcal{F}_t]\le {{\mathbb E}\,}^{X_t^\varepsilon}_\varepsilon H_{\tau_{I}(X^\varepsilon)}$$
and Theorem I.6.10 of \cite{pta} (with $B=c_2$ there), we see that
$${{\mathbb E}\,} H_{\tau_I(X^\varepsilon)}^2\le c_3.$$
Let ${\alpha}_0=0$, $\beta_i=\inf\{t>{\alpha}_i: |X_t^\varepsilon|=1\}$ and
${\alpha}_{i+1}=\inf\{t>\beta_i: X_t^\varepsilon=0\}.$
Since $X_t^\varepsilon$ is a recurrent diffusion, each ${\alpha}_i$ is finite a.s.\ and
$\beta_i\to \infty$ as $i\to \infty$.
Let $V_i=H_{\beta_i}-H_{{\alpha}_i}$. By the strong Markov property,
under ${\mathbb P}^0_\varepsilon$ the $V_i$ are i.i.d.\ random variables with mean larger
than $c_1$ and variance bounded
by $c_4$, where $c_1$ and $c_4$ do not depend on $\varepsilon$ as long as $\varepsilon<1/2$.
Then
\begin{align*}
{\mathbb P}^0_\varepsilon\Big(\sum_{i=1}^k V_i\le c_1k/2\Big)&\le {\mathbb P}^0_\varepsilon\Big(\Big|\sum_{i=1}^k (V_i-{{\mathbb E}\,} V_i)\Big|\ge c_1k/2\Big)\\
&\le \frac{{\mathop {{\rm Var\, }}}(\sum_{i=1}^k V_i)}{(c_1k/2)^2}\\
&\le 4c_4/(c_1^2k).
\end{align*}
Taking $k$ large enough, we see that
$${\mathbb P}^0_\varepsilon\Big(\sum_{i=1}^k V_i\le K\Big)\le \eta/2.$$
Since $X_t^\varepsilon$ is a martingale, started at 1 it hits $R$
before hitting 0 with probability $1/R$. Using the strong Markov property,
the probability that $|X^\varepsilon|$ makes no more than $k$ downcrossings of $[0,1]$ before
exiting
$[-R,R]$ is bounded by
$$1-\Big(1-\frac{1}{R}\Big)^k.$$
If we choose $R$ large enough, this last quantity will be less than
$\eta/2$. Thus, except for an event of probability at most $\eta$, $X_t^\varepsilon$
will exit $[-1,1]$ and return to 0 at least $k$ times before
exiting $[-R,R]$ and the total amount of time spent in $[-\varepsilon,\varepsilon]$
before exiting $[-R,R]$ will be at least $K$.
\end{proof}
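The gambler's-ruin fact used above, that a martingale started at 1 hits $R$ before 0 with probability $1/R$, can be illustrated with a symmetric simple random walk (a Monte Carlo sketch, not part of the argument):

```python
# Monte Carlo illustration: a symmetric simple random walk started at 1
# hits R before 0 with probability exactly 1/R (gambler's ruin).
import random

def hits_R_first(R, rng):
    pos = 1
    while 0 < pos < R:
        pos += rng.choice((-1, 1))
    return pos == R

rng = random.Random(42)
R, trials = 5, 20000
est = sum(hits_R_first(R, rng) for _ in range(trials)) / trials
assert abs(est - 1 / R) < 0.02
print(f"estimate {est:.3f} vs exact {1/R:.3f}")
```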
\begin{proposition}\label{est-P4} Let $\eta>0, R>0,$ and $I=[-R,R]$. There exists
$t_0$ depending on $R$ and $\eta$ such that
$${\mathbb P}^0_\varepsilon(\tau_I(X^\varepsilon)>t_0)\le \eta, \qquad \varepsilon\le 1/2.$$
\end{proposition}
\begin{proof}
If $\varepsilon\le 1$,
$${{\mathbb E}\,}^0_\varepsilon \tau_I(X^\varepsilon)=\int_I g_I(0,y)\, m_\varepsilon(dy).$$
A calculation shows this is bounded by $cR^2+cR$,
where
$c$ does not depend on $\varepsilon$ or $R$. Applying Markov's inequality,
$${\mathbb P}^0_\varepsilon(\tau_I(X^\varepsilon)>t_0)\le \frac{{{\mathbb E}\,}^0_\varepsilon\tau_I(X^\varepsilon)}{t_0},$$
which is bounded by $\eta$ if $t_0\ge c(R^2+R)/\eta$.
\end{proof}
\section{Pathwise uniqueness fails}\label{S:PU}
We continue the notation of Section \ref{S:est}.
The strategy of proving that pathwise uniqueness does not hold owes a great
deal to \cite{Barlow-LMS}.
\begin{theorem}\label{PU-T1} There exist three processes $X,Y$, and $W$ and
a probability measure ${\mathbb P}$ such that $W$ is a Brownian motion
under ${\mathbb P}$, $X$ and $Y$ are continuous martingales under ${\mathbb P}$
with speed measure $m$ starting at 0, \eqref{SMM-E200} holds for $X$,
\eqref{SMM-E200} holds when $X$ is replaced by $Y$, and
${\mathbb P}(X_t\ne Y_t\mbox{ for some }t>0)>0$.
\end{theorem}
\begin{proof}
Let $(X^\varepsilon, Y^\varepsilon, \widetilde W)$ be defined as in \eqref{approx-E671} and
\eqref{approx-E672} and
choose a sequence $\varepsilon_n$ decreasing to 0 such that
the triple converges weakly on $C[0,N]\times C[0,N]\times C[0,N]$
for each $N$. By Theorems \ref{approx-T101} and \ref{approx-T55}, the weak limit,
$(X,Y,W)$ is such that $X$ and $Y$ are continuous martingales with
speed measure $m$, $W$ is a Brownian motion, and \eqref{SMM-E200}
holds for $X$ and also when $X$ is replaced by $Y$.
Let $b=1$ and
let $S_n$, $T_n$, and $U_b$ be defined by \eqref{est-E21}.
Let $A_1(\varepsilon,n)$ be the event where $T_n< U_b$.
By Proposition \ref{est-P1} and the fact that $S_n\le T_n$,
$${\mathbb P}(A_1(\varepsilon,n))\le {\mathbb P}(S_n< U_b)\le \Big(1-\frac{2\varepsilon}{b}\Big)^n.$$
Choose $n\ge \beta/\varepsilon$, where $\beta$ is large enough so that
the right hand side is less than $1/5$ for all $\varepsilon$ sufficiently
small.
By Proposition \ref{est-P2},
$${{\mathbb E}\,} J_{T_n}^\varepsilon\le c_1n \varepsilon=c_1\beta.$$
By Markov's inequality,
$${\mathbb P}(J^\varepsilon_{T_n}\ge 5c_1\beta)\le {\mathbb P}(J_{T_n}^\varepsilon\ge 5{{\mathbb E}\,} J_{T_n}^\varepsilon)
\le 1/5.$$
Let $A_2(\varepsilon, n)$ be the event where $J^\varepsilon_{T_n}\ge 5c_1\beta$.
Take $K=10c_1\beta$. By Proposition \ref{est-P3},
there exists $R$ such that
$${\mathbb P}(J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)}<K)\le 1/5.$$
Let $A_3(\varepsilon,R,K)$ be the event where $J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)}< K$.
Choose $t_0$ using Proposition \ref{est-P4}, so that except for an
event of probability $1/5$ we have $\tau_{[-R,R]}(X^\varepsilon)\le t_0$.
Let $A_4(\varepsilon,R, t_0)$ be the event where $\tau_{[-R,R]}(X^\varepsilon)\le t_0$.
Let
$$B(\varepsilon)=(A_1(\varepsilon,n)\cup A_2(\varepsilon,n)\cup A_3(\varepsilon,R,K)\cup
A_4(\varepsilon, R, t_0))^c.$$
Note ${\mathbb P}(B(\varepsilon))\ge 1/5$.
Suppose we are on the event $B(\varepsilon)$. We have
$$J^\varepsilon_{T_n}\le 5c_1\beta< K\le J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)}.$$
We conclude that $T_n< \tau_{[-R,R]}(X^\varepsilon)$.
Therefore, on the event $B(\varepsilon)$,
$T_n$ has occurred before time $t_0$; moreover, since we are on $A_1(\varepsilon,n)^c$, we have $U_b\le T_n$, so $U_b$ has also occurred before time $t_0$.
Since ${\mathbb P}(B(\varepsilon))\ge 1/5$,
$${\mathbb P}(\sup_{s\le t_0} |Z_s^\varepsilon|\ge b)\ge 1/5.$$
Since $Z^\varepsilon=X^\varepsilon-Y^\varepsilon$ converges weakly to $X-Y$,
it follows that with probability at least $1/5$ we have $\sup_{s\le t_0} |Z_s|\ge b/2$.
Hence $X_t\ne Y_t$ for some $t$ with positive probability, and pathwise uniqueness
does not hold.
\end{proof}
We also can conclude that strong existence does not hold.
The argument we use is similar to ones given in \cite{Cherny}, \cite{Engelbert},
and \cite{Kurtz}.
\begin{theorem}\label{PU-T2} Let $W$ be a Brownian motion. There does
not exist a continuous martingale $X$
starting at 0
with speed measure $m$
such that \eqref{SMM-E200} holds and such that $X$ is measurable
with respect to the filtration of $W$.
\end{theorem}
\begin{proof}
Let $W$ be a Brownian motion and suppose
there did exist such a process $X$.
Then there is a measurable map
$F:C[0,\infty)\to C[0,\infty)$ such that $X=F(W)$.
Suppose $Y$ is any other continuous martingale with
speed measure $m$ satisfying \eqref{SMM-E200}. Then
by Theorem \ref{WU-T1}, the law of $Y$ equals the law of $X$, and
by Theorem \ref{WU-T21}, the joint law of $(Y,W)$ is equal
to the joint law of $(X,W)$. Therefore $Y$ also satisfies
$Y=F(W)$, and we get pathwise uniqueness since $X=F(W)=Y$.
However, we know pathwise uniqueness does not hold. We conclude
that no such $X$ can exist, that is, strong existence does not hold.
\end{proof}
\noindent {\bf Richard F. Bass}\\
Department of Mathematics\\
University of Connecticut \\
Storrs, CT 06269-3009, USA\\
{\tt [email protected]}
\end{document}
\begin{document}
\title{\Large{Semiparametric generalized linear models for time-series data}}
\author{\normalsize{{\bf Thomas Fung}$^a$ and {\bf Alan Huang}$^b$} \vspace{3mm} \\
\normalsize{$^a$Macquarie University and $^b$University of Queensland} \\
\normalsize{[email protected], [email protected]}}
\date{\normalsize{3 November, 2015}}
\maketitle
\begin{abstract}
Time-series data in population health and epidemiology often involve non-Gaussian responses. In this note, we propose a semiparametric generalized linear models framework for time-series data that does not require specification of a working conditional response distribution for the data. Instead, the underlying response distribution is treated as an infinite-dimensional parameter which is estimated simultaneously with the usual finite-dimensional parameters via a maximum empirical likelihood approach. A general consistency result for the resulting estimators is given. Simulations suggest that both estimation and inferences using the proposed method can perform as well as correctly-specified parametric models even for moderate sample sizes, but can be more robust than parametric methods under model misspecification. The method is used to analyse the Polio dataset from \cite{Zeger1988} and a recent Kings Cross assault dataset from \cite{MWKF2015}. \vspace{2mm} \\
Keywords: Empirical likelihood; Generalized Linear Models; Semiparametric Model; Time-series of counts.
\end{abstract}
\section{Introduction}
\label{intro}
Datasets in population health and epidemiology often involve non-Gaussian responses, such as incidence counts and time-to-event measurements, collected over time. Being outside the familiar Gaussian autoregressive and moving average (\textsc{arma }) framework makes modelling inherently difficult since it is not always apparent how to specify suitable conditional response distributions for the data. Consider, for example, the Polio dataset from \cite{Zeger1988}, consisting of monthly counts of poliomyelitis cases in the USA from year 1970 to 1983 as reported by the Centre of Disease Control. \cite{DDW2000} analysed this dataset using an observation-driven Poisson \textsc{arma } model, but find evidence of overdispersion in the counts. To remedy this, \cite{DW2009} considered a negative-binomial model for the data. However, probability integral transformation plots, which should resemble the histogram of a standard uniform sample if the underlying distribution assumptions are appropriate, show noticeable departures from uniformity for both the Poisson and negative-binomial models (see panels (a) and (b) of Fig. \ref{figure:polio pit} and Section 5.1). This suggests that neither distribution fits the data adequately. A flexible yet parsimonious framework for modelling non-Gaussian time-series datasets is therefore warranted for such situations where specification of the response distribution is difficult.
\begin{figure}
\caption{Goodness-of-fit plots for the Polio dataset. Panels (a)--(c) are histograms of the probability integral transforms (\textsc{pit}).}
\label{figure:polio pit}
\end{figure}
This paper concerns the class of generalized linear models (\textsc{glm}s) for time-series data. More precisely, we consider models of the form
\begin{equation}
\label{eq:model1}
Y_t | \mathcal{F}_{t-1} \sim \mbox{ExpFam}\{\mu(\mathcal{F}_{t-1}; \beta,\gamma)\} \ ,
\end{equation}
where $\mathcal{F}_{t-1}$ is the state of the system up to time $t$, which may include previous observations $Y_0,\ldots, Y_{t-1}$ and previous and current covariates $X_0, \ldots,X_{t}$, $\mu(\mathcal{F}_{t-1}; \beta, \gamma) = E(Y_t| \mathcal{F}_{t-1} )$ is a conditional mean function that may depend on a finite vector of parameters $(\beta^T,\gamma^T) \in \mathbb{R}^{q + r}$, and ExpFam$\{\mu\}$ is some exponential family indexed by its mean $\mu$. For clarity of exposition, we have split the mean-model parameter into two vectors here, with $\beta \in \mathbb{R}^q$ corresponding to the systematic component and $\gamma \in \mathbb{R}^r$ for the dependence structure of the model. We may also write $\mu_t \equiv \mu_t(\beta, \gamma)$ interchangeably for $\mu(\mathcal{F}_{t-1}; \beta, \gamma)$ whenever it is more convenient to do so.
Some specific models of the form (\ref{eq:model1}) have been discussed in \cite{KF2002}.
The framework (\ref{eq:model1}) is broad and encompasses, for example, Gaussian \textsc{arma } models for continuous data \citep[e.g.,][Chapter 3]{BD2009}, Poisson models for count data \citep[e.g.,][]{FRT2009} and gamma and inverse-Gaussian models for time-to-event data \citep[e.g.,][]{AB1999, GJ2006}. The following are three explicit models covered by this framework, with the third and perhaps most interesting example examined in more detail throughout this paper.
\begin{example}[Gaussian \textsc{arma } models]
\label{ex:1}
A process $\{Y_t\}$ is said to follow a Gaussian \textsc{arma}(1,1) process if it is stationary and satisfies
$$
Y_t = d+ \phi Y_{t-1}+Z_t+\psi Z_{t-1}
$$
where $\{Z_{t}\}$ is a Gaussian white noise process. Here, the exponential family in (\ref{eq:model1}) is the normal and the mean-model parameters are $\beta=d$ for the systematic component and $\gamma^T = (\phi, \psi)$ for the autoregressive and moving average parameters.
\end{example}
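A minimal simulation of this recursion can be sketched as follows (the parameter values $d=0.5$, $\phi=0.6$, $\psi=0.3$ are arbitrary choices for illustration, not values from the paper):

```python
# Simulate the ARMA(1,1) recursion Y_t = d + phi*Y_{t-1} + Z_t + psi*Z_{t-1}
# with Gaussian white noise Z_t.
import random

def simulate_arma11(n, d=0.5, phi=0.6, psi=0.3, seed=1):
    rng = random.Random(seed)
    y, z_prev = 0.0, 0.0
    path = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)          # Gaussian white noise
        y = d + phi * y + z + psi * z_prev
        z_prev = z
        path.append(y)
    return path

path = simulate_arma11(5000)
mean = sum(path) / len(path)   # stationary mean is d / (1 - phi) = 1.25
```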
\begin{example}[Poisson autoregression]
\label{ex:2}
\citet{Fokianos2011} recently reviewed the autoregressive Poisson model $
Y_t | \mathcal{F}_{t-1} \sim \mbox{Poisson}\{\mu_t\}
$ for count time-series data, where
$
\mu_t = d + a \mu_{t-1} + b Y_{t-1}
$
for a linear mean-model, or
$
\log(\mu_t) = d + a \log(\mu_{t-1}) + b \log(Y_{t-1}+1)
$
for a log-linear mean-model. Here, the particular exponential family in (\ref{eq:model1}) is the Poisson and the mean-model parameters are $\beta = d$ for the systematic component and $\gamma^T = (a,b)$ for the autoregressive component.
\end{example}
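A short simulation of the linear specification can be sketched as below (the parameter values are illustrative assumptions; the Poisson draws use plain inverse-cdf sampling):

```python
# Simulate Y_t | F_{t-1} ~ Poisson(mu_t) with mu_t = d + a*mu_{t-1} + b*Y_{t-1}.
import math, random

def poisson_draw(mu, rng):
    # inverse-cdf sampling of a Poisson(mu) variate
    u, k = rng.random(), 0
    p = cdf = math.exp(-mu)
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

def simulate_poisson_ar(n, d=1.0, a=0.3, b=0.4, seed=7):
    rng = random.Random(seed)
    mu, y, ys = d, 0, []
    for _ in range(n):
        mu = d + a * mu + b * y
        y = poisson_draw(mu, rng)
        ys.append(y)
    return ys

ys = simulate_poisson_ar(5000)
mean = sum(ys) / len(ys)   # stationary mean is d / (1 - a - b) = 10/3
```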
\begin{example}[Observation-driven Poisson models]
\label{ex:3}
\cite{DDS2003} introduced the following observation-driven model for count data $Y_t$ with covariates $x_t$,
$$
Y_t | \mathcal{F}_{t-1} \sim \mbox{Poisson}\left\{\mu_t=e^{W_t}\right\}\,.
$$
It is further assumed that the state equation $W_t$ follows the model
\begin{eqnarray*}
W_t &=& x_t^T \beta + Z_t, \quad Z_t = \phi_1(Z_{t-1} + e_{t-1}) +\ldots +\phi_s(Z_{t-s} + e_{t-s}) + \psi_1 e_{t-1} +\ldots+\psi_q e_{t-q}, \\
e_t &=& (Y_t - e^{W_t})/\mbox{var}(Y_t|\mathcal{F}_{t-1})^\lambda = (Y_t - e^{W_t})/e^{\lambda W_t}\quad \mbox{for some fixed } \lambda \in (0,1] \ .
\end{eqnarray*}
Here, the particular exponential family in (\ref{eq:model1}) is the Poisson, the mean-model parameters are $\beta$ for the systematic regression component and $\gamma^T = (\phi^T,\psi^T)$ for the autoregressive and moving average component. This model is a special case of a generalized linear autoregressive and moving average (\textsc{glarma}) model. Note that the entire \textsc{glarma } framework is covered by (\ref{eq:model1}).
\end{example}
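For concreteness, here is a simulation sketch of this model with $s=q=1$, $\lambda=1/2$ (Pearson residuals), and an intercept-only regression; all parameter choices are our own illustrative assumptions:

```python
# Simulate the observation-driven Poisson model with one AR and one MA lag:
# W_t = beta0 + Z_t, Z_t = phi*(Z_{t-1} + e_{t-1}) + psi*e_{t-1},
# e_t = (Y_t - exp(W_t)) / exp(W_t / 2)   (lambda = 1/2, Pearson residuals).
import math, random

def poisson_draw(mu, rng):
    if mu > 30:                      # normal approximation for large means
        return max(0, round(rng.gauss(mu, math.sqrt(mu))))
    u, k = rng.random(), 0
    p = cdf = math.exp(-mu)
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

def simulate_glarma(n, beta0=0.5, phi=0.5, psi=0.2, seed=3):
    rng = random.Random(seed)
    z_prev = e_prev = 0.0
    ys = []
    for _ in range(n):
        z = phi * (z_prev + e_prev) + psi * e_prev
        mu = math.exp(beta0 + z)     # state W_t = beta0 + Z_t
        y = poisson_draw(mu, rng)
        e = (y - mu) / math.sqrt(mu)
        z_prev, e_prev = z, e
        ys.append(y)
    return ys

ys = simulate_glarma(2000)
```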
While the fully parametric models in Examples \ref{ex:1}--\ref{ex:3} have proven useful for analyzing particular datasets, the requirement of a particular exponential family for the conditional response distribution may be too restrictive for general problems. Indeed, one only has to look at, for example, the myriad of quasi-Poisson \citep[][Chapter 4]{KF2002}, zero-inflated Poisson \citep{YZC2013} and negative-binomial models \citep{DW2009} to see that the basic Poisson model is often inadequate for modeling count data. Note also that these alternative models generally do not form exponential families and are therefore outside framework (\ref{eq:model1}).
This paper introduces a parsimonious approach for modeling time-series data that is flexible and robust, yet remains completely within the exponential family framework (\ref{eq:model1}). The proposed approach is based on a novel exponential tilt representation of generalized linear models (\textsc{glm}s), introduced in \cite{RG2009}, that allows the underlying exponential family in (\ref{eq:model1}) to be unspecified. Instead, the particular distribution generating the exponential family is treated as an infinite-dimensional parameter, which is then estimated simultaneously from data along with the usual model parameters $(\beta,\gamma)$.
The semiparametric framework we propose is rather generic, and can be used to relax distributional assumptions in essentially any parametric time-series model based on an exponential family. The usefulness of the proposed approach in practice is demonstrated by fitting it to the polio dataset from \cite{Zeger1988} as well as a more modern application to the Kings Cross assault dataset from \cite{MWKF2015}.
\section{Semiparametric GLMs for time-series data}
Using the exponential tilt representation of \citet{RG2009}, we can write model (\ref{eq:model1}) as
\begin{equation}
Y_t | \mathcal{F}_{t-1} \sim \exp(b_t + \theta_t y) dF(y) \ , \quad t=\ldots, 0,1,2,\ldots \ ,
\label{eq:model}
\end{equation}
where $F$ is some response distribution, $\{b_t\}$ are normalization constants satisfying
\begin{equation}
b_t = -\log \int \exp(\theta_t y) dF(y) \ , \quad t=\ldots, 0, 1,2 \ldots \ ,
\label{eq:normconstr}
\end{equation}
and the tilts $\{\theta_t\}$ are determined by the mean-model via
$$
\mu_t(\beta,\gamma) = E(Y_t|\mathcal{F}_{t-1}) = \int y \exp(b_t + \theta_t y) dF(y) \ , \quad t=\ldots, 0, 1, 2,\ldots .
$$
This last equation can also be rearranged as
\begin{equation}
0 = \int [y-\mu_t(\beta,\gamma)] \exp(\theta_t y) dF(y) \ , \quad t=\ldots, 0,1,2 \ldots,
\label{eq:meanconstr}
\end{equation}
which avoids explicit dependence on $b_t$.
\begin{remark} Note that $F$ is not the stationary distribution of the process. Rather, it is the underlying distribution generating the exponential family.
\end{remark}
Classical time-series models, such as those based on normal, Poisson or gamma distributions, can be recovered by setting $F$ to be a normal, Poisson or gamma distribution, respectively. The main advantage of the exponential tilt approach, however, is that the underlying distribution $F$ can be left
unspecified and instead treated as an infinite-dimensional parameter along with the usual mean-model parameters $(\beta,\gamma)$ in the log-likelihood function,
$$
l(\beta,\gamma, F) = \sum_{t=0}^n \left\{ \log dF(Y_t) + b_t + \theta_t Y_t \right\}
$$
with $b_t$ given by (\ref{eq:normconstr}) and $\theta_t$ given by (\ref{eq:meanconstr}), for $t=0,1,\ldots, n$.
Treating $F$ as a free parameter gives us much flexibility in model specification. For example, overdispersed or underdispersed counts can be dealt with simply by $F$ having heavier or lighter tails than a nominal Poisson distribution. Similarly, zero-inflated data can be dealt with by $F$ having an additional probability mass at zero. Two examples of non-standard exponential families covered by our framework are displayed in Fig. 1 in the Online Supplement. These include a zero-inflated Poisson family and a normal-mixture family.
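To make the tilt mechanics concrete, the following sketch (our own illustration, not code from the paper) solves the mean constraint (\ref{eq:meanconstr}) for $\theta$ by bisection when $F$ is discrete, and then computes the normalizer $b$ from (\ref{eq:normconstr}). Tilting a Poisson(1) base by $\theta$ yields a Poisson($e^\theta$) member of the family, which the final comment uses as a check:

```python
# Given atoms ys with base weights ps, find theta so that the exponentially
# tilted law exp(b + theta*y) dF(y) has mean mu, then compute the normalizer b.
import math

def tilt_for_mean(ys, ps, mu, lo=-5.0, hi=5.0, iters=200):
    def tilted_mean(theta):
        w = [p * math.exp(theta * y) for y, p in zip(ys, ps)]
        return sum(y * wi for y, wi in zip(ys, w)) / sum(w)
    for _ in range(iters):        # bisection: tilted mean is increasing in theta
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    b = -math.log(sum(p * math.exp(theta * y) for y, p in zip(ys, ps)))
    return theta, b

# Base F: Poisson(1) truncated at 30 atoms; tilt the family member to mean 2.5.
ys = list(range(30))
ps = [math.exp(-1.0) / math.factorial(k) for k in range(30)]
theta, b = tilt_for_mean(ys, ps, 2.5)
# Tilting Poisson(1) by theta gives Poisson(e^theta), so theta ~ log(2.5)
# and b = 1 - e^theta ~ -1.5.
```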
It is important to note that our framework is semiparametric as it contains infinitely many exponential families, one for each unique underlying distribution $F$. We are by no means restricted to the handful of classical families, such as the normal, Poisson and gamma families. Indeed, instead of specifying $F$ {\it a priori},
the key innovation in this paper is to allow the underlying distribution $F$ to be estimated simultaneously along with $\beta$, thereby letting the data inform us about which distribution fits best. While there are conceivably many methods that can be used to carry out this joint estimation, this paper proposes a maximum empirical likelihood approach. This is described in more detail in Section 3. Some other theoretical properties of the model are given in the Appendix.
\section{Estimation via maximum empirical likelihood}
\subsection{Methodology}
To fit model (\ref{eq:model})--(\ref{eq:meanconstr}) to data, consider replacing the unknown $dF$ with a probability mass function $\{p_0,\ldots, p_n\}$ on the observed support $Y_0, \ldots, Y_n$ with $\sum_{j=0}^n p_j = 1$, so that the log-likelihood becomes the following empirical log-likelihood:
\begin{eqnarray}
\label{eq:emplik}
l(\beta,\gamma, p) &=& \sum_{t=0}^n \left\{ \log p_t + b_t + \theta_t Y_t \right\} \ , \\
\label{eq:empnorm}
\mbox{with } b_t &=& -\log \sum_{j=0}^n \exp(\theta_t Y_j) p_j \ , \hspace{19mm} t=0, 1,\ldots, n \ , \\
\label{eq:empmean}
\mbox{and } 0 &=& \sum_{j=0}^n [Y_j - \mu_t( \beta,\gamma)] \exp(\theta_t Y_j) p_j \ , \hspace{6mm} t=0,1,\ldots, n\ .
\end{eqnarray}
Note that equations (\ref{eq:empnorm}) and (\ref{eq:empmean}) are the empirical versions of the normalization and mean constraints (\ref{eq:normconstr}) and (\ref{eq:meanconstr}), respectively. A maximum empirical likelihood estimator (\textsc{mele}) for $(\beta,\gamma, p)$ can then be defined as
$$
(\hat \beta, \hat \gamma, \hat p) = \mbox{argmax}_{\beta,\gamma, p} l(\beta,\gamma, p),
$$
subject to constraints (\ref{eq:empnorm}) and (\ref{eq:empmean}) and the corresponding \textsc{mele } for $F$ is given by the cumulative sum
$$
\hat F(y) = \sum_{j=0}^n \hat p_j 1(Y_j \le y) \ .
$$
This also means that the \textsc{mele } for the cdf of a particular response $Y_t$ is
$$
\hat{F}_{Y_t}(y) = \sum^n_{j=0} \hat{p}_j \exp(b_t+\theta_t Y_j) 1(Y_j \leq y).
$$
Clearly the choice of regressors will have a big impact on the behaviour of $\mu_t(\beta,\gamma)$ and subsequently the properties of the estimator. \cite{DDW2000} and \cite{DW2009} discuss a list of conditions such that a regressor can be incorporated. The conditions are fairly general and hold for a wide range of covariates, including a trend function $x_{nt} = f(t/n)$ where $f(\cdot)$ is a continuous function defined on the unit interval $[0,1]$ and harmonics functions that specify annual, seasonal or day-of-the-week effects.
The following theorem establishes a general consistency result for the proposed estimator. In particular, it states that whenever consistency holds for the correctly-specified parametric maximum likelihood estimator (\textsc{mle}),
then it also holds for the semiparametric \textsc{mele}.
The proof of Theorem \ref{thm:consistency} is provided in the Supplementary Materials.
\begin{theorem}[Consistency]
{\color{black} Assume that the joint process $(Y_t, W_t)$ is stationary and uniformly ergodic, and that conditions C1--C3 in Appendix \ref{sec:regcond} hold.}
Then, whenever the (correctly-specified) parametric \textsc{mle } of $(\beta,\gamma)$ is consistent, so is the semiparametric \textsc{mele } $(\hat \beta, \hat \gamma)$. Moreover,
$\sup_{y \in \mathcal{Y}} |\hat F(y) - F^*(y)| \to 0$ in probability as $n \to \infty$, where $F^*$ is the true underlying distribution.
\label{thm:consistency}
\end{theorem}
Establishing stationarity, ergodicity and consistency even in relatively simple parametric settings is still ongoing research \citep[e.g.][]{DDW2000}. Theorem \ref{thm:consistency} implies that whenever advances are made about the consistency of particular parametric estimators, then such results automatically apply to the semiparametric estimator also.
Note that our
maximum empirical likelihood estimation scheme is generic for any mean-model $\mu_t$. Specific models for $\mu_t$ may require additional constraints and calculations, the complexity of which depends on the complexity of the systematic and dependency structures of the mean-model. To illustrate this, we make explicit below the optimization problem for computing the semiparametric \textsc{mele } for the \textsc{glarma } model from Example \ref{ex:3}. A {\tt MATLAB} program carrying out these calculations can be found in {\tt spglarma3.m}, which is obtainable by emailing the authors. An alternative version, {\tt spglarma3par.m}, utilizes the parallel computing toolbox for faster computations.
The corresponding calculations for the fully parametric Poisson model can be found in \citet[][Section 3.1]{DDS2003}.
\vspace{3mm}
\noindent E\textsc{xample} 1.3 (continued). The constrained optimization problem for obtaining the \textsc{mele } of $(\beta, \phi, \psi, F)$ in the \textsc{glarma } model of \citet{DDS2003} with $\lambda = 1/2$ is:
\vspace{-2mm} {
\begin{eqnarray*}
\hline
\mbox{maximize } l(\beta, \phi, && \hspace{-4mm} \psi, p; b, \theta) = \sum_{t=0}^n \left\{ \log p_t + b_t + \theta_t Y_t \right\} \mbox{ over } \beta, \phi, \psi, p, b\mbox{ and } \theta \ , \\
\mbox{subject to } \hspace{5mm} 0 &=& \sum_{j=0}^n \left[Y_j - \exp{( x_t^T\beta + Z_t)}\right] \exp(\theta_t Y_j) p_j \ , \ t=0, \ldots, n, \hspace{0mm} \mbox{ (mean constraints)}\\
b_t &=& -\log \sum_{j=0}^n \exp(\theta_t Y_j) p_j \ , \qquad t=0,\ldots, n, \hspace{8mm} \mbox{ (normalization constraints)}\\
\mbox{where \hspace{1cm}}
Z_t &=& \phi_1(Z_{t-1}+e_{t-1}) + \cdots+ \phi_s(Z_{t-s}+e_{t-s}) \\
&& \hspace{34mm}+\psi_1e_{t-1}+\cdots+\psi_qe_{t-q} \ , \hspace{13mm} \mbox{ (\textsc{arma } constraints)} \\
e_t &=& \frac{Y_t- \exp{( x_t^T\beta + Z_t)}}{\sqrt{\mbox{var}(Y_t|\mathcal{F}_{t-1})}}, \hspace{54mm} \mbox{ (Pearson residuals)} \\
\mbox{var}(Y_t|\mathcal{F}_{t-1}) &=& \sum_{j=0}^n \left[Y_j - \exp{( x_t^T\beta + Z_t)}\right]^2 \exp{(b_t + \theta_t Y_j)}p_j \ . \hspace{13mm} \mbox{ (variance function)} \\
\mbox{The \textsc{mele } for $F$ is } && \hspace{-5mm} \mbox{then obtained by the cumulative sum }
\hat F(y) = \sum_{t=0}^n \hat p_t 1(Y_t \le y) .\\
\hline
\end{eqnarray*}}
\noindent An alternative to Pearson residuals is score-type residuals, obtained by setting $\lambda=1$:
$$
\hspace{3cm} e_t = \frac{Y_t- \exp{( x_t^T\beta + Z_t)}}{\mbox{var}(Y_t|\mathcal{F}_{t-1})}. \hspace{57mm} \mbox{(Score-type residuals)}
$$
The method of \cite{CKL2008} can also be used.
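To make the roles of the normalization and mean constraints concrete, the sketch below (illustrative Python, not the authors' {\tt spglarma3.m}; the support points and baseline probabilities are made up) computes $b(\theta) = -\log \sum_j \exp(\theta Y_j) p_j$ and solves the mean constraint for $\theta$ by bisection, using the fact that the tilted mean is strictly increasing in $\theta$.

```python
import math

def tilt_b(theta, Y, p):
    # normalization constraint: b(theta) = -log sum_j exp(theta * Y_j) * p_j
    return -math.log(sum(math.exp(theta * y) * pj for y, pj in zip(Y, p)))

def tilt_mean(theta, Y, p):
    # mean of the exponentially tilted empirical distribution
    b = tilt_b(theta, Y, p)
    return sum(y * math.exp(b + theta * y) * pj for y, pj in zip(Y, p))

def solve_theta(target, Y, p, lo=-20.0, hi=20.0, tol=1e-10):
    # the tilted mean is strictly increasing in theta, so bisection
    # solves the mean constraint E_theta[Y] = target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilt_mean(mid, Y, p) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Y = [0, 1, 2, 3, 5]      # support points (made-up data)
p = [0.2] * 5            # baseline probabilities p_j
theta = solve_theta(2.4, Y, p)
tilted_mean = tilt_mean(theta, Y, p)
```

In the full optimization problem above, one such constraint holds at every $t$ with its own $\theta_t$, and the $p_j$ themselves are optimized jointly.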
\subsection{Simulation Studies}
\label{se:sim1}
To examine the practical performance of the proposed semiparametric \textsc{mele } under both correct specification and misspecification, we simulate from different \textsc{glarma } models based on Example \ref{ex:3} and compare the results with the \textsc{mle } from parametric models.
The first set of simulations is similar to those in \cite{DDS2003}, which are based on a Poisson distribution with the log-link,
$$
Y_t| \mathcal{F}_{t-1} \sim \mbox{Poisson}\{\mu_t = e^{W_t}\},
$$
under two settings, namely, one corresponding to a stationary trend,
$$ \mbox{Model 1:} \quad W_t = \beta_0 + \psi(Y_{t-1}-e^{W_{t-1}})e^{-\frac{1}{2}W_{t-1}} \ ,$$
and the other with a linear trend,
$$ \mbox{Model 2:} \quad W_t = \beta_0 +\beta_1t/n+ \psi(Y_{t-1}-e^{W_{t-1}})e^{-\frac{1}{2}W_{t-1}}.$$
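A realisation from Model 1 can be simulated in a few lines. The sketch below (illustrative Python; the parameter values $\beta_0 = 0.5$ and $\psi = 0.3$ are hypothetical, as the true values are not stated here) follows the recursion above and discards a burn-in period of length 100.

```python
import math, random

def rpois(mu, rng):
    # Knuth's Poisson sampler; fine for the modest means arising here
    L, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def simulate_model1(n, beta0, psi, burnin=100, seed=1):
    rng = random.Random(seed)
    W, out = beta0, []            # initialize the recursion at W_0 = beta0
    for t in range(burnin + n):
        mu = math.exp(W)
        Y = rpois(mu, rng)
        if t >= burnin:
            out.append(Y)
        # Model 1 recursion: W_{t+1} = beta0 + psi*(Y_t - e^{W_t}) e^{-W_t/2}
        W = beta0 + psi * (Y - mu) * math.exp(-0.5 * W)
    return out

ys = simulate_model1(250, beta0=0.5, psi=0.3)
```

Model 2 only adds the deterministic trend term $\beta_1 t/n$ to the recursion.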
The sample sizes were $n=250$ with $N =1000$ replications for both settings. A burn-in period of length 100 was used in the simulation of the realisations. For each simulated dataset, we fit both the correctly-specified Poisson model, via the {\tt glarma} package in {\tt R} of \citet{DLS2014} with the default options (Pearson residuals and Fisher Scoring method), and the semiparametric model, via the {\tt spglarma3par} function in {\tt MATLAB}.
The densities of the parameter estimates across the simulations for both methods are plotted in Fig.~2 in the Online Supplement. As both models are correctly specified here, it is not surprising to see they exhibit almost identical performance for estimating all model parameters.
It is perhaps more interesting to look at the performance of the estimators under model misspecification. To this end, a second set of simulations was carried out based on a negative-binomial distribution with the log-link,
$$
Y_t|\mathcal{F}_{t-1} \sim \mbox{NegBin}(\mu_t = e^{W_t}, \alpha),$$
such that the conditional density function of $Y_t$ is
$$
P(Y_t = y_t | \mathcal{F}_{t-1}) = \frac{\Gamma(y_t+\alpha)}{\Gamma(\alpha)y_t!}\left(\frac{\alpha}{\alpha+\mu_t}\right)^{\alpha}\left(\frac{\mu_t}{\alpha+\mu_t}\right)^{y_t}, \quad \mbox{for $y_t = 0, 1, \ldots$},$$
where
\begin{eqnarray*}
\mbox{Model 3:} \quad W_t &=& \beta_0 + \beta_1t/n + \beta_2\cos(2\pi t/6)+\beta_3 \sin(2\pi t/6) + Z_t,\\
Z_t &=& \phi( Z_{t-1} + e_{t-1}), \quad e_{t-1} = \frac{Y_{t-1} - e^{W_{t-1}}}{\sqrt{e^{W_{t-1}}+e^{2W_{t-1}}/\alpha}}.
\end{eqnarray*}
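As a sanity check on the conditional density above, the sketch below (illustrative Python) evaluates the negative-binomial pmf on a log scale and verifies numerically that it normalizes to one and that its mean and variance are $\mu_t$ and $\mu_t + \mu_t^2/\alpha$, the variance appearing in the Pearson residuals of Model 3.

```python
import math

def negbin_logpmf(y, mu, alpha):
    # log of the conditional density given in the text
    return (math.lgamma(y + alpha) - math.lgamma(alpha) - math.lgamma(y + 1)
            + alpha * math.log(alpha / (alpha + mu))
            + y * math.log(mu / (alpha + mu)))

mu, alpha = 2.0, 4.0
probs = [math.exp(negbin_logpmf(y, mu, alpha)) for y in range(200)]
total = sum(probs)
mean = sum(y * q for y, q in enumerate(probs))
var = sum((y - mean) ** 2 * q for y, q in enumerate(probs))
```

Truncation at $y = 200$ is harmless here since the tail decays geometrically.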
The linear predictor consists of a linear trend, two harmonic functions as well as an $\textsc{ar}(1)$ component. The sample size is $n=500$ with $N=1000$ replications. The parameter values are set as $(\beta_0, \beta_1, \beta_2, \beta_3, \phi,\alpha) = $(0$\cdot$1, 0$\cdot$2, 0$\cdot$3, 0$\cdot$4, 0$\cdot$25, 4). A burn-in period of length 100 was used in the simulation of the realisations. Note that the negative-binomial distribution does not correspond to a \textsc{glm } unless the dispersion parameter $\alpha$ is known {\it a priori}.
For each simulated dataset, we fit the correctly-specified negative-binomial model and the misspecified Poisson model using the {\tt glarma} package in {\tt R}. We also fit the proposed semiparametric model. The densities of the parameter estimates across the simulations for all three methods are plotted in Fig. \ref{Figure: Neg bin densities}. Although the misspecified Poisson model can give good estimates for the regression coefficients $(\beta_0,\beta_1, \beta_2, \beta_3)$, it gives very different and severely biased estimates for the autoregression coefficient $\phi$ (see subpanel 5). Note that the autoregressive component of the model here is intimately connected with the overall error structure, so it is perhaps not surprising that the misspecification of the conditional error distribution has led to particularly poor estimates of the autoregression parameter. In contrast, the proposed semiparametric method gives almost identical estimates as the correctly-specified negative-binomial model for both the regression and autoregressive coefficients, demonstrating the robustness and flexibility of the proposed method under model misspecification.
\begin{figure}
\caption{Empirical densities of the maximum likelihood estimators for Model (3) using the semiparametric (solid), misspecified Poisson (dashed) and correctly-specified negative-binomial (dotted) models.}
\label{Figure: Neg bin densities}
\end{figure}
{\color{black} The semiparametric approach is computationally more demanding than parametric approaches, for obvious reasons. Models 2 ($n=250$) and 3 ($n=500$) took, on average, 19 and 67 seconds (Macbook Pro, 2.3 GHz Intel Core i7, 16 GB 1600 RAM) to fit, respectively, whereas the {\tt glarma} package took no more than a few seconds.
The current software uses only the built-in \texttt{MATLAB} optimizer \texttt{fmincon}, and we expect an improvement in computation times as we implement our own algorithm.}
\section{Inference via the likelihood ratio test}
\subsection{Methodology}
For inferences on the model parameters $(\beta, \gamma)$, \citet[][Section 3.1]{DDS2003} suggested inverting the numerical Hessian of the log-likelihood function at the \textsc{mle } to estimate its standard errors. Here, we propose a more direct method based on the likelihood ratio test.
Suppose we are interested in testing $H_0: \beta_k = 0$ for some component of $\beta$. To do this, we can fit the semiparametric \textsc{glm } with and without the constraint, obtaining the empirical log-likelihood ratio test (\textsc{lrt}) statistic,
$$
LRT = 2\left\{l(\hat \beta, \hat \gamma, \hat p) - \sup_{\beta_k = 0} l(\beta, \gamma, p) \right\} \ .
$$
From this, we can compute an ``equivalent standard error'', denoted by $se_{eq}$, via
$$
LRT= \left(\frac{\hat \beta_k - 0}{se_{eq}}\right)^2
$$
and then construct $100(1-\alpha)\%$ Wald-type confidence intervals via the usual $\hat \beta_k \pm z_{\alpha/2} se_{eq}$. Note that the equivalent standard error is such that the corresponding $z$-test for testing $\beta_k = 0$ achieves the same significance level as the likelihood ratio test with a $\chi^2_1$ calibration. An identical procedure can be applied for inferences on the components of the \textsc{arma } parameter $\gamma$.
In some scenarios, perhaps motivated by previous experience and/or theoretical arguments, a null value other than 0 can be used to calculate an equivalent standard error, provided it does not lie on the boundary of the parameter space. In the absence of any additional knowledge, however, the choice of 0 seems a good default to use in practice, as supported by the simulation results in Section 4.2.
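The equivalent standard error and the associated Wald-type interval follow directly from the \textsc{lrt } statistic. A minimal sketch (illustrative Python; the numbers $\hat\beta_k = 0.320$ and $LRT = 8.62$ are hypothetical):

```python
import math

def equivalent_se(beta_hat, lrt, null_value=0.0):
    # invert LRT = ((beta_hat - null_value) / se_eq)^2 for se_eq
    return abs(beta_hat - null_value) / math.sqrt(lrt)

def wald_ci(beta_hat, se_eq, z=1.96):
    # 95% Wald-type confidence interval
    return beta_hat - z * se_eq, beta_hat + z * se_eq

beta_hat, lrt = 0.320, 8.62      # hypothetical estimate and LRT statistic
se_eq = equivalent_se(beta_hat, lrt)
lo, hi = wald_ci(beta_hat, se_eq)
```

By construction, the $z$-statistic $\hat\beta_k/se_{eq}$ squared reproduces the \textsc{lrt } value exactly.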
\subsection{Simulation Studies}
To check the accuracy of the likelihood ratio test as the basis for inferences, the following two expanded models were fitted to the data generated by Model 1 in Section 3.2.1:
$$ \mbox{Model 1$'$:} \quad W_t = \beta_0 +\beta_1t/n+ \psi(Y_{t-1}-e^{W_{t-1}})e^{-\frac{1}{2}W_{t-1}},$$
and
$$ \mbox{Model 1$''$:} \quad W_t = \beta_0 + \psi(Y_{t-1}-e^{W_{t-1}})e^{-\frac{1}{2}W_{t-1}} +\psi_2(Y_{t-2} -e^{W_{t-2}})e^{-\frac{1}{2}W_{t-2}}.$$
Note that the linear term $\beta_1 t/n$ in Model 1$'$ and the second order \textsc{ma} component $\psi_2(Y_{t-2} -e^{W_{t-2}})e^{-W_{t-2}/2}$ in Model 1$''$ are redundant. This is equivalent to testing $\beta_1=0$ and $\psi_2 = 0$, respectively.
If the proposed inferences based on the likelihood ratio test are to be useful, then the \textsc{lrt } statistic for both tests should have approximate $\chi^2_1$ distributions. The Type I error rates for testing $\beta_1 =0$ using the \textsc{lrt } at nominal 10\%, 5\% and 1\% levels were 12.3\%, 6.6\% and 1.6\%, respectively, across 1000 simulations. The corresponding Type I errors for testing $\psi_2 = 0$ were 10.0\%, 6.0\% and 1.2\%, respectively.
These results indicate that the asymptotic $\chi^2$ calibration is quite accurate even for moderate sample sizes.
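When judging such rejection rates, it helps to keep the Monte Carlo error in mind: a rate estimated from $N = 1000$ replications has standard error $\{\alpha(1-\alpha)/N\}^{1/2}$. A one-line computation:

```python
import math

# Monte Carlo standard error of a rejection rate estimated from N replications
N = 1000
mc_se = {alpha: math.sqrt(alpha * (1 - alpha) / N)
         for alpha in (0.10, 0.05, 0.01)}
```

For example, the standard error at the nominal 5\% level is about 0.7 percentage points.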
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\caption{Empirical and estimated equivalent standard errors from negative-binomial data (Model 3) using the semiparametric, Poisson and negative-binomial models. $se_{emp}=$ empirical standard deviation ($\times 10^2$), $\overline{se}_{eq} =$ average equivalent standard error ($\times 10^2$), $\overline{se}=$ average standard error ($\times 10^2$), and coverage = coverage rate (\%) of the 95\% confidence interval, across 1,000 simulations}{
\begin{tabular}{crrcrrcrrc}
& \multicolumn{3}{c}{semiparametric} & \multicolumn{3}{c}{Poisson}& \multicolumn{3}{c}{negative-binomial} \\
parameter & $se_{emp}$ & $\overline{se}_{eq}$ & coverage& $se_{emp}$ & $\overline{se}\ $ & coverage & $se_{emp}$ & $\overline{se}\ $ & coverage\\
$\beta_0$& 16.8 & 16.1 & 93.9& 16.9 & 13.7 & 89.0& 16.9 & 16.3 & 94.1 \\
$\beta_1$ & 26.1 & 25.0 & 93.8 & 26.2 & 21.3 & 88.9 & 26.3 & 25.5 & 94.3 \\
$\beta_2$ & 7.3 & 7.2 & 95.1 & 7.2& 7.3 & 95.4& 7.2& 7.3 & 95.4\\
$\beta_3$ & 7.1 & 7.2 & 96.5& 7.1& 7.3 & 96.6 & 7.1 & 7.2 & 96.5 \\
$\phi $ & 4.2 & 4.1 & 93.9 & 3.6& 2.8 & 66.3 & 4.2 & 4.2& 94.3
\end{tabular}}
\label{Table: model 3 equiv se compare}
\end{center}
\end{table}
Next, we examine the performance of the proposed equivalent standard error approach for constructing confidence intervals for the parameters in the negative-binomial time-series generated by Model 3. The results from our simulation are reported in Table \ref{Table: model 3 equiv se compare}. We see that the average equivalent standard errors are very close to their corresponding empirical standard deviations across the 1,000 parameter estimates. Table \ref{Table: model 3 equiv se compare} also displays the coverage rates of 95\% Wald-type confidence intervals using the equivalent standard error, with the results suggesting that the proposed method performs as well as the correctly-specified negative-binomial model. In contrast, the Poisson results are particularly poor and indicate that model misspecification can lead to severe biases for inferences on model parameters. This reiterates the value of flexible and robust modeling of the conditional error distribution. We can conclude that the semiparametric equivalent standard errors can be used in much the same way as (correctly-specified) parametric standard errors for inferences on model parameters.
\section{Data analysis examples}
\subsection{Polio dataset}
To demonstrate the proposed method, we reanalyze the polio dataset studied by \cite{Zeger1988}, which consists of monthly counts of poliomyelitis cases in the USA from year 1970 to 1983 as reported by the Centres for Disease Control. \citet{DDW2000} and \citet{DW2009} modelled the counts using the Poisson and negative-binomial distributions, respectively. Here, we specify the same set of covariates as in these earlier studies, with
$$
x_{t} = \{1,\, t'/1000,\, \cos(2\pi t'/12), \, \sin(2\pi t'/12),\, \cos(2\pi t'/6),\, \sin(2\pi t'/6)\}^T,$$
where $t' = t-73$ is used to center the intercept term at January 1976. We specify a log-link and a \textsc{ma} process with lags 1, 2 and 5, following the \textsc{glarma } approach of \citet{DDW2000}.
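The covariate vector above is straightforward to construct. A sketch (illustrative Python; the data span 14 years of monthly counts, $t = 1, \ldots, 168$):

```python
import math

def polio_covariates(t):
    # covariate vector x_t; t' = t - 73 centres the intercept at January 1976
    tp = t - 73
    return [1.0,
            tp / 1000.0,
            math.cos(2 * math.pi * tp / 12), math.sin(2 * math.pi * tp / 12),
            math.cos(2 * math.pi * tp / 6), math.sin(2 * math.pi * tp / 6)]

# monthly counts from January 1970 to December 1983: t = 1, ..., 168
X = [polio_covariates(t) for t in range(1, 169)]
```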
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\caption{Polio dataset: Estimated coefficients, equivalent standard errors ($se_{eq}$) and standard errors ($se$) using the semiparametric, Poisson and negative-binomial models}{
\begin{tabular}{lrrrrrrr}
& \multicolumn{2}{c}{semiparametric} & \multicolumn{2}{c}{Poisson} & \multicolumn{2}{c}{negative-binomial} \\
Variable & estimate & $se_{eq}$ & estimate & $se$ & estimate & $se$ \\
Intercept & 0.149 & 0.137 & 0.130 & 0.112 & 0.180 & 0.144\\
Trend & -3.960& 2.771& -3.928 & 2.145 & -3.171 & 2.715\\
$\cos(2\pi t'/12)$ &-0.093 & 0.166 & -0.099 & 0.118 & -0.139 & 0.178\\
$\sin(2\pi t'/12)$ & -0.518& 0.188 & -0.531 & 0.138 & -0.544 & 0.205\\
$\cos(2\pi t'/6)$ & 0.281 & 0.147 & 0.211 & 0.111 & 0.353 & 0.150\\
$\sin(2\pi t'/6)$ & -0.277& 0.155& -0.393 & 0.116 & -0.380 & 0.153\\
\textsc{ma }\unskip, lag 1 & 0.320& 0.109& 0.218 & 0.047 & 0.304 & 0.065\\
\textsc{ma }\unskip, lag 2 & 0.221 & 0.098 & 0.127 & 0.047 & 0.238 & 0.080\\
\textsc{ma }\unskip, lag 5 & -0.016 & 0.099& 0.087 & 0.042 & -0.068 & 0.076
\end{tabular}}
\label{Table: Polio}
\end{center}
\end{table}
The coefficient estimates from the Poisson, negative-binomial and semiparametric fit to the data, along with their estimated standard errors, are reported in Table \ref{Table: Polio}. Whilst the fitted coefficients are quite similar for all three approaches, the estimated standard errors from the Poisson model are significantly smaller than the other two. The larger standard errors from the negative-binomial and semiparametric fits reflect the overdispersion in the counts, as noted in \citet{DDW2000}.
The estimated conditional response distributions for the 12th, 36th and 121st observations, covering a range of conditional mean values, are displayed in panels (d)--(f) of Fig. \ref{figure:polio pit}. These plots indicate that the semiparametric fit to the data is distinctively different to both the Poisson and negative-binomial fits.
We can also use histograms of the probability integral transformation (\textsc{pit }\unskip) to compare and assess the underlying distributional assumptions. The {\tt glarma} package uses the formulation in \cite{CGH2009} so that \textsc{pit } can be applied to a discrete distribution. If the distributional assumptions are correct, the histogram should resemble the density function of a standard uniform distribution. As the estimated response distribution is discrete under our semiparametric model, the implementation of \textsc{pit } for our approach is also straightforward. From the \textsc{pit }\unskip s of all three fitted models, plotted in panels (a)--(c) of Fig. \ref{figure:polio pit}, we see that the plot for the semiparametric model is closest to uniform, with the negative-binomial and Poisson models being noticeably inferior. This suggests that neither parametric model is adequate here. Subsequently, standard errors and inferences based on these parametric models may well be biased and unreliable.
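For a discrete predictive distribution with CDF $P_t$, the non-randomized \textsc{pit } of \cite{CGH2009} interpolates linearly between $P_t(y_t - 1)$ and $P_t(y_t)$ and averages over $t$. A self-contained sketch (illustrative Python; the fitted means and observed counts are made up) that returns the histogram bar heights, which should all be near 1 for a well-calibrated model:

```python
import math

def pit_histogram(obs, cdfs, bins=10):
    # non-randomized PIT for counts: for each y_t with predictive CDF P_t,
    # interpolate linearly between P_t(y_t - 1) and P_t(y_t), then average
    def f_t(u, u1, u2):
        if u <= u1:
            return 0.0
        if u >= u2:
            return 1.0
        return (u - u1) / (u2 - u1)
    F = []
    for k in range(bins + 1):
        u = k / bins
        vals = [f_t(u, P(y - 1) if y > 0 else 0.0, P(y))
                for y, P in zip(obs, cdfs)]
        F.append(sum(vals) / len(vals))
    # bar heights; values near 1 in every bin indicate a well-calibrated fit
    return [bins * (F[k + 1] - F[k]) for k in range(bins)]

def poisson_cdf(y, mu):
    return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(y + 1))

mus = [1.5, 2.0, 1.0, 2.5, 3.0, 0.8]   # made-up fitted means
obs = [1, 2, 0, 3, 4, 1]               # made-up observed counts
heights = pit_histogram(obs, [lambda y, m=m: poisson_cdf(y, m) for m in mus])
```

For the semiparametric fit, $P_t$ would be the cumulative sum of the estimated tilted probabilities at time $t$.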
\subsection{Kings Cross assault counts}
\cite{MWKF2015} recently studied the number of assaults in the Kings Cross area of Sydney. A major practical question for which the data were originally collected was to determine the effect of the January 2014 reforms to the NSW Liquor Act, designed to curb alcohol-related violence, on the number of incidents of assault in Kings Cross, NSW, Australia. The data consist of monthly counts of (police recorded non-domestic) assaults in the area from January 2009 to September 2014. A state space Poisson model for count data of \cite{DK2012} was used in the original study, but
we shall reexamine the data within the \textsc{glarma } framework.
We follow the original study by specifying a log-link and the same set of covariates, which includes an intercept, a step intervention term (from January 2014 onwards) and a cosine harmonic term with a period of 12 months. Specifically,
$$
x_t = \{1,\,I(t\geq 60),\,\cos(2\pi t/12)\},\quad t=0,1,\ldots,68,
$$
where $I(.)$ is an indicator function. A \textsc{ma } process with lags 1 and 2 was also included in the model.
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\caption{Kings Cross assault counts: Estimated coefficients, equivalent standard errors ($se_{eq}$) and standard errors ($se$) using the semiparametric and Poisson models}{
\begin{tabular}{lrrrrr}
& \multicolumn{2}{c}{semiparametric} & \multicolumn{2}{c}{Poisson} \\
Variable & estimate & $se_{eq}$ & estimate & $se$ \\
Intercept & 3.736 & 0.026 & 3.733 & 0.020\\
Step & -0.535 & 0.110 & -0.544 & 0.073\\
CosAnnual & 0.063 & 0.029 & 0.058 & 0.029\\
\textsc{ma }\unskip, lag 1 & 0.044 & 0.017 & 0.034 & 0.016\\
\textsc{ma }\unskip, lag 2 & -0.037 & 0.017 & -0.031 & 0.016\\
\end{tabular}}
\label{Table: Lockout}
\end{center}
\end{table}
The data were fit using a Poisson and the proposed semiparametric model. The coefficient estimates from the two models, along with their estimated standard errors, are reported in Table \ref{Table: Lockout}. Both models indicate that the step intervention term is highly significant and the results indicate that there has been substantial reduction in assault counts in the Kings Cross area.
We can again use histograms of the \textsc{pit } to assess the Poisson distribution assumption. From the \textsc{pit }\unskip s of the two models plotted in Fig. \ref{figure: Lockout PIT} we see that there is a distinct U-shape in the plot for the Poisson model whereas the one for the semiparametric model appears closer to uniform. This suggests that the Poisson error distribution model is inadequate here, so that corresponding standard errors and subsequent inferences may well be biased.
\begin{figure}
\caption{Kings Cross assault counts: Histograms of the probability integral transforms for the Poisson and semiparametric fits.}
\label{figure: Lockout PIT}
\end{figure}
Interestingly, a negative-binomial \textsc{glarma } model could not be fitted to this dataset as the Fisher Scoring algorithm failed to converge. We have yet to encounter any convergence or compatibility problems using our proposed method. Note that by leaving the underlying generating distribution unspecified, our model fitting algorithm is the same regardless of the distribution of the data. We therefore do not run into such incompatibility issues.
\section{Discussion}
\label{s:discuss}
Misspecifying the conditional response distribution in \textsc{glarma } models can lead to biased estimation of model parameters, especially parameters governing the dependence structure. This problem of model misspecification is particularly prevalent for responses that do not have conditionally normal distributions, such as incidence counts. These types of time-series occur often in population health and epidemiology.
The proposal in this paper is to use a semiparametric approach that maximizes over all possible exponential families of conditional distributions for the data in the \textsc{glarma } framework. This allows for robust and flexible models for non-Gaussian time-series data, as our simulations and two data analysis examples demonstrate. The proposed approach is also a unified one, covering the class of all \textsc{glarma } models that are based on an exponential family of conditional distributions for the responses. Moreover, the same computational algorithm can be used to fit the model to data regardless of the underlying generating distribution of the data.
\section*{Supplementary Materials}
The Online Supplement, containing additional simulation results and a proof of Theorem 1, is available by emailing the authors.
\vspace*{-8pt}
\small
\appendix
\section{\hspace{-6mm}ppendix}
\subsection{Parameter space for $F$}
As with any \textsc{glm}, the underlying distribution $F$ generating the exponential family is required to have a Laplace transform in some neighborhood of the origin, so that the cumulant generating function (\ref{eq:normconstr}) is well-defined. Note that $F$ is not identifiable since any exponentially tilted version of $F$ would generate the same exponential family. To estimate $F$ uniquely, we set $b_0 = \theta_0 = 0$ in our \texttt{MATLAB} code.
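The non-identifiability of $F$ is easy to verify numerically: tilting the baseline distribution first and then generating the family reproduces a member of the original family. A sketch (illustrative Python, with a made-up support and baseline):

```python
import math

Y = [0, 1, 2, 4]          # made-up support points
p = [0.4, 0.3, 0.2, 0.1]  # made-up baseline probabilities

def tilt(p, theta):
    # exponentially tilt the baseline distribution and renormalize
    w = [pj * math.exp(theta * y) for y, pj in zip(Y, p)]
    s = sum(w)
    return [wi / s for wi in w]

# tilting the baseline by tau and then by theta lands in the same family:
# tilt(tilt(p, tau), theta) equals tilt(p, tau + theta)
tau, theta = 0.7, 0.3
lhs = tilt(tilt(p, tau), theta)
rhs = tilt(p, tau + theta)
```

This is exactly why the pinning constraint $b_0 = \theta_0 = 0$ is needed to estimate $F$ uniquely.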
The exponential tilt mechanism also implies that all observations $Y_t$ share a common support, $\mathcal{Y}$ -- this is another feature of \textsc{glm }s. {\color{black} Binomial responses with varying number of trials may seem to violate this, but they can be broken down into individual Bernoulli components each having common support $\{0,1\}$}, provided the number of trials at each time-point is given.
\subsection{Regularity conditions} \label{sec:regcond}
\begin{itemize}
\item[C1.] The parameter space for $(\beta,\gamma)$ is some compact subset of $\mathbb{R}^{q+r}$.
\item[C2.] The response space $\mathcal{Y}$ is (contained in) some compact subset of $\mathbb{R}$.
\item[C3.] The mean function $\mu(\mathcal{F}; \beta, \gamma)$ maps into the interior of $\mathcal{Y}$ for all states $\mathcal{F}$ and all parameter values $(\beta, \gamma)$ in some neighbourhood of the true value $(\beta^*, \gamma^*)$.
\end{itemize}
\begin{remark} Condition C2 is for simplicity of proof and can be weakened to tightness on the collection of allowable distributions. This then requires an additional step of approximating tight distributions by compactly-supported ones, with the compact support being increasingly widened so that the approximation error becomes arbitrarily small.
\end{remark}
\begin{remark} Condition C3 is required because we are not restricted to the canonical link for each exponential family. For example, a linear mean model can be used with nonnegative responses, provided the mean $\mu(\mathcal{F}; \beta, \gamma)$ is never negative for any state $\mathcal{F}$, at least for models $(\beta, \gamma)$ sufficiently close to the underlying data-generating model $(\beta^*, \gamma^*)$.
\end{remark}
\label{lastpage}
\end{document} | math | 38,995 |
\begin{document}
\title{ Recovering rank-one matrices via rank-$r$ matrix relaxation}
\author{\normalsize Pengwen~Chen\footnote{ Applied Mathematics, National Chung Hsing University, Taiwan}, Hung~Hung\footnote{
Institute of Epidemiology and Preventive Medicine, National Taiwan University, Taiwan}
}
\maketitle
\abstract{
PhaseLift, proposed by E.J. Cand\`{e}s et al., is one convex relaxation approach for phase retrieval. The relaxation enlarges the solution set from rank one matrices to positive semidefinite matrices.
In this paper, this relaxation is applied to nonconvex alternating minimization methods to recover the rank-one matrices.
A generic measurement matrix can be standardized to a matrix consisting of orthonormal columns. To recover the rank-one matrix,
the standardized frames are used to select the matrix with the maximal leading eigenvalue among the rank-$r$ matrices.
Empirical studies are conducted to validate the effectiveness of this relaxation approach.
In the case of Gaussian random matrices with a sufficient number of nearly orthogonal sensing vectors,
we show that the singular vector corresponding to the least singular value is close to the unknown signal, and thus it can be a good initialization for the nonconvex
minimization algorithm.
}
\section{Introduction}
Phase retrieval is one important inverse problem that arises in various fields, including electron microscopy, crystallography, astronomy, and optics \cite{Millane:90, hurt2001phase, MiaoCell, Review, Fannjiang:12, IOPORT.06071995}. Phase retrieval aims to recover signals from magnitude measurements only (optical devices do not allow direct recording of the phase of the electromagnetic field).
Let $x_0\in \mathbf{R}^n$ or $x_0\in \mathbf{C}^n$ be some nonzero unknown vector to be measured.
Let $A\in \mathbf{R}^{N\times n}$ be the matrix whose rows are sensing vectors $\{a_i\in\mathbf{R}^n\}_{i=1}^N$ or $\{ a_i\in \mathbf{C}^n\}_{i=1}^N$. The measurement vector $b\in \mathbf{R}^N$ is the magnitude,
\[ \textrm{ $b=|Ax_0|$, or $ b_i=| {a}_i\cdot x_0|$ for $i=1,\ldots, N$. }\] Obviously, the signal $x_0$ can be determined at best up to a global phase factor: because \[ |x_0\cdot a_j e^{i\theta}|=|x_0\cdot a_j| \textrm{ for any $\theta\in [0,2\pi]$},\] $x_0 e^{i\theta}$ is also a solution. The recovery of $x_0 e^{i\theta}$ is referred to as the exact recovery.
When $A$ is a Fourier matrix, the problem is known as phase retrieval.
With this specific measurement matrix, the task becomes more demanding, because the Fourier magnitude is preserved not only under a global phase shift, but also under spatial shifts and conjugate inversion, which yields twin images \cite{IOPORT.06071995}.
The first widely accepted phase retrieval algorithm was presented by Gerchberg and Saxton\cite{GS}. Fienup\cite{Fienup:82} developed the convergence analysis of the error-reduction algorithm and proposed input-output iterative algorithms. The basic and hybrid input-output algorithms can be viewed as a nonconvex Dykstra algorithm and a nonconvex Douglas-Rachford algorithm, respectively\cite{Bauschke02phaseretrieval}. Empirically, the hybrid input-output algorithm is observed to converge to
a global minimum (no theoretical proof is available)\cite{Review}.
The major obstacle to phase retrieval is caused by the lack of convexity of the magnitude constraint\cite{IOPORT.06071995}.
PhaseLift\cite{CPAM}, proposed by E.J. Cand\`{e}s et al., is one convex relaxation approach for phase retrieval. The relaxation changes the problem of vector recovery into a rank-one matrix recovery. The global optimal solution can be achieved when $A$ is a Gaussian random matrix and $N\ge Cn$ for some absolute constant $C$\cite{FCM}. To some extent, this approach provides a solution to the phase retrieval problem, at least from the theoretical perspective, provided that the feasible set can shrink to one single point under a sufficient number of measurements. In practice, the sensing matrix $A$ does not belong to this specific Gaussian model or uniform models, and the computational load of solving the convex feasibility problem can be too demanding. In particular, it requires the computation of all the singular values in each iteration.
In this paper, we explore the possibility of using the rank-$r$ matrix relaxation in phase retrieval. In the first section, to illustrate the idea, we review the exact recovery condition in PhaseLift. Typically, the exact recovery of rank-one matrices requires a large $N/n$ ratio. We \textit{standardize} the frame, such that each matrix in the feasible set has an equal trace norm. Then, the desired rank one matrix is the matrix whose leading eigenvalue is maximized. Gradually enlarging the leading eigenvalue, the matrix moves towards the rank one matrix with high probability. Our simulation result substantiates the effectiveness of recovering rank one matrices.
To reduce the computational load, in section 2, we apply the relaxation to the nonconvex alternating direction minimization method (ADM) proposed in~\cite{Wen}.
Frames are standardized to ensure the equal trace among all feasible solutions. In theory, searching for the optimal solution in a higher dimensional space can alleviate the stagnation of local optima. Finally, with a sufficient amount of nearly orthogonal sensing vectors, we show that the corresponding singular vector is close to the unknown signal and can thus be a good initialization. To some extent, this theoretical result provides a partial answer to the solvability of phase retrieval. In fact, when there is a lack of nearly orthogonal sensing vectors, the ADM can fail to converge, as discussed in Section~\ref{Failure1}.
In section 3, we conduct a few experiments to demonstrate the performance of the ADM methods, including the convergence failure of nonconvex ADM, the comparison between rank one ADM to rank-$r$ ADM, and the application of phase retrieval computer simulations. Finally, given a generic matrix, we can find an equivalent matrix whose columns are orthogonal and whose rows have
equal norm. We discuss the existence and uniqueness proof of the orthogonal factorization in the appendix.
\subsection{Notation}
In this paper, we use the following notations. Let $x^\top$ be the Hermitian conjugate of $x$, where $x$
can be real or complex matrices (or vectors). Hence, $x$ is Hermitian if $x=x^\top$. The notation $x^*$ is reserved
for a limit point of a sequence $\{x^k\}_{k=0}^\infty$ or the final iteration of $x$ in the computation. Let $\|x\|_F$ be the Frobenius norm.
The function $diag(X)$ produces a vector that is the diagonal of a matrix $X$. The pseudo-inverse of matrix $X$ is denoted by $X^\dagger$.
The vector $e$ is the vector of all ones,
and $e_j$ is the vector of all zeros except for a one at the $j^{th}$ entry.
Let $x_0\in \mathbf{R}^n$ be the unknown signal and $A\in \mathbf{R}^{N\times n}$ or $\in \mathbf{C}^{N\times n}$ be the sensing matrix. Hence $N$ is the number of measurements.
\subsection{Ratio $N/n$}
We shall briefly outline the threshold ratio $N/n$ for the exact recovery of $x_0$~\cite{balan2006signal}. The result can be regarded as a worst-case bound, because we demand the exact recovery for all possible nonzero vectors $x$.
Denote a nonlinear map associated with $A$ by $M^A: \mathbf{R}^n\to \mathbf{R}^N$,
\[
M^A(x)=\sum_{k=1}^N |a_k\cdot x| e_k.
\]
The range of the mapping $M^A$ consists of all the possible measurement vectors $b$ via the sensing matrix $A$.
Throughout this paper, we assume that $A$ has rank $n$.
We say that a matrix $A\in \mathbf{R}^{N\times n}$ satisfies the \textit{rank* condition} if
all square $n$-by-$n$ sub-matrices of $A$ have full rank and $N> n$.
That is, any $n$ row vectors of $A$ are linearly independent.
\begin{prop}\label{2n} Suppose that $A$ satisfies the rank* condition. If $N\ge 2n-1$, then $M^A: \mathbf{R}^{n}\to \mathbf{R}^N$ is injective.
\end{prop}
\begin{proof} Suppose that $M^A (x)=M^A(\hat x)$ with $x\neq \hat x$; then $|a_k\cdot x|=|a_k\cdot \hat x|$. Rearrange the indices and assume \[ a_k\cdot x=a_k\cdot \hat x \textrm{ for } k=1,\ldots, l, \]
\[ a_k\cdot x=-a_k\cdot \hat x \textrm{ for } k=l+1,\ldots, N. \]
Because $N\ge 2n-1$, then either $l\ge n$ or $N-l\ge n$. Suppose $l\ge n$. Then
$x-\hat x\in \mathbf{R}^n$ is orthogonal to $ a_1,\ldots, a_l$.
The full rank condition yields $x-\hat x=0$, which shows the
nonexistence of two distinct vectors $x,\hat x$. Similar arguments apply to the case $N-l\ge n$.
\end{proof}
According to the above proof, when $N\le 2n-2$, we can find a pair of vectors $x,\hat x$ such that $|Ax|=b=|A\hat x|$. Indeed, when $N=2n-2$, let $u$ be the vector orthogonal to $\{a_i\}_{i=1}^{n-1}$ and $v$ be the vector orthogonal to $\{a_i\}_{i=n}^{2n-2}$. Then $x=u+v$ and $\hat x=u-v$ are the desired pair of vectors.
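The $N = 2n-2$ construction is easy to verify numerically. For $n = 2$ an orthogonal vector can be written down explicitly, so the following sketch (illustrative Python; the sensing vectors are arbitrary) builds the pair $x = u+v$, $\hat x = u-v$ and checks that $|Ax| = |A\hat x|$ while $x \neq \pm \hat x$:

```python
def perp(a):
    # a vector orthogonal to a in R^2
    return (-a[1], a[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# n = 2, so N = 2n - 2 = 2 sensing vectors (arbitrary choice)
a1, a2 = (1.0, 2.0), (3.0, -1.0)
u, v = perp(a1), perp(a2)          # u orthogonal to a1, v orthogonal to a2
x = (u[0] + v[0], u[1] + v[1])     # x = u + v
xh = (u[0] - v[0], u[1] - v[1])    # xh = u - v
b = [abs(dot(a, x)) for a in (a1, a2)]
bh = [abs(dot(a, xh)) for a in (a1, a2)]
```

Here $a_1\cdot x = a_1\cdot v = -a_1\cdot \hat x$ and $a_2\cdot x = a_2\cdot u = -a_2\cdot \hat x$, so the magnitudes agree.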
However, for any particular vector $x$, it is possible that no such second solution $\hat x\in \mathbf{R}^n$ exists when $n+1\le N\le 2n-2$.
\begin{prop} Fix $x_0\in \mathbf{R}^n$. Suppose each row $a_i$ of $A$ is independently sampled from some continuous distribution on the unit sphere in $\mathbf{R}^n$. Let $b=|Ax_0|$. Then, with probability one, $|Ax|=b$ has a unique solution $x=x_0$ for $N\ge n+1$.
\end{prop}
\begin{proof}
Assume $N=n+1$.
Write \[A:=
\left[
\begin{array}{c}
A_1 \\
A_2
\end{array}
\right] \textrm{ with } A_1\in\mathbf{R}^{n\times n}, A_2\in \mathbf{R}^{1\times n}.\] Then with probability one, $A_1$ is full rank and thus we can find a unique nonzero vector $c\in\mathbf{R}^n$ such that $a_{n+1}=c^\top A_1$. Clearly, $c$ is a continuous random vector that depends on $A_2$.
Suppose that $\hat x$ is another solution of $|Ax|=b$. Then $\hat x$ should be one solution of $2^n$ possible systems \[ \{ (A_1 \hat x)_i=\pm b_i \} \textrm{ for $i=1,\ldots, n$.}\] Let $y:=A_1(x_0\pm \hat x)$. Then, $y$ must be one of the $3^n$ vectors with $y_i=\pm 2b_i $ or $0$ for $i=1,\ldots, n$. Note that $y$ is independent of the selection of $A_2$. On the other hand, $a_{n+1}\cdot x_0=\mp\, a_{n+1}\cdot\hat x$ yields the orthogonality between $c$ and $ A_1 (x_0\pm \hat x)$, i.e., \[ a_{n+1}\cdot (x_0\pm \hat x)=c^\top A_1 (x_0\pm \hat x)=c\cdot y=0.\]
Since $c$ is a continuous random vector that depends on $A_2$, with probability one $c\cdot y=0$ forces $y=0$, which implies that $x_0=\mp \hat x$ ($A_1$ is full rank).
\end{proof}
However, for generic complex frames, the map is injective if
$N\ge 4n-2$, i.e., all vectors $x_0\in \mathbf{C}^n$ can be recovered.
To recover a fixed vector $x_0$, $N\ge 2n$ is a necessary condition. Interested readers are referred to the discussion in~\cite{balan2006signal} and \cite{Bandeira2013}.
One naive thought is that, as $N$ grows much faster than $n$, the rank one matrix can be recovered. Unfortunately, this can be incorrect in some circumstances.
We can construct a matrix $A\in\mathbf{R}^{N\times n}$ with $N$ of order $2^n$ such that some vector $x_0$ still cannot be recovered, owing to the failure of the rank* condition; see the following remark.
\begin{rem}(Bernoulli random matrices)
We construct an example, in which $x_0=e_1$ cannot be recovered from the measurement $|Ax_0|=b=e$.
Denote by $S\subset \mathbf{R}^{n}$ the set of vectors whose entries are $\pm 1$. There are $2^n$ vectors in $S$. Pick any subset of $N$ vectors from $S$ as $\{a_i\}_{i=1}^N$ ($A\in\mathbf{R}^{N\times n}$ is the Bernoulli random matrix). All the vectors $e_j, j=1,\ldots, n$ satisfy $|Ae_j|=|Ax_0|=b=e$. Since these vectors are indistinguishable from the measurements, $M^A$ is not injective.
Note that the rank of the random matrix $A$ is $n$ in most cases. The rank $n$ condition on $A$, together with a large $N$, does not imply the exact recovery of $x_0$.
One can easily verify that the Fourier matrix yields the same difficulty.
\end{rem}
\subsection{PhaseLift}
Next, we introduce the PhaseLift method proposed by Cand\`{e}s et al.~\cite{CPAM}.
To simplify the discussion, we focus on the noiseless case.
Introduce the linear operator on Hermitian matrices,
\[
\mathcal{A}: \mathcal{H}^{n\times n}\to \mathbf{R}^N,\; \mathcal{A}(X):=diag(A X A^\top),\; \mathcal{A}(X)_i=a_i^\top Xa_i,\; i=1,\ldots, N.
\]
An equivalent condition of $|Ax_0|=b$ is that $X:=x_0 x_0^\top $ is a rank-one solution to $\mathcal{A}(X)=b^2$.
Hence, the phase retrieval problem can be formulated as the matrix recovery problem,
\[
\min_{X} rank(X) \textrm{ subject to } \mathcal{A}(X)=b^2, X\succeq 0.
\]
By factorizing a rank one solution of $X$, we can recover the signal $x_0$.
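The lifting identity is easy to check numerically; a small NumPy sketch with generic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 7
A = rng.standard_normal((N, n))
x0 = rng.standard_normal(n)
b = np.abs(A @ x0)

X0 = np.outer(x0, x0)              # the lifted rank-one matrix
AX0 = np.diag(A @ X0 @ A.T)        # the operator X -> diag(A X A^T)
assert np.allclose(AX0, b ** 2)    # |A x0|^2 = calA(x0 x0^T)
assert np.linalg.matrix_rank(X0) == 1
```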
To overcome the difficulty of rank minimization, Cand\`{e}s et al.~\cite{CPAM} propose a convex relaxation of the rank minimization problem, which is the trace minimization problem,
\[
\min_{X} tr(X), \textrm{ subject to } \mathcal{A}(X)=b^2, X\succeq 0.
\]
When \[ I_{n\times n}\in span\{a_i a_i^\top \}_{i=1}^N,\] the measurements $\mathcal{A}(X)_i= tr(a_i a_i^\top X)$ automatically determine the trace $ tr(X)$ of $X$ and then
the trace minimization objective is redundant.
Recovering $X_0=x_0x_0^\top $ can be achieved via solving
the following convex feasibility problem,
\begin{equation}\label{PhaseLift}
\{X: X\succeq 0, \mathcal{A}(X)=b^2\}.
\end{equation}
In the next subsection, we will show that we can always remove the trace minimization objective via an orthogonal decomposition on $A$, either SVD or QR factorizations.
The following proposition from~\cite{CPAM} illustrates the optimality of the feasibility problem, which is a key tool for justifying the exact recovery theoretically. The proof can be found in~\cite{demanet2012stable}.
\begin{prop}
Suppose that the restriction of $\mathcal{A}$ to the tangent space $T$ at $X_0:=x_0x_0^\top $ is injective. One sufficient condition for the exact recovery is the existence of $y\in \mathbf{R}^N$, such that \[ Y:=\mathcal{A}^\top y=A^\top diag(y) A=\sum_{i=1}^N y_i a_i a_i^\top \] satisfies \[
Y_T=0 \textrm{ and } Y_{T^\bot}\succ 0.
\]
\end{prop}
The proposition states one sufficient condition under which $x_0x_0^\top $ can be recovered from the frame $A$. In the real case,
when $N\ge 2n-1$, the rank* condition on $A$ is one sufficient condition to ensure the injective property of
the restriction of $\mathcal{A}$.
Indeed, for any $x_0\neq 0$, $Ax_0$ consists of at most $n-1$ zeros thanks to the rank* condition, thus $Ax_0$ consists of at least $n$ nonzero entries.
Since the tangent space at $x_0x_0^\top $ consists of $\hat X$ in a form $x_0 x^\top +x x_0^\top $ with some $x\in \mathbf{R}^n$, then \[ \mathcal{A}(\hat X)=\mathcal{A}(x_0 x^\top +x x_0^\top )=2(Ax_0) (Ax)=0\in \mathbf{R}^N \] yields $x=0$ (due to the rank* condition), which implies $\hat X=0$.
\subsection{Special frames}
We shall highlight three special frames where the feasible set only consists of one single point. Thus the unknown signal $x_0$ can be recovered via PhaseLift.
In the first case, we show that a frame with
$N=n+1$ measurement vectors is sufficient to determine the unknown matrix $X=x_0x_0^\top$.
\begin{prop} Suppose that $a_j=e_j$ for $j=1,\ldots, n$ and $a_{n+1}(j) x_0(j)> 0$ for $j=1,\ldots, n$. \footnote{ This condition states that the entries of $a_{n+1}$ have the same sign as the ones of $x_0$.} Then, the feasible set of PhaseLift consists of only one single point, $x_0x_0^\top$.
\end{prop}
\begin{proof}
From $a_j=e_j$, we have \[ a_j^\top X a_j=X_{j,j}=|x_0(j)|^2.\] The positive semidefinite requirement of $X$ yields $X_{i,j}^2\le X_{i,i}X_{j,j}$.
The measurement $a_{n+1}^\top X a_{n+1}=|a_{n+1}\cdot x_0|^2$ enforces $a_{n+1}^\top X a_{n+1}$ to reach its upper bound among $X$ being positive semidefinite, i.e., the inequalities in the following relation become equalities, \[
a_{n+1}^\top Xa_{n+1}=\sum_{i,j} a_{n+1}(i) X_{i,j}a_{n+1}(j)\]\[\le \sum_{i,j} |a_{n+1}(i) a_{n+1}(j)| \sqrt{X_{i,i}}\sqrt{X_{j,j}}\le (\sum_{j} |a_{n+1}(j)|\sqrt{X_{j,j}})^2
=|a_{n+1}\cdot x_0|^2,\]
where we used the assumption $a_{n+1}(j) x_0(j)>0$ for all $j=1,\ldots, n$. Hence, $X_{i,j}^2=X_{i,i}X_{j,j}$ and $\mathrm{sign}(X_{i,j})=\mathrm{sign}(x_0(i)x_0(j))$ for all $i,j$, which implies that $ X=x_0x_0^\top $
is the only feasible point.
\end{proof}
In the second case,
the exact recovery is obtained via a set of sensing vectors orthogonal to $x_0$.
\begin{prop}
Suppose that $n-1$ linearly independent sensing vectors among $\{a_i\}_{i=1}^n$ exist such that $x_0\cdot a_i=0$; then, PhaseLift with measurement matrix $A$ recovers the matrix $x_0x_0^\top$ exactly.
\end{prop}
\begin{proof}
Without loss of generality, assume that $x_0=e_1$ and write $A$ as an $n\times n$ matrix,
\[
A=
\left(
\begin{array}{cc}
1_{1\times 1} & *_{1\times (n-1)}\\
0_{(n-1)\times 1} & A_1
\end{array}
\right)
\]
where
$A_1\in\mathbf{R}^{(n-1)\times (n-1)}$ has rank $n-1$, i.e., it consists of linearly independent columns.
Choose $y$ to be a vector with $y_1=0$ and $y_i>0$ for $i\ge 2$. Then $Y_T=0$. Moreover, if $z\in\mathbf{R}^{n-1}$ satisfies
\[
(A_1 z)^\top diag([y_2,\ldots, y_n]) A_1 z=0, \]
then, because $y_i>0$ for all $i=2,\ldots, n$,
$ A_1z=0$, i.e., $z=0$. Hence, $Y_{T^\bot}\succ 0$.
\end{proof}
This special choice of the first column of $A$ indicates \textit{ the orthogonality between $n-1$ sensing vectors $a_i$ and $x_0$}.
However, the orthogonality is generally not satisfied for an arbitrary vector $x_0$.
In the third case, the exact recovery can be obtained via some structured sensing matrix, which in fact fails the rank* condition ($M^A$ is not injective).
\begin{prop}
Suppose that $N=2n-1$ and the sensing vectors in $A$ are $a_i=e_i$ for $i=1,\ldots, n$ and \[ \textrm{ $a_{n+i}= e_i+\beta_i e_{i+1}$ with $\beta_i\neq 0$ for $i=1,\ldots, n-1$.}\] Suppose that the entries of $x_0$ are nonzero. Then PhaseLift with measurement matrix $A$ recovers the matrix $x_0x_0^\top$ exactly.
\end{prop}
\begin{proof}
To simplify the discussion, assume $x_0=e$; otherwise, replace each $a_i$ by the entrywise product $a_i\circ x_0$, which is possible because the entries of $x_0$ are nonzero.
Any matrix $X\in \mathbf{R}^{n\times n}$ in the feasible set has the form
\[
X=
\left(
\begin{array}{ccccc}
1& 1& &&\\
1& 1& 1 &&\\
& 1& 1 &1&\\
& & &\ldots&\\
& & &1&1
\end{array}
\right),
\]
i.e., $X_{i,i+1}=X_{i,i}=X_{i+1,i}=1$.
Claim: Because $X$ is positive semidefinite, every principal sub-matrix of $X$ is positive semidefinite; this implies that $X=ee^\top$.\\
Start with $\alpha:=X_{i_1,j_1}=X_{j_1,i_1}$ with $i_1=j_1+ 2$. Consider the principal sub-matrix $\{X_{i,j}: i,j\in \{j_1,j_1+1,j_1+2\}\}$.
Its determinant is \[ -1+2\alpha-\alpha^2=-(1-\alpha)^2.\] Hence, nonnegativity of the determinant yields $\alpha=1$. Similar arguments work for $i_1=j_1+3$, and so on. In the end, all entries of $X$ must be $1$, i.e., $X=x_0x_0^\top$ is the only matrix in the feasible set.
\end{proof}
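The determinant step of the proof can be checked directly for the $3\times 3$ pattern that appears there:

```python
import numpy as np

# 3x3 principal sub-matrix from the proof, with one unknown entry alpha
for alpha in (-0.5, 0.0, 0.7, 1.0, 2.0):
    S = np.array([[1.0, 1.0, alpha],
                  [1.0, 1.0, 1.0],
                  [alpha, 1.0, 1.0]])
    assert np.isclose(np.linalg.det(S), -(1.0 - alpha) ** 2)
# a PSD X needs det >= 0, forcing alpha = 1 and, inductively, X = e e^T
```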
Readers can apply similar arguments to the recovery of $X_0=x_0x_0^\top $ with $x_0\in \mathbf{C}^n$ via the following matrix: Let $A\in \mathbf{C}^{N\times n}$ with $N=n+2(n-1)$ and
\[
a_i=e_i,\; a_{n+i}= e_i+\beta_i e_{i+1},\textrm{ and $a_{2n-1+i}= e_i+\gamma_i e_{i+1}$, where }
\]
\[
\textrm{ $\beta_i\neq \gamma_i $ are nonzero for $i=1,\ldots, n-1$}.
\]
See \cite{doi:10.1137/110848074} for more discussion on the usage of $N=3n-2$ sensing vectors.
\subsection{Reduction of $N/n$ via standardized frames}
In the following, some orthogonality on $A$, namely $A^\top A=I_{n\times n}$, is needed to implement the matrix recovery algorithm. We say that a measurement matrix (a frame) $A$ is \textit{standardized} if $A$ consists of orthonormal columns, i.e., $A^\top A=I_{n\times n}$.
In fact,
given any measurement matrix $A\in \mathbf{R}^{N\times n}$ with rank $n$, we can
take the QR decomposition of the measurement matrix $A$, $A=QR$ with $Q\in \mathbf{R}^{N\times n}$ consisting of orthonormal columns and $R\in \mathbf{R}^{n\times n}$ being upper triangular.
The rank of $A$ is equal to the rank of $R$. Hence, denoting $Rx$ by $y$,
the problem is reduced to solving $y$ from the measurements \[
|Ax|=|QRx|=|Qy|=b.
\]
Once $y$ is obtained, $x$ can be computed via simply inverting the matrix $R$. Hence, the original frame $A$ is \textit{equivalent} to the standardized frame $Q$ in the sense that the two transforms $A,Q$ have the same range.
That is, $M^A$ is injective if and only if $M^Q$ is injective.
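The reduction can be sketched in a few lines of NumPy (generic data, names ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 4, 9
A = rng.standard_normal((N, n))       # rank n with probability one
x0 = rng.standard_normal(n)
b = np.abs(A @ x0)

Q, R = np.linalg.qr(A)                # A = QR with Q^T Q = I
y0 = R @ x0                           # unknown of the standardized problem
assert np.allclose(np.abs(Q @ y0), b)           # |Q y| = |A x|
assert np.allclose(np.linalg.solve(R, y0), x0)  # invert R to recover x
```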
In this section, we propose one modification of PhaseLift to recover the rank one matrix $X_0$.
The idea is based on the following simple fact.
\textit{ Among the feasible set $\mathcal{A}(X)=b^2$ and $X$ being positive semidefinite, to recover the rank one solution, we should choose the matrix $X$ whose leading eigenvalue is maximized. }
\begin{figure}
\caption{Maximizing the leading eigenvalue yields the rank one solution.}
\label{local1}
\end{figure}
Consider the model
\[
\min_X - \sigma_1(X),
\]
subject to $X$ positive semidefinite and $\mathcal{A}(X)=b^2$,
where
$\sigma_1(X)$ refers to the largest eigenvalue function of $X$. See Fig.~\ref{local1}.
Then we have the following theoretical result.
\begin{thm} Suppose that $A$ is a standardized matrix with $N\ge 2n-1$ in the real case and $N\ge 4n-2$ in the complex case. Then with probability one, the global minimum is attained if and only if the minimizer is $X=x_0 x_0^\top$.
\end{thm}
\begin{proof}
Because $\mathcal{A}(X)=b^2$ and $A$ consists of orthonormal columns,
\[
e\cdot b^2=\sum_{i=1}^N a_i^\top X a_i=tr(A^\top A X)=tr(X)=\sum_{i=1}^n \sigma_i(X), \]
where $\sigma_i(X)$ refers to the $i$-th eigenvalue of $X$.
Because $X$ is positive semidefinite, the largest eigenvalue of $X$, which cannot exceed $e\cdot b^2$, is maximized if and only if $X$ is a rank one matrix. Finally, according to the above results, when $N$ exceeds the thresholds $2n-1$ or $4n-2$, with probability one, the rank one matrix is unique, which completes the proof.
\end{proof}
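The trace identity used in the proof holds for every symmetric $X$, feasible or not, once the frame is standardized; a minimal NumPy check:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((2 * n - 1, n)))  # standardized frame

X = rng.standard_normal((n, n))
X = X + X.T                                   # any symmetric matrix
# sum of measurements = tr(Q^T Q X) = tr(X) because Q^T Q = I
assert np.isclose(np.sum(np.diag(Q @ X @ Q.T)), np.trace(X))
```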
To address the problem, we propose the following alternating direction method (ADM).
The ADM can be formulated as
\[
\min_{X,Y} L_\beta(X,Y,\lambda):=-\sigma_1(X)-\langle \lambda, X-Y\rangle+\beta \|X-Y\|_F^2/2,
\]
subject to $X$ positive semidefinite and $\mathcal{A}(Y)=b^2$.
Hence,
the update of $X,Y$ is
\begin{equation}\label{X}
arg\min_{X} -\sigma_1(X)+\beta \|X-Y-\lambda/\beta\|_F^2/2 \textrm{ subject to $X$ positive semidefinite,}
\end{equation}
\[
arg\min_Y \beta \|X-Y-\lambda/\beta\|_F^2/2 \textrm{ subject to $\mathcal{A}Y=b^2$}.\]
The iteration becomes
\begin{enumerate}
\item Update $X$: Let $Y^k+\lambda^k/\beta=UD U^\top $ be an eigenvalue decomposition with the diagonal entries of $D$ in decreasing order; then,
thanks to the rotational invariance of the Frobenius norm, the minimizer in Eq.~(\ref{X}) is
\[
X^{k+1}=UD_X U^\top, \textrm{ where }\; D_X=\max( D,0)+\beta^{-1} e_1e_1^\top,
\]
i.e., the $(1,1)$ entry of $D$ is the largest eigenvalue and is increased by $\beta^{-1}$ in the $X$ update.
\item Update $Y$ via the projection of $X^{k+1}-\lambda^k/\beta$,
\[
Y^{k+1}=\mathcal{A}^\top (\mathcal{A}\mathcal{A}^\top )^{-1} b^2+(I-\mathcal{A}^\top (\mathcal{A}\mathcal{A}^\top )^{-1} \mathcal{A})Z, \textrm{ where } Z=X^{k+1}-\lambda^k/\beta.
\]
The matrix $\mathcal{A}^\top (\mathcal{A}\mathcal{A}^\top )^{-1} \mathcal{A}$ is the orthogonal projector onto $Range(\mathcal{A}^\top )$ which is spanned by $\{a_ia_i^\top \}_{i=1}^N$, each of which has trace one.
\item Update $\lambda$:
\[
\lambda^{k+1}=\lambda^k-\beta (X^{k+1}-Y^{k+1}).
\]
\end{enumerate}
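A compact NumPy sketch of the three steps above. The helper names (`adm_maxeig`, `proj_affine`) and the initialization are ours; the $Y$-step realizes the projection through the Gram matrix $(\mathcal{A}\mathcal{A}^\top)_{ij}=(a_i\cdot a_j)^2$ of the rank-one frame elements, and, the problem being nonconvex, convergence to the global solution is not guaranteed:

```python
import numpy as np

def adm_maxeig(A, b2, beta=1.0, iters=200):
    """ADM sketch for  min -sigma_1(X)  s.t.  diag(A X A^T) = b^2,  X >= 0."""
    N, n = A.shape
    G = (A @ A.T) ** 2                  # <a_i a_i^T, a_j a_j^T>_F = (a_i . a_j)^2
    Ginv = np.linalg.inv(G)             # assumes {a_i a_i^T} linearly independent

    def calA(X):                        # the operator X -> diag(A X A^T)
        return np.einsum('ij,jk,ik->i', A, X, A)

    def calAadj(y):                     # adjoint: y -> sum_i y_i a_i a_i^T
        return A.T @ (y[:, None] * A)

    def proj_affine(Z):                 # projection onto {Y : calA(Y) = b^2}
        return Z + calAadj(Ginv @ (b2 - calA(Z)))

    X, Lam = np.eye(n), np.zeros((n, n))
    Y = proj_affine(X)
    for _ in range(iters):
        # X-step: eigendecompose Y + Lam/beta, clip to PSD, boost top eigenvalue
        w, U = np.linalg.eigh(Y + Lam / beta)   # eigenvalues in increasing order
        d = np.maximum(w, 0.0)
        d[-1] += 1.0 / beta                     # + beta^{-1} on the leading one
        X = (U * d) @ U.T
        # Y-step: project X - Lam/beta onto the measurement constraint
        Y = proj_affine(X - Lam / beta)
        Lam = Lam - beta * (X - Y)              # multiplier step
    return X, Y
```

By construction every iterate $Y^k$ satisfies $\mathcal{A}(Y)=b^2$ exactly while $X^k$ stays positive semidefinite, so $\|X-Y\|_F$ can serve as a convergence monitor.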
\begin{rem}[Counterexamples]
Because $\sigma_1(X)$ is convex in $X$, minimizing $-\sigma_1(X)$ is a nonconvex minimization problem.
Theoretically, there is no guarantee that the global optimal solution can always be found numerically; however, the
empirical study shows that the exact recovery will occur with high probability.
Here is one counterexample.
\[
A=\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & -1 & -1 \\
1 & \sqrt{3/2} & 0 \\
1 & -\sqrt{3/2} & 0 \\
1 & 0 &\sqrt{3} \\
1 & 0 &-\sqrt{3}
\end{array}
\right)
\]
Then the feasible set consists
of matrices $\left(
\begin{array}{ccc}
1-3\mu & 0 & 0 \\
0 & 2\mu & 0 \\
0 & 0 & \mu
\end{array}
\right)$ with $\mu\in [0,1/3]$. The maximization of the leading eigenvalue has
two local solutions: one is $\mu=0$ (the rank-one solution) and the other is $\mu=1/3$ (a rank-two solution); which one is reached
depends on the initialization. See Fig.~\ref{localOpt0}.
\end{rem}
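The one-parameter feasible family in the remark can be verified directly (here $x_0=e_1$, so $b^2=e$):

```python
import numpy as np

s, t = np.sqrt(1.5), np.sqrt(3.0)
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0, -1.0],
              [1.0,  s,    0.0],
              [1.0, -s,    0.0],
              [1.0,  0.0,  t],
              [1.0,  0.0, -t]])
b2 = np.abs(A @ np.array([1.0, 0.0, 0.0])) ** 2   # all ones

for mu in np.linspace(0.0, 1.0 / 3.0, 7):
    X = np.diag([1 - 3 * mu, 2 * mu, mu])         # the whole feasible family
    assert np.allclose(np.diag(A @ X @ A.T), b2)
# sigma_1 = max(1-3mu, 2mu): local maxima at mu=0 (rank 1) and mu=1/3 (rank 2)
```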
\begin{figure}
\caption{Maximizing the leading eigenvalue $\sigma_1(X)$ yields two local optimal solutions.}
\label{localOpt0}
\end{figure}
The following experiments illustrate that when the ratio $N/n$ is not large enough, the PhaseLift solution is often not rank one, whereas maximizing the leading eigenvalue successfully recovers the rank one matrices;
see Table~\ref{defaultR} for the real case and Table~\ref{defaultC} for the complex case.
\begin{table}[htdp]
\caption{The number of successes out of $50$ random trials with $N=2n-1$. ``via Q" refers to the standardized measurement.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{n} &
\multicolumn{2}{c|}{PhaseLift} &
\multicolumn{2}{c|}{$\min-\sigma_1(X)$}\\
\cline{2-5}
& via $A$ & via $Q$
& via $A$ & via $Q$\\
\hline
5& 37& 38& 13& 50 \\
10& 28& 31& 11&49 \\
15 & 17 & 21 & 13& 47 \\
20 &16 & 25 & 15& 48 \\
25 &10 & 13 & 4& 48\\
30 & 3 & 7 &13& 48 \\
35& 3 &4 &16& 47 \\
40 & 1 &5 & 6& 46 \\
45& 0 &0 & 4& 48\\
50& 0 &0 & 12& 50 \\
\hline
\end{tabular}
\label{defaultR}
\end{center}
\end{table}
\begin{table}[htdp]
\caption{The number of successes out of $20$ random trials via $N=2n-1$, $3n-1$, $4n-2$ standardized measurements (complex case). }
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
n& N=2n-1& N=3n-1 & N=4n-2\\
\hline
5& 3& 20& 20\\
10& 2& 20 & 20\\
15 & 1 & 20 & 20\\
20 & 0 & 20 & 20\\
25 & 0 & 20 &20\\
30 & 0 & 20 &20\\
35& 0 & 20 &20\\
40 & 0 &20 & 20\\
45& 0 &20 & 20\\
50& 0 &20 & 20\\
\hline
\end{tabular}
\end{center}
\label{defaultC}
\end{table}
The result of nonconvex minimization depends on the choice of initialization for $X^0$. When $X^0$ is near $X_0$, the exact recovery can be obtained.
\begin{prop} Let $X_0=x_0 x_0^\top $. Let $X$ be some positive semidefinite matrix.
Let $f(t)$ be the spectral norm of the matrix $tX_0+(1-t)X$, \[
f(t)=\|tX_0+(1-t)X\|, \; 0\le t\le 1.
\]
Let $v_1$ be the unit eigenvector corresponding to the largest eigenvalue of $X$. If $tr(X_0 v_1v_1^\top )\ge \|X\|$, then $f(t)$ increases on the interval $[0,1]$. Hence, $\min (-\sigma_1(X))$ yields the recovery of $X_0$.
\end{prop}
\begin{proof}
Observe that $f(t)$ is convex in $t\in [0,1]$. It suffices to show that $f(t)\ge f(0)$ for $t\in (0,1)$. According to the subdifferential of the matrix spectral norm\cite{Watson199233}, we have \[
t^{-1}(f(t)-f(0))\ge tr((X_0-X)^\top G),
\]
where $G$ is a subgradient of the spectral norm at $X$. Choose $G=v_1 v_1^\top $, then we have $ tr((X_0-X)^\top G)=tr(X_0 v_1v_1^\top )-f(0)$, which completes the proof.
\end{proof}
\begin{rem} We provide a few examples to illustrate the recovery of $X_0$ via $\min (-\sigma_1(X))$.
Let $x_0=e\in \mathbf{R}^3$ and $X_0=ee^\top\in\mathbf{R}^{3\times 3}$. Suppose $a_1=e_1$, $a_2=e_2$ and $a_3=e_3$. Then, any feasible matrix has the form
\[
X=\left(
\begin{array}{ccc}
1 & 1-\alpha_1 & 1-\alpha_2 \\
1-\alpha_1 & 1 & 1-\alpha_3 \\
1-\alpha_2 & 1-\alpha_3 & 1
\end{array}
\right) \textrm{ with } \alpha_i\ge 0.
\]
Consider the case $\alpha_1=\alpha_2=\alpha$ and $\alpha_3=\alpha t$, then denote
\begin{equation} \label{M3}
X_{\alpha,t}:=\left(
\begin{array}{ccc}
1 & 1-\alpha & 1-\alpha \\
1-\alpha & 1 & 1-\alpha t \\
1-\alpha & 1-\alpha t & 1
\end{array}
\right),
\end{equation}
and
\[ \det(X_{\alpha,t})=\alpha^2 t(4-2\alpha -t).\] Thus, $X_{\alpha,t}$ is positive semidefinite if and only if \[ \textrm{ $0\le t\le 4-2\alpha$ and $0\le \alpha\le 2$.}\]
Hence, $X_{\alpha, 4-2\alpha}$ has two positive eigenvalues and one zero eigenvalue if $0<\alpha< 2$. For instance,
when $\alpha=1/2$ and $t=3$, $X_{\alpha,t}$ has eigenvalues $1.5, 1.5, 0$. By the previous proposition, a positive semidefinite matrix $X_{\alpha,t}$ with $0\le \alpha\le 1/2$ is driven back to $X_0$ by maximizing the leading eigenvalue.
See Fig.~\ref{Matrix3}.
\end{rem}
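Both the determinant formula and the eigenvalues quoted for $\alpha=1/2$, $t=3$ can be checked numerically:

```python
import numpy as np

def X(alpha, t):
    return np.array([[1.0, 1 - alpha, 1 - alpha],
                     [1 - alpha, 1.0, 1 - alpha * t],
                     [1 - alpha, 1 - alpha * t, 1.0]])

# det(X_{alpha,t}) = alpha^2 t (4 - 2 alpha - t)
for alpha, t in [(0.5, 3.0), (0.3, 1.0), (1.2, 0.4)]:
    assert np.isclose(np.linalg.det(X(alpha, t)),
                      alpha ** 2 * t * (4 - 2 * alpha - t))

# on the boundary t = 4 - 2 alpha: alpha = 1/2 gives eigenvalues 0, 1.5, 1.5
assert np.allclose(np.linalg.eigvalsh(X(0.5, 3.0)), [0.0, 1.5, 1.5], atol=1e-9)
```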
\begin{figure}
\caption{The left subfigure shows $\sigma_1(X_{\alpha, 4-2\alpha})$.}
\label{Matrix3}
\end{figure}
The next example illustrates the necessity of trace invariance in recovering $X_0$.
When the matrix trace is not constant over the feasible set, maximizing the leading eigenvalue does not recover $X_0$ in general. For instance, consider $X_0=x_0x_0^\top$ with
$x_0=e=[1, 1, 1]^\top$, and \[
A=\left(
\begin{array}{ccc}
1 & 0 & 1 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 1 & 2
\end{array}
\right),\; b=|Ae|=\left(
\begin{array}{c}
2 \\
1 \\
1 \\
4
\end{array}
\right).\]
Any matrix $X$ in the feasible set has the form
\begin{equation}
X_{\alpha,\beta}:=\left(
\begin{array}{ccc}
3-2\beta & \beta & 2-\alpha \\
\beta & 1 & \alpha \\
2-\alpha & \alpha & 1
\end{array}
\right) \textrm{ with } \det(X_{\alpha,\beta})=-(2\alpha-\beta-1)^2,
\end{equation}
and thus $ \det(X_{\alpha,\beta})\ge 0$ yields $\beta=2\alpha-1$. In fact, the feasible set consists of matrices \begin{equation}\label{QR2}
X_{\alpha,2\alpha-1}=(1-t) \hat e \hat e^\top+tee^\top, \hat e=[3,1,1]^\top, \; t=(\alpha+1)/2\in [0, 1].
\end{equation}
Maximizing the leading eigenvalue yields the solution $\hat e\hat e^\top$, which is not $X_0$. Alternatively, consider the QR factorization,
\[
A=QR=\left(
\begin{array}{ccc}
1/\sqrt{2} & 0 & -1/\sqrt{3} \\
0 & 1 & 0 \\
0 & 0 & 1/\sqrt{3} \\
1/\sqrt{2} & 0 & 1/\sqrt{3}
\end{array}
\right)
\left(
\begin{array}{ccc}
\sqrt{2} & \sqrt{2} & \sqrt{2} \\
0 & 1 & 0 \\
0 & 0 & \sqrt{3}
\end{array}
\right),\]
which yields
the problem instead,
\[ b=|QRe|=|Qx_0|,\; x_0:=Re=\left(
\begin{array}{c}
3\sqrt{2} \\
1 \\
\sqrt{3}
\end{array}
\right).\]
Then the feasible set $
\{X: diag(QXQ^\top)=b^2, X\succeq 0\}
$ consists of
\begin{equation}
X_{\alpha}:=\left(
\begin{array}{ccc}
18 & 3\sqrt{2}\alpha & 3\sqrt{6} \\
3\sqrt{2}\alpha & 1 & \sqrt{3}\alpha \\
3\sqrt{6} & \sqrt{3}\alpha & 3
\end{array}
\right), \; \alpha\in\mathbf{R}.
\end{equation}
Let $x_0=[3\sqrt{2},1,\sqrt{3}]^\top$ and $\hat x_0=[3\sqrt{2},-1,\sqrt{3}]^\top$.
The feasible set consists of matrices \begin{equation}\label{QR3}
X_{\alpha}=(1-t) \hat x_0 \hat x_0^\top+tx_0x_0^\top, \; t=(\alpha+1)/2\in [0, 1].
\end{equation}
When $\alpha$ lies in $(0, 1)$,
maximizing the leading eigenvalue of $X_{\alpha}$ yields the exact recovery of $X_0$.
\section{Low rank approaches}
In PhaseLift,
all eigenvalues of $X$ may need to be computed in each iteration in order to
project onto the feasible set, which consists of positive semidefinite matrices of size $n$. The projection obviously becomes a laborious task when $n$ is large.
Here we propose to replace the feasible set with a subset consisting of rank-$r$ matrices,
where $r$ is much smaller than $n$.
Write the positive semidefinite matrices $X$ in PhaseLift as
$X=xx^\top$ with $x\in \mathbf{R}^{n,r}$ or $x\in \mathbf{C}^{n,r}$. Then, the original constraint in PhaseLift becomes
\[ b^2= \mathcal{A}(X)=diag(AXA^\top)=|Ax|^2.\]
\subsection{ADM with $r=1$}
Here we focus on the case $r=1$. In section~\ref{rankR}, we will discuss the case $r>1$.
When $r=1$, we arrive at the problem,
\[
\textrm{ finding $x\in \mathbf{R}^n$ or $ \mathbf{C}^n$ satisfying } |Ax|= b.
\]
In~\cite{Wen}, the framework
\begin{equation}\label{Wen}
\min \frac{1}{2}\||z|-b\|^2,\textrm{ subject to } z=Ax
\end{equation}
is proposed to address phase retrieval.
They introduce
the augmented Lagrangian function
\begin{equation}
\label{WenL}
L(z,x, \lambda)=\frac{1}{2}\||z|-b\|^2+\lambda\cdot (Ax-z)+\frac{\beta}{2}\|Ax-z\|^2.
\end{equation}
The algorithm consists of updating $z,x,$ and $\lambda$ as follows.
\begin{alg} \label{WenAlg}
Initialize $x^0$ randomly and $\hat \lambda^0=0$. Then repeat the steps for $k=0,1,2,\ldots$.
\begin{eqnarray*}
z^{k+1}&=&\frac{u}{|u|} \frac{b+\beta |u|}{1+\beta}, \; u=Ax^k+\beta^{-1}\lambda^k,\\
x^{k+1} &= & A^\dagger ( z^{k+1}-\beta^{-1} \lambda^{k}),\\
\lambda^{k+1}&=& \lambda^k+\beta (A x^{k+1}-z^{k+1} ).
\end{eqnarray*}
\end{alg}
Let us simplify the algorithm.
Let $P=AA^\dagger=QQ^\top$. Assume that $A$ has rank $n$.
By eliminating $x^{k}$, the $\lambda$-iteration becomes \[
\beta^{-1}\lambda^{k+1}=(I-P)(\beta^{-1} \lambda^k-z^{k+1}).
\]
Thus, $A^\dagger \lambda^{k+1}=0$. In the end, we have the following algorithm.
\begin{alg}\label{WenAlg1} Denote $\hat \lambda^k = \beta^{-1}\lambda^k$.
Initialize $x^0$ randomly and $\hat \lambda^0=0$. Compute $z^0$. Then, repeat the steps for $k=0,1,2,\ldots$,
\begin{eqnarray*}
z^{k+1}&=&\frac{u}{|u|} \frac{b+\beta |u|}{1+\beta}, \; u=Pz^k+\hat \lambda^k,\\
\hat \lambda^{k+1}&=&(I-P)(\hat \lambda^k-z^{k+1}).
\end{eqnarray*}
\end{alg}
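A NumPy sketch of Algorithm~\ref{WenAlg1} (variable names ours; a small floor in the division guards the case $u_i=0$):

```python
import numpy as np

def adm_phase(A, b, beta=0.5, iters=300, seed=0):
    """Simplified ADM: iterate z and the scaled multiplier using only P = QQ^T."""
    rng = np.random.default_rng(seed)
    N, n = A.shape
    Q, _ = np.linalg.qr(A)
    P = Q @ Q.T                        # orthogonal projector onto range(A)
    z = A @ rng.standard_normal(n)     # z^0 = A x^0
    lam = np.zeros(N)                  # scaled multiplier, hat-lambda^0 = 0
    for _ in range(iters):
        u = P @ z + lam
        z = u / np.maximum(np.abs(u), 1e-15) * (b + beta * np.abs(u)) / (1 + beta)
        w = lam - z
        lam = w - P @ w                # (I - P)(hat-lambda - z)
    x = np.linalg.pinv(A) @ z          # x = A^+ z
    return x, z, lam
```

By construction $\hat\lambda^k$ stays in the null space of $P$, matching the identity $A^\dagger\lambda^{k+1}=0$ derived above.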
\begin{rem}[Equivalence under right matrix multiplication]
Note that the iteration is updated via $P=QQ^\top$, instead of $A$. The matrix $R$ does not appear in the $z$ and $\lambda$ iterations. Thus, the algorithm is ``invariant'' with respect to $R$. That is, for any invertible matrix $\hat R$, we get the same iterations $\{z^k,\hat \lambda^k\}_{k=1}^\infty$ when $(A, x^0)$ is replaced by \[ (Q\hat R, (\hat R)^{-1} Rx^0).\] In particular, the iteration with $Q$ yields the same result as the one with $A$ itself. However, ADM can produce different results when the \textit{left} matrix multiplication on $A$ is considered. See section~\ref{Failure1}.
\end{rem}
Suppose that $z^k$ converges to $z^*$ and $\hat \lambda^k$ converges to $\hat\lambda^*$. Then, $Pz^*=z^*$ and $P\hat \lambda^*=0$. Hence,
$x^*=A^\dagger z^*$, and $z^*= Pz^*=A x^*$. Consider the limit of the $z$-iteration,
\[
(1+\beta) z^*=\frac{u}{|u|} (b+\beta |u|)=\frac{u}{|u|} b+\beta (z^*+\hat \lambda^*).
\]
Thus, we have the orthogonal projection of $bu/|u|$ onto the range of $A$ and its null space,
\[\frac{u}{|u|} b=z^*-\beta\hat \lambda^*,\]
\begin{equation}\label{B}
\textrm{ thus } \|b\|^2=\| z^*\|^2+\beta^2 \|\hat \lambda^*\|^2.
\end{equation}
This result shows $\|Ax^*\|\le \|b\|$.
In particular, when $A=Q$, we have $\|x^*\|\le \|b\|=\|x_0\|$, i.e., any non-global solution has a smaller norm.
Besides, Eq.~(\ref{B}) suggests the usage of smaller $\beta$ to improve the recovery of $x_0$. Empirical experiments show that, starting with $\lambda^0=0$, a smaller value $\beta$ leads to a higher chance of exact recovery.
To analyze the convergence, we write the function $ L(z,x, \lambda)$ as
\[
\hat L(z,x, \lambda, s):=\frac{1}{2}\|z-bs\|^2+\lambda\cdot (Ax-z)+\frac{\beta}{2}\|Ax-z\|^2,
\]
where the entries of $s$ satisfy $|s_i|=1$ for $i=1,\ldots, N$; clearly, the optimal vector $s$ minimizing $\hat L$ is given by $u/|u|$. When $s$ is fixed, the following customized proximal point algorithm,
which consists of the iterations
\begin{eqnarray*}
z^{k+1}&=&s \frac{b+\beta |u|}{1+\beta}, \; u=Ax^k+\beta^{-1}\lambda^k,\\
\lambda^{k+1}&=& \lambda^k+\beta (A x^{k}-z^{k+1} ),\\
x^{k+1} &= & A^\dagger ( z^{k+1}-\beta^{-1} \lambda^{k+1})
\end{eqnarray*}
can be used to solve the least squares problem
\begin{equation}\label{LSE}
\min \frac{1}{2}\|z-sb\|^2,\textrm{ subject to } z=Ax.
\end{equation}
Gu et al.~\cite{COAPHE} provide the convergence analysis of the customized proximal point algorithm. More precisely, fixing $s$, let $(z^*,x^*,\lambda^*)$ be a saddle point of
$\hat L(z,x,\lambda,s)$ and
let \[
\|v^{k+1}-v^k\|_M^2:=(v^{k+1}-v^k)^\top M (v^{k+1}-v^k)
\textrm{ with }\]
\[
M:=[\beta^{1/2}A, -\beta^{-1/2}I]^\top [\beta^{1/2}A, -\beta^{-1/2}I], \; v:=
\left(
\begin{array}{c}
x \\
\lambda
\end{array}
\right).
\]
In Lemma 4.2, Theorem 4.2 and Remark 7.1~\cite{COAPHE}, the sequence $\{v^k\}$ satisfies
\[
\|v^{k+1}-v^*\|_M^2+\|v^k-v^{k+1}\|_M^2\le \|v^k-v^*\|_M^2,
\]
and then $\lim_{k\to \infty} \|v^k-v^{k+1}\|_M^2=0$. Any limit point of $[z^k,x^k,\lambda^k]$ is a solution of the problem in Eq.~(\ref{LSE}) with $s$ fixed.
However,
a convergence analysis of Algorithm~\ref{WenAlg} is not available, due to the lack of convexity in $z$.
In fact, when $s$ is updated in each $z$-iteration, this algorithm sometimes fails to converge, which is shown in our simulations; see Section~\ref{Failure1}.
\subsection{Recoverability}\label{Recovery}
We make the following two observations regarding $|Ax_0|=b$. Suppose the unknown signal $x_0$ satisfies $\|x_0\|=1$.
First, the vector $x$ is updated to maximize the inner product $|Ax|\cdot b$ in the ADM. However, because the norm constraint $\|x\|=1$ is not enforced explicitly, a non-global maximizer $x$ generally does not have unit norm, $\|x\|<\|x_0\|=1$. In fact, classic phase retrieval algorithms, e.g., ER, BIO, HIO~\cite{Fienup:82}, do not enforce the constraint directly. Second, for those indices $i$ with $a_i\cdot x$ close to zero, the unit vector $x$ to be recovered should be approximately perpendicular to the corresponding sensing vectors $a_i$. The candidate set $\{x: |a_i\cdot x|\le b_i\}$ forms a cone, including unit vectors approximately orthogonal to the $a_i$ whose $b_i$ is close to zero.
In particular, when $a_1\cdot x_0\neq 0$ and $a_i\cdot x_0=0$ for $i=2,\ldots, N$ with $N>n$, then the cone is exactly the one-dimensional subspace spanned by the vector $x_0$.
One important issue of non-convex minimization problems is that the initialization can affect the performance dramatically. The $x$-iteration in the ADM tends to produce a vector close to the right singular vector corresponding to the least singular value of $A$, in the sense that $A^\dagger z$ boosts the component along the singular vector corresponding to the largest singular value of $A^\dagger$, i.e., the smallest singular value of $A$. In the following, we will analyze the recovery problem from the viewpoint of singular vectors and derive an error estimate between the unknown signal and the singular vector.
Rearrange the indices such that $\{b_i\}$ are sorted in an increasing order, \[ 0\le b_1\le b_2\le\ldots\le b_N.\]
Divide the indices into three groups,
\[
\{1,2,\ldots, N\}=I\cup II\cup III.
\]
We shall use subscripts $I,II,III$ to indicate the indices from these three groups. The set $ I$ consists of the indices corresponding to the smallest $N_{I}$ terms among $\{b_i\}$. The set $ II $ consists of the indices corresponding to the largest $N_{II}$ terms among $\{b_i\}$.
Denote the matrix consisting of rows $\{a_i\}_{i\in I} $ by $A_I$. Let $A_I$ and $A_{II}$ consist of $N_I$ and $N_{II}$ rows, respectively.
In the following, we illustrate that the singular vector $x_{min}$ corresponding to the least singular value of $A_I$ is a good initialization $x^0$ in the ADM.
Without loss of generality, assume that $x_0=e_1$ and that all the rows $\{a_i\}_{i=1}^N$ of $A$ are normalized, $\|a_i\|=1$. Observe that the desired vector satisfies $|A x_0|\le b$ and $\|x_0\|=1$. Hence, we look for a \textit{ unit} vector $x$ in the closed convex set \[
|a_i\cdot x|\le b_i \textrm{ for all $i$}.
\]
Whether phase retrieval can be solved depends on the structure of $A$. We make the following assumptions.
\begin{itemize}
\item First, sufficiently many indices $i\in I$ exist, such that
\[ \|b_I\|^2:=\sum_{i\in I} b_i^2 \textrm{ is sufficiently small compared to } \|A_I x_1\|^2,\]
where $x_1$ is a unit vector orthogonal to $x_0$. (Clearly, the matrix $A_I$ has rank at least $n-1$.)
\item Second, there are at least $n$ indices in $II$, such that \[ \textrm{entries } \{b_i\} \textrm{ are \textit{large} for } i\in II , \] and the matrix $A_{II}$ has rank $n$.
\end{itemize}
The assumption that $\{b_i\}_{i\in I}$ is close to zero implies that
$\{a_i\}_{i\in I}$ are almost orthogonal to $x_0$.
Thus, we instead
solve the problem \[ \textrm{ $\min_x \|A_I x\|^2$ with $\|x\|=1$.}\]
The minimizer, denoted by $x_{min}$, is the right singular vector corresponding to the least singular value of $A_I$. Then,
\[
\|A_I x_{min}\|\le \|b_I\|.
\]
Let $0\le \mu_1\le \ldots\le \mu_n$ be the singular values of $A_I$ with corresponding right singular vectors $v_i$.
Then $v_1=\pm x_{min}$ and we can write \[
x_0=\alpha_1 x_{min}+\sqrt{1-\alpha_1^2}\, w, \]
with some unit vector $w$ orthogonal to $x_{min}$.
Let
\[ x_1:=-(1-\alpha_1^2)^{1/2} x_{min}+\alpha_1 w;\] then $x_1$ is a unit vector orthogonal to $x_0$.
Note that \[ \|x_0x_0^\top -x_{min}x_{min}^\top\|^2= \|x_0x_0^\top -x_{min}x_{min}^\top\|^2_F/2
=1-\alpha_1^2.\]
The following proposition gives a bound for the distance $ \|x_0x_0^\top -x_{min}x_{min}^\top\|$ via the ratio $\|A_I x_0\|/\|A_I x_1\|$.
\begin{prop} Let $\alpha_1=|x_0\cdot x_{min}|$.
Then,
\begin{eqnarray}\label{closeness}
&& (2-\alpha_1^2) \|b_I\|^2\ge (1-\alpha_1^2) \|A_Ix_1\|^2
\end{eqnarray}
Therefore, as $\|b_I\|$ is small enough, $1-\alpha_1^2$ must be close to $0$.
Note that $ \|b_I\|^2=\|A_Ix_0\|^2$, then \begin{eqnarray}\label{closeness1}
&& \|b_I\|^2 \|A_Ix_1\|^{-2}\ge \left(\frac{ 1-\alpha_1^2}{2-\alpha_1^2}\right) \ge
\frac{1}{2}\|x_0x_0^\top -x_{min} x_{min}^\top \|^2.
\end{eqnarray}
\end{prop}
\begin{proof} Note that
\[
\begin{cases} \|A_Ix_0\|^2=\alpha_1^2\|A_Ix_{min}\|^2+(1-\alpha_1^2)\|A_Iw\|^2, \\
\|A_Ix_1\|^2=(1-\alpha_1^2)\|A_Ix_{min}\|^2+\alpha_1^2\|A_Iw\|^2. \;
\end{cases}
\]
Because $x_{min}$ is the right singular vector corresponding to the least singular value,
\begin{eqnarray*}
&&(2-\alpha_1^2) \|A_Ix_0\|^2-(1-\alpha_1^2)\|A_Ix_1\|^2\\
&=&\|A_Ix_{min}\|^2+2(1-\alpha_1^2)^2 \left(
\|A_Iw\|^2-\|A_Ix_{min}\|^2\right)\ge 0.
\end{eqnarray*}
\end{proof}
Denote the sign vector of $A_{II} x_{min}$ by $u_{II}$, \[
u_{II}=\frac{ A_{II} x_{min}}{|A_{II}x_{min}|}. \]
The smallness of $\|A_{II}(x_{min}-x_0)\|$ implies that
$A_{II}x_{min}$ is close to $A_{II} x_0$. In particular, when the magnitudes of the entries of $b_{II}$ are large enough, both vectors have the same sign \[ \frac{ A_{II} x_{min}}{|A_{II}x_{min}|}=\frac{ A_{II} x_{0}}{|A_{II}x_{0}|}\] and then
\[ A_{II} x_{0}=u_{II}b_{II}.
\]
Once the sign vector is retrieved, the vector $x_0$ can be computed via the least squares solution
\[
x_0=A_{II}^{\dagger} (u_{II}b_{II}).
\]
\subsection{Real Gaussian matrices }
As an example of estimating $\|b_I\|$ and $\|A_Ix_1\|$,
let $A\in \mathbf{R}^{N\times n}$ be a random matrix consisting of i.i.d. normal $(0,1)$ entries. In the following, we illustrate that when $N_I/N$ is small enough, $x_{min}$ is a good initialization for $x_0$ and thus we can recover the missing sign
vector $(Ax_0)/b$.
Let $x_0=e_1$. Then $x:=a_i\cdot x_0$ follows the normal $(0,1)$ distribution.
Let $a>0$ be a function of $N_I/N$ and satisfy \begin{equation}\label{Feq}
F(a):=\int_{-a}^a (2\pi)^{-1/2}\exp(-x^2/2) dx=N_I/N.\end{equation}
Then the leading terms of Taylor series of Eq.~(\ref{Feq}) yields
\begin{equation}\label{aF}
(2/\pi)^{1/2}(a-a^3/6)\le N_I/N\le (2/\pi)^{1/2} a.
\end{equation}
Define the ``truncated" second moment
\[
\sigma_a^2:= \int_{-a}^a x^2 (2\pi)^{-1/2}\exp(-x^2/2)dx.
\]
Taking the Taylor expansion of $\sigma_a^2$ in terms of $a$ yields
the following result.
\begin{prop}\label{sigmaB}
\[ \sigma_a^2\le (2/\pi)^{1/2} a^3/3.\]
Additionally, when $N_I/N$ tends to zero, $\sigma^2_a$ is approximately $ (N_I/N)^3(\pi/6)$, where Eq.~(\ref{aF}) is used.
\end{prop}
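The bound of the proposition is easy to verify numerically; the midpoint quadrature below is our own illustrative discretization (the bound itself follows from $\exp(-x^2/2)\le 1$ on $[-a,a]$).

```python
import math

# Truncated second moment sigma_a^2 by midpoint quadrature, checked
# against the bound sigma_a^2 <= (2/pi)^{1/2} a^3 / 3.
def sigma_a_sq(a, m=20000):
    h = 2.0 * a / m
    s = 0.0
    for i in range(m):
        x = -a + (i + 0.5) * h
        s += x * x * math.exp(-x * x / 2.0) * h
    return s / math.sqrt(2.0 * math.pi)
```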
When $N_I/N$ tends to zero, the following proposition shows that $N_I^{-1/2}\|b_I\|$ is approximately $ (\pi/6)^{1/2}N_I/N$,
see Fig.~\ref{bnorm} for one numerical simulation.
\begin{prop}\label{BnormA}
Suppose $N_I/N<c$ for some constant $c$. Then
with high probability \footnote{ An event occurs ``with high probability" if for any $\alpha\ge 1$,
the probability is at least $1-c_\alpha N^{-\alpha}$, where $c_\alpha$ depends only on $\alpha$. } \[
N_I^{-1/2}\|b_I\|\le c_6(\pi/6)^{1/2}N_I/N,
\]
for some constant $ c_6$.
\end{prop}
\begin{proof}
Because $x_0=e_1$,
$\|b_I\|$ is the norm of the first column of $A_I$.
Let $\hat \mu>0$ be the sample quantile of order $N_I/N$~\cite{ChenH}, $\hat \mu=|a_{N_I}\cdot x_0|$, where the indices are sorted so that $|a_i\cdot x_0|$ is nondecreasing.
We have the following probability inequality for $|\hat \mu-a|$:
For every $\epsilon>0$,
\[
\mathbb{P}(|\hat \mu-a|>\epsilon)\le 2\exp(-2N\delta_\epsilon^2)
\textrm{ for all $N$,}\] where \[\delta_\epsilon:=\min \{F(a+\epsilon)-N_I/N,N_I/N-F(a-\epsilon) \}.\]
That is, with high probability we have \begin{equation}\label{aa}
a-\epsilon\le |a_{N_I}\cdot x_0|\le a+\epsilon.
\end{equation}
The proof is based on the Hoeffding inequality in large deviation;
see
Theorem 7, p. 10, \cite{ChenH}.
Let $\hat N_I$ be the cardinality of the set
$\hat I:=\{i: |a_i\cdot x_0|\le a\}$. With high probability,
we have\footnote{ For every $\epsilon_1>0$,
let $\beta:=F^{-1} ( N_I(1-\epsilon_1)/N)$ and $\epsilon:= a-\beta=F^{-1}(N_I/N)-\beta>0$. Then
\[ \beta-\epsilon\le |a_{ N_I/(1-\epsilon_1)}\cdot x_0|\le \beta+\epsilon \textrm{ with high probability.}\]
} \[ \hat N_I \ge N_I (1-\epsilon_1)\textrm{ for any $\epsilon_1>0$}.\]
Let $\{Z_i\}_{i=1}^N$ be independent bounded random variables, \[
Z_i :=
(N/N_I)^3\left((a_i\cdot x_0)^2- \sigma_a^2(N/ N_I)\right) \textrm{ if } |a_i\cdot x_0|\le a, \textrm{ zero, otherwise.}
\]
\[ \textrm{ Then } \mathbb{E}[Z_i]= (N/N_I)^3\left(\sigma_a^2-\sigma_a^2 (N/ N_I) \mathbb{P}(|a_i\cdot x_0|\le a)\right)=0,\] i.e.,
$Z_i$ is centered. The Hoeffding inequality
(e.g., Prop. 5.10 \cite{Roman})
yields for some positive constants $c_4,c_5$, \footnote{ The sub-gaussian norm $\|Z_i\|_{\psi_2}$ is bounded by a constant (depending on $N_I/N$), independent of $N$: \[\sup_{p\ge 1} p^{-1/2} (\mathbb{E}[Z_i^p])^{1/p}\le\sup_{p\ge 1} p^{-1/2} (N/N_I)^2 a^2(N_I/N)^{1/p-1}\le (N/N_I)^2 a^2(N_I/N)^{-1}.\]}
\[ \mathbb{P}(N^{-1}|\sum_{i=1}^N Z_i|\ge t)\le c_4 \exp(- c_5 N t^2).\]
That is,
with probability at least $1- c_4 \exp(- c_5 N t^2)$,
\[
\left | N^{-1} \|\hat b_{ I}\|^2- \sigma_a^2\frac{\hat N_I}{ N_I}\right|\le t(N_I/ N)^3.
\]
Thanks to $\hat N_{ I}\ge N_I (1-\epsilon_1)$ with high probability and Eqs.~(\ref{aF}), (\ref{aa}), \[
\frac{\|b_I\|^2}{N_I}\le \frac{ \|\hat b_I\|^2}{\hat N_I}+
\frac{\epsilon_1 N_I (a+\epsilon)^2 }{\hat N_{ I}}\le \frac{ \|\hat b_I\|^2}{\hat N_I}+
\epsilon_1 \frac{ (a+\epsilon)^2}{1-\epsilon_1},\]
and
\[
\frac{\|\hat b_{ I}\|^2}{\hat N_{ I}}\le \sigma_a^2 \frac{N}{ N_I}+t\frac{N_I}{\hat N_{ I}} \left(\frac{N_I}{N}\right)^2 \le
\left( \sigma_a^2(N_I/N)^{-3}+t/(1-\epsilon_1) \right)
\left(\frac{N_I}{N}\right)^2.
\]
Together with Prop. \ref{sigmaB}, with high probability
\[N_I^{-1/2}\|b_I\|\le c_6 (\pi/6)^{1/2} (N_I/N) \textrm{
for some constant $c_6$.} \]
\end{proof}
To bound the norm $\|x_0x_0^\top-x_{min}x_{min}^\top\|$ in
Eq.~(\ref{closeness1}), we need to compute $\|A_Ix_1\|^2$.
Denote by $A'_{I}$ the sub-matrix of $A_I$ with the first column deleted, i.e.,
\[
A_I=[ *_{N_I\times 1}, A'_I].
\]
Denote by $\{\delta_{i}\}_{i=2}^{n}$ the singular values of the sub-matrix $A_I'$.
Since $x_1$ is orthogonal to $x_0=e_1$, its first entry vanishes, and we have the lower bound $\|A_I x_1\|=\|A_I' x_1'\|\ge \delta_2$, where $x_1'$ collects the last $n-1$ entries of $x_1$.
Observe that the first column of $A$ is independent of the remaining columns of $A$.
Note that entries of $A'_{I}$ are i.i.d. Normal(0,1).
According to the random matrix theory of Wishart matrices, with high probability, the singular values $\{\delta_i\}$ of the sub-matrix are bounded between $\sqrt{N_{I}}-\sqrt{n}$ and $\sqrt{N_{I}}+\sqrt{n}$.
More precisely,
\[
\mathbb{P}(\sqrt{N_I}-\sqrt{n}-t\le \delta_2\le \delta_n\le \sqrt{N_I}+\sqrt{n}+t)\ge 1-2e^{-t^2/2},\; t\ge 0,
\]
see Eq.~(2.3)~\cite{rudelson2010non}.
Together with
$ N_I^{-1/2}\|b_I\|\le c_6(\pi/6)^{1/2}N_I/N$ in Prop.~\ref{BnormA} and Eq.~(\ref{closeness1}),
we have
\[
\|x_0x_0^\top -x_{min}x_{min}^\top\|/\sqrt{2}\le \|b_I\|/(\sqrt{N_I}-\sqrt{n-1}-t)\le c_6\sqrt{\pi/6}\,(N_I/N) \left(1-\sqrt{(n-1)/N_I}-t/\sqrt{N_I}\right)^{-1}.
\]
Let $ c_7:= c_6 \left(1-\sqrt{(n-1)/N_I}-t/\sqrt{N_I}\right)^{-1} $; then we have the following result.
\begin{prop} Suppose that $\sqrt{N_I}>\sqrt{n-1}+t$ for some $t>0$.
Then with high probability
\[
\frac{1}{\sqrt{2}}\|x_{min}x_{min}^\top-x_0 x_0^\top \|\le (N_I/N)\, c_7\sqrt{\pi/6}
\]
for some constant $ c_7>0$, independent of $N$.
\end{prop}
\begin{figure}
\caption{Figures show $N_I^{-1/2}\|b_I\|$ in one numerical simulation (cf. Prop.~\ref{BnormA}).}
\label{bnorm}
\end{figure}
\begin{rem}\label{Simulation}
The following simulation illustrates
that $x_{min}$ is a good initialization for $x^0$.
Use the alternating minimization of $x$ and $s$ to solve the problem
\[
\min_s \min_{x} \| Ax-sb\|^2, \; |s|=1.
\]
Choose $A$ to be a Gaussian random matrix from $\mathbf{R}^{240\times 60}$ and $N_I=90$. Rescale both $x_0,x^*$ to unit vectors, where $x^*$ is the vector $x$ at the final iteration. In Fig.~\ref{xmin}, the
reconstruction error is measured in terms of
\[ \|x_0x_0^\top -x^*{x^*}^\top\|.\]
The figures in Fig.~\ref{xmin} show the results of $100$ trials using two different initializations. Obviously the singular vector is a good initialization.
\begin{figure}
\caption{Figures show the histogram of the reconstruction error via the random initialization (left) and via the $x_{min}$ initialization (right).}
\label{xmin}
\end{figure}
\end{rem}
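The alternating minimization of the remark can be sketched as follows; we read $s$ as a sign vector with $|s_i|=1$, and the sizes and seed are our own choices. Each full sweep cannot increase the objective $\||Ax|-b\|$.

```python
import numpy as np

# Alternating minimization of min_s min_x ||Ax - s*b||^2 with |s_i| = 1:
# the s-step picks the optimal signs, the x-step solves least squares.
rng = np.random.default_rng(1)
N, n = 240, 60
A = rng.standard_normal((N, n))
x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
b = np.abs(A @ x0)
A_pinv = np.linalg.pinv(A)

x = rng.standard_normal(n)        # random initialization
objective = []
for _ in range(100):
    objective.append(np.linalg.norm(np.abs(A @ x) - b))
    s = np.sign(A @ x)            # s-step: optimal signs for current x
    x = A_pinv @ (s * b)          # x-step: least squares given the signs
```

Monotonicity of the objective holds by construction, which is what the test below checks; whether the iteration reaches the global minimizer depends on the initialization, as the remark's simulation illustrates.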
\subsection{ADM with rank-$r$}\label{sec:rankR}
Under some circumstances, the singular vector corresponding to the least singular value becomes a poor initialization for $x_0$, for instance, in the presence of noise. Empirically, we find that the ADM with rank $r$ can alleviate the situation;
see Section~\ref{NoiseEx}.
We propose the rank-$r$ method:
\begin{equation}
\min_{x\in \mathbf{R}^{n,r}} \frac{1}{2}\||Ax|-b\|^2,
\end{equation}
where $|z|$ refers to the vector whose $i$-th entry is the vector norm of the $i$-th row of the matrix $z$, i.e., $(\sum_{j=1}^r z_{i,j}^2)^{1/2}$.
Note that
when $r=n$, the set $\{xx^\top : |Ax|=b\}$ is convex.
In practical applications, we consider $r<n$ to save the computational load.
Hence, instead of vectors $x,z$ in Eq.~(\ref{Wen}), we consider matrices
$x\in \mathbf{R}^{n,r}$ and $z\in \mathbf{R}^{N,r}$ with $Ax=z$ in
the non-convex minimization problem,
\begin{equation}\label{rankR}
\min_{z,x} L(z,x,\lambda),\end{equation}
\begin{equation}
L(z,x,\lambda):= \{ \frac{1}{2}\||z|-b\|^2+ tr(\lambda^\top (Ax-z)) +\frac{\beta}{2}\|Ax-z\|_F^2\}.
\end{equation}
Similar to Alg.~\ref{WenAlg}, we can adopt the ADM consisting of $z,x,\lambda$-iterations to solve the non-convex minimization problem.
With $\lambda$ fixed, the optimal matrices $z,x$ have the following explicit expression.
\begin{prop}
Suppose $(z,x)$ is a minimizer in Eq.~(\ref{rankR}); then, $(zV,xV)$ is also a minimizer for any orthogonal matrix $V\in \mathbf{R}^{r,r}$. Moreover,
\begin{itemize} \item for each $z$ fixed, $x=A^\dagger (z-\beta^{-1} \lambda)$ is the optimal matrix.
\item
Fixing $x$, write $u=Ax+\beta^{-1}\lambda$; then the optimal $z$ is \[
z=\frac{u}{|u|}\frac{b+\beta |u|}{1+\beta}.
\]
\end{itemize}
\end{prop}
\begin{proof} We only prove the $z$-part; the $x$-part is obvious. Because $L$ is separable in the rows $z_i$ of $z$, the optimization of $z_i$ can be solved via
\[
\min_{z_i} \frac{1}{2} (\|z_i\|^2-2b_i \|z_i\|+b_i^2)+\frac{\beta}{2} (\|u_i\|^2-2u_i\cdot z_i+\|z_i\|^2).
\]
The optimality of $z_i$ occurs if and only if $z_i$ is parallel to $u_i$. Let $z_i =\alpha_i u_i/\|u_i\| $ with $\alpha_i$ to be determined. Thus,
\[
\min_{\alpha_i} \frac{1}{2} (\alpha_i^2-2b_i |\alpha_i |+b_i^2)+\frac{\beta}{2} (\|u_i\|^2-2\alpha_i \|u_i\|+\alpha_i^2).
\]
Then, $\alpha_i\ge 0$ and \[
\alpha_i-b_i+\beta (\alpha_i-\|u_i\|)=0,
\]
i.e., $\alpha_i=(1+\beta)^{-1} (b_i+\beta \|u_i\|)$ completes the proof.
\end{proof}
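The closed-form updates of the proposition can be sketched as follows; the sizes, seed and $\beta$ are illustrative assumptions.

```python
import numpy as np

# Closed-form z- and x-updates of the rank-r ADM subproblems.
rng = np.random.default_rng(2)
N, n, r, beta = 40, 10, 2, 0.5
A = rng.standard_normal((N, n))
b = np.abs(rng.standard_normal(N))
lam = np.zeros((N, r))
x = rng.standard_normal((n, r))

def z_update(x, lam):
    # rowwise z_i = (u_i/|u_i|) (b_i + beta |u_i|) / (1 + beta)
    u = A @ x + lam / beta
    nu = np.linalg.norm(u, axis=1, keepdims=True)
    return (u / nu) * (b[:, None] + beta * nu) / (1 + beta)

def x_update(z, lam):
    # least squares solution of min_x ||A x - (z - lam/beta)||_F
    return np.linalg.pinv(A) @ (z - lam / beta)

z = z_update(x, lam)
x_new = x_update(z, lam)
```

The test verifies the scalar optimality condition $\alpha_i-b_i+\beta(\alpha_i-\|u_i\|)=0$ for the $z$-update and the normal equations for the $x$-update.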
Empirical experimentation shows that the above ADM can usually yield an optimal solution $x\in \mathbf{R}^{n,r}$
with rank not equal to one, which does satisfy $|Ax|=b$. To recover the rank-one matrix $x_0x_0^\top $, we take the following steps. First,
we standardize $A$ to be a matrix consisting of orthogonal columns via QR or SVD factorizations,
such that $I_{n\times n}$ lies in the range of $\mathcal{A}$. Indeed,
\[
\sum_{i=1}^N a_ia_i^\top =A^\top A=I_{n\times n}.
\]
When $y\in \mathbf{R}^{n,r}$ satisfies $|Qy|=b$, we have $b^2\cdot e= tr(yy^\top)=\|y\|^2_F$.
Hence, the norm $\|y\|^2_F=\sum_{i=1}^n \sigma_i(y)^2$ remains constant for all the feasible solutions $\{y: |Qy|=b\}$, where $\sigma_i(y)$ refers to the singular values of $y$.
Second, consider the objective function to retrieve the matrix $y$ with the maximal leading singular value,
\begin{equation}\label{sigma1}
\min_y \left(\frac{1}{2}\| Qy-z\|_F^2-\gamma \sigma_1(y)\right),
\end{equation}
where $\gamma>0$ is some parameter to balance the fidelity $|Qy|=|z|=b$ and the maximization of the leading singular value $\sigma_1(y)$. \footnote{ In experiments, we choose $\gamma=0.01$.}
Since only the leading singular value of $y$ is maximized, there is no guarantee that we always obtain the global optimal solution.
\begin{prop}
Write $Q^\top z$ in the SVD factorization, \[ Q^\top z=U_z D_zV_z^\top.\]
Then the optimal matrix $y$ in Eq.~(\ref{sigma1}) is $y=U_zD_yV_z^\top$ in the SVD factorization, where
$D_y=D_z+\gamma e_1 e_1^\top$.
\end{prop}
\begin{proof}
Observe that
\[
\frac{1}{2}\| Qy-z\|_F^2-\gamma \sigma_1(y)=\frac{1}{2}\|U_z^\top Q U_yD_yV_y^\top V_z- D_z\|_F^2-\gamma D_y(1,1),
\]
where $D_y(1,1)$ refers to the (1,1) entry of the diagonal matrix $D_y$.
Due to the rotational invariance of the Frobenius norm, then the first term achieves its minimum when
\begin{equation}\label{DD} U_z^\top Q U_yD_yV_y^\top V_z=D_z+\alpha e_1 e_1^\top \end{equation} and $\alpha$ is the minimizer of
\[
\min_\alpha \left(\alpha^2/2-\gamma (D_z(1,1)+\alpha)\right), \textrm{ i.e., } \alpha=\gamma.
\]
Also, Eq.~(\ref{DD}) yields $U_z^\top Q U_y=I$ and $V_y^\top V_z=I$,
which completes the proof.
\end{proof}
\begin{rem}
Suppose that
$|Ax|=b$ for some $x\in \mathbf{R}^{n,r}$. Then, the minimizer of \[
\min_{x} \left(\frac{1}{2}\| |Ax|-b\|^2-\gamma \sigma_1(x )\right)
\]
is $(1+\gamma) x_0$. Indeed, consider $x=\alpha x_0$. Then,
\[
\alpha=\arg\min_\alpha \left(\frac{1}{2} (\alpha-1)^2 \|b\|^2-\gamma \alpha\right)=1+\gamma.
\]
\end{rem}
In Eq.~(\ref{rankR}), replacing $A$ with $Q$ and replacing the term
$\frac{\beta}{2}\|Ax-z\|_F^2$ with \[
\beta\left(\frac{1}{2}\| Qy-z\|_F^2-\gamma \sigma_1(y)\right),
\]
then we adopt the ADM to
retrieve a rank-one solution.
\begin{alg}
Initialize a random matrix $y^{0}\in \mathbf{R}^{n,r}$ and $\lambda^0=0_{N,r}\in \mathbf{R}^{N,r}$.
Repeat the following steps for $k=0,1,2,\ldots$; at convergence, let the solution $x^*$ be the first column of $U_z$, i.e., the singular vector corresponding to the maximal singular value.
\begin{enumerate}
\item $z$-iteration:
\[
u=Qy^{k}+\lambda^k\beta^{-1},\;
z^{k+1}=\frac{u}{|u|}\frac{b+\beta |u|}{1+\beta},
\]
\item $\lambda$-iteration:
\[
\lambda^{k+1}=\lambda^k+\beta (Qy^k-z^{k+1}),
\]
\item $y$-iteration:
\[
U_zD_z V_z^\top=Q^\top (z^{k+1}-\lambda^{k+1}\beta^{-1}),\;
y^{k+1}=U_z (D_z+\gamma e_1 e_1^\top)V_z^\top.
\]
\end{enumerate}
\end{alg}
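A minimal sketch of the algorithm above; $Q$ has orthonormal columns, and the sizes, $\beta$, $\gamma$ and the iteration count are our own choices.

```python
import numpy as np

# z-, lambda- and y-iterations of the rank-r ADM with the sigma_1 boost.
rng = np.random.default_rng(3)
N, n, r = 60, 15, 2
beta, gamma = 0.5, 0.01
Q = np.linalg.qr(rng.standard_normal((N, n)))[0]   # Q^T Q = I
x0 = rng.standard_normal(n)
b = np.abs(Q @ x0)

y = rng.standard_normal((n, r))
lam = np.zeros((N, r))
for k in range(200):
    # z-iteration
    u = Q @ y + lam / beta
    nu = np.linalg.norm(u, axis=1, keepdims=True)
    z = (u / nu) * (b[:, None] + beta * nu) / (1 + beta)
    # lambda-iteration
    lam = lam + beta * (Q @ y - z)
    # y-iteration: SVD, then add gamma to the leading singular value
    U, D, Vt = np.linalg.svd(Q.T @ (z - lam / beta), full_matrices=False)
    D[0] += gamma
    y = (U * D) @ Vt
x_star = U[:, 0]     # candidate solution: leading left singular vector
```

The test checks structural properties of the iterates (the rowwise closed form of $z$ and the unit norm of $x^*$); recovery quality is what the experiments in the paper assess.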
\subsection{Standardized frames with equal norm}
In the simulations (section~\ref{Failure1}), we will show the importance of the unit norm condition $\|a_i\|=1$ for $i=1,\ldots, N$ in the ADM approach.
When the QR factorization is used to generate an equivalent standardized matrix consisting of rows $\{a_i\}_{i=1}^N$,
the sensing vectors $\{a_i\}$ do not have equal norm in general.
The following theorem states that we can standardize $A$ to obtain an orthogonal matrix $Q$ whose rows have equal norm.
The proof is given in the appendix.
\begin{thm}\label{Standard}
Given a matrix $A\in \mathbf{R}^{N\times n}$ satisfying the rank* condition and $N> n$, we can find a unique diagonal matrix $D$ with $D_{i,i}>0$,
such that \[
D^{-1/2}A=QB,
\]
and $Q$ is a \textit{standardized} matrix, i.e.,
a projection matrix with $Q^\top Q=I_{n\times n}$ and
$(QQ^\top)_{i,i}=n/N$ for all $i$, where $B$ is some $n\times n $ nonsingular matrix.
\end{thm}
Here the diagonal value $n/N$ is the average of the diagonal entries of $QQ^\top$, since $\|Q\|_F^2= tr(Q^\top Q)= tr(QQ^\top )=n$.
Also,
\[
N=\|D^{-1/2} A\|_F^2=\|QB\|_F^2=\|B\|_F^2.
\]
With the uniqueness of $D$, $Q$ is also determined uniquely up to the right multiplication of an orthogonal matrix. Indeed, $B$ is uniquely determined up to the left multiplication of an orthogonal matrix:
\[
A^\top D^{-1/2}D^{-1/2} A=B^\top Q^\top QB=B^\top B.
\]
Recall that $A$ satisfies the rank* condition if any square $n$-by-$n$ sub-matrix of $A$ is full rank.
When a matrix $A$ satisfies the rank* condition then there exists
no orthogonal matrix $V\in\mathbf{R}^{n\times n}$, such that \begin{equation}\label{Block}AV=C=
\left(
\begin{array}{cc}
C_{1,1} & 0 \\
0 & C_{2,2}
\end{array}
\right),
\end{equation}
where the $0$s refer to zero sub-matrices with size $(N-N_1)\times n_1$ and size $N_1\times (n-n_1)$ and $C_{1,1}$ is an $N_1\times n_1$ matrix. \footnote{ Otherwise, it is easy to see that one of the following submatrices must be rank deficient: (1) the top submatrix with entries $\{C_{i,j}: i,j=1,\ldots, n\}$ or (2) the bottom submatrix with entries $\{C_{i,j}: i=N-n+1,\ldots, N,\; j=1,\ldots, n\}$.}
Furthermore, the condition ensures that
the norm of each row must be positive. It is easy to see that, with probability one, Gaussian random matrices satisfy the rank* condition.
\section{Experiments}
\subsection{ADM failure experiments}\label{Failure1}
Due to the nature of nonconvex minimization, the algorithm can fail to converge, which is indeed observed in the following two simulations.
First,
let us denote the input data by $(A,b)$ with $b_i\neq 0$ and the unknown signal by $x_0$.
Mathematically, solving problem (i) \[ |Ax_0|=b \] is equivalent to solving problem (ii) \[
b_i^{-1}|a_i\cdot x_0|=1, \quad i=1,\ldots, N.
\]
However, solving these two problems via the ADM~\cite{Wen} can yield different results.
Let $A$ be a real Gaussian random matrix, $A\in \mathbf{R}^{N\times n}$. Let $b=|Ax_0|$.
Rescale the system by $b^{-1}$, i.e., the input data becomes $(b^{-1}A, 1_{N\times 1})$, thus equal measurement values.
Figure~\ref{Failure} shows
the error $\| |AA^\dagger z|-b\|$ at each iteration. Here we use the random initialization for $x^0$.
\begin{figure}
\caption{The left figure shows the error $\| |AA^\dagger z|-b\|$ vs. the number of iterations via ADM with rank one~\cite{Wen}.}
\label{Failure}
\end{figure}
Second, we demonstrate a few experiments where the ADM also fails to converge. The convergence failure sheds
light on the importance of the two proposed assumptions in Section~\ref{Recovery}.
We sort a set of random generated sensing vectors $\{a_i\in \mathbf{R}^{100}\}_{i=1}^{400}$, such that \[ \textrm{
$|a_i\cdot x_0|\le |a_j\cdot x_0|$ for all $i<j$.}\] That is, the indices are sorted according to the values $b_i$.
We consider three different manners of selecting $200$ sensing vectors $\{a_i\}$: (1)
the vectors with the smallest indices, (2) the vectors with the largest indices, and
(3) a combination with $199$ small indices and one large index. Finally, we compare these results with the result using a random selection of sensing vectors, as shown in Fig.~\ref{Selection}. Here, we fix rank $r=1$ and $\beta=0.01$. Clearly, the combination with smaller indices and larger indices performs best.
\begin{figure}
\caption{Figure shows the histogram of $\| x_0x_0^\top-x^*{x^*}^\top\|$ for the different selections of sensing vectors.}
\label{Selection}
\end{figure}
\subsection{Comparison experiments with noises}\label{NoiseEx}
In this subsection, we demonstrate the performance of the ADM with $r=1$ and $r>1$ on a number of simulations, where
Gaussian white noise is added.
The noise-corrupted data $b$ is generated as
\[
b^2=\max ((Ax_0)^2+noise,0).
\]
The signal-to-noise ratio is defined by
\[
SNR=10\log_{10}\frac{\|Ax_0\|^2}{\|noise\|^2}.
\]
In Fig.~\ref{Noise}, we consider $A$ to be a real Gaussian random matrix with $N=2n$.
We rerun the experiments $200$ times to test the effect of random initialization.
The first row shows the histogram result with $n=30$ and $noise=0$. All the algorithms with $r=1,2,$ and $3$ work well.
The second row shows the histogram result with $n=30$ and $SNR=29$.
Here, we use $\beta=0.001$. Obviously the algorithms with $r>1$ have better performances.
\begin{figure}
\caption{Figure shows the histogram of $\| x_0x_0^\top-x^*{x^*}^\top\|$ for ADM with $r=1,2,3$.}
\label{Noise}
\end{figure}
Let $n=30$, $N=3n$ with $\beta=0.01$.
In Fig.~\ref{Noise1}, we demonstrate the comparison between the random initialization and the singular vector initialization, i.e., the initialization is chosen to be the singular vector corresponding to the least singular value of $A_{I} \in \mathbf{R}^{45\times 30}$.
Data $b$ is generated with $noise=2\times 10^{-4} \times Normal(0,1)$, $SNR=25dB$.
Furthermore, in the presence of noise, when ADM with $r=1$ is employed, the difference between the two initializations is very small, in contrast to the simulation result shown in Remark~\ref{Simulation}.
\begin{figure}
\caption{Figure shows the histogram of $\| x_0x_0^\top-x^*{x^*}^\top\|$ for the random initialization and the singular vector initialization.}
\label{Noise1}
\end{figure}
\subsection{Phase retrieval experiments}
Next, we report phase retrieval simulation results (Fourier matrices), with $x_0$ being real, positive images. Images are reconstructed subject to the positivity constraints (i.e., the leading singular vector). The results are provided to show some advantage of ADM with $r=2$ over ADM with $r=1$. Here we use $\beta=0.1$ in the following experiments.
According to our experience, the phase retrieval with the Fourier matrix is a very difficult problem, in particular in the presence of noise.
To alleviate the difficulty, researchers have suggested random illumination to enforce the uniqueness of solutions~\cite{IOPORT.06071995}.
It is known that the phase retrieval solution is unique up to three classes of trivial ambiguities: constant global phase, spatial shift, and conjugate inversion. With high probability absolute uniqueness holds with a random phase illumination; see Cor. 1~\cite{IOPORT.06071995}. Our experience shows that the random phase illumination works much better than the above uniform illumination.
In Fig.~\ref{FFTlena}, we demonstrate the ADM with $r=1,2$ on the images with random phase illumination. Let $x_0\in \mathbf{R}^{300\times 300}$ be the intensity of the Lena image\footnote{We downsample the Lena image from http://www.ece.rice.edu/~wakin/images/ by approximately a factor $2$ and use zero padding with the oversampling rate\cite{Millane:90}\cite{Miao:98} $1.23$. }, see the bottom subfigure. We add noise and generate the data
\[
b^2=\max(|Ax_0|^2+noise,0),
\]
where $A$ is the Fourier matrix. The SNR is $39.8dB$ and the oversampling is $1.23$. Reconstruction errors
$\|\frac{x^*}{\|x^*\|_F}-\frac{x_0}{\|x_0\|_F}\|_F$ for rank one and rank two are $0.126$ and $0.109$, respectively. The ADM with $r=2$ has a better reconstruction.
\begin{figure}
\caption{\bf Figures show the reconstructed images and the error $\| |AA^\dagger z|-b\|_F$ vs. the number of iterations via ADM with rank $r=1$ (the first row) and $r=2$ (the second row).
}
\label{FFTlena}
\end{figure}
\subsection{Conclusions}
In this paper, we discuss the rank-one matrix recovery via two approaches. First, the rank-one matrix is computed among the Hermitian matrices as in PhaseLift. We make the observation that matrices in the feasible set have equal trace norm via the measurement matrices with orthonormal columns. Experiments show that with the aid of these orthogonal frames, exact recovery occurs under a smaller $N/n$ ratio compared with the PhaseLift in both real and complex cases.
In the second part of the paper, we discuss the ``lifting'' of the nonconvex alternating direction minimization method from rank-one to rank-$r$ matrices, $r>1$. The benefit of this relaxation should not be underestimated, because the construction of large Hermitian matrices is avoided, as is the associated projection onto Hermitian matrices. Compared with the rank-one ADM, the ADM with rank $r>1$ performs better in recovering noise-contaminated signals, as demonstrated in the simulation experiments.
Another contribution is the error estimate between the unknown signal and the singular vector corresponding to the least singular value.
The initialization plays an important role in the nonconvex minimization. We demonstrate that a good initialization is the least singular vector of the subset of sensing vectors corresponding to the small measurement values $b_i$. In the case of real Gaussian matrices, the error is reduced as the number of measurements grows at a rate proportional to the dimension of the unknown signals.
One of our future works is the generalization of the error estimate to complex frames, in particular the case of
the Fourier matrix.
\appendix
\section{Standardization of $A$}
In the following, we will prove Theorem~\ref{Standard} in several steps.
We discuss the existence first. The uniqueness analysis will be shown later. Fixing $A$, let $\mathcal{D}$ be the set of inverses of diagonal matrices $D$, \[ \mathcal{D}:=\{D^{-1}\in \mathbf{R}^{N\times N}: \|D^{-1/2} A\|_F^2=\sum_{i=1}^N D_{i,i}^{-1} \sum_{j=1}^n A_{i,j}^2=N,\; D_{i,i}^{-1}\ge 0\}.\]
Clearly $\mathcal{D}$ is nonempty and convex compact.
In fact, $D_{i,i}$ has a positive lower bound, \[ D_{i,i}\ge N^{-1} \sum_{j=1}^n A_{i,j}^2 \textrm{ for all $i$.}\] For each $D^{-1}\in \mathcal{D}$, let $f: \mathcal{D}\to \mathcal{D}$ be the function \[ f(D^{-1})=\hat D^{-1}, \textrm{ where
$QB=D^{-1/2}A$ is the QR factorization},\]
and each row of
$\hat D^{-1/2}A B^{-1}$ has norm one. In fact, the function $f$ generates iterations $\{(D^{k})^{-1}\}_{k=0}^\infty$ with
$(D^{k+1})^{-1}=f((D^{k})^{-1})$.
That is, start with $Q^0=A$. Repeat the two steps for $k=0,1,2,\ldots$ until convergence:
\begin{eqnarray*}
&(i)& \textrm{ Normalize the rows of $Q^k$ by forming $(D^k)^{-1/2}Q^k$;} \\
&(ii)& \textrm{Take the QR factorization:}
(D^k)^{-1/2}Q^{k}=Q^{k+1}R^{k+1}.
\end{eqnarray*}
Since $D^{-1/2} A$ has rank $n$, $B$ has rank $n$ and $B^{-1}$ exists. The function $f$ is well defined: once $B$ is given, choose the diagonal matrix $\hat D$ to be the one that normalizes the rows of $A B^{-1}$.
According to
Brouwer's fixed-point theorem, we have the existence of $D$, such that $D^{-1/2}AB^{-1}=Q$ consists of orthogonal columns and each row has norm one.
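The normalize-then-QR iteration can be sketched as follows; sizes, seed and iteration count are our own choices. At a fixed point, $Q$ has orthonormal columns and rows of equal norm $\sqrt{n/N}$ (after rescaling).

```python
import numpy as np

# Fixed-point iteration: row-normalize, then re-orthogonalize by QR.
rng = np.random.default_rng(4)
N, n = 12, 4
Q = rng.standard_normal((N, n))
for _ in range(1000):
    Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)  # row-normalize
    Q = np.linalg.qr(Q)[0]                            # re-orthogonalize
row_norms = np.linalg.norm(Q, axis=1)
```

Since $Q^\top Q = I_{n\times n}$ forces $\sum_i \|q_i\|^2 = n$, equal row norms means each row norm is $\sqrt{n/N}$.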
Before the uniqueness proof, we state one equation of $D$.
\begin{prop}
The diagonal matrix $D$ satisfies the equation,
\begin{equation}\label{EqD} (n/N) D_{i,i}=(A (A^\top D^{-1} A)^{-1}A^\top )_{i,i}.\end{equation}
\end{prop}
\begin{proof}
According to $D^{-1/2} A=QB $, we have
\[
(n/N)D_{i,i}=(D^{1/2}QQ^\top D^{1/2})_{i,i}=(A (B^\top B)^{-1} A^\top )_{i,i}.
\]
Note that $B^\top B=A^\top D^{-1/2} Q^\top Q D^{-1/2} A=A^\top D^{-1} A$. Thus, \[
\; (n/N)D_{i,i}=(A (A^\top D^{-1} A)^{-1}A^\top )_{i,i}.
\]
\end{proof}
\begin{prop}\label{Jens}
Let $p_i\in (0,1)$, $i=1,\ldots, n$ with $\sum_{i=1}^n p_i=1$. Let $\lambda_i>0$ for $i=1,\ldots, n$. Then
\[
\sum_{i=1}^n p_i\lambda_i \ge (\sum_{i=1}^n p_i\lambda_i^{-1})^{-1},
\]
where equality holds if and only if $\{\lambda_i\}_{i=1}^n$ are equal.
\end{prop}
\begin{proof}
Let $f(x)=x^{-1}$ for $x>0$, which is strictly convex. The statement follows from Jensen's inequality,
\[\sum_{i=1}^n p_i\lambda_i^{-1}\ge
(\sum_{i=1}^n p_i\lambda_i )^{-1}.
\]
\end{proof}
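A small numeric illustration of the proposition; the weights and values below are arbitrary positive numbers.

```python
# Weighted arithmetic mean vs. weighted harmonic mean:
# sum p_i lambda_i >= (sum p_i / lambda_i)^(-1), equality iff lambda constant.
p = [0.2, 0.3, 0.5]
lam = [0.7, 1.9, 3.2]
lhs = sum(pi * li for pi, li in zip(p, lam))
rhs = 1.0 / sum(pi / li for pi, li in zip(p, lam))
```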
\begin{prop}
Suppose that $Q$ is a standardized matrix satisfying the rank* condition.
Let $F^1$ be a positive diagonal matrix. Then the iteration
\[
F^{k+1}_{i,i} =(N/n) (Q(Q^\top (F^k)^{-1} Q )^{-1}Q^\top )_{i,i},\; k=1,\ldots.
\]
yields $\lim_{k\to \infty } F^k=cI_{N\times N}$, where $c$ is some scalar.
\end{prop}
\begin{proof}
We will show $ tr(F^{k+1})\le tr(F^{k})$. Suppose that
$\{(\lambda_i^{-1},q_i)\}_{i}$ are eigenvalues-eigenvectors of $Q^\top (F^k)^{-1}Q$,
then $\{(\lambda_i,q_i)\}_{i}$ are eigenvalues-eigenvectors of $(Q^\top (F^k)^{-1}Q)^{-1}$. Hence,
\[
\lambda_i^{-1}=q_i^\top Q^\top (F^k)^{-1}Qq_i,
\]
and
\[ tr(F^{k+1})=\sum_{j=1}^N \mu^{k+1}_j=(N/n)\sum_{i=1}^n \lambda_i= (N/n)\sum_{i=1}^n (\sum_{j=1}^N(F^k_{j,j})^{-1}(Qq_i)_{j}^2)^{-1} .\]
Denote $|(Qq_i)_{j}|$ by $p_{j,i}$. Then $\sum_{j=1}^N p_{j,i}^2=1$ and $\sum_{i=1}^n p_{j,i}^2=n/N$. Let $\{ \mu_i^{k}\}_{i=1}^N$ be the diagonal entries of $F^{k}$.
Then
\[
\sum_{j=1}^N \mu_j^{k+1}=(N/n)\sum_{i=1}^n(\sum_{j=1}^N (\mu^k_{j})^{-1} p_{j,i}^2)^{-1}\le (N/n)\sum_{i=1}^n\sum_{j=1}^N \mu^k_{j} p_{j,i}^2=\sum_{j=1}^N\mu^k_j,
\]
where the last equality is due to $\sum_{i=1}^n p_{j,i}^2=n/N$.
Hence, $ tr(F^{k+1})\le tr(F^k)$. Denote one of the limit points of $\mu^{k}_i$ by $\mu^*_i$; then
\[
(\sum_{j=1}^N p_{j,i}^2(\mu^*_j)^{-1})^{-1}=\sum_{j=1}^N p_{j,i}^2 \mu^*_j \textrm{ for all } i.
\]
Hence, $\mu^*_i=\mu^*_j$ for all $ i,j$ with $p_{j,i}> 0$. Due to the rank* condition, $QV$ cannot be written in the form of Eq.~(\ref{Block}) for any orthogonal matrix $V$ whose columns are orthonormal vectors $\{q_i\}_{i=1}^n$ with $p_{j,i}=|(QV)_{j,i}|$.
Hence, $c=\mu^*_i=\mu^*_j$
for all $i,j$.
\end{proof}
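The trace monotonicity can be checked numerically; we first produce an approximately standardized $Q$ by the normalize-and-QR iteration of the existence argument, since the proposition assumes $(QQ^\top)_{i,i}=n/N$ (sizes and seed are our own choices).

```python
import numpy as np

# Build an (approximately) standardized Q, then run the F-iteration
# F^{k+1} = (N/n) diag(Q (Q^T (F^k)^{-1} Q)^{-1} Q^T) and record traces.
rng = np.random.default_rng(5)
N, n = 12, 4
Q = rng.standard_normal((N, n))
for _ in range(2000):
    Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    Q = np.linalg.qr(Q)[0]

F = np.diag(rng.uniform(0.5, 2.0, N))
traces = []
for _ in range(100):
    traces.append(np.trace(F))
    M = Q @ np.linalg.inv(Q.T @ np.linalg.inv(F) @ Q) @ Q.T
    F = (N / n) * np.diag(np.diag(M))
```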
Finally, we complete the proof with the following proposition.
\begin{prop}
Suppose that $A$ satisfies the rank* condition.
Let $D_*$ be one solution of Eq.~(\ref{EqD}). Then with any positive diagonal matrix $D^0$,
the iteration
\[
D^{k+1}=(N/n)diag(A(A^\top (D^k)^{-1} A)^{-1} A^\top ),\; k=1,\ldots,
\]
yields
\[
\lim_{k\to \infty} D^k=D_*.
\]
Thus, $D_*$ is unique.
\end{prop}
\begin{proof}
Let $D_*^{-1/2}A=Q B$ be the QR factorization of $D_*^{-1/2}A$. Then
\[(A^\top D^{-1}A)^{-1}= B^{-1} (Q^\top (D_*^{-1/2} D D_*^{-1/2})^{-1} Q)^{-1}B^{-\top},\]
and the iteration becomes
\[
D^{-1/2}_*D^{k+1}D^{-1/2}_* =(N/n)diag(Q(Q^\top ( D_*^{-1/2}D^k D_*^{-1/2})^{-1} Q)^{-1} Q^\top ),\; k=1,\ldots.
\]
Let $F^{k}=D^{-1/2}_*D^{k}D^{-1/2}_*$.
Since $A$ satisfies the rank* condition, then for any nonsingular matrix $B$, $D_*^{-1/2}AB^{-1}$ also satisfies the rank* condition and
cannot be written in the form in
Eq.~(\ref{Block}) for any orthogonal matrix.
According to Prop.~\ref{Jens}, the proof is completed.
\end{proof}
\end{document}
\begin{document}
\author{Alexander I. Zhmakin\footnote{Ioffe Physical Technical Institute \& SoftImpact, Ltd., e-mail: [email protected]}}
\title{A Compact Introduction to Fractional Calculus}
\maketitle
\tableofcontents
\pagenumbering{arabic}
\section{Fractional Derivatives}
Fractional calculus (FC) is now an efficient tool for problems in science and engineering \cite{old74,mil93,kir94,sam87,car97,pod98,nakh03,tar11}.
The term ``fractional'' is kept for historical reasons --- it is a misnomer since the order can be real \cite{gor03,ben13}.
A historical survey of the development of FC, starting from the letter by Gottfried Leibniz to Guillaume l'H\^opital (1695) and including contributions by Joseph Liouville, Bernhard Riemann, Niels Abel, Gr\"unwald, Aleksey Letnikov, Gerasimov, Marcel Riesz, Magnus Mittag-Leffler, Paul L\'evy, Raoul Nigmatullin, Yuri Rabotnov, Arthur Erd\'elyi and others during the XIX and XX centuries, can be found in refs. \cite{ross97,hil08,kir10,kir11,kir14}.
Lorenzo \& Hartley \cite{lor98} analysed the minimal set of criteria for the generalized calculus formulated by B. Ross:
\begin{itemize}
\item {\em analyticity}: if $f (z)$ is an analytic function of the complex variable $z$, the derivative $D_z^{\nu} f (z)$ is an analytic function of $z$ and $\nu$;
\item {\em backward compatibility}: the operation $D_z^{\nu} f (z)$ must produce the same result as ordinary differentiation when $\nu = n$ is a positive integer; it must produce the same result as ordinary $n$-fold integration when $\nu = -n$ is a negative integer, and then $D_z^{\nu} f (z)$ must vanish along with its $n - 1$ derivatives at $x = c$;
\item {\em zero property}: the operation of order zero leaves the function unchanged, $D_z^0 f (z) = f (z) $;
\item {\em linearity}: the fractional operators must be linear, $D_z^{\nu} [a f (x) + b g (x) ] = a D_z^{\nu} f (x) + b D_z^{\nu} g (x)$;
\item {\em composition (index law)}: the law of exponents holds for integration of arbitrary order, $D_z^{\nu} D_z^{\mu} f (x) = D_z^{\nu + \mu} f (x)$.
\end{itemize}
The fractional derivatives are based on the extension of the repeated integration and are defined either by the continuation of the fractional integral to the negative order or by the integer order derivatives of the fractional integrals \cite{hil08}.
There is no unique definition \cite{sam87,pod98,tar11} (and notation is not standardized \cite{hil08,oli14}).
There are several kinds of definitions of the fractional derivatives (Riemann-Liouville, Caputo, Gr\"unwald-Letnikov, Riesz, Weyl, Marchaud, Caputo-Fabrizio, Yang, Chen, He and others) that are not equivalent \cite{li07,li11}. The initial conditions for the Caputo derivative are expressed in terms of the initial values of the integer order derivatives; for the zero initial conditions the Riemann-Liouville, Caputo and Gr\"unwald-Letnikov derivatives coincide \cite{rah10}.
Most of the fractional derivatives are defined through the fractional integral; thus the derivatives inherit some non-local behaviour \cite{iyi16}.
The following relation is valid for all types of the fractional derivatives \cite{zas08}
\begin{equation*}
\frac{d^{\alpha + \beta}}{d t^{\alpha + \beta}} = \frac{d^{\alpha}}{d t^{\alpha}} \frac{d^{\beta}}{d t^{\beta}} = \frac{d^{\beta}}{d t^{\beta}} \frac{d^{\alpha}}{d t^{\alpha}}.
\end{equation*}
The most frequently used are the Riemann-Liouville (e.g., in the calculus), the Caputo (e.g., in the physics, the numerical computations) and Gr\"unwald-Letnikov (e.g., in the signal processing, the engineering) fractional derivatives \cite{agu14,oli14}.
Grigoletto \& de Oliveira \cite{gri13} considered the generalization of the fundamental theorem of calculus --- Fundamental Theorem of Fractional Calculus (FTFC) for the cases of the Riemann-Liouville, Liouville, Caputo, Weyl and Riesz derivatives.
Baleanu \& Fernandez \cite{bal19} considered the possible classification of the fractional operators into broad classes under some restrictions and criteria. In particularity, many operators could be considered as the special cases of \cite{fer19}
\begin{equation*}
_c^AI_x^{\alpha,\beta} \int\limits_c^x (x - t)^{ \alpha - 1} A \left( (x - t)^{\beta} \right) f(t) d t, \qquad A (z) = \sum\limits_{k =0}^{\infty} a_k z^k
\end{equation*}
where $c$ is a constant often taken as zero or $\infty$, $\alpha$ and $\beta$ are complex parameters with positive real parts, $A (z)$ is a general analytic function whose coefficients $a_k \in \mathcal{C}$ are permitted to depend on $\alpha$ and $\beta$, and $x$ is a real variable larger than $c$.
\subsection{Riemann-Liouville Fractional Integral}
Both the Riemann-Liouville (RL) and the Caputo fractional derivatives are based on the RL fractional integral that for any $\alpha > 0$ is defined as \cite{mai07,del13}
\begin{equation}
\label{lrl}
J_{a^+}^{\alpha} f (x) = \frac{1}{\Gamma (\alpha)} \int\limits_a^x (x - t)^{\alpha - 1} f (t) d t.
\end{equation}
If $\alpha = 0$, then $J_{a^+}^0 = I$, where $I$ is the identity operator.
Here
$$\Gamma (\alpha) = \int\limits_0^{\infty} \exp (- u) u^{\alpha - 1} d u$$
is the Euler Gamma function.
This integral exists if $f (t)$ is a locally integrable function that for $t \rightarrow 0$ behaves like $O (t^{-\nu})$ with $\nu < \alpha$. For strict mathematical rigor one can use the framework of the Lebesgue spaces of summable functions or the Sobolev spaces of generalized functions \cite{gor97a}.
The RL integral is a generalization of Cauchy's formula for an $n$-fold integral
\begin{equation*}
\int\limits_a^x dx_1 \int\limits_a^{x_1} dx_2 \dots \int\limits_a^{x_{n - 1}} f (x_n) dx_n = \frac{1}{(n - 1)!} \int\limits_a^x (x - t)^{n - 1} f (t) dt
\end{equation*}
using the relation
\begin{equation*}
(n - 1)! = \prod_{k = 1}^{n - 1} k = \Gamma (n).
\end{equation*}
Equation (\ref{lrl}) is the {\em left-sided} RL integral. The {\em right-sided} RL integral is written as
\begin{equation}
\label{rrl}
J_{b^-}^{\alpha} f (x) = \frac{1}{\Gamma (\alpha)} \int\limits_x^b (t - x)^{\alpha - 1} f (t) d t.
\end{equation}
The RL integral is a special case of the convolution integral of the Volterra type \cite{mai00}
\begin{equation*}
K * f (x) = \int\limits_a^x k (x - t) f (t) dt.
\end{equation*}
The RL integral has the {\em semi-group} property (also called {\em additivity law} \cite{hil08}):
\begin{equation*}
J_{a^+}^{\alpha} J_{a^+}^{\beta} f (x) = J_{a^+}^{\alpha + \beta} f (x), \quad \alpha > 0, \quad \beta > 0
\end{equation*}
which implies the {\em commutative} property \cite{mai00}: $ J_{a^+}^{\beta} J_{a^+}^{\alpha} = J_{a^+}^{\alpha} J_{a^+}^{\beta}$.
The RL fractional integral coincides with the classical definition when $\alpha \in \mathcal{N}$. The fractional integration improves the smoothness of functions \cite{die10}.
Sometimes the RL integral can be expressed via elementary functions, e.g.,
\begin{equation*}
J_{a^+}^{\alpha} (x - a)^{\mu} = \frac{\Gamma (\mu + 1)}{\Gamma (\alpha + \mu +1)} (x - a)^{\alpha + \mu}.
\end{equation*}
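This closed form gives a convenient numerical check. The sketch below is illustrative code (not taken from the cited references): it evaluates $J_{a^+}^{\alpha}$ by product integration, approximating $f$ by its midpoint value on each subinterval while integrating the weakly singular kernel exactly, so the singularity at $t = x$ causes no difficulty.

```python
from math import gamma

def rl_integral(f, a, x, alpha, n=2000):
    """Left-sided Riemann-Liouville integral J_{a+}^alpha f at the point x.

    Product integration: f is taken piecewise constant (midpoint value),
    and the weakly singular kernel (x - t)^(alpha - 1) is integrated
    exactly on each subinterval.
    """
    h = (x - a) / n
    total = 0.0
    for j in range(n):
        t0, t1 = a + j * h, a + (j + 1) * h
        d0, d1 = x - t0, max(x - t1, 0.0)
        # exact integral of the kernel over [t0, t1]
        w = (d0 ** alpha - d1 ** alpha) / alpha
        total += f((t0 + t1) / 2) * w
    return total / gamma(alpha)

# Check against the closed form
#   J_{a+}^alpha (x - a)^mu = Gamma(mu + 1)/Gamma(alpha + mu + 1) (x - a)^(alpha + mu)
a, x, alpha, mu = 0.0, 1.5, 0.5, 2.0
numeric = rl_integral(lambda t: (t - a) ** mu, a, x, alpha)
exact = gamma(mu + 1) / gamma(alpha + mu + 1) * (x - a) ** (alpha + mu)
print(numeric, exact)
```

The quadrature agrees closely with the closed form; sharper accuracy only requires a larger $n$.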
Particular cases of the RL fractional integrals are the Liouville fractional integrals \cite{gri13}, obtained by taking the limits $a \rightarrow - \infty$ and $b \rightarrow \infty$ in equations (\ref{lrl}) and (\ref{rrl}):
\begin{equation*}
J_{+}^{\alpha} f (x) = \frac{1}{\Gamma (\alpha)} \int\limits_{- \infty}^x (x - t)^{\alpha - 1} f (t) d t,
\quad
J_{-}^{\alpha} f (x) = \frac{1}{\Gamma (\alpha)} \int\limits_x^{\infty} (t - x)^{\alpha - 1} f (t) d t.
\end{equation*}
\subsection{Riemann-Liouville Fractional Derivative}
The left and the right Riemann-Liouville fractional derivatives are defined as \cite{pod98}
\begin{equation}
\label{lrld}
D_{a^+}^{\alpha} [ f (x)]= \left \{
\begin{array}{rl}
\displaystyle \frac{1}{\Gamma (1 - \alpha)}
\frac{d}{dx}
\int\limits_a^x (x - t)^{- \alpha} f(t) dt, & \quad \alpha \in (0,1)\\
\displaystyle \frac{df (x)}{dx},
& \quad \alpha = 1
\end{array} \right.
\end{equation}
and
\begin{equation}
\label{rrld}
D_{b^-}^{\alpha} [ f (x)]= \left \{
\begin{array}{rl}
\displaystyle - \frac{1}{\Gamma (1 - \alpha)}
\frac{d}{dx}
\int\limits_x^b (t - x)^{- \alpha} f(t) dt, & \quad \alpha \in (0,1)\\
\displaystyle - \frac{df (x)}{dx},
& \quad \alpha = 1
\end{array} \right.
\end{equation}
Operator $D_{a^+}^{\alpha}$ is a left inverse of $J_{a^+}^{\alpha}$, meaning that $D_{a^+}^{\alpha} J_{a^+}^{\alpha} = I$, where $I$ is the identity operator. Thus $D_{a^+}^{\alpha} J_{a^+}^{\alpha} f = f$, but the unconditional semigroup property of fractional differentiation in the RL sense does not hold: Diethelm \cite{die10} gives examples where $D_{a^+}^{\alpha_1} D_{a^+}^{\alpha_2} f = D_{a^+}^{\alpha_2} D_{a^+}^{\alpha_1} f \ne D_{a^+}^{\alpha_1 + \alpha_2} f$ and $D_{a^+}^{\alpha_1} D_{a^+}^{\alpha_2} f \ne D_{a^+}^{\alpha_2} D_{a^+}^{\alpha_1} f = D_{a^+}^{\alpha_1 + \alpha_2} f$.
Atangana \& Secer \cite{atang13} presented tables of the RL derivatives of the trigonometric and some special functions.
The fractional RL derivative of the power function is
\begin{equation*}
D_{a^+}^{\alpha} t^{\nu} = \frac{\Gamma (1 + \nu)}{\Gamma (1 + \nu - \alpha)} t^{\nu - \alpha}
\end{equation*}
and, in particular, the derivative of a constant is
$ D_{a^+}^{\alpha} 1 = {t^{- \alpha}}/{\Gamma (1 - \alpha)}$ (for $a = 0$).
Since the fractional RL derivative of a constant is not zero, the value of the fractional derivative changes when a constant is added to the function.
Jumarie \cite{jum} suggested a modification to remove this drawback. He started with a fractional derivative ({\em F-derivative})
\begin{equation*}
f^{\alpha} (x) = \lim_{h \rightarrow 0} \frac{\Delta^{\alpha} f (x)}{h^{\alpha}}
\end{equation*}
based on the fractional difference $\Delta^{\alpha} f (x)$ of order $\alpha \in \mathcal{R}$, $0 < \alpha \le 1$.
Jumarie proposed the following modification of the fractional RL derivative:
\begin{equation*}
\displaystyle \frac{1}{\Gamma (1 - \alpha)}
\frac{d}{dx}
\int\limits_a^x (x - t)^{- \alpha} (f(t) - f (0))dt.
\end{equation*}
\subsubsection{Leibniz' formula}
The classical Leibniz' formula for the derivative of integer order $n \in \mathcal{N}$ is
\begin{equation*}
D^n [f (x) g (x)] = \sum_{k = 0}^n \binom{n}{k} [D^k g (x) D^{n - k} f (x)]
\end{equation*}
where $f (x)$ and $g (x)$ are $n$-times differentiable functions.
The fractional derivatives violate the classical Leibniz' rule \cite{tar1,tar2}. A generalization of the Leibniz' formula was developed by Osler \cite{osl1,osl2}.
The Leibniz' formula for the differentiation of a product of functions that are analytic on $(a - h, a + h)$ is written for the RL operators as \cite{die10}
\begin{equation*}
D_{a^+}^{n} [f g] (x) = \sum_{k = 0}^{\lfloor n \rfloor} \binom n k (D_{a^+}^{k} f) (x) (D_{a^+}^{n - k} g) (x) +
\sum_{k = \lfloor n \rfloor + 1}^{\infty} \binom n k (D_{a^+}^{k} f) (x) (J_{a^+}^{k - n} g) (x)
\end{equation*}
where $\lfloor \cdot \rfloor$ denotes the floor function.
Jumarie studied the Leibniz' formula for the differentiation of the product of the non-differentiable functions \cite{jum13}.
\subsubsection{Fa\`a di Bruno formula (the chain rule)}
For functions $f$ and $g$ with a sufficient number of derivatives and $n \in \mathcal{N}$ \cite{die10,del13,tar16a}
\begin{equation*}
D^n [g (f (\cdot))] (x) = \sum (D^k g) (f (x)) \prod_{m = 1}^n \left( D^m f (x) \right)^{b_m}
\end{equation*}
where the sum is over all partitions of $\{1, 2, \dots, n\}$; for each partition, $k$ is its number of blocks and $b_j$ is the number of blocks with exactly $j$ elements.
Tarasov \cite{tar16} analysed the simplified chain rules suggested by Jumarie \cite{jum07,jum07a,jum13a} and found that these simplifications are not universally valid.
\subsubsection{Fractional Taylor expansion}
The fractional Taylor expansion is written as \cite{die10,qi13,odi07}
\begin{multline*}
f (x) = \frac{(x - a)^{n - m}}{\Gamma (n - m +1)} \lim_{z \rightarrow a +} J_a^{m - n} f (z) \\
+ \sum_{k = 1}^{m - 1} \frac{(x - a)^{k + n - m}}{\Gamma (k + n - m +1)} \lim_{z \rightarrow a +} D^k J_a^{m - n} f (z) + J_a^n D_a^n f (x).
\end{multline*}
\subsubsection{Symmetrised space derivative}
Vermeersch \& Shakouri \cite{ver14} formulated the symmetrised space derivatives of the fractional order between 1 and 2 and between 0 and 1:
\begin{itemize}
\item {$1 < \alpha < 2$.
The symmetrised space derivative of the function $g (x)$ that is integrable over the entire real axis is
\begin{equation*}
\frac{\partial^{\alpha} g}{\partial |x|^{\alpha}} = \frac{\partial}{\partial x} \left[w_{\alpha} \star \frac{\partial g}{\partial x} \right]
\end{equation*}
where $\star$ denotes the convolution and $w_{\alpha}$ is an unknown kernel function with the Fourier image found to be
$W_{\alpha} = {1}/{|\xi|^{2 - \alpha}}$.
The Fourier inversion yields
\begin{equation*}
w_{\alpha} = \frac{1}{2 \pi} \int\limits_{- \infty}^{\infty} \frac{\exp (j \xi x) d \xi}{|\xi|^{2 - \alpha}} = \frac{|x|^{- (\alpha - 1)}}{2 \Gamma (2 - \alpha) \cos [(2 - \alpha) \frac{\pi}{2}]}.
\end{equation*}
Thus
\begin{equation*}
\frac{\partial^{\alpha} g}{\partial |x|^{\alpha}} = \frac{1}{2 \Gamma (2 - \alpha) \cos [(2 - \alpha) \frac{\pi}{2}]} \frac{\partial}{\partial x} \int\limits_{- \infty}^{\infty} \frac{\frac{\partial g}{\partial x} (x^{\prime}) d x^{\prime}}{|x - x^{\prime}|^{\alpha - 1}}.
\end{equation*}
}
\item {$0 < \alpha < 1$}. The symmetrised space derivative of the function $g (x)$ is
\begin{equation*}
\frac{\partial^{\alpha} g}{\partial |x|^{\alpha}} = w_{\alpha} \star \frac{\partial g}{\partial x}.
\end{equation*}
The Fourier image of the kernel function $w_{\alpha}$ is $W_{\alpha} = {j \, \mathrm{sgn} (\xi)}/{|\xi|^{1 - \alpha}}$.
Performing the Fourier inversion, the authors finally get
\begin{equation*}
\frac{\partial^{\alpha} g}{\partial |x|^{\alpha}} = \frac{- 1}{2 \Gamma (1 - \alpha) \cos [(1 - \alpha) \frac{\pi}{2}]} \int\limits_{- \infty}^{\infty} \frac{ - \mathrm{sgn} (x - x^{\prime}) \, \frac{\partial g}{\partial x} (x^{\prime}) \, d x^{\prime}}{|x - x^{\prime}|^{\alpha}}.
\end{equation*}
\end{itemize}
In the case $\alpha = 1/2$ the fractional integrals and derivatives are also called {\em semi-integrals} and {\em semi-derivatives} \cite{sok02}.
\subsection{Caputo Fractional Derivative}
\label{capu}
The fractional derivatives in the Caputo sense on the left ($_CD_{a+}^{\alpha}$) and on the right ($_CD_{b-}^{\alpha}$) are defined via the RL fractional integral \cite{gri13}
$(_CD_{a+}^{\alpha} f) (x) = (J_{a+}^{n -\alpha} f^{(n)}) (x)$ and $(- 1)^n (_CD_{b-}^{\alpha} f) (x) = (J_{b-}^{n -\alpha} f^{(n)}) (x)$. This derivative was introduced by A.N. Gerasimov in 1948 \cite{ger} and independently by M. Caputo; later by Dzherbashyan \& Nersesian \cite{dzh}.
The major difference of the Caputo fractional derivative is that the derivative acts first on the function and the integral is evaluated afterwards, while in the RL approach the derivative acts on the integral.
The Caputo fractional derivative is defined as \cite{pod98}
\begin{equation}
\label{cd}
D_{\star}^{\alpha} f (t) = \left \{
\begin{array}{rl}
\displaystyle \frac{1}{\Gamma (1 - \alpha)} \int\limits_0^t (t - x)^{- \alpha} \frac{df(x)}{dx} dx, & \quad \alpha \in (0,1)\\
\displaystyle \frac{df (t)}{dt},
& \quad \alpha = 1
\end{array} \right.
\end{equation}
The definition of the Caputo derivative (\ref{cd}) is more restrictive than that of the RL one
(\ref{lrld}), (\ref{rrld}), since it requires the absolute integrability of the derivative $df (x)/ dx$ \cite{gor97a}.
The Caputo fractional derivative can be considered as the regularization in the time origin for the RL derivative \cite{gor03}
\begin{equation*}
D_{\star}^{\alpha} f (t) = D^{\alpha} f (t) - f (0^+) \frac{t^{- \alpha}}{\Gamma (1 - \alpha)}
\end{equation*}
and satisfies the property of being zero when applied to a constant.
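This regularization relation can be verified numerically. The sketch below is an illustration (helper names are ours, not from the cited works): it computes the Caputo derivative of $f (t) = 1 + t^2$ by an L1-type product-integration rule applied to $f^{\prime}$, adds the term $f (0^+) t^{- \alpha}/\Gamma (1 - \alpha)$, and compares the result with the RL derivative known in closed form.

```python
from math import gamma

def caputo_derivative(df, t, alpha, n=2000):
    """Caputo derivative of order alpha in (0,1) at time t, lower terminal 0.

    df is the first derivative of f; it is taken piecewise constant
    (midpoint value), while the weakly singular kernel (t - x)^(-alpha)
    is integrated exactly on each subinterval.
    """
    h = t / n
    total = 0.0
    for j in range(n):
        x0, x1 = j * h, (j + 1) * h
        d0, d1 = t - x0, max(t - x1, 0.0)
        w = (d0 ** (1 - alpha) - d1 ** (1 - alpha)) / (1 - alpha)
        total += df((x0 + x1) / 2) * w
    return total / gamma(1 - alpha)

# f(t) = 1 + t^2: the Caputo derivative kills the constant; adding the
# regularization term f(0+) t^(-alpha)/Gamma(1-alpha) recovers the RL value.
alpha, t = 0.5, 1.0
caputo = caputo_derivative(lambda x: 2 * x, t, alpha)
rl = caputo + 1.0 * t ** (-alpha) / gamma(1 - alpha)
rl_exact = t ** (-alpha) / gamma(1 - alpha) + gamma(3) / gamma(3 - alpha) * t ** (2 - alpha)
print(rl, rl_exact)
```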
Yuan \& Agrawal and Singh \& Chatterjee suggested alternative definitions of the Caputo fractional derivative \cite{die10}. The first approach is based on the introduction of the auxiliary bivariate function
$\phi: (0, \infty) \times [a, b] \rightarrow \mathcal{R}$ as
\begin{equation*}
\phi (w, x) = (- 1)^{\lfloor n \rfloor} \frac{2 \sin \pi n}{\pi} w^{2 n - 2 \lceil n \rceil + 1} \int_a^x f^{(\lceil n \rceil)} (\tau) e^{- (x - \tau) w^2} d \tau
\end{equation*}
where $\lceil \cdot \rceil$ denotes the ceiling function, and, finally,
\begin{equation*}
D_{\star a}^n f (x) = \int\limits_0^{\infty} \phi (w, x) d w.
\end{equation*}
The second approach is based on expressing the fractional derivative of the given function in the form of an integral over $(0, \infty)$ with an integrand that can be obtained as the solution of the first-order initial value problem
\begin{equation*}
\frac{\partial \phi^{\star} (w, x)}{\partial x} = - w^{\frac{1}{n - \lceil n \rceil - 1}} \phi^{\star} (w, x) + \frac{(- 1)^{\lfloor n \rfloor} \sin \pi n}{\pi (n - \lceil n \rceil - 1)} f^{(\lceil n \rceil)} (x)
\end{equation*}
with the initial condition $\phi^{\star} (w, a) = 0$.
Thus
\begin{equation*}
\phi^{\star} (w, x) = \frac{(- 1)^{\lfloor n \rfloor} \sin \pi n}{\pi (n - \lceil n \rceil - 1)} \int\limits_0^x f^{(\lceil n \rceil)} (\tau) \exp (- (x - \tau)w^{\frac{1}{n - \lceil n \rceil - 1}} ) d \tau
\end{equation*}
\begin{equation*}
D_{\star a}^n f (x) =
\int\limits_0^{\infty} \phi^{\star} (w, x) d w.
\end{equation*}
\subsection{Matrix Approach}
Operations of the fractional calculus can be expressed in matrix form \cite{pod00,pod09}. E.g., the left-sided RL or Caputo derivative can be approximated at the nodes of an equidistant discretization net with the help of the upper triangular strip matrix $B_n^{(\alpha)}$ as \cite{pod00}
\begin{equation*}
\left[v_n^{(\alpha)} \; v_{n - 1}^{(\alpha)} \; \dots \; v_1^{(\alpha)} \; v_0^{(\alpha)} \right]^T = B_n^{(\alpha)} \left[v_n \; v_{n - 1} \; \dots \; v_1 \; v_0 \right]^T
\end{equation*}
where
\begin{equation*}
B_n^{(\alpha)} = \frac{1}{\tau^{\alpha}} \begin{bmatrix}
\omega_0^{(\alpha)}& \omega_1^{(\alpha)}& \omega_2^{(\alpha)}& \dots& \omega_{n - 1}^{(\alpha)}& \omega_{n}^{(\alpha)}\\
0& \omega_0^{(\alpha)}& \omega_1^{(\alpha)}& \dots& \omega_{n - 2}^{(\alpha)}& \omega_{n - 1}^{(\alpha)}\\
\vdots& & \ddots& \ddots& & \vdots\\
0& 0& \dots& 0& \omega_0^{(\alpha)}& \omega_{1}^{(\alpha)}\\
0& 0& \dots& 0& 0& \omega_{0}^{(\alpha)}\\
\end{bmatrix}
\end{equation*}
Similarly, the right-sided RL or Caputo fractional derivative can be approximated with the help of the corresponding lower triangular strip matrix.
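A minimal sketch of the matrix approach (illustrative code, not Podlubny's original implementation): the Gr\"unwald-Letnikov weights $\omega_k^{(\alpha)}$ are generated by the recurrence $\omega_k = \omega_{k - 1} (1 - (\alpha + 1)/k)$, the upper triangular strip matrix is filled with them, and its product with the reversed vector of samples yields the approximate fractional derivatives at all nodes.

```python
from math import gamma

def gl_weights(alpha, n):
    """Coefficients w_k = (-1)^k C(alpha, k) via a stable recurrence."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def strip_matrix(alpha, n, tau):
    """Upper triangular strip matrix B_n^(alpha): applied to the reversed
    sample vector [v_n, ..., v_0]^T it yields the approximations
    [v_n^(alpha), ..., v_0^(alpha)]^T of the fractional derivatives.
    """
    w = gl_weights(alpha, n)
    return [[w[j - i] / tau ** alpha if j >= i else 0.0
             for j in range(n + 1)] for i in range(n + 1)]

# Samples of f(t) = t^2 on the equidistant net t_k = k * tau
alpha, n = 0.5, 400
tau = 1.0 / n
rev = [((n - k) * tau) ** 2 for k in range(n + 1)]   # [v_n, ..., v_0]
B = strip_matrix(alpha, n, tau)
deriv = [sum(b * v for b, v in zip(row, rev)) for row in B]

# deriv[0] approximates D^alpha t^2 at t = 1, which equals Gamma(3)/Gamma(2.5)
print(deriv[0], gamma(3) / gamma(2.5))
```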
\subsection{Caputo \& Fabrizio Fractional Derivatives}
Caputo \& Fabrizio \cite{cf1,cf2} proposed fractional derivatives without a singular kernel \cite{los15}, replacing the kernel $(t - \tau)^{- \alpha}$ in the definition of the Caputo derivative with the function $\exp (- \alpha (t - \tau)/(1 - \alpha))$, which has no singularity for $t = \tau$, and replacing the factor $1/\Gamma (1 - \alpha)$ with $M (\alpha)/(1 - \alpha)$.
E.g., the fractional time derivative for $\alpha \in [0, 1]$ and a function $f \in L^1 (-\infty,b)$ is
\begin{equation*}
\mathcal{D}_t^{\alpha} f (t) = \frac{\alpha M (\alpha)}{1 - \alpha} \int\limits_{- \infty}^t (f (t) - f (\tau)) \exp \left[- \frac{\alpha (t - \tau)}{1 - \alpha} \right] d \tau
\end{equation*}
where $M (\alpha)$ is a normalization function such that $M (0) = M (1) = 1$.
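For a quick sanity check of this definition one can take $M (\alpha) = 1$ and move the lower terminal from $- \infty$ to $0$ (both are simplifying assumptions made only for this illustration); for $f (t) = t$ the integral is elementary, and a plain midpoint rule reproduces it, since the exponential kernel is smooth.

```python
from math import exp

def cf_derivative(f, t, alpha, a=0.0, n=4000):
    """Caputo-Fabrizio derivative in the L1 form above, with M(alpha) = 1
    and the lower terminal moved from -infinity to a finite a (both are
    assumptions made for this demonstration only).
    """
    lam = alpha / (1.0 - alpha)        # exponential rate of the kernel
    h = (t - a) / n
    total = 0.0
    for j in range(n):                 # midpoint rule; the integrand is smooth
        tau = a + (j + 0.5) * h
        total += (f(t) - f(tau)) * exp(-lam * (t - tau))
    return alpha / (1.0 - alpha) * total * h

# For f(t) = t the integral is elementary:
#   (alpha/(1-alpha)) * int_0^t s exp(-lam s) ds
#     = ((1-alpha)/alpha) * (1 - exp(-lam t) (1 + lam t))
alpha, t = 0.4, 2.0
lam = alpha / (1.0 - alpha)
numeric = cf_derivative(lambda x: x, t, alpha)
exact = (1.0 - alpha) / alpha * (1.0 - exp(-lam * t) * (1.0 + lam * t))
print(numeric, exact)
```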
\subsection{GC \& GRL Derivatives}
Zhao \& Luo \cite{zha19a} suggested dividing the fractional derivatives with different --- singular and non-singular --- kernels (e.g., RL, Caputo, Caputo-Fabrizio, Atangana-Baleanu \cite{at16}\footnote{
The equation with the Atangana-Baleanu operator is related to the derivatives of distributed order \cite{tat17}.
} with the kernel
\begin{equation*}
k (x, \alpha) = E_{\alpha} \left(- \frac{\alpha}{1 - \alpha} x \right),
\end{equation*}
Atangana-Gomez \cite{at17} with the kernel
\begin{equation*}
k (x, \alpha) = \exp \left(- \frac{\alpha}{1 - \alpha} x^2 \right)
\end{equation*}
the derivative with the stretched exponential kernel \cite{sun17} (useful in the study of water diffusion in the human brain using magnetic resonance imaging \cite{ben06})
\begin{equation*}
k (x, \alpha) = \exp \left(- \frac{\alpha}{1 - \alpha} x^{\beta} \right), \quad \beta > 0, \quad \beta \ne 1
\end{equation*}
into two classes --- GC (general, Caputo sense) and GRL (general, RL sense) derivatives --- that obey the principles formulated by V. Volterra in his ``general laws of heredity'' \cite{volt}:
\begin{itemize}
\item the linearity principle,
\item the invariance principle,
\item the fading memory principle,
\item the compatibility principle.
\end{itemize}
The compatibility principle requires the validity of two limits: $D_{\alpha} f (x) \rightarrow f (x)$ when $\alpha \rightarrow 0$ and $D_{\alpha} f (x) \rightarrow f^{\prime} (x)$ when $\alpha \rightarrow 1$.
The principle of {\em nonlocality} was suggested by Tarasov \cite{tar18}.
\subsubsection{GC derivatives}
Zhao \& Luo \cite{zha19a} defined the GC derivative by
\begin{equation*}
D_{a, \alpha}^{GC} f (x) = N (\alpha) \int\limits_a^x k (x - t, \alpha)\frac{d f (t)}{d t} d t.
\end{equation*}
The fading memory principle requires that remote times and positions have less effect: $k (x - t, \alpha)$ decreases when $x$ increases and $k (x - t, \alpha) \rightarrow 0$ when $x \rightarrow \infty$.
The compatibility principle requires that $N (\alpha) k (x, \alpha) \rightarrow 1 $ when $\alpha \rightarrow 0$ and $N (\alpha) k (x, \alpha) \rightarrow \delta(x) $ when $\alpha \rightarrow 1$.
\subsubsection{GRL derivatives}
\begin{equation*}
D_{a, \alpha}^{GRL} f (x) = \frac{d}{d x} N (\alpha) \int\limits_a^x k (x - t, \alpha) f (t) d t.
\end{equation*}
The restrictions on $k (x - t, \alpha)$ and $N (\alpha)$ are the same.
\subsection{Marchaud-Hadamard Fractional Derivatives}
Marchaud's approach is based on the analytic continuation of the fractional integrals to negative orders using Hadamard's finite parts of divergent integrals (Hadamard's idea is to ignore the unbounded contribution to the integral and to assign the value of the remaining --- finite --- expression \cite{die10}).
The Marchaud fractional derivative with the lower limit $a$ is
\begin{equation*}
(M_{a+}^{\alpha} f) (x) = \frac{f (x)}{\Gamma (1 - \alpha) (x - a)^{\alpha}} + \frac{\alpha}{\Gamma (1 - \alpha)} \int\limits_a^x \frac{f (x) - f(y)}{(x - y)^{\alpha + 1}} d y
\end{equation*}
and with the upper limit $b$ is
\begin{equation*}
(M_{b-}^{\alpha} f) (x) = \frac{f (x)}{\Gamma (1 - \alpha) (b - x)^{\alpha}} + \frac{\alpha}{\Gamma (1 - \alpha)} \int\limits_x^b \frac{f (x) - f(y)}{(y - x)^{\alpha + 1}} d y.
\end{equation*}
Marchaud's method is to extend the RL integral to $\alpha < 0$
\begin{equation}
\label{283}
(J_+^{- \alpha} f) (x) = \frac{1}{\Gamma (- \alpha)} \int\limits_0^{\infty} y^{- \alpha - 1} f (x - y) d y
\end{equation}
and to subtract the divergent part of the integral in (\ref{283})
\begin{equation*}
\int\limits_{\epsilon}^{\infty} y^{- \alpha - 1} f (x) \, d y = \frac {f (x)}{\alpha \epsilon^{\alpha}}
\end{equation*}
to get finally
\begin{equation}
\label{285}
(M_{+}^{\alpha} f) (x) = \lim_{\epsilon \rightarrow 0 +} \frac{1}{\Gamma ( - \alpha)} \int\limits_{\epsilon}^{\infty} \frac{f (x - y) - f(x)}{y^{\alpha + 1}} d y.
\end{equation}
There are two approaches to extend the definition (\ref{285}) to the case $\alpha > 1$ \cite{hil08}:
\begin{enumerate}
\item {To apply (\ref{285}) to the $n$th derivative $d^nf/d x^n$ for $n < \alpha < n + 1$.}
\item {To consider $f (x - y) - f (x)$ as the first-order difference and to generalize to $n$th order difference (difference quotient)}
\begin{equation}
\label{dq}
(\Delta_h^n f) (x) = (\mathcal{I} - T_h)^n f (x) = \sum_{k = 0}^n (- 1)^k \binom{n}{k} f (x - k h)
\end{equation}
where $\mathcal{I}$ is the identity operator and $T_h f (x) = f (x - h)$ is the translation operator.
Thus the Marchaud fractional derivative for $0 < \alpha < n$ is written as
\begin{equation*}
(M_{+}^{\alpha} f) (x) = \lim_{\epsilon \rightarrow 0 +} \frac{1}{C_{\alpha,n}} \int\limits_{\epsilon}^{\infty} \frac{\Delta_y^n f (x)}{y^{\alpha + 1}} d y
\end{equation*}
where
\begin{equation*}
C_{\alpha,n} = \int\limits_0^{\infty} \frac{(1 - e^{- y})^n}{y^{\alpha + 1}} d y.
\end{equation*}
\end{enumerate}
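The Marchaud form is straightforward to evaluate numerically, because the difference $f (x) - f (y)$ tames the kernel to an integrable singularity. The sketch below (illustrative code, not from the cited sources) evaluates $(M_{a+}^{\alpha} f) (x)$ by a midpoint rule and checks that, for a smooth $f$, it agrees with the RL derivative of the power function.

```python
from math import gamma

def marchaud_derivative(f, a, x, alpha, n=50000):
    """Marchaud fractional derivative with lower limit a, order alpha in (0,1).

    The difference f(x) - f(y) reduces the (x - y)^(-alpha-1) kernel to an
    integrable (x - y)^(-alpha) singularity, handled here by a plain
    midpoint rule on a fine grid.
    """
    h = (x - a) / n
    integral = 0.0
    for j in range(n):
        y = a + (j + 0.5) * h
        integral += (f(x) - f(y)) / (x - y) ** (alpha + 1)
    integral *= h
    boundary = f(x) / (gamma(1 - alpha) * (x - a) ** alpha)
    return boundary + alpha / gamma(1 - alpha) * integral

# For smooth f the Marchaud and RL derivatives coincide:
# the order-0.5 derivative of t^2 at x = 1 equals Gamma(3)/Gamma(2.5).
val = marchaud_derivative(lambda t: t * t, 0.0, 1.0, 0.5)
print(val, gamma(3) / gamma(2.5))
```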
\subsection{Gr\"unwald - Letnikov Derivative}
The approach suggested independently by Gr\"unwald in 1867 and by Letnikov \cite{let} in 1868 is based on the use of the limits of the difference quotients (\ref{dq}), similar to the classical definition of the derivatives for $n \in \mathcal{N}, \quad f \in C^n [a, b], \quad a < x \le b$
\begin{equation*}
\tilde{D}^n f (x) = \lim_{h \rightarrow 0} \frac{(\Delta_h^n f) (x)}{h^n}
\end{equation*}
and on the extension to the case of arbitrary $n$.
Since $\binom{n}{k} = 0$ if $n \in \mathcal{N}$ and $n < k$, the expression (\ref{dq}) is equivalent to
\begin{equation}
\label{dq1}
(\Delta_h^n f) (x) = \sum_{k = 0}^{\infty} (- 1)^k \binom{n}{k} f (x - k h).
\end{equation}
The series (\ref{dq1}) is uniformly convergent for any bounded function if $n > 0$ \cite{gol10}.
The use of (\ref{dq1}) introduces two problems \cite{die10}: the function $f$ needs to be defined on $(- \infty,b]$; the function $f$ should be such that the series converges.
These problems are solved by the introduction of a new function $f^{\star}$
\begin{equation*}
f^{\star} = \begin{cases}
f (x) & x \in [a,b]\\
0 & x \in (- \infty, a)
\end{cases}
\end{equation*}
that is used instead of the original $f$.
It is also assumed that $h$ tends to zero through the discrete sequence $h_N = (x - a)/N$; the Gr\"unwald-Letnikov fractional derivative of order $n$ is then defined as
\begin{equation}
\label{ll}
\tilde{D}_{a}^n f (x) = \lim_{N \rightarrow \infty} \frac{(\Delta_{h_N}^n f) (x)}{h_N^n} = \lim_{N \rightarrow \infty} \frac{1}{h_N^n} \sum_{k = 0}^N (- 1)^k \binom{n}{k} f (x - k h_N).
\end{equation}
The Gr\"unwald-Letnikov derivative is called {\em pointwise} or {\em strong} depending on whether the limit is taken pointwise or in the norm of a suitable Banach space \cite{hil08}.
The binomial coefficient can be generalized to the non-integer arguments
\begin{equation*}
(- 1)^j \binom{q}{j} = (- 1)^j \frac{\Gamma (q + 1)}{\Gamma (j + 1) \Gamma (q - j + 1)} = \frac{\Gamma (j - q)}{\Gamma (- q) \Gamma (j + 1)}.
\end{equation*}
The (left-sided) Gr\"unwald-Letnikov derivative can be written as ($nh = x - a$)
\begin{equation*}
\tilde{D}_{a}^{\alpha} f (x) = \lim_{h \rightarrow 0} \frac{1}{h^{\alpha}} \sum_{k = 0}^{\lfloor n \rfloor} (- 1)^k \frac{\Gamma (\alpha + 1) f (x - k h)}{\Gamma (k + 1) \Gamma (\alpha - k + 1)}
\end{equation*}
and the right-sided one ($nh = b - x$) as
\begin{equation*}
\tilde{D}_{b}^{\alpha} f (x) = \lim_{h \rightarrow 0} \frac{1}{h^{\alpha}} \sum_{k = 0}^{\lfloor n \rfloor} (- 1)^k \frac{\Gamma (\alpha + 1) f (x + k h)}{\Gamma (k + 1) \Gamma (\alpha - k + 1)}.
\end{equation*}
The Gr\"unwald-Letnikov integral of order $n$ of the function $f$ is written as
\begin{equation*}
\tilde{J_a^n} f (x) = \frac{1}{\Gamma (n)} \lim_{N \rightarrow \infty} h_N^n \sum_{k = 0}^N \frac{\Gamma (n + k)}{\Gamma (k + 1)} f (x - k h_N).
\end{equation*}
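The truncated Gr\"unwald-Letnikov sum is also a practical numerical scheme. The sketch below (illustrative code, not from the cited references) builds the generalized binomial coefficients by the stable recurrence $w_k = w_{k - 1} (1 - (\alpha + 1)/k)$ and checks the result against the closed-form fractional derivative of the power function.

```python
from math import gamma

def gl_derivative(f, a, x, alpha, n=4000):
    """Gruenwald-Letnikov fractional derivative of order alpha at x,
    with lower terminal a, using the step h = (x - a)/n.

    The coefficients (-1)^k C(alpha, k) are generated by the stable
    recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k).
    """
    h = (x - a) / n
    w = 1.0          # w_0 = 1
    total = f(x)     # k = 0 term
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k
        total += w * f(x - k * h)
    return total / h ** alpha

# Check against D^alpha t^nu = Gamma(nu+1)/Gamma(nu+1-alpha) t^(nu-alpha)
alpha, nu, x = 0.5, 2.0, 1.0
numeric = gl_derivative(lambda t: t ** nu, 0.0, x, alpha)
exact = gamma(nu + 1) / gamma(nu + 1 - alpha) * x ** (nu - alpha)
print(numeric, exact)
```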
\subsection{Riesz Fractional Operators}
The fractional integral of the order $\alpha$ in the Riesz sense (also known as the Riesz potential) is defined by the Fourier convolution product
\begin{equation*}
(\mathcal{I}^{\alpha} f) (\vec{x}) = \int\limits_{\mathcal{R}^n} \vec{K}_{\alpha} (\vec{x} - \vec{\xi}) f (\vec{\xi}) d \vec{\xi},
\end{equation*}
where $\mathrm{Re} (\alpha) > 0$.
The Riesz kernel is
\begin{equation*}
\vec{K}_{\alpha} (\vec{x}) = \frac{1}{\gamma_n (\alpha)}
\begin{cases}
\Vert{\vec{x}}\Vert^{\alpha - n}, & \alpha - n \ne 0, 2, \dots,\\
\Vert{\vec{x}}\Vert^{\alpha - n} \ln \left(\frac{1}{\Vert{\vec{x}\Vert}} \right), & \alpha - n = 0, 2, \dots
\end{cases}
\end{equation*}
where $\gamma_n (\alpha)$ is defined by
\begin{equation*}
\frac{\gamma_n (\alpha)}{2^{\alpha} \pi^{\frac{n}{2}} \Gamma(\alpha/2)} =
\begin{cases}
\left[\Gamma \left(\frac{n - \alpha}{2} \right) \right]^{- 1}, & \alpha - n \ne 0, 2, \dots,\\
(- 1)^{\frac{n - \alpha}{2}} 2^{- 1}\Gamma \left(\frac{\alpha - n}{2} \right), & \alpha - n = 0, 2, \dots.
\end{cases}
\end{equation*}
The Riesz fractional integral is \cite{gri13}
\begin{equation*}
(\mathcal{I}^{\alpha} f) (\vec{x}) = \frac{\Gamma \left(\frac{1 - \alpha}{2} \right) }{2^{\alpha} \pi^{\frac{n}{2}}\Gamma(\alpha/2)} \int\limits_{- \infty}^{\infty} f (\xi) |x - \xi|^{\alpha - 1} d \xi.
\end{equation*}
The Riesz fractional derivative is \cite{oli14}
\begin{multline*}
D^ {\alpha} [f (x)] = - \frac{1}{2 \cos (\alpha \pi/2)} \frac{1}{\Gamma (n - \alpha)} \\
\times \frac{d^n}{d x^n} \left[\int\limits_{- \infty}^x (x - \xi)^{n - \alpha - 1} f (\xi) d \xi + \int\limits_x^{\infty} (\xi - x)^{n - \alpha - 1} f (\xi) d \xi \right].
\end{multline*}
The Riesz derivative is the generalization of the Laplace operator \cite{zas08}
\begin{equation*}
(- \Delta)^{\frac{\alpha}{2}} = - \frac{1}{2 \cos (\alpha \pi/2)} \left[ \frac{d^{\alpha}}{d x^{\alpha}} + \frac{d^{\alpha}}{d (- x)^{\alpha}} \right], \quad \alpha \ne 1.
\end{equation*}
The Riesz derivative could be expressed in terms of the Marchaud derivative
\begin{equation*}
D^ {\alpha} [f (x)] = - \frac{1}{2 \cos (\alpha \pi/2)} [(M_+^{\alpha} f) (x) + (M_-^{\alpha} f) (x)].
\end{equation*}
The related Riesz-Feller derivative \cite{gor02a} has an additional parameter --- the ``skewness'' $\theta$:
\begin{multline*}
D_{\theta}^{\alpha} f (x) = \frac{\Gamma (1 + \alpha)}{\pi} \times\\ \left[\sin \left[(\alpha + \theta) \frac{\pi}{2} \right] \int\limits_0^{\infty} \frac{f (x + \xi) - f (x)}{\xi^{1 + \alpha}} d \xi
+ \sin \left[(\alpha - \theta) \frac{\pi}{2} \right] \int\limits_0^{\infty} \frac{f (x - \xi) - f (x)}{\xi^{1 + \alpha}} d \xi \right].
\end{multline*}
The allowed region of the parameters $\alpha$ and $\theta$ turns out to be a diamond in the plane $\{\alpha, \theta \}$ with vertices at the points $(0,0)$, $(1,1)$, $(2,0)$, $(1, -1)$, called the ``Feller-Takayasu diamond'' \cite{gor03,met04}.
\subsection{Weyl Fractional Derivative}
The Weyl derivative is based on the generalization of differentiation in the Fourier space \cite{gol10}: the integer-order derivative factor $(ik)^n$ of an absolutely integrable function on $[- \pi,\pi]$, represented as a Fourier series, is extended to non-integer $n$.
The Weyl fractional derivative is defined as \cite{mai04}
\begin{equation*}
D_{\pm}^{\alpha} = \begin{cases}
\displaystyle{
\pm\frac{d}{d x} [I_{\pm}^{1 - \alpha} f (x)]} &0 < \alpha < 1,\\
\displaystyle{
\frac{d^2}{d x^2} [I_{\pm}^{2 - \alpha} f (x)]} &1 < \alpha < 2,
\end{cases}
\end{equation*}
where the Weyl fractional integrals are ($\mu > 0$)
\begin{equation*}
I_+^{\mu} f (x) = \frac{1}{\Gamma (\mu)} \int\limits_{- \infty}^x (x - \chi)^{\mu - 1} f (\chi)d \chi.
\end{equation*}
\subsection{Erd\'elyi-Kober Fractional Operators}
The Erd\'elyi-Kober integral for a well-behaved function $\phi (t)$ is defined as \cite{luch,pag12}
\begin{equation*}
I_{\eta}^{\gamma, \mu} \phi (t) = \frac{\eta}{\Gamma (\mu)} t^{- \eta (\mu + \gamma)} \int\limits_0^t \tau^{\eta (\gamma + 1) - 1} (t^{\eta} - \tau^{\eta})^{\mu - 1} \phi (\tau) d \tau,
\end{equation*}
where $\mu > 0, \quad \eta > 0, \quad \gamma \in \mathcal{R}$.
In the special case $\gamma = 0, \eta = 1$ the Erd\'elyi-Kober fractional integral is related to the RL fractional integral of order $\mu$ as
\begin{equation*}
I_1^{0, \mu} \phi (t) = \frac{t^{- \mu}}{\Gamma (\mu)} \int\limits_0^t (t - \tau)^{\mu - 1} \phi (\tau) d \tau = t^{- \mu} J^{\mu} \phi (t).
\end{equation*}
The Erd\'elyi-Kober fractional derivative for $n - 1 < \mu < n, \quad n \in \mathcal{N}$ is defined as
\begin{equation*}
D_{\eta}^{\gamma, \mu} \phi (t) = \prod_{j = 1}^n \left( \gamma + j + \frac{1}{\eta} t \frac{d}{d t} \right) (I_{\eta}^{\gamma + \mu, n - \mu} \phi (t)).
\end{equation*}
The Erd\'elyi-Kober fractional derivative reduces to the identity operator when $\mu = 0$
\begin{equation*}
D_{\eta}^{\gamma, 0} \phi (t) = \phi (t)
\end{equation*}
and for $\eta = 1$ and $\gamma = - \mu$ is related to the RL fractional derivative as
\begin{equation*}
D_{\eta}^{\gamma, \mu} \phi (t) = t^{\mu} D_{RL}^{\mu} \phi (t).
\end{equation*}
\subsection{Interpretation of Fractional Integral and Derivatives}
The integer-order derivatives and integrals have a clear physical and geometrical interpretation that simplifies their use in practice.
Numerous different interpretations of the fractional derivatives and integrals have been proposed \cite{hil19}: probabilistic \cite{t13, t14a, t15}, geometric \cite{t16, t17, t20, t21} and physical interpretations \cite{t20, t21, t22, t23, t24}.
However, as Podlubny \cite{pod02} noted, ``since the appearance of the idea of differentiation and integration of arbitrary (not necessarily integer) order there was not any acceptable geometric and physical interpretation of these operations for more than 300 years''.
Tenreiro Machado \cite{t15} wrote the Gr\"unwald-Letnikov derivative of $x (t)$ as
\begin{equation*}
D^{\alpha} [x (t)] = \lim_{h \rightarrow 0} \left[\frac{1}{h^{\alpha}} \sum_{k = 0}^{\infty} \gamma (\alpha, k) x (t - k h) \right], \quad
\gamma (\alpha, k) = (- 1)^k \frac{\Gamma (\alpha + 1)}{k! \Gamma (\alpha - k + 1)}
\end{equation*}
where $h$ is the time increment.
The author noted that
\begin{equation*}
\gamma (\alpha, 0) = 1, \quad - \sum_{k = 1}^{\infty} \gamma (\alpha, k) = 1
\end{equation*}
thus the ``present'' (P) is constituted by $x (t)$ with probability 1, while the totality of the ``past/future''
(PF) is constituted by the samples $x (t - h), x (t - 2h), \dots$; each sample is weighted with the probability $- \gamma (\alpha, k)$.
Nigmatullin \cite{t22,nig05} interpreted the fractional integral in terms of the fractal Cantor set. The author considered the evolution of the state of a physical system
\begin{equation*}
J (t) = \int\limits_0^t K (t, \tau) f (\tau) d \tau
\end{equation*}
where the memory function $K (t, \tau)$ accounts for the loss of some states of the system; the fractional index of integration equals the fractal dimension of the Cantor set.
Podlubny \cite{pod02} and Podlubny et al. \cite{pod07} suggested the geometrical interpretation of the left-sided (equation (\ref{lrl})) and right-sided (equation (\ref{rrl})) RL integrals, of the RL (equations (\ref{lrld}) - (\ref{rrld})) and the Caputo (equation (\ref{cd})) derivatives, as well as of the Riesz potential, which is the sum of the left-sided and right-sided RL fractional integrals
\begin{equation*}
R_b^{\alpha} f (x) = \frac{1}{\Gamma (\alpha)} \left[ \int\limits_a^x (x - t)^{\alpha - 1} f (t) d t + \int\limits_x^b (t - x)^{\alpha - 1} f (t) d t \right]
\end{equation*}
and of the Feller potential
\begin{equation*}
\Phi^{\alpha} f (x) = c J_{a^+}^{\alpha} f (x) + d J_{b^-}^{\alpha} f (x).
\end{equation*}
The geometric interpretation by Podlubny is based on adding the third dimension
\begin{equation*}
g_x (t) = \frac{1}{\Gamma (\alpha +1)} [x^{\alpha} + (x - t)^{\alpha}]
\end{equation*}
to the pair $(t, f (t))$ and considering the three-dimensional line $(t, g_x (t), f (t))$ as the top edge of the ``fence'' that casts a shadow on the wall in the $(g, f)$ plane.
Tarasov \cite{tar17} proposed the ``informatic'' (``computer science'') interpretation of the RL and the Caputo derivatives of non-integer orders using reconstructions from the infinite sequence of the derivatives of integer orders. Such reconstructions are based on the Kotel'nikov theorem (also known as the sampling theorem) proved by Vladimir Kotel'nikov in 1933 and also by Claude Shannon in 1949:
under certain restrictive conditions, a function $f(t)$ can be restored from its samples $f[n] = f(nT)$ according to the Whittaker-Shannon interpolation formula. The author stressed that the infinity of the sequence of the integer derivatives plays a fundamental role in the representation of the fractional derivatives that describe nonlocality and memory.
G\'omez-Aguilar et al. \cite{gom14} analysed the Caputo differentiation using the RC circuit, for which the fractional versions of Ohm's law and Kirchhoff's law are written as
\begin{equation*}
v (t) = \frac{1}{\sigma^{1 - \gamma}} \frac{d^{\gamma} q}{d t^{\gamma}}, \qquad R \frac{d q}{d t} + \frac{1}{C} q (t) = v (t)
\end{equation*}
where $q$ is the electric charge, $v$ is the voltage, $R$ is the resistance of the conductor, and $C$ is the capacitance. The parameter $\sigma$ is introduced in order to be consistent with the dimensionality; it characterizes the fractional structures (the components that show intermediate behaviour between conservative (capacitor) and dissipative (resistor)) \cite{gom14}. The authors derived the fractional differential equation for the RC circuit
\begin{equation*}
\frac{d^{\gamma} q}{d t^{\gamma}} + \frac{1}{\tau_{\gamma}} q (t) = \frac{C}{\tau_{\gamma}} v (t), \qquad
\tau_{\gamma} = \frac{R C}{\sigma^{1 - \gamma}}
\end{equation*}
where $\tau_{\gamma}$ is the time constant.
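As an illustrative numerical sketch (not taken from \cite{gom14}; all parameter values below are arbitrary), the relaxation equation above with a constant voltage $v_0$ can be integrated with a Gr\"unwald-Letnikov discretization of the Caputo derivative (applicable here since $q(0) = 0$) and checked against the known solution $q (t) = C v_0 \left(1 - E_{\gamma} (- t^{\gamma}/\tau_{\gamma})\right)$:

```python
import math

def ml(alpha, z, terms=80):
    """Truncated series for the one-parameter Mittag-Leffler function E_alpha(z)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

def fractional_rc(gamma=0.8, tau=1.0, C=1.0, v0=1.0, t_end=1.0, n=1000):
    """Gruenwald-Letnikov scheme for d^gamma q/dt^gamma + q/tau = (C/tau)*v0, q(0) = 0."""
    h = t_end / n
    w = [1.0]                                  # w_j = (-1)^j * binom(gamma, j)
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (gamma + 1.0) / j))
    q = [0.0]
    for m in range(1, n + 1):
        hist = sum(w[j] * q[m - j] for j in range(1, m + 1))  # weighted memory term
        q.append(((C / tau) * v0 - hist / h**gamma) / (1.0 / h**gamma + 1.0 / tau))
    return q[-1]

q_num = fractional_rc()
q_exact = 1.0 * (1.0 - ml(0.8, -1.0))  # C*v0*(1 - E_gamma(-t^gamma/tau)) at t = 1
assert abs(q_num - q_exact) < 1e-2
```

The scheme is first-order accurate; the memory sum over the whole history is what makes fractional solvers expensive compared to their integer-order counterparts.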
G\'omez-Aguilar et al. claimed that the fractional differentiation is related to memory effects that reflect the intrinsic dissipation in the system.
Sierociuk et al. \cite{sie15} used the RC network to model the fractional order diffusion based on the analogy between the heat and the electrical conduction. The authors
showed that the equations for the capacitor and for the resistor in the transmission line could be used to get the diffusion equation; the loss of heat was represented by additional resistors connected in parallel with the capacitors.
Carpinteri et al. \cite{car09} considered the mechanical interpretation of the Marchaud fractional derivative using the body springs connecting the non-adjacent points of the body with the stiffness decaying with the distance between the material points.
\subsection{Local Fractional Derivatives}
The fractional derivatives are nonlocal. Several researchers introduced local variants \cite{zha06} that are useful for the study of the pointwise behaviour of the fractal and multifractal functions that describe, e.g., the stress and deformation patterns in materials exhibiting a fractal-like microstructure \cite{car09} or the velocity field of a turbulent fluid \cite{kol98}.
Kolwankar \& Gangal \cite{kol97,kol98,kol98a,kol13} defined
the derivative via the RL derivative as
\begin{equation*}
\mathfrak{D}^q f (y) = \lim_{x \rightarrow y} \frac{D^q (f (x) - f (y))}{(x - y)^q}
\end{equation*}
if the limit exists and is finite.
The local fractional Taylor formula is written as \cite{yang12c}
\begin{equation*}
f (x) = \sum_{n = 0}^N \frac{f^{(n)} (y)}{\Gamma (1 + n)} (x - y)^n + \frac{\mathfrak{D}^{\alpha} f (y)}{\Gamma (1 + \alpha)} (x - y)^{\alpha} + R_{\alpha} (x, y).
\end{equation*}
Yang et al. \cite{yang16,zha16} used a similar definition
\begin{equation*}
\mathfrak{D}^{(k)} f (\tau) = \lim_{\tau \rightarrow \tau_0} \frac{f (\tau) - f (\tau_0)}{\tau^k - \tau_0^k}.
\end{equation*}
Chen et al. \cite{che10} proposed the local derivatives based on the integrals of the difference-quotient (IDQ) or the singular integrals of the difference-quotient (SIDQ). For example, the right SIDQ local derivative is
\begin{equation*}
\mathfrak{D}^{\alpha} f (y) = \frac{1}{\Gamma (1 - \alpha)} \lim_{h \rightarrow 0_+} \int\limits_0^1 (1 - t)^{- \alpha} \frac{f (t h + y) - f (y)}{h^{\alpha}} d t.
\end{equation*}
The local fractional derivative is essentially the {\em fractal} derivative \cite{chen06a,he14}. In contrast to the purely analytical approach of the fractional calculus, the fractal calculus follows the physical-geometric approach; to avoid confusion it is suggested to call the latter the {\em scaled} calculus \cite{ont13}.
The fractal (``Hausdorff'') derivative on the time fractal is defined as \cite{he18}
\begin{equation*}
\frac{\partial f}{\partial t^{\sigma}} = \lim_{t_B \rightarrow t_A} \frac{f (t_B) - f (t_A)}{(t_B)^{\sigma} - (t_A)^{\sigma}}
\end{equation*}
where ${\sigma}$ is the fractal dimension of time.
A more general definition is formulated as \cite{h1,h2}
\begin{equation*}
\frac{\partial^{\tau} f}{\partial t^{\sigma}} = \lim_{t_B \rightarrow t_A} \frac{f^{\tau} (t_B) - f^{\tau} (t_A)}{(t_B)^{\sigma} - (t_A)^{\sigma}}
\end{equation*}
Since the fractal derivative is a local operator, the numerical solution of the fractal derivative equations can be performed by the standard numerical techniques for the integer-order differential equations \cite{chen10}.
The so-called ``conformable'' fractional derivatives have similar properties.
\subsubsection{``Conformable'' Fractional Derivative}
Most fractional derivatives do not have the desirable properties \cite{abd14,kha14,kat14}:
\begin{itemize}
\item {the derivative of a constant is not zero;}
\item {they do not obey the product rule $D^{\alpha} (f g) = f D^{\alpha} (g) + g D^{\alpha} (f)$;}
\item {they do not obey the quotient rule
$D^{\alpha} ({f}/{g}) = (g D^{\alpha} (f) - f D^{\alpha} (g)) / g^2$;}
\item {they do not obey the chain rule $D^{\alpha} (f \circ g) (t) = f^{(\alpha)} (g (t))\, g^{(\alpha)} (t)$;}
\item {they do not obey in general $D^{\alpha} D^{\beta} f = D^{\alpha + \beta} f$.}
\end{itemize}
Khalil et al. \cite{kha14} and Katugampola \cite{kat14,kat14a}
suggested the so-called ``conformable'' limit-based \cite{and15} derivatives
\begin{equation*}
D^{\alpha} f (t) = \lim_{\epsilon \rightarrow 0} \frac{f (t + \epsilon t^{1 - \alpha}) - f (t)}{\epsilon}, \qquad 0 < \alpha < 1,
\end{equation*}
and
\begin{equation*}
D^{\alpha} f (t) = \lim_{\epsilon \rightarrow 0} \frac{f (t e^{\epsilon t^{-\alpha}}) - f (t)}{\epsilon}, \qquad 0 < \alpha < 1.
\end{equation*}
Since the conformable derivative is an extension of the classical derivative definition, it obeys the product rule, the quotient rule, and the linearity property, yields zero for the derivative of a constant, and satisfies some extensions of the classical calculus such as Rolle's theorem and the mean value theorem \cite{iyi16}.
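For a differentiable $f$ the Khalil limit reduces to $D^{\alpha} f (t) = t^{1 - \alpha} f^{\prime} (t)$, which is easy to verify numerically (an illustrative sketch; the function and the values of $t$, $\alpha$ are chosen arbitrarily):

```python
def conformable(f, t, alpha, eps=1e-7):
    """Khalil's conformable derivative via its defining difference quotient."""
    return (f(t + eps * t**(1 - alpha)) - f(t)) / eps

f = lambda t: t**3
t, alpha = 2.0, 0.5
# for differentiable f the limit equals t^(1-alpha) * f'(t)
expected = t**(1 - alpha) * 3 * t**2
assert abs(conformable(f, t, alpha) - expected) < 1e-3
```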
\section{Tempered Fractional Calculus}
Sabzikar et al. \cite{temp} suggested a variant of the fractional calculus where the power laws are tempered by an exponential factor. The random walk model with exponentially tempered power-law jumps converges to a tempered stable motion \cite{cha11}. This {\em tempered} fractional diffusion is useful in geophysical \cite{s44,s70} and financial \cite{s11} problems.
The authors considered two intervals for the parameter $\alpha$:
\begin{itemize}
\item {$0 < \alpha < 1$.
The {\em tempered} fractional derivative $\partial_x^{\alpha, \lambda}$ is defined as the function with the Fourier transform $[(\lambda + i k)^{\alpha} - \lambda^{\alpha}] \hat{f} (k)$ that in real space is written as
\begin{equation*}
\partial_x^{\alpha, \lambda} f (x) = \frac{\alpha}{\Gamma (1 - \alpha)} \int\limits_0^{\infty} (f (x) - f (x - y)) e^{- \lambda y} y^{- \alpha - 1} d y.
\end{equation*}
The negative {\em tempered} fractional derivative $\partial_{- x}^{\alpha, \lambda}$ is defined as the function with the Fourier transform $[(\lambda - i k)^{\alpha} - \lambda^{\alpha}] \hat{f} (k)$ that in real space is written as
\begin{equation*}
\partial_{- x}^{\alpha, \lambda} f (x) = \frac{\alpha}{\Gamma (1 - \alpha)} \int\limits_0^{\infty} (f (x) - f (x + y)) e^{- \lambda y} y^{- \alpha - 1} d y.
\end{equation*}
}
\item {$1 < \alpha < 2$.
The {\em tempered} fractional derivative $\partial_x^{\alpha, \lambda}$ is defined as the function with the Fourier transform $[(\lambda + i k)^{\alpha} - \lambda^{\alpha} - i k \alpha \lambda^{\alpha - 1}] \hat{f} (k)$ that in real space is
\begin{equation*}
\partial_x^{\alpha, \lambda} f (x) = \frac{\alpha (1 - \alpha)}{\Gamma (2 - \alpha)} \int\limits_0^{\infty} (f (x - y) - f (x) + y f^{\prime} (x)) e^{- \lambda y} y^{- \alpha - 1} d y.
\end{equation*}
The negative {\em tempered} fractional derivative $\partial_{- x}^{\alpha, \lambda}$ is defined as the function with the Fourier transform $[(\lambda - i k)^{\alpha} - \lambda^{\alpha} + i k \alpha \lambda^{\alpha - 1}] \hat{f} (k)$ that in real space is
\begin{equation*}
\partial_{- x}^{\alpha, \lambda} f (x) = \frac{\alpha (1 - \alpha)}{\Gamma (2 - \alpha)} \int\limits_0^{\infty} (f (x + y) - f (x) - y f^{\prime} (x)) e^{- \lambda y} y^{- \alpha - 1} d y.
\end{equation*}
}
\end{itemize}
Sabzikar et al. introduced the positive tempered integral as
\begin{equation*}
\mathfrak{I}_+^{\alpha, \lambda} f (x) = \frac{1}{\Gamma (\alpha)} \int\limits_{- \infty}^x f (u) (x - u)^{\alpha - 1}e^{- \lambda (x - u)} d u
\end{equation*}
and the negative tempered integral as
\begin{equation*}
\mathfrak{I}_-^{\alpha, \lambda} f (x) = \frac{1}{\Gamma (\alpha)} \int\limits_x^{\infty} f (u) (u - x)^{\alpha - 1}e^{- \lambda (u -x)} d u
\end{equation*}
called the RL tempered integrals since for $\lambda = 0$ they reduce to the usual RL integrals.
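For exponentials the positive tempered integral has a closed form: substituting $u = x - v$ in its definition gives $\mathfrak{I}_+^{\alpha, \lambda} e^{c x} = e^{c x} (\lambda + c)^{- \alpha}$, which a direct quadrature confirms (a numerical sketch; the parameter values are arbitrary):

```python
import math
import numpy as np

alpha, lam, c, x = 0.6, 1.0, 0.5, 0.3

# I_+^{alpha,lambda} e^{c x} = e^{c x}/Gamma(alpha) * int_0^inf v^{alpha-1} e^{-(lam+c) v} dv;
# the substitution v = w^2 removes the integrable singularity at v = 0.
w = np.linspace(0.0, 8.0, 200001)
g = 2.0 * w**(2.0 * alpha - 1.0) * np.exp(-(lam + c) * w**2)
integral = float(np.sum((g[:-1] + g[1:]) * 0.5) * (w[1] - w[0]))  # trapezoidal rule
numeric = math.exp(c * x) / math.gamma(alpha) * integral

closed_form = math.exp(c * x) * (lam + c)**(-alpha)
assert abs(numeric - closed_form) < 1e-4
```

Setting $\lambda = 0$ in the closed form recovers the familiar RL action on exponentials, consistent with the reduction noted above.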
The authors defined the RL tempered fractional derivatives $\mathcal{D}_{\pm}^{\alpha, \lambda}$ as functions with the Fourier transform $(\lambda \pm i k)^{\alpha} \hat{f} (k)$ that can be expressed
\begin{equation*}
\mathcal{D}_{\pm}^{\alpha, \lambda} f (x )=
\begin{cases}
\partial_{\pm x}^{\alpha, \lambda} f (x) + \lambda^{\alpha} f (x) & 0 < \alpha < 1 \\
\partial_{\pm x}^{\alpha, \lambda} f (x) + \lambda^{\alpha} f (x) \pm \alpha \lambda^{\alpha - 1} f^{\prime} (x)\ &1 < \alpha < 2.
\end{cases}
\end{equation*}
Evidently, integration and differentiation are inverse operators: $$\mathcal{D}_{\pm}^{\alpha, \lambda} \mathfrak{I}_{\pm}^{\alpha, \lambda} f (x) = f (x), \quad \mathfrak{I}_{\pm}^{\alpha, \lambda} \mathcal{D}_{\pm}^{\alpha, \lambda} f (x) = f (x).$$
The integration and differentiation operators have the semigroup property
\begin{equation*}
\mathfrak{I}_{\pm}^{\alpha, \lambda} \mathfrak{I}_{\pm}^{\beta, \lambda} f = \mathfrak{I}_{\pm}^{\alpha + \beta, \lambda} f, \quad
\mathcal{D}_{\pm}^{\alpha, \lambda} \mathcal{D}_{\pm}^{\beta, \lambda} f = \mathcal{D}_{\pm}^{\alpha + \beta, \lambda} f.
\end{equation*}
\section{Fractional Differential Equations}
\index{fractal}\index{fractional}
Generally, fractal media cannot be considered as continuous media.
The use of the non-integer dimensional spaces \cite{old74} is necessary to describe a fractal medium by the continuous models \cite{tar16}.
The fractional differential equations \cite{duan,nakh06,kilb06,die10} are non-local (i.e., they can incorporate the effects of memory and spatial correlations) and can be formulated in distinct but mathematically equivalent forms. Mainardi et al. \cite{main07} compared the fractional extensions of the standard Cauchy problem
\begin{equation}
\label{m21}
\frac{\partial u (x, t)}{\partial t} = \frac{\partial^2 u (x, t)}{\partial x^2}, \qquad x \in \bf{R}, \qquad t \in \bf{R}_0^+, \qquad
u (x, 0^+) = u_0 (x).
\end{equation}
The fundamental solution (or Green function) of (\ref{m21}), i.e. the solution subjected to the initial condition $u_0 (x) = \delta (x)$, is the Gaussian probability density function
\begin{equation*}
u (x, t) = \frac{1}{2 \sqrt{\pi}} t^{- 1/2}e^{- x^2 / (4 t)}.
\end{equation*}
The Green function has the scaling property
$u (x, t) = t^{- {1}/{2}} U ({x}/t^{1/2})$, where
$U (x)$ is the reduced Green function.
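The Gaussian kernel above can be checked numerically: it has unit mass and second moment $2 t$ (a small sanity check, not part of \cite{main07}; the value of $t$ is arbitrary):

```python
import numpy as np

t = 0.7
x = np.linspace(-30.0, 30.0, 120001)
# fundamental solution u(x,t) = t^{-1/2}/(2 sqrt(pi)) exp(-x^2/(4t))
u = t**-0.5 / (2.0 * np.sqrt(np.pi)) * np.exp(-x**2 / (4.0 * t))

dx = x[1] - x[0]
mass = float(np.sum(u) * dx)                   # should be 1
second_moment = float(np.sum(x**2 * u) * dx)   # should be 2t

assert abs(mass - 1.0) < 1e-6
assert abs(second_moment - 2.0 * t) < 1e-6
```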
The Cauchy problem (\ref{m21}) is equivalent to the integro-differential equation
\begin{equation*}
u (x, t) = u_0 (x) + \int\limits_0^t \left[\frac{\partial^2 u (x,\tau)}{\partial x^2} \right] d \tau
\end{equation*}
where the initial condition is incorporated.
The fractional diffusion equation could be written with the use of the RL derivative $D^{1 - \beta}$ ($\beta$ is a real number, $0 < \beta < 1$)
\begin{equation}
\label{m27}
\frac{\partial u (x, t)}{\partial t} = D^{1 - \beta} \frac{\partial^2 u (x, t)}{\partial x^2}
\end{equation}
or the Caputo derivative $D_{\star}^{\beta}$
\begin{equation}
\label{m28}
D_{\star}^{\beta} u (x, t) = \frac{\partial^2 u (x, t)}{\partial x^2}.
\end{equation}
The equations (\ref{m27}) and (\ref{m28}) are equivalent to the equation based on the use of the RL fractional integral of the order $\beta$
\begin{equation}
\label{m29}
u (x, t) = u_0 (x) + J^{\beta} \left[\frac{\partial^2 u (x,\tau)}{\partial x^2} \right].
\end{equation}
Note that the equation (\ref{m27}) could be obtained by differentiating (\ref{m29}), while the equation (\ref{m29}) can be derived by the fractional integration of (\ref{m28}).
The equation (\ref{m27}) was studied by Metzler et al. \cite{met94} and by Saichev \& Zaslavsky \cite{sai97}, the equation (\ref{m28}) by Gorenflo et al. \cite{gor95,gor98} and by Mainardi \cite{m96,m97},
and the integrodifferential equation (\ref{m29}) by Schneider \& Wyss \cite{sch89} using the Mellin transform.
Mainardi et al. \cite{main07} searched for the fundamental solution
of the equation (\ref{m28}) by applying in sequence the
Fourier
\begin{equation*}
\mathcal{F} \{v (x); k\} = \hat{v} (k) = \int\limits_{- \infty}^{\infty} e^{i k x} v (x) d x, \qquad k \in \bf{R}
\end{equation*}
and the Laplace
\begin{equation*}
\mathcal{L} \{w (t); s \} = \tilde{w} (s) = \int\limits_0^{\infty} e^{- s t} w (t) d t, \qquad s \in \bf{C}
\end{equation*}
transforms.
Thus the Green function in the Fourier-Laplace domain is determined by
\begin{equation}
\label{m212}
\hat{\tilde{u}} (k, s) = \frac{s^{\beta - 1}}{s^{\beta} + k^2}, \qquad 0 < \beta \le 1, \qquad \mathrm{Re}\, (s) > 0, \qquad k \in \bf{R}.
\end{equation}
There are two strategies to determine the Green function in the space-time domain $u (x,t)$ related to the order in performing inversions in the expression (\ref{m212}) \cite{main07}:
1) invert the Fourier transform to get $\tilde{u} (x, s)$ and then invert the Laplace transform \cite{m96,m97}; or 2) invert the Laplace transform to get $\hat{u} (k,t)$ and then invert the Fourier transform \cite{gor00,main01}.
Nieto \cite{nie10} studied the linear fractional differential equation with the spatial RL derivative for initial or periodic boundary conditions and derived the maximum principle using the properties of the Mittag-Leffler functions.
Compte \cite{com96} and West et al. \cite{wes97} studied
the equation for the hyperdiffusion (L\'evy-flight diffusion)
\begin{equation*}
\frac{\partial P}{\partial t} = - D (- \Delta)^{\frac{\gamma}{2}} P
\end{equation*}
where the fractional $n$-dimensional Laplace operator $(- \Delta)^{\frac{\gamma}{2}}$ is defined by its Fourier transform with respect to the spatial variable \cite{duan}
\begin{equation*}
\mathcal{F}[(- \Delta)^{\frac{\gamma}{2}} g (x)] = |\omega|^{\gamma} \mathcal{F} [g (x)].
\end{equation*}
Luchko \cite{luc09} derived the maximum principle for the initial-boundary-value problem for the time-fractional diffusion equation with the Caputo derivative over the open bounded domain $G \times (0, T)$, $G \subset R^n$.
The equation can be subjected to the complex transformation \cite{y16,he12,li12b}
$s = {x^{\alpha}}/{\Gamma (1 + \alpha)}$
to convert it into a partial differential equation\footnote{
Such transformation is possible in the multidimensional case if the variables $t, x, y, z$ obey the following constraint \cite{he12} ($q, p, k, l$ are constants)
\begin{equation*}
\xi = \frac{q t^{\alpha}}{\Gamma (1 + \alpha)} + \frac{p x^{\beta}}{\Gamma (1 + \beta)} + \frac{k y^{\gamma}}{\Gamma (1 + \gamma)} + \frac{l z^{\lambda}}{\Gamma (1 + \lambda)}.
\end{equation*} }
For example, the heat conduction equation ($\alpha$ is the fractal dimension of the fractal medium)
\begin{equation*}
\frac{\partial T}{\partial t} = \frac{\partial^{\alpha}}{\partial x^{\alpha}} \left(\lambda \frac{\partial^{\alpha} T}{\partial x^{\alpha}} \right)
\end{equation*}
is converted into the equation
\begin{equation*}
\frac{\partial T}{\partial t} = \frac{\partial}{\partial s} \left(\lambda \frac{\partial T}{\partial s} \right).
\end{equation*}
That could be further transformed by introduction of the Boltzmann variable \cite{liu15a}
$\chi = {s}/{\sqrt{t}} = {x^{\alpha}}/{\sqrt{t} \Gamma (1 + \alpha)}$
into the ordinary differential equation
\begin{equation*}
\frac{d}{d \chi} \left(\lambda \frac{d T}{d \chi} \right) + \frac{\chi}{2} \frac{d T}{d \chi} = 0.
\end{equation*}
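For a constant $\lambda$ the resulting ODE $\lambda T^{\prime\prime} + (\chi/2) T^{\prime} = 0$ has the error-function similarity solution $T (\chi) = \mathrm{erf} (\chi/(2 \sqrt{\lambda}))$; a finite-difference residual check (an illustrative sketch, constant $\lambda$ assumed, values arbitrary):

```python
import math

lam = 2.0
T = lambda chi: math.erf(chi / (2.0 * math.sqrt(lam)))

h = 1e-4
for chi in (0.5, 1.0, 2.0):
    d1 = (T(chi + h) - T(chi - h)) / (2 * h)            # central first derivative
    d2 = (T(chi + h) - 2 * T(chi) + T(chi - h)) / h**2  # central second derivative
    residual = lam * d2 + 0.5 * chi * d1                # should vanish
    assert abs(residual) < 1e-5
```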
For the general fractional differential equation in Jumarie's modification of the RL derivatives $$
f (u, u_t^{\alpha}, u_x^{\beta}, u_y^{\gamma}, u_z^{\lambda}, u_t^{2 \alpha}, u_x^{2 \beta}, u_y^{2 \gamma}, u_z^{2 \lambda}, \dots) = 0$$
He \& Li \cite{he13} suggested the following transforms
\begin{equation*}
s = \frac{q t^{\alpha}}{\Gamma (1 + \alpha)}, \quad X = \frac{p x^{\beta}}{\Gamma (1 + \beta)}, \quad Y = \frac{k y^{\gamma}}{\Gamma (1 + \gamma)}, \quad Z = \frac{l z^{\lambda}}{\Gamma (1 + \lambda)}
\end{equation*}
thus converting the fractional derivatives into classical derivatives
\begin{equation*}
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = q \frac{\partial u}{\partial s}, \quad
\frac{\partial^{\beta} u}{\partial x^{\beta}} = p \frac{\partial u}{\partial X}, \quad
\frac{\partial^{\gamma} u}{\partial y^{\gamma}} = k \frac{\partial u}{\partial Y}, \quad
\frac{\partial^{\lambda} u}{\partial z^{\lambda}} = l \frac{\partial u}{\partial Z}.
\end{equation*}
Babusci et al. \cite{bab} discussed relations between the differential equations and the theories of the pseudo-operators \cite{po1,po2} and the generalized integral transforms.
\subsection{Distributed order differential equations}
\label{dofe_}
The distributed order differential equations (DODE) are a special class of the fractional differential equations \cite{do1,do2,do3a,do3b,do3,do4,do5,gor05,main07a}.
Chechkin et al. \cite{chech} discussed the natural and the modified forms of DODEs and noted that the latter, in combination with the continuity equation and the retarded linear response equation for the flux that exhibits memory of the processes at previous times, admits a thermodynamic interpretation. DODEs are used to describe the accelerating subdiffusion, the decelerating superdiffusion, or the transformation of the anomalous behaviour at short times into the normal behaviour at long times. For example, Metzler \& Klafter \cite{met04} considered the DODE for the description of the ultraslow diffusion with the logarithmic time dependence $\left< x^2 (t) \right> \propto \log^k t$, including the so called Sinai diffusion ($k = 4$).
The concept of the distributed order differentiation is close to the variable order fractional operators that are useful for the study of the viscoelasticity, the reaction kinetics of proteins, the electrorheological fluids, the damage modelling \cite{gor02,var1,var2}.
There are two approaches to the formulation of the distributed order differential equations:
1) direct --- a new variable is not assigned; 2) the independent variable approach --- the order is considered as a function of some independent variable.
Mainardi et al. \cite{main07} studied the fractional diffusion equation of distributed order
\begin{equation}
\label{m31}
\int\limits_0^1 b (\beta) [D^{\beta} u (x,t)] d \beta = \frac{\partial^2 u (x,t)}{\partial x^2}, \quad b(\beta) \ge 0, \quad \int\limits_0^1 b(\beta) d \beta = 1
\end{equation}
with $x \in \bf{R}$, $t \ge 0$ and the initial condition
$u (x,0^+) = \delta (x)$.
The weight function $b (\beta)$ is called the order-density.
The authors used the Fourier and Laplace transforms to get the fundamental solution similar to a single-order case (\ref{m212})
\begin{equation*}
\left[\int\limits_0^1 b (\beta) s^{\beta} d \beta \right] \hat{\tilde{u}} (k, s) - \int\limits_0^1 b (\beta) s^{\beta - 1} d \beta = - k^2 \hat{\tilde{u}} (k, s)
\end{equation*}
and
\begin{equation}
\label{m32}
\hat{\tilde{u}} (k, s) = \frac{B (s)/s}{B (s) + k^2}, \qquad k \in \bf{R}, \qquad
B (s) = \int\limits_0^1 b (\beta) s^{\beta} d \beta.
\end{equation}
In the case of small $k$ the equation (\ref{m32}) can be approximated as
\begin{equation*}
\hat{\tilde{u}} (k, s) = \frac{1}{s} \left( 1 - \frac{k^2}{B (s)} + \dots \right)
\end{equation*}
and the second moment is written as
\begin{equation}\label{m34}
\tilde{\mu_2} (s) = - \frac{\partial^2}{\partial k^2} \hat{\tilde{u}} (k=0, s) = \frac{2}{s B (s)}.
\end{equation}
The special case of DODEs are the double-order fractional equations \cite{chech}
\begin{equation*}
b (\beta) = b_1 \delta(\beta - \beta_1) + b_2 \delta(\beta - \beta_2), \quad 0 < \beta_1 < \beta_2 \le 1, \quad b_1 > 0, b_2 > 0, \quad b_1 + b_2 = 1.
\end{equation*}
Asymptotic behaviour of $\mu_2 (t)$ follows from (\ref{m34}) for the cases of the slow diffusion (the power-law growth, $b (\beta) = b_1 \delta(\beta - \beta_1) + b_2 \delta(\beta - \beta_2)$), where $\tilde{\mu_2} (s) = {2}/({b_1 s^{\beta_1 + 1} + b_2 s^{\beta_2 + 1}})$, and
the ultra-slow diffusion (the logarithmic growth, $b (\beta) = 1$), with
$\tilde{\mu_2} (s) = 2 {\ln s}/({s (s - 1)})$.
The distributed order equations allow one to describe more complex media.
The time-fractional diffusion equation of distributed order (\ref{m31}) is potentially more flexible to represent local phenomena, while the space-fractional diffusion equation of distributed order is better suited to represent variations in space \cite{cap03}.
\subsection{Special Functions}
There are special functions related to the fractional differential equations, similar to the classical case (such as, e.g., the Bessel and the cylindrical functions, the classical orthogonal polynomials, the Airy functions, etc.) \cite{leb}.
The most important functions in the fractional calculus are the Mittag-Leffler function \cite{hau11},
the H-functions \cite{hil0,hf,kir2}, the Wright functions \cite{wf,wf1}, and the generalized Lommel-Wright functions \cite{lwf}.
The Mittag-Leffler function is even called the ``Queen'' function of the fractional calculus \cite{kir2}.
\subsubsection{Mittag-Leffler Functions}
\label{ml}
The eigenfunctions of the RL derivative are the solutions of the equation \cite{hil08}
\begin{equation*}
D_{0+}^{\alpha} [ f (x)] = \lambda f (x)
\end{equation*}
where $\lambda$ is the eigenvalue.
The eigenfunctions are $
f (x) = x^{\alpha - 1} E_{\alpha, \alpha} (\lambda x^{\alpha})$ where
\begin{equation}
\label{ml2}
E_{\alpha, \beta} (x) = \sum_{k = 0}^{\infty} \frac{x^k}{\Gamma (\alpha k + \beta)}
\end{equation}
is the generalized Mittag-Leffler function (also called the Wiman's function \cite{shu07}).
The more general eigenvalue equation for derivatives of the orders $\alpha$ and $\beta$ is
\begin{equation*}
D_{0+}^{\alpha, \beta} [ f (x)] = \lambda f (x)
\end{equation*}
The solution is \cite{hil08} $ f (x) = x^{(1 - \beta)(\alpha - 1)} E_{\alpha, \alpha + \beta (1 - \alpha)} (\lambda x^{\alpha})$.
The special case is the equation $D_{0+}^{\alpha, 1} [ f (x)] = \lambda f (x)$
with eigenfunction $f (x) = E_{\alpha} (\lambda x^{\alpha})$.
The one-parameter Mittag-Leffler function $E_{\alpha} (x) = E_{\alpha, 1} (x)$ is the particular case of (\ref{ml2}) for $\beta = 1$.
Evidently \cite{die10},
\begin{equation*}
E_{0, 1} (x) = \sum_{k = 0}^{\infty} \frac{x^k}{\Gamma (1)} = \sum_{k = 0}^{\infty} x^k = \frac{1}{1 - x}, \qquad
E_1 (x) = \sum_{k = 0}^{\infty} \frac{x^k}{k!} = \exp (x).
\end{equation*}
There are other special cases such as \cite{hau11} $E_2 (- x^2) = E_{2, 1} (- x^2) = \cos (x)$; $E_2 (x^2) = E_{2, 1} (x^2) = \cosh (x)$; for $x > 0$, $E_{1/2} (x^{1/2}) = E_{1/2, 1} (x^{1/2}) = (1 + \mathrm{erf} (x^{1/2})) \exp (x)$;
for $x \in \mathcal{C}$ and $r \in \mathcal{N}$
\begin{equation*}
E_{1, r} (x) = \frac{1}{x^{r - 1}} \left(\exp (x) - \sum_{k = 0}^{r - 2} \frac{x^k}{k!} \right);
\end{equation*}
$E_3 (x) = {1}/{2} [e^{x^{1/3}} + 2 e^{- 1/2 x^{1/3}} \cos ({\sqrt{3}}/{2} x^{1/3})]$;
$E_4 (x) = {1}/{2} [\cos(x^{1/4}) + \cosh(x^{1/4})]$
where $\mathrm{erf} (x) = ({2}/{\sqrt{\pi}}) \int_0^x \exp(-t^2) d t$.
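These identities provide a convenient test for a direct series implementation of the generalized Mittag-Leffler function (an illustrative sketch; the truncation length and the test point are arbitrary):

```python
import math

def ml(alpha, beta, z, terms=80):
    """Truncated series for the generalized Mittag-Leffler function E_{alpha,beta}(z)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

x = 0.7
assert abs(ml(1.0, 1.0, x) - math.exp(x)) < 1e-12          # E_1(x) = exp(x)
assert abs(ml(2.0, 1.0, -x**2) - math.cos(x)) < 1e-12      # E_2(-x^2) = cos(x)
assert abs(ml(2.0, 1.0, x**2) - math.cosh(x)) < 1e-12      # E_2(x^2) = cosh(x)
# E_{1/2}(x^{1/2}) = (1 + erf(x^{1/2})) exp(x) for x > 0
assert abs(ml(0.5, 1.0, math.sqrt(x))
           - (1 + math.erf(math.sqrt(x))) * math.exp(x)) < 1e-10
```

The plain series converges for every argument but becomes numerically delicate for large negative arguments, where asymptotic expansions are preferable.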
The Mittag-Leffler function $E_1$ satisfies the functional relation \cite{die10,hau11}
$E_1 (x-y) = {E_1 (x)}/{E_1 (y)}$
and the relation between two Mittag-Leffler functions with different parameters
$E_{n_1,n_2} (x) = x E_{n_1, n_1 + n_2} (x) + {1}/{\Gamma (n_2)}$.
Note that the frequently used relation $
E_{\alpha} (a(t + s)^{\alpha}) = E_{\alpha} (a t^{\alpha}) E_{\alpha} (a s^{\alpha}), \quad t, s \ge 0,$
is valid only if $\alpha = 0$ or $\alpha = 1$ \cite{pen10}.
Asymptotic expansions and integral representations of the Mittag-Leffler functions could be found in the papers \cite{gor97,gor02,hau11}.
Prabhakar \cite{pra71} suggested the extension
\begin{equation*}
E_{\alpha, \beta}^{\gamma} (x) = \sum_{n = 0}^{\infty} \frac{(\gamma)_n}{\Gamma (\alpha n + \beta)} \frac{x^n}{n!}, \qquad Re (\alpha) > 0, Re (\beta) > 0
\end{equation*}
where $(\gamma)_n$ is the Pochhammer symbol \cite{shu07}: $(\gamma)_0 = 1$, $(\gamma)_n = \gamma (\gamma + 1) (\gamma + 2) \dots (\gamma + n - 1)$.
The extension to the multi-index Mittag-Leffler functions \cite{kir1,kir2}
\begin{equation*}
E_{(\frac{1}{\rho_i}),(\mu_i)} (x) = \sum_{k = 0}^{\infty} \frac{x^k}{\Gamma (\mu_1 + k/\rho_1) \dots \Gamma (\mu_m + k/\rho_m)}
\end{equation*}
is performed by replacing the indices $\alpha = 1/\rho$ and $\beta = \mu$ by two sets of multi-indices $\alpha \rightarrow (1/\rho_1, 1/\rho_2, \dots, 1/\rho_m)$ and $\beta \rightarrow (\mu_1, \mu_2, \dots, \mu_m)$.
There are a couple of related functions \cite{nakh03}
\begin{itemize}
\item {Barrett's function
\begin{equation*}
U (x, \lambda) = \sum_{k = 1}^{\infty} \frac{\lambda^{k - 1} x^{k \alpha - i}}{\Gamma (k \alpha - i + 1)};
\end{equation*}
}
\item {Rabotnov's (fractional exponential) function \cite{rab97,rab1}}
\begin{equation*}
\mathcal{E}_{\alpha} (\beta, x) = x^{\alpha} \sum_{n = 0}^{\infty} \frac{\beta^n x^{n (\alpha + 1)}}{\Gamma ((n + 1)(1 + \alpha))}.
\end{equation*}
\end{itemize}
\subsubsection{H Functions}
The H-function of order $(m, n, p, q) \in \mathcal{N}^4$ is defined via the Mellin-Barnes type contour integral \cite{duan,hf}
\begin{equation*}
H_{p, q}^{m, n} (z) = \frac{1}{2 \pi i} \int\limits_{\mathcal{L}} \mathcal{H}_{p, q}^{m, n} z^s d s
\end{equation*}
where $z^s = \exp [s (\ln |z| + i \arg z)]$,
\begin{equation*}
\mathcal{H}_{p, q}^{m, n} = \frac{A (s) B (s)}{C (s) D (s)}, \qquad
A (s) = \prod_{j = 1}^m \Gamma(b_j - \beta_j s),\quad B (s) = \prod_{j = 1}^n \Gamma (1 - a_j + \alpha_j s),
\end{equation*}
\begin{equation*}
C (s) = \prod_{j = m + 1}^q \Gamma(1 - b_j + \beta_j s),\quad D (s) = \prod_{j = n + 1}^p \Gamma (a_j - \alpha_j s).
\end{equation*}
Here $m,n,p,q$ are integers satisfying
$0 \le n \le p, \quad 1 \le m \le q, m^2 + n^2 \ne 0$,
$a_j (j = 1, \dots, p), b_j (j = 1, \dots, q)$ are complex numbers.
The integration contour $\mathcal{L}$ could be chosen in different ways:
\begin{itemize}
\item {$\mathcal{L} = \mathcal{L}_{- i \infty, i \infty}$ chosen to go from $- i \infty$ to $i \infty$, leaving to the right all the poles $\mathcal{P} (A)$ of the functions $\Gamma$ in $A (s)$ and to the left all the poles $\mathcal{P} (B)$ of the functions $\Gamma$ in $B (s)$;}
\item {$\mathcal{L} = \mathcal{L}_{i \infty}$ is a loop beginning and ending at $+ \infty$ and encircling in the negative direction all the poles of $\mathcal{P} (A)$;}
\item {$\mathcal{L} = \mathcal{L}_{- i \infty}$ is a loop beginning and ending at $- \infty$ and encircling in the negative direction all the poles of $\mathcal{P} (B)$.}
\end{itemize}
\subsubsection{Wright Functions}
The Wright function is defined by a series representation that converges in the whole complex $z$-plane \cite{wf1,gor99,gor0,kil02}
\begin{equation*}
W_{\lambda, \mu} (z) = \sum_{n = 0}^{\infty} \frac{z^n}{n! \Gamma (\lambda n + \mu)}, \quad \lambda > - 1, \mu \in \mathcal{C}.
\end{equation*}
The integral representation of the Wright function is written as
\begin{equation*}
W_{\lambda, \mu} (z) = \frac{1}{2 \pi i} \int\limits_{Ha} e^{\sigma + z \sigma^{- \lambda}} \frac{d \sigma}{\sigma^{\mu}}
\end{equation*}
where $Ha$ is the Hankel path (a loop that starts from $- \infty$ along the lower side of the negative real axis, encircles the origin with a circle of radius $\epsilon \rightarrow 0$ in the positive sense, and ends at $- \infty$ along the upper side of the negative real axis).
There are the Wright-type {\em auxiliary} functions $ F_{\nu} (z) = W_{- \nu,0} (z)$, $ M_{\nu} (z) = W_{- \nu, 1 -\nu} (z)$,
where $0 < \nu < 1$; these functions are related by $F_{\nu} (z) = \nu z M_{\nu} (z)$.
The series representations of the {\em auxiliary} functions are
\begin{equation*}
F_{\nu} (z) = \sum_{n = 1}^{\infty} \frac{(- z)^n}{n! \Gamma (- \nu n)} = \frac{1}{\pi} \sum_{n = 1}^{\infty} \frac{(- z)^{n - 1}}{n!} \Gamma (\nu n + 1) \sin (\pi \nu n)
\end{equation*}
and
\begin{equation*}
M_{\nu} (z) = \sum_{n = 0}^{\infty} \frac{(- z)^n}{n! \Gamma (- \nu n + (1 - \nu))} = \frac{1}{\pi} \sum_{n = 1}^{\infty} \frac{(- z)^{n - 1}}{(n - 1)!} \Gamma (\nu n) \sin (\pi \nu n).
\end{equation*}
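A direct summation of the $M_{\nu}$ series, skipping the terms where $\Gamma$ has a pole, reproduces the known special case $M_{1/2} (z) = \exp (- z^2/4)/\sqrt{\pi}$ (an illustrative check; the test point is arbitrary):

```python
import math

def M(nu, z, terms=60):
    """Wright auxiliary function M_nu(z) by series; 1/Gamma vanishes at the poles."""
    total = 0.0
    for n in range(terms):
        arg = -nu * n + (1.0 - nu)
        if arg <= 0 and arg.is_integer():
            continue  # 1/Gamma(nonpositive integer) = 0, the term drops out
        total += (-z)**n / (math.factorial(n) * math.gamma(arg))
    return total

z = 1.3
assert abs(M(0.5, z) - math.exp(-z**2 / 4.0) / math.sqrt(math.pi)) < 1e-12
```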
\section{Solution of Fractional Differential Equations}
\subsection{Analytical Methods}
Numerous approximate analytical methods are known:
\begin{itemize}
\item {the Adomian decomposition method (ADM) \cite{zha1};}
\item {the combined Laplace-Adomian method (CLAM) \cite{waz10};}
\item {the variational iteration method (VIM) \cite{y11,wu10,far11,yan13,yan13b}\footnote{
VIM includes three steps to determine the variational iteration formula:
\begin{enumerate}
\item {establishing the correction functional;}
\item {identifying the Lagrange multipliers;}
\item {determining the initial iteration.}
\end{enumerate}
The second step is the crucial one \cite{zha9}.
} and its local (LVIM) \cite{yang12a,yan13b,zha9,yang15} and fractional (using the fractional order Lagrange multipliers) \cite{khan,zha15b} variants; }
\item {the homotopy perturbation method (HPM)\cite{abb06,zha6,yil10,sin11a,raf12,wei17} and its modification \cite{yan13c} and local fractional variant (LFHPM) \cite{yang15a};}
\item {the differential transformation method \cite{jon09,all09,gha15a};}
\item {the heat-balance integral method (HBIM)\cite{chr10,hr11,chr11};}
\item {the fractional complex transform method (FCTM) \cite{yang,y16,y16a,he12a,su13};}
\item {the local fractional Fourier series method (FSM) \cite{yang13,zha14a}; }
\item {the modified simple equation method \cite{jaw10,you14};}
\item {the method of images (limited to special spatial symmetries);}
\item {the Mellin integral transform method \cite{luc13};}
\item {the local fractional decomposition method (LFDM) \cite{ahm15};}
\item {the fractional sub-equation method \cite{y19,guo12};}
\item {the Sumudu transform methods \cite{wat02a,dem15} and their variant --- the local fractional homotopy perturbation Sumudu transform method \cite{zhao17};}
\item {the theta-method \cite{asl14};}
\item {the Picard successive approximation method (PSAM) \cite{yang12b,yan14};}
\item {the local Laplace transforms.}
\end{itemize}
Frequently the analytical methods are variants of the perturbation methods \cite{vd}.
For example, He \cite{he1,he2,he3} built his method for solving the general equation
$A (u) - f (r) = 0 $
with the general differential operator $A$ divided into linear $L$ and nonlinear $N$ parts,
$L (u) + N (u) - f (r) = 0$, on the approach of Liao \cite{liao} (who used a two-parameter family of equations) by considering the one-parameter family
$(1 - p) L (u) + p N (u) = 0$.
He constructed the homotopy $v(r, p): \Omega \times [0, 1] \rightarrow R $ that satisfies
\begin{equation*}
H(v, p) = (1 - p) [L (v) - L (v_0)] + p [A (v) - f (r)] = 0
\end{equation*}
where the homotopy parameter $p \in [0, 1]$,
$v_0$ is the initial approximation.
Evidently, $H (v, 0) = L (v) - L (v_0) =0$ and
$H (v, 1) = [A (v) - f (r)] = 0$.
In topology, $L (v) - L (v_0)$ is called a deformation.
The homotopy parameter $p$ is considered as a small parameter and the solution is written as a series
\begin{equation*}
v = v_0 + p v_1 + p^2 v_2 + p^3 v_3 + \dots
\end{equation*}
and when $p \rightarrow 1$
\begin{equation*}
u = \lim_{p \rightarrow 1}v = v_0 + v_1 + v_2 + v_3 + \dots.
\end{equation*}
The Adomian decomposition method (ADM) \cite{ad1,ad2} does not use linearization, perturbation or Green's functions. The accuracy of the approximate analytical solutions can be verified by direct substitution.
The initial value problem is written as $L u + R u + N u = g$ where $L$ is the linear operator to be inverted, $R$ is the linear remainder operator and $N$ is the nonlinear operator.
Thus $L^{- 1} L u = u - \Phi$, where $\Phi$ incorporates the initial values.
The solution and nonlinear term are decomposed into series
\begin{equation*}
u = \sum_{n = 0}^{\infty} u_n, \qquad
N u = \sum_{n = 0}^{\infty} A_n
\end{equation*}
where $A_n$ are the Adomian polynomials for $N u = f (u)$ \cite{ad2}
\begin{equation*}
A_n = \frac{1}{n!} \left[ \frac{\partial^n}{\partial \lambda^n} f \left(\sum_{k = 0}^{\infty} u_k \lambda^k \right) \right]_{\lambda = 0}, \qquad n = 0, 1, 2, \dots.
\end{equation*}
Finally
\begin{equation*}
\sum_{n = 0}^{\infty} u_n = \Phi + L^{- 1} g - L^{- 1} \left[R \sum_{n = 0}^{\infty} u_n + \sum_{n = 0}^{\infty} A_n \right].
\end{equation*}
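For a polynomial nonlinearity the Adomian formula reduces to extracting the coefficient of $\lambda^n$; e.g., for $N u = u^2$ the first polynomials are $A_0 = u_0^2$, $A_1 = 2 u_0 u_1$, $A_2 = u_1^2 + 2 u_0 u_2$, which can be checked by a self-convolution of the component sequence (an illustrative sketch; the component values are arbitrary):

```python
import numpy as np

# components u_0, u_1, u_2 chosen arbitrarily for the check
u = np.array([1.5, -0.4, 0.7])

# f(sum u_k lambda^k) = (u0 + u1*l + u2*l^2)^2 for f(u) = u^2;
# A_n is the coefficient of lambda^n, i.e. the self-convolution of u
A = np.convolve(u, u)[:3]

assert abs(A[0] - u[0]**2) < 1e-12
assert abs(A[1] - 2 * u[0] * u[1]) < 1e-12
assert abs(A[2] - (u[1]**2 + 2 * u[0] * u[2])) < 1e-12
```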
The nonlinear term $Nu (x, t)$ can also be decomposed
\cite{yan13c} as
\begin{equation*}
N u = \sum_{n = 0}^{\infty} p^n H_n
\end{equation*}
where He's polynomials are \cite{noor,gho09}
\begin{equation*}
H_n (u_0, u_1, \dots, u_n) = \frac{1}{n!} \left. \frac{\partial^n}{\partial p^n} \left[N \left( \sum_{i= 0}^n p^i u_i\right) \right] \right|_{p = 0}.
\end{equation*}
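He's polynomials can be evaluated in the same way; for polynomial nonlinearities they coincide with the Adomian polynomials \cite{gho09}. The sketch below (with the cubic nonlinearity $N(u) = u^3$ chosen only for illustration) computes the first few:

```python
import sympy as sp

p = sp.symbols("p")
u = sp.symbols("u0:4")

def he_poly(N, n):
    """H_n = (1/n!) d^n/dp^n N(sum_{i=0}^n p^i u_i) at p = 0."""
    partial = sum(p**i * u[i] for i in range(n + 1))
    return sp.expand(sp.diff(N(partial), p, n).subs(p, 0) / sp.factorial(n))

H = [he_poly(lambda x: x**3, n) for n in range(3)]
# H[0] = u0**3, H[1] = 3*u0**2*u1, H[2] = 3*u0**2*u2 + 3*u0*u1**2
```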
The fractional sub-equation method includes several steps
\cite{guo12}:
\begin{itemize}
\item {Transformation of the nonlinear fractional equation in two variables $x$ and $t$, $P (u, D_t^{\alpha} u, D_x^{\alpha} u, \dots) = 0, \quad 0 < \alpha \le 1$,
where $D_t^{\alpha} u$ and $D_x^{\alpha} u$ are the Jumarie modifications of the Riemann-Liouville derivatives,
using the travelling wave transformation
$u (x, t) = u (\xi)$, $\xi = x + c t$,
where $c$ is a constant to be determined, to the equation \begin{equation}
\label{g9}
P (u, c u^{\prime}, u^{\prime}, c D_{\xi}^{\alpha} u, D_{\xi}^{\alpha} u, \dots) = 0.
\end{equation}
}
\item {The solution of the equation (\ref{g9}) is assumed to have the form
\begin{equation*}
u (\xi) = \sum_{i = - n}^{- 1} a_i \phi^i + a_0 + \sum_{i = 1}^{n} a_i \phi^i,
\end{equation*}
where $a_i$ ($i = - n, - n + 1, \dots, n - 1, n$) are constants to be determined and $\phi = \phi (\xi)$ is a function that satisfies the fractional Riccati equation $D_{\xi}^{\alpha} \phi (\xi) = \sigma + \phi^2 (\xi)$,
where $\sigma$ is a constant.
}
\item {Formulation of a set of overdetermined nonlinear algebraic equations for $c$ and $a_i (i = - n, - n + 1, \dots, n - 1, n)$ \cite{guo12}. }
\end{itemize}
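In \cite{guo12} the sub-equation is the fractional Riccati equation $D_{\xi}^{\alpha} \phi = \sigma + \phi^2 (\xi)$. As a quick consistency check in the classical case $\alpha = 1$ (an assumption made here so that the fractional derivative reduces to the ordinary one), the known solution $\phi (\xi) = - \sqrt{- \sigma} \tanh (\sqrt{- \sigma}\, \xi)$ for $\sigma < 0$ can be verified symbolically:

```python
import sympy as sp

xi, s = sp.symbols("xi s", positive=True)  # write sigma = -s with s > 0
sigma = -s
phi = -sp.sqrt(s) * sp.tanh(sp.sqrt(s) * xi)

# Residual of the Riccati equation phi' = sigma + phi**2 (alpha = 1 case)
residual = sp.simplify(sp.diff(phi, xi) - (sigma + phi**2))
```

The residual vanishes identically, confirming the tanh ansatz that underlies the hyperbolic-function solutions produced by the method.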
\subsection{Numerical Methods}
Diethelm et al. \cite{die06} listed the requirements for the numerical methods: they should be convergent, consistent of some reasonable order $h^p$, stable, reasonably inexpensive to run, and reasonably easy to program.
The numerical methods for fractional differential equations are usually constructed by modifying methods for ordinary differential equations, but they require significantly more computation time and storage. Approximating the fractional derivative requires evaluating a convolution integral, i.e. sampling and multiplying two functions over the whole interval of integration, which leads to an operation count of $O (n^2)$, where $n$ is the number of sampling points \cite{ford}.
The computational effort can be reduced by exploiting the fading memory property of the fractional derivatives, which makes it possible to restrict the interval of integration --- the {\em short memory principle} \cite{den07a,den15} (also called the {\em fixed memory principle} \cite{ford} or the {\em logarithmic memory principle} \cite{han11}) --- and by using adaptive time stepping and basis selection \cite{bru10}.
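The effect of the short memory principle can be sketched numerically. The code below is a minimal illustration (not taken from the cited works): it implements the Gr\"unwald-Letnikov approximation of $D^{\alpha} f$ with the standard weight recursion $w_0 = 1$, $w_k = w_{k - 1} (1 - (\alpha + 1)/k)$, and compares the full sum with a truncated-memory sum for $f (t) = t$, $\alpha = 1/2$, whose exact half-derivative is $2 \sqrt{t/\pi}$.

```python
import math

def gl_weights(alpha, n):
    """Gruenwald-Letnikov weights w_k = (-1)**k * C(alpha, k) via the
    recursion w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, t, alpha, h, memory=None):
    """Gruenwald-Letnikov approximation of D^alpha f at time t with step h.
    If `memory` (in time units) is given, the convolution sum is truncated
    to the most recent memory/h terms: the short memory principle."""
    n = int(round(t / h))
    if memory is not None:
        n = min(n, int(round(memory / h)))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h**alpha

exact = 2.0 / math.sqrt(math.pi)                   # D^{1/2} t at t = 1
full = gl_derivative(lambda s: s, 1.0, 0.5, 1e-3)  # whole history
short = gl_derivative(lambda s: s, 1.0, 0.5, 1e-3, memory=0.5)  # truncated
```

With step $h = 10^{-3}$ the full sum matches the exact value closely, while truncating the memory to the last $0.5$ time units introduces an error of a few percent at roughly half the cost per step; this is the trade-off the short memory principle exploits.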
Numerous methods are used to solve the fractional differential equations in practice: the finite difference \cite{fd1,fd2} (both the explicit, e.g. Euler \cite{odi08}, and the implicit \cite{impl1,impl2}, e.g. the Crank-Nicolson \cite{cn1,cn2} or the alternating direction implicit \cite{zha16a,zha19b} schemes, and compact schemes \cite{cfd1,cfd2,cfd3,cfd4}), the finite elements \cite{hua08,jia11,zeng15,fem1,fem2} (including the least-squares FEM \cite{fix}, the Galerkin FEM \cite{sin06b,jin15} and the discontinuous Galerkin FEM \cite{mus13}), the spectral methods \cite{can06,esm11,doha11,zay14}, the meshfree methods \cite{dua08,shi12} (including the radial basis function methods that exploit cubic $\phi = r^3$, Gaussian $\phi = \exp (- r^2 / c^2)$, multiquadric $\phi = \sqrt{c^2 + r^2}$ or inverse multiquadric $\phi = 1 / \sqrt{c^2 + r^2}$ functions \cite{rb1,rb2,rb3}), and the Legendre wavelet collocation method \cite{hey14}.
Bahuguna et al. \cite{bah09}, Hanert \cite{ha10}, Deng et al. \cite{den04}, Deng \& Li \cite{deng12}, Ford \& Simpson \cite{ford}, Ford \& Connolly \cite{ford1}, and Momani et al. \cite{mom} reported comparisons of several numerical methods.
\begin{thebibliography}{100}
\bibitem{abb06}
{\sc Abbaasbandy, S.}
\newblock The application of homotopy analysis method to nonlinear equations
arising in heat transfer.
\newblock {\em Phys. Lett. A 360\/} (2006), 109--113.
\bibitem{abd14}
{\sc Abdeljawad, T.}
\newblock On conformable fractional calculus.
\newblock arXiv: 1402.6892 [math.DS], 2014.
\bibitem{ad1}
{\sc Adomian, G.}
\newblock {\em Solving Frontier Problems of Physics: The Decomposition Method}.
\newblock Kluwer Academic Publishers, Boston, 1994.
\bibitem{agu14}
{\sc Aguilar, J.~F.~G., and Hern\'andez}
\newblock Space-time fractional diffusion-advection equation with {C}aputo
derivative.
\newblock {\em Abstr. Appl. Anal. 2014\/} (2014), 283019.
\bibitem{ahm15}
{\sc Ahmad, J., Mohyud-Din, S.~T., Srivastava, H.~M., and Yang, X.-J.}
\newblock Analytic solutions of the {H}elmholtz and {L}aplace equations by
using local fractional derivative operators.
\newblock {\em Wave Wavelets Fractals Adv. Anal. 1\/} (2015), 22--26.
\bibitem{all09}
{\sc Allahviranloo, T., Kiani, N.~A., and Motamedi, N.}
\newblock Solving fuzzy differential equations by differential transformation
method.
\newblock {\em Inform. Sci. 170\/} (2009), 956--966.
\bibitem{and15}
{\sc Anderson, D.~R., and Ulness, D.~J.}
\newblock Properties of the {K}atugampola fractional derivative with potential
applications in quantum mechanics.
\newblock {\em J. Math. Phys. 56\/} (2015).
\bibitem{asl14}
{\sc Aslefallah, M., Rostamy, D., and Hosseinkhani, K.}
\newblock Solving time-fractional differential diffusion equation by
theta-method.
\newblock {\em Int. J. Adv. Appl. Math. Mech. 2\/} (2014), 1--8.
\bibitem{rb3}
{\sc Aslefallah, M., and Shivanian, E.}
\newblock Nonlinear fractional integro-differential reaction-diffusion equation
via radial basis functions.
\newblock {\em Eur. Phys. J. Phys. 130\/} (2015), 47.
\bibitem{h2}
{\sc Atangana, A.}
\newblock Fractal-fractional differentiation and integration: connecting
fractal calculus and fractional calculus to predict complex systems.
\newblock {\em Chaos, Solit. Fract. 102\/} (2017), 396--406.
\bibitem{at16}
{\sc Atangana, A., and Baleanu, D.}
\newblock New fractional derivatives with nonlocal and non-singular kernel:
theory and application to heat transfer model.
\newblock {\em Thermal Sci. 20\/} (2016), 763--769.
\bibitem{at17}
{\sc Atangana, A., and Gomez-Aguilar, J.}
\newblock A new derivative with normal distribution kernel: theory, methods and
applications.
\newblock {\em Phys. A Stat. Mech. Appl. 476\/} (2017), 1--14.
\bibitem{atang13}
{\sc Atangana, A., and Secer, A.}
\newblock A note on fractional derivatives of some special functions.
\newblock {\em Abstr. Appl. Anal. 2013\/} (2013), 279681.
\bibitem{bab}
{\sc Babusci, D., Dattoli, G., and Quattromini, M.}
\newblock Relativistic equations with fractional and pseudodifferential
operators.
\newblock {\em Phys. Rev. A 83\/} (2011), 062109.
\bibitem{bah09}
{\sc Bahuguna, D., Ujlayan, A., and Pandey, D.~N.}
\newblock A comparative study of numerical methods for solving an
integro-differential equation.
\newblock {\em Comput. Math. Appl. 57\/} (2009), 1485--1493.
\bibitem{bal19}
{\sc Baleanu, D., and Fernandez, A.}
\newblock On fractional operators and their classifications.
\newblock {\em Mathematics 7\/} (2019), 830.
\bibitem{t17}
{\sc Ben~Adda, F.}
\newblock Geometric interpretation of the differentiability and gradient of
real order.
\newblock {\em Compt. Rend. - Series I - Math. 326\/} (1997), 931--934.
\bibitem{t16}
{\sc Ben~Adda, F.}
\newblock Geometric interpretation of the fractional derivative.
\newblock {\em J. Fract. Calc. 11\/} (1997), 21--51.
\bibitem{ben06}
{\sc Bennett, K.~M., Hyde, J.~S., and Schmainda, K.~M.}
\newblock Water diffusion heterogeneity index in the human brain is insensitive
to the orientation of applied magnetic field gradient.
\newblock {\em Magn. Resonan. Med. 56\/} (2006), 235--239.
\bibitem{ben13}
{\sc Benson, D.~A., Meerschaert, M.~M., and Revielle, J.}
\newblock Fractional calculus in hydrological modelling: {A} numerical
perspective.
\newblock {\em Adv. Water Resour. 51\/} (2013), 479--497.
\bibitem{rab1}
{\sc Bosiakov, S., and Rogosin, S.}
\newblock Analytical modeling of the viscoelastic behavior of periodontal
ligament with using {R}abotnov's fractional exponential function.
\newblock In {\em Lect. Notes Electr. Eng.\/} (2015), pp.~156--167.
\bibitem{bru10}
{\sc Brunner, H., Ling, L., and Yamamoto, M.}
\newblock Numerical simulation of 2{D} fractional subdiffusion problems.
\newblock {\em J. Comput. Phys. 229\/} (2010), 6613--6622.
\bibitem{can06}
{\sc Canuto, C., Hussaini, M.~Y., Quarteroni, A., and Zang, T.~A.}
\newblock {\em Spectral Methods. {F}undamentals in Single Domains}.
\newblock Springer, Berlin, 2006.
\bibitem{do5}
{\sc Caputo, M.}
\newblock Distributed order differential equations modelling dielectric
induction and diffusion.
\newblock {\em Fract. Calc. Appl. Anal. 4\/} (2001), 421--442.
\bibitem{cap03}
{\sc Caputo, M.}
\newblock Diffusion with space memory modelled with distributed order space
fractional differential equations.
\newblock {\em Ann. Geophys. 46\/} (2003), 223--234.
\bibitem{cf1}
{\sc Caputo, M., and Fabrizio, M.}
\newblock A new definition of fractional derivative without singular kernel.
\newblock {\em Progr. Fract. Differ. Appl. 1\/} (2015), 73--85.
\bibitem{cf2}
{\sc Caputo, M., and Fabrizio, M.}
\newblock Applications of new time and spatial fractional derivatives with
exponential kernels.
\newblock {\em Progr. Fract. Differ. Appl.\/} (2016).
\bibitem{car97}
{\sc Carpinteri, A., and Mainardi, F.}
\newblock {\em Fractals and Fractional Calculus in Continuum Mechanics}.
\newblock Springer, Wien, N. Y., 1997.
\bibitem{car09}
{\sc Carpinteri, A., Cornetti, P., Sapora, A., Di~Paola, M., and Zingales, M.}
\newblock Fractional calculus in solid mechanics: local versus non-local
approach.
\newblock {\em Phys. Scr. T136\/} (2009), 014003.
\bibitem{s11}
{\sc Carr, P., Geman, H., Madan, D.~B., and Yor, M.}
\newblock Stochastic volatility for {L}\'evy processes.
\newblock {\em Math. Finance 13\/} (2003), 345--382.
\bibitem{cha11}
{\sc Chakrabarty, A., and Meerschaert, M.~M.}
\newblock Tempered stable laws as random walk limits.
\newblock {\em Stat. Probab. Lett. 81\/} (2011).
\bibitem{chech}
{\sc Chechkin, A.~V., Gonchar, V.~Y., Gorenflo, R., Korabel, N., and Sokolov,
I.~M.}
\newblock Generalized fractional diffusion equations for accelerating
subdiffusion and truncated {L}\'evy flights.
\newblock {\em Phys. Rev. E 78\/} (2008), 021111.
\bibitem{do3a}
{\sc Chechkin, A.~V., Gorenflo, R., and Sokolov, I.~M.}
\newblock Retarding subdiffusion and accelerating superdiffusion governed by
distributed-order fractional diffusion equation.
\newblock {\em Phys. Rev. E 66\/} (2002), 046129.
\bibitem{do3b}
{\sc Chechkin, A.~V., Gorenflo, R., and Sokolov, I.~M.}
\newblock Fractional diffusion in inhomogeneous media.
\newblock {\em J. Phys. A 38\/} (2005), L679--L684.
\bibitem{do3}
{\sc Chechkin, A.~V., Gorenflo, R., Sokolov, I.~M., and Gonchchar, V.~Y.}
\newblock Distributed order time fractional diffusion equation.
\newblock {\em Fract. Calc. Appl. Anal. 6\/} (2003), 259--279.
\bibitem{chen06a}
{\sc Chen, W.}
\newblock Time-space fabric underlying anomalous diffusion.
\newblock {\em Chaos, Soltons, Fractals 28\/} (2006), 923--929.
\bibitem{h1}
{\sc Chen, W., and Liang, Y.}
\newblock New methodologies in fractional and fractal derivatives modeling.
\newblock {\em Chaos, Solitons Fract. 102\/} (2017), 72--77.
\bibitem{chen10}
{\sc Chen, W., Sun, H., Zhang, X., and Koro\^sak, D.}
\newblock Anomalous diffusion by fractal and fractional derivatives.
\newblock {\em Comput. Math. Appl. 59\/} (2010), 1754--1758.
\bibitem{che10}
{\sc Chen, Y., Yan, Y., and Zhang, K.}
\newblock On the local fractional derivative.
\newblock {\em J. Math. Anal. Appl. 362\/} (2010), 17--33.
\bibitem{t21}
{\sc Cioc, R.}
\newblock Physical and geometrical interpretation of {G}runwald-{L}etnikov
differintegrals: {M}easurement of path and acceleration.
\newblock {\em Fract. Calcul. Appl. Anal. 19\/} (2016), 161--172.
\bibitem{var1}
{\sc Coimbra, C. F.~M.}
\newblock Mechanics with variable-order differential operators.
\newblock {\em Ann. Phys. 12\/} (2003), 692--703.
\bibitem{com96}
{\sc Compte, A.}
\newblock Stochastic foundations of fractional dynamics.
\newblock {\em Phys. Rev. E 53\/} (1996), 4191--4193.
\bibitem{oli14}
{\sc de~Oliveira, E.~C., and Teneiro~Machado, J.~A.}
\newblock A review of definitions for fractional derivatives and integral.
\newblock {\em Math. Probl. Eng. 2014\/} (2014), 238459.
\bibitem{del13}
{\sc Delkhosh, M.}
\newblock Introduction of derivatives and integrals of fractional order and its
applications.
\newblock {\em Appl. Math. Phys. 1\/} (2013), 103--119.
\bibitem{dem15}
{\sc Demiray, S.~T., Bulut, H., and Belgacem, F.}
\newblock Sumudu transform methods for analytical solution of fractional type
ordinary differential equations.
\newblock {\em Math. Probl. Eng. 2015\/} (2015), 131690.
\bibitem{den07a}
{\sc Deng, W.}
\newblock Short memory principle and predictor-corrector approach for fractional
differential equations.
\newblock {\em J. Comp. Appl. Math. 206\/} (2007), 174--188.
\bibitem{deng12}
{\sc Deng, W., and Li, C.}
\newblock Numerical schemes for fractional ordinary differential equations.
\newblock In {\em Numerical Modelling\/} (2012), P.~Miidla, Ed., pp.~356--34.
\bibitem{den04}
{\sc Deng, W., Singh, V.~P., and Bengtsson, L.}
\newblock Numerical solution of fractional advection-dispersion equation.
\newblock {\em J. Hydraulic Eng. 130\/} (2004), 422--431.
\bibitem{den15}
{\sc Deng, W., Zhao, L.~J., and Wu, Y.~J.}
\newblock Efficient algorithm for solving the fractional ordinary differential
equations.
\newblock {\em Appl. Math. Comp. 269\/} (2015), 196--216.
\bibitem{die10}
{\sc Diethelm, K.}
\newblock {\em The Analysis of Fractional Differential Equations. An
Application-Oriented Exposition Using Differential Operators of Caputo Type}.
\newblock Springer, Berlin-Heidelberg, 2010.
\bibitem{doha11}
{\sc Doha, E.~H., Bhrawy, A.~H., and Ezz-Eldien, S.~S.}
\newblock A {C}hebyshev spectral method based on operational matrix for initial
and boundary value problems of fractional order.
\newblock {\em Comput. Math. Appl. 62\/} (2011), 2364--2373.
\bibitem{cfd4}
{\sc Du, R., Gao, G., and Sun, Z.}
\newblock A compact difference scheme for the fractional diffusion-wave
equations.
\newblock {\em Appl. Math. Model. 34\/} (2010), 2998--3007.
\bibitem{duan}
{\sc Duan, J.-S.}
\newblock Time- and space-fractional partial differential equations.
\newblock {\em J. Math. Phys. 46\/} (2005), 013504.
\bibitem{ad2}
{\sc Duan, J.-S., Rach, R., Baleanu, D., and Wazwaz, A.-M.}
\newblock A review of the {A}domian decomposition method and its applications
to fractional differential equations.
\newblock {\em Commun. Frac. Calc. 3\/} (2012), 73--99.
\bibitem{dua08}
{\sc Duan, Y.}
\newblock A note on the meshless method using radial basis functions.
\newblock {\em Comp. Math. Appl. 55\/} (2008), 66--75.
\bibitem{dzh}
{\sc Dzherbashyan, M.~M., and Nersesian, A.~D.}
\newblock Fractional derivatives and the {C}auchy problem for differential
equations of fractional order (in {R}ussian).
\newblock {\em Izv. Acad. Nauk Armjanskoy SSR, Matematika 3\/} (1968), 3--29.
\bibitem{tar17}
{\sc Tarasov, V.~E.}
\newblock Interpretation of fractional derivatives as reconstruction from
sequence of integer derivatives.
\newblock {\em Fundamenta Informat. 151\/} (2017), 431--442.
\bibitem{esm11}
{\sc Esmaeili, S., and Shamsi, M.}
\newblock A pseudo-spectral scheme for the approximate solution of a family of
fractional differential equations.
\newblock {\em Commun. Nonlinear Sci. Numer. Simulat. 16\/} (2011), 3646--3654.
\bibitem{far11}
{\sc Faraz, N., Khan, Y., Jafari, H., Yildrim, A., and Madani, M.}
\newblock Fractional variational iteration method via modified
{R}iemann-{L}iouville derivative.
\newblock {\em J. King Saud Univ. - Sci. 23\/} (2011), 413--417.
\bibitem{fer19}
{\sc Fernandez, A., \"Ozarslan, M.~A., and Baleanu, D.}
\newblock On fractional calculus with general analytic kernels.
\newblock {\em Appl. Math. Comput. 354\/} (2019), 248--265.
\bibitem{fix}
{\sc Fix, G.~J., and Roop, J.~P.}
\newblock Least squares finite-element solution of a fractional order two-point
boundary value problem.
\newblock {\em Comput. Math. Appl. 48\/} (2004), 1017--1033.
\bibitem{ford1}
{\sc Ford, N.~J., and Connolly, J.~A.}
\newblock Comparison of numerical methods for fractional differential
equations.
\newblock {\em Comm. Pure Appl. Anal. 5\/} (2006), 289--307.
\bibitem{ford}
{\sc Ford, N.~J., and Simpson, A.~C.}
\newblock The numerical solution of fractional differential equations: speed
versus accuracy.
\newblock {\em Numer. Algor. 26\/} (2001), 333--346.
\bibitem{cfd3}
{\sc Gao, G., and Sun, Z.}
\newblock A compact difference scheme for the fractional sub-diffusion
equations.
\newblock {\em J. Comput. Phys. 230\/} (2011), 586--595.
\bibitem{ger}
{\sc Gerasimov, A.~N.}
\newblock Generalization of the linear deformation laws and applications to the
problems of inner friction (in {R}ussian).
\newblock {\em Appl. Math. Mech. 12\/} (1948), 529--539.
\bibitem{gha15a}
{\sc Ghazanfari, B., and Ebrahimi, P.}
\newblock Differential transformation method for solving fuzzy fractional heat
equations.
\newblock {\em Int. J. Math. Model. Comput. 5\/} (2015), 81--89.
\bibitem{impl2}
{\sc Ghazizadeh, H.~R., Maerefat, M., and Azimi, A.}
\newblock Explicit and implicit finite difference schemes for fractional
{C}attaneo equation.
\newblock {\em J. Comput. Phys. 229\/} (2010), 7042--7057.
\bibitem{gho09}
{\sc Ghorbani, A.}
\newblock Beyond ({A})domian polynomials: {H}e polynomials.
\newblock {\em Chaos Soliton. Fract. 39\/} (2009), 1486--1492.
\bibitem{gol10}
{\sc Goloviznin, V.~M., Kondratenko, P.~S., Matveev, L.~V., Korotkin, I.~A.,
and Dranikov, I.~L.}
\newblock {\em Anomalous Diffusion of Radionuclides in Strongly Nonuniform
Geological Formations (in Russian)}.
\newblock Nauka, Moscow, 2010.
\bibitem{gom14}
{\sc G\'omez-Aguilar, J.~F., Razo-Hernandez, R., and Granados-Lieberma, D.}
\newblock A physical interpretation of fractional calculus in observables
terms: analysis of fractional time constant and the transitory response.
\newblock {\em Revista Mexicana de Fisica 60\/} (2014), 32--38.
\bibitem{gor02}
{\sc Gorenflo, R., Loutschko, J., and Luchko, Y.}
\newblock Computation of the {M}ittag-{L}effler function and its derivatives.
\newblock {\em Fract. Calcul. Appl. Anal. 5\/} (2002), 491--518.
\bibitem{gor99}
{\sc Gorenflo, R., Luchko, Y., and Mainardi, F.}
\newblock Analytical properties and applications of the {W}right function.
\newblock {\em Fract. Calcul. Appl. Anal. 2\/} (1999), 383--414.
\bibitem{gor0}
{\sc Gorenflo, R., Luchko, Y., and Mainardi, F.}
\newblock Wright functions as scale-invariant solutions of diffusion-wave
equation.
\newblock {\em J. Comput. Appl. Math. 118\/} (2000), 175--191.
\bibitem{gor97}
{\sc Gorenflo, R., Luchko, Y., and Rogosin, S.~V.}
\newblock Mittag-{L}effler type functions, notes on growth properties and
distribution of zeros.
\newblock Tech. Rep. A04-97, Freie Universit\"at Berlin, 1997.
\bibitem{gor97a}
{\sc Gorenflo, R., and Mainardi, F.}
\newblock Integral and differential equations of fractional order.
\newblock In {\em Fractals and Fractional Calculus in Continuum Mechanics\/}
(Wien, New York, 1997), A.~Carpinteri and F.~Mainardi, Eds., Springer.
\bibitem{gor03}
{\sc Gorenflo, R., and Mainardi, F.}
\newblock Fractional diffusion processes: probability distributions and
continuous time random walk.
\newblock In {\em Processes with Long Range Correlations\/} (Berlin, 2003),
G.~Rangarajan and M.~Ding, Eds., Springer, pp.~148--166.
\bibitem{gor05}
{\sc Gorenflo, R., and Mainardi, F.}
\newblock Simply and multiply scaled diffusion limits for continuous time
random walks.
\newblock {\em J. Phys. Conf. Series 7\/} (2005), 1--16.
\bibitem{gor02a}
{\sc Gorenflo, R., Mainardi, F., Moretti, D., Pagnini, G., and Paradisi, P.}
\newblock Fractional diffusion: probability distributions and random walk
models.
\newblock {\em Physica A 305\/} (2002), 106--112.
\bibitem{gor98}
{\sc Gorenflo, R., Mainardi, F., and Srivastava, H.~M.}
\newblock Special functions in fractional relaxation-oscillation and fractional
diffusion-wave phenomena.
\newblock In {\em Proc. VIII Int. Colloq. Differ. Equat.\/} (1997), D.~Bainov,
Ed., pp.~195--202.
\bibitem{gor95}
{\sc Gorenflo, R., and Rutman, R.}
\newblock On ultraslow and intermediate processes.
\newblock In {\em Transform Methods and Special Functions\/} (Singapore, 1995),
D.~Rusev, I.~Dimovski, and V.~Kiryakova, Eds., pp.~171--183.
\bibitem{gor00}
{\sc Gorenflo, R., Iskenderov, A., and Luchko, Y.}
\newblock Mapping between solutions of fractional diffusion-wave equations.
\newblock {\em Fract. Calc. Appl. Anal. 3\/} (2000), 75.
\bibitem{gri13}
{\sc Grigoletto, E.~C., and de~Oliveira, E.~C.}
\newblock Fractional versions of the fundamental theorem of calculus.
\newblock {\em Appl. Math. 4\/} (2013), 23--33.
\bibitem{guo12}
{\sc Guo, S., Mei, L., Li, Y., and Sun, Y.}
\newblock The improved fractional sub-equation method and its applications to
the space-time fractional differential equations in fluid mechanics.
\newblock {\em Phys. Lett. A 376\/} (2012), 407--411.
\bibitem{ha10}
{\sc Hanert, E.}
\newblock A comparison of three {E}ulerian numerical methods for
fractional-order transport models.
\newblock {\em Environ. Fluid Mech. 10\/} (2010), 7--20.
\bibitem{han11}
{\sc Hanert, E.}
\newblock On the numerical solution of space-time fractional diffusion models.
\newblock {\em Comput. Fluids 46\/} (2011), 33--39.
\bibitem{hau11}
{\sc Haubold, H.~J., Mathai, A.~M., and Saxena, R.~K.}
\newblock Mittag-{L}effler functions and their applications.
\newblock {\em J. Appl. Math. 2011\/} (2011), 298628.
\bibitem{y11}
{\sc He, J.-H.}
\newblock Variational iteration approach to nonlinear problems and its
applications.
\newblock {\em Int. J. Non-Linear Mech. 20\/} (1998), 30--31.
\bibitem{he1}
{\sc He, J.-H.}
\newblock Homotopy perturbation technique.
\newblock {\em Comp. Meth. Appl. Mech. 178\/} (1999), 257--262.
\bibitem{he2}
{\sc He, J.-H.}
\newblock New interpretation of homotopy perturbation method.
\newblock {\em Int. J. Modern Phys. B 20\/} (2006), 2561--2568.
\bibitem{he14}
{\sc He, J.-H.}
\newblock A tutorial review on fractal spacetime and fractional calculus.
\newblock {\em Int. J. Theor. Phys. 53\/} (2014), 3698--3718.
\bibitem{he3}
{\sc He, J.-H.}
\newblock Recent development of the homotopy perturbation method.
\newblock {\em Topolog. Meth. Nonlinear Anal. 31\/} (2016), 205--209.
\bibitem{he18}
{\sc He, J.-H.}
\newblock Fractal calculus and its geometrical explanation.
\newblock {\em Results Phys. 10\/} (2018), 272--276.
\bibitem{he12a}
{\sc He, J.-H., Elagan, S.~K., and Li, Z.~B.}
\newblock Geometrical explanation of the fractional complex transform and
derivative chain rule for fractional calculus.
\newblock {\em Phys. Lett. A 376\/} (2012), 257--259.
\bibitem{he12}
{\sc He, J.-H., and Li, Z.-B.}
\newblock Converting fractional differential equations into partial
differential equations.
\newblock {\em Thermal Sci. 16\/} (2012), 331--334.
\bibitem{he13}
{\sc He, J.~H., and Liu, F.}
\newblock Local fractional iterative method for fractal heat transfer in silk
cocoon hierarchy.
\newblock {\em Nonlinear Sci. Lett. 4\/} (2013), 15--20.
\bibitem{hey14}
{\sc Heydari, M.~H., Maalek~Ghaini, F.~M., and Hooshmandasl, M.~R.}
\newblock Legendre wavelet method for numerical solution of time-fractional
heat equation.
\newblock {\em Wavelets Lin. Algeb. 1\/} (2014), 15--24.
\bibitem{t24}
{\sc Heymans, N., and Podlubny, I.}
\newblock Physical interpretation of initial conditions for fractional
differential equations with {R}iemann-{L}iouville fractional derivatives.
\newblock {\em Rheolog. Acta 45\/} (2006), 765--772.
\bibitem{hil0}
{\sc Hilfer, R.}
\newblock Fractional diffusion based on {R}iemann-{L}iouville fractional
derivatives.
\newblock arXiv:cond-mat/0006427 [cond-mat.stat-mech], 2000.
\bibitem{hil08}
{\sc Hilfer, R.}
\newblock Threefold introduction to fractional derivatives.
\newblock In {\em Anomalous Transport. Foundations and Applications\/} (2008),
R.~Klages, G.~Radons, and I.~M. Sokolov, Eds., Wiley-VCH, pp.~17--74.
\bibitem{hil19}
{\sc Hilfer, R.}
\newblock Mathematical and physical interpretations of fractional derivatives
and integrals.
\newblock In {\em In Handbook of Fractional Calculus with Applications\/}
(Berlin, 2019), A.~Kochubei and Y.~Luchko, Eds., vol.~1, de Gruyter,
pp.~47--85.
\bibitem{chr10}
{\sc Hristov, J.}
\newblock Heat-balance integral to fractional (half-time) heat diffusion
sub-model.
\newblock {\em Thermal Sci. 14\/} (2010), 291--316.
\bibitem{chr11}
{\sc Hristov, J.}
\newblock Approximate solutions to fractional sub-diffusion equations: {T}he
heat-balance integral method.
\newblock {\em Europ. Phys. J. --- Special Topics 193\/} (2011), 229--243.
\bibitem{hr11}
{\sc Hristov, J.}
\newblock Transient flow of a generalized second grade fluid due to a constant
surface shear stress: an approximate integral-balance solution.
\newblock {\em Int. Rev. Chem. Eng. 3\/} (2011), 802--809.
\bibitem{hua08}
{\sc Huang, Q., Huang, G., and Zhan, H.}
\newblock A finite elements solution for the fractional advection-dispersion
equation.
\newblock {\em Adv. Water Res. 31\/} (2008), 1578--1589.
\bibitem{iyi16}
{\sc Iyiola, O.~S., and Nwaeze, E.~R.}
\newblock Some new results on the new conformable fractional calculus with
application using {D}'{A}lembert approach.
\newblock {\em Progr. Fract. Differ. Appl. 2\/} (2016), 1--7.
\bibitem{jaw10}
{\sc Jawad, A. J.~W., Petcovic, M.~D., and Biswas, A.}
\newblock Modified simple equation method for nonlinear evolution equations.
\newblock {\em Appl. Math. Comput. 217\/} (2010), 869--877.
\bibitem{jia11}
{\sc Jiang, Y., and Ma, J.}
\newblock High-order finite element methods for time-fractional partial
differential equations.
\newblock {\em J. Comp. Appl. Math. 235\/} (2011), 3285--3290.
\bibitem{jin15}
{\sc Jin, B., Lazarov, R., Liu, Y., and Zhou, Z.}
\newblock The {G}alerkin finite element method for a multi-term time-fractional
diffusion equation.
\newblock {\em J. Comput. Phys 281\/} (2015), 825--843.
\bibitem{jon09}
{\sc Joneidi, A.~A., Ganji, D.~D., and Babaelahi, M.}
\newblock Differential transformation method to determine the efficiency of
convective straight fins with temperature dependent thermal conductivity.
\newblock {\em Int. Comm. Heat Mass Transfer 36\/} (2009), 757--762.
\bibitem{jum}
{\sc Jumarie, G.}
\newblock Modified {R}iemann-{L}iouville derivative and fractional {T}aylor
series of nondifferentiable functions further results.
\newblock {\em Comput. Math. Appl. 51\/} (2006), 1367--1376.
\bibitem{jum07a}
{\sc Jumarie, G.}
\newblock Lagrangian mechanics of fractional order, {H}amilton-{J}acobi
fractional {P}{D}{E} and {T}aylor's series of nondifferentiable functions.
\newblock {\em Chaos Solitons Fractals 32\/} (2007), 969--987.
\bibitem{jum07}
{\sc Jumarie, G.}
\newblock The {M}inkowski's space-time is consistent with differential geometry
of fractional order.
\newblock {\em Phys. Lett. 363\/} (2007).
\bibitem{jum13}
{\sc Jumarie, G.}
\newblock The {L}eibniz rule for fractional derivatives holds with
non-differentiable functions.
\newblock {\em Math. Stat. 1\/} (2013), 50--52.
\bibitem{jum13a}
{\sc Jumarie, G.}
\newblock On the derivative chain-rules in fractional calculus via fractional
difference and their application to systems modelling.
\newblock {\em Open. Phys. 11\/} (2013), 617--633.
\bibitem{die06}
{\sc Diethelm, K., Ford, J.~M., Ford, N.~J., and Weilbeer, M.}
\newblock Pitfalls in fast solvers for fractional differential equations.
\newblock {\em J. Comp. Appl. Math. 186\/} (2006), 482--503.
\bibitem{kat14a}
{\sc Katugampola, U.~N.}
\newblock New approach to a generalized fractional derivative.
\newblock {\em Math. Anal. Appl. 6\/} (2014), 1--15.
\bibitem{kat14}
{\sc Katugampola, U.~N.}
\newblock A new fractional derivative with classical properties.
\newblock arXiv: 1410.6535 [math.CA], 2014.
\bibitem{kha14}
{\sc Khalil, R., Al~Horani, M., Yousef, A., and Sababheh, M.}
\newblock A new definition of fractional derivative.
\newblock {\em J. Comput. Appl. Math. 264\/} (2014), 65--70.
\bibitem{khan}
{\sc Khan, Y., Faraz, N., Yildirim, A., and Wu, Q.}
\newblock Fractional variational iteration method for fractional
initial-boundary value problems arising in the application of nonlinear
science.
\newblock {\em Comput. Math. Appl. 62\/} (2011), 2273--2278.
\bibitem{kil02}
{\sc Kilbas, A.~A., Saigo, M., and Trujillo, J.~J.}
\newblock On the generalized {W}right function.
\newblock {\em Fract. Calcul. Appl. Anal. 5\/} (2002), 437--460.
\bibitem{kilb06}
{\sc Kilbas, A.~A., Srivastava, H.~M., and Trujillo, J.~J.}
\newblock {\em Theory and Applications of Fractional Differential Equations}.
\newblock North Holland, 2006.
\bibitem{kir94}
{\sc Kiryakova, V.}
\newblock {\em Generalized fractional calculus and applications}.
\newblock Longman, Harlow, 1994.
\bibitem{kir1}
{\sc Kiryakova, V.}
\newblock Multiple (multiindex) {M}ittag-{L}effler functions and relations to
generalized fractional calculus.
\newblock {\em J. Comput. Appl. Math. 118\/} (2000), 241--259.
\bibitem{kir2}
{\sc Kiryakova, V.}
\newblock Multiindex {M}ittag-{L}effler functions as an important class of
special functions of fractional calculus.
\newblock {\em Comput. Math. Appl. 59\/} (2010), 1885--1895.
\bibitem{do4}
{\sc Kochubei, A.~I.}
\newblock Distributed order calculus and equations of ultraslow diffusion.
\newblock {\em J. Math. Anal. Appl. 340\/} (2008), 252--281.
\bibitem{kol13}
{\sc Kolwankar, K.~M.}
\newblock Local fractional calculus: a review.
\newblock arXiv:1307.0739, 2013.
\bibitem{kol97}
{\sc Kolwankar, K.~M., and Gangal, A.~D.}
\newblock H\"older exponents of irregular functions and local fractional
derivatives.
\newblock {\em Pramana J. Phys. 48\/} (1997), 49--68.
\bibitem{kol98}
{\sc Kolwankar, K.~M., and Gangal, A.~D.}
\newblock Local fractional derivatives and fractal functions of several
variables.
\newblock arXiv: physics/9801010 [math-ph], 1998.
\bibitem{kol98a}
{\sc Kolwankar, K.~M., and Gangal, A.~D.}
\newblock Local fractional {F}okker-{P}lank equation.
\newblock {\em Phys. Rev. Lett. 80\/} (1998), 214--217.
\bibitem{leb}
{\sc Lebedev, N.~N.}
\newblock {\em Special Functions and their Applications}.
\newblock Prentice-Hall, 1965.
\bibitem{zha1}
{\sc Lesnic, D., and Elliot, L.}
\newblock The decomposition approach to inverse heat conduction.
\newblock {\em J. Math. Anal. Appl. 232\/} (1999), 82--98.
\bibitem{let}
{\sc Letnikov, A.~V.}
\newblock Theory of differentiation of fractional order.
\newblock {\em Math. Sb. 3\/} (1868), 1--68.
\bibitem{li07}
{\sc Li, C., and Deng, W.}
\newblock Remarks on fractional derivatives.
\newblock {\em Appl. Math. Comput. 187\/} (2007), 777--784.
\bibitem{li11}
{\sc Li, C., Qian, D., and Chen, Y.~Q.}
\newblock On {R}iemann-{L}iouville and {C}aputo derivatives.
\newblock {\em Discr. Dynam. Nature Soc. 2011\/} (2011), 562494.
\bibitem{fd1}
{\sc Li, C., and Zeng, F.}
\newblock Finite difference methods for fractional differential equations.
\newblock {\em Int. J. Bifurc. Chaos 22\/} (2012), 1230014.
\bibitem{y16a}
{\sc Li, Z.-B.}
\newblock An extended fractional complex transform.
\newblock {\em J. Nonlinear Sci. Numer. Simul. 11\/} (2010), S0335--S0337.
\bibitem{y16}
{\sc Li, Z.-B., and He, J.-H.}
\newblock Fractional complex transform for fractional differential equations.
\newblock {\em Math. Comput. Appl. 15\/} (2010), 970--973.
\bibitem{li12b}
{\sc Li, Z.-B., Zhu, W.-H., and He, J.-H.}
\newblock Exact solutions of time-fractional heat conduction equation by the
extended fractional complex transform.
\newblock {\em Thermal Sci 16\/} (2012), 335--338.
\bibitem{liao}
{\sc Liao, S.~J.}
\newblock Numerically solving nonlinear problems by the homotopy analysis
method.
\newblock {\em Comp. Mech. 20\/} (1997), 530--540.
\bibitem{liu15a}
{\sc Liu, F.-J., Li, Z.-B., Zhang, S., and Liu, H.-Y.}
\newblock He's fractional derivative for heat conduction in a fractal medium
arising in silkworm coocon hierarchy.
\newblock {\em Thermal Sci. 19\/} (2015), 1155--1159.
\bibitem{lor98}
{\sc Lorenzo, C.~F., and Hartley, T.~T.}
\newblock Initialization, conceptualization, and application in the generalized
fractional calculus.
\newblock NASA/TP-1998-208415, 1998.
\bibitem{los15}
{\sc Losada, J., and Nieto, J.~J.}
\newblock Properties of a new fractionalmderivative withou singular kernel.
\newblock {\em Prog. Fract. Differ. Appl. 1\/} (2015), 87--92.
\bibitem{luch}
{\sc Luchko, Y.}
\newblock Operational rules for a mixed operator of the {E}rd\'elye-{K}ober
type.
\newblock {\em Frac. Calc. Appl. Anal. 7\/} (2004), 339--364.
\bibitem{luc09}
{\sc Luchko, Y.}
\newblock Maximum principle for the generalized time-fractional diffusion
equation.
\newblock {\em J. Math. Anal. Appl. 351\/} (2009), 218--223.
\bibitem{luc13}
{\sc Luchko, Y., and Kiryakova, V.}
\newblock The {M}ellin integral transform in fractional calculus.
\newblock {\em Fract. Calc. Appl. Anal. 16\/} (2013), 405--430.
\bibitem{m96}
{\sc Mainardi, F.}
\newblock Fractional relaxation-oscillation and fractional diffusion-wave
phenomena.
\newblock {\em Chaos Solitons Fract. 7\/} (1996), 1461--1477.
\bibitem{m97}
{\sc Mainardi, F.}
\newblock Fractional calculus: some basic problems in continuum and statistical
mechanics.
\newblock In {\em Fractals and Fractional Calculus in Continuum Mechanics\/}
(Wien and N.Y., 1997), A.~Carpinteri and F.~Mainardi, Eds., Springer,
pp.~231--248.
\bibitem{mai04}
{\sc Mainardi, F.}
\newblock Application of integral transforms in fractional diffusion processes.
\newblock {\em Integral Transf. Special Funct. 15\/} (2004), 477--484.
\bibitem{mai00}
{\sc Mainardi, F., and Gorenflo, R.}
\newblock On mittag-leffler function in fractional evaluation processes.
\newblock {\em J. Comput. Appl. Math. 118\/} (2000), 283--299.
\bibitem{mai07}
{\sc Mainardi, F., and Gorenflo, R.}
\newblock Time-fractional derivatives in relaxation processes: a tutorial
survey.
\newblock {\em Int. J. Theor. Appl. 10\/} (2007), 269--308.
\bibitem{main01}
{\sc Mainardi, F., Luchko, Y., and Pagnini, G.}
\newblock The fundamental solution of the space-time fractional diffusion
equation.
\newblock {\em Fract. Calc. Appl. Anal. 4\/} (2001), 153--192.
\bibitem{wf1}
{\sc Mainardi, F., Mura, A., and Pagnini, G.}
\newblock The {M}-{W}right function in time-fractional diffusion processes: a
tutorial survey.
\newblock {\em Int. J. Differ. Equations 2010\/} (2010), 104505.
\bibitem{main07a}
{\sc Mainardi, F., Mura, A., Pagnini, G., and Gorenflo, R.}
\newblock Time-fractional diffusion of distributed order.
\newblock arXiv: 0701132 [cond-mat.stat-mech, 2007.
\bibitem{wf}
{\sc Mainardi, F., and Pagnini, G.}
\newblock The {W}right functions as soluition of the time-fractional diffusion
equation.
\newblock {\em Appl.Math. Comput. 141\/} (2003), 51--62.
\bibitem{main07}
{\sc Mainardi, F., Pagnini, G., and Gorenflo, R.}
\newblock Some aspects of fractional diffusion equatins of single and
distributed order.
\newblock {\em Appl. Math. Comput. 187\/} (2007), 295--305.
\bibitem{hf}
{\sc Mainardi, F., Pagnini, G., and Saxena, R.~K.}
\newblock The {F}ox {H} functions in fractional diffusion.
\newblock {\em J. Comput. Appl. Math. 178\/} (2005), 321--331.
\bibitem{s44}
{\sc Meerschaert, M.~M., Zhang, Y., and Baeumer, B.}
\newblock Tempered anomalous diffusions in heterogeneous systems.
\newblock {\em Geophys. Res. Lett. 35\/} (2008), L17403--L17407.
\bibitem{met94}
{\sc Metzler, R., Gl\"ockle, W.~G., and Nonnenmacher, T.~F.}
\newblock Fractional model equation for anomalous diffusion.
\newblock {\em Physica A 211\/} (1994), 13--24.
\bibitem{met04}
{\sc Metzler, R., and Klafter, J.}
\newblock The restaurant at the end of the random walk: recent developments in
the description of anomalous transport by fractional dynamics.
\newblock {\em J. Phys. A: Math. Gen. 37\/} (2004), R161--R208.
\bibitem{mil93}
{\sc Miller, K., and Ross, B.}
\newblock {\em An introduction to the fractional calculus and fractional
differential equations}.
\newblock John Wiley and Sons, New York, 1993.
\bibitem{mom}
{\sc Momani, S., Odibat, Z., and Hashim, I.}
\newblock Algorithms for nonlonear fractional partial differentialequations: a
selection of numerical methods.
\newblock {\em Topol. Meth. Nonlin. Anal. 31\/} (2008), 211--226.
\bibitem{t20}
{\sc Moshrefi-Torbati, M., and Hammond, J.~K.}
\newblock Physical and geometrical interpretation of fractional operators.
\newblock {\em J. Franklin Institute 335\/} (1998), 1077--1086.
\bibitem{impl1}
{\sc Murio, D.~A.}
\newblock Implicit finite difference approximation for time fractional
diffusion equations.
\newblock {\em Comput. Math. Appl. 56\/} (2008), 1138--1145.
\bibitem{mus13}
{\sc Mustapha, K.~A.}
\newblock Superconvergent discontinuous {G}alerkin method for {V}olterra
integro-differential equations.
\newblock {\em Math. Comput. 82\/} (2013), 1987--2005.
\bibitem{do1}
{\sc Naber, M.}
\newblock Distributed order fractional sub-diffusion.
\newblock {\em Fractals 12\/} (2004), 23--32.
\bibitem{nakh03}
{\sc Nakhushev, A.~M.}
\newblock {\em Fractional Calculus and Applications}.
\newblock Fismatlit, Moscow, 2003.
\bibitem{nakh06}
{\sc Nakhusheva, V.~A.}
\newblock {\em Differential Euations of Mathematical Models of Nonlocal
Processes (in Russian)}.
\newblock Nauka, Moscow, 2006.
\bibitem{nie10}
{\sc Nieto, J.~J.}
\newblock Maximum principle for fractional differential equations derived from
{M}ittag-{L}effler functions.
\newblock {\em Appl. Math. Lett. 23\/} (2010), 1248--1251.
\bibitem{t22}
{\sc Nigmatullin, R.~R.}
\newblock A fractional integral and its physical interpretation.
\newblock {\em Theor. Math. Phys. 90\/} (1992), 242--251.
\bibitem{nig05}
{\sc Nigmatullin, R.~R., and Le~Mehaute, A.}
\newblock Is there geometrical/physical meaning of the fractional integral with
complex exponent.
\newblock {\em J. Non-Crystalline Solids 351\/} (2005), 2888--2899.
\bibitem{noor}
{\sc Noor, M.~A., and Mohyud-Din, S.~T.}
\newblock Variational iteration method for solving higher-order nonlinear
boundary value problems using {H}e's polynomials.
\newblock {\em Int. J. Nonlinear Sci. Numer. Simul. 9\/} (2008), 141--156.
\bibitem{odi08}
{\sc Odibat, Z.~M., and Momani, S.}
\newblock an algoritm for the numerical solution of differential equations of
fractional order.
\newblock {\em J. Appl. Math. Informatics 26\/} (2008), 15--27.
\bibitem{odi07}
{\sc Odibat, Z.~M., and Shawagfeh, N.~T.}
\newblock Generized {T}aylor's formula.
\newblock {\em Appl. Math. Comput. 186\/} (2007), 285--294.
\bibitem{old74}
{\sc Oldham, K., and Spanier, J.}
\newblock {\em The fractional calculus; theory and applications of
differentiation and integration to arbitrary order}.
\newblock Academic Press, New York, 1974.
\bibitem{ont13}
{\sc Ontololan, J., Borres, M., Patac, A., and Maglasang, G.}
\newblock Review of fractal and fractal derivatives in relation to the physics
of fractals.
\newblock {\em UV j. Res.\/} (2013), 219--228.
\bibitem{osl1}
{\sc Osler, T.~J.}
\newblock Leibniz rule for fractional derivatives generalized and application
to infinite series.
\newblock {\em SIAM J. Appl. Math 18\/} (1970), 658--674.
\bibitem{osl2}
{\sc Osler, T.~J.}
\newblock A correction to {L}eibniz rule for fractional derivatives.
\newblock {\em SIAM J. Math. Anal. 4\/} (1973), 456--459.
\bibitem{pag12}
{\sc Pagnini, G.}
\newblock Erd\'elye-{K}ober fractional diffusion.
\newblock {\em Frac. Calc. Appl. Anal. 15\/} (2012), 117--127.
\bibitem{pen10}
{\sc Peng, J., and Li, K.}
\newblock A note on property of the {M}ittag-{L}effler function.
\newblock {\em J. Math. Anal. Appl. 370\/} (2010), 635--638.
\bibitem{rb2}
{\sc Piret, C., and Hanert, E.}
\newblock A radial basis function method for fractional diffusion.
\newblock {\em J. Comput. Phys 238\/} (2013), 71--78.
\bibitem{pod98}
{\sc Podlubny, I.}
\newblock {\em Fractional Differential Equations}.
\newblock Academic Press, 1998.
\bibitem{pod00}
{\sc Podlubny, I.}
\newblock Matrix approach to discrete fractional calaculus.
\newblock {\em Frac. Calc. Appl. Anal. 3\/} (2000), 359--386.
\bibitem{pod02}
{\sc Podlubny, I.}
\newblock Geometric and physical interpretation of fractional integration and
fractional differentiation.
\newblock {\em Frac. Calc. Appl. Anal. 5\/} (2002), 36--386.
\bibitem{pod09}
{\sc Podlubny, I., Chechkin, A., Chen, Y.~Q., and Vinagre, B. M.~J.}
\newblock Matrix approach to discrete fractional calaculus {I}{I}: {P}artial
differential equations.
\newblock {\em J. Comput. Phys. 228\/} (2009), 3137--3153.
\bibitem{pod07}
{\sc Podlubny, I., Despotovic, V., Skovranek, T., and McNaughton, B.~H.}
\newblock Shadows on the walls: Geometric interpretation of fractional
integration.
\newblock {\em The J. Online Math. Appl. 7\/} (2007), 1664.
\bibitem{pra71}
{\sc Prabhakar, T.~R.}
\newblock A singular integral equation with a generalized {M}ittag-{L}effler
function in the kernel.
\newblock {\em Yokohama Math. 19\/} (1971), 7--15.
\bibitem{lwf}
{\sc Prieto, A.~I., de~Romero, S.~S., and Srivastava, H.~M.}
\newblock Some fractional-calculus results involving the generalized
{L}ommel-{W}right and related functions.
\newblock {\em Appl. Math. Lett. 20\/} (2007), 17--22.
\bibitem{qi13}
{\sc Qi, H.~T., Xu, H.~Y., and Guo, X.~W.}
\newblock The generalized {T}aylor's formula.
\newblock {\em Appl. Math. Comput. 186\/} (2007), 286--293.
\bibitem{rab97}
{\sc Rabotnov, Y.~N.}
\newblock {\em Elements of Hereditary Mechanics of Solids (in Russian)}.
\newblock Nauka, Moscow, 1977.
\bibitem{raf12}
{\sc Raftari, B., and Vajravelu, K.}
\newblock Homotopy analysis method for {M}{H}{D} viscoelastic fluid flow and
heat transfer in a channel with a stretching wall.
\newblock {\em Comm. Nonlin. Sci. Numer. Simul. 17\/} (2012), 4149--4162.
\bibitem{rah10}
{\sc Rahimy, M.}
\newblock Applications of fractional diferential equations.
\newblock {\em Appl. Math. Sci. 4\/} (2010), 2453--2461.
\bibitem{fem2}
{\sc Roop, J.~P.}
\newblock Computational aspects of {F}{E}{M} approximation of fractional
advection diffusion equations on bounded domains in $\mathbb{R}^2$.
\newblock {\em J. Comput. appl. Math. 193\/} (2006), 243--268.
\bibitem{ross97}
{\sc Ross, B.}
\newblock The development of fractional calculus 1695-1990.
\newblock {\em Historia Math. 4\/} (1977), 75--89.
\bibitem{t23}
{\sc Rutman, R.~S.}
\newblock On physical interpretations of fractional integration and
differentiation.
\newblock {\em Theor. Math. Phys. 105\/} (1995), 1509--1519.
\bibitem{temp}
{\sc Sabzikara, F., Meerschaerta, M.~M., and Chen, J.}
\newblock Tempered fractional calculus.
\newblock {\em J. Comp. Phys. 293\/} (2015), 14--28.
\bibitem{sai97}
{\sc Saichev, A., and Zaslavsky, G.}
\newblock Fractional kinetic equations: solutions and applications.
\newblock {\em Chaos 7\/} (1997), 753--764.
\bibitem{var2}
{\sc Samko, A.~G.}
\newblock Fractional integration and differentiatin of variable order.
\newblock {\em Anal. Math. 21\/} (1995), 213--236.
\bibitem{sam87}
{\sc Samko, S.~G., Kilbas, A.~A., and Marichev, O.~I.}
\newblock {\em Fractional integrals and derivatives. Theory and applications}.
\newblock Gordon and Breach Science Publishers, 1993.
\bibitem{fd2}
{\sc Scherer, R., Kalla, S.~L., Tang, Y., and Huang, J.}
\newblock The {G}r\"unwald-{L}etnikov method for fractional differential
equations.
\newblock {\em Comput. Math. Appl. 62\/} (2011), 902--917.
\bibitem{sch89}
{\sc Schneider, W.~R., and Wyss, W.}
\newblock Fractional diffusion and wave equations.
\newblock {\em J. Math. Phys. 30\/} (1989), 134--144.
\bibitem{shi12}
{\sc Shirzadi, A., Ling, L., and Abbasbandy, S.}
\newblock Meshless simulations of the two-dimensional fractional-time
convection-diffusion-reaction equations.
\newblock {\em Eng. Anal. Boundar. Elem. 36\/} (2012), 1522--1527.
\bibitem{po2}
{\sc Shubin, M.~A.}
\newblock {\em Pseudo Differential Operators and Spectral Theory}.
\newblock Springer, Berlin, 1987.
\bibitem{shu07}
{\sc Shukla, A.~K., and Prajapati, J.~C.}
\newblock On a generalization of {M}ittag-{L}efler function and its
application.
\newblock {\em J. Math. Anal. Appl. 336\/} (2007), 797--811.
\bibitem{sie15}
{\sc Sierociuk, D., Skovranek, T., Macias, M., Podlubny, I., Petra, I.,
Dzielinski, A., and Ziubinsky, P.}
\newblock Diffusion process modeling by using fractional-order models.
\newblock {\em Appl. Math. Comp. 257\/} (2015), 2--11.
\bibitem{sin11a}
{\sc Singh, J., Gupta, P.~K., and Rai, K.~N.}
\newblock Homotopy perturbation method to space-time fractional solidification
in a finite slab.
\newblock {\em Appl. Math. Model. 35\/} (2011), 1937--1945.
\bibitem{sin06b}
{\sc Singh, S.~J., and Chatterjee, A.}
\newblock Galerkin projections and finite elements for fractional order
derivatives.
\newblock {\em Nonlinear Dyn. 45\/} (2006), 183--206.
\bibitem{sok02}
{\sc Sokolov, I.~M., Klafter, J., and Blumen, A.}
\newblock Fractional kinetics.
\newblock {\em Phys. Todsay\/} (2002), 48--54.
\bibitem{t13}
{\sc Stanislavsky, A.~A.}
\newblock Probabilistic interpretation of the integral of fractional order.
\newblock {\em Theor. Math. Phys. 138\/} (2004), 418--431.
\bibitem{su13}
{\sc Su, W.-H., Yang, X.-J., Jafan, H., and Baleanu, D.}
\newblock Fractional complex transform method for wave equations on {C}antor
sets within local fractional differential operator.
\newblock {\em Adv. Diff. Equat. 2013\/} (2013), 97.
\bibitem{sun17}
{\sc Sun, X.~G., Hao, X., Zhang, Y., and Baleanu, D.}
\newblock Relaxation and diffusion models with non-singular kernels.
\newblock {\em Phys. A Stat. Mech. Appl. 468\/} (2017), 590--596.
\bibitem{cn1}
{\sc Tadjeran, C., and Meerschaert, M.~M.}
\newblock A second-order accrate numerical method for the two-dimensional
fractional diffusion equation.
\newblock {\em J. Comput. Phys 220\/} (2007), 813--823.
\bibitem{cn2}
{\sc Tadjeran, C., Meerschaert, M.~M., and Scheeffler, H.-P.}
\newblock A second-order accrate numerical approximation for the fractional
diffusion equation.
\newblock {\em J. Comput. Phys 213\/} (2006), 205--213.
\bibitem{tar18}
{\sc Tarasov, V.}
\newblock No nonlocality, no farctional derivative.
\newblock {\em Nonlinear Sci. Numer. Simulat. 62\/} (2018), 157--163.
\bibitem{tar11}
{\sc Tarasov, V.~E.}
\newblock {\em Fractional Dynamics: Applications of Fractional Calculus to
Dynamics of Particles, Fields and Media (Nonlinear Physical Science)}.
\newblock Springer, 2011.
\bibitem{tar1}
{\sc Tarasov, V.~E.}
\newblock No violation of the {L}ebniz rule.
\newblock {\em Commun. Nonlinear Sci. Numer. Simul. 18\/} (2013), 2945--2948.
\bibitem{tar16}
{\sc Tarasov, V.~E.}
\newblock Heat transfer in fractal materials.
\newblock {\em Int. J. Heat Mass Transfer 93\/} (2016), 427--430.
\bibitem{tar2}
{\sc Tarasov, V.~E.}
\newblock Leibniz rule and fractional derivative of power functions.
\newblock {\em J. Comput. Nonlinear. Dynam. 11\/} (2016), 031014.
\bibitem{tar16a}
{\sc Tarasov, V.~E.}
\newblock On chain rule for fractional derivatives.
\newblock {\em Comm. Nonlin. Sci. Num. Simul. 30\/} (2016), 1--4.
\bibitem{tat17}
{\sc Tateishi, A.~A., Ribeiro, H.~V., and Lenzi, E.~K.}
\newblock The role of fractional time-derivative operators on anomalous
diffusion.
\newblock {\em Front. Phys. 5\/} (2017), 52.
\bibitem{po1}
{\sc Taylor, M.~E.}
\newblock {\em Pseudo Differential Operators}.
\newblock Princeton University Press, Princeton, 1981.
\bibitem{t15}
{\sc Teneiro~Machado, J.~A.}
\newblock Fractional derivatives: Probability interpretation and frequency
response of rational approximations.
\newblock {\em Comm. Nonlin. Sci. Num. Sim. 14\/} (2009), 3492--3497.
\bibitem{t14a}
{\sc Teneiro~Machado, J.~A.}
\newblock A probabilistic interpretation of the fractional-order
differentiation.
\newblock {\em Fract. Calcul. Appl. Anal. 6\/} (2009), 73--80.
\bibitem{kir10}
{\sc Teneiro~Machado, J.~A., Kiryakova, V., and Mainardi, F.}
\newblock A poster about the recent history of fractional calculis.
\newblock {\em Fract. Calculus Appl. Anal. 13\/} (2010), 329--334.
\bibitem{kir14}
{\sc Teneiro~Machado, J.~A., Kiryakova, V., and Mainardi, F.}
\newblock Recent history of fractional calculis.
\newblock {\em Commun. Nonlin. Sci. Numer. Simul. 16\/} (2011), 1140--1153.
\bibitem{kir11}
{\sc Valerio, D., Machado, J.~T., and Kiryakova, V.}
\newblock Some pioneers of the applicatoons of fractional calculus.
\newblock {\em Fract. Calculus Appl. Anal. 17\/} (2011), 552--578.
\bibitem{vd}
{\sc Van~Dyke, M.}
\newblock {\em Perturbation Methods in Fluid Mechanics}.
\newblock Parabolic Press, Standford, 1975.
\bibitem{ver14}
{\sc Vermeersch, B., and Shakouri, A.}
\newblock Spatiotemporal flux memory to nondiffusive transport.
\newblock arXiv: 1412.8571 [cond-mat.stat-mech], 2014.
\bibitem{volt}
{\sc Volterra, V.}
\newblock {\em Theory of Functional and of Integral and Integro-Differential
Equations}.
\newblock Blackie and Son Ltd., London and Glasgow, 1930.
\bibitem{cfd1}
{\sc Vong, S., Lyu, P., Chen, X., and Lei, S, L.}
\newblock High order finite difference method for time-space fractional
differential equations with {C}aputo and {R}iemann-{L}iouville derivatives.
\newblock {\em Numer. Algor. 72\/} (2016), 195--210.
\bibitem{cfd2}
{\sc Vong, S., and Wang, Z.}
\newblock A high order compact scheme for the fractional {F}okker-{P}lanck
equation.
\newblock {\em Appl. Math. Lett. 43\/} (2015), 38--43.
\bibitem{wat02a}
{\sc Watugala, G.~K.}
\newblock The {S}umudu transform for functions of two variables.
\newblock {\em Math. Eng. Industr. 8\/} (2002), 293--302.
\bibitem{waz10}
{\sc Wazwaz, A.-M., and Mehanna, M.~S.}
\newblock The combined {L}aplace-{A}domian method for handling singular
integral equation of heat transfer.
\newblock {\em Int. J. Nonlinear Sci. 10\/} (2010), 248--252.
\bibitem{wei17}
{\sc Wei, C., and Wang, H.}
\newblock Solutions of the heat-conduction model described by fractional
{E}mden-{F}owler type equation.
\newblock {\em Thermal Sci. 21\/} (2017), S113--S120.
\bibitem{rb1}
{\sc Wei, S., Chen, W., and Hon, Y.-C.}
\newblock Implicit local radial basis function method for solving
two-dimensional time fractional diffusion equations.
\newblock {\em Thermal Sci. 19\/} (2015), S59--S67.
\bibitem{wes97}
{\sc West, B.~J., Grigolini, P., Metzler, R., and Nonnenmacher, T.~F.}
\newblock Fractional diffusion and {L}\'evy stable processes.
\newblock {\em Phys. Rev. E 55\/} (1997), 99--106.
\bibitem{zha9}
{\sc Wu, G.~C.}
\newblock Applications of the variational iteration method to fractional
diffusion equation: local versus nonlocal ones.
\newblock {\em Int. Rev. Chem. Eng. 4\/} (2012), 505--510.
\bibitem{wu10}
{\sc Wu, G.~C., and Lee, E. W.~M.}
\newblock Fractional variational iteration method and its applications.
\newblock {\em Phys. Lett. A 374\/} (2010), 2506--2509.
\bibitem{yan13c}
{\sc Yan, L.-M.}
\newblock Modified homotopy perturbation method coupled with {L}aplace
transform for fractional heat transfer and porous media equations.
\newblock {\em Thermal Sci. 17\/} (2013), 1409--1414.
\bibitem{yan13}
{\sc Yang, A.~M., Cattani, C., Jafari, H., and Yang, X.~J.}
\newblock Analytical solutions of the one-dimensional heat equations arising in
fractal transient conduction with local fractional derivatives.
\newblock {\em Abstract Appl. Anal. 2013\/} (2013), 462535.
\bibitem{yan14}
{\sc Yang, A.-M., Zhang, C., Jafari, H., Cattani, C., and Jiao, Y.}
\newblock Picard successsive approximation method for solving differential
equations in fractal heat transfer with local fractional derivative.
\newblock {\em Abstr. Appl. Anal. 2014\/} (2014), 395710.
\bibitem{zha6}
{\sc Yang, X.~J.}
\newblock {\em Advanced Local Fractional Calculus and its Applications}.
\newblock World Science Publisher, N. Y., 2012.
\bibitem{yang12c}
{\sc Yang, X.-J.}
\newblock Generalized local fractional {T}aylor's formula with local farctional
derivative.
\newblock {\em J. Expert Systems 1\/} (2012), 1--5.
\bibitem{yang12b}
{\sc Yang, X.~J.}
\newblock Picard's approximation method for solving a class of local fractional
{V}olterra integral equations.
\newblock {\em Adv. Intell. Transport. Syst. 1\/} (2012), 67--70.
\bibitem{yan13b}
{\sc Yang, X.~J., and Baleanu, D.}
\newblock Fractal heat conduction problem solved by local fractional variation
method.
\newblock {\em Thermal Sci. 17\/} (2013), 625--628.
\bibitem{yang}
{\sc Yang, X.-J., Baleanu, D., and He, J.-H.}
\newblock Transport equations in fractal porous media within fractional complex
transform method.
\newblock {\em Proc. Romanian Acad. 14\/} (2013), 287--292.
\bibitem{yang15}
{\sc Yang, X.-J., Baleanu, D., and Srivastava, H.~M.}
\newblock {\em Local Fractional Integral Transforms and their Applications}.
\newblock Academic Press, 2015.
\bibitem{yang15a}
{\sc Yang, X.-J., Srivastava, H.~M., and Cattani, C.}
\newblock Local fractional homotopy perturbation method for solving fractal
partial differential equations arising in mathematical physics.
\newblock {\em Rom. Rep. Phys. 67\/} (2015), 752--761.
\bibitem{yang12a}
{\sc Yang, X.-J., and Zhang, F.~R.}
\newblock Local fractional variational iteration method and its algorithms.
\newblock {\em Adv. Comput. Math. Appl. 1\/} (2012), 139--145.
\bibitem{yang16}
{\sc Yang, X.-J., Zhang, Z.-Z., Teneiro~Machado, J.~A., and Baleanu, D.}
\newblock On local fractional operators view of computational complexity.
{D}iffusion and relaxation defined on {C}antor sets.
\newblock {\em Thermal Sci. 20\/} (2016), S755--S767.
\bibitem{yang13}
{\sc Yang, Y.-J., Baleanu, D., and Yang, X.-J.}
\newblock Analysis of fractal wave equations by local fractional {F}ourier
series method.
\newblock {\em Adv. Math. Phys. 2013\/} (2013), 632309.
\bibitem{yil10}
{\sc Yildrim, A., and S., M.}
\newblock Series solutions of a fractional oscillator by means of the homotopy
perturbation method.
\newblock {\em Int. J. Comp. Math. 87\/} (2010), 1072--1082.
\bibitem{you14}
{\sc Younis, M.}
\newblock A new approach for the exact solution of nonlinear equations of
fractional order via modified simple equation method.
\newblock {\em Appl. Math. 5\/} (2014), 1927--1932.
\bibitem{do2}
{\sc Zaslavsky, G.~M.}
\newblock Chaos, fractional kinetics and anomalous transport.
\newblock {\em Phys. Reports 371\/} (2002), 461--580.
\bibitem{zas08}
{\sc Zaslavsky, G.~M.}
\newblock {\em Hamiltonian Chaos and Fractional Dynamics}.
\newblock OUP, 2008.
\bibitem{zay14}
{\sc Zayernouri, M., and Karniadakis, G.~M.}
\newblock Exponentially accurate spectral and spectral element methods for
fractional {O}{D}{E}s.
\newblock {\em J. Comput. Phys. 257\/} (2014), 460--480.
\bibitem{zeng15}
{\sc Zeng, F., Li, C., Liu, F., and Turner, I.}
\newblock Numerical algorithms for time-fractional subdiffusion equations with
second-order accuracy.
\newblock {\em SIAM J. SCi. Comp. 37\/} (2015), A55--A78.
\bibitem{zha16a}
{\sc Zhai, S., and Feng, X.}
\newblock Investigations on several compact {A}{D}{I} methods for the 2{D} time
fractional diffusion equation.
\newblock {\em Numer. Heat Transfer Fundam. 69\/} (2016), 364--376.
\bibitem{zha19b}
{\sc Zhai, S., Weng, Z., Feng, X., and Yuan, J.}
\newblock Investigations on several high-order adi methods for time-space
fractional diffusion equation.
\newblock {\em Numer. Algor. 82\/} (2019), 69--106.
\bibitem{zha06}
{\sc Zhang, G., and Li, B.}
\newblock Thermal conductivity of nanotubes revisited: {E}ffects of chirality,
isotope impurity, tube length, and temperature.
\newblock arXiv:cond-mat/0501194[cond-mat.mtrl-sci], 2006.
\bibitem{y19}
{\sc Zhang, S., and Zhang, H.~Q.}
\newblock Fractional sub-equation method and its applications to nonlinear
fractional {P}{D}{E}.
\newblock {\em Phys. Lett. A 375\/} (2011), 1069--1073.
\bibitem{zha14a}
{\sc Zhang, Y.}
\newblock Solving initial-boundary value problems for local fractional
differential equation by local fractional {F}ourier series method.
\newblock {\em Abstr. Appl. Anal. 2014\/} (2014), 912464.
\bibitem{zha15b}
{\sc Zhang, Y., Cattani, C., and Yang, X.-J.}
\newblock Local fractional homotopy perturbation method for solving
non-homogeneous heat conduction equations in fractal domains.
\newblock {\em Entropy 17\/} (2015), 6753--6764.
\bibitem{s70}
{\sc Zhang, Y., Meerschaert, M.~M., and Packman, A.~I.}
\newblock Linking fluvial bed sediment transport across scales.
\newblock {\em Geophys. Res. Lett. 39\/} (2012), L20404--L20406.
\bibitem{zha19a}
{\sc Zhao, D., and Luo, M.}
\newblock Representations of acting processes and memory effects: general
fractional derivative and its application to theory of heat conduction with
finite wave speeds.
\newblock {\em Appl. Math. Comput. 346\/} (2019), 531--544.
\bibitem{zhao17}
{\sc Zhao, D., Singh, J., Kumar, D., Rathore, S., and Yang, X.-J.}
\newblock An effucient computational technique for local fractional heat
conduction equations in fractal media.
\newblock {\em J. Nonlin. Sci. Appl. 10\/} (2017), 1478--1486.
\bibitem{zha16}
{\sc Zhao, Y., Cai, Y.-G., and Yang, X.-J.}
\newblock A local fractional derivative with applications to fractal relaxation
and diffusion phenomena.
\newblock {\em Thermal Sci. 20\/} (2016), S723--S727.
\bibitem{fem1}
{\sc Zhu, P., and Xie, S.}
\newblock A{D}{I} finite element method for 2{D} nonlinear time fractional
reaction-subdiffusion equation.
\newblock {\em Am. J. Comput. Math. 6\/} (2016), 336--356.
\end{thebibliography}
\end{document}
\begin{document}
\title{Optical implementations, oracle equivalence, and the
Bernstein-Vazirani algorithm}
\author{Arvind}
\email{[email protected]}
\affiliation{Department of Physics,
Indian Institute of Technology Madras,
Chennai 600036}
\altaffiliation[Also at:]{
Department of Physics, Guru Nanak Dev
University, Amritsar 143005}
\author{Gurpreet Kaur}
\affiliation{Department of Physics,
Indian Institute of Technology Madras,
Chennai 600036}
\author{Geetu Narang}
\affiliation{Department of Physics, Guru Nanak Dev University,
Amritsar 143005}
\pacs{03.67.Lx,42.25.Ja,42.25.Hz}
\begin{abstract}
We describe a new implementation of the Bernstein-Vazirani
algorithm which relies on the fact that the polarization states
of classical light beams can be cloned. We explore the
possibility of computing with waves and discuss a classical
optical model capable of implementing any algorithm (on $n$
qubits) that does not involve entanglement. The
Bernstein-Vazirani algorithm (with a suitably modified oracle),
wherein a hidden $n$ bit vector is discovered by one oracle query
as against $n$ oracle queries required classically, belongs to
this category. In our scheme, the modified oracle is also
capable of computing $f(x)$ for a given $x$, which is not
possible with earlier versions used in recent NMR and optics
implementations of the algorithm.
\end{abstract}
\maketitle
\section{Introduction}
\label{introduction}
Quantum mechanical systems have a large in-built information
processing ability and can hence be used to perform
computations~\cite{divin-sc-95,benn-pt-95,benn-nat}.
The basic unit of quantum information is the \underline{qu}antum
\underline{bit} (qubit), which can be visualized as a quantum
two-level system. The implementation of quantum logic gates is
based on reversible logic and the fact that the two states of a
qubit can be mapped onto logical 0 and
1~\cite{bar-pr-95,divin-pr-95,divin-roy-98}. The
quantum mechanical realization of logical operations can be used
to achieve a computing power far beyond that of any classical
computer~\cite{feyn-the-82,benioff,deu-roy-85,deu-roy-89}.
A few quantum algorithms have been designed and experimentally
implemented, that perform certain computational tasks
exponentially faster than their classical counterparts. While the
Deutsch-Jozsa (DJ) algorithm~\cite{deu-roy-92} and Shor's
quantum factoring algorithm~\cite{shor-siam-97,ekert-revmod-96}
lead to an exponential speedup, Grover's rapid search
algorithm~\cite{grover-prl-97} and the
Bernstein-Vazirani~\cite{BV-97} algorithm are examples where a
substantial (though non-exponential) computational advantage is
achieved.
The exponential gain in computational speed achieved by quantum
algorithms is intimately related to
entanglement~\cite{MHorodecki01}, and it turns out that when
there is no entanglement (or the amount of entanglement is
limited) in a pure state, the dynamics of a quantum algorithm can be
simulated efficiently via classically deterministic or classical
random means~\cite{jozsabook,nielsen,JL02}. However in
algorithms that do not lead to an exponential gain in speed or in
those that use mixed states, the possibilities of achieving
speedup without entanglement still exist~\cite{tal-mor-qph}.
Classical waves share certain properties with quantum systems. For
example, the polarization states of a beam of light can act as
qubits. It is to be noted that the superposition of classical
waves does not lead to entanglement. For $n$ beams of light, with
their polarization states providing us with $n$ qubits, we can
only implement $U(2)\otimes U(2)\otimes \cdots \otimes U(2)$ transformations
via optical elements~\cite{simon-pla-90} and cannot in general
implement $U(2^n)$ transformations. Therefore, although
superposition and interference are present and can be utilized,
their scope is limited compared to what could be achieved with
$n$ qubits that are genuinely quantum in character. It is
nevertheless interesting to ask whether classical waves can perform
any useful computation that exploits their superposition and
interference. It turns out that, if an
algorithm based on qubits does not involve entanglement at any
stage of its implementation, it can be realized using this
classical model. The Deutsch-Jozsa algorithm for one and two
qubits and the Bernstein-Vazirani algorithm for any number of
qubits can be recast in this form with a suitable modification
of the oracle~\cite{Meyer00soph,BV-97,TerhalSmolin98}.
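The restriction to product transformations noted above can be made concrete numerically. The sketch below is our own illustration (the helper \texttt{operator\_schmidt\_rank} is ours, not taken from the references): every $U(2)\otimes U(2)$ operator has operator-Schmidt rank one, whereas an entangling gate such as CNOT has rank two and therefore cannot be realized by passive optics acting independently on two beams.

```python
import numpy as np

def random_unitary2(rng):
    # A random 2x2 unitary from a random complex matrix via QR.
    z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the column phases

def operator_schmidt_rank(U, tol=1e-10):
    # Reshape a 4x4 operator on C^2 (x) C^2 so that its singular values
    # are the operator-Schmidt coefficients; count the nonzero ones.
    M = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol))

rng = np.random.default_rng(0)
prod = np.kron(random_unitary2(rng), random_unitary2(rng))
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(operator_schmidt_rank(prod))  # 1: a product operator
print(operator_schmidt_rank(cnot))  # 2: entangling, out of reach of U(2)xU(2)
```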
This modification of the algorithm has been central to the
implementation of the Deutsch-Jozsa algorithm up to two
qubits~\cite{collins-pra-98,arvind-pramana,kavita-pramana} and the Bernstein-Vazirani
algorithm on any number of qubits using NMR~\cite{djnmr} as well
as optics~\cite{djoptics} and superconducting
nanocircuits~\cite{bvsuper}.
In this paper, we propose a model based on classical light
beams in which the $n$-qubit eigenstates are mapped onto
polarization states of these beams, and non-entangling unitary
operators are implemented using passive optics. An added
feature in this model is that cloning of states is possible
as we are working entirely within the domain of classical
optics. It turns out that this possibility of cloning along
with interference leads to interesting results for certain
algorithms.
We discuss an entirely new scheme for implementing the
Bernstein-Vazirani algorithm. Instead of Hadamard
transformations, we use cloning and re-interference to discover a
hidden $n$-bit binary vector $a$, using only a single oracle
call. The non-entangling nature of the modified oracle for the
Bernstein-Vazirani algorithm is central to this implementation as
it is for the earlier implementations~\cite{djnmr,djoptics}.
However this scheme differs from the earlier schemes in two ways:
(a) instead of Hadamard transformations we use the cloning of
classical beams via beam splitters, and (b) we are able to
operate the modified oracle in the `classical' mode as well,
wherein we are able to obtain $f(x)$ for a given $x$.
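As a point of comparison for the cloning-based scheme, the standard qubit version of the algorithm is easily simulated on a statevector. The following sketch is ours, for illustration only: a hidden vector $a$ is recovered after a single call to the phase oracle $\vert x\rangle \to (-1)^{a\cdot x}\vert x\rangle$, sandwiched between two layers of Hadamards.

```python
import numpy as np

def bernstein_vazirani(a_bits):
    """Statevector simulation of the Hadamard-based BV algorithm:
    recover the hidden bit vector a with one phase-oracle query."""
    n = len(a_bits)
    a = int("".join(map(str, a_bits)), 2)
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H = H1
    for _ in range(n - 1):
        H = np.kron(H, H1)
    psi = H[:, 0].copy()            # H^{(n)} |0...0>: uniform superposition
    for x in range(2 ** n):         # the single oracle call, applied in parallel
        psi[x] *= (-1) ** (bin(x & a).count("1") % 2)
    psi = H @ psi                   # final Hadamards map the state onto |a>
    return [int(b) for b in format(int(np.argmax(np.abs(psi))), f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))  # [1, 0, 1, 1]
```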
The material in this paper is arranged as follows: the optical
model based on polarization states is described in
Section~\ref{optics}. Section~\ref{BV-algo} begins with the
description of the original Bernstein-Vazirani algorithm and
later discusses the modified oracle and its implications. The
new optical scheme is described in Section~\ref{new-algo} and
Section~\ref{conclusion} has some concluding remarks.
\section{Optical implementation based on polarization}
\label{optics}
Consider a classical system consisting of a monochromatic
light beam propagating in a given direction with a pure
polarization. The polarization states of such a beam are in
one-to-one correspondence with the states of a two-level
quantum system and the beam can therefore be visualized as a qubit.
The unitary transformations that transform one polarization state
to another can be easily performed.
Consider a
birefringent plate with its thickness adjusted to introduce
a phase difference of $\eta\/$ between the $x\/$ and $y\/$
components of the electric field, with its slow axis making
an angle $\phi\/$ with the $x\/$ axis. The unitary operator
corresponding to this plate is given by
\begin{equation}
U(\eta,\phi) \!=\!
\left[\begin{array}{cc} \cos\phi & -\sin \phi\\
\sin\phi & \cos \phi\end{array}
\right]\!\!
\left[\begin{array}{lr} e^{i\eta/2} & \!\!\!\!\!\!0\\ 0 &
\!\!\!\!\!\!e^{-i \eta/2}
\end{array} \right]\!\!
\left[\begin{array}{cc} \cos\phi & \sin \phi\\
-\sin\phi & \cos \phi\end{array} \right]
\label{biref}
\end{equation}
For $\eta=\pi\/$, it becomes a half-wave plate
(denoted by $H_{\phi}\/$), while for $\eta=\pi/2\/$ it becomes
a quarter-wave plate (denoted by $Q_{\phi}\/$).
It has been shown that all $U(2)\/$ transformations can be
realized on the polarization states by taking two quarter-wave
plates and one half-wave plate with suitable choices of angles of
their slow axes with the $x\/$ axis. We will henceforth refer
to this device, capable of implementing $SU(2)\/$
transformations, as ``Q-H-Q'' (a detailed discussion is found
in~\cite{simon-pla-90}). Combining this with an overall trivial
phase transformation, we can implement the complete set of $U(2)$
transformations.
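As a quick numerical sanity check of Equation~(\ref{biref}) (an illustrative sketch, not part of the optical setup), one can build $U(\eta,\phi)$ directly and verify that it is unitary and that $\eta=\pi$, $\phi=0$ reproduces $\sigma_z$ up to a global phase:

```python
import numpy as np

def waveplate(eta, phi):
    """Jones matrix of a birefringent plate with retardation eta and
    slow axis at angle phi, following Eq. (1) of the text."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    P = np.diag([np.exp(1j * eta / 2), np.exp(-1j * eta / 2)])
    return R @ P @ R.T          # R(phi) diag(e^{i eta/2}, e^{-i eta/2}) R(-phi)

# Half-wave plate H_0 (eta = pi) and quarter-wave plate Q_0 (eta = pi/2)
H0 = waveplate(np.pi, 0.0)
Q0 = waveplate(np.pi / 2, 0.0)

# Both are unitary; H_0 equals sigma_z up to the global phase i
assert np.allclose(H0 @ H0.conj().T, np.eye(2))
assert np.allclose(Q0 @ Q0.conj().T, np.eye(2))
assert np.allclose(H0, 1j * np.diag([1.0, -1.0]))
```
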
\unitlength=0.6mm
\thicklines
\begin{figure}
\caption{The action of the Q-H-Q device on the polarization state `$\vert
x\rangle$' of a single beam, taking it to the state $U$`$\vert
x\rangle$', where $U$ can be an arbitrary $SU(2)$
transformation.}
\label{QHQ}
\end{figure}
Further, let us map the $x\/$ polarization state to logical
$1\/$ and the $y\/$ polarization state to logical $0\/$.
With this mapping, we proceed to work with this system as a
qubit. Since this system consists essentially of classical
elements, we call it a ``classical qubit''. We will use a
notation where we specify the polarization state as `$\vert
x\rangle$' (i.e.\ a ket vector within quotation marks)
throughout this paper, where $x$ can take values $0$ or $1$.
Multiple beams of this type can be considered and on each
one of them arbitrary $U(2)$ transformations can be
performed. All the computational basis states are mapped
to appropriate polarization states using the above mapping.
It is to be noted that we cannot obtain any entangled states
here because the transformations available are
$U(2)\otimes U(2)\cdots \otimes U(2)$.
\begin{figure}
\caption{The action of a beam splitter with transmission
coefficient $t$ and reflection coefficient $r$ on a classical light
beam with polarization state given by `$\vert x\rangle$'.
The same polarization is being sent into both ports of
the beam splitter and no polarization change occurs during the
whole process. For instance, if the beam in one of the ports is missing
and we use a 50-50 beam splitter, the beam splitter
generates two identical beams which are clones of the
original input beam, each with half the intensity.}
\label{BS-action}
\end{figure}
A beam splitter can be used to `split' a beam and also to
interfere beams with the same polarization. The transformation
matrix of this operation on the amplitudes is given by
\begin{equation}
\mbox{BS} = \left(\begin{array}{cc}
\sqrt{t} & -\sqrt{r}\\
\sqrt{r} & \sqrt{t}
\end{array}\right)
\end{equation}
where $t$ and $r$ are the transmission and reflection coefficients
respectively, with $t+r=1$ for a lossless beam splitter.
This matrix acts on the amplitudes of the two beams
entering the two ports of the beam splitter and not on the
polarization states. Polarization states do not undergo
any transformation
under the action of the beam splitter.
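The beam-splitter matrix can likewise be checked numerically; the sketch below (illustrative, assuming a lossless splitter with $t+r=1$) verifies unitarity and the 50-50 cloning behaviour described in Figure~\ref{BS-action}:

```python
import numpy as np

def beam_splitter(t, r):
    """Amplitude transfer matrix of a lossless beam splitter with
    transmission t and reflection r (t + r = 1), as in Eq. (2)."""
    return np.array([[np.sqrt(t), -np.sqrt(r)],
                     [np.sqrt(r),  np.sqrt(t)]])

BS = beam_splitter(0.5, 0.5)
# Unitarity (energy conservation) holds whenever t + r = 1
assert np.allclose(BS @ BS.T, np.eye(2))
# A unit beam entering one port is split into two half-intensity
# copies with the same polarization: classical cloning
out = BS @ np.array([1.0, 0.0])
assert np.allclose(np.abs(out) ** 2, [0.5, 0.5])
```
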
\section{The Bernstein-Vazirani algorithm and a modified oracle}
\label{BV-algo}
Consider the binary
function $f(x)$ defined from an $n$-bit domain space to a $1$-bit range.
\begin{equation}
f:\{0,1\}^n \longrightarrow \{0,1\}
\end{equation}
The
function is considered to be of the form $f(x)=a \cdot x$,
where $a$ is an $n$-bit
string of zeros and ones and $a\cdot x$ denotes the bitwise
scalar product modulo 2:
\begin{equation}
f(x) =a_{1}x_{1} \oplus a_{2}x_{2} \oplus \cdots \oplus a_{n}x_{n}
\label{bv_function}
\end{equation}
The aim of the algorithm is to find the $n$-bit string $a$,
given that we have access to an oracle which gives us the values
of the function $f(x)$ when we supply it with an input $x$.
Classically at least $n$ queries to the oracle are required in order
to find the binary string $a$. The Bernstein-Vazirani algorithm
solves this problem with a single query to a quantum oracle of
the form
\begin{equation}
\vert x \rangle_{n-\rm qubit} \vert y \rangle_{1-\rm qubit}
\buildrel{U_a}\over{\longrightarrow} \vert x \rangle_{n-\rm
qubit} \vert f(x)
\oplus y \rangle_{1-\rm qubit}
\label{oracle1}
\end{equation}
where $x \in \{0,1,\ldots,2^{n}-1\}$ labels the
data register and $ \vert y \rangle$ acts as
a target register.
The algorithm works as follows: begin with an initial state
with the first $n$ qubits in the state $\vert 0 \rangle$ and the last
qubit in the state $\vert 1 \rangle$. Apply a Hadamard
transformation on all the $n+1$ qubits and then make a call to the
oracle, giving the following results:
\begin{eqnarray}
&&\vert 0 \rangle^n
\vert 1 \rangle \buildrel{H^{\otimes
n+1}}\over{\longrightarrow}\frac{1}{2^{n/2}}
\sum_{x=0}^{2^{n}-1} \vert x \rangle
\;{1 \over \sqrt{2}} (\vert 0 \rangle - \vert 1 \rangle)\nonumber \\
&&\buildrel{U_a}\over{\longrightarrow} \frac{1}{2^{n/2}}
\sum_{x=0}^{2^{n}-1} (-1)^{(x.a)} \vert x
\rangle \frac{1}{\sqrt{2}}
(\vert 0 \rangle - \vert 1 \rangle )\nonumber \\
&&\buildrel{H^{\otimes n+1}}\over{\longrightarrow}{1\over
2^n}\sum_{x=0}^{2^{n}-1}
\sum_{z=0}^{2^{n}-1} (-1)^{(x.a)}(-1)^{(x.z)} \vert z
\rangle \vert 1 \rangle
\nonumber \\
&&\equiv \vert a \rangle
\vert 1 \rangle
\label{bv-orig}
\end{eqnarray}
where we have used the fact that
$$ {1\over2^n}\sum_{x=0}^{2^{n}-1}
(-1)^{(a.x)}(-1)^{(x.z)} =
\delta_{a z} $$
A measurement in the computational basis immediately reveals
the binary vector $a$. This algorithm therefore achieves the
discovery of the vector $a$ in a single oracle call as
opposed to $n$ oracle calls required classically. The
oracle~(\ref{oracle1}) has been queried on a superposition
of states for this algorithm. However, if we query the
oracle on a state $\vert x \rangle$ with the function
register set to $\vert 0 \rangle$ we will recover the value
$f(x)$ in the function register. This demonstrates that we
can run the oracle in the classical mode when desired.
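The chain of transformations in Equation~(\ref{bv-orig}) can be reproduced with a small state-vector simulation (an illustrative sketch using the phase form of the oracle on the $n$ data qubits only; the passive target qubit is omitted):

```python
import numpy as np

def bits(x, n):
    """Big-endian list of the n bits of the integer x."""
    return [(x >> (n - 1 - k)) & 1 for k in range(n)]

def bv_quantum(a):
    """State-vector simulation of Eq. (7): Hadamards, one call to the
    phase oracle |x> -> (-1)^{a.x}|x>, Hadamards, then measurement."""
    n = len(a)
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H1)
    state = np.zeros(2 ** n)
    state[0] = 1.0                        # |0...0>
    state = Hn @ state                    # uniform superposition over x
    phases = np.array([(-1.0) ** sum(ai * xi for ai, xi in zip(a, bits(x, n)))
                       for x in range(2 ** n)])
    state = Hn @ (phases * state)         # re-interfere
    # the orthogonality sum (1/2^n) sum_x (-1)^{a.x}(-1)^{x.z} = delta_{az}
    # makes the outcome deterministic: the state is exactly |a>
    return bits(int(np.argmax(np.abs(state))), n)

assert bv_quantum([1, 0, 1, 1]) == [1, 0, 1, 1]
```
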
\subsection{Oracle modification and implementation without
entanglement} This unitary oracle~(\ref{oracle1}) requires
$n+1$ qubits and can be operated in two different ways. If
we use eigen states in the input and set $y=0$, the
algorithm outputs $f(x)$ for a given input $x$ in a
reversible manner (which the original classical algorithm
would do irreversibly). However, the algorithm can be
performed on arbitrary quantum states (typically a uniform
superposition of input states in the Deutsch-Jozsa and
Bernstein-Vazirani algorithms).
A careful perusal of Equation~(\ref{bv-orig}) reveals two
important facts about the Bernstein-Vazirani algorithm.
\begin{itemize}
\item[(a)] The
register qubit does not play any role in the algorithm. It is
used only in the function evaluation step because the
oracle~(\ref{oracle1}) demands that we supply this extra qubit.
However, if we modify the oracle to
\begin{equation}
\vert x \rangle_{\mbox{\tiny n-bit}}
\stackrel{U_{f}}{\longrightarrow}
(-1)^{f(x)} \vert x \rangle_{\mbox{\tiny n-bit}}
\label{new-oracle}
\end{equation}
we can implement everything on $n$ qubits.
Since the state of the last
qubit does not change, it can be considered redundant and we
can remove the one-qubit target register altogether.
Although this oracle suffices to execute the Bernstein-Vazirani
algorithm, it cannot give
us the value of $f(x)$ for a given $x$. Therefore, one can argue
that the connection with the original classical problem is lost
and one is solving an altogether different problem. In this
paper, we demonstrate that in the classical model based on
polarization of light beams, this problem can be
circumvented and we
can obtain the value of $f(x)$ from $x$ via a suitable
modification of the circuit. We will come back to these subtle
points again in the next section.
\item[(b)]
It turns out that this version of the oracle is implementable
without requiring any entanglement for the case of the
Bernstein-Vazirani algorithm.
The modified oracle~(\ref{new-oracle}) can be
implemented without introducing any entanglement because
the unitary
transformation $U_{a}$ can be decomposed as a direct product
of single qubit operations.
\begin{eqnarray} U_{a} &=&
U^{(1)}_{a} \otimes U^{(2)}_{a} \otimes
\cdots \otimes U^{(n)}_{a}\nonumber
\\
&=&(\sigma_z^1)^{a_1} \otimes (\sigma_z^2)^{a_2}\otimes\cdots
\otimes(\sigma_z^n)^{a_n}
\label{factor}
\end{eqnarray}
where $\sigma_z^j$ is the Pauli operator acting on the $j$th
qubit.
On an $n$-qubit eigen state $\vert x
\rangle = \vert x_1\rangle \vert x_2\rangle
\cdots \vert x_n\rangle$ labeled by the binary string $x$ the
action reduces to
\begin{equation}
U_a\equiv (-1)^{x_1.a_1}\, (-1)^{x_2.a_2}\, \cdots (-1)^{x_n.a_n}
\label{factor1}
\end{equation}
\end{itemize}
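The factorization in Equations~(\ref{factor}) and (\ref{factor1}) is easy to verify numerically; the sketch below (illustrative only) builds $U_a$ as a tensor product of $\sigma_z$ factors and checks that it is diagonal with entries $(-1)^{a\cdot x}$, hence non-entangling:

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

def oracle(a):
    """U_a = (sigma_z)^{a_1} x (sigma_z)^{a_2} x ... x (sigma_z)^{a_n},
    the tensor-product decomposition of Eq. (8)."""
    U = np.array([[1.0]])
    for aj in a:
        U = np.kron(U, np.linalg.matrix_power(sigma_z, aj))
    return U

a = [1, 0, 1]
U = oracle(a)
# U_a is diagonal with entries (-1)^{a.x}, so it maps every product
# (eigen) state to itself up to a sign and never creates entanglement
for x in range(2 ** len(a)):
    xb = [(x >> (len(a) - 1 - k)) & 1 for k in range(len(a))]
    assert U[x, x] == (-1.0) ** sum(aj * xj for aj, xj in zip(a, xb))
assert np.allclose(U, np.diag(np.diag(U)))
```
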
\begin{figure}
\caption{The simplified version of the Bernstein-Vazirani algorithm
using the modified oracle~(\ref{new-oracle}), in which only $n$ qubits
are used and the states remain separable at all stages.}
\label{bv-circuit}
\end{figure}
This simplified version of the
Bernstein-Vazirani algorithm where only $n\/$ qubits
are used and we have separable states at all stages of
the implementation has been depicted in figure~(\ref{bv-circuit}).
All the implementations till date have been along the lines of
this circuit~\cite{djnmr,djoptics}.
\section{New optical implementation of the
Bernstein-Vazirani algorithm}
\label{new-algo}
It was shown in Section~(\ref{optics}) that
$n\/$ classical beams of light
can be visualized as an $n\/$ qubit system and the action of
a non-entangling unitary transformation can be implemented via
a suitable combination of two quarter wave plates and a half wave
plate on each beam. Although the set of unitaries that can be
implemented is limited, there is an added advantage that we can
clone these beams by using beam splitters.
The waves are classical and therefore there
is no problem in dividing the amplitude of a given polarization to
obtain two copies of the same polarization state. We
will now use this property
of the model to implement the Bernstein-Vazirani
algorithm in a new way and also
to make the modified oracle more powerful in terms of its
capacity to compute $f(x)$ from $x$.
\begin{figure}
\caption{Optical circuit to (a) implement the Bernstein-Vazirani algorithm in a new
way and (b) to compute $f(x)$ from $x$. BS's represent 50/50
beam splitters, the corner elements are mirrors and D's are
light detectors. The XOR gate is
implemented on pairs of bits till we are left with only
a one bit result.}
\label{classical-circuit}
\end{figure}
A notation similar to quantum mechanics
is used in which single
quotation marks are placed around ket vectors describing
the polarization states of light beams, where `$\vert x_j\rangle$'
represents $x$-polarization if $x_j=1$ and
$y$-polarization if $x_j=0$, consistent with the mapping of
Section~\ref{optics}. Each beam splitter in the circuit
splits the beam into two, keeping the polarization state of both the
beams identical to the original polarization. The intensity of
the split beams is half that of the original beam.
We now follow the circuit described in
Figure~(\ref{classical-circuit}) to arrive at our
results. Consider the input state labeled by the binary
vector $x$ with its bits given by $x_{1}, x_{2}, \cdots,
x_{n}$. We represent it by a polarization state
`$\vert x_{1}\rangle$'`$\vert x_2\rangle$'$\cdots$ `$ \vert
x_n \rangle$' where each beam has an $x$ or $y$ polarization
depending upon the corresponding bit being $0$ or $1$.
Each beam goes through an identical set of operations.
Consider the $j$th beam. The initial state of this beam is
`$\vert x_j\rangle$' and after the beam splitter we have two
copies of the same state (classical cloning of polarization
states). The oracle acts on one of the copies and converts
it via the unitary transformation $U_a^j=(-1)^{x_j.a_j}$;
the other copy does not undergo any change. Both these
copies are brought together and mixed at the beam splitter
BS$_j^{\prime}$ and the intensity is measured at the
detector $D_j$.
\begin{eqnarray}
{\rm `}\vert x_j\rangle{\rm '}\longrightarrow\begin{array}{ccc}
\frac{1}{\sqrt{2}}{\rm
`}\vert x_j\rangle{\rm '}&&{\rm transmitted}\\ \\
-\frac{1}{\sqrt{2}}{\rm `}\vert x_j\rangle{\rm '} &&{\rm reflected}
\end{array}
\end{eqnarray}
The transmitted component then undergoes the action of the oracle
unitary (Equations~(\ref{factor}) and (\ref{factor1})) which for
the $j$th qubit acts via
$U_a^j=\sigma_z^{a_j}$, contributing the phase $(-1)^{x_j.a_j}$. The state of the
beam is
\begin{equation}
\frac{1}{\sqrt{2}}{\rm `}\vert x_j\rangle{\rm '}\stackrel{U_a^j}{\longrightarrow}
\frac{1}{\sqrt{2}}(-1)^{x_j.a_j} {\rm `}\vert x_j\rangle{\rm '}
\end{equation}
As is clear from Equation~(\ref{biref}), the implementation
of $U_a^j$ on the $j$th beam is straightforward and is a
polarization dependent phase shift corresponding to a
single half wave plate with $\phi=0$ and $\eta=\pi$
~\cite{phase}.
Finally, this beam meets the other beam (the one
that did not
undergo the oracle unitary) at the beam splitter BS$_j^{\prime}$, where they
interfere to give the state of the beam moving towards
detector $D_j$
\begin{equation}
\frac{1}{2}(-1)^{x_j.a_j} {\rm `}\vert x_j\rangle{\rm '}-
\frac{1}{2}{\rm `}\vert x_j\rangle{\rm
'}=\frac{1}{2}((-1)^{x_j.a_j} -1) {\rm `}\vert x_j\rangle{\rm '}
\label{phase}
\end{equation}
The negative sign in Equation~(\ref{phase}) implies that the
beam which does not pass through the oracle acquires an
extra phase factor of $\pi$, which can be easily arranged.
After interference, the amplitude and hence the intensity at
the detector $D_j$ is zero if $x_j.a_j$ is zero. On the
other hand, if $x_j.a_j$ is one, the amplitude in
Equation~(\ref{phase}) is non-zero and the detector registers
a non-zero intensity. {\em This happens for all the beams and
therefore each detector measures the corresponding
$x_j.a_j$}.
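The read-out logic of the optical circuit can be summarized in a few lines (a classical amplitude sketch based on Equation~(\ref{phase}); only the zero/non-zero distinction at each detector is used, and the detectors are assumed ideal):

```python
def detector_intensities(x, a):
    """Intensity at each detector D_j, from the amplitude of Eq. (12):
    (1/2)((-1)^{x_j a_j} - 1); only its vanishing or not is read out."""
    return [abs(0.5 * ((-1.0) ** (xj * aj) - 1.0)) ** 2
            for xj, aj in zip(x, a)]

hidden_a = [1, 0, 1, 1, 0]
# Probing with x = (1, ..., 1): detector j is dark exactly when a_j = 0,
# so a single run of the circuit (one oracle call) reveals the string a
intensities = detector_intensities([1] * len(hidden_a), hidden_a)
assert [int(I > 1e-12) for I in intensities] == hidden_a
```
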
\subsection{To find the $n$ bit string `$a$'}
We can find $x_j.a_j$ separately for all $j$'s with this
simple interferometric arrangement. The computation of the
binary string $a$ is now straightforward. If we choose a
special input state with $x_j=1$ for all $j \in
\{1,2,\cdots, n\}$, then the detectors measure the
corresponding $a_j$ and therefore we are able to compute the
string $a$. This is quite different from the quantum version
of the Bernstein-Vazirani algorithm where we use Hadamard
gates to create superpositions. As a matter of fact, the
scheme with Hadamard gates described in
Figure~(\ref{bv-circuit}) can also be implemented in our
model with polarization qubits.
\subsection{Computing $f(x)$ for a given $x$}
In order to compute $f(x)$ for a given $x$, the appropriate
polarization state representing the $n$-bit input $x$ is chosen.
The outputs from all the detectors are fed into a pair-wise XOR
gate to compute addition modulo 2 (the XOR is applied to pairs of
inputs until we are left with only one output). This process
amounts to computing $f(x)= x_1.a_1\oplus x_2.a_2 \oplus\cdots
\oplus x_n.a_n$. We can thus compute $f(x)$ for any given $x$.
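The pairwise-XOR post-processing can be sketched as follows (illustrative; the detector outputs $x_j.a_j$ are taken as ideal 0/1 bits):

```python
from functools import reduce
from operator import xor

def f_of_x(x, a):
    """f(x) = x_1.a_1 (+) x_2.a_2 (+) ... (+) x_n.a_n, computed by
    pairwise-XORing the binarized detector outputs x_j.a_j."""
    detector_bits = [xj * aj for xj, aj in zip(x, a)]  # 0/1 outputs of D_j
    return reduce(xor, detector_bits, 0)

a = [1, 0, 1, 1]
assert f_of_x([1, 0, 1, 1], a) == 1   # 1 (+) 0 (+) 1 (+) 1 = 1
assert f_of_x([1, 1, 0, 0], a) == 1   # 1 (+) 0 (+) 0 (+) 0 = 1
assert f_of_x([0, 1, 1, 0], a) == 1   # 0 (+) 0 (+) 1 (+) 0 = 1
```
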
\section{Concluding Remarks}
\label{conclusion}
We have described a classical optical scheme to implement
the Bernstein-Vazirani algorithm. This scheme is entirely
classical as we have used only 'classical qubits' (based on the
polarization states of light beams), and passive optical
elements such as
detectors, beam splitters, phase shifters and mirrors. The number of
components needed to implement the algorithm increases
linearly with the number of input beams. We have explicitly cloned
the input and interfered it again with the part which undergoes
the oracle unitary, in order to
solve the Bernstein-Vazirani problem. This scheme does not
require the implementation of any Hadamard gates. We have also shown
through our interference arrangement that we can
use the same oracle to compute
$f(x)$ for a given $x$.
We believe that this analysis is a step in the direction
where information processors based on interference of waves are analyzed
in detail for their computation power. These systems seem
to provide a model that is in-between the classical
computation model based on bits and a fully quantum
computer. The computational power is also likely to be
in-between the two models (these issues will be discussed
elsewhere).
\end{document} | math | 23,654 |
\begin{document}
\title{\bf Asymptotics of Radially Symmetric Solutions for the Exterior Problem of Multidimensional Burgers Equation}
\author{{\bf Tong Yang}\\[1mm]
Department of Mathematics, City University of Hong Kong, China\\
Email address: [email protected]\\[2mm]
{\bf Huijiang Zhao}\\[1mm]
School of Mathematics and Statistics, Wuhan University, China\\
Computational Science Hubei Key Laboratory, Wuhan University, China\\
Email address: [email protected]\\[2mm]
{\bf Qingsong Zhao}\\[1mm]
School of Mathematics and Statistics, Wuhan University, China\\
Email address: [email protected]
}
\date{}
\maketitle
\begin{center}
{\bf Dedicated to Professor Shuxing Chen on the occasion of his 80th birthday}
\end{center}
\begin{abstract}
We are concerned with the large-time behavior of the radially symmetric solution for multidimensional Burgers equation on the exterior of a ball $\mathbb{B}_{r_0}(0)\subset \mathbb{R}^n$ for $n\geq 3$ and some positive constant $r_0>0$, where the boundary data $v_-$ and the far field state $v_+$ of the initial data are prescribed and correspond to a stationary wave. It is shown in \cite{Hashimoto-Matsumura-JDE-2019} that a sufficient condition to guarantee the existence of such a stationary wave is $v_+<0, v_-\leq |v_+|+\mu(n-1)/r_0$. Since the stationary wave is no longer monotonic, its nonlinear stability was justified only recently in \cite{Hashimoto-Matsumura-JDE-2019} for the case when $v_\pm<0, v_-\leq v_++\mu(n-1)/r_0$. The main purpose of this paper is to verify the time-asymptotic nonlinear stability of such a stationary wave for the whole range of $v_\pm$ satisfying $v_+<0, v_-\leq |v_+|+\mu(n-1)/r_0$. Furthermore, we also derive algebraic and exponential temporal convergence rates. Our stability analysis is based on a space weighted energy method with a suitably chosen weight function, while for the temporal decay rates, in addition to such a space weighted energy method, we also use the space-time weighted energy method employed in \cite{Kawashima-Matsumura-CMP-1985} and \cite{Yin-Zhao-KRM-2009}.
\end{abstract}
\section{Introduction}
This paper is concerned with the precise description of the large time behaviors of solutions of the following initial-boundary value problem of multidimensional Burgers equation in an exterior domain $\Omega:=\mathbb{R}^n\backslash \overline{\mathbb{B}}_{r_0}(0)\subset\mathbb{R}^n$ for $n\geq 3$:
\begin{eqnarray}\label{1.1}
{\bf u}_t+({\bf u}\cdot\nabla){\bf u}&=&\mu\Delta {\bf u}, \quad t>0,\ x\in \Omega,\nonumber\\
{\bf u}(0,x)&=&{\bf u}_0(x),\quad x\in\Omega,\\
{\bf u}(t,x)&=&{\bf b}(t,x),\quad t>0,\ x\in\partial\mathbb{B}_{r_0}(0),\nonumber
\end{eqnarray}
and, as in \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019}, our main purpose is to understand how the space dimension $n$ affects the large-time behaviors of solutions of the initial-boundary value problem \eqref{1.1}. Here ${\bf u}=\left(u_1(t,x),\cdots,u_n(t,x)\right)$ is a vector-valued unknown function of $x=(x_1,\cdots,x_n)\in \mathbb{R}^n$ and $t\geq 0,$ ${\bf u}\cdot\nabla=\sum\limits_{j=1}^nu_j\frac{\partial}{\partial x_j}$, $\mu$ and $r_0>0$ are some given positive constants. ${\bf u}_0(x)$ and ${\bf b}(t,x)$ are given initial and boundary values respectively satisfying the compatibility condition ${\bf b}(0,x)={\bf u}_0(x)$ for all $x\in\partial\mathbb{B}_{r_0}(0)$.
In this paper, we will focus on the radially symmetric solutions for the initial-boundary value problem \eqref{1.1}. In fact, if ${\bf b}(t,x)=\frac{x}{|x|}v_-, {\bf u}_0(x)=\frac{x}{|x|}v_0(|x|)$ satisfying $\lim\limits_{|x|\to+\infty}v_0(|x|)=v_+$ and $v_0(r_0)=v_-$ for some given constants $v_\pm\in\mathbb{R}$ with $v_0(|x|)$ being some given scalar function, then one can seek radially symmetric solutions to the initial-boundary value problem \eqref{1.1}. For such a case, if we introduce a new unknown function $v(t,r)$ by letting ${\bf u}(t,x)=\frac xrv(t,r)$ with $r=|x|$, then the radially symmetric solution $v(t,r):= v(t,|x|)$ of the initial-boundary value problem (\ref{1.1}) satisfies the following initial-boundary value problem
\begin{eqnarray}\label{1.2}
v_t+\left(\frac{v^2}{2}\right)_r&=&\mu\left(v_{rr}+(n-1)\left(\frac vr\right)_r\right),\quad t>0,\ r>r_0,\nonumber\\
v(t,r_0)&=&v_-,\quad t>0,\\
\lim\limits_{r\rightarrow \infty }v(t,r)&=&v_+,\quad t>0,\nonumber\\
v(0,r)&=&v_0(r),\quad r>r_0,\nonumber
\end{eqnarray}
where the initial data $v_0(r)$ is assumed to satisfy the compatibility condition
\begin{equation}\label{Compatibility condition}
v_0(r_0)=v_-,\quad \lim\limits_{r\to\infty}v_0(r)=v_+.
\end{equation}
Throughout the rest of this paper, we set $V_-=v_--\frac{\mu(n-1)}{r_0}$.
To see the influence of the space dimension $n$ on the asymptotics of the initial-boundary value problem \eqref{1.2}, one needs first to consider the corresponding problem for the one-dimensional Burgers equation in the half line
\begin{eqnarray}\label{1.1.1}
v_t+\left(\frac{v^2}{2}\right)_r&=&\mu v_{rr},\quad r>0,\ t>0,\nonumber\\
v(t,0)&=&v_-,\quad t>0,\\
\lim\limits_{r\rightarrow \infty }v(t,r)&=&v_+,\quad t>0,\nonumber\\
v(0,r)&=&v_0(r),\quad r>0,\nonumber
\end{eqnarray}
and for the initial-boundary value problem \eqref{1.1.1}, as illustrated in \cite{Matsumura-MAA-2001} and \cite{Nishihara-ADC-2001}, its large time behavior can be completely classified by the unique global entropy solution of the resulting Riemann problem of the inviscid Burgers equation
\begin{eqnarray}\label{Riemann Problem for Burgers}
v_t+\left(\frac{v^2}{2}\right)_r&=&0,\quad t>0,\ r\in\mathbb{R},\nonumber\\
v(0,r)&=&\left\{
\begin{array}{rl}
v_-,\quad &r<0,\\
v_+,\quad &r>0
\end{array}
\right.
\end{eqnarray}
together with the so-called stationary wave $\phi_{v_-,v_+}(r)$ connecting the boundary value $v_-$ and the far-field state $v_+$, which is due to the occurrence of the boundary and satisfies
\begin{eqnarray}\label{SW-1d}
\left(\frac{v^2}{2}\right)_r&=&\mu v_{rr},\quad r>0,\nonumber\\
v(0)&=&v_-,\\
\lim\limits_{r\to+\infty}v(r)&=&v_+.\nonumber
\end{eqnarray}
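Integrating \eqref{SW-1d} once and using $v_r\to 0$ as $r\to\infty$ reduces it to the first-order equation $\mu v_r=(v^2-v_+^2)/2$. A rough forward-Euler sketch (illustrative parameter values, not tied to any result of this paper) confirms that for $v_+<0$ and $v_-<|v_+|$ the profile relaxes to the far-field state $v_+$:

```python
def stationary_wave_1d(v_minus, v_plus, mu=1.0, R=40.0, dr=1.0e-3):
    """Forward-Euler integration of mu v' = (v^2 - v_+^2)/2, v(0) = v_-,
    the once-integrated form of (1.6) (illustrative parameters)."""
    v, r = v_minus, 0.0
    while r < R:
        v += dr * (v * v - v_plus * v_plus) / (2.0 * mu)
        r += dr
    return v

# v_+ < 0 and v_- < |v_+|: the profile relaxes to the far-field state v_+
# both from above (v_- > v_+) and from below (v_- < v_+)
assert abs(stationary_wave_1d(0.5, -1.0) - (-1.0)) < 1e-2
assert abs(stationary_wave_1d(-2.0, -1.0) - (-1.0)) < 1e-2
```
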
The rigorous mathematical justification of the above classifications for the initial-boundary value problem \eqref{1.1.1} was carried out in \cite{Liu-Matsumura-Nishihara-SIMA-1998, Liu-Nishihara-JDE-1997, Liu-Yu-ARMA-1997, Nishihara-JMAA-2001} and the results are summarized in Table \ref{1.1.2}. Here $\phi_{v_-,v_+}(r)$ is the stationary wave solving the problem (\ref{SW-1d}), $\psi^R_{v_-,v_+}(t,r)$ is the unique rarefaction wave solution of the Riemann problem \eqref{Riemann Problem for Burgers} for the case $v_-<v_+$, and $W_{v_-,v_+}(r-st)$ with $s=\frac{v_-+v_+}{2}$ is the suitably shifted traveling wave solution of the one-dimensional Burgers equation satisfying $W_{v_-,v_+}(\pm\infty)=v_\pm$ for the case $v_->v_+$ and $s\geq 0$.
\begin{table}\label{1.1.2}
\centering \caption{Results available for \eqref{1.1.1}}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{2}{|c|}{Boundary Condition} & Asymptotic Behavior \\
\hline
{$v_-<v_+$}&$v_-<v_+\leq0$ & $\phi_{v_-,v_+}$,\ \ \cite{Liu-Matsumura-Nishihara-SIMA-1998}\\
\cline{2-3}
&$v_-<0<v_+$ & $\phi_{v_-,0}+\psi^R_{0,v_+}$,\ \ \cite{Liu-Matsumura-Nishihara-SIMA-1998}\\
\cline{2-3}
&$0\leq v_-<v_+$&$\psi^R_{v_-,v_+}$,\ \ \cite{Liu-Matsumura-Nishihara-SIMA-1998}\\
\hline
{$v_->v_+$}&$v_-+v_+<0$&$\phi_{v_-,v_+}$,\ \ \cite{Liu-Nishihara-JDE-1997}\\
\cline{2-3}
&$v_-+v_+\geq0$&$W_{v_-,v_+}$,\ \ \cite{Liu-Nishihara-JDE-1997, Liu-Yu-ARMA-1997, Nishihara-JMAA-2001}\\
\hline
\end{tabular}
\end{table}
But for solutions of the initial-boundary value problem (\ref{1.2}), the story is quite different, due to the following reasons:
\begin{itemize}
\item The first reason is that the corresponding stationary wave $\phi_{v_-,v_+}(r)$ connecting the boundary value $v_-$ and the far-field state $v_+$, which solves the following boundary problem
\begin{eqnarray}\label{SW-RS-Burgers}
\left(\frac{v^2}{2}\right)_r&=&\mu\left(v_{rr}+(n-1)\left(\frac vr\right)_r\right),\quad r>r_0,\nonumber\\
v(r_0)&=&v_-,\\
\lim\limits_{r\to+\infty}v(r)&=&v_+,\nonumber
\end{eqnarray}
is no longer monotonic, hence the techniques employed in \cite{Liu-Matsumura-Nishihara-SIMA-1998} and \cite{Liu-Nishihara-JDE-1997} cannot be used here;
\item The second reason is that the first equation of \eqref{1.2} is nonautonomous, thus even how to construct a non-stationary travelling wave $W_{v_-,v_+}(r-st)$ satisfying $W_{v_-,v_+}(\pm\infty)=v_{\pm}$ with $s\not=0$ is unknown, to say nothing of its stability.
\end{itemize}
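The non-monotonicity can already be seen numerically. Integrating \eqref{SW-RS-Burgers} once from $r$ to $\infty$ (assuming $v_r\to0$ and $v/r\to0$ at infinity) gives the first-order equation $\mu v_r=(v^2-v_+^2)/2-\mu(n-1)v/r$, and the forward-Euler sketch below (illustrative values $n=4$, $\mu=r_0=1$, not tied to any result of this paper) shows a profile that overshoots the far-field state $v_+$ before relaxing back, with only an algebraic approach roughly of order $\mu(n-1)/r$:

```python
def stationary_wave_radial(v_minus, v_plus, n=4, mu=1.0, r0=1.0,
                           R=40.0, dr=1.0e-3):
    """Forward-Euler integration of the once-integrated form of (1.8):
    mu v' = (v^2 - v_+^2)/2 - mu (n-1) v / r, v(r_0) = v_-
    (illustrative parameters; decay at infinity is assumed)."""
    profile, v, r = [v_minus], v_minus, r0
    while r < R:
        v += dr * ((v * v - v_plus * v_plus) / (2.0 * mu)
                   - (n - 1) * v / r)
        r += dr
        profile.append(v)
    return profile

phi = stationary_wave_radial(-2.0, -1.0)
# unlike the one-dimensional wave, the profile overshoots v_+ = -1
# and then relaxes back: it is not monotonic
assert max(phi) > -0.9 and phi[0] == -2.0   # non-monotone overshoot
assert abs(phi[-1] - (-1.0)) < 0.1          # slow O(1/r) approach to v_+
```
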
Even so, in certain cases one can still hope to use the stationary wave $\phi_{v_-,v_+}(r)$ solving \eqref{SW-RS-Burgers}, the unique rarefaction wave solution $\psi^R_{v_-,v_+}(t,r)$ of the Riemann problem \eqref{Riemann Problem for Burgers} and/or their linear superposition to describe the asymptotics of the global solutions of the initial-boundary value problem \eqref{1.2}. Such a problem was studied in \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019} and the results available up to now can be summarized as follows:
\begin{itemize}
\item For the case $n=3,$ a complete classification of its asymptotic behaviors, which are similar to that of the one-dimensional Burgers equation in the half line, i.e. the initial-boundary value problem \eqref{1.1.1}, together with its rigorous mathematical justification, which includes even a linear superposition of stationary and viscous shock waves, are given in \cite{Hashimoto-Matsumura-JDE-2019};
\item For $n>3,$ only the following two cases are studied in \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019}:
\begin{itemize}
\item [(a).] For the case of $V_-\leq 0\leq v_+,$ if one assumes further that
\begin{equation}\label{v-+-condiiton}
0<v_-<\frac{2\mu}{r_0\left(1+\sqrt{(n-3)/(n-1)}\right)},
\end{equation}
it is shown in \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019} that the asymptotics is a linear superposition of a stationary wave $\phi_{v_-,0}$ and a rarefaction wave $\psi^R_{0,v_+}.$ The nonlinear stability of the above wave pattern together with the temporal convergence rate of the global solution $v(t,r)$ of the initial-boundary value problem \eqref{1.2} toward such a wave pattern are studied in \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019};
\item [(b).] For the case of $V_-\leq v_+<0$, the stationary wave $\phi_{v_-,v_+}$ is shown to be nonlinearly stable, while the temporal convergence rate is left open.
\end{itemize}
\end{itemize}
Thus for $n>3$, to the best of our knowledge, the following four cases remain unsolved:
\begin{itemize}
\item [(a).] $V_-\leq 0\leq v_+$ but $v_-\notin\left(0,\frac{2\mu}{r_0\left(1+\sqrt{(n-3)/(n-1)}\right)}\right)$;
\item [(c).] $0<V_-<v_+$;
\item [(d).] $V_->v_+, V_-+v_+<0;$
\item [(e).] $V_->v_+, V_-+v_+\geq0,$
\end{itemize}
and the main purpose of this present paper is to consider the case
\begin{equation}\label{1.3}
v_+<0,\ \ V_-\leq |v_+|.
\end{equation}
Such a boundary condition corresponds to the stationary wave.
\begin{table}\label{1.1.3}
\centering \caption{Results available for \eqref{1.2}}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Boundary Condition} & Asymptotic Behavior&Temporal convergence rate \\
\hline
{$V_-<v_+$}&$V_-\leq v_+<0$& $\phi_{v_-,v_+}$,\ \ \cite{Hashimoto-Matsumura-JDE-2019} \& This Paper & This Paper\\
\cline{2-4}
&$V_-<0\leq v_+$ & $\phi_{v_-,0}+\psi^R_{0,v_+}$,\ \ \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019} for partial results&\cite{Hashimoto-Matsumura-JDE-2019} for partial results \\
\cline{2-4}
&$0< V_-<v_+$&Unsolved&Unsolved\\
\hline
{$V_->v_+$}&$V_-+v_+\leq0$&$\phi_{v_-,v_+}$,\ \ This Paper& This Paper\\
\cline{2-4}
&$V_-+v_+>0$&Unsolved&Unsolved\\
\hline
\end{tabular}
\end{table}
Notice that (\ref{1.3}) includes both case (b) and case (d). Our main results contain two parts:
\begin{itemize}
\item For case (b), the temporal convergence rate is obtained;
\item For case (d), the nonlinear stability of the stationary wave $\phi_{v_-,v_+}$ together with the temporal convergence rate are obtained.
\end{itemize}
The results available up to now for the rigorous mathematical justifications of the asymptotics of the initial-boundary value problem \eqref{1.2} are summarized in Table 2.
Before concluding this section, we outline the main ideas used in this paper. As pointed out before, this paper concentrates on the case when the boundary condition corresponds to the stationary wave. For such a case, it is proved in Proposition 2.3 of \cite{Hashimoto-Matsumura-JDE-2019} that a sufficient condition to guarantee the unique existence of the stationary wave $\phi(r)$ to the boundary value problem \eqref{SW-RS-Burgers} is \eqref{1.3}. As for the nonlinear stability of such a stationary wave $\phi(r)$, since it is no longer monotonic, the main idea used in \cite{Hashimoto-Matsumura-JDE-2019} is to introduce the new unknown function $z(t,r)=r^{\frac{n-1}{2}}(v(t,r)-\phi(r))$ and from \eqref{1.2} and \eqref{SW-RS-Burgers}, one can deduce that $z(t,r)$ solves (cf. (5.2) in \cite{Hashimoto-Matsumura-JDE-2019})
\begin{eqnarray}\label{Hashimoto-Matsumura-JDE-2019-formula}
z_t+(\phi z)_r+\left(-\frac{n-1}{2r}\phi+\frac{\mu (n^2-1)}{4r^2}\right)z-\mu z_{rr}&=&\frac{n-1}{2r^{\frac{n+1}{2}}}z^2-\frac{1}{2r^{\frac{n-1}{2}}}\left(z^2\right)_r,\quad t>0,\ r>r_0,\nonumber\\
z(t,r_0)&=&0,\quad t>0,\\
z(0,r)&=&z_0(r):=r^{\frac{n-1}{2}}\left(v_0(r)-\phi(r)\right),\quad r>r_0.\nonumber
\end{eqnarray}
For the initial-boundary value problem \eqref{Hashimoto-Matsumura-JDE-2019-formula}, if one performs the energy method as in \cite{Hashimoto-Matsumura-JDE-2019} to yield the zeroth-order energy type estimates, the following term appears on the left-hand side of the estimates, cf. the estimate (5.2) in \cite{Hashimoto-Matsumura-JDE-2019}
\begin{equation}\label{Bad term}
I:=\frac 12\int^t_0\int^{+\infty}_{r_0}\left(\phi_r-\frac{n-1}{r}\phi +\frac{\mu(n^2-1)}{2r^2}\right)z^2drd\tau.
\end{equation}
In the one-dimensional case, if $\phi(r)$ is monotonically increasing, such a term has a favorable sign and the nonlinear stability result follows easily. But for $n\geq 3$, since $\phi(r)$ is no longer monotonic, it is difficult to determine the sign of such a term. The main observation in \cite{Hashimoto-Matsumura-JDE-2019} is that when
\begin{equation}\label{Hashimoto-JDE-Assumptions-on-v+-}
v_\pm<0,\quad V_-\leq v_+,
\end{equation}
one can further deduce, cf. Lemma 2.4 in \cite{Hashimoto-Matsumura-JDE-2019}, that there exists a non-positive constant $\nu_0\leq 0$ such that
\begin{equation}\label{Upper Negative Bound on SW}
\phi(r)\leq \nu_0\leq 0,\quad r\geq r_0
\end{equation}
and it holds that $\phi(r)-\frac{\mu (n-1)^2}{2r}$ is monotonically non-decreasing for $r>r_0$, that is
\begin{equation}\label{Monotonical Property of SW}
\frac{d\phi(r)}{dr}+\frac{\mu(n-1)^2}{2r^2}\geq 0,\quad r> r_0.
\end{equation}
Having obtained the above estimates, the main idea used in \cite{Hashimoto-Matsumura-JDE-2019} is that one can deduce from \eqref{Upper Negative Bound on SW} and \eqref{Monotonical Property of SW} that $I\geq 0$, and hence the nonlinear stability result can be obtained via the elementary energy method; cf. the proof of Theorem 3.2 in \cite{Hashimoto-Matsumura-JDE-2019} for details.
We note, however, that since the main purpose of this paper is to cover the whole range of $v_\pm$ satisfying \eqref{1.3}, which is a sufficient condition for the existence of the stationary wave $\phi(r)$, the method developed in \cite{Hashimoto-Matsumura-JDE-2019} cannot be applied. Our main idea is instead to use the anti-derivative method, that is, to introduce the new unknown function
\begin{equation}\label{our-new-unknown}
w(t,r)=-\int_r^\infty(v(t,y)-\phi(y))dy
\end{equation}
and to use a space-weighted energy method to deduce the desired nonlinear stability result. The key point in our analysis is to introduce a suitable weight function $\chi(r)$ to overcome the difficulties induced by the non-monotonicity of the stationary wave $\phi(r)$ and by the boundary condition; for details, see the properties of the weight function $\chi(r)$ stated in Lemma \ref{3.4.1} and the proofs of Theorems \ref{4.1}, \ref{4.3}, and \ref{4.5} given in Section 3. For the temporal convergence rates, both algebraic and exponential, in addition to this space-weighted energy method we also use the space-time weighted energy method employed in \cite{Kawashima-Matsumura-CMP-1985} and \cite{Yin-Zhao-KRM-2009}.
The rest of this paper is organized as follows. In Section 2, we first list some basic properties of the stationary wave $\phi_{v_-,v_+}(r)$, show how to construct the weight function $\chi(r)$, which will play an essential role in our analysis, and then state our main results. The proofs of our main results will be given in Section 3.
\vskip 2mm
\noindent \textbf{Notations:} We denote the usual Lebesgue space of square integrable functions over $(r_0,\infty)$ by $L^2=L^2((r_0,\infty))$ with norm $\|\cdot\|$ and for each non-negative integer $k$, we use $H^k$ to denote the corresponding $k$th-order Sobolev space $H^k((r_0,\infty))$ with norm $\|\cdot\|_{H^k}.$
Set $\langle r\rangle=\sqrt{1+r^2}$ and for $\alpha\in \mathbb{R},$ we denote the algebraic weighted Sobolev space, that is, the space of functions $f$ satisfying $\langle r\rangle^{\alpha/2}f\in H^k,$ by $H^{k,\alpha}$ with norm
$$
\|f\|_{k,\alpha}:=\left\|\langle r\rangle^{\alpha/2}f\right\|_{H^k}.
$$
For $k=0,$ we denote $\|\cdot\|_{0,\alpha}$ by $|\cdot|_\alpha$ for simplicity. We also denote the exponential weighted Sobolev space, that is, the space of functions $f$ satisfying $e^{\alpha r/2}f\in H^k$ for some $\alpha\in\mathbb{R}$, by $H^{k,\alpha}_{\exp}.$ For $k=0,$ we denote the corresponding norm by $|\cdot|_{\alpha,\exp}$ for simplicity. For an interval $I\subseteq \mathbb{R}$ and a Banach space $X,$ $C(I;X)$ denotes the space of continuous $X$-valued functions on $I,$ and $C^k(I;X)$ the space of $k$-times continuously differentiable $X$-valued functions on $I.$
\section{Preliminaries and main results}
This section is devoted to the statement of our main results. Before doing so, we first collect some results obtained in \cite{Hashimoto-NonliAnal-2014, Hashimoto-OsakaJMath-2016, Hashimoto-Matsumura-JDE-2019} on the stationary wave of (\ref{1.2}). Recall that $\phi(r)$ is called a stationary solution of (\ref{1.2}) if $\phi(r)$ solves the boundary value problem \eqref{SW-RS-Burgers}. Integrating the first equation of \eqref{SW-RS-Burgers} with respect to $r$ from $r$ to $\infty,$ it is easy to see that $\phi(r)$ satisfies
\begin{eqnarray}\label{2.2}
\phi_r+\frac{(n-1)}{r}\phi&=&\frac1{2\mu}\left(\phi^2-v_+^2\right), \quad r>r_0,\nonumber\\
\phi(r_0)&=&v_-, \\
\lim\limits_{r\to\infty}\phi(r)&=&v_+.\nonumber
\end{eqnarray}
If we introduce a new unknown function $\psi(r)$ by
\begin{equation}\label{2.4}
\psi(r)=\phi(r)-\frac{\mu(n-1)}{r},
\end{equation}
then \eqref{2.2} can be reformulated as
\begin{eqnarray}\label{2.3}
\psi_r&=&\frac{1}{2\mu}\left(\psi^2-v_+^2\right)-\frac{c_0}{r^2},\quad r>r_0,\nonumber\\
\psi(r_0)&=&V_-,\\
\lim\limits_{r\to\infty}\psi(r)&=&v_+,\nonumber
\end{eqnarray}
where $c_0=\frac{\mu(n-1)(n-3)}{2}$ is a constant. Noticing that $c_0=0$ for $n=3,$ it is natural to expect that, for $n=3,$ the large time behavior of the unique global solution $v(t,r)$ to the initial-boundary value problem (\ref{1.2}) is similar to that of the corresponding initial-boundary value problem (\ref{1.1.1}) for the one-dimensional Burgers equation on the half line; this is the main reason why one can give a complete classification of the asymptotic behaviors of the solution $v(t,r)$ to the initial-boundary value problem (\ref{1.2}), cf. \cite{Hashimoto-Matsumura-JDE-2019} for more details. For this reason, we will focus on the case $n\geq 4$ in the rest of this paper.
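For the reader's convenience, we sketch the computation behind this reformulation. Substituting \eqref{2.4} into the first equation of \eqref{2.2} and using $\phi_r=\psi_r-\frac{\mu(n-1)}{r^2}$, we obtain
\begin{equation*}
\psi_r-\frac{\mu(n-1)}{r^2}+\frac{n-1}{r}\psi+\frac{\mu(n-1)^2}{r^2} =\frac{1}{2\mu}\left(\psi^2-v_+^2\right)+\frac{n-1}{r}\psi+\frac{\mu(n-1)^2}{2r^2},
\end{equation*}
so the terms $\frac{n-1}{r}\psi$ cancel and
\begin{equation*}
\psi_r=\frac{1}{2\mu}\left(\psi^2-v_+^2\right)-\frac{\mu(n-1)}{r^2}\left(\frac{n-1}{2}-1\right) =\frac{1}{2\mu}\left(\psi^2-v_+^2\right)-\frac{\mu(n-1)(n-3)}{2r^2},
\end{equation*}
which is exactly the first equation of \eqref{2.3}.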
For the solvability of the boundary value problem \eqref{2.3} and the properties of its unique solution $\psi(r)$, we have the following lemma, which follows from Proposition 2.3 and Lemma 2.4 in \cite{Hashimoto-Matsumura-JDE-2019}.
\begin{lemma}\label{2.5}
Suppose that (\ref{1.3}) holds. Then the boundary value problem \eqref{2.3} admits a unique smooth solution $\psi(r)$ satisfying
\begin{equation*}
\left|\psi(r)-v_+\right|=\left|\phi(r)-v_+-\frac{\mu(n-1)}{r}\right|= O\left(r^{-2}\right),\quad r\to+\infty.
\end{equation*}
Consequently, there exists a positive constant $C$ such that
\begin{eqnarray}\label{Properties of Stationary Wave}
\psi(r)-v_+&\in& C^\infty\cap L^1\left(\left[r_0,\infty\right)\right),\nonumber\\
\left|\frac{d^k\psi(r)}{d r^k}\right|&\leq& C,\quad k=1,2.
\end{eqnarray}
Moreover, if one assumes further that $v_\pm$ satisfy \eqref{Hashimoto-JDE-Assumptions-on-v+-}, then the estimates \eqref{Upper Negative Bound on SW} and \eqref{Monotonical Property of SW} hold for $r> r_0$.
\end{lemma}
Now we turn to consider the initial-boundary value problem \eqref{1.2}. If we take
\begin{equation}\label{3.2}
w(t,r)=-\int_r^\infty(v(t,y)-\phi(y))dy,
\end{equation}
then it is easy to deduce from \eqref{1.2}, \eqref{2.4} and \eqref{2.3} that $w(t,r)$ solves the following initial-boundary value problem
\begin{eqnarray}\label{3.3}
w_t+\psi w_r-\mu w_{rr}&=&-\frac12 w_r^2,\quad t>0,\ r>r_0,\nonumber\\
w_r(t,r_0)&=&\lim\limits_{r\to\infty}w_r(t,r)=\lim\limits_{r\to\infty}w(t,r)=0,\quad t>0,\\
w(0,r)&=&w_0(r):=-\int_r^\infty(v_0(y)-\phi(y))dy,\quad r>r_0.\nonumber
\end{eqnarray}
As pointed out above, compared with the initial-boundary value problem \eqref{1.1.1} for the one-dimensional Burgers equation on the half line, the main difficulty in obtaining the global solvability and the large time behavior of solutions to the initial-boundary value problem \eqref{3.3} is caused by the fact that the stationary wave $\phi(r)$ obtained in Lemma \ref{2.5} is no longer monotonic. Our main idea to overcome this difficulty is to use a weighted energy method. To this end, we introduce the following weight function $\chi(r): [r_0,\infty)\rightarrow\mathbb{R}$:
\begin{equation}\label{3.4}
\chi(r)=\exp\left(-\frac1\mu\int_{r_0}^r\psi(s)ds\right)\int_r^\infty \left(\frac2{r_0}-\frac 1s\right)\exp\left(\frac1\mu\int_{r_0}^s\psi(\tau)d\tau\right)ds.
\end{equation}
The properties of this weight function $\chi(r)$ are summarized in the following lemma:
\begin{lemma}\label{3.4.1}
Suppose that (\ref{1.3}) holds. Then $\chi(r)$ satisfies the following:
\begin{itemize}
\item There exist positive constants $c_i (i=1,2)$ such that
$$
0< c_1\leq \chi(r)\leq c_2<\infty
$$
holds for each $r\geq r_0$;
\item $\chi(r)$ solves
$$
\frac{d\chi(r)}{dr}+\frac{\psi(r)}{\mu}\chi(r)=\frac 1r-\frac2{r_0},\quad r>r_0
$$
and consequently
\begin{eqnarray*}
\left.\left(\frac{d\chi(r)}{dr}+\frac{\psi(r)}{\mu}\chi(r)\right)\right|_{r=r_0}&=&-\frac {1}{r_0}<0,\\
\frac{d^2\chi(r)}{dr^2}+\frac{d}{dr}\left(\frac{\psi(r)}{\mu}\chi(r)\right)&=&-\frac 1{r^2}<0,\quad r>r_0.
\end{eqnarray*}
\end{itemize}
\end{lemma}
\begin{proof} In the following, we prove a more general result. Let $f(r)\in C^1([r_0,\infty))$ satisfy
\begin{equation}\label{3.6}
f'(r)<0,\quad f(r_0)<0,\quad f'(r)\in L^1([r_0,\infty)).
\end{equation}
A direct consequence of the above assumptions is that the limit $\lim\limits_{r\to+\infty}|f(r)|$ exists and satisfies
\begin{equation}\label{3.6-2}
0<\lim\limits_{r\to+\infty}|f(r)|<+\infty.
\end{equation}
Now we set
\begin{equation}\label{3.5}
\widetilde{\chi}(r)=-\exp\left(-\frac1\mu\int_{r_0}^r\psi(s)ds\right)\int_r^\infty f(s)\exp\left(\frac 1\mu\int_{r_0}^s\psi(\tau)d\tau\right)ds.
\end{equation}
It is easy to see from the assumptions \eqref{3.6} and \eqref{3.6-2} imposed on $f(r)$ and the properties of $\psi(r)$ obtained in Lemma \ref{2.5} that $\widetilde{\chi}(r)$ is well-defined and satisfies
\begin{equation}\label{chi-differential equation-1}
\frac{d\widetilde{\chi}(r)}{dr}+\frac{\psi(r)}{\mu}\widetilde{\chi}(r)=f(r),\quad r>r_0.
\end{equation}
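Indeed, setting $E(r)=\exp\left(-\frac1\mu\int_{r_0}^r\psi(s)ds\right)$, so that $\widetilde{\chi}(r)=-E(r)\int_r^\infty \frac{f(s)}{E(s)}ds$ and $E'(r)=-\frac{\psi(r)}{\mu}E(r)$, the product rule yields
\begin{equation*}
\frac{d\widetilde{\chi}(r)}{dr}=-E'(r)\int_r^\infty \frac{f(s)}{E(s)}ds+E(r)\frac{f(r)}{E(r)} =-\frac{\psi(r)}{\mu}\widetilde{\chi}(r)+f(r),
\end{equation*}
which is exactly \eqref{chi-differential equation-1}.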
From this identity and the assumptions \eqref{3.6} imposed on $f(r)$, one can conclude that
\begin{eqnarray}\label{Properties of Chi}
\left.\left(\frac{d\widetilde{\chi}(r)}{dr}+\frac{\psi(r)}{\mu}\widetilde{\chi}(r)\right)\right|_{r=r_0}&=&f(r_0)<0,\nonumber\\
\frac{d^2\widetilde{\chi}(r)}{dr^2}+\frac{d}{dr}\left(\frac{\psi(r)}{\mu}\widetilde{\chi}(r)\right)&=&f'(r)<0,\quad r>r_0.
\end{eqnarray}
Now we prove that there exist positive constants $c_i (i=1,2)$ such that
\begin{equation}\label{bounds on weight function}
0< c_1\leq \widetilde{\chi}(r)\leq c_2<\infty
\end{equation}
holds for each $r\geq r_0$.
For this purpose, set
$$
A(r)=\exp\left(\frac1\mu\int_{r_0}^r\psi(s)ds\right),\quad B(r)=\int_r^\infty|f(s)|A(s)ds,
$$
then it is easy to see that $A(r)>0$ and $B(r)>0$ hold for $r\in[r_0,\infty)$, and consequently, recalling that $f(r)<0$, one can deduce that $\widetilde{\chi}(r)=\frac{B(r)}{A(r)}>0$ for each $r\geq r_0$.
Noticing that
\begin{eqnarray*}
\int^r_{r_0}\psi(s)ds&=&\int^r_{r_0}\left(\psi(s)-v_+\right)ds+v_+\left(r-r_0\right),\\
f(r)&=&f(r_0)+\int^r_{r_0}f'(s)ds,
\end{eqnarray*}
we can get from the facts that $v_+<0$ and $\psi(r)-v_+\in L^1([r_0,\infty))$, together with the assumptions imposed on $f(r)$, that
\begin{eqnarray}\label{3.7}
A(r)&=&\exp\left(-\frac{|v_+|}{\mu}(r-r_0)\right) \exp\left(\frac1\mu\int_{r_0}^r(\psi(s)-v_+)ds\right)\nonumber\\
&\leq& \exp\left(\frac1\mu\|\psi-v_+\|_{L^1}\right)
\end{eqnarray}
and
\begin{eqnarray}\label{3.8}
B(r)&\leq& \left(\left|f\left(r_0\right)\right|+\left\|f'\right\|_{L^1}\right) \exp\left(\frac1\mu\|\psi-v_+\|_{L^1}\right)\int_r^\infty \exp\left(-\frac{|v_+|}{\mu}(s-r_0)\right)ds\nonumber\\
&=&\frac{\mu\left(\left|f(r_0)\right|+\|f'\|_{L^1}\right)}{|v_+|} \exp\left(\frac1\mu\|\psi-v_+\|_{L^1}-\frac{|v_+|}\mu(r-r_0)\right)\\
&\leq& \frac{\mu\left(\left|f(r_0)\right|+\|f'\|_{L^1}\right)} {|v_+|} \exp\left(\frac1\mu\|\psi-v_+\|_{L^1}\right)\nonumber
\end{eqnarray}
hold for each $r\geq r_0$. Moreover, the estimates \eqref{3.7} and \eqref{3.8} imply that
\begin{equation}\label{3.9}
\lim\limits_{r\to+\infty} A(r)=\lim\limits_{r\to+\infty}B(r)=0.
\end{equation}
From \eqref{3.9} and l'H\^opital's rule, one can deduce from the assumption \eqref{3.6-2} imposed on $f(r)$ that
\begin{equation}\label{3.10}
\lim\limits_{r\to\infty}\widetilde{\chi}(r)=\lim\limits_{r\to\infty}\frac{B(r)}{A(r)} =\lim\limits_{r\to\infty}\frac{B'(r)}{A'(r)} =\frac\mu{|v_+|}\lim\limits_{r\to\infty}|f(r)|\in (0,+\infty).
\end{equation}
Having obtained \eqref{3.7}, \eqref{3.8} and \eqref{3.10}, the estimate \eqref{bounds on weight function} follows immediately.
Taking $f(r)=\frac1r-\frac2{r_0}$, one can easily deduce the results stated in Lemma \ref{3.4.1}.
\end{proof}
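As a numerical sanity check of the limit \eqref{3.10}, the following short script (a sketch, not part of the proof) evaluates $\widetilde{\chi}(r)$ by quadrature in the simplified regime $\psi\equiv v_+$, with the hypothetical parameter values $\mu=1$, $v_+=-1$, $r_0=1$ (none of which come from the paper) and the choice $f(r)=\frac1r-\frac2{r_0}$; the predicted limit is $\mu|f(\infty)|/|v_+|=2$.

```python
import numpy as np

# Hypothetical parameters (chosen for illustration only): mu = 1, v_plus = -1, r0 = 1,
# together with the simplifying assumption psi(r) == v_plus for all r (asymptotic regime).
mu, v_plus, r0 = 1.0, -1.0, 1.0
f = lambda r: 1.0 / r - 2.0 / r0      # the choice f(r) = 1/r - 2/r0 made in the proof

def chi_tilde(r, tail=60.0, n=200_001):
    # With psi == v_plus, chi_tilde(r) = -int_r^infty f(s) exp(v_plus (s - r) / mu) ds;
    # the exponential decay lets us truncate the integral at r + tail.
    s = np.linspace(r, r + tail, n)
    g = -f(s) * np.exp(v_plus * (s - r) / mu)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s)))  # trapezoidal rule

limit = mu * (2.0 / r0) / abs(v_plus)  # predicted limit mu * |f(infinity)| / |v_plus| = 2

for r in (r0, 5.0, 50.0):
    assert chi_tilde(r) > 0.0          # positivity, as asserted in Lemma 3.4.1

assert abs(chi_tilde(200.0) - limit) < 0.02   # convergence toward the predicted limit
print(chi_tilde(r0), chi_tilde(200.0), limit)
```

In this simplified regime the computed values remain within uniform positive bounds, consistent with $0<c_1\leq\widetilde{\chi}(r)\leq c_2$; a genuine verification for non-constant $\psi$ would require first solving \eqref{2.3} numerically.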
With the above preparations in hand, we now turn to state our main results. The first result is concerned with the nonlinear stability of the stationary wave $\phi(r)$ constructed in Lemma \ref{2.5}.
\begin{theorem}\label{4.1}
Suppose that (\ref{1.3}) holds and $w_0\in H^2.$ Then, there exists a sufficiently small positive constant $\epsilon$ such that if $\|w_0\|_{H^2}\leq\epsilon,$ the initial-boundary value problem (\ref{1.2}) admits a unique global solution $v(t,r)$ satisfying
\begin{equation*}
v(t,r)-\phi(r)\in C([0,\infty);H^1)\cap L^2_{loc}(0,\infty;H^2)
\end{equation*}
and
\begin{equation}\label{4.2}
\lim\limits_{t\to +\infty}\sup_{r\geq r_0}|v(t,r)-\phi(r)|=0.
\end{equation}
\end{theorem}
For the temporal convergence rates of the unique global solution $v(t,r)$ of the initial-boundary value problem (\ref{1.2}) toward the stationary wave $\phi(r)$, we first have the following result on the algebraic decay rates.
\begin{theorem}\label{4.3}
Let the assumptions of Theorem \ref{4.1} be in force and let $\alpha>0.$ If we assume further that $w_0\in H^{0,\alpha},$ then
\begin{equation}\label{4.4}
\sup_{r\geq r_0}|v(t,r)-\phi(r)|\leq C\left(\|w_0\|_{H^2}+|w_0|_\alpha\right)(1+t)^{-\frac \alpha 2}
\end{equation}
holds for $t\geq 0$, where $C$ is a time-independent positive constant.
\end{theorem}
For the temporal exponential decay estimates, we have
\begin{theorem}\label{4.5}
Under the assumptions listed in Theorem \ref{4.1}, for any $\beta$ and $\gamma$ satisfying
\begin{equation}\label{4.6}
0< \beta\leq \min\left\{\frac2{r_0},\frac{8}{(8\|\chi\|_{L^\infty}+1)r_0}\right\},\quad 0<\gamma\leq\frac{3\mu\beta}{8r_0\|\chi\|_{L^\infty}},
\end{equation}
if we assume further that $w_0\in H^{2,\beta}_{\exp},$ then there exists a time-independent positive constant $C>0$ such that
\begin{equation}
\sup_{r\geq r_0}|v(t,r)-\phi(r)|\leq C\|w_0\|_{H^{2,\beta}_{\exp}}\exp(-\gamma t)
\end{equation}
holds for all $t\geq 0$.
\end{theorem}
\begin{remark} In our main results we require the $H^2$-norm of the initial perturbation $w_0(r)$ to be sufficiently small. It would be an interesting problem to deduce the corresponding nonlinear stability result for large initial perturbations, and such a problem is under our current research. For related results on the nonlinear stability of stationary waves for certain hyperbolic conservation laws with dissipation and large initial perturbations, the interested reader is referred to \cite{Fan-Liu-Wang-Zhao-JDE-2014, Fan-Liu-Zhao-AA-2013, Fan-Liu-Zhao-JHDE-2011, Fan-Liu-Zhao-Zou-KRM-2013} and the references cited therein.
\end{remark}
\section{Proof of our main results}
This section is devoted to the proofs of our main results. To make the presentation clear, we divide this section into three subsections, the first one focuses on the proof of Theorem \ref{4.1}.
\subsection{Proof of Theorem \ref{4.1}}
To prove Theorem \ref{4.1}, for positive constants $T>0$ and $M>0$ we first define the set of functions in which we seek the solution of the initial-boundary value problem (\ref{3.3}):
\begin{equation*}
X_M(0,T)=\left\{w(t,r)\in C\left([0,T];H^2\right), w_r(t,r)\in L^2\left([0,T];H^2\right),\ \sup\limits_{t\in [0,T]}\|w(t)\|_{H^2}\leq M\right\}.
\end{equation*}
Put
\begin{equation*}
N(T)=\sup_{0\leq t\leq T}\|w(t)\|_{H^2},
\end{equation*}
then we can get the following local existence result.
\begin{proposition}[Local Existence]\label{5.1}
Under the assumptions listed in Theorem \ref{4.1}, for any positive constant $M,$ there exists a positive constant $T=T(M)$ such that if $\|w_0\|_{H^2}\leq M,$ the initial-boundary value problem (\ref{3.3}) has a unique solution $w(t,r)\in X_{2M}(0,T).$
\end{proposition}
Proposition \ref{5.1} can be proved by a standard iterative method, so we omit the details for brevity.
Our second result is concerned with the following a priori estimates on the local solution $w(t,r)$ constructed in Proposition \ref{5.1}; from these estimates and the local solvability result, Proposition \ref{5.1}, Theorem \ref{4.1} follows immediately.
\begin{proposition}[A Priori Estimates]\label{5.2}
Under the assumptions listed in Theorem \ref{4.1}, suppose that $w(t,r)\in X_M(0,T)$ is a solution of (\ref{3.3}) defined on the time interval $[0,T].$ Then there exists a sufficiently small positive constant $\epsilon_1>0$, independent of $M$ and $T$, such that if $N(T)\leq \epsilon_1,$ the following estimate holds:
\begin{equation}\label{5.2.0}
\|w(t)\|_{H^2}^2+\int_0^t\left(\left\|\frac{w(\tau)}{r}\right\|^2+\|w_r(\tau)\|_{H^2}^2 +w^2(\tau,r_0)\right)d\tau\leq C\|w_0\|_{H^2}^2
\end{equation}
for all $t\in[0,T]$ and some positive constant $C$ independent of $M$ and $T$.
\end{proposition}
\begin{remark} In fact, by choosing the weight function $\widetilde{\chi}(r)$ suitably, we can obtain better a priori estimates. For example, if we take $f(r)=\frac{1}{2\epsilon}\left(r^{-2\epsilon}-r_0^{-2\epsilon}\right)-1$ for $\epsilon>0$ in (\ref{3.5}), then we can deduce the following a priori estimates:
\begin{equation*}
\|w(t)\|_{H^2}^2+\int_0^t\left(\left\|\frac{w(\tau)}{r^{\frac12+\epsilon}}\right\|^2 +\left\|w_r(\tau)\right\|_{H^2}^2+w^2(\tau,r_0)\right)d\tau\leq C\|w_0\|_{H^2}^2,\ \ t\in[0,T].
\end{equation*}
However, $\epsilon$ cannot be taken to be $0,$ which is different from the a priori estimates obtained in \cite{Hashimoto-Matsumura-JDE-2019}.
\end{remark}
\begin{proof}
Firstly, we deduce the zeroth-order energy type estimates. To this end, we first notice that Lemma \ref{3.4.1} implies that the weighted norm $\|\cdot\|_\chi:=\left\|\sqrt{\chi}\,\cdot\,\right\|$ and the norm $\|\cdot\|$ are equivalent. Multiplying $(\ref{3.3})_1$ by $\chi w$, we can get that
\begin{equation*}
\left(\frac 12\chi w^2\right)_t-\frac\mu2\left(\chi_{rr}+\frac1\mu(\psi\chi)_r\right)w^2 +\mu\chi w_r^2+\mu\left(-\chi ww_r+\frac 12\left(\chi_r+\frac\psi\mu\chi\right)w^2\right)_r=-\frac 12\chi ww_r^2.
\end{equation*}
Lemma \ref{3.4.1} together with the above identity yields
\begin{equation}\label{5.2.1}
\left(\frac 12\chi w^2\right)_t+\frac\mu2\frac{w^2}{r^2}+\mu\chi w_r^2 -\mu\left(\chi ww_r+\left(\frac1{r_0}-\frac1{2r}\right)w^2\right)_r=-\frac 12\chi ww_r^2.
\end{equation}
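More precisely, \eqref{5.2.1} follows from the preceding unlabeled identity via the two relations established in Lemma \ref{3.4.1}, namely
\begin{equation*}
\chi_{rr}+\frac1\mu\left(\psi\chi\right)_r=-\frac1{r^2},\qquad \chi_r+\frac\psi\mu\chi=\frac 1r-\frac2{r_0},
\end{equation*}
which turn the coefficient of $w^2$ into $\frac{\mu}{2r^2}$ and the flux term into $-\mu\left(\chi ww_r+\left(\frac1{r_0}-\frac1{2r}\right)w^2\right)_r$. In particular, since $w_r(t,r_0)=0$, the flux evaluated at $r=r_0$ produces the boundary contribution $\frac{\mu}{2r_0}w^2(t,r_0)$ upon integration in $r$.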
Integrating \eqref{5.2.1} with respect to $r$ over $(r_0,\infty)$ and noticing that
\begin{equation*}
-\frac 12\int_{r_0}^\infty\chi ww^2_rdr\leq C\|w\|_{L^\infty}\|w_r\|^2
\leq CN(T)\|w_r\|^2,
\end{equation*}
we obtain
\begin{equation*}
\frac 12\frac {d}{dt}\|w(t)\|_\chi^2 +\frac{\mu}2\left\|\frac{w(t)}{r}\right\|^2+\mu\|w_r(t)\|_\chi^2+\frac \mu{2r_0}w^2(t,r_0)\leq CN(T)\|w_r\|^2.
\end{equation*}
Integrating the above inequality with respect to $t$ over $[0,t]$, we then obtain the following zeroth-order estimate, provided $N(T)$ is small enough:
\begin{equation}\label{5.3}
\|w(t)\|^2+\mu\int_0^t\left(\left\|\frac{w(\tau)}{r}\right\|^2+\|w_r(\tau)\|^2+\frac 1{r_0}w^2(\tau,r_0)\right)d\tau\leq C\|w_0\|^2.
\end{equation}
Now we turn to deal with the first-order energy type estimates. For this purpose, differentiating $(\ref{3.3})_1$ with respect to $r$ once and multiplying the resulting identity by $w_r$ and integrating the resulting equation with respect to $r$ over $[r_0,\infty),$ we obtain
\begin{equation}\label{5.4}
\frac 12\frac{d}{dt}\|w_r(t)\|^2+\mu\left\|w_{rr}(t)\right\|^2\leq C \int_{r_0}^\infty \left(\left|\psi_rw_r^2\right|+\left|\psi w_rw_{rr}\right|+\left|w_r^2w_{rr}\right| \right)(t,r) dr.
\end{equation}
From the properties of the stationary wave $\psi(r)$ obtained in Lemma \ref{2.5}, the terms in the right hand side of the above inequality can be estimated as follows:
\begin{eqnarray*}
\int_{r_0}^\infty\left|\psi_r(r)w^2_r(t,r)\right|dr&\leq&C\left\|w_r(t)\right\|^2,\\
\int_{r_0}^\infty \left|\psi(r) w_r(t,r)w_{rr}(t,r)\right|dr&\leq& C \left\|w_r(t)\right\|^2+\frac\mu4\left\|w_{rr}(t)\right\|^2,\\
\int_{r_0}^\infty\left|w_r^2(t,r)w_{rr}(t,r)\right|dr&\leq& \left\|w_r(t)\right\|_{L^\infty}\int_{r_0}^\infty\left|w_r(t,r)w_{rr}(t,r)\right|dr\\
&\leq& CN(T)\left(\left\|w_r(t)\right\|^2+\left\|w_{rr}(t)\right\|^2\right).
\end{eqnarray*}
Substituting the above estimates into \eqref{5.4}, choosing $N(T)$ sufficiently small, and making use of the zeroth-order energy type estimates (\ref{5.3}), one can get that
\begin{equation}\label{5.5}
\left\|w_r(t)\right\|^2+\mu\int_0^t\left\|w_{rr}(\tau)\right\|^2d\tau\leq C\left\|w_0\right\|_{H^1}^2.
\end{equation}
Finally, we deduce the second-order energy type estimates. To do so, differentiating $(\ref{3.3})_1$ with respect to $r$ twice yields
\begin{equation}\label{5.6}
w_{rrt}-\mu w_{rrrr}=-w_{rr}^2-w_rw_{rrr}-\psi_{rr}w_r-2\psi_rw_{rr}-\psi w_{rrr}.
\end{equation}
Multiplying (\ref{5.6}) by $w_{rr}$ and integrating the resulting equation with respect to $r$ over $[r_0,\infty),$ we get that
\begin{eqnarray}\label{5.7}
&&\frac 12\frac d{dt}\left\|w_{rr}(t)\right\|^2+\mu\left\|w_{rrr}(t)\right\|^2\\
&\leq& -\mu w_{rr}(t,r_0)w_{rrr}(t,r_0)
-\int_{r_0}^\infty\left( w_{rr}^3+w_rw_{rr}w_{rrr} +\psi_{rr}w_rw_{rr}+2\psi_rw^2_{rr}+\psi w_{rr}w_{rrr} \right)(t,r)dr.\nonumber
\end{eqnarray}
Now we turn to estimate the terms in the right hand side of \eqref{5.7}. For the first term, differentiating $(\ref{3.3})_1$ with respect to $r$ once and then taking $r=r_0$, we can get from the facts that $w_r(t,r_0)=0$ (and hence $w_{rt}(t,r_0)=0$) and $\psi(r_0)=V_-$ that
\begin{equation*}
w_{rrr}(t,r_0)=\frac{V_-}{\mu}w_{rr}(t,r_0).
\end{equation*}
Consequently, from the above identity and the Sobolev inequality, the first term in the right hand side of \eqref{5.7} can be estimated as
\begin{eqnarray}\label{5.9}
\left|w_{rr}(t,r_0)w_{rrr}(t,r_0)\right|&\leq& C\left|w_{rr}(t,r_0)\right|^2\nonumber\\
&\leq& C\left\|w_{rr}(t)\right\|\left\|w_{rrr}(t)\right\|\\
&\leq& \frac{\mu}4\left\|w_{rrr}(t)\right\|^2+C\left\|w_{rr}(t)\right\|^2.\nonumber
\end{eqnarray}
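The second line of \eqref{5.9} is the standard trace-type estimate $g^2(r_0)\leq 2\|g\|\|g_r\|$, valid for any $g\in H^1([r_0,\infty))$; indeed, since $g(r)\to0$ as $r\to\infty$,
\begin{equation*}
g^2(r_0)=-\int_{r_0}^\infty\frac{d}{dr}\left(g^2(r)\right)dr =-2\int_{r_0}^\infty g(r)g_r(r)dr\leq 2\|g\|\|g_r\|,
\end{equation*}
and it suffices to apply this with $g=w_{rr}(t,\cdot)$.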
For the remaining terms in the right hand side of (\ref{5.7}), an integration by parts (using $w_r(t,r_0)=0$), the Sobolev inequality and the definition of $N(T)$ tell us that
\begin{eqnarray}\label{5.10}
-\int_{r_0}^\infty \left(w_{rr}^3+w_rw_{rr}w_{rrr}\right)(t,r)dr
&=&\int_{r_0}^\infty w_r(t,r)w_{rr}(t,r)w_{rrr}(t,r)dr\nonumber\\
&\leq& \left\|w_r(t)\right\|_{L^\infty}\int_{r_0}^\infty\left|w_{rr}(t,r)w_{rrr}(t,r)\right|dr\\
&\leq& CN(T)\left(\left\|w_{rr}(t)\right\|^2+\left\|w_{rrr}(t)\right\|^2\right),\nonumber
\end{eqnarray}
\begin{equation}\label{5.11}
-\int_{r_0}^\infty\psi_{rr}(r)w_r(t,r)w_{rr}(t,r)dr\leq C\left(\left\|w_r(t)\right\|^2+\left\|w_{rr}(t)\right\|^2\right),
\end{equation}
\begin{equation}\label{5.12}
-2\int_{r_0}^\infty\psi_r(r) w^2_{rr}(t,r)dr\leq C\left\|w_{rr}(t)\right\|^2,
\end{equation}
\begin{equation}\label{5.13}
-\int_{r_0}^\infty \psi(r) w_{rr}(t,r)w_{rrr}(t,r)dr\leq \frac\mu4\left\|w_{rrr}(t)\right\|^2+C\left\|w_{rr}(t)\right\|^2.
\end{equation}
Substituting the estimates (\ref{5.9})-(\ref{5.13}) into (\ref{5.7}), integrating the resulting inequality with respect to $t$ from $0$ to $t,$
and by making use of the zeroth-order energy type estimates (\ref{5.3}) and the first-order energy type estimates (\ref{5.5}), we have
\begin{equation}\label{5.14}
\left\|w_{rr}(t)\right\|^2+\mu\int_0^t\left\|w_{rrr}(\tau)\right\|^2d\tau\leq C\left\|w_0\right\|_{H^2}^2.
\end{equation}
Having obtained the estimates (\ref{5.3}), (\ref{5.5}) and (\ref{5.14}), the a priori estimate (\ref{5.2.0}) follows immediately and this completes the proof of Proposition \ref{5.2}.
\end{proof}
\subsection{Proof of Theorem \ref{4.3}}
We prove Theorem \ref{4.3} in this subsection. To this end, the key point is to deduce the following result.
\begin{lemma}\label{5.15}
Under the assumptions listed in Theorem \ref{4.3}, for any $0\leq\beta\leq \alpha$ and $\gamma\geq0,$ it holds that
\begin{eqnarray}\label{5.16}
&&(1+t)^\gamma|w(t)|_\beta^2+\int_0^t\left\{\beta(1+\tau)^\gamma|w(\tau)|^2_{\beta-1} +(1+\tau)^\gamma\left|w_r(\tau)\right|^2_\beta
+(1+\tau)^\gamma w^2(\tau,r_0)\right\}d\tau\nonumber\\
&\leq& C\left\{\left|w_0\right|_\beta^2 +\gamma\int_0^t(1+\tau)^{\gamma-1}|w(\tau)|_\beta^2d\tau+
\beta\int_0^t(1+\tau)^\gamma\left\|w_r(\tau)\right\|^2d\tau\right\}.
\end{eqnarray}
\end{lemma}
\begin{proof}
Multiplying (\ref{5.2.1}) by $\langle r\rangle^\beta$ and integrating the resultant equation with respect to $r$ over $[r_0,\infty),$ we obtain
\begin{eqnarray}\label{5.24}
&&\left(\frac12\int_{r_0}^\infty\chi(r)\langle r\rangle^\beta w^2(t,r)dr\right)_t+\mu\int_{r_0}^\infty\left\{A_\beta(r)\langle r\rangle^{\beta-1}w^2(t,r)+\chi(r)\langle r\rangle^\beta w_r^2(t,r)\right\}dr+\frac\mu{2r_0}\langle r_0\rangle^\beta w^2(t,r_0)\nonumber\\
&\leq& C\beta\int_{r_0}^\infty\langle r\rangle^{\beta-1}\left|w(t,r)w_r(t,r)\right|dr,
\end{eqnarray}
where $A_\beta(r)=\frac{\langle r\rangle}{2r^2}+\beta\frac{r}{\langle r\rangle}\left(\frac1{r_0}-\frac1{2r}\right)\geq \frac{\beta}{2\langle r_0\rangle}.$
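The stated lower bound on $A_\beta(r)$ can be checked directly: for $r\geq r_0$ one has $\frac{r}{\langle r\rangle}\geq\frac{r_0}{\langle r_0\rangle}$ and $\frac1{r_0}-\frac1{2r}\geq\frac1{2r_0}$, so that
\begin{equation*}
A_\beta(r)\geq \beta\frac{r}{\langle r\rangle}\left(\frac1{r_0}-\frac1{2r}\right) \geq \frac{\beta r_0}{\langle r_0\rangle}\cdot\frac1{2r_0}=\frac{\beta}{2\langle r_0\rangle}.
\end{equation*}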
The term in the right hand side of (\ref{5.24}) can be controlled by
\begin{eqnarray}\label{algebraic-decay}
&&C\beta\int_{r_0}^\infty\langle r\rangle^{\beta-1}\left|w(t,r)w_r(t,r)\right|dr\nonumber\\
&\leq&\frac{\mu\beta}{4\langle r_0\rangle} \int_{r_0}^\infty\langle r\rangle^{\beta-1}w^2(t,r)dr+\beta C\left(\int_{r_0}^R+\int_{R}^\infty\right)\langle r\rangle^{\beta-1}w_r^2(t,r)dr\\
&\leq&\frac{\mu\beta}{4\langle r_0\rangle}\int_{r_0}^\infty\langle r\rangle^{\beta-1}w^2(t,r)dr +\beta C(R)\int_{r_0}^\infty w_r^2(t,r)dr+\frac{\beta C}R\int_{r_0}^\infty\langle r\rangle^\beta w_r^2(t,r)dr,\nonumber
\end{eqnarray}
where $C(R)$ depends only on $R$.
By choosing $R$ suitably large in \eqref{algebraic-decay}, we can get from \eqref{5.24}, \eqref{algebraic-decay} and Lemma \ref{3.4.1} that
\begin{equation}\label{add-1}
\left(\int_{r_0}^\infty\chi(r)\langle r\rangle^\beta w^2(t,r)dr\right)_t+\int_{r_0}^\infty\left(\langle r\rangle^{\beta-1}w^2(t,r) +\langle r\rangle^\beta w_r^2(t,r)\right)dr+w^2(t,r_0) \leq C\beta\left\|w_r(t)\right\|^2.
\end{equation}
Having obtained \eqref{add-1}, the estimate \eqref{5.16} follows immediately by multiplying \eqref{add-1} by $(1+t)^\gamma$, integrating the resulting inequality with respect to $t$ from $0$ to $t$, and using the properties of the weight function $\chi(r)$ obtained in Lemma \ref{3.4.1}. This completes the proof of Lemma \ref{5.15}.
\end{proof}
Having obtained Lemma \ref{5.15}, by repeating the arguments used in \cite{Kawashima-Matsumura-CMP-1985}, we can deduce the following two results, which are crucial for obtaining the temporal convergence rate of $\|w(t)\|.$
\begin{lemma}\label{5.17}
It holds that for $k=0,1,\cdots,[\alpha],$
\begin{equation}\label{5.18}
(1+t)^k|w(t)|^2_{\alpha-k}+\int_0^t\left\{(\alpha-k)(1+\tau)^k|w(\tau)|^2_{\alpha-k-1} +(1+\tau)^k\left|w_r(\tau)\right|^2_{\alpha-k}
+(1+\tau)^kw^2(\tau,r_0)\right\}d\tau\leq C|w_0|^2_\alpha.
\end{equation}
\end{lemma}
\begin{lemma}\label{5.19}
It holds for any $\epsilon>0$ that
\begin{equation}\label{5.20}
(1+t)^{\alpha+\epsilon}\|w(t)\|^2+\int_0^t(1+\tau)^{\alpha+\epsilon} \left\{\left\|w_r(\tau)\right\|^2+
w^2(\tau,r_0)\right\}d\tau\leq C(1+t)^\epsilon|w_0|_\alpha^2.
\end{equation}
\end{lemma}
To deduce the corresponding temporal convergence rates for the higher-order derivatives of the solution $w(t,r)$ with respect to $r$, we have the following lemma.
\begin{lemma}\label{5.21}
Let $l=1,2,$ then it holds for any $\epsilon>0$ and $t\geq 0$ that
\begin{equation}\label{5.22}
(1+t)^{\alpha+\epsilon}\left\|\partial_r^lw(t)\right\|^2 +\int_0^t(1+\tau)^{\alpha+\epsilon}\left\|\partial_r^lw_r(\tau)\right\|^2
d\tau \leq C(1+t)^\epsilon\left(\left\|w_0\right\|_{H^2}^2+\left|w_0\right|^2_\alpha\right).
\end{equation}
\end{lemma}
With the above results in hand, we now turn to prove Theorem \ref{4.3}. To do so, we first consider the convergence rate of $\|w(t)\|.$ If $\alpha$ is an integer, then we can take $k=\alpha$ in Lemma \ref{5.17} to deduce that
\begin{equation}
(1+t)^\alpha\|w(t)\|^2+\int_0^t(1+\tau)^\alpha\left\{\left\|w_r(\tau)\right\|^2 +w^2(\tau,r_0)\right\}d\tau\leq C|w_0|_\alpha^2,
\end{equation}
which means that
\begin{equation}\label{5.23}
\|w(t)\|\leq C (1+t)^{-\frac\alpha2}|w_0|_\alpha,
\end{equation}
while if $\alpha$ is not an integer, we can use Lemma \ref{5.19} to yield (\ref{5.23}).
For the corresponding temporal convergence rates on $\|\partial_r^lw(t)\|$ for $l=1,2$, we can deduce from Lemma \ref{5.21} that
\begin{equation}
\|w(t)\|_{H^2}\leq C(1+t)^{-\frac\alpha2}\left(\|w_0\|_{H^2}+|w_0|_\alpha\right).
\end{equation}
Consequently, the Sobolev inequality gives
\begin{eqnarray}
\sup\limits_{r\geq r_0}|v(t,r)-\phi(r)|&=&\sup\limits_{r\geq r_0}\left|w_r(t,r)\right|\nonumber\\
&\leq& C\left\|w_r(t)\right\|^{\frac12}\left\|w_{rr}(t)\right\|^{\frac12}\\
&\leq& C\left(\|w_0\|_{H^2}+|w_0|_\alpha\right)(1+t)^{-\frac\alpha 2}.\nonumber
\end{eqnarray}
This is \eqref{4.4} and the proof of Theorem \ref{4.3} is completed.
\subsection{Proof of Theorem \ref{4.5}}
To prove Theorem \ref{4.5}, we first give the following result.
\begin{lemma}\label{5.3.0}
Suppose that (\ref{4.6}) holds and let $B_\beta(r)=\frac{1}{2r^2}+\beta\left(\frac1{r_0}-\frac1{2r}\right),$ then it holds for $r\geq r_0$ that
\begin{equation}\label{5.3.1}
B_\beta(r)\geq\max\left\{\frac{3\beta}{4r_0}, \beta^2\chi(r)\right\}.
\end{equation}
\end{lemma}
\begin{proof} Since
$$
\frac{dB_\beta(r)}{dr} =\frac{\beta r-2}{2r^3},
$$
if $\beta\leq \frac{2}{r_0},$ one can deduce that
$$
\min\limits_{r\geq r_0}B_\beta(r)=B_\beta\left(\frac{2}{\beta}\right),
$$
and consequently for $r\geq r_0$
\begin{equation}\label{add-exponential-1}
B_\beta(r)\geq B_\beta\left(\frac{2}{\beta}\right)=\frac{\beta(8-\beta r_0)}{8r_0}\geq \frac{3\beta}{4r_0}.
\end{equation}
On the other hand, if $\beta\leq \frac{8}{(8\|\chi\|_{L^\infty}+1)r_0},$ one can deduce that
\begin{equation}\label{add-exponential-2}
B_\beta(r)-\beta^2\chi(r)\geq\frac{\beta^2}{r_0}\left(\frac1\beta-\frac{\left(8\|\chi\|_{L^\infty}+1\right) r_0}{8}\right)\geq 0.
\end{equation}
\eqref{add-exponential-1} together with \eqref{add-exponential-2} implies \eqref{5.3.1}. This completes the proof of Lemma \ref{5.3.0}.
\end{proof}
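The inequality \eqref{5.3.1} is elementary and can be double-checked numerically. The following sketch uses the hypothetical values $r_0=1$ and $\|\chi\|_{L^\infty}=2$ (neither comes from the paper) and a $\beta$ strictly inside the admissible range \eqref{4.6}; since $\chi(r)\leq\|\chi\|_{L^\infty}$, testing the bound with $\chi$ replaced by its supremum covers the harder case.

```python
import numpy as np

# Hypothetical values for illustration only: r0 = 1 and ||chi||_inf = 2.
r0, chi_max = 1.0, 2.0

# Take beta strictly inside the admissible range (4.6):
# beta <= min{2/r0, 8/((8 ||chi||_inf + 1) r0)}.
beta = 0.85 * min(2.0 / r0, 8.0 / ((8.0 * chi_max + 1.0) * r0))

r = np.linspace(r0, 500.0, 500_000)
B = 1.0 / (2.0 * r**2) + beta * (1.0 / r0 - 1.0 / (2.0 * r))   # B_beta(r)

# Lemma 5.3.0 claims B_beta(r) >= max{3 beta/(4 r0), beta^2 chi(r)} for r >= r0;
# chi(r) <= chi_max, so beta^2 * chi_max is the stronger test.
assert B.min() >= 3.0 * beta / (4.0 * r0)
assert B.min() >= beta**2 * chi_max
print(beta, B.min())
```

With these values the minimum of $B_\beta$ is attained near $r=2/\beta$, in agreement with the derivative computation in the proof.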
For the exponential temporal convergence rates on $w(t,r)$, we have
\begin{lemma}\label{5.3.2}
Under the assumptions listed in Theorem \ref{4.5}, it holds for all $t\geq 0$ that
\begin{equation}\label{5.3.3}
e^{\gamma t}|w(t)|_{\beta,\exp}^2 +\int_0^te^{\gamma\tau}\left\{\beta|w(\tau)|^2_{\beta,\exp} +\left|w_r(\tau)\right|^2_{\beta,\exp}+w^2(\tau,r_0) \right\}d\tau\leq C|w_0|^2_{\beta,\exp}.
\end{equation}
\end{lemma}
\begin{proof}
Multiplying (\ref{5.2.1}) by $e^{\beta r}$ and then integrating the result with respect to $r$ from $r_0$ to $\infty$ yield
\begin{eqnarray}\label{add-exponential-3}
&&\left(\frac12\int_{r_0}^\infty e^{\beta r}\chi(r) w^2(t,r)dr\right)_t+\mu \int_{r_0}^\infty e^{\beta r}\left\{B_\beta(r)w^2(t,r)+\chi(r) w_r^2(t,r)\right\}dr+\frac{\mu}{2r_0} e^{\beta r_0}w^2(t,r_0)\nonumber\\
&=&\underbrace{-\frac 12\int_{r_0}^\infty e^{\beta r}\chi(r) w(t,r)w_r^2(t,r)dr}_{I_1}\underbrace{-\mu\beta\int_{r_0}^\infty e^{\beta r}\chi(r) w(t,r)w_r(t,r)dr}_{I_2}.
\end{eqnarray}
Now we turn to estimate $I_j (j=1,2)$ term by term. Firstly, $I_1$ is estimated as
\begin{equation*}
\left|I_1\right|\leq C \|w(t)\|_{L^\infty}\left|w_r(t)\right|^2_{\beta,\exp}
\leq CN(T)\left|w_r(t)\right|^2_{\beta,\exp},
\end{equation*}
while for $I_2$, we can get that
\begin{eqnarray*}
\left|I_2\right|&\leq&\frac{\mu}2\int_{r_0}^\infty e^{\beta r}\cdot 2\cdot \beta\chi(r)^{\frac12}|w(t,r)|\cdot\chi(r)^{\frac12}\left|w_r(t,r)\right|dr\\
&\leq&\frac{\mu}2\int_{r_0}^\infty e^{\beta r}\left\{\beta^2\chi(r)|w(t,r)|^2 +\chi(r) w_r(t,r)^2\right\}dr\\
&\leq&\frac{\mu}2\int_{r_0}^\infty e^{\beta r}\left\{B_\beta(r)w^2(t,r)+\chi(r) w_r(t,r)^2\right\}dr,
\end{eqnarray*}
where we have used Lemma \ref{5.3.0}.
Substituting the above estimates into \eqref{add-exponential-3} and by choosing $N(T)$ sufficiently small such that $N(T)\leq \frac{\mu}{4C}$, we can derive that
\begin{equation}\label{5.3.4}
\left(\int_{r_0}^\infty e^{\beta r}\chi(r) w^2(t,r)dr\right)_t +\frac{3\beta\mu}{4r_0}|w|^2_{\beta,\exp}+\frac{\mu}2\int_{r_0}^\infty e^{\beta r}\chi(r) w_r(t,r)^2dr+\frac{\mu e^{\beta r_0}}{r_0}w^2(t,r_0)\leq 0.
\end{equation}
Multiplying (\ref{5.3.4}) by $e^{\gamma t}$ and integrating the result with respect to $t$ from $0$ to $t,$ we obtain
\begin{eqnarray}\label{5.3.41}
&&e^{\gamma t}\int_{r_0}^\infty e^{\beta r}\chi(r) w^2(t,r)dr+\int_0^te^{\gamma \tau}\left\{\frac{3\beta\mu}{4r_0}|w(\tau)|^2_{\beta,\exp}+\frac\mu2\int_{r_0}^\infty e^{\beta r}\chi(r) w_r(\tau,r)^2dr+\frac{\mu e^{\beta r_0}}{r_0}w^2(\tau,r_0)\right\}d\tau\nonumber\\
&\leq& C|w_0|^2_{\beta,\exp}+
\gamma\int_0^t e^{\gamma \tau}\int_{r_0}^\infty e^{\beta r}\chi(r) w^2(\tau,r)drd\tau.
\end{eqnarray}
By assumption (\ref{4.6}), the last term on the right-hand side of (\ref{5.3.41}) is bounded by
\begin{equation}\label{add-exponential-5}
\gamma\int_0^t e^{\gamma \tau}\int_{r_0}^\infty e^{\beta r}\chi(r) w^2(\tau,r)drd\tau\leq\gamma\|\chi\|_{L^\infty}
\int_0^t e^{\gamma\tau}|w(\tau)|^2_{\beta,\exp}d\tau
\leq \frac{3\beta\mu}{8r_0}\int_0^t e^{\gamma\tau}|w(\tau)|^2_{\beta,\exp}d\tau.
\end{equation}
Putting \eqref{5.3.41} and \eqref{add-exponential-5} together, one can deduce (\ref{5.3.3}) immediately. This completes the proof of Lemma \ref{5.3.2}.
\end{proof}
Similar arguments yield the corresponding estimates on $\partial^l_rw(t,r)$ for $l=1,2$:
\begin{lemma}\label{5.3.5}
Under the assumptions listed in Theorem \ref{4.5}, it holds for $t\geq 0$ that
\begin{equation}\label{5.3.6}
e^{\gamma t}\left|w_r(t)\right|_{\beta,\exp}^2 +\int_0^te^{\gamma\tau}\left|w_{rr}(\tau)\right|^2_{\beta,\exp}d\tau\leq C\|w_0\|^2_{H^{1,\beta}_{\exp}}
\end{equation}
and
\begin{equation}\label{5.3.7}
e^{\gamma t}\left|w_{rr}(t)\right|_{\beta,\exp}^2 +\int_0^te^{\gamma\tau}\left|w_{rrr}(\tau)\right|^2_{\beta,\exp}d\tau\leq C\|w_0\|^2_{H^{2,\beta}_{\exp}}.
\end{equation}
\end{lemma}
With Lemma \ref{5.3.2} and Lemma \ref{5.3.5} at hand, Theorem \ref{4.5} follows easily; we omit the details for brevity.
\end{document} | math | 49,830 |
\begin{document}
\title{The consequences of checking for zero-inflation and overdispersion in the analysis of count data}
\author{
\name{Harlan Campbell, [email protected]}
}
\maketitle
\begin{abstract}
\noindent 1. Count data are ubiquitous in ecology and the Poisson generalized linear model (GLM) is commonly used to model the association between counts and explanatory variables of interest. \textcolor{black}{When fitting this model to the data, one typically proceeds by first confirming that the model assumptions are satisfied. If the residuals appear to be overdispersed or if there is zero-inflation, key assumptions of the Poisson GLM may be violated and }researchers will then typically consider alternatives to the Poisson GLM. An important question is whether the potential model selection bias introduced by this data-driven multi-stage procedure merits concern.
\noindent 2. Here we conduct a large-scale simulation study to investigate the potential consequences of model selection bias that can arise in the simple scenario of analyzing a sample of potentially overdispersed, potentially zero-heavy, count data. \textcolor{black}{Specifically, we investigate model selection procedures recently recommended by \cite{blasco2019does}} using either a series of score tests or the AIC statistic to select the best model.
\noindent 3. We find that, when sample sizes are small, model selection based on preliminary score tests (or the AIC) can lead to potentially substantial type 1 error inflation. When sample sizes are large, model selection based on preliminary score tests is less problematic.
\noindent 4. Ignoring the possibility of overdispersion and zero inflation during data analyses can lead to invalid inference. However, if one does not have sufficient power to test for overdispersion and zero inflation, \emph{post hoc} model selection may also lead to substantial bias. This ``catch-22'' suggests that, if sample sizes are small, a healthy skepticism is warranted whenever one rejects the null hypothesis.
\end{abstract}
\begin{keywords}
model selection bias, overdispersion, zero inflation, zero-inflated models
\end{keywords}
\section{Introduction}
\vskip 0.22in
Despite the ongoing debate surrounding the use (and misuse) of significance testing in ecology \citep{murtaugh2014defense, dushoff2019can} (and in other fields \citep{amrhein2019scientists}), hypothesis testing remains prevalent. Indeed, many research fields have been criticized for publishing studies with serious errors of testing and interpretation, and ecologists have been accused of being ``confused'' about when and how to conduct appropriate hypothesis tests \citep{stephens2005information}. One issue that receives a substantial amount of attention is that of failing to check for possible violations of distributional assumptions. According to \citet{freckleton2009seven}, using statistical tests that assume a given distribution on the data while failing to test for the assumptions required of said distribution is one of ``seven deadly sins.''
One of the most popular statistical models in ecology (and in many other fields, e.g., finance, psychology, neuroscience, and microbiome research, \citep{bening2012generalized, loeys2012analysis, zoltowski2018scaling, xu2015assessment}) is the Poisson generalized linear model (GLM) \citep{nelder1972generalized}. With count outcome data, a Poisson GLM is the most common starting point for testing an association between a given outcome, $Y$, and a given covariate of interest, $X$. The Poisson GLM assumes the outcome data, conditional on the covariates, are the result of independent sampling from a Poisson distribution where, importantly, the mean and variance are equal. However, in practice, count data will often show more variation than is implied by the Poisson distribution and the use of Poisson models is not always appropriate \citep{cox1983some}.
Count data frequently exhibit two (related) characteristics: (1) overdispersion and (2) zero-inflation. \textcolor{black}{Overdispersion refers to an excess of variability, while zero-inflation refers to an excess of zeros \citep{yang2010score}. If model residuals} are overdispersed or have an excess of zeros, assumptions underlying the Poisson GLM will not hold and ignoring this will lead to serious errors (e.g., biased parameter estimates and invalid standard errors). It is therefore routine practice for researchers to check if the assumptions required of \textcolor{black}{a} Poisson model hold and adopt an alternative statistical model in the event that they do not; see \citet{zuur2010protocol}.
In the case of overdispersion, popular alternatives to the Poisson GLM include the Quasi-Poisson (QP) model \citep{wedderburn1974quasi} and the Negative Binomial (NB) model \citep{richards2008dealing, linden2011using}. Note that when selecting between the QP and NB models, the best choice is not always straightforward; see \citet{ver2007quasi}. In the case of zero-inflation, popular alternatives to the Poisson GLM include the Zero-Inflated Poisson model (ZIP) \citep{martin2005zero, lambert1992zero} and the Zero-Inflated Negative-Binomial model (ZINB) \citep{greene1994accounting}.
A multi-stage procedure will typically have researchers testing for overdispersion and zero-inflation in a preliminary stage (based on the residuals from a Poisson GLM), before testing the main hypothesis of interest (i.e., the association between $Y$ and $X$) in a second stage; see \citet{blasco2019does}. If the first stage tests are not significant, the Poisson GLM is selected, regression coefficients are estimated along with their standard errors, and $p$-values are calculated allowing one to test for the association between $Y$ and $X$. On the other hand, if the first stage test for overdispersion is significant, a QP or a NB model will be fit to the data. Or, alternatively, if the first stage test for zero-inflation is significant, a ZIP model may be used. In cases when there is evidence of both overdispersion and zero-inflation, more complex models such as the ZINB model or hurdle models will often be considered; see \citet{zorn1998analytic}.
Such a multi-stage, multi-test procedure may appear rather reasonable, and goodness-of-fit tests are frequently reported to confirm that the model-selection is appropriate. However, recently, some researchers have warned against preliminary testing for distributional assumptions; e.g., \citet{shuster2005diagnostics} and \cite{wells2007dealing}. Their warnings are based on the following concern: since the preliminary tests are applied to the same data as the main hypothesis tests, this multi-stage procedure amounts to ``using the data twice'' \citep{hayes2020}. A hypothesis test using a model selected based on preliminary testing fails to take into account one's uncertainty with regards to the distributional properties of the data. \textcolor{black}{Unless the preliminary tests and the main hypothesis tests are entirely independent, this can result in model selection bias.}
The model selection bias at issue here is not the better known model selection bias associated with deciding \emph{post hoc} which variables to include in the model, e.g., the model selection bias associated with stepwise regression \citep{hurvich1990impact, whittingham2005habitat}. Instead, here we are concerned with the potential bias introduced when deciding \emph{post hoc} which distributional assumptions should be accepted. The implications of considering \emph{post hoc} alternatives (or adjustments) to accommodate for distributional assumptions have been previously considered in other contexts. Three examples come to mind.
First, in the context of time-to-event data, the consequences of checking and adjusting for potential violations of the proportional hazards (PH) assumption required of a Cox PH model are considered by \citet{campbell2014consequences}. The authors find that the ``common two-stage approach'' (in which one selects a model based on a preliminary test for PH) can lead to a substantial inflation of the type 1 error, even in scenarios where there is no violation of the PH assumption.
Second, in the simple context of testing the means of two independent samples, \citet{rochon2012test} investigate the consequences of conducting a preliminary test for normality (e.g., the Shapiro-Wilk test). The authors conclude that while ``[f]rom a formal perspective, preliminary testing for normality is incorrect and should therefore be avoided,'' in practice, ``preliminary testing does not seem to cause much harm.''
Finally, in the context of clinical trials, factorial trials are an efficient method of estimating multiple treatments in a single trial. However, factorial trials rely on the strict assumption of no interaction between the different treatments. \citet{kahan2013bias} investigates the consequences of conducting a preliminary test for the interaction between treatment arms (as is often recommended). By means of a simulation study, \citet{kahan2013bias} shows that the estimated treatment effect from a factorial trial under the ``two-stage analysis'' can be severely biased, even in the absence of a true interaction.
Model selection bias is considered a ``quiet scandal in the statistical community'' \citep{breiman1992little} and is now all the more important to understand given recent concerns with research reproducibility and researcher incentives \citep{kelly2019, nosek2012scientific, gelman2013garden, fraser2018questionable}. In some fields, such as psychology, the issue is finally being recognized. \citet{williams2019dealing} conclude that ``it is currently unclear how [psychology] researchers should deal with distributional assumptions'' since ``diagnosing and responding to distributional assumption problems'' may result in ``error rates [that] vary considerably from the nominal error rates.''
In ecology, some have warned about model selection bias (e.g., \citet{buckland1997model}), but the problem ``remains widely over-looked'' \citep{whittingham2006we}. Indeed, ecologists will readily admit that ``this problem is commonly not appreciated in modelling applications'' \citep{whittingham2005habitat}. \citet{anderson2007model} notes that: ``Model selection bias is subtle but its effects are widespread and little understood by many people working in the life sciences.''
{In this paper, we conduct a large-scale simulation study to investigate the potential consequences of model selection bias that can arise in the simple scenario of analyzing a sample of potentially overdispersed, potentially zero-inflated, count data.} \textcolor{black}{It is difficult to anticipate what these consequences might be. Often, while model selection bias is problematic from a theoretical perspective, it does not lead to substantial problems in practice. We restrict our attention to two model selection procedures, one based on conducting score tests, and another based on calculating AIC statistics.}
In Section 2, we review commonly used models and in Section 3, we outline the framework of a simulation study to investigate the consequences of checking for zero-inflation and overdispersion. In Section 4, we discuss the results of this simulation study and we conclude in Section 5 with a summary of findings and general recommendations.
\section{Materials and Methods}
\subsection{Models for the analysis of count data}
Let us begin with the simplest version of the Poisson GLM. Let $Y_{i}$, for $i$ in $1, \ldots, n$, be the outcome of interest observed for $n$ independent samples. Let $X_{i}$, for $i$ in $1,\ldots,n$, represent a single covariate of interest. If the covariate of interest is categorical with $k$ different categories (e.g., $k$ different species of fish), $X_{i}$ will be a vector with length equal to $k-1$; otherwise it will be a single scalar and $k=2$. The simplest Poisson regression model, with a standard $log$ link, will have that:
\begin{equation}
Y_{i} \sim Poisson(\lambda_{i} = exp(\beta_{0} + \beta_{X}X_{i})), \quad \textrm{or equivalently:}
\end{equation}
\textcolor{black}{
\begin{equation}
Pr(Y_{i}=y_{i}|\beta_{0}, \beta_{X}) = \frac{(exp(\beta_{0} + \beta_{X}X_{i}))^{y_{i}}exp(-exp(\beta_{0} + \beta_{X}X_{i}))}{y_{i}!},
\end{equation}
}
\noindent for $i \textrm{ in }1,\ldots,n$, where $\beta_{0}$ is the intercept, and $\beta_{X}$ is the coefficient (or coefficient-vector of length $k-1$) representing the association between $X$ and $Y$. Note that this model implies the following equality: $E(Y_{i}) = Var(Y_{i}) = \lambda_{i}$, for $i$ in $1,\ldots, n$.
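As a concrete check, the pmf above can be coded directly (a minimal sketch; the function name is ours):

```python
import math

def poisson_glm_pmf(y, x, beta0, beta_x):
    """Pr(Y = y | X = x) under the Poisson GLM with log link,
    i.e. lambda = exp(beta0 + beta_x * x)."""
    lam = math.exp(beta0 + beta_x * x)
    return lam**y * math.exp(-lam) / math.factorial(y)
```

With $\beta_{X}=0$ the model reduces to a plain Poisson with mean $e^{\beta_{0}}$, which is exactly how the null-hypothesis scenarios in the simulation study below are generated.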
Parameter estimates, $\widehat{\beta_{0}}$, and $\widehat{\beta_{X}}$, can be obtained by maximum likelihood estimation via iterative Fisher scoring. A confidence interval for $\beta_{X}$ is typically calculated by the standard profile likelihood approach where one inverts a likelihood-ratio test; see \citet{venzon1988method}, or more recently \cite{uusipaikka2008confidence}. Maximum likelihood estimation via iterative Fisher scoring is implemented as a default for the \verb|glm()| function in \verb|R| \citep{dunn2018generalized} and profile likelihood confidence intervals are provided by default when using the \verb|confint()| function for GLMs; see \citet{ripley2013package}.
To test whether there is an association between $Y$ and $X$, we define the following hypothesis test: $H_{0}: \beta_{X} = 0$ vs. $H_{1}: \beta_{X} \ne 0$. A simple likelihood ratio test (LRT) or Wald test will provide a $p$-value to evaluate this hypothesis; see \citet{zeileis2008regression}. The LRT and Wald test are asymptotically equivalent. For the likelihood ratio test, the $Z$ statistic is obtained by calculating the null and residual deviance as $Z_{LRT} = D_{0}-D_{1}$, where:
\(D_{0} =2 \sum_{i=1}^{n}\left\{Y_{i} \log \left(Y_{i} / exp(\widehat{\beta}_{0}) \right)-\left(Y_{i}-exp(\widehat{\beta}_{0})\right)\right\}\), \textrm{and:} \\
\textcolor{black}{
\(D_{1} =2 \sum_{i=1}^{n}\left\{Y_{i} \log \left(Y_{i} / \widehat{{\lambda}_{i}}\right)-\left(Y_{i}-\widehat{{\lambda}_{i}}\right)\right\}\), where $\widehat{\lambda_{i}}=\exp (\widehat{\beta}_{0}+\widehat{\beta}_{X} X_{i})$.
}
If the distributional assumptions of the Poisson GLM are met and the null hypothesis holds, the $Z$ statistic will follow (asymptotically) a $\chi^{2}$ distribution with $df = k-1$ degrees of freedom, and the $p$-value is calculated as: $p\textrm{-value} = \Pr\left(\chi^{2}_{k-1} \geq Z\right)$. (For the Wald test, with $k=2$, the $Z$-statistic is defined as $Z_{Wald} = \left({\widehat{\beta_{X}}}/{\textrm{se}(\widehat{\beta_{X}})}\right)^{2}$, where $\textrm{se}(\widehat{\beta_{X}})$ is the standard error of the maximum likelihood estimate (MLE); see \cite{hilbe20077} for details when $k>2$).
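The drop-in-deviance computation can be sketched in a few lines (helper names are ours; for $k=2$, i.e. one degree of freedom, the $\chi^{2}_{1}$ upper tail equals $\operatorname{erfc}(\sqrt{Z/2})$, which avoids any external dependency):

```python
import math

def poisson_deviance(y, lam):
    """Poisson deviance 2*sum{ y*log(y/lam) - (y - lam) };
    the y*log(y/lam) term is taken as 0 when y == 0."""
    total = 0.0
    for yi, li in zip(y, lam):
        term = yi * math.log(yi / li) if yi > 0 else 0.0
        total += 2.0 * (term - (yi - li))
    return total

def lrt_pvalue_df1(y, lam_null, lam_alt):
    """Drop-in-deviance statistic (null deviance minus residual
    deviance), compared with chi^2 on 1 degree of freedom; the
    chi^2_1 upper tail is erfc(sqrt(Z/2))."""
    z = poisson_deviance(y, lam_null) - poisson_deviance(y, lam_alt)
    return math.erfc(math.sqrt(max(z, 0.0) / 2.0))
```

Here `lam_null` are the fitted means under the intercept-only model and `lam_alt` the fitted means under the model including $X$.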
However, if the distributional assumptions do not hold, the $Z$ statistic will be compared with the wrong reference distribution invalidating any significance test (and associated confidence intervals). {Therefore, in order to conduct valid inference, researchers will typically carry out an extensive model selection procedure. \textcolor{black}{Note that model selection must always be based on model residuals and not on the distribution of the response variable (which is erroneously done on occasion).} \citet{blasco2019does} outline and illustrate a proposed protocol. Such a procedure is typically based on:
\begin{itemize}
\item measuring indices (e.g., the dispersion index \citep{fisher1950significance}; the zero-inflation index \citep{puig2006count});
\item conducting score tests (e.g., the $D\&L$ score test for Poisson vs. NB regression \citep{dean1989tests}; the $vdB$ score test for Poisson vs. ZIP regression \citep{van1995score}; the score test for ZIP vs ZINB regression \citep{ridout2001score});
\item and evaluating candidate models with goodness-of-fit tests (e.g., likelihood ratio tests; the Vuong and Clarke tests) and model selection criteria (e.g., AIC, AICc and BIC).
\end{itemize}
}
In this paper, for simplicity, we will only consider three alternative models \textcolor{black}{in addition to the Poisson model described above}: the (type 2) NB, the ZIP, and the (type 2) ZINB regression models as described in \citet{blasco2019does}. Let us briefly review these three alternative regression models.
\noindent \textbf{(1) The ZIP regression model - } We will consider the following zero-inflated Poisson model where the probability of a structural zero, $\omega_{i}$, is a function of the covariate $X_{i}$. Specifically,
\begin{eqnarray}
Pr(Y_{i}=y_{i}|\omega_{i}, \lambda_{i}) &=& \omega_{i} + (1-\omega_{i})exp(-\lambda_{i}), \quad \textrm{if} \quad y_{i}=0; \\
&=& (1-\omega_{i})exp(-\lambda_{i})\lambda_{i}^{y_{i}}/y_{i}!, \quad \textrm{if} \quad y_{i} > 0; \nonumber
\end{eqnarray}
\noindent where we have a log link function for $\lambda_{i}$ and a logit link function for $\omega_{i}$ (for $i$ in $1,\ldots,n$) such that:
\begin{eqnarray}
\label{eq:log_logit1}
\lambda_{i} &=& exp(\beta_{0} + \beta_{X}X_{i}), \quad \textrm{and} \\
\label{eq:log_logit2}
\omega_{i} &=& \left(\frac{exp(\gamma_{0} + \gamma_{X}X_{i})}{1+exp(\gamma_{0} + \gamma_{X}X_{i})}\right).
\end{eqnarray}
The ZIP model has that $0 \le \omega_{i} \le 1$ and $\lambda_{i} > 0$, and implies the following about the mean and variance of the data: $\textrm{E}(Y_{i}) = \lambda_{i}(1-\omega_{i}) = \mu_{i}$ and $\textrm{Var}(Y_{i}) = \mu_{i} + \mu_{i}^2\omega_{i}/(1-\omega_{i})$. \textcolor{black}{ The dispersion index is therefore equal to $d = \textrm{Var}(Y_{i})/\textrm{E}(Y_{i}) = 1 + \lambda_{i}\omega_{i}$. As $\omega_{i} \rightarrow 0$, we have that $Y_{i}$ reverts to follow a Poisson distribution with mean $\lambda_{i}$. } A null hypothesis of no association between $X$ and $Y$ is specified as: $H_{0} : \beta_{X} = \gamma_{X} = 0$.
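The stated ZIP moments can be verified numerically by summing the pmf above directly (a sketch; function names are ours, and the truncation point `ymax` is an assumption adequate for moderate $\lambda$):

```python
import math

def zip_pmf(y, lam, omega):
    """Zero-inflated Poisson pmf: a structural zero with probability
    omega, otherwise a Poisson(lam) draw."""
    p = (1.0 - omega) * math.exp(-lam) * lam**y / math.factorial(y)
    return omega + p if y == 0 else p

def zip_moments(lam, omega, ymax=100):
    """Mean and variance by direct summation, truncated at ymax."""
    mean = sum(y * zip_pmf(y, lam, omega) for y in range(ymax))
    ex2 = sum(y * y * zip_pmf(y, lam, omega) for y in range(ymax))
    return mean, ex2 - mean**2
```

For example, $\lambda=4$, $\omega=0.2$ gives mean $3.2$ and variance $5.76$, so the dispersion index is $5.76/3.2 = 1.8 = 1 + \lambda\omega$, as stated.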
\noindent \textbf{(2) The (type 2) NB regression model - } We will consider the following NB regression model:
\begin{eqnarray}
\Pr(Y_{i} = y_{i}|\nu,\lambda_{i}) = \frac{\Gamma(y_{i} + \nu)}{\Gamma(y_{i} + 1)\Gamma(\nu)}\left(\frac{1}{1 + \lambda_{i}/\nu}\right)^{\nu}\left(\frac{\lambda_{i}/\nu}{1 + \lambda_{i}/\nu} \right)^{y_{i}};
\end{eqnarray}
\noindent where we use a log link function for $\lambda_{i} = exp(\beta_{0} + \beta_{X}X_{i})$, and where $\nu > 0$ is a dispersion parameter that does not depend on covariates. The type 2 NB model implies the following about the mean and variance of the data: $\textrm{E}(Y_{i}) = \lambda_{i}$, and $\textrm{Var}(Y_{i}) = \lambda_{i} + \lambda_{i}^{2}/\nu$. \textcolor{black}{ The dispersion index is therefore equal to $d = Var(Y_{i})/E(Y_{i}) = 1 + \lambda_{i}/\nu$.} As $\nu \rightarrow \infty$, we have that $Y_{i}$ reverts to follow a Poisson distribution with mean $\lambda_{i}$. A null hypothesis of no association between $X$ and $Y$ is specified as: $H_{0} : \beta_{X} = 0$.
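The NB pmf above can be checked numerically in the same way; writing it with log-gamma terms keeps the evaluation stable for large $y$ (a sketch; the function name is ours):

```python
import math

def nb_pmf(y, lam, nu):
    """Type-2 negative binomial pmf, evaluated in log space
    (log-gamma) to avoid overflow for large y."""
    logp = (math.lgamma(y + nu) - math.lgamma(y + 1) - math.lgamma(nu)
            + nu * math.log(nu / (nu + lam))
            + y * math.log(lam / (nu + lam)))
    return math.exp(logp)
```

Summing $y\,p(y)$ and $y^{2}p(y)$ over a long enough range recovers $\textrm{E}(Y)=\lambda$ and $\textrm{Var}(Y)=\lambda+\lambda^{2}/\nu$; e.g., $\lambda=3$, $\nu=2$ gives variance $7.5$ and dispersion index $2.5 = 1+\lambda/\nu$.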
\noindent \textbf{(3) The (type 2) ZINB regression model - } We will consider the following ZINB regression model:
\begin{eqnarray}
Pr(Y_{i}=y_{i}|\nu, \omega_{i}, \lambda_{i}) &=& \omega_{i} + (1-\omega_{i})(1/(1 + \lambda_{i}/\nu))^{\nu}, \quad \textrm{if} \quad y_{i}=0; \\
&=& (1-\omega_{i})\frac{\Gamma(y_{i} + \nu)}{\Gamma(y_{i} + 1)\Gamma(\nu)}\left(\frac{1}{1 + \lambda_{i}/\nu}\right)^{\nu}\left(\frac{\lambda_{i}/\nu}{1 + \lambda_{i}/\nu} \right)^{y_{i}}, \quad \textrm{if} \quad y_{i} > 0; \nonumber
\end{eqnarray}
\noindent where we use a log link function for $\lambda_{i}$ and a logit link function for $\omega_{i}$ as described in equations (\ref{eq:log_logit1}) and (\ref{eq:log_logit2}); and where $\nu > 0$ is a dispersion parameter that does not depend on covariates. The type 2 ZINB model implies the following about the mean and variance of the data: $\textrm{E}(Y_{i}) = \lambda_{i}(1-\omega_{i})$, and $\textrm{Var}(Y_{i}) = (1-\omega_{i})(\lambda_{i} + \lambda_{i}^{2}(\omega_{i} + 1/\nu))$. \textcolor{black}{ The dispersion index is therefore equal to $d = Var(Y_{i})/E(Y_{i}) = 1 + \lambda_{i}(\omega_{i} + 1/\nu)$.} A null hypothesis of no association between $X$ and $Y$ is specified as: $H_{0} : \beta_{X} = \gamma_{X} = 0$.
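Explicitly, this index follows by dividing the variance by the mean:
\begin{equation*}
d = \frac{\textrm{Var}(Y_{i})}{\textrm{E}(Y_{i})} = \frac{(1-\omega_{i})\left(\lambda_{i} + \lambda_{i}^{2}(\omega_{i} + 1/\nu)\right)}{\lambda_{i}(1-\omega_{i})} = 1 + \lambda_{i}\left(\omega_{i} + \frac{1}{\nu}\right),
\end{equation*}
recovering the ZIP index as $\nu \rightarrow \infty$ and the NB index as $\omega_{i} \rightarrow 0$.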
\subsection{Simulation Study}
\label{sec:methods}
As discussed in the previous section, prevailing practice for the analysis of count data is first to try to fit one's data with a Poisson GLM and only consider alternatives in the event that a preliminary test indicates that the distributional assumptions of the Poisson GLM may be violated. We therefore consider the following multi-stage testing procedure in our simulation study. This follows the recommendations of \citet{blasco2019does} yet represents a simplification of the typical process followed by researchers. \cite{walters2007using} also recommends a similar multi-step model selection procedure.
For the illustrative purposes of this paper, we consider the \citet{dean1989tests} score test (D\&L test) for overdispersion and the \cite{vuong1989likelihood} test for zero-inflation (see Appendix for details) in the following seven step procedure:
\pagebreak
\begin{shaded}
\begin{itemize}
\item \textbf{Step 1.} Conduct the $D\&L$ score test for overdispersion ($H_{0}$: Poisson vs. $H_{1}$: NB).
\item{ \begin{itemize}
\item \textbf{Step 2.} If the $D\&L$ score test fails to reject the null, conduct a Vuong test for zero-inflation ($H_{0}$: Poisson vs. $H_{1}$: ZIP). Otherwise, proceed to Step 5.
\item{ \begin{itemize}
\item \textbf{Step 3.} If the Vuong test for zero-inflation fails to reject the null, fit the Poisson GLM and calculate the $p$-value ($H_{0}: \beta_{X} = 0$ vs. $H_{1}: \beta_{X} \ne 0$). Otherwise, proceed to Step 4.
\item \textbf{Step 4.} If the Vuong test for zero-inflation rejects the null, fit the ZIP model and calculate the $p$-value ($H_{0} : \beta_{X} = \gamma_{X} = 0$).
\end{itemize}}
\item \textbf{Step 5.} If the $D\&L$ score test rejects the null, conduct the Vuong test for zero-inflation ($H_{0}$: NB vs. $H_{1}$: ZINB).
\item{ \begin{itemize}
\item \textbf{Step 6.} If the Vuong test for zero-inflation fails to reject the null, fit the NB model and calculate the $p$-value ($H_{0} : \beta_{X} = 0$). Otherwise, proceed to Step 7.
\item \textbf{Step 7.} If the Vuong test for zero-inflation rejects the null, fit the ZINB regression model and calculate the $p$-value ($H_{0} : \beta_{X} = \gamma_{X} = 0$).
\end{itemize}}
\end{itemize}}
\end{itemize}
\end{shaded}
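The branching logic of the seven steps can be sketched as a small function (the two inputs are hypothetical $p$-values standing in for the D\&L score test and the Vuong test; in the actual procedure the zero-inflation comparison differs by branch, Poisson vs.\ ZIP in Step 2 and NB vs.\ ZINB in Step 5):

```python
def select_model(p_overdispersion, p_zero_inflation, alpha=0.05):
    """Sketch of the seven-step selection logic using two
    placeholder p-values."""
    if p_overdispersion >= alpha:        # Step 1: fail to reject
        if p_zero_inflation >= alpha:    # Steps 2-3
            return "Poisson"
        return "ZIP"                     # Step 4
    if p_zero_inflation >= alpha:        # Steps 5-6
        return "NB"
    return "ZINB"                        # Step 7
```

The second-stage test of $\beta_{X}$ (and, for the zero-inflated models, $\gamma_{X}$) is then carried out under whichever model this logic returns.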
Figure \ref{fig:tree} illustrates the multi-stage model selection procedure with the Poisson GLM as the starting point. Note that, in their example analysis of plant-herbivore interaction data, \cite{blasco2019does} conduct a version of the above procedure. First, based on the $D\&L$ score test, ``the data is clearly overdispersed and a NB model was preferred to a Poisson.'' The authors also conduct Vuong and Clarke tests: ``The Vuong and Clarke tests rejected the Poisson and NB models in favour of their zero-inflated versions[...].'' We decided to consider the Vuong test in our simulations instead of the Clarke test (or the Ridout score test), since the Vuong test appears to be the most widely used in practice. \textcolor{black}{We also investigate another, simpler, model selection strategy: among the four models considered, the model with lowest $AIC$ is chosen and the corresponding $p$-value for the association between $X$ and $Y$ is calculated \citep{brooks2019statistical}.}
\begin{figure}
\caption{The multi-stage model selection procedure. The Poisson GLM is the starting point. Three score tests lead to one of four models. Numbers in the top right-hand corner of each node indicate the expected number of datasets (out of a total of 100) to reach each outcome if the data were Poisson (with $\beta_{X}=0$).}
\label{fig:tree}
\end{figure}
We conducted a large-scale simulation study in which samples of data were drawn from four different distributions:
\begin{enumerate}
\item the Poisson distribution: \\
\indent $y_{i} \sim Poisson(\lambda = exp(\beta_{0}))$, for $i$ in 1,...$n$;
\item the (type 2) Negative Binomial distribution: \\
\indent $y_{i} \sim NegBin(\nu, \lambda = exp(\beta_{0}))$, for $i=1,...,n$;
\item the Zero-Inflated Poisson distribution: \\
\indent $y_{i} \sim ZIPoisson(\omega, \lambda = exp(\beta_{0}))$, for $i=1,...,n$; and
\item the Zero-Inflated Negative Binomial distribution: \\
\indent $y_{i} \sim ZINegBin(\nu, \omega, \lambda = exp(\beta_{0}))$,
for $i=1,...,n$.
\end{enumerate}
\noindent For each scenario, all data are simulated under the null hypothesis (i.e., with $\beta_{X}=0$ and $\gamma_{X}=0$). We varied the following: the sample size, $n = (50, 100, 250, 500, 1000, 2000)$, the intercept, $\beta_{0} = (0.5, 1.0, 1.5, 2.0, 2.5)$, and the probability of a structural zero, $\omega = (0, 0.05, 0.1, 0.2, 0.5)$. We also varied the degree of overdispersion by setting $\phi = \nu/\lambda = (\infty, 2, 1, 1/2, 1/3)$ (so that data simulated from the Negative Binomial distribution has a dispersion index of $d = 1 + \lambda/\nu = 1+1/\phi = (1.0, 1.5, 2.0, 3.0, 4.0)$). To be clear, we consider:
\begin{itemize}
\item scenarios with $\phi=\infty$ and $\omega=0$ as those with data simulated from the Poisson distribution;
\item scenarios with $\phi<\infty$ and $\omega=0$ as those with data simulated from the Negative Binomial distribution;
\item scenarios with $\phi=\infty$ and $\omega>0$ as those with data simulated from the Zero-inflated Poisson distribution; and
\item scenarios with $\phi<\infty$ and $\omega>0$ as those with data simulated from the Zero-Inflated Negative Binomial distribution.
\end{itemize}
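This classification, together with the dispersion index $d = 1 + 1/\phi$, can be expressed compactly (a sketch; names are ours, with $\phi=\infty$ encoded as `float('inf')`):

```python
import math

def scenario_family(phi, omega):
    """Classify a simulation scenario by its generating distribution,
    following the four bullet points above."""
    overdispersed = math.isfinite(phi)
    zero_inflated = omega > 0
    if overdispersed and zero_inflated:
        return "ZINB"
    if overdispersed:
        return "NegBin"
    if zero_inflated:
        return "ZIP"
    return "Poisson"

def dispersion_index(phi):
    """d = 1 + 1/phi; equals 1.0 when phi = inf (no overdispersion)."""
    return 1.0 + 1.0 / phi
```

The five values of $\phi = (\infty, 2, 1, 1/2, 1/3)$ thus map to $d = (1.0, 1.5, 2.0, 3.0, 4.0)$, as stated above.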
We considered $X_i$ as a univariate continuous covariate from a Normal distribution, with mean of zero and variance of 100: $X_i \sim Normal (0, 100)$, for $i$ in 1,..., $n$ (as such, $k=2$). Note that the covariate matrix $X$ is simulated anew for each individual simulation run. Therefore, we are considering the case of \emph{random} regressors. \citet{chen2011finite} discuss the difference between fixed and random covariates. The assumption of fixed covariates is generally considered only in experimental settings whereas an assumption of random covariates is typically more appropriate for observational studies.
\textcolor{black}{Note that, for the Poisson distributed data, we are simulating data with overall mean of $\lambda = exp(\beta_{0}) \approx (1.6, 2.7, 4.5, 7.4, 12.2)$. For $\lambda>5$, zeros in the data are quite rare since $Pr(Y=0|\lambda) \approx 0$. The simulation study could be expanded in several ways. For instance, we did not consider models that deal with under-dispersion, even though under-dispersed counts may arise in various ecological studies; see \cite{lynch2014dealing}. Also note that the simulation study only tests for rates of false-positives (since $\beta_{X}=0$ and $\gamma_{X}=0$ for all scenarios). We are not testing for excessive false-negatives (and overly wide confidence intervals) which are also undesirable \citep{brooks2019statistical}. }
In total, we considered 750 distinct scenarios and for each simulated 15,000 unique datasets. For each dataset, we conducted the seven step procedure and recorded all $p$-values and whether or not the null hypothesis is rejected at the 0.05 significance level under the entire procedure. We also recorded all AIC statistics. We are interested in the unconditional type 1 error. We specifically chose to conduct 15,000 simulation runs so as to keep computing time within a reasonable limit while also reducing the amount of Monte Carlo standard error to a negligible amount (for looking at type 1 error with $\alpha=0.05$, Monte Carlo SE will be approximately $0.0018 \approx \sqrt{0.05(1-0.05)/15,000}$; see \cite{morris2019using}).
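The quoted Monte Carlo standard error follows from the usual binomial formula for an estimated proportion:

```python
import math

def monte_carlo_se(alpha, n_sim):
    """Monte Carlo standard error of an estimated rejection rate
    alpha based on n_sim independent simulation runs."""
    return math.sqrt(alpha * (1.0 - alpha) / n_sim)
```

With $\alpha=0.05$ and $15{,}000$ runs this evaluates to roughly $0.0018$, small enough that differences between empirical rejection rates of, say, $0.05$ and $0.06$ are well outside simulation noise.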
To test the association between $X$ and $Y$ with each of the regression models, we conducted a Wald test to obtain the necessary $p$-value since in \verb|R|, the $p$-values in the default \verb|summary.glm| output are from Wald tests. Moreover, in initial simulations, LRTs performed rather erratically in rare situations when the model was misspecified (e.g., when a Poisson model was fit to ZIP data).
\section{Discussion}
\paragraph*{Analysis under the ``correct model'' - }
We first wish to confirm that the models under investigation deliver correct type 1 error when used as intended. In other words, suppose the ``correct model'' is known \emph{a priori} and is used regardless of any preliminary diagnostic testing, would we obtain the desired 0.05 level of type 1 error? See Figure \ref{fig:correct} which plots the rejection rates corresponding to this question.
\begin{figure}
\caption{The empirical level of Type 1 error obtained under the ``correct'' model. For panel 1, the ``correct'' model is the Poisson GLM; for panels 2-5 the ``correct'' model is the NB GLM; for panels 6, 11, 16, 21, the ``correct'' model is the ZIP GLM; and for other panels, the ``correct'' model is the ZINB GLM.}
\label{fig:correct}
\end{figure}
In summary, we see that for data simulated from the Poisson distribution (Figure \ref{fig:correct}, panel 1), empirical type 1 error is slightly smaller than 0.05 for small sample-size scenarios ($n \le 100$) and approximately 0.05 otherwise. We also note that for data simulated from the NB distribution (Figure \ref{fig:correct}, panels 2-5), empirical type 1 error is approximately 0.05 for all $n \ge 100$ and for all $\beta_{0}$. For data simulated from the ZIP distribution (see Figure \ref{fig:correct}, panels 6, 11, 16, 21), empirical type 1 error can be substantially conservative (i.e., less than 0.05) for small values of $n$ and small values of $\omega$. Finally, for ZINB data, we note that, when $n$ is small, the type 1 error appears to be higher than the advertised rate of 0.05 for some scenarios and less than 0.05 for others. For example, with $n = 100$, $\beta_{0}=0.5$, $\phi = 1/3$, and $\omega=0.5$, the type 1 error is 0.074, whereas, when $n = 100$, $\beta_{0}=2.5$, $\phi = 2$, and $\omega=0.05$, the type 1 error is 0.040 (see Figure \ref{fig:correct}, panels 7 and 25).
We also note that none of the models appear to be ``robust'' to model misspecification. The Poisson model applied to non-Poisson data leads to very high rejection rates (so high they are often off the chart in Appendix - Figure \ref{fig:poisson}). The ZIP model also performs poorly when applied to non-ZIP data (see Appendix - Figure \ref{fig:zip}), as does the NB model when applied to non-NB data (see Appendix - Figure \ref{fig:nb}), and as does the ZINB model (see Appendix - Figure \ref{fig:zinb}) when applied to non-ZINB data (specifically when applied to Poisson data and NB data).
As such, it seems inadvisable to recommend simply fitting a ZIP or ZINB to Poisson data if one is uncertain about the possibility of zero-inflation or overdispersion. As the sample size, $n$, increases (and as $\beta_{0}$ decreases), the type 1 error rates obtained when the ZIP and ZINB models are fit to Poisson data increase well beyond 0.05 (see Figures \ref{fig:zip} and \ref{fig:zinb}, panel 1). This unexpected result may be due to the fact that these models are testing a null hypothesis that lies on the boundary of the parameter space (i.e., $\omega=0$).
\paragraph*{Preliminary testing - }
The next question is ``how often do the preliminary tests reject their null hypotheses?'' We also wish to determine how often the preliminary testing scheme successfully identifies the ``correct'' model.
\begin{figure}
\caption{The probability of selecting the ``correct model'' after following the seven-step testing scheme outlined in Section \ref{sec:methods}.}
\label{fig:correct_model}
\end{figure}
Let us first consider the D\&L score test (see Appendix - Figure \ref{fig:overdispersion}) and specifically as it applies to the NB scenarios. Recall that the NB scenarios are those with overdispersion ($\phi<\infty$) but no structural zero-inflation ($\omega=0$). With the exception of the small sample-size scenarios with a small amount of overdispersion ($n\le100, \phi\le2$), the D\&L test correctly rejects the null hypothesis of no overdispersion for the vast majority of cases (Appendix - Figure \ref{fig:overdispersion}, panels 2-5). For Poisson data (when $\phi=\infty$ and $\omega=0$), the D\&L test appears to show approximately correct type 1 error, with rejection rates ranging from 0.0368 to 0.0514 (see Figure \ref{fig:overdispersion}, panel 1). However, for ZIP data (when $\phi=\infty$ and $\omega>0$), the D\&L test will often reject the null hypothesis of no overdispersion; see Figure \ref{fig:overdispersion}, panels 6, 11, 16, 21. The rate of rejection increases with increasing sample size, with increasing $\omega$, and with increasing $\beta_{0}$. Strictly speaking, rejection in these cases is correct since an excess of zeros ($\omega>0$) does contribute to overdispersion. \textcolor{black}{However, it must be noted that using the NB model for overdispersion when the underlying issue is zero-inflation is not appropriate; see \citet{harrison2014using}. Indeed, when the NB model is fit to ZIP data, we record type 1 error rates either much too low or much too high, depending on $\omega$, $\beta_{0}$, and $n$; see Figure \ref{fig:nb}, panels 6, 11, 16, and 21.}
Now let us discuss the Vuong test for zero-inflation. See Appendix - Figures \ref{fig:vuongP} and \ref{fig:vuongNB} for the Vuong test results. Note that the ``Poisson vs. ZIP'' Vuong test will often reject the null of no zero-inflation for NB data (Appendix - Figure \ref{fig:vuongP}, panels 2-5). In contrast, the ``NB vs. ZINB'' Vuong test will rarely reject the null of no zero-inflation for NB data (Appendix - Figure \ref{fig:vuongNB}, panels 2-5). In this way, the Vuong test acts as a second-line defense against erroneously selecting the Poisson model. If the D\&L score test fails to select the NB model in Step 1, the ``Poisson vs. ZIP'' Vuong test in Step 3 will often reject the Poisson model in favour of the ZIP model (particularly when $n$ and $\beta_{0}$ are large). The ZIP model, when used for NB data, is not ideal, but it is definitely preferable to the Poisson model; compare Appendix - Figures \ref{fig:poisson} and \ref{fig:zip}, panels 2-5.
Overall, the probability that the preliminary seven-step testing scheme selects the ``correct'' model depends highly on $\beta_{0}$, $\omega$, $\phi$, and $n$; see Figure \ref{fig:correct_model}. With Poisson data, if each of the diagnostic tests were truly independent (and each had an $\alpha=0.05$ type 1 error rate), then the probability of selecting the ``correct'' model should be 90.25\% ($=0.95\times0.95$); see Figure \ref{fig:tree}. The numbers we obtain from the simulation study range from 90.21\% to 96.19\%.
For the ZIP data scenarios ($\omega>0$, $\phi=\infty$; Figure \ref{fig:correct_model}, panels 6, 11, 16, 21), the ``incorrect'' ZINB model is chosen in a majority of cases. This may not necessarily lead to type 1 error inflation since the ``incorrect'' ZINB model is often conservative when applied to ZIP data; see Appendix - Figure \ref{fig:zinb}. For ZINB data scenarios (i.e., when $\omega>0$ and $\phi<\infty$), in cases when the ZINB model is not selected, it is most likely that the NB model is selected instead. This also might not necessarily lead to type 1 error inflation since the misspecified NB model appears to maintain a type 1 error rate at or below the advertised rate in many of these situations (specifically when $\phi<2$ and $\omega<0.2$); see Appendix - Figure \ref{fig:nb}.
\paragraph*{Post-testing unconditional type 1 error - }
Our main question of interest is whether or not the null hypothesis of no association between $X$ and $Y$ is rejected at the desired 0.05 significance level when following the entire seven-step procedure outlined in Section \ref{sec:methods}. The corresponding rejection rates are plotted in Figure \ref{fig:type1}. Table \ref{tab:rates} lists rejection rates and model selection rates for a select number of scenarios. Let us consider the results for each distribution.
\begin{figure}
\caption{Type 1 error obtained following the seven-step testing scheme outlined in Section \ref{sec:methods}.}
\label{fig:type1}
\end{figure}
\begin{table}[h!]
\centering
\begin{footnotesize}
\begin{tabular}{|lcccc|}
\hline
& \textbf{Poisson GLM} & \textbf{ZIP GLM} & \textbf{NB GLM} & \textbf{ZINB GLM} \\
\hline
\multicolumn{3}{|l}{\textbf{Scenario ``3''}} & &\\
\multicolumn{3}{|l}{($n=250$, $\beta_{0}=0.5$, $\phi = \infty$, $\omega= 0$; Poisson)} & &\\
\hspace{0.1cm} $\operatorname{Pr}( \textrm{reject $H_{0}$} )$& 0.05 & 0.04 & 0.05 & 0.04 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ selected by tests})$& 0.05 & 0.04 & 0.02 & 0.00 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ selected by tests})$& 0.93 & 0.02 & 0.05 & 0.00 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ has lowest AIC})$& 0.05 & 0.16 & 0.02 & 0.04 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ has lowest AIC})$& 0.83 & 0.11 & 0.05 & 0.01 \\
\multicolumn{3}{|l}{\textbf{Scenario ``6''}} & &\\
\multicolumn{3}{|l}{($n=2,000$, $\beta_{0}=0.5$, $\phi = \infty$, $\omega= 0$; Poisson)} & &\\
\hspace{0.1cm} $\operatorname{Pr}( \textrm{reject $H_{0}$} )$& 0.05 & 0.07 & 0.05 & 0.06 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ selected by tests})$& 0.05 & 0.09 & 0.04 & 0.30 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ selected by tests})$& 0.93 & 0.02 & 0.05 & 0.00 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ has lowest AIC})$& 0.05 & 0.30 & 0.04 & 0.13 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ has lowest AIC})$& 0.83 & 0.10 & 0.06 & 0.01 \\
\multicolumn{3}{|l}{\textbf{Scenario ``36''}} & &\\
\multicolumn{3}{|l}{($n=2,000$, $\beta_{0}=0.5$, $\phi = 2$, $\omega= 0$; NB)} & &\\
\hspace{0.1cm} $\operatorname{Pr}( \textrm{reject $H_{0}$} )$& 0.11 & 0.08 & 0.05 & 0.07 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ selected by tests})$& -- & -- & 0.05 & 0.12 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ selected by tests})$& 0.00 & 0.00 & 0.98 & 0.02 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ has lowest AIC})$& -- & -- & 0.05 & 0.24 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ has lowest AIC})$& 0.00 & 0.00 & 0.87 & 0.13 \\
\multicolumn{3}{|l}{\textbf{Scenario ``43''}} & &\\
\multicolumn{3}{|l}{($n=50$, $\beta_{0}=1.5$, $\phi = 2$, $\omega= 0$; NB)} & &\\
\hspace{0.1cm} $\operatorname{Pr}( \textrm{reject $H_{0}$} )$& 0.10 & 0.04 & 0.06 & 0.02 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ selected by tests})$& 0.11 & 0.00 & 0.04 & 0.18 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ selected by tests})$& 0.40 & 0.00 & 0.60 & 0.00 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ has lowest AIC})$& 0.10 & 0.09 & 0.04 & 0.03 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ has lowest AIC})$& 0.33 & 0.08 & 0.56 & 0.03 \\
\multicolumn{3}{|l}{\textbf{Scenario ``182''}} & &\\
\multicolumn{3}{|l}{($n=100$, $\beta_{0}=0.5$, $\phi = 2$, $\omega= 0.05$; ZINB)} & &\\
\hspace{0.1cm} $\operatorname{Pr}( \textrm{reject $H_{0}$} )$& 0.12 & 0.08 & 0.05 & 0.05 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ selected by tests})$ & 0.12 & 0.29 & 0.05 & 0.14 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ selected by tests})$ & 0.09 & 0.00 & 0.87 & 0.04 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ has lowest AIC})$ & 0.12 & 0.13 & 0.05 & 0.14 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ has lowest AIC})$ & 0.06 & 0.19 & 0.67 & 0.08 \\
\multicolumn{3}{|l}{\textbf{Scenario ``302''}} & &\\
\multicolumn{3}{|l}{ ($n=100$, $\beta_{0}=0.5$, $\phi = \infty$, $\omega= 0.1$; ZIP)} & &\\
\hspace{0.1cm} $\operatorname{Pr}( \textrm{reject $H_{0}$} )$& 0.06 & 0.05 & 0.05 & 0.04 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ selected by tests})$ & 0.07 & 0.17 & 0.03 & 0.15 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ selected by tests})$ & 0.71 & 0.02 & 0.25 & 0.02 \\
\hspace{0.1cm} $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{$M$ has lowest AIC})$ & 0.06 & 0.10 & 0.03 & 0.07 \\
\hspace{0.1cm} $\operatorname{Pr} (\textrm{$M$ has lowest AIC})$ & 0.51 & 0.31 & 0.16 & 0.02 \\
\hline
\end{tabular}
\end{footnotesize}
\caption{Rejection rates and model selection rates for a number of selected scenarios from the simulation study. These numbers can be used to calculate the overall unconditional type 1 error rates. For example, for Scenario ``302'', the type 1 error obtained after model selection via AIC is 0.071 (=$ 0.06\times0.51 + 0.10\times0.31 + 0.03\times0.16 + 0.07\times0.02$); and the type 1 error obtained after model selection via sequential score tests is 0.060 (=$0.07\times0.71 + 0.17\times0.02 + 0.03\times0.25 + 0.15\times0.02$). Displayed rates are rounded to two decimals; the reported totals are computed from unrounded rates and may differ slightly from products of the displayed entries.}
\label{tab:rates}
\end{table}
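The law-of-total-probability calculation illustrated in the caption of Table \ref{tab:rates} applies to any scenario. The following sketch makes it explicit (written in Python purely for illustration; the study itself was carried out in R, and the function name is ours), here using the Scenario ``36'' AIC rates:

```python
def unconditional_rejection_rate(selection_probs, conditional_rates):
    """Law of total probability:
    Pr(reject H0) = sum over models M of
    Pr(M selected) * Pr(reject H0 | M selected)."""
    assert abs(sum(selection_probs) - 1.0) < 1e-6
    return sum(p * r for p, r in zip(selection_probs, conditional_rates))

# Scenario "36", AIC selection: NB chosen 87% of the time (conditional
# rejection rate 0.05), ZINB chosen 13% of the time (rate 0.24).
rate = unconditional_rejection_rate([0.87, 0.13], [0.05, 0.24])
print(round(rate, 4))  # 0.0747
```

Applied to the rounded Scenario ``302'' entries, the result differs slightly from the reported 0.071 and 0.060, which appear to be computed from unrounded rates.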
First, for data simulated from the Poisson distribution (Figure \ref{fig:type1}, panel 1), empirical type 1 error appears to be unaffected by model selection bias. This is due to the fact that incorrect models are rarely selected, even when sample sizes are small (see Figure \ref{fig:correct_model}, panel 1). Consider two specific scenarios:
\begin{itemize}
\item Scenario ``3'' ($n=250$, $\beta_{0}=0.5$, $\phi=\infty$, and $\omega=0$) - When $\beta_{0}=0.5$ and $n=250$, the Poisson model is correctly selected in approximately 93\% of cases while the NB and ZIP models are selected in about 5\% and 2\% of cases, respectively. Numbers in the top right-hand corner of each node in Figure \ref{fig:tree} indicate the expected number of datasets (out of a total of 100) to reach each outcome if the data were Poisson (with $\beta_{X}=0$), and each of the tests were truly independent (with an $\alpha=0.05$ type 1 error rate). The numbers in parentheses correspond to results from the simulation study for this scenario.
For those datasets directed to the NB and ZIP models, the null hypothesis of no association between $X$ and $Y$ is rejected with probabilities of 0.021 (=0.10/(4.63+0.10)) and 0.044 (=0.11/(2.34+0.11)), respectively. As such, model selection bias, in this case, has the innocuous effect of ever so slightly lowering the type 1 error level: the Poisson GLM fit to this data provides a type 1 error rate of 0.050, whereas the unconditional type 1 error rate obtained after following the seven-step procedure is 0.049 (=0.0471+0.0011+0.0010+0.0000).
\item Scenario ``6'' ($n=2,000$, $\beta_{0}=0.5$, $\phi=\infty$, and $\omega=0$) - When $\beta_{0}=0.5$ and $n=2,000$, the Poisson model is correctly selected in approximately 93\% of cases while the NB and ZIP models are selected in about 5\% and 2\% of cases, respectively. While the NB model is conservative for this data ($\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{NB model selected by tests}) = 0.037$), the ZIP model is not ($\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{ZIP model selected by tests}) = 0.089$). However, the impact is negligible: the unconditional type 1 error rate obtained after following the seven-step procedure is 0.047.
\end{itemize}
Second, for data simulated from the ZIP distribution (i.e., when $\omega>0$ and $\phi=\infty$), the ``incorrect'' ZINB model is almost always selected due to the fact that the model selection procedure tests for zero-inflation only after first testing for overdispersion. However, the type 1 error under this ``incorrect'' ZINB model is, for most scenarios, not substantially higher than the advertised 0.05 rate (see Appendix - Figure \ref{fig:zinb}, panels 6, 11, 16, 21). There are, however, exceptions where model selection bias is apparent. Consider, for example, scenario ``302'':
\begin{itemize}
\item Scenario ``302'' ($n=100$, $\beta_{0}=0.5$, $\phi = \infty$ and $\omega= 0.1$) - The unconditional type 1 error obtained after following the seven-step procedure is 0.060 (see Figure \ref{fig:type1}, panel 11). Amongst the simulated datasets for which the ZIP model is selected (by the D\&L and Vuong tests), the ZIP model has a rejection rate of 0.168. Amongst the simulated datasets for which the ZINB model is selected, the ZINB model has a rejection rate of 0.154; see Table \ref{tab:rates}. This clearly shows that the diagnostic tests (the D\&L and Vuong tests) and the subsequent hypothesis tests ($H_{0}: \beta_{X}=\gamma_{X}=0$) are not independent of one another. In this instance, the D\&L test will not only screen for overdispersion, but will also direct the data towards a model that is more likely to reject $H_{0}: \beta_{X}=\gamma_{X}=0$, thereby inflating the type 1 error.
\end{itemize}
With data simulated from the NB distribution (i.e., when $\phi<\infty$ and $\omega=0$; see Figure \ref{fig:type1}, panels 2-5), we see that model selection bias can lead to modest type 1 error inflation when $n$ is small. When sample sizes are sufficiently large, there is little evidence of any substantial type 1 error inflation caused by model selection bias. Consider for example ``Scenario 43'':
\begin{itemize}
\item Scenario ``43'' ($n=50$, $\beta_{0}=1.5$, $\phi = 2$ and $\omega= 0$) - The unconditional type 1 error obtained after following the seven-step procedure is 0.067 (see Figure \ref{fig:type1}, panel 2), whereas the type 1 error obtained with the ``correct'' NB model is 0.056. This inflation is due to the fact that, for this data, there is a 40\% probability of selecting the Poisson model following the seven-step procedure and that $\operatorname{Pr}(\textrm{reject $H_{0}$} | \textrm{Poisson model is selected by tests})=0.11$; see Table \ref{tab:rates}.
\end{itemize}
Finally, consider data simulated from the ZINB distribution (i.e., when $\phi<\infty$ and $\omega>0$; see Figure \ref{fig:type1}, panels 7-10, 12-15, 17-20, 22-25). We see type 1 error rates much higher than 0.05 for some scenarios (e.g., when $n$ is small and $\omega$ is large). For example, consider scenario ``182'':
\begin{itemize}
\item Scenario ``182'' ($n=100$, $\beta_{0}=0.5$, $\phi = 2$ and $\omega= 0.05$) - The unconditional type 1 error obtained after following the seven-step procedure is 0.058 (see Figure \ref{fig:type1}, panel 7), whereas the type 1 error obtained with the ``correct'' ZINB model is 0.048. Note that, when applied to the data ignoring the results of the diagnostic tests, both the NB and the ZINB models demonstrate appropriate rejection rates (of 0.051 and 0.048, respectively; see Table \ref{tab:rates}). However, amongst the simulated datasets for which the ZINB model is selected (by the D\&L and Vuong diagnostic tests) the ZINB model has a rejection rate of 0.142. This clearly shows that, to the detriment of the type 1 error rate, the diagnostic tests and the subsequent hypothesis test for $H_{0}: \beta_{X}=\gamma_{X}=0$ are not independent.
\end{itemize}
\paragraph*{AIC model selection - }
We also investigated model selection using the AIC. We were curious as to how often the ``correct'' model is the model with the lowest AIC. Figure \ref{fig:correct_selectionAIC} plots the results. We see that the probability that the AIC statistic selects the ``correct'' model depends highly on $\beta_{0}$, $\omega$, $\phi$, and $n$. Overall, across all scenarios we considered, the AIC selected the correct model for 77\% of datasets whereas the seven-step model selection based on score tests selected the correct model for 58\% of datasets.
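Operationally, AIC-based selection simply fits all four candidate models and keeps the one with the smallest AIC. A minimal sketch (Python, for illustration only; the model names and AIC values below are hypothetical):

```python
def select_by_aic(aic_by_model):
    """Return the name of the model with the smallest AIC. Models that
    failed to fit can carry AIC = float('inf'), mirroring the sentinel
    values used in the R appendix code."""
    return min(aic_by_model, key=aic_by_model.get)

# Hypothetical AIC values for one simulated dataset:
fits = {"Poisson": 412.3, "ZIP": 405.1, "NB": 398.7, "ZINB": 400.2}
print(select_by_aic(fits))  # NB
```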
\begin{figure}
\caption{The probability that the ``correct'' model is the one with the lowest AIC.}
\label{fig:correct_selectionAIC}
\end{figure}
\begin{figure}
\caption{Type 1 error obtained from model with the lowest AIC.}
\label{fig:type1AIC}
\end{figure}
More specifically, for the NB data scenarios ($\omega=0$, $\phi<\infty$; Figure \ref{fig:correct_selectionAIC}, panels 2-5), the ``correct'' NB model is chosen using the AIC in a large majority of cases for most scenarios. In contrast, for ZIP data (i.e., when $\omega>0$ and $\phi=\infty$), the AIC is less capable of determining the ``correct model'' when $\beta_{0}$, $n$, and $\omega$ are small. For ZINB data scenarios (when $\omega>0$ and $\phi<\infty$), the probability of selecting the correct model using the AIC ranges substantially and increases (somewhat predictably) with increasing $n$ and increasing $\beta_{0}$.
We also wish to determine whether or not the null hypothesis of no association between $X$ and $Y$ is rejected at the 0.05 significance level when model selection is performed via the AIC. Figure \ref{fig:type1AIC} shows that, when $\beta_{0}$ is small, there are several scenarios in which the unconditional type 1 error is much too high. Perhaps most surprisingly, with Poisson data (i.e., scenarios with $\omega=0$ and $\phi=\infty$), when $\beta_{0}=0.5$, the unconditional type 1 error increases with increasing $n$, from 0.049 to 0.071 (see Figure \ref{fig:type1AIC}, panel 1). Consider again Scenario ``3'' ($n=250$, $\beta_{0}=0.5$, $\phi=\infty$, and $\omega=0$) and Scenario ``6'' ($n=2,000$, $\beta_{0}=0.5$, $\phi=\infty$, and $\omega=0$); see Table \ref{tab:rates}.
\begin{itemize}
\item For Scenario ``3'', the probability that the AIC incorrectly selects the ZIP GLM is 0.11, and $\operatorname{Pr} (\textrm{reject $H_{0}$}|\textrm{ZIP GLM has lowest AIC}) = 0.16$. The unconditional type 1 error rate = 0.06 (=$0.05\times0.83 + 0.16\times0.11 + 0.02\times0.05 + 0.04\times0.01$).
\item For Scenario ``6'', the probability that the AIC incorrectly selects the ZIP GLM is 0.10, and $\operatorname{Pr} (\textrm{reject $H_{0}$}|\textrm{ZIP GLM has lowest AIC}) = 0.30$. The unconditional type 1 error rate = 0.07 ($= 0.05\times0.83 + 0.30\times0.10 + 0.04\times0.06 + 0.13\times0.01$).
\end{itemize}
With NB data (i.e., scenarios with $\omega=0$ and $\phi<\infty$), the unconditional type 1 error is also much higher than 0.05, even when $n$ and $\beta_{0}$ are large. This is due to the fact that the ZINB model, when erroneously selected in a minority of cases, rejects the null of no association between $X$ and $Y$ at rates much higher than 0.05. This is particularly true when $n$ is large. Consider, for example, Scenario ``36'':
\begin{itemize}
\item
Scenario ``36'' ($n=2,000$, $\beta_{0}= 0.5$, $\phi=2$, and $\omega= 0$) - Amongst the 87\% of datasets for which the AIC correctly selects the NB model, the null hypothesis of no association between $X$ and $Y$ is rejected with probability of exactly 0.050; see Table \ref{tab:rates}. However, amongst the remaining 13\% of datasets for which the ZINB model is erroneously selected, the probability of rejecting the null hypothesis of no association between $X$ and $Y$ is 0.240. As a result the unconditional type 1 error rate is 0.074 ($=0.240\times0.13 + 0.87\times0.05$).
\end{itemize}
In summary, while the AIC is able to select the ``correct'' model more often than the sequential score testing scheme, there appears to be more potential for type 1 error inflation. How can this be? In the presence of model selection bias, selecting the ``correct'' model more often is, somewhat paradoxically, not always preferable. Consider once again Scenario ``302'' (with $n=100$, $\beta_{0}=0.5$, $\phi = \infty$ and $\omega= 0.1$). Following the sequential score tests, the ``correct'' ZIP model was only selected with a 2\% probability. With model selection via AIC, the ``correct'' ZIP model was selected with a 31\% probability. However, the unconditional type 1 error obtained after following the seven-step procedure is 0.060, whereas the unconditional type 1 error obtained after model selection via AIC is 0.071; see Table \ref{tab:rates}. We see a similar phenomenon with Scenario ``182'' (with $n=100$, $\beta_{0}=0.5$, $\phi = 2$ and $\omega= 0.05$); see Table \ref{tab:rates}. The ``correct'' ZINB model is chosen more often with the AIC than with the score tests (8\% vs. 4\%). However, the type 1 error obtained after model selection via AIC is 0.074, vs. 0.058 after model selection by sequential score tests.
\section{Conclusions}
If the population distribution is known in advance, model selection bias will not be a problem. If the assumptions required of the Poisson distribution are known to be wrong, alternative models that do not depend on these assumptions can be used and ideally a valid model can be pre-specified prior to obtaining/observing any data. However, outside of a highly controlled laboratory experiment, this may not be realistic. The potentially problematic (and most likely) scenario is the one in which one cannot, with a high degree of confidence, determine the distributional nature of the data before observing the data. What should be done in these circumstances? \cite{tsou2006robust} suggest using a ``robust'' Poisson regression model ``so that one need not worry about the correctness of the Poisson assumption.'' However, when the distributional assumptions of the Poisson GLM do hold, \cite{tsou2006robust} acknowledge that the ``robust approach might not be as efficient.'' Given the potentially immense expense required to obtain data, anyone working in data-driven research will no doubt be reluctant to adopt any approach which compromises statistical power.
Researchers who do not know in advance whether or not there is overdispersion or zero-inflation, might decide to simply use a ZIP or ZINB as a ``safer bet'' \citep{perumean2013zero} and pay a price in terms of efficiency \citep{williamson2007power}. However, this is problematic. We observed that the ZIP and ZINB models, when fit to ordinary Poisson data, can lead to type 1 error well above the advertised rate when sample sizes are large. (Future work should consider whether hurdle models \citep{rose2006use} are similarly problematic.) Instead, if there is sufficient data, researchers should proceed with a model selection procedure, ideally one based on efficient score tests.
Our simulation study suggests that, if sample sizes are sufficiently large, there is little need to worry about model selection bias following a series of sequential score tests. However, when sample sizes are small, our simulation study demonstrated that model selection bias can lead to potentially substantial type 1 error inflation.
Model selection based on the AIC cannot be recommended. We observed that, even when sample sizes are large, when the true underlying distribution of the data is Poisson, using the AIC to select the ``best'' model can often lead to substantial type 1 error inflation. Future work should investigate the suitability of other information criteria (e.g., BIC, AICc).
Ignoring the possibility of overdispersion and zero-inflation during data analyses can lead to invalid inference. However, if one does not have sufficient power to confidently test for overdispersion and zero-inflation, it \textcolor{black}{may be} best to simply use a model that can accommodate these possibilities (e.g., use a robust model) instead of going through a model selection procedure that might inflate the type 1 error. In summary, if one does not have the power to test for distributional assumptions, testing for distributional assumptions may not be wise. And if one does have a sufficiently large sample size to test for distributional assumptions, testing for distributional assumptions may be very beneficial. Note that our simulation study did not include any covariates, and in studies where there are several covariates, it will no doubt be difficult to determine what constitutes a ``sufficiently large'' sample size. To conclude, be reminded that researchers should always be cautious when interpreting results when $n$ is small \citep{button2013power}. Model selection bias is just one more reason to have a healthy skepticism of NHST when sample sizes are small.
\section{Appendix}
Let us briefly review the \cite{dean1989tests} score test for overdispersion and the Vuong test for zero-inflation.
\noindent \textbf{(1) The $D\&L$ score test - } \citet{dean1989tests} proposed calculating the following score statistic for testing overdispersion:
\begin{equation}
T_{1}=\sum_{i=1}^{n}\left\{\left(y_{i}-\hat{\lambda}_{i}\right)^{2}-y_{i}\right\} /\left(2 \sum_{i=1}^{n} \hat{\lambda}_{i}^{2}\right)^{1 / 2}
\end{equation}
\noindent Under the null hypothesis of no overdispersion, the $T_{1}$ statistic converges in distribution to a standard Normal, and the one-sided $p$-value is calculated as: $p\textrm{-value} = 1 - \Phi(T_{1})$, where $\Phi$ denotes the standard Normal cumulative distribution function.
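As a check on the formula, $T_{1}$ is easy to compute directly. The sketch below (Python, for illustration only; the R code in the next section computes the same statistic) uses an intercept-only Poisson fit, for which every fitted mean $\hat{\lambda}_{i}$ equals $\bar{y}$:

```python
import math

def dean_lawless_T1(y, lam):
    """D&L score statistic for overdispersion:
    T1 = sum_i [(y_i - lam_i)^2 - y_i] / sqrt(2 * sum_i lam_i^2)."""
    num = sum((yi - li) ** 2 - yi for yi, li in zip(y, lam))
    return num / math.sqrt(2.0 * sum(li ** 2 for li in lam))

def upper_tail_p(t):
    """One-sided p-value under a standard Normal null: 1 - Phi(t)."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

y = [0, 1, 2, 3]
lam = [sum(y) / len(y)] * len(y)   # intercept-only fit: lambda-hat = 1.5
t1 = dean_lawless_T1(y, lam)       # equals -1/sqrt(18), about -0.236
print(round(t1, 4), round(upper_tail_p(t1), 4))
```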
\noindent \textbf{(2) The Vuong test for zero-inflation - } The Vuong test statistic is calculated as follows:
\begin{equation}
V = \frac{\sum_{i=1}^{n} \log dL_{i}}{\sqrt{n}\cdot \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\log dL_{i}-\frac{1}{n}\sum_{j=1}^{n}\log dL_{j}\right)^{2}}},
\end{equation}
\noindent where, if the Poisson model is compared to its zero-inflated counterpart, the ZIP model, we define: $\log dL_{i} = \log(Pr_{ZIP}(Y_{i}=y_{i}|\widehat{\omega}_{i}, \widehat{\lambda}_{i})) - \log(Pr_{Pois}(Y_{i}=y_{i}|\widehat{\lambda}_{i})).$ If the NB model is compared to the ZINB model, we define: $\log dL_{i} = \log(Pr_{ZINB}(Y_{i}=y_{i}|\widehat{\nu}, \widehat{\omega}_{i}, \widehat{\lambda}_{i})) - \log(Pr_{NB}(Y_{i}=y_{i}|\widehat{\nu}, \widehat{\lambda}_{i})).$
The $V$ statistic, under the null, will follow a standard Normal distribution and a $p$-value is calculated as: $\textrm{$p$-value} = 1 - \Phi(|V|)$. Note that \citet{desmarais2013testing} have suggested an adjustment to the Vuong test which, for larger samples, may have greater efficiency. Also, note that the Vuong test for zero-inflation, while widely used in practice, is somewhat controversial; see \citet{wilson2015misuse}.
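For concreteness, the raw (uncorrected) version of $V$ reduces to $\sqrt{n}\,\bar{m}/s_{m}$, where $m_{i}=\log dL_{i}$ and $s_{m}$ is their sample standard deviation. A sketch (Python, for illustration only; the R code in the next section uses a modified version of the \texttt{pscl} implementation):

```python
import math

def vuong_raw(log_dL):
    """Raw Vuong statistic from pointwise log-likelihood differences:
    V = sum(m_i) / (sqrt(n) * sd(m_i)), with sd using the n-1 divisor."""
    n = len(log_dL)
    mean = sum(log_dL) / n
    sd = math.sqrt(sum((m - mean) ** 2 for m in log_dL) / (n - 1))
    return sum(log_dL) / (math.sqrt(n) * sd)

# Toy pointwise differences favouring the zero-inflated model on average:
v = vuong_raw([0.1, 0.2, 0.3, 0.4])
print(round(v, 4))  # sqrt(15), about 3.873
```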
\section{R-Code: Models and score tests}
\begin{footnotesize}
\begin{verbatim}
library("pscl")    # zeroinfl(), predprob()
library("lmtest")  # waldtest()
library("MASS")    # glm.nb()
#####################
### MODELS:
#####################
### Poisson model ###
PoissonGLM <- glm(y ~ newX, family = poisson)
Poisson_pval <- waldtest(PoissonGLM)[ "Pr(>F)"][2,]
Poisson_AIC <- AIC(PoissonGLM)
#####################
### NB model ###
NB_AIC<-Inf
NB_pval <- 0.99
tryCatch({
NB_mod <- glm.nb(y ~ newX)
NB_pval <- waldtest(NB_mod)[ "Pr(>F)"][2,]
NB_AIC <- AIC(NB_mod)
},
error=function(e){})
#####################
### ZIP model ###
ZIP_AIC <- Inf
zip_mod <- 99
ZIP_pval <- 0.99
zip_modNA <- TRUE
tryCatch({
zip_mod <- zeroinfl(y ~ newX|newX, dist = "poisson")
zip_modNA <- sum(is.na(unlist((summary(zip_mod))$coefficients)))>0
},
error=function(e){})
if(is.double(zip_mod)| zip_modNA){
tryCatch({
zip_mod <- zeroinfl(y ~ newX|newX , dist = "poisson", EM=TRUE)},
error=function(e){})}
tryCatch({
ZIP_pval <- waldtest(zip_mod)[ "Pr(>Chisq)"][2,]
ZIP_AIC <- AIC(zip_mod)
},
error=function(e){})
#####################
### ZINB model ###
ZINB_AIC <- Inf
zinb_mod <- 99
ZINB_pval <- 0.99
zinb_modNA <- TRUE
tryCatch({
zinb_mod <- zeroinfl(y ~ newX|newX , dist = "negbin")
zinb_modNA <- sum(is.na(unlist((summary(zinb_mod))$coefficients)))>0
},
error=function(e){})
if(is.double(zinb_mod) | zinb_modNA){
tryCatch({
zinb_mod <- zeroinfl(y ~ newX|newX , dist = "negbin", EM=TRUE)
},
error=function(e){})}
tryCatch({
ZINB_pval <- waldtest(zinb_mod)[ "Pr(>Chisq)"][2,]
dimK <- dim(summary(zinb_mod)$coefficients$count)[1]
ZINB_AIC <- AIC(zinb_mod)
},
error=function(e){})
#####################
### SCORE TESTS:
#######
## vuong_f test : compare model1 to model2 (modified from "pscl" package)
#######
vuong_f<-function (m1, m2, digits = getOption("digits"))
{
m1y <- m1$y
m2y <- m2$y
m1n <- length(m1y)
m2n <- length(m2y)
if (m1n == 0 | m2n == 0)
stop("Could not extract dependent variables from models.")
if (m1n != m2n)
stop(paste("Models appear to have different numbers of observations.\n",
"Model 1 has ", m1n, " observations.\n", "Model 2 has ",
m2n, " observations.\n", sep = ""))
if (any(m1y != m2y)) {
stop(paste("Models appear to have different values on dependent variables.\n"))
}
p1 <- predprob(m1)
p2 <- predprob(m2)
if (!all(colnames(p1) == colnames(p2))) {
stop("Models appear to have different values on dependent variables.\n")
}
whichCol <- match(m1y, colnames(p1))
whichCol2 <- match(m2y, colnames(p2))
if (!all(whichCol == whichCol2)) {
stop("Models appear to have different values on dependent variables.\n")
}
m1p <- rep(NA, m1n)
m2p <- rep(NA, m2n)
for (i in 1:m1n) {
m1p[i] <- p1[i, whichCol[i]]
m2p[i] <- p2[i, whichCol[i]]
}
k1 <- length(coef(m1))
k2 <- length(coef(m2))
lm1p <- log(m1p)
lm2p <- log(m2p)
m <- lm1p - lm2p
bad1 <- is.na(lm1p) | is.nan(lm1p) | is.infinite(lm1p)
bad2 <- is.na(lm2p) | is.nan(lm2p) | is.infinite(lm2p)
bad3 <- is.na(m) | is.nan(m) | is.infinite(m)
bad <- bad1 | bad2 | bad3
neff <- sum(!bad)
if (any(bad)) {
cat("NA or numerical zeros or ones encountered in fitted probabilities\n")
cat(paste("dropping these", sum(bad), "cases, but proceed with caution\n"))
}
aic.factor <- (k1 - k2)/neff
bic.factor <- (k1 - k2)/(2 * neff) * log(neff)
v <- rep(NA, 3)
arg1 <- matrix(m[!bad], nrow = neff, ncol = 3, byrow = FALSE)
arg2 <- matrix(c(0, aic.factor, bic.factor), nrow = neff,
ncol = 3, byrow = TRUE)
num <- arg1 - arg2
s <- apply(num, 2, sd)
numsum <- apply(num, 2, sum)
v <- numsum/(s * sqrt(neff))
names(v) <- c("Raw", "AIC-corrected", "BIC-corrected")
pval <- rep(NA, 3)
msg <- rep("", 3)
for (j in 1:3) {
if (v[j] > 0) {
pval[j] <- 1 - pnorm(v[j])
msg[j] <- "model1 > model2"
}
else {
pval[j] <- pnorm(v[j])
msg[j] <- "model2 > model1"
}
}
out <- data.frame(v, msg,(pval))
names(out) <- c("Vuong z-statistic", "H_A", "p-value")
return(out)
}
###################
###########
### the D&L score test for overdispersion:
lambdahat <- yhat <- predict(PoissonGLM, type="response")
T_1 <- sum((y-lambdahat)^2 - y)/ sqrt(2*sum(lambdahat^2))
LRT_pval <- DLtest_pval <- pnorm(T_1, lower.tail=FALSE)
###########
### the Vuong test for zero-inflation:
vuong_P_ZIP_pval <- vuong_NB_ZINB_pval <- 1
if(exists("zip_mod") & sum(y==0)>1 ){
tryCatch({
vv_P_ZIP<-(vuong_f(PoissonGLM, zip_mod))
vuong_P_ZIP_pval <-as.numeric(as.character(vv_P_ZIP[1,3]))
},
error=function(e){})}
if(exists("zinb_mod") & sum(y==0)>1 ){
tryCatch({
vv_NB_ZINB<-(vuong_f(NB_mod, zinb_mod))
vuong_NB_ZINB_pval <-as.numeric(as.character(vv_NB_ZINB[1,3]))
},
error=function(e){})}
#######################################################
\end{verbatim}
\end{footnotesize}
\section{Appendix Figures}
\begin{figure}
\caption{Probability that the D\&L test rejects the null hypothesis that there is no overdispersion.}
\label{fig:overdispersion}
\end{figure}
\begin{figure}
\caption{Probability that the Vuong test rejects the null hypothesis that there is no zero-inflation, comparing the Poisson model to the ZIP model.}
\label{fig:vuongP}
\end{figure}
\begin{figure}
\caption{Probability that the Vuong test rejects the null hypothesis that there is no zero-inflation, comparing the NB model to the ZINB model.}
\label{fig:vuongNB}
\end{figure}
\begin{figure}
\caption{Probability that the Poisson model rejects the null hypothesis $H_{0}$.}
\label{fig:poisson}
\end{figure}
\begin{figure}
\caption{Probability that the ZIP model rejects the null hypothesis $H_{0}$.}
\label{fig:zip}
\end{figure}
\begin{figure}
\caption{Probability that the NB model rejects the null hypothesis $H_{0}$.}
\label{fig:nb}
\end{figure}
\begin{figure}
\caption{Probability that the ZINB model rejects the null hypothesis $H_{0}$.}
\label{fig:zinb}
\end{figure}
\begin{figure}
\caption{Difference between type 1 error under ``correct'' model (in Figure \ref{fig:correct}).}
\label{fig:diff}
\end{figure}
\begin{figure}
\caption{Proportion of datasets for which the preliminary testing scheme selects the Poisson model for analysis.}
\label{fig:propPoisson}
\end{figure}
\begin{figure}
\caption{Proportion of datasets for which the preliminary testing scheme selects the NB model for analysis.}
\label{fig:propNB}
\end{figure}
\begin{figure}
\caption{Proportion of datasets for which the preliminary testing scheme selects the ZIP model for analysis.}
\label{fig:propZIP}
\end{figure}
\begin{figure}
\caption{Proportion of datasets for which the preliminary testing scheme selects the ZINB model for analysis.}
\label{fig:propZINB}
\end{figure}
\end{document} | math | 66,345 |
\begin{document}
\title[Index for Toeplitz Operators as Corollary of Bott Periodicity]{The Index Theorem for Toeplitz Operators as a Corollary of Bott Periodicity}
\author{Paul F.\ Baum}
\address{The Pennsylvania State University, University Park, PA, 16802, USA}
\email{[email protected]}
\author{Erik van Erp}
\address{Dartmouth College, 6188, Kemeny Hall, Hanover, New Hampshire, 03755, USA}\email{[email protected]}
\dedicatory{
Dedicated to the memory of Sir Michael Atiyah.}
\maketitle
\tableofcontents
\section{Introduction}
This is an expository paper about the index of Toeplitz operators,
and in particular Boutet de Monvel's theorem \cite{Bo79}.
We prove Boutet de Monvel's theorem as a corollary of Bott periodicity,
and independently of the Atiyah-Singer index theorem.
Let $M$ be an odd dimensional closed Spin$^c$ manifold with Dirac operator $D$
acting on sections of the spinor bundle $S$.
If $E$ is a smooth $\CC$ vector bundle on $M$,
$D^E$ denotes $D$ twisted by $E$.
The closure $\bar{D}^E$ of $D^E$ is an unbounded self-adjoint operator on
the Hilbert space $L^2(M,S\otimes E)$ of $L^2$-sections of $S\otimes E$.
$\bar{D}^E$ has discrete spectrum with finite dimensional eigenspaces.
Denote by $L^2_+(M,S\otimes E)$ the Hilbert space direct sum of the eigenspaces of $\bar{D}^E$ for eigenvalues $\lambda\ge 0$.
$P^E_+$ denotes the orthogonal projection
\[ P^E_+:L^2(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
Suppose that $\alpha$ is an automorphism of $E$,
and $I_S\otimes \alpha$ the resulting automorphism of $S\otimes E$.
$\mathcal{M}_\alpha$ is the bounded invertible operator on $L^2(S\otimes E)$
obtained from $I_S\otimes \alpha$.
The Toeplitz operator $T_\alpha$ is the composition of $\mathcal{M}_\alpha:L^2_+\to L^2$
with $P^E_+:L^2\to L^2_+$,
\[ T_\alpha =P^E_+\mathcal{M}_\alpha : L^2_+(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
The Toeplitz operator $T_\alpha$ is a Fredholm operator (see section \ref{sec2}).
\begin{theorem}\label{Thm}
Let $M$ be an odd dimensional compact Spin$^c$ manifold without boundary.
If $E$ is a smooth $\CC$ vector bundle on $M$,
and $\alpha$ is an automorphism of $E$,
then
\[ \ind T_\alpha = (\ch(E,\alpha)\cup \Td(M))[M]\]
Here $\ch(E,\alpha)$ is the Chern character of $(E,\alpha)$, $\Td(M)$ is the Todd class of the Spin$^c$ vector bundle $TM$
and $[M]$ is the fundamental cycle of $M$.
\end{theorem}
Our proof of Theorem \ref{Thm} is based on three points:
\begin{itemize}
\item Bott periodicity.
\item Bordism invariance of the index.
\item Invariance of the index under vector bundle modification.
\end{itemize}
The last two points are analytical, and are proved in this paper.
The key topological feature of our proof is Bott periodicity (in its original form).
Our proof does not use $K$-theory or $K$-homology, or cobordism theory, and is independent of the Atiyah-Singer theorem.
In section \ref{sec:BdM} we show that Theorem \ref{Thm} implies Boutet de Monvel's theorem.
Special cases of Theorem \ref{Thm}, when $M=S^1$, were proven by F.\ Noether \cite{No20}, and Gohberg-Krein \cite{GK60}.
Venugopalkrishna \cite{V72} proved the case of Boutet de Monvel's theorem when $M=S^{2r+1}$.
\section{Todd class and Chern character}
In this section we review the characteristic classes that appear in Theorem \ref{Thm}.
We assume familiarity with Chern and Pontryagin classes (see \cite{MilSta}).
The Todd class of a $\CC$ vector bundle is the characteristic class
\[ \mathrm{Td} = \prod_j \frac{x_j}{1-e^{-x_j}}\]
where $x_j$ are the Chern roots.
The $\hat{A}$ class of an $\RR$ vector bundle is the characteristic class
\[ \hat{A} = \prod_j \frac{x_j/2}{\sinh{(x_j/2)}}\]
where $x_j$ are the Pontryagin roots.
Due to the power series identity
\[ e^{x/2}\cdot \frac{x/2}{\sinh{(x/2)}} = \frac{x}{1-e^{-x}}\]
for a $\CC$ vector bundle $F$,
\[ \mathrm{Td}(F) = e^{c_1(F)/2} \; \hat{A}(F)\]
where $\hat{A}(F)$ is the $\hat{A}$ class of the underlying $\RR$ vector bundle of $F$.
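For completeness, the power series identity used here follows immediately by writing out $\sinh$:
\[ e^{x/2}\cdot \frac{x/2}{\sinh{(x/2)}} = e^{x/2}\cdot \frac{x}{e^{x/2}-e^{-x/2}} = \frac{x}{1-e^{-x}}\]
since $\sinh(x/2)=\tfrac{1}{2}\left(e^{x/2}-e^{-x/2}\right)$.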
A spin$^c$ vector bundle is an $\RR$ vector bundle with given extra structure.
Thus (as an $\RR$ vector bundle) it has an $\hat{A}$ class.
Associated to each spin$^c$ vector bundle $F$ is a $\CC$ line bundle $L_F$.
If $F$ is a $\CC$ vector bundle, $L_F$ is the determinant bundle.
The first Chern class $c_1(F)$ of a spin$^c$ vector bundle
is, by definition, the Chern class of $L_F$.
The Todd class of a spin$^c$ vector bundle $F$ is defined by the formula,
\[ \mathrm{Td}(F) = e^{c_1(F)/2} \; \hat{A}(F)\]
For a spin vector bundle $F$ the associated line bundle is trivial, and so
\[ \mathrm{Td}(F) = \hat{A}(F)\]
The Chern character $\mathrm{ch}(E,\alpha)$ of a smooth $\CC$ vector bundle $E$ with smooth automorphism $\alpha$
is as follows.
First assume that $E$ is the trivial bundle $X\times \CC^r$,
and $\alpha$ is a smooth map $\alpha:X\to GL(r, \CC)$.
Then if the dimension of $X$ is $2m+1$, the Chern character is the cohomology class represented by the differential form
\[ \mathrm{ch}(\alpha)
= \sum_{j=0}^m -\frac{j!}{(2j+1)!(2\pi i)^{j+1}} \mathrm{Tr}((\alpha^{-1}d\alpha)^{2j+1})\]
More generally, if $E$ is not trivial, choose a vector bundle $E'$ such that $E\oplus E'$ is trivialized,
and extend $\alpha$ by adding the identity automorphism on $E'$.
Then proceed as above.
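As a simple illustration, take $X=S^1$ (so $m=0$) with $E=S^1\times \CC$ and $\alpha(z)=z^k$. Only the $j=0$ term occurs, and
\[ \mathrm{ch}(z^k)[S^1] = -\frac{1}{2\pi i}\int_{S^1} z^{-k}\, d(z^k) = -\frac{k}{2\pi i}\int_{S^1}\frac{dz}{z} = -k\]
consistent with the index computed in the next section.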
\section{Toeplitz operators on the circle}
The simplest case of Theorem \ref{Thm} is:
\begin{theorem}\label{thm:Noether}
For a continuous function $f:S^1\to \CC\setminus \{0\}$
the Fredholm operator $T_f$ has index
\[ \Ind T_f = -\mathrm{winding\; number}\, f\]
\end{theorem}
\begin{proof}
First consider $f(z)=z^m$ with $m\ge 0$.
Using the orthonormal basis $e_n(z)=z^n$ (with $n\ge 0$) of $L^2_+(S^1)$,
we have $T_{z^m}e_n=e_{n+m}$. Thus,
\[ \dim\mathrm{Ker}\, T_{z^m} = 0\qquad \dim\mathrm{Coker}\, T_{z^m} =m\]
If $m<0$ we get $T_{z^m}e_n=0$ for $n=0,\dots, |m|-1$ and $T_{z^m}e_n=e_{n+m}$ otherwise. Then,
\[ \dim\mathrm{Ker}\, T_{z^m} =|m|= -m\qquad \dim\mathrm{Coker}\, T_{z^m} =0\]
In both cases we find $\Ind T_f = -m$.
Now let $f:S^1\to \CC\setminus \{0\}$ be an arbitrary continuous function with winding number $m$.
Then $f$ is homotopic to $z^m$
by a homotopy $f_t:S^1\to \CC\setminus \{0\}$, $t\in [0,1]$.
Since the map $f\mapsto T_f$, $C(S^1)\to \mathcal{B}(L^2_+)$ is continuous,
it follows that $T_{f_t}$ is a norm continuous path of Fredholm operators.
Therefore
\[\Ind\, T_{f}=\Ind\, T_{z^m}=-m=-\mathrm{winding\;number}\, f\]
\end{proof}
When $f$ is smooth,
\[ -\mathrm{winding\; number}\, f=-\frac{1}{2\pi i}\int_{S^1}f^{-1}df=\mathrm{ch}(f)[S^1]\]
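Theorem \ref{thm:Noether} is easy to probe numerically. The following Python sketch (our own illustration, not part of the text) approximates the winding number of a nowhere-zero symbol $f$ by accumulating unwrapped phase increments around the circle; by the theorem, the index of $T_f$ is its negative.

```python
import cmath
import math

def winding_number(f, samples=2000):
    # Approximate (1/(2*pi*i)) * integral of f^{-1} df over the unit circle
    # by summing unwrapped phase increments of f at sample points.
    total = 0.0
    prev = cmath.phase(f(1.0 + 0.0j))
    for k in range(1, samples + 1):
        z = cmath.exp(2j * math.pi * k / samples)
        cur = cmath.phase(f(z))
        d = cur - prev
        if d > math.pi:        # unwrap the jump across the branch cut
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        total += d
        prev = cur
    return round(total / (2.0 * math.pi))

def toeplitz_index(f):
    # Predicted index of T_f for a nowhere-zero symbol f on the circle.
    return -winding_number(f)
```

For instance, $f(z)=z^3$ has winding number $3$ and so $T_f$ has index $-3$, while $f(z)=(z-\tfrac12)(z+2)$ has a single zero inside the unit disk, winding number $1$, and index $-1$.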
\section{Toeplitz operators on closed manifolds}\label{sec2}
With notation as in the introduction, in this section we prove that the Toeplitz operator $T_\alpha$ is a Fredholm operator.
First assume that $E=M\times \CC$ is a trivial line bundle.
The Dirac operator $D$ of $M$
acts on $C^\infty$-sections of the spinor bundle $S$,
\[ D: C^\infty(M,S)\to C^\infty(M,S)\]
As above, denote by $L^2_+(M,S)$ the Hilbert space direct sum of the eigenspaces of $\bar{D}$ for eigenvalues $\lambda\ge 0$.
$P_+$ denotes the orthogonal projection
\[ P_+:L^2(M,S)\to L^2_+(M,S)\subset L^2(M,S)\]
For a continuous function $f\in C(M)$, $\mathcal{M}_f$ denotes the multiplication operator on $L^2(S)$,
\[ (\mathcal{M}_fu)(x) = f(x)u(x) \qquad u\in L^2(S)\]
\begin{lemma}
The commutator $[P_+,\mathcal{M}_f]=P_+\mathcal{M}_f-\mathcal{M}_fP_+$ is compact for every continuous function $f$ on $M$.
\end{lemma}
\begin{proof}
If $f$ is a smooth function, then $\mathcal{M}_f$ is a pseudodifferential operator of order zero.
Therefore, the commutator $[P_+,\mathcal{M}_f]$ is a pseudodifferential operator of order $-1$, and hence compact.
Since $\|\mathcal{M}_f\|=\|f\|_\infty$ we have $\|[P_+,\mathcal{M}_f]\|\le 2\|f\|_\infty$, so the map $f\mapsto [P_+,\mathcal{M}_f]$ is continuous.
The lemma follows since $C^\infty(M)$ is uniformly dense in $C(M)$ and the space of compact operators is closed in operator norm.
\end{proof}
Now let $E$ be any smooth hermitian vector bundle on $M$.
For a continuous endomorphism $\alpha$ of $E$ (viewed as a topological vector bundle),
$\mathcal{M}_\alpha$ denotes the bounded operator on $L^2(M,S\otimes E)$
determined by $I_S\otimes \alpha$.
Here $\alpha$ is not necessarily an automorphism of $E$.
\begin{lemma}\label{lem:comm2}
The commutator $[P_+^E,\mathcal{M}_\alpha]$ is compact for every continuous endomorphism $\alpha$ of $E$.
\end{lemma}
\begin{proof}
If $E=M\times \CC^r$ is a trivial bundle, then $\mathcal{M}_\alpha$
is an $r\times r$ matrix of multiplication operators by functions.
Then $[P^E_+,\mathcal{M}_\alpha]$ is a matrix of commutators of $P_+$ with functions, each of which is compact by the previous lemma.
Thus $[P_+^E,\mathcal{M}_\alpha]$ is compact when $E$ is a trivial vector bundle.
In general, choose $F$ such that $E\oplus F$ is a trivial vector bundle.
Then we can take $D^{E\oplus F}=D^E\oplus D^F$, hence $P^{E\oplus F}_+=P^E_+\oplus P^F_+$.
Also $\mathcal{M}_{\alpha\oplus 0} = \mathcal{M}_\alpha\oplus 0$,
and so $[P_+^{E\oplus F}, \mathcal{M}_{\alpha\oplus 0}]=[P_+^E, \mathcal{M}_\alpha]\oplus 0$.
Since $[P_+^{E\oplus F}, \mathcal{M}_{\alpha\oplus 0}]$ is compact, so is $[P_+^E, \mathcal{M}_\alpha]$.
\end{proof}
\begin{proposition}
If $\alpha, \beta$ are two endomorphisms of $E$, then $T_\alpha T_\beta-T_{\alpha\beta}$ is a compact operator.
Therefore, if $\alpha$ is an automorphism of $E$ then $T_\alpha$ is a Fredholm operator.
\end{proposition}
\begin{proof}
Due to Lemma \ref{lem:comm2}, writing $P=P^E_+$, modulo compact operators we have
\[T_\alpha T_\beta = P\mathcal{M}_\alpha P \mathcal{M}_\beta \sim PP\mathcal{M}_\alpha \mathcal{M}_\beta = P\mathcal{M}_{\alpha\beta}=T_{\alpha\beta}\]
It follows that if $\alpha$ is an automorphism with inverse $\beta$, then $T_\alpha T_\beta-I$ and $T_\beta T_\alpha-I$ are compact operators.
Therefore $T_\alpha$ is Fredholm by Atkinson's theorem.
\end{proof}
\section{Toeplitz operators on compact manifolds with boundary}\label{sec3}
In this section $\Omega$ is a compact even dimensional Spin$^c$ manifold with boundary $M=\partial \Omega$, and $D_\Omega$ is the Dirac operator of $\Omega$, acting on sections of the graded spinor bundle $S^+\oplus S^-$,
\[ D_\Omega: C_c^\infty(\Omega\setminus M, S^+) \to C_c^\infty(\Omega\setminus M, S^-)\]
We denote by $\bar{D}_\Omega$ the maximal closed extension of $D_\Omega$.
$\bar{D}_\Omega$ is a closed unbounded Hilbert space operator with domain
\[ \{u\in L^2(\Omega,S^+)\;\mid \; D_\Omega u \in L^2(\Omega, S^-)\}\]
where $D_\Omega u$ is taken in the distributional sense.
Denote the kernel of $\bar{D}_\Omega$ by
\[\Hardy = \{u\in L^2(\Omega,S^+)\;\mid \; D_\Omega u = 0 \} \]
and let $Q$ be the Hilbert space projection of $L^2(\Omega, S^+)$
onto the closed linear subspace $\Hardy$,
\[ Q:L^2(\Omega,S^+)\to \Hardy\]
For a continuous function $f\in C(\Omega)$, let $\mathcal{M}_f$ denote the multiplication operator
\[ \MM_f : L^2(\Omega, S^+\oplus S^-)\to L^2(\Omega, S^+\oplus S^-)\qquad (\MM_fu)(x) = f(x)u(x) \]
$\MM^+_f$ and $\MM_f^-$ are the restrictions of $\MM_f$ to positive and negative spinors respectively.
\begin{proposition}\label{prop:commutator1}
The commutator $[Q, \mathcal{M}^+_f]$ is a compact operator for all $f\in C(\Omega)$.
Moreover, $\mathcal{M}^+_f Q$ is compact if $f(x)=0$ for all $x\in \partial \Omega$.
\end{proposition}
\begin{proof}
Let $T$ be the bounded operator
\[T=\bar{D}_\Omega(1+\bar{D}^*_\Omega\bar{D}_\Omega)^{-1/2}\]
and $V$ the partial isometry (with the same kernel as $\bar{D}_\Omega$) determined by the polar decomposition
\[ \bar{D}_\Omega= V|\bar{D}_\Omega|,\qquad |\bar{D}_\Omega| = (\bar{D}^*_\Omega\bar{D}_\Omega)^{1/2}\]
$T$ has the following standard properties:
\begin{itemize}
\item $T-V$ is a compact operator.
\item $\mathcal{M}_f^+T-T\mathcal{M}_f^-$ is compact for all $f\in C_0(\Omega\setminus M)$.
\item $\mathcal{M}_f^+(I-T^*T)$ is compact for all $f\in C_0(\Omega\setminus M)$.
\end{itemize}
Proposition 1.1 in \cite{BDT89} implies the much stronger property:
\begin{itemize}
\item $\mathcal{M}_f^+T-T\mathcal{M}_f^-$ is compact for all $f\in C(\Omega)$.
\end{itemize}
Since also $T^*\mathcal{M}_f^+-\mathcal{M}_f^-T^*$ is compact for all $f\in C(\Omega)$,
we obtain that the commutators $[\mathcal{M}_f^+, T^*T]$ are compact.
Note that
\[ \mathrm{ker}\; \bar{D}_\Omega = \mathrm{ker}\; T = \mathrm{ker}\; V \]
Therefore $Q=I-V^*V$ differs from $I-T^*T$ by a compact operator.
Hence $[Q,\mathcal{M}_f^+]$ is compact.
Finally, the third property above implies that $\mathcal{M}^+_fQ$ is compact if $f\in C_0(\Omega\setminus M)$.
(For full details, see \cite{BDT89}.)
\end{proof}
An endomorphism $\theta$ of the trivial vector bundle $E=\Omega\times \CC^r$
is naturally identified with a continuous matrix-valued function
\[ \theta:\Omega\to M_r(\CC)\]
The multiplication operator $\MM_\theta$ is the bounded operator on the Hilbert space
\[ L^2(\Omega, S^+\otimes E) = L^2(\Omega, S^+)\otimes \CC^r\]
obtained from $I_{S^+}\otimes \theta$.
Denote $Q_r=Q\otimes I_r$,
\[ Q_r: L^2(\Omega, S^+)\otimes \CC^r \to \Hardy \otimes \CC^r\]
where $I_r$ is the $r\times r$ identity matrix.
\begin{corollary}\label{comcomp}
For every continuous map $\theta:\Omega\to M_r(\CC)$,
the commutator $[Q_r, \MM_\theta]$ is a compact operator on $L^2(\Omega, S^+)\otimes \CC^r$.
Moreover, $\MM_\theta Q_r$ is compact for every $\theta$ such that $\theta(x)=0$
for all $x\in M$.
Here $Q_r$ is viewed as an operator from $L^2$ to $L^2$.
\end{corollary}
\begin{proof}
The proof is the same as the proof of Lemma \ref{lem:comm2}.
\end{proof}
The Toeplitz operator $T_\theta$ is the composition of $\MM_\theta$ with $Q_r$,
\[ T_\theta =Q_r\mathcal{M}_\theta : \Hardy\otimes \CC^r \to \Hardy\otimes \CC^r\]
\begin{proposition}
If $\theta, \eta$ are two continuous maps $\Omega\to M_r(\CC)$, then $T_\theta T_\eta-T_{\theta\eta}$ is compact.
If $\theta(x)=0$ for all $x\in M=\partial\Omega$ then $T_\theta$ is compact.
Therefore $T_\theta$ is a Fredholm operator if $\theta(x)$ is an invertible matrix for every $x\in M$.
\end{proposition}
\begin{proof}
The first two statements follow from Proposition \ref{prop:commutator1}.
Now let $\theta:\Omega\to M_r(\CC)$ be a continuous function,
and $\theta(x)$ invertible for every $x\in M$.
Let $\eta(x) = \theta(x)^{-1}$, and extend $\eta$ to a continuous function $\eta:\Omega\to M_r(\CC)$.
Then modulo compact operators $T_{\theta\eta}\sim I$ and so $T_{\theta}T_\eta\sim T_\eta T_{\theta}\sim I$.
\end{proof}
\begin{remark}
Results closely related to the content of this section were proved by Venugopalkrishna \cite{V72} in the special case where $\Omega$ is a strongly pseudoconvex domain in $\CC^n$.
For the general case of compact Spin$^c$ manifolds with boundary see \cite{BDT89}, and also \cite{BD91}.
\end{remark}
\section{The trace map and the Calderon projection}\label{sec4}
In this section we show how the index of Toeplitz operators on $\Omega$ is related to the index of Toeplitz operators on $M$.
As in the previous section, $\Omega$ is a compact even dimensional Spin$^c$ manifold
and $M$ is the boundary of $\Omega$.
$D_\Omega$ is the Dirac operator of $\Omega$, acting on sections of the spinor bundles $S^+$ and $S^-$.
$D_M$ is the Dirac operator of $M$, acting on sections of the spinor bundle $S$.
We identify $S^+|M = S$.
We denote the $L^2$ Sobolev space of degree $s\in \RR$ (on $\Omega$ or on $M$) by $W_s$.
Note that $W_0=L^2$.
$C^\infty(\Omega, S^+)$ is the space of sections of $S^+$ that are smooth on $\Omega$,
which means, in particular, that all derivatives of all orders extend continuously to the boundary $M$.
For a smooth section $u\in C^\infty(\Omega, S^+)$,
let $\gamma(u)\in C^\infty(M,S)$ denote the restriction of $u$ to $M$.
The restriction map $\gamma$ extends to a bounded linear map on Sobolev spaces, called the trace map,
\[ \gamma_s:W_{s+\frac{1}{2}}(\Omega, S^+)\to W_{s}(M,S)\qquad s\in \RR\]
Let $W_s^\natural(M,S)$ be the subspace of $W_s(M,S)$ consisting of restrictions to the boundary (via the trace map) of distributional solutions of $D_\Omega u=0$,
\[ W_s^\natural(M,S) = \{ \gamma_s(u) \mid u \in W_{s+\frac{1}{2}}(\Omega,S^+), \; D_\Omega u=0\}\]
The Calderon projection $P_\natural$ is the orthogonal projection of $L^2$ onto $W^\natural_0$,
\[ P_\natural:L^2(M,S)\to W_0^\natural(M,S)\subset L^2(M,S)\]
$P_\natural$ is a pseudodifferential operator of order zero.
For all $s\in \RR$, the range of the idempotent $P_\natural:W_{s}\to W_{s}$ is $W_s^\natural(M,S)$.
\begin{proposition}\label{propa}
The bounded operator $F:=P_\natural(1+D_M^2)^{-1/4}\circ \gamma_{-\frac{1}{2}}$,
\[ F:\Hardy \to W_0^\natural(M,S)\]
is Fredholm.
\end{proposition}
\begin{proof}
The space of $L^2$-solutions $u\in L^2(\Omega, S^+)$ of the Dirichlet problem
\[ D_\Omega u=0, \qquad \gamma_{-\frac{1}{2}}(u)=0\]
is finite dimensional.
Therefore the surjective map
\[ \gamma_{-\frac{1}{2}}: \Hardy \to W_{-\frac{1}{2}}^\natural(M,S)\]
is a Fredholm operator.
Denote $A=(1+D_M^2)^{-1/4}$.
$A$ is an elliptic pseudodifferential operator of order $-1/2$.
$A$ has scalar symbol, and so the commutator $[A, P_\natural]$ is of order $-3/2$.
Let $B$ be the pseudodifferential operator of order $-1/2$,
\[ B = P_\natural AP_\natural +(I-P_\natural )A(I-P_\natural )\]
$A-B$ is of order $-3/2$,
\begin{align*}
A-B&= P_\natural A(I-P_\natural ) +(I-P_\natural )AP_\natural \\
&= [P_\natural ,A](I-P_\natural ) +(I-P_\natural )[A,P_\natural ]
\end{align*}
and so $A$ and $B$ have the same principal symbol.
Thus $B$ is elliptic, and the bounded operator
\[B:W_{-\frac{1}{2}}(M,S)\to L^2(M,S)\]
is Fredholm. Because $B$ commutes with $P_\natural$, $B$ restricts to a bounded operator
\[ B_\natural: W_{-\frac{1}{2}}^\natural (M,S)\to W_0^\natural(M,S)\]
which is also Fredholm.
Finally, $F=P_\natural A\circ \gamma_{-\frac{1}{2}}=B_\natural\circ \gamma_{-\frac{1}{2}}$ is the composition of two Fredholm operators.
\end{proof}
For a smooth function $\tilde{f}\in C^\infty(\Omega)$,
let $T_{\tilde{f}}=Q\mathcal{M}_{\tilde{f}}$ be the operator
\[ T_{\tilde{f}} : \Hardy \to \Hardy\]
with $Q$ as in section \ref{sec3}.
If $f$ is the restriction of $\tilde{f}$ to $M$, let $T^\natural_f=P_\natural\mathcal{M}_f$ be the operator
\[ T_{f}^\natural : W_0^\natural(M,S)\to W_0^\natural(M,S)\]
\begin{proposition}\label{propb}
With $F=P_\natural(1+D_M^2)^{-1/4}\circ \gamma_{-\frac{1}{2}}$ as above,
the diagram
\[ \xymatrix{ \Hardy \ar[r]^{T_{\tilde{f}}}\ar[d]_F & \Hardy \ar[d]^{F}& \\
W_0^\natural(M,S) \ar[r]^{T_f^\natural} & W_0^\natural(M,S)}
\]
commutes modulo compact operators, i.e. $FT_{\tilde{f}}-T^\natural_f F$ is a compact operator for any $\tilde{f}\in C^\infty(\Omega)$, and $f=\tilde{f}|M$.
\end{proposition}
\begin{proof}
Let $\sim$ denote equality modulo compact operators. By Proposition \ref{prop:commutator1},
\[ FT_{\tilde{f}} = FQ\mathcal{M}_{\tilde{f}} \sim F\mathcal{M}_{\tilde{f}} Q = F \mathcal{M}_{\tilde{f}}= P_\natural A\gamma_{-\frac{1}{2}} \mathcal{M}_{\tilde{f}} \]
$P_\natural$ and $A$ are pseudodifferential operators, and the principal symbols of $P_\natural$ and $A$ commute with the symbol of $\mathcal{M}_f$.
Therefore $P_\natural\mathcal{M}_f \sim \mathcal{M}_f P_\natural$ and $A\mathcal{M}_f\sim \mathcal{M}_f A$, and
\[ T^\natural_f F = P_\natural\mathcal{M}_f P_\natural A \gamma_{-\frac{1}{2}}
\sim P_\natural A \mathcal{M}_f \gamma_{-\frac{1}{2}}\]
The proposition now follows from the equality
\[\gamma_{-\frac{1}{2}} \mathcal{M}_{\tilde{f}} = \mathcal{M}_f \gamma_{-\frac{1}{2}}\]
\end{proof}
\begin{corollary} \label{corc}
With $\tilde{f}$, $f$, as above, if $f(x)\ne 0$ for all $x\in M$, then
$T_{\tilde{f}}$ and $T_f^\natural$ are Fredholm operators, and
\[ \mathrm{Index}\, T_{\tilde{f}} = \mathrm{Index}\, T_f^\natural\]
\end{corollary}
\begin{proof}
As in section \ref{sec2}, since the projection $P_\natural$ is a pseudodifferential operator of order zero,
$T^\natural_f$ is Fredholm if $f(x)\ne 0$ for all $x\in M$.
Combining Propositions \ref{propa} and \ref{propb} we see that $FT_{\tilde{f}}$ and $T_f^\natural F$ are Fredholm operators with the same index, and therefore
\[ \mathrm{Index}\, F + \mathrm{Index}\, T_{\tilde{f}} = \mathrm{Index}\, T_f^\natural + \mathrm{Index}\, F\]
\end{proof}
The Calderon operator $P_\natural$ and the projection $P_+$ (defined in section \ref{sec2}) are pseudodifferential operators of order zero with the same principal symbol.
This fact, combined with Corollary \ref{corc}, gives the main result of this section.
\begin{proposition}\label{propd}
Let $\theta:M\to M_r(\CC)$ be a continuous matrix-valued function on $M$, such that $\theta(x)$ is invertible for all $x\in M$,
and $\tilde{\theta}:\Omega\to M_r(\CC)$ any continuous extension of $\theta$ to $\Omega$.
Then the Fredholm operators
\[ T_\theta :L^2_+(M,S)\otimes \CC^r\to L^2_+(M,S)\otimes \CC^r\]
(as in section \ref{sec2}) and
\[ T_{\tilde{\theta}} : \Hardy\otimes \CC^r\to \Hardy\otimes \CC^r\]
(as in section \ref{sec3}) have the same index,
\[ \mathrm{Index}\, T_\theta = \mathrm{Index}\, T_{\tilde{\theta}}\]
\end{proposition}
\begin{proof}
For simplicity, assume first that $r=1$.
The projections $P_\natural$ and $P_+$ differ by a compact operator, because they have the same principal symbol.
This implies that
\[ \mathrm{Index}\, T_\theta = \mathrm{Index}\, T_\theta^\natural\]
The proposition now follows from Corollary \ref{corc}.
For $r>1$, the evident generalizations of Proposition \ref{propa}, Proposition \ref{propb}, and Corollary \ref{corc} are valid, and Proposition \ref{propd} follows.
\end{proof}
\section{Bordism invariance of the index}
In this section we prove bordism invariance of the index of Toeplitz operators,
based on the results of sections \ref{sec3} and \ref{sec4}.
\begin{proposition}
Let $\Omega$ be a compact even dimensional Spin$^c$ manifold with boundary $M$.
Let $\tilde{E}$ be a smooth $\CC$ vector bundle on $\Omega$, and $\tilde{\alpha}$ an automorphism of $\tilde{E}$.
Denote by $(E,\alpha)$ the restriction of $(\tilde{E}, \tilde{\alpha})$ to $M$,
and by $T_\alpha=P_+^E\MM_\alpha$ the Toeplitz operator determined by $\alpha$,
\[ T_\alpha : L^2_+(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
Then
\[ \mathrm{Index}\, T_\alpha = 0\]
\end{proposition}
\begin{proof}
Choose $\tilde{F}$ such that $\tilde{E}\oplus \tilde{F}\cong \Omega\times \CC^r$ is trivial.
Then the Toeplitz operator $T_{\alpha \oplus I_F}$ determined by the automorphism $\alpha\oplus I_F$
of $E\oplus F$ has the same index as $T_\alpha$.
Thus, it suffices to prove the proposition for trivial vector bundles $E$.
Hence we assume that $\tilde{\alpha}$ and $\alpha$ are matrix valued functions,
\[ \tilde{\alpha}:\Omega\to GL(r,\CC) \qquad \alpha: M\to GL(r,\CC)\]
Note that $\tilde{\alpha}(x)$ is invertible for all $x\in \Omega$.
By Proposition \ref{propd}, it suffices to show that
\[\mathrm{Index}\, T_{\tilde{\alpha}}=0\]
where $T_{\tilde{\alpha}}$
is the Toeplitz operator determined by $\tilde{\alpha}$,
\[ T_{\tilde{\alpha}} : \Hardy\otimes \CC^r\to \Hardy\otimes \CC^r\]
There is the direct sum decomposition
\[ L^2(\Omega, S^+) \otimes \CC^r= \mathcal{N}(\bar{D}_\Omega)\otimes \CC^r \oplus \mathcal{N}(\bar{D}_\Omega)^\perp\otimes \CC^r\]
where $\mathcal{N}(\bar{D}_\Omega)=\Hardy$ denotes the kernel of $\bar{D}_\Omega$.
With respect to this decomposition,
the multiplication operator $\mathcal{M}_{\tilde{\alpha}}=\mathcal{M}^+_{\tilde{\alpha}}$ is a $2\times 2$ matrix
\[ \mathcal{M}^+_{\tilde{\alpha}}
=\left(\begin{array}{cc}T_{\tilde{\alpha}}&A\\B&S_{\tilde{\alpha}}\end{array}\right)
\]
where $A$ and $B$ are compact operators by Corollary \ref{comcomp}.
Since $\mathcal{M}_{\tilde{\alpha}}$ is invertible, and $\mathcal{M}^+_{\tilde{\alpha}}$ is a compact perturbation of $T_{\tilde{\alpha}}\oplus S_{\tilde{\alpha}}$, the operators $T_{\tilde{\alpha}}$ and $S_{\tilde{\alpha}}$ are Fredholm, and
\[ \mathrm{Index}\, T_{\tilde{\alpha}} + \mathrm{Index}\, S_{\tilde{\alpha}} = \mathrm{Index}\, \mathcal{M}^+_{\tilde{\alpha}} = 0\]
Thus it will suffice to show that
\[ \mathrm{Index}\, S_{\tilde{\alpha}} = 0\]
As in section \ref{sec3} above, let $V$ be the partial isometry determined by $\bar{D}_\Omega = V |\bar{D}_\Omega|$,
where $V$ has the same kernel as $\bar{D}_\Omega$.
Denote $V_r=V\otimes I_r$.
Consider the diagram
\[ \xymatrix{ \mathcal{N}(\bar{D}_\Omega)^\perp \otimes \CC^r \ar[r]^{S_{\tilde{\alpha}}}\ar[d]_{V_r} & \mathcal{N}(\bar{D}_\Omega)^\perp\otimes \CC^r \ar[d]^{V_r}& \\
L^2(\Omega, S^-)\otimes \CC^r\ar[r]^{\mathcal{M}^-_{\tilde{\alpha}}}& L^2(\Omega, S^-)\otimes \CC^r}
\]
In this diagram $V_r$ is a Fredholm operator.
The diagram commutes modulo compact operators, because
\[ V_rS_{\tilde{\alpha}} = V_r(I-Q_r)\mathcal{M}^+_{\tilde{\alpha}} \sim V_r\mathcal{M}^+_{\tilde{\alpha}}(I-Q_r)=V_r\mathcal{M}^+_{\tilde{\alpha}}\]
where $\sim$ denotes equality modulo compact operators.
Finally
\[ V_r\mathcal{M}^+_{\tilde{\alpha}} \sim \mathcal{M}^-_{\tilde{\alpha}} V_r\]
follows from
\[ T\mathcal{M}^+_{\tilde{\alpha}} \sim \mathcal{M}^-_{\tilde{\alpha}} T\]
where $T=\bar{D}_\Omega(I+\bar{D}_\Omega^*\bar{D}_\Omega)^{-1/2}$
(as in the proof of Proposition \ref{prop:commutator1}),
and the fact that $T-V$ is compact.
Thus, the Fredholm operators $V_rS_{\tilde{\alpha}}$ and $\mathcal{M}^-_{\tilde{\alpha}}V_r$ have the same index, and additivity of the index implies
\[ \mathrm{Index}\, S_{\tilde{\alpha}} = \mathrm{Index}\,\mathcal{M}^-_{\tilde{\alpha}}\]
Since $\mathcal{M}^-_{\tilde{\alpha}}$ is an invertible operator, it has index zero.
\end{proof}
\section{The product lemma}
Let $M_1$ and $M_2$ be two closed even dimensional spin$^c$ manifolds with Dirac operators $D_1, D_2$.
The Dirac operator of $M_1\times M_2$
is the sharp product,
\[ D_{M_1\times M_2} = D_1 \# D_2 =\left(\begin{matrix}D_1\otimes I & -I\otimes D_2^*\\ I\otimes D_2& D_1^*\otimes I\end{matrix}\right)\]
where the $2\times 2$ matrix is an operator from the positive spinors
\[ S^+_{M_1\times M_2} = (S_1^+\boxtimes S_2^+)\oplus (S_1^-\boxtimes S_2^-)\]
to the negative spinors
\[ S^-_{M_1\times M_2} = (S_1^-\boxtimes S_2^+)\oplus (S_1^+\boxtimes S_2^-)\]
It is straightforward to derive from this formula that
\[ \mathrm{Index}\, D = \mathrm{Index}\, D_1\,\cdot \, \mathrm{Index}\, D_2\]
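To spell out the derivation: a direct computation with the matrix above gives
\[ D^*D = \left(D_1^*D_1\otimes I + I\otimes D_2^*D_2\right)\oplus \left(D_1D_1^*\otimes I + I\otimes D_2D_2^*\right)\]
so that, writing $D=D_1\# D_2$,
\[ \mathrm{Ker}\, D = (\mathrm{Ker}\, D_1\otimes \mathrm{Ker}\, D_2)\oplus (\mathrm{Ker}\, D_1^*\otimes \mathrm{Ker}\, D_2^*)\]
\[ \mathrm{Ker}\, D^* = (\mathrm{Ker}\, D_1^*\otimes \mathrm{Ker}\, D_2)\oplus (\mathrm{Ker}\, D_1\otimes \mathrm{Ker}\, D_2^*)\]
and therefore
\[ \mathrm{Index}\, D = (\dim \mathrm{Ker}\, D_1 - \dim \mathrm{Ker}\, D_1^*)\cdot(\dim \mathrm{Ker}\, D_2 - \dim \mathrm{Ker}\, D_2^*) = \mathrm{Index}\, D_1\cdot \mathrm{Index}\, D_2\]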
In this section we prove the analogue of this product formula
in the case when $M_1$ is odd dimensional and $M_2$ is even dimensional.
\begin{proposition}\label{Prod}
Let $M$ be a closed odd dimensional spin$^c$ manifold,
and let $E$, $\alpha$, $\mathcal{M}_\alpha$, $T_\alpha$ be as above.
Let $W$ be a closed even dimensional spin$^c$ manifold,
and $F$ a smooth $\CC$ vector bundle on $W$.
On the odd dimensional spin$^c$ manifold $M\times W$,
let $T_{\alpha\otimes I_F}$ be the Toeplitz operator associated to the automorphism $\alpha\otimes I_F$ of the vector bundle $E\boxtimes F$.
Then
\[ \mathrm{Index}\, (T_{\alpha\otimes I_F}) = \mathrm{Index}\, T_\alpha \,\cdot \, \mathrm{Index}\, (D_{W}\otimes F)\]
where $D_W\otimes F$ is the Dirac operator of $W$ twisted by $F$.
\end{proposition}
\begin{proof}
Let $A=D_M\otimes E$ denote the Dirac operator of $M$ twisted by $E$,
and $B=D_W\otimes F$ the Dirac operator of $W$ twisted by $F$.
$B=B^+\oplus B^-$ is graded.
The spinor bundle of $M\times W$ is
\[ S_{M\times W} = S_M\boxtimes S_W = (S_M\boxtimes S_W^+)\oplus (S_M\boxtimes S_W^-)\]
where $S_M, S_W=S_W^+\oplus S_W^-$ are the spinor bundles of $M, W$.
The Dirac operator of $M\times W$ twisted by $E\otimes F$ is
\[ D = \left(\begin{matrix}A\otimes I & I\otimes B^-\\ I\otimes B^+& -A\otimes I\end{matrix}\right)\]
acting on the direct sum
\[ [(S_M\otimes E) \boxtimes (S_W^+\otimes F)]\oplus [(S_M\otimes E) \boxtimes (S_W^-\otimes F)]\]
Consider the polar decomposition $B=V|B|$, where $V$ is a partial isometry with the same kernel as $B$.
Let $U$ be the operator
\[ U = \left(\begin{matrix}0& -I\otimes V^-\\ I\otimes V^+& 0\end{matrix}\right)\]
$U$ anticommutes with $D$ and commutes with $\mathcal{M}_{\alpha\otimes I_F}$,
\[UD=-DU\qquad U\mathcal{M}_{\alpha\otimes I_F}=\mathcal{M}_{\alpha\otimes I_F}U\]
$V$ restricts to a unitary operator on $(\mathrm{Ker}\, B)^\perp$,
and thus $U$ restricts to a unitary operator on $L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp$.
View the Hilbert space $L^2((S_M\otimes E)\boxtimes (S_W\otimes F))$ as the direct sum
\[ [L^2(S_M^E) \otimes \mathrm{Ker}\, B^+]\oplus [L^2(S_M^E) \otimes \mathrm{Ker}\, B^-]\oplus [L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp]\]
where $S_M^E=S_M\otimes E$.
The multiplication operator $\mathcal{M}_{\alpha\otimes I_F}$ is
\[ \mathcal{M}_{\alpha\otimes I_F} =
\left(\begin{matrix}
\mathcal{M}_\alpha \otimes I_{\mathrm{Ker}\, B^+} & 0&0\\
0 & \mathcal{M}_\alpha \otimes I_{\mathrm{Ker}\, B^-} & 0\\
0& 0& \mathcal{M}_\alpha \otimes I_{(\mathrm{Ker}\, B)^\perp}
\end{matrix}\right)\]
The three summands are invariant subspaces for $D$, and the positive space of $D$ is
\[ [L^2_+(S_M^E) \otimes \mathrm{Ker}\, B^+]\oplus [L^2_-(S_M^E) \otimes \mathrm{Ker}\, B^-]\oplus H\]
where $H$ is a closed linear subspace of $L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp$.
Thus the Toeplitz operator $T_{\alpha\otimes I_F}$ is of the form
\[ T_{\alpha\otimes I_F} =
\left(\begin{matrix}
T_\alpha \otimes I_{\mathrm{Ker}\, B^+} & 0 & 0\\
0 & T^-_\alpha \otimes I_{\mathrm{Ker}\, B^-} & 0\\
0 & 0& Q
\end{matrix}\right)\]
where $T^-_\alpha$ is the compression of $\mathcal{M}_\alpha$ to $L^2_-(S^E_M)$,
and $Q$ is the restriction of $T_{\alpha\otimes I_F}$ to $H$.
Since
\[ \mathrm{Index}\, T^-_\alpha = - \mathrm{Index}\, T_\alpha\]
it follows that
\begin{align*}
\mathrm{Index}\, T_{\alpha\otimes I_F} &= \mathrm{Index}\, T_\alpha \,\cdot\,\mathrm{dim\, Ker}\,B^+ + \mathrm{Index}\, T^-_\alpha\,\cdot\, \mathrm{dim\, Ker}\,B^-+\mathrm{Index}\,Q\\
& = \mathrm{Index}\, T_\alpha\,\cdot\, \mathrm{Index}\,B+\mathrm{Index}\,Q
\end{align*}
On the third summand $L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp$ we have
\[ UDU^*=-D\qquad U\mathcal{M}_{\alpha\otimes I_F}U^*=\mathcal{M}_{\alpha\otimes I_F}\]
Therefore the compression of $\mathcal{M}_{\alpha\otimes I_F}$
to the positive space (i.e. the operator $Q$) has the same index as the compression to the negative space.
Since, modulo compact operators, $\mathcal{M}_{\alpha\otimes I_F}$ restricted to this summand is the direct sum of these two compressions, the sum of their indices is the index of this invertible operator, which is zero. Hence both are zero, i.e. $\mathrm{Index}\,Q=0$.
\end{proof}
\section{Vector bundle modification}
With $M, E, \alpha$ as above, let $F\to M$ be a spin$^c$ vector bundle on $M$ with even fiber dimension $n=2r$.
Part of the Spin$^c$ datum of $F$ is a principal $\Spinc$ bundle $P$ on $M$,
\[F=P\times_\Spinc \RR^n\]
where $\Spinc$ acts on $\RR^n$ via the map $\Spinc\to \SO$.
The Bott generator vector bundle $\beta$ on $S^n\subset \RR^n\times \RR$ is $\Spinc$ equivariant,
where $\Spinc$ acts on the first factor $\RR^n$.
Therefore, associated to $P$ we have a fiber bundle $\pi\,\colon \Sigma F \to M$ whose fibers are oriented spheres of dimension $n$,
\[ \Sigma F = P\times_\Spinc S^n\]
and a vector bundle on $\Sigma F$
\[ \beta_F = P\times_\Spinc \beta\]
Since $M$ is an odd dimensional Spin$^c$ manifold, the total space of $F$ is an odd dimensional Spin$^c$ manifold, because $TF = \pi^*F\oplus \pi^*TM$,
and the direct sum of two Spin$^c$ vector bundles is Spin$^c$.
Every trivial bundle is Spin$^c$, so the total space of $F\oplus \underline{\RR}$ is Spin$^c$.
$\Sigma F$ is a Spin$^c$ manifold as the boundary of the unit ball bundle of $F\oplus \underline{\RR}$.
On the odd dimensional Spin$^c$ manifold $\Sigma F$ we have a Toeplitz operator $T_{\tilde \alpha}$,
where $\tilde{\alpha}$ is the automorphism $\pi^*\alpha\otimes I$ of the vector bundle $\pi^*E\otimes \beta_F$.
Here $\pi:\Sigma F\to M$ is the projection.
\begin{proposition}\label{VM}
\[ \mathrm{Index}\, T_\alpha = \mathrm{Index}\,T_{\tilde\alpha}\]
\end{proposition}
The proof of Proposition \ref{VM} is a straightforward generalization of the proof of Proposition \ref{Prod},
and uses the following basic fact about the Dirac operator of an even dimensional sphere.
\begin{proposition}\label{index1}
If $D$ is the Dirac operator of the even dimensional sphere $S^n$
with the Spin$^c$ structure it receives as the boundary of the unit ball in $\RR^{n+1}$,
then $D_\beta$ has one dimensional kernel and zero cokernel, and so
\[ \mathrm{Index} \;D_\beta = 1\]
Moreover, $D_\beta$ is equivariant for the group Spin$^c(n+1)$ which acts by orientation preserving isometries on $S^n$, and this group acts trivially on the kernel of $D_\beta$.
\end{proposition}
\begin{proof}
Recall that if $V$ is a finite dimensional vector space, then there is a canonical nonzero element in $V\otimes V^*$,
which maps to the identity map under the isomorphism $V\otimes V^*\cong \mathrm{Hom}(V,V)$.
On any even dimensional Spin$^c$ manifold $M$, there is a canonical isomorphism of vector bundles
\[ S\otimes S^* \cong \Lambda_\CC TM\]
where $S$ is the spinor bundle of $M$.
This is implied by the fact that, as representations of $\Spinc$,
\[ \CC^{2^r}\otimes (\CC^{2^r})^*\cong \Lambda_\CC \RR^n\]
Via this isomorphism, $D_{S^*}$ identifies (up to lower order terms) with $d+d^*$,
where $d$ is the de Rham operator, and $d^*$ its formal adjoint.
Note that the kernel of $D_{S^*}$ is the same as the kernel of $D_{S^*}^2=(d+d^*)^2$,
i.e. it consists of harmonic forms.
On $S^n$ the only harmonic forms are the constant functions, and scalar multiples of the standard volume form.
Because the Bott generator vector bundle $\beta$ is dual to $S^+$,
$S^+\otimes \beta\cong \mathrm{Hom}(S^+,S^+)$ contains a trivial line bundle,
which identifies with the line bundle in $\Lambda^0\oplus \Lambda^n$
spanned by $(1,\omega)$, where $\omega$ is the standard volume form.
This follows from representation theory.
Thus, the kernel of $D_\beta$ is the one dimensional vector space spanned by the harmonic forms $c+c\omega$, $c\in \CC$.
The intersection of $\Lambda^0\oplus \Lambda^n$ with $S^-\otimes \beta$ is zero,
and so the cokernel of $D_\beta$ is zero.
\end{proof}
\begin{remark} $D_\beta$ is the (positive) half-signature operator of $S^n$ (with $n$ even).
\end{remark}
\begin{proof}[Proof of Proposition \ref{VM}]
If $F$ is a trivial bundle, then $\Sigma F=M\times S^n$
and Proposition \ref{VM} is a special case of Proposition \ref{Prod}.
In general, $\Sigma F$ is a sphere bundle over $M$.
The proof is essentially the same as the proof of Proposition \ref{Prod}, with the following modifications.
$D_\beta$ is equivariant for the action of the structure group of the sphere bundle $\Sigma F$.
Therefore, there is a well-defined ``vertical'' operator $B$ on $\Sigma F$ which in each local trivialization of the sphere bundle $\Sigma F$ is $I\otimes D_\beta$.
Next, we construct a first order differential operator $A$ on $\Sigma F$
which is a ``lift'' of $D_M\otimes E$ from $M$ to $\Sigma F$.
Choose a finite open cover $\{U_j\}$ of $M$,
such that the fiber bundle $\Sigma F$ restricted to each $U_j$ has been trivialized,
\[ \Sigma F|U_j \cong U_j\times S^n\]
Let $A_j$ be the evident lift of $D_M\otimes E$ from $U_j$ to the product $U_j\times S^n$,
\[A_j:C^\infty(\pi^*((S^+\otimes E)|U_j))\to C^\infty(\pi^*((S^-\otimes E)|U_j)) \]
$A_j$ differentiates in the $U_j$ direction.
Let $\{\varphi_j\}$ be a smooth partition of unity on $M$ subordinate to the cover $\{U_j\}$.
Using this partition of unity and the local trivializations $\Sigma F|U_j \cong U_j\times S^n$,
we construct the lift $A$ as
\[ A := \sum A_j(\varphi_j\circ \pi) :C^\infty(\pi^*(S^+\otimes E))\to C^\infty(\pi^*(S^-\otimes E)) \]
Note that $A$ and $B$ commute.
Now, the sharp product formula for the Dirac operator of $\Sigma F$ twisted by $\pi^*E\otimes \beta_F$ is
\[ D = \left(\begin{matrix}A & B^-\\ B^+& -A\end{matrix}\right)\]
We can construct the partial isometry $U$ as in the proof of Proposition \ref{Prod},
with the properties
\[UD=-DU\qquad U\mathcal{M}_{\alpha\otimes I}=\mathcal{M}_{\alpha\otimes I}U\]
$D$ and $\mathcal{M}_{\alpha\otimes I}$ act on the Hilbert space $\mathcal{H}=L^2(S_{\Sigma F}\otimes \pi^*E\otimes \beta_F)$.
The Hilbert space $L^2(S_{S^n}\otimes \beta)$ on which $D_\beta$ acts is a direct sum $\mathrm{Ker}D_\beta\oplus(\mathrm{Ker}D_\beta)^\perp$.
The summands are invariant for the structure group $SO(n+1)$ of the sphere bundle $\Sigma F$.
If we view $\mathcal{H}$ as the Hilbert space of $L^2$-sections in a field of Hilbert spaces over $M$,
the orthogonal decomposition of its fibers gives rise to an orthogonal decomposition $\mathcal{H}=\mathcal{H}_1\oplus \mathcal{H}_2$.
Due to the commuting of $A$ and $B$,
the subspaces $\mathcal{H}_1$ and $\mathcal{H}_2$ are invariant for $D$ as well as for $\mathcal{M}_{\alpha\otimes I}$,
and $U$ restricts to a unitary on $\mathcal{H}_2$.
The Toeplitz operator $T_{\tilde{\alpha}}=T_1\oplus T_2$ has a corresponding direct sum decomposition,
where $T_j$ acts on a subspace of $\mathcal{H}_j$.
As in the proof of Proposition \ref{Prod}, $\mathrm{Index}\,T_2=0$
because $\mathrm{Index}\,UT_2U^* = -\mathrm{Index}\,T_2$.
Due to the fact that the structure group acts trivially on $\mathrm{Ker}D_\beta$,
the first summand $\mathcal{H}_1$ identifies canonically with $L^2(S_M\otimes E)$,
and $T_1$ identifies with $T_\alpha$. Thus,
\[ \mathrm{Index} \,T_{\tilde{\alpha}}= \mathrm{Index} \,T_1+ \mathrm{Index} \,T_2= \mathrm{Index} \,T_\alpha\]
\end{proof}
\begin{remark}
The canonical identifications $\mathcal{H}_1=L^2(S_M\otimes E)$ and $T_1=T_\alpha$ in the above proof are due to the fact that the kernels of the family of Dirac operators of the fibers of $\Sigma F$, twisted by $\beta_F$, form a trivial line bundle over $M$.
\end{remark}
\section{Bordism and vector bundle modification for the topological index}
Bordism invariance of the topological index follows immediately from Stokes' Theorem.
\begin{proposition}
Let $\Omega$ be a compact even dimensional Spin$^c$ manifold with boundary $M$.
Let $\tilde{E}$ be a smooth $\CC$ vector bundle on $\Omega$, and $\tilde{\alpha}$ an automorphism of $\tilde{E}$.
Denote by $(E,\alpha)$ the restriction of $(\tilde{E}, \tilde{\alpha})$ to $M$,
and by $T_\alpha=P_+^E\MM_\alpha$ the Toeplitz operator determined by $\alpha$,
\[ T_\alpha : L^2_+(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
Then
\[ (\mathrm{ch}(E, \alpha)\cup \mathrm{Td}(M))[M] = 0\]
\end{proposition}
\begin{proof}
The restriction of the Spin$^c$ vector bundle $T\Omega$ to $M$
is the direct sum of the Spin$^c$ vector bundle $TM$ with a trivial line bundle (i.e. the normal bundle).
The Todd class is a stable characteristic class.
Therefore the cohomology class $\mathrm{Td}(M)$ is the restriction of $\mathrm{Td}(\Omega)$ to $M$.
Likewise, by naturality, $\mathrm{ch}(E, \alpha)$ is the restriction to $M$ of
$\mathrm{ch}(\tilde{E}, \tilde{\alpha})$.
The proposition now follows from Stokes' Theorem.
\end{proof}
Invariance of the topological index under vector bundle modification is
\[ (\mathrm{ch}(\pi^*E\otimes \beta_F, \pi^*\alpha\otimes I)\cup \mathrm{Td}(\Sigma F))[\Sigma F] =(\mathrm{ch}(E,\alpha)\cup \mathrm{Td}(M))[M]\]
Note that
\[\mathrm{ch}(\pi^*E\otimes \beta_F, \pi^*\alpha\otimes I) = \pi^*\mathrm{ch}(E,\alpha)\cup \mathrm{ch}(\beta_F)\]
As spin$^c$ vector bundles on $\Sigma F=S(F\oplus\underline{\RR})$,
\[ \underline{\RR} \oplus T(\Sigma F) \cong \pi^*F\oplus \underline{\RR} \oplus \pi^*(TM)\]
Multiplicativity of the Todd class implies
\[ \mathrm{Td}(\Sigma F) = \pi^*\mathrm{Td}(M)\cup \pi^*\mathrm{Td}(F)\]
Therefore invariance of the topological index under vector bundle modification follows from
\begin{proposition}\label{IF}
\[\pi_!\,\mathrm{ch}(\beta_F) = \frac{1}{\mathrm{Td}(F)}\]
where $\pi_!$ is integration along the fiber of $\pi\,\colon \Sigma F\to M$.
\end{proposition}
For a proof of Proposition \ref{IF}, see section 6.2 in \cite{BvE1}.
\section{Sphere lemma}
\begin{lemma}\label{sphere}
Let $M, E, \alpha$ be as above.
If $n=\mathrm{dim}\,M$, then for every odd integer $m\ge 2n+1$
there exists a continuous map $\tilde{\alpha}:S^{m}\to \mathrm{GL}(r,\CC)$
such that
\[ \mathrm{Index}\,T_\alpha = \mathrm{Index}\,T_{\tilde{\alpha}}\]
and
\[(\ch(E,\alpha)\cup \Td(M))[M]=\ch(\tilde{\alpha})[S^m]\]
\end{lemma}
\begin{remark}
Note that $\mathrm{Td}(S^m)=1$ if $m$ is odd: the Todd class has components only in even degrees, $H^{k}(S^m)=0$ for $0<k<m$, and the top degree $m$ is odd.
\end{remark}
\begin{proof}
We shall prove the Lemma by moving, in a finite number of steps, from $(M,E,\alpha)$ to $(S^m,\underline{\CC}^r,\tilde{\alpha})$.
Each step preserves the analytic index as well as the topological index.
We denote equality of both the analytic and topological index as an equivalence $\sim$ of triples.
By the Whitney Embedding Theorem, $M$ embeds in $\RR^m$.
The 2-out-of-3 principle implies that the normal bundle $\nu$
\[ 0\to TM\to M\times \RR^m\to \nu\to 0\]
is spin$^c$ oriented.
Vector bundle modification by $\nu$ gives
\[ (M,E,\alpha)\sim (\Sigma\nu, \pi^*E\otimes \beta_\nu, \pi^*\alpha\otimes I)\]
By adding on a vector bundle with its identity automorphism, if necessary,
\[ (\Sigma\nu, \pi^*E\otimes \beta_\nu, \pi^*\alpha\otimes I) \sim (\Sigma\nu,\underline{\CC}^r,\alpha_1)\]
Since $M$ is compact we may assume that
$M$ is embedded in the interior of the unit ball of $\RR^{m}$.
Using the inclusion $\RR^{m}\to \RR^{m+1}$,
a compact tubular neighborhood of $M$ in $\RR^{m+1}$
identifies with the ball bundle $B(\nu\oplus \underline{\RR})$, whose boundary is $\Sigma \nu$.
Let $\Omega$ be the unit ball of $\RR^{m+1}$ with the interior of $B(\nu\oplus \underline{\RR})$ removed.
Then $\Omega$ is a bordism of spin$^c$ manifolds from $\Sigma \nu$ to $S^{m}$.
By Lemma \ref{L} below, there exist continuous maps
$\alpha_2:\Omega\to \mathrm{GL}(r,\CC), \alpha_3:B(\nu\oplus \underline{\RR})\to \mathrm{GL}(r,\CC)$, such that, when restricted to $\Sigma\nu$, $\alpha_3\alpha_1=\alpha_2$.
Then
\[ \mathrm{Index}\,T^{\Sigma\nu}_{\alpha_2}=\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_3\alpha_1}=\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_3}+\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_1}\]
Since $\Sigma \nu$ is the boundary of $B(\nu\oplus \underline{\RR})$ and $\alpha_3$ is invertible on $B(\nu\oplus \underline{\RR})$,
\[ \mathrm{Index}\,T^{\Sigma\nu}_{\alpha_3}=0\]
Thus
\[ \mathrm{Index}\,T^{\Sigma\nu}_{\alpha_2}=\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_1}\]
The same argument applies to the topological index, and we obtain
\[ (\Sigma\nu,\underline{\CC}^r,\alpha_1)\sim(\Sigma\nu,\underline{\CC}^r,\alpha_2)\]
Finally, the bordism $(\Omega,\alpha_2)$ gives
\[(\Sigma\nu,\underline{\CC}^r,\alpha_2)\sim(S^m,\underline{\CC}^r,\alpha_2)\]
\end{proof}
\begin{lemma}\label{L}
Let the unit ball in $\RR^n$ be the union of two compact subsets $B, \Omega$ with $\Sigma=B\cap \Omega$.
Given a continuous map $\alpha:\Sigma\to \mathrm{GL}(r,\CC)$, there exist continuous maps $\alpha_1:B\to \mathrm{GL}(r,\CC)$, $\alpha_2:\Omega\to \mathrm{GL}(r,\CC)$ such that, when restricted to $\Sigma$, $\alpha_1\alpha=\alpha_2$.
\end{lemma}
\begin{proof}
Let $E$ be the vector bundle on the unit ball in $\RR^n$
obtained by clutching trivial bundles on $B$ and $\Omega$ via $\alpha$,
\[ E = \Omega\times \CC^r\sqcup B\times \CC^r/\sim\qquad (p,v)\sim (p,\alpha(p)v)\quad p\in \Sigma\]
Because the unit ball is contractible, $E$ can be trivialized.
A trivialization of $E$ amounts to a choice of continuous maps $\alpha_1:B\to \mathrm{GL}(r,\CC)$, $\alpha_2:\Omega\to \mathrm{GL}(r,\CC)$
such that $\alpha_1(p)\alpha(p)v=\alpha_2(p)v$ for all $p\in \Sigma$, $v\in \CC^r$.
\end{proof}
\section{Proof of the index theorem}
Bott periodicity is the following statement about the homotopy groups of $\mathrm{GL}(n,\CC)$ \cite{Bo59},
\[ \pi_j\,\mathrm{GL}(n,\CC)\cong\left\{
\begin{array}{ll} \ZZ & j\;\mbox{odd}\\ 0& j\;\mbox{even} \end{array}
\right.\]
\[j=0,1,2,\dots, 2n-1\]
\begin{proposition}\label{sphere2}
Let $S^m$ be an odd dimensional sphere.
If $\alpha:S^m\to \mathrm{GL}(r,\CC)$ is a continuous map
then
\[ \ind T_\alpha = \ch(\alpha)[S^m]\]
\end{proposition}
\begin{proof}
Assume $r\ge (m+1)/2$, so that $\pi_m\mathrm{GL}(r,\CC)\cong \ZZ$.
If not, replace $r$ with $r'\ge (m+1)/2$, and $\alpha$ with $\alpha\oplus I_{r'-r}$.
Due to homotopy invariance of the index, the analytic and topological index determine maps
\[ \phi:\pi_m\,\mathrm{GL}(r,\CC)\to \ZZ\qquad \phi([\alpha]) = \mathrm{Index}\,T_\alpha\]
\[ \psi:\pi_m\,\mathrm{GL}(r,\CC)\to \QQ\qquad \psi([\alpha]) = \ch(\alpha)[S^m]\]
Note that if $\alpha_1,\alpha_2:S^m\to \mathrm{GL}(r,\CC)$ represent two elements in $\pi_m \mathrm{GL}(r,\CC)$,
their product in $\pi_m \mathrm{GL}(r,\CC)$ can be represented by the map $p\mapsto \alpha_1(p)\alpha_2(p)$.
Since
\[ \mathrm{Index}\,T_{\alpha_1\alpha_2}=\mathrm{Index}\,T_{\alpha_1}+\mathrm{Index}\,T_{\alpha_2}
\qquad \ch(\alpha_1\alpha_2)=\ch(\alpha_1)+\ch(\alpha_2)\]
it follows that $\phi:\ZZ\to \ZZ$ and $\psi:\ZZ\to \QQ$ are homomorphisms of abelian groups.
By Theorem \ref{thm:Noether}, for $f:S^1\to \CC\setminus \{0\}, f(z)=z^{-1}$,
\[ \mathrm{Index}\,T_f=\ch(f)[S^1]=1\]
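As a consistency check (a standard computation, not needed for the argument; we identify $L^2_+(S^1,S)$ with the classical Hardy space $H^2(S^1)=\overline{\mathrm{span}}\{z^k \mid k\ge 0\}$ up to finite-dimensional discrepancies, and sign conventions for the odd Chern character vary): the Toeplitz operator with symbol $f(z)=z^{-1}$ acts as the backward shift,
\[ T_f\,z^k = P_+(z^{k-1}) = \begin{cases} z^{k-1}, & k\ge 1,\\ 0, & k=0,\end{cases}\]
so $\mathrm{Ker}\,T_f=\CC\cdot 1$ and $T_f$ is surjective; hence $\mathrm{Index}\,T_f=1$, in agreement with the convention $\ch(f)[S^1]=-\frac{1}{2\pi i}\int_{S^1} f^{-1}\,df$, i.e. minus the winding number of $f$.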
Then by Lemma \ref{sphere}
there exists $\alpha:S^m\to \mathrm{GL}(r,\CC)$ with
\[ \mathrm{Index}\,T_\alpha=\ch(\alpha)[S^m]=1\]
This implies that $\phi([\alpha]) = \psi([\alpha])$
for all $[\alpha]\in \pi_m \mathrm{GL}(r,\CC)\cong \ZZ$.
\end{proof}
\begin{remark}
For an alternate approach to this proof, using Bott periodicity combined with a direct calculation, see Venugopalkrishna \cite{V72}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Thm}]
By Proposition \ref{sphere2}, Theorem \ref{Thm} holds for odd dimensional spheres $M=S^m$.
Then by Lemma \ref{sphere}, Theorem \ref{Thm} holds for all odd dimensional spin$^c$ manifolds $M$.
\end{proof}
\section{Boutet de Monvel's theorem}\label{sec:BdM}
Let $\tilde{\Omega}$ be a complex analytic manifold, and $\Omega^0\subset \tilde{\Omega}$ a relatively compact open submanifold with smooth boundary $M=\partial \Omega^0$.
Choose a defining function of the boundary $\rho:\tilde{\Omega}\to \RR$ with $\Omega^0=\rho^{-1}((-\infty,0))$, $\Omega=\rho^{-1}((-\infty,0])$, $M=\rho^{-1}(0)$, and $d\rho(p)\ne 0$ for all $p\in M$.
The boundary of $\Omega^0$ is called strictly pseudoconvex if for every $p\in M$ and every holomorphic vector $w$ that is tangent to $M$,
\[w\in T^{1,0}_pM = T^{1,0}_p\tilde{\Omega}\cap (T_pM\otimes \CC)\]
we have
\[\partial\bar{\partial}\rho(w,\bar{w})> 0\qquad \text{if}\;w\ne 0\]
Strict pseudoconvexity is biholomorphically invariant.
A domain $\Omega^0$ in $\CC^n$ that is strictly convex in the Euclidean sense is strictly pseudoconvex.
The Hardy space $H^2(M)$ is the space of $L^2$-functions on $M$ that extend to a holomorphic function on $\Omega$.
The Szeg\"o projection $S$ is the orthogonal projection
\[ S:L^2(M)\to H^2(M)\]
For a continuous map $\alpha:M\to \mathrm{GL}(r,\CC)$, let $\mathcal{M}_\alpha$ be the corresponding multiplication operator
on $L^2(M)\otimes \CC^r$.
The Toeplitz operator $\mathfrak{T}_\alpha$ is the composition of $\mathcal{M}_\alpha$
with $S\otimes I_r$,
\[ \mathfrak{T}_\alpha =(S\otimes I_r)\mathcal{M}_\alpha : H^2(M)\otimes \CC^r\to H^2(M)\otimes \CC^r\]
$\mathfrak{T}_\alpha$ is a bounded Fredholm operator.
Boutet de Monvel's theorem is (Theorem 1 in \cite{Bo79}):
\begin{theorem}\label{BdM}
\[ \ind \mathfrak{T}_\alpha = (\ch(\alpha)\cup \Td(M))[M]\]
\end{theorem}
\begin{proof}
The complex manifold $\Omega$ is a spin$^c$ manifold,
and so is its boundary $M$.
\footnote{A strictly pseudoconvex boundary $M$ is a contact manifold, with contact $1$-form $(\partial\rho-\bar{\partial}\rho)|M$.
The spin$^c$ structure of $M$ as the boundary of $\Omega$ agrees with its spin$^c$ structure as a contact manifold.}
Using the projection $P_+$ onto the positive space of the Dirac operator of $M$, as in the introduction of this paper,
we form a Toeplitz operator
\[ T_\alpha = (P_+\otimes I_r)\mathcal{M}_\alpha:L^2_+(M,S)\otimes \CC^r\to L^2_+(M,S)\otimes \CC^r\]
Theorem \ref{BdM} follows from Theorem \ref{Thm}
if we can show that
\[ \ind \mathfrak{T}_\alpha = \ind T_\alpha\]
This is Proposition 4.6 of \cite{BDT89}.
An outline of the proof given in \cite{BDT89} is as follows. According to Proposition \ref{propd} above,
\[ \ind T_\alpha = \ind T_{\tilde{\alpha}}\]
where $\tilde{\alpha}:\Omega\to M_r(\CC)$ is any continuous function whose restriction to
the boundary $M=\partial\Omega$ is $\alpha$, and $T_{\tilde{\alpha}}$ is the Toeplitz operator,
\[ T_{\tilde{\alpha}} : \Hardy\otimes \CC^r\to \Hardy\otimes \CC^r\]
Here $\Hardy$ is the null-space of the maximal closed extension of $D_\Omega$, as in section \ref{sec3}.
Following section 3 of \cite{BDT89}, we now choose a different domain for the Dirac operator of $\Omega$,
using $\bar{\partial}$-Neumann boundary conditions.
The Dirac operator of $\Omega$ is the assembled Dolbeault complex,
\[\bar{\partial}+\bar{\partial}^*:C^\infty(\Omega,S^+)\to C^\infty(\Omega,S^-)\]
with positive and negative spinor bundles
\[ S^+= \bigoplus_{k\;\text{even}}\Lambda^kT^{0,1}\Omega\qquad S^-= \bigoplus_{k\;\text{odd}}\Lambda^kT^{0,1}\Omega\]
Here $C^\infty(\Omega, S^{+/-})$ denotes the space of spinors that are smooth up to and including the boundary. Let
\[ \mathcal{A}^k := \{u\in C^\infty(\Omega,\Lambda^kT^{0,1}\Omega)\;\mid\; \iota_{ \bar{\partial} \rho} u=0\;\text{on}\;M=\partial\Omega\}\]
where, as usual, $\iota$ denotes contraction.
Let $D_N$ denote the closure of $\bar{\partial}+\bar{\partial}^*$ restricted to $\bigoplus_{k\;\text{even}} \mathcal{A}^k$. Then
\[ D_\Omega\subseteq D_N\subseteq \bar{D}_\Omega\]
\begin{comment}
In section \ref{sec3} the Dirac operator $D_\Omega$ acts on $C^\infty$-sections with compact support,
\[D_\Omega :C^\infty_c(\Omega^0,S^+)\to C^\infty_c(\Omega^0,S^-)\qquad \Omega^0=\Omega\setminus M\]
Elliptic regularity implies that the null space $\mathcal{N}(\bar{D}_\Omega)$
consists of $L^2$-sections in the null space of $D^\dagger_\Omega$,
\[ \mathcal{N}(\bar{D}_\Omega) =\mathcal{N}(D^\dagger_\Omega)\cap L^2(\Omega,S^+)\]
Therefore the range of the Calderon projection $P_\natural$ (see section \ref{sec4}) is a direct sum of $H^2(M)$ and a finite dimensional space.
In summary,
the ranges of the projections $S$ and $P_\natural$ differ by a space of finite rank,
while $P_\natural$ and $P_+$ differ by a compact operator --- and so with the notation of section \ref{sec4},
\[ \ind \mathfrak{T}_\alpha = \ind T^\natural_\alpha=\ind T_\alpha\]
\end{comment}
By the change of domain principle of section 2 of \cite{BDT89},
the index of any Toeplitz operator obtained by using the projection onto the null space $\Hardy\otimes \CC^r$,
is equal to the index of the Toeplitz operator obtained by using projection onto the null space $\mathcal{N}(D_N)\otimes \CC^r$ (Proposition 3.3 in \cite{BDT89}).
Let $\Box=D_N^*D_N$ be the complex Laplacian acting on $(0,k)$-forms with $k$ even,
likewise $\Box=D_ND_N^*$ acting on $(0,k)$-forms with $k$ odd.
By a result of J.\ Kohn \citelist{\cite{Ko63} \cite{Ko64}}, if the boundary of $\Omega$ is strictly pseudoconvex, the operator $\Box$ has compact resolvent for $k\ne 0$.
This then implies that $\mathcal{N}(D_N^*)$ is finite dimensional,
and $\mathcal{N}(D_N)$ is at most a finite dimensional perturbation of
\[ H^\omega(\Omega) = \{u\in L^2(\Omega)\;\mid\; \bar{\partial}u=0\}\]
The projection $L^2(\Omega)\to H^\omega(\Omega)$ is the Bergmann projection.
The index of any Toeplitz operator obtained by using projection onto the null space $\mathcal{N}(D_N)\otimes \CC^r$
is equal to the index of the Toeplitz operator obtained using the Bergmann projection.
Finally, the transition from the Bergmann projection (on $\Omega$) to the Szeg\"o projection $S$ (on $M=\partial\Omega$) is done by the same argument as in the proof of Proposition \ref{propd}.
In summary, the proof is
\[ P_+
\rightsquigarrow \text{Calderon}
\rightsquigarrow \mathcal{N}(\bar{D}_\Omega)
\rightsquigarrow \mathcal{N}(D_N)
\rightsquigarrow \text{Bergmann}
\rightsquigarrow \text{Szeg\"o}
\]
\end{proof}
\begin{remark}
A crucial step in the proof of Theorem \ref{BdM} is the passage from spinors to functions (i.e. the step $\mathcal{N}(D_N)
\rightsquigarrow \text{Bergmann}$),
which is done by applying a result of J.\ Kohn.
Compare this to the sheaf theoretic result that, if $\Omega$ is strictly pseudoconvex, then the sheaf cohomology $H^k(\Omega^0,\mathcal{O})$ is a finite dimensional complex vector space for $k>0$ (Proposition 4 in \cite{Gr58}).
Here $\mathcal{O}$ denotes the structure sheaf (germs of holomorphic functions) of $\Omega^0$.
$H^0(\Omega^0,\mathcal{O})$ is the space of holomorphic functions on $\Omega^0$.
$H^k(\Omega^0,\mathcal{O})$
identifies with the $k$-th homology of the Dolbeault complex.
\end{remark}
\end{document}
\begin{document}
\begin{abstract} There exists a function $f\colon \mathbb{N}\to\mathbb{N}$ such that for every positive integer $d$,
every quasi-finite field $K$ and every projective hypersurface $X$ of degree $d$ and dimension
$\ge f(d)$, the set $X(K)$ is non-empty. This is a special case of a more general result about
intersections of hypersurfaces of fixed degree in projective spaces of sufficiently high dimension over fields with finitely generated Galois groups.
\end{abstract}
\title{A Weak Chevalley-Warning Theorem for Quasi-finite Fields}
\section{Introduction}
The Chevalley-Warning theorem \cite[I,~Th.~3]{Se2} asserts that every finite field $K$ is $C_1$, i.e., for every $n\in\mathbb{N}$ and every degree $n$ hypersurface $X$ of dimension $\ge n$, the set $X(K)$ is non-empty. One might ask whether the Chevalley-Warning theorem extends to all quasi-finite fields (i.e., perfect fields with Galois group isomorphic to $\hat\mathbb{Z}$). Ax \cite{Ax}, answering a question of Serre \cite[II,~\S3]{Se1}, gave an example of a quasi-finite field which is not $C_n$ for any $n$.
In this note, we prove a weak version of Chevalley-Warning, as follows:
\begin{thm}
\label{qf}
There exists a function $f\colon \mathbb{N}\to\mathbb{N}$ such that for every positive integer $d$,
every quasi-finite field $K$, and every projective hypersurface $X$ of degree $d$ and dimension
$\ge f(d)$, the set $X(K)$ is non-empty.
\end{thm}
The proof can in principle be used to produce an explicit function $f$, but the resulting function has very rapid growth.
Of course, by Ax's example mentioned above, $f(d)$ cannot be bounded by any polynomial in $d$.
Theorem~\ref{qf} is a special case of the following more general result.
\goodbreak
\begin{thm}
\label{Linear}
There exists a function $h\colon \mathbb{N}^4\to\mathbb{N}$ such that given:
\begin{itemize}
\item any perfect field $K$, not formally real, such that $G_K$ has a generating set with $g$ elements,
\item any positive integers $d$, $e$, and $k$,
\item any sequence $d_1,\ldots,d_e$ of positive integers $\le d$,
\item any vector space $V$ over $K$ of dimension $n+1 > h(d,e,g,k)$,
\item any sequence of forms $F_1\in \mathrm{Sym}^{d_1}V^*,\ldots, F_e\in\mathrm{Sym}^{d_e}V^*$,
\end{itemize}
there exists a subspace $W\subset V$ of dimension $k$ on which all the forms $F_i$
are identically zero.
\end{thm}
Specializing to $k=1$, we get the following:
\begin{thm}
\label{General}
There exists a function $h\colon \mathbb{N}^3\to\mathbb{N}$ such that if $K$ is any perfect field, not formally real, such that $G_K$ has a generating set with $g$ elements; $d$ and $e$ are positive integers; and $X$ is an intersection of $e$
degree $\le d$ hypersurfaces in $\mathbb{P}^n$ for $n\ge h(d,e,g)$, then $X(K)$ is non-empty.
\end{thm}
We remark that there are interesting examples of fields $K$ for which $G_K$ is finitely generated but not
abelian. For instance, it is known \cite[Th.~7.5.10]{NSW}
that every $p$-adic field has a finitely generated Galois group.
For $K=\mathbb{Q}_p$, $p>2$, we can take $g=4$. For this family of fields,
Theorem~\ref{General} is due to Schmidt \cite{Sc}.
Specializing Theorem~\ref{General} to the case $e=g=1$, we obtain Theorem~\ref{qf}, since by
definition, quasi-finite fields are perfect, and a quasi-finite field cannot be formally real.
Indeed, the Brauer group of a quasi-finite field is trivial \cite[XIII, Prop.~5]{Se3}. Therefore,
the Severi-Brauer curve $x^2+y^2+z^2=0$ (\cite[X, \S7, Ex. (e)]{Se3}) has a rational point over $K$,
which means that $-1$ is a sum of two squares in $K$. Thus $K$ satisfies the hypotheses of
Theorem~\ref{General}.
The idea of the proof of Theorem~\ref{Linear} is to use a polarization argument due to Brauer \cite{Br} to reduce to the case of diagonal forms. By a Galois cohomology argument, we can further reduce to the case of Fermat hypersurfaces. These can be treated using identities introduced by Hilbert in his work on Waring's problem.
We would like to thank Kiseop Park and Olivier Wittenberg for calling our attention to relevant literature.
We begin with Fermat hypersurfaces
$$F_{d,n}:\ x_0^d+x_1^d+\cdots+x_n^d = 0.$$
\begin{thm}
If $d$ is a positive integer and $K$ is a field, not formally real,
such that $K_d := K^\times/(K^\times)^d$ is finite, then $F_{d,n}(K)$ is non-empty
whenever $n\ge|K_d|$.
\end{thm}
\begin{proof}
Let
$$\Sigma_{d,r} := (\underbrace{(K^\times)^d + \cdots + (K^\times)^d}_r)\cup\{0\}.$$
Then $\Sigma_{d,r}$ is stable by multiplication by $(K^\times)^d$ and is therefore characterized by
the image in $K_d$ of its non-zero elements. As $\Sigma_{d,r}\subseteq \Sigma_{d,r+1}$ and each $\Sigma_{d,r}$ is determined by a subset of the finite group $K_d$, the
sequence must stabilize after at most $|K_d|$ steps; hence every sum of $d$th powers of elements of $K$ can be expressed as a sum of at most $|K_d|$ such powers.
It remains to show that for every positive integer $d$, $-1$ is a sum of $d$th powers.
If the characteristic of $K$ is positive, this is obvious, so we assume that $\mathrm{char}\,K = 0$.
By hypothesis, $-1$ is a sum of squares in $K$. This implies that $\Sigma_{2,r} = K$ for $r\gg 0$. Indeed,
the condition $a_1^2+\cdots+a_n^2 = -1$ implies
$$\Bigl(\frac{c+1}2\Bigr)^2 + \sum_{i=1}^n \Bigl(a_i\Bigl(\frac{c-1}2\Bigr)\Bigr)^2 = c.$$
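To verify the identity (an elementary expansion, using only $\sum_{i=1}^n a_i^2=-1$): the left-hand side equals
$$\Bigl(\frac{c+1}2\Bigr)^2 - \Bigl(\frac{c-1}2\Bigr)^2 = \frac{(c+1)^2-(c-1)^2}{4} = \frac{4c}{4} = c,$$
so every $c\in K$ is a sum of $n+1$ squares.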
For odd $d$, it is obvious that $-1$ is a sum of $d$th powers. If for some $d$ and some $r$,
$-1\in \Sigma_{d,r}$, then we can write
$$-1 = \sum_{i=1}^r\Bigl(\sum_{j=1}^{|K_2|} a_{i,j}^2\Bigr)^d.$$
By Hilbert's identity \cite[Th.~3.4]{Na}, this implies that $-1$ is a sum of $2d$th powers
of certain $\mathbb{Q}$-linear combinations of the $a_{i,j}$.
The theorem follows by induction on the largest power of $2$ dividing $d$.
\end{proof}
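A classical concrete instance of Hilbert's identity, due to Liouville (the case $d=2$ in four variables), is
$$6\,(a_1^2+a_2^2+a_3^2+a_4^2)^2 = \sum_{1\le i<j\le 4}\bigl[(a_i+a_j)^4+(a_i-a_j)^4\bigr],$$
which expresses the square of a sum of four squares as a positive rational combination of fourth powers of linear forms; one checks it by expanding $(a_i+a_j)^4+(a_i-a_j)^4 = 2a_i^4+12a_i^2a_j^2+2a_j^4$ and summing over the six pairs.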
Next we consider diagonal hypersurfaces in general.
\begin{lem}
\label{minimum-generators}
Let $G$ be a profinite group and $H < G$ an open subgroup. If $G$ can be topologically generated by $g$ elements, then $H$ can be topologically generated by
$1+[G:H](g-1)$ elements.
\end{lem}
\begin{proof}
By compactness, it suffices to prove this when $G$ and $H$ are finite groups.
Let $G'$ denote a free group on $g$ elements and $\pi\colon G'\to G$ a surjective homomorphism.
Let $H' := \pi^{-1}(H)$. Thus $H'$ is a subgroup of $G'$ of index $[G:H]$.
Identifying $G'$ with the fundamental group of a join $X$ of $g$ circles, the quotient of the universal
cover of $X$ by $H'$ is a covering space of $X$ of degree $[G':H']$ and therefore a finite, connected, 1-dimensional CW-complex $Y$. Thus $H'$ is free, and the number of its generators is the rank of
$H_1(Y,\mathbb{Z})$. We have
$$\chi(Y) = [G':H']\,\chi(X) = [G:H]\,\chi(X) = [G:H](1-g),$$
where $\chi$ denotes the Euler characteristic. Thus $\mathrm{rank}\,H_1(Y,\mathbb{Z}) = 1-\chi(Y) = 1+[G:H](g-1)$, so $H'$ has $1+[G:H](g-1)$ generators, and the same is true of its quotient $H$.
\end{proof}
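We note in passing that for free groups the bound of Lemma~\ref{minimum-generators} is the classical Nielsen--Schreier rank formula, and it is sharp. For example (a standard computation): the kernel of the homomorphism $F(x,y)\to \mathbb{Z}/2$ sending $x\mapsto 1$ and $y\mapsto 0$ is free on the three generators $y$, $x^2$, $xyx^{-1}$, in agreement with $1+[G:H](g-1)=1+2\cdot(2-1)=3$.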
\begin{prop}
If $K$ is a perfect field such that $G_K$ can be generated by $g$ elements and $d$ is a positive integer, then
$$|K_d|\le d^{dg+1}.$$
\end{prop}
\begin{proof}
Let $L$ denote the extension of $K$ generated by $\mu_d$, the group of solutions of $x^d = 1$ in
$\bar K$. (We write $L$ rather than $K_d$, since $K_d$ already denotes the quotient $K^\times/(K^\times)^d$.)
We have Kummer isomorphisms
$$K^\times/(K^\times)^d\cong H^1(K,\mu_d)$$
and
$$L^\times/(L^\times)^d\cong H^1(L,\mu_d)\cong \mathrm{Hom}(G_{L},\mu_d).$$
Now, $G_{L}$ is the kernel of the homomorphism $G_K\to \mathrm{Gal}\,(L/K)$, whose image has
order $\le d-1$.
Applying Lemma~\ref{minimum-generators}
to $G_{L}\subset G_K$, we see that
$$|\mathrm{Hom}(G_{L},\mu_d)|\le d^{(d-1)(g-1)+1}.$$
The order of the cohomology group $H^1(\mathrm{Gal}\,(L/K),\mu_d)$ is bounded above by the number of $1$-cochains.
By the inflation-restriction sequence,
$$|K^\times/(K^\times)^d| \le |H^1(\mathrm{Gal}\,(L/K),\mu_d)|\cdot |H^1(L,\mu_d)|\le d^{dg+1}.$$
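For the record, the exponent bookkeeping in the last display is as follows: the cyclotomic Galois group has order at most $d-1$, so the number of $1$-cochains is at most $d^{\,d-1}$, and
$$d^{\,d-1}\cdot d^{\,(d-1)(g-1)+1}=d^{\,(d-1)g+1}\le d^{\,dg+1}.$$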
The proposition follows.
\end{proof}
We can now prove Theorem~\ref{Linear}.
\begin{proof}
By \cite[Th.~C]{Br}, it suffices to find a bound $N$, depending only on $d$ and $g$, such that every
diagonal homogeneous form of degree $d$ in $K$ in more than $N$ variables has a solution in $K$.
Let $n = d^{dg+1}$, $N = n^2$, and
$$F:=a_0 x_0^d+\cdots+a_{N} x_{N}^d$$
a diagonal form. If some coefficient $a_i$ is zero, then $F$ has an obvious non-trivial zero.
Otherwise, since $|K_d|\le n$ by the proposition, the $N+1=n^2+1$ coefficients lie in at most $n$ cosets of $(K^\times)^d$ in $K^\times$, so at least $n+1$ of the $a_i$ must belong to a single
coset $a (K^\times)^d$. Assume without loss of generality that $a_i = a\,c_i^d$ with $c_i\in K^\times$ for $0\le i\le n$, and let $(b_0,\ldots,b_n)$ denote a non-trivial solution of $x_0^d+\cdots+x_n^d=0$ in $K$.
Then the non-zero vector
$$(b_0/c_0,\ldots,b_n/c_n,0,\ldots,0)$$
is a non-trivial zero of $F$, since $\sum_{i=0}^n a_i(b_i/c_i)^d = a\sum_{i=0}^n b_i^d = 0$.
\end{proof}
We conclude with two examples, to show that the hypotheses on $K$ are really needed.
If $K = \mathbb{R}$ and $F:= x_1^2+x_2^2+\cdots+x_n^2$, there are no non-trivial solutions regardless of how large $n$ may be.
If $K$ is a separable closure of the rational function field $\mathbb{F}_p(t_1,t_2,\ldots)$ in infinitely many variables, then
$$F := t_1 x_1^p + t_2 x_2^p + \cdots + t_n x_n^p = 0$$
has no non-trivial solutions, regardless of how large $n$ may be, since the $t_i$ are linearly independent over
$\mathbb{F}_p(t_1^p,t_2^p,\ldots)$, and hence over $K^p$, as $K$ is separable over $\mathbb{F}_p(t_1,t_2,\ldots)$.
\end{document}
\begin{document}
\title[Convergence rates in deterministic homogenization]{Approximation of
homogenized coefficients in deterministic homogenization and convergence
rates in the asymptotic almost periodic setting}
\author{Willi J\"{a}ger}
\curraddr{W. J\"{a}ger, Interdisciplinary Center for Scientific Computing
(IWR), University of Heidelberg, Im Neuenheimer Feld 205, 69120 Heidelberg,
Germany.}
\email{[email protected]}
\author{Antoine Tambue}
\curraddr{A. Tambue, a) Department of Computing Mathematics and Physics,
Western Norway University of Applied Sciences, Inndalsveien 28, Bergen 5063,
Norway; b) Center for Research in Computational and Applied Mechanics
(CERECAM), and Department of Mathematics and Applied Mathematics, University
of Cape Town, Rondebosch 7701, South Africa; c) The African Institute for
Mathematical Sciences (AIMS) and Stellenbosch University, 6--8 Melrose Road,
Muizenberg 7945, South Africa}
\email{[email protected]}
\author{Jean Louis Woukeng}
\curraddr{J. L. Woukeng, Interdisciplinary Center for Scientific Computing
(IWR), University of Heidelberg, Im Neuenheimer Feld 205, 69120 Heidelberg,
Germany.}
\email{[email protected]}
\date{2022}
\subjclass[2000]{35B40; 46J10 }
\keywords{Rates of convergence; corrector; deterministic homogenization}
\dedicatory{Dedicated to the memory of V.V. Zhikov, 1940-2017.}
\begin{abstract}
For a homogenization problem associated to a linear elliptic operator, we
prove the existence of a distributional corrector and we find an
approximation scheme for the homogenized coefficients. We also study the
convergence rates in the asymptotic almost periodic setting, and we show
that the rates of convergence for the zero order approximation, are near
optimal. The results obtained constitute a step towards the numerical
implementation of results from the deterministic homogenization theory
beyond the periodic setting. To illustrate this, numerical simulations based
on finite volume method are provided to sustain our theoretical results.
\end{abstract}
\maketitle
\section{Introduction}
The purpose of this work is to establish the existence of a distributional
corrector in the deterministic homogenization theory for a family of second
order elliptic equations in divergence form with rapidly oscillating
coefficients, and find an approximation scheme for the homogenized
coefficients, without smoothness assumption on the coefficients. Under
additional condition, we also study the convergence rates in the asymptotic
almost periodic setting. We start with the statement of the problem (\ref
{1.1}).
Let $\mathcal{A}$ be an algebra with mean value on $\mathbb{R}^{d}$, that
is, a closed subalgebra of the $\mathcal{C}^{\ast }$-algebra of bounded
uniformly continuous real-valued functions on $\mathbb{R}^{d}$, $\mathrm{BUC}
(\mathbb{R}^{d})$, which contains the constants, is translation invariant
and is such that any of its elements possesses a mean value in the following
sense: for every $u\in \mathcal{A}$, the sequence $(u^{\varepsilon })_{\varepsilon >0}$ (where $u^{\varepsilon }(x)=u(x/\varepsilon )$) converges weakly$\ast $ in $L^{\infty }(\mathbb{R}^{d})$ to some real number $M(u)$ (called the mean value of $u$) as $\varepsilon \rightarrow 0$. The mean value can be expressed as
\begin{equation}
M(u)=\lim_{R\rightarrow \infty }\fint_{B_{R}}u(y)\,dy\quad \text{for }u\in \mathcal{A}, \label{0.1}
\end{equation}
where we have set $\fint_{B_{R}}=\frac{1}{\left\vert B_{R}\right\vert }\int_{B_{R}}$.
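As a simple illustration (ours, not taken from the sequel), in the periodic case the mean value reduces to the average over one period; for instance, for $u(y)=\sin ^{2}(2\pi y)$ in dimension $d=1$,

```latex
% Illustration (periodic case, d = 1): the mean value is the average over one period.
M(u)=\lim_{R\rightarrow \infty }\fint_{B_{R}}\sin ^{2}(2\pi y)\,dy
    =\int_{0}^{1}\sin ^{2}(2\pi y)\,dy=\frac{1}{2}.
```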
For $1\leq p<\infty $, we define the Marcinkiewicz space $\mathfrak{M}^{p}(
\mathbb{R}^{d})$ to be the set of functions $u\in L_{loc}^{p}(\mathbb{R}
^{d}) $ such that
\begin{equation*}
\limsup_{R\rightarrow \infty }\fint_{B_{R}}\left\vert u(y)\right\vert ^{p}\,dy<\infty .
\end{equation*}
Then $\mathfrak{M}^{p}(\mathbb{R}^{d})$ is a complete seminormed space
endowed with the seminorm
\begin{equation*}
\left\Vert u\right\Vert _{p}=\left( \limsup_{R\rightarrow \infty }\fint_{B_{R}}\left\vert u(y)\right\vert ^{p}\,dy\right) ^{1/p}.
\end{equation*}
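Note that $\left\Vert \cdot \right\Vert _{p}$ is indeed only a seminorm: for instance, every continuous function vanishing at infinity has, for any $\delta >0$, $|u|\leq \delta $ outside a sufficiently large ball, whence

```latex
% Why \|\cdot\|_p is only a seminorm: any u \in \mathcal{C}_0(\mathbb{R}^d) satisfies
\left\Vert u\right\Vert _{p}
  =\Big( \limsup_{R\rightarrow \infty }\fint_{B_{R}}\left\vert u(y)\right\vert ^{p}\,dy\Big) ^{1/p}=0 .
```

This is precisely the reason for introducing the space $\mathcal{N}$ and the quotient construction below.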
We denote by $B_{\mathcal{A}}^{p}(\mathbb{R}^{d})$ ($1\leq p<\infty $) the
closure of $\mathcal{A}$ in $\mathfrak{M}^{p}(\mathbb{R}^{d})$. Then for any
$u\in B_{\mathcal{A}}^{p}(\mathbb{R}^{d})$ we have that
\begin{equation}
\left\Vert u\right\Vert _{p}=\left( \lim_{R\rightarrow \infty }\fint_{B_{R}}\left\vert u(y)\right\vert ^{p}\,dy\right) ^{\frac{1}{p}}=\left( M(\left\vert u\right\vert ^{p})\right) ^{\frac{1}{p}}. \label{0.2}
\end{equation}
Consider the space $B_{\mathcal{A}}^{1,p}(\mathbb{R}^{d})=\{u\in B_{\mathcal{A}}^{p}(\mathbb{R}^{d}):\nabla _{y}u\in (B_{\mathcal{A}}^{p}(\mathbb{R}^{d}))^{d}\}$, which is a complete seminormed space with respect to the seminorm
\begin{equation*}
\left\Vert u\right\Vert _{1,p}=\left( \left\Vert u\right\Vert
_{p}^{p}+\left\Vert {\Greekmath 0272} _{y}u\right\Vert _{p}^{p}\right) ^{\frac{1}{p}}.
\end{equation*}
The Banach counterparts of the previous spaces are defined as follows. We set
$\mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d})=B_{\mathcal{A}}^{p}(\mathbb{R}
^{d})/\mathcal{N}$ where $\mathcal{N}=\{u\in B_{\mathcal{A}}^{p}(\mathbb{R}
^{d}):\left\Vert u\right\Vert _{p}=0\}$. We define $\mathcal{B}_{\mathcal{A}
}^{1,p}(\mathbb{R}^{d})$ mutatis mutandis: replace $B_{\mathcal{A}}^{p}(
\mathbb{R}^{d})$ by $\mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d})$ and $
\partial /\partial y_{i}$ by $\overline{\partial }/\partial y_{i}$, where $
\overline{\partial }/\partial y_{i}$ is defined by
\begin{equation}
\frac{\overline{\partial }}{\partial y_{i}}(u+\mathcal{N}):=\frac{\partial u}{\partial y_{i}}+\mathcal{N}\quad \text{for }u\in B_{\mathcal{A}}^{1,p}(\mathbb{R}^{d}). \label{0.3}
\end{equation}
It is important to note that $\overline{\partial }/\partial y_{i}$ is also defined as the infinitesimal generator along the $i$th coordinate direction of the strongly continuous group $\mathcal{T}(y):\mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d})\rightarrow \mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d})$, $\mathcal{T}(y)(u+\mathcal{N})=u(\cdot +y)+\mathcal{N}$. Let us denote by $\varrho :B_{\mathcal{A}}^{p}(\mathbb{R}^{d})\rightarrow \mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d})=B_{\mathcal{A}}^{p}(\mathbb{R}^{d})/\mathcal{N}$, $\varrho (u)=u+\mathcal{N}$, the canonical surjection. Note that if $u\in B_{\mathcal{A}}^{1,p}(\mathbb{R}^{d})$ then $\varrho (u)\in \mathcal{B}_{\mathcal{A}}^{1,p}(\mathbb{R}^{d})$ and, in view of (\ref{0.3}), $\frac{\overline{\partial }\varrho (u)}{\partial y_{i}}=\varrho \left( \frac{\partial u}{\partial y_{i}}\right) $.
We assume in the sequel that the algebra $\mathcal{A}$ is ergodic, that is,
any $u\in \mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d})$ that is invariant
under $(\mathcal{T}(y))_{y\in \mathbb{R}^{d}}$ is a constant in $\mathcal{B}
_{\mathcal{A}}^{p}(\mathbb{R}^{d})$, i.e., if $\left\Vert \mathcal{T}
(y)u-u\right\Vert _{p}=0$ for every $y\in \mathbb{R}^{d}$, then $\left\Vert
u-c\right\Vert _{p}=0$ for some constant $c$. Let us also recall the following
property \cite{CMP, NA}:
\begin{itemize}
\item[\textbf{(1)}] The mean value $M$, viewed as defined on $\mathcal{A}$, extends by continuity to a nonnegative continuous linear form (still denoted by $M$) on $B_{\mathcal{A}}^{p}(\mathbb{R}^{d})$. For each $u\in B_{\mathcal{A}}^{p}(\mathbb{R}^{d})$ and all $a\in \mathbb{R}^{d}$, we have $M(u(\cdot +a))=M(u)$ and $\left\Vert u\right\Vert _{p}=\left( M(\left\vert u\right\vert ^{p})\right) ^{1/p}$.
\end{itemize}
To the space $B_{\mathcal{A}}^{p}(\mathbb{R}^{d})$ we also attach the following \textit{corrector} space
\begin{equation*}
B_{\#\mathcal{A}}^{1,p}(\mathbb{R}^{d})=\{u\in W_{loc}^{1,p}(\mathbb{R}^{d}):\nabla u\in B_{\mathcal{A}}^{p}(\mathbb{R}^{d})^{d}\text{ and }M(\nabla u)=0\}.
\end{equation*}
In $B_{\#\mathcal{A}}^{1,p}(\mathbb{R}^{d})$ we identify two elements by their gradients: $u=v$ in $B_{\#\mathcal{A}}^{1,p}(\mathbb{R}^{d})$ iff $\nabla (u-v)=0$, i.e. $\left\Vert \nabla (u-v)\right\Vert _{p}=0$. We equip $B_{\#\mathcal{A}}^{1,p}(\mathbb{R}^{d})$ with the gradient norm $\left\Vert u\right\Vert _{\#,p}=\left\Vert \nabla u\right\Vert _{p}$ and obtain a Banach space \cite[Theorem 3.12]{Casado} containing $B_{\mathcal{A}}^{1,p}(\mathbb{R}^{d})$.
We now recall the notion of $\Sigma $-convergence. A sequence $(u_{\varepsilon })_{\varepsilon >0}\subset L^{p}(\Omega )$ ($1\leq p<\infty $) is said to:
\begin{itemize}
\item[(i)] \emph{weakly }$\Sigma $\emph{-converge} in $L^{p}(\Omega )$ to $u_{0}\in L^{p}(\Omega ;\mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d}))$ if, as $\varepsilon \rightarrow 0$,
\begin{equation}
\int_{\Omega }u_{\varepsilon }(x)f\left( x,\frac{x}{\varepsilon }\right) dx\rightarrow \int_{\Omega }M(u_{0}(x,\cdot )f(x,\cdot ))\,dx \label{3.1}
\end{equation}
for any $f\in L^{p^{\prime }}(\Omega ;\mathcal{A})$ ($p^{\prime }=p/(p-1)$);
\item[(ii)] \emph{strongly }$\Sigma $\emph{-converge} in $L^{p}(\Omega )$ to $u_{0}\in L^{p}(\Omega ;\mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d}))$ if (\ref{3.1}) holds and, in addition, $\left\Vert u_{\varepsilon }\right\Vert _{L^{p}(\Omega )}\rightarrow \left\Vert u_{0}\right\Vert _{L^{p}(\Omega ;\mathcal{B}_{\mathcal{A}}^{p}(\mathbb{R}^{d}))}$.
\end{itemize}
We denote (i) by ``$u_{\varepsilon }\rightarrow u_{0}$ in $L^{p}(\Omega )$-weak $\Sigma $'' and (ii) by ``$u_{\varepsilon }\rightarrow u_{0}$ in $L^{p}(\Omega )$-strong $\Sigma $''.
The main properties of the above concept are:
\begin{itemize}
\item Every bounded sequence in $L^{p}(\Omega )$ ($1<p<\infty $) possesses a subsequence that weakly $\Sigma $-converges in $L^{p}(\Omega )$.

\item If $(u_{\varepsilon })_{\varepsilon \in E}$ is a bounded sequence in $W^{1,p}(\Omega )$, then there exist a subsequence $E^{\prime }$ of $E$ and a couple $(u_{0},u_{1})\in W^{1,p}(\Omega )\times L^{p}(\Omega ;B_{\#\mathcal{A}}^{1,p}(\mathbb{R}^{d}))$ such that, along $E^{\prime }$,
\begin{align*}
u_{\varepsilon }& \rightarrow u_{0}\text{ in }W^{1,p}(\Omega )\text{-weak,} \\
\frac{\partial u_{\varepsilon }}{\partial x_{j}}& \rightarrow \frac{\partial u_{0}}{\partial x_{j}}+\frac{\partial u_{1}}{\partial y_{j}}\text{ in }L^{p}(\Omega )\text{-weak }\Sigma \ \ (1\leq j\leq d).
\end{align*}

\item If $u_{\varepsilon }\rightarrow u_{0}$ in $L^{p}(\Omega )$-weak $\Sigma $ and $v_{\varepsilon }\rightarrow v_{0}$ in $L^{q}(\Omega )$-strong $\Sigma $, then $u_{\varepsilon }v_{\varepsilon }\rightarrow u_{0}v_{0}$ in $L^{r}(\Omega )$-weak $\Sigma $, where $1\leq p,q,r<\infty $ and $\frac{1}{p}+\frac{1}{q}=\frac{1}{r}$.
\end{itemize}
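As an elementary example (stated here only for illustration), if $f\in \mathcal{A}$ then the oscillating sequence $u_{\varepsilon }(x):=f(x/\varepsilon )$ weakly $\Sigma $-converges in $L^{p}(\Omega )$ to $u_{0}(x,y)=f(y)$: for $g\in L^{p^{\prime }}(\Omega ;\mathcal{A})$,

```latex
\int_{\Omega }f\Big( \frac{x}{\varepsilon }\Big) \,g\Big( x,\frac{x}{\varepsilon }\Big) \,dx
\;\longrightarrow \;\int_{\Omega }M\big( f\,g(x,\cdot )\big) \,dx ,
```

which is (\ref{3.1}) with $u_{0}(x,\cdot )=f$. Indeed $y\mapsto f(y)g(x,y)$ belongs to $\mathcal{A}$ (an algebra), so the defining weak$\ast $ property of the mean value applies, at least for $g$ of the form $\sum_{i}\varphi _{i}(x)\psi _{i}(y)$; the general case follows by density.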
Our aim is to study the following problem:
\begin{equation}
-\nabla \cdot \left( A\left( x,\frac{x}{\varepsilon }\right) \nabla u_{\varepsilon }\right) =f\text{\ in }\Omega \text{, }u_{\varepsilon }\in H_{0}^{1}(\Omega ), \label{1.1}
\end{equation}
where $\varepsilon >0$ is a small parameter, $f\in L^{2}(\Omega )$, $\Omega $ is an open bounded subset of $\mathbb{R}^{d}$ (integer $d\geq 1$) with smooth boundary $\partial \Omega $, and $A\in \mathcal{C}(\overline{\Omega };L^{\infty }(\mathbb{R}^{d})^{d\times d})$ is a symmetric matrix satisfying
\begin{equation}
\alpha \left\vert \lambda \right\vert ^{2}\leq A(x,y)\lambda \cdot \lambda \leq \beta \left\vert \lambda \right\vert ^{2}\text{ for all }(x,\lambda )\in \overline{\Omega }\times \mathbb{R}^{d}\text{ and a.e. }y\in \mathbb{R}^{d}, \label{1.2}
\end{equation}
\begin{equation}
A(x,\cdot )\in (B_{\mathcal{A}}^{2}(\mathbb{R}^{d}))^{d\times d}\text{ for all }x\in \overline{\Omega }, \label{1.3}
\end{equation}
where $\alpha $ and $\beta $ are two positive real numbers.
It is well known that under assumption (\ref{1.2}), problem (\ref{1.1}) uniquely determines a function $u_{\varepsilon }\in H_{0}^{1}(\Omega )$. Under the additional assumption (\ref{1.3}), the following result holds.
\begin{theorem}
\label{t1.1}There exists $u_{0}\in H_{0}^{1}(\Omega )$ such that $u_{\varepsilon }\rightarrow u_{0}$ weakly in $H_{0}^{1}(\Omega )$ and strongly in $L^{2}(\Omega )$ (as $\varepsilon \rightarrow 0$), and $u_{0}$ is the unique solution of the problem
\begin{equation}
-\nabla \cdot (A^{\ast }(x)\nabla u_{0})=f\text{ in }\Omega , \label{1.4}
\end{equation}
$A^{\ast }$ being the homogenized matrix defined by
\begin{equation}
A^{\ast }(x)=M\left( A(x,\cdot )(I_{d}+\nabla _{y}\chi (x,\cdot ))\right) , \label{1.5}
\end{equation}
where $\chi =(\chi _{j})_{1\leq j\leq d}\in \mathcal{C}(\overline{\Omega };B_{\#\mathcal{A}}^{1,2}(\mathbb{R}^{d})^{d})$ is such that, for any $x\in \Omega $, $\chi _{j}(x,\cdot )$ is the unique solution (up to an additive constant depending on $x$) of the problem
\begin{equation}
\nabla _{y}\cdot \left( A(x,\cdot )(e_{j}+\nabla _{y}\chi _{j}(x,\cdot ))\right) =0\text{ in }\mathbb{R}^{d}. \label{1.6}
\end{equation}
If we set $u_{1}(x,y)=\nabla u_{0}(x)\cdot \chi (x,y)=\sum_{i=1}^{d}\frac{\partial u_{0}}{\partial x_{i}}(x)\chi _{i}(x,y)$ and assume that $u_{1}\in H^{1}(\Omega ;\mathcal{A}^{1})$ (where $\mathcal{A}^{1}=\{v\in \mathcal{A}:\nabla _{y}v\in (\mathcal{A})^{d}\}$), then, as $\varepsilon \rightarrow 0$,
\begin{equation}
u_{\varepsilon }-u_{0}-\varepsilon u_{1}^{\varepsilon }\rightarrow 0\text{ in }H^{1}(\Omega )\text{ strongly,} \label{1.7}
\end{equation}
where $u_{1}^{\varepsilon }(x)=u_{1}(x,x/\varepsilon )$ for a.e. $x\in \Omega $.
\end{theorem}
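In dimension $d=1$ (and with $A$ independent of $x$) the corrector problem (\ref{1.6}) can be solved in closed form, which may help the reader check formula (\ref{1.5}): equation (\ref{1.6}) forces $A(1+\nabla _{y}\chi )$ to be constant, and the constraint $M(\nabla _{y}\chi )=0$ identifies the constant, giving

```latex
\nabla _{y}\chi =\frac{M(A^{-1})^{-1}}{A}-1,
\qquad
A^{\ast }=M\big( A(1+\nabla _{y}\chi )\big) =M(A^{-1})^{-1},
```

i.e. the homogenized coefficient is the harmonic mean of $A$.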
\begin{remark}
\label{r1.1'}\emph{Problem (\ref{1.6}) is the }corrector problem\emph{. It yields the first order approximation }$u_{\varepsilon }(x)\approx u_{0}(x)+\varepsilon u_{1}(x,x/\varepsilon )$\emph{\ of }$u_{\varepsilon }$\emph{, as seen in (\ref{1.7}). Its solvability is addressed in the following result, which is the first main result of this work.}
\end{remark}
\begin{theorem}
\label{t4.1}Let $\xi \in \mathbb{R}^{d}$ and $x\in \overline{\Omega }$ be fixed. There exists a unique (up to an additive function of $x$) function $v_{\xi }\in \mathcal{C}(\overline{\Omega };H_{loc}^{1}(\mathbb{R}^{d}))$ such that $\nabla _{y}v_{\xi }\in \mathcal{C}(\overline{\Omega };B_{\mathcal{A}}^{2}(\mathbb{R}^{d})^{d})$ and $M(\nabla _{y}v_{\xi }(x,\cdot ))=0$, which solves the equation
\begin{equation}
\nabla _{y}\cdot \left( A(x,\cdot )(\xi +\nabla _{y}v_{\xi }(x,\cdot ))\right) =0\text{ in }\mathbb{R}^{d}. \label{4.1}
\end{equation}
\end{theorem}
The proof of Theorem \ref{t4.1} will be obtained as a consequence of Lemma \ref{l4.1} in Section 2 below. The progress compared with previously known results lies in the way the corrector problem is solved: the solution is obtained by approximation with distributional solutions of partial differential equations posed in sufficiently large balls. Since the approximation can be quantitatively controlled, this method also provides a basis for numerical computation. Theorem \ref{t4.1} is well known in the random stationary ergodic setting. However, for the general deterministic setting we believe that a detailed proof must be provided, since it also covers the framework of non-ergodic algebras.
The next step consists in finding an approximation scheme for the homogenized matrix $A^{\ast }$ (see (\ref{1.5})). This problem has been solved (for (\ref{1.1})) in the periodic setting: under the periodicity assumption the solution $\chi _{j}$ is periodic, so that the corrector problem is posed on a bounded domain, namely the periodic cell $Y=(0,1)^{d}$. A sharp contrast between the periodic setting and the general deterministic setting (as considered in this work) is that in the latter the corrector problem is posed on the whole space $\mathbb{R}^{d}$ and cannot be reduced (as in the periodic framework) to a problem on a bounded domain. As a result, the solution of the corrector problem (\ref{1.6}) (and hence the homogenized matrix, which depends on this solution) cannot be computed directly. Therefore, as in the random setting (see e.g. \cite{BP2004}), truncations of (\ref{1.6}) must be considered, posed on large domains $(-R,R)^{d}$ with appropriate boundary conditions, and the homogenized coefficients are then captured in the asymptotic regime. This is done in Theorem \ref{t3.1} (see Section 3). We then find the rate of convergence for the approximation scheme (see Theorem \ref{t3.2}).
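To fix ideas, the truncation strategy can be sketched in one dimension (the Python snippet below is our own illustration, not the paper's finite volume code, and the function names are ours): solve the corrector problem on a large interval $(-R,R)$ with Dirichlet boundary conditions and average the resulting flux. In $d=1$ the truncated problem can be integrated exactly, and for a periodic coefficient the approximate homogenized coefficient converges to the harmonic mean.

```python
import numpy as np

def homogenized_coefficient_1d(a, R, n=200001):
    """Approximate homogenized coefficient from the truncated corrector problem.

    Truncated cell problem on (-R, R):
        (a(y) (1 + chi'(y)))' = 0,   chi(-R) = chi(R) = 0.
    Hence a(1 + chi') equals a constant c; since chi(R) - chi(-R) = 0,
        c = 2R / \int_{-R}^{R} dy / a(y),
    and the averaged flux A*_R = (1/2R) \int_{-R}^{R} a(1 + chi') dy = c.
    """
    y = np.linspace(-R, R, n)
    f = 1.0 / a(y)
    # trapezoidal rule for \int_{-R}^{R} dy / a(y)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))
    return 2.0 * R / integral

# Periodic test coefficient: a(y) = 2 + cos(2 pi y), period 1.
a = lambda y: 2.0 + np.cos(2.0 * np.pi * y)
approx = homogenized_coefficient_1d(a, R=10.5)   # 21 full periods
exact = np.sqrt(3.0)                              # harmonic mean of a
print(approx, exact)
```

Here the truncation error vanishes because $(-R,R)$ contains an integer number of periods; for a genuinely almost periodic coefficient one only recovers $A^{\ast }$ in the limit $R\rightarrow \infty $, which is exactly the regime quantified by the approximation scheme.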
It is natural to determine the convergence rates for the approximation (\ref{1.7}) in two settings:
\begin{itemize}
\item[1)] the asymptotic periodic one represented by the algebra $\mathcal{A}
=\mathcal{C}_{0}(\mathbb{R}^{d})+\mathcal{C}_{per}(Y)$;
\item[2)] the asymptotic almost periodic one represented by the algebra $
\mathcal{A}=\mathcal{C}_{0}(\mathbb{R}^{d})+AP(\mathbb{R}^{d})$.
\end{itemize}
In case 1), the corrector function $\chi _{j}(x,\cdot )$ (the solution of (\ref{1.6})) belongs to the Sobolev--Besicovitch space $B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$ associated to the algebra $\mathcal{A}$ and is bounded in $L^{\infty }(\mathbb{R}^{d})$. As a result, we proceed as in the well-known periodic setting. In contrast with case 1), the corrector function in case 2) does not (in general) belong to the associated Sobolev--Besicovitch space $B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$, but only to $B_{\#\mathcal{A}}^{1,2}(\mathbb{R}^{d})$, so that information is available mainly on the gradient of the corrector. To address this issue, we use the approximate corrector $\chi _{T,j}$, the distributional solution to $-\nabla \cdot A(e_{j}+\nabla \chi _{T,j})+T^{-2}\chi _{T,j}=0$ in $\mathbb{R}^{d}$, which belongs to $B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$ as shown in Section 2. This leads to the following result, which is one of the main results of this work.
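The basic a priori bound behind the use of $\chi _{T,j}$ can be seen by formally testing its equation with $\chi _{T,j}$ itself (with the mean value $M$ playing the role of the integral) and using (\ref{1.2}) together with the Cauchy--Schwarz inequality:

```latex
T^{-2}M(|\chi _{T,j}|^{2})+\alpha \,M(|\nabla \chi _{T,j}|^{2})
\leq -M\big( Ae_{j}\cdot \nabla \chi _{T,j}\big)
\leq \beta \,M(|\nabla \chi _{T,j}|^{2})^{1/2},
```

so that $M(|\nabla \chi _{T,j}|^{2})\leq (\beta /\alpha )^{2}$ and $M(|\chi _{T,j}|^{2})\leq C(\alpha ,\beta )T^{2}$; the rigorous counterpart of this formal computation is estimate (\ref{4.3}) below.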
\begin{theorem}
\label{t1.4}Let $\Omega $ be a $\mathcal{C}^{1,1}$ bounded domain in $\mathbb{R}^{d}$. Suppose that the matrix $A(x,y)\equiv A(y)$ is asymptotic almost periodic and satisfies \emph{(\ref{1.2})}. For $f\in L^{2}(\Omega )$, let $u_{\varepsilon }$ and $u_{0}$ be the weak solutions of the Dirichlet problems \emph{(\ref{1.1})} and \emph{(\ref{1.4})}, respectively. Then there exists a function $\eta :(0,1]\rightarrow \lbrack 0,\infty )$ depending on $A$ with $\lim_{t\rightarrow 0}\eta (t)=0$ such that
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T}^{\varepsilon }\nabla u_{0}\right\Vert _{H^{1}(\Omega )}\leq C\eta (\varepsilon )\left\Vert f\right\Vert _{L^{2}(\Omega )} \label{Eq03}
\end{equation}
and
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}\right\Vert _{L^{2}(\Omega )}\leq C\left[ \eta (\varepsilon )\right] ^{2}\left\Vert f\right\Vert _{L^{2}(\Omega )}, \label{Eq02}
\end{equation}
where $T=\varepsilon ^{-1}$, $\chi _{T}$ is the approximate corrector defined by \emph{(\ref{11.5})}, and $C=C(\Omega ,A,d)$.
\end{theorem}
The precise convergence rates in case 1) are presented in the following
result.
\begin{theorem}
\label{t5.1}Suppose that $A$ is asymptotic periodic and satisfies the ellipticity conditions \emph{(\ref{1.2})} and \emph{(\ref{2.2})}. Assume $\Omega $, $f$, $u_{\varepsilon }$ and $u_{0}$ are as in Theorem \emph{\ref{t1.4}}. Denoting by $\chi $ the corrector defined by \emph{(\ref{1.6})}, there exists $C=C(\Omega ,A,d)>0$ such that
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi ^{\varepsilon }\nabla u_{0}\right\Vert _{H^{1}(\Omega )}\leq C\varepsilon ^{\frac{1}{2}}\left\Vert f\right\Vert _{L^{2}(\Omega )} \label{5.8}
\end{equation}
and
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}\right\Vert _{L^{2}(\Omega )}\leq C\varepsilon \left\Vert f\right\Vert _{L^{2}(\Omega )}. \label{1.14}
\end{equation}
\end{theorem}
Theorem \ref{t5.1} can be obtained as a special case of Theorem \ref{t1.4}. However, we provide an independent proof, since we do not need the approximate corrector in this special situation. Estimate (\ref{1.14}) is optimal.
The above results generalize the well known ones in the periodic and the
uniformly almost periodic settings as considered in \cite{Shen}. In Theorem
\ref{t5.1} we assume that the matrix $A$ has the form $A=A_{0}+A_{per}$
where $A_{0}$ has entries in $L^{2}(\Omega )$ and $A_{per}$ is periodic. In
Theorem \ref{t1.4}, we do not make any restriction on $A_{0}$ as above.
Also, the estimate (\ref{Eq02}) is near optimal. The assumptions will be
made precise in the latter sections.
The problem considered in Theorems \ref{t1.4} and \ref{t5.1} has been
firstly addressed in the periodic framework by Avellaneda and Lin \cite{AL87}
(see also \cite{Jikov}), and in the random setting (that is, for second
order linear elliptic equations with random coefficients) by Yurinskii \cite
{Yu86}, Pozhidaev and Yurinskii \cite{Po-Yu89}, and Bourgeat and Piatnitski
\cite{BP2004} (see also a recent series of works by Gloria and Otto \cite
{Gloria, GNO14, GNO15}, and the recent monograph \cite{Armstrong1}).
Although it is shown in \cite{24'''} that deterministic homogenization theory can be seen as a special case of random homogenization theory, at least as far as the qualitative study is concerned, we cannot expect to use this random formulation to address the issue of convergence rates in the deterministic setting. Indeed, in the random framework, the rate of convergence relies systematically on the \emph{uniform mixing} property (see e.g. \cite{BP2004, Po-Yu89, Yu86}) of the coefficients of the equation. As proved by Bondarenko et al. \cite{BBMM05}, almost periodic operators do not satisfy the uniform mixing property. As a result, we cannot use the random framework to address the issue in the general deterministic setting, and we therefore need to elaborate a new framework for solving the underlying problem. Beyond the periodic (but non-random) setting, Kozlov \cite{Koz79} determined the rates of convergence in almost periodic homogenization for almost periodic coefficients satisfying a \textit{frequency condition} (see e.g. (\ref{FC})). In the same vein, Bondarenko et al. \cite{BBMM05} derived rates of convergence by considering a perturbation of periodic coefficients (in dimension $d=1$). The first works that use the general almost periodicity assumption are a recent series of papers by Shen et al. \cite{AS2016, Shen, Shen1}, in which second order linear elliptic systems in divergence form are treated. They used approximate correctors to derive the rates of convergence. A reason to use approximate correctors is the lack of sufficient knowledge about the corrector itself: it is known that the gradient of the corrector is almost periodic, but it is not known in general whether the corrector itself is almost periodic. Under certain conditions, it is shown in \cite{Armstrong, Shen1} that the corrector is almost periodic. The approximate corrector, however, is in general almost periodic together with its gradient.
It seems necessary to compare our results in Theorems \ref{t1.4} and \ref{t5.1} with the existing ones in the literature. First of all, it is worth noting that the algebra of continuous asymptotic almost periodic functions is included in the Banach space of Weyl almost periodic functions; see e.g. \cite{Besicovitch}. Thus the results obtained in \cite{Shen1} might seem to generalize those in Theorems \ref{t1.4} and \ref{t5.1}. However, this is not exactly the case. Indeed, in \cite{Shen1} the rates of convergence are expressed in terms of the modulus of Weyl almost periodicity of the matrix $A$, that is, in terms of the function
\begin{equation*}
\rho _{A}^{1}(R,L)=\sup_{y\in \mathbb{R}^{d}}\inf_{\left\vert z\right\vert \leq R}\left( \sup_{x\in \mathbb{R}^{d}}\fint_{B_{L}(x)}\left\vert A(t+y)-A(t+z)\right\vert ^{2}\,dt\right) ^{\frac{1}{2}}\text{ for }R,L>0,
\end{equation*}
where $B_{L}(x)$ stands for the open ball in $\mathbb{R}^{d}$ centered at $x$ and of radius $L>0$. In our work, we distinguish two cases: 1) the asymptotic periodic case, in which we show that the rate of convergence is optimal, namely $\left\Vert u_{\varepsilon }-u_{0}\right\Vert _{L^{2}(\Omega )}=O(\varepsilon )$; 2) the general continuous asymptotic almost periodic setting, in which we show, as in \cite{Shen1}, that the rate of convergence depends on the modulus of asymptotic almost periodicity defined by
\begin{equation*}
\rho _{A}(R,L)=\sup_{y\in \mathbb{R}^{d}}\inf_{\left\vert z\right\vert \leq R}\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{L}(0))}.
\end{equation*}
As is easily seen, the comparison between $\rho _{A}^{1}(R,L)$ and $\rho _{A}(R,L)$ is not straightforward, so our result in Theorem \ref{t5.1} does not follow directly from its counterpart, Theorem 1.4 in \cite{Shen1}.
Our work combines the framework of \cite{Shen} with the general
deterministic homogenization theory introduced by Zhikov and Krivenko \cite
{Zhikov4} and Nguetseng \cite{Hom1}. Furthermore, numerical simulations
based on finite volume method are provided to sustain our main theoretical
results.
The rest of the paper is organized as follows. Section 2 is devoted to the proof of Theorems \ref{t1.1} and \ref{t4.1}. Section 3 deals with the approximation of the homogenized coefficients. In Section 4 we prove Theorem \ref{t1.4}, while in Section 5 we prove Theorem \ref{t5.1}. In Section 6 we provide some examples of concrete algebras and functions to which the results, in particular those of Theorems \ref{t3.2}, \ref{t1.4} and \ref{t5.1}, apply. Finally, in Section 7 we present numerical results illustrating the method and supporting the proposed procedure.
\section{Existence result for the corrector equation}
Let the matrix $A$ satisfy (\ref{1.2}) and (\ref{1.3}). Our aim is to solve
the corrector problem (\ref{1.6}). Let $B_{\mathcal{A}}^{2,\infty }(\mathbb{R
}^{d})=B_{\mathcal{A}}^{2}(\mathbb{R}^{d})\cap L^{\infty }(\mathbb{R}^{d})$,
which is a Banach space under the $L^{\infty }(\mathbb{R}^{d})$-norm.
\begin{lemma}
\label{l4.1}Let $h\in \mathcal{C}(\overline{\Omega };B_{\mathcal{A}
}^{2,\infty }(\mathbb{R}^{d}))$ and $H\in \mathcal{C}(\overline{\Omega };B_{
\mathcal{A}}^{2,\infty }(\mathbb{R}^{d})^{d})$. For any $T>0$, there exists
a unique function $u\in \mathcal{C}(\overline{\Omega };B_{\mathcal{A}}^{1,2}(
\mathbb{R}^{d}))$ such that
\begin{equation}
-\nabla _{y}\cdot \left( A(x,\cdot )\nabla _{y}u(x,\cdot )\right) +T^{-2}u(x,\cdot )=h(x,\cdot )+\nabla _{y}\cdot H(x,\cdot )\text{ in }\mathbb{R}^{d} \label{4.2}
\end{equation}
for any fixed $x\in \overline{\Omega }$. Moreover, the solution $u$ satisfies
\begin{equation}
\sup_{z\in \mathbb{R}^{d}}\fint_{B_{R}(z)}\left( T^{-2}\left\vert u(x,y)\right\vert ^{2}+\left\vert \nabla u(x,y)\right\vert ^{2}\right) dy\leq C\sup_{z\in \mathbb{R}^{d}}\fint_{B_{R}(z)}\left( \left\vert H(x,y)\right\vert ^{2}+T^{2}\left\vert h(x,y)\right\vert ^{2}\right) dy \label{4.3}
\end{equation}
for any $R\geq T$ and all $x\in \overline{\Omega }$, where the constant $C$
depends only on $d$, $\alpha $ and $\beta $.
\end{lemma}
\begin{proof}
Since the variable $x$ in (\ref{4.2}) behaves as a parameter, we drop it
throughout the proof of existence and uniqueness. Thus, in what follows, we
keep using the symbol $\nabla $ instead of $\nabla _{y}$ to denote the
gradient with respect to $y$, if there is no danger of confusion.

1. \textit{Existence}. Fix $R>0$ and define $v_{T,R}\equiv v_{R}\in
H_{0}^{1}(B_{R})$ as the unique solution of
\begin{equation*}
-\nabla \cdot (A\nabla v_{R})+T^{-2}v_{R}=h+\nabla \cdot H\text{ in }B_{R}.
\end{equation*}
Extending $v_{R}$ by $0$ off $B_{R}$, we obtain a sequence $(v_{R})_{R}$ in $
H_{loc}^{1}(\mathbb{R}^{d})$. Let us show that the sequence $(v_{R})_{R}$ is
bounded in $H_{loc}^{1}(\mathbb{R}^{d})$. We proceed as in \cite{Gloria}
(see also \cite{Po-Yu89}). In the variational formulation of the above
equation we choose as test function the function $\eta _{z}^{2}v_{R}$, where
$\eta _{z}(y)=\exp (-c\left\vert y-z\right\vert )$ for a fixed $z\in \mathbb{
R}^{d}$, with $c>0$ to be chosen later. We get
\begin{align*}
\int_{B_{R}}\eta _{z}^{2}A\nabla v_{R}\cdot \nabla
v_{R}+T^{-2}\int_{B_{R}}\eta _{z}^{2}v_{R}^{2}& =-2\int_{B_{R}}\eta
_{z}v_{R}A\nabla v_{R}\cdot \nabla \eta _{z}-2\int_{B_{R}}\eta
_{z}v_{R}H\cdot \nabla \eta _{z} \\
& \quad -\int_{B_{R}}\eta _{z}^{2}H\cdot \nabla v_{R}+\int_{B_{R}}h\eta
_{z}^{2}v_{R} \\
& =I_{1}+I_{2}+I_{3}+I_{4}.
\end{align*}
The left-hand side of the above equality is bounded from below by
\begin{equation*}
\alpha \int_{B_{R}}\eta _{z}^{2}\left\vert \nabla v_{R}\right\vert
^{2}+T^{-2}\int_{B_{R}}\eta _{z}^{2}v_{R}^{2},
\end{equation*}
while for the right-hand side we have the following bounds, obtained from
Young's inequality and the bounds on $A$:
\begin{align*}
\left\vert I_{1}\right\vert & \leq \frac{\alpha \beta T^{-2}}{k}
\int_{B_{R}}v_{R}^{2}\left\vert \nabla \eta _{z}\right\vert ^{2}+\frac{
T^{2}\beta k}{\alpha }\int_{B_{R}}\eta _{z}^{2}\left\vert \nabla
v_{R}\right\vert ^{2}, \\
\left\vert I_{2}\right\vert & \leq \frac{\alpha \beta T^{-2}}{k}
\int_{B_{R}}v_{R}^{2}\left\vert \nabla \eta _{z}\right\vert ^{2}+\frac{T^{2}k
}{\alpha \beta }\int_{B_{R}}\eta _{z}^{2}\left\vert H\right\vert ^{2}, \\
\left\vert I_{3}\right\vert & \leq \frac{T^{2}\beta k}{\alpha }
\int_{B_{R}}\eta _{z}^{2}\left\vert \nabla v_{R}\right\vert ^{2}+\frac{
T^{-2}\alpha }{4\beta k}\int_{B_{R}}\eta _{z}^{2}\left\vert H\right\vert
^{2}, \\
\left\vert I_{4}\right\vert & \leq \frac{\alpha \beta T^{-2}c^{2}}{k}
\int_{B_{R}}v_{R}^{2}\eta _{z}^{2}+\frac{T^{2}k}{4\alpha \beta c^{2}}
\int_{B_{R}}\eta _{z}^{2}\left\vert h\right\vert ^{2},
\end{align*}
where $k>0$ is to be chosen later. Noticing that $\left\vert \nabla \eta
_{z}\right\vert =c\eta _{z}$, we readily get from the inequalities above
\begin{eqnarray*}
&&\int_{B_{R}}\eta _{z}^{2}\left( \alpha -2\frac{T^{2}\beta k}{\alpha }
\right) \left\vert \nabla v_{R}\right\vert ^{2}+T^{-2}\int_{B_{R}}\eta
_{z}^{2}\left( 1-3\frac{\alpha \beta c^{2}}{k}\right) v_{R}^{2} \\
&\leq &\int_{B_{R}}\left[ \left( \frac{T^{2}k}{\alpha \beta }+\frac{
T^{-2}\alpha }{4\beta k}\right) \left\vert H\right\vert ^{2}+\frac{kT^{2}}{
4\alpha \beta c^{2}}\left\vert h\right\vert ^{2}\right] \eta _{z}^{2}.
\end{eqnarray*}
Choosing therefore $k=\frac{\alpha ^{2}}{4\beta T^{2}}$ and $c=\frac{1}{
2\beta T}\left( \frac{\alpha }{6}\right) ^{1/2}$, we obtain the estimate
\begin{equation}
\alpha \int_{B_{R}}\eta _{z}^{2}\left\vert \nabla v_{R}\right\vert
^{2}+T^{-2}\int_{B_{R}}\eta _{z}^{2}v_{R}^{2}\leq \int_{B_{R}}\left[ \left(
\frac{\alpha }{4\beta ^{2}}+\frac{1}{\alpha }\right) \left\vert H\right\vert
^{2}+\frac{3}{2}T^{2}\left\vert h\right\vert ^{2}\right] \eta _{z}^{2}.
\label{4.5}
\end{equation}
Inequality (\ref{4.5}) shows that the sequence $(v_{R})$ is bounded in $
H_{loc}^{1}(\mathbb{R}^{d})$. Indeed, for any compact subset $K$ of $\mathbb{
R}^{d}$ and any $R$ large enough that $K\subset B_{R}$, the left-hand side
of (\ref{4.5}) is bounded from below by $c_{K}(\alpha \int_{K}\left\vert
\nabla v_{R}\right\vert ^{2}+T^{-2}\int_{K}v_{R}^{2})$, where $c_{K}=\min_{K}
\eta _{z}^{2}>0$, while the right-hand side is bounded from above by $C\int_{
\mathbb{R}^{d}}\eta _{z}^{2}$, where
\begin{equation*}
C=\left( \frac{\alpha }{4\beta ^{2}}+\frac{1}{\alpha }\right) \left\Vert
H\right\Vert _{\mathcal{C}(\overline{\Omega };L^{\infty }(\mathbb{R}
^{d}))}^{2}+\frac{3}{2}T^{2}\left\Vert h\right\Vert _{\mathcal{C}(\overline{
\Omega };L^{\infty }(\mathbb{R}^{d}))}^{2}.
\end{equation*}
Hence there exist a subsequence of $(v_{R})$ and a function $v\in
H_{loc}^{1}(\mathbb{R}^{d})$ such that this subsequence converges weakly to $
v$ in $H_{loc}^{1}(\mathbb{R}^{d})$, and it is easy to see that $v$ is a
distributional solution of (\ref{4.2}) in $\mathbb{R}^{d}$. Taking the $
\liminf $ as $R\rightarrow \infty $ in (\ref{4.5}) yields
\begin{equation}
\alpha \int_{\mathbb{R}^{d}}\eta _{z}^{2}\left\vert \nabla v\right\vert
^{2}+T^{-2}\int_{\mathbb{R}^{d}}\eta _{z}^{2}v^{2}\leq \int_{\mathbb{R}
^{d}}\left[ \left( \frac{\alpha }{4\beta ^{2}}+\frac{1}{\alpha }\right)
\left\vert H\right\vert ^{2}+\frac{3}{2}T^{2}\left\vert h\right\vert ^{2}
\right] \eta _{z}^{2}. \label{4.6}
\end{equation}
We infer from (\ref{4.6}) that
\begin{equation}
\sup_{z\in \mathbb{R}^{d}}\frac{1}{|B_{R}(z)|}\int_{B_{R}(z)}\left(
\left\vert \nabla v\right\vert ^{2}+T^{-2}v^{2}\right) \leq C \label{e2.4}
\end{equation}
where $C$ depends on $T$ but not on $z$. Estimate (\ref{4.3}) (for $R=T$)
follows from \cite{Po-Yu89}, while the case $R>T$ is a consequence of
Caccioppoli's inequality; see \cite[Lemma 3.2]{Shen1}.
Let us show that $v\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$. It suffices
to check that $v$ solves the equation
\begin{equation}
M(A\nabla v\cdot \nabla \phi +T^{-2}v\phi )=M(h\phi -H\cdot \nabla \phi )
\text{ for all }\phi \in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d}). \label{4.7}
\end{equation}
To this end, let $\varphi \in \mathcal{C}_{0}^{\infty }(\mathbb{R}^{d})$ and
$\phi \in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$. Define, for fixed $
\varepsilon >0$, $\psi (y)=\varphi (\varepsilon y)\phi (y)$. Choosing $\psi $
as a test function in the variational formulation of (\ref{4.2}), we get
\begin{align*}
& \int_{\mathbb{R}^{d}}\left[ A\nabla v\cdot (\varepsilon \phi \nabla
\varphi (\varepsilon \cdot )+\varphi (\varepsilon \cdot )\nabla \phi
)+T^{-2}v\varphi (\varepsilon \cdot )\phi \right] dy \\
& =\int_{\mathbb{R}^{d}}\left[ h\varphi (\varepsilon \cdot )\phi -H\cdot
(\varepsilon \phi \nabla \varphi (\varepsilon \cdot )+\varphi (\varepsilon
\cdot )\nabla \phi )\right] dy.
\end{align*}
The change of variables $t=\varepsilon y$ leads (after multiplication by $
\varepsilon ^{d}$) to
\begin{align*}
& \int_{\mathbb{R}^{d}}\left[ A^{\varepsilon }(\nabla _{y}v)^{\varepsilon
}\cdot (\varepsilon \phi ^{\varepsilon }\nabla \varphi +\varphi (\nabla
_{y}\phi )^{\varepsilon })+T^{-2}v^{\varepsilon }\varphi \phi ^{\varepsilon }
\right] dt \\
& =\int_{\mathbb{R}^{d}}\left[ h^{\varepsilon }\phi ^{\varepsilon }\varphi
-H^{\varepsilon }\cdot (\varepsilon \phi ^{\varepsilon }\nabla \varphi
+\varphi (\nabla _{y}\phi )^{\varepsilon })\right] dt,
\end{align*}
where $w^{\varepsilon }(t)=w(t/\varepsilon )$ for a given function $w$.
Letting $\varepsilon \rightarrow 0$ above yields
\begin{equation*}
\int_{\mathbb{R}^{d}}M(A\nabla v\cdot \nabla \phi +T^{-2}v\phi )\varphi
\,dt=\int_{\mathbb{R}^{d}}M(h\phi -H\cdot \nabla \phi )\varphi \,dt
\end{equation*}
for all $\varphi \in \mathcal{C}_{0}^{\infty }(\mathbb{R}^{d})$ and $\phi
\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$,
which amounts to (\ref{4.7}). So, we have just shown that, if $v\in
H_{loc}^{1}(\mathbb{R}^{d})$ solves (\ref{4.2}) in the sense of
distributions in $\mathbb{R}^{d}$, then it satisfies (\ref{4.7}). Before we
proceed any further, let us first show that (\ref{4.7}) possesses a unique
solution in $B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$ up to an additive
function $w\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$ satisfying $
M(\left\vert w\right\vert ^{2})=0$. First and foremost, we recall that the
space $\mathcal{B}_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})=B_{\mathcal{A}}^{1,2}(
\mathbb{R}^{d})/\mathcal{N}$ (where $\mathcal{N}=\{u\in B_{\mathcal{A}
}^{1,2}(\mathbb{R}^{d}):\left\Vert u\right\Vert _{1,2}=0\}$) is a Hilbert
space with inner product
\begin{equation*}
(u+\mathcal{N},v+\mathcal{N})_{1,2}=M(uv+\nabla u\cdot \nabla v)\text{ for }
u,v\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d}).
\end{equation*}
If $w\in \mathcal{N}$ then $M(w)=0$, since $\left\vert M(w)\right\vert \leq
M(\left\vert w\right\vert )\leq (M(\left\vert w\right\vert
^{2}))^{1/2}=\left\Vert w\right\Vert _{2}=0$, so that $(\cdot ,\cdot )_{1,2}$
is well defined. Now, (\ref{4.7}) is equivalent to $a(v,\phi )=\ell (\phi )$
for all $\phi \in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$, where
\begin{equation*}
a(v,\phi )=M(T^{-2}v\phi +A\nabla v\cdot \nabla \phi ),\qquad \ell (\phi
)=M(h\phi -H\cdot \nabla \phi ).
\end{equation*}
The bilinear form $a(\cdot ,\cdot )$ is continuous and coercive on $\mathcal{
B}_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$, and $\ell $ is a continuous linear
form on $\mathcal{B}_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$. The Lax--Milgram
theorem then implies that $v+\mathcal{N}$ is the unique solution of (\ref{4.7}). This yields $v\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$.
2. \textit{Uniqueness}. For the uniqueness of the solution it suffices to
consider (\ref{4.2}) with $h=0$ and $H=0$. In this case (\ref{4.6}) gives
\begin{equation*}
\alpha \int_{\mathbb{R}^{d}}\eta _{z}^{2}\left\vert \nabla v\right\vert
^{2}+T^{-2}\int_{\mathbb{R}^{d}}\eta _{z}^{2}v^{2}=0,
\end{equation*}
so that $v=0$.
3. \textit{Continuity}. To investigate the continuity of $v$ with respect to
$x$, we fix $x_{0}\in \overline{\Omega }$ and let $w(x)=v(x,\cdot
)-v(x_{0},\cdot )$. Then $w(x)\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$ and
\begin{eqnarray*}
-\nabla \cdot (A(x,\cdot )\nabla w(x))+T^{-2}w(x) &=&h(x,\cdot
)-h(x_{0},\cdot )+\nabla \cdot (H(x,\cdot )-H(x_{0},\cdot )) \\
&&+\nabla \cdot ((A(x,\cdot )-A(x_{0},\cdot ))\nabla v(x_{0},\cdot )),
\end{eqnarray*}
so that, using estimate (\ref{4.3}), we find, for any $R\geq T$,
\begin{eqnarray*}
\sup_{z\in \mathbb{R}^{d}}\frac{1}{|B_{R}(z)|}\int_{B_{R}(z)}\left(
T^{-2}\left\vert w(x)\right\vert ^{2}+\left\vert \nabla w(x)\right\vert
^{2}\right) dy &\leq &CT^{2}\sup_{z\in \mathbb{R}^{d}}\frac{1}{|B_{R}(z)|}
\int_{B_{R}(z)}\left\vert h(x,y)-h(x_{0},y)\right\vert ^{2}dy \\
&&+C\sup_{z\in \mathbb{R}^{d}}\frac{1}{|B_{R}(z)|}\int_{B_{R}(z)}\left\vert
H(x,y)-H(x_{0},y)\right\vert ^{2}dy \\
&&+C\sup_{z\in \mathbb{R}^{d}}\frac{1}{|B_{R}(z)|}\int_{B_{R}(z)}\left\vert
A(x,y)-A(x_{0},y)\right\vert ^{2}\left\vert \nabla v(x_{0},y)\right\vert
^{2}dy \\
&\leq &CT^{2}\left\Vert h(x,\cdot )-h(x_{0},\cdot )\right\Vert _{L^{\infty }(
\mathbb{R}^{d})}^{2}+C\left\Vert H(x,\cdot )-H(x_{0},\cdot )\right\Vert
_{L^{\infty }(\mathbb{R}^{d})}^{2} \\
&&+C\left\Vert A(x,\cdot )-A(x_{0},\cdot )\right\Vert _{L^{\infty }(\mathbb{R
}^{d})}^{2}.
\end{eqnarray*}
Continuity is then a consequence of the estimate
\begin{eqnarray*}
&&T^{-2}\left\Vert v(x,\cdot )-v(x_{0},\cdot )\right\Vert
_{2}^{2}+\left\Vert \nabla v(x,\cdot )-\nabla v(x_{0},\cdot )\right\Vert
_{2}^{2} \\
&=&\lim_{R\rightarrow \infty }\frac{1}{|B_{R}(z)|}\int_{B_{R}(z)}\left(
T^{-2}\left\vert w(x)\right\vert ^{2}+\left\vert \nabla w(x)\right\vert
^{2}\right) dy \\
&\leq &CT^{2}\left\Vert h(x,\cdot )-h(x_{0},\cdot )\right\Vert _{L^{\infty }(
\mathbb{R}^{d})}^{2}+C\left\Vert H(x,\cdot )-H(x_{0},\cdot )\right\Vert
_{L^{\infty }(\mathbb{R}^{d})}^{2} \\
&&+C\left\Vert A(x,\cdot )-A(x_{0},\cdot )\right\Vert _{L^{\infty }(\mathbb{R
}^{d})}^{2}.
\end{eqnarray*}
\end{proof}
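As a side check, not part of the argument, the $T$-dependence of estimate (\ref{4.3}) can be observed numerically in the simplest setting: in dimension one, with $A=1$ and $H=0$ on a periodic cell, the screened equation $-u''+T^{-2}u=h$ is solved by Fourier multipliers, and the averaged energy is bounded by $T^{2}$ times the averaged datum, i.e. the constant may be taken $C=1$. The script below is our own sketch under these simplifying assumptions; all names in it are ours.

```python
import numpy as np

# Sanity check (not from the paper): screened equation -u'' + T^{-2} u = h
# on a periodic cell, i.e. A = 1 and H = 0 in (4.2), solved by Fourier
# multipliers.  We verify an averaged energy bound of type (4.3),
#   mean(T^{-2} u^2 + (u')^2) <= T^2 * mean(h^2),
# which holds with C = 1 in this trivial setting.
n, L = 512, 2 * np.pi
y = np.arange(n) * L / n
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi      # wave numbers
h = np.cos(y) + 0.3 * np.sin(3 * y)             # smooth periodic datum

for T in (1.0, 10.0, 100.0):
    u_hat = np.fft.fft(h) / (k ** 2 + T ** -2)  # invert -d^2/dy^2 + T^{-2}
    u = np.real(np.fft.ifft(u_hat))
    du = np.real(np.fft.ifft(1j * k * u_hat))   # spectral derivative u'
    lhs = np.mean(T ** -2 * u ** 2 + du ** 2)
    rhs = T ** 2 * np.mean(h ** 2)
    assert lhs <= rhs, (T, lhs, rhs)
```

Mode by mode, $(T^{-2}+k^{2})|\hat{u}_{k}|^{2}=|\hat{h}_{k}|^{2}/(k^{2}+T^{-2})\leq T^{2}|\hat{h}_{k}|^{2}$, which is exactly what the assertions confirm.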
\begin{proof}[Proof of Theorem \protect\ref{t4.1}]
1. \textit{Existence and continuity}. Let us denote by $(\chi _{T,j}(x,\cdot
))_{T\geq 1}$ (for fixed $1\leq j\leq d$) the sequence constructed in Lemma
\ref{l4.1} corresponding to $h=0$ and $H=Ae_{j}$, $e_{j}$ denoting the $j$th
vector of the canonical basis of $\mathbb{R}^{d}$. It satisfies (\ref{4.3}),
so that, by weak compactness, the sequence $(\nabla \chi _{T,j}(x,\cdot
))_{T\geq 1}$ converges weakly in $L_{loc}^{2}(\mathbb{R}^{d})^{d}$ (up to
extraction of a subsequence) to some $V_{j}(x,\cdot )\in L_{loc}^{2}(\mathbb{
R}^{d})^{d}$. From the equality $\partial ^{2}\chi _{T,j}(x,\cdot )/\partial
y_{i}\partial y_{l}=\partial ^{2}\chi _{T,j}(x,\cdot )/\partial
y_{l}\partial y_{i}$, a passage to the limit in the distributional sense
yields $\partial V_{j,i}(x,\cdot )/\partial y_{l}=\partial V_{j,l}(x,\cdot
)/\partial y_{i}$, where $V_{j}=(V_{j,i})_{1\leq i\leq d}$. This implies $
V_{j}(x,\cdot )=\nabla \chi _{j}(x,\cdot )$ for some $\chi _{j}(x,\cdot )\in
H_{loc}^{1}(\mathbb{R}^{d})$. Using the boundedness of $(T^{-1}\chi
_{T,j}(x,\cdot ))_{T\geq 1}$ in $L_{loc}^{2}(\mathbb{R}^{d})$, we pass to
the limit in the variational formulation of (\ref{4.2}) (as $T\rightarrow
\infty $) and obtain that $\chi _{j}$ solves (\ref{4.1}). Arguing exactly as
in the proof of (\ref{4.7}) in Lemma \ref{l4.1}, we arrive at $V_{j}(x,\cdot
)\in B_{\mathcal{A}}^{2}(\mathbb{R}^{d})^{d}$. Also, since $\chi
_{T,j}(x,\cdot )\in B_{\mathcal{A}}^{1,2}(\mathbb{R}^{d})$, we have $M(\nabla
\chi _{T,j}(x,\cdot ))=0$, hence $M(\nabla \chi _{j}(x,\cdot ))=0$. We
repeat the proof of Part 3 of the previous lemma to find that $\nabla
_{y}\chi _{j}\in \mathcal{C}(\overline{\Omega };B_{\mathcal{A}}^{2}(\mathbb{R
}^{d})^{d})$.
2. \textit{Uniqueness} (of $\nabla _{y}\chi _{j}$). Fix $x\in \overline{
\Omega }$ and assume that $\chi _{j}(x,\cdot )\in H_{loc}^{1}(\mathbb{R}
^{d}) $ is such that $-\operatorname{div}(A(x,\cdot )\nabla _{y}\chi
_{j}(x,\cdot ))=0$ in $\mathbb{R}^{d}$ and $\nabla _{y}\chi _{j}(x,\cdot
)\in B_{\mathcal{A}}^{2}(\mathbb{R}^{d})^{d}$. Then it follows from \cite[Property (3.10)]{Shen} that, given $0<\sigma <1$, there exists $C_{\sigma
}>0 $, independent of $r$ and $R$, such that
\begin{equation}
\mathchoice {{\setbox0=\hbox{$\displaystyle{\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle
-}{\int}$ } \vcenter{\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle{\scriptstyle -}{\int}$ } \vcenter{\hbox{$\scriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptstyle{\scriptscriptstyle -}{\int}$
} \vcenter{\hbox{$\scriptscriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptscriptstyle{\scriptscriptstyle
-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle -$ }}\kern-.6\wd0}}
\!\int_{B_{r}}\left\vert {\Greekmath 0272} _{y}{\Greekmath 011F} _{j}(x,y)\right\vert ^{2}dy\leq
C_{{\Greekmath 011B} }\left( \frac{r}{R}\right) ^{{\Greekmath 011B} }
\mathchoice {{\setbox0=\hbox{$\displaystyle{\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle
-}{\int}$ } \vcenter{\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle{\scriptstyle -}{\int}$ } \vcenter{\hbox{$\scriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptstyle{\scriptscriptstyle -}{\int}$
} \vcenter{\hbox{$\scriptscriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptscriptstyle{\scriptscriptstyle
-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle -$ }}\kern-.6\wd0}}
\!\int_{B_{R}}\left\vert {\Greekmath 0272} _{y}{\Greekmath 011F} _{j}(x,y)\right\vert ^{2}dy\RIfM@\expandafter\text@\else\expandafter\mbox\fi{
for all }0<r<R. \label{04}
\end{equation}
Next, since $-\operatorname{div}(A(x,\cdot )\nabla _{y}\chi _{j}(x,\cdot
))=0 $ in $\mathbb{R}^{d}$ and $\nabla _{y}\chi _{j}(x,\cdot )\in B_{
\mathcal{A}}^{2}(\mathbb{R}^{d})^{d}$, we show as for (\ref{4.7}) that
\begin{equation}
M(A(x,\cdot )\nabla _{y}\chi _{j}(x,\cdot )\cdot \nabla _{y}\phi )=0\text{
for all }\phi \in B_{\#\mathcal{A}}^{1,2}(\mathbb{R}^{d}). \label{05}
\end{equation}
Choosing $\phi =\chi _{j}(x,\cdot )$ in (\ref{05}) and using the ellipticity
of $A$, we obtain $M(\left\vert \nabla _{y}\chi _{j}(x,\cdot )\right\vert
^{2})=0$, that is, $\lim_{R\rightarrow \infty }\frac{1}{|B_{R}|}
\int_{B_{R}}\left\vert \nabla _{y}\chi _{j}(x,y)\right\vert ^{2}dy=0$.
Coming back to (\ref{04}) and letting $R\rightarrow \infty $ there, we are
led to $\int_{B_{r}}\left\vert \nabla _{y}\chi _{j}(x,y)\right\vert ^{2}dy=0$
for all $r>0$. This gives $\nabla _{y}\chi _{j}(x,\cdot )=0$.
\end{proof}
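The passage to the limit $T\rightarrow \infty $ in Part 1 above can also be watched numerically. The following sketch (our own illustration, under the simplifying assumption $d=1$ with a smooth periodic coefficient $a$) computes finite-difference approximations of $\chi _{T}$ from $-(a(1+\chi _{T}'))'+T^{-2}\chi _{T}=0$ and checks that the mean of the flux $a(1+\chi _{T}')$ approaches the harmonic mean of $a$, which is the one-dimensional homogenized coefficient, as $T$ grows.

```python
import numpy as np

# Illustration (not part of the proof): approximate correctors chi_T of
#   -(a (1 + chi_T'))' + T^{-2} chi_T = 0   (periodic, d = 1)
# computed by finite differences.  The mean of the flux a(1 + chi_T')
# approaches the harmonic mean of a as T -> infinity.
n, L = 400, 1.0
dy = L / n
a = 2.0 + np.sin(2 * np.pi * (np.arange(n) + 0.5) * dy)  # edge coefficients

# Periodic stiffness matrix K and load b of -(a chi')' = a'.
K = np.zeros((n, n))
b = np.zeros(n)
for e in range(n):
    i, j = e, (e + 1) % n
    K[i, i] += a[e] / dy; K[j, j] += a[e] / dy
    K[i, j] -= a[e] / dy; K[j, i] -= a[e] / dy
    b[i] += a[e]; b[j] -= a[e]

hm = 1.0 / np.mean(1.0 / a)                  # harmonic mean of a
errs = []
for T in (1.0, 10.0, 100.0):
    # lumped mass term (dy / T^2) * I screens the problem as in (4.2)
    chi = np.linalg.solve(K + (dy / T ** 2) * np.eye(n), b)
    q = a * (1.0 + (np.roll(chi, -1) - chi) / dy)  # flux a(1 + chi_T')
    errs.append(abs(np.mean(q) - hm))
assert errs[0] > errs[1] > errs[2]           # convergence as T grows
assert errs[2] < 1e-4
```

The screening term makes the discrete system invertible for every finite $T$, mirroring the role of $T^{-2}u$ in (\ref{4.2}); as it vanishes, the flux mean stabilizes to the homogenized value.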
We can now prove Theorem \ref{t1.1}.
\begin{proof}[Proof of Theorem \protect\ref{t1.1}]
Let $\Phi _{{\Greekmath 0122} }={\Greekmath 0120} _{0}+{\Greekmath 0122} {\Greekmath 0120} _{1}^{{\Greekmath 0122} }$
with ${\Greekmath 0120} _{1}^{{\Greekmath 0122} }(x)={\Greekmath 0120} _{1}(x,x/{\Greekmath 0122} )$ ($x\in \Omega
$), where ${\Greekmath 0120} _{0}\in \mathcal{C}_{0}^{\infty }(\Omega )$ and ${\Greekmath 0120}
_{1}\in \mathcal{C}_{0}^{\infty }(\Omega )\otimes \mathcal{A}^{\infty }$, $
\mathcal{A}^{\infty }=\{u\in \mathcal{A}:D^{{\Greekmath 010B} }u\in \mathcal{A}$ for
all ${\Greekmath 010B} \in \mathbb{N}^{d}\}$. Taking $\Phi _{{\Greekmath 0122} }$ (wich
belongs to $\mathcal{C}_{0}^{\infty }(\Omega )$) as a test function in the
variational formulation of (\ref{1.1}) yields
\begin{equation}
\int_{\Omega }A^{{\Greekmath 0122} }{\Greekmath 0272} u_{{\Greekmath 0122} }\cdot {\Greekmath 0272} \Phi
_{{\Greekmath 0122} }dx=\int_{\Omega }f\Phi _{{\Greekmath 0122} }dx. \label{4.6'}
\end{equation}
It is not difficult to see that the sequence $(u_{\varepsilon
})_{\varepsilon >0}$ is bounded in $H_{0}^{1}(\Omega )$, so that, given an
ordinary sequence $E\subset \mathbb{R}_{+}^{\ast }$, there exist a couple $
(u_{0},u_{1})\in H_{0}^{1}(\Omega )\times L^{2}(\Omega ;B_{\#\mathcal{A}
}^{1,2}(\mathbb{R}^{d}))$ and a subsequence $E^{\prime }$ of $E$ such that,
as $E^{\prime }\ni \varepsilon \rightarrow 0$,
\begin{equation*}
u_{\varepsilon }\rightarrow u_{0}\text{ in }H_{0}^{1}(\Omega )\text{-weak
and in }L^{2}(\Omega )\text{-strong,}
\end{equation*}
\begin{equation}
\nabla u_{\varepsilon }\rightarrow \nabla u_{0}+\nabla _{y}u_{1}\text{ in }
L^{2}(\Omega )^{d}\text{-weak }\Sigma . \label{4.7'}
\end{equation}
On the other hand,
\begin{equation}
\nabla \Phi _{\varepsilon }=\nabla \psi _{0}+(\nabla _{y}\psi
_{1})^{\varepsilon }+\varepsilon (\nabla \psi _{1})^{\varepsilon
}\rightarrow \nabla \psi _{0}+\nabla _{y}\psi _{1}\text{ in }L^{2}(\Omega
)^{d}\text{-strong }\Sigma . \label{4.8'}
\end{equation}
Passing to the limit in (\ref{4.6'}) thus yields the limit problem
\begin{equation}
\int_{\Omega }M\left( A(\nabla u_{0}+\nabla _{y}u_{1})\cdot (\nabla \psi
_{0}+\nabla _{y}\psi _{1})\right) dx=\int_{\Omega }f\psi _{0}dx\ \ \forall
(\psi _{0},\psi _{1})\in \mathcal{C}_{0}^{\infty }(\Omega )\times (\mathcal{C
}_{0}^{\infty }(\Omega )\otimes \mathcal{A}^{\infty }). \label{4.9'}
\end{equation}
Problem (\ref{4.9'}) above is equivalent to the system
\begin{equation}
\int_{\Omega }M\left( A(\nabla u_{0}+\nabla _{y}u_{1})\cdot \nabla \psi
_{0}\right) dx=\int_{\Omega }f\psi _{0}dx\ \ \forall \psi _{0}\in \mathcal{C}
_{0}^{\infty }(\Omega ), \label{4.10'}
\end{equation}
\begin{equation}
\int_{\Omega }M\left( A(\nabla u_{0}+\nabla _{y}u_{1})\cdot \nabla _{y}\psi
_{1}\right) dx=0\ \ \forall \psi _{1}\in \mathcal{C}_{0}^{\infty }(\Omega
)\otimes \mathcal{A}^{\infty }. \label{4.11'}
\end{equation}
Taking in (\ref{4.11'}) $\psi _{1}(x,y)=\varphi (x)v(y)$ with $\varphi \in
\mathcal{C}_{0}^{\infty }(\Omega )$ and $v\in \mathcal{A}^{\infty }$, we get
\begin{equation}
M\left( A(x,\cdot )(\nabla u_{0}+\nabla _{y}u_{1})\cdot \nabla _{y}v\right)
=0\ \ \forall v\in \mathcal{A}^{\infty },\ x\in \overline{\Omega },
\label{4.12'}
\end{equation}
which is, thanks to the density of $\mathcal{A}^{\infty }$ in $B_{\mathcal{A}
}^{1,2}(\mathbb{R}^{d})$, the weak form of
\begin{equation}
\nabla _{y}\cdot \left( A(x,\cdot )(\nabla u_{0}+\nabla _{y}u_{1})\right) =0
\text{ in }\mathbb{R}^{d}\text{ (for each fixed }x\in \overline{\Omega }
\text{)}, \label{4.13'}
\end{equation}
with respect to the duality defined by (\ref{4.12'}). So fix $\xi \in
\mathbb{R}^{d}$ and consider the problem
\begin{equation}
\nabla _{y}\cdot \left( A(x,\cdot )(\xi +\nabla _{y}v_{\xi }(x,\cdot
))\right) =0\text{ in }\mathbb{R}^{d};\quad v_{\xi }(x,\cdot )\in B_{\#
\mathcal{A}}^{1,2}(\mathbb{R}^{d}). \label{4.14'}
\end{equation}
Thanks to Theorem \ref{t4.1}, equation (\ref{4.14'}) possesses a unique
solution $v_{\xi }$ (up to an additive constant depending on $x$) in $
\mathcal{C}(\overline{\Omega };B_{\#\mathcal{A}}^{1,2}(\mathbb{R}^{d}))$.
Choosing there $\xi =\nabla u_{0}(x)$, the uniqueness of the solution
implies $u_{1}(x,y)=\chi (x,y)\cdot \nabla u_{0}(x)$, where $\chi =(\chi
_{j})_{1\leq j\leq d}$ with $\chi _{j}=v_{e_{j}}$, $e_{j}$ being the $j$th
vector of the canonical basis of $\mathbb{R}^{d}$. Replacing $u_{1}$ by $
\chi \cdot \nabla u_{0}$ in (\ref{4.10'}), we get
\begin{equation*}
\int_{\Omega }M\left( A(I+\nabla _{y}\chi )\nabla u_{0}\right) \cdot \nabla
\psi _{0}\,dx=\int_{\Omega }f\psi _{0}\,dx\ \ \forall \psi _{0}\in \mathcal{C
}_{0}^{\infty }(\Omega ),
\end{equation*}
that is, $-\nabla \cdot (A^{\ast }(x)\nabla u_{0})=f$ in $\Omega $.
It remains to verify (\ref{1.7}). Define $\Phi _{\varepsilon
}(x)=u_{0}(x)+\varepsilon u_{1}(x,x/\varepsilon )$. Then using (\ref{1.2})
we obtain
\begin{align*}
\alpha \int_{\Omega }\left\vert \nabla u_{\varepsilon }-\nabla \Phi
_{\varepsilon }\right\vert ^{2}dx& \leq \int_{\Omega }A^{\varepsilon }\nabla
(u_{\varepsilon }-\Phi _{\varepsilon })\cdot \nabla (u_{\varepsilon }-\Phi
_{\varepsilon })dx \\
& =\int_{\Omega }f(u_{\varepsilon }-\Phi _{\varepsilon })dx-\int_{\Omega
}A^{\varepsilon }\nabla \Phi _{\varepsilon }\cdot \nabla (u_{\varepsilon
}-\Phi _{\varepsilon })dx.
\end{align*}
Since $u_{1}\in L^{2}(\Omega ;\mathcal{A}^{1})$, we have $\int_{\Omega
}f(u_{\varepsilon }-\Phi _{\varepsilon })dx\rightarrow 0$; indeed $\Phi
_{\varepsilon }\rightarrow u_{0}$ in $L^{2}(\Omega )$, and hence $
u_{\varepsilon }-\Phi _{\varepsilon }\rightarrow 0$ in $L^{2}(\Omega )$.
Next observe that $\nabla \Phi _{\varepsilon }\rightarrow \nabla
u_{0}+\nabla _{y}u_{1}$ in $L^{2}(\Omega )$-strong $\Sigma $; in fact, $
\nabla \Phi _{\varepsilon }=\nabla u_{0}+\varepsilon (\nabla
u_{1})^{\varepsilon }+(\nabla _{y}u_{1})^{\varepsilon }$, and since $\nabla
_{y}u_{1}\in L^{2}(\Omega ;\mathcal{A})$, we obtain $(\nabla
_{y}u_{1})^{\varepsilon }\rightarrow \nabla _{y}u_{1}$ in $L^{2}(\Omega )$
-strong $\Sigma $. It readily follows that $\nabla u_{\varepsilon }-\nabla
\Phi _{\varepsilon }\rightarrow 0$ in $L^{2}(\Omega )$-weak $\Sigma $.
Using $A$ as a test function, $\int_{\Omega }A^{\varepsilon }\nabla \Phi
_{\varepsilon }\cdot \nabla (u_{\varepsilon }-\Phi _{\varepsilon
})dx\rightarrow 0$. We have thus shown that $u_{\varepsilon
}-u_{0}-\varepsilon u_{1}^{\varepsilon }\rightarrow 0$ in $L^{2}(\Omega )$
and $\nabla (u_{\varepsilon }-u_{0}-\varepsilon u_{1}^{\varepsilon })=\nabla
u_{\varepsilon }-\nabla \Phi _{\varepsilon }\rightarrow 0$ in $L^{2}(\Omega
) $. This proves (\ref{1.7}) and completes the proof of Theorem \ref{t1.1}.
\end{proof}
We assume henceforth that the matrix $A$ does not depend on $x$, that is, $
A(x,y)=A(y)$. Let $\chi _{T}=(\chi _{T,j})_{1\leq j\leq d}$ be defined by (
\ref{e01}).
\begin{lemma}
\label{l11.1}Let $T\geq 1$ and $\sigma \in (0,1)$. Assume that $A\in (
\mathcal{A})^{d\times d}$. There exist positive constants $C=C(A,d)$ and $
C_{\sigma }=C_{\sigma }(d,\sigma ,A)$ such that
\begin{equation}
T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C,
\label{e5.6}
\end{equation}
\begin{equation}
\sup_{x\in \mathbb{R}^{d}}\left( \fint_{B_{r}(x)}\left\vert \nabla \chi
_{T}\right\vert ^{2}dy\right) ^{\frac{1}{2}}\leq C_{\sigma }\left( \frac{T}{
r}\right) ^{\sigma }\ \text{ for }0<r\leq T,  \label{e5.7}
\end{equation}
\begin{equation}
\left\vert \chi _{T}(x)-\chi _{T}(y)\right\vert \leq C_{\sigma }T^{1-\sigma
}\left\vert x-y\right\vert ^{\sigma }\ \text{ for }\left\vert
x-y\right\vert \leq T.  \label{e5.8}
\end{equation}
\end{lemma}
\begin{proof}
Let us first check (\ref{e5.6}). From the inequality (\ref{4.3}) we deduce
that
\begin{equation}
\sup_{z\in \mathbb{R}^{d},\,R\geq T}\left( \fint_{B_{R}(z)}\left\vert \chi
_{T}\right\vert ^{2}\right) ^{\frac{1}{2}}\leq CT,  \label{e5.9}
\end{equation}
where $C$ depends only on $d$, $\alpha $ and $\beta $. Now fix $
z=(z_{i})_{1\leq i\leq d}$ in $\mathbb{R}^{d}$ and define
\begin{equation}
u(y)=\chi _{T,j}(y)+y_{j}-z_{j},\quad y\in \mathbb{R}^{d}.  \label{e5.11}
\end{equation}
Then $u$ solves the equation
\begin{equation}
\nabla \cdot (A\nabla u)=T^{-2}\chi _{T,j}\ \text{ in }\mathbb{R}^{d}.
\label{e5.12}
\end{equation}
Using the De Giorgi-Nash estimates, we obtain
\begin{align*}
\sup_{B_{T}(z)}\left\vert u\right\vert &\leq C\left[ \left(
\fint_{B_{2T}(z)}\left\vert u\right\vert ^{2}\right) ^{\frac{1}{2}
}+T^{2}\left( \fint_{B_{2T}(z)}\left\vert T^{-2}\chi _{T,j}\right\vert
^{2}\right) ^{\frac{1}{2}}\right] \\
&\leq CT+C\sup_{x\in \mathbb{R}^{d}}\left( \fint_{B_{2T}(x)}\left\vert
\chi _{T,j}\right\vert ^{2}\right) ^{\frac{1}{2}}\leq CT,
\end{align*}
where $C=C(d,A)$. It follows that $\left\vert \chi _{T,j}(z)\right\vert
\leq CT$, and (\ref{e5.6}) follows. Concerning (\ref{e5.8}), one uses Schauder
estimates: if $v\in H_{\mathrm{loc}}^{1}(\mathbb{R}^{d})$ is a weak
solution of $-\nabla \cdot (A\nabla v)=h+\nabla \cdot H$ in $B_{2R}(x_{0})$,
then for each $\sigma \in (0,1)$ and for all $x,y\in B_{R}(x_{0})$,
\begin{align}
\left\vert v(x)-v(y)\right\vert \leq C\left\vert x-y\right\vert ^{\sigma }
&\left[ R^{-\sigma }\left( \fint_{B_{2R}(x_{0})}\left\vert v\right\vert
^{2}\right) ^{\frac{1}{2}}+\sup_{\substack{ z\in B_{R}(x_{0}) \\ 0<r<R}}
r^{2-\sigma }\left( \fint_{B_{r}(z)}\left\vert h\right\vert ^{2}\right) ^{
\frac{1}{2}}\right.  \label{e5.10} \\
&\left. \ +\sup_{\substack{ z\in B_{R}(x_{0}) \\ 0<r<R}}r^{1-\sigma
}\left( \fint_{B_{r}(z)}\left\vert H\right\vert ^{2}\right) ^{\frac{1}{2}
}\right],  \notag
\end{align}
where $C=C(\sigma ,A)$ (see e.g. \cite{Giaquinta} or \cite[Theorem 3.4]{Shen}).
Assume $x,y\in \mathbb{R}^{d}$ with $\left\vert x-y\right\vert \leq T$.
Applying (\ref{e5.10}) with $2R=T$, $h=T^{-2}\chi _{T,j}$, $H=Ae_{j}$, $
v=\chi _{T,j}$ and $x_{0}=0$, we get
\begin{align*}
\left\vert \chi _{T,j}(x)-\chi _{T,j}(y)\right\vert &\leq C\left\vert
x-y\right\vert ^{\sigma }\left( T^{-\sigma }\left\Vert \chi
_{T,j}\right\Vert _{L^{\infty }}+T^{2-\sigma }\left\Vert T^{-2}\chi
_{T,j}\right\Vert _{L^{\infty }}+T^{1-\sigma }\left\Vert A\right\Vert
_{L^{\infty }}\right) \\
&\leq CT^{1-\sigma }\left\vert x-y\right\vert ^{\sigma },
\end{align*}
where we have used (\ref{e5.6}) in the last inequality. To obtain (\ref{e5.7}),
we use Caccioppoli's inequality for $-\nabla \cdot (A\nabla \chi
_{T,j})+T^{-2}\chi _{T,j}=\nabla \cdot (Ae_{j})$ in $B_{2r}(x)$ together
with (\ref{e5.8}) to get
\begin{align*}
\fint_{B_{r}(x)}\left\vert \nabla \chi _{T,j}(y)\right\vert ^{2}dy &\leq
Cr^{-2}\fint_{B_{2r}(x)}\left\vert \chi _{T,j}(y)-\chi
_{T,j}(x)\right\vert ^{2}dy+C\fint_{B_{2r}(x)}\left\vert A\right\vert
^{2}dy \\
&\leq Cr^{-2}(T^{1-\sigma }r^{\sigma })^{2}+C\leq C\left( \frac{T^{1-\sigma
}}{r^{1-\sigma }}\right) ^{2}\quad \text{since }0<r\leq T.
\end{align*}
Then (\ref{e5.7}) follows upon replacing $\sigma $ by $1-\sigma $. This
finishes the proof.
\end{proof}
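The bounds of the lemma can be observed numerically in dimension one. The sketch below is ours and purely illustrative: the coefficient $a(y)=2+\sin(2\pi y)$, the truncation of the whole-space problem to a large interval with zero Dirichlet data, and the finite-volume discretization are all our own choices, not part of the paper. It solves the regularized corrector equation $-(a(y)(1+\chi_T'))'+T^{-2}\chi_T=0$ and prints the ratios $T^{-1}\sup|\chi_T|$, which stay bounded, in line with (e5.6).

```python
import numpy as np

def regularized_corrector_1d(a, T, L=24.0, n=1200):
    """Solve -(a(y)(1 + chi'))' + T^{-2} chi = 0 on (-L/2, L/2), chi = 0 at
    the endpoints, as a proxy for the whole-space regularized corrector.
    Finite-volume discretization, coefficient sampled at cell midpoints."""
    h = L / n
    y = -L/2 + h*(np.arange(n) + 0.5)        # cell midpoints
    am = a(y)                                # a_{i+1/2}, i = 0..n-1
    # interior unknowns chi_1 .. chi_{n-1}; row i of the weak form reads
    # (a_{i+1/2}+a_{i-1/2}+h^2/T^2) chi_i - a_{i+1/2} chi_{i+1}
    #   - a_{i-1/2} chi_{i-1} = h (a_{i+1/2}-a_{i-1/2})
    main = am[1:] + am[:-1] + (h**2)/T**2    # stiffness + zeroth-order term
    off = -am[1:-1]
    K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = h*np.diff(am)                      # source -(a)' tested against hats
    chi = np.zeros(n + 1)
    chi[1:-1] = np.linalg.solve(K, rhs)
    return y, chi

a = lambda y: 2.0 + np.sin(2.0*np.pi*y)
for T in (1.0, 2.0, 4.0, 8.0):
    _, chi = regularized_corrector_1d(a, T)
    print(T, np.abs(chi).max() / T)          # bounded, as in (e5.6)
```

In this smooth periodic example the corrector itself stays bounded, so the ratio even decays like $T^{-1}$; the lemma only guarantees boundedness.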
The next result will be used in the forthcoming sections. It involves the
Green's function $G:\mathbb{R}^{d}\times \mathbb{R}^{d}\rightarrow \mathbb{R}
$, the solution of
\begin{equation}
-\nabla _{x}\cdot \left( A(x)\nabla _{x}G(x,y)\right) =\delta _{y}(x)\
\text{ in }\mathbb{R}^{d}.  \label{10.1}
\end{equation}
The properties of $G$ involve the weak-$L^{2}$ space $L^{2,\infty }(\mathbb{
R}^{d})$ (see \cite[Chapter 1]{BL76} for its definition) together with its
topological dual $L^{2,1}(\mathbb{R}^{d})$ (see \cite{Tar07}).
\begin{proposition}
\label{p10.1}Assume that the matrix $A\in L^{\infty }(\mathbb{R}
^{d})^{d\times d}$ is uniformly elliptic (see \emph{(\ref{1.2})}) and
symmetric. Then equation \emph{(\ref{10.1})} has a unique solution in $
L^{\infty }(\mathbb{R}_{y}^{d};W_{\mathrm{loc}}^{1,1}(\mathbb{R}_{x}^{d}))$
satisfying:
\begin{itemize}
\item[(i)] $G(\cdot ,y)\in W_{\mathrm{loc}}^{1,2}(\mathbb{R}^{d}\backslash
\{y\})$ for all $y\in \mathbb{R}^{d}$;
\item[(ii)] there exists $C=C(d)>0$ such that
\begin{equation}
\left\Vert \nabla _{y}G(x,\cdot )\right\Vert _{L^{2,\infty }(\mathbb{R}
^{d})}\leq C,  \label{10.2}
\end{equation}
\begin{equation}
\left\vert G(x,y)\right\vert \leq \left\{
\begin{array}{l}
C(1+\left\vert \log \left\vert x-y\right\vert \right\vert )\ \text{ if }
d=2, \\
C\left\vert x-y\right\vert ^{2-d}\ \text{ if }d\geq 3,
\end{array}
\right. \quad \text{for all }x,y\in \mathbb{R}^{d}\text{ with }x\neq y,
\label{10.3}
\end{equation}
\begin{equation}
\int_{B_{2R}(x)\backslash B_{R}(x)}\left\vert \nabla
_{y}G(x,y)\right\vert ^{q}dy\leq \frac{C}{R^{d(q-1)-q}}\ \text{ for all }
R>0\text{ and }1\leq q\leq 2.  \label{2.8}
\end{equation}
\noindent If $A$ has H\"{o}lder continuous entries, then for $d\geq 3$ and
for all $x,y\in \mathbb{R}^{d}$ with $x\neq y$,
\begin{equation}
\left\vert \nabla _{y}G(x,y)\right\vert \leq C\left\vert x-y\right\vert
^{1-d}.  \label{10.4}
\end{equation}
\end{itemize}
\end{proposition}
Properties (\ref{10.3}) and (\ref{10.4}) are classical; see e.g. \cite[
Theorems 1.1 and 3.3]{Widman}, while (\ref{2.8}) is proved in \cite[Lemma
4.2]{Lebris}.
\section{Approximation of homogenized coefficients: quantitative estimates}
To simplify the presentation of the results, we assume from now on that $
A(x,y)=A(y)$. We henceforth denote the mean value by $\left\langle \cdot
\right\rangle $.
\subsection{Approximation by Dirichlet problem}
In the preceding section we saw that the corrector problem is posed on the
whole of $\mathbb{R}^{d}$. However, if the coefficients of our problem are
periodic (say, the function $y\mapsto A(y)$ is $Y$-periodic, with $
Y=(-1/2,1/2)^{d}$), then this problem reduces to one posed on the bounded
subset $Y$ of $\mathbb{R}^{d}$, which yields computable coefficients. In
contrast with the periodic setting, the corrector problem in the general
deterministic framework cannot be reduced to a problem on a bounded
domain. Truncations must therefore be considered, on large domains such as
$Q_{R}$ (the closed cube centered at the origin and of side length $R$)
with appropriate boundary conditions. We proceed exactly as in the random
setting (see \cite{BP2004}). We consider the equation
\begin{equation}
-\nabla _{y}\cdot \left( A(e_{j}+\nabla _{y}\chi _{j,R})\right) =0\ \text{
in }Q_{R},\quad \chi _{j,R}\in H_{0}^{1}(Q_{R}),  \label{3.3}
\end{equation}
which possesses a unique solution satisfying
\begin{equation}
\left( \fint_{Q_{R}}\left\vert \nabla _{y}\chi _{j,R}\right\vert
^{2}dy\right) ^{\frac{1}{2}}\leq C\ \text{ for any }R\geq 1,  \label{i}
\end{equation}
where $C$ is independent of $R$. Set $\chi _{R}=(\chi _{j,R})_{1\leq j\leq
d}$. We define the effective and approximate effective matrices $A^{\ast }$
and $A_{R}^{\ast }$, respectively, as follows:
\begin{equation}
A^{\ast }=\left\langle A(I+\nabla _{y}\chi )\right\rangle \ \text{ and }
A_{R}^{\ast }=\fint_{Q_{R}}A(y)(I+\nabla _{y}\chi _{R}(y))dy.  \label{eq5}
\end{equation}
\begin{theorem}
\label{t3.1}The generalized sequence of matrices $A_{R}^{\ast }$ converges,
as $R\rightarrow \infty $, to the homogenized matrix $A^{\ast }$.
\end{theorem}
\begin{proof}
We set, for $x\in Q_{1}$, $w_{j}^{R}(x)=\frac{1}{R}{\Greekmath 011F} _{j,R}(Rx)$, $
A_{R}(x)=A(Rx)$ and consider the re-scaled version of (\ref{3.3}) whose $
w_{j}^{R}$ is solution. It reads as
\begin{equation}
-{\Greekmath 0272} \cdot (A_{R}(e_{j}+{\Greekmath 0272} w_{j}^{R}))=0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ in }Q_{1}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{, \ }
w_{j}^{R}=0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ on }\partial Q_{1}. \label{3.6}
\end{equation}
Then (\ref{3.6}) possesses a unique solution $w_{j}^{R}\in H_{0}^{1}(Q_{1})$
satisfying the estimate
\begin{equation}
\left\Vert {\Greekmath 0272} w_{j}^{R}\right\Vert _{L^{2}(Q_{1})}\leq C\ \ \ (1\leq
j\leq d) \label{3.7}
\end{equation}
where $C>0$ is independent of $R>0$. Proceeding as in the proof of Theorem
\ref{t1.1}, we derive the existence of $w_{j}\in H_{0}^{1}(Q_{1})$ and $
w_{j,1}\in L^{2}(Q_{1};B_{\#\mathcal{A}}^{1,2}(\mathbb{R}^{d}))$ such that,
up to a subsequence not relabeled,
\begin{equation}
w_{j}^{R}\rightarrow w_{j}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ in }H_{0}^{1}(Q_{1})\RIfM@\expandafter\text@\else\expandafter\mbox\fi{-weak and }{\Greekmath 0272}
w_{j}^{R}\rightarrow {\Greekmath 0272} w_{j}+{\Greekmath 0272} _{y}w_{j,1}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ in }
L^{2}(Q_{1})^{d}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{-weak }\Sigma \label{4.00}
\end{equation}
and that the couple $(w_{j},w_{j,1})$ solves the equation
\begin{equation}
\int_{Q_{1}}\left\langle A(e_{j}+\nabla w_{j}+\nabla _{y}w_{j,1})\cdot
(\nabla \psi _{0}+\nabla _{y}\psi _{1})\right\rangle dx=0\ \ \forall (\psi
_{0},\psi _{1})\in \mathcal{C}_{0}^{\infty }(Q_{1})\times (\mathcal{C}
_{0}^{\infty }(Q_{1})\otimes \mathcal{A}^{\infty }),  \label{4.100}
\end{equation}
which can be rewritten in the equivalent form (\ref{4.101})-(\ref{4.102}):
\begin{equation}
\int_{Q_{1}}\left\langle A(e_{j}+\nabla w_{j}+\nabla
_{y}w_{j,1})\right\rangle \cdot \nabla \psi _{0}dx=0\ \ \forall \psi
_{0}\in \mathcal{C}_{0}^{\infty }(Q_{1})  \label{4.101}
\end{equation}
and
\begin{equation}
\left\langle A(e_{j}+\nabla w_{j}+\nabla _{y}w_{j,1})\cdot \nabla
_{y}v\right\rangle =0\ \ \forall v\in \mathcal{A}^{\infty }.  \label{4.102}
\end{equation}
To solve (\ref{4.102}), we consider its weak distributional form
\begin{equation}
\nabla _{y}\cdot \left( A(e_{j}+\nabla w_{j}+\nabla _{y}w_{j,1})\right)
=0\ \text{ in }\mathbb{R}^{d}.  \label{4.103}
\end{equation}
So fix $\xi \in \mathbb{R}^{d}$ and consider the problem
\begin{equation}
\nabla _{y}\cdot \left( A(e_{j}+\xi +\nabla _{y}\pi _{j}(\xi ))\right)
=0\ \text{ in }\mathbb{R}^{d};\quad \pi _{j}(\xi )\in B_{\#\mathcal{A}
}^{1,2}(\mathbb{R}^{d}).  \label{4.104}
\end{equation}
Then $\pi _{j}(\xi )$ has the form $\pi _{j}(\xi )=\chi _{j}+\theta
_{j}(\xi )$, where $\chi _{j}$ is the solution of the corrector problem
(\ref{1.6}) and $\theta _{j}(\xi )$ solves the equation
\begin{equation}
\nabla _{y}\cdot \left( A(\xi +\nabla _{y}\theta _{j}(\xi ))\right) =0\
\text{ in }\mathbb{R}^{d};\quad \theta _{j}(\xi )\in B_{\#\mathcal{A}
}^{1,2}(\mathbb{R}^{d}),  \label{4.105}
\end{equation}
that is, $\theta _{j}(\xi )=\xi \cdot \chi $, where $\chi =(\chi
_{k})_{1\leq k\leq d}$ and $\chi _{k}$ is the solution of (\ref{1.6})
corresponding to $j=k$ therein. It follows that $\pi _{j}(\xi )=\chi
_{j}+\xi \cdot \chi $, so that the function $w_{j,1}$, which corresponds
to $\pi _{j}(\nabla w_{j})$, has the form $w_{j,1}=\chi _{j}+\chi \cdot
\nabla w_{j}$. Coming back to (\ref{4.101}) and replacing there $w_{j,1}$
by $\chi _{j}+\chi \cdot \nabla w_{j}$, we obtain
\begin{equation}
\int_{Q_{1}}\left\langle A(I+\nabla _{y}\chi )\right\rangle (e_{j}+\nabla
w_{j})\cdot \nabla \psi _{0}dx=0\ \ \forall \psi _{0}\in \mathcal{C}
_{0}^{\infty }(Q_{1}).  \label{4.106}
\end{equation}
This shows that $w_{j}\in H_{0}^{1}(Q_{1})$ is the unique solution of the
equation
\begin{equation}
-\nabla \cdot (A^{\ast }(e_{j}+\nabla w_{j}))=0\ \text{ in }Q_{1},
\label{3.9}
\end{equation}
and further we have, as $R\rightarrow \infty $,
\begin{equation}
A_{R}(e_{j}+\nabla w_{j}^{R})\rightarrow A^{\ast }(e_{j}+\nabla w_{j})\
\text{ in }L^{2}(Q_{1})^{d}\text{-weak.}  \label{3.10}
\end{equation}
To see (\ref{3.10}), we observe that the sequence $(A_{R}(e_{j}+\nabla
w_{j}^{R}))_{R}$ is bounded in $L^{2}(Q_{1})^{d}$; choosing a test
function $\Phi \in \mathcal{C}_{0}^{\infty }(Q_{1})^{d}$, the
sigma-convergence (with $A(y)\Phi (x)$ as a test function) and the second
convergence result in (\ref{4.00}) yield
\begin{align*}
\int_{Q_{1}}A_{R}(e_{j}+\nabla w_{j}^{R})\cdot \Phi dx &\rightarrow
\int_{Q_{1}}\left\langle A(e_{j}+\nabla w_{j}+\nabla _{y}w_{j,1})\cdot
\Phi \right\rangle dx \\
&=\int_{Q_{1}}\left\langle A(e_{j}+\nabla w_{j}+\nabla
_{y}w_{j,1})\right\rangle \cdot \Phi dx.
\end{align*}
But according to (\ref{4.106}),
\begin{equation*}
\left\langle A(e_{j}+\nabla w_{j}+\nabla _{y}w_{j,1})\right\rangle
=\left\langle A(I+\nabla _{y}\chi )\right\rangle (e_{j}+\nabla
w_{j})=A^{\ast }(e_{j}+\nabla w_{j}).
\end{equation*}
Now, since (\ref{3.9}) has the form $-\nabla \cdot (A^{\ast }\nabla
w_{j})=0$ in $Q_{1}$ ($A^{\ast }$ has constant entries), we infer from the
ellipticity of $A^{\ast }$ and the uniqueness of the solution to $-\nabla
\cdot (A^{\ast }\nabla w_{j})=0$ in $H_{0}^{1}(Q_{1})$ that $
w=(w_{1},...,w_{d})=0$. Hence the whole sequence $(w_{j}^{R})_{R}$
converges weakly to $0$ in $H_{0}^{1}(Q_{1})$. Therefore, integrating
(\ref{3.10}) over $Q_{1}$, we readily get (with
$w^{R}=(w_{1}^{R},...,w_{d}^{R})$)
\begin{equation*}
A_{R}^{\ast }=\fint_{Q_{1}}A(I+\nabla w^{R})dx\rightarrow
\fint_{Q_{1}}A^{\ast }(I+\nabla w)dx=A^{\ast }
\end{equation*}
as $R\rightarrow \infty $, where $I$ is the $d\times d$ identity matrix.
This completes the proof.
\end{proof}
\subsection{Quantitative estimates}
We study the rate of convergence of the approximation scheme of the
previous subsection, under the assumption that the corrector lies in $B_{
\mathcal{A}}^{2}(\mathbb{R}^{d})$. To this end, instead of the corrector
problem (\ref{1.6}) we consider its regularized version (\ref{4.2}), which
we recall here:
\begin{equation*}
-\nabla \cdot A(y)(e_{j}+\nabla \chi _{T,j})+T^{-2}\chi _{T,j}=0\ \text{
in }\mathbb{R}^{d}.
\end{equation*}
We define the regularized homogenized matrix by
\begin{equation}
A_{T}^{\ast }=\left\langle A(I+\nabla \chi _{T})\right\rangle ,\quad \chi
_{T}=(\chi _{T,j})_{1\leq j\leq d}.  \label{3.11}
\end{equation}
Recalling that the homogenized matrix has the form $A^{\ast }=\left\langle
A(I+\nabla \chi )\right\rangle $, we show in (\ref{3.19}) below that $
\left\vert A^{\ast }-A_{T}^{\ast }\right\vert \leq CT^{-1}$, so that $
A_{T}^{\ast }\rightarrow A^{\ast }$ as $T\rightarrow \infty $.
With this in mind, we define the approximate regularized coefficients
\begin{equation}
A_{R,T}^{\ast }=\fint_{Q_{R}}A(I+\nabla \chi _{T}^{R}),\quad \chi
_{T}^{R}=(\chi _{T,j}^{R})_{1\leq j\leq d},  \label{3.12}
\end{equation}
where $\chi _{T,j}^{R}$ (the regularized approximate corrector) solves the
problem
\begin{equation}
-\nabla \cdot A(e_{j}+\nabla \chi _{T,j}^{R})+T^{-2}\chi _{T,j}^{R}=0\
\text{ in }Q_{R},\quad \chi _{T,j}^{R}\in H_{0}^{1}(Q_{R}).  \label{3.13}
\end{equation}
Then
\begin{equation*}
A_{R,T}^{\ast }\underset{(\ast )}{\overset{R\rightarrow \infty }{
\rightarrow }}A_{T}^{\ast }\underset{(\ast \ast )}{\overset{T\rightarrow
\infty }{\rightarrow }}A^{\ast }.
\end{equation*}
Convergence ($\ast \ast $) will follow from (\ref{3.19}) below, while for
convergence ($\ast $) we proceed exactly as in the proof of Theorem \ref
{t3.1}.
The aim here is to estimate $\left\vert A^{\ast }-A_{R,T}^{\ast
}\right\vert $ in terms of $R$ and $T$, and then take $R=T$ to obtain the
desired rate of convergence. The following theorem is the main result of
this section.
\begin{theorem}
\label{t3.2}Suppose $\chi \in B_{\mathcal{A}}^{2}(\mathbb{R}^{d})^{d}$ and
let $\delta \in (0,1)$. There exist $C=C(d,\delta ,A)$ and a continuous
function $\eta _{\delta }:[1,\infty )\rightarrow \lbrack 0,\infty )$,
depending only on $A$ and $\delta $, such that $\lim_{t\rightarrow \infty
}\eta _{\delta }(t)=0$ and
\begin{equation}
\left\vert A^{\ast }-A_{T,T}^{\ast }\right\vert \leq C\eta _{\delta }(T)\
\text{ for all }T\geq 1.  \label{3.17}
\end{equation}
\end{theorem}
The proof breaks down into several steps which are of independent interest.
\begin{lemma}
\label{l3.2}Let $u\in B_{\mathcal{A}}^{2}(\mathbb{R}^{d})$. For any $
0<R<\infty $,
\begin{equation}
\left\vert \fint_{Q_{R}}u-\left\langle u\right\rangle \right\vert \leq
\sup_{y\in \mathbb{R}^{d}}\fint_{Q_{R}}\left\vert
u(t+y)-u(t)\right\vert dt.  \label{3.18}
\end{equation}
\end{lemma}
\begin{proof}
Let $u\in B_{\mathcal{A}}^{2}(\mathbb{R}^{d})$. We know that, for any $y\in
\mathbb{R}^{d}$,
\begin{equation*}
\mathchoice {{\setbox0=\hbox{$\displaystyle{\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle -}{\int}$ } \vcenter{\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle{\scriptstyle -}{\int}$ } \vcenter{\hbox{$\scriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptstyle{\scriptscriptstyle -}{\int}$
} \vcenter{\hbox{$\scriptscriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptscriptstyle{\scriptscriptstyle
-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle -$ }}\kern-.6\wd0}}
\!\int_{Q_{R}(y)}u-
\mathchoice {{\setbox0=\hbox{$\displaystyle{\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle
-}{\int}$ } \vcenter{\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle{\scriptstyle -}{\int}$ } \vcenter{\hbox{$\scriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptstyle{\scriptscriptstyle -}{\int}$
} \vcenter{\hbox{$\scriptscriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptscriptstyle{\scriptscriptstyle
-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle -$ }}\kern-.6\wd0}}
\!\int_{Q_{R}}u=
\mathchoice {{\setbox0=\hbox{$\displaystyle{\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle
-}{\int}$ } \vcenter{\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\RIfM@\expandafter\text@\else\expandafter\mbox\fistyle{\scriptstyle -}{\int}$ } \vcenter{\hbox{$\scriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptstyle{\scriptscriptstyle -}{\int}$
} \vcenter{\hbox{$\scriptscriptstyle -$
}}\kern-.6\wd0}}{{\setbox0=\hbox{$\scriptscriptstyle{\scriptscriptstyle
-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle -$ }}\kern-.6\wd0}}
\!\int_{Q_{R}}\left( u(t+y)-u(t)\right) dt.
\end{equation*}
Now, let $k>1$ be an integer; we have $Q_{kR}=\bigcup_{i=1}^{k^{d}}Q_{R}(x_{i})$ for some $x_{i}\in \mathbb{R}^{d}$, so that
\begin{equation*}
\left\vert \frac{1}{|Q_{kR}|}\int_{Q_{kR}}u-\frac{1}{|Q_{R}|}\int_{Q_{R}}u\right\vert \leq \frac{1}{k^{d}}\sum_{i=1}^{k^{d}}\left\vert \frac{1}{|Q_{R}|}\int_{Q_{R}(x_{i})}u-\frac{1}{|Q_{R}|}\int_{Q_{R}}u\right\vert \leq \sup_{y\in \mathbb{R}^{d}}\left\vert \frac{1}{|Q_{R}|}\int_{Q_{R}(y)}u-\frac{1}{|Q_{R}|}\int_{Q_{R}}u\right\vert .
\end{equation*}
Letting $k\rightarrow \infty $ and recalling that the mean value of $u$ over $Q_{kR}$ converges to $\left\langle u\right\rangle $, we are led to (\ref{3.18}).
\end{proof}
The next result estimates the difference between $A^{\ast }$ and $A_{T}^{\ast }$.
\begin{lemma}
\label{l3.3}Assume that $\chi _{j}$ (defined by \emph{(\ref{1.6})}) belongs to $B_{\mathcal{A}}^{2}(\mathbb{R}^{d})$. There exists $C=C(d,A)$ such that
\begin{equation}
\left\vert A^{\ast }-A_{T}^{\ast }\right\vert \leq CT^{-1}. \label{3.19}
\end{equation}
\end{lemma}
\begin{proof}
First, let us set $v=\chi _{T,j}-\chi _{j}$. Then $v$ solves the equation $-\nabla \cdot (A\nabla v)+T^{-2}v=-T^{-2}\chi _{j}$ in $\mathbb{R}^{d}$. It follows from Lemma \ref{l4.1} that
\begin{equation*}
\sup_{x\in \mathbb{R}^{d}}\frac{1}{|Q_{T}(x)|}\int_{Q_{T}(x)}\left( \left\vert \nabla v\right\vert ^{2}+T^{-2}\left\vert v\right\vert ^{2}\right) \leq CT^{-2}\sup_{x\in \mathbb{R}^{d}}\frac{1}{|Q_{T}(x)|}\int_{Q_{T}(x)}\left\vert \chi _{j}\right\vert ^{2}\leq CT^{-2}.
\end{equation*}
In the last inequality above, we have used the fact that $\chi _{j}\in B_{\mathcal{A}}^{2}(\mathbb{R}^{d})$, so that
\begin{equation*}
\sup_{x\in \mathbb{R}^{d},\,T>0}\frac{1}{|Q_{T}(x)|}\int_{Q_{T}(x)}\left\vert \chi _{j}\right\vert ^{2}\leq C.
\end{equation*}
The above inequality stems from the fact that $\lim_{T\rightarrow \infty }\frac{1}{|Q_{T}(x)|}\int_{Q_{T}(x)}\left\vert \chi _{j}\right\vert ^{2}$ exists uniformly in $x\in \mathbb{R}^{d}$. We infer
\begin{equation}
\sup_{x\in \mathbb{R}^{d}}\left( \frac{1}{|Q_{T}(x)|}\int_{Q_{T}(x)}\left\vert A\nabla (\chi _{T,j}-\chi _{j})\right\vert ^{2}\right) ^{\frac{1}{2}}\leq \left\Vert A\right\Vert _{\infty }\sup_{x\in \mathbb{R}^{d}}\left( \frac{1}{|Q_{T}(x)|}\int_{Q_{T}(x)}\left\vert \nabla (\chi _{T,j}-\chi _{j})\right\vert ^{2}\right) ^{\frac{1}{2}}\leq CT^{-1}. \label{3.20}
\end{equation}
Now, using Lemma \ref{l3.2} with $u=A\nabla (\chi _{T}-\chi )$, we obtain
\begin{equation}
\left\vert \frac{1}{|Q_{T}|}\int_{Q_{T}}A\nabla (\chi -\chi _{T})-(A^{\ast }-A_{T}^{\ast })\right\vert \leq \sup_{y\in \mathbb{R}^{d}}\frac{1}{|Q_{T}|}\int_{Q_{T}}\left\vert A\nabla (\chi -\chi _{T})(t+y)-A\nabla (\chi -\chi _{T})(t)\right\vert dt. \label{3.21}
\end{equation}
However, from the equality
\begin{equation*}
\frac{1}{|Q_{T}|}\int_{Q_{T}}A\nabla (\chi -\chi _{T})(t+y)\,dt=\frac{1}{|Q_{T}|}\int_{Q_{T}(y)}A\nabla (\chi -\chi _{T})(t)\,dt
\end{equation*}
combined with the inequality
\begin{equation*}
\frac{1}{|Q_{T}|}\int_{Q_{T}(y)}\left\vert A\nabla (\chi -\chi _{T})(t)\right\vert dt\leq \left( \frac{1}{|Q_{T}|}\int_{Q_{T}(y)}\left\vert A\nabla (\chi -\chi _{T})\right\vert ^{2}\right) ^{\frac{1}{2}},
\end{equation*}
we deduce that the right-hand side of (\ref{3.21}) is bounded by $2\sup_{y\in \mathbb{R}^{d}}\left( \frac{1}{|Q_{T}|}\int_{Q_{T}(y)}\left\vert A\nabla (\chi -\chi _{T})\right\vert ^{2}\right) ^{\frac{1}{2}}$. Taking into account (\ref{3.20}), we immediately get
\begin{equation*}
\left\vert \frac{1}{|Q_{T}|}\int_{Q_{T}}A\nabla (\chi -\chi _{T})-(A^{\ast }-A_{T}^{\ast })\right\vert \leq CT^{-1}.
\end{equation*}
It follows that
\begin{equation*}
\left\vert A^{\ast }-A_{T}^{\ast }\right\vert \leq \left\vert \frac{1}{|Q_{T}|}\int_{Q_{T}}A\nabla (\chi -\chi _{T})-(A^{\ast }-A_{T}^{\ast })\right\vert +\frac{1}{|Q_{T}|}\int_{Q_{T}}\left\vert A\nabla (\chi -\chi _{T})\right\vert \leq CT^{-1}.
\end{equation*}
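For the last average above we have used the Cauchy-Schwarz inequality together with (\ref{3.20}):
\begin{equation*}
\frac{1}{|Q_{T}|}\int_{Q_{T}}\left\vert A\nabla (\chi -\chi _{T})\right\vert \leq \left( \frac{1}{|Q_{T}|}\int_{Q_{T}}\left\vert A\nabla (\chi -\chi _{T})\right\vert ^{2}\right) ^{\frac{1}{2}}\leq CT^{-1}.
\end{equation*}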
\end{proof}
We are now in a position to prove the theorem.
\begin{proof}[Proof of Theorem \protect\ref{t3.2}]
We decompose $A^{\ast }-A_{R,T}^{\ast }$ as follows:
\begin{equation*}
A^{\ast }-A_{R,T}^{\ast }=(A^{\ast }-A_{T}^{\ast })+(A_{T}^{\ast
}-A_{R,T}^{\ast }).
\end{equation*}
We consider each term separately.
Lemma \ref{l3.3} yields $\left\vert A^{\ast }-A_{T}^{\ast }\right\vert \leq CT^{-1}$. As regards the term $A_{T}^{\ast }-A_{R,T}^{\ast }$, we observe that $v=\chi _{T,j}-\chi _{T,j}^{R}$ solves the equation
\begin{equation*}
-\nabla \cdot A\nabla v+T^{-2}v=0\text{ in }Q_{R}\text{ and }v=\chi _{T,j}\text{ on }\partial Q_{R},
\end{equation*}
so that, proceeding exactly as in \cite[Proof of Lemma 1]{BP2004}, we obtain
\begin{equation}
\left\vert A_{T}^{\ast }-A_{R,T}^{\ast }\right\vert ^{2}\leq C\left( T^{2}\exp (-c_{1}TR^{\delta })+R^{\delta -1}\right) \label{3.22}
\end{equation}
where $0<\delta <1$, and $C$ and $c_{1}>0$ are independent of $R$ and $T$.
We emphasize that in \cite{BP2004} the above inequality was obtained without any use of the random character of the problem: it relies only on bounds for the Green function of the operator $-\nabla \cdot A\nabla +T^{-2}$ and on bounds for the regularized corrector $\chi _{T}$.
Choosing $R=T$ in (\ref{3.22}), we define the function
\begin{equation*}
\eta _{\delta }(t)=\frac{1}{t}+t\exp \left( -\frac{c_{1}}{2}t^{1+\delta }\right) +t^{\frac{1}{2}(\delta -1)}\text{ for }t\geq 1\text{.}
\end{equation*}
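Indeed, with $R=T$, taking square roots in (\ref{3.22}) and using $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ gives
\begin{equation*}
\left\vert A_{T}^{\ast }-A_{T,T}^{\ast }\right\vert \leq C\left( T\exp \left( -\frac{c_{1}}{2}T^{1+\delta }\right) +T^{\frac{1}{2}(\delta -1)}\right) ,
\end{equation*}
which, combined with the estimate $\left\vert A^{\ast }-A_{T}^{\ast }\right\vert \leq CT^{-1}$ of Lemma \ref{l3.3}, shows that each term of $\eta _{\delta }$ controls one of the two contributions in the decomposition above.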
Then $\eta _{\delta }$ is continuous with $\lim_{t\rightarrow \infty }\eta _{\delta }(t)=0$. We see that
\begin{equation*}
\left\vert A^{\ast }-A_{T,T}^{\ast }\right\vert \leq C\eta _{\delta }(T)\text{ for any }T\geq 1\text{.}
\end{equation*}
This concludes the proof of the theorem.
\end{proof}
\section{Convergence rates: the asymptotic periodic setting}
\subsection{Preliminary results}
Let us consider the corrector problem (\ref{1.6}) in which $A$ satisfies in
addition the assumptions (\ref{2.1}) and (\ref{2.2}) below: $A=A_{0}+A_{per}$
where
\begin{equation}
A_{per}\in L_{per}^{2}(Y)^{d\times d}\text{ and }\left\{
\begin{array}{l}
A_{0}\in L^{2}(\mathbb{R}^{d})^{d\times d}\text{ for }d\geq 3 \\
A_{0}\in (L^{2}(\mathbb{R}^{2})\cap L^{2,1}(\mathbb{R}^{2}))^{2\times 2}\text{ for }d=2.
\end{array}
\end{array}
\right. \label{2.1}
\end{equation}
The matrix $A_{per}$ is symmetric and further
\begin{equation}
\alpha \left\vert \lambda \right\vert ^{2}\leq A_{per}(y)\lambda \cdot \lambda \leq \beta \left\vert \lambda \right\vert ^{2}\text{ for all }\lambda \in \mathbb{R}^{d}\text{ and a.e. }y\in \mathbb{R}^{d}. \label{2.2}
\end{equation}
Let $H_{\infty ,per}^{1}(\mathbb{R}^{d})=\{u\in L_{\infty ,per}^{2}(\mathbb{R}^{d}):\nabla u\in L_{\infty ,per}^{2}(\mathbb{R}^{d})^{d}\}$, where $L_{\infty ,per}^{2}(\mathbb{R}^{d})=L_{0}^{2}(\mathbb{R}^{d})+L_{per}^{2}(Y)$ and $L_{0}^{2}(\mathbb{R}^{d})$ is the completion of $\mathcal{C}_{0}(\mathbb{R}^{d})$ with respect to the seminorm (\ref{0.2}).
\begin{proposition}
\label{l2.1}Let $H$ be a function such that $H\in L^{2}(\mathbb{R}^{d})^{d}$ for $d\geq 3$ and $H\in (L^{2}(\mathbb{R}^{d})\cap L^{2,1}(\mathbb{R}^{d}))^{d}$ for $d=2$. Assume $A$ satisfies \emph{(\ref{1.2})}. Then there exists $u_{0}\in L^{p}(\mathbb{R}^{d})$ with $\nabla u_{0}\in L^{2}(\mathbb{R}^{d})^{d}$ such that $u_{0}$ solves the equation
\begin{equation}
-\nabla \cdot A\nabla u_{0}=\nabla \cdot H\text{ in }\mathbb{R}^{d} \label{8.1}
\end{equation}
where $p=2^{\ast }\equiv 2d/(d-2)$ for $d\geq 3$ and $p=\infty $ for $d=2$.
\end{proposition}
\begin{proof}
1) We first assume that $d\geq 3$. Let $Y^{1,2}=\{u\in L^{2^{\ast }}(\mathbb{R}^{d}):\nabla u\in L^{2}(\mathbb{R}^{d})^{d}\}$ (where $2^{\ast }=2d/(d-2)$), and equip $Y^{1,2}$ with the norm $\left\Vert u\right\Vert _{Y^{1,2}}=\left\Vert u\right\Vert _{L^{2^{\ast }}(\mathbb{R}^{d})}+\left\Vert \nabla u\right\Vert _{L^{2}(\mathbb{R}^{d})}$, which makes it a Banach space. By Sobolev's inequality (see \cite[Theorem 4.31, page 102]{Adams}), there exists a positive constant $C=C(d)$ such that
\begin{equation}
\left\Vert u\right\Vert _{L^{2^{\ast }}(\mathbb{R}^{d})}\leq C\left\Vert \nabla u\right\Vert _{L^{2}(\mathbb{R}^{d})}\ \ \forall u\in Y^{1,2}. \label{6.3}
\end{equation}
We deduce from (\ref{6.3}) that (\ref{8.1}) possesses a unique solution in $Y^{1,2}$ satisfying the inequality
\begin{equation}
\left\Vert u_{0}\right\Vert _{Y^{1,2}}\leq C\left\Vert H\right\Vert _{L^{2}(\mathbb{R}^{d})}. \label{6.4}
\end{equation}
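The a priori estimate in (\ref{6.4}) follows from the usual energy argument: testing (\ref{8.1}) with $u_{0}$ itself and using the ellipticity of $A$ (whose lower ellipticity constant we denote here by $\alpha $) together with the Cauchy-Schwarz inequality, we obtain
\begin{equation*}
\alpha \left\Vert \nabla u_{0}\right\Vert _{L^{2}(\mathbb{R}^{d})}^{2}\leq \int_{\mathbb{R}^{d}}A\nabla u_{0}\cdot \nabla u_{0}\,dx=-\int_{\mathbb{R}^{d}}H\cdot \nabla u_{0}\,dx\leq \left\Vert H\right\Vert _{L^{2}(\mathbb{R}^{d})}\left\Vert \nabla u_{0}\right\Vert _{L^{2}(\mathbb{R}^{d})},
\end{equation*}
whence $\left\Vert \nabla u_{0}\right\Vert _{L^{2}(\mathbb{R}^{d})}\leq \alpha ^{-1}\left\Vert H\right\Vert _{L^{2}(\mathbb{R}^{d})}$; the bound on $\left\Vert u_{0}\right\Vert _{L^{2^{\ast }}(\mathbb{R}^{d})}$ then follows from (\ref{6.3}).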
2) Now assume that $d=2$. We use $G(x,y)$ defined by (\ref{10.1}) to express $u_{0}$ as
\begin{equation}
u_{0}(x)=-\int_{\mathbb{R}^{d}}\nabla _{y}G(x,y)\cdot H(y)dy. \label{8.6}
\end{equation}
The expression (\ref{8.6}) makes sense: we may proceed by approximation, assuming first that $H\in \mathcal{C}_{0}^{\infty }(\mathbb{R}^{2})^{2}$ and then using the density of $\mathcal{C}_{0}^{\infty }(\mathbb{R}^{2})$ in $L^{2,1}(\mathbb{R}^{2})$ together with property (\ref{10.3}).
So, using the generalized H\"{o}lder inequality, we get
\begin{equation}
\left\Vert u_{0}\right\Vert _{L^{\infty }(\mathbb{R}^{2})}\leq \sup_{x\in \mathbb{R}^{2}}\left\Vert \nabla _{y}G(x,\cdot )\right\Vert _{L^{2,\infty }(\mathbb{R}^{2})}\left\Vert H\right\Vert _{L^{2,1}(\mathbb{R}^{2})}. \label{8.2}
\end{equation}
This completes the proof.
\end{proof}
\begin{lemma}
\label{l1.2}Assume that $A=A_{0}+A_{per}$ where $A$ and $A_{per}$ are uniformly elliptic (see \emph{(\ref{1.2})} and \emph{(\ref{2.2})}), with $A_{0}$ and $A_{per}$ being as in \emph{(\ref{2.1})}. Assume further that $A_{per}$ and $A$ are H\"{o}lder continuous. Let the number $p$ be as in Proposition \emph{\ref{l2.1}}. Let $\chi _{j,per}\in H_{per}^{1}(Y)$ be the unique solution of
\begin{equation}
-\nabla _{y}\cdot \left( A_{per}(e_{j}+\nabla _{y}\chi _{j,per})\right) =0\text{ in }Y,\ \ \int_{Y}\chi _{j,per}dy=0. \label{2.3}
\end{equation}
Then \emph{(\ref{1.6})} possesses a unique solution $\chi _{j}\in H_{\infty ,per}^{1}(Y)$ (in the sense of Theorem \emph{\ref{t4.1}}) satisfying $\chi _{j}=\chi _{j,0}+\chi _{j,per}$, where $\chi _{j,0}\in L^{p}(\mathbb{R}^{d})$ with $\nabla _{y}\chi _{j,0}\in L^{2}(\mathbb{R}^{d})^{d}$, and
\begin{equation}
\left\Vert \chi _{j}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C \label{*2}
\end{equation}
where $C=C(d,A)$.
\end{lemma}
\begin{proof}
First, we notice that if $\chi _{j,per}$ solves (\ref{2.3}) then $\chi _{j,0}$ solves
\begin{equation*}
-\nabla _{y}\cdot \left( A\nabla _{y}\chi _{j,0}\right) =\nabla _{y}\cdot \left( A_{0}(e_{j}+\nabla _{y}\chi _{j,per})\right) \text{ in }\mathbb{R}^{d}.
\end{equation*}
Since $A_{per}$ is H\"{o}lder continuous, we get $\nabla _{y}\chi _{j,per}\in L^{\infty }(Y)$. Because of the property of $A_{0}$ given by (\ref{2.1}), it follows that $g=A_{0}(e_{j}+\nabla _{y}\chi _{j,per})$ belongs to $L^{2}(\mathbb{R}^{d})^{d}$ (resp. $(L^{2}(\mathbb{R}^{2})\cap L^{2,1}(\mathbb{R}^{2}))^{2}$) for $d\geq 3$ (resp. $d=2$). Proposition \ref{l2.1} implies that $\chi _{j,0}\in L^{p}(\mathbb{R}^{d})$ with $\nabla _{y}\chi _{j,0}\in L^{2}(\mathbb{R}^{d})^{d}$ for $d\geq 3$. Hence in that case one has $\left\langle \chi _{j,0}\right\rangle =0$ and $\left\langle \nabla _{y}\chi _{j,0}\right\rangle =0$. This proves that $\chi _{j}=\chi _{j,per}+\chi _{j,0}\in H_{\infty ,per}^{1}(Y)$ for $d\geq 3$. Now, for $d=2$ we have $\chi _{j,0}\in L_{0}^{2}(\mathbb{R}^{2})$ since $\chi _{j,0}$ vanishes at infinity. Indeed, we use (\ref{8.2}) to get
\begin{equation*}
\left\Vert \chi _{j,0}\right\Vert _{L^{\infty }(\mathbb{R}^{2})}\leq \sup_{x\in \mathbb{R}^{2}}\left\Vert \nabla _{y}G(x,\cdot )\right\Vert _{L^{2,\infty }(\mathbb{R}^{2})}\left\Vert g\right\Vert _{L^{2,1}(\mathbb{R}^{2})}
\end{equation*}
and proceed as in \cite[Section 3, page 14]{Lebris} (first approximating $g$ by smooth functions in $\mathcal{C}_{0}^{\infty }(\mathbb{R}^{2})^{2}$) to show that $\chi _{j,0}\in L_{0}^{2}(\mathbb{R}^{2})$.
Let us now verify (\ref{*2}). We drop for a while the index $j$ and simply write $\chi =\chi _{0}+\chi _{per}$, where the couple $(\chi _{per},\chi _{0})$ solves the system
\begin{equation}
-\nabla _{y}\cdot \left( A_{per}(e_{j}+\nabla _{y}\chi _{per})\right) =0\text{ in }Y, \label{2.4}
\end{equation}
\begin{equation}
-\nabla _{y}\cdot \left( A\nabla _{y}\chi _{0}\right) =\nabla _{y}\cdot \left( A_{0}(e_{j}+\nabla _{y}\chi _{per})\right) \text{ in }\mathbb{R}^{d}. \label{2.5}
\end{equation}
It is well known that $\chi _{per}$ is bounded in $L^{\infty }(\mathbb{R}^{d})$. Let us first deal with $\chi _{0}$. Let $g=A_{0}(e_{j}+\nabla _{y}\chi _{per})$ and use the Green function defined in Proposition \ref{p10.1} to express $\chi _{0}$ as
\begin{equation}
\chi _{0}(y)=-\int_{\mathbb{R}^{d}}\nabla _{x}G(y,x)g(x)dx. \label{2.7}
\end{equation}
We recall that $G$ satisfies inequality (\ref{10.4}) for $d\geq 3$ and (\ref{10.2}) for $d=2$, respectively.
We first assume that $d\geq 3$. Let $y\in \mathbb{R}^{d}$ and choose $\gamma \in \mathcal{C}_{0}^{\infty }(B_{2}(y))$ such that $\gamma =1$ on $B_{1}(y)$ and $0\leq \gamma \leq 1$. We write $\chi _{0}$ as
\begin{align*}
\chi _{0}(y)& =-\int_{\mathbb{R}^{d}}\nabla _{x}G(y,x)\cdot g(x)\gamma (x)dx-\int_{\mathbb{R}^{d}}\nabla _{x}G(y,x)\cdot g(x)(1-\gamma (x))dx \\
& =v_{1}(y)+v_{2}(y).
\end{align*}
As for $v_{1}$, owing to (\ref{10.4}), we have
\begin{equation*}
\left\vert v_{1}(y)\right\vert \leq C\left\Vert g\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\int_{B_{2}(y)}\left\vert x-y\right\vert ^{1-d}dx\leq C\left\Vert g\right\Vert _{L^{\infty }(\mathbb{R}^{d})}
\end{equation*}
where $C=C(d)$. As for $v_{2}$, (\ref{10.4}) and H\"{o}lder's inequality imply
\begin{equation*}
\left\vert v_{2}(y)\right\vert \leq C\left\Vert g\right\Vert _{L^{2}(\mathbb{R}^{d})}\left( \int_{\mathbb{R}^{d}\backslash B_{2}(y)}\left\vert x-y\right\vert ^{2-2d}dx\right) ^{\frac{1}{2}}\leq C\left\Vert g\right\Vert _{L^{2}(\mathbb{R}^{d})}
\end{equation*}
since $2d-2>d$ for $d\geq 3$.
When $d=2$, we use (\ref{10.2}) to get
\begin{equation*}
\left\Vert \chi _{0}\right\Vert _{L^{\infty }(\mathbb{R}^{2})}\leq \sup_{x\in \mathbb{R}^{2}}\left\Vert \nabla _{y}G(x,\cdot )\right\Vert _{L^{2,\infty }(\mathbb{R}^{2})}\left\Vert g\right\Vert _{L^{2,1}(\mathbb{R}^{2})}\leq C\left\Vert g\right\Vert _{L^{2,1}(\mathbb{R}^{2})}\text{.}
\end{equation*}
\end{proof}
\begin{lemma}
\label{l1.4}\emph{(i)} Let $g\in L^{2}(\mathbb{R}^{d})+L_{per}^{2}(Y)$ be
such that $\left\langle g\right\rangle =0$. Then there exists at least one
function $u\in H_{\infty ,per}^{1}(Y)$ such that
\begin{equation}
\Delta u=g\text{ in }\mathbb{R}^{d},\ \left\langle u\right\rangle =0.
\label{2.9}
\end{equation}
\emph{(ii)} Assume further that $g\in L^{\infty }(\mathbb{R}^{d})$ and $u$ is bounded; then $u,\nabla u\in \mathcal{B}_{\infty ,per}(\mathbb{R}^{d})$ and
\begin{equation}
\left\Vert \nabla u\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C\left\Vert g\right\Vert _{L^{\infty }(\mathbb{R}^{d})},  \label{2.10}
\end{equation}
where $C>0$ depends only on $d$.
\end{lemma}
\begin{proof}
(i) We write $g=g_{0}+g_{per}$ with $g_{0}\in L^{2}(\mathbb{R}^{d})$ and $
g_{per}\in L_{per}^{2}(Y)$. Since $\left\langle g\right\rangle =0$, we have $
\left\langle g_{per}\right\rangle =0$. So let $v_{per}\in H_{per}^{1}(Y)$ be
the unique solution of
\begin{equation*}
\Delta v_{per}=g_{per}\text{ in }Y\text{, }\left\langle v_{per}\right\rangle =0.
\end{equation*}
We observe that if $u$ solves (\ref{2.9}), then $u$ has the form $u=v_{0}+v_{per}$, where $v_{0}\in H^{1}(\mathbb{R}^{d})$ solves the problem
\begin{equation*}
\Delta v_{0}=g_{0}\text{ in }\mathbb{R}^{d}\text{, }v_{0}(x)\rightarrow 0\text{ as }\left\vert x\right\vert \rightarrow \infty .
\end{equation*}
Since $g_{0}\in L^{2}(\mathbb{R}^{d})$, $v_{0}$ can be represented as
\begin{equation*}
v_{0}(x)=\int_{\mathbb{R}^{d}}\Gamma _{0}(x-y)g_{0}(y)dy,
\end{equation*}
where $\Gamma _{0}$ denotes the fundamental solution of the Laplacian in $\mathbb{R}^{d}$ (with pole at the origin). This shows the existence of $u$ in $H^{1}(\mathbb{R}^{d})+H_{per}^{1}(Y)\subset H_{\infty ,per}^{1}(\mathbb{R}^{d})$.
Let us check (ii). First, since (\ref{2.9}) is satisfied, $u$ is the Newtonian potential of $g$ in $\mathbb{R}^{d}$, so that, by \cite[page 71, Problem 4.8 (a)]{Gilbarg}, $\nabla u\in \mathcal{C}_{loc}^{1/2}(\mathbb{R}^{d})$. Using the continuity of $\nabla u$ together with the fact that $\nabla u$ also lies in $L_{\infty ,per}^{2}(\mathbb{R}^{d})$, we infer that $\nabla u\in \mathcal{B}_{\infty ,per}(\mathbb{R}^{d})=\mathcal{C}_{0}(\mathbb{R}^{d})\oplus \mathcal{C}_{per}(Y)$. We then proceed as in the proof of Lemma \ref{l1.2} to obtain $u\in \mathcal{B}_{\infty ,per}(\mathbb{R}^{d})$. This completes the proof.
\end{proof}
The following result is a mere consequence of the preceding lemma. Its proof
is therefore left to the reader.
\begin{corollary}
\label{c1.1}Let $\mathbf{g}$ be a solenoidal vector field in $(L^{2}(\mathbb{R}^{d})+L_{per}^{2}(Y))^{d}$ (i.e. $\nabla \cdot \mathbf{g}=0$) with $\left\langle \mathbf{g}\right\rangle =0$. Then there exists a skew-symmetric matrix $G$ with entries in $L_{\infty ,per}^{2}(Y)$ such that $\mathbf{g}=\nabla \cdot G$. If further $\mathbf{g}$ belongs to $L^{\infty }(\mathbb{R}^{d})^{d}$, then $G$ has entries in $\mathcal{B}_{\infty ,per}(\mathbb{R}^{d})$ and
\begin{equation}
\left\Vert G\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C\left\Vert \mathbf{g}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}.  \label{*5}
\end{equation}
\end{corollary}
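For the reader's convenience, we sketch the classical construction behind Corollary \ref{c1.1} (the notation $u_{k}$, $G_{ik}$ is introduced only for this sketch). For each $k$, Lemma \ref{l1.4} provides $u_{k}\in H_{\infty ,per}^{1}(Y)$ with $\Delta u_{k}=g_{k}$ and $\left\langle u_{k}\right\rangle =0$. Setting
\begin{equation*}
G_{ik}=\frac{\partial u_{i}}{\partial x_{k}}-\frac{\partial u_{k}}{\partial x_{i}},
\end{equation*}
the matrix $G$ is skew-symmetric and
\begin{equation*}
(\nabla \cdot G)_{i}=\sum_{k=1}^{d}\frac{\partial G_{ik}}{\partial x_{k}}=\Delta u_{i}-\frac{\partial }{\partial x_{i}}(\nabla \cdot u)=g_{i}-\frac{\partial }{\partial x_{i}}(\nabla \cdot u),
\end{equation*}
and one checks that $\nabla \cdot u$ vanishes in the present setting, being harmonic with zero mean (note that $\Delta (\nabla \cdot u)=\nabla \cdot \mathbf{g}=0$). When $\mathbf{g}\in L^{\infty }(\mathbb{R}^{d})^{d}$, the bound (\ref{*5}) follows by applying (\ref{2.10}) to each $u_{k}$.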
\subsection{Convergence rates: proof of Theorem \protect\ref{t5.1}}
Let $u_{\varepsilon }$, $u_{0}\in H_{0}^{1}(\Omega )$ be the weak solutions of (\ref{1.1}) and (\ref{1.4}), respectively. Assume further that $u_{0}\in H^{2}(\Omega )$ and that $\Omega $ is sufficiently smooth. For any function $h\in L_{loc}^{2}(\mathbb{R}^{d})$ and $\varepsilon >0$ we define $h^{\varepsilon }$ by $h^{\varepsilon }(x)=h(x/\varepsilon )$ for $x\in \mathbb{R}^{d}$. We define the first order approximation of $u_{\varepsilon }$ by $v_{\varepsilon }=u_{0}+\varepsilon \chi ^{\varepsilon }\nabla u_{0}$. Let $w_{\varepsilon }=u_{\varepsilon }-v_{\varepsilon }+z_{\varepsilon }$, where $z_{\varepsilon }\in H^{1}(\Omega )$ is the weak solution of the following problem
\begin{equation}
-\nabla \cdot A^{\varepsilon }\nabla z_{\varepsilon }=0\text{ in }\Omega \text{, }z_{\varepsilon }=\varepsilon \chi ^{\varepsilon }\nabla u_{0}\text{ on }\partial \Omega .  \label{5.3}
\end{equation}
The function $z_{\varepsilon }$ accounts for the boundary discrepancy between $u_{\varepsilon }$ and its first order approximation $v_{\varepsilon }$.
\begin{lemma}
\label{l5.1}The function $w_{\varepsilon }$ solves the problem
\begin{equation}
\left\{
\begin{array}{l}
-\nabla \cdot \left( A^{\varepsilon }\nabla w_{\varepsilon }\right) =\nabla \cdot \left( A^{\varepsilon }(\nabla u_{0}+(\nabla _{y}\chi )^{\varepsilon }\nabla u_{0})-\left\langle A(\nabla u_{0}+\nabla _{y}\chi \nabla u_{0})\right\rangle \right) \\
\qquad \qquad \qquad \qquad \qquad \qquad +\varepsilon \nabla \cdot \left( A^{\varepsilon }\nabla ^{2}u_{0}\,\chi ^{\varepsilon }\right) \text{ in }\Omega , \\
\qquad \qquad w_{\varepsilon }=0\text{ on }\partial \Omega .
\end{array}
\right.  \label{5.4}
\end{equation}
\end{lemma}
\begin{proof}
Let $y=x/\varepsilon $. Then
\begin{equation*}
A(y)\nabla w_{\varepsilon }=A(y)(\nabla u_{\varepsilon }-\nabla u_{0}-\nabla _{y}\chi (y)\nabla u_{0}-\varepsilon (\nabla ^{2}u_{0})\chi (y)+\nabla z_{\varepsilon }),
\end{equation*}
hence, using that $\nabla \cdot (A(y)\nabla z_{\varepsilon })=0$ by (\ref{5.3}) and that $\nabla \cdot (A(y)\nabla u_{\varepsilon })=\nabla \cdot (A^{\ast }\nabla u_{0})$ (both equal $-f$ by (\ref{1.1}) and (\ref{1.4})),
\begin{align*}
\nabla \cdot A\left( y\right) \nabla w_{\varepsilon }& =\nabla \cdot A(y)\nabla u_{\varepsilon }-\nabla \cdot A(y)\nabla u_{0}-\nabla \cdot A(y)(\nabla _{y}\chi (y)\nabla u_{0}) \\
& \quad -\varepsilon \nabla \cdot (A(y)(\nabla ^{2}u_{0})\chi (y)) \\
& =\nabla \cdot A^{\ast }\nabla u_{0}-\nabla \cdot A(y)\nabla u_{0}-\nabla \cdot A(y)(\nabla _{y}\chi (y)\nabla u_{0}) \\
& \quad -\varepsilon \nabla \cdot (A(y)(\nabla ^{2}u_{0})\chi (y)).
\end{align*}
But
\begin{equation*}
A^{\ast }\nabla u_{0}=\left\langle A(\nabla u_{0}+\nabla _{y}\chi \nabla u_{0})\right\rangle \equiv \left\langle A(I+\nabla _{y}\chi )\right\rangle \nabla u_{0}.
\end{equation*}
Thus
\begin{align*}
-\nabla \cdot A^{\varepsilon }\nabla w_{\varepsilon }& =\nabla \cdot \left[ A\left( y\right) (\nabla u_{0}+\nabla _{y}\chi \nabla u_{0})-\left\langle A(\nabla u_{0}+\nabla _{y}\chi \nabla u_{0})\right\rangle \right] \\
& \quad +\varepsilon \nabla \cdot (A(y)(\nabla ^{2}u_{0})\chi (y)),
\end{align*}
which is the statement of the lemma.
\end{proof}
Set
\begin{equation*}
a_{ij}(y)=b_{ij}(y)+\sum_{k=1}^{d}b_{ik}(y)\frac{\partial \chi _{j}}{\partial y_{k}}(y)-b_{ij}^{\ast },
\end{equation*}
where $A^{\ast }=(b_{ij}^{\ast })_{1\leq i,j\leq d}$ is the homogenized matrix, and let $a_{j}=(a_{ij})_{1\leq i\leq d}$. Then $a_{j}\in \lbrack L^{\infty }(\mathbb{R}^{d})\cap L_{\infty ,per}^{2}(Y)]^{d}$ with $\nabla \cdot a_{j}=0$ and $\left\langle a_{j}\right\rangle =0$. Hence by Corollary \ref{c1.1}, there is a skew-symmetric matrix $G_{j}$ with entries in $\mathcal{A}=\mathcal{B}_{\infty ,per}(Y)$ such that $a_{j}=\nabla _{y}\cdot G_{j}$. Moreover, in view of (\ref{*5}) in Corollary \ref{c1.1}, we have
\begin{equation*}
\left\Vert G_{j}\right\Vert _{\infty }\leq C\left\Vert a_{j}\right\Vert _{\infty }.
\end{equation*}
With this in mind and recalling that $G_{j}$ is skew-symmetric, Eq. (\ref{5.4}) becomes
\begin{equation}
-\nabla \cdot A\left( \frac{x}{\varepsilon }\right) \nabla w_{\varepsilon }=\varepsilon \nabla \cdot \left( r_{1}^{\varepsilon }+r_{2}^{\varepsilon }\right)  \label{5.5}
\end{equation}
where
\begin{equation*}
r_{1}^{\varepsilon }(x)=\sum_{j=1}^{d}G_{j}(y)\nabla \frac{\partial u_{0}}{\partial x_{j}}(x)\text{ and }r_{2}^{\varepsilon }(x)=A(y)\nabla ^{2}u_{0}(x)\chi (y)\text{ with }y=\frac{x}{\varepsilon }.
\end{equation*}
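Let us briefly indicate, for the reader's convenience, how the skew-symmetry of $G_{j}$ is used here (the sign of $G_{j}$ is immaterial for the estimates below): since $a_{j}=\nabla _{y}\cdot G_{j}$ implies $a_{j}^{\varepsilon }=\varepsilon \nabla \cdot (G_{j}^{\varepsilon })$, we have
\begin{equation*}
\nabla \cdot \Big( \sum_{j=1}^{d}a_{j}^{\varepsilon }\frac{\partial u_{0}}{\partial x_{j}}\Big) =\varepsilon \sum_{j=1}^{d}\nabla \cdot \nabla \cdot \Big( G_{j}^{\varepsilon }\frac{\partial u_{0}}{\partial x_{j}}\Big) -\varepsilon \sum_{j=1}^{d}\nabla \cdot \Big( G_{j}^{\varepsilon }\nabla \frac{\partial u_{0}}{\partial x_{j}}\Big) ,
\end{equation*}
and the double divergence vanishes identically because $G_{j}$ is skew-symmetric; the first term on the right-hand side of (\ref{5.4}) thus contributes, up to the sign convention, the term $\varepsilon \nabla \cdot r_{1}^{\varepsilon }$.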
Now, since $w_{\varepsilon }\in H_{0}^{1}(\Omega )$, it follows from the ellipticity of $A$ (see (\ref{1.2})) that
\begin{align*}
\alpha \left\Vert \nabla w_{\varepsilon }\right\Vert _{L^{2}(\Omega )}& \leq \varepsilon \left( \left\Vert r_{1}^{\varepsilon }\right\Vert _{L^{2}(\Omega )}+\left\Vert r_{2}^{\varepsilon }\right\Vert _{L^{2}(\Omega )}\right) \\
& \leq C\varepsilon \left\Vert u_{0}\right\Vert _{H^{2}(\Omega )},
\end{align*}
where $C=C(d,A,\Omega )$.

We have just proved the following result.
\begin{proposition}
\label{p5.1}Let $\Omega $ be a smooth bounded domain in $\mathbb{R}^{d}$. Suppose that $A=A_{0}+A_{per}$, and that $A$ and $A_{per}$ are uniformly elliptic (see \emph{(\ref{1.2})} and \emph{(\ref{2.2})}). For $f\in L^{2}(\Omega )$, let $u_{\varepsilon }$, $u_{0}$ and $z_{\varepsilon }$ be the weak solutions of the Dirichlet problems \emph{(\ref{1.1})}, \emph{(\ref{1.4})} and \emph{(\ref{5.3})}, respectively. Assume $u_{0}\in H^{2}(\Omega )$. Then there exists $C=C(d,A,\Omega )$ such that
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi ^{\varepsilon }\nabla u_{0}+z_{\varepsilon }\right\Vert _{H_{0}^{1}(\Omega )}\leq C\varepsilon \left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}.  \label{5.6}
\end{equation}
\end{proposition}
The estimate of the deviation between $u_{\varepsilon }$ and $v_{\varepsilon }$ is a consequence of the following lemma, whose proof is postponed to the next section, where it is obtained as a special case of the proof of a more general result, Lemma \ref{l5.3}. Observe that in Lemma \ref{l5.3} we replace $T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}$ by $\varepsilon $ (see Remark \ref{r5.2}).
\begin{lemma}
\label{l5.2'}Assume $u_{0}\in H^{2}(\Omega )$. Let $z_{\varepsilon }$ be the solution of problem \emph{(\ref{5.3})}. There exists $C=C(d,A,\Omega )$ such that
\begin{equation}
\left\Vert z_{\varepsilon }\right\Vert _{H^{1}(\Omega )}\leq C\varepsilon ^{\frac{1}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}.  \label{5.7}
\end{equation}
\end{lemma}
\begin{proof}[Proof of Theorem \protect\ref{t5.1}]
Since $\Omega $ is a bounded $\mathcal{C}^{1,1}$ domain in $\mathbb{R}^{d}$ and the matrix $A^{\ast }$ has constant entries, it is known that $u_{0}$ satisfies the inequality
\begin{equation}
\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\left\Vert f\right\Vert _{L^{2}(\Omega )},\ C=C(d,\alpha ,\Omega )>0.  \label{5.7'}
\end{equation}
Using (\ref{5.6}) together with (\ref{5.7}) and (\ref{5.7'}), we arrive at
\begin{align*}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi ^{\varepsilon }\nabla u_{0}\right\Vert _{H^{1}(\Omega )}& \leq \left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi ^{\varepsilon }\nabla u_{0}+z_{\varepsilon }\right\Vert _{H_{0}^{1}(\Omega )}+\left\Vert z_{\varepsilon }\right\Vert _{H^{1}(\Omega )} \\
& \leq C\varepsilon ^{\frac{1}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\varepsilon ^{\frac{1}{2}}\left\Vert f\right\Vert _{L^{2}(\Omega )},
\end{align*}
which proves (\ref{5.8}) in Theorem \ref{t5.1}. As for (\ref{1.14}), we proceed exactly as in the proof of (\ref{Eq02}) in the proof of Theorem \ref{t1.4}; see in particular Remark \ref{r5.3} in the next section. This concludes the proof of Theorem \ref{t5.1}.
\end{proof}
\section{Convergence rates: the asymptotic almost periodic setting}
\subsection{Preliminaries}
We treat the asymptotic almost periodic case in a general way, dropping
restrictions (\ref{2.1}) and (\ref{2.2}). The results in this section extend
those of the preceding section as well as those in the almost periodic
setting obtained in \cite{Shen}.
We recall that a bounded continuous function $u$ defined on $\mathbb{R}^{d}$
is asymptotically almost periodic if there exists a couple $(v,w)\in AP(
\mathbb{R}^{d})\times \mathcal{C}_{0}(\mathbb{R}^{d})$ such that $u=v+w$. We
denote by $\mathcal{B}_{\infty ,AP}(\mathbb{R}^{d})=AP(\mathbb{R}^{d})+
\mathcal{C}_{0}(\mathbb{R}^{d})$ the Banach algebra of such functions. We
denote by $H_{\infty ,AP}^{1}(\mathbb{R}^{d})$ the Sobolev-type space
attached to the Besicovitch space $B_{\mathcal{A}}^{2}(\mathbb{R}^{d})\equiv
L_{\infty ,AP}^{2}(\mathbb{R}^{d})=L_{0}^{2}(\mathbb{R}^{d})+B_{AP}^{2}(
\mathbb{R}^{d})$: $H_{\infty ,AP}^{1}(\mathbb{R}^{d})=\{u\in L_{\infty
,AP}^{2}(\mathbb{R}^{d}):{\Greekmath 0272} u\in L_{\infty ,AP}^{2}(\mathbb{R}
^{d})^{d}\} $. Here $L_{0}^{2}(\mathbb{R}^{d})$ is the completion of $
\mathcal{C}_{0}(\mathbb{R}^{d})$ with respect to the seminorm (\ref{0.2})
while $B_{AP}^{2}(\mathbb{R}^{d})$ is the Besicovitch space associated to
the algebra $AP(\mathbb{R}^{d})$. We also denote by $\mathcal{C}_{b}(\mathbb{
R}^{d})$ the algebra of real-valued bounded continuous functions defined on $
\mathbb{R}^{d}$.
The following characterization of $\mathcal{B}_{\infty ,AP}(\mathbb{R}^{d})$
is a useful tool for the considerations below.
\begin{proposition}
\label{p11.1}Let $u\in \mathcal{C}_{b}(\mathbb{R}^{d})$. Then $u\in \mathcal{
B}_{\infty ,AP}(\mathbb{R}^{d})$ if and only if
\begin{equation}
\sup_{y\in \mathbb{R}^{d}}\inf_{z\in \mathbb{R}^{d},\left\vert z\right\vert \leq L}\left\Vert u(\cdot +y)-u(\cdot +z)\right\Vert _{L^{\infty }(\mathbb{R}^{d}\backslash B_{R})}\rightarrow 0\text{ as }L\rightarrow \infty \text{ and }R\rightarrow 0\text{.}  \label{11.1}
\end{equation}
\end{proposition}
\begin{proof}
A set $E$ in $\mathbb{R}^{d}$ is relatively dense if there exists $L>0$ such that $\mathbb{R}^{d}=E+B_{L}$ (where we recall that $B_{L}=B(0,L)$), that is, any $x\in \mathbb{R}^{d}$ can be written as a sum $y+z$ with $y\in E$ and $z\in B_{L}$. This being so, it is known that $u\in \mathcal{C}_{b}(\mathbb{R}^{d})$ lies in $\mathcal{B}_{\infty ,AP}(\mathbb{R}^{d})$ if and only if for any $\varepsilon >0$, there is $R=R(\varepsilon )>0$ such that the set
\begin{equation*}
\left\{ \tau \in \mathbb{R}^{d}:\left\vert u(t+\tau )-u(t)\right\vert <\varepsilon \ \ \forall \left\vert t\right\vert \geq R\right\}
\end{equation*}
is relatively dense; see e.g. \cite[Chap. 5, Theorem 5]{Zaidman}. But this is readily seen to be equivalent to (\ref{11.1}).
\end{proof}
\begin{remark}
\label{r11.1}\emph{We notice that, for any }$u\in \mathcal{C}_{b}(\mathbb{R}
^{d})$\emph{, }
\begin{equation*}
\lim_{R\rightarrow \infty }\left( \sup_{\left\vert y\right\vert \leq
R}\left\vert u(y)\right\vert \right) =\lim_{R\rightarrow 0}\left(
\sup_{\left\vert y\right\vert \geq R}\left\vert u(y)\right\vert \right) .
\end{equation*}
\emph{In view of the above equality we may replace (\ref{11.1}) by }
\begin{equation}
\sup_{y\in \mathbb{R}^{d}}\inf_{\left\vert z\right\vert \leq L}\left\Vert u(\cdot +y)-u(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\rightarrow 0\text{\emph{\ as }}L,R\rightarrow \infty  \label{11.2}
\end{equation}
\emph{since the limits in (\ref{11.1}) and (\ref{11.2}) are the same. In
practice we will rather use (\ref{11.2}).}
\end{remark}
\begin{definition}
\label{d5.1}\emph{For a function }$u\in \mathcal{B}_{\infty ,AP}(\mathbb{R}
^{d})$\emph{\ we define the modulus of asymptotic almost periodicity of }$u$
\emph{\ by }
\begin{equation}
\rho _{u}(L,R)=\sup_{y\in \mathbb{R}^{d}}\inf_{\left\vert z\right\vert \leq L}\left\Vert u(\cdot +y)-u(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\text{\emph{\ for }}L,R>0.  \label{11.3}
\end{equation}
\emph{In particular we set }
\begin{equation}
\rho (L,R)=\sup_{y\in \mathbb{R}^{d}}\inf_{\left\vert z\right\vert \leq L}\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\text{, }L,R>0.  \label{11.4}
\end{equation}
\end{definition}
\begin{remark}
\label{r5.1}\emph{Observe that if }$R=\infty $\emph{\ (that is, }$B_{R}=\mathbb{R}^{d}$\emph{) in (\ref{11.3}), then }$u\in \mathcal{B}_{\infty ,AP}(\mathbb{R}^{d})$\emph{\ is almost periodic if and only if }$\rho _{u}(L,\infty )\rightarrow 0$\emph{\ as }$L\rightarrow \infty $\emph{.}
\end{remark}
\subsection{Estimates of approximate correctors}
First we recall that the approximate corrector $\chi _{T}=(\chi _{T,j})_{1\leq j\leq d}$ is defined as the distributional solution of
\begin{equation}
-\nabla \cdot \left( A(e_{j}+\nabla \chi _{T,j})\right) +T^{-2}\chi _{T,j}=0\text{ in }\mathbb{R}^{d},\ \ \chi _{T,j}\in H_{\infty ,AP}^{1}(\mathbb{R}^{d})  \label{11.5}
\end{equation}
where $A\in (L_{\infty ,AP}^{2}(\mathbb{R}^{d})\cap L^{\infty }(\mathbb{R}^{d}))^{d\times d}$ is symmetric and uniformly elliptic.
In all that follows in this section we assume that $A\in (\mathcal{B}
_{\infty ,AP}(\mathbb{R}^{d}))^{d\times d}$.
\begin{theorem}
\label{t11.1}Let $T\geq 1$. Then $\chi _{T}\in \mathcal{B}_{\infty ,AP}(\mathbb{R}^{d})$ and for any $x_{0},y,z\in \mathbb{R}^{d}$,
\begin{equation}
\left\Vert \chi _{T}(\cdot +y)-\chi _{T}(\cdot +z)\right\Vert _{L^{\infty }(B_{R}(x_{0}))}\leq CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R}(x_{0}))}  \label{11.9}
\end{equation}
for any $R>2T$, where $C=C(d,A)$.
\end{theorem}
\begin{proof}
Fix $R>2T$. We need to show that, for any $x_{0},y,z\in \mathbb{R}^{d}$ and $t\in B_{R}(x_{0})$,
\begin{equation*}
\left\vert \chi _{T}(t+y)-\chi _{T}(t+z)\right\vert \leq CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R}(x_{0}))}.
\end{equation*}
We follow the same approach as in the proof of \cite[Theorem 6.3]{Shen}. Without loss of generality, assume $x_{0}=0$. We choose $\varphi \in \mathcal{C}_{0}^{\infty }(B_{\frac{7}{4}T})$ such that $\varphi =1$ in $B_{\frac{3}{2}T}$, $0\leq \varphi \leq 1$ and $\left\vert \nabla \varphi \right\vert \leq CT^{-1}$. We also assume that $d\geq 3$ (the case $d=2$ follows from the case $d=3$ by adding a dummy variable). Define $u(x)=\chi _{T,j}(x+y)-\chi _{T,j}(x+z)$ ($x\in \mathbb{R}^{d}$) and note that $u$ solves the equation
\begin{eqnarray*}
-\nabla \cdot (A(\cdot +y)\nabla u)+T^{-2}u &=&\nabla \cdot \lbrack (A(\cdot +y)-A(\cdot +z))e_{j}] \\
&&+\nabla \cdot \lbrack (A(\cdot +y)-A(\cdot +z))\nabla v]\text{ in }\mathbb{R}^{d}
\end{eqnarray*}
where $v(x)=\chi _{T,j}(x+z)$. We have
\begin{eqnarray}
-\nabla \cdot (A(\cdot +y)\nabla (u\varphi )) &=&-T^{-2}u\varphi +\nabla \cdot (\varphi (A(\cdot +y)-A(\cdot +z))e_{j})  \label{11.10} \\
&&+\nabla \cdot (\varphi (A(\cdot +y)-A(\cdot +z))\nabla v)  \notag \\
&&-(A(\cdot +y)-A(\cdot +z))e_{j}\cdot \nabla \varphi -A(\cdot +y)\nabla u\cdot \nabla \varphi  \notag \\
&&-\nabla \cdot (uA(\cdot +y)\nabla \varphi ).  \notag
\end{eqnarray}
Denoting by $G^{y}$ the fundamental solution of the operator $-\nabla \cdot (A(\cdot +y)\nabla )$ in $\mathbb{R}^{d}$, we use the representation formula in (\ref{11.10}) to get, for $x\in B_{T}$,
\begin{eqnarray*}
u(x) &=&-T^{-2}\int_{\mathbb{R}^{d}}G^{y}(x,t)u(t)\varphi (t)dt-\int_{\mathbb{R}^{d}}\nabla _{t}G^{y}(x,t)\cdot \varphi (t)(A(t+y)-A(t+z))e_{j}dt \\
&&-\int_{\mathbb{R}^{d}}\nabla _{t}G^{y}(x,t)\cdot \varphi (t)(A(t+y)-A(t+z))\nabla v(t)dt \\
&&-\int_{\mathbb{R}^{d}}G^{y}(x,t)(A(t+y)-A(t+z))e_{j}\cdot \nabla \varphi (t)dt \\
&&-\int_{\mathbb{R}^{d}}G^{y}(x,t)A(t+y)\nabla u(t)\cdot \nabla \varphi (t)dt \\
&&+\int_{\mathbb{R}^{d}}\nabla _{t}G^{y}(x,t)\cdot A(t+y)u(t)\nabla \varphi (t)dt.
\end{eqnarray*}
It follows that
\begin{eqnarray}
\left\vert u(x)\right\vert &\leq &CT^{-2}\int_{B_{2T}}\left\vert G^{y}(x,t)\right\vert \left\vert u(t)\right\vert dt  \label{e11.10} \\
&&+C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert dt  \notag \\
&&+C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert \left\vert \nabla v(t)\right\vert dt  \notag \\
&&+C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\int_{B_{2T}}\left\vert G^{y}(x,t)\right\vert \left\vert \nabla \varphi (t)\right\vert dt  \notag \\
&&+C\left( \int_{B_{2T}}\left\vert G^{y}(x,t)\right\vert ^{2}\left\vert \nabla \varphi (t)\right\vert ^{2}dt\right) ^{\frac{1}{2}}\left( \int_{B_{2T}}\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}  \notag \\
&&+C\left( \int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert ^{2}\left\vert \nabla \varphi (t)\right\vert ^{2}dt\right) ^{\frac{1}{2}}\left( \int_{B_{2T}}\left\vert u\right\vert ^{2}\right) ^{\frac{1}{2}}.  \notag
\end{eqnarray}
Let us first deal with the last two terms in (\ref{e11.10}). Let $0<\tau <1$ be such that $B_{\tau T}(x)\subset B_{T}$ (recall that $x\in B_{T}$). Then $B_{2T}\backslash B_{\tau T}(x)\subset B_{3T}(x)\backslash B_{\tau T}(x)$ and, since $\nabla \varphi =0$ in $B_{T}$ (and hence in $B_{\tau T}(x)$), it holds that
\begin{eqnarray*}
\left( \int_{B_{2T}}\left\vert G^{y}(x,t)\right\vert ^{2}\left\vert \nabla \varphi (t)\right\vert ^{2}dt\right) ^{\frac{1}{2}} &\leq &CT^{-1}\left( \int_{B_{3T}(x)\backslash B_{\tau T}(x)}\frac{dt}{\left\vert x-t\right\vert ^{2(d-2)}}\right) ^{\frac{1}{2}} \\
&\leq &CT^{1-\frac{d}{2}};
\end{eqnarray*}
\begin{eqnarray*}
\left( \int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert ^{2}\left\vert \nabla \varphi (t)\right\vert ^{2}dt\right) ^{\frac{1}{2}} &\leq &CT^{-1}\left( \int_{B_{3T}(x)\backslash B_{\tau T}(x)}\left\vert \nabla _{t}G^{y}(x,t)\right\vert ^{2}\right) ^{\frac{1}{2}} \\
&\leq &CT^{-1}\left( \sum_{i=\left[ \frac{\ln \tau }{\ln 2}\right] }^{2}\int_{B_{2^{i+1}T}(x)\backslash B_{2^{i}T}(x)}\left\vert \nabla _{t}G^{y}(x,t)\right\vert ^{2}dt\right) ^{\frac{1}{2}} \\
&\leq &CT^{-1}\left( \sum_{i=\left[ \frac{\ln \tau }{\ln 2}\right] }^{2}(2^{i}T)^{2-d}\right) ^{\frac{1}{2}}\leq CT^{-\frac{d}{2}},
\end{eqnarray*}
where $\left[ \frac{\ln \tau }{\ln 2}\right] $ stands for the integer part of $\frac{\ln \tau }{\ln 2}$. We infer that the last two terms in (\ref{e11.10}) are bounded from above by
\begin{equation*}
CT\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}+C\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert u\right\vert ^{2}\right) ^{\frac{1}{2}};
\end{equation*}
indeed, for instance, $T^{1-\frac{d}{2}}\left( \int_{B_{2T}}\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}=CT\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}$ since $\left\vert B_{2T}\right\vert =CT^{d}$. Next, for any $R>2T$, we appeal to (\ref{4.3}) in Lemma \ref{l4.1}, applied to the equation satisfied by $u$, to get
\begin{eqnarray*}
\frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left( \left\vert \nabla u\right\vert ^{2}+T^{-2}\left\vert u\right\vert ^{2}\right) &\leq &\sup_{x\in \mathbb{R}^{d}}\frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}(x)}\left( \left\vert \nabla u\right\vert ^{2}+T^{-2}\left\vert u\right\vert ^{2}\right) \\
&\leq &C\sup_{x\in \mathbb{R}^{d}}\frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}(x)}\left\vert A(t+y)-A(t+z)\right\vert ^{2}dt \\
&&+C\sup_{x\in \mathbb{R}^{d}}\frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}(x)}\left\vert A(t+y)-A(t+z)\right\vert ^{2}\left\vert \nabla v\right\vert ^{2}dt \\
&\leq &C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}^{2},
\end{eqnarray*}
where we have used the facts that $R>2T$ and
\begin{equation*}
\sup_{x\in \mathbb{R}^{d}}\frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}(x)}\left\vert \nabla v\right\vert ^{2}dt\leq C\text{ (see (\ref{4.3}) in Lemma \ref{l4.1}).}
\end{equation*}
It follows at once that
\begin{equation}
T\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}+\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert u\right\vert ^{2}\right) ^{\frac{1}{2}}\leq CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}.  \label{e1.1}
\end{equation}
Concerning the second term in the right-hand side of (\ref{e11.10}), we have
\begin{eqnarray}
\int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert dt &\leq &C\int_{B_{3T}(x)}\left\vert \nabla _{t}G^{y}(x,t)\right\vert dt  \label{e2.1} \\
&\leq &C\sum_{i=-\infty }^{1}\int_{B_{2^{i+1}T}(x)\backslash B_{2^{i}T}(x)}\left\vert \nabla _{t}G^{y}(x,t)\right\vert dt\leq C\sum_{i=-\infty }^{1}2^{i}T\leq CT,  \notag
\end{eqnarray}
where we have used, for the first inequality in (\ref{e2.1}), the fact that $B_{2T}\subset B_{3T}(x)$ (recall that $x\in B_{T}$), and, for the last inequality, (\ref{2.8}) (for $q=1$). It follows that
\begin{equation*}
C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}\int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert dt\leq CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}.
\end{equation*}
As far as the third term on the right-hand side of (\ref{e11.10}) is
concerned, we concentrate on the control of the integral
\begin{equation*}
I=\int_{B_{2T}}\left\vert \nabla _{t}G^{y}(x,t)\right\vert \left\vert \nabla
v(t)\right\vert dt.
\end{equation*}
First, we note that the function $v$ solves the equation
\begin{equation*}
-\nabla \cdot (A(\cdot +z)\nabla v)+T^{-2}v=\nabla \cdot (A(\cdot +z)e_{j})
\text{ in }\mathbb{R}^{d},
\end{equation*}
so that, appealing to (\ref{4.3}),
\begin{equation}
\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert \nabla
v\right\vert ^{2}\right) ^{\frac{1}{2}}\leq C.  \label{e11.15}
\end{equation}
Next, H\"{o}lder's inequality and (\ref{e11.15}) lead to
\begin{eqnarray*}
I &\leq &CT^{\frac{d}{2}}\left( \int_{B_{2T}}\left\vert \nabla
_{t}G^{y}(x,t)\right\vert ^{2}dt\right) ^{\frac{1}{2}}\leq CT^{\frac{d}{2}
}\left( \int_{B_{3T}(x)\setminus B_{\tau T}(x)}\left\vert \nabla
_{t}G^{y}(x,t)\right\vert ^{2}dt\right) ^{\frac{1}{2}} \\
&\leq &CT^{\frac{d}{2}}\left( \sum_{i=\left[ \frac{\ln \tau }{\ln 2}\right]
}^{2}\int_{B_{2^{i+1}T}(x)\setminus B_{2^{i}T}(x)}\left\vert \nabla
_{t}G^{y}(x,t)\right\vert ^{2}dt\right) ^{\frac{1}{2}}\leq CT^{\frac{d}{2}
}\left( \sum_{i=\left[ \frac{\ln \tau }{\ln 2}\right] }^{2}(2^{i}T)^{2-d}
\right) ^{\frac{1}{2}} \\
&\leq &CT^{\frac{d}{2}}T^{1-\frac{d}{2}}=CT.
\end{eqnarray*}
For the fourth term on the right-hand side of (\ref{e11.10}), we have
\begin{equation*}
\int_{B_{2T}}\left\vert G^{y}(x,t)\right\vert \left\vert \nabla \varphi
(t)\right\vert dt\leq CT^{-1}\int_{B_{3T}(x)}\frac{dt}{\left\vert
x-t\right\vert ^{d-2}}\leq CT.
\end{equation*}
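For the reader's convenience, the two elementary sums behind the dyadic annular decompositions above can be recorded explicitly; here $c\in (0,1)$ stands for the fixed inner radius ratio of the annular decomposition, and the constant in the second bound depends only on $c$ and $d$.

```latex
% Elementary sums used in the dyadic decompositions above.
% First sum (used for (e2.1)): a convergent geometric series,
\sum_{i=-\infty }^{1}2^{i}T=T\sum_{i=-\infty }^{1}2^{i}=4T\leq CT.
% Second sum (used for the bound on I): finitely many terms, the largest
% of order (cT)^{2-d}, whence
\sum_{i=\left[ \frac{\ln c}{\ln 2}\right] }^{2}(2^{i}T)^{2-d}\leq
C(c,d)\,T^{2-d}.
```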
We have therefore shown that
\begin{equation}
\left\vert u(x)\right\vert \leq CT^{-2}\int_{B_{2T}}\frac{\left\vert
u(t)\right\vert }{\left\vert x-t\right\vert ^{d-2}}dt+CT\left\Vert A(\cdot
+y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}. \label{e11.17}
\end{equation}
Using well-known fractional integral estimates, (\ref{e11.17}) yields
\begin{equation*}
\left( \frac{1}{\left\vert B_{T}\right\vert }\int_{B_{T}}\left\vert
u\right\vert ^{q}\right) ^{\frac{1}{q}}\leq C\left( \frac{1}{\left\vert
B_{2T}\right\vert }\int_{B_{2T}}\left\vert u\right\vert ^{p}\right) ^{\frac{
1}{p}}+CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})},
\end{equation*}
where $1<p<q\leq \infty $ with $\frac{1}{p}-\frac{1}{q}<\frac{2}{d}$.
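One way to justify the fractional integral estimate used here (a sketch, via Young's convolution inequality rather than the sharp Hardy--Littlewood--Sobolev theorem): since $\frac{1}{p}-\frac{1}{q}<\frac{2}{d}$, the Riesz-type kernel $k(t)=\left\vert t\right\vert ^{2-d}$ satisfies, with $r$ defined by $1+\frac{1}{q}=\frac{1}{r}+\frac{1}{p}$,

```latex
% Young's inequality for convolution with k(t)=|t|^{2-d} on a ball of
% radius comparable to T; the condition 1/p-1/q<2/d is exactly (2-d)r+d>0,
% which makes the L^r norm of k finite:
\left\Vert k\right\Vert _{L^{r}(B_{4T})}\leq C\,T^{2-d\left( \frac{1}{p}-
\frac{1}{q}\right) },\qquad \left\Vert k\ast f\right\Vert _{L^{q}}\leq
\left\Vert k\right\Vert _{L^{r}(B_{4T})}\left\Vert f\right\Vert _{L^{p}}.
```

Applying this with $f=\left\vert u\right\vert \mathbf{1}_{B_{2T}}$, multiplying by the factor $T^{-2}$ coming from (\ref{e11.17}), and normalizing both sides by the appropriate powers of $\left\vert B_{T}\right\vert $ yields precisely the displayed $L^{p}$--$L^{q}$ bound.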
On the other hand, from (\ref{e1.1}) we derive the estimate
\begin{equation*}
\left( \frac{1}{\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert
u\right\vert ^{2}\right) ^{\frac{1}{2}}\leq CT\left\Vert A(\cdot
+y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})},
\end{equation*}
so that by an iteration argument, we are led to
\begin{equation*}
\left\Vert u\right\Vert _{L^{\infty }(B_{T})}\leq CT\left\Vert A(\cdot
+y)-A(\cdot +z)\right\Vert _{L^{\infty }(B_{R})}.
\end{equation*}
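The iteration alluded to here is the usual finite bootstrap (a sketch, with our choice of intermediate exponents and radii): each application of the previous self-improving inequality raises the integrability exponent by almost $2/d$ in the scale $1/p$, so finitely many steps lead from the exponent $2$ to $\infty $.

```latex
% Bootstrap: choose exponents 2 = p_0 < p_1 < ... < p_N = \infty with
% 1/p_k - 1/p_{k+1} < 2/d, and intermediate radii T = t_N < ... < t_0 = 2T;
% applying the previous inequality at each step gives
\left\Vert u\right\Vert _{L^{\infty }(B_{T})}\leq C\left( \frac{1}{
\left\vert B_{2T}\right\vert }\int_{B_{2T}}\left\vert u\right\vert
^{2}\right) ^{\frac{1}{2}}+CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert
_{L^{\infty }(B_{R})}.
```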
This yields (recalling that $x_{0}=0$)
\begin{equation*}
\left\vert u(0)\right\vert \leq CT\left\Vert A(\cdot +y)-A(\cdot
+z)\right\Vert _{L^{\infty }(B_{R})}.
\end{equation*}
Recalling that $0$ may be replaced by any $t\in B_{R}$, this completes the
proof.
\end{proof}
\begin{theorem}
\label{t11.2}Let $T\geq 1$ and $R>2T$. For any $0<L\leq T$ and $\sigma \in
(0,1)$, there is $C_{\sigma }=C_{\sigma }(\sigma ,A)$ such that
\begin{equation}
T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq
C_{\sigma }\left( \rho (L,R)+\left( \frac{L}{T}\right) ^{\sigma }\right) .
\label{11.11}
\end{equation}
\end{theorem}
\begin{proof}
Let $y,z\in \mathbb{R}^{d}$ with $\left\vert z\right\vert \leq L\leq T$.
Then
\begin{equation*}
\left\vert \chi _{T}(y)\right\vert \leq \left\vert \chi _{T}(y)-\chi
_{T}(0)\right\vert +\left\vert \chi _{T}(0)\right\vert
\end{equation*}
and
\begin{eqnarray*}
\left\vert \chi _{T}(y)-\chi _{T}(0)\right\vert &\leq &\left\vert \chi
_{T}(y)-\chi _{T}(z)\right\vert +\left\vert \chi _{T}(z)-\chi
_{T}(0)\right\vert \\
&=&\left\vert \chi _{T}(0+y)-\chi _{T}(0+z)\right\vert +\left\vert \chi
_{T}(z)-\chi _{T}(0)\right\vert \\
&\leq &\sup_{x\in B_{R}}\left\vert \chi _{T}(x+y)-\chi _{T}(x+z)\right\vert
+\left\vert \chi _{T}(z)-\chi _{T}(0)\right\vert \\
&\leq &CT\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty
}(B_{R})}+C_{\sigma }T^{1-\sigma }L^{\sigma },
\end{eqnarray*}
where for the last inequality above we have used (\ref{e5.8}) (in Lemma \ref
{l11.1}) and (\ref{11.9}) (in Theorem \ref{t11.1}). It follows readily that
\begin{equation}
\sup_{y\in \mathbb{R}^{d}}\left\vert \chi _{T}(y)-\chi _{T}(0)\right\vert
\leq T\left( C\rho (L,R)+C_{\sigma }\left( \frac{L}{T}\right) ^{\sigma
}\right) .  \label{11.12}
\end{equation}
On the other hand, observing that
\begin{eqnarray*}
\left\vert \chi _{T}(0)\right\vert &\leq &\left\vert \frac{1}{\left\vert
B_{r}\right\vert }\int_{B_{r}}(\chi _{T}(t)-\chi _{T}(0))dt\right\vert
+\left\vert \frac{1}{\left\vert B_{r}\right\vert }\int_{B_{r}}\chi
_{T}(t)dt\right\vert \\
&\leq &\sup_{y\in \mathbb{R}^{d}}\left\vert \chi _{T}(y)-\chi
_{T}(0)\right\vert +\left\vert \frac{1}{\left\vert B_{r}\right\vert }
\int_{B_{r}}\chi _{T}(t)dt\right\vert
\end{eqnarray*}
and letting $r\rightarrow \infty $, we use the fact that $\left\langle \chi
_{T}\right\rangle =0$ to get
\begin{equation*}
\left\vert \chi _{T}(0)\right\vert \leq \sup_{y\in \mathbb{R}^{d}}\left\vert
\chi _{T}(y)-\chi _{T}(0)\right\vert .
\end{equation*}
The above inequality combined with (\ref{11.12}) yields (\ref{11.11}).
\end{proof}
Now, for $T\geq 1$ and $\sigma \in (0,1]$, we set
\begin{equation}
\Theta _{\sigma }(T)=\inf_{0<L<T}\left( \rho (L,3T)+\left( \frac{L}{T}
\right) ^{\sigma }\right)  \label{11.13}
\end{equation}
where $\rho (L,R)$ is given by (\ref{11.4}). Then $T\mapsto \Theta _{\sigma
}(T)$ is a continuous decreasing function satisfying $\Theta _{\sigma
}(T)\rightarrow 0$ as $T\rightarrow \infty $; this stems from the
asymptotic almost periodicity of $A$, which ensures that $\rho
(L,3T)\rightarrow 0$ as $T\rightarrow \infty $. We infer from (\ref{11.11})
that
\begin{equation}
T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq
C_{\sigma }\Theta _{\sigma }(T)  \label{11.14}
\end{equation}
and hence
\begin{equation*}
T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}
^{d})}\rightarrow 0\text{ as }T\rightarrow \infty .
\end{equation*}
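As a simple illustration of the decay encoded in (\ref{11.13})--(\ref{11.14}) (not needed in what follows): choosing $L=\sqrt{T}$ in the infimum gives the crude bound $\Theta _{\sigma }(T)\leq \rho (\sqrt{T},3T)+T^{-\sigma /2}$. In the particular case where $A$ is periodic, $\rho (L,3T)$ vanishes once $L$ exceeds the diameter of the periodicity cell, and one obtains an explicit rate (here $p$ denotes the cell diameter, a notation used only in this example):

```latex
% Periodic case: take L equal to the diameter p of the periodicity cell,
% so that \rho(L,3T)=0, and (11.13) gives, for every \sigma in (0,1),
\Theta _{\sigma }(T)\leq \left( \frac{p}{T}\right) ^{\sigma },
% whence, by (11.14),
\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C_{\sigma
}\,p^{\sigma }T^{1-\sigma }.
```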
As in \cite{Shen} we state the following result.
\begin{lemma}
\label{l11.2}Let $g\in L_{\infty ,AP}^{2}(\mathbb{R}^{d})\cap L^{\infty }(
\mathbb{R}^{d})$ with $\left\langle g\right\rangle =0$ and
\begin{equation}
\sup_{x\in \mathbb{R}^{d}}\left( \frac{1}{\left\vert B_{r}\right\vert }
\int_{B_{r}(x)}\left\vert g\right\vert ^{2}\right) ^{\frac{1}{2}}\leq
C_{0}\left( \frac{T}{r}\right) ^{1-\sigma }\text{ for }0<r\leq T
\label{11.15}
\end{equation}
where $\sigma \in (0,1]$. Then there is a unique $u\in H_{\infty ,AP}^{1}(
\mathbb{R}^{d})$ such that
\begin{equation}
-\Delta u+T^{-2}u=g\text{ in }\mathbb{R}^{d},\ \ \left\langle u\right\rangle
=0  \label{11.16}
\end{equation}
and
\begin{equation}
T^{-2}\left\Vert u\right\Vert _{L^{\infty }(\mathbb{R}^{d})}+T^{-1}\left
\Vert \nabla u\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C,
\label{11.17}
\end{equation}
\begin{equation}
\left\vert \nabla u(x)-\nabla u(y)\right\vert \leq C_{\sigma }T^{1-\sigma
}\left\vert x-y\right\vert ^{\sigma }\ \ \forall x,y\in \mathbb{R}^{d}
\label{11.18}
\end{equation}
where $C=C(d)$ and $C_{\sigma }=C_{\sigma }(d,\sigma )$. Moreover $u$ and $
\nabla u$ belong to $\mathcal{B}_{\infty ,AP}(\mathbb{R}^{d})$ with
\begin{equation}
T^{-2}\left\Vert u\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C\Theta
_{1}(T)  \label{11.19}
\end{equation}
and
\begin{equation}
T^{-1}\left\Vert \nabla u\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq
C\Theta _{\sigma }(T)  \label{11.20}
\end{equation}
where $\Theta _{\sigma }(T)$ is defined by \emph{(\ref{11.13})} and $
C=C(d,\sigma ,g)$.
\end{lemma}
\begin{proof}
If we proceed as in the proof of Lemma \ref{l4.1}, we derive the existence
of a unique $u\in H_{\infty ,AP}^{1}(\mathbb{R}^{d})$ solving (\ref{11.16});
we may also refer to \cite{Po-Yu89} for another proof. Next using the
fundamental solution of $-\Delta +T^{-2}$, we easily get (\ref{11.17}). We
infer from (\ref{11.17}) that $u,\nabla u\in \mathcal{B}_{\infty ,AP}(
\mathbb{R}^{d})$. In order to obtain (\ref{11.18}) we use (\ref{11.15}) and
proceed as in \cite[Lemma 7.1]{Shen}. It remains to check (\ref{11.19}) and (
\ref{11.20}). To that end, we apply (\ref{11.17}) to the function
\begin{equation*}
\frac{u(\cdot +y)-u(\cdot +z)}{\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert
_{L^{\infty }(B_{R})}}
\end{equation*}
with $u$ solution of (\ref{11.16}). Then
\begin{equation}
T^{-2}\left\Vert u(\cdot +y)-u(\cdot +z)\right\Vert _{L^{\infty
}(B_{R})}\leq C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty
}(B_{R})} \label{11.21}
\end{equation}
and
\begin{equation}
T^{-1}\left\Vert \nabla u(\cdot +y)-\nabla u(\cdot +z)\right\Vert
_{L^{\infty }(B_{R})}\leq C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert
_{L^{\infty }(B_{R})}. \label{11.22}
\end{equation}
Using the boundedness of the gradient (see (\ref{11.17})), we obtain
\begin{equation}
\left\vert u(x)-u(t)\right\vert \leq CT\left\vert x-t\right\vert \ \ \forall
x,t\in \mathbb{R}^{d}. \label{11.23}
\end{equation}
Next assuming that $\left\vert z\right\vert \leq L\leq T$, we have
\begin{eqnarray*}
T^{-2}\left\vert u(y)-u(0)\right\vert &\leq &T^{-2}\left\vert
u(y)-u(z)\right\vert +T^{-2}\left\vert u(z)-u(0)\right\vert \\
&\leq &C\left\Vert A(\cdot +y)-A(\cdot +z)\right\Vert _{L^{\infty
}(B_{R})}+CT^{-1}L
\end{eqnarray*}
where we used (\ref{11.21}) and (\ref{11.23}). Hence
\begin{equation}
\sup_{y\in \mathbb{R}^{d}}T^{-2}\left\vert u(y)-u(0)\right\vert \leq C(\rho
(L,R)+T^{-1}L)  \label{11.24}
\end{equation}
for any $R>2T$ and $L>0$. Also, using the inequality
\begin{eqnarray*}
T^{-2}\left\vert u(0)\right\vert &\leq &T^{-2}\left\vert \frac{1}{\left\vert
B_{r}\right\vert }\int_{B_{r}}(u(t)-u(0))dt\right\vert +T^{-2}\left\vert
\frac{1}{\left\vert B_{r}\right\vert }\int_{B_{r}}u(t)dt\right\vert \\
&\leq &T^{-2}\sup_{y\in \mathbb{R}^{d}}\left\vert u(y)-u(0)\right\vert
+T^{-2}\left\vert \frac{1}{\left\vert B_{r}\right\vert }\int_{B_{r}}u(t)dt
\right\vert
\end{eqnarray*}
together with the fact that $\left\langle u\right\rangle =0$, we get (after
letting $r\rightarrow \infty $)
\begin{equation}
T^{-2}\left\vert u(0)\right\vert \leq C(\rho (L,R)+T^{-1}L)\ \ \forall \
0<L\leq T \label{11.25}
\end{equation}
where we have also used (\ref{11.24}). Putting together (\ref{11.24}) and
(\ref{11.25}), choosing $R=3T$ in the resulting inequality, and finally
taking the infimum over $0<L<T$, we are led to (\ref{11.19}).
Proceeding as above, this time using (\ref{11.18}) and (\ref{11.22}), we
arrive at (\ref{11.20}).
\end{proof}
\begin{lemma}
\label{l5.2}Let $\chi _{T,j}$ be defined by \emph{(\ref{11.5})}, and let $
\Omega $ be a bounded open set of class $\mathcal{C}^{1,1}$ in $\mathbb{R}
^{d}$. Then
\begin{equation}
\int_{\Omega }\left\vert \left( \nabla _{y}\chi _{T,j}\right) \left( \frac{x
}{\varepsilon }\right) w(x)\right\vert ^{2}dx\leq C\int_{\Omega }(\left\vert
w\right\vert ^{2}+\delta ^{2}\left\vert \nabla w\right\vert ^{2})dx\text{
for all }w\in H^{1}(\Omega )  \label{5.30}
\end{equation}
where $\delta =T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}
^{d})}$ with $T=\varepsilon ^{-1}$, and $C=C(A,\Omega ,d)>0$.
\end{lemma}
\begin{proof}
By a density argument, it is sufficient to prove (\ref{5.30}) for $w\in
\mathcal{C}_{0}^{\infty }(\Omega )$. We recall that $\chi _{T,j}$ solves the
equation
\begin{equation}
-\nabla \cdot (A(e_{j}+\nabla \chi _{T,j}))+T^{-2}\chi _{T,j}=0\text{ in }
\mathbb{R}^{d}.  \label{5.32}
\end{equation}
Testing (\ref{5.32}) with $\psi (y)=\varphi (\varepsilon y)$, where $\varphi
\in H_{loc}^{1}(\mathbb{R}^{d})$ has compact support, and then making the
change of variable $x=\varepsilon y$, we get
\begin{equation*}
\int_{\mathbb{R}^{d}}\left[ A^{\varepsilon }(e_{j}+(\nabla _{y}\chi
_{T,j})^{\varepsilon })\cdot \nabla \varphi +T^{-2}\chi _{T,j}^{\varepsilon
}\varphi \right] dx=0
\end{equation*}
where $u^{\varepsilon }(x)=u(x/\varepsilon )$ for $u\in H_{loc}^{1}(\mathbb{R
}^{d})$. Choosing $\varphi (x)=\chi _{T,j}(x/\varepsilon )\left\vert
w(x)\right\vert ^{2}$ with $w\in \mathcal{C}_{0}^{\infty }(\Omega )$, we
obtain
\begin{equation*}
\int_{\Omega }\left[ A^{\varepsilon }(e_{j}+(\nabla _{y}\chi
_{T,j})^{\varepsilon })\cdot \left( \frac{1}{\varepsilon }(\nabla _{y}\chi
_{T,j})^{\varepsilon }\left\vert w\right\vert ^{2}+2w\chi
_{T,j}^{\varepsilon }\nabla w\right) +T^{-2}\left\vert \chi
_{T,j}^{\varepsilon }\right\vert ^{2}\left\vert w\right\vert ^{2}\right]
dx=0,
\end{equation*}
or
\begin{eqnarray}
\int_{\Omega }A^{\varepsilon }(\nabla _{y}\chi _{T,j})^{\varepsilon }w\cdot
(\nabla _{y}\chi _{T,j})^{\varepsilon }wdx &=&-2\varepsilon \int_{\Omega
}A^{\varepsilon }(\nabla _{y}\chi _{T,j})^{\varepsilon }w\cdot \chi
_{T,j}^{\varepsilon }\nabla wdx  \label{5.31} \\
&&-\int_{\Omega }w(A^{\varepsilon }e_{j})\cdot (\nabla _{y}\chi
_{T,j})^{\varepsilon }wdx  \notag \\
&&-2\varepsilon \int_{\Omega }w(A^{\varepsilon }e_{j})\cdot \chi
_{T,j}^{\varepsilon }\nabla wdx  \notag \\
&&-\varepsilon T^{-2}\int_{\Omega }\left\vert \chi _{T,j}^{\varepsilon
}\right\vert ^{2}\left\vert w\right\vert ^{2}dx  \notag \\
&=&I_{1}+I_{2}+I_{3}+I_{4}.  \notag
\end{eqnarray}
The left-hand side of (\ref{5.31}) is bounded from below by $\alpha
\int_{\Omega }\left\vert (\nabla _{y}\chi _{T,j})^{\varepsilon }w\right\vert
^{2}dx$, while for the terms on the right-hand side of (\ref{5.31}) we
have, by H\"{o}lder's and Young's inequalities,
\begin{equation*}
\left\vert I_{1}\right\vert \leq \frac{\alpha }{3}\int_{\Omega }\left\vert
(\nabla _{y}\chi _{T,j})^{\varepsilon }w\right\vert ^{2}dx+C\varepsilon
^{2}\int_{\Omega }\left\vert \chi _{T,j}^{\varepsilon }\right\vert
^{2}\left\vert \nabla w\right\vert ^{2}dx;
\end{equation*}
\begin{equation*}
\left\vert I_{2}\right\vert \leq \frac{\alpha }{3}\int_{\Omega }\left\vert
(\nabla _{y}\chi _{T,j})^{\varepsilon }w\right\vert ^{2}dx+C\int_{\Omega
}\left\vert w\right\vert ^{2}dx;
\end{equation*}
\begin{equation*}
\left\vert I_{3}\right\vert \leq C\int_{\Omega }\left\vert w\right\vert
^{2}dx+C\varepsilon ^{2}\int_{\Omega }\left\vert \chi _{T,j}^{\varepsilon
}\right\vert ^{2}\left\vert \nabla w\right\vert ^{2}dx\text{ and }\left\vert
I_{4}\right\vert \leq C\int_{\Omega }\left\vert w\right\vert ^{2}dx.
\end{equation*}
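Spelling out, for instance, the first of these bounds (a sketch, with $\Lambda $ denoting an $L^{\infty }$ bound on $A$, a notation introduced only here):

```latex
% Pointwise Young inequality 2|a||b| <= (alpha/(3*Lambda))|a|^2
% + (3*Lambda/alpha)|b|^2, applied with a = (\nabla_y\chi_{T,j})^\varepsilon w
% and b = \varepsilon\chi_{T,j}^\varepsilon\nabla w:
\left\vert I_{1}\right\vert \leq \Lambda \int_{\Omega }2\left\vert (\nabla
_{y}\chi _{T,j})^{\varepsilon }w\right\vert \,\varepsilon \left\vert \chi
_{T,j}^{\varepsilon }\nabla w\right\vert dx\leq \frac{\alpha }{3}
\int_{\Omega }\left\vert (\nabla _{y}\chi _{T,j})^{\varepsilon }w\right\vert
^{2}dx+C\varepsilon ^{2}\int_{\Omega }\left\vert \chi _{T,j}^{\varepsilon
}\right\vert ^{2}\left\vert \nabla w\right\vert ^{2}dx.
```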
It follows that
\begin{eqnarray*}
\int_{\Omega }\left\vert (\nabla _{y}\chi _{T,j})^{\varepsilon }w\right\vert
^{2}dx &\leq &C\varepsilon ^{2}\int_{\Omega }\left\vert \chi
_{T,j}^{\varepsilon }\right\vert ^{2}\left\vert \nabla w\right\vert
^{2}dx+C\int_{\Omega }\left\vert w\right\vert ^{2}dx \\
&\leq &C\varepsilon ^{2}\left\Vert \chi _{T,j}\right\Vert _{L^{\infty }(
\mathbb{R}^{d})}^{2}\int_{\Omega }\left\vert \nabla w\right\vert
^{2}dx+C\int_{\Omega }\left\vert w\right\vert ^{2}dx.
\end{eqnarray*}
Since $T=\varepsilon ^{-1}$, we get (\ref{5.30}), taking into account that $
T^{-1}\left\Vert \chi _{T,j}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq
T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}$.
\end{proof}
\begin{remark}
\label{r5.2}In the case of asymptotically periodic functions, we replace $
\chi _{T,j}$ by $\chi _{j}\in H_{\infty ,per}^{1}(Y)$, the solution of the
corrector problem (\ref{1.6}), and we have (in view of Lemma \ref{l1.2}) $
\left\Vert \chi _{j}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq C$. It
follows that
\begin{equation*}
\int_{\Omega }\left\vert \left( \nabla _{y}\chi _{j}\right) \left( \frac{x}{
\varepsilon }\right) w(x)\right\vert ^{2}dx\leq C\int_{\Omega }(\left\vert
w\right\vert ^{2}+\varepsilon ^{2}\left\vert \nabla w\right\vert ^{2})dx
\text{ for all }w\in H^{1}(\Omega )
\end{equation*}
where $C=C(A,\Omega ,d)$.
\end{remark}
Let $u_{0}\in H_{0}^{1}(\Omega )$ be the weak solution of (\ref{1.4}), and
let $z_{\varepsilon }\in H^{1}(\Omega )$ be the unique weak solution of
\begin{equation}
-\nabla \cdot (A^{\varepsilon }\nabla z_{\varepsilon })=0\text{ in }\Omega
,\ z_{\varepsilon }=\varepsilon \chi _{T}^{\varepsilon }\nabla u_{0}\text{
on }\partial \Omega  \label{6.31}
\end{equation}
where $\Omega $ is as in Lemma \ref{l5.2}. Then we have
\begin{lemma}
\label{l5.3}Let $z_{\varepsilon }$ be as in \emph{(\ref{6.31})} with $
T=\varepsilon ^{-1}$. Then there exists $\varepsilon _{0}\in (0,1]$
such that
\begin{equation}
\left\Vert z_{\varepsilon }\right\Vert _{H^{1}(\Omega )}\leq C\left(
T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\right)
^{\frac{1}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )},\ 0<\varepsilon
\leq \varepsilon _{0},  \label{6.38}
\end{equation}
where $C=C(A,\Omega )>0$.
\end{lemma}
It follows from (\ref{6.38}) that for any $\sigma \in (0,1)$, there exists $
C_{\sigma }=C_{\sigma }(\sigma ,A,\Omega )>0$ such that
\begin{equation}
\left\Vert z_{\varepsilon }\right\Vert _{H^{1}(\Omega )}\leq C_{\sigma
}(\Theta _{\sigma }(\varepsilon ^{-1}))^{\frac{1}{2}}\left\Vert
u_{0}\right\Vert _{H^{2}(\Omega )},\ 0<\varepsilon \leq \varepsilon _{0}
\label{6.39}
\end{equation}
where $\Theta _{\sigma }$ is defined by (\ref{11.13}).
For the proof of Lemma \ref{l5.3}, we need the following result whose proof
can be found in \cite{Suslina}.
\begin{lemma}[{\protect\cite[Lemma 5.1]{Suslina}}]
\label{l5.4}Let $\Omega $ be as in Lemma \emph{\ref{l5.2}}. Then there
exists $\delta _{0}\in (0,1]$ depending on $\Omega $ such that, for any $
u\in H^{1}(\Omega )$,
\begin{equation}
\int_{\Gamma _{\delta }}\left\vert u\right\vert ^{2}dx\leq C\delta
\left\Vert u\right\Vert _{L^{2}(\Omega )}\left\Vert u\right\Vert
_{H^{1}(\Omega )}\text{, }0<\delta \leq \delta _{0}  \label{5.34}
\end{equation}
where $C=C(\Omega )$ and $\Gamma _{\delta }=\Omega _{\delta }\cap \Omega $
with $\Omega _{\delta }=\{x\in \mathbb{R}^{d}:\mathrm{dist}(x,\partial
\Omega )<\delta \}$.
\end{lemma}
\begin{proof}[Proof of Lemma \protect\ref{l5.3}]
We set $w=\nabla u_{0}$ and $u=z_{\varepsilon }$. Assuming $u_{0}\in
H^{2}(\Omega )$, we have that $w\in H^{1}(\Omega )^{d}$. Since $\delta
:=T^{-1}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}
^{d})}\rightarrow 0$ as $T\rightarrow \infty $, we may assume that $0<\delta
\leq \delta _{0}$ where $\delta _{0}$ is as in Lemma \ref{l5.4}. Let $\theta
_{\delta }$ be a cut-off function supported in a neighborhood of $\partial
\Omega $, namely in $\Omega _{2\delta }$ (a $2\delta $-neighborhood of $
\partial \Omega $), $\Omega _{\rho }$ being defined as in Lemma \ref{l5.4}:
\begin{equation}
\theta _{\delta }\in \mathcal{C}_{0}^{\infty }(\mathbb{R}^{d})\text{, }
\mathrm{supp}\,\theta _{\delta }\subset \Omega _{2\delta }\text{, }0\leq
\theta _{\delta }\leq 1\text{, }\theta _{\delta }=1\text{ on }\Omega
_{\delta }\text{, }\theta _{\delta }=0\text{ on }\mathbb{R}^{d}\setminus
\Omega _{2\delta }\text{ and }\delta \left\vert \nabla \theta _{\delta
}\right\vert \leq C.  \label{5.35}
\end{equation}
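Such a cut-off is standard; one possible construction (a sketch, with $\zeta _{\delta /4}$ a mollifier at scale $\delta /4$, a notation used only here) is the following.

```latex
% Mollify the indicator of a 3\delta/2-neighborhood of the boundary:
\theta _{\delta }=\mathbf{1}_{\Omega _{3\delta /2}}\ast \zeta _{\delta /4},
% so that supp\,\theta_\delta \subset \Omega_{7\delta/4}\subset\Omega_{2\delta},
% \theta_\delta = 1 on \Omega_{5\delta/4}\supset\Omega_\delta, and
\left\vert \nabla \theta _{\delta }\right\vert \leq \left\Vert \nabla \zeta
_{\delta /4}\right\Vert _{L^{1}(\mathbb{R}^{d})}\leq C\delta ^{-1}.
```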
We set $\Phi _{\varepsilon }(x)=\varepsilon \theta _{\delta }(x)\chi
_{T}(x/\varepsilon )w(x)$. Then
\begin{equation*}
\left\Vert u\right\Vert _{H^{1}(\Omega )}\leq C\varepsilon \left\Vert \chi
_{T}^{\varepsilon }w\right\Vert _{H^{1/2}(\partial \Omega )}\leq C\left\Vert
\Phi _{\varepsilon }\right\Vert _{H^{1}(\Omega )}.
\end{equation*}
So we need to estimate $\left\Vert \nabla \Phi _{\varepsilon }\right\Vert
_{L^{2}(\Omega )}$. But
\begin{eqnarray*}
\nabla \Phi _{\varepsilon } &=&\varepsilon \chi _{T}^{\varepsilon }w\nabla
\theta _{\delta }+(\nabla _{y}\chi _{T})^{\varepsilon }w\theta _{\delta
}+\varepsilon \chi _{T}^{\varepsilon }\theta _{\delta }\nabla w \\
&=&J_{1}+J_{2}+J_{3}.
\end{eqnarray*}
We have
\begin{eqnarray*}
\left\Vert J_{1}\right\Vert _{L^{2}(\Omega )}^{2} &\leq &C\varepsilon
^{2}\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}^{2}\delta
^{-2}\int_{\Gamma _{2\delta }}\left\vert w\right\vert ^{2}dx \\
&\leq &C\int_{\Gamma _{2\delta }}\left\vert w\right\vert ^{2}dx\leq C\delta
\left\Vert w\right\Vert _{L^{2}(\Omega )}\left\Vert w\right\Vert
_{H^{1}(\Omega )}
\end{eqnarray*}
where we have used (\ref{5.34}) for the last inequality above. For $J_{2}$,
we have (using (\ref{5.30}) and (\ref{5.34}))
\begin{eqnarray*}
\left\Vert J_{2}\right\Vert _{L^{2}(\Omega )}^{2} &\leq &\int_{\Omega
}\left\vert (\nabla _{y}\chi _{T})^{\varepsilon }w\theta _{\delta
}\right\vert ^{2}dx\leq C\int_{\Omega }\left( \left\vert w\theta _{\delta
}\right\vert ^{2}+\delta ^{2}\left\vert \nabla (w\theta _{\delta
})\right\vert ^{2}\right) dx \\
&\leq &C\int_{\Gamma _{2\delta }}\left\vert w\right\vert ^{2}dx+C\delta
^{2}\int_{\Omega }\left\vert \nabla (w\theta _{\delta })\right\vert ^{2}dx \\
&\leq &C\delta \left\Vert w\right\Vert _{L^{2}(\Omega )}\left\Vert
w\right\Vert _{H^{1}(\Omega )}+C\delta ^{2}\int_{\Omega }\left\vert \nabla
(w\theta _{\delta })\right\vert ^{2}dx.
\end{eqnarray*}
But $\nabla (w\theta _{\delta })=w\nabla \theta _{\delta }+\theta _{\delta
}\nabla w$, and
\begin{eqnarray*}
\int_{\Omega }\left\vert \nabla (w\theta _{\delta })\right\vert ^{2}dx &\leq
&C\int_{\Gamma _{2\delta }}\left\vert \nabla \theta _{\delta }\right\vert
^{2}\left\vert w\right\vert ^{2}dx+C\int_{\Omega }\left\vert \theta _{\delta
}\nabla w\right\vert ^{2}dx \\
&\leq &C\delta ^{-1}\left\Vert w\right\Vert _{L^{2}(\Omega )}\left\Vert
w\right\Vert _{H^{1}(\Omega )}+C\int_{\Omega }\left\vert \nabla w\right\vert
^{2}dx.
\end{eqnarray*}
Hence
\begin{equation*}
\left\Vert J_{2}\right\Vert _{L^{2}(\Omega )}^{2}\leq C{\Greekmath 010E} \left\Vert
w\right\Vert _{L^{2}(\Omega )}\left\Vert w\right\Vert _{H^{1}(\Omega
)}+C{\Greekmath 010E} ^{2}\left\Vert w\right\Vert _{H^{1}(\Omega )}^{2}.
\end{equation*}
As for $J_{3}$,
\begin{equation*}
\left\Vert J_{3}\right\Vert _{L^{2}(\Omega )}^{2}\leq C\varepsilon
^{2}\int_{\Omega }\left\vert \chi _{T}^{\varepsilon }\right\vert
^{2}\left\vert \nabla w\right\vert ^{2}dx\leq C\delta ^{2}\left\Vert
w\right\Vert _{H^{1}(\Omega )}^{2}.
\end{equation*}
Finally, using Young's inequality together with the fact that $\delta
^{2}\leq \delta $, we are led to
\begin{eqnarray}
\left\Vert \nabla \Phi _{\varepsilon }\right\Vert _{L^{2}(\Omega )}^{2}
&\leq &C\delta \left\Vert w\right\Vert _{L^{2}(\Omega )}\left\Vert
w\right\Vert _{H^{1}(\Omega )}+C\delta ^{2}\left\Vert w\right\Vert
_{H^{1}(\Omega )}^{2} \label{5.35'} \\
&\leq &C\delta \left\Vert w\right\Vert _{H^{1}(\Omega )}^{2}+C\delta
^{2}\left\Vert w\right\Vert _{H^{1}(\Omega )}^{2} \notag \\
&\leq &C\delta \left\Vert w\right\Vert _{H^{1}(\Omega )}^{2}. \notag
\end{eqnarray}
We then choose $\varepsilon _{0}$ such that $0<\delta \leq \delta _{0}$
whenever $0<\varepsilon \leq \varepsilon _{0}$ (recall that $\delta
\rightarrow 0$ as $\varepsilon \rightarrow 0$). We thus derive (\ref{6.38}),
since $\left\Vert \Phi _{\varepsilon }\right\Vert _{L^{2}(\Omega )}^{2}\leq
\delta \left\Vert w\right\Vert _{H^{1}(\Omega )}^{2}$.
\end{proof}
\subsection{Convergence rates: proof of Theorem \protect\ref{t1.4}}
Assume that $\Omega $ is of class $\mathcal{C}^{1,1}$. Let $u_{\varepsilon }$
, $u_{0}\in H_{0}^{1}(\Omega )$ be the weak solutions of (\ref{1.1}) and (
\ref{1.4}) respectively. Let $\chi _{T}^{\varepsilon }(x)=\chi
_{T}(x/\varepsilon )$ for $x\in \Omega $ and define
\begin{equation}
w_{\varepsilon }=u_{\varepsilon }-u_{0}-\varepsilon \chi _{T}^{\varepsilon
}\nabla u_{0}+z_{\varepsilon } \label{6.30}
\end{equation}
where $T=\varepsilon ^{-1}$ and $z_{\varepsilon }\in H^{1}(\Omega )$ is the
weak solution of (\ref{6.31}).
\begin{theorem}
\label{t6.1}Suppose that $A$ is as in the preceding subsection. Assume that $
u_{0}\in H^{2}(\Omega )$. Then for any $\sigma \in (0,1)$ there exists $
C_{\sigma }=C_{\sigma }(\sigma ,A,\Omega )$ such that
\begin{equation}
\left\Vert w_{\varepsilon }\right\Vert _{H^{1}(\Omega )}\leq C_{\sigma
}\left( \left\Vert \nabla \chi -\nabla \chi _{\varepsilon ^{-1}}\right\Vert
_{2}+\Theta _{\sigma }(\varepsilon ^{-1})\right) \left\Vert u_{0}\right\Vert
_{H^{2}(\Omega )}. \label{6.37}
\end{equation}
\end{theorem}
\begin{proof}
Set
\begin{equation*}
A_{T}=A+A\nabla _{y}\chi _{T}-A^{\ast }
\end{equation*}
where $A^{\ast }$ is the homogenized matrix and where we have taken $
T=\varepsilon ^{-1}$. Then by simple computations as in Lemma \ref{l5.1} we
get
\begin{equation*}
-\nabla \cdot \left( A^{\varepsilon }\nabla w_{\varepsilon }\right) =\nabla
\cdot \left( A_{T}^{\varepsilon }\nabla u_{0}\right) +\varepsilon \nabla
\cdot (A^{\varepsilon }\nabla ^{2}u_{0}\chi _{T}^{\varepsilon }).
\end{equation*}
This implies that
\begin{equation}
\left\Vert \nabla w_{\varepsilon }\right\Vert _{L^{2}(\Omega )}\leq
C\left\Vert A_{T}^{\varepsilon }\nabla u_{0}\right\Vert _{L^{2}(\Omega
)}+C\varepsilon \left\Vert A^{\varepsilon }\nabla ^{2}u_{0}\chi
_{T}^{\varepsilon }\right\Vert _{L^{2}(\Omega )}. \label{10.6}
\end{equation}
We use (\ref{11.14}) to get
\begin{eqnarray}
\varepsilon \left\Vert A^{\varepsilon }\nabla ^{2}u_{0}\chi
_{T}^{\varepsilon }\right\Vert _{L^{2}(\Omega )} &\leq &C\varepsilon
\left\Vert \chi _{T}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\left\Vert
\nabla ^{2}u_{0}\right\Vert _{L^{2}(\Omega )} \label{10.7} \\
&\leq &C\Theta _{\sigma }(T)\left\Vert \nabla ^{2}u_{0}\right\Vert
_{L^{2}(\Omega )}. \notag
\end{eqnarray}
Concerning the term $\left\Vert A_{T}^{\varepsilon }\nabla u_{0}\right\Vert
_{L^{2}(\Omega )}$, we need to replace $A_{T}$ by a matrix $\mathcal{A}_{T}$
whose mean value is zero. So, we let $\mathcal{A}_{T}=A_{T}-\left\langle
A_{T}\right\rangle $, so that $\left\langle \mathcal{A}_{T}\right\rangle =0$
and $A_{T}^{\varepsilon }\nabla u_{0}=\mathcal{A}_{T}^{\varepsilon }\nabla
u_{0}+\left\langle A_{T}\right\rangle \nabla u_{0}$. The inequality $
\left\vert \left\langle A_{T}\right\rangle \right\vert \leq C\left\Vert
\nabla \chi -\nabla \chi _{T}\right\Vert _{2}$ yields readily
\begin{equation}
\left\Vert \left\langle A_{T}\right\rangle \nabla u_{0}\right\Vert
_{L^{2}(\Omega )}\leq C\left\Vert \nabla \chi -\nabla \chi _{T}\right\Vert
_{2}\left\Vert \nabla u_{0}\right\Vert _{L^{2}(\Omega )}. \label{10.8}
\end{equation}
It remains to estimate $\left\Vert \mathcal{A}_{T}^{\varepsilon }\nabla
u_{0}\right\Vert _{L^{2}(\Omega )}$. We denote by $a_{T,ij}$ the entries of $
\mathcal{A}_{T}$: $a_{T,ij}=b_{T,ij}-\left\langle b_{T,ij}\right\rangle
\equiv a_{ij}$, where
\begin{equation*}
b_{T,ij}(y)=b_{ij}(y)+\sum_{k=1}^{d}b_{ik}(y)\frac{\partial \chi _{T,j}}{
\partial y_{k}}(y)-b_{ij}^{\ast }.
\end{equation*}
In view of Lemma \ref{l11.2}, let $f_{T,ij}\equiv f_{ij}\in H_{\infty
,AP}^{1}(\mathbb{R}^{d})$ be the unique solution of
\begin{equation*}
-\Delta f_{ij}+T^{-2}f_{ij}=a_{ij}\text{ in }\mathbb{R}^{d},\ \ \left\langle
f_{ij}\right\rangle =0.
\end{equation*}
Owing to (\ref{e5.7}), we see that $a_{ij}$ verifies (\ref{11.15}), so that (
\ref{11.19}) and (\ref{11.20}) are satisfied, that is:
\begin{equation}
T^{-2}\left\Vert f_{ij}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\leq
C\Theta _{1}(T)\text{ and }T^{-1}\left\Vert \nabla f_{ij}\right\Vert
_{L^{\infty }(\mathbb{R}^{d})}\leq C\Theta _{\sigma }(T). \label{11.26}
\end{equation}
We set $\mathbf{f}=(f_{ij})_{1\leq i,j\leq d}$. Then writing (formally)
\begin{equation*}
a_{ij}=-\sum_{k=1}^{d}\left( \frac{\partial }{\partial y_{k}}\left( \frac{
\partial f_{ij}}{\partial y_{k}}-\frac{\partial f_{kj}}{\partial y_{i}}
\right) +\frac{\partial }{\partial y_{i}}\left( \frac{\partial f_{kj}}{
\partial y_{k}}\right) \right) +T^{-2}f_{ij}
\end{equation*}
and using the fact that
\begin{equation*}
\sum_{i,k=1}^{d}\frac{\partial ^{2}}{\partial y_{i}\partial y_{k}}\left(
\frac{\partial f_{ij}}{\partial y_{k}}-\frac{\partial f_{kj}}{\partial y_{i}}
\right) =0,
\end{equation*}
we readily get
\begin{align}
-\nabla \cdot (\mathcal{A}_{T}^{\varepsilon }\nabla u_{0})& =\nabla \cdot
\left( (\Delta \mathbf{f})^{\varepsilon }\nabla u_{0}\right) -T^{-2}\nabla
\cdot (\mathbf{f}^{\varepsilon }\nabla u_{0}) \label{11.27} \\
& =\sum_{i,j,k=1}^{d}\frac{\partial }{\partial x_{i}}\left( \frac{\partial }{
\partial x_{k}}\left( \frac{\partial f_{ij}}{\partial x_{k}}-\frac{\partial
f_{kj}}{\partial x_{i}}\right) \left( \frac{x}{\varepsilon }\right) \frac{
\partial u_{0}}{\partial x_{j}}\right) \notag \\
& +\sum_{i,j,k=1}^{d}\frac{\partial }{\partial x_{i}}\left( \frac{\partial
^{2}f_{kj}}{\partial x_{k}\partial x_{i}}\left( \frac{x}{\varepsilon }
\right) \frac{\partial u_{0}}{\partial x_{j}}\right) -T^{-2}\nabla \cdot (
\mathbf{f}^{\varepsilon }\nabla u_{0}) \notag \\
& =-\sum_{i,j,k=1}^{d}\frac{\partial }{\partial x_{i}}\left( \varepsilon
\left( \frac{\partial f_{ij}}{\partial x_{k}}-\frac{\partial f_{kj}}{
\partial x_{i}}\right) \left( \frac{x}{\varepsilon }\right) \frac{\partial
^{2}u_{0}}{\partial x_{k}\partial x_{j}}\right) \notag \\
& +\sum_{i,j,k=1}^{d}\frac{\partial }{\partial x_{i}}\left( \frac{\partial
^{2}f_{kj}}{\partial x_{k}\partial x_{i}}\left( \frac{x}{\varepsilon }
\right) \frac{\partial u_{0}}{\partial x_{j}}\right) -T^{-2}\nabla \cdot (
\mathbf{f}^{\varepsilon }\nabla u_{0}). \notag
\end{align}
Testing (\ref{11.27}) with $\varphi \in H_{0}^{1}(\Omega )$, we obtain
\begin{eqnarray}
\left\Vert \mathcal{A}_{T}^{\varepsilon }\nabla u_{0}\right\Vert
_{L^{2}(\Omega )} &\leq &C\varepsilon \left( \int_{\Omega }\left\vert \nabla
\mathbf{f}\left( \frac{x}{\varepsilon }\right) \right\vert ^{2}\left\vert
\nabla ^{2}u_{0}\right\vert ^{2}dx\right) ^{\frac{1}{2}} \label{6.35} \\
&&+C\sum_{j=1}^{d}\left( \int_{\Omega }\left\vert \nabla h_{T,j}\left( \frac{
x}{\varepsilon }\right) \right\vert ^{2}\left\vert \nabla u_{0}\right\vert
^{2}dx\right) ^{\frac{1}{2}}+\left\vert \left\langle A_{T}\right\rangle
\right\vert \left\Vert \nabla u_{0}\right\Vert _{L^{2}(\Omega )} \notag \\
&&+CT^{-2}\left( \int_{\Omega }\left\vert \mathbf{f}^{\varepsilon
}\right\vert ^{2}\left\vert \nabla u_{0}\right\vert ^{2}dx\right) ^{\frac{1}{
2}} \notag \\
&=&I_{1}+I_{2}+I_{3}+I_{4} \notag
\end{eqnarray}
where $h_{T,j}=\sum_{k=1}^{d}\frac{\partial f_{kj}}{\partial y_{k}}\in
L_{\infty ,AP}^{2}(\mathbb{R}^{d})$. We estimate each term above separately.
Let us first deal with $I_{2}$. Observe that $h_{T,j}=\operatorname{div}
f_{\cdot j}$ where $f_{\cdot j}=(f_{kj})_{1\leq k\leq d}$. It follows from
the definition of $f_{ij}$ that
\begin{equation*}
-\Delta f_{\cdot j}+T^{-2}f_{\cdot j}=A(e_{j}+\nabla \chi
_{T,j})-\left\langle A(e_{j}+\nabla \chi _{T,j})\right\rangle ,
\end{equation*}
so that, owing to the definition of $\chi _{T,j}$,
\begin{equation}
-\Delta h_{T,j}+T^{-2}h_{T,j}=T^{-2}\chi _{T,j}. \label{6.36}
\end{equation}
Next, since the function $g=T^{-1}\chi _{T,j}$ satisfies assumption (\ref
{11.15}) of Lemma \ref{l11.2} with $\sigma =1$, it follows that $h_{T,j}$
satisfies estimate (\ref{11.20}), that is,
\begin{equation*}
T^{-1}\left\Vert \nabla h_{T,j}\right\Vert _{L^{\infty }(\mathbb{R}
^{d})}\leq C_{\tau }\Theta _{\tau }(T)\ \ \forall \tau \in (0,1).
\end{equation*}
Therefore
\begin{equation*}
\left\vert I_{2}\right\vert \leq C\varepsilon \left\Vert \nabla
h_{T,j}\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\left\Vert \nabla
u_{0}\right\Vert _{L^{2}(\Omega )}\leq C\Theta _{\sigma }(T)\left\Vert
\nabla u_{0}\right\Vert _{L^{2}(\Omega )}.
\end{equation*}
As regards $I_{1}$, we infer from (\ref{11.26}) that
\begin{equation*}
\left\vert I_{1}\right\vert \leq C\varepsilon \left\Vert \nabla \mathbf{f}
\right\Vert _{L^{\infty }(\mathbb{R}^{d})}\left\Vert \nabla
^{2}u_{0}\right\Vert _{L^{2}(\Omega )}\leq C\Theta _{\sigma }(T)\left\Vert
\nabla ^{2}u_{0}\right\Vert _{L^{2}(\Omega )}.
\end{equation*}
Concerning $I_{4}$, we use the first inequality in (\ref{11.26}) to get
\begin{equation*}
\left\vert I_{4}\right\vert \leq C\Theta _{1}(T)\left\Vert \nabla
u_{0}\right\Vert _{L^{2}(\Omega )}
\end{equation*}
where we have put $T=\varepsilon ^{-1}$. Finally, using the inequality $
\left\vert \left\langle A_{T}\right\rangle \right\vert \leq C\left\Vert
\nabla \chi -\nabla \chi _{T}\right\Vert _{2}$, we get
\begin{equation*}
\left\vert I_{3}\right\vert \leq C\left\Vert \nabla \chi -\nabla \chi
_{T}\right\Vert _{2}\left\Vert \nabla u_{0}\right\Vert _{L^{2}(\Omega )}.
\end{equation*}
The result follows.
\end{proof}
We are now in a position to prove Theorem \ref{t1.4}.
\begin{proof}[Proof of Theorem \protect\ref{t1.4}]
Using (\ref{6.37}) together with (\ref{6.39}) we get, for any $\sigma \in
(0,1)$,
\begin{equation*}
\begin{array}{l}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T=\varepsilon
^{-1}}^{\varepsilon }\nabla u_{0}\right\Vert _{H^{1}(\Omega )} \\
\leq \left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T=\varepsilon
^{-1}}^{\varepsilon }\nabla u_{0}+z_{\varepsilon }\right\Vert _{H^{1}(\Omega
)}+\left\Vert z_{\varepsilon }\right\Vert _{H^{1}(\Omega )} \\
\leq C\left( \left\Vert \nabla \chi -\nabla \chi _{\varepsilon
^{-1}}\right\Vert _{2}+\Theta _{\sigma }(\varepsilon ^{-1})\right)
\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}+C_{\sigma }(\Theta _{\sigma
}(\varepsilon ^{-1}))^{\frac{1}{2}}\left\Vert u_{0}\right\Vert
_{H^{2}(\Omega )} \\
\leq C\left( \left\Vert \nabla \chi -\nabla \chi _{\varepsilon
^{-1}}\right\Vert _{2}+\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\sigma
}+\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\frac{\sigma }{2}}\right)
\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )} \\
\leq C\left( \left\Vert \nabla \chi -\nabla \chi _{\varepsilon
^{-1}}\right\Vert _{2}+\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\sigma
}\right) ^{\frac{1}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )},
\end{array}
\end{equation*}
the last inequality above stemming from the fact that $\left\Vert \nabla
\chi -\nabla \chi _{\varepsilon ^{-1}}\right\Vert _{2}+\left( \Theta
_{1}(\varepsilon ^{-1})\right) ^{\sigma }\rightarrow 0$ when $\varepsilon
\rightarrow 0$, so that we may assume
\begin{equation*}
\left\Vert \nabla \chi -\nabla \chi _{\varepsilon ^{-1}}\right\Vert
_{2}+\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\sigma }<1\text{ for
sufficiently small }\varepsilon .
\end{equation*}
Choosing $\sigma =\frac{1}{2}$, we obtain
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T=\varepsilon
^{-1}}^{\varepsilon }\nabla u_{0}\right\Vert _{H^{1}(\Omega )}\leq C\left(
\left\Vert \nabla \chi -\nabla \chi _{\varepsilon ^{-1}}\right\Vert
_{2}+\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\frac{1}{2}}\right) ^{
\frac{1}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}. \label{5.37''}
\end{equation}
We recall that, since $\Omega $ is a $\mathcal{C}^{1,1}$-bounded domain in $
\mathbb{R}^{d}$ and the matrix $A^{\ast }$ has constant entries, it holds
that
\begin{equation}
\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\left\Vert f\right\Vert
_{L^{2}(\Omega )},\ C=C(d,\alpha ,\Omega )>0. \label{5.37'}
\end{equation}
Next, for $\varepsilon \in (0,1]$, set
\begin{equation*}
\eta (\varepsilon )=\left( \left\Vert \nabla \chi -\nabla \chi _{\varepsilon
^{-1}}\right\Vert _{2}+\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\frac{1
}{2}}\right) ^{\frac{1}{2}}.
\end{equation*}
Since $\eta (\varepsilon )\rightarrow 0$ as $\varepsilon \rightarrow 0$, we
obtain from (\ref{5.37''}) and (\ref{5.37'}) the statement of (\ref{Eq03})
in Theorem \ref{t1.4}.
It remains to check the near-optimal convergence rates result (\ref{Eq02}).
We proceed in two parts.
\textit{Part I}. We first check that
\begin{equation}
\left\Vert u_{\varepsilon }\right\Vert _{H^{1}(\Gamma _{2\delta })}\leq
C\eta (\varepsilon )\left\Vert f\right\Vert _{L^{2}(\Omega )}\text{ where }
\delta =\left( \eta (\varepsilon )\right) ^{2}. \label{5.37}
\end{equation}
Indeed, we have $u_{\varepsilon }=(u_{\varepsilon }-u_{0}-\varepsilon \chi
_{T}^{\varepsilon }\nabla u_{0})+u_{0}+\varepsilon \chi _{T}^{\varepsilon
}\nabla u_{0}$, so that
\begin{equation*}
\left\Vert u_{\varepsilon }\right\Vert _{H^{1}(\Gamma _{2\delta })}\leq
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T}^{\varepsilon }\nabla
u_{0}\right\Vert _{H^{1}(\Gamma _{2\delta })}+\left\Vert u_{0}\right\Vert
_{H^{1}(\Gamma _{2\delta })}+\left\Vert \varepsilon \chi _{T}^{\varepsilon
}\nabla u_{0}\right\Vert _{H^{1}(\Gamma _{2\delta })}.
\end{equation*}
It follows from (\ref{Eq03}) and (\ref{5.37'}) that
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T}^{\varepsilon }\nabla
u_{0}\right\Vert _{H^{1}(\Gamma _{2\delta })}\leq C\eta (\varepsilon
)\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\eta (\varepsilon
)\left\Vert f\right\Vert _{L^{2}(\Omega )}. \label{Eq04}
\end{equation}
Using (\ref{5.34}) we obtain
\begin{equation}
\left\Vert u_{0}\right\Vert _{H^{1}(\Gamma _{2\delta })}\leq C\delta ^{\frac{
1}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\delta ^{\frac{1}{2}
}\left\Vert f\right\Vert _{L^{2}(\Omega )}. \label{Eq05}
\end{equation}
To estimate $\left\Vert \varepsilon \chi _{T}^{\varepsilon }\nabla
u_{0}\right\Vert _{H^{1}(\Gamma _{2\delta })}$, we consider a cut-off
function $\theta _{2\delta }$ of the same form as in (\ref{5.35}), but with $
\delta $ replaced there by $2\delta $. Letting $w=\nabla u_{0}$, we observe
that $\varepsilon \chi _{T}^{\varepsilon }w=\varepsilon \theta _{2\delta
}\chi _{T}^{\varepsilon }w$ on $\Gamma _{2\delta }$, so that
\begin{equation*}
\nabla (\varepsilon \chi _{T}^{\varepsilon }w)=\varepsilon \chi
_{T}^{\varepsilon }w\nabla \theta _{2\delta }+(\nabla _{y}\chi
_{T})^{\varepsilon }w\theta _{2\delta }+\varepsilon \chi _{T}^{\varepsilon
}\theta _{2\delta }\nabla w\text{ on }\Gamma _{2\delta }.
\end{equation*}
Following the same procedure as in the proof of Lemma \ref{l5.3}, we get
\begin{equation}
\left\Vert \varepsilon \chi _{T}^{\varepsilon }\nabla u_{0}\right\Vert
_{H^{1}(\Gamma _{2\delta })}\leq C\delta ^{\frac{1}{2}}\left\Vert
u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\delta ^{\frac{1}{2}}\left\Vert
f\right\Vert _{L^{2}(\Omega )}. \label{Eq06}
\end{equation}
Choosing $\delta =\left( \eta (\varepsilon )\right) ^{2}$ in (\ref{Eq05})
and (\ref{Eq06}), and taking into account (\ref{Eq04}), we readily get (\ref
{5.37}).
\textit{Part II}. Note that (\ref{6.37}) implies
\begin{equation}
\left\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi _{T}^{\varepsilon }\nabla
u_{0}+z_{\varepsilon }\right\Vert _{L^{2}(\Omega )}\leq C\left( \eta
(\varepsilon )\right) ^{2}\left\Vert f\right\Vert _{L^{2}(\Omega )}.
\label{5.38}
\end{equation}
Thus, using the inequality
\begin{equation}
\left\Vert \varepsilon \chi _{T}^{\varepsilon }\nabla u_{0}\right\Vert
_{L^{2}(\Omega )}\leq C\left( \Theta _{1}(\varepsilon ^{-1})\right) ^{\frac{1
}{2}}\left\Vert u_{0}\right\Vert _{H^{2}(\Omega )}\leq C\left( \eta
(\varepsilon )\right) ^{2}\left\Vert f\right\Vert _{L^{2}(\Omega )},
\label{5.39}
\end{equation}
we see that proving (\ref{Eq02}) amounts to proving that
\begin{equation}
\left\Vert z_{\varepsilon }\right\Vert _{L^{2}(\Omega )}\leq C\left( \eta
(\varepsilon )\right) ^{2}\left\Vert f\right\Vert _{L^{2}(\Omega )}
\label{5.40}
\end{equation}
where $C=C(d,A,\Omega )$. To that end, we consider the function
\begin{equation}
v_{\varepsilon }=z_{\varepsilon }-\Phi _{\varepsilon }\text{, where }\Phi
_{\varepsilon }=\varepsilon \theta _{\delta }\chi _{T}^{\varepsilon }\nabla
u_{0}\text{ with }\delta =\left( \eta (\varepsilon )\right) ^{2}.
\label{5.41}
\end{equation}
Then $v_{\varepsilon }\in H_{0}^{1}(\Omega )$ and $-\nabla \cdot
(A^{\varepsilon }\nabla v_{\varepsilon })=F_{\varepsilon }\equiv \nabla
\cdot (A^{\varepsilon }\nabla \Phi _{\varepsilon })$ in $\Omega $. As shown
in (\ref{5.35'}) (where we use the inequality (\ref{5.7'})), we have
\begin{equation}
\left\Vert \nabla \Phi _{\varepsilon }\right\Vert _{L^{2}(\Omega )}\leq
C\delta ^{\frac{1}{2}}\left\Vert f\right\Vert _{L^{2}(\Omega )}\text{ and }
\left\Vert \Phi _{\varepsilon }\right\Vert _{L^{2}(\Omega )}\leq C\delta
\left\Vert f\right\Vert _{L^{2}(\Omega )}. \label{5.42}
\end{equation}
Now, let $F\in L^{2}(\Omega )$ be arbitrarily fixed, and let $t_{\varepsilon
}\in H_{0}^{1}(\Omega )$ be the solution of
\begin{equation}
-\nabla \cdot (A^{\varepsilon }\nabla t_{\varepsilon })=F\text{ in }\Omega .
\label{5.43}
\end{equation}
Following the homogenization process of (\ref{1.1}) (see the proof of
Theorem \ref{t1.1} in Section 2), we deduce the existence of a function $
t_{0}\in H_{0}^{1}(\Omega )$ such that $t_{\varepsilon }\rightharpoonup
t_{0}$ weakly in $H_{0}^{1}(\Omega )$, where $t_{0}$ is the unique solution
of the equation $-\nabla \cdot (A^{\ast }\nabla t_{0})=F$ in $\Omega $. It
follows from (\ref{5.37}) that
\begin{equation}
\left\Vert \nabla t_{\varepsilon }\right\Vert _{L^{2}(\Gamma _{2\delta
})}\leq C\eta (\varepsilon )\left\Vert F\right\Vert _{L^{2}(\Omega )}.
\label{5.44}
\end{equation}
Taking $v_{\varepsilon }$ as a test function in the variational form of (\ref
{5.43}), we obtain
\begin{eqnarray}
\int_{\Omega }Fv_{\varepsilon }dx &=&\int_{\Omega }A^{\varepsilon }\nabla
t_{\varepsilon }\cdot \nabla v_{\varepsilon }dx=\int_{\Omega }\nabla
t_{\varepsilon }\cdot A^{\varepsilon }\nabla v_{\varepsilon }dx=\left(
F_{\varepsilon },t_{\varepsilon }\right) \label{5.45} \\
&=&-\int_{\Omega }A^{\varepsilon }\nabla \Phi _{\varepsilon }\cdot \nabla
t_{\varepsilon }dx=-\int_{\Gamma _{2\delta }}A^{\varepsilon }\nabla \Phi
_{\varepsilon }\cdot \nabla t_{\varepsilon }dx \notag
\end{eqnarray}
where in (\ref{5.45}) the second equality stems from the fact that the
matrix $A$ is symmetric, and in the last equality we have used the
definition and properties of $\Phi _{\varepsilon }$. Hence, combining (the
first inequality in) (\ref{5.42}) with (\ref{5.44}), we are led to
\begin{eqnarray*}
\left\vert \int_{\Omega }Fv_{\varepsilon }dx\right\vert &\leq &C\left\Vert
\nabla \Phi _{\varepsilon }\right\Vert _{L^{2}(\Omega )}\left\Vert \nabla
t_{\varepsilon }\right\Vert _{L^{2}(\Gamma _{2\delta })}\leq C\delta ^{\frac{
1}{2}}\left\Vert f\right\Vert _{L^{2}(\Omega )}\delta ^{\frac{1}{2}
}\left\Vert F\right\Vert _{L^{2}(\Omega )} \\
&\leq &C\delta \left\Vert f\right\Vert _{L^{2}(\Omega )}\left\Vert
F\right\Vert _{L^{2}(\Omega )}.
\end{eqnarray*}
Since $F$ is arbitrary, it follows that
\begin{equation}
\left\Vert v_{\varepsilon }\right\Vert _{L^{2}(\Omega )}\leq C\delta
\left\Vert f\right\Vert _{L^{2}(\Omega )}\text{ with }\delta =(\eta
(\varepsilon ))^{2}. \label{5.46}
\end{equation}
Combining (\ref{5.46}) with the second estimate in (\ref{5.42}) yields (\ref
{5.40}). This concludes the proof of Theorem \ref{t1.4}.
\end{proof}
\begin{remark}
\label{r5.3}\emph{In the asymptotic periodic setting of the preceding
section, we replace }$\chi _{T}$\emph{\ by }$\chi $\emph{, so that }$
\left\Vert \nabla \chi -\nabla \chi _{\varepsilon ^{-1}}\right\Vert _{2}=0$
\emph{. Moreover, if we look carefully at the proof of (\ref{Eq02}), we
notice that, in view of Remark \ref{r5.2}, we may replace }$\eta
(\varepsilon )$\emph{\ by }$\varepsilon ^{1/2}$\emph{, so that (\ref{Eq02})
becomes }
\begin{equation*}
\left\Vert u_{\varepsilon }-u_{0}\right\Vert _{L^{2}(\Omega )}\leq
C\varepsilon \left\Vert f\right\Vert _{L^{2}(\Omega )}
\end{equation*}
\emph{where }$C=C(d,\alpha ,\Omega )$\emph{. This shows the optimal }$L^{2}$
\emph{-rate of convergence in Theorem \ref{t5.1}.}
\end{remark}
\section{Some examples}
\subsection{Applications of Theorem \protect\ref{t3.2}}
Theorem \ref{t3.2} has been proved under the assumption that the corrector $
\chi _{j}$ lies in $B_{\mathcal{A}}^{2}(\mathbb{R}^{d})$ for each $1\leq
j\leq d$. We provide some examples in which this hypothesis is fulfilled.
\subsubsection{\textbf{The almost periodic setting}}
We assume here that the entries of the matrix $A$ are almost periodic in the
sense of Besicovitch \cite{Besicovitch}. Then this falls into the scope of
Theorem \ref{t1.1} by taking there $\mathcal{A}=AP(\mathbb{R}^{d})$.
Now, we distinguish two special cases.
\textbf{Case 1}. The entries of $A$ are continuous quasi-periodic functions
and satisfy the frequency condition (see \cite{Jikov2}). We recall that a
function $b$ defined on $\mathbb{R}^{d}$ is quasi-periodic if $b(y)=\mathcal{
B}(\omega ^{1}\cdot y,...,\omega ^{m}\cdot y)$ where $\mathcal{B}\equiv
\mathcal{B}(z_{1},...,z_{m})$ is a $1$-periodic function with respect to
each argument $z_{1}$, ..., $z_{m}$. The $\omega ^{1},...,\omega ^{m}$ are
the frequency vectors, and $\omega ^{j}\cdot y=\sum_{i=1}^{d}\omega
_{i}^{j}y_{i}$ is the inner product of vectors in $\mathbb{R}^{d}$. The
frequency condition on the vectors $\omega ^{1},...,\omega ^{m}\in \mathbb{R}
^{d}$ amounts to the following assumption:
\begin{itemize}
\item[(\textbf{FC})] There are $c_{0},\tau >0$ such that
\begin{equation}
\left\vert \sum_{j=1}^{m}k_{j}\omega _{i}^{j}\right\vert \geq
c_{0}\left\vert k\right\vert ^{-\tau }\text{ for all }k\in \mathbb{Z}
^{m}\backslash \{0\}\text{ and }1\leq i\leq d. \label{FC}
\end{equation}
\end{itemize}
It is clear that if (\textbf{FC}) is satisfied, then the vectors $\omega
^{1},...,\omega ^{m}$ are rationally independent, that is,
\begin{equation*}
\sum_{j=1}^{m}k_{j}\omega _{i}^{j}\neq 0\text{ for every }1\leq i\leq d\text{
and all }k\in \mathbb{Z}^{m}\backslash \{0\}\text{.}
\end{equation*}
Then as shown in \cite[Lemma 2.1]{Jikov2}, the corrector problem (\ref{1.6})
possesses a solution which is quasi-periodic. So, it belongs to $B_{AP}^{2}(
\mathbb{R}^{d})$ (the space $B_{\mathcal{A}}^{2}(\mathbb{R}^{d})$ with $
\mathcal{A}=AP(\mathbb{R}^{d})$) since any quasi-periodic function is almost
periodic. We may hence apply Theorem \ref{t3.2}.
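The small-divisor condition (\textbf{FC}) can also be explored numerically. The following Python sketch (added purely for illustration; the frequency vectors, the constants $c_{0}$ and $\tau $, and the truncation level $k_{\max }$ are hypothetical choices of ours) checks the bound (\ref{FC}) over the finite range $\left\vert k_{j}\right\vert \leq k_{\max }$; such a finite test can of course only falsify (\textbf{FC}), never prove it.

```python
import itertools

import numpy as np

def check_frequency_condition(omegas, c0, tau, kmax):
    """Check |sum_j k_j * omega^j_i| >= c0 * |k|^(-tau) for every nonzero
    integer vector k with |k_j| <= kmax and every component 1 <= i <= d.
    This is only a finite truncation of condition (FC)."""
    m, d = len(omegas), len(omegas[0])
    for k in itertools.product(range(-kmax, kmax + 1), repeat=m):
        if all(kj == 0 for kj in k):
            continue
        knorm = np.linalg.norm(k)
        for i in range(d):
            if abs(sum(k[j] * omegas[j][i] for j in range(m))) < c0 * knorm ** (-tau):
                return False
    return True

# Quadratic irrationals are badly approximable, so these frequencies pass
# the truncated test.
omegas = [np.array([1.0, 1.0]), np.array([np.sqrt(2.0), np.sqrt(3.0)])]
print(check_frequency_condition(omegas, c0=0.05, tau=2.0, kmax=20))
```

With rationally dependent frequencies, e.g.\ $\omega ^{1}=(1)$ and $\omega ^{2}=(1/2)$, the sum vanishes at $k=(1,-2)$ and the test fails, consistently with the rational independence observation above.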
\textbf{Case 2}. The entries of $A$ are continuous almost periodic
functions. The assumptions ensuring the existence of a bounded almost
periodic solution to problem (\ref{1.6}) are formulated in \cite[Theorem 1.1
]{Armstrong}. Hence the conclusion of Theorem \ref{t3.2} holds. Notice that
this class of solutions contains continuous quasi-periodic ones (provided
that the assumptions of \cite[Theorem 1.1]{Armstrong} are satisfied) but
also some other almost periodic functions that are not quasi-periodic, as
shown in \cite[Section 4]{Armstrong}.
\subsubsection{\textbf{The asymptotic periodic setting}}
We assume that $A=A_{0}+A_{per}$ where $A_{0}\in L^{2}(\mathbb{R}
^{d})^{d\times d}$ and $A_{per}\in L_{per}^{2}(Y)^{d\times d}$. We are here
in the framework of asymptotic periodic homogenization corresponding to $
\mathcal{A}=\mathcal{B}_{\infty ,per}(\mathbb{R}^{d})=\mathcal{C}_{0}(
\mathbb{R}^{d})\oplus \mathcal{C}_{per}(Y)$. In the proof of Lemma \ref{l1.2},
we showed that the corrector lies in $L_{\infty ,per}^{2}(Y)=L_{0}^{2}(
\mathbb{R}^{d})+L_{per}^{2}(Y)$, which is nothing else but the space $B_{
\mathcal{A}}^{2}(\mathbb{R}^{d})$ with $\mathcal{A}=\mathcal{B}_{\infty
,per}(\mathbb{R}^{d})$. So Theorem \ref{t3.2} applies to this setting.
\begin{remark}
\label{r3.1}\emph{Assume that (i) }$A=A_{0}+A_{ap}$ \emph{with }$A_{0}\in
\mathcal{C}_{0}(\mathbb{R}^{d})^{d\times d}$\emph{\ and }$A_{ap}\in AP(
\mathbb{R}^{d})^{d\times d}$\emph{, and (ii) the entries of }$A_{ap}$\emph{\
either are quasi-periodic and satisfy the frequency condition, or fulfill
the hypotheses of \cite[Theorem 1.1]{Armstrong}. Then we may use the same
trick as in Lemma \ref{l1.2} to show that the corrector lies, in each of
these cases, in }$B_{\infty ,AP}^{2}(\mathbb{R}^{d})=L_{0}^{2}(\mathbb{R}
^{d})+B_{AP}^{2}(\mathbb{R}^{d})$\emph{. Therefore the conclusion of Theorem
\ref{t3.2} holds true.}
\end{remark}
\subsection{Applications of Theorems \protect\ref{t1.4} and \protect\ref
{t5.1}}
Here we give some concrete examples of functions for which Theorems \ref
{t1.4} and \ref{t5.1} hold. Let $I_{d}$ denote the identity matrix in $
\mathbb{R}^{d\times d}$.
\subsubsection{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{The asymptotic periodic setting}}
We assume that $A=A_{0}+A_{per}$ where $A_{0}=b_{c}I_{d}$ with $
b_{c}(y)=\exp (-c\left\vert y\right\vert ^{2})$ for any fixed $c>0$, and $
A_{per}$ is any continuous periodic symmetric matrix function satisfying
the ellipticity condition (\ref{2.2}). In the special $2$-dimensional setting,
we may take $A_{0}=b_{1}I_{2}$ and
\begin{equation*}
A_{per}=\left(
\begin{array}{ll}
a_{1} & 0 \\
0 & a_{2}
\end{array}
\right) \text{ with }a_{1}(y)=4+\cos (2\pi y_{1})+\sin (2\pi y_{2}),\
a_{2}(y)=3+\cos (2\pi y_{1})+\cos (2\pi y_{2}).
\end{equation*}
This special example is used for numerical tests in the next section.
\subsubsection{\textbf{The asymptotic almost periodic setting}}
As in the preceding subsection, we take $A_{0}=b_{c}I_{d}$ with $
b_{c}(y)=\exp (-c\left\vert y\right\vert ^{2})$. We assume that $
A=A_{0}+A_{ap}$ with $A_{ap}$ being any matrix with continuous almost
periodic entries such that $A$ satisfies hypothesis (\ref{1.2}). In the
special $2$-dimensional setting used for the numerical tests below, we take $
A_{0}=b_{1}I_{2}$ and
\begin{equation*}
A_{ap}=\left(
\begin{array}{ll}
a_{1} & 0 \\
0 & a_{2}
\end{array}
\right) \text{ with }a_{1}(y)=4+\sin (2\pi y_{1})+\cos (\sqrt{2}\pi y_{2}),\
a_{2}(y)=3+\sin (\sqrt{3}\pi y_{1})+\cos (\pi y_{2}).
\end{equation*}
\section{Numerical simulations}
Our goal in this section is to check numerically the theoretical results
derived in the previous sections. We consider the finite volume method
with two-point flux approximation; of course, multi-point flux approximation
can be used when the matrix $A$ is non-diagonal. Although we do not
prove analogous results for the discrete problem arising from the numerical
approximation, similar results should be observed when the space
discretization step is small enough (fine grid), since the convergence of the
finite volume method for such elliptic problems is well known \cite{FV}.
\subsection{Finite volume methods}
Finite volume methods are widely applied when the differential equations
are in divergence form. To obtain a finite volume discretization, the domain
$\Omega $ is subdivided into subdomains $(K_{i})_{i\in \mathcal{I}}$, $
\mathcal{I}$ being the corresponding set of indices, called control volumes
or control domains, such that the collection of all those subdomains forms a
partition of $\Omega $. The common feature of all finite volume methods is
to integrate the equation over each control volume $K_{i}$, $i\in \mathcal{I}$,
and apply Gauss's divergence theorem to convert the volume integral to a
surface integral. An advantage of the two-point approximation is that it
provides monotonicity properties, in the form of a local maximum
principle. It is efficient and widely used in industrial simulations. The
main drawback is that the finite volume method with two-point approximation is
applicable only on so-called admissible meshes \cite{FV, Antonio3} and not on
general meshes. This drawback is overcome by finite volume methods with
multi-point flux approximations \cite{B, H}, which allow one to handle anisotropy
in more general geometries.
For illustration, we consider the problem: find $u\in H_{0}^{1}(\Omega )$
such that
\begin{equation}
-\nabla \cdot (A(x)\nabla u)=f\,\,\text{in}\,\,\,\Omega . \label{pb1}
\end{equation}
We assume that $f\in L^{2}(\Omega )$ and that $A$ is diagonal, so that a
rectangular grid is an admissible mesh \cite{FV, Antonio3}. Consider
an admissible mesh $\mathcal{T}$ with the corresponding control volumes $
(K_{i})_{i\in \mathcal{I}}$. We denote by $\mathcal{E}$ the set of edges of
control volumes of $\mathcal{T}$, by $\mathcal{E}_{int}$ the set of interior
edges of control volumes of $\mathcal{T}$, by $u_{i}$ the approximation of $u$
at the center (or at any point) of the control volume $K_{i}\in \mathcal{T}$,
and by $u_{\sigma }$ the approximation of $u$ at the center (or at any point)
of the edge $\sigma \in \mathcal{E}$. For a control volume $K_{i}\in
\mathcal{T}$, we denote by $\mathcal{E}_{i}$ the set of edges of $K_{i}$, so
that $\partial K_{i}=\underset{\sigma \in \mathcal{E}_{i}}{\bigcup }\sigma $.
We integrate \eqref{pb1} over any control volume $K_{i}\in \mathcal{T}$, and
use the divergence theorem to convert the integral over $K_{i}$ to a surface
integral,
\begin{equation*}
-\int_{\partial K_{i}}A(x)\nabla u\cdot \mathbf{n}_{i,\sigma
}ds=\int_{K_{i}}f(x)dx.
\end{equation*}
To obtain the finite volume scheme with two-point approximation, the
following finite difference approximations are needed:
\begin{eqnarray}
\underset{\sigma \in \mathcal{E}_{i}}{\sum }F_{i,\sigma } &\approx
&\int_{\partial K_{i}}A(x)\nabla u\cdot \mathbf{n}_{i,\sigma }ds \\
F_{i,\sigma } &=&-\mathrm{meas}(\sigma )\;C_{i,\sigma }\dfrac{u_{\sigma
}-u_{i}}{d_{i,\sigma }} \\
C_{i,\sigma } &=&|A_{K_{i}}\,\mathbf{n}_{i,\sigma }|,\quad A_{K_{i}}=\dfrac{1
}{\mathrm{meas}(K_{i})}\int_{K_{i}}A(x)dx.
\end{eqnarray}
Here $\mathbf{n}_{i,\sigma }$ is the unit normal vector to $\sigma $
pointing outward from $K_{i}$,
$\mathrm{meas}(\sigma )$ is the Lebesgue measure of the edge $\sigma \in
\mathcal{E}_{i}$ and $d_{i,\sigma }$ is the distance between the center of $
K_{i}$ and the edge $\sigma $. Since the flux is continuous at the interface
of two control volumes $K_{i}$ and $K_{j}$ (this interface is denoted by $i\mid j$), we
have $F_{i,\sigma }=-F_{j,\sigma }$ for $\sigma =i\mid j$, which yields
\begin{equation*}
\left\{
\begin{array}{l}
F_{i,\sigma }=-\tau _{\sigma }\left( u_{j}-u_{i}\right) =-\dfrac{\mu
_{\sigma }\,\mathrm{meas}(\sigma )}{d_{i,j}}\left( u_{j}-u_{i}\right)
,\quad \sigma =i\mid j, \\
\\
\tau _{\sigma }=\mathrm{meas}(\sigma )\dfrac{C_{i,\sigma }C_{j,\sigma }}{
C_{i,\sigma }d_{i,\sigma }+C_{j,\sigma }d_{j,\sigma }}\quad (\text{
transmissibility through }\sigma )
\end{array}
\right.
\end{equation*}
with
\begin{equation*}
\mu _{\sigma }=d_{i,j}\dfrac{C_{i,\sigma }C_{j,\sigma }}{C_{i,\sigma
}d_{i,\sigma }+C_{j,\sigma }d_{j,\sigma }},
\end{equation*}
where $d_{i,j}$ is the distance between the center of $K_{i}$ and the center of $
K_{j}$. We set $d_{i,j}=d_{i,\sigma }$ for $\sigma \in \mathcal{E}_{i}\cap
\partial \Omega $. For $\sigma \subset \partial \Omega $ ($\sigma \notin
\mathcal{E}_{int}$), we also write
\begin{eqnarray*}
F_{i,\sigma } &=&-\tau _{\sigma }\left( u_{\sigma }-u_{i}\right) \\
&=&-\dfrac{\mathrm{meas}(\sigma )\mu _{\sigma }}{d_{i,\sigma }}\left(
u_{\sigma }-u_{i}\right) .
\end{eqnarray*}
The finite volume discretization is therefore given by
\begin{eqnarray}
\underset{\sigma \in \mathcal{E}_{i}}{\sum }F_{i,\sigma } &=&f_{K_{i}},
\label{ode} \\
f_{K_{i}} &=&\int_{K_{i}}f(x)dx.
\end{eqnarray}
Let $h=~$size$(\mathcal{T})=\underset{i\in \mathcal{I}}{\sup }\underset{
(x,y)\in K_{i}^{2}}{\sup }|x-y|$ be the maximum size of $\mathcal{T}$. We
set $u_{h}=(u_{i})_{i\in \mathcal{I}}$, $N_{h}=|\mathcal{I}|$ and $
F=(f_{K_{i}})_{i\in \mathcal{I}}+bc$, $bc$ being the contribution of the
boundary condition\footnote{
Here $bc$ is null as we are looking for a solution in $H_{0}^{1}(\Omega )$.}.
Applying \eqref{ode} over all control volumes, the corresponding finite
volume scheme is given by
\begin{equation}
A_{h}u_{h}=F, \label{fv}
\end{equation}
where $A_{h}$ is an $N_{h}\times N_{h}$ matrix. The structure of $A_{h}$
depends on the dimension $d$ and the geometrical shape of the control
volumes. For diagonal $A$, if $\Omega $ is a rectangular or parallelepiped
domain, any rectangular grid ($d=2$) or parallelepiped grid ($d=3$) is an
admissible mesh and yields a 5-point scheme ($d=2$) or a 7-point scheme ($d=3$)
for the problem \eqref{pb1}. To solve the linear system
\eqref{fv} efficiently, we use the MATLAB linear solver \texttt{bicgstab} with an
ILU(0) preconditioner.
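The assembly just described reduces in one dimension to a simple, fully explicit computation: each interior face carries the transmissibility $\tau_{\sigma}$ built from the two adjacent cell coefficients, and a boundary face uses the half-cell distance $h/2$. The following Python sketch is our own illustration (function name and the midpoint evaluation of $A$ are our choices, not from the paper) of the two-point flux scheme for $-(au')'=f$ on $(0,1)$ with homogeneous Dirichlet data.

```python
import numpy as np

def fv_two_point_1d(a, f, N):
    """Two-point flux FV scheme for -(a u')' = f on (0,1), u(0) = u(1) = 0.

    a, f: callables evaluated at cell centers; N: number of control volumes.
    Returns the cell centers and the discrete solution.
    """
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h                    # cell centers
    aK = a(x)                                       # cell coefficient (midpoint rule)

    # interior-face transmissibilities: harmonic-type averaging of the
    # two neighboring cell coefficients, distances d_{i,sigma} = h/2
    tau_int = 2.0 * aK[:-1] * aK[1:] / (h * (aK[:-1] + aK[1:]))
    tau_bnd_l = 2.0 * aK[0] / h                     # boundary faces: distance h/2
    tau_bnd_r = 2.0 * aK[-1] / h

    A = np.zeros((N, N))
    for i in range(N - 1):                          # flux balance across each face
        A[i, i] += tau_int[i];     A[i, i + 1] -= tau_int[i]
        A[i + 1, i + 1] += tau_int[i];  A[i + 1, i] -= tau_int[i]
    A[0, 0] += tau_bnd_l                            # homogeneous Dirichlet data
    A[-1, -1] += tau_bnd_r

    F = f(x) * h                                    # right-hand side: cell integrals
    return x, np.linalg.solve(A, F)
```

For a constant coefficient and $f=1$ the exact solution is $u(x)=x(1-x)/2$, which the scheme reproduces up to an $O(h^{2})$ error, consistent with the known convergence of the two-point scheme on admissible meshes.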
\subsection{Simulations in dimension 2}
\subsubsection{The Asymptotic periodic setting}
\label{ssect5} For the numerical tests, we consider problems (\ref{1.1}) and
(\ref{1.4}) in dimension $d=2$ with the finite volume scheme
\eqref{fv}. We denote by $I_{d}$ the identity matrix in $\mathbb{R}
^{d\times d}$. We take $A=A_{0}+A_{per}$ with
\begin{align*}
A_{0}& =b_{0}I_{2}\text{ with }b_{0}(x_{1},x_{2})=\exp
(-(x_{1}^{2}+x_{2}^{2}))\text{ and } \\
A_{per}& =\left(
\begin{array}{ll}
b_{1} & 0 \\
0 & b_{2}
\end{array}
\right) \text{ with }b_{1}=4+\cos (2\pi x_{1})+\sin (2\pi x_{2}),\
b_{2}=3+\cos (2\pi x_{1})+\cos (2\pi x_{2}).
\end{align*}
The right-hand side function $f$ is given by $f=1$.
The computational domain is $\Omega =(-1,1)^{2}$.
We take $\varepsilon =1/N$ for some integer $N$. We will choose $N$ in the
set $\{2,3,4,5,6\}$.
The aim in this section is to compute numerically the ``exact
solution'' $u_{\varepsilon }$ (for a fixed $
\varepsilon >0$) coming from the finite volume scheme with small $h$, and
compare it with its first order asymptotic periodic approximation $
v_{\varepsilon }(x)=u_{0}(x_{1},x_{2})+\varepsilon \chi (\frac{x_{1}}{
\varepsilon },\frac{x_{2}}{\varepsilon })\cdot \nabla u_{0}(x_{1},x_{2})$.
For this purpose, the strategy is carried out as follows:
\begin{enumerate}
\item We compute the exact solution of (\ref{1.1}) with our finite volume
scheme on a rectangular fine mesh of size $h>0$, with $h$ sufficiently small
to ensure that the discretization error is much smaller than $\varepsilon$,
which is the order of the error associated to the homogenization
approximation (see either Proposition \ref{p5.1} or Theorem \ref{t5.1}).
\item We compute the corrector functions $\chi_{1}$ and $\chi_{2}$
associated to the respective directions $e_{1}=(1,0)$ and $e_{2}=(0,1)$. To
this end, we rather consider their approximations by the finite volume
scheme \eqref{fv}, which are solutions to Eq. (\ref{3.3}), and we perform
this computation on the domain $Q_{6}=(-6,6)^{2}$ with Dirichlet boundary
conditions (as in (\ref{3.3})). We also compute their gradients $
\nabla\chi_{1}$ and $\nabla\chi_{2}$. Here we take the mesh size $h=8\times
10^{-3}$ independent of $\varepsilon$.
\item With $\nabla \chi _{1}$ and $\nabla \chi _{2}$ computed as above, we
compute the homogenized matrix $A_{6}^{\ast }$ as in (\ref{eq5}), namely
\begin{equation*}
A_{6}^{\ast }=\left( \frac{1}{12}\right) ^{2}\int_{Q_{6}}A(x)(I_{2}+\nabla
\chi (x))dx,
\end{equation*}
where $\chi =(\chi _{1},\chi _{2})$, so that $\nabla \chi $ is the
square matrix with entries $c_{ij}=\frac{\partial \chi _{j}}{\partial x_{i}}$.
\item With $A_{6}^{\ast }$ now being denoted by $A^{\ast }$, we compute the
exact solution $u_{0}$ of (\ref{1.4}).
\item Finally we compute the first order approximation $v_{
\varepsilon}(x)=u_{0}(x)+\varepsilon \chi (x/\varepsilon )\cdot \nabla
u_{0}(x)$ and we compare it to the exact solution $u_{\varepsilon }$, which
has been computed at step 1.
\end{enumerate}
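The full pipeline above requires a 2D PDE solver for steps 1, 2 and 4. As a minimal hedged illustration of steps 2 and 3 only (our own one-dimensional reduction, not part of the paper's computation), recall that for a purely periodic coefficient in one dimension the cell problem is solvable in closed form and the homogenized coefficient is the harmonic mean of $a$ over the period. For $a(y)=4+\cos(2\pi y)$, a 1D analogue of the coefficient $a_{1}$ above, one has $\int_{0}^{1}\frac{dy}{4+\cos(2\pi y)}=\frac{1}{\sqrt{15}}$, hence $A^{\ast}=\sqrt{15}\approx 3.8730$.

```python
import numpy as np

def homogenized_coefficient_1d(a_per, n=100000):
    """1D periodic homogenization: A* is the harmonic mean of a_per over one period.

    The midpoint rule on a smooth periodic integrand converges very fast,
    so a moderate n already gives the harmonic mean to near machine precision.
    """
    y = (np.arange(n) + 0.5) / n            # midpoints of a uniform grid on (0, 1)
    return 1.0 / np.mean(1.0 / a_per(y))

# 1D analogue of the periodic coefficient used in the tests (our choice)
a_star = homogenized_coefficient_1d(lambda y: 4.0 + np.cos(2.0 * np.pi * y))
# closed form: (int_0^1 dy / (4 + cos(2*pi*y)))^{-1} = sqrt(15)
```

This gives a cheap sanity check of the corrector-based computation of $A^{\ast}$ against an exact value, which is not available in the genuinely two-dimensional setting of the paper.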
The goal is to check the convergence result in Theorem \ref{t5.1} given by
\eqref{5.8}, but with the numerical solution obtained by the finite volume
method. Indeed we want to evaluate the following error
\begin{equation}
Err(\varepsilon )=\dfrac{\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi
^{\varepsilon }\cdot \nabla u_{0}\Vert _{H^{1}(\Omega )}}{\Vert u_{0}\Vert
_{H^{2}(\Omega )}}=\dfrac{\Vert u_{\varepsilon }-v_{\varepsilon }\Vert
_{H^{1}(\Omega )}}{\Vert u_{0}\Vert _{H^{2}(\Omega )}}. \label{error}
\end{equation}
As we already mentioned, $u_{0}$, $u_{\varepsilon }$ and $v_{\varepsilon }$
are computed numerically using the finite volume scheme for a fixed $h=8\times
10^{-3}$ independent of $\varepsilon $. All the norms involved in
\eqref{error} are computed using their discrete forms \cite{FV, Antonio3}.
The coefficients of $A$ and $f$ are $\mathcal{C}^{\infty }(\Omega )$, so the
corresponding solutions $u_{0}$, $u_{\varepsilon }$ and $v_{\varepsilon }$
should be regular enough. Their graphs are given in Figure \ref{FIG03}. As
we can observe in Table \ref{phi}, the error decreases when $\varepsilon $
decreases, and therefore the convergence of $u_{\varepsilon }$ and $
v_{\varepsilon }$ towards $u_{0}$ when $\varepsilon \rightarrow 0$ is
ensured. We can also observe that the corrector plays a key role, as the graph of
$u_{\varepsilon }$ is close to that of $v_{\varepsilon }$. The numerical
value of $A_{6}^{\ast }\equiv A^{\ast }$ obtained and used for $u_{0}$ and $
v_{\varepsilon }$ is given by
\begin{equation*}
A_{6}^{\ast }=\left(
\begin{array}{ll}
3.895923 & 0.00001 \\
0 & 2.849959
\end{array}
\right) .
\end{equation*}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
$1/\varepsilon $ & 2 & 3 & 4 & 5 & 6 \\ \hline
Err$(\varepsilon )$ & 0.5298 & 0.1382 & 0.0620 & 0.0577 & 0.0573 \\ \hline
\end{tabular}
\end{center}
\caption{$Err(\varepsilon )$ with the corresponding $1/\varepsilon $ for a
fixed $h=8\times 10^{-3}$ independent of $\varepsilon $.}
\label{phi}
\end{table}
\begin{figure}
\caption{The graphs of $u_{\varepsilon }$, $v_{\varepsilon }$ and $u_{0}$.}
\label{FIG03}
\end{figure}
\subsubsection{The asymptotic almost periodic setting}
Here we take $A=A_{0}+A_{ap}$ with
\begin{align*}
A_{0}& =b_{0}I_{2}\text{ with }b_{0}(x_{1},x_{2})=\exp
(-(x_{1}^{2}+x_{2}^{2}))\text{ and } \\
A_{ap}& =\left(
\begin{array}{ll}
b_{1} & 0 \\
0 & b_{2}
\end{array}
\right) \text{ with }b_{1}=4+\sin (2\pi x_{1})+\cos (\sqrt{2}\pi x_{2}),\
b_{2}=3+\sin (\sqrt{3}\pi x_{1})+\cos (\pi x_{2}).
\end{align*}
The right-hand side function $f$ is given by $f(x_{1},x_{2})=\cos (\pi
x_{1})\cos (\sqrt{5}\pi x_{2})$. The computational domain is as above, that
is, $\Omega =(-1,1)^{2}$. We follow the same steps as above. The
corresponding value of $A_{6}^{\ast }$ is
\begin{equation*}
A_{6}^{\ast }=\left(
\begin{array}{ll}
4.0118 & 0.0002 \\
0.0032 & 3.0206
\end{array}
\right) .
\end{equation*}
We solve (\ref{1.4}) using the finite volume method with multi-point flux
approximation \cite{B, H}. From Table \ref{phi2} and Figure \ref{FIG033}, we
can draw the same conclusions as in Section \ref{ssect5}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
$1/\varepsilon $ & 2 & 3 & 4 & 5 & 6 \\ \hline
Err$(\varepsilon )$ & 0.24 & 0.1520 & 0.1284 & 0.0768 & 0.0265 \\ \hline
\end{tabular}
\end{center}
\caption{$Err(\varepsilon )$ with the corresponding $1/\varepsilon $ for a
fixed $h=8\times 10^{-3}$ independent of $\varepsilon $.}
\label{phi2}
\end{table}
\begin{figure}
\caption{The graphs of $u_{\varepsilon }$, $v_{\varepsilon }$ and $u_{0}$.}
\label{FIG033}
\end{figure}
\end{document}
\begin{document}
\title{Fibonacci Numbers, Statistical Convergence and Applications}
\author{Murat Kiri\c{s}ci*, Ali Karaisa}
\address{[Murat Kiri\c{s}ci]Department of Mathematical Education, Hasan Ali Y\"{u}cel Education Faculty,
Istanbul University, Vefa, 34470, Fatih, Istanbul, Turkey \vskip 0.1cm }
\email{[email protected], [email protected]}
\address{[Ali Karaisa]Department of Mathematics-Computer Sciences, Necmettin Erbakan University,
Meram Campus, 42090 Meram, Konya, Turkey \vskip 0.1cm }
\email{[email protected]; [email protected]}
\thanks{*Corresponding author.}
\begin{abstract}
The purpose of this paper is twofold. First, a new definition of statistical convergence based on the Fibonacci sequence is given and some fundamental properties
of this statistical convergence are examined. Second, approximation theory is treated as an application of this statistical convergence.
\end{abstract}
\keywords{Korovkin type approximation theorems, statistical convergence, Fibonacci numbers, Fibonacci matrix, positive linear operator, density}
\subjclass[2010]{11B39, 41A10, 41A25, 41A36, 40A30, 40G15}
\maketitle
\pagestyle{plain} \makeatletter
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{pr}[thm]{Proposition}
\newtheorem{exmp}[thm]{Example}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{defin}[thm]{Definition}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{rem}[thm]{Remark}
\numberwithin{equation}{section}
\numberwithin{figure}{section}
\section{Introduction}
\subsection{Densities and Statistical Convergence}
In number theory, there are many different definitions of density. The most popular
of these is asymptotic density, but asymptotic density does not exist for all sequences. New densities
have been defined to fill these gaps and to serve different purposes.\\
Asymptotic density is one way to measure how large a subset of the set of natural numbers is. Intuitively,
the positive integers seem far more numerous than the perfect squares, since every perfect square is
a positive integer and many other positive integers exist besides. However, the set of positive integers is not in fact larger
than the set of perfect squares: both sets are infinite and countable and can therefore be put in one-to-one correspondence.
Nevertheless, as one goes through the natural numbers, the squares become increasingly scarce. It is precisely here that
asymptotic density makes this intuition precise.\\
Let $A$ be a subset of the positive integers. Consider the interval $[1,n]$ and select an integer in this interval at random. Then
the ratio of the number of elements of $A$ in $[1,n]$ to the total number of elements in $[1,n]$ approximates the probability that
the selected integer belongs to $A$. If this probability tends to some limit as
$n\rightarrow \infty$, then this limit is called the asymptotic density of the set $A$. This suggests that the asymptotic
density is a kind of probability of choosing a number from the set $A$.\\
Now, we give some definitions and properties of asymptotic density.\\
The set of positive integers will be denoted by $\mathbb{Z^{+}}$. Let $A$ and $B$ be subsets of $\mathbb{Z}^{+}$. If the symmetric
difference $A\Delta B$ is finite, then we say that $A$ is asymptotically equal to $B$ and write $A\sim B$. Freedman and Sember
introduced the concept of a lower asymptotic density and defined a concept of convergence in density in \cite{FreeSem}.
\begin{defin}\cite{FreeSem}
Let $f$ be a function defined for all sets of natural numbers which takes values in the interval $[0,1]$.
Then the function $f$ is said to be a lower asymptotic density if the following conditions hold:
\begin{itemize}
\item[i.] $f(A)=f(B)$, if $A\sim B$;
\item[ii.] $f(A)+f(B)\leq f(A\cup B)$, if $A\cap B=\emptyset$;
\item[iii.] $f(A)+f(B)\leq 1+ f(A\cap B)$, for all $A$ and $B$;
\item[iv.] $f(\mathbb{Z^{+}})=1$.
\end{itemize}
\end{defin}
We can define the upper density based on the definition of lower density as follows:\\
Let $f$ be any density. Then, for any set of natural numbers $A$, the function $\overline{f}$ defined by
$\overline{f}(A)=1-f(\mathbb{Z}^{+} \backslash A)$ is called the upper density associated with $f$.\\
Consider a set $A\subset \mathbb{Z}^{+}$. If $f(A)=\overline{f}(A)$, then we say that the set $A$ has natural density
with respect to $f$. The term asymptotic density is often used for the function
\begin{eqnarray*}
d(A)=\liminf_{n\rightarrow\infty}\frac{A(n)}{n},
\end{eqnarray*}
where $A\subset \mathbb{N}$ and $A(n)=\sum_{a\leq n, a\in A}1$ is the number of elements
of $A$ not exceeding $n$. The natural density of $A$ is given by $d(A)=\lim_{n}n^{-1}A(n)$ whenever this limit exists.\\
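As a small numerical illustration (our own example, not from the paper), the set of perfect squares has natural density zero: here $A(n)=\lfloor\sqrt{n}\rfloor$, so $A(n)/n\to 0$.

```python
import math

def counting_function(A_indicator, n):
    """A(n): the number of k in {1,...,n} belonging to the set given by A_indicator."""
    return sum(1 for k in range(1, n + 1) if A_indicator(k))

# indicator of the perfect squares
is_square = lambda k: math.isqrt(k) ** 2 == k

# A(n)/n for the perfect squares equals floor(sqrt(n))/n, which tends to 0
ratios = [counting_function(is_square, n) / n for n in (100, 10000)]
# ratios == [0.1, 0.01]
```

The same routine applied to, e.g., the even numbers would give ratios tending to $1/2$, a set of positive density.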
The study of statistical convergence was initiated by Fast \cite{Fast}. Schoenberg \cite{Sch} studied statistical convergence as
a summability method and listed some of the elementary properties of statistical convergence. Both of these mathematicians
noted that if a bounded sequence is statistically convergent to $L$, then it is Ces\`{a}ro summable to $L$. Statistical convergence
also arises as an example of ``convergence in density'' as introduced by Buck \cite{Buck}. In \cite{Zygm}, Zygmund called this concept
``almost convergence'' and established a relation between statistical convergence and strong summability. The idea of statistical convergence
has been studied in different branches of mathematics, such as number theory \cite{ErTe}, trigonometric series \cite{Zygm}, summability theory \cite{FreeSem},
measure theory \cite{Mil} and Hausdorff locally convex topological vector spaces \cite{Maddox}. The concept of $\alpha\beta-$statistical convergence was introduced and studied by Aktu\v{g}lu \cite{Aktuglu}. In \cite{VatanAli}, Karakaya and Karaisa extended the concept of $\alpha\beta-$statistical convergence; they also introduced the concepts of weighted $\alpha\beta-$statistical convergence of order $\gamma$, weighted $\alpha\beta-$summability of order $\gamma$ and strongly weighted $\alpha\beta-$summable sequences of order $\gamma$. \\
\begin{defin}
A sequence $x=(x_{k})$ of real numbers is statistically convergent to $L$ provided that for every $\varepsilon >0$ the set
$\{n\in\mathbb{N}: |x_{n}-L|\geq \varepsilon\}$ has natural density zero. The set of all statistically convergent sequences is denoted by $S$.
In this case, we write $S-\lim x=L$ or $x_{k}\rightarrow L(S)$.\\
\end{defin}
\begin{defin}\cite{Fri}
The sequence $x=(x_{k})$ is a statistically Cauchy sequence if for every $\varepsilon >0$ there is a positive integer $N=N(\varepsilon)$ such
that
\begin{eqnarray*}
d\left(\{n\in\mathbb{N}: |x_{n}-x_{N(\varepsilon)}|\geq \varepsilon\}\right)=0.
\end{eqnarray*}
\end{defin}
It can be seen from the definition that statistical convergence is a generalization of the usual notion of convergence that parallels
the usual theory of convergence.\\
Fridy \cite{Fri} introduced a convenient notation: if $x=(x_{n})$ is a sequence that satisfies some property $P$
for all $n$ except a set of natural density zero, then we say that $x=(x_{n})$ satisfies $P$ for ``almost all $n$'', abbreviated ``a.a.n''.
In \cite{Fri}, Fridy proved the following theorem:
\begin{thm}
The following statements are equivalent:
\begin{itemize}
\item[i.] $x$ is a statistically convergent sequence;
\item[ii.] $x$ is a statistically Cauchy sequence;
\item[iii.] $x$ is a sequence for which there is a convergent sequence $y$
such that $x_{n}=y_{n}$ for a.a.n.
\end{itemize}
\end{thm}
\subsection{Fibonacci Numbers and Fibonacci Matrix}
The numbers in the sequence
\begin{eqnarray*}
1,1,2,3,5,8,13,21,34,55,89,144,\ldots
\end{eqnarray*}
are called \emph{Fibonacci numbers}, and this number sequence
is the \emph{Fibonacci sequence} \cite{Koshy}. The Fibonacci sequence is one of the best-known and most interesting number sequences,
and it still continues to be of interest to mathematicians, because it is an important and useful
tool for expanding the mathematical horizon of many mathematicians.\\
\begin{defin}
The Fibonacci numbers are the sequence of numbers $(f_{n})$ for $n=1,2,\ldots$ defined by the linear recurrence equation
\begin{eqnarray*}
f_{n}=f_{n-1}+f_{n-2}, \quad n\geq 2.
\end{eqnarray*}
\end{defin}
By this definition, the first two numbers in the Fibonacci sequence are either $1$ and $1$ (or $0$ and $1$),
depending on the chosen starting point of the sequence, and each subsequent number is the sum of the previous two. That is,
we can choose $f_{1}=f_{2}=1$ or $f_{0}=0$, $f_{1}=1$. \\
The Fibonacci sequence first appeared in the book \emph{Liber Abaci} of Fibonacci, written in 1202. However,
the sequence has an older history: it had been described earlier as the Virahanka numbers in Indian mathematics \cite{GooSu}.
In \emph{Liber Abaci} the sequence starts with $1$; nowadays the sequence begins either with $f_{0}=0$ or with $f_{1}=1$.\\
Some of the fundamental properties of Fibonacci numbers are given as follows:
\begin{eqnarray*}
&&\lim_{n\rightarrow\infty}\frac{f_{n+1}}{f_{n}}=\frac{1+\sqrt{5}}{2}=\alpha, \quad \textrm{(golden ratio)}\\
&&\sum_{k=0}^{n}f_{k}=f_{n+2}-1 \quad (n\in\mathbb{N}),\\
&&\sum_{k}\frac{1}{f_{k}} \quad \textrm{converges},\\
&&f_{n-1}f_{n+1}-f_{n}^{2}=(-1)^{n+1} \quad (n\geq 1) \quad \textrm{(Cassini formula)}.
\end{eqnarray*}
Substituting $f_{n+1}=f_{n}+f_{n-1}$ into Cassini's formula yields $f_{n-1}^{2}+f_{n}f_{n-1}-f_{n}^{2}=(-1)^{n+1}$.\\
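The identities listed above are easy to check numerically. The sketch below is our own illustration (with the convention $f_{1}=f_{2}=1$); it verifies the golden-ratio limit, the partial-sum identity, the convergence of the reciprocal series, and the fact that $f_{n-1}f_{n+1}-f_{n}^{2}$ alternates between $+1$ and $-1$ (the exponent in the sign depends on the indexing convention, so we only check the modulus).

```python
def fib(n):
    """First n Fibonacci numbers with the convention f_1 = f_2 = 1."""
    f = [1, 1]
    while len(f) < n:
        f.append(f[-1] + f[-2])
    return f

f = fib(40)

alpha = (1 + 5 ** 0.5) / 2                  # golden ratio
ratio = f[-1] / f[-2]                       # f_40 / f_39, very close to alpha

partial = sum(f[:10])                       # sum_{k=1}^{10} f_k = f_12 - 1 = 143
reciprocal_sum = sum(1.0 / x for x in f)    # partial sum of the convergent series

# Cassini: f_{n-1} f_{n+1} - f_n^2 has modulus 1 and alternates in sign
cassini = [f[i - 1] * f[i + 1] - f[i] ** 2 for i in range(1, 11)]
```

The reciprocal series converges to roughly $3.36$ (the reciprocal Fibonacci constant), consistent with the convergence claim above.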
Let $f_{n}$ be the $n$th Fibonacci number for every $n\in \mathbb{N}$. Then, we define the infinite matrix $\widehat{F}=(\widehat{f}_{nk})$ \cite{Kara1} by
\begin{eqnarray*}
\widehat{f}_{nk}=\left\{\begin{array}{ccl}
-\frac{f_{n+1}}{f_{n}}&, & (k=n-1)\\
\frac{f_{n}}{f_{n+1}}&, & (k=n)\\
0&, & (0\leq k < n-1 \ \textrm{or} \ k>n).
\end{array}\right.
\end{eqnarray*}
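A finite section of $\widehat{F}$ is easy to build and exercise. In the sketch below (our own illustration, again with the convention $f_{1}=f_{2}=1$) we apply the $N\times N$ corner of $\widehat{F}$ to the test sequence $x_{n}=f_{n}f_{n+1}$: the definition gives $(\widehat{F}x)_{n}=f_{n}^{2}-f_{n+1}f_{n-1}$ for $n\geq 2$, which by Cassini's formula has modulus one.

```python
import numpy as np

def fib(n):
    f = [1, 1]
    while len(f) < n:
        f.append(f[-1] + f[-2])
    return f

def fhat_section(N):
    """Top-left N x N corner of the Fibonacci difference matrix F-hat (rows/cols 1..N)."""
    f = fib(N + 1)                                   # f[k-1] holds f_k
    F = np.zeros((N, N))
    for n in range(1, N + 1):
        F[n - 1, n - 1] = f[n - 1] / f[n]            # diagonal: f_n / f_{n+1}
        if n >= 2:
            F[n - 1, n - 2] = -f[n] / f[n - 1]       # subdiagonal: -f_{n+1} / f_n
    return F

N = 8
F = fhat_section(N)
f = fib(N + 1)
x = np.array([f[i] * f[i + 1] for i in range(N)])    # x_n = f_n f_{n+1}
y = F @ x                                            # |y_n| = 1 by Cassini's formula
```

This band structure (one diagonal and one subdiagonal) is what makes $\widehat{F}$ a difference-type matrix: each output term mixes a sequence entry with its predecessor.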
The Fibonacci sequence was first used in the theory of sequence spaces by Kara and Ba\c{s}ar{\i}r \cite{KaraBas}. Afterwards, Kara \cite{Kara1} defined the Fibonacci
difference matrix $\widehat{F}$ by using the Fibonacci sequence $(f_{n})$ for $n\in \{0,1,\ldots\}$ and introduced new sequence spaces
related to the matrix domain of $\widehat{F}$. \\
Following \cite{KaraBas} and \cite{Kara1}, many high-quality papers based on the Fibonacci matrix have been produced by many mathematicians (\cite{AloMur}, \cite{BasBasKara}, \cite{Can1}, \cite{CanKn}, \cite{CanKara}, \cite{CanKay}, \cite{DemKaraBas}, \cite{KaraBasMur}, \cite{KaraSer}, \cite{KaraHan}, \cite{UcBas}).
\subsection{Approximation Theory}
Korovkin type approximation theorems are practical tools for checking
whether a given sequence $(A_{n})_{n\geq1}$ of positive linear
operators on the space $C[a,b]$ of all continuous functions on the real
interval $[a,b]$ is an approximation process. That is, these
theorems present a variety of test functions which ensure that the
approximation property holds on the whole space if it holds for
them. Such a property was established by Korovkin \cite{Korovkin} in
1953 for the functions $1$, $x$ and $x^{2}$ in the space $C[a,b]$, as
well as for the functions $1$, $\cos$ and $\sin$ in the space of all
continuous $2\pi-$periodic functions on the real line.
Before the work of Gadjiev and Orhan \cite{GadOr}, there was no study
relating statistical convergence and approximation theory. In
\cite{GadOr}, a Korovkin type approximation theorem is proved by
using the idea of statistical convergence. Further examples of
studies combining approximation theory and statistical convergence can be
found in \cite{Aktuglu}, \cite{BeMo}, \cite{EdMoNo}, \cite{EdMuKh},
\cite{VatanAli}, \cite{Mohi}, \cite{MuAlo}, \cite{MuAlo1},
\cite{MuVaErG}.
\section{Fibonacci Type Statistical Convergence}
Now, we give the general Fibonacci sequence space $X(\widehat{F})$ as follows \cite{Kara1}, \cite{KaraBas}:
let $X$ be any sequence space and $k\in \mathbb{N}$. Then,
\begin{eqnarray*}
X(\widehat{F})=\left\{x=(x_{k})\in \omega: \left(\widehat{F}x_{k}\right)\in X\right\}.
\end{eqnarray*}
It is clear that if $X$ is a linear space, then $X(\widehat{F})$ is also a linear space.
Kara proved that if $X$ is a Banach space, then, $X(\widehat{F})$ is also a Banach space with the norm
\begin{eqnarray*}
\|x\|_{X(\widehat{F})}=\|\widehat{F}x\|_{X}.
\end{eqnarray*}
Now, we give a lemma which is used in the proof of Theorem \ref{thm1}; its proof is trivial.\\
\begin{lem}\label{lem1}
If $X\subset Y$, then $X(\widehat{F}) \subset Y(\widehat{F})$.
\end{lem}
\begin{thm}\label{thm1}
Suppose that $X$ is a Banach space and $A$ is a closed subset of $X$. Then,
$A(\widehat{F})$ is also closed in $X(\widehat{F})$.
\end{thm}
\begin{proof}
Since $A$ is a closed subset of $X$, from Lemma \ref{lem1} we can write $A(\widehat{F}) \subset X(\widehat{F})$.
Let $\overline{A(\widehat{F})}$ and $\overline{A}$ denote the closures of $A(\widehat{F})$ and $A$, respectively. To prove the theorem, we must show that $\overline{A(\widehat{F})}=\overline{A}(\widehat{F})$.\\
Firstly, we take $x\in \overline{A(\widehat{F})}$. Then, by Theorem 1.4.6 of \cite{Kreyszig}, there exists a sequence $(x^{n})\subset A(\widehat{F})$
such that $\|x^{n}-x\|_{\widehat{F}}\rightarrow 0$ as $n\rightarrow\infty$. Thus, $\|(x^{n}_{k})-(x_{k})\|_{\widehat{F}}\rightarrow 0$ as
$n\rightarrow\infty$, so that
\begin{eqnarray*}
\sum_{i=1}^{m}\left|x_{i}^{n}-x_{i}\right|+\left\|\widehat{F}(x_{k}^{n})-\widehat{F}(x_{k})\right\|\rightarrow 0
\end{eqnarray*}
as $n\rightarrow\infty$. Therefore, $\widehat{F}x \in \overline{A}$ and so $x\in \overline{A}(\widehat{F})$.\\
Conversely, if we take $x\in \overline{A}(\widehat{F})$, then $\widehat{F}x\in\overline{A}=A$, since $A$ is closed, and hence $x\in A(\widehat{F})$. Thus $\overline{A(\widehat{F})}=\overline{A}(\widehat{F})$, and $A(\widehat{F})$ is a closed subset of $X(\widehat{F})$.
\end{proof}
From this theorem, we can give the following corollary:
\begin{cor}
If $X$ is a separable space, then, $X(\widehat{F})$ is also a separable space.
\end{cor}
\begin{defin}
A sequence $x=(x_{k})$ is said to be Fibonacci statistically
convergent (or $\widehat{F}-$statistically convergent) if there is
a number $L$ such that for every $\epsilon> 0$ the set
$K_{\epsilon}(\widehat{F}):=\{k\leq
n:|\widehat{F}x_{k}-L|\geq\epsilon\}$ has natural density zero,
i.e., $d(K_{\epsilon}(\widehat{F}))=0$. That is,
\begin{eqnarray*}
\lim_{n \to \infty}\frac{1}{n}\left|\{k\leq n:
|\widehat{F}x_{k}-L|\geq \epsilon \}\right|=0.
\end{eqnarray*}
In this case we write $d(\widehat{F})-\lim x_{k}=L$ or $x_{k}\rightarrow L(S(\widehat{F}))$. The set of all
$\widehat{F}-$statistically convergent sequences will be denoted by
$S(\widehat{F})$. In the case $L=0$, we will write
$S_{0}(\widehat{F})$.
\end{defin}
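To see the definition in action, note that the transform $(\widehat{F}x)_{k}=\frac{f_{k}}{f_{k+1}}x_{k}-\frac{f_{k+1}}{f_{k}}x_{k-1}$ can be evaluated through the ratios $r_{k}=f_{k}/f_{k+1}$, which satisfy $r_{k+1}=1/(1+r_{k})$ and tend to $1/\alpha$; for the constant sequence $x\equiv 1$ this gives $\widehat{F}x_{k}\to 1/\alpha-\alpha=-1$. The hedged sketch below (our own example, not from the paper) spoils a constant sequence on the perfect squares, a density-zero set, and checks empirically that the exception set $K_{\epsilon}(\widehat{F})$ remains sparse, consistent with $\widehat{F}$-statistical convergence to $L=-1$.

```python
import math

def fhat_transform(x):
    """(F-hat x)_k = (f_k/f_{k+1}) x_k - (f_{k+1}/f_k) x_{k-1}, via r_k = f_k/f_{k+1}.

    Working with the ratios avoids the exponential growth of the f_k themselves.
    """
    t, r, x_prev = [], 1.0, None          # r_1 = f_1/f_2 = 1
    for xk in x:
        t.append(r * xk if x_prev is None else r * xk - x_prev / r)
        x_prev = xk
        r = 1.0 / (1.0 + r)               # r_{k+1} = 1 / (1 + r_k)
    return t

n = 20000
# constant sequence spoiled on the perfect squares (a density-zero set)
x = [1.0 + (k if math.isqrt(k) ** 2 == k else 0.0) for k in range(1, n + 1)]
t = fhat_transform(x)

L, eps = -1.0, 0.1
bad = sum(1 for v in t if abs(v - L) >= eps)
density = bad / n                          # small: the exception set is sparse
```

Each spike at $k=m^{2}$ disturbs $\widehat{F}x$ at most at two indices ($k$ and $k+1$), so the relative size of the exception set up to $n$ is of order $2\sqrt{n}/n\to 0$.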
\begin{defin}\label{defin2}
Let $x=(x_{k})\in \omega$. The sequence $x$ is said to be $\widehat{F}-$statistically Cauchy if
there exists a number $N=N(\varepsilon)$ such that
\begin{eqnarray*}
\lim_{n \to \infty}\frac{1}{n}\left|\{k\leq n:
|\widehat{F}x_{k}-\widehat{F}x_{N}|\geq \varepsilon \}\right|=0
\end{eqnarray*}
for every $\varepsilon >0$.
\end{defin}
\begin{thm}
If $x$ is a $\widehat{F}-$statistically convergent sequence, then $x$ is a $\widehat{F}-$statistically Cauchy sequence.
\end{thm}
\begin{proof}
Let $\varepsilon >0$ and assume that $x_{k}\rightarrow L(S(\widehat{F}))$. Then, $|\widehat{F}x_{k}-L|<\varepsilon / 2$
for almost all $k$. If $N$ is chosen so that $|\widehat{F}x_{N}-L|<\varepsilon / 2$, then we have
$|\widehat{F}x_{k}-\widehat{F}x_{N}|\leq |\widehat{F}x_{k}-L|+|\widehat{F}x_{N}-L|<\varepsilon / 2 + \varepsilon / 2 =\varepsilon$
for almost all $k$. This means that $x$ is a $\widehat{F}-$statistically Cauchy sequence.
\end{proof}
\begin{thm}
If $x$ is a sequence for which there is a $\widehat{F}-$statistically convergent sequence $y$ such that
$\widehat{F}x_{k}=\widehat{F}y_{k}$ for almost all $k$, then $x$ is a $\widehat{F}-$statistically convergent sequence.
\end{thm}
\begin{proof}
Suppose that $\widehat{F}x_{k}=\widehat{F}y_{k}$ for almost all $k$ and $y_{k}\rightarrow L(S(\widehat{F}))$. Let $\varepsilon >0$.
For each $n$, $\{k \leq n: |\widehat{F}x_{k}-L|\geq \varepsilon\}\subseteq \{k \leq n: \widehat{F}x_{k}\neq \widehat{F}y_{k}\} \cup
\{k \leq n: |\widehat{F}y_{k}-L|\geq \varepsilon\}$. Since $y_{k}\rightarrow L(S(\widehat{F}))$, the latter set has natural density zero.
Therefore, since $\widehat{F}x_{k}=\widehat{F}y_{k}$ for almost all $k$,
\begin{eqnarray*}
\lim_{n}\frac{1}{n}\left|\{k\leq n: |\widehat{F}x_{k}-L|\geq \varepsilon\}\right|\leq
\lim_{n}\frac{1}{n}\left|\{k\leq n: \widehat{F}x_{k}\neq \widehat{F}y_{k} \}\right|+\lim_{n}\frac{1}{n}\left|\{k\leq n: |\widehat{F}y_{k}-L|\geq \varepsilon \}\right|=0.
\end{eqnarray*}
Hence $x_{k}\rightarrow L(S(\widehat{F}))$.
\end{proof}
\begin{defin}\label{defin1}\cite{FriOrh}
A sequence $x=(x_{k})$ is said to be statistically bounded if there exists some $L\geq 0$ such that
\begin{eqnarray*}
d\left(\{k: |x_{k}|>L\}\right)=0, \quad \textrm{i.e.,} \quad |x_{k}|\leq L \quad \textrm{for almost all } k.
\end{eqnarray*}
\end{defin}
By $m_{0}$, we will denote the linear space of all statistically bounded sequences. Bounded sequences
are obviously statistically bounded, as the empty set has zero natural density. However, the converse
is not true. For example, consider the sequence
\begin{eqnarray*}
x_{k} = \left\{ \begin{array}{ccl}
k&, & (\textrm{$k$ is a square}),\\
0&, & (\textrm{$k$ is not a square}).
\end{array} \right.
\end{eqnarray*}
Clearly $(x_{k})$ is not a bounded sequence. However, $d(\{k: |x_{k}|>1/2\})=0$, as the set of squares has zero natural density, and hence
$(x_{k})$ is statistically bounded \cite{BharGup}.\\
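The density claim in this example can be checked directly: among the first $n$ indices there are exactly $\lfloor\sqrt{n}\rfloor$ squares, so the empirical density of the exceptional set is $\lfloor\sqrt{n}\rfloor/n\rightarrow 0$. A minimal sketch in Python:

```python
import math

def density_of_squares(n):
    # |{k <= n : k is a square}| / n = floor(sqrt(n)) / n
    return math.isqrt(n) / n

for n in (10**2, 10**4, 10**6):
    print(n, density_of_squares(n))  # 0.1, then 0.01, then 0.001
```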
\begin{pr}\label{prop1}\cite{BharGup}
Every convergent sequence is statistically bounded.
\end{pr}
Although a statistically convergent sequence does not need to be bounded (cf. \cite{BharGup}, \cite{FriDemOrh}), the following proposition
shows that every statistically convergent sequence is statistically bounded.
\begin{pr}\label{prop2}\cite{BharGup}
Every statistically convergent sequence is statistically bounded.
\end{pr}
Now, using Propositions \ref{prop1} and \ref{prop2}, we can give the following corollary:
\begin{cor}
Every $\widehat{F}-$statistical convergent sequence is $\widehat{F}-$statistically bounded.
\end{cor}
Denote the set of all $\widehat{F}-$bounded sequences of real numbers by $m(\widehat{F})$ \cite{Kara1}.
Based on Definition \ref{defin1} and the descriptions of $m_{0}$ and $m(\widehat{F})$, we denote by $m_{0}(\widehat{F})$ the set of all $\widehat{F}-$bounded statistically
convergent sequences of real numbers.\\
The following theorem can be proved from Theorem 2.1 of \cite{salat} and Theorem \ref{thm1}.
\begin{thm}\label{boun1}
The set $m_{0}(\widehat{F})$ is a closed linear subspace of the normed linear space $m(\widehat{F})$.
\end{thm}
\begin{thm}
The set $m_{0}(\widehat{F})$ is a nowhere dense set in $m(\widehat{F})$.
\end{thm}
\begin{proof}
According to \cite{salat} that every closed linear subspace of an arbitrary linear normed space $E$,
different from $E$ is a nowhere dense set in $E$. Hence, on account of Theorem \ref{boun1}, it suffices
to prove that $m_{0}(\widehat{F})\neq m(\widehat{F})$. But this is evident, consider the sequence
\begin{eqnarray*}
x_{n} = \left\{ \begin{array}{ccl}
1&, & (\textrm{n is odd}),\\
0&, & (\textrm{n is even}).
\end{array} \right.
\end{eqnarray*}
Then, $x\in m(\widehat{F})$, but $x$ does not belong to $m_{0}(\widehat{F})$.
\end{proof}
Let $\omega$ denote the Fr\'{e}chet metric space of all real sequences with the metric $d_{\omega}$,
\begin{eqnarray*}
d_{\omega}(x,y)=\sum_{k=1}^{\infty}\frac{1}{2^{k}}\frac{|x_{k}-y_{k}|}{1+|x_{k}-y_{k}|},
\end{eqnarray*}
where $x=(x_{k}), y=(y_{k})\in \omega$.
\begin{thm}
The set of $\widehat{F}-$statistically convergent sequences is dense in the space $\omega$.
\end{thm}
\begin{proof}
If $x=(x_{k})\in S(\widehat{F})$ and the sequence $y=(y_{k})$ of real numbers
differs from $x$ only in a finite number of terms, then evidently $y\in S(\widehat{F})$, too. The claim follows at once from this observation and
the definition of the metric in $\omega$.
\end{proof}
\begin{thm}The following statements hold.
\begin{itemize}
\item[i.] The inclusion $c(\widehat{F})\subset S(\widehat{F})$ is strict.
\item[ii.] $S(\widehat{F})$ and $\ell_{\infty}(\widehat{F})$ overlap, but neither one contains the other.
\item[iii.] $S(\widehat{F})$ and $\ell_{\infty}$ overlap, but neither one contains the other.
\item[iv.] $S$ and $S(\widehat{F})$ overlap, but neither one contains the other.
\item[v.] $S$ and $c(\widehat{F})$ overlap, but neither one contains the other.
\item[vi.] $S$ and $c_{0}(\widehat{F})$ overlap, but neither one contains the other.
\item[vii.] $S$ and $\ell_{\infty}(\widehat{F})$ overlap, but neither one contains the other.
\end{itemize}
\end{thm}
\begin{proof}
i) Since $c\subset S$, we have $c(\widehat{F})\subset S(\widehat{F})$. We choose
\begin{eqnarray}\label{sampleseq}
\widehat{F}x_{n}=(f_{n+1}^{2})=(1,2^{2},3^{2},5^{2},\ldots).
\end{eqnarray}
Since $f_{n+1}^{2}\rightarrow \infty$ as $n\rightarrow\infty$ and $\widehat{F}x=(1,0,0,\ldots)$,
we have $\widehat{F}x\in S$, but $\widehat{F}x$ is not in the space $c$, that is, $\widehat{F}x \not \in c$.\\
For the other items, we first use the inclusion relations in \cite{Kara1}, from which we obtain that the inclusions $c\subset S(\widehat{F})$, $c\subset c(\widehat{F})$, $c\subset m(\widehat{F})$, $c\subset S$, $c\subset \ell_{\infty}$ hold and that $c\cap c_{0}(\widehat{F})\neq \emptyset$. It follows that each of the pairs $S(\widehat{F})$ and $\ell_{\infty}(\widehat{F})$, $S(\widehat{F})$ and $\ell_{\infty}$, $S$ and $S(\widehat{F})$, $S$ and $c(\widehat{F})$, $S$ and $c_{0}(\widehat{F})$, $S$ and $\ell_{\infty}(\widehat{F})$ overlaps.\\
ii) Let $\widehat{F}x$ be as in (\ref{sampleseq}). Then, $\widehat{F}x\in S$, but $\widehat{F}x$ is not in $\ell_{\infty}$. Now we choose $u=(1,0,1,0,\ldots)$. Then, $u\in \ell_{\infty}(\widehat{F})$ but $u\not \in S(\widehat{F})$.\\
iii) The proof is the same as (ii).\\
iv) Define
\begin{eqnarray*}
x_{n} = \left\{ \begin{array}{ccl}
1&, & (\textrm{n is a square}),\\
0&, & (\textrm{otherwise}).
\end{array} \right.
\end{eqnarray*}
Then $x\in S$ but $x \not\in S(\widehat{F})$.
Conversely, if we take $u=(n)$, then $u\not \in S$ but $u \in S(\widehat{F})$.\\
(v), (vi) and (vii) are proved similarly to (iv).
\end{proof}
\section{Applications}
\subsection{Approximation by $\widehat{F}-$statistically convergence}
In this section, we obtain an analogue of the classical Korovkin theorem by
using the concept of $\widehat{F}-$statistical convergence.
Let $F(\mathbb{R})$ denote the linear space of real-valued functions
on $\mathbb{R}$. Let $C(\mathbb{R})$ be the space of all real-valued
continuous functions $f$ on $\mathbb{R}$. It is well known that
$C(\mathbb{R})$ is a Banach space with the norm given as follows:
\begin{eqnarray*}
\parallel f\parallel_{\infty}=\sup_{x\in \mathbb{R}}|f(x)|,\,\,f\in C(\mathbb{R}),
\end{eqnarray*}
and we denote by $C_{2\pi}(\mathbb{R})$ the space of all $2\pi-$
periodic functions $f\in C(\mathbb{R})$, which is a Banach space with
the norm given by
\begin{eqnarray*}
\parallel f\parallel_{2\pi}=\sup_{t\in \mathbb{R}}|f(t)|,\,\,f\in
C(\mathbb{R}).
\end{eqnarray*}
We say that $A$ is a positive operator if for every non-negative $f$ and
$x\in I$, we have $A(f,x)\geq 0$, where $I$ is any given interval on
the real semi-axis. The first and second classical Korovkin
approximation theorems state as follows (see \cite{Gadziev}, \cite{Korovkin}):
\begin{thm}
Let $(A_{n})$ be a sequence of positive linear operators from
$C[0,1]$ in to $F[0,1]$. Then
\begin{eqnarray*}
\lim_{n \to \infty}\parallel A_{n}(f,x)-f(x)\parallel_{C[a,b]}=0
\Leftrightarrow \lim_{n \to \infty}\parallel A_{n}(e_{i},
x)-e_{i}\parallel_{C[a,b]}=0,
\end{eqnarray*}
where $e_{i}=x^{i}$, $i=0,1,2$.
\end{thm}
\begin{thm}\label{kor2}
Let $(T_{n})$ be sequence of positive linear operators from
$C_{2\pi}(\mathbb{R})$ into $F(\mathbb{R})$. Then
\begin{eqnarray*}
\lim_{n \to \infty}\parallel T_{n}(f,x)-f(x)\parallel_{2\pi}=0
\Leftrightarrow \lim_{n \to \infty}\parallel T_{n}(f_{i},
x)-f_{i}\parallel_{2\pi}=0,\,\, i=0,1,2
\end{eqnarray*}
where $f_{0}=1$, $f_{1}=\sin x$ and $f_{2}=\cos x$.
\end{thm}
Our main Korovkin type theorem is given as follows:
\begin{thm}\label{thm}
Let $(L_{k})$ be a sequence of positive linear operator from
$C_{2\pi}(\mathbb{R})$ into $C_{2\pi}(\mathbb{R})$. Then for all
$f\in C_{2\pi}(\mathbb{R})$
\begin{eqnarray}\label{k0}
d(\widehat{F})-\lim_{k \to \infty}\parallel
L_{k}(f,x)-f(x)\parallel_{2\pi}=0
\end{eqnarray}
if and only if
\begin{eqnarray}
&&d(\widehat{F})-\lim_{k \to \infty}\parallel L_{k}(1,x)-1\parallel_{2\pi}=0\label{k1},\\
&&d(\widehat{F})-\lim_{k \to \infty}\parallel
L_{k}(\sin t,x)-\sin x\parallel_{2\pi}=0,\label{k2}\\
&&d(\widehat{F})-\lim_{k \to \infty}\parallel L_{k}(\cos t,x)-\cos
x\parallel_{2\pi}=0.\label{k3}
\end{eqnarray}
\end{thm}
\begin{proof}
As $1,\sin x,\cos x\in C_{2\pi}(\mathbb{R})$, conditions
(\ref{k1})-(\ref{k3}) follow immediately from (\ref{k0}). Conversely, let the
conditions (\ref{k1})-(\ref{k3}) hold and let $I_{1}=(a,a+2\pi)$ be any
subinterval of length $2\pi$ in $\mathbb{R}$. Fix $x\in
I_{1}$. By the continuity of the function $f$, it follows that for
given $\varepsilon> 0$ there exists $\delta=\delta(\varepsilon)>0$ such
that
\begin{eqnarray}\label{k4}
|f(x)-f(t)|<\varepsilon\,\,\textrm{whenever}\,\,|t-x|<\delta.
\end{eqnarray}
If $|t-x|\geq\delta$, let us assume that $t\in
(x+\delta,2\pi+x+\delta)$. Then we obtain that
\begin{eqnarray}\label{k5}
|f(x)-f(t)|\leq2\parallel f\parallel_{2\pi}\leq\frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\psi (t),
\end{eqnarray}
where $\psi (t)=\sin^{2}\left(\frac{t-x}{2}\right)$.
By using (\ref{k4}) and (\ref{k5}), we have
\begin{eqnarray*}
|f(x)-f(t)|<\varepsilon+ \frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\psi (t).
\end{eqnarray*}
This implies that
\begin{eqnarray*}
-\varepsilon-\frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\psi
(t)<f(x)-f(t)<\varepsilon+ \frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\psi (t).
\end{eqnarray*}
By the positivity and linearity of $\{L_{k}\}$ (note that $x$ is fixed, so $f(x)$ is a constant), we get
\begin{eqnarray*}
L_{k}\left(-\varepsilon-\frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\psi
(t),x\right)<L_{k}\left(f(x)-f(t),x\right)<
L_{k}\left(\varepsilon+ \frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\psi
(t),x\right).
\end{eqnarray*}
Therefore,
\begin{eqnarray}\label{k6}
-\varepsilon L_{k}(1,x)-\frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}L_{k}(\psi
(t),x)&<& L_{k}(f,x)-f(x)L_{k}(1,x)\nonumber\\
&<&\varepsilon
L_{k}(1,x)+\frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}L_{k}(\psi
(t),x).
\end{eqnarray}
On the other hand, we get
\begin{eqnarray}\label{k7}
L_{k}(f,x)-f(x)&=&L_{k}(f,x)-f(x)L_{k}(1,x)+f(x)L_{k}(1,x)-f(x)\nonumber\\
&=&[L_{k}(f,x)-f(x)L_{k}(1,x)]+f(x)[L_{k}(1,x)-1].
\end{eqnarray}
By inequality (\ref{k6}) and (\ref{k7}), we obtain
\begin{eqnarray}\label{k8}
L_{k}(f,x)-f(x)&<&\varepsilon L_{k}(1,x)+\frac{2\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}L_{k}(\psi
(t),x)+f(x)[L_{k}(1,x)-1].
\end{eqnarray}
Now, we compute the second moment:
\begin{eqnarray*}
L_{k}(\psi(t),x)=L_{k}\left(\sin^{2}\left(\frac{x-t}{2}\right),x\right)&=&L_{k}\left(\frac{1}{2}(1-\cos t \cos x-\sin x \sin t),x\right)\\
&=&\frac{1}{2}[L_{k}(1,x)-\cos xL_{k}(\cos t,x)-\sin xL_{k}(\sin t ,x)]\\
&=&\frac{1}{2}\{[L_{k}(1,x)-1]-\cos x[L_{k}(\cos t,x)-\cos x]\\
&&-\sin x[L_{k}(\sin t ,x)-\sin x]\}.
\end{eqnarray*}
By (\ref{k8}), we have
\begin{eqnarray*}
L_{k}(f,x)-f(x)&<&\varepsilon L_{k}(1,x)+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\{[L_{k}(1,x)-1]\\
&&-\cos x[L_{k}(\cos t,x)-\cos x]-\sin x[L_{k}(\sin t ,x)-\sin x]\}+f(x)[L_{k}(1,x)-1]\\
&=&\varepsilon[L_{k}(1,x)-1]+\varepsilon+f(x)[L_{k}(1,x)-1]\\
&&+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\{[L_{k}(1,x)-1]-\cos
x[L_{k}(\cos t,x)-\cos x]\\
&&-\sin x[L_{k}(\sin t ,x)-\sin x]\}.
\end{eqnarray*}
So, from the above inequality, one can see that
\begin{eqnarray*}
|L_{k}(f,x)-f(x)|&\leq&\varepsilon+\left(\varepsilon+|f(x)|+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\right)|L_{k}(1,x)-1|\\
&&+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\big[|\cos
x||L_{k}(\cos t,x)-\cos x|+|\sin x||L_{k}(\sin t,x)-\sin
x|\big]\\
&\leq&\varepsilon+\left(\varepsilon+|f(x)|+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\right)|L_{k}(1,x)-1|\\
&&+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\big[|L_{k}(\cos
t,x)-\cos x|+|L_{k}(\sin t ,x)-\sin x|\big].
\end{eqnarray*}
Since $\varepsilon$ is arbitrary, taking the supremum over $x$ we obtain
\begin{eqnarray*}
\parallel L_{k}(f,x)-f(x)\parallel _{2\pi}&\leq&
\varepsilon+R\bigg(\parallel L_{k}(1,x)-1\parallel _{2\pi}+\parallel
L_{k}(\cos t,x)-\cos x\parallel _{2\pi}\\&&+\parallel L_{k}(\sin
t,x)-\sin x\parallel _{2\pi}\bigg),
\end{eqnarray*}
where $R=\max\left(\varepsilon+\parallel
f\parallel_{2\pi}+\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)},\frac{\parallel
f\parallel_{2\pi}}{\sin^{2}\left(\frac{\delta}{2}\right)}\right)$.\\
Finally, replacing $L_{k}(.,x)$ by
$T_{k}(.,x)=\widehat{F}L_{k}(.,x)$ and for $\varepsilon^{'}>0$, we
can write
\begin{eqnarray*}
\mathcal{A}:&=&\left\{k \in \mathbb{N}:\parallel
T_{k}(1,x)-1\parallel _{2\pi}+\parallel T_{k}(\sin t,x)-\sin
x\parallel _{2\pi}+\parallel T_{k}(\cos t,x)-\cos x\parallel
_{2\pi}\geq\frac{\varepsilon^{'}}{R}\right\},\\
\mathcal{A}_{1}:&=&\left\{k \in \mathbb{N}:\parallel
T_{k}(1,x)-1\parallel _{2\pi}\geq\frac{\varepsilon^{'}}{3R}\right\},\\
\mathcal{A}_{2}:&=&\left\{k \in \mathbb{N}:\parallel T_{k}(\sin
t,x)-\sin x\parallel _{2\pi}\geq\frac{\varepsilon^{'}}{3R}\right\},\\
\mathcal{A}_{3}:&=&\left\{k \in \mathbb{N}:\parallel T_{k}(\cos
t,x)-\cos x\parallel _{2\pi}\geq\frac{\varepsilon^{'}}{3R}\right\}.
\end{eqnarray*}
Then, $\mathcal{A}\subset \mathcal{A}_{1}\cup \mathcal{A}_{2}\cup
\mathcal{A}_{3}$, so we have $d(\mathcal{A})\leq
d(\mathcal{A}_{1})+d(\mathcal{A}_{2})+d(\mathcal{A}_{3})$. Thus, by
conditions (\ref{k1})-(\ref{k3}), we obtain
\begin{eqnarray*}
d(\widehat{F})-\lim_{k \to \infty}\parallel
L_{k}(f,x)-f(x)\parallel_{2\pi}=0,
\end{eqnarray*}
which completes the proof.
\end{proof}
We remark that our Theorem \ref{thm} is stronger than Theorem
\ref{kor2} as well as the theorem of Gadjiev and Orhan \cite{GadOr}.
To illustrate this, we give the following example:
\begin{exmp}
For $n\in\mathbb{N}$, denote by $S_{n}(f)$ the $n$-th partial sum of
the Fourier series of $f$, that is,
\begin{eqnarray*}
S_{n}(f,x)=\frac{1}{2}a_{0}(f)+\sum_{k=1}^{n}\big(a_{k}(f)\cos
kx+b_{k}(f)\sin k x\big).
\end{eqnarray*}
For $n\in\mathbb{N}$, we get
\begin{eqnarray*}
F_{n}(f,x)=\frac{1}{n+1}\sum_{k=0}^{n}S_{k}(f).
\end{eqnarray*}
A standard calculation gives that for every $x\in\mathbb{R}$
\begin{eqnarray*}
F_{n}(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)\varphi_{n}(x-t)dt,
\end{eqnarray*}
where
\begin{eqnarray*}
\varphi_{n}(u) = \left\{ \begin{array}{ll}
\frac{\sin^{2}((n+1)u/2)}{(n+1)\sin^{2}(u/2)}& \textrm{ if $u$ is not a multiple of $2\pi$ },\\
n+1 & \textrm{ if $u$ is a multiple of $2\pi$ }.
\end{array} \right.
\end{eqnarray*}
The sequence $(\varphi_{n})_{n\in\mathbb{N}}$ is a positive kernel,
called the Fej\'{e}r kernel, and the corresponding operators
$F_{n}$ for $n\geq 1$ are called Fej\'{e}r convolution
operators.
We define the sequence of linear operators
$K_{n}:C_{2\pi}(\mathbb{R})\longrightarrow C_{2\pi}(\mathbb{R})$
by $K_{n}(f,x)=(1+y_{n})F_{n}(f,x)$, where
$y=(y_{n})=(f^{2}_{n+1})$. Then, $K_{n}(1,x)=(1+y_{n})$, $K_{n}(\sin
t,x)=(1+y_{n})\frac{n}{n+1}\sin x$ and $K_{n}(\cos t,x)=(1+y_{n})\frac{n}{n+1}\cos x$,
and the sequence $(K_{n})$ satisfies the conditions
(\ref{k1})-(\ref{k3}). Therefore, we get
\begin{eqnarray*}
d(\widehat{F})-\lim_{k \to \infty}\parallel
K_{n}(f,x)-f(x)\parallel_{2\pi}=0.
\end{eqnarray*}
On the other hand, one can see that $(K_{n})$ does not satisfy
Theorem \ref{kor2} or the theorem of Gadjiev and Orhan
\cite{GadOr}: since $\widehat{F}y=(1,0,0,\ldots)$, the sequence $y$
is $\widehat{F}-$statistically convergent to $0$, but $y$ is
neither convergent nor statistically convergent.
\end{exmp}
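The identities $F_{n}(1,x)=1$ and $F_{n}(\sin t,x)=\frac{n}{n+1}\sin x$ used in this example can be checked numerically. The following Python sketch evaluates the convolution $F_{n}(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)\varphi_{n}(x-t)\,dt$ with the periodic trapezoidal rule (essentially exact here, since the integrand is a trigonometric polynomial of low degree); the grid size $M$ is an arbitrary choice:

```python
import numpy as np

def fejer_kernel(u, n):
    # phi_n(u) = sin^2((n+1)u/2) / ((n+1) sin^2(u/2)), with value n+1 at u = 0 mod 2*pi
    s = np.sin(u / 2.0)
    singular = np.isclose(s, 0.0)
    den = (n + 1) * np.where(singular, 1.0, s) ** 2
    return np.where(singular, n + 1.0, np.sin((n + 1) * u / 2.0) ** 2 / den)

def fejer_mean(f, x, n, M=4096):
    # (1/(2*pi)) * int_{-pi}^{pi} f(t) phi_n(x - t) dt via the periodic trapezoidal rule
    t = -np.pi + 2.0 * np.pi * np.arange(M) / M
    return float(np.mean(f(t) * fejer_kernel(x - t, n)))

n, x = 10, 1.0
print(fejer_mean(np.sin, x, n))               # close to (n/(n+1)) * sin(x)
print(fejer_mean(lambda t: 0 * t + 1.0, x, n))  # close to 1
```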
\subsection{Rate of $\widehat{F}-$statistical convergence} In this
section, we estimate the rate of $\widehat{F}-$statistical convergence
of a sequence of positive linear operators from
$C_{2\pi}(\mathbb{R})$ into $C_{2\pi}(\mathbb{R})$. We first give the
following definition.
\begin{defin}\label{defn}
Let $(a_{n})$ be a positive non-increasing sequence. We say that the
sequence $x=(x_{k})$ is $\widehat{F}-$statistically convergent to $L$
with the rate $o(a_{n})$ if for every $\varepsilon>0$,
\begin{eqnarray*}
\lim_{n \to \infty}\frac{1}{a_{n}}\left|\left\{k\leq
n:|\widehat{F}x_{k}-L|\geq\varepsilon\right\}\right|=0.
\end{eqnarray*}
In this case, we write $x_{k}-L=d(\widehat{F})-o(a_{n})$.
\end{defin}
As usual we have the following auxiliary result.
\begin{lem}
Let $(a_{n})$ and $(b_{n})$ be two positive non-increasing
sequences. Let $x=(x_{k})$ and $y=(y_{k})$ be two sequences such
that $x_{k}-L_{1}=d(\widehat{F})-o(a_{n})$ and
$y_{k}-L_{2}=d(\widehat{F})-o(b_{n})$. Then we have
\begin{enumerate}
\item[(i)] $\alpha(x_{k}-L_{1})=d(\widehat{F})-o(a_{n})$ for any
scalar $\alpha$,\\
\item[(ii)] $(x_{k}-L_{1})\pm (y_{k}-L_{2})=d(\widehat{F})-o(c_{n})$,\\
\item[(iii)] $(x_{k}-L_{1})(y_{k}-L_{2})=d(\widehat{F})-o(a_{n}b_{n}),$
\end{enumerate}
where $c_{n}= \max\{a_{n},b_{n}\}$.
\end{lem}
For $\delta> 0$, the modulus of continuity of $f$,
$\omega(f,\delta)$ is defined by
\begin{equation*}
\omega(f,\delta)=\sup_{|x-y|<\delta }|f(x)-f(y)|.
\end{equation*}
It is well-known that for a function $f \in C[a,b]$,
\begin{equation*}
\lim_{\delta\to 0^{+}}\omega(f,\delta)=0,
\end{equation*}
and that for any $\delta>0$,
\begin{equation}\label{r1}
|f(x)-f(y)|\leq \omega(f,\delta)\left(\frac{|x-y|}{\delta}+1\right).
\end{equation}
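The modulus of continuity and inequality (\ref{r1}) can be illustrated numerically. The sketch below approximates $\omega(f,\delta)$ on a uniform grid (a slight underestimate of the true supremum) and then checks (\ref{r1}) on random pairs of points for $f=\sin$ on $[0,2\pi]$; the grid size and the value $\delta=0.1$ are arbitrary choices:

```python
import numpy as np

def modulus_of_continuity(f, a, b, delta, M=2001):
    # grid approximation of sup{ |f(x) - f(y)| : x, y in [a, b], |x - y| <= delta }
    xs = np.linspace(a, b, M)
    fv = f(xs)
    h = (b - a) / (M - 1)
    kmax = int(delta / h)
    return max(np.max(np.abs(fv[k:] - fv[:-k])) for k in range(1, kmax + 1))

delta = 0.1
w = modulus_of_continuity(np.sin, 0.0, 2 * np.pi, delta)
print(w)  # slightly below the exact value 2*sin(delta/2)

# check inequality (r1) on random pairs of points
rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 2 * np.pi, size=(2, 1000))
assert np.all(np.abs(np.sin(x) - np.sin(y)) <= w * (np.abs(x - y) / delta + 1.0))
```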
\begin{thm}
Let $(L_{k})$ be a sequence of positive linear operators from
$C_{2\pi}(\mathbb{R})$ into $C_{2\pi}(\mathbb{R})$. Assume that
\begin{eqnarray*}
(i)&&\ \parallel L_{k}(1,x)-1\parallel
_{2\pi}=d(\widehat{F})-o(u_{n}),\\
(ii)&&\ \omega(f,\theta_{k})=d(\widehat{F})-o(v_{n}),\,\,\textrm{where}\,\,\theta_{k}=\sqrt{L_{k}\left[\sin^{2}\left(\frac{t-x}{2}\right),x\right]}.
\end{eqnarray*}
Then for all $f\in C_{2\pi}(\mathbb{R})$, we get
\begin{eqnarray*}
\ \parallel L_{k}(f,x)-f(x)\parallel _{2\pi}=d(\widehat{F})-o(z_{n})
\end{eqnarray*}
where $z_{n}= \max\{u_{n},v_{n}\}$.
\end{thm}
\begin{proof}
Let $f\in C_{2\pi}(\mathbb{R})$ and $x\in [-\pi,\pi]$. From
(\ref{k7}) and $(\ref{r1})$, we can write
\begin{eqnarray*}
|L_{k}(f,x)-f(x)|&\leq&L_{k}(|f(t)-f(x)|;x)+|f(x)||L_{k}(1,x)-1| \\
&\leq&L_{k}\left(\frac{|t-x|}{\delta}+1;x\right)\omega(f,\delta)+|f(x)||L_{k}(1,x)-1|\\
&\leq&L_{k}\left(\frac{\pi^{2}}{\delta^{2}}\sin^{2}\left(\frac{t-x}{2}\right)+1;x\right)\omega(f,\delta)+|f(x)||L_{k}(1,x)-1|\\
&\leq&\left\{L_{k}(1,x)+\frac{\pi^{2}}{\delta^{2}}L_{k}\left(\sin^{2}\left(\frac{t-x}{2}\right);x\right)\right\}\omega(f,\delta)+|f(x)||L_{k}(1,x)-1|.
\end{eqnarray*}
By choosing $\delta=\theta_{k}$, we get
\begin{eqnarray*}
\parallel
L_{k}(f,x)-f(x)\parallel _{2\pi}&\leq& \parallel
f\parallel_{2\pi}\parallel L_{k}(1,x)-1 \parallel
_{2\pi}+2\omega(f,\theta_{k})+\omega(f,\theta_{k})\parallel
L_{k}(1,x)-1
\parallel _{2\pi}\\
&\leq&K\{\parallel L_{k}(1,x)-1 \parallel
_{2\pi}+\omega(f,\theta_{k})+\omega(f,\theta_{k})\parallel
L_{k}(1,x)-1
\parallel _{2\pi}\},
\end{eqnarray*}
where $K=\max\{2,\parallel f\parallel_{2\pi}\}$. By Definition
\ref{defn} and conditions (i) and (ii), we get the desired
result.
\end{proof}
\section*{Conflict of Interests}
The authors declare that there are no conflict of interests regarding the publication of this paper.
\end{document} | math | 37,015 |
\begin{document}
\title{Equilibrium Perturbations for Asymmetric Zero Range Process under Diffusive Scaling in Dimensions $d \geq 2$}
\begin{abstract}
We consider the asymmetric zero range process in dimensions $d \geq 2$. Assume the initial density profile is a perturbation of the constant density, which has order $N^{-\alpha}$, $\alpha \in (0,1)$, and is constant along the drift direction. Here, $N$ is the scaling parameter. We show that under some constraints on the jump rate of the zero range process, the perturbed quantity macroscopically obeys the heat equation under diffusive scaling.\\
\noindent \emph{Keywords:} asymmetric zero range process; diffusive scaling; spectral gap estimate; logarithmic Sobolev inequality.
\end{abstract}
\section{Introduction}
It is well known that for asymmetric interacting particle systems with only one conservation law, such as asymmetric zero range or exclusion processes, the macroscopic density profile obeys the hyperbolic equation under hyperbolic scaling, \emph{i.e.} time sped up by $N$ and space divided by $N$ \cite{rezakhanlou91}. In order to understand Navier-Stokes equations from a microscopic point of view, asymmetric interacting particle systems have also been considered under diffusive scaling, \emph{i.e.} time sped up by $N^2$ and space divided by $N$. In the seminal paper \cite{esposito1994diffusive}, Esposito, Marra and Yau prove that for asymmetric exclusion processes in dimensions $d \geq 3$, if the initial density profile is a perturbation of order $N^{-1}$ with respect to the constant density, then the perturbed quantity evolves according to a parabolic equation under diffusive scaling. In the literature this is called the incompressible limit, which has also been extended to boundary driven asymmetric exclusion process in \cite{benois2002hydrodynamics}. Another interpretation of the Navier-Stokes equation is to describe the evolution of system in the hyperplane orthogonal to the drift. Precisely speaking, in \cite{benois1997diffusive,landim2004hydrodynamic}, it has been proven that for asymmetric zero range and exclusion processes, if the initial density profile is constant along the drift direction, then the macroscopic behavior is described by a parabolic equation under diffusive scaling. The third interpretation is to consider the first order correction to the hydrodynamic equation, which however is under hyperbolic scaling. We refer to \cite[P. 185]{klscaling} for more background and references on understanding of Navier-Stokes equations.
In this note, we consider equilibrium perturbations for the asymmetric zero range process under diffusive scaling in dimensions $d \geq 2$. We first recall the result in \cite{benois1997diffusive}. Consider the zero range process $(\eta_t, t \geq 0)$ with jump rate $N^2 g(\cdot)$ and transition probability $p(\cdot)$ such that $m:= \sum_{x \in \mathbb Z^d} x p(x) \neq 0$. Here, $N$ is the scaling parameter. Assume the initial distribution of the process is associated with some density profile $\varrho_0: \mathbb T^d \rightarrow \mathbb R_+$ such that $m \cdot \nabla \varrho_0 (u) = 0$ for any $u \in \mathbb T^d$. Then, it is proved in \cite{benois1997diffusive} that for any continuous function $F: \mathbb T^d \rightarrow \mathbb R$,
\[\lim_{N \rightarrow \infty} \frac{1}{N^d} \sum_{x \in \mathbb T_N^d} \eta_{t} (x) F (\tfrac{x}{N}) = \int_{\mathbb T^d} \varrho (t,u) F(u) du \quad \text{in probability},\]
where $\varrho (t,u)$ is the solution to the following parabolic equation
\begin{equation}\label{parabolicEqn}
\begin{cases}
\partial_t \varrho (t,u) = \frac{1}{2} \sum_{i,j=1}^d \sigma_{i,j} \partial^2_{u_i,u_j} \Phi (\varrho (t,u)),\\
\varrho (0,u) = \varrho_0 (u).
\end{cases}\end{equation}
Above, $\sigma_{i,j} = \sum_{x \in \mathbb Z^d} x_i x_j p(x)$ for $1 \leq i,j \leq d$, and $\Phi (\varrho)$ is the expectation of the jump rate $g(\cdot)$ with respect to the invariant measure of the process with density $\varrho$.
We assume the initial density profile has the following form: fix a positive constant $\rho_*$ and for $\alpha \in (0,1)$, let
\[\varrho_0 (u)= \rho_* + N^{-\alpha} \rho_0 (u).\]
Assume further that $m \cdot \nabla \rho_0 (u) = 0$ for any $u \in \mathbb T^d$. Then, under some constraints on the jump rate $g(\cdot)$, we prove that for any continuous function $F: \mathbb T^d \rightarrow \mathbb R$,
\[\lim_{N \rightarrow \infty} \frac{1}{N^{d-\alpha}} \sum_{x \in \mathbb T_N^d}\big( \eta_{t} (x) - \rho_*\big) F (\tfrac{x}{N}) = \int_{\mathbb T^d} \rho (t,u) F(u) du \quad \text{in probability},\]
where $\rho (t,u)$ is the solution to the following heat equation
\begin{equation}\label{hE}
\begin{cases}
\partial_t \rho (t,u) = \frac{1}{2} \sum_{i,j=1}^d \sigma_{i,j} \Phi^\prime (\rho_*) \partial^2_{u_i,u_j} \rho (t,u),\\
\rho (0,u) = \rho_0 (u).
\end{cases}
\end{equation}
Note that if we assume a priori that the density profile of the process at time $t$ is given by \[\varrho (t,u) := \rho_* + N^{-\alpha} \rho(t,u)\] and substitute $\varrho (t,u)$ into the parabolic equation \eqref{parabolicEqn}, then we get the above heat equation by using Taylor's expansion and by letting $N \rightarrow \infty$.
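For the reader's convenience, we record this formal computation. Substituting $\varrho=\rho_*+N^{-\alpha}\rho$ into \eqref{parabolicEqn} and Taylor expanding $\Phi$ around $\rho_*$ gives

```latex
\begin{aligned}
N^{-\alpha}\partial_t \rho
&= \frac{1}{2}\sum_{i,j=1}^d \sigma_{i,j}\,\partial^2_{u_i,u_j}
   \Big[\Phi(\rho_*) + N^{-\alpha}\Phi'(\rho_*)\rho + O(N^{-2\alpha})\Big]\\
&= N^{-\alpha}\,\frac{1}{2}\sum_{i,j=1}^d \sigma_{i,j}\,
   \Phi'(\rho_*)\,\partial^2_{u_i,u_j}\rho + O(N^{-2\alpha}),
\end{aligned}
```

since $\Phi(\rho_*)$ is constant in $u$; dividing by $N^{-\alpha}$ and letting $N\rightarrow\infty$ yields \eqref{hE}.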
The proof is based on the relative entropy method introduced by Yau \cite{yau1991relative}. Precisely speaking, let $\nu_{N,t}$ be the product measure with macroscopic density profile $\rho_* + N^{-\alpha} \rho (t,u)$, and let $\mu_{N,t}$ be the distribution of the process at time $t$. We prove in Theorem \ref{thm} that the relative entropy per volume of $\mu_{N,t}$ with respect to $\nu_{N,t}$ is of order $o(N^{-2\alpha})$. From this and the entropy inequality, it is easy to obtain a law of large numbers for the perturbed quantities. The main step in bounding the relative entropy is to prove a quantitative version of the so-called one block estimate, for which we present two different proofs: one uses the spectral gap estimate and the other uses the logarithmic Sobolev inequality. The proof based on the spectral gap estimate follows the steps in \cite{toth2002between}. The logarithmic Sobolev inequality has also been used to quantify block estimates in the theory of hydrodynamic limits, see \cite{fritz2004derivation} for example.
Equilibrium perturbations have also been considered in other contexts. For example, in dimension one, Sepp{\"a}l{\"a}inen \cite{seppalainen2001perturbation} considers Hammersley's model, adds a perturbation of order $N^{-\alpha}$ to the equilibrium, and shows that the perturbation macroscopically obeys the inviscid Burgers equation in the time scale $N^{1+\alpha}t$ if $0 < \alpha< 1/2$. This is extended by T{\'o}th and Valk{\'o} \cite{toth2002between} to a large class of one-dimensional interacting particle systems based on the relative entropy method, but only for $0 < \alpha < 1/5$ and only in the smooth regime of the solution. For systems with two conservation laws,
T{\'o}th and Valk{\'o} \cite{toth2005perturbation} obtain a two-by-two system for a very rich class of systems. In \cite{valko2006hydrodynamic}, Valk{\'o} shows that small perturbations around a hyperbolic equilibrium point evolve according to two decoupled Burgers equations. For very recent results, we refer to \cite{jara2021viscous} for equilibrium perturbations in weakly asymmetric exclusion processes, and to \cite{xu2022equilibrium} for the generalized exclusion process and anharmonic chains.
The rest of the note is organized as follows. In Section \ref{sec:results}, we state the model and main results. The proof of Theorem \ref{thm} is presented in Section \ref{sec:relativeentropy} by assuming the one block estimate holds, whose proof is postponed to Section \ref{sec:oneblock}.
\section{Notation and Results}\label{sec:results}
The state space of the zero range process is $\Omega_N^d = \mathbb N^{\mathbb T_N^d}$, where $\mathbb N = \{0,1,2,\ldots\}$ and $\mathbb T_N^d = \mathbb Z^d / (N\mathbb Z^d)$ is the $d$ dimensional discrete torus with $d \geq 2$. For a configuration $\eta \in \Omega_N^d$, $\eta(x)$ is the number of particles at site $x$. Let $p (\cdot)$ be a probability measure on $\mathbb Z^d$. We assume that
\begin{enumerate}[(i)]
\item $p(\cdot)$ is of finite range: there exists $R > 0$ such that $p(x) = 0$ for all $|x| > R$;
\item $p(\cdot)$ is asymmetric: $m := \sum_{x \in\mathbb Z^d} x p(x) \neq 0$.
\end{enumerate}
Above, $|x| = \max_{1 \leq i \leq d} |x_i|$ for $x \in \mathbb Z^d$. Let $g: \mathbb N \rightarrow \mathbb R_+$ be the jump rate of the zero range process. To avoid degeneracy, assume $g(k) = 0$ if and only if $k=0$.
The generator of the zero range process acting on functions $f: \Omega_N^d \rightarrow \mathbb R$ is given by
\[\mathscr{L}_{N} f (\eta) = \sum_{x\in \mathbb T_N^d}\sum_{y\in \mathbb Z^d} g(\eta(x)) p(y) \big[ f(\eta^{x,x+y}) - f(\eta) \big].\]
Here, $\eta^{x,y}$ is the configuration obtained from $\eta$ after a particle jumps from $x$ to $y$,
\[\eta^{x,y} (z) = \begin{cases}
\eta(x) - 1, \quad &z=x,\\
\eta(y) + 1, \quad &z=y,\\
\eta(z), \quad &z\neq x,y.\\
\end{cases}\]
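The jump mechanism can be sketched in a few lines of Python. The rate $g(k)=\min(k,3)$ and the step distribution below are purely illustrative choices (an asymmetric, finite-range $p(\cdot)$, and a rate with $g(0)=0$); the sketch simulates the embedded jump chain, drawing the jumping site proportionally to $g(\eta(x))$ and the step from $p(\cdot)$, and checks that the update $\eta\mapsto\eta^{x,x+y}$ conserves the total particle number:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # side length of the discrete torus T_N^2
eta = rng.integers(0, 5, size=(N, N))   # initial configuration
total = int(eta.sum())

def g(k):
    # illustrative jump rate with g(0) = 0 and bounded increments
    return np.minimum(k, 3)

steps = [(1, 0), (0, 1), (1, 1)]        # finite-range, asymmetric p(.)
probs = np.array([0.5, 0.3, 0.2])

for _ in range(1000):
    rates = g(eta).astype(float).ravel()
    idx = rng.choice(eta.size, p=rates / rates.sum())
    x = divmod(idx, N)                  # site chosen proportionally to g(eta(x))
    dx, dy = steps[rng.choice(len(steps), p=probs)]
    y = ((x[0] + dx) % N, (x[1] + dy) % N)
    eta[x] -= 1                         # eta -> eta^{x, x+y}
    eta[y] += 1

assert int(eta.sum()) == total          # particle number is conserved
```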
Throughout the paper, we need the following assumptions on the rate function $g(\cdot)$.
\begin{assumption}\label{assump:g}
(i) There exists a constant $a_0$ such that for any $k \geq 1$, $|g(k+1) - g(k)| \leq a_0$;\\
(ii) there exists $k_0 > 0$ and $a_1 > 0$ such that $g(k) - g(j) > a_1$ for any $k \geq j+k_0$.
\end{assumption}
\begin{remark}
Condition $(i)$ is needed in order for the process to be well defined in infinite volume (\emph{cf.}\;\cite{Andjel82}). Condition $(ii)$ ensures spectral gap estimates and the logarithmic Sobolev inequality for the zero range process (\emph{cf.}\;\cite{landim1996spectral,pra2005logarithmic}), which are the main tools of this note.
\end{remark}
\begin{remark}
We assume condition $(ii)$ for simplicity. The results in this note should also hold as long as the spectral gap of the zero range process shrinks at rate at least $\ell^{-2}$ (not necessarily uniformly in the particle density), where $\ell$ is the size of the underlying lattice. For example, the spectral gap estimates have also been proven in the following cases: (a) $g(k) = k^{\alpha}$, $\alpha \in (0,1)$, see \cite{nagahata2010spectral}; (b) $g(k) = \mathbf{1}_{\{k \geq 1\}}$, see \cite{morris2006spectral}.
\end{remark}
It is well known that the zero range process has a family of product invariant measures indexed by the particle density. Precisely speaking, for each $\varphi \geq 0$, let $\bar{\nu}^N_\varphi$ be the product measure on $\Omega_N^d$ with marginals given by
\[\bar{\nu}^N_\varphi (\eta(x) = k) = \frac{1}{Z(\varphi) } \frac{\varphi^k}{g(k)!}, \quad k \geq 0, \;x \in \mathbb T_N^d.\]
Here, $Z (\varphi)$ is the normalizing constant and $g(k)! = \prod_{j=1}^k g(j)$ with the convention that $g(0)! = 1$. Under Assumption \ref{assump:g}, $Z(\varphi) < \infty$ for any $\varphi \geq 0$. In particular,
\[E_{\bar{\nu}^N_\varphi} [e^{\lambda \eta(0)}] < \infty, \quad \forall \lambda \geq 0.\]
For $\varphi \geq 0$, the particle density under $\bar{\nu}^N_\varphi$ is
\[R(\varphi) = E_{\bar{\nu}^N_\varphi} [\eta(x)]. \]
It is easy to see that $R(\varphi)$ is strictly increasing in $\varphi$; hence it has an inverse, denoted by $\Phi := R^{-1}$. To index the invariant measures by the particle density $\rho \geq 0$, denote $\nu^N_\rho := \bar{\nu}^N_{\Phi (\rho)}$.
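As a quick numerical illustration (not part of the paper's argument), $Z$, $R$ and $\Phi$ can be computed by truncating the defining series and inverting the strictly increasing $R$ by bisection. For the linear rate $g(k)=k$ the marginals are Poisson, so $Z(\varphi)=e^{\varphi}$ and $\Phi(\rho)=\rho$, which makes this sketch easy to sanity-check.

```python
from math import exp

def partition_Z(phi, g, K=200):
    """Truncation of Z(phi) = sum_{k>=0} phi^k / g(k)!  (with g(0)! = 1)."""
    term, total = 1.0, 0.0        # term = phi^k / g(k)!
    for k in range(K):
        total += term
        term *= phi / g(k + 1)    # incremental update avoids overflow
    return total

def density_R(phi, g, K=200):
    """R(phi) = E[eta(x)] under the marginal of the invariant measure."""
    term, num = 1.0, 0.0
    for k in range(K):
        num += k * term
        term *= phi / g(k + 1)
    return num / partition_Z(phi, g, K)

def Phi(rho, g, hi=50.0, tol=1e-10):
    """Invert the strictly increasing R by bisection: Phi = R^{-1}."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if density_R(mid, g) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The truncation level `K` and the bisection bracket `hi` are ad hoc choices for moderate densities; they are not tied to anything in the text.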
\medskip
Fix $\rho_* > 0$. Let $\rho_0 : \mathbb T^d \rightarrow \mathbb R_+$ be the initial density profile of the perturbed quantity. We assume $\rho_0$ is continuously differentiable and is constant along the drift direction
\begin{equation}\label{assump:initial}
m \cdot \nabla \rho_0 (u) = 0, \quad \forall u \in \mathbb T^d.
\end{equation}
Let $\alpha > 0$ denote the strength of the perturbed quantity. The initial distribution of the process is
\begin{equation}\label{initialdistri}
\mu_{N,0} (d \eta)= \bigotimes_{x \in \mathbb T_N^d} \nu^1_{\rho_* + N^{-\alpha} \rho_0 (x/N)} (d \eta(x)).
\end{equation}
Denote by $\mu_{N,t}$ the distribution of the process with generator $N^2 \mathscr{L}_{N}$ at time $t$ starting from $\mu_{N,0}$. The corresponding accelerated process is denoted by $(\eta_t,t \geq 0)$.
Let $\rho(t,u)$ be the solution to the following heat equation
\begin{equation}\label{heatEqn}
\begin{cases}
\partial_t \rho (t,u) = \frac{1}{2}\sum_{i,j = 1}^d \Phi^\prime (\rho_*) \sigma_{i,j} \partial^2_{u_i,u_j} \rho (t,u),\\
\rho(0,u) = \rho_0 (u).
\end{cases}\end{equation}
Above, $\sigma_{i,j} = \sum_{x \in \mathbb T_N^d} x_i x_j p(x)$ for $1 \leq i,j \leq d$. Define the reference measure as
\[\nu_{N,t} (d \eta) = \bigotimes_{x \in \mathbb T_N^d} \nu_{\rho_*+N^{-\alpha} \rho (t,x/N)}^1 (d \eta (x)). \]
For two probability measures $\mu,\nu$ on $\Omega_N^d$ such that $\mu$ is absolutely continuous with respect to $\nu$, recall the relative entropy of $\mu$ with respect to $\nu$ is defined as
\[H(\mu|\nu) = \int \log (d \mu / d \nu) d \mu.\]
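On a finite toy state space the definition of $H(\mu|\nu)$ can be checked directly; the sketch below (ours, purely for illustration) represents measures as dictionaries mapping states to probabilities.

```python
from math import log

def relative_entropy(mu, nu):
    """H(mu|nu) = sum_s mu(s) log(mu(s)/nu(s)); requires mu << nu.

    mu, nu : dicts mapping states to probabilities.  Terms with mu(s) = 0
    contribute 0, by the usual convention 0 log 0 = 0.
    """
    h = 0.0
    for state, m in mu.items():
        if m > 0.0:
            h += m * log(m / nu[state])  # KeyError signals mu is not << nu
    return h
```

By Jensen's inequality the result is always nonnegative, and it vanishes exactly when the two measures coincide.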
To make notation simple, denote
\[h_N(t) = N^{-d} H(\mu_{N,t} | \nu_{N,t}).\]
The following is the main result of this note.
\begin{theorem}\label{thm} Let $d \geq 2$. For any $\alpha \in (0,1)$, we have $h_N (t) = o(N^{-2 \alpha})$.
\end{theorem}
\begin{remark}
We do not actually need the initial measure $\mu_{N,0}$ to be product as given in \eqref{initialdistri}. But we need it to satisfy: $(i)\; h_N(0) = o(N^{-2\alpha})$, $(ii)\; N^{-d} H(\mu_{N,0} | \nu^N_{\rho_*}) \leq C N^{-2\alpha}$ for some constant $C$ independent of $N$.
\end{remark}
For any probability measure $\mu$ on $\Omega_N^d$, denote by $\mathbb P_\mu$ the distribution of the process $(\eta_t,t\geq 0)$ with initial distribution $\mu$, and by $\mathbb E_\mu$ the corresponding expectation. As a direct consequence of the above theorem, we have the following law of large numbers, whose proof uses the entropy inequality and is standard (\emph{cf.}\;\cite[Corollary 6.1.3]{klscaling} for example). For this reason, we omit the proof.
\begin{corollary}
Under the assumptions of Theorem \ref{thm}, for any continuous function $F: \mathbb T^d \rightarrow \mathbb R$, for any $t>0$ and for any $\varepsilon > 0$,
\[\lim_{N \rightarrow \infty} \mathbb P_{\mu_{N,0}} \Big( \Big| \frac{1}{N^{d-\alpha}} \sum_{x\in \mathbb T_N^d} \big( \eta_t (x) - \rho_*\big) F \big(\tfrac{x}{N}\big) - \int_{\mathbb T^d} \rho(t,u) F(u) du\Big| > \varepsilon \Big) = 0.\]
Above, $\rho(t,u)$ is the solution to the heat equation \eqref{heatEqn}.
\end{corollary}
\section{Relative Entropy}\label{sec:relativeentropy}
In this section, we first bound the entropy production of the process in Subsection \ref{subsec:entropy}, then calculate the relative entropy in Subsection \ref{subsec:calculations}, and finally prove Theorem \ref{thm} in the last subsection. To shorten notation, we write $\rho_N (t,u) := \rho_* + N^{-\alpha} \rho (t,u)$, where $\rho(t,u)$ is the solution to the heat equation \eqref{heatEqn}. We also stress that, in the proofs below, the constant $C$ may change from line to line but never depends on the scaling parameter $N$.
\subsection{Entropy production.}\label{subsec:entropy} For $s \geq 0$, denote
\[f_s = f_{N,s} = \frac{d \mu_{N,s}}{d \nu_{\rho_*}^N}.\]
For any $\nu_{\rho_*}^N$--density $f$, the Dirichlet form of $f$ with respect to $\nu_{\rho_*}^N$ is defined as
\[D_N (f;\nu_{\rho_*}^N) := \left\langle\sqrt{f} (-\mathscr{L}_{N})\sqrt{f}\right\rangle_{\nu_{\rho_*}^N}. \]
Here, for any function $f$ and any distribution $\mu$ on $\Omega_N^d$, $\langle f \rangle_\mu := \int f \, d\mu$. Since $\nu_{\rho_*}^N$ is invariant for the generator $\mathscr{L}_{N}$, direct calculations show that
\[D_N (f;\nu_{\rho_*}^N) = \sum_{x,y \in \mathbb T_N^d} D_{x,y} (f;\nu_{\rho_*}^N),\]
where $D_{x,y}$ is the piece of Dirichlet form associated to the bond $(x,y)$,
\[D_{x,y} (f;\nu_{\rho_*}^N) = \frac{1}{2} \int g(\eta(x)) s(y-x) \big[ \sqrt{f(\eta^{x,y})} - \sqrt{f(\eta)} \big]^2 d \nu_{\rho_*}^N.\]
Above, $s(x) = [p(x)+p(-x)]/2$.
\begin{lemma}\label{lem:entropy}
There exists a constant $C > 0$ independent of $N$ such that for any $t > 0$,
\[H(\mu_{N,t}| \nu_{\rho_*}^N) + N^2 \int_0^t D_N (f_s;\nu_{\rho_*}^N) ds \leq C N^{d-2\alpha}. \]
\end{lemma}
\begin{proof}
Since $\nu_{\rho_*}^N$ is invariant for the zero range process, following the proof in \cite[Subsection 5.2]{klscaling} line by line, we have
\[H(\mu_{N,t}| \nu_{\rho_*}^N) + N^2 \int_0^t D_N (f_s;\nu_{\rho_*}^N) ds \leq H(\mu_{N,0}| \nu_{\rho_*}^N). \]
Therefore, to conclude the proof, we only need to show
\[H(\mu_{N,0}| \nu_{\rho_*}^N) \leq C N^{d-2\alpha}.\]
By direct calculations,
\begin{multline*}
H(\mu_{N,0}| \nu_{\rho_*}^N) = \int \log \frac{\mu_{N,0} (\eta)}{\nu_{\rho_*}^N (\eta)} \mu_{N,0} (d\eta) \\
= \sum_{x \in \mathbb T_N^d} \Big\{ \log \frac{Z(\Phi (\rho_*))}{Z(\Phi ( \rho_N (0,\tfrac{x}{N})))} + \rho_N (0,\tfrac{x}{N}) \log \frac{\Phi (\rho_N (0,\tfrac{x}{N}))}{\Phi(\rho_*)} \Big\}.
\end{multline*}
Using the basic inequality $\log (1+x) \leq x$, we bound the last line by
\begin{equation}\label{d1}
\sum_{x \in \mathbb T_N^d} \Big\{ \frac{Z(\Phi (\rho_*)) - Z(\Phi (\rho_N (0,\tfrac{x}{N})))}{Z(\Phi (\rho_N (0,\tfrac{x}{N})))}
+ \rho_N (0,\tfrac{x}{N}) \frac{\Phi (\rho_N (0,\tfrac{x}{N})) - \Phi (\rho_*)}{\Phi(\rho_*)} \Big\}.
\end{equation}
Note that for any $\rho > 0$,
\[\frac{Z^\prime (\Phi (\rho))}{Z(\Phi (\rho))} = \frac{\rho}{\Phi (\rho)}.\]
Then, by Taylor's expansion, the first term inside the brace in \eqref{d1} equals
\[\frac{\rho_N (0,\tfrac{x}{N})}{\Phi (\rho_N (0,\tfrac{x}{N}))} \big[ \Phi (\rho_* ) - \Phi (\rho_N (0,\tfrac{x}{N})) \big]+ \mathcal O (N^{-2\alpha}).\]
Therefore, we may bound the term \eqref{d1} by
\[
\sum_{x \in \mathbb T^d_N} \rho_N (0,\tfrac{x}{N}) \big(\Phi (\rho_N (0,\tfrac{x}{N})) - \Phi (\rho_*) \big) \Big[\frac{1}{\Phi (\rho_*)} - \frac{1}{\Phi (\rho_N (0,\tfrac{x}{N}))}\Big] + \mathcal O (N^{d-2\alpha}).
\]
We conclude the proof by noting that $|\Phi (\rho_N (0,\tfrac{x}{N})) - \Phi (\rho_*)| \leq C N^{-\alpha}$.
\end{proof}
\subsection{Calculations.}\label{subsec:calculations} Let
\[\psi_{N,t} := \frac{d \nu_{N,t}}{d \nu_{\rho_*}^N}.\]
Since $\nu_{N,t}$ and $\nu_{\rho_*}^N$ are both product measures, $\psi_{N,t}$ is explicitly given by
\[\psi_{N,t} (\eta)= \prod_{y \in \mathbb T_N^d} \frac{Z(\Phi(\rho_*))}{Z(\Phi (\rho_N(t,\tfrac{y}{N})))} \Big[\frac{\Phi (\rho_N(t,\tfrac{y}{N}))}{\Phi (\rho_*)}\Big]^{\eta(y)}.\]
Using Yau's relative entropy inequality (\emph{cf.}\;\cite[Lemma 6.1.4]{klscaling} for example),
\begin{equation}\label{entropy0}
\frac{dh_N(t)}{dt} \leq N^{-d} \int \big\{ \psi_{N,t}^{-1} N^{2} \mathscr{L}_{N}^* \psi_{N,t} - \partial_t \log \psi_{N,t} \big\} d \mu_{N,t}.
\end{equation}
Above, $\mathscr{L}_{N}^*$ is the adjoint of $\mathscr{L}_{N}$ in $L^2 (\nu_{\rho_*}^N)$ and corresponds to the zero range process with jump rate $p(-\cdot)$. More precisely, for any $f: \Omega_N^d \rightarrow \mathbb R$,
\[\mathscr{L}_{N}^* f (\eta) = \sum_{x,y\in \mathbb T_N^d} g(\eta(x)) p(-y) \big[ f(\eta^{x,x+y}) - f(\eta) \big].\]
\medskip
Now we calculate the right hand side of \eqref{entropy0}. By direct calculations,
\begin{multline}\label{lPsi}
\psi_{N,t}^{-1} N^{2-d} \mathscr{L}_{N}^* \psi_{N,t}
= N^{2-d} \sum_{x,y\in \mathbb T_N^d} g(\eta(y)) p(y-x) \bigg[ \frac{\Phi (\rho_N(t,\tfrac{x}{N}))}{\Phi (\rho_N(t,\tfrac{y}{N}))} - 1 \bigg]\\
= - N^{1-d} \sum_{y\in\mathbb T_N^d} \sum_{i=1}^d \frac{m_i \partial_{u_i} \Phi (\rho_N(t,\tfrac{y}{N}))}{\Phi (\rho_N(t,\tfrac{y}{N}))} \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))\big]\\
+ \frac{1}{2N^d} \sum_{y\in\mathbb T_N^d} \sum_{i,j=1}^d \frac{\sigma_{i,j} \partial^2_{u_i,u_j} \Phi (\rho_N(t,\tfrac{y}{N}))}{\Phi (\rho_N(t,\tfrac{y}{N}))} \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))\big] + \mathcal{E}_{N,1} + \mathcal O (N^{-1-\alpha}).
\end{multline}
Above, the term $\mathcal O (N^{-1-\alpha})$ comes from the error in Taylor's expansion, since there exists some constant $C$ independent of $N$ such that
\[\sup_{u \in \mathbb T^d} \big| \partial^3_{u_i,u_j,u_k} \Phi (\rho_N(t,u)) \big| \leq C N^{-\alpha}, \quad \forall 1 \leq i,j,k \leq d,\]
and the other error term is given by
\begin{equation}\label{en1}
\mathcal{E}_{N,1} = - N^{1-d} \sum_{y\in\mathbb T_N^d} \sum_{i=1}^d m_i \partial_{u_i} \Phi (\rho_N(t,\tfrac{y}{N}))
+ \frac{1}{2N^d} \sum_{y\in\mathbb T_N^d} \sum_{i,j=1}^d \sigma_{i,j} \partial^2_{u_i,u_j} \Phi (\rho_N(t,\tfrac{y}{N})).
\end{equation}
\begin{lemma}\label{lem:error1}
There exists a constant $C$ independent of $N$ such that
\[|\mathcal{E}_{N,1} | \leq C \big(N^{-1-\alpha} + N^{-3\alpha}\big).\]
In particular, since $\alpha \in (0,1)$, $\mathcal{E}_{N,1} = o(N^{-2\alpha})$.
\end{lemma}
\begin{proof} By direct calculations, for $1 \leq i,j \leq d$,
\begin{align}
\partial_{u_i} \Phi (\rho_N(t,u)) &= N^{-\alpha} \Phi^\prime (\rho_N(t,u)) \partial_{u_i} \rho (t,u),\label{firstDer}\\
\partial^2_{u_i,u_j} \Phi (\rho_N(t,u)) &= N^{-\alpha} \Phi^\prime (\rho_N(t,u)) \partial^2_{u_i,u_j} \rho (t,u)
+ N^{-2\alpha} \Phi^{\prime \prime}(\rho_N(t,u)) \partial_{u_i} \rho (t,u) \partial_{u_j} \rho (t,u).\label{secondDer}
\end{align}
By assumption \eqref{assump:initial},
\begin{equation}\label{nablaZero}
m \cdot \nabla \rho(t,u) = 0
\end{equation}
for all $t \geq 0$ and for all $u \in \mathbb T^d$. Therefore, the first term on the right hand side of \eqref{en1} is equal to zero. Since
\[\Phi^\prime (\rho_N(t,u)) = \Phi^\prime (\rho_*) + N^{-\alpha} \Phi^{\prime \prime} (\rho_*) \rho(t,u) + \mathcal O (N^{-2\alpha}), \quad \Phi^{\prime \prime} (\rho_N(t,u)) = \Phi^{\prime \prime} (\rho_*) + \mathcal O (N^{-\alpha}),\]
by \eqref{secondDer}, the second term in \eqref{en1} equals
\begin{multline}\label{en1-1}
\frac{1}{2N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \sum_{i,j = 1}^d \Phi^\prime (\rho_*) \sigma_{i,j} \partial_{u_i,u_j}^2 \rho(t,\tfrac{y}{N}) \\
+ \frac{1}{2N^{d+2 \alpha}} \sum_{y\in\mathbb T_N^d} \sum_{i,j = 1}^d \Phi^{\prime\prime} (\rho_*) \sigma_{i,j} \big[ \rho(t,\tfrac{y}{N}) \partial_{u_i,u_j}^2 \rho(t,\tfrac{y}{N}) + \partial_{u_i} \rho(t,\tfrac{y}{N}) \partial_{u_j} \rho(t,\tfrac{y}{N}) \big] \\
+ \mathcal O (N^{-3\alpha}).
\end{multline}
Since
\[\Big| \frac{1}{N^d} \sum_{y\in\mathbb T_N^d} \sum_{i,j = 1}^d \Phi^\prime (\rho_*) \sigma_{i,j} \partial_{u_i,u_j}^2 \rho(t,\tfrac{y}{N}) - \int_{\mathbb T^d} \sum_{i,j=1}^d \Phi^\prime (\rho_*) \sigma_{i,j} \partial_{u_i,u_j}^2 \rho(t,u) du \Big| \leq \frac{C}{N},\]
and since the heat equation \eqref{heatEqn} conserves the total mass,
\[\int_{\mathbb T^d} \frac{1}{2} \sum_{i,j=1}^d \Phi^\prime (\rho_*) \sigma_{i,j} \partial_{u_i,u_j}^2 \rho(t,u) du = \partial_t \int_{\mathbb T^d} \rho(t,u) du =0, \]
the first term in \eqref{en1-1} is bounded by $C N^{-1-\alpha}$. Similarly, the second term in \eqref{en1-1} is bounded by $C N^{-1-2\alpha}$ since
\[\int_{\mathbb T^d} \sum_{i,j=1}^d \sigma_{i,j} \big[ \rho(t,u) \partial_{u_i,u_j}^2 \rho(t,u) + \partial_{u_i} \rho(t,u) \partial_{u_j} \rho (t,u)\big]du = 0.\]
This concludes the proof.
\end{proof}
By \eqref{firstDer} and \eqref{nablaZero}, the first term on the right hand side of \eqref{lPsi} equals zero. By \eqref{secondDer} and Lemma \ref{lem:error1}, we rewrite $\psi_{N,t}^{-1} N^{2-d} \mathscr{L}_{N}^* \psi_{N,t}$ as
\begin{multline}\label{lPsi1}
\psi_{N,t}^{-1} N^{2-d} \mathscr{L}_{N}^* \psi_{N,t} \\= \frac{1}{2N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \sum_{i,j=1}^d \frac{\sigma_{i,j} \Phi^\prime (\rho_N(t,\tfrac{y}{N})) \partial^2_{u_i,u_j} \rho (t,\tfrac{y}{N})}{\Phi (\rho_N(t,\tfrac{y}{N}))}
\big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))\big] \\
+ \frac{1}{2N^{d+2\alpha}} \sum_{y\in\mathbb T_N^d} \sum_{i,j=1}^d \frac{ \sigma_{i,j} \Phi^{\prime \prime} (\rho_N(t,\tfrac{y}{N})) \partial_{u_i} \rho (t,\tfrac{y}{N}) \partial_{u_j} \rho (t,\tfrac{y}{N})}{\Phi (\rho_N(t,\tfrac{y}{N}))} \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))\big] + o(N^{-2\alpha}).
\end{multline}
Direct calculations yield
\[N^{-d} \partial_t \log \psi_{N,t} = \frac{1}{N^d} \sum_{y\in\mathbb T_N^d} \Big\{ \frac{\partial_t \Phi (\rho_N(t,\tfrac{y}{N}))}{\Phi (\rho_N(t,\tfrac{y}{N}))} \eta(y) - \partial_t \log Z \big(\Phi (\rho_N(t,\tfrac{y}{N})) \big) \Big\}.\]
Since
$E_{\nu_{N,t}} [\partial_t \log \psi_{N,t}] = E_{\nu_{\rho_*}^N} [\partial_t \psi_{N,t}] =0$ and $\rho_N (t,\frac{y}{N}) = E_{\nu_{N,t}} [\eta(y)] $,
we rewrite the last line as
\[N^{-d} \partial_t \log \psi_{N,t} = \frac{1}{N^d} \sum_{y\in\mathbb T_N^d} \frac{\partial_t \Phi (\rho_N(t,\tfrac{y}{N}))}{\Phi (\rho_N(t,\tfrac{y}{N}))} \big(\eta(y) - \rho_N(t,\tfrac{y}{N})\big).\]
Note that
\[\partial_t \Phi (\rho_N(t,u)) = N^{-\alpha}\Phi^\prime ( \rho_N (t,u)) \partial_t \rho (t,u) = \frac{\Phi^\prime (\rho_N(t,u))}{2N^\alpha} \sum_{i,j = 1}^d \sigma_{i,j} \Phi^\prime (\rho_*) \partial^2_{u_i,u_j} \rho (t,u).\]
Together with \eqref{lPsi1} and \eqref{entropy0}, we have
\begin{multline}\label{entropy1}
\frac{dh_N(t)}{dt} \leq \int \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,t} (y) \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N})) - \Phi^\prime (\rho_*) \big(\eta(y) - \rho_N(t,\tfrac{y}{N})\big) \big] d \mu_{N,t}\\
+ \int \frac{1}{N^{d+2\alpha}} \sum_{y\in\mathbb T_N^d} b_{N,t} (y) \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))\big] d\mu_{N,t}+ o(N^{-2\alpha}).
\end{multline}
Above,
\begin{align*}
a_{N,t} (y) &= \frac{1}{2} \sum_{i,j=1}^d \frac{\sigma_{i,j} \Phi^\prime (\rho_N(t,\tfrac{y}{N})) \partial^2_{u_i,u_j} \rho (t,\tfrac{y}{N})}{\Phi (\rho_N(t,\tfrac{y}{N}))} ,\\
b_{N,t} (y) &= \frac{1}{2} \sum_{i,j=1}^d \frac{ \sigma_{i,j} \Phi^{\prime\prime} (\rho_N(t,\tfrac{y}{N}))\partial_{u_i} \rho (t,\tfrac{y}{N}) \partial_{u_j} \rho (t,\tfrac{y}{N})}{\Phi (\rho_N(t,\tfrac{y}{N}))} .
\end{align*}
\subsection{Proof of Theorem \ref{thm}.}\label{subsec:proof} In this subsection, we treat the two terms on the right hand side of \eqref{entropy1} in turn. We first deal with the second one, since it is simpler.
\begin{lemma}\label{lem:error0}
There exists a constant $C$ independent of $N$ such that
\[\int \frac{1}{N^{d+2\alpha}} \sum_{y\in\mathbb T_N^d} b_{N,t} (y) \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))\big] d\mu_{N,t} \leq C N^{-3\alpha}.\]
\end{lemma}
\begin{proof}
Since $|\Phi (\rho_N(t,\tfrac{y}{N}))- \Phi (\rho_*)| \leq C N^{-\alpha}$, we only need to prove
\begin{equation*}
\int \frac{1}{N^{d+2\alpha}} \sum_{y\in\mathbb T_N^d} b_{N,t} (y) \big[g(\eta(y)) - \Phi (\rho_*)\big] d\mu_{N,t} \leq CN^{-3\alpha}.
\end{equation*}
By the entropy inequality (\emph{cf.}\;\cite[Appendix 1.8]{klscaling} for example), the left hand side above is bounded by
\[ \frac{H(\mu_{N,t}|\nu_{\rho_*}^N)}{N^{d+\alpha}} + \frac{1}{N^{d+\alpha}} \log E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{1}{N^{\alpha}} \sum_{y\in\mathbb T_N^d} b_{N,t} (y) \big[g(\eta(y)) - \mathbb Phi (\rho_*)\big] \Big\}\Big].\]
By Lemma \ref{lem:entropy}, $H(\mu_{N,t}|\nu_{\rho_*}^N) \leq CN^{d-2\alpha}$, hence the first term above is bounded by $CN^{-3\alpha}$. Since $\nu_{\rho_*}^N$ is a product measure, the second term in the last expression equals
\begin{equation}\label{a1}
\frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \log E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{1}{N^{\alpha}} b_{N,t} (y) \big[g(\eta(y)) - \Phi (\rho_*)\big] \Big\}\Big].
\end{equation}
For a random variable $X$ such that $E[X] = 0$ and $E[e^{\lambda X}] < \infty$ for all $\lambda \in \mathbb R$, we claim that for any $\lambda_0 > 0$, there exists a constant $C=C(\lambda_0)$ such that for all $0 < \lambda < \lambda_0$,
\begin{equation}\label{f1}
\log E [e^{\lambda X}] \leq C E[X^4]^{1/2} \lambda^2.
\end{equation}
Indeed, since $e^x \leq 1 + x + (x^2/2) e^{|x|}$ and $\log (1+x) \leq x$, for any $0 < \lambda < \lambda_0$,
\[
\log E [e^{\lambda X}] \leq \frac{\lambda^2}{2} E \big[ X^2 e^{\lambda |X|}\big] \leq \frac{\lambda^2}{2} E \big[ X^4]^{1/2} E \big[ e^{2 \lambda_0 |X|}\big]^{1/2} =: C (\lambda_0)E[X^4]^{1/2} \lambda^2.
\]
Using the above claim, we bound \eqref{a1} by $CN^{-3\alpha}$ for large $N$. This completes the proof.
\end{proof}
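The moment bound \eqref{f1} is easy to probe numerically. In the sketch below (our illustration) we take $X$ Rademacher, so that $E[e^{\lambda X}] = \cosh\lambda$, $E[X^4] = 1$, and the constant produced by the proof is $C(\lambda_0) = \tfrac12 E[e^{2\lambda_0|X|}]^{1/2} = \tfrac12 e^{\lambda_0}$.

```python
from math import cosh, exp, log

def check_mgf_bound(lam0=1.0, grid=50):
    """Check log E[e^{lam X}] <= C(lam0) E[X^4]^{1/2} lam^2 for 0 < lam < lam0,
    with X Rademacher (P(X = 1) = P(X = -1) = 1/2), on a grid of lam values."""
    C = 0.5 * exp(lam0)              # (1/2) E[e^{2 lam0 |X|}]^{1/2} = e^{lam0}/2
    fourth_moment = 1.0              # E[X^4] = 1 for Rademacher X
    for i in range(1, grid):
        lam = lam0 * i / grid
        if log(cosh(lam)) > C * fourth_moment ** 0.5 * lam ** 2:
            return False
    return True
```

Of course this only tests one distribution; the point of the claim is that the constant depends on $X$ only through $E[X^4]$ and $E[e^{2\lambda_0|X|}]$.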
Using the same argument as in Lemma \ref{lem:error0}, we can replace $\Phi^\prime (\rho_*)$ by $\Phi^\prime (\rho_N (t,\tfrac{y}{N}))$ in the first line of \eqref{entropy1} up to an error of order $\mathcal O (N^{-3\alpha})$. Up to now, we have shown
\begin{multline}\label{entropy2}
\frac{dh_N(t)}{dt} \leq \int \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,t} (y) \big[g(\eta(y)) -\Phi (\rho_N(t,\tfrac{y}{N})) \\
- \Phi^\prime (\rho_N(t,\tfrac{y}{N})) \big(\eta(y) - \rho_N(t,\tfrac{y}{N})\big) \big] d \mu_{N,t}
+o(N^{-2\alpha}).
\end{multline}
\medskip
For a positive integer $\ell = \ell(N)$ and for any sequence $\{\phi(x)\}_{x \in \mathbb Z^d}$, define
\[\bar{\phi}^\ell (x) = \frac{1}{(2\ell+1)^d} \sum_{|y-x| \leq \ell} \phi (y).\]
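The block average just defined is straightforward to compute; the following sketch (ours, for the one-dimensional torus only) may help fix ideas.

```python
def block_average(phi, ell):
    """bar{phi}^ell(x) = (2*ell+1)^{-1} * sum_{|y-x| <= ell} phi(y), on the
    1-d discrete torus of length len(phi) (sketch of the definition above)."""
    N = len(phi)
    width = 2 * ell + 1
    return [sum(phi[(x + y) % N] for y in range(-ell, ell + 1)) / width
            for x in range(N)]
```

Averaging over blocks preserves the total mass of the sequence, which is why block estimates are compatible with the conservation law of the dynamics.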
In the following, we shall take
\begin{itemize}
\item either $\ell = c N^{\frac{\alpha+d/2}{d+1}}$ for some sufficiently small constant $c > 0$ when applying the spectral gap estimate (\emph{cf.} Lemma \ref{lem:spectralgap}),
\item or $\ell = N^{\frac{\alpha+1}{d+1}}$ when applying the logarithmic Sobolev inequality (\emph{cf.} Lemma \ref{lem:lsi}).
\end{itemize}
\medskip
\begin{lemma}There exists a constant $C$ independent of $N$ such that
\begin{multline}\label{e1}
\int \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \big[ a_{N,t} (y) - \bar{a}_{N,t}^\ell (y) \big] \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N})) \\- \Phi^\prime (\rho_N(t,\tfrac{y}{N})) \big(\eta(y) - \rho_N(t,\tfrac{y}{N})\big) \big] d \mu_{N,t}
\leq \frac{C \ell^2}{N^{2+2 \alpha}}.
\end{multline}
In particular, with the above choice of $\ell$, the above term is of order $o(N^{-2\alpha})$.
\end{lemma}
\begin{remark}
By Taylor's expansion, the above formula is bounded by $C \ell^2/ N^{2+\alpha}$. However, the bound is of order $o(N^{-2\alpha})$ only for $\alpha < (d+2)/(d+3)$ when $\ell = c N^{\frac{\alpha+d/2}{d+1}}$, and only for $\alpha < 2d / (d+3)$ when $\ell = N^{\frac{\alpha+1}{d+1}}$, which is not optimal.
\end{remark}
\begin{proof}
Since the proof is essentially the same as that of Lemma \ref{lem:error0}, we only sketch it. Note that $\big| a_{N,t} (y) - \bar{a}_{N,t}^\ell (y) \big| \leq C \ell^2 / N^2$. Let
\[A_{N,t}^\ell (\eta) = \frac{N^2}{\ell^2} \big[ a_{N,t} (0) - \bar{a}_{N,t}^\ell (0) \big] \big[g(\eta(0)) - \Phi (\rho_*) - \Phi^\prime (\rho_*) \big(\eta(0) - \rho_* \big) \big].\]
We pay a price of order $\mathcal O (\ell^2 N^{-2-2\alpha})$ by replacing $\rho_N (t,\frac{y}{N})$ with $\rho_*$ in \eqref{e1}. By entropy inequality, Lemma \ref{lem:entropy} and inequality \eqref{f1},
\begin{multline*}
\int \frac{\ell^2}{N^{d+\alpha+2}} \sum_{y \in \mathbb T_N^d} \tau_y A_{N,t}^\ell (\eta) d \mu_{N,t} \leq \frac{H(\mu_{N,t} | \nu_{\rho_*}^N) }{N^{d+2} \ell^{-2}} \\
+ \frac{1}{N^{d+2} \ell^{-2}} \sum_{y \in \mathbb T_N^d} \log \int \exp \Big\{ \frac{1}{N^\alpha} \tau_y A^\ell_{N,t} (\eta)\Big\} d \nu_{\rho_*}^N \leq \frac{C \ell^2}{N^{2+2\alpha}}.
\end{multline*}
This concludes the proof.
\end{proof}
Using the summation by parts formula,
\begin{multline}\label{b1}
\int \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \bar{a}_{N,t}^\ell (y) \big[g(\eta(y)) - \Phi (\rho_N(t,\tfrac{y}{N}))- \Phi^\prime (\rho_N(t,\tfrac{y}{N})) \big(\eta(y) - \rho_N(t,\tfrac{y}{N})\big) \big] d \mu_{N,t} \\
= \int \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,t} (y) \big[\bar{g}^\ell (\eta(y)) - \Phi (\bar{\rho}^\ell_N(t,\tfrac{y}{N})) - \Phi^\prime (\bar{\rho}^\ell_N(t,\tfrac{y}{N})) \big(\bar{\eta}^\ell(y) - \bar{\rho}^\ell_N(t,\tfrac{y}{N})\big) \big] d \mu_{N,t} \\
+ \sum_{j=2}^3 \mathcal{E}_{N,j},
\end{multline}
where
\begin{align*}
\mathcal{E}_{N,2} =& \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,t} (y) \Big[ \Phi (\bar{\rho}^\ell_N(t,\tfrac{y}{N})) - \bar{\Phi}^\ell (\rho_N(t,\tfrac{y}{N})) \Big],\\
\mathcal{E}_{N,3} =& \int \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,t} (y) \Big[ \Phi^\prime (\bar{\rho}^\ell_N(t,\tfrac{y}{N})) \big(\bar{\eta}^\ell(y) - \bar{\rho}^\ell_N(t,\tfrac{y}{N})\big) \\
&- \frac{1}{(2\ell+1)^d} \sum_{|x-y| \leq \ell} \Phi^\prime (\rho_N(t,\tfrac{x}{N})) \big(\eta(x) - \rho_N(t,\tfrac{x}{N})\big) \Big] d \mu_{N,t}.
\end{align*}
Since for any $|x-y| \leq \ell$, $|\rho_N(t,\tfrac{x}{N}) - \rho_N (t,\tfrac{y}{N}) |\leq C \ell N^{-1-\alpha}$, it is easy to see
\[\big| \mathcal{E}_{N,j} \big| \leq C \ell N^{-1-2\alpha} = o(N^{-2\alpha}), \quad j = 2,3.\]
In order to deal with the first term on the right hand side of \eqref{b1}, we need the following one-block estimate, whose proof is postponed to Section \ref{sec:oneblock}.
\begin{lemma}[One-block estimate]\label{lem:oneblock}
With the choice of $\ell(N)$ as above, for any continuous function $F: \mathbb R_+ \times \mathbb T^d \rightarrow \mathbb R$,
\[\int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s, \tfrac{y}{N}\big) \tau_y V_g^\ell (\eta) \Big] ds = o(N^{-2\alpha}),\]
where
\[V_g^\ell (\eta) := \bar{g}^\ell (\eta(0)) - \Phi \big( \bar{\eta}^{\ell} (0)\big) .\]
\end{lemma}
\medskip
By Lemma \ref{lem:oneblock} and \eqref{b1}, we have shown that
\begin{equation}\label{entropy3}
h_N(t) \leq h_N (0) + \int^t_0 E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,s} (y) W^\ell_y (s,\eta)\Big] ds + o(N^{-2\alpha}),
\end{equation}
where
\[W^\ell_y (t, \eta) := \Phi (\bar{\eta}^\ell (y)) - \Phi (\bar{\rho}^\ell_N(t,\tfrac{y}{N})) - \Phi^\prime (\bar{\rho}^\ell_N(t,\tfrac{y}{N})) \big(\bar{\eta}^\ell(y) - \bar{\rho}^\ell_N(t,\tfrac{y}{N})\big). \]
In order to deal with the integral in \eqref{entropy3}, we first state a lemma and refer the reader to \cite[P.193]{toth2002between} for its proof.
\begin{lemma}[{\cite[Lemma 2]{toth2002between}}]\label{lem:ineqn1}
Let $\zeta_i,\,i\geq 1,$ be independent random variables with mean zero. For any $\lambda > 0$, assume
\[\Lambda_i (\lambda) = \log E [e^{\lambda \zeta_i}] < \infty.\]
Assume further that there exist constants $\lambda_0 > 0$ and $C_0 > 0$ such that $\Lambda_i (\lambda) \leq C_0 \lambda^2$ for all $0 < \lambda < \lambda_0$ and for all $i \geq 1$. Let $G: \mathbb R \rightarrow \mathbb R_+$ be smooth and satisfy $G(u) \leq C_1 (|u| \wedge u^2)$ for some constant $C_1 > 0$. Denote $S_\ell = \sum_{i=1}^\ell \zeta_i$. Then, there exist constants $\gamma_0 > 0$ and $C_2 < \infty$ such that for any $\ell > \gamma_0^{-1}$, for any $0 < \gamma < \gamma_0$,
\[\log E \big[ \exp \big\{ \gamma \ell G(S_\ell/ \ell)\big\}\big] < C_2.\]
\end{lemma}
\medskip
Now we continue to treat \eqref{entropy3}. By entropy inequality, for any $\gamma > 0$,
\[E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} a_{N,s} (y) W^\ell_y (s, \eta) \Big] \leq \frac{h_N(s)}{\gamma} + \frac{1}{\gamma N^d} \log E_{\nu_{N,s}} \Big[ \exp \Big\{ \frac{\gamma}{N^\alpha} \sum_{y\in\mathbb T_N^d} a_{N,s} (y) W^\ell_y (s, \eta)\Big\}\Big]. \]
Since $W^\ell_y (s, \eta)$ and $ W^\ell_{y^\prime} (s, \eta) $ are independent under $\nu_{N,s}$ if $|y-y^\prime| > 2 \ell$, by H{\"o}lder's inequality, the second term in the last line is bounded by
\[ \frac{1}{\gamma N^d (2\ell+1)^d} \sum_{y\in\mathbb T_N^d} \log E_{\nu_{N,s}} \Big[ \exp \Big\{ \frac{\gamma (2\ell+1)^d}{N^\alpha} a_{N,s} (y) W^\ell_y (s, \eta) \Big\}\Big].\]
Take $\gamma = \gamma_0 N^\alpha$ for some fixed but small enough $\gamma_0$ and take
\[G(u) = \Phi (u+\bar{\rho}_N^\ell (t,\tfrac{y}{N})) - \Phi (\bar{\rho}_N^\ell (t,\tfrac{y}{N})) - \Phi^\prime (\bar{\rho}_N^\ell (t,\tfrac{y}{N})) u, \quad u =\bar{\eta}^\ell (y) - \bar{\rho}_N^\ell (t,\tfrac{y}{N})\]
in Lemma \ref{lem:ineqn1}, then the above term is bounded by $C \ell^{-d} N^{-\alpha}$ for some constant $C = C(\gamma_0)$. Therefore,
\begin{equation}\label{entropy4}
h_N(t) \leq h_N (0) + \frac{C}{N^\alpha} \int_0^t h_N (s) ds + \frac{C}{\ell^d N^\alpha} + o(N^{-2\alpha}).
\end{equation}
With the choices of $\ell$ above,
\[\frac{1}{\ell^d N^\alpha} = \mathcal O(N^{-\frac{(4d+2)\alpha+d^2}{2(d+1)}}) \quad \text{or} \quad \frac{1}{\ell^d N^\alpha} = \mathcal O (N^{-\frac{(2d+1)\alpha + d}{d+1}}).\]
In both cases, the third term in \eqref{entropy4} is of order $o(N^{-2\alpha})$ if $d \geq 2$ and $\alpha \in (0,1)$. We conclude the proof of Theorem \ref{thm} by using Gr{\"o}nwall's inequality.
\section{One-block Estimate}\label{sec:oneblock}
In this section, we prove Lemma \ref{lem:oneblock}. In order to cut off large densities, for each $M > 0$, let
\[V_{g,M}^\ell (\eta) =V_g^\ell ( \eta) \mathbf{1}_{\{ \bar{\eta}^{\ell} (0) \leq M\}}. \]
Then, the expression in Lemma \ref{lem:oneblock} equals
\begin{multline}\label{oneblock0}
\int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s, \tfrac{y}{N}\big) \tau_y \big(V_g^\ell (\eta) \mathbf{1}_{\{ \bar{\eta}^{\ell} (0) > M\}} \big) \Big] ds\\
+ \int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s, \tfrac{y}{N}\big) \tau_y V_{g,M}^\ell (\eta) \Big] ds.
\end{multline}
\subsection{Cut off large densities.} In this subsection, we bound the first term in \eqref{oneblock0}.
\begin{lemma}\label{cutoff} There exists some constant $C=C(M,F,t)$ such that for $N$ large enough,
\[\int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s, \tfrac{y}{N}\big) \tau_y \big(V_g^\ell (\eta) \mathbf{1}_{\{ \bar{\eta}^{\ell} (0) > M\}} \big) \Big] ds \leq C \Big( \frac{\log N}{N^{3\alpha}} + \frac{\log N}{\ell^d N^\alpha} \exp \{ - C \ell^d\} \Big).\]
In particular, with the choices of $\ell=\ell(N)$ given in Subsection \ref{subsec:proof}, the above term has order $o(N^{-2\alpha})$.
\end{lemma}
\begin{proof}
By Assumption \ref{assump:g} $(i)$, $\big| V_g^\ell (\eta) \big| \leq C(a_0) \bar{\eta}^\ell (0)$. Therefore, the term on the left hand side in the lemma is bounded by
\[C \|F\|_\infty \int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \bar{\eta}^\ell (y) \mathbf{1}_{\{ \bar{\eta}^{\ell} (y) > M\}} \Big] ds.\]
By entropy inequality \cite[Appendix 1.8]{klscaling}, for any $\gamma > 0$, the integral above is bounded by
\begin{equation}\label{cutoff1}
\int_0^t \frac{H (\mu_{N,s} | \nu_{\rho_*}^N)}{\gamma} ds + \frac{t}{\gamma} \log E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{\gamma}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \bar{\eta}^\ell (y) \mathbf{1}_{\{ \bar{\eta}^{\ell} (y) > M\}} \Big\}\Big]
\end{equation}
By Lemma \ref{lem:entropy}, for any $0 \leq s \leq t$,
\[H (\mu_{N,s} | \nu_{\rho_*}^N) \leq H (\mu_{N,0} | \nu_{\rho_*}^N) \leq C N^{d-2\alpha}.\]
For the second term in \eqref{cutoff1}, since $\bar{\eta}^\ell (y)$ and $\bar{\eta}^\ell (z)$ are independent if $|y-z| > 2 \ell$, by H{\" o}lder's inequality,
\begin{multline*}
\log E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{\gamma}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} \bar{\eta}^\ell (y) \mathbf{1}_{\{ \bar{\eta}^{\ell} (y) > M\}} \Big\} \Big] \\
\leq \frac{N^d}{(2\ell+1)^d} \log E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{\gamma (2\ell+1)^d}{N^{d+\alpha}} \bar{\eta}^\ell (0) \mathbf{1}_{\{ \bar{\eta}^{\ell} (0) > M\}} \Big\} \Big]
\end{multline*}
By using the basic inequality $\log (1+x) \leq x$, we bound the last line by
\begin{multline*}
\frac{N^d}{(2\ell+1)^d} \log E_{\nu_{\rho_*}^N} \Big[ 1+ \exp \Big\{ \frac{\gamma (2\ell+1)^d}{N^{d+\alpha}} \bar{\eta}^\ell (0) \Big\} \mathbf{1}_{\{ \bar{\eta}^{\ell} (0) > M\}} \Big]\\
\leq \frac{N^d}{(2\ell+1)^d} E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{\gamma (2\ell+1)^d}{N^{d+\alpha}} \bar{\eta}^\ell (0) \Big\} \mathbf{1}_{\{ \bar{\eta}^{\ell} (0) > M\}} \Big].
\end{multline*}
By Cauchy-Schwarz inequality, the expectation in the last line is bounded by
\begin{equation}\label{cutoff2}
E_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \frac{2 \gamma (2\ell+1)^d}{N^{d+\alpha}} \bar{\eta}^\ell (0) \Big\} \Big]^{1/2} P_{\nu_{\rho_*}^N} \Big( \bar{\eta}^{\ell} (0) > M \Big)^{1/2}.
\end{equation}
For $\lambda \geq 0$ and $\theta \in \mathbb R$, define
\[\Lambda (\lambda) := \log E_{\nu_{\rho_*}^N} \big[ e^{\lambda \eta(0)}\big], \quad I(\theta) := \sup_{\lambda} \{ \lambda \theta - \Lambda (\lambda)\}.\]
Recall that, under Assumption \ref{assump:g}, $\Lambda (\lambda) < \infty$ for any $\lambda \geq 0$. By standard large deviation estimates, we may bound \eqref{cutoff2} by
\[ \exp \Big\{ C(2\ell+1)^d \big[ \Lambda (2\gamma / N^{d+\alpha}) - I(M) \big]\Big\}.\]
To sum up, we bound \eqref{cutoff1} by
\begin{equation}\label{cutoff3}
\frac{C \|F\|_\infty tN^{d}}{\gamma N^{2\alpha}} + \frac{ C \|F\|_\infty t N^d}{\gamma (2\ell+1)^d} \exp \Big\{ C(2\ell+1)^d \big[ \Lambda (2\gamma / N^{d+\alpha}) - I(M) \big]\Big\}.
\end{equation}
Now, take $\gamma = N^{d+\alpha} / \log N$. Since $\lim_{\lambda \rightarrow 0} \Lambda (\lambda) = 0$, for $N$ large enough, \[\Lambda (2\gamma / N^{d+\alpha}) = \Lambda(2/\log N) < I(M)/2.\] Hence, \eqref{cutoff3} is bounded by
\[C \|F\|_\infty t \Big( \frac{\log N}{N^{3\alpha}} + \frac{\log N}{\ell^d N^\alpha} \exp \{ - C I (M) \ell^d\} \Big).\]
This concludes the proof of the lemma.
\end{proof}
\subsection{Estimates by spectral gap inequality.} In this subsection, we bound the second term in \eqref{oneblock0} by using the spectral gap estimates for the zero range process.
\begin{lemma}\label{lem:spectralgap}
Let $F: \mathbb R_+ \times \mathbb T^d \rightarrow \mathbb R$ be continuous. There exists $\varepsilon_0 > 0$ such that for any $\ell$ satisfying $M \ell^{d+1} < \varepsilon_0 N^{d/2 + \alpha}$, there exists $C = C(M,F,t)$ such that
\begin{equation}\label{ob}
\int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s,\tfrac{y}{N}\big) \tau_y V_{g,M}^\ell (\eta) \Big] ds \leq C \Big( \frac{1}{N^\alpha \ell^d} + \frac{ \ell}{N^{2\alpha+d/2}}\Big).
\end{equation}
In particular, by taking
\[\ell = \Big(\frac{\varepsilon_0}{2M}\Big)^{1/(d+1)} N^{\frac{\alpha+d/2}{d+1}},\]
the left hand side in \eqref{ob} is bounded by $C N^{-\frac{(4d+2)\alpha + d^2}{2(d+1)}} = o(N^{-2\alpha})$.
\end{lemma}
\begin{proof}
By entropy inequality, for any $\gamma > 0$,
\begin{multline}\label{oneblock1}
\int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s,\tfrac{y}{N}\big) \tau_y V_{g,M}^\ell \Big] ds \leq \frac{H(\mu_{N,0} | \nu_{\rho_*}^N)}{\gamma N^d} \\
+ \frac{1}{\gamma N^d} \log \mathbb{E}_{\nu_{\rho_*}^N} \Big[ \exp \Big\{ \int_0^t \frac{\gamma \|F\|_\infty}{N^\alpha} \sum_{y\in\mathbb T_N^d} \big| \tau_y V_{g,M}^\ell (\eta_s) \big| ds \Big\}\Big].
\end{multline}
By Lemma \ref{lem:entropy}, $H(\mu_{N,0} | \nu_{\rho_*}^N) \leq C N^{d-2\alpha}$. Using the basic inequality
\[e^{|x|} \leq e^x + e^{-x}, \forall x, \quad \text{and} \quad \log (a+b) \leq \log 2 + \max \{\log a, \log b\}, \forall a, b > 0,\]
we can remove the absolute value inside the exponential in the second term above. By the Feynman--Kac formula \cite[Lemma A1.7.2]{klscaling}, the second term in \eqref{oneblock1} is bounded by $t \Gamma_N/(\gamma N^d)$,
where $\Gamma_N$ is the largest eigenvalue of the operator $N^2 \mathscr{L}_N^s + \frac{\gamma \|F\|_\infty}{N^\alpha} \sum_{y\in\mathbb T_N^d} \tau_y V_{g,M}^\ell (\eta)$ with $\mathscr{L}_{N}^s := (\mathscr{L}_{N} + \mathscr{L}_{N}^*)/2$ being the symmetric part of $\mathscr{L}_{N}$ in $L^2 (\nu_{\rho_*}^N)$. Moreover, $t \Gamma_N/(\gamma N^d)$ admits the following variational representation:
\begin{equation}\label{oneblock2}
t \, \sup_{f:\,\nu_{\rho_*}^N\text{-density}} \Big\{ \frac{\|F\|_\infty}{N^{d+\alpha}} E_{\nu_{\rho_*}^N} \Big[ \sum_{y\in\mathbb T_N^d} \tau_y V_{g,M}^\ell (\eta) f(\eta) \Big] - \frac{N^2}{\gamma N^d} D_N (f;\nu_{\rho_*}^N)\Big\}.
\end{equation}
Above, recall $D_N (f;\nu_{\rho_*}^N)$ is the Dirichlet form of the density $f$ with respect to the measure $\nu_{\rho_*}^N$,
\[D_N (f;\nu_{\rho_*}^N) := \left\langle\sqrt{f} (-\mathscr{L}_{N})\sqrt{f}\right\rangle_{\nu_{\rho_*}^N} = \left\langle\sqrt{f} (-\mathscr{L}_N^s)\sqrt{f}\right\rangle_{\nu_{\rho_*}^N}. \]
For any $\nu_{\rho_*}^N$--density $f$, denote
\[\bar{f} = \frac{1}{N^d} \sum_{y\in\mathbb T_N^d} \tau_y f.\]
By translation invariance and convexity of the Dirichlet form,
\[D_N (f;\nu_{\rho_*}^N) \geq D_N (\bar{f};\nu_{\rho_*}^N).\]
This permits us to bound \eqref{oneblock2} by
\begin{equation}\label{oneblock3}
t \, \sup_{f} \Big\{ \frac{\|F\|_\infty}{N^{\alpha}} E_{\nu_{\rho_*}^N} \Big[ V_{g,M}^\ell (\eta) f(\eta) \Big] - \frac{N^2}{\gamma N^d} D_N (f;\nu_{\rho_*}^N)\Big\}.
\end{equation}
Above, the supremum is over all translation invariant $\nu_{\rho_*}^N$--densities $f$.
Denote by $\Lambda_\ell^d := \{-\ell,-\ell+1, \ldots, \ell\}^d$ the $d$-dimensional cube of side length $2\ell+1$ and denote $\Omega_\ell^d := \mathbb N^{\Lambda_\ell^d}$. Let $\nu_{\rho_*}^\ell$ be the product measure on $\Omega_\ell^d$ with particle density $\rho_*$. Denote $f_\ell := E_{\nu_{\rho_*}^N} [f | \mathcal{F}_\ell]$ with $\mathcal{F}_\ell$ being the $\sigma$-algebra generated by $(\eta(x), x\in \Lambda_\ell^d)$. Let
\begin{align*}
D_\ell (f;\nu_{\rho_*}^N) &= \sum_{x,y \in \Lambda_\ell^d} D_{x,y} (f;\nu_{\rho_*}^N), \\
D_\ell (f_\ell;\nu_{\rho_*}^\ell) &= \left\langle\sqrt{f}(-\mathscr{L}_\ell^s) \sqrt{f}\right\rangle_{\nu_{\rho_*}^\ell}\\
&= \frac{1}{2} \sum_{x,y \in \Lambda_\ell^d} \int_{\Omega_\ell^d} g(\eta(x)) s(y-x) \big[ \sqrt{f(\eta^{x,y})} - \sqrt{f(\eta)} \big]^2 d \nu_{\rho_*}^\ell,
\end{align*}
where the symmetric generator $\mathscr{L}_\ell^s$ acts on functions $f:\Omega_\ell^d \rightarrow \mathbb R$ as
\[\mathscr{L}_\ell^s f (\eta) = \sum_{x,y\in \Lambda_\ell^d} g(\eta(x)) s(y-x) \big[ f(\eta^{x,y}) - f(\eta) \big].\]
Since $f$ is translation invariant, by convexity of the Dirichlet form,
\[D_N (f;\nu_{\rho_*}^N) \geq\frac{N^d}{(2\ell+1)^d} D_\ell (f;\nu_{\rho_*}^N) \geq \frac{N^d}{(2\ell+1)^d} D_\ell (f_\ell;\nu_{\rho_*}^\ell).\]
Therefore, \eqref{oneblock3} is bounded by
\begin{equation}\label{oneblock4}
t \, \sup_{f} \Big\{ \frac{\|F\|_\infty}{N^{\alpha}} E_{\nu_{\rho_*}^\ell} \Big[ V_{g,M}^\ell (\eta) f_\ell (\eta) \Big] - \frac{N^2}{\gamma (2\ell+1)^d} D_\ell (f_\ell;\nu_{\rho_*}^\ell)\Big\}.
\end{equation}
To decompose the above term on hyperplanes with a fixed number of particles, we first introduce the following notation. For $j \geq 0$, let
\[ \Omega_{\ell,j}^d = \big\{ \eta \in \Omega_\ell^d: \sum_{x \in \Lambda_\ell^d} \eta (x) = j \big\}, \]
and let $\nu_{\ell,j}$ be the conditional measure of $\nu_{\rho_*}^\ell$ on $\Omega_{\ell,j}^d$, \emph{i.e.}\;$\nu_{\ell,j} (\cdot) = \nu_{\rho_*}^\ell (\cdot | \Omega_{\ell,j}^d)$. Note that $\nu_{\ell,j}$ is independent of the density $\rho_*$. Define
\[m_{\ell,j} (f) = E_{\nu_{\rho_*}^\ell} [f_\ell \mathbf{1}_{\{\bar{\eta}^\ell (0)= j / (2\ell+1)^d\}}], \quad f_{\ell,j} = \frac{f_\ell}{E_{\nu_{\ell,j}} [f_\ell]}. \]
Then,
\[E_{\nu_{\rho_*}^\ell} \Big[ V_{g,M}^\ell (\eta) f_\ell (\eta) \Big] = \sum_{j=0}^{M (2\ell+1)^d} m_{\ell,j} (f) E_{\nu_{\ell,j}} [V_{g,M}^\ell f_{\ell,j}].\]
Moreover,
\[D_\ell (f_\ell;\nu_{\rho_*}^\ell) = \sum_{j=0}^{\infty} m_{\ell,j} (f) D_\ell (f_{\ell,j};\nu_{\ell,j}) \geq \sum_{j=0}^{M (2\ell+1)^d} m_{\ell,j} (f) D_\ell (f_{\ell,j};\nu_{\ell,j}).\]
Since $f_\ell$ is a density with respect to $\nu_{\rho_*}^\ell$,
\[\sum_{j=0}^{M (2\ell+1)^d} m_{\ell,j} (f) \leq E_{\nu^\ell_{\rho_*}} [f_\ell] = 1.\]
Then, we may bound \eqref{oneblock4} by
\begin{equation}\label{oneblock5}
t \sup_{f} \sup_{0 \leq j \leq (2\ell+1)^d M} \Big\{ \frac{\|F\|_\infty}{N^\alpha} E_{\nu_{\ell,j}} [V_{g,M}^\ell f_{\ell,j}] - \frac{N^d}{\gamma (2\ell+1)^d} D_\ell (f_{\ell,j};\nu_{\ell,j}) \Big\}.
\end{equation}
Note that $V_{g,M}^\ell$ does not have mean zero with respect to the measure $\nu_{\ell,j}$. In order to center it, define
\begin{align*}
U_{g,M}^{\ell,j} :=& \;\bar{g}^\ell(\eta(0)) \mathbf{1}_{\{\bar{\eta}^\ell (0) \leq M\}}- E_{\nu_{\ell,j}} \big[ \bar{g}^\ell(\eta(0)) \mathbf{1}_{\{\bar{\eta}^\ell (0) \leq M\}} \big]\\
=& \;\bar{g}^\ell(\eta(0)) \mathbf{1}_{\{\bar{\eta}^\ell (0) \leq M\}}- E_{\nu_{\ell,j}} \big[ g(\eta(0))\big].
\end{align*}
The last identity holds because $j \leq M (2\ell+1)^d$. We may rewrite $V_{g,M}^\ell$ as
\[V_{g,M}^\ell = U_{g,M}^{\ell,j} + E_{\nu_{\ell,j}} \big[ g(\eta(0))\big] - \Phi (\bar{\eta}^\ell (0)) \mathbf{1}_{\{\bar{\eta}^\ell (0) \leq M\}}.\]
Let $\lambda_g^{\ell,j} (-\mathscr{L}_\ell^s)$ denote the largest eigenvalue of the perturbed operator $\mathscr{L}_\ell^s + \gamma (2\ell+1)^d \|F\|_\infty U_{g,M}^{\ell,j} / N^{d+\alpha}$.
Then, \eqref{oneblock5} is bounded by
\begin{equation}\label{oneblock6}
\frac{t \|F\|_\infty}{N^\alpha} \sup_{0 \leq j \leq (2\ell+1)^d M } \big\{ E_{\nu_{\ell,j}} [g(\eta(0))] - \Phi \big(j/(2\ell+1)^d\big)\big\}+ \frac{t N^d}{\gamma (2\ell+1)^d} \; \sup_{0 \leq j \leq (2\ell+1)^d M} \lambda_g^{\ell,j} (-\mathscr{L}_\ell^s).
\end{equation}
By the standard equivalence of ensembles \cite[Corollary A2.1.7]{klscaling},
\[ \sup_{0 \leq j \leq (2\ell+1)^d M } \big\{ E_{\nu_{\ell,j}} [g(\eta(0))] - \Phi \big(j/(2\ell+1)^d\big)\big\} \leq C \ell^{-d}.\]
Hence, the first term in \eqref{oneblock6} is bounded by $C N^{-\alpha} \ell^{-d}$. In order to bound the second term in \eqref{oneblock6}, we need the following lemma to bound the largest eigenvalue of the small perturbation of the reversible generator $-\mathscr{L}_\ell^s$.
\begin{lemma}[{\cite[Theorem A3.1.1]{klscaling}}] Let $\mathscr{L}$ be a reversible irreducible generator on some countable state space $\Omega$ with respect to some probability measure $\nu$ on $\Omega$. Let $U$ be a bounded function with mean zero with respect to $\nu$. For $\varepsilon > 0$, denote by $\lambda_\varepsilon$ the upper bound of the spectrum of $\mathscr{L}+\varepsilon U$. Then,
\[0 \leq \lambda_\varepsilon \leq \frac{\varepsilon^2 \kappa}{1-2\|U\|_\infty \varepsilon \kappa} {\rm Var} (U;\nu),\]
where $\kappa^{-1}$ is the spectral gap of the generator $\mathscr{L}$ in $L^2 (\nu)$,
\[\kappa^{-1} = \inf_{f:E_{\nu}[f] = 0} \frac{\left\langle f(-\mathscr{L})f\right\rangle_{\nu}}{{\rm Var} (f;\nu)}.\]
\end{lemma}
We continue to bound the second term in \eqref{oneblock6}. Note that $E_{\nu_{\ell,j}} [U_{g,M}^{\ell,j}] = 0$ and $|U_{g,M}^{\ell,j}| \leq CM$. By the above lemma, the second term in \eqref{oneblock6} is bounded by
\[\sup_{0 \leq j \leq (2\ell+1)^d M } \Big\{ \frac{C(F,t) N^d}{\gamma \ell^d} \frac{\gamma^2 \ell^{2d} N^{-2d-2\alpha} \kappa}{1-C(F) M \gamma \ell^d N^{-d-\alpha} \kappa} {\rm Var} \big( U_{g,M}^{\ell,j}; \nu_{\ell,j} \big) \Big\} ,\]
where $\kappa = \kappa (\ell,j)$ is the reciprocal of the spectral gap of the generator $\mathscr{L}_\ell^s$,
\[\kappa^{-1} = \inf_{f} \frac{\left\langle f(-\mathscr{L}_\ell^s)f\right\rangle_{\nu_{\ell,j}}}{{\rm Var} (f;\nu_{\ell,j})}.\]
Above, the infimum is over all functions $f$ such that $E_{\nu_{\ell,j}} [f] = 0$. In \cite{landim1996spectral}, Landim \emph{et al.} proved the following lower bound for the spectral gap of the zero range process under Assumption \ref{assump:g}: \[\kappa (\ell,j)^{-1} \geq C \ell^{-2}\]
uniformly in $j$. By \cite[Lemma 4]{toth2002between},
\[\sup_{0 \leq j \leq (2\ell+1)^dM} {\rm Var} \big( U_{g,M}^{\ell,j}; \nu_{\ell,j} \big) \leq C \ell^{-d}.\]
Therefore, the second term in \eqref{oneblock6} is bounded by $C (M,F,t)\gamma \ell^{2} N^{-d-2\alpha}$ as long as $\gamma M \ell^{d+2} < \varepsilon_0 N^{d+\alpha}$ for some fixed but small $\varepsilon_0$. Adding up the above estimates \eqref{oneblock1} and \eqref{oneblock6}, we bound the left hand side in \eqref{ob} by
\[C (M,F,t) \Big( \frac{1}{N^\alpha \ell^d} + \frac{\gamma \ell^2}{N^{d+2\alpha}} + \frac{1}{\gamma N^{2\alpha}} \Big).\]
Taking $\gamma = N^{d/2} \ell^{-1}$, we get the desired bound
\[C (M,F,t) \Big( \frac{1}{N^\alpha \ell^d} + \frac{ \ell}{N^{2\alpha+d/2}}\Big)\]
as long as $M \ell^{d+1} < \varepsilon_0 N^{d/2 + \alpha}$, which concludes the proof.
\end{proof}
\subsection{Estimates by logarithmic Sobolev inequality.} In this subsection, we present a second proof of the bound on the second term in \eqref{oneblock0}, this time using the logarithmic Sobolev inequality for the zero range process.
\begin{lemma}\label{lem:lsi}
There exists $C = C(M,F,t)$ such that
\begin{equation}\label{lsi}
\int_0^t E_{\mu_{N,s}} \Big[ \frac{1}{N^{d+\alpha}} \sum_{y\in\mathbb T_N^d} F \big( s,\tfrac{y}{N}\big) \tau_y V_{g,M}^\ell \Big] ds \leq C \Big( \frac{1}{N^\alpha \ell^d} + \frac{ \ell^{d+2}}{N^{3\alpha+2}}\Big).
\end{equation}
In particular, by taking $\ell = N^{\frac{\alpha+1}{d+1}}$, the left hand side in \eqref{lsi} is bounded by $C N^{-\frac{(2d+1)\alpha+d}{d+1}}$, which has order $o(N^{-2\alpha})$.
\end{lemma}
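Analogously, the rate claimed in the last sentence follows from balancing the two terms on the right-hand side of \eqref{lsi}; we record the elementary computation (not part of the original argument).

```latex
% With \ell = N^{(\alpha+1)/(d+1)}, the two error terms in \eqref{lsi} balance:
\begin{align*}
\frac{1}{N^\alpha \ell^d} &= N^{-\alpha-\frac{d(\alpha+1)}{d+1}}
  = N^{-\frac{(2d+1)\alpha+d}{d+1}},\\[2pt]
\frac{\ell^{d+2}}{N^{3\alpha+2}} &= N^{\frac{(d+2)(\alpha+1)}{d+1}-3\alpha-2}
  = N^{-\frac{(2d+1)\alpha+d}{d+1}}.
\end{align*}
% The common rate is o(N^{-2\alpha}) exactly when (2d+1)\alpha + d > 2(d+1)\alpha,
% i.e. when \alpha < d.
```

In particular, the claimed order $o(N^{-2\alpha})$ requires $\alpha < d$, which we take to be guaranteed by the standing restrictions on $\alpha$.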
\begin{remark}
Comparing the estimate in the above lemma with that in Lemma \ref{lem:spectralgap}, we see that the logarithmic Sobolev inequality yields a better bound only in dimension one. Indeed,
\[N^{-\frac{(2d+1)\alpha+d}{d+1}} < N^{-\frac{(4d+2)\alpha + d^2}{2(d+1)}} \quad \text{if and only if} \quad d < 2.\]
\end{remark}
\begin{proof}
Denote
\[f_s = f_{N,s} = \frac{d \mu_{N,s}}{d \nu_{\rho_*}^N}, \quad \bar{f}_s = \frac{1}{N^d} \sum_{y \in \mathbb T_N^d} \tau_y f_s, \quad \bar{f}_{s,\ell} = E_{\nu_{\rho_*}^N} [ \bar{f}_s | \mathcal{F}_\ell]. \]
Then, we may bound the left hand side in \eqref{lsi} by
\[ \int_0^t E_{\nu_{\rho_*}^N} \Big[ \frac{\|F\|_\infty }{N^{\alpha}} V_{g,M}^\ell \bar{f}_s \Big] ds = \int_0^t E_{\nu_{\rho_*}^\ell} \Big[ \frac{\|F\|_\infty }{N^{\alpha}} V_{g,M}^\ell \bar{f}_{s,\ell} \Big] ds.\]
As in the proof of Lemma \ref{lem:spectralgap}, we denote
\[m_{\ell,j} (\bar{f}_{s,\ell}) = E_{\nu_{\rho_*}^\ell} [\bar{f}_{s,\ell} \mathbf{1}_{\{\bar{\eta}^\ell (0)= j / (2\ell+1)^d\}}], \quad\bar{f}_{s,\ell,j}= \frac{\bar{f}_{s,\ell}}{E_{\nu_{\ell,j}} [\bar{f}_{s,\ell}]}. \]
Then, we rewrite the last formula as
\[\sum_{j=0}^{M(2\ell+1)^d} \int_0^t m_{\ell,j} (\bar{f}_{s,\ell}) E_{\nu_{\ell,j}} \Big[ \frac{\|F\|_\infty }{N^{\alpha}} V_{g,M}^\ell \bar{f}_{s,\ell,j} \Big] ds. \]
By entropy inequality, for any $\gamma > 0$, the above formula is bounded by
\begin{multline}\label{c1}
\sum_{j=0}^{M(2\ell+1)^d} \frac{1}{\gamma} \int_0^t m_{\ell,j} (\bar{f}_{s,\ell}) E_{\nu_{\ell,j}} \big[ \bar{f}_{s,\ell,j} \log \bar{f}_{s,\ell,j} \big] ds \\
+ \sum_{j=0}^{M(2\ell+1)^d} \frac{1}{\gamma} \int_0^t m_{\ell,j} (\bar{f}_{s,\ell}) \log E_{\nu_{\ell,j}} \Big[ \exp \Big\{ \frac{\gamma \|F\|_\infty}{N^\alpha} V_{g,M}^\ell \Big\}\Big] ds.
\end{multline}
The logarithmic Sobolev inequality for the zero range process reads (\emph{cf.} \cite{pra2005logarithmic})
\[E_{\nu_{\ell,j}} \big[ \bar{f}_{s,\ell,j} \log \bar{f}_{s,\ell,j} \big] \leq C \ell^2 D_\ell \big( \bar{f}_{s,\ell,j}; \nu_{\ell,j}\big)\]
uniformly in $j$. By convexity and translation invariance of the Dirichlet form,
\[\sum_{j=0}^{M(2\ell+1)^d} m_{\ell,j} (\bar{f}_{s,\ell}) D_\ell \big( \bar{f}_{s,\ell,j}; \nu_{\ell,j}\big) \leq D_\ell \big( \bar{f}_{s,\ell}; \nu_{\rho_*}^\ell \big) \leq D_\ell \big( \bar{f}_{s}; \nu_{\rho_*}^N \big) \leq \frac{(2\ell+1)^d}{N^d} D_N \big( \bar{f}_{s}; \nu_{\rho_*}^N \big).\]
Since by Lemma \ref{lem:entropy},
\[\int_0^t D_N \big( \bar{f}_{s}; \nu_{\rho_*}^N \big) ds \leq C N^{d-2\alpha -2},\]
we bound the first term in \eqref{c1} by $C \ell^{d+2} / (\gamma N^{2\alpha+2})$.
Now we bound the second term in \eqref{c1}. Note that $\big| V_{g,M}^\ell \big| \leq C M$. Using the basic inequality
\[e^{x} \leq 1 + x +\tfrac{x^2}{2} e^{|x|}, \quad \log (1+x) \leq x,\]
we bound the second term in \eqref{c1} by
\[\frac{C (M,F,t) }{\gamma} \sup_{0 \leq j \leq M (2\ell+1)^d} \Big\{ \frac{\gamma}{N^\alpha}E_{\nu_{\ell,j}} [V^\ell_{g,M}] + \frac{\gamma^2}{N^{2\alpha}}E_{\nu_{\ell,j}} [(V^\ell_{g,M})^2] \Big\}, \quad \forall \gamma \leq N^{\alpha}.\]
Using the standard equivalence of ensembles (\emph{cf.}\;\cite[Corollary A2.1.7]{klscaling}), for any $j \leq M(2\ell+1)^d$,
\[E_{\nu_{\ell,j}} [V^\ell_{g,M}] = E_{\nu_{\ell,j}} [g(\eta(0))] - \Phi (j/(2\ell+1)^d) \leq C \ell^{-d}.\]
By the Cauchy--Schwarz inequality and \cite[Lemma 4]{toth2002between},
\[E_{\nu_{\ell,j}} [(V^\ell_{g,M})^2] \leq 2 \big(E_{\nu_{\ell,j}} [g(\eta(0))] - \Phi (j/(2\ell+1)^d)\big)^2 + 2 {\rm Var} \big( \bar{g}^\ell (0); \nu_{\ell,j}\big) \leq C \ell^{-d}.\]
This permits us to bound the second term in \eqref{c1} by $C (M,F,t) \Big(N^{-\alpha} \ell^{-d} + \gamma N^{-2\alpha} \ell^{-d}\Big)$.
In conclusion, for any $\gamma \leq N^{\alpha}$, we bound the left hand side in \eqref{lsi} by
\[C (M,F,t) \Big( \frac{1}{N^\alpha \ell^d} + \frac{\ell^{d+2}}{\gamma N^{2\alpha+2}} + \frac{\gamma }{N^{2\alpha} \ell^d}\Big).\]
We finish the proof by taking $\gamma = N^\alpha$.
\end{proof}
\noindent{\large Linjie \textsc{Zhao}}
\noindent School of Mathematics and Statistics, Huazhong University of Science \& Technology, Wuhan, China.\\
{\tt linjie\[email protected]}
\end{document} | math | 57,897 |
\begin{document}
\centerline{\bf\Large Asymptotics of the convolution of the Airy function}
\vskip 1mm
\centerline{\bf\Large and a function of power-like behavior}
\vskip 5mm
\centerline{\bf S.V.~Zakharov}
\vskip 5mm
\begin{center}
Institute of Mathematics and Mechanics,\\
Ural Branch of the Russian Academy of Sciences,\\
16, S.Kovalevskaja street, 620990, Ekaterinburg, Russia
\end{center}
\vskip 5mm
\textbf{Abstract.}
The asymptotic behavior of a special convolution integral
of the Airy function
and a function with power-like behavior
at infinity is obtained.
The integral under consideration is the solution
of the Cauchy problem for an evolutionary
third-order partial differential equation
used in the theory of wave propagation
in physical media with dispersion.
The obtained result can be applied to studying
asymptotics of solutions of the KdV equation
by the matching method.
\vskip 3mm
Keywords: Airy function, convolution, Cauchy problem, asymptotics.
Mathematics Subject Classification: 35Q53, 35Q60, 33C10.
\vskip 5mm
\section{Introduction}\label{s1}
In the present paper,
we study the behavior as $|x|+t\to\infty$
of the convolution
\begin{equation}\label{us}
u(x,t) = \frac{1}{\root 3 \of {3t} }
\int\limits_{-\infty}^{+\infty}
f(y)\, \mathrm{Ai} \left( \frac{x-y}{\root 3 \of {3t} } \right) \, dy
\end{equation}
of the Airy function
\begin{equation}\label{af}
\mathrm{Ai}\,(x) =
\frac{1}{\pi} \int\limits_{0}^{+\infty}
\cos \left( \frac{\theta^3}{3}+ x\theta \right) \,
d\theta
\end{equation}
and a locally Lebesgue integrable function~$f$,
which satisfies the following asymptotic
(in the sense of Poincar\'{e}) relations:
\begin{equation}\label{le}
f(x) = \sum\limits_{n=0}^{\infty}
f^{\pm}_n x^{-n},
\qquad x\to \pm\infty.
\end{equation}
Integral~(\ref{us}),
which is understood in this case in the sense
of the limit
$\lim\limits_{R\to +\infty}\int\limits_{-\infty}^{R}$
of a standard Lebesgue integral, is of interest
as the solution of the Cauchy problem for a third-order
equation with the initial function~$f$:
$$
\frac{\partial u}{\partial t} +
\frac{\partial^3 u}{\partial x^3} =0, \quad u(x,0) =
f(x),
\qquad t\geqslant 0, \quad x\in\mathbb{R}.
$$
This equation is used
in the theory of wave propagation
in physical media with dispersion~\cite{Gbw} and it
is sometimes called the linearized Korteweg--de Vries (KdV) equation.
Notice that the Airy function and
the Airy transform (convolution of a general form)
have applications in wave optics, quantum mechanics,
and in modern laser technologies~\cite{vs,at}.
In addition, the investigation of the behavior
of integral~(\ref{us})
is of independent interest for asymptotic analysis,
for example, the obtained result can be applied to studying
asymptotics of solutions of the nonlinear KdV equation
by the matching method~\cite{ib}
in the same way as the large-time asymptotics of solutions
of the heat equation~\cite{hd} is applied
to the Cauchy problem for a nonlinear parabolic equation~\cite{2ps,zz}.
An asymptotics can easily be found
for large values of~$t$ and bounded values of~$x$
if $f(x)$ tends to zero exponentially as $|x| \to \infty$.
In this case, it suffices to expand the Airy function
into a series in $t^{-1/3}$.
However, for more general conditions~(\ref{le})
the problem becomes much more difficult.
\setcounter{equation}{0}
\section{Investigation of the integral}\label{s2}
Following the approach used in paper~\cite{hd},
we represent function~(\ref{us}) in the form
\begin{equation}\label{ud}
u(x,t)=U^-_1(x,t) + U^-_0(x,t) +U^+_0(x,t) + U^+_1(x,t),
\end{equation}
where $$
U^-_1(x,t) = \int\limits_{-\infty}^{-\sigma} \dots,
\quad
U^-_0(x,t) = \int\limits_{-\sigma}^{0}\dots,
\quad
U^+_0(x,t) = \int\limits_{0}^{\sigma}\dots,
\quad
U^+_1(x,t) = \int\limits_{\sigma}^{+\infty}\dots,
$$
$$
\sigma = (|x|^3 + t)^{p/3},
\qquad
0 < p < 1,
$$
and dots denote the integrand from formula~(\ref{us})
together with the factor $(3t)^{-1/3}dy$.
In the integral $U^-(x,t)$,
we make the change $y = \theta {\root 3 \of {3t}}$,
setting $$
\mu = \frac{\sigma}{{\root 3 \of {3t}}},
\qquad
\eta = \frac{x}{{\root 3 \of {3t}}}.
$$
Using condition~(\ref{le}) as $x\to -\infty$,
we obtain
$$
U^-_1(x,t) = \int\limits_{-\infty}^{-\mu}
f(\theta {\root 3 \of {3t}}) \mathrm{Ai}(\eta-\theta)
d\theta =
$$
\begin{equation}\label{up1i}
= \sum\limits_{n=0}^{N-1}
{f^-_n} (3t)^{-n/3}
\int\limits_{-\infty}^{-\mu}
\theta ^{-n} \mathrm{Ai}(\eta-\theta)
d\theta +
\int\limits_{-\infty}^{-\mu}
R_N(\theta {\root 3 \of {3t}}) \mathrm{Ai}(\eta-\theta) d\theta,
\end{equation}
where $ R_N(s) = O(|s|^{-N}).$
From the last relation, we have an estimate
of the remainder:
\begin{equation}\label{ern}
\int\limits_{-\infty}^{-\mu}
R_N(\theta {\root 3 \of {3t}}) \mathrm{Ai}(\eta-\theta) d\theta
= O(\sigma^{-N+1}).
\end{equation}
Let us find the dependence of the
integral
$$
\int\limits_{-\infty}^{-\mu}
\theta ^{-n} \mathrm{Ai}(\eta-\theta)
d\theta
$$
on the value of $\mu$.
For $n=0$ and $t\geqslant |x|^{\alpha}$,
$2 + p < \alpha < 3$, we have $$
\int\limits_{-\infty}^{-\mu}
\mathrm{Ai}(\eta-\theta) d\theta =
\int\limits_{0}^{+\infty}
\mathrm{Ai}(\eta+\theta' ) d\theta' - \int\limits_{0}^{\mu}
\mathrm{Ai}(\eta+\theta' ) d\theta' =
$$
\begin{equation}\label{ei0}
=\int\limits_{\eta}^{+\infty}\mathrm{Ai}(\eta')
d\eta'
-\sum\limits_{r=1}^{N-1} \frac{\mu^r}{r!} \mathrm{Ai}^{(r-1)}(\eta)
+ O(\sigma^{-\gamma N}),
\qquad
\sigma\to \infty,
\end{equation}
where $\gamma = \frac{\alpha}{3p} - 1 > 0.$
To obtain (\ref{ei0}), we used Taylor's
formula for the expansion of $\mathrm{Ai}(\eta+\theta' )$
in $\theta'$ and the estimates
\begin{equation}\label{ms}
\mu = O\left(t^{\frac{3p-\alpha}{3\alpha}} \right),
\qquad
\sigma = O\left(t^{p/\alpha} \right),
\qquad
\mu = O\left( \sigma^{-\gamma} \right)
\end{equation}
as $\sigma\to \infty$ on the set
$
T_{\alpha} = \{ (x,t)\, : \, x\in\mathbb{R},\
t \geqslant |x|^{\alpha} \}.
$
For $n\geqslant 1$, using the regularization method,
i.e., subtraction of singularities
as $\theta\to -0$,
we obtain $$
\int\limits_{-\infty}^{-\mu}
\theta ^{-n} \mathrm{Ai}(\eta-\theta)
d\theta =
\int\limits_{-\infty}^{-1} \theta ^{-n}
\mathrm{Ai}(\eta-\theta) d\theta +
$$
$$
+ \int\limits_{-1}^{-\mu} \Psi_n(\theta,\eta)
d\theta +
\sum\limits_{r=0}^{n-1} \frac{(-1)^{r-n}}{r!} \mathrm{Ai}^{(r)}(\eta)
\int\limits_{-1}^{-\mu} \theta ^{r-n}
d\theta,
$$
where
\begin{equation}\label{psin}
\Psi_n(\theta,\eta) = \theta ^{-n} \left[
\mathrm{Ai}(\eta-\theta) - \sum\limits_{r=0}^{n-1} \frac{(-\theta)^{r}}{r!}\mathrm{Ai}^{(r)}(\eta)
\right],
\end{equation}
and the sum in the square brackets is
a partial sum of the Taylor series
for $\mathrm{Ai}(\eta-\theta)$ in variable $\theta$.
Thus, $$
\int\limits_{-\infty}^{-\mu} \theta ^{-n}
\mathrm{Ai}(\eta-\theta) d\theta =
\int\limits_{-\infty}^{-1} \theta ^{-n}
\mathrm{Ai}(\eta-\theta) d\theta
-\frac{\mathrm{Ai}^{(n-1)}(\eta)}{(n-1)!} \ln\mu
$$
\begin{equation}\label{ein}
-\sum\limits_{r=0}^{n-2} \mu^{r-n+1}
\frac{\mathrm{Ai}^{(r)}(\eta)}{r!(r-n+1)}
- \int\limits_{-\mu}^{0}\Psi_n(\theta,\eta)
d\theta
+B^-_n(\eta),
\end{equation}
where
\begin{equation}\label{bnm}
B^-_n(\eta)= \sum\limits_{r=0}^{n-2}
\frac{\mathrm{Ai}^{(r)}(\eta)}{r!(r-n+1)}
+ \int\limits_{-1}^{0}\Psi_n(\theta,\eta)
d\theta.
\end{equation}
From formula~(\ref{psin}) we conclude
that $\Psi_n(\theta,\eta)$ has no singularities
as $\theta\to -0$;
whence we get
$$
\int\limits_{-\mu}^{0}\Psi_n(\theta,\eta)
d\theta
= \sum\limits_{r=1}^{N-1}\mu^r q_{r}\mathrm{Ai}^{(r)}(\eta) +
O(\sigma^{-\gamma N}),
$$
\begin{equation}\label{pinm}
\int\limits_{-1}^{0}
\Psi_n(\theta,\eta) d\theta =
O\left( |\eta|^{-1/4+n/2} \right),
\qquad \eta\to \infty.
\end{equation}
Substituting relations~(\ref{ei0}) and~(\ref{ein})
in formula~(\ref{up1i}), and using estimate~(\ref{ern}),
we find
\begin{equation}\label{up1f}
U^-_1(x,t) =
f^-_0\int\limits_{\eta}^{\infty}
\! \mathrm{Ai}(\theta)\, d\theta
+\sum\limits_{n=1}^{N-1}
f^-_n (3t)^{-n/3}
\left[ \int\limits_{-\infty}^{-1}
\theta^{-n} \mathrm{Ai}(\eta-\theta)
d\theta +B^-_n(\eta)
\right]+
\end{equation}
$$
\phantom{==================}
+ V^-_1(\mu,\eta,t) + O(\sigma^{-\gamma N}),
$$
where $$
V^-_1(\mu,\eta,t)= \sum\limits_{r^2_s+q^2_s\neq 0}
v^-_{1,s} \mathrm{Ai}^{(m_s)}(\eta)
t^{l_s} \mu^{r_s} \ln^{q_s}\mu,
$$
and $v^-_{1,s}$ are constant coefficients.
From the explicit form~(\ref{af}) of the Airy function,
it is easy to see that
$$
\int\limits_{\eta}^{+\infty}
\! \mathrm{Ai}(\theta)\, d\theta
= -\int\limits_{0}^{+\infty}
\frac{1}{\pi\theta}
\sin \left( \frac{\theta^3}{3}
+ \eta\theta \right) d\theta
+ \mathrm{const}.
$$
In addition, for $\eta>0$
$$
\int\limits_{0}^{+\infty}
\frac{1}{\theta}
\sin \left( \frac{\theta^3}{3}
+ {\eta\theta} \right) d\theta
= \int\limits_{0}^{\eta^{-1/2}}
+ \int\limits_{\eta^{-1/2}}^{\infty}
=
$$
$$
=\int\limits_{0}^{\eta^{1/2}}
\frac{1}{z}
\sin \left( \frac{z^3}{3\eta^3} + z \right)
dz
- \int\limits_{\eta^{-1/2}}^{\infty}
\frac{1}{\theta (\theta^2 + \eta)}
d \cos \left( \frac{\theta^3}{3}
+ {\eta\theta} \right).
$$
The first integral on the right-hand side of this
relation as ${\eta\to +\infty}$
tends to the limit value of the sine integral
$$
\mathrm{Si}(+\infty)=
\int\limits_{0}^{+\infty}
\frac{\sin z}{z} dz = \frac{\pi}{2},
$$
since $z^3/\eta^3 \leqslant \eta^{-3/2}$
on the whole integration interval.
Integrating by parts, we see that
$$
\int\limits_{\eta^{-1/2}}^{\infty}
\frac{1}{\theta (\theta^2 + \eta)}
d \cos \left( \frac{\theta^3}{3}
+ {\eta\theta} \right) =O(\eta^{-1/2}).
$$
Thus, we obtain \begin{equation}\label{iaf}
\int\limits_{\eta}^{+\infty}
\! \mathrm{Ai}(\theta)\, d\theta
= -\int\limits_{0}^{+\infty}
\frac{1}{\pi\theta}
\sin \left( \frac{\theta^3}{3}
+ \eta\theta \right) d\theta
+ \frac{1}{2}.
\end{equation}
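As a quick consistency check (not part of the original derivation), formula~(\ref{iaf}) can be verified at $\eta = 0$, using the classical value $\int_0^{+\infty}\mathrm{Ai}(\theta)\,d\theta = 1/3$ and the substitution $u = \theta^3/3$:

```latex
% At \eta = 0 the sine integral reduces to the Dirichlet integral:
\[
\int\limits_{0}^{+\infty} \frac{1}{\pi\theta}
  \sin\Big(\frac{\theta^3}{3}\Big)\, d\theta
  = \frac{1}{\pi} \int\limits_{0}^{+\infty} \frac{\sin u}{3u}\, du
  = \frac{1}{\pi}\cdot\frac{\pi}{6} = \frac{1}{6},
\qquad
-\frac{1}{6} + \frac{1}{2} = \frac{1}{3}
  = \int\limits_{0}^{+\infty} \mathrm{Ai}(\theta)\, d\theta,
\]
% in agreement with (\ref{iaf}).
```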
Now, let us study the integral
$$
U^-_0 (x,t) =
\frac{1}{\root 3 \of {3t} }
\int\limits_{-\sigma}^{0}
f(y)\, \mathrm{Ai} \left( \frac{x-y}{\root 3 \of {3t} } \right) \, dy.
$$
Since $2t\geqslant \sigma^{\alpha/p}$ on the
set $T_{\alpha}$
and there holds the estimate $$
\frac{y}{\root 3 \of {3t}} = O(\sigma^{-\gamma}),
\qquad
\gamma = \frac{\alpha}{3p}-1 > 0,
$$
for $|y| \leqslant \sigma$,
we can use Taylor's formula.
Then,
$$
U^-_0 (x,t)= \sum\limits_{n=0}^{N-1}
t^{-(n+1)/3}
\frac{(-1)^n 3^{-(n+1)/3}}{n!}
\mathrm{Ai}^{(n)}(\eta)
\int\limits_{-\sigma}^{0}
y^{n} f(y) dy
+ O( \sigma^{- \gamma N}).
$$
Let us transform the integral $\int\limits_{-\sigma}^{0}$ as follows:
$$
\int\limits_{-\sigma}^{0} y^{n} f(y)
dy =
\int\limits_{-1}^{0} y^{n} f(y) dy +
\int\limits_{-\sigma}^{-1}\left(
f^-_0 y^{n} + \dots + f^-_{n} + \frac{f^-_{n+1}}{y} \right) dy +
$$
$$
+ \int\limits_{-\sigma}^{-1} y^{n} \left[
f(y) - f^-_0 - \dots - \frac{f^-_{n}}{y^{n}} - \frac{f^-_{n+1}}{y^{n+1}} \right]
dy =
$$
\begin{equation}\label{is1}
= \int\limits_{-1}^{0} y^{n} f(y) dy +
\int\limits_{-\infty}^{-1}\Phi^-_n(y)
dy
-\int\limits_{-\infty}^{-\sigma} \Phi^-_n(y)
dy
- f^-_{n+1} \ln\sigma + P_n(\sigma),
\end{equation}
where $$
\Phi^-_n(y) = y^{n} \left[ f(y) - \sum\limits_{m=0}^{n+1} \frac{f^-_m}{y^{m}} \right].
$$
From relations~(\ref{le}) we obtain
estimates $$
|\Phi^-_n(y)| \leqslant C_n y^{-2},
\qquad y\leqslant -1,
$$
$$
\int\limits_{-\infty}^{-\sigma} \Phi^-_n(y)
dy =
\sum\limits_{m=1}^{N-1} \varphi_{n,m} \sigma^{-m} +
O(\sigma^{-N}),
\qquad \sigma\to \infty,
$$
where $\varphi_{n,m}$ are some constants.
Taking into account these estimates and
substituting $\sigma=\mu{\root 3 \of {3t}}$
in (\ref{is1}), we obtain $$
\int\limits_{-\sigma}^{0} y^{n} f(y)
dy =
I_n - \frac{f^-_{n+1}}{3} \ln t -f^-_{n+1} \ln\mu +
\sum\limits_{r\neq 0} a_{n,r} \mu^{r}
t^{r/3}
+ O\left( \sigma^{-N}\right),
$$
where $I_n$ and $a_{n,r}$ are constants.
Then, we have
\begin{equation}\label{up0f}
U^-_0(x,t) = \sum\limits_{n=0}^{N-1}
t^{-(n+1)/3} \mathrm{Ai}^{(n)}(\eta)
[b^-_{n} + \widetilde{b}^-_{n} \ln
t]
+V^-_{0}(\mu,\eta,t) + O(\sigma^{-\gamma N}),
\end{equation}
$$
V^-_0(\mu,\eta,t)= \sum\limits_{r^2_s+q^2_s\neq 0}
v^-_{0,s} \mathrm{Ai}^{(m_s)}(\eta)
t^{l_s} \mu^{r_s} \ln^{q_s}\mu,
$$
where $b^-_{n}$, $\widetilde{b}^-_{n}$,
and $v^-_{0,s}$ are constant coefficients.
From the asymptotics of the Airy function
\begin{equation}\label{aap}
\mathrm{Ai}(x) = \frac{x^{-1/4}}{2\pi}
\exp\left( -\frac{2}{3}x^{3/2}\right)
\sum\limits_{n=0}^{\infty}
\frac{(-1)^n \Gamma\left( \frac{3n+1}{2}\right)}{3^{2n}(2n)!}
x^{-3n/2}, \quad x\to +\infty,
\end{equation}
it is seen that estimates of remainders
in formulas~(\ref{up1f}) and~(\ref{up0f})
are also valid on the set $$
X^+_{\alpha} = \{ (x,t)\, : \, x>0,\ 0 <
t < |x|^{\alpha} \},
$$
where $$
\mu = o(\eta),
\qquad
|\eta| \geqslant \frac{|x|^{1-\frac{\alpha}{3}}}{\root 3 \of 3}
\to \infty
\qquad \mbox{as}
\quad \sigma\to \infty.
$$
To find the dependence on $\mu$
of integrals $$
\int\limits_{0}^{\mu}
\mathrm{Ai}(\eta+\theta' ) d\theta',
\qquad
\int\limits_{-\mu}^{0}\Psi_n(\theta,\eta)
d\theta,
\qquad
\int\limits_{0}^{\mu}\Psi_n(\theta,\eta)
d\theta
$$
on the set
$$
X^-_{\alpha} = \{ (x,t)\, : \, x<0,\ 0 <t < |x|^{\alpha} \},
$$
we use the expansion
\begin{equation}\label{aam}
\mathrm{Ai}(x) = \frac{|x|^{-1/4}}{\pi}
\sum\limits_{n=0}^{\infty}
\frac{(-1)^{[n/2]} \Gamma\left(\frac{3n+1}{2}\right)}{3^{2n}(2n)!}
\sin\left(\frac{2}{3}|x|^{3/2}+\frac{\pi}{4}(-1)^n \right)
|x|^{-3n/2}
\end{equation}
as $x\to -\infty$.
Then we obtain expressions of the form
$$
\sum\limits_{r^2_s+q^2_s\neq 0}
b_s \mu^{r_s} (\ln\mu)^{q_s}\,
t^{l_s} \eta^{m_s}
\sin\left(\frac{2}{3}|\eta|^{3/2} \pm\frac{\pi}{4} \right),
$$
where $b_s$ are constant coefficients.
Analogously to formulas~(\ref{up1f})
and~(\ref{up0f}), we find
\begin{equation}\label{um1f}
U^+_1(x,t) =
f^+_0\int\limits_{-\infty}^{\eta}
\! \mathrm{Ai}(\theta)\, d\theta
+\sum\limits_{n=1}^{N-1}
f^+_n (3t)^{-n/3}
\left[ \int\limits_{1}^{\infty}
\theta^{-n} \mathrm{Ai}(\eta-\theta)
d\theta +B^+_n(\eta)
\right]+
\end{equation}
$$
\phantom{==================}
+ V^+_1(\mu,\eta,t) + O(\sigma^{-\gamma N}),
$$
where $$
V^+_1(\mu,\eta,t) = \sum\limits_{r^2_s+q^2_s\neq 0}
G_{1,s}(\eta) t^{l_s} \mu^{r_s} \ln^{q_s}\mu,
$$
and the smooth functions
$$
B^+_n(\eta)= \sum\limits_{r=0}^{n-2}
\frac{\mathrm{Ai}^{(r)}(\eta)}{r!(r-n+1)}
+ \int\limits_{0}^{1}\Psi_n(\theta,\eta)
d\theta,
$$
\begin{equation}\label{pinp}
\int\limits_{0}^{1}
\Psi_n(\theta,\eta) d\theta =
O\left( |\eta|^{-1/4+n/2} \right),
\qquad \eta\to \infty,
\end{equation}
are constructed similarly to functions~(\ref{bnm}),
\begin{equation}\label{um0f}
U^+_0(x,t) = \sum\limits_{n=0}^{N-1}
t^{-(n+1)/3} \mathrm{Ai}^{(n)}(\eta)
[b^+_{n} + \widetilde{b}^+_{n} \ln
t]
+ V^+_0(\mu,\eta,t) + O(\sigma^{-\gamma N}),
\end{equation}
where ${b}^+_{n}$ and $\widetilde{b}^+_{n}$
are some constants,
$$
V^+_0(\mu,\eta,t) = \sum\limits_{r^2_s+q^2_s\neq 0}
G_{0,s}(\eta) t^{l_s} \mu^{r_s} \ln^{q_s}\mu,
$$
where $G_{0,s}(\eta)$ and $G_{1,s}(\eta)$ are smooth
functions.
Since $\mu$ contains an arbitrary parameter~$p$
and $\mu\to 0$ for $t>|x|^{h}\, (3p<h <3)$,
from A.R.~Danilin's lemma~\cite[Lemma 4.4]{dar},
it follows that
\begin{equation}\label{vz}
V^-_{0}(\mu,\eta,t)+V^-_{1}(\mu,\eta,t)
+V^+_{0}(\mu,\eta,t)+V^+_{1}(\mu,\eta,t)=
O(\sigma^{-\gamma N}).
\end{equation}
Then substituting expressions~(\ref{up1f}), (\ref{up0f}),
(\ref{um1f}), and~(\ref{um0f}) into (\ref{ud})
and using formulas~(\ref{iaf}) and~(\ref{vz}),
we obtain $$
u(x,t)=(f^+_0-f^-_0)\int\limits_{0}^{+\infty}
\frac{1}{\pi\theta}
\sin \left( \frac{\theta^3}{3}
+ \frac{x\theta}{\root 3 \of {3t} } \right)
d\theta
+\frac{f^+_0+f^-_0}{2} +
$$
\begin{equation}\label{uxt}
+ \sum\limits_{n=1}^{N-1} t^{-n/3}
\left(F_{n}(\eta) + \widetilde{F_{n}}(\eta)\ln
t \right)
+O(\sigma^{-\gamma N}).
\end{equation}
The coefficients $F_{n}(\eta)$ and $\widetilde{F_{n}}(\eta)$
are linear combinations of smooth bounded functions,
derivatives of the Airy function up to the $(n-1)$th
order inclusive, and the integrals
$$
\int\limits_{-1}^{0}
\Psi_n(\theta,\eta) d\theta,
\qquad
\int\limits_{0}^{1}
\Psi_n(\theta,\eta) d\theta.
$$
Then from asymptotics~(\ref{aap}), (\ref{aam}),
(\ref{pinm}), and~(\ref{pinp}),
it follows that functions $F_{n}(\eta)$
and $\widetilde{F_{n}}(\eta)$
cannot grow faster than $|\eta|^{-1/4+n/2}$
as $\eta\to\infty$.
Consequently, the~$n$th term in expansion~(\ref{uxt})
cannot grow faster than
$$
\ln t \, |\eta|^{-1/4} \left| \frac{x}{t}\right|^{n/2}.
$$
Therefore, the series
$$
\sum\limits_{n=1}^{\infty}
t^{-n/3} \left(F_{n}(\eta) + \widetilde{F_{n}}(\eta)\ln t \right)
$$
keeps its asymptotic character for $t>|x|^{1+\delta}$
with any $\delta>0$.
\vskip 3mm
Thus, we arrive at the following statement.
\textbf{Theorem.}
{\it
If for a locally Lebesgue integrable function~$f$
the conditions
$$
f(x) = \sum\limits_{n=0}^{\infty}
f^{\pm}_n x^{-n},
\qquad x\to \pm\infty,
$$
are fulfilled, then for any $\delta>0$
and $t>|x|^{1+\delta}$
there holds the asymptotic formula
$$
\frac{1}{\root 3 \of {3t} }
\int\limits_{-\infty}^{+\infty}
f(y)\, \mathrm{Ai} \left( \frac{x-y}{\root 3 \of {3t} } \right) \, dy
=
$$
$$
=(f^+_0-f^-_0)\int\limits_{0}^{+\infty}
\frac{1}{\pi\theta}
\sin \left( \frac{\theta^3}{3}
+ \frac{x\theta}{\root 3 \of {3t} } \right)
d\theta
+\frac{f^+_0+f^-_0}{2}
+ \sum\limits_{n=1}^{\infty} t^{-n/3}
\left(F_{n}(\eta) + \widetilde{F_{n}}(\eta)\ln
t \right),
$$
as $|x| + t \to \infty$,
where $F_{n}(\eta)$ and $\widetilde{F_{n}}(\eta)$
are $C^{\infty}$-smooth functions
of the self-similar variable
$$
\eta = \frac{x}{{\root 3 \of {3t}}}.
$$
}
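For instance, for the Heaviside function $f$ one has $f^+_0=1$, $f^-_0=0$, and all remaining coefficients vanish; the left-hand side of the formula equals $\int_{-\infty}^{\eta}\mathrm{Ai}(u)\,du$, and the theorem reduces to the exact identity
$$
\int\limits_{-\infty}^{\eta}\mathrm{Ai}(u)\,du
=\frac12+\int\limits_{0}^{+\infty}
\frac{1}{\pi\theta}
\sin \left( \frac{\theta^3}{3}+\eta\theta \right) d\theta,
$$
with $F_{n}\equiv\widetilde{F_{n}}\equiv 0$. Indeed, differentiating both sides in $\eta$ yields the Airy integral representation
$\mathrm{Ai}(\eta)=\frac{1}{\pi}\int_0^{+\infty}\cos\bigl(\frac{\theta^3}{3}+\eta\theta\bigr)\,d\theta$,
and both sides vanish as $\eta\to-\infty$.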
According to formula~(\ref{uxt}),
the asymptotic series in the theorem
should be understood in the sense of Erd\'elyi
with respect to the asymptotic sequence
$$\{ (|x|^3 + t)^{-\gamma'n}\}_{n=1}^{\infty},$$
where $\gamma' >0$.
\textbf{Acknowledgments.}
This work was supported by the Russian Foundation for Basic Research,
project no.~14-01-00322.
\end{document} | math | 18,801 |
\begin{document}
\allowdisplaybreaks
\def\R{\mathbb R} \def\ff{\frac} \def\ss{\sqrt} \def\B{\mathbf B}
\def\N{\mathbb N} \def\kk{\kappa} \def\m{{\bf m}}
\def\vp{\varepsilon}
\def\dd{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho}
\def\<{\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma}
\def\nn{\nabla} \def\pp{\partial} \def\E{\scr E}
\def\d{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}
\def\sg{\sigma} \def\ess{\text{\rm{ess}}}
\def\bg{\begin} \def\beq{\begin{equation}} \def\F{\scr F}
\def\Ric{\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}}
\def\e{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega}
\def\tt{\tilde} \def\Ric{\text{\rm{Ric}}}
\def\cut{\text{\rm{cut}}} \def\P{\mathbb P} \def\ifn{I_n(f^{\bigotimes n})}
\def\C{\scr C} \def\aaa{\mathbf{r}} \def\r{r}
\def\gap{\text{\rm{gap}}} \def\prr{\pi_{{\bf m},\varrho}} \def\r{\mathbf r}
\def\Z{\mathbb Z} \def\vrr{\varrho} \def\ll{\lambda}
\def\L{\scr L} \def\Tt{\tt} \def\TT{\tt} \def\II{\mathbb I}
\def\i{{\rm in}} \def\Sect{{\rm Sect}} \def\H{\mathbb H}
\def\M{\scr M} \def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda}
\def\Rank{{\rm Rank}} \def\B{\scr B} \def\i{{\rm i}} \def\HR{\hat{\mathbb R}^d}
\def\to{\rightarrow} \def\l{\ell} \def\iint{\int}
\def\E{\scr E}
\def\A{\scr A}
\def\B{\scr B}
\title{\bf Functional Inequalities for Convolution Probability Measures}
\begin{abstract} Let $\mu$ and $\nu$ be two probability measures on $\mathbb R^d$, where $\mu(\mathrm{d} x)= \frac{\mathrm{e}^{-V(x)}\,\mathrm{d} x}{\int_{\mathbb R^d} \mathrm{e}^{-V(x)}\,\mathrm{d} x}$ for some $V\in C^1(\mathbb R^d)$. Explicit sufficient conditions on $V$ and $\nu$ are presented such that
$\mu*\nu$ satisfies the log-Sobolev,
Poincar\'e and super Poincar\'e inequalities. In particular, if $V(x)=\lambda|x|^2$ for some $\lambda>0$
and $\nu(\mathrm{e}^{\lambda\theta |\cdot|^2})<\infty$ for some $\theta>1$, then $\mu*\nu$ satisfies the log-Sobolev inequality. This improves and extends the recent results on the log-Sobolev inequality derived in \cite{Z} for convolutions of the Gaussian measure and compactly supported probability measures. On
the other hand, it is well known that the log-Sobolev inequality for
$\mu*\nu$ implies $\nu(\mathrm{e}^{\varepsilon|\cdot|^2}) <\infty$ for some
$\varepsilon>0.$
\centerline{\textbf{R\'esum\'e}}
Soient $\mu$ et $\nu$ deux mesures de probabilit\'e sur $\mathbb R^d$, o\`u $\mu(\mathrm{d} x)= \frac{\mathrm{e}^{-V(x)}\,\mathrm{d} x}{\int_{\mathbb R^d} \mathrm{e}^{-V(x)}\,\mathrm{d} x}$ avec $V\in C^1(\mathbb R^d)$. Des conditions explicites suffisantes sur $V$ et $\nu$ sont pr\'esent\'ees telles que $\mu*\nu$ satisfait des in\'egalit\'es de Sobolev logarithmique, de Poincar\'e et de super-Poincar\'e. En particulier, si $V(x)=\lambda|x|^2$ pour quelque $\lambda>0$ et $\nu(\mathrm{e}^{\lambda\theta |\cdot|^2})<\infty$ avec $\theta>1$, alors $\mu*\nu$ satisfait l'in\'egalit\'e de Sobolev logarithmique. Cela am\'eliore et \'etend des r\'esultats r\'ecents sur l'in\'egalit\'e de Sobolev logarithmique obtenus dans \cite{Z} pour des convolutions de la mesure de Gauss et des mesures de probabilit\'e \`a support compact. D'autre part, il est bien connu que l'in\'egalit\'e de Sobolev logarithmique pour $\mu*\nu$ implique $\nu(\mathrm{e}^{\varepsilon|\cdot|^2}) <\infty$ pour quelque $\varepsilon>0.$
\end{abstract}
\noindent
AMS subject Classification:\ 60J75, 47G20, 60G52. \\
\noindent
Keywords: Log-Sobolev inequality, Poincar\'e inequality, super Poincar\'{e} inequality, convolution.
\vskip 2cm
\section{Introduction}\label{sec1}
Functional inequalities of Dirichlet forms are powerful tools in characterizing the properties of
Markov semigroups and their generators, see e.g.\ \cite{Wbook} and references therein. To establish functional inequalities for less explicit or less regular probability measures, one regards these measures as perturbations of better ones satisfying the underlying functional inequalities. For a probability measure $\mu$ on $\mathbb R^d$, the perturbation of $\mu$ can be made in the following two senses. The first type of perturbation is in the sense of an exponential potential: the perturbation of $\mu$ by a potential $W$ is given by $\mu_W(\mathrm{d} x):=\frac{\mathrm{e}^{W(x)}\mu(\mathrm{d} x)}{\mu(\mathrm{e}^W)}$, for which functional inequalities have been studied in many papers, see \cite{AS, BLW, CWW} and references therein. Another kind of perturbation is in the sense of independent sums of random variables: the perturbation of $\mu$ by a probability measure $\nu$ on $\mathbb R^d$ is
given by their convolution
$$(\mu*\nu)(A):= \int_{\mathbb R^d}\int_{\mathbb R^d} 1_{A}(x+y)\,\mu(\mathrm{d} x)\,\nu(\mathrm{d} y).$$
Functional inequalities in the latter case are not yet well investigated, and their study is useful in characterizing distribution properties of random variables under independent perturbations, see e.g.\ \cite[Section 3]{Z} for an application to the study of random matrices.
In general, let $\mu$ and $\nu$ be two probability measures on $\mathbb R^d$. A straightforward result on functional inequalities of $\mu*\nu$ can be derived from the sub-additivity property; that is, if both $\mu$ and $\nu$ satisfy a type of functional inequality, then $\mu*\nu$ satisfies the same type of inequality. In this paper, we will consider the Poincar\'e inequality and the super Poincar\'e inequality. We say that a probability measure $\mu$ satisfies the Poincar\'e inequality with constant $C>0$ if
\begin{equation}\label{pin}\mu(f^2)\le C\mu(|\nabla f|^2)+\mu(f)^2,\ \ f\in C_b^1(\mathbb R^d).\end{equation}
We say that $\mu$ satisfies the super Poincar\'e inequality with $\beta: (0,\infty)\rightarrow (0,\infty)$ if
\begin{equation}\label{sup-pin}\mu(f^2)\le r \mu(|\nabla f|^2) +\beta(r)\mu(|f|)^2,\ \ r>0,\ f\in C_b^1(\mathbb R^d).\end{equation}
It is shown in \cite[Corollary 3.3]{W00a} or \cite[Corollary 1.3]{W00b} that the super Poincar\'e inequality holds with
$\beta(r)=\mathrm{e}^{c/r}$ for some constant $c>0$ if and only if
the following Gross log-Sobolev inequality (see \cite{G}) holds for some constant $C>0$:
\begin{equation}\label{log}\mu(f^2\log f^2)\le C\mu(|\nabla f|^2),\ \ f\in C_b^1(\mathbb R^d),\ \mu(f^2)=1.\end{equation}
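For instance, by the classical Bakry--\'Emery criterion, if $V\in C^2(\mathbb R^d)$ with $\mathrm{Hess}_V\ge K I$ for some constant $K>0$, then the log-Sobolev inequality holds with $C=2/K$. In particular, for $V(x)=\lambda|x|^2$ one has $\mathrm{Hess}_V=2\lambda I$, so that
$$\mu(f^2\log f^2)\le \frac{1}{\lambda}\,\mu(|\nabla f|^2),\ \ f\in C_b^1(\mathbb R^d),\ \mu(f^2)=1,$$
which for $\lambda=\frac12$ recovers Gross's sharp constant $2$ for the standard Gaussian measure.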
\begin{prp}\label{P1.1} Let $\mu$ and $\nu$ be two probability measures on $\mathbb R^d$.
\begin{enumerate}
\item[$(1)$] If $\mu$ and $\nu$ satisfy the Poincar\'e $($resp.\ log-Sobolev$)$ inequality with constants $C_1$ and $C_2>0$ respectively,
then $\mu*\nu$ satisfies the same inequality with constant $C=C_1+C_2$.
\item[$(2)$] If $\mu$ and $\nu$ satisfy the super Poincar\'e inequality with $\beta_1$ and $\beta_2$ respectively, then $\mu*\nu$ satisfies the super Poincar\'e inequality with
$$\beta(r):=\inf\big\{\beta_1(r_1)\beta_2(r_2): r_1,r_2>0,\ r_1+ r_2\beta_1(r_1)\le r\big\},\ \ r>0.$$
\end{enumerate}
\end{prp}
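The constant $C=C_1+C_2$ in $(1)$ cannot be improved in general: if $\mu=N(0,C_1 I)$ and $\nu=N(0,C_2 I)$, then $\mu$ and $\nu$ satisfy the Poincar\'e inequality with optimal constants $C_1$ and $C_2$ respectively, while $\mu*\nu=N(0,(C_1+C_2)I)$ satisfies it with optimal constant $C_1+C_2$, since the Poincar\'e constant of a Gaussian measure equals the largest eigenvalue of its covariance matrix.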
Since the proof of this result is almost trivial using
functional inequalities for product measures (cf.\ \cite[Corollary
13]{Ch}), we omit it. Due to Proposition \ref{P1.1}, in this
paper the perturbation measure $\nu$ need not satisfy the
Poincar\'e inequality; this is in particular the case if the support
of $\nu$ is disconnected.
Recently, when $\mu$ is the Gaussian measure with variance matrix
$\delta I$ for some $\delta>0$, it was proved in \cite{Z} that $\mu*\nu$
satisfies the log-Sobolev inequality if $\nu$ has a compact support
and either $d=1$ or $\delta>2R^2d$, where $R$ is the radius of a ball
containing the support of $\nu$, see \cite[Theorem 2 and Theorem
17]{Z}. The first purpose of this paper is to extend this result to
more general $\mu$ and to drop the restriction $\delta>2R^2d$ in high
dimensions. The main tool used in \cite{Z} is the Hardy type
criterion for the log-Sobolev inequality due to \cite{BG}, which is
qualitatively sharp in dimension one. In this paper we will use a
perturbation result of \cite{AS} and a Lyapunov type criterion
introduced in \cite{CGW} to derive more general and better
results. In particular, as a consequence of Corollary \ref{C1.3}
below, we have the following result, in which the compact support
condition on $\nu$ is relaxed to an exponential integrability
condition. We would like to point out that the exponential
integrability condition $\nu(\mathrm{e}^{\varepsilon |\cdot|^2})<\infty$ for some
$\varepsilon>0$ is also necessary for $\mu*\nu$ to satisfy the log-Sobolev
inequality. Indeed, it is well known that the log-Sobolev inequality
for $\mu*\nu$ implies $(\mu*\nu)(\mathrm{e}^{c |\cdot|^2})<\infty$ for some
$c>0$, so that $\nu(\mathrm{e}^{\varepsilon |\cdot|^2})<\infty$ for $\varepsilon\in (0,c)$.
However, it is not clear whether the condition ``$\theta >1$'' in the
following result is sharp.
\begin{thm}\label{T1.1} Let $V=\lambda |\cdot|^2$ for some constant
$\lambda>0$, and let $\mu(\mathrm{d} x)= \frac{\mathrm{e}^{-V(x)}\,\mathrm{d} x}{\int_{\mathbb R^d} \mathrm{e}^{-V(x)}\,\mathrm{d} x}$ be a probability measure on $\mathbb R^d$. Then for any probability measure $\nu$ on $\mathbb R^d$ with
$\nu(\mathrm{e}^{\lambda\theta|\cdot|^2})<\infty$ for some constant $\theta>1,$ the log-Sobolev inequality
$$(\mu*\nu)(f^2\log f^2)\le C(\mu*\nu)(|\nabla f|^2),\ \ f\in C_b^1(\mathbb R^d),\ (\mu*\nu)(f^2)=1,$$
holds for some constant $C>0.$ \end{thm}
According to the above-mentioned results in \cite{Z}, one might hope to prove that the log-Sobolev inequality is stable
under convolution with compactly supported probability measures; i.e.\ if $\mu$ satisfies the log-Sobolev inequality, then so does $\mu*\nu$ for any probability measure $\nu$ having compact support. This is, however, not true: a simple counterexample is $\mu=\delta_0,$ the Dirac measure at the point $0$, which obviously satisfies the log-Sobolev inequality, while $\mu*\nu=\nu$ need not satisfy the log-Sobolev inequality even if $\nu$ is compactly supported. Thus, to ensure that $\mu*\nu$ satisfies the log-Sobolev inequality for every compactly supported probability measure $\nu$, one needs additional assumptions on $\mu$. Moreover, since the Gaussian measure $\mu$ in Theorem \ref{T1.1} converges to $\delta_0$ as $\lambda\rightarrow\infty$, this counterexample is also consistent with the assertion of Theorem \ref{T1.1} that for large $\lambda$ a stronger concentration condition on $\nu$ is needed.
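An explicit witness is $\nu=\frac12(\delta_{-1}+\delta_{1})$: for $f\in C_b^1(\mathbb R)$ with $f\equiv \pm1$ in a neighborhood of $\pm1$ one has $\nu(f^2)-\nu(f)^2=1$ while $\nu(|\nabla f|^2)=0$, so $\mu*\nu=\nu$ satisfies neither the Poincar\'e nor the log-Sobolev inequality.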
In the remainder of this paper, let $\mu(\mathrm{d} x)=\mathrm{e}^{-V(x)}\,\mathrm{d} x$ be a
probability measure on $\mathbb R^d$ such that $V\in C^1(\mathbb R^d)$. For a probability measure $\nu$ on $\mathbb R^d$, we define
$$p_\nu(x)= \int_{\mathbb R^d} \mathrm{e}^{-V(x-z)}\,\nu(\mathrm{d} z),\qquad V_\nu(x)= -\log p_\nu(x), \ \ x\in \mathbb R^d.$$ Then
\begin{equation}\label{*0} (\mu*\nu)(\mathrm{d} x)= p_\nu(x)\,\mathrm{d} x = \mathrm{e}^{-V_\nu(x)}\,\mathrm{d} x.\end{equation} Moreover, let
$$\nu_x(\mathrm{d} z)= \frac 1 {p_\nu(x)}\, \mathrm{e}^{-V(x-z)}\,\nu(\mathrm{d} z),\ \ x\in\mathbb R^d.$$
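As a simple illustration, take $d=1$, $V(x)=\lambda x^2$ (up to an additive normalizing constant) and $\nu=\frac12(\delta_{-a}+\delta_{a})$ with $a>0$. Then
$$p_\nu(x)=\frac12\Big(\mathrm{e}^{-\lambda(x-a)^2}+\mathrm{e}^{-\lambda(x+a)^2}\Big),\qquad
\nu_x(\{\pm a\})=\frac{1}{1+\mathrm{e}^{\mp 4\lambda a x}},$$
so that $\int_{\mathbb R}|z|\,\nu_x(\mathrm{d} z)=a$ for every $x$; in particular, for compactly supported $\nu$ the inner integral in condition (\ref{C1.3.1}) below is uniformly bounded.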
In the following three sections, we will investigate the log-Sobolev, Poincar\'e and super Poincar\'e inequalities for $\mu*\nu$, respectively.
As a complement to the present paper, Cheng and Zhang \cite{chengzhang} investigated the weak Poincar\'e inequality
for convolution probability measures, using Lyapunov type conditions as we do in Sections 3 and 4 for the Poincar\'e and super Poincar\'e inequalities, respectively.
\section{Log-Sobolev Inequality}
In this section we will use two different arguments to study the log-Sobolev inequality for $\mu*\nu$. One is the perturbation argument due to \cite{A,AS}, and the other is the Lyapunov criterion presented in \cite{CGW}.
\subsection{Perturbation Argument}
\begin{thm}\label{T1.2} Assume that the log-Sobolev inequality \eqref{log}
holds for $\mu$ with some constant $C>0$. If $V\in C^1(\mathbb R^d)$ is such that
$$\Phi_\nu(x):= \int_{\mathbb R^d} (\nabla \mathrm{e}^{-V})(x-z)\,\nu(\mathrm{d} z),\ \ x\in\mathbb R^d,$$
is well-defined and continuous, and there exists a constant
$\delta>1$ such that
\begin{equation}\label{T1.2.1} \int_{\mathbb R^d} \exp \left\{\frac{\delta C}{4}\Big(\int_{\mathbb R^d} |\nabla V(x)-\nabla V(x-z)|\,\nu_x(\mathrm{d} z)\Big)^2\right\}\, \mu (\mathrm{d} x)<\infty,\end{equation} then $\mu*\nu$ also
satisfies the log-Sobolev inequality, i.e.\ for some constant $C'>0$,
$$(\mu*\nu)(f^2\log f^2)\le C'(\mu*\nu)(|\nabla f|^2),\ \ f\in C_b^1(\mathbb R^d),\ (\mu*\nu)(f^2)=1.$$
\end{thm}
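In the quadratic case $V(x)=\lambda|x|^2$ one has $\nabla V(x)-\nabla V(x-z)=2\lambda z$, so condition \eqref{T1.2.1} becomes
$$\int_{\mathbb R^d}\exp\left\{\delta C\lambda^2\Big(\int_{\mathbb R^d}|z|\,\nu_x(\mathrm{d} z)\Big)^2\right\}\mu(\mathrm{d} x)<\infty,$$
which indicates why an exponential integrability condition on $\nu$, as in Theorem \ref{T1.1}, is the natural assumption.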
Obviously, $\Phi_\nu\in C(\mathbb R^d;\mathbb R^d)$ holds if either $\nu$ has
compact support or $\nabla\mathrm{e}^{-V}$ is bounded. Moreover, (\ref{C1.3.1})
below holds for bounded $\mathrm{Hess}_V$ and compactly supported $\nu$.
So, the following direct consequence of Theorem \ref{T1.2} improves
the above-mentioned main results in \cite{Z}. Indeed, this corollary
implies Theorem \ref{T1.1}.
\begin{cor}\label{C1.3} Assume that \eqref{log} holds and $\Phi_\nu$ is well defined and continuous.
If $V\in C^2(\mathbb{R}^d)$ with bounded ${\rm Hess}_V$
is such that
\begin{equation}\label{C1.3.1} \int_{\mathbb{R}^d} \exp \left\{\frac{\delta C}{4}
\|{\rm Hess}_V\|_\infty^2\Big(\int_{\mathbb{R}^d} |z|\nu_x(\mathrm{d} z)\Big)^2\right\}\,
\mu (\mathrm{d} x)<\infty \end{equation} holds for some constant $\delta>1$,
then $\mu*\nu$ satisfies the log-Sobolev inequality. \end{cor}
Before presenting the proof of Theorem \ref{T1.2}, we first prove Theorem \ref{T1.1}
using Corollary \ref{C1.3}.
\begin{proof}[Proof of Theorem \ref{T1.1}] Let $Z=\int_{\mathbb{R}^d}\mathrm{e}^{-\lambda|x|^2}\,\mathrm{d} x.$ Since, in the framework of Corollary \ref{C1.3}, $V(x)=\lambda |x|^2+\log Z$, we
have $\|{\rm Hess}_V\|_\infty^2=4\lambda^2$, and \eqref{log} holds with $C=\frac 1 \lambda$. Moreover, since $\theta>1$, there exists a constant $\varepsilon\in (0,1)$ such that $\delta:= \theta-\frac\varepsilon{1-\varepsilon}>1.$ So, by Jensen's inequality,
\begin{equation}\label{L1}\begin{split} I&:=
\int_{\mathbb{R}^d}\exp\bigg\{\frac{\delta C}4
\|{\rm Hess}_V\|_\infty^2\nu_x(|\cdot|)^2\bigg\}\mu(\mathrm{d} x)\le \int_{\mathbb{R}^d} \mathrm{e}^{\delta \lambda \nu_x(|\cdot|^2)}\mu(\mathrm{d} x)\\
&\le\int_{\mathbb{R}^d\times\mathbb{R}^d} \mathrm{e}^{\delta\lambda|z|^2}\nu_x(\mathrm{d} z)\mu(\mathrm{d} x)
=\int_{\mathbb{R}^d} \frac{\int_{\mathbb{R}^d} \mathrm{e}^{\lambda\delta |z|^2-\lambda|x-z|^2}\nu(\mathrm{d}
z)}{\int_{\mathbb{R}^d} \mathrm{e}^{-\lambda|x-z|^2}\nu(\mathrm{d} z)}\,\mu(\mathrm{d} x).\end{split}\end{equation}
Take $R>0$ such that $\nu(B(0,R))\ge \frac 1 2$. We have
$$\int_{\mathbb{R}^d} \mathrm{e}^{-\lambda |x-z|^2}\nu(\mathrm{d} z) \ge \int_{B(0,R)} \mathrm{e}^{-\lambda R^2-2\lambda R|x|-\lambda |x|^2}\nu(\mathrm{d} z)\ge \frac 1 2 \mathrm{e}^{-\lambda R^2-2\lambda R|x|-\lambda |x|^2}.$$ Moreover,
for the above $\varepsilon\in (0,1)$ we have
$$-|x-z|^2 \le 2|x|\cdot |z|-|x|^2-|z|^2\le -\varepsilon |x|^2 +\frac\varepsilon{1-\varepsilon} |z|^2.$$ Combining this with \eqref{L1}, we obtain
\begin{equation*}\begin{split} I&\le \frac {2\mathrm{e}^{\lambda R^2}} Z \int_{\mathbb{R}^d\times\mathbb{R}^d} \mathrm{e}^{\lambda\delta|z|^2-\lambda|x-z|^2+2\lambda R|x|}\nu(\mathrm{d} z)\,\mathrm{d} x\\
&\le \frac {2\mathrm{e}^{\lambda R^2}}Z \int_{\mathbb{R}^d\times\mathbb{R}^d} \mathrm{e}^{\lambda\delta|z|^2-\lambda \varepsilon |x|^2 +\frac{\lambda \varepsilon}{1-\varepsilon}|z|^2+2\lambda R|x|} \,\mathrm{d} x\,\nu(\mathrm{d} z)\\
&= \frac {2\mathrm{e}^{\lambda R^2}}Z \int_{\mathbb{R}^d\times\mathbb{R}^d} \mathrm{e}^{\lambda\theta|z|^2-\lambda \varepsilon |x|^2 +2\lambda R|x|} \,\mathrm{d} x\,\nu(\mathrm{d} z)<\infty.\end{split}\end{equation*}
Then the proof is finished by Corollary \ref{C1.3}.
\end{proof}
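The elementary bound $-|x-z|^2\le -\varepsilon|x|^2+\frac{\varepsilon}{1-\varepsilon}|z|^2$ used above follows from Young's inequality $2|x|\,|z|\le(1-\varepsilon)|x|^2+\frac{1}{1-\varepsilon}|z|^2$. As a quick numerical sanity check (not part of the proof; the dimensions, sample size, and random seed below are arbitrary choices), one may verify it on random data:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs(x, z):
    # left-hand side: -|x - z|^2
    return -np.sum((x - z) ** 2)

def rhs(x, z, eps):
    # right-hand side: -eps |x|^2 + eps/(1 - eps) |z|^2
    return -eps * np.sum(x ** 2) + eps / (1.0 - eps) * np.sum(z ** 2)

violations = 0
for _ in range(10_000):
    d = int(rng.integers(1, 6))          # random dimension 1..5
    x = 10.0 * rng.normal(size=d)
    z = 10.0 * rng.normal(size=d)
    eps = rng.uniform(0.05, 0.95)
    if lhs(x, z) > rhs(x, z, eps) + 1e-9:  # small tolerance for rounding
        violations += 1

print(violations)  # -> 0
```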
To prove Theorem \ref{T1.2},
we introduce the following perturbation result due to
\cite[Lemma 3.1]{AS} and \cite[Lemma 4.1]{A}.
\begin{lem}\label{L2.2} Assume that the probability measure $\mu(\mathrm{d} x)=\mathrm{e}^{-V(x)}\mathrm{d} x$ satisfies the
log-Sobolev inequality \eqref{log} with some constant $C>0$. Let $\mu_{V_0}(\mathrm{d}
x)=\mathrm{e}^{-V_0(x)}\mathrm{d} x$ be a probability measure on $\mathbb{R}^d$. If $F:=\frac{1}{2}(V-V_0)\in C^1(\mathbb{R}^d)$ is such that
\begin{equation}\label{l2.1} \int_{\mathbb{R}^d}\exp (\delta C|\nabla F|^2)\,\mathrm{d}\mu
<\infty\end{equation} holds for some constant $\delta>1$, then the
defective log-Sobolev inequality
\begin{equation}\label{d-log}\mu_{V_0}(f^2\log f^2)\le C_1\mu_{V_0}(|\nabla f|^2)+C_2,\ \ f\in C_b^1(\mathbb{R}^d),\ \mu_{V_0}(f^2)=1,\end{equation} holds for some constants $C_1,C_2>0.$
\end{lem}
\begin{proof}[Proof of Theorem \ref{T1.2}] Since by \eqref{*0} we have $(\mu*\nu)(\mathrm{d} x)=\mathrm{e}^{-V_\nu(x)}\mathrm{d} x$, to apply Lemma \ref{L2.2} we take $V_0=V_\nu$, so that $$F(x)=\frac{1}{2}(V(x)-V_0(x))=\frac{1}{2}\log \int_{\mathbb{R}^d} \mathrm{e}^{V(x)-V(x-z)}\nu(\mathrm{d} z).$$ Since $\Phi_\nu$ is locally bounded, for any $x\in \mathbb{R}^d$ we have
$$\lim_{y\to 0} (p_\nu(x+y)-p_\nu(x))=\lim_{y\to 0} \int_0^1\langle y,\Phi_\nu(x+sy)\rangle\,\mathrm{d} s=0.$$ So, $p_\nu\in C(\mathbb{R}^d)$. Then the continuity of $\Phi_\nu$ implies that
$$\Psi(x):= \int_{\mathbb{R}^d}(\nabla V)(x-z)\nu_x(\mathrm{d} z)=-\frac {\Phi_\nu(x)} {p_\nu(x)}$$ is continuous in $x$ as well.
Therefore, for any $x,v\in\mathbb{R}^d$,
\begin{equation*}\begin{split} \lim_{\varepsilon\downarrow 0} \frac{F(x+\varepsilon v)-F(x)}\varepsilon &= \lim_{\varepsilon\downarrow 0} \frac 1 {2\varepsilon} \int_0^\varepsilon \langle v, \nabla V(x+sv)-\Psi(x+sv)\rangle\,\mathrm{d} s\\
& =\frac 1 2 \langle v,\nabla V(x)-\Psi(x)\rangle.\end{split}\end{equation*} Thus, by the continuity of $\Psi$ and $\nabla V$ we conclude that $F\in C^1(\mathbb{R}^d)$ and
$$ |\nabla F(x)|^2=\frac 1 4 |\nabla V(x)-\Psi(x)|^2 \le \frac{1}{4}\Big(\int_{\mathbb{R}^d} |\nabla V(x)-\nabla V(x-z)|\nu_x(\mathrm{d} z)\Big)^2.$$ Combining this with \eqref{T1.2.1}, we may apply Lemma \ref{L2.2} to derive the defective log-Sobolev inequality for $\mu*\nu$. Moreover, the form
$$\mathscr E(f,g):= \int_{\mathbb{R}^d} \langle\nabla f,\nabla g\rangle \,\mathrm{d}(\mu*\nu),\ \ f,g\in C_b^1(\mathbb{R}^d)$$ is closable in $L^2(\mu*\nu)$,
and its closure is a symmetric, conservative, irreducible Dirichlet form. Thus, according to
\cite[Corollary 1.3]{W13} (see also \cite[Theorem 1]{M2}), the defective log-Sobolev inequality implies the desired log-Sobolev inequality. This finishes the proof.
\end{proof}
To see that Corollary \ref{C1.3} has a broad range of applications
beyond \cite[Theorem 2]{Z} and Proposition \ref{P1.1}(1) for the
log-Sobolev inequality, we present below an example where the support of $\nu$ is
unbounded and disconnected.
\begin{exa}\label{E1}
Let $d=1$, $V(x)=\frac 1 2 \log\pi+x^2$, and
$$\nu(\mathrm{d} z)= \frac 1 \gamma \sum_{i\in\mathbb{Z}} \mathrm{e}^{-\lambda i^2}\delta_i(\mathrm{d} z),\ \
\gamma:=\sum_{i\in\mathbb{Z}}\mathrm{e}^{-\lambda i^2},$$ where $\delta_i$ is the Dirac measure at
the point $i$ and $\lambda>0$. Then $\mu*\nu$ satisfies the log-Sobolev inequality.
\end{exa}
\begin{proof} For the present $V$ it is well known from \cite{G} that the log-Sobolev
inequality \eqref{log} holds with $C=1$. On the other hand, it is
easy to see that for any $i\in \mathbb{Z}$, $x\in\mathbb{R}$ and $\lambda >0$, we have
\begin{equation}\label{**1} |x-i|^2+\lambda i^2= (1+\lambda)\Big(i-\frac x{\lambda+1}\Big)^2 +\frac{\lambda
x^2}{1+\lambda}.\end{equation}
Let $\tilde p(x)= \sum_{i\in\mathbb{Z}}\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}.$ Then
\begin{equation}\label{**2} \nu_x(\mathrm{d} z)= \frac 1 {\gamma(x)} \sum_{i\in\mathbb{Z}} \mathrm{e}^{-|x-i|^2-\lambda i^2}\delta_i(\mathrm{d} z)=
\frac 1 {\tilde p(x)}\sum_{i\in \mathbb{Z}}\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}\delta_i(\mathrm{d} z),\end{equation} where $\gamma(x)= \sum_{i\in\mathbb{Z}} \mathrm{e}^{-|x-i|^2-\lambda i^2}.$
So,
\begin{equation*}\begin{split}\int_{\mathbb{R}^d} |z|\nu_x(\mathrm{d} z)&= \frac 1 {\tilde p(x)}
\sum_{i\in\mathbb{Z}} |i|\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}\\
&\le \frac{|x|}{1+\lambda} +\frac 1 {\tilde p(x)}
\sum_{i\in\mathbb{Z}} \Big|i-\frac{x}{1+\lambda}\Big|\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}\\
&\le \frac{|x|}{1+\lambda}+c,\ \ x\in \mathbb{R},\end{split}\end{equation*}
holds for
\begin{equation}\label{NB}c:= \sup_{x\in [0, 1+\lambda]} \frac 1{\tilde p(x)} \sum_{i\in\mathbb{Z}} \Big|i-\frac{x}{1+\lambda}\Big|\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}<\infty,\end{equation} since the underlying function is periodic with period $1+\lambda$. Noting that $C=1$ and $\|{\rm Hess}_V\|_\infty^2 =4$, we conclude from this that condition \eqref{C1.3.1} holds for $\delta\in (1,1+\lambda).$ Then the proof is finished by Corollary \ref{C1.3}.
\end{proof}
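The constant $c$ in \eqref{NB} is easy to approximate numerically: by \eqref{**1} the weights are Gaussian in $i$, the tails decay fast, and the function under the supremum is periodic with period $1+\lambda$. The sketch below (with a hypothetical choice $\lambda=0.7$ and a truncated index set, neither of which is prescribed by the example) illustrates both the periodicity and the finiteness of $c$:

```python
import numpy as np

lam = 0.7                 # hypothetical lambda > 0; any positive value works
a = 1.0 + lam             # period of the function under the sup in (NB)
I = np.arange(-50, 51)    # truncated index set; the Gaussian tails are negligible

def mean_abs_dev(x):
    # weighted mean of |i - x/(1+lam)| under the weights
    # exp(-(1+lam)(i - x/(1+lam))^2), i.e. the quantity under the sup in (NB)
    m = x / a
    w = np.exp(-a * (I - m) ** 2)
    return np.sum(np.abs(I - m) * w) / np.sum(w)

# periodicity: shifting x by 1 + lam relabels the lattice and leaves the value unchanged
assert abs(mean_abs_dev(0.3) - mean_abs_dev(0.3 + a)) < 1e-8

# hence the sup over R equals the max over one period, which is finite
c = max(mean_abs_dev(x) for x in np.linspace(0.0, a, 2001))
print(0.0 < c < 2.0)  # -> True
```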
Finally, the following example shows that Theorem \ref{T1.2} may also work for unbounded ${\rm Hess}_V$.
\begin{exa}\label{E2}
Let $V(x)=c +|x|^p$ with $p\in [2,4)$ for some constant $c$ such that $\mu(\mathrm{d} x):=\mathrm{e}^{-V(x)}\mathrm{d} x$ is a probability measure on $\mathbb{R}^d$. Let $\nu$ be a probability measure on $\mathbb{R}^d$ with compact support. Then $\mu*\nu$ satisfies the log-Sobolev inequality.
\end{exa}
\begin{proof} Since $p\ge 2$, we have $V\in C^2(\mathbb{R}^d)$ and
$\Phi_\nu\in C(\mathbb{R}^d, \mathbb{R}^d).$ Let $R=\sup\{|z|: z\in {\rm
supp}\,\nu\}.$ Then
$$\int_{\mathbb{R}^d} |\nabla V(x)-\nabla V(x-z)|\nu_x(\mathrm{d} z) \le R\sup_{z\in B(x,R)} |{\rm Hess}_{V}(z)|\le C(R)(1+|x|^{p-2})$$ holds for some constant $C(R)>0$ and all $x\in\mathbb{R}^d$. Combining this with the fact that $2(p-2)<p$, which follows from $p<4$, we see that \eqref{T1.2.1} holds. Then the proof is finished by Theorem \ref{T1.2}. \end{proof}
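The gradient bound used in this proof can also be probed numerically. For a hypothetical instance $V(x)=|x|^p$ with $p=3$, $d=2$ and $R=1$ (none of which is singled out by the example), one has $\nabla V(x)=p|x|^{p-2}x$, and the ratio $|\nabla V(x)-\nabla V(x-z)|/(1+|x|^{p-2})$ should stay bounded uniformly in $x$ and $|z|\le R$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, R = 3.0, 1.0   # hypothetical: V(x) = |x|^p, p in [2, 4), supp(nu) in B(0, R)

def grad_V(x):
    # gradient of |x|^p, i.e. p |x|^{p-2} x (zero at the origin for p > 1)
    r = np.linalg.norm(x)
    return p * r ** (p - 2) * x if r > 0 else np.zeros_like(x)

ratios = []
for _ in range(20_000):
    x = rng.uniform(0.0, 50.0) * rng.normal(size=2)   # reaches far-out points
    z = rng.normal(size=2)
    z *= rng.uniform(0.0, R) / max(np.linalg.norm(z), 1e-12)  # |z| <= R
    num = np.linalg.norm(grad_V(x) - grad_V(x - z))
    ratios.append(num / (1.0 + np.linalg.norm(x) ** (p - 2)))

# for p = 3 the mean value bound gives at most
# p (p - 1) R (|x| + R) / (1 + |x|) = 6, so the ratio is uniformly small
print(max(ratios) < 10.0)  # -> True
```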
We will see in Remark 4.1 below that the assertion in Example
\ref{E2} remains true for $p\ge 4.$ Indeed, when $p>2$ the super
Poincar\'e inequality presented in Example \ref{E3.7} below is
stronger than the log-Sobolev inequality, see \cite[Corollary
3.3]{W00a}.
\subsection{Lyapunov Criterion}
\begin{thm} \label{T1.6} Assume that $V\in C^2(\mathbb R^d)$ has bounded $\Hess_V$ and that
\begin{equation}\label{T1.6.1} \Hess_V\ge K I \ \ \text{outside a compact set}\end{equation}
holds for some constant $K>0$. Then $\mu*\nu$
satisfies the log-Sobolev inequality provided the following two
conditions hold: \begin{enumerate}\item[$(C1)$] There exists a
constant $c>0$ such that $$\nu_x(f^2)-\nu_x(f)^2\le c\|\nabla
f\|_\infty^2,\ \ f\in C_b^1(\mathbb R^d),\ x\in\mathbb R^d.$$
\item[$(C2)$] $\limsup_{|x|\rightarrow\infty} \dfrac{\int_{\mathbb R^d} |\nabla V(-z)|\,\nu_x(\mathrm{d} z)}{|x|}<K.$\end{enumerate}
\end{thm}
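The following elementary special case, which is not part of the argument below and is included only for orientation, illustrates conditions $(C1)$ and $(C2)$: take $\nu=\delta_0$. Then $\nu_x=\delta_0$ for every $x$, so that $\nu_x(f^2)-\nu_x(f)^2=0$ and $(C1)$ holds with any $c>0$, while
$$\limsup_{|x|\rightarrow\infty} \frac{\int_{\mathbb R^d}|\nabla V(-z)|\,\nu_x(\mathrm{d} z)}{|x|}=\limsup_{|x|\rightarrow\infty}\frac{|\nabla V(0)|}{|x|}=0<K,$$
so $(C2)$ holds as well. In this case $\mu*\nu=\mu$, and the theorem reduces to a known log-Sobolev criterion for $\mu$ itself (cf.\ the Bakry--\'Emery criterion combined with bounded perturbation).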
We believe that Theorems \ref{T1.2} and \ref{T1.6} are incomparable: condition (\ref{T1.6.1}) is not necessary for (\ref{log}) to hold, nor does it provide an explicit upper bound on the constant $C$ in (\ref{log}), which enters condition (\ref{T1.2.1}) of Theorem \ref{T1.2}. It would, however, be rather complicated to construct explicit counterexamples confirming this observation.
The proof of Theorem \ref{T1.6} is based on
the following Lyapunov type criterion due to \cite[Theorem
1.2]{CGW}.
\begin{lem}[\cite{CGW}]\label{L2.1} Let $\mu_0(\mathrm{d} x)=\mathrm{e}^{-V_0(x)}\,\mathrm{d} x$
be a probability measure on $\mathbb R^d$ for some $V_0\in C^2(\mathbb R^d).$ Then
$\mu_0$ satisfies the log-Sobolev inequality provided the following
two conditions hold: \begin{enumerate}\item[{\rm (i)}] There exists a
constant $K_0\in\mathbb R$ such that $\Hess_{V_0}\ge K_0I$.
\item[{\rm (ii)}] There exists $W\in C^2(\mathbb R^d)$ with $W\ge 1$ such that
$$\Delta W(x)-\langle\nabla V_0,\nabla W\rangle(x)\le (c_1-c_2|x|^2)W(x),\ \ x\in\mathbb R^d$$
holds for some constants $c_1,c_2>0$.\end{enumerate}
\end{lem}
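As a concrete illustration of condition (ii) (this computation is not used in the sequel), let $V_0(x)=\frac{|x|^2}2+\frac d2\log(2\pi)$ be the standard Gaussian potential and take $W(x)=\mathrm{e}^{\varepsilon|x|^2}$ with $\varepsilon\in(0,\frac12)$. Then $\nabla W=2\varepsilon xW$ and $\Delta W=(2d\varepsilon+4\varepsilon^2|x|^2)W$, so that
$$\Delta W(x)-\langle\nabla V_0,\nabla W\rangle(x)=\big(2d\varepsilon-2\varepsilon(1-2\varepsilon)|x|^2\big)W(x),$$
which is exactly of the required form with $c_1=2d\varepsilon$ and $c_2=2\varepsilon(1-2\varepsilon)>0$.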
\begin{proof}[Proof of Theorem \ref{T1.6}] By (\ref{*0}) and Lemma \ref{L2.1}, it suffices to verify conditions
(i) and (ii) for $V_0=V_{\nu}:=-\log p_\nu$.

(a) Proof of (i). By the boundedness of $\Hess_V$ and condition
(\ref{T1.6.1}), it is easy to see that $p_\nu\in C^2(\mathbb R^d)$ and, for any
$X\in \mathbb R^d$ with $|X|=1$,
\begin{equation}\label{2.3}\Hess_{V_0}(X,X)= \frac 1
{p_\nu^2}\Big((\nabla_Xp_\nu)^2
-p_\nu\Hess_{p_\nu}(X,X)\Big).\end{equation} Moreover,
$$\nabla_Xp_\nu(x)= -p_\nu(x)\int_{\mathbb R^d} (\nabla_XV(x-z))\,\nu_x(\mathrm{d} z).$$ Then, letting
$K_1:=\|\Hess_V\|_\infty<\infty$, we obtain \begin{equation*}\begin{split} \Hess_{p_\nu}(X,X)(x)
&=\int_{\mathbb R^d}
\Big(|\nabla_XV(x-z)|^2-\Hess_V(X,X)(x-z)\Big)\mathrm{e}^{-V(x-z)}\,\nu(\mathrm{d} z)\\
&\le p_\nu(x) \int_{\mathbb R^d} |\nabla_XV(x-z)|^2\,\nu_x(\mathrm{d} z)
+K_1p_\nu(x).\end{split}\end{equation*} Combining these with (\ref{2.3})
and $(C1)$, we conclude that \begin{equation*}\begin{split}
\Hess_{V_0}(X,X)(x) &\ge -K_1 -\int_{\mathbb R^d}
(\nabla_XV(x-z))^2\,\nu_x(\mathrm{d} z) +\bigg(\int_{\mathbb R^d} \nabla_XV(x-z)\,
\nu_x(\mathrm{d} z)\bigg)^2\\
&\ge -K_1-cK_1^2,\end{split}\end{equation*} where the last step applies $(C1)$ to $f=\nabla_XV$, whose gradient is bounded by $K_1$. Thus, (i) holds for
$K_0=-K_1-cK_1^2$.
(b) Proof of (ii). Let $W(x)=\mathrm{e}^{\varepsilon |x|^2}$ for some constant $\varepsilon>0.$ Then
\begin{equation}\label{2.4} \frac{\Delta W-\langle\nabla V_0,\nabla W\rangle}{W}(x) = 2d\varepsilon
+4\varepsilon^2|x|^2 -2\varepsilon \int_{\mathbb R^d} \langle x,\nabla V(x-z)\rangle\,\nu_x(\mathrm{d}
z).\end{equation} Since $\Hess_V$ is bounded and \eqref{T1.6.1}
holds, the integral $\int_{\mathbb R^d} \langle x,\nabla V(x-z)\rangle\,\nu_x(\mathrm{d} z)$ is
well defined and locally bounded. By (\ref{T1.6.1}), there exists a
constant $r_0>0$ such that $\Hess_V\ge KI$ holds on the set
$\{|z|\ge r_0\}$. So, for $x\in\mathbb R^d$ with $|x|>2r_0$,
\begin{equation*}\begin{split} \langle\nabla V(x-z)-\nabla V(-z),x\rangle
&=|x|\int_0^{|x|}\Hess_V\Big(\frac{x}{|x|},\frac{x}{|x|}\Big)\Big(\frac{rx}{|x|}-z\Big)\,\mathrm{d}
r\\
&\ge K|x|^2 -(K+K_1)|x|\Big|\Big\{r\in [0,|x|]: \Big|\frac{r
x}{|x|}-z\Big|\le r_0\Big\}\Big|\\&\ge K|x|^2 - 2 (K+K_1) r_0
|x|.\end{split}\end{equation*} Combining this with (\ref{2.4}) and
$(C2)$, and noting that $$ \langle x,\nabla V(x-z)\rangle\ge\langle\nabla V(x-z)-\nabla
V(-z),x\rangle -|x|\cdot |\nabla V(-z)|,$$ we conclude that there exist constants
$C_1,C_2>0$ such that $$\frac{\Delta W-\langle\nabla V_0,\nabla W\rangle}{W}(x) \le 2d\varepsilon
+4\varepsilon^2|x|^2-\varepsilon C_1|x|^2 +\varepsilon C_2.$$ Taking $\varepsilon=\frac {C_1}8$, we
prove (ii) for some constants $c_1,c_2>0.$
\end{proof}
When $\nu$ has compact support, we have
$$\nu_x(f^2)-\nu_x(f)^2=\frac12\int_{\mathbb R^d\times\mathbb R^d} |f(z)-f(y)|^2\,\nu_x(\mathrm{d}
z)\,\nu_x(\mathrm{d} y)\le R^2\|\nabla f\|_\infty^2,$$ where $R:=\sup\{|z-y|:\
z,y\in {\rm supp}\,\nu\}<\infty$, and
$$\limsup_{|x|\rightarrow\infty} \frac{\int_{\mathbb R^d} |\nabla V(-z)|\,\nu_x(\mathrm{d} z)}{|x|}\le \lim_{|x|\rightarrow\infty} \frac{\sup_{{\rm supp}\,\nu} |\nabla
V|}{|x|}=0,$$ so that conditions $(C1)$ and $(C2)$ are automatically satisfied. The following direct consequence of Theorem
\ref{T1.6} therefore improves the above mentioned results in \cite{Z}
as well.
\begin{cor}\label{C1.1} Assume that $V\in C^2(\mathbb R^d)$ with bounded $\Hess_V$ is such that $(\ref{T1.6.1})$ holds. Then $\mu*\nu$ satisfies the log-Sobolev inequality for any compactly supported probability measure $\nu$.\end{cor}
To show that Theorem \ref{T1.6} also has a range of application
beyond Corollary \ref{C1.1} and Proposition \ref{P1.1}(1) for the
log-Sobolev inequality, we reprove Example
\ref{E1} by using Theorem \ref{T1.6}.
\begin{proof}[Proof of Example \ref{E1} using Theorem \ref{T1.6}] Obviously, (\ref{T1.6.1}) holds for $K=2$.
Let
$$ \tilde \nu_{x}=\frac1 {\tilde\gamma(x) } \sum_{i\in\mathbb Z} \mathrm{e}^{-(1+\lambda)(i-x)^2}\delta_i,\ \ \tilde\gamma(x)= \sum_{i\in\mathbb Z} \mathrm{e}^{-(1+\lambda)(i-x)^2}.$$
By (\ref{**1}) we have $\tilde\nu_{x}=\nu_{(1+\lambda)x}.$ Thus, we only need to
verify conditions $(C1)$ and $(C2)$ for $\tilde\nu_x$ in place of
$\nu_x$.
(a) To prove condition $(C1)$, we make use of a Hardy type
inequality for birth-death processes with Dirichlet boundary
introduced in \cite{M}. Let $x\in \mathbb R$ be fixed. For any bounded
function $f$ on $\mathbb Z$, let $\tilde f(i)= f(i)-f(i_x)$, where
$i_x:=\sup\{i\in\mathbb Z: i\le x\}$ is the integer part of $x$. Then
\begin{equation}\label{2.5} \tilde\nu_x(f^2)-\tilde\nu_x(f)^2\le
\sum_{i=-\infty}^{i_x} \tilde f(i)^2\tilde\nu_x(i) + \sum_{i=i_x}^\infty
\tilde f(i)^2\tilde\nu_x(i).\end{equation} It is easy to see that there
exists a constant $c>0$ independent of $x$ such that for any $m\ge
i_x>x-1$,
$$\sum_{i=i_x}^m \mathrm{e}^{(1+\lambda)(i-x)^2}\le c\,\mathrm{e}^{(1+\lambda)(m-x)^2},\ \
\sum_{i=m+1}^\infty \mathrm{e}^{-(1+\lambda)(i-x)^2}\le c\,
\mathrm{e}^{-(1+\lambda)(m+1-x)^2}.$$ Therefore, \begin{equation*}\begin{split}
&\sup_{m\ge i_x} \Big(\sum_{i=i_x}^m
\mathrm{e}^{(1+\lambda)(i-x)^2}\Big)\sum_{i=m+1}^\infty
\mathrm{e}^{-(1+\lambda)(i-x)^2}\\
&\le c^2\mathrm{e}^{(1+\lambda)\{(m-x)^2-(m+1-x)^2\}}
= c^2
\mathrm{e}^{(1+\lambda)\{2(x-m)-1\}}\le c^2\mathrm{e}^{1+\lambda}.\end{split}\end{equation*}
By this and the Hardy inequality (see \cite[Theorem
1.3.9]{Wbook}), we have
$$\sum_{i=i_x}^\infty \tilde f(i)^2\tilde\nu_x(i)\le 4 c^2\mathrm{e}^{1+\lambda}\sum_{i=i_x}^\infty
(f(i+1)-f(i))^2\tilde\nu_x(i).$$ Similarly,
$$\sum_{i=-\infty}^{i_x} \tilde f(i)^2\tilde\nu_x(i)\le 4
c^2\mathrm{e}^{1+\lambda}\sum_{i=-\infty}^{i_x} (f(i-1)-f(i))^2\tilde\nu_x(i).$$
Combining these with (\ref{2.5}), we prove $(C1)$ for $\tilde\nu_x$ and
some constant $c>0$ (independent of $x\in \mathbb R$).
(b) Let $\tilde p(x)= \sum_{i\in\mathbb Z}\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}.$
Noting that $\nabla V(z)= 2 z$, by (\ref{**2}) we obtain
\begin{equation*}\begin{split}\int_{\mathbb R^d}|\nabla V(-z)|\,\nu_x(\mathrm{d} z)&= \frac 2 {\tilde p(x)}
\sum_{i\in\mathbb Z} |i|\,\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}\\
&\le \frac{2|x|}{1+\lambda} +\frac 2 {\tilde p(x)}
\sum_{i\in\mathbb Z} \Big|i-\frac{x}{1+\lambda}\Big|\mathrm{e}^{-(1+\lambda)(i-x/(1+\lambda))^2}\\
&\le c+ \frac{2|x|}{1+\lambda},\end{split}\end{equation*}
where $c>0$ is the constant in \eqref{NB}. Therefore,
$$\limsup_{|x|\rightarrow\infty} \frac{\int_{\mathbb R^d}|\nabla V(-z)|\,\nu_x(\mathrm{d} z)}{|x|}\le \frac 2 {1+\lambda} <2=K.$$ Thus, condition $(C2)$ holds.
\end{proof}
We close this section with two remarks on the perturbation argument and on the Lyapunov criteria for convolution probability measures.
\paragraph{Remark 2.1} (1) Both Theorems \ref{T1.2} and \ref{T1.6} provide qualitative conditions ensuring the validity of the log-Sobolev inequality for convolution probability measures. It would be interesting to derive explicit estimates on the log-Sobolev constant, i.e. the smallest constant such that the log-Sobolev inequality holds. Recently, by refining the conditions in Lemma \ref{L2.1}, Zimmermann \cite{zi2014} estimated the log-Sobolev constant for the convolution of a Gaussian measure with a compactly supported measure (see \cite[Theorem 10]{zi2014} for more details). Similar estimates can be derived in the present general framework. However, since estimates obtained from perturbation arguments are in general not sharp, we will not go further in this direction and leave the quantitative estimates, to be obtained by other means, to a forthcoming paper.
(2) As mentioned in Section \ref{sec1}, the convolution of probability measures corresponds to the sum of independent random variables. So, by induction, we may use the Lyapunov criteria to investigate functional inequalities for multi-fold convolution measures. In this case it is interesting to study the behavior of the optimal constant (e.g. the log-Sobolev constant) as the multiplicity goes to $\infty$. For this we need fine estimates on the constant in terms of the multiplicity, which is related to the discussion in Remark 2.1(1). Of course, for functional inequalities enjoying the sub-additivity property, it is possible to derive multiplicity-free estimates on the optimal constant; see e.g. the recent paper \cite{PLS} for Beckner-type inequalities of convolution measures on the abstract Wiener space.
\section{Poincar\'{e} inequality}\label{sec3}
In the spirit of the proof of Theorem \ref{T1.6}, in this section we
study the Poincar\'e inequality for convolution measures using the
Lyapunov conditions presented in \cite{BCG, BBCG}. One may also wish to use the following easy-to-check perturbation result on
the Poincar\'e inequality, which corresponds to Lemma \ref{L2.2}.

\emph{If the probability measure $\mu_V(\mathrm{d} x)=\mathrm{e}^{-V(x)}\,\mathrm{d} x$ satisfies
the Poincar\'e inequality \eqref{pin} with some constant $C>0$, then for any $V_0\in C^1(\mathbb R^d)$ such
that $\int \mathrm{e}^{-V_0(x)}\,\mathrm{d} x=1$ and
$C \|\nabla (V-V_0)\|_\infty^2<2$, the probability measure $\mu_{V_0}(\mathrm{d} x)=\mathrm{e}^{-V_0(x)}\,\mathrm{d} x$
satisfies the Poincar\'e inequality \eqref{pin} (with a different constant) as well.}

Since the boundedness condition on $\nabla(V-V_0)$ is rather strong (for instance, it excludes Example \ref{E3.3}(1) below for $p>2$), here, and also in the next section for the super Poincar\'e
inequality, we will use the Lyapunov criteria rather than this perturbation result. By combining Theorem \ref{T3.1} below with \cite[Theorem 1.4]{BBCG}, one may derive quantitative estimates on the Poincar\'{e} constant (or the spectral gap).
\begin{thm}\label{T3.1} Let $\mu(\mathrm{d} x)=\mathrm{e}^{-V(x)}\,\mathrm{d} x$ be a
probability measure on $\mathbb R^d$ and let $\nu$ be a probability measure
on $\mathbb R^d$. Assume that $\Phi_\nu$ in Theorem $\ref{T1.2}$ is well-defined and continuous. Then $\mu*\nu$ satisfies the Poincar\'e inequality
\eqref{pin} if at least one of the following conditions holds:
\begin{enumerate}\item[$(1)$] $V\in C^1(\mathbb R^d)$ is such that
$\liminf\limits_{|x|\rightarrow\infty} \dfrac{\int_{\mathbb R^d} \langle x, \nabla V(x-z)\rangle \,\nu_x(\mathrm{d}
z)}{|x|}>0.$ \item[$(2)$] $V\in C^2(\mathbb R^d)$ is such that $\tilde\Phi_\nu(x):= \int_{\mathbb R^d}(\nabla^2 V)(x-z)\, \nu_x(\mathrm{d} z)$ is well-defined and continuous in $x$, and there is a constant $\delta\in(0,1)$ such that
$$ \liminf_{|x|\rightarrow\infty} \int_{\mathbb R^d}\! \Big(\delta|\nabla V(x-z)|^2-\Delta V(x-z)\Big)
\, \nu_x(\mathrm{d} z)>0.$$
\end{enumerate} \end{thm}
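For orientation only (this example is not needed in the proof), condition $(2)$ can be checked by hand for the quadratic potential $V(x)=|x|^2$ with $\nu=\delta_0$: then $\nu_x=\delta_0$, $\nabla V(x)=2x$ and $\Delta V=2d$, so for any $\delta\in(0,1)$,
$$\int_{\mathbb R^d}\Big(\delta|\nabla V(x-z)|^2-\Delta V(x-z)\Big)\,\nu_x(\mathrm{d} z)=4\delta|x|^2-2d\longrightarrow\infty$$
as $|x|\rightarrow\infty$, so the liminf condition is satisfied.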
\begin{proof}
Let $L_{\nu}=\Delta-\nabla V_\nu\cdot\nabla$. According to \cite[Theorem 3.5]{BCG} or \cite[Theorem 1.4]{BBCG}, $(\mu*\nu)(\mathrm{d} x):=\mathrm{e}^{-V_\nu(x)}\mathrm{d} x$ satisfies the Poincar\'e inequality provided there exist a $C^2$-function $W\ge1$ and positive constants $\theta, b, R$ such that for all $x\in\mathbb{R}^d$,
\begin{equation}\label{pi-1}
L_{\nu}W(x)\le -\theta W(x)+b\,\mathbf{1}_{B(0,R)}(x).
\end{equation}
In particular, by \cite[Corollary 1.6]{BBCG}, if either
\begin{equation}\label{*D1}
\liminf_{|x|\to\infty} \frac{\langle\nabla V_\nu(x),x\rangle}{|x|}>0,
\end{equation}
or there is a constant $\delta\in (0,1)$ such that
\begin{equation}\label{*D2}
\liminf_{|x|\to\infty} \Big(\delta |\nabla V_\nu(x)|^2-\Delta V_\nu(x)\Big)>0,
\end{equation}
then \eqref{pi-1} holds.

Now, as shown in the proof of Theorem \ref{T1.2}, the continuity of $\Phi_\nu$ implies that $V_\nu\in C^1(\mathbb{R}^d)$ and
$$\langle\nabla V_\nu(x),x\rangle=\int_{\mathbb{R}^d}\langle\nabla V(x-z), x\rangle\,\nu_x(\mathrm{d} z).$$
Hence condition (1) in Theorem \ref{T3.1} implies \eqref{*D1}, and therefore the Poincar\'e inequality for $\mu*\nu$.

On the other hand, repeating the argument leading to $F\in C^1(\mathbb{R}^d)$ in the proof of Theorem \ref{T1.2}, we conclude that the continuity of $\Phi_\nu$ and $\tilde\Phi_\nu$ implies $V_\nu\in C^2(\mathbb{R}^d)$ and
\begin{equation*}
\begin{split}
&|\nabla V_\nu(x)|^2=\Big|\int_{\mathbb{R}^d} \nabla V(x-z)\,\nu_x(\mathrm{d} z)\Big|^2,\\
&\Delta V_\nu(x)=|\nabla V_\nu(x)|^2+\int_{\mathbb{R}^d} \big\{\Delta V(x-z) - |\nabla V(x-z)|^2\big\}\,\nu_x(\mathrm{d} z).
\end{split}
\end{equation*}
Then for any $\delta\in(0,1)$, since $|\nabla V_\nu(x)|^2\le\int_{\mathbb{R}^d}|\nabla V(x-z)|^2\,\nu_x(\mathrm{d} z)$ by Jensen's inequality,
\begin{equation*}
\begin{split}
\delta |\nabla V_\nu(x)|^2-\Delta V_\nu(x)&=\int_{\mathbb{R}^d} \Big(|\nabla V(x-z)|^2-\Delta V(x-z)\Big)\,\nu_x(\mathrm{d} z)-(1-\delta)|\nabla V_\nu(x)|^2\\
&\ge \int_{\mathbb{R}^d} \Big(\delta|\nabla V(x-z)|^2-\Delta V(x-z)\Big)\,\nu_x(\mathrm{d} z).
\end{split}
\end{equation*}
Combining this with condition (2) in Theorem \ref{T3.1} we obtain \eqref{*D2}, and hence the Poincar\'e inequality for $\mu*\nu$.
\end{proof}
When the measure $\nu$ is compactly supported, we have the following consequence of Theorem \ref{T3.1}.
\begin{cor}\label{C3.2}
Let $\nu$ be a compactly supported probability measure on $\mathbb{R}^d$, and set $R:=\sup\{|z|:\, z\in {\rm supp}\,\nu\}<\infty$. If either $V\in C^1(\mathbb{R}^d)$ with
\begin{equation}\label{C3.2.1}
\liminf_{|x|\to\infty}\frac{\langle\nabla V(x),x\rangle-R|\nabla V(x)|}{|x|}>0,
\end{equation}
or $V\in C^2(\mathbb{R}^d)$ and there is a constant $\delta\in(0,1)$ such that
\begin{equation}\label{C3.2.2}
\liminf_{|x|\to\infty} \big(\delta|\nabla V(x)|^2-\Delta V(x)\big)>0,
\end{equation}
then $\mu*\nu$ satisfies the Poincar\'e inequality.
\end{cor}
\begin{proof}
Since the support of $\nu$ is compact, the continuity of $\Phi_\nu$ when $V\in C^1(\mathbb{R}^d)$, and that of $\tilde\Phi_\nu$ when $V\in C^2(\mathbb{R}^d)$, are obvious. Below we verify conditions (1) and (2) in Theorem \ref{T3.1} using \eqref{C3.2.1} and \eqref{C3.2.2} respectively.

(a) By \eqref{C3.2.1} we obtain
\begin{equation*}
\begin{split}
\int_{\mathbb{R}^d}\langle x, \nabla V(x-z)\rangle\, \nu_x(\mathrm{d} z)&=\int_{\mathbb{R}^d}\Big(\langle x-z,\nabla V(x-z)\rangle +\langle z, \nabla V(x-z)\rangle\Big)\,\nu_x(\mathrm{d} z)\\
&\ge \int_{\mathbb{R}^d} \Big(\langle x-z,\nabla V(x-z)\rangle -R|\nabla V(x-z)|\Big)\,\nu_x(\mathrm{d} z)\\
&\ge \int_{\mathbb{R}^d} \big(c_1|x-z|-c_2\big)\,\nu_x(\mathrm{d} z)\\
&\ge c_1(|x|-R)^+ -c_2
\end{split}
\end{equation*}
for some constants $c_1,c_2>0$, where the first inequality uses $|z|\le R$ on ${\rm supp}\,\nu$. Then condition (1) in Theorem \ref{T3.1} holds.

(b) According to \eqref{C3.2.2}, there are positive constants $r_1, c_3, c_4$ such that for all $x\in\mathbb{R}^d$,
\begin{equation}\label{D3}
\begin{split}
&\int_{\mathbb{R}^d} \Big(\delta|\nabla V(x-z)|^2-\Delta V(x-z)\Big)\, \nu_x(\mathrm{d} z)\\
&\ge c_3\int_{\{|x-z|>r_1\}} \nu_x(\mathrm{d} z)-c_4\int_{\{|x-z|\le r_1\}}\nu_x(\mathrm{d} z).
\end{split}
\end{equation}
Since for $x\in\mathbb{R}^d$ with $|x|>R+r_1$ we have
$$\int_{\{|x-z|>r_1\}} \nu_x(\mathrm{d} z)\ge \int_{\{|z|\le R\}} \nu_x(\mathrm{d} z)=1$$
and
$$\int_{\{|x-z|\le r_1\}} \nu_x(\mathrm{d} z)\le \int_{\{|z|>R\}} \nu_x(\mathrm{d} z)=0,$$
\eqref{D3} implies condition (2) in Theorem \ref{T3.1}.
\end{proof}
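For illustration, consider the Gaussian-type potential $V(x)=c+\tfrac12|x|^2$ (an explicit check not carried out in the text): here $\nabla V(x)=x$ and $\Delta V(x)=d$, so for any $\delta\in(0,1)$,
$$\delta|\nabla V(x)|^2-\Delta V(x)=\delta|x|^2-d\longrightarrow\infty\quad\text{as }|x|\to\infty,$$
and \eqref{C3.2.2} holds; hence $\mu*\nu$ satisfies the Poincar\'e inequality for every compactly supported probability measure $\nu$.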
Finally, we present the following examples to illustrate Theorem \ref{T3.1} and Corollary \ref{C3.2}.

\begin{exa}\label{E3.3}
$(1)$ Let $V(x)= c+|x|^p$ for some $p\ge 1$ and a constant $c$ such that $\mu(\mathrm{d} x):= \mathrm{e}^{-V(x)}\mathrm{d} x$ is a probability measure on $\mathbb{R}^d$. Then $\mu*\nu$ satisfies the Poincar\'e inequality for every compactly supported probability measure $\nu$ on $\mathbb{R}^d$.

$(2)$ Let $d=1$, $V(x)=c+ \sqrt{1+x^2}$ and
$$\nu(\mathrm{d} z)= \frac 1 \gamma \sum_{i\in\mathbb{Z}} \mathrm{e}^{-|i|}\delta_i(\mathrm{d} z),\qquad \gamma:=\sum_{i\in\mathbb{Z}}\mathrm{e}^{-|i|},$$
where $c=\log \int_{\mathbb{R}} \mathrm{e}^{-\sqrt{1+x^2}}\,\mathrm{d} x$ and $\delta_i$ is the Dirac measure at the point $i$. Then $\mu*\nu$ satisfies the Poincar\'e inequality.
\end{exa}
\begin{proof}
Since for $p<2$ the function $V(x)=c+|x|^p$ fails to be $C^2$ at the point $0$, we take $\tilde{V}\in C^2(\mathbb{R}^d)$ such that $\tilde{V}(x)=V(x)$ for $|x|\ge1$. Let $\tilde\mu (\mathrm{d} x)= \tilde{C}\, \mathrm{e}^{-\tilde{V}(x)}\mathrm{d} x$, where $\tilde{C}>0$ is a normalizing constant making $\tilde\mu$ a probability measure. By the stability of the Poincar\'e inequality under bounded perturbations (see e.g.\ \cite[Proposition 17]{Ch}), it suffices to prove that $\tilde\mu*\nu$ satisfies the Poincar\'e inequality.

In case (1) the assertion is then a direct consequence of Corollary \ref{C3.2}. So we only have to verify condition (1) in Theorem \ref{T3.1} in case (2). For simplicity, we only carry out the verification for $x\to\infty$, i.e.\ we prove
\begin{equation}\label{3.1'}
\lim_{x\to\infty} \frac{\int_{\mathbb{R}}\, x V'(x-z)\,\nu_x(\mathrm{d} z)}{|x|} >0.
\end{equation}
Let $i_x$ be the integer part of $x$, and $h_x=x-i_x$. Note that for any $x>0$,
\begin{equation}\label{3.2'}
\begin{split}
\frac{\int_{\mathbb{R}}\, x V'(x-z)\,\nu_x(\mathrm{d} z)}{|x|}&=\int_{\mathbb{R}} V'(x-z)\,\nu_x(\mathrm{d} z)\\
&=\frac{\sum_{i\in\mathbb{Z}}\frac{x-i}{\sqrt{1+(x-i)^2}}\,\mathrm{e}^{-\sqrt{1+(x-i)^2}-|i|}}{\sum_{i\in \mathbb{Z}} \mathrm{e}^{-\sqrt{1+(x-i)^2}-|i|}}\\
&=\frac{\sum_{k\in \mathbb{Z}}\frac{h_x+k}{\sqrt{1+(h_x+k)^2}}\,\mathrm{e}^{-\sqrt{1+(h_x+k)^2}-|i_x-k|}}{\sum_{k\in \mathbb{Z}} \mathrm{e}^{-\sqrt{1+(h_x+k)^2}-|i_x-k|}}\\
&=:1-p_\nu(x)^{-1}\sum_{k\in\mathbb{Z}} (a_k b_k)(x),
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
&a_k(x):=\frac{\sqrt{1+(h_x+k)^2}-(h_x+k)}{\sqrt{1+(h_x+k)^2}},\\
&b_k(x):=\mathrm{e}^{-\sqrt{1+(h_x+k)^2}-|i_x-k|},\qquad p_\nu(x):= \sum_{k\in \mathbb{Z}} b_k(x).
\end{split}
\end{equation*}
It is easy to see that
$$0\le a_k(x)\le \begin{cases} (1+k^2)^{-1/2}, & k\ge 0,\\ 2, & k<0.\end{cases}$$
Then for any $n\ge 1$,
\begin{equation*}
\begin{split}
\sum_{k\in\mathbb{Z}} (a_k b_k)(x)&=\sum_{k\le 0}(a_k b_k)(x)+\sum_{k=1}^n(a_k b_k)(x)+ \sum_{k=n+1}^{\infty} (a_k b_k)(x)\\
&\le 2\sum_{k\le 0}b_k(x)+\sum_{k=1}^n b_k(x)+\frac 1 {n+1}\sum_{k=n+1}^{\infty} b_k(x).
\end{split}
\end{equation*}
Thus, for any $x>0$ and $1\le n\le i_x$,
\begin{equation*}
\begin{split}
&\sum_{k\le 0}b_k(x)\le \mathrm{e}^{-x}+\sum_{k=-\infty}^{-1} \mathrm{e}^{-(-k-h_x)-(i_x-k)}\le (2\mathrm{e}^2+1)\,\mathrm{e}^{-x},\\
&\sum_{k=1}^n b_k(x)\le n\,\mathrm{e}^{-x},\qquad p_\nu(x)\ge \sum_{k=1}^{i_x}b_k(x)\ge i_x\,\mathrm{e}^{-x-1}.
\end{split}
\end{equation*}
Then for any $n\ge 1$,
$$\limsup_{x\to\infty}\, \frac 1 {p_\nu(x)} \sum_{k\in\mathbb{Z}} (a_kb_k)(x) \le \lim_{x\to\infty} \bigg\{\frac{\mathrm{e}^{x+1}\big(2(2\mathrm{e}^2+1)+n\big)\mathrm{e}^{-x}}{i_x}+\frac 1 {n+1}\bigg\}=\frac 1{n+1}.$$
Letting $n\to\infty$ we obtain $\lim_{x\to\infty} p_\nu(x)^{-1}\sum_{k\in\mathbb{Z}} (a_k b_k)(x)=0$. Combining this with \eqref{3.2'} we prove \eqref{3.1'}.
\end{proof}
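To make case (1) explicit (a computation left implicit in the proof above, with $\tilde V$ as defined there): for $|x|\ge 1$ we have $\nabla \tilde V(x)=p|x|^{p-2}x$ and $|\nabla\tilde V(x)|=p|x|^{p-1}$, whence
$$\frac{\langle\nabla \tilde V(x),x\rangle-R|\nabla \tilde V(x)|}{|x|}=p|x|^{p-2}\big(|x|-R\big)\xrightarrow[\;|x|\to\infty\;]{}
\begin{cases} 1, & p=1,\\ \infty, & p>1,\end{cases}$$
so $\tilde V$ satisfies \eqref{C3.2.1} for every $R>0$, and Corollary \ref{C3.2} applies to $\tilde\mu*\nu$.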
\section{Super Poincar\'e Inequality}\label{sec4}

In this section we extend the results of Section 3 to the super Poincar\'e inequality.
\begin{thm}\label{T3.4}
Let $\mu(\mathrm{d} x)=\mathrm{e}^{-V(x)}\mathrm{d} x$ be a probability measure on $\mathbb{R}^d$ and let $\nu$ be a probability measure on $\mathbb{R}^d$. Define
$$\alpha(r,s)=(1+s^{-d/2})\,\frac{\Big(\sup_{|x|\le r}\mathrm{e}^{-V(x)}\Big)^{d/2+1}}{\Big(\inf_{|x|\le r}\mathrm{e}^{-V(x)}\Big)^{d/2+2}},\qquad s,r>0.$$
\begin{enumerate}
\item[$(1)$] If $V\in C^1(\mathbb{R}^d)$ is such that
\begin{equation}\label{T3.4.1}
\liminf_{|x|\to\infty} \frac{\int_{\mathbb{R}^d} \langle x, \nabla V(x-z)\rangle\, \nu_x(\mathrm{d} z)}{|x|}=\infty,
\end{equation}
then $\mu*\nu$ satisfies the super Poincar\'e inequality \eqref{sup-pin} with
$$\beta(r)=c\Big(1+\alpha\big(\psi(2/r), r/2\big)\Big)$$
for some constant $c>0$, where
$$\psi(r):=\inf\bigg\{s>0:\ \inf_{|x|\ge s}\frac{\int_{\mathbb{R}^d} \langle x, \nabla V(x-z)\rangle\, \nu_x(\mathrm{d} z)}{|x|}\ge r\bigg\}<\infty,\qquad r>0.$$
\item[$(2)$] Suppose that $V\in C^2(\mathbb{R}^d)$ and there is a constant $\delta\in(0,1)$ such that
\begin{equation}\label{T3.4.2}
\liminf_{|x|\to\infty} \int_{\mathbb{R}^d} \Big(\delta|\nabla V(x-z)|^2-\Delta V(x-z)\Big)\, \nu_x(\mathrm{d} z)=\infty.
\end{equation}
Then $\mu*\nu$ satisfies the super Poincar\'e inequality \eqref{sup-pin} with
$$\beta(r)=c\Big(1+\alpha\big(\tilde\psi (2/r), r/2\big)\Big)$$
for some constant $c>0$, where
$$\tilde\psi(r):=\inf\bigg\{s>0:\ \inf_{|x|\ge s}\int_{\mathbb{R}^d} \Big(\delta|\nabla V(x-z)|^2-\Delta V(x-z)\Big)\, \nu_x(\mathrm{d} z)\ge r\bigg\}<\infty,\qquad r>0.$$
\end{enumerate}
\end{thm}
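As a rough illustration of how the rate $\beta$ can be estimated (a sketch, not part of the statement): let $V(x)=c+|x|^p$ with $p>1$, so that $V\in C^1(\mathbb{R}^d)$, and let $\nu$ be compactly supported with $R:=\sup\{|z|:\,z\in{\rm supp}\,\nu\}$. Arguing as in the proof of Corollary \ref{C3.2}, for $|x|\ge 4(R+1)$ and $|z|\le R$ one has $\langle x,\nabla V(x-z)\rangle\ge p|x-z|^{p-1}\big(|x-z|-R\big)$, and since $|x-z|\ge \tfrac34|x|$ and $|x-z|-R\ge\tfrac12|x|$, integrating against the probability measure $\nu_x$ yields
$$\frac{\int_{\mathbb{R}^d}\langle x,\nabla V(x-z)\rangle\,\nu_x(\mathrm{d} z)}{|x|}\ge c_1|x|^{p-1}$$
with $c_1=c_1(p)>0$. Hence \eqref{T3.4.1} holds and $\psi(r)\le c_2\, r^{1/(p-1)}$ for large $r$ and some $c_2=c_2(p,R)>0$.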
The proof of Theorem \ref{T3.4} is based on the following lemma.
\begin{lem}\label{L3.5}
Let $\mu_V(\mathrm{d} x)=\mathrm{e}^{-V(x)}\mathrm{d} x$ be a probability measure on $\mathbb{R}^d$. Assume that there are functions $W\ge 1$, $\phi>0$ with $\liminf_{|x|\to\infty}\phi(x)=\infty$ and constants $b,r_0>0$ such that
$$\frac{\Delta W- \langle\nabla W, \nabla V\rangle}{W}\le -\phi+b\,\mathbf{1}_{B(0,r_0)}.$$
Then the following super Poincar\'e inequality holds:
$$\mu_V(f^2)\le r\, \mu_V(|\nabla f |^2)+\beta(r)\, \mu_V(|f|)^2,$$
with
$$\beta(r)=c \Big(1+\alpha\big(\psi_\phi(2/r), r/2\big)\Big),\qquad r>0,$$
for some constant $c>0$, where
$$\psi_\phi(r):=\inf\big\{s>0:\ \inf_{|x|\ge s}\phi(x)\ge r\big\}.$$
\end{lem}
\begin{proof} It is well known (see e.g.\ \cite[Proposition 3.1]{CGWW}) that there exists a constant $C>0$ such that
for any $t, s>0$ and $f\in C^1(\mathbb{R}^d)$,
$$\int_{B(0,t)}f^2(x)\,\mathrm{d} x\le s \int_{B(0,t)}|\nabla f(x)|^2\,\mathrm{d} x+ C(1+s^{-d/2})\Big(\int_{B(0,t)}|f|(x)\,\mathrm{d} x\Big)^2.$$ Therefore,
\begin{equation*}\begin{split} \int_{B(0,t)}f^2(x)\,\mu_V(\mathrm{d} x)&\le
\Big(\sup_{|x|\le t}\mathrm{e}^{-V(x)}\Big)\int_{B(0,t)}f^2(x)\,\mathrm{d} x\\
&\le s \frac{\sup_{|x|\le t}\mathrm{e}^{-V(x)}}{\inf_{|x|\le t}\mathrm{e}^{-V(x)}}
\int_{B(0,t)}|\nabla f(x)|^2\,\mu_V(\mathrm{d} x)\\
&\quad + C(1+s^{-d/2}) \frac{\sup_{|x|\le
t}\mathrm{e}^{-V(x)}}{\Big(\inf_{|x|\le
t}\mathrm{e}^{-V(x)}\Big)^2}\Big(\int_{B(0,t)}|f|(x)\,\mu_V(\mathrm{d} x)\Big)^2\\
&\le s \frac{\sup_{|x|\le t}\mathrm{e}^{-V(x)}}{\inf_{|x|\le t}\mathrm{e}^{-V(x)} }
\mu_V(|\nabla f|^2)+C(1+s^{-d/2}) \frac{\sup_{|x|\le
t}\mathrm{e}^{-V(x)}}{\Big(\inf_{|x|\le t}\mathrm{e}^{-V(x)}\Big)^2}\mu_V(|f|)^2.
\end{split}\end{equation*}
Taking $s=r\frac{\inf_{|x|\le t}\mathrm{e}^{-V(x)} }{\sup_{|x|\le
t}\mathrm{e}^{-V(x)}}$ in the inequality above, we conclude that for any $t,
r>0$ and $f\in C^1(\mathbb{R}^d)$,
$$\int_{B(0,t)}f^2(x)\,\mu_V(\mathrm{d} x)\le r \mu_V(|\nabla
f|^2)+C\alpha(t,r)\mu_V(|f|)^2.$$
Thus, the proof is finished by \cite[Theorem 2.10]{CGWW} and
the fact that the function $\alpha(r,s)$ is increasing in
$r$ and decreasing in $s$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T3.4}] As in the proof of Theorem \ref{T3.1}, let $L_{\nu}=\Delta-\nabla V_\nu$.
In case (1), we consider a smooth function $W$ such that $W(x)=\mathrm{e}^{2|x|}$ for $|x|\ge1$ and $W(x)\ge 1$ for all $x\in \mathbb{R}^d$. We have
$$\frac{L_{\nu}W(x)}{W(x)}\le -\frac{\langle x, \nabla V_\nu(x)\rangle}{|x|}\, 1_{\{|x|\ge 1\}}+ b1_{\{|x|\le 1\}} $$ for some constant $b>0$. Then, the required assertion follows from Lemma \ref{L3.5} and the proof of Theorem \ref{T3.1}(1).
In case (2), we consider a smooth function $W$ such that $W(x)=\mathrm{e}^{(1-\delta)V(x)}$ for $|x|\ge 1$ and $W(x)\ge 1$ for all $x\in \mathbb{R}^d$. Then, $$\frac{L_{\nu}W(x)}{W(x)}\le -(1-\delta)\big(\delta|\nabla V(x)|^2-\Delta V(x)\big)+ b1_{\{|x|\le 1\}} $$ for some constant $b>0$. This, along with Lemma \ref{L3.5} and the proof of Theorem \ref{T3.1}(2), yields the desired assertion. \end{proof}
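To indicate where the Lyapunov bound in case (2) comes from, we sketch the underlying computation (with the error caused by replacing $\nabla V$ by $\nabla V_\nu$ handled as in the proof of Theorem \ref{T3.1}(2)): for a smooth function $U$ and $W:=\mathrm{e}^{(1-\delta)U}$ one has $\nabla W=(1-\delta)W\nabla U$ and $\Delta W=(1-\delta)W\big(\Delta U+(1-\delta)|\nabla U|^2\big)$, so that
$$\frac{\Delta W-\langle \nabla W,\nabla U\rangle}{W}=(1-\delta)\Delta U-\delta(1-\delta)|\nabla U|^2=-(1-\delta)\big(\delta|\nabla U|^2-\Delta U\big).$$
Hence, growth of $\delta|\nabla U|^2-\Delta U$ produces exactly a function $\phi$ of the form required in Lemma \ref{L3.5}.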
According to the proof of Corollary \ref{C3.2}, when the measure $\nu$ has compact support, we can deduce the following statement from Theorem \ref{T3.4}.
\begin{cor}\label{C3.6} Let $\nu$ be a probability measure
on $\mathbb{R}^d$ with compact support, and set $R:=\sup\{|z|: z\in {\rm
supp}\,\nu\}<\infty.$ \begin{enumerate}
\item[$(1)$] If \begin{equation}\label{C3.6.1}\liminf_{|x|\rightarrow\infty}
\frac{\langle\nabla V(x),x\rangle-R|\nabla V(x)|}{|x|}=\infty,\end{equation} then
$\mu*\nu$ satisfies the super Poincar\'{e} inequality
\eqref{sup-pin} with $$\beta(r)=c \Big(1+\alpha(\psi(2/r),
r/2)\Big) $$ for some constant $c>0$, where
$$\psi(r):=\inf\bigg\{s>0: \inf_{|x|\ge 2s}
\frac{\langle\nabla V(x),x\rangle-R|\nabla V(x)|}{|x|}\ge r\bigg\}.$$
\item[$(2)$] If there is a constant $\delta\in(0,1)$ such that
\begin{equation}\label{C3.6.2} \liminf_{|x|\rightarrow\infty} \big(\delta|\nabla V(x)|^2-\Delta V(x)\big)=\infty,\end{equation}
then $\mu*\nu$ satisfies the super Poincar\'{e} inequality \eqref{sup-pin} with
$$\beta(r)=c \Big(1+\alpha(\tilde\psi(2/r), r/2)\Big) $$ for some constant $c>0$, where
$$\tilde\psi(r):=\inf\bigg\{s>0: \inf_{|x|\ge 2s}
\big(\delta|\nabla V(x)|^2-\Delta V(x)\big)\ge r\bigg\}.$$
\end{enumerate}
\end{cor}
The proof of Corollary \ref{C3.6} is similar to that of Corollary \ref{C3.2}, and is thus omitted.
Finally, we consider the following example to illustrate Corollary \ref{C3.6}.
\begin{exa}\label{E3.7} Let $V(x)=c+|x|^p$ for some $p>1$ and $c\in\mathbb{R}$ such that $\mu(\mathrm{d} x):= \mathrm{e}^{-V(x)}\,\mathrm{d} x$ is a probability measure on $\mathbb{R}^d$. Then for any compactly supported probability measure $\nu$, there exists a constant $c>0$ such that $\mu*\nu$ satisfies the super Poincar\'{e} inequality \eqref{sup-pin} with
\begin{equation}\label{WFJ} \beta(r)=\exp\Big(cr^{-\frac{p}{2(p-1)}}\Big),\ \ r>0.\end{equation}
\end{exa}
\begin{proof} Since by \cite[Corollary 1.2]{W13} the super Poincar\'e inequality implies the Poincar\'e inequality, we may take $\beta(r)=1$ for large $r>0$. So, it suffices to prove the assertion for small $r>0.$ As explained in the proof of Example \ref{E3.3},
up to a bounded perturbation we may simply assume that $V\in C^2(\mathbb{R}^d)$. For any $\delta\in(0,1)$ and any $x\in\mathbb{R}^d$ with $|x|$ large enough,
$$\delta|\nabla V(x)|^2-\Delta V(x)\ge \eta(V(x)),$$
where $\eta$ is a non-decreasing function such that $\eta(r)=\delta_0 r^{2(p-1)/p}$ for some constant $\delta_0>0$ and all $r\ge 1$. So,
$$\tilde\psi(u) \le c_1 \big(1+u^{\frac{1}{2(p-1)}}\big),\ \ \ u>0$$ holds for some constant $c_1>0$. Next, it is easy to see that
$$\alpha(r,s) \le c_2(1+s^{-d/2})\, \mathrm{e}^{c_2 r^p},\ \ s,r>0$$ holds for some constant $c_2>0$. Therefore, the desired assertion for small $r>0$ follows from Corollary \ref{C3.6}(2). \end{proof}
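For the reader's convenience, the growth estimate in the proof can be checked directly: since $\nabla V(x)=p|x|^{p-2}x$ and $\Delta V(x)=p(p+d-2)|x|^{p-2}$ for $x\neq 0$, we have, for any fixed $\delta\in(0,1)$ and all $|x|$ large enough,
$$\delta|\nabla V(x)|^2-\Delta V(x)=\delta p^2|x|^{2(p-1)}-p(p+d-2)|x|^{p-2}\ge \frac{\delta p^2}{2}\,|x|^{2(p-1)},$$
because $2(p-1)>p-2$ for every $p>1$; moreover, $|x|^{2(p-1)}$ is comparable with $V(x)^{2(p-1)/p}$ as $|x|\rightarrow\infty$, so $\delta|\nabla V|^2-\Delta V$ grows at least like $V^{2(p-1)/p}$.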
\paragraph{Remark 4.1} (1) By taking $\nu=\delta_0$, the Dirac measure at $0$, we have $\mu=\mu*\nu$. So, Example \ref{E3.7} implies that
$\mu$ satisfies the super Poincar\'e inequality with $\beta$ given in \eqref{WFJ} for some constant $c>0$, and moreover, the inequality is stable under convolution with compactly supported probability measures. It is easy to see from \cite[Theorem 6.2]{W00a} that the rate function $\beta$ given in \eqref{WFJ} is sharp, i.e.\ $\mu*\nu$ does not satisfy the super Poincar\'e inequality with any $\beta$ such that $\lim_{r\downarrow 0} r^{\frac p{2(p-1)}} \log \beta(r)=0.$
(2) On the other hand, if $\nu$ has a worse concentration property, then $\mu*\nu$ may only satisfy a weaker functional inequality. For instance,
let $\mu$ be as in Example \ref{E3.7} but $\nu(\mathrm{d} z)= C\mathrm{e}^{-|z|^q}\,\mathrm{d} z$ for some constant $q\in (1,p)$ and normalization constant $C>0$. As explained in Remark 4.1(1), with $q$ in place of $p$, we see that $\nu$ satisfies the super Poincar\'e inequality with
\begin{equation}\label{DGG} \beta(r)=\exp\Big(cr^{-\frac{q}{2(q-1)}}\Big),\ \ r>0\end{equation} for some constant $c>0$. Combining this with the super Poincar\'e inequality for $\mu$ with $\beta$ given in \eqref{WFJ}, from Proposition \ref{P1.1} we conclude that $\mu*\nu$ also satisfies the super Poincar\'e inequality with $\beta$ given in \eqref{DGG} for some (different) constant $c>0$, which is sharp according to \cite[Theorem 6.2]{W00a} as explained above. However, it is less straightforward to verify this super Poincar\'e inequality for $\mu*\nu$ using Theorem \ref{T3.4} instead of Proposition \ref{P1.1}.
\paragraph{\bf Acknowledgement.} We would like to thank the referee for helpful comments.
\begin{thebibliography}{111}
\bibitem{A} S. Aida, \emph{Uniform positivity improving property, Sobolev inequalities, and spectral gaps},
J. Funct. Anal. 158(1998), 152--185.
\bibitem{AS} S. Aida, I. Shigekawa, \emph{Logarithmic Sobolev inequalities and spectral gaps: Perturbation theory}, J. Funct. Anal. 126(1994), 448--475.
\bibitem{BBCG} D. Bakry, F. Barthe, P. Cattiaux, A. Guillin, \emph{A simple proof of the Poincar\'{e} inequality for a large class of measures including the log-concave case}, Electron. Comm. Probab. 13(2008), 60--66.
\bibitem{BCG} D. Bakry, P. Cattiaux, A. Guillin, \emph{Rate of convergence for ergodic continuous Markov processes: Lyapunov versus Poincar\'{e}}, J. Funct. Anal. 254(2008), 727--759.
\bibitem{BLW} D. Bakry, M. Ledoux, F.-Y. Wang, \emph{Perturbations of functional inequalities using growth conditions}, J. Math. Pures Appl. 87(2007), 394--407.
\bibitem{BG} S. G. Bobkov, F. G\"otze, \emph{Exponential integrability and transportation cost related to logarithmic Sobolev inequalities}, J. Funct. Anal. 163(1999), 1--28.
\bibitem{CGWW} P. Cattiaux, A. Guillin, F.-Y. Wang, L. Wu, \emph{Lyapunov conditions for super Poincar\'{e} inequalities}, J. Funct. Anal. 256(2009), 1821--1841.
\bibitem{CGW} P. Cattiaux, A. Guillin, L. Wu, \emph{A note on Talagrand's transportation inequality and logarithmic Sobolev inequality}, Probab. Theory Relat. Fields 148(2010), 285--304.
\bibitem{Ch} D. Chafai, \emph{Entropies, convexity, and functional inequalities}, J. Math. Kyoto Univ. 44(2004), 325--363.
\bibitem{CWW} X. Chen, F.-Y. Wang, J. Wang, \emph{Perturbations of functional inequalities for L\'evy type Dirichlet forms}, to appear in \emph{Forum Math.}, also see arXiv:1303.7349.
\bibitem{chengzhang} L.-J. Cheng, S.-Q. Zhang, \emph{Weak Poincar\'e inequality for convolution probability measures}, arXiv:1407.4910.
\bibitem{G} L. Gross, \emph{Logarithmic Sobolev inequalities}, Amer. J. Math. 97(1975), 1061--1083.
\bibitem{M} L. Miclo, \emph{An example of application of discrete Hardy's inequalities}, Markov Proc. Relat. Fields 2(1996), 263--284.
\bibitem{M2} L. Miclo, \emph{On hyperboundedness and spectrum of Markov operators}, to appear in \emph{Invent. Math.}, also see http://hal.archives-ouvertes.fr/hal-00777146v3.
\bibitem{PLS} P. Da Pelo, A. Lanconelli, A. I. Stan, \emph{An extension of the Beckner-type Poincar\'e inequality to convolution measures on abstract Wiener spaces}, arXiv:1409.5861.
\bibitem{W00a} F.-Y. Wang, \emph{Functional inequalities for empty essential spectrum}, J. Funct. Anal. 170(2000), 219--245.
\bibitem{W00b} F.-Y. Wang, \emph{Functional inequalities, semigroup properties and spectrum estimates}, Infin. Dimens. Anal. Quant. Probab. Relat. Topics 3(2000), 263--295.
\bibitem{W13} F.-Y. Wang, \emph{Criteria of spectral gap for Markov operators}, J. Funct. Anal. 266(2014), 2137--2152.
\bibitem{Wbook} F.-Y. Wang, \emph{Functional Inequalities, Markov Processes and Spectral Theory}, Science Press, Beijing, 2005.
\bibitem{Z} D. Zimmermann, \emph{Logarithmic Sobolev inequalities for mollified compactly supported measures}, J. Funct. Anal. 265(2013), 1064--1083.
\bibitem{zi2014} D. Zimmermann, \emph{Bounds for logarithmic Sobolev constants for Gaussian convolutions of compactly supported measures}, arXiv:1405.2581.
\end{thebibliography}
\end{document}
\begin{document}
\title{Grothendieck--Serre for constant reductive group schemes}
\maketitle
\begin{abstract}
The Grothendieck--Serre conjecture asserts that over a regular local ring no nontrivial torsor under a reductive group scheme becomes trivial over the fraction field.
This conjecture is answered in the affirmative in the equicharacteristic case but is still open in the mixed characteristic case.
In this article, we establish its generalized version over Pr\"ufer bases for constant reductive group schemes.
In particular, the Noetherian restriction of our main result settles a new case of the Grothendieck--Serre conjecture.
Subsequently, we use this as a key input to resolve the variant of the Bass--Quillen conjecture for torsors under constant reductive group schemes in our Pr\"uferian context.
Along the way, inspired by the recent preprint of \v{C}esnavi\v{c}ius \cite{Ces22b}, we also prove several versions of the Nisnevich conjecture in our context.
\end{abstract}
\hypersetup{
linktoc=page,
}
\renewcommand*{\contentsname}{}
\quad\\
\tableofcontents
\newpage
\section{Grothendieck--Serre on schemes smooth over Pr\"ufer bases}
The Grothendieck--Serre conjecture predicts that every torsor under a reductive group scheme $G$ over a regular local ring $A$ is trivial if it becomes trivial over $\Frac A$.
In other words, the following map
\[
H^1_{\mathrm{\acute{e}t}}(A,G)\rightarrow H^1_{\mathrm{\acute{e}t}}(\Frac A,G)
\]
between nonabelian cohomology pointed sets has trivial kernel.
The conjecture was settled in the affirmative when $\dim A=1$, or when $A$ contains a field (that is, when $A$ is equicharacteristic).
When $A$ is of mixed characteristic, the conjecture holds when $A$ is unramified and $G$ is quasi-split, whereas most of the remaining scenarios are unknown, see the recent survey \cite{Pan18}*{\S 5} for a detailed review of the state of the art, as
well as \S \ref{history} below for a summary.
The present article aims to establish a new mixed characteristic case of the Grothendieck--Serre conjecture, in the following much more general non-Noetherian setup:
\begin{thm}[\Cref{G-S for constant reductive gps}~\ref{G-S for constant reductive gps i}]\label{main thm}
For a semilocal Pr\"ufer ring $R$, a reductive $R$-group scheme $G$, and an irreducible, affine, $R$-smooth scheme $X$, every generically trivial $G$-torsor on $X$ is Zariski-semilocally trivial.
In other words, if $A:=\mathscr{O}_{X,\textbf{x}}$ is the semilocal ring of $X$ at a finite subset $\textbf{x}\subset X$, we have
\[
\ker\,(H^1_{\mathrm{\acute{e}t}}(A,G)\rightarrow H^1_{\mathrm{\acute{e}t}}(\Frac A, G))=\{\ast\}.
\]
\end{thm}
A ring is \emph{Pr\"ufer} if all its local rings are valuation rings, that is, integral domains $V$ such that every $x\in (\Frac V)\backslash V$ satisfies $x^{-1}\in V$.
Noetherian valuation rings are exactly discrete valuation rings.
Therefore, \Cref{main thm} specializes to a previously unsolved case of the Grothendieck--Serre conjecture, namely the case where the regular local ring $A$ is a local ring of a scheme smooth over a discrete valuation ring.
By Popescu's theorem \SP{07GC}, \Cref{main thm} still holds when $A$ is a semilocal ring that is geometrically regular over a Dedekind ring.
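For instance, a standard example of a non-Noetherian valuation ring is the ring of Puiseux series $V=\bigcup_{n\ge 1}k[[t^{1/n}]]$ over a field $k$: its value group is $\mathbb{Q}$, and its maximal ideal $\{x\in V: v(x)>0\}$ admits no generator of minimal valuation, hence is not finitely generated, so $V$ is a rank one valuation ring that is not a discrete valuation ring.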
\begin{pp}[Known cases of the Grothendieck--Serre conjecture]
\leftarrowbel{history}
Since it was proposed by Serre \cite{Ser58}*{page~31} and Grothendieck \cite{Gro58}*{pages 26--27, Remarques~3}, \cite{Gro68}*{Remarques~1.11~a)}, the Grothendieck--Serre conjecture has been settled in various cases, as listed below.
\begin{enumerate}
\item The case when $G$ is a torus was proved by Colliot-Th\'el\`ene and Sansuc in \cite{CTS87}.
\item The case when $\dim A=1$, namely, $A$ is a discrete valuation ring, was addressed by Nisnevich in \cite{Nis82} and \cite{Nis84}, then is improved and generalized to the semilocal Dedekind case in \cite{Guo22}.
Several special cases were proved in \cite{Har67}, \cite{BB70}, \cite{BrT3} over discrete valuation rings, and in \cite{PS16}, \cite{BVG14}, \cite{BFF17}, \cite{BFFH20} for the semilocal Dedekind case.
\item The case when $A$ is Henselian was settled in \cite{BB70} and \cite{CTS79}*{Assertion~6.6.1} by reducing the triviality of $G$-torsors to residue fields then inducting on $\dim A$ to reach Nisnevich's resolved case.
\item The equicharacteristic case, namely, when $A$ contains a field $k$, was established by Fedorov and Panin \cite{FP15} when $k$ is infinite (see also \cite{PSV15}, \cite{Pan20b} for crucial techniques) and by Panin \cite{Pan20a} when $k$ is finite, which was later simplified by \cite{Fed22a}.
Before these, several equicharacteristic subcases were proved in \cite{Oja80},\cite{CTO92}, \cite{Rag94}, \cite{PS97}, \cite{Zai00}, \cite{OP01}, \cite{OPZ04}, \cite{Pan05}, \cite{Zai05}, \cite{Che10}, \cite{PSV15}.
\item When $A$ is of mixed characteristic, \v{C}esnavi\v{c}ius \cite{Ces22a} settled the case when $G$ is quasi-split and $A$ is unramified (that is, for $p\colonequals \mathrm{char} (A/\mathfrak{m}_A)$, the ring $A/pA$ is regular).
Priorly, Fedorov \cite{Fed22b} proved the split case under additional assumptions.
Recently, \v{C}esnavi\v{c}ius \cite{Ces22b}*{Theorem~1.3} settled a generalized Nisnevich conjecture under certain conditions, which specializes to the equal and mixed characteristic cases of the Grothendieck--Serre proved in \cite{FP15}, \cite{Pan20a}, \cite{Ces22a}.
\item There are sporadic cases where $A$ or $G$ is special (possibly in mixed characteristic), see \cite{Gro68}*{Remarque~1.11~a)}, \cite{Oja82}, \cite{Nis89}, \cite{Fed22b}, \cite{Fir22}, \cite{BFFP22}, \cite{Pan19b}.
\end{enumerate}
For arguing \Cref{main thm}, we will use our Pr\"uferian counterparts of the toral case \cite{CTS87} (see \Cref{G-S type results for mult type}~\ref{G-S for mult type gp}) and the semilocal Dedekind case \cite{Guo22} (see \Cref{G-S over semi-local prufer}) but no other known case of the Grothendieck--Serre Conjecture.
\end{pp}
\begin{pp}[Outline of the proof of \Cref{main thm}]
\label{intro-outline-pf of main thm}
As an initial geometric step, similar to \v{C}esnavi\v{c}ius's approach, one tries to use a Gabber--Quillen type presentation lemma to fiber a suitable open neighbourhood $U\subset X$ of $\textbf{x}$ into smooth affine curves $U\to S$ over an open
\[
S\subset \mathbb{A}_{R}^{\dim(X/R)-1}
\]
in such a way that a given small\footnote{Typically, it is not so small, and is only $R$-fiberwise of codimension $\ge 1$ in $X$.} closed subscheme $Y\subset X$ becomes \emph{finite} over $S$. For us, $Y$ could be any closed subscheme away from which the generically trivial torsor in question is trivial. Practically, this could be very hard (and even impossible) to achieve, partly due to the remaining mysteries of algebraic geometry in mixed characteristic; however, by the same reasoning as \cite{Ces22a}*{Variant 3.7}, it is indeed possible if our $X$ admits a projective, flat compactification $\overline{X}$ over $R$ such that the boundary $\overline{Y}\backslash Y$ of $Y$ is $R$-fiberwise of codimension $\ge 2$ in $\overline{X}$.
Assume for the moment that our compactification $\overline{X}$ is even $R$-smooth. (Although this looks very limited, it is strong enough for our later application, in which $\overline{X}=\mathbb{P}_{R}^d$.) Then, a key consequence of the purity for reductive torsors on schemes smooth over Pr\"ufer schemes (which takes advantage of the fact that our $G$ descends to $R$, hence is defined over the whole $\overline{X}$) is that we can enlarge the domain $X$ of our torsor so that $\overline{X}\backslash X$ is $R$-fiberwise of codimension $\ge 2$ in $\overline{X}$, see \Cref{extend generically trivial torsors} for a more precise statement. In particular, since $\overline{Y}\backslash Y \subset \overline{X}\backslash X$, we can ensure that $\overline{Y}\backslash Y$ is also $R$-fiberwise of codimension $\ge 2$ in $\overline{X}$, and it is therefore possible to run Panin--Fedorov--\v{C}esnavi\v{c}ius type arguments to finish the proof in the present case.
However, our $R$-smooth $X$ from the beginning is arbitrary and need not have a smooth projective compactification. To circumvent this difficulty, we prove the following result, which is interesting in its own right: Zariski-locally on $X$, the pair $(Y,X)$ can be presented as an elementary \'etale neighbourhood of a similar pair $(Y',X')$, where $X'$ is an open of some projective $R$-space, see \Cref{variant of Lindel's lem} for a finer statement. Standard glueing techniques then allow us to replace $(Y,X)$ by $(Y',X')$ and to study generically trivial torsors on $X'$; since $X'$ has smooth projective compactifications, namely the projective $R$-spaces, we come back to the situation already settled in the previous paragraph.
\end{pp}
\begin{pp}[Nisnevich's purity conjecture]
Now, we turn to Nisnevich's purity conjecture, where we require the total isotropicity of group schemes.
A reductive group scheme $G$ defined over a scheme $S$ is \emph{totally isotropic} at $s\in S$ if every $G_i$ in the decomposition \cite{SGA3IIInew}*{Exposé~XXIV, Proposition~5.10~(i)}
\[
\textstyle G^{\mathrm{ad}}_{\mathscr{O}_{S,s}}\cong \prod_i\mathrm{Res}_{R_i/\mathscr{O}_{S,s}}(G_i)
\]
contains a $\mathbb{G}_{m,R_i}$, equivalently, every $G_i$ has a parabolic $R_i$-subgroup that is fiberwise proper, see \cite{SGA3IIInew}*{Expos\'e XXVI, Corollaire 6.12}.
If this holds for all $s\in S$, then $G$ is \emph{totally isotropic}.
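To illustrate the definition with standard examples: every split reductive $S$-group scheme is totally isotropic, since each factor $G_i$ in the above decomposition then contains a copy of $\mathbb{G}_m$ coming from a split maximal torus; concretely, for $G=\mathrm{SL}_{2,S}$ the upper triangular Borel subgroup is a proper parabolic witnessing isotropicity at every $s\in S$. By contrast, for a central division algebra $D$ of degree $>1$ over a field $k$, the group $\mathrm{SL}_{1,D}$ is anisotropic over $k$, hence not totally isotropic.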
Proposed by Nisnevich \cite{Nis89}*{Conjecture~1.3} and modified due to the anisotropic counterexamples of Fedorov \cite{Fed22b}*{Proposition~4.1}, the Nisnevich conjecture predicts that, for a regular semilocal ring $A$, a regular parameter $r\in A$ (that is, $r\in \mathfrak{m}\backslash \mathfrak{m}^2$ for every maximal ideal $\mathfrak{m}\subset A$), and a reductive $A$-group scheme $G$ such that $G_{A/rA}$ is totally isotropic, every generically trivial $G$-torsor on $A[\frac{1}{r}]$ is trivial, that is, the following map
\[
\textstyle \text{ $H^1(A[\frac{1}{r}],G)\rightarrow H^1(\Frac A,G)$ \quad has trivial kernel}.
\]
The case when $A$ is a local ring of a regular affine variety over a field and $G=\GL_n$ was settled by Bhatwadekar--Rao in \cite{BR83} and was subsequently extended to arbitrary regular local rings containing fields by Popescu \cite{Pop02}*{Theorem~1}.
Nisnevich in \cite{Nis89} proved the conjecture in dimension two, assuming that $A$ is a local ring with infinite residue field and that $G$ is quasi-split.
For the state of the art, the conjecture was settled in the equicharacteristic case and in several mixed characteristic cases by {\v{C}}esnavi{\v{c}}ius in \cite{Ces22b}*{Theorem~1.3} (previously, Fedorov \cite{Fed21} proved the case when $A$ contains an infinite field).
Besides, the toral case and some low dimensional cases are known and surveyed in \cite{Ces22}*{Section~3.4.2~(1)} including Gabber's result \cite{Gab81}*{Chapter~I, Theorem~1} for the local case $\dim A\leq 3$ when $G$ is either $\GL_n$ or $\mathrm{PGL}_n$.
In this article, we prove several variants of Nisnevich conjecture over Pr\"ufer bases, see \Cref{torsors-Sm proj base}~\ref{Nis-sm-proj} and \Cref{G-S for constant reductive gps}~\ref{G-S for constant reductive gps ii}.
\end{pp}
\begin{pp}[Outline of the paper]
\leftarrowbel{outline of the paper}
In this \S \ref{outline of the paper}, unless stated otherwise, $R$ is a semilocal Pr\"ufer ring, $S:=\Spec R$ is its spectrum, $X$ is an irreducible, affine, $R$-smooth scheme, $A\colonequals \mathscr{O}_{X,\textbf{x}}$ is the semilocal ring of $X$ at a finite subset $\textbf{x}\subset X$, and $G$ is a reductive $X$-group scheme.
\begin{itemize}
\item [(1)] In \S \ref{sect-purity of reductive torsors}, we establish purity of reductive torsors on schemes smooth over Pr\"ufer rings. The global statement is \Cref{purity for rel. dim 1}: if $X$ is a smooth $S$-curve and if a closed subset $Z\subset X$ satisfies
\[
\text{$Z_{\eta}= \emptyset$\quad for each generic point $\eta\in S$ \quad and\quad $\codim(Z_s,X_s)\ge 1$ for all $s\in S$,}
\]
then restriction induces the following equivalence of categories of $G$-torsors
\[
\mathbf{Tors}(X_{\mathrm{\acute{e}t}},G) \isoto \mathbf{Tors}((X\backslash Z)_{\mathrm{\acute{e}t}},G).
\]
In particular, passing to isomorphism classes of objects, we have the following bijection of pointed sets
\[
H^1_{\mathrm{\acute{e}t}}(X,G)\simeq H^1_{\mathrm{\acute{e}t}}(X\backslash Z,G).
\]
In its local variant \Cref{extends across codim-2 points}, we show that if $x\in X$ is a point such that
\[
\text{either $x\in X_{\eta}$ with $\dim \mathscr{O}_{X_{\eta},x} =2$, \quad or\; $x\in X_s$ with $s \neq \eta$ and $\dim \mathscr{O}_{X_s,x} =1$,}
\]
then every $G$-torsor over $\Spec \mathscr{O}_{X,x} \backslash \{x\}$ extends uniquely to a $G$-torsor over $\Spec \mathscr{O}_{X,x}$. An immediate consequence is \Cref{extend generically trivial torsors}: every generically trivial $G$-torsor on $\mathscr{O}_{X,\mathbf{x}}$ extends to a $G$-torsor on an open neighbourhood of $\mathbf{x}$ whose complementary closed subset has codimension $\ge 3$ (resp., $\ge 2$) in the generic (resp., non-generic) $S$-fibers of $X$.
As for the proofs, the key case to treat is that of vector bundles, that is, $G=\GL_n$; its analysis ultimately rests on an estimate for the projective dimensions of reflexive sheaves on $X$ and on the equivalence between the categories of reflexive sheaves on $X$ and on $X\backslash Z$ (under suitable codimension constraints on $Z$), both of which are essentially due to Gabber--Ramero \cite{GR18}.
\item [(2)] In \S \ref{low dim cohomology of mult gp}, based on the low-dimensional vanishing of local cohomology of tori, we prove purity for the cohomology of group schemes of multiplicative type over Pr\"ufer bases, as well as the surjectivity of $H^1_{\mathrm{\acute{e}t}}(-,T)$ and the injectivity of $H^2_{\mathrm{\acute{e}t}}(-,T)$ upon restriction to opens for flasque tori $T$ over an $S$-smooth scheme; see \Cref{purity for gp of mult type} for a more precise statement. After these preliminaries, we are able to prove the Pr\"uferian counterparts of Colliot-Thélène--Sansuc's results for tori and, in particular, the Grothendieck--Serre conjecture for tori, see \Cref{G-S type results for mult type}~\ref{G-S for mult type gp}.
\item [(3)] In \S \ref{sect-geom lem}, we first present the Pr\"uferian analog of \v{C}esnavi\v{c}ius's formulation of the geometric presentation lemma in the style of Gabber--Quillen, see \Cref{Ces's Variant 3.7}.
The rest of \S \ref{sect-geom lem} is devoted to proving the following variant (in some aspects, a stronger form) of Lindel's lemma, which should be of independent interest: for an $S$-smooth scheme $X$ and a closed subscheme $Y\subset X$ that avoids all the maximal points of the $S$-fibers of $X$, the pair $(Y,X)$ Zariski-locally on $X$ can be presented as an elementary étale neighbourhood of a similar pair $(Y',X')$, where $X'$ is an open of some projective $S$-space; see \Cref{variant of Lindel's lem} for a finer statement in which one works Zariski semilocally on $X$.
\item [(4)] The main result of \S \ref{section-torsors on sm aff curves} is \Cref{triviality on sm rel. affine curves} on the triviality of torsors on a smooth affine relative curve. The idea of the proof ultimately depends on the geometry of affine Grassmannians developed by Fedorov, who proved \Cref{triviality on sm rel. affine curves}~\ref{sec-thm-semilocal} for $C=\mathbb{A}_R^1$.
\item [(5)] In \S \ref{sect-torsor on sm proj base}, we prove \Cref{main thm} under the additional assumption that $X$ is $R$-projective, assuming only that $G$ is a reductive $X$-group scheme (thus not necessarily descending to $R$). In the Noetherian case, this result was proved by the second author and, simultaneously, in an unpublished work of Panin and the first author. As explained in \S \ref{intro-outline-pf of main thm}, the proof crucially uses the results of \S \ref{sect-purity of reductive torsors} on purity for reductive torsors to extend the domains of the relevant torsors so that the geometric presentation \Cref{Ces's Variant 3.7} can apply. It turns out that, without much extra effort, we can simultaneously obtain a version of the Nisnevich statement as in \Cref{torsors-Sm proj base}~\ref{Nis-sm-proj}.
\item [(6)] In \S \ref{sect-torsor under constant redu}, we prove \Cref{main thm} as well as the corresponding Nisnevich statement \Cref{G-S for constant reductive gps}~\ref{G-S for constant reductive gps ii}. As mentioned in \S \ref{intro-outline-pf of main thm}, the proof is via reduction to the case settled in \S \ref{sect-torsor on sm proj base} where $X$ is an open of some projective $R$-space, using \Cref{variant of Lindel's lem}.
\item [(7)] In \S \ref{sect-Bass-Quillen}, we prove the following Bass--Quillen statement for torsors: for a ring $A$ that is smooth over a Pr\"{u}fer ring $R$ and a totally isotropic reductive $R$-group scheme $G$, a $G$-torsor on $\mathbb{A}_A^N$ descends to $A$ if it is Zariski-locally trivial, equivalently, if it is Nisnevich-locally trivial. For the proof, the key inputs are the main \Cref{main thm} as well as the purity \Cref{purity for rel. dim 1}, which we use to show that every generically trivial $G_{R(t)}$-torsor is trivial for a reductive $R$-group scheme $G$ (see \Cref{triviality over R(t)}). Granted these, we closely follow the line of the proofs of the classical Bass--Quillen conjecture for vector bundles in the unramified case (as gradually developed by Quillen \cite{Qui76}, Lindel \cite{Lin81} and Popescu \cite{Pop89}): Quillen patching and the approximation \Cref{approxm semi-local Prufer ring} reduce us to the case of a finite-rank valuation ring $R$; induction on $\mathrm{rank}(R)$ and on the number of variables, involving \Cref{triviality over R(t)}, establishes the case when $A$ is a polynomial ring over $R$; an application of the `inverse' to Quillen patching \Cref{inverse patching} settles the case when $A$ is a localization of a polynomial ring over $R$; and finally, by inducting on the pair $(\dim R,\dim A-\dim R)$ and using \Cref{main thm}, a glueing argument involving \Cref{variant of Lindel's lem} reduces the general case to the already settled case in which $A$ is a localization of a polynomial ring.
\item [(8)] In Appendix \ref{section-G-S on semilocal prufer}, we prove the Grothendieck--Serre conjecture over semilocal Pr\"ufer rings (\Cref{G-S over semi-local prufer}), which simultaneously generalizes the main results of \cite{Guo22} and \cite{Guo20b}. In essence, one has to prove a product formula for reductive group schemes over semilocal Pr\"ufer rings (\Cref{decomp-gp}). The toral case is immediately obtained, thanks to the already settled Grothendieck--Serre conjecture for tori (see \Cref{G-S type results for mult type}~\ref{G-S for mult type gp}).
The general case follows from arguments similar to those in \emph{loc.~cit.}, once Harder's weak approximation argument is carried out to establish the key \Cref{open-normal}.
\end{itemize}
\end{pp}
\begin{pp}[Notations and conventions]
All rings in this paper are commutative with units, unless stated otherwise. For a
point $s$ of a scheme (resp., for a prime ideal $\mathfrak{p}$ of a ring), we let $\kappa(s)$ (resp., $\kappa(\mathfrak{p})$) denote its residue
field. For a global section $s$ of a scheme $S$, we write $S[\frac{1}{s}]$ for the open locus where $s$ does not
vanish. For a ring $A$, we let $\text{Frac}\, A$ denote its total ring of fractions.
For a morphism of algebraic spaces $S'\to S$, we let $(-)_{S'}$ denote the base change functor from $S$ to $S'$; if $S=\text{Spec}\,R$ and $S'=\text{Spec}\,R'$ are affine schemes, we will also write $(-)_{R'}$ for $(-)_{S'}$.
Let $S$ be an algebraic space, and let $G$ be an $S$-group algebraic space. For an $S$-algebraic space $T$, by a $G$-torsor over $T$ we shall mean a $G_T:=G\times_ST$-torsor. Denote by $\mathbf{Tors}(S_{\mathrm{fppf}},G)$ (resp., $\mathbf{Tors}(S_{\mathrm{\acute{e}t}},G)$) the groupoid of $G$-torsors on $S$ that are fppf-locally (resp., \'etale-locally) trivial. If $G$ is $S$-smooth (e.g., $G$ is $S$-reductive), then every fppf-locally trivial $G$-torsor is \'etale-locally trivial, so we have
$$\mathbf{Tors}(S_{\mathrm{fppf}},G)= \mathbf{Tors}(S_{\mathrm{\acute{e}t}},G).$$
Let $X$ be a scheme. The category of invertible $\mathscr{O}_X$-modules is denoted $\mathbf{Pic} X$. Assume that $X$ is locally coherent, for example, $X$ could be locally Noetherian or locally finitely presented over a Pr\"ufer ring. The category of reflexive $\mathscr{O}_X$-modules is denoted $\mathscr{O}_X$-$\mathbf{Rflx}$.
\end{pp}
\section{Purity for reductive torsors on schemes smooth over Pr\"ufer rings}
\leftarrowbel{sect-purity of reductive torsors}
In this section, we collect useful geometric properties of schemes over Pr\"ufer bases.
\begin{lemma}\label{geom}
For a Pr\"{u}fer scheme $S$, an $S$-flat, finite type, irreducible scheme $X$, and a point $s\in S$,
\begin{enumerate}[label={(\roman*)}]
\item\label{geo-i} all nonempty $S$-fibers of $X$ have the same dimension;
\item\label{geo-iii} if $\mathscr{O}_{X_s,\xi}$ is reduced for a maximal point $\xi\in X_s$, then the local ring $\mathscr{O}_{X,\xi}$ is a valuation ring such that the extension $\mathscr{O}_{S,s}\hookrightarrow \mathscr{O}_{X,\xi}$ induces an isomorphism of value groups.
\end{enumerate}
\end{lemma}
\begin{proof}
For \ref{geo-i}, see \cite{EGAIV3}*{Lemme~14.3.10}.
For \ref{geo-iii}, see \cite{MB22}*{Théorème~A}.
\end{proof}
\begin{lemma}\label{enlarge valuation rings}
For a valuation ring $V$ and an essentially finitely presented (resp., essentially smooth) local $V$-algebra $A$, there are an extension of valuation rings $V'/V$ inducing a trivial extension of value groups and an essentially finitely presented (resp., essentially smooth) $V$-map $V'\to A$ inducing a finite extension of residue fields.
\end{lemma}
\begin{proof}
Assume $A=\mathscr{O}_{X,x}$ for an affine scheme $X$ finitely presented over $V$ and a point $x\in X$ lying over the closed point $s \in \text{Spec}(V)$. Let $t=\text{tr.deg}(\kappa(x)/\kappa(s))$.
As $\kappa(x)$ is a finite extension of $l\colonequals \kappa(s)(a_1,\cdots, a_t)$ for a transcendence basis $(a_i)_1^t$ of $\kappa(x)/\kappa(s)$, we have $t=\dim_l\Omega^1_{l/\kappa(s)}\leq \dim_{\kappa(x)}\Omega_{\kappa(x)/\kappa(s)}^1$.
Choose sections $b_1,\cdots,b_t\in \Gamma(X,\mathscr{O}_X)$ such that $d\overline{b_1},\cdots,d\overline{b_t} \in \Omega^1_{\kappa(x)/\kappa(s)}$ are linearly independent over $\kappa(x)$, where the bar stands for their images in $\kappa(x)$.
Define $p:X\to \mathbb{A}_V^t$ by sending the standard coordinates $T_1,\cdots, T_t$ of $\mathbb{A}_V^t$ to $b_1,\cdots, b_t$, respectively.
Since $d\overline{b_1},\cdots,d\overline{b_t} \in \Omega^1_{\kappa(x)/\kappa(s)}$ are linearly independent, the image $\eta\colonequals p(x)$ is the generic point of $\mathbb{A}^t_{\kappa(s)}$, so $V^{\prime}\colonequals \mathscr{O}_{\mathbb{A}^t_{V},\eta}$ is a valuation ring whose value group is $\Gamma_{V^{\prime}}\simeq \Gamma_V$.
Since $\kappa(x)/\kappa(\eta)$ is finite, the map $V^{\prime}\rightarrow A$ induces a finite extension of residue fields.
When $V\rightarrow A$ is essentially smooth, the images of $db_1,\cdots,db_t$ under the map $\Omega^1_{X/V}\otimes \kappa(x) \to \Omega^1_{\kappa(x)/\kappa(s)}$ are linearly independent, so the classes of $db_1,\cdots,db_t$ in $\Omega^1_{X/V}\otimes \kappa(x)$ are themselves linearly independent. Hence, $p$ is essentially smooth at $x$.
\end{proof}
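To illustrate the construction in the simplest nontrivial case (a toy example, not used in the sequel): take $X=\mathbb{A}^2_V=\Spec V[T_1,T_2]$ and let $x$ be the generic point of the closed subscheme $\{T_2=0\}$ of the closed fiber $\mathbb{A}^2_{\kappa(s)}$, so that $\kappa(x)=\kappa(s)(T_1)$ and $t=1$. Taking $b_1=T_1$, the map $p$ becomes the first projection $\mathbb{A}^2_V\to \mathbb{A}^1_V$, and
\[
V^{\prime}=\mathscr{O}_{\mathbb{A}^1_V,\eta}=V[T_1]_{\mathfrak{m}_V V[T_1]}
\]
is the Gauss extension of $V$: a valuation ring with $\Gamma_{V^{\prime}}\simeq \Gamma_V$ and residue field $\kappa(\eta)=\kappa(s)(T_1)=\kappa(x)$, so in this example the residue field extension is even trivial.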
In the sequel, we will use the following limit argument repeatedly.
\begin{lemma} \label{approxm semi-local Prufer ring}
Every semilocal Pr\"{u}fer domain $R$ is a filtered direct union of its subrings $R_i$ such that:
\begin{enumerate}[label={(\roman*)}]
\item for every $i$, $R_i$ is a semilocal Pr\"{u}fer domain of finite Krull dimension; and
\item for $i$ large enough, $R_i\to R$ induces a bijection on the sets of maximal ideals, hence is fpqc.
\end{enumerate}
\end{lemma}
\begin{proof}
Write $\Frac (R)=\cup_i K_i$ as the filtered direct union of the subfields of $\Frac (R)$ that are finitely generated over its prime field $\mathfrak{K}$.
For $R_i\colonequals R\cap K_i$, we have $R=\cup_i R_i$.
It remains to see that every $R_i$ is a semilocal Pr\"{u}fer domain whose local rings have finite ranks.
Let $\{\mathfrak{p}_j\}_{1\le j \le n}$ be the set of maximal ideals of $R$. Then $R=\bigcap_{1\le j \le n} R_{\mathfrak{p}_j}$ is the intersection of the valuation rings $R_{\mathfrak{p}_j}$. Thus we have
\[
\textstyle R_i=\bigcap_{1\le j \le n} \left(K_i \cap R_{\mathfrak{p}_j}\right).
\]
Since $K_i/\mathfrak{K}$ has finite transcendence degree, by Abhyankar's inequality, every $K_i \cap R_{\mathfrak{p}_j}$ is a valuation ring of finite rank. By
\cite{Bou98}*{VI, \S7, Proposition~1--2}, $R_i$ is a semilocal Pr\"{u}fer domain, and its local rings at maximal ideals are precisely the minimal elements of the set $\{K_i \cap R_{\mathfrak{p}_j}\}_{1\le j \le n}$ under inclusion. This implies (i). For (ii), it suffices to show that, for $i$ large enough, there are no strict inclusion relations between $K_i \cap R_{\mathfrak{p}_{j_1}}$ and $K_i \cap R_{\mathfrak{p}_{j_2}}$ for $j_1\neq j_2$. Indeed, if $\pi_j \in \mathfrak{p}_j\backslash \bigcup_{j'\neq j} \mathfrak{p}_{j'}$ for $1\le j \le n$, then (ii) holds for any $i$ for which $\{\pi_j\}_{1\le j \le n} \subset K_i$.
\end{proof}
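For the reader's convenience, the instance of Abhyankar's inequality used above reads as follows: the valuation of $K_i \cap R_{\mathfrak{p}_j}$ restricts to the trivial valuation on the prime field $\mathfrak{K}$, so
\[
\mathrm{rank}\left(K_i \cap R_{\mathfrak{p}_j}\right)\;\leq\; \dim_{\mathbb{Q}}\left(\Gamma_{K_i \cap R_{\mathfrak{p}_j}}\otimes_{\mathbb{Z}}\mathbb{Q}\right)\;\leq\; \mathrm{tr.deg}(K_i/\mathfrak{K})\;<\;\infty.
\]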
\begin{pp}[Coherence and reflexive sheaves]
An $R$-module $M$ is \emph{coherent} if it is finitely generated and all of its finitely generated submodules are finitely presented.
The ring $R$ is \emph{coherent} if it is a coherent $R$-module. An $\mathscr{O}_X$-module $\mathscr{F}$ on a scheme $X$ is \emph{coherent} if, for every affine open $U\subset X$, $\mathscr{F}(U)$ is a coherent $\mathscr{O}_X(U)$-module.
A scheme $X$ is \emph{locally coherent} if $\mathscr{O}_X$ is a coherent $\mathscr{O}_X$-module. For a locally coherent scheme $X$, an $\mathscr{O}_X$-module $\mathscr{F}$ is coherent if and only if it is finitely presented (\SP{05CX}), and if this is true then its dual $\mathscr{F}^{\vee}\colonequals \mathscr{H}om_{\mathscr{O}_X}(\mathscr{F},\mathscr{O}_X)$ is also coherent. Examples of locally coherent schemes include locally Noetherian schemes, Pr\"ufer schemes and, more generally, any scheme flat and locally of finite type over them, see \cite{GR71}*{Partie~I, Théorème~3.4.6}.
Let $X$ be a locally coherent scheme. A coherent $\mathscr{O}_X$-module $\mathscr{F}$ is \emph{reflexive} if taking double dual $\mathscr{F}\rightarrow \mathscr{F}^{\vee\!\vee}$ is an isomorphism.
For a reflexive $\mathscr{O}_X$-module $\mathscr{F}$, locally on $X$, taking the dual of a finite presentation of $\mathscr{F}^{\vee}$ yields a \emph{finite copresentation} of $\mathscr{F}\simeq \mathscr{F}^{\vee\!\vee}$ of the form $0\rightarrow \mathscr{F}\rightarrow \mathscr{O}_{X}^{\oplus m}\xrightarrow{\alpha} \mathscr{O}_{X}^{\oplus n}$. Conversely, if $\mathscr{F}$ admits such a copresentation, then $\mathscr{F}\simeq (\text{coker}\,\alpha^{\vee})^{\vee}$, so it is reflexive by \SP{01BY}.
Hence, a coherent sheaf on $X$ is reflexive if and only if it is finitely copresented.
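Explicitly, the copresentation arises by dualizing: if $\mathscr{O}_{X}^{\oplus n}\xrightarrow{\beta} \mathscr{O}_{X}^{\oplus m}\rightarrow \mathscr{F}^{\vee}\rightarrow 0$ is a finite presentation of $\mathscr{F}^{\vee}$, then applying the left-exact functor $\mathscr{H}om_{\mathscr{O}_X}(-,\mathscr{O}_X)$ yields the exact sequence
\[
0\rightarrow \mathscr{F}^{\vee\!\vee}\rightarrow \mathscr{O}_{X}^{\oplus m}\xrightarrow{\beta^{\vee}} \mathscr{O}_{X}^{\oplus n},
\]
and reflexivity identifies $\mathscr{F}^{\vee\!\vee}$ with $\mathscr{F}$.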
As a consequence, we see that reflexivity is compatible with limit formalism in the following sense: if $X=\lim_{i}X_i$ is the limit of an inverse system $(X_i)$ of qcqs locally coherent schemes with affine transition morphisms, then we have
\begin{equation}\label{lim of Rflx}
\text{colim}_i\,\mathscr{O}_{X_i}\text{-}\mathbf{Rflx}\xrightarrow{\sim}\mathscr{O}_{X} \text{-}\mathbf{Rflx}.
\end{equation}
\end{pp}
\begin{lemma}\label{lim-codim}
Let $X\rightarrow S$ be a finite type morphism with regular fibers between topologically Noetherian schemes, let $j\colon U\hookrightarrow X$ be a quasi-compact open immersion with complement $Z\colonequals X\backslash U$ satisfying
\[
\quad \text{$\codim(Z_s, X_s)\geq 1$ for every $s\in S$\quad and \quad $\codim(Z_{\eta},X_{\eta})\geq 2$ for every generic point $\eta\in S$,}
\]
and let $\mathscr{F}$ be a reflexive $\mathscr{O}_X$-module.
Assume that $S$ is a cofiltered inverse limit of integral schemes $(S_{\lambda})_{\lambda\in \Lambda}$ with generic points $\eta_{\lambda}$ and surjective transition maps.
Then, there are a $\lambda_{0}\in \Lambda$, a finite type morphism $X_{\lambda_0}\rightarrow S_{\lambda_0}$ with regular fibers such that $X_{\lambda_0}\times_{S_{\lambda_0}}S\simeq X$, and a closed subscheme $Z_{\lambda_0}\subset X_{\lambda_0}$ with $Z_{\lambda_{0}}\times_{S_{\lambda_0}}S\simeq Z$ such that the open immersion $j_{\lambda_0}\colon X_{\lambda_0}\backslash Z_{\lambda_0}\hookrightarrow X_{\lambda_0}$ is quasi-compact and the following
\[
\quad \text{$\codim((Z_{\lambda_0})_s, (X_{\lambda_0})_s)\geq 1$ for every $s\in S_{\lambda_0}$\quad and \quad $\codim((Z_{\lambda_0})_{\eta_{\lambda_0}},(X_{\lambda_0})_{\eta_{\lambda_0}})\geq 2$}
\]
is satisfied.
Also, there is a reflexive $\mathscr{O}_{X_{\lambda_0}}$-module $\mathscr{F}_{\lambda_0}$ whose inverse image on $X$ is $\mathscr{F}$.
\end{lemma}
\begin{proof}
The condition that $X$ has regular $S$-fibers descends to $X_{\lambda_0}$ by \cite{EGAIV2}*{Proposition~6.5.3}.
The reflexive $\mathscr{O}_X$-module $\mathscr{F}$ descends to a reflexive $\mathscr{O}_{X_{\lambda_0}}$-module thanks to (\ref{lim of Rflx}); alternatively, one may combine \cite{EGAIV3}*{Théorème~8.5.2} with \cite{EGAIV3}*{Corollaire~8.5.2.5} applied to $\mathscr{F}\isoto \mathscr{F}^{\vee\!\vee}$.
Because $Z$ is a constructible closed subset, by \cite{EGAIV3}*{Théorème~8.3.11}, it descends to a closed subscheme $Z_{\lambda}$ such that $p_{\lambda}^{-1}(Z_{\lambda})=Z$. For $f_{\lambda}\colon X_{\lambda}\rightarrow S_{\lambda}$, by the fiberwise codimension assumption on $Z$ and \cite{EGAIV2}*{Corollaire~4.2.6}, $Z_{\lambda}$ does not contain any irreducible component of $f_{\lambda}^{-1}(s_{\lambda})$ for any $s_{\lambda}\in S_{\lambda}$.
Finally, the image of the generic point $\eta\in S$ is the generic point $\eta_{\lambda}\in S_{\lambda}$, so, by \cite{EGAIV2}*{Corollaire~6.1.4}, we have $\codim((Z_{\lambda})_{\eta_{\lambda}}, (X_{\lambda})_{\eta_{\lambda}})=\codim(Z_{\eta}, X_{\eta})\geq 2$.
\end{proof}
\begin{prop}\label{GR18}
For a semilocal Pr\"{u}fer ring $R$ and a flat, locally of finite type morphism $f\colon X\rightarrow S:=\Spec R$ of schemes with regular fibers, the following assertions hold.
\begin{enumerate}[label={(\roman*)}]
\item\label{GR18-11.4.1} For every $x\in X$ and every coherent $\mathscr{O}_X$-module $\mathscr{F}$ that is reflexive at $x$, we have
\[
\text{$\mathrm{proj.dim}_{\mathscr{O}_{X,x}}\mathscr{F}_x\leq \max (0,n-1)$, \quad where \quad $n= \dim \mathscr{O}_{f^{-1}(f(x)),x}$.}
\]
\item\label{GR18-11.4.6} For a closed subset $Z\subset X$ such that $j\colon X\backslash Z\hookrightarrow X$ is quasi-compact and satisfies the following
\[
\quad \text{$\codim(Z_s, X_s)\geq 1$ for all $s\in S$\quad and \quad $\codim(Z_{\eta},X_{\eta})\geq 2$ for the generic point $\eta\in S$,}
\]
the restriction functors induce the following equivalences:
\begin{equation}\label{restriction}
\text{$\mathscr{O}_{X}\text{-}\mathbf{Rflx}\isoto \mathscr{O}_{X\backslash Z}\text{-}\mathbf{Rflx}$\qquad and \qquad $\mathbf{Pic}\,X\isoto \mathbf{Pic}\, X\backslash Z$.}
\end{equation}
In particular, for every $X$-affine finite type scheme $Y$, we have a bijection of sets
\[
\textstyle Y(X)\simeq Y(X\backslash Z).
\]
\end{enumerate}
\end{prop}
\begin{proof}
The assertion \ref{GR18-11.4.1} is \cite{GR18}*{Proposition~11.4.1~(iii)}.
For the first assertion of \ref{GR18-11.4.6}, by glueing, the equivalence is Zariski-local on $X$, so we may assume that $X$ is affine and thus, by \cite{Nag66}*{Theorem 3'}, $X\to S$ is finitely presented. By a standard limit argument involving Lemmata \ref{approxm semi-local Prufer ring} and \ref{lim-codim}, we reduce to the case of a finite-dimensional $S$.
Now since $|X|$ is the finite disjoint union of its $S$-fibers $X_s$, which are Noetherian spaces, we see that $X$ is topologically Noetherian.
In particular, every open subset of $X$ is quasi-compact.
By \cite{GR18}*{Proposition~11.3.8~(i)}, the functor (\ref{restriction}) is essentially surjective.
For the full faithfulness, we let $\mathscr{F}$ and $\mathscr{G}$ be two reflexive $\mathscr{O}_X$-modules and need to show that restriction induces $$
\text{Hom}_{\mathscr{O}_X}(\mathscr{F},\mathscr{G})\xrightarrow{\sim}\text{Hom}_{\mathscr{O}_U}(\mathscr{F}|_U,\mathscr{G}|_U), \qquad \text{where} \quad U:=X\backslash Z.
$$
Choose a finite presentation $\mathscr{O}_{X}^{\oplus m}\rightarrow \mathscr{O}_{X}^{\oplus n} \rightarrow \mathscr{F} \rightarrow 0$.
Then chasing the following commutative diagram
\[
\textbfegin{eg}in{equation}gin{tikzcd}
0 \arrow[r] & \mathrm{Hom}_{\mathscr{O}_X}(\mathscr{F},\mathscr{G}) \arrow[d] \arrow[r] & \mathrm{Hom}_{\mathscr{O}_X}(\mathscr{O}_X^{^{\mathrm{op}}lus n},\mathscr{G}) \arrow[d] \arrow[r] & \mathrm{Hom}_{\mathscr{O}_X}(\mathscr{O}_X^{^{\mathrm{op}}lus m},\mathscr{G}) \arrow[d] \\
0 \arrow[r] & \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{F}|_U,\mathscr{G}|_U) \arrow[r] & \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{O}_U^{^{\mathrm{op}}lus n},\mathscr{G}|_U) \arrow[r] & \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{O}_U^{^{\mathrm{op}}lus m},\mathscr{G}|_U)
\end{tikzcd}
\]
with exact rows reduces us first to the case when $\mathscr{F}$ is free. Choosing a finite copresentation $0\rightarrow \mathscr{G}\rightarrow \mathscr{O}_{X}^{\oplus m^{\prime}}\rightarrow \mathscr{O}_{X}^{\oplus n^{\prime}}$ and chasing a similar commutative diagram reduces us further to the case when $\mathscr{G}$ is free. Thus we may assume that $\mathscr{F}=\mathscr{G}=\mathscr{O}_{X}$ and need to show that $\mathscr{O}_X(X)\xrightarrow{\sim} \mathscr{O}_X(U)$.
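For the reader's convenience, the `similar' commutative diagram in question (with $\mathscr{F}$ now free and $0\rightarrow \mathscr{G}\rightarrow \mathscr{O}_{X}^{\oplus m^{\prime}}\rightarrow \mathscr{O}_{X}^{\oplus n^{\prime}}$ the chosen copresentation) is
\[
\begin{tikzcd}
0 \arrow[r] & \mathrm{Hom}_{\mathscr{O}_X}(\mathscr{F},\mathscr{G}) \arrow[d] \arrow[r] & \mathrm{Hom}_{\mathscr{O}_X}(\mathscr{F},\mathscr{O}_X^{\oplus m'}) \arrow[d] \arrow[r] & \mathrm{Hom}_{\mathscr{O}_X}(\mathscr{F},\mathscr{O}_X^{\oplus n'}) \arrow[d] \\
0 \arrow[r] & \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{F}|_U,\mathscr{G}|_U) \arrow[r] & \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{F}|_U,\mathscr{O}_U^{\oplus m'}) \arrow[r] & \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{F}|_U,\mathscr{O}_U^{\oplus n'}),
\end{tikzcd}
\]
whose rows are exact by the left-exactness of $\mathrm{Hom}$.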
Through the terminology of \cite{GR18}*{10.4.19}, our assumption on the fiberwise codimension of $Z$ in $X$ implies that $\delta'(z,\mathscr{O}_X)\ge 2$ for every $z\in Z$, see \cite{GR18}*{Corollary~10.4.46}, so we may apply \cite{GR18}*{Proposition~11.3.8 (ii)} to $\mathscr{F}:=\mathscr{O}_X$ and deduce that $\mathscr{O}_X\simeq j_*(\mathscr{O}_{U})$. Taking global sections yields the desired isomorphism.
For the second assertion of \ref{GR18-11.4.6}, by glueing, the problem is Zariski-local on $X$, so we can assume that $X$ is affine.
Choose an embedding $Y\hookrightarrow \mathbb{A}_X^n$ over $X$ for some integer $n$. Since $U$ is schematically dense in $X$,
for every morphism $\phi\colon U \rightarrow Y$, if the composite $U\to Y\hookrightarrow \mathbb{A}^n_X$ extends uniquely to a morphism $\widetilde{\phi}\colon X\rightarrow \mathbb{A}^n_X$, then $\widetilde{\phi}^{-1}(Y)$ is a closed subscheme of $X$ containing $U$ and, by \cite{EGAIV4}*{Lemme~20.3.8.8}, must coincide with $X$.
In other words, if $\widetilde{\phi}$ exists uniquely, then it factors as $X\overset{\psi}{\rightarrow} Y\hookrightarrow \mathbb{A}^n_X$, where $\psi$ is the unique extension of $\phi$.
This reduces us to the case $Y=\mathbb{A}_X^n$. Now, by the reflexivity of $\mathscr{O}_X$ and the full faithfulness of $\mathscr{O}_{X}\text{-}\mathbf{Rflx}\isoto \mathscr{O}_{U}\text{-}\mathbf{Rflx}$, we have the desired bijection
\[
\mathbb{A}_X^n(X)=\mathrm{Hom}_{\mathscr{O}_X}(\mathscr{O}_X,\mathscr{O}_X^{\oplus n}) \simeq \mathrm{Hom}_{\mathscr{O}_U}(\mathscr{O}_{U},\mathscr{O}_{U}^{\oplus n}) =\mathbb{A}_X^n(U).\qedhere
\]
\end{proof}
\begin{cor}\label{vect-ext}
For a semilocal affine Pr\"{u}fer scheme $S$, an $S$-flat, locally of finite type scheme $X$ with regular one-dimensional $S$-fibers, and a closed subset $Z\subset X$ such that $j\colon X\backslash Z\hookrightarrow X$ is quasi-compact and
\[
\text{$Z_{\eta}= \emptyset$\quad for each generic point $\eta\in S$ \quad and\quad $\codim(Z_s,X_s)\ge 1$ for all $s\in S$,}
\]
the restriction functor, with the pushforward $j_{\ast}(-)$ as quasi-inverse, induces an equivalence of categories of vector bundles
\[
\mathbf{Vect}_{X} \isoto \mathbf{Vect}_{X\backslash Z}.
\]
\end{cor}
\begin{proof}
For vector bundles $\mathscr{E}_1$ and $\mathscr{E}_2$ on $X$, the scheme $Y\colonequals \underline{\mathrm{Isom}}_X(\mathscr{E}_1,\mathscr{E}_2)$ is $X$-affine of finite type, so $Y(X\backslash Z)=Y(X)$ by \Cref{GR18}~\ref{GR18-11.4.6}. This proves the full faithfulness. For the essential surjectivity, by \Cref{GR18}~\ref{GR18-11.4.6}, every vector bundle $\mathscr{E}$ on $X\backslash Z$ extends to a reflexive $\mathscr{O}_X$-module $j_{\ast}\mathscr{E}$.
To show that the reflexive $\mathscr{O}_X$-module $j_{\ast}\mathscr{E}$ is a vector bundle, it suffices to exploit \Cref{GR18}~\ref{GR18-11.4.1}.
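Indeed, since the $S$-fibers of $X$ are one-dimensional, for every $x\in X$ with image $s\in S$, \Cref{GR18}~\ref{GR18-11.4.1} applied to the reflexive $\mathscr{O}_X$-module $j_{\ast}\mathscr{E}$ gives
\[
\mathrm{proj.dim}_{\mathscr{O}_{X,x}}(j_{\ast}\mathscr{E})_x\leq \max\left(0,\dim \mathscr{O}_{X_s,x}-1\right)=0,
\]
so $(j_{\ast}\mathscr{E})_x$ is a finite free $\mathscr{O}_{X,x}$-module, that is, $j_{\ast}\mathscr{E}$ is a vector bundle.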
\end{proof}
As a consequence, we obtain the following purity result for torsors under reductive group schemes on relative curves with regular one-dimensional fibers.
\begin{thm}\label{purity for rel. dim 1}
For a semilocal affine Pr\"{u}fer scheme $S$, an $S$-flat, locally of finite type scheme $X$ with regular one-dimensional $S$-fibers,
and a closed subset $Z\subset X$ such that $X\backslash Z\hookrightarrow X$ is quasi-compact and
\[
\text{$Z_{\eta}= \emptyset$\quad for each generic point $\eta\in S$ \quad and\quad $\codim(Z_s,X_s)\ge 1$ for all $s\in S$,}
\]
and a reductive $X$-group scheme $G$, restriction induces the following equivalence of categories of $G$-torsors
\begin{equation}\label{restriction of G-torsors}
\mathbf{Tors}(X_{\mathrm{\acute{e}t}},G) \isoto \mathbf{Tors}((X\backslash Z)_{\mathrm{\acute{e}t}},G).
\end{equation}
In particular, passing to isomorphism classes of objects, we have a bijection of pointed sets $$H^1_{\mathrm{\acute{e}t}}(X,G)\simeq H^1_{\mathrm{\acute{e}t}}(X\backslash Z,G).$$
\end{thm}
\begin{proof}
It suffices to show that (\ref{restriction of G-torsors}) is an equivalence.
To show that (\ref{restriction of G-torsors}) is fully faithful, we let $\mathcal{P}_1$ and $\mathcal{P}_2$ be two $G$-torsors, and consider $Y\colonequals \underline{\mathrm{Isom}}_X(\mathcal{P}_1,\mathcal{P}_2)$, the functor on $X$-schemes parameterizing isomorphisms from $\mathcal{P}_1$ to $\mathcal{P}_2$. By descent theory, $Y$ is $X$-affine and of finite type, so $Y(X\backslash Z)=Y(X)$ by \Cref{GR18}~\ref{GR18-11.4.6}. This proves the full faithfulness.
To show that (\ref{restriction of G-torsors}) is essentially surjective, we pick a $G$-torsor $\mathcal{P}$ on $X\backslash Z$ and need to show that $\mathcal{P}$ extends to a $G$-torsor on $X$. Since the assumption on the fiber codimension still holds when we base change to every \'etale scheme $X'$ over $X$, we obtain an equivalence $\mathbf{Tors}(X'_{\mathrm{\acute{e}t}},G) \isoto \mathbf{Tors}((X'\backslash Z')_{\mathrm{\acute{e}t}},G)$, where $Z':=Z\times_{X}X'$. Consequently, by glueing in the \'etale topology, it suffices to show that,
\'etale locally on $X$, $\mathcal{P}$ extends to a $G$-torsor on $X$.
To see this, we may assume that $X$ is affine and that $G\subset \GL_{n,X}$; we then exploit the commutative diagram with exact rows
\[
\textbfegin{eg}in{equation}gin{tikzcd}
{(\mathrm{GL}_{n,X}/G)(X)} \arrow[r] \arrow[d,"\simeq" labl, ] & {H^1_{\mathrm{\acute{e}t}}(X,G)} \arrow[r] \arrow[d] & {H^1_{\mathrm{\acute{e}t}}(X,\mathrm{GL}_{n,X})} \arrow[d] \\
{(\mathrm{GL}_{n,X}/G)(X\textbfegin{eg}in{equation}gin{aligned}ckslash Z)} \arrow[r] & {H^1_{\mathrm{\acute{e}t}}(X\textbfegin{eg}in{equation}gin{aligned}ckslash Z,G)} \arrow[r] & {H^1_{\mathrm{\acute{e}t}}(X\textbfegin{eg}in{equation}gin{aligned}ckslash Z,\mathrm{GL}_{n,X})},
\end{tikzcd}
\]
where the bijectivity of the left vertical arrow follows from \Cref{GR18}\ref{GR18-11.4.6} and, as $G$ is reductive, $\mathrm{GL}_{n,X}/G$ is affine over $X$ by \cite{Alp14}*{9.4.1}.
By \Cref{vect-ext}, we may replace $X$ by an affine open cover to assume that the induced $\GL_{n,X\backslash Z}$-torsor $\mathcal{P}\times ^{G_{X\backslash Z}}\GL_{n,X\backslash Z}$ is trivial.
A diagram chase implies that there exists a $G$-torsor $\mathcal{Q}$ on $X$ such that $\mathcal{Q}|_{X\backslash Z}\simeq \mathcal{P}$, as claimed.
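Spelled out, the chase runs as follows: the image of $[\mathcal{P}]\in H^1_{\mathrm{\acute{e}t}}(X\backslash Z,G)$ in $H^1_{\mathrm{\acute{e}t}}(X\backslash Z,\mathrm{GL}_{n,X})$ is now trivial, so the exactness of the bottom row yields a class
\[
c\in (\mathrm{GL}_{n,X}/G)(X\backslash Z) \quad\text{mapping to}\quad [\mathcal{P}]\in H^1_{\mathrm{\acute{e}t}}(X\backslash Z,G);
\]
the bijectivity of the left vertical arrow lifts $c$ to some $\widetilde{c}\in (\mathrm{GL}_{n,X}/G)(X)$, and the commutativity of the left square shows that the $G$-torsor $\mathcal{Q}$ corresponding to the image of $\widetilde{c}$ in $H^1_{\mathrm{\acute{e}t}}(X,G)$ satisfies $\mathcal{Q}|_{X\backslash Z}\simeq \mathcal{P}$.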
\end{proof}
The following local variant of \Cref{purity for rel. dim 1} is a non-Noetherian counterpart of \cite{CTS79}*{Théorème~6.13}.
\begin{thm} \label{extends across codim-2 points}
For a finite-rank valuation ring $R$ with spectrum $(S,\eta)$, an $S$-flat finite type scheme $X$ with regular fibers, a reductive $X$-group scheme $G$, and a point $x\in X$ that is
\begin{enumerate}[label={(\roman*)}]
\item\label{g-tor-local-gen} either $x\in X_{\eta}$ with $\dim \mathscr{O}_{X_{\eta},x} =2$, or
\item\label{g-tor-local-ngen} $x\in X_s$ with $s \neq \eta$ and $\dim \mathscr{O}_{X_s,x} =1$,
\end{enumerate}
every $G$-torsor on $\Spec \mathscr{O}_{X,x} \backslash \{x\}$ extends uniquely to a $G$-torsor on $\Spec \mathscr{O}_{X,x}$.
\end{thm}
\begin{proof}
It suffices to show that restriction induces an equivalence of categories of $G$-torsors on $\Spec \mathscr{O}_{X,x}$ and on $\Spec \mathscr{O}_{X,x}\backslash \{x\}$. The argument of \Cref{purity for rel. dim 1} reduces us to the case of vector bundles, namely, to the case $G=\GL_n$.
Now the case \ref{g-tor-local-gen} is classical (see for instance, \cite{Gab81}*{\S1, Lemma~1}).
For \ref{g-tor-local-ngen}, the finite-rank assumption on $R$ guarantees that $X$, and hence also $\Spec\,\mathscr{O}_{X,x} \backslash \{x\}$, is topologically Noetherian and, in particular, quasi-compact. Therefore, by \Cref{GR18}~\ref{GR18-11.4.6}, every vector bundle $\mathscr{E}$ on $\Spec \mathscr{O}_{X,x}\backslash \{x\}$ extends to a reflexive sheaf $j_{\ast}(\mathscr{E})$ on $\Spec \mathscr{O}_{X,x}$, which, by the assumption $\dim \mathscr{O}_{X_s,x}=1$ and \Cref{GR18}~\ref{GR18-11.4.1}, is projective; hence the assertion follows.
\end{proof}
Granted the purity result \Cref{extends across codim-2 points}, we now extend reductive torsors up to a closed subset of higher codimension; this is crucial for the proof of \Cref{G-S for constant reductive gps}.
\begin{prop}\label{extension torsors}
For a semilocal Pr\"{u}fer affine scheme $S$, an $S$-flat finite type quasi-separated scheme $X$ with regular $S$-fibers, a closed subset $Z\subset X$ such that $X\setminus Z\subset X$ is quasi-compact and satisfies
\[
\text{$\codim(Z_{\eta}, X_{\eta})\geq 2$\,\, for each generic point $\eta\in S$ \quad and\quad $\codim(Z_s,X_s)\geq 1$\,\, for all $s\in S$,}
\]
a reductive $X$-group scheme $G$, and a $G$-torsor $\mathcal{P}$ on $X\setminus Z$, there are a closed subset $Z^{\prime}\subset Z$ satisfying
\[
\text{$\codim(Z_{\eta}', X_{\eta})\geq 3$\,\, for each generic point $\eta\in S$ \quad and\quad $\codim(Z_s',X_s)\geq 2$\,\, for all $s\in S$,}
\]
and a $G$-torsor $\mathcal{Q}$ on $X\setminus Z'$ such that $\mathcal{P}\simeq \mathcal{Q}|_{X\setminus Z}$.
\end{prop}
\begin{proof}
By \cite{Nag66}*{Theorem 3'}, $X$ is finitely presented over $S$; hence, by a limit argument involving Lemmata \ref{approxm semi-local Prufer ring} and \ref{lim-codim}, we are reduced to the case when all local rings of $R$ are valuation rings of finite rank.
Let $\mathcal{P}_{X\setminus Z}$ be a $G$-torsor on $X\setminus Z$. Since $|S|$ is finite and each fiber $X_s$ is Noetherian, there are only finitely many points $x\in Z$ satisfying one of the assumptions (i)--(ii) of \Cref{extends across codim-2 points}; among these points we pick one, say $x$, that is maximal under generalization.
The maximality of $x$ implies that $(X\setminus Z)\cap \Spec \mathscr{O}_{X,x} = \Spec \mathscr{O}_{X,x} \setminus \{x\}$, so, by \Cref{extends across codim-2 points}, the $G$-torsor $\mathcal{P}_{X\setminus Z}|_{(X\setminus Z) \cap \Spec (\mathscr{O}_{X,x})}$ extends to a $G$-torsor $\mathcal{P}_x$ on $\Spec \mathscr{O}_{X,x}$.
As $X$ is topologically Noetherian, we may spread out $\mathcal{P}_x$ to obtain a $G$-torsor $\mathcal{P}_{U_x}$ over an open neighbourhood ${U}_x$ of $x$ such that $\mathcal{P}_{X\setminus Z}|_{({X\setminus Z})\cap U_x} \simeq \mathcal{P}_{U_x}|_{(X\setminus Z)\cap U_x}$ as $G$-torsors over $(X\setminus Z)\cap U_x$.
Consequently, we may glue $\mathcal{P}_{X\setminus Z}$ with $\mathcal{P}_{U_x}$ to extend $\mathcal{P}_{X\setminus Z}$ to a $G$-torsor on $U_1\colonequals (X\setminus Z)\cup U_x$.
Since $Z_1\colonequals X\setminus U_1$ contains strictly fewer points satisfying the assumptions (i) or (ii) of \Cref{extends across codim-2 points},
iterating this procedure yields the desired closed subset $Z^{\prime}\subset X$ such that $\mathcal{P}_{X\setminus Z}$ extends over $X\setminus Z^{\prime}$.
\end{proof}
\begin{cor}\label{extend generically trivial torsors}
For a semilocal Pr\"{u}fer affine scheme $S$, an $S$-flat finite type quasi-separated scheme $X$ with regular $S$-fibers, a finite subset $\mathbf{x}\subset X$ contained in a single affine open, a nonzero $r\in \mathscr{O}_{X,\mathbf{x}}$, and a reductive $X$-group scheme $G$,
every generically trivial $G$-torsor on $\mathscr{O}_{X,\mathbf{x}}[\frac{1}{r}]$ extends to a $G$-torsor on an open neighbourhood $U$ of $\Spec \mathscr{O}_{X,\mathbf{x}}[\frac{1}{r}]$ whose complementary closed subset $Z\colonequals X\setminus U$ satisfies
\[
\text{$\codim(Z_{\eta}, X_{\eta})\geq 3$\,\, for each generic point $\eta\in S$ \quad and\quad $\codim(Z_s,X_s)\geq 2$\,\, for all $s\in S$.}
\]
\end{cor}
\begin{proof}
As in the proof of \Cref{extension torsors}, we may assume that $S$ has finite Krull dimension; in particular, $X$ is topologically Noetherian.
Let $\mathcal{P}$ be a generically trivial $G$-torsor on $\mathscr{O}_{X,\mathbf{x}}[\frac{1}{r}]$.
By spreading out, $\mathcal{P}$ extends to a $G$-torsor $\mathcal{P}_U$ on $U\colonequals \Spec R[\frac{1}{r}]$ for an affine open neighbourhood $\Spec R \subset X$ of $\mathbf{x}$.
It remains to enlarge $U$ and extend $\mathcal{P}_U$ so that $Z\colonequals X\setminus U$ satisfies the assumptions of \Cref{extension torsors}. Assume that $Z$ contains a point $z$ such that either
\begin{enumerate}
\item $z\in X_{\eta}$ and $\dim \mathscr{O}_{X,z} =1$, in which case $\Spec (\mathscr{O}_{X,z}) \cap U$ is a maximal point of $X$, or
\item $z$ is a maximal point of $X_s$ with $s \neq \eta$, in which case $\Spec \mathscr{O}_{X,z}$, and hence also $\Spec (\mathscr{O}_{X,z}) \cap U$, is the spectrum of a valuation ring (\Cref{geom}~\ref{geo-iii}).
\end{enumerate}
By Grothendieck--Serre for valuation rings \cite{Guo20b}, the generically trivial $G$-torsor $\mathcal{P}_U|_{\Spec (\mathscr{O}_{X,z}) \cap U}$ is trivial.
Thus, as in the proof of \Cref{extension torsors}, we may glue $\mathcal{P}_U$ with the trivial $G$-torsor over a small enough open neighbourhood of $z$ to extend $\mathcal{P}_U$ across such a point $z\in Z$.
Note that, by the topological Noetherianness of $X$, $Z$ contains at most finitely many points $z$ satisfying the above assumption (i) or (ii). Therefore, iteratively enlarging $U$ and extending $\mathcal{P}_U$, we may assume that $Z$ contains no point $z$ satisfying (i) or (ii), whence \Cref{extension torsors} applies.
\end{proof}
\section{Low dimensional cohomology of groups of multiplicative type}
\label{low dim cohomology of mult gp}
\begin{pp}[Vanishing of low-dimensional local cohomology of tori]
\end{pp}
\begin{prop}\label{local cohomology of tori}
For a finite-rank valuation ring $R$ with spectrum $S$ and generic point $\eta\in S$, an $S$-flat finite type scheme $X$ with regular $S$-fibers, a point $x\in X$, and an $\mathscr{O}_{X,x}$-torus $T$,
\begin{enumerate}
\item \label{paraf} if either $x\in X_{\eta}$ with $\dim \mathscr{O}_{X_{\eta},x} \ge 2$, or $x\in X_s$ with $s \neq \eta$ and $\dim \mathscr{O}_{X_s,x} \ge 1$, then we have
\[
\text{$H^i_{\{x\}}(\mathscr{O}_{X,x},T)=0$ \quad for \, $0\le i \le 2$;}
\]
\item \label{val} otherwise, $\mathscr{O}_{X,x}$ is a valuation ring, and, in case $T$ is flasque, we have
\[
H^2_{\{x\}}(\mathscr{O}_{X,x},T)=0.
\]
\end{enumerate}
\end{prop}
\begin{proof}
In \ref{paraf}, by \cite{EGAIV3}*{Lemme~14.3.10}, $\overline{\{x\}}$ has codimension $\ge 2$ in the generic fiber (resp., $\ge 1$ in the non-generic fibers) of $X$, so, by \Cref{GR18}~\ref{GR18-11.4.6}, for any open $x\in U \subset X$ we have
\[
H^i(U,\mathbb{G}_{m})\simeq H^i(U\setminus \overline{\{x\}},\mathbb{G}_{m}) \quad \text{ for } \quad 0\le i \le 1.
\]
The same holds with $(U,x)$ replaced by any pair $(W,y)$ \'etale over it, so passing to the colimit yields
\[
H^i_{\mathrm{\acute{e}t}}(\mathscr{O}_{X,\bar{x}}^{\mathrm{sh}},\mathbb{G}_m)\simeq \text{colim}_{(W,y)}\,H_{\mathrm{\acute{e}t}}^i(W,\mathbb{G}_{m})\simeq \text{colim}_{(W,y)}\,H_{\mathrm{\acute{e}t}}^i(W\setminus \overline{\{y\}},\mathbb{G}_{m}) \quad \text{ for } \quad 0\le i \le 1.
\]
By the finite-rank assumption on $R$, $\mathscr{O}_{W,y}$, and hence also $\mathscr{O}_{X,\bar{x}}^{\mathrm{sh}}$, has quasi-compact punctured spectrum, so the rightmost term above identifies with $H^i_{\mathrm{\acute{e}t}}(\Spec(\mathscr{O}_{X,\bar{x}}^{\mathrm{sh}})\setminus \{\bar{x}\},\mathbb{G}_m)$. Looking at the local cohomology exact sequence for the pair $(\Spec(\mathscr{O}_{X,\bar{x}}^{\mathrm{sh}}),\bar{x})$ and using $T\simeq \mathbb{G}_{m}^{\dim T}$, we deduce that
\[
H_{\{\bar{x}\}}^i(\mathscr{O}_{X,\bar{x}}^{\mathrm{sh}},T)=0 \quad \text{ for } \quad 0\le i\le 2.
\]
By the local-to-global $E_2$-spectral sequence \cite{SGA4II}*{Exposé~V, Proposition~6.4}, this implies the desired vanishing.
In \ref{val}, either $x\in X_{\eta}$ with $\dim \mathscr{O}_{X_{\eta},x}\le 1$, in which case $\mathscr{O}_{X,x}$ is a discrete valuation ring, or $x$ is a maximal point of some fiber of $X\to S$, in which case, by \Cref{geom}~\ref{geo-iii}, $\mathscr{O}_{X,x}$ is a valuation ring. The desired vanishing is proven in \cite{Guo20b}*{Lemma~2.3}.
\end{proof}
\Cref{local cohomology of tori} has the following global consequence.
\begin{thm}\label{purity for gp of mult type}
For a semilocal Pr\"ufer scheme $S$, a flat, locally of finite type $S$-scheme $X$ with regular $S$-fibers, a closed $Z\subset X$ with retrocompact $U\colonequals X\setminus Z$, and a finite type $X$-group $M$ of multiplicative type,
\begin{enumerate}
\item \label{mult-paraf} if $Z$ satisfies the following condition
\[
\text{$\codim(Z_{\eta}, X_{\eta})\geq 2$ for every generic point $\eta\in S$ \quad and\quad $\codim(Z_s,X_s)\geq 1$ for all $s\in S$,}
\]
then we have
\[
H^i_{\mathrm{fppf}}(X,M)\simeq H^i_{\mathrm{fppf}}(U,M)\quad \text{for }0\le i\le 1 \quad \text{and} \quad H^2_{\mathrm{fppf}}(X,M)\hookrightarrow H^2_{\mathrm{fppf}}(U,M);
\]
\item \label{gl-H1-H2-flasque} if $X$ is qcqs and $M=T$ is an $X$-torus such that $T_{\mathscr{O}_{X,z}}$ is flasque for every $z\in Z$ for which $\mathscr{O}_{X,z}$ is a valuation ring, then we have
\[
\text{$H^1_{\mathrm{\acute{e}t}}(X,T) \twoheadrightarrow H^1_{\mathrm{\acute{e}t}}(U,T)$ \quad and \quad $H^2_{\mathrm{\acute{e}t}}(X,T)\hookrightarrow H^2_{\mathrm{\acute{e}t}}(U,T)$;}
\]
in particular, if $K(X)$ denotes the total ring of fractions of $X$,
\[
\text{$\Pic X \twoheadrightarrow \Pic U$ \quad and \quad $\mathrm{Br}(X)\hookrightarrow \mathrm{Br}(K(X))$.}
\]
\end{enumerate}
\end{thm}
\begin{proof}
For \ref{mult-paraf}, by the local cohomology exact sequence for the pair $(X,Z)$ and the sheaf $M$,
the statement is equivalent to the vanishing $H^i_Z(X,M)=0$ for $0\le i \le 2$. By the spectral sequence of \cite{CS21}*{Lemma 7.1.1}, it suffices to show the vanishing of $\mathcal{H}^q_Z(M)$ for $0\le q\le 2$, where $\mathcal{H}^q_Z(M)$ denotes the \'etale sheafification of the presheaf $X^{\prime}\mapsto H^q_{Z^{\prime}}(X^{\prime}, M)$ on $X_{\mathrm{\acute{e}t}}$ with $Z^{\prime}\colonequals Z\times_X X^{\prime}$. This problem is \'etale-local on $X$, so we may assume that $X$ is affine and that $M$ splits as $\mathbb{G}_m$ or $\mu_n$; since $\mu_n=\ker(\mathbb{G}_m\overset{\times n}{\longrightarrow}\mathbb{G}_m)$, we may even assume $M=\mathbb{G}_m$.
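To justify the last reduction: the fppf Kummer sequence $1\to \mu_n \to \mathbb{G}_m \overset{\times n}{\longrightarrow} \mathbb{G}_m\to 1$ induces a long exact sequence of the sheaves $\mathcal{H}^q_Z$, which contains the segments
\[
\mathcal{H}^{q-1}_Z(\mathbb{G}_m)\to \mathcal{H}^{q}_Z(\mu_n)\to \mathcal{H}^{q}_Z(\mathbb{G}_m),
\]
so the vanishing of $\mathcal{H}^{q}_Z(\mathbb{G}_m)$ for $0\le q\le 2$ implies that of $\mathcal{H}^{q}_Z(\mu_n)$ in the same range.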
Now, $X$ is qcqs and, by \cite{Nag66}*{Theorem 3'}, finitely presented over $S$, so a standard limit argument involving Lemmata \ref{approxm semi-local Prufer ring} and \ref{lim-codim} reduces us to the case when $R$ has finite Krull dimension; in particular, $X$ is topologically Noetherian.
Recall the coniveau spectral sequence \cite{Gro68c}*{\S10.1}
\[
\textstyle E^{pq}_2=\bigoplus_{z\in Z^{(p)}} H^{p+q}_{\{z\}}(M)\Rightarrow H^{p+q}_Z(X,M);
\]
the topological Noetherianness of $X$ allows us to identify
\[
\text{$H^{p+q}_{\{z\}}(M)\colonequals \text{colim}\,H^{p+q}_{\overline{\{z\}}\cap U}(U,M)$ \quad with \quad $H^{p+q}_{\{z\}}(\mathscr{O}_{X,z},M)$,}
\]
where $U$ runs over the open neighbourhoods of $z$ in $X$.
Therefore, it is enough to show that $H^i_{\{z\}}(\mathscr{O}_{X,z},\mathbb{G}_m)=0$ for $z\in Z$ and $0\le i \le 2$, which was settled in \Cref{local cohomology of tori}\ref{paraf}.
For \ref{gl-H1-H2-flasque}, the local cohomology exact sequence reduces us to showing the vanishing $H^2_Z(X,T)=0$. As above,
since $X$ is qcqs and $X\setminus Z\subset X$ is a quasi-compact open, a standard limit argument involving \Cref{approxm semi-local Prufer ring} reduces us to the case when $R$ has finite Krull dimension; in particular, $X$ is topologically Noetherian.
The coniveau spectral sequence then reduces us to showing $H^2_{\{z\}}(\mathscr{O}_{X,z},T)=0$ for $z\in Z$, which was settled in \Cref{local cohomology of tori}.
\end{proof}
\begin{pp}[Grothendieck--Serre type results for tori]
\end{pp}
\begin{lemma}\label{non-Noeth-pullback line bundle}
Let $\phi:X\to Y$ be a morphism of schemes and let $\mathscr{L}$ be an invertible $\mathscr{O}_X$-module. If
\begin{itemize}
\item [(1)] $Y$ is quasi-compact, quasi-separated, integral, and normal,
\item [(2)] there exist a smooth projective morphism $\overline{\phi}:\overline{X}\to Y$ with geometrically integral fibers and a quasi-compact open immersion $X\hookrightarrow \overline{X}$ over $Y$, and
\item [(3)] $\mathscr{L}$ is trivial when restricted to the generic fiber of $\phi$,
\end{itemize}
then $\mathscr{L}\simeq \phi^*\mathscr{N}$ for some invertible $\mathscr{O}_Y$-module $\mathscr{N}$.
\end{lemma}
\begin{proof}
When $Y$ is Noetherian, this follows from a much more general result \SP{0BD6}, where (2) can be replaced by the weaker assumption that $X\to Y$ is faithfully flat of finite presentation with integral fibers. The general case follows from this via Noetherian approximation. Precisely, as $Y$ is qcqs, we can use \SP{01ZA} to write $Y=\lim_i Y_i$ for a filtered inverse system $\{Y_i\}$ of finite type integral $\mathbf{Z}$-schemes with affine transition morphisms. Since $Y$ is normal and the normalization of a finite type integral $\mathbf{Z}$-scheme is again of finite type over $\mathbf{Z}$, we may assume that each $Y_i$ is normal. Next, as $\phi$ is qcqs and of finite presentation, by \SPD{01ZM}{0C0C}, for some $i_0$ there exist a finite type smooth morphism $\overline{\phi}_{i_0}:\overline{X}_{i_0}\to Y_{i_0}$ such that $\overline{X}\simeq \overline{X}_{i_0}\times_{Y_{i_0}}Y$ as $Y$-schemes and an open subscheme $X_{i_0}\subset \overline{X}_{i_0}$ whose pullback to $\overline{X}$ identifies with $X$; moreover, by \SP{0B8W}, there is an invertible $\mathscr{O}_{X_{i_0}}$-module $\mathscr{L}_{i_0}$ whose pullback to $X$ is isomorphic to $\mathscr{L}$. For any $i\ge i_0$, denote by $\phi_i:X_i\colonequals X_{i_0}\times_{Y_{i_0}}Y_i\to Y_i$ the base change of $\overline{\phi}_{i_0}|_{X_{i_0}}$ to $Y_i$, and by $\mathscr{L}_i$ the pullback of $\mathscr{L}_{i_0}$ to $X_i$. By \SPD{01ZM}{01ZP}, any projective embedding of $\overline{X}$ over $Y$ descends to a projective embedding of $\overline{X}_{i}$ over $Y_i$ for large $i$; in particular, $\overline{\phi}_i$ is projective for large $i$.
Since $Y$ is normal, the assumption (2) implies that the Stein factorization of $\overline{\phi}$ is $\overline{\phi}$ itself; in particular, $\mathscr{O}_Y\isoto \overline{\phi}_*\mathscr{O}_{\overline{X}}$.
This implies that the finite extension $\mathscr{O}_{Y_{i_0}} \hookrightarrow \overline{\phi}_{i_0,*}\mathscr{O}_{\overline{X}_{i_0}}$ is an isomorphism, because its base change to the function field of $Y$ is so and $Y_{i_0}$ is normal.
In particular, by Zariski's main theorem, $\overline{\phi}_{i_0}$ has connected geometric fibers; as it is also smooth, all its fibers are even geometrically integral.
By the limit formalism, for large $i$, $\mathscr{L}_i$ is trivial when restricted to the generic fiber of $\phi_i$. Consequently, for large $i$, the morphism $\phi_i:X_i\to Y_i$ and the invertible $\mathscr{O}_{X_i}$-module $\mathscr{L}_i$ satisfy all the assumptions of the Lemma, so $\mathscr{L}_i\simeq \phi_i^*\mathscr{N}_i$ for some invertible $\mathscr{O}_{Y_i}$-module $\mathscr{N}_i$. Then $\mathscr{L}\simeq \phi^*\mathscr{N}$ for $\mathscr{N}$ the pullback of $\mathscr{N}_i$ to $Y$.
\end{proof}
\begin{prop}[\emph{cf.}~\cite{CTS87}*{4.1--4.3}]\label{G-S type results for mult type}
For a Pr\"ufer domain $R$, an irreducible scheme $X$ essentially smooth over $R$ with function field $K(X)$, an $X$-group scheme $M$ of multiplicative type, and a connected finite \'etale Galois covering $X'\to X$ splitting $M$, the restriction maps
\[
H^1_{\mathrm{fppf}}(X,M) \rightarrow H^1_{\mathrm{fppf}}(K(X),M)\quad \text{ and } \quad H^2_{\mathrm{fppf}}(X,M) \rightarrow H^2_{\mathrm{fppf}}(K(X),M)
\]
are injective in each of the following cases:
\begin{enumerate}
\item \label{G-S for mult type gp} $X=\Spec A$ for $A$ a semilocal ring essentially smooth over $R$;
\item \label{B-Q for mult type} for some essentially smooth semilocal $R$-algebra $A$, there exists a quasi-compact open immersion $X\hookrightarrow \overline{X}$, where $\overline{X}$ is a smooth projective $A$-scheme with geometrically integral fibers, such that $\Pic X_L=0$ for every finite separable field extension $L/\Frac A$, and $M=N_X$ for an $A$-group $N$ of multiplicative type (for instance, $X$ could be any quasi-compact open subscheme of $\mathbb{P}_A^N$);
\item any subcovering $X''\to X$ of $X'\to X$ satisfies $\Pic X''=0$.
\end{enumerate}
Further, if $M$ is a flasque $X$-torus, then in all cases $\mathrm{(i)}$--$\mathrm{(iii)}$ the restriction map
\[
H^1_{\mathrm{\acute{e}t}}(X,M) \isoto H^1_{\mathrm{\acute{e}t}}(K(X),M) \quad \text{ is bijective.}
\]
\end{prop}
\begin{proof}
It is clear that (i) is a particular case of (ii). Let us show that (ii) is a particular case of (iii). Let $A\to B$ be a connected finite \'etale Galois covering that splits $N$. Take $X'\colonequals X\times_AB$. As $A$ is normal and $X \to \Spec A$ is smooth, $X$ is also normal. Then, since $X\to \Spec A$ has geometrically integral generic fiber, the natural map $\pi_1^{\mathrm{\acute{e}t}}(X)\to \pi_1^{\mathrm{\acute{e}t}}(\Spec A)$ is surjective.
This implies that any subcovering $X''\to X$ of $X'\to X$ is of the form $X''=X\times_AC$ for some subcovering $A\to C$ of $A\to B$. By assumption, $\Pic X_{\Frac (C)}=0$, so we may apply \Cref{non-Noeth-pullback line bundle} to the morphism $X\times_AC \to \Spec C$ to deduce that
\[
\text{the pullback map \quad $\Pic C \rightarrow \Pic (X\times_A C)$\quad is surjective. }
\]
Since $C$ is semilocal, $\Pic C=0$, and hence also $\Pic (X\times_AC)=0$.
It thus suffices to prove all assertions in case (iii).
Assume first that $M=T$ is an $X$-torus. Take
\[
\text{a flasque resolution\qquad $1\to F \to P \to T \to 1$,\qquad}
\]
where $F$ is a flasque $X$-torus and $P$ is a quasitrivial $X$-torus. This yields a commutative diagram
\[
\begin{tikzcd}
{H^1_{\mathrm{\acute{e}t}}(X,P)} \arrow[r] & {H^1_{\mathrm{\acute{e}t}}(X,T)} \arrow[r] \arrow[d, "\rho_1"] & {H^2_{\mathrm{\acute{e}t}}(X,F)} \arrow[d, "\rho_2"] \\
& {H^1_{\mathrm{\acute{e}t}}(K(X),T)} \arrow[r] & {H^2_{\mathrm{\acute{e}t}}(K(X),F)}
\end{tikzcd}
\]
with exact rows.
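Recall from \cite{CTS87} why such a resolution exists: writing $\widehat{(-)}$ for the character modules (viewed as modules under the Galois group of $X'\to X$), the lattice $\widehat{T}$ fits into a short exact sequence
\[
0\to \widehat{T}\to \widehat{P}\to \widehat{F}\to 0
\]
with $\widehat{P}$ a permutation module and $\widehat{F}$ flasque, and dualizing yields $1\to F\to P\to T\to 1$.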
Now the quasitrivial torus $P$ is isomorphic to a finite direct product of tori $\mathrm{Res}_{X''/X}\mathbb{G}_{m,X''}$ for finite \'etale subcoverings $X''\to X$ of $X'\to X$. Hence, assumption (iii) implies that $H^1_{\mathrm{\acute{e}t}}(X,P)=0$, and so the injectivity of $\rho_1$ reduces to that of $\rho_2$.
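For the reader's convenience, we spell out the standard point behind this vanishing. For a finite \'etale subcovering $\pi\colon X''\to X$ we have $\mathrm{Res}_{X''/X}\mathbb{G}_{m,X''}=\pi_*\mathbb{G}_{m,X''}$, and finite morphisms have no higher \'etale pushforwards, so
\[
H^1_{\mathrm{\acute{e}t}}(X,\mathrm{Res}_{X''/X}\mathbb{G}_{m,X''})\simeq H^1_{\mathrm{\acute{e}t}}(X'',\mathbb{G}_{m})\simeq \Pic X'',
\]
which vanishes under assumption (iii).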
To prove that $\rho_2$ is injective, pick $a\in H^2_{\mathrm{\acute{e}t}}(X,F)$ with $a|_{K(X)}=0$. By spreading out, we may assume that $X$ is a localization of an irreducible, smooth, affine $R$-scheme $\widetilde{X}$, that $F=\widetilde{F}_X$ for a flasque $\widetilde{X}$-torus $\widetilde{F}$, and that $a=\widetilde{a}|_X$ for some class $\widetilde{a} \in H^2_{\mathrm{\acute{e}t}}(\widetilde{X},\widetilde{F})$. Since $\widetilde{a}|_{K(X)}=0$, there is a proper closed subset $Z\subset \widetilde{X}$ such that
\[
\widetilde{a}|_{\widetilde{X}\setminus Z}=0 \in H^2_{\mathrm{\acute{e}t}}(\widetilde{X}\setminus Z,\widetilde{F}).
\]
By \Cref{purity for gp of mult type}\ref{gl-H1-H2-flasque}, $\widetilde{a}=0$, so $a=\widetilde{a}|_X=0$. This proves the injectivity of $\rho_2$, and hence also of $\rho_1$.

Now let $M$ be an arbitrary $X$-group of multiplicative type. There is an $X$-subtorus $T\subset M$ such that $\mu\colonequals M/T$ is $X$-finite. Consequently, for any generically trivial $M$-torsor $\mathcal{P}$, the $\mu$-torsor $\mathcal{P}/T$ is finite over $X$; as $X$ is normal, this implies $(\mathcal{P}/T)(X)=(\mathcal{P}/T)(K(X))$. Therefore, $\mathcal{P}/T\to X$ has a section that lifts to a generic section of $\mathcal{P}\to X$, that is, $\mathcal{P}$ reduces to a generically trivial $T$-torsor $\mathcal{P}_T$. By the injectivity of $\rho_1$, $\mathcal{P}_T$, and hence also $\mathcal{P}$, is trivial. This proves the injectivity of $H^1_{\mathrm{fppf}}(X,M) \to H^1_{\mathrm{fppf}}(K(X),M)$.
On the other hand, there is a short exact sequence
\[
1\to M\to F \to P \to 1
\]
of $X$-groups of multiplicative type, where $F$ is flasque and $P$ is quasitrivial, both split after base change by $X'\to X$. This yields the following commutative diagram with exact rows
\[
\begin{tikzcd}
{H^1_{\mathrm{fppf}}(X,P)} \arrow[r] & {H^2_{\mathrm{fppf}}(X,M)} \arrow[r] \arrow[d,"\rho_3"] & {H^2_{\mathrm{fppf}}(X,F)} \arrow[d,"\rho_2"] \\
& {H^2_{\mathrm{fppf}}(K(X),M)} \arrow[r] & {H^2_{\mathrm{fppf}}(K(X),F)}
\end{tikzcd}
\]
Since we have already shown that $H^1_{\mathrm{fppf}}(X,P)=0$ and that $\rho_2$ is injective, the injectivity of $\rho_3$ follows.
Finally, if $M$ is a flasque $X$-torus, the bijectivity of $H^1_{\mathrm{\acute{e}t}}(X,M) \to H^1_{\mathrm{\acute{e}t}}(K(X),M)$ follows once one proves its surjectivity, and the latter follows from \Cref{purity for gp of mult type}\ref{gl-H1-H2-flasque} via a limit argument.
\end{proof}
\section{Geometric lemmata}
\label{sect-geom lem}
\begin{pp}[Geometric presentation lemma in the style of Gabber--Quillen]
In both of the works of Fedorov and \v{C}esnavi\v{c}ius on mixed characteristic Grothendieck--Serre, geometric results in the style of Gabber--Quillen play a prominent role; see \cite{Fed22b}*{Proposition~3.18} and \cite{Ces22a}*{Variant~3.7}, respectively.
We begin with the following analogue of \emph{loc.~cit.}
\end{pp}
\begin{lemma}\label{Ces's Variant 3.7}
For a semilocal Pr\"{u}fer ring $R$, a projective, flat $R$-scheme $\overline{X}$ with fibers of pure dimension $d>0$, an $R$-smooth open subscheme $X\subset \overline{X}$, a finite subset $\mathbf{x}\subset X$, and a closed subscheme $Y\subset X$ that is $R$-fiberwise of codimension $\ge 1$ such that $\overline{Y}\setminus Y$ is $R$-fiberwise of codimension $\ge 2$ in $\overline{X}$, there are
affine opens $S\subset \mathbb{A}_R^{d-1}$ and $\mathbf{x}\subset U\subset X$
and a smooth morphism $\pi:U\to S$ of relative dimension $1$ such that $Y \cap U$ is $S$-finite.
\end{lemma}
\begin{proof}
This can be proved similarly to \cite{Ces22a}*{Variant~3.7}.
\end{proof}
\begin{pp}[A variant of Lindel's lemma]
According to a lemma of Lindel \cite{Lin81}*{Proposition~1 \emph{et seq.} Lemma}, an \'etale extension of local rings $A\to B$ with trivial extension of residue fields induces isomorphisms
\[
A/r^nA\isoto B/r^nB \quad \text{ for all } \quad n\ge1,
\]
for a well-chosen non-unit $r\in A$.
In our context, in which the prescribed $B$ is essentially smooth over a valuation ring, we will prove the following variant of \emph{loc.~cit.}\ that allows us to fix $r\in B$ in advance, at the cost that $A$ be a carefully chosen local ring of an affine space over that valuation ring. This result will be the key geometric input for dealing with torsors under a reductive group scheme that descends to the Pr\"{u}fer base ring and, as in the cited work of Lindel on the Bass--Quillen conjecture for vector bundles, it reduces us to studying torsors on opens of affine spaces.
\end{pp}
\begin{prop}\label{variant of Lindel's lem}
Let $\Lambda$ be a semilocal Pr\"{u}fer domain, $X$ an irreducible, $\Lambda$-smooth affine scheme of pure relative dimension $d>0$, $Y\subset X$ a finitely presented closed subscheme that avoids all the maximal points of the $\Lambda$-fibers of $X$, and $\mathbf{x} \subset X$ a finite subset. Assume that for every maximal ideal $\mathfrak{m}\subset \Lambda$ with finite residue field, there are at most $\max(\#\,\kappa(\mathfrak{m}),d)-1$ points of $\mathbf{x}$ lying over $\mathfrak{m}$. Then there are an affine open neighbourhood $W\subset X$ of $\mathbf{x}$, an affine open $U\subset \mathbb{A}_{\Lambda}^d$, and an \'etale surjective $\Lambda$-morphism $f:W\to U$ such that the restriction $f|_{W\cap Y}:W\cap Y \to U$ is a closed immersion and $f$ induces a Cartesian square:
\begin{equation*}
\begin{tikzcd}
W\cap Y \arrow[r, hook] \arrow[d, equal]
& W \arrow[d, "f"] \\
W\cap Y \arrow[r, hook]
& U.
\end{tikzcd}
\end{equation*}
\end{prop}
\begin{rem}\label{rem on Lindel's lem}
The assumption on the cardinality of $\mathbf{x}$ holds, for instance, if either $\mathbf{x}$ is a singleton or $d>\#\,\mathbf{x}$. The latter case will be critical for settling the general semilocal case of \Cref{G-S for constant reductive gps}.
On the other hand, the following finite field obstruction shows that some assumption on $\#\,\mathbf{x}$ is necessary: if $d=1$ and $\Lambda=k$ is a finite field, then the map $f$ provided by \Cref{variant of Lindel's lem} gives a closed immersion $\mathbf{x} \hookrightarrow \mathbb{A}_k^1$, which is impossible as soon as $\#\,\mathbf{x} > \#\,k$.
\end{rem}
To prove \Cref{variant of Lindel's lem}, we begin with the following reduction to considering closed points.
\begin{lemma}\label{reduction to closed points}
The proof of \Cref{variant of Lindel's lem} reduces to the case when $\mathbf{x}$ consists of closed points of the closed $\Lambda$-fibers of $X$.
\end{lemma}
\begin{proof}
First, by a standard limit argument involving \Cref{approxm semi-local Prufer ring}, we may reduce to the case when $\Lambda$ has finite Krull dimension.
Next, if for each $x\in \mathbf{x}$ the closure $\overline{\{x\}}$ contains a closed point $x'$ of the closed $\Lambda$-fibers of $X$ and if the new collection $\{x':x\in \mathbf{x}\}$ satisfies the same cardinality assumption, we can simply replace each $x$ by $x'$ to complete the reduction. However, it may happen that $\overline{\{x\}}$ does not contain any point of the closed $\Lambda$-fibers of $X$, and even if it does, the new collection $\{x':x\in \mathbf{x}\}$ may fail the cardinality assumption. To overcome this difficulty, we will add auxiliary primes to $\Spec \Lambda$ (and the corresponding fibers to $X$ and $Y$) so that $\overline{\{x\}}$ contains closed points of the closed $\Lambda$-fibers of $X$ for all $x\in \mathbf{x}$. More precisely, we will show that there are a semilocal Pr\"{u}fer domain $\Lambda'$, an open embedding $\Spec \Lambda \subset \Spec \Lambda'$, an irreducible, affine, $\Lambda'$-smooth scheme $X'$ of pure relative dimension $d$, a closed $\Lambda'$-subscheme $Y'\subset X'$ that avoids all the maximal points of the $\Lambda'$-fibers of $X'$, and a $\Lambda$-isomorphism $X'_{\Lambda} \simeq X$ that identifies $Y'_{\Lambda}$ with $Y$ such that the assumptions of the second sentence of this paragraph hold for the new $X'$ and $Y'$.
To construct the desired $\Lambda'$ (and $X',Y'$), we first use the specialization technique to reduce to the case when all points of $\mathbf{x}$ are closed in the \emph{corresponding} $\Lambda$-fibers of $X$, that is, if $x\in \mathbf{x}$ lies over $\mathfrak{p}\subset \Lambda$, then $x$ is $\kappa(\mathfrak{p})$-finite. For the rest of the proof we assume, without loss of generality, that there is exactly one point of $\mathbf{x}$, say $x$, that lies over some \emph{non-maximal} prime of $\Lambda$, say $\mathfrak{p}$. Write $\Lambda_{\mathfrak{p}}=\bigcup A$ as a filtered union of its finitely generated $\mathbf{Z}$-subalgebras $A$.
By a standard limit argument (\SPD{0EY1}{0C0C}), for large enough $A$,
\begin{itemize}
\item [(a)] $X_{\Lambda_{\mathfrak{p}}}$ descends to an irreducible, affine, $A$-smooth scheme $\mathcal{X}$ of pure relative dimension $d$;
\item [(b)] the finitely presented closed subscheme $Y_{\Lambda_{\mathfrak{p}}}\subset X_{\Lambda_{\mathfrak{p}}}$ descends to a closed $A$-subscheme $\mathcal{Y} \subset \mathcal{X}$ which, upon enlarging $A$, avoids all the maximal points of the $A$-fibers of $\mathcal{X}$: by \cite{EGAIV3}*{Proposition~9.2.6.1}, the subset
\[
\text{$\{s\in \Spec A: \dim \mathcal{Y}_s=d\} \subset \Spec A$\quad is constructible,}
\]
and its base change over $\Lambda_{\mathfrak{p}}=\lim_A A$ is empty, so we may assume that it is already empty;
\item [(c)] the $\kappa(\mathfrak{p})$-finite point $x$ descends to an $A/\mathfrak{p}_A$-finite closed subscheme $\widetilde{x}\subset \mathcal{X}_{A/\mathfrak{p}_A}$, where $\mathfrak{p}_A\colonequals A\cap \mathfrak{p}$.
\end{itemize}
For any prime $\Lambda \supset \mathfrak{q} \supset \mathfrak{p}$ with $\mathrm{ht}(\mathfrak{q})=\mathrm{ht}(\mathfrak{p})+1$, choose an element $a_{\mathfrak{q}} \in \mathfrak{q} \setminus \mathfrak{p}$. We assume in addition that
\begin{itemize}
\item [(d)] $a_{\mathfrak{q}}^{-1} \in A$ for all such $\mathfrak{q}$ (this guarantees the equality $A\cdot \Lambda_{\mathfrak{m}}=\Lambda_{\mathfrak{p}}$ for every maximal ideal $\mathfrak{m}\subset \Lambda$ containing $\mathfrak{p}$).
\end{itemize}
Since a maximal ideal $\mathfrak{m}\subset \GammaL$ containing $\mathfrak{p}$ gives rise to a non-trivial valuation ring $\GammaL_{\mathfrak{m}}/\mathfrak{p}\GammaL_{\mathfrak{m}}$
of $\kappa(\mathfrak{p})$, we see that
the field $\kappa(\mathfrak{p})$ is not finite. As $\kappa(\mathfrak{p}) =\textbfigcup_A A/\mathfrak{p}_A$, by enlarging $A$ we may assume that $A/\mathfrak{p}_A$ is also not a finite, so we can find a nonzero prime $ \mathfrak{p}' \subset A/\mathfrak{p}_A$.\mathfrak{o}otnote{We use the following fact: a prime ideal of a finite type $\mathbf{Z}$-algebra is maximal if and only if its residue field is finite.} Choose a valuation ring of $\kappa(\mathfrak{p}_A)$ with center $\mathfrak{p}'$ in $A/\mathfrak{p}_A$, and then extend it to a valuation ring $V_{\mathfrak{p}'}$ of $\kappa(\mathfrak{p})$. Let $V$ be the composite of $\GammaL_{\mathfrak{p}}$ and $V_{\mathfrak{p}'}$; explicitly, $V$ is the preimage of $V_{\mathfrak{p}'}$ under the reduction map $\GammaL_{\mathfrak{p}} \twoheadrightarrow \kappa(\mathfrak{p})$. Then $V$ is a valuation ring of $\Frac (\GammaL)$, and, by the above assumption (d), the equality $V\cdot \GammaL_{\mathfrak{m}}=\GammaL_{\mathfrak{p}}$ holds for any maximal ideal $\mathfrak{m}\subset \GammaL$ containing $\mathfrak{p}$. Therefore, by \cite{Bou98}*{VI, \S7, Propositions~1--2},
\[
\Lambda' \colonequals \Lambda \cap V
\]
is a semilocal Pr\"{u}fer domain whose spectrum is obtained by glueing $\Spec \Lambda$ with $\Spec V$ along their common open $\Spec \Lambda_{\mathfrak{p}}$.
Consequently, we may glue $X$ with $\mathcal{X}_{V}$ along $X_{\Lambda_{\mathfrak{p}}}$ to extend $X$ to an irreducible, affine, $\Lambda'$-smooth scheme $X'$ of pure relative dimension $d$, with a closed $\Lambda'$-subscheme $Y'\subset X'$ obtained by glueing $Y$ with $\mathcal{Y}_{V}$ along $Y_{\Lambda_{\mathfrak{p}}}$; by construction, $Y'$ avoids all the maximal points of the $\Lambda'$-fibers of $X'$. Since the closed subscheme $\widetilde{x}_V \subset \mathcal{X}_V$ is $V$-finite, we may specialize $x$ to a point of $\widetilde{x}_V\subset X'$ that lies over the closed point of $\Spec V$. Hence, by replacing $\Lambda$ by $\Lambda'$, $X$ by $X'$ and $Y$ by $Y'$, we reduce to the already treated case when all points of $\mathbf{x}$ specialize to closed points of the closed $\Lambda$-fibers of $X$.
\end{proof}
As the relative dimension of $X/\Lambda$ is $d>0$, the closed subset $\mathbf{x}\cup Y$ avoids all maximal points of the $\Lambda$-fibers of $X$.
By prime avoidance, there is an $a\in \Gamma(X,\mathscr{O}_X)$ that vanishes on $\mathbf{x}\cup Y$ but does not vanish at any maximal point of the $\Lambda$-fibers of $X$.
Replacing $Y$ by $V(a)$, we may assume the following in the sequel:
\begin{itemize}
\item for some $a\in \Gamma(X,\mathscr{O}_X)$, the vanishing locus $Y=V(a)$ avoids all the maximal points of the $\Lambda$-fibers of $X$; and
\item $\mathbf{x}$ consists of closed points of the closed $\Lambda$-fibers of $X$ and is contained in $Y$.
\end{itemize}
\begin{lemma} \label{lem on h:X to A1}
For $k$ a finite product of fields, a finite type affine $k$-scheme $X'$, a closed subscheme $Y'\subset X'$ of pure dimension $e>0$, a finite subset of closed points $\mathbf{x}\subset Y'\cap X'^{\mathrm{sm}}$, and an element $(t(x))\in \prod_{x\in \mathbf{x}}\kappa(x)$, there is a $k$-morphism $h:X'\to \mathbb{A}_k^1$ that is smooth at $\mathbf{x}$ such that
\[
\text{$h|_{Y'}$ has fiber dimension $e-1$\qquad and \qquad $h(x)=t(x)$\quad for every $x\in \mathbf{x}$.}
\]
\end{lemma}
\begin{proof}
We may assume that $k$ is a field: the general case follows by applying the field case to each factor of $k$ separately. Pick a finite subset of closed points $\mathbf{y}\subset Y'$ that is disjoint from $\mathbf{x}$ and meets each irreducible component of $Y'$ in exactly one point. For every integer $n>0$ denote by $\mathbf{x}^{(n)}$ (resp., $\mathbf{y}^{(n)}$) the $n^{\text{th}}$ infinitesimal neighbourhood of $\mathbf{x}$ (resp., $\mathbf{y}$) in $X'$.
Pick an $h_{\mathbf{x}}\in H^0(\mathbf{x}^{(1)}, \mathscr{O}_{\mathbf{x}^{(1)}})$ such that
\begin{equation}\label{value and differential at x}
\text{$h_{\mathbf{x}}(x)= t(x)$ \quad and \quad $\mathrm{d}h_{\mathbf{x}}(x)\neq 0\in \mathfrak{m}_{X',x}/\mathfrak{m}_{X',x}^2$ \quad for every \quad $x\in \mathbf{x}$.}
\end{equation}
Pick an $h_{\mathbf{y}} \in H^0(X',\mathscr{O}_{X'})$ whose restriction to every irreducible component of $Y'_{\text{red}}$ is not identically zero. By the faithful flatness of the completion map
\[
\textstyle \mathscr{O}_{Y'_{\text{red}},\mathbf{y}}=\prod_{y\in \mathbf{y}} \mathscr{O}_{Y'_{\text{red}},y} \to \prod_{y\in \mathbf{y}} \widehat{\mathscr{O}_{Y'_{\text{red}},y}}=\lim_{n\rightarrow \infty} H^0(\mathbf{y}^{(n)}\cap Y'_{\text{red}},\mathscr{O}_{\mathbf{y}^{(n)}\cap Y'_{\text{red}}} ),
\]
we see that for large $n$, the restriction of $h_{\mathbf{y}}$ to every component of $\mathbf{y}^{(n)}\cap Y'_{\text{red}}$ is nonzero.
Let $h\in H^0(X',\mathscr{O}_{X'})$ be an element whose restriction to $\mathbf{x}^{(1)}$ is $h_{\mathbf{x}}$ and whose restriction to $\mathbf{y}^{(n)}$ agrees with $h_{\mathbf{y}}$. As $X'$ is $k$-smooth at $\mathbf{x}$, (\ref{value and differential at x}) implies that the morphism $h:X'\to \mathbb{A}_k^1$ (obtained by sending the standard coordinate of $\mathbb{A}_k^1$ to $h$) is smooth at $\mathbf{x}$ and satisfies $h(x)=t(x)$ for every $x\in \mathbf{x}$. For large $n$, since the restriction of $h$ to each irreducible component of $\mathbf{y}^{(n)}\cap Y'_{\text{red}}$, and hence also to each component of $Y'_{\text{red}}$, is nonzero, the morphism $h$ is non-constant on each irreducible component of $Y'$, so $h|_{Y'}$ has fiber dimension $e-1$.
\end{proof}
\begin{lemma} \label{lem on g}
With the setup in \Cref{reduction to closed points}, there exists a $\Lambda$-morphism $g:X\to \mathbb{A}_{\Lambda}^{d-1}$ such that
\begin{enumerate}
\item it is smooth of relative dimension $1$ at $\mathbf{x}$;
\item the restriction $g|_Y$ is quasi-finite at $\mathbf{x}$; and
\item for every maximal ideal $\mathfrak{m}\subset \Lambda$ and every $x\in \mathbf{x}$ lying over $\mathfrak{m}$, one has $\kappa(\mathfrak{m})=\kappa(g(x))$.
\end{enumerate}
In addition, if $d>\#(\mathbf{x} \cap X_{\kappa(\mathfrak{m})})$ for every maximal ideal $\mathfrak{m}\subset \Lambda$ with finite residue field, then we may find such a morphism $g$ under which all points of $\mathbf{x}$ have pairwise distinct images.
\end{lemma}
\begin{proof}
We first reduce the lemma to the case when $\Lambda=k$ is a field. Assume that for every maximal ideal $\mathfrak{m}\subset \Lambda$ there exists a $\kappa(\mathfrak{m})$-morphism $g_{\mathfrak{m}}:X_{\kappa(\mathfrak{m})}\to \mathbb{A}_{\kappa(\mathfrak{m})}^{d-1}$ that is smooth at $\mathbf{x}\cap X_{\kappa(\mathfrak{m})}$ and whose restriction $g_{\mathfrak{m}}|_{Y_{\kappa(\mathfrak{m})}}$ is quasi-finite at $\mathbf{x}\cap X_{\kappa(\mathfrak{m})}$.
We then use the Chinese remainder theorem to lift the maps $\{g_{\mathfrak{m}}\}_{\mathfrak{m}}$ simultaneously to a $\Lambda$-morphism $g:X\to \mathbb{A}_{\Lambda}^{d-1}$ that verifies the first assertion of the lemma: only the flatness of $g$ at $\mathbf{x}$ needs to be checked, and this follows from the fibral criterion of flatness \cite{EGAIV3}*{Théorème~11.3.10}. In addition, if all points of $\mathbf{x} \cap X_{\kappa(\mathfrak{m})}$ have pairwise distinct images under each $g_{\mathfrak{m}}$, then the resulting morphism $g$ verifies the second assertion of the lemma.
Assume now that $\Lambda=k$ is a field.
Given a collection of maps $t_1,\cdots,t_{d-1}:\mathbf{x} \to k$, taking products yields maps $(t_1,\cdots,t_i):\mathbf{x}\to \mathbb{A}_k^i(k)=k^{i}$ for $1\le i \le d-1$.
We now apply \Cref{lem on h:X to A1} inductively to show:
\begin{claim} \label{inductively construct g_i}
For $1\le i\le d-1$, there exists a $k$-morphism $g_i:X\to \mathbb{A}_k^i$ such that
\begin{itemize}
\item $g_i$ is smooth at $\mathbf{x}$ with $g_i|_{\mathbf{x}}=(t_1,\cdots,t_i)$; and
\item every irreducible component of $(g_i|_Y)^{-1}(g_i(\mathbf{x}))$ intersecting $\mathbf{x}$ has dimension $d-1-i$.
\end{itemize}
\end{claim}
\begin{proof}[Proof of the claim]
Set $g_0:X\to \Spec k$, the structural morphism. Assume the morphism $g_{i-1}$ has been constructed. We apply \Cref{lem on h:X to A1}, with the $k$ there being the ring $k'$ of global sections of $g_{i-1}(\mathbf{x})$, with $X'$ being $g_{i-1}^{-1}(g_{i-1}(\mathbf{x}))$, with $Y'$ being the union of all the irreducible components of $(g_{i-1}|_Y)^{-1}(g_{i-1}(\mathbf{x}))$ meeting $\mathbf{x}$, and with $t$ being $t_i|_{k'}$, to obtain a $k'$-morphism $h\colon g_{i-1}^{-1}(g_{i-1}(\mathbf{x}))\to \mathbb{A}_{k'}^1$ that is smooth at $\mathbf{x}$ such that $h|_{Y'}$ has fiber dimension $d-1-i$ and such that $h|_{\mathbf{x}}=t_i|_{k'}$, where $t_i|_{k'}:\mathbf{x} \xrightarrow{t_i} k\to k'$. It remains to take $g_i\colonequals (g_{i-1},\widetilde{h}):X\to \mathbb{A}_k^i=\mathbb{A}_k^{i-1}\times_k\mathbb{A}_k^1$ for any lifting $\widetilde{h}\in H^0(X,\mathscr{O}_X)$ of
\[
h \in H^0\bigl(g_{i-1}^{-1}(g_{i-1}(\mathbf{x})),\mathscr{O}_{g_{i-1}^{-1}(g_{i-1}(\mathbf{x}))}\bigr). \qedhere
\]
\end{proof}
Starting from any map $(t_1,\cdots,t_{d-1})\colon \mathbf{x}\to k^{d-1}$, the morphism $g\colonequals g_{d-1}$ from \Cref{inductively construct g_i} immediately settles the first assertion of the lemma. For the second assertion, it suffices to note that, under the stated assumption, there always exists an injection $\mathbf{x} \hookrightarrow k^{d-1}$: for an infinite field $k$, the cardinality of $k^{d-1}$ is infinite, and, for a finite field $k$, we have $\# (k^{d-1}) \ge d-1$.
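In the finite-field case, the counting can be spelled out: since $\#\,k\ge 2$ and, by the stated assumption (applied with $\Lambda=k$), $\#\,\mathbf{x}\le d-1$, we get
\[
\#\bigl(k^{d-1}\bigr)=(\#\,k)^{d-1}\ \ge\ 2^{d-1}\ \ge\ d-1\ \ge\ \#\,\mathbf{x},
\]
so the required injection $\mathbf{x} \hookrightarrow k^{d-1}$ exists.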
\end{proof}
Consider the morphism $(g,a)\colon X\to \mathbb{A}_{\Lambda}^d=\mathbb{A}_{\Lambda}^{d-1}\times_{\Lambda} \mathbb{A}_{\Lambda}^1$. By construction, it is quasi-finite at $\mathbf{x}$, and, by the openness of the quasi-finite locus of a finite type morphism, up to shrinking $X$, we may assume that it is quasi-finite everywhere; since the generic $\Lambda$-fibers of its domain and codomain are irreducible varieties of the same dimension $d$, it is also dominant. Consequently, by Zariski's main theorem \cite{EGAIV4}*{Corollaire~18.12.13}, the morphism $(g,a)\colon X\rightarrow \mathbb{A}^d_{\Lambda}$ factors as $$X \xrightarrow{j} \overline{X} \xrightarrow{h_1} \mathbb{A}_{\Lambda}^d,$$ where $\overline{X}$ is an integral affine scheme, $j$ is an open immersion, and $h_1$ is finite and dominant.
Denote $\overline{g}\colonequals \mathrm{pr}_1 \circ h_1$, where $\mathrm{pr}_1\colon \mathbb{A}_{\Lambda}^d \to \mathbb{A}_{\Lambda}^{d-1}$ is the projection onto the first $(d-1)$ coordinates, and let $\overline{a}\in \Gamma(\overline{X},\mathscr{O}_{\overline{X}})$ be the pullback of the last standard coordinate of $\mathbb{A}_{\Lambda}^d$.
Then $h_1=(\overline{g},\overline{a})$, and $\overline{g}$ (resp., $\overline{a}$) restricts to $g$ (resp., $a$) on $X$.
In what follows, we identify $j(x)$ with $x$ for every $x\in \mathbf{x}$.
Write $S\subset \Spec \Lambda$ for the union of the closed points of $\Spec \Lambda$ (with the reduced structure).
\begin{lemma} \label{lem on b}
There exists an element $b\in \Gamma(\overline{X}, \mathscr{O}_{\overline{X}})$ such that the morphism $$h_2\colonequals (\overline{g}, b):\overline{X}\to \mathbb{A}_{\Lambda}^d=\mathbb{A}_{\Lambda}^{d-1}\times_{\Lambda} \mathbb{A}_{\Lambda}^1$$ has the following properties:
\begin{enumerate}
\item \label{lem on b 1} set-theoretically we have $h_1^{-1}(h_1(\mathbf{x})) \cap h_2^{-1}(h_2(\mathbf{x}))= \mathbf{x}$;
\item \label{lem on b 2} $h_2$ is \'etale around $\mathbf{x}$ and induces a bijection $\mathbf{x} \xrightarrow{\sim} h_2(\mathbf{x})$; and
\item \label{lem on b 3} $h_2$ induces an isomorphism of residue fields $\kappa(h_2(x)) \xrightarrow{\sim} \kappa(x)$ for every $x \in \mathbf{x}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $h_1$ is finite and surjective, we see that $\overline{g}^{-1}(g(\mathbf{x}))$ is an $S$-curve that contains $g^{-1}(g(\mathbf{x}))$ as an open subcurve, so it is $S$-smooth around $\mathbf{x}$.
For a point $x\in \mathbf{x}$ lying over a maximal ideal $\mathfrak{m}\subset \Lambda$, its first infinitesimal neighbourhood in $\overline{g}^{-1}(g(\mathbf{x}))$ is isomorphic to $\Spec (\kappa(x)[u_x]/(u_x^2))$, where $u_x$ is a uniformizer of $\overline{g}^{-1}(g(\mathbf{x}))$ at $x$.
Recall that the residue field of a point on a smooth curve over a field is a simple extension of that field, see \cite{Ces22a}*{Lemma~6.3}.
It follows that, for $x\in \mathbf{x}$ lying over $\mathfrak{m}$, there exists a closed $\kappa(\mathfrak{m})$-immersion $x^{(1)} \hookrightarrow \mathbb{A}_{\kappa(\mathfrak{m})}^1=\mathbb{A}_{g(x)}^1$. For a maximal ideal $\mathfrak{m} \subset \Lambda$ with finite residue field, under the assumption that $\#(\mathbf{x}\cap X_{\kappa(\mathfrak{m})}) < \max (\#\,\kappa(\mathfrak{m}),d)$, either $\mathbf{x}$ contains at most $\#\,\kappa(\mathfrak{m})-1$ points lying over $\mathfrak{m}$ or every fiber of $g_{\kappa(\mathfrak{m})}$ contains at most one point of $\mathbf{x}$ (\Cref{lem on g}). Consequently, we may arrange the above immersions so that they jointly give a closed immersion over $\mathbb{A}_{\Lambda}^{d-1}$:
\begin{equation} \label{embeds 1st neighb of x}
\textstyle \bigsqcup_{x\in \mathbf{x}} x^{(1)} \hookrightarrow
\mathbb{A}_{g(\mathbf{x})}^1\subset \mathbb{A}_{\mathbb{A}_{\Lambda}^{d-1}}^1=\mathbb{A}_{\Lambda}^d,
\end{equation}
where we regard $g(\mathbf{x})\subset \mathbb{A}_{\Lambda}^{d-1}$ as a closed subscheme. Note that the complement of the image of the morphism (\ref{embeds 1st neighb of x}) in $\mathbb{A}_{\Lambda}^d$ has at least one rational point fiberwise over $\mathbb{A}_{\Lambda}^{d-1}$.
Thus, by sending each $y\in h_1^{-1}(h_1(\mathbf{x})) \backslash \mathbf{x}$ to a suitable rational point of $\mathbb{A}_{g(y)}^1$, we may further extend (\ref{embeds 1st neighb of x}) to an $\mathbb{A}_{\Lambda}^{d-1}$-morphism
\[
\textstyle u\colon Z\colonequals \left(\bigsqcup_{x\in \mathbf{x}} x^{(1)}\right) \bigsqcup \left(\bigsqcup_{y\in h_1^{-1}(h_1(\mathbf{x})) \backslash \mathbf{x}}y\right) \to \mathbb{A}_{\Lambda}^d
\]
such that $u(\mathbf{x})\cap u(h_1^{-1}(h_1(\mathbf{x})) \backslash \mathbf{x}) =\emptyset$, or, equivalently,
\begin{equation}\label{intersection equality}
h_1^{-1}(h_1(\mathbf{x})) \cap u^{-1}(u(\mathbf{x}))= \mathbf{x}.
\end{equation}
Let $b\in \Gamma(\overline{X},\mathscr{O}_{\overline{X}})$ be a lifting of $u^*(t)\in \Gamma(Z,\mathscr{O}_Z)$, where $t$ is the standard coordinate on $\mathbb{A}_{\Lambda}^1$.
Consider the morphism $h_2\colonequals (\overline{g}, b):\overline{X}\to \mathbb{A}_{\Lambda}^d=\mathbb{A}_{\Lambda}^{d-1}\times_{\Lambda} \mathbb{A}_{\Lambda}^1$. Viewing $\overline{X}$ as an $\mathbb{A}_{\Lambda}^{d-1}$-scheme via $\overline{g}$, the base change of $h_2$ to $g(\mathbf{x})\subset \mathbb{A}_{\Lambda}^{d-1}$ restricts to $u$ on $Z$, so $h_2$ is unramified at $\mathbf{x}$.
Now \ref{lem on b 1} follows from (\ref{intersection equality}), and \ref{lem on b 3} is a consequence of our choice of the morphism (\ref{embeds 1st neighb of x}). For \ref{lem on b 2}, it suffices to argue that $h_2$ is flat at $\mathbf{x}$; however, since the domain and the codomain of $h_2$ are $\Lambda$-flat of finite presentation, the fibral criterion of flatness \cite{EGAIV3}*{Théorème~11.3.10} reduces us to checking the flatness of the $\Lambda$-fibers of $h_2$ at $\mathbf{x}$, and the latter follows from the flatness criterion \cite{EGAIV2}*{Proposition~6.1.5}.
\end{proof}
Let $\Lambda[h_1^{*}(t_1),\cdots, h_1^*(t_{d-1}), a,b]\subset \Gamma(\overline{X},\mathscr{O}_{\overline{X}})$ be the $\Lambda$-subalgebra generated by $a$, $b$, and $h_2^{*}(t_i)\,(=h_1^*(t_i)=\overline{g}^*(t_i))$ for $1\le i \le d-1$. We introduce the following notations.
\begin{itemize}
\item Let $V\colonequals \Spec (\Lambda[h_1^{*}(t_1),\cdots, h_1^*(t_{d-1}), a,b])$, and let $h_3:\overline{X}\to V$ be the morphism induced by the inclusion $\Lambda[h_1^{*}(t_1),\cdots, h_1^*(t_{d-1}), a,b]\subset \Gamma(\overline{X},\mathscr{O}_{\overline{X}})$.
\item Let $v_1:V\to \mathbb{A}_{\Lambda}^d$ be the morphism such that $v_1^*(t_i)=h_1^*(t_i)$ for $1\le i \le d-1$ and $v_1^*(t_d)=a$.
\item Let $v_2:V\to \mathbb{A}_{\Lambda}^d$ be the morphism such that $v_2^*(t_i)=h_1^*(t_i)$ for $1\le i \le d-1$ and $v_2^*(t_d)=b$.
Note that there is a natural surjection
\[
\Lambda[h_1^{*}(t_1),\cdots, h_1^*(t_{d-1}),b] \twoheadrightarrow \Lambda[h_1^{*}(t_1),\cdots, h_1^*(t_{d-1}), a,b]/(a)= \Gamma(V,\mathscr{O}_{V})/(a);
\]
this implies that $v_2$ induces a closed immersion
\[
\overline{v}_2:\Spec (\Gamma(V,\mathscr{O}_{V})/(a)) \hookrightarrow V \xrightarrow{v_2} \mathbb{A}_{\Lambda}^d.
\]
\end{itemize}
We have the following commutative diagram of morphisms of affine schemes:
\begin{equation*}
\begin{tikzcd}
X \arrow[r, hook, "j"] & \overline{X}
\arrow[drr, bend left, "h_2"]
\arrow[ddr, bend right, "h_1"]
\arrow[dr, "h_3"] & & \\
& & V \arrow[r, "v_2"] \arrow[d, "v_1"]
& \mathbb{A}_{\Lambda}^d \\
& & \mathbb{A}_{\Lambda}^d
&
\end{tikzcd}.
\end{equation*}
\begin{lemma} \label{isom of loc rings}
The morphism $h_3$ induces a bijection $\mathbf{x} \xrightarrow{\sim} h_3(\mathbf{x})$ and $h_3^{-1}(h_3(\mathbf{x}))=\mathbf{x}$. Further, $h_3$ induces an isomorphism of semilocal rings $$\mathscr{O}_{V,h_3(\mathbf{x})} \simeq \mathscr{O}_{\overline{X},\mathbf{x}}\,=\mathscr{O}_{X,\mathbf{x}}.
$$
\end{lemma}
\begin{proof}
By \Cref{lem on b}~\ref{lem on b 2}--\ref{lem on b 3}, the morphism $h_3$ induces a bijection $\mathbf{x} \xrightarrow{\sim} h_3(\mathbf{x})$ and an isomorphism of residue fields $\kappa(h_3(x)) \xrightarrow{\sim} \kappa(x)$ for every $x\in \mathbf{x}$. Chasing the above diagram we see that
\[
h_3^{-1}(h_3(\mathbf{x}))\subset h_1^{-1}(h_1(\mathbf{x})) \cap h_2^{-1}(h_2(\mathbf{x}))=\mathbf{x},
\]
where the last equality is \Cref{lem on b}~\ref{lem on b 1}. As $h_3$ is finite and surjective, we see that $h_3^{-1}(h_3(\mathbf{x}))=\mathbf{x}$. By \Cref{lem on b}~\ref{lem on b 2}, $h_3$ is unramified at $\mathbf{x}$. It follows that the base change of $h_3$ to $\Spec \,\mathscr{O}_{V,h_3(\mathbf{x})}$ is
$$
\Spec \,\mathscr{O}_{\overline{X},\mathbf{x}}\to \Spec \,\mathscr{O}_{V,h_3(\mathbf{x})},
$$
and it is actually an isomorphism: letting $J$ be the Jacobson radical of the semilocal ring $\mathscr{O}_{V,h_3(\mathbf{x})}$, since the natural map
\[
\textstyle \prod_{x\in \mathbf{x}}\kappa(h_3(x)) \simeq \mathscr{O}_{V,h_3(\mathbf{x})}/J \xrightarrow{h_3^*} \mathscr{O}_{\overline{X},\mathbf{x}}/J\mathscr{O}_{\overline{X},\mathbf{x}} \simeq \prod_{x\in \mathbf{x}} \kappa(x)
\]
is an isomorphism (in particular, surjective), an application of Nakayama's lemma shows that
\[
\textstyle h_3^*: \mathscr{O}_{V,h_3(\mathbf{x})} \simeq \mathscr{O}_{\overline{X},\mathbf{x}}=\mathscr{O}_{X,\mathbf{x}}. \qedhere
\]
\]
\end{proof}
\begin{pp}[Proof of \Cref{variant of Lindel's lem}]
Define $f\colonequals h_2\circ j:X\to \mathbb{A}_{\Lambda}^d$, which we may assume to be \'etale upon replacing $X$ by an affine open neighbourhood of $\mathbf{x}$.
By \Cref{isom of loc rings}, there exists an affine open neighbourhood $W_0'\subset V$ of $h_3(\mathbf{x})$ such that $W_0\colonequals h_3^{-1}(W_0') \subset j(X)$ and $h_3|_{W_0}:W_0\xrightarrow{\sim} W_0'$. We shall identify $W_0$ with an open subscheme of $X$ via $j$. As noted above, $v_2$ induces a closed immersion
\[
\overline{v}_2:Y'\colonequals \Spec (\Gamma(V,\mathscr{O}_{V})/(a))\hookrightarrow \mathbb{A}_{\Lambda}^d.
\]
In particular, the topology of $Y'$ is induced from that of $\mathbb{A}_{\Lambda}^d$ via $\overline{v}_2$. Note also that, since $a$ vanishes on $\mathbf{x}$, we have $h_3(\mathbf{x})\subset Y' \subset V$. Consequently, there exists an affine open neighbourhood $U\subset \mathbb{A}_{\Lambda}^d$ of $f(\mathbf{x})=v_2(h_3(\mathbf{x}))$ such that $\overline{v}_2^{-1}(U)\subset W_0'$. Therefore, $f$ induces a closed immersion of affine schemes
\[
Y_U\colonequals f^{-1}(U)\cap Y = (h_3\circ j)^{-1}(v_2^{-1}(U) \cap Y')=
(h_3\circ j)^{-1}(\overline{v}_2^{-1}(U))\simeq \overline{v}_2^{-1}(U) \hookrightarrow U.
\]
Since $f$ is separated and \'etale, any section of the base change $X\times_{\mathbb{A}_{\Lambda}^d,f}Y_U\to Y_U$, such as the section $s$ induced by the above inclusion $Y_U\hookrightarrow U$, is the inclusion of a clopen subscheme, so
\[
X\times_{\mathbb{A}_{\Lambda}^d,f}Y_U=\widetilde{Y}_1 \sqcup \widetilde{Y}_2 \qquad \text{ with } \qquad \widetilde{Y}_1=\text{im}(s)\xrightarrow{\sim} Y_U.
\]
\]
Let $W\subset f^{-1}(U)$ be an affine open whose preimage in $X\times_{\mathbb{A}_{\Lambda}^d,f}Y_U$ is $\widetilde{Y}_1$.
Then $f|_W:W\to U$ is \'etale, the restriction $f|_{W\cap Y}:W\cap Y \hookrightarrow U$ is a closed immersion, and $W\times_{U,f}(W\cap Y) \xrightarrow{\sim} W\cap Y$. As \'etale maps are open, we may shrink $U$ around $f(\mathbf{x})$ to ensure that $f|_W:W\to U$ is also surjective. \QED
\end{pp}
\section{Torsors on a smooth affine relative curve}
\leftarrowbel{section-torsors on sm aff curves}
In this section we prove the following result concerning the triviality of torsors on a smooth affine relative curve.
A similar result can also be found in the recent preprint \cite{Ces22b}*{Theorem~4.4}.
\begin{theorem}[Section theorem]\label{triviality on sm rel. affine curves}
For a semilocal domain $R$ with geometrically unibranch\footnote{According to \SP{0BPZ}, a local ring $A$ is \emph{geometrically unibranch} if its reduction $A_{\text{red}}\colonequals A/\sqrt{(0)}$ is a domain, and if the integral closure of $A_{\text{red}}$ in its fraction field is a local ring whose residue field is purely inseparable over that of $A$. By \SP{06DM}, $A$ is geometrically unibranch if and only if its strict Henselization $A^{\text{sh}}$ has a unique minimal prime.} local rings, a smooth affine $R$-curve $C$ with a section $s\in C(R)$, a reductive $C$-group scheme $G$, an $R$-algebra $A$, and a $G$-torsor $\mathcal{P}$ over $C_A\colonequals C\times_RA$ that trivializes over $C_A\backslash Z_A$ for an $R$-finite closed subscheme $Z\subset C$, if
\begin{enumerate}
\item \label{sec-thm-semilocal} either $A$ is semilocal,
\item \label{sec-thm-totally isotropic} or $s_A^*(G)$ is totally isotropic,
\end{enumerate}
then the pullback $s_A^*(\mathcal{P})$ is a trivial $s_A^*(G)$-torsor, where $s_A$ denotes the image of $s$ in $C_A(A)$.
\end{theorem}
To prove \Cref{triviality on sm rel. affine curves}, we first use \Cref{equating red gps} to reduce to the case when $G$ is the base change of a reductive $R$-group scheme, and then to the case when $C=\mathbb{A}^1_R$, see \Cref{1st reduction of sec thm}. As for the latter case, one can approach it via the geometry of affine Grassmannians.
\begin{pp}[Reduction to the case when $C=\mathbb{A}_R^1$ and $G$ is constant]
We start with the following result on equating reductive group schemes, which was already known to experts; see also \cite{Ces22b}*{Lemma~3.5}.
\end{pp}
\begin{lemma}[Equating reductive group schemes]\label{equating red gps}
For a semilocal ring $B$ with geometrically unibranch local rings, reductive $B$-group schemes $G_1$ and $G_2$ whose geometric $B$-fibers have the same type, maximal $B$-tori $T_1\subset G_1$ and $T_2\subset G_2$, and an ideal $I\subset B$, if there is an isomorphism of $B/I$-group schemes
\[
\text{$\iota:(G_1)_{B/I} \isoto (G_2)_{B/I}$ \quad such that \quad $\iota((T_1)_{B/I})=(T_2)_{B/I}$,}
\]
then there are a faithfully flat, finite, \'etale $B$-algebra $B'$, a section $s:B'\twoheadrightarrow B/I$, and an isomorphism of $B'$-group schemes $\iota': (G_1)_{B'} \simeq (G_2)_{B'}$ such that $\iota'((T_1)_{B'})=(T_2)_{B'}$ and whose $s$-pullback is $\iota$.
\end{lemma}
\begin{proof}
By \cite{SGA3IIInew}*{Exposé~XXIV, Corollaire~2.2}, the condition on geometric $B$-fibers ensures that
\[
\text{the functor \quad $X\colonequals \underline{\text{Isom}}_B((G_1,T_1),(G_2,T_2))$}
\]
parameterizing the isomorphisms of the pairs $(G_1,T_1)$ and $(G_2,T_2)$ is representable by a $B$-scheme and is an $H\colonequals \underline{\text{Aut}}_B((G_1,T_1))$-torsor.
We need to show that, for any $\iota\in X(B/I)$, there are a faithfully flat, finite, \'etale $B$-algebra $B'$, an $\iota'\in X(B')$, and a section $s:B'\twoheadrightarrow B/I$ such that $s(\iota')=\iota\in X(B/I)$.
By \emph{loc.~cit.}, $H$ is an extension of an \'etale locally constant $B$-group scheme by $T_1^{\text{ad}}$, the quotient of $T_1$ by the center of $G_1$. According to \cite{SGA3IIInew}*{Exposé~XXIV, Proposition~2.6}, $T_1^{\text{ad}}$ acts freely on $X$ and
\[
\text{the fppf quotient \quad $\overline{X}\colonequals X/T_1^{\text{ad}}$}
\]
is representable by a faithfully flat $B$-scheme that is \'etale locally constant over $B$.
As all local rings of $B$ are geometrically unibranch, by \cite{SGA3IIInew}*{Exposé~X, Corollaire~5.14}, every connected component of $\overline{X}$ is finite \'etale over $B$.
As the image of $\iota:\text{Spec}(B/I)\to X \to \overline{X}$ meets only finitely many connected components of $\overline{X}$, the union of these components is the spectrum of a finite \'etale $B$-algebra $A$, and there are an $\overline{\iota}\in \overline{X}(A)$ and a section $t: A\twoheadrightarrow B/I$ such that $t(\overline{\iota})=\iota$.
By adding more connected components of $\overline{X}$ to $\Spec A$ if needed, we may assume that $A$ is faithfully flat over $B$.
Consider the fiber product
\[
Y\colonequals X\times_{\overline{X},\overline{\iota}}\Spec A;
\]
it is a $(T_1^{\mathrm{ad}})_A$-torsor equipped with a point $\iota\in Y(A/J)\subset X(A/J)$, where $J\colonequals \ker(A\twoheadrightarrow B/I)$.
By \cite{Ces22}*{Corollary~6.3.2}, there are a faithfully flat, finite, \'etale $A$-algebra $B'$, a section $$s':B'\twoheadrightarrow A/J\simeq B/I,$$ and an $\iota'\in Y(B')\subset X(B')$ such that $s'(\iota')=\iota$.
\end{proof}
\begin{lemma}\label{1st reduction of sec thm}
The proof of \Cref{triviality on sm rel. affine curves} reduces to the case when $C=\mathbb{A}_R^1$ and $G$ is the base change of a reductive $R$-group scheme.
\end{lemma}
\begin{proof}
Let $B$ be the semilocal ring of $C$ at the closed points of $\text{im}(s) \cup Z$; its local rings are geometrically unibranch. By abuse of notation, we view $s:B \twoheadrightarrow R$ as a section of the $R$-algebra $B$. As $B$ is semilocal, by \cite{SGA3II}*{Exposé~XIV, Corollaire~3.20}, $G_B$ admits a maximal $B$-torus $T$.
Since the pullbacks of the pairs $(G_B,T)$ and $((s^*(G))_B,(s^*(T))_B)$ along $s$ agree, by \Cref{equating red gps}, there are a faithfully flat, finite, \'etale $B$-algebra $B'$, a section $s':B'\twoheadrightarrow R$ that lifts $s$, and a $B'$-isomorphism
$$
\iota:(G_{B'},T_{B'}) \isoto \left((s^*(G))_{B'},(s^*(T))_{B'}\right)
$$
whose $s'$-pullback is the identity.
We spread out $\Spec B' \to \Spec B$ to obtain a finite \'etale cover $C'\to U$ of a small affine open neighbourhood $U$ of $\text{im}(s) \cup Z$ in $C$.
Shrinking $U$ if necessary, we may assume that the isomorphism $\iota$ is defined over $C'$. In both cases of \Cref{triviality on sm rel. affine curves} we may replace $C$ by $C'$, $Z$ by $C'\times_CZ$, $s$ by $s'$, and $\mathcal{P}$ by $\mathcal{P}|_{C'_A}$ to reduce to the case when $G$ is the base change of the reductive $R$-group scheme $s^*(G)$.
Next, in order to apply glueing \cite{Ces22a}*{Lemma~7.1} to achieve that $C=\mathbb{A}_R^1$, we need to modify $C$ so that $Z$ embeds into $\mathbb{A}_R^1$.
For this, we first replace $Z$ by $Z\cup \text{im}(s)$ to assume that $s$ factors through $Z$. Then we apply Panin's `finite field tricks' \cite{Ces22a}*{Proposition~7.4} to obtain a finite morphism $\widetilde{C}\to C$ that is \'etale at the points of $\widetilde{Z}\colonequals \widetilde{C}\times_CZ$ such that $s$ lifts to $\widetilde{s}\in \widetilde{C}(R)$ and there is no finite-field obstruction to embedding $\widetilde{Z}$ into $\mathbb{A}_R^1$ in the following sense: for every maximal ideal $\mathfrak{m}\subset R$,
\[
\# \bigl\{
z\in \widetilde{Z}_{\kappa(\mathfrak{m})}:[\kappa(z):\kappa(\mathfrak{m})]=d \bigr\}< \# \bigl\{y\in \mathbb{A}_{\kappa(\mathfrak{m})}^1: [\kappa(y):\kappa(\mathfrak{m})]=d \bigr\} \quad \text{for every} \quad d\ge 1.
\]
Then, by \cite{Ces22a}*{Lemma~6.3}, there are an affine open $C''\subset \widetilde{C}$ containing $\text{im}(\widetilde{s})$ and a quasi-finite, flat $R$-map $C''\to \mathbb{A}_R^1$ that maps $\widetilde{Z}$ isomorphically onto a closed subscheme $Z'\subset \mathbb{A}_R^1$ with
\[
\widetilde{Z}\simeq Z' \times_{\mathbb{A}_R^1}C''.
\]
(Actually, $C''\to \mathbb{A}_R^1$ can be made \'etale by shrinking $C''$ around $\text{im}(\widetilde{s})$.)
In both cases of \Cref{triviality on sm rel. affine curves}, since $\mathcal{P}|_{C''_A}$
is a $G$-torsor that trivializes over $C''_A\backslash \widetilde{Z}_A$, we may use \cite{Ces22a}*{Lemma~7.1} to glue $\mathcal{P}|_{C''_A}$ with the trivial $G$-torsor over $\mathbb{A}_A^1$ to obtain a $G$-torsor $\mathcal{P}'$ over $\mathbb{A}_A^1$ that trivializes over $\mathbb{A}_A^1 \backslash Z'_A$. Let $s'\in \mathbb{A}_R^1(R)$ be the image of $\widetilde{s}$; then $s'^*(\mathcal{P}')\simeq s^*(\mathcal{P})$.
It remains to replace $C$ by $\mathbb{A}_R^1$, $Z$ by $Z'$, $s$ by $s'$, and $\mathcal{P}$ by $\mathcal{P}'$.
\end{proof}
\begin{pp}[Studying torsors on $\mathbb{P}_R^1$ via the affine Grassmannian]
The analysis of torsors on $\mathbb{A}_R^1$ ultimately depends on the geometry of affine Grassmannians. A nice summary of, and complement to, the relevant techniques can be found in \cite{Ces22}*{\S5.3}. In particular, we will use the following slight variant of \cite{Ces22}*{Proposition~5.3.6}, which in turn is a mild generalization of \cite{Fed22b}*{Theorem~6}.
\end{pp}
\textbfegin{eg}in{prop}t\leftarrowbel{main tech for torsor on A1}
For a semilocal ring $R$ with $\mathrm{connected}$
spectrum and a reductive $R$-group scheme $G$, let
\[
\textstyle G^{\mathrm{ad}}\simeq ^{\prime}od_{i} \mathrm{Res}_{R_i/R}(G_i)
\]
be the canonical decomposition of the adjoint quotient $G^{\mathrm{ad}}$ in \cite{SGA3IIInew}*{Exposé~XXIV, Proposition~5.10}, where $G_i$ is a simple adjoint $R_i$-group scheme, and $R_i$ is a finite, \'etale $R$-algebra with $\mathrm{connected}$ spectrum\mathfrak{o}otnote{To ensure that the $R_i$'s have connected spectra, we decompose $R_i\simeq ^{\prime}od_{j=1}^{n_i}R_{ij}$, where $R_{ij}$ have connected spectra, and then use the canonical isomorphism $\mathrm{Res}_{R_i/R}(G_i)\simeq ^{\prime}od_{j=1}^{n_i}\mathrm{Res}_{R_{ij}/R}(G_{i,R_{ij}})$.}.
Let $Y\subset \mathbb{A}_R^1$ be a $R$-finite, \'etale, closed subscheme with the following properties:
\textbfegin{eg}in{equation}numr
\item\leftarrowbel{main tech i} for every $i$, there is a clopen $Y_i\subset Y\times _R R_i$ such that $(G_i)_{Y_i}$ contains a copy of $\mathbb{G}_{\text{m},Y_i}$;
\item\leftarrowbel{main tech ii} $\mathscr{O}_{\mathbb{P}_{\kappa(\mathfrak{m})}^1}(1)$ is trivial on $\mathbb{P}_{\kappa(\mathfrak{m})}^1 \textbfegin{eg}in{equation}gin{aligned}ckslash (Y_i)_{\kappa(\mathfrak{m})}$ for each maximal ideal $\mathfrak{m} \subset R_i$ such that $(G_i)_{\kappa(\mathfrak{m})}$ is isotropic;
\item\leftarrowbel{main tech iii} $\mathscr{O}_{\mathbb{P}_R^1}(1)$ is trivial on $\mathbb{P}_R^1 \textbfegin{eg}in{equation}gin{aligned}ckslash Y$.
\end{equation}num
Let $\mathcal{P}$ be a $G$-torsor over $\mathbb{P}_R^1$ that trivializes over $\mathbb{P}_R^1\textbfegin{eg}in{equation}gin{aligned}ckslash Z$ for some $R$-finite closed subscheme $Z\subset \mathbb{A}_R^1\textbfegin{eg}in{equation}gin{aligned}ckslash Y$. Assume that for every maximal ideal $\mathfrak{m} \subset R$ the $G^{\mathrm{ad}}$-torsor over $\mathbb{P}_{\kappa(\mathfrak{m})}^1$ induced by $\mathcal{P}$ lifts to a $\mathrm{generically}$ $\mathrm{trivial}$ $(G^{\mathrm{ad}})^{\mathrm{sc}}$-torsor over $\mathbb{P}_{\kappa(\mathfrak{m})}^1$. Then the restriction $\mathcal{P}|_{\mathbb{P}_R^1 \textbfegin{eg}in{equation}gin{aligned}ckslash Y}$ is trivial.
\end{prop}
By \cite{SGA3IIInew}*{Exposé~XXVI, Corollaire~6.12}, assumption (i) is equivalent to the condition that the base change of $(G_i)_{Y_i}$ to every connected component of $Y_i$ contains a proper parabolic subgroup.
For instance, if $G$ is quasi-split, we can take $Y_i=Y\times_RR_i$ to ensure (i); in practice, we achieve (i) by guaranteeing that the base changes of $(G_i)_{Y_i}$ to the connected components of $Y_i$ contain proper parabolic subgroups. For (ii), we can take $Y_i$ so that $Y_i(\kappa(\mathfrak{m}))\neq \emptyset$ for every maximal ideal $\mathfrak{m}\subset R_i$ with $(G_i)_{\kappa(\mathfrak{m})}$ isotropic. For (iii), we simply choose $Y$ so that it contains finite \'etale $R$-schemes of degrees $d$ and $d+1$ for some $d\ge 1$: then $\mathscr{O}(d)$ and $\mathscr{O}(d+1)$ are both trivial on $\mathbb{P}_R^1\setminus Y$, and so is $\mathscr{O}(1)$.
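The last step is the usual bootstrapping of degrees of line bundles: writing
\[
\mathscr{O}_{\mathbb{P}_R^1}(1)\;\simeq\;\mathscr{O}_{\mathbb{P}_R^1}(d+1)\otimes \mathscr{O}_{\mathbb{P}_R^1}(d)^{\otimes (-1)},
\]
we see that the triviality of $\mathscr{O}(d)$ and $\mathscr{O}(d+1)$ on $\mathbb{P}_R^1\setminus Y$ forces that of $\mathscr{O}(1)$ there.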
\begin{proof}
We will deduce \Cref{main tech for torsor on A1} from (the proof of) a particular case of \cite{Ces22}*{Proposition~5.3.6}. (We note that assumption (ii) of \emph{loc.~cit.} should read as `$(G_i)_{Y_i}$ contains a copy of $\mathbb{G}_{m,Y_i}$', as its proof shows.)
The $R$-finite \'etale $Y$ is the vanishing locus of a monic polynomial $t$ in the standard coordinate of $\mathbb{A}_R^1$; namely, $t$ is the characteristic polynomial of this standard coordinate acting on $\widetilde{R}:=\Gamma(Y,\mathscr{O}_Y)$. The formal completion of $\mathbb{P}_R^1$ along $Y$ has coordinate ring $\widetilde{R}\ps{t}$. Recall that, by formal glueing, a $G$-torsor over $\mathbb{P}_R^1$ can be viewed as the glueing of its restrictions to $\mathbb{P}_R^1\setminus Y$ and to $\widetilde{R}\ps{t}$ along the `intersection' $\widetilde{R}\lps{t}$; since our torsor $\mathcal{P}$ is trivial over an open neighbourhood $U\subset \mathbb{P}_R^1$ of $Y$, both restrictions $\mathcal{P}|_{U\setminus Y}$ and $\mathcal{P}|_{\widetilde{R}\ps{t}}$ are trivial, and once a trivialization of the former is chosen, all such glueings are parameterized by elements of $G(\widetilde{R}\lps{t})/G(\widetilde{R}\ps{t})$. In particular, since $G(\widetilde{R}\lps{t})$ acts on $G(\widetilde{R}\lps{t})/G(\widetilde{R}\ps{t})$ via left multiplication, an element of $G(\widetilde{R}\lps{t})$ yields a modification of $\mathcal{P}$ along $Y$: it is the $G$-torsor over $\mathbb{P}_R^1$ whose restrictions to $\mathbb{P}_R^1\setminus Y$ and to $\widetilde{R}\ps{t}$ are the same as those of $\mathcal{P}$, but whose glueing, viewed as an element of $G(\widetilde{R}\lps{t})/G(\widetilde{R}\ps{t})$, differs by left translation by the chosen element of $G(\widetilde{R}\lps{t})$.
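Concretely, with a trivialization of $\mathcal{P}$ near $Y$ fixed as above: if $\mathcal{P}$ corresponds to the glueing class $g\,G(\widetilde{R}\ps{t})\in G(\widetilde{R}\lps{t})/G(\widetilde{R}\ps{t})$, then its modification along $Y$ by $\alpha\in G(\widetilde{R}\lps{t})$ is the $G$-torsor glued via the class
\[
\alpha\, g\,G(\widetilde{R}\ps{t})\;\in\; G(\widetilde{R}\lps{t})/G(\widetilde{R}\ps{t}).
\]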
Denote by $\mathcal{P}^{\text{ad}}$ the $G^{\text{ad}}$-torsor over $\mathbb{P}_R^1$ induced by $\mathcal{P}$. Since the formation of $H^1(\mathbb{P}_R^1,-)$ commutes with taking products, $\mathcal{P}^{\text{ad}}$ corresponds to a collection $(\mathcal{P}^{\text{ad}}_i)$, where $\mathcal{P}^{\text{ad}}_i$ is a $\mathrm{Res}_{R_i/R}(G_i)$-torsor over $\mathbb{P}_R^1$ satisfying the analogues of the assumptions \ref{main tech i}--\ref{main tech iii} of \Cref{main tech for torsor on A1}. Since $R\to R_i$ is finite \'etale and $G_i$ is $R_i$-smooth, we have $R^1f_*G_i=1$ for the map $f\colon \Spec R_i\to \Spec R$ induced by $R\to R_i$.
We have the following exact sequence of nonabelian pointed sets from \cite{Gir71}*{Chapitre~V, Proposition~3.1.3}:
\[
1\to H^1(\mathbb{P}_R^1,\mathrm{Res}_{R_i/R}(G_i))\to H^1(\mathbb{P}_{R_i}^1, G_i) \to H^0(\mathbb{P}_R^1,R^1f_*G_i).
\]
Thus $\mathcal{Q} \mapsto \mathrm{Res}_{R_i/R}(\mathcal{Q})$ defines a bijection of pointed sets $H^1(\mathbb{P}_{R_i}^1, G_i) \isoto H^1(\mathbb{P}_R^1,\mathrm{Res}_{R_i/R}(G_i))$. In particular, each $\mathcal{P}^{\text{ad}}_i$ corresponds to a $G_i$-torsor $\mathcal{Q}_i$ over $\mathbb{P}_{R_i}^1$, and the assumptions \ref{main tech i}--\ref{main tech iii} of \Cref{main tech for torsor on A1} for the $\mathrm{Res}_{R_i/R}(G_i)$-torsor $\mathcal{P}^{\text{ad}}_i$ translate into the assumptions (i)--(iv) of \cite{Ces22}*{Proposition~5.3.6} for the $G_i$-torsor $\mathcal{Q}_i$ over $\mathbb{P}_{R_i}^1$.
By the proof of \emph{loc.~cit.} (using the condition that, over the closed fibers of $\mathbb{P}^1_R$, the simply-connected lifting of the torsor induced by $\mathcal{P}$ is generically trivial), for some
\[
\alpha_i\in \mathrm{im}\bigl(G_i^{\text{sc}} ((\widetilde{R}\otimes_RR_i)\lps{t})\to G_i((\widetilde{R}\otimes_RR_i)\lps{t})\bigr),
\]
the corresponding modification of $\mathcal{Q}_i$ along $Y\times_RR_i$ is trivial. We can view the element
\[
\alpha\colonequals (\alpha_i) \in \mathrm{im} \bigl((G^{\mathrm{ad}})^{\text{sc}}(\widetilde{R}\lps{t}) \to G^{\mathrm{ad}}(\widetilde{R}\lps{t}) \bigr);
\]
as $(G^{\mathrm{ad}})^{\text{sc}} \to G^{\mathrm{ad}}$ factors through $(G^{\mathrm{ad}})^{\text{sc}} \to G$, $\alpha$ lifts to $\widetilde{\alpha}\in G(\widetilde{R}\lps{t})$.
Denote by $\mathcal{Q}$ the modification of $\mathcal{P}$ along $Y$ using $\widetilde{\alpha}$.
By construction, the $G^{\text{ad}}$-torsor $\mathcal{Q}^{\text{ad}}$ over $\mathbb{P}_R^1$ induced by $\mathcal{Q}$ corresponds to the collection of modifications of the $\mathcal{P}^{\text{ad}}_i=\mathrm{Res}_{R_i/R}(\mathcal{Q}_i)$ along $Y$ using $\alpha_i \in G_i((\widetilde{R}\otimes_RR_i)\lps{t})= \mathrm{Res}_{R_i/R}(G_i)(\widetilde{R}\lps{t})$, and these modifications are trivial. Hence $\mathcal{Q}^{\text{ad}}$ is trivial, to the effect that $\mathcal{Q}$ reduces to a torsor over $\mathbb{P}_R^1$ under the center $Z_G$ of $G$.
Now, as shown in the last paragraph of the proof of \cite{Ces22}*{Proposition~5.3.6}, any $Z_G$-torsor over $\mathbb{P}_R^1$ is isomorphic to the sum of a constant torsor (i.e., the pullback of a $Z_G$-torsor over $R$) and $\lambda_*(\mathscr{O}(1))$ for a unique cocharacter $\lambda$ of $Z_G$. Therefore, by assumption \ref{main tech iii}, $\mathcal{Q}|_{\mathbb{P}_R^1 \setminus Y}$ is a constant torsor, and, by checking along the infinity section, it is even trivial; hence so is $\mathcal{P}|_{\mathbb{P}_R^1 \setminus Y}=\mathcal{Q}|_{\mathbb{P}_R^1 \setminus Y}$, as desired.
\end{proof}
The following lemma helps us construct the desired $R$-finite, \'etale schemes $Y_i$ and $Y$ of \Cref{main tech for torsor on A1}.
\begin{lemma}\label{lem on isotropicity}
Let $R\rightarrow R_1$ be a finite \'etale ring map of semilocal rings with connected spectra, let $W \subset \mathbb{A}_R^1$ be an $R$-finite closed scheme, and let $G_1$ be a simple $R_1$-group scheme.
There is an $R_1$-finite \'etale scheme $Y_1$ and a closed immersion $Y_1 \hookrightarrow \mathbb{A}_R^1\setminus W$ over $R$ such that $(G_1)_{Y_1}$ contains a copy of $\mathbb{G}_{\text{m},Y_1}$ and, for every maximal ideal $\mathfrak{m} \subset R_1$ with $(G_1)_{\kappa(\mathfrak{m})}$ isotropic, the line bundle $\mathscr{O}_{\mathbb{P}_{\kappa(\mathfrak{m})}^1}(1)$ is trivial over $\mathbb{P}_{\kappa(\mathfrak{m})}^1 \setminus (Y_1)_{\kappa(\mathfrak{m})}$.\footnote{Note that $Y_1$ is a clopen of $Y_1\times_RR_1$, and thus naturally embeds into $\mathbb{A}_{R_1}^1$.}
In addition, there is an $R$-finite, \'etale scheme $\widetilde{Y}$ and a closed immersion $\widetilde{Y} \hookrightarrow \mathbb{A}_R^1\setminus W$ over $R$ such that the line bundle $\mathscr{O}_{\mathbb{P}_R^1}(1)$ is trivial over $\mathbb{P}_R^1 \setminus \widetilde{Y}$.
\end{lemma}
\begin{proof}
Let $\mathrm{Par}' \to \Spec R_1$ be the scheme parameterizing \emph{proper} parabolic subgroup schemes of the reductive $R_1$-group scheme $G_1$; it is smooth and projective over $R_1$ (\cite{SGA3IIInew}*{Exposé~XXVI, Corollaire~3.5}). Fix an embedding $\mathrm{Par}' \hookrightarrow \mathbb{P}_{R_1}^N$ over $R_1$. Write $\mathrm{Par}'=\bigsqcup_{i=1}^t P_i$ as the disjoint union of its connected components; every $P_i$ has a constant relative dimension $d_i$ over $R_1$. For every maximal ideal $\mathfrak{m} \subset R_1$ with $(G_1)_{\kappa(\mathfrak{m})}$ isotropic, a proper parabolic subgroup of $(G_1)_{\kappa(\mathfrak{m})}$ gives a point $b_{\mathfrak{m}}\in \mathrm{Par}'(\kappa(\mathfrak{m}))$.
Fix an $i\in\{1,\dots, t\}$. For every maximal ideal $\mathfrak{m} \subset R_1$, by Bertini theorems (including Poonen's version \cite{Poo04}*{Theorem~1.2} over finite fields), one can find a hypersurface in $\mathbb{P}_{\kappa(\mathfrak{m})}^N$ of large enough degree that passes through the point $b_{\mathfrak{m}}$ whenever it lies in $P_i$ and has smooth intersection with $(P_i)_{\kappa(\mathfrak{m})}$.
We may assume that the above hypersurfaces have the same degree for all $\mathfrak{m}$. By the Chinese remainder theorem, one can lift them simultaneously to get a hypersurface $H\subset \mathbb{P}_{R_1}^N$. Then $H\cap P_i$ is a smooth projective $R_1$-scheme of pure relative dimension $d_i-1$, and $b_{\mathfrak{m}}\in H\cap P_i$ whenever $b_{\mathfrak{m}}\in P_i$. The same argument can be applied to the hypersurface section $H\cap P_i$.
Continuing in this way, we finally arrive at an $R_1$-finite, \'etale, closed subscheme $Y_i \subset P_i$ such that $b_{\mathfrak{m}}\in Y_i$ whenever $b_{\mathfrak{m}}\in P_i$. Denote $Y_1^{\prime} \colonequals \bigsqcup_{i=1}^t Y_i$.
Unfortunately, $Y_1^{\prime}$ may not embed into $\mathbb{A}_R^1\setminus W$, so we first modify $Y_1'$ by using Panin's `finite field tricks'.
For large enough integers $d>0$, we can choose a monic polynomial $h_{\mathfrak{n}' }\in \kappa(\mathfrak{n}')[u]$ of degree $2d+1$ for each maximal ideal $\mathfrak{n}' \subset \Gamma(Y_1',\mathscr{O}_{Y_1'})$ such that:
\begin{itemize}
\item [(1)] if $\kappa(\mathfrak{n}')$ is finite, then $h_{\mathfrak{n}'}$ is a product of two irreducible polynomials of degrees $d$ and $d+1$, respectively\footnote{This follows from the fact that, for a finite field $k$, there are asymptotically $(\# \,k)^d/d$ points on $\mathbb{A}_k^1$ of exact degree $d$.};
\item [(2)] if $\kappa(\mathfrak{n}')$ is infinite, then $h_{\mathfrak{n}'}$ is a separable polynomial with at least one root in $\kappa(\mathfrak{n}')$.
\end{itemize}
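The count behind (1) is classical: by Gauss's formula, the number of monic irreducible polynomials of degree $d$ over a finite field $k$ equals
\[
\frac{1}{d}\sum_{e\mid d}\mu(e)\,(\#\,k)^{d/e}\;=\;\frac{(\#\,k)^d}{d}+O\bigl((\#\,k)^{d/2}\bigr),
\]
which is positive for every $d\ge 1$; in particular, the required irreducible factors of degrees $d$ and $d+1$ exist.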
Let $h\in \Gamma(Y_1',\mathscr{O}_{Y_1'})[u] $ be a common monic lifting of $h_{\mathfrak{n}' }$ for all $\mathfrak{n}' \subset \Gamma(Y_1',\mathscr{O}_{Y_1'})$.
Define
\[
\textstyle Y_1\colonequals \Spec \bigl(\Gamma(Y_1',\mathscr{O}_{Y_1'})[u]/(h)\bigr);
\]
it is finite, \'etale over $Y_1'$, and hence also over $R_1$.
By (1), for each maximal ideal $\mathfrak{n}\subset R$ with finite residue field, $(Y_1)_{\kappa(\mathfrak{n})}$ has at most $2\,\mathrm{deg}(Y_1'/R)$ points, each of degree $\ge d$ over $\mathfrak{n}$, so there is no finite-field obstruction to embedding $Y_1$ into $\mathbb{A}_R^1\setminus W$ over $R$ in the following sense: for every maximal ideal $\mathfrak{n}\subset R$,
\[
\# \left\{
z\in (Y_1)_{\kappa(\mathfrak{n})}:[\kappa(z):\kappa(\mathfrak{n})]=e \right\}\le
\# \left\{z\in \mathbb{A}_{\kappa(\mathfrak{n})}^1\setminus W: [\kappa(z):\kappa(\mathfrak{n})]=e \right\} \quad \text{for every} \quad e\ge 1,
\]
because the left-hand side is zero for $e<d$ and is uniformly bounded for $e\ge d$, while the right-hand side is asymptotically $(\# \,\kappa(\mathfrak{n}))^e/e$, which tends to infinity as $e$ grows.
Consequently, for such $d$, there exists
\[
\textstyle \text{a closed immersion \quad $\bigsqcup_{\mathfrak{n} \subset R} (Y_1)_{\kappa(\mathfrak{n})} \hookrightarrow \mathbb{A}_{R}^1\setminus W $ \quad over $R$; }
\]
by Nakayama's lemma, any lifting $Y_1 \hookrightarrow \mathbb{A}_{R}^1\setminus W$ of it over $R$ (which exists since $Y_1$ is affine) is also a closed immersion. By construction, the restriction of $(G_1)_{Y_1'}$ to every connected component of $Y_1'$ contains a proper parabolic subgroup scheme, so, by \cite{SGA3IIInew}*{Exposé~XXVI, Corollaire~6.12}, $(G_1)_{Y_1^{\prime}}$ contains $\mathbb{G}_{m,Y_1'}$, and hence $(G_1)_{Y_1}$ contains $\mathbb{G}_{m,Y_1}$.
For every maximal ideal $\mathfrak{m}\subset R_1$ with $(G_1)_{\kappa(\mathfrak{m})}$ isotropic, by construction, the point $\mathfrak{n}^{\prime} \colonequals b_{\mathfrak{m}}$ lies in $Y_1'$ and satisfies $\kappa(\mathfrak{m})=\kappa(\mathfrak{n}^{\prime})$.
Thus, in case (1) (resp., in case (2)), $(Y_1)_{\kappa(\mathfrak{m})}$ contains points of both degrees $d$ and $d+1$ (resp., a point of exact degree $1$) over $\mathfrak{m}$, so the line bundles $\mathscr{O}_{\mathbb{P}_{\kappa(\mathfrak{m})}^1}(d)$ and $\mathscr{O}_{\mathbb{P}_{\kappa(\mathfrak{m})}^1}(d+1)$ are trivial over $\mathbb{P}_{\kappa(\mathfrak{m})}^1 \setminus (Y_1)_{\kappa(\mathfrak{m})}$, hence so is $\mathscr{O}_{\mathbb{P}_{\kappa(\mathfrak{m})}^1}(1)$.
To construct $\widetilde{Y}$, it suffices to produce, for a large $d$, $R$-finite, \'etale, closed subschemes $\widetilde{Y}_1, \widetilde{Y}_2\subset \mathbb{A}_R^1$ of $R$-degrees $d$ and $d+1$ that are disjoint from $W$, and then take $\widetilde{Y}\colonequals \widetilde{Y}_1\sqcup \widetilde{Y}_2$. To achieve this, one just needs to imitate the above procedure for constructing $Y_1$ from $Y_1'$; we omit the details.
\end{proof}
\begin{pp}[Proof of \Cref{triviality on sm rel. affine curves}]
\Cref{1st reduction of sec thm} reduces us to the case when $C=\mathbb{A}_R^1$ and $G$ is a reductive $R$-group scheme.
Up to shifting, we may assume that $s=0_R\in \mathbb{A}_R^1(R)$ is the zero section, and base changing to $A$ reduces us further to the case $A=R$, at the cost that $R$ need no longer be a domain or geometrically unibranch. Thus, in case \ref{sec-thm-semilocal}, our $R$ is semilocal, and, in case \ref{sec-thm-totally isotropic}, our $G$ is totally isotropic.
For both cases \ref{sec-thm-semilocal}--\ref{sec-thm-totally isotropic}, by glueing $\mathcal{P}$ with the trivial $G$-torsor over $\mathbb{P}_R^1\setminus Z$ we extend $\mathcal{P}$ to a $G$-torsor $\mathcal{Q}$ over $\mathbb{P}_R^1$. By \cite{Fed22b}*{Proposition~2.3} or \cite{Ces22}*{Lemma~5.3.5}, up to replacing $\mathcal{Q}$ and $Z$ by their pullbacks under $\mathbb{P}_R^1\to \mathbb{P}_R^1$, $t\mapsto t^d$, where $d$ is divisible by the $R$-fibral degrees of the simply-connected central cover $(G^{\text{ad}})^{\text{sc}}\to G^{\text{ad}}$, we may assume that for every maximal ideal $\mathfrak{m} \subset R$ the $G^{\text{ad}}$-torsor over $\mathbb{P}_{\kappa(\mathfrak{m})}^1$ induced by $\mathcal{Q}$ lifts to a generically trivial $(G^{\text{ad}})^{\text{sc}}$-torsor over $\mathbb{P}_{\kappa(\mathfrak{m})}^1$.
\end{pp}
\begin{claim}\label{lem on trivialize ouside Y}
In both cases \ref{sec-thm-semilocal}--\ref{sec-thm-totally isotropic}, assume that $R$ is semilocal. For any $R$-finite closed subscheme $W_0\subset \mathbb{A}_R^1$, there exists an $R$-finite, \'etale, closed subscheme $Y\subset \mathbb{A}_R^1 \setminus W_0$ such that $\mathcal{Q}|_{\mathbb{P}_R^1 \setminus Y}$ is trivial.
\end{claim}
\begin{proof}[Proof of the claim]
Restricting to each connected component, we may assume that $\Spec R$ is connected. Write the canonical decomposition of $G^{\text{ad}}$ as in \Cref{main tech for torsor on A1}.
Replace $W_0$ by $W_0\cup Z$ to assume that $Z\subset W_0$. Applying Lemma \ref{lem on isotropicity} separately to each simple $R_i$-group scheme $G_i$ (with appropriate choices of $W$'s), we get $R_i$-finite, \'etale schemes $Y_i$ such that each $(G_i)_{Y_i}$ contains a copy of $\mathbb{G}_{\text{m},Y_i}$, and a closed immersion $\bigsqcup_i Y_i \hookrightarrow \mathbb{A}_R^1 \setminus W_0$ over $R$ such that, for every maximal ideal $\mathfrak{m} \subset R_i$ with $(G_i)_{\kappa(\mathfrak{m})}$ isotropic, the line bundle $\mathscr{O}_{\mathbb{P}_{\kappa(\mathfrak{m})}^1}(1)$ is trivial over $\mathbb{P}_{\kappa(\mathfrak{m})}^1 \setminus (Y_i)_{\kappa(\mathfrak{m})}$.
Applying the second part of Lemma \ref{lem on isotropicity} to $W\colonequals (\bigsqcup_i Y_i) \sqcup W_0$, we obtain an $R$-finite, \'etale, closed subscheme
\[
\textstyle Y'\subset \mathbb{A}_R^1\setminus \bigl((\bigsqcup_i Y_i) \sqcup W_0\bigr)
\]
such that $\mathscr{O}_{\mathbb{P}_R^1}(1)$ is trivial over $\mathbb{P}_R^1 \setminus Y'$. Denote $Y\colonequals Y^{\prime} \sqcup (\bigsqcup_i Y_i)$. Then all the assumptions \ref{main tech i}--\ref{main tech iii} of \Cref{main tech for torsor on A1} are verified, so we conclude that $\mathcal{Q}|_{\mathbb{P}_R^1 \setminus Y}$ is trivial.
\end{proof}
For \ref{sec-thm-semilocal}, we take $W_0=Z\cup 0_R$; then \Cref{lem on trivialize ouside Y} gives an $R$-finite, \'etale, closed subscheme $Y\subset \mathbb{A}_R^1 \setminus W_0$ such that $\mathcal{Q}|_{\mathbb{P}_R^1 \setminus Y}$ is trivial.
Since $Y\cap 0_R =\emptyset$, we deduce that the pullback of $\mathcal{Q}$ along $s=0_R$ is also trivial, as wanted.
For \ref{sec-thm-totally isotropic}, we will follow \cite{Ces22b}*{Lemma~4.3} to show that both $\mathcal{P}=\mathcal{Q}|_{\mathbb{A}_R^1}$ and $\mathcal{Q}|_{\mathbb{P}_R^1 \setminus 0_R}$ descend to $G$-torsors over $R$; then we are done: both of these descended torsors agree with the restriction of $\mathcal{Q}$ along $1_R\in \mathbb{A}_R^1(R)$, so they agree with the restriction of $\mathcal{Q}$ along $\infty_R$, which is trivial, and hence they must be trivial. By Quillen patching \cite{Ces22}*{Corollary~5.1.5~(b)}, for the descent claim we may replace $R$ by its localizations at maximal ideals to assume that $R$ is local.
Now, since $R$ is local, we apply \Cref{lem on trivialize ouside Y} to $W_0=0_R$ to get an $R$-finite, \'etale, closed subscheme $Z'\subset \mathbb{A}_R^1 \setminus 0_R$ such that $\mathcal{Q}|_{\mathbb{P}_R^1\setminus Z'}$ is trivial. It remains to apply \Cref{main tech for torsor on A1} twice, using that $G$ is totally isotropic, with $Y=0_R$ (resp., $Y=\infty_R$) and $Y_i=Y\times_{R}R_i$, to show that both $\mathcal{Q}|_{\mathbb{P}_R^1\setminus 0_R}$ and $\mathcal{Q}|_{\mathbb{P}_R^1 \setminus \infty_R}$ are trivial. \QED
\section{Torsors under a reductive group scheme over a smooth projective base}
\label{sect-torsor on sm proj base}
The main result of this section is the following:
\begin{thm}\label{torsors-Sm proj base}
For a semilocal Pr\"{u}fer domain $R$, an $r\in R\setminus \{0\}$, an irreducible, smooth, projective $R$-scheme $X$, a finite subset $\textbf{x} \subset X$ with semilocal ring $A\colonequals \mathscr{O}_{X,\textbf{x}}$, and a reductive $X$-group scheme $G$,
\begin{enumerate}
\item \label{loc-gen-trivial-sm-proj} any generically trivial $G$-torsor over $A$ is trivial, that is,
\[
\ker\,(H^1(A,G)\rightarrow H^1(\Frac A, G))=\{\ast\};
\]
\item \label{Nis-sm-proj} if $G_{A[\f{1}{r}]}$ is totally isotropic, then any generically trivial $G$-torsor over $A[\f{1}{r}]$ is trivial, that is,
\[
\textstyle \ker\,(H^1(A[\f{1}{r}],G)\rightarrow H^1(\Frac A,G))=\{\ast\}.
\]
\end{enumerate}
\end{thm}
The case (i) is a version of the Grothendieck--Serre conjecture in the case when the relevant reductive group scheme $G_A$ has a reductive model over some smooth projective compactification of $\Spec A$. The case (ii) provides a version of the Nisnevich conjecture for such `nice' reductive groups satisfying the total isotropicity assumption: if $R$ is a discrete valuation ring with uniformizer $r$ and if $R\to A$ is a local homomorphism of local rings, then $r\in \mathfrak{m}_A\setminus \mathfrak{m}_A^2$, and (ii) says that any generically trivial $G$-torsor over $A[\f{1}{r}]$ is trivial (the isotropicity assumption on $G_{A[\f{1}{r}]}$ is essential; see, for instance, \cite{Fed21}).
\begin{rem}
An inspection of the proof below shows that \Cref{torsors-Sm proj base} still holds provided that $X$ is only a flat projective $R$-scheme such that $X\setminus X^{\text{sm}}$ is $R$-fiberwise of codimension $\ge 2$ in $X$, $\textbf{x}\subset X^{\text{sm}}$, and $G$ is a reductive $X^{\text{sm}}$-group scheme, where $X^{\text{sm}}$ denotes the smooth locus of $X\to \Spec R$.
\end{rem}
To prove \Cref{torsors-Sm proj base}, we first derive from \Cref{extend generically trivial torsors} and \Cref{Ces's Variant 3.7} the following key result, which reduces the proof of \Cref{torsors-Sm proj base} to studying torsors on a smooth affine relative curve.
\begin{lemma}\label{nicely spread out lem}
For a semilocal Pr\"{u}fer domain $R$ of finite Krull dimension, an irreducible, smooth, projective $R$-scheme $X$ of pure relative dimension $d> 0$, a finite subset $\textbf{x} \subset X$, and a reductive $X$-group scheme $G$, the following assertions hold.
\begin{enumerate}
\item \label{GS-nicely spread out} Given a generically trivial $G$-torsor $\mathcal{P}$ over $A\colonequals \mathscr{O}_{X,\textbf{x}}$, there are
\begin{itemize}
\item [-] a smooth, affine $A$-curve $C$, an $A$-finite closed subscheme $Z\subset C$, and a section $s\in C(A)$;
\item [-] a reductive $C$-group scheme $\mathscr{G}$ satisfying $s^{\ast}\mathscr{G}\simeq G_A$, and a $\mathscr{G}$-torsor $\mathcal{F}$ over $C$ such that $\mathcal{F}|_{C\setminus Z}$ is trivial and $s^{\ast}\mathcal{F} \simeq \mathcal{P}$.
\end{itemize}
\item \label{Nis-nicely spread out} Given an $r\in R\setminus \{0\}$ and a generically trivial $G$-torsor $\widetilde{\mathcal{P}}$ over $A[\f{1}{r}]$, there are
\begin{itemize}
\item [-] a smooth, affine $A$-curve $C$, an $A$-finite closed subscheme $Z\subset C$, and a section $s\in C(A)$;
\item [-] a reductive $C$-group scheme $\mathscr{G}$ such that $s^{\ast}\mathscr{G}\simeq G_A$, and a $\mathscr{G}$-torsor $\widetilde{\mathcal{F}}$ over $C[\f{1}{r}]\colonequals C\times_AA[\f{1}{r}]$ such that $\widetilde{\mathcal{F}}|_{C[\f{1}{r}]\setminus Z[\f{1}{r}]}$ is trivial and $(s|_{A[\f{1}{r}]})^{\ast}(\widetilde{\mathcal{F}}) \simeq \widetilde{\mathcal{P}}$.
\end{itemize}
\end{enumerate}
\end{lemma}
\begin{proof}
By \Cref{extend generically trivial torsors}, $\mathcal{P}$ (resp., $\widetilde{\mathcal{P}}$) extends to a $G$-torsor $\mathcal{P}_0$ (resp., $\widetilde{\mathcal{P}_0}$) over an open neighbourhood $W\subset X$ of $\textbf{x}$ (resp., an open neighbourhood $\widetilde{W}\subset X$ of $\Spec (A[\f{1}{r}])$) such that
\[
\text{$\codim((X\setminus W)_K, X_K)\geq 3$ \quad and\quad $\codim((X\setminus W)_s,X_s)\geq 2$ for all $s\in \Spec (R)$};
\]
and
\[
\text{$\codim((X\setminus \widetilde{W})_K, X_K)\geq 3$ \quad and\quad $\codim((X\setminus \widetilde{W})_s,X_s)\geq 2$ for all $s\in \Spec (R)$.}
\]
Here, $K$ is the fraction field of $R$. Let $\textbf{z}\subset X$ be the set of maximal points of the $R$-fibers of $X$; the above codimension bounds imply $\textbf{z}\subset W$ (resp., $\textbf{z}\subset \widetilde{W}$). By \Cref{geom}\ref{geo-iii}, the semilocal ring $\mathscr{O}_{X,\textbf{z}}$, and hence also $\mathscr{O}_{X,\textbf{z}}[\f{1}{r}]$, is a Pr\"{u}fer domain. By Grothendieck--Serre over semilocal Pr\"{u}fer schemes (\Cref{G-S over semi-local prufer}), the generically trivial $G$-torsor $(\mathcal{P}_0)|_{\mathscr{O}_{X,\textbf{z}}}$ (resp., $(\widetilde{\mathcal{P}_0})|_{\mathscr{O}_{X,\textbf{z}}[\f{1}{r}]}$) is actually trivial. Thus there exists a closed subscheme $Y\subset X$ (resp., $\widetilde{Y}\subset X$) that avoids all the maximal points of the $R$-fibers of $X$ such that the restriction $(\mathcal{P}_0)|_{X\setminus Y}$ (resp., $(\widetilde{\mathcal{P}_0})|_{(X\setminus \widetilde{Y})[\f{1}{r}]}$) is trivial; such a $Y$ (resp., $\widetilde{Y}$) is $R$-fiberwise of codimension $>0$ in $X$.
Now, we treat the two cases \ref{GS-nicely spread out}--\ref{Nis-nicely spread out} separately.
For \ref{GS-nicely spread out}, by the above, $X\setminus W$ is $R$-fiberwise of codimension $\ge 2$ in $X$; \emph{a fortiori}, the same codimension bound holds for $Y\setminus W$ in $X$. Consequently, we can apply \Cref{Ces's Variant 3.7} to obtain an affine open $S\subset \mathbb{A}_{R}^{d-1}$, an affine open neighbourhood $U\subset W$ of $\textbf{x}$, and a smooth morphism $\pi\colon U\to S$ of pure relative dimension $1$ such that $U\cap Y$ is $S$-finite.
Let $\tau\colon C\colonequals U\times_S\Spec A\to \Spec A$ be the base change of $\pi$ to $\Spec A$. Let $Z$ and $\mathcal{F}$ be the pullbacks of $U\cap Y$ and $(\mathcal{P}_0)|_U$ under $\mathrm{pr}_1\colon C\to U$, respectively. Then, via $\tau$, $C$ is a smooth affine $A$-curve, $Z\subset C$ is an $A$-finite closed subscheme, and $\mathcal{F}$ is a $\mathscr{G}\colonequals \mathrm{pr}_1^*(G_U)$-torsor that trivializes over $C\setminus Z$.
Finally, the diagonal in $C$ induces a section $s\in C(A)$ with $s^{\ast}\mathcal{F}\simeq \mathcal{P}$ (as torsors under $s^{\ast}\mathscr{G}=G_A$).
For \ref{Nis-nicely spread out}, since $\Spec (A[\f{1}{r}])$ consists of the points of $X[\f{1}{r}]\colonequals X\times_RR[\f{1}{r}]$ that specialize to some point of $\textbf{x}$, we deduce from the inclusion $\Spec A[\f{1}{r}] \subset \widetilde{W}$ that no point of $(X\setminus \widetilde{W})[\f{1}{r}] =X[\f{1}{r}]\setminus \widetilde{W}[\f{1}{r}]$ specializes to any point of $\textbf{x}$.
Hence, the closure $\overline{(X\setminus \widetilde{W})[\f{1}{r}]}$ (in $X$) is disjoint from $\textbf{x}$, so $\widetilde{W}'\colonequals X\setminus \overline{(X\setminus \widetilde{W})[\f{1}{r}]}$ is an open neighbourhood of $\textbf{x}$.
Notice that, since $\Spec R$ has finite Krull dimension, $X$ is topologically Noetherian.
Since $(X\setminus \widetilde{W})[\f{1}{r}]$ is $R[\f{1}{r}]$-fiberwise of codimension $\ge 2$ in $X[\f{1}{r}]$, by \Cref{geom}\ref{geo-i} applied to the closures of the (finitely many) maximal points of $(X\setminus \widetilde{W})[\f{1}{r}]$, the closure $\overline{(X\setminus \widetilde{W})[\f{1}{r}]}=X \setminus \widetilde{W}'$ is $R$-fiberwise of codimension $\ge 2$ in $X$; \emph{a fortiori}, the same holds for $\widetilde{Y}\setminus \widetilde{W}'$ in $X$. Consequently, we can apply \Cref{Ces's Variant 3.7} to obtain an affine open $\widetilde{S}\subset \mathbb{A}_{R}^{d-1}$, an affine open neighbourhood $\widetilde{U}\subset \widetilde{W}'$ of $\textbf{x}$, and a smooth morphism $\widetilde{\pi}\colon\widetilde{U}\to \widetilde{S}$ of pure relative dimension $1$ such that $\widetilde{U}\cap \widetilde{Y}$ is $\widetilde{S}$-finite. Notice that $\widetilde{U}[\f{1}{r}]\subset \widetilde{W}'[\f{1}{r}] =\widetilde{W}[\f{1}{r}]$, so the restriction $(\widetilde{\mathcal{P}_0})|_{\widetilde{U}[\f{1}{r}]}$ makes sense.
Let $\tau\colon C\colonequals \widetilde{U}\times_{\widetilde{S}}\Spec A\to \Spec A$ be the base change of $\widetilde{\pi}$ to $\Spec A$. Let $Z$ be the pullback of $\widetilde{U}\cap \widetilde{Y}$ under $\mathrm{pr}_1\colon C\to \widetilde{U}$, and let $\widetilde{\mathcal{F}}$ be the pullback of $(\widetilde{\mathcal{P}_0})|_{\widetilde{U}[\f{1}{r}]}$ under $\mathrm{pr}_1\colon C[\f{1}{r}]\to \widetilde{U}[\f{1}{r}]$. Then, via $\tau$, $C$ is a smooth affine $A$-curve, $Z\subset C$ is an $A$-finite closed subscheme, and $\widetilde{\mathcal{F}}$ is a $\mathscr{G}\colonequals \mathrm{pr}_1^{\ast}(G_{\widetilde{U}})$-torsor over $C[\f{1}{r}]$ that trivializes over $C[\f{1}{r}]\setminus Z[\f{1}{r}]$. Finally, the diagonal in $C$ induces a section $s\in C(A)$ such that $s^{\ast}\mathscr{G}=G_A$ and $s_{A[\f{1}{r}]}^{\ast}(\widetilde{\mathcal{F}})\simeq \widetilde{\mathcal{P}}$.\qedhere
\end{proof}
\begin{pp}[Proof of \Cref{torsors-Sm proj base}]
By a standard limit argument involving \Cref{approxm semi-local Prufer ring}, one easily reduces to the case when $R$ has finite Krull dimension.
Now, let $\mathcal{P}$ (resp., $\widetilde{\mathcal{P}}$) be a generically trivial $G$-torsor over $A\colonequals \mathscr{O}_{X,\textbf{x}}$ (resp., over $A[\f{1}{r}]$) that we want to trivialize.
Let $d$ be the relative dimension of $X$ over $R$.
If $d=0$, then $A$ and $A[\f{1}{r}]$ are semilocal Pr\"{u}fer domains, so, by Grothendieck--Serre over semilocal Pr\"{u}fer schemes (\Cref{G-S over semi-local prufer}), the torsors $\mathcal{P}$ and $\widetilde{\mathcal{P}}$ are trivial. Hence we may assume that $d>0$. Then, by \Cref{nicely spread out lem}, there are a smooth, affine $A$-curve $C$, an $A$-finite closed subscheme $Z\subset C$, a section $s\in C(A)$, a reductive $C$-group scheme $\mathscr{G}$ with $s^{\ast}\mathscr{G}\simeq G_A$,
\begin{itemize}
\item [-] a $\mathscr{G}$-torsor $\mathcal{F}$ over $C$ that trivializes over $C\setminus Z$ such that $s^{\ast}\mathcal{F} \simeq \mathcal{P}$, and
\item [-] a $\mathscr{G}$-torsor $\widetilde{\mathcal{F}}$ over $C[\f{1}{r}]$ that trivializes over $C[\f{1}{r}]\setminus Z[\f{1}{r}]$ such that $(s|_{A[\f{1}{r}]})^{\ast}(\widetilde{\mathcal{F}}) \simeq \widetilde{\mathcal{P}}$.
\end{itemize}
By \Cref{triviality on sm rel. affine curves}~\ref{sec-thm-semilocal}, the $G_A$-torsor $s^{\ast}\mathcal{F} \simeq \mathcal{P}$ is trivial. By \Cref{triviality on sm rel. affine curves}~\ref{sec-thm-totally isotropic}, in case $(s|_{A[\f{1}{r}]})^{\ast}(\mathscr{G})\simeq G_{A[\f{1}{r}]}$ is totally isotropic, the $G_{A[\f{1}{r}]}$-torsor $(s|_{A[\f{1}{r}]})^{\ast}(\widetilde{\mathcal{F}}) \simeq \widetilde{\mathcal{P}}$ is trivial.
\QED
\end{pp}
\section{Torsors under a constant reductive group scheme}
\label{sect-torsor under constant redu}
In this section we prove the first main result of this paper, namely, the Grothendieck--Serre conjecture and a version of the Nisnevich conjecture for `constant' reductive group schemes.
The proof uses a variant of Lindel's lemma (\Cref{variant of Lindel's lem}) and glueing techniques to reduce to the case resolved in \Cref{torsors-Sm proj base}.
\begin{thm}\label{G-S for constant reductive gps}
For a semilocal Pr\"ufer domain $R$, an $r\in R\setminus \{0\}$, an irreducible affine $R$-smooth scheme $X$, a finite subset $\textbf{x}\subset X$, and a reductive $R$-group scheme $G$,
\begin{enumerate}
\item\label{G-S for constant reductive gps i} any generically trivial $G$-torsor over $A\colonequals \mathscr{O}_{X,\textbf{x}}$ is trivial, that is,
\[
\mathrm{ker}\left(H^1(A,G)\to H^1(\Frac A,G)\right)=\{*\};
\]
\item\label{G-S for constant reductive gps ii} if $G_{R[\f{1}{r}]}$ is totally isotropic, then any generically trivial $G$-torsor over $A[\f{1}{r}]$ is trivial, that is,
\[
\textstyle \mathrm{ker}\left(H^1(A[\f{1}{r}],G)\to H^1(\Frac A,G)\right)=\{*\}.
\]
\end{enumerate}
\end{thm}
\textbfegin{eg}in{proof}
Let $\mathcal{P}$ (resp., $\widetilde{\mathcal{P}}$) be a generically trivial $G$-torsor over $A$ (resp., over $A[\f{1}{r}]$). By shrinking $X$ around $\textbf{x}$, we may assume that $\mathcal{P}$ is defined on the whole $X$ (resp., $\widetilde{\mathcal{P}}$ is defined on the whole $X[\f{1}{r}]\colonequals X\times_RR[\f{1}{r}]$). Let $d$ be the relative dimension of $X$ over $R$. As pointed out by \v{C}esnavi\v{c}ius, since it suffices to argue that $\mathcal{P}$ (resp., $\widetilde{\mathcal{P}}$) is trivial Zariski-semilocally on $X$, we may replace $X$ by $X\times_R \mathbb{A}_R^N$ for large $N$ to assume that $d>\#\, \textbf{x}$: by pulling back along the zero section $X\to X\times_R \mathbb{A}_R^N$, the Zariski-semilocal triviality of $\mathcal{P}_{X\times_R \mathbb{A}_R^N}$ (resp., $\widetilde{\mathcal{P}}_{X[\f{1}{r}]\times_R \mathbb{A}_R^N}$) on $X\times_R \mathbb{A}_R^N$ implies that of $\mathcal{P}$ (resp., $\widetilde{\mathcal{P}}$) on $X$.
Our goal is to show that $\mathcal{P}|_A$ (resp., $\widetilde{\mathcal{P}}|_{A[\f{1}{r}]}$) is trivial.
A limit argument involving \Cref{approxm semi-local Prufer ring} reduces us to the case when $R$ has finite Krull dimension. By specialization, we can assume that each point of $\textbf{x}$ is closed in its \emph{corresponding} $R$-fiber of $X$ (but not necessarily lies in the closed $R$-fibers of $X$).
If $d=0$, then $A$ (resp., $A[\f{1}{r}]$) is a semilocal Pr\"{u}fer domain, so, by \Cref{G-S over semi-local prufer}, the torsor $\mathcal{P}|_A$ (resp., $\widetilde{\mathcal{P}}|_{A[\f{1}{r}]}$) is trivial. Thus we may assume that $d>0$ in what follows.
Let $\textbf{y}$ be the set of maximal points of the $R$-fibers of $X$.
\begin{claim}
No point of $\textbf{x}$ specializes to any point of $\textbf{y}$; that is, $\overline{\textbf{x}} \cap \textbf{y} =\emptyset$.
\end{claim}
\textbfegin{eg}in{proof} [Proof of the claim] Let $\pi:X\to S\colonequals \Spec R$ be the structural morphism.
If not, say $x\in \textbf{x}$ specializes to $y\in \textbf{y}$, then, by \Cref{geom}\ref{geo-i},
\[
\dim \overline{\{x\}}_{\pi(x)} = \dim \overline{\{x\}}_{\pi(y)},
\]
which is $\ge \dim \overline{\{y\}}_{\pi(y)}=d$ (because $y$ is a maximal point in the fiber $ \pi^{-1}(\pi(y))$ which has pure dimension $d$). Since $\dim \pi^{-1}(\pi(x))=d>0$, $x$ cannot be a closed point of the fiber $ \pi^{-1}(\pi(x))$, a contradiction.
\end{proof}
By \Cref{geom}~\ref{geo-iii} again, the semilocal ring $\mathscr{O}_{X,\textbf{y}}$, and hence also $\mathscr{O}_{X,\textbf{y}}[\f{1}{r}]$, are Pr\"{u}fer domains, so, by \Cref{G-S over semi-local prufer}, the $G$-torsor $\mathcal{P}|_{\mathscr{O}_{X,\textbf{y}}}$ (resp., $\widetilde{\mathcal{P}}|_{\mathscr{O}_{X,\textbf{y}}[\f{1}{r}]}$) is trivial. Therefore, using the above claim and prime avoidance, we can find an element $a\in \Gamma(X,\mathscr{O}_X)$ such that, if we denote $ Y\colonequals V(a)\subset X$, then $\textbf{x}\subset Y$, $\textbf{y}\cap Y=\emptyset$, and the restriction $\mathcal{P}|_{X\setminus Y}$ (resp., $\widetilde{\mathcal{P}}|_{(X\setminus Y)[\f{1}{r}]}$) is trivial. (We just take $a=a_1a_2$, where $a_1$ is such that $\textbf{y}\cap V(a_1)=\emptyset$ and $\mathcal{P}|_{X\setminus V(a_1)}$ (resp., $\widetilde{\mathcal{P}}|_{(X\setminus V(a_1))[\f{1}{r}]}$) is trivial, and $a_2$ is obtained from prime avoidance using the fact that $\overline{\textbf{x}} \cap \textbf{y} =\emptyset$, so that $\textbf{x}\subset V(a_2)$ and $\textbf{y} \cap V(a_2)=\emptyset$.)
Since $d>\#\, \textbf{x}$, we may apply \Cref{variant of Lindel's lem} to obtain an affine open neighbourhood $W\subset X$ of $\textbf{x}$, an affine open subscheme $ U\subset \mathbb{A}_R^d$, and an \'etale surjective $R$-morphism $f:W\to U$ such that the restriction $f|_{W\cap Y}$ is a closed immersion and $f$ induces a Cartesian square
\begin{equation*}
\begin{tikzcd}
W\cap Y \arrow[r, hook] \arrow[d, equal]
& W \arrow[d, "f"] \\
W\cap Y \arrow[r, hook]
& U.
\end{tikzcd}
\end{equation*}
Applying $(-)\times_RR[\f{1}{r}]$ yields a similar Cartesian square. By glueing \cite{Ces22a}*{Lemma~7.1},
\begin{enumerate}
\item we may (non-canonically) glue $\mathcal{P}|_{W}$ and the trivial $G$-torsor over $U\setminus f(W\cap Y)$ to descend $\mathcal{P}|_{W}$ to a $G$-torsor $\mathcal{Q}$ over $U$ that trivializes over $U\setminus f(W\cap Y)$. Since $U$ has a smooth, projective compactification $\mathbb{P}_R^d$, we may apply \Cref{torsors-Sm proj base}~\ref{loc-gen-trivial-sm-proj} to deduce that $\mathcal{Q}|_{\mathscr{O}_{U,f(\textbf{x})}}$ is trivial, so $\mathcal{P}|_A=\mathcal{P}|_{\mathscr{O}_{W,\textbf{x}}}$
is trivial, as desired.
\item we may (non-canonically) glue $\widetilde{\mathcal{P}}|_{W[\f{1}{r}]}$ and the trivial $G$-torsor over $(U\setminus f(W\cap Y))[\f{1}{r}]$ to descend $\widetilde{\mathcal{P}}|_{W[\f{1}{r}]}$ to a $G$-torsor $\widetilde{\mathcal{Q}}$ over $U[\f{1}{r}]$ that trivializes over $U[\f{1}{r}]\setminus f(W\cap Y)[\f{1}{r}]$. Since $U$ has a smooth, projective compactification $\mathbb{P}_R^d$, we may apply \Cref{torsors-Sm proj base}~\ref{Nis-sm-proj} to conclude that $\widetilde{\mathcal{Q}}|_{\mathscr{O}_{U,f(\textbf{x})}[\f{1}{r}]}$ is trivial, so $\widetilde{\mathcal{P}}|_{A[\f{1}{r}]}=\widetilde{\mathcal{P}}|_{\mathscr{O}_{W,\textbf{x}}[\f{1}{r}]}$
is trivial, as desired. \qedhere
\end{enumerate}
\end{proof}
\section{The Bass--Quillen conjecture for torsors}
\label{sect-Bass-Quillen}
In this section, we prove the following variant of the Bass--Quillen conjecture for torsors:
\begin{thm} \label{B-Q over val rings}
For a ring $A$ that is smooth over a Pr\"{u}fer ring $R$ and a totally isotropic reductive $R$-group scheme $G$, pullback induces bijections
\[
\text{$H^1_{\mathrm{Nis}}(A,G)\isoto H^1_{\mathrm{Nis}}(\mathbb{A}^N_A,G)$\quad or equivalently, \quad $H^1_{\mathrm{Zar}}(A,G)\isoto H^1_{\mathrm{Zar}}(\mathbb{A}^N_A,G)$}.
\]
\end{thm}
We notice that very special instances of \Cref{B-Q over val rings} for $G=\GL_n$ (that is, for vector bundles) were already known: Simis--Vasconcelos in \cite{SV71} considered the case when $A$ is a valuation ring and $N=1$, and Lequain--Simis in \cite{LS80} treated the case when $A$ is a Pr\"{u}fer ring. Apart from that, we are not aware of any instance of \Cref{B-Q over val rings} even for $G=\GL_n$ in our non-Noetherian context.
\begin{rem}
By \Cref{G-S for constant reductive gps}, for a reductive $R$-group scheme $G$, a $G$-torsor on $\mathbb{A}_R^N$ is Nisnevich-locally trivial, if and only if it is generically trivial, if and only if it is Zariski-locally trivial. This implies the equivalence of the formulation of \Cref{B-Q over val rings} for the topologies `Nis' and `Zar'.
\end{rem}
\begin{pp}[The `inverse' to Quillen patching]
Compared to Quillen patching \cite{Ces22}*{Corollary~5.1.5}, the following `inverse' to Quillen patching is more elementary but is also useful. Its case when $G=\text{GL}_n$ and $A=R[t_1,\cdots,t_N]$ is due to Roitman \cite{Roi79}*{Proposition 2}.
\end{pp}
\begin{lemma}\label{inverse patching}
Let $R$ be a ring, let $G$ be a quasi-affine, flat, finitely presented $R$-group scheme, let $A=\bigoplus_{i_1,\cdots,i_N\ge 0}A_{i_1,\cdots,i_N}$ be a $\mathbb{Z}_{\ge 0}^{\oplus N}$-graded $R$-algebra (resp., a $\mathbb{Z}_{\ge 0}^{\oplus N}$-graded domain over $R$) such that $R\xrightarrow{\sim}A_{0,\cdots,0}$, and suppose that every $G$-torsor on $A$ (resp., every generically trivial $G$-torsor on $A$) descends to a $G$-torsor on $R$. Then, for any multiplicative subset $S\subset R$, every $G$-torsor on $A_S$ (resp., every generically trivial $G$-torsor on $A_S$) whose restriction to each local ring of $(A_{0,\cdots,0})_S\simeq R_S$ extends to a $G$-torsor on $R$ descends to a $G$-torsor on $R_S$.
\end{lemma}
(The relevant case for us is when $A=R[t_1,\cdots,t_N]$.)
\begin{proof}
We focus on the part on generically trivial torsors, since the other is \cite[Proposition 5.1.10]{Ces22}.
Let $X$ be a generically trivial $G$-torsor on $A_S$ whose restriction to each local ring of $(A_{0,\cdots,0})_S\simeq R_S$ extends to a $G$-torsor on $R$. By Quillen patching \cite{Ces22}*{Corollary~5.1.5}, we may enlarge $S$ to reduce to the case when $R_S$ is local. Then, by assumption, the restriction of $X$ to $(A_{0,\cdots,0})_S\simeq R_S$ extends to a $G$-torsor $X_0$ on $R$. A limit argument reduces us further to the case when $S=\{r\}$ is a singleton at the cost of $R_S$ no longer being local. Notice that the projection onto the $(0,\cdots,0)$-th component
\[
\textstyle R\oplus \bigl( \bigoplus_{(i_1,\cdots,i_N)\neq (0,\cdots, 0)} A_{i_1,\cdots,i_N}[\frac{1}{r}] \bigr)\simeq A[\frac{1}{r}] \times_{R[\frac{1}{r}]}R \twoheadrightarrow R
\]
induces an isomorphism both modulo $r^n$ and on $r^n$-torsion for every $n>0$. So, by \cite[Proposition 4.2.2]{Ces22}, we can glue the $G$-torsor $X$ on $A[\frac{1}{r}]$ with the $G$-torsor $X_0$ on $R$ to obtain a generically trivial $G$-torsor $\widetilde{X}$ on $A[\frac{1}{r}] \times_{R[\frac{1}{r}]}R$. However, since
\[
\textstyle A[\frac{1}{r}] \times_{R[\frac{1}{r}]}R \simeq \underset{i\in \mathbb{N}}{\mathrm{colim}} \,A,
\]
where the transition maps $A\to A$ are given by multiplication by $r^{i_1+\cdots+i_N}$ on the degree-$(i_1,\cdots,i_N)$ part $A_{i_1,\cdots,i_N}$, a standard limit argument shows that $\widetilde{X}$ descends to a generically trivial $G$-torsor on some copy of $A$ in the colimit. Therefore, by assumption, it descends further to a $G$-torsor on $R$. The base change to $R_S$ of this final descendant gives a desired descendant of $X$ to a $G$-torsor on $R_S$.
\end{proof}
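The colimit presentation used in the proof above can be made concrete in the simplest case; the following sketch (our illustration, not part of the argument) takes $N=1$ and $A=R[t]$:

```latex
% Illustration (not from the text): the case N = 1, A = R[t].
% The fiber product consists of polynomials over R[1/r] whose constant
% term lies in R:
\[
\textstyle A[\tfrac{1}{r}] \times_{R[\frac{1}{r}]} R
  \,\simeq\, \bigl\{\, \sum_i a_i t^i \in R[\tfrac{1}{r}][t] : a_0 \in R \,\bigr\}.
\]
% The transition map multiplies the degree-i part by r^i, i.e. it is the
% ring map t -> rt, and the n-th copy of A maps to the fiber product by
% f(t) -> f(t/r^n), which leaves the constant term unchanged. Every element
% of the right-hand side is hit from the n-th copy once n is large enough
% that r^{ni} a_i lies in R for each of the finitely many nonzero a_i,
% which gives the colimit description.
```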
\begin{pp}[Torsors on $\mathbb{A}_R^N$ under reductive $R$-group schemes]
The following was conjectured in \cite{Ces22}*{Conjecture 3.5.1} and settled recently in \cite{Ces22b}*{Theorem 2.1(a)}.
\end{pp}
\begin{thm}\label{triviality over relative affine space for totally isotropic}
For a ring $R$ and a totally isotropic reductive $R$-group scheme $G$, any $G$-torsor on $\mathbb{A}_R^N$ that is trivial away from some $R$-finite closed subscheme of $\mathbb{A}_R^N$ is trivial.
\end{thm}
\begin{lemma} \label{B-Q for mult type}
For a normal domain $A$ and an $A$-group $M$ of multiplicative type, the pullback map
\[
\text{$H^1_{\mathrm{fppf}}(A,M)\to H^1_{\mathrm{fppf}}(\mathbb{A}_A^n,M)$ \quad is bijective. }
\]
\end{lemma}
\begin{proof}
When $A$ is Noetherian, this is \cite{CTS87}*{Lemma 2.4}. For a general normal domain $A$, we write it as a filtered union of its finitely generated $\mathbf{Z}$-subalgebras $A_i$, and, by replacing each $A_i$ with its normalization (which is again of finite type over $\mathbf{Z}$), we may assume that each $A_i$ is normal, so we may conclude from the Noetherian case via a limit argument, because $M$ is finitely presented over $A$.
\end{proof}
For any commutative unital ring $A$, denote by $A(t)$ the localization of $A[t]$ with respect to the multiplicative system of \emph{monic} polynomials.
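Two standard examples (our illustrations, not taken from the text) may help orient the reader:

```latex
% Illustrations (not from the text) of the ring A(t).
% (1) If A = k is a field, every nonzero polynomial is, up to a unit,
%     monic, so localizing k[t] at the monic polynomials yields
\[
k(t) \,=\, \Frac\bigl(k[t]\bigr), \quad \text{the rational function field.}
\]
% (2) If (A, \mathfrak{m}) is local, then no monic polynomial has all of
%     its coefficients in \mathfrak{m}, and classically A(t) is again
%     local, with maximal ideal \mathfrak{m}A(t) and residue field
%     (A/\mathfrak{m})(t).
```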
\begin{lemma}\label{triviality over R(t)}
Let $R$ be a semilocal Pr\"ufer domain and $G$ a reductive $R$-group scheme.
\[
\text{A $G$-torsor over $R(t)$ is trivial \quad if and only if\quad it is generically trivial.}
\]
\end{lemma}
\begin{proof}
Let $\mathcal{E}$ be a generically trivial $G$-torsor over $R(t)$. Denote by $\mathfrak{m}$ the maximal ideal of $R$. We observe that $R(t)$ is the local ring of the projective $t$-line $\mathbb{P}_R^1$ over $R$ at $\infty \in \mathbb{P}_{R/\mathfrak{m}}^1$, with $s\colonequals \frac{1}{t}$ inverted:
\[
\textstyle R(t)=\left(R[s]\right)_{(\mathfrak{m},s)}\left[ \frac{1}{s}\right].
\]
Hence, $\mathcal{E}$ spreads out to a $G$-torsor $\mathcal{E}_{U^{\circ}}$ over $U^{\circ}\colonequals U\setminus \{s=0\} $ for an affine open $U\subset \mathbb{P}_R^1$ containing $\infty$.
The fiber of $\{s=0\}$ over $\Frac R$ is the singleton $\{\xi\}$, where $\xi$ denotes the generic point of $\{s=0\}$, and
\[
W\,\cap \,\text{Spec}\,\mathscr{O}_{\mathbb{P}_R^1,\xi} = \{\text{the generic point of } \mathbb{P}_R^1\}.
\]
Since $\mathcal{E}_{U^{\circ}}$ is generically trivial and $U^{\circ}$ is affine hence quasi-compact, there is an open neighbourhood $U_{\xi} \subset U$ of $\xi$ such that $\mathcal{E}_{U^{\circ}}|_{U^{\circ}\cap U_{\xi}}$ is trivial.
Consequently, we may (non-canonically) glue $\mathcal{E}_{U^{\circ}}$ with the trivial $G$-torsor over $U_{\xi}$ to obtain a $G$-torsor $\widetilde{\mathcal{E}}$ over the open subset $U^{\circ} \cup U_{\xi}$ of $U$.
By construction, $\widetilde{\mathcal{E}}|_{U^{\circ}}\cong \mathcal{E}_{U^{\circ}}$.
Now the complementary closed subset $Z\colonequals U\setminus (U^{\circ} \cup U_{\xi})$ is contained in $\{s=0\}\setminus \{\xi\}$.
In particular,
\[
\text{$Z\times_R \Frac R= \emptyset$ \quad and\quad $\codim(Z_s,U_s)\ge 1$ for all $s\in \text{Spec}\,R$.}
\]
By purity (\Cref{purity for rel. dim 1}) for reductive torsors on smooth relative curves, $\widetilde{\mathcal{E}}$ extends to a $G$-torsor, still denoted by $\widetilde{\mathcal{E}}$, over the entire $U$.
As $\widetilde{\mathcal{E}}$ is generically trivial, by \Cref{G-S for constant reductive gps} \ref{G-S for constant reductive gps i}, its restriction to $\mathscr{O}_{\mathbb{P}_R^1,\infty}=\left(R[s]\right)_{(\mathfrak{m},s)}$,
and its further restriction to $R(t)$, is trivial, that is, $\mathcal{E}|_{R(t)}\cong \widetilde{\mathcal{E}}|_{R(t)}$ is trivial.
\end{proof}
\begin{pp}[Proof of \Cref{B-Q over val rings}]
Since any section of $\mathbb{A}_A^N\to \Spec A$ induces sections to the pullback maps in \Cref{B-Q over val rings}, these pullback maps are injective, so it suffices to show that they are surjective. By Quillen patching \cite{Ces22}*{Corollary~5.1.5}, we may replace $R$ by its localizations to assume that $R$ is a valuation ring. By a limit argument involving \Cref{approxm semi-local Prufer ring}, we reduce to the case when $R$ has finite rank.
\textbf{Step 1:} \emph{$A$ is a polynomial ring over $R$.}
It suffices to show that every generically trivial $G$-torsor $\mathcal{E}$ over $R[t_1,\cdots,t_N]$ is trivial; \emph{a fortiori, } it descends to a $G$-torsor over $R$. To prove this we will argue by double induction on the pair $(N,\text{rank}(R))$. If $N=0$, then by convention $R[t_1,\cdots,t_N]=R$, and we know from \cite{Guo20b} that a generically trivial $G$-torsor over $R$ is trivial. Now assume $N\ge 1$ and set $A':=R(t_N)[t_1,\cdots,t_{N-1}]$.
\end{pp}
\begin{claim}
The $G_{A'}$-torsor $\mathcal{E}_{A'}$ descends to a $G_{R(t_N)}$-torsor $\mathcal{E}_0$.
\end{claim}
\begin{proof}[Proof of the claim]
Consider the natural projection $\pi:\text{Spec}\,R(t_N) \to \Spec R$. Since by definition, $R(t_N)$ is the localization of $R[t_N]$ with respect to the multiplicative system of monic polynomials, the closed fiber of $\pi$ consists of the singleton $\{\mathfrak{p}_0\}$; further, the local ring $R(t_N)_{\mathfrak{p}_0}$ is a valuation ring of $\text{Frac}(R)(t_N)$ whose valuation restricts to the Gauss valuation associated to $R$ on $R[t_N]$:
$$
\textstyle R[t_N]\to \Gamma_R , \quad \sum_{i\ge 0} a_i t_N^i \mapsto \min_i v(a_i),
$$
where $v:R\to \Gamma_R$ is the (additive) valuation on $R$. In particular, $R$ and $R(t_N)_{\mathfrak{p}_0}$ have the same value group. To apply Quillen patching \cite{Ces22}*{Corollary~5.1.5} and conclude, it suffices to show that the base change of $\mathcal{E}_{A'}$ to $R(t_N)_{\mathfrak{p}}[t_1,\cdots,t_{N-1}]$ is trivial for every prime ideal $\mathfrak{p} \subset R(t_N)$.
Indeed, if $\mathfrak{p}=\mathfrak{p}_0$, then the above discussion implies $\text{rank}(R_{\pi(\mathfrak{p})})=\text{rank}(R)$, so
the desired triviality of the base change follows from the induction hypothesis. If $\mathfrak{p}\neq \mathfrak{p}_0$, then $\pi(\mathfrak{p}) \in \Spec R$ is not the closed point and $\text{rank}(R_{\pi(\mathfrak{p})})<\text{rank}(R)$, so, by the induction hypothesis, $\mathcal{E}_{R_{\pi(\mathfrak{p})}[t_1,\cdots,t_N]}$ and hence also its further base change along $R_{\pi(\mathfrak{p})}[t_1,\cdots,t_N]\to R(t_N)_{\mathfrak{p}}[t_1,\cdots,t_{N-1}]$ is trivial.
\end{proof}
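For a concrete instance of the Gauss valuation appearing in the proof of the claim (our illustration, not from the text): take $R=\mathbb{Z}_{(p)}$, so that $v=v_p$ and $\Gamma_R=\mathbb{Z}$; then, for example,

```latex
% Illustration (not from the text): the Gauss valuation for R = Z_(p).
\[
v\bigl(p^{2} + p\,t_N + p^{3}t_N^{2}\bigr) \,=\, \min(2,\,1,\,3) \,=\, 1 .
\]
% A polynomial has Gauss valuation 0 exactly when some coefficient is a
% unit of R, and R and R(t_N)_{\mathfrak{p}_0} indeed share the value
% group Z, as asserted in the proof.
```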
Since $\mathcal{E}$ is generically trivial, by considering the pullback of $\mathcal{E}_{A'}$ along a general section $s\in \mathbb{A}_{R(t_N)}^{N-1}(R(t_N))$, we see that $\mathcal{E}_0$ is also generically trivial. By Lemma \ref{triviality over R(t)}, $\mathcal{E}_0$, and hence also $\mathcal{E}_{A'}$, is trivial. Consequently, $\mathcal{E}$ is trivial away from the $R$-finite closed subset $\{f(t_N)=0\} \subset \mathbb{A}_R^N$ for some monic polynomial $f\in R [t_N]$. By \Cref{triviality over relative affine space for totally isotropic}, $\mathcal{E}$ is trivial. This completes the induction process.
\textbf{Step 2:} \emph{$A$ is the localization of a polynomial ring $\widetilde{R}:=R[u_1,\cdots,u_d]$ with respect to some multiplicative subset $S\subset \widetilde{R}$.}
We wish to apply the `inverse' to Quillen patching \Cref{inverse patching}, with $R$ being $\widetilde{R}$ here and $A$ being $\widetilde{R}[t_1,\cdots,t_N]$. It remains to check the assumptions of \emph{loc.~cit}. Firstly, by Step 1, a generically trivial $G$-torsor over $\widetilde{R}[t_1,\cdots,t_N]$ descends to a $G$-torsor over $\widetilde{R}$. Secondly, for any multiplicative subset $S\subset \widetilde{R}$ and any generically trivial $G$-torsor $\mathcal{E}$ over $\widetilde{R}_S[t_1,\cdots,t_N]$, the restriction of $\mathcal{E}$ to each local ring of the closed subscheme
$$
\Spec \widetilde{R}_S:=\left\{t_1=\cdots=t_N=0\right\}\subset
\mathbb{A}_{\widetilde{R}_S}^N
$$
is trivial (so trivially extends to a $G$-torsor over $\widetilde{R}$): by Bass--Quillen in the field case, the restriction of $\mathcal{E}$ to $\text{Frac}(\widetilde{R}_S)[t_1,\cdots,t_N]$ is trivial. Thus the restriction $\mathcal{E}|_{\widetilde{R}_S}$ is generically trivial and hence, by \Cref{G-S for constant reductive gps}\ref{G-S for constant reductive gps i}, is Zariski locally trivial. All assumptions of \Cref{inverse patching} are thus verified.
\textbf{Step 3:} \emph{$A$ is an arbitrary smooth $R$-algebra.}
Let $\mathcal{E}$ be a generically trivial $G$-torsor over $\mathbb{A}_A^N$. We need to show that $\mathcal{E}$ descends to a $G$-torsor over $A$. By Quillen patching \cite{Ces22}*{Corollary~5.1.5}, we may assume that $A$ is an essentially smooth local algebra over a valuation ring $R$. By localizing $R$, we assume that the homomorphism $R \to A$ is local.
We will argue by double induction on the pair $(\dim R, \dim A-\dim R)$ to show that $\mathcal{E}$ is even trivial. If $\dim R=0$, then $R$ is a field and we are in the classical, already settled case. If $\dim A=\dim R$, then, by \Cref{geom}~\ref{geo-iii}, $A$ is a valuation ring, and we are in the case already settled in Step 1. Assume now $\dim R>0$ and $\dim A-\dim R>0$.
By \Cref{enlarge valuation rings}, up to enlarging $R$ (without changing $\dim R$) we may assume that $A=\mathscr{O}_{X,x}$, where $X$ is an irreducible affine $R$-smooth scheme of pure relative dimension $d>0$, and $x\in X$ is a \emph{closed} point in the \emph{closed} $R$-fiber. Then necessarily we have $d=\dim A-\dim R$. By \Cref{variant of Lindel's lem}, up to shrinking $X$, there are an \'etale $R$-morphism $f:X\to \mathbb{A}_R^d$ and an element $r_0\in A_0:=\mathscr{O}_{\mathbb{A}_R^d,f(x)}$ such that
\[
\text{$f$ \quad induces \quad \quad
$A_0/r_0A_0 \textrightarrow{\sim} A/r_0A$.}
\]
On the other hand, our induction hypothesis implies that $\mathcal{E}_{\mathbb{A}_{A_{\mathfrak{p}}}^N}$ is trivial for any $\mathfrak{p}\in \text{Spec}\,A\setminus \{\mathfrak{m}_A\}$ (thus descends to the trivial $G$-torsor over $A_{\mathfrak{p}}$):
\begin{itemize}
\item either $\mathfrak{p}$ lies over a non-maximal ideal $ \mathfrak{q}\subset R$, in which case $R_{\mathfrak{q}} \to A_{\mathfrak{p}}$ is a local homomorphism,
\[
\dim R_{\mathfrak{q}} <\dim R, \quad \text{ and } \quad \dim A_{\mathfrak{p}}-\dim R_{\mathfrak{q}}\le d=\dim A-\dim R;
\]
\item or $R \to A_{\mathfrak{p}}$ is a local homomorphism, in which case
$$
\dim A_{\mathfrak{p}}-\dim R<\dim A-\dim R.
$$
\end{itemize}
By Quillen patching, we deduce that $\mathcal{E}_{\mathbb{A}_{A[\frac{1}{r_0}]}^N}$ descends to a $G$-torsor $\mathcal{F}$ over $A[\frac{1}{r_0}]$. However, $\mathcal{F}$ must be trivial, because it extends to a generically
trivial $G$-torsor over $A$, such as the restriction of $\mathcal{E}$ along any section $s\in \mathbb{A}_{A}^N(A)$, and, by \Cref{G-S for constant reductive gps}\ref{G-S for constant reductive gps i}, any such extension is trivial.
Now, by \cite{Ces22a}*{Lemma~7.1} applied to the Cartesian square
\begin{equation*}
\begin{tikzcd}
\mathbb{A}_{A/r_0A}^N \arrow[hookrightarrow]{r} \arrow[d, "\sim"]
& \mathbb{A}_A^N \arrow{d} \\
\mathbb{A}_{A_0/r_0A_0}^N \arrow[hookrightarrow]{r}
& \mathbb{A}_{A_0}^N,
\end{tikzcd}
\end{equation*}
we may glue $\mathcal{E}$ with the trivial $G$-torsor over $\mathbb{A}_{A_0[\frac{1}{r_0}]}^N $ to obtain a $G$-torsor $\mathcal{E}_0$ over $\mathbb{A}_{A_0}^N$ that trivializes over $\mathbb{A}_{A_0[\frac{1}{r_0}]}^N$. By Step 2, $\mathcal{E}_0$ is trivial, so $\mathcal{E}$ is trivial as well.
\QED
\begin{appendix}
\section{Grothendieck--Serre on a semilocal Pr\"{u}fer domain}
\label{section-G-S on semilocal prufer}
The main result of this section is the following generalization of the main results of \cite{Guo22} and \cite{Guo20b}.
\begin{thm} \label{G-S over semi-local prufer}
For a semilocal Pr\"{u}fer domain $R$ and a reductive $R$-group scheme $G$, we have
\[
\text{$\mathrm{ker}\left(\text{H}_{\mathrm{\acute{e}t}}^1(R,G)\to \text{H}_{\mathrm{\acute{e}t}}^1(\Frac R,G)\right)=\{*\}$}.
\]
\end{thm}
\begin{ppt}[Setup]\label{pd-setup}
We fix the following notations.
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension with maximal ideals $(\mathfrak{m}_i)_{i=1}^r$, we write $\mathcal{O}_i\colonequals R_{\mathfrak{m}_i}$ for the local rings, fix an element $a\in R$ such that $V(a)=\{\mathfrak{m}_i\}_{i=1}^r$, and let $\widehat{R}$ (resp., $\widehat{\mathcal{O}}_i$) denote the $a$-adic completion of $R$ (resp., of $\mathcal{O}_i$).
Then $\widehat{\mathcal{O}}_i$ is an $a$-adically complete valuation ring of rank 1, and we have $\widehat{R}\simeq \prod_{i=1}^r\widehat{\mathcal{O}}_i$, compatibly with the topologies. Denote $\widehat{K}_i\colonequals \Frac \widehat{\mathcal{O}}_i=\widehat{\mathcal{O}}_i[\f{1}{a}]$.
Topologize $R[\f{1}{a}]$ by declaring $\{\mathrm{im}(a^nR\rightarrow R[\f{1}{a}])\}_{n\ge 1}$ to be a fundamental system of open neighbourhoods of $0$; the associated completion is
\[
\textstyle \text{$R[\f{1}{a}]\to \widehat{R}[\f{1}{a}]\simeq \prod_{i=1}^r \widehat{\mathcal{O}}_i[\f{1}{a}]=\prod_{i=1}^r \widehat{K}_i,$}
\]
where each $\widehat{K}_i$ is a complete valued field, with pseudo-uniformizer (the image of) $a$.
In particular, for an $R$-scheme $X$, we have a map
\[
\textstyle \text{$\Phi_X\colon X(R[\f{1}{a}])\rightarrow \prod_{i=1}^rX(\widehat{K}_i)$.}
\]
If $X$ is locally of finite type over $R$, we endow the right-hand side with the product topology, where each $X(\widehat{K}_i)$ carries the natural topology induced from that of $\widehat{K}_i$ (see, for example, Conrad), which we will call the $a$-adic topology. If moreover $X$ is affine, we can canonically topologize $X(R[\f{1}{a}])$ by choosing a closed embedding $X\hookrightarrow \mathbb{A}_R^N$ and endowing $X(R[\f{1}{a}]) \hookrightarrow R[\f{1}{a}]^N$ with the subspace topology (this is independent of the choice of embedding); then $\Phi_X$ is a continuous map.
\end{ppt}
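A concrete instance of this setup (our illustration, not from the text): take $R=\mathbb{Z}_{(p)}\cap \mathbb{Z}_{(q)}$ for distinct primes $p,q$, a one-dimensional semilocal Pr\"ufer domain with maximal ideals $(p)$ and $(q)$, and $a=pq$, so that $V(a)=\{(p),(q)\}$; then

```latex
% Illustration (not from the text): R = Z_(p) \cap Z_(q), a = pq.
\[
\widehat{R} \,\simeq\, \mathbb{Z}_p \times \mathbb{Z}_q,
\qquad
\widehat{R}[\tfrac{1}{a}] \,\simeq\, \mathbb{Q}_p \times \mathbb{Q}_q,
\]
% so \widehat{\mathcal{O}}_1 = Z_p and \widehat{\mathcal{O}}_2 = Z_q, the
% complete valued fields are \widehat{K}_1 = Q_p and \widehat{K}_2 = Q_q,
% and the image of a = pq is a pseudo-uniformizer in each factor.
```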
\csub[Lifting maximal tori of reductive group schemes over semilocal rings]
Without any difficulty, one can generalize the lifting of maximal tori \cite{Guo20b} to the semilocal case.
\begin{lemma}\label{lift-tor}
Let $S$ be a semilocal scheme and $G$ an $S$-smooth, finitely presented group scheme whose $S$-fibers are connected and affine.
Assume that
\begin{enumerate}
\item for each residue field $\kappa(s)$ of $S$ at a closed point $s$, the fiber $G_{\kappa(s)}$ is a $\kappa(s)$-reductive group; and
\item $\# \kappa(s)\geq \dim (G_{\kappa(s)}/Z_{\kappa(s)})$ for the center $Z_{\kappa(s)}\subset G_{\kappa(s)}$.
\end{enumerate}
Then the following natural map is surjective:
\[
\textstyle \un{\mathrm{Tor}}g(S)\twoheadrightarrow \prod_{s\in S\mathrm{\, closed}}\un{\mathrm{Tor}}g(\kappa(s)).
\]
\end{lemma}
\begin{lemma}\label{tor-dense}
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension, we use the notation of the setup \S\ref{pd-setup}.
For a reductive $R$-group scheme $G$, the scheme $\un{\mathrm{Tor}}g$ of maximal tori of $G$, and the $a$-adic topology on $\un{\mathrm{Tor}}g(\widehat{K}_i)$, the image of the following map is dense:
\[
\textstyle \un{\mathrm{Tor}}g(R[\f{1}{a}])\rightarrow \prod_{i=1}^r\un{\mathrm{Tor}}g(\widehat{K}_i).
\]
\end{lemma}
\csub[Harder's weak approximation]
\vskip -0.5cm
\begin{lemma}\label{tor-open-tor}
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension, we use the setup \S\ref{pd-setup}.
For an $R[\f{1}{a}]$-torus $T$, let $L_i/\widehat{K}_i$ be minimal Galois field extensions splitting $T_{\widehat{K}_i}$ and consider the norm map
\[
N_i\colon T(L_i)\rightarrow T(\widehat{K}_i).
\]
Then, the image $U$ of $\prod_{i=1}^r N_i$ is open and is contained in $\overline{\mathrm{im}(T(R[\f{1}{a}])\rightarrow \prod_{i=1}^rT(\widehat{K}_i))}$.
\end{lemma}
\begin{proof}
The proof proceeds in the following steps.
\vskip 0.1cm
\textbf{Step 1.} The image $U$ is open. For each $i$, there is a short exact sequence of tori
\[
1\rightarrow \mathcal{T}_i\rightarrow \Res_{L_i/\widehat{K}_i}T_{L_i}\rightarrow T_{\widehat{K}_i}\rightarrow 1
\]
and the norm map $N_i\colon (\Res_{L_i/\widehat{K}_i}T_{L_i})(\widehat{K}_i)\rightarrow (\Res_{L_i/\widehat{K}_i}T_{L_i}/\mathcal{T}_i)(\widehat{K}_i)$ is, by \cite{Ces15d}*{Proposition~4.3~(a) and \S2.8~(2)}, $a$-adically open.
As a product of open subsets, $U$ is open in $\prod_{i=1}^r T(\widehat{K}_i)$.
\vskip 0.1cm
\textbf{Step 2.} We prove that $U$ is contained in the closure of $\mathrm{im}(T(\ria))$.
Equivalently, we show that every $u\in U$ and every open neighbourhood $B_u\subset U$ satisfy that $B_u\cap \mathrm{im}(T(\ria))\neq \emptyset$.
Let $\widetilde{R}/\ria$ be a minimal Galois cover splitting $T$. Consider the following commutative diagram
\[
\textbfegin{eg}in{equation}gin{tikzcd}
T(\tilde{R}) \arrow[d, "N_{\widetilde{R}/\ria}"'] \arrow[r] & ^{\prime}od_{i=1}^rT(L_i) \arrow[d, "^{\prime}od_{i=1}^rN_i"] \\
T(\ria) \arrow[r] & ^{\prime}od_{i=1}^rT(\widehat{K}_i) .
\end{tikzcd}
\]
Take a preimage $v\in (\prod_{i=1}^rN_i)^{-1}(u) \subset \prod_{i=1}^rT(L_i)$ and let $B_v\subset \prod_{i=1}^r T(L_i)$ be the preimage of $B_u$.
Since $T_{\widetilde{R}}$ splits, the image of $T(\widetilde{R})$ in $\prod_{i=1}^rT(L_i)$ is dense, hence $T(\widetilde{R})\times_{\prod_{i=1}^rT(L_i)}B_v\neq \emptyset$; namely, there is $g\in T(\widetilde{R})$ whose image lies in $B_v$.
Set $s\colonequals N_{\widetilde{R}/\ria}(g)\in T(\ria)$; then the image of $s$ under the map $T(\ria)\rightarrow \prod_{i=1}^rT(\widehat{K}_i)$ is contained in $B_u$, so the assertion follows.
\end{proof}
\begin{lemma}\label{tor-i-open}
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension, we use the setup \S\ref{pd-setup}.
For a reductive $R$-group scheme $G$ and for each $i$ a fixed maximal torus $T_i\subset G_{\widehat{K}_i}$ with minimal Galois field extension $L_i/\widehat{K}_i$ splitting $T_i$, consider the following norm map
\[
N_i\colon T_i(L_i)\rightarrow T_i(\widehat{K}_i).
\]
Then the image $U$ of the map $\prod_{i=1}^{r}N_i$ is an open subgroup of $\prod_{i=1}^r T_i(\widehat{K}_i)$ and is contained in $\overline{\mathrm{im}(G(\ria))}$, the closure of $\mathrm{im}(G(\ria)\rightarrow \prod_{i=1}^rG(\widehat{K}_i))$.
\end{lemma}
\begin{proof}
By the same argument as in \Cref{tor-open-tor}, the image $U$ is open in $\prod_{i=1}^rT_i(\widehat{K}_i)$.
It remains to show that $U\subset \overline{\mathrm{im}(G(\ria))}$, which proceeds in the following steps.
\vskip 0.1cm
\textbf{Step 1.} The map $\phi_i\colon G(\widehat{K}_i)\rightarrow \un{\mathrm{Tor}}g(\widehat{K}_i)$ defined by $g\mapsto gT_ig^{-1}$ is $a$-adically open for each $i$.
Since the image of $\un{\mathrm{Tor}}g(\ria)\rightarrow \prod_{i=1}^r \un{\mathrm{Tor}}g(\widehat{K}_i)$ is dense, for every open neighbourhood $W\subset \prod_{i=1}^rG(\widehat{K}_i)$ of $\mathrm{id}$, we have $((\prod_{i=1}^r\phi_i)(W))\cap \mathrm{im}(\un{\mathrm{Tor}}g(\ria)\rightarrow \prod_{i=1}^r\un{\mathrm{Tor}}g(\widehat{K}_i))\neq \emptyset$.
Therefore, there exist a torus $T^{\prime}\in \un{\mathrm{Tor}}g(\ria)$ and a $(g_i)_{i=1}^r\in W$ such that $g_iT_ig_i^{-1}=T^{\prime}_{\widehat{K}_i}$ for all $i$.
\vskip 0.1cm
\textbf{Step 2.} For any $u\in U$, consider the map $\prod_{i=1}^rG(\widehat{K}_i)\rightarrow \prod_{i=1}^rG(\widehat{K}_i)$ defined by $g\mapsto g^{-1}ug$.
Then, we apply Step 1 to the preimage $W$ of $U$ under this map: there is a $\gamma=(\gamma_i)_{i=1}^r\in W$ and a torus $T^{\prime}\in \un{\mathrm{Tor}}g(\ria)$ such that $\gamma_iT_i\gamma_i^{-1}=T^{\prime}_{\widehat{K}_i}$ for each $i$.
Then, $u\in \gamma U\gamma^{-1}=\gamma (\prod_{i=1}^rN_i (T_i(L_i)))\gamma^{-1}$, which, by transport of structure, is $\prod_{i=1}^rN_i(T^{\prime}_{\widehat{K}_i}(L_i))$.
By \Cref{tor-open-tor}, the last term is contained in the closure of $\mathrm{im}(T^{\prime}(\ria)\rightarrow \prod_{i=1}^rT^{\prime}(\widehat{K}_i))$, so is contained in $\overline{\mathrm{im}(G(\ria))}$.
\end{proof}
\begin{prop}\label{open-normal}
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension, we use the setup \S\ref{pd-setup}.
For a reductive $R$-group scheme $G$, the closure $\overline{\mathrm{im}(G(\ria))}$ of the image of $G(\ria)\rightarrow \prod_{i=1}^rG(\widehat{K}_i)$ satisfies:
\[
\textstyle \text{$\overline{\mathrm{im}(G(\ria))}$\quad contains an open normal subgroup $N$ of $\prod_{i=1}^rG(\widehat{K}_i)$.}
\]
\end{prop}
\begin{proof}
The proof proceeds in the following steps.
\begin{enumerate}
\item For each $i$, we fix a maximal torus $T_i\subset G_{\widehat{K}_i}$.
Then \Cref{tor-i-open} provides the open subgroup $U\subset \prod_{i=1}^rT_i(\widehat{K}_i)$.
Since each component of the norm map defining $U$ is the image of the $\widehat{K}_i$-points of $\Res_{L_i/\widehat{K}_i}(T_{i,L_i})\rightarrow T_i$, and $\Res_{L_i/\widehat{K}_i}(T_{i,L_i})$ is a Zariski dense open subset of an affine space over $\widehat{K}_i$, we have $U\cap \prod_{i=1}^rT_i^{\mathrm{reg}}(\widehat{K}_i)\neq \emptyset$.
\item For any $\tau=(\tau_i)_{i=1}^r\in U\cap \prod_{i=1}^rT^{\mathrm{reg}}_i(\widehat{K}_i)$, by \cite{SGA3II}*{Exposé~XIII, Corollaire~2.2}, for each $i$,
\[
\text{ $f_i\colon G_{\widehat{K}_i}\times T_i\rightarrow G_{\widehat{K}_i},\quad (g,t)\mapsto gtg^{-1}$\quad is smooth at $(\mathrm{id}, \tau_i)$.}
\]
Hence, there is a Zariski open neighbourhood $B\subset \prod_{i=1}^r ( G_{\widehat{K}_i}\times T_i)$ of $(\mathrm{id},\tau)$ such that
\[
\textstyle \text{$(\prod_{i=1}^rf_i)|_B\colon B\rightarrow \prod_{i=1}^r G_{\widehat{K}_i}$ \quad is smooth.}
\]
By \cite{GGMB14}*{Proposition~3.1.4}, the map $B(\prod_{i=1}^r\widehat{K}_i)\rightarrow \prod_{i=1}^rG(\widehat{K}_i)$ is open. In particular, since $(\prod_{i=1}^rG(\widehat{K}_i))\times U$ is an open neighbourhood of $(\mathrm{id},\tau)$,
$E:=(\prod_{i=1}^{r}f_i)((\prod_{i=1}^rG(\widehat{K}_i))\times U)$ contains a non-empty open subset of $\prod_{i=1}^rG(\widehat{K}_i)$.
Define $N$ as the subgroup of $\prod_{i=1}^rG(\widehat{K}_i)$ generated by $E$; it is open.
By construction, $E$ is stable under conjugation by $\prod_{i=1}^rG(\widehat{K}_i)$, so $N$ is normal.
\item We prove that $N$ is in the closure of $\mathrm{im}(G(\ria)\rightarrow \prod_{i=1}^rG(\widehat{K}_i))$.
Since $E$ is the union of conjugates of $U$, each of which is contained in $\overline{\mathrm{im}(G(\ria))}$ by \Cref{tor-i-open}, $E$ lies in this closure, and hence so does $N$. \qedhere
\end{enumerate}
\end{proof}
\begin{cor}\label{lift-r-torus}
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension, we use the setup of \S\ref{pd-setup}.
For a reductive group scheme $G$ over $R$, a maximal torus $T_i\subset G_{\widehat{\mathcal{O}}_i}$ for each $i$, and any open neighbourhood $W$ of $\mathrm{id}\in \prod_{i=1}^rG(\widehat{K}_i)$ such that $W\subset \overline{\mathrm{im}(G(\ria))}\cap \prod_{i=1}^r G(\widehat{\mathcal{O}}_i)$, there exist $g=(g_i)_i\in W$ and a maximal torus $T\in \un{\mathrm{Tor}}_G(R)$ such that for every $i$, we have
\[
T_{\widehat{K}_i}=g_iT_{i,\widehat{K}_i}g_i^{-1}.
\]
\end{cor}
\begin{proof}
By \Cref{open-normal}, $\overline{\mathrm{im}(G(\ria))}\cap \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)$ is an $a$-adically open neighbourhood of $\mathrm{id}\in \prod_{i=1}^rG(\widehat{K}_i)$, so open neighbourhoods $W$ of $\mathrm{id}$ as in the statement exist.
Now consider the $a$-adically open map $\phi\colon \prod_{i=1}^rG(\widehat{K}_i)\rightarrow \prod_{i=1}^r\un{\mathrm{Tor}}_G(\widehat{K}_i)$ defined by $(g_i)_i\mapsto (g_iT_{i,\widehat{K}_i}g_i^{-1})_i$.
Then $\phi(W)$ is an $a$-adically open neighbourhood of $(T_i)_i\in \prod_{i=1}^{r}\un{\mathrm{Tor}}_G(\widehat{K}_i)$.
Since $\prod_{i=1}^r\un{\mathrm{Tor}}_G(\widehat{\mathcal{O}}_i)\subset \prod_{i=1}^r\un{\mathrm{Tor}}_G(\widehat{K}_i)$ is also an $a$-adically open neighbourhood of $(T_i)_i$, the intersection $\phi(W)\cap \prod_{i=1}^r\un{\mathrm{Tor}}_G({\widehat{\mathcal{O}}_i})$ is open and non-empty.
Then the density of the image of $\un{\mathrm{Tor}}_G(\ria)\rightarrow \prod_{i=1}^r\un{\mathrm{Tor}}_G(\widehat{K}_i)$ provided by \Cref{tor-dense} yields a torus
\[
\textstyle T\in \un{\mathrm{Tor}}_G(R)\isoto \un{\mathrm{Tor}}_G(\ria)\times_{\prod_{i=1}^r\un{\mathrm{Tor}}_G(\widehat{K}_i)}\prod_{i=1}^r\un{\mathrm{Tor}}_G(\widehat{\mathcal{O}}_i),
\]
thanks to the identification $R\isoto \ria \times_{\prod_{i=1}^r\widehat{K}_i}\prod_{i=1}^r\widehat{\mathcal{O}}_i$ and the affineness of $\un{\mathrm{Tor}}_G$ over $R$.
Then $T$ is a maximal $R$-torus of $G$ satisfying the conditions.
\end{proof}
\begin{lemma} \label{replace im by it closure}
With the notations in \Cref{open-normal}, we have
\[
\textstyle \overline{\mathrm{im}(G(\ria))}\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)=\mathrm{im}(G(\ria)\rightarrow \prod_{i=1}^r G(\widehat{K}_i))\cdot \prod_{i=1}^r G(\widehat{\mathcal{O}}_i).
\]
\end{lemma}
\csub[Product formula over semilocal Pr\"ufer domains]
The goal of this section is to prove the following product formula for reductive group schemes:
\begin{prop}\label{decomp-gp}
For a semilocal Pr\"ufer domain $R$ of finite Krull dimension, we use the setup of \S\ref{pd-setup}. For a reductive $R$-group scheme $G$, we have
\[
\textstyle \prod_{i=1}^r G(\widehat{K}_i)=\mathrm{im}\bigl(G(R[\f{1}{a}])\rightarrow \prod_{i=1}^r G(\widehat{K}_i)\bigr)\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i).
\]
\end{prop}
Before proceeding, we recall the following consequence of the Beauville--Laszlo type glueing of torsors:
\begin{lemma} \label{double cosets}
The $G$-torsors on $R$ that trivialize both on $R[\f{1}{a}]$ and on $\widehat{R}\simeq \prod_{i=1}^r\widehat{\mathcal{O}}_i$ are in bijection with the following double cosets
\[
\textstyle \mathrm{im}\bigl(G(R[\f{1}{a}])\rightarrow \prod_{i=1}^r G(\widehat{K}_i)\bigr)\backslash \prod_{i=1}^r G(\widehat{K}_i)/ \prod_{i=1}^rG(\widehat{\mathcal{O}}_i).
\]
\end{lemma}
\begin{cor} \label{cor to double cosets} \hfill
\begin{enumerate}
\item \leftarrowbel{pd-tori} \Cref{decomp-gp} holds when $G$ is an $R$-group scheme of multiplicative type.
\item \leftarrowbel{decomp-gp implies G-S on semi pruf} \Cref{decomp-gp} implies \Cref{G-S over semi-local prufer}.
\end{enumerate}
\end{cor}
\begin{proof}
By \Cref{double cosets}, \ref{pd-tori} follows from \Cref{G-S type results for mult type}~\ref{G-S for mult type gp}, since then no nontrivial $G$-torsor on $R$ trivializes on $R[\f{1}{a}]$. For \ref{decomp-gp implies G-S on semi pruf}, by the approximation \Cref{approxm semi-local Prufer ring}, we may assume that the ring $R$ in \Cref{G-S over semi-local prufer} has finite Krull dimension. The case $\dim R=0$ is trivial, so we assume $\dim R>0$ for what follows. Since $R[\f{1}{a}]$ is also a semilocal Pr\"ufer domain and $\dim R[\f{1}{a}]<\dim R$, by induction on $\dim R$, we may assume that every generically trivial $G$-torsor on $R$ trivializes on $R[\f{1}{a}]$ and, by \cite{Guo20b}, also on each $\widehat{\mathcal{O}}_i$ (since $\widehat{\mathcal{O}}_i$ is a rank-one valuation ring). Therefore, once the product formula in \Cref{decomp-gp} holds for $G$, \Cref{double cosets} would imply that every generically trivial $G$-torsor on $R$ is itself trivial.
\end{proof}
\begin{proof}[Proof of \Cref{decomp-gp}]
We proceed as in \cite{Guo20b}*{\S4}.
We choose a minimal parabolic $\widehat{\mathcal{O}}_i$-subgroup $P_i$ for each $G_i\colonequals G\times_{R}\widehat{\mathcal{O}}_i$. Denote $U_i\colonequals \mathrm{rad}^u(P_i)$.
\begin{enumerate}
\item For the maximal split torus $T_i\subset P_i$, we have $\prod_{i=1}^rT_i(\widehat{K}_i)\subset \mathrm{im}(G(R[\f{1}{a}])\rightarrow \prod_{i=1}^rG(\widehat{K}_i))\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)$.
By \cite{SGA3IIInew}*{Exposé~XXVI, Corollaire~6.11}, there is a maximal torus $\widetilde{T}_i$ of $G_i$ containing $T_i$. In particular, $\widetilde{T}_{i,\widehat{K}_i}$ is a maximal torus of $G_{\widehat{K}_i}$.
Then we apply \Cref{lift-r-torus} to all $\widetilde{T}_i$: there are an element $g=(g_i)_i\in \overline{\mathrm{im}(G(\ria))}\cap \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)$ and a maximal torus $T_0\subset G$ such that $T_{0,\widehat{K}_i}=g_i\widetilde{T}_{i,\widehat{K}_i}g_i^{-1}$ for every $i$, which combined with \Cref{cor to double cosets}~\ref{pd-tori} for $T_0$ yields
\[
\textstyle \prod_{i=1}^r\widetilde{T}_i(\widehat{K}_i)=g^{-1}\left(\prod_{i=1}^rT_0(\widehat{K}_i)\right)g\subset g^{-1}\left(\overline{\mathrm{im}(G(\ria))}\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)\right)g.
\]
Since $g\in \overline{\mathrm{im}(G(\ria))}\cap \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)$, the inclusion displayed above implies that $\prod_{i=1}^r\widetilde{T}_i(\widehat{K}_i)\subset \overline{\mathrm{im}(G(\ria))}\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)$.
Therefore, by \Cref{replace im by it closure}, we obtain the following inclusion
\[
\textstyle \prod_{i=1}^rT_i(\widehat{K}_i)\subset \prod_{i=1}^r\widetilde{T}_i(\widehat{K}_i)\subset \mathrm{im}(G(\ria))\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i).
\]
\item We prove $\prod_{i=1}^rU_i(\widehat{K}_i)\subset \overline{\mathrm{im}(G(\ria))}$.
Consider the $T_i$-action on $G_i$ defined by
\[
\textstyle T_i\times G_i\rightarrow G_i,\quad (t,g)\mapsto tgt^{-1}.
\]
Recall the open normal subgroup $N\subset \prod_{i=1}^rG(\widehat{K}_i)$ constructed in \Cref{open-normal}; then each $N\cap U_i(\widehat{K}_i)$ is open in $U_i(\widehat{K}_i)$.
The dynamic argument in \cite{Guo20b} shows that $U_i(\widehat{K}_i)=N\cap U_i(\widehat{K}_i)$, hence $U_i(\widehat{K}_i)\subset N$ for each $i$. Therefore, we have $\prod_{i=1}^rU_i(\widehat{K}_i)\subset \overline{\mathrm{im}(G(\ria))}$.
\item We prove $\prod_{i=1}^rP_i(\widehat{K}_i)\subset \mathrm{im}(G(\ria)\rightarrow \prod_{i=1}^rG(\widehat{K}_i))\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i)$.
The quotient $H_i\colonequals L_i/T_i$, where $L_i$ denotes the Levi subgroup of $P_i$ containing $T_i$, is anisotropic; therefore we have $H_i(\widehat{K}_i)=H_i(\widehat{\mathcal{O}}_i)$ for every $i$. Consider the commutative diagram
\[
\textbfegin{eg}in{equation}gin{tikzcd}
0 \arrow{r}
& T_i(\widehat{\mathcal{O}}_i) \arrow{d} \arrow{r} & L_i(\widehat{\mathcal{O}}_i) \arrow{d} \arrow{r} & H_i(\widehat{\mathcal{O}}_i) \ar[equal]{d} \arrow{r} & H^1(\widehat{\mathcal{O}}_i, T_i)=0 \arrow{d} \\
0 \arrow{r} & T_i(\widehat{K}_i) \arrow{r} & L_i(\widehat{K}_i) \arrow{r} & H_i(\widehat{K}_i) \arrow{r} & H^1(\widehat{K}_i,T_i)=0
\end{tikzcd}
\]
with exact rows.
By a diagram chase, we have $L_i(\widehat{K}_i)=T_i(\widehat{K}_i)\cdot L_i(\widehat{\mathcal{O}}_i)$ for every $i$: for $x\in L_i(\widehat{K}_i)$, its image in $H_i(\widehat{K}_i)=H_i(\widehat{\mathcal{O}}_i)$ lifts to some $y\in L_i(\widehat{\mathcal{O}}_i)$, and $xy^{-1}$ then lies in $T_i(\widehat{K}_i)$.
Combining this equality with (i) and (ii) yields the inclusion
\[
\textstyle \prod_{i=1}^rP_i(\widehat{K}_i)\subset \mathrm{im}(G(\ria)\rightarrow \prod_{i=1}^rG(\widehat{K}_i))\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i).
\]
\item Recall \cite{SGA3IIInew}*{Exposé~XXVI, Théorème~4.3.2 and Corollaire~5.2} that for each $P_i$, there is a parabolic subgroup $Q_i$ of $G_i$ such that $P_i\cap Q_i=L_i$ fitting into the following surjection
\[
\mathrm{rad}^u(P_i)(\widehat{K}_i)\cdot \mathrm{rad}^u(Q_i)(\widehat{K}_i)\twoheadrightarrow G(\widehat{K}_i)/P_i(\widehat{K}_i).
\]
This surjection, combined with the result of (ii), gives the inclusion
\[
\textstyle \prod_{i=1}^rG(\widehat{K}_i)\subset \overline{\mathrm{im}(G(\ria)\rightarrow \prod_{i=1}^rG(\widehat{K}_i))}\cdot \prod_{i=1}^rP_i(\widehat{K}_i).
\]
Combined with (iii) and \Cref{replace im by it closure}, this yields the following desired product formula
\[
\textstyle \prod_{i=1}^r G(\widehat{K}_i)=\mathrm{im}\p{G(R[\f{1}{a}])\rightarrow \prod_{i=1}^r G(\widehat{K}_i)}\cdot \prod_{i=1}^rG(\widehat{\mathcal{O}}_i).\qedhere
\]
\end{enumerate}
\end{proof}
\end{appendix}
\begin{bibdiv}
\begin{biblist}
\bibselect{bibliography}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{Quantum linear mutual information and classical correlations in globally pure
bipartite systems}
\author{R.M. Angelo\footnote[1]{Present address: Instituto de F\'{\i}sica,
Universidade de S\~ao Paulo, C.P. 66318, 05315-970, S\~ao Paulo, SP, Brazil.},
S. A. Vitiello, M.A.M. de Aguiar and K. Furuya}
\affiliation{Instituto de F\'{\i}sica `Gleb Wataghin', Universidade
Estadual de Campinas, C.P. 6165, \\13083-970, Campinas, SP, Brazil}
\begin{abstract}
We investigate the correlations of initially separable probability
distributions in a globally pure bipartite system with two degrees
of freedom for classical and quantum systems. A classical version
of the quantum linear mutual information is introduced and the two
quantities are compared for a system of oscillators coupled with
both linear and non-linear interactions. The classical
correlations help to understand how much of the quantum loss of
purity is due to intrinsic quantum effects and how much is
related to the probabilistic character of the initial states, a
characteristic shared by both the classical and quantum pictures.
Our examples show that, for initially localized Gaussian states,
the classical statistical mutual linear entropy follows its
quantum counterpart for short times. For non-Gaussian states the
behavior of the classical and quantum measures of information is
still qualitatively similar, although the fingerprints of the
non-classical nature of the initial state can be observed in their
different amplitudes of oscillation.
\pacs{05.70, 65.40G, 03.65.U}
\end{abstract}
\maketitle
\section{Introduction}
Entanglement has been a focus of intense investigation in recent
years due to its relevance in quantum computation and quantum
information \cite{1,2,vedral,plenio,vidal,woot}. From a fundamental
point of view, entanglement is considered ``the characteristic trait
of quantum mechanics, the one that enforces its entire departure from
classical lines of thought''
(Schr\"odinger)\cite{schrodinger35}. Entanglement reflects quantum
correlations in the Hilbert space, where state vectors are associated
with probability distributions. In classical mechanics, on the other
hand, states are described by points in phase space, and no intrinsic
probabilistic character exists.
Probability distributions can be introduced in classical mechanics
with the Liouville formalism, and statistical averages on
ensembles can be calculated. Initially independent probability
distributions can become correlated when evolved by a classical
Hamiltonian with interaction terms. These correlations represent
the dynamical emergence of conditional probabilities in the
classical statistical world.
In this paper we consider the time evolution of a bipartite system
whose Hilbert space is the direct product of two subspaces.
Initial states that are the tensor product of kets in each
subspace, usually evolve to non-product states, generating
coherences and entangling the two subsystems. A measure of the
non-separability between them is provided either by the Von
Neumann or the linear entropies. In the case of a globally pure
bipartite system we argue that the latter is a convenient quantum
measure of non-separability. Next, as the main point of our work,
we consider the time evolution of probability densities in the
phase space of the corresponding bipartite {\em classical}
analogue. Inspired by the quantum problem, we propose a measure of
the classical correlations generated by the statistical ensembles
of classical trajectories. We want to measure the non-separability
of the classical probability distribution evolving in time via
Liouville equations. Moreover, we want to compare the behavior of this
classical measure with the loss of purity of the quantum system,
particularly at short times. This comparison helps to identify the
time at which intrinsic quantum effects, such as interferences,
begin to be important.
In our approach we first define a classical statistical quantity
and compare its behavior with the quantum linear entropy (QLE).
Due to its formal similarity with the quantum linear entropy, we
call it {\it classical statistical linear entropy} (CSLE). A
similar classical quantity has recently been used in the study of
quantum-classical correspondence of intrinsic decoherence
\cite{gong03}. In spite of the direct correspondence between the
QLE and the CSLE, the latter may not be symmetric between the two
subsystems that make up the global bipartite system, i.e., the
CSLE of each subsystem does not in general contain the same
information. Also, the CSLE can assume negative values and is
restricted to initially separable distributions, not being a good
measure in the case of distributions that start off non-separable.
We therefore define a more general classical measure, the {\it
classical statistical linear mutual information} (CSLMI), based on
the concept of {\it quantum mutual information} (QMI)
\cite{vedral97}, that is always symmetric and positive, and thus,
best suited to measure the classical separability. We show that
for two bi-linearly coupled harmonic oscillators with Gaussian
initial distributions, the classical and quantum results coincide.
For non-linear coupling the classical and quantum entropies remain
close for short times for both regular and chaotic regimes.
The outline of this work is as follows. In Section II.A we give
some general definitions and show that the quantum linear entropy
can be expressed in terms of the non-diagonal terms of the density
operator, providing a good measure of the entanglement between the
subsystems. In Section II.B we present our definitions of the
classical statistical linear entropy with some considerations
about the initial probability distributions. Next we introduce the
quantum linear mutual information and its classical statistical
analogue. This allows us to treat cases where the two subsystems
have different types of initial distributions in phase space.
Section III is devoted to the comparison between the QLMI and the
CSLMI for a system of two oscillators coupled with the following
types of interactions: (1) bi-linear; (2) non-linear; (3)
bilinear within the {\em rotating wave approximation} (RWA). As
initial states we consider both Gaussian and non-Gaussian
distributions. Finally in Section IV we present some conclusions
and discuss the adequacy and limitations of the present approach.
\section{Theory}
\subsection{Quantum Linear Entropy}
The density operator corresponding to an initially (normalized) pure
state $|\psi_0 \rangle$ is
\begin{equation}
\rho(t) = U(t) \, |\psi_0 \rangle \langle \psi_0| \, U^{\dagger}(t),
\end{equation}
where $U(t)=e^{-iHt/\hbar}$ is the evolution operator and $H$ is the
Hamiltonian. For a globally pure state, there are constraints
connecting the diagonal and off-diagonal matrix elements of the density
operator. This follows from the property $Tr\rho^2 = Tr\rho =1$, which
can be written explicitly as
\begin{eqnarray}
\sum_i\rho_{ii}^2+\sum_i\sum\limits_{k\neq i}|\rho_{ik}|^2=\sum_i
\rho_{ii} = 1 \;,
\label{correlacoes}
\end{eqnarray}
where we have separated the diagonal and off-diagonal terms in the
left hand side. From this relation we define
\begin{eqnarray}
d (\rho)\equiv\sum_i\sum\limits_{k\neq i}|\rho_{ik}|^2=
1-\sum_i\rho_{ii}^2 \;.
\label{emaranhamento}
\end{eqnarray}
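Relation (\ref{emaranhamento}) is straightforward to check numerically. The following Python snippet (an illustrative sketch with an arbitrary normalized three-level pure state, not part of the calculations reported below) verifies that the off-diagonal weight equals $1-\sum_i\rho_{ii}^2$:

```python
import numpy as np

# Arbitrary normalized pure state of a 3-level system (hypothetical example).
psi = np.array([0.6, 0.8j, 0.0])
rho = np.outer(psi, psi.conj())          # pure-state density matrix, tr(rho) = 1

diag = np.real(np.diag(rho))
# Off-diagonal weight: sum_{i != k} |rho_ik|^2 = tr(rho^2) - sum_i rho_ii^2.
d_rho = np.sum(np.abs(rho) ** 2) - np.sum(diag ** 2)

# For a pure state tr(rho^2) = 1, so d(rho) = 1 - sum_i rho_ii^2.
assert np.isclose(d_rho, 1.0 - np.sum(diag ** 2))
```

Any normalized vector would do here; the identity only uses $\mathrm{Tr}\rho^2=\mathrm{Tr}\rho=1$.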
Consider now a bipartite system whose Hilbert space is the direct
product of two sub-spaces: $\mathcal{E} = \mathcal{E}_1 \otimes
\mathcal{E}_2$. A measure of non-separability between these subsystems
for a globally pure state is provided either by the Von Neumann or by
the linear entropies \cite{vedral97,manfredi}. The partial trace of
$\rho$ on system $2$ defines the operator
\begin{equation}
\label{rho1q}
\rho_1(t) = tr_2(\rho(t)) \;.
\end{equation}
The corresponding QLE is given by
\begin{equation}
\label{d1}
S_1(t) = 1 - tr_1(\rho_1^2(t)) \;.
\end{equation}
The operator $\rho_2(t)$ and the QLE $S_2(t)$ are similarly defined.
For $H = H_1\otimes\mathbf{1}_2 + \mathbf{1}_1\otimes H_2$ both
$\rho_1(t)$ and $\rho_2(t)$ are projectors and $S_1(t)=S_2(t)=0$.
If couplings between the sub-spaces exist, they force the global
state to evolve to non-product states, entangling the subsystems and
producing non-zero values for $S_1(t)$ and $S_2(t)$.
In fact, for a globally pure bipartite system
$S_1=S_2=d (\rho)$. To see this we write
\begin{eqnarray}
|\psi\rangle=\sum_i \lambda_i |a_i\rangle|b_i\rangle,
\end{eqnarray}
where $\{|a_i\rangle\}$ and $\{|b_i\rangle\}$ are orthonormal basis
of subsystems $1$ and $2$ respectively, and $\{\lambda_i\}$ is the
Schmidt spectrum \cite{schmidt07}. Then,
\begin{eqnarray}
\rho=
\sum_{i,j}\rho_{ij}|a_i\rangle |b_i\rangle\langle a_j| \langle b_j|,
\end{eqnarray}
with $\rho_{ij}=\lambda_i \lambda_j^*$. The reduced density of system
$1$ becomes
\begin{eqnarray}
\rho_1&=&\sum_i \rho_{ii}|a_i\rangle\langle a_i|,
\end{eqnarray}
and similarly for system $2$. The subsystem linear entropy becomes
trivial in this basis:
\begin{eqnarray}
S_1=1-Tr_{1} (\rho_{1}^2)= 1-\sum_i \rho_{ii}^2
\label{delta}
\end{eqnarray}
with a similar result for $S_2$.
Comparing this result with Eq.(\ref{emaranhamento}) we see that
\begin{eqnarray}
S_1 = S_2 = \sum_i\sum\limits_{k\neq i}|\rho_{ik}|^2=d (\rho).
\label{s1s2}
\end{eqnarray}
Therefore, as long as any globally pure state of the system can be
decomposed in the Schmidt basis, we can conclude that the linear
entropy indeed measures coherences between the
subsystems. Furthermore, because of the total conservation
of coherences (\ref{correlacoes}), the linear entropy
seems to be the most natural measure in the present situation.
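The equality $S_1=S_2$ of Eq.(\ref{s1s2}) can also be checked without constructing the Schmidt basis. Writing $|\psi\rangle=\sum_{mn}c_{mn}|m\rangle|n\rangle$, the reduced matrices are $\rho_1=cc^{\dagger}$ and $\rho_2=c^{T}c^{*}$; the snippet below (a hypothetical random-state example) confirms that their linear entropies coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 3, 4
# Coefficient matrix c_mn of a generic normalized pure bipartite state.
c = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
c /= np.linalg.norm(c)

rho1 = c @ c.conj().T        # tr_2 |psi><psi|
rho2 = c.T @ c.conj()        # tr_1 |psi><psi|

S1 = 1.0 - np.real(np.trace(rho1 @ rho1))
S2 = 1.0 - np.real(np.trace(rho2 @ rho2))
assert np.isclose(S1, S2)    # the two subsystem linear entropies agree
assert 0.0 < S1 < 1.0        # a generic random state is entangled
```

The agreement holds for any coefficient matrix, since $\rho_1$ and $\rho_2$ share their nonzero spectrum (the squared singular values of $c$).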
\subsection{Classical Systems}
A measure of non-separability for classical systems can be defined
only at a statistical level by considering ensembles of initial
conditions \cite{3,4,5}. Following Wehrl \cite{wehrl} we define a
quantity that we call {\it classical statistical linear entropy}.
Consider a system with two degrees of freedom described by the classical
Hamiltonian function $\cal{H}$. Consider also several copies of this
system with initial conditions distributed according to
the ensemble probability distribution
$P(x,t=0)$,
where $x \equiv (q_1,p_1,q_2,p_2)$. The classical time
evolution of $P(x,0)$ is obtained via Liouville's equation
\begin{equation}
\frac{\partial P}{\partial t} = \{\mathcal{H},P\} ,
\label{Lio}
\end{equation}
whose solution is
\begin{equation}
\label{timeevol}
P(x,t) = P(\phi_t^{-1}(x),0),
\end{equation}
where $\phi_t^{-1}(x) \equiv x_0$ is the initial condition that
propagates to $x$ in the time $t$ and $\phi_t$ is the phase space
flux, so that $x=\phi_t(x_0)$. In words, the numerical value of
the probability $P$ at point $x$ and time $t$ has the same
numerical value of the probability at point $x_0$ of the {\em
initial distribution}. This value is carried over to $x$ in the
time $t$. Liouville's theorem guarantees the conservation of
probability at all times. For integrable systems $\phi_t(x_0)$
might be determined analytically, otherwise numerical calculations
can be performed. Notice that $P(x,t)$ is itself a constant of
motion, but this is not enough to guarantee the integrability of
the system, since it is not generally in involution with the
Hamiltonian.
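As an illustration of Eq.(\ref{timeevol}), the snippet below (a one-degree-of-freedom sketch with an assumed unit-frequency harmonic flow, in units where $\hbar=1$) transports a Gaussian ensemble with the inverse flow and checks that both $\int P\,dq\,dp$ and $\int P^2\,dq\,dp$ are conserved, the latter being a direct consequence of Liouville's theorem:

```python
import numpy as np

omega = 1.0

def flow_inv(q, p, t):
    # Inverse harmonic flow phi_t^{-1}: rotate phase-space points backwards.
    c, s = np.cos(omega * t), np.sin(omega * t)
    return c * q - s * p / omega, omega * s * q + c * p

def P0(q, p):
    # Initial Gaussian ensemble centered at (q, p) = (1, 0).
    return np.exp(-((q - 1.0) ** 2 + p ** 2) / 2.0) / (2.0 * np.pi)

grid = np.linspace(-6.0, 6.0, 241)
q, p = np.meshgrid(grid, grid)
dA = (grid[1] - grid[0]) ** 2
for t in (0.0, 0.7, 2.3):
    Pt = P0(*flow_inv(q, p, t))     # P(x, t) = P(phi_t^{-1}(x), 0)
    assert np.isclose(Pt.sum() * dA, 1.0, atol=1e-4)               # probability
    assert np.isclose((Pt**2).sum() * dA, 1/(4*np.pi), atol=1e-4)  # int P^2
```

For this integrable flow $\phi_t^{-1}$ is analytic; for the non-integrable case of Sec.~III it must be computed numerically.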
The marginal probability distribution
\begin{equation}
\label{p1t}
P_1(q_1,p_1,t) = \int dq_2 dp_2 P(q_1,p_1,q_2,p_2,t)
\end{equation}
allows us to define the {\em classical statistical linear entropy}
\begin{equation}
\label{deltaclas}
S_1^{cl}(t) = 1 - \frac{ \int dq_1 dp_1 P_1^2(q_1,p_1,t)}
{ \int dq_1 dp_1 P_1^2(q_1,p_1,0)} \;.
\end{equation}
The normalization is necessary for dimensional reasons and to
guarantee that $S_1^{cl}(0)=0$.
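To make Eq.(\ref{deltaclas}) concrete, the following Monte Carlo sketch (a hypothetical example, not the integration scheme used in Sec.~III) transports samples of an initial unit Gaussian through the linear shear generated over unit time by an interaction $\lambda q_1q_2$. A linear symplectic flow keeps the distribution Gaussian, for which $\int P_1^2\,dq_1dp_1=1/(4\pi\sqrt{\det\Sigma_1})$ with $\Sigma_1$ the marginal covariance, so the normalized CSLE reduces to $1-1/\sqrt{\det\Sigma_1}$:

```python
import numpy as np

lam = 0.75
rng = np.random.default_rng(1)
q1, p1, q2, p2 = rng.standard_normal((4, 400_000))   # samples of P(x, 0)

# Time-1 flow of H_I = lam * q1 * q2: the momenta shear, the positions do not.
p1t = p1 - lam * q2
Sigma1 = np.cov(np.stack([q1, p1t]))                 # marginal covariance

S1_cl = 1.0 - 1.0 / np.sqrt(np.linalg.det(Sigma1))
# Exact value for this flow: 1 - 1/sqrt(1 + lam^2) = 0.2.
assert abs(S1_cl - 0.2) < 0.01
```

The marginal purity drops because the $(q_1,p_1)$ distribution spreads while the full four-dimensional distribution is merely rearranged.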
The initial classical distribution corresponding to a given quantum
pure state $\rho(0)$ is chosen as the coherent states phase space
projection
\begin{equation}
\label{classicalp}
P(x,0) =\frac{1}{N}
\langle\alpha_1,\alpha_2|\rho(0)|\alpha_1,\alpha_2\rangle,
\end{equation}
where $N$ is a normalization constant and $\alpha_i=(q_i+\imath
p_i)/\sqrt{2\hbar}$ is the usual complex parametrization of the
coherent states. Eq. (\ref{classicalp}) is the normalized Husimi
distribution, which is positive definite by construction. It is known
that the Husimi distribution does not reproduce the correct marginal
probabilities. However, the constraint of a positive probability
distribution excludes the Wigner function \cite{6b} as a classical
initial distribution.
Eqs.(\ref{emaranhamento}) and (\ref{s1s2}) show that the quantum
linear entropies $S_1$ and $S_2$ can be obtained only from the
diagonal elements of the global density matrix. Diagonal elements, on
the other hand, have classical analogues. Therefore it makes sense to
compare the dynamics of $S_1$ and $S_2$, which may be written either
in terms of only off-diagonal or only diagonal elements, with the
classical correlations.
An important difference between the classical and quantum linear
entropies can be made explicit in the coherent state basis. For
one dimensional systems we obtain
\begin{equation}
S(t)=1-\int \frac{d^2\alpha}{\pi} \Big[\langle\alpha|U(t)\rho^2(0)U^{\dag}(t)
|\alpha\rangle\Big].
\end{equation}
On the other hand, defining the Liouvillian operator $\cal{L}$ such
that $\{\mathcal{H},P\}=\mathcal{L}P$, Eq.(\ref{Lio}) can be formally
integrated and the CSLE can be written as
\begin{equation}
\label{classicals}
S^{cl}(t)=1-\int \frac{d^2\alpha}{\pi}
\left[\frac{\left(e^{\mathcal{L}t}
\langle\alpha|\rho(0)|\alpha\rangle\right)^2}{M}\right],
\end{equation}
where $M=N^2/\pi$. In particular, at $t=0$ the integrand of the
classical entropy depends on $\langle \alpha
|\rho(0)|\alpha\rangle^2$ instead of
$\langle\alpha|\rho^2(0)|\alpha\rangle$. Another important
difference is of course in the dynamics: the quantum evolution is
determined by non-commuting operators, which brings a number of
corrections to the classical formalism. For more degrees of
freedom, although the time evolution cannot be expressed in such a
simple way, the two differences pointed out above remain true.
\subsection{Quantum and Classical Linear Mutual Information}
Although the CSLE seems to be the natural classical analogue of
the quantum linear entropy, it is not symmetric between the
subsystems. Indeed, if the initial probability distributions of
each subsystem are not equal (for example a Gaussian distribution
for one subsystem and a Poissonian distribution for the other) we
find that $S_1^{cl}(t) \neq S_2^{cl}(t)$. The quantum linear
entropy, on the other hand, always satisfies $S_1(t) = S_2(t)$.
Another drawback of the definition Eq.(\ref{deltaclas}) is that it
gives $S_1^{cl}(0)=S_2^{cl}(0)=0$ for any initial classical
distribution, correlated or not. Thus, it is interesting to
define a more general quantity that avoids these difficulties and
that takes into account the contributions of both subsystems
entropies symmetrically. At the quantum level we define the {\sl
quantum linear mutual information} (QLMI) that depends on the QLE
as $I\equiv S(\rho_1\otimes\rho_2)-S(\rho)$. This is based on the
{\sl Von Neumann mutual information} \cite{vedral97,11},
$I_{VN}=S_{VN}(\rho_1 \otimes\rho_2)-S_{VN}(\rho)$, where $S_{VN}$
is the Von Neumann entropy. By using Eq.(\ref{d1}) we can write
the QLMI as (notice that the linear entropy is not additive
\cite{manfredi})
\begin{equation}
I(t)=S_1(t)+S_2(t)-S_1(t)\,S_2(t)-S(t),
\label{iq}
\end{equation}
where $S_{i}$ and $S$ are the subsystem and the global linear
entropies respectively. For pure initial states $S(t)=0$ and
$I(t)=S_1+S_2 -S_1 S_2$.
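For a globally pure state, Eq.(\ref{iq}) with $S(t)=0$ is equivalent to $I=1-\mathrm{tr}(\rho_1^2)\,\mathrm{tr}(\rho_2^2)$. The short check below (a hypothetical random-state example) verifies this algebraic identity numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
c = rng.standard_normal((2, 5)) + 1j * rng.standard_normal((2, 5))
c /= np.linalg.norm(c)                   # normalized pure bipartite state

rho1, rho2 = c @ c.conj().T, c.T @ c.conj()
t1 = np.real(np.trace(rho1 @ rho1))      # subsystem purities
t2 = np.real(np.trace(rho2 @ rho2))
S1, S2 = 1.0 - t1, 1.0 - t2

I_sum = S1 + S2 - S1 * S2                # mutual information, pure global state
I_trace = 1.0 - t1 * t2                  # product-trace form
assert np.isclose(I_sum, I_trace)
```

The identity $(1-t_1)+(1-t_2)-(1-t_1)(1-t_2)=1-t_1t_2$ holds for any purities, pure state or not; purity only enters through $S(t)=0$.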
We also define a quantity that we call {\sl classical statistical
linear mutual information} (CSLMI) as
\begin{equation}
I^{cl}(t)=S_1^{cl}(t)+S_2^{cl}(t)-S_1^{cl}(t)\,
S_2^{cl}(t)-S^{cl}(t).
\label{icl}
\end{equation}
Once again $S^{cl}(t)=0$ for pure states. Note that the above
definition can also be written in the form $I^{cl}=S^{cl}(P_1 \,
P_2)-S^{cl}(P)$ where $S^{cl}(P)$ is given by Eq.(\ref{deltaclas})
with $P_1$ replaced by $P$.
We emphasize that the quantum and classical linear mutual information
defined above are also non-separability measures, like the QLE and
CSLE. This can be made explicit by re-writing Eqs.(\ref{iq}) and
(\ref{icl}) as
\begin{eqnarray}
I(t)&=&tr[\rho^2(t)-\rho_1^2(t)\otimes\rho_2^2(t)]\nonumber \\
&=& 1- tr[\rho_1^2(t)\otimes\rho_2^2(t)],\\ \nonumber \\
I^{cl}(t)&=&\frac{\int dx [P^2(t)-P_1^2(t)P_2^2(t)]}
{\int dx P^2(0)}\nonumber \\
&=& 1-\frac{\int dx P_1^2(t)P_2^2(t)}
{\int dx P^2(0)} .
\label{iqic}
\end{eqnarray}
The last equality is true since ${\int dx P^2(0)} ={\int dx P^2(t)}$.
When the two subsystems have the same type of initial
distributions, both $I$ and $I^{cl}$ have the same contents of their
respective linear entropies. Moreover, since $I^{cl}$ is symmetric
with respect to the two subsystems, it does not present difficulties
when the initial distributions are different.
In the next section we compare $I(t)$ with $I^{cl}(t)$ for three
different cases.
\section{Results}
In order to show that our definition of a CSLMI is physically sensible,
we consider the following classical Hamiltonian with two degrees of
freedom
\begin{equation}
{\cal H}(q_1,p_1,q_2,p_2)= {\cal H}_1 + {\cal H}_2 + {\cal H}_I \; ,
\label{Hcl}
\end{equation}
where ${\cal H}_i = \frac{1}{2}(p_i^2+\omega_i^2 q_i^2)$ are harmonic
oscillators and ${\cal H}_I$ is an interaction term. In what follows
we shall calculate both CSLMI and QLMI for three different couplings: a
simple bilinear, a non-integrable and a rotating-wave approximation
(RWA). In the first and
last cases the calculations can be performed analytically.
1) {\em Bilinear Coupling ${\cal H}_I(q_1,p_1,q_2,p_2)= \lambda
q_1 q_2$} \cite{7}\\
In this case, the classical equations of motion can be easily
integrated and we find
\begin{eqnarray}
\label{solosc}
q_{1}(t) = q_1 C_+(t) + q_2 C_-(t) + p_1 S_+(t) + p_2 S_-(t), \\
p_{1}(t) = p_1 C_+(t) + p_2 C_-(t) - q_1 S_+(t) - q_2 S_-(t), \nonumber
\end{eqnarray}
and similar expressions for $q_{2}(t)$ and $p_{2}(t)$. $C_{\pm}$ and
$S_{\pm}$ are quasi-periodic functions of time
\begin{eqnarray}
\label{cssn}
C_{\pm}(t) = \frac{1}{2} \left(\cos \Omega_x t \pm \cos \Omega_y t\right),
\nonumber \\
S_{\pm}(t) = -\frac{1}{2}\left({\Omega_x} \sin\Omega_x t \pm {\Omega_y}
\sin \Omega_y t \right),
\end{eqnarray}
where $\Omega_x=\sqrt{\omega^2+\lambda}$ and
$\Omega_y=\sqrt{\omega^2-\lambda}$.
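The normal-mode frequencies $\Omega_x$ and $\Omega_y$ follow from diagonalizing the potential matrix of the coupled pair (here assuming equal bare frequencies $\omega_1=\omega_2=\omega$, as the notation $\Omega_{x,y}=\sqrt{\omega^2\pm\lambda}$ suggests). A quick numerical check:

```python
import numpy as np

omega, lam = 1.0, 0.9
# Potential matrix of H = (p1^2 + p2^2)/2 + omega^2 (q1^2 + q2^2)/2 + lam q1 q2.
V = np.array([[omega**2, lam],
              [lam,      omega**2]])
freqs = np.sqrt(np.linalg.eigvalsh(V))   # ascending: [Omega_y, Omega_x]
assert np.allclose(freqs, [np.sqrt(omega**2 - lam), np.sqrt(omega**2 + lam)])
```

Note that $\Omega_y$ is real only for $\lambda<\omega^2$; beyond that the coupled system is unstable.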
As initial phase space distribution we choose a Gaussian centered at
$x_c=(q_{1c},p_{1c},q_{2c},p_{2c})$,
\begin{eqnarray}
P(x,0)=\frac{1}{4\pi^2\hbar^2}\exp\left\{-\frac{(x-x_c)^T(x-x_c)}{2\hbar}
\right\}.
\label{rho0}
\end{eqnarray}
This is the classical density as given by Eq.(\ref{classicalp}),
corresponding to a coherent state initial wave-function.
The probability distribution at time $t$ is obtained using
Eqs. (\ref{timeevol}) and (\ref{solosc}). The result can be written
in the form
\begin{eqnarray}
P(x,t)= \frac{1}{4\pi^2\hbar^2} e^{ -x^T {\bf A} x + 2 B x + C},
\label{probosc}
\end{eqnarray}
where the matrix ${\bf A}$ and the vector $B$ are functions of
$C_{\pm}$, $S_{\pm}$ and $\hbar$. Replacing Eq.(\ref{probosc}) into
(\ref{icl}) we get
\begin{equation}
\label{deltaclassosc}
I^{cl}(t) = 1 - \frac{1}{64\hbar^6} \,
\frac{1}{\det{(\beta)}^2 \det{(\alpha-\gamma \beta^{-1} \gamma^T)}},
\end{equation}
where $\alpha$, $\beta$ and $\gamma$ are $2\times 2$ matrices
whose elements are combinations of $C_{\pm}$, $S_{\pm}$ and
$T_{\pm}$, with
\begin{eqnarray}
T_{\pm}(t) = -\left(\frac{\sin \Omega_x t}{\Omega_x} \pm
\frac{\sin \Omega_y t}{\Omega_y} \right) \nonumber \;.
\end{eqnarray}
It is interesting to note that all dependence on the center of
the initial distribution drops out.
The QLMI can also be computed analytically. Here, we simply write
down the result:
\begin{equation}
\label{deltaquantosc}
I(t) = 1 - \left[ \frac{4
\hbar^2(1-\lambda^2)}{|D_x(t)| |D_y(t)| \,
\sqrt{\det{(O)}}} \right]^4.
\end{equation}
The coefficients $D_i(t)$, $i=x,y$, are periodic functions of frequency
$\Omega_i$. $O$ is an $8\times 8$ quasi-periodic matrix depending
on both $\Omega_x$ and $\Omega_y$. Expressions
(\ref{deltaclassosc}) and (\ref{deltaquantosc}) are actually
identical, which shows that our definition of $I^{cl}$ and the
choice of its normalization are both appropriate. Fig. \ref{fig1} displays
an example for $\lambda = 0.9$ that shows $I_1(t) = I_1^{cl}(t)$.
2) {\em Nonlinear Coupling} \\
The exact coincidence between the classical statistical and quantum
linear mutual informations just presented certainly has to do with both the
quadratic nature of the Hamiltonian and the Gaussian initial distributions.
To study the role of non-linearities we consider the interaction Hamiltonian
\begin{equation}
{\cal H}_I(q_1,p_1,q_2,p_2) = - q_1p_1p_2+\frac{1}{2} {q_1}^2 {q_2}^2.
\label{Nel}
\end{equation}
The total Hamiltonian ${\cal H}$ is a canonically transformed version
of a well studied system known as the Nelson potential
\cite{8}. Fig. \ref{fig2} shows a typical mixed Poincar\'e section
for parameter values $\omega_1=\sqrt{0.1}$, $\omega_2=\sqrt{2}$ and
energy $E=0.05$.
The calculation of $I_1^{cl}(t)$ (Eq.(\ref{deltaclas})) now has to
be performed numerically. We use the same initial distribution,
Eq.(\ref{rho0}). $x_0=\phi^{-1}_t(x)$ was calculated using
standard Runge-Kutta routines. The integrations in Eqs.(\ref{p1t})
and (\ref{deltaclas}) were performed both by Monte Carlo and by direct
trapezoidal techniques; the results of the two methods agree in
the time intervals we have considered. In Fig.(\ref{fig3}a) we
show $I(t)$ and the corresponding $I^{cl}(t)$. The initial density
matrix is the direct product of two coherent states, and the
initial classical probability distribution is that given by
(\ref{classicalp}). The center of the coherent states is in the
{\sl chaotic} region of the corresponding Poincar\'e section. The
resemblance between the QLMI and CSLMI is quite good even after
two oscillations. Fig.(\ref{fig3}b) shows a similar calculation
with the coherent states centered at the {\sl regular} region.
Once again the classical and quantum results coincide for short
times. Surprisingly, the two entropy-like quantities agree for
{\sl longer times} in the chaotic case. For both the regular and
chaotic cases, the classical and quantum calculations agree very
well for short times, although the classical mutual information is
systematically larger than its quantum counterpart for times
larger than about $1$. Also, the linear mutual information grows
faster for the chaotic case than for the regular case, in
accordance with similar previous results for the linear entropy
\cite{kyoko}. Finally we note that the classical linear entropies
Eq.(\ref{deltaclas}) may become negative. Similar behaviors were
reported in refs. \cite{wehrl, manfredi}. The linear mutual
information, on the other hand, is always positive and better
suited for measuring the classical loss of separability.
3) {\em RWA Coupling:} ${\cal H}_I(q_1,p_1,q_2,p_2)= \lambda (q_1
q_2+p_1 p_2)$\\
This is the classical version of a rotating-wave approximation of
the interaction Hamiltonian treated in example (1). In this case,
Gaussian distributions in each subspace evolve coherently: $I_1(t)=
I_1^{cl}(t)=0$. However, this is a very particular case of a preferred
basis state \cite{6a,9}, and the same kind of coherent evolution is not
expected for more general initial states. For instance, in the case of
{\bf Fock states} ($|\psi_0 \rangle=|1\rangle\otimes|1\rangle$) the
classical phase space distribution, given by the Husimi distribution, is
\begin{equation}
P(x,0) =\frac{e^{-\frac{(x-x_c)^T(x-x_c)}{2\hbar}}}{4\pi^2\hbar^2}
\prod\limits_{k=1}^2\left(\frac{q_k^2+p_k^2}{2\hbar} \right).
\end{equation}
In the analytical calculation of the CSLMI we use the
super-operator method \cite{10} modified to conform with classical
Poisson brackets computations. We find
\begin{equation}
I(t)=8u(t)[2-8 u(t)]
\end{equation}
and
\begin{equation}
I^{cl}(t)=u(t)[2- u(t)],
\end{equation}
where
\begin{equation}
u(t)=\displaystyle{\frac{\sin^2 (2\lambda t)}{32}
\left[5+3 \cos{(4 \lambda t)} \right]} = S_1^{cl}(t) = S_2^{cl}(t)\;.
\end{equation}
The quantum and the classical statistical linear entropies have
similar qualitative behaviors, but the amplitudes of their
oscillations are markedly different. They exhibit the same
purification period $\tau_0=\frac{\pi}{2 \lambda}$.
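The closed-form expressions above are easy to probe numerically; the sketch below (our variable names, illustrative $\lambda$) verifies that both entropy-like quantities vanish at the purification period $\tau_0=\pi/(2\lambda)$ and that the quantum amplitude dominates the classical one:

```python
import numpy as np

lam = 1.0  # coupling strength (illustrative)

def u(t):
    # u(t) = sin^2(2*lam*t)/32 * [5 + 3*cos(4*lam*t)] = S_1^cl(t) = S_2^cl(t)
    return np.sin(2*lam*t)**2 / 32.0 * (5.0 + 3.0*np.cos(4*lam*t))

def I_quantum(t):    # QLMI for |1> x |1>
    return 8.0*u(t) * (2.0 - 8.0*u(t))

def I_classical(t):  # CSLMI for the same initial state
    return u(t) * (2.0 - u(t))

tau0 = np.pi / (2.0*lam)   # purification period shared by both quantities
t_peak = np.pi / (4.0*lam) # where sin^2(2*lam*t) = 1 and cos(4*lam*t) = -1
```

At $t_{\rm peak}$ one finds $u=1/16$, hence $I=0.75$ while $I^{cl}=31/256\approx0.12$, in line with the markedly different amplitudes noted above.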
Finally, we consider one of the subsystems in a coherent state and
the other in the number state ($|\psi_0 \rangle=|\alpha_1\rangle
\otimes|1\rangle$). In this case, for $\alpha_1=0$, we obtain
\begin{equation}
\begin{array}{ll}
I(t) & =\displaystyle{\frac{\sin^2 (2\lambda t)}{8}
\left[7 + \cos{(4 \lambda t)} \right]} \\ \\
I^{cl}(t) & =\displaystyle{\frac{\sin^2 (2\lambda t)}{64}
\left[15 + \cos{(4 \lambda t)} \right]} \;.
\end{array}
\end{equation}
These quantities are plotted in Fig. (\ref{fig4}). The
qualitative agreement of the oscillations is remarkable, though
the amplitudes are quite different. If the classical information
is re-scaled so that its maximum at $\pi/4$ coincides with the
quantum plot, the two curves become very similar. This might seem
a consequence of the normalization introduced in $I^{cl}$ and not
present in $I$ (see Eq.(\ref{iqic})). Unfortunately this is not
so: the classical normalization is indeed necessary (at least for
dimensional reasons) and choosing it on a case-by-case basis does
not seem to bring any important physical information. Our
interpretation of the differences in the amplitudes is that they
reflect the non-classical character of the initial phase space
distribution.
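A short numerical sketch (illustrative, with $\lambda=1$ as in Fig. (\ref{fig4})) makes this amplitude mismatch quantitative: at the common maximum $t=\pi/4$ the two expressions differ by a fixed factor.

```python
import numpy as np

lam = 1.0  # w = lambda = hbar = 1 as in Fig. 4

def I_quantum(t):    # QLMI for |0> x |1>
    return np.sin(2*lam*t)**2 / 8.0  * (7.0 + np.cos(4*lam*t))

def I_classical(t):  # CSLMI for the same initial state
    return np.sin(2*lam*t)**2 / 64.0 * (15.0 + np.cos(4*lam*t))

t_max = np.pi / 4.0                     # both quantities peak here for lam = 1
scale = I_quantum(t_max) / I_classical(t_max)
print(scale)  # ratio of amplitudes: 24/7, about 3.43
```

Rescaling $I^{cl}$ by this constant factor overlays the two curves, as described in the text.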
\section{Conclusion}
In this work we have defined the {\sl classical statistical linear
mutual information}, a tool that quantifies the non-separability
of classical statistical distributions representing pure states of
a bipartite Hamiltonian system. The comparison of the quantum and
classical linear mutual information provides a measure of how much
of the quantum loss of purity is due to intrinsic quantum effects
and how much is related only to the probabilistic character of the
initial distributions. We computed the classical and quantum
mutual information for a system of two oscillators subjected to
different types of coupling. We found that the two measures follow
each other closely in the case of initially separable Gaussian
states. For the case of linear coupling the classical and quantum
mutual information are identical, revealing the classical nature
of the system and the coherent evolution of the Gaussian
wave-packets. For non-linear couplings the classical mutual
information follows the quantum one for short times. This
follows from the fact that the short-time quantum evolution can be
formulated in terms of the Liouville formalism
\cite{ballentine,angelo}. This property is desirable for a measure
of classical correlations in view of Ehrenfest's theorem
\cite{ehren}, and confirms that our definition is appropriate. For
longer times the folding of the wave-packets certainly introduces
self-interferences that have no classical counterpart. When
quantum interferences become substantial \cite{rf} the classical
and quantum mutual information become significantly different. For
open systems, where interferences are eliminated by the coupling
with an external environment, we conjecture that the classical and
quantum mutual information are going to coincide for much longer
times.
We have also investigated the role of non-Gaussian types of
initial distributions in the classical separability. We have shown
that also in this case the CSLMI $I^{cl}(t)$ is a meaningful
quantity to measure the classical non-separability. Our example of
linearly coupled harmonic oscillators (RWA) shows that the time
evolution of the classical mutual information is qualitatively
similar to that of its quantum counterpart. The amplitude of the
classical oscillations, however, is markedly different from the
quantum ones, reflecting the non-classical nature of the initial
state.
\begin{center}
{\bf Acknowledgments}
\end{center}
We acknowledge Conselho Nacional de Desenvolvimento
Cient\'{\i}fico e Tecnol\'ogico (CNPq) and Funda\c{c}\~ao de
Amparo a Pesquisa de S\~ao Paulo (FAPESP)(Contract No.02/10442-6)
for financial support.
\begin{figure}
\caption{QLMI $I(t)$ and CSLMI $I^{cl}(t)$ for the linear coupling with $\lambda=0.9$.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{\small Poincar\'e section for the Nelson potential at $E=0.05$,
$q_1=0$ and $p_1>0$.
The symbols stand for the centers of the coherent states: filled
circle for chaotic region and filled square for regular region.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{ QLMI (continuous line, $\hbar=0.05$) and CSLMI (dashed line)
for the nonlinear coupling and initial Gaussian distributions
centered on a chaotic orbit (a) and on a regular orbit (b).}
\label{fig3}
\end{figure}
\begin{figure}
\caption{\small Quantum and classical mutual linear information for the RWA
coupling for $w=\lambda=\hbar=1$ and initial state
$|\psi_0 \rangle=|0\rangle\otimes|1\rangle$.
The continuous line shows the quantum result $I(t)$ and the
dashed line the classical $I^{cl}(t)$.}
\label{fig4}
\end{figure}
\end{document} | math | 29,436 |
\begin{document}
\title{Systemic Risk and Heterogeneous Mean Field Type Interbank Network}
\author{Li-Hsien Sun\thanks{Institute of Statistics, National Central University, Chung-Li, Taiwan, 32001 {\em [email protected]}. Work supported by MOST grant 108-2118-M-008-002-MY2}}
\date{}
\maketitle
\begin{abstract}
We study a system of heterogeneous interbank lending and borrowing based on the relative average of log-capitalization, given by a linear combination of the average within groups and the ensemble average, and describe the evolution of log-capitalization by a system of coupled diffusions. The model incorporates a game feature with homogeneity within groups and heterogeneity between groups, where banks search for the optimal lending or borrowing strategies by minimizing heterogeneous linear quadratic costs in order to avoid approaching the default barrier.
Due to the complexity of the lending and borrowing system, the closed-loop Nash equilibria and the open-loop Nash equilibria are both driven by coupled Riccati equations. In the two-group case where the number of banks is sufficiently large, the existence of the equilibria is guaranteed by the solvability of the coupled Riccati equations as the number of banks in each group goes to infinity. The equilibria consist of a mean-reverting term identical to the one of the one-group game and group averages owing to heterogeneity. In addition, the corresponding heterogeneous mean field game with an arbitrary number of groups is discussed, and the existence of the $\varepsilon$-Nash equilibrium in the general case of $d$ heterogeneous groups is verified. Finally, as a financial implication, we observe that the Nash equilibria are governed by the mean-reverting term and a linear combination of the ensemble averages of the individual groups, and we study the influence of the relevant parameters on the liquidity rate through numerical analysis.
\end{abstract}
\textbf{Keywords:} Systemic risk, inter-bank borrowing and lending system, heterogeneous group, relative ensemble average, Nash equilibrium, Mean Field Game.
\section{Introduction}\label{sec:intro}
Toward a deeper understanding of the systemic risk created by lending and borrowing behavior in a heterogeneous environment, we extend the model studied in \cite{R.Carmona2013} from one homogeneous group to several groups with heterogeneity.
The evolution of the monetary reserve is described by a system of interacting diffusions with homogeneity within groups and heterogeneity between groups. Banks intend to borrow money from a central bank when they remain below the global average of capitalization, treated as the critical level, and lend money to the central bank when they stay above the same critical level, by minimizing their cost of lending or borrowing given by linear quadratic objective functions with varied parameters. Furthermore, motivated by \cite{Touzi2015}, instead of the global ensemble average we propose the relative ensemble average, a linear combination of the group average and the global ensemble average. In the case of finitely many players in heterogeneous groups, we solve for the closed-loop Nash equilibria using the dynamic programming principle and the corresponding fully coupled backward Hamilton-Jacobi-Bellman (HJB) equations. In addition, through the Pontryagin minimization principle and the corresponding adjoint forward-backward stochastic differential equations (FBSDEs), we obtain the open-loop Nash equilibria.
Due to the complexity induced by the heterogeneity, the closed-loop Nash equilibria and the open-loop Nash equilibria are given by coupled Riccati equations. Hence, in the two-group case, we propose a solvability condition for the existence of the equilibria as the number of banks in each group goes to infinity, in the sense that for a sufficiently large number of banks the existence of the closed-loop and open-loop Nash equilibria is guaranteed. Furthermore, we discuss heterogeneous mean field games (MFGs) with common noises where the number of groups is arbitrary. Owing to the complexity generated by the common noises, the $\varepsilon$-Nash equilibria cannot be obtained using the HJB equations. Hence, the adjoint FBSDEs are applied to solve for the equilibria. The existence of the $\varepsilon$-Nash equilibria is also proved under some sufficient conditions. As in \cite{R.Carmona2013}, the closed-loop Nash equilibria and the open-loop Nash equilibria converge to the $\varepsilon$-Nash equilibria.
We observe that, owing to the linear quadratic regulator, the equilibria are linear combinations of the mean-reverting term identical to the one of the one-group system discussed in \cite{R.Carmona2013} and the group ensemble averages given by heterogeneity. In addition, the numerical studies show that if banks prefer tracing the global ensemble average rather than the average of their own groups, they tend to increase the liquidity rate, and a larger sample size also implies a larger liquidity rate.
In the literature, this type of interaction in continuous time has been studied in several models. \cite{Fouque-Sun, R.Carmona2013} describe systemic risk based on coupled Ornstein-Uhlenbeck (OU) type processes. An extension of this OU type model with delayed obligations is proposed by \cite{Carmona-Fouque2016}. \cite{Fouque-Ichiba, Sun2016} investigate system crashes through Cox-Ingersoll-Ross (CIR) type processes. The rare event related to systemic risk given by a bistable model is discussed in \cite{Garnier-Mean-Field, GarnierPapanicolaouYang}. The stabilization by a central agent in the model of \cite{Garnier-Mean-Field, GarnierPapanicolaouYang} is provided by \cite{Papanicolaou-Yang2015}.
The asymptotic equilibria, called $\varepsilon$-Nash equilibria, of stochastic differential games with one homogeneous group are solved by the MFGs proposed by \cite{MFG1, MFG2, MFG3}. Here, the interactions are given by empirical distributions, and the solution satisfies a coupled system consisting of an HJB equation backward in time and a Kolmogorov equation forward in time. Independently, the Nash certainty equivalence, treated as a similar case of MFGs, is also developed by \cite{HuangCainesMalhame1,HuangCainesMalhame2}. In addition, the probabilistic approach in the form of FBSDEs to obtain $\varepsilon$-Nash equilibria is studied in
\cite{Bensoussan_et_al, CarmonaDelarueLachapelle, Carmona-Lacker2015,MFG-book-1}. \cite{Lacker-Webster2014, Lacker2015, MFG-book-2} discuss MFGs with common noise and the master equations. In particular, \cite{Lacker-Zari2017} study optimal investment under heterogeneous relative performance in the mean field limit.
The paper is organized as follows. In Section \ref{Heter}, we analyze the case of two heterogeneous groups with relative performance and study the corresponding closed-loop Nash equilibria in Section \ref{Sec:NE}. In particular, Section \ref{MFG} is devoted to solving for the $\varepsilon$-Nash equilibria with common noises in MFGs with heterogeneous groups using the coupled FBSDEs. The financial implication is illustrated in Section \ref{FI}. The concluding remarks are given in Section \ref{conclusions}.
\section{Heterogeneous Groups}\label{Heter}
According to the interbank lending and borrowing system discussed in \cite{R.Carmona2013}, it is natural to consider a model of interbank lending and borrowing containing $d$ groups, with $N_k$ denoting the number of banks in group $k=1,\cdots,d$ and $N=\sum_{j=1}^dN_j$. The $i$-th bank in group $k$ intends to obtain its optimal strategy by minimizing its own linear quadratic cost function
\begin{eqnarray}
\label{objective}J^{(k)i}(\alpha)=\mathbb{E} \bigg\{\int_{0}^{T} f^{N}_{(k)}(X^{(k)i}_t, X^{-(k)i}_t, \alpha^{(k)i}_t)
dt+ g_{(k)}(X^{(k)i}_T, X^{-(k)i}_T )\bigg\},
\end{eqnarray}
where $X=(X^{(1)1},\cdots,X^{(d)N_d})$, $x=(x^{(1)1},\cdots,x^{(d)N_d})$, $\alpha=(\alpha^{(1)1},\cdots,\alpha^{(d)N_d})$, and $ X^{-{(k)i}}=( X^{(1)1},\cdots, X^{(k)i-1}, X^{(k)i+1},\cdots, X^{(d)N_d})$ where the running cost is
\begin{eqnarray}
&&f^N_{(k)}(x^{(k)i}, x^{-(k)i},\alpha)=\frac{\alpha^2}{2}-q_k\alpha \left(\overline x^{\lambda_k} -x^{(k)i}\right)
+\frac{\varepsilon_k}{2}\left(\overline x^{\lambda_k} -x^{(k)i}\right)^2,\label{running_cost}
\end{eqnarray}
and terminal cost is
\begin{equation}\label{teminal_cost}
g^N_{(k)}(x^{(k)i}, x^{-(k)i})=\frac{c_k}{2}\left(\overline x^{\lambda_k} -x^{(k)i} \right)^2,
\end{equation}
with $x^{-(k)i}=( x^{(1)1},\cdots, x^{(k)i-1}, x^{(k)i+1},\cdots, x^{(d)N_d})$, where the relative ensemble average is
\[
\overline x^{\lambda_k}=(1-\lambda_k)\overline x^{(k)}+\lambda_k\overline{x}
\]
where the global average of capitalization and the group average of capitalization are written as
\[
\overline{x} =\frac{1}{N}\sum_{k=1}^d\sum_{i=1}^{N_k}x^{(k)i} ,\quad\overline{x}^{(k)}=\frac{1}{N_k}\sum_{i=1}^{N_k}x^{(k)i},
\]
under the constraint
\begin{eqnarray}\label{diffusions}
\nonumber dX^{(k)i}_t &=& (\alpha_{t}^{(k)i}+\gamma^{(k)}_{t})dt\\
&&+\sigma_k\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)i}_t\right)\right),
\end{eqnarray}
for $i=1,\cdots,N_k$, with nonnegative diffusion parameters $\sigma_k$ and nonnegative deterministic growth rates $\gamma^{(k)}$ in the $L^\infty$-space of bounded measurable functions on $[0,T]$. We further assume that $W^{(k)i}_t$ for all $k=1,\cdots,d$ and $i=1,\cdots,N_k$ are standard Brownian motions, and that $W^{(0)}_t$ and $W^{(k)}_t$ for $k=1,\cdots,d$ are the common noises between groups and within groups, with the parameters $\rho$ and $\rho_k$ for $k=1,\cdots,d$ denoting the correlation between groups and within groups, respectively. Note that all Brownian motions are defined on a filtered probability space $(\Omega,{\cal{F}},{\cal{P}},\{{\cal{F}}_t\})$; see Definition 5.8 of Chapter 2 in \cite{Karatzas2000}. The initial value $X_0^{(k)i}$ may also be a square integrable random variable $\xi^{(k)}$ for $k=1,\cdots,d$ independent of the Brownian motions defined on $(\Omega,{\cal{F}},{\cal{P}},\{{\cal{F}}_t\})$. Namely, the system of interbank lending and borrowing exhibits homogeneity within groups and heterogeneity between groups. Note that $\alpha_\cdot$ is a progressively measurable control process, and $\alpha^{(k)i}_\cdot$ is admissible if it satisfies the integrability condition
\begin{equation}\label{admissible}
\mathbb{E} \int_0^T\left|\alpha_s^{(k)i}\right|^2ds<\infty.
\end{equation}
In addition, the parameters $q_k$, $\varepsilon_k$, and $c_k$ are positive and satisfy $q_k^2\leq \varepsilon_k$ in order to guarantee that $\alpha\rightarrow f_{(k)}^{N}(x,\alpha)$ is convex for any $x$ and $x\rightarrow f_{(k)}^{N}(x,\alpha)$ is convex for any $\alpha$.
The parameters $0\leq\lambda_k\leq 1$ for $k=1,\cdots,d$ describe the relative weight given to the group average and the global average. A large $\lambda_k$ means that banks trace the global average rather than the group one. Finally, $q_k$ represents the incentive for lending and borrowing of banks in group $k$, as described in \cite{Carmona-Fouque2016,R.Carmona2013}.
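To make the role of $\lambda_k$ and of the condition $q_k^2\leq\varepsilon_k$ concrete, the following sketch (illustrative data and parameter values, not from the paper) computes the relative ensemble average and checks the convexity of the running cost:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(1.0, 0.2, size=10)   # log-capitalizations, group 1 (illustrative)
x2 = rng.normal(0.0, 0.5, size=40)   # group 2

xbar  = np.concatenate([x1, x2]).mean()  # global ensemble average
xbar1 = x1.mean()                        # group-1 average

def relative_average(lam_k, group_mean, global_mean):
    # relative ensemble average: (1 - lambda_k)*group mean + lambda_k*global mean
    return (1.0 - lam_k) * group_mean + lam_k * global_mean

# Convexity of the running cost in (alpha, d), d = (relative average) - x^{(k)i}:
# the Hessian of alpha^2/2 - q*alpha*d + (eps/2)*d^2 is [[1, -q], [-q, eps]],
# which is positive semi-definite exactly when q^2 <= eps.
q, eps = 0.8, 1.0
H = np.array([[1.0, -q], [-q, eps]])
```

For $\lambda_k=0$ banks track only their own group average; for $\lambda_k=1$ only the global average.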
For simplicity, in the case of finitely many players, we study the case of two heterogeneous groups where $d=2$.
Hence, the dynamics for both groups are written as
\begin{eqnarray}
\nonumber dX^{(1)i}_t &=& (\alpha_{t}^{(1)i}+\gamma^{(1)}_{t})dt\\
\label{diffusion-major}&&+\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right),
\end{eqnarray}
and
\begin{eqnarray}
\nonumber dX^{(2)i}_t &=& (\alpha_{t}^{(2)i}+\gamma^{(2)}_{t})dt\\
\label{diffusion-minor}&&+\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)i}_t\right)\right),
\end{eqnarray}
with initial values $X_0^{(k)i}$ for $k=1,2$ and $i=1,\cdots,N_k$. In particular, given that the first group consists of larger banks and the second group of smaller banks, we may further assume that $ 0\leq\lambda_1<\lambda_2 \leq 1$, since large banks intend to trace their own group ensemble average $\overline X^{(1)}$ rather than the global average $\overline X$. On the contrary, small banks prefer tracing the large banks through the global ensemble average. In addition, the number of large banks $N_1$ is usually smaller than the number of small banks $N_2$.
\section{Construction of Nash Equilibria }\label{Sec:NE}
This section is devoted to obtain the closed-loop Nash equilibria and the open-loop Nash equilibria in the finite players game. The solution to the closed-loop Nash equilibria are given by the HJB approach. The open-loop Nash equilibria are obtained using the FBSDEs based on the Pontryagin minimum principle.
\subsection{Closed-loop Nash Equilibria}\label{HJB-approach}
In order to solve for the closed-loop Nash equilibrium, given the optimal strategies $\hat\alpha^{(k)j}$ for $j\neq i$ with the corresponding trajectories $$\hat X^{-{(k)i}}=(\hat X^{(1)1},\cdots,\hat X^{(k)i-1},\hat X^{(k)i+1},\cdots,\hat X^{(2)N_2}),$$ bank $(1)i$ and bank $(2)j$ minimize their objective functions through the value functions written as
\begin{equation}\label{value-function-1}
V^{(1)i}(t,x)=\inf_{\alpha^{(1)i}\in{\cal{A}} }\mathbb{E} _{t,x}\left\{\int_t^T f^N_{(1)}(X^{(1)i}_s, \hat X^{-(1)i}_s, \alpha^{(1)i}_s )ds+ g^N_{(1)}(X_{T}^{(1)i},\hat X^{-(1)i}_T )\right\},
\end{equation}
and
\begin{equation}\label{value-function-2}
V^{(2)j}(t,x)=\inf_{\alpha^{(2)j}\in{\cal{A}}}\mathbb{E} _{t,x}\left\{\int_t^T f^N_{(2)}(X^{(2)j}_s, \hat X^{-(2)j}_s, \alpha^{(2)j}_s )ds+ g^N_{(2)}(X_{T}^{(2)j},\hat X^{-(2)j}_T )\right\},
\end{equation}
subject to
\begin{eqnarray}\label{coupled-1}
\nonumber dX^{(1)i}_t &=& (\alpha_{t}^{(1)i}+\gamma^{(1)}_{t})dt\\
&&+\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right) ,
\end{eqnarray}
and
\begin{eqnarray}\label{coupled-2}
\nonumber dX^{(2)j}_t &=& (\alpha_{t}^{(2)j}+\gamma^{(2)}_{t})dt\\
&&+\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right),
\end{eqnarray}
where $W^{(k)i}_t$ for $k=1,2$, $i=1,\cdots,N_1$ and $j=1,\cdots,N_2$ are standard Brownian motions, and $W^{(0)}_t$ and $W^{(k)}_t$ for $k=1,2$ are the common noises between groups and within groups, with the parameters $\rho$ and $\rho_k$ for $k=1,2$ denoting the correlation between groups and within groups, respectively. The initial value $X_0^{(k)i}$ may also be a square integrable random variable $\xi^{(k)}$. The control process $\alpha^{(k)i}$ is progressively measurable and ${\cal{A}}$ denotes the admissible set given by \eqref{admissible} for $\alpha^{(k)i}$.
Note that $\mathbb{E} _{t,x}$ denotes the expectation given $X_t=x$.
\vskip 0.5 in
\begin{theorem}\label{Hete-Nash}
Assuming $q_k^2\leq \varepsilon_k$ for $k=1,2$ and that $\eta^{(i)}_t$ and $\phi^{(i)}_t$ for $i=1,\cdots,10$ satisfy \eqref{eta1} to \eqref{phi10}, the value functions of the closed-loop Nash equilibria for the problems \eqref{value-function-1} and \eqref{value-function-2} subject to \eqref{coupled-1} and \eqref{coupled-2} are given by
\begin{eqnarray}
\nonumber V^{(1)i}(t,x)&=&\frac{\eta^{(1)}_t}{2}( \overline x^{(1)}-x^{(1)i})^2+\frac{\eta^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\eta^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber&&+\eta^{(4)}_t( \overline x^{(1)}-x^{(1)i})\overline x^{(1)}+\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})\overline x^{(2)}+\eta^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\eta^{(7)}_t( \overline x^{(1)}-x^{(1)i})+\eta^{(8)}_t\overline x^{(1)}+\eta^{(9)}_t\overline x^{(2)}+\eta^{(10)}_t,\label{ansatz-1-prop}
\end{eqnarray}
and
\begin{eqnarray}
\nonumber V^{(2)j}(t,x)&=&\frac{\phi^{(1)}_t}{2}( \overline x^{(2)}-x^{(2)j})^2+\frac{\phi^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\phi^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber&&+\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(1)}+\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(2)}+\phi^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\phi^{(7)}_t(\overline x^{(2)}-x^{(2)j})+\phi^{(8)}_t\overline x^{(1)}+\phi^{(9)}_t\overline x^{(2)}+\phi^{(10)}_t,\label{ansatz-2-prop}
\end{eqnarray}
and the corresponding closed-loop Nash equilibria are
\begin{eqnarray}
\label{optimal-finite-ansatz-V1}
&&\hat\alpha^{(1)i}(t,x)=(q_1+\widetilde\eta^{(1)}_t)( \overline x^{(1)}-x^{(1)i})+\widetilde\eta^{(4)}_t\overline x^{(1)}+\widetilde\eta^{(5)}_t\overline x^{(2)}+\widetilde\eta_t^{(7)},\\
&&\hat\alpha^{(2)j}(t,x)=(q_2+\widetilde\phi^{(1)}_t)( \overline x^{(2)}-x^{(2)j})+\widetilde\phi^{(4)}_t\overline x^{(1)}+\widetilde\phi^{(5)}_t\overline x^{(2)}+\widetilde\phi_t^{(7)},\label{optimal-finite-ansatz-V2}
\end{eqnarray}
for $i=1,\cdots,N_1$ and $j=1,\cdots,N_2$ where
\begin{eqnarray}
&&\nonumber \widetilde\eta^{(1)}_t=(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t, \quad\widetilde\eta^{(4)}_t=(1-\frac{1}{N_1})\eta^{(4)}_t-\frac{1}{N_1}\eta^{(2)}_t-\lambda_1(1-\beta_1)q_1,\\
&&\label{tildeeta}\widetilde\eta^{(5)}_t=(1-\frac{1}{N_1})\eta^{(5)}_t-\frac{1}{N_1}\eta^{(6)}_t+\lambda_1\beta_2q_1,\quad\widetilde\eta^{(7)}_t=(1-\frac{1}{N_1})\eta^{(7)}_t-\frac{1}{N_1}\eta^{(8)}_t,
\end{eqnarray}
and
\begin{eqnarray}
&&\nonumber \widetilde\phi^{(1)}_t=(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi^{(5)}_t, \quad \widetilde\phi^{(4)}_t=(1-\frac{1}{N_2})\phi^{(4)}_t-\frac{1}{N_2}\phi^{(6)}_t+\lambda_2\beta_1q_2,\\
\nonumber&&\widetilde\phi^{(5)}_t=(1-\frac{1}{N_2})\phi^{(5)}_t-\frac{1}{N_2}\phi^{(3)}_t-\lambda_2(1-\beta_2)q_2,\quad \widetilde\phi^{(7)}_t=(1-\frac{1}{N_2})\phi^{(7)}_t-\frac{1}{N_2}\phi^{(9)}_t.\\
\label{tildephi}
\end{eqnarray}
\end{theorem}
\begin{proof}
See Appendix \ref{Appex-1}.
\end{proof}
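Although the full system (\ref{eta1}-\ref{phi10}) is coupled, the first equation of its large-population limit, \eqref{eta1_N}, decouples and can be integrated backward from its terminal condition. A minimal numerical sketch (our parameter values, not from the paper) illustrates that, with $\varepsilon_1>q_1^2$, the backward solution remains bounded and settles at the stable root $\sqrt{\varepsilon_1}-q_1$ far from the terminal time:

```python
import numpy as np

# Backward RK4 integration of the decoupled Riccati equation
#   eta' = 2*q1*eta + eta^2 - (eps1 - q1^2),  eta(T) = c1,
# which is the same scalar equation as in the one-group game.
q1, eps1, c1, T = 0.5, 1.0, 1.0, 10.0   # illustrative parameters, eps1 > q1^2

def f(eta):
    return 2.0*q1*eta + eta**2 - (eps1 - q1**2)

h, eta = 1e-3, c1
for _ in range(int(T/h)):               # step from t = T down to t = 0
    k1 = f(eta)
    k2 = f(eta - 0.5*h*k1)
    k3 = f(eta - 0.5*h*k2)
    k4 = f(eta - h*k3)
    eta -= h*(k1 + 2*k2 + 2*k3 + k4)/6.0

# Far from T the solution settles at the stable root sqrt(eps1) - q1 = 0.5.
print(eta)  # approximately 0.5
```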
Similarly, due to the complexity of the coupled ODEs (\ref{eta1}-\ref{phi10}), we now study the existence of solutions of this coupled system as $N_1\rightarrow\infty$ and $N_2\rightarrow\infty$.
\begin{prop}\label{Prop_suff}
As $N_1\rightarrow\infty$ and $N_2\rightarrow\infty$, the coupled equations \eqref{eta1} to \eqref{eta6} and \eqref{phi1} to \eqref{phi6} are rewritten as
\begin{eqnarray}
\label{eta1_N} \dot{\widehat{\eta}}^{(1)}_t&=&2 q_1\widehat\eta^{(1)}_t+ (\widehat\eta_t^{(1)})^2-(\varepsilon_1-q_1^2),\\
\nonumber {\dot{\widehat\eta}^{(2)}_t}&=&2\left(-\widehat\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \widehat\eta^{(2)}_t-(\widehat\eta_t^{(4)})^2\\
&&-2\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2(\beta_1-1)^2,\label{eta2_N}\\
\nonumber {\dot{\widehat\eta}^{(3)}_t} &=&2\left(-\widehat\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right)\widehat\eta^{(3)}_t-(\widehat\eta_t^{(5)})^2\\
&&-2\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2\beta_2^2,\label{eta3_N}\\
\nonumber \dot{\widehat\eta}^{(4)}_t&=&q_1\left(1+\lambda_1(1-\beta_1)\right) \widehat\eta^{(4)}_t-(\widehat\eta^{(4)}_t)^2\\
&&-\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\eta^{(5)}_t+(\varepsilon_1-q_1^2)\lambda_1(1- \beta_1),\label{eta4_N}\\
\dot{\widehat\eta}^{(5)}_t&=&\left(q_1-\widehat\eta^{(4)}_t-\widehat\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \widehat\eta^{(5)}_t-q_1\lambda_1\beta_2\widehat\eta^{(4)}_t-(\varepsilon_1-q_1^2)\lambda_1\beta_2,\label{eta5_N}\\
\nonumber\dot{\widehat\eta}^{(6)}_t&=&\left(-\widehat\eta^{(4)}_t-\widehat\phi^{(5)}_t+q_1\lambda_1(1-\beta_1)+q_2\lambda_2(1-\beta_2)\right)\widehat\eta^{(6)}_t-\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat\eta^{(2)}_t\\
&&-\widehat\eta_t^{(4)}\widehat\eta_t^{(5)}-\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat \eta^{(3)}_t+(\varepsilon_1-q_1^2)\lambda_1^2(1-\beta_1)\beta_2,\label{eta6_N}
\end{eqnarray}
\begin{eqnarray}
\label{phi1_N}\dot{\widehat\phi}^{(1)}_t&=&2 q_2\widehat\phi^{(1)}_t+(\widehat\phi^{(1)}_t)^2 -(\varepsilon_2-q_2^2),\\
\nonumber {\dot{\widehat\phi}^{(2)}_t}&=&2\left(-\widehat\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \widehat\phi^{(2)}_t-(\widehat\phi_t^{(4)})^2\\
&&-2\left( \widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\phi^{(6)}_t-(\varepsilon_2-q_2^2)\lambda_2^2\beta_1^2,\label{phi2_N}\\
\nonumber {\dot{\widehat\phi}^{(3)}_t} &=&2\left(-\widehat\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \widehat\phi^{(3)}_t -(\widehat\phi_t^{(5)})^2\\
&&-2\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat \phi^{(6)}_t-(\varepsilon_2-q_2^2)\lambda_2^2(\beta_2-1)^2,
\label{phi3_N}\\
\dot{\widehat\phi}^{(4)}_t&=&\left(q_2-\widehat\eta^{(4)}_t-\widehat\phi_t^{(5)}+q_1\lambda_1(1-\beta_1)\right)\widehat\phi^{(4)}_t-q_2\lambda_2\beta_1\widehat \phi^{(5)}_t-(\varepsilon_2-q_2^2)\lambda_2\beta_1,\label{phi4_N}\\
\nonumber \dot{\widehat\phi}^{(5)}_t&=&q_2\left(1+\lambda_2(1-\beta_2)\right)\widehat\phi^{(5)}_t-(\widehat\phi^{(5)}_t)^2\\
&&-\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat\phi_t^{(4)}+(\varepsilon_2-q_2^2)\lambda_2(1-\beta_2),\label{phi5_N}\\
\nonumber\dot{\widehat\phi}^{(6)}_t&=&\left(-\widehat\eta^{(4)}_t-\widehat\phi^{(5)}_t+q_1\lambda_1(1-\beta_1)+q_2\lambda_2(1-\beta_2)\right) \widehat\phi^{(6)}_t-\widehat\phi_t^{(4)}\widehat\phi_t^{(5)}\\
\nonumber &&-\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat \phi^{(2)}_t-\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\phi^{(3)}_t- (\varepsilon_2-q_2^2)\lambda_2^2\beta_1(\beta_2-1),\\\label{phi6_N}
\end{eqnarray}
with terminal conditions
\begin{eqnarray*}
&&\widehat\eta_T^{(1)}=c_1,\quad \widehat\eta_T^{(2)}=c_1\lambda_1^2(\beta_1-1)^2, \quad \widehat\eta_T^{(3)}=c_1\lambda_1^2\beta_2^2, \quad\widehat \eta_T^{(4)}=c_1\lambda_1(\beta_1-1),\\
&&\widehat\eta_T^{(5)}=c_1\lambda_1\beta_2,\quad\widehat \eta_T^{(6)}=c_1\lambda_1^2(\beta_1-1)\beta_2,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\widehat\phi_T^{(1)}=c_2,\quad \widehat \phi_T^{(2)}=c_2\lambda_2^2\beta_1^2, \quad \widehat\phi_T^{(3)}=c_2\lambda_2^2(\beta_2-1)^2, \quad \widehat\phi_T^{(4)}=c_2\lambda_2\beta_1,\\
&&\widehat\phi_T^{(5)}=c_2\lambda_2(\beta_2-1),\quad \widehat\phi_T^{(6)}=c_2\lambda_2^2\beta_1(\beta_2-1),
\end{eqnarray*}
where $0<\beta_1,\;\beta_2<1$, $\beta_1+\beta_2=1$, $0<\lambda_1,\;\lambda_2<1$, and $q_1,q_2,\varepsilon_1,\varepsilon_2>0$ with $\varepsilon_1-q_1^2>0$ and $\varepsilon_2-q_2^2>0$. Then the coupled equations \eqref{eta1_N} to \eqref{phi6_N} admit a solution on $[0,T]$.
\end{prop}
\begin{proof}
We first observe that the existence of the coupled equations (\ref{eta1_N}-\ref{phi6_N}) relies on the existence of the coupled Riccati equations (\ref{eta4_N}-\ref{eta5_N}) and (\ref{phi4_N}-\ref{phi5_N}). Using (\ref{eta4_N}-\ref{eta5_N}) and (\ref{phi4_N}-\ref{phi5_N}), we have
\begin{eqnarray}
\nonumber \dot{\widehat\eta_t^{(4)}}+\dot{\widehat\eta_t^{(5)}}&=&q_1\widehat\eta^{(4)}_t-(\widehat\eta^{(4)}_t)^2-\widehat\phi^{(4)}_t\widehat\eta^{(5)}_t+q_1\widehat\eta^{(5)}_t-\widehat\phi^{(5)}_t\widehat\eta^{(5)}_t-\widehat\eta^{(4)}_t\widehat\eta^{(5)}_t\\
&=&\left(q_1-\widehat\eta^{(4)}_t\right)\left(\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t\right)-\widehat\eta^{(5)}_t\left(\widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t\right)\label{eqn_sum_1}
\end{eqnarray}
and similarly
\begin{eqnarray}
\dot{\widehat\phi_t^{(4)}}+\dot{\widehat\phi_t^{(5)}}&=&\left(q_2-\widehat\phi^{(5)}_t\right)\left(\widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t\right)-\widehat\phi^{(4)}_t\left(\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t\right)\label{eqn_sum_2}
\end{eqnarray}
with the terminal conditions $\widehat\eta^{(4)}_T+\widehat\eta^{(5)}_T=0$ and $\widehat\phi^{(4)}_T+\widehat\phi^{(5)}_T=0$. Observe that \eqref{eqn_sum_1} and \eqref{eqn_sum_2} are linear equations for $\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t$ with zero terminal conditions, which implies
\begin{equation}\label{suff_cond_1}
\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t=0, \quad \widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t=0,
\end{equation}
for $0 \leq t\leq T$. Namely $\widehat\eta^{(4)}_t=-\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t=-\widehat\phi^{(5)}_t$ for $0\leq t\leq T$.
Hence, it is sufficient to study the existence of $\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t$.
Inserting $\widehat\eta^{(4)}_t=-\widehat\eta^{(5)}_t$ and $\widehat\phi^{(5)}_t=-\widehat\phi^{(4)}_t$ into the equations for $\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t$ gives
\begin{eqnarray*}
\dot{\widehat\eta}_t^{(5)}&=&\left(q_1+\widehat\eta_t^{(5)}+\widehat\phi_t^{(4)}+q_2\lambda_2\beta_1\right)\widehat\eta_t^{(5)}+q_1\lambda_1\beta_2\widehat\eta_t^{(5)}-(\varepsilon_1-q_1^2)\lambda_1\beta_2\\
&=&\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)\widehat\eta_t^{(5)}+(\widehat\eta_t^{(5)})^2+\widehat\phi_t^{(4)}\widehat\eta_t^{(5)}-(\varepsilon_1-q_1^2)\lambda_1\beta_2,
\end{eqnarray*}
and
\begin{eqnarray*}
\dot{\widehat\phi}_t^{(4)}&=&\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)\widehat\phi_t^{(4)}+(\widehat\phi_t^{(4)})^2+\widehat\eta_t^{(5)}\widehat\phi_t^{(4)}-(\varepsilon_2-q_2^2)\lambda_2\beta_1.
\end{eqnarray*}
Now, consider ${\check\eta}_t^{(5)}=\widehat\eta_{T-t}^{(5)}$ and $\check\phi_t^{(4)}=\widehat\phi_{T-t}^{(4)}$. Then
\begin{eqnarray}
\dot{\check\eta}_t^{(5)}&=&-\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)\check\eta_t^{(5)}-(\check\eta_t^{(5)})^2-\check\phi_t^{(4)}\check\eta_t^{(5)}+(\varepsilon_1-q_1^2)\lambda_1\beta_2,\\
\dot{\check\phi}_t^{(4)}&=&-\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)\check\phi_t^{(4)}-(\check\phi_t^{(4)})^2-\check\eta_t^{(5)}\check\phi_t^{(4)}+(\varepsilon_2-q_2^2)\lambda_2\beta_1,
\end{eqnarray}
with the initial conditions $\check\eta_0^{(5)}=c_1\lambda_1\beta_2$ and $\check\phi_0^{(4)}=c_2\lambda_2\beta_1$. A simple comparison argument shows that $\check\eta_t^{(5)}$ and $\check\phi_t^{(4)}$ remain positive for $0\leq t\leq T$. Then, we have
\begin{eqnarray*}
\dot{\check\eta}_t^{(5)}&\leq& -\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)\check\eta_t^{(5)}+(\varepsilon_1-q_1^2)\lambda_1\beta_2,\\
\dot{\check\phi}_t^{(4)}&\leq& -\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)\check\phi_t^{(4)}+(\varepsilon_2-q_2^2)\lambda_2\beta_1,
\end{eqnarray*}
leading to
\begin{eqnarray*}
e^{\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)t}\check\eta_t^{(5)}&\leq&c_1\lambda_1\beta_2+(\varepsilon_1-q_1^2)\lambda_1\beta_2\int_0^t e^{\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)s}ds\\
&\leq&c_1\lambda_1\beta_2+\frac{(\varepsilon_1-q_1^2)\lambda_1\beta_2}{q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2}e^{\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)t}
\end{eqnarray*}
such that
\begin{equation}
0\leq \check\eta_t^{(5)} \leq c_1\lambda_1\beta_2 e^{-\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)t}+\frac{(\varepsilon_1-q_1^2)\lambda_1\beta_2}{q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2}.
\end{equation}
Similarly, we also get
\begin{equation}
0\leq \check\phi_t^{(4)}\leq c_2\lambda_2\beta_1 e^{-\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)t}+\frac{(\varepsilon_2-q_2^2)\lambda_2\beta_1}{q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1}.
\end{equation}
The proof is complete.
\end{proof}
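As a numerical sanity check of the argument above, the time-reversed system for $\check\eta^{(5)}_t$ and $\check\phi^{(4)}_t$ can be integrated by explicit Euler, verifying positivity and the a priori bounds at every step. This is only an illustrative sketch: the parameter values below are hypothetical, chosen to satisfy the standing assumptions ($0<\beta_1,\beta_2<1$ with $\beta_1+\beta_2=1$, $\varepsilon_k-q_k^2>0$).

```python
# Euler integration of the time-reversed system
#   d check_eta = -a_eta check_eta - check_eta^2 - check_phi check_eta + (eps1-q1^2) lam1 b2,
#   d check_phi = -a_phi check_phi - check_phi^2 - check_eta check_phi + (eps2-q2^2) lam2 b1,
# started from the (former terminal) conditions.  All values are hypothetical.
q1, q2, eps1, eps2 = 2.0, 2.0, 5.0, 4.5
lam1, lam2 = 0.5, 0.1
b1, b2 = 0.2, 0.8
c1, c2 = 1.0, 1.0
T, n = 1.0, 20000
dt = T / n

a_eta = q1 + q2 * lam2 * b1 + q1 * lam1 * b2   # decay rate in the eta bound
a_phi = q2 + q1 * lam1 * b2 + q2 * lam2 * b1   # decay rate in the phi bound
eta_bound = c1 * lam1 * b2 + (eps1 - q1**2) * lam1 * b2 / a_eta
phi_bound = c2 * lam2 * b1 + (eps2 - q2**2) * lam2 * b1 / a_phi

eta = c1 * lam1 * b2   # check_eta5 at t = 0 (former terminal condition)
phi = c2 * lam2 * b1   # check_phi4 at t = 0
for _ in range(n):
    d_eta = -a_eta * eta - eta**2 - phi * eta + (eps1 - q1**2) * lam1 * b2
    d_phi = -a_phi * phi - phi**2 - eta * phi + (eps2 - q2**2) * lam2 * b1
    eta += dt * d_eta
    phi += dt * d_phi
    # Positivity and the a priori bounds derived in the proof:
    assert 0.0 < eta <= eta_bound + 1e-9 and 0.0 < phi <= phi_bound + 1e-9

print(round(eta, 3), round(phi, 3))
```

The solutions stay in the rectangle $[0,\,c_1\lambda_1\beta_2+(\varepsilon_1-q_1^2)\lambda_1\beta_2/a]\times[0,\,c_2\lambda_2\beta_1+(\varepsilon_2-q_2^2)\lambda_2\beta_1/a]$, consistently with the estimates above.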
According to the results in Proposition \ref{Prop_suff}, when $N_1$ and $N_2$ are large enough, the existence of a solution to the coupled ODEs \eqref{eta1} to \eqref{phi10} is guaranteed.
\subsection{Open-loop Nash Equilibria}\label{FBSDE-approach}
Referring to \cite{R.Carmona2013}, we now study the open-loop Nash equilibria, where the strategies are fixed at the initial time; namely, the strategies are functions of $t$ and $X_0$, as in \cite{CarmonaSIAM2016}.
\begin{theorem}\label{Hete-open}
Assume $q_k^2\leq \varepsilon_k$ for $k=1,2$, and let $\eta^{o,(i)}_t$ and $\phi^{o,(i)}_t$, $i=1,\cdots,4$, satisfy \eqref{eta_open-1} to \eqref{phi_open-4}. Then the open-loop Nash equilibria are given by
\begin{eqnarray}
\nonumber \hat\alpha^{o,(1)i}&=&\left(q_1+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(1)}\right)(\overline X_t^{(1)}-X_t^{(1)i})\\
\nonumber&&+\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\overline X_t^{(1)}\\
&&+\left(q_1\lambda_1\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_1}\right)\eta_t^{o,(4)},\\
\nonumber \hat\alpha^{o,(2)j}&=&\left(q_2+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(1)}\right)(\overline X_t^{(2)}-X_t^{(2)j})\\
\nonumber &&+\left(q_2\lambda_2\beta_1+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\overline X_t^{(1)}\\
&&+\left(q_2\lambda_2(\beta_2-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_2}\right)\phi_t^{o,(4)},
\end{eqnarray}
where $$\frac{1}{\widetilde N_k}=\frac{1-\lambda_k}{N_k}+\frac{\lambda_k}{N},$$ for $k=1,2$.
\end{theorem}
\begin{proof}
See Appendix \ref{Appex-open}.
\end{proof}
Due to the complexity of the coupled ordinary differential equations (ODEs) (\ref{eta_open-1}-\ref{phi_open-4}), we also verify the existence of (\ref{eta_open-1}-\ref{phi_open-4}) in the limit $N_1\rightarrow\infty$ and $N_2\rightarrow\infty$, where the equations are given by
\begin{eqnarray}
\dot{\widehat{\eta}}^{o,(1)}_t&=&2 q_1\widehat\eta^{o,(1)}_t+ (\widehat\eta_t^{o,(1)})^2-(\varepsilon_1-q_1^2),\\
\nonumber \dot{\widehat\eta}^{o,(2)}_t&=&q_1\left(1+\lambda_1(1-\beta_1)\right) \widehat\eta^{o,(2)}_t-(\widehat\eta^{o,(2)}_t)^2\\
&&-\left(\widehat\phi^{o,(2)}_t+q_2\lambda_2\beta_1\right)\widehat\eta^{o,(3)}_t+(\varepsilon_1-q_1^2)\lambda_1(1- \beta_1),\\
\nonumber \dot{\widehat\eta}^{o,(3)}_t&=&\left(q_1-\widehat\eta^{o,(2)}_t-\widehat\phi^{o,(3)}_t+q_2\lambda_2(1-\beta_2)\right) \widehat\eta^{o,(3)}_t\\
&&-q_1\lambda_1\beta_2\widehat\eta^{o,(2)}_t-(\varepsilon_1-q_1^2)\lambda_1\beta_2,\\
\dot{\widehat\phi}^{o,(1)}_t&=&2 q_2\widehat\phi^{o,(1)}_t+(\widehat\phi^{o,(1)}_t)^2 -(\varepsilon_2-q_2^2),\\
\nonumber\dot{\widehat\phi}^{o,(2)}_t&=&\left(q_2-\widehat\eta^{o,(2)}_t-\widehat\phi_t^{o,(3)}+q_1\lambda_1(1-\beta_1)\right)\widehat\phi^{o,(2)}_t\\
&& -q_2\lambda_2\beta_1\widehat \phi^{o,(3)}_t-(\varepsilon_2-q_2^2)\lambda_2\beta_1, \\
\nonumber \dot{\widehat\phi}^{o,(3)}_t&=&q_2\left(1+\lambda_2(1-\beta_2)\right)\widehat\phi^{o,(3)}_t-(\widehat\phi^{o,(3)}_t)^2\\
&&-\left(\widehat\eta^{o,(3)}_t+q_1\lambda_1\beta_2\right)\widehat\phi_t^{o,(2)}+(\varepsilon_2-q_2^2)\lambda_2(1-\beta_2),
\end{eqnarray}
with the terminal conditions
\begin{eqnarray*}
\widehat\eta_T^{o,(1)}=c_1,\;\widehat\eta_T^{o,(2)}=c_1\lambda_1(\beta_1-1),\;\widehat\eta_T^{o,(3)}=c_1\lambda_1\beta_2,\\
\widehat\phi_T^{o,(1)}=c_2,\;\widehat\phi_T^{o,(2)}=c_2\lambda_2\beta_1,\;\widehat\phi_T^{o,(3)}=c_2\lambda_2(\beta_2-1).
\end{eqnarray*}
Note that, according to the results in Section \ref{HJB-approach}, $\widehat\eta^{o,(2)}_t$ and $\widehat\phi^{o,(3)}_t$ satisfy the same Riccati equations as $\widehat\eta^{(4)}_t$ and $\widehat\phi^{(5)}_t$, while $\widehat\eta^{o,(3)}_t$ and $\widehat\phi^{o,(2)}_t$ satisfy the same linear ODEs as $\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t$. Hence, the existence of the coupled ODEs is verified by Proposition \ref{Prop_suff}.
We then discuss $\varepsilon$-Nash equilibria in the case of the mean field game with general $d$ groups; namely, $N\rightarrow\infty$ and $N_k\rightarrow\infty$ with $\frac{N_k}{N}\rightarrow\beta_k$ for $k=1,\cdots,d$.
\section{$\varepsilon$-Nash Equilibria: Mean Field Games}\label{MFG}
We return to the case of the general $d$ groups. The $i$-th bank in group $k$ minimizes the objective function given by
\begin{eqnarray}
\nonumber J^{(k)i}(\alpha)&=&\mathbb{E} \bigg\{\int_{0}^{T}
\left[\frac{(\alpha_{t}^{(k)i})^2}{2}-q_k\alpha_{t}^{(k)i}\left(\overline{X}^{\lambda_k}_{t}-X^{(k)i}_{t}\right)
+\frac{\varepsilon_k}{2}\left(\overline{X}^{\lambda_k}_{t}-X^{(k)i}_{t}\right)^2\right]dt\\
& &+ \frac{c_k}{2}\left(\overline{X}^{\lambda_k}_{T}-X^{(k)i}_{T}\right)^2\bigg\},
\end{eqnarray}
with $q_k^2<\varepsilon_k$ subject to
\begin{eqnarray}
\nonumber dX^{(k)i}_t &=& (\alpha_{t}^{(k)i}+\gamma^{(k)}_{t})dt\\
&&+\sigma_k\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)i}_t\right)\right),
\end{eqnarray}
with the initial value $X_0^{(k)i}$, which may also be a square-integrable random variable $\xi^{(k)}$. In the mean field limit, as $N\rightarrow \infty$ and $\frac{N_k}{N}\rightarrow\beta_k$ for all $k$, the problem with $d$ heterogeneous groups reduces, as in the one-group case, to a $d$-player game. Referring to \cite{R.Carmona2013}, the scheme to obtain the $\varepsilon$-Nash equilibria is as follows:
\begin{equation}gin{enumerate}
\item Fix $m^{(k)}_t=\mathbb{E} [X^{(k)}_t|(W^{(0)}_s)_{s\leq t}]$, which is a candidate for the limit of $\overline{X}^{(k)}_t$ as $N_k\rightarrow \infty$:
\[
m^{(k)}_t=\lim_{N_k\rightarrow\infty}\overline{X}^{(k)}_t,
\]
for all $k$, and
\[
M_t=\lim_{N,N_1,\ldots,N_d\rightarrow\infty}\sum_{k=1}^d\frac{N_k}{N}\overline{X}^{(k)}_t=\sum_{k=1}^d\beta_km^{(k)}_t,
\]
where a vector of standard Brownian motions $W^{(0)}=(W^{(0),(0)},\cdots,W^{(0),(d)})$ represents the common noises.
\item Substitute $m_t^{(k)}$ for $\overline X^{(k)}_t$ and solve the $d$-player control problem by minimizing the objective function
\begin{eqnarray*}
&& \inf_{ \alpha^{(k)}\in {\cal{A}}} \mathbb{E} \bigg\{\int_{0}^{T} \left[\frac{(\alpha_{t}^{(k)})^2}{2}-q_k\alpha_{t}^{(k)}\left(M^{\lambda_k}_{t}-X^{(k)}_{t}\right)+\frac{\varepsilon_k}{2}\left(M^{\lambda_k}_{t}-X^{(k)}_{t}\right)^2\right]dt\\
&&\quad\quad\quad+ \frac{c_k}{2}\left(M^{\lambda_k}_{T}-X^{(k)}_{T}\right)^2\bigg\}
\end{eqnarray*}
with $q_k^2<\varepsilon_k$ subject to the dynamics
\begin{eqnarray}
\nonumber dX^{(k)}_t &=& (\alpha_{t}^{(k)}+\gamma^{(k)}_{t})dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(0),(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)}_t\right)\right),\label{X-d-groups}
\end{eqnarray}
with $M_t^{\lambda_k}=(1-\lambda_k)m_t^{(k)}+\lambda_kM_t$ where $W_t^{(k)}$ is a Brownian motion independent of $W_t^{(0)}$.
\item Similarly, solve the fixed point problem: find $$m^{(k)}_t=\mathbb{E} [X^{(k)}_t|(W^{(0)}_s)_{s\leq t},(W_s^{(0),(k)})_{s\leq t}]$$ for all $t$.
\end{enumerate}
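The three-step fixed-point scheme above can be sketched numerically. The following toy illustration simplifies drastically (these are our assumptions, not the setting of the theorem): one group ($d=1$), no common noise, $\gamma\equiv 0$, $c=0$, $\mathbb{E}[X_0]=0$, and a constant feedback gain $\eta=\sqrt{\varepsilon}-q$ used as a stand-in for the time-dependent Riccati solution. With no common noise the mean field is deterministic, and the iteration should keep $m_t$ near $\mathbb{E}[X_0]=0$.

```python
import numpy as np

# Toy fixed-point iteration for the mean field m_t (d = 1, no common noise,
# gamma = 0, c = 0).  The constant eta below is an illustrative stand-in for
# the Riccati feedback gain, not the paper's time-dependent solution.
rng = np.random.default_rng(0)
q, eps, sigma = 2.0, 5.0, 1.0
T, n_steps, n_paths = 1.0, 200, 20000
dt = T / n_steps

m = np.zeros(n_steps + 1)          # Step 1: initial guess for t -> m_t
eta = np.sqrt(eps) - q             # stand-in constant feedback gain

for _ in range(5):                 # fixed-point iterations
    X = np.zeros(n_paths)          # E[X_0] = 0
    means = [X.mean()]
    for k in range(n_steps):
        # Step 2: best response against the frozen mean field m.
        alpha = (q + eta) * (m[k] - X)
        X = X + alpha * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        means.append(X.mean())
    m = np.array(means)            # Step 3: update the mean field

print(round(float(np.abs(m).max()), 3))
```

Up to Monte Carlo error, the empirical mean field stays at zero through the iterations, as the deterministic limit predicts.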
\begin{theorem}\label{Hete-MFG-prop}
Assume $q_k^2\leq \varepsilon_k$, and let $\eta_t^{m,(k)}$, $\psi^{m,(k),h}_t$, and $\mu^{m,(k)}_t$ satisfy
\begin{eqnarray}
\label{Hete-eta-MFG}\dot\eta_t^{m,(k)}&=&2q_k \eta_t^{m,(k)}+(\eta_t^{m,(k)})^2-(\varepsilon_k-q_k^2),\\
\nonumber \dot\psi_t^{m,(k),h_1}&=&q_k\psi_t^{m,(k),h_1}-\sum_{h=1}^d\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h(\beta_{h_1 }-\delta_{h,h_1})\right)\\
&&-(\varepsilon_k-q_k^2)\lambda_k(\beta_{h_1}-\delta_{k,h_1}),\label{Hete-psi-MFG}\\
\dot\mu^{m,(k)}_t&=&q_k\mu^{m,(k)}_t-\sum_{h=1}^d\psi_t^{m,(k),h}(\mu_t^{m,(h)}+\gamma_t^{(h)}),
\label{Hete-mu-MFG}
\end{eqnarray}
with terminal conditions $\eta_T^{m,(k)}=c_k$, $\psi_T^{m,(k),h}=c_k\lambda_k(\beta_h-\delta_{k,h})$, and $\mu^{m,(k)}_T=0$ for $k,h=1,\cdots,d$, the $\varepsilon$-Nash equilibrium is given by
\begin{equation}\label{Hete-optimal-MFG}
\hat\alpha_t^{m,(k)}=(q_k+\eta^{m,(k)}_t)( m^{(k)}_t-x^{(k)})+\sum_{h=1}^d\widetilde\psi_t^{m,(k),h}m^{(h)}_t+\mu^{m,(k)}_t,
\end{equation}
where $m_t^{(k)}$ is given by
\begin{eqnarray*}
dm^{(k)}_t&=&\bigg\{\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t+\gamma^{(k)}_t+q_k\lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m^{(h_1)}_t\bigg\}dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2} \rho_{k}dW_t^{(0),(k)} \right)
\end{eqnarray*}
for $k=1,\cdots,d$. Given $c_{\tilde k}\geq \max_{k,h}\left(\frac{q_k\lambda_k}{\lambda_h}-q_h\right)$ for $\tilde k=1,\cdots,d$, the existence of a solution to the coupled ODEs \eqref{Hete-eta-MFG} to \eqref{Hete-mu-MFG} can be verified.
\end{theorem}
\begin{proof}
See Appendix \ref{Appex-Hete-MFG}.
\end{proof}
In particular, in the case of $d=2$, as $N\rightarrow\infty$, $N_1\rightarrow\infty$, and $N_2\rightarrow\infty$ with $\frac{N_1}{N}\rightarrow\beta_1$ and $\frac{N_2}{N}\rightarrow\beta_2$, we observe that $\widetilde\eta^{(4)}_t\rightarrow\widetilde\psi_t^{m,(1),1}$, $\widetilde\eta^{(5)}_t\rightarrow\widetilde\psi_t^{m,(1),2}$, $\widetilde\eta_t^{(7)}\rightarrow\widetilde\mu_t^{m,(1)}$, $\widetilde\phi^{(4)}_t\rightarrow\widetilde\psi_t^{m,(2),1}$, $\widetilde\phi^{(5)}_t\rightarrow\widetilde\psi_t^{m,(2),2}$, and $\widetilde\phi_t^{(7)}\rightarrow\widetilde\mu_t^{m,(2)}$ for $0\leq t \leq T$, so that the $\varepsilon$-Nash equilibria of the mean field game with heterogeneous groups are the limits of the closed-loop and open-loop Nash equilibria of the finite-player game with heterogeneous groups. The results are consistent with \cite{Lacker2018}. Hence, the asymptotic optimal strategy for bank $(1)i$ is given by
\[
\hat\alpha^{m,(1)i}_t=(q_1+ \eta^{m,(1)}_t)( \overline x^{(1)}-x^{(1)i})+\widetilde\psi_t^{m,(1),1}\overline x^{(1)}+\widetilde\psi_t^{m,(1),2}\overline x^{(2)}+\mu^{m,(1)}_t.
\]
\begin{remark}
In the case of no common noise, the given $m_t$, $0\leq t \leq T$, is a deterministic function. For instance, in the case of $d=1$, the $\varepsilon$-Nash equilibrium in mean field games can be obtained using the HJB equation written as
\begin{eqnarray*}
\partial_tV&+&\inf_{\alpha}\bigg\{\alpha\partial_xV+\frac{\sigma^2}{2}\partial_{xx}V +\frac{\alpha^2}{2}-q\alpha(m_t-x)+\frac{\varepsilon}{2}(m_t-x)^2\bigg\}=0,
\end{eqnarray*}
with the terminal condition $V(T,x)=\frac{c}{2}(m_T-x)^2$.
\end{remark}
\section{Financial Implications}\label{FI}
The aim of this section is to investigate the financial implications of this heterogeneous interbank lending and borrowing model. We mainly comment on the closed-loop Nash equilibria in the finite-player case, which coincide with the open-loop Nash equilibria and the $\varepsilon$-Nash equilibria. We recall the closed-loop Nash equilibria written as
\begin{eqnarray}
\label{optimal-finite-ansatz-V1-FI}
\hat\alpha^{(1)i}(t,x)&=&(q_1+\widetilde\eta^{(1)}_t)( \overline x^{(1)}-x^{(1)i})+\widetilde\eta^{(4)}_t\overline x^{(1)}+\widetilde\eta^{(5)}_t\overline x^{(2)}+\widetilde\eta_t^{(7)},\\
\hat\alpha^{(2)j}(t,x)&=&(q_2+\widetilde\phi^{(1)}_t)(\overline x^{(2)}-x^{(2)j})+\widetilde\phi^{(4)}_t\overline x^{(1)}+\widetilde\phi^{(5)}_t\overline x^{(2)}+\widetilde\phi_t^{(7)}.\label{optimal-finite-ansatz-V2-FI}
\end{eqnarray}
The corresponding optimal trajectories are written as
\begin{eqnarray}
\nonumber dX_t^{(1)i}&=&\left\{(q_1+\widetilde\eta^{(1)}_t)( \overline X_t^{(1)}-X_t^{(1)i})+\widetilde\eta^{(4)}_t\overline X_t^{(1)}+\widetilde\eta^{(5)}_t\overline X_t^{(2)}+\widetilde\eta_t^{(7)}+\gamma^{(1)}_t\right\}dt \\
\nonumber&& +\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right)\\
\nonumber &=&\left\{\frac{q_1+\widetilde\eta^{(1)}_t}{N_1}\sum_{l=1}^{N_1}(X_t^{(1)l}-X_t^{(1)i})+\widetilde\eta^{(4)}_t\overline X_t^{(1)}+\widetilde\eta^{(5)}_t\overline X_t^{(2)}+\widetilde\eta_t^{(7)}+\gamma^{(1)}_t\right\}dt \\
\label{X-1-FI}&& +\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right),\\
\nonumber dX_t^{(2)j}&=&\left\{(q_2+\widetilde\phi^{(1)}_t)( \overline X_t^{(2)}-X_t^{(2)j})+\widetilde\phi^{(4)}_t\overline X_t^{(1)}+\widetilde\phi^{(5)}_t\overline X_t^{(2)}+\widetilde\phi_t^{(7)}+\gamma^{(2)}_t\right\}dt\\
\nonumber&& +\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)i}_t\right)\right)\\
\nonumber&=&\left\{\frac{q_2+\widetilde\phi^{(1)}_t}{N_2}\sum_{l=1}^{N_2}(X_t^{(2)l}-X_t^{(2)j})+\widetilde\phi^{(4)}_t\overline X_t^{(1)}+\widetilde\phi^{(5)}_t\overline X_t^{(2)}+\widetilde\phi_t^{(7)}+\gamma^{(2)}_t\right\}dt\\
\label{X-2-FI}&& +\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)i}_t\right)\right)
\end{eqnarray}
with given initial values $X_0^{(1)i}$ and $X_0^{(2)j}$ for $i=1,\cdots,N_1$ and $j=1,\cdots,N_2$.
Compared to the homogeneous-group scenario discussed in \cite{R.Carmona2013}, owing to the linear quadratic structure, heterogeneity implies that banks do not purely consider the distance between their capitalization and the average capitalization of their own group: the terms $\frac{1}{N_1}\sum_{l=1}^{N_1}(X_t^{(1)l}-X_t^{(1)i})$ and $\frac{1}{N_2}\sum_{l=1}^{N_2}(X_t^{(2)l}-X_t^{(2)j})$ describe the lending and borrowing behavior within their own groups, exactly as in the homogeneous case, but the equilibria also involve a linear combination of the averages of the two groups.
In particular, when $N_1$ and $N_2$ are sufficiently large, based on the relations $\widetilde\eta^{(4)}_t=-\widetilde\eta^{(5)}_t$ and $\widetilde\phi^{(4)}_t=-\widetilde\phi^{(5)}_t$ in Proposition \ref{Prop_suff}, we rewrite the system (\ref{X-1-FI}-\ref{X-2-FI}) as
\begin{eqnarray}
\nonumber dX_t^{(1)i}&=&\left\{\frac{q_1+\widetilde\eta^{(1)}_t}{N_1}\sum_{l=1}^{N_1}(X_t^{(1)l}-X_t^{(1)i})+\widetilde\eta^{(5)}_t(\overline X_t^{(2)}-\overline X_t^{(1)})+\widetilde\eta_t^{(7)}+\gamma^{(1)}_t\right\}dt \\
\label{X-1-FI-1}&& +\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right),\\
\nonumber dX_t^{(2)j}&=&\left\{\frac{q_2+\widetilde\phi^{(1)}_t}{N_2}\sum_{l=1}^{N_2}(X_t^{(2)l}-X_t^{(2)j})+\widetilde\phi^{(4)}_t(\overline X_t^{(1)}-\overline X_t^{(2)})+\widetilde\phi_t^{(7)}+\gamma^{(2)}_t\right\}dt\\
\label{X-2-FI-1}&& +\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)i}_t\right)\right).
\end{eqnarray}
The terms $\widetilde\eta^{(5)}_t(\overline X_t^{(2)}-\overline X_t^{(1)})$ and $\widetilde\phi^{(4)}_t(\overline X_t^{(1)}-\overline X_t^{(2)})$, with $\widetilde\eta^{(5)}_t$ and $\widetilde\phi^{(4)}_t$ positive for $0\leq t \leq T$, give a mean-reverting interaction between the two groups, so that the group ensemble averages tend to stay close to each other. The dynamics of the distance $\overline X^{D}_t= \overline X^{(1)}_t-\overline X_t^{(2)} $ are written as
\begin{eqnarray}
\nonumber d\overline X^{D}_t&=&-\left\{(\widetilde\eta^{(5)}_t+\widetilde\phi^{(4)}_t)\overline X^{D}_t+(\widetilde\eta_t^{(7)}+\gamma^{(1)}_t-\widetilde\phi_t^{(7)}-\gamma^{(2)}_t)\right\}dt\\
\nonumber &&+\rho\left(\sigma_1-\sigma_2\right)dW^{(0)}_t+\sqrt{1-\rho^2}\left(\sigma_1\rho_1dW^{(1)}_t-\sigma_2\rho_2dW_t^{(2)}\right)\\
&&+\sqrt{1-\rho^2}\left(\sqrt{1-\rho_1^2}\frac{1}{N_1}\sum_{l=1}^{N_1}dW^{(1)l}_t-\sqrt{1-\rho_2^2}\frac{1}{N_2}\sum_{l=1}^{N_2}dW^{(2)l}_t\right),
\end{eqnarray}
with $\overline X_0^{D}=\overline X^{(1)}_0-\overline X_0^{(2)}$. As $N_1, N_2\rightarrow\infty$, the distance $\overline X^{D}_t$ is driven by the common noises $W^{(0)}_t$, $W^{(1)}_t$, and $W^{(2)}_t$. Namely, a stronger correlation leads to larger fluctuations between the groups.
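This mean-reverting behavior of the distance between group averages can be sketched numerically. The sketch below freezes the time-dependent rate $\widetilde\eta^{(5)}_t+\widetilde\phi^{(4)}_t$ at a constant $a>0$ and drops the growth and source terms, so the distance becomes an OU-type process; the rate $a$ and all other numerical values are illustrative assumptions, not the paper's coefficients.

```python
import numpy as np

# OU-type sketch of the distance D_t between group averages:
#   dD_t = -a D_t dt + s dB_t,
# where a stands in for the (positive, time-dependent) rate eta5 + phi4 and
# s collects the common-noise volatilities (idiosyncratic terms vanish as
# N1, N2 -> infinity).  All parameter values are hypothetical.
rng = np.random.default_rng(1)
a = 0.5
rho, rho1, rho2 = 0.5, 0.3, 0.3
sigma1, sigma2 = 1.0, 1.0
s = np.sqrt((rho * (sigma1 - sigma2))**2
            + (1 - rho**2) * ((sigma1 * rho1)**2 + (sigma2 * rho2)**2))
T, n, n_paths = 10.0, 1000, 20000
dt = T / n
D = np.full(n_paths, 2.0)        # initial distance between the group averages
for _ in range(n):
    D = D - a * D * dt + s * np.sqrt(dt) * rng.standard_normal(n_paths)

# The mean of D_t reverts to zero, while the spread settles at the OU
# stationary variance s^2 / (2 a), which grows with the correlations.
print(round(float(D.mean()), 2), round(float(D.var()), 2))
```

Increasing $\rho$, $\rho_1$, $\rho_2$ increases $s$ and hence the stationary spread $s^2/(2a)$, matching the observation that stronger correlation leads to larger fluctuations between the groups.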
On the contrary, in the case of no common noise ($\rho=\rho_1=\rho_2=0$) and no growth rates ($\gamma^{(1)}_t=\gamma^{(2)}_t=0$, which leads to $\widetilde\eta_t^{(7)}=\widetilde\phi_t^{(7)}=0$ for all $t\geq 0$), we obtain $\overline X^{D}_t\rightarrow 0$ as $t\rightarrow\infty$, in the sense that, in the long run, all banks trace the global average $\overline X_t$, which is driven only by the scaled Brownian motion. This implies that systemic risk occurs in the same manner as studied in \cite{R.Carmona2013} and therefore
\[
\mathbb{P} (\tau<\infty) = \lim_{T\to\infty}\mathbb{P} (\tau \leq T) =\lim_{T\to\infty}2\Phi\left(\frac{D\sqrt{N}}{\sigma\sqrt{T}}\right)= 1,
\]
with $\Phi$ being the standard normal distribution function and $\tau=\inf\{t:\overline X_t\leq D\}$, with a given default level $D$. The systemic event
\[
\left\{\left(\frac{1}{N}\sum_{i=1}^{N}X_{t}^{(i)}\right)\leq {D}\; \mathrm{for\;some}\; t\right\}
\]
defined in \cite{Fouque-Sun} is unavoidable.
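The limit above can be checked directly from the hitting-probability formula $\mathbb{P}(\tau\leq T)=2\Phi(D\sqrt{N}/(\sigma\sqrt{T}))$: as $T$ grows the argument tends to $0$, $\Phi\to 1/2$, and the probability tends to $1$. A short check (the parameter values, with $D<0$ relative to the initial ensemble average, are illustrative):

```python
from math import erf, sqrt

def Phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hitting_prob(D, N, sigma, T):
    # P(tau <= T) = 2 Phi(D sqrt(N) / (sigma sqrt(T))), with D < 0 the
    # default level relative to the initial ensemble average.
    return 2.0 * Phi(D * sqrt(N) / (sigma * sqrt(T)))

# The probability is increasing in T and approaches 1 as T -> infinity.
probs = [hitting_prob(D=-0.7, N=10, sigma=1.0, T=T) for T in (1.0, 10.0, 100.0, 1e6)]
print([round(p, 4) for p in probs])
```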
In the numerical analysis, suppose that the first group is the group of stronger banks and the second one is the group of smaller banks. As discussed in Section \ref{Heter}, we first assume $\beta_1=0.2$ and $\beta_2=0.8$ and further fix the relative consideration $\lambda_1=0.1$ for the relative ensemble average
\[
\overline x^{\lambda_1}=(1-\lambda_1)\overline x^{(1)}+\lambda_1\overline{x},
\]
since the major players prefer tracking the group average rather than the ensemble average. Varying $\lambda_2$ and $N$, we then obtain the following implications:
\begin{enumerate}
\item We first comment on two extreme cases. When $\lambda_1=\lambda_2=0$ or $\beta_1=\beta_2=1$, the model degenerates to the two homogeneous group model without interaction between the groups, as in \cite{R.Carmona2013}.
\item In the intermediate region, in Figure \ref{liquidityratelambda}, we observe that the liquidity rate
\begin{equation}\label{liquidityrate}
\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}
\end{equation}
increases in the relative proportion $\lambda_2$. Namely, when banks put more weight on tracking the global ensemble average $\overline x$, they tend to lend to or borrow from a central bank more frequently.
\item As the terminal time $T$ becomes large, Figure \ref{liquidityratelambda_t=10} shows that the liquidity rate tends to a constant. Identical results are obtained in \cite{R.Carmona2013}.
\item As the numbers of banks $N$, $N_1$, and $N_2$ become large, the liquidity rate \eqref{liquidityrate} also increases, and the interbank lending and borrowing behavior becomes more frequent. See Figure \ref{liquidityrate_N} for instance.
\end{enumerate}
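The flattening of the liquidity rate for large $T$ in item (3) can be illustrated on a scalar Riccati equation of the same form as \eqref{phi1_N}: integrating backward from the terminal condition, the solution settles, away from the terminal time, at the stationary root $\sqrt{\varepsilon_1}-q_1$ of the right-hand side. The sketch below uses values from the figure captions; treating the constant limit this way is our reading of the figures, not a statement proved in the text.

```python
from math import sqrt

# Backward scalar Riccati equation (same form as the eta1/phi1 equations):
#   d/dt eta_t = 2 q1 eta_t + eta_t^2 - (eps1 - q1^2),  eta_T = c1.
# Away from t = T the solution settles at the stationary root sqrt(eps1) - q1.
q1, eps1, c1 = 2.0, 5.0, 0.0     # values from the figure captions
T, n = 10.0, 100000
dt = T / n
eta = c1
for _ in range(n):               # integrate backward from t = T to t = 0
    eta -= dt * (2.0 * q1 * eta + eta**2 - (eps1 - q1**2))
print(round(eta, 4), round(sqrt(eps1) - q1, 4))   # prints 0.2361 0.2361
```

For $T=10$ the value of $\eta^{(1)}_0$ already coincides with $\sqrt{\varepsilon_1}-q_1$ to four decimals, consistent with the near-constant rate in Figure \ref{liquidityratelambda_t=10}.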
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm,height=8cm]{liquid_rate_t=1_N=10.eps}
\caption{The liquidity rate $\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}$ for varying $\lambda_2$ and fixed $\lambda_1=0.1$. The fixed parameters are $N=10$, $N_1=2$, $N_2=8$, $q_1=q_2=2$, $\varepsilon_1=5$, $\varepsilon_2=4.5$, $c_1=c_2=0$, $T=1$, and $\gamma^{(1)}_t=\gamma^{(2)}_t=0$ for $0\leq t\leq T$.}
\label{liquidityratelambda}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm,height=8cm]{liquid_rate_t=10_N=10.eps}
\caption{The liquidity rate $\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}$ for varying $\lambda_2$ and fixed $\lambda_1=0.1$. The fixed parameters are $N=10$, $N_1=2$, $N_2=8$, $q_1=q_2=2$, $\varepsilon_1=5$, $\varepsilon_2=4.5$, $c_1=c_2=0$, $T=10$, and $\gamma^{(1)}_t=\gamma^{(2)}_t=0$ for $0\leq t\leq T$.}
\label{liquidityratelambda_t=10}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm,height=8cm]{liquid_rate_t=1_N.eps}
\includegraphics[width=6cm,height=8cm]{liquid_rate_t=10_N.eps}
\caption{The liquidity rate $\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}$ for varying $N$, $N_1$, $N_2$ with proportions $\beta_1=0.2$, $\beta_2=0.8$. The terminal times are $T=1$ (left) and $T=10$ (right). The parameters are $\lambda_1=0.5$, $\lambda_2=0.1$, $q_1=q_2=2$, $\varepsilon_1=5$, $\varepsilon_2=4.5$, $c_1=c_2=0$, and $\gamma^{(1)}_t=\gamma^{(2)}_t=0$ for $0\leq t\leq T$.}
\label{liquidityrate_N}
\end{center}
\end{figure}
\section{Conclusions}\label{conclusions}
We study a system of interbank lending and borrowing with heterogeneous groups, where the lending and borrowing depend on homogeneous parameters within groups and heterogeneous parameters between groups. The amount of lending and borrowing is based on the relative ensemble average
\[
\overline x^{\lambda_k}=(1-\lambda_k)\overline x^{(k)}+\lambda_k\overline{x},\quad k=1,\cdots,d.
\]
Due to the heterogeneous structure, the value functions and the corresponding closed-loop and open-loop Nash equilibria are given by coupled Riccati equations. In the two-group case, the existence of solutions to the coupled Riccati equations can be proved when the number of banks is sufficiently large, so that the existence of the value functions and of the equilibria is guaranteed. In addition, in the case of mean field games with general $d$ groups, the existence of the $\varepsilon$-Nash equilibria is also verified.
We observe that, owing to heterogeneity, the equilibria consist of terms mean-reverting toward the banks' own group averages and toward the ensemble average of all groups. In the mean field case with no common noise, the systemic event happens almost surely in the long run. The numerical results illustrate that as banks put more weight $\lambda_k$ on tracking the global average, they prefer liquidating more frequently, using a larger liquidity rate. The liquidity rate is also increasing in the number of banks.
The problem can be extended in several directions. First, it is interesting to discuss the delay obligation based on the model studied in \cite{Carmona-Fouque2016,FouqueZhang2018}. Second, it is natural to consider a stochastic growth rate in the system. Third, CIR-type processes can be applied to describe the capitalization of banks; see \cite{Fouque-Ichiba,Sun2016}. Furthermore, referring to \cite{BMMB2019}, bubble assets are worth studying in the interbank lending and borrowing system. The admissible conditions for the equilibria of the above extensions are also interesting to investigate.
\section{Proof of Theorem \ref{Hete-Nash} and Verification Theorem} \label{Appex-1}
The corresponding coupled HJB equations for the value functions (\ref{value-function-1}) and (\ref{value-function-2}) read
\begin{eqnarray}\label{HJB1}
\nonumber \partial_{t}V^{(1)i} &+&\inf_{\alpha^{(1)i}}\bigg\{
\sum_{l\neq i,l=1}^{N_1}\bigg(\gamma^{(1)}_t+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(1)i}
+ \bigg(\gamma^{(1)}_t+{\alpha^{(1)i}}\bigg)\partial_{x^{(1)i}}V^{(1)i}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1 }^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1 }^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho ^2 \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho ^2 \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2 }^2)\right) \partial_{x^{(2)l}x^{(2)h}}V^{(1)i}\\
&+&\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}\left(\overline x^{\lambda_1}-x^{(1)i}\right)+\frac{\varepsilon_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2\bigg\}=0,
\end{eqnarray}
with the terminal condition $V^{(1)i}(T,x)=\frac{c_1}{2}(\overline x^{\lambda_1} -x^{(1)i})^2$ and
\begin{eqnarray}\label{HJB2}
\nonumber \partial_{t}V^{(2)j}&+&\inf_{\alpha^{(2)j}}\bigg\{
\sum_{l=1}^{N_1}\bigg(\gamma_t^{(1)}+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(2)j}\\
\nonumber &+&\sum_{h\neq j,h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(2)j}+\bigg(\gamma_t^{(2)}+{\alpha^{(2)j}}\bigg)\partial_{x^{(2)j}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(2)j}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right)\partial_{x^{(2)l}x^{(2)h}}V^{(2)j}\\
&+&\frac{(\alpha^{(2)j})^2}{2}-q_2\alpha^{(2)j}\left(\overline x^{\lambda_2}-x^{(2)j}\right)+\frac{\varepsilon_2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2\bigg\} =0,
\end{eqnarray}
with the terminal condition $V^{(2)j}(T,x)=\frac{c_2}{2}(\overline x^{\lambda_2} -x^{(2)j})^2$. The first-order condition gives the candidate optimal strategy for bank $(k)i$:
\begin{equation}\label{candidate}
\hat\alpha^{(k)i}=q_k(\overline x^{\lambda_k}-x^{(k)i})-\partial_{x^{(k)i}}V^{(k)i}.
\end{equation}
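To see \eqref{candidate} for $k=1$, note that only three terms inside the infimum in \eqref{HJB1} involve $\alpha^{(1)i}$, and the bracketed expression is a strictly convex quadratic in $\alpha^{(1)i}$, so the minimizer is obtained by differentiation:
\[
\frac{\partial}{\partial \alpha^{(1)i}}\Big[\alpha^{(1)i}\,\partial_{x^{(1)i}}V^{(1)i}+\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}\big(\overline x^{\lambda_1}-x^{(1)i}\big)\Big]
=\partial_{x^{(1)i}}V^{(1)i}+\alpha^{(1)i}-q_1\big(\overline x^{\lambda_1}-x^{(1)i}\big)=0,
\]
which yields $\hat\alpha^{(1)i}=q_1(\overline x^{\lambda_1}-x^{(1)i})-\partial_{x^{(1)i}}V^{(1)i}$; the case $k=2$ is identical, starting from \eqref{HJB2}.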
Inserting \eqref{candidate} into \eqref{HJB1} and \eqref{HJB2} gives
\begin{eqnarray}
\nonumber \partial_{t}V^{(1)i}(t,x)&+& \bigg\{
\sum_{l=1}^{N_1}\bigg(\gamma^{(1)}_t+q_1(\overline x^{\lambda_1} -x^{(1)l})-\partial_{x^{(1)l}}V^{(1)l}\bigg)\partial_{x^{(1)l}}V^{(1)i}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+q_2(\overline x^{\lambda_2}-x^{(2)h})-\partial_{x^{(2)h}}V^{(2)h}\bigg)\partial_{x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right) \partial_{x^{(2)l}x^{(2)h}}V^{(1)i}\\
&+&\frac{(\partial_{x^{(1)i}}V^{(1)i})^2}{2}+\frac{\varepsilon_1-q_1^2}{2}(\overline x^{\lambda_1} -x^{(1)i})^2\bigg\}=0,\label{HJB1-1}
\end{eqnarray}
with the terminal condition $V^{(1)i}(T,x)=\frac{c_1}{2}(\overline x^{\lambda_1} -x^{(1)i})^2$ and
\begin{eqnarray}
\nonumber \partial_{t}V^{(2)j}(t,x)&+& \bigg\{
\sum_{l=1}^{N_1}\bigg(\gamma^{(1)}_t+q_1(\overline x^{\lambda_1} -x^{(1)l})-\partial_{x^{(1)l}}V^{(1)l}\bigg)\partial_{x^{(1)l}}V^{(2)j}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+q_2(\overline x^{\lambda_2}-x^{(2)h})-\partial_{x^{(2)h}}V^{(2)h}\bigg)\partial_{x^{(2)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(2)j}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right)\partial_{x^{(2)l}x^{(2)h}}V^{(2)j}\\
&+&\frac{(\partial_{x^{(2)j}}V^{(2)j})^2}{2}+\frac{\varepsilon_2-q_2^2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2\bigg\}=0,\label{HJB2-1}
\end{eqnarray}
with the terminal condition $V^{(2)j}(T,x)=\frac{c_2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2$. We make the following ansatz for $V^{(1)i}$:
\begin{eqnarray}
\nonumber V^{(1)i}(t,x)&=&\frac{\eta^{(1)}_t}{2}(\overline x^{(1)}-x^{(1)i})^2+\frac{\eta^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\eta^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber&&+\eta^{(4)}_t(\overline x^{(1)}-x^{(1)i})\overline x^{(1)}+\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})\overline x^{(2)}+\eta^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\eta^{(7)}_t(\overline x^{(1)}-x^{(1)i})+\eta^{(8)}_t\overline x^{(1)}+\eta^{(9)}_t\overline x^{(2)}+\eta^{(10)}_t\label{ansatz-1},
\end{eqnarray}
and the ansatz for $V^{(2)j}$ given by
\begin{eqnarray}
\nonumber V^{(2)j}(t,x)&=&\frac{\phi^{(1)}_t}{2}(\overline x^{(2)}-x^{(2)j})^2+\frac{\phi^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\phi^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber &&+\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(1)}+\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(2)}+\phi^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\phi^{(7)}_t(\overline x^{(2)}-x^{(2)j})+\phi^{(8)}_t\overline x^{(1)}+\phi^{(9)}_t\overline x^{(2)}+\phi^{(10)}_t,\label{ansatz-2}
\end{eqnarray}
where $\eta^{(i)}$ and $\phi^{(j)}$ for $i=1,\cdots,10$ and $j=1,\cdots,10$ are deterministic functions with terminal conditions
\begin{eqnarray*}
&&\eta_T^{(1)}=c_1,\quad \eta_T^{(2)}=c_1\lambda_1^2(\beta_1-1)^2, \quad \eta_T^{(3)}=c_1\lambda_1^2\beta_2^2, \quad \eta_T^{(4)}=c_1\lambda_1(\beta_1-1),\\
&&\eta_T^{(5)}=c_1\lambda_1\beta_2,\quad \eta_T^{(6)}=c_1\lambda_1^2(\beta_1-1)\beta_2,\quad \eta_T^{(7)}=\eta_T^{(8)}=\eta_T^{(9)}=\eta_T^{(10)}=0,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\phi_T^{(1)}=c_2,\quad \phi_T^{(2)}=c_2\lambda_2^2\beta_1^2, \quad \phi_T^{(3)}=c_2\lambda_2^2(\beta_2-1)^2, \quad \phi_T^{(4)}=c_2\lambda_2\beta_1,\\
&&\phi_T^{(5)}=c_2\lambda_2(\beta_2-1),\quad \phi_T^{(6)}=c_2\lambda_2^2\beta_1(\beta_2-1),\quad \phi_T^{(7)}= \phi_T^{(8)}=\phi_T^{(9)}=\phi_T^{(10)}=0,
\end{eqnarray*}
using
\begin{eqnarray*}
(\overline x^{\lambda_1} -x^{(1)i})&=& (\overline x^{(1)}-x^{(1)i})+\lambda_1(\beta_1-1)\overline x^{(1)}+\lambda_1\beta_2\overline x^{(2)},\\
(\overline x^{\lambda_2}-x^{(2)j})&=&(\overline x^{(2)}-x^{(2)j})+\lambda_2\beta_1\overline x^{(1)}+\lambda_2(\beta_2-1)\overline x^{(2)}.
\end{eqnarray*}
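For instance, the terminal conditions for $\eta^{(i)}$ follow by expanding the terminal payoff with the first identity above:
\begin{eqnarray*}
\frac{c_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2
&=&\frac{c_1}{2}(\overline x^{(1)}-x^{(1)i})^2+\frac{c_1\lambda_1^2(\beta_1-1)^2}{2}(\overline x^{(1)})^2+\frac{c_1\lambda_1^2\beta_2^2}{2}(\overline x^{(2)})^2\\
&&+c_1\lambda_1(\beta_1-1)(\overline x^{(1)}-x^{(1)i})\overline x^{(1)}+c_1\lambda_1\beta_2(\overline x^{(1)}-x^{(1)i})\overline x^{(2)}\\
&&+c_1\lambda_1^2(\beta_1-1)\beta_2\,\overline x^{(1)}\overline x^{(2)},
\end{eqnarray*}
and matching with \eqref{ansatz-1} at $t=T$ gives exactly the values of $\eta_T^{(1)},\dots,\eta_T^{(6)}$ listed above, with the remaining $\eta_T^{(i)}$ vanishing; the conditions for $\phi_T^{(i)}$ follow in the same way from the second identity.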
Inserting the ansatz \eqref{ansatz-1} and \eqref{ansatz-2} into the HJB equations \eqref{HJB1} and \eqref{HJB2} and using
\begin{eqnarray*}
\partial_{x^{(1)l}}V^{(1)i}&=&\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(1)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_1}\eta^{(2)}_t\overline x^{(1)}+\frac{1}{N_1}\eta^{(4)}_t(\overline x^{(1)}-x^{(1)i})\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(4)}\overline x^{(1)}+ \left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(5)} \overline x^{(2)}+\frac{1}{N_1}\eta^{(6)}_t\overline x^{(2)}\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)},\\
\partial_{x^{(2)l}}V^{(1)i}&=& \frac{1}{N_2}\eta^{(3)}_t\overline x^{(2)}+\frac{1}{N_2}\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_2}\eta^{(6)}_t\overline x^{(1)}+\frac{1}{N_2}\eta_t^{(9)},
\end{eqnarray*}
\begin{eqnarray*}
\partial_{x^{(1)l}x^{(1)h}}V^{(1)i}&=&\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(1)}_t+\left(\frac{1}{N_1}\right)^2\eta_t^{(2)}\\
&&+\frac{1}{N_1}\left(\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\right)\eta^{(4)}_t,\\
\partial_{x^{(1)l}x^{(2)h}}V^{(1)i}&=& \frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t,\\
\partial_{x^{(2)l}x^{(1)h}}V^{(1)i}&=&\frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t,\\
\partial_{x^{(2)l}x^{(2)h}}V^{(1)i}&=& \left(\frac{1}{N_2}\right)^2\eta_t^{(3)},
\end{eqnarray*}
\begin{eqnarray*}
\partial_{x^{(1)l}}V^{(2)j}&=& \frac{1}{N_1}\phi^{(2)}_t\overline x^{(1)} +\frac{1}{N_1}\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_1}\phi^{(6)}_t\overline x^{(2)}+\frac{1}{N_1}\phi_t^{(8)},\\
\partial_{x^{(2)l}}V^{(2)j}&=&\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi^{(1)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_2}\phi^{(3)}_t\overline x^{(2)} + \left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)} \overline x^{(1)}\\
&&+\frac{1}{N_2}\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(5)}\overline x^{(2)}+\frac{1}{N_2}\phi^{(6)}_t\overline x^{(1)}\\
&&+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)},
\end{eqnarray*}
\begin{eqnarray*}
\partial_{x^{(1)l}x^{(1)h}}V^{(2)j}&=&\left(\frac{1}{N_1}\right)^2\phi_t^{(2)},\\
\partial_{x^{(1)l}x^{(2)h}}V^{(2)j}&=& \frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)},\\
\partial_{x^{(2)l}x^{(1)h}}V^{(2)j}&=& \frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)},\\
\partial_{x^{(2)l}x^{(2)h}}V^{(2)j}&=&\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(1)}+\left(\frac{1}{N_2}\right)^2\phi_t^{(3)}\\
&&+\frac{1}{N_2}\left(\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)+\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\right)\phi_t^{(5)},
\end{eqnarray*}
we get
\begin{eqnarray*}
\partial_tV^{(1)i}&+&\sum_{l=1}^{N_1}\bigg\{\left(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right)(\overline x^{(1)}-x^{(1)l})\\
&&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\overline x^{(1)}\\
&&- \left((\frac{1}{N_1}-1)\eta^{(5)}_t+\frac{1}{N_1}\eta^{(6)}_t-q_1\lambda_1\beta_2\right)\overline x^{(2)}-\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)+\gamma^{(1)}_t\bigg\} \\
&&\bigg\{\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(1)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_1}\eta^{(2)}_t\overline x^{(1)}+\frac{1}{N_1}\eta^{(4)}_t(\overline x^{(1)}-x^{(1)i})\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(4)}\overline x^{(1)}+ \left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(5)} \overline x^{(2)}+\frac{1}{N_1}\eta^{(6)}_t\overline x^{(2)}\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\bigg\}\\
&+&\sum_{l=1}^{N_2}\bigg\{\bigg(q_2+(1- \frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi^{(5)}_t\bigg)(\overline x^{(2)}-x^{(2)l})\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(4)}+\frac{1}{N_2}\phi_t^{(6)}-q_2\lambda_2\beta_1\right)\overline x^{(1)}\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}+q_2\lambda_2(1-\beta_2)\right)\overline x^{(2)}-\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right) +\gamma^{(2)}_t\bigg\}\\
&& \bigg\{\frac{1}{N_2}\eta^{(3)}_t\overline x^{(2)}+\frac{1}{N_2}\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_2}\eta^{(6)}_t\overline x^{(1)}+\frac{1}{N_2}\eta_t^{(9)} \bigg\}\\
&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\bigg\{\rho^2+(1-\rho^2)\rho_1^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_1^2)\bigg\}\\
&&\bigg\{\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(1)}_t+\left(\frac{1}{N_1}\right)^2\eta_t^{(2)}\\
&&+ \frac{1}{N_1}\left(\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\right)\eta^{(4)}_t\bigg\} \\
&+&\frac{\rho^2\sigma_1\sigma_2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\bigg\{\frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t\bigg\}\\
&+&\frac{\rho^2\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1} \bigg\{\frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t\bigg\}\\
&+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2}\bigg\{\rho^2+(1-\rho^2)\rho_2^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\bigg\}\bigg\{ \left(\frac{1}{N_2}\right)^2\eta_t^{(3)}\bigg\}\\
&+&\frac{1}{2}\bigg\{\left((\frac{1}{N_1}-1)\eta^{(1)}_t+\frac{1}{N_1}\eta^{(4)}_t\right)(\overline x^{(1)}-x^{(1)i})+\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t\right)\overline x^{(1)}\\
&&+\left((\frac{1}{N_1}-1)\eta^{(5)}_t+\frac{1}{N_1}\eta^{(6)}_t\right)\overline x^{(2)}+(\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\bigg\}^2\\
&+&\frac{\varepsilon_1-q_1^2}{2}\bigg\{ (\overline x^{(1)}-x^{(1)i})+\lambda_1(\beta_1-1)\overline x^{(1)}+\lambda_1\beta_2\overline x^{(2)}\bigg\}^2=0,
\end{eqnarray*}
for $i=1,\cdots,N_1$ and
\begin{eqnarray*}
\partial_tV^{(2)j}&+&\sum_{l=1}^{N_1}\bigg\{\bigg(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\bigg)(\overline x^{(1)}-x^{(1)l})\\
&&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\overline x^{(1)}\\
&&-\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\overline x^{(2)}-\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)+\gamma^{(1)}_t\bigg\}\\
&& \bigg\{ \frac{1}{N_1}\phi^{(2)}_t\overline x^{(1)} +\frac{1}{N_1}\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_1}\phi^{(6)}_t\overline x^{(2)}+\frac{1}{N_1}\phi_t^{(8)}\bigg\}\\
&+&\sum_{l=1}^{N_2}\bigg\{\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi^{(5)}_t\right)( \overline x^{(2)}-x^{(2)l})\\
&&-\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2} -1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\overline x^{(1)}\\
&&-\left((\frac{1}{N_2}-1)\phi^{(5)}_t+\frac{1}{N_2}\phi^{(3)}_t+q_2\lambda_2(1-\beta_2)\right)\overline x^{(2)}\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)+\gamma^{(2)}_t\bigg\}\\
&& \bigg\{\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi^{(1)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_2}\phi^{(3)}_t\overline x^{(2)} + \left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)} \overline x^{(1)}\\
&&+\frac{1}{N_2}\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(5)}\overline x^{(2)}+\frac{1}{N_2}\phi^{(6)}_t\overline x^{(1)}\\
&&+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\bigg\}\\
&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\bigg\{\rho^2+(1-\rho^2)\rho_1^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\bigg\}\bigg\{\left(\frac{1}{N_1}\right)^2\phi_t^{(2)}\bigg\}\\
&+&\frac{\rho^2\sigma_1\sigma_2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2} \bigg\{\frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)}\bigg\}\\
&+&\frac{\rho^2\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1} \bigg\{\frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)}\bigg\}\\
&+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \bigg\{\rho^2+(1-\rho^2)\rho_2^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\bigg\}\\
&&\bigg\{\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(1)}+\left(\frac{1}{N_2}\right)^2\phi_t^{(3)}\\
&&+\frac{1}{N_2}\left(\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)+\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\right)\phi_t^{(5)}\bigg\}\\
&+& \frac{1}{2}\bigg\{\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi^{(5)}_t\right)(\overline x^{(2)} -x^{(2)j})+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t\right)\overline x^{(1)}\\
&&+\left((\frac{1}{N_2}-1)\phi^{(5)}_t+\frac{1}{N_2}\phi^{(3)}_t\right)\overline x^{(2)}+(\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\bigg\}^2\\
&+&\frac{\varepsilon_2-q_2^2}{2}\bigg\{(\overline x^{(2)}-x^{(2)j})+\lambda_2\beta_1\overline x^{(1)}+\lambda_2(\beta_2-1)\overline x^{(2)}\bigg\}^2=0,
\end{eqnarray*}
for $j=1,\cdots,N_2$.
By identifying the coefficients of the state terms, we obtain that the deterministic functions $\eta^{(i)}$ and $\phi^{(i)}$ for $i=1,\cdots,10$ must satisfy
\begin{eqnarray}
\nonumber \dot\eta^{(1)}_t&=&2\left(q_1+(1-\frac{1}{N_1} )\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right)\eta^{(1)}_t- \left(\frac{1}{N_1}\eta^{(4)}_t+(\frac{1}{N_1}-1)\eta_t^{(1)}\right)^2-(\varepsilon_1-q_1^2)\\
\label{eta1}\\
\nonumber {\dot\eta^{(2)}_t}&=&2\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1} -1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\eta^{(2)}_t- \left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1} -1)\eta_t^{(4)}\right)^2\\
\label{eta2}&&+2\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2(\beta_1-1)^2\\
\nonumber {\dot\eta^{(3)}_t} &=&2\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right)\eta^{(3)}_t-\left(\frac{1}{N_1}\eta_t^{(6)}+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)^2\\
&&+2\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2\beta_2^2
\label{eta3}
\end{eqnarray}
\begin{eqnarray}
\nonumber\dot\eta^{(4)}_t&=&\left(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right) \eta^{(4)}_t+\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\eta^{(4)}_t\\
\nonumber&&-\left(\frac{1}{N_1}\eta^{(4)}_t+(\frac{1}{N_1}-1)\eta_t^{(1)}\right)\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta_t^{(4)}\right)\\
\label{eta4}&&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\eta^{(5)}_t-(\varepsilon_1-q_1^2)\lambda_1(\beta_1-1)\\
\nonumber\dot\eta^{(5)}_t&=&\left(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right) \eta^{(5)}_t+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right)\eta^{(5)}_t\\
\nonumber&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\eta^{(4)}_t\\
\label{eta5}&&-\left(\frac{1}{N_1}\eta^{(4)}_t+(\frac{1}{N_1}-1)\eta_t^{(1)}\right)\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)-(\varepsilon_1-q_1^2)\lambda_1\beta_2\\
\nonumber\dot\eta^{(6)}_t&=&\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\eta^{(6)}_t\\
\nonumber&&+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \eta^{(6)}_t\\
\nonumber&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\eta^{(2)}_t\\
\nonumber &&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta_t^{(4)}\right)\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)\\
&&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right) \eta^{(3)}_t-(\varepsilon_1-q_1^2)\lambda_1^2(\beta_1-1)\beta_2\label{eta6}
\end{eqnarray}
\begin{eqnarray}
\nonumber \dot\eta^{(7)}_t&=&\left(q_1+(1-\frac{1}{N_1})\eta_t^{(1)}-\frac{1}{N_1}\eta_t^{(4)}\right)\eta_t^{(7)}\\
\nonumber &&+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\eta_t^{(4)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\eta_t^{(5)}\\
&&-\left((\frac{1}{N_1}-1)\eta_t^{(1)}+\frac{1}{N_1}\eta_t^{(4)}\right)\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)\label{eta7}\\
\nonumber \dot\eta^{(8)}_t&=&\left(\frac{1}{N_1}\eta_t^{(2)}+(\frac{1}{N_1}-1)\eta_t^{(4)}+q_1\lambda_1(1-\beta_1)\right)\eta_t^{(8)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\eta_t^{(2)}\\
\nonumber&&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta_t^{(4)}\right)\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)\\
\nonumber &&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\eta_t^{(9)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\eta_t^{(6)}\\
\label{eta8}\\
\nonumber \dot\eta^{(9)}_t&=&\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}+q_2\lambda_2(1-\beta_2)\right)\eta_t^{(9)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\eta_t^{(6)}\\
\nonumber &&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}-q_1\lambda_1\beta_2\right)\eta_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\eta_t^{(3)}\\
\label{eta9} &&-\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)\\
\nonumber \dot\eta^{(10)}_t&=&\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma^{(1)}_t\right)\eta_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma^{(2)}_t\right)\eta_t^{(9)}\\
\nonumber &&-\frac{\sigma_1^2}{2}\left((1-\frac{1}{N_1})(1-\rho^2)(1-\rho_1^2)\eta_t^{(1)}+\left(\rho^2+(1-\rho^2)\rho_1^2+\frac{1}{N_1}(1-\rho^2)(1-\rho^2_1)\right)\eta_t^{(2)}\right)\\
\nonumber &&-\rho^2\sigma_1\sigma_2\eta_t^{(6)}-\frac{\sigma_2^2}{2}\left(\rho^2+(1-\rho^2)\rho^2_2+\frac{1}{N_2}(1-\rho^2)(1-\rho_2^2)\right)\eta_t^{(3)}\\
&&-\frac{1}{2}\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)^2,\label{eta10}
\end{eqnarray}
and
\begin{eqnarray}
\nonumber\dot\phi^{(1)}_t&=&2\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi_t^{(5)}\right)\phi^{(1)}_t-\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi_t^{(5)}\right)^2-(\varepsilon_2-q_2^2)\\
\label{phi1}\\
\nonumber {\dot\phi^{(2)}_t}&=&2\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \phi^{(2)}_t- \left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi_t^{(4)}\right)^2 \\
&&+2\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi^{(6)}_t-(\varepsilon_2-q_2^2)\lambda_2^2\beta_1^2 \label{phi2}\\
\nonumber {\dot\phi^{(3)}_t} &=&2\left( \frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \phi^{(3)}_t - \left(\frac{1}{N_2}\phi_t^{(3)}+(\frac{1}{N_2}-1)\phi_t^{(5)}\right)^2\\
&&+2\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right) \phi^{(6)}_t- (\varepsilon_2-q_2^2)\lambda_2^2(\beta_2-1)^2
\label{phi3}
\end{eqnarray}
\begin{eqnarray}
\nonumber\dot\phi^{(4)}_t&=&\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi_t^{(5)}\right)\phi^{(4)}_t+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi^{(5)}_t\\
\nonumber&&-\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi_t^{(5)}\right)\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi_t^{(4)}\right)\\
&&+\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\phi^{(4)}_t-(\varepsilon_2-q_2^2)\lambda_2\beta_1\label{phi4}\\
\nonumber\dot\phi^{(5)}_t&=&\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi_t^{(5)} \right)\phi^{(5)}_t+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi_t^{(5)}+q_2\lambda_2(1-\beta_2)\right)\phi^{(5)}_t\\
\nonumber&&-\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi_t^{(5)}\right)\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi_t^{(5)}\right)\\
\label{phi5}&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\phi_t^{(4)}-(\varepsilon_2-q_2^2)\lambda_2(\beta_2-1)\\
\nonumber\dot\phi^{(6)}_t&=&\left( \frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \phi^{(6)}_t \\
\nonumber&&+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \phi^{(6)}_t \\
\nonumber &&-\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi_t^{(4)} \right) \left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi_t^{(5)}\right)\\
\nonumber&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right) \phi^{(2)}_t +\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi^{(3)}_t\\
\label{phi6}&&- (\varepsilon_2-q_2^2)\lambda_2^2\beta_1(\beta_2-1)\\
\nonumber\dot\phi^{(7)}_t&=&\left(q_2+(1-\frac{1}{N_2})\phi_t^{(1)}-\frac{1}{N_2}\phi_t^{(5)}\right)\phi_t^{(7)}\\
\nonumber &&+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\phi_t^{(4)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\phi_t^{(5)}\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(1)}+\frac{1}{N_2}\phi_t^{(5)}\right)\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)\label{phi7}\\
\nonumber \dot\phi^{(8)}_t&=&\left(\frac{1}{N_1}\eta_t^{(2)}+(\frac{1}{N_1}-1)\eta_t^{(4)}+q_1\lambda_1(1-\beta_1)\right)\phi_t^{(8)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\phi_t^{(2)}\\
\nonumber&&-\left((\frac{1}{N_2}-1)\phi_t^{(4)}+\frac{1}{N_2}\phi_t^{(6)}\right)\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)\\
\nonumber&&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi_t^{(9)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\phi_t^{(6)}\\\label{phi8}\\
\nonumber \dot\phi^{(9)}_t&=&\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}+q_2\lambda_2(1-\beta_2)\right)\phi_t^{(9)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\phi_t^{(6)}\\
\nonumber &&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}-q_1\lambda_1\beta_2\right)\phi_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\phi_t^{(3)}\\
\label{phi9} &&-\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}\right)\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)
\end{eqnarray}
\begin{eqnarray}
\nonumber \dot\phi^{(10)}_t&=& \left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma^{(1)}_t\right)\phi_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma^{(2)}_t\right)\phi_t^{(9)}\\
\nonumber &&-\frac{\sigma_1^2}{2}\left(\rho^2+(1-\rho^2)\rho_1^2+\frac{1}{N_1}(1-\rho^2)(1-\rho_1^2)\right)\phi_t^{(2)}-\rho^2\sigma_1\sigma_2\phi_t^{(6)}\\
\nonumber &&-\frac{\sigma_2^2}{2}\left((1-\frac{1}{N_2})(1-\rho^2)(1-\rho_2^2)\phi_t^{(1)}+\left(\rho^2+(1-\rho^2)\rho_2^2+\frac{1}{N_2}(1-\rho^2)(1-\rho^2_2)\right)\phi_t^{(3)}\right)\\
&&-\frac{1}{2}\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)^2,\label{phi10}
\end{eqnarray}
with terminal conditions
\begin{eqnarray*}
&&\eta_T^{(1)}=c_1,\quad \eta_T^{(2)}=c_1\lambda_1^2(\beta_1-1)^2, \quad \eta_T^{(3)}=c_1\lambda_1^2\beta_2^2, \quad \eta_T^{(4)}=c_1\lambda_1(\beta_1-1),\\
&&\eta_T^{(5)}=c_1\lambda_1\beta_2,\quad \eta_T^{(6)}=c_1\lambda_1^2(\beta_1-1)\beta_2,\quad \eta_T^{(7)}=\eta_T^{(8)}=\eta_T^{(9)}=\eta_T^{(10)}=0,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\phi_T^{(1)}=c_2,\quad \phi_T^{(2)}=c_2\lambda_2^2\beta_1^2, \quad \phi_T^{(3)}=c_2\lambda_2^2(\beta_2-1)^2, \quad \phi_T^{(4)}=c_2\lambda_2\beta_1,\\
&&\phi_T^{(5)}=c_2\lambda_2(\beta_2-1),\quad \phi_T^{(6)}=c_2\lambda_2^2\beta_1(\beta_2-1),\quad \phi_T^{(7)}= \phi_T^{(8)}=\phi_T^{(9)}=\phi_T^{(10)}=0.
\end{eqnarray*}
We now discuss the existence of $\eta^{(i)}$ and $\phi^{(i)}$ for $i=1,\cdots,10$. First, observe that $\eta^{(3)}$, $\eta^{(i)}$ for $i=5,\cdots,10$, $\phi^{(2)}$, $\phi^{(4)}$, and $\phi^{(i)}$ for $i=6,\cdots,10$ satisfy a coupled system of first-order linear equations.
The existence of $\eta^{(i)}$ and $\phi^{(i)}$ for $i=1,\cdots,10$ for sufficiently large $N_1$ and $N_2$ is verified in Proposition \ref{Prop_suff}. Hence, the closed-loop Nash equilibria are given by
\begin{eqnarray}
\label{optimal-finite-ansatz-V1-appen}
\hat\alpha^{(1)i}(t,x)&=&(q_1+\tilde\eta^{(1)}_t)(\overline x^{(1)}-x^{(1)i})+\tilde\eta^{(4)}_t\overline x^{(1)}+\tilde\eta^{(5)}_t\overline x^{(2)}+\tilde\eta_t^{(7)},\\
\hat\alpha^{(2)j}(t,x)&=&(q_2+\tilde\phi^{(1)}_t)(\overline x^{(2)}-x^{(2)j})+\tilde\phi^{(4)}_t\overline x^{(1)}+\tilde\phi^{(5)}_t\overline x^{(2)}+\tilde\phi_t^{(7)},\label{optimal-finite-ansatz-V2-appen}
\end{eqnarray}
where $\tilde\eta^{(i)}$ and $\tilde\phi^{(i)}$ for $i=1,4,5,7$ satisfy (\ref{tildeeta})--(\ref{tildephi}).
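Indeed, \eqref{optimal-finite-ansatz-V1-appen} follows from \eqref{candidate}: inserting the gradient of the ansatz \eqref{ansatz-1} at $l=i$ and the decomposition of $\overline x^{\lambda_1}-x^{(1)i}$ gives
\begin{eqnarray*}
\hat\alpha^{(1)i}(t,x)&=&q_1(\overline x^{\lambda_1}-x^{(1)i})-\partial_{x^{(1)i}}V^{(1)i}\\
&=&\Big(q_1+(1-\tfrac{1}{N_1})\eta^{(1)}_t-\tfrac{1}{N_1}\eta^{(4)}_t\Big)(\overline x^{(1)}-x^{(1)i})\\
&&+\Big(q_1\lambda_1(\beta_1-1)-\tfrac{1}{N_1}\eta^{(2)}_t+(1-\tfrac{1}{N_1})\eta^{(4)}_t\Big)\overline x^{(1)}\\
&&+\Big(q_1\lambda_1\beta_2+(1-\tfrac{1}{N_1})\eta^{(5)}_t-\tfrac{1}{N_1}\eta^{(6)}_t\Big)\overline x^{(2)}+(1-\tfrac{1}{N_1})\eta^{(7)}_t-\tfrac{1}{N_1}\eta^{(8)}_t,
\end{eqnarray*}
so that $q_1+\tilde\eta^{(1)}_t$, $\tilde\eta^{(4)}_t$, $\tilde\eta^{(5)}_t$, and $\tilde\eta^{(7)}_t$ in \eqref{optimal-finite-ansatz-V1-appen} are read off as the four coefficients above; the expression for $\hat\alpha^{(2)j}$ is obtained in the same way from \eqref{ansatz-2}.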
\qed
We then verify that $V^{(1)i}$, $V^{(2)j}$, $\hat\alpha^{(1)i}$, and $\hat\alpha^{(2)j}$ solve the problem (\ref{value-function-1})--(\ref{coupled-2}). Without loss of generality, we state the verification theorem for $V^{(1)i}$.
{\theorem\label{Ver-Thm}(Verification Theorem)\\
Given the strategies $\hat\alpha^{(1)l}$ for $l\neq i$ in \eqref{optimal-finite-ansatz-V1} and $\hat\alpha^{(2)j}$ for $j=1,\cdots,N_2$ in \eqref{optimal-finite-ansatz-V2}, the function $V^{(1)i}$ given by \eqref{ansatz-1} is the value function associated with the problem \eqref{value-function-1} and \eqref{value-function-2} subject to \eqref{coupled-1} and \eqref{coupled-2}, and $\hat\alpha^{(1)i}$ is the optimal strategy for the $i$-th bank in the first group as well as the closed-loop Nash equilibrium.
}
\begin{proof}
Following the notation in \cite{Sun2016}, an admissible strategy $\tilde\alpha$ and its corresponding trajectory $\tilde X$ are given by
\begin{equation}
\tilde\alpha_t=\left(\hat\alpha_t^{(1)1},\cdots, \alpha^{(1)i}_t,\cdots,\hat\alpha_t^{(1)N_1},\hat\alpha_t^{(2)1},\cdots,\hat\alpha_t^{(2)N_2}\right)
\end{equation}
and
\begin{equation}
\tilde X_t=\left(\tilde X_t^{(1)1},\cdots,\tilde X^{(1)i}_t,\cdots,\tilde X_t^{(1)N_1},\tilde X_t^{(2)1},\cdots,\tilde X_t^{(2)N_2}\right).
\end{equation}
In addition, the optimal strategy $\hat\alpha$ and its corresponding trajectory $\hat X$ are written as
\begin{equation}
\hat\alpha_t=\left(\hat\alpha_t^{(1)1},\cdots,\hat\alpha^{(1)i}_t,\cdots,\hat\alpha_t^{(1)N_1},\hat\alpha_t^{(2)1},\cdots,\hat\alpha_t^{(2)N_2}\right)
\end{equation}
and
\begin{equation}
\hat X_t=\left(\hat X_t^{(1)1},\cdots,\hat X^{(1)i}_t,\cdots,\hat X_t^{(1)N_1},\hat X_t^{(2)1},\cdots,\hat X_t^{(2)N_2}\right).
\end{equation}
We claim that for any admissible strategy $\tilde\alpha$,
\begin{equation} \label{upper}
V^{(1)i}(t,x)\leq \mathbb{E}_{t,x}\left\{\int_t^T f^N_{(1)}(\tilde{X}_s, \alpha^{(1)i}_s)ds+g^N_{(1)}(\tilde{X}_T)\right\},
\end{equation}
and that for $\hat{\alpha}$,
\begin{equation} \label{optim}
V^{(1)i}(t,x)= \mathbb{E}_{t,x}\left\{\int_t^T f^N_{(1)}(\hat{X}_s, \hat{\alpha}^{(1)i}_s)ds+g^N_{(1)}(\hat{X}_T)\right\},
\end{equation}
which shows that $\hat{\alpha}^{(1)i}$ is the optimal strategy for the $i$-th bank in the first group. We may assume
\begin{equation} \label{condition}
\mathbb{E}_{t,x}\left\{\int_t^T f^N_{(1)}(\tilde{X}_s, {\alpha}^{(1)i}_s)ds\right\}<\infty,
\end{equation}
since otherwise (\ref{upper}) holds automatically. For $M>0$, define the exit time
\[
\theta_M=\inf\{t;\; |\tilde{X}_t|\geq M \}.
\]
Given the condition \eqref{condition}, in order to complete the proof we shall show that
\begin{equation} \label{nonexplosion}
\mathbb{P}(\theta_M\leq T)\rightarrow 0 \quad\text{as } M\rightarrow \infty,
\end{equation}
and
\begin{equation} \label{ui}
\mathbb{E}_{t, x}[\sup_{t\leq s\leq T}|\tilde{X}_s|^2]<\infty.
\end{equation}
The proof of these two properties is postponed to the end.
Given the optimal strategies $\hat\alpha^{(1)i}$ and $\hat\alpha^{(2)j}$, applying It\^o's formula, we get
\begin{eqnarray}
\nonumber&&V^{(1)i}(T\wedge\theta_M,\tilde{X}_{T\wedge\theta_M})\\
\nonumber&=&V^{(1)i}(t,x)\\
\nonumber &&+\int_t^{T\wedge\theta_M}\bigg\{\partial_sV^{(1)i}(s,\tilde X_s)+
\sum_{l\neq i,l=1}^{N_1}\bigg(\gamma^{(1)}_t+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(1)i}(s,\tilde X_s)
\\
\nonumber&&+ \bigg(\gamma^{(1)}_t+{\alpha^{(1)i}}\bigg)\partial_{x^{(1)i}}V^{(1)i}(s,\tilde X_s)+\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber&&+\frac{(\sigma^1)^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}((\rho^{11})^2+\delta_{(1)l,(1)h}(1-(\rho^{11})^2)) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber&&+\frac{\sigma^1\sigma^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}((\rho^{12})^2+\delta_{(1)l,(2)h}(1-(\rho^{12})^2)) \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber&&+\frac{\sigma^2\sigma^1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}((\rho^{21})^2+\delta_{(2)l,(1)h}(1-(\rho^{21})^2)) \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber &&+ \frac{(\sigma^2)^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} ((\rho^{22})^2+\delta_{(2)l,(2)h}(1-(\rho^{22})^2))\partial_{x^{(2)l}x^{(2)h}}V^{(1)i}(s,\tilde X_s)\bigg\} ds\\
\nonumber&&+\int_t^{T\wedge\theta_M}\sigma^1\sum_{l=1}^{N_1}\partial_{x^{(1)l}}V^{(1)i}(s, \tilde{X}_s) dW^{(1)l}_s+\int_t^{T\wedge\theta_M}\sigma^2\sum_{h=1}^{N_2}\partial_{x^{(2)h}}V^{(1)i}(s, \tilde{X}_s) dW^{(2)h}_s.
\end{eqnarray}
Taking the expectation on both sides and using
\begin{eqnarray} \label{positive}
\nonumber \partial_{t}V^{(1)i} &+&
\sum_{l\neq i,l=1}^{N_1}\bigg(\gamma^{(1)}_t+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(1)i}
+ \bigg(\gamma^{(1)}_t+{\alpha^{(1)i}}\bigg)\partial_{x^{(1)i}}V^{(1)i}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{(\sigma^1)^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}((\rho^{11})^2+\delta_{(1)l,(1)h}(1-(\rho^{11})^2)) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma^1\sigma^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}((\rho^{12})^2+\delta_{(1)l,(2)h}(1-(\rho^{12})^2)) \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma^2\sigma^1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}((\rho^{21})^2+\delta_{(2)l,(1)h}(1-(\rho^{21})^2)) \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}\\
\nonumber &+& \frac{(\sigma^2)^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} ((\rho^{22})^2+\delta_{(2)l,(2)h}(1-(\rho^{22})^2))\partial_{x^{(2)l}x^{(2)h}}V^{(1)i}\\
&+&\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}\left(\overline x^{\lambda_1}-x^{(1)i}\right)+\frac{\varepsilon_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2 \geq 0
\end{eqnarray}
give
\begin{eqnarray}
\nonumber V^{(1)i}(t,x) &\leq&\mathbb{E} \bigg\{\int_t^{T\wedge\theta_M}\left(\frac{(\alpha^{(1)i}_s)^2}{2}-q_1\alpha^{(1)i}_s\left(\overline{\tilde X}^{\lambda_1}_s-\tilde X^{(1)i}_s\right)+\frac{\varepsilon_1}{2}(\overline {\tilde X}^{\lambda_1}_s-\tilde X^{(1)i}_s)^2\right)ds \\
&&+ V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})\bigg\}.\label{ver-ineq}
\end{eqnarray}
There exists a constant $C$ such that
$$
|V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})|\leq C(1+ \sup_{t\leq s\leq T} |\tilde{X}_s|^2),
$$
so the condition \eqref{ui} implies that the family $V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})$ is uniformly integrable. Together with (\ref{nonexplosion}), we obtain, as $M\rightarrow\infty$,
$$
\mathbb{E}_{t, x}[ V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})]\rightarrow \mathbb{E}_{t, x}[g^N_{(1)}(\tilde{X}_T)].
$$
In addition, since the integrand is nonnegative, we get
\begin{eqnarray*}
\nonumber && \mathbb{E} \bigg\{\int_t^{T\wedge\theta_M}\left(\frac{(\alpha^{(1)i}_s)^2}{2}-q_1\alpha^{(1)i}_s\left(\overline{\tilde X}^{\lambda_1}_s-\tilde{X}^{(1)i}_s\right)+\frac{\varepsilon_1}{2}(\overline{\tilde X}^{\lambda_1}_s-\tilde{X}^{(1)i}_s)^2\right)ds \bigg\}\\
&& \leq \mathbb{E} \bigg\{\int_t^{T}\left(\frac{(\alpha^{(1)i}_s)^2}{2}-q_1\alpha^{(1)i}_s\left(\overline{\tilde X}^{\lambda_1}_s-\tilde{X}^{(1)i}_s\right)+\frac{\varepsilon_1}{2}(\overline{\tilde X}^{\lambda_1}_s-\tilde{X}^{(1)i}_s)^2\right)ds \bigg\}.
\end{eqnarray*}
Combining the above results, we obtain
\begin{eqnarray}
V^{(1)i}(t,x) \leq\mathbb{E} \bigg\{\int_t^{T}f^N_{(1)}(\tilde{X}_s, \alpha^{(1)i}_s)ds
+g^N_{(1)}(\tilde{X}_T)\bigg\}.\label{V(t,x)}
\end{eqnarray}
This completes the proof of (\ref{upper}).
In order to prove \eqref{nonexplosion}, we first recall
\begin{equation}\label{coupled-appex}
d\tilde X_t^{(1)i}=(\alpha^{(1)i}_t+\gamma_t^{(1)})dt+ \sigma_1dW^{(1)i}_t
\end{equation}
and
\[
d(\tilde X_t^{(1)i})^2=\left(2\tilde X_t^{(1)i}(\alpha^{(1)i}_t+\gamma^{(1)}_t)+\sigma_1^2\right)dt+2\tilde X_t^{(1)i}\sigma_1dW^{(1)i}_t.
\]
Choose $\beta>0$ large enough that
\[
\beta>2\sup_{0\leq t\leq T}\left\{(\gamma^{(1)}_t)^2+\sigma_1^2+1\right\},
\]
and apply It\^o's formula to $e^{-\beta t}(\tilde X_t^{(1)i})^2$, leading to
\begin{eqnarray*}
&&e^{-\beta(t\wedge\theta_M)}(\tilde X^{(1)i})^2_{t\wedge\theta_M}\\
&\leq&(\tilde X_0^{(1)i})^2+\int_0^{t\wedge\theta_M}e^{-\beta s}\left(-\frac{\beta}{2}((\tilde X_s^{(1)i})^2-1)+|\alpha^{(1)i}_s|^2\right)ds+\int_0^{t\wedge\theta_M}e^{-\beta s}2\sigma_1\tilde X_s^{(1)i}dW^{(1)i}_s.
\end{eqnarray*}
Taking expectations on both sides gives
\begin{equation}
e^{-\beta t}M^2\mathbb{P} (\theta_M\leq t)\leq (\tilde X_0^{(1)i})^2+\frac{\beta }{2}t+\mathbb{E} \left[\int_0^{t\wedge\theta_M}|\alpha^{(1)i}_s|^2ds\right]-\frac{\beta}{2}\mathbb{E} \left[\int_0^{t\wedge\theta_M}(\tilde X_s^{(1)i})^2ds\right].
\end{equation}
By letting $t=T$ and $M\rightarrow \infty$, we obtain \eqref{nonexplosion} and
\begin{equation}\label{condition-X2}
\frac{\beta}{2}\mathbb{E} \left[\int_0^{T}(\tilde X_s^{(1)i})^2ds\right]\leq (\tilde X_0^{(1)i})^2+\frac{\beta}{2}T+\mathbb{E} \left[\int_0^T|\alpha^{(1)i}_s|^2ds\right].
\end{equation}
Applying Doob's martingale inequality and the Cauchy-Schwarz inequality to \eqref{coupled-appex} and using \eqref{condition-X2} imply
\begin{eqnarray}
\nonumber&&\mathbb{E} [\sup_{t\leq s\leq T}|\tilde X^{(1)i}_s|^2]\\
\nonumber&\leq& 2\mathbb{E} \left[\int_t^T|\gamma^{(1)}_s|ds\right]^2+2\mathbb{E} \left[\int_t^T|\alpha^{(1)i}_s|ds\right]^2+2\mathbb{E} \left[\sup_{t\leq u\leq T}\int_t^u2\sigma_1 \tilde X_s^{(1)i}dW^{(1)i}_s\right]^2\\
&\leq&C_1T\mathbb{E} \left[\int_t^T|\gamma^{(1)}_s|^2+|\alpha^{(1)i}_s|^2ds\right]+C_2\mathbb{E} \left[\int_t^T (\tilde X^{(1)i}_s)^2ds\right]< \infty,
\end{eqnarray}
where $C_1$ and $C_2$ are positive constants. This proves \eqref{ui}.
\end{proof}
\section{Proof of Theorem \ref{Hete-open}} \label{Appex-open}
Applying the Pontryagin principle to the problem (\ref{objective}-\ref{diffusions}), we obtain the Hamiltonians
\begin{eqnarray}
\nonumber H^{(1)i}&=&\sum_{k=1}^{N_1}(\gamma_t^{(1)}+\alpha^{(1)k})y^{(1)i,(1)k}+\sum_{k=1}^{N_2}(\gamma^{(2)}_t+\alpha^{(2)k})y^{(1)i,(2)k}\\
&&+\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}(\overline x^{\lambda_1}-x^{(1)i})+\frac{\varepsilon_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber H^{(2)j}&=&\sum_{k=1}^{N_1}(\gamma_t^{(1)}+\alpha^{(1)k})y^{(2)j,(1)k}+\sum_{k=1}^{N_2}(\gamma^{(2)}_t+\alpha^{(2)k})y^{(2)j,(2)k}\\
&&+\frac{(\alpha^{(2)j})^2}{2}-q_2\alpha^{(2)j}(\overline x^{\lambda_2}-x^{(2)j})+\frac{\varepsilon_2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2,
\end{eqnarray}
where the adjoint diffusions $Y_t^{(1)i,(1)l}$, $Y_t^{(1)i,(2)h}$, $Y_t^{(2)j,(1)l}$, and $Y_t^{(2)j,(2)h}$ for $i,l=1,\cdots,N_1$ and $j,h=1,\cdots,N_2$ are given by
\begin{eqnarray}
\label{Y-1-1}\nonumber dY_t^{(1)i,(1)l}&=&-\frac{\partial H^{(1)i}}{\partial x^{(1)l}}(\hat\alpha_t^{(1)i})dt+\sum_{k=0}^2Z_t^{(1)i,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k=1}^{N_1}Z_t^{(1)i,(1)l,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(1)i,(1)l,(2)k}dW_t^{(2)k},\\
\nonumber dY_t^{(1)i,(2)h}&=&-\frac{\partial H^{(1)i}}{\partial x^{(2)h}}(\hat\alpha_t^{(1)i})dt+\sum_{k=0}^2Z_t^{(1)i,(2)h,k}dW_t^{(k)}\\
&&+\sum_{k=1}^{N_1}Z_t^{(1)i,(2)h,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(1)i,(2)h,(2)k}dW_t^{(2)k},
\end{eqnarray}
and
\begin{eqnarray}
\nonumber dY_t^{(2)j,(1)l}&=&-\frac{\partial H^{(2)j}}{\partial x^{(1)l}}(\hat\alpha_t^{(2)j})dt+\sum_{k=0}^2Z_t^{(2)j,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k=1}^{N_1}Z_t^{(2)j,(1)l,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(2)j,(1)l,(2)k}dW_t^{(2)k},\\
\nonumber dY_t^{(2)j,(2)h}&=&-\frac{\partial H^{(2)j}}{\partial x^{(2)h}}(\hat\alpha_t^{(2)j})dt+\sum_{k=0}^2Z_t^{(2)j,(2)h,k}dW_t^{(k)}\\
\label{Y-1-4}&&+\sum_{k=1}^{N_1}Z_t^{(2)j,(2)h,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(2)j,(2)h,(2)k}dW_t^{(2)k}
\end{eqnarray}
with the square-integrable progressive processes
\begin{eqnarray*}
Z_t^{(1)i,(1)l,k}, Z_t^{(1)i,(1)l,(1)k_1}, Z_t^{(1)i,(1)l,(2)k_2}, Z_t^{(1)i,(2)h,k}, Z_t^{(1)i,(2)h,(1)k_1}, Z_t^{(1)i,(2)h,(2)k_2},\\
Z_t^{(2)j,(1)l,k}, Z_t^{(2)j,(1)l,(1)k}, Z_t^{(2)j,(1)l,(2)k}, Z_t^{(2)j,(2)h,k}, Z_t^{(2)j,(2)h,(1)k}, Z_t^{(2)j,(2)h,(2)k},
\end{eqnarray*}
for $i,l=1,\cdots,N_1$, $j,h=1,\cdots,N_2$, and $k=0,1,2$. The terminal conditions are written as
\[
Y_T^{(1)i,(1)l}=c_1\left( \frac{1-\lambda_1}{N_1}+\frac{\lambda_1}{N}-\delta_{(1)i,(1)l}\right)(\overline X^{\lambda_1}_T-X_T^{(1)i}),\,\;Y_T^{(1)i,(2)h}=c_1\frac{\lambda_1}{N}(\overline X^{\lambda_1}_T-X_T^{(1)i}),
\]
and
\[
Y_T^{(2)j,(1)l}=c_2\frac{\lambda_2}{N}(\overline X^{\lambda_2}_T-X_T^{(2)j}),\;Y_T^{(2)j,(2)h}=c_2\left( \frac{1-\lambda_2}{N_2}+\frac{\lambda_2}{N}-\delta_{(2)j,(2)h}\right)(\overline X^{\lambda_2}_T-X_T^{(2)j}).
\]
Minimizing the Hamiltonians with respect to the controls, i.e., solving
\[
\frac{\partial H^{(1)i}}{\partial \alpha^{(1)i}}(\hat\alpha^{(1)i})=0, \quad \frac{\partial H^{(2)j}}{\partial \alpha^{(2)j}}(\hat\alpha^{(2)j})=0,
\]
the Nash equilibria are given by
\begin{eqnarray}
\label{optimal_open-1-1}\hat\alpha^{o,(1)i}&=&q_1(\overline x^{\lambda_1}-x^{(1)i})-y^{(1)i, (1)i},\\
\label{optimal_open-1-2}\hat\alpha^{o,(2)j}&=&q_2(\overline x^{\lambda_2}-x^{(2)j})-y^{(2)j,(2)j},
\end{eqnarray}
so that the optimal forward equations for the banks are
\begin{eqnarray}
\nonumber dX^{(1)i}_t &=& \left(q_1(\overline X_t^{\lambda_1}-X_t^{(1)i})-Y_t^{(1)i, (1)i} +\gamma^{(1)}_{t}\right)dt\\
&&+\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right) ,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber dX^{(2)j}_t &=& \left(q_2(\overline X_t^{\lambda_2}-X_t^{(2)j})-Y_t^{(2)j,(2)j}+\gamma^{(2)}_{t}\right)dt\\
&&+\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right).
\end{eqnarray}
Inserting \eqref{optimal_open-1-1} and \eqref{optimal_open-1-2} into (\ref{Y-1-1}-\ref{Y-1-4}), the adjoint processes are rewritten as
\begin{eqnarray}
\nonumber dY_t^{(1)i,(1)l}&=&\left( \frac{1-\lambda_1}{N_1}+\frac{\lambda_1}{N}-\delta_{(1)i,(1)l}\right)\left\{-(\varepsilon_1-q_1^2)(\overline X^{\lambda_1}_t-X_t^{(1)i})-q_1Y_t^{(1)i,(1)i}\right\}dt\\
\nonumber&&+\sum_{k=0}^2Z_t^{(1)i,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(1)i,(1)l,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(1)i,(1)l,(2)k_2}dW_t^{(2)k_2},
\\
\nonumber dY_t^{(1)i,(2)h}&=&\frac{\lambda_1}{N}\left\{-(\varepsilon_1-q_1^2)(\overline X^{\lambda_1}_t-X_t^{(1)i})-q_1Y_t^{(1)i,(1)i}\right\}dt+\sum_{k=0}^2Z_t^{(1)i,(2)h,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(1)i,(2)h,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(1)i,(2)h,(2)k_2}dW_t^{(2)k_2},
\end{eqnarray}
with the terminal conditions
\[
Y_T^{(1)i,(1)l}=c_1\left( \frac{1-\lambda_1}{N_1}+\frac{\lambda_1}{N}-\delta_{(1)i,(1)l}\right)(\overline X^{\lambda_1}_T-X_T^{(1)i}),\,\;Y_T^{(1)i,(2)h}=c_1\frac{\lambda_1}{N}(\overline X^{\lambda_1}_T-X_T^{(1)i}),
\]
and
\begin{eqnarray}
\nonumber dY_t^{(2)j,(1)l}&=&\frac{\lambda_2}{N}\left\{-(\varepsilon_2-q_2^2)(\overline X^{\lambda_2}_t-X_t^{(2)j})-q_2Y_t^{(2)j,(2)j}\right\}dt+\sum_{k=0}^2Z_t^{(2)j,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(2)j,(1)l,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(2)j,(1)l,(2)k_2}dW_t^{(2)k_2},\\
\nonumber dY_t^{(2)j,(2)h}&=&\left( \frac{1-\lambda_2}{N_2}+\frac{\lambda_2}{N}-\delta_{(2)j,(2)h}\right)\left\{-(\varepsilon_2-q_2^2)(\overline X^{\lambda_2}_t-X_t^{(2)j})-q_2Y_t^{(2)j,(2)j}\right\}dt\\
\nonumber&&+\sum_{k=0}^2Z_t^{(2)j,(2)h,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(2)j,(2)h,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(2)j,(2)h,(2)k_2}dW_t^{(2)k_2},
\end{eqnarray}
with the terminal conditions
\[
Y_T^{(2)j,(1)l}=c_2\frac{\lambda_2}{N}(\overline X^{\lambda_2}_T-X_T^{(2)j}),\;Y_T^{(2)j,(2)h}=c_2\left( \frac{1-\lambda_2}{N_2}+\frac{\lambda_2}{N}-\delta_{(2)j,(2)h}\right)(\overline X^{\lambda_2}_T-X_T^{(2)j}).
\]
We then make the ansatz
\begin{eqnarray}
\nonumber Y_t^{(1)i,(1)l}&=&\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\left(\eta_t^{o,(1)}(\overline X_t^{(1)}-X_t^{(1)i})+\eta_t^{o,(2)}\overline X_t^{(1)}+\eta_t^{o,(3)}\overline X_t^{(2)}+\eta_t^{o,(4)}\right)\\
\label{ansatz_open_1} \\
Y_t^{(1)i,(2)h}&=&\frac{\lambda_1}{N}\left(\eta_t^{o,(1)}(\overline X_t^{(1)}-X_t^{(1)i})+\eta_t^{o,(2)}\overline X_t^{(1)}+\eta_t^{o,(3)}\overline X_t^{(2)}+\eta_t^{o,(4)}\right)
\end{eqnarray}
and
\begin{eqnarray}
Y_t^{(2)j,(1)l}&=&\frac{\lambda_2}{N}\left(\phi_t^{o,(1)}(\overline X_t^{(2)}-X_t^{(2)j})+\phi_t^{o,(2)}\overline X_t^{(1)}+\phi_t^{o,(3)}\overline X_t^{(2)}+\phi_t^{o,(4)}\right)\\
\nonumber Y_t^{(2)j,(2)h}&=&\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\left(\phi_t^{o,(1)}(\overline X_t^{(2)}-X_t^{(2)j})+\phi_t^{o,(2)}\overline X_t^{(1)}+\phi_t^{o,(3)}\overline X_t^{(2)}+\phi_t^{o,(4)}\right)\\
\label{ansatz_open_2}
\end{eqnarray}
where $$\frac{1}{\widetilde N_k}=\frac{1-\lambda_k}{N_k}+\frac{\lambda_k}{N},$$ for $k=1,2$. Differentiating (\ref{ansatz_open_1}-\ref{ansatz_open_2}) and identifying $dY_t^{(1)i,(1)l}$, $dY_t^{(1)i,(2)h}$, $dY_t^{(2)j,(1)l}$, and $dY_t^{(2)j,(2)h}$ with (\ref{Y-1-1}-\ref{Y-1-4}), we find that the deterministic functions $\eta_t^{o,(i)}$ and $\phi_t^{o,(i)}$ for $i=1,\cdots,4$ must satisfy
\begin{eqnarray}
\label{eta_open-1}\dot\eta_t^{o,(1)}&=&\left(2-\frac{1}{\widetilde N_1}\right)q_1\eta_t^{o,(1)}+\left(1-\frac{1}{\widetilde N_1}\right)(\eta_t^{o,(1)})^2-(\varepsilon_1-q_1^2)\\
\nonumber \dot\eta_t^{o,(2)}&=&-\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\eta_t^{o,(2)}-\left(q_2\lambda_1\beta_1+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\eta_t^{o,(3)}\\
&&-q_1\left(\frac{1}{\widetilde N_1}-1\right)\eta_t^{o,(2)}-(\varepsilon_1-q_1^2)\lambda_1(\beta_1-1)\\
\nonumber \dot\eta_t^{o,(3)}&=&-\left(q_1\lambda_2\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\eta_t^{o,(2)}-\left(q_2\lambda_2(\beta_1-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\eta_t^{o,(3)}\\
&&-q_1\left(\frac{1}{\widetilde N_1}-1\right)\eta_t^{o,(3)}-(\varepsilon_1-q_1^2)\lambda_1\beta_2\\
\nonumber \dot\eta_t^{o,(4)}&=&-\left((1-\frac{1}{\widetilde N_1})\eta_t^{o,(4)}+\gamma_t^{(1)}\right)\eta_t^{o,(2)}\\
&&-\left((1-\frac{1}{\widetilde N_2})\phi_t^{o,(4)}+\gamma_t^{(2)}\right)\eta_t^{o,(3)}-q_1\left(\frac{1}{\widetilde N_1}-1\right)\eta_t^{o,(4)}
\end{eqnarray}
\begin{eqnarray}
\dot\phi_t^{o,(1)}&=&\left(2-\frac{1}{\widetilde N_2}\right)q_2\phi_t^{o,(1)}+\left(1-\frac{1}{\widetilde N_2}\right)(\phi_t^{o,(1)})^2-(\varepsilon_2-q_2^2)\\
\nonumber \dot\phi_t^{o,(2)}&=&-\left(q_2\lambda_2(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\phi_t^{o,(2)}-\left(q_2\lambda_2\beta_2+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\phi_t^{o,(3)}\\
&&-q_2\left(\frac{1}{\widetilde N_2}-1\right)\phi_t^{o,(2)}-(\varepsilon_2-q_2^2)\lambda_2\beta_1\\
\nonumber\dot\phi_t^{o,(3)}&=&-\left(q_1\lambda_2\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\phi_t^{o,(2)}-\left(q_2\lambda_2(\beta_1-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\phi_t^{o,(3)}\\
&&-q_2\left(\frac{1}{\widetilde N_2}-1\right)\phi_t^{o,(3)}-(\varepsilon_2-q_2^2)\lambda_2(\beta_2-1)\\
\nonumber\dot\phi_t^{o,(4)}&=&-\left((1-\frac{1}{\widetilde N_1})\eta_t^{o,(4)}+\gamma_t^{(1)}\right)\phi_t^{o,(2)}\\
&&-\left((1-\frac{1}{\widetilde N_2})\phi_t^{o,(4)}+\gamma_t^{(2)}\right)\phi_t^{o,(3)}-q_2\left(\frac{1}{\widetilde N_2}-1\right)\phi_t^{o,(4)}\label{phi_open-4}
\end{eqnarray}
with the terminal conditions
\begin{eqnarray*}
\eta_T^{o,(1)}=c_1,\;\eta_T^{o,(2)}=c_1\lambda_1(\beta_1-1),\;\eta_T^{o,(3)}=c_1\lambda_1\beta_2,\;\eta_T^{o,(4)}=0,\\
\phi_T^{o,(1)}=c_2,\;\phi_T^{o,(2)}=c_2\lambda_2\beta_1,\;\phi_T^{o,(3)}=c_2\lambda_2(\beta_2-1),\;\phi_T^{o,(4)}=0,
\end{eqnarray*}
and the square-integrable progressive processes are given by
\begin{eqnarray*}
&&Z_t^{(1)i,(1)l,0}=\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\rho\left(\sigma_1\eta_t^{o,(2)}+\sigma_2\eta_t^{o,(3)}\right),\\
&&Z_t^{(1)i,(1)l,1}=\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(1)i,(1)l,2}=\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(1)i,(1)l,(1)k_1}=\frac{1}{N_1}\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(1)}\left(\frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2}-\delta_{(1)i,(1)k_1}\right),\\
&&Z_t^{(1)i,(1)l,(2)k_2}=\frac{1}{N_1}\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2},
\end{eqnarray*}
and
\begin{eqnarray*}
&&Z_t^{(1)i,(2)h,0}=\frac{\lambda_1}{N}\rho\left(\sigma_1\eta_t^{o,(2)}+\sigma_2\eta_t^{o,(3)}\right),\\
&&Z_t^{(1)i,(2)h,1}=\frac{\lambda_1}{N}\eta_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(1)i,(2)h,2}=\frac{\lambda_1}{N}\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(1)i,(2)h,(1)k_1}=\frac{\lambda_1}{N}\eta_t^{o,(1)}\left(\frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2}-\delta_{(1)i,(1)k_1}\right),\\
&&Z_t^{(1)i,(2)h,(2)k_2}=\frac{\lambda_1}{N}\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2},
\end{eqnarray*}
and
\begin{eqnarray*}
&&Z_t^{(2)j,(1)l,0}=\frac{\lambda_2}{N}\rho\left(\sigma_1\phi_t^{o,(2)}+\sigma_2\phi_t^{o,(3)}\right),\\
&&Z_t^{(2)j,(1)l,1}=\frac{\lambda_2}{N}\phi_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(2)j,(1)l,2}=\frac{\lambda_2}{N}\phi_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(2)j,(1)l,(1)k_1}=\frac{\lambda_2}{N}\phi_t^{o,(2)} \frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2},\\
&&Z_t^{(2)j,(1)l,(2)k_2}=\frac{\lambda_2}{N}\phi_t^{o,(1)}\left( \frac{1}{N_2}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2}-\delta_{(2)j,(2)k_2}\right),
\end{eqnarray*}
and
\begin{eqnarray*}
&&Z_t^{(2)j,(2)h,0}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\rho\left(\sigma_1\phi_t^{o,(2)}+\sigma_2\phi_t^{o,(3)}\right),\\
&&Z_t^{(2)j,(2)h,1}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(2)j,(2)h,2}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(2)j,(2)h,(1)k_1}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(2)} \frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2},\\
&&Z_t^{(2)j,(2)h,(2)k_2}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(1)}\left( \frac{1}{N_2}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2}-\delta_{(2)j,(2)k_2}\right),
\end{eqnarray*}
for $i,l=1,\cdots,N_1$ and $j,h=1,\cdots,N_2$. Hence, the open-loop Nash equilibria are written as
\begin{eqnarray}
\nonumber \hat\alpha^{o,(1)i}&=&\left(q_1+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(1)}\right)(\overline X_t^{(1)}-X_t^{(1)i})+\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\overline X_t^{(1)}\\
\label{open-app-1}&&+\left(q_1\lambda_1\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_1}\right)\eta_t^{o,(4)},\\
\nonumber \hat\alpha^{o,(2)j}&=&\left(q_2+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(1)}\right)(\overline X_t^{(2)}-X_t^{(2)j})+\left(q_2\lambda_2\beta_1+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\overline X_t^{(1)}\\
\label{open-app-2}&&+\left(q_2\lambda_2(\beta_2-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_2}\right)\phi_t^{o,(4)}.
\end{eqnarray}
Note that the existence of solutions to the coupled ODEs (\ref{eta_open-1}-\ref{phi_open-4}) for sufficiently large $N_1$ and $N_2$ is studied in Proposition \ref{Prop_suff}. Based on the open-loop equilibria (\ref{open-app-1}-\ref{open-app-2}), the lending and borrowing system satisfies a Lipschitz condition, so the existence of a solution to the corresponding FBSDEs can be verified by a fixed point argument; see \cite{Carmona-Fouque2016} for instance.
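Remark (illustrative, not part of the argument): the equation \eqref{eta_open-1} for $\eta_t^{o,(1)}$ decouples from the rest of the system, so its solvability on $[0,T]$ can also be checked numerically by integrating backward from the terminal condition $\eta_T^{o,(1)}=c_1$. The sketch below uses hypothetical parameter values (the paper fixes no numerics), assuming $\varepsilon_1>q_1^2$ so that the backward flow is attracted to the positive root of the right-hand side.

```python
# Hypothetical parameters for illustration only; none are taken from the paper.
q1, eps1, c1, N1_tilde, T = 1.0, 2.0, 1.0, 10.0, 1.0

def f(eta):
    # Right-hand side of the decoupled Riccati equation for eta^{o,(1)}:
    # (2 - 1/N~_1) q_1 eta + (1 - 1/N~_1) eta^2 - (eps_1 - q_1^2)
    return (2.0 - 1.0 / N1_tilde) * q1 * eta \
        + (1.0 - 1.0 / N1_tilde) * eta ** 2 - (eps1 - q1 ** 2)

# Classical RK4, integrating backward in time from eta(T) = c1 (step h < 0).
n = 10_000
h = -T / n
eta = c1
for _ in range(n):
    k1 = f(eta)
    k2 = f(eta + 0.5 * h * k1)
    k3 = f(eta + 0.5 * h * k2)
    k4 = f(eta + h * k3)
    eta += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print(eta)  # eta(0): remains bounded between the stable root and c1
```

With $\varepsilon_1>q_1^2$ the solution stays trapped between the positive zero of the quadratic right-hand side and $c_1$, so no finite-time blow-up occurs on $[0,T]$ for these values.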
\section{Proof of Theorem \ref{Hete-MFG-prop}}\label{Appex-Hete-MFG}
Due to the non-Markovian structure for the given $m_t^{(k)}$, $k=1,\cdots,d$, in order to obtain the $\varepsilon$-Nash equilibrium for the coupled diffusions with common noises, we again apply the adjoint FBSDEs discussed in \cite{CarmonaDelarueLachapelle} and \cite{R.Carmona2013}. The corresponding Hamiltonian is given by
\begin{equation}
H^{k}(t,x,y^{(k)},\alpha)=\sum_{h=1}^d(\alpha^{(h)}+\gamma^{(h)}_t)y^{(k),h}+\frac{(\alpha^{(k)})^2}{2}-q_k\alpha^{(k)}\left(M^{\lambda_k}_t-x^{(k)}\right)+\frac{\varepsilon_k}{2}\left(M^{\lambda_k}_t-x^{(k)}\right)^2,
\end{equation}
where $x=(x^{(1)},\cdots,x^{(d)})$, $y^{(k)}=(y^{k,1},\cdots,y^{k,d})$, and $\alpha=(\alpha^{(1)},\cdots,\alpha^{(d)})$. The Hamiltonian attains its minimum at
\begin{equation}
\hat\alpha^{m,(k)}_t=q_k\left(M^{\lambda_k}_t-x^{(k)}\right)-y^{k,k}.
\end{equation}
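As a quick sanity check (with hypothetical numbers, since none are fixed here), the first-order condition behind this minimizer can be verified directly: the $\alpha^{(k)}$-dependent part of $H^k$ is a convex quadratic, and its derivative vanishes exactly at $\hat\alpha^{m,(k)}$.

```python
import random

# Hypothetical values for q_k, M_t^{lambda_k}, x^{(k)}, y^{k,k}, eps_k.
random.seed(0)
qk, M, xk, ykk, eps_k = 0.7, random.random(), random.random(), random.random(), 1.2

def H(alpha):
    # alpha^{(k)}-dependent part of H^k: y^{k,k} drift term plus running cost.
    return alpha * ykk + 0.5 * alpha ** 2 - qk * alpha * (M - xk) \
        + 0.5 * eps_k * (M - xk) ** 2

alpha_hat = qk * (M - xk) - ykk  # claimed minimizer
h = 1e-6
grad = (H(alpha_hat + h) - H(alpha_hat - h)) / (2 * h)  # central difference
print(abs(grad) < 1e-8)  # first-order condition holds
```

Since $H$ is quadratic in $\alpha^{(k)}$ with positive leading coefficient, the critical point is the global minimum.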
The backward equations satisfy
\begin{eqnarray}
\nonumber dY^{k,l}_t&=&-\partial_{x^{(l)}}H^k(\hat\alpha^{(k)})dt+\sum_{h=0}^dZ^{0,k,l,h}_tdW^{(0),(h)}_t+\sum_{h=1}^dZ^{k,l,h}_tdW^{(h)}_t\\
\nonumber &=&(q_kY^{k,k}_t+(\varepsilon_k-q_k^2)(M^{\lambda_k}_t-X^{(k)}_t))\delta_{k,l}dt+\sum_{h=0}^dZ^{0,k,l,h}_tdW^{(0),(h)}_t+\sum_{h=1}^dZ^{k,l,h}_tdW^{(h)}_t,\\
\label{Y-MFG}
\end{eqnarray}
with the terminal conditions $Y^{k,l}_T=\frac{c_k}{2}(X^{(k)}_T-m^{(k)}_T)\delta_{k,l}$ for $k,l=1,\cdots,d$, where the processes $Z^{0,k,l,h}_t$ and $Z^{k,l,h}_t$ are adapted and square integrable. We make the ansatz for $Y^{k,l}_t$:
\begin{equation}\label{ansatz-MFG}
Y^{k,l}_t=-\left(\eta^{m,(k)}_t(m^{(k)}_t-X_t^{(k)})+\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t\right)\delta_{k,l},
\end{equation}
leading to
\begin{eqnarray}
\nonumber dX^{(k)}_t&=&\bigg\{(q_k+\eta^{m,(k)}_t)(m^{(k)}_t -X_t^{(k)})+\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t+\gamma^{(k)}_t\\
\nonumber&&+q_k\lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m^{(h_1)}_t\bigg\}dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(0),(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)}_t\right)\right),\label{X-MFG-1}\\
\nonumber dm^{(k)}_t&=&\bigg\{\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t+\gamma^{(k)}_t+q_k\lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m^{(h_1)}_t\bigg\}dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2} \rho_{k}dW_t^{(0),(k)} \right). \label{m-hete}
\end{eqnarray}
Inserting the ansatz \eqref{ansatz-MFG} into \eqref{Y-MFG} gives
\begin{eqnarray}
\nonumber dY^{k,l}_t&=&\delta_{k,l}\bigg\{(-q_k\eta_t^{m,(k)}+\varepsilon_k-q_k^2)(m^{(k)}_t-X_t^{(k)})+(\varepsilon_k-q_k^2) \lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m_t^{(h_1)} \\
\nonumber&&-q_k\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t-q_k\mu^{m,(k)}_t\bigg\}dt+\sum_{h=0}^dZ^{0,k,l,h}_tdW^{(0),(h)}_t+\sum_{h=1}^dZ^{k,l,h}_tdW^{(h)}_t,\\\label{Y-MFG-1}
\end{eqnarray}
and applying It\^o's formula to \eqref{ansatz-MFG} and using \eqref{X-MFG-1} and \eqref{m-hete} imply
\begin{eqnarray}
\nonumber dY^{k,l}_t&=&\delta_{k,l}\bigg\{\bigg(-\dot\eta^{m,(k)}_t(m^{(k)}_t-X_t^{(k)})+\eta^{m,(k)}_t (q_k+\eta_t^{m,(k)})(m^{(k)}_t-X^{(k)}_t)\\
\nonumber&&-\dot\mu_t^{m,(k)}-\sum_{h_1=1}^d\dot\psi_t^{m,(k),h_1}m^{(h_1)}_t\\
\nonumber&& -\sum_{h=1}^d\psi_t^{m,(k),h}\left(\sum_{h_1=1}^d(\psi_t^{m,(h),h_1}+q_h\lambda_h(\beta_{h_1}-\delta_{h,h_1}))m^{(h_1)}_t+\mu_t^{m,(h)}+\gamma^{(h)}_t \right)\bigg)dt \\
\nonumber&&+\sum_{h=1}^d\psi_t^{m,(k),h}\sigma_h\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2} \rho_{h}dW_t^{(0),(h)}\right)\\
&&+\eta_t^{m,(k)}\sigma_k\sqrt{1-\rho^2}\sqrt{1-\rho_k^2}dW^{(k)}_t \bigg\}.
\label{Y-MFG-2}
\end{eqnarray}
Similarly, by identifying \eqref{Y-MFG-1} with \eqref{Y-MFG-2}, we find that $\eta_t^{m,(k)}$, $\psi^{m,(k),h}_t$, and $\mu_t^{m,(k)}$ must satisfy (\ref{Hete-eta-MFG}-\ref{Hete-mu-MFG}), and the square-integrable processes $Z^{0,k,l,h}_t$ and $Z^{k,l,h}_t$ satisfy
\begin{equation}
Z^{0,k,l,0}_t=-\eta_t^{m,(k)}\lambda_k\rho\sum_{h_1=1}^d\sigma_{h_1}(\beta_{h_1}-\delta_{k,h_1}+\psi_t^{m,(k),h_1}),\;l=k,\quad Z^{0,k,l,0}_t=0, \; l\neq k,
\end{equation}
and
\begin{equation}
Z^{0,k,l,h}_t=-\eta_t^{m,(k)}\lambda_k\sqrt{1-\rho^2}\sigma_{h }(\beta_{h }-\delta_{k,h}+\psi_t^{m,(k),h}),\;l=k,\quad Z^{0,k,l,h}_t=0, \; l\neq k,
\end{equation}
and
\begin{equation}
Z^{k,l,h}_t=\eta^{m,(k)}_t\sigma_k\sqrt{1-\rho^2}\sqrt{1-\rho_k^2},\;l=k,\quad Z^{k,l,h}_t=0,\; l\neq k.
\end{equation}
By the fixed point argument, the $\varepsilon$-Nash equilibria are given by
\begin{equation}
\hat\alpha_t^{m,(k)}=(q_k+\eta^{m,(k)}_t)(m^{(k)}_t-x^{(k)})+\sum_{h=1}^d\widetilde\psi_t^{m,(k),h}m^{h}_t+\mu^{m,(k)}_t,\quad k=1,\cdots,d,
\end{equation}
where $\widetilde\psi_t^{m,(k),h}=\psi_t^{m,(k),h}+q_k\lambda_k(\beta_h-\delta_{k,h})$.
We now study the existence of solutions to the coupled ODEs (\ref{Hete-eta-MFG}-\ref{Hete-mu-MFG}). Note that the case of two heterogeneous groups is treated in Proposition \ref{Prop_suff}. Observe that \eqref{Hete-eta-MFG} is a Riccati equation without coupling. Given \eqref{Hete-psi-MFG}, the system \eqref{Hete-mu-MFG} is a system of linear ODEs. Hence, it is sufficient to show the existence of a solution to \eqref{Hete-psi-MFG}. Similar to the results in Proposition \ref{Prop_suff}, in the general case of $d$ groups we obtain
\[
\sum_{h_1=1}^d\psi_t^{m,(k),h_1}=0,
\]
implying that, taking $h_1=k$,
\begin{equation}\label{MFG_suff_cond_1}
\psi_t^{m,(k),k}=-\sum_{h_1\neq k}\psi_t^{m,(k),h_1}.
\end{equation}
Now, by inserting \eqref{MFG_suff_cond_1} into \eqref{Hete-psi-MFG}, for $k\neq h_1$, \eqref{Hete-psi-MFG} can be rewritten as
\begin{eqnarray}
\nonumber \dot\psi_t^{m,(k),h_1}&=&q_k\psi_t^{m,(k),h_1}+\sum_{h\neq k}\psi_t^{m,(k),h}\left(\psi_t^{m,(k),h_1}+q_k\lambda_k\beta_{h_1 }\right)\\
\nonumber &&-\sum_{h\neq k}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h(\beta_{h_1 }-\delta_{h,h_1})\right)-(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1} \\
\nonumber &=&q_k(1+\lambda_k\beta_{h_1})\psi_t^{m,(k),h_1}+(\psi_t^{m,(k),h_1})^2\\
\nonumber &&-\sum_{h\neq k,h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1 }\right)\\
\nonumber &&+\psi_t^{m,(k),h_1}\left(\sum_{h\neq h_1}\psi_t^{m,(k),h}+q_{h_1}\lambda_{h_1}(1-\beta_{h_1 })\right)\\
\nonumber &&+\sum_{h\neq k, h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(k),h_1}+q_k\lambda_k\beta_{h_1 }\right)-(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}\\
\nonumber &=&q_k(1+\lambda_k\beta_{h_1})\psi_t^{m,(k),h_1}+(\psi_t^{m,(k),h_1})^2\\
\nonumber &&+\psi_t^{m,(k),h_1}\left(\sum_{h\neq h_1}\psi_t^{m,(k),h}+q_{h_1}\lambda_{h_1}(1-\beta_{h_1 })\right)+\psi_t^{m,(k),h_1}\sum_{h\neq k, h_1}\psi_t^{m,(k),h}\\
&&-\sum_{h\neq k,h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1 }-q_k\lambda_k\beta_{h_1 }\right)-(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}.
\label{C-15}
\end{eqnarray}
We now further assume $c_{\tilde k}\geq \max_{k,h}\left(\frac{q_k\lambda_k}{\lambda_h}-q_h\right)$ for $\tilde k=1,\cdots,d$, so that the term
\begin{equation}\label{C-15-1}
\sum_{h\neq k,h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1 }-q_k\lambda_k\beta_{h_1 }\right)
\end{equation}
remains negative for all $t$, which guarantees $\psi_t^{m,(k),h_1}\geq 0$ for $k\neq h_1$. Note that in the two-group case $d=2$, the sum in \eqref{C-15-1} is empty, so this assumption can be dropped; see Proposition \ref{Prop_suff} for details.
Similarly, using $\check\psi_t^{m,(k),h_1}= \psi_{T-t}^{m,(k),h_1}$, we have
\begin{eqnarray*}
\dot{\check\psi}_t^{m,(k),h_1}&=&-q_k(1+\lambda_k\beta_{h_1})\check\psi_t^{m,(k),h_1}-(\check\psi_t^{m,(k),h_1})^2-\sum_{h\neq k, h_1}\check\psi_t^{m,(k),h}\left(\check\psi_t^{m,(k),h_1}+q_k\lambda_k\beta_{h_1}\right)\\
&&-\check\psi_t^{m,(k),h_1}\left(\sum_{h\neq h_1}\check\psi_t^{m,(k),h}+q_{h_1}\lambda_{h_1}(1-\beta_{h_1})\right)\\
&&+\sum_{h\neq k,h_1}\check\psi_t^{m,(k),h}\left(\check\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}\right)+(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}\\
&\leq &-q_k(1+\lambda_k\beta_{h_1})\check\psi_t^{m,(k),h_1}-(\check\psi_t^{m,(k),h_1})^2\\
&&+\sum_{h\neq k,h_1}\check\psi_t^{m,(k),h}\left(\check\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}\right)+(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}.
\end{eqnarray*}
Let $$\underline \zeta=\min_{k,h}q_k(1+\lambda_k\beta_h),\quad\overline \zeta=\max_{k,h}q_k\lambda_k\beta_h.$$ We have
\begin{eqnarray}\label{psi_ij}
\nonumber \dot{\check\psi}_t^{m,(k),h_1}&\leq&-\underline\zeta \check\psi_t^{m,(k),h_1}-(\check\psi_t^{m,(k),h_1})^2+\frac{1}{2}\sum_{h\neq k,h_1}\left((\check\psi_t^{m,(k),h})^2+(\check\psi_t^{m,(h),h_1})^2\right)\\
&&+\overline \zeta\sum_{h\neq k,h_1}\check\psi_t^{m,(k),h}+(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1} .
\end{eqnarray}
Now, setting $$\check\psi_t=\sum_{k=1,\cdots,d,h_1=1,\cdots,N_k,k\neq h_1}\check\psi_t^{m,(k),h_1}$$ and summing \eqref{psi_ij} leads to
\begin{eqnarray}
\dot{\check\psi}_t&\leq&-\underline\zeta\check\psi_t+\overline \zeta\check\psi_t+\hat\zeta=(\overline \zeta-\underline \zeta)\check\psi_t+\hat\zeta
\end{eqnarray}
implying
\begin{equation}
\check\psi_t\leq\widetilde\zeta e^{(\overline \zeta-\underline \zeta)t}+\frac{\hat\zeta}{\overline \zeta-\underline \zeta}\left(e^{(\overline \zeta-\underline \zeta)t}-1\right),
\end{equation}
where $$\widetilde\zeta=\sum_{k=1,\cdots,d,h_1=1,\cdots,N_k,k\neq h_1}c_k\lambda_k\beta_{h_1},\quad \hat\zeta=\sum_{k=1,\cdots,d,h_1=1,\cdots,N_k,k\neq h_1}(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}.$$
Using $\psi_t^{m,(k),h_1}\geq 0$ for $k\neq h_1$, the proof is complete. \qed
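As an illustrative aside (not part of the original argument), the final bound on $\check\psi_t$ is exactly the solution of the scalar linear comparison ODE $\dot y=(\overline\zeta-\underline\zeta)y+\hat\zeta$ with $y(0)=\widetilde\zeta$. A minimal numerical sketch, with purely illustrative constants, confirming the closed form against a direct integration:

```python
import math

# Illustrative constants (not from the paper): underline/overline zeta,
# hat zeta and tilde zeta, with overline zeta > underline zeta.
zu, zb, zh, zt = 1.0, 1.5, 0.3, 0.2
a = zb - zu                       # growth rate of the comparison ODE

def bound(t):
    # closed-form solution of y' = a*y + zh, y(0) = zt, as in the final estimate
    return zt * math.exp(a * t) + (zh / a) * (math.exp(a * t) - 1.0)

# forward-Euler integration of the same ODE should reproduce the closed form
y, t, dt = zt, 0.0, 1e-4
while t < 1.0:
    y += dt * (a * y + zh)
    t += dt
print(abs(y - bound(1.0)) < 1e-3)
```

The agreement up to discretization error is a quick consistency check on the exponent and the inhomogeneous term in the estimate.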
\end{document}
\begin{document}
\title[Strichartz inequalities for the wave equation]
{Note on Strichartz inequalities for the wave equation with potential}
\author{Seongyeon Kim, Ihyeok Seo and Jihyeon Seok}
\subjclass[2010]{Primary: 35B45; Secondary: 35L05}
\keywords{Strichartz estimates, wave equation}
\address{Department of Mathematics, Sungkyunkwan University, Suwon 16419, Republic of Korea}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\begin{abstract}
We obtain Strichartz inequalities for the wave equation with potentials which behave like the inverse square potential $|x|^{-2}$
but need not be radially symmetric.
\end{abstract}
\maketitle
\section{Introduction}
Consider the following Cauchy problem for the wave equation with a potential $V(x)$:
\begin{equation}\label{WE}
\begin{cases}
-\partial_t^2 u+\Delta u-V(x)u=0,\\
u(x,0)=f(x), \\
\partial_t u(x,0)=g(x),
\end{cases}
\end{equation}
where $u:\mathbb{R}^{n+1}\rightarrow\mathbb{C}$, $V:\mathbb{R}^{n}\rightarrow\mathbb{C}$
and $\Delta$ is the $n$ dimensional Laplacian.
In this paper we are concerned with the Strichartz inequalities for the wave equation \eqref{WE}.
In the free case $V\equiv0$, the following remarkable estimate was first obtained by Strichartz \cite{S}
in connection with Fourier restriction theory in harmonic analysis:
\begin{equation}\label{oneS}
\|u\|_{L^{\frac{2(n+1)}{n-1}}(\mathbb{R}^{n+1})}
\lesssim \|f\|_{\dot{H}^{1/2}(\mathbb{R}^n)} + \|g\|_{\dot{H}^{-1/2}(\mathbb{R}^n)}, \quad n\ge2,
\end{equation}
where $\dot{H}^\gamma$ denotes the homogeneous Sobolev space equipped with the norm
$\|f\|_{\dot{H}^\gamma} = \| (\sqrt{-\Delta})^\gamma f\|_{L^2}$.
Since then, \eqref{oneS} was extended to mixed norm spaces $L_t^qL_x^r$ as follows (see \cite{LS,KT} and references therein):
\begin{equation*}
\|u\|_{L_t^q(\mathbb{R};L_x^r(\mathbb{R}^n))} \lesssim \|f\|_{\dot{H}^{1/2}(\mathbb{R}^n)} + \|g\|_{\dot{H}^{-1/2}(\mathbb{R}^n)}
\end{equation*}
for $q \ge 2$, $2 \le r < \infty$,
\begin{equation}\label{cond}
\frac{2}{q} + \frac{n-1}{r} \le \frac{n-1}{2}\quad \text{and}\quad\frac{1}{q} + \frac{n}{r} = \frac{n-1}{2}.
\end{equation}
Here the second equality in \eqref{cond} is just the gap condition
and the case $q=r$ recovers the classical result \eqref{oneS}.
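As a quick arithmetic aside (not from the paper), one can verify with exact rational arithmetic that the diagonal exponent $q=r=\frac{2(n+1)}{n-1}$ satisfies both conditions in \eqref{cond}, in fact with equality; the helper function below is purely illustrative:

```python
from fractions import Fraction as F

def admissible(q, r, n):
    # Checks the two conditions of the admissible range for data in
    # H^{1/2} x H^{-1/2}: the scaling inequality and the gap equality.
    scaling = 2 / q + F(n - 1) / r <= F(n - 1, 2)
    gap = 1 / q + F(n) / r == F(n - 1, 2)
    return scaling and gap

for n in (2, 3, 4, 5):
    q = r = F(2 * (n + 1), n - 1)  # the classical exponent q = r = 2(n+1)/(n-1)
    print(n, admissible(q, r, n))
```

Exact fractions avoid any floating-point ambiguity in checking the equality constraint.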
Now we turn to the wave equation with a potential.
Several works have treated this potential perturbation of the
free wave equation.
In \cite{BS} the potential is assumed to decay like $|x|^{-4-\varepsilon}$ at infinity.
In \cite{C} this assumption is weakened to $|x|^{-3-\varepsilon}$,
which is in turn improved to $|x|^{-2-\varepsilon}$ in \cite{GV}.
In these papers Strichartz type estimates for the corresponding perturbed wave
equations are established.
But the main interest in the equation \eqref{WE} comes from the case where the potential term is homogeneous of degree $-2$
and therefore scales exactly the same as the Laplacian.
For instance, when $V(x)=a|x|^{-2}$ with a real number $a$, the equation \eqref{WE} arises in the study of wave propagation on
conic manifolds \cite{CT}.
We also note that the heat flow for the operator $-\Delta+a|x|^{-2}$ has been studied
in the theory of combustion \cite{VZ}.
In fact, the decay $|V| \sim |x|^{-2}$ was shown to be critical in \cite{GVV} which concerns explicitly the Schr\"odinger case but can be adapted to the wave equation as well.
For the inverse square potentials $V(x)=a|x|^{-2}$ with $a>-{(n-2)^2}/4$,
Planchon, Stalker and Tahvildar-Zadeh \cite{PST-Z} first obtained the Strichartz inequalities
for the equation \eqref{WE} with radial Cauchy data $f$ and $g$.
Thereafter, this radially symmetric assumption was removed in \cite{BPST-Z}.
More precisely, the range of the admissible exponents $(q,r)$ for the
Strichartz inequalities
\begin{equation*}
\|u\|_{L_t^q(\mathbb{R};\dot{H}^\sigma_r(\mathbb{R}^n))} \lesssim \|f\|_{\dot{H}^{1/2}(\mathbb{R}^n)} + \|g\|_{\dot{H}^{-1/2}(\mathbb{R}^n)}
\end{equation*}
obtained in \cite{PST-Z,BPST-Z} with the gap condition $\sigma=\frac{1}{q} + \frac{n}{r} -\frac{n-1}{2}$ is restricted by
$\frac{2}{q} + \frac{n-1}{r} \le \frac{n-1}{2}$,
which is the same range as for the wave equation without potential.
Here,
$\|f\|_{\dot{H}^\sigma_r} = \|(\sqrt{-\Delta})^\sigma f\|_{L^r}$.
In \cite{BPST-Z2} these results were further extended to potentials which behave like the inverse square potential
but need not be radially symmetric.
Indeed, the potentials considered in \cite{BPST-Z2} are contained in the weak space $L^{n/2,\infty}$.
In this paper we consider the Fefferman-Phong class of potentials which is defined for $1\leq p\leq n/2$ by
\begin{equation*}
V \in {\mathcal{F}}^p \quad \Leftrightarrow \quad \|V\|_{\mathcal{F}^p} = \sup_{x \in \mathbb{R}^n, r>0}r^{2-n/p} \left( \int_{B_r(x)} |V(y)|^p dy \right)^{\frac{1}{p}} < \infty,
\end{equation*}
where $B_r(x)$ denotes the ball centered at $x$ with radius $r$.
Note that $L^{n/2}={\mathcal{F}}^{n/2}$ and $a|x|^{-2} \in L^{n/2, \infty} \subsetneq {\mathcal{F}}^p$ if $1 \le p < n/2$.
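As a small numerical aside (not in the paper), one can check the scale invariance behind this inclusion: for the model potential $|x|^{-2}$, the Fefferman--Phong quantity computed over balls centered at the origin (a lower bound for the full supremum over all centers) is independent of the radius, because the exponents balance exactly. The helper below and its constants are purely illustrative:

```python
import math

def fp_norm_centered(p, n, r):
    # Fefferman-Phong quantity for V(x) = |x|^{-2} over the ball B_r(0),
    # using the exact radial integral (valid for p < n/2, so n - 2p > 0):
    #   int_{B_r} |x|^{-2p} dx = omega_{n-1} * r^{n-2p} / (n - 2p)
    omega = 2 * math.pi ** (n / 2) / math.gamma(n / 2)  # surface area of S^{n-1}
    integral = omega * r ** (n - 2 * p) / (n - 2 * p)
    return r ** (2 - n / p) * integral ** (1 / p)

# The exponents balance, so the quantity does not depend on the radius r.
n, p = 3, 1.2
vals = [fp_norm_centered(p, n, r) for r in (0.1, 1.0, 7.0)]
print(max(vals) - min(vals) < 1e-9)
```

The same cancellation of powers of $r$ is what makes $|x|^{-2}$ the critical decay rate for this scale of spaces.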
Our result is the following theorem.
\begin{thm}\label{thm}
Let $n \ge 3$. Let $u$ be a solution to \eqref{WE} with Cauchy data $(f,g)\in\dot{H}^{1/2}\times\dot{H}^{-1/2}$
and potential $V \in {\mathcal{F}^p}$ with small $\|V\|_{\mathcal{F}^p}$ for $p>(n-1)/2$.
Then we have
\begin{equation}\label{result}
\|u\|_{L_t^q(\mathbb{R};\dot{H}^\sigma_r(\mathbb{R}^n))}\lesssim
(1+ \|V\|_{\mathcal{F}^p}) \Big(\|f\|_{\dot{H}^\frac{1}{2}} +\|g\|_{\dot{H}^{-\frac{1}{2}}}\Big)
\end{equation}
for $q > 2$, $2 \le r < \infty$,
\begin{equation}\label{cond2}
\frac{2}{q} + \frac{n-1}{r} \le \frac{n-1}{2}\quad \text{and}\quad\sigma = \frac{1}{q} + \frac{n}{r} -\frac{n-1}{2}.
\end{equation}
\end{thm}
\begin{rem}
The class of potentials in the theorem is strictly larger than $L^{n/2,\infty}$.
For instance, consider
\begin{equation*}
V(x)= f(\frac{x}{|x|}) |x|^{-2}, \quad f \in L^p(S^{n-1}), \quad (n-1)/2< p< n/2,
\end{equation*}
which is in ${\mathcal{F}^p}$ but not in $L^{n/2, \infty}$.
\end{rem}
Throughout this paper, the letter $C$ stands for a positive constant which may be different
at each occurrence.
We also write $A\lesssim B$ to mean $A\leq CB$
for some unspecified constant $C>0$.
\
\section{Proof of Theorem \ref{thm}} \label{sec2}
In this section we prove the Strichartz inequalities \eqref{result} by making use of a weighted space-time $L^2$ estimate for the wave equation.
We first consider the potential term as a source term and then write the solution to \eqref{WE} as the sum of the solution to
the free wave equation plus a Duhamel term, as follows:
\begin{equation}\label{sol}
u(x,t) = \cos(t\sqrt{-\Delta})f + \frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}g +
\int_{0}^{t} \frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}\big(V(\cdot)u(\cdot, s)\big) ds.
\end{equation}
By the classical Strichartz inequalities for the wave equation (see e.g. \cite{KT}), we see
\begin{equation}\label{homo}
\| e^{it\sqrt{-\Delta}}f\|_{L^q_t \dot{H}^\sigma_r} = \| (\sqrt{-\Delta})^\sigma e^{it\sqrt{-\Delta}}f\|_{L^q_t L^r_x} \lesssim \|f\|_{\dot{H}^{\frac{1}{2}}}
\end{equation}
for $(q,r)$ satisfying $q \ge 2$, $2 \le r < \infty$ and the condition \eqref{cond2}.
Applying \eqref{homo} to \eqref{sol}, we get
\begin{align*}
\|u\|_{{L^q_t \dot{H}^\sigma_r}} &
\lesssim \|f\|_{\dot{H}^{\frac{1}{2}}} + \|g\|_{\dot{H}^{-\frac{1}{2}}} + \bigg\|\int_{0}^{t} \frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}\big(V(\cdot)u(\cdot, s)\big) ds\bigg\|_{L^q_t \dot{H}^\sigma_r}
\end{align*}
for the same $(q,r)$.
Now it remains to show that
\begin{equation*}
\bigg\|\int_{0}^{t} \frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}\big(V(\cdot)u(\cdot, s)\big) ds\bigg\|_{L^q_t \dot{H}^\sigma_r} \lesssim \|V\|_{\mathcal{F}^p} \big(\|f\|_{\dot{H}^{\frac{1}{2}}} + \|g\|_{\dot{H}^{-\frac{1}{2}}}\big)
\end{equation*}
for $(q,r)$ satisfying $q> 2$, $2 \le r < \infty$ and the condition \eqref{cond2}.
By duality, it is sufficient to show that
\begin{equation}\label{d-inhom}
\begin{split}
\bigg< (\sqrt{-\Delta})^\sigma \int_{0}^{t} \frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}&\big(V(\cdot)u(\cdot, s)\big) ds , G \bigg>_{x,t} \\
& \lesssim \|V\|_{\mathcal{F}^p} \big(\|f\|_{\dot{H}^{\frac{1}{2}}} + \|g\|_{\dot{H}^{-\frac{1}{2}}}\big){\|G\|}_{L^{q^\prime }_t L^{r^\prime }_x}.
\end{split}
\end{equation}
The left-hand side of \eqref{d-inhom} is equivalent to
\begin{align}\label{eq1}
\qquad \qquad \int_{\mathbb{R}}\int_{0}^{t}&{\big< (\sqrt{-\Delta})^{\sigma-1}\sin((t-s)\sqrt{-\Delta})\big(V(\cdot)u(\cdot, s)\big), G\, \big>}_{x} dsdt \nonumber
\\&= \int_{\mathbb{R}}\int_{0}^{t}{\big< Vu, (\sqrt{-\Delta})^{\sigma-1} \sin((t-s)\sqrt{-\Delta})G\, \big>}_{x} dsdt \nonumber
\\&=\bigg< V^{1/2} u, V^{1/2} (\sqrt{-\Delta})^{\sigma-1} \int_{s}^{\infty} \sin((t-s)\sqrt{-\Delta})G\,dt \bigg>_{x, s}.
\end{align}
Using H\"older's inequality, \eqref{eq1} is bounded by
\begin{equation*}
\|u\|_{L^2_{x,s}(|V|)} \Big\|(\sqrt{-\Delta})^{\sigma-1}\int_{s}^{\infty}\sin((t-s)\sqrt{-\Delta})\,G\,dt \Big\|_{L^2_{x,s}(|V|)}.
\end{equation*}
Here, $L^2(|V|)$ denotes a weighted space equipped with the norm
$$\|h\|_{L_{x,t}^2(|V|)}= \bigg(\int_{\mathbb{R}^{n+1}} |h(x,t)|^2 |V(x)| dxdt\bigg)^{\frac{1}{2}}.$$
We will show that
\begin{equation}\label{d1}
\|u\|_{L^2_{x,t}(|V|)} \lesssim \|V\|^{1/2}_{\mathcal{F}^p}\big(\|f\|_{\dot{H}^\frac{1}{2}} + \|g\|_{\dot{H}^{-\frac{1}{2}}}\big)
\end{equation}
and
\begin{equation}\label{d2}
\Big\|(\sqrt{-\Delta})^{\sigma-1}\int_{t}^{\infty}\sin((t-s)\sqrt{-\Delta})\,G\,ds \, \Big\|_{L^2_{x,t}(|V|)} \lesssim \|V\|^{1/2}_{\mathcal{F}^p}{\|G\|}_{L^{q^\prime }_t L^{r^\prime }_x}
\end{equation}
for $(q,r)$ satisfying the same conditions in the theorem.
Then the desired estimate \eqref{d-inhom} is proved.
To show \eqref{d1}, we use the following lemma which is a particular case of
Propositions 2.3 and 4.2 in \cite{RV}.
\begin{lem}\label{lem1}
Let $n \ge 3$.
Assume that $V\in\mathcal{F}^p$ for $p > (n-1)/2$.
Then we have
\begin{equation}\label{cos}
\|\cos (t\sqrt{-\Delta}) f\|_{L_{x,t}^2(|V|)} \lesssim \|V\|^{1/2}_{\mathcal{F}^p} \|(\sqrt{-\Delta})^{1/2}f\|_{L^2},
\end{equation}
\begin{equation}\label{sin}
\bigg\|\frac{\sin t\sqrt{-\Delta}}{\sqrt{-\Delta}} g\bigg\|_{L_{x,t}^2(|V|)}
\lesssim \|V\|^{1/2}_{\mathcal{F}^p} \|(\sqrt{-\Delta})^{-1/2}g\|_{L^2},
\end{equation}
and
\begin{equation*}
\bigg\|\int_{0}^{t} \frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}F(\cdot, s)\, ds\bigg\|_{L_{x,t}^2(|V|)}
\lesssim \|V\|_{\mathcal{F}^p} \|F\|_{L_{x,t}^2(|V|^{-1})}.
\end{equation*}
\end{lem}
Indeed, applying Lemma \ref{lem1} to \eqref{sol}, we see
\begin{equation}\label{eq2}
\begin{split}
\|u\|_{L^2_{x,t}(|V|)} \lesssim \|V\|^{1/2}_{\mathcal{F}^p} \big(\|f\|_{\dot{H}^\frac{1}{2}} + \|g\|_{\dot{H}^{-\frac{1}{2}}}\big)+ \|V\|_{\mathcal{F}^p}\|u\|_{L^2_{x,t}(|V|)}.
\end{split}
\end{equation}
Since we are assuming that $\|V\|_{\mathcal{F}^p}$ is small enough, the last term on the right-hand side of \eqref{eq2}
can be absorbed into the left-hand side.
Hence, we get the first estimate \eqref{d1}.
To obtain the second estimate \eqref{d2}, we first note that the first two estimates \eqref{cos} and \eqref{sin} in Lemma \ref{lem1}
directly imply
\begin{equation}\label{lem1-result}
\big\|(\sqrt{-\Delta})^\gamma e^{it\sqrt{-\Delta}} f\big\|_{L_{x,t}^2(|V|)}
\lesssim \|V\|^{1/2}_{\mathcal{F}^p} \big\|(\sqrt{-\Delta})^{\gamma+1/2}f\big\|_{L^2}
\end{equation}
for $\gamma\in\mathbb{R}$.
Using \eqref{lem1-result} and the dual estimate of \eqref{homo}, we then have
\begin{align*}
\Big\|(\sqrt{-\Delta})^{\sigma-1}\int_{\mathbb{R}}\sin((t-s)\sqrt{-\Delta})&\,G\,ds \Big\|_{L^2_{x,t}(|V|)}\\
\nonumber&\lesssim \Big\|({\sqrt{-\Delta}})^{\sigma-1}\int_{\mathbb{R}}e^{i(t-s)\sqrt{-\Delta}} \,G\, ds\Big\|_{L^2_{x,t}(|V|)} \nonumber\\&\lesssim \|V\|^{1/2}_{\mathcal{F}^p} \Big\|({\sqrt{-\Delta}})^{\sigma-1/2}\int_{\mathbb{R}} e^{-is\sqrt{-\Delta}}\,G\, ds\Big\|_{L^2} \nonumber\\&\lesssim \|V\|^{1/2}_{\mathcal{F}^p}\|G\|_{L^{q^\prime }_tL^{r^\prime }_x}
\end{align*}
for $(q,r)$ satisfying $q \ge 2$, $2 \le r < \infty$ and the condition \eqref{cond2}.
We now use the following Christ--Kiselev lemma \cite{CK} to conclude that
\begin{equation}\label{ck}
\Big\|(\sqrt{-\Delta})^{\sigma-1}\int_{-\infty}^{t}\sin((t-s)\sqrt{-\Delta})\, G\,ds \Big\|_{L^2_{x,t}(|V|)} \lesssim ||V||^{1/2}_{\mathcal{F}^p}||G||_{L^{q^\prime }_tL^{r^\prime }_x}
\end{equation}
if $q'<2$, which holds since $q>2$.
\begin{lem}
Let $X$ and $Y$ be two Banach spaces and let $T$ be a bounded linear operator from $L^\alpha(\mathbb{R};X)$ to $L^\beta(\mathbb{R};Y)$
such that
$$Tf(t)=\int_{\mathbb{R}} K(t,s)f(s)ds.$$
Then the operator
$$\widetilde{T}f(t)=\int_{-\infty}^t K(t,s)f(s)ds$$
has the same boundedness when $\beta>\alpha$, and $\|\widetilde{T}\|\lesssim\|T\|$.
\end{lem}
The desired estimate \eqref{d2} follows directly from \eqref{ck} by changing some variables.
This completes the proof.
\
\noindent {\bf{Acknowledgements.}}
This research was supported by NRF-2019R1F1A1061316.
The authors would like to thank Y. Koh for discussions on related issues.
\end{document} | math | 13,190 |
\begin{document}
\title{Peano's Existence Theorem revisited}
\begin{center}
{\large Rodrigo L\'opez Pouso \\
Departamento de
An\'alise Matem\'atica\\
Facultade de Matem\'aticas,\\Universidade de Santiago de Compostela, Campus Sur\\
15782 Santiago de
Compostela, Spain.
}
\end{center}
\begin{abstract}
We present new proofs to four versions of Peano's Existence Theorem for ordinary differential equations and systems. We hope to have gained readability with respect to other usual proofs. We also intend to highlight some ideas due to Peano which are still being used today but in specialized contexts: it appears that the lower and upper solutions method has one of its oldest roots in Peano's paper of 1886.
\end{abstract}
\noindent
{\it ``Le dimostrazioni finora date dell'esistenza degli integrali delle equazioni differenziali lasciano a desiderare sotto l'aspetto della semplicit\`a.''} [``The proofs given so far of the existence of integrals of differential equations leave something to be desired in the aspect of simplicity.'']
\medbreak
\noindent
{\bf G. Peano}, {\it Sull'integrabilit\`a delle equazioni differenziali di primo
ordine}, {Atti. Accad. Sci. Torino, {vol. 21} (1886), 677--685.}
\section{Introduction}
The fundamental importance of Peano's Existence Theorem hardly needs justification when we find it in almost any undergraduate course on the subject. Let us simply point out that Peano's Theorem provides us with a very easily checkable condition to ensure the existence of solutions for complicated systems of ordinary differential equations.
This paper contains a new proof to Peano's Existence Theorem and some other new proofs to not so well--known finer versions of it in the scalar case.
\bigbreak
Before going into detail we need to introduce some notation. Let $t_0 \in {{\Bbb R}}$ and $y_0=(y_{0,1},y_{0,2},\dots, y_{0,n}) \in {{\Bbb R}}^n$ ($n \in {{\Bbb N}}$) be fixed, and let
$$f:\mbox{Dom}(f) \subset {{\Bbb R}}^{n+1} \longrightarrow {{\Bbb R}}^n$$ be defined
in a neighborhood of $(t_0,y_0)$. We consider the initial value pro\-blem
\begin{equation}
\label{ivp}
y'=f(t,y), \quad y(t_0)=y_0,
\end{equation}
which is really a system of $n$ coupled ordinary differential equations along with their corresponding initial conditions.
By a solution of (\ref{ivp}) we mean a function $\varphi:I \longrightarrow {{\Bbb R}}^n$ which is diffe\-ren\-tiable in a nondegenerate real interval $I$, $t_0 \in I$, $\varphi(t_0)=y_0$, and for all $t \in I$ we have $\varphi'(t)=f(t,\varphi(t))$, the corresponding one--sided derivatives being considered at the endpoints of the interval. It is implicitly required that solutions have their graphs inside the domain of the function $f$.
\bigbreak
The most popular version of Peano's Theorem reads as follows:
\bigbreak
\noindent
{\bf Peano's Theorem.} {\it If the function $f$ is continuous in a neighborhood of $(t_0,y_0)$ then the initial value problem (\ref{ivp}) has at least one solution defined in a neighborhood of $t_0$.}
\medbreak
Peano proved the result in dimension $n=1$ first, see \cite{peano}, and then he extended it to systems in \cite{peano2}. Peano's Theorem has attracted the attention of many mathematicians and, as a result, there are many different proofs available today. We can classify them into two fundamental types:
\begin{enumerate}
\item[(A)] Proofs based on the construction of a sequence of approximate solutions (mainly Euler--Cauchy polygons or Tonelli sequences) which converges to some solution.
\item[(B)] Proofs based on fixed point theorems (mainly Schauder's Theorem) applied to the equivalent integral version of (\ref{ivp}).
\end{enumerate}
Both types of proofs (A) and (B) have their advantages and their drawbacks. First, proofs of type (A) are more elementary and hence {\it a priori} more adequate for elementary courses. Second, proofs of type (B) are shorter and much clearer, but they involve more sophisticated results, namely fixed point theorems in infinite dimensional function spaces.
One can find some other types of proofs in the literature, but in our opinion they are not so clear or elementary. For instance, a well--known kind of mix of types (A) and (B) consists in approximating $f$ by polynomials $f_n$ (by virtue of the Stone--Weierstrass Theorem) so that the problems (\ref{ivp}) with $f$ replaced by $f_n$ have unique solutions $y_n$ (by virtue of the Banach contraction principle) which form an approximating sequence. More elementary proofs were devised in the seventies of the 20th century, partly as a reaction to a question posed in \cite{ken}. In \cite{dv, gar, jwal, wwal} we find proofs of type (A) in dimension $n=1$ which avoid the Arzel\`a--Ascoli Theorem. As already stated in those references, similar ideas do not work in higher dimension, at least without further assumptions. We also find in \cite{dv} a proof which uses Perron's method, see \cite{perron}, a refined version of Peano's own proof in \cite{peano}. It is fair to acknowledge that probably the best application of Perron's method to (\ref{ivp}) in dimension $n=1$ is due to Goodman \cite{goo}, who even allowed $f$ to be discontinuous with respect to the independent variable and whose approach has proven efficient in more general settings, see \cite{bp, bs, has, po2}.
\bigbreak
In this paper we present a proof of Peano's Theorem which, in our opinion, takes profit from the most advantageous ingredients of proofs of types (A) and (B) to produce a new one which we find more readable. In particular, our proof in Section 2 meets the following objectives:
\begin{enumerate}
\item It involves an approximating sequence, but we do not have to worry about its convergence or its subsequences at all.
\item It involves a mapping from a space of functions into the reals, thus introducing some elements of functional analysis, but the most sophisticated result we use to study it is the Arzel\`a--Ascoli Theorem.
\item Compactness in the space of continuous functions is conveniently emphasized as a basic ingredient in the proof, but we think that the way we use it (``continuous mappings in compact sets have minima") leads to a more readable proof than the usual (``sequences in compact sets have convergent subsequences").
\end{enumerate}
This paper is not limited to a new proof of Peano's Theorem. With a little extra work and a couple of new ideas, we prove the existence of the least and the greatest solutions for scalar problems (Section 3) and we also study the existence of solutions between given lower and upper solutions (Section 4). Comparison with the literature is discussed in relevant places and examples (some of them new) are given to illustrate the results or their limitations.
\section{Proof of Peano's Theorem}
{\bf A simplification of the problem.} We claim that {\it it suffices to prove the existence of solutions defined on the right of the initial time $t_0$}.
To justify it, assume we have already proven the result for that specific type of solutions and consider the problem with reversed time
\begin{equation}
\label{ivpdos}
y'=g(t,y)=-f(-t,y), \quad y(-t_0)=y_0.
\end{equation}
The function $g$ is continuous in a neighborhood of $(-t_0,y_0)$, and therefore problem (\ref{ivpdos}) has some solution $\phi$ defined for $t \in [-t_0,-t_0+\varepsilon_2]$ ($\varepsilon_2>0$). Hence $ \varphi(t)=\phi(-t)$ solves (\ref{ivp}) in the interval $[t_0-\varepsilon_2,t_0]$. Now it suffices to use any solution defined on the left of $t_0$ and any solution defined on the right to have a solution defined in a neighborhood of $t_0$. The claim is proven.
\bigbreak
In the sequel, $\| \cdot\|$ denotes the maximum norm in ${{\Bbb R}}^n$, i.e., for a vector $x=(x_1,x_2,\dots,x_n) \in {{\Bbb R}}^n$ we define $$\|x\|=\max_{1 \le j \le n}|x_j|.$$
\noindent
{\bf The proof.} As usual, we start by fixing some constants $a>0$ and $b>0$ such that the function $f$ is defined and continuous in the $(n+1)$--dimensional interval $[t_0,t_0+a] \times \{y \in {{\Bbb R}}^n \, : \, \|y-y_0\| \le b\}$, which is a compact subset of ${{\Bbb R}}^{n+1}$. Hence there exists $L>0$ such that
$$\|f(t,y)\|\le L \quad \mbox{whenever $0\le t-t_0 \le a$ and $\|y-y_0\|\le b$.}$$
We now define the real interval $I=[t_0,t_0+c]$ with length
$$c=\min\{a,b/L\},$$
and we consider the set ${\cal A}$ of all functions $\gamma:I \longrightarrow {{\Bbb R}}^n$ such that $\gamma(t_0)=y_0$ and which satisfy a Lipschitz condition with constant $L$, i.e.,
$$\|\gamma(t)-\gamma(s)\| \le L |t-s| \quad \mbox{for all $s,t \in I$.}$$
The previous choice of the constant $c$ guarantees that every function $\gamma \in {\cal A}$ satisfies $\|\gamma(t)-y_0\|\le b$ for all $t \in I$. Hence, for every $\gamma \in {\cal A}$ the composition $t \in I \longmapsto f(t,\gamma(t))$ is well--defined, bounded (by $L$) and continuous in $I$. It is therefore possible to construct a mapping $$F:{\cal A}\longrightarrow [0,+\infty)$$ as follows: for each function $\gamma \in {\cal A}$ we define
the non--negative number
$$F(\gamma)=\max_{t \in I}\left\|\gamma(t)-y_0-\int_{t_0}^t{f(s,\gamma(s)) \, ds}\right\|,$$
which is a sort of measure of how far the function $\gamma$ is from being a solution. In fact,
the Fundamental Theorem of Calculus ensures that if $F(\gamma)=0$ for some $\gamma \in {\cal A}$ then $\gamma$ is a solution of the initial value problem (\ref{ivp}) in the interval $I$ (the converse is also true, but we do not need it for this proof).
It is easy to check that the mapping $F$ is continuous in ${\cal A}$ (equipped with the topology of the uniform convergence in $I$). In turn, by the Arzel\`a--Ascoli Theorem, the domain ${\cal A}$ is compact. Hence $F$ attains a minimum at some $\varphi \in {\cal A}$.
To show that $F(\varphi)=0$ (which implies that $\varphi$ is a solution) it suffices to prove that $F$ assumes arbitrarily small positive values in ${\cal A}$. To do so, we follow Tonelli, and for $k \in {{\Bbb N}}$, $k \ge 2$, we consider the approximate problem\footnote{The differential equations in the approximate problems belong to the class of differential equations {\it with delay}. See, for instance, \cite{sm}.}
$$\left\{
\begin{array}{l}
\mbox{$y(t)=y_0$ for all $t \in [t_0,t_0+c/k]$,}\\
\\
y'(t)=f(t-c/k,y(t-c/k)) \quad \mbox{for all $t \in (t_0+c/k, t_0+c]$.}
\end{array}
\right.$$
This problem has a unique solution $\gamma_k \in {\cal A}$ which we can integrate. Indeed, using induction with the subdivision $t_0,t_0+c/k,t_0+2c/k, \dots,t_0+c$, and changing variables in the corresponding integrals, one can prove that the unique solution satisfies $\|\gamma_k(t)-y_0\| \le b$ ($t \in I$) and
$$\gamma_k(t)=\left\{
\begin{array}{cl}
y_0 & \mbox{for all $t \in [t_0,t_0+c/k]$,}\\
\\
y_0+\int_{t_0}^{t-c/k}{f(s,\gamma_k(s))\, ds} & \mbox{for all $t \in (t_0+c/k, t_0+c]$.}
\end{array}
\right.$$
Therefore, for $t \in [t_0,t_0+c/k]$ we have
$$\left\|\gamma_k(t)-y_0-\int_{t_0}^{t}{f(s,\gamma_k(s)) \, ds}\right\|=\left\|\int_{t_0}^{t}{f(s,y_0) \, ds}\right\| \le \dfrac{Lc}{k},$$
and for $t \in (t_0+c/k,t_0+c]$ we have
$$\left\|\gamma_k(t)-y_0-\int_{t_0}^{t}{f(s,\gamma_k(s)) \, ds}\right\|=\left\|\int_{t-c/k}^{t}{f(s,\gamma_k(s)) \, ds}\right\| \le \dfrac{ Lc}{k}.$$
Hence
$$0 \le F(\varphi) \le F(\gamma_k) \le \dfrac{ L c}{k},$$
thus proving that $F(\varphi)=0$ because $k$ can be chosen as big as we wish. \hbox to 0pt{}
$\rlap{$\sqcap$}\sqcup$\medbreak
The sequence $\{\gamma_k\}_{k \in {{\Bbb N}}}$ constructed in the proof is often referred to as a Tonelli sequence. It is possible to use some other types of minimizing sequences in our proof. In particular, the usual Euler--Cauchy polygons are adequate for the proof too.
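To make the minimizing property concrete, the following sketch (an illustration only, with the hypothetical choice $f(t,y)=y$, $t_0=0$, $y_0=1$, $c=1$) computes the Tonelli iterate $\gamma_k$ on a grid and checks that the defect $F(\gamma_k)$ is of order $Lc/k$, as in the proof:

```python
import math

# Tonelli's delayed scheme for the illustrative problem y' = f(t,y) = y,
# y(0) = 1 on [0,1] (so c = 1): gamma_k = y0 on [0, 1/k], and
# gamma_k'(t) = gamma_k(t - 1/k) afterwards, integrated here by Euler steps.
def tonelli(k, n=10000):
    dt = 1.0 / n
    delay = n // k                 # delay 1/k in grid steps (choose k dividing n)
    g = [1.0] * (delay + 1)        # gamma_k = y0 = 1 on [0, 1/k]
    for i in range(delay, n):
        g.append(g[-1] + dt * g[i - delay])
    return g, dt

def defect(g, dt):
    # F(gamma) = max_t |gamma(t) - y0 - int_0^t f(s, gamma(s)) ds| with f(s,y)=y
    worst, integral = 0.0, 0.0
    for i in range(1, len(g)):
        integral += dt * g[i - 1]
        worst = max(worst, abs(g[i] - 1.0 - integral))
    return worst

for k in (2, 10, 50):
    g, dt = tonelli(k)
    print(k, defect(g, dt) <= math.e / k + 0.01)  # here L ~ e bounds |f| on [0,1]
```

The small additive slack in the comparison accounts for the discretization error of the Euler steps and the Riemann sums.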
\begin{remark}
\label{rem2}
The proof gives us more information than that collected in the statement, which we have preferred to keep in that form for simplicity and clarity.
For completeness and for later purposes, we now point out some consequences. In the conditions of Peano's Theorem, and with the notation introduced in the proof, the following results hold:
\begin{enumerate}
\item If $f:[t_0,t_0+a] \times \{y \in {{\Bbb R}}^n \, : \, \|y-y_0\| \le b\} \to {{\Bbb R}}^n$ is continuous, bounded by $L>0$ on its domain, and $a \le b/L$, then problem (\ref{ivp}) has at least one solution defined on the whole interval $[t_0,t_0+a]$.
\item A specially important consequence of the previous result arises when $f:[t_0,t_0+a]\times {{\Bbb R}}^n \longrightarrow {{\Bbb R}}^n$ is continuous and bounded. In that case we can guarantee that the initial value problem (\ref{ivp}) has at least one solution defined on $[t_0,t_0+a]$.
\end{enumerate}
\end{remark}
Peano's Theorem allows the existence of infinitely many solutions. We owe the following example precisely to Peano.
\bigbreak
\noindent
{\bf Peano's example of a problem with infinitely many solutions.} The scalar problem
\begin{equation}
\label{expe}
y'=3y^{2/3}, \quad y(0)=0,
\end{equation}
has infinitely many solutions. Indeed, one can easily check that $\varphi(t)=0$ and $\psi(t)=t^3$ ($t \in{{\Bbb R}}$) are solutions. Now the remaining solutions are given by, let us say, a combination of those two. Specifically, if $t_1 \le 0 \le t_2$ then the function
$$\phi(t)=\left\{
\begin{array}{cl}
(t-t_1)^3, & \mbox{if $t < t_1$,}\\
0, & \mbox{if $t_1 \le t \le t_2$,} \\
(t-t_2)^3, & \mbox{if $t >t_2$,}
\end{array}
\right.$$
is a solution of (\ref{expe}). The converse is true as well: every solution of (\ref{expe}) is one of those indicated above.
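As a quick numerical illustration (an aside, using the real cube-root convention so that negative values of $y$ are handled correctly), one can check by central differences that the glued function above solves \eqref{expe}; the sample points and tolerances are arbitrary:

```python
def f(y):
    # right-hand side of Peano's example y' = 3 y^(2/3), with the real
    # cube-root convention: y^(2/3) = |y|^(2/3) >= 0
    return 3.0 * abs(y) ** (2.0 / 3.0)

def phi(t, t1=-1.0, t2=1.0):
    # glued solution: (t-t1)^3 left of t1, 0 on [t1,t2], (t-t2)^3 right of t2
    if t < t1:
        return (t - t1) ** 3
    if t > t2:
        return (t - t2) ** 3
    return 0.0

# central-difference check that phi' = f(phi) away from the gluing points
h = 1e-6
for t in (-2.0, -0.5, 0.0, 0.5, 2.0):
    d = (phi(t + h) - phi(t - h)) / (2 * h)
    print(abs(d - f(phi(t))) < 1e-4)
```

The check passes on both cubic branches and on the flat middle piece, reflecting the non-uniqueness discussed above.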
\section{A finer result in dimension one}
In this section we are concerned with a not so well--known version of Peano's Theorem in dimension $n=1$ which ensures the existence of the least and the greatest solutions to (\ref{ivp}). This result goes back precisely to Peano \cite{peano}, where the greatest solution defined on the right of $t_0$ was obtained as the infimum of all {\it strict upper solutions}. A strict upper solution to the problem (\ref{ivp}) on some interval $I$ is, roughly speaking, some function $\beta=\beta(t)$ satisfying $\beta'(t)>f(t,\beta(t))$ for all $t \in I$, and $\beta(t_0)\ge y_0$.
Peano also showed in \cite{peano} that the least solution is the supremum of all strict lower solutions, which we define by reversing all the inequalities in the definition of strict upper solution.
\bigbreak
Notice, for instance, that Peano's example (\ref{expe}) has infinitely many solutions, the least one being
$$\mbox{$\varphi_*(t)=t^3$ for $t<0$,} \quad \mbox{$\varphi_*(t)=0$ for $t \ge 0$,}$$
and the greatest solution being
$$\mbox{$\varphi^*(t)=0$ for $t<0$,} \quad \mbox{$\varphi^*(t)=t^3$ for $t \ge 0.$}$$
\bigbreak
Here we present a very easy proof of the existence of the least and the greatest solutions which does not lean on lower/upper solutions (as in \cite{dv, goo, peano, perron}) or on special sequences of approximate solutions (as in \cite{dv, jwal, wwal}). Basically, we obtain the greatest solution as the solution having the greatest integral. This idea works in other settings, see \cite{fp}.
\begin{theorem}
\label{t2} (Second version of Peano's Theorem)
Consider problem (\ref{ivp}) in dimension $n=1$, and assume that there exist constants $a, \, b, \, L \in (0,+\infty)$ such that the function
$$f:[t_0,t_0+a] \times [y_0-b,y_0+b] \longrightarrow {{\Bbb R}}$$ is continuous and $|f(t,y)| \le L$ for all $(t,y) \in [t_0,t_0+a] \times [y_0-b,y_0+b]$.
Then there exist solutions of (\ref{ivp}) $\varphi_*,\varphi^*:I=[t_0,t_0+c] \longrightarrow {{\Bbb R}}$, where $c=\min\{a,b/L\}$, such that every solution of (\ref{ivp}) $\varphi:I \longrightarrow {{\Bbb R}}$ satisfies
$$\varphi_*(t) \le \varphi(t) \le \varphi^*(t) \quad \mbox{for all $t \in I$.}$$
\end{theorem}
\noindent
{\bf Proof.} Let us consider the set of functions ${\cal A}$ introduced in the proof of Peano's Theorem in Section 2 and adapted to dimension $n=1$, i.e., the set of all functions $\gamma:I=[t_0,t_0+c]\longrightarrow {{\Bbb R}}$ such that $\gamma(t_0)=y_0$ and
$$|\gamma(t)-\gamma(s)| \le L |t-s| \quad \mbox{for all $s,t \in I$.}$$
Let ${\cal S}$ denote the set of solutions of (\ref{ivp}) defined on $I$. Our first version of Peano's Theorem ensures that ${\cal S}$ is not an empty set. Moreover standard arguments show that ${\cal S} \subset {\cal A}$ and that ${\cal S}$ is a compact subset of ${\cal C}(I)$. Hence the continuous mapping
$${\cal I}: \varphi \in {\cal S} \longmapsto {\cal I}(\varphi)=\int_{t_0}^{t_0+c}{\varphi(s) \, ds}$$
attains a maximum at some $\varphi^* \in {\cal S}$.
Let us show that $\varphi^*$ is the greatest solution of (\ref{ivp}) on the interval $I$. Reasoning by contradiction, assume that we have a solution $\varphi:I \longrightarrow {{\Bbb R}}$ such that $\varphi(t_1)> \varphi^*(t_1)$ for some $t_1 \in (t_0,t_0+c)$. Since $\varphi(t_0)=y_0=\varphi^*(t_0)$ we can find
$t_2 \in [t_0,t_1)$ such that $\varphi(t_2)=\varphi^*(t_2)$ and $\varphi> \varphi^*$ on $(t_2,t_1]$. We now have two possibilities on the right of $t_1$: either $\varphi > \varphi^*$ on $(t_2,t_0+c)$, or there exists $t_3 \in (t_1,t_0+c)$ such that $\varphi> \varphi^*$ on $(t_2,t_3)$ and $\varphi(t_3)=\varphi^*(t_3)$. Let us assume the latter (the proof is similar in the other situation) and consider the function
$$\varphi_1: t \in I \longmapsto \varphi_1(t)=\left\{
\begin{array}{cl}
\varphi(t), & \mbox{if $t \in [t_2,t_3]$,}\\
\\
\varphi^*(t), & \mbox{otherwise.}
\end{array}
\right.$$
Elementary arguments with one--sided derivatives show that $\varphi_1 \in {\cal S}$. Moreover $\varphi^* \le \varphi_1$ in $I$, with strict inequality in a subinterval, hence
$${\cal I}(\varphi^*) < {\cal I}(\varphi_1),$$
but this is a contradiction with the choice of $\varphi^*.$
Similarly, one can prove that ${\cal I}$ attains a minimum at some $\varphi_* \in {\cal S}$, and that $\varphi_*$ is the least element of ${\cal S}$.
\hbox to 0pt{}
$\rlap{$\sqcap$}\sqcup$\medbreak
\medbreak
Can Theorem \ref{t2} be adapted to systems? Yes, it can, but more than continuity must be required of the function $f$, as we will specify below. The need for extra conditions is easily justified with examples of the following type.
\begin{example}
\label{ex1}
Consider the system
$$\left\{
\begin{array}{ll}y_1'=3y_1^{2/3}, & y_1(0)=0,\\
\\
y_2'=-y_1, & y_2(0)=0.
\end{array}
\right.$$
The first problem can be solved independently, and it has infinitely many solutions (this is Peano's example again). Notice that the greater the solution we choose for $y_1$, the smaller the corresponding $y_2$ becomes on the right of $t_0=0$. Therefore this system does not have a solution which is greater than the other ones {\it in both components.}
\end{example}
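The one-parameter family of solutions in Example \ref{ex1} can be checked numerically. The following Python sketch is our own illustration (the function names and finite-difference tolerances are not from the text): for each switching time $a \ge 0$ the pair $y_1(t)=\max(t-a,0)^3$, $y_2(t)=-\max(t-a,0)^4/4$ solves the system, and a larger first component forces a smaller second component.

```python
# Hedged illustration of Example ex1: for each switching time a >= 0 the pair
#   y1(t) = max(t - a, 0)**3,   y2(t) = -max(t - a, 0)**4 / 4
# solves y1' = 3 y1^(2/3), y2' = -y1 with zero initial data.

def y1(t, a):
    return max(t - a, 0.0) ** 3

def y2(t, a):
    return -max(t - a, 0.0) ** 4 / 4.0

def residuals(a, t, h=1e-6):
    # centered finite-difference check of both ODEs at time t
    d1 = (y1(t + h, a) - y1(t - h, a)) / (2 * h)
    d2 = (y2(t + h, a) - y2(t - h, a)) / (2 * h)
    return abs(d1 - 3 * y1(t, a) ** (2 / 3)), abs(d2 + y1(t, a))

for a in (0.0, 0.5, 1.0):
    r1, r2 = residuals(a, t=1.5)
    assert r1 < 1e-4 and r2 < 1e-4

# the bigger y1 (smaller a), the smaller y2: no solution dominates in both components
assert y1(2.0, 0.0) > y1(2.0, 1.0) and y2(2.0, 0.0) < y2(2.0, 1.0)
print("family verified")
```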
Notice however that the system in Example \ref{ex1} has a solution whose first component is greater than the first component of any other solution, and the same is true replacing ``greater" by ``smaller" or ``first component" by ``second component". This observation leads us naturally to the following question: if, in the conditions of Peano's Existence Theorem, we fix a component $i \in \{1,2,\dots,n\}$, can we ensure the existence of a solution with the greatest $i$-th component? The following example answers this question in the negative.
\begin{example}
Let $\phi:{{\Bbb R}} \longrightarrow {{\Bbb R}}$ be a continuously differentiable function such that $\phi(0)=0$ and $\phi$ assumes both negative and positive values in every neighborhood of $0$ (hence $\phi'(0)=0$)\footnote{A paradigm is $\phi(x)=x^3 \sin(1/x)$ for $x \neq 0$, and $\phi(0)=0$.}.
The idea is to construct a system whose solutions have a component which is a translation of $\phi$, and then those specific components cannot be compared. To do so it suffices to consider the two dimensional system
$$\left\{
\begin{array}{ll}y_1'=3y_1^{2/3}, & y_1(0)=0,\\
\\
y_2'=\phi' \left(y_1^{1/3} \right), & y_2(0)=0.
\end{array}
\right.$$
Let $\varepsilon >0$ be fixed; we are going to prove that there is no solution of the system whose second component is greater than the second component of any other solution on the whole interval $[0,\varepsilon]$.
First, note that we can compute all the solutions. For each $a \in [0,\varepsilon]$ we have a solution $\varphi(t)=(\varphi_1(t),\varphi_2(t))$ given by
$$\mbox{$\varphi_1=0$ on $[0,a]$} \quad \mbox{and} \quad \mbox{$\varphi_1(t)=(t-a)^3$ for $t \in [a,\varepsilon]$,}$$
and then it suffices to integrate the second component to obtain $\varphi_2=0$ on $[0,a]$, and
$$\varphi_2(t)=\int_a^t{\phi'(s-a) \, ds}=\phi(t-a) \quad \mbox{for $t \in [a,\varepsilon]$.}$$
Conversely, every solution of the system is given by the previous expression for an adequate value of $a \in [0,\varepsilon]$.
Now let us consider two arbitrary solutions of the system. They are given by the above formulas for some corresponding values $a=b$ and $a=b'$, with $0 \le b<b'\le \varepsilon$, and then their respective second components cannot be compared in the subinterval $(b,b')$.
\end{example}
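The incomparability claimed in this example is easy to observe numerically. In the hedged Python sketch below (our own illustration), we take $\phi(x)=x^3\sin(1/x)$ from the footnote and sample the difference of the second components of the solutions with parameters $b<b'$; both signs occur on $(b,b')$.

```python
# Hedged numerical companion to the example: with phi(x) = x^3 sin(1/x),
# phi(0) = 0, the second components of two solutions (parameters b < b')
# cannot be compared on (b, b'). Names and sample points are ours.
import math

def phi(x):
    return x ** 3 * math.sin(1.0 / x) if x != 0.0 else 0.0

def second_component(t, a):
    # the solution with parameter a: zero on [0, a], phi(t - a) afterwards
    return phi(t - a) if t >= a else 0.0

b, bp = 0.0, 0.05          # two switching times, b < b'
signs = set()
for k in range(1, 2000):   # sample t in (b, b')
    t = b + (bp - b) * k / 2000.0
    diff = second_component(t, b) - second_component(t, bp)
    if diff != 0.0:
        signs.add(diff > 0.0)

# both orderings occur, so neither second component dominates the other
assert signs == {True, False}
print("second components are not comparable")
```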
The previous example still has a solution with the greatest first component, but, in the author's opinion, this is just a consequence of the fact that the first equation in the system is uncoupled and we can solve it independently (in particular, Theorem \ref{t2} applies). However this remark raises the open problem of finding a two dimensional system which has neither a solution with the greatest first component nor a solution with the greatest second component.
\bigbreak
A multidimensional version of Theorem \ref{t2} is valid if the nonlinear part
$$f(t,y)=(f_1(t,y),f_2(t,y),\dots, f_n(t,y))$$
is {\it quasimonotone nondecreasing}, i.e., if for each component $i \in \{1,2,\dots,n\}$ the relations $y_j \le \bar y_j$, $j \neq i$, imply
$$f_i(t,(y_1,\dots,y_{i-1},y,y_{i+1},\dots,y_n)) \le f_i(t,(\bar y_1,\dots,\bar y_{i-1},y,\bar y_{i+1},\dots,\bar y_n)).$$
The reader is referred to \cite{bp, bs, has, hl, wal, wwal} and references therein for more information on quasimonotone systems.
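As an illustration of the definition, quasimonotonicity can be checked by brute force on a grid. The Python sketch below is our own, with an illustrative matrix: a cooperative linear system $f(t,y)=Ay$ is quasimonotone nondecreasing precisely when the off-diagonal entries of $A$ are nonnegative.

```python
# Hedged sketch: brute-force test of the quasimonotone-nondecreasing condition
# for a sample linear system f(t, y) = A y. The matrix A and the grid are
# illustrative choices of ours, not from the text.
import itertools

A = [[-1.0, 2.0],
     [0.5, -3.0]]          # off-diagonal entries >= 0  =>  quasimonotone

def f(t, y):
    return [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]

def is_quasimonotone(f, grid):
    # fixing component i, increasing the OTHER components must not decrease f_i
    for y in itertools.product(grid, repeat=2):
        for yb in itertools.product(grid, repeat=2):
            for i in range(2):
                others_leq = all(y[j] <= yb[j] for j in range(2) if j != i)
                if others_leq and y[i] == yb[i]:
                    if f(0.0, list(y))[i] > f(0.0, list(yb))[i] + 1e-12:
                        return False
    return True

grid = [-1.0, 0.0, 0.5, 1.0]
assert is_quasimonotone(f, grid)
print("sample system is quasimonotone nondecreasing")
```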
\section{The power of lower and upper solutions: Existence for nonlocal problems}
The real power of lower and upper solutions reveals itself when we want to guarantee the existence of a solution to (\ref{ivp}) on a given interval, and not merely on an unknown (possibly very small) neighborhood of $t_0$.
Let $a>0$ be fixed, let $f:[t_0,t_0+a]\times {{\Bbb R}} \longrightarrow {{\Bbb R}}$ be continuous, and consider the nonlocal problem
\begin{equation}
\label{ivp2}
y'=f(t,y) \quad \mbox{for all $t \in I=[t_0,t_0+a]$,} \quad y(t_0)=y_0.
\end{equation}
In this section we follow Goodman \cite{goo}, who considered the Carath\'eodory version of (\ref{ivp2}) and proved that the greatest solution is the supremum of all (nonstrict) lower solutions, while the least solution is the infimum of all upper solutions.\footnote{Note that we can find in the literature some other denominations for upper (lower) solutions, such as upper (lower) functions, or superfunctions (subfunctions).}
In this paper, by upper solution we mean a function $\beta:I \longrightarrow {{\Bbb R}}$ which is continuously differentiable in the interval $I$, $\beta(t_0) \ge y_0$, and $\beta'(t) \ge f(t, \beta(t))$ for all $t \in I$. We define a lower solution in an analogous way reversing the corresponding inequalities.
\bigbreak
Our first result is quite simple and well--known. Indeed, it has a standard concise proof as a corollary of Peano's Existence Theorem (one has to use the second observation in Remark \ref{rem2}). However it can also be proven independently, as we are going to show, and then Peano's Existence Theorem in dimension $n=1$ will follow as a corollary (see Remark \ref{remp}). To sum up, the following proof is yet another proof of Peano's Theorem in dimension $n=1$, which readers might want to compare with those in \cite{dv, gar, jwal, wwal}. The main difference with respect to the proofs in \cite{dv, gar, jwal, wwal} is that we need to produce approximate solutions between given lower and upper solutions, which we do with the aid of Lemma \ref{lema}.
\begin{theorem}
\label{tss1} (Third version of Peano's Theorem)
Suppose that problem (\ref{ivp2}) has a lower solution $\alpha$ and an upper solution $\beta$ such that $\alpha(t) \le \beta(t)$ for all $t \in I$.
Then problem (\ref{ivp2}) has at least one solution $\varphi:I \longrightarrow {{\Bbb R}}$ such that
$$ \alpha(t) \le \varphi(t) \le \beta(t) \quad \mbox{for all $t \in I$.}$$
\end{theorem}
\noindent
{\bf Proof.} Let $L>0$ be fixed so that
$$|f(t,y)| \le L \quad \mbox{for all $(t,y) \in I \times {{\Bbb R}}$ such that $\alpha(t) \le y \le \beta(t)$,}$$
and $\max \{|\alpha'(t)|, |\beta'(t)| \} \le L$ for all $t \in I$. Let us define the set ${\cal A}$ of all continuous functions $\gamma:I \longrightarrow {{\Bbb R}}$ such that $\alpha \le \gamma \le \beta$ on $I$ and
$$|\gamma(t) -\gamma(s)| \le L |t-s| \quad \mbox{for all $s,t \in I$.}$$
The choice of $L$ ensures that $\alpha, \, \beta \in {\cal A}$, so ${\cal A}$ is not empty. Moreover, the set ${\cal A}$ is a connected subset of ${\cal C}(I)$ (convex, actually), and the Arzel\`a--Ascoli Theorem implies that ${\cal A}$ is compact. Hence the mapping defined by
$$F(\gamma)=\max_{t \in I}\left|\gamma(t)-y_0-\int_{t_0}^t{f(s,\gamma(s)) \,ds} \right| \quad \mbox{for each $\gamma \in {\cal A},$}$$
attains a minimum at some $\varphi \in {\cal A}$. We are going to prove that $F(\varphi)=0$, thus showing that $\varphi$ is a solution in the conditions of the statement. To do so, we show that we can find functions $\gamma \in {\cal A}$ such that $F(\gamma)$ is as small as we wish. The construction of such functions leans on the following lemma.
\begin{lemma}
\label{lema}
For all $t_1, t_2 \in I$ such that $t_1 < t_2$, and all $y_1 \in [\alpha(t_1),\beta(t_1)]$, there exists $\gamma \in {\cal A}$ such that
$$\gamma(t_1)=y_1 \quad \mbox{and} \quad \gamma(t_2)=y_1+\int_{t_1}^{t_2}{f(s,\gamma(s)) \, ds}.$$
\end{lemma}
\noindent
{\bf Proof of Lemma \ref{lema}.} Let $t_1,t_2$ and $y_1$ be as in the statement. We define a set of functions ${\cal A}_1=\{\gamma \in {\cal A} \, : \, \gamma(t_1)=y_1 \}$.
The set ${\cal A}_1$ is not empty: an adequate convex linear combination of $\alpha$ and $\beta$ assumes the value $y_1$ at $t_1$, and then it belongs to ${\cal A}_1$.
Let us consider the mapping $G: {\cal A}_1 \longrightarrow {{\Bbb R}}$, defined for each $\gamma \in {\cal A}_1$ as
$$G(\gamma)=\gamma(t_2)-y_1-\int_{t_1}^{t_2}{f(s,\gamma(s)) \, ds}.$$
To finish the proof it suffices to show that $G(\gamma)=0$ for some $\gamma \in {\cal A}_1$.
The mapping $G$ is continuous in ${\cal A}_1$, which is connected, hence $G({\cal A}_1)$ is a connected subset of the reals, i.e., an interval\footnote{This is not true in dimension $n>1$, so this approach does not work in that case.}. Therefore to ensure the existence of some $\gamma \in {\cal A}_1$ such that $G(\gamma)=0$ it suffices to prove the existence of functions $\tilde \alpha$ and $\tilde \beta$ in ${\cal A}_1$ such that $G(\tilde \alpha) \le 0 \le G(\tilde \beta)$.
Next we show how to construct one such $\tilde \alpha$ from $\alpha$ (the construction of $\tilde \beta$ from $\beta$ is analogous and we omit it). If $\alpha(t_1)=y_1$ we simply take $\tilde \alpha=\alpha$. If, on the other hand, $\alpha(t_1) < y_1$, then we define $\tilde \alpha$ in ``three (or two) pieces'': first, we define $\tilde \alpha$ on $[t_0,t_1]$ as an adequate convex linear combination of $\alpha$ and $\beta$ so that $\tilde \alpha(t_1)=y_1$; second, on the interval $[t_1,t_3]$ we let the graph of $\tilde \alpha$ be the line with slope $-L$ starting at the point $(t_1,y_1)$, where $t_3$ is the first point in $(t_1,t_0+a)$ at which this line intersects the graph of $\alpha$; finally, we continue with $\tilde \alpha=\alpha$ on $[t_3,t_0+a]$. If no such $t_3$ exists, then $\tilde \alpha$ is simply the line with slope $-L$ on the whole interval $[t_1,t_0+a]$. Verifying that $\tilde \alpha \in {\cal A}_1$ and that $G(\tilde \alpha) \le 0$ is just routine. The proof of Lemma \ref{lema} is complete. \hbox to 0pt{}
$\rlap{$\sqcap$}\sqcup$\medbreak
\bigbreak
Now we carry on with the final part of the proof of Theorem \ref{tss1}. Let $\varepsilon>0$ be fixed and consider a partition of the interval $[t_0,t_0+a]$, say $t_0,t_1, \dots, t_k=t_0+a$ ($k \in {{\Bbb N}}$), such that
$$0<t_j-t_{j-1}<\dfrac{\varepsilon}{2L} \quad \mbox{for all $j \in \{1,2,\dots,k\}$.}$$
Lemma \ref{lema} guarantees that we can construct (piece by piece) a function $\gamma_{\varepsilon} \in {\cal A}$ such that $\gamma_{\varepsilon}(t_0)=y_0$ and
$$\gamma_{\varepsilon}(t_j)=\gamma_{\varepsilon}(t_{j-1})+\int_{t_{j-1}}^{t_j}{f(s,\gamma_{\varepsilon}(s)) \, ds} \quad \mbox{for all $j \in \{1,2,\dots, k\}$.}$$
Now for each $t \in (t_0,t_0+a]$ there is a unique $j \in \{1,2,\dots, k\}$ such that $t \in (t_{j-1},t_j]$ and then
\begin{align*}
\left| \gamma_{\varepsilon}(t)-y_0-\int_{t_0}^t{f(s,\gamma_{\varepsilon}(s)) \, ds} \right|&=\left| \gamma_{\varepsilon}(t)-\gamma_{\varepsilon}(t_{j-1})-\int_{t_{j-1}}^{t}{f(s,\gamma_{\varepsilon}(s)) \, ds} \right|\\
& \le |\gamma_{\varepsilon}(t)-\gamma_{\varepsilon}(t_{j-1})|+\int_{t_{j-1}}^{t}{|f(s,\gamma_{\varepsilon}(s))| \, ds}\\
& \le 2L|t-t_{j-1}| < \varepsilon.
\end{align*}
Hence $0 \le F(\varphi) \le F(\gamma_{\varepsilon}) < \varepsilon$, which implies that $F(\varphi)=0$ because $\varepsilon>0$ was arbitrarily chosen. \hbox to 0pt{}
$\rlap{$\sqcap$}\sqcup$\medbreak
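The role of the functional $F$ in the preceding proof can be seen concretely on Peano's example $y'=3y^{2/3}$, $y(0)=0$ on $[0,1]$: solutions are exactly the zeros of $F$. The Python sketch below is our own discretization, with illustrative candidate functions; it evaluates $F$ by the trapezoid rule.

```python
# Hedged numerical illustration of the minimized functional
#   F(gamma) = max_t | gamma(t) - y0 - int_{t0}^t f(s, gamma(s)) ds |
# for y' = 3 y^(2/3), y(0) = 0 on [0, 1]. Discretization is ours.

N = 2000
ts = [k / N for k in range(N + 1)]

def F(gamma):
    # trapezoid rule for the integral of f(s, gamma(s)) = 3 gamma(s)^(2/3)
    vals = [3.0 * max(gamma(t), 0.0) ** (2 / 3) for t in ts]
    worst, acc = 0.0, 0.0
    for k in range(1, N + 1):
        acc += 0.5 * (vals[k - 1] + vals[k]) / N
        worst = max(worst, abs(gamma(ts[k]) - acc))
    return worst

assert F(lambda t: t ** 3) < 1e-3        # a genuine solution: F ~ 0
assert F(lambda t: 0.0) < 1e-12          # the zero solution also gives F = 0
assert F(lambda t: t) > 0.1              # gamma(t) = t is not a solution
print("F separates solutions from non-solutions")
```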
\begin{remark}
\label{remp}
Notice that Peano's Existence Theorem in dimension $n=1$ is really a consequence of Theorem \ref{tss1}. To see it consider problem (\ref{ivp}) in dimension $n=1$ and let $b>0$, $L>0$ and $c>0$ be as in the proof of Peano's Theorem in Section 2. Now define a new function
$$\tilde f(t,y)=f(t,\max\{ y_0-b,\min\{y,y_0+b\}\}) \quad \mbox{for all $(t,y) \in [t_0,t_0+c]\times {{\Bbb R}}$,}$$
which is continuous and bounded by $L$ on the whole of $[t_0,t_0+c] \times {{\Bbb R}}$.
Obviously, the functions $\alpha(t)=y_0-L(t-t_0)$ and $\beta(t)=y_0+L(t-t_0)$ are, respectively, a lower and an upper solution to
\begin{equation}
\nonumber
y'=\tilde f(t,y), \, \, t \in [t_0,t_0+c], \quad y(t_0)=y_0,
\end{equation}
which then has at least one solution $\varphi \in [\alpha,\beta]$, by virtue of Theorem \ref{tss1}. Since $|\varphi'(t)| \le L$ for all $t \in [t_0,t_0+c]$ we have $|\varphi(t)-y_0| \le b$ for all $t \in [t_0,t_0+c]$, and then the definition of $\tilde f$ implies that $\varphi$ solves (\ref{ivp}).
\end{remark}
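The truncation $\tilde f$ used in the remark is straightforward to implement. In the hedged sketch below, $f$, $y_0$ and $b$ are illustrative choices of ours; the point is only that $\tilde f$ agrees with $f$ on the strip $|y-y_0|\le b$ and is bounded on the whole of $[t_0,t_0+c]\times{\Bbb R}$.

```python
# Hedged sketch of the truncation tilde_f(t, y) = f(t, max{y0-b, min{y, y0+b}}):
# it agrees with f on the strip |y - y0| <= b and freezes y at the strip's
# edges outside it. The specific f, y0, b below are our illustrative choices.

y0, b = 1.0, 0.5

def f(t, y):
    return t + y * y          # continuous, bounded on the strip

def tilde_f(t, y):
    clipped = max(y0 - b, min(y, y0 + b))
    return f(t, clipped)

# agrees with f inside the strip [y0 - b, y0 + b] = [0.5, 1.5] ...
assert tilde_f(0.3, 1.2) == f(0.3, 1.2)
# ... and is globally bounded: outside the strip y is clipped to the edge
assert tilde_f(0.3, 100.0) == f(0.3, y0 + b)
assert tilde_f(0.3, -100.0) == f(0.3, y0 - b)
print("truncation behaves as claimed")
```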
The existence of both the lower and the upper solutions in Theorem \ref{tss1} is essential. Our next example shows that we cannot expect to have solutions {\it for a nonlocal problem} if we only have a lower (or an upper) solution.
\begin{example}
The function $\alpha(t)=0$ for all $t \in [0,\pi]$ is a lower solution to the initial value problem
$$y'=1+y^2, \quad y(0)=0,$$
which has no solution defined on $[0,\pi]$ (its unique noncontinuable solution is $\varphi(t)=\tan t$ for $t \in (-\pi/2, \pi/2)$).
\end{example}
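The blow-up in this example can also be observed numerically. The explicit Euler sketch below (our own choice of scheme and step size) integrates $y'=1+y^2$, $y(0)=0$ and confirms that the solution escapes to infinity near $t=\pi/2$, well before reaching $t=\pi$.

```python
# Hedged numerical check that y' = 1 + y^2, y(0) = 0 blows up near t = pi/2,
# so no solution exists on all of [0, pi]. Explicit Euler is our choice here.
import math

t, y, h = 0.0, 0.0, 1e-5
while t < math.pi and y < 1e6:
    y += h * (1.0 + y * y)
    t += h

# the numerical solution crosses 1e6 close to pi/2, tracking tan t
assert abs(t - math.pi / 2) < 0.01
print("solution blows up near t = pi/2; no solution on [0, pi]")
```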
Theorem \ref{tss1} does not guarantee that every solution of (\ref{ivp2}) is located between the lower and the upper solutions. To see this, simply note that solutions are lower and upper solutions at the same time, so the zero function is both a lower and an upper solution for Peano's example (\ref{expe}), which has many solutions above the zero function.
However it is true that, in the conditions of Theorem \ref{tss1}, we have a greatest and a least solution between $\alpha$ and $\beta$, and we have their respective Goodman's characterizations in terms of lower and upper solutions, see \cite{goo}.
\begin{corollary}
\label{t4}
Suppose that problem (\ref{ivp2}) has a lower solution $\alpha$ and an upper solution $\beta$ such that $\alpha(t) \le \beta(t)$ for all $t \in I$ and let
$$[\alpha, \beta]=\{\gamma \in {\cal C}(I) \, :\, \mbox{$\alpha \le \gamma \le \beta$ on $I$} \}.$$
Then problem (\ref{ivp2}) has solutions $\varphi_*,\varphi^* \in [\alpha,\beta]$ such that every solution of (\ref{ivp2}) $\varphi \in [\alpha,\beta]$ satisfies
$$ \varphi_*(t) \le \varphi(t) \le \varphi^*(t) \quad \mbox{for all $t \in I$.}$$
Moreover, the least solution of (\ref{ivp2}) in $[\alpha,\beta]$ satisfies
\begin{equation}
\label{le}
\varphi_*(t)=\min \{ \gamma(t) \, : \, \mbox{$\gamma \in [\alpha,\beta]$, $\gamma$ upper solution of (\ref{ivp2})}\} \quad (t \in I),
\end{equation}
and the greatest solution of (\ref{ivp2}) in $[\alpha,\beta]$ satisfies
\begin{equation}
\label{le2}
\varphi^*(t)=\max \{ \gamma(t) \, : \, \mbox{$\gamma \in [\alpha,\beta]$, $\gamma$ lower solution of (\ref{ivp2})}\} \quad (t \in I).
\end{equation}
\end{corollary}
\noindent
{\bf Proof.} Let $L>0$ be fixed so that
$$|f(t,y)| \le L \quad \mbox{for all $(t,y) \in I \times {{\Bbb R}}$ such that $\alpha(t) \le y \le \beta(t)$.}$$
The set of solutions of (\ref{ivp2}) in $[\alpha,\beta]$ is not empty by virtue of Theorem \ref{tss1}. Moreover, it is a compact subset of ${\cal C}(I)$, because it is closed, bounded and every one of its elements satisfies a Lipschitz condition with constant $L$. Hence, a similar argument to that in the proof of Theorem \ref{t2} guarantees that there is a solution $\varphi^* \in [\alpha,\beta]$ which is greater than any other solution $\varphi \in [\alpha,\beta]$.
Let us prove that $\varphi^*$ satisfies (\ref{le2}). Notice first that if $\gamma \in [\alpha,\beta]$ is a lower solution to (\ref{ivp2}) then Theorem \ref{tss1} guarantees that there is some solution $\varphi_{\gamma} \in [\gamma,\beta]$. Hence, for all $t \in I$ we have
\begin{align*}
\sup\{\gamma(t) \, :\, \mbox{$\gamma \in [\alpha,\beta]$, $\gamma$ lower solution} \} &\le \max\{\varphi(t) \, :\, \mbox{$\varphi \in [\alpha,\beta]$, $\varphi$ solution}
\}\\
&=\varphi^*(t),
\end{align*}
and then (\ref{le2}) obtains because $\varphi^*$ is a lower solution of (\ref{ivp2}) in $[\alpha,\beta]$.
The proof of the existence of $\varphi_*$ and the proof of (\ref{le}) are similar.
\hbox to 0pt{}
$\rlap{$\sqcap$}\sqcup$\medbreak
\begin{remark}
Corollary \ref{t4} can be proven directly via Perron's method, see \cite{dv, goo, po2, perron}. This means taking (\ref{le2}) as a definition and then showing that $\varphi^*$ is a solution in the conditions of the statement. Perron's method involves careful work with sets of one--sided differentiable functions, which we avoid. Corollary \ref{t4} can also be proven easily from Theorem \ref{t2}.
\end{remark}
Finally we deduce from Corollary \ref{t4} Peano's characterizations of the least and the greatest solutions in terms of strict lower and upper solutions. In doing so we finally prove the ``real'' Peano's Theorem, because our next result is the closest to the one proven in \cite{peano}.
We say that $\alpha:I \longrightarrow {{\Bbb R}}$ is a strict lower solution of (\ref{ivp2}) if it is continuously differentiable in $I$, $\alpha(t_0)\le y_0$, and $\alpha'(t)<f(t,\alpha(t))$ for all $t \in I$. A strict upper solution is defined analogously by reversing the relevant inequalities. Notice that if $\alpha$ is a strict lower solution and $\beta$ is a strict upper solution, then $\alpha < \beta$ on $(t_0,t_0+a]$, for otherwise we could find $t_1 \in (t_0,t_0+a]$ such that $\alpha< \beta$ in $(t_0,t_1)$ and $\alpha(t_1)=\beta(t_1)$, but then we would have
$$\beta'(t_1)>f(t_1,\beta(t_1))=f(t_1,\alpha(t_1)) > \alpha'(t_1) \ge \beta'(t_1), \quad \mbox{a contradiction.}$$
\begin{theorem}
\label{t5} (Fourth version of Peano's Theorem)
If problem (\ref{ivp2}) has a strict lower solution $\alpha$ and a strict upper solution $\beta$ then the following results hold:
\begin{enumerate}
\item Problem (\ref{ivp2}) has at least one solution;
\item If $\varphi$ solves (\ref{ivp2}) only on some subinterval $[t_0,t_0+\varepsilon]$, with $\varepsilon \in (0,a)$, then it can be extended to the whole interval $[t_0,t_0+a]$ as a solution of (\ref{ivp2}), and then it satisfies
\begin{equation}
\label{dee}
\mbox{$\alpha(t) < \varphi(t) < \beta(t)$ for all $t \in (t_0,t_0+a]$;}
\end{equation}
\item Problem (\ref{ivp2}) has the least solution $\varphi_*:I \longrightarrow {{\Bbb R}}$ and the greatest solution $\varphi^*:I \longrightarrow {{\Bbb R}}$, which satisfy
\begin{equation}
\label{lepe}
\varphi_*(t)=\sup \{ \gamma(t) \, : \, \mbox{ $\gamma$ strict lower solution of (\ref{ivp2})}\} \quad (t \in I),
\end{equation}
and
\begin{equation}
\label{le2pe}
\varphi^*(t)=\inf \{ \gamma(t) \, : \, \mbox{ $\gamma$ strict upper solution of (\ref{ivp2})}\} \quad (t \in I).
\end{equation}
\end{enumerate}
\end{theorem}
\noindent
{\bf Proof.} Corollary \ref{t4} guarantees that (\ref{ivp2}) has the least and the greatest solutions in $[\alpha,\beta]=\{\gamma \in {\cal C}(I) \, : \, \alpha \le \gamma \le \beta\}$, which we denote, respectively, by $\varphi_*$ and $\varphi^*$.
Let us prove that solutions of (\ref{ivp2}) satisfy (\ref{dee}) which, in particular, ensures that all of them belong to $[\alpha,\beta]$. Reasoning by contradiction, assume that a certain solution $\varphi:[t_0,t_0+\varepsilon] \longrightarrow {{\Bbb R}}$ ($\varepsilon \in (0,a)$) satisfies $\varphi(t_1) \ge \beta(t_1)$ for some $t_1 \in (t_0,t_0+\varepsilon)$. The initial conditions ensure that we can find some $t_2 \in (t_0,t_1)$ such that $\varphi<\beta$ in $(t_0,t_2)$ and $\varphi(t_2)=\beta(t_2)$, but then we have
$$\beta'(t_2)>f(t_2,\beta(t_2))=f(t_2,\varphi(t_2))=\varphi'(t_2) \ge \beta'(t_2),$$
a contradiction. This proves that every solution is smaller than $\beta$ on its domain, and a similar argument shows that every solution is greater than $\alpha$. Therefore Theorem \ref{tss1} ensures that every solution of (\ref{ivp2}) can be continued to the whole interval $I=[t_0,t_0+a]$ as a solution of (\ref{ivp2}) and between $\alpha$ and $\beta$. Hence (\ref{dee}) obtains and, moreover, $\varphi_*$ and $\varphi^*$ are, respectively, the least and the greatest among all the solutions of (\ref{ivp2}).
Next we show that (\ref{le2pe}) is satisfied. The proof of (\ref{lepe}) is similar and we omit it.
The previous arguments still work if we replace $\beta$ by any other strict upper solution $\gamma:I \longrightarrow {{\Bbb R}}$ (necessarily greater than $\alpha$ on $I$). Hence
$$\varphi^*(t) \le \inf \{ \gamma(t) \, : \, \mbox{$\gamma$ strict upper solution of (\ref{ivp2})}\} \quad (t \in I).$$
To show that we can replace this inequality by an identity it suffices to construct a decreasing sequence of strict upper solutions which converges to $\varphi^*$. To do so, let $k \in {{\Bbb N}}$ be sufficiently large so that
$$\beta'(t)>f(t,\beta(t))+1/k \quad \mbox{for all $t \in I$.}$$
Plainly, $\beta$ is an upper solution to the initial value problem
\begin{equation}
\label{pax}
y'=f(t,y)+1/k, \quad y(t_0)=y_0,
\end{equation}
and, in turn, $\varphi^*$ is a lower solution. Hence, for all sufficiently large values of $k$ there exists $\varphi_k$, a solution of (\ref{pax}), between $\varphi^*$ and $\beta$.
Obviously, $\varphi_k$ is a strict upper solution to (\ref{pax}) with $k$ replaced by $k+1$, hence $\varphi_k \ge \varphi_{k+1} \ge \varphi^*$ on $I$. Thus we can define a limit function
\begin{equation}
\label{ga}
\varphi_{\infty}(t)= \lim_{k \to \infty}\varphi_k(t) \ge \varphi^*(t) \quad (t \in I).
\end{equation}
The functions $\varphi_k$ satisfy Lipschitz conditions on $I$ with the same Lipschitz constant, which implies that $\varphi_{\infty}$ is Lipschitz continuous on $I$. Dini's Theorem ensures then that the sequence $\{\varphi_k\}_k$ converges uniformly to $\varphi_{\infty}$ on $I$. Now for all sufficiently large values of $k \in {{\Bbb N}}$ we have
$$\varphi_k(t)=y_0+\int_{t_0}^t{f(s,\varphi_k(s)) \, ds}+(t-t_0)/k \quad (t \in I),$$
and then taking limit when $k$ tends to infinity we deduce that $\varphi_{\infty}$ is a solution of (\ref{ivp2}). Hence $\varphi_{\infty} \le \varphi^*$ on $I$, and then (\ref{ga}) yields $\varphi_{\infty}=\varphi^*$ on $I$. The proof of (\ref{le2pe}) is complete. \hbox to 0pt{}
$\rlap{$\sqcap$}\sqcup$\medbreak
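The approximation scheme in the last part of the proof can be visualized on Peano's example $y'=3y^{2/3}$, $y(0)=0$, whose greatest solution on $[0,1]$ is $\varphi^*(t)=t^3$. The hedged Python sketch below (the Euler discretization and the values of $k$ are our choices) integrates the perturbed problems $y'=3y^{2/3}+1/k$ and shows their values at $t=1$ decreasing toward $\varphi^*(1)=1$.

```python
# Hedged numerical illustration: solutions of the perturbed problems
#   y' = 3 y^(2/3) + 1/k,  y(0) = 0
# decrease, as k grows, toward the greatest solution t^3 of Peano's example.
# Explicit Euler and the specific k values are our illustrative choices.

def perturbed_solution_at_1(k, n=100000):
    h, y = 1.0 / n, 0.0
    for _ in range(n):
        y += h * (3.0 * y ** (2 / 3) + 1.0 / k)
    return y

v10, v100, v1000 = (perturbed_solution_at_1(k) for k in (10, 100, 1000))

# monotone decrease toward the greatest solution phi*(1) = 1
assert v10 > v100 > v1000 > 0.9
assert v10 < 3.0
print("perturbed solutions approach the greatest solution from above")
```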
\section{Concluding remarks}
\begin{enumerate}
\item The assumption $\alpha \le \beta$ on $I$ in Theorem \ref{tss1} can be omitted. Marcelli and Rubbioni proved in \cite{mr} that we have solutions between the minimum of $\alpha$ and $\beta$ and the maximum of them. Furthermore, we do not even need that lower or upper solutions be continuous, see \cite{po}.
\item Theorem \ref{tss1} and Corollary \ref{t4} can be extended to quasimonotone systems, see \cite{bp, bs, hl}. Theorem \ref{tss1} for systems even works when the lower and upper solutions are not ordered, see \cite{mr}, and $f$ may be discontinuous or singular, see \cite{bp}.
\item The lower and upper solutions method is today one of the most acknowledged and effective tools in the analysis of differential equations and, especially, boundary value problems. A detailed account of how far the method has evolved (just for second--order ODEs!) is given in the monograph by De Coster and Habets \cite{ch}. As far as the author is aware, the first use of lower and upper solutions is due to Peano in \cite{peano}.
\end{enumerate}
\end{document}
\begin{document}
\title {On Optimal Multi-Dimensional Mechanism Design}
\author {Constantinos Daskalakis\thanks{Supported by a Sloan Foundation Fellowship and NSF Award CCF-0953960 (CAREER) and CCF-1101491.}\\
EECS, MIT \\
\tt{[email protected]}
\and
S. Matthew Weinberg\thanks{Supported by a NSF Graduate Research Fellowship and a NPSC Graduate Fellowship.}\\
EECS, MIT\\
\tt{[email protected]}
}
\addtocounter{page}{-1}
\maketitle
\begin{abstract}
\noindent We efficiently solve the {\em optimal multi-dimensional mechanism design problem} for independent bidders with arbitrary demand constraints when either the number of bidders is a constant or the number of items is a constant. In the first setting, we need that each bidder's values for the items are sampled from a possibly correlated, {\em item-symmetric} distribution, allowing different distributions for each bidder. In the second setting, we allow the values of each bidder for the items to be arbitrarily correlated, but assume that the distribution of bidder types is {{\em bidder-symmetric}}. These symmetric distributions include i.i.d. distributions, as well as many natural correlated distributions. E.g., an item-symmetric distribution can be obtained by taking an arbitrary distribution, and ``forgetting'' the names of items; this could arise when different members of a bidder population have various sorts of correlations among the items, but the items are {``the same''} with respect to a random bidder from the population.
For all $\epsilon>0$, we obtain a computationally efficient additive $\epsilon$-approximation, when the value distributions are bounded, or a multiplicative $(1-\epsilon)$-approximation when the value distributions are unbounded, but satisfy the Monotone Hazard Rate condition, covering a widely studied class of distributions in Economics. Our running time is polynomial in $\max\{\text{\#items,\#bidders}\}$, and {\em not} the size of the support of the joint distribution of all bidders' values for all items, which is typically exponential in both the number of items and the number of bidders. Our mechanisms are randomized, explicitly price bundles, and in some cases can also accommodate budget constraints.
Our results are enabled by establishing several new tools and structural properties of Bayesian mechanisms. In particular, we provide a {\em symmetrization technique} that turns any truthful mechanism into one that has the same revenue and respects all symmetries in the underlying value distributions. We also prove that item-symmetric mechanisms satisfy a natural {\em strong-monotonicity property} which, unlike cyclic-monotonicity, can be harnessed algorithmically. Finally, we provide a technique that turns any given $\epsilon$-BIC mechanism (i.e. one where incentive constraints are violated by $\epsilon$) into a truly-BIC mechanism at the cost of {$O(\sqrt{\epsilon})$ revenue.} We expect our tools to be used beyond the settings we consider here. Indeed there has already been follow-up research~\cite{CDW,CJ} making use of our tools.
\end{abstract}
\thispagestyle{empty}
\section{Introduction} \label{sec:introduction}
How can a seller auction off a set of items to a group of interested buyers to maximize profit? This problem, dubbed the {\em optimal mechanism design} problem, has gained central importance in mathematical Economics over the past decades. The seller could certainly auction off the items sequentially, using her favorite single-item auction, such as the English auction. But this is not always the best idea, as it is easy to find examples where this approach leaves money on the table.~\footnote{A simple example is this: Suppose that an auctioneer is selling a Picasso and a Dali painting and there are two bidders of which one loves Picasso and does not care about Dali and vice versa. Running a separate English auction for each painting will result in small revenue since there is going to be no serious competition for either painting. But bundling the paintings together will induce competition and drive the auctioneer's revenue higher.} The chief challenge that the auctioneer faces is that the values of the buyers for the items, which determine how much each buyer is willing to pay for each item, are information that is private to the buyers, at least at the onset of the auction. Hence, the mechanism needs to provide the appropriate incentives for the buyers to reveal ``just enough'' information for the optimal revenue to be extracted.
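The footnote's Picasso/Dali story can be made quantitative with a toy computation (our own illustrative numbers, not from the paper): with two single-minded bidders, separate second-price auctions raise nothing, while selling the bundle in a single second-price auction raises $\min(v_1,v_2)$.

```python
# Hedged toy computation behind the Picasso/Dali footnote: two bidders, each
# valuing only "their" painting at the values below (illustrative numbers).
# Separate second-price auctions raise nothing; the bundle raises min(v1, v2).

v_picasso = (10.0, 0.0)   # (bidder 1, bidder 2) values for the Picasso
v_dali    = (0.0, 8.0)    # values for the Dali

def second_price_revenue(bids):
    return sorted(bids)[-2]   # winner pays the second-highest bid

separate = second_price_revenue(v_picasso) + second_price_revenue(v_dali)
bundle = second_price_revenue(tuple(p + d for p, d in zip(v_picasso, v_dali)))

assert separate == 0.0
assert bundle == 8.0          # = min(10, 8): real competition for the bundle
print("bundle revenue:", bundle, "vs separate auctions:", separate)
```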
Viewed as an optimization problem, the optimal mechanism design problem is of a rather intricate kind. First, it is a priori not clear how to evaluate the revenue of an arbitrary mechanism because it is not clear how rational bidders will play. One way to cope with this is to only consider mechanisms where rational bidders are properly incentivized to tell the designer their complete {\em type}, i.e. how much they would value each possible outcome of the mechanism (i.e. each bundle of items they may end up getting). Such mechanisms can be \emph{Incentive Compatible} (IC), where each bidder's strategy is to report a type and the following {\em worst-case guarantee} is met: regardless of the types of the other bidders, it is in the best interest of a bidder to truthfully report her type. Or the mechanism can be \emph{Bayesian Incentive Compatible} (BIC), where it is assumed that the bidders' types come from a known distribution and the following {\em average-case guarantee} is met: in expectation over the other bidders' types, it is in the best interest of a bidder to truthfully report her type, if the other bidders report truthfully. See Sec~\ref{sec:notation} for formal definitions. We only note here that, under very weak assumptions, restricting attention to IC/BIC mechanisms in the aforementioned settings of without/with prior information over bidders' types is without loss of generality~\cite{AGTbook}.
But even once it is clear how to evaluate the revenue of a given mechanism, it is not necessarily clear what benchmark to compare it against. For example, it is not hard to see that the {\em social welfare}, i.e. the sum of the values of the buyers for the items they are allocated, is {\em not} the right benchmark to use, as in general one cannot hope to achieve revenue that is within any constant factor of the optimal social welfare: why would a buyer with a large value for an item pay an equally large price to the auctioneer to get it, if there is no competition for this item? Given the lack of a useful revenue benchmark (i.e. one that upper bounds the revenue that one may hope to achieve but is not too large to allow any reasonable approximation), the task of the mechanism designer can only be specified in generic terms as follows: come up with an IC/BIC auction whose revenue is at least as large as the revenue of any other IC/BIC auction.
Finally, even after restricting the search space to IC/BIC auctions and only comparing to the optimal revenue achievable by any IC/BIC auction, it is still easy to show that it is impossible to guarantee any finite approximation if no prior is known over the bidders' types. Instead, many solutions in the literature adopt a Bayesian viewpoint, assuming that a prior does exist and is known to both the auctioneer and the bidders, and targeting the optimal achievable \emph{expected revenue}. Once the leap to the Bayesian setting is made the goal is typically this: {\em Design a BIC, possibly randomized, mechanism whose expected revenue is optimal among all BIC, possibly randomized, mechanisms.}~\footnote{In view of the results of~\cite{BCKW,CMS}, to achieve optimal, or even near-optimal, revenue in correlated settings, or even i.i.d. multi-item settings, we are forced to explore randomized mechanisms.}
One of the most celebrated results in this realm is {\em Myerson's optimal auction}~\cite{myerson}, which achieves optimal revenue via an elegant design that spans several important settings. Despite its significance, Myerson's result is limited to the case where bidders are {\em single-dimensional}. In simple terms, this means that each bidder can be characterized by a single number (unknown to the auctioneer), specifying the value of the bidder per item received. This is quite a strong assumption when the items are heterogeneous, so naturally, after Myerson's work, a large body of research has been devoted to the {\em multi-dimensional problem}, i.e. the setting where the bidders may have different values for different items/bundles of items. Even though progress has been made in certain restricted settings, it seems that we are far from an optimal mechanism, generalizing Myerson's result; see survey~\cite{optimal:econ} and its references for work on this problem by Economists.
Algorithmic Game Theory has also studied this problem, with an extra eye on the computational efficiency of mechanisms. Chawla et al.~\cite{CHK} study the case of a single (multidimensional) unit-demand bidder with independent values for the items. They propose an elegant reduction of this problem to Myerson's single-dimensional setting, resulting in a mechanism that achieves a constant factor approximation to the optimal revenue among all BIC, possibly randomized~\cite{CMS}, mechanisms. For the same problem, Cai and Daskalakis~\cite{CD} recently closed the constant approximation gap against all deterministic mechanisms by obtaining polynomial-time approximation schemes for optimal item-pricing. As for the case of correlated values, it is known from~\cite{BK} that finding the optimal item-pricing (i.e. the optimal deterministic mechanism) is highly inapproximable, although no hardness results are known for randomized mechanisms. In the multi-bidder setting, Chawla et al.~\cite{CHMS}, Bhattacharya et al.~\cite{BGGM} and recently Alaei~\cite{alaei} obtain constant factor approximations in the case of additive bidders or unit-demand bidders and matroidal constraints on the possible allocations.
While our algorithmic understanding of the optimal mechanism design problem is solid, at least as far as constant factor approximations go, there has been virtually no result in designing computationally efficient revenue-optimal mechanisms for multi-dimensional settings, besides the single-bidder result of~\cite{CD}. In particular, one can argue that the previous approaches~\cite{alaei,BGGM,CHK,CHMS} are inherently limited to constant factor approximations, as ultimately the revenue of these mechanisms is compared against the optimal revenue in a related single-dimensional setting~\cite{CHK,CHMS}, or a convex programming relaxation of the problem~\cite{alaei,BGGM}. Our focus in this work is to fill this important gap in the algorithmic mechanism design literature, i.e. to {\em obtain computationally efficient near-optimal multi-dimensional mechanisms}, {coming $\epsilon$-close to the optimal revenue in polynomial time}, for any desired accuracy $\epsilon>0$. We obtain a Polynomial-Time Approximation Scheme (PTAS) for the following two important cases of the general problem.
\noindent \framebox{
\begin{minipage}{\hsize}
{\bf The BIC $k$-items problem.} Given as input an arbitrary (possibly correlated) distribution $\mathcal{F}$ over valuation vectors for $k$ items, a demand bound $C$, and an integer $m$, the number of bidders, output a BIC mechanism $M$ whose expected revenue is optimal relative to any other, possibly randomized, BIC mechanism, when played by $m$ additive bidders with demand constraint $C$ whose valuation vectors are sampled independently from $\mathcal{F}$.
\end{minipage}}
\noindent \framebox{
\begin{minipage}{\hsize} {\bf The BIC $k$-bidders problem.} Given as input $k$ item-symmetric\footnote{A distribution over $\mathbb{R}^n$ is symmetric if, for all $\vec{v} \in \mathbb{R}^n$, the probability it assigns to $\vec{v}$ is equal to the probability it assigns to any permutation of $\vec{v}$.} distributions $\mathcal{F}_1,\ldots,\mathcal{F}_k$, demand bounds $C_1,\ldots,C_k$ (one for each bidder), and an integer $n$, the number of items, output a BIC mechanism $M$ whose expected revenue is optimal relative to any other, possibly randomized, BIC mechanism, when played by $k$ additive bidders with demand constraints $C_1,\ldots,C_k$ respectively whose valuation vectors for the $n$ items are sampled independently from $\mathcal{F}_1,\ldots,\mathcal{F}_k$.
\end{minipage}}
In other words, we study the problems where either the number of bidders is large but they come from the same population, i.e. each bidder's value vector is sampled from the same, arbitrary, possibly correlated distribution, or the number of items is large but each bidder's value distribution is item-symmetric (and possibly different for each bidder). While these do not capture the problem of Bayesian mechanism design in its complete generality, they certainly represent important special cases of the general problem and indeed the first interesting cases for which computationally efficient near-optimal mechanisms have been obtained. Before stating our main result, it is worth noting that:
\noindent ~~\begin{minipage}{16.5cm}$\bullet$ When the number of bidders is large, it does not make sense to expect that the auctioneer has a separate prior distribution for the values of each individual bidder for the items. So our assumption in the $k$-items problem that the bidders are drawn from the same population of bidders is a realistic one, and---in our opinion---the practically interesting case of the general problem. Indeed, there are hardly any practical examples of auctions using bidder-specific information (think, e.g., eBay, Sotheby's, etc.). A reasonable extension of our model would be to assume that bidders come from a constant number of different sub-populations of bidders, and that the auctioneer has a prior for each sub-population. Our results extend to this setting.\end{minipage}
\noindent ~~\begin{minipage}{16.5cm}$\bullet$ When the number of items is large, it is still hard to imagine that the auctioneer has a distribution for each individual item. In the $k$-bidders problem, we assume that each bidder's value distribution is item-symmetric. This certainly contains the case where each bidder has i.i.d. values for the items, but there are realistic applications where values are correlated, but still item-symmetric. Consider the following scenario: the auctioneer has the same number of Yankees, Red Sox, White Sox, and Mariners baseball caps to sell. Each bidder is a fan of one of the four teams and has non-zero value for exactly one of the four kinds of caps, but it is unknown to the auctioneer which kind that is and what the value of the bidder for that kind is. Hence, the values of a random bidder for the caps are certainly not i.i.d.: if the bidder likes one Red Sox cap she will equally like another, but will have zero value for a Yankees cap. Suppose now that we are willing to make the assumption that all teams have approximately the same number of fans and that those fans have statistically the same passion for their team. Then a random bidder's values for the items are drawn from an item-symmetric distribution, or close to one, so we can handle such distributions. In this case too, our techniques still apply if we deviate from the item-symmetric model to models where there is a constant number of types of objects, e.g. caps and jerseys, and symmetries do not permute types, but permute objects within the same type.\end{minipage}
\begin{theorem}\label{thm:additive}(Additive approximation) For all $k$, if $\mathcal{F}$ samples values from $[0,1]^k$ there exists a PTAS with additive error $\epsilon$ for the BIC $k$-items problem. For all $k$, if $\mathcal{F}_i$ samples vectors from $[0,1]^n$, there exists a PTAS with additive error {$\epsilon \cdot \max\{C_i\}$} for the BIC $k$-bidders problem.
\end{theorem}
\begin{remark}\label{rem:additive} Some qualifications on Theorem~\ref{thm:additive} are due.
\begin{itemize}
\item The mechanism output by our PTAS is truly BIC, not $\epsilon$-BIC, and there are no extra assumptions necessary to achieve this.
\item We make no assumptions about the size of the support of $\mathcal{F}_i$ or $\mathcal{F}$, as the runtime of our algorithms {\em does not} depend on the size of the support. This is an important distinction between our work and the literature, where it is folklore knowledge that if one is willing to pay computational time polynomial in the size of the support of the value distribution, then the optimal mechanism can be easily computed via an LP (see, e.g.,~\cite{BGGM,BCKW,DFK}). However, exponentially large supports arise easily. Take, e.g., our $k$-bidders problem and assume that every bidder's value for each of the items is i.i.d. uniform in $\{\$5,\$10\}$. The na\"ive LP-based approach would result in time polynomial in $2^n$, while our solution needs time polynomial in $n$.
\item If we are willing to replace BIC by $\epsilon$-BIC (or $\epsilon$-IC) in Thm \ref{thm:additive} and compare our revenue to the best revenue achievable by any BIC (or IC) mechanism, then we can also accommodate budget constraints. The only step of our algorithm that does not respect budgets is the $\epsilon$-BIC to BIC reduction (Thm~\ref{thm:epsilon-BIC to BIC}). For space considerations, we restrict our attention to BIC throughout the main body of the paper and prove the related claims for IC in App~\ref{sec: IC results}.
\item If the value distributions are discrete and every marginal has constant-size support, then our algorithms achieve {\em exactly optimal revenue} in polynomial time, even though the support of such a distribution may well be exponential. For instance, in the example given in the second bullet our algorithm obtains exactly optimal revenue in time polynomial in $n$. In these cases, we can find optimal truly BIC or truly IC mechanisms that also accommodate budget constraints.
\item The mechanisms produced by our techniques satisfy the demand constraints of each bidder in a strong sense (and not just in expectation). Moreover, the user of our theorem is free to choose whether to satisfy {\em ex-interim individual rationality}, requiring that the expected value of a bidder for the received bundle of items be at least the expected price she pays, or {\em ex-post individual rationality}, where this constraint holds with probability $1$ (and not just in expectation). We focus the main presentation on producing mechanisms that are ex-interim IR. In App~\ref{app:ex-post IR} we explain the required modification for producing ex-post IR mechanisms {\em without any loss in revenue}.
\item The assumption that $\mathcal{F}_i,\mathcal{F}$ sample from $[0,1]^n$ as opposed to some other bounded set is w.l.o.g. and previous work has made the same assumption~\cite{HKM,HL} on the input distributions.
\end{itemize}
\end{remark}
\noindent One might prefer to assume that the value distributions are not upper bounded, but satisfy some tail condition, such as the Monotone Hazard Rate condition (see App~\ref{app:MHR}).~\footnote{The class of Monotone Hazard Rate distributions is a family of distributions that is commonly used in Economics applications, and contains such familiar distributions as the Normal, Exponential and Uniform distributions.} Using techniques from~\cite{CD}, we can extend our theorems to MHR distributions. All the relevant remarks still apply.
\begin{corollary}\label{cor:MHR}(Multiplicative approximation for MHR distributions) For all $k$, if the $k$ marginals of $\mathcal{F}$ all satisfy the MHR condition, there exists a PTAS obtaining at least a $(1-\epsilon)$-fraction of the optimal revenue for the BIC $k$-items problem (whose runtime does not depend on $\mathcal{F}$ or $C$). Likewise, for all $k$, if every marginal of $\mathcal{F}_i$ is MHR for all $i$, there exists a PTAS obtaining at least a $(1-\epsilon)$-fraction of the optimal revenue for the BIC $k$-bidders problem (whose runtime does not depend on $\mathcal{F}_i$ or $C_i$).
\end{corollary}
The rest of the paper is organized as follows: Sec~\ref{sec:notation} provides a few standard definitions from Mechanism Design. Sec~\ref{sec:overview} gives an overview of our proof of Thm~\ref{thm:additive}, explaining the different components that get into our proof and guiding through the rest of the paper. The rest of the main body and the appendix provide all technical details. App \ref{app:MHR} provides the proof of Corollary \ref{cor:MHR}.
\section{Preliminaries and notation}\label{sec:notation}
We assume that the seller has a single copy of each of $n$ (heterogeneous) items that she wishes to auction to $m$ bidders. Each bidder $i$ has some non-negative value for item $j$, which we denote $v_{ij}$. We can think of bidder $i$'s {\em type} as an $n$-dimensional vector $\vec{v}_i$, and denote the entire profile of bidders as $\vec{v}$, or sometimes $(\vec{v}_i~;~\vec{v}_{-i})$ if we want to emphasize its decomposition into the type $\vec{v}_i$ of bidder $i$ and the joint profile $\vec{v}_{-i}$ of all other bidders. We denote by $\mathcal{D}$ the distribution from which $\vec{v}$ is sampled. We also denote by $\mathcal{D}_i$ the distribution of types for bidder $i$, and by $\mathcal{D}_{-i}$ the distribution of types for every bidder except $i$. The {\em value of a bidder} with demand $C$ for any subset of items is the sum of her values for her favorite $C$ items in the subset; that is, we assume bidders are {\em additive up to their demand}.
{Since we are shooting for BIC/IC mechanisms, we will only consider (direct revelation) mechanisms} where each bidder's strategy is to report a type. When the reported bidder types are $\vec{v}$, we denote the (possibly randomized) {\em outcome of mechanism $M$} as $M(\vec{v})$. The outcome can be summarized by the {\em expected price} charged to each bidder (denoted $p_i(\vec{v})$) and a collection of {\em marginal probabilities} $\vec{\phi}(\vec{v}) = (\phi_{ij}(\vec{v}))_{ij}$, where $\phi_{ij}(\vec{v})$ denotes the marginal probability that bidder $i$ receives item $j$.
A collection of marginal probabilities $\vec{\phi}(\vec{v}):=(\phi_{ij}(\vec{v}))_{ij}$ is {\em feasible} iff there exists a joint distribution over allocations of items to bidders that is consistent with these marginals and in which, with probability $1$, no item is allocated to more than one bidder and no bidder receives more items than her demand. A straightforward application of the Birkhoff-von Neumann theorem~\cite{JDM} reveals that a sufficient condition for feasibility is that {\em in expectation} no item is given more than once, and every bidder receives an {\em expected number of items} no larger than her demand. Note that this sufficient condition is expressible in terms of the $\phi_{ij}$'s only. Moreover, under the same conditions, we can efficiently sample a joint distribution with the desired $\phi_{ij}$'s. (See App~\ref{app:feasible} for details.)
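For intuition, the Birkhoff-von Neumann argument can be sketched in code. The following toy implementation (ours, not the construction of App~\ref{app:feasible}) handles the special case of a doubly stochastic marginal matrix, i.e. $m=n$ unit-demand bidders with every item fully allocated in expectation; it repeatedly extracts a permutation supported on positive entries:

```python
import numpy as np

def find_positive_permutation(M, tol=1e-9):
    """Find a perfect matching using only entries of M above tol.
    One exists whenever M is doubly stochastic (Hall's condition holds).
    Simple augmenting paths; match_col[j] = row matched to column j."""
    n = M.shape[0]
    match_col = [-1] * n

    def try_row(i, seen):
        for j in range(n):
            if M[i, j] > tol and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or try_row(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        if not try_row(i, [False] * n):
            return None
    perm = [0] * n
    for j, i in enumerate(match_col):
        perm[i] = j
    return perm  # perm[i] = item assigned to bidder i

def birkhoff_von_neumann(M, tol=1e-9):
    """Decompose a doubly stochastic matrix M into a convex combination
    of permutation matrices, returned as (weight, permutation) pairs."""
    M = np.array(M, dtype=float)
    lottery = []
    while M.max() > tol:
        perm = find_positive_permutation(M, tol)
        w = min(M[i, j] for i, j in enumerate(perm))
        lottery.append((w, perm))
        for i, j in enumerate(perm):
            M[i, j] -= w
    return lottery

# Example: marginals phi_ij of a feasible outcome for 3 bidders, 3 items.
phi = [[0.5, 0.5, 0.0], [0.5, 0.25, 0.25], [0.0, 0.25, 0.75]]
lottery = birkhoff_von_neumann(phi)
```

Sampling a permutation from `lottery` with probability equal to its weight yields a joint allocation with exactly the marginals $\phi_{ij}$, in which no item is given twice and no bidder receives two items.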
The outcome of mechanism $M$ restricted to bidder $i$ on input $\vec{v}$ is denoted $M_i(\vec{v}) = (\vec{\phi}_i(\vec{v}), p_i({\vec{v}}))$. Assuming that bidder $i$ is additive (up to her demand) and risk-neutral, and that the mechanism is feasible (so in particular it does not violate the bidder's demand constraint), the {\em value} of bidder $i$ for outcome $M_i(\vec{w})$ is just (her expected value) $\vec{v}_i \cdot \vec{\phi}_i(\vec{w})$, while the bidder's {\em utility} for the same outcome is $U(\vec{v}_i,M_i(\vec{w})):=\vec{v}_i \cdot \vec{\phi}_i(\vec{w}) - p_i(\vec{w})$. Bidders who subtract price from expected value in this way are called {\em quasi-linear}. Moreover, for a given value vector $\vec{v}_i$ for bidder $i$, we write: $\pi_{ij}(\vec{v}_i)=\mathbb{E}_{\vec{v}_{-i}\sim {\cal D}_{-i}}[\phi_{ij}(\vec{v}_i~;~\vec{v}_{-i})]$.
We proceed to formally define incentive compatibility of mechanisms in our notation:
{
\begin{definition}(BIC/$\epsilon$-BIC/IC/$\epsilon$-IC Mechanism)\label{def:BIC} A mechanism $M$ is called $\epsilon$-BIC iff the following inequality holds for all $i,\vec{v}_i,\vec{w}_i$:
$$\mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}}\left[U(\vec{v}_i,M_i(\vec{v}))\right] \ge \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}}\left[ U(\vec{v}_i,M_i(\vec{w}_i~;~\vec{v}_{-i})) \right] - \epsilon v_{\max} \cdot \sum_j \pi_{ij}(\vec{w}_i),$$
where $v_{\max}$ is the maximum possible value of any bidder for any item in the support of the value distribution. In other words, $M$ is $\epsilon$-BIC iff when a bidder lies by reporting $\vec{w}_i$ instead of $\vec{v}_i$, they do not expect to gain more than $\epsilon v_{\max}$ times the expected number of items that $\vec{w}_i$ receives. Similarly, $M$ is called $\epsilon$-IC iff for all $i$, $\vec{v}_i, \vec{w}_i, \vec{v}_{-i}$: $U(\vec{v}_i,M_i(\vec{v})) \ge U(\vec{v}_i,M_i(\vec{w}_i~;~\vec{v}_{-i})) - \epsilon v_{\max} \cdot \sum_j \phi_{ij}(\vec{w}_i~;~\vec{v}_{-i})$. A mechanism is called BIC iff it is $0$-BIC and IC iff it is $0$-IC.~\footnote{Any feasible mechanism that we call $\epsilon$-BIC, respectively $\epsilon$-IC, by our definition is certainly an $\epsilon \cdot \max\{C_i\}$-BIC, respectively $\epsilon \cdot \max\{C_i\}$-IC, mechanism by the more standard definition, which omits the factors $\sum_j \pi_{ij}(\vec{w}_i)$, respectively $\sum_j \phi_{ij}(\vec{w}_i~;~\vec{v}_{-i})$, from the incentive error. We only include these factors here for convenience.}
\end{definition}
}
\noindent Throughout the proof of Thm~\ref{thm:additive} we assume that $v_{\max}=1$. If $v_{\max}<1$, we can rescale the value distribution so that this condition is satisfied.
{We also define individual rationality of BIC/$\epsilon$-BIC mechanisms:
\begin{definition}
A BIC/$\epsilon$-BIC mechanism $M$ is called {\em ex-interim individually rational (ex-interim IR)} iff for all $i$, $\vec{v}_i$:
$$\mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}}\left[U(\vec{v}_i,M_i(\vec{v}))\right] \ge 0.$$
It is called {\em ex-post individually rational (ex-post IR)} iff for all $i$, $\vec{v}_i$ and $\vec{v}_{-i}$, $U(\vec{v}_i,M_i(\vec{v})) \ge 0$ with probability $1$ (over the randomness in the mechanism).
\end{definition}
\noindent While we focus the main presentation on obtaining ex-interim IR mechanisms, in Appendix~\ref{app:ex-post IR} we describe how to turn these mechanisms into ex-post IR ones without any loss in revenue.}
For a mechanism $M$, we denote by $R^M(\mathcal{D})$ the expected revenue of the mechanism when bidders sampled from $\mathcal{D}$ play $M$ truthfully. We also let $R^{OPT}(\mathcal{D})$ (resp. $R^{OPT}_\epsilon(\mathcal{D})$) denote the maximum possible expected revenue attainable by any BIC (resp. $\epsilon$-BIC) mechanism when bidders are sampled from ${\cal D}$ and play truthfully. For all cases we consider, these terms are well-defined.
We state and prove our results assuming that we can exactly sample from all input distributions efficiently and exactly evaluate their {cumulative distribution} functions. Our results still hold {\em even if we only have oracle access to sample from the input distributions}, as this is sufficient for us to approximately evaluate the {cumulative} functions to within the right accuracy in polynomial time (by making use of our symmetry and discretization tools, described in the next section). The approximation error on evaluating the cumulative functions is absorbed into loss in revenue. See discussion in App \ref{app:input model}.
Finally, we denote by $S_m, S_n$ the symmetric groups over the sets $[m]:=\{1,\ldots,m\}$ and $[n]$ respectively. Moreover, for $\sigma =(\sigma_1,\sigma_2) \in S_m \times S_n$, we assume that $\sigma$ maps element $(i,j) \in [m] \times [n]$ to $\sigma(i,j):=(\sigma_1(i), \sigma_2(j))$. We extend this definition to map a value vector $\vec{v}=(v_{ij})_{i\in [m], j \in [n]}$ to the vector $\vec{w}$ such that $\vec{w}_{\sigma(i,j)}=\vec{v}_{ij}$, for all $i,j$. Likewise, if ${\cal D}$ is a value distribution, $\sigma({\cal D})$ is the distribution that first samples $\vec{v}$ from $\cal D$ and then outputs $\sigma(\vec{v})$.
\section{Overview of our Approach} \label{sec:techniques} \label{sec:overview}
{\bf A Na\"ive LP Formulation.} Let ${\cal D}$ be the distribution of all bidders' values for all items (supported on a subset of $\mathbb{R}^{m \times n}$, where $m$ is the number of bidders and $n$ is the number of items). For a mechanism design problem with unit-demand bidders whose values are distributed according to ${\cal D}$, it is folklore knowledge how to write a linear programming relaxation of size polynomial in $|{\rm supp}({\cal D})|$ optimizing revenue. The relaxation keeps track of the (marginal) probability $\phi_{ij}(\vec{v}) \in [0,1]$ that item $j$ is given to bidder $i$ if the bidders' values are $\vec{v}$, and enforces feasibility constraints (no item is given more than once in expectation, no bidder gets more than one item in expectation), incentive compatibility constraints (in expectation over the other bidders' values for the items, no bidder has incentive to misreport her values for the items, if the other bidders don't), while optimizing the expected revenue of the mechanism. Notice that all constraints and the objective function can be written in terms of the marginals $\phi_{ij}$. Moreover, using the Birkhoff-von Neumann decomposition theorem, it is possible to convert the solution of this LP to a mechanism that has the same revenue and satisfies the feasibility constraints strongly (i.e. not in expectation, but prob. $1$). We give the details of the linear program in App~\ref{app:LP}, and also describe how to generalize this LP to incorporate demand and budget constraints.
Despite its general applicability, the na\"ive LP formulation has a major drawback: $|{\rm supp}(\mathcal{D})|$ could in general be infinite, and when it is finite it is usually exponential in both $m$ and $n$. For the settings we consider, this is always the case. For example, in the very simple setting where ${\cal D}$ samples each value i.i.d. uniformly from $\{\$5, \$10\}$, the support of the distribution has size $2^{m n}$. Such a support size is obviously prohibitive if we plan to employ the na\"ive LP formulation to optimize revenue.
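To make the folklore LP concrete, here is a minimal sketch on a toy instance of our own (not the App~\ref{app:LP} formulation): a single additive bidder, two items, values i.i.d. uniform on $\{1,2\}$, so four equally likely types. Variables are the per-type marginal allocation probabilities and expected price:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: one additive bidder, two items,
# values i.i.d. uniform on {1, 2}  ->  four equally likely types.
types = list(itertools.product([1, 2], repeat=2))
f = [1.0 / len(types)] * len(types)
n_items, n_types = 2, len(types)

def var(t, k):            # per type t: phi_{t,0}, phi_{t,1}, p_t
    return 3 * t + k      # k = 0, 1 index items; k = 2 is the price

num_vars = 3 * n_types
c = np.zeros(num_vars)
for t in range(n_types):
    c[var(t, 2)] = -f[t]  # linprog minimizes, so negate expected revenue

A_ub, b_ub = [], []
for t, v in enumerate(types):
    # IR:  v . phi_t - p_t >= 0
    row = np.zeros(num_vars)
    for j in range(n_items):
        row[var(t, j)] = -v[j]
    row[var(t, 2)] = 1.0
    A_ub.append(row); b_ub.append(0.0)
    # IC:  truth-telling beats reporting any other type s
    for s in range(n_types):
        if s == t:
            continue
        row = np.zeros(num_vars)
        for j in range(n_items):
            row[var(t, j)] -= v[j]
            row[var(s, j)] += v[j]
        row[var(t, 2)] += 1.0
        row[var(s, 2)] -= 1.0
        A_ub.append(row); b_ub.append(0.0)

bounds = [(0, 1), (0, 1), (0, None)] * n_types
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
opt_revenue = -res.fun
```

The optimum here must lie between the grand-bundle revenue ($2.25$, from posting the bundle at price $3$) and the expected welfare ($3$). With $m$ bidders and $n$ independent items, however, the number of variables grows with $|{\rm supp}({\cal D})|$, which is exactly the exponential blow-up discussed above.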
\noindent {\bf A Comparison to Myerson's Setting.} {\em What enables succinct and computationally efficient mechanisms in the single-item setting of Myerson?} Indeed, the curse of dimensionality discussed above arises even when there is a single item to sell; e.g., if every bidder's distribution has support $2$ and the bidders are independent, then the number of different bidder profiles is already $2^m$. What drives Myerson's result is the realization that there is structure in {a BIC mechanism} coming in the form of {\em monotonicity}: for all $i$, for all $v_{i1} \ge v_{i1}'$: $\mathbb{E}_{\vec{v}_{-i}}(\phi_{i1}(v_{i1}~;~\vec{v}_{-i})) \ge \mathbb{E}_{\vec{v}_{-i}}(\phi_{i1}(v'_{i1}~;~\vec{v}_{-i})),$ i.e. the expected probability that bidder $i$ gets the single item for sale in the auction increases with the value of bidder $i$, where the expectation is taken over the other bidders' values. Unfortunately, such a crisp monotonicity property of {BIC mechanisms} fails to hold if there are multiple items, and even if it were present it would still not suffice in itself to reduce the na\"ive LP to a manageable size.
{\em So what next?} We argued earlier that the symmetric distributions considered in the BIC $k$-items and the BIC $k$-bidders problems are very natural cases of the general optimal mechanism design problem. We argue next that they are natural for another reason: they enable enough structure for (i) the optimal mechanism to have small description complexity, instead of being an unusable, exponentially long list of what the mechanism ought to do for every input value vector $\vec{v}$; and (ii) the succinct solution to be efficiently computable, bypassing the exponentially large na\"ive LP. Our structural results are discussed in the following paragraphs. The first is enabled by exploiting randomization to transfer symmetries from the value distribution to the optimal mechanism. The second is enabled by proving a {\em strong-monotonicity property} of all BIC mechanisms. Our notion of monotonicity is more powerful than the notion of {\em cyclic-monotonicity}, which holds more generally but can't be exploited algorithmically. Together our structural results bring to light how the item- and bidder-symmetric settings are mathematically more elegant than general settings with no {apparent} structure.
\noindent {\bf Structural Result 1:}~{\em The Interplay Between Symmetries and Randomization.} Since the inception of Game Theory, scientists have been interested in the implications of symmetries for the structure of equilibria~\cite{GKT,BvN,N}. In his seminal paper~\cite{N}, Nash showed a rather interesting structural result, informally reading as follows: ``If a game has any symmetry, there exists a Nash equilibrium satisfying that symmetry.'' Indeed, something even more powerful is true: ``There always exists a Nash equilibrium that simultaneously satisfies all symmetries that the game may have.''
Inspired by Nash's symmetry result, albeit in our different setting, we show a similar structural property of {\bf randomized} mechanisms.~\footnote{We emphasize `randomized', since none of the symmetries we describe holds for deterministic optimal mechanisms.} Our structural result is rather general, applying to settings beyond those addressed in Thm~\ref{thm:additive}, and even beyond MHR or regular distributions. The following theorem holds for \emph{any} (arbitrarily correlated) joint distribution $\mathcal{D}$.
\begin{theorem} \label{thm:symmetries} \label{thm:symmetry} Let ${\cal D}$ be the distribution of bidders' values for the items (supported on a subset of $\mathbb{R}^{m \times n}$). Let also ${\cal S} \subseteq S_m \times S_n$ be an arbitrary set such that ${\cal D} \equiv \sigma({\cal D})$, for all $\sigma \in {\cal S}$; that is, assume that $\cal D$ is invariant under all permutations in $\cal S$. Then any BIC mechanism $M$ can be symmetrized into a mechanism $M'$ that respects all symmetries in $\cal S$ without any loss in revenue. That is, for all bid vectors $\vec{v}$ and all $\sigma \in {\cal S}$, the behavior of $M'$ on $\vec{v}$ and on $\sigma(\vec{v})$ is identical (up to permutation by $\sigma$). The same result holds if we replace BIC with $\epsilon$-BIC, IC, or $\epsilon$-IC.
\end{theorem}
While we postpone further discussion of this theorem and what it means for $M$ to behave ``identically'' to Sec~\ref{sec:symmetries}, we give a quick example to illustrate the symmetries that randomization enables in the optimal mechanism. Consider a single bidder and two items. Her value for each item is drawn i.i.d. from the uniform distribution on $\{4,5\}$. It is easy to see that the only optimal deterministic mechanism assigns price $4$ to one item and $5$ to the other. However, there is an optimal randomized mechanism that offers each item at price $4\frac{1}{2}$, and the uniform lottery ($1/2$ chance of getting item $1$, $1/2$ chance of getting item $2$) at price $4$. While item $1$ and item $2$ need to be priced differently in the deterministic mechanism to achieve optimal revenue, they can be treated identically in the optimal randomized mechanism. Thm \ref{thm:symmetries} applies in an extremely general setting: distributions can be continuous with arbitrary support and correlation, bidders can have budgets and demands, we could be maximizing social welfare instead of revenue, etc.
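The revenue-preservation step behind the symmetrization can be sanity-checked on the two-item instance just described. In the toy sketch below (the posted-price rule and tie-breaking toward buying are our own illustration, not the paper's construction), averaging an asymmetric posted-price mechanism over item permutations leaves expected revenue unchanged, because the value distribution is item-symmetric:

```python
import itertools

# Single additive bidder, two items, values i.i.d. uniform on {4, 5},
# so the value distribution D is item-symmetric.
types = list(itertools.product([4, 5], repeat=2))

def posted_price_revenue(prices):
    """Expected revenue when the bidder buys every item with
    non-negative utility (ties broken toward buying)."""
    total = 0.0
    for v in types:
        total += sum(p for vj, p in zip(v, prices) if vj >= p)
    return total / len(types)

rev_M = posted_price_revenue((4, 5))      # asymmetric mechanism M

# Symmetrized M': apply a uniformly random item permutation sigma to
# the bids, run M, and permute the outcome back; since D is invariant
# under sigma, this equals averaging M over permuted price vectors.
perms = list(itertools.permutations(range(2)))
rev_M_sym = sum(posted_price_revenue(tuple((4, 5)[s] for s in sigma))
                for sigma in perms) / len(perms)

assert abs(rev_M - rev_M_sym) < 1e-12     # both equal 6.5
```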
\noindent {\bf Structural Result 2:} {\em Strong-Monotonicity.} Even though the na\"ive LP formulation is not computationally efficient, Thm \ref{thm:symmetries} certifies the existence of a compact solution for the cases we consider. This solution lies in the subspace of $\mathbb{R}^{m \times n}$ spanned by the symmetries induced by ${\cal D}$. Still, Thm~\ref{thm:symmetries} does not inform us how to locate such a symmetric optimal solution. Indeed, the symmetry of the optimal solution is not a priori sufficient in itself to decrease the size of our na\"ive LP to a manageable one. For this purpose we establish a strong monotonicity property of {item-symmetric BIC} mechanisms (an item-symmetric mechanism is one that respects every item symmetry; see Sec \ref{sec:symmetries} for a definition).
\begin{theorem}\label{thm:monotone} If ${\cal D}$ is item-symmetric, every item-symmetric BIC mechanism is {\em strongly monotone}:
$$\text{for all bidders $i$, and items $j, j'$:}~v_{ij} \ge v_{ij'} \implies \mathbb{E}_{\vec{v}_{-i}}(\phi_{ij}(\vec{v})) \ge \mathbb{E}_{\vec{v}_{-i}}(\phi_{ij'}(\vec{v})).$$
\end{theorem}
\noindent That is, if bidder $i$ values item $j$ at least as much as item $j'$, her expected probability (over the other bidders' values) of receiving item $j$ is at least as high as that of receiving item $j'$. We give an analogous monotonicity property of IC mechanisms in Appendix~\ref{sec: IC results}.
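As a toy illustration of the statement (our own example; with a single bidder, the interim probabilities $\pi_{ij}$ coincide with the ex-post allocation probabilities), a uniform posted-price rule is item-symmetric and strongly monotone, which can be checked by brute-force enumeration:

```python
import itertools

P = 5  # a uniform posted price per item (item-symmetric, hence IC)

def phi(v):
    """Allocation rule: the additive bidder buys every item worth >= P."""
    return [1.0 if vj >= P else 0.0 for vj in v]

# Strong monotonicity: v_j >= v_j'  implies  phi_j(v) >= phi_j'(v).
for v in itertools.product(range(11), repeat=3):
    a = phi(v)
    for j, jp in itertools.permutations(range(3), 2):
        if v[j] >= v[jp]:
            assert a[j] >= a[jp]
```

The content of Thm~\ref{thm:monotone} is that this comparison survives in expectation over the other bidders' values for {\em every} item-symmetric BIC mechanism, not just for posted prices.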
\notshow{While it is a priori not clear that we should be able to find such a solution efficiently, we can indeed write a polynomial-size linear program that outputs a succinct representation of a symmetric solution. The LP looks similar to the original, except it keeps a single representative valuation vector $\overrightarrow{v}$ per equivalence class (under the symmetries of $\cal D$) of valuation vectors, and computes a collection of price-lottery pairs for these representatives only. Clearly, this approach alone is problematic as the truthfulness constraints on representatives are not sufficient to guarantee truthfulness of the full-fledged mechanism: roughly speaking, the smaller LP accounts only for potential deviations to other representative valuation vectors, and not permutations thereof. }
\noindent {\bf From $\epsilon$- to truly-BIC.} Exploiting the aforementioned structural theorems we are able to efficiently compute {\em exactly optimal mechanisms} for value distributions ${\cal D}$ whose marginals on every item have constant-size support. (${\cal D}$ itself can easily have exponentially-large support if, e.g., the items are independent.) To adapt our solution to continuous distributions {or distributions whose marginals have non-constant support}, we attempt the obvious rounding idea: change $\mathcal{D}$ by rounding all values sampled from $\mathcal{D}$ down to the nearest multiple of some accuracy $\epsilon$, and solve the problem on the resulting distribution ${\cal D}_{\epsilon}$. While we can argue that the optimal BIC mechanism for ${\cal D}_{\epsilon}$ is also approximately optimal for ${\cal D}$, we need to also give up on the incentive compatibility constraints, resulting in an approximately-BIC mechanism where bidders may have an incentive to misreport their values, but the incentive to misreport is always smaller than some function of $\epsilon$. A natural approach to eliminate those incentives to misreport is to appropriately discount the prices for items or bundles of items charged by the mechanism computed for ${\cal D}_{\epsilon}$, generalizing the single-bidder rounding idea attributed to Nisan in~\cite{CHK}. Unfortunately, this approach fails in multi-bidder settings, destroying both revenue and truthfulness. Simply put, even though the discounts encourage bidders to choose more expensive options, these choices affect not only the price they pay us, but also the prices paid by other bidders as well as the incentives of other bidders. Once we start rounding the prices, we could completely destroy any truthfulness the original mechanism had, leaving us with no guarantees on revenue.
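For concreteness, the down-rounding that defines ${\cal D}_{\epsilon}$ gives a simple coupling between a value drawn from ${\cal D}$ and its rounded counterpart, sketched below on a hypothetical instance (values in integer `cents' so the arithmetic is exact):

```python
import random

random.seed(0)
DELTA = 10  # grid width in cents, i.e. accuracy 0.10 after scaling to [0, 1]

for _ in range(1000):
    v = [random.randrange(0, 101) for _ in range(5)]  # a draw from D_i (in cents)
    w = [(x // DELTA) * DELTA for x in v]             # the coupled rounded draw
    # coupling property: values only move down, and by less than DELTA
    assert all(x >= y >= x - DELTA for x, y in zip(v, w))
```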
Our approach is entirely different, comprising a non-trivial extension of the main technique of~\cite{HKM}. We run simultaneous VCG auctions, one per bidder, in which each bidder competes with make-believe replicas of himself, whose values are drawn from the same value distribution as his own. The goods for sale in these per-bidder VCG auctions are replicas of the bidder drawn from the modified distribution ${\cal D}_{\epsilon}$; these replicas are called surrogates. The intention is that the surrogates bought by the bidders in the per-bidder VCG auctions will compete with each other in the optimal mechanism ${M}$ designed for the modified distribution ${\cal D}_{\epsilon}$. Accordingly, the {value} of a bidder for a surrogate is the expected value of the bidder for the items that the surrogate is expected to win in ${M}$ {\em minus} the price the surrogate is expected to pay. {This is exactly our approach,} except that we modify mechanism $M$ to discount all these prices by a factor of $1-O(\epsilon)$. This is necessary to argue that bidders choose to purchase a surrogate with high probability, as otherwise we cannot hope to make good revenue. There are several technical ideas going into the design and analysis of our two-phase auction (surrogate sale, surrogate competition). We describe these ideas in detail in Sec~\ref{sec:true PTAS}, emphasizing several important complications departing from the setting of~\cite{HKM}. Importantly, the approach of \cite{HKM} is brute force in $|\text{supp}(\mathcal{D}_i)|$. While this is acceptable for the $k$-items problem, it takes exponential time for the $k$-bidders problem. In addition to proving the following theorem, we show how to make use of Thm~\ref{thm:monotone} to get the reduction to run in polynomial time in both settings.
\begin{theorem}\label{thm:epsilon-BIC to BIC}
{Consider a generic setting with $n$ items and $m$ bidders who are additive up to some capacity.} Let ${\cal D}:=\times_i {\cal D}_i$ and ${\cal D}':=\times_i {\cal D}'_i$ be product distributions, sampling every bidder independently from $[0,1]^n$. Suppose that, for all $i$, ${\cal D}'_i$ samples vectors whose coordinates are integer multiples of some $\delta \in (0,1)$ and that ${\cal D}_i$ and ${\cal D}'_i$ can be coupled so that, with probability $1$, a value vector $\vec{v}_i$ sampled from ${\cal D}_i$ and a value vector $\vec{v}'_i$ sampled from ${\cal D}'_i$ satisfy that $v_{ij} \ge v_{ij}' \ge v_{ij}-\delta, \forall j$. Then, for all {$\eta, \epsilon >0$}, any $\epsilon$-BIC mechanism $M_1$ for ${\cal D}'$ can be transformed into a BIC mechanism $M_2$ for ${\cal D}$ such that $R^{M_2}({\cal D}) \ge (1-\eta) \cdot R^{M_1}({\cal D}') - \frac{\epsilon+2\delta}{\eta}T$, where $T$ is the maximum number of items that can be awarded by a feasible mechanism. {Furthermore, if $\mathcal{D}$ and $\mathcal{D}'$ are both valid inputs to the BIC $k$-bidders or $k$-items problem, the transformation runs in time polynomial in $n$ and $m$. Moreover, for the BIC $k$-items problem, $T = k$ and, for the BIC $k$-bidders problem, $T \leq k \max_i C_i$, where $C_i$ is the demand of bidder $i$.}
\end{theorem}
\noindent Figure~\ref{fig:structure} shows how the various components discussed above interact with each other to prove Theorem~\ref{thm:additive}. The proof of Corollary~\ref{cor:MHR} is given in Appendix~\ref{app:MHR}.
\begin{figure}
\caption{Our Proof Structure}
\label{fig:structure}
\end{figure}
\section{Symmetry Theorem}\label{sec:symmetries}
We provide the necessary definitions to understand exactly what our symmetry result is claiming.
\begin{definition}(Symmetry in a Distribution) We say that a distribution $\mathcal{D}$ has symmetry $\sigma \in S_m \times S_n$ if, for all $\vec{v} \in \mathbb{R}^{m\times n}$, $\Pr_{\cal D}[\vec{v}] = \Pr_{\cal D}[\sigma(\vec{v})]$. We also write ${\cal D} \equiv \sigma({\cal D})$.
\end{definition}
\begin{definition}(Symmetry in a Mechanism) We say that a mechanism respects symmetry $\sigma \in S_m \times S_n$ if, for all $\vec{v} \in \mathbb{R}^{m\times n}$, $M(\sigma(\vec{v})) = \sigma(M(\vec{v}))$.
\end{definition}
\begin{definition}(Permutation of a Mechanism) For any $\sigma \in S_m \times S_n$, and any mechanism $M$, define the mechanism $\sigma(M)$ as $[\sigma(M)](\vec{v}) = \sigma(M(\sigma^{-1}(\vec{v})))$.
\end{definition}
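To make these definitions concrete, the following Python sketch (illustrative only; representing a mechanism as a map from a value matrix to an allocation/price pair, and the toy mechanism \texttt{fav\_item}, are our assumptions, not the paper's formalism) applies a symmetry $\sigma = (\sigma_b, \sigma_i) \in S_m \times S_n$ to a value matrix and builds the permuted mechanism $\sigma(M)$:

```python
def apply_sym(v, sigma_b, sigma_i):
    # Apply sigma = (sigma_b, sigma_i) in S_m x S_n to an m x n matrix.
    # Convention (our choice for this sketch):
    #   sigma(v)[i][j] = v[sigma_b[i]][sigma_i[j]].
    return [[v[sigma_b[i]][sigma_i[j]] for j in range(len(sigma_i))]
            for i in range(len(sigma_b))]

def invert(p):
    # Inverse of a permutation given as a tuple.
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def permuted_mechanism(M, sigma_b, sigma_i):
    # [sigma(M)](v) = sigma(M(sigma^{-1}(v))).  Here M maps a value matrix to
    # an (allocation, prices) pair: allocation is an m x n matrix of
    # probabilities, prices a length-m vector indexed by bidder.
    inv_b, inv_i = invert(sigma_b), invert(sigma_i)
    def sM(v):
        alloc, prices = M(apply_sym(v, inv_b, inv_i))
        return (apply_sym(alloc, sigma_b, sigma_i),
                [prices[sigma_b[i]] for i in range(len(sigma_b))])
    return sM

def fav_item(v):
    # A toy mechanism (hypothetical example): each bidder gets her favorite
    # item and pays half her value for it.  It respects every symmetry
    # whenever the row maxima are unique.
    m, n = len(v), len(v[0])
    alloc = [[1 if j == max(range(n), key=lambda t: v[i][t]) else 0
              for j in range(n)] for i in range(m)]
    return alloc, [max(row) / 2 for row in v]
```

Under this convention, a mechanism $M$ respects $\sigma$ exactly when \texttt{permuted\_mechanism(M, sigma\_b, sigma\_i)} agrees with $M$ pointwise.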
We proceed to state our symmetry theorem; its proof can be found in App~\ref{app:symmetries}.
\begin{prevtheorem}{Theorem}{thm:symmetry}{\bf (Restated from Sec~\ref{sec:overview})} For all $\mathcal{D}$, any BIC (respectively IC, $\epsilon$-IC, $\epsilon$-BIC) mechanism $M$ can be symmetrized into {a BIC (respectively IC, $\epsilon$-IC, $\epsilon$-BIC) mechanism} $M'$ such that, for all $\sigma \in S_m \times S_n$, if $\mathcal{D}$ has symmetry $\sigma$, $M'$ respects $\sigma$, and $R^M(\mathcal{D}) = R^{M'}(\mathcal{D})$.
\end{prevtheorem}
We note that Thm~\ref{thm:symmetry} is an extremely general theorem: $\mathcal{D}$ can have arbitrary correlation between bidders or items, and can be continuous. One might wonder why we had to restrict our theorem to symmetries in $S_m \times S_n$ and not arbitrary permutations of the set $[m]\times [n]$. In fact, after reading through our proof, one can see that the same inequalities that make symmetries in $S_m \times S_n$ work also hold for symmetries in $S_{[m]\times [n]}$. However, the mechanism resulting from our proof is not a feasible one, since our transformation can violate feasibility constraints for symmetries $\sigma \notin S_m \times S_n$.
We also emphasize a subtle property of our symmetrizing transformation: the transformation takes as input a set of symmetries satisfied by ${\cal D}$ and a mechanism, and symmetrizes the mechanism so that it satisfies all symmetries in the given set. Our transformation {\em does not work if the given set of symmetries is not a subgroup.} Luckily, the maximal subset of symmetries in $S_m \times S_n$ satisfied by a value distribution is {\em always a subgroup,} and this enables our result.
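The subgroup requirement is easy to meet in practice: given any set of symmetries known to be satisfied by ${\cal D}$, one can close it under composition before invoking the transformation (since the maximal set of symmetries of ${\cal D}$ is a group, the closure still consists of symmetries of ${\cal D}$). A small Python helper (ours, not from the paper) computes the generated subgroup by breadth-first closure:

```python
def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(x) = p[q[x]].
    return tuple(p[x] for x in q)

def generated_subgroup(gens, n):
    # Closure of a set of permutations of {0, ..., n-1} under composition,
    # explored breadth-first from the identity.
    ident = tuple(range(n))
    group = {ident}
    frontier = [ident]
    while frontier:
        p = frontier.pop()
        for g in gens:
            q = compose(g, p)
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group
```

For example, the two adjacent transpositions of three items generate all of $S_3$ (six permutations), while a single transposition generates only a two-element subgroup.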
\section{Optimal Symmetric Mechanisms for Discrete Distributions}\label{sec:LP}
In this section, we solve the following problem: ``Given a distribution $\mathcal{D}$ with constant support per dimension and a subgroup of symmetries $S \subseteq S_m \times S_n$ satisfied by ${\cal D}$, find a BIC mechanism $M$ that respects all symmetries in $S$ and maximizes $R^M(\mathcal{D})$.'' By Thm~\ref{thm:symmetry}, such an $M$ will in fact be optimal with respect to all mechanisms. Intuitively, optimizing over symmetric mechanisms should require less work than over general mechanisms, since we should be able to exploit the symmetry constraints in our optimization. Indeed, suppose that every bidder can report $c$ different values for each item, where $c$ is some absolute constant. Then the na\"ive LP of Section~\ref{sec:overview}/App~\ref{app:LP} has size polynomial in $c^{mn}$, where $m,n$ are the number of bidders and items respectively. In Sec~\ref{subsec:bidder symmetries} we give a simple observation that reduces the number of variables and constraints of this LP for any given $S$. This observation in itself is sufficient to provide an efficient solution to the BIC $k$-items problem (in our constant-support-per-dimension setting), but falls short of solving the BIC $k$-bidders problem. For the latter, we need another structural consequence of symmetry, which comes in the form of a {\em strong-monotonicity} property satisfied by all symmetric BIC mechanisms. Strong-monotonicity and symmetry together enable us to obtain an efficient solution to the BIC $k$-bidders problem in Sec~\ref{subsec:item symmetries} (still for our constant-support-per-dimension setting). We explicitly write the LPs that find the optimal BIC mechanism. Simply adding a term {$-\epsilon\cdot \sum_j \pi_{ij}(\vec{w}_i)$} to the appropriate side of the BIC constraints yields an LP that finds the optimal $\epsilon$-BIC mechanism for any $\epsilon$. Efficiently handling non-constant/infinite supports per dimension is postponed to Sec~\ref{sec:Algorithms}.
\subsection{Reducing the LP size for any $S$, and Solving the discrete BIC $k$-items Problem}\label{subsec:bidder symmetries}
We provide an LP formulation that works for any $S$. Our LP is the same as the na\"ive LP of Figure~\ref{fig:naive LP}, except we drop some constraints of that LP and modify its objective function as follows. Since our mechanism needs to respect every symmetry in $S$, it must satisfy
$$\phi_{ij}(\vec{v}) = \phi_{\sigma(i,j)}(\sigma(\vec{v})), \forall i,j,\vec{v},\sigma \in S\text{ and }p_i(\vec{v}) = p_{\sigma(i)}(\sigma(\vec{v})), \forall i,\vec{v}, \sigma \in S.$$
Therefore, if we define an equivalence relation by saying that $\vec{v} \sim_S \sigma(\vec{v})$, for all $\sigma \in S$, we only need to keep variables $\phi_{ij}(\vec{v}),p_i(\vec{v})$ for a single representative from each equivalence class. We can then use the above equalities to substitute for all non-representative $\vec{v}$'s into the na\"ive LP. This will cause some constraints to become duplicates. If we let $E$ denote the set of representatives, then we are left with the LP of Figure~\ref{fig:succinct k-items LP} in App \ref{app:succinct LPs}, after removing duplicates. In parentheses at the end of each type of variable/constraint is the number of {distinct} variables/constraints {of that type}.
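Concretely, one way to pick the representative set $E$ is to map each valuation vector to a canonical element of its orbit, e.g. the lexicographically smallest image under $S$. The following Python sketch illustrates this for intuition, under the assumption that $S$ is given explicitly as a list of (bidder permutation, item permutation) pairs; the succinct LPs never enumerate $S$ explicitly, so this brute force is meant only for tiny instances:

```python
def canonical(v, group):
    # group: iterable of (sigma_b, sigma_i) pairs describing a subgroup of
    # S_m x S_n.  Returns the lexicographically smallest image of the value
    # matrix v under the group: a canonical representative of v's orbit.
    best = None
    for sb, si in group:
        w = tuple(tuple(v[sb[i]][si[j]] for j in range(len(si)))
                  for i in range(len(sb)))
        if best is None or w < best:
            best = w
    return best
```

Two value matrices are $\sim_S$-equivalent exactly when their canonical forms coincide, so keeping one variable per canonical form realizes the substitution described above.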
\begin{lemma} \label{lem:simple reduction works for k-items} The LP of Fig.~\ref{fig:succinct k-items LP} in App~\ref{app:succinct LPs} has polynomial size for the BIC $k$-items problem, if the support of every marginal of the value distribution is an absolute constant.
\end{lemma}
\subsection{Strong-Monotonicity, and Solving the discrete BIC $k$-bidders Problem}\label{subsec:item symmetries}
Unfortunately, the reduction of the previous section is \emph{not} strong enough to make the LP polynomial in the number of items $n$, even if $S$ contains all item permutations and there is a constant number of bidders. This is because a bidder can deviate to an exponential number $c^n$ of types, and our LP needs to maintain an exponential number of BIC constraints. To remedy this, we prove that every item-symmetric {BIC} mechanism for bidders sampled from an item-symmetric distribution satisfies a natural monotonicity property:
\begin{definition}(Strong-Monotonicity of a BIC mechanism) A BIC or $\epsilon$-BIC mechanism is said to be {\em strongly monotone} if for all $i,j,j'$, $v_{ij} \geq v_{ij'} \Rightarrow \pi_{ij}(\vec{v_i}) \geq \pi_{ij'}(\vec{v_i})$. That is, bidders expect to receive their favorite items more often.
\end{definition}
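Strong-monotonicity is directly checkable given the interim allocation rule. A minimal Python sketch (representing the interim rule $\pi_i$ as a finite table from bidder types to per-item probabilities, which is our simplification):

```python
def is_strongly_monotone(pi, tol=1e-12):
    # pi: dict mapping a bidder's type (tuple of item values) to her interim
    # allocation probabilities (tuple pi_j, one entry per item).  Checks
    # v_ij >= v_ij' implies pi_ij >= pi_ij' for every type in the table.
    for v, probs in pi.items():
        for j in range(len(v)):
            for jp in range(len(v)):
                if v[j] >= v[jp] and probs[j] < probs[jp] - tol:
                    return False
    return True
```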
\begin{prevtheorem}{Theorem}{thm:monotone}{\bf (Restated from Sec~\ref{sec:overview})} If $M$ is BIC and $\mathcal{D}$ and $M$ are both item-symmetric, then $M$ is strongly monotone. If $M$ is $\epsilon$-BIC and $\mathcal{D}$ and $M$ are both item-symmetric, there exists an {$\epsilon$-BIC} mechanism of the same expected revenue that is strongly monotone.
\end{prevtheorem}
\noindent The proof of Thm~\ref{thm:monotone} can be found in App~\ref{app:proof LP}. We note again that our notion of strong-monotonicity differs from the notion of cyclic-monotonicity, which holds more generally but is not sufficient for obtaining efficient algorithms. Instead, strong-monotonicity suffices due to the following:
\begin{observation}\label{obs:monotone} When playing {an item-symmetric,} strongly monotone BIC mechanism, bidder $\vec{v_i}$ has no incentive to report any $\vec{w_i}$ with $w_{ij} > w_{ij'}$ unless $v_{ij} \geq v_{ij'}$.
\end{observation}
\begin{lemma} \label{lem: succinct LP for k-bidders} There exists a polynomial-size LP for the BIC $k$-bidders problem, if the support of every marginal of the value distribution is an absolute constant. The LP is shown in Fig.~\ref{fig:succinct k-bidders LP} of App \ref{app:succinct LPs}.
\end{lemma}
{We note that Theorem \ref{thm:monotone} is also true for IC and $\epsilon$-IC mechanisms with the appropriate definition of strong-monotonicity. The definition and proof are given in Appendix \ref{sec: IC results}.}
\section{Efficient Mechanisms for General Distributions}\label{sec:Algorithms}
We use the results of Sec~\ref{sec:LP} to prove Thm~\ref{thm:additive}. First, it is not hard to see that discretizing the value distribution to multiples of $\delta$, for sufficiently small $\delta = \delta(\epsilon)$, and applying Lemmas~\ref{lem:simple reduction works for k-items} and~\ref{lem: succinct LP for k-bidders} yields an algorithm for computing an $\epsilon$-BIC $\epsilon$-optimal mechanism for the $k$-items and $k$-bidders problems. The remaining technical difficulty is turning these mechanisms into truly BIC ones. To do this, we employ a non-trivial modification of the construction in~\cite{HKM} to improve the truthfulness of the mechanism at the cost of a small amount of revenue. We present our construction and its challenges in Sec~\ref{sec:true PTAS}.
\subsection{A Warmup: $\epsilon$-Truthful Near-Optimal Mechanisms} \label{sec:epsilon-truthful}
\paragraph{Discretization:} Let ${\cal D}$ be a valid input to the BIC $k$-items or the BIC $k$-bidders problem. For each $i$, create a new distribution $\mathcal{D}'_i$ that first samples a bidder from $\mathcal{D}_i$, and rounds every value down to the nearest multiple of $\delta$. Let ${\cal D}'$ be the product distribution of all ${\cal D}'_i$'s. {Let also $T$ denote the maximum number of items that can be awarded by a feasible mechanism.} We show the following lemma whose proof can be found in App~\ref{app:bi-criterion}.
\begin{lemma}\label{lem:deltaIC}
For all $\delta$, let $M'$ be the optimal {$\delta$}-BIC mechanism for ${\cal D}'$. Then $R^{M'}(\mathcal{D}') \geq R^{OPT}(\mathcal{D}) - {\delta T}$. Moreover, let $M$ be the mechanism that on input $\vec{v}$ rounds every $v_{ij}$ down to the nearest multiple $v_{ij}'$ of $\delta$ and implements the outcome $M'(\vec{v}')$. Then $M$ is {$2\delta$}-BIC for bidders sampled from $\mathcal{D}$, and has revenue at least $R^{OPT}(\mathcal{D})-{\delta T}$.
\end{lemma}
Now notice that our algorithms of Sec \ref{sec:LP} allow us to find an optimal {$\delta$}-BIC mechanism $M'$ for $\mathcal{D}'$. So an application of Lemma~\ref{lem:deltaIC} allows us to obtain a {$2\delta$}-BIC mechanism for ${\cal D}$ whose revenue is at least $R^{OPT}(\mathcal{D})-{\delta T}$.
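The discretization step behind Lemma~\ref{lem:deltaIC} is mechanical; the following Python sketch shows the rounding wrapper (illustrative only; $M'$ is assumed to be given as a function on rounded value matrices):

```python
import math

def round_down(v, delta):
    # Round every entry of the value matrix down to the nearest multiple
    # of delta.
    return [[math.floor(x / delta) * delta for x in row] for row in v]

def rounded_mechanism(M_prime, delta):
    # The lemma's construction: on input v, round to v' and implement the
    # outcome M'(v') computed for the discretized distribution.
    def M(v):
        return M_prime(round_down(v, delta))
    return M
```

In floating-point code one should pick $\delta$ exactly representable in binary (e.g. a power of $1/2$) so that the rounding is exact rather than off by one unit in the last place.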
\subsection{Truthful Near-Optimal Mechanisms: Proof of Theorems~\ref{thm:epsilon-BIC to BIC} and~\ref{thm:additive}}\label{sec:true PTAS}
{We start this section by describing our $\epsilon$-BIC to BIC transformation result (Thm~\ref{thm:epsilon-BIC to BIC}), arguing that it can be implemented efficiently in the BIC $k$-items and $k$-bidders settings.~{\footnote{{We will explicitly describe the transformation for the BIC $k$-items and $k$-bidders settings. For an arbitrary setting where $m$ bidders sample their valuation vectors for $n$ items independently (but not necessarily identically) from $[0,1]^n$ (allowing correlation among items), simply employ the BIC $k$-items transformation, replacing $k$ with $n$.}}} Combining our transformation with the results of the previous sections, we obtain Thm~\ref{thm:additive} at the end of this section. Our transformation is inspired by~\cite{HKM}, but has several important differences. We explicitly describe our transformation, point out the key differences between our setting and that considered in \cite{HKM}, and outline the proof of correctness, postponing the complete proof of Theorem~\ref{thm:epsilon-BIC to BIC} to Appendix~\ref{app:true PTAS}.}
\paragraph{Algorithm Phase $1$: Surrogate Sale}
\begin{enumerate}
\item Recall from the statement of Theorem~\ref{thm:epsilon-BIC to BIC} that ${\cal D}'$ samples values that are integer multiples of $\delta$ and that ${\cal D}$ and ${\cal D}'$ can be coupled so that, whenever we have $\vec{v}$ sampled from $\mathcal{D}$ and $\vec{v}'$ sampled from $\mathcal{D}'$, we have $v_{ij} \geq v'_{ij} \geq v_{ij}-\delta$, for all $i,j$. Moreover, $M_1$ is an $\epsilon$-BIC mechanism for $\mathcal{D}'$, for some $\epsilon$.
\item {Modify $M_1$ to multiply all prices it charges by a factor of $(1-\eta)$. Call $M$ the mechanism resulting from this modification. Interpret the $\eta$-fraction of the prices given back as rebates.}
\item For each bidder $i$, create $r-1$ \emph{replicas} sampled i.i.d. from $\mathcal{D}_i$ and $r$ \emph{surrogates} sampled i.i.d. from $\mathcal{D}'_i$. Use {$r = ({\eta \over \delta})^2 \cdot m^2 \cdot \hat{\beta}$, where $\hat{\beta} = ({1 \over \delta}+1)^k$, for the $k$-items transformation, and $\hat{\beta}=(n+1)^{1/\delta +1}$, for the $k$-bidders transformation.}
\item Ask each bidder to report $\vec{v_i}$. For $k$-bidders only: Fix a permutation $\sigma$ such that ${v_{i\sigma(j)} \geq v_{i\sigma(j+1)}}, \forall j$. For each surrogate and replica $\vec{w_i}$, permute $\vec{w_i}$ into $\vec{w_i}'$ satisfying ${w'_{i\sigma(j)} \geq w'_{i\sigma(j+1)},\forall j}$.
\item Create a weighted bipartite graph with replicas (and bidder $i$) on the left and surrogates on the right. The weight of an edge between replica (or bidder $i$) with type $\vec{r_i}$ and surrogate of type $\vec{s_i}$ is $\vec{r_i}$'s utility for the expected outcome of $\vec{s_i}$ when playing $M$ (where the expectation is taken over the randomness of $M$ and of the other bidders assuming they are sampled from $\mathcal{D}'_{-i}$).
\item Compute the VCG matching and prices. If a replica (or bidder $i$) is unmatched in the VCG matching, add an edge to a random unmatched surrogate. The surrogate selected for bidder $i$ is whoever she is matched to.
\end{enumerate}
\paragraph{Algorithm Phase $2$: Surrogate Competition}
\begin{enumerate}
\item Let $\vec{s_i}$ denote the surrogate chosen to represent bidder $i$ in phase one, and let $\vec{s}$ denote the entire surrogate profile. Have the surrogates $\vec{s}$ play $M$.
\item If bidder $i$ was matched to their surrogate through VCG, charge them the VCG price and award them $M_i(\vec{s})$. (Recall that this has both an allocation and a price component; the price is added onto the VCG price.) If bidder $i$ was matched to a random surrogate after VCG, award them nothing and charge them nothing.
\end{enumerate}
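Step 6 of Phase 1 requires a maximum-weight matching together with VCG prices on the replica/surrogate graph. For intuition only, the following Python sketch computes both by brute force on tiny instances (a real implementation would use the Hungarian algorithm); clipping negative edges to zero models an agent's option to remain unmatched, and the prices are the standard VCG externalities:

```python
from itertools import permutations

def vcg_matching(w):
    # Brute-force VCG for a small bipartite matching market.
    # w[a][b] = value of left agent a for right good b.  Agents may stay
    # unmatched (utility 0), modeled by clipping negative edges to 0.
    # Returns (assignment, prices): assignment[a] is the good matched to a,
    # or None if a is effectively unmatched.
    r = len(w)
    def welfare(perm, skip=None):
        return sum(max(w[a][perm[a]], 0.0) for a in range(r) if a != skip)
    best = max(permutations(range(r)), key=welfare)
    assignment = [b if w[a][b] > 0 else None for a, b in enumerate(best)]
    prices = []
    for a in range(r):
        # VCG payment = externality imposed on the others by a's presence.
        others_without_a = max(welfare(p, skip=a) for p in permutations(range(r)))
        others_with_a = welfare(best, skip=a)
        prices.append(others_without_a - others_with_a)
    return assignment, prices
```

For instance, with weights $[[3,1],[3,2]]$ both agents prefer good $0$; the matching gives good $0$ to agent $0$ at VCG price $1$ (the displaced agent's loss $3-2$) and good $1$ to agent $1$ for free.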
There are several differences between our transformation and that of~\cite{HKM}. First, observe that, because ${\cal D'}$ and $M_1$ are explicitly given as input to our transformation (via an exact sampling oracle for ${\cal D}'_i$ and an explicit specification of the outcome awarded to every type $\vec{v}_i$ sampled from ${\cal D}'_i$, for all $i$), we do not have to worry about approximation issues in calculating the edge weights of our VCG auctions in Phase 1. Second, in \cite{HKM}, the surrogates take part in an algorithm rather than playing a mechanism, and every replica has non-negative {value} for the outcome of an algorithm because no prices are charged. Here, however, replicas may have negative {value} for the outcome of a mechanism because prices are charged. Therefore, some edges may have negative weights, and the VCG matching may not be perfect. We have modified $M$ to give rebates (phase $1$, step $2$) so that the VCG matching cannot be far from perfect, and we show that we do not lose too much revenue from unmatched bidders. Finally, in the $k$-bidders problem, the vanilla approach that does not permute sampled replicas and surrogates (as we do in phase $1$, step $4$ of our reduction) would require exponentially many replicas and surrogates to preserve revenue. To maintain the computational efficiency of our reduction, we resort to sampling only polynomially many replicas/surrogates and permuting them according to the permutation induced by the bidder's reported values. This may seem to give a bidder control over the distribution of replicas and surrogates sampled for her. We show, exploiting the monotonicity results of Sec~\ref{sec:LP}, that our construction is still BIC despite our permuting the replicas and surrogates. We overview the main steps of the proof of Thm~\ref{thm:epsilon-BIC to BIC} and give its complete proof in App~\ref{app:true PTAS}. We conclude this section with the proof of Thm~\ref{thm:additive}.
\begin{prevproof}{Theorem}{thm:additive}
Choose $\mathcal{D}'$ to be the distribution that samples from ${\cal D}$ and rounds every $v_{ij}$ down to the nearest multiple of $\delta$. Let then $M_1$ be the optimal $\delta$-BIC mechanism for $\mathcal{D}'$ as computed by the algorithms of Section~\ref{sec:LP}. By Lemma~\ref{lem:deltaIC}, $R^{M_1}(\mathcal{D}') \geq R^{OPT}(\mathcal{D}) - \delta T$. Applying Thm~\ref{thm:epsilon-BIC to BIC} we obtain a BIC mechanism $M_2$ such that
{\begin{align}
R^{M_2}({\cal D}) &\ge (1-\eta) \cdot R^{M_1}({\cal D}') - \frac{3\delta}{\eta}T \\
&\ge R^{OPT}(\mathcal{D}) - \eta \cdot R^{OPT}(\mathcal{D}) - (1-\eta) \delta T - \frac{3\delta}{\eta}T. \label{eq:grand 1}
\end{align}
Notice that $R^{OPT}(\mathcal{D}) {\le T}$. Hence, choosing $\eta=\epsilon$ and $\delta = \epsilon^2$, \eqref{eq:grand 1} gives
\begin{align}
R^{M_2}({\cal D}) &\ge R^{OPT}(\mathcal{D}) - O(\epsilon \cdot k)~~~~\text{(for $k$-items)}; \text{and}\\
R^{M_2}({\cal D}) &\ge R^{OPT}(\mathcal{D}) - O(\epsilon \cdot \sum_i C_i)~~~~\text{(for $k$-bidders)}.
\end{align}
The proof of Theorem~\ref{thm:additive} is concluded by noticing that $\sum_i C_i \le k \max_i C_i$ and $k$ is an absolute constant.
}\end{prevproof}
\begin{thebibliography}{1}
\bibitem{alaei} S.~Alaei. Bayesian Combinatorial Auctions: Expanding Single Buyer Mechanisms to Many Buyers. {\em Proceedings of FOCS} 2011.
\bibitem{BH} X. Bei, Z. Huang. Bayesian Incentive Compatibility Via Fractional Assignments. {\em Proceedings of SODA} 2011.
\bibitem{BGGM} S.~Bhattacharya, G.~Goel, S.~Gollapudi and K.~Munagala. Budget constrained auctions with heterogeneous items. {\em Proceedings of STOC} 2010.
\bibitem{BCKW} P.~Briest, S.~Chawla, R.~Kleinberg and S.~M.~Weinberg. Pricing Randomized Allocations. {\em Proceedings of SODA} 2010.
\bibitem{BK} P.~Briest and P.~Krysta. Buying Cheap is Expensive: Hardness of Non-Parametric Multi-Product Pricing. {\em Proceedings of SODA} 2007.
\bibitem{BvN} G.~W.~Brown and J.~von~Neumann. Solutions of Games by Differential Equations. {\em In H. W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games,} 1:73--79. Princeton University Press, 1950.
\bibitem{CD} Y.~Cai and C. Daskalakis. Extreme-Value Theorems for Optimal Multidimensional Pricing. {\em Proceedings of FOCS,} 2011.
\bibitem{CDW} Y.~Cai, C.~Daskalakis and S.~M.~Weinberg. An Algorithmic Characterization of Multi-Dimensional Mechanisms. {\em arXiv report}, 2011.
\bibitem{CJ} Y.~Cai and Z.~Huang. Simple and Nearly Optimal Multi-Item Auction. {\em Manuscript}, 2011.
\bibitem{CHK} S.~Chawla, J.~D.~Hartline and R.~D.~Kleinberg. Algorithmic Pricing via Virtual Valuations. {\em Proceedings of EC} 2007.
\bibitem{CHMS} S.~Chawla, J.~D.~Hartline, D.~Malec and B.~Sivan. Multi-Parameter Mechanism Design and Sequential Posted Pricing. {\em Proceedings of STOC} 2010.
\bibitem{CMS} S.~Chawla, D.~Malec and B.~Sivan. The Power of Randomness in Bayesian Optimal Mechanism Design. {\em Proceedings of EC} 2010.
\bibitem{DFK} S.~Dobzinski, H.~Fu and R.~D.~Kleinberg. Optimal Auctions with Correlated Bidders are Easy. {\em Proceedings of STOC} 2011.
\bibitem{JDM} D.~M.~Johnson, A.~L.~Dulmage, and N.~S.~Mendelsohn. On an Algorithm of G. Birkhoff Concerning Doubly Stochastic Matrices. {\em Canadian Mathematical Bulletin} 1960.
\bibitem{GKT} D. Gale, H. W. Kuhn, and A. W. Tucker. On Symmetric Games. {\em In H. W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games,} 1:81--87. Princeton University Press, 1950.
\bibitem{HKM} J. Hartline, R. Kleinberg, A. Malekian. Bayesian Incentive Compatibility and Matchings. {\em Proceedings of SODA} 2011.
\bibitem{HL} J.~D.~Hartline and B.~Lucier. Bayesian Algorithmic Mechanism Design. {\em Proceedings of STOC} 2010.
\bibitem{optimal:econ} A.~M.~Manelli and D.~R.~Vincent. Multidimensional Mechanism Design: Revenue Maximization and the Multiple-Good Monopoly. {\em Journal of Economic Theory,} 137(1):153--185, 2007.
\bibitem{myerson} R.~B.~Myerson. Optimal Auction Design. {\em Mathematics of Operations Research,} 1981.
\bibitem{N} J.~Nash.
\newblock Noncooperative Games.
\newblock {\em Annals of Mathematics}, 54:289--295, 1951.
\bibitem{AGTbook} N.~Nisan, T.~Roughgarden, E.~Tardos and V.~V.~Vazirani (eds.). {\em Algorithmic Game Theory}. Cambridge University Press, 2007.
\end{thebibliography}
\appendix
\section{Input Distribution Model}\label{app:input model}
We discuss two models for accessing a value distribution ${\cal D}$, and explain what modifications are necessary, if any, to our algorithms to work with each model:
\begin{itemize}
\item {\bf Exact Access:} We are given access to a sampling oracle as well as an oracle that exactly integrates the pdf of the distribution over a specified region.
\item {\bf Sample-Only Access:} We are given access to a sampling oracle and nothing else.
\end{itemize}
The presentation of the paper focuses on the first model. In this case, we can exactly evaluate probabilities of events without any special care. If we have sample-only access to the distribution, we need to be a bit more careful, and we proceed to sketch what modifications are necessary to the LP of Section~\ref{sec:LP} and the reduction of Section~\ref{sec:Algorithms} to obtain our results. For this discussion to be meaningful, the reader should be familiar with the notation of Sections~\ref{sec:LP} and~\ref{sec:Algorithms} and their appendices. Let $\vec{v} \sim \vec{w}$ if there exist a symmetry $\sigma$ of $\mathcal{D}$ and integers $c_{11},\ldots,c_{mn}$ such that $c_{ij}\delta > v_{\sigma(i,j)},w_{ij} \geq (c_{ij}-1)\delta$, for all $i,j$; let $\{E_{\ell}\}_{\ell}$ denote the partition of $[0,1]^{mn}$ induced by this equivalence relation, i.e. by rounding and symmetries. In the settings we consider, we have counted at most $\max\{n,m\}^{{(1/\delta+1)}^{\min\{n,m\}}}$ different $E_{\ell}$'s. Hence, we can take $\tilde{O}\left(1/\zeta^2 \cdot \max\{n,m\}^{{(1/\delta+1)}^{\min\{n,m\}}}\right)$ samples from the oracle to simultaneously estimate all probabilities $\{\Pr[\vec{v} \in E_{\ell}]\}_{\ell}$ to within additive accuracy $\zeta$, with high probability. W.l.o.g. we can assume that the estimated probabilities $\{\widehat{\Pr}[\vec{v} \in E_{\ell}]\}_{\ell}$ sum to exactly $1$ (as our estimator is just the histogram of the equivalence classes in which the samples from ${\cal D}$ have fallen).
Given these estimates, let ${\cal D}_{\delta,\zeta}$ be the discrete distribution that samples an event $E_{\ell}$ with probability $\widehat{\Pr}[\vec{v} \in E_{\ell}]$, and then outputs an arbitrary vector whose coordinates are all integer multiples of $\delta$ within $E_{\ell}$, after having permuted that vector according to a random $\sigma \in {\cal S}$ where ${\cal S} \subseteq S_m \times S_n$ is the set of symmetries satisfied by ${\cal D}$. Given ${\cal D}_{\delta,\zeta}$ we modify our algorithm as follows. First, we apply the algorithms of Section~\ref{sec:LP} to the distribution ${\cal D}_{\delta,\zeta}$ (that is known explicitly) to compute the optimal mechanism $M_1$ for this distribution. Then we skip the rounding of Section~\ref{sec:epsilon-truthful}; and most importantly, in our reduction of Section~\ref{sec:true PTAS} we make sure to sample surrogates from ${\cal D}_{\delta,\zeta}$ and \emph{not} sample from $\mathcal{D}$ and then round down to multiples of $\delta$. This is because we need the expected outcomes of $M_1$ when played by surrogates sampled by $\mathcal{D}_{\delta,\zeta}$ to be exactly as they were computed by the LPs of Section~\ref{sec:LP}. On the other hand, we still sample replicas directly from $\mathcal{D}$. The error coming from using ${\cal D}_{\delta,\zeta}$ instead of the discretized (to multiples of $\delta$) version of ${\cal D}$ in the reduction of Section~\ref{sec:true PTAS} can be folded into the revenue approximation error. With the appropriate choice of $\zeta$ we can still obtain Theorem~\ref{thm:additive} and Corollary~\ref{cor:MHR}. We skip further details.
\section{A Na\"ive Linear Programming Formulation} \label{app:LP}
Let $\cal D$ be the joint distribution of all bidders' values for all items; this distribution is supported on some subset of $\mathbb{R}^{m \times n}$, where $m$ is the number of bidders and $n$ is the number of items. It has long been known that finding the optimal randomized BIC (or IC) mechanism amounts to solving a linear program of size polynomial in $n$, $m$, and $|{\rm supp}(\mathcal{D})|$.\footnote{Such a formulation is folklore and appears, among other places, in~\cite{BCKW,BGGM,DFK}.} The simple linear program computing the optimal BIC mechanism for a finite-support distribution $\mathcal{D}$, where bidder $i$ has demand $C_i$ and budget $B_i$, is shown in Figure~\ref{fig:naive LP}. We set $B_i = +\infty$ for bidders with no budget constraints.
\begin{figure}
\caption{Na\"ive LP for bidders with demand and budget constraints.}
\label{fig:naive LP}
\end{figure}
A simple application of the Birkhoff-Von Neumann theorem tells us that, as long as the marginals $\phi_{ij}(\vec{v})$ satisfy the demand and supply constraints in expectation, we can find in polynomial time a distribution over allocations that satisfies the demand and supply constraints deterministically and induces these marginals. In addition, a nice trick allows us to switch between ex-post IR and ex-interim IR without any loss in the value of the LP. These methods are described in Appendices~\ref{app:feasible} and~\ref{app:ex-post IR}, respectively.
\section{Feasible Randomized Allocations}\label{app:feasible}
Here, we show how to efficiently turn the $\phi$s of a mechanism into an actual randomized outcome. We start with unit-demand bidders (i.e.\ $C_i=1$ for all $i$) and explain what modifications are necessary for non-unit-demand bidders. We note that this procedure was also used in \cite{DFK}, but we include it again here for completeness. Given a collection $\{\phi_{ij}\}_{i,j}$, we explicitly find a distribution over feasible deterministic outcomes (a deterministic outcome assigns each item to some bidder, or possibly the trash) that assigns bidder $i$ item $j$ with probability $\phi_{ij}$, using the Birkhoff-Von Neumann decomposition of a doubly stochastic matrix. To do this, we put the $\phi_{ij}$'s into a matrix, $\Phi$. We observe that $\Phi$ is almost doubly stochastic, except that its row and column sums can be less than $1$ and it need not be square. We can change $\Phi$ into a doubly stochastic $\Phi'$ in the following way: First, add dummy items or dummy bidders to make $\Phi$ square. Next, step through each entry of $\Phi$ one by one and increase $\Phi_{ij}$ as much as possible without making row $i$ or column $j$ sum to more than $1$. Now we have a $\Phi'$ that is doubly stochastic.
Next, we run a constructive algorithm for the Birkhoff-Von Neumann theorem \cite{JDM} to decompose $\Phi'$ into a weighted sum of at most $(\max\{m,n\})^2$ permutation matrices in ${\rm poly}(\max\{m,n\})$ time. Now our sampling scheme is as follows. Pick a permutation matrix with probability equal to its weight in the decomposition of $\Phi'$, and call this matrix $P$. If $P_{ij} = 1$, then give bidder $i$ item $j$ with probability $\Phi_{ij}/\Phi'_{ij}$.
For any $i,j$, let's explicitly compute the probability that bidder $i$ gets item $j$ in this sampling procedure. The probability that $P_{ij} = 1$ is exactly $\Phi'_{ij}$. And the probability that bidder $i$ gets item $j$ is exactly $\Phi_{ij}/\Phi'_{ij}$ times the probability that $P_{ij} = 1$, which is exactly $\Phi_{ij}$.
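The padding/filling step and the decomposition can be sketched concretely as follows. This is our own illustrative implementation, not the routine of \cite{JDM}: we use exact rational arithmetic to avoid numerical tolerances, and we find each permutation by brute force (exponential in the matrix size), whereas a polynomial-time version would find it via bipartite matching.

```python
from fractions import Fraction
from itertools import permutations

def make_doubly_stochastic(phi):
    """Pad phi (row/column sums <= 1) to a square matrix, then step through
    the entries and greedily raise each one as much as possible without any
    row or column sum exceeding 1, as described above."""
    m, n = len(phi), len(phi[0])
    size = max(m, n)
    M = [[phi[i][j] if i < m and j < n else Fraction(0)
          for j in range(size)] for i in range(size)]
    for i in range(size):
        for j in range(size):
            slack = min(1 - sum(M[i]), 1 - sum(M[r][j] for r in range(size)))
            if slack > 0:
                M[i][j] += slack
    return M

def birkhoff_decompose(M):
    """Birkhoff-Von Neumann: write a doubly stochastic matrix as a convex
    combination of permutation matrices.  Each step zeroes at least one
    entry, so the loop runs at most size^2 times."""
    M = [row[:] for row in M]
    size = len(M)
    pieces = []  # list of (weight, permutation) pairs
    while sum(M[0]) > 0:
        # Brute-force search for a permutation supported on positive entries.
        perm = next(p for p in permutations(range(size))
                    if all(M[i][p[i]] > 0 for i in range(size)))
        w = min(M[i][perm[i]] for i in range(size))
        for i in range(size):
            M[i][perm[i]] -= w
        pieces.append((w, perm))
    return pieces

# Hypothetical marginals for 2 bidders and 2 items.
phi = [[Fraction(1, 2), Fraction(1, 5)],
       [Fraction(1, 10), Fraction(3, 10)]]
phi_ds = make_doubly_stochastic(phi)
pieces = birkhoff_decompose(phi_ds)
```

Sampling a permutation with probability equal to its weight and then awarding item $j$ to bidder $i$ with probability $\Phi_{ij}/\Phi'_{ij}$ reproduces the marginals $\Phi_{ij}$ exactly, as the computation in the paragraph above shows.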
\paragraph{Handling Non-Unit Demand Bidders.} If bidder $i$ has demand $C_i$ (instead of $1$), we can replace her in the matrix $\Phi$ with $\min\{n,C_i\}$ copies (where $n$ is the number of items), each receiving at most one item in expectation. Then we can run the same decomposition and give each bidder all the items awarded to her copies. This solution still runs in polynomial time, and always awards each bidder at most $C_i$ items.
\section{Modifications Required for Ex-Post Individually Rational Mechanisms} \label{app:ex-post IR}
Here, we describe how to turn an ex-interim IR mechanism into an ex-post IR mechanism. If $M$ is ex-interim IR, we have $\sum_j v_{ij}\pi_{ij}(\vec{v}_i) \geq q_i(\vec{v}_i)$ for all $i,\vec{v}_i$. Our modification is this: Let $c_i(\vec{v}_i) := q_i(\vec{v}_i)/(\sum_j v_{ij}\pi_{ij}(\vec{v}_i))$, for every $i, \vec{v}_i$. Then whenever bidder $i$ receives bundle $J$ when his bid was $\vec{v}_i$, charge him $\sum_{j \in J} c_i(\vec{v}_i) \cdot v_{ij}$. This is clearly ex-post IR because $c_i(\vec{v}_i) \leq 1$. Also, let's compute the expected price bidder $i$ pays when bidding $\vec{v}_i$:
$$\sum_{j} c_i(\vec{v}_i)v_{ij}\pi_{ij}(\vec{v}_i) = c_i(\vec{v}_i)\sum_j v_{ij} \pi_{ij}(\vec{v}_i) = q_i(\vec{v}_i).$$
So we can do this simple transformation to turn an ex-interim IR mechanism into an ex-post IR mechanism without any loss in revenue. One should observe that this transformation may cause bidders to sometimes pay more than their budget (even though the budget constraint is still respected in expectation after our transformation). Unfortunately, this problem is unavoidable in the following sense: the optimal ex-post IR mechanism that respects budget constraints may make strictly less revenue than the optimal ex-interim IR mechanism that respects budget constraints. Here is a simple example that illustrates this on a single item and two bidders. Each bidder always values the item at $10$, but has a budget of $5$. Then the optimal ex-interim IR mechanism that respects budgets is to give the item to each player with probability $1/2$ and charge them $5$ no matter who gets the item. The optimal ex-post IR mechanism that respects budgets is to give the item to each player with probability $1/2$ and charge the winner $5$.
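The transformation amounts to computing one scaling factor per (bidder, reported type) pair. A minimal sketch with illustrative names and toy numbers:

```python
def expost_ir_prices(v_i, pi_i, q_i):
    """Price-scaling step of the ex-interim-IR to ex-post-IR transformation:
    c_i = q_i / sum_j v_ij * pi_ij, and on bundle J bidder i pays
    sum_{j in J} c_i * v_ij."""
    c = q_i / sum(v * p for v, p in zip(v_i, pi_i))
    assert c <= 1 + 1e-9  # guaranteed by ex-interim IR
    return c, (lambda bundle: sum(c * v_i[j] for j in bundle))

# Toy check: values v_i = (3, 2), marginals pi_i = (1/2, 1/2), and
# ex-interim price q_i = 2.
c, price = expost_ir_prices((3.0, 2.0), (0.5, 0.5), 2.0)

# Under any bundle distribution realizing the marginals (say {0} or {1},
# each with probability 1/2), the expected payment is again exactly q_i.
expected_payment = 0.5 * price([0]) + 0.5 * price([1])
```

Note that each realized payment is at most the bidder's value for the bundle received, which is precisely the ex-post IR guarantee; as remarked above, a realized payment may still exceed a budget even though the budget holds in expectation.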
\section{Proof of Theorem~\ref{thm:symmetry}}\label{app:symmetries}
We provide a proof of our symmetry theorem (Theorem~\ref{thm:symmetry}). The outline of our approach is this:
\begin{enumerate}
\item We show that for an arbitrary $\sigma$, if $M$ is BIC and $\mathcal{D}$ has symmetry $\sigma$, $\sigma(M)$ is BIC and $R^M(\mathcal{D}) = R^{\sigma(M)}(\mathcal{D})$.
\item We extend the result to distributions over permutations. That is, let $\mathcal{G}$ be any distribution over permutations in $S_m \times S_n$ that only samples $\sigma$ such that $\mathcal{D}$ has symmetry $\sigma$, and define the mechanism $\mathcal{G}(M)$ to first sample a $\sigma$ from $\mathcal{G}$ and then use $\sigma(M)$. If $M$ is BIC, then $\mathcal{G}(M)$ is BIC and $R^{\mathcal{G}(M)}(\mathcal{D}) = R^M(\mathcal{D})$.
\item We show that if $\mathcal{G}$ uniformly samples a permutation from a {\em subgroup} $S \subseteq S_m \times S_n$, then $\mathcal{G}(M)$ respects every permutation in $S$, for all subgroups $S$ and mechanisms $M$.
\item We put everything together and observe that if $\mathcal{G}$ uniformly samples from the subgroup of symmetries that $\mathcal{D}$ has, then for any mechanism $M$, we can create $\mathcal{G}(M)$ that has the same expected revenue as $M$ and respects every symmetry that $\mathcal{D}$ has.
\end{enumerate}
Next we prove the above steps one by one. In fact, we prove more general statements that also cover IC, $\epsilon$-IC, and $\epsilon$-BIC mechanisms.
\begin{lemma}\label{lem:truthful} If $M$ is an arbitrary IC (BIC,$\epsilon$-IC, $\epsilon$-BIC) mechanism, then for any $\sigma \in S_m \times S_n$:
\begin{enumerate}
\item $\sigma(M)$ is an IC (BIC,$\epsilon$-IC,$\epsilon$-BIC) mechanism; and
\item $R^{\sigma(M)}(\mathcal{D}) = \sum_{\vec{v} \in \text{supp}(\mathcal{D})} R^M(\sigma^{-1}(\vec{v}))Pr[\vec{v} \leftarrow \mathcal{D}]$.
\end{enumerate}
Furthermore, if $\mathcal{D}$ has symmetry $\sigma$, then $R^M(\mathcal{D}) = R^{\sigma(M)}(\mathcal{D})$.
\end{lemma}
\begin{prevproof}{Lemma}{lem:truthful}
It is clear that part $2$ is true given part $1$. If all bidders play truthfully, then on bidder profile $\vec{v}$, $\sigma(M)$ makes revenue equal to exactly $R^M(\sigma^{-1}(\vec{v}))$. Therefore the sum exactly computes the expected revenue. The last part of the lemma is also clear given part $2$. If $\mathcal{D}$ has symmetry $\sigma$, then the sum exactly computes $R^M(\mathcal{D})$.
Now we prove part $1$. We do this by explicitly examining the value of bidder $i$ whose true type is $\vec{v}_i$ for reporting any other $\vec{w}_i$ when the rest of the bids are fixed. Let $\vec{v}$ denote the profile of bids when everything besides bidder $i$ is fixed and he reports $\vec{v}_i$, and let $\vec{w}$ denote the profile of bids when everything besides bidder $i$ is the same as $\vec{v}$, but bidder $i$ reports $\vec{w}_i$ instead. By the definition of $\sigma(M)$ we have the following equation. (Recall that $U(\vec{v}_i,M_i(\vec{w}))$ denotes the utility of a bidder with type $\vec{v}_i$ for the expected outcome $M_i(\vec{w})$. We also slightly abuse notation with $\sigma$; when we write $\sigma(i)$ we mean the restriction of the permutation $\sigma$ to $[m]$, etc.)
$$U(\vec{v}_i,[\sigma(M)]_i(\vec{x})) = U(\sigma^{-1}(\vec{v}_i),M_{\sigma^{-1}(i)}(\sigma^{-1}(\vec{x}))).$$
{This holds because $\sigma(M)$ on input $\vec{x}$ offers bidder $i$ the permuted by $\sigma$ lottery offered to bidder $\sigma^{-1}(i)$ by $M$ on bid vector $\sigma^{-1}(\vec{x})$ and charges him the price charged to bidder $\sigma^{-1}(i)$ by $M$ on input $\sigma^{-1}(\vec{x})$. }
Now, because $M$ is an IC mechanism, we know that:
$$U(\sigma^{-1}(\vec{v}_i),M_{\sigma^{-1}(i)}(\sigma^{-1}(\vec{v}))) \geq U(\sigma^{-1}(\vec{v}_i),M_{\sigma^{-1}(i)}(\sigma^{-1}(\vec{w}))).$$
And combining this inequality with the above equality, we get exactly that:
$$U(\vec{v}_i,[\sigma(M)]_i(\vec{v})) \geq U(\vec{v}_i,[\sigma(M)]_i(\vec{w})).$$
Since the above was shown for all $i$ and for all $\vec{v} = (\vec{v}_i~;~\vec{v}_{-i})$ and $\vec{w} = (\vec{w}_i~;~\vec{v}_{-i})$, we get that $\sigma(M)$ is an IC mechanism. If instead of being IC $M$ were BIC, $\epsilon$-IC, or $\epsilon$-BIC, the incentive guarantee we have for $M$ carries over exactly to $\sigma(M)$.
\end{prevproof}
\begin{corollary}\label{cor:important} Let $\mathcal{G}$ denote any distribution over elements of $S_m \times S_n$. For an IC (BIC,$\epsilon$-IC,$\epsilon$-BIC) mechanism $M$, let $\mathcal{G}(M)$ denote the mechanism that samples an element $\sigma$ from $\mathcal{G}$, and then uses the mechanism $\sigma(M)$. Then for all $\cal G$:
\begin{enumerate}
\item $\mathcal{G}(M)$ is an IC (BIC,$\epsilon$-IC,$\epsilon$-BIC) mechanism; and
\item $R^{\mathcal{G}(M)}(\mathcal{D}) = \sum_{\sigma} R^{\sigma(M)}(\mathcal{D}) Pr[\sigma \leftarrow \mathcal{G}]$.
\end{enumerate}
Furthermore, if $\mathcal{G}$ samples only $\sigma$ such that $\mathcal{D}$ has symmetry $\sigma$, then $R^M(\mathcal{D}) = R^{\mathcal{G}(M)}(\mathcal{D})$.
\end{corollary}
\begin{prevproof}{Corollary}{cor:important}
It is clear that, because each $\sigma(M)$ is an IC mechanism, randomly sampling an IC mechanism results in an IC mechanism. The second claim is also clear by linearity of expectation. The final claim is clear because $R^{\sigma(M)}(\mathcal{D}) = R^M(\mathcal{D})$ if ${\cal D}$ has symmetry $\sigma$, so taking a weighted average of these quantities still yields $R^M(\mathcal{D})$. We can replace IC by BIC, $\epsilon$-IC, or $\epsilon$-BIC in our argument.
\end{prevproof}
\begin{lemma} \label{lem:final lemma symm}Let $\mathcal{G}$ sample a permutation uniformly at random from a subgroup $S$ of $S_m \times S_n$. Then $\mathcal{G}(M)$ respects every permutation in $S$.
\end{lemma}
\begin{prevproof}{Lemma}{lem:final lemma symm} For any $\vec{v}$, the outcome $\mathcal{G}(M)(\vec{v})$ is:
$$\mathcal{G}(M)(\vec{v}) = \sum_{\sigma \in S} \frac{\sigma(M(\sigma^{-1}(\vec{v})))}{|S|}.$$
Because $S$ is a subgroup, for any $\tau \in S$, we can write:
$$\mathcal{G}(M)(\tau(\vec{v})) = \sum_{\sigma \in S} \frac{\tau\sigma(M( (\tau\sigma)^{-1}(\tau(\vec{v}))))}{|S|} = \sum_{\sigma \in S} \frac{\tau\sigma(M( \sigma^{-1}(\vec{v})))}{|S|} = \tau(\mathcal{G}(M)(\vec{v})).$$
This is exactly the statement that $\mathcal{G}(M)$ respects symmetry $\tau$. So all $\tau \in S$ are respected by $\mathcal{G}(M)$.
\end{prevproof}
The final step in the proof of Theorem~\ref{thm:symmetry} is just observing that if $S$ denotes the set of symmetries of $\mathcal{D}$, then $S$ is in fact a subgroup, because the definition of symmetry immediately yields that if $\mathcal{D}$ has symmetries $\sigma$ and $\tau$, it also has symmetries $\sigma^{-1}$ and $\sigma \tau$.
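The group-averaging construction behind Lemma~\ref{lem:final lemma symm} can be checked concretely. The following toy sketch (our own illustrative names: a single bidder, outcomes modeled as allocation-probability vectors, and the full item-permutation group $S_3$ as the subgroup) averages a mechanism over the subgroup and verifies the equivariance $\mathcal{G}(M)(\tau(\vec{v})) = \tau(\mathcal{G}(M)(\vec{v}))$ for every $\tau$ in the subgroup.

```python
from itertools import permutations
from fractions import Fraction

def act(sigma, x):
    """sigma acts on an n-vector by permuting coordinates:
    (sigma . x)_{sigma(j)} = x_j."""
    out = [None] * len(x)
    for j, val in enumerate(x):
        out[sigma[j]] = val
    return tuple(out)

def inverse(sigma):
    inv = [0] * len(sigma)
    for j, s in enumerate(sigma):
        inv[s] = j
    return tuple(inv)

def symmetrize(M, group):
    """G(M): average sigma(M(sigma^{-1}(v))) over all sigma in the subgroup,
    using exact rationals so equivariance can be checked with equality."""
    def GM(v):
        total = [Fraction(0)] * len(v)
        for sigma in group:
            out = act(sigma, M(act(inverse(sigma), v)))
            total = [t + Fraction(o) for t, o in zip(total, out)]
        return tuple(t / len(group) for t in total)
    return GM

group = list(permutations(range(3)))  # the subgroup: all of S_3
# Toy "mechanism": allocate (probability 1) the highest-valued item.
M = lambda v: tuple(1 if j == v.index(max(v)) else 0 for j in range(len(v)))
GM = symmetrize(M, group)
v = (1, 2, 3)
```

The algebraic identity in the proof of Lemma~\ref{lem:final lemma symm} holds for an arbitrary function $M$; the code only instantiates it on one example.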
\section{Proofs Omitted from Section \ref{sec:LP}}\label{app:proof LP}
\begin{prevproof}{Lemma}{lem:simple reduction works for k-items}
Suppose that $S$ contains all bidder permutations and that every marginal of ${\cal D}$ has size at most $c$. We claim that the set of representative value vectors has size $|E| \leq (m+1)^{c^n}$. Indeed, each equivalence class is uniquely determined by the number of bidders of each type. There are $c^n$ types of bidders and between $0$ and $m$ bidders per type, so there are at most $(m+1)^{c^n}$ equivalence classes of value vectors in total. Hence, the LP of Figure~\ref{fig:succinct k-items LP} has $O(n(m+1)^{c^n})$ variables and constraints. If both $c$ and $n$ are constants, the size of the LP is polynomial.
\end{prevproof}
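The counting argument above is easy to check computationally on small instances. The sketch below (illustrative names, brute-force enumeration) groups value profiles by the multiset of bidder types, which is exactly the equivalence induced by all bidder permutations; the exact count is the number of multisets of $m$ types out of $c^n$, which is at most $(m+1)^{c^n}$.

```python
from itertools import product
from math import comb

def count_representatives(m, n, c):
    """Count equivalence classes of value profiles (m bidders, n items,
    each value drawn from a support of size c) under all bidder
    permutations: a class is determined by the multiset of bidder types."""
    types = list(product(range(c), repeat=n))        # the c^n bidder types
    classes = {tuple(sorted(profile))                # canonical: sort bidders
               for profile in product(types, repeat=m)}
    return len(classes)

m, n, c = 3, 2, 2
num_classes = count_representatives(m, n, c)
```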
\begin{prevproof}{Theorem}{thm:monotone}
Assume for a contradiction that $M$ is item-symmetric and BIC but not strongly monotone. Let then $\vec{v}_i^*$ be a type for bidder $i$ that breaks strong monotonicity, i.e.\ $\pi_{ij}(\vec{v}_i^*) > \pi_{ij'}(\vec{v}_i^*)$ and $v_{ij}^* < v_{ij'}^*$ for a pair of items $j$, $j'$. We show that bidder $i$ of type $\vec{v}_i^*$ can strictly increase her expected utility by swapping her values for items $j$ and $j'$. Indeed, because $\mathcal{D}$ is item-symmetric, we know that the distribution of complete bidder profiles satisfies the following for any item permutation $\sigma$:
$$Pr[ \vec{w}|\vec{w}_i = \vec{v}^*_i] = Pr[\sigma(\vec{w})|\vec{w}_i = \sigma(\vec{v}^*_i)].$$
Now because $M$ is item-symmetric, we also know that $M(\sigma(\vec{w})) = \sigma(M(\vec{w}))$, for all $\vec{w}$. Putting these together, we see that we must have $\pi_{ij}(\vec{v}_i^*) = \pi_{i\sigma(j)}(\sigma(\vec{v}_i^*))$, for all item permutations $\sigma$. Letting $\sigma$ be the permutation that swaps items $j$ and $j'$ shows that when bidder $i$ of type $\vec{v}_i^*$ swaps her values for $j$ and $j'$, she simply switches $\pi_{ij}(\vec{v}_i^*)$ and $\pi_{ij'}(\vec{v}_i^*)$, strictly increasing her utility.
For the second part of the theorem, suppose that $M$ is any item-symmetric $\epsilon$-BIC mechanism that is not strongly monotone. Again let $\vec{v}^*_i$ be a type for bidder $i$ that breaks strong monotonicity, with $\pi_{ij}(\vec{v}^*_i) > \pi_{ij'}(\vec{v}^*_i)$ and $v_{ij}^* < v_{ij'}^*$ for a pair of items $j$ and $j'$. Let then $M'$ be the mechanism that does the following. If bidder $i$ reports $\vec{w}_i = \tau(\vec{v}^*_i)$, for some $\tau \in S_n$, then pick a random such $\tau$, swap the bidder's values for items $\tau(j)$ and $\tau(j')$, and run $M$. Otherwise just run $M$. It is clear that $M'$ has the exact same expected revenue as $M$, when played truthfully, because $\mathcal{D}$ is item-symmetric. Observe also that we have added no new alternatives for dishonest bidders to consider reporting, maintained the item symmetry of the mechanism, and made some bidders strictly happier with the outcomes that the mechanism gives them. Therefore, $M'$ is still item-symmetric and $\epsilon$-BIC and makes the same revenue as $M$, when played truthfully. Additionally, we have corrected one violation of strong monotonicity. Iterating this process for a finite number of steps yields an $\epsilon$-BIC, item-symmetric, strongly monotone mechanism with the same expected revenue as $M$.
\end{prevproof}
\begin{prevproof}{Lemma}{lem: succinct LP for k-bidders}
Recall the definition of the set $E$ of representative bidder profiles from Section~\ref{subsec:bidder symmetries}/Appendix~\ref{app:succinct LPs}. In addition to this set we define, for each bidder $i$, a set of representative types, $E_i$, for this bidder. $E_i$ contains only value vectors $\vec{v}_i$ satisfying $v_{i1} \geq v_{i2} \geq \ldots \geq v_{in}$. By Observation \ref{obs:monotone}, if {a mechanism is item-symmetric} and no type in $E_i$ wishes to misreport another type in $E_i$, for all $i$, then the mechanism is BIC. Our resulting LP is shown in Figure~\ref{fig:succinct k-bidders LP}. In total, we have {$O(n|E|\sum_i|E_i|)$} variables and {$O(n |E| \sum_i |E_i|^2)$} constraints. Given that the value distribution is symmetric under every item permutation, we have that {$|E| \leq (n+1)^{c^m}$}. Indeed, there are $c^m$ possible ways the $m$ bidders like an item, and the question is how many items are liked in each of the possible ways. We also know that {$|E_i| \leq (n+1)^c$}, again because choosing the number of items valued by a bidder at each of the $c$ possible values uniquely determines an element of $E_i$. It follows that when $c$ and $m$ are constants the size of the LP is polynomial.
\end{prevproof}
\section{Succinct LP Formulations} \label{app:succinct LPs}
Figures~\ref{fig:succinct k-items LP} and~\ref{fig:succinct k-bidders LP} show the succinct LPs that can be used to compute the optimal mechanisms for the BIC $k$-items and the BIC $k$-bidders problems respectively. Details for satisfying the supply and demand constraints with probability $1$, and ex-post IR modifications are exactly the same as in Appendices~\ref{app:feasible} and~\ref{app:ex-post IR}. In both figures, $E$ denotes a set of representatives under the equivalence relation defined by bidder or item symmetries. In Figure \ref{fig:succinct k-bidders LP}, $E_i$ denotes the set of types for bidder $i$ such that $v_{i1} \geq \ldots \geq v_{in}$.
\begin{figure}
\caption{Succinct BIC $k$-items LP. In parentheses at the end of each type of variable/constraint is an upper bound on the number of such variables/constraints.}
\label{fig:succinct k-items LP}
\end{figure}
\begin{figure}
\caption{Succinct BIC $k$-bidders LP. In parentheses at the end of each type of variable/constraint is an upper bound on the number of such variables/constraints. }
\label{fig:succinct k-bidders LP}
\end{figure}
\section{Omitted Proofs from Section~\ref{sec:epsilon-truthful}}
\label{app:technical} \label{app:bi-criterion}
In this section we relate the optimal revenue achievable under ``similar'' value distributions, and provide the proof of Lemma~\ref{lem:deltaIC}. {Throughout the section, we let $T$ denote the maximum number of items that can be awarded by a feasible mechanism. For $k$-items this is $k$, for $k$-bidders this is $\min\{n,\sum_i C_i\}$.}
\begin{lemma}\label{lem:add by delta} Suppose that $\mathcal{D}$ and $\mathcal{D'}$ can be coupled so that, with probability $1$, whenever $\vec{v} \leftarrow \mathcal{D}$ and $\vec{v}'\leftarrow \mathcal{D'}$ are jointly sampled under the coupling, it holds that $v'_{ij} = v_{ij} + \delta$, for all $i,j$. Then for all $\epsilon$, $R_\epsilon^{OPT}(\mathcal{D}) \geq R_\epsilon^{OPT}(\mathcal{D'}) - {\delta T}$.
\end{lemma}
\begin{prevproof}{Lemma}{lem:add by delta} Throughout this proof, $\mathbbm{1}$ represents the all-ones vector. Let $M'$ be any $\epsilon$-BIC mechanism for $\mathcal{D'}$. We define a new mechanism $M$ for ${\cal D}$ such that $M(\vec{v}) = M'(\vec{v}+\delta \cdot \mathbbm{1})$, except that we lower the price paid by bidder $i$ by $\delta \cdot \sum_j \phi'_{ij}(\vec{v}+\delta \cdot \mathbbm{1})$, i.e.\ $\delta$ times the expected number of items given to bidder $i$ by $M'(\vec{v}+\delta \cdot \mathbbm{1})$. Then, for all $i$, $\vec{v}_i, \vec{w}_i, \vec{v}_{-i}$ and corresponding $\vec{v}'_i := \vec{v}_i +\delta \cdot \mathbbm{1}, \vec{w}'_i:=\vec{w}_i +\delta \cdot \mathbbm{1}, \vec{v}'_{-i}:=\vec{v}_{-i}+\delta \cdot \mathbbm{1}$, we have:
$$U(\vec{v_i},M_i(\vec{w}_i~;~\vec{v}_{-i})) = U(\vec{v_i}',M'_i(\vec{w}'_i~;~\vec{v}'_{-i})).$$
Hence because $M'$ is $\epsilon$-BIC under ${\cal D}'$, $M$ is $\epsilon$-BIC under ${\cal D}$. It is also clear that the difference in expected revenue of the two mechanisms under the two distributions is exactly $\delta$ times the expected number of items given out by $M'$, which is at most {$T$}.
\end{prevproof}
\begin{lemma}\label{lem:coupling} Suppose that $\mathcal{D} = \times_i {\cal D}_i$ and $\mathcal{D'}= \times_i {\cal D}'_i$ are product distributions over bidders and suppose that, for all $i$, there is a coupling of ${\cal D}_i$ and ${\cal D}'_i$ so that, with probability $1$, if $\vec{v}_i \leftarrow \mathcal{D}_i$ and $\vec{v}'_i\leftarrow \mathcal{D}_i'$ are jointly sampled under this coupling, it holds that $v_{ij} \leq v'_{ij} \leq v_{ij} + \delta$, for all $j$. Then $R_{{\delta} + \epsilon}^{OPT}(\mathcal{D'}) \geq R_{\epsilon}^{OPT}(\mathcal{D})$, for all $\epsilon$.
\end{lemma}
\begin{prevproof}{Lemma}{lem:coupling}
For all $i$, the coupling whose existence is certified in the statement of the lemma implies the existence of a (possibly randomized) mapping $f_i^R$ such that the distribution that samples $\vec{v}_i'$ from ${\cal D}_i'$ and outputs the pair $(\vec{v}_i,\vec{v}_i')$, where $\vec{v}_i$ is a random vector sampled from $f_i^R(\vec{v}_i')$, is a valid coupling of ${\cal D}_i$ and ${\cal D}'_i$ satisfying $v_{ij} \le v_{ij}' \le v_{ij}+\delta$, for all $j$, with probability $1$. Let then $f^R$ be the random mapping which on input $\vec{v}'$ samples, for all $i$, a random $\vec{v}_i$ from $f^R_i(\vec{v}_i')$ and outputs $(\vec{v}_1,\ldots,\vec{v}_m)$.
Now consider any $\epsilon$-BIC mechanism $M$ for $\mathcal{D}$, and define the mechanism $M'$ for ${\cal D}'$, which on input $\vec{v}'$ samples a random $\vec{v}$ from $f^R(\vec{v}')$ and outputs $M(\vec{v})$. It is obvious that $R^{M'}(\mathcal{D'}) = R^M(\mathcal{D})$. To conclude the proof of the lemma it suffices to show that $M'$ is $(\epsilon+\delta)$-BIC for bidders sampled from $\mathcal{D'}$. Indeed, we have from the fact that $M$ is $\epsilon$-BIC that for all $i$, $\vec{v}_i$ and $\vec{w}_i$:
$$\mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}}[U(\vec{v_i},M_i(\vec{v}))] \geq \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}} [U(\vec{v_i},M_i(\vec{w}_i~;~\vec{v}_{-i}))] - \epsilon {\cdot \sum_j \pi_{ij}(\vec{w}_i)}.$$
Now fix $i$, $\vec{v}_i'$ and $\vec{w}_i'$. We have:
\begin{align}
\mathbb{E}_{\vec{v}'_{-i} \sim {\cal D}'_{-i}} [U(\vec{v_i}',M_i'(\vec{w}_i'~;~\vec{v}'_{-i}))] &= \mathbb{E}_{\vec{v}'_{-i} \sim {\cal D}'_{-i}} [U(\vec{v_i}', \mathbb{E}_{\vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_{-i} \sim f^R_{-i}(\vec{v}'_{-i})} M_i(\vec{w}_i~;~\vec{v}_{-i}))]\notag\\
&= \mathbb{E}_{\vec{v}'_{-i} \sim {\cal D}'_{-i}, \vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_{-i} \sim f^R_{-i}(\vec{v}'_{-i})} [U(\vec{v_i}', M_i(\vec{w}_i~;~\vec{v}_{-i}))] \notag\\
&= \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}, \vec{w}_i \sim f^R_i(\vec{w}_i')} [U(\vec{v_i}', M_i(\vec{w}_i~;~\vec{v}_{-i}))] \label{eq:tiresome 1}
\end{align}
Using Eq.~\eqref{eq:tiresome 1} we have:
\begin{align*}\mathbb{E}_{\vec{v}'_{-i} \sim {\cal D}'_{-i}} [U(\vec{v_i}',M_i'(\vec{v}_i'~;~\vec{v}'_{-i}))] &=\mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}, \vec{v}_i \sim f^R_i(\vec{v}_i')} [U(\vec{v_i}', M_i(\vec{v}_i~;~\vec{v}_{-i}))]\\ &\ge \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}, \vec{v}_i \sim f^R_i(\vec{v}_i')} [U(\vec{v_i}, M_i(\vec{v}_i~;~\vec{v}_{-i}))]\\
&= \mathbb{E}_{\vec{v}_i \sim f^R_i(\vec{v}_i')} \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}} [U(\vec{v_i}, M_i(\vec{v}_i~;~\vec{v}_{-i}))]\\
&\ge \mathbb{E}_{\vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_i \sim f^R_i(\vec{v}_i')} \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}} [U(\vec{v_i}, M_i(\vec{w}_i~;~\vec{v}_{-i})) - \epsilon {\cdot \sum_j \pi_{ij}(\vec{w}_i)}]\\
&{= \mathbb{E}_{\vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_i \sim f^R_i(\vec{v}_i')} \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}} [U(\vec{v_i}, M_i(\vec{w}_i~;~\vec{v}_{-i}))] - \epsilon \cdot \sum_j \pi'_{ij}(\vec{w}'_i)},
\end{align*}
where for the last inequality we used that $M$ is $\epsilon$-BIC. Similarly,
\begin{align*}
\mathbb{E}_{\vec{v}'_{-i} \sim {\cal D}'_{-i}} [U(\vec{v_i}',M_i'(\vec{w}_i'~;~\vec{v}'_{-i}))] &=\mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}, \vec{w}_i \sim f^R_i(\vec{w}_i')} [U(\vec{v_i}', M_i(\vec{w}_i~;~\vec{v}_{-i}))]\\
&\le \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}, \vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_i \sim f^R_i(\vec{v}_i')} [U(\vec{v_i}+\delta \mathbbm{1}, M_i(\vec{w}_i~;~\vec{v}_{-i}))] \\
&= \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}, \vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_i \sim f^R_i(\vec{v}_i')} [U(\vec{v_i}, M_i(\vec{w}_i~;~\vec{v}_{-i})){ + \delta \cdot \sum_j \pi_{ij}(\vec{w}_i)]}\\
&= \mathbb{E}_{\vec{w}_i \sim f^R_i(\vec{w}_i'), \vec{v}_i \sim f^R_i(\vec{v}_i')} \mathbb{E}_{\vec{v}_{-i} \sim {\cal D}_{-i}} [U(\vec{v_i}, M_i(\vec{w}_i~;~\vec{v}_{-i}))] + {\delta \cdot \sum_j \pi'_{ij}(\vec{w}'_i)}.
\end{align*}
Combining the above it follows that $M'$ is $(\epsilon+\delta)$-BIC.
\end{prevproof}
\begin{prevproof}{Lemma}{lem:deltaIC}
Let $\mathcal{D}$ denote the original distribution. Let $\mathcal{D}''$ denote the distribution that first samples from $\mathcal{D}$, then rounds every value up to the nearest multiple of $\delta$ (if the sampled value from ${\cal D}$ is exactly at an integer multiple of $\delta$ it is rounded up to the next integer multiple of $\delta$). Let $\mathcal{D}'$ denote the distribution that first samples from $\mathcal{D}$, then rounds every value down to the nearest multiple of $\delta$ (same definition as in the statement of Lemma \ref{lem:deltaIC}). Then it is clear that $\mathcal{D}'$ and $\mathcal{D''}$ satisfy the hypotheses of Lemma \ref{lem:add by delta}, so we have:
$$R^{OPT}_{{\delta}}(\mathcal{D}')\geq R^{OPT}_{{\delta}}(\mathcal{D''}){- {\delta T}}.$$
It is also clear that $\mathcal{D}$ and $\mathcal{D}''$ satisfy the hypotheses of Lemma \ref{lem:coupling}, so we have:
$$R^{OPT}_{{\delta}}(\mathcal{D}'') \geq {R^{OPT}(\mathcal{D})}.$$
Putting both together, we get that:
$$R^{OPT}_{{\delta}}(\mathcal{D}') \geq R^{OPT}(\mathcal{D}) - {\delta T}.$$
It follows from the fact that $M'$ is $\delta$-BIC for bidders sampled from $\mathcal{D}'$ that $M$ is $2\delta$-BIC for bidders sampled from $\mathcal{D}$ (via the argument given in the proof of Lemma~\ref{lem:coupling}), and it is obvious that $R^{M'}(\mathcal{D'}) = R^M(\mathcal{D})$. This completes the proof of the lemma.
\end{prevproof}
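The two couplings used in the proof above are easy to sanity-check computationally. The sketch below (our own illustrative code) builds the down-rounded values (the analogue of ${\cal D}'$) and the up-rounded values (the analogue of ${\cal D}''$) for a random matrix and confirms the per-entry relations used in Lemmas~\ref{lem:add by delta} and~\ref{lem:coupling}: the two grids differ by exactly $\delta$ entrywise, and each original value $v$ satisfies $v' \le v < v''\le v + \delta$.

```python
import random

def round_down(v, delta):
    """D': every value rounded down to the nearest multiple of delta."""
    return [[int(x / delta) * delta for x in row] for row in v]

def round_up(v, delta):
    """D'': every value rounded up to the next multiple of delta (values
    already on the grid move up a full step, matching the definition in the
    proof above)."""
    return [[(int(x / delta) + 1) * delta for x in row] for row in v]

random.seed(1)
delta = 0.1
v = [[random.random() for _ in range(3)] for _ in range(2)]
lo, hi = round_down(v, delta), round_up(v, delta)
```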
\section{Proof of Theorem \ref{thm:epsilon-BIC to BIC}}\label{app:true PTAS}
Here we prove Theorem \ref{thm:epsilon-BIC to BIC}. We start with a proof outline, and justify each step separately. Unless otherwise stated, every claim applies to both reductions (for the $k$-bidders and the $k$-items problems). Before starting, we observe that the assumption that $\mathcal{D'}$ is discrete is a \emph{simplifying} assumption and not a \emph{necessary} one. In our proof of Theorem \ref{thm:epsilon-BIC to BIC} below, we point out the key modification that makes it work for continuous ${\cal D}'$ at an additional loss of $O(\frac{\delta}{\eta}T)$ in revenue.
Throughout the proofs, we will use $T_i$ to denote the maximum number of items that can possibly be awarded to bidder $i$ by a feasible mechanism, and $T$ to denote the maximum number of items that can possibly be awarded by a feasible mechanism. For $k$-bidders, $T_i = C_i$ and $T= \min\{n,\sum_i C_i\}$. For $k$-items, $T_i = \min\{k, C\}$ and $T = k$. Here is a brief outline of our proof. Let $M_2$ denote the mechanism output by our reduction; that the output of the reduction is a valid mechanism will be justified in what follows.
\begin{enumerate}
\item If bidder $i$ plays $M_2$ truthfully, then the distribution of surrogates matched to bidder $i$ is $\mathcal{D}'_i$.
\item For $k$-bidders, because $M$, $\mathcal{D}$ and {${\cal D}'$} are item-symmetric, no bidder gains by lying about her ordering.
\item Because each bidder is participating in a VCG auction, and the value of each edge is calculated exactly given that all other bidders tell the truth, $M_2$ is BIC.
\item The revenue we make from bidder $i$ is at least the price paid by their surrogate if bidder $i$ is matched in VCG, and $0$ otherwise.
\item There exists a high cardinality matching with positive edge weights. If VCG used this matching, we would make almost as much revenue as $M$. We show that because of the rebates (Phase One, Step 2), the VCG matching makes almost as much revenue as this matching.
\item Therefore, we make a good approximation to $R^M(\mathcal{D'})$, which is in turn a good approximation to $R^{M_1}(\mathcal{D}')$.
\end{enumerate}
\begin{lemma}[\cite{HKM}]\label{lem:HKM} If all bidders play $M_2$ truthfully, then the distribution of the surrogate chosen for bidder $i$ is exactly $\mathcal{D}'_i$.
\end{lemma}
\begin{proof} Imagine changing the order of sampling in the experiment. First, sample the $r$ surrogates i.i.d.\ from $\mathcal{D}'_i$, and $r$ replicas i.i.d.\ from $\mathcal{D}_i$. For the $k$-bidders algorithm only, pick a random ordering $\sigma$ and permute each surrogate and replica to respect that ordering. Finally, pick a replica uniformly at random among the sampled ones and decide that this was in fact bidder $i$. The distribution of surrogates, replicas, and bidder $i$ is exactly the same as that of the algorithm, assuming bidder $i$ truthfully reports her type. In addition, we can compute the VCG matching once all the replicas and surrogates have been sampled, before deciding which replica is bidder $i$. Because we choose a random replica to be bidder $i$, the surrogate chosen to represent her will also be chosen uniformly at random, regardless of the surrogates' types. So we can see that the process of choosing a surrogate is just sampling $r$ times from $\mathcal{D}'_i$ independently, permuting each sample to respect a random ordering (in the $k$-bidders problem only), and outputting a random surrogate among the sampled (and possibly permuted) ones. In the $k$-items reduction, the output surrogate is clearly distributed according to ${\cal D}'_i$. In the $k$-bidders reduction, because ${\cal D}'_i$ is item-symmetric and we used a random permutation to permute all sampled surrogates, we still output a surrogate distributed according to ${\cal D}'_i$.
\end{proof}
\begin{lemma}\label{lem:ordering} In $M_2$ resulting from the $k$-bidders reduction, no bidder $i$ has an incentive to report any $\vec{w}_i$ such that there is some $j$ and $j'$ for which $w_{ij} > w_{ij'}$ when the true type $\vec{v}_i$ of the bidder satisfies $v_{ij} < v_{ij'}$.
\end{lemma}
\begin{proof} Let $\sigma(\vec{w_i})$ be such that $\sigma(\vec{w_i})_j {\ge} \sigma(\vec{w_i})_{j'}$ if and only if $v_{ij} {\ge} v_{ij'}$. We show that bidder $i$ would be better off reporting $\sigma(\vec{w_i})$ than $\vec{w_i}$. Indeed, we can couple the outcomes of $M_2$ on $\vec{w_i}$ and $\sigma(\vec{w_i})$ in the following way. Whenever a replica is sampled for $i$ to play against, sample the same replica for both experiments. Whenever a surrogate is sampled to represent her, sample the same for both experiments. This set of replicas and surrogates will get permuted to match the ordering of $\vec{w_i}$ and $\sigma(\vec{w_i})$ respectively, so the VCG matching chosen will be exactly the same. So if we let $\vec{s_i}$ denote the surrogate chosen when the bid was $\vec{w_i}$, then $\sigma(\vec{s_i})$ is the surrogate chosen on bid $\sigma(\vec{w_i})$. Because $\sigma$ was chosen so that $\sigma(\vec{s_i})$ is ordered the same as $\vec{v_i}$ and $M$ is strongly monotone, bidder $i$ prefers to be represented by $\sigma(\vec{w_i})$ than by $\vec{w_i}$ when her type is $\vec{v_i}$. So bidder $i$ has no incentive to lie about the relative ordering of $\vec{v_i}$.
\end{proof}
\begin{corollary} $M_2$ is BIC.
\end{corollary}
\begin{proof}
{Fix some bidder $i$ and suppose all other bidders report truthfully. By Lemma \ref{lem:HKM} the surrogates chosen for them are distributed according to $\mathcal{D}'_{-i}$. Hence, when we design the VCG auction for bidder $i$, we correctly compute the expected outcome that will be awarded to each surrogate if that surrogate is chosen for bidder $i$. So in the $k$-items reduction, bidder $i$ faces a VCG mechanism that correctly computes edge-weights; hence the bidder will play the VCG mechanism truthfully. In the $k$-bidders reduction, by Lemma \ref{lem:ordering} bidder $i$ won't misreport her ordering, but could possibly lie about her values (respecting her ordering). Nevertheless, no matter what (ordering respecting) values she reports, she won't affect the distribution of the values of the replicas and surrogates she will compete against in VCG, except for the fact that these are going to be conditioned to respect her ordering. Still, because the edge-weights in the VCG auction are computed correctly, the bidder will report her type truthfully.}
\end{proof}
Now that we know $M_2$ is BIC, we want to compare $R^{M_2}(\mathcal{D})$ to $R^M(\mathcal{D'})$. We observe first that if we are lucky and the VCG matching for every bidder is always perfect, then in fact $R^{M_2}(\mathcal{D}) \geq R^M(\mathcal{D'})$. When bidders are matched in VCG to a surrogate, they pay exactly what their surrogate paid, plus a little extra due to the VCG prices. However, because some edge weights are negative, the VCG matching may not be perfect. So how do we analyze expected revenue in this case?
We look at the expected revenue contributed from a single bidder $i$. Consider again changing the order of sampling in the experiment to sample each replica and surrogate first before deciding which replica is bidder $i$. Then if surrogate $\vec{s_i}$ is matched in VCG, there is a $1/r$ chance that its matched replica will be bidder $i$ and we make expected revenue equal to what $\vec{s_i}$ pays in $M$. If $\vec{s_i}$ is unmatched in VCG, then even if its matched replica is bidder $i$, we make no revenue. This ignores the extra possible revenue from the VCG prices, which we will continue to ignore from now on. So let $p(\vec{s_i})$ denote the expected price that a surrogate with type $\vec{s_i}$ pays in $M$ (over the randomness of the other surrogates), and let $V$ denote the set of surrogates that are matched in VCG. Then the expected revenue of $M_2$ from bidder $i$ is exactly:
$$\sum_{\vec{s_i} \in V} p(\vec{s_i})/r.$$
Recall that the expected revenue of $M$ from bidder $i$ is exactly $\sum_{\forall \vec{s_i}} p(\vec{s_i})/r$. So our goal is to bound the difference between these two sums. But of course, there is also randomness in which surrogates are sampled and what the VCG matching is, so we want to bound the difference in expectation of these two sums. We do this in two steps. First, we show that there exists a matching that is \emph{very} close to perfect in expectation and only uses positive edges. In fact, it is so close to perfect that even if we assume that the unmatched surrogates had the highest possible price we barely lose any revenue in expectation. Unfortunately, this is not necessarily the matching that VCG uses. However, because VCG maximizes social welfare, and we give surrogates a free rebate of {$\eta p_i(\vec{s_i})$}, VCG cannot unmatch too many surrogates that pay a high price. We now quantify these statements and prove them.
Define an equivalence relation on bidders and surrogates where $\vec{v_i} \sim \vec{w_i}$ if when we round both vectors down to the nearest multiple of $\delta$ we get the same vector. Observe that a replica $\vec{v_i}$ and surrogate $\vec{s_i}$ are equivalent only if $v_{ij} \geq s_{ij}$ for all $j$. Therefore, replica $\vec{v_i}$ has positive valuation for the outcome of surrogate $\vec{s_i}$.~\footnote{This is the step where it is helpful to assume that $\mathcal{D}'$ is discrete. If $\mathcal{D}'$ were not discrete, we would not necessarily have $v_{ij} \geq s_{ij}$. However, after giving an additional rebate of $\delta$ for every item received, the conclusion that $\vec{v}_i$ has positive valuation for the surrogate $\vec{s}_i$ holds, which is what matters. The extra rebates result in an extra loss of $O({\delta \over \eta} T)$ of revenue. This loss in revenue comes from: a) actually giving the rebates, which costs at most $\delta T$ in revenue and b) possibly reducing the truthfulness of $M_1$ from $\epsilon$-BIC to at worst $(\epsilon+\delta)$-BIC, which costs an additional $\frac{\delta}{\eta}T$ in revenue (replacing $\epsilon$ by $\epsilon+\delta$ in Lemma~\ref{lem:augmenting}, Corollary~\ref{cor:augmenting}, and the discussion following Lemma~\ref{lem:boundd}).} So a matching that matches replicas only to equivalent surrogates uses only positive edges. {If we let $\beta$ be the number of equivalence classes, then $\beta \le ({1 \over \delta}+1)^k$ (for $k$-items) and $\beta \le (n+1)^{1/\delta+1}$ (for $k$-bidders, taking into account that we are only looking at permuted replicas/surrogates in our transformation). We can use a lemma from \cite{HKM} directly:}
\begin{lemma}[\cite{HKM}] The expected cardinality of a maximal matching that only matches equivalent replicas and surrogates is at least $r - \sqrt{\beta r}$.
\end{lemma}
So if we denote by $X$ the set of surrogates in some maximal matching using only equivalent replicas, then because each $p(\vec{s_i})$ is at most $T_i$ we get:
$$\mathbb{E}\left[\sum_{\vec{s_i} \in X} p(\vec{s_i})/r \right] \geq \mathbb{E} \left [\sum_{\forall \vec{s_i}} p(\vec{s_i})/r \right] - T_i\sqrt{\beta / r}$$
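The counting argument behind this matching bound is easy to see in code. The sketch below is our own illustration (the function names and sample data are hypothetical, not from \cite{HKM}): within an equivalence class any replica can be matched to any surrogate, so a maximal equivalence-respecting matching has size equal to the sum over classes of the minimum of the replica count and the surrogate count.

```python
from collections import Counter

def round_down(vec, delta):
    """Round each coordinate down to the nearest multiple of delta
    (a tiny epsilon guards against floating-point artifacts)."""
    return tuple(int(v / delta + 1e-9) for v in vec)

def equivalent_matching_size(replicas, surrogates, delta):
    """Cardinality of a maximal matching that only pairs a replica with an
    equivalent surrogate; within a class any pairing works, so the maximal
    matching size is sum over classes of min(#replicas, #surrogates)."""
    cr = Counter(round_down(v, delta) for v in replicas)
    cs = Counter(round_down(s, delta) for s in surrogates)
    return sum(min(cr[c], cs[c]) for c in cr)

# Tiny example: one replica/surrogate pair shares a class, the other two do not.
print(equivalent_matching_size([(0.15, 0.3), (0.05, 0.9)],
                               [(0.1, 0.3), (0.0, 0.85)], 0.1))  # → 1
```

Only positive edges appear in such a matching, since a replica always (weakly) dominates any equivalent surrogate coordinate-wise.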
Now we want to bound the expected difference between the expected revenue from the matching $X$ and the VCG matching $V$. We can get from $X$ to $V$ through a disjoint collection of augmenting paths and cycles. We want to show that there cannot be many augmenting paths that unmatch a surrogate with a high $p(\vec{s_i})$.
\begin{lemma} \label{lem:augmenting} Let $P$ be any augmenting path from $X$ to $V$ that unmatches surrogate $\vec{s_i}'$. Let also $S$ denote the set of surrogates in $P$. {Finally, let $\vec{s_i}''$ be the surrogate closest to the end of $P$ opposite to $\vec{s_i}'$. If $\vec{s_i}''$ is matched in both $X$ and $V$, then:
{
$${\frac{\delta+\epsilon}{\eta}}\sum_{\vec{s_i} \in S,j} \pi_{ij}(\vec{s_i}) \geq p(\vec{s_i}');$$
otherwise
$${\frac{\delta+\epsilon}{\eta}}\sum_{\vec{s_i} \in S,j} \pi_{ij}(\vec{s_i}) \geq p(\vec{s_i}')-p(\vec{s_i}'').$$}}
\end{lemma}
\begin{proof} { Consider the ``weight'' of this path, computed by adding the weights of the new edges and subtracting the weights of the deleted edges. Because VCG maximizes social welfare, this weight must be positive. Decompose the gain/loss in social welfare in two components: that coming from rebates and that coming without accounting for rebates, i.e. were we running mechanism $M_1$ instead of $M$. As far as the first component goes, the contribution to the welfare via rebates from a surrogate that is matched in both $X$ and $V$ does not change in the two matchings. Instead there is a loss of $\eta p(\vec{s_i}')$ of rebates-welfare because $\vec{s_i}'$ becomes unmatched and a gain of $\eta p(\vec{s_i}'')$ of rebates-welfare, if $\vec{s_i}''$ was not matched in $X$ and became matched in $V$.
Now let us upper-bound the gain in social welfare contributed by the $M_1$ component of the edge-weights. Because $M_1$ is $\epsilon$-BIC for ${\cal D}'$, a replica gains at most ${(\delta+\epsilon)} \sum_j \pi_{ij}(\vec{s_i})$ by being matched to $\vec{s_i}$ instead of her equivalent surrogate. So the $M_1$-welfare goes up by at most ${(\delta+\epsilon)} \sum_{\vec{s_i} \in S,j} \pi_{ij}(\vec{s_i})$ from the augmentation.
Given that the augmentation must increase social welfare, we obtain the lemma.}
\end{proof}
This lemma, in essence, says that each time a surrogate becomes unmatched from $X$ to $V$ it ``claims'' some weight of the $\pi_{ij}$s. If we let $W_i$ denote $\sum_{\forall \vec{s_i},j} \pi_{ij}{(\vec{s_i})}$, then we get the following corollary:
{
\begin{corollary} \label{cor:augmenting}$$\sum_{\vec{s_i} \in V} p(\vec{s_i})/r \geq \sum_{\vec{s_i} \in X} p(\vec{s_i})/r - {{\frac{\delta+\epsilon}{\eta r}} W_i}.$$
\end{corollary}}
We now proceed to bound $\sum_i \mathbb{E}[W_i]$ with an easy lemma.
\begin{lemma} \label{lem:boundd} $$\sum_i \mathbb{E}[W_i]/r \leq T.$$
\end{lemma}
\begin{proof}
$\mathbb{E}[W_i]/r$ is exactly the expected number of items awarded to bidder $i$ by $M$. As $M$ can only award $T$ items, we have the desired inequality.
\end{proof}
Now we just have to put everything together and chase through some inequalities. From the work above we get that:
$$\mathbb{E}\left[\sum_{\vec{s_i} \in V} p(\vec{s_i})/r\right] \geq \mathbb{E}\left[\sum_{\forall \vec{s_i}} p(\vec{s_i})/r\right] - \left(T_i\sqrt{\beta \over r} + { {\frac{\delta+\epsilon}{\eta r} }\mathbb{E}[W_i]}\right).$$
And when we sum this over all bidders we get:
$$R^{M_2}(\mathcal{D}) \geq R^M(\mathcal{D'}) - \sum_i \left(T_i\sqrt{\beta \over r} + { {\frac{\delta + \epsilon}{\eta r} }\mathbb{E}[W_i]}\right)$$
{$$\implies R^{M_2}(\mathcal{D}) \geq R^M(\mathcal{D'}) - \left(\sqrt{\frac{\beta}{r}} \sum_i T_i + \frac{\delta+\epsilon}{\eta}T\right)$$}
Recall that $M$ is just $M_1$ with rebates, hence:
$$R^M(\mathcal{D'}) = \left(1-\eta\right)R^{M_1}(\mathcal{D'}).$$
Putting the above together with our choice of $r$ {and observing that $T_i \leq T$ for all $i$,} we conclude the proof of Theorem~\ref{thm:epsilon-BIC to BIC}.
\section{Proof of Corollary~\ref{cor:MHR}}\label{app:MHR}
Recall the definition of a Monotone Hazard Rate distribution.
\begin{definition}(Monotone Hazard Rate) A one-dimensional differentiable distribution $F$ satisfies the Monotone Hazard Rate Condition if $\frac{f(x)}{1-F(x)}$ is monotonically non-decreasing for all $x$ such that $F(x) < 1,$ where $f = F'$ is the probability density function.
\end{definition}
\noindent To prove Corollary~\ref{cor:MHR} we reduce the MHR case to the $[0,1]$ case. We do this by finding an appropriate $\Xi$ such that the probability that any bidder values any item above $\Xi$ is tiny. In fact, so tiny that even if we assumed the optimal mechanism could somehow extract full value from bidders when they value an item above $\Xi$, this would account for a tiny fraction of the total revenue. We make use of the following two lemmas from Cai and Daskalakis~\cite{CD}. For a distribution $F$, let $\alpha_p = \inf\{x|F(x)\geq 1-1/p\}$. Then:
\begin{lemma}[\cite{CD}]\label{lem:CD1} If $F$ is MHR, then $k\alpha_p \geq \alpha_{p^k}$, for all $p,k \geq 1$.
\end{lemma}
\begin{lemma}[\cite{CD}] \label{lem:CD2} If $F$ is MHR and $X$ is a random variable distributed according to $F$, then $\mathbb{E}[X|X \geq \alpha_p]\cdot Pr[X \geq \alpha_p] \in O(\alpha_p/p)$.
\end{lemma}
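Both lemmas are easy to sanity-check on the exponential distribution, the canonical MHR example (its hazard rate is the constant $\lambda$): there $\alpha_p = \ln(p)/\lambda$, so Lemma~\ref{lem:CD1} holds with equality, and the tail contribution of Lemma~\ref{lem:CD2} is exactly $(\alpha_p + 1/\lambda)/p$. The snippet below is our illustration, not part of the proof.

```python
import math

def alpha(p, lam=1.0):
    """alpha_p = inf{x : F(x) >= 1 - 1/p} for Exp(lam), where F(x) = 1 - exp(-lam*x)."""
    return math.log(p) / lam

def tail_contribution(p, lam=1.0):
    """E[X | X >= alpha_p] * Pr[X >= alpha_p]; for Exp(lam) this equals
    (alpha_p + 1/lam) / p, which is O(alpha_p / p) as in Lemma CD2."""
    a = alpha(p, lam)
    return (a + 1.0 / lam) * math.exp(-lam * a)

p, k = 4.0, 3
print(abs(k * alpha(p) - alpha(p ** k)) < 1e-12)  # Lemma CD1 holds with equality
print(tail_contribution(p))                       # (ln 4 + 1)/4, i.e. O(alpha_p / p)
```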
Set $\zeta= \lceil \log_2 k/\epsilon \rceil +1$. For $k$-bidders, let $\alpha_{i,n}$ denote $\alpha_n$ for $F = \mathcal{F}_i$, for all $i$. Let then $\Xi = \max_i \{\alpha_{i,n^{\zeta}}\}$. Then by Lemma \ref{lem:CD2}, even if we could extract full value from every bidder for each item they valued above $\Xi$, we would only make at most $O(k n \Xi/n^\zeta) = O(\epsilon \Xi)$ expected revenue. In addition, there is a trivial mechanism that makes expected revenue $\Omega(\Xi / \log{k/\epsilon})$. Just price each item at $\Xi' = \max_i\{ \alpha_{i,n}\}$ and sell them on a first-come first-served basis. By Lemma \ref{lem:CD1}, we know that $\alpha_{i,n} \geq \alpha_{i,n^{\zeta}}/\zeta$, and therefore $\Xi' \geq \Xi/\zeta$. In addition, the probability that an item gets sold is a constant (approximately $1/e$), so we make revenue $\Omega(\Xi') = \Omega(\Xi/ \log k/\epsilon)$. These two observations together tell us that we can completely ignore the revenue from extreme bidders without losing too much. So our algorithm for $k$-bidders on MHR distributions is as follows: For each $\mathcal{D}_i$, create a new distribution $\mathcal{D}'_i$ that rounds each $v_{ij}$ down to the nearest multiple of $\delta \Xi$ if $v_{ij} < \Xi$, or down to $\Xi$ if $v_{ij} > \Xi$. Find an optimal mechanism $M_1$ for the distribution $\times_i \mathcal{D}'_i$. Then sample surrogates from $\mathcal{D}'_i$ and replicas from $\mathcal{D}_i$ and go through the same reduction as for $[0,1]$ (described in Section~\ref{sec:true PTAS}). Because we sample replicas directly from $\mathcal{D}_i$, this solution will still be BIC (see Appendix \ref{app:true PTAS}). In addition, by the arguments in Appendix \ref{app:true PTAS} and the observations above, we get an additive $O(\epsilon \Xi)$ approximation to the optimal revenue. Because there is a trivial mechanism making revenue $\Omega(\Xi/\log k/\epsilon)$, this is in fact a multiplicative $(1-O(\epsilon \cdot \log k/\epsilon))$ approximation.
For $k$-items, let $\alpha_{j,m}$ denote $\alpha_m$ when $F$ is the marginal distribution of $\mathcal{F}$ for item $j$. Then let $\Xi = \max_j \{\alpha_{j,m^\zeta}\}$. Then by Lemma \ref{lem:CD2}, even if we could extract full value from every bidder for each item they valued above $\Xi$, we could only make at most $O(km\Xi/m^\zeta)=O(\epsilon \Xi)$ expected revenue. In addition, the first-come first-served mechanism that prices all items at $\Xi' = \max_j\{ \alpha_{j,m}\}$ makes $\Omega(\Xi/\log k/\epsilon)$ revenue for the same reasons that the first-come first-served mechanism in the previous paragraph made $\Omega(\Xi/\log k/\epsilon)$ revenue. Again, these observations tell us that we can completely ignore the revenue from extreme bidders without losing too much. Our algorithm for $k$-items on MHR distributions is the same as for $k$-bidders: $\mathcal{D'}$ samples from $\mathcal{D}$ then rounds each $v_{ij}$ down to the nearest multiple of $\delta \Xi$ if $v_{ij} < \Xi$, and down to $\Xi$ otherwise. We compute the optimal mechanism for ${\cal D}'$. We then sample surrogates from $\mathcal{D}'_i$ and replicas from $\mathcal{D}_i$ and go through the same reduction as for $[0,1]$ (described in Section~\ref{sec:true PTAS}). Again, because we sample replicas directly from $\mathcal{D}$, the solution is still BIC, and by the arguments in Appendix \ref{app:true PTAS} and the observations above, we get an additive $O(\epsilon \Xi)$ approximation to the optimal revenue, which is again a multiplicative $(1-O(\epsilon \cdot \log k/\epsilon))$ approximation.
\section{Extending Theorem~\ref{thm:additive} and Corollary~\ref{cor:MHR} to IC Mechanisms} \label{sec: IC results}
As the modifications to the na\"ive LP for going from BIC to IC are trivial, we will not restate the na\"ive LP for IC here. The symmetry theorem (Theorem \ref{thm:symmetries}, Section \ref{sec:symmetries}) has already been proven for IC mechanisms. The first point in our proof where we have to treat IC and BIC differently is the monotonicity of item-symmetric mechanisms {(for the $k$-bidders problem)}.
\begin{definition}(Strong-Monotonicity of an IC mechanism) An IC or $\epsilon$-IC mechanism is said to be strongly monotone if for all $i,j,j'$ and $\vec{v}$ such that $v_{kj} = v_{kj'}$ for all $k \neq i$, $\phi_{ij}(\vec{v}) > \phi_{ij'}(\vec{v}) \Rightarrow v_{ij} \geq v_{ij'}$.
\end{definition}
\begin{theorem}\label{thm:ICmonotone} Any item-symmetric IC mechanism is strongly monotone. For all item-symmetric $\epsilon$-IC mechanisms $M$, there exists a mechanism $M'$ of equal revenue that is strongly monotone.
\end{theorem}
\begin{proof} The proof follows the same lines as that of Theorem \ref{thm:monotone} after making a quick observation. If $v_{kj} = v_{kj'}$ for all $k \neq i$ and $\sigma$ is the permutation that swaps items $j$ and $j'$, then if bidder $i$ swaps $v_{ij}$ and $v_{ij'}$, he turns $\vec{v}$ into $\sigma(\vec{v})$, simply because $\sigma$ does not affect $\vec{v}_{-i}$. As any item-symmetric mechanism $M$ must have $M(\sigma(\vec{v})) = \sigma(M(\vec{v}))$, the rest of the proof follows that of Theorem \ref{thm:monotone} as bidder $i$ can swap the probability that he receives items $j$ and $j'$ by swapping his values.
\end{proof}
Next, we have to turn Theorem \ref{thm:ICmonotone} into monotonicity constraints for the LP {of the $k$-bidders problem}. We say that $\vec{v} \sim_i \vec{w}$ if there exists a $\sigma$ such that $\sigma(\vec{v}) = \vec{w}$ and $\sigma(\vec{v}_k) = \vec{v}_k$ for all $k \neq i$. In other words, $\vec{w}_i$ is a permutation of $\vec{v}_i$, and $\sigma$ acts as the identity on $\vec{v}_{-i}$. Then let $E_i(\vec{v}_{-i})$ denote the set of $\vec{w}$ such that {($\vec{w}_{-i} = \vec{v}_{-i}$) and (for all $j < j'$: $w_{kj} = w_{kj'}$ for all $k \neq i$ $\Rightarrow w_{ij} \geq w_{ij'}$)}. I.e. {$E_i(\vec{v}_{-i})$} is a set of representatives under the equivalence relation $\sim_i$ for a fixed $\vec{v}_{-i}$. Strong-monotonicity implies then the following.
\begin{observation}\label{obs: ICmonotone} When playing an {item-symmetric}, strongly monotone IC mechanism, if $v_{kj} = v_{kj'}$ for all $k\neq i$, and $v_{ij} {>} v_{ij'}$, then bidder $i$ has no incentive to report {any $\vec{w}_i$ such that $w_{ij'} > w_{ij}$.}
\end{observation}
\begin{corollary}\label{cor: ICmonotone} If {$M$ is strongly-monotone and item-symmetric} and when playing $M$, for all $i$ and $\vec{v}_{-i}$, bidder $i$ never has (more than $\epsilon$) incentive to misreport $\vec{w}_i \in E_i(\vec{v}_{-i})$ when her true type is $\vec{v}_i \in E_i(\vec{v}_{-i})$ for any $\vec{w}_i,\vec{v}_i$, then $M$ is IC ($\epsilon$-IC).
\end{corollary}
{Given Corollary~\ref{cor: ICmonotone}, we replace in the $k$-bidders LP the BIC and strong-monotonicity constraints with the following IC and strong-monotonicity constraints:}
\paragraph{IC, Strong-Monotonicity Constraints:}
$$\sum_j {v_{ij}}\phi_{ij}(\vec{v}) - p_i(\vec{v}) \geq \sum_j {v_{ij}}\phi_{ij}(\vec{w}) - p_i(\vec{w}),~~~\text{for all $i,\vec{v}\in E,\vec{w} \in E_i(\vec{v}_{-i})$;}$$
$$\phi_{ij}(\vec{v}) \ge \phi_{ij'}(\vec{v}),~~~\text{for all $i$, $j < j'$ and $\vec{v}\in E$ such that $v_{kj} = v_{kj'}$ for all $k \neq i$}.$$
{The last step is to show that there are only polynomially many IC constraints.} We show this in the following lemma:
\begin{lemma}\label{lem: numberIC} $|E_i(\vec{v}_{-i})| \leq |E|$ for all $i,\vec{v}_{-i}$.
\end{lemma}
\begin{proof}
We prove the lemma by showing that if $\vec{w} \in E_i(\vec{v}_{-i})$, and $\sigma(\vec{w}) \in E_i(\vec{v}_{-i})$, then $\sigma(\vec{w}) = \vec{w}$. Observe first that in order to possibly have $\sigma(\vec{w}) \in E_i(\vec{v}_{-i})$, we must have $\sigma(\vec{w}_{-i}) = \vec{v}_{-i}=\vec{w}_{-i}$. In other words, if $\sigma(j) = j'$, then $w_{kj} = w_{kj'}$ for all $k \neq i$. However, in order for $\vec{w} \in E_i(\vec{v}_{-i})$, it must be the case that for all such $j,j'$, $w_{ij} > w_{ij'} \Rightarrow j < j'$, which means that there is a unique ordering of such values in $\vec{w}_i$ that will make $\vec{w} \in E_i(\vec{v}_{-i})$. Because $\vec{w}$ and $\sigma(\vec{w})$ must respect the same ordering, they must be the same vector.
Therefore, because $E_i(\vec{v}_{-i})$ contains at most one representative per equivalence class under $\sim$, and $E$ contains exactly one representative per equivalence class, we have that $|E_i(\vec{v}_{-i})| \leq |E|$.
\end{proof}
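To get a feel for the sizes involved, one can enumerate representatives explicitly for a toy grid. The snippet below is our hypothetical illustration for a single bidder with $k$ items (the paper's $E$ lives on the full discretized type space): one representative per orbit of the item-permutation group is the coordinate-wise sorted vector, so the count is the number of multisets rather than the number of vectors.

```python
from itertools import product

def representatives(grid, k):
    """One representative per orbit of the item-permutation group: the
    coordinate-wise sorted (non-increasing) value vectors."""
    return {tuple(sorted(v, reverse=True)) for v in product(grid, repeat=k)}

grid = [0.0, 0.5, 1.0]            # a coarse delta-grid with delta = 1/2
k = 2
full = len(grid) ** k             # 9 profiles in the naive formulation
reps = representatives(grid, k)   # only 6 representatives are needed
print(full, len(reps))            # → 9 6
```

The gap between $|{\rm grid}|^k$ and the number of representatives grows rapidly with $k$, which is the source of the savings from item symmetry.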
Our discretization lemma of Section~\ref{sec:epsilon-truthful}, namely Lemma~\ref{lem:deltaIC}, has exactly the same statement, replacing BIC with IC, and its proof is very similar (just removing expectations over the other bidders' types where appropriate). As we do not have an $\epsilon$-IC to IC reduction, the above changes are actually the only changes that are needed to adjust Theorem \ref{thm:additive} to $\epsilon$-IC. The proof of Corollary \ref{cor:MHR} also works for $\epsilon$-IC, so we have shown how to obtain $\epsilon$-IC mechanisms for the settings of Theorem~\ref{thm:additive} and Corollary~\ref{cor:MHR}. As discussed in Remark \ref{rem:additive}, we can accommodate arbitrary budget constraints as we do not use an analogue of the $\epsilon$-BIC to BIC reduction, which was the only step in our BIC proof that could not accommodate budgets.
\end{document}
\begin{document}
\title{Modified energy for split-step methods applied to the linear Schr\"odinger equation }
\abstract{
We consider the linear Schr\"odinger equation and its discretization by split-step methods where the part corresponding to the Laplace operator is approximated by the midpoint rule. We show that the numerical solution coincides with the exact solution of a modified partial differential equation at each time step. This shows the existence of a modified energy preserved by the numerical scheme. This energy is close to the exact energy if the numerical solution is smooth. As a consequence, we give uniform regularity estimates for the numerical solution over arbitrary long time. \\[2ex]
{\bf MSC numbers}: 65P10, 37M15}\\[2ex]
{\bf Keywords}: Schr\"odinger equation, Splitting integrators, Long-time behavior, Backward error analysis.
\section{Introduction}
We consider the linear Schr\"odinger equation
\begin{equation}
\label{E1}
\partial_t u(t,x) = -i \Delta u (t,x) + iV(x) u(t,x),\quad u(0,x) = u^0(x),
\end{equation}
with initial condition $u^0$, and potential function $V(x) \in \mathbb{R}$.
The wave function $u(t,x)$ depends on $x \in \mathbb{T}^d$ or $\mathbb{R}^d$ and the time $t > 0$. The operator $\Delta$ is the $d$-dimensional Laplace operator. In the following, we consider mainly the case where $x \in \mathbb{T}^d$. The case of the whole space is entirely similar.
The equation \eqref{E1} is symplectic and its solution preserves the $L^2$ norm and the energy
\begin{equation}
\label{Eorig}
u \mapsto \int_{\mathbb{T}^d} |\nabla u|^2 + V |u|^2 \mathrm{d} x = \langle u|-\Delta+V|u\rangle.
\end{equation}
The solution of \eqref{E1} is given by
$$
u(t,x) = \exp(i t (-\Delta + V)) u^0(x),
$$
and a standard method to simulate this solution is to consider the approximation
\begin{equation}
\label{EsplitD}
\exp(i h (-\Delta + V)) \simeq \exp(-ih \Delta) \exp(ih V)
\end{equation}
for a small stepsize $h > 0$.
The solution at a given time $t = nh$ is then approximated by
\begin{equation}
\label{Eapprox}
\exp(i t (-\Delta + V)) u^0 \simeq \Big(\exp(-ih \Delta) \exp(ih V)\Big)^n u^0.
\end{equation}
The advantage of this method is that it yields a symplectic scheme preserving the $L^2$ norm. Moreover, it is very easy to implement using the fast Fourier transform: while the operator $\Delta$ is diagonal in Fourier space, the operator $V$ acts as a multiplication operator in physical space.
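Concretely, one step of \eqref{Eapprox} is a pointwise multiplication followed by a Fourier multiplier. The NumPy sketch below is our illustration (grid size, potential and initial datum are arbitrary choices): it advances the solution on a one-dimensional torus and confirms the exact $L^2$ conservation mentioned above.

```python
import numpy as np

def split_step(u0, V, h, n_steps):
    """Apply the splitting exp(-i h Laplacian) exp(i h V) n_steps times on the
    torus [0, 2*pi), discretized with len(u0) grid points / Fourier modes."""
    N = len(u0)
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer Fourier modes
    laplace_symbol = -(k ** 2)                # Delta acts as -|k|^2 on mode k
    u = np.asarray(u0, dtype=complex)
    for _ in range(n_steps):
        u = np.exp(1j * h * V) * u            # V: multiplication in physical space
        u = np.fft.ifft(np.exp(-1j * h * laplace_symbol) * np.fft.fft(u))
    return u

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u0 = np.exp(1j * np.sin(x))                   # arbitrary smooth initial datum
V = 0.5 * np.cos(x)
u = split_step(u0, V, h=0.01, n_steps=100)
print(np.allclose(np.linalg.norm(u), np.linalg.norm(u0)))  # → True (L^2 preserved)
```

Both factors of the splitting are unimodular multipliers (in physical and Fourier space respectively), so the discrete $L^2$ norm is preserved to machine precision.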
For finite time, this splitting scheme yields a consistent numerical scheme: as $h \to 0$ and if the numerical solution is smooth, it can be shown that \eqref{Eapprox} yields a convergent approximation of order $1$ in $h$, see \cite{Jahn00}. Considering higher order approximations such as the symmetric Strang splitting or higher order splitting methods allows one to obtain higher order approximation schemes under the assumption that the numerical solution is smooth enough, see \cite{Jahn00,Hans08}.
Concerning the long-time behaviour of such methods, very few results exist. In \cite{DF07}, {\sc Dujardin \& Faou} showed the conservation of the regularity of the numerical solution \eqref{Eapprox} in $\mathbb{T}^1$ over very long time, provided the potential function is small and smooth. Moreover, even in this situation, resonance effects appear for some values of $h$: typically when $\exp(-ih \Delta)$ possesses eigenvalues close to $1$.
In the finite dimensional case, the long time behaviour of splitting methods can be understood by using the Baker-Campbell-Hausdorff (BCH) formula (see for instance \cite{HLW}). Roughly speaking, this result states that for two matrices $A$ and $B$, we can write
$$
\exp(t A ) \exp(tB) = \exp( t Z(t))
$$
where $Z(t) = A + B + \frac{t}{2} [A,B] + t^2 \cdots$, with $[A,B] = AB - BA$ the matrix commutator. Hence the long time behaviour of the numerical solution corresponding to \eqref{Eapprox} can be analyzed by considering the properties of the matrix $Z(t)$, which is a small perturbation of the original operator $A + B$ for small time $t$. However, to be valid, the BCH formula requires $h$ to be small enough with respect to the inverse of the norms of $A$ and $B$. This makes this strategy impossible to apply directly to unbounded operators, unless a drastic CFL-like condition is used for the full discretization of \eqref{E1}.
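For bounded matrices, the quality of the BCH truncation is easy to observe numerically. The sketch below (ours, with a hand-picked pair of non-commuting $2\times 2$ matrices) compares $\exp(tA)\exp(tB)$ with $\exp\big(t(A+B) + \tfrac{t^2}{2}[A,B]\big)$ and exhibits the expected $O(t^3)$ discrepancy.

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential via its Taylor series (fine for the small norms here)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # two non-commuting matrices
B = np.array([[1.0, 0.0], [1.0, -1.0]])
comm = A @ B - B @ A
for t in (1e-1, 1e-2):
    lhs = expm_series(t * A) @ expm_series(t * B)
    rhs = expm_series(t * (A + B) + 0.5 * t ** 2 * comm)
    print(t, np.linalg.norm(lhs - rhs))       # discrepancy shrinks like t^3
```

Dividing $t$ by $10$ reduces the error by roughly a factor $10^3$, consistent with the first neglected BCH term being of order $t^3$.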
In this paper, we consider the time discretization
\begin{equation}
\label{Esplit}
\exp(ih (-\Delta + V)) \simeq \exp(ihV) R(-ih\Delta)
\end{equation}
where
$$
R(z) = \frac{1+z/2}{1-z/2}
$$
is the stability function of the midpoint rule. Such an approximation is clearly consistent with \eqref{E1} if the solution is smooth enough. Moreover, it defines a symplectic numerical scheme preserving the $L^2$ norm, and it is easily implemented using the fast Fourier transform. Similar schemes have been considered in \cite{Ascher, Stern, Zhang}.
Recall that for all $x\in \mathbb{R}$ we have
$$
\frac{1 + ix }{1 - ix} = \exp(2i\arctan(x)),
$$
and hence we can write
$$
R(-ih\Delta) = \frac{1 -ih\Delta/2}{1 + ih\Delta/2} = \exp(2 i \arctan\big(-\frac{h\Delta}{2}\big)),
$$
where now $ 2 \arctan\big(-\frac{h\Delta}{2}\big)$ is a bounded operator from $L^2$ to itself. Using this representation, we show in this work that there exists a symmetric operator $S(h): L^2 \to L^2$ such that
$$
\exp(ihV) R(-ih\Delta) = \exp(ih S(h)),
$$
with
$$
S(h) = -\frac{2}{h}\arctan\big(\frac{h\Delta}{2}\big) + \tilde{V}(h)
$$
where $\tilde{V}(h): L^2 \to L^2$ is a modified potential.
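Since $\Delta$ acts on the Fourier mode $k$ as multiplication by $-|k|^2$, the representation above can be checked mode by mode: $R(-ih\Delta)$ is the unimodular multiplier $k \mapsto R(ih|k|^2) = \exp\big(2i\arctan(h|k|^2/2)\big)$. The following snippet (our illustration) verifies this numerically.

```python
import numpy as np

def R(z):
    """Stability function of the midpoint rule."""
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

h = 0.1
k = np.arange(-8, 9)
z = 1j * h * k ** 2                    # symbol of -i*h*Laplacian on mode k
print(np.allclose(np.abs(R(z)), 1.0))                              # unimodular
print(np.allclose(R(z), np.exp(2j * np.arctan(h * k ** 2 / 2.0))))  # arctan form
```

In particular the logarithm of $R(-ih\Delta)$ is uniformly bounded in $k$ (by $\pi$ in modulus), in contrast with the multiplier $\exp(-ih|k|^2)$ of the exact free flow.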
Hence, for all $n$ and all initial value $ u^0$, we have
$$
u^n = \big(\exp(ihV) R(-ih\Delta)\big)^n u^0 = \exp(inh S(h)) u^0
$$
and hence the numerical solution $u^n$ coincides with the exact solution of the {\em modified equation}
$$
\partial_t u = i S(h) u
$$
at each time step $t_n = nh$. This implies that the associated energy
$$
\langle u \, | \, S(h) \, | \, u\rangle
$$
is preserved along the numerical solution associated with the split-step scheme \eqref{Esplit}. Moreover this energy is close to the original energy \eqref{Eorig} if $u$ is smooth. Using these properties, we give regularity bounds for the numerical solution over arbitrary long time.
Such a result is to our knowledge the first extension to an infinite dimensional setting of the classical backward error analysis for Hamiltonian ordinary differential equations (see \cite{HLW,Reic04}). Note in particular that, as in the case of {\em linear} ordinary differential equations, this result is valid for arbitrary long time, while such results classically hold for exponentially long time with respect to the step size for nonlinear ordinary differential equations.
It is worth noticing that such a result does not hold for the splitting scheme \eqref{EsplitD}, for which it is known that resonance effects occur, see \cite{DF07}. The main difference between \eqref{Esplit} and \eqref{EsplitD} lies in the high-frequency regularization effect of the midpoint rule: in essence, the logarithm of the operator $R(-ih\Delta)$ is bounded while the logarithm of $\exp(-ih \Delta)$ is not well defined when $h \Delta$ possesses eigenvalues close to multiples of $2\pi$. Note that this does not affect the approximation property of the scheme for finite time and smooth numerical solution.
Similarly this result does not automatically extend to situations where the propagator $R(-ih\Delta)$ is replaced by a higher order approximation of $\exp(-ih\Delta)$, or for higher order splitting schemes (see \cite[Chap III]{HLW}). We discuss this point in the last section of this work, and show by numerical experiments that in general resonance effects appear.
Let us mention that in the nonlinear situation, results exist concerning the long-time behaviour of splitting schemes applied to the nonlinear Schr\"odinger equation: see the recent works of \textsc{Faou, Gr\'ebert \& Paturel} \cite{FGP1,FGP2} and \textsc{Gauckler \& Lubich} \cite{GL08a,GL08b} for the long time behaviour of splitting schemes applied to NLS when the initial solution is small. However, to our knowledge no existence results for a global modified energy have been proved. Note that in this direction, concerning the numerical approximation of solitary waves, \textsc{Duran \& Sanz-Serna} \cite{Duran00} have proved the existence of a modified solitary wave over finite time for the numerical solution associated with the midpoint rule.
\section{Statement of the results}
We represent a function $u \in L^2(\mathbb{T}^d)$ by its Fourier coefficients $u = (u_k)_{k \in \mathbb{Z}^d}$ defined as
$$
u_k = \frac{1}{(2\pi)^d}\int_{\mathbb{T}^d} u(x) e^{i k \cdot x} \mathrm{d} x
$$
where for $k = (k_1,\ldots,k_d) \in \mathbb{Z}^d$ and $x = (x_1,\ldots,x_d) \in \mathbb{T}^d$ we set $k \cdot x = k_1 x_1 + \cdots + k_d x_d$.
We define
$$
\Norm{u}{}^2 = \sum_{k \in \mathbb{Z}^d} |u_k|^2,\quad \mbox{and}\quad \Norm{u}{H^s}^2 = \sum_{k \in \mathbb{Z}^d} (1 + |k|^2)^s |u_k|^2
$$
the $L^2$ and the $H^s$ Sobolev norms on $\mathbb{T}^d$, where for $k = (k_1,\ldots,k_d) \in \mathbb{Z}^d$, we set
$$
|k|^2 = k_1^2 + \cdots + k_d^2.
$$
For an operator $A = (A_{k\ell})_{k,\ell \in \mathbb{Z}^d}$ acting in the Fourier space $\mathbb{C}^{\mathbb{Z}^d}$ and for $\alpha > 1$ we set
$$
\Norm{A}{\alpha} = \sup_{k,\ell} |A_{k\ell}| \big(1+ |k - \ell|^\alpha\big).
$$
We denote by
$$
\mathcal{L}_{\alpha} = \{ A = (A_{k\ell})_{k,\ell \in \mathbb{Z}^d}\, | \, \Norm{A}{\alpha} < \infty\, \}.
$$
If $A \in \mathcal{L}_{\alpha}$ with $\alpha > d$, we can easily show that $A \in \mathcal{L}(L^2)$: see Lemma \ref{ELB} below.
We say that $A$ is symmetric if for all $k,\ell \in \mathbb{Z}^d$, we have $A_{k\ell} = \overline{A}_{\ell k}$, or equivalently $A^* = A$. In this situation, for $u \in L^2$, we set
$$
\langle u | \, A \, | u \rangle = \sum_{k,\ell \in \mathbb{Z}^d} \bar{u}_k A_{k\ell} u_\ell = (u,Au) \in \mathbb{R}
$$
where $(\,\cdot \, , \, \cdot\, )$ is the $L^2$ product in $\mathbb{T}^d$.
For two operators $A$ and $B$, we set
$$
\mathrm{ad}_A(B) = AB - BA.
$$
Finally, with a real function $W(x)$ we associate the operator $W = (W_{k\ell})_{k,\ell \in \mathbb{Z}^d}$ with components $W_{k\ell}= W_{k- \ell}$ where $W_n$ denotes the Fourier coefficient of $W$ associated with $n \in \mathbb{Z}^d$. Thus the operator $(W_{k\ell})_{k,\ell \in \mathbb{Z}^d}$ acting in the Fourier space corresponds to the multiplication by $W$. Note moreover that with this identification, $\Norm{W}{\alpha} < \infty$ with $\alpha > d$ implies that $\Norm{W}{L^{\infty}} < \infty$.
The goal of this paper is to prove the following results:
\begin{theorem}
\label{T1}
Let $ \alpha > d$, and
assume that $\Norm{V}{\alpha} < \infty$.
There exist $h_0> 0$ and a constant $C$ such that for all $h \in (0, h_0)$,
there exists a symmetric operator $S(h)$ such that
$$
\exp(ih V) R(-ih\Delta) = \exp (ih S(h)),
$$
satisfying for all $h$,
$$
S(h) = -\frac{2}{h} \arctan\big(\frac{h \Delta}{2}\big) + V(h) + h W(h)
$$
where $V(h)$ and $W(h)$ satisfy
\begin{equation}
\label{truc1}
\Norm{V(h)}{\alpha} + \Norm{W(h)}{\alpha} \leq C \Norm{V}{\alpha},
\end{equation}
and
where moreover $V(h)$ is given by the convergent series in $\mathcal{L}_{\alpha}$
\begin{equation}
\label{truc2}
V(h) = \big(\mathrm{d} \exp_{Z_0(h)}\big)^{-1} (V) = V + \sum_{k \geq 1} \frac{B_k}{k!} i^k \mathrm{ad}_{Z_0(h)}^k (V)
\end{equation}
with $Z_0(h) = -2 \arctan\displaystyle\big(\frac{h \Delta}{2}\big) $, and where the $B_k$ are the Bernoulli numbers.
\end{theorem}
\begin{remark}
The size of $h_0$ is only proportional to the inverse of $\Norm{V}{\alpha}$, and hence is a reasonably small parameter. In particular it does not depend on a possible space discretization of the problem through a CFL condition.
\end{remark}
The following result shows that $S(h)$ defines a ``modified'' energy when applied to smooth functions:
\begin{proposition}
\label{P1}
Let $\beta \in [0,1]$.
Assume that $u \in H^{1 +\beta}(\mathbb{T}^d)$, then we have for $h \in (0, h_0)$,
\begin{equation}
\label{Ecomp}
\big|\langle u | S(h) | u \rangle - \langle u | -\Delta + V | u \rangle \big|\leq C h^\beta \Norm{u}{H^{1 + \beta}}^2,
\end{equation}
where $C$ depends on $\beta$ and $V$.
\end{proposition}
The next result shows the conservation of the modified energy $S(h)$ along the numerical solution associated with the split-step propagator. As a consequence, we obtain a regularity bound for the numerical solution over arbitrarily long times.
\begin{corollary}
\label{C1}
Assume that $u^0 \in L^2(\mathbb{T}^d)$ and $h \in (0,h_0)$. For all $n \geq 1$, we define
$$
u^n = \big(\exp(ih V) R(-ih\Delta)\big)^n u^0.
$$
Then for all $n$ we have
\begin{equation}
\label{Eenerg}
\langle u^n | S(h) | u^n \rangle = \langle u^0 | S(h) | u^0 \rangle.
\end{equation}
If moreover $u^0 \in H^1$,
then there exists a constant $C_0$ depending on $V$ and $\alpha$ such that for all $n \in \mathbb{N}$,
\begin{equation}
\label{Ebreg}
\sum_{ |k| \leq 1/\sqrt{h}} |k|^2|u^n_k|^2 + \frac{1}{h} \sum_{ |k| > 1/\sqrt{h}} |u^n_k|^2 \leq C_0 \Norm{u^0}{H^1}^2.
\end{equation}
\end{corollary}
This last result shows that the $H^1$ estimate is preserved over arbitrarily long times only for the ``low'' modes $|k| \leq 1/\sqrt{h}$, whereas the remaining high-frequency part is small in $L^2$.
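For readers who wish to experiment, one step of the split-step propagator of Corollary \ref{C1} takes only a few lines with the FFT. The sketch below is our own illustration (the names `split_step`, `l2` are ours), assuming the midpoint approximation $R(z) = (1+z/2)/(1-z/2)$ of the exponential, which is consistent with $\exp(iZ_0)$, $Z_0 = -2\arctan(h\Delta/2)$; it checks that the propagator preserves the $L^2$ norm exactly:

```python
import numpy as np

# One split step u^{n+1} = exp(ihV) R(-ih Delta) u^n on the 1D torus,
# assuming the midpoint approximation R(z) = (1+z/2)/(1-z/2).

N = 128
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies
V = np.cos(x) + np.sin(6 * x)            # potential used in the experiments below
u = 2.0 / (2.0 - np.cos(x))              # initial datum used in the experiments below
h = 0.01

def split_step(u, h):
    # R(-ih Delta): on the mode k, Delta -> -k^2, so R acts as a unimodular factor
    z = 1j * h * k**2
    u_hat = np.fft.fft(u) * (1 + z / 2) / (1 - z / 2)
    u = np.fft.ifft(u_hat)
    # exp(ihV): pointwise multiplication in physical space
    return np.exp(1j * h * V) * u

l2 = lambda v: np.linalg.norm(v) / np.sqrt(N)
u0_norm = l2(u)
for _ in range(1000):
    u = split_step(u, h)
print(abs(l2(u) - u0_norm))  # zero up to roundoff: both factors are unitary
```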
\begin{remark}
The results above obviously remain valid when considering the full discretization of \eqref{E1} by collocation methods (see for instance \cite{L08}), with estimates independent of the spectral discretization parameter.
\end{remark}
\begin{remark}
\label{NONO}
The previous results easily extend to the splitting scheme
$$
R(-ih \Delta) \exp(ihV)
$$
and to the Strang splitting
\begin{equation}
\label{Estrang}
\exp(ihV/2) R(-ih \Delta) \exp(ihV/2).
\end{equation}
Note that in this last situation, the fact that the method is of order $2$ allows one to take $\beta \in [0,2]$ in \eqref{Ecomp}. See Section \ref{Section7} for further details on other possible extensions.
\end{remark}
\section{Formal series}
We now start the proof of Theorem \ref{T1}.
In the following, we set
$$
Z_0 := -2 \arctan\big(\frac{h\Delta}{2}
\big)
$$
the diagonal operator with coefficients
$$
\lambda_k = (Z_0)_{kk} = 2 \arctan\big(\frac{h|k|^2}{2}\big), \quad k \in \mathbb{Z}^d.
$$
We look for a function $ t \to Z(t)$ taking values in the set of operators acting on $\mathbb{C}^{\mathbb{Z}^d}$ such that $Z(0) = Z_0$ and
$$
\forall\, t \in [0,h],\quad e^{itV} e^{iZ_0} = e^{iZ(t)}.
$$
Differentiating this equation in $t$ yields (see \cite{HLW})
$$
i V e^{it V }e^{iZ_0}= i \big(\mathrm{d} \exp_{iZ(t)} Z'(t) \big) e^{iZ(t)}.
$$
Hence $Z(t)$ has to satisfy the equation (see \cite[Chap. III.4]{HLW})
\begin{equation}
\label{EZt}
Z'(t) = (\mathrm{d} \exp_{iZ(t)})^{-1} V = \sum_{k\geq 0} \frac{B_k}{k!} i^k \mathrm{ad}_{Z(t)}^k(V),
\end{equation}
and $Z(0) = Z_0$. Here, the $B_k$ are the Bernoulli numbers. Recall that for $z \in \mathbb{C}$, $|z| < 2\pi$, the expression
$$
\sum_{k \geq 0} \frac{B_k}{k!} z^k = \frac{z}{e^{z} - 1}
$$
defines a power series of radius $2\pi$.
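This generating-function identity can be checked symbolically; the following SymPy sketch (ours, independent of the proof) verifies that the partial sums of $\sum_{k} B_k z^k/k!$ reproduce the Taylor expansion of $z/(e^z-1)$:

```python
import sympy as sp

# Sanity check: the Bernoulli numbers generate z/(e^z - 1),
# i.e. sum_{k>=0} B_k z^k / k! agrees with its Taylor series at 0.

z = sp.symbols('z')
K = 10
partial = sum(sp.bernoulli(k) * z**k / sp.factorial(k) for k in range(K))
taylor = sp.series(z / (sp.exp(z) - 1), z, 0, K).removeO()
print(sp.simplify(partial - taylor))  # 0
```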
We define the formal series
$$
Z (t) = \sum_{\ell \geq 0 } t^\ell Z_\ell
$$
where $Z_\ell$, $\ell \geq 1$, are unknown operators.
Plugging this expression into \eqref{EZt} we find
$$
\begin{array}{rcl}
\displaystyle\sum_{\ell \geq 1} \ell t^{\ell - 1} Z_{\ell}
&=& \displaystyle \sum_{k \geq 0} \frac{B_k}{k!} \Big( i \sum_{\ell \geq 0} t^\ell \mathrm{ad}_{Z_\ell} \Big)^k(V)\\[3ex]
&=& \displaystyle \sum_{\ell \geq 0} t^\ell \sum_{k \geq 0} \frac{B_k}{k!} i^k \sum_{\ell_1 + \cdots + \ell_k = \ell } \mathrm{ad}_{Z_{\ell_1}} \cdots \mathrm{ad}_{Z_{\ell_k}}(V).
\end{array}
$$
Identifying the coefficients in the formal series, we find the induction formula:
\begin{equation}
\label{Erec}
\forall\, \ell \geq 0,\quad
(\ell+1) Z_{\ell+1} = \sum_{k \geq 0} \frac{B_k}{k!} i^k \sum_{\ell_1 + \cdots + \ell_k = \ell } \mathrm{ad}_{Z_{\ell_1}} \cdots \mathrm{ad}_{Z_{\ell_k}}(V).
\end{equation}
Note that one easily shows by induction that for all $\ell$, $Z_\ell$ is symmetric. For $\ell = 0$, this equation yields
\begin{equation}
\label{EZ1}
Z_1 = \sum_{k \geq 0} \frac{B_k}{k!} i^k \mathrm{ad}_{Z_0}^k(V).
\end{equation}
Note that the main difference with the finite dimensional situation is that the ``first'' term in the expansion is given by an infinite series and that it depends on the small parameter $h$ through the operator $Z_0$. The key to control this term is to estimate the norm of the operator $\mathrm{ad}_{Z_0}$.
\section{Proof of Theorem \ref{T1}}
\begin{lemma}
\label{Lprod}
Assume that $\alpha > d$.
There exists a constant $C_\alpha$ such that for all operators $A$ and $B$, $$
\Norm{AB}{\alpha} \leq C_\alpha\Norm{A}{\alpha}\Norm{B}{\alpha}.
$$
\end{lemma}
\begin{Proof}
We have for $k,\ell \in \mathbb{Z}^d$,
$$
\begin{array}{rcl}
|(AB)_{k\ell} |( 1+| k - \ell|^\alpha) &\leq& (1+ | k - \ell|^\alpha) \displaystyle\sum_{p\in \mathbb{Z}^d} |A_{kp}| |B_{p\ell}|\\[2ex]
&\leq &\displaystyle \Norm{A}{\alpha}\Norm{B}{\alpha} \sum_{p\in \mathbb{Z}^d} \frac{1+ | k - \ell|^\alpha}{(1+ |k-p|^\alpha)(1+ |p - \ell|^\alpha)}.
\end{array}
$$
But as the function $x \to x^{\alpha}$ is convex for $x > 0$, we have
$$
1 + |k-\ell|^\alpha \leq 1 + \big(|k-p| + |p - \ell|\big)^\alpha \leq 2^{\alpha - 1} \big(1 + |k-p|^\alpha + 1+ |p - \ell|^\alpha\big).
$$
Hence we have
$$
|(AB)_{k\ell} |(1+ | k - \ell|^\alpha) \leq 2^{\alpha-1} \Norm{A}{\alpha}\Norm{B}{\alpha} \sum_{p\in \mathbb{Z}^d}\Big( \frac{1}{1+ |k-p|^\alpha} + \frac{1}{ 1+ |p - \ell|^\alpha}\Big)
$$
and this shows the result, as $\alpha > d$.
\end{Proof}
\begin{lemma}
\label{ELB}
Let $\alpha> d$. There exists a constant $M_\alpha$ such that for every symmetric operator $B$ and for all $u \in L^2$, we have
$$
|\langle u | B | u \rangle | \leq M_\alpha \Norm{B}{\alpha} \Norm{u}{}^2.
$$
\end{lemma}
\begin{Proof}
We have
$$
\begin{array}{rcl}
|\langle u | B |\, u \rangle | &\leq& \displaystyle\sum_{k,\ell} |B_{k\ell}| |u_k||u_\ell|\\[2ex]
&\leq & \Norm{B}{\alpha}\displaystyle \sum_{k,\ell} \frac{1}{1+ |k - \ell|^\alpha} |u_k||u_\ell|\\[2ex]
&\leq& \Norm{B}{\alpha} \displaystyle \sum_{k,\ell} \frac{1}{1+ |k - \ell|^\alpha} |u_k|^2
\end{array}
$$
using the inequality $|u_k||u_\ell| \leq \frac{1}{2}(|u_k|^2 + |u_\ell|^2)$.
This yields the result.
\end{Proof}
\begin{lemma}
\label{L1}
Recall that $Z_0 = \displaystyle -2 \arctan\big(\frac{h\Delta}{2}\big)$, and
let $W = (W_{k\ell})_{k,\ell \in \mathbb{Z}^d}$ be an operator. We have for all $\alpha > 1$
\begin{equation}
\label{Enono}
\Norm{\mathrm{ad}_{Z_0} W}{\alpha} \leq \pi \Norm{W}{\alpha}.
\end{equation}
\end{lemma}
\begin{Proof}
For $k,\ell \in \mathbb{Z}^d$ we have, as $Z_0$ is diagonal,
$$
\begin{array}{rcl}
\big(\mathrm{ad}_{Z_0}W \big)_{k\ell} &=& (\lambda_k - \lambda_\ell)W_{k\ell}, \\[2ex]
&=& \big(2\arctan(h |k|^2/2) - 2\arctan(h|\ell|^2/2)\big)W_{k\ell}.
\end{array}
$$
Hence we have for all $k,\ell \in \mathbb{Z}^d$,
$$
\left|\big(\mathrm{ad}_{Z_0}W \big)_{k\ell} \right| \leq \pi |W_{k\ell}|
$$
and this shows the result.
\end{Proof}
Combining this lemma with \eqref{EZ1}, we see that
\begin{equation}
\label{truc4}
\Norm{Z_1}{\alpha} \leq \Norm{V}{\alpha} \sum_{k \geq 0} \frac{|B_k|}{k!} \pi^k \leq C \Norm{V}{\alpha}.
\end{equation}
In components, we calculate using the expression of $\mathrm{ad}_{Z_0}$ that
\begin{equation}
\label{EZ1nono}
(Z_1)_{k\ell} = V_{k\ell} \frac{i(\lambda_k - \lambda_\ell)}{\exp(i(\lambda_k - \lambda_\ell)) - 1}.
\end{equation}
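Since $|\lambda_k - \lambda_\ell| < \pi < 2\pi$, the Bernoulli series defining $Z_1$ is resummed by the function $\varphi(z) = z/(e^z-1)$ evaluated at $z = i(\lambda_k - \lambda_\ell)$, which is how the closed form above arises. A small numerical sketch (ours) of this resummation:

```python
import numpy as np
from sympy import bernoulli, factorial

# phi(z) = z/(e^z - 1) versus its truncated Bernoulli series at z = i*d,
# with |d| < pi (the possible values of lambda_k - lambda_l); the series
# converges since |z| < 2*pi.

def phi(z):
    return z / (np.exp(z) - 1)

def phi_series(z, K=40):
    return sum(float(bernoulli(k)) / float(factorial(k)) * z**k for k in range(K))

d = 2.5  # a sample value of lambda_k - lambda_l
print(abs(phi(1j * d) - phi_series(1j * d)))  # small
```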
Note that for any bounded operators $A$ and $B$, we always have
$$
\Norm{\mathrm{ad}_A(B)}{\alpha} \leq 2C_\alpha \Norm{A}{\alpha}\Norm{B}{\alpha},
$$
where $C_\alpha$ is given by Lemma \ref{Lprod}.
We define now the following numbers:
$$
\zeta_0 = \pi\quad \mbox{and}\quad \zeta_\ell = 2 C_\alpha \Norm{Z_\ell}{\alpha},\quad \mbox{for}\quad \ell \geq 1.
$$
Using \eqref{Erec} and Lemma \ref{L1}, we easily see that we have the estimates
$$
\forall\, \ell\geq 0,\quad \frac{1}{2C_\alpha}(\ell+1) \zeta_{\ell+1} \leq \Norm{V}{\alpha}\sum_{k \geq 0} \frac{|B_k|}{k!} \sum_{\ell_1 + \cdots + \ell_k = \ell } \zeta_{\ell_1}\cdots \zeta_{\ell_k} .
$$
Now for any $\rho$ such that $\pi < \rho < 2\pi$, there exists a constant $M$ such that for all $k$, $|B_k| \leq k!\, M \rho^{-k}$. Hence we can write
$$
\forall\, \ell\geq 0,\quad \frac{1}{2C_\alpha}(\ell+1) \zeta_{\ell+1} \leq M \Norm{V}{\alpha}\sum_{k \geq 0} \rho^{-k }\sum_{\ell_1 + \cdots + \ell_k = \ell } \zeta_{\ell_1}\cdots \zeta_{\ell_k} .
$$
Let $\zeta(t)$ be the formal series $\zeta(t) = \sum_{\ell \geq 0} t^\ell \zeta_\ell$.
Multiplying the previous equation by $t^\ell$ and summing over $\ell \geq 0$, we find
$$
\frac{1}{2C_\alpha}\zeta'(t) \leq M \Norm{V}{\alpha} \sum_{k \geq 0} \rho^{-k}\zeta(t)^k = M \Norm{V}{\alpha} \frac{1}{1 - \zeta(t)/\rho}.
$$
Let $\eta(t)$ be the solution of the differential equation:
$$
\eta'(t) = 2 M C_\alpha \Norm{V}{\alpha} \frac{1}{1 - \eta(t)/\rho}, \quad \eta(0) = \pi.
$$
Taking $\rho = 3\pi /2$, we easily see that for $t \leq \frac{\pi}{32 M C_\alpha\Norm{V}{\alpha}}$, the solution can be written
$$
\eta(t) = \frac{3\pi}{2} \left( 1 - \sqrt{\frac19 - \frac{8}{3\pi} M C_\alpha\Norm{V}{\alpha}\, t}\right),
$$
and defines an analytic function of $t$. Expanding $\eta(t) = \sum_{\ell \geq 0} t^\ell\eta_\ell$, we see that the coefficients satisfy the relations $\eta_0 = \pi$ and
$$
\forall\, \ell\geq 0,\quad \frac{1}{2C_\alpha}(\ell+1) \eta_{\ell+1} = M \Norm{V}{\alpha}\sum_{k \geq 0} \rho^{-k }\sum_{\ell_1 + \cdots + \ell_k = \ell } \eta_{\ell_1}\cdots \eta_{\ell_k}
$$
with $\rho = \frac{3\pi}{2}$.
By induction, this shows that $\zeta_\ell \leq \eta_\ell$. Moreover, for all $z \in \mathbb{C}$ with $|z| \leq \frac{\pi}{32 M C_\alpha \Norm{V}{\alpha}}$, we have, as the coefficients $\zeta_\ell$ are positive,
$$
|\zeta(z)| = \left|\sum_{\ell = 0}^\infty \zeta_\ell z^\ell \right| \leq \sum_{\ell = 0}^\infty \zeta_\ell |z|^\ell = \zeta(|z|) \leq \eta(|z|) \leq \frac{3\pi}{2}.
$$
Using Cauchy estimates, we see that
$$
\forall\, \ell \geq 1, \quad \Norm{Z_\ell}{\alpha} = \frac{1}{2C_\alpha} \zeta_\ell = \frac1{2C_\alpha} \frac{\zeta^{(\ell)}(0)}{\ell! } \leq \frac{3\pi}{4C_\alpha} \Big(\frac{32 M C_\alpha\Norm{V}{\alpha}}{\pi} \Big)^\ell.
$$
The theorem is now proved by setting
$$
V(h) = Z_1,\quad \mbox{and}\quad W(h) = \sum_{\ell \geq 2} h^{\ell - 2} Z_\ell
$$
which defines a convergent power series for $|h| < h_0 = \frac{\pi}{32 M C_\alpha \Norm{V}{\alpha}}$. The estimate \eqref{truc1} on $V(h)$ is then an easy consequence of \eqref{truc4}, and the estimate on $W(h)$ follows from the geometric bound on $\Norm{Z_\ell}{\alpha}$ above.
\section{Modified energy}
We give now the proof of Proposition \ref{P1}.
For all $x \in \mathbb{R}$, we have
$$
\arctan(x) - x = -\int_{0}^x \frac{y^2}{1 + y^2} \mathrm{d} y.
$$
For $k \in \mathbb{Z}^d$, this yields
$$
\frac{2}{h}\arctan\big(\frac{h|k|^2}{2}\big) - |k|^2 = - \frac{2}{h} \int_{0}^{h|k|^2/2} \frac{y^2}{1 + y^2} \mathrm{d} y.
$$
Let $\gamma \in [0,2]$; then for all $y \geq 0$,
$$
\frac{y^2}{1 + y^2} \leq y^{\gamma}.
$$
Hence we have for all $k \in \mathbb{Z}^d$,
$$
\Big|\frac{2}{h}\arctan\big(\frac{h|k|^2}{2}\big) - |k|^2 \Big|\leq \frac{2}{h} \int_{0}^{h|k|^2/2} y^{\gamma}\mathrm{d} y \leq C h^{\gamma} |k|^{2\gamma + 2}.
$$
This shows that for all $v$,
\begin{equation}
\label{Ebdelta}
\Big| \langle v | -\frac{2}{h} \arctan\big(\frac{h\Delta}{2}\big) | v \rangle - \langle v | -\Delta | v \rangle\Big|
\leq C h^{\gamma} \Norm{v}{H^{1 + \gamma}}^2.
\end{equation}
Now we have
$$
\langle v \, | \, V(h) \, | \, v \rangle - \langle v \, | \, V \, | \, v \rangle = \displaystyle\sum_{k \geq 1 } \frac{B_k}{k!}
\langle v \, | \, i^k \mathrm{ad}_{Z_0(h)}^k(V) \, | \, v \rangle.
$$
Recall that $Z_0(h) = - 2 \arctan\big(\frac{h\Delta}{2}\big)$ is a positive operator. The operator $Z_0(h)^{1/2}$ is hence well defined, and for an operator $W$ we have in components
$$
(Z_0(h)^{1/2}W)_{k\ell} = \Big(2 \arctan\big(\frac{h|k|^2}{2}\big)\Big)^{1/2} W_{k\ell}.
$$
Hence we have for all $\alpha > 1$,
$$
\Norm{Z_0(h)^{1/2}W}{\alpha} \leq \sqrt{\pi} \Norm{W}{\alpha}
\quad \mbox{and}\quad\Norm{W Z_0(h)^{1/2}}{\alpha} \leq \sqrt{\pi} \Norm{W}{\alpha}.
$$
Now using Lemma \ref{ELB} and the fact that $Z_0(h)$ is symmetric, we have for all $v$ and every operator $W$,
\begin{multline*}
| \langle v \, | \,\mathrm{ad}_{Z_0(h)}(W) \, | v \rangle | \leq (\Norm{Z_0(h)^{1/2}W}{\alpha} +\Norm{W Z_0(h)^{1/2}}{\alpha}) \Norm{Z_0(h)^{1/2} v}{} \Norm{v}{}\\[2ex]
\leq 2 \sqrt{\pi}\Norm{W}{\alpha} \Norm{Z_0(h)^{1/2} v}{} \Norm{v}{}.
\end{multline*}
Hence we have
\begin{multline*}
\big| \langle v \, | \, V(h) \, | \, v \rangle - \langle v \, | \, V \, | \, v \rangle\big| \leq
\displaystyle 2 \sum_{k \geq 1 } \frac{|B_k|}{k!} \pi^{k-1/2} \Norm{V}{\alpha}
\Norm{Z_0(h)^{1/2} v}{} \Norm{v}{}\\[2ex]
\leq C \Norm{V}{\alpha}\Norm{Z_0(h)^{1/2} v}{} \Norm{v}{}.
\end{multline*}
Using \eqref{Ebdelta} with $\gamma = 0$, this shows that
$$
\big| \langle v \, | \, V(h) \, | \, v \rangle - \langle v \, | \, V \, | \, v \rangle\big| \leq C \Norm{V}{\alpha} h \Norm{v}{H^1}\Norm{v}{}.
$$
Finally, we easily have using \eqref{truc1} that
$$
\big| \langle v \, | \, W(h) \, | \, v \rangle \big| \leq C\Norm{V}{\alpha} h \Norm{v}{}^2.
$$
Summing the previous inequalities, with $\gamma = \beta$ in \eqref{Ebdelta}, we obtain
$$
\big|\langle\, u | S(h) | u \rangle - \langle\, u | -\Delta + V | u \rangle\big| \leq C h^\beta \Norm{u}{H^{1 + \beta}}^2 + C \Norm{V}{\alpha} h \Norm{u}{H^1}\Norm{u}{}
$$
and this yields the result.
\section{Bounds for the numerical solution}
We prove now Corollary \ref{C1}. Note that equation \eqref{Eenerg} is classical.
Using the fact that $V$ is symmetric, we have for all $n$, $\Norm{u^n}{} = \Norm{u^0}{}$, where $\Norm{\cdot}{}$ denotes the $L^2$ norm.
Using Lemma \ref{ELB}, we can write for all $v \in L^2$,
$$
\langle v | S(h) | v \rangle = \frac{1}{h} \langle v | -2 \arctan\big(\frac{h\Delta}{2}\big) | v \rangle + \langle v | V(h) + h W(h) \, | \, v \rangle
$$
whence using \eqref{truc1}, Lemma \ref{ELB} and the fact that $Z_0$ is a positive operator
$$
\langle v | \, S(h) | v \rangle \geq \frac{1}{h} \langle v | -2 \arctan\big(\frac{h\Delta}{2}\big) | v \rangle - C \Norm{V}{\alpha} \Norm{v}{}^2.
$$
Hence using \eqref{Eenerg} we have that for all $n$,
$$
\begin{array}{rcl}
\displaystyle\frac{1}{h} \langle u^n | - 2 \arctan\big(\frac{h\Delta}{2}\big)| u^n \rangle &\leq& \langle u^n | S(h) | u^n\rangle + C \Norm{V}{\alpha} \Norm{u^n}{}^2\\[2ex]
&\leq & \langle u^0 | S(h) | u^0\rangle + C \Norm{V}{\alpha} \Norm{u^0}{}^2.
\end{array}
$$
Using \eqref{Ecomp} with $\beta = 0$, we find that there exists a constant $C_0$ such that for all $n$,
\begin{equation}
\label{Eatan}
\frac{1}{h} \langle u^n | - 2 \arctan\big(\frac{h\Delta}{2}\big) | u^n \rangle \leq C_0\Norm{u^0}{H^1}^2.
\end{equation}
Now we have for all $x> 0$
\begin{equation}
\label{Ebornarc}
x > \frac12 \Longrightarrow\arctan x > \arctan\big(\frac12\big)\quad \mbox{and}\quad x \leq \frac12 \Longrightarrow \arctan x > \frac{2x}{3}.
\end{equation}
Applying these inequalities in \eqref{Eatan}, and considering separately the frequencies $h|k|^2 \leq 1$ and $h|k|^2 > 1$, immediately yields the result.
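The two elementary bounds \eqref{Ebornarc} are easily confirmed numerically; the following sketch (ours) checks them on a fine grid:

```python
import numpy as np

# Numerical confirmation of the two arctan bounds used to split the
# frequencies into h|k|^2 <= 1 and h|k|^2 > 1.

x = np.linspace(1e-6, 10.0, 100_000)
low, high = x <= 0.5, x > 0.5
ok_low = bool(np.all(np.arctan(x[low]) > 2 * x[low] / 3))
ok_high = bool(np.all(np.arctan(x[high]) > np.arctan(0.5)))
print(ok_low and ok_high)  # True
```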
\section{Higher order approximations}
\label{Section7}
In this section we further investigate the long time behaviour by numerical simulations and consider higher-order numerical schemes.
We perform the simulations with $d = 1$, $u^0 = 2/(2 - \cos(x))$ and $V(x) = \cos(x) + \sin(6x)$. In the next figures, we show the maximal size of the oscillations of the truncated $H^1$ norm
\begin{equation}
\label{EH120}
\Big(\sum_{k = -20}^{20} (1 + |k|^2) |u_k^n|^2 \Big)^{1/2}
\end{equation}
along the numerical solution $u^n$ from $t = 0$ to $t = 50$, and for stepsizes ranging from $h = 0.01$ to $h = 0.1$.
As expected, we see that this quantity is uniformly bounded for the splitting scheme \eqref{Esplit} (Figure 1).
\begin{figure}
\caption{Midpoint approximation of the exponential.}
\end{figure}
As explained in Remark \ref{NONO}, our method easily extends to the Strang splitting scheme \eqref{Estrang}.
Considering the alternative Strang splitting
$$
R(-ih\Delta/2) \exp(ihV) R(-ih\Delta/2),
$$
the same argument does not apply straightforwardly. The obstruction occurs in Lemma \ref{L1}, where $R(-ih\Delta)$ is replaced by $R(-ih\Delta/2)^2$ in the definition of the operator $Z_0$, which replaces $\pi$ with $2\pi$ in inequality \eqref{Enono}.
Nevertheless, as shown in Figure 2, the same uniform conservation phenomenon can be observed. This might be justified using the fact that the operator $Z_1$ defined in \eqref{EZ1nono} still makes sense in this situation.
Next we consider schemes of the form
\begin{equation}
\label{EsplitN}
\exp(ihV) \prod_{j = 1}^s R(-i\gamma_j h \Delta)
\end{equation}
where $\gamma_j \in \mathbb{R}$, $j = 1,\ldots,s$, are coefficients satisfying $\gamma_1 + \ldots + \gamma_s = 1$. Such a scheme provides a higher order approximation of the splitting scheme \eqref{EsplitD} for suitable $\gamma_j$ satisfying given algebraic conditions (see for instance \cite[Chap III]{HLW}). Of course, all these schemes remain symplectic and preserve the $L^2$ norm.
\begin{figure}
\caption{Strang splitting $R(-ih\Delta/2) \exp(ihV) R(-ih\Delta/2)$.}
\end{figure}
In Figures 3, 4 and 5, we consider successively classical symmetric composition methods of order $4$, $6$ and $8$ (see \cite[Chap V]{HLW} and the references therein). The method of order $4$ is the triple jump method for which $s= 3$,
\begin{equation}
\label{Egammas}
\gamma_1 = \gamma_3 = \frac{1}{2 - 2^{1/3}}, \quad \mbox{and}\quad \gamma_2 = -\frac{2^{1/3}}{2 - 2^{1/3}}.
\end{equation}
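As a quick sanity check (ours, not from the paper), these coefficients satisfy the consistency condition $\gamma_1+\gamma_2+\gamma_3 = 1$ together with $\gamma_1^3+\gamma_2^3+\gamma_3^3 = 0$, the standard condition under which a symmetric composition of a second-order method reaches order $4$:

```python
# Triple-jump coefficients: sum = 1 (consistency) and sum of cubes = 0
# (the condition giving order 4 for a symmetric composition).

c = 2.0 ** (1.0 / 3.0)
g1 = g3 = 1.0 / (2.0 - c)
g2 = -c / (2.0 - c)
print(g1 + g2 + g3, g1**3 + g2**3 + g3**3)  # 1 and 0, up to roundoff
```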
The method of order $6$ corresponds to the method given by {\sc Yoshida} (see \cite{Yoshida} and \cite[Section V.3.2]{HLW}) and requires $s = 7$, while the method of order $8$ is the method given by {\sc Suzuki \& Umeno}, see \cite{Suzuki}, and requires $s = 15$.
What we observe is that for the method of order $4$, the situation is similar to the previous cases (regularity conservation), but for the methods of order $6$ and $8$, resonances appear: for specific values of the stepsize, the regularity of the numerical solution deteriorates.
\begin{figure}
\caption{Order $4$ approximation of the exponential.}
\end{figure}
\begin{figure}
\caption{Order $6$ approximation of the exponential.}
\end{figure}
\begin{figure}
\caption{Order $8$ approximation of the exponential.}
\end{figure}
Finally, we plot in Figure 6 the same simulation for the ``exact'' splitting scheme \eqref{Esplit}. In this last situation, it is known that the resonances appear for step sizes $h$ such that $h (k^2 - \ell^2)$ is close to a multiple of $2\pi$ for some $k$ and $\ell \in \mathbb{Z}$ (see \cite{DF07}).
\begin{figure}
\caption{Exact splitting.}
\end{figure}
The fact that the method of order $4$ possesses a modified energy can easily be seen: with the values of $\gamma_1$, $\gamma_2$ and $\gamma_3$ given in \eqref{Egammas}, we have
$$
R(-i\gamma_1 h \Delta)R(-i\gamma_2 h \Delta)R(-i\gamma_3 h \Delta) = \exp( i Z_0)
$$
where
\begin{equation}
\label{Etj}
Z_0 = - 4\arctan \Big(\frac{h\Delta}{2(2 - 2^{1/3})}\Big) + 2 \arctan\Big(\frac{2^{1/3}h\Delta}{2(2 - 2^{1/3})}\Big) = - G(h\Delta/2)
\end{equation}
with
$$
G(x) = 4 \arctan \big(\frac{x}{2 - 2^{1/3}}\big) - 2 \arctan\big(\frac{2^{1/3}x}{2 - 2^{1/3}}\big).
$$
It is easy to see that for all $x>0$, $G$ is an increasing function with $G(x) \in (0,\pi)$. Hence Lemma \ref{L1} remains valid for this $Z_0$. Using the same techniques as before, and bounds like
\eqref{Ebornarc}, still valid for the function $G$, we can show the existence of a modified energy for this method, explaining the absence of resonances.
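These claims can be verified mode by mode. Assuming the midpoint approximation $R(z) = (1+z/2)/(1-z/2)$, the triple-jump product acts on the Fourier mode $k$ as $\exp(iG(h|k|^2/2))$; the sketch below (ours) checks this numerically, and the monotonicity of $G$ can be tested the same way:

```python
import numpy as np

# Mode-by-mode check of the identity for the triple jump, assuming
# R(z) = (1+z/2)/(1-z/2): on the mode k, Delta -> -k^2, and the product of
# the three unimodular factors equals exp(i G(h k^2 / 2)).

c = 2.0 ** (1.0 / 3.0)
gammas = [1.0 / (2.0 - c), -c / (2.0 - c), 1.0 / (2.0 - c)]

def R(z):
    return (1 + z / 2) / (1 - z / 2)

def G(x):
    return 4 * np.arctan(x / (2 - c)) - 2 * np.arctan(c * x / (2 - c))

h = 0.1
k = np.arange(1, 50)
lhs = np.prod([R(1j * g * h * k**2) for g in gammas], axis=0)
rhs = np.exp(1j * G(h * k**2 / 2))
err = np.max(np.abs(lhs - rhs))
print(err)  # roundoff-size
```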
Note that in the same spirit, we could consider symmetric composition methods based on the order two Strang splitting \eqref{Estrang} to build higher order methods of the form
$$
\prod_{j = 1}^s
\exp(i\gamma_j hV/2) R(-i\gamma_jh\Delta) \exp(i\gamma_jhV/2)
$$
to approximate \eqref{E1}. A general strategy to show the existence of a modified energy for this method would be to search for an operator $Z(t)$ such that for all $t > 0$,
$$
\exp(iZ(t)) = \prod_{j = 1}^s
\exp(i\gamma_j tV/2) R(-i\gamma_jh\Delta) \exp(i\gamma_jtV/2)
$$
with
$$
Z_0 = - \sum_{j =1}^s 2 \arctan(h\gamma_j \Delta/2).
$$
In the case of the triple jump method, this operator can be written as in \eqref{Etj}, and the same arguments as above show the existence of a modified energy for this method. We do not give the details here. The derivation of higher order methods possessing a modified energy is an interesting question that will be addressed in future studies.
\section*{Acknowledgment}
The authors would like to thank Philippe Chartier for fruitful discussions.
\end{document} | math | 33,785 |
\begin{document}
\title[Eigenvalue clusters]{Asymptotic Density of Eigenvalue Clusters for the Perturbed Landau
Hamiltonian}
\author[A.~Pushnitski]{Alexander Pushnitski}
\author[G.~Raikov]{Georgi Raikov}
\author[C.~Villegas-Blas]{Carlos Villegas-Blas}
\begin{abstract}
We consider the Landau Hamiltonian (i.e. the 2D Schr\"odinger
operator with constant magnetic field) perturbed by an electric
potential $V$ which decays sufficiently fast at infinity. The
spectrum of the perturbed Hamiltonian consists of clusters of
eigenvalues which accumulate to the Landau levels. Applying a
suitable version of the anti-Wick quantization, we investigate the
asymptotic distribution of the eigenvalues within a given cluster
as the number of the cluster tends to infinity. We obtain an
explicit description of the asymptotic density of the eigenvalues
in terms of the Radon transform of the perturbation potential $V$.
\end{abstract}
\maketitle
{\bf Keywords}: perturbed Landau Hamiltonian, asymptotic density for eigenvalue clusters,
anti-Wick quantization, Radon transform \\
{\bf 2010 AMS Mathematics Subject Classification}: 35P20, 35J10,
47G30, 81Q10\\
\section{Introduction and main results}\label{intro}
\subsection{Introduction}
Let
$$
H_0 : = \left(-i\frac{\partial}{\partial x}+\frac{B}{2}y\right)^2 +
\left(-i\frac{\partial}{\partial y}-\frac{B}{2}x\right)^2,
$$
be the self-adjoint operator defined initially on
$C_0^{\infty}({\mathbb R}^{2})$, and then closed in $L^2({\mathbb R}^{2})$. The
operator
$H_0$ is the Hamiltonian of a non-relativistic spinless 2D quantum
particle subject to a constant magnetic field of strength $B > 0$.
It is often called the Landau Hamiltonian in honor of the author
of the pioneering paper \cite{lan}.
The spectrum of $H_0$ consists of the eigenvalues (called Landau levels)
$\lambda_q =B (2q+1)$, $q \in {\mathbb Z}_+ := \{0,1,2,\ldots\}$. The multiplicity of each of these
eigenvalues is infinite, and so
$$
\sigma(H_0)=\sigma_\text{ess}(H_0)= \bigcup_{q=0}^{\infty}\{\lambda_q\},
\quad
\lambda_q=B(2q+1).
$$
Next, let $V \in C({\mathbb R}^{2}; {\mathbb R})$ satisfy the estimate
\begin{equation}
\abs{V({\bf x})}\leq C\jap{\mathbf x}^{-\rho}, \quad {\mathbf x} \in {\mathbb R}^{2},
\quad \rho>1,
\label{rho}
\end{equation}
where $\jap{\mathbf x} : =(1+\abs{\mathbf x}^2)^{1/2}$.
We also denote by $V$ the operator of multiplication by $V$
in $L^2({\mathbb R}^{2})$. Consider the perturbed Landau Hamiltonian $H =
H_0 + V$. The spectrum of $H$ consists of eigenvalue clusters
around the Landau levels. More precisely,
we have $\sigma_\text{ess}(H)=\sigma_\text{ess}(H_0)$ and so
all eigenvalues of $H$ in ${\mathbb R} \setminus \sigma_{\rm
ess}(H)$ have finite multiplicities and can accumulate only to the
Landau levels $\lambda_q$.
Our first preliminary result says that the eigenvalue
clusters shrink towards the Landau
levels as $O(q^{-1/2})$ for $q\to\infty$:
\begin{proposition}\label{p11}
Assume \eqref{rho};
then there exists $C_1>0$ such that for all
$q \in {\mathbb Z}_+$ one has
\begin{equation}
\sigma(H) \cap [\lambda_q -B, \lambda_q + B]\subset
(\lambda_q - {C_1}{\lambda}_q^{-1/2},
\lambda_q + {C_1}{\lambda}_q^{-1/2}).
\label{a7}
\end{equation}
\end{proposition}
The proof is given in Section~\ref{s3}.
\begin{remark} \label{r1}
\begin{enumerate}[(i)]
\item Obviously, the above estimate $O(\lambda_q^{-1/2})$ for the
width of the $q$th cluster can also be written as $O(q^{-1/2})$;
however, as we will see, $\lambda_q$ provides a more natural scale
than $q$. \item Simple considerations (see Remark~3.2 in
\cite{korpu}) show that the estimate $O(\lambda_q^{-1/2})$ cannot
be improved: the eigenvalue clusters have width $\geq
c\lambda_q^{-1/2}$ with $c>0$ (unless $V\equiv0$). This will also
follow from the main result of this paper. \item
Proposition~\ref{p11} was proven in \cite{korpu} for $V\in
C_0^\infty({\mathbb R}^{2})$. The proof we give here not only covers the case
of more general potentials $V$, but also is based on different
ideas than those of \cite{korpu}.
\end{enumerate}
\end{remark}
\subsection{Main result}
Our purpose is to describe the asymptotic density of eigenvalues
in the $q$th cluster as $q \to \infty$. Let $\mathds{1}_{\mathcal O}$
denote the characteristic function of the set $\mathcal{O} \subset
{\mathbb R}$. For $q \in {\mathbb Z}_+$ and $\mathcal{O} \in {\mathcal
B}({\mathbb R})$, the Borel $\sigma$-algebra on ${\mathbb R}$, set
$$
\mu_q(\mathcal{O}) : = {\rm
rank}\,\mathds{1}_{\lambda_q^{-1/2}\mathcal{O} + \lambda_q}(H).
$$
The measure $\mu_q$ is not finite, and not even $\sigma$-finite,
but if $\mathcal{O}$ is bounded, and its closure does not contain
the origin, we have $\mu_q(\mathcal{O}) < \infty$ for $q$ sufficiently large.
In particular, for any fixed bounded interval
$[\alpha,\beta]\subset {\mathbb R}\setminus\{0\}$ we have
\begin{equation}
\mu_q([\alpha,\beta]) = \sum_{\lambda_q+\alpha\lambda_q^{-1/2}\leq
\lambda\leq \lambda_q+\beta\lambda_q^{-1/2}} \dim\Ker(H-\lambda I)
< \infty
\end{equation}
for all sufficiently
large $q$. Below we study the asymptotics of the counting measure
$\mu_q$ as $q\to\infty$. In order to describe the limiting
measure, we need to fix some notation. We denote by ${\mathbb T}
\subset {\mathbb R}^{2}$ the circle of radius one, centered at the origin. The
circle ${\mathbb T}$ is endowed with the usual Lebesgue measure
normalized so that $\int_{{\mathbb T}} d \omega = 2\pi$. For
$\omega=(\omega_1,\omega_2)\in\mathbb T$, we denote
$\omega^\perp=(-\omega_2,\omega_1)$. We set
$$
\widetilde{V}(\omega, b)= \frac{1}{2 \pi} \int^{\infty}_{-\infty}
V(b\omega +t\omega^\perp)dt, \quad \omega \in {\mathbb T}, \quad b
\in{\mathbb R}.
$$
Thus, $\widetilde{V}$ is (up to a factor) the Radon transform of
$V$.
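For a concrete example (ours, not from the paper): for the Gaussian $V({\mathbf x}) = e^{-|{\mathbf x}|^2}$ one has $\widetilde{V}(\omega, b) = \frac{\sqrt{\pi}}{2\pi}\, e^{-b^2}$ for every direction $\omega$, since $|b\omega + t\omega^\perp|^2 = b^2 + t^2$. A short numerical check with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Normalized Radon transform of the Gaussian V(x) = exp(-|x|^2): independent
# of omega, and equal to sqrt(pi)/(2*pi) * exp(-b^2).

def V(x, y):
    return np.exp(-(x**2 + y**2))

def radon(omega, b):
    w1, w2 = omega
    # point b*omega + t*omega_perp with omega_perp = (-w2, w1)
    integrand = lambda t: V(b * w1 - t * w2, b * w2 + t * w1)
    return quad(integrand, -np.inf, np.inf)[0] / (2 * np.pi)

theta, b = 0.7, 1.3
omega = (np.cos(theta), np.sin(theta))
exact = np.sqrt(np.pi) / (2 * np.pi) * np.exp(-b**2)
print(abs(radon(omega, b) - exact))  # small
```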
In order to make our notation more concise, we find it convenient to introduce
the Banach space $X_\rho$ of all potentials $V\in C({\mathbb R}^2,{\mathbb R})$
that satisfy \eqref{rho} equipped with the norm
\begin{equation}
\norm{V}_{X_\rho} = \sup_{\mathbf x\in{\mathbb R}^2} \jap{\mathbf x}^\rho \abs{V(\mathbf x)}.
\label{xrho}
\end{equation}
Using this notation, by
an elementary calculation one finds
\begin{equation}
\abs{\widetilde{V}(\omega,b)}\leq C_\rho \norm{V}_{X_\rho} \jap{b}^{1-\rho},
\quad b\in{\mathbb R}.
\label{a4}
\end{equation}
Define the measure $\mu$ by
$$
\mu(\mathcal{O})= \frac{1}{2\pi} \, \left|
\widetilde{V}^{-1}(B^{-1}\mathcal{O})\right|, \quad {\mathcal O} \in
{\mathcal B}({\mathbb R}),
$$
where $|\cdot|$ stands for the Lebesgue measure (on ${\mathbb T}
\times {\mathbb R}$).
Evidently, for any bounded interval $[\alpha, \beta]
\subset {\mathbb R}\setminus \{0\}$ we have $\mu([\alpha, \beta]) <
\infty$. Moreover, estimate \eqref{a4} implies that
$\mu$ has a bounded support in
${\mathbb R}$, and
\begin{equation}
\int_{\mathbb R} \abs{t}^\ell d\mu(t)<\infty, \quad \forall
\; \ell>1/(\rho-1). \label{a9}
\end{equation}
Our main result is:
\begin{theorem}\label{th12}
Let $V\in C({\mathbb R}^{2}; {\mathbb R})$ satisfy \eqref{rho}. Then, for any function $\varrho \in C^{\infty}_0({\mathbb R}
\setminus\{0\})$, we have
\begin{equation}
\lim_{q \to \infty} \lambda_q^{-1/2} \int_{\mathbb R}
\varrho(\lambda)d\mu_q(\lambda) = \int_{\mathbb R}
\varrho(\lambda)d\mu(\lambda). \label{11a}
\end{equation}
\end{theorem}
\begin{remark} \label{r2a}
\begin{enumerate}[(i)]
\item
The asymptotics \eqref{11a} can be more explicitly written as
\begin{equation}
\lim_{q \to \infty} \lambda_q^{-1/2}
\Tr \varrho(\sqrt{\lambda_q}\, (H-\lambda_q))
=
\frac1{2\pi} \int_{\mathbb T} \, \int_{{\mathbb R}} \,
\varrho(B \widetilde{V}(\omega,b)) \, db \, d\omega. \label{11}
\end{equation}
\item By standard approximation arguments,
the asymptotics \eqref{11a} can be extended to a wider class of continuous functions $\varrho$.
Further,
it follows from
Theorem~\ref{th12} that if $[\alpha, \beta] \subset {\mathbb R}\setminus
\{0\}$, and $\mu(\{\alpha\})=\mu(\{\beta\})=0$, then
$$
\lim_{q \to \infty}\lambda_q^{-1/2}\mu_q([\alpha,\beta]) =
\mu([\alpha,\beta]).
$$
However, the assumption $\mu(\{\alpha\})=\mu(\{\beta\})=0$ does
not automatically hold, i.e. in general the measure $\mu$ may have
atoms. Indeed, a description of the class of all Radon transforms
$\widetilde{V}(\omega,b)$ of functions $V\in C_0^\infty({\mathbb R}^2)$ is
well known, see e.g. \cite[Theorem~2.10]{helg}. According to this
description, if $a\in C_0^\infty({\mathbb R})$ is an even real-valued
function, then
$\widetilde{V}(\omega,b) : =a(b)$, $b \in {\mathbb R}$, $\omega \in {\mathbb T}$, is a Radon transform
in this class. Of course, if the derivative $a'(b)$ vanishes on
some open interval, then the corresponding measure $\mu$ has an
atom.
\item
If $V\in C_0^\infty({\mathbb R}^{2})$, one can prove (see \cite{korpu})
that the trace in the l.h.s. of \eqref{11} has a complete asymptotic
expansion in inverse powers of $\lambda_q^{1/2}$,
but the formulae for the higher order coefficients
of this expansion are not known.
\end{enumerate}
\end{remark}
\subsection{Method of proof}
Let $P_q$ be the orthogonal projection in $L^2({\mathbb R}^{2})$ onto the subspace
${\rm Ker}\, (H_0-\lambda_qI)$.
For $\ell\geq1$, let $S_\ell$ be the Schatten-von Neumann class,
with the norm $\norm{\cdot}_\ell$; the usual operator norm is
denoted by $\norm{\cdot}$.
We first fix a natural number $\ell$ and examine the asymptotics of the trace
in the l.h.s. of \eqref{11}
for functions $\varrho\in C_0^\infty({\mathbb R})$ such that
$\varrho(\lambda)=\lambda^\ell$ for small $\lambda$.
We have the following
(fairly standard) technical result:
\begin{lemma}\label{l31}
For any real $\ell>1/(\rho-1)$, the operators
$(H-\lambda_q)^\ell \mathds{1}_{(\lambda_q - B, \lambda_q+B)}(H)$
and $(P_q V P_q)^\ell$ belong to the trace class and
\begin{equation} \label{31}
\Tr\{(H-\lambda_q)^\ell
\mathds{1}_{(\lambda_q - B,\lambda_q+B)}(H)\}
=
\Tr
(P_q V P_q)^\ell + o(\lambda_q^{-(\ell-1)/2} ), \quad q \to \infty.
\end{equation}
\end{lemma}
The proof of this
lemma is given in Subsection~\ref{s3c}.
This lemma essentially reduces the question to the study of the asymptotics
of traces of $(P_q V P_q)^\ell$.
Our main technical result is the
following statement:
\begin{theorem}\label{th13}
Let $V$ satisfy \eqref{rho} and let $B_0>0$.
\begin{enumerate}[\rm (i)]
\item For some $C=C(B_0)$, one has
\begin{equation}
\sup_{q\geq0} \sup_{B\geq B_0} \lambda_q^{1/2}B^{-1}\norm{P_q VP_q}
\leq
C\norm{V}_{X_\rho}.
\label{a6}
\end{equation}
\item
For any real $\ell>1/(\rho-1)$, we have $P_qVP_q\in S_\ell$, and
for some $C=C(B_0,\ell)$, the estimate
\begin{equation}
\sup_{q\geq0} \sup_{B\geq B_0} \lambda_q^{(\ell-1)/(2\ell)} B^{-1}\norm{P_qVP_q}_{\ell}
\leq C \norm{V}_{X_\rho}
\label{b1}
\end{equation}
holds true.
\item For any integer $\ell>1/(\rho-1)$, we have \begin{equation} \label{12}
\lim_{q\to \infty} \lambda_q^{(\ell-1)/2} \Tr(P_q VP_q)^\ell
=
\frac{B^\ell}{2\pi} \int_{{\mathbb T}} \, \int_{{\mathbb R}}
\;\widetilde{V}(\omega,b)^\ell \, db \, d\omega. \end{equation}
\end{enumerate}
\end{theorem}
Although in our main Theorem~\ref{th12} the strength $B$ of the magnetic
field is assumed to be fixed, we make the dependence on $B$
explicit in the estimates \eqref{a6}, \eqref{b1}, as these estimates
are of independent interest (see e.g. \cite{dp}), and can be used
in the study of other asymptotic regimes.
The proof of Theorem~\ref{th13} consists of two steps. In
Section~\ref{s4} we establish the unitary equivalence of the
Berezin-Toeplitz operator $P_qVP_q$ to a certain generalized
anti-Wick pseudodifferential operator ($\Psi$DO) whose symbol
$V_B$ is defined explicitly below in \eqref{sof30}. Further, in
Section~\ref{sec.c} we study this $\Psi$DO, prove appropriate
estimates, and analyze its asymptotic behavior as $q \to \infty$.
A combination of
\eqref{31} and \eqref{12} essentially yields
\eqref{11a} for a function $\varrho\in C_0^\infty({\mathbb R})$ such that
$\varrho(\lambda)=\lambda^\ell$ for small $\lambda$. After this,
the main result follows by an application of the Weierstrass
approximation theorem; this argument is given in
Subsection~\ref{s32}.
\begin{remark}\label{rmk}
In \cite{korpu} the limit \eqref{12} was computed for $\ell=1,2$,
but the result was written in a form not suggestive of the general
formula.
\end{remark}
\subsection{Semiclassical interpretation}
Consider the classical Hamiltonian function
\begin{equation} \label{gr1}
{\mathcal H}({\boldsymbol\xi},{\bf x}) = (\xi+\tfrac12 B y)^2+(\eta -\tfrac12
Bx)^2, \quad {\boldsymbol\xi} : = (\xi,\eta) \in {\mathbb R}^{2}, \quad {\bf x} : = (x,y) \in
{\mathbb R}^{2},
\end{equation}
in the phase space $T^* {\mathbb R}^{2} = {\mathbb R}^4$ with the standard symplectic
form. The projections onto the configuration space of the orbits
of the Hamiltonian flow of ${\mathcal H}$ are circles of radius
$\sqrt{E}/B$, where $E>0$ is the value of the energy corresponding
to the orbit. The classical particles move around these circles
with period $T_B=\pi/B$. The set of these orbits can be
parameterized by the energy $E>0$ and the center $\mathbf
c\in{\mathbb R}^2$ of a circle. Let us denote the path in the
configuration space corresponding to such an orbit by
$\gamma(\mathbf c,E,t)$, $t\in[0,T_B)$, and set
\begin{equation}
\langle V \rangle (\mathbf c,E) = \frac1{T_B} \int_0^{T_B}
V(\gamma(\mathbf c,E,t))dt, \quad T_B=\pi/B. \label{a1}
\end{equation}
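As $E\to\infty$ with $B$ fixed, a circular orbit meeting a fixed compact set is close to a straight line there, so the arc-length integral $2\pi R\,\langle V \rangle$ over an orbit of radius $R=\sqrt{E}/B$ approaches a line integral of $V$; this is the mechanism behind the calculation \eqref{a2} below. The following numerical sketch is only an illustration of this convergence and is not part of the argument; the bump potential, the impact parameter $b$, and the grid sizes are our own illustrative choices.

```python
import numpy as np

def V(x, y):
    # a smooth bump supported in the unit disk (illustrative choice)
    r2 = x**2 + y**2
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

def orbit_average(R, b, n=400000):
    # average of V over the circle of radius R whose lowest point is (0, b),
    # i.e. <V>(c, E) with R = sqrt(E)/B and center c = (0, R + b)
    th = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.mean(V(R * np.sin(th), R + b - R * np.cos(th)))

def line_integral(b, n=40001, L=2.0):
    # integral of V along the horizontal line y = b (an X-ray transform value)
    t = np.linspace(-L, L, n)
    return np.sum(V(t, b * np.ones_like(t))) * (t[1] - t[0])

b = 0.2
for R in (10.0, 100.0, 1000.0):
    # 2*pi*R*<V> is the arc-length integral of V over the orbit
    print(R, 2 * np.pi * R * orbit_average(R, b))
print(line_integral(b))
```

As $R$ grows, the printed orbit integrals approach the line integral along $y=b$, since only the nearly straight arc of the circle meets the support of $V$.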
For an energy $E>0$, consider the set $M_E$ of all orbits with
this energy. The set $M_E$ is a smooth manifold with coordinates
$\mathbf c\in{\mathbb R}^2$. It can be considered as the quotient of the
constant energy surface
$$
\Sigma_E = \{({\boldsymbol\xi},{\bf x}) \in {\mathbb R}^4 \mid {\mathcal H}({\boldsymbol\xi},{\bf x})=E\}
$$
with respect to the flow of ${\mathcal H}$. Restricting the
standard Lebesgue measure of ${\mathbb R}^4$ to $\Sigma_E$ and then taking
the quotient, we obtain the measure $B\, dc_1\, dc_2$ on $M_E$. An
elementary calculation shows that the r.h.s. of \eqref{11} can be
rewritten as
\begin{equation}
\frac1{2\pi} \int_{\mathbb T} \int_{{\mathbb R}}
\varrho(B\widetilde{V}(\omega,b)) db \, d\omega= \frac1{2\pi}
\lim_{E\to\infty} \frac1{\sqrt{E}}\int_{{\mathbb R}^2} \varrho(\sqrt{E}\,
\langle V \rangle (\mathbf c,E) ) \, B\, dc_1\, dc_2. \label{a2}
\end{equation}
The basis of this calculation is the fact that as $E\to\infty$,
the radius $\sqrt{E}/B$ of the circles representing the classical
orbits tends to infinity. Thus, the classical
orbits approximate straight lines on any compact domain of the
configuration space.
Given \eqref{a2}, we can rewrite the main result as
\begin{equation}
\lim_{q \to \infty} \frac{1}{\sqrt \lambda_q} \Tr \varrho(\sqrt
\lambda_q (H-\lambda_q)) = \frac1{2\pi} \lim_{E\to\infty}
\frac1{\sqrt{E}}\int_{{\mathbb R}^2} \varrho(\sqrt{E}\, \langle V \rangle
(\mathbf c,E) ) \, B\, dc_1\, dc_2. \label{a3}
\end{equation}
This agrees with the semiclassical intuition. Formula \eqref{a3}
corresponds to the well known ``averaging principle'' for systems
close to integrable ones. This principle states that a good
approximation is obtained if one replaces the original
perturbation by the one which results by averaging the original
perturbation along the orbits of the free dynamics. This method is
very old; quoting V.~Arnold \cite[Section 52]{arn}: ``In studying
the perturbations of planets on one another, Gauss proposed to
distribute the mass of each planet around its orbit proportionally
to time and to replace the attraction of each planet by the
attraction of the ring so obtained''.
\subsection{Related results}
\subsubsection{Asymptotics for eigenvalue clusters for manifolds
with closed geodesics} In spectral theory, results of this type
originate from the classical work by A.~Weinstein \cite{wein} (see
also \cite{CdV}). Weinstein considered
the operator $-\Delta_{\mathcal M} + V$, where $\Delta_{\mathcal
M}$ is the Laplace-Beltrami operator on a compact Riemannian
manifold ${\mathcal M}$ with periodic bicharacteristic flow (e.g.
a sphere), and $V \in C({\mathcal M};{\mathbb R})$.
In this case, all eigenvalues of $\Delta_{\mathcal M}$ have
finite multiplicities which however grow with the
eigenvalue number. Adding the perturbation $V$ creates clusters of
eigenvalues. Weinstein proved that the asymptotic density of
eigenvalues in these clusters can be described by the
density function obtained by averaging $V$ along the closed geodesics on ${\mathcal M}$.
Let us illustrate these results with the case ${\mathcal M}
= {\mathbb S}^2$. It is well known that the eigenvalues of $-\Delta_{{\mathbb
S}^2}$ are $\lambda_{q} = q (q + 1)$, $q \in {\mathbb
Z}_+$, and their multiplicities are $d_q = 2
q + 1$. For $V \in C({\mathbb S}^2; {\mathbb R})$ set
$$
\widetilde{V}(\omega) : = \frac{1}{2\pi} \int_0^{2\pi} V({\mathcal
C}_\omega(s)) ds, \quad \omega \in {\mathbb
S}^2,
$$
where ${\mathcal C}_\omega(s) \in {\mathbb S}^2$ is the great
circle orthogonal to $\omega$, and $s$ is the arc length on this
circle. Then for each $\varrho \in C_0^{\infty}({\mathbb R};
{\mathbb R})$ we have
\begin{equation} \label{end20}
\lim_{q \to \infty}\frac{\Tr\varrho(-\Delta_{{\mathbb
S}^2} + V - \lambda_q)}{d_q} = \int_{{\mathbb
S}^2} \varrho(\widetilde{V}(\omega)) dS(\omega)
\end{equation}
where $dS$ is the normalized Lebesgue measure on ${\mathbb
S}^2$. Since ${\mathbb S}^2$ can be identified
with its set of oriented geodesics ${\mathcal G}$, the r.h.s. of \eqref{end20} can be interpreted as an integral
with respect to the $SO(3)$--invariant normalized measure on
${\mathcal G}$.
This result admits extensions to the case ${\mathcal M} =
{\mathbb S}^n$ with $n>2$, and, more generally, to the case where
${\mathcal M}$ is a compact symmetric manifold of rank 1 (see
\cite{wein, CdV}).\\
In more recent works \cite{tvb, vb1, urvb}, the relation between
the quantum Hamiltonian of the hydrogen atom and the
Laplace-Beltrami operator on the unit sphere is exploited, and the
asymptotic distribution within the eigenvalue clusters of the
perturbed Hamiltonian hydrogen atom is investigated. The
asymptotic density of eigenvalues in these clusters was described
in terms of the perturbation averaged along the trajectories of
the unperturbed dynamics (i.e. the solutions to the Kepler
problem).
Among the main technical tools used in \cite{tvb, vb1, urvb}, which
originate from \cite{gp, tw}, are the Bargmann-type
representations of the particular quantum Hamiltonians considered,
implemented via the so-called Segal-Bargmann transforms. In our
analysis, generalized coherent states and associated anti-Wick
$\Psi$DOs closely related to the Bargmann representation and the
Segal-Bargmann transform appear again in a natural way (see
Section~\ref{s4}), although their role is different from the one
of their counterparts in \cite{tvb, vb1, urvb}.
Although this paper is inspired by \cite{wein,tvb, vb1,urvb}, much
of our construction (see Section~\ref{s3}) is based on the
analysis of \cite{korpu}.
In \cite{korpu} it was proven that for $V\in C_0^\infty({\mathbb R}^{2})$ the trace
in the l.h.s. of \eqref{12} has a complete asymptotic expansion
in inverse powers of $\lambda_q^{1/2}$.
However, the coefficients of this expansion have
not been computed explicitly; see Remark~\ref{rmk} above.
\subsubsection{Strong magnetic field asymptotics} It is
useful to compare our main result with the asymptotics as $B
\to \infty$ of the eigenvalues of $H$. It has been found in
\cite{r1} (see also \cite{ivrii}) that
\begin{equation} \label{gr50a}
\lim_{B \to \infty} B^{-1} \Tr \varrho(H-\lambda_q) = \frac{1}{2\pi} \int_{{\mathbb R}^{2}} \varrho(V({\bf x})) d{\bf x} =
\int_{\mathbb R} \varrho(t) dm(t)
\end{equation}
where $\varrho \in C_0^{\infty}({\mathbb R}\setminus \{0\})$, $q \in {\mathbb Z}_+$, $V \in L^p({\mathbb R}^{2})$, $p>1$,
and $m(\mathcal{O}) : = \frac{1}{2\pi} \left|V^{-1}(\mathcal{O})\right|$,
$\mathcal{O} \in {\mathcal B}({\mathbb R})$. Similarly to Theorem \ref{th12},
the proof of \eqref{gr50a} is based on an analogue of Theorem \ref{th13} (i) -- (ii)
(see Lemma \ref{l21} below), and the asymptotic relations
\begin{equation} \label{gr51a}
\lim_{B \to \infty} B^{-1} \Tr (P_q V P_q)^\ell = \lim_{B \to \infty} B^{-1}
\Tr P_q V^\ell P_q = \frac{1}{2\pi} \int_{{\mathbb R}^{2}} V({\bf x})^{\ell} d{\bf x}, \quad q \in {\mathbb Z}_+,
\end{equation}
with $V \in C_0^\infty({\mathbb R}^{2})$ and $\ell \in \mathbb{N}$, close in spirit to \eqref{12}.
Since $B m([\alpha, \beta]) = \frac{1}{2\pi} \left|V_B^{-1}([\alpha, \beta])\right|$,
$[\alpha, \beta] \subset {\mathbb R} \setminus \{0\}$, where $V_B$ (see \eqref{sof30})
is the symbol of the generalized anti-Wick $\Psi$DO to which $P_q V P_q$ is unitarily equivalent,
we find that \eqref{gr50a} is again a result of semiclassical nature. However, \eqref{gr51a}
implies that in the strong magnetic field regime the main asymptotic terms of $\Tr (P_q V P_q)^\ell$ and $\Tr P_q V^\ell P_q$ coincide, and hence in the first approximation the commutators $[V, P_q]$
are negligible, while \eqref{12} shows
that obviously this is not the case in the high energy regime considered in
the present article. Hence, Theorem~\ref{th12} retains ``more quantum flavor'' than \eqref{gr50a},
and its proof is technically much more involved.
\subsubsection{The spectral density of the scattering matrix for high
energies} In the recent work \cite{bupu} inspired by this paper,
D. Bulger and A. Pushnitski considered the scattering matrix
$S(\lambda)$, $\lambda>0$, for the operator pair $(-\Delta + V,
-\Delta)$ where $\Delta$ is the standard Laplacian acting in
$L^2({\mathbb R}^d)$, $d\geq2$, and $V \in C({\mathbb R}^d;{\mathbb R})$ is an electric
potential which satisfies an estimate analogous to \eqref{rho}.
Although the methods applied in \cite{bupu} are different from
ours, it turned out that the asymptotics as $\lambda \to \infty$
of the eigenvalue clusters for $S(\lambda)$ are written in terms
of the $X$-ray transform of $V$ in a manner similar to
\eqref{11a}.
\section{Unitary equivalence of Berezin-Toeplitz operators and generalized anti-Wick $\Psi$DOs}\label{s4}
\subsection{Outline of the section}
\label{ss21} From a methodological point of view, this section plays a
central role in the proof of Theorem \ref{th12}. Its principal goal
is to establish the unitary equivalence between the
Berezin-Toeplitz operators $P_qVP_q$, $q \in {\mathbb Z}_+$, and
some generalized anti-Wick $\Psi$DOs ${\rm Op}^{aw}_q(V_B)$ whose symbol
$V_B$ is defined explicitly in \eqref{sof30}. This equivalence is
proved in Theorem \ref{th.b2} below. The $\Psi$DOs ${\rm Op}^{aw}_q$,
introduced in Subsection \ref{ss24}, are quite similar to the
classical anti-Wick operators ${\rm Op}^{aw}_0$ (see \cite[Chapter
V, Section 2]{beshu}, \cite[Section 24]{shu}); the only difference
is that the quantization ${\rm Op}^{aw}_0$ is related to coherent
states built on the first eigenfunction $\varphi_0$ of the harmonic
oscillator \eqref{gr3}, while ${\rm Op}^{aw}_q$, $q \in {\mathbb
N}$,
is related to coherent states built on its $q$th eigenfunction $\varphi_q$. \\
In our further analysis of the operator ${\rm Op}^{aw}_q(V_B)$
performed in Section \ref{sec.c}, we also heavily use the
properties of the Weyl symbol of this operator. Thus, in
Subsections \ref{ss21} -- \ref{ss22} we introduce the Weyl
quantization ${\rm Op}^{w}$, and in Subsection \ref{ss25} we
briefly discuss its relation to ${\rm Op}^{aw}_q$. In particular,
we show that ${\rm Op}^{aw}_q(s) = {\rm Op}^{w}(s * \Psi_q)$ where
$s$ is a symbol from an appropriate class, and $2\pi \Psi_q$ is
the Wigner function associated with $\varphi_q$, defined
explicitly in \eqref{17a}. Therefore, the Berezin-Toeplitz
operator $P_qVP_q$, $q \in {\mathbb Z}_+$, with domain $P_q
L^2({\mathbb R}^{2})$, is unitarily equivalent to ${\rm Op}^{w}(V_B *
\Psi_q)$ (see Corollary \ref{fgr1}).
\subsection{Weyl $\Psi$DOs} \label{ss22}
Let $d \geq 1$. Denote by ${\mathcal S}({\mathbb R}^d)$ the Schwartz
class, and by ${\mathcal S}'({\mathbb R}^d)$ its dual class.
\begin{pr} \label{prvbp1a}
{\rm \cite[Lemma 18.6.1]{ho}} Let $s \in {\mathcal
S}'({\mathbb R}^{2d})$. Assume that $\hat{s} \in L^1({\mathbb R}^{2d})$ where
$\hat{s}$ is the Fourier transform of $s$, introduced explicitly
in \eqref{end2} below.
Then the
operator ${\rm Op}^w(s)$ defined initially as a mapping from
${\mathcal S}({\mathbb R}^d)$ into ${\mathcal S}'({\mathbb R}^d)$ by
\begin{equation} \label{gr52}
\left({\rm Op}^w(s)u\right)(x) = (2\pi)^{-d} \int_{{\mathbb R}^d}
\int_{{\mathbb R}^d} s\left(\frac{x+x'}{2},\xi\right)e^{i(x-x')\cdot \xi}
u(x')
dx'd\xi, \quad x \in {\mathbb R}^d,
\end{equation}
extends uniquely to an operator bounded in $L^2({\mathbb R}^d)$. Moreover,
\begin{equation} \label{end3}
\|{\rm Op}^w(s)\| \leq (2\pi)^{-d} \|\hat{s}\|_{L^1({\mathbb R}^{2d})}.
\end{equation}
\end{pr}
Some of the arguments of our proofs require estimates which are
more sophisticated than \eqref{end3}. Let $\Gamma({\mathbb R}^{2d})$, $d\geq 1$,
denote the set of functions $s: {\mathbb R}^{2d} \to {\mathbb C}$ such that
$$
\|s\|_{\Gamma({\mathbb R}^{2d})} : = \sup_{\{\alpha , \beta \in {\mathbb
Z}_+^d \; | \; |\alpha|, |\beta| \leq [\frac{d}{2}] + 1\}}
\sup_{(x,\xi) \in {\mathbb R}^{2d}} |\partial_x^{\alpha}
\partial_{\xi}^{\beta} s(x,\xi)| < \infty.
$$
\begin{pr} \label{prvbp1}
{\rm \cite[Corollary 2.5 (i)]{abd}} There exists a constant $c_0$
such that for any $s \in \Gamma({\mathbb R}^{2d})$, $d\geq 1$, we have
$$
\|{\rm Op}^w(s)\| \leq c_0\|s\|_{\Gamma({\mathbb R}^{2d})}.
$$
\end{pr}
Further, if $s\in
L^2({\mathbb R}^{2d})$, then, obviously, the operator $\Op(s)$ belongs to the
Hilbert-Schmidt class, and
\begin{equation}
\norm{\Op(s)}_{2}^2 = \frac{1}{(2\pi)^d}\int_{{\mathbb R}^{2d}} \abs{s(x,\xi)}^2\,
dx \,d\xi. \label{prvb2}
\end{equation}
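For $d=1$, the identity \eqref{prvb2} is easy to check numerically: by \eqref{gr52}, in the coordinates $u=(x+x')/2$, $v=x-x'$ the Schwartz kernel of ${\rm Op}^w(s)$ is $(2\pi)^{-1}\int_{\mathbb R} s(u,\xi)e^{iv\xi}\,d\xi$, and the squared Hilbert-Schmidt norm is the squared $L^2$ norm of this kernel. The following sketch is a sanity check on a Gaussian test symbol (for which both sides equal $1/4$); the symbol and grids are our own illustrative choices.

```python
import numpy as np

# Gaussian test symbol s(u, xi) = exp(-u^2 - xi^2), dimension d = 1
u = np.linspace(-8.0, 8.0, 401); du = u[1] - u[0]
xi = np.linspace(-8.0, 8.0, 401); dxi = xi[1] - xi[0]
v = np.linspace(-16.0, 16.0, 801); dv = v[1] - v[0]

s = np.exp(-u[:, None]**2 - xi[None, :]**2)

# kernel in center/difference coordinates:
# K(u, v) = (2*pi)^{-1} \int s(u, xi) e^{i v xi} d xi, cf. (gr52)
K = (s @ np.exp(1j * np.outer(xi, v))) * dxi / (2 * np.pi)

hs2_kernel = np.sum(np.abs(K)**2) * du * dv          # squared HS norm of Op^w(s)
hs2_symbol = np.sum(s**2) * du * dxi / (2 * np.pi)   # r.h.s. of (prvb2)
print(hs2_kernel, hs2_symbol)
```

The two printed values agree, reflecting that the change of variables $(x,x')\mapsto(u,v)$ has unit Jacobian and that Plancherel's theorem applies in the $v$ variable.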
Next, we describe the well known metaplectic unitary equivalence
of Weyl $\Psi$DOs whose symbols are mapped into each other by a linear
symplectic change of variables.
\begin{pr} \label{p42} {\em \cite[Chapter 7, Theorem A.2]{dsj}}
Let $\kappa: {\mathbb R}^{2d} \rightarrow {\mathbb R}^{2d}$, $d\geq 1$, be a
linear symplectic transformation, $s_1 \in \Gamma({\mathbb R}^{2d})$, and
$s_2 : = s_1 \circ \kappa$. Then there exists a unitary operator $U:
L^2({\mathbb R}^d) \rightarrow L^2({\mathbb R}^d)$ such that
$$
{\rm Op}^w(s_2) = U^* {\rm Op}^w(s_1) U.
$$
\end{pr}
\begin{remark} \label{r3}
\begin{enumerate}[(i)]
\item
The operator $U$ is called {\em the metaplectic operator} corresponding to the linear symplectic transformation $\kappa$.
There exists a one-to-one correspondence between metaplectic
operators and linear symplectic transformations, apart from a
constant factor of modulus 1 (see e.g. \cite[Theorem 18.5.9]{ho}). Moreover, every linear symplectic transformation $\kappa$ is a composition of a finite number of elementary linear symplectic maps (see e.g. \cite[Lemma 18.5.8]{ho}), and for each elementary linear symplectic map there exists an explicit simple metaplectic operator (see e.g. the proof of \cite[Theorem 18.5.9]{ho}).
\item Proposition \ref{p42} extends to a large class of not
necessarily bounded operators. In particular, it holds for Weyl $\Psi$DOs with
quadratic symbols.
\end{enumerate}
\end{remark}
\subsection{Generalized anti-Wick $\Psi$DOs}
\label{ss24}
In this subsection we introduce generalized anti-Wick $\Psi$DOs.
These operators are a special case of the $\Psi$DOs with
contravariant symbols whose theory has been developed in
\cite{ber}.
Introduce the harmonic oscillator
\begin{equation} \label{gr3}
h : = - \frac{d^2}{dx^2} + x^2,
\end{equation}
self-adjoint in $L^2({\mathbb R})$.
It is well known
that the spectrum of $h$ is purely discrete and simple, and
consists of the eigenvalues $2q+1$, $q \in {\mathbb Z}_+$, while its associated
real-valued eigenfunctions $\varphi_q$, normalized in $L^2({\mathbb R})$, can be written as
\begin{equation} \label{radi}
{\varphi}_q(x): =
\frac{{\rm H}_{q}(x) e^{-x^2/2}}{(\sqrt{\pi}2^{q} q!)^{1/2}},
\quad x \in {\mathbb R}, \quad q \in {\mathbb Z}_+,
\end{equation}
where
\begin{equation} \label{prvb28}
{\rm H}_q(x): = (-1)^q e^{x^2/2} \left(\frac{d}{dx} - x\right)^q
e^{-x^2/2}, \quad x \in {\mathbb R}, \quad q \in {\mathbb Z}_+,
\end{equation}
are the Hermite polynomials. Introduce the generalized coherent states (see e.g. \cite{rs})
\begin{equation} \label{10d04}
\varphi_{q;x,\xi}(y) : = e^{i\xi
y} \varphi_q(y-x), \quad y \in {\mathbb R}, \quad (x,\xi) \in {\mathbb R}^{2},
\end{equation}
so that $\varphi_q = \varphi_{q;0,0}$. Note that if $f \in L^2({\mathbb R})$,
then
\begin{equation} \label{17}
\|f\|^2_{L^2({\mathbb R})} = (2\pi)^{-1}\int_{{\mathbb R}^{2}}|\langle f,
\varphi_{q;x,\xi}\rangle|^2dxd\xi
\end{equation}
where $\langle \cdot , \cdot\rangle$ denotes the scalar product in
$L^2({\mathbb R})$.
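Both the normalization of $\varphi_q$ in \eqref{radi} and the completeness relation \eqref{17} can be verified numerically. The sketch below is a sanity check only; the Gaussian test function $f$ and all grids are our own illustrative choices, and numpy's hermval routine evaluates the same (physicists') Hermite polynomials ${\rm H}_q$ as \eqref{prvb28}.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

def phi(q, y):
    # phi_q from (radi); hermval evaluates the physicists' Hermite H_q
    c = np.zeros(q + 1); c[q] = 1.0
    return hermval(y, c) * np.exp(-y**2 / 2) / np.sqrt(np.sqrt(pi) * 2**q * factorial(q))

q = 2
y = np.linspace(-12.0, 12.0, 1201); dy = y[1] - y[0]
print(np.sum(phi(q, y)**2) * dy)               # ~ 1: normalization of phi_q

# completeness relation (17) for a Gaussian test function f
f = np.exp(-(y - 0.5)**2)
norm2 = np.sum(f**2) * dy
xs = np.linspace(-8.0, 8.0, 161); dx = xs[1] - xs[0]
xis = np.linspace(-8.0, 8.0, 161); dxi = xis[1] - xis[0]
E = np.exp(-1j * np.outer(xis, y))             # e^{-i xi y} on the grid
total = 0.0
for x in xs:
    g = f * phi(q, y - x)
    coeffs = (E @ g) * dy                      # <f, phi_{q;x,xi}> for all xi
    total += np.sum(np.abs(coeffs)**2) * dxi * dx
print(total / (2 * pi), norm2)                 # should agree, cf. (17)
```

The truncation of the $(x,\xi)$ integration to a finite box is harmless here because the coherent-state overlaps decay like Gaussians in both variables for this $f$.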
Introduce the orthogonal projection
\begin{equation} \label{end15}
p_{q;x,\xi} : = |\varphi_{q;x,\xi}\rangle
\langle\varphi_{q;x,\xi}| : L^2({\mathbb R}) \to L^2({\mathbb R}), \quad q \in
{\mathbb Z}_+, \quad (x,\xi) \in {\mathbb R}^{2}.
\end{equation}
Let $s \in L^1({\mathbb R}^{2}) + L^{\infty}({\mathbb R}^{2})$. Define
$$
{\rm Op}_q^{aw}(s): = (2\pi)^{-1} \int_{{\mathbb R}^{2}} s(x,\xi) p_{q;x,\xi}
dx d\xi
$$
as the operator generated in $L^2({\mathbb R})$ by the bounded
sesquilinear form
\begin{equation} \label{8d03} F_{q,s}[f,g] : = (2\pi)^{-1}
\int_{{\mathbb R}^{2}} s(x,\xi) \langle f, \varphi_{q;x,\xi}\rangle
\overline{\langle g, \varphi_{q;x,\xi}\rangle} dx d\xi, \quad f,g
\in L^2({\mathbb R}).
\end{equation}
We will call ${\rm Op}_q^{aw}(s)$ the operator with {\em
anti-Wick symbol of order $q$} equal to $s$. We introduce these operators only in
the case of dimension $d=1$ since this is sufficient for our purposes; of course, their definition extends easily to any dimension $d \geq 1$. \\
Note that the quantization ${\rm Op}_q^{aw}$ with $q = 0$
coincides with the standard anti-Wick one (see e.g.
\cite[Section 24]{shu}). In the following lemma we summarize some
basic properties of generalized anti-Wick $\Psi$DOs which
follow immediately from the corresponding properties of general
contravariant $\Psi$DOs.
\begin{lemma} \label{lgr1} \cite[Section 24]{shu} \cite[Section
5.3]{beshu}
{\em (i)} Let $s \in L^{\infty}({\mathbb R}^{2})$. Then we have
\begin{equation} \label{8d03a}
\|{\rm Op}_q^{aw}(s)\| \leq \|s\|_{L^{\infty}({\mathbb R}^{2})}.
\end{equation}
{\em (ii)} Let $s \in L^{\ell}({\mathbb R}^{2})$ with $\ell \in [1,\infty)$. Then we have
\begin{equation} \label{8d04}
\|{\rm Op}_q^{aw}(s)\|_\ell^\ell \leq (2\pi)^{-1}
\|s\|^\ell_{L^\ell({\mathbb R}^{2})}.
\end{equation}
\end{lemma}
\subsection{Relation between generalized anti-Wick and Weyl $\Psi$DOs}
\label{ss25}
For $q \in {\mathbb Z}_+$ set
\begin{equation} \label{17a}
\Psi_q(x,\xi) = \frac{(-1)^q}{\pi}
{\rm L}_q(2(x^2 + \xi^2)) e^{-(x^2 + \xi^2)}, \quad (x,\xi) \in
{\mathbb R}^{2},
\end{equation}
where
\begin{equation} \label{lagpol}
{\rm L}_q(t) : = \frac{1}{q!} e^t \frac{d^q(t^q
e^{-t})}{dt^q} = \sum_{k=0}^q \binom{q}{k} \frac{(-t)^k}{k!},
\quad t \in {\mathbb R}, \end{equation}
are the Laguerre polynomials.
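The two expressions for ${\rm L}_q$ in \eqref{lagpol} can be checked against each other symbolically, together with the normalization $\int_{{\mathbb R}^{2}}\Psi_q(x,\xi)\,dx\,d\xi=1$: in polar coordinates the latter reduces to $(-1)^q\int_0^\infty {\rm L}_q(2u)e^{-u}\,du=1$. This reduction is our own consistency check (consistent with the projections $p_{q;x,\xi}$ below having trace one) and is not asserted in the text. A sympy sketch:

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)

def L_rodrigues(q):
    # Rodrigues-type expression in (lagpol)
    return sp.simplify(sp.exp(t) / sp.factorial(q) * sp.diff(t**q * sp.exp(-t), t, q))

def L_sum(q):
    # explicit sum in (lagpol)
    return sum(sp.binomial(q, k) * (-t)**k / sp.factorial(k) for k in range(q + 1))

for q in range(5):
    print(q, sp.expand(L_rodrigues(q)))   # agrees with L_sum(q) and sp.laguerre(q, t)

# normalization of Psi_q: (-1)^q * int_0^oo L_q(2u) e^{-u} du = 1
for q in range(4):
    print(q, sp.integrate((-1)**q * sp.laguerre(q, 2 * u) * sp.exp(-u), (u, 0, sp.oo)))
```
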
\begin{lemma} \label{l41}
For a fixed $(x,\xi) \in {\mathbb R}^{2}$ we have
$p_{q;x,\xi} = \Op (\varsigma_{q;x,\xi})$
where $p_{q;x,\xi}$ is the orthogonal projection defined in \eqref{end15}, and
\begin{equation} \label{gr50}
\varsigma_{q;x,\xi}(x',\xi'): =
2\pi \Psi_q(x'-x,\xi'-\xi), \quad (x',\xi') \in {\mathbb R}^{2}.
\end{equation}
\end{lemma}
\begin{proof}
Using the well-known relation between the Schwartz kernel of a
linear operator and its Weyl symbol (see e.g. \cite[Eq.
(18.5.4)'']{ho}), we find that the Weyl symbol $\varsigma_{q;x,\xi}$
of the projection $p_{q;x,\xi}$ satisfies
\begin{equation} \label{gr51}
\varsigma_{q;x,\xi}(x',\xi') =
\int_{\mathbb R} e^{-iv\xi'}\varphi_{q;x,\xi}(x'+v/2) \overline
{\varphi_{q;x,\xi}(x'-v/2)} dv.
\end{equation}
By \eqref{radi} and \eqref{10d04},
$$
\int_{\mathbb R} e^{-iv\xi'}\varphi_{q;x,\xi}(x'+v/2) \overline
{\varphi_{q;x,\xi}(x'-v/2)} dv =
$$
\begin{equation} \label{18}
\frac{1}{\sqrt{\pi}2^q q!} \int_{{\mathbb R}} e^{iv(\xi-\xi')} {\rm
H}_q(x' + \frac{1}{2} v - x) {\rm H}_q(x' - \frac{1}{2} v - x)
e^{-(x' + \frac{1}{2} v - x)^2/2} e^{-(x' - \frac{1}{2} v -
x)^2/2} dv.
\end{equation}
Changing the variable of integration
$v = 2(t + i(\xi -\xi'))$, and bearing in mind the parity of the Hermite polynomial
${\rm H}_q$, we get
$$
\int_{{\mathbb R}} e^{iv(\xi-\xi')} {\rm H}_q(x' + \frac{1}{2} v - x) {\rm
H}_q(x' - \frac{1}{2} v - x) e^{-(x' + \frac{1}{2} v - x)^2/2}
e^{-(x' - \frac{1}{2} v - x)^2/2} dv =
$$
\begin{equation} \label{19} 2(-1)^q e^{-(x'-x)^2 - (\xi'-\xi)^2} \int_{{\mathbb R}} e^{-t^2}
{\rm H}_q(t-(x-x'-i(\xi-\xi'))) {\rm H}_q(t+ x-x'+i(\xi-\xi')) dt.
\end{equation} Employing the relation between the Laguerre polynomials and
the integrals of Hermite polynomials (see e.g. \cite[Eq.
7.377]{grry}), we obtain
$$
\int_{{\mathbb R}} e^{-t^2} {\rm H}_q(t-(x-x'-i(\xi-\xi'))) {\rm
H}_q(t+x-x'+i(\xi-\xi')) dt =
$$
\begin{equation} \label{20z} \sqrt{\pi}2^q q! {\rm L}_q(2((x'-x)^2+(\xi-\xi')^2)).
\end{equation} Putting together \eqref{gr51} -- \eqref{20z}, we obtain
\eqref{gr50}.
\end{proof}
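The identity of Lemma \ref{l41} can also be confirmed numerically: for $(x,\xi)=(0,0)$ the integrand in \eqref{gr51} is even in $v$ (so the phase reduces to a cosine), and the integral should equal $2\pi\Psi_q(x',\xi')$ with $\Psi_q$ as in \eqref{17a}. The sketch below evaluates both sides at sample points; it is a sanity check only, and the grids and sample points are our own illustrative choices.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from numpy.polynomial.laguerre import lagval
from math import factorial, pi

def phi(q, y):
    # phi_q from (radi); hermval evaluates the physicists' Hermite H_q
    c = np.zeros(q + 1); c[q] = 1.0
    return hermval(y, c) * np.exp(-y**2 / 2) / np.sqrt(np.sqrt(pi) * 2**q * factorial(q))

def symbol_numeric(q, xp, xip):
    # r.h.s. of (gr51) with (x, xi) = (0, 0); even integrand => cosine phase
    v = np.linspace(-20.0, 20.0, 4001)
    f = np.cos(v * xip) * phi(q, xp + v / 2) * phi(q, xp - v / 2)
    return np.sum(f) * (v[1] - v[0])

def symbol_closed(q, xp, xip):
    # 2*pi*Psi_q(xp, xip) from (17a), as claimed in (gr50)
    c = np.zeros(q + 1); c[q] = 1.0
    r2 = xp**2 + xip**2
    return 2 * (-1)**q * lagval(2 * r2, c) * np.exp(-r2)

for q in (0, 1, 3):
    print(q, symbol_numeric(q, 0.7, -0.3), symbol_closed(q, 0.7, -0.3))
```

For instance, at the origin and $q$ odd both sides equal $-2$, since $\varphi_q$ is odd.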
\begin{remark} \label{r5} Let $\psi \in L^2({\mathbb R})$ and $\|\psi\|_{L^2({\mathbb R})} =
1$. Then the Weyl symbol of the rank-one orthogonal projection
$|\psi\rangle \langle\psi|$ is called the Wigner function
associated with $\psi$ (see e.g. \cite[Definition 2.2]{tp}). Thus,
Lemma \ref{l41} tells us, in particular, that $2\pi \Psi_q$ is the
Wigner function associated with $\varphi_q$.
\end{remark}
Lemma \ref{l41}
immediately entails the following
\begin{follow} \label{f42}
Let $s \in L^1({\mathbb R}^{2}) + L^{\infty}({\mathbb R}^{2}) $. Then we have
\begin{equation} \label{9d05}
{\rm Op}^{aw}_q(s) = {\rm Op}^{w}(\Psi_q * s).
\end{equation}
\end{follow}
\subsection{Metaplectic mapping of the operators $H_0, P_q$ and $V$}
\label{ss23} For ${\bf x} = (x,y) \in {\mathbb R}^{2}, \; {\boldsymbol\xi} = (\xi,\eta)
\in {\mathbb R}^{2}$, set
\begin{equation} \label{sof22}
{\varkappa}_{B}({\bf x}, {\boldsymbol\xi}): =
\left(\frac{1}{\sqrt{B}} (x-\eta), \frac{1}{\sqrt{B}} (\xi-y),
\frac{\sqrt{B}}{2}(\xi+y), -\frac{\sqrt{B}}{2}(\eta+x)\right).
\end{equation}
Evidently, the transformation ${\varkappa}_B$ is linear and
symplectic. Define the unitary operator ${\mathcal U}_{B}:
L^2({\mathbb R}^{2}) \to L^2({\mathbb R}^{2})$ by
\begin{equation} \label{133}
({\mathcal U}_{B}
u)(x,y): = \frac{\sqrt{B}}{2\pi} \int_{{\mathbb R}^{2}}
e^{i\phi_{B}(x,y;x',y')} u(x',y') dx'dy'
\end{equation}
where
$$
\phi_{B}(x,y;x',y'): = B \frac{xy}{2} + B^{1/2}(xy' - yx') - x'y'.
$$
Writing ${\varkappa}_{B}$ as a product of elementary linear
symplectic transformations (see e.g. \cite[Lemma 18.5.8]{ho}), we
can easily check that
${\mathcal U}_B$ is a metaplectic operator corresponding to
${\varkappa}_B$. Note that
$$
{\mathcal H}\circ\varkappa_B({\bf x}, {\boldsymbol\xi}) = B(\xi^2 + x^2), \quad {\bf x} = (x,y) \in {\mathbb R}^{2}, \quad {\boldsymbol\xi} = (\xi, \eta) \in {\mathbb R}^{2},
$$
where ${\mathcal H}$ is the Weyl symbol of the operator $H_0$ defined in \eqref{gr1}.
On the other hand, $B(\xi^2 + x^2)$
is the Weyl
symbol of the
operator $B(h \otimes I_y)$ self-adjoint in $L^2({\mathbb R}^{2}_{x,y})$
where $h$ is
the harmonic oscillator \eqref{gr3},
acting in $L^2({\mathbb R}_x)$, and
$I_y$ is the identity operator in $L^2({\mathbb R}_y)$. Denote by $p_q =
|\varphi_q\rangle\langle\varphi_q| = p_{q;0,0}$ the orthogonal
projection onto ${\rm Ker}\,(h-2q-1)$, $q \in {\mathbb Z}_+$.
Applying Proposition \ref{p42} with $\kappa= \varkappa_B$, and
bearing in mind Remark \ref{r3} (ii), we obtain the following
\begin{follow} \label{f41}
(i) We have
\begin{equation} \label{sof20}
{\mathcal U}_{B}^* H_0 {\mathcal U}_{B} = B\left(h \otimes
I_y\right),
\end{equation}
\begin{equation} \label{sof21}
{\mathcal U}_{B}^* P_q {\mathcal U}_{B} = p_q \otimes I_y, \quad q
\in {\mathbb Z}_+.
\end{equation}
(ii) If $V \in \Gamma({\mathbb R}^{2})$, then
\begin{equation} \label{sof32}
{\mathcal U}_{B}^* V
{\mathcal U}_{B} = {\rm Op}^w({\bf V}_B)
\end{equation}
where
\begin{equation} \label{8d01}
{\bf V}_B (x,y;\xi,\eta) : = V(B^{-1/2}(x-\eta),
B^{-1/2}(\xi-y)), \quad (x,y;\xi,\eta) \in {\mathbb R}^4.
\end{equation}
\end{follow}
\begin{remark} \label{r2}
Various versions of the symplectic transformation $\varkappa_B$ in
\eqref{sof22} and the corresponding metaplectic operator
${\mathcal U}_B$ in \eqref{133} have been used in the spectral
theory of the perturbations of the Landau Hamiltonian (see e.g.
\cite{helsjo}). Of course, the close relation between the Landau
Hamiltonian $H_0$ and the harmonic oscillator $h$ is well-known
since the seminal work \cite{lan} where the basic
spectral properties of $H_0$ were first described.\\
\end{remark}
\subsection{Unitary equivalence of $P_q V P_q$ and ${\rm Op}_q^{aw}(V_B)$}
\label{ss26}
Set
\begin{equation} \label{sof30}
V_B(x,y)=V(-B^{-1/2}y,-B^{-1/2}x), \quad (x,y)\in{\mathbb R}^{2}.
\end{equation}
\begin{theorem}\label{th.b2}
For any $V\in L^1({\mathbb R}^{2}) + L^\infty({\mathbb R}^{2})$ and $q \in {\mathbb
Z}_+$, we have
\begin{equation} \label{sof31}
{\mathcal U}_B^* P_q VP_q {\mathcal U}_B = p_q\otimes {\rm Op}_q^{aw} (V_B).
\end{equation}
\end{theorem}
For the proof of Theorem \ref{th.b2} we need some well known estimates for Berezin-Toeplitz
operators:
\begin{lemma} \label{l21} {\rm \cite[Lemma 5.1]{r0}, \cite[Lemma
5.1]{fr}} Let $V \in L^\ell({\mathbb R}^{2})$, $\ell \in [1,\infty)$. Then
$P_q V P_q \in S_\ell( L^2({\mathbb R}^{2}))$, and $\|P_q V P_q\|_{\ell}^\ell
\leq \frac{B}{2\pi} \|V\|^\ell_{L^{\ell}({\mathbb R}^{2})}$, $q \in {\mathbb
Z}_+$. Moreover, if $V \in L^1({\mathbb R}^{2})$, then
\begin{equation} \label{gr18}
\Tr P_q V P_q = \frac{B}{2\pi} \int_{{\mathbb R}^{2}} V({\bf x}) d{\bf x}, \quad q \in {\mathbb Z}_+.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{th.b2}]
Assume first that $V \in C^{\infty}_0({\mathbb R}^{2})$. Then, by
\eqref{sof21}, \eqref{sof32}, and \eqref{8d01},
\begin{equation} \label{8d07}
{\mathcal U}_B^* P_q V P_q {\mathcal
U}_B = (p_q \otimes I_y) {\rm Op}^{w}({\bf V}_B) (p_q \otimes
I_y). \end{equation} Let $u \in {\mathcal S}({\mathbb R}^{2})$. Set
$$
u_q(y) : =
\int_{{\mathbb R}} u(x,y)\varphi_q(x)dx.
$$
Then we have
$$
\langle{\mathcal U}_B^* P_q V P_q {\mathcal U}_B u,
u\rangle_{L^2({\mathbb R}^{2})} = \langle{\rm Op}^{w}({\bf V}_B)
(\varphi_q\otimes u_q), (\varphi_q\otimes u_q)\rangle_{L^2({\mathbb R}^{2})} =
$$
$$
\frac{1}{(2\pi)^{2}}\int_{{\mathbb R}^6} V_B ((y_1+y_2)/2-\xi, \eta -
(x_1+x_2)/2 )\, e^{i[(x_1-x_2)\xi + (y_1-y_2)\eta]} \,\times
$$
$$
\varphi_q(x_1) u_q(y_1) \;
\varphi_q(x_2) \overline{u_q(y_2)} \;dx_1 dx_2 \,dy_1 dy_2 \,d\xi
d\eta =
$$
$$
\frac{1}{(2\pi)^{2}}\int_{{\mathbb R}^5} V_B((y_1+y_2)/2-y', \eta -
\eta')\,e^{i(y_1-y_2)\eta}\, \times
$$
$$
\left(\int_{{\mathbb R}} \varphi_q(\eta'+v/2) \varphi_q(\eta' -
v/2)e^{ivy'}dv\right) u_q(y_1) \overline{u_q(y_2)} d\eta'd\eta
dy'dy_1dy_2 =
$$
$$
\frac{1}{2\pi}\int_{{\mathbb R}^5} \Psi_q(y',\eta') V_B ((y_1+y_2)/2-y',
\eta - \eta') e^{i(y_1-y_2)\eta}u_q(y_1)
\overline{u_q(y_2)}dy_1dy_2dy' d\eta d\eta'=
$$
\begin{equation} \label{9d01}
\langle {\rm Op}^{w}(V_B * \Psi_q)u_q, u_q \rangle_{L^2({\mathbb R})} =
\langle {\rm Op}_q^{aw}(V_B)u_q, u_q \rangle_{L^2({\mathbb R})} = \langle(p_q \otimes {\rm Op}_q^{aw}(V_B))u, u\rangle_{L^2({\mathbb R}^{2})}.
\end{equation} To obtain the first identity, we have utilized Corollary
\ref{f41}. To establish the second identity, we have used
\eqref{gr52}, \eqref{8d01}, and \eqref{sof30}. To get the third
identity, we have changed the variables $x_1 = \eta' + v/2$, $x_2
= \eta' - v/2$, $\xi = y'$. To obtain the fourth identity, we have
used \eqref{gr50} -- \eqref{gr51}
with $\xi' = y'$, $x' = \eta'$, and $x =0$, $\xi = 0$, taking
into account that $\Psi_q(\eta', -y') = \Psi_q(y',\eta')$. To deduce the fifth identity, we have applied
\eqref{gr52}, bearing in mind the symmetry of
the convolution $\Psi_q * V_B = V_B * \Psi_q$, and for the sixth
identity, we have applied \eqref{9d05} with $s = V_B$. Finally,
the last identity is obvious. Now, \eqref{9d01} entails
\eqref{sof31} in the case $V \in C_0^{\infty}({\mathbb R}^{2})$. \\
Further, let $V \in L^1({\mathbb R}^{2})$, and pick a sequence
$\left\{V_m\right\}$ of functions $V_m \in C_0^{\infty}({\mathbb R}^{2})$ such
that $V_m \rightarrow V$ in $L^1({\mathbb R}^{2})$ as $m \to \infty$. Then by
Lemma \ref{l21} and the unitarity of ${\mathcal U}_B$, we have
$$
\lim_{m \to \infty}\| {\mathcal U}_B^* P_q V_m P_q
{\mathcal U}_B - {\mathcal U}_B^* P_q V P_q {\mathcal U}_B\|_1 = 0.
$$
Similarly, it follows from \eqref{8d04} with $\ell = 1$ and
\eqref{9d05} that
$$
\lim_{m \to \infty}\| p_q \otimes {\rm Op}_q^{aw}(V_{m,B}) -
p_q \otimes {\rm Op}_q^{aw}(V_B)\|_1 = 0.
$$
Hence, \eqref{sof31} is valid for $V \in L^1({\mathbb R}^{2})$.\\
Finally, let now
$V = V_1 + V_2$ with $V_1\in L^1({\mathbb R}^{2})$ and $V_2\in
L^{\infty}({\mathbb R}^{2})$. Denote by $\chi_R$ the characteristic function
of a disk of radius $R>0$ centered at the origin. Then $V_1 +
\chi_R V_2 \in L^1({\mathbb R}^{2})$. Evidently,
$$
\wlim_{R \to \infty} {\mathcal U}_B^* P_q (V_1 + \chi_R
V_2) P_q {\mathcal U}_B = {\mathcal U}_B^* P_q V P_q {\mathcal
U}_B,
$$
while \eqref{8d03} entails
$$
\wlim_{R \to \infty} p_q \otimes {\rm Op}_q^{aw}((V_1 +
\chi_R V_2)_B) = p_q \otimes {\rm Op}_q^{aw}(V_B),
$$
which yields \eqref{sof31} in the general case.
\end{proof}
Combining Theorem~\ref{th.b2} and Corollary~\ref{f42}, we obtain
the following
\begin{follow}\label{fgr1}
Let $V\in L^1({\mathbb R}^{2}) + L^\infty({\mathbb R}^{2})$ and $q \in {\mathbb Z}_+$. Then we have
\begin{equation} \label{gr7}
{\mathcal U}_B^* P_q VP_q {\mathcal U}_B = p_q\otimes {\rm Op}^{w}
(V_B * \Psi_q).
\end{equation}
\end{follow}
\begin{remark} \label{r6}
To the authors' best knowledge, the unitary equivalence
between the Toeplitz operators $P_q V P_q$, $q \in {\mathbb Z}_+$,
and $\Psi$DO with generalized anti-Wick symbols in the context of
the spectral theory of perturbations of the Landau Hamiltonian,
was first shown in \cite{r0}. Related heuristic arguments can be
found in \cite{rs, bhele}. In the case $q=0$ this equivalence is
closely related to the Segal-Bargmann transform in appropriate
holomorphic spaces which, in one form or another, plays an
important role in the semiclassical analysis performed in
\cite{tw, vb1, tvb, urvb}. Let us comment in more detail on this
relation. The Hilbert space $P_0 L^2({\mathbb R}^{2})$ coincides with the
classical Bargmann space
$$
\left\{f \in L^2({\mathbb R}^{2}) \, |\, f({\bf x}) = e^{-B|{\bf x}|^2/4}g({\bf x}), \quad
\frac{\partial g}{\partial x} + i \frac{\partial g}{\partial y} =
0, \quad {\bf x} = (x,y) \in {\mathbb R}^{2}\right\}.
$$
Then the Segal-Bargmann transform $T_0: L^2({\mathbb R}) \to P_0 L^2({\mathbb R}^{2})$ is a unitary operator with integral kernel
$$
{\mathcal T}_0 : = \frac{1}{\sqrt{2}} \left(\frac{B}{\pi}\right)^{3/4} e^{-B((x+iy+2t)^2 - 2t^2 + |{\bf x}|^2)/4},
\quad {\bf x} = (x,y) \in {\mathbb R}^{2}, \quad t \in {\mathbb R},
$$
(see \cite[Lemma 3.1]{tp}). Fix $q \in {\mathbb Z}_+$. Denote by $M_q : L^2({\mathbb R}) \to (p_q \otimes I_y) L^2({\mathbb R}^{2})$
the unitary operator which maps $u \in L^2({\mathbb R})$ into $B^{1/4} \varphi_q(x) u(B^{1/2}y)$, $(x,y) \in {\mathbb R}^{2}$,
and by ${\mathcal R} : L^2({\mathbb R}^{2}) \to L^2({\mathbb R}^{2})$ the unitary operator generated by the rotation by angle $\pi/2$, i.e.
$({\mathcal R} u)(x,y) = u(y,-x)$, $(x,y) \in {\mathbb R}^{2}$; note that $[{\mathcal R}, P_q] = 0$. Then we have
$$
T_0 = {\mathcal R} {\mathcal U}_B M_0.
$$
{}From this point of view the operators $T_q : = {\mathcal R}
{\mathcal U}_B M_q$, $q \in {\mathbb N}$, could be called
generalized Segal-Bargmann transforms.
\end{remark}
\section{Analysis of $\Op (V_B * \Psi_q)$ and proof of Theorem 1.6}\label{sec.c}
\subsection{Reduction of $\Op(V_B * \Psi_q)$ to $\Op(V_B * \delta_{\sqrt{2q+1}})$}
In the sequel we will use the following notations.
For $k>0$, let $\delta_k$ be the $\delta$-function in ${\mathbb R}^{2}$
supported on the circle of radius $k$ centered at the origin. More
precisely, the distribution $\delta_k \in {\mathcal S}'({\mathbb R}^2)$ is defined by
$$
\delta_k(\varphi) : =
\frac1{2\pi}\int_0^{2\pi}\varphi(k\cos\theta,k\sin\theta)d\theta,
\quad \varphi\in {\mathcal S}({\mathbb R}^{2}).
$$
Next, we denote by $\hat{f}$ the Fourier transform of the
distribution $f \in {\mathcal S}'({\mathbb R}^d)$, unitary in $L^2({\mathbb R}^d)$, i.e.
\begin{equation} \label{end2}
\hat{f}(\xi) : = (2\pi)^{-d/2} \int_{{\mathbb R}^d} e^{-ix\cdot\xi} f(x)
dx, \quad \xi \in {\mathbb R}^d,
\end{equation}
for $f \in {\mathcal S}({\mathbb R}^d)$.
\begin{lemma}\label{th.b3}
Let $V\in C_0^\infty({\mathbb R}^2)$ and $B_0>0$.
Then for some constant $C=C(B_0)$ one has
\begin{multline}
\sup_{q\geq0}\sup_{B\geq B_0}\lambda_q^{3/4} B^{-1}
\norm{\Op(V_B * \Psi_q)-\Op(V_B*\delta_{\sqrt{2q+1}})}
\\
\leq
C\int_{{\mathbb R}^2} (\abs{\zeta}^{5/2}+\abs{\zeta}^6)\abs{\widehat V(\zeta)}d\zeta,
\label{c13}
\end{multline}
\begin{multline}
\sup_{q\geq0}\sup_{B\geq B_0}\lambda_q^{3/4} B^{-1}
\norm{\Op(V_B * \Psi_q)-\Op(V_B*\delta_{\sqrt{2q+1}})}_2
\\
\leq
C\left(\int_{{\mathbb R}^2} (\abs{\zeta}^{5}+\abs{\zeta}^{12})\abs{\widehat V(\zeta)}^2 d\zeta\right)^{1/2}.
\label{c14}
\end{multline}
\end{lemma}
The intuition behind this lemma is the convergence of $\Psi_q$ to
$\delta_{\sqrt{2q+1}}$ in an appropriate sense as $q\to\infty$.
We also note the
similarity between the definition of $V_B * \delta_k$ and the
``classical'' formula \eqref{a1}. For brevity, we introduce the
short-hand notations
\begin{equation} \label{end1}
s_q : = V_B * \Psi_q, \quad q \in {\mathbb Z}_+, \quad t_k : =
V_B*\delta_k, \quad k \in (0,\infty).
\end{equation}
\begin{proof}
First we represent the symbols $s_q$, $t_k$ in a form
convenient for our purposes. For $s_q$ we have
\begin{equation} \label{gr17}
s_q(z)= \int_{{\mathbb R}^2} e^{iz\zeta} \widehat
\Psi_q(\zeta)\widehat V_B(\zeta)d\zeta, \quad z\in{\mathbb R}^2.
\end{equation}
In the Appendix we will prove the formula
\begin{equation}
\widehat{\Psi_q}(\zeta) = (-1)^q \Psi_q(2^{-1} \zeta)/2, \quad q
\in {\mathbb Z}_+, \quad \zeta \in {\mathbb R}^2. \label{prvb30}
\end{equation}
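As a sanity check, \eqref{prvb30} can be verified numerically. The sketch below assumes the explicit Wigner-function form $\Psi_q(z) = \pi^{-1}(-1)^q \Lag_q(2|z|^2)e^{-|z|^2}$ (consistent with the expression for $s_q$ derived below); for a radial function, the unitary two-dimensional Fourier transform reduces to a Hankel transform. The function names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre, j0

def psi_q(q, r):
    # Assumed Wigner-function form of Psi_q, written radially in r = |z|.
    return ((-1) ** q / np.pi) * eval_laguerre(q, 2 * r * r) * np.exp(-r * r)

def psi_q_hat(q, s):
    # For a radial function the unitary 2D Fourier transform reduces to a
    # Hankel transform: (2 pi)^{-1} int_{R^2} e^{-i z.zeta} Psi_q(z) dz
    #                 = int_0^inf J_0(s r) Psi_q(r) r dr,  s = |zeta|.
    val, _ = quad(lambda r: j0(s * r) * psi_q(q, r) * r, 0, np.inf, limit=200)
    return val

def rhs(q, s):
    # Right-hand side of the identity: (-1)^q Psi_q(s/2) / 2.
    return ((-1) ** q) * psi_q(q, s / 2) / 2

max_err = max(abs(psi_q_hat(q, s) - rhs(q, s))
              for q in range(4) for s in (0.3, 1.0, 2.5))
```

For $q=0$ the check reduces to the elementary Gaussian integral $\int_0^\infty J_0(sr)e^{-r^2}r\,dr = e^{-s^2/4}/2$.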
By \eqref{gr17}, \eqref{prvb30}, and the definition \eqref{17a}
of $\Psi_q$,
$$
s_q(z)=\frac1{2\pi}\int_{{\mathbb R}^2} e^{iz\zeta}
\Lag_q(\abs{\zeta}^2/2)e^{-\abs{\zeta}^2/4}
\widehat V_B(\zeta) d\zeta.
$$
For $t_k$ we can write
\begin{equation}
t_k(z)=\frac{1}{2\pi}\int_{{\mathbb R}^{2}}e^{iz\zeta}J_0(k\abs{\zeta})\widehat
V_B(\zeta)d\zeta, \quad z\in{\mathbb R}^{2},
\label{b14}
\end{equation}
since the classical integral representation of the Bessel function
$J_0$ yields
\begin{equation}
J_0(k |\zeta|) = 2\pi \hat{\delta}_k(\zeta), \quad \zeta\in{\mathbb R}^2.
\label{b13}
\end{equation}
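The identity \eqref{b13} is just the representation $J_0(x) = (2\pi)^{-1}\int_0^{2\pi} e^{-ix\cos\theta}\,d\theta$ combined with the definition of $\delta_k$ and the normalization \eqref{end2}. A quick numerical sketch (the function names are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def delta_k_hat(k, s):
    # hat{delta}_k(zeta) = delta_k applied to (2 pi)^{-1} e^{-i x.zeta}
    #   = (2 pi)^{-2} int_0^{2 pi} exp(-i k s cos(theta)) dtheta,  s = |zeta|.
    # The imaginary part vanishes by symmetry, so only the cosine is integrated.
    re, _ = quad(lambda t: np.cos(k * s * np.cos(t)), 0, 2 * np.pi, limit=200)
    return re / (2 * np.pi) ** 2

max_err = max(abs(2 * np.pi * delta_k_hat(k, s) - j0(k * s))
              for k in (0.5, 3.0, 10.0) for s in (0.2, 1.0, 2.0))
```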
Thus \eqref{c13} (resp. \eqref{c14}) reduces to estimating the operator norm
(resp. the Hilbert-Schmidt norm) of the operator with the Weyl symbol
\begin{equation}
s_q(z)-t_{\sqrt{2q+1}}(z)
=
\frac{1}{2\pi}\int_{{\mathbb R}^2} e^{iz\zeta}
\bigl(\Lag_q(\abs{\zeta}^2/2)e^{-\abs{\zeta}^2/4}
-
J_0(\sqrt{2q+1}\abs{\zeta})\bigr)
\widehat V_B(\zeta)d\zeta, \quad q \in {\mathbb Z}_+.
\label{b5}
\end{equation}
In what follows the estimate
\begin{equation}
\Abs{\Lag_q(x) e^{-x/2} - J_0(\sqrt{(4q+2)x})} \leq
C(q^{-3/4}x^{5/4}+q^{-1}x^3), \qquad q\in\mathbb N, \quad x>0,
\label{b7}
\end{equation}
plays a key role.
This estimate is probably well known to experts,
but since we could not find it explicitly in the literature, we include its
proof in the Appendix.
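A numerical sanity check of \eqref{b7} on a sample grid (this does not, of course, replace the proof in the Appendix; the cut-off $100$ below is an ad hoc choice, not the optimal constant):

```python
import numpy as np
from scipy.special import eval_laguerre, j0

def lhs(q, x):
    # |L_q(x) e^{-x/2} - J_0(sqrt((4q+2)x))|
    return abs(eval_laguerre(q, x) * np.exp(-x / 2) - j0(np.sqrt((4 * q + 2) * x)))

def bound(q, x):
    # right-hand side of the estimate, with C = 1
    return q ** (-0.75) * x ** 1.25 + q ** (-1.0) * x ** 3

qs = [5, 10, 20, 50, 100]
xs = np.linspace(0.05, 5.0, 60)
max_ratio = max(lhs(q, x) / bound(q, x) for q in qs for x in xs)
```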
Let us prove the estimate \eqref{c13}.
Using the estimates \eqref{end3} and \eqref{b7}, we obtain:
\begin{multline}
\norm{\Op(V_B * \Psi_q)-\Op(V_B*\delta_{\sqrt{2q+1}})}
\leq
(2\pi)^{-1}
\norm{\hat{s_q}-\hat t_{\sqrt{2q+1}}}_{L^1({\mathbb R}^2)}
\\
=
(2\pi)^{-1}
\int_{{\mathbb R}^2}\abs{\Lag_q(\abs{\zeta}^2/2)e^{-\abs{\zeta}^2/4}-J_0(\sqrt{2q+1}\abs{\zeta})}
\abs{\widehat V_B(\zeta)}d\zeta
\\
\leq
C
\int_{{\mathbb R}^2}(q^{-3/4}\abs{\zeta}^{5/2}+q^{-1}\abs{\zeta}^6)
\abs{\widehat V_B(\zeta)}d\zeta.
\label{b10}
\end{multline}
Recalling the definition \eqref{sof30} of $V_B$, we obtain
$\widehat V_B(\zeta)=B\widehat V_1(B^{1/2}\zeta)$, and so the l.h.s. of
\eqref{b10} can be estimated by
\begin{multline*}
CB q^{-3/4} \int_{{\mathbb R}^2} \abs{\zeta}^{5/2} \abs{\widehat V_1(B^{1/2}\zeta)}d\zeta
+
CB q^{-1} \int_{{\mathbb R}^2} \abs{\zeta}^{6} \abs{\widehat V_1(B^{1/2}\zeta)}d\zeta
\\
=
CB^{-5/4} q^{-3/4} \int_{{\mathbb R}^2} \abs{\zeta}^{5/2} \abs{\widehat V_1(\zeta)}d\zeta
+
CB^{-3} q^{-1} \int_{{\mathbb R}^2} \abs{\zeta}^{6} \abs{\widehat V_1(\zeta)}d\zeta.
\end{multline*}
This yields \eqref{c13}.
Next, let us prove the estimate \eqref{c14}.
By \eqref{prvb2} and the unitarity of the Fourier transform,
\begin{multline*}
\norm{\Op(s_q-t_{\sqrt{2q+1}})}^2_2
=
(2\pi)^{-1}\int_{{\mathbb R}^2}\abs{\hat{s}_q(\zeta)-\hat{t}_{\sqrt{2q+1}}(\zeta)}^2 d\zeta
\\
=
(2\pi)^{-1}
\int_{{\mathbb R}^2}
\abs{\Lag_q(\abs{\zeta}^2/2)e^{-\abs{\zeta}^2/4}-J_0(\sqrt{2q+1}\abs{\zeta})}^2
\abs{\widehat V_B(\zeta)}^2 d\zeta.
\end{multline*}
Now using the estimate \eqref{b7} again, we obtain \eqref{c14} in a similar way
to the previous step of the proof.
\end{proof}
\subsection{Norm estimate of $\Op(V_B * \delta_k)$}
\begin{lemma}\label{lma.b3}
Let $V({\bf x})=\jap{{\bf x}}^{-\rho}$, $\rho>1$, and $B_0>0$.
Then
$$
\sup_{k>0} \sup_{B>B_0}
k B^{-1/2}
\norm{\Op(V_B * \delta_k)}<\infty.
$$
\end{lemma}
\begin{proof}
By Proposition~\ref{prvbp1}, it suffices to prove that for any
differential operator $L$ with constant coefficients
we have
\begin{equation} \label{prvb14}
\sup_{k>0} \sup_{B>B_0}
k B^{-1/2}\sup_{z\in {\mathbb R}^{2}} \abs{(LV_B* \delta_k)(z)} < \infty.
\end{equation}
Note that, by the standard symbol
properties of $\langle {\bf x}\rangle^{-\rho}$, we have
$$
|L V_B({\bf x})|\leq CV_B({\bf x}), \quad {\bf x}\in{\mathbb R}^{2},
$$
where $C$ depends only on $B_0$ and $L$.
Thus, it remains to prove that
\begin{equation}
\sup_{k>0} \sup_{B>B_0}
kB^{-1/2}
\sup_{z\in{\mathbb R}^{2}}\abs{(V_B*\delta_k)(z)}<\infty.
\label{c15}
\end{equation}
We have
$$
V_B(z)=(B^{-1}\abs{z}^2+1)^{-\rho/2}.
$$
By rotational invariance of $V_B$ and $\delta_k$, we may take $z=(r,0)$, $r\geq0$. Then
\begin{multline*}
\left(V_B * \delta_k \right)(z) =
\frac1{2\pi}
\int_0^{2\pi}(B^{-1}(k\cos\theta-r)^2+B^{-1}(k\sin\theta)^2+1)^{-\rho/2}d\theta
\\
\leq \frac1{2\pi}
\int_0^{2\pi}(B^{-1}k^2(\sin\theta)^2+1)^{-\rho/2}d\theta
=
\frac2{\pi}
\int_0^{\pi/2}(B^{-1}k^2(\sin\theta)^2+1)^{-\rho/2}d\theta
\\
\leq \frac2{\pi}
\int_0^{\pi/2}(B^{-1}k^2(2\theta/\pi)^2+1)^{-\rho/2}d\theta
\leq
\frac2{\pi} \int_0^{\infty}(B^{-1}k^2(2\theta/\pi)^2+1)^{-\rho/2}d\theta
\\
= \frac{2B^{1/2}}{\pi k}
\int_0^{\infty}((2\theta/\pi)^2+1)^{-\rho/2}d\theta = CB^{1/2}/k.
\end{multline*}
This yields
$$
\sup_{z\in{\mathbb R}^{2}}\abs{(V_B*\delta_k)(z)}\leq CB^{1/2}k^{-1},
$$
and \eqref{c15} follows.
\end{proof}
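The computation above is easy to check numerically; the sketch below takes $\rho = 2$, for which the explicit constant of the proof is $\int_0^\infty (1+u^2)^{-1}\,du = \pi/2$ (the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def conv(B, k, r, rho=2.0):
    # (V_B * delta_k)(z) at z = (r, 0), for V(x) = <x>^{-rho}
    f = lambda t: (((k * np.cos(t) - r) ** 2 + (k * np.sin(t)) ** 2) / B
                   + 1.0) ** (-rho / 2)
    val, _ = quad(f, 0, 2 * np.pi, limit=200)
    return val / (2 * np.pi)

# For rho = 2 the proof gives sup_z (V_B * delta_k)(z) <= (pi/2) B^{1/2} / k.
bound_holds = all(conv(B, k, r) <= (np.pi / 2) * np.sqrt(B) / k + 1e-9
                  for B in (1.0, 4.0) for k in (2.0, 10.0)
                  for r in np.linspace(0.0, 20.0, 21))
```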
\subsection{Asymptotics of traces}
\begin{theorem}\label{th.b4}
Let $V\in C_0^\infty({\mathbb R}^2)$. Then for each $\ell \in {\mathbb N}$, $\ell \geq 2$,
we have
\begin{equation} \label{sof12}
\lim_{q \to \infty}\lambda_q^{(\ell-1)/2}
\Tr \bigl(\Op(t_{\sqrt{2q+1}})\bigr)^\ell
=
\frac{B^\ell}{2\pi}
\int_{{\mathbb T}} \, \int_{{\mathbb R}} \;\widetilde{V}(\omega,b)^\ell \,
db\, d\omega.
\end{equation}
\end{theorem}
The proof is based on the following technical lemma.
\begin{lemma}\label{lma.b2}
Let $\ell\in{\mathbb N}$, $\ell\geq 2$, $f\in\mathcal S({\mathbb R}^{2(\ell-1)})$,
and let the function $\varphi:\mathbb T^{\ell}\times {\mathbb R}^{2(\ell-1)}\to{\mathbb R}$
be given by
\begin{equation}
\varphi(\boldsymbol{\omega},\mathbf{z}) = \sum_{j=1}^{\ell-1} z_j \cdot
(\omega_{j+1}-\omega_j), \label{c1}
\end{equation}
where
$\mathbf{z}=(z_1,\dots,z_{\ell-1})\in {\mathbb R}^{2(\ell-1)}$,
$\boldsymbol{\omega}=(\omega_1,\dots,\omega_\ell)\in\mathbb T^{\ell}\subset{\mathbb R}^{2\ell}$,
and $\cdot$ denotes the scalar product in ${\mathbb R}^{2}$.
Then
\begin{multline}
\lim_{k\to\infty}k^{\ell-1}
\int_{{\mathbb R}^{2(\ell-1)}} \, \int_{\mathbb T^{\ell}} f(\mathbf{z})
e^{ik\varphi(\boldsymbol{\omega},\mathbf{z})}\, d\boldsymbol{\omega}\, d\mathbf{z}
\\
=
(2\pi)^{\ell-1}
\int_{\mathbb T} \, \int_{{\mathbb R}^{\ell-1}}
\, f(\alpha_1\omega,\alpha_2\omega,\dots, \alpha_{\ell-1}\omega)\, d\alpha_1 d\alpha_2 \cdots d\alpha_{\ell-1}\,d\omega .
\label{c2}
\end{multline}
\end{lemma}
\begin{proof}
The proof consists in an application of the stationary phase
method.
We use the following parametrisation of the variables
$\boldsymbol{\omega}$, $\mathbf{z}$:
\begin{align*}
\omega_\ell&=(\cos\theta,\sin\theta), \quad \theta\in[-\pi,\pi);
\\
\omega_j&=(\cos(\theta+\theta_j), \sin(\theta+\theta_j)),
\quad \theta_j\in[-\pi,\pi), \quad j=1,\dots, \ell-1;
\\
z_j&=\alpha_j\omega_\ell+\beta_j\omega_\ell^\perp,
\quad \alpha_j, \beta_j\in{\mathbb R}, \quad j=1,\dots, \ell-1.
\end{align*}
We write
$\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_{\ell - 1}) \in {\mathbb R}^{\ell-1}$,
$\boldsymbol{\beta} = (\beta_1, \ldots, \beta_{\ell - 1}) \in{\mathbb R}^{\ell - 1}$,
$\boldsymbol{\theta} = (\theta_1,\ldots,\theta_{\ell-1}) \in[-\pi,\pi)^{\ell-1}$.
Using this notation, we can rewrite the integral in the l.h.s. of
\eqref{c2} as
\begin{multline}
\int_{{\mathbb R}^{2(\ell-1)}} \, \int_{\mathbb T^{\ell}} f(\mathbf{z})
e^{ik\varphi(\boldsymbol{\omega},\mathbf{z})}\,d\boldsymbol{\omega} \,d\mathbf{z}
\\
=
\int_{-\pi}^\pi \, \int_{(-\pi,\pi)^{\ell-1}}\,
\int_{{\mathbb R}^{\ell-1}}\, \int_{{\mathbb R}^{\ell-1}} \,
F(\boldsymbol{\alpha}, \boldsymbol{\beta},\boldsymbol{\theta},\theta)e^{ik \Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})} \,d\boldsymbol{\beta}\,d\boldsymbol{\alpha} \,d\boldsymbol{\theta}\,d\theta,
\label{c3}
\end{multline}
where
$$
F(\boldsymbol{\alpha}, \boldsymbol{\beta},\boldsymbol{\theta},\theta)
=
f(\alpha_1\omega_\ell+\beta_1\omega_\ell^\perp,\dots,\alpha_{\ell-1}\omega_\ell+\beta_{\ell-1}\omega_\ell^\perp),
$$
and
$$
\Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})
=
\alpha_1(1-\cos\theta_1)-\beta_1\sin\theta_1,
\text{ if $\ell=2$,}
$$
\begin{multline*}
\Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})
=
\alpha_{\ell-1}-(\alpha_1\cos\theta_1+\beta_1\sin\theta_1)
\\
+
\sum_{j=2}^{\ell-1}((\alpha_{j-1}-\alpha_j)\cos\theta_j+(\beta_{j-1}-\beta_j)\sin\theta_j),
\text{ if $\ell\geq 3$.}
\end{multline*}
Let us consider the stationary points of the phase function
$\Phi$. By a direct calculation,
$\nabla\Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})=0$ if and only if $\boldsymbol{\beta}=0$
and $\boldsymbol{\theta}=0$. By a standard localisation argument, it follows
that the asymptotics of the integral \eqref{c3} will not change if
we multiply $F$ by a function $\chi=\chi(\boldsymbol{\beta},\boldsymbol{\theta})$, $\chi
\in C^\infty({\mathbb R}^{\ell-1}\times[-\pi,\pi)^{\ell-1})$, such that
$\chi(\boldsymbol{\beta},\boldsymbol{\theta})=1$ in an open neighbourhood of the origin
$\boldsymbol{\beta}=0$, $\boldsymbol{\theta}=0$, and $\chi(\boldsymbol{\beta},\boldsymbol{\theta})=0$ if
$\abs{\boldsymbol{\beta}}\geq1/2$ or $\abs{\boldsymbol{\theta}}\geq\pi/2$.
Let us write
\begin{multline}
\int_{-\pi}^\pi \, \int_{(-\pi,\pi)^{\ell-1}} \,
\int_{{\mathbb R}^{\ell-1}} \, \int_{{\mathbb R}^{\ell-1}} \,
F(\boldsymbol{\alpha}, \boldsymbol{\beta},\boldsymbol{\theta},\theta)\chi(\boldsymbol{\beta},\boldsymbol{\theta})
e^{ik \Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})} \, d\boldsymbol{\beta} \, d\boldsymbol{\alpha} \, d\boldsymbol{\theta} \, d\theta
\\
=
\int_{-\pi}^\pi
\int_{{\mathbb R}^{\ell-1}}
I(k;\boldsymbol{\alpha},\theta) \, d\boldsymbol{\alpha} \, d\theta ,
\label{c4}
\end{multline}
where
$$
I(k;\boldsymbol{\alpha},\theta)
=
\int_{(-\pi,\pi)^{\ell-1}} \,
\int_{{\mathbb R}^{\ell-1}} \,
F(\boldsymbol{\alpha}, \boldsymbol{\beta},\boldsymbol{\theta},\theta)\chi(\boldsymbol{\beta},\boldsymbol{\theta})
e^{ik \Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})} \,d\boldsymbol{\beta} \,d\boldsymbol{\theta}.
$$
Let us fix $\boldsymbol{\alpha}$, $\theta$ and compute the
asymptotics of the integral
$I(k;\boldsymbol{\alpha},\theta)$ as $k\to\infty$.
A direct calculation shows that the stationary phase
equations
$$
\frac{\partial\Phi}{\partial\beta_j}=0,
\quad
\frac{\partial\Phi}{\partial\theta_j}=0,
\quad
j=1,\dots,\ell-1
$$
are simultaneously satisfied on the support of $\chi$ if and only
if $\boldsymbol{\beta}=0$, $\boldsymbol{\theta}=0$. In order to apply the stationary phase
method, we need to compute the determinant and the signature (i.e.
the difference between the number of positive and negative
eigenvalues) of the Hessian of $\Phi(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\theta})$ with
respect to the variables $\boldsymbol{\beta}$, $\boldsymbol{\theta}$. Let us denote this
Hessian by $H(\boldsymbol{\alpha})$. The $2(\ell - 1) \times 2(\ell - 1)$
matrix $H(\boldsymbol{\alpha})$ can be represented in a block form
$$
H(\boldsymbol{\alpha}) = \left(
\begin{array} {cc}
H_{11}(\boldsymbol{\alpha}) & H_{12}(\boldsymbol{\alpha})\\
H_{21}(\boldsymbol{\alpha}) & H_{22}(\boldsymbol{\alpha})
\end{array}
\right)
$$
where
$$
H_{11}(\boldsymbol{\alpha}) : = \left\{\frac{\partial^2 \Phi}{\partial \beta_p
\partial \beta_q}(\boldsymbol{\alpha},0,0) \right\}_{p,q=1}^{\ell-1}, \quad
H_{12}(\boldsymbol{\alpha}) : = \left\{\frac{\partial^2 \Phi}{\partial \beta_p
\partial \theta_q}(\boldsymbol{\alpha},0,0) \right\}_{p,q=1}^{\ell-1},
$$
$$
H_{21}(\boldsymbol{\alpha}) : = \left\{\frac{\partial^2 \Phi}{\partial
\theta_p
\partial \beta_q}(\boldsymbol{\alpha},0,0) \right\}_{p,q=1}^{\ell-1}, \quad
H_{22}(\boldsymbol{\alpha}) : = \left\{\frac{\partial^2 \Phi}{\partial
\theta_p
\partial \theta_q}(\boldsymbol{\alpha},0,0) \right\}_{p,q=1}^{\ell-1}.
$$
An explicit computation shows that
$$
H_{11} = 0, \quad H_{12} = \left\{-\delta_{q,p} +
\delta_{q,p+1}\right\}_{p,q=1}^{\ell - 1},
$$
$$
H_{21} = H_{12}^T, \quad H_{22}(\boldsymbol{\alpha}) = {\rm
diag}\left\{\alpha_1, \alpha_2 - \alpha_1,\ldots, \alpha_{\ell -
1} - \alpha_{\ell - 2}\right\}.
$$
Hence,
$$
{\rm det}_{2(\ell - 1)} H(\boldsymbol{\alpha}) = {\rm det}_{\ell - 1}(- H_{12}
H_{21}) = (-1)^{\ell -1};
$$
in particular, our stationary point is non-degenerate.
In order to calculate the signature
of $H(\boldsymbol{\alpha})$, note that since $\det H(\boldsymbol{\alpha})\not=0$
for all $\boldsymbol{\alpha}$ and $H(\boldsymbol{\alpha})$ depends smoothly (in fact,
polynomially) on $\boldsymbol{\alpha}$, the signature is independent of $\boldsymbol{\alpha}$.
Some elementary analysis shows that $\sign H(0)=0$.
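Both claims can be confirmed numerically by assembling $H(\boldsymbol{\alpha})$ from the explicit blocks above (a sketch; the helper names are ours):

```python
import numpy as np

def hessian(alpha):
    # H(alpha) = [[0, H12], [H12^T, H22]] with the explicit blocks:
    # H12 entry (p, q) = -delta_{q,p} + delta_{q,p+1};
    # H22 = diag(alpha_1, alpha_2 - alpha_1, ..., alpha_{l-1} - alpha_{l-2}).
    n = len(alpha)  # n = ell - 1
    H12 = -np.eye(n) + np.diag(np.ones(n - 1), 1)
    H22 = np.diag(np.concatenate(([alpha[0]], np.diff(alpha))))
    return np.block([[np.zeros((n, n)), H12], [H12.T, H22]])

rng = np.random.default_rng(0)
dets, sigs = [], []
for ell in (2, 3, 4, 5):
    H = hessian(rng.standard_normal(ell - 1))
    ev = np.linalg.eigvalsh(H)
    dets.append(np.linalg.det(H) * (-1) ** (ell - 1))   # should equal +1
    sigs.append(int((ev > 0).sum()) - int((ev < 0).sum()))  # should equal 0
```

Note that the determinant is independent of $H_{22}$, so the result holds for every $\boldsymbol{\alpha}$.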
Now we can apply a suitable version of the stationary phase
method (see e.g. \cite[Chapter 1]{guist} or \cite[Chapter III,
Section 2]{fed}) to calculate the asymptotics of
$I(k;\boldsymbol{\alpha},\theta)$. This yields
$$
\lim_{k\to\infty}k^{\ell-1}I(k;\boldsymbol{\alpha},\theta)
=
(2\pi)^{\ell-1}F(\boldsymbol{\alpha},0,0,\theta).
$$
Using, for example, the Lebesgue dominated convergence
theorem, one concludes that the above asymptotics can
be integrated over $\boldsymbol{\alpha}$ and $\theta$,
see \eqref{c4}. This yields the required result \eqref{c2}.
\end{proof}
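For $\ell = 2$ and the radial Gaussian $f(z) = e^{-|z|^2/2}$, the $z$-integral in the l.h.s. of \eqref{c2} can be computed in closed form, and the limit checked numerically: the l.h.s. reduces to $k(2\pi)^2\int_{-\pi}^{\pi} e^{-2k^2\sin^2(u/2)}\,du$, while the r.h.s. equals $(2\pi)^2\sqrt{2\pi}$. A sketch:

```python
import numpy as np
from scipy.integrate import quad

def lhs(k):
    # int_{R^2} e^{-|z|^2/2} e^{i k z.(w2 - w1)} dz = 2 pi exp(-k^2 |w2-w1|^2 / 2),
    # with |w2 - w1|^2 = 4 sin^2(u/2), u = theta2 - theta1; one angular variable
    # factors out, leaving a single periodic integral in u.
    val, _ = quad(lambda u: np.exp(-2.0 * k ** 2 * np.sin(u / 2) ** 2),
                  -np.pi, np.pi, limit=200)
    return k * (2 * np.pi) ** 2 * val

rhs = (2 * np.pi) ** 2 * np.sqrt(2 * np.pi)
rel_err = abs(lhs(10.0) - rhs) / rhs   # Laplace-type error, O(1/k^2)
```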
\begin{proof}[Proof of Theorem~\ref{th.b4}]
We use the notation $t_k$, see \eqref{end1}.
Denote by $t_{k, \ell}$ the Weyl symbol of the operator
$(\Op(t_k))^\ell$, i.e.
$$
\Op(t_{k, \ell}) =(\Op(t_{k}))^\ell, \quad \ell=2,3,\dots.
$$
By the standard Weyl
pseudodifferential calculus
(see e.g. \cite[Chapter 7, Eq. (14.21)]{tayii}), for $\zeta_\ell \in {\mathbb R}^{2}$ we have
$$
\hat{t}_{k, \ell}(\zeta_\ell)
=
(2\pi)^{-\ell + 1}
\int_{{\mathbb R}^{2(\ell-1)}} \hat{t}_k(\zeta_\ell - \zeta_{\ell - 1})
\hat{t}_k(\zeta_{\ell - 1} - \zeta_{\ell - 2}) \cdots
\hat{t}_k(\zeta_1) e^{\frac{i}{2}\sum_{j=2}^{\ell} \sigma(\zeta_j,
\zeta_{j-1})} d\boldsymbol{\zeta}
$$
where $\sigma(\cdot, \cdot)$ is the symplectic form in
${\mathbb R}^{2} \times{\mathbb R}^{2}$,
and $\boldsymbol{\zeta}=(\zeta_1,\dots,\zeta_{\ell-1})$.
It follows that
\begin{multline}
\Tr \bigl(\Op(t_k)\bigr)^\ell
=
\Tr \Op(t_{k,\ell}) =
\frac1{2\pi}\int_{{\mathbb R}^2}t_{k,\ell}(x)dx = \hat t_{k,\ell}(0)
\\
=
(2\pi)^{-\ell + 1}
\int_{{\mathbb R}^{2(\ell-1)}} \hat{t}_k(-\zeta_{\ell - 1})
\hat{t}_k(\zeta_{\ell - 1} - \zeta_{\ell - 2}) \cdots \hat{t}_k(\zeta_1)
\exp\bigl({\textstyle \frac{i}{2}\sum_{j=2}^{\ell-1} \sigma(\zeta_j,\zeta_{j-1})}\bigr)
d\boldsymbol{\zeta},
\label{c5}
\end{multline}
where we use the convention that $\sum_{j=2}^{\ell-1}=0$ if
$\ell=2$.
Recalling \eqref{b13}, \eqref{b14}, we get
$$
\hat t_k(\zeta)
=
\frac1{2\pi}\widehat V_B(\zeta)\int_{\mathbb T}e^{-ik\omega\zeta}d\omega,
\quad
\zeta\in{\mathbb R}^2,
$$
and so, substituting into \eqref{c5}, we get
$$
\Tr\bigl(\Op(t_k)\bigr)^\ell =
\int_{{\mathbb R}^{2(\ell-1)}}\,\int_{\mathbb T^\ell}\, f(\boldsymbol{\zeta})
e^{ik\varphi(\boldsymbol{\omega},\boldsymbol{\zeta})}\, d\boldsymbol{\omega} \, d\boldsymbol{\zeta},
$$
where
$$
f(\boldsymbol{\zeta})
=
(2\pi)^{-2\ell+1}
\widehat V_B(-\zeta_{\ell - 1})
\widehat V_B(\zeta_{\ell - 1} - \zeta_{\ell - 2}) \cdots \widehat V_B(\zeta_1)
\exp\bigl({\textstyle \frac{i}{2} \sum_{j=2}^{\ell-1} \sigma(\zeta_j,\zeta_{j-1})}\bigr),
$$
and $\varphi$ is given by \eqref{c1}.
Applying Lemma~\ref{lma.b2}, we obtain
\begin{multline}
\lim_{k\to\infty} k^{\ell-1}
\Tr\bigl(\Op(t_k)\bigr)^\ell
\\
= (2\pi)^{-\ell}\int_{\mathbb T} \int_{{\mathbb R}^{\ell-1}}\,
\widehat V_B(-\alpha_{\ell-1}\omega) \widehat
V_B((\alpha_{\ell-1}-\alpha_{\ell-2})\omega) \cdots\widehat
V_B((\alpha_2-\alpha_1)\omega)\widehat V_B(\alpha_1\omega) \,
d\boldsymbol{\alpha} \, d\omega. \label{c6}
\end{multline}
It remains to transform the last identity into \eqref{sof12}.
We have
$$
\widehat V_B(\alpha \omega)
=
B\int_{{\mathbb R}} e^{-i\alpha
bB^{1/2}}\widetilde{V_1}(\omega,b)\,db
$$
where, in accordance with \eqref{sof30}, we use the notation
$V_1(x,y) = V(-y,-x)$, $(x,y) \in {\mathbb R}^{2}$. Therefore,
\begin{multline}
(2\pi)^{-\ell}\int_{\mathbb T} \int_{{\mathbb R}^{\ell-1}}\, \widehat
V_B(-\alpha_{\ell-1}\omega) \widehat
V_B((\alpha_{\ell-1}-\alpha_{\ell-2})\omega) \cdots\widehat
V_B((\alpha_2-\alpha_1)\omega)\widehat V_B(\alpha_1\omega) \,
d\boldsymbol{\alpha} \, d\omega
\\
= \frac{B^{(1+\ell)/2}}{2\pi} \int_{\mathbb T} \, \int_{\mathbb R}
\widetilde{V_1}(\omega,b)^\ell \, db \, d\omega
=
\frac{B^{(1+\ell)/2}}{2\pi} \int_{\mathbb T} \, \int_{\mathbb R}
\widetilde{V}(\omega,b)^\ell \, db \, d\omega. \label{end10}
\end{multline}
Now \eqref{sof12} follows from \eqref{c6} and \eqref{end10}.
\end{proof}
\subsection{Proof of Theorem 1.6.}
(i) First we observe that
\begin{multline*}
\norm{P_q V P_q}
\leq
\norm{P_q\jap{\cdot}^{-\rho/2}}
\norm{\jap{\cdot}^\rho V}
\norm{\jap{\cdot}^{-\rho/2}P_q}
\\
\leq
\norm{V}_{X_\rho}
\norm{P_q\jap{\cdot}^{-\rho/2}}
\norm{\jap{\cdot}^{-\rho/2}P_q}
=
\norm{V}_{X_\rho}
\norm{P_q\jap{\cdot}^{-\rho}P_q},
\end{multline*}
and so it suffices to consider the case $V({\bf x})=\jap{{\bf x}}^{-\rho}$.
Next, by Corollary~\ref{fgr1}, we have
$$
\norm{P_q VP_q}
=
\norm{\Op (V_B* \Psi_q)}.
$$
In order to estimate the norm of $\Op (V_B* \Psi_q)$, we use
Lemmas~\ref{th.b3} and \ref{lma.b3}.
We note that for $V({\bf x})=\jap{{\bf x}}^{-\rho}$, $\rho>1$, we have
$\widehat V\in L^1$ and
$$
\abs{\widehat V(\zeta)}\leq C_N \abs{\zeta}^{-N},
\quad
\abs{\zeta}\geq 1,
$$
for all $N\geq 1$ (see e.g. \cite[Chapter XII, Lemma 3.1]{tay}).
Thus, the integral in the r.h.s. of \eqref{c13} is convergent, and so the proof
of \eqref{c13} applies to $V({\bf x})=\jap{{\bf x}}^{-\rho}$, $\rho>1$.
Now, combining Lemmas~\ref{th.b3} and \ref{lma.b3}, we get
\begin{multline*}
\sup_{q\geq0}\sup_{B\geq B_0}
\lambda_q^{1/2} B^{-1} \norm{\Op(V_B* \Psi_q)}
\\
\leq
\sup_{q\geq0}\sup_{B\geq B_0}
\lambda_q^{3/4} B^{-1}
\norm{\Op(V_B* \Psi_q)-\Op(V_B*\delta_{\sqrt{2q+1}})}
\\
+
\sup_{q\geq0}\sup_{B\geq B_0}
\lambda_q^{1/2} B^{-1}
\norm{\Op(V_B*\delta_{\sqrt{2q+1}})}
<\infty,
\end{multline*}
which proves the required estimate.
(ii)
As in the proof of part (i), we may assume $V({\bf x})=\jap{{\bf x}}^{-\rho}$.
First let us consider the case $\rho>2$, $\ell=1$.
By Lemma~\ref{l21} with $\ell=1$ we have
$$
B^{-1}\norm{P_qVP_q}_1
=
\frac1{2\pi}
\int_{{\mathbb R}^2}V({\bf x})d{\bf x}
\leq
\frac1{2\pi}
\norm{V}_{X_\rho}
\int_{{\mathbb R}^2}\jap{{\bf x}}^{-\rho}d{\bf x},
$$
which proves \eqref{b1} in this case.
Let us consider the case of a general $\ell$.
For a fixed $s>1$ and any
$\ell\in[1,\infty]$, let
$$
M_q^{(\ell)} = B^{-1}\lambda_q^{\frac12-\frac1{2\ell}}
P_q \jap{\cdot}^{-s (1+\frac1\ell)} P_q;
$$
for $\ell=\infty$, one should replace $1/\ell$ by $0$.
By the previous step of the proof and part (i) of the theorem,
$$
\sup_{q\geq0}\sup_{B\geq B_0}
\norm{M_q^{(1)}}_1
\leq
C_1<\infty, \quad
\sup_{q\geq0}\sup_{B\geq B_0}
\norm{M_q^{(\infty)}}\leq C_\infty<\infty,
$$
where the constants $C_1$, $C_\infty$ depend only on $B_0$ and $s$.
Applying the Calder\'on-Lions interpolation theorem
(see e.g. \cite[Theorem IX.20]{RS2}), we get
$$
\sup_{q\geq0}\sup_{B\geq B_0}
\norm{M_q^{(\ell)}}_\ell
\leq
C_1^{1/\ell}C_\infty^{(\ell-1)/\ell}
<
\infty
$$
for all $\ell \geq 1$. It is easy to see that the last statement is
equivalent to \eqref{b1}.
(iii)
First let us note that the case $\ell=1$ is straightforward.
Indeed, if the integer $\ell =1$ is
admissible, i.e. if $\ell = 1 > 1/(\rho-1)$, then $\rho> 2$, $
V \in L^1({\mathbb R}^{2})$, and \eqref{gr18} yields the identity
$$
\Tr P_q V P_q
=
\frac{B}{2\pi} \int_{{\mathbb R}^{2}} V({\bf x}) d{\bf x}
=
\frac{B}{2\pi} \int_{\mathbb T} \int_{\mathbb R} \widetilde{V}(\omega,b)
\, db \, d\omega.
$$
Thus, we may now assume $\ell\geq2$.
We will first prove the required identity \eqref{12} for
$V\in C_0^\infty({\mathbb R}^2)$ and then use a limiting argument to
extend it to all $V\in X_\rho$.
Denote
\begin{equation}
\gamma_\ell(V)
=
\frac{B^\ell}{2\pi} \int_{{\mathbb T}} \, \int_{{\mathbb R}} \;
\widetilde{V}(\omega,b)^\ell \, db \, d\omega.
\label{c11}
\end{equation}
By Corollary~\ref{fgr1}, we have
$$
\Tr(P_qVP_q)^\ell =
\Tr\bigl(\Op(V_B*\Psi_q)\bigr)^\ell,
$$
and Theorem~\ref{th.b4} gives
$$
\lim_{q \to \infty}
\lambda_q^{(\ell-1)/2}
\Tr\bigl(\Op(V_B*\delta_{\sqrt{2q+1}})\bigr)^\ell
= \gamma_\ell(V).
$$
Thus, it suffices to prove that
\begin{equation}
\lim_{q \to \infty}
\lambda_q^{(\ell-1)/2}
\abs{\Tr\bigl(\Op(V_B*\Psi_q)\bigr)^\ell
-
\Tr\bigl(\Op(V_B*\delta_{\sqrt{2q+1}})\bigr)^\ell}
=
0.
\label{c16}
\end{equation}
In order to prove \eqref{c16},
let us first display an elementary estimate
\begin{equation}
\abs{\Tr \left(A_1^\ell\right) - \Tr\left(A_2^\ell\right)} \leq
\ell \max\{\norm{A_1}^{\ell-1}_\ell,
\norm{A_2}^{\ell-1}_\ell\}\norm{A_1-A_2}_\ell; \label{c10}
\end{equation}
here $\ell\in{\mathbb N}$ and $A_n\in S_\ell$, $n=1,2$. The estimate
follows from the formula
$$
A_1^\ell-A_2^\ell = \sum_{j=0}^{\ell-1}
A_1^{\ell-j-1}(A_1-A_2)A_2^j
$$
and the H\"older type inequality for the $S_\ell$ classes.
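Both the telescoping formula and \eqref{c10} are easy to test numerically with finite matrices, computing Schatten norms from singular values (a sketch with our own helper names):

```python
import numpy as np

rng = np.random.default_rng(1)

def schatten(A, p):
    # Schatten p-norm via singular values
    return (np.linalg.svd(A, compute_uv=False) ** p).sum() ** (1.0 / p)

def check(ell, n=6):
    A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2
    P = rng.standard_normal((n, n)); A2 = A1 + 0.05 * (P + P.T)
    mp = np.linalg.matrix_power
    # telescoping identity A1^l - A2^l = sum_j A1^{l-j-1} (A1 - A2) A2^j
    tele = sum(mp(A1, ell - j - 1) @ (A1 - A2) @ mp(A2, j) for j in range(ell))
    ident_ok = np.allclose(tele, mp(A1, ell) - mp(A2, ell))
    # trace inequality (c10) via the Hoelder inequality for Schatten classes
    lhs = abs(np.trace(mp(A1, ell)) - np.trace(mp(A2, ell)))
    rhs = ell * max(schatten(A1, ell), schatten(A2, ell)) ** (ell - 1) \
          * schatten(A1 - A2, ell)
    return ident_ok and lhs <= rhs + 1e-9

all_ok = all(check(ell) for ell in (2, 3, 4))
```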
Next,
using Corollary~\ref{fgr1} and part (ii) of the Theorem, we get
\begin{equation}
\limsup_{q\to\infty}
\lambda_q^{(\ell-1)/(2\ell)}\norm{\Op(V_B*\Psi_q)}_\ell
=
\limsup_{q\to\infty}
\lambda_q^{(\ell-1)/(2\ell)}\norm{P_q V P_q}_\ell
<
\infty.
\label{c17}
\end{equation}
Further, by estimate \eqref{c14}, using the assumption $\ell\geq2$, we get
\begin{multline}
\limsup_{q\to\infty}
\lambda_q^{(\ell-1)/(2\ell)}
\norm{\Op(V_B*\Psi_q)-\Op(V_B*\delta_{\sqrt{2q+1}})}_\ell
\\
\leq
\limsup_{q\to\infty}
\lambda_q^{1/2}
\norm{\Op(V_B*\Psi_q)-\Op(V_B*\delta_{\sqrt{2q+1}})}_2
=0.
\label{c18}
\end{multline}
Combining \eqref{c10}, \eqref{c17} and \eqref{c18},
we obtain \eqref{c16} for $V\in C_0^\infty$; thus, \eqref{12} is proven for
this class of potentials.
It remains to extend \eqref{12} to all potentials
$V\in C({\mathbb R}^2)$ that satisfy \eqref{rho}.
For $\ell>1/(\rho-1)$, denote
\begin{align*}
\Delta_\ell(V)
&=
\limsup_{q\to\infty}
\lambda_q^{(\ell-1)/2}\Tr(P_qVP_q)^\ell,
\\
\delta_\ell(V)
&=
\liminf_{q\to\infty}
\lambda_q^{(\ell-1)/2}\Tr(P_qVP_q)^\ell.
\end{align*}
Above we have proven that
\begin{equation}
\Delta_\ell(V)=\delta_\ell(V)=\gamma_\ell(V)
\label{c7}
\end{equation}
for all potentials $V\in C_0^\infty({\mathbb R}^2)$; now we need to extend
this identity to all $V\in X_\rho$.
From \eqref{a4} we obtain, similarly to \eqref{c10},
\begin{multline*}
\abs{\gamma_\ell(V_1)-\gamma_\ell(V_2)}
\leq
\frac{B^\ell}{2\pi}
\int_{\mathbb T} \int_{{\mathbb R}}
\abs{\widetilde V_1(\omega,b)^\ell-\widetilde V_2(\omega,b)^\ell}db d\omega
\\
\leq
\frac{B^\ell}{2\pi}
C\max\{\norm{V_1}_{X_\rho}^{\ell-1},\norm{V_2}_{X_\rho}^{\ell-1}\}\norm{V_1-V_2}_{X_\rho}
\int_{\mathbb T} \int_{{\mathbb R}} \jap{b}^{(1-\rho)\ell}db\, d\omega.
\end{multline*}
It follows that $\gamma_\ell$ is a continuous functional on $X_\rho$.
Similarly, using \eqref{c10} and part (ii) of the Theorem, we get
$$
\limsup_{q\to\infty}
\lambda_q^{(\ell-1)/2}\abs{\Tr(P_qV_1P_q)^\ell-\Tr(P_qV_2P_q)^\ell}
\leq
C\max\{\norm{V_1}_{X_\rho}^{\ell-1}, \norm{V_2}_{X_\rho}^{\ell-1}\}
\norm{V_1-V_2}_{X_\rho},
$$
and so the functionals $\Delta_\ell$, $\delta_\ell$ are continuous on $X_\rho$.
It follows that \eqref{c7} extends by continuity from $C_0^\infty$ to
the closure $X_\rho^0$ of $C_0^\infty$ in $X_\rho$.
In order to prove \eqref{c7} for all $V\in X_\rho$, one can argue
as follows. For a given $\ell>1/(\rho-1)$, choose $\rho_1$ such
that $1<\rho_1<\rho$ and $\ell>1/(\rho_1-1)$. Then $X_\rho\subset
X_{\rho_1}^0$ and by the same argument as above, \eqref{c7} holds
true for all $V\in X_{\rho_1}^0$. \qed
\section{Proof of Proposition 1.1 and Theorem 1.3}\label{s3}
As already indicated, this section heavily uses the construction of \cite{korpu}.
\subsection{Proof of Proposition 1.1.}\label{s3a}
Set $R_0(z):=(H_0-zI)^{-1}$.
By the Birman-Schwinger principle, if $\lambda \in {\mathbb R} \setminus \cup_{q = 0}^{\infty}\{\lambda_q\}$ is an eigenvalue of the operator $H$, then $-1$ is an eigenvalue of the operator $|V|^{1/2} R_0(\lambda) V^{1/2}$. Hence, it
suffices to show that for some $C>0$ and all sufficiently large
$q$, we have
\begin{equation}
\norm{\abs{V}^{1/2}R_0(\lambda)\abs{V}^{1/2}}<1,
\quad\text{ for all }
\lambda\in[\lambda_q-B,\lambda_q+B],
\quad
\abs{\lambda-\lambda_q}>\frac{C}{\sqrt{q}}.
\label{d3}
\end{equation}
Choose $m\in\mathbb N$ sufficiently large so that
$\norm{V}/\lambda_m<1/2$,
and write $R_0(\lambda)$ as
$$
R_0(\lambda)=\sum_{k=q-m}^{q+m}\frac{P_k}{\lambda_k-\lambda}+\widetilde R_0(\lambda).
$$
Then, for $\lambda\in[\lambda_q-B,\lambda_q+B]$,
$$
\norm{\abs{V}^{1/2}R_0(\lambda)\abs{V}^{1/2}}
\leq
\sum_{k=q-m}^{q+m}
\frac{\norm{\abs{V}^{1/2}P_k\abs{V}^{1/2}}}{\abs{\lambda_k-\lambda}}
+
\norm{\abs{V}^{1/2}\widetilde R_0(\lambda)\abs{V}^{1/2}}.
$$
By the choice of $m$, one has
$$
\norm{\abs{V}^{1/2}\widetilde R_0(\lambda)\abs{V}^{1/2}}
\leq
\norm{\abs{V}^{1/2}}(1/\lambda_m)\norm{\abs{V}^{1/2}}
=
\norm{V}/\lambda_m
<1/2.
$$
On the other hand, by Theorem~\ref{th13}(i),
$$
\sum_{k=q-m}^{q+m}
\frac{\norm{\abs{V}^{1/2}P_k\abs{V}^{1/2}}}{\abs{\lambda_k-\lambda}}
\leq(2m+1)O(q^{-1/2})\max_{q-m\leq k\leq q+m}\abs{\lambda_k-\lambda}^{-1}
=O(q^{-1/2})\abs{\lambda_q-\lambda}^{-1}.
$$
Thus, we get \eqref{d3} for sufficiently large $C>0$.
\qed
\subsection{Resolvent estimates}\label{s3b}
Let $\Gamma_q$ be a positively oriented circle of center
$\lambda_q$ and radius $B$.
\begin{lemma}\label{lma.d1}
Let $V$ satisfy \eqref{rho}. Then for any $\ell>1$,
$\ell>1/(\rho-1)$, one has
\begin{align}
\sup_{z\in\Gamma_q}
\norm{\abs{V}^{1/2}R_0(z)\abs{V}^{1/2}}_{\ell}
&=
O(q^{-(\ell-1)/(2\ell)}\log q),
\quad q\to\infty,
\label{d1}
\\
\sup_{z\in\Gamma_q}
\norm{\abs{V}^{1/2}R_0(z)}_{2\ell}
&=
O(q^{-(\ell-1)/(4\ell)}\log q),
\quad q\to\infty.
\label{d2}
\end{align}
\end{lemma}
\begin{proof}
Let us prove \eqref{d1}.
Using the estimate \eqref{b1}, we get
for $z\in\Gamma_q$:
\begin{multline*}
\norm{\abs{V}^{1/2}R_0(z)\abs{V}^{1/2}}_{\ell}
\leq
\sum_{k=0}^\infty
\frac{\norm{\abs{V}^{1/2}P_k\abs{V}^{1/2}}_{\ell}}{\abs{\lambda_k-z}}
\leq
\sum_{k=0}^\infty\frac{C(1+k)^{-(\ell-1)/2\ell}}{\abs{\lambda_k-z}}
\\
\leq
C\int_0^{q-1}\frac{(1+x)^{-(\ell-1)/2\ell}}{\abs{B(2x+1)-z}}dx
+C\int_{q+1}^\infty\frac{(1+x)^{-(\ell-1)/2\ell}}{\abs{B(2x+1)-z}}dx
+
O(q^{-(\ell-1)/2\ell})
\\
=O(q^{-(\ell-1)/2\ell}\log q),
\end{multline*}
as $q\to\infty$.
This proves \eqref{d1}.
Using the fact that
$$
\norm{\abs{V}^{1/2}P_q}_{2\ell}^2
=
\norm{\abs{V}^{1/2}P_q\abs{V}^{1/2}}_{\ell},
$$
one proves the estimate \eqref{d2} in the same way.
\end{proof}
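The origin of the logarithmic factor in \eqref{d1} can be illustrated on a scalar caricature of the sum above, with $\lambda_k=B(2k+1)$, model weights $(1+k)^{-(\ell-1)/2\ell}$ and $z$ a point of $\Gamma_q$ (a numerical illustration only, not part of the proof; the truncation at $k<40q$ stands in for the convergent tail):

```python
import numpy as np

B = 1.0
ell = 2
a = (ell - 1) / (2 * ell)  # the exponent (l-1)/(2l) of Theorem th13(i)

def model_sum(q):
    # Scalar caricature of  sum_k ||V^{1/2} P_k V^{1/2}||_l / |lambda_k - z|
    # with lambda_k = B(2k+1) and z = lambda_q + B, a point of Gamma_q.
    k = np.arange(40 * q)
    lam = B * (2 * k + 1)
    z = B * (2 * q + 1) + B
    return np.sum((1.0 + k) ** (-a) / np.abs(lam - z))

# The lemma predicts model_sum(q) = O(q^{-a} log q); the normalized
# quantity below should therefore vary only slowly with q.
for q in (100, 1000, 10000):
    print(q, model_sum(q) * q**a / np.log(q))
```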
\subsection{Proof of Lemma 1.5.}\label{s3c}
The fact that $(P_qVP_q)^\ell\in S_1$ follows directly from Theorem~\ref{th13}.
Let, as above, $\Gamma_q$ be a positively oriented circle with
center $\lambda_q$ and radius $B$. Let $q$ be sufficiently large
so that (see Proposition~\ref{p11}) the contour $\Gamma_q$ does
not intersect the spectrum of $H$. We will use the formula
\begin{equation} \label{32} (H-\lambda_q)^\ell \mathds{1}_{(\lambda_q - B, \lambda_q +
B)}(H) = -\frac{1}{2\pi i} \int_{\Gamma_q} (z-\lambda_q)^\ell
R(z)dz, \end{equation} where $R(z)= (H-zI)^{-1}$. Let us expand the resolvent
$R(z)$ in the r.h.s. of \eqref{32} in the standard perturbation
series:
\begin{equation} \label{37} R(z) = R_0(z) + \sum_{j=1}^{\infty} (-1)^j
R_0(z)(V R_0(z))^j. \end{equation} Let us discuss the convergence of this
series for $z\in\Gamma_q$ and $q$ large. Denote $W=\abs{V}^{1/2}$,
$W_0=\sign(V)$. For $j\geq\ell$, we have
\begin{multline}
\norm{R_0(z)(VR_0(z))^j}_{1}
=
\norm{(R_0(z)W)(W_0WR_0(z)W)^{j-1}W_0(WR_0(z))}_{1}
\\
\leq
\norm{R_0(z)W}_{2\ell}
\norm{WR_0(z)W}_{\ell}^{j-1}
\norm{WR_0(z)}_{2\ell},
\quad
j\geq\ell.
\label{d1a}
\end{multline}
Applying Lemma~\ref{lma.d1}, we see that the tail ($j\geq\ell$)
of the series on the r.h.s. of \eqref{37} converges in the trace norm
for $z\in\Gamma_q$ and $q$ sufficiently large
(note that the terms with $j<\ell$ need not be trace class,
so the series as a whole converges only in the operator norm).
Next, it is easy to see that the integrals
$$
\int_{\Gamma_q} (z-\lambda_q)^\ell R_0(z)(VR_0(z))^j dz
$$
with $j<\ell$ vanish (since the integrand is analytic inside
$\Gamma_q$). Thus, recalling \eqref{d1a}, we obtain that the
operator $(H-\lambda_q)^\ell \mathds{1}_{(\lambda_q - B, \lambda_q +
B)}(H)$ belongs to the trace class and
\begin{multline}
\Tr\{(H- \lambda_q)^\ell \mathds{1}_{(\lambda_q - B, \lambda_q +
B)}(H)\}
\\
=
\label{38}
-\frac{1}{2\pi i} \sum_{j=\ell}^{\infty} (-1)^j \int_{\Gamma_q} (z -
\lambda_q)^\ell
\Tr [R_0(z) (VR_0(z))^j]dz.
\end{multline}
Integrating by parts in each term of this series and computing
the term with $j=\ell$ by
the residue theorem, we obtain
\begin{multline}
\Tr\{(H - \lambda_q)^\ell
\mathds{1}_{(\lambda_q - B, \lambda_q +B)}(H)\}
\\
=
\label{310}
\Tr(P_qVP_q)^\ell + \frac{\ell}{2\pi i}
\sum_{j=\ell+1}^{\infty} \frac{(-1)^j}{j}
\int_{\Gamma_q} (z - \lambda_q)^{\ell-1}
\Tr (V R_0(z))^j dz.
\end{multline}
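For the reader's convenience, here is a sketch of the two elementary facts behind the passage from \eqref{38} to \eqref{310}. First, by the cyclicity of the trace and $\frac{d}{dz}R_0(z)=R_0(z)^2$, one has
$$
\Tr[R_0(z)(VR_0(z))^j]=\frac1j\frac{d}{dz}\Tr(VR_0(z))^j,
$$
so an integration by parts over the closed contour $\Gamma_q$ yields
$$
\int_{\Gamma_q}(z-\lambda_q)^\ell\Tr[R_0(z)(VR_0(z))^j]\,dz
=-\frac{\ell}{j}\int_{\Gamma_q}(z-\lambda_q)^{\ell-1}\Tr(VR_0(z))^j\,dz.
$$
Second, for $j=\ell$ only the singular parts $R_0(z)=P_q/(\lambda_q-z)+O(1)$ of all $\ell+1$ resolvent factors contribute to the residue at $z=\lambda_q$, which gives
$$
-\frac{(-1)^\ell}{2\pi i}\int_{\Gamma_q}(z-\lambda_q)^\ell\Tr[R_0(z)(VR_0(z))^\ell]\,dz
=\Tr(P_qVP_q)^\ell.
$$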
It remains to estimate the series in the r.h.s.
of \eqref{310}.
This can be easily done by using Lemma~\ref{lma.d1}.
Similarly to \eqref{d1a}, we have
$$
\abs{\Tr(VR_0(z))^j}
\leq
\norm{WR_0(z)W}^j_{j} \leq \norm{WR_0(z)W}^j_{\ell}, \quad j \geq \ell,
$$
and so
$$
\left\lvert
\int_{\Gamma_q} (z - \lambda_q)^{\ell-1}
\Tr (V R_0(z))^j dz
\right\rvert
\leq
C_1(C_2 q^{-(\ell-1)/2\ell}\log q)^j,
$$
for all sufficiently large $q$.
Thus, the series in the r.h.s. of \eqref{310}
can be estimated by
$$
C_1\sum_{j=\ell+1}^\infty C_2^j q^{-\frac{\ell-1}{2\ell}j}(\log q)^j.
$$
For all sufficiently large $q$,
this series converges and can be estimated
as $o(q^{-(\ell-1)/2})$.
\qed
\subsection{Proof of Theorem 1.3.}\label{s32}
Let $R \geq C_1$, where $C_1$ is the constant from
Proposition~\ref{p11}. Then
\begin{equation} \label{312}
\mathds{1}_{[-R,R]}(\lambda_q^{1/2}(H-\lambda_q)) = \mathds{1}_{(\lambda_q -
B, \lambda_q + B)}(H), \quad q \in {\mathbb Z}_+. \end{equation}
Next, choose $R\geq C_1$ so large that ${\text{supp}} \varrho \subset [-R,
R]$. Let $\ell_0$ be an even natural number satisfying
$\ell_0>1/(\rho-1)$. Since $\varrho (\lambda)$ by assumption
vanishes near $\lambda = 0$, the function
$\varrho(\lambda)/\lambda^{\ell_0}$ is smooth. Applying the
Weierstrass approximation theorem to this function on the interval
$[-R,R]$, we obtain that for any $\varepsilon>0$ there exist
polynomials $P_+$, $P_-$ such that
\begin{equation} \label{313}
P_{\pm}(0) = P'_{\pm}(0) =\dots=P^{(\ell_0-1)}_{\pm}(0)=0,
\end{equation}
\begin{equation} \label{314}
P_-(\lambda) \leq \varrho(\lambda) \leq P_+(\lambda), \quad \forall
\lambda \in [-R, R],
\end{equation}
\begin{equation} \label{315}
P_+(\lambda) - P_-(\lambda) \leq \varepsilon \lambda^{\ell_0}, \quad \forall
\lambda \in [-R, R].
\end{equation}
Thus, we can write
$$
\mathds{1}_{[-R,R]}(\lambda) P_-(\lambda) \leq \varrho(\lambda) \leq \mathds{1}_{[-R,R]}(\lambda)
P_+(\lambda),
$$
for any $\lambda \in [-R, R]$, and therefore
$$
\Tr \{\mathds{1}_{[-R,R]}(\lambda_q^{1/2}(H-\lambda_q))\,
P_-(\lambda_q^{1/2}(H-\lambda_q))\} \leq \Tr \, \varrho (\lambda_q^{1/2}(H-\lambda_q))
$$
$$
\leq \Tr \{\mathds{1}_{[-R,R]}(\lambda_q^{1/2}(H-\lambda_q))\, P_+(\lambda_q^{1/2}(H-\lambda_q))\}.
$$
By \eqref{312} it follows that for all sufficiently large $q$,
\begin{multline}
\Tr \{\mathds{1}_{(\lambda_q - B, \lambda_q + B)}(H) \,
P_-(\lambda_q^{1/2}(H-\lambda_q))\}
\leq
\Tr \varrho
(\lambda_q^{1/2}(H-\lambda_q))
\\
\leq
\Tr\{\mathds{1}_{(\lambda_q - B, \lambda_q + B)}(H) \,
P_+(\lambda_q^{1/2}(H-\lambda_q))\}. \label{316}
\end{multline}
By Lemma~\ref{l31} and Theorem~\ref{th13}(iii),
we have
\begin{multline*}
\lim_{q\to\infty} \lambda_q^{-1/2}
\Tr\{\mathds{1}_{(\lambda_q - B,
\lambda_q + B)}(H) \, P_\pm(\lambda_q^{1/2}(H-\lambda_q))\}
\\
=\frac{1}{2\pi} \int_{{\mathbb T}} \,
\int_{{\mathbb R}} \; P_\pm(\widetilde{V}(\omega, b)) \, db \, d \omega =
\int_{{\mathbb R}} P_\pm(t) d\mu(t).
\end{multline*}
Combining this with \eqref{316}, we get
$$
\limsup_{q \to \infty} \lambda_q^{-1/2}
\Tr \varrho (\lambda_q^{1/2}(H-\lambda_q))
\leq \int_{{\mathbb R}} P_+(\lambda) d\mu (\lambda),
$$
$$
\liminf_{q \to \infty} \lambda_q^{-1/2}
\Tr \varrho (\lambda_q^{1/2}(H-\lambda_q))
\geq \int_{{\mathbb R}} P_-(\lambda) d\mu (\lambda).
$$
Finally, by \eqref{315},
$$
\int_{{\mathbb R}} (P_+(\lambda) - P_-(\lambda)) d\mu (\lambda) \leq
\varepsilon \int_{{\mathbb R}} \lambda^{\ell_0} d\mu (\lambda).
$$
By \eqref{a9}, the integral in the r.h.s. is finite.
Since $\varepsilon>0$ can be taken arbitrarily small, we obtain the
required statement.
\qed
\appendix
\section{}
\subsection{Proof of formula (3.6)}
By definition,
$$
\widehat{\Psi_q}(\zeta) = \frac{2(-1)^q}{(2\pi)^2} \int_{{\mathbb R}^{2}}
e^{-iz\zeta} {\rm L}_q(2|z|^2) e^{-|z|^2} dz =
\frac{(-1)^q}{(2\pi)^2} \int_{{\mathbb R}^{2}}
e^{-iu\zeta/\sqrt{2}} {\rm L}_q(|u|^2) e^{-|u|^2/2} du
$$
(see \eqref{17a}). Further, by \cite[Eq. 22.12.6]{abst} we
have
$$
{\rm L}_q(|u|^2) = {\rm L}_q(u_1^2 + u_2^2) = \sum_{m = 0}^q
{\rm L}_m^{(-1/2)}(u_1^2) {\rm L}_{q-m}^{(-1/2)}(u_2^2), \quad u
\in {\mathbb R}^{2},
$$
where ${\rm L}_m^{(-1/2)}$, $m \in {\mathbb Z}_+$, are the generalized
Laguerre
polynomials of order $-1/2$. By \cite[Eq. 22.5.38]{abst}
we have
$$
{\rm L}_m^{(-1/2)}(t^2) = \frac{(-1)^m}{m! 2^{2m}} {\rm
H}_{2m}(t), \quad t \in {\mathbb R}, \quad m \in {\mathbb Z}_+,
$$
where ${\rm H}_m$ are the
Hermite polynomials.
Therefore,
$$
{\rm L}_q(|u|^2) = \frac{(-1)^q}{2^{2q}} \sum_{m = 0}^q \frac{
{\rm H}_{2m}(u_1) {\rm H}_{2q - 2m}(u_2)}{m! (q-m)!},
$$
and
$$
\widehat{\Psi_q}(\zeta) =
$$
$$
\frac{1}{2^{2q+2}\pi^2} \sum_{m = 0}^q \frac{
1}{m! (q-m)!} \int_{\mathbb R} e^{-iu_1\zeta_1/\sqrt{2}} {\rm
H}_{2m}(u_1)e^{-u_1^2/2} du_1 \int_{\mathbb R} e^{-iu_2\zeta_2/\sqrt{2}} {\rm
H}_{2q-2m}(u_2)e^{-u_2^2/2} du_2.
$$
It is well known that the functions ${\rm
H}_{m}(t)e^{-t^2/2}$, $t \in {\mathbb R}$, $m \in {\mathbb Z}_+$, are
eigenfunctions of the unitary Fourier transform with
eigenvalues equal to $(-i)^m$ (see e.g. \cite{birsol}). Hence,
$$
\widehat{\Psi_q}(\zeta) = \frac{(-1)^q}{2^{2q+1}\pi} \sum_{m = 0}^q \frac{
1}{m! (q-m)!} {\rm H}_{2m}(2^{-1/2} \zeta_1) {\rm H}_{2q - 2m}(2^{-1/2}
\zeta_2)e^{-|\zeta|^2/4} =
$$
$$
(2\pi)^{-1}{\rm L}_q(2^{-1}|\zeta|^2)
e^{-|\zeta|^2/4} = (-1)^q \Psi_q(2^{-1} \zeta)/2.
$$
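Both classical identities used above can be checked numerically with SciPy's special-function routines (an illustrative verification at arbitrarily chosen arguments, not part of the proof):

```python
from math import factorial

from scipy.special import eval_laguerre, eval_genlaguerre, eval_hermite

# Addition formula: L_q(u1^2 + u2^2) = sum_m L_m^(-1/2)(u1^2) L_{q-m}^(-1/2)(u2^2)
q, u1, u2 = 6, 0.7, -1.3
lhs = eval_laguerre(q, u1**2 + u2**2)
rhs = sum(eval_genlaguerre(m, -0.5, u1**2) * eval_genlaguerre(q - m, -0.5, u2**2)
          for m in range(q + 1))
print(abs(lhs - rhs))  # ~ machine precision

# Relation to Hermite polynomials: L_m^(-1/2)(t^2) = (-1)^m / (m! 2^(2m)) H_{2m}(t)
m, t = 4, 0.9
print(abs(eval_genlaguerre(m, -0.5, t**2)
          - (-1)**m / (factorial(m) * 2**(2 * m)) * eval_hermite(2 * m, t)))
```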
\subsection{Proof of estimate (3.10)}
Denote $u_q(x)=e^{-x/2}\Lag_q(x)$, $v_q(x)=J_0(\sqrt{(4q+2)x})$.
Using the differential equations for the Laguerre polynomials and
for the Bessel functions, one easily checks that $u_q$ and $v_q$
satisfy
\begin{align*}
xu''_q(x)+u'_q(x)+(q+\tfrac12)u_q(x)&=\tfrac{x}{4}u_q(x),
\\
xv''_q(x)+v'_q(x)+(q+\tfrac12)v_q(x)&=0.
\end{align*}
Using these differential equations and the initial conditions
for $u_q(x)$, $v_q(x)$ at $x=0$, it is easy to verify that
$u_q$ satisfies the integral equation
\begin{gather}
u_q=v_q+K_qu_q,
\quad
(K_q f)(x)=\int_0^x F_q(x,y)f(y)dy,
\label{app1}
\\
F_q(x,y)=-\frac\pi4 y
\bigl(J_0(\sqrt{(4q+2)x})Y_0(\sqrt{(4q+2)y})-Y_0(\sqrt{(4q+2)x})J_0(\sqrt{(4q+2)y})\bigr).
\notag
\end{gather}
This argument is borrowed from \cite{suetin}. Iterating \eqref{app1}, we obtain
\begin{equation}
u_q-v_q=K_qv_q+K_q^2u_q.
\label{app2}
\end{equation}
Now it remains to estimate the two terms in the r.h.s. of \eqref{app2}
in an appropriate way. Using the estimates
$$
\abs{J_0(x)}\leq C/\sqrt{x},
\qquad
\abs{Y_0(x)}\leq C/\sqrt{x},
\qquad x>0,
$$
we obtain
$$
\abs{F_q(x,y)}\leq C q^{-1/2}x^{-1/4}y^{3/4},
\qquad
q\in{\mathbb N}, \quad x>0.
$$
This yields
\begin{equation}
\Abs{\int_0^x F_q(x,y)v_q(y)dy} \leq C q^{-3/4} x^{-1/4}\int_0^x
y^{1/2}dy =Cq^{-3/4}x^{5/4}. \label{app5}
\end{equation}
Next, using the estimate
$\abs{u_q(x)}\leq 1$
(see \cite[Eq. 22.14.12]{abst}),
we obtain
$$
\abs{(K_qu_q)(x)}
\leq
Cq^{-1/2} x^{-1/4}\int_0^x y^{3/4}dy
=
Cq^{-1/2}x^{3/2},
$$
and so
\begin{equation}
\abs{(K_q^2u_q)(x)}
\leq
Cq^{-1}x^{-1/4}\int_0^x y^{\frac32+\frac34}dy
=
Cq^{-1}x^3.
\label{app6}
\end{equation}
Combining \eqref{app2} with \eqref{app5} and \eqref{app6},
we obtain the required estimate \eqref{b7}.\\
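The resulting approximation $e^{-x/2}\Lag_q(x)=J_0(\sqrt{(4q+2)x})+O(q^{-3/4})$ on bounded intervals can be observed numerically (an illustration only; the grid and the values of $q$ are arbitrary):

```python
import numpy as np
from scipy.special import eval_laguerre, jv

def max_err(q, xs=np.linspace(0.0, 1.0, 400)):
    # sup over 0 <= x <= 1 of | e^{-x/2} L_q(x) - J_0(sqrt((4q+2) x)) |
    u = np.exp(-xs / 2) * eval_laguerre(q, xs)
    v = jv(0, np.sqrt((4 * q + 2) * xs))
    return float(np.max(np.abs(u - v)))

print(max_err(20), max_err(100))  # the error shrinks as q grows
```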
{\bf Acknowledgements.} A. Pushnitski and G. Raikov were partially
supported by {\em N\'ucleo Cient\'ifico ICM} P07-027-F ``{\em
Mathematical Theory of Quantum and Classical Magnetic Systems''}.
C. Villegas-Blas was partially supported by the same {\em N\'ucleo
Cient\'ifico} within the framework of the {\em International
Spectral Network}, and by PAPIIT-UNAM 109610-2. G. Raikov was
partially supported by the UNAM, Cuernavaca, during his stay in
2008, and by the Chilean Science
Foundation {\em Fondecyt} under Grant 1090467. \\
The authors are grateful for hospitality and financial
support to the Bernoulli Center, EPFL, Lausanne, where this work
was initiated within the framework of the Program ``{\em Spectral
and Dynamical Properties of Quantum Hamiltonians}'', January - June
2010.
{\sc Alexander Pushnitski}\\
Department of Mathematics,\\
King's College London,\\
Strand, London, WC2R 2LS, United Kingdom\\
E-mail: [email protected]\\
{\sc Georgi Raikov}\\
Facultad de
Matem\'aticas,\\ Pontificia Universidad Cat\'olica de Chile,\\
Vicu\~na Mackenna 4860, Santiago de Chile\\
E-mail: [email protected]\\
{\sc Carlos Villegas-Blas}\\
Instituto de Matem\'aticas,\\
Universidad Nacional Aut\'onoma de M\'exico, \\
Cuernavaca, Mexico\\
E-mail: [email protected]
\end{document}
\begin{document}
\preprint{}
\title[ ]{Quantum renormalization group of the XY model in two dimensions}
\author{M. Usman$^{1}$}
\author{Asif Ilyas$^{2}$}
\author{Khalid Khan$^{1}$}
\email{[email protected]}
\affiliation{$^{1}$Department of Physics, Quaid-i-Azam University, Islamabad}
\affiliation{$^{2}$ Kohat University of Science and Technology, Kohat, KPK}
\date{May 12 2015}
\begin{abstract}
We investigate entanglement and the quantum phase transition (QPT) in the
two-dimensional anisotropic spin-1/2 Heisenberg XY model, using the quantum
renormalization group (QRG) method on a square lattice of $N\times N$ sites.
The entanglement, measured through the geometric average of concurrences, is
calculated after each QRG step. We show that the concurrence reaches a
nonzero value at the critical point more rapidly than in the one-dimensional
case. The relationship between the entanglement and the quantum phase
transition is studied. The evolution of the entanglement develops two
saturated values corresponding to two different phases. We compute the first
derivative of the concurrence, which is found to be discontinuous at the
critical point $\gamma=0$ and indicates a second-order phase transition in
the spin system. Further, the scaling behavior of the system is
investigated by computing the first derivative of the concurrence in terms
of the system size.
\end{abstract}
\maketitle
\section{INTRODUCTION}
In quantum systems, entanglement is a resource that reveals the difference
between classical and quantum physics \cite{xydm1}. Its role is
considered vital for implementing quantum information tasks in
innovative ways, such as quantum computation, quantum cryptography and
quantum teleportation \cite{NC 2000}. In recent years, the study of
entanglement in strongly correlated systems has attracted much
attention \cite{vedral intro(1), vedral intro(3)}, because it can describe
not only information processing through the correlation of spins \cite
{vedral intro(7)} but also a critical phenomenon, the quantum phase transition
(QPT) \cite{vedral intro(2)}. Quantum entanglement is therefore
considered the common ground between quantum information theory (QIT)
and condensed matter physics \cite{xydm8, Langari xxz 1}. Recently, much
effort has been devoted to the study of Heisenberg spin models;
in particular, one-dimensional spin models are the most explored area of
research, as these systems are exactly solvable and give quantitative
results \cite{xy3, xy4, xy5, ising, xxz, xxzdm, xy, xydm, vedral spin1/2(1),
vedral spin 1/2(2)}.
\textit{In higher dimensions, almost all analyses of entanglement and
the QPT have been carried out through numerical simulations} \cite{NS1, NS2}%
\textit{, and the phase diagram has also been studied} \cite{PD1, PD2,
PD3}\textit{. Using Monte Carlo simulations, the concurrence was employed as
an entanglement measure in the two-dimensional XY and XXZ models} \cite{NS1, NS2}%
\textit{. In $d$ dimensions, pairwise entanglement was studied in the XXZ model}
\cite{NS3, NS4}\textit{. The concurrence was used to calculate the quantum
entanglement in the spin-}$1/2$\textit{\ ladder with four-spin ring exchange
by the exact diagonalization method} \cite{NS5}.
The density-matrix renormalization group method is a leading numerical
technique for exploring ground-state properties of many-body
interactions in low dimensions \cite{xydm16, xydm17, xydm18, xydm19}.
Alongside it, the quantum renormalization group (QRG) method is a technique
that deals with large systems analytically. At low temperatures,
quantum fluctuations strongly affect the behavior of spin systems. In this
regime, the ground state can be used to measure
entanglement through the density matrix, and the non-analytic
behavior of the derivative of the entanglement signals the QPT
\cite{xxz, xxzdm, xy, xydm}. Such approaches can be implemented within the QRG
method.
The QRG method was used to solve exactly the one-dimensional Ising, XXZ and
XY models \cite{ising, xxz, xxzdm, xy, xydm}, where it was found that the
nearest-neighbor interaction exhibits a QPT near the critical point. For
deeper insight, the next-nearest-neighbor interaction was studied in the XXZ
model \cite{xydm30, xydm31}. The RG method was also applied to the one-dimensional
Ising and XYZ models in the presence of a magnetic field \cite
{ising, xyz}. The Jordan-Wigner transformation was used to solve the Ising
model exactly, where it was found that near the critical point this model
exhibits the maximum value of entanglement for the second-nearest neighbors
\cite{vedral intro(1)}. It was shown that, in the thermodynamic limit, the
ground-state entanglement of mutually interacting spin-1/2 particles in a
magnetic field shows cusp-like singularities exactly at the critical point
\cite{vedral intro(1)}. \textit{The QRG method in two-dimensional spin
systems is a step toward better understanding and answering
open questions such as the computational complexity of finding ground states,
ground-state properties, the energy spectrum, the correlation length, criticality,
the quantum phase transition and their connection with entanglement.} Analogous
to Kadanoff's block renormalization group approach in one-dimensional spin
systems \cite{xy30}, we apply the method to two-dimensional spin systems by
dividing the square lattice of spins into blocks with an odd number of spins,
which span the whole lattice.
The rest of the paper is arranged as follows. In Sec. II, we present the
model and describe the mathematical formalism used to calculate the
renormalized coupling constant and anisotropy parameter; the effective
Hamiltonian of the system is obtained in terms of the renormalized constants. In
Sec. III we investigate the block-block entanglement and its non-analytic
behavior, which is related to the QPT, and also study the scaling behavior in
this context. The results are summarized in Sec. IV.
\section{QUANTUM RENORMALIZATION OF XY MODEL IN TWO-DIMENSIONS}
The Kadanoff block approach has been used to implement the QRG method in
one-dimensional spin models \cite{xy3, xy4, xy5, ising, xxz, xxzdm, xy, xydm}.
In this approach, a fixed point is reached after a number of iterations by
virtue of the reduction of the degrees of freedom. We extend this idea to
a two-dimensional square lattice of spins, in which the
whole lattice is spanned by square blocks, each consisting of five spins
(FIG. 1), with one spin at the center and four at the corners. Using this
construction we obtain the renormalized parameters that produce an effective
Hamiltonian of the same form as the original one. The Hamiltonian of the two-dimensional
Heisenberg XY model on a square lattice of $
N\times N$ spins can be written as
\begin{equation}
H(J,\gamma )=\frac{J}{4}\sum_{i=1}^{N}\sum_{j=1}^{N}((1+\gamma )(\sigma
_{i,j}^{x}\sigma _{i+1,j}^{x}+\sigma _{i,j}^{x}\sigma
_{i,j+1}^{x})+(1-\gamma )(\sigma _{i,j}^{y}\sigma _{i+1,j}^{y}+\sigma
_{i,j}^{y}\sigma _{i,j+1}^{y})), \label{1}
\end{equation}
where $J$ is the exchange coupling constant, $\gamma $ is the anisotropy
parameter and $\sigma ^{x},\sigma ^{y}$ are the Pauli matrices. Depending on
the values of $\gamma $ the model reduces to different classes such as $XX$
model for $\gamma =0,$ Ising model for $\gamma =1$ and Ising universality
class for $0<\gamma \leq 1$ \cite{xy22}.
\begin{figure}
\caption{(Color online) 2-dimensional square lattice is
depicted by considering each block of five spins.}
\label{FIG.1.}
\end{figure}
We begin by dividing the total Hamiltonian into two parts as
\begin{equation}
H=H^{B}+H^{BB}, \label{2}
\end{equation}
where $H^{B}$ and $H^{BB}$ are the block and the interblock Hamiltonians
respectively. The explicit form of these Hamiltonians can be written as
\begin{align}
H^{B}& =\frac{J}{4}\sum\limits_{L}^{N/5}((1+\gamma )(\sigma _{L,1}^{x}\sigma
_{L,2}^{x}+\sigma _{L,1}^{x}\sigma _{L,3}^{x}+\sigma _{L,1}^{x}\sigma
_{L,4}^{x}+\sigma _{L,1}^{x}\sigma _{L,5}^{x}) \notag \\
& +(1-\gamma )(\sigma _{L,1}^{y}\sigma _{L,2}^{y}+\sigma _{L,1}^{y}\sigma
_{L,3}^{y}+\sigma _{L,1}^{y}\sigma _{L,4}^{y}+\sigma _{L,1}^{y}\sigma
_{L,5}^{y})), \label{3}
\end{align}
and
\begin{align}
H^{BB}& =\sum\limits_{L}^{N/5}\frac{J}{4}((1+\gamma )(\sigma
_{L,2}^{x}\sigma _{L+1,3}^{x}+\sigma _{L,2}^{x}\sigma _{L+1,4}^{x}+\sigma
_{L,2}^{x}\sigma _{L+2,5}^{x}+\sigma _{L,3}^{x}\sigma _{L+2,4}^{x} \notag \\
& +\sigma _{L,3}^{x}\sigma _{L+2,5}^{x}+\sigma _{L,4}^{x}\sigma
_{L+3,5}^{x})+(1-\gamma )(\sigma _{L,2}^{y}\sigma _{L+1,3}^{y}+\sigma
_{L,2}^{y}\sigma _{L+1,4}^{y} \notag \\
& +\sigma _{L,2}^{y}\sigma _{L+2,5}^{y}+\sigma _{L,3}^{y}\sigma
_{L+2,4}^{y}+\sigma _{L,3}^{y}\sigma _{L+2,5}^{y}+\sigma _{L,4}^{y}\sigma
_{L+3,5}^{y})). \label{4}
\end{align}
The $L$th block Hamiltonian can be written as
\begin{align}
H_{L}^{B}& =\frac{J}{4}((1+\gamma )(\sigma _{L,1}^{x}\sigma
_{L,2}^{x}+\sigma _{L,1}^{x}\sigma _{L,3}^{x}+\sigma _{L,1}^{x}\sigma
_{L,4}^{x}+\sigma _{L,1}^{x}\sigma _{L,5}^{x}) \notag \\
& +(1-\gamma )(\sigma _{L,1}^{y}\sigma _{L,2}^{y}+\sigma _{L,1}^{y}\sigma
_{L,3}^{y}+\sigma _{L,1}^{y}\sigma _{L,4}^{y}+\sigma _{L,1}^{y}\sigma
_{L,5}^{y})). \label{5}
\end{align}
The interblock interactions are indicated by the arrows in FIG. 1
and are represented mathematically by Eq. \ref{4}. We choose blocks with an odd
number of spins, which produces a degenerate ground state and
makes it possible to construct the projection operator in the renamed basis
of the ground states. In terms of matrix product states \cite{xy29}, the
eigenvalues and eigenvectors of the single-block
Hamiltonian can be obtained. The degenerate lowest energy can be
written as
\begin{equation}
E_{0}=-\frac{1}{2}J\sqrt{5+5\gamma ^{2}+\alpha _{1}}, \label{6}
\end{equation}
and the corresponding states in terms of eigenstates $\left\vert \uparrow
\right\rangle $, $\left\vert \downarrow \right\rangle $ of $\sigma ^{z}$ are
\begin{align}
\left\vert \phi _{0}^{1}\right\rangle & =\gamma _{1}(\left\vert \uparrow
\uparrow \uparrow \uparrow \downarrow \right\rangle +\left\vert \uparrow
\uparrow \uparrow \downarrow \uparrow \right\rangle +\left\vert \uparrow
\uparrow \downarrow \uparrow \uparrow \right\rangle +\left\vert \uparrow
\downarrow \uparrow \uparrow \uparrow \right\rangle ) \notag \\
& +\gamma _{2}(\left\vert \uparrow \uparrow \downarrow \downarrow \downarrow
\right\rangle +\left\vert \uparrow \downarrow \uparrow \downarrow \downarrow
\right\rangle +\left\vert \uparrow \downarrow \downarrow \uparrow \downarrow
\right\rangle +\left\vert \uparrow \downarrow \downarrow \downarrow \uparrow
\right\rangle ) \notag \\
& +\gamma _{3}\left\vert \downarrow \uparrow \uparrow \uparrow \uparrow
\right\rangle +\gamma _{4}(\left\vert \downarrow \uparrow \uparrow
\downarrow \downarrow \right\rangle +\left\vert \downarrow \uparrow
\downarrow \uparrow \downarrow \right\rangle +\left\vert \downarrow \uparrow
\downarrow \downarrow \uparrow \right\rangle \notag \\
& +\left\vert \downarrow \downarrow \uparrow \uparrow \downarrow
\right\rangle +\left\vert \downarrow \downarrow \uparrow \downarrow \uparrow
\right\rangle +\left\vert \downarrow \downarrow \downarrow \uparrow \uparrow
\right\rangle )+\gamma _{5}\left\vert \downarrow \downarrow \downarrow
\downarrow \downarrow \right\rangle , \label{7}
\end{align}
and
\begin{align}
\left\vert \phi _{0}^{2}\right\rangle & =\gamma _{6}\left\vert \uparrow
\uparrow \uparrow \uparrow \uparrow \right\rangle +\gamma _{7}(\left\vert
\uparrow \uparrow \uparrow \downarrow \downarrow \right\rangle +\left\vert
\uparrow \uparrow \downarrow \uparrow \downarrow \right\rangle +\left\vert
\uparrow \uparrow \downarrow \downarrow \uparrow \right\rangle \notag \\
& +\left\vert \uparrow \downarrow \uparrow \uparrow \downarrow \right\rangle
+\left\vert \uparrow \downarrow \uparrow \downarrow \uparrow \right\rangle
+\left\vert \uparrow \downarrow \downarrow \uparrow \uparrow \right\rangle
)+\gamma _{8}\left\vert \uparrow \downarrow \downarrow \downarrow \downarrow
\right\rangle \notag \\
& +\gamma _{9}(\left\vert \downarrow \uparrow \uparrow \uparrow \downarrow
\right\rangle +\left\vert \downarrow \uparrow \uparrow \downarrow \uparrow
\right\rangle +\left\vert \downarrow \uparrow \downarrow \uparrow \uparrow
\right\rangle +\left\vert \downarrow \downarrow \uparrow \uparrow \uparrow
\right\rangle ) \notag \\
& +\gamma _{10}(\left\vert \downarrow \uparrow \downarrow \downarrow
\downarrow \right\rangle +\left\vert \downarrow \downarrow \uparrow
\downarrow \downarrow \right\rangle +\left\vert \downarrow \downarrow
\downarrow \uparrow \downarrow \right\rangle +\left\vert \downarrow
\downarrow \downarrow \downarrow \uparrow \right\rangle ). \label{8}
\end{align}
Expressions for $\alpha _{1}$ and the $\gamma _{i}$'s in terms of
$\gamma $ are given in the appendix.
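The twofold degeneracy of the lowest level of the block Hamiltonian of Eq. \ref{5} can be confirmed by direct diagonalization (a minimal numerical sketch; the values $J=1$ and $\gamma=0.3$ are arbitrary, and the $\gamma=1$ check uses the Ising-star limit $H_{L}^{B}=\tfrac{J}{2}\sigma_{L,1}^{x}\sum_{j}\sigma_{L,j}^{x}$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n=5):
    # Embed a single-site operator at position `site` of an n-spin block.
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def block_hamiltonian(J, gamma):
    # Eq. (5): center spin (site 0) coupled to the four corner spins.
    H = np.zeros((32, 32), dtype=complex)
    for j in range(1, 5):
        H += (J / 4) * ((1 + gamma) * embed(sx, 0) @ embed(sx, j)
                        + (1 - gamma) * embed(sy, 0) @ embed(sy, j))
    return H

E = np.linalg.eigvalsh(block_hamiltonian(1.0, 0.3))
print(E[0], E[1])  # the two lowest levels coincide (degenerate ground state)
```

The degeneracy follows from the fact that $\prod_i\sigma_i^x$ and $\prod_i\sigma_i^z$ both commute with the block Hamiltonian but anticommute with each other on an odd number of sites.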
\begin{figure}
\caption{(Color online) The renormalized anisotropy parameter $\acute{\gamma}$ as a function of $\gamma$.}
\label{FIG.2.}
\end{figure}
Our aim is to construct the effective Hamiltonian $H^{eff}$ in the
renormalized subspace by finding the renormalized coupling constant and
anisotropy parameter. To this end, the projection operator $P_{0}$ is built
from the degenerate ground-state eigenvectors of the block Hamiltonian
$H^{B}$. The effective Hamiltonian is
related to the original Hamiltonian through \cite{xy30}
\begin{equation}
H^{eff}=P_{0}^{\dag }HP_{0}, \label{9}
\end{equation}
where $P_{0}^{\dag }$ is the Hermitian adjoint of $P_{0}$. Using the
perturbative method, we consider only the first order correction term. The
effective Hamiltonian is given by \cite{xxz, xy}
\begin{align}
H^{eff}& =H_{0}^{eff}+H_{1}^{eff} \notag \\
& =P_{0}^{\dag }H^{B}P_{0}+P_{0}^{\dag }H^{BB}P_{0}. \label{10}
\end{align}
In terms of the renamed states of the $L$th block, the projection operator $
P_{0}^{L}$ is defined as \cite{xxz, xy}
\begin{equation}
P_{0}^{L}=\left\vert \Uparrow \right\rangle _{L}\left\langle \phi
_{0}^{1}\right\vert +\left\vert \Downarrow \right\rangle _{L}\left\langle
\phi _{0}^{2}\right\vert , \label{11}
\end{equation}
where $P_{0}$ can be described in product form as
\begin{equation}
P_{0}=\prod\limits_{L}^{N/5}P_{0}^{L}, \label{12}
\end{equation}
and $\left\vert \Uparrow \right\rangle _{L}$ and $\left\vert \Downarrow
\right\rangle _{L}$ are the simple qubit states of the $L$th block representing the
effective site degrees of freedom. The renormalization of the Pauli matrices
is given as
\begin{equation}
P_{0}^{L}\text{ }\sigma _{i,L}^{\varepsilon }\text{ }P_{0}^{L}=\eta
_{i}^{\varepsilon }\text{ }\acute{\sigma}_{L}^{\varepsilon }\text{ \ \ \ \ }
(i=1,2,3,4,5\text{ };\text{ }\varepsilon =x,y), \label{13}
\end{equation}
where
\begin{eqnarray}
\eta _{1}^{x} &=&4\gamma _{10}\gamma _{2}+\gamma _{3}\gamma _{6}+6\gamma
_{4}\gamma _{7}+\gamma _{5}\gamma _{8}+4\gamma _{1}\gamma _{9}, \notag \\
\eta _{2}^{x} &=&\eta _{3}^{x}=\eta _{4}^{x}=\eta _{5}^{x} \notag \\
&=&\gamma _{10}(3\gamma _{4}+\gamma _{5})+3\gamma _{2}\gamma _{7}+\gamma
_{1}(\gamma _{6}+3\gamma _{7})+\gamma _{2}\gamma _{8}+\gamma _{9}(\gamma
_{3}-3\gamma _{4}), \notag \\
\eta _{1}^{y} &=&4\gamma _{10}\gamma _{2}-\gamma _{3}\gamma _{6}-6\gamma
_{4}\gamma _{7}-\gamma _{5}\gamma _{8}+4\gamma _{1}\gamma _{9}, \notag \\
\eta _{2}^{y} &=&\eta _{3}^{y}=\eta _{4}^{y}=\eta _{5}^{y} \notag \\
&=&\gamma _{10}(3\gamma _{4}-\gamma _{5})-3\gamma _{2}\gamma _{7}+\gamma
_{1}(-\gamma _{6}+3\gamma _{7})+\gamma _{2}\gamma _{8}+\gamma _{9}(\gamma
_{3}+3\gamma _{4}). \label{14}
\end{eqnarray}
The effective Hamiltonian of the renormalized two-dimensional spin lattice
is mapped onto the original Hamiltonian with renormalized coupling
parameters, i.e.,
\begin{equation}
H^{eff}=\frac{\acute{J}}{4}\sum_{p=1}^{N/5}\sum_{q=1}^{N/5}((1+\acute{\gamma
})(\sigma_{p,q}^{x}\sigma_{p+1,q}^{x}+\sigma_{p,q}^{x}\sigma_{p,q+1}^{x})+(1-
\acute{\gamma})(\sigma_{p,q}^{y}\sigma_{p+1,q}^{y}+\sigma_{p,q}^{y}
\sigma_{p,q+1}^{y})), \label{15}
\end{equation}
where
\begin{align}
\acute{J} & =j(\gamma_{10}^{2}(9\gamma_{4}^{2}+6\gamma\gamma_{4}\gamma
_{5}+\gamma_{5}^{2})+9\gamma_{2}^{2}\gamma_{7}^{2}+\gamma_{1}^{2}(\gamma
_{6}^{2}+6\gamma\gamma_{6}\gamma_{7}+9\gamma_{7}^{2})+6\gamma\gamma_{2}^{2}
\gamma_{7}\gamma_{8}+\gamma_{2}^{2}\gamma_{8}^{2} \notag \\
& +6\gamma\gamma_{2}\gamma_{3}\gamma_{7}\gamma_{9}+18\gamma_{2}\gamma
_{4}\gamma_{7}\gamma_{9}+2\gamma_{2}\gamma_{3}\gamma_{8}\gamma_{9}+6\gamma
\gamma_{2}\gamma_{4}\gamma_{8}\gamma_{9}+\gamma_{3}^{2}\gamma_{9}^{2}+6
\gamma\gamma_{3}\gamma_{4}\gamma_{9}^{2} \notag \\
&
+9\gamma_{4}^{2}\gamma_{9}^{2}+2\gamma_{1}(\gamma_{2}(3\gamma_{7}(3\gamma
\gamma_{7}+\gamma_{8})+\gamma_{6}(3\gamma_{7}+\gamma\gamma
_{8}))+(\gamma\gamma_{3}\gamma_{6}+3\gamma_{4}\gamma_{6}+3\gamma_{3}\gamma
_{7} \notag \\
& +9\gamma\gamma_{4}\gamma_{7})\gamma_{9})+2\gamma_{10}(\gamma_{1}(\gamma
_{5}\gamma_{6}+9\gamma_{4}\gamma_{7})+\gamma(9\gamma_{2}\gamma_{4}\gamma
_{7}+3\gamma_{1}(\gamma_{4}\gamma_{6}+\gamma_{5}\gamma_{7}) \notag \\
&
+\gamma_{2}\gamma_{5}\gamma_{8}+9\gamma_{4}^{2}\gamma_{9}+\gamma_{3}
\gamma_{5}\gamma_{9})+3(\gamma_{2}(\gamma_{5}\gamma_{7}+\gamma_{4}\gamma
_{8})+\gamma_{4}(\gamma_{3}+\gamma_{5})\gamma_{9}))), \label{16}
\end{align}
and
\begin{align}
\acute{\gamma}& =(2(3\gamma _{10}\gamma _{4}+3\gamma _{1}\gamma _{7}+\gamma
_{2}\gamma _{8}+\gamma _{3}\gamma _{9})(\gamma _{10}\gamma _{5}+\gamma
_{1}\gamma _{6}+3\gamma _{2}\gamma _{7}+3\gamma _{4}\gamma _{9})+\gamma
(\gamma _{10}^{2}(9\gamma _{4}^{2} \notag \\
& +\gamma _{5}^{2})+9\gamma _{2}^{2}\gamma _{7}^{2}+\gamma _{1}^{2}(\gamma
_{6}^{2}+9\gamma _{7}^{2})+\gamma _{2}^{2}\gamma _{8}^{2}+18\gamma
_{2}\gamma _{4}\gamma _{7}\gamma _{9}+2\gamma _{2}\gamma _{3}\gamma
_{8}\gamma _{9}+\gamma _{3}^{2}\gamma _{9}^{2}+9\gamma _{4}^{2}\gamma
_{9}^{2} \notag \\
& +6\gamma _{1}(\gamma _{2}\gamma _{7}(\gamma _{6}+\gamma _{8})+(\gamma
_{4}\gamma _{6}+\gamma _{3}\gamma _{7})\gamma _{9})+2\gamma _{10}(\gamma
_{1}(\gamma _{5}\gamma _{6}+9\gamma _{4}\gamma _{7})+3(\gamma _{2}(\gamma
_{5}\gamma _{7} \notag \\
& +\gamma _{4}\gamma _{8})+\gamma _{4}(\gamma _{3}+\gamma _{5})\gamma
_{9}))))/(\gamma _{10}^{2}(9\gamma _{4}^{2}+6\gamma \gamma _{4}\gamma
_{5}+\gamma _{5}^{2})+9\gamma _{2}^{2}\gamma _{7}^{2}+\gamma _{1}^{2}(\gamma
_{6}^{2}+6\gamma \gamma _{6}\gamma _{7} \notag \\
& +9\gamma _{7}^{2})+6\gamma \gamma _{2}^{2}\gamma _{7}\gamma _{8}+\gamma
_{2}^{2}\gamma _{8}^{2}+6\gamma \gamma _{2}\gamma _{3}\gamma _{7}\gamma
_{9}+18\gamma _{2}\gamma _{4}\gamma _{7}\gamma _{9}+2\gamma _{2}\gamma
_{3}\gamma _{8}\gamma _{9}+6\gamma \gamma _{2}\gamma _{4}\gamma _{8}\gamma
_{9} \notag \\
& +\gamma _{3}^{2}\gamma _{9}^{2}+6\gamma \gamma _{3}\gamma _{4}\gamma
_{9}^{2}+9\gamma _{4}^{2}\gamma _{9}^{2}+2\gamma _{1}(\gamma _{2}(3\gamma
_{7}(3\gamma \gamma _{7}+\gamma _{8})+\gamma _{6}(3\gamma _{7}+\gamma \gamma
_{8}))+(\gamma \gamma _{3}\gamma _{6} \notag \\
& +3\gamma _{4}\gamma _{6}+3\gamma _{3}\gamma _{7}+9\gamma \gamma _{4}\gamma
_{7})\gamma _{9})+2\gamma _{10}(\gamma _{1}(\gamma _{5}\gamma _{6}+9\gamma
_{4}\gamma _{7})+\gamma (9\gamma _{2}\gamma _{4}\gamma _{7}+3\gamma
_{1}(\gamma _{4}\gamma _{6} \notag \\
& +\gamma _{5}\gamma _{7})+\gamma _{2}\gamma _{5}\gamma _{8}+9\gamma
_{4}^{2}\gamma _{9}+\gamma _{3}\gamma _{5}\gamma _{9})+3(\gamma _{2}(\gamma
_{5}\gamma _{7}+\gamma _{4}\gamma _{8})+\gamma _{4}(\gamma _{3}+\gamma
_{5})\gamma _{9}))). \label{17}
\end{align}
\textit{Solving Eq}. \ref{17} \textit{for} $\gamma =\acute{\gamma}$%
\textit{, we obtain the fixed points} $\gamma =0,\pm 1$\textit{, as shown in FIG. 2.
The model corresponds to the spin-fluid phase (the XX model) for} $\gamma
\rightarrow 0$ \textit{and to an Ising-like phase for} $\gamma \rightarrow
\pm 1$\textit{. This indicates a phase boundary separating the two phases.}
\section{STUDY OF ENTANGLEMENT}
We analyze the entanglement by computing the bipartite concurrences between
different spins of a block, using the ground-state density matrix, and we
take the geometric average of all possible bipartite concurrences. The
pure-state density matrix can be written as
\begin{equation}
\rho =\left\vert \phi _{0}^{1}\right\rangle \left\langle \phi
_{0}^{1}\right\vert , \label{18}
\end{equation}
where $\left\vert \phi _{0}^{1}\right\rangle $ is one of the ground states
given in Eq. \ref{7}. We calculate the reduced density matrices $\rho
_{23},\rho _{24},\rho _{25},\rho _{34},\rho _{35},\rho _{45}$ by taking
partial traces, and then work out the bipartite concurrences. As the
entanglement measure we compute the geometric mean of all concurrences
through
\begin{equation}
C_{g}=\sqrt[6]{C_{23}\times C_{24}\times C_{25}\times C_{34}\times
C_{35}\times C_{45}}, \label{19}
\end{equation}
where $C_{ij}$ $(i,j=2,3,4,5)$ are the bipartite concurrences given by \cite
{xy3132},
\begin{equation}
C_{ij}=\max [\sqrt{\lambda _{ij,4}}-\sqrt{\lambda _{ij,3}}-\sqrt{\lambda
_{ij,2}}-\sqrt{\lambda _{ij,1}},0], \label{20}
\end{equation}
where $\lambda _{ij,k}$ $(k=1,2,3,4)$ are the eigenvalues of the
matrix $\rho _{ij}\tilde{\rho}_{ij}$ with $\tilde{\rho}_{ij}=(\sigma
_{i}^{y}\otimes \sigma _{j}^{y})\rho _{ij}^{\ast }(\sigma
_{i}^{y}\otimes \sigma _{j}^{y})$ and $\lambda _{ij,4}>\lambda
_{ij,3}>\lambda _{ij,2}>\lambda _{ij,1}$.
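For illustration, the concurrence recipe of Eq.~(\ref{20}) and the geometric average of Eq.~(\ref{19}) can be sketched numerically as follows. This is a minimal sketch with our own helper names; the Bell-state and maximally mixed inputs are only sanity checks, not the paper's RG data.

```python
import numpy as np

def concurrence(rho):
    """Concurrence of a two-qubit density matrix rho (4x4), as in Eq. (20):
    C = max(0, sqrt(l4) - sqrt(l3) - sqrt(l2) - sqrt(l1)), where l_k are the
    eigenvalues of rho * rho_tilde sorted in increasing order."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    # eigenvalues are real and non-negative up to round-off; clip the noise
    lam = np.clip(np.sort(np.real(np.linalg.eigvals(rho @ rho_tilde))), 0.0, None)
    s = np.sqrt(lam)
    return max(0.0, s[3] - s[2] - s[1] - s[0])

def geometric_mean_concurrence(rhos):
    """Geometric average C_g of a list of two-qubit reduced density matrices,
    as in Eq. (19)."""
    cs = [concurrence(r) for r in rhos]
    return float(np.prod(cs)) ** (1.0 / len(cs))

# Sanity checks (not the paper's data):
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5  # |00>+|11> (normalized)
print(concurrence(bell))           # maximally entangled: ~1.0 up to round-off
print(concurrence(np.eye(4) / 4))  # maximally mixed: 0.0
```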
We use a numerical technique to determine the renormalized $\gamma $ and
calculate the average concurrence $C_{g}$ after each RG iteration.
$C_{g}$ is plotted against $\gamma $ in FIG.~3, showing its evolution as
the size of the system increases. The plots of $C_{g}$ coincide with each
other at the critical point. \textit{After two steps (2nd order) }$C_{g}$
\textit{\ attains two fixed values (a nonzero value at }$\gamma =0$
\textit{\ and zero for }$\gamma \neq 0$\textit{), which predicts the behavior
of the infinitely large system in two dimensions. It indicates that the
two-dimensional surface of spins is effectively equivalent to a five-site
square box with renormalized coupling constants, thus validating the
idea of the QRG. At }$\gamma =0$\textit{\ the nonzero value of }$C_{g}$
\textit{\ confirms that the system is entangled with no long-range order, due
to the presence of quantum fluctuations. Such a response of the system
corresponds to a spin-fluid phase. For }$\gamma \neq 0$\textit{\ }$(C_{g}=0)$
\textit{\ the system possesses magnetic long-range order. Therefore, the
nontrivial points }$\gamma =\pm 1$\textit{\ correspond to two Ising
phases in the }$x$\textit{\ and }$y$\textit{\ directions respectively.} The
results obtained for the concurrence in 2D are similar to the one-dimensional
case \cite{xxz, xy}, but the magnitude of the concurrence is smaller in 2D,
because the number of shared neighbor sites is larger in 2D than in a
one-dimensional chain.
\begin{figure}
\caption{(Color online) Geometric average of the
concurrences is plotted against anisotropic parameter $\gamma$
after each step of the RG.}
\label{FIG.3.}
\end{figure}
\textit{The critical behavior of the entanglement can be seen as a divergence
of its derivative as it crosses the phase-transition point. The absolute
values of the first derivative of the concurrence with respect to }$\gamma $
\textit{\ after each iteration are shown in FIG.~4. The divergence of the
derivative at }$\gamma =0$\textit{\ becomes more pronounced as the
RG iterations increase, while the concurrence itself remains continuous.
This reveals that the system exhibits a second-order QPT. }It is also noted
that the entanglement in the vicinity of the critical point shows scaling
behavior \cite{vedral intro(2)}. At the critical point, the entanglement
scales logarithmically and saturates away from the critical point
\cite{xxz24}. As discussed earlier, a large system of $N=5^{n+1}$ sites can
be effectively represented by a five-site box with renormalized coupling
constants after the $n$th RG iteration. Therefore, the entanglement between
the two renormalized sites describes the entanglement between two blocks,
each containing $N/5$ sites. The system shows linear scaling behavior when
the logarithm of the maximum of the absolute value of the first derivative,
$\ln (\mid dC_{g}/d\gamma \mid _{\max })$, is plotted against
$\ln N=\ln 5^{n+1}$, where $n=1,2,3,\ldots$.
The scaling behavior is shown in FIG.~5. \textit{The
position of the maximum of }$dC_{g}/d\gamma $\textit{\ approaches the
critical point as the size of the system increases. To get more insight, we
plot }$\ln (\gamma _{c}-\gamma _{\max })$\textit{\ against }$\ln N$\textit{\
in FIG.~6 and obtain the relation }$\gamma _{\max }=\gamma
_{c}-(0.33N)^{-\theta }$\textit{, with entanglement exponent }$\theta
=1.14$\textit{. The entanglement exponent }$\theta $\textit{\ obtained from
the RG method captures the behavior of the XY model in the vicinity of the
critical point and is defined as the inverse of the correlation-length
exponent. In the thermodynamic limit, the correlation length covers the
entire system as we approach the critical point.}
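The log-log fit behind FIG.~6 can be sketched as follows. This is a minimal illustration with synthetic data in place of the actual RG output; the helper name and the sample values are our own.

```python
import numpy as np

def entanglement_exponent(N, gamma_max, gamma_c=0.0):
    """Fit ln(gamma_c - gamma_max) linearly against ln(N) and return the
    entanglement exponent theta, i.e. minus the slope, as in the relation
    gamma_max = gamma_c - (a*N)**(-theta)."""
    slope, _intercept = np.polyfit(np.log(N), np.log(gamma_c - gamma_max), 1)
    return -slope

# Synthetic data with a known exponent theta = 1.14 (illustrative only,
# not the paper's RG output); gamma_c = 0 for the XY model considered here.
n = np.arange(1, 6)
N = 5.0 ** (n + 1)
gamma_max = 0.0 - (0.33 * N) ** (-1.14)
print(entanglement_exponent(N, gamma_max))  # recovers ~1.14
```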
\begin{figure}
\caption{(Color online) Absolute derivative of the
geometric average of the concurrences is plotted against $\gamma $
as the RG iteration is increased.}
\label{FIG.4.}
\end{figure}
\begin{figure}
\caption{Logarithm of the absolute value of
the maximum of the derivative of the concurrence is plotted against the
logarithm of N, the system size.}
\label{FIG.5.}
\end{figure}
\begin{figure}
\caption{Scaling behavior of $\gamma _{\max }$.}
\label{FIG.6.}
\end{figure}
\section{CONCLUSIONS}
A study of correlated systems in two dimensions through the
renormalization group (RG) technique was presented in this paper. For this
purpose, the square-lattice Heisenberg spin-1/2 XY model was considered. The
quantum correlations were explored through the concurrence and were related
to the quantum phase transition (QPT). Due to the presence of several
interblock interactions, we computed the geometric average of the
concurrences of all possible interactions between the blocks. We noted that
the system size increases rapidly, so the critical point is reached in fewer
RG iterations than in the one-dimensional cases studied previously
\cite{xxz, xxzdm, xy, xydm}. \textit{Moreover, we
found that the results for the concurrence in 2D are qualitatively similar
to the one-dimensional case, but the magnitude of the concurrence is
smaller in 2D, because the shared neighbor sites are larger in number in 2D
than in a one-dimensional chain}. The evolution of the entanglement
after the $n$th RG iteration shows that it develops two fixed values: a
nonzero value at the critical point, corresponding to the spin-fluid phase,
and zero otherwise, corresponding to the Ising phase. The relation
between the maximum of the absolute derivative of the concurrence and the
system size (the scaling behavior) was investigated and found to be linear
in log-log coordinates. Moreover, the scaling behavior was further
characterized by determining the entanglement exponent, which
describes how the critical point is approached as the size of the system
increases.
\section{ACKNOWLEDGMENTS}
This work was partly supported by the HIGHER EDUCATION COMMISSION, PAKISTAN
under the Indigenous Ph.D. Fellowship Scheme.
\section{APPENDIX}
The expressions for the $\gamma$'s are given below:
\begin{eqnarray*}
\gamma _{1} &=&-\frac{(-1+\alpha _{1}+\gamma ^{2})\sqrt{5+\alpha
_{1}+5\gamma ^{2}}}{4\sqrt{2\alpha _{2}}}, \\
\gamma _{2} &=&-\frac{3\sqrt{\frac{\gamma ^{4}(5+\alpha _{1}+5\gamma ^{2})}{
\alpha _{2}}}}{2\sqrt{2}\gamma }, \\
\gamma _{3} &=&\frac{(-1+\alpha _{1}+\gamma ^{2})}{\sqrt{2\alpha _{2}}}, \\
\gamma _{4} &=&\frac{\gamma (5+\alpha _{1}+\gamma ^{2})}{2\sqrt{2\alpha _{2}
}}, \\
\gamma _{5} &=&\frac{3\sqrt{2}\gamma ^{2}}{\alpha _{2}}, \\
\gamma _{6} &=&\frac{\sqrt{\frac{\gamma ^{2}(5+\alpha _{1}+5\gamma ^{2})}{
1+\alpha _{1}+34\gamma ^{2}-\alpha _{1}\gamma ^{2}+\gamma ^{4}}}(-2-2\alpha
_{1}+17\gamma ^{2}-3\alpha _{1}\gamma ^{2}+3\gamma ^{4})}{4(3+2\gamma
^{2}+3\gamma ^{4})}, \\
\gamma _{7} &=&-\frac{\sqrt{\frac{\gamma ^{2}(5+\alpha _{1}+5\gamma ^{2})}{
1+\alpha _{1}+34\gamma ^{2}-\alpha _{1}\gamma ^{2}+\gamma ^{4}}}(1+\alpha
_{1}-\gamma ^{2}+6\gamma ^{4})}{4\gamma (3+2\gamma ^{2}+3\gamma ^{4})}, \\
\gamma _{8} &=&-\frac{3\sqrt{\frac{\gamma ^{2}(5+\alpha _{1}+5\gamma ^{2})}{
1+\alpha _{1}+34\gamma ^{2}-\alpha _{1}\gamma ^{2}+\gamma ^{4}}}(5-\alpha
_{1}+5\gamma ^{2})}{4(3+2\gamma ^{2}+3\gamma ^{4})}, \\
\gamma _{9} &=&\frac{(1+\alpha _{1}-\gamma ^{2})}{4\gamma \sqrt{(34-\alpha
_{1}+\frac{1+\alpha _{1}}{\gamma ^{2}}+\gamma ^{2})}}, \\
\gamma _{10} &=&\frac{3}{2\sqrt{(34-\alpha _{1}+\frac{1+\alpha _{1}}{\gamma
^{2}}+\gamma ^{2})}},
\end{eqnarray*}
where,
\begin{eqnarray*}
\alpha _{1} &=&\sqrt{1+34\gamma ^{2}+\gamma ^{4}}, \\
\alpha _{2} &=&2-2\alpha _{1}+71\gamma ^{2}+17\alpha _{1}\gamma
^{2}+104\gamma ^{4}+3\alpha _{1}\gamma ^{4}+3\gamma ^{6}.
\end{eqnarray*}
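As a quick numerical sanity check of the Appendix formulas (the helper functions below are our own; only $\alpha_1$, $\alpha_2$ and $\gamma_3$ are evaluated), note that at the Ising point $\gamma=1$ one gets $\alpha_1=6$, $\alpha_2=288$ and $\gamma_3=1/4$:

```python
import math

def alpha(gamma):
    """Auxiliary quantities alpha_1, alpha_2 from the Appendix."""
    a1 = math.sqrt(1 + 34 * gamma**2 + gamma**4)
    a2 = (2 - 2 * a1 + 71 * gamma**2 + 17 * a1 * gamma**2
          + 104 * gamma**4 + 3 * a1 * gamma**4 + 3 * gamma**6)
    return a1, a2

def gamma3(gamma):
    """gamma_3 = (-1 + alpha_1 + gamma^2) / sqrt(2 alpha_2)."""
    a1, a2 = alpha(gamma)
    return (-1 + a1 + gamma**2) / math.sqrt(2 * a2)

a1, a2 = alpha(1.0)
print(a1, a2)        # → 6.0 288.0
print(gamma3(1.0))   # → 0.25
```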
\end{document} | math | 29,136 |
\begin{document}
\maketitle
\begin{center}
Department of Mathematics, ETHZ,
R\"amistrasse 101, 8092 Z\"urich (CH).
Email: {\tt
oancea\,@\,math.ethz.ch}
\end{center}
\begin{abstract}
We prove the K\"unneth formula in Floer
(co)homology for manifolds with restricted contact type boundary. We
use Viterbo's definition of Floer homology, involving the
symplectic completion by adding a positive cone over the boundary.
The K\"unneth formula implies
the vanishing of Floer (co)homology
for subcritical Stein manifolds. Other applications include the
Weinstein conjecture
in certain product manifolds, obstructions to exact
Lagrangian embeddings, existence of holomorphic curves with Lagrangian
boundary condition, as well as symplectic capacities.
\end{abstract}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\setcounter{footnote}{0}
\footnotetext{
{\it 2000 Mathematics Subject Classification}:
53D40, 37J45, 32Q28.
The author is currently supported by the
Forschungsinstitut f\"ur Mathematik at ETH Z\"urich.
}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
The present paper is concerned with the Floer homology groups
$FH_*(M)$ of a compact symplectic manifold $(M, \, \omega)$
with contact type
boundary, as well as with their cohomological dual analogues
$FH^*(M)$. The latter were defined by Viterbo in \cite{functors1}
and are invariants that take into account the topology of the underlying
manifold {\it and}, through an algebraic limit process, all closed
characteristics on $\partial M$. Their definition is closely related
to the Symplectic homology groups of Floer, Hofer, Cieliebak and
Wysocki \cite{FH,CFH,FHW,CFHW,Cieliebak handles}.
Throughout this paper we will assume that $\omega$ is exact,
and in particular\break
$\langle\omega,\,
\pi_2(M)\rangle=0$. This last condition will be referred to as {\it symplectic
asphericity}.
The
groups $FH_*(M)$ are invariant with respect to deformations of the
symplectic form $\omega$ that preserve the contact type character of
the boundary and the condition $\langle \omega,\, \pi_2(M)
\rangle=0$. The groups $FH_*(M)$ actually depend
only on the {\it symplectic
completion} $\widehat M$ of $M$. The manifold $\widehat M$ is
obtained by gluing a positive cone along the boundary $\partial M$
and carries a symplectic form $\widehat \omega$ which is canonically
determined by $\omega$ and the conformal vector field on $M$. We shall
often write $FH_*(\widehat M)$ instead of $FH_*(M)$.
The grading on $FH_*(\widehat M)$ is given by minus the Conley-Zehnder
index modulo $2\nu $, with $\nu$ the minimal Chern number of $M$. There
exist canonical maps
$$H_{n+*}(M,\, \partial M) \stackrel {c_*} \longrightarrow FH_*(\widehat M),$$
$$FH^*(\widehat M) \stackrel {c^*} \longrightarrow H^{n+*}(M, \, \partial M)$$
which shift the grading by $n=\frac 1 2 \dim M$.
\noindent {\bf Theorem A (K\"unneth formula).}
{\it Let $(M^{2m}, \, \omega)$ and $(N^{2n}, \,
\sigma)$ be compact symplectic
manifolds with {\rm restricted} contact type boundary.
Denote the minimal Chern numbers
of $M$, $N$ and $M\times N$ by $\nu_M$, $\nu_N$ and $\nu_{M\times N}
= \tx{gcd}\,(\nu_M,\, \nu_N)$ respectively.
\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate}
\item For any ring $A$ of coefficients there exists a short exact sequence
which splits noncanonically
\begin{equation} \label{suite exacte Kunneth Floer}
{\scriptsize
\xymatrix
@C=30pt
@R=20pt@W=1pt@H=1pt
{
{\bigoplus_{\widehat r+\widehat s = k} }
FH_{\widehat r}(M, \, \omega) \otimes
FH_{\widehat s}(N, \, \sigma) \ \ \ar@{>->}[r] &
FH_k(M \times N, \, \omega \oplus \sigma )
\ar@{->>}[d] \\
& {\bigoplus_{\widehat r+\widehat s = k-1}}
\tx{Tor}_1^A \big(
FH_{\widehat r}(M, \, \omega), \
FH_{\widehat s}(N, \, \sigma) \big)
}
}
\end{equation}
The morphism $c_*$ induces a morphism of exact sequences
whose source is the K\"unneth
exact sequence of the product $(M,\, \partial M) \times (N,\,
\partial N)$ and whose target is~(\ref{suite exacte Kunneth Floer}).
\item For any field $\mathbb{K}$ of coefficients there is an isomorphism
\begin{equation} \label{Kunneth egal Floer coh corps}
{\scriptsize
\xymatrix
@C=30pt
@R=20pt@W=1pt@H=1pt
{ \bigoplus_{\widehat r+\widehat s = k} FH^{\widehat r}(M, \, \omega)
\otimes
FH^{\widehat s}(N, \, \sigma) \ar[r]^{\qquad \sim}
& FH^k(M \times N, \, \omega \oplus
\sigma ) \ ,
}
}
\end{equation}
The morphism $c^*$ induces a commutative diagram with respect to the
K\"unneth isomorphism in cohomology for $(M,\, \partial
M)\times (N,\, \partial N)$.
\end{enumerate}
}
In the above notation we have $k\in \mathbb{Z}/2\nu_{M\times N}\mathbb{Z}$, $0 \le
r \le 2\nu _M -1$,
$0\le s \le 2\nu_N -1$ and the $\widehat{\ }$ symbol associates
to an integer its class in the corresponding $\mathbb{Z}/2\nu\mathbb{Z}$ ring. The
reader can consult~\cite[VI.12.16]{D} for a construction of the
K\"unneth exact sequence in singular homology.
The algebraic properties of the map $c^*$ strongly influence
the symplectic topology of the underlying manifold. Our applications
are based on the following theorem, which summarizes part of
the results in \cite{functors1}.
\noindent {\bf Theorem (Viterbo \cite{functors1}).} {\it Let $(M^{2m},\, \omega)$ be a
manifold with contact type boundary such that $\langle \omega,\,
\pi_2(M)\rangle=0$. Assume the map $c^*:FH^*(M)
\longrightarrow H^{2m}(M, \, \partial M)$ is {\rm not} surjective.
Then the following hold.
\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate}
\item The same is true for any hypersurface of restricted contact
type $\Sigma \subset M$ bounding a compact region;
\item Any hypersurface of contact type $\Sigma \subset M$ bounding
a compact region carries a closed characteristic (Weinstein conjecture);
\item There is no exact Lagrangian embedding $L \subset M$ (here
$M$ is assumed to be exact by definition);
\item For any Lagrangian embedding $L \subset M$ there is a loop on
$L$ which is contractible in $M$, has strictly positive area and
whose Maslov number is at most equal to $m+1$;
\item For any Lagrangian embedding $L \subset M$ and any compatible
almost complex structure $J$ there is a nonconstant
$J$-holomorphic curve $S$
(of unknown genus)
with non-empty boundary $\partial S \subset L$.
\end{enumerate}
}
Viterbo
\cite{functors1} introduces
the following definition, whose interest is obvious in the light of the
above theorem.
\begin{defi} \tx{(Viterbo)}
A symplectic manifold $(M^{2m}, \, \omega)$ which verifies\break $\langle
\omega, \, \pi_2(M) \rangle = 0$ is said to satisfy the Strong
Algebraic Weinstein Conjecture (SAWC) if the composed morphism below is not surjective
$$FH^*(M) \stackrel{c^*}\longrightarrow H^*(M, \, \partial M)
\stackrel{\tx{pr}}\longrightarrow
H^{2m}(M, \, \partial M) \ .$$
\end{defi}
We shall still denote the composed morphism by $c^*$.
Theorem A now implies that the
property of satisfying the SAWC is stable under products,
with all the geometric consequences listed above.
\noindent {\bf Theorem B.} {\it Let $(M, \, \omega)$
be a symplectic manifold
with restricted contact type boundary satisfying the SAWC.
Let $(N, \, \sigma)$ be an arbitrary symplectic manifold with
restricted contact type boundary. The product $(\widehat
M \times \widehat N, \, \widehat \omega \oplus \widehat \sigma)$
satisfies the SAWC and assertions (a) to (e) in the above theorem of
Viterbo hold. In particular, the Weinstein conjecture holds
and there is no exact Lagrangian embedding in $\widehat M\times \widehat N$.
}
The previous result can be applied to subcritical
Stein manifolds of finite type.
These are complex manifolds $\widehat
M$ which
admit proper plurisubharmonic Morse functions that are bounded from
below and have only a finite number of critical points, all of index strictly
less than $\frac 1 2 \dim _\mathbb{R} \widehat M$~\cite{Eli-psh}.
They
satisfy the SAWC as proved by
Viterbo~\cite{functors1}.
Cieliebak~\cite{Cieliebak handles} has proved that their Floer homology
actually vanishes. We can recover this
through Theorem A by using another of his results~\cite{C},
namely that every such manifold is
Stein deformation equivalent to a split one $(V \times \mathbb{C}, \, \omega
\oplus \omega_{\tx{std}})$. This can
be seen as an extension of the classical vanishing result
$FH_*(\mathbb{C}^\ell)=0$, $\ell \ge 1$~\cite{FHW}.
\noindent {\bf Theorem C (Cieliebak~\cite{Cieliebak handles}).} {\it
Let $\widehat M$ be a
subcritical Stein manifold of finite type. Its Floer homology vanishes}
$$FH_*(\widehat M) = 0 \ .$$
The paper is organized as follows. In Section~\ref{constructions} we
state the relevant definitions and explain the main properties of
the invariant $FH_*$.
Section~\ref{la preuve
de Kunneth} contains the proof of Theorem A.
The proofs of Theorems B and C, together with other applications,
are gathered in Section~\ref{appli}.
Let us point out where the
difficulty lies in the proof of Theorem A.
Floer homology is
defined on closed manifolds for any Hamiltonian satisfying some
generic nondegeneracy
condition, and this condition is {\it stable} under sums $H(x)+K(y)$ on
products $M \times N \ni (x, \, y)$. This trivially implies (with
field coefficients) a
K\"unneth formula of the type $FH_*(M\times N; \, H+K, \, J_1\oplus
J_2) \simeq FH_*(M; \, H, \, J_1) \otimes FH_*(N; \, K, \, J_2)$. On
the other hand, Floer homology for manifolds with contact type boundary
is defined using Hamiltonians with a rigid behaviour at infinity and
involves an algebraic limit construction.
This class of
Hamiltonians is {\it not stable} under the sum operation $H(x)+K(y)$
on $M \times N$. One may still define Floer homology groups
$FH_*(M\times N; \, H+K, \, J_1\oplus J_2)$, but
the resulting homology might well be
different, in the limit,
from $FH_*(\widehat M\times \widehat N)$. The whole point of
the proof is to show that this is not the case.
This paper is the first of a series studying the Floer homology of
symplectic fibrations with contact type boundary. It treats
trivial fibrations with open fiber and base. A spectral
sequence of Leray-Serre type for symplectic
fibrations with closed base and open fiber is constructed in~\cite{LS}.
{\it Acknowledgements.} This work is part of my Ph.D.
thesis, which I completed under the guidance of Claude Viterbo. Without his
inspired
support this could not have come to being. I am grateful to
Yasha Eliashberg,
Dietmar Salamon, Paul Seidel, Jean-Claude Sikorav and Ivan Smith
for their help and suggestions. I also
thank the referee for pointing out
errors in the
initial proofs of Theorem C and Proposition~\ref{prop:symplectic
capacities}.
During the various stages of preparation of this work I was supported
by the following institutions: Laboratoire de
Math\'ematiques, Universit\'e Paris Sud~;
Centre de Math\'ematiques de l'\'Ecole Polytechnique~;
\'Ecole Normale Sup\'erieure de Lyon~;
Departement Mathematik, ETH.
\section{Definition of Floer homology} \label{constructions}
Floer homology was first defined by A. Floer for closed
manifolds in a series of papers \cite{F1,F2}, which
proved Arnold's conjecture for a large class of
symplectic manifolds including {\it symplectically aspherical} ones.
In this situation Floer's construction can be summarized as follows. Consider a
periodic
time-dependent Hamiltonian $H : \mathbb{S}^1 \times M \longrightarrow \mathbb{R}$
with Hamiltonian vector field $X^t_H$ defined by $\iota _{X^t_H}\omega
= dH(t, \cdot)$. The
associated {\it action functional}
$$A_H : C^\infty _{\tx{contr}} (M) \longrightarrow \mathbb{R} \ ,$$
$$\gamma \longmapsto -\int_{D^2}\bar{\gamma}^* \omega -
\int_{\mathbb{S}^1}H(t, \, \gamma(t)) dt $$
is defined on the space of smooth contractible loops
$$C^\infty _{\tx{contr}} (M) = \big\{ \gamma:\mathbb{S}^1 \longrightarrow M \
: \ \exists \ \tx{smooth } \bar{\gamma}: D^2 \longrightarrow M, \
\bar{\gamma}|_{\mathbb{S}^1} = \gamma \big\} \ .$$
The critical points of $A_H$ are precisely the $1$-periodic solutions of
$\dot{\gamma}=X_H^t(\gamma(t))$, and we denote the corresponding set
by $\mathcal P(H)$. We suppose that the elements of $\mathcal P(H)$
are nondegenerate, i.e.\ the time-one return map has no
eigenvalue equal to $1$. Each such periodic orbit $\gamma$ has a
$\mathbb{Z}/2\nu \mathbb{Z}$-valued
Conley-Zehnder index $i_{CZ}(\gamma)$ (see \cite{RS1} for
the definition), where $\nu $ is the {\it minimal Chern
number}. The latter is defined by $\langle c_1, \, \pi_2(M) \rangle =
\nu \mathbb{Z}$ and by the convention $\nu = \infty$ if $\langle c_1, \, \pi
_2(M) \rangle =0$.
Let also $J$ be a compatible almost complex
structure. The homological Floer complex is defined as
$$FC_k(H, \, J) = \bigoplus_{\scriptsize
\begin{array}{c} \gamma \in \mathcal P (H) \\
i_{CZ}(\gamma) = -k \mod \ 2\nu
\end{array} } \mathbb{Z} \langle \gamma \rangle \ , $$
$$\delta : FC_k(H, \, J) \longrightarrow FC_{k-1}(H, \, J) \ ,$$
\begin{equation} \label{the differential}
\delta \langle \gamma \rangle = \sum_{\scriptsize
\begin{array}{c}
\gamma' \in \mathcal P (H) \\ i_{CZ}(\gamma') = -k + 1 \mod \ 2\nu
\end{array}} \# \mathcal M
( \gamma, \, \gamma'; \, H , \, J)/\mathbb{R} \ \langle \gamma' \rangle \ .
\end{equation}
Here $\mathcal M
( \gamma, \, \gamma'; \, H , \, J)$ denotes the space of trajectories
for the negative $L^2$ gradient of $A_H$ with respect to the metric
$\omega(\cdot, \, J\cdot)$, running from $\gamma$ to $\gamma'$:
\begin{equation} \label{Fl eq}
\mathcal M
( \gamma, \, \gamma'; \, H , \, J) = \Bigg\{
u: \mathbb{R} \times \mathbb{S}^1 \longrightarrow M \, : \,
\begin{array}{c} u_s + J\circ u \cdot u_t - \nabla H(t, \, u) =0 \\
u(s, \cdot) \longrightarrow \gamma, \ s \longrightarrow -\infty \\
u(s, \cdot) \longrightarrow \gamma' , \ s\longrightarrow +\infty
\end{array}
\Bigg\}.
\end{equation}
The additive group $\mathbb{R}$ acts on $\mathcal M(\gamma , \, \gamma'; H , \,
J)$ by translations in the $s$ variable, while the symbol $\# \mathcal
M(\gamma, \, \gamma'; \, H, \, J)/\mathbb{R}$ stands for an algebraic count of
the elements of $\mathcal M(\gamma, \, \gamma'; \, H, \, J)/\mathbb{R}$. We note
that it is possible to choose any coefficient ring instead of $\mathbb{Z}$
once the sign assignment procedure is available.
The crucial statements of the theory are listed below. The main point is
that the symplectic asphericity condition prevents the loss of
compactness for Floer trajectories by bubbling-off of nonconstant
$J$-holomorphic
spheres: the latter simply cannot exist.
\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate}
\item under the nondegeneracy assumption on the $1$-periodic orbits
of $H$ and for a generic choice of $J$,
the space $\mathcal M(\gamma , \, \gamma'; H , \, J)$ is a
manifold of dimension $i_{CZ}(\gamma') - i_{CZ}(\gamma) =
-i_{CZ}(\gamma) - ( -i_{CZ}(\gamma'))$. If this difference
is equal to $1$,
the space $\mathcal M(\gamma, \, \gamma'; H , \, J)/\mathbb{R}$ consists of a
finite number of points. Moreover, there is a consistent choice of
signs for these points with respect to which $\delta \circ \delta =
0$;
\item the Floer homology groups $FH_*(H, \, J)$ are independent of $H$
and $J$. More precisely, for any two pairs $(H_0, \, J_0)$ and
$(H_1, \, J_1)$ there is a generic choice of a smooth
homotopy $(H_s, \, J_s)$, $s\in
\mathbb{R}$ with $(H_s, \, J_s)\equiv (H_0, \, J_0)$, $s\le 0$, $(H_s,
\, J_s) \equiv (H_1, \, J_1)$, $s\ge 1$ defining a
map
\begin{equation} \label{cont morph}
\sigma ^{(H_0, \, J_0)} _{(H_1, \, J_1)} : FC_k (H_0, \, J_0)
\longrightarrow FC_k(H_1, \, J_1) \ ,
\end{equation}
$$\sigma \langle \gamma \rangle = \sum_{\scriptsize \begin{array}{c}
\gamma' \in \mathcal P (H_1) \\ i_{CZ}(\gamma')=i_{CZ}(\gamma) = -k \mod
\ 2\nu
\end{array}} \# \mathcal M (\gamma, \, \gamma' ; \, H_s, \, J_s) \ \langle \gamma' \rangle \ .$$
The notation $\mathcal M (\gamma, \, \gamma'; \, H_s, \, J_s ) $ stands
for the space of solutions of the equation
\begin{equation} \label{par Fl eq}
u_s + J(s, \, u(s, \, t) ) u_t - \nabla H(s, \, t, \, u(s, \, t))
=0 \ ,\end{equation}
which run from $\gamma$ to $\gamma'$. The map $\sigma ^{(H_0, \,
J_0)} _{(H_1, \, J_1)} $ induces an isomorphism in homology,
and this isomorphism is independent of the choice of the
homotopy. We shall call it in the sequel the ``continuation morphism''.
\item if $H$ is time independent, Morse and sufficiently small in some
$C^2$ norm, then the $1$-periodic orbits of $X_H$ are the critical
points of $H$ and the Morse and Conley-Zehnder indices satisfy
$i_{\tx{Morse}}(\gamma) = m + (-i_{CZ}^0(\gamma))$, $m= \frac 1 2 \dim
M$. Here $i_{CZ}^0(\gamma)$ is the Conley-Zehnder computed with
respect to the trivial filling disc.
Moreover, the Floer trajectories running between points with
index difference equal to $1$ are independent of the $t$ variable,
and the Floer complex equals, modulo a shift in the grading, the
Morse complex of $H$.
We infer that for any regular pair $(H, \,
J)$ we have
$$\displaystyle FH_k(H, \, J) \simeq \bigoplus_{l \ \equiv \ k \mod \ 2\nu }
H_{l+m}(M) \ , \ k \in \mathbb{Z}/2\nu \mathbb{Z} \ . $$
\end{enumerate}
{\bf Remark.} An analogous construction yields a {\it
cohomological complex} by considering in (\ref{the differential}) the
space of
trajectories $\mathcal M(\gamma', \, \gamma; \, H, \, J)$.
We explain now how the above ideas
can be adapted in order to construct a symplectic
invariant for manifolds with contact type boundary. We
follow~\cite{functors1}, but similar constructions can be found
in~\cite{FH,CFH,Cieliebak handles} (see~\cite{my survey} for a
survey).
\begin{defi}
A symplectic manifold $(M, \, \omega)$ is said to have a {\rm
contact type boundary} if there is a vector field $X$ defined in a
neighbourhood of $\partial M$, pointing outwards and transverse to
$\partial M$, which satisfies
$$L_X\omega = \omega \ .$$
We say that $M$ has {\rm restricted contact type boundary} if
$X$ is defined globally.
\end{defi}
We call $X$ and $\lambda = \iota(X) \omega$ the {\it Liouville vector
field} and the {\it Liouville
form} respectively, with $d\lambda=\omega$.
We define the {\rm Reeb vector field}
$X_{\tx{Reeb}}$ as the
generator of $\ker \omega|_{T\partial M}$ normalized by
$\lambda(X_{\tx{Reeb}})= 1$. An integral curve of $X_{\tx{Reeb}}$ is
called a {\it characteristic}. Denoting by $\varphi_t$ the flow of the
Liouville vector field $X$, we have $\varphi_t^*
\omega = e^t\omega$, and this implies that
a neighbourhood of $\partial M$ is foliated
by the hypersurfaces $\big( \varphi_t(\partial M) \big) _{-\epsilon
\le t \le 0}$, whose characteristic dynamics are conjugate.
The Floer homology groups $FH_*(M)$ of a manifold with contact type
boundary are an invariant that takes
into account:
\begin{itemize}
\item the dynamics
on the boundary by ``counting'' characteristics of arbitrary period;
\item the interior topology of
$M$ by ``counting'' interior $1$-periodic orbits of Hamiltonians.
\end{itemize}
In order to understand its definition, let us explain how one can
``see'' characteristics of arbitrary period by using $1$-periodic
orbits of special Hamiltonians. There is a symplectic diffeomorphism
onto a neighbourhood $\mathcal V$ of the boundary
$$\Psi : \big( \partial M \times [1-\delta, \, 1], \, d(S\lambda|)
\big) \longrightarrow ( \mathcal V, \, \omega),
\quad \delta > 0 \tx{ small ,}$$
$$\Psi(p, \, S) = \varphi_{\ln (S)} (p) \ ,$$
where $\lambda|$ denotes the restriction of $\lambda$ to $\partial
M$ (we actually have $\Psi ^* \lambda = S \lambda|$). We define the {\it symplectic
completion}
$$(\widehat{M}, \, \widehat \omega) = (M, \, \omega) \cup_{\,\Psi} \big( \partial M \times
[1, \, \infty[ , \, d(S\lambda|) \big) \ .$$
Consider now Hamiltonians $H$ such that $H(p, \, S) = h(S)$ for $S \ge
1-\delta$, where $h: [1-\delta, \, 1] \longrightarrow \mathbb{R}$ is
smooth. It is straightforward to see that
$$X_H(p, \, S) = -h'(S) X_{\tx{Reeb}}, \quad S \ge 1-\delta \ .$$
The $1$-periodic orbits of $X_H$ that are located on the level $S$
correspond
to characteristics on $\partial M$ having period $h'(S)$
under the parameterization given by $-X_{\tx{Reeb}}$. The general
principle that can be extracted out of this computation is that {\it one
``sees'' more and more characteristics as the variation of $h$ is
bigger and bigger}.
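The computation above can be written out explicitly. The following sketch uses only the conventions already fixed ($\widehat\omega = d(S\lambda|)$, $\iota_{X_H}\omega = dH$, $\lambda|(X_{\mathrm{Reeb}})=1$, $X_{\mathrm{Reeb}}\in\ker d\lambda|$) and is meant merely as a consistency check of the signs; other sign conventions in the literature differ.

```latex
% On \partial M \times [1-\delta,\infty[ :
%   \widehat\omega = d(S\lambda|) = dS\wedge\lambda| + S\,d\lambda| .
% For H(p,S)=h(S) and the ansatz X_H = -h'(S)\,X_{\mathrm{Reeb}} :
\begin{align*}
\iota_{X_H}\widehat\omega
  &= -h'(S)\,\iota_{X_{\mathrm{Reeb}}}\bigl(dS\wedge\lambda| + S\,d\lambda|\bigr)\\
  &= -h'(S)\bigl(\underbrace{dS(X_{\mathrm{Reeb}})}_{=0}\,\lambda|
       - \lambda|(X_{\mathrm{Reeb}})\,dS
       + S\,\underbrace{\iota_{X_{\mathrm{Reeb}}}d\lambda|}_{=0}\bigr)\\
  &= h'(S)\,dS \;=\; dH ,
\end{align*}
% confirming X_H(p,S) = -h'(S)\,X_{\mathrm{Reeb}} for S \ge 1-\delta .
```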
We define a Hamiltonian to be {\it admissible} if it satisfies $H
\le 0$ on $M$ and it is of the form $H(p, \, S) =
h(S)$ for $S$ big enough\footnote{Our
definition is slightly more general
than the one in \cite{functors1} in that we prescribe the behaviour
of admissible Hamiltonians only near infinity and not on the
whole of $S\ge 1$. It is clear that the a priori $C^0$ bounds
require no additional argument than the one in
\cite{functors1}. The situation is of course different if one
enlarges further the admissible class.}, with $h$ convex increasing and such
that there exists $S_0 \ge 1$ with
$h'$ constant for $S\ge S_0$. We call such a Hamiltonian
{\it linear at infinity}. Moreover, we assume that the
slope at infinity of $h$ is not the area of a closed characteristic on
$\partial M$ and that all $1$-periodic orbits of $H$ are
nondegenerate. One method to obtain such Hamiltonians is to slightly
perturb functions $h(S)$, where $h:[1-\delta, \,
\infty[ \longrightarrow \mathbb{R}$ is equal
to zero in a neighbourhood of $1-\delta$, strictly convex on $\{ h > 0
\}$ and linear at infinity with
slope different from the area of any closed characteristic, by a
perturbation localized around the periodic orbits.
We point out that there are admissible Hamiltonians having arbitrarily large
values of the slope at infinity.
The {\it admissible almost complex structures} are defined to be those
which satisfy the following conditions for {\it large enough values of
$S$}:
$$\left\{ \begin{array}{l}
J_{(p, \, S)}|_\xi = J_0, \\
J_{(p, \, S)} \big( \DP{}{S} \big) = \frac 1 {CS} X_{\tx{Reeb}}(p),
\quad C>0, \\
J_{(p, \, S)}(X_{\tx{Reeb}}(p)) = -CS \DP{}{S} \ ,
\end{array}\right.
$$
where $J_0$ is an almost complex structure compatible with the restriction
of $\omega$ to the contact distribution $\xi=\ker \lambda|$
on $\partial M$. These
are precisely the almost complex structures which are invariant under
homotheties $(p, \, S) \longmapsto (p, \, aS)$, $a>0$ for large enough
values of $S$.
The crucial fact is that the function $(p, \, S) \longmapsto S$ is
plurisubharmonic with respect to this class of almost complex
structures. This means that $d(dS \circ J)(v, \, Jv) < 0$ for any
nonzero $v\in T_{(p, \, S)}\big( \partial M \times [1, \, \infty[\big)$, $p\in
\partial M$, $S \ge 1 $ big enough, and indeed we have $d(dS\circ J) =
d(-CS\lambda|) = -C\widehat \omega$. Plurisubharmonicity implies that for any
$J$-holomorphic curve $u:D^2 \longrightarrow \partial M \times [S_0, \,
\infty [$, $S_0\ge1 $ big enough one has
$\Delta(S\circ u) \ge 0$. In particular
the maximum of $u$ is achieved on the boundary $\partial D^2$
(see for example \cite{GT}, Theorem 3.1). A similar argument
applies to solutions of the Floer equation (\ref{Fl eq}),
as well as to solutions of the parameterized Floer equation (\ref{par
Fl eq}) for {\it increasing homotopies} satisfying
$\DP{^2h}{s\partial S}\ge 0$~\cite{my survey}. This implies
that solutions are
contained in an a priori determined compact set. All compactness
results in Floer's theory therefore carry over to this situation and so does
the construction outlined for closed manifolds.
We introduce a partial
order on regular pairs $(H, \, J)$ as
$$(H, \, J) \prec (K, \, \widetilde{J} ) \qquad \tx{iff} \qquad H\le K
\ . $$
The continuation morphisms (\ref{cont morph}) form a direct system
with respect to this order
and we define the Floer homology groups as
$$FH_*(M) = \displaystyle \lim_{\scriptsize
\begin{array}{c} \rightarrow \\ (H, \,
J) \end{array} } FH_*(H, \, J) \ .$$
An important refinement of the definition consists in using a
truncation by the values of the action. The latter is decreasing along
Floer trajectories and one builds a $1$-parameter family of
subcomplexes of $FC_*(H, \, J)$, defined as
$$FC_k^{]-\infty,\, a[}(H, \, J) = \bigoplus_{\scriptsize
\begin{array}{c} \gamma \in \mathcal P (H) \\
i_{CZ}(\gamma) = -k \mod \ 2\nu \\ A_H(\gamma) < a
\end{array} } \mathbb{Z} \langle \gamma \rangle \ . $$
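The decrease of the action along Floer trajectories is the usual energy
identity; as a sketch (with the sign conventions used later in this section),
a solution $u$ of the Floer equation with asymptotes $u(s, \, \cdot)
\longrightarrow \gamma^\pm$, $s \longrightarrow \pm\infty$ satisfies
$$A_H(\gamma^-) - A_H(\gamma^+) \ = \ \int_{\mathbb{R} \times \mathbb{S}^1}
|\partial_s u|^2_J \ \ge \ 0 \ ,$$
so that the modules $FC_k^{]-\infty,\, a[}(H, \, J)$ are indeed preserved by
the Floer differential.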
This allows one to define the corresponding quotient complexes
$$FC_*^{[a, \, b[ } (H, \, J) = FC_*^{]-\infty,\, b[}(H, \, J) /FC_*
^{]-\infty,\, a[} (H, \, J), \quad -\infty \le a < b \le \infty $$
and the same direct limit process goes through. We therefore put
$$FH_*^{[a, \, b[}(M) = \displaystyle \lim_{\scriptsize
\begin{array}{c} \rightarrow \\ (H, \,
J) \end{array} } FH_*^{[a, \, b[}(H, \, J) \ .$$
Let us now make a few remarks on the properties of the above invariants.
\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate}
\item there is a natural cofinal family of Hamiltonians
whose action values on the $1$-periodic orbits are positive or
arbitrarily close to $0$ (see the construction of $H_1$ in
Figure \ref{HamPlateau et Rho} (1) described in Section~\ref{la
preuve de Kunneth}).
This implies that $FH_*^{[a, \, b[}(M) =0$
if $b<0$ and $FH_*^{[a, \, b[} $ does not depend on $a$ if the latter
is strictly negative. In particular we have
$$FH_*(M) = FH_*^{[a, \, \infty[}(M), \qquad a<0 \ .$$
\item the infimum $T_0$ of the areas of closed characteristics on the
boundary is always strictly positive and therefore
$$ FH_k^{[a, \, \epsilon[}(M) \simeq \bigoplus _{l \, \equiv \, k
\mod \ 2\nu }H_{l+m}(M, \, \partial M), $$
where $a<0\le \epsilon <T_0$, $k \in \mathbb{Z}/2\nu \mathbb{Z}$, $m = \frac 1 2
\dim M$.
This follows from the fact that, in the limit, the Hamiltonians
become $C^2$-small on $M \setminus \partial M$ and the Floer complex
reduces to a Morse complex that computes the relative homology, as
$-\nabla H$ points inward along $\partial M$.
\item there are obvious truncation morphisms
$$FH_*^{[a, \, b[}(H, \, J) \longrightarrow FH_*^{[a', \, b'[}(H, \,
J), \ a\le a', \ b\le b'$$
which induce morphisms $FH_*^{[a, \, b[}(M) \longrightarrow
FH_*^{[a', \, b'[}(M)$, $a\le a'$, $b\le b'$.
If $a=a'<0$, $0 \le b < T_0$ and $b'=\infty$ we obtain a
natural morphism
\begin{equation*}
\bigoplus _{l \, \equiv \, k
\mod \ 2\nu }H_{l+m}(M, \, \partial M) \stackrel{c_*}{\longrightarrow} FH_k(M) \ ,
\end{equation*}
or, written differently,
\begin{equation} \label{c *}
H_*(M, \, \partial M) \stackrel{c_*}{\longrightarrow} FH_*(M) \ .
\end{equation}
We also note at this point that we have
$$FH_*(M) = \displaystyle \lim _{\scriptsize \begin{array}{c}
\rightarrow \\ b \end{array} } \lim_{\scriptsize
\begin{array}{c} \rightarrow \\ (H, \,
J) \end{array} } FH_*^{[a, \, b[}(H, \, J), \qquad a<0 $$
and that the two limits above can be interchanged by general
properties of bi-directed systems.
\end{enumerate}
\noindent
{\bf Fundamental principle (Viterbo~\cite{functors1}).} {\it If the morphism\break
$H_{*}(M, \, \partial M) \stackrel {c_*}
\longrightarrow FH_*(M) $ is not bijective, then
there is a closed characteristic on $\partial M$. } Indeed, either
there is some extra generator in $FH_*(M)$, or some
Morse homological generator of $H_*(M, \, \partial M)$ is killed in
$FH_*(M)$.
The ``undesired guest'' in the first case or the ``killer'' in
the second case necessarily corresponds
to a closed characteristic on $\partial M$.
The version of Floer homology that we defined above has various
invariance properties \cite{functors1}. The main one that we shall use
is the following.
\begin{prop} \label{inv Fl h}
The Floer homology groups $FH_*(M)$ are an invariant of the
completion $\widehat M$
in the following sense: for any open set with smooth boundary $U
\subset M$ such that $\partial U \subset \partial M \times [1, \,
\infty[$ and the Liouville vector field $S\DP{}{S}$ is transverse and
outward pointing along $\partial U$, we have
$$FH_*(\widehat M) \simeq FH_*(\widehat U) \ .$$
\end{prop}
\noindent {\it \small Proof. } One can realise a differentiable isotopy between $M$ and $U$
along the Liouville vector field. This corresponds to an isotopy of
symplectic forms on $M$ starting from the initial form $\omega =
\omega_0$ and ending with the one induced from $U$, denoted by
$\omega_1$. During the isotopy the boundary $\partial M$ remains of
contact type and the symplectic asphericity condition is preserved.
An invariance theorem of Viterbo \cite{functors1}
shows that $FH_*(M, \, \omega) \simeq FH_*(M, \, \omega_1)$. On the other hand
$FH_*(M, \, \omega_1) \simeq FH_*(U,
\, \omega)$ because $(M,\, \omega_1)$ and $(U,\, \omega)$ are symplectomorphic.
{$\square$}
\section{Proof of Theorem A}
\label{la preuve de Kunneth}
Before beginning the proof, let us note that the natural class of
manifolds for which one can {\it define} Floer homology groups for a product
is that of manifolds with restricted contact type boundary.
The reason is that $\partial \big( M \times N
\big)$ involves the
full manifolds $M$ and $N$, not only some neighbourhoods of their
boundaries. If $X$ and $Y$ are the conformal vector fields on $M$
and $N$ respectively and $\pi_M: M \times N \longrightarrow M$,
$\pi_N: M \times N \longrightarrow N$ are the canonical projections,
the natural conformal vector field on
$M \times N$ is $Z = \pi ^*_M X + \pi ^*_N
Y$.
In order for $Z$ to be defined in a neighbourhood of
$\partial ( M \times N)$
it is necessary that $X$ and $Y$ be globally defined.
We only prove a) because
the proof of b) is entirely dual. One has to reverse arrows and
replace direct limits with inverse limits. The difference in the
statement is due to the fact that the inverse limit functor is
in general not exact, except when each term of the directed system is a
finite dimensional vector space~\cite{ES}. For the sake of clarity
we shall give the proof under the assumption $\langle c_1(TM), \,
\pi_2(M) \rangle =0$, $\langle c_1(TN), \,
\pi_2(N) \rangle =0$, so that the grading on Floer homology is defined
over $\mathbb{Z}$.
I. We establish the short exact
sequence~(\ref{suite exacte Kunneth Floer}).
Here is the sketch of the proof. We consider on $\widehat M \times
\widehat N$ a Hamiltonian of the form $H(t, \, x) + K(t, \, y)$,
$t\in \mathbb{S}^1$, $x\in \widehat M$, $y\in \widehat N$. For an almost
complex structure on $\widehat M \times \widehat N$ of the form
$J^1 \oplus J^2$, with $J^1$, $J^2$ (generic) almost complex
structures on $\widehat M$ and $\widehat N$ respectively, the Floer
complex for $H+K$ can be identified, modulo issues related to the
truncation by the action, with the tensor product of the
Floer complexes of $(H, \, J^1)$ and $(K, \, J^2)$. Nevertheless,
the Hamiltonian $H+K$ is not linear at infinity and hence not
admissible. We refer to~\cite{teza mea} for a discussion of the
weaker notion of asymptotic linearity and a proof of the fact that $H+K$
does not even belong to this extended admissible class.
The main idea of our proof is to construct an admissible
pair $(L, \, J)$ whose Floer complex is roughly the same as the one of $(H+K, \,
J^1 \oplus J^2)$. The Hamiltonian $L$ will have lots of additional
$1$-periodic orbits compared to $H+K$,
but all these will have negative enough action for
them not to be counted in the relevant truncated Floer
complexes.
We define the {\it period spectrum} $\mathcal S(\Sigma)$ of a contact type hypersurface
$\Sigma$ in a symplectic manifold as being the set of
periods of closed characteristics on $\Sigma$, the latter being parameterized by
the Reeb flow. We assume from now on that the period
spectra of $\partial M$ and $\partial N$
are {\it discrete and injective}, i.e. the periods of the closed
characteristics form a strictly increasing sequence, every
period being associated to a unique characteristic which is
transversally nondegenerate. This
property is $C^\infty$-generic among hypersurfaces \cite{T}, while
Floer homology does not change under a small $C^\infty$-perturbation of
the boundary (Proposition \ref{inv Fl h}).
Assuming a discrete and injective period spectrum amounts therefore to no loss of
generality.
We shall construct cofinal families of Hamiltonians and almost
complex structures $(H_\nu, \, J_\nu^1)$, $(K_\nu, \, J_\nu^2)$, $(L_\nu, \,
J_\nu)$ on $\widehat M$, $\widehat N$ and $\widehat M \times \widehat
N$ respectively, with the following property.
{\noindent \bf Main Property.}
{\it Let $\delta >0$ be fixed. For any $b>0$, there is a positive
integer $\nu(b, \, \delta)$
such that, for all $\nu\ge \nu(b, \, \delta)$, the following inclusion of
differential complexes holds:}
\begin{equation} \label{inclusion fondamentale de complexes}
{\scriptsize
\xymatrix{
\bigoplus_{r+s=k} FC_r^{[-\delta,\, \frac{b}2[}(H_\nu,\, J_\nu^1) \otimes
FC_s^{[-\delta,\, \frac{b}2[}(K_\nu,\, J_\nu^2) \ar[r] &
FC_k^{[-\delta,\, b[}(L_\nu,\, J_\nu) \ar[dl] \\
\bigoplus_{r+s=k}
FC_r^{[-\delta,\, 2b[}(H_\nu,\, J_\nu^1) \otimes
FC_s^{[-\delta,\, 2b[}(K_\nu,\, J_\nu^2) \ar[r] &
FC_k^{[-\delta,\, 4b[}(L_\nu,\, J_\nu)
}
}
\end{equation}
It is important to note
that we require the two composed arrows
{\scriptsize
$$FC_k^{[-\delta,\, b[}(L_\nu,\, J_\nu)
\hookrightarrow FC_k^{[-\delta,\, 4b[}(L_\nu,\, J_\nu),
$$
\begin{equation*}
\displaystyle{\bigoplus_{r+s=k} FC_r^{[-\delta,\, \frac{b}2[}(H_\nu,\,
J_\nu^1) \otimes
FC_s^{[-\delta,\, \frac{b}2[}(K_\nu,\, J_\nu^2) }
\hookrightarrow
\displaystyle{\bigoplus_{r+s=k}
FC_r^{[-\delta,\, 2b[}(H_\nu,\, J_\nu^1) \otimes
FC_s^{[-\delta,\, 2b[}(K_\nu,\, J_\nu^2)}
\end{equation*}
}
\noindent to be the usual inclusions corresponding to the truncation by the
action. In practice we shall construct {\it autonomous} Hamiltonians
having transversally nondegenerate $1$-periodic orbits, but one
should in fact think of small local perturbations of these,
following the technique of
\cite{CFHW}. The latter consists in perturbing an autonomous
Hamiltonian in the neighbourhood of a transversally
nondegenerate $1$-periodic orbit $\gamma$, replacing $\gamma$ by
precisely two
nondegenerate $1$-periodic
orbits corresponding to the two critical points of a Morse function on
the embedded circle given by $\tx{im}(\gamma)$. The Conley-Zehnder
indices of the perturbed orbits differ by one. Moreover, the perturbation
can be chosen arbitrarily small in any $C^k$-norm and the actions of
the perturbed orbits can be brought arbitrarily close to the action of
$\gamma$.
a. \ Let $S'$, $S''$ be the vertical coordinates on $\widehat M$ and
$\widehat N$ respectively. Let $(H_\nu)$, $(K_\nu)$ be {\it cofinal} families of
autonomous Hamiltonians on
$\widehat M$ and $\widehat N$, such that $H_\nu(p', \, S')=h_\nu(S')$ for $S'
\ge 1$, $K_\nu(p'', \, S'') =k_\nu(S'')$ for $S'' \ge 1$, with $h_\nu$, $k_\nu$
convex and linear of slope $\lambda_\nu$ outside a small neighbourhood
of $1$. We assumed that the period spectra of $\partial M$ and
$\partial N$ are discrete and injective and so we can choose $\lambda_\nu
\notin \mathcal S(\partial M) \ \cup \ \mathcal S(\partial N)$
with $\lambda_\nu \longrightarrow \infty$, $\nu \longrightarrow
\infty$. We shall drop the subscript $\nu$ in the sequel
by referring to $H_\nu$, $K_\nu$ and $\lambda_\nu$ as $H$, $K$ and $\lambda$.
Let us denote
$$\eta_{\lambda} =
\tx{dist} \big( \lambda, \, \mathcal S(\partial M) \, \cup \,
\mathcal S(\partial N) \big) \, > \, 0 \ , $$
$$T_0(\partial M) = \min \,
\mathcal S(\partial M) \ , \ \ T_0(\partial N) = \min \,
\mathcal S(\partial N) \ ,$$
$$T_0 = \min \, \big( T_0(\partial M), \, T_0(\partial N) \big) \,
> \, 0 \ .$$
b. \ Our starting point is the construction by Hermann
\cite{He} of a cofinal family which allows one to identify, in the
case of bounded open sets with restricted contact type boundary in
$\mathbb{C}^n$, the Floer homologies defined by Viterbo \cite{functors1} and
Floer and Hofer \cite{FH}. We fix
$$A=A(\lambda)=5\lambda/\eta_\lambda > 1 $$
and consider
the Hamiltonian $H_1$ equal
to $H$ for $S' \le
A-\epsilon(\lambda)$ and
constant equal to $C$ for $S' \ge A $, with $C$ arbitrarily close
to $\lambda(A - 1)$. Here $\epsilon(\lambda)$ is chosen to be small
enough and positive. We perform the same
construction in order to get a Hamiltonian $K_1$.
We suppose (Figure \ref{HamPlateau et Rho} (1))
that $H_1$ takes its values in the interval $[-\epsilon,\, 0[$
on the interior of $M$ where it is also $C^2$-small
and that $H_1 (\underline{x}, \, S')
=h(S')$ on
$\partial M \times [1,\, \infty[$, with $h'\equiv\lambda$ on $[1+
\epsilon(\lambda),\, A - \epsilon(\lambda)]$ and $h'\equiv 0$ on
$[A,\, \infty[$, where $\epsilon(\lambda)=\epsilon/\lambda$. Thus
$H_1$ takes values
in $[-\epsilon,\, \epsilon]$ for $S' \in
[1,\, 1+\epsilon(\lambda)]$ and in
$[\lambda(A-1)-2\epsilon,\, \lambda(A-1)]$ for $S'
\in [A-\epsilon(\lambda),\, A]$.
The Hamiltonian $H_1$ has additional $1$-periodic orbits compared
to $H$. These are
either constants on levels $H_1=C$ with action $-C \simeq
-\lambda(A-1)$, or orbits corresponding to characteristics on
the boundary, appearing on levels
$S'=\tx{ct.}$ close to $A$. The action of the latter is arbitrarily
close to
$h'(S)S-h(S) \le (\lambda - \eta_\lambda) \cdot A - \lambda( A -1)
+2\epsilon \le -3\lambda \longrightarrow -\infty$, $\lambda
\longrightarrow \infty$. The special choice of $A$ is motivated by
the previous computation. We see in particular that it is crucial to take into
account the gap $\eta_\lambda$ in order to be able to make the action
of this kind of orbits tend to $-\infty$.
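Explicitly, the role played by the choice $A = 5\lambda/\eta_\lambda$ is the
following: the above bound expands as
$$(\lambda - \eta_\lambda) \cdot A - \lambda(A-1) + 2\epsilon \ = \
\lambda - \eta_\lambda A + 2\epsilon \ = \ -4\lambda + 2\epsilon \ \le \
-3\lambda$$
as soon as $\lambda \ge 2\epsilon$.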
\begin{center}
\begin{figure}
\caption{The Hamiltonian $H_1$ and the truncation function
$\rho$ \label{HamPlateau et Rho}}
\end{figure}
\end{center}
c. \ We deform now $H_1 + K_1$
to a Hamiltonian that is constant equal to $2C$
outside the compact set $\{ S' \le B, \, S'' \le B \}$, with
$$B = A\sqrt \lambda \ .$$
This already holds in $\{ S'
\ge A, \, S'' \ge A \}$. We describe the corresponding
deformation in $\{ S' \le A , \, S'' \ge A \}$ and perform the symmetric
construction in $\{ S' \ge A, \, S'' \le A \}$.
Let us define
$$H_2 : \widehat{M} \times \partial N \times [A, \, \infty[
\longrightarrow \mathbb{R} \ ,$$
$$H_2(\underline{x},\, y,\, S'') = \big( 1-\rho(S'') \big)
H_1(\underline{x}) + \rho (S'') C \ ,$$
with $\rho : [A, \ +\infty[ \longrightarrow [0,1]$, $\rho \equiv
0 $ on $[A,\, 2A]$, $\rho \equiv 1$ for $S'' \ge B-\epsilon$,
$\rho$ strictly increasing on $[2A,\, B-\epsilon]$,
$\rho' \equiv \tx{ct.} \ \in \ [\frac 1 {B-2A-\epsilon}, \frac
1 {B - 2A - 3\epsilon} ] $ on $[2A + \epsilon, \, B -
2\epsilon]$ (Figure \ref{HamPlateau et Rho} (2)).
The symplectic form on $\widehat{M} \times \partial N \times [A,
\, \infty[$ is $\omega' \oplus d(S''\lambda'')$, with $\lambda''$
the contact form on $\partial N$. We get
$$X_{H_2} (\underline{x},\, y,\, S'') = \big( 1 - \rho(S'') \big)
X_{H_1}(\underline{x}) -
\big( C - {H_1}(\underline{x}) \big) \rho'(S'') X''_{\tx{Reeb}}(y) \ .$$
\begin{figure}
\caption{Graph of the deformation $H_2$ \label{deformation}}
\end{figure}
The projection of a periodic orbit of $X_{H_2}$
on $\widehat{M}$
is a periodic orbit of $X_{H_1}$. In particular,
${H_1}$ is constant along the projection. Moreover, the orbits appear on levels
$S''=\tx{ct.}$
as there is no component $\DP{}{S''}$ in
$X_{H_2}$. As a consequence, the coefficients in front of
$X_{H_1}$ and $X''_{\tx{Reeb}}$ are constant along one orbit of
$X_{H_2}$.
A $1$-periodic orbit $\theta$ of $X_{H_2}$ corresponds therefore
to a couple $(\Gamma, \, \gamma)$ such that
\begin{itemize}
\item $\Gamma$ is an orbit of $X_{H_1}$ having period
$1-\rho(S'')$;
\item $\gamma$ is a closed characteristic on the level
$\partial N \times \{ S''\}$,
having period $\big( C - {H_1}(\underline{x})
\big) \rho'(S'')$ and having the {\it opposite} orientation to
the one given by $X''_{\tx{Reeb}}$. We have used the notation
$\underline{x}=\Gamma(0)$.
\end{itemize}
The action of $\theta$ is
\begin{eqnarray} \label{action totale}
A_{H_2+K_1}(\theta) & = & -A(\Gamma) - A(\gamma) -H_2 - K_1
\nonumber \\
& = & A_{H_1}(\Gamma) -A(\gamma) - \rho (S'') \big( C -
{H_1}(\underline{x}) \big) - C \ .
\end{eqnarray}
We have denoted by $A(\gamma)$, $A(\Gamma)$ the areas of the
orbits $\gamma$ and $\Gamma$ respectively. The Hamiltonian $K_1$ is
constant in the relevant domain and we have directly replaced it by
its value $C$. It is useful to notice that
$A(\gamma)=-S''\rho'(S'')(C-{H_1}(\underline{x}))$. The
minus sign comes from the fact that the running orientation on
$\gamma$ is opposite to the one given by $X''_{\tx{Reeb}}$.
We have used the symplectic form $d(S''\lambda'')$ on the
second factor in~(\ref{action totale}).
For $0 \le T = 1-\rho(S'') \le 1$,
the $T$-periodic orbits of
$X_{H_1}$ belong to one of the following classes.
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item \label{11} constants in the interior of $M$, having zero area;
\item \label{22}
closed characteristics located around the level $S'=1$, whose
areas belong to the interval
$[T_0(\partial M), \, T\lambda]$;
\item \label{33}
if $T\lambda \in \mathcal S(\partial M)$, one has a closed
characteristic of area $S'T\lambda$ for any $S'
\in [1+\epsilon(\lambda),\, A -\epsilon(\lambda)]$
--- the interval where ${H_1}$ is
linear of slope $\lambda$;
\item \label{44}
closed characteristics located around the level $S'=A$, whose
areas belong to the interval $[T_0(\partial M),\, T(\lambda - \eta_\lambda)A]$;
\item \label{55} constants on levels $S' \ge A$, having zero area.
\end{enumerate}
For $\epsilon >0$ fixed we choose the
various parameters involved in our constructions such that
$$\lambda (A-1) \, \ge \, C \, \ge \, \lambda (A-1) - \epsilon, $$
$$\rho' \, \le \, 1/\big( A(\sqrt{\lambda} -1) \big) + \epsilon,$$
$$ S'' \tx{ close to } B \ \Longrightarrow \
\rho'(S'')\cdot S'' \, \le \, \sqrt{\lambda} / (\sqrt{\lambda}
-1) \ .$$
We now show that the actions of $1$-periodic orbits of $X_{H_2}$
appearing in the region
$\widehat{M} \times
\partial N \times [A,\, \infty[$ tend uniformly to
$-\infty$ when $\lambda \rightarrow +\infty$.
We estimate the action of the orbits $\theta$ according to
the type of their first component $\Gamma$ and according to
the level $S''\ge A$ on which $\gamma$ lies.
\renewcommand{\theenumii}{\alph{enumii}}
\begin{enumerate}
\item $\Gamma$ of type \ref{11} corresponds to $ {H_1} \in
[-\epsilon,0]$.
\begin{enumerate}
\item \label{premier}
$S'' \in [A,2A] \ \bigcup \ [B-\epsilon(\lambda),\,
\infty[$. Because $\rho'=0$ there is no component of
$X_{H_2}$ in the
$X''_{\tx{Reeb}}$ direction; the orbits $\Gamma$
appear in degenerate families (of dimension $\dim\,N$)
and the action of $\theta$ satisfies
$A_{H_2+K}(\theta) \le \epsilon -C $.
\item \label{deuxieme}
$S'' \in [2A, \, \frac {A + B} 2 ]$. Orbits
$\Gamma$ come in pairs with closed characteristics
$\gamma $ of period
$\rho' (C - {H_1}(\underline{x}))$ on $\partial N \times
\{ S'' \}$. We have
\begin{eqnarray*}
\lefteqn{A_{H_2+K}(\theta) \ \le \ \epsilon +
S''\rho'(S'')(C+\epsilon) -C} \\
& \le & \epsilon + \frac{ A+B} 2 \cdot \frac 1 {B - 2A
-3\epsilon} \cdot (C+\epsilon) - C \ \le \
-\frac 1 4 C \ .
\end{eqnarray*}
The second inequality is valid for sufficiently large
$\lambda$, in view of $B =
A\sqrt{\lambda}$ which implies $(B+A)/(B-2A-3\epsilon)
\rightarrow 1$.
\item \label{troisieme}
$S'' \in [\frac {A + B} 2, B -\epsilon(\lambda)]$ (hence
$\rho \in [\frac 12, 1]$). For $\lambda $ big enough we have
\begin{eqnarray*}
\lefteqn{A_{H_2+K}(\theta) \ \le \ \epsilon +
S''\rho'(S'')(C+\epsilon) - \rho(S'')C - C} \\
& \le & \epsilon + \frac{B - \epsilon}{B - 2A -
3\epsilon} \cdot (C+\epsilon) - \frac 12 \cdot C -C \ \le \
-\frac 14 C \ .
\end{eqnarray*}
\end{enumerate}
\item $\Gamma$ of type \ref{22} corresponds to ${H_1} \in [-\epsilon,
\epsilon]$ and $S' \in [1, \ 1+\epsilon(\lambda)]$.
The area of $\Gamma$ belongs to the interval
$[T_0(\partial M), \, (1-\rho(S'')) \lambda]$.
\begin{enumerate}
\item $S'' \in [A,\, 2A] \ \bigcup \ [B-\epsilon(\lambda),\,
\infty[$. As in \ref{premier}) we have
$$A_{H_2+K}(\theta) \ \le \ \epsilon + (1-\rho(S''))\lambda - C
\ \le \ \epsilon + \lambda - C \ .$$
\item $S'' \in [2A, \frac {A + B} 2
]$. As in \ref{deuxieme}) the total action of $\theta$
is
\begin{equation*}
A_{H_2+K}(\theta) \le \epsilon +
(1-\rho(S''))\lambda + S''\rho'(S'')(C+\epsilon) -C \le
- \frac 12 C \, .
\end{equation*}
\item $S'' \in [\frac {A + B} 2 , \, B
-\epsilon(\lambda)]$. Following \ref{troisieme}) one has
\begin{equation*}
A_{H_2+K}(\theta) \le \epsilon +(1-\rho(S''))\lambda
+ S''\rho'(S'')(C+\epsilon) -\rho(S'')C - C \le -\frac 14 C .
\end{equation*}
\end{enumerate}
\item $\Gamma$ of type \ref{33} has an action
$A_{H_1}(\Gamma) \le S'T\lambda - \lambda(S'-1-\epsilon')
\le (1+\epsilon') \lambda$, where
$\epsilon'=\epsilon(\lambda)$.
\begin{enumerate}
\item $S'' \in [A,\, 2A] \, \bigcup \,
[B-\epsilon(\lambda),\, \infty[$~:
$A_{H_2+K}(\theta) \le 2\lambda - C$.
\item $S'' \in [2A, \, \frac {A + B} 2 ]$~:
$A_{H_2+K}(\theta) \le 2\lambda -\frac 14 C$.
\item \label{quatrieme} $S'' \in [\frac {A + B} 2 , \, B
-\epsilon(\lambda)]$. The technique used in
\ref{troisieme}) in order to get the upper bound no longer applies, as
$C-{H_1}(\underline{x})$ can be arbitrarily close to
$0$. Nevertheless $\rho$ satisfies by
definition the inequality $(S''-2A) \rho'(S'') \le \rho(S'')+\epsilon$.
We thus get
\begin{eqnarray*}
\lefteqn{A_{H_2+K}(\theta)} \\
\ & \le & (1+\epsilon')\lambda +
S''\rho'(S'')(C-{H_1}(\underline{x})) -
\rho(S'')(C-{H_1}(\underline{x})) -C \\
\ & \le & (1+\epsilon')\lambda +
\frac{2A}{B-2A-3\epsilon} (C+\epsilon)
+\epsilon(C+\epsilon) -C \le 2\lambda - \frac 12 C \, .
\end{eqnarray*}
\end{enumerate}
\item $\Gamma$ of type \ref{44} corresponds to $ {H_1} \in
[C-\epsilon,\, C]$ and $A_{H_1}(\Gamma)\le (1-\rho(S''))
\lambda A-\lambda(A-1) \le \lambda$. In all three
cases a)-c) we get $A_{H_2+K}(\theta) \le 2\lambda - \frac 1 4
C$.
\item $\Gamma$ of type \ref{55} corresponds to ${H_1} \equiv C$. As in
\ref{premier}) there is no component in the $X''_{\tx{Reeb}}$
direction for $X_{H_2}$
and the orbits $\Gamma$ appear in (highly) degenerate families. The
total action in all three cases a) - c) is
$A_{H_2+K}(\theta) = -C - C = -2C$.
\end{enumerate}
This finishes the proof of the fact that the action of the new orbits of $H_2$
tends uniformly to $-\infty$.
d. \ The symmetric construction can be carried out for $K$ in the
region
$\partial M \times [A, \, \infty[ \times \widehat{N}$. One gets
in the end a Hamiltonian
$H_2+K_2$ which is constant equal to $2C$ on $\{
S' \ge B \} \bigcup \{ S'' \ge B \}$. We modify
now $H_2+K_2$ outside the compact set $\{S'\le B \} \bigcap \{
S'' \le B \}$ in order to make it linear with respect to the
Liouville vector field $Z =
X \oplus
Y$ on $\widehat{M} \times \widehat{N}$.
Let us define the following domains in $\widehat M \times \widehat
N$ (see Figure \ref{param product}):
$$\tx{\bf I} = \partial M \times
[1,\ +\infty[ \ \times \ \partial N \times [1, \ +\infty[ \ ,$$
$$\tx{\bf II} = M \times \partial N \times [1, \ +\infty[ \ , \ \
\tx{\bf III} = \partial M \times [1, \ +\infty[ \ \times \ N \ .$$
Let $\Sigma \subset \widehat{M} \times \widehat{N}$ be a hypersurface
which is transversal to $Z$ such that
$$S'\ _{\vert_{\Sigma \ \cap \ \text{\bf III}}} \equiv \alpha > 1,
\qquad \qquad
S'\ _{\vert_{\Sigma \ \cap \ \text{\bf I}}} \in [1,\ \alpha] \ ,$$
$$S''\ _{\vert_{\Sigma \ \cap \ \text{\bf II}}} \equiv \beta > 1,
\qquad \qquad
S''\ _{\vert_{\Sigma \ \cap \ \text{\bf I}}} \in [1 , \ \beta] \ .
$$
We parameterize $\widehat{M} \times \widehat{N} \ \setminus \
\tx{int}(\Sigma)$ by
$$\Psi~: \Sigma \times [1, \ +\infty[ \longrightarrow \widehat{M}
\times \widehat{N} \ \setminus \ \tx{int}(\Sigma) \ , $$
$$(z,S) \longmapsto \big( \varphi'_{_{\ln S}}(z), \ \varphi''
_{_{\ln S}} (z) \big) \ ,$$
which is a symplectomorphism if one endows $\Sigma \times [1,\
+\infty[$ with the symplectic form $d(S \lambda|)$,
where $\lambda| = \iota(X\oplus Y)\big( \omega \oplus
\sigma \big)\vert_\Sigma$. As
an example, for $z \in \Sigma \ \cap \ \text{\bf III}$ we have
$\varphi' _{_{\ln S}}(z) = \big( x(z), \ S\alpha \big)$. It is
easy to see that
\begin{equation} \label{estimation}
\Psi ^{-1} \Big( \{
S' \ge B \} \ \cup \ \{ S'' \ge B \} \Big) \supseteq \{ S \ge
B \} \ .
\end{equation}
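The inclusion (\ref{estimation}) can be checked region by region; here is a
sketch. Let $(z, \, S) \in \Sigma \times [1, \, \infty[$ with $S \ge B$. For
$z \in \Sigma \, \cap \, \text{\bf III}$ we have $S'\big( \Psi(z, \, S) \big)
= S\alpha \ge B$; for $z \in \Sigma \, \cap \, \text{\bf II}$ we have
$S''\big( \Psi(z, \, S) \big) = S\beta \ge B$; for $z \in \Sigma \, \cap \,
\text{\bf I}$ we have $S'\big( \Psi(z, \, S) \big) = S \, S'(z) \ge S \ge B$,
since $S'(z) \ge 1$. In all cases $\Psi(z, \, S) \in \{ S' \ge B \} \, \cup
\, \{ S'' \ge B \}$, which is the desired inclusion.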
As a consequence $H_2+K_2$ is constant equal to $2C$ on
$\{ S \ge B
\}$. We replace it by
$L =l(S)$ on $\{ S \ge B\}$, with $l$ convex and
$l'(S) = \mu \notin \mathcal S(\Sigma)$
for $S \ge B+\epsilon$. The additional $1$-periodic orbits that are
created in this way have action $A_L \le \mu(B+\epsilon)
-2C = \mu(\sqrt{\lambda}A+\epsilon) - 2\lambda (A-1)$.
By choosing $\mu = \sqrt{\lambda}$ one ensures
$A_L \longrightarrow \ -\infty$, as well as the cofinality of
the family of Hamiltonians $L$ as $\lambda\rightarrow \infty$. Indeed,
the Hamiltonian $L$ is bigger than $(\sqrt{\lambda}
-\epsilon)(S-1)$ on $\Sigma \times [1, \, \infty[$.
Note that the choice of $\mu$ equal to
$\sqrt{\lambda}$ and not belonging to the spectrum of $\Sigma$
is indeed possible if we choose $\Sigma$ to have a discrete and
injective spectrum.
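For the reader's convenience, here is a sketch of why this choice of $\mu$
works. Since $T_0 \in \mathcal S(\partial M) \, \cup \, \mathcal S(\partial N)$
we have $\eta_\lambda < \lambda$ for $\lambda$ large, hence $A =
5\lambda/\eta_\lambda > 5$ and
$$A_L \ \le \ \sqrt{\lambda} \big( \sqrt{\lambda} A + \epsilon \big)
- 2\lambda(A-1) \ = \ -\lambda(A-2) + \sqrt{\lambda}\, \epsilon \ \le \
-3\lambda + \sqrt{\lambda}\, \epsilon \ \longrightarrow \ -\infty \ .$$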
\begin{center}
\begin{figure}
\caption{Parameterization of the product $\widehat M \times \widehat
N$ \label{param product}}
\end{figure}
\end{center}
\quad \ e. \ We have constructed a cofinal family of Hamiltonians
$(L_\nu)_{\nu\ge 1}$ which are linear at infinity and that are
associated to the initial Hamiltonians $(H_\nu)$ and
$(K_\nu)$.
We claim that the {\it Main Property} holds for $(L_\nu)_{\nu \ge
1}$. We assume of course that
$J_\nu^1$, $J_\nu^2$ and $J_\nu$
are regular almost complex structures for
$H_\nu$, $K_\nu$, $L_\nu$ respectively, which are standard
for $ S' \ge 1+\epsilon$, $S'' \ge 1+\epsilon$ and $S\ge
B+\epsilon$. Moreover, the almost complex structure $J_\nu$ is of the form
$J_\nu = J_\nu^1 \oplus J_\nu^2$ for $S \le
B$. The preceding estimates on the action of the $1$-periodic orbits
show that the sequence (\ref{inclusion fondamentale
de complexes}) of inclusions is certainly valid at the level of
modules. This says in particular that the $1$-periodic orbits involved
in the free modules appearing in (\ref{inclusion fondamentale
de complexes}) are located in a neighbourhood of
$M$, $N$ and $M\times N$ respectively, and we can assume that the
latter is contained in $\{ S \le 1
\}$ for a suitable choice of $\Sigma$.
Classical transversality arguments (\cite{FH}, Prop.~17;
\cite{FHS}, 5.1 and 5.4) ensure that we can choose regular
almost complex structures of the form described above.
The point is now to prove that the inclusions~(\ref{inclusion fondamentale
de complexes}) are also valid at the level of {\it differential}
complexes.
It is enough to show that any trajectory
$u = (v, \, w):
\mathbb{R} \times \mathbb{S}^1 \longrightarrow \widehat{M} \times \widehat{N}$
which satisfies
$u(s,\cdot) \longrightarrow (x^{\pm}, \, y^{\pm})$,
$s\longrightarrow \pm \infty$ stays in the domain $\{ S\le 1
\}$, where the Floer equation is split. That will imply that $v$
and $w$ are Floer trajectories in
$\widehat{M}$ and $\widehat{N}$ respectively and will prove
(\ref{inclusion fondamentale de complexes}) at the level of differential complexes.
It is of course enough to prove this statement under the assumption
that
$J_\nu=J_\nu^1\oplus J_\nu^2$ on the whole of $\widehat{M} \times
\widehat{N}$:
the Floer trajectories would then be a posteriori contained in $\{
S\le 1 \}$. Moreover, the proof of this fact only makes use of the split
structure on the set $\{ S' \le 2A, \, S'' \le 2A \}$ and this
allows one to modify $J_\nu$ outside $\{ S \le
B \}$ in order to formally work with an almost complex structure that
is homothety-invariant at infinity.
We therefore suppose in the sequel that $J_\nu = J_\nu ^1 \oplus
J_\nu^2$. Arguing by contradiction, assume that the Floer trajectory
$u$ is not contained in
$\{ S
\le 1 \}$. As $u$ is anyway contained in a compact set, we infer that
the function $S \circ u$ has a local maximum in $\{ S
> 1 \}$, which means that one of the functions $S'\circ v$ or $S''
\circ w$ has a local maximum in $\{ S' > 1+\epsilon \}$ or $\{ S''
> 1+\epsilon \}$ respectively. The two cases are symmetric and we can
assume without loss of generality that
$S'' \circ w$
has a local maximum in $\{ S'' > 1+\epsilon \}$.
In view of the fact that $w$ satisfies the Floer equation associated
to $K_\nu$ for $S'' \le 2A$
the maximum principle ensures that the value of $S'' \circ
w$ at the local maximum is in the interval $]2A, \,
\infty[$. But $w(s,\cdot) \rightarrow y^\pm$, $s \rightarrow \pm \infty$
with $y^\pm \in \{ S'' \le 1+\epsilon \}$ and this implies that $w$
crosses the hypersurfaces $\{S'' =A \}$ and $\{ S'' =2A\}$. Moreover,
the piece of $w$ contained in $\{ A \le S'' \le 2A \}$ is
$J_\nu^2$-holomorphic because $K_\nu$ is constant on that strip.
We therefore obtain
\begin{eqnarray} \label{inegalite estimation energie}
\lefteqn{A_{L_\nu}(x^-, \, y^-) -A_{L_\nu}(x^+, \, y^+) \ = \ \int_{\mathbb{R} \times
\mathbb{S}^1} |(v_s,\, w_s)|^2_{J_\nu^1 \oplus J_\nu^2}} \\
& = & \int_{\mathbb{R} \times\mathbb{S}^1} |v_s |^2_{J_\nu^1}+|w_s|^2_{J_\nu^2} \
\ge \ \int_{\mathbb{R} \times \mathbb{S}^1} |w_s|^2_{J_\nu^2} \nonumber \\
& \ge &
\int_{\big[(s,\, t) \,~: \, w(s,\, t) \, \in \, \{ A \le S'' \le 2A \}
\big]} |w_s|^2_{J_\nu^2} \ = \ \tx{Area}(w \cap \{ A \le S'' \le 2A
\}) \ . \nonumber
\end{eqnarray}
The last equality holds because $w$ is
$J_\nu^2$-holomorphic in the relevant region.
The lemma below will allow us to conclude.
It is inspired by Hermann's work \cite{He},
where one can find it stated for $\widehat{N} = \mathbb{C}^n$.
\begin{lem} \label{petit lemme energetique}
Let $(N, \, \omega)$ be a manifold with contact type boundary and
$\widehat{N}$ its symplectic completion.
Let $J$ be an almost complex structure which is homothety invariant
on
$\{ S \ge 1 \}$.
There is a constant $C(J) >0$ such that,
for any $A \ge 1$ and any $J$-holomorphic curve $u$ having
its boundary components on both
$\partial N \times \{ A \}$ and $\partial N \times \{2A \}$, one has
\begin{equation} \label{petite ineg aires}
\tx{Area}(u) \ge C(J) A \ .
\end{equation}
\end{lem}
\noindent {\it \small Proof. }
Consider, for $A \ge 1$, the map
$$\begin{array}{rcl}
\partial N \times [1, \, \infty[ & \stackrel {h_A}
\longrightarrow & \partial N \times [1, \,
\infty[ \ , \\
& & \\
(p, \, S) & \longmapsto & (p, \, AS) \ .
\end{array}
$$
By definition the map $h_A$ is $J$-holomorphic. On the other
hand, by using the explicit form of $J$ given in
\S\ref{constructions}, one sees that $h_A$
expands the area
element by a factor $A$. As a consequence, up to rescaling by $h_A$
it is enough to prove~(\ref{petite
ineg aires}) for $A=1$. We apply Gromov's
Monotonicity Lemma \cite{G} 1.5.B,
\cite{Sik} 4.3.1
which ensures the existence of $\epsilon_0>0$ and of
$c(\epsilon_0, \, J)>0$
such that,
for any $0 < \epsilon \le \epsilon_0$, any $x\in \partial N \times
[1, \, 2]$ with $B(x, \, \epsilon) \subset \partial N \times [1, \,
2]$ and any connected
$J$-holomorphic curve $S$ such that $x\in S$ and $\partial
S\subset \partial B(x,\, \epsilon)$ one has
$$\tx{Area}(S \cap B(x, \, \epsilon)) \ge c(\epsilon_0, \, J)
\epsilon^2 \ .$$
Let us now fix $\epsilon$ small enough so that $B(x, \,
\epsilon) \subset \partial N \times [1, \, 2]$ for all $x \in \partial N
\times \{ \frac 3 2 \}$. As the boundary of $u$ rests on both $\partial N
\times \{ 1 \}$ and $\partial N \times \{ 2 \}$ one can find such a point
$x$ on the image of $u$. We infer $\tx{Area}(u) \ge
\tx{Area}(u \cap B(x, \, \epsilon)) \ge c(\epsilon_0, \, J)
\epsilon^2$. Then $C(J)=c(\epsilon_0, \,
J)\epsilon^2$ is the desired constant.
{$\square$}
Applying Lemma \ref{petit lemme energetique} to our situation
we get a constant
$c >0$ that does not depend on $u$ and such that $\tx{Area}(w \cap
\{ A \le S'' \le 2A \} ) \ge cA \ge C\lambda$. The difference of the
actions in (\ref{inegalite estimation energie}) is, on the other hand,
bounded by $4b$, and we get a contradiction for $\lambda$
large enough.
For a fixed $b>0$ the Floer trajectories corresponding to $L_\nu$ are therefore
contained in $\{ S \le 1 \}$ for $\nu$ large enough. This
proves that the sequence of inclusions (\ref{inclusion fondamentale
de complexes}) is valid at the level of differential complexes.
A few more commutative diagrams will now finish the proof.
First, as a direct consequence of (\ref{inclusion
fondamentale de complexes}), one has the commutative diagram
\begin{equation} \label{premier diagramme intermediaire}
{\scriptsize
\xymatrix{
H_k \big( FC_*^{[-\delta,\, \frac{b}2[}(H_\nu,\, J_\nu ^1) \otimes
FC_*^{[-\delta,\, \frac{b}2[}(K_\nu,\, J_\nu ^2)
\big) \ar@{-->}[d] \ar[r] & FH_k^{[-\delta,\, b[}(L_\nu,\, J_\nu)
\ar[dl] \ar@{-->}[d] \\
H_k \big( FC_*^{[-\delta,\, 2b[}(H_\nu,\, J_\nu ^1) \otimes
FC_*^{[-\delta,\, 2b[}(K_\nu,\, J_\nu ^2)
\big) \ar[r] & FH_k^{[-\delta,\, 4b[}(L_\nu,\, J_\nu)
}
}
\end{equation}
By taking the direct limit for $\nu \rightarrow
\infty$ and $b \rightarrow \infty$ this induces the diagram
\begin{equation} \label{deuxieme diagramme intermediaire}
{\scriptsize
\xymatrix{ \displaystyle
\lim_{\stackrel{\longrightarrow} {b \rightarrow +\infty}}
\lim_{\stackrel{\longrightarrow} {\nu {\longrightarrow} +\infty}}
H_k \big( FC_*^{[-\delta, \frac{b}2[}(H_\nu, J_\nu ^1) \otimes
FC_*^{[-\delta, \frac{b}2[}(K_\nu, J _\nu ^2)
\big) \ar[r] \ar@{-->}[d] & FH_k(M\times N)
\ar[dl]_\sim \ar@{-->}[d] \\
\displaystyle
\lim_{\stackrel{\longrightarrow} {b \rightarrow +\infty}}
\lim_{\stackrel{\longrightarrow} { \nu {\longrightarrow} +\infty}}
H_k \big( FC_*^{[-\delta, 2b[}(H_\nu, J_\nu ^1) \otimes
FC_*^{[-\delta, 2b[}(K_\nu, J_\nu ^2)
\big) \ar[r] & FH_k(M \times N)
}
}
\end{equation}
We easily see that {\it the vertical arrows are isomorphisms}, as the
direct limits over $\nu$ and $b$ commute with each
other (this is a general property of bidirected systems).
This implies that {\it the diagonal arrow is an isomorphism} as
well. At the same time the algebraic K\"unneth
theorem~\cite[VI.9.13]{D} ensures the existence of a split short exact
sequence
\begin{equation} \label{troisieme diagramme intermediaire}
{\scriptsize
\xymatrix{
\bigoplus_{r+s=k} FH_r(M) \otimes
FH_s(N) \ \ \ar@{>->}[r] &
\displaystyle
\lim_{\begin{array}{c} \scriptsize \longrightarrow \\ b,\, \nu
\rightarrow \infty \end{array}
}
H_k \big( FC_*^{[-\delta,\, 2b[}(H_\nu,\, J _\nu ^1) \otimes
FC_*^{[-\delta,\, 2b[}(K_\nu,\, J _\nu ^2)
\big) \ar@{->>}[d] \\
& \bigoplus_{r+s=k-1} \tx{Tor}_1^A \big( FH_r(M), \
FH_s(N) \big)
}
}
\end{equation}
We infer the validity of the short exact sequence~(\ref{suite exacte
Kunneth Floer}).
We note that the exactness of the direct limit functor is used
in a crucial way in order to
obtain~(\ref{troisieme diagramme intermediaire})
from the K\"unneth exact sequence in truncated homology.
II. We now prove the existence of the morphism from the classical
K\"unneth exact sequence to~(\ref{suite exacte Kunneth Floer}).
We restrict the domain of the action to $[-\delta, \,
\delta[$ with $\delta >0$ small enough.
The Floer trajectories of a
$C^2$-small autonomous Hamiltonian
which is a Morse function on $M$ coincide in the symplectically
aspherical case with the gradient trajectories in the
Thom-Smale-Witten complex. We denote by $C_*^{\tx{Morse}}$ the
Morse complexes on the relevant manifolds. By~(\ref{inclusion
fondamentale de complexes}) there is a commutative diagram
\begin{equation} \label{premier diag pour Kunneth geant}
{\scriptsize
\xymatrix{
\displaystyle{\bigoplus_{r+s=k} FC_r^{[-\delta, 2b[}(H_\nu, J_\nu^1) \otimes
FC_s^{[-\delta, 2b[}(K_\nu, J_\nu^2) } \ \ \ar@{^(->}[r]
& FC_k^{[-\delta, 4b[}(L_\nu, J_\nu) \\
\displaystyle{\bigoplus_{r+s=k} C^{\tx{Morse}}_{m+r}(H_\nu,
J_\nu^1)
\otimes C^{\tx{Morse}}_{n+s}(K_\nu, J_\nu^2) }
\ar[u]
\ar@{=}[r] & C^{\tx{Morse}}_{m+n+k}(L_\nu, J_\nu^1\oplus J_\nu ^2) \ .
\ar[u]
}
}
\end{equation}
The relevant Morse complexes compute homology relative to the
boundary. With an obvious notation the above diagram induces in homology
\begin{equation} \label{deuxieme diag pour Kunneth geant}
{\scriptsize
\xymatrix{
\displaystyle \lim_{\stackrel{\longrightarrow}{b}} \ \lim_{\stackrel
{\longrightarrow}{\nu}} \
H_k\big(FC_*^{[-\delta,\, 2b[}(H_\nu, \, J_\nu^1) \otimes
FC_*^{[-\delta,\, 2b[}(K_\nu, \, J_\nu^2)\big)
\ar[r]^{\qquad \qquad \qquad \qquad \qquad \sim}
& FH_k(M\times N) \\
H_{m+n+k} \big( C_{m+*}(M, \, \partial M) \otimes
C_{n+*}(N, \partial N) \big) \ar[u]^{\phi}
\ar[r]^{\qquad \qquad \sim}
& H_{m+n+k}(M\times N, \, \partial(M\times N)) \ar[u]^{c_*}
}
}
\end{equation}
By naturality of the algebraic K\"unneth exact sequence, the map $\phi$
fits into the diagram below, where $\, * \, $ stands for
its domain and target.
\begin{equation} \label{troisieme diag pour Kunneth geant}
{\scriptsize
\xymatrix{
{\displaystyle{\bigoplus_{r+s = k}}} FH_r(M) \otimes
FH_s(N) \ \ \ar@{>->}[r] &
{*} \ar@{->>}[r]
& {\displaystyle{\bigoplus_{r+s=k-1}}} \tx{Tor}_1^A \big( FH_r(M),
FH_s(N) \big) \\
\hspace{-.1cm}{\displaystyle{\hspace{.03cm}\bigoplus_{r+s = k}}}
\hspace{-.1cm}
H_{m+r}(M, \partial M) \otimes
H_{n+s}(N, \partial N) \ \ \ar@{>->}[r] \ar[u]_{c_*\otimes c_*} &
{*} \ar@{->>}[r] \ar[u]_{\phi} &
\hspace{-.32cm}{\displaystyle{\bigoplus _{r+s=k-1}}} \hspace{-.25cm}
\tx{Tor}_1^A \big(H_{m+r}(M,\partial M),
H_{n+s}(N,\partial N) \big)
\ar[u]_{\tx{Tor}_1(c_*)}
}
}
\end{equation}
Diagrams
(\ref{deuxieme diag pour Kunneth geant}--\ref{troisieme diag pour
Kunneth geant}) establish the
desired morphism of exact sequences.
{$\square$}
\section{Applications} \label{appli}
\subsection{Computation of Floer homology groups}
\begin{prop} \label{prop:produc with C}
Let $N$ be a compact symplectic manifold with restricted contact
type boundary and let $\widehat N$ be its symplectic completion.
Let $\widehat N \times \mathbb{C}^\ell$, $\ell\ge 1$ be endowed with the
product symplectic form. Then
$$FH_*(\widehat N \times \mathbb{C}^\ell) =0 \ .$$
\end{prop}
\noindent {\it \small Proof. } This follows directly from the K\"unneth exact sequence
and from Floer, Hofer and Wysocki's computation $FH_*(\mathbb{C}^{\ell})=0$~\cite{FHW}.
{$\square$}
We now fix terminology for
the proof of Theorem C following~\cite{Eli-psh}.
A Stein manifold $V$ is a triple $(V,\, J_V,\, \phi_V)$,
where $J_V$ is a complex structure and $\phi_V$
is an exhausting plurisubharmonic function.
We say that $V$ is of {\it finite
type} if we can choose $\phi_V$ with all critical points lying in
a compact set $K$. The Stein domains $V_c=\{\phi_V\le c\}$ such that
$V_c\supset K$ are called {\it big Stein domains} of $\phi_V$; they are
all isotopic.
We call $( V,
\, J_V,\, \phi_V)$ {\it subcritical} if $\phi_V$ is Morse and all its
critical points have indices
strictly smaller than $\frac 1 2 \dim _\mathbb{R} V$ (they are anyway at most
equal to $\frac 1 2 \dim _\mathbb{R} V$).
Let $(V,\, J_V,\, \phi_V)$ be a Stein manifold of finite
type with $\phi_V$ Morse. Following~\cite{SS} we define a
{\it finite type Stein deformation} of $( V,\, J_V,\, \phi_V)$
as a smooth family of complex structures $J_t$,
$t\in [0,1]$ together with exhausting plurisubharmonic functions
$\phi_t$ such that: i) $J_0=J_V$, $\phi_0=\phi_V$; ii) the $\phi_t$ have only Morse or birth-death
type critical points; iii) there exists $c_0$ such that all $c\ge c_0$
are regular values for $\phi_t$, $t\in [0,1]$.
Condition ii) is not actually imposed in~\cite{SS},
but we need it for the following theorem,
which says that the existence of finite type
Stein deformations on subcritical
manifolds is a topological problem.
{\bf Theorem (compare~\cite[3.4]{Eli-psh}).} {\it Let $(J_0,\, \phi_0)$ and
$(J_1,\, \phi_1)$ be
finite type Stein structures on $V$.
Assume $J_0$, $J_1$ are homotopic as almost complex
structures and $\phi_0$, $\phi_1$ can be connected
by a family $\phi_t$, $t\in [0,1]$ of exhausting smooth functions
which satisfy
conditions ii)-iii) above and whose nondegenerate critical points
have subcritical index for all $t\in[0,1]$.
Then $(J_0,\,
\phi_0)$, $(J_1,\, \phi_1)$ are homotopic by a finite type Stein deformation.}
This is the finite type version of Theorem 3.4 in~\cite{Eli-psh}. It
holds because the
latter is proved by \emph{h}-cobordism methods within the
plurisubharmonic category~\cite[Lemma
3.6]{Eli-psh}, and these preserve the
finite type condition.
Given a Stein manifold of finite type $(V,\, J_V,\, \phi_V)$, let us
fix $c_0$ such that all $c\ge c_0$ are regular values of $\phi_V$. Let
$\omega_V=-d(d\phi_V\circ J_V)$. The
big Stein domains $V_c=\{\phi_V\le c\}$, $c\ge c_0$ are diffeomorphic and
endowed with exact symplectic forms $\omega_c = \omega_V|_{V_c}$
for which $\partial V_c$ is of restricted contact type.
The Floer homology groups $FH_*(V_c)$ are well defined
and, by invariance under deformation of the symplectic forms, they are
isomorphic~\cite[3.7]{Cieliebak handles}. We define
$FH_*(V,\, J_V,\, \phi_V)$ as $FH_*(V_c)$ for $c$ large enough.
Let now $(J_t,\, \phi_t)$, $t\in [0,1]$ be a finite type deformation on a Stein
manifold $V$. Given $c_0$ such that all $c\ge c_0$ are regular values
of $\phi_t$ for all $t\in [0,1]$, the big Stein domains $V_{t,c}=\{\phi_t\le
c\}$, $t\in [0,1]$, $c\ge c_0$ are all diffeomorphic and endowed with
exact symplectic forms $\omega_{t,c}=-d(d\phi_t\circ J_t)$. The
boundaries $\partial V_{t,c}$ are of restricted contact type and we
can again apply Lemma 3.7 in~\cite{Cieliebak handles} to conclude that
the Floer homology groups $FH_*(V_{t,c})$ are naturally isomorphic. In
particular $FH_*(V,\, J_0,\, \phi_0)\simeq FH_*(V,\, J_1,\, \phi_1)$.
\noindent {\it \small Proof of Theorem C.}
Cieliebak proved in~\cite{C} that, given a subcritical Stein manifold of
finite type $(\widehat N,\, J, \, \phi)$, there exists a
Stein manifold of finite type $(V,\,J_V,\,\phi_V)$ and a diffeomorphism $F:V\times
\mathbb{C}\longrightarrow \widehat N$ such that: i) $J_V\times i$ and $F^*J$ are
homotopic as almost complex structures; ii) $\phi_V+|z|^2$ and
$F^*\phi$ are subcritical Morse functions
with isotopic big Stein
domains $W_c$, respectively $W'_c$.
The functions
$\phi_V+|z|^2$ and $F^*\phi$ are associated to handle decompositions
$H_1\cup \ldots \cup H_\ell$, $H'_1\cup \ldots \cup H'_\ell$ of $W_c$
and $W'_c$ such that $\tx{index}(H_s)=\tx{index}(H'_s)$, $1\le s \le
\ell$ and the following property holds.
Given $f\in\tx{Diff}_0(V\times \mathbb{C})$
such that $f(W_c)=W'_c$,
the attaching maps of $f\circ H_s$, $H'_s$, $2\le s\le
\ell$ are isotopic in
$H'_1\cup\ldots\cup H'_{s-1}$ (the condition is independent of
$f$). Two such handle
decompositions are called {\it isotopic}.
Because the handle decomposition determines the isotopy class of the
associated Morse function and the two handle decompositions above are
isotopic, we infer that the
functions $\phi_V+|z|^2$ and $F^*\phi$ are isotopic, i.e.
there exists $f\in \tx{Diff}_0(V\times
\mathbb{C})$ with $(F^*\phi)\circ f = \phi_V+|z|^2$.
We can therefore apply the previous
theorem and conclude that
the Stein structures $(J_V\times i,\, \phi_V+|z|^2)$ and
$(F^*J,\, F^*\phi)$ can be connected by a finite type Stein
deformation. It follows that
$FH_*(V\times \mathbb{C},\, F^*J,\,
F^*\phi) \simeq FH_*(V\times \mathbb{C}, \, J_V\times i,\, \phi_V+|z|^2)$.
By Proposition~\ref{prop:produc with C}, the latter homology group
vanishes.
On the other hand, by invariance of Floer homology
under symplectomorphism we have $FH_*(\widehat N,\, J,\,
\phi)\simeq FH_*(V\times \mathbb{C},\, F^*J,\, F^*\phi)$.
{$\square$}
\subsection{Symplectic geometry in product manifolds}
\noindent {\it \small Proof of Theorem B.}
The statement follows readily from the existence of the
commutative diagram given
by the second part of
Theorem A, taking into account the isomorphism
$H^{2m}(M, \, \partial
M) \otimes H^{2n}(N, \, \partial N) \stackrel \sim
\longrightarrow H^{2m + 2n}(M
\times N, \, \partial (M \times N))$, $2m=\dim M$,
$2n=\dim N$. The latter is
given by the usual K\"unneth formula in singular cohomology with
coefficients in a field.
{$\square$}
\noindent {\bf Remark.} Theorem B should be interpreted as a
stability property for the SAWC condition.
\noindent {\bf Remark.} Floer, Hofer and Viterbo~\cite{FHV} proved
the Weinstein
conjecture in a product
$P\times \mathbb{C}^\ell$, $\ell\ge 1$ with $P$ a closed symplectically
aspherical manifold.
The Weinstein conjecture for a product $\widehat M \times \widehat N$
with $\widehat N$ subcritical Stein and $\widehat M$ the completion of
a restricted contact type manifold has been
proved by Frauenfelder and Schlenk in~\cite{FrSc}.
\subsection{Symplectic capacities}
The discussion below makes use of field coefficients. Let $\delta
>0$ be small enough.
One defines (see e.g.~\cite{functors1}) the
capacity of a compact symplectic manifold $M$ with contact type
boundary as
\begin{eqnarray*}
c(M) & = & \inf \{ b > 0 \, : \, FH^{m}_{]-\delta,\, b]}(M)
\longrightarrow H^{2m}(M, \, \partial M) \tx{ is zero } \} \\
& = & \sup \{ b > 0 \, : \, FH^{m}_{]-\delta,\, b]}(M)
\longrightarrow H^{2m}(M, \, \partial M) \tx{ is nonzero } \} \ .
\end{eqnarray*}
Here $2m=\dim M$. The next result
is joint work with
A.-L. Biolley, who applies it in her study of symplectic
hyperbolicity~\cite{Anne Laure}.
\begin{prop} \label{prop:symplectic capacities}
Let $M$, $N$ be compact symplectic manifolds with
boundary of restricted contact type. Then
$$c(M\times N) \le 2 \min \big( c(M), \, c(N) \big) \ . $$
\end{prop}
\noindent {\it \small Proof. } The Main Property~(\ref{inclusion fondamentale de complexes})
gives, for $\nu$ large enough and field coefficients,
an arrow {\footnotesize$\bigoplus
_{r+s=m+n}FH^r_{]-\delta,\frac b 2]}(H_\nu)\otimes
FH^s_{]-\delta,\frac b 2]}(K_\nu) \longleftarrow
FH^{m+n}_{]-\delta,b]}(L_\nu)$}.
Moreover, for fixed $b$ and $\nu$ large
enough the Hamiltonians $H_\nu$, $K_\nu$ and $L_\nu$ compute the
corresponding truncated cohomology groups
of $M$, $N$ and $M\times N$.
As in Theorem A.b we get the commutative diagram
\begin{equation*}
{\scriptsize
\xymatrix@R=12pt{\displaystyle{
\bigoplus _{r+s=m+n} FH^r_{]-\delta, \, \frac b 2]}(M)
\otimes FH^s_{]-\delta, \, \frac b 2]}(N)} \ar[d]^{c_*^{b/2}\otimes
c_*^{b/2}} & & \ar[ll]
FH^{m+n}_{]-\delta, \, b]}(M \times N) \ar[d]^{c_*^b} \\
H^{2m}(M,\, \partial M)\otimes H^{2n}(N,\, \partial N) & &
\ar[ll]_{\sim} H^{2m+2n}(M\times N,\, \partial (M\times N))
}
}
\end{equation*}
Let now $b<c(M\times N)$. Then $c_*^b\neq 0$, hence
$c_*^{b/2}\otimes c_*^{b/2}\neq 0$ and therefore $b/2 \le \min\,
\big(c(M),\, c(N)\big)$.
{$\square$}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A fictitious domain finite element method for simulations of fluid-structure interactions: The Navier-Stokes equations coupled with a moving solid}
\author{S\'ebastien Court$^*$, Michel Fourni\'e$^{**}$
\footnote{[email protected]}}
\address{$^{*}$Laboratoire de Math\'ematiques, Campus des C\'ezeaux,\\
Universit\'e Blaise Pascal, B.P. 80026, 63171 Aubi\`ere cedex, France.\\
$^{**}$Institut de Mathématiques de Toulouse, Unit\'e Mixte C.N.R.S. 5219,\\
Universit\'e Paul Sabatier Toulouse III, 118 route de Narbonne, 31062 Toulouse Cedex 9, France.}
\begin{abstract}
The paper extends a stabilized fictitious domain finite element method, initially developed for the Stokes problem, to the incompressible Navier-Stokes equations coupled with a moving solid. This method has the advantage of providing an optimal approximation of the normal stress tensor at the interface. The dynamics of the solid is governed by Newton's laws, and the interface between the fluid and the structure is represented by a level-set which cuts the elements of the mesh. An algorithm is proposed to treat the time evolution of the geometry, and numerical results are presented on a classical benchmark: the motion of a disk falling in a channel.
\end{abstract}
\begin{keyword}
Fluid-structure interactions, Navier-Stokes, Fictitious domain, eXtended Finite Element.
\end{keyword}
\end{frontmatter}
\section{Introduction}
Fluid-structure interaction problems remain a challenge, both for the comprehensive study of such problems and for the development of robust numerical methods (see a review in \cite{Hou&Wang&Layton}). One class of numerical methods is based on meshes that conform to the interface where the physical boundary conditions are imposed \cite{LT, SMSTT0, SST}. As the geometry of the fluid domain changes through time, re-meshing is needed, which is excessively time-consuming, in particular for complex systems. Another class of numerical methods is based on non-conforming meshes with a fictitious domain approach, where the mesh is cut by the boundary. Most of the non-conforming mesh methods are based on immersed boundary methods, where force-equivalent terms are added to the fluid equations in order to represent the fluid-structure interaction \cite{Peskinacta, Mittal&Iaccarino}. Many related numerical methods have been developed, in particular the popular distributed Lagrange multiplier method, introduced for rigid bodies moving in an incompressible flow \cite{Glowinski}. In this method, the fluid domain is extended in order to cover the rigid domain, where the fluid velocity is required to be equal to the rigid body velocity.\\
More recently, the eXtended Finite Element Method, introduced by Mo\"{e}s, Dolbow and Belytschko in \cite{MoesD} (see a review of such methods in \cite{reviewXfem}), has been adapted to fluid-structure interaction problems in \cite{MoesB, SukumarC, Gerstenberger2008, Choi2010}. The idea is similar to the fictitious domain / Lagrange multiplier method aforementioned, but the fluid velocity is no longer extended inside the structure domain, and its value, given by the structure velocity, is enforced by a Lagrange multiplier only on the fluid-structure interface. One thus gets rid of unnecessary fluid unknowns. Besides, one easily recovers the normal trace of the Cauchy stress tensor on the interface. We note that this method was originally developed for problems in structural mechanics, mostly in the context of cracked domains; see for example \cite{HaslR, MoesG, Stazi, SukumarM, Stolarska}. The specificity of the method is that it combines a level-set representation of the geometry of the crack with an enrichment of a finite element space by singular and discontinuous functions.\\
In the context of fluid-structure interactions, the difficulty in applying such techniques lies in the choice of the Lagrange multiplier space used to take the interface into account, which is nontrivial because the interface cuts the mesh (see \cite{Bechet2009} for instance).
In particular, the natural mesh given by the points of intersection of the interface with the global mesh cannot be used directly. An algorithm to construct a multiplier space satisfying the inf-sup condition is developed in \cite{Bechet2009}, but its implementation can be difficult in practice.
The method proposed in the present paper tackles this difficulty by using a stabilization technique proposed in \cite{HaslR}. This method was adapted to contact problems in elastostatics in \cite{HildR} and more recently to the Stokes problem in \cite{CourtFournieLozinski}.
An important feature of this method (based on the eXtended Finite Element Method approach, similarly to \cite{Gerstenberger2008, Choi2010}) is that the Lagrange multiplier is identified with the normal trace of the Cauchy stress tensor $\sigma(\bu,p)\bn$ at the interface.
Moreover, it is possible to obtain a good numerical approximation of $\sigma(\bu,p)\bn$ (the proof is given in \cite{CourtFournieLozinski} for the Stokes problem). This property is crucial in fluid-structure interactions, since this quantity gives the force exerted by the viscous fluid on the structure. In the present paper, we propose to extend this method to the Navier-Stokes equations coupled with a moving solid. Note that alternative methods based on Nitsche's work \cite{Nitsche} (such as \cite{Burman1, Burman3} in the context of the Poisson problem and \cite{Massing} in the context of the Stokes problem) do not introduce the Lagrange multiplier and thus do not necessarily provide a good numerical approximation of this force. Our method, based on boundary forces, is particularly interesting for flow control around a structure. The control function can be localized on the boundary of the structure, where we impose its local deformation. In order to perform direct numerical simulations of such a control, efficient tools based on accurate computations on the interface must be developed. The present approach is one brick in this research topic, where recent developments towards stabilized Navier-Stokes equations have been proposed (as in \cite{JPR}).\\
The outline of the paper is as follows. The continuous fluid-structure interaction problem is given in Section~\ref{section2}, and the weak formulation, with the introduction of a Lagrange multiplier for imposing the boundary condition at the interface, is given in Section~\ref{subsection2}. Next, in Section~\ref{section3}, the fictitious domain method is recalled, with the finite element discretization (Section~\ref{fem}) and the time discretization (Section~\ref{time}). Section~\ref{section4} is devoted to numerical tests and validation on a benchmark corresponding to the fall of a disk in a channel. The efficiency of the method is presented before the conclusion.
\section{The model} \label{section2}
\subsection{Fluid-structure interactions}
\hspace*{0.5cm} We consider a moving solid which occupies {a time-dependent domain denoted by $\mathcal{S}(t)$}. The remaining domain $\mathcal{F}(t) = \mathcal{O} \setminus \overline{\mathcal{S}(t)}$ corresponds to the fluid flow.
\begin{figure}
\caption{Decomposition of the solid movement.}
\label{fig1}
\end{figure}
\FloatBarrier
The displacement of a rigid solid can be given by the knowledge of ${\bf h}(t)$, namely the position of its gravity center, and $\mathbf{R}(t)$ its rotation given by $\displaystyle \left( \begin{array}{cc} c& -s \\ s& c\\ \end{array} \right)$ for $\displaystyle c=\cos(\theta(t))$, $s=\sin(\theta(t))$, where $\theta(t)$ is the rotation angle of the solid (see Figure~\ref{fig1}).
Then at time $t$ the domain occupied by the structure is given by
\begin{eqnarray*}
\displaystyle \mathcal{S}(t) & = & {\bf h}(t) + \mathbf{R}(t)\mathcal{S}(0).
\end{eqnarray*}
{\it Remark:} This formulation can be extended in order to consider general deformations of the structure. Then we would have to define a mapping $X^*(\cdot,t)$ which corresponds to the deformation of the solid in its own frame of reference. Then, $\mathcal{S}(t) = X_S(\mathcal{S}(0),t)$ where $X_S(\mathrm{y},t) = {\bf h}(t) + \mathbf{R}(t)X^*(\mathrm{y},t)$, for $\mathrm{y} \in \mathcal{S}(0)$.\\
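As an illustration, the rigid displacement $\mathrm{x} = {\bf h}(t) + \mathbf{R}(t)\mathrm{y}$ above can be applied to a set of reference points with a few lines of code. This is a minimal sketch; the function name and array conventions are ours, not part of the paper:

```python
import numpy as np

def rigid_motion(points, h, theta):
    """Map reference points y in S(0) to S(t) via x = h(t) + R(theta(t)) y.

    points : (n, 2) array of reference coordinates
    h      : (2,) position of the center of gravity at time t
    theta  : rotation angle at time t
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    # Row-vector convention: x_i = h + R y_i  <=>  points @ R.T + h
    return h + points @ R.T

# A quarter turn about the origin followed by a translation:
y = np.array([[1.0, 0.0]])
x = rigid_motion(y, h=np.array([2.0, 0.0]), theta=np.pi / 2)
# x is approximately [[2.0, 1.0]]
```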
The velocity of the incompressible viscous fluid of density $\rho_f$ is denoted by ${\bf u}$, the pressure by $p$ and $\nu$ is the dynamic viscosity. We denote by $\bn$ the outward unit normal vector to $\p \mathcal{F}$ (the boundary of $\mathcal{F}$), and the normal trace on the interface $\Gamma = \partial \mathcal{S}(t)$ of the Cauchy stress tensor is given by
\begin{eqnarray*}
\sigma(\bu,p)\bn = 2\nu D(\bu){\bf n} -p\bn, & & \text{ with } D(\bu) = \frac{1}{2} \left(\nabla \bu + \nabla \bu^T \right).
\end{eqnarray*}
When gravity forces are considered (we denote by ${\bf g}$ the gravity field), the fluid flow is modeled by the incompressible Navier-Stokes equations
\begin{equation}
\label{eq1}
\left \{
\begin{array}{llr}
\displaystyle \rho_f\left ( \frac{\partial {\bf u}}{\partial t} + ({\bf u}.\nabla){\bf u} \right ) - \nu \Delta {\bf u} +\nabla p = \rho_f {\bf g}, & \mathrm{x} \in \mathcal{F}(t), & t \in (0,T),\\
\displaystyle \mbox{div}({\bf u}) =0, & \mathrm{x} \in \mathcal{F}(t), & t \in (0,T),\\
\displaystyle {\bf u} = 0, & \mathrm{x} \in \partial \mathcal{O} , & t \in (0,T),\\
\end{array}
\right .
\end{equation}
and Newton's laws are considered for the dynamics of the solid
\begin{equation}
\label{eq2}
\left \{
\begin{array}{l}
\displaystyle m_s{\bf h}''(t) = - \int_{\partial \mathcal{S}(t)} {\sigma({\bf u},p){\bf n}} d \Gamma -m_s {\bf g},\\
\displaystyle I\theta''(t) = - \int_{\partial \mathcal{S}(t)}({\bf x}-{\bf h}(t))^{\bot}\cdot {\sigma({\bu},p){\bf n}} d \Gamma,
\end{array}
\right .
\end{equation}
where $m_s$ is the mass of the solid, and $I$ is its moment of inertia.\\
At the interface $\partial \mathcal{S}(t)$, for the coupling between fluid and structure, we impose the continuity of the velocity
\begin{equation}
\label{eq3}
\displaystyle {\bf u}({\bf x},t) = {\bf h}'(t) + \theta'(t) ({\bf x}-{\bf h}(t))^{\bot}={\bf u_{\Gamma}}, \ \ \ {\bf x} \in \partial \mathcal{S}(t), \ \ \ t \in (0,T).
\end{equation}
The coupled system (\ref{eq1})--(\ref{eq3}) has as unknowns ${\bf u}$, $p$, ${\bf h}(t)$ and the angular velocity $\omega(t) = \theta'(t)$ (a scalar function in 2D).
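Once the right-hand sides of (\ref{eq2}) are available (in practice from the integrated Lagrange multiplier, see below), the rigid-body unknowns can be advanced in time. The following is only a generic explicit sketch of such an update, not the scheme of Section~\ref{time}; all names are ours:

```python
import numpy as np

def advance_rigid_body(h, hp, theta, thetap, F, T, m_s, I, dt):
    """One explicit step for Newton's laws
         m_s h'' = F,   I theta'' = T,
    where F and T already include gravity and the integrated
    hydrodynamic load (the paper's actual scheme may differ).
    """
    hp_new = hp + dt * F / m_s           # translational velocity
    h_new = h + dt * hp_new              # position (semi-implicit update)
    thetap_new = thetap + dt * T / I     # angular velocity
    theta_new = theta + dt * thetap_new  # rotation angle
    return h_new, hp_new, theta_new, thetap_new

# Free fall from rest under gravity only (F = -m_s g e_y, T = 0):
m_s, I, dt = 1.0, 1.0, 0.01
h, hp = np.zeros(2), np.zeros(2)
theta, thetap = 0.0, 0.0
F = np.array([0.0, -9.81 * m_s])
h, hp, theta, thetap = advance_rigid_body(h, hp, theta, thetap, F, 0.0, m_s, I, dt)
```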
\subsection{Weak formulation of the problem with stabilization terms} \label{subsection2}
We consider the coupled system (\ref{eq1})-(\ref{eq3}) and we assume that the boundary condition imposed at the interface $\Gamma = \partial \mathcal{S}(t)$ is sufficiently regular to make sense, and we introduce the following functional spaces (based on the classical Sobolev spaces $\L^2(\mathcal{F})$, $\mathbf{H}^1(\mathcal{F})$, $\mathbf{H}^{1/2}(\Gamma)$ and $\mathbf{H}^{-1/2}(\Gamma)$, see \cite{Evans} for instance)
\begin{eqnarray*}
\begin{array}{lcl}
\mathbf{V} &= &\left\{ \bv\in \mathbf{H}^1(\mathcal{F}) \mid \bv=0 \text{ on } \p \mathcal{O} \right\},\\
Q &=& \L^2_0(\mathcal{F}) = \left\{p \in \L^2(\mathcal{F}) \mid \displaystyle \int_{ \mathcal{F}}p\ \d \mathcal{F} = 0 \right\},\\
\mathbf{W} &=& \mathbf{H}^{-1/2}(\Gamma) = \left(\mathbf{H}^{1/2}(\Gamma) \right)'.\\
\end{array}
\end{eqnarray*}
Since we only consider boundary conditions of Dirichlet type, we require the pressure $p$ to have zero average (this condition is taken into account in $Q$). The variational formulation is obtained in three steps:
\begin{itemize}
\item[Step 1 --] Classical formulation of the Navier-Stokes and structure equations (in the formulation $\gamma$ and $\blambda$ are equal to $0$);
\item[Step 2 --] Introduction of Lagrange multiplier $\blambda$ in order to take into account the Dirichlet condition at the interface $\Gamma$ (in the formulation only $\gamma$ is equal to $0$);
\item[Step 3 --] Introduction of stabilization terms with a parameter $\gamma$.
\end{itemize}
This new unknown $\blambda$ plays a critical role, because it is equal to the
normal trace of the Cauchy stress tensor $\sigma(\bu,p)\bn$ (this equality is described in \cite{Gunzburger}). The stabilization terms are associated with the constant parameter $\gamma$ (chosen sufficiently small). The variational problem that we consider is the following:
\begin{eqnarray*}
\hspace*{-15pt} & & \text{Find $({\bf u},p,{\bf \lambda},{\bf h}',{\bf h},\theta',\theta) \in \mathbf{V} \times Q \times \mathbf{W} \times \R^2 \times \R^2 \times \R \times \R$ such that} \\
\hspace*{-15pt} & & \left\{ \begin{array} {ll}
\displaystyle \int_{\mathcal{F} } \rho_f \frac{\partial {\bf u}}{\partial t}\cdot{\bf v} \mathrm{d}\mathcal{F} + \mathcal{A}(({\bf u},p,{\blambda});{\bf v}) + \int_{\mathcal{F}} \rho_f [({\bf u}\cdot\nabla){\bf u}]\cdot {\bf v} \mathrm{d}\mathcal{F} = \int_{\mathcal{F} } \rho_f {\bf g} \cdot {\bf v}\mathrm{d}\mathcal{F},
& \forall {\bf v} \in \mathbf{V}, \\
\mathcal{B}(({\bf u},p,\boldsymbol{\lambda});q) = 0, & \forall q \in Q, \\
\mathcal{C}(({\bf u},p,\boldsymbol{\lambda});{\bmu}) = \mathcal{G}({\bmu}), \quad & \forall {\bf \bmu} \in \mathbf{W},\\
\displaystyle m_s{\bf h}''(t) = - \int_{\partial \mathcal{S}(t)} {\blambda} d \Gamma -m_s {\bf g},&\\
\displaystyle I\theta''(t) = - \int_{\partial \mathcal{S}(t)}({\bf x}-{\bf h}(t))^{\bot}\cdot \boldsymbol{\lambda} d \Gamma,&
\end{array} \right. \label{FVaugmented}
\end{eqnarray*}
where
\begin{eqnarray*}
\mathcal{A}(({\bf u},p,\boldsymbol{\lambda});{\bf v}) & = & 2\nu\int_{\mathcal{F}}D({\bf u}):D({\bf v})\mathrm{d}\mathcal{F} - \int_{\mathcal{F}}p\div \ {\bf v}\mathrm{d}\mathcal{F} - \int_{\Gamma} \boldsymbol{\lambda}\cdot {\bf v}\mathrm{d}\Gamma \\
& &\hspace*{-2.5cm} -4\nu^2\gamma \int_{\Gamma}\left( D({\bf u})\bn \right)\cdot \left( D({\bf v})\bn \right)\mathrm{d}\Gamma +2\nu \gamma \int_{\Gamma}p \left(D({\bf v})\bn\cdot \bn \right)\mathrm{d}\Gamma +2\nu \gamma \int_{\Gamma} \boldsymbol{\lambda} \cdot \left(D({\bf v})\bn\right)\mathrm{d}\Gamma , \\
\mathcal{B}(({\bf u},p,\boldsymbol{\lambda});q) & = & - \int_{\mathcal{F}}q\div\ {\bf u}\mathrm{d}\mathcal{F} +2\nu \gamma \int_{\Gamma}q\left(D({\bf u})\bn\cdot \bn \right)\mathrm{d}\Gamma -\gamma \int_{\Gamma}pq \mathrm{d}\Gamma - \gamma \int_{\Gamma} q\boldsymbol{\lambda} \cdot \bn\mathrm{d}\Gamma , \\
\mathcal{C}(({\bf u},p,\boldsymbol{\lambda});\boldsymbol{\mu}) & = & -\int_{\Gamma} \boldsymbol{\mu} \cdot {\bf u}\mathrm{d}\Gamma +2\nu \gamma \int_{\Gamma}\boldsymbol{\mu} \cdot (D({\bf u})\bn)\mathrm{d}\Gamma -\gamma \int_{\Gamma}p(\boldsymbol{\mu}\cdot \bn)\mathrm{d}\Gamma - \gamma \int_{\Gamma} \boldsymbol{\lambda} \cdot \boldsymbol{\mu} \mathrm{d}\Gamma,\\
\hspace*{-0.8cm} \mathcal{G}(\boldsymbol{\mu}) &= & -\int_{\Gamma} \boldsymbol{\mu} \cdot {\bf u_{\Gamma}} \mathrm{d}\Gamma = - \int_{\Gamma} \boldsymbol{\mu} \cdot ({\bf h}'(t) + \theta'(t) ({\bf x}-{\bf h}(t))^{\bot})\mathrm{d}\Gamma.\\
\end{eqnarray*}
{\it Remark:} The formulation can be justified by the introduction of an extended Lagrangian - \`a la Barbosa-Hughes, see \cite{Barbosa1} - a stationary point of which is a weak solution of the problem. The first-order optimality conditions of this Lagrangian force $\blambda$ to reach the desired value $\sigma(\bu,p)\bn$.
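More precisely, a sketch of such an extended Lagrangian (the functional $\mathcal{J}$ below denotes the energy of the unconstrained problem; this notation is ours and is not used elsewhere in the paper) is
\begin{eqnarray*}
\mathcal{L}(\bu,p,\blambda) & = & \mathcal{J}(\bu,p) - \int_{\Gamma} \blambda \cdot (\bu - \bu_{\Gamma})\mathrm{d}\Gamma - \frac{\gamma}{2}\int_{\Gamma} \left| \blambda - \sigma(\bu,p)\bn \right|^2 \mathrm{d}\Gamma.
\end{eqnarray*}
Differentiating with respect to $\blambda$ in the direction $\bmu$, and using $\sigma(\bu,p)\bn = 2\nu D(\bu)\bn - p\bn$, gives
\begin{eqnarray*}
-\int_{\Gamma} \bmu \cdot (\bu - \bu_{\Gamma})\mathrm{d}\Gamma + 2\nu \gamma \int_{\Gamma}\bmu \cdot (D(\bu)\bn)\mathrm{d}\Gamma - \gamma \int_{\Gamma}p(\bmu\cdot \bn)\mathrm{d}\Gamma - \gamma \int_{\Gamma} \blambda \cdot \bmu \, \mathrm{d}\Gamma & = & 0,
\end{eqnarray*}
which is exactly the third equation $\mathcal{C}((\bu,p,\blambda);\bmu) = \mathcal{G}(\bmu)$ of the variational problem; at a stationary point the quadratic penalization forces $\blambda = \sigma(\bu,p)\bn$ on $\Gamma$.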
\section{Fictitious domain approach}\label{section3}
We refer to the article \cite{CourtFournieLozinski} for the details of the fictitious domain approach we consider here. In the following, we recall the method used for the present work.
\subsection{Finite element discretization} \label{fem}
The fictitious domain for the fluid is considered on the whole domain $\mathcal{O}$. Let us introduce three discrete finite element spaces, {$\tilde{\mathbf{V}}^h \subset \mathbf{H}^1(\mathcal{O})$, $\tilde{Q}^h \subset \L^2_0(\mathcal{O})$ and $\tilde{\mathbf{W}}^h \subset \mathbf{L}^2(\mathcal{O})$}. Notice that the spaces $\mathbf{V},\ Q,\ \mathbf{W}$ introduced to define the weak formulation are included in these spaces defined all over the domain $\mathcal{O} = \mathcal{F} \cup \mathcal{S}$. In practice, $\mathcal{O}$ is a simple domain, so that the construction of a unique mesh for all spaces is straightforward (the interface between the fluid and the structure is not considered at this stage). Let us consider for instance a rectangular domain on which a structured uniform mesh $\mathcal{T}^h$ can be constructed (see Figure~\ref{figMesh}). Classical finite element discretizations can be defined on the spaces {$\tilde{\mathbf{V}}^h$, $\tilde{Q}^h$ and $\tilde{\mathbf{W}}^h$}. For $\tilde{\mathbf{V}}^h$, let us consider for instance a subspace of the continuous functions $C(\overline{\mathcal{O}})$ defined by
\begin{eqnarray*}
\tilde{\mathbf{V}}^h & = & \left\{\bv^h \in C(\overline{\mathcal{O}})\mid \bv^h_{\left| \p \mathcal{O}\right.} = 0, \ \bv^h_{\left| T\right.} \in P(T), \ \forall T \in \mathcal{T}^h \right\}, \label{defvtilde}
\end{eqnarray*}
where $P(T)$ is a finite-dimensional space of regular functions containing $P_k(T)$, the space of polynomials of degree at most $k$ ($k \geq 1$). For more details, see \cite{Ern} for instance. The mesh size is $\displaystyle h = \max_{T\in \mathcal{T}^h} h_T$, where $h_T$ is the diameter of $T$. In order to split the fluid domain and the structure domain, we define spaces on the fluid part $\mathcal{F}$ and on the interface $\Gamma$ only, as
\begin{eqnarray*}
\mathbf{V}^h := \tilde{\mathbf{V}}^h_{\left| \mathcal{F} \right.}, \quad Q^h := \tilde{Q}^h_{\left|\mathcal{F}\right.}, \quad \mathbf{W}^h := \tilde{\mathbf{W}}^h_{\left| \Gamma \right.}.
\end{eqnarray*}
\begin{center}
\begin{figure}
\caption{Illustration of the elements cut with respect to the level-set.}
\label{figsupercut}
\end{figure}
\end{center}
\FloatBarrier
Notice that $\mathbf{V}^h $, $Q^h$, $\mathbf{W}^h$ are respective natural discretizations of $\mathbf{V}$, $Q$ and $\mathbf{W}$. This corresponds to cutting the basis functions of the spaces $\tilde{\mathbf{V}}^h$, $\tilde{Q}^h$ and $\tilde{\mathbf{W}}^h$, as shown in Figure~\ref{figsupercut}. This approach is equivalent to the eXtended Finite Element Method, as proposed in \cite{Choi2010} or \cite{Gerstenberger2008}, where the standard finite element basis functions are multiplied by Heaviside functions ($H({\bf x}) = 1$ for ${\bf x} \in \mathcal{F}$ and $H({\bf x})=0$ for ${\bf x}\in \mathcal{O} \setminus \mathcal{F}$), and the products are substituted in the variational formulation of the problem. Thus the degrees of freedom inside the fluid domain $\mathcal{F}$ are used in the same way as in the standard finite element method, whereas the degrees of freedom in the solid domain $\mathcal{S}$ at the vertices of the elements cut by the interface (the so-called virtual degrees of freedom) do not define the field variable at these nodes, but they are necessary to define the fields on $\mathcal{F}$ and to compute the integrals over $\mathcal{F}$. The remaining degrees of freedom, corresponding to the basis functions with support completely outside of the fluid, are eliminated (see Figure~\ref{figMesh}). We refer to the aforementioned papers for more details.
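The selection of degrees of freedom described above can be sketched as follows. This is an illustrative Python fragment, not the Getfem++ implementation used in the paper; the level-set sign convention ($\varphi < 0$ in the fluid) and the mesh data structures are assumptions.

```python
import numpy as np

def classify_dofs(nodes, elements, level_set):
    """Classify finite element degrees of freedom with respect to a level-set
    (phi < 0: fluid, phi >= 0: solid), in the spirit of the Heaviside-multiplied
    (XFEM-type) basis described above.  Returns three sets of node indices:
      - 'fluid': fluid-side nodes of elements meeting the fluid,
      - 'virtual': solid-side nodes of elements cut by the interface
        (kept to represent the fields on the fluid part of cut cells),
      - 'eliminated': nodes whose basis support lies entirely outside the fluid."""
    phi = np.array([level_set(x) for x in nodes])
    fluid, virtual = set(), set()
    for elem in elements:
        vals = phi[list(elem)]
        if np.all(vals < 0):            # element fully in the fluid
            fluid.update(elem)
        elif np.any(vals < 0):          # element cut by the interface
            for i in elem:
                (fluid if phi[i] < 0 else virtual).add(i)
    eliminated = set(range(len(nodes))) - fluid - virtual
    return fluid, virtual, eliminated
```

On a one-dimensional toy mesh this reproduces the three categories of Figure~\ref{figMesh}: interior fluid nodes, virtual nodes on cut elements, and eliminated nodes.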
\begin{figure}
\caption{Structured uniform mesh on the whole domain $\mathcal{O}$; the degrees of freedom with support completely outside the fluid are eliminated.}
\label{figMesh}
\end{figure}
\FloatBarrier
The discrete problem consists in finding $(\bu^h,p^h,\blambda^h,{\bf h}',{\bf h},\theta', \theta) \in \mathbf{V}^h \times Q^h \times \mathbf{W}^h \times \R^2 \times \R^2 \times \R \times \R$ such that
\begin{eqnarray*}
\left\{ \begin{array} {lll}
\displaystyle \int_{\mathcal{F} } \rho_f \frac{\partial {\bf u}^h}{\partial t}\cdot{\bf v^h} \mathrm{d}\mathcal{F} + \mathcal{A}(({\bf u}^h,p^h,{\blambda}^h);{\bf v}^h) +\int_{\mathcal{F}} \rho_f [({\bf u}^h\cdot\nabla){\bf u}^h]\cdot{\bf v}^h \mathrm{d}\mathcal{F} = \int_{\mathcal{F} } \rho_f {\bf g} \cdot{\bf v}^h\mathrm{d}\mathcal{F},\\
& \hspace*{-2cm} \forall {\bf v}^h \in \mathbf{V}^h, \\ &\\
\mathcal{B}(({\bf u}^h,p^h,{\blambda}^h);q^h) = 0, & \hspace*{-2cm} \forall q^h \in Q^h, \\
&\\
\mathcal{C}(({\bf u}^h,p^h,{\blambda}^h);{\bmu}^h) = \mathcal{G}({\bmu}^h), & \hspace*{-2cm} \forall {\bmu}^h \in \mathbf{W}^h,\\
&\\
\displaystyle m_s{\bf h}''(t) = - \int_{\partial \mathcal{S}(t)} {\blambda}^h \d \Gamma -m_s {\bf g},\qquad
\displaystyle I\theta''(t) = - \int_{\partial \mathcal{S}(t)}({\bf x}-{\bf h}(t))^{\bot}\cdot {\blambda}^h \d \Gamma.&\\
\end{array} \right. \label{FVhaugmented}
\end{eqnarray*}
This is a system of nonlinear differential algebraic equations which can be written in compact form. We denote by $\boldsymbol{U}$, $\boldsymbol{P}$ and $\boldsymbol{\Lambda}$ the respective degrees of freedom of $\bu^h$, $p^h$ and $\blambda^h$. After standard finite element discretization of the following bilinear and linear forms
\begin{eqnarray*}
\mathcal{M}_{{\bf u}{\bf u}} : ({\bf u},{\bf v}) & \longmapsto & \int_{\mathcal{F}}\rho_f{\bf u}\cdot{\bf v}\mathrm{d}\mathcal{F}, \qquad
\mathcal{M}_{\blambda} : {\blambda} \longmapsto -\int_{\partial \mathcal{S}(t)}{\blambda} \mathrm{d}\Gamma,\\
\mathcal{A}_{{\bf u}{\bf u}} : ({\bf u},{\bf v}) & \longmapsto & 2\nu\int_{\mathcal{F}}D({\bf u}):D({\bf v})\mathrm{d}\mathcal{F} - 4\nu^2\gamma \int_{\Gamma}\left( D({\bf u})\bn \right)\cdot \left( D({\bf v})\bn\right)\mathrm{d}\Gamma , \\
\mathcal{A}_{{\bf u}p} : ({\bf v},p) & \longmapsto & - \int_{\mathcal{F}}p\div \ {\bf v} \mathrm{d}\mathcal{F} + 2\nu \gamma \int_{\Gamma}p \left(D({\bf v})\bn\cdot \bn \right)\mathrm{d}\Gamma , \\
\mathcal{A}_{{\bf u}{\blambda}} : ({\bf u},{\blambda}) & \longmapsto & - \int_{\Gamma} {\blambda}\cdot {\bf v}\mathrm{d}\Gamma + 2\nu \gamma \int_{\Gamma} {\blambda} \cdot \left(D({\bf v})\bn\right)\mathrm{d}\Gamma , \\
\mathcal{A}_{pp} : (p,q) & \longmapsto & -\gamma \int_{\Gamma}pq \mathrm{d}\Gamma , \hspace*{0.5cm}
\mathcal{A}_{p{\blambda}} : (q,{\blambda}) \longmapsto - \gamma \int_{\Gamma} q{\blambda} \cdot \bn\mathrm{d}\Gamma , \\
\mathcal{A}_{{\blambda}{\blambda}} : ({\blambda},{\bmu}) & \longmapsto & -\gamma \int_{\Gamma} {\blambda} \cdot {\bmu} \mathrm{d}\Gamma,
\end{eqnarray*}
we define the matrix $M_{{\bf u}{\bf u}}$ from $\mathcal{M}_{{\bf u}{\bf u}}$, and so on, the vector $\boldsymbol{G}$ from $\mathcal{G}$, the vector ${\bf F}$ from the gravity forces $\rho_f {\bf g}$, the matrix $M_{\blambda}$ obtained by integration over $\Gamma$ of the $\mathbf{W}^h$ basis functions, and the velocity-dependent matrix $N(\boldsymbol{U}(t))$, whose product $N(\boldsymbol{U}(t))\boldsymbol{U}(t)$ corresponds to the nonlinear convective term $\displaystyle \int_{\mathcal{F}} \rho_f[({\bf u} \cdot \nabla){\bf u}] \cdot{\bf v} \mathrm{d}\mathcal{F}$. The matrix formulation then reads
\begin{eqnarray}
M_{{\bf u}{\bf u}} \frac{\mbox{d} \boldsymbol{U(t)}}{\mbox{d} t} +A_{{\bf u}{\bf u}} \boldsymbol{U}(t) + N( \boldsymbol{U}(t)) \boldsymbol{U}(t) + A_{{\bf u}p}\boldsymbol{P}(t) + A_{{\bf u}{\bf \lambda}}\boldsymbol{\Lambda}(t) = {\bf F}, \label{M1} \\
A^T_{{\bf u}p} \boldsymbol{U}(t) + A_{pp} \boldsymbol{P}(t) + A_{p{\bf \lambda}}\boldsymbol{\Lambda}(t) = 0, \label{M2} \\
A^T_{{\bf u}{\bf \lambda}} \boldsymbol{U}(t) + A^T_{p{\bf \lambda}} \boldsymbol{P}(t) + A_{{\bf \lambda} {\bf \lambda}}\boldsymbol{\Lambda}(t) = \boldsymbol{G},\label{M3}\\
\displaystyle m_s{\bf h}''(t) = M_{ \bf \lambda} \boldsymbol{\Lambda}(t) -m_s {\bf g},&&\label{M4}\\
\displaystyle I\theta''(t) = M_{ \bf \lambda}
\left[ ({\bf x}-{\bf h}(t))^{\bot} \cdot \boldsymbol{\Lambda}(t)\right]. &&\label{M5}
\end{eqnarray}
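For the stationary, linearized part of the system, the saddle-point structure of (\ref{M1})--(\ref{M3}) can be sketched as follows. This is a schematic Python fragment: the block matrices are placeholders, the convective and inertial terms are omitted, and the direct sparse solve is an assumption (the actual solver used for the simulations is not described here).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_stokes_block(Auu, Aup, Aul, App, Apl, All, F, G):
    """Assemble and solve the saddle-point system
        [ Auu   Aup   Aul ] [U]   [F]
        [ Aup'  App   Apl ] [P] = [0]
        [ Aul'  Apl'  All ] [L]   [G]
    mirroring the block structure of (M1)-(M3) above (stationary,
    linearized case, with the time derivative and convection dropped)."""
    K = sp.bmat([[Auu,   Aup,   Aul],
                 [Aup.T, App,   Apl],
                 [Aul.T, Apl.T, All]], format="csr")
    rhs = np.concatenate([F, np.zeros(App.shape[0]), G])
    sol = spla.spsolve(K, rhs)
    nu_, np_ = Auu.shape[0], App.shape[0]
    return sol[:nu_], sol[nu_:nu_ + np_], sol[nu_ + np_:]
```

The transposed off-diagonal blocks make explicit that (\ref{M2}) and (\ref{M3}) involve the adjoints $A^T_{{\bf u}p}$, $A^T_{{\bf u}{\blambda}}$ and $A^T_{p{\blambda}}$ of the coupling matrices of (\ref{M1}).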
At the interface $\Gamma = \partial \mathcal{S}(t)$, represented by a level-set function which cuts the global mesh, the coupling between the fluid and the structure is imposed by a Dirichlet condition whose elements are determined through the computation of $\sigma(\bu,p){\bf n}$. The main advantage of our numerical method, mathematically justified in \cite{CourtFournieLozinski}, is that it returns an optimal approximation $\boldsymbol{\Lambda}(t)$ of $\sigma(\bu,p){\bf n}$ at the interface. Getting a good approximation of this quantity is crucial for the dynamics of the system.
\subsection{Time discretization and treatment of the nonlinearity} \label{time}
Classical methods such as $\theta$-methods can be used for the time discretization. For unconditional stability of the scheme, we consider an implicit discretization based on the backward Euler method. We denote by $\boldsymbol{U}^{n+1}$ the solution at time level $t^{n+1}$, and $dt = t^{n+1}-t^n$ is the time step. Particular attention must be paid to the moving particle problem. Indeed, at time level $t^{n+1}$ the solid occupies $\mathcal{S}(t^{n+1})$, which differs from its position at the previous time level $t^n$. Hence the field variable at time level $t^{n+1}$ can be undefined near the interface, since there was no fluid flow there at time level $t^n$ ($\mathcal{S}(t^{n+1})\neq \mathcal{S}(t^{n})$ for the solid and $\mathcal{F}(t^{n+1})\neq \mathcal{F}(t^{n})$ for the fluid). In other words, some degrees of freedom of the fluid part which are not considered at time level $t^n$ must be taken into account at time level $t^{n+1}$. In particular, the velocity field must be known at such nodes. In the present work, we impose the velocity to be equal to the motion of the solid. The validity of this approximation is justified as soon as the time step is small enough to ensure that the structure moves progressively across the mesh without jumping over cells (a cell is skipped when the level-set never cuts it). This constraint is not too strong and corresponds to a classical CFL condition on the velocity of the structure.\\
In the following we present the algorithm we perform to compute at the time level $t^{n+1}$ the solution ($\boldsymbol{U}^{n+1}, \boldsymbol{P}^{n+1}, \boldsymbol{\Lambda}^{n+1},{\bf h}'^{n+1},{\bf h}^{n+1},\theta'^{n+1},\theta^{n+1}$) on $\mathcal{F}(t^{n+1})$. To simplify, we assume that $dt$ is constant. At the time level $t^n$ we have access to ($\boldsymbol{U}^n, \boldsymbol{P}^n, \boldsymbol{\Lambda}^n,{\bf h}'^n,{\bf h}^n,\theta'^n,\theta^{n}$) on $\mathcal{F}(t^{n})$.
\begin{itemize}
\item[1--] {\bf Velocity of the structure} - From $\boldsymbol{\Lambda}^n$, we compute $({\bf h}'^{n+1},\theta'^{n+1})$ from (\ref{M4}) and (\ref{M5}), discretized as
\begin{eqnarray*}
\displaystyle m_s \frac{{\bf h}'^{n+1}-{\bf h}'^n}{dt} = M_{ \bf \lambda} \boldsymbol{\Lambda}^{n} -m_s {\bf g},&&\\
\displaystyle I\frac{\theta'^{n+1} - \theta'^n}{dt} =
M_{ \bf \lambda} \left [ ({\bf x}-{\bf h}^{n})^{\bot}\cdot \boldsymbol{\Lambda}^{n}\right ] .&&\\
\end{eqnarray*}
\item[2--] {\bf Position of the structure} - From $\boldsymbol{\Lambda}^n$, we compute $({\bf h}^{n+1},\theta^{n+1})$ from (\ref{M4}) and (\ref{M5}), discretized with the central difference rule
\begin{eqnarray*}
\displaystyle m_s \frac{{\bf h}^{n+1}-2{\bf h}^n+{\bf h}^{n-1}}{dt^2} = M_{ \bf \lambda} \boldsymbol{\Lambda}^{n} -m_s {\bf g},&&\\
\displaystyle I\frac{\theta^{n+1} - 2 \theta^n + \theta^{n-1}}{dt^2} = M_{ \bf \lambda} \left [ ({\bf x}-{\bf h}^{n})^{\bot}\cdot \boldsymbol{\Lambda}^{n}\right ] .&&
\end{eqnarray*}
\item[3--] We update the geometry to determine $\mathcal{F}(t^{n+1})$. This amounts to updating the position of the level-set, which is defined from ${\bf h}^{n+1}$ and $\theta^{n+1}$.
\item[4--] We complete the velocity $\boldsymbol{U}^n$ defined on $\mathcal{F}(t^{n})$ to the full domain $\mathcal{O}$ by imposing the velocity on each node of $\mathcal{S}(t^{n+1})$ to be equal to ${\bf h}'^{n+1} + \theta'^{n+1}({\bf x}-{\bf h}^{n+1})^{\bot}$.\\
After this step, we know the Dirichlet condition for the velocity to impose at the interface
$\Gamma(t^{n+1}) = \partial \mathcal{S}(t^{n+1})$. So we determine $\boldsymbol{G}^{n+1}$ in~\eqref{M3} from ${\bf u}_{\Gamma}^{n+1} = {\bf h}'^{n+1} + \theta'^{n+1} ({\bf x}-{\bf h}^{n+1})^{\bot}$.
\item[5--] Finally, we compute $(\boldsymbol{U}^{n+1},\boldsymbol{P}^{n+1},\boldsymbol{\Lambda}^{n+1})$ such that
\begin{eqnarray*}
M_{{\bf u}{\bf u}} \frac{\boldsymbol{U}^{n+1} -\boldsymbol{U}^{n} }{dt} +A_{{\bf u}{\bf u}} \boldsymbol{U}^{n+1} + N(\boldsymbol{U}^{n+1}) \boldsymbol{U}^{n+1} + A_{{\bf u}p}\boldsymbol{P}^{n+1}
+ A_{{\bf u}{\bf \lambda}}\boldsymbol{\Lambda}^{n+1} = \boldsymbol{F}^{n+1}, \\
A^T_{{\bf u}p} \boldsymbol{U}^{n+1} + A_{pp} \boldsymbol{P}^{n+1} + A_{p{\bf \lambda}}\boldsymbol{\Lambda}^{n+1} = 0, \\
A^T_{{\bf u}{\bf \lambda}} \boldsymbol{U}^{n+1} + A^T_{p{\bf \lambda}} \boldsymbol{P}^{n+1} + A_{{\bf \lambda} {\bf \lambda}}\boldsymbol{\Lambda}^{n+1} = \boldsymbol{G}^{n+1}.
\end{eqnarray*}
At this stage, the resulting nonlinear algebraic system is solved by a Newton method. The Newton algorithm is initialized with the solution at the previous time step, extended as described in step 4--.
\item[6--] We complete the velocity $\boldsymbol{U}^{n+1}$ defined on $\mathcal{F}(t^{n+1})$ to the full domain $\mathcal{O}$ by imposing the velocity on each node of $\mathcal{S}(t^{n+1})$ to be equal to ${\bf h}'^{n+1} + \theta'^{n+1}({\bf x}-{\bf h}^{n+1})^{\bot}$.
\end{itemize}
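The steps above can be summarized in a short driver. The following Python sketch is restricted to the translational degrees of freedom (the rotational update follows the same pattern); `solve_fluid`, `update_geometry` and the state layout are hypothetical stand-ins for the finite element machinery, not the actual implementation.

```python
import numpy as np

def advance_one_step(state, dt, ms, g, M_lam, solve_fluid, update_geometry):
    """One time step of the algorithm above (steps 1--6), translation only.
    `state` holds (U, Lam, h, h_prev, hp); `solve_fluid` stands for the
    Newton solve of step 5 and `update_geometry` for the level-set update
    of step 3 (both hypothetical placeholders)."""
    U, Lam, h, h_prev, hp = state
    f = M_lam @ Lam / ms - g                  # right-hand side of (M4), divided by m_s
    hp_new = hp + dt * f                      # step 1: velocity of the structure
    h_new = 2.0 * h - h_prev + dt**2 * f      # step 2: position (central difference)
    update_geometry(h_new)                    # step 3: move the level-set
    u_gamma = hp_new                          # step 4: Dirichlet data on the new interface
    U_new, Lam_new = solve_fluid(U, u_gamma)  # step 5: nonlinear fluid solve
    return (U_new, Lam_new, h_new, h, hp_new) # step 6: extended velocity carried in U_new
```

With $\boldsymbol{\Lambda}^n = 0$ and trivial stubs for the solves, the driver reduces to the free fall ${\bf h}'^{n+1} = {\bf h}'^n - dt\, {\bf g}$, which provides an elementary sanity check.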
{\it Remark 1:} In practice the mid-point method is used to update the geometry of the structure, i.e.~for the computation of ${\bf h}'^{n+1},{\bf h}^{n+1},{\theta'}^{n+1},\theta^{n+1}$.\\
{\it Remark 2:} Step 6-- plays an important role in updating the geometry. Indeed, after extension, we have access to the values of the solution at each node of the full domain. Thus the new nodes that appear after the update already carry values, and no interpolation is required.
\section{Numerical tests: Free fall of a disk in a channel} \label{section4}
For validation of our method, we consider the numerical simulation of the motion of a disk falling inside an incompressible Newtonian viscous fluid. The parameters used in the computation, for a disk of radius $R=0.125$ cm in a channel of dimension $[0,2]\times [0,6]$, are given in Table \ref{tab}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lllll}
Parameter& $\rho_f$ & $\rho_s$ & $\nu$ & $g$\\
\hline
Unit & g/cm$^2$ & g/cm$^2$ & g/cm$^2$ s & cm/s$^2$\\
value & 1 & 1.25& 0.1 & 981\\
\end{tabular}
\caption{Parameters used for the simulation.}
\label{tab}
\end{center}
\end{table}
This simulation is well documented in the literature and is considered a challenging benchmark.
We refer to \cite{Glowinski}, where a fictitious domain method is used, and to \cite{Hachem} for simulations with mesh adaptation.\\
For the finite element discretization, we consider the classical Lagrange family, with $P_2 - P_1 - P_0$ elements for ${\bf u}$, $p$ and $\boldsymbol{\lambda}$ respectively, a choice that satisfies
the inf-sup condition required for this kind of problem (see \cite{CourtFournieLozinski} for more details). Uniform triangular meshes are used, defined by imposing a uniform distribution of points on the boundary of the domain. Two meshes are used: $mesh_{50 \times 150}$, with $50$ points in the $x$-direction and $150$ points in the $y$-direction, and $mesh_{100 \times 300}$, with $100$ points in the $x$-direction and $300$ points in the $y$-direction.\\
Numerical tests are performed with and without stabilization to underline the advantage of the method. When stabilization is considered, we choose $\gamma = h\times \gamma_0$ with $\gamma_0=0.05$ (see \cite{CourtFournieLozinski} for the justification of this choice). The parameter $\gamma$ must strike a compromise between the coerciveness of the system and the weight of the stabilization term. The time step $dt$ is initialized to $0.0005$ and adapted at each time iteration to satisfy a CFL condition. More precisely, we evaluate the norm of the velocity at each point of the structure and deduce the maximum value $v_m = \max_{{\bf x} \in \mathcal{S}(t^n)}(\|\bu({\bf x}) \|)$. Then we impose $dt = \min(0.9h/v_m,2h^2/{\nu})$. This condition is not restrictive, and we observe that $dt \in [0.0005, 0.006]$ in all the tests.\\
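The adaptive time-step rule just described is elementary; a minimal sketch:

```python
import numpy as np

def adaptive_time_step(structure_velocities, h, nu, cfl=0.9):
    """Adaptive time step described above: v_m is the maximal velocity norm
    over the points of the structure, and dt = min(cfl * h / v_m, 2 h^2 / nu)."""
    v_m = max(np.linalg.norm(v) for v in structure_velocities)
    return min(cfl * h / v_m, 2.0 * h * h / nu)
```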
In the literature, the fall of the disk is usually studied through curves showing the evolution in time of the vertical velocity and of the position of the center of the disk. We present the same analysis for different variants of our method. As expected, the disk quickly reaches a uniform fall velocity, drifting slightly to the right of the vertical symmetry axis. This observation has already been reported in the literature and is not specific to our method. One challenge is to propose robust methods that limit this symmetry breaking. In this section, we show that our method gives an answer to this question. Indeed, numerical simulations can be performed with coarse meshes, even though this is not recommended with a fictitious domain approach (points on the interface can be far from the degrees of freedom introduced by the finite element method).\\
{\bf Contribution of the stabilization technique}: Numerical tests are performed on the mesh $mesh_{50 \times 150}$, with and without the stabilization ($\gamma_0=0$). We compute the position of the disk over time and represent separately the vertical and horizontal positions.\\
In Figure~\ref{FigGamma} we represent the vertical velocity over time. The results are similar whether we perform stabilization or not. However, with the stabilization technique the method is more robust: if we zoom in (see Figure~\ref{FigGamma}), some perturbations appear without stabilization (red curve). When we compare the positions of the disk over time, we do not observe any difference in the vertical position in Figure~\ref{FigCenterVelocity}(a). However, the difference is more significant for the horizontal position and the rotation of the disk. The corresponding curves are plotted in red in Figure~\ref{FigPosx-angle}, for the mesh $mesh_{50\times 150}$. With the stabilization technique the results are clearly improved. Without stabilization, the symmetry is broken, even if the disk seems to come back towards the symmetry axis of the cavity at the end of the simulation. This behavior is related to the computation of the angular velocity, which is represented in Figure~\ref{FigPosx-angle}(b).
\begin{center}
\begin{figure}
\caption{Vertical velocity of the disk over time, with and without stabilization.}
\label{FigGamma}
\end{figure}
\end{center}
\FloatBarrier
{\bf Influence of the mesh size:} With stabilization, we compare the simulations obtained with $mesh_{50 \times 150}$ and $mesh_{100 \times 300}$. The results are given in Figure~\ref{FigCenterVelocity} (red curves for $mesh_{50 \times 150}$ and blue curves for $mesh_{100 \times 300}$). As expected, the solution is smoother when a finer mesh is used. Moreover, the results appear comparable to those obtained in \cite{Glowinski, Hachem}, for instance. When a coarse mesh is used, the velocity is over-estimated. This observation can be explained by the capability of the method to preserve mass conservation. Indeed, with a coarse mesh, a numerical added mass appears in the system. This artificial mass is proportional to the stabilization terms (see $\mathcal{A}_{p{\blambda}}$ and $\mathcal{A}_{{\blambda}{\blambda}}$ in the discrete problem), which are themselves proportional to the mesh size; it can thus be neglected when the mesh size decreases.
\begin{center}
\begin{figure}
\caption{Vertical position and vertical velocity of the center of the disk over time, for the two meshes.}
\label{FigCenterVelocity}
\end{figure}
\end{center}
The horizontal position of the disk is shown in Figure~\ref{FigPosx-angle}(a). We observe that the disk tends to come back towards the symmetry axis of the cavity at the end of the simulation with $mesh_{100 \times 300}$, unlike in the simulations with $mesh_{50 \times 150}$, where the symmetry breaking seems to grow. This phenomenon can be observed in Figure~\ref{FigPosx-angle}(b), which represents the evolution of the rotation angle of the disk. With $mesh_{50 \times 150}$ this angle keeps growing, unlike for the finer mesh $mesh_{100 \times 300}$. This behavior can be explained by perturbations associated with our numerical method, in particular in the treatment of the nonlinear term. Moreover, a perturbation of the horizontal velocity component is difficult to compensate during the simulation and contributes to amplifying the phenomenon. However, with stabilization, relevant values of the rotation are obtained, and their influence is reduced compared with the translation.\\
\begin{center}
\begin{figure}
\caption{Horizontal position (a) and rotation angle (b) of the disk over time, for the two meshes.}
\label{FigPosx-angle}
\end{figure}
\end{center}
\FloatBarrier
\begin{center}
\begin{figure}
\caption{Illustration of the intensity of the fluid velocity during the fall of the disk.}
\label{figsuperfall}
\end{figure}
\end{center}
\FloatBarrier
\section{Practical remarks on the numerical implementation}
\begin{itemize}
\item All the numerical simulations were performed with the free generic library Getfem++ \cite{Getfem} (same source code for 2D and 3D) and run on high performance computers (parallel computations).
\item In order to compute properly the integrals over elements at the interface (during the assembly procedure), an external call to the {\sc Qhull} library \cite{qhull} is performed.
\item In the algorithm, steps 1-- and 2-- require the computation of $M_{ \bf \lambda} \boldsymbol{\Lambda}^{n}$ and $M_{ \bf \lambda} \left [ ({\bf x}-{\bf h}^{n})^{\bot}\cdot \boldsymbol{\Lambda}^{n}\right ]$, corresponding to integrations over the level-set. Such integrations require particular attention, for the sake of preserving good accuracy. Indeed, the integrations must use nodes on the level-set, and accurate values at those nodes are required (no interpolation).
\item The method is computationally efficient, since it only requires an update of the assembled matrices locally, near the interface.
\item As mentioned in \cite{HaslR}, it is possible to define a reinforced stabilization technique in order to prevent difficulties that can occur when the intersection of the solid with the mesh of the whole domain introduces ``very small'' elements. The technique consists in selecting the elements best suited to compute the normal derivative on $\Gamma$. A similar approach is given in \cite{Pitkaranta}. We think that this kind of reinforced stabilization technique can prevent the perturbations that appear
during the simulation (see the zoom in Figure~\ref{FigGamma}).
\end{itemize}
\section{Conclusion}
In this paper, we have considered a new fictitious domain method, based on extended finite elements with a stabilization term, applied to the Navier-Stokes equations coupled with a moving solid. This method is quite simple to implement, since all the variables (multipliers and primal variables) are defined on a single mesh, independent of the computational domain. The algorithm leads to a robust method (good computation of the normal Cauchy stress tensor) regardless of how the domain intersects the (not necessarily fine) mesh. The simulation of a falling disk over time confirms that our approach is able to predict well the interaction between the fluid and the structure. The stabilization must be used to obtain more physical results preserving the symmetry. Applications in 3D are in progress, in particular for flow control by acting on the boundary of the solid.
\end{document} | math | 39,430 |
\begin{document}
\baselineskip = 14pt
\selectlanguage{english}
\title[Quasi-invariant Gaussian measures for the cubic third order NLS]
{Quasi-invariant Gaussian measures
for the cubic
nonlinear Schr\"odinger equation
with third order dispersion}
\author[T.~Oh, Y.~Tsutsumi, and N.~Tzvetkov]
{Tadahiro Oh, Yoshio Tsutsumi, and Nikolay Tzvetkov}
\address{
Tadahiro Oh, School of Mathematics\\
The University of Edinburgh\\
and The Maxwell Institute for the Mathematical Sciences\\
James Clerk Maxwell Building\\
The King's Buildings\\
Peter Guthrie Tait Road\\
Edinburgh\\
EH9 3FD\\
United Kingdom}
\varepsilonmail{[email protected]}
\address{Yoshio Tsutsumi\\
Department of Mathematics\\ Kyoto University\\ Kyoto 606-8502\\ Japan}
\varepsilonmail{[email protected]}
\address{
Nikolay Tzvetkov\\
Universit\'e de Cergy-Pontoise\\
2, av.~Adolphe Chauvin\\
95302 Cergy-Pontoise Cedex \\
France}
\varepsilonmail{[email protected]}
\subjclass[2010]{35Q55}
\keywords{third order nonlinear Schr\"odinger equation;
Gaussian measure; quasi-invariance; non-resonance}
\maketitle
\begin{abstract}
In this paper, we consider the cubic nonlinear Schr\"odinger equation
with third order dispersion on the circle.
In the non-resonant case,
we prove that the mean-zero Gaussian measures on Sobolev spaces
$H^s(\mathbb{T})$, $s > \frac 34$,
are quasi-invariant under the flow.
In establishing the result, we apply gauge transformations to remove the resonant part of the dynamics
and use invariance of the Gaussian measures under these gauge transformations.
\end{abstract}
\begin{otherlanguage}{french}
\begin{abstract}
Dans cet article, nous consid\'erons l'\'equation de Schr\"odinger non lin\'eaire cubique avec dispersion d'ordre trois sur le cercle.
Dans le cas non r\'esonant, nous prouvons que les mesures gaussiennes de moyenne nulle sur les espaces de Sobolev
$ H ^ s (\mathbb{T}) $, $ s> \frac 34 $, sont quasi-invariantes par le flot.
En \'etablissant le r\'esultat, nous appliquons des transformations de gauge pour \'eliminer la partie r\'esonante de la dynamique
et nous utilisons l'invariance des mesures gaussiennes par rapport \`a ces transformations de gauge.
\end{abstract}
\end{otherlanguage}
\section{Introduction}
\label{SEC:intro}
\subsection{Cubic nonlinear Schr\"odinger equation with third order dispersion}
The main goal of this work is to extend the result of our previous paper \cite{OTz1}
to the more involved case of lower order dispersion.
Namely, we consider
the following
cubic nonlinear Schr\"odinger equation with third order dispersion (3NLS) on $\mathbb{T}$:
\begin{align}
\begin{cases}
i \partial_t u - i \partial_x^3 u - \beta \partial_x^2 u = |u|^{2}u \\
u|_{t = 0} = u_0,
\end{cases}
\quad (x, t) \in \mathbb{T}\times \mathbb{R},
\label{3NLS1}
\end{align}
\noindent
where $u$ is a complex-valued function
on $\mathbb{T}\times \mathbb{R}$ with $\mathbb{T} = \mathbb{R}/(2\pi \mathbb{Z})$ and $\beta \in \mathbb{R}$.
The equation \eqref{3NLS1}
appears as a mathematical model for nonlinear pulse propagation phenomena in various fields of physics,
in particular, in nonlinear optics \cite{HK, Ag}.
The equation \eqref{3NLS1} without the third order dispersion
corresponds to the standard cubic nonlinear Schr\"odinger equation (NLS)
and it has been studied extensively from both theoretical and applied points of view.
In recent years,
there has been an increasing interest in
the cubic 3NLS \eqref{3NLS1} with the third order dispersion
in
nonlinear optics \cite{Oi, LMKEHT, MS}.
While the equation \eqref{3NLS1}
conserves the following Hamiltonian:
\begin{align*}
H(u) = - \frac 12 \operatorname{Im} \int_\mathbb{T} \partial_x^2 u \, \overline {\partial_x u}\, dx + \frac{\beta}{2} \int_\mathbb{T} |\partial_x u |^2 dx - \frac{1}{4}\int_\mathbb{T} |u|^4 dx,
\end{align*}
\noindent
the leading order term is sign-indefinite
and hence it does not play an important role in the well-posedness theory of \eqref{3NLS1}.
On the other hand,
the conservation of the mass $M(u)$ defined by
\begin{align*}
M(u) = \int_\mathbb{T}|u|^2 dx,
\end{align*}
\noindent
combined with local well-posedness in $L^2(\mathbb{T})$, yields
the following global well-posedness of~\eqref{3NLS1} in $L^2(\mathbb{T})$.
\begin{proposition}\label{PROP:GWP}
The cubic 3NLS \eqref{3NLS1} is globally well-posed
in $H^s(\mathbb{T})$
for $s \geq 0$.
\end{proposition}
The proof of local well-posedness
in $L^2(\mathbb{T})$ follows from the Fourier restriction norm method
with the periodic Strichartz estimate.
See \cite{MT1}.
We point out that
Proposition \ref{PROP:GWP} is sharp
since
\eqref{3NLS1} is ill-posed below $L^2(\mathbb{T})$
in the sense of non-existence of solutions~\cite{GO, MT2}.
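The mass conservation discussed above is easy to observe numerically. The following split-step Fourier sketch (an illustration only, not the well-posedness argument) alternates the exact linear flow, which is diagonal in Fourier with multiplier $e^{-i(n^3-\beta n^2)\,dt}$, and the exact pointwise flow of $i\partial_t u = |u|^2 u$; both substeps are unitary on $\ell^2$, so the scheme conserves the discrete mass exactly (up to rounding).

```python
import numpy as np

def splitstep_3nls(u0, beta, dt, nsteps):
    """Lie split-step Fourier sketch for the cubic 3NLS above: alternate the
    exact linear flow, with Fourier phase n^3 - beta*n^2, and the exact
    pointwise flow of i u_t = |u|^2 u (which preserves |u|)."""
    N = len(u0)
    n = np.fft.fftfreq(N, d=1.0 / N)                  # integer frequencies
    lin = np.exp(-1j * (n**3 - beta * n**2) * dt)     # e^{-i(n^3 - beta n^2) dt}
    u = u0.astype(complex)
    for _ in range(nsteps):
        u = np.fft.ifft(lin * np.fft.fft(u))          # linear step
        u = u * np.exp(-1j * np.abs(u)**2 * dt)       # nonlinear step
    return u

def mass(u):
    """Discrete version of M(u), i.e. (2*pi/N) * sum |u_j|^2."""
    return (2 * np.pi / len(u)) * np.sum(np.abs(u)**2)
```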
In studying 3NLS \eqref{3NLS1} with the cubic nonlinearity,
the following phase function $\phi(\bar n)$ plays an important role:
\begin{align}
\phi(\bar n) & = \phi(n, n_1, n_2, n_3)
 : = (n^3 -\beta n^2) - (n_1^3 -\beta n_1^2)
+ (n_2^3 -\beta n_2^2) - (n_3^3 -\beta n_3^2) \notag\\
&= 3 (n - n_1) (n- n_3) \big(n_1 + n_3 - \tfrac23\beta\big),
\label{phi1}
\end{align}
\noindent
where the last equality holds under $n = n_1 - n_2 + n_3$.
Note that when $\frac{2\beta}{3} \notin \mathbb{Z}$, the last factor never vanishes.
On the other hand, when $\frac{2\beta}{3} \in \mathbb{Z}$,
the last factor is identically 0 for $n_3 = -n_1 + \frac{2\beta}{3}$, $n_1 \in \mathbb{Z}$.
We refer to the first case ($\frac{2\beta}{3} \notin \mathbb{Z}$)
and the second case ($\frac{2\beta}{3} \in \mathbb{Z}$)
as the non-resonant case and
the resonant case, respectively.
In the following, we focus on the non-resonant case.
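The factorization of the phase function under the constraint $n = n_1 - n_2 + n_3$ can be checked symbolically; a short SymPy verification:

```python
import sympy as sp

# Symbolic check of the factorization of the phase function above:
# phi = w(n) - w(n1) + w(n2) - w(n3) with w(m) = m^3 - beta*m^2,
# evaluated on the constraint n = n1 - n2 + n3.
n1, n2, n3, b = sp.symbols('n1 n2 n3 beta')
n = n1 - n2 + n3

def w(m):
    return m**3 - b * m**2

phi = w(n) - w(n1) + w(n2) - w(n3)
factored = 3 * (n - n1) * (n - n3) * (n1 + n3 - sp.Rational(2, 3) * b)
assert sp.simplify(phi - factored) == 0   # the difference vanishes identically
```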
\subsection{Transport property of the Gaussian measures on periodic functions}
Given $ s> \frac{1}{2}$, let $\mu_s$ be the mean-zero Gaussian measure on $L^2(\mathbb{T})$
with the covariance operator $2(\text{Id} - \Delta)^{-s}$, formally written as\footnote{Given a function $f$ on $\mathbb{T}$,
we use both $\widehat f_n$ and $\widehat f(n)$ to denote the Fourier coefficient of $f$
at frequency~$n$.}
\begin{align}
d \mu_s
= Z_s^{-1} e^{-\frac 12 \| u\|_{H^s}^2} du
= \prod_{n \in \mathbb{Z}} Z_{s, n}^{-1}e^{-\frac 12 \jb{n}^{2s} |\widehat u_n|^2} d\widehat u_n .
\label{gauss0}
\end{align}
\noindent
More concretely,
we can define $\mu_s$ as the induced probability measure
under the map\footnote{In the following, we drop the harmless factor of $2\pi$
when it does not play any important role.}
\begin{align}
\omega \in \Omega \mapsto u^\omega(x) = u(x; \omega) = \sum_{n \in \mathbb{Z}} \frac{g_n(\omega)}{\jb{n}^s}e^{inx},
\label{gauss1}
\end{align}
\noindent
where $\jb{\,\cdot\,} = (1+|\cdot|^2)^\frac{1}{2}$
and
$\{ g_n \}_{n \in \mathbb{Z}}$ is a sequence of independent standard complex-valued
Gaussian random variables (i.e.~$\text{Var}(g_n) = 2$) on a probability space
$(\Omega, \mathcal{F}, P)$.
It is easy to see that
$u^\omega$ in \eqref{gauss1} lies in $H^{\sigma}(\mathbb{T})\setminus H^{s-\frac 12}(\mathbb{T})$ for $\sigma < s -\frac 12$, almost surely.
Namely,
$\mu_s$ is a Gaussian probability measure on $H^{\sigma}(\mathbb{T})$,
$\sigma < s -\frac 12$.
Moreover, for the same range of $\sigma$,
the triplet $(H^s, H^\sigma, \mu_s)$ forms an abstract Wiener space. See \cite{GROSS, Kuo2}.
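A truncation of the random Fourier series above is easy to sample. The following Python sketch (for illustration only) draws the coefficients $g_n/\jb{n}^s$ and evaluates the $H^\sigma$-norm of the truncation, whose expectation $\sum_n 2\jb{n}^{2\sigma - 2s}$ is finite exactly when $\sigma < s - \frac12$, matching the almost-sure regularity stated above.

```python
import numpy as np

def sample_u(s, N, rng):
    """Draw a truncation of the random series: coefficients g_n / <n>^s for
    |n| <= N, with g_n independent standard complex Gaussians (Var = 2)."""
    n = np.arange(-N, N + 1)
    g = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)
    return n, g / (1 + n**2)**(s / 2)

def sobolev_norm_sq(n, coeffs, sigma):
    """Squared H^sigma-norm of a Fourier series with the given coefficients."""
    return np.sum((1 + n**2)**sigma * np.abs(coeffs)**2)
```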
Our main goal is to study the transport property
of the Gaussian measures $\mu_s$ on Sobolev spaces
under the dynamics of \varepsilonqref{3NLS1}.
We first recall the following definition of quasi-invariant measures.
Given a measure space $(X, \mu)$,
we say that $\mu$ is {\it quasi-invariant} under a transformation $T:X \to X$
if
the transported measure $T_*\mu = \mu\circ T^{-1}$
and $\mu$
are equivalent, i.e.~mutually absolutely continuous with respect to each other.
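As a finite-dimensional illustration of this definition (not taken from the paper), the standard Gaussian on $\mathbb{R}$ is quasi-invariant under every translation, with an explicit Cameron-Martin density:

```python
import numpy as np

def shifted_density_ratio(x, h):
    """Radon-Nikodym derivative d(T_* mu)/d mu at x, for the standard
    Gaussian mu = N(0,1) on R and the shift T(x) = x + h, so that
    T_* mu = N(h,1).  The ratio exp(h*x - h^2/2) is positive and finite
    for every x and h, so the two measures are mutually absolutely
    continuous: mu is quasi-invariant under every translation."""
    return np.exp(h * x - 0.5 * h**2)
```

In infinite dimensions this equivalence fails for general shifts (the Cameron-Martin theorem restricts the admissible directions), which is one reason quasi-invariance under a nonlinear flow is a delicate property.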
We now state our main result.
\begin{theorem}\label{THM:quasi}
Let $\frac 23 \beta \notin \mathbb{Z}$. Then, for $s > \frac 34$, the Gaussian measure $\mu_s$ is quasi-invariant under the flow of
the cubic 3NLS
\eqref{3NLS1}.
\end{theorem}
In probability theory,
the transport property
of Gaussian measures
under linear and nonlinear transformations
has been studied extensively.
See, for example, \cite{CM, Kuo, RA, Cru1, Cru2, Bog, AF}.
On the other hand,
in the field of Hamiltonian PDEs,
Gaussian measures
naturally appear in the construction
of invariant measures
associated to conservation laws
such as Gibbs measures,
starting with the seminal work of Bourgain \cite{BO94, BO96}.
See \cite{OTz1, BOP4} for the references therein.
In \cite{TzBBM}, the third author initiated the study of transport properties of Gaussian measures
under the flow of a Hamiltonian PDE,
where two methods were presented
in establishing quasi-invariance of the Gaussian measures $\mu_s$
as stated in Theorem~\ref{THM:quasi}.
See also the subsequent work~\cite{OTz1, OTz3, OSTz}
on the transport property of the Gaussian measures
under nonlinear Hamiltonian PDEs.
\smallskip
\noindent
$\bullet$ {\bf Method 1:}
The first method is to reduce an equation under consideration
so that one can apply a general
criterion on quasi-invariance
of a Gaussian measure on an abstract Wiener space
under a nonlinear transformation
due to Ramer \cite{RA}.
Essentially speaking,
this result states that $\mu_s$
is quasi-invariant
if the nonlinear part is $(d+\varepsilon)$-smoother than the linear part
for an evolution equation posed on $\mathbb{T}^d$.
Namely, the given nonlinear dynamics is basically a compact perturbation of the linear dynamics.
\smallskip

\noindent
$\bullet$ {\bf Method 2:}
This method was introduced in \cite{TzBBM} by the third author to go beyond Ramer's general argument
in studying concrete examples of evolution equations.
It is based on combining
both PDE techniques
and probabilistic techniques in an intricate manner.
In particular,
the crucial step in this second method is to establish
an effective energy estimate (with smoothing)
for the (modified) $H^s$-functional.
See, for example, Proposition 5.1 in~\cite{TzBBM} and Proposition 6.1 in \cite{OTz1}.
\smallskip

We refer readers to \cite{OTz2}
for a brief introduction to the subject and an overview of these two methods.
We point out that, in applying either method,
it is essential to exhibit nonlinear smoothing for given dynamics.
We also remark that
the second method in general performs better than the first method.
See \cite{TzBBM, OTz1, OTz3}.
See also Remark \ref{REM:M2}.
In \cite{OTz1}, we studied the transport property of the Gaussian measure $\mu_s$
under
the following cubic fourth order nonlinear Schr\"odinger equation (4NLS) on $\mathbb{T}$:
\begin{align}
i \partial_t u - \partial_x^4 u = |u|^{2}u .
\label{4NLS}
\end{align}
\noindent
Our main tool
to show nonlinear smoothing in this context was normal form reductions
analogous to the approach employed in
\cite{BIT, KO, GKO}.
In \cite{BIT}, Babin-Ilyin-Titi introduced
a normal form approach for constructing solutions to dispersive PDEs.
It turned out that this approach has various applications
such as establishing unconditional uniqueness~\cite{KO, GKO, CGKO}
and exhibiting nonlinear smoothing~\cite{ET}.
In applying the first method,
we performed a normal form reduction
to the (renormalized) equation
and
proved quasi-invariance of $\mu_s$ under \eqref{4NLS} for $s > 1$.
On the other hand, in applying the second method,
we performed a normal form reduction
to the equation satisfied by the (modified) $H^s$-energy functional
and proved quasi-invariance for $s > \frac 34$
(which was later improved to the optimal range of regularity $s > \frac 12$
via an infinite iteration of normal form reductions
in \cite{OSTz}).
Now, let us turn our attention to the cubic 3NLS \eqref{3NLS1}.
Let us first proceed as in~\cite{OTz1}
and transform the equation.
It is crucial that
the Gaussian measure $\mu_s$ is quasi-invariant under
the transformations we consider in the following.
(In fact, $\mu_s$ is invariant under these transformations. See Lemma \ref{LEM:gauss}.)
Hence, it suffices to prove quasi-invariance of $\mu_s$ under the resulting dynamics.
Given $t \in \mathbb{R}$, we define a gauge transformation $\mathcal{G}_t$ on $L^2(\mathbb{T})$
by setting
\begin{align}
\mathcal{G}_t [f ]: = e^{ 2 i t \fint |f|^2} f,
\label{gauge1}
\end{align}
\noindent
where $\fint_\mathbb{T} f(x) dx := \frac{1}{2\pi} \int_\mathbb{T} f(x)dx$.
Given a function $u \in C(\mathbb{R}; L^2(\mathbb{T}))$,
we define $\mathcal{G}$ by setting
\[\mathcal{G}[u](t) : = \mathcal{G}_t[u(t)].\]

\noindent
Note that $\mathcal{G}$ is invertible
and its inverse is given by $\mathcal{G}^{-1}[u](t) = \mathcal{G}_{-t}[u(t)]$.
Let $u \in C(\mathbb{R}; L^2(\mathbb{T}))$ be a solution to \varepsilonqref{3NLS1}.
Define $\mathbf{u}$ by
\begin{align}
\mathbf{u} (t) := \mathcal{G}[u](t) = e^{ 2 i t \fint |u(t)|^2} u(t).
\label{gauge2}
\end{align}
\noindent
Then, it follows from
mass conservation
that $\mathbf{u}$ is a solution to the following renormalized 3NLS:
\begin{align}
i \partial_t \mathbf{u} - i \partial_x^3 \mathbf{u} - \beta \partial_x^2 \mathbf{u} = \bigg( | \mathbf{u} |^{2} - 2 \fint_\mathbb{T} |\mathbf{u} |^2 dx \bigg) \mathbf{u}.
\label{3NLS2}
\end{align}
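For the reader's convenience, we sketch the computation behind \eqref{3NLS2}. Since the mass $\fint_\mathbb{T} |u(t)|^2 dx$ is conserved under \eqref{3NLS1}, differentiating \eqref{gauge2} in time gives
\begin{align*}
i \partial_t \mathbf{u} = e^{2 i t \fint |u|^2} \, i \partial_t u - 2 \bigg(\fint_\mathbb{T} |u|^2 dx \bigg) \mathbf{u},
\end{align*}

\noindent
while the spatial derivatives commute with the spatially constant phase factor. Hence, applying \eqref{3NLS1} and noting $|\mathbf{u}| = |u|$, we arrive at \eqref{3NLS2}.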
\noindent
Let $\mathbf{N}(\mathbf{u}) = \big(|\mathbf{u}|^{2} - 2 \fint_\mathbb{T} |\mathbf{u}|^2 dx\big)\mathbf{u} $ be the renormalized nonlinearity in \eqref{3NLS2}.
Then, we have
\begin{align}
\mathbf{N}(\mathbf{u})
& = \sum_{n\in \mathbb{Z}} e^{inx} \sum_{\substack{n = n_1 - n_2 + n_3\\n\ne n_1, n_3\\n_1 + n_3 \ne \frac{2\beta}{3}}}
\widehat{\mathbf{u}}_{n_1}\overline{\widehat{\mathbf{u}}_{n_2}}\widehat{\mathbf{u}}_{n_3}
- \sum_{n\in \mathbb{Z}} e^{inx} |\widehat{\mathbf{u}}_n|^2\widehat{\mathbf{u}}_n
\notag\\
& \hphantom{X}
+ \sum_{n\in \mathbb{Z}} e^{inx}\sum_{\substack{n = n_1 - n_2 + n_3\\n\ne n_1, n_3\\n_1 + n_3 = \frac{2\beta}{3}}}
\widehat{\mathbf{u}}_{n_1}\overline{\widehat{\mathbf{u}}_{n_2}}\widehat{\mathbf{u}}_{n_3}
\notag\\
& =: \mathbf{N}_1 (\mathbf{u}) + \mathbf{N}_2 (\mathbf{u}) + \mathbf{N}_3 (\mathbf{u}) .
\label{nonlin1}
\end{align}
\noindent
In view of \eqref{phi1},
the first term corresponds to the non-resonant contribution,
while the second and third terms correspond to the resonant contribution.
Moreover, under the non-resonant assumption $\frac{2\beta}{3} \notin \mathbb{Z}$,
we have $\mathbf{N}_3(\mathbf{u})\equiv 0$.
See Remark \ref{REM:res} for more on the renormalized equation \eqref{3NLS2}.
At this point, we can introduce the interaction representation $v$ of $\mathbf{u}$ as in \cite{OTz1} by
\begin{align}
v(t) = S(-t)\mathbf{u}(t),
\label{gauge3}
\end{align}
\noindent
where $S(t) = e^{t (\partial_x^3 - i \beta \partial_x^2)}$
denotes the linear solution map for 3NLS \eqref{3NLS1}.
Under the non-resonant assumption ($\frac 23 \beta \notin \mathbb{Z}$), this reduces
\eqref{3NLS2} to the following equation for $\{\widehat v_n\}_{n \in \mathbb{Z}}$:\footnote{The non-resonant part
$\mathcal{N}_0(v)$ is non-autonomous. For simplicity of notation, however,
we drop the $t$-dependence. A similar comment applies to multilinear terms
appearing in the following.}
\begin{align}
\partial_t \widehat v_n
& = -i \sum_{\Gamma(n)} e^{i t \phi(\bar n) } \widehat v_{n_1}\overline{\widehat v_{n_2}}\widehat v_{n_3}
+ i |\widehat v_n|^2 \widehat v_n \notag\\
& =: \widehat{\mathcal{N}_0(v)}(n) + \widehat{\mathcal{R}_0(v)}(n),
\label{3NLS3}
\end{align}

\noindent
where the phase function $\phi(\bar n)$ is as in \eqref{phi1} and the plane $\Gamma(n)$ is given by
\begin{align}
\Gamma(n)
= \big\{(n_1, n_2, n_3) \in \mathbb{Z}^3:\,
n = n_1 - n_2 + n_3, \ n \ne n_1, n_3,
\text{ and } n_1 + n_3 \ne \tfrac{2\beta}{3}\big\}.
\label{Gam1}
\end{align}
\noindent
In view of \eqref{phi1},
we refer to the first term $\mathcal{N}_0(v)$ and the second term $\mathcal{R}_0(v)$
on the right-hand side of \eqref{3NLS3}
as the non-resonant and resonant terms, respectively.
On the one hand, we do not have any smoothing on $\mathcal{R}_0(v)$ under a time integration.
On the other hand,
Lemma \ref{LEM:phase} below on the phase function $\phi(\bar n)$ shows that
there is a smoothing on the non-resonant term $\mathcal{N}_0(v)$
under a time integration.
Hence, by applying a normal form reduction as in~\cite{OTz1},
we can exhibit $(1+\varepsilon)$-smoothing on the nonlinear part if $s > 1$.
See Lemma~\ref{LEM:Znonlin}.
Then, by invoking Ramer's result (Proposition~\ref{PROP:RA}),
we conclude quasi-invariance of~$\mu_s$
under~\eqref{3NLS3} (and hence under~\eqref{3NLS1}; see Lemma \ref{LEM:gauss}) for $s > 1$.
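The smoothing encoded in Lemma \ref{LEM:phase} ultimately rests on an elementary factorization, which we record here for the reader's orientation: on the hyperplane $n = n_1 - n_2 + n_3$, a direct computation yields the identities
\begin{align*}
n^3 - n_1^3 + n_2^3 - n_3^3 & = 3(n - n_1)(n - n_3)(n_1 + n_3),\\
n^2 - n_1^2 + n_2^2 - n_3^2 & = 2(n - n_1)(n - n_3).
\end{align*}

\noindent
Consequently, the phase $\phi(\bar n)$ in \eqref{phi1} is, up to a sign and a multiplicative constant, $(n - n_1)(n - n_3)\big(n_1 + n_3 - \tfrac{2\beta}{3}\big)$, consistently with the lower bound in Lemma \ref{LEM:phase}, and the three constraints defining $\Gamma(n)$ in \eqref{Gam1} are precisely the non-vanishing conditions for these factors.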
We first point out that the regularity $s > 1$ is optimal with respect to this argument
(namely, applying Ramer's result in a straightforward manner)
due to the resonant part $\mathcal{R}_0(v)$.
See Remark~5.4 in~\cite{OTz1}.
Moreover, due to a weaker dispersion for 3NLS \eqref{3NLS1}
as compared to 4NLS \eqref{4NLS},
the second method applied to \eqref{3NLS3} based on an energy estimate
does not work
for any $s \in \mathbb{R}$.
Hence, a new idea is needed to go below $s = 1$.
To overcome this problem, we introduce
another gauge transformation, which is the main new idea of this paper.
Given $t \in \mathbb{R}$, we define a gauge transformation $\mathcal{J}_t$ on $L^2(\mathbb{T})$
by setting
\begin{align}
\mathcal{J}_t [f ]: = \sum_{n \in \mathbb{Z}} e^{ -i t |\widehat f_n|^2 } \widehat f_n e^{inx}.
\label{gauge4}
\end{align}

\noindent
Define $\textsf{u}$ by
\begin{align}
\textsf{u}(t) = \mathcal{J}_t[\mathbf{u}(t)].
\label{gauge5}
\end{align}
\noindent
First, by noting that $|\widehat{\textsf{u}}_n(t)|^2 = |\widehat{\mathbf{u}}_n(t)|^2$,
it follows from \eqref{nonlin1} and \eqref{gauge5} that
\begin{align}
\partial_t \big(|\widehat{\textsf{u}}_n |^2\big)
& = 2 \, \mathrm{Re} \big(\partial_t \widehat{\mathbf{u}}_n \, \overline{\widehat{\mathbf{u}}_n}\big)\notag\\
& =
2 \, \mathrm{Im} \bigg( \sum_{\Gamma(n)} e^{i t \psi(\bar n)}
\widehat{\textsf{u}}_{n_1}\overline{\widehat{\textsf{u}}_{n_2}}\widehat{\textsf{u}}_{n_3} \overline{\widehat{\textsf{u}}_n}\bigg),
\label{gauge5a}
\end{align}

\noindent
where the (time-dependent) phase function $\psi(\bar n)$ is defined by
\begin{align*}
\psi(\bar n)= \psi(n, n_1, n_2, n_3)(\textsf{u}) := - |\widehat{\textsf{u}}_n|^2+ |\widehat{\textsf{u}}_{n_1}|^2 - |\widehat{\textsf{u}}_{n_2}|^2 + |\widehat{\textsf{u}}_{n_3}|^2 .
\end{align*}
\noindent
Then,
we see that $\textsf{u}$ satisfies the following equation:
\begin{align}
i \partial_t \textsf{u} - i \partial_x^3 \textsf{u} - \beta \partial_x^2 \textsf{u} = \mathsf{N}_1(\textsf{u}) + \mathsf{N}_2(\textsf{u}),
\label{3NLS4}
\end{align}

\noindent
where the nonlinearities $\mathsf{N}_1(\textsf{u})$ and $\mathsf{N}_2(\textsf{u})$ are given by
\begin{align*}
\mathsf{N}_1(\textsf{u})
& = \sum_{n\in \mathbb{Z}} e^{inx}
\sum_{\Gamma(n)} e^{i t \psi(\bar n)}
\widehat{\textsf{u}}_{n_1}\overline{\widehat{\textsf{u}}_{n_2}}\widehat{\textsf{u}}_{n_3},\\
\mathsf{N}_2(\textsf{u})
& = 2t \sum_{n\in \mathbb{Z}} e^{inx} \, \widehat{\textsf{u}}_n \, \mathrm{Im}\bigg(
\sum_{\Gamma(n)} e^{i t \psi(\bar n)}
\widehat{\textsf{u}}_{n_1}\overline{\widehat{\textsf{u}}_{n_2}}\widehat{\textsf{u}}_{n_3}\overline{\widehat{\textsf{u}}_n}\bigg).
\end{align*}
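Let us briefly indicate how \eqref{3NLS4} and, in particular, the term $\mathsf{N}_2(\textsf{u})$ arise. Differentiating \eqref{gauge5} on the Fourier side gives
\begin{align*}
i \partial_t \widehat{\textsf{u}}_n = e^{- i t |\widehat{\mathbf{u}}_n|^2} \, i \partial_t \widehat{\mathbf{u}}_n
+ \Big( |\widehat{\textsf{u}}_n|^2 + t \, \partial_t \big( |\widehat{\textsf{u}}_n|^2 \big) \Big) \widehat{\textsf{u}}_n.
\end{align*}

\noindent
The term $|\widehat{\textsf{u}}_n|^2 \widehat{\textsf{u}}_n$ cancels exactly the resonant contribution $- |\widehat{\mathbf{u}}_n|^2 \widehat{\mathbf{u}}_n$ in \eqref{nonlin1} (after multiplication by the phase factor), while the term $t \, \partial_t (|\widehat{\textsf{u}}_n|^2) \widehat{\textsf{u}}_n$, combined with \eqref{gauge5a}, produces $\mathsf{N}_2(\textsf{u})$; the non-resonant contribution becomes $\mathsf{N}_1(\textsf{u})$ upon collecting the phase factors into $e^{it\psi(\bar n)}$.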
\noindent
Finally, we consider the interaction representation $w$ of $\textsf{u}$ given by
\begin{align}
w(t) = S(-t)\textsf{u}(t).
\label{gauge6}
\end{align}
\noindent
Then,
the equation
\eqref{3NLS4} is reduced to the following equation for $\{\widehat w_n\}_{n \in \mathbb{Z}}$:
\begin{align}
\partial_t \widehat w_n
& = -i \sum_{\Gamma(n)} e^{i t (\phi(\bar n) +\psi(\bar n)) } \widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\notag\\
& \hphantom{Xl}
-2 i t \, \widehat w_n \, \mathrm{Im}\bigg(
\sum_{\Gamma(n)} e^{i t (\phi(\bar n) +\psi(\bar n)) }
\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)\notag\\
& =: \widehat{\mathcal{N}_1(w)}(n)
+ \widehat{\mathcal{N}_2(w)}(n),
\label{3NLS5}
\end{align}

\noindent
where $\phi(\bar n)$ is as in \eqref{phi1}
and $\psi (\bar n)$ is now expressed in terms of $w$:
\begin{align}
\psi(\bar n)= \psi(n, n_1, n_2, n_3)(w) := - |\widehat w_n|^2+ |\widehat w_{n_1}|^2 - |\widehat w_{n_2}|^2 + |\widehat w_{n_3}|^2 .
\label{psi1}
\end{align}
\noindent
By using the additional gauge transformation $\mathcal{J}_t$,
we removed the resonant part at the expense of introducing
the second term $\mathcal{N}_2(w)$ in \eqref{3NLS5}.
While this second term looks more complicated,
it can be handled essentially in the same manner as the non-resonant term $\mathcal{N}_1(w)$
by noting that $\widehat{\mathcal{N}_2(w)}(n)$ is basically $\widehat{\mathcal{N}_1(w)}(n)$
with two extra (harmless) factors of $\widehat w_n$.
See Lemma \ref{LEM:Xnonlin}.
We also note that the phase function $\psi(\bar n)$ in \eqref{psi1} depends on
the time variable $t$,
which introduces extra terms
in the normal form reduction step.
See~\eqref{Xnonlin1} and~\eqref{Ynonlin1} below.
The main point is, however, that there is no resonant contribution
in~\eqref{3NLS5}
and,
as a result, we can show $(1+\varepsilon)$-smoothing on the nonlinear part
for $\frac 34 < s < 1$ (Lemma \ref{LEM:Xnonlin})
and apply Ramer's result
to conclude quasi-invariance of the Gaussian measure~$\mu_s$.
Given $t, \tau \in \mathbb{R}$, let
$\Phi(t):L^2\to L^2$ be the solution map for \eqref{3NLS1}
and
$ \Psi_0(t, \tau)$ and $\Psi_1(t, \tau) :L^2\to L^2$
be the solution maps for \eqref{3NLS3} and \eqref{3NLS5}, respectively,
sending initial data at time $\tau$
to solutions at time $t$.\footnote{Note that \eqref{3NLS3}
and \eqref{3NLS5}
are non-autonomous.
As in \cite{OTz1}, this non-autonomy does not play an essential
role in the remaining part of the paper.
}
When $\tau =0$, we denote $\Psi_0(t, 0)$
and $\Psi_1(t, 0)$ by $\Psi_0(t)$ and $\Psi_1(t)$ for simplicity.
Then,
it follows from \eqref{gauge2}, \eqref{gauge3}, \eqref{gauge5}, and \eqref{gauge6}
that
\begin{align*}
\Phi(t) = \mathcal{G}_t^{-1} \circ S(t) \circ \Psi_0(t)
\qquad \text{and} \qquad
\Phi(t) = \mathcal{G}_t^{-1} \circ \mathcal{J}_t^{-1}\circ S(t) \circ \Psi_1(t).
\end{align*}
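Indeed, unwinding the definitions: if $u(t) = \Phi(t)(u_0)$, then \eqref{gauge2} and \eqref{gauge3} give
\begin{align*}
\Phi(t)(u_0) = u(t) = \mathcal{G}_t^{-1}[\mathbf{u}(t)] = \mathcal{G}_t^{-1}\big[S(t) v(t)\big] = \big(\mathcal{G}_t^{-1} \circ S(t) \circ \Psi_0(t)\big)(u_0),
\end{align*}

\noindent
and the second identity follows in the same manner, with the extra factor $\mathcal{J}_t^{-1}$ coming from \eqref{gauge5}.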
\noindent
As we pointed out above,
the Gaussian measure $\mu_s$ is invariant under $S(t)$, $\mathcal{G}_t$, and $\mathcal{J}_t$
(Lemma \ref{LEM:gauss})
and hence it suffices to prove quasi-invariance of $\mu_s$
under $\Psi_0(t)$ or $\Psi_1(t)$.
In applying Ramer's result, we view
$\Psi_0(t)$ and $\Psi_1(t)$ as the identity plus a perturbation.
By writing
\begin{align}
\Psi_0(t) = \text{Id} + K_0(t)
\qquad\text{and}\qquad
\Psi_1(t) = \text{Id} + K_1(t),
\label{nonlin1c}
\end{align}

\noindent
we show that $K_0(t)(u_0)$ and $K_1(t)(u_0)$ are $(1+\varepsilon)$-smoother
than the random initial data $u_0$ distributed according to $\mu_s$
in appropriate ranges of regularities
(Lemmas \ref{LEM:Znonlin} and \ref{LEM:Xnonlin}).
We conclude this introduction with several remarks.
\begin{remark}\rm
Dispersion is essential in establishing quasi-invariance of $\mu_s$ in Theorem \ref{THM:quasi}.
In \cite{OSTz}, the first and third authors with Sosoe
studied the transport property of $\mu_s$
under the following dispersionless model on $\mathbb{T}$:
\begin{align}
i \partial_t u = |u|^{2}u.
\label{NLSX}
\end{align}

\noindent
In particular, they showed that $\mu_s$ is not quasi-invariant under
the dynamics of \eqref{NLSX}.
\end{remark}
\begin{remark}\label{REM:M2} \rm
(i)
We point out that the regularity restriction $s > \frac 34$
for the cubic 4NLS~\eqref{4NLS}
in~\cite{OTz1}
was optimal in a straightforward application of the second method
based on an energy estimate of the form
\begin{align}
\frac{d}{dt} E(u)
\leq C(\| u \|_{L^2})
\| u \|_{H^{s - \frac 12 - \varepsilon}}^{2-\theta}
\label{P0}
\end{align}

\noindent
for some $\theta > 0$
and for any solution $u$ to \eqref{4NLS}.\footnote{We point out that one can
also close an argument by establishing an energy estimate \eqref{P0} with $\theta = 0$.
See~\cite{OTz3}.}
Here, $E(u) = \|u\|_{H^s}^2 + R(u)$ denotes a modified $H^s$-energy
with a suitable correction term $R(u)$ obtained via a normal form reduction
applied to (the evolution equation satisfied by) $\|u\|_{H^s}^2$.
Note that, in \eqref{P0}, we are allowed to place only (at most) two factors of $u$
in the $H^\sigma$-norm with $\sigma = s - \frac 12 - \varepsilon $
and need to place the other factors in the conserved (weaker) $L^2$-norm.
In particular, the regularity restriction $s > \frac 34$ (i.e.~$\sigma > \frac 14$)
comes from the following estimate \cite[(6.14)]{OTz1}:
\begin{align}
\bigg\| \sum_{(m_1, m_2, m_3) \in \Gamma(n_1)}
\widehat u_{m_1}\overline{\widehat u_{m_2}}\widehat u_{m_3}
\bigg\|_{\ell^\infty_{n_1}} \lesssim
\| u \|_{H^\frac{1}{6}}^3
\lesssim \|u\|_{L^2}^{1+\theta}
\| u \|_{H^\sigma}^{2-\theta},
\label{P1}
\end{align}
\noindent
which holds for $\sigma > \frac 14$.
We point out that,
in applying the first method based on Ramer's result,
we can place all the factors in the $H^\sigma$-norm.
See Lemmas \ref{LEM:Znonlin}
and \ref{LEM:Xnonlin}.
We stress that this regularity restriction $s > \frac 34$ cannot be removed unless one applies an infinite iteration of normal
form reductions as in \cite{OSTz}, since, if we stop applying normal form reductions
after a finite number of steps, then we need to apply \eqref{P1}
to estimate the contribution from the trilinear terms added at the very last step.
The same restriction applies to the cubic 3NLS \eqref{3NLS1}.
Namely, even if we apply the second method based on an energy estimate
to the transformed equation \eqref{3NLS5},
we can expect, at best, the same regularity range $s > \frac 34$
as in Theorem \ref{THM:quasi},
yielding no improvement over our proof of Theorem \ref{THM:quasi} based on the first method.
\smallskip

\noindent
(ii) If we apply the second gauge transformation \eqref{gauge5}
to the cubic 4NLS \eqref{4NLS} and apply the first method based on Ramer's argument,
we can prove quasi-invariance of $\mu_s$ for $s > \frac 23$.
While this is better than the regularity restriction $s > \frac 34$ in \cite{OTz1},
this approach does not seem to yield the optimal result ($s > \frac 12$) of \cite{OSTz},
in view of \eqref{XX1} below.
One way in this direction would be to apply an infinite iteration of normal form reductions
at the level of the equation as in \cite{GKO}.
\end{remark}
\begin{remark}\rm
After the completion of this paper,
the second method based on the energy estimate
has been further developed in \cite{PTV, GOTW}.
In a recent preprint \cite{FT}, Forlano-Trenberth applied the approaches developed in \cite{PTV, GOTW}
to study the cubic fractional NLS:
\begin{align}
i \partial_t u + ( - \partial_x^2)^\alpha u = |u|^{2}u
\label{fNLS}
\end{align}

\noindent
and showed that the Gaussian measure $\mu_s$ in \eqref{gauss0}
is quasi-invariant under the flow of \eqref{fNLS}
for
\[s
> \begin{cases}
\max( \frac 23, \frac{11}{6} - \alpha), & \text{if }\alpha \geq 1,
\rule[-3mm]{0pt}{0pt}
\\
\frac{10\alpha + 7}{12}, & \text{if }\frac 12 < \alpha < 1.
\end{cases}
\]

\noindent
In particular, this shows quasi-invariance of $\mu_s$ under the standard NLS with second order dispersion
for $s > \frac 56$.
\end{remark}
\begin{remark}\rm
In \cite{NTT}, the second author with Nakanishi and Takaoka
studied the low regularity well-posedness of the modified KdV equation on $\mathbb{T}$.
In particular, the following ``gauge'' transformation was used in \cite[Theorem 1.3]{NTT}
under the extra regularity assumption
$|n|^\frac{1}{2}\widehat u_n (0)\in \ell^\infty_n$:
\begin{align*}
\widetilde{\mathcal{J}}_{\text{mKdV}} [u ](t): = \sum_{n \in \mathbb{Z}} e^{ -i n \int_0^t |\widehat u_n(t')|^2 dt'} \widehat u_n(t)e^{inx}
\end{align*}

\noindent
to completely remove the resonant part of the dynamics.
Note that this extra regularity assumption was needed
to guarantee the boundedness of the transformation $\widetilde{\mathcal{J}}_{\text{mKdV}}$.
Instead of $\mathcal{J}_t$ in \eqref{gauge4},
one may be tempted to use an analogous ``gauge'' transformation $\widetilde{\mathcal{J}}$ defined by
\begin{align}
\widetilde{\mathcal{J}} [\mathbf{u} ](t): = \sum_{n \in \mathbb{Z}} e^{ -i \int_0^t |\widehat{\mathbf{u}}_n(t')|^2 dt'} \widehat{\mathbf{u}}_n(t)e^{inx}
\label{gauge7}
\end{align}

\noindent
(for solutions $\mathbf{u}$ to \eqref{3NLS2}),
since it would produce a simpler equation than \eqref{3NLS4} and \eqref{3NLS5}.
Note that, in our smooth setting,
we do not need an extra regularity assumption, thanks to the global well-posedness in $L^2(\mathbb{T})$
stated in Proposition \ref{PROP:GWP}.
We point out that one crucial ingredient in the proof of Theorem~\ref{THM:quasi}
is the invariance\footnote{In view of Lemma~\ref{LEM:gauss}~(iii),
quasi-invariance would suffice.} of the Gaussian measure $\mu_s$ under the gauge transformation $\mathcal{J}_t$
defined in \eqref{gauge4}.
Note that the transformation $\widetilde{\mathcal{J}}$ in \eqref{gauge7} depends on the evolution over $[0, t]$;
namely, it is not a well-defined gauge transformation on the phase space $L^2(\mathbb{T})$.
Hence, studying the transport property
of $\mu_s$ under $\widetilde{\mathcal{J}}$ (such as quasi-invariance) would already require
understanding the transport property of $\mu_s$
under \eqref{3NLS2}
(in particular, in a time-averaged manner, which is highly non-trivial).\footnote{Even
in a situation where we have an invariant measure, it is not at all trivial to know
how random solutions at different times
are correlated. In fact, it is an important open question to study
the space-time correlation of a random solution distributed by an invariant (Gibbs) measure
for a dispersive PDE.}
Therefore, while the transformation $\widetilde{\mathcal{J}}$
may be of use for the low regularity well-posedness theory,
it is not suitable for our analysis.
\end{remark}
\begin{remark}\label{REM:res}
\rm
In \cite{MT2}, the second author with Miyaji
considered the Cauchy problem for
the renormalized 3NLS \eqref{3NLS2} in the non-resonant case $\frac{2\beta}{3} \notin \mathbb{Z}$.
By adapting the argument in \cite{TT},
they proved local well-posedness
of \eqref{3NLS2} in $H^s(\mathbb{T})$, $ s > -\frac 16$.
It is of interest to study the transport property of the Gaussian measure
$\mu_s$ in the resonant case $\frac{2\beta}{3} \in \mathbb{Z}$.
In this case, we can write $\mathbf{N}_3(\mathbf{u})$ in \eqref{nonlin1} as
\begin{align*}
\mathbf{N}_3(\mathbf{u})
= \sum_{n\in \mathbb{Z}} e^{inx}
\overline{\widehat{\mathbf{u}}\big(- n+\tfrac{2\beta}{3}\big)}
\bigg\{ \sum_{\substack{n_1 \in \mathbb{Z}\\ n\ne n_1, -n_1 + \frac{2\beta}{3}}}
\widehat{\mathbf{u}}(n_1)\widehat{\mathbf{u}}\big(- n_1+\tfrac{2\beta}{3}\big)\bigg\}.
\end{align*}

\noindent
Since there is no dispersion to exploit on this term,
we do not have a result analogous to Theorem \ref{THM:quasi}
in the resonant case.
We also point out that the well-posedness of the renormalized 3NLS \eqref{3NLS2}
in negative Sobolev spaces
is open in the resonant case.
\end{remark}
In the following,
various constants depend on the parameter $\beta \notin \frac{3}{2}\mathbb{Z}$, but we suppress this dependence
since $\beta$ is fixed.
In view of the time reversibility of the equation,
we only consider positive times.
\section{Proof of Theorem \ref{THM:quasi}}
\label{SEC:Pf}
In this section, we present the proof of Theorem \ref{THM:quasi}
under the non-resonant assumption $\frac 23 \beta \notin \mathbb{Z}$.
Our basic approach is to apply Ramer's result
after exhibiting sufficient smoothing
on the nonlinear part.
As mentioned in Section \ref{SEC:intro},
we first transform the original equation~\eqref{3NLS1}
to~\eqref{3NLS3} or~\eqref{3NLS5}.
We then perform
a normal form reduction
and establish nonlinear smoothing
by exploiting the dispersion of the equation.
We first recall the precise statement
of the main result in \cite{RA}
for readers' convenience.
In the following, we use $HS(H)$ to denote
the space of Hilbert-Schmidt operators on $H$
and $GL (H)$ to denote the space of invertible linear operators on $H$
with a bounded inverse.
\begin{proposition}[Ramer
\cite{RA}]\label{PROP:RA0}
Let $(H, E, \mu)$ be an abstract Wiener space,
where
$\mu$ is the standard Gaussian measure on $E$.
Suppose that $T = \textup{Id} + K: U \to E$
is a continuous (nonlinear) transformation
from some open subset $U\subset E$ into $E$
such that
\begin{itemize}
\item[\textup{(i)}] $T$ is a homeomorphism of $U$ onto an open subset of $E$.
\item[\textup{(ii)}]
We have $K(U) \subset H$
and $K:U \to H$ is continuous.
\item[\textup{(iii)}]
For each $x \in U$,
the map $DK(x)$ is a Hilbert-Schmidt operator on $H$.
Moreover, $DK: x \in U \to DK(x) \in HS(H)$ is continuous.
\item[\textup{(iv)}]
$\textup{Id}_H + DK(x) \in GL(H)$ for each $x \in U$.
\end{itemize}

\noindent
Then, $\mu$ and $\mu\circ T $
are mutually absolutely continuous measures on $U$.
\end{proposition}
\subsection{Basic reduction}
We decompose the solution map $\Phi(t)$ of \eqref{3NLS1}
as
\begin{align*}
\Phi(t) = \mathcal{G}_t^{-1} \circ S(t) \circ \Psi_0(t)
\end{align*}

\noindent
for $ s> 1$
and
\begin{align*}
\Phi(t) = \mathcal{G}_t^{-1} \circ \mathcal{J}_t^{-1}\circ S(t) \circ \Psi_1(t)
\end{align*}

\noindent
for $ \frac 34 < s \leq 1$.
The following lemma shows that,
in order to prove quasi-invariance of the Gaussian measure $\mu_s$ under $\Phi(t)$,
it suffices to establish its quasi-invariance
under $\Psi_0(t)$ or $\Psi_1(t)$.
\begin{lemma}\label{LEM:gauss}
\textup{(i)}
Given a complex-valued mean-zero Gaussian random variable $g$ with variance $\sigma$, i.e.~$g \in \mathcal{N}_\mathbb{C}(0, \sigma )$,
let $ Tg = e^{- i t |g|^2} g$ for some $t \in \mathbb{R}$.
Then, $Tg \in \mathcal{N}_\mathbb{C}(0, \sigma)$.

\smallskip

\noindent
\textup{(ii)}
Let $t \in \mathbb{R}$. Then, the Gaussian measure $\mu_s$ defined in \eqref{gauss0} is invariant under
the linear map
$S(t) = e^{t (\partial_x^3 - i \beta \partial_x^2)}$, the map $\mathcal{G}_t$ in \eqref{gauge1}, and the map $\mathcal{J}_t$
in \eqref{gauge4}.

\smallskip

\noindent
\textup{(iii)}
Let $(X, \mu)$ be a measure space.
Suppose that $T_1$ and $T_2$
are maps on $X$ into itself
such that $\mu$ is quasi-invariant under $T_j$ for each $j = 1, 2$.
Then, $\mu$ is quasi-invariant under $T = T_1 \circ T_2$.
\end{lemma}
\begin{proof}
In view of
Lemmas 4.1, 4.2, 4.4, and 4.5 in \cite{OTz1},
it remains to prove invariance of $\mu_s$ under $\mathcal{J}_t$.
Note that $\mu_s$ can be written as an infinite product of Gaussian measures:
\begin{align*}
\mu_s = \bigotimes_{n \in \mathbb{Z}} \rho_n,
\end{align*}

\noindent
where $\rho_n$ is the probability distribution of $\widehat u_n = \frac{g_n}{\langle n \rangle^s}$ defined in \eqref{gauss1}.
In particular, $\rho_n$ is a mean-zero Gaussian probability measure on $\mathbb{C}$ with variance $2 \langle n \rangle^{-2s}$.
Note that the action of $\mathcal{J}_t$ on $\widehat u_n$ is given by $T$ in Part (i),
which leaves the Gaussian measure $\rho_n$ invariant.
Hence,
we conclude that $\mu_s$ is invariant under~$\mathcal{J}_t$.
\end{proof}
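The invariance of $\rho_n$ under the map $T$ in Part (i) can also be seen directly in polar coordinates: writing $z = r e^{i\theta}$, the map $T$ acts as $(r, \theta) \mapsto (r, \theta - t r^2)$, which preserves $r$ and, for each fixed $r$, acts as a rotation in $\theta$. Since the density of $\rho_n$ depends only on $r = |z|$ and the measure $r \, dr \, d\theta$ is preserved under this map, so is $\rho_n$.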
Fix $s > \frac 34$ and $\sigma < s - \frac{1}{2}$ sufficiently close to $s - \frac 12$.
First, recall that $\mu_s$ is a probability measure on $H^{\sigma}(\mathbb{T})$.
Given $R>0$, let $B_R$ be the open ball of radius $R$ centered at the origin in
$H^{\sigma}(\mathbb{T})$.
We also recall
from \eqref{nonlin1c}, \eqref{3NLS3}, and \eqref{3NLS5} that
\begin{align*}
\Psi_j(t) = \text{Id} + K_j(t), \quad j = 0, 1,
\end{align*}
\noindent
where $K_j(t)$ is given by
\begin{align*}
K_0(t)(u_0)
& = \int_0^t \mathcal{N}_0(v)(t') dt'
+ \int_0^t \mathcal{R}_0(v)(t') dt'
=: \mathfrak{N}_0(v)(t) + \mathfrak{R}_0(v)(t), \\
K_1(t)(u_0)
& = \int_0^t \mathcal{N}_1(w)(t') dt' + \int_0^t \mathcal{N}_2(w)(t') dt'
=: \mathfrak{N}_1(w)(t)
+ \mathfrak{N}_2(w)(t),
\end{align*}

\noindent
where $v$ and $w$ are the solutions to \eqref{3NLS3} and \eqref{3NLS5}
with initial data $u_0$.
Then, the proof of Theorem \ref{THM:quasi} is reduced to proving the following proposition,
guaranteeing
the hypotheses of Ramer's result (Proposition \ref{PROP:RA0}).
See the proof
of Theorem~1.2 for $s > 1$ in \cite[Subsection~5.2]{OTz1}.
\begin{proposition}\label{PROP:RA}
Given $s > \frac34$, let $j = 0$ if $s > 1$ and $j = 1$ if $\frac 34 < s \leq 1$.
Given $R > 0$,
there exists $\tau = \tau(R) > 0$ such that,
for each $t \in (0, \tau(R)]$, the following statements hold:
\begin{itemize}
\item[(i)]
$\Psi_j(t)$ is a homeomorphism of $B_R$
onto an open subset of $H^{\sigma}(\mathbb{T})$.

\smallskip

\item[(ii)]
We have $K_j(t) (B_R) \subset H^s(\mathbb{T})$ and $K_j(t): B_R \to H^s(\mathbb{T})$ is continuous.

\smallskip

\item[(iii)]
For each $u_0 \in B_R$,
the map $DK_j(t)|_{u_0}$ is a Hilbert-Schmidt operator on $H^s(\mathbb{T})$.
Moreover,
$DK_j(t): u_0 \in B_R \mapsto DK_j(t)|_{u_0} \in HS(H^s(\mathbb{T}))$
is continuous.

\smallskip

\item[(iv)] $\textup{Id}_{H^s} + DK_j(t)|_{u_0} \in GL (H^s(\mathbb{T}))$
for each $u_0 \in B_R$.
\end{itemize}
\end{proposition}
Furthermore, arguing as in the proof of Proposition 5.3 in \cite{OTz1},
we see that Proposition~\ref{PROP:RA} follows
once we prove the following nonlinear estimates (Lemmas \ref{LEM:Znonlin} and \ref{LEM:Xnonlin}),
exhibiting $(1+\varepsilon)$-smoothing.
See also Remark \ref{REM:Ramer}.
The first lemma
shows nonlinear smoothing for the $v$-equation \varepsilonqref{3NLS3}.
In particular, when $\sigma > \frac 12$,
this lemma exhibits nonlinear smoothing of order $1+\varepsilon$,
yielding Proposition~\ref{PROP:RA} and hence Theorem \ref{THM:quasi} for $s > 1$.
\begin{lemma}\label{LEM:Znonlin}
Let $\sigma> \frac 12$.
Then, we have
\begin{align}
\| \mathfrak{N}_0(v)(t) \|_{H^{\sigma+2}} & \lesssim
\|v(0)\|_{H^\sigma}^3 + \|v (t)\|_{H^\sigma}^3 + t
\sup_{t' \in [0, t]} \|v (t')\|_{H^\sigma}^5,
\label{Znonlin2}\\
\| \mathfrak{R}_0(v)(t) \|_{H^{3\sigma}} & \lesssim
t \sup_{t' \in [0, t]} \|v (t')\|_{H^\sigma}^3.
\label{Znonlin3}
\end{align}
\end{lemma}
The proof of Lemma \ref{LEM:Znonlin}
follows closely that of Lemma 5.1 in \cite{OTz1}.
Namely, we apply a normal form reduction to \eqref{3NLS3}
and convert the cubic non-resonant nonlinearity into
a quintic nonlinearity (plus cubic boundary terms).
See \eqref{Znonlin1} below.
While we had a gain of two derivatives for 4NLS \eqref{4NLS} in \cite{OTz1},
this is not the case for our problem, due to the weaker dispersion.
See Lemma \ref{LEM:phase}.
On the other hand,
the resonant part $\mathfrak{R}_0(v)$ is estimated trivially via $\ell^2_n \subset \ell^6_n$,
as in \cite{OTz1}.
Note that the amount of smoothing for the resonant part $\mathfrak{R}_0(v)$
is $2\sigma$, which imposes the regularity restriction $\sigma > \frac 12$
in order to have $(1+ \varepsilon)$-smoothing.
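Indeed, \eqref{Znonlin3} bounds $\mathfrak{R}_0(v)$ in $H^{3\sigma}$ in terms of the $H^\sigma$-norm,
i.e.~a gain of $2\sigma$ derivatives, and
\begin{align*}
2\sigma \geq 1 + \varepsilon
\quad \Longleftrightarrow \quad
\sigma \geq \frac{1+\varepsilon}{2},
\end{align*}
\noindent
so that for $\sigma > \frac 12$ we may take $\varepsilon = 2\sigma - 1 > 0$.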
The next lemma
shows nonlinear smoothing for
the $w$-equation \eqref{3NLS5},
where the resonant part responsible for the regularity restriction has been removed.
As in the case of Lemma \ref{LEM:Znonlin},
we perform a normal form reduction.
However, more care is needed
due to the lower regularity under consideration.
\begin{lemma}\label{LEM:Xnonlin}
Let $\frac 14 < \sigma \leq \frac 12 $.
Then, we have
\begin{align*}
\| \mathfrak{N}_1(w)(t) \|_{H^{\sigma+1+}}
& \lesssim
\sum_{j = 0}^2 t^j \sup_{t' \in [0, t]} \|w (t')\|_{H^\sigma}^{2j+3}, \\
\| \mathfrak{N}_2(w)(t) \|_{H^{\sigma+1+}}
& \lesssim
\sum_{j = 1}^3 t^j \sup_{t' \in [0, t]} \|w (t')\|_{H^\sigma}^{2j+3}.
\end{align*}
\end{lemma}
We present the proofs of Lemmas \ref{LEM:Znonlin} and \ref{LEM:Xnonlin}
in the next subsections.
Before proceeding to the proofs of the nonlinear estimates,
we state the following elementary lemma on the phase function $\phi(\bar n)$
under the non-resonance assumption $\frac 23 \beta \notin\mathbb{Z}$.
\begin{lemma}\label{LEM:phase}
Let $\phi(\bar n)$ and $\Gamma(n)$ be as in \eqref{phi1} and \eqref{Gam1}.
Then, one of the following holds on $\Gamma(n)$\textup{:}
\smallskip
\noindent\begin{itemize}
\item[\textup{(i)}]
With $n_{\max} = \max(|n|, |n_1|, |n_2|, |n_3|)$, we have
\begin{align}
&|\phi(\bar n)| \gtrsim n_{\max}^2 \lambda,
\label{phi3}
\end{align}
\noindent
where $\lambda = \min\big( |n - n_1|, |n- n_3|, |n_1 + n_3 - \tfrac23\beta|\big)$.
\smallskip
\item[\textup{(ii)}]
$|n| \sim |n_1|\sim |n_2|\sim|n_3|$
and
\begin{align}
&|\phi(\bar n)| \gtrsim n_{\max} \Lambda,
\label{phi4}
\end{align}
\noindent
where $\Lambda = \min\big( |n - n_1| |n- n_3|,
|n- n_3||n_1 + n_3 - \tfrac23\beta|,
|n - n_1| |n_1 + n_3 - \tfrac23\beta|\big)$.
\end{itemize}
\end{lemma}
The proof of Lemma \ref{LEM:phase} follows immediately
from the factorization in \eqref{phi1}.
See also \cite[(8.21), (8.22)]{BO93no2} for a similar property of the phase function
for the modified KdV equation.
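For the reader's convenience, we sketch the algebra behind this factorization;
here we assume, for illustration, that the phase in \eqref{phi1} is generated by
the dispersion symbol $p(n) = n^3 - \beta n^2$ (up to an overall sign and constant).
On the hyperplane $n = n_1 - n_2 + n_3$, the elementary identities
\begin{align*}
n^3 - n_1^3 + n_2^3 - n_3^3 & = 3(n - n_1)(n - n_3)(n_1 + n_3), \\
n^2 - n_1^2 + n_2^2 - n_3^2 & = 2(n - n_1)(n - n_3)
\end{align*}
\noindent
yield
\begin{align*}
\phi(\bar n) = p(n) - p(n_1) + p(n_2) - p(n_3)
= 3(n - n_1)(n - n_3)\Big(n_1 + n_3 - \frac{2}{3}\beta\Big),
\end{align*}
\noindent
whose three factors are precisely those entering $\lambda$ and $\Lambda$;
the non-resonance assumption $\frac 23 \beta \notin \mathbb{Z}$ guarantees
that the last factor never vanishes.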
\subsection{Nonlinear estimate: Part 1}
In this subsection, we present the proof of Lemma~\ref{LEM:Znonlin}.
Fix $\sigma > \frac 12$.
Writing \eqref{3NLS3} in the integral form, we have
\begin{align*}
\widehat v_n (t)
& = \widehat v_n(0)
-i \int_0^t \sum_{\Gamma(n)}
e^{it' \phi(\bar n) } \widehat v_{n_1}\overline{\widehat v_{n_2}}\widehat v_{n_3}(t') dt'
+i \int_0^t | \widehat v_{n}|^2\widehat v_{n}(t') dt'\notag\\
& =: \widehat v_n(0) +\widehat{\mathfrak{N}_0(v)}(n, t)+\widehat{\mathfrak{R}_0(v)}(n, t).
\end{align*}
\noindent
In view of Lemma \ref{LEM:phase},
the non-resonant part $\mathfrak{N}_0(v)$ exhibits a non-trivial oscillation
caused by the phase function $\phi(\bar n)$.
We exploit this fast oscillation via
a normal form reduction,
i.e.~by integrating by parts:
\begin{align}
\widehat {\mathfrak{N}_0(v)}(n, t)
& = - \sum_{\Gamma( n)}
\frac{ e^{it' \phi (\bar n) } }{\phi(\bar n)}
\widehat v_{n_1}(t')\overline{\widehat v_{n_2}(t')}\widehat v_{n_3}(t')\bigg|_{t' = 0}^t
+ \sum_{\Gamma( n)} \int_0^t
\frac{ e^{it' \phi(\bar n) } }{\phi(\bar n)}
\partial_t( \widehat v_{n_1}\overline{\widehat v_{n_2}}\widehat v_{n_3})(t') dt' \notag\\
& = -\sum_{\Gamma(n)}
\frac{ e^{i t \phi( \bar n) } }{\phi(\bar n)}
\widehat v_{n_1}(t)\overline{\widehat v_{n_2}(t)}\widehat v_{n_3}(t)
+ \sum_{\Gamma( n)}
\frac{ 1}{\phi(\bar n)}
\widehat v_{n_1}(0)\overline{\widehat v_{n_2}(0)}\widehat v_{n_3}(0) \notag \\
& \hphantom{X}
+ 2 \int_0^t \sum_{\Gamma(n)}
\frac{ e^{i t'\phi(\bar n) } }{\phi(\bar n)}
\big\{ \widehat{\mathcal{N}_0(v)}(n_1) + \widehat{\mathcal{R}_0(v)}(n_1)\big\}\overline{\widehat v_{n_2}}\widehat v_{n_3}(t') dt' \notag\\
& \hphantom{X}
+ \int_0^t \sum_{\Gamma( n)}
\frac{ e^{i t' \phi(\bar n) } }{\phi(\bar n)}
\widehat v_{n_1}\overline{\big\{\widehat {\mathcal{N}_0(v)}(n_2) + \widehat {\mathcal{R}_0(v)}(n_2)\big\}}\widehat v_{n_3}(t') dt'\notag\\
& =: \widehat{\text{I}}(n, t) - \widehat{\text{I}}(n, 0)
+ \widehat{\text{II}}(n, t) + \widehat{\text{III}}(n, t).
\label{Znonlin1}
\end{align}
\noindent
In view of Lemma \ref{LEM:phase},
the phase function $\phi(\bar n)$ appearing in the denominators
allows us to exhibit a smoothing for $\mathfrak{N}_0(v)$.
In the computation above, we formally
switched the order of the time integration
and the summation.
Moreover, we applied the product rule for time differentiation
at the second equality.
These steps can be justified,
provided $\sigma \geq \frac 16$. See~\cite{OTz1} for details.
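Schematically, the normal form reduction rests on the elementary identity,
valid whenever $\phi(\bar n) \ne 0$,
\begin{align*}
\int_0^t e^{i t' \phi(\bar n)} F(t')\, dt'
= \frac{e^{i t \phi(\bar n)}}{i \phi(\bar n)}\, F(t)
- \frac{F(0)}{i \phi(\bar n)}
- \int_0^t \frac{e^{i t' \phi(\bar n)}}{i \phi(\bar n)}\, \partial_t F(t')\, dt',
\end{align*}
\noindent
applied with $F = \widehat v_{n_1}\overline{\widehat v_{n_2}}\widehat v_{n_3}$;
substituting the equation \eqref{3NLS3} for $\partial_t \widehat v_{n_j}$
is what produces the quintic terms in \eqref{Znonlin1}.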
We now present the proof of
Lemma \ref{LEM:Znonlin}.
\begin{proof}[Proof of Lemma \ref{LEM:Znonlin}]
We first estimate the non-resonant term $\mathfrak{N}_0(v)$ in \eqref{Znonlin2}.
If $\phi(\bar n)$ satisfies \eqref{phi3} in Lemma \ref{LEM:phase},
then we can proceed as in the proof of Lemma 5.1 in \cite{OTz1}
and establish \eqref{Znonlin2},
since that proof only requires a gain of two derivatives from
the phase function $\phi(\bar n)$ and the algebra property of $H^\sigma(\mathbb{T})$, $\sigma > \frac 12$.
Moreover, the resonant term $\mathfrak{R}_0(v)$ in \eqref{Znonlin3}
can be estimated exactly as in \cite{OTz1} via $\ell^2_n \subset \ell^6_n$.
Hence, it remains to prove \eqref{Znonlin2}
under the assumption that
$\phi(\bar n)$ satisfies \eqref{phi4} in Lemma \ref{LEM:phase}.
In this case, we have a smaller gain of derivatives from $\phi(\bar n)$ in the denominator
and hence we need to proceed with more care.
Without loss of generality, assume
\begin{align*}
|\phi(\bar n)| \gtrsim n_{\max} |n - n_1| |n- n_3|.
\end{align*}
\noindent
Recall that we have $|n| \sim |n_1|\sim |n_2|\sim|n_3|$ in this case.
We first consider the term $\text{I}$.
It follows from the Cauchy--Schwarz inequality that
\begin{align}
\| \text{I} (t)\|_{H^{3\sigma+1}}
&
\lesssim \bigg\| \jb{n}^{3\sigma} \sum_{\Gamma(n)}
\frac{1}{|n - n_1| |n- n_3|}\prod_{j = 1}^3 |\widehat v_{n_j}(t)| \bigg\|_{\ell^2_n} \notag \\
&
\lesssim
\sup_{n \in \mathbb{Z}} \bigg(\sum_{\Gamma(n)}
\frac{1}{|n - n_1|^2 |n- n_3|^2}\bigg)^\frac{1}{2}
\| v(t)\|_{H^\sigma}^3\notag \\
& \lesssim \| v(t)\|_{H^\sigma}^3.
\label{Znonlin4}
\end{align}
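Here, the last step uses that the constant is finite uniformly in $n$:
on $\Gamma(n)$ the frequency $n_2 = n_1 + n_3 - n$ is determined by $(n_1, n_3)$
and $n \ne n_1, n_3$, so that
\begin{align*}
\sup_{n \in \mathbb{Z}} \sum_{\Gamma(n)}
\frac{1}{|n - n_1|^2 |n- n_3|^2}
\leq \bigg( \sum_{k \in \mathbb{Z}\setminus \{0\}} \frac{1}{k^2} \bigg)^2 < \infty.
\end{align*}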
Next, we consider $\text{II}$.
The contribution $\text{II}_{\textup{res}}$ from $\mathcal{R}_0(v)$ can be estimated as in \eqref{Znonlin4}.
Indeed, with \eqref{3NLS3}, the Cauchy--Schwarz inequality, and $\ell^2_n \subset \ell^6_n$, we have
\begin{align*}
\| \text{II}_{\textup{res}} (t)\|_{H^{5\sigma+1}}
&
\lesssim t \sup_{t'\in [0, t]}
\bigg\| \jb{n}^{5\sigma} \sum_{\Gamma(n)}
\frac{1}{|n - n_1| |n- n_3|}
|\widehat v_{n_1}(t')|^3
\prod_{j = 2 }^3 |\widehat v_{n_j}(t')| \bigg\|_{\ell^2_n} \notag \\
&
\lesssim t\,
\sup_{n \in \mathbb{Z}} \bigg(\sum_{\Gamma(n)}
\frac{1}{|n - n_1|^2 |n- n_3|^2}\bigg)^\frac{1}{2}
\cdot \sup_{t'\in [0, t]} \| v(t')\|_{H^\sigma}^5\notag \\
& \lesssim t \sup_{t'\in [0, t]} \| v(t')\|_{H^\sigma}^5.
\end{align*}
\noindent
We now estimate
the contribution $\text{II}_{\textup{nr}}$ from $\mathcal{N}_0(v)$ in $\text{II}$.
Proceeding as in \eqref{Znonlin4} with the algebra property of $H^\sigma(\mathbb{T})$, $\sigma > \frac 12$, we have
\begin{align*}
\| \text{II}_{\textup{nr}}(t) \|_{H^{3\sigma+1}}
&
\lesssim t \sup_{t'\in [0, t]}
\bigg\| \jb{n}^{3\sigma} \sum_{\substack{n = n_1 - n_2 + n_3\\ n \ne n_1, n_3} }
\frac{1}{|n - n_1| |n- n_3|}
|\widehat {\mathcal{N}_0(v)}(n_1, t')|
\prod_{j = 2 }^3 |\widehat v_{n_j}(t')| \bigg\|_{\ell^2_n} \notag \\
&
\lesssim
t \sup_{t'\in [0, t]} \| \mathcal{N}_0(v)(t')\|_{H^\sigma} \| v(t')\|_{H^\sigma}^2\notag \\
& \lesssim t \sup_{t'\in [0, t]} \| v(t')\|_{H^\sigma}^5.
\end{align*}
\noindent
The fourth term $\text{III}$ in \eqref{Znonlin1} can be estimated in an analogous manner.
Finally,
noting that $3 \sigma + 1 > \sigma + 2$ for $\sigma > \frac 12$,
we conclude that the estimate \eqref{Znonlin2} holds for $\sigma > \frac 12$.
\end{proof}
\subsection{Nonlinear estimate: Part 2}
In this subsection, we present the proof of Lemma~\ref{LEM:Xnonlin}.
Fix $\sigma > \frac 14$.
Writing \eqref{3NLS5} in the integral form, we have
\begin{align}
\widehat w_n (t)
& = \widehat w_n(0)
+ \int_0^t \widehat{\mathcal{N}_1(w)} (n, t') dt'
+ \int_0^t \widehat{\mathcal{N}_2(w)} (n, t') dt'\notag\\
& =: \widehat w_n(0) +\widehat{\mathfrak{N}_1(w)}(n, t)
+ \widehat{\mathfrak{N}_2(w)}(n, t).
\label{XNLS1}
\end{align}
\noindent
As in the previous subsection,
our basic strategy is to apply a normal form reduction.
The main difference comes from the time-dependent nature of
the phase function $\psi(\bar n)$ in \eqref{psi1}.
As a shorthand, we set
\[ \theta(\bar n) = \phi(\bar n) + \psi (\bar n)\]
\noindent
and
\begin{align}
\Xi(n, t) = 2\,\text{Im} \bigg(\sum_{(m_1, m_2, m_3) \in \Gamma(n)}
e^{i t \theta(n, \bar m )}
\widehat w_{m_1}\overline{\widehat w_{m_2}}\widehat w_{m_3}
\overline{\widehat w_{n}}\bigg),
\label{xi1}
\end{align}
\noindent
where $(n, \bar m) = (n, m_1, m_2, m_3)$.
Then,
from \eqref{3NLS5},
we have
\begin{align}
\widehat {\mathfrak{N}_1(w)}(n, t)
& =
-i \int_0^t \sum_{\Gamma(n)} \partial_t \bigg(\frac{e^{i t' \phi(\bar n)}}{i \phi(\bar n)}\bigg) e^{i t' \psi(\bar n)} \widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}(t') dt' \notag\\
& = -\sum_{\Gamma(n)}
\frac{ e^{i t \theta( \bar n) } }{\phi(\bar n)}
\widehat w_{n_1}(t)\overline{\widehat w_{n_2}(t)}\widehat w_{n_3}(t)
+ \sum_{\Gamma( n)}
\frac{ 1}{\phi(\bar n)}
\widehat w_{n_1}(0)\overline{\widehat w_{n_2}(0)}\widehat w_{n_3}(0) \notag \\
& \hphantom{X}
+i \int_0^t \sum_{\Gamma(n)}
\frac{ e^{i t' \theta( \bar n) } }{\phi(\bar n)}
\big\{ \psi(\bar n) + t' \partial_t \psi(\bar n)\big\} \widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}(t') dt' \notag\\
& \hphantom{X}
- 2i \int_0^t \sum_{\substack{(n_1, n_2, n_3) \in \Gamma(n)\\(m_1, m_2, m_3) \in \Gamma(n_1)}}
\frac{ e^{i t' (\theta(\bar n) + \theta(n_1, \bar m ))} }{\phi(\bar n)}
( \widehat w_{m_1}\overline{\widehat w_{m_2}}\widehat w_{m_3})
\overline{\widehat w_{n_2}}\widehat w_{n_3}(t') dt' \notag\\
& \hphantom{X}
+ i \int_0^t \sum_{\substack{(n_1, n_2, n_3) \in \Gamma(n)\\(m_1, m_2, m_3) \in \Gamma(n_2)}}
\frac{ e^{i t' (\theta(\bar n) - \theta(n_2, \bar m ))} }{\phi(\bar n)}
\widehat w_{n_1}(\overline{ \widehat w_{m_1}}\widehat w_{m_2}\overline{\widehat w_{m_3}})
\widehat w_{n_3}(t') dt'\notag\\
& \hphantom{X}
- 2i \int_0^t t'
\sum_{ \Gamma(n)}
\frac{ e^{i t' \theta(\bar n) } }{\phi(\bar n)}
\Xi(n_1, t')
\widehat w_{n_1} \overline{\widehat w_{n_2}}\widehat w_{n_3}
(t') dt' \notag\\
& \hphantom{X}
+ i \int_0^t t'
\sum_{ \Gamma(n)}
\frac{ e^{i t' \theta(\bar n) } }{\phi(\bar n)}
\Xi(n_2, t')\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}(t') dt'\notag\\
& =: \widehat{\text{I}}(n, t) - \widehat{\text{I}}(n, 0) + \widehat{\text{II}}(n, t) + \widehat{\text{III}}_1(n, t) \notag\\
& \hphantom{X}+ \widehat{\text{III}}_2(n, t) + \widehat{\text{IV}}_1(n, t) + \widehat{\text{IV}}_2(n, t).
\label{Xnonlin1}
\end{align}
\noindent
As in \eqref{Znonlin1},
switching the order of the time integration
and the summation in the computation above
can be justified for $\sigma \geq \frac 16$.
See~\cite{OTz1}.
Similarly, we have
\begin{align}
\widehat {\mathfrak{N}_2(w)}(n, t)
& =
-2 i \int_0^t t' \widehat w_n \,\text{Im}\bigg( \sum_{\Gamma(n)} \partial_t \bigg(\frac{e^{i t' \phi(\bar n)}}{i \phi(\bar n)}\bigg) e^{i t' \psi(\bar n)}
\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t') dt' \notag\\
& =
2 i t \, \widehat w_n \,\text{Re}\bigg( \sum_{\Gamma(n)}
\frac{e^{i t \theta(\bar n)}}{ \phi(\bar n)}
\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t) \notag\\
& \hphantom{X}
-2 i \int_0^t \widehat w_n \,\text{Re} \bigg( \sum_{\Gamma(n)} \frac{e^{i t' \theta(\bar n)}}{ \phi(\bar n)} e^{i t' \psi(\bar n)}
\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t') dt' \notag\\
& \hphantom{X}
+ 2 i \int_0^t t' \widehat w_n \,\text{Im}\bigg( \sum_{\Gamma(n)}
\frac{e^{i t' \theta(\bar n)}}{ \phi(\bar n)}
\big\{ \psi(\bar n) + t' \partial_t \psi(\bar n)\big\}
\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t') dt' \notag\\
& \hphantom{X}
-2 i \int_0^t t' (\partial_t \widehat w_n) \,\text{Re} \bigg( \sum_{\Gamma(n)} \frac{e^{i t' \theta(\bar n)}}{ \phi(\bar n)} e^{i t' \psi(\bar n)}
\widehat w_{n_1}\overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t') dt' \notag\\
& \hphantom{X}
-4 i \int_0^t t' \widehat w_n \,\text{Re} \bigg( \sum_{\Gamma(n)} \frac{e^{i t' \theta(\bar n)}}{ \phi(\bar n)} e^{i t' \psi(\bar n)}
(\partial_t \widehat w_{n_1}) \overline{\widehat w_{n_2}}\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t') dt' \notag\\
& \hphantom{X}
-2 i \int_0^t t' \widehat w_n \,\text{Re} \bigg( \sum_{\Gamma(n)} \frac{e^{i t' \theta(\bar n)}}{ \phi(\bar n)} e^{i t' \psi(\bar n)}
\widehat w_{n_1} (\overline{\partial_t \widehat w_{n_2}})\widehat w_{n_3}\overline{\widehat w_n}\bigg)(t') dt' \notag\\
& \hphantom{X}
-2 i \int_0^t t' \widehat w_n \,\text{Re} \bigg( \sum_{\Gamma(n)} \frac{e^{i t' \theta(\bar n)}}{ \phi(\bar n)} e^{i t' \psi(\bar n)}
\widehat w_{n_1} \overline{ \widehat w_{n_2}}\widehat w_{n_3}(\overline{\partial_t \widehat w_n})\bigg)(t') dt' \notag\\
& =: \widehat{\widetilde{\text{I}}}(n, t) + \widehat{\widetilde{\text{II}}}(n, t) + \widehat{\widetilde{\text{III}}}(n, t) + \widehat{\widetilde{\text{IV}}_0}(n, t) \notag\\
& \hphantom{X}
+ \widehat{\widetilde{\text{IV}}_1}(n, t) + \widehat{\widetilde{\text{IV}}_2}(n, t) + \widehat{\widetilde{\text{IV}}_3}(n, t).
\label{Ynonlin1}
\end{align}
\noindent
Modulo the extra phase factor $\psi(\bar n)$,
the terms $\text{I}$, $\text{III}_1$, and $\text{III}_2$ in~\eqref{Xnonlin1}
already appear in~\eqref{Znonlin1}.
While the other terms in \eqref{Xnonlin1} and \eqref{Ynonlin1} are new,
it turns out that they can be estimated
in a manner similar to $\text{I}$, $\text{III}_1$, and $\text{III}_2$,
with small modifications.
We now present the proof of
Lemma~\ref{LEM:Xnonlin}.
\begin{proof}[Proof of Lemma \ref{LEM:Xnonlin}]
In the following,
we first estimate the terms $\text{I}$, $\text{III}_1$, and $\text{III}_2$ in \eqref{Xnonlin1}.
We then show how the estimates for the other terms in \eqref{Xnonlin1}
and \eqref{Ynonlin1} follow from those
for $\text{I}$, $\text{III}_1$, and $\text{III}_2$.
\smallskip
\noindent
{\bf Main argument:}
We first consider $\text{I}$ in \eqref{Xnonlin1}.
If $\phi(\bar n)$ satisfies \eqref{phi4},
then we estimate $\text{I}$ as in~\eqref{Znonlin4}.
Next, suppose that $\phi(\bar n)$ satisfies \eqref{phi3}.
Without loss of generality, assume that
\begin{align}
|\phi(\bar n)| \gtrsim n_{\max}^2 |n-n_1|.
\label{phi6}
\end{align}
\noindent
Then, by the Cauchy--Schwarz inequality
with $\jb{n}^{\sigma} \lesssim \max_{j = 1, 2, 3}\jb{n_j}^{\sigma}$ for $\sigma \geq 0$, we have
\begin{align}
\| \text{I} (t)\|_{H^{\sigma+1+}}
&
\lesssim \bigg\| \jb{n}^{\sigma} \sum_{\substack{n = n_1 - n_2 + n_3\\ n \ne n_1, n_3} }
\frac{1}{|n - n_1| n_{\max}^{1-}}\prod_{j = 1}^3 |\widehat w_{n_j}(t)| \bigg\|_{\ell^2_n} \notag \\
&
\lesssim
\sup_{n \in \mathbb{Z}} \bigg(\sum_{\substack{n = n_1 - n_2 + n_3\\ n \ne n_1, n_3} }
\frac{1}{|n - n_1|^2 n_{\max}^{2-}}\bigg)^\frac{1}{2}
\| w(t)\|_{H^\sigma}^3\notag \\
& \lesssim \| w(t)\|_{H^\sigma}^3.
\label{Ynonlin1a}
\end{align}
Next, we consider the fourth term $\text{III}_1$ in \eqref{Xnonlin1}.
The fifth term $\text{III}_2$ in \eqref{Xnonlin1} can be estimated in an analogous manner.
In the following, we only estimate the integrand of $\text{III}_1$.
With a slight abuse of notation, we also denote the integrand by $\text{III}_1$.
\smallskip
\noindent
$\bullet$ {\bf Case (i):}
$\phi(\bar n)$ satisfies \eqref{phi3}.
\\
\indent
We first consider the case that \eqref{phi6} holds.
In this case,
for $\frac 16 < \sigma \leq \frac 12$,
we have
\begin{align}
\frac{\jb{n}^{\sigma+1+}}{|\phi(\bar n)|}\frac{1}{\jb{n_1}^{\sigma - \frac 16}\jb{n_2}^{\sigma}\jb{n_3}^{\sigma}}
\lesssim \frac{1}{\jb{n}^{\frac 12+} |n-n_1|^{1-} \jb{n_{2}}^{\sigma}\jb{n_{3}}^{\frac{1}{2}+}}.
\label{XX1a}
\end{align}
\noindent
By the triangle inequality $\jb{n_1}^{\sigma - \frac 16}
\lesssim \max_{j = 1, 2, 3}\jb{m_j}^{\sigma - \frac 16}$
and
Sobolev's inequality, we have
\begin{align}
\bigg\| \jb{n_1}^{\sigma - \frac{1}{6}}\sum_{(m_1, m_2, m_3) \in \Gamma(n_1)}
\widehat w_{m_1}\overline{\widehat w_{m_2}}\widehat w_{m_3}
\bigg\|_{\ell^\infty_{n_1}} \lesssim
\| w \|_{H^\frac{1}{6}}^2
\| w \|_{H^\sigma}
\leq
\| w \|_{H^\sigma}^3
\label{XX1}
\end{align}
\noindent
for $\sigma \geq \frac 16$.
Then,
it follows from the Cauchy--Schwarz inequality with \eqref{XX1a} and \eqref{XX1} that
\begin{align}
\| \text{III}_1 \|_{H^{\sigma+1+}}
& \lesssim \| w \|_{H^\sigma}^3
\bigg\|
\sum_{(n_1, n_2, n_3) \in \Gamma(n)}
\frac{1}{\jb{n}^{\frac 12+} |n-n_1|^{1-} \jb{n_{2}}^{\sigma}\jb{n_{3}}^{\frac{1}{2}+}}
\prod_{j = 2}^3 \jb{n_j}^\sigma |\widehat w_{n_j}|\bigg\|_{\ell^2_n} \notag\\
& \lesssim \| w \|_{H^{\sigma}}^5
\bigg(\sum_{n, n_3\in \mathbb{Z}}
\sum_{\substack{n_2 \in \mathbb{Z}\\n_2 \ne n_3}}
\frac{1}{\jb{n}^{1+} |n_2 - n_3|^{2-} \jb{n_{2}}^{2\sigma}\jb{n_{3}}^{1+}}
\bigg)^\frac{1}{2}\notag
\intertext{By first summing in $n_2$, then in $n_3$ and $n$,}
&
\lesssim \| w \|_{H^{\sigma}}^5
\label{XX2}
\end{align}
\noindent
for $\frac 16 < \sigma \leq \frac 12$.
The upper bound $\sigma \leq \frac 12$ is by no means sharp, but it suffices
for our purposes.
Next, suppose that
\begin{align*}
|\phi(\bar n)| \gtrsim n_{\max}^2 |n_1 + n_3 - \tfrac 23\beta|
= n_{\max}^2 |n + n_2 - \tfrac 23\beta|.
\end{align*}
\noindent
In this case, we have
\begin{align*}
\frac{\jb{n}^{\sigma+1+}}{|\phi(\bar n)|}\frac{1}{\jb{n_2}^{\sigma}\jb{n_3}^{\sigma}}
\lesssim \frac{1}{\jb{n}^{\frac 12+} \jb{n + n_2 - \tfrac 23\beta}^{1-} \jb{n_{2}}^{\sigma-}\jb{n_{3}}^{\frac{1}{2}+}} .
\end{align*}
\noindent
Then, we can repeat a computation analogous to \eqref{XX2},
once we notice that
\begin{align*}
\bigg(\sum_{n, n_3\in \mathbb{Z}}
\sum_{n_2 \in \mathbb{Z}}
\frac{1}{\jb{n}^{1+} \jb{n + n_2 -\tfrac 23\beta}^{2-} \jb{n_{2}}^{2\sigma-}\jb{n_{3}}^{1+}}
\bigg)^\frac{1}{2}< \infty.
\end{align*}
\noindent
Similarly, we can handle the case
\begin{align*}
|\phi(\bar n)| \gtrsim n_{\max}^2 |n-n_3|
\end{align*}
\noindent
by noting that
\begin{align*}
\frac{\jb{n}^{\sigma+1+}}{|\phi(\bar n)|}\frac{1}{\jb{n_2}^{\sigma}\jb{n_3}^{\sigma}}
\lesssim \frac{1}{\jb{n}^{\frac 12+} \jb{n -n_3}^{1-} \jb{n_{2}}^{\frac{1}{2}+}\jb{n_{3}}^{\sigma-}}
\end{align*}
\noindent
and
\begin{align*}
\bigg(\sum_{n, n_2\in \mathbb{Z}}
\sum_{n_3 \in \mathbb{Z}}
\frac{1}{\jb{n}^{1+} \jb{n -n_3}^{2-} \jb{n_{2}}^{1+}\jb{n_{3}}^{2\sigma-}}
\bigg)^\frac{1}{2}< \infty.
\end{align*}
\smallskip
\noindent
$\bullet$ {\bf Case (ii):}
$\phi(\bar n)$ satisfies \eqref{phi4}.
\\
\indent
In this case, we have $|n| \sim |n_1| \sim |n_2| \sim |n_3|$.
We first consider the case
\[|\phi(\bar n)| \gtrsim n_{\max} |n-n_1| |n-n_3|.\]
\noindent
By the Cauchy--Schwarz inequality, we have
\begin{align}
\| \text{III}_1 \|_{H^{\sigma+1+}}
& \lesssim
\bigg\|
\sum_{(n_1, n_2, n_3) \in \Gamma(n)}
\frac{1}{\jb{n_1}^{\sigma-} |n-n_1| |n-n_3|}\notag\\
& \hphantom{XXXXXXXX}
\times \sum_{(m_1, m_2, m_3) \in \Gamma(n_1)}
\prod_{i = 1}^3|\widehat w_{m_i}|
\prod_{j = 2}^3 \jb{n_j}^\sigma |\widehat w_{n_j}|\bigg\|_{\ell^2_n} \notag\\
& \lesssim \| w \|_{H^{\sigma}}^2
\bigg\|
\frac{1}{\jb{n_1}^{\sigma-} \jb{n-n_1} \jb{n-n_3}}
\sum_{(m_1, m_2, m_3) \in \Gamma(n_1)}
\prod_{i = 1}^3|\widehat w_{m_i}|\bigg\|_{\ell^2_{n, n_1, n_3}}\label{XX3}
\intertext{By summing in $n_3$ and then in $n$,
and
applying H\"older's inequality (in $n_1$) and Young's inequality
(with $\frac{1-2\sigma+}{2} + 2 = \frac{1}{q}+\frac{1}{q}+\frac{1}{q}$),}
& \lesssim \| w \|_{H^{\sigma}}^2
\bigg\| \sum_{(m_1, m_2, m_3) \in \Gamma(n_1)}
\prod_{i = 1}^3|\widehat w_{m_i}|\bigg\|_{\ell^{\frac{2}{1-2\sigma+}}_{n_1}}
\lesssim \| w \|_{H^{\sigma}}^2
\| \widehat w_n \|_{\ell^q_n}^3 \notag
\intertext{By H\"older's inequality,}
&
\lesssim \| w \|_{H^{\sigma}}^2
\big(\| \jb{n}^{-\frac{2-q}{2q}-}\|_{\ell^\frac{2q}{2-q}_n}
\| \jb{n}^{\frac{2-q}{2q}+} \widehat w_n\|_{\ell^2_n}\big)^3\notag\\
& \lesssim \| w \|_{H^{\sigma}}^2
\| w \|_{H^{\frac{1-\sigma+}{3}}}^3
\leq \| w \|_{H^{\sigma}}^5, \notag
\end{align}
\noindent
provided that
$\frac{1-\sigma+}{3} \leq \sigma$.
This
gives the regularity restriction
$\sigma > \frac 14$
stated in the hypothesis.
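For completeness, the arithmetic is
\begin{align*}
\frac{1-\sigma}{3} \leq \sigma
\quad \Longleftrightarrow \quad
1 \leq 4 \sigma
\quad \Longleftrightarrow \quad
\sigma \geq \frac 14,
\end{align*}
\noindent
and the small loss hidden in $\frac{1-\sigma+}{3}$ upgrades this to the strict inequality $\sigma > \frac 14$.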
When $|\phi(\bar n)| \gtrsim n_{\max} |n-n_1| |n_1+ n_3 - \tfrac{2}{3}\beta|$
or
$|\phi(\bar n)| \gtrsim n_{\max} |n-n_3| |n_1+ n_3 - \tfrac{2}{3}\beta|$,
we can proceed as in the computation above,
but we need to sum the corresponding factors at \eqref{XX3}
first in $n$ and then in $n_3$.
\smallskip
\noindent
{\bf Remaining terms:}
We now estimate the remaining terms.
The main idea is to reduce the estimates to the main argument
presented above (for $\text{I}$, $\text{III}_1$, and $\text{III}_2$)
by noticing that
the extra factors appearing in the remaining terms are all bounded in the $\ell^\infty_n$-norm.
From \eqref{psi1}, we have
\begin{align}
\| \psi(\bar n)\|_{\ell^\infty_{\bar n}}
\lesssim \| \widehat w_n\|_{\ell^\infty_n}^2
\leq \| w\|_{L^2}^2,
\label{Ynonlin2}
\end{align}
\noindent
where $\ell^\infty_{\bar n} = \ell^\infty_{n, n_1, n_2, n_3}$.
On the one hand, it follows from
\eqref{gauge5a}, \eqref{psi1}, and \eqref{xi1} that
\begin{align}
\partial_t \psi(\bar n) = - \Xi(n) + \Xi(n_1)- \Xi(n_2)+ \Xi(n_3).
\label{Ynonlin2a}
\end{align}
\noindent
On the other hand,
from \eqref{XX1} with $\sigma = \frac 16$, we have
\begin{align}
\| \Xi(n) \|_{\ell^\infty_{ n}}
\lesssim \| \widehat w_n\|_{\ell^\infty_n} \|w \|_{H^\frac{1}{6}}^3
\leq \|w \|_{H^\frac{1}{6}}^4 .
\label{Ynonlin2b}
\end{align}
\noindent
Combining \eqref{Ynonlin2a} and \eqref{Ynonlin2b}, we obtain
\begin{align}
\| \partial_t \psi(\bar n)\|_{\ell^\infty_{\bar n}}
\lesssim \|w \|_{H^\frac{1}{6}}^4 .
\label{Ynonlin3}
\end{align}
\noindent
Hence, from the estimates \eqref{Znonlin4} and \eqref{Ynonlin1a} on $\text{I}$,
together with \eqref{Ynonlin2} and \eqref{Ynonlin3},
we can estimate $\text{II}$ in~\eqref{Xnonlin1} by
\begin{align*}
\| \text{II} (t) \|_{H^{\sigma+1+}} \lesssim t \sup_{t' \in [0, t]} \|w(t') \|_{H^\sigma}^5 + t^2\sup_{t' \in [0, t]} \|w(t') \|_{H^\sigma}^7
\end{align*}
\noindent
for $\sigma \geq \frac 16$.
Noting that the terms $\text{IV}_j$, $j = 1, 2$, in \eqref{Xnonlin1}
have the same structure
as $\text{I}$ with an extra factor of $\Xi(n_j)$,
it follows from the estimates on $\text{I}$ and \eqref{Ynonlin2b} that
\begin{align*}
\| \text{IV}_j (t) \|_{H^{\sigma+1+}} & \lesssim t^2 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^7
\end{align*}
\noindent
for $j = 1, 2$.
Similarly, by noting that $\widetilde{\text{I}}$, $\widetilde{\text{II}}$, and $\widetilde{\text{III}}$ in \eqref{Ynonlin1} basically have the same structure
as $\text{I}$ in \eqref{Xnonlin1}
with two extra factors of $\widehat w_n$
(and $ \psi(\bar n) + t' \partial_t \psi(\bar n)$ for $\widetilde{\text{III}}$),
we have
\begin{align*}
\| \widetilde{\text{I}} (t) \|_{H^{\sigma+1+}} & \lesssim t \|w(t) \|_{H^\sigma}^5,
\\
\| \widetilde{\text{II}} (t) \|_{H^{\sigma+1+}} & \lesssim t \sup_{t' \in [0, t]} \|w(t') \|_{H^\sigma}^5,
\\
\| \widetilde{\text{III}} (t) \|_{H^{\sigma+1+}} & \lesssim t^ 2 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^7 + t^3 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^9.
\end{align*}
\noindent
As for $\widetilde{\text{IV}}_0$ and $\widetilde{\text{IV}}_3$ in \eqref{Ynonlin1}, we first observe that
\begin{align}
\| \partial_t \widehat w_n\|_{\ell^\infty_n}
\lesssim \| w\|_{H^\frac{1}{6}}^3
+ t \|\widehat w_n\|_{\ell^\infty_n}^2\| w\|_{H^\frac{1}{6}}^3
\lesssim \| w\|_{H^\frac{1}{6}}^3
+ t\| w\|_{H^\frac{1}{6}}^5,
\label{Ynonlin4}
\end{align}
\noindent
which follows from \eqref{3NLS5} and \eqref{XX1}.
Then, by noting that $\widetilde{\text{IV}}_0$ and $\widetilde{\text{IV}}_3$ are basically $\text{I}$
with extra factors of $\partial_t \widehat w_n$ and $\widehat w_n$, we obtain
\begin{align*}
\| \widetilde{\text{IV}}_j (t) \|_{H^{\sigma+1+}} \lesssim t^ 2 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^7 + t^3 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^9
\end{align*}
\noindent
for $j = 0, 3$.
It remains to consider $\widetilde{\text{IV}}_j$, $j = 1, 2$, in \eqref{Ynonlin1}.
Note that they are basically $\text{III}_j$ in \eqref{Xnonlin1},
where we replaced
\begin{align*}
\sum_{(m_1, m_2, m_3) \in \Gamma(n_j)} e^{it' \theta(n_j, \bar m)}
\widehat w_{m_1}\overline{\widehat w_{m_2}}\widehat w_{m_3}
\end{align*}
\noindent
by $\partial_t \widehat w_{n_j}$ and added two extra factors of $\widehat w_n$.
By a small modification of \eqref{Ynonlin4}, we have
\begin{align}
\| \jb{n_j}^{\sigma - \frac 16} \partial_t \widehat w_{n_j}\|_{\ell^\infty_{n_j}}
\lesssim \| w\|_{H^\sigma}^3
+ t\| w\|_{H^\sigma}^5
\label{Ynonlin6}
\end{align}
\noindent
for $\sigma \geq \frac 16$.
Then, by repeating the computation in Case (i) above with \eqref{Ynonlin6}, we obtain
\begin{align}
\| \widetilde{\text{IV}}_j (t) \|_{H^{\sigma+1+}} \lesssim t^ 2 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^7 + t^3 \sup_{t' \in [0, t]}\|w(t') \|_{H^\sigma}^9
\label{Ynonlin7}
\end{align}
\noindent
for $j = 1, 2$ when $\phi(\bar n)$ satisfies \eqref{phi3}.
Lastly, noting from \varepsilonqref{3NLS5}
that the contribution from ${\betaf w}idehat{ \mathbb{N}N_2(w)}(n_j)$ in $\partialrtial_t {\betaf w}idehat w_{n_j}$
is basically ${\betaf w}idehat{ \mathbb{N}N_1(w)}(n_j)$ with two extra factors of ${\betaf w}idehat w_{n_j}$.
Hence,
by repeating the computation in Case (ii),
we also obtain \varepsilonqref{Ynonlin7}
when $\phi(\betaar n)$ satisfies \varepsilonqref{phi4}.
This completes the proof of Lemma \ref{LEM:Xnonlin}.
\varepsilonnd{proof}
\begin{remark}\label{REM:Ramer}\rm
As mentioned above, once we have Lemma \ref{LEM:Xnonlin},
we can prove Proposition~\ref{PROP:RA}
with $j = 1$ and $\frac 34 < s \leq 1$
by repeating the proof of Proposition 5.3 in \cite{OTz1}.
In this case, we need to interpret the nonlinear part
$K_1(t)(u_0)
= \mathfrak{N}_1(w)(t) + \mathfrak{N}_2(w)(t)$ of the dynamics~\eqref{XNLS1}
as those given by the right-hand sides
of \eqref{Xnonlin1} and \eqref{Ynonlin1}.
In particular, in computing the derivative $DK_j(t)|_{u_0}$
for $u_0 \in B_R \subset H^\sigma(\mathbb{T})$,
we need to take derivatives of the complex exponentials such as $e^{i t \psi(\bar n)}$,
since $\psi(\bar n)$ depends on $w$.
While this introduces extra terms,
it does not cause any issue, since
such derivatives can be easily bounded.
For example, let $F(t) = e^{i t \psi(\bar n)}$.
Then, with \eqref{psi1}, we have
\begin{align*}
DF(t)|_{u_0}(\mathbf{w}(0))
& = it F(w)(t) D\psi(\bar n) (t)|_{u_0}(\mathbf{w}(0))\\
& = 2 it F(w)(t) \Re( - \widehat w_n \overline{\widehat {\mathbf{w}}_n}
+ \widehat w_{n_1} \overline{\widehat {\mathbf{w}}_{n_1}}
- \widehat w_{n_2} \overline{\widehat {\mathbf{w}}_{n_2}}+ \widehat w_{n_3} \overline{\widehat {\mathbf{w}}_{n_3}}),
\end{align*}
\noindent
where $\mathbf{w}$ is the solution to the linearized equation
for \eqref{XNLS1} around the solution $w$ to \eqref{XNLS1} with $w|_{t = 0} = u_0$.
Hence, we have
\begin{align}
\big\|DF(t)|_{u_0}(\mathbf{w}(0)) \big\|_{\ell^\infty_{\bar n}}
\lesssim t \| w(t) \|_{L^2} \|\mathbf{w}(t)\|_{L^2}.
\label{Rem1}
\end{align}
\noindent
By combining this with (the proof of) Lemma \ref{LEM:Xnonlin},
we obtain
\begin{align}
\big\| \jb{\partial_x}^{\frac{1}{2}+}DK_1(t)|_{u_0}(\mathbf{w}(0)) \big\|_{H^{s}}
\lesssim
\sum_{j = 0}^4 t^j \sup_{t' \in [0, t]} \|w(t')\|_{H^{s-\frac 12-}}^{2j+2}\|\mathbf{w}(t')\|_{H^{s-\frac 12-}}
\label{Rem2}
\end{align}
\noindent
for
$\frac 34 < s \leq 1$. Note that when differentiation hits
the complex exponentials, \eqref{Rem1} increases the value of $j$
by 1 in the statement of Lemma \ref{LEM:Xnonlin},
and hence we needed to include $j = 4$ in~\eqref{Rem2}.
Once we have \eqref{Rem2}, one can follow the argument in \cite{OTz1}
and prove $DK_1(t)|_{u_0} \in HS(H^s(\mathbb{T}))$
for any $u_0 \in B_R\subset H^{s-\frac{1}{2}-}(\mathbb{T})$.
\end{remark}
\begin{ackno}\rm
T.O.~was supported by the European Research Council (grant no.~637995 ``ProbDynDispEq'').
Y.T.~was partially supported by JSPS KAKENHI Grant-in-Aid for Scientific Research (B) (17H02853) and Grant-in-Aid for Exploratory Research (16K13770).
\end{ackno}
\begin{thebibliography}{99}
\bibitem{Ag}
G.~Agrawal,
{\it Nonlinear Fiber Optics}, Fifth Edition, Elsevier Academic Press, Oxford, 2013.
\bibitem{AF}
L.~Ambrosio, A.~Figalli,
{\it On flows associated to Sobolev vector fields in Wiener spaces: an approach \`a la DiPerna-Lions,} J. Funct. Anal. 256 (2009), no. 1, 179--214.
\bibitem{BIT} A.~Babin, A.~Ilyin, E.~Titi, {\it On the regularization mechanism for the periodic Korteweg-de Vries equation},
Comm. Pure Appl. Math. 64 (2011), no. 5, 591--648.
\bibitem{BOP4}
\'A.~B\'enyi, T.~Oh, O.~Pocovnicu,
{\it On the probabilistic Cauchy theory for nonlinear dispersive PDEs}, Landscapes of Time-Frequency Analysis, 1--32, Appl. Numer. Harmon. Anal., Birkh\"auser/Springer, Cham, 2019.
\bibitem{Bog}
V.~Bogachev, {\it Gaussian measures,} Mathematical Surveys and Monographs, 62. American Mathematical Society, Providence, RI, 1998. xii+433 pp.
\bibitem{BO93no2}
J.~Bourgain,
{\it Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations II: The KdV-equation}, Geom. Funct. Anal. 3 (1993), 209--262.
\bibitem{BO94}
J.~Bourgain,
{\it Periodic nonlinear Schr\"odinger equation and invariant measures},
Comm. Math. Phys. 166 (1994), no. 1, 1--26.
\bibitem{BO96}
J.~Bourgain,
{\it Invariant measures for the 2D-defocusing nonlinear Schr\"odinger equation},
Comm. Math. Phys. 176 (1996), no. 2, 421--445.
\bibitem{CM}
R.~Cameron, W.~Martin,
{\it Transformations of Wiener integrals under translations}, Ann. of Math. (2) 45 (1944), 386--396.
\bibitem{CGKO}
J.~Chung, Z.~Guo, S.~Kwon,
{\it Normal form approach to global well-posedness of the quadratic derivative nonlinear Schr\"odinger equation on the circle,}
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire 34 (2017), 1273--1297.
\bibitem{Cru1}
A.B.~Cruzeiro, {\it \'Equations diff\'erentielles ordinaires: non explosion et mesures quasi-invariantes,}
(French)
J. Funct. Anal. 54 (1983), no. 2, 193--205.
\bibitem{Cru2}
A.B.~Cruzeiro, {\it \'Equations diff\'erentielles sur l'espace de Wiener et formules de Cameron-Martin non-lin\'eaires},
(French)
J. Funct. Anal. 54 (1983), no. 2, 206--227.
\bibitem{ET}
M.B.~Erdo\u{g}an, N.~Tzirakis, {\it Global smoothing for the periodic KdV evolution,}
Int. Math. Res. Not. IMRN 2013, no. 20, 4589--4614.
\bibitem{FT}
J.~Forlano, W.~Trenberth,
{\it On the transport of Gaussian measures under the one-dimensional fractional nonlinear Schr\"odinger equations},
arXiv:1812.06877 [math.AP].
\bibitem{GROSS} L.~Gross, {\it Abstract Wiener spaces,}
Proc. 5th Berkeley Sym. Math. Stat. Prob. 2 (1965), 31--42.
\bibitem{GOTW}
T.~Gunaratnam, T.~Oh, N.~Tzvetkov, H.~Weber,
{\it Quasi-invariant Gaussian measures for the nonlinear wave equation in three dimensions},
arXiv:1808.03158 [math.PR].
\bibitem{GKO}
Z.~Guo, S.~Kwon, T.~Oh,
{\it Poincar\'e-Dulac normal form reduction for unconditional well-posedness of the periodic cubic NLS,} Comm. Math. Phys. 322 (2013), no. 1, 19--48.
\bibitem{GO}
Z.~Guo, T.~Oh, {\it
Non-existence of solutions for the periodic cubic nonlinear Schr\"odinger equation below $L^2$},
Int. Math. Res. Not. 2018, no. 6, 1656--1729.
\bibitem{HK}
A.~Hasegawa, Y.~Kodama,
{\it Signal transmission by optical solitons in monomode fiber}, Proc. IEEE 69 (1981), 1145--1150.
\bibitem{Kuo}
H.~Kuo, {\it Integration theory on infinite-dimensional manifolds,}
Trans. Amer. Math. Soc. 159 (1971), 57--78.
\bibitem{Kuo2}
H.~Kuo, {\it Gaussian measures in Banach spaces,} Lecture Notes in Mathematics, Vol. 463. Springer-Verlag, Berlin-New York, 1975. vi+224 pp.
\bibitem{KO}
S.~Kwon, T.~Oh,
{\it On unconditional well-posedness of modified KdV,} Int. Math. Res. Not. 2012, no. 15, 3509--3534.
\bibitem{LMKEHT}
F.~Leo, A.~Mussot, P.~Kockaert, P.~Emplit, M.~Haelterman, M.~Taki,
{\it Nonlinear symmetry breaking induced by third-order dispersion in optical fiber cavities},
Phys. Rev. Lett. 110 (2013), 104103.
\bibitem{MS}
C.~Mili\'an, D.~Skryabin,
{\it Soliton families and resonant radiation in a micro-ring resonator near zero group-velocity dispersion},
Opt. Express 22 (2014), 3732--3739.
\bibitem{MT1}
T.~Miyaji, Y.~Tsutsumi,
{\it Existence of global solutions and global attractor for the third order Lugiato-Lefever equation on $\mathbf{T}$},
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire 34 (2017), no. 7, 1707--1725.
\bibitem{MT2}
T.~Miyaji, Y.~Tsutsumi,
{\it Local well-posedness of the NLS equation with third order dispersion
in negative Sobolev spaces},
Differential Integral Equations 31 (2018), no. 1-2, 111--132.
\bibitem{NTT}
K.~Nakanishi, H.~Takaoka, Y.~Tsutsumi,
{\it Local well-posedness in low regularity of the mKdV equation with periodic boundary condition,} Discrete Contin. Dyn. Syst. 28 (2010), no. 4, 1635--1654.
\bibitem{OSTz}
T.~Oh, P.~Sosoe, N.~Tzvetkov,
{\it An optimal regularity result on the quasi-invariant Gaussian measures for the cubic fourth order
nonlinear Schr\"odinger equation,}
J. \'Ec. polytech. Math. 5 (2018), 793--841.
\bibitem{OTz1}
T.~Oh, N.~Tzvetkov,
{\it Quasi-invariant Gaussian measures for the cubic fourth order nonlinear Schr\"odinger equation,}
Probab. Theory Related Fields 169 (2017), 1121--1168.
\bibitem{OTz2}
T.~Oh, N.~Tzvetkov,
{\it On the transport of Gaussian measures under the flow of Hamiltonian PDEs},
S\'emin. \'Equ. D\'eriv. Partielles. 2015--2016, Exp. No. 6, 9 pp.
\bibitem{OTz3}
T.~Oh, N.~Tzvetkov,
{\it Quasi-invariant Gaussian measures for the two-dimensional defocusing cubic nonlinear wave equation,}
to appear in J. Eur. Math. Soc.
\bibitem{Oi}
M.~Oikawa,
{\it Effect of the third-order dispersion on the nonlinear Schr\"odinger equation},
J. Phys. Soc. Jpn. 62 (1993), 2324--2333.
\bibitem{PTV}
F.~Planchon, N.~Tzvetkov, N.~Visciglia,
{\it Transport of Gaussian measures by the flow of the nonlinear
Schr\"odinger equation}, arXiv:1810.00526 [math.AP].
\bibitem{RA}
R.~Ramer, {\it On nonlinear transformations of Gaussian measures},
J. Functional Analysis 15 (1974), 166--187.
\bibitem{TT}
H.~Takaoka, Y.~Tsutsumi,
{\it Well-posedness of the Cauchy problem for the modified KdV equation with periodic boundary condition,} Int. Math. Res. Not. 2004, no. 56, 3009--3040.
\bibitem{TzBBM}
N.~Tzvetkov, {\it Quasi-invariant Gaussian measures for one dimensional Hamiltonian PDE's,}
Forum Math. Sigma 3 (2015), e28, 35 pp.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Comparison of probabilistic and deterministic point sets}
\author{Peter Grabner\fnref{thanks}\corref{cor}}
\ead{[email protected]}
\author{Tetiana Stepanyuk\fnref{thanks}}
\ead{[email protected]}
\address{Graz University of Technology, Institute of Analysis and Number
Theory, Kopernikusgasse 24/II 8010, Graz, Austria}
\fntext[thanks]{The authors are supported by the Austrian Science Fund FWF
project F5503 (part of the Special Research Program (SFB)
``Quasi-Monte Carlo Methods: Theory and Applications'')}
\cortext[cor]{Corresponding author}
\begin{abstract}
In this paper we make a comparison between certain probabilistic and deterministic point sets and show that some deterministic constructions (spherical $t$-designs) are as good as, or better than, probabilistic ones.
We find asymptotic equalities for the discrete Riesz $s$-energy of sequences of well separated $t$-designs on the unit sphere $\mathbb{S}^{d}\subset\mathbb{R}^{d+1}$, $d\geq2$.
The case $d=2$ was studied in \cite{HesseLeopardiTheCoulombEnergy, Hesse:2009s-energy}. In \cite{Bondarenko-Radchenko-Viazovska2013:optimal_designs} it was established that for $d\geq 2$ there exists a constant $c_{d}$ such that for every $N> c_{d}t^{d}$ there exists a well-separated spherical $t$-design on $\mathbb{S}^{d}$ with $N$ points. For this reason, we assume throughout that the sequence of well-separated spherical $t$-designs is such that $t$ and $N$ are related by $N\asymp t^{d}$.
\end{abstract}
\begin{keyword}
The $s$-energy, discrete energy, energy integral, $t$-design, well-separated point sets, equal-weight numerical integration, equal-area partition, sphere.
\MSC[2010]{41A55, 33C45, 41A63}
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{intro}
Let $\mathbb{S}^{d}=\{\mathbf{x}\in\mathbb{R}^{d+1}: \ |\mathbf{x}|=1\}$, where $d\geq2$, be the unit
sphere in the Euclidean space $\mathbb{R}^{d+1}$, equipped with the Lebesgue measure
$\sigma_{d}$ normalized by $\sigma_{d}(\mathbb{S}^{d})=1$.
Let $K_{d}$ be the positive definite function (see \cite{Schoenberg:1942PositiveDefinite})
\begin{align}\label{kernel}
K_{d}(t):=\sum\limits_{n=0}^{\infty}a_{n}P_{n}^{(d)}(t), \ \ a_{n}\geq 0,
\end{align}
where $P_{n}^{(d)}$ is the $n$-th generalized Legendre polynomial,
normalized by ${P_{n}^{(d)}(1)=1}$ and orthogonal on the interval $[-1,1]$
with respect to the weight function $(1-t^{2})^{d/2-1}$.
In this paper
we investigate energy integrals with respect to a probabilistic model (``jittered sampling'') of the form
\begin{equation}\label{nonsingularInt}
\frac{1}{N^{2}}\int\limits_{A_{1}}...\int\limits_{A_{N}}\sum\limits_{i,j=1}^{N}K_{d}(\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N}), \ \ K_{d}\in \mathbb{C}_{[-1,1]}
\end{equation}
and
\begin{equation}\label{singularInt}
\frac{1}{N^{2}}\int\limits_{A_{1}}...\int\limits_{A_{N}}
\sum\limits_{i,j=1, \atop i\neq j}^{N}K_{d}(\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N}), \ \ K_{d}\in \mathbb{C}_{[-1,1)}.
\end{equation}
Here $\{A_{i}\}_{i=1}^{N}$ is an area-regular partition of the sphere (see, e.g., \cite{RakhmanovSaffZhou:1994Minimal}); that is,
$\mathbb{S}^{d}=\bigcup\limits_{i=1}^{N}A_{i}$, $A_{i}\cap A_{j}=\emptyset$ for $i\neq j$, $\sigma_{d}(A_{i})=\frac{1}{N}$,
and the point $\mathbf{x}_{i}$ is chosen uniformly at random in $A_{i}$ for $i=1,\ldots,N$.
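The jittered sampling model can be simulated directly. The sketch below (ours, not from the paper) uses a simplified equal-area partition, a $K\times M$ grid in the cylindrical coordinates $(z,\phi)$: since the normalized surface measure on $\mathbb{S}^{2}$ is $\frac{1}{4\pi}\,dz\,d\phi$, all $KM$ cells have equal area $\frac{1}{KM}$, although, unlike the partitions considered later in the paper, the polar cells do not have small diameter. For the Riesz kernel with $s=1$ on $\mathbb{S}^{2}$ the mean value is $a_{0}=V_{2}(1)=1$, so the normalized off-diagonal energy of one jittered sample should be close to $1$. All function names are ours.

```python
import math, random

def jittered_points_on_sphere(K, M, rng):
    """One uniform random point per cell of an equal-area K x M grid in
    (z, phi)-coordinates on S^2; the surface measure is dz dphi / (4 pi),
    so uniform jitter in each rectangle is uniform on the cell."""
    pts = []
    for a in range(K):
        for b in range(M):
            z = -1 + (a + rng.random()) * 2.0 / K
            phi = (b + rng.random()) * 2.0 * math.pi / M
            r = math.sqrt(max(0.0, 1 - z * z))
            pts.append((r * math.cos(phi), r * math.sin(phi), z))
    return pts

def riesz_sum(pts, s=1.0):
    """Off-diagonal sum of |x_i - x_j|^{-s} over ordered pairs (i, j), i != j."""
    total = 0.0
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            total += 2.0 / math.dist(pts[i], pts[j]) ** s  # counts (i,j) and (j,i)
    return total

rng = random.Random(0)
pts = jittered_points_on_sphere(20, 20, rng)   # N = 400
N = len(pts)
# a_0 = V_2(1) = 1, so the normalized off-diagonal energy is close to 1
print(riesz_sum(pts, s=1.0) / N ** 2)
```

On a seeded run the printed value sits a few percent below $1$, consistent with the negative lower-order correction terms.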
We denote
\begin{align}\label{energy}
E(K_{d}, X_{N}):=\frac{1}{N^{2}}\sum\limits_{i,j=1}^{N}K_{d}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle).
\end{align}
\begin{defi}\label{def1}
A spherical $t$-design is a finite subset $X_{N}\subset \mathbb{S}^{d}$ with
a characterizing property that an equal weight integration rule with nodes
from $X_{N}$ integrates all spherical polynomials $p$ of total degree at most $t$ exactly;
that is,
\begin{equation*}
\frac{1}{N}\sum\limits_{\mathbf{x}\in X_{N}}p(\mathbf{x})=
\int_{\mathbb{S}^{d}}p(\mathbf{x})d\sigma_{d}(\mathbf{x}), \quad
\mathrm{deg}(p)\leq t.
\end{equation*}
Here $N$ is the cardinality of $X_{N}$, that is, the number of points of the spherical design.
\end{defi}
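As a concrete illustration of Definition \ref{def1} (a standard example, not taken from this paper): the six vertices of the regular octahedron form a spherical $3$-design on $\mathbb{S}^{2}$. The sketch below verifies the defining property on all monomials of degree at most $3$, using the classical double-factorial formula for monomial moments of the normalized surface measure, and shows that exactness fails at degree $4$, so the octahedron is a $3$-design but not a $4$-design.

```python
from itertools import product

def dfact(n):
    """Double factorial with the convention (-1)!! = 0!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def sphere_moment(a, b, c):
    """Normalized integral of x^a y^b z^c over S^2: zero unless all
    exponents are even, else (a-1)!!(b-1)!!(c-1)!! / (a+b+c+1)!!."""
    if a % 2 or b % 2 or c % 2:
        return 0.0
    return dfact(a - 1) * dfact(b - 1) * dfact(c - 1) / dfact(a + b + c + 1)

# vertices of the regular octahedron
octa = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def design_avg(pts, a, b, c):
    return sum(x**a * y**b * z**c for x, y, z in pts) / len(pts)

# the equal-weight rule is exact for every monomial of degree <= 3 ...
for a, b, c in product(range(4), repeat=3):
    if a + b + c <= 3:
        assert abs(design_avg(octa, a, b, c) - sphere_moment(a, b, c)) < 1e-12

# ... but not at degree 4: x^2 y^2 averages to 0, the true moment is 1/15
print(design_avg(octa, 2, 2, 0), sphere_moment(2, 2, 0))
```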
\begin{defi}
A sequence of $N$-point sets $(X_{N})_{N}$,
$X_{N}=\big\{\mathbf{x}_{1},\ldots, \mathbf{x}_{N} \big\}$, is called
well-separated if there exists a positive constant $c_{1}$ such that
\begin{equation}\label{wellSeparat}
\min\limits_{i\neq j}|\mathbf{x}_{i}- \mathbf{x}_{j}|>
\frac{c_{1}}{N^{\frac{1}{d}}}.
\end{equation}
\end{defi}
The concept of spherical $t$-design was introduced by Delsarte, Goethals and
Seidel in the groundbreaking paper \cite{Delsarte-Goethals-Seidel1977:spherical_designs}, where they also proved the lower bound $N\geq C_{d}t^{d}$.
The relation between $N$ and $t$ in spherical designs plays an important role. Korevaar and Meyers \cite{KorevaarMeyers:SphericalFaraday1993} conjectured that there always exists a spherical $t$-design with $N\asymp t^{d}$ points.
We write
$a_{n}\asymp b_{n}$ to mean that there exist positive constants $C_{1}$ and
$C_{2}$ independent of $n$ such that $C_{1}a_{n}\leq b_{n}\leq C_{2}a_{n}$ for
all $n$.
Many authors have also conjectured the existence of well-separated spherical $t$-designs in
$\mathbb{S}^{d}$ of asymptotically minimal cardinality $\mathcal{O}(t^{d})$ as $t\rightarrow\infty$ (see, e.g., \cite{AnChenSloanWomersley_Well2010}, \cite{HesseLeopardiTheCoulombEnergy}).
In 2013 Bondarenko, Radchenko and Viazovska
\cite{Bondarenko-Radchenko-Viazovska2013:optimal_designs} proved that indeed for ${d\geq 2}$ there exists a constant $c_{d}$, depending only on $d$, such that for every $N\geq c_{d}t^{d}$ there exists a spherical $t$-design on $\mathbb{S}^{d}$ with $N$ points. Two years later, in
\cite{Bondarenko-Radchenko-Viazovska2015:Well_separated}, they showed that for each ${d\geq 2}$ there exist positive constants $c_{d}$ and $\lambda_{d}$, depending only on $d$, such that for every $t\in \mathbb{N}$ and every $N\geq c_{d}t^{d}$ there exists a spherical $t$-design on $\mathbb{S}^{d}$
consisting of $N$ points $\{\mathbf{x}_{i}\}_{i=1}^{N}$ with $|\mathbf{x}_{i}-\mathbf{x}_{j}|\geq \lambda_{d}N^{-\frac{1}{d}}$ for $i\neq j$.
Taking this into account we always assume that
\begin{equation}\label{NandT}
N=N(t)\asymp t^{d}.
\end{equation}
For given $s>0$ the discrete Riesz $s$-energy of a set of $N$ points $X_{N}$ on
$\mathbb{S}^{d}$ is defined as
\begin{equation}\label{RieszbDef}
E_{d}^{(s)}(X_{N}):=\frac{1}{2}
{\mathop{\sum}\limits_{i,j=1,\atop i\neq j}^{N}}|\mathbf{x}_{i}-\mathbf{x}_{j}|^{-s},
\end{equation}
where $|\mathbf{x}|$ denotes the Euclidean norm in $\mathbb{R}^{d+1}$ of the vector $\mathbf{x}$. In the case $s=d-1$ the energy (\ref{RieszbDef}) is called the Coulomb energy.
In this paper we investigate and compare the asymptotic behaviour of the $s$-energy, for $0<s<d$ for sequences of well-separated $t$-designs and also for jittered sampling.
Hesse and Leopardi
\cite{HesseLeopardiTheCoulombEnergy} showed that spherical $t$-designs with
${N=\mathcal{O}(t^{2})}$ points, if they exist, have asymptotically minimal Coulomb energy. Namely, they proved the following: if there exist a positive constant $\mu$ and a separation constant $\lambda$ such that $N\leq\mu(t+1)^{2}$ and the minimal spherical distance between points of $X_{N}$ is bounded from below by $\frac{\lambda}{\sqrt{N}}$, then the Coulomb energy of the $N$-point spherical $t$-design $X_{N}$ is bounded from above by
\begin{equation}\label{HesseandLeopardi}
E_{2}^{(1)}(X_{N})\leq \frac{1}{2}N^{2}+C_{\lambda,\mu}N^{\frac{3}{2}}.
\end{equation}
Here and further by $C_{a}$ ($C_{a,b}$) we will denote constants, which may depend only on $a$ ($a$ and $b$), but not on $N$.
In \cite{Hesse:2009s-energy} the result (\ref{HesseandLeopardi}) was extended to all $0<s<2$. In particular, under the assumption that $N\leq \kappa t^{2}$, it was shown that for $0<s<2$ and for every well-separated sequence of $N$-point spherical $t$-designs the following estimate holds
\begin{equation}\label{Hesse}
E_{2}^{(s)}(X_{N})\leq \frac{2^{-s}}{2-s}N^{2}+C_{s,\kappa}N^{1+\frac{s}{2}}.
\end{equation}
It should also be noticed that in \cite{BoyvalenkovDragnevHardinSaffStoyanova} some general upper and lower bounds for the energy of spherical designs were obtained.
The separation constraint was important for the results (\ref{HesseandLeopardi}) and (\ref{Hesse}). Since the $s$-energy is unbounded as two points approach each other, and since spherical designs can have points arbitrarily close together, the separation constraint is needed to guarantee any asymptotic bounds on the energy.
Denote by $\mathcal{E}_{d}^{(s)}(N)$ the minimal discrete $s$-energy for $N$-points on the sphere
\begin{equation}\label{minCoulomb}
\mathcal{E}_{d}^{(s)}(N):=\inf\limits_{X_{N}}E_{d}^{(s)}(X_{N}),
\end{equation}
where the infimum is taken over all $N$-points subsets of $\mathbb{S}^{d}$.
Kuijlaars and Saff \cite{KuijlaarsSaff:1998Asymptotics} proved that for $d\geq2$ and $0<s<d$, there exists a constant $C_{d,s}>0$, such that
\begin{equation}\label{KuijlaarsSaff}
\mathcal{E}_{d}^{(s)}(N)\leq \frac{1}{2} V_{d}(s)N^{2}-C_{d,s}N^{1+\frac{s}{d}},
\end{equation}
where $V_{d}(s)$ is the energy integral
\begin{align}\label{main_term}
V_{d}(s):=\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}\frac{1}{|\mathbf{x}-\mathbf{y}|^{s}}d\sigma_{d}(\mathbf{x})d\sigma_{d}(\mathbf{y})=
\frac{\Gamma(\frac{d+1}{2})\Gamma(d-s)}{\Gamma(\frac{d-s+1}{2})\Gamma(d-\frac{s}{2})}.
\end{align}
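For $d=2$ the energy integral can be computed in elementary terms: since $|\mathbf{x}-\mathbf{y}|^{2}=2-2\langle\mathbf{x},\mathbf{y}\rangle$, a one-dimensional integration gives $V_{2}(s)=2^{1-s}/(2-s)$. The sketch below (ours, not from the paper) checks this against the Gamma-function expression $\Gamma(\frac{d+1}{2})\Gamma(d-s)/\big(\Gamma(\frac{d-s+1}{2})\Gamma(d-\frac{s}{2})\big)$ and against the equivalent form $2^{d-s-1}\Gamma(\frac{d+1}{2})\Gamma(\frac{d-s}{2})/(\sqrt{\pi}\,\Gamma(d-\frac{s}{2}))$ obtained from Legendre's duplication formula.

```python
from math import gamma, pi, sqrt

def V(d, s):
    """Energy integral V_d(s) via the Gamma-function expression."""
    return gamma((d + 1) / 2) * gamma(d - s) / (gamma((d - s + 1) / 2) * gamma(d - s / 2))

def V_dup(d, s):
    """Equivalent form obtained from Legendre's duplication formula."""
    return 2 ** (d - s - 1) * gamma((d + 1) / 2) * gamma((d - s) / 2) / (sqrt(pi) * gamma(d - s / 2))

# for d = 2 both must agree with the elementary value 2^{1-s} / (2-s)
for s in (0.25, 0.5, 1.0, 1.5, 1.9):
    elementary = 2 ** (1 - s) / (2 - s)
    assert abs(V(2, s) - elementary) < 1e-9
    assert abs(V_dup(2, s) - elementary) < 1e-9

print(V(2, 1.0))   # V_2(1) = 1
```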
Earlier, Wagner \cite{Wagner:1992UpperBounds} had obtained the lower bounds
\begin{equation}\label{Wagner1}
\mathcal{E}_{d}^{(s)}(N)\geq \frac{1}{2} V_{d}(s)N^{2}-C_{d,s}N^{1+\frac{s}{d}}, \ \ d-2<s<d,
\end{equation}
\begin{equation}\label{Wagner2}
\mathcal{E}_{d}^{(s)}(N)\geq \frac{1}{2} V_{d}(s)N^{2}-C_{d,s}N^{1+\frac{s}{2+s}}, \ \ d\geq3, \ 0<s\leq d-2.
\end{equation}
The combination of (\ref{KuijlaarsSaff}) and (\ref{Wagner1}) leads to the correct order of
$\mathcal{E}_{d}^{(s)}(N)-\frac{1}{2} V_{d}(s)N^{2}$ for $d-2<s<d$.
We show that for every well-separated sequence of
$N$-point spherical $t$-designs on
$\mathbb{S}^{d}$, $d\geq2$, with $N\asymp t^{d}$ the following asymptotic equality holds
\begin{align*}
E_{d}^{(s)}(X_{N})= \frac{1}{2} \frac{\Gamma(\frac{d+1}{2})\Gamma(d-s)}{\Gamma(\frac{d-s+1}{2})\Gamma(d-\frac{s}{2})}N^{2}+\mathcal{O}\Big(N^{1+\frac{s}{d}}\Big).
\end{align*}
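The leading term of this asymptotic equality can be observed numerically. The sketch below (ours) uses a golden-angle spiral, a cheap, well-separated point set that is not a spherical $t$-design, merely a quasi-uniform stand-in; for $d=2$, $s=1$ the normalized energy $E_{2}^{(1)}(X_{N})/(\frac12 N^{2})$ approaches the coefficient $V_{2}(1)=1$ from below as $N$ grows.

```python
import math

def spiral_points(N):
    """Golden-angle spiral on S^2: z equally spaced, longitudes advancing
    by the golden angle -- a quasi-uniform, well-separated point set."""
    ga = math.pi * (3 - math.sqrt(5))
    pts = []
    for i in range(N):
        z = 1 - (2 * i + 1) / N
        r = math.sqrt(max(0.0, 1 - z * z))
        pts.append((r * math.cos(i * ga), r * math.sin(i * ga), z))
    return pts

def riesz_energy(pts, s=1.0):
    """E_d^{(s)}(X_N) = (1/2) sum_{i != j} |x_i - x_j|^{-s}."""
    e = 0.0
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            e += 1.0 / math.dist(p, q) ** s
    return e

for N in (200, 800):
    ratio = riesz_energy(spiral_points(N)) / (0.5 * N * N)  # V_2(1) = 1
    print(N, round(ratio, 4))
```

The printed ratios lie below $1$ and move toward $1$ as $N$ grows, reflecting the negative $\mathcal{O}(N^{1+\frac{s}{d}})$ correction relative to the $N^{2}$ term.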
The structure of the paper is as follows.
Section~\ref{mainResults}
contains the statements of all theorems.
Here we analyze the energy integrals (\ref{nonsingularInt}) and (\ref{singularInt}) with respect to area-regular partitions of the sphere; in particular, we consider the cases when $K_{d}$ is the reproducing kernel of a reproducing kernel Hilbert space of continuous functions on the sphere or the Riesz $s$-kernel. We then compare these with the estimates of the corresponding discrete energy sums for spherical $t$-designs and for $s$-energy minimizing point sets.
In Section \ref{prelim} we summarize the necessary background on orthogonal polynomials.
In Section~\ref{proofTheor} we give the proofs of the theorems from the
Section~\ref{mainResults}.
Section~\ref{proofLem} contains the proofs of some technical lemmas, which are needed to prove Theorem~\ref{theoremRisz}.
\section{Formulation of main results}
\label{mainResults}
\subsection{The $s$-energy of spherical designs on $\mathbb{S}^{d}$}
\label{sEnergy}
By a spherical cap $S(\mathbf{x}; \varphi)$ of centre $\mathbf{x}$ and angular radius
$\varphi$ we mean
\begin{equation*}
S(\mathbf{x}; \varphi):=\big\{\mathbf{y}\in \mathbb{S}^{d} \big| \langle\mathbf{x},\mathbf{y}\rangle\geq \cos\varphi \big\}.
\end{equation*}
The normalized surface area of a spherical cap is given by
\begin{equation}\label{capArea}
|S(\mathbf{x}; \varphi)|=\frac{\Gamma((d+1)/2)}{\sqrt{\pi}\Gamma(d/2)}
\int\limits_{\cos\varphi}^{1}(1-t^{2})^{\frac{d}{2}-1}dt
\asymp(1-\cos\varphi)^{\frac{d}{2}} \quad\text{as } \varphi\rightarrow 0.
\end{equation}
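A quick consistency check of \eqref{capArea} (ours, not from the paper): for $\varphi=\pi/2$ the cap is a hemisphere, so the normalized area must equal $\frac{1}{2}$ for every $d$. The sketch evaluates the one-dimensional integral with a plain midpoint rule.

```python
from math import gamma, sqrt, pi, cos

def cap_area(d, phi, n=200000):
    """Normalized area of the spherical cap of angular radius phi on S^d,
    via the 1-D integral in the text, evaluated by a midpoint rule."""
    lo = cos(phi)
    h = (1.0 - lo) / n
    integral = sum((1 - (lo + (k + 0.5) * h) ** 2) ** (d / 2 - 1) for k in range(n)) * h
    return gamma((d + 1) / 2) / (sqrt(pi) * gamma(d / 2)) * integral

for d in (2, 3, 4):
    print(d, cap_area(d, pi / 2))   # each close to 0.5
```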
If condition (\ref{wellSeparat}) holds for a sequence $(X_{N})_{N}$, then any spherical cap $S(\mathbf{x}; \alpha_{N})$, $\mathbf{x}\in \mathbb{S}^{d}$, where
\begin{equation}\label{alphaN}
\alpha_{N}:=\arccos\Big(1-\frac{c^{2}_{1}}{8N^{\frac{2}{d}}}\Big),
\end{equation}
contains at
most one point of the set $X_{N}$.
From the elementary estimates
\begin{equation}\label{sinIneq}
\sin\theta\leq \theta\leq \frac{\pi}{2}\sin\theta, \quad 0\leq\theta\leq \frac{\pi}{2},
\end{equation}
we obtain
\begin{equation}\label{alphaEstim}
\Big(1-\frac{c^{2}_{1}}{16N^{\frac{2}{d}}}\Big)^{\frac{1}{2}}\frac{c_{1}}{2N^{\frac{1}{d}}}\leq\alpha_{N}\leq
\frac{\pi}{4}\Big(1-\frac{c^{2}_{1}}{16N^{\frac{2}{d}}}\Big)^{\frac{1}{2}}\frac{c_{1}}{N^{\frac{1}{d}}}.
\end{equation}
\begin{thm}\label{theoremRisz} Let $d\geq2$ be fixed, and
$(X_{N(t)})_t$ be a sequence of well-separated spherical $t$-designs on
$\mathbb{S}^{d}$,
where $t$ and $N(t)$ satisfy relation (\ref{NandT}).
Then for the $s$-energy $E_{d}^{(s)}(X_{N})$ the following asymptotic equality holds
\begin{equation}\label{theorem1}
E_{d}^{(s)}(X_{N})= \frac{1}{2} \frac{\Gamma(\frac{d+1}{2})\Gamma(d-s)}{\Gamma(\frac{d-s+1}{2})\Gamma(d-\frac{s}{2})}N^{2}+\mathcal{O}\Big(N^{1+\frac{s}{d}}\Big).
\end{equation}
\end{thm}
\subsection{Estimates for energy integrals in the nonsingular case}
\label{nonsingular}
We consider area-regular partitions for which all regions $A_{i}$ have small diameters:
$\mathrm{diam}(A_{i})\leq C N^{-\frac{1}{d}}$ for $i=1,...,N$. Here $C$ is a constant that does not depend on $N$ (see, e.g., \cite{GiganteLeopardi2017:Diameter}).
Let $\sigma_{i}^{*}$ be the restriction of the measure $N\sigma_{d}$ to $A_{i}$: $\sigma_{i}^{*}(\cdot)=N\sigma_{d}(A_{i}\cap \cdot)$. Then each $\sigma_{i}^{*}$ is a probability measure.
\begin{thm}\label{theorem_det}
Let $K_{d}$ be a continuous function on $[-1,1]$, which is given by (\ref{kernel}).
Then there exists a positive constant $C_{d}$ such that for the energy $E(K_{d}, X_{N})$ of the form (\ref{energy}) the following estimate holds
\begin{multline}\label{theoremDet}
\int\limits_{A_{1}}...\int\limits_{A_{N}}E(K_{d}, X_{N})d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})-a_{0}\\
\leq \frac{C_{d}}{N}\left(N^{-\frac{2}{d}}\sum\limits_{n=1}^{[N^{\frac{1}{d}}]}a_{n}n^{2}+\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}a_{n} \right).
\end{multline}
\end{thm}
Let us apply the estimate (\ref{theoremDet}) to the reproducing kernels of Hilbert spaces and compare it with known estimates of the worst-case error in these spaces. Before doing so, we need some additional background.
We denote by $\{Y_{\ell,k}^{(d)}: k=1,\ldots, Z(d,\ell)\}$ a collection of
$\mathbb{L}_{2}(\sigma_{d})$-orthonormal real spherical harmonics (homogeneous harmonic polynomials in $d+1$ variables restricted to $\mathbb{S}^{d}$) of degree $\ell$ (see, e.g., \cite{Mueller1966:spherical_harmonics}), where
\begin{equation}\label{Zd}
Z(d,0)=1,\quad Z(d,\ell)=(2\ell+d-1)
\dfrac{\Gamma(\ell+d-1)}{\Gamma(d)\Gamma(\ell+1)}\sim
\frac{2}{\Gamma(d)}\ell^{d-1}, \quad \ell\rightarrow\infty.
\end{equation}
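The numbers $Z(d,\ell)$ in \eqref{Zd} are the dimensions of the spaces of spherical harmonics of degree $\ell$; as a cross-check (ours), they must agree with the standard dimension formula $\binom{\ell+d}{d}-\binom{\ell+d-2}{d}$ for harmonic polynomials in $d+1$ variables, which for $d=2$ reduces to the familiar $2\ell+1$.

```python
from math import comb, gamma

def Z(d, l):
    """Z(d, l) from the Gamma-function formula in the text
    (the formula also returns 1 at l = 0)."""
    return round((2 * l + d - 1) * gamma(l + d - 1) / (gamma(d) * gamma(l + 1)))

def harmonic_dim(d, l):
    """Dimension of the space of degree-l spherical harmonics on S^d."""
    return comb(l + d, d) - comb(l + d - 2, d)

for d in (2, 3, 4):
    for l in range(15):
        assert Z(d, l) == harmonic_dim(d, l)

print([Z(2, l) for l in range(5)])   # [1, 3, 5, 7, 9], i.e. 2l + 1
```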
Each spherical harmonic $Y_{\ell,k}^{(d)}$ of exact degree $\ell$ is an eigenfunction of the negative Laplace-Beltrami operator $-\Delta^{*}_{d}$ with eigenvalue
$ \lambda_{\ell}:=\ell(\ell+d-1) $.
The spherical harmonics of degree $\ell$ satisfy the addition theorem:
\begin{equation}\label{additiontheorem}
\sum\limits_{k=1}^{Z(d,\ell)}Y_{\ell,k}^{(d)}(\mathbf{x})Y_{\ell,k}^{(d)}(\mathbf{y})=Z(d,\ell)P_{\ell}^{(d)}(\langle\mathbf{x},\mathbf{y}\rangle).
\end{equation}
The Sobolev space $\mathbb{H}^{s}(\mathbb{S}^{d})$
for $s\geq 0$ consists of all functions $f\in\mathbb{L}_{2}(\mathbb{S}^{d})$
with finite norm
\begin{equation}\label{normSobolev}
\|f\|_{\mathbb{H}^{s}}=\bigg(\sum\limits_{\ell=0}^{\infty}\sum\limits_{k=1}^{Z(d,\ell)}
\left(1+\lambda_{\ell}\right)^{s}|\hat{f}_{\ell,k}|^{2}\bigg)^{\frac{1}{2}},
\end{equation}
where the Laplace-Fourier coefficients are given by the formula
\begin{equation*}
\hat{f}_{\ell,k}:=(f,Y_{\ell,k}^{(d)})_{\mathbb{S}^{d}}=
\int_{\mathbb{S}^{d}}f(\mathbf{x})Y_{\ell,k}^{(d)}(\mathbf{x})
d\sigma_{d}(\mathbf{x}).
\end{equation*}
The worst-case (cubature) error of the equal weight numerical integration rule $Q[X_{N}]$ in a
Banach space $B$ of continuous functions on $\mathbb{S}^{d}$ with norm
$\|\cdot\|_{B}$ is defined by
\begin{equation}\label{wce}
\mathrm{wce}(Q[X_{N}];B):=\sup\limits_{f\in B,\|f\|_{B}\leq1}
\left|\frac{1}{N}\sum\limits_{i=1}^{N}f(\mathbf{x}_{i})-\int_{\mathbb{S}^{d}}f(\mathbf{x})d\sigma_{d}(\mathbf{x})\right|.
\end{equation}
The worst-case error for the Sobolev space $\mathbb{H}^{s}(\mathbb{S}^{d})$ can be expressed as (see, e.g., \cite{Brauchart-Saff-Sloan+2014:qmc_designs})
\begin{equation}\label{wceKernelHs}
\mathrm{wce}(Q[X_{N}];\mathbb{H}^{s}(\mathbb{S}^{d}))^{2}=\frac{1}{N^{2}}\sum\limits_{i,j=1}^{N}\tilde{K}^{(s)}_{d}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle),
\end{equation}
where $\tilde{K}^{(s)}_{d}$ denotes the reproducing kernel of the Hilbert space $\mathbb{H}^{s}(\mathbb{S}^{d})$, $s>\frac{d}{2}$, with the constant term removed:
\begin{equation}\label{kernelHs}
\tilde{K}^{(s)}_{d}(\mathbf{x},\mathbf{y})=\sum\limits_{\ell=1}^{\infty}(1+\lambda_{\ell})^{-s}Z(d,\ell)
P_{\ell}^{(d)}(\langle\mathbf{x},\mathbf{y}\rangle).
\end{equation}
If $\tilde{K}^{(s)}_{d}(\mathbf{x},\mathbf{y})$ is given by (\ref{kernelHs}), then
\begin{multline}\label{probabilSobol}
\frac{1}{N^{2}}\int\limits_{A_{1}}...\int\limits_{A_{N}}\sum\limits_{i,j=1}^{N}\tilde{K}^{(s)}_{d}(\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N}) \\
\ll
\begin{cases}
N^{-\frac{2s}{d}}, & \text{if }
\frac{d}{2}<s<1+\frac{d}{2}, \\
N^{-1-\frac{2}{d}}\ln N , & \text{if } s=1+\frac{d}{2}, \\
N^{-1-\frac{2}{d}}, & \text{if } s>1+\frac{d}{2}.
\end{cases}
\end{multline}
Here and in what follows we use the Vinogradov notation $a_{n}\ll b_{n}$ to mean that there exists a positive constant $C$ independent of $n$ such that
$a_{n}\leq C b_{n}$ for all $n$.
In \cite{Brauchart-Hesse2007:numerical_integration} it was proved that there exists $C_{s,d}>0$ such that for every $N$-point spherical $t$-design $X_{N}$ on $\mathbb{S}^{d}$ with $N\asymp t^{d}$
\begin{align}\label{BrauchHesseUpper}
\mathrm{wce}(Q[X_{N}];\mathbb{H}^{s}(\mathbb{S}^{d}))^{2}\leq \frac{C_{s,d}}{N^{\frac{2s}{d}}}.
\end{align}
Let the space $\mathbb{H}^{(\frac{d}{2},\gamma)}(\mathbb{S}^{d})$, $\gamma>\dfrac{1}{2}$
(see \cite{GrabnerStepanyuk2018}), be the set of all functions $f\in\mathbb{L}_{2}(\mathbb{S}^{d})$ with finite norm
\begin{align*}
\|f\|_{\mathbb{H}^{(\frac{d}{2},\gamma)}}^2:=\sum\limits_{\ell=0}^{\infty}
\left(1+\lambda_{\ell}\right)^{\frac{d}{2}}\left(\ln\left(3+\lambda_{\ell}\right)\right)^{2\gamma}\sum\limits_{k=1}^{Z(d,\ell)}|\hat{f}_{\ell,k}|^{2}<\infty.
\end{align*}
The worst-case error for the space $\mathbb{H}^{(\frac{d}{2},\gamma)}(\mathbb{S}^{d})$, $\gamma>\dfrac{1}{2}$, can be computed by the formula
\begin{align*}
\mathrm{wce}(Q[X_{N}];\mathbb{H}^{(\frac{d}{2},\gamma)}(\mathbb{S}^{d}))^{2}=\frac{1}{N^{2}}\sum\limits_{i,j=1}^{N}\tilde{K}^{(\frac{d}{2},\gamma)}(\mathbf{x}_{i},\mathbf{x}_{j}),
\end{align*}
where $\tilde{K}^{(\frac{d}{2},\gamma)}$ denotes the reproducing kernel of the Hilbert space
$\mathbb{H}^{(\frac{d}{2},\gamma)}(\mathbb{S}^{d})$, ${\gamma>\frac{1}{2}}$, with the constant term removed
\begin{equation}\label{kernelH}
\tilde{K}^{(\frac{d}{2},\gamma)}(\mathbf{x},\mathbf{y})=\sum\limits_{\ell=1}^{\infty}\left(1+\lambda_{\ell}\right)^{-\frac{d}{2}}\left(\ln\left(3+\lambda_{\ell}\right)\right)^{-2\gamma}Z(d,\ell)
P_{\ell}^{(d)}(\langle\mathbf{x},\mathbf{y}\rangle).
\end{equation}
From (\ref{theoremDet}) we have that for $\tilde{K}^{(\frac{d}{2},\gamma)}$, defined by formula (\ref{kernelH}), the following estimate is true
\begin{align}\label{probabilH}
\frac{1}{N^{2}}\int\limits_{A_{1}}...\int\limits_{A_{N}}\sum\limits_{i,j=1}^{N}\tilde{K}^{(\frac{d}{2},\gamma)}(\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})
\ll N^{-1}\left(\ln N\right)^{-2\gamma+1}.
\end{align}
In \cite{GrabnerStepanyuk2018} it was proved that there exist constants $C_{d,\gamma}^{(1)}$ and $C_{d,\gamma}^{(2)}$ such that for every well-separated $N$-point spherical $t$-design $X_{N}$ on $\mathbb{S}^{d}$
\begin{align}\label{GS}
C_{d,\gamma}^{(1)}N^{-1}\left(\ln N\right)^{-2\gamma+1}\leq
\mathrm{wce}(Q[X_{N}];\mathbb{H}^{(\frac{d}{2},\gamma)}(\mathbb{S}^{d}))^{2}
\leq C_{d,\gamma}^{(2)}N^{-1}\left(\ln N\right)^{-2\gamma+1}.
\end{align}
\subsection{Estimates for energy integrals in the singular case}
\label{singular}
In this subsection we consider the case of a singular kernel, in which the diagonal terms in the energy (\ref{energy}) are omitted. We denote this energy by
\begin{align}\label{energy0}
\tilde{E}(K_{d}, X_{N}):=\frac{1}{N^{2}}\sum\limits_{i,j=1, \atop i\neq j}^{N}K_{d}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle).
\end{align}
\begin{thm}\label{theorem_sing}
Let $K_{d}$ be a continuous function on $[-1,1)$ with $\lim\limits_{x\rightarrow1^{-}}K_{d}(x)=\infty$ and $\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}K_{d}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x})d\sigma_{d}(\mathbf{y})<\infty$, and suppose that there exist $c_{2}>0$ and $\mathbf{y}_{i}\in A_{i}$ such that each region $A_{i}$ of the area-regular partition
$\{A_{i}\}_{i=1}^{N}$ contains a spherical cap $S(\mathbf{y}_{i}; c_{2}N^{-\frac{1}{d}})$ in its interior.
Then
\begin{multline}\label{theoremSingular}
\int\limits_{A_{1}}...\int\limits_{A_{N}}\tilde{E}(K_{d}, X_{N})d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})\\
= a_{0}+\frac{1}{N}\mathcal{O}\left(
\int\limits_{\cos(c_{2}N^{-\frac{1}{d}})}^{1}K_{d}(x)(1-x^{2})^{\frac{d}{2}-1}dx +\max\limits_{-1\leq x\leq1-\frac{2c_{2}^{2}}{\pi^{2}}}K_{d}(x)\right).
\end{multline}
\end{thm}
The existence of an area regular partition
$\{A_{i}\}_{i=1}^{N}$, such that each region $A_{i}$ contains the spherical cap $S(\mathbf{y}_{i}; c_{2}N^{-\frac{1}{d}})$ in its interior was shown by Gigante and Leopardi in \cite{GiganteLeopardi2017:Diameter}.
Let $K_{s,d}$ be the Riesz kernel: $K_{s,d}(x)=\frac{1}{2^{\frac{s}{2}}}(1-x)^{-\frac{s}{2}}$, $0<s<d$, then
\begin{multline}\label{integrRiesz}
\frac{1}{N}
\int\limits_{\cos(c_{2}N^{-\frac{1}{d}})}^{1}K_{s,d}(x)(1-x^{2})^{\frac{d}{2}-1}dx\ll \frac{1}{N}
\int\limits_{\cos(c_{2}N^{-\frac{1}{d}})}^{1}(1-x)^{\frac{d}{2}-\frac{s}{2}-1}dx\\
\ll \frac{1}{N} (N^{-\frac{2}{d}})^{\frac{d}{2}-\frac{s}{2}}= N^{-2+\frac{s}{d}}
\end{multline}
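For completeness, the elementary computation behind the last step of (\ref{integrRiesz}) is the following: since $\frac{d-s}{2}>0$,
\begin{equation*}
\frac{1}{N}\int\limits_{\cos(c_{2}N^{-\frac{1}{d}})}^{1}(1-x)^{\frac{d}{2}-\frac{s}{2}-1}dx
=\frac{2}{N(d-s)}\big(1-\cos(c_{2}N^{-\frac{1}{d}})\big)^{\frac{d-s}{2}}
\ll \frac{1}{N}\big(N^{-\frac{2}{d}}\big)^{\frac{d-s}{2}}=N^{-2+\frac{s}{d}},
\end{equation*}
where we have used the elementary inequality $1-\cos\theta\leq\frac{\theta^{2}}{2}$.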
and
\begin{align}\label{maxRi}
\frac{1}{N}\max\limits_{-1\leq x\leq1-\frac{2c_{2}^{2}}{\pi^{2}}}K_{s,d}(x)\ll N^{-1+\frac{s}{d}}.
\end{align}
Thus, we have that for the Riesz kernel $K_{s,d}$ the following estimate holds
\begin{multline}\label{RieszProb}
\int\limits_{A_{1}}...\int\limits_{A_{N}}\sum\limits_{i,j=1, \atop i\neq j}^{N}K_{s,d}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N}) \\
=
a_{0}N^{2}+\mathcal{O}(N^{1+\frac{s}{d}})= \frac{\Gamma(\frac{d+1}{2})\Gamma(d-s)}{\Gamma(d-s+1)\Gamma(d-\frac{s}{2})}N^{2}+\mathcal{O}(N^{1+\frac{s}{d}}).
\end{multline}
\subsection{Comparison of the estimates for some probabilistic and deterministic point sets}
\label{comparison}
Probabilistic models are often used to show the existence of good point sets, but the comparison shows that in many cases $t$-designs give better bounds for the quality measure under consideration.
Indeed, on the basis of (\ref{probabilSobol}) and (\ref{BrauchHesseUpper}) we can conclude that in the case ${\frac{d}{2}<s<1+\frac{d}{2}}$ spherical $t$-designs are as good as probabilistic point sets, while in the case $s\geq\frac{d}{2}+1$ spherical $t$-designs give better bounds for $\mathrm{wce}(Q[X_{N}];\mathbb{H}^{s}(\mathbb{S}^{d}))$.
From (\ref{probabilH}) and (\ref{GS}) it follows that for the worst-case error
$ \mathrm{wce}(Q[X_{N}];\mathbb{H}^{(\frac{d}{2},\gamma)}(\mathbb{S}^{d}))$, spherical $t$-designs are as good as probabilistic point sets.
Comparing formula (\ref{theorem1}) with (\ref{RieszProb}), we see that for the Riesz $s$-energy, $0<s<d$, well-separated $t$-designs are as good as probabilistic point sets.
Also, according to relations (\ref{KuijlaarsSaff}) and (\ref{Wagner1}), with respect to the order of the error term, well-separated $t$-designs and probabilistic point sets are as good as point sets which minimize the Riesz $s$-energy (in the case $d-2<s<d$).
\section{Preliminaries}
\label{prelim}
In this paper we use the Pochhammer symbol $(a)_{n}$, where
$n\in \mathbb{N}_{0}$ and $a\in \mathbb{R}$, defined by
\begin{equation*}
(a)_{0}:=1, \quad (a)_{n}:=a(a+1)\ldots(a+n-1)\quad \mathrm{for} \quad
n\in \mathbb{N},
\end{equation*}
which can be written in terms of the gamma function $\Gamma(z)$ by means of
\begin{equation}\label{Pochhammer}
(a)_{\ell}=\frac{\Gamma(\ell+a)}{\Gamma(a)}.
\end{equation}
For fixed $a,b$ the following asymptotic equality holds
\begin{equation}\label{gamma}
\frac{\Gamma(n+a)}{\Gamma(n+b)}= n^{a-b}\Big(1+\mathcal{O}\Big(\frac{1}{n}\Big) \Big) \ \ \mathrm{as} \ \ n\rightarrow \infty.
\end{equation}
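Relation (\ref{gamma}) is a standard consequence of Stirling's formula $\Gamma(z)=\sqrt{2\pi}\,z^{z-\frac{1}{2}}e^{-z}\big(1+\mathcal{O}(z^{-1})\big)$; in sketch,
\begin{equation*}
\frac{\Gamma(n+a)}{\Gamma(n+b)}
=\frac{(n+a)^{n+a-\frac{1}{2}}}{(n+b)^{n+b-\frac{1}{2}}}\,e^{b-a}\Big(1+\mathcal{O}\Big(\frac{1}{n}\Big)\Big)
=n^{a-b}\Big(1+\mathcal{O}\Big(\frac{1}{n}\Big)\Big),
\end{equation*}
since $\big(1+\frac{a}{n}\big)^{n+a-\frac{1}{2}}=e^{a}\big(1+\mathcal{O}(\frac{1}{n})\big)$, and likewise for $b$.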
For any integrable function $f: [-1, 1]\rightarrow \mathbb{R}$ (see, e.g.,
\cite{Mueller1966:spherical_harmonics}) we have
\begin{equation}\label{a1}
\int\limits_{\mathbb{S}^{d}}f(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x})=\frac{\Gamma(\frac{d+1}{2})}{\sqrt{\pi}\Gamma(\frac{d}{2})}\int\limits_{-1}^{1}f(t)(1-t^{2})^{\frac{d}{2}-1}dt \quad \forall \mathbf{y}\in \mathbb{S}^{d}.
\end{equation}
The Jacobi polynomials $\mathcal{P}_{\ell}^{(\alpha,\beta)}(x)$ are the polynomials
orthogonal over the interval $[-1,1]$ with the weight function
$w_{\alpha,\beta}(x)=(1-x)^{\alpha}(1+x)^{\beta}$ and normalized by the
relation
\begin{equation}\label{JacobiMax}
\mathcal{P}_{\ell}^{(\alpha,\beta)}(1)=\binom {\ell+\alpha}\ell=
\frac{(1+\alpha)_{\ell}}{\ell!}\sim\frac{1}{\Gamma(1+\alpha)}\ell^{\alpha},
\quad \alpha,\beta>-1.
\end{equation}
(see, e.g., \cite[(5.2.1)]{Magnus-Oberhettinger-Soni1966:formulas_theorems}).
Notice that
\begin{equation}\label{LegendreJacobi}
P_{n}^{(d)}(x)=\frac{n!}{(d/2)_{n}}\mathcal{P}_{n}^{(\frac{d}{2}-1, \frac{d}{2}-1)}(x).
\end{equation}
For fixed ${\alpha, \beta>-1}$ and ${0< \theta<\pi}$, the following relation
gives an asymptotic approximation for $\ell\rightarrow\infty$ (see,
e.g., \cite[Theorem 8.21.13]{Szegoe1975:orthogonal_polynomials})
\begin{multline*}
\mathcal{P}_{\ell}^{(\alpha,\beta)}(\cos \theta)=\frac{1}{\sqrt{\pi}}\ell^{-1/2}
\Big(\sin\frac{\theta}{2}\Big)^{-\alpha-1/2}
\Big(\cos\frac{\theta}{2}\Big)^{-\beta-1/2}\\
\times\Big\{\cos \Big(\Big(\ell+\frac{\alpha+\beta+1}{2}\Big)\theta-
\frac{2\alpha+1}{4}\pi\Big)+\mathcal{O}\big((\ell\sin\theta)^{-1}\big)\Big\}.
\end{multline*}
Thus, for
$c_{\alpha,\beta}\ell^{-1}\leq\theta\leq\pi-c_{\alpha,\beta}\ell^{-1}$ the last
asymptotic equality yields
\begin{equation}\label{JacobiIneq}
|\mathcal{P}_{\ell}^{(\alpha,\beta)}(\cos \theta)|\leq \tilde{c}_{\alpha,\beta}
\ell^{-1/2}(\sin\theta)^{-\alpha-1/2}+
\tilde{c}_{\alpha,\beta}\ell^{-3/2}(\sin\theta)^{-\alpha-3/2}, \quad\alpha\geq\beta.
\end{equation}
The following differentiation formula holds
\begin{equation}\label{JacobiDifferen}
\frac{d}{dx}\mathcal{P}_{n}^{(\alpha,\beta)}(x)=\frac{\alpha+\beta+n+1}{2}\mathcal{P}_{n-1}^{(\alpha+1,\beta+1)}(x).
\end{equation}
If $\lambda>d-1$ and $0<s<d$, then using formula \cite[(5.3.4)]{Magnus-Oberhettinger-Soni1966:formulas_theorems} and expressing
the Gegenbauer polynomials via Jacobi polynomials (see, e.g.,
\cite[(5.3.1)]{Magnus-Oberhettinger-Soni1966:formulas_theorems}),
we have that for ${-1<x<1}$ the following expansion holds
\begin{multline}\label{expansionGegenbauer1}
(1-x)^{-\frac{s}{2}}=2^{2\lambda-\frac{s}{2}}\pi^{-\frac{1}{2}}\Gamma(\lambda)\Gamma\Big(\lambda-\frac{s}{2}+\frac{1}{2}\Big) \\
\times\sum\limits_{n=0}^{\infty}\frac{(n+\lambda)(\frac{s}{2})_{n}}{\Gamma(n+2\lambda-\frac{s}{2}+1)}\frac{(2\lambda)_{n}}{(\lambda+\frac{1}{2})_{n}}\mathcal{P}_{n}^{(\lambda-\frac{1}{2},\lambda-\frac{1}{2})}(x).
\end{multline}
\section{Proof of Theorems 1--3}
\label{proofTheor}
\begin{proof}[Proof of Theorem \ref{theoremRisz}]
For each $i\in\{1,\ldots,N\}$ we divide the sphere $\mathbb{S}^{d}$ into an
upper hemisphere $H_{i}^{+}$ with ``north pole'' $\mathbf{x}_{i}$ and a lower
hemisphere $H_{i}^{-}$:
\begin{equation*}
H_{i}^{+}:=\Big\{\mathbf{x}\in\mathbb{S}^{d}\Big|\langle\mathbf{x}_{i},\mathbf{x}\rangle\geq0 \Big\},
\end{equation*}
\begin{equation*}
H_{i}^{-}:=\mathbb{S}^{d}\setminus H_{i}^{+}.
\end{equation*}
We split the $s$-energy into two parts
\begin{equation}\label{1split}
E_{d}^{(s)}(X_{N})=\frac{1}{2}\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}|\mathbf{x}_{i}-\mathbf{x}_{j}|^{-s}+
\frac{1}{2}\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in S(-\mathbf{x}_{j};\alpha_{N})}^{N}}|\mathbf{x}_{i}-\mathbf{x}_{j}|^{-s}.
\end{equation}
From (\ref{wellSeparat}) and the fact that the spherical cap $S(-\mathbf{x}_{j};\alpha_{N})$ contains at most one point of $X_{N}$, the second term in (\ref{1split}), where the scalar product is close to $-1$, can be bounded from above by
\begin{equation}\label{estim1split}
\frac{1}{2}\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in S(-\mathbf{x}_{j};\alpha_{N})}^{N}}|\mathbf{x}_{i}-\mathbf{x}_{j}|^{-s}<\frac{1}{2}N\Big(4-\frac{c_{1}^{2}}{4N^{\frac{2}{d}}} \Big)^{-\frac{s}{2}}<\frac{1}{2}N.
\end{equation}
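The first inequality in (\ref{estim1split}) follows from the identity $|\mathbf{x}_{i}-\mathbf{x}_{j}|^{2}=4-|\mathbf{x}_{i}+\mathbf{x}_{j}|^{2}$ for unit vectors: if $\mathbf{x}_{i}\in S(-\mathbf{x}_{j};\alpha_{N})$, then (assuming, as in the definition of $\alpha_{N}$, that $1-\cos\alpha_{N}=\frac{c_{1}^{2}}{8}N^{-\frac{2}{d}}$)
\begin{equation*}
|\mathbf{x}_{i}+\mathbf{x}_{j}|^{2}=|\mathbf{x}_{i}-(-\mathbf{x}_{j})|^{2}=2\big(1-\langle\mathbf{x}_{i},-\mathbf{x}_{j}\rangle\big)\leq2(1-\cos\alpha_{N})=\frac{c_{1}^{2}}{4}N^{-\frac{2}{d}},
\end{equation*}
so that $|\mathbf{x}_{i}-\mathbf{x}_{j}|^{2}\geq4-\frac{c_{1}^{2}}{4}N^{-\frac{2}{d}}$.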
Noting that
\begin{equation}\label{distance}
|\mathbf{x}_{i}-\mathbf{x}_{j}|^{-1}=\frac{1}{\sqrt{2}}(1-\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)^{-\frac{1}{2}},
\end{equation}
taking into account that the Jacobi series (\ref{expansionGegenbauer1}) converges uniformly on \linebreak ${\Big[-1+\frac{c^{2}_{1}}{8N^{\frac{2}{d}}},1-\frac{c^{2}_{1}}{8N^{\frac{2}{d}}} \Big]}$,
and substituting
$\lambda=\frac{d}{2}+K+\frac{1}{2}$, $K>\frac{d}{2}+1$ in the expansion (\ref{expansionGegenbauer1}), we get that
\begin{multline}\label{expansionSubst}
\frac{1}{2}\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}|\mathbf{x}_{i}-\mathbf{x}_{j}|^{-s}=
\frac{1}{2^{1+\frac{s}{2}}}\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}(1-\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)^{-\frac{s}{2}} \\
=\frac{1}{2}E_{h_{t}}(X)+\frac{1}{2}E_{r_{t}}(X),
\end{multline}
where
\begin{multline}\label{s_tDefinition}
h_{t}(x)=h_{t}(s,d,K,x):=2^{d+2K-s+1}\pi^{-\frac{1}{2}}\Gamma\Big(\frac{d}{2}+K+\frac{1}{2}\Big)\Gamma\Big(\frac{d}{2}+K-\frac{s}{2}+1\Big) \\
\times\sum\limits_{n=0}^{t}\frac{(n+\frac{d}{2}+K+\frac{1}{2})(\frac{s}{2})_{n}}{\Gamma(n+d+2K-\frac{s}{2}+2 )}\frac{(d+2K+1)_{n}}{(\frac{d}{2}+K+1)_{n}}
\mathcal{P}_{n}^{(\frac{d}{2}+K, \ \frac{d}{2}+K)}(x),
\end{multline}
\begin{multline}\label{r_tDefinition}
r_{t}(x)=r_{t}(s,d,K,x):=2^{d+2K-s+1}\pi^{-\frac{1}{2}}\Gamma\Big(\frac{d}{2}+K+\frac{1}{2}\Big)\Gamma\Big(\frac{d}{2}+K-\frac{s}{2}+1\Big) \\
\times\sum\limits_{n=t+1}^{\infty}\frac{(n+\frac{d}{2}+K+\frac{1}{2})(\frac{s}{2})_{n}}{\Gamma(n+d+2K-\frac{s}{2}+2 )}\frac{(d+2K+1)_{n}}{(\frac{d}{2}+K+1)_{n}}
\mathcal{P}_{n}^{(\frac{d}{2}+K, \ \frac{d}{2}+K)}(x),
\end{multline}
and
\begin{align}\label{energyParts}
E_{U}(X):=\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}U(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle).
\end{align}
To finish the proof we will need the following two lemmas, whose proofs are postponed to the next section.
\begin{lemma}\label{lem1} Let $d\geq2$
be fixed and $N\asymp t^{d}$. Then for any $K>\frac{d}{2}$,
$K\in \mathbb{N}$, and $0<s<d$ there exists a positive constant $C_{d,s}$ such
that
\begin{align}\label{Lemma1}
E_{r_{t}}(X)\leq C_{d,s} N^{1+\frac{s}{d}}.
\end{align}
\end{lemma}
\begin{lemma}\label{lem2} Let $d\geq2$
be fixed, let $(X_{N(t)})_{t}$ be a sequence of well-separated $t$-designs and $N\asymp t^{d}$. Then for any $0<s<d$ and $K>\frac{d}{2}$,
$K\in \mathbb{N}$, the following asymptotic equality holds
\begin{align}\label{Lemma2}
E_{h_{t}}(X)= \frac{\Gamma(\frac{d+1}{2})\Gamma(d-s)}{\Gamma(d-s+1)\Gamma(d-\frac{s}{2})}N^{2}+\mathcal{O}(Nt^{s}).
\end{align}
\end{lemma}
Formulas (\ref{1split})-(\ref{expansionSubst}), (\ref{Lemma1}) and (\ref{Lemma2}) yield (\ref{theorem1}).
Theorem \ref{theoremRisz} is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem_det}]
Integrating $K_{d}$ with respect to the probability measure $d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})$, we obtain
\begin{multline}\label{integr}
\frac{1}{N^{2}}\int\limits_{A_{1}}...\int\limits_{A_{N}}\sum\limits_{i,j=1}^{N}K_{d}(\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})\\
=\frac{1}{N}K_{d}(1)+\frac{1}{N^{2}}\sum\limits_{i,j=1, \atop i\neq j}^{N}\int\limits_{A_{i}}\int\limits_{A_{j}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{j}^{*}(\mathbf{y}) \\
=\frac{1}{N}K_{d}(1)+\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma(\mathbf{x})d\sigma(\mathbf{y})-
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}\int\limits_{A_{i}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y}) \\
=\frac{1}{N}K_{d}(1)+a_{0}-
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}\int\limits_{A_{i}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y}).
\end{multline}
Substituting (\ref{kernel}), we have
\begin{multline}\label{dif1}
\frac{1}{N}K_{d}(1)-
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}\int\limits_{A_{i}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y}) \\
=\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\Big(\sum\limits_{n=0}^{\infty}a_{n}P_{n}^{(d)}(1)-
\sum\limits_{n=0}^{\infty}a_{n}P_{n}^{(d)}(\cos\theta_{i}) \Big) \\
=\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=0}^{[N^{\frac{1}{d}}]}a_{n}\big(P_{n}^{(d)}(1)-P_{n}^{(d)}(\cos\theta_{i}) \big)+
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}a_{n}\big(P_{n}^{(d)}(1)-P_{n}^{(d)}(\cos\theta_{i}) \big),
\end{multline}
where $\cos\theta_{i}\in M_{i}$, $M_{i}:=\Big\{\langle\textbf{x},\mathbf{y}\rangle \ \Big| \ \textbf{x},\textbf{y}\in A_{i} \Big\}$.
The second term on the right-hand side of (\ref{dif1}) can be bounded from above by
\begin{align}\label{secondTerm}
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}a_{n}\big(P_{n}^{(d)}(1)-P_{n}^{(d)}(\cos\theta_{i}) \big)\leq
\frac{2}{N}\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}a_{n}.
\end{align}
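Estimate (\ref{secondTerm}) relies only on the bound $|P_{n}^{(d)}(x)|\leq P_{n}^{(d)}(1)=1$ for $x\in[-1,1]$, together with the nonnegativity of the coefficients $a_{n}$ in the expansion (\ref{kernel}) (an assumption of the theorem); indeed,
\begin{equation*}
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}a_{n}\big|P_{n}^{(d)}(1)-P_{n}^{(d)}(\cos\theta_{i})\big|
\leq\frac{1}{N^{2}}\cdot N\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}2a_{n}
=\frac{2}{N}\sum\limits_{n=[N^{\frac{1}{d}}]+1}^{\infty}a_{n}.
\end{equation*}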
If $\mathrm{diam}(A_{i})\leq C N^{-\frac{1}{d}}$, then
\begin{align*}
|\textbf{x}-\textbf{y}|\leq C N^{-\frac{1}{d}} \ \ \forall \textbf{x},\textbf{y}\in A_{i},
\end{align*}
and
\begin{align}\label{diam}
\cos\theta_{i}\geq 1- \frac{C^{2}}{2}N^{-\frac{2}{d}}.
\end{align}
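Indeed, (\ref{diam}) is immediate from the identity (\ref{distance}): for unit vectors $\mathbf{x},\mathbf{y}\in A_{i}$,
\begin{equation*}
1-\langle\mathbf{x},\mathbf{y}\rangle=\frac{|\mathbf{x}-\mathbf{y}|^{2}}{2}\leq\frac{C^{2}}{2}N^{-\frac{2}{d}}.
\end{equation*}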
Using the mean value theorem, relations (\ref{LegendreJacobi}), (\ref{JacobiDifferen}), (\ref{JacobiMax}) and the estimate (\ref{diam}), we obtain that
\begin{multline}\label{dif2}
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=1}^{[N^{\frac{1}{d}}]}a_{n}\big(P_{n}^{(d)}(1)-P_{n}^{(d)}(\cos\theta_{i}) \big)\\
=
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=1}^{[N^{\frac{1}{d}}]}a_{n}\frac{n!}{(d/2)_{n}}\big(\mathcal{P}_{n}^{(\frac{d}{2}-1, \frac{d}{2}-1)}(1)-\mathcal{P}_{n}^{(\frac{d}{2}-1, \frac{d}{2}-1)}(\cos\theta_{i}) \big)\\
=
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\sum\limits_{n=1}^{[N^{\frac{1}{d}}]}a_{n}\frac{n!}{(d/2)_{n}}\big(1-\cos\theta_{i}\big)\frac{d+n-1}{2}\mathcal{P}_{n-1}^{(\frac{d}{2}, \frac{d}{2})}(\xi_{i}) \\
\leq \frac{1}{N}\sum\limits_{n=1}^{[N^{\frac{1}{d}}]}a_{n}\frac{n!}{(d/2)_{n}}\frac{d+n-1}{2}\frac{C^{2}}{2}N^{-\frac{2}{d}}\mathcal{P}_{n-1}^{(\frac{d}{2}, \frac{d}{2})}(1)
\ll \frac{1}{N^{1+\frac{2}{d}}}\sum\limits_{n=1}^{[N^{\frac{1}{d}}]}a_{n}n^{2},
\end{multline}
where $\xi_{i}\in[\cos\theta_{i}, 1]$.
Combining (\ref{integr})--(\ref{dif2}), Theorem~\ref{theorem_det} is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem_sing}]
Integrating $\tilde{E}(K_{d}, X_{N})$ with respect to the probability measure $d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})$, we obtain
\begin{multline}\label{integrSingular}
\frac{1}{N^{2}}\int\limits_{A_{1}}...\int\limits_{A_{N}}\sum\limits_{i,j=1, \atop i\neq j}^{N}K_{d}(\langle\textbf{x}_{i},\mathbf{x}_{j}\rangle)d\sigma_{1}^{*}(\mathbf{x}_{1})...d\sigma_{N}^{*}(\mathbf{x}_{N})\\
=\frac{1}{N^{2}}\sum\limits_{i,j=1, \atop i\neq j}^{N}\int\limits_{A_{i}}\int\limits_{A_{j}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{j}^{*}(\mathbf{y}) \\
=\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma(\mathbf{x})d\sigma(\mathbf{y})-
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}\int\limits_{A_{i}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y}) \\
=a_{0}-
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}\int\limits_{A_{i}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y}).
\end{multline}
Taking into account that each $A_{i}$ contains the spherical cap $S(\mathbf{y}_{i}; c_{2}N^{-\frac{1}{d}})$ in its interior, we obtain
\begin{multline}\label{integrSingular1}
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}\int\limits_{A_{i}} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y}) \\
=
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}
\int\limits_{S(\mathbf{y}_{i}; c_{2}N^{-\frac{1}{d}})} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y})+\frac{1}{N}\mathcal{O}
\Big(\max\limits_{-1\leq x\leq1-\frac{2c_{2}^{2}}{\pi^{2}}}K_{d}(x)\Big).
\end{multline}
Using formula (\ref{a1}), we have that
\begin{multline}\label{integrSingular2}
\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\int\limits_{A_{i}}
\int\limits_{S(\mathbf{y}_{i}; c_{2}N^{-\frac{1}{d}})} K_{d}(\langle\textbf{x},\mathbf{y}\rangle)d\sigma_{i}^{*}(\mathbf{x})d\sigma_{i}^{*}(\mathbf{y})\\
=\frac{1}{N}
\frac{\Gamma(\frac{d+1}{2})}{\sqrt{\pi}\Gamma(\frac{d}{2})}\int\limits_{\cos(c_{2}N^{-\frac{1}{d}})}^{1}K_{d}(x)(1-x^{2})^{\frac{d}{2}-1}dx.
\end{multline}
Formulas (\ref{integrSingular})-(\ref{integrSingular2}) imply (\ref{theoremSingular}). Theorem~\ref{theorem_sing} is proved.
\end{proof}
\section{Proof of Lemmas 1--2}
\label{proofLem}
\begin{proof}[Proof of Lemma \ref{lem1}]
Applying relations (\ref{r_tDefinition}), (\ref{Pochhammer}), (\ref{gamma}) and (\ref{JacobiIneq}), we find that for $0<\theta<\pi$,
\begin{multline}\label{r_ineq1}
|r_{t}(\cos\theta)|\ll
\sum\limits_{n=t+1}^{\infty}\frac{(n+\frac{d}{2}+K+\frac{1}{2})(\frac{s}{2})_{n}}{\Gamma(n+d+2K-\frac{s}{2}+2 )}\frac{(d+2K+1)_{n}}{(\frac{d}{2}+K+1)_{n}}\Big|\mathcal{P}_{n}^{(\frac{d}{2}+K, \ \frac{d}{2}+K)}(\cos\theta)\Big| \\
\ll
\sum\limits_{n=t+1}^{\infty}n^{-\frac{d}{2}-K+s-1}\Big|\mathcal{P}_{n}^{(\frac{d}{2}+K, \ \frac{d}{2}+K)}(\cos\theta)\Big| \\
\ll
\sum\limits_{n=t+1}^{\infty}n^{-\frac{d}{2}-K+s-1}\Big(n^{-\frac{1}{2}}(\sin\theta)^{-\frac{d}{2}-K-\frac{1}{2}}+ n^{-\frac{3}{2}}(\sin\theta)^{-\frac{d}{2}-K-\frac{3}{2}}\Big) \\
\ll t^{-\frac{d}{2}-K+s-\frac{1}{2}}(\sin\theta)^{-\frac{d}{2}-K-\frac{1}{2}}+ t^{-\frac{d}{2}-K+s-\frac{3}{2}}(\sin\theta)^{-\frac{d}{2}-K-\frac{3}{2}}.
\end{multline}
We define $\theta_{ij}^{\pm}\in[0,\pi]$ by $\cos \theta_{ij}^{\pm}:=\langle\mathbf{x}_{i},\pm\mathbf{x}_{j}\rangle$. Then $\sin \theta_{ij}^{+}=\sin \theta_{ij}^{-}$.
This and formula (\ref{r_ineq1}) imply
\begin{multline}\label{r_ineq2}
E_{r_{t}}(X)\ll t^{-\frac{d}{2}-K+s-\frac{1}{2}}
\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}(\sin\theta_{ij}^{\pm})^{-\frac{d}{2}-K-\frac{1}{2}} \\
+
t^{-\frac{d}{2}-K+s-\frac{3}{2}}\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}(\sin\theta_{ij}^{\pm})^{-\frac{d}{2}-K-\frac{3}{2}}.
\end{multline}
From \cite[(3.30) and (3.33)]{Brauchart-Hesse2007:numerical_integration}, it
follows that
\begin{multline} \label{BrauchartHesse}
\frac{1}{N^{2}}\sum\limits_{j=1}^{N}{\mathop{\sum}\limits^{N}_{
i=1,\atop \mathbf{x}_{i}\in H_{j}^{\pm}\setminus
S(\pm\mathbf{x}_{j}; \frac{c}{n})}}
(\sin\theta_{ij}^{\pm})^{-\frac{d}{2}+\frac{1}{2}-k-L}
\\
\ll 1+n^{L+k-(d+1)/2}, \quad k=0,1,\ldots \quad \text{for }L>\frac{d+1}{2}.
\end{multline}
Choosing $K>\frac{d+1}{2}$ and applying estimates (\ref{NandT}), (\ref{alphaEstim}) and (\ref{BrauchartHesse}) to each term on the right-hand side of (\ref{r_ineq2}), we have that
\begin{multline}\label{r_ineq3}
E_{r_{t}}(X)\ll t^{-\frac{d}{2}-K+s-\frac{1}{2}}
N^{2}(N^{\frac{1}{d}})^{K-\frac{d}{2}+\frac{1}{2}}
+
t^{-\frac{d}{2}-K+s-\frac{3}{2}}N^{2}(N^{\frac{1}{d}})^{K-\frac{d}{2}+\frac{3}{2}} \\
\ll N^{1+\frac{s}{d}}.
\end{multline}
From (\ref{r_ineq3}) we get (\ref{Lemma1}). This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem2}]
The polynomial $h_{t}$ is a spherical polynomial of degree $t$ and $X_{N}$ is a spherical $t$-design. Thus, $h_{t}$ is integrated exactly by an equal-weight integration rule with nodes from $X_{N}$, and
\begin{multline}\label{s_eq1}
E_{h_{t}}(X)=\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in H^{\pm}_{j}\setminus S(\pm \mathbf{x}_{j};\alpha_{N})}^{N}}h_{t}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle) \\
=\sum\limits_{i,j=1}^{N}h_{t}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)-
\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in S(-\mathbf{x}_{j};\alpha_{N})}^{N}}h_{t}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)-Nh_{t}(1) \\
=N^{2}\int\limits_{\mathbb{S}^{d}}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x})-
\sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in S(-\mathbf{x}_{j};\alpha_{N})}^{N}}h_{t}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)-Nh_{t}(1), \quad \mathbf{y}\in \mathbb{S}^{d}.
\end{multline}
We observe that
\begin{align}\label{ineq1}
\Big| \sum\limits_{j=1}^{N}
{\mathop{\sum}\limits_{i=1,\atop \mathbf{x}_{i}\in S(-\mathbf{x}_{j};\alpha_{N})}^{N}}h_{t}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)\Big|\leq N h_{t}(1).
\end{align}
From relations (\ref{Pochhammer}), (\ref{gamma}), (\ref{JacobiMax}) and (\ref{s_tDefinition}) we obtain
\begin{multline}\label{abs_valS_t}
h_{t}(1)
= 2^{d+2K-s+1}\pi^{-\frac{1}{2}}\Gamma\Big(\frac{d}{2}+K+\frac{1}{2}\Big)\Gamma\Big(\frac{d}{2}+K-\frac{s}{2}+1\Big) \\
\times\sum\limits_{n=0}^{t}\frac{(n+\frac{d}{2}+K+\frac{1}{2})(\frac{s}{2})_{n}}{\Gamma(n+d+2K-\frac{s}{2}+2 )}\frac{(d+2K+1)_{n}}{(\frac{d}{2}+K+1)_{n}}\mathcal{P}_{n}^{(\frac{d}{2}+K, \ \frac{d}{2}+K)}(1) \\
= 2^{d+2K-s+1}\pi^{-\frac{1}{2}}\Gamma\Big(\frac{d}{2}+K+\frac{1}{2}\Big)\Gamma\Big(\frac{d}{2}+K-\frac{s}{2}+1\Big) \\
\times\sum\limits_{n=0}^{t}\frac{(n+\frac{d}{2}+K+\frac{1}{2})(\frac{s}{2})_{n}}{\Gamma(n+d+2K-\frac{s}{2}+2 )}\frac{(d+2K+1)_{n}}{(\frac{d}{2}+K+1)_{n}}\frac{\Gamma(n+\frac{d}{2}+K+1)}{\Gamma(\frac{d}{2}+K+1)\Gamma(n+1)}\ll t^{s}.
\end{multline}
Thus, relations (\ref{s_eq1})--(\ref{abs_valS_t}) yield
\begin{align}\label{ineq_s_t1}
E_{h_{t}}(X)=N^{2}\int\limits_{\mathbb{S}^{d}}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\textbf{x})+\mathcal{O}(Nt^{s}), \ \ \mathbf{y}\in \mathbb{S}^{d}.
\end{align}
The expansion (\ref{expansionGegenbauer1}) holds only inside the interval $(-1,1)$.
Thus, we write the integral in (\ref{ineq_s_t1}) in the following way
\begin{multline}\label{s_eq2}
N^{2}\int\limits_{\mathbb{S}^{d}}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\textbf{x}) \\
=
N^{2}\int\limits_{\mathbb{S}^{d}\setminus S(\pm\mathbf{y};\alpha_{N})}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\textbf{x}) +
N^{2}\int\limits_{S(\pm\mathbf{y};\alpha_{N})}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\textbf{x}) \\
=
N^{2}\int\limits_{\mathbb{S}^{d}\setminus S(\pm \mathbf{y};\alpha_{N})}\Big( (1- \langle\mathbf{x},\mathbf{y}\rangle)^{-\frac{s}{2}}-r_{t}(\langle\mathbf{x},\mathbf{y}\rangle) \Big) d\sigma_{d}(\textbf{x})
+
N^{2}\int\limits_{S(\pm \mathbf{y};\alpha_{N})}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\textbf{x}) \\
=
N^{2}\int\limits_{\mathbb{S}^{d}}(1- \langle\mathbf{x},\mathbf{y}\rangle)^{-\frac{s}{2}} d\sigma_{d}(\mathbf{x})+W_{t}(X_{N}), \ \ \mathbf{y}\in \mathbb{S}^{d},
\end{multline}
where we have used the fact that the series $r_{t}(\langle\mathbf{x},\mathbf{y}\rangle)$ converges uniformly for all $\mathbf{x}\in \mathbb{S}^{d}\setminus S(\pm\mathbf{y};\alpha_{N})$,
and
\begin{multline}\label{W_def}
W_{t}(X_{N})=W_{t}(d,s,X_{N}):=
-N^{2}\int\limits_{S(\pm \mathbf{y};\alpha_{N})}(1- \langle\mathbf{x},\mathbf{y}\rangle)^{-\frac{s}{2}} d\sigma_{d}(\mathbf{x}) \\
-N^{2}\int\limits_{\mathbb{S}^{d}\setminus S(\pm \mathbf{y};\alpha_{N})}r_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x})
+
N^{2}\int\limits_{S(\pm \mathbf{y};\alpha_{N})}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x}).
\end{multline}
Now let us show that
\begin{align}\label{W_estim}
|W_{t}(X_{N})|\ll N^{1+\frac{s}{d}}.
\end{align}
For the third term in (\ref{W_def}) the following estimate holds
\begin{multline}\label{ineq2}
\Big|N^{2}\int\limits_{S(\pm\mathbf{y};\alpha_{N})}h_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\textbf{x})\Big|\leq N^{2}h_{t}(1)|S(\textbf{y};\alpha_{N})| \ll
N^{2}t^{s}|S(\textbf{y};\alpha_{N})| \\
\asymp N^{2}t^{s}(1-\cos\alpha_{N})^{\frac{d}{2}}\asymp N^{2}t^{s}(N^{-\frac{2}{d}})^{\frac{d}{2}}=Nt^{s},
\end{multline}
where we have used the formula (\ref{capArea}) for the normalized surface area of a spherical cap and the estimates (\ref{abs_valS_t}) and (\ref{alphaEstim}).
Now we show that
\begin{align}\label{firstTerm}
\Big|N^{2}\int\limits_{S(\pm \mathbf{y};\alpha_{N})}(1- \langle\mathbf{x},\mathbf{y}\rangle)^{-\frac{s}{2}} d\sigma_{d}(\textbf{x})\Big|\ll N^{1+\frac{s}{d}}.
\end{align}
Clearly,
\begin{align}\label{ineq3}
\Big|N^{2}\int\limits_{S(-\mathbf{y};\alpha_{N})}(1- \langle\mathbf{x},\mathbf{y}\rangle)^{-\frac{s}{2}} d\sigma_{d}(\mathbf{x})\Big|\ll N^{2}|S(-\mathbf{y};\alpha_{N})|\ll N.
\end{align}
From (\ref{a1}) we have
\begin{multline}\label{ineq4}
\Big|N^{2}\int\limits_{S(\mathbf{y};\alpha_{N})}(1- \langle\mathbf{x},\mathbf{y}\rangle)^{-\frac{s}{2}} d\sigma_{d}(\mathbf{x})\Big| \\
=N^{2}\frac{\Gamma(\frac{d+1}{2})}{\sqrt{\pi}\Gamma(\frac{d}{2})}\int\limits_{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1}(1-x)^{-\frac{s}{2}}(1-x^{2})^{\frac{d}{2}-1}dx
\ll N^{1+\frac{s}{d}}.
\end{multline}
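The last bound in (\ref{ineq4}) follows from $(1-x^{2})^{\frac{d}{2}-1}\leq 2^{\frac{d}{2}-1}(1-x)^{\frac{d}{2}-1}$ (valid for $d\geq2$) and a direct integration:
\begin{equation*}
N^{2}\int\limits_{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1}(1-x)^{\frac{d}{2}-\frac{s}{2}-1}dx
=\frac{2N^{2}}{d-s}\Big(\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}\Big)^{\frac{d-s}{2}}
\ll N^{2}\cdot N^{-\frac{d-s}{d}}=N^{1+\frac{s}{d}}.
\end{equation*}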
Combining (\ref{ineq3}) and (\ref{ineq4}), we obtain (\ref{firstTerm}).
It remains to examine the second term in (\ref{W_def}). Relation (\ref{a1}) and the estimate (\ref{r_ineq1}) allow us to write
\begin{multline}\label{ineq5}
\Big|N^{2}\int\limits_{\mathbb{S}^{d}\setminus S(\pm \mathbf{y};\alpha_{N})}r_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x})
\Big| \ll
N^{2}\int\limits_{-1+\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}|r_{t}(x)|(1-x^{2})^{\frac{d}{2}-1}dx \\
\ll
N^{2}\int\limits_{-1+\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}} t^{-\frac{d}{2}-K+s-\frac{1}{2}}(\sqrt{1-x^{2}})^{-\frac{d}{2}-K-\frac{1}{2}}(1-x^{2})^{\frac{d}{2}-1}dx \\
+N^{2}\int\limits_{-1+\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}
t^{-\frac{d}{2}-K+s-\frac{3}{2}}(\sqrt{1-x^{2}})^{-\frac{d}{2}-K-\frac{3}{2}}(1-x^{2})^{\frac{d}{2}-1}dx.
\end{multline}
For the first term on the right-hand side of (\ref{ineq5}) we obtain
\begin{multline}\label{ineq6}
\int\limits_{-1+\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}(\sqrt{1-x^{2}})^{-\frac{d}{2}-K-\frac{1}{2}}(1-x^{2})^{\frac{d}{2}-1}dx=
2\int\limits_{\alpha_{N}}^{\frac{\pi}{2}} (\sin y)^{\frac{d}{2}-K-\frac{3}{2}}dy \\
\ll
\int\limits_{\alpha_{N}}^{\frac{\pi}{2}} y^{\frac{d}{2}-K-\frac{3}{2}}dy\ll
(\alpha_{N})^{\frac{d}{2}-K-\frac{1}{2}} \ll
\Big(N^{-\frac{1}{d}}\Big)^{\frac{d}{2}-K-\frac{1}{2}}
\ll t^{-\frac{d}{2}+K+\frac{1}{2}},
\end{multline}
where we have used relations (\ref{sinIneq}) and (\ref{NandT}) and the fact that $K>\frac{d}{2}+1$.
In the same way
\begin{align}\label{ineq7}
\int\limits_{-1+\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}^{1-\frac{c_{1}^{2}}{8N^{\frac{2}{d}}}}(\sqrt{1-x^{2}})^{-\frac{d}{2}-K-\frac{3}{2}}(1-x^{2})^{\frac{d}{2}-1}dx\ll t^{-\frac{d}{2}+K+\frac{3}{2}}.
\end{align}
Thus, relations (\ref{ineq5})-(\ref{ineq7}) yield
\begin{align}\label{ineq8}
\Big|N^{2}\int\limits_{\mathbb{S}^{d}\setminus S(\pm \mathbf{y};\alpha_{N})}r_{t}(\langle\mathbf{x},\mathbf{y}\rangle)d\sigma_{d}(\mathbf{x})
\Big| \ll N^{1+\frac{s}{d}}.
\end{align}
Combining (\ref{W_def}), (\ref{ineq2}), (\ref{firstTerm}) and (\ref{ineq8}), we obtain the desired estimate (\ref{W_estim}).
Formulas (\ref{main_term}), (\ref{ineq_s_t1}), (\ref{s_eq2}) and (\ref{W_estim}) imply (\ref{Lemma2}).
Lemma~\ref{lem2} is proved.
\end{proof}
\section*{References}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\author{X.-W. Luo$^{1,2}$, J. J. Hope$^{3}$,
B. Hillman$^{3}$, and T. M. Stace$^{1}$}\email[]{[email protected]}
\affiliation{$^1$ARC Centre for Engineered Quantum Systems, University of Queensland, St Lucia, QLD 4072, Australia}
\affiliation{$^2$Key Lab of Quantum Information, CAS,
University of Science and Technology of China,
Hefei, Anhui, 230026, P.R. China}
\affiliation{$^3$Department of Quantum Science,
Research School of Physics and Engineering,
The Australian National University, Canberra, ACT 0200, Australia}
\title{\textbf{Diffusion Effects in Gradient Echo Memory}}
\begin{abstract}
We study the effects of diffusion on a $\Lambda$-gradient echo memory, which is a coherent optical quantum memory using thermal gases. The efficiency of this memory is high for short storage times, but decreases exponentially due to decoherence as the storage time is increased. We study the effects of both longitudinal and transverse diffusion in this memory system, and give both analytical and numerical results that are in good agreement. Our results show that diffusion has a significant effect on the efficiency. Further, we suggest ways to reduce these effects to improve storage efficiency.
\end{abstract}
\maketitle
\section{Introduction}
Quantum memory is an important tool in many quantum information protocols, including quantum repeaters for long-distance quantum communication \cite{duan2001long}, and identity quantum gates in quantum computation \cite{knill2001scheme}. Numerous optical quantum memories have been developed, including electromagnetically-induced transparency (EIT) based quantum memory \cite{fleischhauer2005electromagnetically, fleischhauer2000dark}, far-detuned Raman process memory \cite{nunn2006modematching, nunn2007mapping}, and photon-echo quantum memories: controlled reversible inhomogeneous broadening (CRIB) memory \cite{kraus2006quantum, sangouard2007analysis}, atomic frequency combs (AFC) memory \cite{afzelius2009multimode}, and gradient echo memory (GEM) \cite{hetet2006gradient, hetet2008electro, hetet2008quantum}. A review of these schemes can be found in \cite{lvovsky2009optical}. Of these schemes, the most impressive efficiency attained experimentally so far is 87\% by the $\Lambda$-GEM scheme \cite{hosseini2011high} using warm rubidium vapor. In this paper, we will examine the effects of atomic diffusion on the $\Lambda$-GEM system, which may limit this efficiency for larger storage times.
$\Lambda$-GEM is a memory using a 3-level, $\Lambda$-type atom (Fig. \ref{fig:3-lvl}). The input optical pulse couples the two metastable lower states through a control field. The excited state is coupled in the far-detuned regime, so the 3-level atoms can be treated as effective 2-level atoms. These effective two-level atoms have linearly increasing atomic Zeeman shifts along the length of the storage medium. The pulse is first absorbed; then, by simply reversing the sign of the magnetic field, the pulse is retrieved in the forward direction. The incident signal field is converted to a collective atomic excitation known as a spin wave, which is distributed as a function of position. The Brownian motion of the gaseous atoms causes diffusion, which disturbs the spatial coherence of the atomic spin wave, leading to decoherence. For a plane wave, only axial diffusion is important, but transverse diffusion becomes significant when a realistic beam profile is included. There has been recent interest in the effects of diffusion in EIT systems \cite{firstenberg2008theory, firstenberg2009elimination, firstenberg2010self}. In this work, we study the effects of diffusion in the $\Lambda$-GEM system, giving analytical and numerical results. We also suggest ways to reduce these effects to improve storage efficiency.
\begin{figure}
\caption{\footnotesize{Level structure of the $\Lambda$-type 3-level atom.}}
\label{fig:3-lvl}
\section{$\Lambda$ Gradient Echo Memory}
We consider a medium consisting of $\Lambda$-type 3-level atoms with two metastable lower states as shown in Fig.~\ref{fig:3-lvl}.
The ground state $\vert 1\rangle$ and the excited state $\vert 3 \rangle$ are coupled by a weak optical field, the positive frequency component of the electric field is described by the slowly varying operator
\begin{equation}
\hat{E}(\textbf{r},t)=\sum_{\textbf{k}}\sqrt{\frac{1}{V}} a_{\textbf{k}}(t)e^{i\textbf{k}\cdot \textbf{r}}e^{-ik_0z}e^{i\omega_0 t}
\end{equation}
where $\Delta$ is the detuning of the field from the $\vert 1\rangle \to \vert 3 \rangle$ transition, $V$ is the quantization volume, $\omega_0$ is the carrier frequency of the quantum field, and $k_0=\omega_0/c$. The excited state $\vert 3 \rangle$ is also coupled to the metastable state $\vert 2 \rangle$ via a coherent control field with Rabi frequency $\Omega_c$ and a two-photon detuning $\delta$. This two-photon detuning is spatially varied, $\delta(z,t)=\eta(t) z$, with time-dependent gradient $\eta(t)$. The interaction Hamiltonian in the rotating frame with respect to the field frequencies is then
\begin{equation}
\begin{split}
\hat{H}=&\sum_{n}\Big[\hslash \Delta \sigma_{33}^{(n)} + \hslash \delta(z_n,t) \sigma_{22}^{(n)} \\
+ &\hslash \sum_{\textbf{k}}\left(g_{\textbf{k}} a_{\textbf{k}} e^{i\textbf{k}\cdot \textbf{r}_n} \sigma_{31}^{(n)} + \mathrm{h.c.}\right) + \hslash \left(\Omega_c(\textbf{r}_n) \sigma_{32}^{(n)} + \mathrm{h.c.}\right)\Big],
\end{split}
\end{equation}
where $g_{\textbf{k}}=\wp \sqrt{\frac{\omega_{\textbf{k}}}{2\hslash \epsilon_0 V}}$ is the atom-field coupling constant with $\wp$ being the dipole moment of the 1-3 transition, and $\sigma_{\mu \nu}^{(n)}=\vert \mu \rangle_n \langle \nu \vert$ is an operator acting on the $n$-th atom at $\textbf{r}_n=(x_n,y_n,z_n)$.
We assume that initially all atoms are in their ground state $\vert 1\rangle$. We transform to collective operators, which are averages of the atomic operators over a small volume centered at $\textbf{r}$ containing $N_{\textbf{r}} \gg 1$ atoms,
\begin{equation}
\sigma_{\mu \nu}(\textbf{r},t)=\frac{1}{N_{\textbf{r}}}\sum_{j=1}^{N_{\textbf{r}}}\sigma^{(j)}_{\mu \nu}(t)
\end{equation}
From the Heisenberg-Langevin equations in the weak-probe regime ($\sigma_{11}\simeq 1, \sigma_{22}\simeq\sigma_{33}\simeq 0$), we get the Maxwell-Bloch equations \cite{walls2008quantum},
\begin{equation}\label{M-B-1}
\begin{split}
&\dot{\sigma}_{13}^{(n)}=-(\gamma_{13}+i\Delta)\sigma_{13}^{(n)}+ige^{ik_0z_n}E(\textbf{r}_n,t)+i
\Omega_c
e^{ik_cz_n}\sigma_{12}^{(n)},\\
&\dot{\sigma}_{12}^{(n)}=-(\gamma_{12}+i\delta(z_n,t))\sigma_{12}^{(n)}+
i\Omega_c e^{-ik_cz_n}\sigma_{13}^{(n)},\\
&\left( \frac{\partial}{\partial t}+c\frac{\partial}{\partial z}-ic\frac{\nabla^2_x+\nabla^2_y}{2k_0}\right)E(\textbf{r},t)
=igNe^{-ik_0z}\sigma_{13}(\textbf{r},t),
\end{split}
\end{equation}
where $\gamma_{\nu \mu}$ are the decay rates, $g=\wp \sqrt{\frac{\omega_0}{2\hslash \epsilon_0}}$, and $N$ is the atomic density. We have assumed that $g_{\textbf{k}}\simeq\wp \sqrt{\frac{\omega_0}{2\hslash \epsilon_0 V}}$ and $\Omega_c(\textbf{r})=\Omega_c e^{ik_cz}$. We also omit the Langevin noise operators since here we are more interested in the decoherence caused by diffusion. This is equivalent to making a semiclassical approximation for the electric field and the atomic coherences.
In Eq.~(\ref{M-B-1}), $ic\frac{\nabla^2_x+\nabla^2_y}{2k_0}$ is the diffraction term, and generally, the diffraction effects can be neglected \cite{stace2010theory}. Notice that we are here considering the regime $t_p\gg L/c$, where $2t_p$ is the temporal width of the signal, and $2L$ is the length of the medium. This allows us to neglect temporal retardation effects, i.e., we can neglect the temporal derivative in the third equation of Eq.~(\ref{M-B-1}).
Also, since the atoms are far detuned ($\Delta\gg \gamma_{13}, \Omega_c$), we adiabatically eliminate the fast oscillations and set $\dot{\sigma}_{13}^{(n)}=0$. Then we have $\sigma_{13}= \left(ge^{ik_0z}E+\Omega_ce^{ik_cz}\sigma_{12} \right)/\Delta$, and we get the reduced Maxwell-Bloch equations,
\begin{equation}\label{M-B-2}
\begin{split}
\dot{\sigma}_{12}^{(n)}=&-(i\delta(z_n,t)-i\frac{\Omega_c^2}{\Delta})\sigma_{12}^{(n)}\\
&+i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z_n}E(\textbf{r}_n,t),\\
\frac{\partial}{\partial z}E(\textbf{r},t)=&i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\sigma_{12}(\textbf{r},t)\\
&+i\frac{g^2N}{c \Delta}E(\textbf{r},t).
\end{split}
\end{equation}
Here we neglect decay, i.e.\ $\gamma_{12}\rightarrow0$, since we consider storage times much shorter than $1/\gamma_{12}$.
\section{Diffusion}
We now consider the effects of diffusion on the atomic state. In order to isolate the motional effects of diffusion from collisional dephasing, we assume that the collisions between atoms do not change the state of the atom. Then we derive the diffusion equation for the atomic density matrix $\rho$. Space is divided into volume elements with length $\Delta r$ and center $r$. We associate a density matrix $\rho(r,t)$ with atoms in this volume element, given by
\[\rho(r,t)=\frac{1}{N_r}\sum_{j=1}^{N_r}\rho^{(j)}(t),\]
where $N_r$ is the atom number in volume centered at $r$. The total density matrix for the entire system is assumed to be the tensor product of these local density matrices.
Diffusion causes an exchange of atoms between adjacent volumes. During a short time $\Delta t$, a fraction $\epsilon$ of the atoms in slice $r$ migrate into slice $r\pm\Delta r$. There is also atomic flux back into slice $r$ from $r\pm\Delta r$. We assume that the total number density of the atoms is uniform, so the state at $r$ and $t+\Delta t$ is described by the new density matrix which is the average of the density matrix of atoms remaining in the volume and those that have migrated in to it. The diffusive component of the evolution is therefore
\begin{equation}
\begin{split}
\rho(r,t+\Delta t)=&(1-2\epsilon)\rho(r,t)\\
&+\epsilon(\rho(r+\Delta r,t)+\rho(r-\Delta r,t))\\
\Rightarrow \partial_t\rho(r,t)=&D\nabla^2\rho(r,t)
\end{split}
\end{equation}
where $D=\epsilon \Delta r^2/\Delta t$ is the diffusion coefficient.
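The exchange rule above can be checked directly: iterating it on a grid makes the variance of the distribution grow as $2Dt$, with $D=\epsilon \Delta r^2/\Delta t$. A minimal sketch in Python (grid sizes and rates are illustrative choices, not from the text):

```python
import numpy as np

# Discrete check of the exchange rule:
#   rho(r, t+dt) = (1 - 2 eps) rho(r) + eps [rho(r+dr) + rho(r-dr)]
# makes the variance of the distribution grow as 2 D t,
# with D = eps dr^2 / dt.
eps, dr, dt, n_steps = 0.25, 0.1, 1e-3, 2000
D = eps * dr ** 2 / dt              # = 2.5 here

x = np.arange(-400, 401) * dr
rho = np.zeros_like(x)
rho[400] = 1.0                      # point-like initial distribution at x = 0

for _ in range(n_steps):
    rho = (1 - 2 * eps) * rho + eps * (np.roll(rho, 1) + np.roll(rho, -1))

var = np.sum(rho * x ** 2) / np.sum(rho)   # initial mean and variance are zero
print(var, 2 * D * n_steps * dt)           # both ≈ 10.0
```

The variance increment per step is exactly $2\epsilon\Delta r^2$ for this random walk, so the discrete iteration reproduces $\langle r^2\rangle = 2Dt$ to machine precision as long as the distribution stays well inside the grid.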
With the same consideration, we get the diffusive component evolution for the atomic correlation functions
\begin{equation}
\dot{\sigma}_{\mu \nu}(r,t)=D\nabla^2\sigma_{\mu \nu}(r,t).
\end{equation}
Now we introduce the interaction with optical fields. Since diffusion is caused by Brownian motion, it leads to Doppler shifts in the various detunings. We now consider the interaction between the optical field and a single atom, and quantify the effects of these Doppler shifts. The atom moves at some random velocity, so there is a Doppler shift for both the signal and control fields. The detunings in Eq.~(\ref{M-B-1}) therefore become $\Delta=\Delta_0+\Delta_{Dopp}$ and $\delta=\delta_0+\delta_{Dopp}$, with $\Delta_0, \delta_0$ the detunings for stationary atoms and $\Delta_{Dopp}, \delta_{Dopp}$ the Doppler shifts. Typically, the one-photon Doppler shift satisfies $\Delta_{Dopp}\ll\Delta_0$, so state $\vert 3\rangle$ remains far detuned. The adiabatic elimination is therefore still valid in the presence of the Brownian-motion-induced Doppler shift, and we can still reduce the 3-level atom to an effective 2-level atom. The Maxwell-Bloch equations still reduce to Eq.~(\ref{M-B-2}), but with one-photon detuning $\Delta=\Delta_0+\Delta_{Dopp}$ and two-photon detuning $\delta=\delta_0+\delta_{Dopp}$. So, for the reduced two-level atomic system, the diffusive Maxwell-Bloch equations for the collective correlation $\sigma_{12}(\textbf{r},t)$ averaged over atoms in each volume are
\begin{equation}\label{M-B-tr}
\begin{split}
\dot{\sigma}_{12}(\textbf{r},t)=&i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z}E(\textbf{r},t)\\
&-i\delta(z,t)\sigma_{12}(\textbf{r},t)+D\nabla^2\sigma_{12}(\textbf{r},t),\\
\frac{\partial}{\partial z}E(\textbf{r},t)=&
i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\sigma_{12}(\textbf{r},t)\\
&+i\frac{g^2N}{c\Delta}E(\textbf{r},t).
\end{split}
\end{equation}
We have absorbed the Stark shift $\frac{\Omega_c^2}{\Delta}$ into the two-photon detuning. Here our diffusive
Maxwell-Bloch equation is consistent with the result in the EIT system \cite{firstenberg2008theory, firstenberg2010self}.
Notice that the signal and control fields are co-propagating, so the Doppler broadening width for $\delta$ is typically $1$ kHz, which is much smaller than the frequency width of the signal field ($\sim 1$ MHz). We therefore neglect this two-photon Doppler broadening $\delta_{Dopp}$ and replace $\delta(z,t)$ by $\delta_0(z,t)=\eta(t) z$, with $z\in[-L,L]$.
For the one photon detuning $\Delta=\Delta_0+\Delta_{Dopp}$, after we make the adiabatic elimination, it will appear in the denominator (see Eq.~(\ref{M-B-tr})), so
\[\frac{1}{\Delta}\simeq\frac{1}{\Delta_0}\left(1-\frac{\Delta_{Dopp}}{\Delta_0}+\left(\frac{\Delta_{Dopp}}{\Delta_0}\right)^2\right).\] The term linear in $\Delta_{Dopp}$ will vanish when we average over many atoms in a volume centred at $r$, so we can replace $\Delta$ by $\Delta_0$ in our Maxwell-Bloch equation, with second order accuracy [typically $(\Delta_{Dopp}/\Delta_0)^2\sim10^{-3}$].
\section{Analytic Calculation and Numerical Simulation}
To quantify the effects of diffusion, we solve for the atomic dynamics. There are three distinct phases during storage: write-in ($-t_0<t<0$), during which the signal is absorbed by the memory; hold ($0<t<t_H$), during which the information is stored in the memory and the gradient is turned off; and read-out ($t_H<t<t_H+t_0$), during which the signal is emitted by turning on the flipped gradient.
We quantify the effects of diffusion by the read-out efficiency $\varepsilon$ defined to be
\begin{equation}\label{eq:eff1}
\varepsilon=\frac{\int_{t_H}^{t_H+t_0}|f_{out}(t)|^2 dt}{\int_{-t_0}^{0}|f_{in}(t)|^2 dt}
\end{equation}
where $f_{out}(t)=E(z=L,t>t_H)$ is the output field and $f_{in}(t)=E(z=-L,t<0)$ is the input field. We solve for $f_{out}(t)$ both numerically and analytically, and consider the effects of diffusion in the axial (longitudinal) and radial (transverse) directions separately.
\subsection{Longitudinal diffusion}
For a uniform plane wave, transverse diffusion is irrelevant. We replace $\textbf{r}$ by $z$ in Eq.~(\ref{M-B-tr}) and consider longitudinal diffusion in a 1-dimensional model. Now the Maxwell-Bloch equation is
\begin{equation}\label{M-B-Eq}
\begin{split}
\dot{\sigma}_{12}(z,t)=&i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z}E(z,t)\\
&-i\delta(z,t)\sigma_{12}(z,t)+D\nabla_z^2\sigma_{12}(z,t),\\
\frac{\partial}{\partial z}E(z,t)=&
i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\sigma_{12}(z,t)\\
&+i\frac{g^2N}{c\Delta}E(z,t).
\end{split}
\end{equation}
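For concreteness, Eq.~(\ref{M-B-Eq}) can also be integrated directly with a few lines of code. The sketch below is only an illustration (our simulations in this paper use XMDS \cite{XMDS}), and all parameter values in it are arbitrary dimensionless choices: it propagates the write-in phase with $c=N=1$, $D=\gamma_{12}=0$, and the phases $e^{\pm i(k_0-k_c)z}$ absorbed as in Appendix A. Because $E$ carries no time derivative in this regime, it is slaved to $\sigma_{12}$ at every step, and photon-flux conservation, $\partial_t N\!\int\!|\sigma_{12}|^2\,dz = c|E(-L,t)|^2-c|E(L,t)|^2$, gives a built-in consistency check:

```python
import numpy as np

# Illustrative write-in simulation of the 1D GEM equations in
# dimensionless units (c = N = 1), with decay and diffusion off and
# phases e^{±i(k0-kc)z} absorbed as in Appendix A.  All values below
# (eta, beta, t_p, t_in, grids) are arbitrary illustrative choices.
c, L, N = 1.0, 1.0, 1.0
eta, beta = 10.0, 3.0                 # gradient and optical depth
g_eff = np.sqrt(beta * eta * c / N)   # beta = g_eff^2 N / (eta c)
t_p, t_in = 0.3, 0.9
nz, dt = 400, 2e-3
z = np.linspace(-L, L, nz)
dz = z[1] - z[0]

def f_in(t):
    return np.exp(-(t + t_in) ** 2 / t_p ** 2)

def field(sigma, t):
    # E is slaved to sigma: dE/dz = i (g_eff N / c) sigma, E(-L) = f_in(t)
    rhs = 1j * g_eff * N / c * sigma
    E = np.empty(nz, dtype=complex)
    E[0] = f_in(t)
    E[1:] = E[0] + np.cumsum(0.5 * (rhs[1:] + rhs[:-1])) * dz
    return E

def dsigma_dt(sigma, t):
    return -1j * eta * z * sigma + 1j * g_eff * field(sigma, t)

sigma = np.zeros(nz, dtype=complex)
flux_in = flux_out = 0.0
t = -2.0
for _ in range(1000):                 # RK4, evolving t from -2 to 0
    k1 = dsigma_dt(sigma, t)
    k2 = dsigma_dt(sigma + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = dsigma_dt(sigma + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = dsigma_dt(sigma + dt * k3, t + dt)
    sigma = sigma + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    flux_in += c * abs(f_in(t)) ** 2 * dt
    flux_out += c * abs(field(sigma, t)[-1]) ** 2 * dt

stored = N * np.sum(np.abs(sigma) ** 2) * dz
print(stored + flux_out, flux_in)     # photon-flux balance
```

With these values the pulse bandwidth lies well inside the memory bandwidth $|\eta L|$ and the optical depth is $\beta=3$, so nearly all of the input flux ends up in the spin wave.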
We now investigate the longitudinal diffusion effects during the write-in process, the hold time, and the read-out process separately.
To compute $f_{out}(t)$, we evolve Eq.~(\ref{M-B-Eq}) using $\eta$ as in Fig. \ref{fig:k-t} (a). Following the method given in \cite{hetet2006gradient}, we first propagate
$E(z,t)$ and $\sigma_{12}(z,t)$ forward with boundary condition $E(z=-L,t<0)=f_{in}(t)$ to find their values at time $t_H$. Then we propagate $E$ and $\sigma_{12}$ backward to time $t_H$, with final condition $E(z=L,t>t_H)=f_{out}(t)$, and solve for $f_{out}(t)$ by matching the two solutions at time $t_H$.
\emph{Write:} Considering the diffusion effects during the write-in process, we find that (see Appendix A)
\begin{equation}
f_{out}(t_H+t)=e^{\frac{-D}{3\eta}(k_i^3-(k_i-\eta t)^3)}f_{in}(-t)\bar{G}
\end{equation}
where $k_i=\frac{g^2N}{c\Delta}+k_0-k_c-\frac{\beta}{L}$ is the initial spatial frequency of $\sigma_{12}(z,t)$, and \[\bar{G}=\vert\eta L\left(t+\frac{\beta}{\eta L}\right)\vert ^{-i2\beta}e^{i\frac{2Lg^2N}{c\Delta}}e^{-i\frac{g_{eff}^2N}{c\eta \left(t+\frac{\beta}{\eta L}\right)}t_H}\Gamma(i\beta)/\Gamma(-i\beta)\] is a phase factor, with $g_{eff}=g\Omega_c/\Delta$ and $\beta=\frac{g_{eff}^2N}{\eta c}$.
For a pulse with Gaussian temporal profile $f_{in}=Ae^{-(t+t_{in})^2/t_p^2}$, we find
\begin{equation}
\varepsilon_{W}=\frac{\int_{-t_0}^0 dte^{-2(t+t_{in})^2/t_p^2} e^{\frac{-2D}{3\eta}(k_i^3-(k_i+\eta t)^3)}}{\int_{-t_0}^0 dte^{-2(t+t_{in})^2/t_p^2}}.
\end{equation}
Typically, $D\eta^2t_p^3$ is very small, and the efficiency is then
\begin{equation}\label{eq:eff-w-1}
\varepsilon_{W}=\sqrt{\alpha_{W}} e^{-\tau_{W}}+O(D^2\eta^4t_p^6)
\end{equation}
where $\alpha_{W}=\frac{1}{1-D\eta^2t_p^2(k_i/\eta-t_{in})}$ and $\tau_{W}=\frac{2D\eta^2}{3}\left[\left(\frac{k_i}{\eta}\right)^3-\left(\frac{k_i}{\eta}-t_{in}\right)^3\right]$ are dimensionless parameters.
For typical experimental parameters, $\alpha_{W}\simeq 1$, then
\begin{equation}\label{eq:eff-w}
\varepsilon_{W}\simeq e^{-\tau_{W}}.
\end{equation}
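The expansion leading to Eq.~(\ref{eq:eff-w-1}) is easy to check by evaluating the exact ratio of integrals numerically; the sketch below uses arbitrary dimensionless parameters:

```python
import numpy as np

# Numerical check of the write-in efficiency formula,
#   eps_W = sqrt(alpha_W) exp(-tau_W) + O(D^2 eta^4 t_p^6),
# against the exact ratio of Gaussian-weighted integrals.
# Parameter values are dimensionless illustrative choices.
D, eta, k_i, t_in, t_p = 1e-3, 1.0, 5.0, 2.0, 0.5
t0 = 4.0

t = np.linspace(-t0, 0.0, 200001)
gauss = np.exp(-2 * (t + t_in) ** 2 / t_p ** 2)
decay = np.exp(-2 * D / (3 * eta) * (k_i ** 3 - (k_i + eta * t) ** 3))
eps_num = np.sum(gauss * decay) / np.sum(gauss)

tau_W = 2 * D * eta ** 2 / 3 * ((k_i / eta) ** 3 - (k_i / eta - t_in) ** 3)
alpha_W = 1.0 / (1.0 - D * eta ** 2 * t_p ** 2 * (k_i / eta - t_in))
eps_formula = np.sqrt(alpha_W) * np.exp(-tau_W)
print(eps_num, eps_formula)   # both ≈ 0.937 for these parameters
```

The residual difference is of the quoted order $O(D^2\eta^4t_p^6)$ and vanishes as $D$ is reduced.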
We also numerically solve Eq.~(\ref{M-B-Eq}) with diffusion during the write-in process, using XMDS \cite{XMDS}. We calculate the efficiency for different values of the diffusion rate $D$, input time $t_{in}$, etc. The results are shown in Fig.~\ref{fig:1D_input} (points are numerical results, and the curve is Eq.~(\ref{eq:eff-w})). We plot the efficiency $\varepsilon_{W}$ with respect to the rescaled dimensionless parameter $\tau_{W}$, so all the points with different parameters collapse on a single curve.
\emph{Hold:} During the storage time $[0,t_H]$, we find (see Appendix A)
\begin{equation}
f_{out}(t_H+t)=e^{-Dt_H(k_i-\eta t)^2 }f_{in}(-t)\bar{G}.
\end{equation}
For the above Gaussian shape input, the efficiency is given by
\begin{equation}\label{eq:eff-hold}
\varepsilon_H=\sqrt{\alpha_H}e^{-2\alpha_H \tau_H},
\end{equation}
where $\alpha_H=\frac{1}{t_p^2}/\left(\frac{1}{t_p^2}+Dt_H\eta^2\right)$ and $\tau_H=Dt_Hk_H^2$ are dimensionless parameters, with $k_H=k_i-\eta t_{in}$.
For typical experimental parameters, $\alpha_H\simeq 1$ and we have
\begin{equation}\label{eq:eff-H}
\varepsilon_H\simeq e^{-2 \tau_H }.
\end{equation}
We also numerically solve Eq.~(\ref{M-B-Eq}) with diffusion during hold time, using XMDS.
We calculate the efficiency for different values of the diffusion rate $D$, storage time $t_H$ etc. The results are shown in Fig. \ref{fig:1D_hold} (points are numerical results, and curve is Eq.~(\ref{eq:eff-H})). We plot the efficiency $\varepsilon_H$ with respect to the rescaled dimensionless parameter $\tau_H$, so all the points with different parameters collapse on a single curve.
\emph{Read:} The diffusion effects during the read-out process are the same as those during the write-in process (see Appendix A), so we simply have\[\varepsilon_{R}=\varepsilon_{W}.\]
\begin{figure}
\caption{\footnotesize{The efficiency decay with respect to the dimensionless parameter $\tau_{W}$ for longitudinal diffusion during the write-in process; points are numerical results, and the curve is Eq.~(\ref{eq:eff-w}).}}
\label{fig:1D_input}
\end{figure}
\begin{figure}
\caption{\footnotesize{The efficiency decay with respect to the dimensionless parameter $\tau_H$ for longitudinal diffusion during hold time; the points are numerical results, and the curve is Eq.~(\ref{eq:eff-H}).}}
\label{fig:1D_hold}
\end{figure}
\subsection{Transverse diffusion}
We now quantify the effects of diffusion for a beam with realistic transverse Gaussian profile.
The efficiency for a 3-dimensional model is defined as
\begin{equation}\label{eq:3d-eff}
\varepsilon=\frac{\int|f_{out}(x,y,t_H+t)|^2 dxdydt}{\int|f_{in}(x,y,t)|^2 dxdydt}
\end{equation}
Eq.~(\ref{M-B-tr}) can be solved in Fourier space $(k_x,k_y)$; by Parseval's theorem, the efficiency can equivalently be written as
\begin{equation}
\varepsilon=\frac{\int |f_{out}(k_x,k_y,t_H+t)|^2 dk_xdk_ydt}{\int |f_{in}(k_x,k_y,t)|^2 dk_xdk_ydt}
\end{equation}
Eq.~(\ref{M-B-tr}) can be reduced to a quasi-1D problem, and can be solved as before (see Appendix B).
For transverse diffusion, the output pulse will be
\begin{equation}\label{eq:3d-out}
f_{out}(k_x,k_y,t_H+t)=e^{-2\gamma_k t}e^{-\gamma_k t_H}f_{in}(k_x,k_y,-t)\bar{G},
\end{equation}
where $\gamma_k=D(k_x^2+k_y^2)$.
If the input pulse has both Gaussian temporal and transverse profile,
\[f_{in}(x,y,t)=Ae^{-(x^2+y^2)/a^2}e^{-(t+t_{in})^2/t_p^2},\]
then $\gamma_k t_p\sim Dt_p/a^2$, which is typically small.
Thus the memory efficiency is
\begin{equation}\label{eq:eff-tr}
\varepsilon_{\perp}=\frac{1}{1+\tau_{\perp}}+O(\gamma_k^2t_p^2),
\end{equation}
where $\tau_{\perp}=4D(t_H+2t_{in})/a^2$ is a dimensionless parameter.
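Eq.~(\ref{eq:eff-tr}) follows from a Gaussian integral over the transverse wavenumbers and can be verified numerically. In the $\gamma_k t_p\to 0$ limit the temporal integral contributes only the decay factor $e^{-4\gamma_k t_{in}}$ at the pulse centre, and the remaining radial integral gives exactly $1/(1+\tau_{\perp})$; the parameter values below are illustrative dimensionless choices:

```python
import numpy as np

# Check of eps_perp = 1/(1 + tau_perp) for a Gaussian transverse
# profile f_in ~ exp(-(x^2+y^2)/a^2), whose Fourier power spectrum is
# |f_in(k)|^2 ~ exp(-a^2 k^2 / 2).  Working in the gamma_k t_p -> 0
# limit, the k-dependent decay is exp(-2 gamma_k t_H - 4 gamma_k t_in).
D, a, t_H, t_in = 0.01, 1.0, 5.0, 1.0

k = np.linspace(0.0, 30.0 / a, 200001)    # radial wavenumber grid
spec = k * np.exp(-a ** 2 * k ** 2 / 2)   # 2D polar measure included
decay = np.exp(-2 * D * k ** 2 * t_H - 4 * D * k ** 2 * t_in)
eps_num = np.sum(spec * decay) / np.sum(spec)

tau_perp = 4 * D * (t_H + 2 * t_in) / a ** 2
print(eps_num, 1.0 / (1.0 + tau_perp))    # both ≈ 0.781 here
```

Both integrals are Gaussian, so the identity is exact in this limit; the quadrature simply confirms it.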
We also numerically solve Eq.~(\ref{M-B-tr}) with $\nabla^2=\nabla_x^2+\nabla_y^2$.
We calculate the efficiency for different values of $a$, $t_H$ etc. The results are shown in Fig. \ref{fig:3D_trans} (points are numerical results, and curve is Eq.~(\ref{eq:eff-tr})). We plot the efficiency $\varepsilon_{\perp}$ with respect to the rescaled dimensionless parameter $\tau_{\perp}$, so all the points with different parameters collapse on a single curve.
\begin{figure}
\caption{\footnotesize{Efficiency decay with respect to $\tau_{\perp}$ for transverse diffusion; points are numerical results, and the curve is Eq.~(\ref{eq:eff-tr}).}}
\label{fig:3D_trans}
\end{figure}
\subsection{Total diffusion}
Experimentally, longitudinal and transverse diffusion coexist during the whole process.
Combining all the diffusive contributions mentioned above, we get the output field as (Appendix B)
\begin{equation}
\begin{split}
f_{out}(k_x,k_y,t_H+t)=&e^{\frac{-2D}{3\eta}(k_i^3-(k_i-\eta t)^3)}e^{-Dt_H(k_i-\eta t)^2 }\\
\times &e^{-2\gamma_k t-\gamma_k t_H}f_{in}(k_x,k_y,-t)\bar{G}.
\end{split}
\end{equation}
We consider an input pulse with Gaussian temporal and transverse profiles as above; typically, $D\eta^2t_p^3$ and $\gamma_kt_p$ are very small. The total efficiency is then
\begin{equation}
\begin{split}
\varepsilon_{tot}
=&\sqrt{\frac{1}{1/\alpha_H + 2/\alpha_{W} -2}}e^{-2\tau_{W}}e^{-2\tau_{H}}\frac{1}{1+\tau_{\perp}}\\
&+O[(D\eta^2t_p^3,\gamma_kt_p)^2]
\end{split}
\end{equation}
Typically, $\alpha_H \simeq 1, \alpha_{W} \simeq 1$, so we have
\[\varepsilon_{tot} \simeq \varepsilon_{W}\times \varepsilon_{H}\times \varepsilon_{R}\times \varepsilon_{\perp}.\]
\subsection{Efficiency optimization and estimation}
Our model does not include other decoherence processes, such as control-field-induced scattering and ground-state decoherence. Our results quantify only the effects of motional diffusion on GEM efficiency, and therefore represent upper estimates for the performance of GEM.
Structures with larger spatial frequency will decay faster under diffusion. For the 1D model during hold time, we have (see Appendix A)
\begin{equation}\label{eq:t=0}
\sigma_{12}(k,t)\propto f_{in}(\frac{k-k_i}{\eta})
\end{equation}
The input pulse is centered at $-t_{in}$, so we have $k\sim k_H=k_i-\eta t_{in}$. We see that $k$ will increase or decrease as $t_{in}$ increases, depending on the sign of $\eta$. During the hold time $[0,t_H]$, the gradient is turned off, and $k$ will hold its value. The read-out process is symmetric to the write-in process (see Fig.~\ref{fig:k-t}), returning the quasi-momentum to its original distribution. Fig.~\ref{fig:k-t-n} shows the numerical result obtained by solving the Maxwell-Bloch equations.
\begin{figure}
\caption{\footnotesize{(a) The gradient is turned off during hold time, and flipped for read-out. (b) An illustration of the spatial frequency for $\sigma_{12}$.}}
\label{fig:k-t}
\end{figure}
\begin{figure}
\caption{\footnotesize{Numerical results for $|\sigma_{12}|$.}}
\label{fig:k-t-n}
\end{figure}
One way to reduce the effects of diffusion is to remove the gradient during the storage part of the process, and only turn on the flipped gradient during readout.
For realistic $t_{in}$ and $\eta$, we can obtain zero spatial frequency, $k_H=0$, for which $\tau_H=0$; this minimises the diffusional decay rate. We then have $\varepsilon_H=\sqrt{\alpha_H}$ and $\varepsilon_{W}=e^{-2 D k_i^2 t_{in}/3}$. Including transverse diffusion, we get the total efficiency for an input field with transverse Gaussian profile
\begin{equation}
\varepsilon_{tot} =e^{-4 D k_i^2 t_{in}/3}\sqrt{\alpha_H}\frac{1}{1+4D(t_H+2t_{in})/a^2}.
\end{equation}
The efficiency can be improved further by choosing a larger transverse width $a$; i.e., the effects of transverse diffusion are reduced by using a smooth field in the transverse direction.
We note that the circumstances in which a GEM will be useful are those for which all dephasing, including that due to diffusion, is small. In this limit, a useful approximate expression for the GEM efficiency is given by
\begin{equation}
\varepsilon_{tot} \simeq 1-\frac{4Dk_i^2t_{in}}{3}-\frac{Dt_H\eta ^2 t_p^2}{2}-\frac{4D(t_H+2t_{in})}{a^2},
\end{equation}
as the inefficiencies arising from each diffusive process considered above add together.
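The agreement between this linear expression and the product of the individual efficiencies (with $k_H=0$, so $\varepsilon_H=\sqrt{\alpha_H}$) can be checked numerically in the small-dephasing limit; the parameter values below are illustrative:

```python
import numpy as np

# Check that, for small dephasing and k_H = 0, the product
# eps_W * eps_R * eps_H * eps_perp reduces to the linear expression
#   1 - 4 D k_i^2 t_in / 3 - D t_H eta^2 t_p^2 / 2
#     - 4 D (t_H + 2 t_in) / a^2.
# All parameter values are dimensionless illustrative choices.
D, eta, t_in, t_H, t_p, a = 1e-4, 1.0, 1.0, 3.0, 0.3, 5.0
k_i = eta * t_in                      # enforces k_H = k_i - eta t_in = 0

exact = (np.exp(-4 * D * k_i ** 2 * t_in / 3)          # eps_W * eps_R
         * np.sqrt(1.0 / (1.0 + D * t_H * eta ** 2 * t_p ** 2))  # eps_H
         * 1.0 / (1.0 + 4 * D * (t_H + 2 * t_in) / a ** 2))      # eps_perp
linear = (1.0 - 4 * D * k_i ** 2 * t_in / 3
          - D * t_H * eta ** 2 * t_p ** 2 / 2
          - 4 * D * (t_H + 2 * t_in) / a ** 2)
print(exact, linear)   # agree to second order in the small parameters
```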
Experimental considerations give estimates of the achievable GEM efficiency. In particular, to ensure that the bandwidth of the memory is large enough to absorb the input field, we require $|\eta t_p| > \frac{1}{L}$; we require $t_{in}>t_p$ to ensure that the whole pulse enters the medium during the write-in process; and $|k_i|>\frac{1}{L}$ is then required to satisfy $k_H=0$. So
\begin{equation}
\begin{split}
\varepsilon_{tot} &\lesssim 1-\frac{4Dk_i^2t_{p}}{3}-\frac{Dt_H}{2L^2}-\frac{4D(t_H+2t_{p})}{a^2}\\
&\lesssim 1-\frac{4Dt_{p}}{3L^2}-\frac{Dt_H}{2L^2}-\frac{4D(t_H+2t_{p})}{a^2}
\end{split}
\end{equation}
This gives a reasonable upper bound on the GEM efficiency, given the pulse duration $2 t_p$, the hold time $t_H$, and the vapour length $L$ and beam width $a$.
\emph{Experimental considerations}: In experiments reported in \cite{hosseini2011high, higginbottom2012spatial}, $^{87}$Rb atoms were used. Typical system parameters are $\omega_0=2\pi\cdot 377.10746$ THz, $\omega_0-\omega_c=2\pi\cdot 6.8$ GHz,
$\Delta=-2\pi\cdot 1.5$ GHz, $\Omega_c\simeq 2\pi\cdot 20$ MHz, $g\simeq 2\pi\cdot 4.5$ Hz, $t_p=1 \mu$s, $a\simeq 1.45$ mm, $2L=0.2$ m, $\eta\simeq -2\pi\cdot 10$ MHz/m, $N\simeq 0.5\times 10^{18}$ m$^{-3}$ \cite{hosseini2011high, higginbottom2012spatial,steck2001rubidium}. The optical depth $|\beta|\simeq 3.8$ is sufficiently large.
According to the formula in \cite{happer1972optical}, we have $D\sim 0.004$ m$^2$/s for Rubidium atoms in buffer gas \cite{hosseini2011high, higginbottom2012spatial}.
With these parameters, the diffusive decay will be dominated by transverse diffusion. For example, for $t_{in}=5 \mu$s and $t_H=0$, the maximum achievable efficiency is $\varepsilon_{tot}\simeq 93\%$ ($\varepsilon_{\perp}\simeq \varepsilon_{tot}$, $\varepsilon_{H}=1$, $\varepsilon_{W}\simeq 1$).
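The $\sim 93\%$ estimate above follows directly from the closed-form expressions, using the parameters listed in the text with $t_{in}=5$ $\mu$s, $t_H=0$, and the optimal choice $k_H=0$ (so $k_i=\eta t_{in}$):

```python
import numpy as np

# Reproduces the ~93% maximum-efficiency estimate quoted in the text,
# using the experimental parameters given above (SI units).
D = 0.004                  # m^2/s, diffusion coefficient
a = 1.45e-3                # m, signal beam width
eta = -2 * np.pi * 10e6    # rad/(s m), gradient
t_in, t_H = 5e-6, 0.0      # s

k_i = eta * t_in           # optimal choice: k_H = k_i - eta t_in = 0
eps_W = np.exp(-2 * D * k_i ** 2 * t_in / 3)
eps_R = eps_W
eps_H = 1.0                # tau_H = D t_H k_H^2 = 0
tau_perp = 4 * D * (t_H + 2 * t_in) / a ** 2
eps_perp = 1.0 / (1.0 + tau_perp)
eps_tot = eps_W * eps_H * eps_R * eps_perp
print(eps_tot)             # ≈ 0.93, dominated by transverse diffusion
```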
We examine the input of the (1,1) Hermite-Gaussian mode $f_{in}(x,y,t) \propto x\,y\,e^{-(x^2+y^2)/a^2}$ as an example of a higher-order Hermite-Gaussian transverse profile. From Eqs.~(\ref{eq:3d-eff}, \ref{eq:3d-out}), we have $\varepsilon_{\perp}=\left(\frac{1}{1+\tau_{\perp}}\right)^3$, and the longitudinal diffusion effects are the same as for the Gaussian profile (i.e.\ the (0,0) Hermite-Gaussian mode). Thus, for the diffusive decays, we have
\begin{equation}\label{ratio}
\frac{\varepsilon_{(11)}}{\varepsilon_{(00)}} \propto \left(\frac{1}{1+\tau_{\perp}}\right)^2
\end{equation}
where $\varepsilon_{(ij)}$ is the read-out efficiency for the $(i,j)$ Hermite-Gaussian mode.
We find that the efficiency decays faster for higher order modes, and the ratio Eq.~(\ref{ratio}) decreases when the storage time increases. This is in agreement with experimental investigations \cite{higginbottom2012spatial}.
\subsection{Output beam width}
After some storage time, transverse diffusion will tend to smear the spin wave density in the radial direction. Intuitively, we would expect this to lead to a spatially wider output beam than would be the case in the absence of diffusion.
This is certainly the case when the control field is radially uniform. To see this, we define the intensity distribution for the read-out signal as
\begin{equation}
I(r_{\perp})=\int |f_{out}(r_{\perp},t_H+t)|^2 dt.
\end{equation}
We suppose that the control field is turned off during the hold time $[0,t_H]$ to avoid control-field-induced scattering, and that the gradient is always on and flipped at $t=0.5t_H$. For typical experimental parameters the effects of longitudinal diffusion are very weak, so we focus on transverse diffusion. We solve the Maxwell-Bloch equation using the same method as before. For a signal with a Gaussian transverse profile, we find that
\begin{equation}
I(r_{\perp}) \propto e^{-r_{\perp}^2/[a^2+4D(2t_{in}+t_H)]},
\end{equation}
with $r_{\perp}^2=x^2+y^2$.
Defining $w_{r_{\perp}}$ as the width of the output field, we have
\begin{equation}\label{eq:width}
w_{r_{\perp}}^2=\frac{a^2}{4}+D(2t_{in}+t_H),
\end{equation}
which increases linearly with storage time (Fig. \ref{fig:width}), at a rate determined by the diffusion coefficient.
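Eq.~(\ref{eq:width}) can be checked by computing the second moment of the read-out intensity profile: for a 2D distribution $\propto e^{-r_\perp^2/s^2}$ one has $\langle r_\perp^2\rangle=s^2=4w_{r_\perp}^2$. The sketch below uses values loosely based on the text:

```python
import numpy as np

# Check of the width law w^2 = a^2/4 + D(2 t_in + t_H) against the
# rms radius of I(r_perp) ∝ exp(-r^2 / [a^2 + 4 D (2 t_in + t_H)]).
# SI-unit values loosely based on the text.
D, a, t_in = 0.004, 1.45e-3, 2e-6

for t_H in (0.0, 8e-6, 16e-6):
    s2 = a ** 2 + 4 * D * (2 * t_in + t_H)
    r = np.linspace(0.0, 10.0 * np.sqrt(s2), 100001)
    I = np.exp(-r ** 2 / s2)
    mean_r2 = np.sum(r ** 3 * I) / np.sum(r * I)   # 2D radial average
    w2 = a ** 2 / 4 + D * (2 * t_in + t_H)
    print(mean_r2 / 4, w2)   # <r^2>/4 equals w^2, growing linearly in t_H
```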
\begin{figure}
\caption{\footnotesize{The extra phase $\theta$ for control field with Gaussian profile. Points are numerical results using typical parameters given in the main text, and the curve is the approximate expression in Eq.~(\ref{eqn:theta}).}}
\label{fig:phase}
\end{figure}
\begin{figure}
\caption{\footnotesize{(Color online) The intensity distribution for the read-out signal with $t_H=16$ $\mu$s. To see the expansion clearly, we have renormalized the maximum of $I(r_\perp)$ to 1, and $I_0$ is the renormalized intensity distribution. The black solid curve is the input signal, the blue dotted curve is the read-out signal for a homogeneous control field, and the red dashed curve is the read-out signal for a control field with Gaussian profile.}}
\label{fig:profile}
\end{figure}
\begin{figure}
\caption{\footnotesize{Expansion of the read-out signal. Circles are numerical results for a homogeneous control field; squares are numerical results for a control field with Gaussian profile; the solid line is Eq.~(\ref{eq:width}).}}
\label{fig:width}
\end{figure}
Somewhat surprisingly, the experimentally measured rate of expansion of the read-out signal is smaller than that expected from atomic diffusion by a factor of 2 to 3 \cite{higginbottom2012spatial}.
One possible explanation, suggested in \cite{higginbottom2012spatial}, is signal diffraction: diffusion leads to a beam with reduced divergence, and the measurement is taken downstream. However, in this experiment the scale of the experimental setup is much smaller than the Rayleigh range, so the diffraction effect is too small to explain the observed discrepancy.
Instead, we find that the anomalously narrow output beam width can be explained by considering a control field with a realistic transverse Gaussian profile. This leads to a transverse variation in the phase of the spin wave, which, under the influence of diffusion, leads to lower emission efficiency in the wings of the spin wave.
To analyse this effect quantitatively, we consider a Gaussian transverse variation in the control field, $\Omega_c(r_{\perp})=\Omega e^{-r_{\perp}^2/w_c^2}$, with beam waist $w_c$. Then the two-photon detuning, $\delta$, and the optical depth becomes $r_{\perp}$ dependent.
From our solution for the spin wave [see Appendix A, Eq.~(\ref{eq:t=t})], we find that the inhomogeneity of the control field intensity will introduce a transverse variation in $\delta$, which leads to a transverse dependence in the phase of the spin wave. Likewise, the transverse variation in the optical depth leads to a radially-dependent longitudinal shift in the spin wave $\sigma_{12}(\textbf{r},t)$. In combination, these give rise to a radially dependent phase on the spin wave, $e^{i\theta(r_{\perp})}$, with the effect of the control field typically being dominant. We compare solutions for this inhomogeneous control field with solutions for the homogeneous control field to obtain the phase difference $\theta(r_{\perp})$ during hold time. Typically, the width of the control field, $w_c$, is much larger than the width of the signal field, $a$, so $\theta(r_{\perp})$ is approximately quadratic in $r_{\perp}/w_c$:
\begin{equation}
\begin{split}
\theta(r_\perp)&=\Big[-\frac{2\Omega ^2t_{in}}{\Delta}+2\beta \ln\left|\frac{\eta L t_{in}}{\beta}+1\right|\\
&+2\beta\left(1-\frac{\Omega^2}{\Delta\eta L}\right)\left(\frac{\beta}{\eta L t_{in}+\beta}+\frac{z}{L}\right)\Big]\frac{r_{\perp}^2}{w_c^2},
\end{split}\label{eqn:theta}
\end{equation}
where $\beta$ is the optical depth corresponding to $\Omega$.
Because of this quadratic phase variation across the spin wave, diffusion acts to wash out the spin-wave coherence more quickly at larger radius, so the read-out efficiency is suppressed at larger $r_\perp$. This will tend to reduce the apparent width of the emitted read-out signal.
\emph{Experimental considerations}: In the experimental results reported in \cite{higginbottom2012spatial}, $w_c\simeq 3$ mm, and $t_{in}\simeq 2$ $\mu$s. Using these parameters, Fig.~\ref{fig:phase} shows the transverse variation in the phase of the spin-wave at $(z=0,t=0.5t_H)$ in the absence of diffusion. When diffusion is introduced, this transverse phase variation is smeared out, leading to reduced read-out efficiency in the wings of the spin-wave. Figure \ref{fig:profile} compares the numerical results for the expansion of the read-out signal after a specific hold time, $t_H=16~\mu$s, with a homogeneous control field (dotted, blue) and with a spatially varying control field (dashed, red), assuming the diffusion rate $D=0.004$ m$^2$/s. Figure \ref{fig:width} shows the variation in the width of the output field as a function of hold time. We see that the expansion is slowed for a control field with Gaussian profile (squares), compared to the case of a uniform control field (circles). Importantly, this corresponds to a reduction of the beam-width expansion rate by a factor of $\sim 2$. The apparent diffusion rate extracted from this slower expansion rate is $D_{eff}\simeq 0.002$ m$^2$/s. This is in quantitative agreement with the observations in \cite{higginbottom2012spatial}.
\section{Summary}
We have studied the effects of diffusion on the efficiency of the $\Lambda$-gradient echo memory, both numerically and analytically. We find that the efficiency depends on the spatial frequency $k$ for both longitudinal and transverse diffusion: higher $k$ leads to more pronounced diffusive effects and reduced efficiency, as expected. We show that the storage efficiency can be improved by an appropriate choice of the gradient during the hold phase.
We established a mechanism by which the rate of expansion of the transverse width of the beam is reduced, compared to the naive expectation of diffusive effects. This mechanism arises from the effects of diffusion on the transverse variation in the spin-wave phase. We showed that, with an experimentally reasonable choice of parameters, the magnitude of this effect is the same as that observed in recent experiments. When the density of the buffer gas is increased, the collision rate increases, leading to a smaller diffusion rate. However, this will lead to collision-induced dephasing, which will dominate at sufficiently high buffer gas pressures. This implies a trade-off between diffusion- and collision-induced dephasing, which will be the subject of future research.
\section*{Acknowledgments}
J. Hope and B. Hillman thank M. Hosseini, D. Higginbottom, O. Pinel and B. Buchler for helpful discussions of the modelling and experiments. J. Hope was supported by the ARC Future Fellowship Scheme.
X.-W. Luo thanks G. J. Milburn for helpful discussions, and gratefully acknowledges the National Natural Science Foundation of China (Grants No. 11174270), the National Basic Research Program of China (Grants No. 2011CB921204), CAS for financial support, and The University of Queensland for kind hospitality.
\appendix
\section*{Appendix A}
The Maxwell-Bloch equation for the 1-dimensional model is
\begin{equation}\label{M-B-A0}
\begin{split}
\dot{\sigma}_{12}(z,t)=&i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z}E(z,t)\\
&-i\delta(z,t)\sigma_{12}(z,t)+D\nabla_z^2\sigma_{12}(z,t),\\
\frac{\partial}{\partial z}E(z,t)=&
i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\sigma_{12}(z,t)\\
&+i\frac{g^2N}{c\Delta}E(z,t).
\end{split}
\end{equation}
To find the solution during $[-t_0,0]$, we first solve the equation without diffusion, then introduce the diffusion effects into the solution.
When $D=0$, we can make the transformation
\begin{equation}
\begin{split}
\widetilde{\sigma}_{12}(z,t)=&e^{-i\frac{g^2N}{c\Delta}z}e^{-i(k_0-k_c)z}\sigma_{12}(z,t),\\
\widetilde{E}(z,t)=&e^{-i\frac{g^2N}{c\Delta}z}E(z,t),
\end{split}
\end{equation}
and get the new equations
\begin{equation}\label{M-B-A}
\begin{split}
\partial_z\widetilde{E}(z,t)=&i\frac{g_{eff}N}{c}\widetilde{\sigma}_{12}(z,t),\\
\partial_t\widetilde{\sigma}_{12}(z,t)=&-i\eta z\widetilde{\sigma}_{12}(z,t)+ig_{eff}\widetilde{E}(z,t)
\end{split}
\end{equation}
where $g_{eff}=g\Omega_c/\Delta$. Following the method given in \cite{hetet2006gradient}, and using the boundary conditions $\widetilde{\sigma}_{12}(z,t\rightarrow-\infty)=0$ and $\widetilde{E}(z=-L,t<0)=\widetilde{f}_{in}(t)$, we integrate the first equation and substitute it in the second one. Making use of Fourier transformation, we find
\begin{equation}
\widetilde{E}(k,t)=\widetilde{f}_{in}(\frac{k}{\eta}+\frac{\beta}{\eta L}+t)|\frac{k}{\eta}|^{-i\beta-1}G(\eta,\beta,L),
\end{equation}
and
\[
G(\eta,\beta,L)=\frac{\beta}{\eta}\, e^{-\pi|\beta|/2}\sinh(\pi|\beta|)\,
|\eta L|^{-i\beta}\,\Gamma(i\beta),
\]
where $\widetilde{E}(k,t)=\int \widetilde{E}(z,t)e^{-ikz}dz$, $\beta=\frac{g_{eff}^2N}{\eta c}$ is the optical depth, which we assume to be sufficiently large, $\Gamma(i\beta)$ is the Gamma function, and $\widetilde{f}_{in}(t)=f_{in}(t)e^{i\frac{g^2N}{c\Delta}L}$ is the input pulse.
According to the Maxwell-Bloch equations, we have $\widetilde{\sigma}_{12}(k,t)=\frac{k\cdot c}{g_{eff}N} \widetilde{E}(k,t)$.
We transform $\widetilde{\sigma}_{12}(k,t)$ back to $\sigma_{12}(k,t)$,
\begin{equation}\label{eq:t=t}
\begin{split}
\sigma_{12}(k,t)=&f_{in}( \frac{k-k_i}{\eta}+t)e^{i\frac{g^2N}{c\Delta}L}\\
\times &|\frac{k-k_i}{\eta}-\frac{\beta}{\eta L}|^{-i\beta}\,\mathrm{sgn}(\frac{k-k_i}{\eta}-\frac{\beta}{\eta L})\frac{c}{g_{eff}N}G
\end{split}
\end{equation}
with $k_i=\frac{g^2N}{c\Delta}+k_0-k_c-\frac{\beta}{L}$.
Now we introduce the diffusion. Over a short time interval $[t,t+\Delta t]$, diffusion causes a decay $e^{-Dk^2\Delta t}$ in $\sigma_{12}(k,t)$, or equivalently, a decay $e^{-Dk^2\Delta t}$ of the signal $f_{in}(t')$ with $k=k_i-\eta(t-t')$. So the total decay during the write-in process for $f_{in}(t')$ is
\[
e^{-D\int_{t'}^0(k_i-\eta(t-t'))^2 dt}=e^{\frac{-D}{3\eta}(k_i^3-(k_i+\eta t')^3)}.
\]
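As a consistency check, the closed form of this exponent can be verified symbolically (a sketch assuming SymPy is available; the variable names are ours):

```python
import sympy as sp

k_i, eta, t, tp = sp.symbols('k_i eta t t_prime', real=True)
# squared spatial frequency accumulated between t' and 0
integral = sp.integrate((k_i - eta * (t - tp))**2, (t, tp, 0))
closed_form = (k_i**3 - (k_i + eta * tp)**3) / (3 * eta)
assert sp.simplify(integral - closed_form) == 0
```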
Thus, the solution for $\sigma_{12}$ at $t=0$ is
\begin{equation}
\begin{split}
\sigma_{12}(k,0)=&e^{\frac{-D}{3\eta}(k_i^3-k^3)}f_{in}( \frac{k-k_i}{\eta})e^{i\frac{g^2N}{c\Delta}L}\\
\times &|\frac{k-k_i}{\eta}-\frac{\beta}{\eta L}|^{-i\beta}\,\mathrm{sgn}(\frac{k-k_i}{\eta}-\frac{\beta}{\eta L})\frac{c}{g_{eff}N}G.
\end{split}
\end{equation}
We have assumed that the bandwidth of the memory is larger than the bandwidth of the input signal, $|\eta L|\gg \Delta\omega_s$, and that the optical depth is sufficiently large, $\beta\gtrsim 1$. The signal will be absorbed near $z=0$, and $\sigma_{12}(z,0)$ and $E(z,0)$ are nonzero only near $z=0$, so we can treat $L$ as infinite during $[0,t_H]$. Also notice that the gradient is turned off during $[0,t_H]$, so the spatial frequency $k$ holds its value.
To get the solution in $[0,t_H]$, we solve Eq.~(\ref{M-B-A0}) with initial condition $\sigma_{12}(k,t=0)$
for $\sigma_{12}$ and open boundary condition for $E$. In $k$ space, we find
\begin{equation}\label{f-t=t-H}
\sigma_{12}(k,t_H)=e^{-Dk^2t_H}e^{i\frac{g_{eff}^2N}{c}\frac{1}{k-\bar{k}}t_H}\sigma_{12}(k,0)
\end{equation}
where $\bar{k}=\frac{g^2N}{c\Delta}+k_0-k_c$.
Notice that $\sigma_{12}(k,t_H)$ acquires a phase $e^{i\frac{g_{eff}^2N}{c}\frac{1}{k-\bar{k}}t_H}$, so the group velocity for $\sigma_{12}(z,t)$ is $v_g(k)=\frac{g_{eff}^2N}{c(k-\bar{k})^2}$. If the memory broadening $|\eta L|$ is not much larger than the signal pulse bandwidth, the spin wave $\sigma_{12}(z,t)$ will be nonzero near the ensemble boundary. The spin wave will then propagate to the boundary and be reflected, which may ruin the spin wave coherence near the boundary and lower the memory efficiency. One way to avoid this effect is to turn off the control field during storage, which makes the effective coupling $g_{eff}=0$ and the group velocity $v_g=0$.
To find the values of $\sigma_{12}$ and $E$ during $[t_H,t_H+t_0]$, one needs to solve a modified version of Eq.~(\ref{M-B-A0}) in which the sign of $i\eta z$ is reversed. Following the method given in \cite{hetet2006gradient}, we propagate these equations backwards with the final conditions $E(z=L,t>t_H)=f_{out}(t)$ and $\sigma_{12}(z,t\rightarrow\infty)=0$. Similarly to the write-in process, at time $t_H$ we have
\begin{equation}\label{b-t=t-H}
\begin{split}
\sigma_{12}(k,t_H)=&e^{\frac{D}{3\eta}(k_i^3-k^3)}f_{out}(t_H+\frac{k-k_i}{-\eta})e^{-i\frac{g^2N}{c\Delta}L}\\
\times &|\frac{k-k_i}{\eta}-\frac{\beta}{\eta L}|^{-i\beta}\,\mathrm{sgn}(\frac{k-k_i}{\eta}-\frac{\beta}{\eta L})\frac{c}{g_{eff}N}G^*.
\end{split}
\end{equation}
By matching the two solutions for $\sigma_{12}$ at $t_H$, Eqs.~(\ref{f-t=t-H}) and (\ref{b-t=t-H}), we get
\begin{equation}
\begin{split}
f_{out}(t_H+t)=&d_{W}(t)d_Hd_{R}(t)f_{in}(-t)\bar{G}
\end{split}
\end{equation}
where
\[\bar{G}=|\eta L\left(t+\frac{\beta}{\eta L}\right)|^{-i2\beta}e^{i\frac{2Lg^2N}{c\Delta}}e^{-i\frac{g_{eff}^2N}{c\eta \left(t+\frac{\beta}{\eta L}\right)}t_H}\Gamma(i\beta)/\Gamma(-i\beta)\] is a phase factor, $d_{W}(t)=e^{\frac{-D}{3\eta}(k_i^3-(k_i-\eta t)^3)}$, $d_H=e^{-D(k_i-\eta t)^2 t_H}$ and $d_{R}(t)=e^{\frac{-D}{3\eta}(k_i^3-(k_i-\eta t)^3)}$ are the diffusion decays for the write-in process $[-t_0,0]$, storage time $[0,t_H]$ and read-out process $[t_H,t_H+t_0]$ respectively.
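For illustration, the three decay factors can be evaluated numerically. The parameter values below are hypothetical, chosen only to show the bookkeeping, and are not taken from the experiments discussed above:

```python
import numpy as np

# hypothetical parameters: diffusion constant, gradient, carrier spatial
# frequency, and hold time (arbitrary consistent units)
D, eta, k_i, t_H = 0.01, 5.0, 2.0, 1.0

def d_W(t):   # write-in decay over [-t_0, 0]
    return np.exp(-D / (3 * eta) * (k_i**3 - (k_i - eta * t)**3))

def d_H(t):   # storage decay over [0, t_H] at frequency k = k_i - eta*t
    return np.exp(-D * (k_i - eta * t)**2 * t_H)

d_R = d_W     # read-out decay has the same functional form as d_W

t = 0.3
total = d_W(t) * d_H(t) * d_R(t)   # overall diffusion attenuation of f_in(-t)
```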
\section*{Appendix B}
The Maxwell-Bloch equations for the three-dimensional model are
\begin{equation}\label{M-B-tr-A}
\begin{split}
\dot{\sigma}_{12}(\textbf{r},t)=&i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z}E(\textbf{r},t)\\
&-(i\eta z)\sigma_{12}(\textbf{r},t)+D\nabla^2\sigma_{12}(\textbf{r},t),\\
\frac{\partial}{\partial z}E(\textbf{r},t)
=&i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\sigma_{12}(\textbf{r},t)\\
&+i\frac{g^2N}{c\Delta}E(\textbf{r},t).
\end{split}
\end{equation}
To solve these equations, we first transform the transverse coordinates $x,y$ to Fourier space $k_x,k_y$,
\begin{equation}
\begin{split}
\dot{\sigma}_{12}(k_x,k_y,z,t)=&-(i\eta z+\gamma_k)\sigma_{12}(k_x,k_y,z,t)\\
&+i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z}E(k_x,k_y,z,t)\\
&+D\nabla_z^2\sigma_{12}(k_x,k_y,z,t),\\
\frac{\partial}{\partial z}E(k_x,k_y,z,t)
=&i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\sigma_{12}(k_x,k_y,z,t)\\
&+i\frac{g^2N}{c\Delta}E(k_x,k_y,z,t),
\end{split}
\end{equation}
where $\gamma_k=D(k_x^2+k_y^2)$. Now we make the following transformation:
\begin{equation}
\begin{split}
\bar{\sigma}_{12}(k_x,k_y,z,t)=&e^{\gamma_k t}\sigma_{12}(k_x,k_y,z,t),\\
\bar{E}(k_x,k_y,z,t)=&e^{\gamma_k t}E(k_x,k_y,z,t),
\end{split}
\end{equation}
then we have
\begin{equation}
\begin{split}
\dot{\bar{\sigma}}_{12}(k_x,k_y,z,t)=&-(i\eta z)\bar{\sigma}_{12}(k_x,k_y,z,t)\\
&+i\frac{g\Omega_c}{\Delta}e^{i(k_0-k_c)z}\bar{E}(k_x,k_y,z,t)\\
&+D\nabla_z^2\bar{\sigma}_{12}(k_x,k_y,z,t),\\
\frac{\partial}{\partial z}\bar{E}(k_x,k_y,z,t)
=&i\frac{gN\Omega_c}{c\Delta} e^{-i(k_0-k_c)z}\bar{\sigma}_{12}(k_x,k_y,z,t)\\
&+i\frac{g^2N}{c\Delta}\bar{E}(k_x,k_y,z,t).
\end{split}
\end{equation}
These are effectively quasi-1D equations, so we can
solve them by the method used above, and the output field is:
\[\bar{f}_{out}(k_x,k_y,t_H+t)=d_{W}(t)d_Hd_{R}(t)\bar{f}_{in}(k_x,k_y,-t)\bar{G}.\]
We transform back to $f_{out}(k_x,k_y,t_H+t)$, and get
\begin{equation}
\begin{split}
f_{out}(k_x,k_y,t_H+t)=&d_{W}(t)d_Hd_{R}(t)\\
\times &d_{\perp}(t)f_{in}(k_x,k_y,-t)\bar{G},
\end{split}
\end{equation}
where $d_{\perp}(t)=e^{-2\gamma_k t}e^{-\gamma_k t_H}$ is the transverse diffusion decay.
\end{document} | math | 43,803 |
\begin{document}
\title{Typical rank of $m\times n\times (m-1)n$ tensors with
$3\leq m\leq n$ over the real number field}
\author{Toshio Sumi, Mitsuhiro Miyazaki, and Toshio Sakata}
\maketitle
\begin{abstract}
Tensor-type data have recently been used
in various application fields,
and the notion of typical rank is therefore important.
Let $3\leq m\leq n$.
We study typical ranks of $m\times n\times (m-1)n$ tensors
over the real number field.
Let $\rho$ be the Hurwitz-Radon function defined as
$\rho(n)=2^b+8c$ for nonnegative integers $a,b,c$
such that $n=(2a+1)2^{b+4c}$ and $0\leq b<4$.
If $m \leq \rho(n)$, then
the set of $m\times n\times (m-1)n$ tensors has two
typical ranks $(m-1)n,(m-1)n+1$.
In this paper, we show that the converse is also true:
if $m > \rho(n)$, then
the set of $m\times n\times (m-1)n$ tensors has only one
typical rank $(m-1)n$.
\end{abstract}
\section{Introduction}
The analysis of high-dimensional arrays is becoming widely used.
Kolda and Bader \cite{Kolda-Bader:2009} introduced many applications
of tensor decomposition analysis
in various fields such as signal processing, computer vision, data mining,
and others.
In this paper we concentrate on $3$-way arrays.
A $3$-way array
$$(a_{ijk})_{1\leq i\leq m,\ 1\leq j\leq n,\ 1\leq k\leq p}$$
with size $(m,n,p)$ is called an $m\times n\times p$
tensor.
The rank of a tensor $T$, denoted by $\mathrm{rank}\, T$,
is defined as the minimal number of
rank-one tensors whose sum is $T$.
The rank depends on the base field. For example, there is
a $2\times 2\times 2$ tensor over the real number field whose rank
is $3$, but which has rank $2$ as a tensor over the complex number field.
\par
Throughout this paper, we assume that the base field is the real number field $\mathbb{R}$.
Let $\mathbb{R}^{m\times n\times p}$ be the set of $m\times n\times p$ tensors with Euclidean topology.
A number $r$ is a typical rank of $m\times n\times p$ tensors
if the set of tensors with rank $r$ contains
a nonempty open semi-algebraic set
of $\mathbb{R}^{m\times n\times p}$
(see Theorem~\ref{thm:Friedland}).
We denote by ${\mathrm{typical\_rank_\RRR}}(m,n,p)$ the set of typical ranks of
$\mathbb{R}^{m\times n\times p}$.
If $s$ (resp. $t$) is the minimal (resp. maximal) number of
${\mathrm{typical\_rank_\RRR}}(m,n,p)$, then
$${\mathrm{typical\_rank_\RRR}}(m,n,p)=[s,t],$$
the interval of all integers between $s$ and $t$, including both, and $s$ is equal to the generic rank
of the set of $m\times n\times p$ tensors over the complex number
field \cite{Friedland:2008}.
In the case where $m=2$, the set of typical ranks of $2\times n\times p$ tensors
is well known \cite{tenBerge-etal:1999}:
$${\mathrm{typical\_rank_\RRR}}(2,n,p)=\begin{cases}
\{p\}, & n<p\leq 2n \\
\{2n\}, & 2n<p \\
\{p,p+1\}, & n=p\geq 2
\end{cases}
$$
Suppose that $3\leq m\leq n$.
If $p>(m-1)n$ then the set of typical ranks of $m\times n\times p$ tensors
is just $\{\min(p,mn)\}$.
If $p=(m-1)n$ then
the set of typical ranks of $m\times n\times p$ tensors
is $\{p\}$ or $\{p,p+1\}$ \cite{tenBerge:2000}.
Before our paper \cite{Sumi-etal:2010a}, only a few cases where
${\mathrm{typical\_rank_\RRR}}(m,n,(m-1)n)=\{(m-1)n,(m-1)n+1\}$ \cite{Comon-etal:2009,Friedland:2008} were known, and
in \cite{Sumi-etal:2010a} we constructed infinitely many such examples
by using the concept of absolutely nonsingular tensors:
If $m\leq \rho(n)$ then ${\mathrm{typical\_rank_\RRR}}(m,n,p)=\{p,p+1\}$,
where $\rho(n)$ is the Hurwitz-Radon number given by
$\rho(n)=2^b+8c$ for nonnegative integers $a,b,c$
such that $n=(2a+1)2^{b+4c}$ and $0\leq b<4$.
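For concreteness, the Hurwitz-Radon number is easy to compute; the following is a Python sketch (the function name is ours):

```python
def hurwitz_radon(n):
    """rho(n) = 2**b + 8*c, where n = (2a+1) * 2**(b+4c) with 0 <= b < 4."""
    v = 0                      # 2-adic valuation of n
    while n % 2 == 0:
        n //= 2
        v += 1
    b, c = v % 4, v // 4
    return 2**b + 8*c

# e.g. hurwitz_radon(16) == 9, so any 3 <= m <= 9 gives two typical ranks for n = 16
```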
The purpose of this paper is to completely determine the set of
typical ranks of $m\times n\times (m-1)n$ tensors:
\begin{thm} \label{thm:main}
Let $3\leq m\leq n$ and $p=(m-1)n$.
Then
$${\mathrm{typical\_rank_\RRR}}(m,n,p)=\begin{cases}
\{p\}, & m>\rho(n) \\
\{p,p+1\}, & m\leq \rho(n).
\end{cases}$$
\end{thm}
We denote an $m_1\times m_2\times m_3$ tensor $(x_{ijk})$ by
$(X_1;\ldots; X_{m_3})$, where $X_t=(x_{ijt})$ is an $m_1\times m_2$
matrix for each $1\leq t\leq m_3$.
Let $3\leq m\leq n$ and $p=(m-1)n$.
For an $n\times p\times m$ tensor $X=(X_1;\ldots;X_{m-1};X_{m})$,
let $H(X)$ and $\hat{H}(X)$ be a $p\times p$ matrix and an $mn\times p$ matrix
respectively defined as follows.
$$H(X)=\begin{pmatrix} X_1\\ X_2\\ \vdots\\ X_{m-1}\end{pmatrix}, \quad
\hat{H}(X)=\begin{pmatrix} X_1\\ X_{2}\\ \vdots\\ X_m\end{pmatrix}$$
Let
$$\mathfrak{R}=\{ X\in \mathbb{R}^{n\times p\times m} \mid \text{$H(X)$ is nonsingular}\}.$$
This is a nonempty Zariski open set.
For $X=(X_1;\ldots;X_{m-1};X_m)\in \mathfrak{R}$, we see
$$\hat{H}(X)H(X)^{-1}
=\begin{pmatrix} E_n \\ & E_n \\ &&\ddots \\ &&& E_n \\ Y_1 & Y_2 & \cdots & Y_{m-1} \end{pmatrix},
$$
where $(Y_1,Y_2,\ldots,Y_{m-1})=X_mH(X)^{-1}$.
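This block structure is easy to confirm numerically (a NumPy sketch; a random tensor lies in $\mathfrak{R}$ almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
p = (m - 1) * n
X = [rng.standard_normal((n, p)) for _ in range(m)]   # slices X_1, ..., X_m

H = np.vstack(X[:m - 1])       # p x p, nonsingular for generic X
H_hat = np.vstack(X)           # mn x p
T = H_hat @ np.linalg.inv(H)

assert np.allclose(T[:p], np.eye(p))                      # top blocks: identities
assert np.allclose(T[p:], X[m - 1] @ np.linalg.inv(H))    # bottom: (Y_1,...,Y_{m-1})
```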
Note that $\mathrm{rank}\, X\geq p$ for $X\in \mathfrak{R}$.
Let $h$ be an isomorphism from the set of $n\times p$ matrices to $\mathbb{R}^{n\times n\times (m-1)}$
given by
$$(Y_1,Y_2,\ldots,Y_{m-1}) \mapsto (Y_1;Y_2;\ldots;Y_{m-1}).$$
Then $h(X_mH(X)^{-1}) \in \mathbb{R}^{n\times n\times (m-1)}$.
We consider the following subsets of $\mathbb{R}^{n\times n\times (m-1)}$.
For $Y=(Y_1;Y_2;\ldots;Y_{m-1})\in \mathbb{R}^{n\times n\times (m-1)}$ and
$\aaa=(a_1,\ldots,a_{m-1},a_m)^\top \in \mathbb{R}^{m}$,
let
$$M(\aaa,Y)=\sum_{k=1}^{m-1} a_kY_k-a_mE_n$$
and set
$$\mathfrak{C}=\{Y \in \mathbb{R}^{n\times n\times(m-1)}\mid
|M(\aaa,Y)|<0
\text{ for some $\aaa\in \mathbb{R}^m$}\}$$
and
$$\mathfrak{A}=\{Y \in \mathbb{R}^{n\times n\times(m-1)} \mid
|M(\aaa,Y)|>0 \text{ for all $\aaa\ne \zerovec$}
\}.$$
The subsets $\mathfrak{C}$ and $\mathfrak{A}$ are open sets in Euclidean topology
and $\overline{\mathfrak{C}}\cup \overline{\mathfrak{A}}=\mathbb{R}^{n\times n\times(m-1)}$.
In \cite{Sumi-etal:2010a}, we showed that $\mathfrak{A}$ is not empty if and only if
$m\leq \rho(n)$ and that $\mathrm{rank}\, X>p$ for any $X\in \mathfrak{R}$
with $h(X_mH(X)^{-1})\in \mathfrak{A}$.
In this paper, we show that there exists an open subset $\mathfrak{F}$ of $\mathfrak{C}$ such that
$\overline{\mathfrak{F}}=\overline{\mathfrak{C}}$ and $\mathrm{rank}\, X=p$
for any $X\in \mathfrak{R}$ with $h(X_mH(X)^{-1})\in \mathfrak{F}$.
\section{Typical rank}
Due to \cite{Strassen:1983,tenBerge:2000} and others,
a number $r$ is a typical rank of tensors of $\mathbb{R}^{m_1\times m_2\times m_3}$ if the subset of tensors of $\mathbb{R}^{m_1\times m_2\times m_3}$
of rank $r$ has nonzero volume.
In this paper, we adopt the algebraic definition due to Friedland.
These definitions are equivalent, since for any $r\geq 0$, the set of tensors of rank $r$ is a semi-algebraic set
by the Tarski-Seidenberg principle (cf. \cite{Bochnak-Coste-Roy:1998}).
For $\xxx=(x_1,\ldots,x_{m_1})^\top\in \mathbb{C}^{m_1}$,
$\yyy=(y_1,\ldots,y_{m_2})^\top\in \mathbb{C}^{m_2}$, and
$\zzz=(z_1,\ldots,z_{m_3})^\top\in \mathbb{C}^{m_3}$, we denote $(x_iy_jz_k)\in \mathbb{C}^{m_1\times m_2\times m_3}$ by $\xxx \otimes \yyy\otimes \zzz$.
Let $f_t\colon (\mathbb{C}^{m_1}\times \mathbb{C}^{m_2}\times \mathbb{C}^{m_3})^t \to
\mathbb{C}^{m_1\times m_2\times m_3}$ be a map given by
$$f_t(\xxx_{1,1}, \xxx_{1,2}, \xxx_{1,3}, \ldots, \xxx_{t,1}, \xxx_{t,2}, \xxx_{t,3}) =
\sum_{\ell=1}^t \xxx_{\ell,1} \otimes \xxx_{\ell,2} \otimes \xxx_{\ell,3}.$$
Let $S$ be a subset of $\mathbb{R}^{m_1\times m_2\times m_3}$.
$S$ is called semi-algebraic if
it is a finite Boolean combination (that is, a finite composition of disjunctions, conjunctions and negations) of sets of the form
\begin{equation} \label{eq:>0}
\{(a_{ijk})\in \mathbb{R}^{m_1\times m_2\times m_3} \mid f(a_{111},\ldots,a_{m_1,m_2,m_3})>0\}
\end{equation}
and
$$\{(a_{ijk})\in \mathbb{R}^{m_1\times m_2\times m_3} \mid g(a_{111},\ldots,a_{m_1,m_2,m_3})=0\},$$
where $f$ and $g$ are polynomials in $m_1m_2m_3$ indeterminates $x_{111} ,\ldots, x_{m_1,m_2,m_3}$ over $\mathbb{R}$.
Then $S$ is an open semi-algebraic set if and only if it is expressed as a finite Boolean combination
of sets of the form \eqref{eq:>0}, and it is a dense open semi-algebraic set if and only if it is a
Zariski open set, that is, expressed as
$$\{(a_{ijk})\in \mathbb{R}^{m_1\times m_2\times m_3} \mid g(a_{111},\ldots,a_{m_1,m_2,m_3})\ne0\}.$$
\begin{thm}[{\cite[Theorem~7.1]{Friedland:2008}}] \label{thm:Friedland}
The space $\mathbb{R}^{m_1\times m_2\times m_3}$, $m_1,m_2,m_3 \in\mathbb{N}$,
contains a finite number of open connected disjoint semi-algebraic sets
$O_1,\ldots,O_M$ satisfying the following properties.
\begin{enumerate}
\item $\mathbb{R}^{m_1\times m_2\times m_3}\smallsetminus \cup_{i=1}^M O_i$
is a closed semi-algebraic subset of $\mathbb{R}^{m_1\times m_2\times m_3}$
of dimension less than $m_1m_2m_3$.
\item Each $T \in O_i$ has rank $r_i$ for $i = 1,\ldots,M$.
\item The number $\min(r_1,\ldots,r_M)$ is equal to the generic rank $\mathrm{grank}\,(m_1,m_2,m_3)$ of
$\mathbb{C}^{m_1\times m_2\times m_3}$, that is, the minimal $t\in \mathbb{N}$
such that the closure of the image of $f_t$ is equal to $\mathbb{C}^{m_1\times m_2\times m_3}$.
\item $\mathrm{mtrank}\,(m_1,m_2,m_3):=\max(r_1,\ldots,r_M)$ is the minimal $t\in \mathbb{N}$ such that the closure of $f_t((\mathbb{R}^{m_1}\times \mathbb{R}^{m_2}\times \mathbb{R}^{m_3})^t)$ is equal to $\mathbb{R}^{m_1\times m_2\times m_3}$.
\item For each integer $r\in [\mathrm{grank}\,(m_1,m_2,m_3),\mathrm{mtrank}\,(m_1,m_2,m_3)]$,
there exists an integer $i\in [1,M]$ such that $r_i=r$.
\end{enumerate}
\end{thm}
\begin{definition}\rm
A positive number $r$ is called a typical rank of $\mathbb{R}^{m_1\times m_2\times m_3}$
if
$$r \in [\mathrm{grank}\,(m_1,m_2,m_3),\mathrm{mtrank}\,(m_1,m_2,m_3)].$$
Put
$${\mathrm{typical\_rank_\RRR}}(m_1,m_2,m_3)=[\mathrm{grank}\,(m_1,m_2,m_3),\mathrm{mtrank}\,(m_1,m_2,m_3)].$$
\end{definition}
We state basic facts.
\begin{prop} \label{prop:open}
Let $r$ be a positive number and $U$ a nonempty open set of\/ $\mathbb{R}^{m_1\times m_2\times m_3}$.
If every tensor of $U$ has rank $r$, then $r$ is a typical rank of\/ $\mathbb{R}^{m_1\times m_2\times m_3}$.
\end{prop}
\begin{proof}
Let $O_1,\ldots,O_M$ be open connected disjoint semi-algebraic sets
as in Theorem~\ref{thm:Friedland}.
Since $\dim(\mathbb{R}^{m_1\times m_2\times m_3}\smallsetminus \cup_{i=1}^M O_i)<m_1m_2m_3$, there exists $i\in [1,M]$ such that $U\cap O_i$ is not empty.
\end{proof}
\begin{prop} \label{prop:compare}
Let $m_1,m_2,m_3,m_4 \in \mathbb{N}$ with $m_3 < m_4$. Then
$$\mathrm{grank}\,(m_1,m_2,m_3) \leq \mathrm{grank}\,(m_1,m_2,m_4)$$
and
$$\mathrm{mtrank}\,(m_1,m_2,m_3) \leq \mathrm{mtrank}\,(m_1,m_2,m_4).$$
\end{prop}
\begin{proof}
Let $U$ be the nonempty Zariski open subset of $\mathbb{C}^{m_1\times m_2\times m_4}$
consisting of all tensors of rank $\mathrm{grank}\,(m_1,m_2,m_4)$
and put
$$V=\{(Y_1;Y_2;\ldots;Y_{m_3}) \in \mathbb{C}^{m_1\times m_2\times m_3} \mid (Y_1;Y_2;\ldots;Y_{m_4}) \in U \text{ for some } Y_{m_3+1},\ldots,Y_{m_4}\}.$$
Then $V$ is a nonempty Zariski open set of $\mathbb{C}^{m_1\times m_2\times m_3}$.
For the subset $U^\prime$ of $\mathbb{C}^{m_1\times m_2\times m_3}$ consisting of all tensors of rank $\mathrm{grank}\,(m_1,m_2,m_3)$, the intersection $V\cap U^\prime$ is a nonempty Zariski open set.
Since $\mathrm{rank}\, Y \leq \mathrm{rank}(Y;X)$ for $Y \in \mathbb{C}^{m_1\times m_2\times m_3}$ and
$(Y;X) \in \mathbb{C}^{m_1\times m_2\times m_4}$, we see
$$\mathrm{grank}\,(m_1,m_2,m_3) \leq \mathrm{grank}\,(m_1,m_2,m_4).$$
\par
Next, take an open semi-algebraic set $V$ of $\mathbb{R}^{m_1\times m_2\times m_3}$ consisting of tensors
of rank $\mathrm{mtrank}\,(m_1,m_2,m_3)$. Then there are $s \in {\mathrm{typical\_rank_\RRR}}(m_1,m_2,m_4)$ and an open semi-algebraic set $O$ of $\mathbb{R}^{m_1\times m_2\times m_4}$ consisting of tensors of rank $s$ such that
$\{(A;B) | A \in V, B \in \mathbb{R}^{m_1\times m_2\times (m_4-m_3)}\}\cap O \ne \varnothing$. Thus
$$\mathrm{mtrank}\,(m_1,m_2,m_3) \leq s \leq \mathrm{mtrank}\,(m_1,m_2,m_4).$$
\qed
\end{proof}
The action of ${\mathrm{GL}}(m)\times {\mathrm{GL}}(n)\times {\mathrm{GL}}(p)$ on $\mathbb{R}^{m\times n\times p}$
is given as follows.
Let $P=(p_{ij})\in {\mathrm{GL}}(m)$, $Q=(q_{ij}) \in {\mathrm{GL}}(n)$, and
$R=(r_{ij}) \in {\mathrm{GL}}(p)$.
The tensor $(b_{ijk})=(P,Q,R)\cdot (a_{ijk})$ is defined as
$$b_{ijk}=\sum_{s=1}^m\sum_{t=1}^n\sum_{u=1}^p p_{is}q_{jt}r_{ku}a_{stu}.$$
Therefore,
$$(P,Q,R)\cdot (A_1;\ldots;A_p)
=(\sum_{u=1}^p r_{1u}PA_uQ^\top;\ldots;\sum_{u=1}^p r_{pu}PA_uQ^\top).$$
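This slice formula can be checked directly against the coordinate definition of the action (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 2, 3, 2
A = rng.standard_normal((m, n, p))       # A[:, :, u] is the slice A_{u+1}
P = rng.standard_normal((m, m))
Q = rng.standard_normal((n, n))
R = rng.standard_normal((p, p))

# coordinate definition: b_{ijk} = sum_{s,t,u} p_{is} q_{jt} r_{ku} a_{stu}
B = np.einsum('is,jt,ku,stu->ijk', P, Q, R, A)

# slice formula: B_k = sum_u r_{ku} P A_u Q^T
for k in range(p):
    B_k = sum(R[k, u] * P @ A[:, :, u] @ Q.T for u in range(p))
    assert np.allclose(B[:, :, k], B_k)
```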
\begin{definition}\rm
Two tensors $A$ and $B$ are called {\sl equivalent}
if there exists $g\in {\mathrm{GL}}(m)\times {\mathrm{GL}}(n)\times {\mathrm{GL}}(p)$ such that
$B=g\cdot A$.
\end{definition}
\begin{prop}
If two tensors are equivalent, then they have the same rank.
\end{prop}
A $1\times m_2\times m_3$ tensor $T$ is an $m_2\times m_3$ matrix and
$\mathrm{rank}\, T$ is equal to the matrix rank.
The following three propositions are well-known.
\begin{prop}
Let $m_1,m_2,m_3 \in \mathbb{N}$ with $2 \leq m_1 \leq m_2 \leq m_3$.
If $m_1m_2 \leq m_3$, then the only typical rank of\/ $\mathbb{R}^{m_1\times m_2\times m_3}$ is $m_1m_2$.
\end{prop}
\begin{prop} \label{prop:diagonaldecomp}
An $m_1\times m_2\times m_3$ tensor $(Y_1;\ldots;Y_{m_3})$ has rank
less than or equal to $r$ if and only if
there are an $m_1\times r$ matrix $P$, an $r\times m_2$ matrix $Q$,
and $r\times r$ diagonal matrices $D_1,\ldots,D_{m_3}$
such that
$Y_k=PD_kQ$
for $1\leq k\leq m_3$.
\end{prop}
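Proposition~\ref{prop:diagonaldecomp} packages a sum of $r$ rank-one tensors into the factors $P$, $Q$, $D_1,\ldots,D_{m_3}$; a NumPy sketch of the "if" direction:

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, m3, r = 3, 4, 2, 5
P = rng.standard_normal((m1, r))
Q = rng.standard_normal((r, m2))
D = rng.standard_normal((m3, r))          # row k holds the diagonal of D_k

# slices Y_k = P D_k Q ...
Y = np.stack([P @ np.diag(D[k]) @ Q for k in range(m3)])
# ... equal the sum over l of the rank-one tensors P[:,l] (x) Q[l,:] (x) D[:,l]
rank_one_sum = np.einsum('il,lj,kl->kij', P, Q, D)
assert np.allclose(Y, rank_one_sum)
```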
\begin{prop}
Let $X=(x_{ijk})$ be an $m_1\times m_2\times m_3$ tensor.
For an $m_2\times m_1\times m_3$ tensor $Y=(x_{jik})$ and
an $m_1\times m_3\times m_2$ tensor $Z=(x_{ikj})$, it holds that
$$\mathrm{rank}\, X=\mathrm{rank}\, Y=\mathrm{rank}\, Z.$$
\end{prop}
For an integer $2\leq m<n<2m$, the number $n$ is the only typical rank of $\mathbb{R}^{m\times n\times 2}$. Indeed, it is known that
\begin{thm}[\cite{Miyazaki-etal:2009}]
Let $2\leq m<n$.
There is an open dense semi-algebraic set $O$ of $\mathbb{R}^{m\times n\times 2}$
such that any tensor in $O$ is equivalent to $((E_{m},O);(O,E_{m}))$, which has rank $\min(n,2m)$.
\end{thm}
Furthermore, by Proposition~\ref{prop:compare},
${\mathrm{typical\_rank_\RRR}}(m,m,2)$ is equal to either $\{m\}$ or $\{m,m+1\}$.
Let $U$ be the open subset of $\mathbb{R}^{m\times m\times 2}$ consisting of tensors $(A;B)$ such that $A$ is an $m\times m$ nonsingular matrix
and the eigenvalues of $A^{-1}B$ are distinct and include non-real ones.
For $m\geq 2$, the set $U$ is not empty and any tensor of $U$ has rank $m+1$
(cf. \cite[Theorem~4.6]{Sumi-etal:2009})
and therefore ${\mathrm{typical\_rank_\RRR}}(m,m,2)=\{m,m+1\}$ by Proposition~\ref{prop:open}.
\begin{thm}[{\cite[Result 2]{tenBerge:2000}}] \label{thm:tall}
Let $m,n,u \in\mathbb{N}$ with $3\leq m \leq n \leq u$.
If $(m-1)n <u <mn$,
then the only typical rank of\/ $\mathbb{R}^{m\times n\times u}$ is $u$.
\end{thm}
Ten Berge showed this by applying Fisher's
result \cite[Theorem 5.A.2]{Fisher:1966} to a map defined by
using the Moore-Penrose inverse.
However, the Moore-Penrose inverse is not continuous on the set of matrices,
and thus not analytic.
So, in the rest of this section, we give another proof for the reader's convenience.
Let $3\leq m\leq n$, $p=(m-1)n$, $p<u<mn$ and $q=u-p-1$.
For $W\in M(n-1,n;\mathbb{R})$, the set of $(n-1)\times n$ matrices,
we define a vector
$W^\perp=(a_1,\ldots,a_{n})^\top$ in $\mathbb{R}^{n}$
by
$$a_j=(-1)^{n+j} |W_{[j]}|$$
for $j=1,\ldots,n$, where
$W_{[j]}$ is an $(n-1)\times (n-1)$ matrix obtained from $W$
by removing the $j$-th column.
The following properties are easily shown.
\begin{enumerate}
\item $W^\perp=\zerovec$ if and only if $\mathrm{rank}\, W<n-1$.
\item $WW^\perp=\zerovec$.
\end{enumerate}
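A small NumPy sketch of $W^\perp$ and of the two properties above (the function name is ours):

```python
import numpy as np

def perp(W):
    """W^perp of an (n-1) x n matrix: a_j = (-1)^(n+j) det(W_[j])."""
    n = W.shape[1]
    a = np.empty(n)
    for j in range(n):                   # 0-based j corresponds to column j+1
        W_j = np.delete(W, j, axis=1)    # remove the (j+1)-th column
        a[j] = (-1) ** (n + j + 1) * np.linalg.det(W_j)
    return a

rng = np.random.default_rng(3)
W = rng.standard_normal((3, 4))          # full rank almost surely
assert np.allclose(W @ perp(W), 0)       # property (2)
W_def = np.vstack([W[0], W[0], W[1]])    # rank < n-1
assert np.allclose(perp(W_def), 0)       # property (1)
```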
Let $A_k$ be an $n\times u$ matrix for $1\leq k\leq m$.
Let $B_j$ be a $q\times u$ matrix defined by
$(O_{p+1},E_q)$ for $j\leq p+1$, and
by $(O_{p},\eee_{j-p-1},{\mathrm{Diag}}(E_{j-p-2},0,E_{u-j}))$
for $p+2\leq j\leq u$, where $E_k$ is the $k\times k$ identity matrix and
$\eee_j$ is the $j$-th column of
the identity matrix with suitable size.
Put
\begin{equation}
\label{eq:XjYj}
X_j=\begin{pmatrix} A_2-jA_1\\ A_3-j^2A_1\\ \vdots\\ A_{m}-j^{m-1}A_1
\end{pmatrix} \text{ and }
Y_j=\begin{pmatrix} X_j\\ B_j\end{pmatrix}
\end{equation}
for $1\leq j\leq u$, and
\begin{equation}
\label{eq:H}
H=(Y_1^\perp,\ldots,Y_{u}^\perp).
\end{equation}
We define a polynomial $h$ on $\mathbb{R}^{n\times u\times m}$ by
$$
h(A_1;A_2;\dots;A_{m})=|H|.
$$
We show that the polynomial $h(A_1;A_2;\ldots;A_m)$ is not zero.
It suffices to show that $h(A_1;A_2;\ldots;A_m)\ne 0$ for some
tensor $(A_1;A_2;\ldots;A_m)$.
We prepare a lemma.
Let $f(a_1,\ldots,a_{m-1},b)=
\left|\begin{matrix}a_1-b&\cdots&a_{m-1}-b\\
a_1^2-b^2&\cdots&a_{m-1}^2-b^2\\ \vdots&&\vdots\\
a_1^{m-1}-b^{m-1}&\cdots&a_{m-1}^{m-1}-b^{m-1}\end{matrix}\right|$.
\begin{lemma} \label{lem:Vandermond}
If $a_1,\ldots,a_{m-1},b$ are distinct from each other, then
$f(a_1,\ldots,a_{m-1},b)\ne 0$.
\end{lemma}
\begin{proof}
It is easy to see that
\begin{equation*}
\begin{split}
f(a_1,\ldots,a_{m-1},b)&=
\left|\begin{matrix}
1&0&\cdots&0\\
b&a_1-b&\cdots&a_{m-1}-b\\
b^2&a_1^2-b^2&\cdots&a_{m-1}^2-b^2\\
\vdots&\vdots&&\vdots\\
b^{m-1}&a_1^{m-1}-b^{m-1}&\cdots&a_{m-1}^{m-1}-b^{m-1}\end{matrix}\right| \\[2mm]
&=
\left|\begin{matrix}
1&1&\cdots&1\\
b&a_1&\cdots&a_{m-1}\\
b^2&a_1^2&\cdots&a_{m-1}^2\\
\vdots&\vdots&&\vdots\\
b^{m-1}&a_1^{m-1}&\cdots&a_{m-1}^{m-1}\end{matrix}\right| \ne 0.\\
\end{split}
\end{equation*}
\qed
\end{proof}
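The reduction in the proof can be checked symbolically for a small case, say $m=4$ (a SymPy sketch):

```python
import sympy as sp

m = 4
a = sp.symbols('a1:4')          # a1, a2, a3  (the m-1 = 3 parameters)
b = sp.Symbol('b')

# f(a_1, ..., a_{m-1}, b): entry (i, j) is a_j^(i+1) - b^(i+1)
F = sp.Matrix(m - 1, m - 1, lambda i, j: a[j]**(i + 1) - b**(i + 1))
# bordered Vandermonde matrix in the columns b, a_1, ..., a_{m-1}
V = sp.Matrix(m, m, lambda i, j: (b if j == 0 else a[j - 1])**i)

assert sp.expand(F.det() - V.det()) == 0   # the two determinants agree
```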
\begin{lemma}
Let $\vvv=(1,\ldots,1)^\top\in \mathbb{R}^{n}$,
$A_1=(E_{n},\ldots,E_{n},\vvv,O_q)$ and
$$A_{s+1}=(A_1\eee_1,2^sA_1\eee_2,\ldots,u^sA_1\eee_{u})
=A_1{\mathrm{Diag}}(1^s,2^s,\ldots,u^s)$$
for $1\leq s\leq m-1$.
Then the $(u-1)\times u$ matrix $Y_j$ defined in \eqref{eq:XjYj} satisfies
that $Y_j^\perp=t_j\eee_j$ for some $t_j\ne 0$.
In particular, $h(A_1;A_2;\ldots;A_{m})\ne 0$.
\end{lemma}
\begin{proof}
Let
$$D_{t,s,j}={\mathrm{Diag}}(((t-1)n+1)^s-j^s,((t-1)n+2)^s-j^s,\ldots,(tn)^s-j^s)$$
be an $n\times n$ matrix.
Then
$$
A_{s+1}-j^sA_1=(D_{1,s,j},D_{2,s,j},\ldots,D_{m-1,s,j},((p+1)^s-j^s)\vvv,O_q).
$$
For a $v\times w$ matrix $G=(g_{ij})$, we denote
by
$$G_{=\{a_1,\ldots,a_c\}}^{=\{b_1,\ldots,b_r\}}$$
the $r\times c$ matrix obtained from $G$
by choosing $a_1$-, $\ldots$, $a_c$-th columns
and $b_1$-, $\ldots$, $b_r$-th rows,
that is
$(g_{b_ia_j})$,
and put
$$G_{=\{a_1,\ldots,a_c\}}=G_{=\{a_1,\ldots,a_c\}}^{=\{1,\ldots,v\}}, \quad
G_{\leq c}=G_{=\{1,\ldots,c\}}^{=\{1,\ldots,v\}},\quad
G_{\leq c}^{\leq r}=G_{=\{1,\ldots,c\}}^{=\{1,\ldots,r\}}.
$$
First we suppose that $j>p$.
Put $S_t=\{t,n+t,2n+t,\ldots,(m-2)n+t\}$ and
$M_{j,t}=(Y_j)_{=S_t}^{=S_t}=(X_j)_{=S_t}^{=S_t}$.
Note that $M_{j,t}$ is nonsingular by Lemma~\ref{lem:Vandermond},
since
$$
|M_{j,t}|=f(t,n+t,2n+t,\ldots,(m-2)n+t,j).
$$
We consider the $p\times p$ matrix
$(Y_j)_{\leq p}^{\leq p}=(X_j)_{\leq p}$.
There exists a permutation matrix $P$ such that
$$
P^{-1}(X_j)_{\leq p}P
={\mathrm{Diag}}(M_{j,1},M_{j,2},\ldots,M_{j,n}).
$$
Thus we get
$$
|(X_j)_{\leq p}|
=\prod_{1\leq t\leq m-1} |M_{j,t}|
$$
which implies that $(X_j)_{\leq p}$ is nonsingular.
Thus $\mathrm{rank}\, Y_j=u-1$ and $Y_j^\perp=t_j\eee_j$ for some $t_j\ne0$,
since the $j$-th column vector of $Y_j$ is zero.
\par
Next suppose that $j\leq p$.
The $j$-th column of $Y_j$ is zero.
Let
$$Z_j=(X_j)_{=\{1,\ldots,p+1\}\smallsetminus\{j\}}$$
be the $p\times p$ matrix obtained from
$(X_j)_{\leq p+1}$ by removing the $j$-th column.
It suffices to show that $\mathrm{rank}\, Z_j=p$.
We express $j$ uniquely by $ns_0+t_0$ for a pair $(s_0,t_0)$ of integers
with $0\leq s_0\leq m-2$ and $1\leq t_0\leq n$.
Let
$$T=\{sn+t_0\mid 0\leq s\leq m-2, s\ne s_0\}\cup\{p+1\}.$$
There exist permutation matrices $P$ and $Q$ such that
$$PZ_jQ
=\begin{pmatrix}
\stackrel{\vbox to 4pt{\hbox{\normalsize${\mathrm{Diag}}$}}}
{\lower0.5ex\vbox to 0pt{
\hbox{\scriptsize$\substack{1\leq t\leq n, \\t\ne t_0}$}}} M_{j,t} &
\begin{matrix} O_{p-m+1,m-2} & *\end{matrix}\\[4mm]
O_{m-1,p-m+1} & (X_j)_{=T}^{=S_{t_0}} \end{pmatrix}$$
whose last column corresponds to the $(p+1)$-th column of $X_j$.
We get, for some integer $a$, the equality
$$
|Z_j|=(-1)^a|(X_j)_{=T}^{=S_{t_0}}|
\prod_{1\leq t\leq m-1, t\ne t_0} |M_{j,t}|.
$$
Again by Lemma~\ref{lem:Vandermond},
$Z_j$ is nonsingular and $Y_j^\perp=t_j\eee_j$ for some $t_j\ne 0$.
\qed
\end{proof}
Thus the polynomial $h$ is not zero.
Consider a nonempty Zariski open set
$$S=\{(A_1;A_2;\ldots;A_{m}) \in \mathbb{R}^{n\times u\times m} \mid
h(A_1;A_2;\ldots;A_{m})\ne 0\}.$$
Note that the closure $\overline{S}$ of $S$ is equal to
$\mathbb{R}^{n\times u\times m}$.
For $(A_1;A_2;\ldots;A_{m})\in S$ and $X_j,Y_j, H$ matrices
given in \eqref{eq:XjYj} and \eqref{eq:H},
$A_kY_j^\perp=j^{k-1}A_1Y_j^\perp$ for $1\leq k\leq m$ and
$1\leq j\leq u$.
Since
\begin{equation*}
\begin{split}
A_kH &=(A_kY_1^\perp,A_kY_2^\perp,\ldots,A_kY_u^\perp) \\
&=
(A_1Y_1^\perp,2^{k-1}A_1Y_2^\perp,\ldots,u^{k-1}A_1Y_u^\perp) \\
&=
A_1H{\mathrm{Diag}}(1,2^{k-1},\ldots,u^{k-1}),
\end{split}
\end{equation*}
it holds that $A_k=A_1H{\mathrm{Diag}}(1,2^{k-1},\ldots,u^{k-1})H^{-1}$
for each $k$.
By Proposition~\ref{prop:diagonaldecomp}, we get
$\mathrm{rank}(A_1;A_2;\ldots;A_{m})\leq u$.
Every number in ${\mathrm{typical\_rank_\RRR}}(m,n,u)$ is greater than or equal to
$u$, which is equal to the generic rank of $\mathbb{C}^{m\times n\times u}$,
since $(m-1)n<u<mn$.
This completes the proof of Theorem~\ref{thm:tall}.
\begin{cor} \label{cor:mxnx(m-1)n}
Let $3\leq m\leq n$.
Then the set of typical ranks of $m\times n\times (m-1)n$ tensors
is either $\{(m-1)n\}$ or $\{(m-1)n,(m-1)n+1\}$.
\end{cor}
\begin{proof}
The only typical rank of $\mathbb{R}^{m\times n\times ((m-1)n+1)}$ is $(m-1)n+1$
by Theorem~\ref{thm:tall} and
the minimal typical rank of $\mathbb{R}^{m\times n\times (m-1)n}$ is equal to
$(m-1)n$, since it is equal to
the generic rank of $\mathbb{C}^{m\times n\times (m-1)n}$.
Thus the assertion follows from Proposition~\ref{prop:compare}.
\qed
\end{proof}
\section{Characterization}
From now on, let $3\leq m\leq n$, $\ell=m-1$ and $p=(m-1)n$.
For an $n\times n\times \ell$ tensor $(Y_1;\ldots;Y_\ell)$,
consider an $n\times p\times m$ tensor
$X(Y_1,\ldots,Y_\ell)=(X_1;\ldots;X_m)$ given by
\begin{equation} \label{eq:typicalform}
\begin{pmatrix} X_1 \\ \vdots \\ X_m\end{pmatrix}
=\begin{pmatrix} E_n \\ & E_n \\ &&\ddots \\ &&& E_n \\ Y_1 & Y_2 & \cdots & Y_{\ell} \end{pmatrix}.
\end{equation}
Note that $\mathrm{rank}\, X(Y_1,\ldots,Y_\ell)\geq p$,
since $\mathrm{rank}\, X(Y_1,\ldots,Y_\ell)$ is greater than or equal to
the rank of the $p\times p$ matrix \eqref{eq:typicalform}.
Generically, an $m\times n\times p$ tensor is equivalent to
a tensor of the form $X(Y_1,\ldots,Y_\ell)$.
We denote by $\mathfrak{M}$ the set of
tensors
$Y=(Y_1;\ldots;Y_\ell)\in \mathbb{R}^{n\times n\times \ell}$
such that
there exist an $m\times p$ matrix $(x_{ij})$ and an $n\times p$ matrix
$A=(\aaa_1,\ldots,\aaa_p)$ such that
\begin{equation} \label{eq:lineq}
(x_{1j}Y_1+\cdots+x_{m-1,j}Y_{m-1}-x_{mj}E_n)\aaa_j=\zerovec
\end{equation}
for $1\leq j\leq p$
and
\begin{equation} \label{eqn:B}
B:=\begin{pmatrix} AD_1\\ \vdots \\ AD_{\ell}\end{pmatrix}
\end{equation}
is nonsingular, where
$D_k={\mathrm{Diag}}(x_{k1},\cdots,x_{kp})$ for $1\leq k\leq \ell$.
\begin{lemma} \label{lem:equiv}
$\mathrm{rank}\, X(Y_1,\ldots,Y_\ell)=p$ if and only if
$(Y_1;\ldots;Y_\ell)\in \mathfrak{M}$.
\end{lemma}
\begin{proof}
Suppose that $\mathrm{rank}\, X(Y_1,\ldots,Y_\ell)=p$.
There are an $n\times p$ matrix $A$, a $p\times p$ matrix $Q$
and $p\times p$ diagonal matrices $D_i$ such that $X_k=AD_kQ$
for $k=1,\ldots,m$.
Since $$\begin{pmatrix} X_1 \\ \vdots \\ X_\ell\end{pmatrix}=E_p=
\begin{pmatrix} AD_1 \\ \vdots \\ AD_\ell\end{pmatrix}Q,$$
$B$ is nonsingular.
Then
$(Y_1,\ldots,Y_\ell)B=AD_m$ implies that
$\sum_{k=1}^\ell Y_kAD_k=AD_m$.
Therefore, the $j$-th column vector $\aaa_j$ of $A$
satisfies \eqref{eq:lineq}.
Therefore $(Y_1,\ldots,Y_\ell)\in\mathfrak{M}$.
It is easy to see that the converse is also true.
\qed
\end{proof}
For an $n\times n\times \ell$ tensor $Y=(Y_1;\ldots;Y_\ell)$,
we put
$$V(Y)=\{\aaa \in \mathbb{R}^n \mid \sum_{k=1}^\ell x_kY_k\aaa=x_m\aaa
\text{ for some $(x_1,\ldots,x_m)^\top\ne\zerovec$}\}.$$
The set $V(Y)$ is not a vector subspace of $\mathbb{R}^n$.
Let $\hat{V}(Y)$ be the smallest vector subspace of $\mathbb{R}^n$ including $V(Y)$.
Let
$$\mathfrak{S}=\{Y \in \mathbb{R}^{n\times n\times \ell} \mid \dim \hat{V}(Y)=n\}.$$
\begin{prop} \label{prop:UsubsetS}
$\mathfrak{M}\subset\mathfrak{S}$ holds.
\end{prop}
\begin{proof}
Let $Y\in \mathfrak{M}$, and take an $m\times p$ matrix $(x_{ij})$ and an
$n\times p$ matrix $A=(\aaa_1,\ldots,\aaa_p)$ satisfying the equation \eqref{eq:lineq} for which the matrix $B$ in \eqref{eqn:B} is nonsingular.
By column operations, $B$ is transformed to
a $p\times p$ matrix having a form
$$\begin{pmatrix}
P_{11} & O_{n,p-\dim \hat{V}(Y)} \\
P_{21} & P_{22}
\end{pmatrix}$$
where $P_{11}$ is an $n\times \dim \hat{V}(Y)$ submatrix of $A$.
Since $B$ is nonsingular, $P_{11}$ is also nonsingular, which
implies that $\dim \hat{V}(Y)=n$.
\qed
\end{proof}
By Corollary~\ref{cor:mxnx(m-1)n}, Lemma~\ref{lem:equiv} and Proposition~\ref{prop:UsubsetS},
we have the following
\begin{prop} \label{prop:rankX(Y)}
If $\mathrm{rank}\, X(Y)=p$ then $Y\in \mathfrak{S}$.
In particular,
$\overline{\mathfrak{S}}\ne \mathbb{R}^{n\times n\times\ell}$
implies that ${\mathrm{typical\_rank_\RRR}}(m,n,p)=\{p,p+1\}$.
\end{prop}
\begin{thm}[\cite{Sumi-etal:2010a}] \label{thm:ans}
If $(Y_1;\ldots;Y_\ell;E_n)$ is an absolutely nonsingular tensor, then
it holds that
$\mathrm{rank}\, X(Y_1,\ldots,Y_\ell)>p$.
\end{thm}
Here $(Y_1;\ldots;Y_\ell;Y_m)$ is called an absolutely nonsingular
tensor
if $|\sum_{k=1}^m x_kY_k|=0$ implies $(x_1,\ldots,x_m)^\top=\zerovec$.
Therefore,
\begin{prop} \label{prop:dimV(Y)=0}
$\dim \hat{V}(Y)=0$ if and only if
$(Y;E_n)$ is an $n\times n\times m$ absolutely nonsingular tensor.
\end{prop}
Note that there exists an $n\times n\times m$ absolutely
nonsingular tensor if and only if $m$ is less than or equal to
the Hurwitz-Radon number $\rho(n)$ \cite{Sumi-etal:2010a}.
\begin{prop}
Let $Y$ and $Z$ be $n\times n\times m$ tensors.
Suppose $(P,Q,R)\cdot Y=Z$ for $(P,Q,R)\in {\mathrm{GL}}(n)\times {\mathrm{GL}}(n)\times {\mathrm{GL}}(m)$.
Then $V(Y)=Q^\top V(Z)=\{Q^\top\yyy \mid \yyy\in V(Z)\}$.
In particular, $\dim \hat{V}(Z)=\dim \hat{V}(Y)$.
\end{prop}
\begin{proof}
Suppose that $\sum_{k=1}^m x_kZ_k\yyy=\zerovec$.
Then from the definition of the action, it follows that
$$\sum_{k=1}^m x_k\sum_{u=1}^m r_{ku}PY_uQ^\top\yyy=
P\Bigl(\sum_{u=1}^m \Bigl(\sum_{k=1}^m x_kr_{ku}\Bigr)Y_u\Bigr)Q^\top\yyy=\zerovec.$$
Thus $Q^\top\yyy \in V(Y)$.
\qed
\end{proof}
\begin{cor}
$\mathfrak{S}$ is closed under the equivalence relation.
\end{cor}
The closure of the set of all $n\times p\times m$ tensors equivalent to
$X(Y_1,\ldots,Y_\ell)$ for some $Y_1,\ldots,Y_{\ell}$
is $\mathbb{R}^{n\times p\times m}$.
Furthermore, the following claim holds.
Let $\mathfrak{V}$ be the set of $n\times p\times m$ tensors
$(X_1;\ldots;X_m)$
such that
$A=(X_1^\top,\ldots,X_\ell^\top)$
is a nonsingular $p\times p$ matrix
and $(Y_1;\ldots;Y_\ell)$ given by
$(Y_1,\ldots,Y_\ell)=A^{-1}X_m$ lies in $\mathfrak{M}$.
Any tensor of $\mathfrak{V}$ has rank $p$.
If $\mathfrak{M}$ is dense in $\mathbb{R}^{n\times n\times \ell}$
then $\mathfrak{V}$ is dense in $\mathbb{R}^{n\times p\times m}$.
\section{Classes of $n\times n\times \ell$ tensors}
We separate $\mathbb{R}^{n\times n\times \ell}$ into three classes
$\mathfrak{A}$, $\mathfrak{C}$, and $\mathfrak{B}$ as follows.
Let $\mathfrak{A}$ be the set of tensors $Y$ such that
$(Y;E_n)$ is absolutely nonsingular.
By Proposition~\ref{prop:dimV(Y)=0},
we have the following
\begin{prop} \label{prop:C1capU}
$\mathfrak{A}\cap \mathfrak{S}=\varnothing$.
\end{prop}
From now on, we use symbols $x_1,\ldots,x_\ell,x_m$ as indeterminates over $\mathbb{R}$.
For $Y=(Y_1;\ldots;Y_\ell)\in \mathbb{R}^{n\times n\times\ell}$, we define the
$n\times n$ matrix with entries in $\mathbb{R}[x_1,\ldots,x_\ell,x_m]$ as follows.
$$M(\xxx,Y)=\sum_{k=1}^\ell x_kY_k-x_mE_n$$
Note that fixing $a_1,\ldots,a_\ell$, the determinant
$|M(\aaa,Y)|$ is positive for $a_m \ll 0$, where $\aaa=(a_1,\ldots,a_\ell,a_m)^\top$.
Set
$$\mathfrak{C}=\{Y \in \mathbb{R}^{n\times n\times\ell}\mid
|M(\aaa,Y)|<0
\text{ for some $\aaa\in \mathbb{R}^m$}\}.$$
Note that $\mathfrak{C}$ is not empty, and
if $n$ is not congruent to $0$ modulo $4$
then $\mathfrak{A}$ is empty since $m\geq 3$.
Set $\mathfrak{B}=\mathbb{R}^{n\times n\times \ell}\smallsetminus
(\mathfrak{A}\cup \mathfrak{C})$.
The class $\mathfrak{B}$ contains the zero tensor.
\begin{prop}
$\mathfrak{A}$ and $\mathfrak{C}$ are open subsets of\/ $\mathbb{R}^{n\times n\times \ell}$.
\end{prop}
Recall that
$$\mathfrak{A}=\{Y \in \mathbb{R}^{n\times n\times\ell} \mid
|M(\aaa,Y)|>0 \text{ for all $\aaa\ne \zerovec$}
\}.$$
Thus it holds
$$
\mathfrak{B}=\{Y \in \mathbb{R}^{n\times n\times\ell} \mid
\begin{array}{l}
|M(\bbb,Y)|=0 \text{ for some $\bbb\ne \zerovec$ and} \\
|M(\aaa,Y)|\geq 0 \text{ for all $\aaa$}
\end{array}
\}.
$$
\begin{prop} \label{prop:C3isboundaryC2}
$\mathfrak{B}$ is the boundary of $\mathfrak{C}$.
In particular, $\mathbb{R}^{n\times n\times \ell}$ is the disjoint union
of $\mathfrak{A}$ and the closure $\overline{\mathfrak{C}}$ of $\mathfrak{C}$.
\end{prop}
\begin{proof}
It suffices to show that
$\mathfrak{B} \subset \overline{\mathfrak{C}}$.
Let $Y=(Y_1;\ldots;Y_\ell) \in \mathfrak{B}$.
There are a nonzero vector $\bbb=(b_1,\ldots,b_\ell,b_m)^\top\in\mathbb{R}^m$
with $|M(\bbb,Y)|=0$ and an element $g \in {\mathrm{GL}}(\ell)$
such that $g\cdot Y=(Z_1;Z_2;\ldots;Z_\ell)$ and
$Z_1=\sum_{k=1}^\ell b_kY_k$.
Then $|Z_1-b_mE_n|=0$.
Take a sequence $\{Z_1^{(u)}\}_{u\geq 1}$ such that
$|Z_1^{(u)}-b_mE_n|<0$ and $\lim_{u\to\infty} Z_1^{(u)}=Z_1$.
Thus, $(Z_1^{(u)};Z_2;\ldots;Z_\ell)\in \mathfrak{C}$ and then
$g^{-1}\cdot (Z_1^{(u)};Z_2;\ldots;Z_\ell)\in \mathfrak{C}$.
Therefore, $Y\in \overline{\mathfrak{C}}$.
\qed
\end{proof}
\begin{cor}
If $\mathfrak{A}$ is not empty then
$\mathfrak{B}$ is a boundary of $\mathfrak{A}$.
\end{cor}
The set $\mathfrak{B}$ contains a nonzero tensor in general.
We give an example.
\begin{example}\rm
Let $A=(A_1;A_2;A_3)$ be a $6\times 6\times 3$ tensor
given by
$$X(x_1,x_2,x_3)=x_1A_1+x_2A_2-x_3A_3=\begin{pmatrix}
-x_3&-x_2&0&0&0&-x_1\\
x_1&-x_3&x_2&0&0&0 \\
0&x_1&-x_3&x_2&0&0 \\
0&0&x_1&-x_3&-x_2&0\\
0&0&0&x_1&-x_3&x_2\\
-x_2&0&0&0&x_1&-x_3
\end{pmatrix}
$$
Then $|a_1A_1+a_2A_2-a_3A_3|=a_3^2(a_1a_2-a_3^2)^2+(a_1^3+a_2^3)^2\geq 0$.
The equality holds if and only if $a_3=0$ and $a_1=-a_2$.
Thus $\dim \hat{V}((A_1;A_2))=1$.
Let $B=\begin{pmatrix} 1&\cdots&1\\ 0 &\cdots&0 \\ \vdots&&\vdots\\
0&\cdots&0\end{pmatrix}$ be a $6\times 6$ matrix.
If $x_3=y$, $x_1=-y^2$, and $x_2=-2y/5$, then
$$|X+yB|=y^6(y^6+y^5-7y^4/5+161y^3/125-167y^2/125+629y/625-2926/15625).$$
Thus, if $a_3\ne 0$ and $|a_3|$ is sufficiently small,
then
$|X(-a_3^2,-2a_3/5,a_3)+a_3B|<0$.
\end{example}
\begin{prop} \label{prop:notFull}
If $m\leq \rho(n-1)$ then $\mathfrak{C} \not\subset \mathfrak{S}$,
where $\rho(n-1)$ is the Hurwitz-Radon number.
\end{prop}
\begin{proof}
Let $(A_1;\ldots;A_\ell;E_{n-1})$ be an $(n-1)\times (n-1)\times m$ absolutely nonsingular tensor.
Put $B_k={\mathrm{Diag}}(a_k,A_k)$ for $1\leq k\leq \ell$ and $B_m={\mathrm{Diag}}(1,E_{n-1})=E_n$, and
$B=(B_1;\ldots;B_\ell)$.
Then it is easy to see that
$B \in \mathfrak{C}$ and $|\sum_{k=1}^\ell x_kB_k-zB_m|=0$ implies
$z=\sum_{k=1}^\ell a_kx_k$.
Therefore $V(B)=\{a(1,0,\ldots,0)^\top \in \mathbb{R}^n \mid a\in \mathbb{R}\}$.
In particular $B \notin \mathfrak{S}$.
\qed
\end{proof}
\section{Irreducibility}
In the space of homogeneous polynomials in $m$ variables, there exists
a proper Zariski closed subset $S$ such that
if a polynomial does not belong to $S$ then it is irreducible \cite[Theorem~7]{Kaltofen:1995}, since $m\geq 3$.
Let $P(m,n)$ be the set of
homogeneous polynomials in $m$ variables $x_1,\ldots,x_m$ with real coefficients of degree $n$ such that the coefficient of $x_m^n$ is one.
Its dimension is $\binom{m+n-1}{m-1}-1$.
Let $I_\ell$ be a nonempty Zariski open subset of $P(m,n)$ such that
any polynomial of $I_\ell$ is irreducible.
Note that $|-M(\xxx,Y)| \in P(m,n)$.
The aim of this section is to prove the following fact.
\begin{prop} \label{prop:irr}
The set
$$\{Y\in\mathbb{R}^{n\times n\times\ell} \mid |-M(\xxx,Y)| \in I_\ell\}$$
is a nonempty Zariski open subset of\/ $\mathbb{R}^{n\times n\times \ell}$.
\end{prop}
Let $f_\ell\colon \mathbb{R}^{n\times n\times\ell} \to P(m,n)$ be a map which sends
$(Y_1;\ldots;Y_\ell)$ to $|\sum_{k=1}^\ell x_kY_k+x_mE_n|$.
Note that $|-M(\xxx,Y)| \in I_\ell$ if and only if
$f_\ell(Y) \in I_\ell$.
Since $I_\ell$ is a Zariski open set,
$$\mathfrak{T}_\ell:=\{Y\in\mathbb{R}^{n\times n\times \ell} \mid
f_\ell(Y) \in I_{\ell}\}$$
is a Zariski open subset of $\mathbb{R}^{n\times n\times \ell}$.
Then it suffices to show that $\mathfrak{T}_\ell$ is not empty.
First, we show it in the case where $m=3$.
\par
The affine space $P(3,n)$ is isomorphic to a real vector space of dimension $n(n+3)/2$ with basis
$$\{x_1^ax_2^bx_3^c\mid 0\leq a,b,c\leq n, a+b+c=n, c\ne n\}.$$
Let $G$ be a map from $\mathbb{R}^{n\times n\times 2}$ to $\mathbb{R}^{n(n+3)/2}$
defined as
$$G((Y_1;Y_2))=\phi(|x_1Y_1+x_2Y_2+x_3E_n|),$$
where $\phi\colon P(3,n)\to \mathbb{R}^{n(n+3)/2}$ is an isomorphism.
It suffices to show that the Jacobian matrix of $G$ has generically full column rank.
To show this, we restrict the source of $G$ to
$$S:=\{(Y_1;Y_2) \in \mathbb{R}^{n\times n\times 2} \mid
Y_1=
\begin{pmatrix} u_{11} & 0&\cdots & 0\\
u_{21} & u_{22} & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0\\
u_{n1}& \cdots & u_{n,n-1} & u_{nn}
\end{pmatrix},
Y_2=
\begin{pmatrix}0 & 0&\cdots & v_1\\
-1 & 0 & \cdots & v_{2}\\
\vdots & \ddots & \ddots & \vdots \\
0& \cdots & -1 & v_n
\end{pmatrix}
\}$$
of dimension $n(n+3)/2$, say $G|_S\colon S\to \mathbb{R}^{n(n+3)/2}$.
\begin{lemma} \label{lem:Jacobian}
The Jacobian of $G|_S$ is nonzero.
\end{lemma}
\begin{proof}
Put $g(Y):=f_2(Y)-x_3^n$ for $Y\in S$.
Suppose that for constants $c(v_j)$, $c(u_{ij})$, the linear equation
\begin{equation}\label{eqn:1}
\sum_{j=1}^n c(v_j)\frac{\partial g}{\partial v_j}
+\sum_{1\leq j\leq i\leq n} c(u_{ij})\frac{\partial g}{\partial u_{ij}}=0
\end{equation}
holds.
We show that all of $c(v_j)$, $c(u_{ij})$ are zero by induction on $n$.
It is easy to see that the assertion holds in the case where $n=1$.
As the induction hypothesis, we assume that the
assertion holds with $n-1$ in place of $n$.
We put
$$\lambda_j=u_{jj}x_1+x_3 \text{ and } \mu(a,b)=\prod_{t=a}^{b}\lambda_t.$$
After taking the partial derivatives, we set $u_{ij}=0$ ($i>j$) and
obtain the following equations:
$$
\begin{array}{lcll}
\displaystyle\frac{\partial g}{\partial v_j}&=&
x_2^{n-j+1}\mu(1,j-1)
& (1\leq j\leq n)\\
\displaystyle\frac{\partial g}{\partial u_{jj}}&=&
x_1\mu(1,j-1)
\left| \begin{matrix} \lambda_{j+1} && & & v_{j+1}x_2\\
-x_2& \lambda_{j+2} &&& v_{j+2}x_2\\
& \ddots& \ddots&&\vdots \\
& & -x_2 & \lambda_{n-1}& v_{n-1}x_2\\
& && -x_2 & \lambda_{n}+v_nx_2\\
\end{matrix}\right|
& (1\leq j\leq n)\\
\displaystyle\frac{\partial g}{\partial u_{ij}}&=&
\displaystyle -x_1x_2^{n-i}\mu(j+1,i-1)
\left| \begin{matrix} \lambda_{1} & & && v_{1}x_2\\
-x_2& \lambda_{2} &&& v_{2}x_2\\
& \ddots& \ddots&& \vdots \\
& & -x_2 & \lambda_{j-1}& v_{j-1}x_2\\
& && -x_2 & v_jx_2\\
\end{matrix}\right|
& (1\leq j<i\leq n) \\
\end{array}
$$
\noindent
By seeing terms divisible by $\lambda_1$ in the left hand side of
\eqref{eqn:1}, we have
\begin{equation*}\label{eqn:3}
\sum_{j=2}^n c(v_j)\frac{\partial g}{\partial v_j}
+\sum_{2\leq j\leq i\leq n} c(u_{ij})h_{ij}=0,
\end{equation*}
where
$$h_{ij}=\displaystyle -x_1x_2^{n-i}\mu(j+1,i-1)
\left| \begin{matrix} \lambda_{1} & & & & 0\\
-x_2& \lambda_{2} &&& v_{2}x_2\\
& \ddots& \ddots&& \vdots \\
& & -x_2 & \lambda_{j-1}& v_{j-1}x_2\\
& && -x_2 & v_jx_2\\
\end{matrix}\right|.$$
Note that
$$
\begin{array}{lcll}
\displaystyle\frac{\partial g}{\partial v_j} &=&
\displaystyle\lambda_1\frac{\partial g^\prime}{\partial v_j}
& (2\leq j\leq n), \text{ and} \\
h_{ij} &=&
\displaystyle\lambda_1\frac{\partial g^\prime}{\partial u_{ij}}
& (2\leq j\leq i\leq n) \\
\end{array}
$$
where $g^\prime$ is the determinant of
the $(n-1)\times (n-1)$ matrix obtained from
$x_1Y_1+x_2Y_2+x_3E_n$ by removing the first row and the first column
minus $x_3^{n-1}$.
Therefore by the induction assumption,
$$c(v_j)=c(u_{ij})=0 \quad (2\leq j\leq i\leq n)$$
since
$\displaystyle\frac{\partial g^\prime}{\partial v_j}$,
$\displaystyle\frac{\partial g^\prime}{\partial u_{ij}}$
($2\leq j\leq i\leq n$)
are linearly independent.
By \eqref{eqn:1}, we have
\begin{equation}\label{eqn:4}
c(v_1)x_2^n
+c(u_{11})\frac{\partial g}{\partial u_{11}}
-\sum_{i=2}^n c(u_{i1})\displaystyle v_1x_1x_2^{n-i+1}\mu(2,i-1)=0.
\end{equation}
Expanding along the $n$-th column, we have
$$
\frac{\partial g}{\partial u_{11}}=\sum_{i=2}^{n-1} v_{i}x_1x_2^{n-i+1}
\mu(2,i-1)+ x_1(\lambda_n+v_nx_2)\mu(2,n-1).
$$
Therefore, the equation \eqref{eqn:4} implies that
\begin{equation*}\label{eqn:5}
c(v_1)x_2^n+
\sum_{i=2}^{n} (c(u_{11})v_i-c(u_{i1})v_1)x_1x_2^{n-i+1}\mu(2,i-1)
+c(u_{11})x_1\mu(2,n)=0.
\end{equation*}
In this equation we compare the coefficients of $x_2^s$ for $0\leq s\leq n$.
Then we have $c(u_{i1})=c(v_1)=0$ for $1\leq i\leq n$.
Therefore, we conclude that
$\displaystyle\frac{\partial g}{\partial v_j}$, $\displaystyle\frac{\partial g}{\partial u_{ij}}$ ($1\leq j\leq i\leq n$) are linearly independent, which
means that the Jacobian of $G|_S$ is nonzero.
\qed
\end{proof}
By Lemma~\ref{lem:Jacobian}, there is an open subset
$S$ of $\mathbb{R}^{n\times n\times 2}$
such that the Jacobian matrix of $G$ at $Y$
has full column rank for any $Y\in S$.
Then $f_2(S)\cap I_2$ is not empty and
thus $\mathfrak{T}_2 \cap S$ is not empty.
In particular, $\mathfrak{T}_2$ is not empty.
\par
Now we show that $\mathfrak{T}_\ell$ is not empty in the case where $\ell>2$.
Let $q\colon \mathbb{R}^{n\times n\times \ell} \to \mathbb{R}^{n\times n\times 2}$
be a canonical projection which sends $(Y_1;\ldots;Y_\ell)$ to
$(Y_{\ell-1};Y_{\ell})$.
Put $\hat{\mathfrak{T}}=q^{-1}(\mathfrak{T}_2 \cap S)$ and
let $\bar{q}\colon P(m,n)\to P(3,n)$ also be the canonical projection which sends a polynomial
$g(x_1,\ldots,x_m)$ to $g(0,\ldots,0,x_{1},x_{2},x_3)$.
The following diagram is commutative.
$$\begin{CD}
\hat{\mathfrak{T}} @>{\subset}>> \mathbb{R}^{n\times n\times \ell} @>f_\ell>> P(m,n) \\
@VVV @V{q}VV @V{\bar{q}}VV \\
\mathfrak{T}_2 \cap S @>{\subset}>> \mathbb{R}^{n\times n\times 2} @>f_2>> P(3,n)
\end{CD}
$$
Note that if $g(x_1,\ldots,x_m)\in P(m,n)$ is reducible then so is
$g(0,\ldots,0,x_1,x_2,x_3)\in P(3,n)$.
The set $\hat{\mathfrak{T}}$ is a nonempty open subset of
$\mathbb{R}^{n\times n\times\ell}$ with the property that
$f_\ell(Y)$ is irreducible for any $Y\in \hat{\mathfrak{T}}$.
Thus $\mathfrak{T}_\ell$ is not empty,
since $\hat{\mathfrak{T}}\subset\mathfrak{T}_\ell$.
This completes the proof of Proposition~\ref{prop:irr}.
\section{Proof of Theorem~\ref{thm:main} \label{sec:proof}}
In this section we show Theorem~\ref{thm:main}.
Let $\check{\xxx}=(x_1,\ldots,x_\ell)^\top$ for $\xxx=(x_1,\ldots,x_\ell,x_m)^\top$, and put
$$\psi(\xxx,Y):=\begin{pmatrix}
(-1)^{n+1}|M(\xxx,Y)_{n,1}| \\
(-1)^{n+2}|M(\xxx,Y)_{n,2}| \\
\vdots \\
(-1)^{n+n}|M(\xxx,Y)_{n,n}|
\end{pmatrix},\quad
\check{\xxx}\otimes \psi(\xxx,Y):=\begin{pmatrix}
x_1\psi(\xxx,Y) \\
x_2\psi(\xxx,Y) \\
\vdots \\
x_\ell\psi(\xxx,Y)
\end{pmatrix}$$
and
$$U(Y):=\langle \check{\aaa}\otimes \psi(\aaa,Y) \mid \, |M(\aaa,Y)|=0 \rangle.$$
\begin{lemma} \label{lem:dimVvsM}
If $\dim U(Y)=p$, then $Y\in \mathfrak{M}$.
\end{lemma}
\begin{proof}
Let $\dim U(Y)=p$. Then there are
$\aaa_j=(a_{1j},\ldots,a_{mj})^\top\in\mathbb{R}^m$ with $|M(\aaa_j,Y)|=0$, $1\leq j\leq p$,
such that
$$B^\prime
=(\check{\aaa}_1\otimes \psi(\aaa_1,Y),\ldots,\check{\aaa}_p\otimes \psi(\aaa_p,Y))$$
is nonsingular.
Note that
$M(\aaa_j,Y)\psi(\aaa_j,Y)=\zerovec$ for $1\leq j\leq p$
and
$$B^\prime=\begin{pmatrix} AD_1\\ \vdots \\ AD_{\ell}\end{pmatrix},$$
where
$A=(\psi(\aaa_1,Y),\ldots,\psi(\aaa_p,Y))$ and
$D_k={\mathrm{Diag}}(a_{k1},\cdots,a_{kp})$ for $1\leq k\leq \ell$.
Thus $Y\in \mathfrak{M}$.
\qed
\end{proof}
For an $n\times \ell$ matrix
$C=(\ccc_1,\ldots,\ccc_\ell)$, we put
$$g(\xxx,Y,C):=\left| \begin{matrix} M(\xxx,Y)^{<n}\\ \sum_{k=1}^\ell x_k\ccc_k^\top\end{matrix}\right|,$$
where $M(\xxx,Y)^{<n}$ is the $(n-1)\times n$ matrix obtained from
$M(\xxx,Y)$ by removing the $n$-th row.
\begin{lemma} \label{lem:dimV}
Let $C=(\ccc_1,\ldots,\ccc_\ell)$ be an $n\times\ell$ matrix.
The following claims are equivalent.
\begin{enumerate}
\item $\dim U(Y)=p$.
\item $g(\aaa,Y,C)=0$
for any $\aaa \in \mathbb{R}^{m}$ with $|M(\aaa,Y)|=0$ implies $C=O$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $C=(\ccc_1,\ldots,\ccc_\ell)$ be an $n\times \ell$ matrix.
Put $\ddd=(\ccc_1^\top,\ldots,\ccc_\ell^\top)^\top \in \mathbb{R}^p$.
The inner product of this vector $\ddd$ with
$\check{\aaa}\otimes \psi(\aaa,Y)$ is equal to
$g(\aaa,Y,C)$.
Therefore $\ddd$ belongs to the orthogonal complement of $U(Y)$
if and only if $g(\aaa,Y,C)=0$
for any $\aaa \in \mathbb{R}^{m}$ with $|M(\aaa,Y)|=0$.
Thus the assertion holds.
\qed
\end{proof}
For any $i$ and $k$ with $1\leq i\leq n-1$ and $1\leq k\leq n$,
let $s^{(k)}_i$ be an elementary symmetric polynomial of degree $i$ with
variables $\alpha_1,\ldots,\alpha_{k-1},\alpha_{k+1},\ldots,\alpha_{n}$.
Put
$$S_n=\begin{pmatrix} 1&1&\ldots&1\\
s^{(1)}_1&s^{(2)}_1&\ldots&s^{(n)}_1\\
s^{(1)}_2&s^{(2)}_2&\ldots&s^{(n)}_2\\
\vdots&\vdots&&\vdots\\
s^{(1)}_{n-1}&s^{(2)}_{n-1}&\ldots&s^{(n)}_{n-1}
\end{pmatrix}.$$
\begin{lemma} \label{lem:vanDerMonde}
The determinant $|S_n|$ of the $n\times n$ matrix $S_n$ is equal to
$$\prod_{1\leq i<j\leq n} (\alpha_i-\alpha_j).$$
In particular, if $\alpha_1,\ldots,\alpha_n$ are pairwise distinct,
then $S_n$ is nonsingular.
\end{lemma}
\begin{proof}
For any $i$ and $k$ with $1\leq i\leq n-1$ and $2\leq k\leq n$,
let $t^{(k-1)}_i$ be an elementary symmetric polynomial of degree $i$ with
variables $\alpha_2,\ldots,\alpha_{k-1},\alpha_{k+1},\ldots,\alpha_{n}$.
For $1\leq i\leq n-1$ and $1\leq k\leq n$, we have
$s^{(k)}_{i}-s^{(1)}_{i}=(\alpha_{1}-\alpha_{k})t^{(k-1)}_{i-1}$.
Then
$$|S_n|=\prod_{2\leq k\leq n}(\alpha_{1}-\alpha_{k})
\left|\begin{matrix} 1&1&\ldots&1\\
t^{(1)}_1&t^{(2)}_1&\ldots&t^{(n-1)}_1\\
\vdots&\vdots&&\vdots\\
t^{(1)}_{n-2}&t^{(2)}_{n-2}&\ldots&t^{(n-1)}_{n-2}
\end{matrix}\right|.$$
Therefore we have the assertion by induction on $n$.
\qed
\end{proof}
The following lemma is obtained straightforwardly.
\begin{lemma}
$$\left|\begin{matrix} \alpha_1+z&&&&a_1\\ & \alpha_2+z &&& a_2\\
&&\ddots && \vdots \\
&&&\alpha_{n}+z&a_{n}\\
b_1&b_2&\ldots&b_{n}&0\end{matrix}\right|=-(z^{n-1},z^{n-2},\ldots,1)S_n\begin{pmatrix} a_1b_1\\ a_2b_2\\ \vdots\\ a_nb_n\end{pmatrix}.$$
\end{lemma}
\begin{proof}
We see that the left-hand side of the equation is equal to
\begin{equation*}
\begin{split}
&-\sum_{k=1}^n a_kb_k\frac{\prod_{1\leq i\leq n}(\alpha_i+z)}{\alpha_k+z} \\
&=-\sum_{k=1}^n a_kb_k\left(\sum_{i=1}^{n} s_{i-1}^{(k)} z^{n-i}\right) \\
&=-\sum_{i=1}^{n} \left(\sum_{k=1}^n a_kb_ks_{i-1}^{(k)}\right) z^{n-i} \\
&=-(z^{n-1},z^{n-2},\ldots,1)\begin{pmatrix} \sum_{k=1}^n a_kb_k\\
\sum_{k=1}^n a_kb_ks_{1}^{(k)}\\
\vdots\\ \sum_{k=1}^n a_kb_ks_{n-1}^{(k)}\end{pmatrix}. \\
\end{split}
\end{equation*}
\qed
\end{proof}
\begin{cor} \label{cor:3slice}
Let $\alpha_1,\ldots,\alpha_{n-1}$ be distinct complex numbers,
$a_1,\ldots,a_{n-1}$ nonzero complex numbers, and
$b_1,\ldots,b_{n-1}$ complex numbers.
If
$$\left|\begin{matrix} {\mathrm{Diag}}(\alpha_1,\ldots,\alpha_{n-1})+zE_{n-1} & \aaa\\
\bbb^\top & 0\end{matrix}\right|=0$$
for any $z \in \mathbb{R}$,
then
$\bbb=\zerovec$, where
$\aaa=(a_1,\ldots,a_{n-1})^\top$ and $\bbb=(b_1,\ldots,b_{n-1})^\top$.
\end{cor}
\begin{proof}
Since $S_{n-1}\begin{pmatrix} a_1b_1\\ a_2b_2\\ \vdots\\ a_{n-1}b_{n-1}\end{pmatrix}=\zerovec$ and $S_{n-1}$ is nonsingular,
we have $(a_1b_1,\ldots,a_{n-1}b_{n-1})=\zerovec^\top$.
Since each $a_k$ is nonzero, it follows that $\bbb=\zerovec$.
\qed
\end{proof}
The set
$$\mathfrak{U}_1=\{Y\in\mathbb{R}^{n\times n\times \ell} \mid
\, |M(\xxx,Y)| \text{ is irreducible} \}$$
is a nonempty Zariski open subset of
$\mathbb{R}^{n\times n\times \ell}$ (see Proposition~\ref{prop:irr}).
Let $W$ be the subset of $\mathbb{R}^{n\times n}$ consisting of
matrices $\begin{pmatrix} A_1&A_2\\ A_3&A_4\end{pmatrix}$ such that
all eigenvalues of $A_1$ are distinct over the complex number field and
every element of the vector $P^{-1}A_2$ is a nonzero complex number, where
$A_1\in \mathbb{R}^{(n-1)\times (n-1)}$ and $P\in \mathbb{C}^{(n-1)\times (n-1)}$
is such that $P^{-1}A_1P$ is a diagonal matrix.
Note that the validity of the condition that every element of the vector $P^{-1}A_2$ is nonzero is independent of the choice of $P$.
We put
$$\mathfrak{U}_2:=\{(Y_1;\ldots;Y_\ell)\in \mathbb{R}^{n\times n\times \ell}\mid
Y_k \in W, 1\leq k\leq\ell\}.$$
The set $\mathfrak{U}_2$ is a nonempty Zariski open subset of
$\mathbb{R}^{n\times n\times \ell}$
and
$\mathfrak{U}:=\mathfrak{U}_1\cap \mathfrak{U}_2$ is also.
\begin{lemma} \label{lem:detzero.cond}
Let $Y\in \mathfrak{U}_2$ and $\ddd_1,\ldots,\ddd_\ell\in\mathbb{R}^{n-1}$.
If
$$\left|\begin{matrix} \multicolumn{2}{c}{M(\aaa,Y)^{<n}} \\ \sum_{k=1}^\ell a_k \ddd_k^\top & 0\end{matrix}\right| = 0$$
for any $\aaa=(a_1,\ldots,a_m)^\top \in \mathbb{R}^m$,
then $\ddd_1=\cdots=\ddd_\ell=\zerovec$.
\end{lemma}
\begin{proof}
Let $1\leq k\leq \ell$.
Take $a_k=1$ and $a_j=0$ for $1\leq j\leq\ell$, $j\ne k$ and
put $Y_k=\begin{pmatrix} A_1&A_2\\ A_3&A_4\end{pmatrix}$, where
$A_1$ is an $(n-1)\times(n-1)$ matrix.
Since $Y_k\in W$, there are a matrix $P\in \mathbb{C}^{(n-1)\times (n-1)}$
and distinct complex numbers $\alpha_1,\ldots, \alpha_{n-1}$
such that
$${\mathrm{Diag}}(P,1)^{-1}\begin{pmatrix} \multicolumn{2}{c}{(Y_k-a_mE_n)^{<n}} \\
\ddd_k^\top & 0\end{pmatrix}{\mathrm{Diag}}(P,1)=
\begin{pmatrix} {\mathrm{Diag}}(\alpha_1,\ldots,\alpha_{n-1})-a_mE_{n-1} & P^{-1}A_2\\
\ddd_k^\top P & 0 \end{pmatrix}$$
and every element of $P^{-1}A_2$ is nonzero.
Then we have $\ddd_k^\top P=\zerovec^\top$ by Corollary~\ref{cor:3slice} and
thus $\ddd_k=\zerovec$.
\qed
\end{proof}
The following lemma is essential for the proof of Theorem~\ref{thm:main}.
\begin{lemma} \label{lem:UvsC2}
$\mathfrak{U}\cap \mathfrak{C}\subset \mathfrak{M}$.
In particular, $\overline{\mathfrak{C}}\subset\overline{\mathfrak{M}}$ holds.
\end{lemma}
\begin{proof}
Let $Y\in \mathfrak{U}\cap \mathfrak{C}$ and fix it.
There exists $\aaa=(a_1,\ldots,a_\ell,a_m)^\top$
such that $|M(\aaa,Y)|<0$.
Then there is an open neighborhood $U$ of
$(a_1,\ldots,a_\ell)^\top$ and a mapping $\mu\colon U\to \mathbb{R}$ such that
$$|M(\begin{pmatrix} \yyy\\ \mu(\yyy)\end{pmatrix},Y)|=0$$
for any $\yyy \in U$.
Thus $|M(\xxx,Y)|=0$ determines an $(m-1)$-dimensional algebraic set.
Let $C$ be an $n\times \ell$ matrix.
Now suppose that $g(\aaa,Y,C)=0$ holds for any $\aaa\in \mathbb{R}^m$ with $|M(\aaa,Y)|=0$.
We show that $g(\xxx,Y,C)$ is zero as a polynomial
over elements of $\xxx$.
Suppose, to the contrary, that $g(\xxx,Y,C)$ is not zero.
The degree of $g(\xxx,Y,C)$ in the variable $x_m$
is less than $n$, the degree of $|M(\xxx,Y)|$ in $x_m$.
Furthermore, since $|M(\xxx,Y)|$ is irreducible, $|M(\xxx,Y)|$ and $g(\xxx,Y,C)$ are coprime.
Then there are
polynomials $f_1(\xxx)$, $f_2(\xxx)\in\mathbb{R}[x_1,\ldots, x_\ell, x_m]$ and a nonzero polynomial $h(\check{\xxx})\in\mathbb{R}[x_1,\ldots, x_\ell]$
such that
$$f_1(\xxx)|M(\xxx,Y)|
+f_2(\xxx)g(\xxx,Y,C)=h(\check{\xxx})$$
as polynomials in $x_m$, by the Euclidean algorithm.
However, we can take $\bbb\in U$ so that $h(\bbb)\ne 0$.
Then the above equation does not hold at $\xxx=\begin{pmatrix} \bbb\\ \mu(\bbb)\end{pmatrix}$.
Hence $g(\xxx,Y,C)$ must be the zero polynomial over elements of $\xxx$.
Let $\ccc_k^\top=(c_{1k},\ldots,c_{nk})$.
By comparing the coefficients of $x_m^{n-1}x_k$, we get $c_{nk}=0$ for $1\leq k\leq\ell$.
Therefore $C=O$ by Lemma~\ref{lem:detzero.cond}.
By Lemmas~\ref{lem:dimV} and \ref{lem:dimVvsM} we get $Y\in\mathfrak{M}$.
Therefore $\mathfrak{U}\cap \mathfrak{C}$ is a subset of $\mathfrak{M}$.
Then $\overline{\mathfrak{C}}=\overline{\mathfrak{U}\cap \mathfrak{C}}
\subset \overline{\mathfrak{M}}$.
\qed
\end{proof}
\begin{thm} \label{thm:SvsUvsC2}
$\overline{\mathfrak{S}}=\overline{\mathfrak{M}}=\overline{\mathfrak{C}}$ holds.
\end{thm}
\begin{proof}
We have $\overline{\mathfrak{M}}\subset \overline{\mathfrak{S}}$
by Proposition~\ref{prop:UsubsetS}.
By Propositions~\ref{prop:C1capU} and \ref{prop:C3isboundaryC2},
the set $\mathfrak{S}$ is a subset of $\overline{\mathfrak{C}}$
and then $\overline{\mathfrak{S}}\subset\overline{\mathfrak{C}}$.
Therefore $\overline{\mathfrak{S}}=\overline{\mathfrak{M}}=\overline{\mathfrak{C}}$ by Lemma~\ref{lem:UvsC2}.
\qed
\end{proof}
\noindent{\bf Proof of Theorem~\ref{thm:main}.}
For almost all $Y\in\mathfrak{A}$, $\mathrm{rank}\, X(Y)=p+1$ by Theorem~\ref{thm:ans}.
Since $\mathfrak{A}$ is an open set, if $\mathfrak{A}$ is not an empty set,
then ${\mathrm{typical\_rank_\RRR}}(m,n,p)=\{p,p+1\}$ (\cite[Theorem~3.4]{Sumi-etal:2010a}).
Suppose that $\mathfrak{A}$ is empty.
Then $\overline{\mathfrak{M}}=\mathbb{R}^{n\times n\times \ell}$
and the closure of the set consisting of all $n\times p\times m$
tensors equivalent to $X(Y)$ for some $Y\in \mathfrak{M}$
is $\mathbb{R}^{n\times p\times m}$.
Recall that any tensor $X(Y)$ for $Y\in\mathfrak{M}$ has rank $p$.
By Theorem~\ref{thm:Friedland}, $p$ is the maximal
typical rank of $\mathbb{R}^{n\times p\times m}$.
Therefore,
$${\mathrm{typical\_rank_\RRR}}(m,n,p)={\mathrm{typical\_rank_\RRR}}(n,p,m)=\{p\}$$
holds.
\qed
\end{document}
\begin{document}
\begin{titlepage}
\title{Do quantum nonlocal correlations imply information transfer? \\
A simple quantum optical test.}
\author{R. Srikanth\thanks{e-mail: [email protected]} \\
Indian Institute of Astrophysics, Koramangala, \\
Bangalore- 34, Karnataka, India.}
\date{}
\maketitle
\pacs{03.65.Bz,03.30.+p}
\begin{abstract}
In order to understand whether nonlocality implies information transfer, a
quantum optical experimental test, well within the scope of current technology,
is proposed. It is essentially a delayed choice experiment as applied
to entangled particles. The basic idea is: given two observers sharing
position-momentum entangled photons, one party chooses whether she measures
position or momentum of her photons after the particles leave the source. The
other party should infer her action by checking for the absence or
presence of characteristic interference patterns after subjecting his particles
to certain optical pre-processing.
An occurrence of signal transmission is attributed to the breakdown of
complementarity in incomplete measurements. Since the result implies that
the transferred information is classical, we discuss some propositions
for safeguarding causality.
\end{abstract}
\end{titlepage}
\section{Introduction}
Quantum information has opened up a new era in recent times both in
fundamental and applied physics. Its nonclassical resources of quantum
superposition and entanglement are at the heart of powerful future applications
in communication \cite{ben93} and computation \cite{preskill}.
And yet, the fundamentally very important question whether the correlated
measurements on entangled systems imply an "action-at-a-distance" effect
remains somehow unclear. Einstein, Podolsky and Rosen (EPR) thought that it
did, which was the basis of their claim of quantum mechanical incompleteness
\cite{epr}.
Quantum nonlocal correlations have been confirmed in
experiments since the mid-1980's performed both on spin entangled systems
\cite{bellexp}, based on the Bohm version \cite{bohm} of the EPR
thought-experiment, which are shown to violate Bell's inequality
\cite{bella}, and also on systems entangled in continuous variables
(Refs. \cite{ghosh,str95} and references therein),
where nonlocality is manifested in multi-particle interferences.
Bell's celebrated theorem \cite{bella} tells us
only that any realistic model of quantum mechanics should be nonlocal.
Informed opinions diverge between on the one hand the view that quantum
nonlocality implies
no information transfer, but only a change in the mutual knowledge of the two
nonlocal systems, to the acknowledgement on the other hand of a tension
between quantum theory and special relativity \cite{weihs}. The tension stems
from the possibility that the nonlocal correlations might imply a
superluminal transfer of information. A majority of physicists in the field,
it would seem, accept the scenario of a spacelike but causal enforcement of
correlation, as for example in quantum dense coding \cite{dense}.
In this view, entanglement cannot be used to transmit classical signals
nonlocally
because statistically the single particle outcome at any one particle is not
affected by measurements on its entangled parties
\cite{nosig}. This understanding is echoed in statements of
``a deep mystery" \cite{ghz}, and ``peaceful coexistence" \cite{shi89}
between quantum nonlocality and special relativity.
In the present article, we propose a simple quantum optical experiment
whose aim is to test in a philosophically unpredisposed
way whether information transfer occurs in nonlocal systems.
\section{A practical experiment}\label{action}
Figure \ref{rayent}
presents a `folded out' plan of an experiment in which two observers,
designated Alice and Bob, share
entangled photons from a nonlinear crystal pumped by
a suitable laser (e.g., an Ar laser at $\lambda = 351.1$ nm). The correlated photons
are produced by spontaneous parametric down-conversion (SPDC) \cite{str95}.
Photons not down-converted are filtered out (not shown in Figure \ref{rayent}),
leaving only entangled photon pairs to be shared between Alice and Bob.
Alice observes her photons through a lens of focal length $f$.
It is positioned at distance $2f$ from the
EPR source. By classical optics, coplanar rays that are parallel in front of
her lens converge to a single point on the focal plane.
A detection by Alice at some point on the focal plane on her side of the lens
implies that Bob's photon is left in a definite momentum state,
but with its point of origin in the source indeterminate as expected
on basis of the uncertainty principle. This has indeed been
confirmed by observing an interference pattern in Bob's photons detected in
coincidence with Alice's momentum measurement \cite{zei00}.
On the other hand, by positioning her detector at some point on the
image plane of her lens, Alice images the source, and hence measures the point
of origin of her photon, but here the momentum with which it left the source
remains indeterminate. Bob's photon is also left in a position eigenstate, with
the position of its detection being correlated with that of Alice's
coincidentally detected photon \cite{zei00}. Bob is equipped with
a Young's double-slit interferometer and a direction filter permitting only
horizontal momenta to reach Bob's interferometer. The filter consists of two
convex lenses, of radius $R$ and focal length $g$, sharing a focal plane. A
diaphragm is placed at this plane, perforated with a small hole, of diameter
$h$, at the point where the principal axis of the lenses
intercepts the diaphragm.
Provided $h/g \ll \lambda/s$, where $s$ is the interferometer slit separation,
the permitted deviation from horizontality of the
rays will not affect the fringe pattern observed on the interferometer screen.
Bob's interferometer is located at distance $d$.
According to the scheme of Figure \ref{rayent}, the general nonlocal
multi-mode vacuum state of the photons in the experiment is given by:
\begin{equation}
\label{spdc}
|\Psi\rangle = |{\rm vac}\rangle + \epsilon
(|s_{po}i_{po}\rangle + |s_{p-}i_{p+}\rangle +
|s_{qo}i_{qo}\rangle + |s_{q-}i_{q+}\rangle)
\end{equation}
where $|{\rm vac}\rangle$ is the vacuum ground state;
$s_{po}$ and $s_{p-}$ are the photon modes on the
$-p_{\rm side}$ and $-p_{\rm down}$ signal (Alice's) beams, emanating
from point $p$ in the source, as shown in Figure \ref{rayent},
and $i_{po}$ and $i_{p+}$ are the photon modes on the
$p_{\rm side}$ and $p_{\rm up}$ idler (Bob's) beams, emanating from
the same point in the source. Analogously, for the modes originating from
point $q$ on the source,
$s_{qo}$ and $s_{q-}$ are the modes on the
$-q_{\rm side}$ and $-q_{\rm down}$ signal beams,
and $i_{qo}$ and $i_{q+}$ are the modes on Bob's
$q_{\rm side}$ and $q_{\rm up}$ idler beams. The quantity
$\epsilon$ $(\ll 1)$ depends on the
pump laser and the nonlinearity of the downconverting crystal \cite{str95}.
Because of the narrowness of the hole,
a ray entering it diffracts to enter both slits $u$ and $v$ in the
interferometer. In the Schr\"odinger picture, let the
diffracting wavefunction, for some ray $|X\rangle$, be:
\begin{equation}
\label{tform}
|X\rangle \longrightarrow \alpha\left[\sin(\phi /2) |u\rangle +
\cos(\phi /2) |v\rangle\right] + \cdots
\end{equation}
where the $\cdots$ indicate other points on the double-slit diaphragm
where the photon could fall, but which do not concern us since they do not
pass through the double slit.
Here $\alpha$ $( < 1 )$ depends on $h$ and the cross-section of the
slits, and $\phi$ is (some function of) the angle of incidence of
$|X\rangle$ on the diaphragm, measured from the $+y$-axis with origin at
the hole. Note that for a ray perpendicular to the diaphragm, the amplitude
for entering both slits is equal. Given the finite width of the
downconverted beam,
all of which Bob's first lens is assumed to intercept,
$\phi$ will take values between some $\phi_0$ ($> 0$) and $180^{\circ} -
\phi_0$.
By virtue of the direction filter, only the rays $p_{\rm side}$ and
$q_{\rm side}$ can fall on Bob's detector. As a result, the
(positive mode) electric field $E^{(+)}_B$ at a point $x$ on Bob's
detector has contributions from two of the modes: $i_{po}$,
falling on the hole at some angle $\phi$, and $i_{qo}$, at angle $180^{\circ} - \phi$.
The phase for each ray depends on the distance along its path.
\begin{eqnarray}
\label{Eb}
E^{(+)}_B &=&
\alpha\hat{i}_{po}\left( \sin(\phi /2) e^{ik(d^{\prime} + \overline{ux})}
+ \cos(\phi /2) e^{ik(d^{\prime} + \overline{vx})}\right) \nonumber \\
&+& \alpha\hat{i}_{qo}\left( \cos(\phi /2) e^{ik(d^{\prime} + \overline{ux})}
+ \sin(\phi /2) e^{ik(d^{\prime} + \overline{vx})}\right),
\end{eqnarray}
where the ``hatted'' quantities represent the corresponding annihilation operators,
$d^{\prime} \equiv 2g + \overline{KN} + \overline{Nv}
= 2g + \overline{LM} + \overline{Mu}$, and the overline quantities
are distances between the named points. The four terms on the right-hand side of
Eq.~(\ref{Eb}) can be
understood as follows. The first term is the product of three amplitudes:
for beam $p_{\rm side}$'s movement (a) from point $p$ to the hole, (b)
from the hole to slit $u$, (c) from slit $u$ to point $x$ on Bob's screen.
Similarly with the other terms.
\subsection{Alice measures position}
Alice positions her detector on the image plane. A detection at point $y$
(Figure \ref{rayent}) means
that the rays $-p_{\rm side}$ and $-p_{\rm down}$ were chosen on the signal
beam. Alice knows that her photon originated at point $p$,
but its momentum remains indeterminate between directions $-p_{\rm side}$ and
$-p_{\rm down}$. Correspondingly, the idler photon is left in the correlated
ray states $p_{\rm side}$ and $p_{\rm up}$.
The (positive mode) electric field, $E^{(+)}_A$, of Alice's detector has a
contribution from both signal rays, $-p_{\rm side}$ and $-p_{\rm down}$,
that converge to $y$. Therefore, the field at Alice's detector is:
\begin{equation}
E^{(+)}_A = \hat{s}_{po}e^{ik(2f + \overline{ry})} +
\hat{s}_{p-}e^{ik(\overline{py})}.
\end{equation}
The correlation function for Alice finding her photon at $y$ and Bob his at $x$
is $\langle E^{(+)}_A E^{(+)}_B \rangle$, where
$\langle \cdots \rangle$ indicates
an averaging over the state vector $|\Psi\rangle$ of Eq. (\ref{spdc}).
\begin{equation}
\langle E^{(+)}_AE^{(+)}_B\rangle = \epsilon\alpha\left(\sin(\phi /2)
e^{ik(2f + \overline{ry} + d^{\prime} + \overline{ux})} +
\cos(\phi /2) e^{ik(2f + \overline{ry} + d^{\prime} + \overline{vx})}\right).
\end{equation}
The probability
$P_{AB}$ of coincident detections between these detectors is given by
$|\langle E^{(+)}_A E^{(+)}_B \rangle|^2$, which is:
\begin{equation}
\label{p_visib}
P_{AB} = \epsilon^2\alpha^2
\left[1 + \sin (\phi) \cos \left(k(\overline{ux} - \overline{vx})\right)\right].
\end{equation}
In Eq. (\ref{p_visib}), the $\sin (\phi)$ term will be
different for different localizations of Bob's photon.
Bob's observed intensity pattern will therefore be an averaged pattern over
the closed interval $\phi \in [\phi_0, 180^{\circ} - \phi_0]$.
Integrating over $\phi$ and assuming for
simplicity a flat profile for the converging beam
distributed in this range (which is obtained for a laser profile that goes as
$g^{-1}\sin^2\phi$), the ensemble intensity pattern that he finds on his screen
is:
\begin{equation}
\label{I_p_visib}
I_p = I_0 \epsilon^2\alpha^2 \left[1 + \cos (\phi_0) \cos \left(k(\overline{ux} -
\overline{vx})\right)\right],
\end{equation}
where $I_0$ is the idler beam intensity.
We note that the interference pattern is observed in the single
(as against coincidence) count intensity. The reason is that $P_{AB}$ in
Eq. (\ref{p_visib}) does not depend on any Alice variables, whose presence
would have, upon being marginalized, resulted in washing out the interference
pattern. This is made possible by Bob's filter.
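
The modulus-squared algebra behind the fringe term, with each phase written as $k$ times the optical path length, can be verified numerically. The sampled values of $k$, the path lengths and $\phi$ below are arbitrary test points, not parameters of the experiment.

```python
import numpy as np

# Check the identity used to obtain the coincidence probability:
# |sin(phi/2) e^{ik a} + cos(phi/2) e^{ik b}|^2 = 1 + sin(phi) cos(k (a - b)).
rng = np.random.default_rng(0)
for _ in range(100):
    k, a, b, phi = rng.uniform(0.1, 5.0, size=4)
    amp = np.sin(phi / 2) * np.exp(1j * k * a) + np.cos(phi / 2) * np.exp(1j * k * b)
    lhs = abs(amp) ** 2
    rhs = 1 + np.sin(phi) * np.cos(k * (a - b))
    assert abs(lhs - rhs) < 1e-12
```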
\subsection{Alice measures momentum}
Alice positions her detector on the focal plane. A detection at some point
means that the signal beam is left with only parallel rays converging to
this point. To obtain coincidence events, we need consider only signal rays
converging to point $m$ in Figure \ref{rayent}, since Bob's filter
permits only rays entangled to them. A detection here implies
that the rays $-p_{\rm side}$ and $-q_{\rm side}$ were chosen on the signal
beam. Correspondingly, the idler photon is left in the ray
states corresponding to rays $p_{\rm side}$ and $q_{\rm side}$. Thus, Bob's
photon has a definite (horizontal) momentum, but its point of origin is
indeterminate between $p$ and $q$.
Here Alice's detector's electric field is given by:
\begin{equation}
E^{(+)}_A = \hat{s}_{po}e^{ik(2f + \overline{rm})} +
\hat{s}_{qo}e^{ik(2f + \overline{tm})}.
\end{equation}
Noting that $\overline{rm} = \overline{tm}$, we compute
$P_{AB} = |\langle E^{(+)}_A E^{(+)}_B \rangle |^2$ using Eqs. (\ref{spdc})
and (\ref{Eb}):
\begin{equation}
P_{AB} = 2\epsilon^2\alpha^2\left(1 + \sin\phi \right)
\left[1 + \cos \left(k(\overline{ux} - \overline{vx})\right)\right].
\label{interf}
\end{equation}
Here, too, $P_{AB}$
depends only on the path difference, $\overline{ux} - \overline{vx}$, from the
slits to $x$. It does not depend on any of Alice's variables.
The intensity pattern observed by Bob in the single counts is therefore
given by the function:
\begin{equation}
\label{I_m_visib}
I_m = 2I_0\epsilon^2\alpha^2 (1 + \cos\phi_0)
\left[1 + \cos \left(k(\overline{ux} - \overline{vx})\right)\right].
\end{equation}
This has a visibility function of 1.0, in contrast to the $\cos\phi_0$ found in
the case of $I_p$ given in Eq. (\ref{I_p_visib}). Furthermore, $I_m$ is about
twice as intense as $I_p$. Therefore the intensity and visibility of Bob's
single count interference pattern
are affected by what Alice observes. She transmits one classical bit of
information nonlocally.
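
The signal can be summarized quantitatively: Alice's choice toggles the visibility of Bob's single-count fringes between $\cos\phi_0$ and $1.0$, and roughly doubles the intensity. The following sketch uses an illustrative value of $\phi_0$ and sweeps the fringe phase $k(\overline{ux} - \overline{vx})$ over one period.

```python
import numpy as np

# Fringe patterns in the two cases (the constant I_0 epsilon^2 alpha^2 dropped):
#   Alice measures position:  I_p ~ 1 + cos(phi_0) cos(delta)
#   Alice measures momentum:  I_m ~ (1 + cos(phi_0)) (1 + cos(delta))
phi0 = np.radians(30)                     # illustrative minimum incidence angle
delta = np.linspace(0, 2 * np.pi, 1001)   # fringe phase k(ux - vx), one period

I_p = 1 + np.cos(phi0) * np.cos(delta)
I_m = (1 + np.cos(phi0)) * (1 + np.cos(delta))

def visibility(I):
    """Standard fringe visibility (I_max - I_min) / (I_max + I_min)."""
    return (I.max() - I.min()) / (I.max() + I.min())

assert abs(visibility(I_p) - np.cos(phi0)) < 1e-9   # visibility cos(phi_0)
assert abs(visibility(I_m) - 1.0) < 1e-9            # full visibility
assert I_m.mean() > I_p.mean()                      # momentum case is brighter
```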
The experiment is essentially a delayed choice experiment \cite{wheeler},
as applied to entangled particles instead of a single particle. Alice forces
Bob's entangled particle to behave like a wave or a particle by measuring a
wave (i.e., momentum) or particle (i.e., path) property of her photon.
The interesting part is that she may delay her choice of which aspect
to manifest until after the photons have left the source.
A more dramatic demonstration of the signaling is obtained by minimizing
diffraction at the hole by increasing its size $h$. Position
measurement by Alice will result in an idler ray passing through
only one of the two slits.
No interference will result. On the other hand, her momentum measurement will
produce an interference pattern, because the optics ensure that Bob's photon
passes through both slits. Thus, Bob simply checks for the presence or
absence of an interference pattern to infer Alice's action. One complication here is
that a larger hole size would allow non-horizontal momenta to enter the
interferometer, thereby reducing the visibility. Therefore the hole cannot be
too large. The condition
$1 \ll h/\lambda \ll g/s$ ensures that these criteria are satisfied. But it
could require larger $g$ than may be feasible for entangled beams of finite
coherence length. Nevertheless, the origin of the signaling can be more
simply understood for this case, as done in the next subsection.
\section{Understanding the classical signal}
The no-signaling theorem implies that, statistically, the outcomes for Bob's
particle are independent of Alice's action \cite{nosig}. It is of interest
to know how the above experiment circumvents it. The answer is basically
that the proofs of no-signaling assume that both Alice and Bob make complete
measurements, whereas in the above experiment, their interferometric
measurement is incomplete, because a detection does not uniquely indicate an
eigenmode. Furthermore, Bob employs a filter, which restricts
his observation to the incomplete measurement projector $\hat{M}_B =
|i_{po}\rangle\langle i_{po}| + |i_{qo}\rangle\langle i_{qo}|$. Another
factor is a subtlety concerning the scope of the complementarity principle in
multi-particle interferences.
In a position measurement, Alice collapses $|\Psi\rangle$ with
one of the incomplete measurement projectors $\hat{P}_1 =
|s_{po}\rangle\langle s_{po}| + |s_{p-}\rangle\langle s_{p-}|$ and $\hat{P}_2 =
|s_{qo}\rangle\langle s_{qo}| + |s_{q-}\rangle\langle s_{q-}|$.
In a momentum measurement, her operators are $\hat{M}_1 =
|s_{po}\rangle\langle s_{po}| + |s_{qo}\rangle\langle s_{qo}|$ or $\hat{M}_2 =
|s_{p-}\rangle\langle s_{p-}| + |s_{q-}\rangle\langle s_{q-}|$. If Alice
measures position, the state of photons observed by Bob is
$\hat{M}_B\hat{P}_i|\Psi\rangle$. This is given by the statistical mixture:
\begin{equation}
\label{statmixp}
\rho_p = \frac{1}{2}\left(|s_{po}i_{po}\rangle\langle s_{po}i_{po}| +
|s_{qo}i_{qo}\rangle\langle s_{qo}i_{qo}|\right).
\end{equation}
As there are no cross-terms between the paths,
Bob observes no interference in this case.
On the other hand, if Alice
measures momentum, the state of photons observed by Bob is:
\begin{equation}
\label{state}
\hat{M}_B\hat{M}_1|\Psi\rangle =
(1/\sqrt{2})\left(|s_{po}i_{po}\rangle + |s_{qo}i_{qo}\rangle \right),
\end{equation}
since $\hat{M}_B\hat{M}_2|\Psi\rangle = 0$.
A direct application of complementarity would suggest that Bob's rays
$|i_{po}\rangle$ and $|i_{qo}\rangle$ in Eq. (\ref{state})
cannot produce an interference pattern
because their entanglement with the signal photon,
whose states $|s_{po}\rangle$ and $|s_{qo}\rangle$
are mutually orthogonal, makes them distinguishable. This is equivalent to
saying that interference is not possible because tracing
over the signal states \cite{can78} in the density matrix
$\hat{M}_B\hat{M}_1|\Psi\rangle\langle \Psi |\hat{M}_1^{\dag}\hat{M}_B^{\dag}$
results in $\rho_p$. {\em But this conclusion is not
supported by the experimentally attested two-particle coincidence
interferences} \cite{ghosh,str95,zei00}.
The more rigorous approach would be to analyze
interference in terms of the phase accumulated by Bob's particle's wavefunction
along each path, without reference to external states \cite{ste90}.
Though complementarity is an excellent rule of thumb for elucidating many
non-classical effects, it is essentially
a qualitative idea, and its use warrants some caution \cite{srik_complem}.
The observation of two-particle interference implies that the phase
contribution to the wavefunction on a path is accounted for by the distance
traversed by a ray along its path. No phase (or uniform phase) is picked up
at the detector. Bob's filter-interferometer system is crucial as it ensures
that only two horizontal modes are allowed (i.e., it implements $\hat{M}_B$).
This fixes the point of Alice's detection in coincidences.
Therefore, the relative phase
in Bob's wavefunction at the two slits remains constant,
permitting the formation of an observable interference pattern on his screen.
For the set-up in Figure \ref{rayent}, the relative phase vanishes.
Classical signaling is the direct consequence of the distinguishability of
$\hat{M}_B\hat{M}_1|\Psi\rangle$ in Eq. (\ref{state}) from $\rho_p$ in Eq.
(\ref{statmixp}). We note that the single-mode probabilities for the states
$|i_{po}\rangle$ and $|i_{qo}\rangle$ are the same in both cases. Thus,
the no-signaling theorem, within the scope of its implicit assumption, is
not violated. Also, for the same reason, no violation of probability
conservation occurs.
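
The projector bookkeeping above can be replayed in a four-dimensional toy model of the two-photon sector of Eq.~(\ref{spdc}). The basis ordering and the use of plain vectors for the two-photon modes are simplifications made for this sketch, not structures defined in the text.

```python
import numpy as np

# Toy check of the projector algebra in the two-photon sector of Eq. (spdc).
# Basis ordering (an explicit choice made here):
#   0: |s_po i_po>,  1: |s_p- i_p+>,  2: |s_qo i_qo>,  3: |s_q- i_q+>
psi = np.ones(4) / 2.0  # equal-amplitude pair state (overall epsilon dropped)

def proj(indices, dim=4):
    """Projector onto the span of the given basis modes."""
    P = np.zeros((dim, dim))
    for i in indices:
        P[i, i] = 1.0
    return P

M_B = proj([0, 2])  # Bob's filter: only i_po, i_qo reach his interferometer
P_1 = proj([0, 1])  # Alice's position outcome at the image of point p
P_2 = proj([2, 3])  # Alice's position outcome at the image of point q
M_1 = proj([0, 2])  # Alice's momentum outcome at focal point m
M_2 = proj([1, 3])  # Alice's other momentum outcome

# Momentum case: Bob sees the coherent superposition
# (|s_po i_po> + |s_qo i_qo>)/sqrt(2); the M_2 branch never coincides
# with Bob's filtered modes.
chi = M_B @ M_1 @ psi
chi = chi / np.linalg.norm(chi)
rho_coherent = np.outer(chi, chi)
assert np.allclose(M_B @ M_2 @ psi, 0.0)

# Position case: averaging over Alice's two outcomes gives the mixture rho_p.
branches = [M_B @ P @ psi for P in (P_1, P_2)]
branches = [b / np.linalg.norm(b) for b in branches]
rho_mix = 0.5 * sum(np.outer(b, b) for b in branches)

# Single-mode probabilities agree (consistent with the restricted no-signaling
# statement), but the off-diagonal interference terms differ.
assert np.allclose(np.diag(rho_coherent), np.diag(rho_mix))
assert not np.allclose(rho_coherent, rho_mix)
```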
\section{Discussion}\label{discuss}
The above experiment aims to prove a much stronger condition
about nonlocal correlations than does Bell's theorem, in two ways. First:
unlike Bell's theorem, it does
not assume the reality of underlying variables; just the tested principles
of quantum mechanics and quantum electrodynamics suffice.
Second: the nonlocal influence is shown
to transmit classical information, which means that the correlations are
not uncontrollable. Of course, this leads to the problematic situation that
the effective speed $v_{\rm eff}$ of the transmission of this nonlocal
classical signal can be made arbitrarily large. In one sense this need not
be surprising, for quantum mechanics is a non-relativistic theory.
The no-signaling theorem is based on quantum mechanical unitarity and
assumptions of quantum measurement rather than
relativistic signal locality. If the
instant of Alice's choice is $t_a$ and that of Bob's measurement is
$t_b$, where $t_b \ge t_a$, then
$v_{\rm eff} = (d + 4f)/(t_b - t_a)$,
where the upper
limit occurs when Alice delays her choice until just before
her photon reaches her. Since $v_{\rm eff}$ can be made arbitrarily large by
increasing $4f$ and $d$ and/or by decreasing the time difference, nonlocality
appears not to prohibit a superluminal classical signal, and it is of
interest to examine the factors inhibiting such a possibility.
The situation is not improved by considering a no-collapse scenario like
the relative-state (or many-worlds) interpretation \cite{eve50}, because each
branching universe would have to contend with the classical signal.
Requiring collapse to be subluminal (even simply non-instantaneous)
would imply significant changes to quantum theory, and
furthermore permit non-conservation of entangled quantities. By
demonstrating a classical transfer of information, the above experiments
would corroborate the objective nature of state vector reduction.
It appears that to
restore causality we would need the light to somehow decohere into
disentangled momentum pointer states \cite{zeh70}
before reaching Bob's double-slit so that he will always find a Young's
double-slit interference pattern irrespective of Alice's action. But it is not
clear how such a decoherence can be brought about. Perhaps somehow
spacetime itself
would act as a measuring environment to the system, thereby decohering it, in
order to safeguard its own causal structure? Such an explanation is related to
an incompleteness in quantum
mechanics as a formal axiomatic theory \cite{srik}, and would probably require
new axioms in the theory. But it would also reveal an unexpected
connection between quantum information and decoherence. In any case, the
technical feasibility of the above-described thought experiment permits a
facile test for nonlocal communication of the type claimed in this paper.
\section{Conclusion}
The question raised in the title of this article is answered in the
affirmative. It is unclear how this is to be reconciled with signal locality.
Perhaps this points to the need for new physics to understand state vector
``collapse'', more so in nonlocal cases.
\acknowledgements
I am thankful to Dr. R. Tumulka, Dr. R. Plaga, Dr. M. Steiner,
Dr. J. Finkelstein and Dr. C. S. Unnikrishnan for their constructive criticism.
I thank Ms. Regina Jorgenson for her valuable suggestions.
\begin{thebibliography}{}
\bibitem{ben93} C. H. Bennett, G. Brassard, C. Cr\'epeau, R. Jozsa, A.
Peres, and W. K. Wootters, Phys. Rev. Lett. {\bf 70}, 1895 (1993).
\bibitem{preskill} J. Preskill, http://arXiv.org/abs/quant-ph/9705031.
\bibitem{epr} A. Einstein, B. Podolsky, and N. Rosen,
Phys. Rev. {\bf 47}, 777 (1935).
\bibitem{bellexp} A. Aspect, P. Grangier, and G. Roger, Phys. Rev. Lett.
{\bf 49}, 91 (1982); W. Tittel, J. Brendel, H. Zbinden, and N. Gisin,
Phys. Rev. Lett. {\bf 81}, 3563 (1998).
\bibitem{bohm} D. Bohm and Y. Aharonov, Phys. Rev. {\bf 108},
1070 (1957).
\bibitem{bella} J. S. Bell, Physics {\bf 1}, 195 (1964).
\bibitem{ghosh} R. Ghosh, and L. Mandel, Phys. Rev. Lett. {\bf 59},
1903 (1987).
\bibitem{str95} D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and
Y. H. Shih, Phys. Rev. Lett. {\bf 74}, 3600 (1995).
\bibitem{weihs} G. Weihs, T. Jennewein,
C. Simon, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. {\bf 81},
5039 (1998).
\bibitem{dense} C. H. Bennett and S. J. Wiesner Phys. Rev. Lett. {\bf 69},
2881 (1992).
\bibitem{nosig} P. H. Eberhard, Nuovo Cimento {\bf 46B}, 392 (1978);
P. J. Bussey, Phys. Lett. {\bf 90}A, 9 (1982);
A. J. M. Garrett, Found. Phys. {\bf 20}, No. 4, 381 (1990).
\bibitem{ghz} D. M. Greenberger, M. A. Horne, and A. Zeilinger, Physics
Today {\bf 46} (8), 22 (1993).
\bibitem{shi89} A. Shimony, in Philosophical Consequences of Quantum
Theory, edited by J. T. Cushing, and E. McMullin (Univ. of Notre Dame Press,
Notre Dame, Indiana, 1989).
\bibitem{wheeler} J. A. Wheeler,
in The Mathematical Foundations of Quantum Mechanics, ed. A. R. Marlow,
(Academic Press, New York 1978).
\bibitem{zei00} A. Zeilinger, Rev. Mod. Phys. {\bf 71}, S288 (1999).
\bibitem{can78} C. D. Cantrell and M. O. Scully, Phys. Rep. {\bf 43}, 499
(1978).
\bibitem{ste90} A. Stern, Y. Aharonov, and Y. Imry, Phys. Rev. A {\bf 41},
3436 (1990); C. S. Unnikrishnan, Phys. Rev. A {\bf 62}, 015601 (2000).
\bibitem{srik_complem} R. Srikanth, under preparation.
\bibitem{eve50} H. Everett III, Rev. Mod. Phys. {\bf 29}, 454 (1957).
\bibitem{zeh70} H. D. Zeh, Found. Phys. {\bf 1}, 69 (1970).
\bibitem{srik} R. Srikanth, under preparation.
\end{thebibliography}
\begin{figure}
\centerline{\psfig{file=rayent.ps}}
\caption{Light
produced via spontaneous parametric downconversion (SPDC) in a pumped
nonlinear crystal is shared by Alice and Bob. Alice measures the
position or momentum of her photons by detecting them at the image- or
focal-plane of her
lens. The interference pattern produced by their twins in the single counts is
observed by Bob using a double-slit interferometer. Access to Bob's
interferometer is restricted to horizontal rays, by means of two lenses
facing each other with a single-hole perforated diaphragm placed at their
common focal plane, so that only horizontal rays fall on the interferometer.
The intensity and visibility of his
interference pattern can be controlled by Alice's choice of observation. }
\label{rayent}
\end{figure}
\end{document}
\begin{document}
\title{Zero Lyapunov exponents of the Hodge bundle}
\author{Giovanni Forni}
\address{Giovanni Forni: Department of Mathematics, University of Maryland, College Park, MD 20742-4015, USA}
\email{[email protected]}
\author{Carlos Matheus}
\address{Carlos Matheus: CNRS, LAGA, Institut Galil\'ee, Universit\'e Paris 13, 99, Avenue Jean-Baptiste
Cl\'ement, 93430, Villetaneuse, France}
\email{[email protected]}
\urladdr{http://www.impa.br/$\sim$cmateus}
\author{Anton Zorich}
\address{Anton Zorich:
Institut de math\'ematiques de Jussieu
and Institut universitaire de France,
Universit\'e Paris 7, Paris, France}
\email{[email protected]}
\date{April 18, 2014}
\begin{abstract}
By the results of G.~Forni and of R.~Trevi\~no, the Lyapunov spectrum
of the Hodge bundle over the Teichm\"uller geodesic flow on the
strata of Abelian and of quadratic differentials does not contain
zeroes even though for certain invariant submanifolds zero exponents
are present in the Lyapunov spectrum. In all previously known
examples, the zero exponents correspond to those $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles of the real Hodge bundle for which the monodromy of the
Gauss--Manin connection acts by isometries of the Hodge metric. We
present an example of an arithmetic Teichm\"uller curve, for which
the real Hodge bundle does not contain any $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles, and nevertheless its spectrum of Lyapunov
exponents contains zeroes. We describe the mechanism of this
phenomenon; it covers the previously known situation as a particular
case. Conjecturally, this is the only way zero exponents can appear
in the Lyapunov spectrum of the Hodge bundle for any $\operatorname{PSL}(2,{\mathbb R})$-invariant
probability measure.
\end{abstract}
\maketitle
\vspace*{-0.5cm}
\setcounter{tocdepth}{2}
\tableofcontents
\section{Introduction}
A complex structure on the Riemann surface $X$ of genus $g$
determines a complex $g$-dimensional space of holomorphic 1-forms
$\Omega(X)$ on $X$, and the Hodge decomposition
$$
H^1(X;\field{C}{}) = H^{1,0}(X)\oplus H^{0,1}(X) \simeq
\Omega(X)\oplus\bar \Omega(X)\ .
$$
The pseudo-Hermitian intersection form
\begin{equation}
\label{eq:intersection:form}
\langle\omega_1,\omega_2\rangle:=\frac{i}{2}
\int_{X} \omega_1\wedge \overline{\omega_2}\qquad\qquad\qquad
\end{equation}
is positive-definite on $H^{1,0}(X)$ and negative-definite on
$H^{0,1}(X)$.
For any linear subspace $V\subset H^1(X,\field{C}{})$ define its holomorphic
and anti-holomorphic parts respectively as
$$
V^{1,0}:=V\cap H^{1,0}(X) \qquad\text{and}\qquad
V^{0,1}:=V\cap H^{0,1}(X)\,.
$$
A subspace $V$ of the complex cohomology which decomposes as a direct
sum of its holomorphic and anti-holomorphic parts, that is,
$V=V^{1,0}\oplus V^{0,1}$, will be called a \emph{split} subspace
(the case when one of the summands is null is not excluded: a
subspace $V$ which coincides with its holomorphic or
anti-holomorphic part is also considered as \textit{split}).
Clearly, the restriction to any split subspace $V$ of the
pseudo-Hermitian form of formula~\eqref{eq:intersection:form} is
non-degenerate. Note that the converse is, in general, false.
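
The failure of the converse can be seen in the smallest possible example. The following sketch is a two-dimensional toy model of the cohomology with a signature $(1,1)$ pseudo-Hermitian form; the specific vectors chosen are illustrative.

```python
import numpy as np

# Toy model: H^1 = C^2 with H^{1,0} = span(e1), H^{0,1} = span(e2), and the
# pseudo-Hermitian form <x, y> = x_1 conj(y_1) - x_2 conj(y_2).
J = np.diag([1.0, -1.0])

def form(x, y):
    """Pseudo-Hermitian intersection form of signature (1, 1)."""
    return np.conj(y) @ (J @ x)

# v lies in neither H^{1,0} nor H^{0,1}, so the line V = span(v) satisfies
# V^{1,0} = V^{0,1} = {0}; hence V is not split.
v = np.array([1.0, 0.5])
# Yet the restriction of the form to V is non-degenerate:
assert abs(form(v, v)) > 0.5          # <v, v> = 1 - 0.25 = 0.75

# By contrast, on the isotropic (also non-split) line spanned by w = e1 + e2
# the restricted form degenerates:
w = np.array([1.0, 1.0])
assert abs(form(w, w)) < 1e-12
```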
The complex Hodge bundle $H^1_\field{C}$ is the bundle over the moduli space
${\mathcal M}_g$ of Riemann surfaces with fiber the complex cohomology
$H^1(X,\field{C})$ at any Riemann surface $X$. The complex Hodge bundle can
be pulled back to the moduli space of Abelian or quadratic
differentials under the natural projections ${\mathcal H}_g\to{\mathcal M}_g$ or
${\mathcal Q}_g\to{\mathcal M}_g$ respectively. A subbundle $V$ of the complex Hodge
bundle is called a \emph{split} subbundle if all of its fibers are
split subspaces or, in other terms, if it decomposes as a direct sum
of its holomorphic and anti-holomorphic parts.
Let ${\mathcal L}_1$ be an orbifold in some stratum of unit area Abelian
differentials (respectively, in some stratum of unit area
meromorphic quadratic differentials with at most simple poles).
Throughout this paper we say that such an orbifold is
$\operatorname{SL}(2,{\mathbb R})$-\textit{invariant} (respectively, $\operatorname{PSL}(2,{\mathbb R})$-\textit{invariant}) if it is
the support of a Borel probability measure, invariant with respect to
the natural action of the group $\operatorname{SL}(2,{\mathbb R})$ (respectively, of the group
$\operatorname{PSL}(2,{\mathbb R})$) and ergodic with respect to the Teichm\"uller geodesic flow.
The action of $\operatorname{SL}(2,{\mathbb R})$ (respectively, of $\operatorname{PSL}(2,{\mathbb R})$) on ${\mathcal L}_1$ lifts to a
cocycle on the complex Hodge bundle $H^1_\field{C}$ over ${\mathcal L}_1$ by parallel
transport of cohomology classes with respect to the Gauss--Manin
connection. This cocycle is called the complex Kontsevich--Zorich
cocycle.
It follows from this definition that the pseudo-Hermitian
intersection form is $\operatorname{SL}(2,{\mathbb R})$-equivariant (respectively,
$\operatorname{PSL}(2,{\mathbb R})$-equivariant) under the complex Kontsevich--Zorich cocycle. The
complex Kontsevich--Zorich cocycle has a well-defined restriction to
the real Hodge bundle $H^1_\field{R}$ (the real part of the complex Hodge
bundle), called simply the Kontsevich--Zorich cocycle.
By the results of H.~Masur~\cite{Masur} and of W.~Veech~\cite{Veech}, the
Teichm\"uller geodesic flow is ergodic on all connected components of
all strata in the moduli spaces of Abelian differentials
and in the moduli spaces of meromorphic quadratic
differentials with at most simple poles with respect to the unique $\operatorname{SL}(2,{\mathbb R})$-invariant
(respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant), absolutely continuous, finite measure. By the
further results of G.~Forni~\cite{Forni:positive} and of
R.~Trevi\~no~\cite{Trevino}, it is known that the action of the
Teichm\"uller geodesic flow on the real or complex Hodge bundle over
such $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant) orbifolds
has only non-zero Lyapunov exponents.
In this paper we continue our investigation of the occurrence of zero
Lyapunov exponents for special $\operatorname{PSL}(2,{\mathbb R})$-invariant orbifolds (see
\cite{Forni:Matheus:Zorich_1}, \cite{Forni:Matheus:Zorich_2}).
Previous examples of $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant)
measures with zero exponents in the Lyapunov spectrum were found in
the class of cyclic covers over $\field{C}P^1$ branched exactly at four points
(see~\cite{Bouw:Moeller}, \cite{Eskin:Kontsevich:Zorich:cyclic},
\cite{ForniSurvey}, \cite{Forni:Matheus:Zorich_1} and
\cite{Forni:Matheus:Zorich_2}). In all of those examples the neutral
Oseledets subbundle (that is, the subbundle of the zero Lyapunov
exponent in the Oseledets decomposition) is a smooth $\operatorname{SL}(2,{\mathbb R})$-invariant
(respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant) split subbundle.
Our main contribution in this paper is the analysis of a cocycle acting on the
complex Hodge bundle over a certain $\operatorname{PSL}(2,{\mathbb R})$-invariant orbifold (which projects onto
an arithmetic Teichm\"uller curve) in the moduli space of holomorphic quadratic
differentials in genus four. This particular example was inspired by the work of C.~McMullen
on the Hodge theory of general cyclic covers~\cite{McMullen}.
It is the first explicit example of a cocycle with the Lyapunov spectrum containing zero
exponents such that the neutral Oseledets subbundle, which is by definition flow-invariant,
is nevertheless \textit{not} $\operatorname{PSL}(2,{\mathbb R})$-invariant. In other words, the neutral subbundle
in this example \textit{is not} a pullback of a flat subbundle of the Hodge
bundle over the corresponding Teichm\"uller curve.
In fact, the zero exponents in this new example, as well as those in
all previously known ones, can be explained by a simple common
mechanism. Conjecturally such a mechanism is completely general and
accounts for all zero exponents with respect to any $\operatorname{SL}(2,{\mathbb R})$-invariant
(respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant) probability measure on the moduli
spaces of Abelian (respectively, quadratic) differentials. It can be
outlined as follows. We conjecture that a semisimplicity property
holds for the complex Hodge bundle in the spirit of Deligne
Semisimplicity Theorem. Namely, we conjecture that the restriction of
the complex Hodge bundle to any $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively,
$\operatorname{PSL}(2,{\mathbb R})$-invariant) orbifold as above splits into a direct sum of
irreducible $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively, irreducible
$\operatorname{PSL}(2,{\mathbb R})$-invariant), continuous, split subbundles.\footnote{This
conjecture has been recently proved by S.~Filip \cite{Fil13b}. Semisimplicity of the Kontsevich--Zorich cocycle on the {\it real} Hodge bundle had been proved earlier by Avila, Eskin and M\"oller (see Theorem 1.5 in \cite{Avila:Eskin:Moeller}) after a weaker semisimplicity result, establishing semisimplicity of the algebraic hulls of the cocycle, was proved by Eskin and Mirzakhani (see \cite{Eskin:Mirzakhani}, Appendix A, also quoted as Theorem 2.1 in \cite{Avila:Eskin:Moeller}).}
The \textit{continuous} vector subbundles in the known examples are,
actually, smooth (even analytic, or holomorphic). However, in the
context of this paper it is important to distinguish subbundles which
are only \textit{measurable} and those which are \textit{continuous}.
To stress this dichotomy in the general case we shall always speak
about~\textit{continuous} subbundles, even when we know that they are
smooth (analytic, holomorphic). In particular, an $\operatorname{SL}(2,{\mathbb R})$-invariant
(respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant) subbundle of the Hodge bundle is
called \emph{irreducible} if it has no non-trivial \emph{continuous}
$\operatorname{SL}(2,{\mathbb R})$-invariant (respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant) subbundle.
In the special case of subbundles defined over suborbifolds which
project onto Teichm\"uller curves \textit{all} $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively,
$\operatorname{PSL}(2,{\mathbb R})$-invariant) subbundles are continuous, in fact smooth, since
by definition the action of the group on the suborbifold is transitive.
We describe this splitting in our example. In fact, it was
observed by M.~M\"oller (see Theorem 2.1 in~\cite{Moeller}) that
whenever the projection of the invariant orbifold ${\mathcal L}_1$ to the
moduli space ${\mathcal M}_g$ is a Teichm\"uller curve (as in our example) the
Deligne Semisimplicity Theorem~\cite{Deligne:87} implies the
existence and uniqueness of the above-mentioned decomposition. The
action of the group $\operatorname{SL}(2,{\mathbb R})$ (respectively, $\operatorname{PSL}(2,{\mathbb R})$) on each irreducible,
invariant split subbundle of the complex Hodge bundle is a cocycle
with values in the group $U(p,q)$ of pseudo-unitary matrices, that
is, matrices preserving a quadratic form of signature $(p,q)$. It is
a general result, very likely known to experts, that any
$U(p,q)$-cocycle has at least $\vert p-q\vert$ zero Lyapunov
exponents (we include a proof of this simple fundamental result in
Appendix~\ref{a:Lyapunov:spectrum:of:pseudo-unitary:cocycles}).
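
The $\vert p-q\vert$ zero exponents can also be observed numerically. The following Python sketch (the generator distribution, scale and seed are arbitrary choices made for this illustration, not the construction of the appendix) multiplies random elements of $U(2,1)$ and estimates the finite-time Lyapunov exponents; the spectrum comes out symmetric, of the form $(\lambda, 0, -\lambda)$.

```python
import numpy as np

# Illustration of the claim: a cocycle with values in U(p, q) has at least
# |p - q| zero Lyapunov exponents. Here (p, q) = (2, 1).
rng = np.random.default_rng(1)
J = np.diag([1.0, 1.0, -1.0])          # Hermitian form of signature (2, 1)

def haar_unitary(n):
    """Approximately Haar-random n x n unitary matrix (QR with phase fix)."""
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, R = np.linalg.qr(Z)
    return Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

def random_u21():
    """Random element of U(2, 1): compact factor times a hyperbolic boost."""
    K = np.zeros((3, 3), dtype=complex)
    K[:2, :2] = haar_unitary(2)        # U(2) on the positive-definite subspace
    K[2, 2] = np.exp(1j * rng.uniform(0, 2 * np.pi))
    t = rng.normal(scale=0.2)
    B = np.eye(3, dtype=complex)
    B[1, 1] = B[2, 2] = np.cosh(t)     # boost mixing one positive direction
    B[1, 2] = B[2, 1] = np.sinh(t)     # with the negative direction
    return K @ B                       # both K and B preserve J

N = 100
P = np.eye(3, dtype=complex)
logscale = 0.0
for _ in range(N):
    P = random_u21() @ P
    s = np.linalg.norm(P)
    P = P / s                          # renormalize to keep entries bounded
    logscale += np.log(s)

exps = (np.log(np.linalg.svd(P, compute_uv=False)) + logscale) / N
# Since A^{-1} = J A^dagger J for A in U(2, 1), the singular values of the
# product are invariant under s -> 1/s, forcing one exact zero exponent.
assert exps[0] > 0.0
assert abs(exps[1]) < 1e-4             # the forced zero exponent
assert abs(exps[0] + exps[2]) < 1e-4   # symmetric pair (lambda, -lambda)
```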
In the very special case of cyclic covers branched at four points,
considered in~\cite{Bouw:Moeller}, \cite{Eskin:Kontsevich:Zorich},
\cite{Forni:Matheus:Zorich_1}, \cite{Forni:Matheus:Zorich_2}, only
pseudo-unitary irreducible cocycles of type $(0,2)$, $(2,0)$, $(0,1)$,
$(1,0)$, and $(1,1)$ arise. In the first four cases the Lyapunov
spectrum of the corresponding invariant irreducible component is
null, while in the fifth case there is a symmetric pair of non-zero
exponents. The examples which we present in this paper are
suborbifolds of the locus of cyclic covers branched at six points.
In this case we have a decomposition into two (complex conjugate)
continuous components of type $(3,1)$ and $(1,3)$. It follows that the zero
exponent has multiplicity at least $2$ in each component
(which is of complex dimension $4$). We prove that, in fact, the
multiplicity of the zero exponent is exactly $2$. Our main example
is a suborbifold which projects onto a certain arithmetic Teichm\"uller
curve. In this case we prove that the above-mentioned decomposition
is in fact \textit{irreducible}. The irreducibility of the components
implies that the complex two-dimensional neutral Oseledets subbundles
of both components cannot be $\operatorname{PSL}(2,{\mathbb R})$-invariant.
For general suborbifolds of our locus of cyclic covers branched at six points,
it follows from results of~\cite{Forni:Matheus:Zorich_2} (see in particular
Theorem 8 in that paper) that whenever the complex two-dimensional
neutral Oseledets subbundles of both components are $\operatorname{PSL}(2,{\mathbb R})$-invariant, then they
are also continuous, in fact smooth. It follows then from our irreducibility
result that the neutral Oseledets subbundles are not $\operatorname{PSL}(2,{\mathbb R})$-invariant
on the full locus of cyclic covers branched at six points, which contains our
main example. Moreover, recent work of Avila, Matheus and Yoccoz
\cite{Avila:Matheus:Yoccoz} suggests that the neutral Oseledets subbundles
are also not continuous there.
As in all known examples, our cocycle is non-degenerate, in the sense
that the multiplicity of the zero exponent is exactly equal to $\vert
p-q\vert$. Conjecturally, all cocycles arising from the action of
$\operatorname{SL}(2,{\mathbb R})$ (respectively, $\operatorname{PSL}(2,{\mathbb R})$) on the moduli space of Abelian
(respectively, quadratic) differentials are non-degenerate in the
above sense and are simple, in the sense that all non-zero exponents
are simple in every irreducible $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively,
$\operatorname{PSL}(2,{\mathbb R})$-invariant) continuous component of the complex Hodge bundle.
(The simplicity of the Lyapunov spectrum for the canonical invariant
measure on the connected components of the strata of Abelian
differentials is proved in~\cite{Avila:Viana}; an analogous statement
for the strata of \textit{quadratic} differentials for the moment
remains conjectural.)
Note that currently one cannot naively apply the Deligne
Semisimplicity Theorem to construct an $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively,
a $\operatorname{PSL}(2,{\mathbb R})$-invariant) splitting of the Hodge bundle over a general
invariant suborbifold ${\mathcal L}$. Even though by recent results of
A.~Eskin and M.~Mirzakhani~\cite{Eskin:Mirzakhani} each such
invariant suborbifold is an affine subspace in the ambient stratum, it
is not known whether it is a quasiprojective variety or not\footnote{It has been proved recently by S.~Filip \cite{Fil13a} that all invariant suborbifolds are quasiprojective varieties.}.
Note also that the conjectural decomposition of the complex Hodge bundle into
irreducible $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively, $\operatorname{PSL}(2,{\mathbb R})$-invariant) components might be
finer than the decomposition coming from the Deligne Semisimplicity
Theorem. The summands in the first (hypothetical) decomposition are
irreducible only with respect to the action by parallel transport
\textit{along the $\operatorname{GL}_+(2,{\mathbb R})$-orbits} in ${\mathcal L}$, or equivalently,
\textit{along the leaves of the foliation by \mbox{Teichm\"uller}
discs} in the projectivization $\operatorname{P}\hspace*{-2pt}\cL$, while the decomposition of the
Hodge bundle provided by the Deligne Semisimplicity Theorem is
invariant with respect to the action by parallel transport \textit{of
the full fundamental group} of ${\mathcal L}$. For example, the Hodge bundle
$H^1_{\field{C}{}}$ over the moduli space ${\mathcal H}_g$ of Abelian differentials
splits into a direct sum of $(1,1)$-tautological subbundle and its
$(g-1,g-1)$-orthogonal complement. This splitting is $\operatorname{GL}_+(2,{\mathbb R})$-invariant,
but it is by no means invariant under the parallel transport in the
directions transversal to the orbits of $\operatorname{GL}_+(2,{\mathbb R})$. The only case when the
two splittings certainly coincide corresponds to the Teichm\"uller
curves, when the entire orbifold ${\mathcal L}$ is represented by a single
orbit of $\operatorname{GL}_+(2,{\mathbb R})$.
We conclude the introduction by formulating an outline of the
principal conjectures.
\begin{NNConjecture}
Let ${\mathcal L}_1$ be a suborbifold in the moduli space of unit area Abelian
differentials or in the moduli space of unit area meromorphic
quadratic differentials with at most simple poles. Suppose that
${\mathcal L}_1$ is endowed with a Borel probability measure, invariant with
respect to the natural action of the group $\operatorname{SL}(2,{\mathbb R})$ (respectively, of
the group $\operatorname{PSL}(2,{\mathbb R})$) and ergodic with respect to the Teichm\"uller
geodesic flow. The Lyapunov spectrum of the complex Hodge bundle
$H^1_{\field{C}{}}$ over the Teichm\"uller geodesic flow on ${\mathcal L}_1$ has the
following properties.
(I.) Let $r$ be the total number of zero entries in the Lyapunov
spectrum. By passing, if necessary, to an appropriate finite
(possibly ramified) cover $\field{H}at{\mathcal L}_1$ of ${\mathcal L}_1$ one can decompose
the vector bundle induced from the Hodge bundle over $\field{H}at{\mathcal L}_1$ into
a direct sum of irreducible $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively,
irreducible $\operatorname{PSL}(2,{\mathbb R})$-invariant) continuous split subbundles.\footnote{This part of the conjecture in a more precise form has been recently established by S.~Filip \cite{Fil13b}.}
Denote by $(p_i,q_i)$ the signature of the restriction of the pseudo-Hermitian
intersection form to the corresponding split subbundle. Then $\sum_i
|p_i-q_i|=r$.
(II.) By passing, if necessary, to an appropriate finite (possibly
ramified) cover $\field{H}at{\mathcal L}_1$ of ${\mathcal L}_1$ one can decompose
the vector bundle induced from the Hodge bundle over $\field{H}at{\mathcal L}_1$ into a
direct sum of irreducible $\operatorname{SL}(2,{\mathbb R})$-invariant (respectively, irreducible $\operatorname{PSL}(2,{\mathbb R})$-invariant)
continuous split subbundles, such that the
nonzero part of the Lyapunov spectrum of each summand is simple.
\end{NNConjecture}
\subsection{Statement of the results}
\label{ss:A:concrete:example}
Let us consider a flat surface $S$ glued from six unit squares as in
Figure~\ref{fig:oneline:6}. It is easy to see that this surface has
genus zero, and that the flat metric has five conical singularities
with the cone angle $\pi$ and one conical singularity with the cone
angle $3\pi$. Thus, the quadratic differential representing the flat
surface $S$ belongs to the stratum ${\mathcal Q}(1,-1^5)$ in the moduli space
of meromorphic quadratic differentials.
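As a quick consistency check (ours, not part of the original argument), the discrete Gauss--Bonnet formula recovers the genus and the stratum from the cone angles just listed; here we use the standard dictionary that a cone angle $(m+2)\pi$ corresponds to a singularity of order $m$ of the quadratic differential:

```python
import math

# Cone angles of the flat metric on S: five angles pi, one angle 3*pi.
cone_angles = [math.pi] * 5 + [3 * math.pi]

# Discrete Gauss-Bonnet: sum of (2*pi - theta_i) over the conical
# points equals 2*pi * chi(S), with chi(S) = 2 - 2*genus.
chi = sum(2 * math.pi - theta for theta in cone_angles) / (2 * math.pi)
genus = round((2 - chi) / 2)

# A cone angle (m + 2)*pi corresponds to a singularity of order m of
# the quadratic differential (m = -1 for a simple pole).
orders = sorted(round(theta / math.pi) - 2 for theta in cone_angles)
```

Both outputs match the claim that $S$ has genus zero and represents a point of the stratum ${\mathcal Q}(1,-1^5)$.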
\begin{figure}
\caption{The flat surface $S$ glued from six unit squares.}
\label{fig:oneline:6}
\end{figure}
The equation
\begin{equation}
\label{eq:cyclic:cover:equation}
w^3=(z-z_1)\cdot\dots\cdot(z-z_6)
\end{equation}
defines a Riemann surface $\field{H}at X$ of genus four, and a triple cover
$p:\field{H}at X\to \field{C}P$, $p(w,z)=z$. The cover $p$ is ramified at the
points $z_1,\dots,z_6$ of $\field{C}P$ and at no other points. By placing the
ramification points $z_1,\dots,z_6$ at the single zero and at the
five poles of the flat surface $S$ as in Figure~\ref{fig:oneline:6}
we induce on $\field{H}at X$ a flat structure, thus getting a square-tiled
surface $\field{H}at S$. It is immediate to check that $\field{H}at S$ belongs to
the stratum ${\mathcal Q}(7,1^5)$ of holomorphic quadratic differentials in
genus four.
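Both claims can be double-checked by an elementary computation, sketched below (our illustration) using two standard facts: the Riemann--Hurwitz formula for the triple cover $p$, which is totally ramified over each $z_i$, and the rule that a singularity of order $m$ of $q$ lifts to a singularity of order $em+2(e-1)$ of $p^\ast q$ at a ramification point of index $e$:

```python
d, n = 3, 6                       # w^3 = (z - z_1) ... (z - z_6)

# Riemann-Hurwitz for the cover p: hat{X} -> CP^1, totally ramified
# over each of the n branch points:
#   chi(hat{X}) = d * chi(CP^1) - n * (d - 1).
chi_cover = d * 2 - n * (d - 1)
genus_cover = (2 - chi_cover) // 2

# Orders of q on S at the six branch points: stratum Q(1, -1^5).
orders_downstairs = [1] + [-1] * 5

# A singularity of order m lifts through a ramification point of
# index e = d to a singularity of p^* q of order e*m + 2*(e - 1).
orders_upstairs = sorted(d * m + 2 * (d - 1) for m in orders_downstairs)
```

The computation returns genus $4$ and singularity orders $\{7,1,1,1,1,1\}$, i.e., the stratum ${\mathcal Q}(7,1^5)$.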
Let us consider the corresponding arithmetic Teichm\"uller curve
$\field{H}at{\mathcal T}\subset{\mathcal M}_4$ and the Hodge bundle over it. The following
theorem, announced in \cite{Forni:Matheus:Zorich_2}, Appendix B,
summarizes the statement of Proposition~\ref{prop:Lyapunov:spectrum}
and of Corollary~\ref{cor:no:R:subspaces}.
\begin{Theorem}
\label{th:R}
The Lyapunov spectrum of the real Hodge bundle $H^1_{\field{R}{}}$ with
respect to the geodesic flow on the arithmetic Teichm\"uller curve
$\field{H}at{\mathcal T}$ is
$$
\left\{\frac{4}{9},\frac{4}{9},0,0,0,0,-\frac{4}{9},-\frac{4}{9}\right\}\,.
$$
The real Hodge bundle $H^1_{\field{R}{}}$ over $\field{H}at{\mathcal T}$ does not have any
nontrivial $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles.
\end{Theorem}
It follows from the above theorem that the neutral Oseledets subbundle $E_0$ over
$\field{H}at {\mathcal T}$ is not $\operatorname{PSL}(2,{\mathbb R})$-invariant. It seems likely that it is also not
continuous.
Note that the cyclic group $\field{Z}/3\field{Z}$ acts naturally on any Riemann
surface $\field{H}at X$ as in~\eqref{eq:cyclic:cover:equation} by deck
transformations of the triple cover $p:\field{H}at X\to \field{C}P$. In coordinates
this action is defined as
\begin{equation}
\label{eq:T}
T: (z,w)\mapsto (z,\zeta w)\,,
\end{equation}
where $\zeta=e^{2\pi i/3}$. Thus, the complex Hodge bundle $H^1_\field{C}{}$
splits over the locus of cyclic covers~\eqref{eq:cyclic:cover:equation} into a
direct sum of two flat subbundles (that is, vector subbundles invariant under
the parallel transport with respect to the Gauss--Manin connection):
\begin{equation}
\label{eq:Ezeta:oplus:Ezeta2}
H^1_\field{C}{}={\mathcal E}(\zeta)\oplus {\mathcal E}(\zeta^2)\,,
\end{equation}
where ${\mathcal E}(\zeta)$, ${\mathcal E}(\zeta^2)$ are the eigenspaces of the induced
action of the generator $T$ of the group of deck transformations.
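The mechanism behind the splitting~\eqref{eq:Ezeta:oplus:Ezeta2} is the usual averaging trick for an operator of finite order: the projectors $P_k=\frac{1}{d}\sum_{j=0}^{d-1}\zeta^{-kj}(T^\ast)^j$ cut out the eigenspaces. The toy computation below (our illustration; the cyclic shift on $\field{C}^3$ is a stand-in for the deck transformation, not the actual action on cohomology) verifies the two defining properties of these projectors:

```python
import cmath

d = 3
zeta = cmath.exp(2j * cmath.pi / d)

def T(v):
    """Cyclic shift on C^3: an operator of order 3, standing in for T^*."""
    return [v[2], v[0], v[1]]

def T_pow(v, j):
    for _ in range(j):
        v = T(v)
    return v

def projector(v, k):
    """Averaging projector onto the zeta^k-eigenspace:
    P_k v = (1/d) * sum_j zeta^(-k*j) T^j v."""
    out = [0j] * 3
    for j in range(d):
        w = T_pow(v, j)
        for i in range(3):
            out[i] += zeta ** (-k * j) * w[i] / d
    return out

v = [1 + 0j, 2 + 0j, 3 + 0j]
parts = [projector(v, k) for k in range(d)]

# The eigenspace components recombine to v, and T scales the k-th
# component by zeta^k.
recombined = [sum(p[i] for p in parts) for i in range(3)]
```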
The above Theorem~\ref{th:R} has an equivalent formulation in terms of the complex
Hodge bundle, which summarizes the statements of Proposition~\ref{prop:Lyapunov:spectrum}
and of Proposition~\ref{prop:only:E} below.
\begin{Theorem}
\label{th:C}
The complex Hodge bundle $H^1_{\field{C}{}}$ over the arithmetic
Teichm\"uller curve $\field{H}at{\mathcal T}$ does not have any non-trivial
$\operatorname{PSL}(2,{\mathbb R})$-invariant complex subbundles other than ${\mathcal E}(\zeta)$ and
${\mathcal E}(\zeta^2)$. The Lyapunov spectrum of each of the subbundles ${\mathcal E}(\zeta)$,
${\mathcal E}(\zeta^2)$ with respect to the geodesic flow on $\field{H}at{\mathcal T}$ is
$$
\left\{\frac{4}{9},0,0,-\frac{4}{9}\right\}\,.
$$
\end{Theorem}
Actually, there is nothing special about the arithmetic Teichm\"uller
curve ${\mathcal T}\subset{\mathcal Q}(1,-1^5)$ considered above. By taking any
$\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold ${\mathcal L}\subseteq{\mathcal Q}(1,-1^5)$ we can
construct a cyclic cover~\eqref{eq:cyclic:cover:equation} for each
flat surface $S$ in ${\mathcal L}$ placing the six ramification points at the
zero and at the five poles of the quadratic differential. We get the
induced quadratic differential on the resulting cyclic cover. In this
way we get a $\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold $\field{H}at{\mathcal L}\subseteq{\mathcal Q}(7,1^5)$.
By construction, it has the same properties as ${\mathcal L}$, namely, it is
endowed with a Borel probability measure, invariant with respect to the
natural action of the group $\operatorname{PSL}(2,{\mathbb R})$ and ergodic with respect to the Teichm\"uller
geodesic flow. (See the end of Section~\ref{s:Hodge:bundle} for a generalization
of this construction.) Let $\field{H}at{\mathcal Z}$ denote the suborbifold of all
cyclic covers branched at six points, namely, the suborbifold
obtained by the above construction in the case ${\mathcal L}={\mathcal Q}(1,-1^5)$
(see also \cite{Forni:Matheus:Zorich_2}, Appendix B).
\begin{Theorem}
\label{th:L}
The complex Hodge bundle $H^1_{\field{C}{}}$ over the invariant orbifold
$\field{H}at{\mathcal L}$ decomposes into the direct sum of two $\operatorname{PSL}(2,{\mathbb R})$-invariant,
continuous split subbundles $H^1_\field{C}{}={\mathcal E}(\zeta)\oplus {\mathcal E}(\zeta^2)$ of
signatures $(1,3)$ and $(3,1)$ respectively.
The Lyapunov spectrum of each of the subbundles ${\mathcal E}(\zeta)$,
${\mathcal E}(\zeta^2)$ with respect to the Teichm\"uller geodesic flow on
$\field{H}at{\mathcal L}$ is
$$
\left\{\frac{4}{9},0,0,-\frac{4}{9}\right\}\,.
$$
\end{Theorem}
The only difference between the more general Theorem~\ref{th:L} and the
previous one, treating the particular case $\field{H}at{\mathcal L}=\field{H}at{\mathcal T}$, is that now we
do not claim irreducibility of the subbundles ${\mathcal E}(\zeta)$,
${\mathcal E}(\zeta^2)$ for all invariant orbifolds $\field{H}at{\mathcal L}$ as above.
Theorem~\ref{th:L} follows from Theorem~\ref{th:spec:of:unitary:cocycle}
below and from Proposition~\ref{prop:5:2}.
Note that the stratum ${\mathcal Q}(1,-1^5)$ is naturally isomorphic to the
stratum ${\mathcal H}(2)$. Thus, the classification of
C.~McMullen~\cite{McMullen:genus2} describes all $\operatorname{PSL}(2,{\mathbb R})$-invariant
suborbifolds in ${\mathcal Q}(1,-1^5)$: they are represented by an explicit
infinite series of suborbifolds corresponding to arithmetic
Teichm\"uller curves, by an explicit infinite series of suborbifolds
corresponding to non-arithmetic Teichm\"uller curves and by the
entire stratum. By the way, note that the subbundles ${\mathcal E}(\zeta)$,
${\mathcal E}(\zeta^2)$ of the Hodge bundle over the invariant suborbifold
$\field{H}at{\mathcal Z}\subset{\mathcal Q}(7,1^5)$ (induced from the entire stratum
${\mathcal Q}(1,-1^5)$) are irreducible: indeed, this follows from
Theorem~\ref{th:C} as $\field{H}at{\mathcal T}\subset \field{H}at{\mathcal Z}$.
Theorem~\ref{th:L} confirms the Conjecture stated in the introduction
for all resulting $\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifolds (up to the
irreducibility of the decomposition in the case of suborbifolds
$\field{H}at{\mathcal L}\neq \field{H}at{\mathcal T}, \field{H}at{\mathcal Z}$).
As proved in~\cite{Forni:Matheus:Zorich_2}, Appendix B, Theorem 8, from Theorem~\ref{th:R}
and Theorem~\ref{th:L} above and from Theorem 3 of~\cite{Forni:Matheus:Zorich_2}
we can derive the following result.
\begin{Corollary}
If the neutral Oseledets subbundle $E_0$ of the Kontsevich--Zorich cocycle over the invariant suborbifold $\field{H}at{\mathcal L}$ is $\operatorname{PSL}(2,{\mathbb R})$-invariant, then it is continuous, in fact smooth. In particular, since
$\field{H}at{\mathcal T} \subset \field{H}at{\mathcal Z}$, the subbundle $E_0$ is not almost everywhere $\operatorname{PSL}(2,{\mathbb R})$-invariant over
the suborbifold $\field{H}at{\mathcal Z}$ endowed with the canonical measure.
\end{Corollary}
A.~Avila, C.~Matheus and J.-C.~Yoccoz~\cite{Avila:Matheus:Yoccoz}
have recently proved that indeed $E_0$ is also not continuous over the
suborbifold $\field{H}at {\mathcal Z}$.
Note that a Riemann surface $X$, or a pair given by a Riemann surface
and an Abelian or quadratic differential, might have a nontrivial
automorphism group. This automorphism group is always finite. The
fiber of the Hodge bundle $H^1_{\field{C}}$ over the corresponding point $x$
of the moduli space is defined as the quotient of $H^1(X,\field{C})$ by the
corresponding finite group $G_x$ of induced linear automorphisms. In other words,
the bundle $H^1_{\field{C}}$ is an \textit{orbifold vector bundle}, in the sense
that it is a \textit{fibered} space $H$ over a base $M$ such that the fiber $H_x$
over any $x\in M$ is the quotient $H_x =V_x/G_x$ of a vector space $V_x$
over a finite subgroup $G_x$ of the group $\textrm{Aut} (V_x)$ of linear
automorphisms of $V_x$.
Since the Hodge bundle $H^1_{\field{C}}$ is an orbifold vector bundle, the complex
Kontsevich-Zorich cocycle is an example of an \textit{orbifold linear cocycle} on
an orbifold vector bundle $H$ over a flow $T_t$ on $M$, i.e., a flow $F_t$ on $H$
such that the restrictions $F_t : H_x \to H_{T_tx}$ are well-defined and are projections
of linear maps $\field{H}at F_t : V_x \to V_{T_t x}$. Note that such linear maps are only
defined up to precomposition with the action of elements of $G_x$ on $V_x$ and
postcomposition with the action of elements of $G_{T_tx}$ on $V_{T_tx}$.
In this paper we always work within the locus of cyclic covers.
For any generic cyclic cover $x$ as in~\eqref{eq:cyclic:cover:equation} the
automorphism group is isomorphic to the cyclic group $\field{Z}/3\field{Z}$.
The induced action on the subspaces ${\mathcal E}_x(\zeta)$ and ${\mathcal E}_x(\zeta^2)$
is particularly simple: the induced group $G_x$ of linear automorphisms
acts by multiplication by the complex numbers $\zeta^k$ for $k=0,1,2$
(we recall that $\zeta= e^{2 \pi i/3}$).
This implies that any complex vector subspace of ${\mathcal E}_x(\zeta)$ or ${\mathcal E}_x(\zeta^2)$
is invariant. The elements of the monodromy representations of the bundles
${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$, hence in particular the restrictions of the
Kontsevich--Zorich cocycle to those bundles, are thus given by linear maps
defined only up to composition with the maps $\zeta^k\operatorname{Id}$, that is, up
to multiplication by $\zeta^k$, for $k=0,1,2$.
\subsection{Lyapunov spectrum of pseudo-unitary cocycles}
Consider an invertible transformation (or a flow) ergodic with
respect to a finite measure. Let $U$ be a $\log$-integrable cocycle over
this transformation (flow) with values in the group $\operatorname{U}(p,q)$ of
pseudo-unitary matrices. The Oseledets Theorem (i.e. the
multiplicative ergodic theorem) can be applied to complex cocycles.
Denote by $\lambda_1,\dots,\lambda_{p+q}$ the corresponding Lyapunov
spectrum.
\begin{Theorem}
\label{th:spec:of:unitary:cocycle}
The Lyapunov spectrum of a pseudo-unitary cocycle $U$ is symmetric
with respect to the sign change and has at least $|p-q|$ zero
exponents.
In other words, the Lyapunov spectrum of an integrable cocycle with
the values in the group $\operatorname{U}(p,q)$ of pseudo-unitary matrices has the
form
$$
\lambda_1\ge\dots\ge\lambda_r\ge 0 = \dots = 0
\ge -\lambda_r\ge \dots \ge -\lambda_1\,,
$$
where $r=\min(p,q)$. In particular, if $r=0$, the spectrum is null.
\end{Theorem}
This theorem might be known to experts, and, in any case, the proof
is completely elementary. For the sake of completeness, it is given
in Appendix~\ref{a:Lyapunov:spectrum:of:pseudo-unitary:cocycles}.
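A minimal numerical illustration of the theorem (our example, with an arbitrary parameter $t$, not data from the paper): the symmetric hyperbolic matrix with diagonal entries $\cosh t$ and off-diagonal entries $\sinh t$ preserves the Hermitian form of signature $(1,1)$ given by $J=\operatorname{diag}(1,-1)$, so it lies in $\operatorname{U}(1,1)$, and its eigenvalues $e^{\pm t}$ yield the sign-symmetric exponent pair $\{t,-t\}$:

```python
import math

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.7                                   # arbitrary hyperbolic parameter
M = [[math.cosh(t), math.sinh(t)],
     [math.sinh(t), math.cosh(t)]]
J = [[1, 0], [0, -1]]                     # form of signature (1, 1)

# Pseudo-unitarity (real entries, so conjugation is transposition):
# M^T J M = J.
MtJM = mat_mul(transpose(M), mat_mul(J, M))
preserves_form = all(abs(MtJM[i][j] - J[i][j]) < 1e-12
                     for i in range(2) for j in range(2))

# Eigenvalues of M are e^t and e^{-t}, so the exponent pair {t, -t}
# is symmetric under sign change, as the theorem predicts.
exponents = sorted(math.log(ev) for ev in
                   (math.cosh(t) + math.sinh(t), math.cosh(t) - math.sinh(t)))
```

Here $r=\min(1,1)=1$, so the theorem allows (and this example realizes) one symmetric pair of nonzero exponents and no forced zero exponents.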
\subsection{Outline of the proofs and plan of the paper}
We begin by recalling in Section~\ref{ss:Splitting:of:the:Hodge:bundle}
some basic properties of cyclic covers. In Section~\ref{ss:Construction:of:PSL:invariant:orbifolds}
we construct plenty of more general $\operatorname{PSL}(2,{\mathbb R})$-invariant orbifolds in loci of
cyclic covers.
By applying results of C.~McMullen~\cite{McMullen}, we then show in
Section~\ref{ss:Splitting:of:the:Hodge:bundle} that
in the particular case of the arithmetic Teichm\"uller disc $\field{H}at{\mathcal T}$ defined in
Section~\ref{ss:A:concrete:example}, the splitting
$H^1_\field{C}{}={\mathcal E}(\zeta)\oplus{\mathcal E}(\zeta^2)$ of the complex Hodge bundle
over $\field{H}at{\mathcal T}$ decomposes the corresponding cocycle over the
Teichm\"uller geodesic flow on $\field{H}at{\mathcal T}$ into the direct sum of
complex conjugate $\operatorname{U}(3,1)$ and
$\operatorname{U}(1,3)$-cocycles. By
Theorem~\ref{th:spec:of:unitary:cocycle} the Lyapunov spectrum of
each of the two cocycles has the form
$$
\{\lambda, 0, 0, -\lambda\}\,,
$$
with nonnegative $\lambda$. Since the two cocycles are complex
conjugate, their Lyapunov spectra coincide. Hence, the Lyapunov
spectrum of real and complex Hodge bundles over $\field{H}at{\mathcal T}$ has the form
$$
\{\lambda, \lambda, 0, 0, 0, 0, -\lambda, -\lambda\}\,.
$$
To compute $\lambda$ we construct in Section~\ref{ss:The:PSLZ:orbit}
the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of the square-tiled surface $\field{H}at S$. This orbit is
very small: it contains only two other square-tiled surfaces. Knowing
the cylinder decompositions of the resulting square-tiled surfaces in
the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\field{H}at S$, we apply a formula
from~\cite{Eskin:Kontsevich:Zorich} for the sum of the positive
Lyapunov exponents of the Hodge bundle over the corresponding
arithmetic Teichm\"uller disc $\field{H}at{\mathcal T}$ to get the explicit value
$\lambda=4/9$. This computation is performed in
Section~\ref{ss:Spectrum:of:the:Lyapunov:exponents}.
(In Section~\ref{s:non:varying} we present an alternative, more
general, way to compute Lyapunov exponents in similar situations.)
In Section~\ref{s:Irreducibility:of:the:Hodge:bundle} we check the
irreducibility of the subbundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$
essentially by hand.
Note that the monodromy representation of the subbundles ${\mathcal E}(\zeta)$
and ${\mathcal E}(\zeta^2)$ factors through the action of the Veech group of
$\field{H}at{\mathcal T}$. We encode the action of the group $\operatorname{PSL}(2,{\mathbb Z})$ on the orbit of
$\field{H}at S$ by a graph $\Gamma$ associating oriented edges to the basic
transformations
$$
h=
\begin{pmatrix}
1&1\\
0&1
\end{pmatrix}
\qquad
\text{ and }
\qquad
r=\left(
\begin{array}{rr}
0&-1\\
1&0
\end{array}\right)\,.
$$
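As a sanity check (ours, not from the paper), these two matrices satisfy the standard relations $r^2=-\operatorname{Id}$ and $(rh)^3=-\operatorname{Id}$ in $\operatorname{SL}(2,{\mathbb Z})$, so both relators become trivial in $\operatorname{PSL}(2,{\mathbb Z})$:

```python
def mul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

h = [[1, 1], [0, 1]]
r = [[0, -1], [1, 0]]
minus_id = [[-1, 0], [0, -1]]

r2 = mul(r, r)                 # r^2 = -Id in SL(2, Z)
rh = mul(r, h)
rh3 = mul(rh, mul(rh, rh))     # (r h)^3 = -Id in SL(2, Z)
```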
The resulting graph $\Gamma$ is represented in
Figure~\ref{fig:PSL2Z:orbit}. We choose a basis of homology on every
square-tiled surface in the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\field{H}at S$ and associate
to every oriented edge of the graph the corresponding monodromy
matrix. Any closed path on the graph defines the free homotopy type
of the corresponding closed path on the Teichm\"uller curve
$\field{H}at{\mathcal T}$. The monodromy along such a path on $\field{H}at{\mathcal T}$ can be
calculated as the product of the matrices associated to the edges of the
graph in the order following the path on the graph. In
Proposition~\ref{prop:Ezeta:does:not:have:subbundles} we construct
two explicit closed paths and show that the induced monodromy
transformations cannot have common invariant subspaces. This implies
the irreducibility claims in Theorems~\ref{th:R} and~\ref{th:C}.
The evaluation of the monodromy representation is outlined in
Appendix~\ref{a:matrix:calculation}.
Following a suggestion of M.~M\"oller, we sketch in
Lemma~\ref{lm:Ezeta:strong:irreducibility} in Section~\ref{ss:Zariski:closure} the computation of the Zariski closure of the monodromy
group of ${\mathcal E}(\zeta)$. The details of this calculation are explained in Appendix~\ref{a:Zariski:closure}. Then, using Lemma~\ref{lm:Ezeta:strong:irreducibility}, we prove in Proposition~\ref{prop:strongly:irreducible} in Section~\ref{ss:strong_irreducibility} the
\textit{strong irreducibility}\footnote{I.e., the
irreducibility of lifts of ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ to any
finite (possibly ramified) cover of $\field{H}at{\mathcal T}$.} of ${\mathcal E}(\zeta)$ and
of ${\mathcal E}(\zeta^2)$.
In Section~\ref{s:non:varying} we prove the non-varying phenomenon
for certain $\operatorname{PSL}(2,{\mathbb R})$-invariant loci of cyclic covers. Namely, we show
that the sum of the Lyapunov exponents is the same for any
$\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold in such loci.
Finally, in Appendix~\ref{a:Lyapunov:spectrum:of:pseudo-unitary:cocycles} we
discuss some basic facts concerning linear algebra of pseudo-unitary
cocycles and prove Theorem~\ref{th:spec:of:unitary:cocycle}.
\section{Hodge bundle over invariant suborbifolds in loci of cyclic
covers}
\label{s:Hodge:bundle}
\subsection{Splitting of the Hodge bundle over loci of cyclic covers}
\label{ss:Splitting:of:the:Hodge:bundle}
Consider a collection of $n$ pairwise-distinct points $z_i\in\field{C}{}$.
The equation
\begin{equation}
\label{eq:cyclic:cover:general}
w^d=(z-z_1)\cdot\dots\cdot(z-z_n)
\end{equation}
defines a Riemann surface $\field{H}at X$, and a cyclic cover $p:\field{H}at X\to
\field{C}P$, $p(w,z)=z$. Consider the
canonical generator $T$ of the group $\field{Z}/d\field{Z}$ of deck
transformations; let
$$
T^\ast: H^1(X;\field{C}{})\to H^1(X;\field{C}{})
$$
be the induced action in cohomology. Since
$(T^\ast)^d=\operatorname{Id}$, the eigenvalues of $T^\ast$ are contained in
the set $\{\zeta,\dots,\zeta^{d-1}\}$, where
$\zeta=\exp\left(\dfrac{2\pi i}{d}\right)$. We exclude the root
$\zeta^0=1$ since any cohomology class invariant under deck
transformations would be a pullback of a cohomology class on $\field{C}P$,
and $H^1(\field{C}P)=0$.
For $k=1,\dots,d-1$ denote
\begin{equation}
\label{eq:E:zeta:k}
{\mathcal E}(\zeta^k):=\operatorname{Ker}(T^\ast-\zeta^k\operatorname{Id})
\subseteq H^1(X;\field{C}{})\ .
\end{equation}
Denote
$$
{\mathcal E}^{1,0}(\zeta^k):={\mathcal E}(\zeta^k)\cap H^{1,0}\quad
\text{ and }\quad
{\mathcal E}^{0,1}(\zeta^k):={\mathcal E}(\zeta^k)\cap H^{0,1}\ .
$$
Since a generator $T$ of the group of deck transformations respects
the complex structure, it induces a linear map
$$
T^\ast: H^{1,0}(X)\to H^{1,0}(X)\,.
$$
This map preserves the pseudo-Hermitian form~\eqref{eq:intersection:form} on
$H^{1,0}(X)$. This implies that $T^\ast$ is a unitary operator on
$H^{1,0}(X)$, and hence $H^{1,0}(X)$ admits a splitting into a direct
sum of eigenspaces of $T^\ast$,
\begin{equation}
\label{eq:H10:direct:sum:for:k}
H^{1,0}(X)=\bigoplus_{k=1}^{d-1}{\mathcal E}^{1,0}(\zeta^k)\ .
\end{equation}
The latter observation also implies that for any $k=1,\dots,d-1$ one
has ${\mathcal E}(\zeta^k)={\mathcal E}^{1,0}(\zeta^k)\oplus{\mathcal E}^{0,1}(\zeta^k)$. The
vector bundle ${\mathcal E}^{1,0}(\zeta^k)$ over the locus of cyclic
covers~\eqref{eq:cyclic:cover:general} is a holomorphic subbundle of
$H^1_{\field{C}{}}$.
The decomposition
$$
H^1(X;\field{C}{})=\bigoplus_{k=1}^{d-1} {\mathcal E}(\zeta^k)\,,
$$
is preserved by the Gauss--Manin connection, which implies that the
complex Hodge bundle $H^1_{\field{C}{}}$ over the locus of cyclic
covers~\eqref{eq:cyclic:cover:general} splits into a direct sum of
the subbundles ${\mathcal E}(\zeta^k)$ invariant with respect to the parallel
transport of the Gauss--Manin connection.
\begin{NNTheorem}[C.~McMullen]
The signature of the intersection form on
${\mathcal E}(\zeta^{-k})$ is given by
\begin{equation}
\label{eq:signature}
(p,q)=\big([n(k/d)-1],[n(1-k/d)-1]\big)\,.
\end{equation}
In particular,
\begin{equation}
\label{eq:dim}
\dim{\mathcal E}(\zeta^k)=
\begin{cases}
n-2&\text{ if }d\text{ divides }kn\,,\\
n-1&\text{otherwise}
\end{cases}\,.
\end{equation}
\end{NNTheorem}
By applying these general results to the particular cyclic
cover~\eqref{eq:cyclic:cover:equation}, we see that
$H^1_\field{C}{}={\mathcal E}(\zeta)\oplus{\mathcal E}(\zeta^2)$, where the signature of the
intersection form on ${\mathcal E}(\zeta)$ is $(3,1)$ and on
${\mathcal E}(\zeta^2)=\overline{{\mathcal E}(\zeta)}$ is $(1,3)$.
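This application of the theorem can be reproduced mechanically; the snippet below (our illustration) evaluates formulas~\eqref{eq:signature} and~\eqref{eq:dim} for $d=3$, $n=6$, staying in the case $d\mid kn$, where the bracketed quantities in~\eqref{eq:signature} are integers:

```python
def signature(n, d, k):
    """Signature (p, q) of the intersection form on E(zeta^{-k}),
    evaluating eq:signature in the case d | k*n, where the bracketed
    quantities are integers."""
    assert (k * n) % d == 0
    return (n * k // d - 1, n * (d - k) // d - 1)

def dimension(n, d, k):
    """Complex dimension of E(zeta^k), following eq:dim."""
    return n - 2 if (k * n) % d == 0 else n - 1

n, d = 6, 3
sig_E_zeta2 = signature(n, d, 1)  # form on E(zeta^{-1}) = E(zeta^2)
sig_E_zeta = signature(n, d, 2)   # form on E(zeta^{-2}) = E(zeta)
dims = [dimension(n, d, k) for k in (1, 2)]
```

The result reproduces the signatures $(3,1)$ and $(1,3)$ and the complex dimension $4$ of each summand.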
\medskip
\noindent\textbf{Bibliographical remarks.}
Cyclic covers over $\field{C}P$ branched at four points were used by I.~Bouw
and M.~M\"oller in~\cite{Bouw:Moeller} to construct new series of
nonarithmetic \mbox{Teichm\"uller} curves. Similar cyclic covers were
independently used by G.~Forni~\cite{ForniSurvey} and then by
G.~Forni and C.~Matheus~\cite{Forni:Matheus} to construct arithmetic
Teichm\"uller curves with completely degenerate spectrum of the
Lyapunov exponents of the Hodge bundle with respect to the geodesic
flow. The monodromy of the Hodge bundle is explicitly described in
these examples by C.~Matheus and J.-C.~Yoccoz~\cite{Matheus:Yoccoz}.
More general arithmetic \mbox{Teichm\"uller} curves corresponding to
cyclic covers over $\field{C}P$ branched at four points are studied
in~\cite{Forni:Matheus:Zorich_1}. The Lyapunov spectrum of the Hodge
bundle over such arithmetic Teichm\"uller curves is explicitly
computed in~\cite{Eskin:Kontsevich:Zorich:cyclic}. More generally,
Abelian covers are studied in this context by
A.~Wright~\cite{Wright}. Our consideration of cyclic covers as
in~\eqref{eq:cyclic:cover:general} is inspired by the paper of
C.~McMullen~\cite{McMullen}, where he studies the monodromy
representation of the braid group in the Hodge bundle over the locus
of cyclic covers.
For details on the geometry of cyclic covers see the original papers of
I.~Bouw~\cite{Bouw:thesis} and~\cite{Bouw} and of
J.~K.~Koo~\cite{Koo}, as well as the recent paper of
A.~Elkin~\cite{Elkin} citing the first three references as a source.
\subsection{Construction of $\operatorname{PSL}(2,{\mathbb R})$-invariant orbifolds
in loci of cyclic covers}
\label{ss:Construction:of:PSL:invariant:orbifolds}
Suppose for simplicity that $d$ divides $n$, where $n$ and $d$ are
the integer parameters in equation~\eqref{eq:cyclic:cover:general}.
The reader can easily extend the considerations below to the
remaining case.
Let ${\mathcal L}$ be a $\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold in some stratum
${\mathcal Q}(m_1,\dots,m_k,-1^l)$ in the moduli space of quadratic
differentials with at most simple poles on $\field{C}P$. For any such
invariant orbifold ${\mathcal L}$ and for any pair of integers $(d,n)$ we
construct a new $\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold $\field{H}at{\mathcal L}$ such that the
Riemann surfaces underlying the flat surfaces from $\field{H}at{\mathcal L}$ belong
to the locus of cyclic covers~\eqref{eq:cyclic:cover:general}. The
construction is performed as follows.
Let $S=(\field{C}P,q)\in{\mathcal L}$. In the simplest case, when the total number
$k+l$ of zeroes and poles of the meromorphic quadratic differential
$q$ on $\field{C}P$ coincides with the number $n$ of ramification points,
one can place the points $z_1,\dots,z_n$ exactly at the zeroes and
poles of the corresponding quadratic differential $q$. (Here we
assume that $d$ divides $n$, so that the cyclic cover as
in~\eqref{eq:cyclic:cover:general} is not ramified at infinity.)
Consider the induced quadratic differential $p^\ast q$ on the cyclic
cover $\field{H}at X$. By applying this operation to every flat surface
$S\in{\mathcal L}$, we get the promised orbifold $\field{H}at{\mathcal L}$.
Since by assumption ${\mathcal L}$ is $\operatorname{PSL}(2,{\mathbb R})$-invariant,
the induced orbifold
$\field{H}at{\mathcal L}$ is also $\operatorname{PSL}(2,{\mathbb R})$-invariant, and in the simplest case, when
$k+l=n$, we get $\dim\field{H}at{\mathcal L}=\dim{\mathcal L}$. In particular, starting with a
Teichm\"uller curve, we get a Teichm\"uller curve.
In the concrete example from Section~\ref{ss:A:concrete:example} we
start with an arithmetic Teichm\"uller curve ${\mathcal T}$ corresponding to
the stratum ${\mathcal Q}(1,-1^5)$. Placing the points $z_1,\dots,z_6$ at the
single zero and at the five poles of each flat surface $S$ in ${\mathcal T}$
we get an arithmetic Teichm\"uller curve $\field{H}at{\mathcal T}$ corresponding to
the stratum ${\mathcal Q}(7,1^5)$. By construction, $\field{H}at{\mathcal T}$ belongs to the
locus of cyclic covers~\eqref{eq:cyclic:cover:equation}.
The latter construction can be naturally generalized to the case when
$k+l\neq n$.
When $\operatorname{P}\hspace*{-2pt}\cL$ is a nonarithmetic Teichm\"uller curve, the construction
can be modified by placing the points $z_1,\dots,z_n$ at all possible
subcollections of $n$ distinct \textit{periodic points};
see~\cite{Gutkin:Hubert:Schmidt} for details.
The construction can be generalized further. Let ${\mathcal L}_1$ be a
$\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold of some stratum
${\mathcal Q}_1(m_1,\dots,m_k,-1,\dots,-1)$ in genus zero. Fix a subset
$\Sigma$ in the ordered set with multiplicities
$\{m_1,\dots,m_k,-1,\dots,-1\}$; let $j$ be the cardinality of
$\Sigma$. For each flat surface $S=(\field{C}P,q)$ in ${\mathcal L}_1$, consider all
possible cyclic covers as in~\eqref{eq:cyclic:cover:general} such
that the points $z_1,\dots,z_j$ run over all possible configurations
of the zeroes and poles corresponding to the subset $\Sigma$, and the
remaining points $z_{j+1},\dots,z_n$ run over all possible
configurations of $n-j$ distinct regular points in $S$. Considering
for each configuration a quadratic differential $p^\ast q$ on the
resulting cyclic cover $\field{H}at X$, we construct a $\operatorname{PSL}(2,{\mathbb R})$-invariant
suborbifold $\field{H}at{\mathcal L}_1$ of complex dimension $(\dim{\mathcal L}_1+n-j)$.
Of course, when ${\mathcal L}_1$ is endowed with a Borel probability
measure, invariant with respect to the natural action of the group
$\operatorname{PSL}(2,{\mathbb R})$ and ergodic with respect to the Teichm\"uller geodesic
flow, the proof that the new suborbifold $\field{H}at{\mathcal L}_1$ carries a
$\operatorname{PSL}(2,{\mathbb R})$-invariant measure with the same properties requires, in
the general case $n-j>0$, some extra work (see, for example, the
paper~\cite{Eskin:Marklof:Morris} in this spirit).
\section{Concrete example: the calculations}
\label{s:Concrete:example:the:calculations}
In this section we treat in all details the example from
Section~\ref{ss:A:concrete:example}.
\subsection{The $\operatorname{PSL}(2,{\mathbb Z})$-orbit}
\label{ss:The:PSLZ:orbit}
It is an exercise (left to the reader) to verify that the $\operatorname{PSL}(2,{\mathbb Z})$-orbit
of the square-tiled surface $S$ of Figure~\ref{fig:oneline:6} has the
structure presented in Figure~\ref{fig:PSL2Z:orbit:below} below. For
historical reasons, the initial surface $S$ is denoted by $S_3$
there.
\begin{figure}
\caption{}
\label{fig:PSL2Z:orbit:below}
\end{figure}
\begin{Convention}
\label{conv:hor:vert}
For typographical reasons, we are forced to use a peculiar orientation
as in Figure~\ref{fig:PSL2Z:orbit:below} and in all
remaining figures in this paper. The notions ``horizontal''
and ``vertical'' correspond to this ``landscape orientation'':
``horizontal'' means ``parallel to the $x$-axis'' and ``vertical''
means ``parallel to the $y$-axis''. Under this convention, the
leftmost surface $S_3$ of Figure~\ref{fig:PSL2Z:orbit:below} has a
single \textit{horizontal} cylinder of height $1$ and width $6$.
\end{Convention}
The three square-tiled surfaces $\field{H}at S_1, \field{H}at S_2, \field{H}at S_3$ in the
$\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\field{H}at S=\field{H}at S_3$ are presented in
Figure~\ref{fig:three:surfaces}.
This figure also shows how the surfaces $\field{H}at S_1, \field{H}at S_2, \field{H}at S_3$
are related by the basic transformations
$$
h=
\begin{pmatrix}
1&1\\
0&1
\end{pmatrix}
\qquad
\text{ and }
\qquad
r=
\begin{pmatrix}
0&-1\\
1&0
\end{pmatrix}
$$
given by the action of $\operatorname{PSL}(2,{\mathbb Z})$ on the flat surfaces
$\field{H}at S_1, \field{H}at S_2, \field{H}at S_3$.
\subsection{Spectrum of Lyapunov exponents}
\label{ss:Spectrum:of:the:Lyapunov:exponents}
\begin{Lemma}
\label{lm:8:9}
The sum of the nonnegative Lyapunov exponents of the Hodge bundle
$H^1$ with respect to the geodesic flow on $\field{H}at{\mathcal T}$ is equal to
$8/9$.
\end{Lemma}
\begin{proof}
By the formula for the sum of Lyapunov exponents of the subbundle
$H^1_{\field{R}{}}=H^1_+$ from~\cite{Eskin:Kontsevich:Zorich} one has
\begin{equation}
\label{eq:general:sum:of:plus:exponents:for:quadratic}
\lambda_1 + \dots + \lambda_g
\ = \
\cfrac{1}{24}\,\sum_{j=1}^n \cfrac{d_j(d_j+4)}{d_j+2}
+\frac{\pi^2}{3}\cdot c_{\mathfrak{m}athit{area}}(\field{H}at{\mathcal T})
\end{equation}
where the Siegel--Veech constant for the corresponding arithmetic
Teichm\"uller disc $\field{H}at{\mathcal T}$ is computed as
$$
c_{\mathfrak{m}athit{area}}(\field{H}at{\mathcal T})=
\cfrac{3}{\pi^2}\cdot
\cfrac{1}{\operatorname{card}(\operatorname{PSL}(2,{\mathbb Z})\cdot \field{H}at S)}\
\sum_{\field{H}at S_i\in\operatorname{PSL}(2,{\mathbb Z})\cdot \field{H}at S}\ \
\sum_{\substack{
\mathfrak{m}athit{horizontal}\\
\mathfrak{m}athit{cylinders\ cyl}_{ij}\\
\mathfrak{m}athit{such\ that}\\
\field{H}at S_i=\sqcup\mathfrak{m}athit{cyl}_{ij}}}\
\cfrac{h_{ij}}{w_{ij}}\,.
$$
In our case $\field{H}at{\mathcal T}\subset{\mathcal Q}(7,1^5)$, so the first summand
in~\eqref{eq:general:sum:of:plus:exponents:for:quadratic} gives
$$
\frac{1}{24}\,\sum_{j=1}^n \cfrac{d_j(d_j+4)}{d_j+2}=
\frac{1}{24} \left(\frac{7\cdot 11}{9} +
5\cdot \frac{1\cdot 5}{3}\right)=
\frac{19}{27}\,.
$$
Observing the cylinder decompositions of the three surfaces in the
$\operatorname{PSL}(2,{\mathbb Z})$-orbit of the initial square-tiled cyclic cover, we get:
$$
\frac{\pi^2}{3}\cdot c_{\mathfrak{m}athit{area}}(\field{H}at{\mathcal T})=
\frac{1}{3}\left(\frac{1}{18} + 2\left(\frac{1}{12} +
\frac{1}{6}\right)\right)=
\frac{5}{27}\,.
$$
Thus, taking the sum of the two terms
in~\eqref{eq:general:sum:of:plus:exponents:for:quadratic} we get
$$
\lambda_1 + \dots + \lambda_4=\frac{19}{27}+\frac{5}{27}=\frac{8}{9}\,.
$$
\end{proof}
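The arithmetic in the proof above can be double-checked in exact rational arithmetic. The sketch below (our addition) takes the zero orders and the cylinder data $h_{ij}/w_{ij}$ from the displayed computation rather than re-deriving them from the surfaces:

```python
from fractions import Fraction as F

# First summand of the Eskin-Kontsevich-Zorich formula for Q(7,1^5):
# one zero of order 7 and five zeroes of order 1.
degrees = [7] + [1] * 5
first = F(1, 24) * sum(F(d * (d + 4), d + 2) for d in degrees)
assert first == F(19, 27)

# Siegel-Veech term: cylinder moduli h/w read off the three surfaces
# of the PSL(2,Z)-orbit, averaged over the orbit of cardinality 3.
sv = F(1, 3) * (F(1, 18) + 2 * (F(1, 12) + F(1, 6)))
assert sv == F(5, 27)

# Sum of the nonnegative Lyapunov exponents.
assert first + sv == F(8, 9)
```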
Consider the $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles ${\mathcal E}(\zeta)$,
${\mathcal E}(\zeta^2)$ of the Hodge bundle $H^1_{\field{C}{}}$ over $\field{H}at{\mathcal T}$ as
in~\eqref{eq:E:zeta:k}. Note that in our case we have
$H^1_{\field{C}{}}={\mathcal E}(\zeta)\oplus{\mathcal E}(\zeta^2)$.
\begin{Proposition}
\label{prop:Lyapunov:spectrum}
The Lyapunov spectrum of the real and complex Hodge bundles
$H^1_{\field{R}{}}$ and $H^1_{\field{C}{}}$ with respect to the geodesic flow on
the arithmetic Teichm\"uller curve $\field{H}at{\mathcal T}$ is
$$
\left\{\frac{4}{9},\frac{4}{9},0,0,0,0,-\frac{4}{9},-\frac{4}{9}\right\}\,.
$$
The Lyapunov spectrum of each of the subbundles ${\mathcal E}(\zeta)$,
${\mathcal E}(\zeta^2)$ with respect to the geodesic flow on $\field{H}at{\mathcal T}$ is
$$
\left\{\frac{4}{9},0,0,-\frac{4}{9}\right\}\,.
$$
\end{Proposition}
\begin{proof}
Note that the vector bundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ are
complex conjugate. Hence, their Lyapunov spectra coincide.
The pseudo-Hermitian Hodge bilinear form on $H^1_{\field{C}{}}$ is preserved
by the Gauss--Manin connection. By the theorem of C.~McMullen cited
at the end of Section~\ref{ss:Splitting:of:the:Hodge:bundle}, the
signature of its restriction to ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ equals
$(3,1)$ and $(1,3)$ respectively. Thus, the restriction of the
cocycle to ${\mathcal E}(\zeta)$, ${\mathcal E}(\zeta^2)$ lies in
$\operatorname{U}(3,1)$ and $\operatorname{U}(1,3)$ respectively.
Hence, by Theorem~\ref{th:spec:of:unitary:cocycle} the spectrum of
each of ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ has the form
$$
\{\lambda, 0, 0, -\lambda\}\,,
$$
where $\lambda\ge 0$. Since
$H^1_{\field{C}{}}={\mathcal E}(\zeta)\oplus{\mathcal E}(\zeta^2)$, the spectrum of Lyapunov
exponents of the Hodge bundle $H^1_{\field{C}{}}$ is the union of spectra of
${\mathcal E}(\zeta)$ and of ${\mathcal E}(\zeta^2)$. Since the Lyapunov spectrum of
$H^1_{\field{R}{}}$ coincides with the one of $H^1_{\field{C}{}}$ we conclude that
the spectrum of $H^1_{\field{R}{}}$ is
$$
\{\lambda, \lambda, 0, 0, 0, 0, -\lambda, -\lambda\}\,.
$$
By Lemma~\ref{lm:8:9} we get $\lambda=4/9$.
\end{proof}
Recall that the Oseledets subspace (subbundle) $E_0$ (the one
associated to the zero exponents) is called \textit{neutral Oseledets
subspace (subbundle)}.
\begin{Proposition}\label{prop:isometryU31}
The Kontsevich-Zorich cocycle over $\field{H}at{{\mathcal T}}$ acts by isometries on
the neutral Oseledets
subbundle $E_0$ of each of the bundles ${\mathcal E}(\zeta), {\mathcal E}(\zeta^2)$. In
other words, the restriction of the pseudo-Hermitian form to the
subbundle $E_0$ of each of the bundles ${\mathcal E}(\zeta), {\mathcal E}(\zeta^2)$ is either
positive-definite or negative-definite.
\end{Proposition}
\begin{proof}
The Kontsevich-Zorich cocycle over $\field{H}at{\mathcal T}$ on ${\mathcal E}(\zeta)$,
respectively, on ${\mathcal E}(\zeta^2)$, is a $U(3,1)$, respectively, a
$U(1,3)$, cocycle. Moreover, by Proposition~\ref{prop:Lyapunov:spectrum}, the
dimension of the corresponding neutral Oseledets subspaces is
$2=3-1=|1-3|$. By Lemma~\ref{lemma:isometryUpq} below, this implies
that the Kontsevich-Zorich cocycle acts by isometries along the
neutral Oseledets subspace.
\end{proof}
\begin{remark}This Proposition was motivated by a question of Y.~Guivarc'h to the authors.
\end{remark}
We prove in Section~\ref{s:non:varying} a non-varying phenomenon
similar to the one proved by D.~Chen and M.~M\"oller
in~\cite{Chen:Moeller} for strata in lower genera: certain invariant
loci of cyclic covers share the same sum of the Lyapunov exponents.
\section{Closed geodesics on an arithmetic Teichm\"uller curve}
\label{s:Closed:geodesics:on:an:arithmetic:Teichm:curve}
In this section we describe the basic facts concerning the geometry
of a general arithmetic Teichm\"uller curve. We do not claim
originality: these elementary facts are in part already described in
the literature (see~\cite{Herrlich}, \cite{Hubert:Lelievre},
\cite{Hubert:Schmidt}, \cite{Moeller:Matheus:Yoccoz},
\cite{McMullen:square:tiled}, \cite{Schmithusen:Veech:group},
\cite{Schmithusen:examples}, \cite{Yoccoz}, \cite{Zmiaikou},
\cite{Zorich:square:tiled} and references there); in part widely
known in folklore concerning square-tiled surfaces (as in recent
experiments~\cite{Delecroix:Lelievre}); in part they can be extracted
from the broad literature on coding of geodesics on surfaces of
constant negative curvature (see, for example, \cite{Dalbo}
and~\cite{Series} and references there).
Consider a general square-tiled surface $S_0$. Throughout this
section we assume that the flat structure on $S_0$ is defined by a
quadratic differential no matter whether it is a global square of an
Abelian differential or not. In particular, we deviate from the
traditional convention and always consider the Veech group
$\Gamma(S_0)$ of $S_0$ as a subgroup of $\operatorname{PSL}(2,{\mathbb R})$, and never as a
subgroup of $\operatorname{SL}(2,{\mathbb R})$.
We use the same notation ${\mathcal T}$ for the arithmetic Teichm\"uller curve
defined by $S_0$ and for the corresponding hyperbolic surface with
cusps.
Note that, when working with geodesic flows, in some situations one has
to consider points of the unit tangent bundle, while in others points
of the base space. In our concrete example with
arithmetic Teichm\"uller curves, the orbit $\operatorname{PSL}(2,{\mathbb R})\cdot S_0\subset {\mathcal Q}_g$
of a square-tiled surface $S_0$ in the moduli space of quadratic
differentials ${\mathcal Q}_g$ plays the role of the unit tangent bundle to
the arithmetic Teichm\"uller curve ${\mathcal T}\subset{\mathcal M}_g$ in the moduli
space ${\mathcal M}_g$ of curves. The corresponding projection is defined by
``forgetting'' the quadratic differential:
$$
{\mathcal Q}_g\ni S=(C,q)\mathfrak{m}apsto C\in{\mathcal M}_g\,.
$$
\subsection{Encoding a Veech group by a graph}
\label{ss:Encoding:a:Veech:group:by:a:graph}
Recall that $\operatorname{PSL}(2,{\mathbb Z})$ is isomorphic to the group with two generators
$h$ and $r$ satisfying the relations
\begin{equation}
\label{eq:PSLZ:relations}
r^2=\operatorname{id}\qquad\text{and}\qquad (hr)^3=\operatorname{id}\,.
\end{equation}
As generators $r$ and $h$ one can choose the matrices
$$
h=
\begin{pmatrix}
1&1\\
0&1
\end{pmatrix}
\qquad
\text{ and }
\qquad
r=
\begin{pmatrix}
0&-1\\
1&0
\end{pmatrix}\,.
$$
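The relations~\eqref{eq:PSLZ:relations} for these concrete matrices are easy to verify by hand or mechanically. A minimal sketch (our addition), working with $2\times 2$ integer matrices modulo $\pm\operatorname{Id}$:

```python
# Generators h and r of PSL(2,Z) as 2x2 integer matrices; an element
# of PSL(2,Z) is such a matrix considered up to sign.
def mul(m, n):
    return ((m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]),
            (m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]))

def is_id_in_psl(m):
    # Trivial in PSL(2,Z) means equal to +Id or -Id in SL(2,Z).
    return m in (((1, 0), (0, 1)), ((-1, 0), (0, -1)))

h = ((1, 1), (0, 1))
r = ((0, -1), (1, 0))

# r^2 = -Id and (hr)^3 = -Id in SL(2,Z), hence both relations hold in PSL(2,Z).
assert is_id_in_psl(mul(r, r))
hr = mul(h, r)
assert is_id_in_psl(mul(mul(hr, hr), hr))
```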
Given an irreducible square-tiled surface $S_0$ defined by a quadratic
differential, we construct the following graph $\mathfrak{m}athbb{G}$. Its vertices are in a
bijection with the elements of the orbit $\operatorname{PSL}(2,{\mathbb Z})\cdot S_0$. Its edges
are partitioned into two types: edges of ``$r$-type'' are not oriented,
while edges of ``$h$-type'' are oriented. The edges are naturally
constructed as follows. Each vertex $S_i\in\mathfrak{m}athbb{G}$ is joined by an
edge of $r$-type to the vertex represented by the square-tiled surface
$r\cdot S_i$; it is also joined by an oriented edge of $h$-type to the
vertex $h\cdot S_i$, with the edge oriented from $S_i$ to $h\cdot S_i$.
By construction, the graph $\mathfrak{m}athbb{G}$ with marked vertex $S_0$ is
naturally identified with the coset space $\operatorname{PSL}(2,{\mathbb Z})/\Gamma(S_0)$, where
$\Gamma(S_0)$ is the Veech group of the square-tiled surface $S_0$.
(Irreducibility of $S_0$ implies that $\Gamma(S_0)$ is indeed a
subgroup of $\operatorname{PSL}(2,{\mathbb Z})$.)
The structure of the graph carries complete information about the
Veech group $\Gamma(S_0)$. Namely, any path on the graph $\mathfrak{m}athbb{G}$
composed from a collection of its edges defines the corresponding
word in ``letters'' $h, h^{-1}, r$. Any \textit{closed} path starting
at $S_0$ naturally defines an element of the Veech group
$\Gamma(S_0)\subseteq\operatorname{PSL}(2,{\mathbb Z})$. Reciprocally, any element of
$\Gamma(S_0)$ represented as a word in generators $h, h^{-1}, r$
defines a closed path starting at $S_0$. Two closed homotopic paths,
with respect to the homotopy in $\mathfrak{m}athbb{G}$ with the fixed base
point $S_0$, define the same element of the Veech group
$\Gamma(S_0)$. Clearly, the resulting map
\begin{equation}
\label{eq:pi1:to:Gamma}
\pi_1(\mathfrak{m}athbb{G},S_0)\to\Gamma(S_0)\subseteq\operatorname{PSL}(2,{\mathbb Z})
\end{equation}
is a group homomorphism, and in fact an epimorphism.
For any flat surface $S=g\cdot S_0$ in the $\operatorname{PSL}(2,{\mathbb R})$-orbit $\operatorname{PSL}(2,{\mathbb R})\cdot
S_0$ of the initial square-tiled surface $S_0$, the Veech group
$\Gamma(S)$ is conjugate to the Veech group of $S_0$, namely,
$\Gamma(S)=g\cdot\Gamma(S_0)\cdot g^{-1}$. One can construct an
analogous graph $\mathfrak{m}athbb{G}_S$ for $S$ which would be isomorphic to
the initial one. The only change would concern the representation of
the edges in $\operatorname{PSL}(2,{\mathbb R})$: the edges of the $h$-type would be represented
now by the elements $ghg^{-1}$ and the edges of the $r$-type would be
represented by the elements $grg^{-1}$.
Note that by the result~\cite{Hubert:Lelievre:nonconguence} of
P.~Hubert and S.~Lelievre, in general, $\Gamma(S_0)$ \textit{is not}
a congruence subgroup.
One can formalize the properties of the graph $\mathfrak{m}athbb{G}$ as
follows:
\begin{itemize}
\item[(i)] Each vertex of $\mathfrak{m}athbb{G}$ has valence three or four,
where one valence is represented by an outgoing edge of $h$-type and
another by an incoming edge of $h$-type; the remaining one or two
valences are represented by an $r$-edge or an $r$-loop, respectively;
\item[(ii)] The path $hrhrhr$ (where we follow the orientation of
each $h$-edge) starting from any vertex of the graph $\mathfrak{m}athbb{G}$ is
closed.
\end{itemize}
\begin{Question}
Does every abstract graph satisfying properties (i) and (ii) represent
the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of some square-tiled surface $S_0$?
\end{Question}
J.~Ellenberg and D.~McReynolds gave an affirmative answer to the
latter question for square-tiled surfaces with markings (i.e. with
``fake zeroes''); see~\cite{Ellenberg:McReynolds}. We are curious
whether the answer is still affirmative for flat surfaces without any
fake zeroes.
Note that certain infinite collections of pairwise-distinct
square-tiled surfaces might share the same Veech group $\Gamma$, and,
thus, the same graph $\mathfrak{m}athbb{G}$. As the simplest example one can
consider already $\Gamma=\operatorname{PSL}(2,{\mathbb Z})$: infinite collections of square-tiled
surfaces with the Veech group $\operatorname{PSL}(2,{\mathbb Z})$ are constructed
in~\cite{Schmithusen:Veech:group},~\cite{Schmoll:PSLZ},~\cite{Herrlich}, and~\cite{Forni:Matheus:Zorich_1}.
However, if one considers any fixed stratum, the number of
square-tiled surfaces with a fixed isomorphism class of the Veech
group is finite within this stratum (see~\cite[Corollary
1.7]{Smillie:Weiss}). Nevertheless, these square-tiled surfaces might
be distributed into several $\operatorname{PSL}(2,{\mathbb Z})$-orbits (see, say, Example 5.3 of
F. Nisbach's Ph.D. thesis~\cite{Nisbach}).
\subsection{Partition of an arithmetic Teichm\"uller disc
into hyperbolic triangles}
Consider the modular curve (modular surface)
$$
\mathfrak{m}athcal{MOD}=\operatorname{PSO}(2,{\mathbb R})\backslash\operatorname{PSL}(2,{\mathbb R})/\operatorname{PSL}(2,{\mathbb Z})\,.
$$
Consider its canonical fundamental domain in the upper half plane,
namely the hyperbolic triangle
\begin{equation}
\label{eq:fundam:domain}
\{z\,|\, \operatorname{Im} z>0\}\ \cap\
\{z\,|\, -1/2\le \operatorname{Re} z\le 1/2\}\ \cap\
\{z\,|\, |z|\ge 1\}
\end{equation}
with angles $0,\pi/3,\pi/3$. Any arithmetic Teichm\"uller curve ${\mathcal T}$
has a natural structure of a (possibly ramified) cover over the
modular curve, and, thus, it is endowed with the natural partition by
isometric triangles as above. We deliberately say ``partition'' instead
of ``triangulation'' because of the following subtlety: the side of
the triangle represented by the circle arc in the left picture of
Figure~\ref{fig:modular:surface} might be folded at the middle point
$B$ and glued to itself, as happens, for example, already for the
modular surface $\mathfrak{m}athcal{MOD}$. The vertices and the sides of this
partition define a graph $\check{\mathfrak{m}athbb{G}}$ embedded into the
compactified surface $\bar{\mathcal T}$, where we apply the following
convention: each side of the partition, which is bent in the middle
and glued to itself, is considered as a loop of the graph, see
Figures~\ref{fig:modular:surface} and~\ref{fig:graph:of:a:surface}.
In particular, the middle point of such side \textit{is not} a vertex
of the graph $\check{\mathfrak{m}athbb{G}}$.
\begin{figure}
\caption{}
\label{fig:modular:surface}
\end{figure}
The degree of the cover ${\mathcal T}\to\mathfrak{m}athcal{MOD}$ equals the
cardinality of the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of the initial square-tiled surface
$S_0$,
\begin{equation}
\label{eq:card}
\deg({\mathcal T}\to\mathfrak{m}athcal{MOD}) =\operatorname{card}\big(\operatorname{PSL}(2,{\mathbb Z})\cdot S_0\big)\,.
\end{equation}
The cover ${\mathcal T}\to\mathfrak{m}athcal{MOD}$ might be ramified over two special
points of $\mathfrak{m}athcal{MOD}$. The first possible ramification point is
the point $B$ (having coordinate $i$); it corresponds to the flat
torus glued from the unit square, see
Figure~\ref{fig:modular:surface}. The second possible ramification
point is the point $A$ represented by the identified corners
$e^{-(\pi i)/3}$ and $e^{(\pi i)/3}$ of the hyperbolic
triangle~\eqref{eq:fundam:domain}. The latter point corresponds to
the flat torus glued from the regular hexagon.
Any preimage of the point $B$ (see Figure~\ref{fig:modular:surface})
is either regular or has ramification degree two. In the first case
the preimage is a conical singularity of the hyperbolic surface ${\mathcal T}$
with the cone angle $\pi$ (as for the modular surface $\mathfrak{m}athcal{MOD}$
itself); in the latter case it is a regular point of ${\mathcal T}$.
Any preimage of the point $A$ (see Figure~\ref{fig:modular:surface})
is either regular or has ramification degree three. In the first
case the preimage is a conical singularity of the hyperbolic surface
${\mathcal T}$ with the cone angle $2\pi/3$ (as for the modular surface
$\mathfrak{m}athcal{MOD}$ itself); in the latter case it is a regular point of
${\mathcal T}$.
For each of the two special points of the base surface
$\mathfrak{m}athcal{MOD}$ some preimages might be regular and some preimages
might be ramification points. The cover ${\mathcal T}\to\mathfrak{m}athcal{MOD}$ does
not have any other ramification points.
A square-tiled surface $S=(X,q)$ in the moduli space of quadratic
differentials defines a conical point $X$ of the arithmetic
Teichm\"uller disc if and only if $(X,q)$ and $(X,-q)$ define the
same point in the moduli space. In other words, a square-tiled
surface $S$ projects to a conical point of the arithmetic
Teichm\"uller disc if and only if rotating it by $\pi/2$ yields an
isomorphic square-tiled surface.
\subsection{Encoding an arithmetic Teichm\"uller curve by a graph}
\label{ss:dual:graph}
Note that the set of the preimages in ${\mathcal T}$ of the point $B$ (with
coordinate $i$) in $\mathfrak{m}athcal{MOD}$ (see
Figure~\ref{fig:modular:surface}) under the cover
${\mathcal T}\to\mathfrak{m}athcal{MOD}$ coincides with the collection of the
projections of the orbit $\operatorname{PSL}(2,{\mathbb Z})\cdot S_0$ in the moduli space ${\mathcal Q}_g$
of quadratic differentials to the moduli space ${\mathcal M}_g$ of curves.
Since the cover ${\mathcal T}\to\mathfrak{m}athcal{MOD}$ is, in general, ramified over
$i$, the cardinality of the latter set might be less than the
degree~\eqref{eq:card} of the cover. In this sense, square-tiled
surfaces are particularly \textit{inconvenient} for enumerating the
hyperbolic triangles as above.
Consider a flat torus $T$ which does not correspond to any of the two
conical points of the modular surface $\mathfrak{m}athcal{MOD}$. For example,
let $T$ correspond to the point $4i$ of the fundamental domain. Let
$$
g=\begin{pmatrix}2&0\\0&1/2\end{pmatrix}\in\operatorname{PSL}(2,{\mathbb R})\,.
$$
Then $T=g\cdot T_0$, where $T_0$ stands for the torus glued from the
standard unit square. Consider the following two closed paths
$\gamma_h, \gamma_r$ in the modular surface starting at $4i$, see
Figure~\ref{fig:modular:surface}. The path $\gamma_h$ follows the
horizontal horocyclic loop, while the path $\gamma_r$ descends along
the vertical geodesic from $4i$ to $i$ and returns back following the
same vertical geodesic. The point $T$ of $\mathfrak{m}athcal{MOD}$ and the two
loops $\gamma_h$, $\gamma_r$ can be considered as a realization of
the graph $\mathfrak{m}athbb{G}_T$ under the usual convention that the
``folded'' path $\gamma_r$ is considered as the loop of the graph.
\begin{figure}
\caption{}
\label{fig:graph:of:a:surface}
\end{figure}
For any square-tiled surface $S_0$ consider the surface $S=g\cdot
S_0$ in the $\operatorname{PSL}(2,{\mathbb R})$-orbit of $S_0$. By construction, it projects to
$T=g\cdot T_0$ under the cover ${\mathcal T}\to\mathfrak{m}athcal{MOD}$. Consider all
preimages of $T$ under this cover, and consider the natural lifts of
the loops $\gamma_h$ and $\gamma_r$. Under the usual convention that
``folded'' paths are considered as loops of the graph, we get the
graph $\mathfrak{m}athbb{G}_S$ from
Section~\ref{ss:Encoding:a:Veech:group:by:a:graph}.
The geometry of the hyperbolic surface ${\mathcal T}$ is completely encoded by
each of the graphs $\mathfrak{m}athbb{G}\simeq\mathfrak{m}athbb{G}_S$ and
$\check{\mathfrak{m}athbb{G}}$. For example, the cusps of ${\mathcal T}$ can be
described as follows (see~\cite{Hubert:Lelievre}).
\begin{Lemma}
The cusps of the hyperbolic surface ${\mathcal T}$ are in the natural
bijection with the orbits of the subgroup generated by the element
$$
h=
\begin{pmatrix}
1&1\\
0&1
\end{pmatrix}
$$
on $\mathfrak{m}athbb{G}\simeq\operatorname{PSL}(2,{\mathbb Z})/\Gamma(S_0)$. In other words, the cusps of
the hyperbolic surface ${\mathcal T}$ are in the natural bijection with the
maximal positively oriented chains of $h$-edges in the graph
$\mathfrak{m}athbb{G}$.
\end{Lemma}
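As a toy illustration of this recipe (our own example, with $\Gamma_0(2)$ playing the role of the Veech group, which is a hypothetical substitution): the cosets of $\Gamma_0(2)$ in $\operatorname{PSL}(2,{\mathbb Z})$ are identified with $P^1(\mathbb{F}_2)$ via the bottom row of a matrix, $h$ acts by $(c:d)\mapsto(c:c+d)$, and the $\langle h\rangle$-orbits recover the two cusps of $\Gamma_0(2)$:

```python
# Cosets of Gamma_0(2) in PSL(2,Z), identified with P^1(F_2) = {(0:1), (1:0), (1:1)}.
points = [(0, 1), (1, 0), (1, 1)]

def act_h(p):
    # Right multiplication by h sends the bottom row (c, d) to (c, c + d) mod 2.
    c, d = p
    return (c, (c + d) % 2)

def orbits(points, f):
    seen, orbs = set(), []
    for p in points:
        if p in seen:
            continue
        orb, q = [], p
        while q not in seen:
            seen.add(q)
            orb.append(q)
            q = f(q)
        orbs.append(orb)
    return orbs

cusps = orbits(points, act_h)
assert len(cusps) == 2                         # Gamma_0(2) has two cusps
assert sorted(len(o) for o in cusps) == [1, 2]  # of widths 1 and 2 (1 + 2 = index 3)
```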
\begin{Remark}
As pointed out by D.~Zmiaikou, the Lemma above should be
applied in the context of the action of $\operatorname{PSL}(2,{\mathbb R})$ and \textit{not} of
$\operatorname{SL}(2,{\mathbb R})$; see~\cite{Zmiaikou}.
\end{Remark}
It is clear from the construction that the graphs $\mathfrak{m}athbb{G}_S$ and
$\check{\mathfrak{m}athbb{G}}$ are in natural duality: under the natural
embedding of $\mathfrak{m}athbb{G}_S$ into ${\mathcal T}$ described above, the vertices
of the graph $\mathfrak{m}athbb{G}_S$ are in the canonical one-to-one
correspondence with the hyperbolic triangles in the partition of
${\mathcal T}$; the edges of the graphs $\mathfrak{m}athbb{G}_S$ and
$\check{\mathfrak{m}athbb{G}}$ are also in the canonical one-to-one
correspondence; under our usual convention concerning the loops, one
can assume that the dual loops intersect transversally.
This allows us to encode the paths on ${\mathcal T}$, and, more particularly, the
closed loops on ${\mathcal T}$, with a fixed base point (or rather the homotopy
classes of such loops in a homotopy fixing the base point), by the
closed loops on the graph $\mathfrak{m}athbb{G}$. This observation is used in
the next section, where we discuss the monodromy representation of
our main example, that is, the arithmetic Teichm\"uller curve $\field{H}at{\mathcal T}$
defined in Section~\ref{ss:A:concrete:example}.
\begin{Remark}
One can go further, and encode the hyperbolic geodesics on any
arithmetic Teichm\"uller disc using the continued fractions and the
associated sequences in ``letters'' $h$, $h^{-1}$, $r$. This coding
is the background of numerous computer experiments evaluating
approximate values of \mathfrak{m}box{Lyapunov} exponents of the Hodge bundle
over general arithmetic Teichm\"uller discs, as, for example, the
ones described in~\cite{Eskin:Kontsevich:Zorich} or
in~\cite{Delecroix:Lelievre}. We refer the reader to the detailed
surveys~\cite{Series} and~\cite{Dalbo} (and to references cited
there) for generalities on geometric coding of geodesics on the
modular surface. The coding adapted particularly to Teichm\"uller
discs is described in~\cite{Moeller:Matheus:Yoccoz}.
\end{Remark}
\section{Irreducibility of the Hodge bundle in the example}
\label{s:Irreducibility:of:the:Hodge:bundle}
In this section we prove that the orthogonal splitting
into the subbundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ is the unique irreducible
decomposition of the Hodge bundle into $\operatorname{PSL}(2,{\mathbb R})$-invariant (continuous) complex subbundles. We then
use the fact that the Zariski closure of the monodromy representations on the bundles
${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ is determined in Appendix~\ref{a:Zariski:closure} to generalize our irreducibility result to
all finite covers of the Teichm\"uller curve $\field{H}at{\mathcal T}$ (strong irreducibility).
\subsection{Irreducibility of the decomposition}
\label{ss:irreducibility}
We start with the following elementary lemma from linear algebra,
which we state without proof.
\begin{Lemma}
\label{lm:lin:alg}
Let $A, B$ be two $n\!\times\! n$-matrices. If $\det(AB-BA)\neq 0$, then
the corresponding linear automorphisms of $\field{R}^{n}$ (respectively
$\field{C}^{n}$) do not have any common one-dimensional invariant subspaces.
\end{Lemma}
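A minimal toy illustration of the lemma in dimension two (our own example, not taken from the text): the two unipotent matrices below each preserve a single line, namely the span of $(1,0)$ and of $(0,1)$ respectively, and the nonvanishing of $\det(AB-BA)$ detects that no line is preserved by both:

```python
def mul(m, n):
    # Product of 2x2 matrices stored as nested tuples.
    return ((m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]),
            (m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]))

def commutator_det(a, b):
    # det(ab - ba), the quantity appearing in the lemma.
    ab, ba = mul(a, b), mul(b, a)
    c = tuple(tuple(ab[i][j] - ba[i][j] for j in range(2)) for i in range(2))
    return c[0][0]*c[1][1] - c[0][1]*c[1][0]

A = ((1, 1), (0, 1))  # only invariant line: span of (1, 0)
B = ((1, 0), (1, 1))  # only invariant line: span of (0, 1)

# AB - BA = diag(1, -1), so det(AB - BA) = -1 != 0: no common eigenvector.
assert commutator_det(A, B) == -1
```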
It will be convenient to work with the dual \textit{homology} vector
bundle over the Teichm\"uller curve $\field{H}at{\mathcal T}$ and with its
decomposition into direct sum of $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles
$\mathfrak{m}athcal{E}_*(\zeta)\oplus \mathfrak{m}athcal{E}_*(\zeta^2)$, where
\begin{equation*}
\mathfrak{m}athcal{E}_*(\zeta^k):=\operatorname{Ker}(T_\ast-\zeta^k\operatorname{Id})
\subseteq H_1(X;\field{C}{})\,,
\end{equation*}
compare to~\eqref{eq:E:zeta:k}. Of course, since $H^1(X;\field{C})$ and $H_1(X;\field{C})$ are in duality, we can safely replace ${\mathcal E}(\zeta^k)$ with ${\mathcal E}_*(\zeta^k)$ in our subsequent discussion of the complex Kontsevich-Zorich cocycle over $\field{H}at{\mathcal T}$.
\begin{Proposition}
\label{prop:Ezeta:does:not:have:subbundles}
The subbundles $\mathfrak{m}athcal{E}(\zeta)$ and $\mathfrak{m}athcal{E}(\zeta^2)$ over the Teichm\"uller curve $\field{H}at{\mathcal T}$ are irreducible, i.e., they do not have any nontrivial $\operatorname{PSL}(2,{\mathbb R})$-invariant complex subbundles.
\end{Proposition}
\begin{proof}We note that ${\mathcal E}_*(\zeta)$ and ${\mathcal E}_*(\zeta^2)$ are complex-conjugate
and the monodromy respects the complex conjugation, hence it suffices to prove
that one of them, for instance ${\mathcal E}_*(\zeta)$, is irreducible.
In Section~\ref{ss:calculation} of Appendix~\ref{a:matrix:calculation} we take (more or less at random) the
following two closed oriented paths $\rho_1$, see~\eqref{eq:rho1},
and $\rho_2$, see~\eqref{eq:rho2}, starting and ending at the same
vertex $\field{H}at S_1$, on the graph of Figure~\ref{fig:PSL2Z:orbit} (representing the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\field{H}at{S}$):
\begin{align*}
\rho_1:&=h\cdot r h^{-3} r\cdot h\cdot r h^{-2} r\\
\rho_2:&=r h^{-1} r\cdot h^3\cdot r h^{-1} r \,,
\end{align*}
where each path should be read from left to right. The paths are
chosen to be compatible with the orientation of the graph. Using the
explicit calculation of the monodromy\footnote{Note that all monodromy matrices are
defined up to a multiplication by the complex numbers $\zeta^k$, $k=0,1,2$, induced
by the action of the automorphism group of a cyclic cover~\eqref{eq:cyclic:cover:equation}.} representation performed in Appendix~\ref{a:matrix:calculation}, we compute in
Section~\ref{ss:calculation} the monodromy $X,Y: {\mathcal E}_*(\zeta)\to
{\mathcal E}_*(\zeta)$ along the paths $\rho_1,\rho_2$ respectively, and verify
that $\det(XY-YX)\neq 0$. By Lemma~\ref{lm:lin:alg} this proves that
${\mathcal E}_*(\zeta)$ does not have any one-dimensional $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles. Since the monodromy preserves the pseudo-Hermitian
intersection form, which is non-degenerate, by duality the complex
four-dimensional bundle ${\mathcal E}_*(\zeta)$ does not have any $\operatorname{PSL}(2,{\mathbb R})$-invariant
codimension one subbundles, i.e., three-dimensional ones.
Given the monodromy matrices $X,Y$ along the paths $\rho_1,\rho_2$
we compute in Section~\ref{ss:calculation} the induced monodromy
matrices $U,V$ in the second wedge product $\Lambda^2 {\mathcal E}_*(\zeta)$ of
${\mathcal E}_*(\zeta)$, and verify that $\det(UV-VU)\neq 0$. This proves that
${\mathcal E}_*(\zeta)$ does not have any two-dimensional $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles.
\end{proof}
\begin{NNRemark}
The same kind of straightforward proof of irreducibility, based on
Lemma~\ref{lm:lin:alg}, was implemented in a similar setting
in~\cite[Appendix B]{Zorich:how:do}.
\end{NNRemark}
\begin{Proposition}
\label{prop:only:E}
The complex Hodge bundle $H^1_{\field{C}{}}$ over the Teichm\"uller curve
$\field{H}at{\mathcal T}$ has no nontrivial $\operatorname{PSL}(2,{\mathbb R})$-invariant
complex subbundles other than ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$.
\end{Proposition}
\begin{proof}
By Proposition~\ref{prop:Ezeta:does:not:have:subbundles} the bundles ${\mathcal E}(\zeta)$ and
${\mathcal E}(\zeta^2)$
do not have any non-trivial $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles.
Since the complex Hodge bundle $H^1_{\field{C}{}}$ over $\field{H}at{\mathcal T}$ is
decomposed into the direct sum of two orthogonal $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles
$$
H^1_\field{C}{}={\mathcal E}(\zeta)\oplus {\mathcal E}(\zeta^2)\,,
$$
this implies that $H^1_\field{C}{}$ cannot have $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles of dimension $1,2,3$, otherwise the orthogonal projections
to the direct summands would produce nontrivial $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles. Moreover, since the flat connection preserves the nondegenerate
pseudo-Hermitian intersection form, this implies that the orthogonal complement to
a $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundle cannot have dimension $1,2,3$, and
thus the Hodge bundle does not have any $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundles of dimension $5,6,7$.
If there existed a $\operatorname{PSL}(2,{\mathbb R})$-invariant complex subbundle $V$ of
dimension $4$ different from ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$, its orthogonal
projections $\pi_1,\pi_2$ to ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$,
respectively, would be $\operatorname{PSL}(2,{\mathbb R})$-invariant isomorphisms. The composition
$\pi_1^{-1}\circ \pi_2$ would establish a $\operatorname{PSL}(2,{\mathbb R})$-invariant isomorphism
between subbundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$. This would imply
that the vector bundles ${\mathcal E}(\zeta)$
and ${\mathcal E}(\zeta^2)$ would be isomorphic and would have isomorphic
monodromy representations. However, the bundles ${\mathcal E}(\zeta)$
and ${\mathcal E}(\zeta^2)$ are complex conjugate and the monodromy representation
respects complex conjugation, hence the proof will be completed
by finding a monodromy matrix $C$ on ${\mathcal E}(\zeta)$ whose spectrum
differs from that of its complex conjugate $\bar C$, even up to
multiplication by $\zeta^k$ for $k=0, 1, 2$.
In fact, let us consider closed paths $\mathfrak{m}u_1$ and $\mathfrak{m}u_2$ starting and ending at
the same vertex $\field{H}at S_3=\field{H}at{S}$ on the graph of Figure~\ref{fig:PSL2Z:orbit} given
by
\begin{equation}
\label{eq:mu_1_2}
\mu_1:=h \quad \textrm{ and }\quad
\mu_2:=(r^{-1}h^{-1} r)\cdot (r^{-1}h^{-1}r)\,.
\end{equation}
A closed path on the graph of the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\field{H}at S$ defines
the free homotopy type of a path on the corresponding arithmetic
Teichm\"uller curve $\field{H}at{\mathcal T}$.
An explicit computation (see Sections~\ref{ss:Construction:of:homology:bases} and~\ref{ss:calculation} of Appendix~\ref{a:matrix:calculation})
shows that the monodromy matrices $A, B: {\mathcal E}_*(\zeta)\to {\mathcal E}_*(\zeta)$ associated to
$\mu_1, \mu_2$ are
\begin{equation}
\label{eq:matrixA}
A:=A_3^{hor}=
\left(\begin{array}{cccc}
0 & 0 & 1 & \zeta^2 \\
\zeta & 0 & 0 & \zeta \\
0 & \zeta & 0 & \zeta \\
0 & 0 & 0 & -\zeta^2
\end{array}\right)\,,
\end{equation}
\begin{equation}
\label{eq:matrixB}
B:=A_1^{vert}\cdot A_3^{vert}=
\left(\begin{array}{cccc}
0&\zeta^2-1&\zeta&0\\
0&\zeta&0&0 \\
\zeta&\zeta-\zeta^2&1-\zeta^2&0 \\
1-\zeta^2&1-\zeta^2&1-\zeta&1
\end{array}\right)\,.
\end{equation}
We claim that the spectrum of the matrix
$C=B\cdot A$ is different from that of its complex conjugate $\bar C$ even up to
the action of the automorphism group of the cyclic cover, that is, up to multiplication
by the complex numbers $\zeta^k$, $k=0,1,2$.
In fact, a computation (see Appendix~\ref{a:Zariski:closure}) shows that
\begin{equation}
\label{eq:matrixC}
C:=B\cdot A =
\left(\begin{array}{cccc}
1-\zeta &\zeta^2& 0 &-2\zeta\\
\zeta^2&0&0& \zeta^2 \\
\zeta^2-1&\zeta-1&\zeta &-2 \\
\zeta-1&\zeta-\zeta^2&1-\zeta^2&2\zeta
\end{array}\right)\,,
\end{equation}
and that the characteristic polynomial of the matrix $C$ is
\begin{equation}
\label{eq:char:pol:C}
T^4 + (\zeta^2-\zeta)T^3 -2\zeta^2T^2 + (\zeta^2-1) T + \zeta= (T-1)( T^3-2\zeta T^2+2T-\zeta)\,.
\end{equation}
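These matrix identities lend themselves to a quick numerical sanity check. The sketch below (standard-library Python, not part of the proof) rebuilds $A$, $B$ and $C=B\cdot A$ over a floating-point $\zeta=e^{2\pi i/3}$, compares $C$ with formula~\eqref{eq:matrixC}, and compares $\det(T\cdot\mathrm{Id}-C)$ with the factored characteristic polynomial~\eqref{eq:char:pol:C} at sample points:

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)  # zeta, a primitive cube root of unity

A = [[0, 0, 1, z**2],
     [z, 0, 0, z],
     [0, z, 0, z],
     [0, 0, 0, -z**2]]

B = [[0, z**2 - 1, z, 0],
     [0, z, 0, 0],
     [z, z - z**2, 1 - z**2, 0],
     [1 - z**2, 1 - z**2, 1 - z, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

C = matmul(B, A)

C_expected = [[1 - z, z**2, 0, -2*z],
              [z**2, 0, 0, z**2],
              [z**2 - 1, z - 1, z, -2],
              [z - 1, z - z**2, 1 - z**2, 2*z]]

assert all(abs(C[i][j] - C_expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))

def det(M):
    # Laplace expansion along the first row (fine for a 4x4 matrix)
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1)**j * M[0][j] * det(minor)
    return total

def charpoly_factored(T):
    # (T-1)(T^3 - 2 zeta T^2 + 2 T - zeta)
    return (T - 1) * (T**3 - 2*z*T**2 + 2*T - z)

for T in [0.3 + 0.7j, -1.1 + 0.2j, 2.0, 1j]:
    TI_minus_C = [[(T if i == j else 0) - C[i][j] for j in range(4)]
                  for i in range(4)]
    assert abs(det(TI_minus_C) - charpoly_factored(T)) < 1e-9
```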
If the spectra of $C$ and $\bar C$, up to multiplication by $\zeta^k$, $k=0,1,2$, have a
common element not equal to $1$, then there exist $k\in \{0, 1, 2\}$ and $T \in \field{C}$ such that
$$
T^3-2\zeta T^2+2T -\zeta =
(\zeta^k T)^3-2 \bar \zeta (\zeta^k T)^2+2\zeta^k T -
\bar \zeta=0\,.
$$
By subtracting the two identities above and taking into account that $\zeta^3=1$, we derive
the following identity
$$
2(\zeta^{2k-1}-\zeta) T^2 + 2(1-\zeta^k)T + 1/\zeta -\zeta =0\,.
$$
The roots of the above equation of degree at most two can be computed by hand for
$k=0, 1, 2$, and one can then check that none of them is a root of the characteristic polynomial
in formula~\eqref{eq:char:pol:C}. This completes the argument.
\end{proof}
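The hand computation for $k=0,1,2$ can also be cross-checked numerically. The sketch below (standard-library Python, not part of the proof) forms, for each $k$, the difference of the two cubic identities, solves the resulting equation of degree at most two (for $k=1$ it degenerates to a linear equation), and verifies that none of its roots annihilates the cubic factor $T^3-2\zeta T^2+2T-\zeta$:

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)  # zeta

def P(T):
    # the cubic factor of the characteristic polynomial
    return T**3 - 2*z*T**2 + 2*T - z

for k in range(3):
    # coefficients of the difference of the two cubic identities:
    # [T^3 - 2 z^(2k-1) T^2 + 2 z^k T - 1/z] - [T^3 - 2 z T^2 + 2 T - z]
    a = -2 * z**(2*k - 1) + 2*z
    b = 2 * z**k - 2
    c = -1/z + z
    if abs(a) > 1e-12:                # genuine quadratic equation
        disc = cmath.sqrt(b*b - 4*a*c)
        roots = [(-b + disc) / (2*a), (-b - disc) / (2*a)]
    else:                             # degenerate case (k = 1): linear
        roots = [-c / b]
    for T in roots:
        # no candidate common root is a root of the cubic factor
        assert abs(P(T)) > 1e-6, (k, T)
```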
\begin{Corollary}
\label{cor:no:R:subspaces}
The real Hodge bundle $H^1_{\field{R}{}}$ over the Teichm\"uller curve
$\field{H}at{\mathcal T}$ has no non-trivial
$\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles.
\end{Corollary}
\begin{proof}
Let $V$ be a $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundle of the real Hodge bundle
$H^1_{\field{R}{}}$ over $\field{H}at{\mathcal T}$. Its complexification $V_{\field{C}{}}$ is a $\operatorname{PSL}(2,{\mathbb R})$-invariant
subbundle of the complex Hodge bundle.
Moreover, by construction it is invariant under the complex
conjugation. Since ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ are complex conjugate,
Proposition~\ref{prop:only:E} implies that $V$ is trivial.
\end{proof}
\subsection{Zariski closure of the monodromy group}
\label{ss:Zariski:closure}
Following a suggestion of M.~M\"oller we sketch in this section the computation of the
Zariski closure of the monodromy group of the bundle ${\mathcal E}(\zeta)$.
This computation (performed in details in Appendix~\ref{a:Zariski:closure} of this paper) implies a stronger version of
Proposition~\ref{prop:Ezeta:does:not:have:subbundles} stated in Proposition~\ref{prop:strongly:irreducible}. The idea of this computation is due to A.~Eskin.
\begin{lemma}
\label{lm:Ezeta:strong:irreducibility}
The connected
component of the identity of the Zariski closure of the monodromy group of the bundle
${\mathcal E}(\zeta)$ over the Teichm\"uller curve $\field{H}at{\mathcal T}$ is isomorphic to
$\textrm{SU}(3,1)$.
\end{lemma}
\begin{proof}
It follows from the Theorem of C.~McMullen cited in
Section~\ref{ss:Splitting:of:the:Hodge:bundle} that the monodromy
group $G$ of the flat bundle ${\mathcal E}(\zeta)$ preserves the
pseudo-Hermitian form of signature $(3,1)$. The direct computation of
the generators of $G$ shows that it is generated by matrices having
determinant $\zeta^k$, with $k$ integer. Hence, the connected
component of the Zariski closure of $G$ containing the identity
element is isomorphic to a subgroup of $\textrm{SU}(3,1)$. In order to prove that this
subgroup is in fact the whole $\textrm{SU}(3,1)$, it is sufficient to show
that the Lie algebra of the Zariski closure of $G$ has the same
dimension as the Lie algebra $\mathfrak{su}(3,1)$, that is, $15$. In other
words, it is sufficient to find $15$ linearly independent vectors
in the Lie algebra of the Zariski closure of $G$, which we do
essentially by hand.
For any hyperbolic (or parabolic) element $C$ in $G$ the
vector $X=\log(C)$ belongs to the Lie algebra $\mathfrak{g}_0$ of the
Zariski closure of $G$. Also, together with any vector $X$, the Lie
algebra $\mathfrak{g}_0$ contains the vector
$\operatorname{Ad}_g(X)=gXg^{-1}$, where $g$ is any element in $G$.
Thus, it is sufficient to find a single vector in the Lie algebra
$\mathfrak{g}_0$ and then conjugate it by elements of $G$; as soon as
we obtain $15$ linearly independent vectors in this way, the proof
is completed.
The rest of the computation is computer-assisted. We first find an
explicit hyperbolic $4\times 4$ matrix $C$ in $G$ and an algebraic
expression for the matrix $P$ which conjugates $C$ to a diagonal
matrix $D$. This allows us to compute $X=\log C=P\cdot\log D\cdot
P^{-1}$ with arbitrarily high precision.
As soon as we have a collection of linearly independent vectors in
$\mathfrak{g}_0$ we construct a new vector as follows: we take some
element $g$ in $G$ and compute the distance from $gXg^{-1}$ to the
subspace generated by our collection of independent vectors. If the
distance is large enough, the new vector is linearly independent from
the previous ones and we add it to our collection. If the distance is
suspiciously small, we try another element $g$ in $G$.
This algorithm is implemented in practice as follows.
Let $A$ and $B$ be the matrices of formulas~\eqref{eq:matrixA} and
~\eqref{eq:matrixB} respectively. Both elements are elliptic; $A$ has order $18$,
$B$ has order $6$; the monodromy group $G$ is generated by $A$ and $B$.
We check that the matrix $C:= B\cdot A$ of formula~\eqref{eq:matrixC} is hyperbolic,
then we compute $X=\log C$ as indicated above, and show that the $15$ vectors
\begin{align*}
A^{n}\,\cdot \,&\, X\cdot A^{-n} &n=0,\dots,8;\\
B\cdot A^{n}\,\cdot \,&\, X\cdot A^{-n}\cdot B^{-1} &n=0,2,3,4,5,6
\end{align*}
are linearly independent. See Appendix~\ref{a:Zariski:closure} of this paper for more details.
\end{proof}
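The stated orders of the two elliptic generators can be confirmed by brute force. The sketch below (standard-library Python, not part of the proof, with the matrix entries copied from formulas~\eqref{eq:matrixA} and~\eqref{eq:matrixB}) computes the smallest power of each matrix equal to the identity:

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)  # zeta

A = [[0, 0, 1, z**2],
     [z, 0, 0, z],
     [0, z, 0, z],
     [0, 0, 0, -z**2]]

B = [[0, z**2 - 1, z, 0],
     [0, z, 0, 0],
     [z, z - z**2, 1 - z**2, 0],
     [1 - z**2, 1 - z**2, 1 - z, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def order(M, bound=40):
    # smallest n <= bound with M^n = Id (up to floating-point error), or None
    I = [[float(i == j) for j in range(4)] for i in range(4)]
    P = I
    for n in range(1, bound + 1):
        P = matmul(P, M)
        if max(abs(P[i][j] - I[i][j]) for i in range(4) for j in range(4)) < 1e-9:
            return n
    return None

assert order(A) == 18
assert order(B) == 6
```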
\begin{Remark}
Our initial plan was to use parabolic elements of the group rather than
hyperbolic ones. Parabolic elements have the obvious advantage that
their logarithms are polynomials, and thus the vector in the Lie
algebra corresponding to an integer parabolic matrix can be computed
explicitly.
map in cohomology of a square-tiled surface induced by a simultaneous
twist of the horizontal cylinders by
$$
h^n=
\begin{pmatrix}
1&n\\
0&1
\end{pmatrix}
$$
with $n$ equal to, say, the least common multiple of the widths of
the cylinders.
In the case of square-tiled surfaces corresponding to Abelian differentials
we would certainly get parabolic elements in this way. However, in
our case square-tiled surfaces correspond to quadratic differentials.
A direct computation shows that the square-tiled surfaces $\field{H}at S_1,
\field{H}at S_2, \field{H}at S_3$ in the $\operatorname{PSL}(2,{\mathbb Z})$-orbit (see
Figure~\ref{fig:three:surfaces}) have the following property: the
waist curve of any horizontal cylinder is homologous to zero. As a
result, the monodromy along any path on the Teichm\"uller curve
$\field{H}at{\mathcal T}$ represented by an element $h^n$ as above is elliptic (i.e.
has finite order) and not parabolic. We do not know whether the
monodromy group in our example has at least one parabolic element.
\end{Remark}
\subsection{Strong irreducibility of the decomposition}
\label{ss:strong_irreducibility}
\begin{Proposition}
\label{prop:strongly:irreducible}
The subbundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ over the Teichm\"uller curve $\field{H}at{\mathcal T}$
are strongly irreducible, i.e., their lifts to any finite (possibly ramified)
cover of $\field{H}at{\mathcal T}$ are irreducible.
\end{Proposition}
\begin{proof}
First note that ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ are complex-conjugate
and the monodromy respects the complex conjugation. Moreover, for any
finite, possibly ramified, cover the induced vector bundles stay
complex conjugate. Thus, it suffices to show that one of them, say,
${\mathcal E}(\zeta)$ is strongly irreducible.
The second observation is that the identity component of the
Zariski closure of the monodromy group of a vector bundle is
invariant under finite covers. To see this, it suffices to note
that for any hyperbolic or parabolic element $g$ in the original
monodromy group, the monodromy group of the vector bundle induced on
a finite cover contains some power of $g$. Thus,
Lemma~\ref{lm:Ezeta:strong:irreducibility} implies the statement of
the Proposition.
\end{proof}
We would like to derive from Proposition~\ref{prop:strongly:irreducible} a generalization
of Proposition~\ref{prop:only:E} to arbitrary finite covers of the Teichm\"uller curve $\field{H}at {\mathcal T}$.
The proof of that proposition can in fact be generalized once we have established the following
algebraic lemma.
\begin{lemma}
\label{lemma:irrational}
The matrix $C$ in formula~\eqref{eq:matrixC} has a simple eigenvalue $\mu\in \field{C}$ of modulus
one which is not a root of unity.
\end{lemma}
\begin{proof} Let $P_\zeta(T) = T^3-2\zeta T^2 +2T -\zeta$ be the factor
of the characteristic polynomial of the matrix $C$, written in formula~\eqref{eq:char:pol:C}.
Since $\zeta^3=1$, the relation $\overline{ P_\zeta(T)}= -(\bar T^3/\zeta)\,P_\zeta( 1/\bar T)$ holds,
so the set of roots of $P_\zeta(T)$ is invariant under the inversion $T\mapsto 1/\bar T$; since the
degree is odd, $P_\zeta(T)$ has exactly one root $\mu \in \field{C}$ of modulus one (note that $P_\zeta(T)$
cannot have all the roots on the unit circle since the sum of all of its roots is equal to
$2 \zeta$, which has modulus equal to $2$). We will compute the minimal polynomial
$M(T)$ (with integer coefficients) of $\mu$ and check that it is not a cyclotomic polynomial.
The general procedure to compute the minimal polynomial of the roots of
$P_\zeta(T)$ is to compute the resultant of $P_\zeta(T)$ and $\zeta^3-1$. In this
particular case, it can be done by hand as follows. Assume $P_\zeta(T)=0$, then
$T(T^2 +2) = \zeta (2T^2+1)$, hence
$$
T^3(T^2+2)^3 = \zeta^3 (2T^2+1)^3
=(2T^2+1)^3.
$$
It follows then that $P_\zeta(T)$ is a divisor of the following polynomial with integer coefficients:
$$
Q(T):=T^9+ 6T^7-8T^6+12T^5 -12T^4 +8 T^3 -6T^2-1\,.
$$
The above polynomial factorizes as follows into irreducible factors:
$$
Q(T)= (T-1)(T^2-T+1)(T^6 + 2T^5 + 8T^4 + 5 T^3 + 8 T^2 +2T +1)\,.
$$
(The above factorization can be guessed by reduction modulo $2$. In fact, $Q(T) \equiv_2 T^9-1$
and $T^9-1\equiv_2 (T-1)(T^2-T+1)(T^6-T^3+1)$ and it is immediate to check that the factors
$T-1$, $T^2-T+1$ and $T^6-T^3+1$ are irreducible modulo $2$.)
Since $P_\zeta(T)$ and $(T-1)(T^2-T+1)$ have clearly no common roots, it follows that
$$
M(T)= T^6 + 2T^5 + 8T^4 + 5 T^3 + 8 T^2 +2T +1\,.
$$
The polynomial $M(T)$ is not cyclotomic. In fact, it is known
(see~\cite{Migotti}) that for all positive integers $n$ with at most
two distinct odd prime factors, the $n$-th cyclotomic polynomial has
all the coefficients in $\{0, 1, -1\}$. It is also known that if $n$
has $r$ distinct odd prime factors then $2^r$ is a divisor of the
degree of the $n$-th cyclotomic polynomial, which is equal to the
value $\varphi(n)$ of Euler's $\varphi$-function. It follows that all
cyclotomic polynomials of degree $6$ (which in fact appear only for
$n=7, 9, 14$ and $18$) have all the coefficients in $\{0, 1, -1\}$.
\end{proof}
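The integer-polynomial manipulations in this proof are easy to verify mechanically. The sketch below (standard-library Python, not part of the proof) checks the expansion of $Q(T)$, its factorization through $M(T)$, the divisibility of $Q(T)$ by $P_\zeta(T)$ via polynomial division over $\field{C}$, and the fact that $M(T)$ has a coefficient outside $\{0,\pm 1\}$:

```python
from itertools import zip_longest
import cmath

# polynomials as coefficient lists, constant term first
def polymul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polysub(p, q):
    return [a - b for a, b in zip_longest(p, q, fillvalue=0)]

cube = lambda p: polymul(polymul(p, p), p)

# Q(T) = T^3 (T^2 + 2)^3 - (2 T^2 + 1)^3
Q = polysub(polymul([0, 0, 0, 1], cube([2, 0, 1])), cube([1, 0, 2]))
assert Q == [-1, 0, -6, 8, -12, 12, -8, 6, 0, 1]   # the displayed Q(T)

# factorization Q = (T - 1)(T^2 - T + 1) M(T)
M = [1, 2, 8, 5, 8, 2, 1]
assert polymul(polymul([-1, 1], [1, -1, 1]), M) == Q

# M has a coefficient outside {0, 1, -1}, so it is not cyclotomic of degree 6
assert any(abs(c) > 1 for c in M)

# P_zeta(T) = T^3 - 2 zeta T^2 + 2 T - zeta divides Q(T): synthetic division
z = cmath.exp(2j * cmath.pi / 3)
den = [-z, 2, -2 * z, 1]
rem = [complex(c) for c in Q]
for i in range(len(Q) - len(den), -1, -1):
    f = rem[i + 3] / den[3]
    for j in range(4):
        rem[i + j] -= f * den[j]
assert all(abs(rem[i]) < 1e-9 for i in range(3))   # zero remainder
```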
\begin{Proposition}
\label{prop:only:E:bis}
The complex Hodge bundle $H^1_{\field{C}{}}$ over any finite (possibly ramified)
cover of the Teichm\"uller curve $\field{H}at{\mathcal T}$ has no nontrivial $\operatorname{PSL}(2,{\mathbb R})$-invariant
complex subbundles other than the lifts of the subbundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$.
\end{Proposition}
\begin{proof}
By Proposition~\ref{prop:strongly:irreducible} the lifts of the bundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$
to any finite (possibly ramified) cover of the Teichm\"uller curve $\field{H}at{\mathcal T}$ do not have any
non-trivial $\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles. By the same argument as in the proof of Proposition~\ref{prop:only:E}, the proof then reduces to showing that there is no $\operatorname{PSL}(2,{\mathbb R})$-invariant
isomorphism between the lifts of the subbundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$, which are
complex conjugate subbundles of the Hodge bundle. By Lemma~\ref{lemma:irrational},
the monodromy matrix $C=BA$ of formula~\eqref{eq:matrixC} has a (simple) complex
eigenvalue $\mu\in \field{C}$ of modulus $1$ which is not a root of unity. It follows that any power of
$C$ has a non-real eigenvalue of modulus $1$, hence in particular the spectrum of any
power of $C$ is different from the spectrum of its complex conjugate. Thus for any (possibly
ramified) finite cover of the Teichm\"uller curve $\field{H}at{\mathcal T}$, the monodromy representations
on the lift of the bundles ${\mathcal E}(\zeta)$ and ${\mathcal E}(\zeta^2)$ are not isomorphic. In fact, for
any finite cover of $\field{H}at {\mathcal T}$, there exists a path with monodromy representation on
the lifts of ${\mathcal E}(\zeta)$ and of ${\mathcal E}(\zeta^2)$ given by a power $C^k$ of $C$ and
by its complex conjugate $\bar C^k$ respectively, which have different spectrum and thus
are not isomorphic.
\end{proof}
By the same argument as in the proof of Corollary~\ref{cor:no:R:subspaces}, this
time based on Proposition~\ref{prop:only:E:bis} (instead of Proposition~\ref{prop:only:E}) we can prove
that the real Hodge bundle is strongly irreducible.
\begin{Corollary}
\label{cor:no:R:subspaces:bis}
The real Hodge bundle $H^1_{\field{R}{}}$ over any finite (possibly ramified)
cover of the Teichm\"uller curve $\field{H}at{\mathcal T}$ has no non-trivial
$\operatorname{PSL}(2,{\mathbb R})$-invariant subbundles.
\end{Corollary}
\section{Non-varying phenomenon for certain loci of cyclic covers}
\label{s:non:varying}
It is known that the sum of the Lyapunov exponents of the Hodge
bundle along the Teichm\"uller geodesic flow is the same for all
$\operatorname{SL}(2,{\mathbb R})$-invariant suborbifolds in any hyperelliptic locus in the moduli
space of Abelian or quadratic differentials,
see~\cite{Eskin:Kontsevich:Zorich}. In the paper~\cite{Chen:Moeller}
D.~Chen and M.~M\"oller proved the conjecture of M.~Kontsevich and
one of the authors on non-varying of the sum of the positive Lyapunov
exponents for all Teichm\"uller curves in certain strata of low
genera. We show that an analogous non-varying phenomenon holds for
certain loci of cyclic covers.
Let $M$ be a flat surface in some stratum of Abelian or quadratic
differentials. Together with every closed regular geodesic $\gamma$
on $M$ we have a bunch of parallel closed regular geodesics filling a
maximal cylinder $\mathfrak{m}athit{cyl}$ having a conical singularity at each
of the two boundary components. By the \textit{width} $w$ of a
cylinder we call the flat length of each of the two boundary
components, and by the \textit{height} $h$ of a cylinder --- the flat
distance between the boundary components.
The number of maximal cylinders filled with regular closed geodesics
of bounded length $w(\mathit{cyl})\le L$ is finite. Thus, for any $L>0$ the
following quantity is well-defined:
\begin{equation}
\label{eq:N:area}
N_{\mathit{area}}(M,L):=
\frac{1}{\operatorname{Area}(M)}
\sum_{\substack{
\mathit{cyl}\subset M\\
w(\mathit{cyl})<L}}
\operatorname{Area}(\mathit{cyl})\,.
\end{equation}
Note that in the above definition we do not assume that the area of
the flat surface is equal to one. For a flat surface $M$ denote by
$M_{(1)}$ a proportionally rescaled flat surface of area one. The
definition of $N_{\mathit{area}}( M,L)$ immediately implies that for
any $L>0$ one has
\begin{equation}
\label{eq:area:rescaling}
N_{\mathit{area}}(M_{(1)},L)=
N_{\mathit{area}}\left(M,\sqrt{\operatorname{Area}(M)} L\right)\,.
\end{equation}
The following limit, when it exists,
\begin{equation}
\label{eq:SV:definition}
c_{\mathit{area}}(M):=
\lim_{L\to+\infty}\frac{N_{\mathit{area}}(M_{(1)},L)}{\pi L^2}
\end{equation}
is called the \textit{Siegel--Veech constant}. By a theorem of
A.~Eskin and H.~Masur~\cite{Eskin:Masur}, for any $\operatorname{PSL}(2,{\mathbb R})$-invariant
suborbifold in any stratum of meromorphic quadratic differentials
with at most simple poles, the limit does exist and is the same for
almost all points of the suborbifold (which explains the term
``constant''). Moreover, by Theorem 3
from~\cite{Eskin:Kontsevich:Zorich}, in genus zero the Siegel--Veech
constant $c_{\mathit{area}}(M)$ depends only on the ambient stratum
${\mathcal Q}(d_1,\dots,d_m)$ and:
\begin{equation}
\label{eq:carea:genus:0}
c_{\mathit{area}}(M)=
-\cfrac{1}{8\pi^2}\,\sum_{j=1}^m \cfrac{d_j(d_j+4)}{d_j+2}\,.
\end{equation}
Let $S=(\field{C}P,q)\in{\mathcal Q}_1(n-5,-1^{n-1})$, where $n\ge 4$. Suppose that
the limit~\eqref{eq:SV:definition} exists for $S$. Let $p:\field{H}at C\to
\field{C}P$ be a ramified cyclic cover
\begin{equation}
\label{eq:cyclic:cover:d:n}
w^d=(z-z_1)\cdots(z-z_n)
\end{equation}
with ramification points exactly at the singularities of $q$. Suppose
that $d$ divides $n$ and that $d>2$. Let us consider the induced flat
surface $\field{H}at S:=(\field{H}at C, p^\ast q)$. The lemma below mimics the
analogous lemma in the original paper~\cite{Eskin:Kontsevich:Zorich}.
\begin{Lemma}
\label{lm:SVconst:for:cover}
The Siegel--Veech constants of the two flat surfaces are related as
follows:
\begin{equation}
\label{eq:c:area:hat:equals:2:c:area}
c_{\mathit{area}}(\field{H}at S)=
\begin{cases}
\frac{1}{d}\cdot c_{\mathit{area}}(S),&\text{when $d$ is odd}\\
\frac{4}{d}\cdot c_{\mathit{area}}(S),&\text{when $d$ is even.}
\end{cases}
\end{equation}
\end{Lemma}
\begin{proof}
Let us consider any maximal cylinder $cyl$ on the underlying flat surface
$S$. By maximality of the cylinder, each of the boundary components
contains at least one singularity of $q$. Since $S$ is a topological
sphere, the two boundary components of $cyl$ do not intersect. Since
$q$ has a single zero, this zero belongs to only one of the two
boundary components of $cyl$. Since the other component contains only poles, it
contains exactly two poles.
Each of these two poles is a ramification point of $p$ of degree $d$.
Thus, any closed geodesic (waist curve) of the cylinder $cyl$ lifts
to a single closed geodesic of $d$ times its length when $d$ is odd, and to
two distinct closed geodesics of $d/2$ times its length when $d$ is even.
Now note that, since $d>2$, the quadratic differential $p^\ast q$ is
holomorphic. The condition that $d$ divides $n$ implies that $p$ is
non-ramified at infinity. At each of the ramification points
$z_1,\dots,z_n$ the quadratic differential $q$ has a zero or pole,
and it has no other singularities on $\field{C}P$. Hence, the (nontrivial)
zeroes of $p^\ast q$ are exactly the preimages of the points
$z_1,\dots,z_n$, and, hence, any maximal cylinder on $\field{H}at S$
projects to a maximal cylinder on $S$.
Note also that since $S$ has area one, $\field{H}at S$ has area $d$. We
consider separately two different cases.
\textbf{Case when $\mathbf{d}$ is odd.}
Applying~\eqref{eq:area:rescaling} followed by the
definition~\eqref{eq:N:area} and then followed by our remark on the
relation between the corresponding maximal cylinders $\widehat{cyl}$
and $cyl$ we get the following sequence of relations:
\begin{multline*}
N_{\mathit{area}}(\field{H}at S_{(1)},\sqrt{d}\cdot L)
=
N_{\mathit{area}}(\field{H}at S,d\cdot L)
=\\=
\sum_{\substack{
\widehat{cyl}\subset \field{H}at S\\
w(\widehat{cyl})<d\cdot L}}
\frac{\operatorname{Area}(\widehat{cyl})}{\operatorname{Area}(\field{H}at S)}
=
\sum_{\substack{
cyl\subset S\\
w(cyl)<L}}
\frac{\operatorname{Area}(cyl)}{\operatorname{Area}(S)}
=
N_{\mathit{area}}(S,L)\,.
\end{multline*}
Hence,
\begin{multline*}
c_{\mathit{area}}(\field{H}at S_{(1)})=
\lim_{R\to+\infty}
\frac{N_{\mathit{area}}(\field{H}at S_{(1)},R)}{\pi R^2}=
\lim_{L\to+\infty}
\frac{N_{\mathit{area}}(\field{H}at S_{(1)},\sqrt{d}L)}{\pi\cdot d\cdot L^2}
=\\=
\frac{1}{d}\lim_{L\to+\infty}
\frac{N_{\mathit{area}}(S,L)}{\pi L^2}=
\frac{1}{d}\cdot c_{\mathit{area}}(S)\,,
\end{multline*}
where we used the substitution $R:=\sqrt{d} L$.
\textbf{Case when $\mathbf{d}$ is even.}
This time our relations are slightly modified due to the fact that the
preimage of a maximal cylinder downstairs having a waist curve of
length $\ell$ is a disjoint union of two maximal cylinders with
waist curves of length $d\cdot\ell/2$.
\begin{multline*}
N_{\mathit{area}}(\field{H}at S_{(1)},\frac{\sqrt{d}}{2} L)
=
N_{\mathit{area}}(\field{H}at S,\frac{d}{2} L)
=\\=
\sum_{\substack{
\widehat{cyl}\subset \field{H}at S\\
w(\widehat{cyl})<\frac{d}{2}\cdot L}}
\frac{\operatorname{Area}(\widehat{cyl})}{\operatorname{Area}(\field{H}at S)}
=
\sum_{\substack{
cyl\subset S\\
w(cyl)<L}}
\frac{\operatorname{Area}(cyl)}{\operatorname{Area}(S)}
=
N_{\mathit{area}}(S,L)\,.
\end{multline*}
Hence,
\begin{multline*}
c_{\mathit{area}}(\field{H}at S_{(1)})=
\lim_{R\to+\infty}
\frac{N_{\mathit{area}}(\field{H}at S_{(1)},R)}{\pi R^2}=
\lim_{L\to+\infty}
\frac{N_{\mathit{area}}(\field{H}at S_{(1)},\frac{\sqrt{d}}{2}L)}{\pi\cdot \frac{d}{4}\cdot L^2}
=\\=
\frac{4}{d}\lim_{L\to+\infty}
\frac{N_{\mathit{area}}(S,L)}{\pi L^2}=
\frac{4}{d}\cdot c_{\mathit{area}}(S)\,,
\end{multline*}
where we used the substitution $R:=\frac{\sqrt{d}}{2} L$.
\end{proof}
\begin{Proposition}
Under the assumptions of Lemma~\ref{lm:SVconst:for:cover} one gets
\begin{equation}
\label{eq:c:area:hat:answer}
\frac{\pi^2}{3}\cdot c_{\mathit{area}}(\field{H}at S_{(1)})=
\frac{k}{12\cdot d}\cdot\frac{(n-1)(n-2)}{n-3}
\,,\ \text{where }
k=
\begin{cases}
1,&\text{when $d$ is odd}\\
4,&\text{when $d$ is even.}
\end{cases}
\end{equation}
\end{Proposition}
\begin{proof}
By applying the formula~\eqref{eq:carea:genus:0} for the Siegel--Veech
constant of any $\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold in a stratum
${\mathcal Q}_1(d_1,\dots,d_m)$ in genus zero to a particular case of the
stratum ${\mathcal Q}_1(n-5,-1^{n-1})$, we get
$$
\frac{\pi^2}{3}\cdot c_{\mathit{area}}(S)=
\frac{1}{24}\left(3(n-1)-\frac{(n-5)(n-1)}{n-3}\right)=
\frac{1}{12}\frac{(n-1)(n-2)}{n-3}\,.
$$
By applying Lemma~\ref{lm:SVconst:for:cover} we complete the proof.
\end{proof}
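The arithmetic simplification above can be checked in exact rational arithmetic. The sketch below (standard-library Python, not part of the proof) evaluates the genus-zero formula~\eqref{eq:carea:genus:0} for the stratum ${\mathcal Q}_1(n-5,-1^{n-1})$ and compares it with the closed form $(n-1)(n-2)/(12(n-3))$ for a range of $n$:

```python
from fractions import Fraction

def term(d):
    # d (d + 4) / (d + 2), the contribution of a singularity of order d
    # in the genus-zero Siegel-Veech formula
    return Fraction(d * (d + 4), d + 2)

def pi2_over_3_c_area(n):
    # (pi^2/3) * c_area(S) for S in Q(n-5, -1^(n-1)): the prefactor
    # -1/(8 pi^2) multiplied by pi^2/3 gives -1/24
    return Fraction(-1, 24) * (term(n - 5) + (n - 1) * term(-1))

def closed_form(n):
    return Fraction((n - 1) * (n - 2), 12 * (n - 3))

for n in range(4, 60):
    assert pi2_over_3_c_area(n) == closed_form(n)
```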
Let ${\mathcal M}_1$ be a $\operatorname{PSL}(2,{\mathbb R})$-invariant suborbifold
in the stratum ${\mathcal Q}_1(n-5,-1^{n-1})$.
It is immediate to check that
the locus
$\field{H}at {\mathcal M}_1$ of flat surfaces $\field{H}at S_{(1)}$
induced by cyclic covers~\eqref{eq:cyclic:cover:d:n}, where $d$ divides $n$,
belongs to the stratum
\begin{align*}
{\mathcal Q}_1(d(n-3)-2,(d-2)^{n-1}),&\ \text{ when $d$ is odd,}\\
{\mathcal H}_1\left(d(n-3)/2-1,(d/2-1)^{n-1}\right),&\ \text{ when $d$ is even.}
\end{align*}
Applying Theorems 1 and 2 from~\cite{Eskin:Kontsevich:Zorich} we get
the following
\begin{Proposition}
\label{prop:5:2}
The sum of the Lyapunov exponents of the Hodge bundle $H^1$ over
${\mathcal M}_1$ is equal to
\begin{equation}
\label{eq:sum:lambda}
\lambda_1+\dots+\lambda_g=
\begin{cases}
\cfrac{\left(d^2-1\right) (n-2)}{12 d} &\text{ when $d$ is odd}\\
\\
\cfrac{(n-2) \left(d^2 (n-3)+2 n\right)}{12 d (n-3)} &\text{ when $d$ is even}\,.
\end{cases}
\end{equation}
\end{Proposition}
Consider the particular case when $d=3$ and $n=3m$.
Then
$$
\lambda_1+\dots+\lambda_g=\frac{2n-4}{9}\,,
$$
where $g=n-2$ by the Riemann--Hurwitz formula.
Note that $H^1={\mathcal E}(\zeta)\oplus{\mathcal E}(\zeta^2)$, where
by~\cite{McMullen} the restriction of the Hodge form to ${\mathcal E}(\zeta)$
has signature $(m-1,2m-1)$ and the restriction of the Hodge form to
${\mathcal E}(\zeta^2)$ has signature $(2m-1,m-1)$. Thus, each of the
subspaces has $m$ zero exponents.
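The reduction of formula~\eqref{eq:sum:lambda} in this particular case is a one-line computation that can be confirmed in exact arithmetic (standard-library Python sketch, not part of the text):

```python
from fractions import Fraction

def lyap_sum_odd(n, d):
    # sum of Lyapunov exponents for odd d: (d^2 - 1)(n - 2) / (12 d)
    return Fraction((d * d - 1) * (n - 2), 12 * d)

# at d = 3, n = 3m the sum reduces to (2n - 4)/9; the genus is g = n - 2
# by the Riemann-Hurwitz formula for the degree-3 cover
for m in range(2, 30):
    n = 3 * m
    assert lyap_sum_odd(n, 3) == Fraction(2 * n - 4, 9)
```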
\appendix
\section{Lyapunov spectrum of pseudo-unitary cocycles: the proofs}
\label{a:Lyapunov:spectrum:of:pseudo-unitary:cocycles}
In this appendix we prove Theorem~\ref{th:spec:of:unitary:cocycle}. Its presentation below is inspired by discussions of the second author with A. Avila and J.-C. Yoccoz.
Recall that we consider an invertible transformation $T$ or a flow
$T_t$
preserving a finite ergodic measure $\mu$ on a locally compact topological space
$M$. Let $U$ be a $\log$-integrable cocycle over this transformation (flow)
with values in the group $\operatorname{U}(p,q)$ of pseudo-unitary matrices. The
Oseledets Theorem (i.e. the multiplicative ergodic theorem) can be
applied to complex cocycles. Denote by
\begin{equation}
\label{eq:Lyapunov:spectrum}
\lambda_1\ge\dots\ge\lambda_{p+q}
\end{equation}
the Lyapunov spectrum of the pseudo-unitary cocycle $U$. Let
\begin{equation}
\label{eq:Lyapunov:spectrum:without:multiplicities}
\lambda_{(1)} > \dots > \lambda_{(s)}
\end{equation}
be all \textit{distinct} Lyapunov exponents from the above spectrum.
By applying the transformation (respectively, the flow) both in
forward and backward directions, we get the corresponding Oseledets
decomposition
\begin{equation}
\label{eq:Oseledets:direct:sum}
E_{\lambda_{(1)}}\oplus\dots\oplus E_{\lambda_{(s)}}
\end{equation}
at $\mu$-almost every point of the base space $M$. By definition, all
nonzero vectors of each subspace $E_{\lambda_{(k)}}$ share the same
Lyapunov exponent $\lambda_{(k)}$, which changes sign under time
reversal.
\begin{Lemma}
\label{l:symp-orth}
For any nonzero $\lambda_{(k)}$, the subspace
$E_{\lambda_{(k)}}$ of the Oseledets direct sum
decomposition~\eqref{eq:Oseledets:direct:sum} is isotropic. Any two
subspaces $E_{\lambda_{(i)}}$, $E_{\lambda_{(j)}}$ such that
$\lambda_{(j)}\neq-\lambda_{(i)}$ are orthogonal with respect to the
pseudo-Hermitian form.
\end{Lemma}
\begin{proof}
Consider a (measurable family of) norm(s) $\|.\|$ for which the cocycle $U$ is $\log$-integrable.
By Luzin's theorem, the absolute value of the (measurable family of) pseudo-Hermitian product(s)
$\langle.,.\rangle$ of any two vectors $v_1,v_2$ in $\field{C}^{p+q}$ is uniformly bounded on any
compact set ${\mathcal K}$ of positive measure in $M$ by the product of their norms,
$$
|\langle v_1,v_2\rangle|_x\le
\mathit{const}({\mathcal K})\cdot \|v_1\|_x\cdot\|v_2\|_x
\quad
\text{ for any }x\in{\mathcal K}\,,
$$
up to a multiplicative constant $\mathit{const}({\mathcal K})$ depending only on the
norm and on the compact set ${\mathcal K}$. By ergodicity of the transformation (flow), the trajectory of almost any point returns infinitely
often to the compact set ${\mathcal K}$.
Suppose that there is a pair of Lyapunov exponents $\lambda_{(i)}$,
$\lambda_{(j)}$ satisfying $\lambda_{(i)}\neq-\lambda_{(j)}$. We do
not exclude the case when $i=j$. Consider a pair of vectors $v_i,v_j$
such that $v_i\in E_{\lambda_{(i)}}$, $v_j\in E_{\lambda_{(j)}}$. By
definition of $E_{\lambda_{(i)}}$ and $E_{\lambda_{(j)}}$, we have
$$
\|T_t(v_i)\|\cdot\|T_t(v_j)\|\sim
\exp\big((\lambda_{(i)}+\lambda_{(j)})\, t\big)\,.
$$
When $\lambda_{(i)}+\lambda_{(j)}<0$ the latter expression tends to
zero when $t\to+\infty$; when $\lambda_{(i)}+\lambda_{(j)}>0$ the
latter expression tends to zero when $t\to-\infty$. In both cases, we
conclude that for a subsequence of positive or negative times $t_k$
(chosen when the trajectory visits the compact set ${\mathcal K}$) the
pseudo-Hermitian product $\langle T_{t_k}(v_i),T_{t_k}(v_j)\rangle$
tends to zero. Since the pseudo-Hermitian product is preserved by the
flow, this implies that it is equal to zero, so $\langle
v_i,v_j\rangle=0$. Thus, we have proved that every subspace
$E_{\lambda_{(i)}}$, except possibly $E_{0}$, is isotropic, and
that any pair of subspaces $E_{\lambda_{(i)}}$, $E_{\lambda_{(j)}}$
such that $\lambda_{(j)}\neq-\lambda_{(i)}$ is orthogonal with
respect to the pseudo-Hermitian form.
\end{proof}
We proceed with the following elementary linear algebraic fact about
isotropic subspaces of a pseudo-Hermitian form of signature $(p,q)$.
\begin{lemma}
\label{lemma:nullcone}
The dimension $\dim_\field{C}{} V$ of an isotropic subspace $V$ of a
pseudo-Hermitian form of signature $(p,q)$ is bounded above by
$\min(p,q)$.
\end{lemma}
\begin{proof}
By choosing an appropriate basis, we can always suppose that
$\langle\vec a,\vec b\rangle$ has the form
$$
\langle\vec a,\vec b\rangle=a^1\bar b^1+\dots+a^p\bar b^p -
a^{p+1}\bar b^{p+1}-\dots-a^{p+q}\bar b^{p+q}
$$
where $\vec a=(a^1,\dots,a^{p+q}), \vec b=(b^1,\dots,b^{p+q})$ and
$\vec a, \vec b\in \field{C}^{p+q}$.
Without loss of generality we can assume that $p\leq q$. Let $\Sigma$
be the null cone of the pseudo-Hermitian form, $\Sigma:=\{\vec
a\in\mathbb{C}^{p+q}\,|\, \langle\vec a,\vec a\rangle=0\}$. We argue
by contradiction. Suppose that $V\subset\Sigma$ is a vector subspace
of dimension $r$ with $r\geq p+1$. By assumption, we can find $p+1$
linearly independent vectors $\vec v_1,\dots,\vec v_{p+1}\in V$.
By using the first $p$ coordinates of these vectors, we obtain a
collection of $p+1$ vectors $\vec w_i=(v_i^1,\dots,v_i^p)\in\field{C}^p$,
$1\leq i\leq p+1$. Thus, one can find a non-trivial linear relation
$$
t_1\vec w_1+\dots+t_{p+1}\vec w_{p+1}=\vec 0\in\field{C}^p\,.
$$
Going back to the vectors $\vec v_i$, we conclude that the
non-trivial linear combination
$$
\vec v=t_1\vec v_1+\dots+t_{p+1}\vec v_{p+1}
\in V-\{0\}\subset\Sigma-\{0\}
$$
has the form $\vec v=(0,\dots,0,v^{p+1},\dots,v^{p+q})$, which leads
to a contradiction since the inclusion $\vec v\in\Sigma$ forces
$0=|v^{p+1}|^2+\dots+|v^{p+q}|^2$ (that is, $\vec v=0$).
\end{proof}
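The bound of the lemma is sharp: for signature $(p,q)=(2,3)$ and the standard form above, the span of $\vec e_1+\vec e_3$ and $\vec e_2+\vec e_4$ is a $2$-dimensional isotropic subspace, so $\min(p,q)$ is attained. A small numerical illustration (standard-library Python; the two spanning vectors are chosen for this sketch, not taken from the text):

```python
import random

# signature (p, q) = (2, 3); the pseudo-Hermitian form in the chosen basis
p, q = 2, 3
signs = [1] * p + [-1] * q

def form(a, b):
    return sum(s * x * y.conjugate() for s, x, y in zip(signs, a, b))

# each spanning vector mixes one "positive" and one "negative" direction,
# and their supports pair positive with negative coordinates, so every
# vector of the span lies on the null cone
v1 = [1, 0, 1, 0, 0]   # e1 + e3
v2 = [0, 1, 0, 1, 0]   # e2 + e4

random.seed(0)
for _ in range(100):
    s = complex(random.random(), random.random())
    t = complex(random.random(), random.random())
    w = [s * x + t * y for x, y in zip(v1, v2)]
    assert abs(form(w, w)) < 1e-9   # w is isotropic
```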
\begin{Lemma}
\label{lm:spectrum:is:symmetric}
The Lyapunov spectrum~\eqref{eq:Lyapunov:spectrum} is symmetric with
respect to the sign change, that is, for any $k$ satisfying $1\le k\le
p+q$ one has
$$
\lambda_k=-\lambda_{p+q+1-k}\,.
$$
\end{Lemma}
\begin{proof}
First note that together with any nonzero entry $\lambda_{(i)}$ the
spectrum~\eqref{eq:Lyapunov:spectrum:without:multiplicities}
necessarily contains the entry $-\lambda_{(i)}$. Otherwise, by
Lemma~\ref{l:symp-orth} the subspace $E_{\lambda_{(i)}}$ would be
orthogonal to the entire vector space $\field{C}^{p+q}$, which contradicts
the assumption that the pseudo-Hermitian form is nondegenerate.
Consider a nonzero entry $\lambda_{(i)}$ in the
spectrum~\eqref{eq:Lyapunov:spectrum:without:multiplicities}. Let us
decompose the direct sum~\eqref{eq:Oseledets:direct:sum} into two
terms. As the first term we choose $E_{\lambda_{(i)}}\oplus
E_{-\lambda_{(i)}}$, and we place all the other summands
from~\eqref{eq:Oseledets:direct:sum} to the second term. By
Lemma~\ref{l:symp-orth} the two terms of the resulting direct sum are
orthogonal. Hence, the restriction of the pseudo-Hermitian form to
the first term is non-degenerate. By Lemma~\ref{l:symp-orth} both
subspaces $E_{\lambda_{(i)}}$ and $E_{-\lambda_{(i)}}$ are isotropic.
It follows now from Lemma~\ref{lemma:nullcone} that their dimensions
coincide.
\end{proof}
\begin{Lemma}
\label{lm:at:least:p:minus:q}
The dimension of the neutral subspace $E_0$ in the Oseledets
decomposition~\eqref{eq:Oseledets:direct:sum} is at least $|p-q|$.
\end{Lemma}
\begin{proof}
Consider the direct sum $E_u$ of all subspaces in the Oseledets
decomposition~\eqref{eq:Oseledets:direct:sum} corresponding to
strictly positive Lyapunov exponents $\lambda_{(i)}> 0$,
$$
E_u:=\bigoplus_{\lambda_{(i)}>0} E_{\lambda_{(i)}}\,.
$$
Similarly, consider the direct sum $E_s$ of all subspaces in the
Oseledets decomposition~\eqref{eq:Oseledets:direct:sum} corresponding
to strictly negative Lyapunov exponents $\lambda_{(j)}< 0$,
$$
E_s:=\bigoplus_{\lambda_{(j)}<0} E_{\lambda_{(j)}}\,.
$$
By Lemma~\ref{l:symp-orth} both subspaces $E_u$ and $E_s$ are
isotropic. Hence, by Lemma~\ref{lemma:nullcone} the dimension of each
of them is at most $\min(p,q)$. Since the dimension of the space
$E_u\oplus E_0\oplus E_s$ is $p+q$, it follows that the dimension of
the neutral subspace $E_0$ is at least $p+q-2\min(p,q)=|p-q|$.
\end{proof}
By combining the statements of Lemma~\ref{lm:spectrum:is:symmetric} and
of Lemma~\ref{lm:at:least:p:minus:q}, we get the statement of
Theorem~\ref{th:spec:of:unitary:cocycle}.
To conclude this appendix, we prove the following simple criterion for
the cocycle $U$ to act by isometries on $E_0$.
\begin{Lemma}\label{lemma:isometryUpq}
Suppose that the neutral subspace (subbundle) $E_0$ has dimension exactly
$|p-q|$. Then, the cocycle $U$ acts on $E_0$ by isometries in the sense that the restriction of the pseudo-Hermitian form to the
neutral subspace (subbundle) $E_0$ is either positive definite or
negative definite.
\end{Lemma}
\begin{proof}
We claim that $E_0\cap\Sigma=\{0\}$, where $\Sigma=\{v: \langle
v,v\rangle=0\}$ is the null-cone of the pseudo-Hermitian form
$\langle.,.\rangle$ preserved by $U$. Indeed, since $E_s$ and $E_u$
have the same dimension (by Lemma~\ref{lm:spectrum:is:symmetric}),
and $E_0$ has dimension $|p-q|$ (by hypothesis), we have that
$\dim_\field{C} E_s=\dim_\field{C} E_u=\min\{p,q\}$. So, if
$E_0\cap\Sigma\neq\{0\}$, the arguments of the proof of
Lemma~\ref{l:symp-orth} show that $E_s\oplus (E_0\cap\Sigma)$ is an
isotropic subspace whose dimension is at least $\min\{p,q\}+1$, which
contradicts Lemma~\ref{lemma:nullcone}.
Since the pseudo-Hermitian form $\langle.,.\rangle$ is
non-degenerate, the fact that $E_0\cap\Sigma=\{0\}$ implies that the
restriction of $\langle.,.\rangle$ to $E_0$ is (positive or negative)
definite. In other words, the cocycle $U$ restricted to $E_0$
preserves a family of definite forms $\langle.,.\rangle|_{E_0}$,
i.e., $U$ acts by isometries on $E_0$.
\end{proof}
\section{Evaluation of the monodromy representation}
\label{a:matrix:calculation}
\subsection{Scheme of the construction.}
Our plan is as follows. We start by constructing the square-tiled
cyclic cover $\hat S=\hat S_3$ of the initial square-tiled surface
$S$ of Figure~\ref{fig:oneline:6}. Then we construct the
$\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\hat S=\hat S_3$. The results of this calculation
are presented in Figure~\ref{fig:PSL2Z:orbit}. In particular, the
$\operatorname{PSL}(2,{\mathbb Z})$-orbit of the initial square-tiled surface $\hat S=\hat S_3$
has cardinality three, see Figure~\ref{fig:PSL2Z:orbit}.
For each of the three square-tiled surfaces $\hat S_1, \hat S_2, \hat
S_3$ in the $\operatorname{PSL}(2,{\mathbb Z})$-orbit of $\hat S_3$ we construct an appropriate
generating set of integer cycles and a basis of the eigenspace
$E_{\hat S_i}(\zeta)\subset H_1(\hat S_i,\field{C}{})$. Then we compute the
six matrices of the action in homology induced by the basic
horizontal shear $h$ and by the counterclockwise rotation $r$ by
$\pi/2$ of these flat surfaces, where
$$
h=\begin{pmatrix}1&1\\0&1\end{pmatrix}
\qquad
r=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\,,
$$
thus obtaining an explicit description of the holonomy
representation. Note that we work with the \textit{homology}; the
representation in the \textit{cohomology} is dual.
\begin{NNRemark}
Since we consider a representation of $\operatorname{PSL}(2,{\mathbb Z})$, we may consider all
matrices up to multiplication by $-1$.
\end{NNRemark}
\subsection{Construction of homology bases and evaluation of induced homomorphisms}
\label{ss:Construction:of:homology:bases}
\paragraph{\textbf{Step 1 (Figure~\ref{fig:oneline:shear}).}}
As a generating set of cycles of $\hat S_3$ we take the cycles
$$
a_1, b_1, c_1, d_1, \dots, a_3, b_3, c_3, d_3
$$
represented in the second picture from the left in
Figure~\ref{fig:oneline:shear}. Each of these cycles is represented
by a closed loop with base point $C$. The loops $d_i$ are composed
of the subpaths $d_{i,1}$ and $d_{i,2}$, as indicated in
Figure~\ref{fig:oneline:shear}.
Consider the affine map $\hat S_3\to \hat S_3$ induced by the
horizontal shear
$$
h=\begin{pmatrix}1&1\\0&1\end{pmatrix}\,,
$$
see Figure~\ref{fig:oneline:shear}. (Recall that, by
Convention~\ref{conv:hor:vert} established in the beginning of
Section~\ref{ss:The:PSLZ:orbit}, the notions ``horizontal'' and
``vertical'' correspond to the ``landscape'' orientation.)
It is clear from Figure~\ref{fig:oneline:shear} that the induced map
$h_3$ in the integer homology acts on the chosen cycles as follows:
\begin{align}
\label{eq:h3:abc}
h_3&: a_i\mapsto a'_i = b_{i-1}\notag\\
h_3&: b_i\mapsto b'_i = c_{i-1}\\
h_3&: c_i\mapsto c'_i = a_{i}\notag\,,
\end{align}
where we use the standard convention that indices are considered
modulo $3$.
To compute the images of the cycles $d_i$ we introduce auxiliary
relative cycles $e_1,e_2,e_3$; see the left edge of the rightmost
picture in Figure~\ref{fig:oneline:shear}. Then,
\begin{align*}
h_3: d_{i,1}\mapsto d'_{i,1} &= b_{i-1}+c_{i-1}-d_{i+1,2}+e_{i-1}\\
h_3: d_{i,2}\mapsto d'_{i,2} &= -e_{i-1}-d_{i+1,1}+a_{i+1}\,,
\end{align*}
and taking the sum $d_i=d_{i,1}+d_{i,2}$, we get
\begin{equation}
\label{eq:h3:d}
h_3: d_i\mapsto d'_i=a_{i+1}+b_{i-1}+c_{i-1}-d_{i+1}\,.
\end{equation}
The induced action of the generator $T$ of the group of deck
transformations, defined in~\eqref{eq:T}, has the following form on
the generating cycles:
\begin{align*}
T_\ast: a_i&\mapsto a_{i-1}\\
T_\ast: b_i&\mapsto b_{i-1}\\
T_\ast: c_i&\mapsto c_{i-1}\\
T_\ast: d_i&\mapsto d_{i-1}
\end{align*}
This implies that the following elements of $H_1(\hat S_3,\field{C}{})$:
\begin{align*}
a_+&:=a_1+\zeta a_2 + \zeta^2 a_3\\
b_+&:=b_1+\zeta b_2 + \zeta^2 b_3\\
c_+&:=c_1+\zeta c_2 + \zeta^2 c_3\\
d_+&:=d_1+\zeta d_2 + \zeta^2 d_3
\end{align*}
are eigenvectors of $T_\ast$ corresponding to the eigenvalue
$\zeta=\exp(2\pi i/3)$, and hence, they belong to the subspace
${\mathcal E}_*(\zeta)$. These elements are linearly independent, and, thus, form
a basis of this four-dimensional subspace. To verify the latter
statement we compute the intersection numbers of the generating
cycles $a_i, b_j, c_k, d_l$. Using these intersection numbers we
evaluate the quadratic form
$$
\frac{i}{2} (\alpha\cdot\bar\beta)
$$
on the collection $a_+,b_+,c_+,d_+$, and observe that it has
signature $(3,1)$. We skip the details of this elementary
calculation.
By combining the definition of the basis $a_+,b_+,c_+,d_+$ with the
transformation rules~\eqref{eq:h3:abc} and~\eqref{eq:h3:d}, we see
that the matrix $A^{\mathit{hor}}_3$ of the induced map
$$
h_3: {{\mathcal E}_*}_{\hat S_3}(\zeta)\to {{\mathcal E}_*}_{\hat S_3}(\zeta)
$$
has the form
\begin{equation}
\label{eq:A3:hor}
A^{\mathit{hor}}_3=\left(
\begin{array}{cccc}
0 & 0 & 1 & \zeta^2 \\
\zeta & 0 & 0 & \zeta \\
0 & \zeta & 0 & \zeta \\
0 & 0 & 0 & -\zeta^2
\end{array}
\right)
\end{equation}
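As a cross-check of~\eqref{eq:A3:hor}, the matrix can be recomputed mechanically from~\eqref{eq:h3:abc} and~\eqref{eq:h3:d}. The following Python sketch (ours, assuming numpy; the encoding of the twelve generating cycles as coordinate vectors is an implementation choice, not the paper's) builds the action of $h_3$ on integer homology and restricts it to the span of $a_+,b_+,c_+,d_+$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)

# Order the generating cycles as a_1..a_3, b_1..b_3, c_1..c_3, d_1..d_3.
def idx(block, i):            # blocks 0..3 stand for a, b, c, d; indices mod 3
    return 3 * block + i % 3

# Action of h_3 on the generating cycles:
#   a_i -> b_{i-1},  b_i -> c_{i-1},  c_i -> a_i,
#   d_i -> a_{i+1} + b_{i-1} + c_{i-1} - d_{i+1}.
H = np.zeros((12, 12))
for i in range(3):
    H[idx(1, i - 1), idx(0, i)] = 1                    # a_i -> b_{i-1}
    H[idx(2, i - 1), idx(1, i)] = 1                    # b_i -> c_{i-1}
    H[idx(0, i),     idx(2, i)] = 1                    # c_i -> a_i
    for blk, j, s in [(0, i + 1, 1), (1, i - 1, 1), (2, i - 1, 1), (3, i + 1, -1)]:
        H[idx(blk, j), idx(3, i)] = s                  # image of d_i

# Columns: the eigenvectors a_+, b_+, c_+, d_+ with x_+ = x_1 + zeta x_2 + zeta^2 x_3.
plus = np.zeros((12, 4), dtype=complex)
for blk in range(4):
    for i in range(3):
        plus[idx(blk, i), blk] = zeta ** i

# Matrix of h_3 restricted to the span of a_+, b_+, c_+, d_+.
A3hor = np.linalg.lstsq(plus, H @ plus, rcond=None)[0]

expected = np.array([
    [0,    0,    1, zeta ** 2],
    [zeta, 0,    0, zeta],
    [0,    zeta, 0, zeta],
    [0,    0,    0, -zeta ** 2]])
print(np.allclose(A3hor, expected))
```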
\paragraph{\textbf{Step 2 (Figure~\ref{fig:oneline:rotate}).}}
In Step 1 we used the horizontal cylinder decomposition for the
initial square-tiled surface $\hat S_3$, see the left two pictures in
Figure~\ref{fig:oneline:rotate}. Here, as usual, ``horizontal''
corresponds to the landscape orientation, see
Convention~\ref{conv:hor:vert} in Section~\ref{ss:The:PSLZ:orbit}. In
the bottom two pictures of Figure~\ref{fig:oneline:rotate} we
construct a pattern of the same flat surface $\hat S_3$ corresponding
to the vertical cylinder decomposition. Finally, we rotate the
resulting pattern by $\pi/2$ clockwise; see the right two pictures.
We renumber the squares after the rotation. The resulting surface is
the surface $\hat S_2$. It inherits a collection of generating cycles
and the basis of the subspace ${\mathcal E}_*(\zeta)$ from the surface $\hat S_3$.
By construction, the matrix $R_3$ of the induced map
$$
r_3: {{\mathcal E}_*}_{\hat S_3}(\zeta)\to {{\mathcal E}_*}_{\hat S_2}(\zeta)
$$
is the identity matrix, $R_3=\mathrm{Id}$, for our choice of the bases in
${{\mathcal E}_*}_{\hat S_3}(\zeta)$ and in ${{\mathcal E}_*}_{\hat S_2}(\zeta)$.
Note that by construction, the points of the $\operatorname{PSL}(2,{\mathbb Z})$-orbit
corresponding to the surfaces $\hat S_3$ and $\hat S_2$ satisfy
$[\hat S_2]=r^{-1} [\hat S_3]=r [\hat S_3]$, where
$r=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\in\operatorname{PSL}(2,{\mathbb Z})$.
\paragraph{\textbf{Step 3 (Figure~\ref{fig:twoline:one:shear}).}}
We consider the affine map $\hat S_2\to \hat S_1$ induced by the
horizontal (in the landscape orientation) shear
$h=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ and define a generating set
of cycles in the homology $H_1(\hat S_1; \field{Z}{})$ of $\hat S_1$ and a
basis of cycles in the subspace ${{\mathcal E}_*}_{\hat S_1}(\zeta)$ as the images
of generating cycles previously defined in $H_1(\hat S_2; \field{Z}{})$. By
construction, the matrix $A^{\mathit{hor}}_2$ of the induced map
$$
h_2: {{\mathcal E}_*}_{\hat S_2}(\zeta)\to {{\mathcal E}_*}_{\hat S_1}(\zeta)
$$
is the identity matrix, $A^{\mathit{hor}}_2=\mathrm{Id}$, for our choice of
the bases in ${{\mathcal E}_*}_{\hat S_2}(\zeta)$ and in ${{\mathcal E}_*}_{\hat S_1}(\zeta)$.
\paragraph{\textbf{Step 4 (Figure~\ref{fig:twoline:shear}).}}
Consider the affine map $\hat S_1\to \hat S_2$ induced by the
horizontal shear $h=\begin{pmatrix}1&1\\0&1\end{pmatrix}$. Now
both homology spaces $H_1(\hat S_1; \field{Z}{})$ and $H_1(\hat S_2; \field{Z}{})$
are already endowed with the generating sets, so we can compute the
matrix $A^{\mathit{hor}}_1$ of the induced map
$$
h_1: {{\mathcal E}_*}_{\hat S_1}(\zeta)\to {{\mathcal E}_*}_{\hat S_2}(\zeta)\,.
$$
For our choice of the bases in ${{\mathcal E}_*}_{\hat S_1}(\zeta)$ and in ${{\mathcal E}_*}_{\hat
S_2}(\zeta)$ the matrix $A^{\mathit{hor}}_1$ coincides with the
matrix of the automorphism ${{\mathcal E}_*}_{\hat S_2}(\zeta)\to {{\mathcal E}_*}_{\hat
S_2}(\zeta)$ induced by the affine diffeomorphism $\hat S_2\to \hat
S_2$ corresponding to the horizontal shear
$h^2=\begin{pmatrix}1&2\\0&1\end{pmatrix}$.
Figure~\ref{fig:twoline:shear} describes this automorphism.
It is convenient to introduce auxiliary cycles $s_1, s_2, s_3$,
as in the right picture of Figure~\ref{fig:twoline:shear}. We use
Figure~\ref{fig:twoline:shear} to trace how the induced map $h_1\circ
h_2$ in the integer homology acts on the generating cycles. We start
by noting that
$$
h_1\circ h_2: d_i\mapsto d'_i = d_i\,.
$$
Next, we remark that,
\begin{align*}
h_1\circ h_2&: a_{i,1}\mapsto a'_{i,1} = -s_i+c_{i,1}\\
h_1\circ h_2&: a_{i,2}\mapsto a'_{i,2} = c_{i,2}+s_{i+1}\,,
\end{align*}
and, hence, taking the sum, we get
$$
h_1\circ h_2: a_i\mapsto a'_i = c_i+s_{i+1}-s_i\,.
$$
We proceed with the relations
\begin{align*}
h_1\circ h_2&: b_{i,1}\mapsto b'_{i,1} = s_{i-1}+b_{i+1,1}\notag\\
h_1\circ h_2&: b_{i,2}\mapsto b'_{i,2} = b_{i+1,2}-s_i\,,
\end{align*}
and, hence, taking the sum, we get
$$
h_1\circ h_2: b_i\mapsto b'_i = b_{i+1}+s_{i-1}-s_i\,.
$$
We conclude with the relations
\begin{align*}
h_1\circ h_2&: c_{i,1}\mapsto c'_{i,1} = -d_{i+1}+a_{i+1,1}\notag\\
h_1\circ h_2&: c_{i,2}\mapsto c'_{i,2} = a_{i+1,2}+d_{i-1}\,,
\end{align*}
and, hence, taking the sum, we get
$$
h_1\circ h_2: c_i\mapsto c'_i = a_{i+1}-d_{i+1}+d_{i-1}\,.
$$
In order to express the auxiliary cycles $s_i$ in terms of the
generating cycles $a_i, b_j, c_k, d_l$ it is convenient to
introduce the relative cycle $\vec{j}_b$ following the bottom
horizontal (in the landscape orientation) side of the square number
$j$ from left to right. In these notations
\begin{align*}
s_1&=c_{1,1}+\vec{0}_b+\vec{1}_b+a_{3,2}\\
d_3&=a_{3,1}+\vec{8}_b+\vec{9}_b+c_{1,2}\,.
\end{align*}
Adding up the latter equations and taking into consideration that
$\vec{0}_b=-\vec{9}_b$ and $\vec{1}_b=-\vec{8}_b$ we obtain
$$
s_1+d_3=c_1+a_3\,.
$$
Analogous considerations show that
\begin{equation}
\label{eq:aux:cycles:s}
s_i=a_{i-1}+c_i-d_{i-1}\,.
\end{equation}
Summarizing the above relations we get
\begin{align*}
h_1\circ h_2&: a_i\mapsto a'_i = a_i-a_{i-1}+c_{i+1}-d_i+d_{i-1}\\
h_1\circ h_2&: b_i\mapsto b'_i =
-a_{i-1}+a_{i+1}+b_{i+1}+c_{i-1}-c_i+d_{i-1}-d_{i+1}\\
h_1\circ h_2&: c_i\mapsto c'_i = a_{i+1}-d_{i+1}+d_{i-1}\\
h_1\circ h_2&: d_i\mapsto d'_i = d_i
\end{align*}
which implies the following expression for $A^{\mathit{hor}}_1$:
$$
A^{\mathit{hor}}_1=\left(
\begin{array}{cccc}
1-\zeta & \zeta^2-\zeta & \zeta^2 & 0 \\
0 & \zeta^2 & 0 & 0 \\
\zeta^2 & \zeta-1 & 0 & 0 \\
\zeta-1 & \zeta-\zeta^2 & \zeta-\zeta^2 & 1
\end{array}
\right)\,.
$$
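The expression for $A^{\mathit{hor}}_1$ can be cross-checked from the summarized relations above. The following Python sketch (ours, assuming numpy; the coordinate encoding of the cycles is an implementation choice) builds the action of $h_1\circ h_2$ on the generating cycles and restricts it to the span of $a_+,b_+,c_+,d_+$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)

def idx(block, i):            # blocks 0..3 stand for a, b, c, d; indices mod 3
    return 3 * block + i % 3

# Action of h_1 o h_2 on the generating cycles, as summarized above.
H = np.zeros((12, 12))
for i in range(3):
    for blk, j, s in [(0, i, 1), (0, i - 1, -1), (2, i + 1, 1),
                      (3, i, -1), (3, i - 1, 1)]:
        H[idx(blk, j), idx(0, i)] = s                  # image of a_i
    for blk, j, s in [(0, i - 1, -1), (0, i + 1, 1), (1, i + 1, 1),
                      (2, i - 1, 1), (2, i, -1), (3, i - 1, 1), (3, i + 1, -1)]:
        H[idx(blk, j), idx(1, i)] = s                  # image of b_i
    for blk, j, s in [(0, i + 1, 1), (3, i + 1, -1), (3, i - 1, 1)]:
        H[idx(blk, j), idx(2, i)] = s                  # image of c_i
    H[idx(3, i), idx(3, i)] = 1                        # d_i -> d_i

plus = np.zeros((12, 4), dtype=complex)                # columns: a_+, b_+, c_+, d_+
for blk in range(4):
    for i in range(3):
        plus[idx(blk, i), blk] = zeta ** i

A1hor = np.linalg.lstsq(plus, H @ plus, rcond=None)[0]
expected = np.array([
    [1 - zeta, zeta**2 - zeta, zeta**2, 0],
    [0, zeta**2, 0, 0],
    [zeta**2, zeta - 1, 0, 0],
    [zeta - 1, zeta - zeta**2, zeta - zeta**2, 1]])
print(np.allclose(A1hor, expected))
```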
\paragraph{\textbf{Step 5 (Figure~\ref{fig:twoline:rotate}).}}
We compute the action in the homology of $\hat S_1$ induced by the
automorphism of $\hat S_1$ associated to the counterclockwise
rotation $r=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ by the angle
$\pi/2$. More specifically, we want to compute the matrix $R_1$ of
the induced map
$$
r_1: {{\mathcal E}_*}_{\hat S_1}(\zeta)\to {{\mathcal E}_*}_{\hat S_1}(\zeta)
$$
in the chosen basis.
In a way similar to the calculation in Step 4, we use the auxiliary cycles
$s_i$ indicated in the left picture of
Figure~\ref{fig:twoline:rotate}. Note that the cycles $a_i, b_j, c_k,
d_l, s_m$ on the surface $\hat S_1$ are defined as the images of the
corresponding cycles on the surface $\hat S_2$. This implies that the
auxiliary cycles $s_i$ satisfy the same
relation~\eqref{eq:aux:cycles:s} as on the surface $\hat S_2$.
We note that
\begin{align*}
r_1&: a_{i,1}\mapsto a'_{i,1} = s_i+a_{i,1}\\
r_1&: a_{i,2}\mapsto a'_{i,2} = a_{i,2}-s_{i+1}\,,
\end{align*}
and, hence, taking the sum and applying
relation~\eqref{eq:aux:cycles:s}, we get
$$
r_1: a_i\mapsto a'_i= a_i+s_i-s_{i+1}=
a_{i-1}+c_i-c_{i+1}+d_i-d_{i-1}
\,.
$$
We proceed with the relations
\begin{align*}
r_1&: b_{i,1}\mapsto b'_{i,1} = d_{i+1}+c_{i,1}\\
r_1&: b_{i,2}\mapsto b'_{i,2} = c_{i,2}-d_{i-1}\,,
\end{align*}
and, hence, taking the sum, we get
$$
r_1: b_i\mapsto b'_i= c_i+d_{i+1}-d_{i-1}\,.
$$
We proceed further with the relations
\begin{align*}
r_1&: c_{i,1}\mapsto c'_{i,1} = -s_{i-1}+b_{i,1}\\
r_1&: c_{i,2}\mapsto c'_{i,2} = b_{i,2}+s_i\,,
\end{align*}
and, hence, taking the sum, we get
$$
r_1: c_i\mapsto c'_i= b_i+s_i-s_{i-1}
=a_{i-1}-a_{i+1}+b_i+c_i-c_{i-1}+d_{i+1}-d_{i-1}\,.
$$
To establish the relations for the images of the cycles $d_i$ we
introduce the relative cycle $\vec j_t$ following from left to right the
top horizontal edge of the square number $j$ in the \textit{initial}
enumeration on the left picture of Figure~\ref{fig:twoline:rotate}.
As usual, ``horizontal'' is considered with respect to the landscape
orientation in Figure~\ref{fig:twoline:rotate}. In these notations we
get
\begin{align*}
r_1&: d_{1,1}\mapsto d'_{1,1} = b_{2,1}-\vec{4}_t\\
r_1&: d_{1,2}\mapsto d'_{1,2} = -\vec{17}_t+b_{2,2}+s_2\,.
\end{align*}
By adding up the latter equations and by taking into account that
$\vec{4}_t=-\vec{17}_t$ we obtain
$$
r_1: d_1\mapsto d'_1= b_2+s_2\,.
$$
Analogous considerations show that
$$
r_1: d_i\mapsto d'_i= b_{i+1}+s_{i+1}
=a_i+b_{i+1}+c_{i+1}-d_i\,.
$$
By applying the above relations to the images of the cycles
$a_+,b_+,c_+,d_+$, we get the following expression for the matrix
$R_1$:
$$
R_1=\left(
\begin{array}{cccc}
\zeta & 0 & \zeta-\zeta^2 & 1 \\
0 & 0 & 1 & \zeta^2 \\
1-\zeta^2 & 1 & 1-\zeta & \zeta^2 \\
1-\zeta & \zeta^2-\zeta & \zeta^2-\zeta & -1
\end{array}
\right)\,.
$$
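As with the shear matrices, $R_1$ can be cross-checked from the relations above. The following Python sketch (ours, assuming numpy; the coordinate encoding is an implementation choice) builds the action of $r_1$ on the generating cycles and restricts it to the span of $a_+,b_+,c_+,d_+$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)

def idx(block, i):            # blocks 0..3 stand for a, b, c, d; indices mod 3
    return 3 * block + i % 3

# Action of r_1 on the generating cycles, as computed above.
R = np.zeros((12, 12))
for i in range(3):
    for blk, j, s in [(0, i - 1, 1), (2, i, 1), (2, i + 1, -1),
                      (3, i, 1), (3, i - 1, -1)]:
        R[idx(blk, j), idx(0, i)] = s                  # image of a_i
    for blk, j, s in [(2, i, 1), (3, i + 1, 1), (3, i - 1, -1)]:
        R[idx(blk, j), idx(1, i)] = s                  # image of b_i
    for blk, j, s in [(0, i - 1, 1), (0, i + 1, -1), (1, i, 1), (2, i, 1),
                      (2, i - 1, -1), (3, i + 1, 1), (3, i - 1, -1)]:
        R[idx(blk, j), idx(2, i)] = s                  # image of c_i
    for blk, j, s in [(0, i, 1), (1, i + 1, 1), (2, i + 1, 1), (3, i, -1)]:
        R[idx(blk, j), idx(3, i)] = s                  # image of d_i

plus = np.zeros((12, 4), dtype=complex)                # columns: a_+, b_+, c_+, d_+
for blk in range(4):
    for i in range(3):
        plus[idx(blk, i), blk] = zeta ** i

R1 = np.linalg.lstsq(plus, R @ plus, rcond=None)[0]
expected = np.array([
    [zeta, 0, zeta - zeta**2, 1],
    [0, 0, 1, zeta**2],
    [1 - zeta**2, 1, 1 - zeta, zeta**2],
    [1 - zeta, zeta**2 - zeta, zeta**2 - zeta, -1]])
print(np.allclose(R1, expected))
```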
\subsection{Choice of concrete paths and calculation of the monodromy}
\label{ss:calculation}
Consider the maps
\begin{align*}
v_1:= r^{-1}_3\cdot h^{-1}_2\cdot r_1 &: H_1(\hat S_1;\field{Z})\to H_1(\hat S_3;\field{Z})\\
v_2:= r^{-1}_2\cdot h^{-1}_3\cdot r_2 &: H_1(\hat S_2;\field{Z})\to H_1(\hat S_2;\field{Z})\\
v_3:= r^{-1}_1\cdot h^{-1}_1\cdot r_3 &: H_1(\hat S_3;\field{Z})\to H_1(\hat S_1;\field{Z})
\end{align*}
induced by the vertical shear
$
v=\begin{pmatrix}1&0\\1&1\end{pmatrix}.
$
In the chosen bases of homology, the restrictions of these linear
maps to the subspaces ${{\mathcal E}_*}_{\hat S_i}(\zeta)$ have matrices
\begin{align*}
A^{vert}_1&=R^{-1}_3\cdot (A^{hor}_2)^{-1}\cdot R_1 =\mathrm{Id}\cdot\mathrm{Id}\cdot R_1 =R_1\\
A^{vert}_2&=R^{-1}_2\cdot (A^{hor}_3)^{-1}\cdot R_2 =\mathrm{Id}\cdot (A^{hor}_3)^{-1}\cdot\mathrm{Id} =(A^{hor}_3)^{-1}\\
A^{vert}_3&=R^{-1}_1\cdot (A^{hor}_1)^{-1}\cdot R_3 =R_1^{-1}\cdot (A^{hor}_1)^{-1}\cdot \mathrm{Id} =R_1^{-1}\cdot (A^{hor}_1)^{-1}
\end{align*}
correspondingly. By multiplying, we obtain
\begin{align*}
A^{vert}_1&=\left(
\begin{array}{cccc}
\zeta & 0 & \zeta-\zeta^2 & 1 \\
0 & 0 & 1 & \zeta^2 \\
1-\zeta^2 & 1 & 1-\zeta & \zeta^2 \\
1-\zeta & \zeta^2-\zeta & \zeta^2-\zeta & -1
\end{array}
\right)
\\
\\
A^{vert}_2&=\left(
\begin{array}{cccc}
0 & \zeta^2 & 0 & \zeta \\
0 & 0 & \zeta^2 & \zeta \\
1 & 0 & 0 & 1 \\
0 & 0 & 0 & -\zeta
\end{array}
\right)
\\
\\
A^{vert}_3&=\left(
\begin{array}{cccc}
0 & 0 & 1 & \zeta^2 \\
\zeta & 0 & 0 & \zeta \\
0 & \zeta & 0 & \zeta \\
0 & 0 & 0 & -\zeta^2
\end{array}
\right)
\end{align*}
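The identities $A^{vert}_1=R_1$, $A^{vert}_2=(A^{hor}_3)^{-1}$ and $A^{vert}_3=R_1^{-1}\cdot(A^{hor}_1)^{-1}$ can be checked directly against the matrices printed above; the following Python sketch (ours, assuming numpy) does so. Note in passing that the printed $A^{vert}_3$ coincides with $A^{hor}_3$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)
z2 = zeta ** 2

A3h = np.array([[0, 0, 1, z2], [zeta, 0, 0, zeta],
                [0, zeta, 0, zeta], [0, 0, 0, -z2]])
A1h = np.array([[1 - zeta, z2 - zeta, z2, 0], [0, z2, 0, 0],
                [z2, zeta - 1, 0, 0], [zeta - 1, zeta - z2, zeta - z2, 1]])
R1  = np.array([[zeta, 0, zeta - z2, 1], [0, 0, 1, z2],
                [1 - z2, 1, 1 - zeta, z2], [1 - zeta, z2 - zeta, z2 - zeta, -1]])

# The three vertical matrices, as printed above.
A1v = R1.copy()
A2v = np.array([[0, z2, 0, zeta], [0, 0, z2, zeta],
                [1, 0, 0, 1], [0, 0, 0, -zeta]])
A3v = A3h.copy()      # the printed A3v coincides with A3h

I = np.eye(4)
print(np.allclose(A2v @ A3h, I),          # A2v = (A3h)^{-1}
      np.allclose(A1h @ R1 @ A3v, I))     # A3v = R1^{-1} (A1h)^{-1}
```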
In the natural bases of homology, the restrictions of these linear
maps to the subspaces $\Lambda^2 {{\mathcal E}_*}_{\hat S_i}(\zeta)$ have matrices
\begin{align*}
W^{hor}_1&=\left(
\begin{array}{cccccc}
\zeta^2-1 & 0 & 0 & -\zeta & 0 & 0 \\
\zeta-\zeta^2 & -\zeta & 0 & \zeta^2-1 & 0 & 0 \\
0 & \zeta-\zeta^2 & 1-\zeta & 1-\zeta^2 & \zeta^2-\zeta & \zeta^2 \\
-\zeta & 0 & 0 & 0 & 0 & 0 \\
\zeta^2-1 & 0 & 0 & 1-\zeta & \zeta^2 & 0 \\
\zeta-\zeta^2 & 1-\zeta & \zeta^2 & 2 \zeta^2-\zeta-1 & \zeta-1 & 0
\end{array}
\right)
\\
\\
W^{hor}_3&=\left(
\begin{array}{cccccc}
0 & -\zeta & -1 & 0 & 0 & \zeta \\
0 & 0 & 0 & -\zeta & -1 & \zeta \\
0 & 0 & 0 & 0 & 0 & -\zeta^2 \\
\zeta^2 & 0 & \zeta^2 & 0 & -\zeta^2 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0
\end{array}
\right)
\end{align*}
\begin{align*}
W^{vert}_1&=\left(
\begin{array}{cccccc}
0 & \zeta & 1 & 0 & 0 & -\zeta \\
\zeta & 1-\zeta & \zeta^2 & \zeta^2-\zeta & -1 & 0 \\
1-\zeta^2 & \zeta^2-\zeta & -1 & \zeta^2+\zeta-2 & \zeta-\zeta^2 & 0 \\
0 & \zeta^2-1 & \zeta-\zeta^2 & -1 & -\zeta^2 & 1 \\
0 & \zeta-1 & 1-\zeta^2 & \zeta-\zeta^2 & 1-\zeta & -\zeta \\
\zeta^2-\zeta & 0 & 0 & 1-\zeta^2 & -\zeta & 0
\end{array}
\right)
\\
\\
W^{vert}_2&=\left(
\begin{array}{cccccc}
0 & 0 & 0 & \zeta & 1 & -1 \\
-\zeta^2 & 0 & -\zeta & 0 & \zeta^2 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 \\
0 & -\zeta^2 & -\zeta & 0 & 0 & \zeta^2 \\
0 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & -\zeta & 0 & 0 & 0
\end{array}
\right)
\\
\\
W^{vert}_3&=\left(
\begin{array}{cccccc}
0 & -\zeta & -1 & 0 & 0 & \zeta \\
0 & 0 & 0 & -\zeta & -1 & \zeta \\
0 & 0 & 0 & 0 & 0 & -\zeta^2 \\
\zeta^2 & 0 & \zeta^2 & 0 & -\zeta^2 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0
\end{array}
\right)
\end{align*}
and $W^{hor}_2=\mathrm{Id}$.
Consider now the following loop $\rho_1$ on $\hat{\mathcal T}$: start with a
horizontal move from $\hat S_1$ and follow the trajectory:
\begin{equation}
\label{eq:rho1}
1\xrightarrow{hor}
2\xrightarrow{vert}
2\xrightarrow{vert}
2\xrightarrow{vert}
2\xrightarrow{hor}
1\xrightarrow{vert}
3\xrightarrow{vert}
1
\end{equation}
The corresponding monodromy matrices $X$ in ${\mathcal E}_*(\zeta)$ and $U$ in
$\Lambda^2 {\mathcal E}_*(\zeta)$ are computed as follows:
$$
X:=A^{vert}_3\cdot A^{vert}_1\cdot A^{hor}_2\cdot\left(A^{vert}_2\right)^3\cdot A^{hor}_1
$$
$$
U:=W^{vert}_3\cdot W^{vert}_1\cdot W^{hor}_2\cdot\left(W^{vert}_2\right)^3\cdot W^{hor}_1
$$
Consider now the second loop $\rho_2$ on $\hat{\mathcal T}$: start with a
vertical move from $\hat S_1$ and follow the trajectory:
\begin{equation}
\label{eq:rho2}
1\xrightarrow{vert}
3\xrightarrow{hor}
3\xrightarrow{hor}
3\xrightarrow{hor}
3\xrightarrow{vert}
1
\end{equation}
The corresponding monodromy matrices $Y$ in ${\mathcal E}_*(\zeta)$ and $V$ in
$\Lambda^2 {\mathcal E}_*(\zeta)$ are computed as follows:
$$
Y:=A^{vert}_3\cdot\left(A^{hor}_3\right)^3\cdot A^{vert}_1
$$
$$
V:=W^{vert}_3\cdot\left(W^{hor}_3\right)^3\cdot W^{vert}_1
$$
As a result we get the following numerical matrices:
$$
X=\left(
\begin{array}{cccc}
\zeta^2-\zeta & \zeta^2+\zeta-1 & \zeta^2-1 & \zeta \\
3 \zeta^2-3 & -\zeta^2+3 \zeta-2 & 3 \zeta-2 & -\zeta^2+\zeta+1 \\
6 \zeta^2-5 & 4 \zeta-4 & \zeta^2+5 \zeta-6 & -3 \zeta^2+3 \zeta +1 \\
-\zeta^2+6 \zeta-5 & -5 \zeta^2+4 \zeta+1 & -6 \zeta^2+5 \zeta+1 & -2 \zeta^2-2 \zeta+3
\end{array}
\right)
$$
$$
Y=\left(
\begin{array}{cccc}
\zeta^2-\zeta & \zeta^2 & \zeta^2-1 & \zeta \\
\zeta^2-\zeta+1 & 2 \zeta^2-\zeta-1 & \zeta^2-1 & \zeta \\
\zeta^2-2 \zeta+1 & 2 \zeta^2-\zeta-1 & 2 \zeta^2-\zeta & \zeta^2+\zeta-1 \\
\zeta^2-1 & \zeta-1 & \zeta-1 & -\zeta^2
\end{array}
\right)
$$
$$
U=\left(
\begin{array}{cccccc}
\zeta^2+\zeta-2 & \zeta^2-\zeta & \zeta-1 & 2 \zeta^2-\zeta
& \zeta & -\zeta^2 \\
3 \zeta^2+3 \zeta
-7 & \zeta-1 & \zeta-2 \zeta^2 & 2 \zeta^2-6 \zeta+4 & 3 \zeta^2-\zeta-1 & 1-\zeta^2 \\
-5 \zeta^2+6 \zeta-1 & 0 & 1-\zeta^2 & 6 \zeta^2-\zeta-5 & -\zeta^2+2 \zeta-2 & 1-\zeta \\
-5 \zeta^2+9 \zeta-4 & -9 \zeta^2+3 \zeta+5 & \zeta^2-7 \zeta+5 & 7 \zeta^2-9 \zeta+2 & 6 \zeta^2-6 & 3 \zeta^2-\zeta-1 \\
-7 \zeta^2-\zeta+8 & \zeta^2-6 \zeta+5 & 5 \zeta^2-2 \zeta-3 & 4 \zeta^2+4 \zeta-8 & -3 \zeta^2+5 \zeta-2 & \zeta-2 \\
3 \zeta^2-6 \zeta+3 & 5 \zeta^2+\zeta-6 & -4 \zeta^2+4 \zeta-1 & -7 \zeta^2+8 \zeta-1 & -4 \zeta^2-2 \zeta+6 & -\zeta^2-\zeta+2
\end{array}
\right)
$$
$$
V=\left(
\begin{array}{cccccc}
-\zeta^2+2 \zeta-2 & 1-\zeta^2 & -\zeta & 2 \zeta^2-2 \zeta & \zeta^2+\zeta-1 & 0 \\
-\zeta^2+2 \zeta-1 & \zeta^2-\zeta & \zeta-1 & 3 \zeta^2-\zeta-1 & 2 \zeta-1 & -\zeta^2 \\
1-\zeta^2 & 0 & 0 & \zeta-1 & -\zeta^2 & 0 \\
-\zeta^2-\zeta+2 & 3 \zeta^2-2 \zeta & \zeta^2+2 \zeta-2 & 2 \zeta^2+2 \zeta-4 & 3 \zeta-3 \zeta^2 & -\zeta^2 \\
\zeta^2-\zeta & \zeta-1 & -\zeta^2 & -2 \zeta^2+\zeta+1 & 1-\zeta
& 0 \\
0 & 1-\zeta^2 & \zeta^2-\zeta & 1-\zeta & \zeta^2-1 & -1
\end{array}
\right)
$$
Computing the determinants we get:
$$
\det(XY-YX)=-285
$$
and
$$
\det(UV-VU)=-5292\,.
$$
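These determinant values can be reproduced from the matrices $A^{hor}_i$, $A^{vert}_i$ alone: since $U$ and $V$ are the actions on $\Lambda^2{\mathcal E}_*(\zeta)$, they are the second compound matrices of $X$ and $Y$, and the particular ordering of the wedge basis does not affect the determinants. The following Python sketch (ours, assuming numpy) recomputes both values.

```python
from itertools import combinations
import numpy as np

zeta = np.exp(2j * np.pi / 3)
z2 = zeta ** 2

A1h = np.array([[1 - zeta, z2 - zeta, z2, 0], [0, z2, 0, 0],
                [z2, zeta - 1, 0, 0], [zeta - 1, zeta - z2, zeta - z2, 1]])
A2h = np.eye(4, dtype=complex)
A3h = np.array([[0, 0, 1, z2], [zeta, 0, 0, zeta],
                [0, zeta, 0, zeta], [0, 0, 0, -z2]])
A1v = np.array([[zeta, 0, zeta - z2, 1], [0, 0, 1, z2],        # A1v = R1
                [1 - z2, 1, 1 - zeta, z2], [1 - zeta, z2 - zeta, z2 - zeta, -1]])
A2v = np.linalg.inv(A3h)               # A2v = (A3h)^{-1}
A3v = np.linalg.inv(A1h @ A1v)         # A3v = R1^{-1} (A1h)^{-1}

X = A3v @ A1v @ A2h @ np.linalg.matrix_power(A2v, 3) @ A1h
Y = A3v @ np.linalg.matrix_power(A3h, 3) @ A1v

def compound2(M):
    """Second compound: the induced action on the wedge basis e_i ^ e_j, i < j."""
    pairs = list(combinations(range(4), 2))
    return np.array([[np.linalg.det(M[np.ix_(p, q)]) for q in pairs] for p in pairs])

U, V = compound2(X), compound2(Y)
d1 = np.linalg.det(X @ Y - Y @ X)
d2 = np.linalg.det(U @ V - V @ U)
print(round(d1.real), round(d2.real))
```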
\section{Computation of the Zariski closure of a monodromy representation}\label{a:Zariski:closure}
Let $\zeta=\exp(2\pi i/3)$ and $\eta=\exp(2\pi i/6)$. Consider the following matrices
$$A=A_3^{hor}=\left(\begin{array}{cccc}0&0&1&\zeta^2 \\ \zeta&0&0&\zeta \\ 0&\zeta&0&\zeta \\ 0&0&0&-\zeta^2\end{array}\right)$$
and
$$B=A_1^{vert}\cdot A_3^{vert}=\left(\begin{array}{cccc}0&\zeta^2-1&\zeta&0 \\ 0&\zeta&0&0 \\
\zeta&\zeta-\zeta^2&1-\zeta^2&0 \\ 1-\zeta^2&1-\zeta^2&1-\zeta&1\end{array}\right)$$
The product
$$C=B\cdot A=\left(\begin{array}{cccc}1-\zeta&\zeta^2&0&-2\zeta \\ \zeta^2&0&0&\zeta^2
\\ \zeta^2-1&\zeta-1&\zeta&-2 \\ \zeta-1&\zeta-\zeta^2&1-\zeta^2&2\zeta\end{array}\right)$$
has characteristic polynomial
$$T^4+(\zeta^2-\zeta)T^3-2\zeta^2T^2+(\zeta^2-1)T+\zeta$$
$$= (T-1)\cdot(T^3-2\zeta T^2+2T-\zeta)$$
Denoting by $\alpha$, $\beta$ and $\mu$ the roots of $T^3-2\zeta T^2+2T-\zeta=0$ with $|\alpha|=|\beta|^{-1}>1=|\mu|$, we have that $C$ has eigenvalues $\alpha$, $1$, $\mu$ and $\beta$ with eigenvectors
\begin{itemize}
\item $v_{\alpha}=\left(-\frac{(-\zeta+2\zeta\alpha)}{(-1+\alpha)\cdot(\zeta+\alpha)}, \frac{\eta(1+\zeta-\alpha)}{(-1+\alpha)\cdot(\zeta+\alpha)}, \frac{\eta(1+\zeta+2\zeta\alpha)}{\eta+\alpha^2}, 1\right)$,
\item $v_1=(\zeta,1,0,0)$,
\item $v_{\mu}=\left(-\frac{(-\zeta+2\zeta\mu)}{(-1+\mu)\cdot(\zeta+\mu)}, \frac{\eta(1+\zeta-\mu)}{(-1+\mu)\cdot(\zeta+\mu)}, \frac{\eta(1+\zeta+2\zeta\mu)}{\eta+\mu^2}, 1\right)$ and
\item $v_{\beta}=\left(-\frac{(-\zeta+2\zeta\beta)}{(-1+\beta)\cdot(\zeta+\beta)}, \frac{\eta(1+\zeta-\beta)}{(-1+\beta)\cdot(\zeta+\beta)}, \frac{\eta(1+\zeta+2\zeta\beta)}{\eta+\beta^2}, 1\right)$
\end{itemize}
Since $\det A=-\zeta$ and $\det B=-1$, we have that $\det C=\zeta$. In particular, $\det C^3=1$ and hence $\alpha^3\beta^3\mu^3=1$. This allows us to write
$\alpha^3=R\cdot e^{iT_{\alpha}}$, $\beta^3=R^{-1}\cdot e^{iT_{\beta}}$ and $\mu^3=e^{iT_{\mu}}$ with $T_{\mu}=-T_{\alpha}-T_{\beta}$.
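The algebra above is easy to confirm numerically. The following Python sketch (ours, assuming numpy) checks the product $C=B\cdot A$, the stated factorization of the characteristic polynomial, and $\det C=\zeta$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)
z2 = zeta ** 2

A = np.array([[0, 0, 1, z2], [zeta, 0, 0, zeta],
              [0, zeta, 0, zeta], [0, 0, 0, -z2]])
B = np.array([[0, z2 - 1, zeta, 0], [0, zeta, 0, 0],
              [zeta, zeta - z2, 1 - z2, 0], [1 - z2, 1 - z2, 1 - zeta, 1]])
C = B @ A

C_printed = np.array([[1 - zeta, z2, 0, -2 * zeta],
                      [z2, 0, 0, z2],
                      [z2 - 1, zeta - 1, zeta, -2],
                      [zeta - 1, zeta - z2, 1 - z2, 2 * zeta]])

# Characteristic polynomial of C via its eigenvalues, compared with
# the factorization (T - 1)(T^3 - 2 zeta T^2 + 2 T - zeta).
eig = np.linalg.eigvals(C)
p1 = np.poly(eig)                                  # monic char poly of C
p2 = np.polymul([1, -1], [1, -2 * zeta, 2, -zeta])
print(np.allclose(C, C_printed), np.allclose(p1, p2), np.isclose(np.linalg.det(C), zeta))
```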
\begin{theorem}\label{t.ZclAB}The subgroup $\langle A, B\rangle\,\cap\, SU(3,1)$ is Zariski dense in $SU(3,1)$.
\end{theorem}
We start by noticing that $C^3\in SU(3,1)$ is contained in the $1$-parameter subgroup of $SU(3,1)$ with infinitesimal generator $X\in \mathfrak{su}(3,1)$ where
\begin{itemize}
\item $Xv_{\alpha}=(r+iT_{\alpha})\cdot v_{\alpha}$,
\item $Xv_1=0$,
\item $Xv_{\mu}=iT_{\mu} v_{\mu}$ and
\item $X v_{\beta}=(-r+iT_{\beta})\cdot v_{\beta}$
\end{itemize}
and $r=\log R=3\log|\alpha|$.
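The moduli pattern $|\alpha|=|\beta|^{-1}>1=|\mu|$ and the quantity $r=3\log|\alpha|$ can be obtained numerically. The following Python sketch (ours, assuming numpy) solves the cubic and checks Vieta's relation $\alpha\beta\mu=\zeta$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)

# Roots of T^3 - 2 zeta T^2 + 2 T - zeta = 0, sorted by modulus: beta, mu, alpha.
beta, mu, alpha = sorted(np.roots([1, -2 * zeta, 2, -zeta]), key=abs)

R = abs(alpha) ** 3            # alpha^3 = R e^{i T_alpha}
r = np.log(R)                  # r = log R = 3 log|alpha|
print(abs(alpha), abs(mu), abs(beta), r)
```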
For computational reasons, let's write $X$ in terms of the canonical basis $\{e_1,\dots,e_4\}$ of $\field{C}^4$ as follows. Put
$$P=\left(
\begin{array}{cccc}
-\frac{(-\zeta+2\zeta\alpha)}{(-1+\alpha)\cdot(\zeta+\alpha)} & \zeta & -\frac{(-\zeta+2\zeta\mu)}{(-1+\mu)\cdot(\zeta+\mu)} & -\frac{(-\zeta+2\zeta\beta)}{(-1+\beta)\cdot(\zeta+\beta)} \\
\frac{\eta(1+\zeta-\alpha)}{(-1+\alpha)\cdot(\zeta+\alpha)} &1 & \frac{\eta(1+\zeta-\mu)}{(-1+\mu)\cdot(\zeta+\mu)} & \frac{\eta(1+\zeta-\beta)}{(-1+\beta)\cdot(\zeta+\beta)} \\
\frac{\eta(1+\zeta+2\zeta\alpha)}{\eta+\alpha^2} & 0 & \frac{\eta(1+\zeta+2\zeta\mu)}{\eta+\mu^2} & \frac{\eta(1+\zeta+2\zeta\beta)}{\eta+\beta^2} \\
1& 0 & 1 & 1
\end{array}
\right)$$
so that $Pe_1=v_\alpha$, $Pe_2=v_1$, $Pe_3=v_\mu$ and $Pe_4=v_\beta$. Then,
$$X=P\cdot D_X\cdot P^{-1}$$
where
$$D_X=\left(\begin{array}{cccc}
r+iT_\alpha&0&0&0\\
0&0&0&0 \\
0&0& iT_\mu&0 \\
0&0&0&-r+iT_\beta
\end{array}\right)$$
Next, we observe that $A$ has order $18$ (and $B$ has order $6$), so
that we can construct
$$X(n)=A^{n-1}\cdot X\cdot A^{-(n-1)}$$
for $n=1,\dots, 17$.
Applying the method of least squares (cf. Remark \ref{r.Mathematica} below) we verify that
$$X(1), \dots, X(9)$$
are $9$ linearly independent vectors.
Now we use the matrix $B$ to conjugate the
resulting independent vectors $X(1),\dots, X(9)$. We
construct vectors
$$
Y(n)=B\cdot X(n)\cdot B\,,
$$
and applying the method of least squares (cf. Remark \ref{r.Mathematica} below) verify that
$$Y(1), Y(3),\dots,Y(7)$$
give six more vectors such that $X(1),\dots, X(9), Y(1), Y(3), \dots, Y(7)$ span a vector space of dimension $15$.
Note that the vectors $X(1),\dots,X(9), Y(1), Y(3),\dots,Y(7)$ belong to the Lie algebra $\mathfrak{g}_0$ of the Zariski closure $G=\textrm{Zcl}(\langle A, B\rangle\cap SU(3,1))$ of $\langle A, B\rangle\cap SU(3,1)$. Indeed, this is a consequence of the following general lemma:
\begin{lemma} Let $C$ be a hyperbolic or unipotent element of $SU(p,q)$. Then, the logarithm $X\in \mathfrak{su}(p,q)$ of $C$ belongs to the Lie algebra $\mathfrak{g}$ of any Zariski closed subgroup $G$ of $SU(p,q)$ containing $C$.
\end{lemma}
\begin{proof} Since $C$ is hyperbolic or unipotent, its iterates $C^n$, $n\in \field{Z}$, form an infinite discrete subset of $SU(p,q)$. Therefore, any Zariski closed subgroup $G$ of $SU(p,q)$ containing $C$ has dimension at least $1$: indeed, any $0$-dimensional Zariski closed subgroup is finite. On the other hand, denoting by $X$ the logarithm of $C$, we have that $\{\exp(tX):t\in\field{R}\}$ is the smallest $1$-parameter subgroup containing all iterates $C^n$, $n\in\field{Z}$, of $C$. It follows that $\{\exp(tX):t\in\field{R}\}\subset G$ and, thus, $X\in\mathfrak{g}$, where $\mathfrak{g}$ is the Lie algebra of $G$.
\end{proof}
In particular, coming back to the proof of Theorem \ref{t.ZclAB},
we have proved that $\mathfrak{g}_0\subset \mathfrak{su}(3,1)$ is a vector space of dimension at least $15$. Since $\mathfrak{su}(3,1)$ is a $15$-dimensional real Lie algebra, we conclude that $\mathfrak{g}_0=\mathfrak{su}(3,1)$, and hence $G=SU(3,1)$.
This completes the proof of Theorem \ref{t.ZclAB}.
\begin{remark}\label{r.Mathematica} On the webpages of the last two authors (C.M. and A.Z.), the reader will find a Mathematica routine called ``FMZ3-Zariski-numerics$\_$det1.nb'' where the numerical verification of the linear independence of the vectors $X(n)$, $n=1,\dots,9$, $Y(m)$, $m=1,3,\dots,7$, is carried out.
\end{remark}
\subsection*{Acknowledgments}
The authors are grateful to A.~Eskin, M.~Kontsevich, M.~M\"oller, and
J.-C.~Yoccoz for extremely stimulating discussions, and to Y. Guivarch
for the question behind Proposition~\ref{prop:isometryU31} above.
We highly appreciated explanations of M.~Kontsevich and A.~Wright
concerning the Deligne Semisimplicity Theorem.
We would like to thank M.~M\"oller for suggesting to us that the statement of Lemma~\ref{lm:Ezeta:strong:irreducibility} should hold, A.~Eskin for his ideas
leading to the proof of this Lemma, and G.~Pearlstein, I.~Rivin and
M.~Sapir for interesting discussions around the computation of the
Zariski closure of concrete examples.
The authors are thankful to Coll\`ege de France, HIM, IHES, IUF,
IMPA, MPIM, and the Universities of Chicago, Maryland, Rennes 1, and
Paris 7 for hospitality during the preparation of this paper.
C.M. was partially supported by the
Balzan Research Project of J. Palis, and C.M. and A.Z. were partially supported by the French ANR grant ``GeoDyM'' (ANR-11-BS01-0004).
\special{
psfile=oneline18.eps
hscale=85
vscale=85
angle=90
hoffset=40
voffset=-453
}
\begin{picture}(0,0)(-9,22)
\begin{picture}(0,0)(0,0)
\put(0,0){11}
\put(0,-24.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(0,0){17}
\put(0,-24.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(3,0){5}
\put(3,-24.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,0)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(5,145)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(7.5,290)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(10,436)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\end{picture}
\put(-12,15){\tiny $0$}
\put(-30,4.5){\rotatebox{180}{\tiny $12$}}
\put(4,4.5){\rotatebox{180}{\tiny $4$}}
\put(-30,-19.5){\rotatebox{180}{\tiny $13$}}
\put(4,-19.5){\rotatebox{180}{\tiny $17$}}
\put(-30,-44){\rotatebox{180}{\tiny $14$}}
\put(4,-44){\rotatebox{180}{\tiny $2$}}
\put(-26,-68){\rotatebox{180}{\tiny $3$}}
\put(4,-68){\rotatebox{180}{\tiny $15$}}
\put(-26,-92.5){\rotatebox{180}{\tiny $4$}}
\put(4,-92.5){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $5$}}
\put(4,-117){\rotatebox{180}{\tiny $13$}}
\put(-26,-139.5){\rotatebox{180}{\tiny $0$}}
\put(4,-139.5){\rotatebox{180}{\tiny $10$}}
\put(-26,-164){\rotatebox{180}{\tiny $1$}}
\put(4,-164){\rotatebox{180}{\tiny $5$}}
\put(-26,-189){\rotatebox{180}{\tiny $2$}}
\put(4,-189){\rotatebox{180}{\tiny $8$}}
\put(-26,-213){\rotatebox{180}{\tiny $9$}}
\put(4,-213){\rotatebox{180}{\tiny $3$}}
\put(-30,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $6$}}
\put(-30,-262){\rotatebox{180}{\tiny $11$}}
\put(4,-262){\rotatebox{180}{\tiny $1$}}
\put(-26,-285.5){\rotatebox{180}{\tiny $6$}}
\put(4,-285.5){\rotatebox{180}{\tiny $16$}}
\put(-26,-310){\rotatebox{180}{\tiny $7$}}
\put(4,-310){\rotatebox{180}{\tiny $11$}}
\put(-26,-334.5){\rotatebox{180}{\tiny $8$}}
\put(4,-334.5){\rotatebox{180}{\tiny $14$}}
\put(-30,-359){\rotatebox{180}{\tiny $15$}}
\put(4,-359){\rotatebox{180}{\tiny $9$}}
\put(-30,-383.5){\rotatebox{180}{\tiny $16$}}
\put(4,-383.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-408){\rotatebox{180}{\tiny $17$}}
\put(4,-408){\rotatebox{180}{\tiny $7$}}
\put(-13,-428){\tiny $11$}
\put(-13.5,-446){$\hat S_3$}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=195
voffset=-298.5
}
\begin{picture}(0,0)(-165,-135.5)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(-16,12){\tiny\textit C}
\put(14.5,12){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88.5){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61.5){\tiny\textit B}
\put(-40,-86){\tiny\textit E}
\put(-14,-89){\tiny\textit C}
\put(14.5,-86){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $6$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $7$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $2$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $3$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $1$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $14$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $15$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $13$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $8$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $9$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\put(-26,-300){$\hat S_2$}
\end{picture}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=350
voffset=-286.5
}
\begin{picture}(0,0)(-320,-147.5)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,144.5)
\put(-16,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-85){\tiny\textit B}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-85){\tiny\textit B}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-89){\tiny\textit B}
\put(-14,-89){\tiny\textit C}
\put(14,-89){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $8$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $3$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $6$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $1$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $2$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $15$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $0$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $13$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $14$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $9$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $12$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $7$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\put(-26,-300){$\hat S_1$}
\end{picture}
\end{picture}
\special{
psfile=Tdisc3.eps
hscale=35
vscale=35
angle=180
hoffset=249
voffset=-470
}
\begin{picture}(0,0)(-248,411)
\put(-176,-66){$h$}
\put(-118,-61){$r$}
\put(-67,-50){$h$}
\put(-67,-83){$h$}
\put(-9,-66){$r$}
\put(-140,-72){\scriptsize $\hat S_3$}
\put(-97,-72){\scriptsize $\hat S_2$}
\put(-36,-66){\scriptsize $\hat S_1$}
\end{picture}
\begin{figure}
\caption{\label{fig:PSL2Z:orbit}\label{fig:three:surfaces}}
\end{figure}
\special{
psfile=oneline18.eps
hscale=85
vscale=85
angle=90
hoffset=40
voffset=-538
}
\begin{picture}(0,0)(-9,107)
\begin{picture}(0,0)(0,0)
\put(0,0){11}
\put(0,-24.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(0,0){17}
\put(0,-24.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(3,0){5}
\put(3,-24.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,0)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(5,145)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(7.5,290)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(10,436)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\end{picture}
\put(-12,15){\tiny $0$}
\put(-30,4.5){\rotatebox{180}{\tiny $12$}}
\put(4,4.5){\rotatebox{180}{\tiny $4$}}
\put(-30,-19.5){\rotatebox{180}{\tiny $13$}}
\put(4,-19.5){\rotatebox{180}{\tiny $17$}}
\put(-30,-44){\rotatebox{180}{\tiny $14$}}
\put(4,-44){\rotatebox{180}{\tiny $2$}}
\put(-26,-68){\rotatebox{180}{\tiny $3$}}
\put(4,-68){\rotatebox{180}{\tiny $15$}}
\put(-26,-92.5){\rotatebox{180}{\tiny $4$}}
\put(4,-92.5){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $5$}}
\put(4,-117){\rotatebox{180}{\tiny $13$}}
\put(-26,-139.5){\rotatebox{180}{\tiny $0$}}
\put(4,-139.5){\rotatebox{180}{\tiny $10$}}
\put(-26,-164){\rotatebox{180}{\tiny $1$}}
\put(4,-164){\rotatebox{180}{\tiny $5$}}
\put(-26,-189){\rotatebox{180}{\tiny $2$}}
\put(4,-189){\rotatebox{180}{\tiny $8$}}
\put(-26,-213){\rotatebox{180}{\tiny $9$}}
\put(4,-213){\rotatebox{180}{\tiny $3$}}
\put(-30,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $6$}}
\put(-30,-262){\rotatebox{180}{\tiny $11$}}
\put(4,-262){\rotatebox{180}{\tiny $1$}}
\put(-26,-285.5){\rotatebox{180}{\tiny $6$}}
\put(4,-285.5){\rotatebox{180}{\tiny $16$}}
\put(-26,-310){\rotatebox{180}{\tiny $7$}}
\put(4,-310){\rotatebox{180}{\tiny $11$}}
\put(-26,-334.5){\rotatebox{180}{\tiny $8$}}
\put(4,-334.5){\rotatebox{180}{\tiny $14$}}
\put(-30,-359){\rotatebox{180}{\tiny $15$}}
\put(4,-359){\rotatebox{180}{\tiny $9$}}
\put(-30,-383.5){\rotatebox{180}{\tiny $16$}}
\put(4,-383.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-408){\rotatebox{180}{\tiny $17$}}
\put(4,-408){\rotatebox{180}{\tiny $7$}}
\put(-13,-428){\tiny $11$}
\end{picture}
\special{
psfile=oneline18_arcs.eps
hscale=85
vscale=85
angle=90
hoffset=115
voffset=-528
}
\begin{picture}(0,0)(-77.5,81)
\begin{picture}(0,0)(0,0)
\put(2,4){\textcolor{blue}{$d_{2,2}$}}
\put(27,-25){$c_3$}
\put(27,-75){\textcolor{mygreen}{$b_3$}}
\put(27,-122){\textcolor{red}{$a_3$}}
\end{picture}
\begin{picture}(0,0)(2.5,146)
\put(2,9){\textcolor{blue}{$d_{3,1}$}}
\put(2,-11){\textcolor{blue}{$d_{1,2}$}}
\put(27,-25){$c_2$}
\put(27,-75){\textcolor{mygreen}{$b_2$}}
\put(27,-122){\textcolor{red}{$a_2$}}
\end{picture}
\begin{picture}(0,0)(5,291)
\put(2,9){\textcolor{blue}{$d_{2,1}$}}
\put(2,-11){\textcolor{blue}{$d_{3,2}$}}
\put(27,-25){$c_1$}
\put(27,-75){\textcolor{mygreen}{$b_1$}}
\put(27,-122){\textcolor{red}{$a_1$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(2,9){\textcolor{blue}{$d_{1,1}$}}
\end{picture}
\end{picture}
\special{
psfile=oneline18_arcs_declined.eps
hscale=85
vscale=85
angle=90
hoffset=175.5
voffset=-516
}
\begin{picture}(0,0)(-147,81)
\begin{picture}(0,0)(0,0)
\put(2,0){\textcolor{blue}{$d'_{2,2}$}}
\put(27,-25){$c'_3$}
\put(27,-75){\textcolor{mygreen}{$b'_3$}}
\put(27,-122){\textcolor{red}{$a'_3$}}
\end{picture}
\begin{picture}(0,0)(2.5,146)
\put(2,26){\textcolor{blue}{$d'_{3,1}$}}
\put(2,2){\textcolor{blue}{$d'_{1,2}$}}
\put(27,-25){$c'_2$}
\put(27,-75){\textcolor{mygreen}{$b'_2$}}
\put(27,-122){\textcolor{red}{$a'_2$}}
\end{picture}
\begin{picture}(0,0)(5,291)
\put(2,26){\textcolor{blue}{$d'_{2,1}$}}
\put(2,2){\textcolor{blue}{$d'_{3,2}$}}
\put(27,-25){$c'_1$}
\put(27,-75){\textcolor{mygreen}{$b'_1$}}
\put(27,-122){\textcolor{red}{$a'_1$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(2,27){\textcolor{blue}{$d'_{1,1}$}}
\end{picture}
\end{picture}
\special{
psfile=oneline18_arcs_declined2.eps
hscale=85
vscale=85
angle=90
hoffset=245.5
voffset=-420
}
\begin{picture}(0,0)(-217,82)
\begin{picture}(0,0)(0,0)
\put(27,70){\textcolor{mygreen}{$b'_1$}}
\put(27,23){\textcolor{red}{$a'_1$}}
\put(2,26){\textcolor{blue}{$d'_{1,1}$}}
\put(2,0){\textcolor{blue}{$d'_{2,2}$}}
\put(27,-25){$c'_3$}
\put(27,-75){\textcolor{mygreen}{$b'_3$}}
\put(27,-122){\textcolor{red}{$a'_3$}}
\end{picture}
\begin{picture}(0,0)(2.5,146)
\put(2,26){\textcolor{blue}{$d'_{3,1}$}}
\put(2,2){\textcolor{blue}{$d'_{1,2}$}}
\put(27,-25){$c'_2$}
\put(27,-75){\textcolor{mygreen}{$b'_2$}}
\put(27,-122){\textcolor{red}{$a'_2$}}
\end{picture}
\begin{picture}(0,0)(5,291)
\put(2,26){\textcolor{blue}{$d'_{2,1}$}}
\put(2,2){\textcolor{blue}{$d'_{3,2}$}}
\put(27,-25){$c'_1$}
\end{picture}
\end{picture}
\special{
psfile=oneline18_arcs_declined3.eps
hscale=85
vscale=85
angle=90
hoffset=316
voffset=-420.5
}
\begin{picture}(0,0)(-287.5,-14.5)
\begin{picture}(0,0)(0,0)
\put(4,-10){\textcolor{blue}{$d_{2,2}$}}
\put(27,-25){$c_3$}
\put(-9,-36.5){$e_3$}
\put(27,-75){\textcolor{mygreen}{$b_3$}}
\put(5,-88){\textcolor{blue}{$d'_{1,1}$}}
\put(-9,-109){$e_1$}
\put(27,-122){\textcolor{red}{$a_3$}}
\end{picture}
\begin{picture}(0,0)(2.5,146)
\put(5,8){\textcolor{blue}{$d_{3,1}$}}
\put(4,-10){\textcolor{blue}{$d_{1,2}$}}
\put(27,-25){$c_2$}
\put(-9,-36.5){$e_2$}
\put(27,-75){\textcolor{mygreen}{$b_2$}}
\put(5,-79){\textcolor{blue}{$d'_{1,2}$}}
\put(-9,-109){$e_3$}
\put(27,-122){\textcolor{red}{$a_2$}}
\end{picture}
\begin{picture}(0,0)(5,291)
\put(4,9){\textcolor{blue}{$d_{2,1}$}}
\put(4,-10){\textcolor{blue}{$d_{3,2}$}}
\put(27,-25){$c_1$}
\put(-9,-36.5){$e_1$}
\put(27,-75){\textcolor{mygreen}{$b_1$}}
\put(-9,-109){$e_2$}
\put(27,-122){\textcolor{red}{$a_1$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(4,9){\textcolor{blue}{$d_{1,1}$}}
\end{picture}
\end{picture}
\vspace*{501pt}
\begin{figure}
\caption{\label{fig:oneline:shear}}
\end{figure}
\special{
psfile=oneline18.eps
hscale=85
vscale=85
angle=90
hoffset=40
voffset=-448
}
\begin{picture}(0,0)(-9,17)
\begin{picture}(0,0)(0,0)
\put(0,0){11}
\put(0,-24.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(0,0){17}
\put(0,-24.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(3,0){5}
\put(3,-24.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,0)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(5,145)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(7.5,290)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(14.5,-37){\tiny\textit C}
\put(-16,-61){\tiny\textit B}
\put(14.5,-61){\tiny\textit E}
\put(14.5,-86){\tiny\textit C}
\put(14.5,-109.5){\tiny\textit D}
\end{picture}
\begin{picture}(0,0)(10,436)
\put(-16,12){\tiny\textit A}
\put(14.5,12){\tiny\textit C}
\end{picture}
\put(-12,15){\tiny $0$}
\put(-30,4.5){\rotatebox{180}{\tiny $12$}}
\put(4,4.5){\rotatebox{180}{\tiny $4$}}
\put(-30,-19.5){\rotatebox{180}{\tiny $13$}}
\put(4,-19.5){\rotatebox{180}{\tiny $17$}}
\put(-30,-44){\rotatebox{180}{\tiny $14$}}
\put(4,-44){\rotatebox{180}{\tiny $2$}}
\put(-26,-68){\rotatebox{180}{\tiny $3$}}
\put(4,-68){\rotatebox{180}{\tiny $15$}}
\put(-26,-92.5){\rotatebox{180}{\tiny $4$}}
\put(4,-92.5){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $5$}}
\put(4,-117){\rotatebox{180}{\tiny $13$}}
\put(-26,-139.5){\rotatebox{180}{\tiny $0$}}
\put(4,-139.5){\rotatebox{180}{\tiny $10$}}
\put(-26,-164){\rotatebox{180}{\tiny $1$}}
\put(4,-164){\rotatebox{180}{\tiny $5$}}
\put(-26,-189){\rotatebox{180}{\tiny $2$}}
\put(4,-189){\rotatebox{180}{\tiny $8$}}
\put(-26,-213){\rotatebox{180}{\tiny $9$}}
\put(4,-213){\rotatebox{180}{\tiny $3$}}
\put(-30,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $6$}}
\put(-30,-262){\rotatebox{180}{\tiny $11$}}
\put(4,-262){\rotatebox{180}{\tiny $1$}}
\put(-26,-285.5){\rotatebox{180}{\tiny $6$}}
\put(4,-285.5){\rotatebox{180}{\tiny $16$}}
\put(-26,-310){\rotatebox{180}{\tiny $7$}}
\put(4,-310){\rotatebox{180}{\tiny $11$}}
\put(-26,-334.5){\rotatebox{180}{\tiny $8$}}
\put(4,-334.5){\rotatebox{180}{\tiny $14$}}
\put(-30,-359){\rotatebox{180}{\tiny $15$}}
\put(4,-359){\rotatebox{180}{\tiny $9$}}
\put(-30,-383.5){\rotatebox{180}{\tiny $16$}}
\put(4,-383.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-408){\rotatebox{180}{\tiny $17$}}
\put(4,-408){\rotatebox{180}{\tiny $7$}}
\put(-13,-428){\tiny $11$}
\put(5,25){$\hat S_3$}
\end{picture}
\special{
psfile=oneline18_arcs.eps
hscale=85
vscale=85
angle=90
hoffset=78
voffset=-438
}
\begin{picture}(0,0)(-40.5,-9)
\begin{picture}(0,0)(0,0)
\put(2,4){\textcolor{blue}{$d_{2,2}$}}
\put(27,-25){$c_3$}
\put(27,-75){\textcolor{mygreen}{$b_3$}}
\put(27,-122){\textcolor{red}{$a_3$}}
\end{picture}
\begin{picture}(0,0)(2.5,146)
\put(2,9){\textcolor{blue}{$d_{3,1}$}}
\put(2,-11){\textcolor{blue}{$d_{1,2}$}}
\put(27,-25){$c_2$}
\put(27,-75){\textcolor{mygreen}{$b_2$}}
\put(27,-122){\textcolor{red}{$a_2$}}
\end{picture}
\begin{picture}(0,0)(5,291)
\put(2,9){\textcolor{blue}{$d_{2,1}$}}
\put(2,-11){\textcolor{blue}{$d_{3,2}$}}
\put(27,-25){$c_1$}
\put(27,-75){\textcolor{mygreen}{$b_1$}}
\put(27,-122){\textcolor{red}{$a_1$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(2,9){\textcolor{blue}{$d_{1,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=180
hoffset=305
voffset=-452
}
\begin{picture}(0,0)(-33,459)
\put(-24.5,7){\rotatebox{180}{11}}
\put(0,0.25){12}
\put(27,7){\rotatebox{180}{1}}
\put(49,0.25){16}
\put(75,7){\rotatebox{180}{5}}
\put(99.5,0.25){6}
\put(123,7){\rotatebox{180}{13}}
\put(146.5,0.25){10}
\put(171,7){\rotatebox{180}{17}}
\put(197,0.25){0}
\put(220,7){\rotatebox{180}{7}}
\put(244.5,0.25){4}
\put(27,-16.6){\rotatebox{180}{2}}
\put(49,-24.25){15}
\put(123,-16.5){\rotatebox{180}{14}}
\put(148.5,-24.25){9}
\put(220,-16.5){\rotatebox{180}{8}}
\put(244.5,-24.25){3}
\put(-21.5,22){\rotatebox{180}{\tiny 10}}
\put(2.5,18){\tiny 13}
\put(28.5,22){\rotatebox{180}{\tiny 0}}
\put(51,18){\tiny 17}
\put(76.5,22){\rotatebox{180}{\tiny 4}}
\put(100.5,18){\tiny 7}
\put(123.5,22){\rotatebox{180}{\tiny 12}}
\put(143,18){\tiny 11}
\put(171.5,22){\rotatebox{180}{\tiny 16}}
\put(198,18){\tiny 1}
\put(221,22){\rotatebox{180}{\tiny 6}}
\put(246,18){\tiny 5}
\put(-35,1){\tiny 4}
\put(261,5){\rotatebox{180}{\tiny 11}}
\put(-21,-10){\rotatebox{180}{\tiny 0}}
\put(3.5,-14){\tiny 5}
\put(75,-10){\rotatebox{180}{\tiny 12}}
\put(99,-14){\tiny 17}
\put(174,-10){\rotatebox{180}{\tiny 6}}
\put(196.5,-14){\tiny 11}
\put(14,-22){\tiny 9}
\put(67.5,-18){\rotatebox{180}{\tiny 8}}
\put(110,-22){\tiny 3}
\put(164,-18){\rotatebox{180}{\tiny 2}}
\put(204,-22){\tiny 15}
\put(261,-18){\rotatebox{180}{\tiny 14}}
\put(28.5,-34){\rotatebox{180}{\tiny 3}}
\put(51,-38){\tiny 14}
\put(124.5,-34){\rotatebox{180}{\tiny 15}}
\put(149,-38){\tiny 8}
\put(221,-34){\rotatebox{180}{\tiny 9}}
\put(245,-38){\tiny 2}
\end{picture}
\special{
psfile=twolines18_1.eps
hscale=85
vscale=85
angle=180
hoffset=311
voffset=-501
}
\begin{picture}(0,0)(-30,506)
\begin{picture}(0,0)(0,0)
\put(0,0){\textcolor{red}{$a_{2,1}$}}
\put(-8,-19){\textcolor{blue}{$d_2$}}
\put(25,0){\textcolor{red}{$a_{1,2}$}}
\put(50,0){\textcolor{black}{$c_{2,1}$}}
\put(74,0){\textcolor{black}{$c_{1,2}$}}
\put(48,-24.5){\textcolor{mygreen}{$b_{2,2}$}}
\put(75,-24.5){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(-94.5,0)
\put(0,0){\textcolor{red}{$a_{3,1}$}}
\put(-8,-19){\textcolor{blue}{$d_3$}}
\put(25,0){\textcolor{red}{$a_{2,2}$}}
\put(50,0){\textcolor{black}{$c_{3,1}$}}
\put(74,0){\textcolor{black}{$c_{2,2}$}}
\put(48,-24.5){\textcolor{mygreen}{$b_{3,2}$}}
\put(75,-24.5){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(-189,0)
\put(0,0){\textcolor{red}{$a_{1,1}$}}
\put(-8,-19){\textcolor{blue}{$d_1$}}
\put(25,0){\textcolor{red}{$a_{3,2}$}}
\put(50,0){\textcolor{black}{$c_{1,1}$}}
\put(74,0){\textcolor{black}{$c_{3,2}$}}
\put(48,-24.5){\textcolor{mygreen}{$b_{1,2}$}}
\put(75,-24.5){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_1.eps
hscale=85
vscale=85
angle=90
hoffset=294.5
voffset=-263.5
}
\begin{picture}(0,0)(-259.5,-182)
\begin{picture}(0,0)(2.5,244)
\put(-10,81){\textcolor{blue}{$d_2$}}
\put(5,66.5){\textcolor{red}{$a_{2,1}$}}
\put(5,50){\textcolor{red}{$a_{1,2}$}}
\put(5,18.5){\textcolor{black}{$c_{2,1}$}}
\put(5,-1){\textcolor{black}{$c_{1,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{2,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(-10,81){\textcolor{blue}{$d_3$}}
\put(5,66.5){\textcolor{red}{$a_{3,1}$}}
\put(5,50){\textcolor{red}{$a_{2,2}$}}
\put(5,18.5){\textcolor{black}{$c_{3,1}$}}
\put(5,-1){\textcolor{black}{$c_{2,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{3,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-10,81){\textcolor{blue}{$d_1$}}
\put(5,66.5){\textcolor{red}{$a_{1,1}$}}
\put(5,50){\textcolor{red}{$a_{3,2}$}}
\put(5,18.5){\textcolor{black}{$c_{1,1}$}}
\put(5,-1){\textcolor{black}{$c_{3,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{1,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=356
voffset=-245
}
\begin{picture}(0,0)(-326,-189)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(-16,12){\tiny\textit C}
\put(14.5,12){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88.5){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61.5){\tiny\textit B}
\put(-40,-86){\tiny\textit E}
\put(-14,-89){\tiny\textit C}
\put(14.5,-86){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $6$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $7$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $2$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $3$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $1$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $14$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $15$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $13$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $8$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $9$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\put(-47,24){$\hat S_2$}
\end{picture}
\end{picture}
\vspace*{261pt}
\begin{figure}
\caption{\label{fig:oneline:rotate}}
\end{figure}
\vspace*{-10pt}\hspace*{89.75truept}
\begin{minipage}{151truept}
We pass from the original horizontal cylinder decomposition of $\hat
S_3$ (left two pictures) to the vertical cylinder decomposition
(bottom two pictures) and then rotate $\hat S_3$ by $\pi/2$ clockwise
(right two pictures). The squares are renumbered after the rotation.
\end{minipage}
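As a reminder (our addition; the matrix notation below is not used elsewhere in the text), the clockwise rotation by $\pi/2$ described above acts on the translation structure by the linear map

```latex
% Clockwise rotation by pi/2: horizontal vectors become vertical,
% so horizontal cylinders are taken to vertical cylinders.
\[
  R = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
  \qquad
  R\begin{pmatrix} x \\ y \end{pmatrix}
  = \begin{pmatrix} y \\ -x \end{pmatrix},
  \qquad
  R\begin{pmatrix} 1 \\ 0 \end{pmatrix}
  = \begin{pmatrix} 0 \\ -1 \end{pmatrix}.
\]
```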
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=58
voffset=-338.5
}
\begin{picture}(0,0)(-27,-93)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(-16,12){\tiny\textit C}
\put(14.5,12){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88.5){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61.5){\tiny\textit B}
\put(-40,-86){\tiny\textit E}
\put(-14,-89){\tiny\textit C}
\put(14.5,-86){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $6$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $7$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $2$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $3$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $1$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $14$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $15$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $13$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $8$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $9$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\put(25,25){$\hat S_2$}
\end{picture}
\end{picture}
\special{
psfile=twolines18_1.eps
hscale=85
vscale=85
angle=90
hoffset=131
voffset=-335
}
\begin{picture}(0,0)(-96,-110.5)
\begin{picture}(0,0)(2.5,244)
\put(-10,81){\textcolor{blue}{$d_2$}}
\put(5,66.5){\textcolor{red}{$a_{2,1}$}}
\put(5,50){\textcolor{red}{$a_{1,2}$}}
\put(5,18.5){\textcolor{black}{$c_{2,1}$}}
\put(5,-1){\textcolor{black}{$c_{1,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{2,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(-10,81){\textcolor{blue}{$d_3$}}
\put(5,66.5){\textcolor{red}{$a_{3,1}$}}
\put(5,50){\textcolor{red}{$a_{2,2}$}}
\put(5,18.5){\textcolor{black}{$c_{3,1}$}}
\put(5,-1){\textcolor{black}{$c_{2,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{3,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-10,81){\textcolor{blue}{$d_1$}}
\put(5,66.5){\textcolor{red}{$a_{1,1}$}}
\put(5,50){\textcolor{red}{$a_{3,2}$}}
\put(5,18.5){\textcolor{black}{$c_{1,1}$}}
\put(5,-1){\textcolor{black}{$c_{3,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{1,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_shear1.eps
hscale=85
vscale=85
angle=90
hoffset=189
voffset=-316.5
}
\begin{picture}(0,0)(-163.5,-120)
\begin{picture}(0,0)(2.5,244)
\put(10.5,115.5){\textcolor{black}{$c'_{1,1}$}}
\put(1,100.5){\textcolor{black}{$c'_{3,2}$}}
\put(-10,95){\textcolor{blue}{$d'_2$}}
\put(10,68){\textcolor{red}{$a'_{2,1}$}}
\put(1,52){\textcolor{red}{$a'_{1,2}$}}
\put(10.5,19.5){\textcolor{black}{$c'_{2,1}$}}
\put(1.25,4){\textcolor{black}{$c'_{1,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b'_{2,2}$}}
\put(-23,25){\textcolor{mygreen}{$b'_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(-10,95){\textcolor{blue}{$d'_3$}}
\put(10,68){\textcolor{red}{$a'_{3,1}$}}
\put(1,52){\textcolor{red}{$a'_{2,2}$}}
\put(10.5,19.5){\textcolor{black}{$c'_{3,1}$}}
\put(1.25,4){\textcolor{black}{$c'_{2,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b'_{3,2}$}}
\put(-23,25){\textcolor{mygreen}{$b'_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-10,95){\textcolor{blue}{$d'_1$}}
\put(10,68){\textcolor{red}{$a'_{1,1}$}}
\put(1,52){\textcolor{red}{$a'_{3,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b'_{1,2}$}}
\put(-23,25){\textcolor{mygreen}{$b'_{2,1}$}}
\put(10.5,19.5){\textcolor{black}{$c'_{1,1}$}}
\put(1,2){\textcolor{black}{$c'_{3,2}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_second_surface.eps
hscale=85
vscale=85
angle=90
hoffset=257.5
voffset=-293
}
\begin{picture}(0,0)(-232,-120)
\begin{picture}(0,0)(2.5,244)
\put(10.5,115.5){\textcolor{black}{$c_{1,1}$}}
\put(1,100.5){\textcolor{black}{$c_{3,2}$}}
\put(-10,95){\textcolor{blue}{$d_2$}}
\put(10,68){\textcolor{red}{$a_{2,1}$}}
\put(1,52){\textcolor{red}{$a_{1,2}$}}
\put(10.5,19.5){\textcolor{black}{$c_{2,1}$}}
\put(1.25,4){\textcolor{black}{$c_{1,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b_{2,2}$}}
\put(-23,25){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(10,68){\textcolor{red}{$a_{3,1}$}}
\put(1,52){\textcolor{red}{$a_{2,2}$}}
\put(-10,95){\textcolor{blue}{$d_3$}}
\put(10.5,19.5){\textcolor{black}{$c_{3,1}$}}
\put(1.25,4){\textcolor{black}{$c_{2,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b_{3,2}$}}
\put(-23,25){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(10,68){\textcolor{red}{$a_{1,1}$}}
\put(1,52){\textcolor{red}{$a_{3,2}$}}
\put(-10,95){\textcolor{blue}{$d_1$}}
\put(-14.5,46){\textcolor{mygreen}{$b_{1,2}$}}
\put(-23,25){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=350
voffset=-293
}
\begin{picture}(0,0)(-320,-140.75)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,144.5)
\put(-16,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-85){\tiny\textit B}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-85){\tiny\textit B}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-89){\tiny\textit B}
\put(-14,-89){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $8$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $3$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $6$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $1$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $2$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $15$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $0$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $13$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $14$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $9$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $12$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $7$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\end{picture}
\put(-53,-120){$\hat{S}_1$}
\end{picture}
\vspace*{315pt}
\begin{figure}
\caption{\label{fig:twoline:one:shear}}
\end{figure}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=58
voffset=-363
}
\begin{picture}(0,0)(-27,-68.5)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,145)
\put(-16,12){\tiny\textit C}
\put(14.5,12){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61){\tiny\textit B}
\put(-40,-85){\tiny\textit E}
\put(-16,-88.5){\tiny\textit C}
\put(14.5,-85){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit E}
\put(-16,-34){\tiny\textit C}
\put(14.5,-37){\tiny\textit D}
\put(-40,-61.5){\tiny\textit B}
\put(-40,-86){\tiny\textit E}
\put(-14,-89){\tiny\textit C}
\put(14.5,-86){\tiny\textit F}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $6$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $7$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $2$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $3$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $0$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $1$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $14$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $15$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $12$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $13$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $8$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $9$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\put(25,25){$\hat{S}_2$}
\end{picture}
\end{picture}
\special{
psfile=twolines18_1.eps
hscale=85
vscale=85
angle=90
hoffset=131
voffset=-359.5
}
\begin{picture}(0,0)(-96,-86)
\begin{picture}(0,0)(2.5,244)
\put(-10,81){\textcolor{blue}{$d_2$}}
\put(5,66.5){\textcolor{red}{$a_{2,1}$}}
\put(5,50){\textcolor{red}{$a_{1,2}$}}
\put(5,18.5){\textcolor{black}{$c_{2,1}$}}
\put(5,-1){\textcolor{black}{$c_{1,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{2,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(-10,81){\textcolor{blue}{$d_3$}}
\put(5,66.5){\textcolor{red}{$a_{3,1}$}}
\put(5,50){\textcolor{red}{$a_{2,2}$}}
\put(5,18.5){\textcolor{black}{$c_{3,1}$}}
\put(5,-1){\textcolor{black}{$c_{2,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{3,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-10,81){\textcolor{blue}{$d_1$}}
\put(5,66.5){\textcolor{red}{$a_{1,1}$}}
\put(5,50){\textcolor{red}{$a_{3,2}$}}
\put(5,18.5){\textcolor{black}{$c_{1,1}$}}
\put(5,-1){\textcolor{black}{$c_{3,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{1,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_2.eps
hscale=85
vscale=85
angle=90
hoffset=189
voffset=-341.5
}
\begin{picture}(0,0)(-163.5,-120)
\begin{picture}(0,0)(2.5,244)
\put(-9,97){\textcolor{blue}{$d'_2$}}
\put(7,65){\textcolor{red}{$a'_{2,1}$}}
\put(3,30){\textcolor{red}{$a'_{1,2}$}}
\put(9,11){\textcolor{black}{$c'_{2,1}$}}
\put(2,-15){\textcolor{black}{$c'_{1,2}$}}
\put(-14.5,65){\textcolor{mygreen}{$b'_{2,2}$}}
\put(-23,30){\textcolor{mygreen}{$b'_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(-9,97){\textcolor{blue}{$d'_3$}}
\put(7,65){\textcolor{red}{$a'_{3,1}$}}
\put(3,30){\textcolor{red}{$a'_{2,2}$}}
\put(9,11){\textcolor{black}{$c'_{3,1}$}}
\put(2,-15){\textcolor{black}{$c'_{2,2}$}}
\put(-14.5,65){\textcolor{mygreen}{$b'_{3,2}$}}
\put(-22,30){\textcolor{mygreen}{$b'_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-9,97){\textcolor{blue}{$d'_1$}}
\put(7,64){\textcolor{red}{$a'_{1,1}$}}
\put(3,30){\textcolor{red}{$a'_{3,2}$}}
\put(9,11){\textcolor{black}{$c'_{1,1}$}}
\put(2,-15){\textcolor{black}{$c'_{3,2}$}}
\put(-14.5,64){\textcolor{mygreen}{$b'_{1,2}$}}
\put(-22,30){\textcolor{mygreen}{$b'_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_3.eps
hscale=85
vscale=85
angle=90
hoffset=257.5
voffset=-294
}
\begin{picture}(0,0)(-232,-120)
\begin{picture}(0,0)(2.5,244)
\put(-9,96){\textcolor{blue}{$d'_2$}}
\put(9,110){\textcolor{black}{$c'_{1,1}$}}
\put(2,84){\textcolor{black}{$c'_{3,2}$}}
\put(7,65){\textcolor{red}{$a_{2,1}$}}
\put(3,30){\textcolor{red}{$a_{1,2}$}}
\put(9,11){\textcolor{black}{$c_{2,1}$}}
\put(2,-15){\textcolor{black}{$c_{1,2}$}}
\put(-14.5,65){\textcolor{mygreen}{$b_{2,2}$}}
\put(-23,30){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(-9,96){\textcolor{blue}{$d'_3$}}
\put(7,65){\textcolor{red}{$a'_{3,1}$}}
\put(3,30){\textcolor{red}{$a'_{2,2}$}}
\put(9,11){\textcolor{black}{$c'_{3,1}$}}
\put(2,-15){\textcolor{black}{$c'_{2,2}$}}
\put(-14.5,65){\textcolor{mygreen}{$b'_{3,2}$}}
\put(-22,30){\textcolor{mygreen}{$b'_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-9,96){\textcolor{blue}{$d'_1$}}
\put(7,64){\textcolor{red}{$a'_{1,1}$}}
\put(3,30){\textcolor{red}{$a'_{3,2}$}}
\put(-14.5,64){\textcolor{mygreen}{$b'_{1,2}$}}
\put(-22,30){\textcolor{mygreen}{$b'_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twolines18_1bis.eps
hscale=85
vscale=85
angle=90
hoffset=326
voffset=-300.5
}
\begin{picture}(0,0)(-302,-132.5)
\begin{picture}(0,0)(2.5,244)
\put(-10.5,82){\textcolor{blue}{$d_2$}}
\put(-10,38){\textcolor{black}{$s_2$}}
\put(5,66.5){\textcolor{red}{$a_{2,1}$}}
\put(5,50){\textcolor{red}{$a_{1,2}$}}
\put(5,18.5){\textcolor{black}{$c_{2,1}$}}
\put(5,-1){\textcolor{black}{$c_{1,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{2,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(25, 94){\textcolor{black}{$9_b$}}
\put(-10.5,82){\textcolor{blue}{$d_3$}}
\put(25, 70){\textcolor{black}{$8_b$}}
\put(-10,38){\textcolor{black}{$s_3$}}
\put(5,66.5){\textcolor{red}{$a_{3,1}$}}
\put(5,50){\textcolor{red}{$a_{2,2}$}}
\put(5,18.5){\textcolor{black}{$c_{3,1}$}}
\put(5,-1){\textcolor{black}{$c_{2,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{3,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(-10.5,82){\textcolor{blue}{$d_1$}}
\put(-10,38){\textcolor{black}{$s_1$}}
\put(25, 44){\textcolor{black}{$1_b$}}
\put(5,66.5){\textcolor{red}{$a_{1,1}$}}
\put(5,50){\textcolor{red}{$a_{3,2}$}}
\put(25,20){\textcolor{black}{$0_b$}}
\put(5,18.5){\textcolor{black}{$c_{1,1}$}}
\put(5,-1){\textcolor{black}{$c_{3,2}$}}
\put(-19.5,18.5){\textcolor{mygreen}{$b_{1,2}$}}
\put(-19.55,-1){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\put(-38,-126){$\hat{S}_2$}
\end{picture}
\vspace*{350pt}
\begin{figure}
\caption{\label{fig:twoline:shear}}
\end{figure}
\special{
psfile=twoline18_second_surface_bis.eps
hscale=85
vscale=85
angle=90
hoffset=57.5
voffset=-302
}
\begin{picture}(0,0)(-22.5,-121)
\begin{picture}(0,0)(2.5,244)
\put(10.5,115.5){\textcolor{black}{$c_{1,1}$}}
\put(1,100.5){\textcolor{black}{$c_{3,2}$}}
\put(-10,95){\textcolor{blue}{$d_2$}}
\put(10,68){\textcolor{red}{$a_{2,1}$}}
\put(-38,58){$17_t$}
\put(-9.5,64){$s_2$}
\put(1,52){\textcolor{red}{$a_{1,2}$}}
\put(10.5,19.5){\textcolor{black}{$c_{2,1}$}}
\put(1.25,4){\textcolor{black}{$c_{1,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b_{2,2}$}}
\put(-23,25){\textcolor{mygreen}{$b_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(10,68){\textcolor{red}{$a_{3,1}$}}
\put(1,52){\textcolor{red}{$a_{2,2}$}}
\put(-9.5,64){$s_3$}
\put(-10,95){\textcolor{blue}{$d_3$}}
\put(10.5,19.5){\textcolor{black}{$c_{3,1}$}}
\put(1.25,4){\textcolor{black}{$c_{2,2}$}}
\put(-14.5,46){\textcolor{mygreen}{$b_{3,2}$}}
\put(-23,25){\textcolor{mygreen}{$b_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(10,68){\textcolor{red}{$a_{1,1}$}}
\put(1,52){\textcolor{red}{$a_{3,2}$}}
\put(-33.5,34){$4_t$}
\put(-9.5,64){$s_1$}
\put(-10,95){\textcolor{blue}{$d_1$}}
\put(-14.5,46){\textcolor{mygreen}{$b_{1,2}$}}
\put(-23,25){\textcolor{mygreen}{$b_{2,1}$}}
\end{picture}
\put(34,-111){$\hat{S}_1$}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=90
hoffset=130.5
voffset=-292
}
\begin{picture}(0,0)(-99.5,-142)
\begin{picture}(0,0)(0,97.5)
\put(-24.5,-98){17}
\put(-24.5,-122.5){16}
\put(0,-49){15}
\put(0,-73.5){14}
\put(0,-98){13}
\put(0,-122.5){12}
\end{picture}
\begin{picture}(0,0)(2.5,193)
\put(-24.5,-98){11}
\put(-24.5,-122.5){10}
\put(3,-49){9}
\put(3,-73.5){8}
\put(3,-98){7}
\put(3,-122.5){6}
\end{picture}
\begin{picture}(0,0)(5,290)
\put(-21.5,-98){5}
\put(-21.5,-122.5){4}
\put(3,-49){3}
\put(3,-73.5){2}
\put(3,-98){1}
\put(3,-122.5){0}
\end{picture}
\begin{picture}(0,0)(2.5,144.5)
\put(-16,12){\tiny\textit C}
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-85){\tiny\textit B}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-85){\tiny\textit B}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(14.5,-12.75){\tiny\textit F}
\put(-16,-12.75){\tiny\textit A}
\put(-40,-37){\tiny\textit B}
\put(-16,-34){\tiny\textit C}
\put(14.5,-61){\tiny\textit D}
\put(-40,-61){\tiny\textit E}
\put(-40,-89){\tiny\textit B}
\put(-14,-89){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(-0.5,145)
\put(-12,15){\tiny $0$}
\put(-26,4.5){\rotatebox{180}{\tiny $2$}}
\put(4,4.5){\rotatebox{180}{\tiny $8$}}
\put(-26,-19.5){\rotatebox{180}{\tiny $9$}}
\put(4,-19.5){\rotatebox{180}{\tiny $3$}}
\put(-39,-34){\tiny $10$}
\put(-52,-45){\rotatebox{180}{\tiny $4$}}
\put(4,-45){\rotatebox{180}{\tiny $6$}}
\put(-54,-70){\rotatebox{180}{\tiny $11$}}
\put(4,-70){\rotatebox{180}{\tiny $1$}}
\put(-37,-88){\tiny $5$}
\put(-30,-93){\rotatebox{180}{\tiny $14$}}
\put(4,-93){\rotatebox{180}{\tiny $2$}}
\put(-26,-117){\rotatebox{180}{\tiny $3$}}
\put(4,-117){\rotatebox{180}{\tiny $15$}}
\put(-37,-130){\tiny $4$}
\put(-54,-140.5){\rotatebox{180}{\tiny $16$}}
\put(4,-140.5){\rotatebox{180}{\tiny $0$}}
\put(-50,-165){\rotatebox{180}{\tiny $5$}}
\put(4,-165){\rotatebox{180}{\tiny $13$}}
\put(-39,-185){\tiny $17$}
\put(-26,-189.5){\rotatebox{180}{\tiny $8$}}
\put(4,-189.5){\rotatebox{180}{\tiny $14$}}
\put(-30,-213){\rotatebox{180}{\tiny $15$}}
\put(4,-213){\rotatebox{180}{\tiny $9$}}
\put(-39,-227){\tiny $16$}
\put(-54,-238){\rotatebox{180}{\tiny $10$}}
\put(4,-238){\rotatebox{180}{\tiny $12$}}
\put(-54,-262){\rotatebox{180}{\tiny $17$}}
\put(4,-262){\rotatebox{180}{\tiny $7$}}
\put(-39,-282){\tiny $11$}
\put(-14.5,-282){\tiny $15$}
\end{picture}
\end{picture}
\special{
psfile=twolines18_0.eps
hscale=85
vscale=85
angle=180
hoffset=324
voffset=-332
}
\begin{picture}(0,0)(-52,339)
\put(-21.5,0.25){\rotatebox{90}{15}}
\put(2.5,0.25){\rotatebox{90}{14}}
\put(27,0.25){\rotatebox{90}{13}}
\put(49.5,0.25){\rotatebox{90}{12}}
\put(75,2.25){\rotatebox{90}{9}}
\put(99.5,2.25){\rotatebox{90}{8}}
\put(123,2.25){\rotatebox{90}{7}}
\put(146.5,2.25){\rotatebox{90}{6}}
\put(171,2.25){\rotatebox{90}{3}}
\put(197,2.25){\rotatebox{90}{2}}
\put(220,2.25){\rotatebox{90}{1}}
\put(244.5,2.25){\rotatebox{90}{0}}
\put(27,-24.25){\rotatebox{90}{17}}
\put(49.5,-24.25){\rotatebox{90}{16}}
\put(123,-24.25){\rotatebox{90}{11}}
\put(146.5,-24.25){\rotatebox{90}{10}}
\put(220,-21.25){\rotatebox{90}{5}}
\put(244.5,-21.25){\rotatebox{90}{4}}
\put(-20,21){\rotatebox{-90}{\tiny 8}}
\put(3.5,21){\rotatebox{-90}{\tiny 3}}
\put(28.5,21){\rotatebox{-90}{\tiny 6}}
\put(51,21){\rotatebox{-90}{\tiny 1}}
\put(76.5,21){\rotatebox{-90}{\tiny 2}}
\put(100.5,24){\rotatebox{-90}{\tiny 15}}
\put(124.5,21){\rotatebox{-90}{\tiny 0}}
\put(149,24){\rotatebox{-90}{\tiny 13}}
\put(172,24){\rotatebox{-90}{\tiny 14}}
\put(198,21){\rotatebox{-90}{\tiny 9}}
\put(221,24){\rotatebox{-90}{\tiny 12}}
\put(246,21){\rotatebox{-90}{\tiny 7}}
\put(-35.5,3){\rotatebox{90}{\tiny 0}}
\put(261,1){\rotatebox{90}{\tiny 15}}
\put(-21.5,-9){\rotatebox{-90}{\tiny 2}}
\put(3.5,-9){\rotatebox{-90}{\tiny 9}}
\put(76.5,-8.5){\rotatebox{-90}{\tiny 14}}
\put(100.5,-9){\rotatebox{-90}{\tiny 3}}
\put(172,-9){\rotatebox{-90}{\tiny 8}}
\put(198,-8.5){\rotatebox{-90}{\tiny 15}}
\put(13,-24){\rotatebox{90}{\tiny 10}}
\put(67.5,-22){\rotatebox{90}{\tiny 5}}
\put(110,-22){\rotatebox{90}{\tiny 4}}
\put(164,-24){\rotatebox{90}{\tiny 17}}
\put(206,-24){\rotatebox{90}{\tiny 16}}
\put(261,-24){\rotatebox{90}{\tiny 11}}
\put(28.5,-33){\rotatebox{-90}{\tiny 4}}
\put(51,-32.5){\rotatebox{-90}{\tiny 11}}
\put(124.5,-32.5){\rotatebox{-90}{\tiny 16}}
\put(149,-33){\rotatebox{-90}{\tiny 5}}
\put(221,-32.5){\rotatebox{-90}{\tiny 10}}
\put(246,-32.5){\rotatebox{-90}{\tiny 17}}
\begin{picture}(0,0)(-53,-3)
\put(-61,14.5){\tiny\textit F}
\put(-12.75,14.5){\tiny\textit D}
\put(-88,-16){\tiny\textit C}
\put(-61.5,-16){\tiny\textit A}
\put(-40,-16){\tiny\textit C}
\put(14,-16){\tiny\textit C}
\put(-39,-40){\tiny\textit B}
\put(-12.75,-40){\tiny\textit E}
\put(14,-40){\tiny\textit B}
\end{picture}
\begin{picture}(0,0)(-147.5,-3)
\put(-61,14.5){\tiny\textit F}
\put(-12.75,14.5){\tiny\textit D}
\put(-61.5,-16){\tiny\textit A}
\put(-40,-16){\tiny\textit C}
\put(14,-16){\tiny\textit C}
\put(-39,-40){\tiny\textit B}
\put(-12.75,-40){\tiny\textit E}
\put(13.5,-40){\tiny\textit B}
\end{picture}
\begin{picture}(0,0)(-242,-3)
\put(-61,14.5){\tiny\textit F}
\put(-12.75,14.5){\tiny\textit D}
\put(-61.5,-16){\tiny\textit A}
\put(-40,-16){\tiny\textit C}
\put(14,-13){\tiny\textit C}
\put(-39,-40){\tiny\textit B}
\put(-12.75,-40){\tiny\textit E}
\put(14,-40){\tiny\textit B}
\end{picture}
\end{picture}
\special{
psfile=twolines18_second_surface.eps
hscale=85
vscale=85
angle=180
hoffset=324
voffset=-395
}
\begin{picture}(0,0)(-43,400)
\begin{picture}(0,0)(0,0)
\put(-26,9){\textcolor{black}{$c'_{1,1}$}}
\put(-7,-6){\textcolor{black}{$c'_{3,2}$}}
\put(-1,-18.5){\textcolor{blue}{$d'_2$}}
\put(21,9){\textcolor{red}{$a'_{2,1}$}}
\put(40.5,-6){\textcolor{red}{$a'_{1,2}$}}
\put(70.75,9){\textcolor{black}{$c'_{2,1}$}}
\put(89.5,-6){\textcolor{black}{$c'_{1,2}$}}
\put(42.5,-18.5){\textcolor{mygreen}{$b'_{2,2}$}}
\put(67,-30.5){\textcolor{mygreen}{$b'_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(-94.5,0)
\put(-1,-18.5){\textcolor{blue}{$d'_3$}}
\put(21,9){\textcolor{red}{$a'_{3,1}$}}
\put(40.25,-6){\textcolor{red}{$a'_{2,2}$}}
\put(70.75,9){\textcolor{black}{$c'_{3,1}$}}
\put(89.5,-6){\textcolor{black}{$c'_{2,2}$}}
\put(42.5,-18.5){\textcolor{mygreen}{$b'_{3,2}$}}
\put(67,-30.5){\textcolor{mygreen}{$b'_{1,1}$}}
\end{picture}
\begin{picture}(0,0)(-189,0)
\put(-1,-18.5){\textcolor{blue}{$d'_1$}}
\put(20.75,9){\textcolor{red}{$a'_{1,1}$}}
\put(40,-6){\textcolor{red}{$a'_{3,2}$}}
\put(42,-18.5){\textcolor{mygreen}{$b'_{1,2}$}}
\put(66,-30.5){\textcolor{mygreen}{$b'_{2,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twoline18_second_surface_prime.eps
hscale=85
vscale=85
angle=90
hoffset=276.5
voffset=-259.5
}
\begin{picture}(0,0)(-242,-165.7)
\begin{picture}(0,0)(2.5,244)
\put(1,99.5){\textcolor{mygreen}{$b'_{1,1}$}}
\put(10,74){\textcolor{mygreen}{$b'_{3,2}$}}
\put(-4.5,47){\textcolor{red}{$a'_{2,1}$}}
\put(10,30){\textcolor{red}{$a'_{1,2}$}}
\put(-23.5,74){\textcolor{black}{$c'_{2,2}$}}
\put(-9,60){\textcolor{black}{$c'_{3,1}$}}
\put(-20,35){\textcolor{blue}{$d'_{3,2}$}}
\put(-20,18.5){\textcolor{blue}{$d'_{2,1}$}}
\end{picture}
\begin{picture}(0,0)(5,341)
\put(1,99.5){\textcolor{mygreen}{$b'_{1,1}$}}
\put(10,74){\textcolor{mygreen}{$b'_{3,2}$}}
\put(-4.5,47){\textcolor{red}{$a'_{3,1}$}}
\put(10,30){\textcolor{red}{$a'_{2,2}$}}
\put(-23.5,74){\textcolor{black}{$c'_{3,2}$}}
\put(-9,60){\textcolor{black}{$c'_{1,1}$}}
\put(-20,35){\textcolor{blue}{$d'_{1,2}$}}
\put(-20,18.5){\textcolor{blue}{$d'_{3,1}$}}
\end{picture}
\begin{picture}(0,0)(7.5,437)
\put(1,99.5){\textcolor{mygreen}{$b'_{1,1}$}}
\put(10,74){\textcolor{mygreen}{$b'_{3,2}$}}
\put(-4.5,47){\textcolor{red}{$a'_{1,1}$}}
\put(10,30){\textcolor{red}{$a'_{3,2}$}}
\put(-23.5,74){\textcolor{black}{$c'_{1,2}$}}
\put(-9,60){\textcolor{black}{$c'_{2,1}$}}
\put(-20,35){\textcolor{blue}{$d'_{2,2}$}}
\put(-20,18.5){\textcolor{blue}{$d'_{1,1}$}}
\end{picture}
\end{picture}
\special{
psfile=twoline18_second_surface_pr.eps
hscale=85
vscale=85
angle=90
hoffset=350
voffset=-244
}
\begin{picture}(0,0)(-320,-189.75)
\begin{picture}(0,0)(-2.5,98)
\put(-24.5,-92){\rotatebox{-90}{3}}
\put(-24.5,-123){\rotatebox{90}{14}}
\put(0,-47){\rotatebox{90}{5}}
\put(0,-64){\rotatebox{-90}{10}}
\put(0,-92){\rotatebox{-90}{6}}
\put(0,-122.5){\rotatebox{90}{13}}
\end{picture}
\begin{picture}(0,0)(0,196)
\put(-24.5,-87){\rotatebox{-90}{15}}
\put(-24.5,-119){\rotatebox{90}{8}}
\put(0,-49){\rotatebox{90}{17}}
\put(0,-66){\rotatebox{-90}{4}}
\put(0,-89.5){\rotatebox{-90}{0}}
\put(0,-119){\rotatebox{90}{7}}
\end{picture}
\begin{picture}(0,0)(2.5,290.5)
\put(-24.5,-92){\rotatebox{-90}{9}}
\put(-24.5,-121){\rotatebox{90}{2}}
\put(0,-51){\rotatebox{90}{11}}
\put(0,-65){\rotatebox{-90}{16}}
\put(0,-90){\rotatebox{-90}{12}}
\put(0,-121){\rotatebox{90}{1}}
\end{picture}
\begin{picture}(0,0)(2.5,144.5)
\put(-16,12){\tiny\textit C}
\put(-16,-12.75){\tiny\textit B}
\put(14.5,-12.75){\tiny\textit E}
\put(-40,-37){\tiny\textit A}
\put(-16,-34){\tiny\textit C}
\put(-40,-61){\tiny\textit F}
\put(14.5,-61){\tiny\textit D}
\put(-40,-85){\tiny\textit A}
\put(-16,-88.5){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(5,241.5)
\put(-16,-12.75){\tiny\textit B}
\put(14.5,-12.75){\tiny\textit E}
\put(-40,-37){\tiny\textit A}
\put(-16,-34){\tiny\textit C}
\put(-40,-61){\tiny\textit F}
\put(14.5,-61){\tiny\textit D}
\put(-40,-85){\tiny\textit A}
\put(-16,-88){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(7.5,338)
\put(-16,-12.75){\tiny\textit B}
\put(14.5,-12.75){\tiny\textit E}
\put(-40,-37){\tiny\textit A}
\put(-16,-34){\tiny\textit C}
\put(-40,-61){\tiny\textit F}
\put(14.5,-61){\tiny\textit D}
\put(-40,-89){\tiny\textit A}
\put(-14,-89){\tiny\textit C}
\end{picture}
\begin{picture}(0,0)(-0.25,145)
\put(-12,15){\rotatebox{90}{\tiny $1$}}
\put(-26,-1.5){\rotatebox{90}{\tiny $16$}}
\put(4,0.5){\rotatebox{90}{\tiny $4$}}
\put(-26,-19.5){\rotatebox{-90}{\tiny $17$}}
\put(4,-19.5){\rotatebox{-90}{\tiny $11$}}
\put(-37,-34){\rotatebox{90}{\tiny $8$}}
\put(-51,-45){\rotatebox{-90}{\tiny $2$}}
\put(4,-45){\rotatebox{-90}{\tiny $7$}}
\put(-51,-75){\rotatebox{90}{\tiny $15$}}
\put(4,-75){\rotatebox{90}{\tiny $12$}}
\put(-37,-84){\rotatebox{-90}{\tiny $9$}}
\put(-26,-99){\rotatebox{90}{\tiny $10$}}
\put(4,-99){\rotatebox{90}{\tiny $16$}}
\put(-26,-115){\rotatebox{-90}{\tiny $11$}}
\put(4,-117){\rotatebox{-90}{\tiny $5$}}
\put(-37,-130){\rotatebox{90}{\tiny $2$}}
\put(-51,-140){\rotatebox{-90}{\tiny $14$}}
\put(4,-141.5){\rotatebox{-90}{\tiny $1$}}
\put(-51,-170){\rotatebox{90}{\tiny $9$}}
\put(4,-170){\rotatebox{90}{\tiny $6$}}
\put(-37,-181){\rotatebox{-90}{\tiny $3$}}
\put(-26,-194.5){\rotatebox{90}{\tiny $4$}}
\put(4,-196.5){\rotatebox{90}{\tiny $10$}}
\put(-26,-215){\rotatebox{-90}{\tiny $5$}}
\put(4,-213){\rotatebox{-90}{\tiny $17$}}
\put(-37,-227){\rotatebox{90}{\tiny $14$}}
\put(-51,-238){\rotatebox{-90}{\tiny $8$}}
\put(4,-238){\rotatebox{-90}{\tiny $13$}}
\put(-51,-267){\rotatebox{90}{\tiny $3$}}
\put(4,-267){\rotatebox{90}{\tiny $0$}}
\put(-37,-277){\rotatebox{-90}{\tiny $15$}}
\put(-11,-282){\rotatebox{90}{\tiny $5$}}
\put(-50,24){$\hat{S}_1$}
\end{picture}
\end{picture}
\vspace*{405pt}
\begin{figure}
\caption{\label{fig:twoline:rotate}}
\end{figure}
\end{document} | math | 205,023 |
\begin{document}
\title{Path Integral Approach to 't\,Hooft's Derivation of
Quantum from Classical Physics
}
\author{Massimo Blasone${}^{\dag}$, Petr Jizba${}^{\ddag}{}^{\flat}$, and
Hagen Kleinert${}^{\natural}$}
\address{ $ $\\[2mm]
${}^{\dag}$
Dipartimento di Fisica, Universit\`a di Salerno,
Via S.Allende, 84081 Baronissi (SA) - Italy
\\
${}^{\ddag}$ Institute for Theoretical Physics, University of
Tsukuba, Ibaraki 305-8571, Japan\\
${}^{\flat}$ FNSPE, Czech Technical University, Brehova 7, 115 19
Praha 1, Czech Republic\\
${}^{\natural}$ Institut f\"{u}r Theoretische Physik, Freie
Universit\"{a}t Berlin, Arnimallee 14 D-14195 Berlin, Germany
\\ [2mm] E-mails: [email protected], [email protected],
[email protected]
}
\maketitle
\begin{center}
{\small \bf Abstract}
\end{center}
\begin{abstract}
\hspace{-10.5mm}
We present a path-integral formulation of 't~Hooft's derivation of
quantum from classical physics. The crucial ingredient of this
formulation is Gozzi {\em et al.}'s supersymmetric path integral of
classical mechanics. We quantize explicitly two simple classical
systems: the planar mathematical pendulum and the R\"{o}ssler
dynamical system.
~\\
\noindent PACS: 03.65.-w, 31.15.Kb, 45.20.Jj, 11.30.Pb. \\
\noindent {\em Keywords}: 't\,Hooft's quantization; Path integral;
Constrained dynamics
\end{abstract}
\section{Introduction}
In recent decades, various classical, i.e.,
deterministic approaches
to quantum theory have been proposed. Examples are Bohmian
mechanics~\cite{Bohm1}, and the stochastic quantization procedures
of Nelson~\cite{Nelson1}, Guerra and Ruggiero~\cite{Guerra1}, and
Parisi and Wu~\cite{Parisi1,Huffel}. Such approaches are attracting
increasing interest in the physics community. This might be
partially ascribed to the fact that such
alternative formulations help in explaining some quantum phenomena
that cannot be easily explained
with the usual formalisms. Examples
are multiple tunneling~\cite{Jona-Lasinio}, critical phenomena at
zero temperature~\cite{Ruggiero1}, mesoscopic physics and quantum
Brownian oscillators~\cite{Rugierro2}, and quantum-field-theoretical
regularization procedures which manifestly preserve
all symmetries of the
bare theory such as gauge symmetry, chiral symmetry, and
supersymmetry~\cite{reg}. They allow one to quantize gauge fields,
both Abelian and non-Abelian, without gauge fixing and the ensuing
cumbersome Faddeev-Popov ghosts~\cite{FPG}, and so on.
The primary objective of a reformulation of quantum theory in the
language of classical, i.e., deterministic theory is basically
twofold. On the formal side, it is hoped that this will help in
attacking quantum-mechanical problems from a different direction
using hopefully more efficient mathematical techniques than the
conventional ones. Such techniques may be based on stochastic
calculus, supersymmetry, or various new numerical approaches (see,
e.g., Refs.~\cite{Huffel,Pain} and citations therein). On the
conceptual side, deterministic scenarios are hoped to shed new light
on some old problems of quantum mechanics, such as the origin of the
superposition rule for amplitudes and the theory of quantum
measurement. It may lead to new ways of quantizing chaotic dynamical
systems, and ultimately a long-awaited consistent theory of quantum
gravity. There is, however, a price to be paid for this; such
theories must have a built-in nonlocality
to
escape problems with Bell's inequalities. Nonlocality may be
incorporated in numerous ways --- the Bohm-Hiley quantum
potential~\cite{Bohm1,Bohm2}, Nelson's osmotic
potential~\cite{Nelson1}, or Parisi and Wu's {\em fifth--time\/}
parameter~\cite{Parisi1,Huffel}.
Another deterministic approach to quantum-mechanical systems was
recently proposed by 't\,Hooft~\cite{tHooft,tHooft3}, with subsequent
applications in
Refs.\cite{BJV1,tHooft22,Halliwell:2000mv,cvb,BMM1,Banajee,Elze2}.
It is motivated by black-hole thermodynamics (and particularly by
the so-called {\em holographic principle\/}~\cite{tHooft2,Bousso}),
and hinges on the concept of {\em information loss\/}. This and
certain accompanying non-trivial geometric phases are able to
explain the observed non-locality in quantum mechanics. The original
formulation has appeared in two versions: one involving a discrete
time axis~\cite{tHooft22}, the other a continuous time
axis~\cite{tHooft3}. The goal of this paper is to discuss further
and gain more understanding of the latter model. The reader
interested in the discrete-time model may find some practical
applications in Refs.~\cite{BJV3,Elze}. It is not our purpose to
dwell on the conceptual foundations of 't\,Hooft's proposal. Our aim
is to set up a possible useful alternative formulation of
't\,Hooft's model and quantization scheme that is based on path
integrals~\cite{Pain}. It makes use of Gozzi {\em et al.}'s
path-integral formulation of classical
mechanics~\cite{GozziI,GozziII} which appears to be a natural
mathematical framework for such a discussion. The condition of the
information loss, which is basically a first-class subsidiary
constraint, can then be incorporated into path integrals by
standard techniques. Although 't\,Hooft's procedure differs in its
basic rationale from stochastic quantization approaches, we show
that they share a common key feature, which is
a hidden BRST invariance, related to
the so-called Nicolai map~\cite{Nikoloai1}. To be specific, we shall
apply our formulation to two classical systems: a planar
mathematical pendulum and the simplest deterministic chaotic system
--- the R\"{o}ssler attractor. Suitable choices of the ``loss of
information" condition then allow us to identify the emergent
quantum systems with a free particle, a quantum harmonic oscillator,
and a free particle weakly coupled to Duffing's oscillator.
Our paper is organized as follows. In Section \ref{SEc2} we quantize
't\,Hooft's Hamiltonian system by expressing it in terms of a path
integral which is singular due to the presence of second-class
primary constraints. The singularity is removed with the help of the
Faddeev-Senjanovic prescription~\cite{Fad,Senj}. It is then shown
that the fluctuating system produces a classical partition function.
In Section \ref{SEc3} we briefly review Gozzi {\em et al.}'s
path-integral formulation of classical mechanics in configuration
space. The corresponding phase-space formulation is more involved
and will not be considered here. By imposing the condition of a
vanishing ghost sector, which is characteristic for the underlying
deterministic system, we find that the most general Hamiltonian
system compatible with such a condition is the one proposed by
't\,Hooft. In Section \ref{SEc4} we introduce 't\,Hooft's constraint
which expresses the property of information loss. This condition not
only explicitly breaks the BRST symmetry but, when coupled with the
Dirac-Bergmann algorithm, it also allows us to recast the classical
generating functional into a form representing a proper
quantum-mechanical partition function. Section V is devoted to
the application of our formalism to practical examples. We conclude
with Section VI.
For the reader's convenience the paper is supplemented with four
appendixes which clarify some finer mathematical points needed in
the paper.
\section{Quantization of 't\,Hooft's Model} \label{SEc2}
Consider the class of systems described by Hamiltonians of the form
\begin{eqnarray}
H = \sum_{a=1}^N p_a f_a({\boldsymbol{q}})\, . \label{eq.1.1}
\end{eqnarray}
Such systems emerge in diverse physical situations, for example,
Fermi fields, chiral oscillators~\cite{Banajee}, and noncommutative
magnetohydrodynamics~\cite{Jackiw}. The relevant example in the
present context is the use of (\ref{eq.1.1})
by 't\,Hooft to formulate his
{\em deterministic quantization\/} proposal~\cite{tHooft}.
An immediate problem with the above Hamiltonian is its unboundedness
from below. This is due to the absence of a leading kinetic term
quadratic in the momenta, $p_a^2/2M$, and we shall dwell more on
this point in Section~\ref{SEc4}. The equations of motion following
from Eq.(\ref{eq.1.1}) are
\begin{eqnarray}
\dot{q}_a \ = \ f_a({\boldsymbol{q}})\, , \;\;\;\; \dot{p}_a \ = \
- \sum_{b=1}^N p_b \frac{\partial f_b({\boldsymbol{q}})}{\partial q_a}\, .
\label{eq.1.1.1}
\end{eqnarray}
Note that the equations for the $q_a$ are autonomous, i.e., they are
decoupled from the conjugate momenta $p_a$. The absence of a
quadratic term makes it impossible to find a Lagrangian via a
Legendre transformation. This is because the system is singular
--- its Hessian matrix $H^{ab}\equiv \partial^2 H/\partial
p_a\partial p_b$ vanishes.
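Since $H$ in (\ref{eq.1.1}) is a constant of motion along the flow generated by (\ref{eq.1.1.1}), the structure of these equations can be checked numerically. A minimal Python sketch, assuming the illustrative choice $f({\boldsymbol{q}}) = (q_2, -q_1)$ with $N=2$ (this particular $f$ is not taken from the paper):

```python
# 't Hooft-type system with N = 2 and the assumed choice f(q) = (q2, -q1);
# H = p1*f1(q) + p2*f2(q) should be conserved along the flow.

def f(q):
    return [q[1], -q[0]]

def jac(q):
    # J[a][b] = d f_a / d q_b (constant for this linear choice)
    return [[0.0, 1.0], [-1.0, 0.0]]

def rhs(s):
    q, p = s[:2], s[2:]
    J = jac(q)
    dq = f(q)
    # dot p_a = - sum_b p_b * d f_b / d q_a
    dp = [-sum(p[b] * J[b][a] for b in range(2)) for a in range(2)]
    return dq + dp

def rk4(s, h):
    # one classical Runge-Kutta step
    k1 = rhs(s)
    k2 = rhs([x + 0.5 * h * k for x, k in zip(s, k1)])
    k3 = rhs([x + 0.5 * h * k for x, k in zip(s, k2)])
    k4 = rhs([x + h * k for x, k in zip(s, k3)])
    return [x + h * (a + 2 * b + 2 * c + d) / 6.0
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

def H(s):
    q, p = s[:2], s[2:]
    return sum(pa * fa for pa, fa in zip(p, f(q)))

s = [1.0, 0.0, 0.3, -0.2]   # (q1, q2, p1, p2)
H0 = H(s)
for _ in range(1000):
    s = rk4(s, 0.01)
assert abs(H(s) - H0) < 1e-6   # H is conserved along the flow
```

Note also that the $q$-equations close on themselves, so the trajectory in configuration space is fixed independently of the momenta, in line with the autonomous character stressed above.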
A Lagrangian yielding the equations of motion (\ref{eq.1.1.1}) can
nevertheless be found, but at the expense of doubling the
configuration space by introducing additional auxiliary variables
$\overlinen q_ a ~(a=1,\dots, N)$. This {\em extended} Lagrangian
has the form
\begin{eqnarray}
\overlinen L \ \equiv \ \sum_{a=1}^N \left[\bar{q}_a \dot{q}_a -
\bar{q}_a f_a({\boldsymbol{q}})\right]
\,
\label{lag1}
\end{eqnarray}
and it allows us to define canonically conjugate momenta in the
usual way: $p_a \equiv \partial \overlinen L/\partial \dot{q}_a,~
\overlinen{p}_a \equiv
\partial \overlinen L/\partial \dot{\overlinen{q}}_a$.
A Legendre transformation produces the Hamiltonian
\begin{eqnarray}
\overlinen H(p_a, q_a, {\overlinen{p}}_a, {\overlinen{q}}_a) =
\sum_{a=1}^N \left( p_a \dot{q}_a + {\overlinen{p}}_a
\dot{{\overlinen{q}}}_a \right) - \overlinen L = \sum_{a=1}^N \bar{q}_a
f_a({{\boldsymbol{q}}})\, . \label{2.4}
\end{eqnarray}
The rank of the Hessian matrix is zero, which gives rise to $2N$ primary
constraints. These can be chosen as
\begin{eqnarray}
\phi_1^a = p_a - \overlinen{q}_a \ \approx \ 0\, , \;\;\;\;\;\;
\phi_2^a = \overlinen{p}_a \ \approx \ 0\, . \label{2.5}
\end{eqnarray}
The use of the symbol $\approx$ instead of $=$ is due to
Dirac~\cite{Dir} and it has a special meaning: two quantities
related by this symbol are equal after all constraints have been
enforced. The system has no secondary constraints (see Appendix A).
The matrix formed by the Poisson brackets of the primary
constraints,
\begin{eqnarray}
\{\phi_1^a(t) , \phi_2^b(t) \} \ = \ - \delta_{a b}\, , \;\;\;
\label{2.10}
\end{eqnarray}
has a nonzero determinant, implying that all constraints are of the
second class. Note that on the constraint manifold the {\em
canonical} Hamiltonian (\ref{2.4}) coincides with 't\,Hooft's
Hamiltonian (\ref{eq.1.1}).
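The bracket (\ref{2.10}) can also be checked numerically. Because the constraints are linear in the phase-space variables, central differences reproduce the canonical Poisson bracket exactly up to rounding; a minimal sketch for a single degree of freedom:

```python
# Phase-space point z = [q, p, qbar, pbar]; canonical pairs (q,p), (qbar,pbar).

def pbracket(A, B, z, h=1e-6):
    # canonical Poisson bracket via central differences
    def d(F, i):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        return (F(zp) - F(zm)) / (2.0 * h)
    # {A, B} = sum over pairs of dA/dx dB/dpi - dA/dpi dB/dx
    return sum(d(A, x) * d(B, m) - d(A, m) * d(B, x)
               for x, m in [(0, 1), (2, 3)])

def phi1(z):
    return z[1] - z[2]   # phi_1 = p - qbar

def phi2(z):
    return z[3]          # phi_2 = pbar

z0 = [0.7, -1.2, 0.4, 2.1]   # arbitrary phase-space point
assert abs(pbracket(phi1, phi2, z0) - (-1.0)) < 1e-9   # {phi_1, phi_2} = -1
assert abs(pbracket(phi1, phi1, z0)) < 1e-9
```

The result is independent of the chosen point, as it must be for constraints linear in the canonical variables.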
To quantize 't\,Hooft's system we
utilize the general Faddeev-Senjanovic path integral
formula~\cite{Fad,Senj} for time evolution amplitudes
\footnote{Other path-integral representations of systems with
second-class constraints, such as that of Fradkin and
Fradkina~\cite{fradkin}, would lead to the same result
(\ref{eg.1.2}). }
\begin{eqnarray}
\langle {\boldsymbol{q}}_2,t_2| {\boldsymbol{q}}_1, t_1 \rangle =
{{\mathcal{N}}} \int {\mathcal{D}}{\boldsymbol{p}}
{\mathcal{D}}{\boldsymbol{q}} \ \sqrt{\left|\det |\!|\{\phi_i ,
\phi_j \} |\!| \right|} \ \prod_i\delta[\phi_i]\ \exp \left\{
\frac{i}{\hbar } \int_{t_1}^{t_2} dt \left[ {\boldsymbol{p}}
\dot{{\boldsymbol{q}}} - \overlinen H({\boldsymbol{q}},
{\boldsymbol{p}} )\right] \right\}\, . \label{frad}\end{eqnarray}
Using the shorthand notation $\phi_i = \phi_1^1, \phi_2^1,\,
\phi_1^2, \phi_2^2,\, \ldots, \phi_1^N, \phi_2^N ~(i=1,\dots,2N)$,
Eq.(\ref{frad}) implies in our case that
\begin{eqnarray}
\langle {\boldsymbol{q}}_2,t_2| {\boldsymbol{q}}_1, t_1 \rangle
&=& {{\mathcal{N}}} \int {\mathcal{D}}{\boldsymbol{p}} {\mathcal{D}}
{\boldsymbol{q}} {\mathcal{D}} \overlinen{{\boldsymbol{p}}}
{\mathcal{D}}\overlinen{{\boldsymbol{q}}} \
\delta[{\boldsymbol{p}} - \overlinen{{\boldsymbol{q}}}] \,
\delta[\overlinen{{\boldsymbol{p}}}]\ \exp\left\{\frac{i}{\hbar }
\int_{t_1}^{t_2} dt\,[{\boldsymbol{p}} \dot{{\boldsymbol{q}}} +
\overlinen{{\boldsymbol{p}}} \dot{\overlinen{{\boldsymbol{q}}}} -
\overlinen H({\boldsymbol{q}}, \overlinen{{\boldsymbol{q}}},
{\boldsymbol{p}}, \overlinen{{\boldsymbol{p}}} )]
\right\}\nonumber
\\
&&~ \nonumber \\
&=& {{\mathcal{N}}} \int_{{\boldsymbol{q}}(t_1) =
{\boldsymbol{q}}_1}^{{\boldsymbol{q}}(t_2) = {\boldsymbol{q}}_2}
{\mathcal{D}}{\boldsymbol{q}}
{\mathcal{D}}\overlinen{{\boldsymbol{q}}} \ \exp\left[\frac{i}{\hbar }
\int_{t_1}^{t_2}\overlinen L({\boldsymbol{q}}, \overlinen{{\boldsymbol{q}}},
\dot{{\boldsymbol{q}}}, \dot{\overlinen{{\boldsymbol{q}}}}) \ dt
\right]\nonumber \\
&&~ \nonumber \\
&=& {{\mathcal{N}}} \int_{{\boldsymbol{q}}(t_1) =
{\boldsymbol{q}}_1}^{{\boldsymbol{q}}(t_2) = {\boldsymbol{q}}_2}
{\mathcal{D}}{\boldsymbol{q}} \ \prod_a \delta[
\dot{q}_a-f_a({\boldsymbol{q}})]\, , \label{eg.1.2}
\end{eqnarray}
where
$ \delta[{\boldsymbol{f}} ]\equiv \prod_t \delta
({\boldsymbol{f}}(t))$ is the functional version of Dirac's $
\delta $-function. This result shows that quantization of the
system described by the Hamiltonian (\ref{eq.1.1}) retains its
deterministic character. The paths are squeezed onto the classical
trajectories determined by the differential equations
$\dot{q}_a = f_a({\boldsymbol{q}})$.
The time evolution amplitude (\ref{eg.1.2}) contains a sum over
only the classical trajectories --- there are no quantum
fluctuations driving the system away from the classical paths,
which is precisely what we expect from a deterministic dynamics.
The amplitude (\ref{eg.1.2})
can be brought to a more intuitive form by utilizing
the identity
\begin{eqnarray}
\delta\left[ {\boldsymbol{f}}({{\boldsymbol{q}}}) -
\dot{{\boldsymbol{q}}} \right] \ = \ \delta[ {\boldsymbol{q}} -
{\boldsymbol{q}}_{\rm cl}]\ (\det {{M}})^{-1}\, ,
\end{eqnarray}
where ${M}$ is a functional matrix formed by the second derivatives
of the action $\overlinen {\mathcal{A}}[{\boldsymbol{q}},\overlinen
{\boldsymbol{q}}]\equiv \int dt\,\overlinen L({\boldsymbol{q}},
\bar{{\boldsymbol{q}}}, \dot{{\boldsymbol{q}}},
\dot{\bar{{\boldsymbol{q}}}}) $\,:
\begin{eqnarray}
{{M}}_{ab}(t,t') \ = \ \left. \frac{\delta^2 \overlinen
{\mathcal{A}}}{\delta q_a(t)\ \delta \overlinen{q}_b(t')}\
\right|_{{\boldsymbol{q}} = {\boldsymbol{q}}_{\rm cl}} \, .
\label{4.01}
\end{eqnarray}
The Morse index theorem then ensures that for sufficiently short
time intervals $t_2-t_1$ (before the system reaches its first focal
point), the classical solution with the initial condition
${{\boldsymbol{q}}}(t_1) = {\boldsymbol{q}}_1$ is unique. Note,
however, that because of the first-order character of the equations
of motion we are dealing with a Cauchy problem, which may happen to
possess no classical trajectory satisfying the two Dirichlet
boundary conditions ${{\boldsymbol{q}}}(t_1) = {\boldsymbol{q}}_1$,
${{\boldsymbol{q}}}(t_2) = {\boldsymbol{q}}_2$. If a trajectory
exists, Eq.~(\ref{eg.1.2}) can be brought to the form
\begin{eqnarray}
\langle {\boldsymbol{q}}_2,t_2| {\boldsymbol{q}}_1, t_1 \rangle
&=& {\bar{\mathcal{N}}} \int_{{\boldsymbol{q}}(t_1) =
{\boldsymbol{q}}_1}^{{\boldsymbol{q}}(t_2) = {\boldsymbol{q}}_2}
{\mathcal{D}}{\boldsymbol{q}} \ \delta\left[{\boldsymbol{q}} -
{\boldsymbol{q}}_{\rm cl} \right]\, , \label{4.2}
\end{eqnarray}
where ${\bar{\mathcal{N}}}\equiv {{\mathcal{N}}}/(\det {M})$. We close this section
by observing that $\det M$ can be recast into a more expedient form.
To do this we formally write
\begin{eqnarray}
\det M \ &=& \ \det\left|\!\left| \left( \partial_t \delta_a^b +
\frac{\partial f_a({\boldsymbol{q}}(t))}{\partial q_b(t)}
\right)\delta(t-t') \right|\!\right| \ = \ \exp\left[
\mbox{Tr}\ln\left|\!\left| \left( \partial_t \delta_a^b +
\frac{\partial f_a({\boldsymbol{q}}(t))}{\partial q_b(t)}
\right)\delta(t-t') \right|\!\right| \right]\nonumber \\
& = & \ \exp\left[ \mbox{Tr}\ln \partial_t \left|\!\left| \delta_a^b
\delta(t-t') + G(t-t')\frac{\partial
f_a({\boldsymbol{q}}(t'))}{\partial q_b(t')}\right|\!\right|
\right]\nonumber \\
&=& \ \exp\left[\mbox{Tr}(\ln \partial_t)\right]
\exp\left[\mbox{Tr}\ln \left|\!\left| \delta_a^b \delta(t-t') +
G(t-t')\frac{\partial f_a({\boldsymbol{q}}(t'))}{\partial
q_b(t')}\right|\!\right|\right]\, . \label{4.2.1}
\end{eqnarray}
Here $G(t-t')$ is the Green's function satisfying the equation
\begin{eqnarray*}
\partial_tG(t-t') \ = \ \delta(t-t')\, .
\end{eqnarray*}
Choosing $G(t-t') = \theta(t-t')$, and noting that the first factor
in Eq.(\ref{4.2.1}) is an irrelevant constant that can be
assimilated into ${\mathcal{N}}$ we have
\begin{eqnarray}
\det M \ &=& \ \exp\left[\mbox{Tr} \ln \left|\!\left| \delta_a^b
\delta(t-t') + G(t-t')\frac{\partial
f_a({\boldsymbol{q}}(t'))}{\partial q_b(t')}\right|\!\right|\right]
\ = \ \exp\left[ \mbox{Tr}\left|\!\left| \theta(t-t')
\frac{\partial f_a({\boldsymbol{q}}(t))}{\partial
q_b(t)}\right|\!\right| \right]\nonumber \\
&=& \ \exp\left[\frac{1}{2} \int_{t_1}^{t_2} \! dt \
{\boldsymbol{\nabla}}_{{\boldsymbol{q}}}
{\boldsymbol{f}}({\boldsymbol{q}}) \right]\, . \label{4.2.2}
\end{eqnarray}
In deriving Eq.(\ref{4.2.2}) we have used the fact that, due to the
products of $\theta$-functions in the expansion of the logarithm,
all terms but the first one vanish. In evaluating the generalized
function $\theta(x)$ at the origin we have used the only consistent
choice, the midpoint rule~\cite{Pain}: $\theta(0) = 1/2$.
Using the identity
\begin{eqnarray}
\left.\exp\left[\frac{1}{2} \int_{t_1}^{t_2} \! dt \
{\boldsymbol{\nabla}}_{{\boldsymbol{q}}}
{\boldsymbol{f}}({\boldsymbol{q}}) \right] \right|_{{\boldsymbol{q}}
= {\boldsymbol{q}}_{\rm cl}}\ = \ \int {\mathcal{D}}
{\overline{{\boldsymbol{q}}}} \
\delta\left[\overline{{\boldsymbol{q}}} -
\overline{{\boldsymbol{q}}}_{\rm cl} \right] \
\exp\left[-\frac{1}{2} \int_{t_1}^{t_2} \! dt \
{\boldsymbol{\nabla}}_{\bar{\boldsymbol{q}}}
\dot{\bar{\boldsymbol{q}}} \right]\, ,
\end{eqnarray}
we can finally write the transition amplitude in a suggestive
form
\begin{eqnarray}
\langle {\boldsymbol{q}}_2,t_2| {\boldsymbol{q}}_1, t_1 \rangle
&=& {\mathcal{N}} \int_{{\boldsymbol{q}}(t_1) =
{\boldsymbol{q}}_1}^{{\boldsymbol{q}}(t_2) = {\boldsymbol{q}}_2}
{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}{\overline{\boldsymbol{q}}}
\ \delta[{{\boldsymbol{q}}} - {\boldsymbol{q}}_{\rm cl}]
\delta[\overline{{\boldsymbol{q}}} -
\overline{{\boldsymbol{q}}}_{\rm cl}] \ \exp\left[-\frac{1}{2}
\int_{t_1}^{t_2} \! dt \
{\boldsymbol{\nabla}}_{\bar{\boldsymbol{q}}}
\dot{\bar{\boldsymbol{q}}} \right] \nonumber \\
&=& {\mathcal{N}} \int_{{\boldsymbol{q}}(t_1) =
{\boldsymbol{q}}_1}^{{\boldsymbol{q}}(t_2) = {\boldsymbol{q}}_2}
{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}{\overline{\boldsymbol{q}}}
\ \delta[{{\boldsymbol{q}}} - {\boldsymbol{q}}_{\rm cl}]
\delta[\overline{{\boldsymbol{q}}} -
\overline{{\boldsymbol{q}}}_{\rm cl}] \ \sqrt{\frac{\det K(t_2) }{
\det K(t_1)}}\, \ . \label{3.50}
\end{eqnarray}
Here $K(t)$ is the fundamental matrix of the solutions of the
system
\begin{eqnarray}
\dot{\bar{q}}_a \ = \ - \sum_b \bar{q}_b \frac{\partial
f_b({\boldsymbol{q}})}{\partial q_a} \, .
\end{eqnarray}
$\det K(t)$ is then the corresponding Wronskian. Note that in the
particular case when ${\boldsymbol{\nabla}}_{{\boldsymbol{q}}}
{\boldsymbol{f}}({\boldsymbol{q}}) \equiv 0$, i.e., when the phase
flow preserves the volume of any domain in the {\em configuration}
space, the exponential in Eq.(\ref{3.50}) can be
dropped.\footnote{This corresponds to the situation when there are
no attractors in the configuration space
$\Gamma_{\boldsymbol{q}}.$} Because the exponent depends only on
the end points of the $\bar{\boldsymbol{q}}$ variable, it can be
removed by performing the trace over $\bar{{\boldsymbol{q}}}$.
As a result we can cast the quantum-mechanical partition function
(or generating functional) $Z$ into the form
\begin{eqnarray}
Z \ &=& \ {\mathcal{N}} \int
{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}{\overline{\boldsymbol{q}}}
\ \delta[{{\boldsymbol{q}}} - {\boldsymbol{q}}_{\rm cl}]
\delta[\overline{{\boldsymbol{q}}} -
\overline{{\boldsymbol{q}}}_{\rm cl}] \ \exp\left[\int_{t_1}^{t_2}
[{\boldsymbol{J}}(t){\boldsymbol{q}}(t) +
\bar{{\boldsymbol{J}}}(t)\bar{{\boldsymbol{q}}}(t)]dt\right]\nonumber
\\
&=& \ {\mathcal{N}} \int {\mathcal{D}} q_a \ \delta[q_a - (q_a)_{\rm
cl}] \ \exp\left[\int^{t_2}_{t_1} dt \ J_a(t) q_a(t) \right]\, .
\label{3.51}
\end{eqnarray}
Here we have used the doubled vector notation $q_a = \{{\boldsymbol{q}},
\bar{\boldsymbol{q}}\}$ and $J_a \equiv \{{\boldsymbol{J}},
\bar{\boldsymbol{J}}\}$.
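The ratio of Wronskians appearing in (\ref{3.50}) is a consequence of Abel's (Liouville's) formula: for the linearized $\bar{q}$-system one has $\det K(t_2) = \det K(t_1)\,\exp[-\int_{t_1}^{t_2}{\boldsymbol{\nabla}}_{{\boldsymbol{q}}}{\boldsymbol{f}}\,dt]$. This can be checked numerically; a sketch assuming the illustrative linear flow $f({\boldsymbol{q}}) = (q_2 - a q_1, -q_1)$ with ${\boldsymbol{\nabla}}_{{\boldsymbol{q}}}{\boldsymbol{f}} = -a$, a choice not taken from the paper:

```python
import math

a = 0.3
# Jacobian J[a][b] = d f_a / d q_b for f(q) = (q2 - a*q1, -q1)
J = [[-a, 1.0], [-1.0, 0.0]]

def rhs(K):
    # Kdot = -J^T K, the fundamental-matrix equation of
    # d/dt qbar_a = - qbar_b * d f_b / d q_a
    return [[-sum(J[b][i] * K[b][j] for b in range(2)) for j in range(2)]
            for i in range(2)]

def rk4(K, h):
    def add(X, Y, c):
        return [[X[i][j] + c * Y[i][j] for j in range(2)] for i in range(2)]
    k1 = rhs(K)
    k2 = rhs(add(K, k1, 0.5 * h))
    k3 = rhs(add(K, k2, 0.5 * h))
    k4 = rhs(add(K, k3, h))
    return [[K[i][j] + h / 6.0 * (k1[i][j] + 2 * k2[i][j]
                                  + 2 * k3[i][j] + k4[i][j])
             for j in range(2)] for i in range(2)]

K = [[1.0, 0.0], [0.0, 1.0]]   # K(t1) = identity
T = 2.0
n = 2000
h = T / n
for _ in range(n):
    K = rk4(K, h)

det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
# Abel's formula: det K(T) = exp(-int_0^T div f dt) = exp(a*T)
assert abs(det - math.exp(a * T)) < 1e-6
```

For divergence-free $f$ the exponent vanishes and $\det K$ is constant, which is the special case noted in the text.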
\section{Path integral formulation of classical mechanics
- configuration-space approach}\label{SEc3}
Expressions (\ref{4.2}) and (\ref{3.51}) formally coincide with the
path-integral formulation of classical mechanics in configuration
space proposed by Gozzi~\cite{GozziI} and further developed by
Gozzi, Reuter, and Thacker~\cite{GozziII} (see also Ref.~\cite{Elze2}
for recent applications). Let us briefly review the aspects of this
formalism that will be needed here. Consider the path-integral representation
of the generating functional of a quantum-mechanical system with
action ${\mathcal{A}}[{\boldsymbol{q}}]$:
\begin{eqnarray}
{{{Z}} }_{\rm QM} = {{\mathcal{N}}} \int {\mathcal{D}}{\boldsymbol{q}}\
e^{-i {\mathcal{A}}[{\boldsymbol{q}}]/\hbar } \exp\left[\int
{\boldsymbol{J}}(t){\boldsymbol{q}}(t)dt\right]\, . \label{@genf}
\label{4.0}
\end{eqnarray}
We assume in this context that there are no constraints that would
make the measure more complicated, as in Eq.~(\ref{frad}). Gozzi {\em
et al.} proposed to describe classical mechanics by a generating
functional of the form (\ref{@genf}) with a modified
integration measure which gives equal weight to all classical
trajectories and zero weight to all others:
\begin{eqnarray}
{{{Z}} }_{\rm CM} = \tilde{{{\mathcal{N}}}} \int
{\mathcal{D}}{\boldsymbol{q}} \ \delta[{\boldsymbol{q}}-
{\boldsymbol{q}}_{\rm cl}]
\exp\left[\int
{\boldsymbol{J}}(t){\boldsymbol{q}}(t)dt\right]\, . \label{4.3}
\end{eqnarray}
Although the form of the partition function (\ref{4.3}) is not
derived but {\em postulated}, we show in Appendix B that it can be
heuristically understood either as the ``classical" limit of the
stochastic-quantization partition function (cf. Appendix BI), or
as the classical limit of the closed-time path integral
for the transition probability of systems coupled to a heat bath
(cf. Appendix BII). This, in turn, indicates that it would be
formally more correct to associate (\ref{4.3}) with the {\em
probability} of transition or (via the stochastic-quantization
passage) with the {\em Euclidean} amplitude of
transition~\cite{Zinn-JustinII}. Although (\ref{4.3}) cannot in
general be obtained from (\ref{4.0}) by a semiclassical limit {\em
\`{a} la} WKB (as can be recognized from the absence of the phase
factor $\exp(i{\mathcal{A}}(q_{\rm cl})/\hbar)$ in (\ref{4.3})), it
may happen that even ordinary transition amplitudes possess this
form. This is the case, for instance, when the number of degrees of
freedom is doubled or when one deals with the closed-time-path
formulation of thermal quantum theory. Yet, whatever the origin
or motivation for (\ref{4.3}), it is its formal structure and
mathematical implications that will interest us here most.
To proceed we note that an alternative way of writing (\ref{4.3})
is
\begin{eqnarray}
{{{Z}} }_{\rm CM}\ = \ \tilde{{{\mathcal{N}}}} \int
{\mathcal{D}}{\boldsymbol{q}} \ \delta\left[ \frac{\delta
{\mathcal{A}}}{\delta {\boldsymbol{q}}} \right] \ \det \left| \frac{\delta^2
{\mathcal{A}} }{\delta q_a (t) \ \delta q_b(t')} \right| \ \exp\left[\int
{\boldsymbol{J}}(t){\boldsymbol{q}}(t)dt\right]\, . \label{4.4}
\end{eqnarray}
By representing the $\delta$ functional in the usual way as a
functional Fourier integral,
\begin{eqnarray}
\delta\left[ \frac{\delta {\mathcal{A}}}{\delta {\boldsymbol{q}}} \right] =
\int {\mathcal{D}} {\boldsymbol{\lambda}} \ \exp\left( i
\int_{t_1}^{t_2} dt \ {\boldsymbol{\lambda}}(t) \frac{\delta
{\mathcal{A}}}{\delta {\boldsymbol{q}}(t)} \right)\, ,
\end{eqnarray}
and the functional determinant as a functional integral over two
real time-dependent Grassmannian {\em ghost variables\/} $c_a(t)$
and $\overlinen{c}_a(t)$,
\begin{eqnarray}
\det \left| \frac{\delta^2 {\mathcal{A}} }{\delta q_a (t) \ \delta q_b(t')}
\right| = \int {\mathcal{D}}{\boldsymbol{c}} {\mathcal{D}}
\overlinen{{\boldsymbol{c}}} \ \exp\left[ \int_{t_1}^{t_2}dt
\int_{t_1}^{t_2} dt' \ \overlinen{c}_a(t) \frac{\delta^2 {\mathcal{A}}
}{\delta{q_a} (t) \ \delta{q_b}(t')} \ {c_b}(t')\right]\, ,
\end{eqnarray}
we obtain
\begin{eqnarray}
{{{Z}} }_{\rm CM} \ = \ \int
{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}{\boldsymbol{\lambda}}{\mathcal{D}}{\boldsymbol{c}}
{\mathcal{D}}\overlinen{{\boldsymbol{c}}} \ \exp\left[ i
{\mathcal{S}} + \int_{t_1}^{t_2} dt \
{{\boldsymbol{J}}}(t){{\boldsymbol{q}}}(t) \right]\, ,
\label{4.5a}
\end{eqnarray}
with the new action
\begin{eqnarray}
{\mathcal{S}}[{\boldsymbol{q}}, \overlinen{{\boldsymbol{c}}},
{\boldsymbol{c}}, {\boldsymbol{\lambda}}] \equiv \
\int_{t_1}^{t_2} dt \ {\boldsymbol{\lambda}}(t)\frac{\delta
{\mathcal{A}}}{\delta {\boldsymbol{q}}(t)} - i\int_{t_1}^{t_2}dt
\int_{t_1}^{t_2} dt' \ \overlinen{c}_a(t) \frac{\delta^2 {\mathcal{A}}
}{\delta{q_a} (t) \ \delta{q_b}(t')} \ {c_b}(t')\, . \label{4.5}
\end{eqnarray}
Since ${{{Z}} }_{\rm CM}$ together with the action (\ref{4.5})
formally result from the classical limit of the
stochastic-quantization partition function, it comes as no
surprise that ${\mathcal{S}}$ exhibits BRST (and anti-BRST)
supersymmetry. It is simple to check that $\mathcal{S}$ does not
change under the supersymmetry transformations
\begin{eqnarray}
\delta_{{\rm BRST\,}} {\boldsymbol{q}} = \overlinen{\varepsilon}
{\boldsymbol{c}}\,, \;\; \delta_{{\rm BRST\,}} {\boldsymbol{c}} = 0\, , \;\;
\delta_{{\rm BRST\,}} \overlinen{{\boldsymbol{c}}} = -i\overlinen{\varepsilon}
{\boldsymbol{\lambda}}\, , \;\; \delta_{{\rm BRST\,}} {\boldsymbol{\lambda}} =
0\, , \label{4.6}
\end{eqnarray}
where $\overlinen{\varepsilon}$ is a Grassmann-valued parameter
(the corresponding anti-BRST transformations are related with
(\ref{4.6}) by charge conjugation). Indeed, the variations of the
two terms in (\ref{4.5}) read
\begin{eqnarray}
&&\delta_{{\rm BRST\,}} \left[\int_{t_1}^{t_2} dt \
{\boldsymbol{\lambda}}(t)\frac{\delta {\mathcal{A}}}{\delta
{\boldsymbol{q}}(t)} \right] \ = \
\overlinen{\varepsilon}\int_{t_1}^{t_2} dt \int_{t_1}^{t_2} dt' \
\lambda_a(t) \frac{\delta^2 {\mathcal{A}}}{\delta q_a(t) \delta q_b(t')} \
c_b(t')\,
,\label{4.7} \\
&&~\nonumber \\
&&\delta_{{\rm BRST\,}} \left[ \int_{t_1}^{t_2}dt \int_{t_1}^{t_2} dt' \
{\overlinen{c}}_a(t) \frac{\delta^2 {\mathcal{A}}}{\delta q_a(t) \delta
q_b(t')} \ c_b(t') \right] \ = \ -i \overlinen{\varepsilon}
\int_{t_1}^{t_2}dt \int_{t_1}^{t_2} dt' \ \lambda_a(t)
\frac{\delta^2 {\mathcal{A}}}{\delta q_a(t) \delta q_b(t')} \ c_b(t')\nonumber
\\
&& ~\nonumber \\
&&\mbox{\hspace{15mm}} + \ \int_{t_1}^{t_2}dt \int_{t_1}^{t_2} dt'
\int_{t_1}^{t_2}dt'' \ {\overlinen{c}}_a(t) \frac{\delta^3
{\mathcal{A}}}{\delta q_a(t) \delta q_b(t') \delta q_c(t'')} \
\overlinen{\varepsilon} \ c_c(t'') c_b(t')\, . \label{4.8}
\end{eqnarray}
\\
\noindent The second term on the RHS of (\ref{4.8}) vanishes because the
third functional derivative of ${\mathcal{A}}$ is symmetric under $b\leftrightarrow
c$ whereas the product $c_c c_b$ is antisymmetric. Inserting
Eqs.(\ref{4.7}) and (\ref{4.8}) into the action we clearly find
$\delta_{{\rm BRST\,}}{\mathcal{S}} = 0$. As noted
in~\cite{GozziII}, the ghost fields $\overlinen{{\boldsymbol{c}}}$
and ${\boldsymbol{c}}$ are mandatory at the classical level as their
r\^{o}le is to cut off the fluctuations {\em perpendicular\/} to the
classical trajectories. On the formal side,
$\overlinen{{\boldsymbol{c}}}$ and ${\boldsymbol{c}}$ may be
identified with Jacobi fields~\cite{GozziII,DeWitt}. The
corresponding BRST charges
are related to Poincar\'{e}-Cartan integral
invariants~\cite{GozziIII}.
By analogy with the stochastic quantization the path integral
(\ref{4.5a}) can, of course, be rewritten in a compact form with
the help of a superfield~\cite{GozziI,Zinn-JustinII}
\begin{eqnarray}
\Phi_a(t, \theta, \overlinen{\theta}) \ = \ q_a(t) + i\theta c_a(t)
-i\overlinen{\theta} \overlinen{c}_a(t) + i \overlinen{\theta}\theta
\lambda_a(t)\, , \label{3.23}
\end{eqnarray}
in which $\theta$ and $\overlinen{\theta}$ are anticommuting
coordinates extending the configuration space of the $q_a$ variables to a
superspace. The latter is nothing but the degenerate case of a
supersymmetric field theory in $d=1$ in the superspace formalism of
Salam and Strathdee~\cite{SS1}. In terms of superspace variables we
see that
\begin{eqnarray}
\int d\overlinen{\theta} d\theta \ {{\mathcal{A}}}[{\boldsymbol{\Phi}}] &=&
\int dt d\overlinen{\theta} d\theta \ L({\boldsymbol{q}}(t) +
i\theta {\boldsymbol{c}}(t) - i \overlinen{\theta}
\overlinen{\boldsymbol{c}}(t) + i
\overlinen{\theta}\theta \boldsymbol{\lambda}(t) )\nonumber \\
&=& \int d\overlinen{\theta} d\theta \ {{\mathcal{A}}}[{\boldsymbol{q}}] + \int
dt d\overlinen{\theta} d\theta \ \left( i\theta {\boldsymbol{c}}(t)
- i \overlinen{\theta} \overlinen{\boldsymbol{c}}(t) +
i\overlinen{\theta}\theta \boldsymbol{\lambda}\right) \frac{\delta
{{\mathcal{A}}}}{\delta
\boldsymbol{q}(t)}\nonumber \\
&& + \ \int dt dt' d\overlinen{\theta} d\theta \ \theta
c_a(t)\frac{\delta^2 {{\mathcal{A}}}}{\delta q_a(t) \delta q_b(t')} \
\overlinen{\theta} \overlinen{c}(t').
\end{eqnarray}
Using the standard integration rules for Grassmann variables, this
becomes equal to $-i{\mathcal{S}}$. Together with the identity
${\mathcal{D}} {\boldsymbol{\Phi}} =
{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}
{\boldsymbol{c}}{\mathcal{D}}\overlinen{{\boldsymbol{c}}}
{\mathcal{D}}{\boldsymbol{\lambda}}$ we may therefore express the
classical partition functions (\ref{4.3}) and (\ref{4.4}) as a
supersymmetric path integral with fully fluctuating paths in
superspace,
\begin{eqnarray}
{{{Z}} }_{\rm CM} \ = \ \int {\mathcal{D}} {\boldsymbol{\Phi}} \
\exp\left\{- \int d\theta d\overlinen{\theta} \
{\mathcal{A}}[{\boldsymbol{\Phi}}](\theta, \overlinen{\theta}) + \int dt
d\theta d\overlinen{\theta} \ {\boldsymbol{\Gamma}}(t, \theta,
\overlinen{\theta}){\boldsymbol{\Phi}}(t, \theta,
\overlinen{\theta})\right\}\, . \label{3.24}
\end{eqnarray}
Here we have defined the supercurrent ${\boldsymbol{\Gamma}}(t,
\theta, \overlinen{\theta}) = \overlinen{\theta} \theta
{\boldsymbol{J}}(t)$.
It is interesting to find the most general form of an action ${\mathcal{A}}$
for which the classical path integral (\ref{3.24}) coincides with
the quantum-mechanical path integral of the system, or, in other
words, for which a theory would possess at the same time
deterministic and quantal character. As already mentioned, the
Grassmannian ghost variables are responsible for the
deterministic nature of the partition function. It is obvious that
if the ghost sector could somehow be factored out we would extend
the path integration to all fluctuating paths in
${\boldsymbol{q}}$-space. By formally writing
\begin{eqnarray}
\frac{\delta^2 {\mathcal{A}} }{\delta{q_k} (t) \ \delta{q_l}(t')} \ = \
{\mathcal{F}}_{kl}\left( t,t', q_m, \frac{\delta {\mathcal{A}}}{\delta q_n}\right)\, ,
\;\;\;\;\; k,l,m,n = 1, \ldots, N\, ,
\label{4.9}
\end{eqnarray}
we see that the factorization occurs if and only if the
(distribution-valued) functional ${\mathcal{F}}_{kl}(\ldots)$ is
$q_m$-independent when evaluated on shell, i.e.,
${\mathcal{F}}_{kl}(t,t', q_m, 0 ) = F_{kl}(t,t')$. This is a simple
consequence of Eq.(\ref{4.4}) where the determinant is factorizable
if and only if it is ${\boldsymbol{q}}$-independent at $\delta
{\mathcal{A}}/\delta {{\boldsymbol{q}}} = 0$.
In order to provide a correct Feynman weight to every path we must,
in addition, identify
\begin{eqnarray}
{\mathcal{A}}[{\boldsymbol{q}}] = \int_{t_1}^{t_2} dt \ \lambda_m
\frac{\delta {\mathcal{A}}[{\boldsymbol{q}}]}{\delta q_m}
\, , \label{3.26}
\end{eqnarray}
as can be seen from (\ref{4.5}) after factoring out the second term.
Assuming that $L = L(q_l, \dot{q}_l)$ (i.e., a scleronomic system)
and that the Hessian is regular,
the condition (\ref{3.26}) shows that
$\lambda_k = \lambda_k(q_l, \dot{q}_l)$. In addition,
it is obvious on dimensional grounds
that $\left[ \lambda_l \right] =
\left[ q_l \right]$. This, in turn, implies that $\lambda_k =
\alpha_{kl}q_l$, where $\alpha_{kl}$ is some real ($t$-independent)
matrix. To determine the latter we functionally expand ${\mathcal{A}}$ in
(\ref{3.26}) around $q_k$ and compare both sides. The resulting
integrability condition reads:
\begin{eqnarray}
\left(\delta_{ji} -\alpha_{ji}\right)\frac{\delta {\mathcal{A}}}{\delta
q_{j}(t)}\ \delta (t-t') \ = \ \alpha_{lj} \ q_j(t) \
\frac{\delta^2 {\mathcal{A}}}{\delta q_l(t) \delta q_i(t')}\, ,
\label{3.28}
\end{eqnarray}
which is evidently compatible with the condition (\ref{4.9}).
When $\alpha_{ij}$ is diagonalizable we can pass to a polar basis
and write (\ref{3.26}) in a more manageable form, namely
\begin{eqnarray}
{\mathcal{A}}[{\boldsymbol{q}}]\ = \ \int_{t_1}^{t_2} dt \ \sum_i \alpha_i
q_i(t) \frac{\delta {\mathcal{A}}[{\boldsymbol{q}}]}{\delta q_i(t)}\, .
\label{3.33}
\end{eqnarray}
For simplicity, we do not use new symbols for the transformed
${\boldsymbol{q}}$'s.
To proceed we assume that the kinetic energy is quadratic in
${\boldsymbol{q}}$ and $\dot{{\boldsymbol{q}}}$. Then
Eq.(\ref{3.33}) implies that $L_{\rm kin}$ must be linear in
$\dot{{\boldsymbol{q}}}$. As such, one can always write (modulo a
total derivative)
\begin{eqnarray} \label{@withB}
L_{\rm kin} = \sum_{i,j} {{B}}_{ij} \ q_i(t) \dot{q}_j(t)\, ,
\end{eqnarray}
with ${{B}}$ being an upper triangular matrix. Comparing $L_{\rm
kin}$ on both sides of (\ref{3.33}) we arrive at the equation
\begin{eqnarray}
(\alpha_m - 1) {{B}}_{im} = {{B}}_{mi} \alpha_m \,
\, \Rightarrow \, \, ({B} -
{B}^{\top}){\boldsymbol{\alpha}} = {B}\, ,
\label{3.37}
\end{eqnarray}
with no Einstein summation convention applied here. Because ${{B}}$ is
upper triangular, the first part of Eq.(\ref{3.37}) implies that
the only eigenvalues of $\alpha_{ij}$ are $1$ and $0$. Thus,
${\boldsymbol{\alpha}}$ can be reduced to the block form
\begin{eqnarray}
{\boldsymbol{\alpha}} \ = \ \left[ \begin{tabular}{c|c} 0 & 0 \\
\hline 0 & \ide
\end{tabular}
\right]\, , \label{3.38}
\end{eqnarray}
where $\ide$ is an $r\times r$ ($r\leq N$) unit matrix.
Using the equation $({{B}} -
{{B}}^{\top}){\boldsymbol{\alpha}} = {{B}}$ we see that
${{B}}$ has the block structure
\begin{eqnarray}
{{B}} \ = \
\left[
\begin{tabular}{c|c} 0 &
${{B}}_2$
\\
\hline
0 & 0 \\
\end{tabular}
\right] \, , \label{3.39}
\end{eqnarray}
where ${{B}}_2$ is an $(N-r) \times r$ matrix. To determine $r$ we
use the fact that ${\boldsymbol{\alpha}}$ is idempotent, i.e.,
${\boldsymbol{\alpha}}^2 =
{\boldsymbol{\alpha}}$. Multiplying
$({{B}} -
{{B}}^{\top}){\boldsymbol{\alpha}} = {{B}} $ by
$\boldsymbol{\alpha}$ we find
\begin{eqnarray}
\begin{array}{ll}
{{B}}{\boldsymbol{\alpha}} = {{B}}\, ,~~~~~
{{B}}^\top {\boldsymbol{\alpha}} = 0 \, .
\end{array}
\end{eqnarray}
From ${{B}}{\boldsymbol{\alpha}} = {{B}}$ it follows that rank$({{B}})=
{\mbox{rank}}({\boldsymbol{\alpha}})=r$, whereas ${{B}}^\top(\ide -
{\boldsymbol{\alpha}}) = {{B}}^\top$ implies that rank$({{B}}^\top)=
{\mbox{rank}}(\ide - {\boldsymbol{\alpha}})$. Utilizing the identity
${\mbox{rank}}({{B}}) = {\mbox{rank}}({{B}}^\top)$ we derive
$r = {\mbox{rank}}({\boldsymbol{\alpha}})
= {\mbox{rank}}(\ide - {\boldsymbol{\alpha}}) = (N-r)$, and thus $r
= N/2$. Hence the condition (\ref{3.33}) can be satisfied only for an
even number $N$ of degrees of freedom. An immediate further
consequence of (\ref{3.39}) is that we can rewrite (\ref{@withB})
as
\begin{eqnarray}
L_{\rm kin} = \sum_{i,j =1}^{N/2} {{B}}_{i,(N/2+j)} \ \dot{q}_i
q_{N/2 +j} \, . \label{3.36}
\end{eqnarray}
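As a sanity check (ours, not part of the original derivation), the block algebra above is easy to verify numerically for a small system. The sketch below takes $N=4$, $r=2$, and an arbitrarily chosen block $B_2$, and confirms relation (\ref{3.37}), its consequences, and the rank counting that forces $r = N/2$:

```python
import numpy as np

# Toy check with N = 4 degrees of freedom, so r = N/2 = 2.
N, r = 4, 2

# alpha in the block form (3.38): diag(0_{N-r}, 1_r)
alpha = np.zeros((N, N))
alpha[N - r:, N - r:] = np.eye(r)

# B in the block form (3.39): only the upper-right (N-r) x r block
# is nonzero; the entries of B_2 are arbitrary (chosen full-rank here)
B = np.zeros((N, N))
B[:N - r, N - r:] = np.array([[1.0, 2.0], [3.0, 5.0]])

# The defining relation (3.37): (B - B^T) alpha = B
assert np.allclose((B - B.T) @ alpha, B)

# Its consequences: B alpha = B and B^T alpha = 0
assert np.allclose(B @ alpha, B)
assert np.allclose(B.T @ alpha, np.zeros((N, N)))

# Rank bookkeeping behind r = N/2:
# rank(B) = rank(alpha) = r and rank(1 - alpha) = N - r
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(alpha) == r
assert np.linalg.matrix_rank(np.eye(N) - alpha) == N - r
```

The same check fails for odd $N$, since no integer $r$ satisfies $r = N - r$.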
Denoting $\alpha_{N/2 + i}$, $q_{N/2 + i}$ and $\lambda_{N/2 +i}$
($i = 1,\ldots,N/2$) as ${\bar{\alpha}}_{i}$, $\bar{q}_i$, and
$\bar{\lambda}_{i}$, respectively [hence, ${\boldsymbol{\lambda}}
= {\boldsymbol{0}}$ and $\bar{{\boldsymbol{\lambda}}} =
\bar{{\boldsymbol{q}}}~$], then Eq.(\ref{3.33}) reads
\begin{eqnarray}
\bar{{\mathcal{A}}}[{\boldsymbol{q}}, \bar{{\boldsymbol{q}}}] =
\int_{t_1}^{t_2} dt \ \bar{{\boldsymbol{q}}}(t) \frac{\delta
\bar{{\mathcal{A}}}[{\boldsymbol{q}}, \bar{{\boldsymbol{q}}}]}{\delta
\bar{{\boldsymbol{q}}}(t) }\, . \label{3.35}
\end{eqnarray}
Here $\bar{{\mathcal{A}}}[{\boldsymbol{q}}, \bar{{\boldsymbol{q}}}] =
{{\mathcal{A}}}[q_1, \ldots, q_{2N}]$. The result (\ref{3.35}) can be
obtained also in a different way. Indeed, in Appendix C we show that
(\ref{3.33}) is a so-called Euler-like functional
\begin{eqnarray}
{\mathcal{A}}[{\boldsymbol{q}}] = \int_{t_1}^{t_2} dt \ r(t) L\!\left(
r^{-\alpha_1}(t)q_1(t), \ldots,r^{-\alpha_N}(t)q_N(t),
\frac{d\left(r^{-\alpha_1}(t)q_1(t)\right)}{dt}, \ldots,
\frac{d\left(r^{-\alpha_N}(t)q_N(t)\right)}{dt}
\right)
\, ,\label{3.31}
\end{eqnarray}
with $r(t)$ being an arbitrary function of $q_k$ whose variations
vanish at the ends, $\delta r(t_1) = \delta r(t_2) = 0$, if all $
\delta q_k$'s have this property.
In particular, we may
choose $r$ to be any finite power $ q_k^{1/\alpha_k}$
(for $k = 1, \ldots, N$), in which case
\begin{eqnarray}
{\mathcal{A}}[{\boldsymbol{q}}] = \int_{t_1}^{t_2} dt \ q_k^{1/\alpha_k}
L\!\left( \frac{q_1}{q_k^{\alpha_1/\alpha_k}},
\dots, \stackrel{\stackrel{\mbox{\footnotesize{$k$}}}{\downarrow}}{1},
\dots, \frac{q_N}{q_k^{\alpha_N/\alpha_k}},
\frac{d\left(q_1/q_k^{\alpha_1/\alpha_k}\right)}{dt}, \ldots,
\stackrel{\stackrel{\mbox{\footnotesize{$k$}}}{\downarrow}}{0}, \dots,
\frac{d\left(q_N/q_k^{\alpha_N/\alpha_k}\right)}{dt}
\right).
\label{3.30}
\end{eqnarray}
Assuming, as before, that the kinetic term in $L$ is quadratic in
${\boldsymbol{q}}$ and $\dot{\boldsymbol{q}}$, we arrive at
${\boldsymbol{\alpha}}$ as in (\ref{3.38}), and the action
(\ref{3.30}) reduces again to (\ref{3.35}).
One can incorporate the constraints on $\alpha_i$ (or $\lambda_i$)
by inserting a corresponding $\delta$-functional into the path
integral (\ref{4.5a}). This leads to the most general generating
functional with the above-stated property:
\begin{eqnarray}
{{{Z}} }_{\rm CM} &=& \int
{\mathcal{D}}{{\boldsymbol{q}}}{\mathcal{D}}{\bar{{\boldsymbol{q}}}}
{\mathcal{D}}{{\boldsymbol{\lambda}}}
{\mathcal{D}}{\bar{{\boldsymbol{\lambda}}}} \
\delta[{\boldsymbol{\lambda}}]
\delta[\bar{{\boldsymbol{\lambda}}} -
\bar{{\boldsymbol{q}}}] \ \exp\!\left[i \!\!\int_{t_1}^{t_2}
dt \ {\boldsymbol{\lambda}}\frac{\delta \bar
{\mathcal{A}}[{\boldsymbol{q}}, \bar{{\boldsymbol{q}}}]}{\delta
{\boldsymbol{q}} } + i \!\!\int_{t_1}^{t_2} dt \
\bar{{\boldsymbol{\lambda}}}\ \frac{\delta \bar
{\mathcal{A}}[{\boldsymbol{q}}, \bar{{\boldsymbol{q}}}]}{\delta
\bar{{\boldsymbol{q}}} } + \int_{t_1}^{t_2} dt \sum_{k=1}^N\! J_k q_k \right]\nonumber \\
&~& \nonumber \\
&=& \int
{\mathcal{D}}{{\boldsymbol{q}}}{\mathcal{D}}{\bar{{\boldsymbol{q}}}}
\ \exp\!\left[ i\!\!\int_{t_1}^{t_2} dt \
\bar{{\boldsymbol{q}}}\ \frac{\delta \bar
{\mathcal{A}}[{\boldsymbol{q}}, \bar{{\boldsymbol{q}}}]}{\delta
\bar{{\boldsymbol{q}}} }
+ \int_{t_1}^{t_2} dt \sum_{k=1}^N J_k q_k \right] \nonumber \\
&~& \nonumber \\
&=& \int
{\mathcal{D}}{{\boldsymbol{q}}}{\mathcal{D}}{\bar{{\boldsymbol{q}}}}
\ \exp\!\left[ i \!\!\int_{t_1}^{t_2} dt \,\bar{L} + \int dt
\sum_{k=1}^N J_k q_k \right]\, .
\label{3.27}
\end{eqnarray}
An irrelevant
normalization factor has been dropped.
The Lagrangian $\bar{L}$ coincides precisely
with the Lagrangian (\ref{lag1}), and describes therefore
't\,Hooft's deterministic system. Hence within the above assumptions
there are no other systems with the peculiar property that their
full quantum properties are classical. Among other things, the
latter also indicates that the Koopman-von~Neumann operatorial
formulation of classical mechanics~\cite{KN1} when applied to
't~Hooft systems must agree with its canonically quantized
counterpart.
\section{'t\,Hooft's information loss as a first-class primary
constraint}\label{SEc4}
As observed in Section~II, the Hamiltonian (\ref{eq.1.1}) is not
bounded from below, and this is true for any function $f_i$. Thus,
no deterministic system with dynamical equations $\dot{q}_i =
f_i({\boldsymbol{q}})$ can describe a physically acceptable {\em
quantum world\/}. Its Hamiltonian would not be stable and we could
build a perpetuum mobile. To deal with this problem we will employ
't\,Hooft's procedure~\cite{tHooft}. We
assume that the system (\ref{eq.1.1}) has $n$
conserved, irreducible charges $C_i$, i.e.,
\begin{eqnarray}
\{ C_i, H \} = 0\, , \;\;\;\; i = 1, \ldots, n\, . \label{@char}
\end{eqnarray}
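As a concrete illustration (our toy example, assuming the 't Hooft form $H = \sum_a p_a f_a({\boldsymbol{q}})$ for the deterministic Hamiltonian), take $N = 2$ with $f_1 = q_2$ and $f_2 = -q_1$. Then $C = q_1^2 + q_2^2$ is a conserved charge depending on ${\boldsymbol{q}}$ alone, while $H$ itself, being linear in the momenta, is unbounded from below:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

def poisson(f, g, qs, ps):
    """Canonical Poisson bracket {f, g}."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

# 't Hooft-type Hamiltonian H = sum_a p_a f_a(q) with f = (q2, -q1)
H = p1 * q2 - p2 * q1

# Candidate charge C = q1^2 + q2^2, a function of q alone
C = q1**2 + q2**2

# C is conserved: {C, H} = 0, as required by (\ref{@char})
assert sp.simplify(poisson(C, H, (q1, q2), (p1, p2))) == 0

# H is linear in the momenta, hence unbounded from below:
# flipping the sign of (p1, p2) flips the sign of H
assert sp.simplify(H.subs({p1: -p1, p2: -p2}, simultaneous=True) + H) == 0
```

This is the familiar example in which the deterministic equations $\dot{q}_1 = q_2$, $\dot{q}_2 = -q_1$ describe rotation in the $(q_1,q_2)$ plane, with the squared radius conserved.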
In order to enforce a lower bound upon $H$,
't\,Hooft split the Hamiltonian
as
$H = H_+ - H_-$ with both $H_+$ and $H_-$
having lower bounds. Then he imposed the condition that $H_-$
should be zero on the physically accessible part of phase space,
i.e.,
\begin{eqnarray}
H_- \ \approx \ 0\, . \label{4.1}
\end{eqnarray}
This will make the actual dynamics
governed by the reduced Hamiltonian
$H_+$ which is
bounded from below,
by definition.
To ensure that the above splitting is conserved in time one must
require that $\{ H_-, H \} = \{ H_+, H \} = 0$. The latter is
equivalent to the statement that $\{ H_+, H_- \} = 0$. Since
the
charges
$C_i$ in (\ref{@char}) form an irreducible set,
the Hamiltonians
$H_+$
and $H_-$
must
be
functions of the charges and $H$:
$H_+ = F_+(C_k,H)$ and $H_- = F_-(C_k,H)$.
There is a certain amount of
flexibility in finding $F_-$ and $F_+$, but for convenience's sake
we confine ourselves to the following choice
\begin{eqnarray}
H_+ \ = \ \frac{[H + \sum_i a_i(t) C_i]^2}{4 \sum_i a_i(t) C_i} \,
, \; \; H_- \ = \ \frac{[H - \sum_i a_i(t) C_i]^2}{4 \sum_i a_i(t)
C_i} \, , \label{FCH}
\end{eqnarray}
where $a_i(t)$ are independent of ${\boldsymbol{q}}$ and
${\boldsymbol{p}}$ and will be specified later. The lower bound is
then achieved by choosing $\sum_i a_i(t) C_i$ to be positive
definite. In the following it will also be important to select the
combination of $C_i$'s in such a way that it depends solely on
${\boldsymbol{q}}$ (this condition may not necessarily be achievable
for general $f_a({\boldsymbol{q}})$). Thus, by imposing $H_- \approx
0$ we obtain the weak reduced Hamiltonian $H \approx H_+ \approx
\sum_i a_i(t)C_i$.
The constraint (\ref{4.1}) (resp.\ (\ref{FCH})) can be motivated by
dissipation or information loss~\cite{tHooft3,BJV1,BMM1}. In
Appendix D we show that the {\em explicit} constraint (\ref{4.1})
does not generate any new (i.e., secondary) constraints when added
to the existing constraints (\ref{2.5}). In addition, this new set
of constraints corresponds to $2N$ second-class constraints and {\em
one} first-class constraint (see also Appendix D).
It is well known in the theory of constrained systems that the
existence of first-class constraints signals the presence of a gauge
freedom in Hamiltonian theory. This is so because the Lagrange
multipliers affiliated with first-class constraints cannot be fixed
from dynamical equations alone~\cite{Dir}. The time evolution of
observable (physical) quantities, however, cannot be affected by the
arbitrariness in Lagrange multipliers. To remove this superfluous
freedom that is left in the formalism we must pick up a gauge, i.e.,
impose a set of conditions that will eliminate the above redundancy
from the description. It is easy to see that the number of
independent gauge conditions must match the number of first-class
constraints. Indeed, the requirement on a physical quantity (say
$f$) to have a unique time evolution on the constraint submanifold
${\mathcal{M}}$, i.e.,
\begin{eqnarray}
\dot{f} \ \approx \ \{ f, \bar{H}\} \ + \ \sum_{i=1}^{m}v_i \{ f,
\varphi_i\} \ + \ \sum_{k=1}^{m'}u_k \{ f, \phi_k \}\, ,
\end{eqnarray}
implies that
\begin{eqnarray}
\{ f, \varphi_i\} \ \approx \ 0\, . \label{4.24}
\end{eqnarray}
The constraints $\varphi_i$ and $\phi_k$ represent first and
second-class constraints, respectively. First-class constraints
have, by definition, weakly vanishing Poisson's brackets with all
other constraints; any other constraint that is not first class is
second-class. While the Lagrange multipliers $u_k$ can be uniquely
fixed from the dynamics by consistency conditions (c.f. Appendices A
and D) this cannot be done for the $v_i$'s. In this way (\ref{4.24})
represents an obligatory condition for a quantity $f$ to be
observable. Equation (\ref{4.24}) can be considered as a set of $m$
first-order differential equations on the constrained surface with
the relation $\{\varphi_i, \varphi_j \} \ \approx \ 0$ serving as
the integrability condition~\cite{Dir,Sunder}. Thus, $f$ is uniquely
defined by its values on the submanifold of the initial conditions
for Eq.(\ref{4.24}). As a result, the above initial value surface
describes the true degrees of freedom. By denoting the dimension of
the constraint manifold as $D$ we see that the dimension of the
submanifold of initial conditions must be $D-m$. We can take this
submanifold to be a surface $\Gamma^*$ specified by the equations
\begin{eqnarray}
\varphi_i \ &=& \ 0\, , \;\;\;\;\;\; i = 1, \ldots, m\, ,\nonumber \\
\phi_k \ &=& \ 0 \, , \;\;\;\;\;\; k = 1, \ldots, m'\, ,\nonumber \\
\chi_l \ &=& \ 0 \, , \;\;\;\;\;\; l = 1, \ldots, m\, .
\label{4.24b}
\end{eqnarray}
The $m$ subsidiary conditions $\chi_l$ are the sought gauge
constraints. The functions $\chi_l$ must clearly satisfy the
condition
\begin{eqnarray}
\det|\!| \{\chi_l , \varphi_i \} |\!| \ \neq \ 0\, , \label{4.25}
\end{eqnarray}
as only in such a case we can determine specific values for the
multipliers $v_i$ from the dynamical equation for $\chi_l$ (this is
because the time derivative of any constraint, and hence also
$\chi_l$, must be zero). Therefore only when the condition
(\ref{4.25}) is satisfied do the constraints (\ref{4.24b}) indeed
describe the surface of the initial conditions.
The preceding discussion implies that in our case the surface
$\Gamma^*$ is defined by
\begin{eqnarray}
\varphi({\boldsymbol{q}},\bar{\boldsymbol{q}},
{\boldsymbol{p}},\bar{\boldsymbol{p}}) \ &=& \ 0 \, ,
\mbox{\hspace{0.6cm}} \chi({\boldsymbol{q}},\bar{\boldsymbol{q}},
{\boldsymbol{p}},\bar{\boldsymbol{p}}) \ = \ 0 \, , \label{4.26}
\\
\mbox{\hspace{1cm}} \phi_i({\boldsymbol{q}},\bar{\boldsymbol{q}},
{\boldsymbol{p}},\bar{\boldsymbol{p}}) \ &=& \ 0 \, , \;\;\;\;\;\;
i = 1, \ldots, 2N\, .
\end{eqnarray}
The explicit form of $\varphi$ is found in Appendix~D where we
show that $\varphi \approx H - \sum a_i C_i$.
Apart from condition (\ref{4.25}) we shall further restrict our
choice of $\chi$ to functions satisfying the simultaneous equations
\begin{eqnarray}
\{ \chi, \phi_i \} \ = \ 0\, , \;\;\;\;\; i = 1, \ldots, 2N\, .
\label{4.27}
\end{eqnarray}
Such a choice is always possible (at least in a weak
sense)~\cite{Senj} and it will prove crucial in the following.
In order to proceed further we begin by reexamining
Eq.(\ref{3.27}). The latter basically states that
\begin{eqnarray}
Z_{\rm CM} \ = \ \int {\mathcal{D}}{\boldsymbol{q}} \
\delta\!\left[{\boldsymbol{q}} - {\boldsymbol{q}}_{c}
\right] \ \exp\left[
\int_{t_1}^{t_2} dt \ {\boldsymbol{q}}(t){\boldsymbol{J}}(t)
\right]\, . \label{3.40}
\end{eqnarray}
We may now formally invert the steps leading to Eq.(\ref{eg.1.2}),
i.e., we introduce auxiliary momentum integrations and go over to
the canonical representation of (\ref{3.40}). Correspondingly
Eq.(\ref{3.40}) can be recast into
\begin{eqnarray*}
Z_{\rm CM} = \int
{\mathcal{D}}{\boldsymbol{p}}{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}
\bar{\boldsymbol{p}}{\mathcal{D}}\bar{\boldsymbol{q}}
\sqrt{\left|\det|\!|\{ \phi_i, \phi_j \}|\!| \right|}
\prod_{i=1}^{2N} \delta[\phi_i] \exp\left[ i\! \int_{t_1}^{t_2}
\!dt \,[{\boldsymbol{p}}\dot{\boldsymbol{q}} +
\bar{\boldsymbol{p}}\dot{\bar{\boldsymbol{q}}} - H] +
\int_{t_1}^{t_2}\! dt\, [{\boldsymbol{q}}{\boldsymbol{J}} +
\bar{\boldsymbol{q}}\bar{\boldsymbol{J}}] \right] .
\end{eqnarray*}
Due to $\delta$-functions in the integration we could substitute
't~Hooft's Hamiltonian $H$ for the canonical Hamiltonian $\bar{H}$.
It should be stressed that despite its formal appearance and the
phase-space disguise, the latter is still the classical partition
function {\em \`{a} la\/} Gozzi {\em et al.}
To include the constraints (\ref{4.26}) into
(\ref{3.27}) we must be a bit cautious. A na\"{\i}ve intuition would
dictate that the functional $\delta$ functions $\delta[\chi]$ and
$\delta[\varphi]$ should be inserted into the path-integral measure
for $Z_{\rm CM}$. This would be, however, too simplistic as a mere
inclusion of $\delta$ functions into $Z_{\rm CM}$ would not
guarantee that the physical content of the theory that resides in
the generating functional $Z_{\rm CM}$ is independent of the choice
$\chi$. Indeed, utilizing the fact that the generators of gauge
transformations are the first class constraints~\cite{Sunder} we can
write that
\begin{eqnarray}
\delta \chi \ = \ \varepsilon \{\chi , \varphi \} + C \varphi \
\approx \ \varepsilon \{\chi , \varphi \}\, . \label{4.10}
\end{eqnarray}
Here $\varepsilon$ is an infinitesimal quantity. The corresponding
gauge generator $\varepsilon \varphi$ generates the infinitesimal
canonical transformations
\begin{eqnarray}
&&{\boldsymbol{q}} \rightarrow {\boldsymbol{q}}+ \delta
{\boldsymbol{q}}\,, \;\;\; {\boldsymbol{p}} \rightarrow
{\boldsymbol{p}} + \delta {\boldsymbol{p}}\,, \;\;\; \delta
{\boldsymbol{q}} = \{\varepsilon \varphi , {\boldsymbol{q}} \}\,,
\;\;\; \delta {\boldsymbol{p}} = \{\varepsilon \varphi , {\boldsymbol{p}}
\}\,,\nonumber \\
&& \bar{\boldsymbol{q}} \rightarrow \bar{\boldsymbol{q}}+ \delta
\bar{\boldsymbol{q}}\,, \;\;\; \bar{\boldsymbol{p}} \rightarrow
\bar{\boldsymbol{p}} + \delta \bar{\boldsymbol{p}}\,, \;\;\;
\delta \bar{\boldsymbol{q}} = \{\varepsilon \varphi ,
\bar{\boldsymbol{q}} \}\,, \;\;\; \delta \bar{\boldsymbol{p}} =
\{\varepsilon \varphi , \bar{\boldsymbol{p}} \}\, . \label{ct1}
\end{eqnarray}
It follows immediately that the corresponding generating function
is
\begin{eqnarray}
G({\boldsymbol{q}},\bar{\boldsymbol{q}}, {\boldsymbol{P}},
\bar{\boldsymbol{P}}) = {\boldsymbol{q}}{\boldsymbol{P}} +
\bar{\boldsymbol{q}}\bar{\boldsymbol{P}} + \varepsilon \varphi +
o(\varepsilon^2)\, .
\end{eqnarray}
The canonical transformations (\ref{ct1}) result in changing
$\varphi$ and $\phi_i$ by
\begin{eqnarray}
&&\delta \varphi \ = \ A \varphi\, , \label{4.18}\\
&&\delta \phi_i \ = \ \varepsilon \{ \phi_i, \varphi\} \ = \ B_i
\varphi + D_{ij} \ \phi_j \, . \label{4.19}
\end{eqnarray}
Here $A, B_i, C$ and $D_{ij}$ are some phase-space functions of
order $\varepsilon$. Note that in our case the gauge algebra is
Abelian\footnote{If $\mathcal{F}$ is any phase-space function then
$[\delta_{\varepsilon}, \delta_{\eta} ] {\mathcal{F}} =
\delta_{\varepsilon}\delta_{\eta} {\mathcal{F}} - \delta_{\eta}
\delta_{\varepsilon} {\mathcal{F}} = \varepsilon\eta \left\{
{\mathcal{F}}, \{ \varphi, \varphi\} \right\} = 0$.}. As a
consequence of (\ref{4.18}) and (\ref{4.19}) we find
\begin{eqnarray}
\delta[\varphi] \ &\rightarrow& \ \left|1 + \mbox{Tr}(A)
\right|^{-1} \delta[\varphi]\, , \label{4.11}\\
\prod_i \delta[\phi_i] \ &\rightarrow& \ \left|1 + \mbox{Tr}
(D)\right|^{-1}
\prod_i \delta[\phi_i]\, , \label{4.12}\\
\sqrt{\left|\det |\!|\{ \phi_i, \phi_j \}|\!| \right|} \
&\rightarrow& \ \left|1 + \mbox{Tr}(D) \right| \ \sqrt{\left|\det
|\!|\{ \phi_i, \phi_j \}|\!| \right|}\, . \label{4.13}
\end{eqnarray}
[here $\mbox{Tr}(A) = \sum_t A(t)$, etc.] In (\ref{4.13}) we have
used the fact that $\delta[\varphi]$ and $\delta[\phi_i]$ are
present in the path-integral measure, and so we have dropped the
vanishing terms on the RHSs of (\ref{4.11})-(\ref{4.13}). The
infinitesimal gauge transformations described hitherto clearly show
that $Z_{\rm CM}$ is dependent on the choice of $\chi$ [the term
with $|1+ \mbox{Tr}(A)|$ does not get canceled]. To ensure the gauge
invariance we need to factor out the ``orbit volume" from the
definition of $Z_{\rm CM}$. This will be achieved by a procedure
that is akin to the Faddeev-Popov-De~Witt trick. We define the
functional
\begin{eqnarray}
\left( \triangle_{\chi}\right)^{-1} \ = \ \int {\mathcal{D}}g \
\delta [\chi^{g}]\, , \label{4.14}
\end{eqnarray}
with $\chi^{g}$ representing the gauge transformed $\chi$. The
superscript $g$ in Eq.(\ref{4.14}) denotes an element of the Abelian
gauge group generated by $\varphi$.
We point out that the functional (\ref{4.14}) is manifestly gauge
invariant since
\begin{eqnarray}
\left( \triangle_{\chi^{g'}}\right)^{-1} \ = \ \int
{\mathcal{D}}g \ \delta[\chi^{g'g}]\ = \ \int {\mathcal{D}}(g'g) \
\delta[\chi^{g'g}] \ = \ \left( \triangle_{\chi}\right)^{-1}\, .
\label{4.15}
\end{eqnarray}
The second identity holds because of the invariance of the group
measure under composition, i.e., ${\mathcal{D}}g =
{\mathcal{D}}(g'g)$. Equations (\ref{4.14}) and (\ref{4.15}) allow
us to write ``$1$" as
\begin{eqnarray}
1 \ = \ \triangle_{\chi} \ \delta[\chi] \int {\mathcal{D}}g \, .
\label{4.16}
\end{eqnarray}
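A zero-dimensional caricature of (\ref{4.14})-(\ref{4.16}) already exhibits the mechanism (this is our toy model: the group integral $\int {\mathcal{D}}g$ becomes an ordinary integral over the gauge parameter $\varepsilon$, the delta functional a narrow Gaussian, and the bracket $\{\chi, \varphi\}$ a fixed number $k$):

```python
import numpy as np

# Zero-dimensional analogue of (4.14): chi^eps = chi0 + eps * k,
# with k playing the role of {chi, varphi}.
chi0, k = 0.3, 2.5

# Narrow Gaussian standing in for the delta functional
sigma = 1e-3
def delta(x):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# (Delta_chi)^{-1} = integral over the gauge parameter of delta[chi^eps],
# evaluated here as a Riemann sum on a fine grid
eps = np.linspace(-2.0, 2.0, 2_000_001)
inv_Delta = np.sum(delta(chi0 + k * eps)) * (eps[1] - eps[0])

# The result is 1/|k|, i.e. Delta_chi = |{chi, varphi}|, the
# finite-dimensional counterpart of (4.29)
assert abs(inv_Delta - 1 / abs(k)) < 1e-4
```

In this caricature the "orbit volume" is the $\varepsilon$-line, and $\triangle_{\chi}\,\delta[\chi]$ integrates to one over it, which is exactly how "$1$" is inserted below.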
To find an explicit form of $\triangle_{\chi}$ we can apply the
infinitesimal gauge transformation (\ref{4.10}). Then
\begin{eqnarray}
\chi^g \ = \ \chi + \varepsilon \{\chi, \varphi \} + C \varphi \
&\Rightarrow \ & \left(\triangle_{\chi}\right)^{-1} \ = \ \int
{\mathcal{D}}\varepsilon \ \delta[\chi + \varepsilon \{\chi ,
\varphi
\} + C \varphi]\, , \nonumber \\
&\Rightarrow \ & \left. \left(\triangle_{\chi}\right)^{-1}
\right|_{\Gamma^*} \ = \ \left|\det|\!|\{\chi, \varphi
\}|\!|\right|^{-1} \, , \label{4.29}
\end{eqnarray}
with the obvious notation $\det |\!| \{ \chi(t), \varphi(t')\} |\!| =
\prod_t \{ \chi(t), \varphi(t)\}$. Upon insertion of Eq.(\ref{4.16})
into $Z_{\rm CM}$ we obtain
\begin{eqnarray}
&&Z_{\rm CM} \ = \ \int
{\mathcal{D}}{\boldsymbol{p}}{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}
\bar{\boldsymbol{p}}{\mathcal{D}}\bar{\boldsymbol{q}} \ \left|
\det |\!| \{\chi, \varphi \}|\!| \right| \sqrt{\left|\det |\!|\{
\phi_i, \phi_j \}|\!| \right|} \; \delta[\chi] \delta[\varphi]
\prod_{i=1}^{2N}
\delta[\phi_i] \nonumber \\
&&\mbox{\hspace{4cm}}\times \ \exp\left[ i\! \int_{t_1}^{t_2} dt \
[{\boldsymbol{p}}\dot{\boldsymbol{q}} +
\bar{\boldsymbol{p}}\dot{\bar{\boldsymbol{q}}} - \bar{H}] +
\int_{t_1}^{t_2} dt \ [{\boldsymbol{q}}{\boldsymbol{J}} +
\bar{\boldsymbol{q}}\bar{\boldsymbol{J}}] \right]\, , \label{4.17}
\end{eqnarray}
where the group volume $G_V = \int {\mathcal{D}}g$ has been factored
out as desired. The partition function (\ref{4.17}) is now clearly
(locally) independent of the choice of the gauge constraints $\chi$.
This is because under the transformation (\ref{4.18}) we have
\begin{eqnarray}
\det |\!|\{\chi, \varphi \}|\!| \ \rightarrow \ \left(1 +
\mbox{Tr}(A) \right) \det |\!|\{ \chi + \delta \chi, \varphi \}|\!|
\, , \label{4.20}
\end{eqnarray}
and hence the partition function $Z_{\rm CM}$ as obtained by
(\ref{4.17}) takes the same form as the untransformed one, but with
$\chi$ replaced by $\chi + \delta \chi$. Because we deal with
canonical transformations it is implicit in our derivation that the
action in the new variables is identical, to within a boundary term,
with the original action. In path integrals this might be
invalidated by the path roughness and related ordering
problems\footnote{In the literature this phenomenon frequently goes
under the name of the Edwards-Gulyaev effect~\cite{EG}.}. For
simplicity's sake we shall further assume that the latter are absent
or harmless. This happens, for instance, when canonical
transformations are linear. In such cases an infinitesimal change in
$\chi$ does not alter the physical content of the theory present in
$Z_{\rm CM}$. This conclusion may generally not be true globally
throughout phase space. Global gauge invariance, however, is
mandatory in our case since we need a global equivalence between the
partition functions $Z_{\rm CM}$ and $Z_{\rm QM}$ and not mere
perturbative correspondence. Thus the potentiality of Gribov's
copies must be checked in every individual problem separately.
In passing we may notice that if we arrange the constraints in one
set $\{\eta_a \} = \{\chi, \varphi, \phi_i \}$ we can write
(\ref{4.17}) as
\begin{eqnarray}
&&Z_{\rm CM} \ = \ \int
{\mathcal{D}}{\boldsymbol{p}}{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}
\bar{\boldsymbol{p}}{\mathcal{D}}\bar{\boldsymbol{q}} \
\sqrt{\left|\det |\!| \{ \eta_a, \eta_b \} |\!| \right|} \;
\prod_{a=1}^{2N+2}
\delta[\eta_a] \nonumber \\
&&\mbox{\hspace{4cm}}\times \ \exp\left[ i\! \int_{t_1}^{t_2} dt \
[{\boldsymbol{p}}\dot{\boldsymbol{q}} +
\bar{\boldsymbol{p}}\dot{\bar{\boldsymbol{q}}} - H] +
\int_{t_1}^{t_2} dt \ [{\boldsymbol{q}}{\boldsymbol{J}} +
\bar{\boldsymbol{q}}\bar{\boldsymbol{J}}] \right]\, . \label{4.21}
\end{eqnarray}
By comparison with (\ref{frad}) we retrieve a well known
result~\cite{Sunder,GT}, namely, that the set $\{ \eta_a \}$ of
$2N+2$ constraints can be viewed as a set of second-class
constraints. Thus, by fixing a gauge we have effectively converted
the original system of $2N$ second-class and {\em one} first-class
constraints into $2N+2$ second-class constraints.
In view of (\ref{2.10}) and (\ref{4.27}), we can perform a
canonical transformation in the full phase space in such a way
that the new variables are: $P_1 = \chi$, $Q_{1+i} = \phi_{2i}$,
$P_{1+i} = \phi_{2i-1}$; $i =1, \ldots, N$. After a trivial
integration over $P_a$ and $Q_{1+i}$ we find that
\begin{eqnarray}
Z_{\rm CM} \ = \ \int
{\mathcal{D}}\bar{\boldsymbol{P}}{\mathcal{D}}\bar{\boldsymbol{Q}}{\mathcal{D}}{Q}_1
\ \left(\delta[\varphi] \left|\det \left|\!\left|\frac{\delta
\varphi}{\delta Q_1}\right|\!\right| \right|\right) \ \exp\left[i
\!\int_{t_1}^{t_2} dt \ \left[ \bar{\boldsymbol{P}}
\dot{\bar{\boldsymbol{Q}}} - K\right]
+ \int_{t_1}^{t_2} dt\ \bar{\boldsymbol{Q}} {\boldsymbol{j}} \right]\, ,
\end{eqnarray}
where $\bar{P}_a$ and $\bar{Q}_a$ are the remaining canonical
variables spanning the $(2N-2)$-dimensional phase space.
To within a time-derivative term the new Hamiltonian is given by the
prescription $K(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}, Q_1) =
H(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}, P_1 = 0, Q_1, Q_{1+i}
=0, P_{1+i} =0)$. The sources ${\boldsymbol{j}}$ are the
correspondingly transformed sources ${\boldsymbol{J}}$ and
$\bar{{\boldsymbol{J}}}$. Utilizing the identity
\begin{eqnarray}
\delta[\varphi] \left|\det \left|\!\left|\frac{\delta
\varphi}{\delta Q_1}\right| \! \right| \right| \ = \ \delta[Q_1 -
Q_1^*(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}) ]\, ,
\label{4.23}
\end{eqnarray}
we can finally write
\begin{eqnarray}
Z_{\rm CM} \ = \ \int
{\mathcal{D}}\bar{\boldsymbol{P}}{\mathcal{D}}\bar{\boldsymbol{Q}}
\ \exp\left[i \!\int_{t_1}^{t_2} dt \ \left[ \bar{\boldsymbol{P}}
\dot{\bar{\boldsymbol{Q}}} - K^*\right]
+ \int_{t_1}^{t_2} dt\ \bar{\boldsymbol{Q}} {\boldsymbol{j}} \right]\, .
\label{4.22}
\end{eqnarray}
Here $K^*(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}) =
K(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}, Q_1 =
Q_1^*(\bar{\boldsymbol{P}},\bar{\boldsymbol{Q}}))$. In view of
(\ref{D3}) we can alternatively write $Z_{\rm CM}$ as
\begin{eqnarray}
Z_{\rm CM} \ = \ \int
{\mathcal{D}}\bar{\boldsymbol{P}}{\mathcal{D}}\bar{\boldsymbol{Q}}
\ \exp\left[i \!\int_{t_1}^{t_2} dt \ \left[ \bar{\boldsymbol{P}}
\dot{\bar{\boldsymbol{Q}}} - H_+^*\right]
+ \int_{t_1}^{t_2} dt\ \bar{\boldsymbol{Q}} {\boldsymbol{j}} \right]\, ,
\label{4.30}
\end{eqnarray}
where $H_+^* = H_+(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}, Q_1
= Q_1^*(\bar{\boldsymbol{P}},\bar{\boldsymbol{Q}}), P_a = 0,
Q_{1+i} = 0)$. In passing we may notice that $\bar{P}_a$ and
$\bar{Q}_a$ are true canonical variables on the submanifold
$\Gamma^*$ of the initial conditions for Eq.(\ref{4.24}). Indeed,
in terms of a non-canonical system of variables $\{\zeta_i \} =
\{\varphi; \chi; \phi_i; \bar{\boldsymbol{Q}};
\bar{\boldsymbol{P}}\}$ the Poisson bracket of any two {\em
observable} quantities (say $f$ and $g$) on the constraint
manifold $\mathcal{M}$ is
\begin{eqnarray}
\left. \{f, g \} \right|_{\mathcal{M}} \ = \ \left. \left[
\sum_{a,b}\ \{\zeta_a, \zeta_b \} \ \frac{\partial f}{ \partial
\zeta_a}\frac{\partial g}{ \
\partial \zeta_b}\right] \right|_{\mathcal{M}} \ = \ \sum_{i,j} \ \{ \bar{P}_i, \bar{Q}_j \}
\ \frac{\partial f^*}{\partial \bar{P}_i} \frac{\partial g^*}{
\partial \bar{Q}_j} \ = \ \sum_{i,j} \ \Omega_{ij} \
\frac{\partial f^*}{\partial \bar{\mathcal{Q}}_i}
\frac{\partial g^*}{
\partial \bar{\mathcal{Q}}_j}\, ,
\label{4.91}
\end{eqnarray}
with $ \{ \bar{\mathcal{Q}}_j \} \ = \ \{ \bar{\boldsymbol{Q}};
\bar{\boldsymbol{P}} \}$ and with
\begin{eqnarray*}
f^*(\bar{\boldsymbol{Q}}, \bar{\boldsymbol{P}}) \ &=& \ f(\varphi
= 0, \chi = 0, \phi_i = 0, \bar{\boldsymbol{Q}},
\bar{\boldsymbol{P}})\, , \nonumber \\
g^*(\bar{\boldsymbol{Q}}, \bar{\boldsymbol{P}}) \ &=& \ g(\varphi
= 0, \chi = 0, \phi_i = 0, \bar{\boldsymbol{Q}},
\bar{\boldsymbol{P}})\, ,
\end{eqnarray*}
representing the physical quantities on ${\mathcal{M}}$. The latter
depend only on the canonical variables $\bar{\boldsymbol{Q}}$ and
$\bar{\boldsymbol{P}}$ which are the independent variables on
$\Gamma^*$. In deriving (\ref{4.91}) we have used the fact that
various terms are vanishing on account of Eqs.(\ref{4.24}) and
(\ref{4.27}). So, for instance, $[\{\varphi, \zeta_i\} \
\partial f/\partial \zeta_i ]|_{\mathcal{M}} = 0$, $\{\phi_i, \bar{P}_j \} =
0$, $\{\phi_i, \bar{Q}_j \} = 0$, $[\{\chi, \zeta_i\} \
\partial f/\partial \chi ]|_{\mathcal{M}} = 0$, etc. The matrix $\Omega_{ij}$
stands for the $(2N-2)\times(2N -2)$ symplectic matrix.
$Z_{\rm CM}$ as defined by (\ref{4.22})-(\ref{4.30}) does not
generally represent a (classical) deterministic system. This is
because the constraint $\varphi = 0$ explicitly breaks the BRST
invariance of $Z_{\rm CM}$ which (as illustrated in Section~III) is
key in preserving the classical nature of the partition function.
Indeed, using the relations $\{ \chi, \bar{p}_a \} = \{ \chi, p_a -
\bar{q}_a \} = 0$ we immediately obtain
\begin{eqnarray}
\{ \chi, \varphi \} \ = \ \sum_a \left\{ \frac{\partial
\chi}{\partial q_a} \left( \frac{\partial \varphi}{\partial p_a} +
\frac{\partial\varphi}{\partial\bar{q}_a} \right) - \frac{\partial
\chi}{\partial p_a} \frac{\partial \varphi}{\partial q_a}
\right\}\, ,
\end{eqnarray}
which implies that
\begin{eqnarray}
\left. \{ \chi, \varphi \} \right|_{{\mathcal{M}}, \bar{q}_a =
\lambda_a} \ = \ \sum_a \left\{ \frac{\partial \chi^*}{\partial
q_a} \frac{\partial \varphi^*}{\partial \lambda_a} -
\frac{\partial \chi^*}{\partial \lambda_a} \frac{\partial
\varphi^*}{\partial q_a}\right\} \ \equiv \ \{\chi^*, \varphi^*
\}\, .
\end{eqnarray}
Here the notations $\chi^*({\boldsymbol{q}}, {\boldsymbol{\lambda}})
= \chi(\boldsymbol{q},\boldsymbol{p} =\boldsymbol{\lambda},
\bar{\boldsymbol{q}} = \boldsymbol{\lambda}, \bar{\boldsymbol{p}} =
0)$ and $\varphi^*({\boldsymbol{q}}, {\boldsymbol{\lambda}}) =
\varphi(\boldsymbol{q},\boldsymbol{\lambda}, \boldsymbol{\lambda},
0)$ were used. We also took advantage of the fact that
$\bar{\boldsymbol{q}} = {\boldsymbol{\lambda}}$ as indicated in
Section III. So the generating functional (\ref{4.22}) (or
(\ref{4.30})) can be rewritten as
\begin{eqnarray}
Z_{\rm CM}[{\boldsymbol{J}} = 0] \ = \ \int
{\mathcal{D}}{\boldsymbol{q}}{\mathcal{D}}{\boldsymbol{\lambda}}
{{\mathcal{D}}\bar{\boldsymbol{c}}} {\mathcal{D}}{\boldsymbol{c}} \
\exp\left[ i \mathcal{S} \right] \ \delta[\varphi^*]
\delta[\chi^*] \left| \det |\!| \{\chi^*, \varphi^* \}|\!| \right|\, ,
\label{iv72}
\end{eqnarray}
where the integration over the ghost fields was reintroduced for
convenience. By reformulating $Z_{\rm CM}$ in terms of
${\boldsymbol{q}}, {\boldsymbol{\lambda}}, {\boldsymbol{c}}$ and
$\bar{\boldsymbol{c}}$ we can now easily check the BRST invariance.
The BRST transformations (\ref{4.6}) imply that
\begin{eqnarray}
&&\delta_{\rm BRST} \ \varphi^* \ = \ \frac{\partial
\varphi^*}{\partial q_i} \ \bar{\varepsilon} c_i \ = \ -
\bar{\varepsilon} \pounds_{{X_{\mathcal{Q}}}_{\rm BRST}} \
\varphi^*\, , \nonumber \\ &&\bar{\delta}_{\rm BRST} \ \varphi^* \
= \ - \frac{\partial \varphi^*}{\partial q_i} \ \varepsilon
\bar{c}_i \ = \ - \bar{\varepsilon}
\pounds_{{X_{\overline{\mathcal{Q}}}}_{\rm BRST}} \ \varphi^*\, .
\end{eqnarray}
Here ${\pounds_{X_{\mathcal{Q}}}}_{\rm BRST}$ and
${\pounds_{X_{\overline{\mathcal{Q}}}}}_{\rm BRST}$ represent the
Lie derivatives with respect to flows generated by the BRST and
anti-BRST charges, respectively. Analogous relations hold also for
$\chi^*$. Correspondingly, to the lowest order in
$\bar{\varepsilon}$ we can write
\begin{eqnarray}
\delta[\chi^*] \ &\rightarrow& \ | 1 -
\mbox{Tr}(\bar{\varepsilon}{\pounds_{X_{\mathcal{Q}}}}_{\rm BRST})
|^{-1} \ \delta[\chi^*]\, , \nonumber \\
\left| \det |\!| \{\chi^*, \varphi^* \}|\!| \right| \ &\rightarrow& \
| 1 - \mbox{Tr}(\bar{\varepsilon}{\pounds_{X_{\mathcal{Q}}}}_{\rm
BRST}) | \left| \det |\!| \{\chi^*, \varphi^* \}|\!| \right|\, .
\label{iv74}
\end{eqnarray}
The transformations (\ref{iv74}) show that the term $\delta[\chi^*]
\left| \det |\!| \{\chi^*, \varphi^* \}|\!| \right| $ in
(\ref{iv72}) is BRST invariant (as, of course, are both the
integration measure and the effective action ${\mathcal{S}}$).
However, because the variation $\delta_{\rm BRST} \delta[\varphi^*]$
is not compensated in (\ref{iv72}) we have in general, $\delta_{\rm
BRST} Z_{\rm CM}[{\boldsymbol{J}} = 0] \neq 0$. An analogous result
applies also to the anti-BRST transformation.
We should note that the condition $\delta_{\rm BRST} Z_{\rm
CM}[{\boldsymbol{J}} = 0] \neq 0$ only indicates that the {\em
classical} path-integral structure is destroyed; it does not,
however, ensure that the ensuing $Z_{\rm CM}$ can be recast into a
form describing a proper quantum-mechanical generating functional.
The straightforward path-integral representation such as
(\ref{4.22}) emerges only after the gauge freedom inherent in
the ``information loss" condition $\varphi$ is properly fixed via
the gauge constraint $\chi$. Let us finally emphasize once more that
the partition function (\ref{4.22}) (resp. (\ref{4.30})) has arisen
as a consequence of the application of the classical Dirac-Bergmann
algorithm for singular systems to the classical path integral of
Gozzi {\em et al.}.
\section{Explicit examples}\label{SEc5}
\subsection{Free particle}
Although the preceding construction may seem a bit abstract, its
implementation is quite straightforward. Let us now illustrate
this with two systems. As a warm-up example we start with the
Hamiltonian
\begin{eqnarray}
H =L_3= xp_y - yp_x\, , \label{5.1}
\end{eqnarray}
which is known to represent the angular momentum with values
unbounded from below. Alternatively, (\ref{5.1}) can be regarded as
describing the mathematical pendulum. This is because the
corresponding dynamical equation (\ref{eq.1.1.1}) for
${\boldsymbol{q}}$ is a plane pendulum equation with the pendulum
constant $l/\mbox{{\textsl{g}}} =1$. The Lagrangian (\ref{lag1})
reads
\begin{eqnarray}
\bar{L} = \bar{x}\dot{x} + \bar{y}\dot{y} +
\bar{x}y - \bar{y}x\, . \label{lag2}
\end{eqnarray}
It is well-known~\cite{Lutzky} that the system has two
(functionally independent) constants of motion, the Casimir
functions. For (\ref{5.1}) they read
\begin{eqnarray}
C_1 \ = \ x^2 + y^2\, , \;\;\; C_2 \ = \ xp_x + yp_y\, .
\end{eqnarray}
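As a quick numerical sanity check (our own sketch, not part of the original derivation), one can confirm that both Casimir functions Poisson-commute with the Hamiltonian (\ref{5.1}) by evaluating the brackets with central finite differences at an arbitrary phase-space point; for these polynomial functions the central differences are exact up to roundoff.

```python
# Sketch (ours): verify {H, C1} = {H, C2} = 0 for H = x p_y - y p_x,
# C1 = x^2 + y^2, C2 = x p_x + y p_y, via finite-difference Poisson brackets.

def poisson(f, g, z, h=1e-5):
    """{f, g} over the canonical pairs (x, p_x), (y, p_y); z = (x, y, px, py)."""
    def d(fun, i):
        zp, zm = list(z), list(z)
        zp[i] += h; zm[i] -= h
        return (fun(zp) - fun(zm)) / (2 * h)
    # (coordinate index, conjugate momentum index)
    return sum(d(f, q) * d(g, p) - d(f, p) * d(g, q) for q, p in [(0, 2), (1, 3)])

H  = lambda z: z[0] * z[3] - z[1] * z[2]      # x p_y - y p_x
C1 = lambda z: z[0] ** 2 + z[1] ** 2          # x^2 + y^2
C2 = lambda z: z[0] * z[2] + z[1] * z[3]      # x p_x + y p_y

z0 = (0.7, -1.3, 0.4, 2.1)                    # arbitrary phase-space point
print(abs(poisson(H, C1, z0)), abs(poisson(H, C2, z0)))  # both ~ 0
```

The same helper can be reused to check any other bracket identity quoted in this section.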
The charge $C_1$ corresponds to the conserved radius of the orbit
while $C_2$ is the Noether charge of dilatation invariance of the
Lagrangian (\ref{lag2}) under the transformations $(\bar{x},
\bar{y}, x, y) \mapsto (e^{-s}\bar{x},
e^{-s}\bar{y}, e^sx, e^sy)$. As only $C_1$ is
$\boldsymbol{p}$-independent, the functions $F_+$ and $F_-$ of
this system are according to Eq.~(\ref{FCH}) chosen as:
\begin{eqnarray}
F_+ \ = \ \frac{(H + a_1 C_1)^2}{4a_1 C_1}\, , \;\;\; F_- \ = \
\frac{(H - a_1 C_1)^2}{4a_1 C_1}\, .
\end{eqnarray}
Hence $H_- = 0 $ implies that $H_+ \approx a_1(x^2 + y^2)$. Here
$a_1$ is some constant to be specified later. The ensuing
first-class constraint is
\begin{eqnarray}
\varphi \ = \ xp_y - yp_x - a_1 x^2 - a_1 y^2 - \bar{p}_{\bar{x}}
\bar{y} + 2a_1 \bar{p}_{\bar{x}} x + \bar{p}_{\bar{y}} \bar{x} +
2a_1 \bar{p}_{\bar{y}} y \ \approx \ H - a_1 C_1\, .
\end{eqnarray}
The gauge condition can then be chosen in the form $\chi =
\bar{p}_{\bar{y}} - y$. Indeed, we easily find that
\begin{eqnarray}
&&\{ \chi, \varphi \} \ = \ \bar{p}_{\bar{x}} - x \ \neq \ 0\, ,
\nonumber \\
&&\{ \chi, \phi_i \} \ = \ 0\, , \; \; \; i = 1, \ldots, 4\, .
\label{5.5}
\end{eqnarray}
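The first of these brackets can also be confirmed numerically. The sketch below (ours, with an arbitrary illustrative value for $a_1$) evaluates the Poisson bracket of the explicit constraint $\varphi$ with $\chi = \bar{p}_{\bar{y}} - y$ over all four canonical pairs by central finite differences, which are exact here because $\varphi$ is quadratic and $\chi$ linear.

```python
# Sketch (ours): verify {chi, phi} = pbar_x - x for the explicit first-class
# constraint of the free-particle example.
# Phase-space point z = (x, y, xb, yb, px, py, pbx, pby); pairs (q_i, p_i).
a1 = 0.35   # arbitrary illustrative value of the constant a_1

def phi(z):
    x, y, xb, yb, px, py, pbx, pby = z
    return (x * py - y * px - a1 * x**2 - a1 * y**2
            - pbx * yb + 2 * a1 * pbx * x + pby * xb + 2 * a1 * pby * y)

def chi(z):
    return z[7] - z[1]                      # pbar_y - y

def poisson(f, g, z, h=1e-5):
    def d(fun, i):
        zp, zm = list(z), list(z)
        zp[i] += h; zm[i] -= h
        return (fun(zp) - fun(zm)) / (2 * h)
    return sum(d(f, q) * d(g, p) - d(f, p) * d(g, q)
               for q, p in [(0, 4), (1, 5), (2, 6), (3, 7)])

z = (0.3, -0.8, 1.1, 0.6, -0.5, 0.9, 1.4, 0.2)
print(poisson(chi, phi, z), z[6] - z[0])    # both equal pbar_x - x at z
```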
The advantage of our choice of $\chi$ is that it does not run into
Gribov ambiguities, i.e., the equation $\varphi = 0$ has a
globally unique solution for $Q_1$ on $\Gamma^*$. This should be
contrasted with such choices as, e.g., $\chi = p_x$ or $\chi = p_y$,
which also satisfy the conditions (\ref{5.5}), but lead to two
Gribov copies each.
With the above choice of $\chi$ we may directly write the
canonical transformations:
\begin{eqnarray}
&&P_1 \ = \ \chi \ = \ \bar{p}_{\bar{y}} - y\, , \;\;\; Q_1 \ = \
p_y\, ,\nonumber \\
&&P_2 \ = \ p_x - \bar{x}\, , \mbox{\hspace{1.2cm}} Q_2 \ = \
\bar{p}_{\bar{x}}\, , \nonumber \\
&&P_3 \ = \ p_y - \bar{y}\, , \mbox{\hspace{1.2cm}} Q_3 \ = \
\bar{p}_{\bar{y}}\, , \nonumber \\
&&\bar{P}~ \ = \ \bar{p}_{\bar{x}} - x\, , \mbox{\hspace{1.2cm}}
\bar{Q}~ \ = \ p_x\, .
\end{eqnarray}
It can be checked that the transformation Jacobian is indeed $1$.
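To make the Jacobian claim concrete, the following sketch (ours) assembles the $8\times 8$ matrix of this linear canonical transformation in the variable ordering $(x, y, p_x, p_y, \bar{x}, \bar{y}, \bar{p}_{\bar{x}}, \bar{p}_{\bar{y}})$ and computes its determinant exactly with rational arithmetic; its modulus is $1$.

```python
# Sketch (ours): exact determinant of the linear canonical transformation.
# Column order: (x, y, px, py, xb, yb, pbx, pby); each row is one new variable.
from fractions import Fraction

rows = [
    [0, 0, 0, 1, 0, 0, 0, 0],     # Q1   = p_y
    [0, -1, 0, 0, 0, 0, 0, 1],    # P1   = pbar_y - y
    [0, 0, 0, 0, 0, 0, 1, 0],     # Q2   = pbar_x
    [0, 0, 1, 0, -1, 0, 0, 0],    # P2   = p_x - xbar
    [0, 0, 0, 0, 0, 0, 0, 1],     # Q3   = pbar_y
    [0, 0, 0, 1, 0, -1, 0, 0],    # P3   = p_y - ybar
    [0, 0, 1, 0, 0, 0, 0, 0],     # Qbar = p_x
    [-1, 0, 0, 0, 0, 0, 1, 0],    # Pbar = pbar_x - x
]

def det(m):
    """Determinant by fraction-exact Gaussian elimination with row pivoting."""
    m = [[Fraction(v) for v in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next(r for r in range(i, n) if m[r][i] != 0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * d

print(abs(det(rows)))  # 1
```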
In the new canonical variables the Hamiltonian $K$ reads
\begin{eqnarray}
K(\bar{P}, \bar{Q}, Q_1) \ = \ H(\bar{P}, \bar{Q}, P_a = 0, Q_1,
Q_2 = 0, Q_3 = 0 ) \ = \ - \bar{P}Q_1 \, .
\end{eqnarray}
The functional $\delta$-function (\ref{4.23}) has the form
\begin{eqnarray}
\delta[Q_1 - Q_1^*(\bar{P},\bar{Q})] \ = \ \delta[Q_1 + a_1
\bar{P}]\, ,
\end{eqnarray}
and hence $K^*(\bar{P}, \bar{Q}) = H_+^*(\bar{P}, \bar{Q})= a_1
\bar{P}^2$. Let us now set $a_1 = 1/2m\hbar$. After changing
variables $\bar{Q}(t)$ to $\bar{Q}(t)/\hbar$ we obtain not only
the correct ``quantum-mechanical" path-integral measure
\begin{eqnarray}
{\mathcal{D}}\bar{{Q}}{\mathcal{D}}\bar{{P}} \ \approx \ \prod_i
\left(\frac{ d {\bar{{Q}}}(t_i) d {\bar{{P}}}(t_i)}{2\pi
\hbar}\right)\, ,
\end{eqnarray}
but also the prefactor $1/\hbar$ in the exponent. So (\ref{4.30})
reduces to the quantum partition function for a free particle of
mass $m$. As the constant $a_1$ represents the choice of units (or
scale factor) for $C_1$ we see that the quantum scale $\hbar$ is
implemented into the partition function via the choice of the ``loss
of information" constraint.
\subsection{Harmonic oscillator}
The system (\ref{5.1}) can also be used to obtain the quantized
linear harmonic oscillator. This is possible by observing that not
only $C_1 = x^2 + y^2$ is a constant of motion for (\ref{5.1}) but
also $C_1 = x^2 + y^2 + c$ with $c$ being any $\boldsymbol{q}$ and
$\boldsymbol{p}$ independent constant. So in particular we can
choose $c = c(\bar{\boldsymbol{q}})$. The functional dependence of
$c$ on $\bar{\boldsymbol{q}}$ cannot be, however, arbitrary. The
requirement that 't~Hooft's constraint should not generate any new
(i.e., secondary) constraint represents a rather severe restriction.
Indeed, in order to satisfy Eq.(\ref{D2}) the following condition
must hold (c.f. Appendix D):
\begin{eqnarray}
\sum_{i=0}^{2N} e_i \{\phi_i, \bar{H} \} \ = \ - \sum_{a,i} a_i \{
C_i, \bar{p}_a \} \{ p_a, \bar{H}\} \ = \ \sum_{i,k,a} a_i
\frac{\partial c_i(\bar{\boldsymbol{q}})}{\partial \bar{q}_a}
\bar{q}_k \frac{\partial f_k(\boldsymbol{q})}{\partial q_a}
\end{eqnarray}
which for the system in question is weakly zero only if
\begin{eqnarray}
\bar{x} \frac{\partial c(\bar{\boldsymbol{q}})}{\partial \bar{y}}
- \bar{y} \frac{\partial c(\bar{\boldsymbol{q}})}{\partial
\bar{x}} \ = \ 0\, .
\end{eqnarray}
The latter equation has the solution (modulo an irrelevant
additive constant) $c(\bar{\boldsymbol{q}}) = d^2 ({\bar{x}}^2 +
{\bar{y}}^2)$. Here $d^2$ represents a multiplicative constant.
Hence we have that $C_1$ has the general form
\begin{eqnarray}
C_1 \ = \ x^2 + y^2 + d^2({\bar{x}}^2 + {\bar{y}}^2)\, .
\end{eqnarray}
It will be further convenient to choose $a_1 = -1/2d$. The resulting
first-class constraint then reads
\begin{eqnarray}
\varphi \ &=& \ xp_y - yp_x + \frac{1}{2d} x^2 + \frac{1}{2d} y^2
- \frac{d}{2} {\bar{x}}^2 - \frac{d}{2} {\bar{y}}^2 -
\bar{y}\bar{p}_{\bar{x}} + \bar{x} \bar{p}_{\bar{y}} - \frac{1}{d}
x \bar{p}_{\bar{x}} - \frac{1}{d} y \bar{p}_{\bar{y}} + d
\bar{x}p_x + d \bar{y}p_y \nonumber \\
&\approx& \ H + \frac{1}{2d} \ C_1 \, .
\end{eqnarray}
If we choose the gauge condition to be
\begin{eqnarray}
\chi \ = \ \bar{p}_{\bar{y}} + d p_x- y\, , \label{5.6}
\end{eqnarray}
it ensures that
\begin{eqnarray}
&&\{ \chi, \varphi \} \ = \ 2 \bar{p}_{\bar{x}} - 2 x - 2 d p_y \
\neq \ 0\, ,\nonumber \\
&&\{ \chi, \phi_i \} \ = \ 0\, , \;\;\; i = 1, \ldots, 4\, .
\end{eqnarray}
In addition, we shall see that (\ref{5.6}) guarantees the unique
global solution of the equation $\varphi = 0$ for $Q_1$ on
$\Gamma^*$ (hence it avoids the undesired Gribov ambiguity).
The canonical transformation discussed in Section IV now takes the
form
\begin{eqnarray}
&&P_1 \ = \ \chi \ = \ \bar{p}_{\bar{y}} + dp_x - y\, , \;\;\; Q_1
\ = \
p_y\, ,\nonumber \\
&&P_2 \ = \ p_x - \bar{x}\, , \mbox{\hspace{2.15cm}} Q_2 \ = \
\bar{p}_{\bar{x}}\, , \nonumber \\
&&P_3 \ = \ p_y - \bar{y}\, , \mbox{\hspace{2.15cm}} Q_3 \ = \
\bar{p}_{\bar{y}}\, , \nonumber \\
&&\bar{P}~ \ = \ \bar{p}_{\bar{x}} + dp_y- x\, ,
\mbox{\hspace{1.2cm}} \bar{Q}~ \ = \ p_x\, ,
\end{eqnarray}
and the Hamiltonian $K$ reads
\begin{eqnarray}
K(\bar{P}, \bar{Q}, Q_1) \ = \ -\bar{P}Q_1 + d Q_1^2 - d
\bar{Q}^2\, .
\end{eqnarray}
The functional $\delta$-function (\ref{4.23}) now has the form
\begin{eqnarray}
\delta[Q_1 - Q_1^*(\bar{P}, \bar{Q})] \ = \ \delta[Q_1 -
\frac{1}{2d}\ \bar{P}]\, .
\end{eqnarray}
This finally implies that the Hamiltonian on the physical space
$\Gamma^*$ has the form $K^*(\bar{P}, \bar{Q}) = H_+^*(\bar{P},
\bar{Q}) = -(1/4d) \bar{P}^2 - d \bar{Q}^2$. By choosing $d = -
m\hbar/2$ and transforming $\bar{Q} \mapsto \bar{Q}/\hbar$ in the
path integral (\ref{4.22}) (resp. (\ref{4.30})) we obtain the
quantum partition function for a system described by the
Hamiltonian: $(1/2m)\bar{P}^2 + (m/2)\bar{Q}^2$, i.e., the linear
harmonic oscillator with a unit frequency. This is precisely the
result which in the context of the system (\ref{5.1}) was
originally conjectured by 't~Hooft in Ref.~\cite{tHooft3}. Note
again that the fundamental scale (suggestively denoted as $\hbar$)
was implemented into the theory via the ``loss of information"
condition.
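As a simple algebraic cross-check (ours, with arbitrary numerical values), one can substitute the constraint solution $Q_1^* = \bar{P}/(2d)$ into $K$ and confirm that the reduced Hamiltonian is indeed $-(1/4d)\,\bar{P}^2 - d\,\bar{Q}^2$:

```python
# Sketch (ours): check the reduction K(Pbar, Qbar, Q1) -> K*(Pbar, Qbar).
# Substituting Q1 = Pbar/(2d) into K must give K* = -(1/4d) Pbar^2 - d Qbar^2.
d = -0.75                        # arbitrary nonzero value of the constant d

def K(P, Q, Q1):
    return -P * Q1 + d * Q1**2 - d * Q**2

def K_star(P, Q):
    return -(1.0 / (4 * d)) * P**2 - d * Q**2

for P, Q in [(1.3, -0.4), (-2.1, 0.7), (0.05, 3.0)]:
    assert abs(K(P, Q, P / (2 * d)) - K_star(P, Q)) < 1e-12
print("reduction verified")
```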
\subsection{Free particle weakly
coupled to Duffing's oscillator}
There is no difficulty, in principle, in carrying over our procedure
to non-linear dynamical systems. As an illustration we will consider
here the R\"{o}ssler system. This is a three-dimensional
continuous-time chaotic system described by the three autonomous
nonlinear equations
\begin{eqnarray}
\frac{dx}{dt} &=& -y - z\, ,\nonumber \\
\frac{dy}{dt} &=& x + Ay\, ,\nonumber \\
\frac{dz}{dt} &=& B + xz - Cz\, , \label{5.3}
\end{eqnarray}
where $A$, $B$, and $C$ are adjustable constants. The associated
't~Hooft Hamiltonian reads
\begin{eqnarray}
H = -p_x(y + z) + p_y(x + Ay) + p_z(B + xz - Cz)\, , \label{5.4}
\end{eqnarray}
and the Lagrangian (\ref{lag1}) has the form
\begin{eqnarray}
\bar{L} = \bar{x}\dot{x} + \bar{y}\dot{y} +
\bar{z}\dot{z} + \bar{x}(y + z)
- \bar{y}(x +Ay) -
\bar{z}(B +xz -Cz)\, .
\end{eqnarray}
\noindent The R\"{o}ssler system is considered to be the simplest
possible chaotic attractor with important applications in
far-from-equilibrium chemical kinetics~\cite{Ruelle}. It also
frequently serves as a playground for studying, e.g.,
period-doubling bifurcation cycles or Feigenbaum's universality
theory. For the sake of an explicit analytic solution we will
confine ourselves only to the special case when $A = B = C = 0$.
With such a choice of parameters the R\"{o}ssler system can be
expressed in a scalar form as $\dddot{y} = y\dot{y}
+\dot{y}\ddot{y} - \dot{y} $ which ensures its
integrability~\cite{Hied}. The latter implies that in this regime
R\"{o}ssler's system does not possess chaotic attractors.
To proceed further, we should realize that because the $C_i$ are
supposed to be ${\boldsymbol{p}}$-independent, finding them is
equivalent to specifying the first integrals of the system
(\ref{5.3}) (i.e., functions that are constant along the curves
$(x,y,z)$ satisfying (\ref{5.3})). In other words, the
differential equations (\ref{5.3}) represent a characteristic
system for the differential equation $\{H, C_i\} = 0$. It is
simple to see that the first integrals of the above R\"{o}ssler
system are $x^2 + y^2 + 2z$ and $z e^{-y}$, hence we can identify
$C_1$ and $C_2$ with
\begin{eqnarray}
C_1 \ = \ (x^2 + y^2 + 2z)^2\, ,\;\;\;\;\; C_2 \ = \ z^2 e^{-2y}\,
.
\end{eqnarray}
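A quick way to corroborate both the first integrals and the scalar form quoted above is the following numerical sketch (ours, not from the paper): it integrates the $A=B=C=0$ R\"{o}ssler flow with a standard RK4 step and checks that $x^2+y^2+2z$ and $z e^{-y}$ stay constant, together with a pointwise check of $\dddot{y} = y\dot{y} + \dot{y}\ddot{y} - \dot{y}$.

```python
# Sketch (ours): the A = B = C = 0 Rossler flow conserves x^2 + y^2 + 2z
# and z*exp(-y); its scalar form is dddot(y) = y ydot + ydot yddot - ydot.
import math

def flow(s):
    x, y, z = s
    return (-y - z, x, x * z)            # dx/dt, dy/dt, dz/dt with A = B = C = 0

def rk4_step(s, dt):
    k1 = flow(s)
    k2 = flow(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = flow(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = flow(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + e)
                 for si, a, b, c, e in zip(s, k1, k2, k3, k4))

I1 = lambda s: s[0]**2 + s[1]**2 + 2 * s[2]
I2 = lambda s: s[2] * math.exp(-s[1])

s = (0.5, -0.2, 0.3)
i1_0, i2_0 = I1(s), I2(s)
for _ in range(2000):                    # integrate up to t = 2
    s = rk4_step(s, 1e-3)
assert abs(I1(s) - i1_0) < 1e-8 and abs(I2(s) - i2_0) < 1e-8

# Pointwise check of the scalar form: along the flow ydot = x,
# yddot = -y - z, hence dddot(y) = -ydot - zdot = -x - x*z.
x, y, z = 1.7, -0.4, 2.2                 # arbitrary point
ydot, yddot, ydddot = x, -y - z, -x - x * z
assert abs(ydddot - (y * ydot + ydot * yddot - ydot)) < 1e-12
print("first integrals and scalar form verified")
```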
This choice indeed provides positive and irreducible
charges. The first-class constraint $\varphi$ then reads
\begin{eqnarray}
\varphi \ &=& \ -\ p_x(y + z) \ + \ p_y x \ + \ p_z xz - a_1
(x^2 + y^2 + 2z)^2 - a_2 z^2 e^{-2y} \nonumber \\
&&\ -\ \bar{p}_{\bar{x}}\left( \bar{y} \ + \ \bar{z}z \ - \ 4
a_1 x (x^2 + y^2 + 2z)\right) \ + \ \bar{p}_{\bar{y}}\left(\bar{x}
\ + \ 4 a_1 y (x^2 + y^2 + 2z) -
2a_2 z^2 e^{-2y} \right)\nonumber \\
&& \ + \ \bar{p}_{\bar{z}} \left( \bar{x} \ - \ \bar{z}x \ + \ 4
a_1 (x^2 +
y^2 + 2z) \ + \ 2a_2 z e^{-2y}\right) \, ,\nonumber \\
&\approx& \ H - a_1 C_1 - a_2 C_2 \, .
\end{eqnarray}
Explicit values of $a_1$ and $a_2$ will be fixed in footnote~5.
A little algebra shows that the gauge condition $\chi$ can be
selected, for instance, as
\begin{eqnarray}
\chi \ = \ \bar{p}_{\bar{x}} - y\, .
\end{eqnarray}
Such a choice satisfies the necessary conditions
\begin{eqnarray}
\{ \chi, \varphi \} \ = \ \bar{p}_{\bar{y}} + \bar{p}_{\bar{z}} +
x \ \neq \ 0\, ,\;\;\;\;\;\;\;\;\;\; \{ \chi, \phi_i \} \ = \ 0,
\;\;\; i \ = \ 1, \ldots, 6\, .
\end{eqnarray}
The above $\chi$ also allows us to perform the following linear
canonical transformation:
\begin{eqnarray}
\begin{array}{ll}
P_1 \ = \ \chi \ = \ \bar{p}_{\bar{x}} \ - \ y\, , &~~~~~~ Q_1 \ = \ p_y \, , \\
P_2 \ = \ p_x \ - \ \bar{x}\, , &~~~~~~ Q_2 \ = \
\bar{p}_{\bar{x}}\, , \\
P_3 \ = \ p_y \ - \ \bar{y}\, , &~~~~~~ Q_3 \ = \
\bar{p}_{\bar{y}}\, , \\
P_4 \ = \ p_z \ - \ \bar{z}\, , &~~~~~~ Q_4 \ = \
\bar{p}_{\bar{z}}\, , \\
\bar{P}_1 \ = \ (\bar{p}_{\bar{z}}/d \ - \ z/d )/\sqrt{2} \, ,
&~~~~~~ \bar{Q}_1 \ = \ (2dp_z \ - \ \bar{p}_{\bar{x}}/c \ + \
x/c)/\sqrt{2} \,
, \\
\bar{P}_2 \ = \ (2cp_x \ - \ \bar{p}_{\bar{z}}/d \ + \
z/d)/\sqrt{2}\, , &~~~~~~ \bar{Q}_2 \ = \ (x/c \ - \
\bar{p}_{\bar{x}}/c)/\sqrt{2} \, .
\end{array}
\label{5.7}
\end{eqnarray}
Here $c$ and $d$ represent arbitrary real constants to be specified
later. The transformation (\ref{5.7}) ensures a globally unique
solution for $Q_1$ of the equation $\varphi = 0$ on $\Gamma^*$. To show
this it is sufficient to observe that $\left.\left[H - a_1C_1 - a_2C_2
\right]\right|_{\Gamma^*}$ is linear in $Q_1$. Indeed,
\begin{eqnarray}
\left.\left[H - a_1C_1 - a_2C_2 \right]\right|_{\Gamma^*} \ &=& \
\sqrt{2} c \ Q_1 \bar{Q}_2 - \sqrt{2 }c \ (\bar{Q}_1 -
\bar{Q}_2)\bar{Q}_2 \bar{P}_1 + d/c\ (\bar{P}_1 + \bar{P}_2)
\bar{P}_1\nonumber \\ &-& {\mathcal{A}} \ (\bar{P}_1)^2 -
{\mathcal{B}} \ \bar{P}_1 (\bar{Q}_2)^2 - {\mathcal{C}} \
(\bar{Q}_2)^4\, ,
\end{eqnarray}
with ${\mathcal{A}} = 2d^2(4 a_1 + a_2)$, ${\mathcal{B}} = -
8\sqrt{2}a_1 d c^2$ and ${\mathcal{C}} = 4a_1 c^4$. As a result
\begin{eqnarray}
K^*(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}) =
H^*_+(\bar{\boldsymbol{P}}, \bar{\boldsymbol{Q}}) = {\mathcal{A}} \
(\bar{P}_1)^2 + {\mathcal{B}} \ \bar{P}_1 (\bar{Q}_2)^2 +
{\mathcal{C}} \ (\bar{Q}_2)^4\, .
\end{eqnarray}
Inserting this into (\ref{4.22}) (resp. (\ref{4.30})) and
integrating over $\bar{P}_1$ and $\bar{P}_2$ we obtain the following
chain of identities:
\begin{eqnarray}
Z_{\rm CM} \ &=& \ \int {\mathcal{D}}\bar{\boldsymbol{P}}
{\mathcal{D}}\bar{\boldsymbol{Q}} \ \exp\left\{
i\!\!\int_{t_1}^{t_2} dt \ [\bar{\boldsymbol{P}}
\dot{\bar{\boldsymbol{Q}}} - {\mathcal{A}} \ (\bar{P}_1)^2 -
{\mathcal{B}} \ \bar{P}_1 (\bar{Q}_2)^2 - {\mathcal{C}} \
(\bar{Q}_2)^4 +
\bar{\boldsymbol{Q}}{\boldsymbol{j}}~] \right\} \nonumber \\
&=& \ \int {\mathcal{D}}\bar{Q}_1 {\mathcal{D}}\bar{Q}_2 \ \delta
[ \dot{\bar{Q}}_2 ] \ \exp \left\{ i\!\!\int_{t_1}^{t_2} dt \
\left[\frac{1}{4{\mathcal{A}}}\ (\dot{\bar{Q}}_1 - {\mathcal{B}}\
(\bar{Q}_2)^2)^2 - {\mathcal{C}}(\bar{Q}_2)^4 +
\bar{\boldsymbol{Q}}{\boldsymbol{j}}~\right] \right\}\nonumber \\
&=& \ \lim_{a\rightarrow 0_+} \ \int {\mathcal{D}}\bar{Q}_1
{\mathcal{D}}\bar{Q}_2 \ \exp\left\{i\!\! \int_{t_1}^{t_2} dt \
\left[\frac{1}{4 {\mathcal{A}}} (\dot{\bar{Q}}_1)^2 + \frac{1}{4 a}\
(\dot{\bar{Q}}_2)^2 - \frac{\mathcal{B}}{2{\mathcal{A}}} \
\dot{\bar{Q}}_1(\bar{Q}_2)^2\right]
\right\} \nonumber \\
&&\ \mbox{\hspace{2.5cm}} \times \ \exp\left\{ i\!\!
\int_{t_1}^{t_2} dt \left[\left(
\frac{{\mathcal{B}}^2}{4{\mathcal{A}}} -
{\mathcal{C}}\right)(\bar{Q}_2)^4 +
\bar{\boldsymbol{Q}}{\boldsymbol{j}}\right] \right\}\, .
\label{5.10}
\end{eqnarray}
As an explanatory step we should mention that the formal measure in
the second equality of (\ref{5.10}) has the explicit time-sliced
form
\begin{eqnarray}
{\mathcal{D}}\bar{Q}_1 {\mathcal{D}}\bar{Q}_2 \ \approx \ \prod_i
\left( \frac{d\bar{Q}_1(t_i)}{\sqrt{4\pi i \epsilon
{\mathcal{A}}}}\ d \bar{Q}_2(t_i) \right)\, ,
\end{eqnarray}
while in the third equality the shorthand notation
${\mathcal{D}}\bar{Q}_1 {\mathcal{D}}\bar{Q}_2$ stands for
\begin{eqnarray}
{\mathcal{D}}\bar{Q}_1 {\mathcal{D}}\bar{Q}_2 \ \approx \ \prod_i
\left( \frac{d\bar{Q}_1(t_i) }{\sqrt{4\pi i \epsilon {\mathcal{A}}}}
\ \frac{d\bar{Q}_2(t_i)}{\sqrt{4 \pi i a \epsilon}}\right)\, .
\end{eqnarray}
The symbol $\epsilon$ represents the infinitesimal width of the
time slicing. During our derivation we have used the Fresnel
integral
\begin{eqnarray}
\int_{-\infty}^{\infty} dx \ e^{-i a x^2 + i x\xi} \ = \
\sqrt{\frac{\pi}{a}}\ \ e^{i (\xi^2/a - \pi)/4 } \ = \
\sqrt{\frac{\pi}{i a}}\ \ e^{i \xi^2/(4a)}\, , \;\;\;\;\;\;\;\;\;
a > 0\, ,
\end{eqnarray}
and the ensuing representation of the Dirac $\delta$-function:
\begin{eqnarray}
\lim_{a\rightarrow 0_{+}} \sqrt{\frac{1}{4i\pi a}}\ \
e^{i\xi^2/(4a)} \ = \ \delta(\xi)\, . \label{5.11}
\end{eqnarray}
In the following we perform the scale transformation
$\bar{Q}_2/\sqrt{a} \mapsto \sqrt{2m_2}\ \bar{Q}_2$ and set
${\mathcal{A}} = 1/(2m_1)$, ${\mathcal{B}} = 1/(\sqrt{m_1 m_2})$
and ${\mathcal{C}} = 1/m_2$.~\footnote{This choice is equivalent
to the solution:
\begin{eqnarray*} a_1 = \frac{a_2}{4}\, , \;\; d
=\frac{1}{2\sqrt{2a_2m_1}}\, , \;\; c =
\pm\frac{1}{\sqrt[4]{a_2m_2}}\, .
\end{eqnarray*}
Without loss of generality we can set $d = 1/2$, then:
\begin{eqnarray*}
a_2 = \frac{1}{2m_1}\, , \;\; a_1 = \frac{1}{8m_1}\, , \;\; c = \pm
2^{3/4} \sqrt[4]{\frac{m_1}{m_2}}\, .
\end{eqnarray*}
}
The resulting partition function then reads
\begin{eqnarray}
Z_{\rm CM} \ &=& \ \lim_{{\rm{g}}\rightarrow 0_+}\int
{\mathcal{D}}\bar{Q}_1 {\mathcal{D}}\bar{Q}_2 \ \exp\left\{ i\!\!
\int_{t_1}^{t_2} dt \left[\frac{m_1}{2}\ (\dot{\bar{Q}}_1)^2 \ + \
\frac{m_2}{2} \ (\dot{\bar{Q}}_2)^2\right]\right\} \nonumber \\
&& \mbox{\hspace{2cm}}\times \ \exp\left\{ i\!\! \int_{t_1}^{t_2}
dt \left[{\rm{g}} \sqrt{\frac{m_1m_2}{2}}
\ \dot{\bar{Q}}_1(\bar{Q}_2)^2 \ - \ \frac{m_2 {\rm{g}}^2}{4} \
(\bar{Q}_2)^4 \ + \
\bar{\boldsymbol{Q}}{\boldsymbol{j}}\right]\right\}\, ,
\label{5.9}
\end{eqnarray}
where we have set ${\rm{g}} = 2\sqrt{2} a$. The system thus
obtained describes a pure anharmonic (Duffing's) oscillator
($\bar{Q}_2$ oscillator) weakly coupled through the Rayleigh
interaction with a free particle ($\bar{Q}_1$ particle).
Alternatively, when $m_1 = m_2 = m$ we can interpret the Lagrangian
in (\ref{5.9}) as a planar system describing a particle of mass $m$
in a quartic scalar potential $e\Phi(\bar{\boldsymbol{Q}}) = m
{\rm{g}}^2/4\ (\bar{Q}_2)^4$ and a vector potential
$e{\boldsymbol{A}} = ({\rm g}m \sqrt{1/2} \ (\bar{Q}_2)^2, 0)$
(i.e., in the linear magnetic field $B_3 =
\epsilon_{3ij}\partial_iA_j = - {\rm g} m \sqrt{2} \ \bar{Q}_2/e$).
It is preferable to set $m_1 \mapsto m_1 \hbar$ and $m_2 \mapsto
m_2/\hbar$. The latter corresponds to the scale factors $a_2 =
1/(2m_1\hbar)$ and $a_1 = 1/(8m_1\hbar)$. After rescaling
$\bar{{Q}}_1(t) \mapsto \bar{{Q}}_1(t)/\hbar$ the partition function
(\ref{5.9}) boils down to the usual quantum-mechanical partition
function with the path-integral measure
\begin{eqnarray}
{\mathcal{D}}{\bar{\boldsymbol{Q}}} \ \approx \ \prod_i
\left(\frac{d\bar{Q}_1(t_i)}{\sqrt{2\pi i \epsilon \hbar/ m_1}}\
\frac{d \bar{Q}_2(t_i)}{\sqrt{2\pi i \epsilon \hbar/m_2}} \right)\,
,
\end{eqnarray}
and with $1/\hbar$ in the exponent. Hence, just as found in the
previous two cases, the choice of 't~Hooft's condition ensures that
the Planck constant enters the partition function (\ref{5.9}) in a
correct quantum-mechanical manner. In turn, $\hbar$ enters only via
the scale factors $a_1$ and $a_2$ (the factors $d$ and $c$ are
$\hbar$ independent) and hence it represents a natural scale on
which the ``loss of information" condition operates. In other words,
if one were able to ``measure" or determine from ``first
principles" the ``loss of information" condition, one could, in
principle, determine the value of the fundamental quantum scale
$\hbar$.
As a final note we mention that the 't~Hooft quantization procedure
can be straightforwardly extended to other non-linear systems and
particularly to systems possessing chaotic behavior (e.g., strange
attractors). In general cases this might be, however, hindered by
our inability to find the corresponding first integrals (and hence
$C_i$'s) in analytic form. It is interesting to note that the
machinery outlined above allows one to find the emergent quantum
system for configuration-space strange attractors. This is
because in 't~Hooft's ``quantization" one only needs the dynamical
equations in the {\em configuration} space. The latter should be
contrasted with the Hamiltonian (or symplectic) systems where
strange attractors cannot exist in the {\em phase-space} on account
of the Liouville theorem~\cite{Hop}.
\section{Conclusions and Outlook}
In this paper we have attempted to substantiate the recent proposal
of G.~'t~Hooft in which quantum theory is viewed not as a complete,
final theory, but as an emergent phenomenon arising from a
deeper level of dynamics. The underlying dynamics are taken to be
classical mechanics with singular Lagrangians supplied with an
appropriate information loss condition. With plausible assumptions
about the actual nature of the constraint dynamics, quantum theory
is shown to emerge when the classical Dirac-Bergmann algorithm for
constrained dynamics is applied to the classical path integral of
Gozzi {\em et al.}.
There are essentially two different tactics for implementing the
classical path integrals in 't~Hooft's quantization scenario. The
first is to apply the configuration-space formulation~\cite{GozziI}.
This is suited to situations when 't~Hooft's systems are phrased
through the Lagrangian description. The alternative approach is to
start with the phase-space version~\cite{GozziII}. The latter
provides a natural framework when the Hamiltonian formulation is of
interest or where the language of symplectic geometry is preferred.
It should, however, be stressed that the choice of method is not
merely a matter of computational convenience. In
fact, both approaches are mathematically and conceptually very
different (as they are also in conventional quantum
mechanics~\cite{Pain,Sh1}). Besides, the methodology for handling
singular systems is distinct in Lagrangian and Hamiltonian
formulations (c.f. Refs.~\cite{Sunder,GT} and citations therein). In
passing, we should mention that the currently popular
Hamilton-Jacobi~\cite{Gu1} and
Legendre-Ostrogradski\u{i}~\cite{Pons} approaches for a treatment of
constrained systems, though highly convenient in certain cases
(e.g., in higher-order Lagrangian systems), have not found as yet
any particular utility in the present context.
Throughout this paper we have considered only the
configuration-space formulation of classical path integrals.
(Incidently, the phase-space path integral which appears in Section
IV (after Eq.(\ref{3.40})) is not the phase-space path integral {\em
\`{a} la} Gozzi, Reuter and Thacker~\cite{GozziII} but rather
Gozzi's configuration-path~\cite{GozziI} integral with extra degrees
of freedom.) By choosing to work within such a framework we have
been able to render a number of formal steps more tractable (e.g.,
BRST analysis is reputed to be simpler in the configuration space,
uniqueness proof for 't~Hooft systems is easy and transparent in the
Lagrange description, etc.). The key advantage, however, lies in two
observations. First, the position-space path integral of Gozzi {\em
et al.} provides a conceptually clean starting point in view of the
fact that it represents the classical limit of both the
stochastic-quantization path integral and the closed-time-path
integral for the transition probability of systems coupled to a heat
bath. Such a connection is by no means obvious in the canonical
path-integral representation as both the Parisi-Wu stochastic
quantization and the Feynman-Vernon formalism (with ensuing
closed-time-path integral) are intrinsically formulated in the
configuration space. Second, according to 't~Hooft's conjecture the
``loss of information" condition should operate in the position
space where it is supposed to eliminate some of the transient
trajectories leaving behind only stable (or near to stable)
orbits~\cite{tHooft3}. Hence working in configuration space may
allow one to probe the plausibility of 't~Hooft's conjecture. The
price that has been paid for this choice is that the configuration
space must have been doubled. This is an unavoidable step whenever
one wishes to obtain first-order autonomous dynamical equations
directly from the Lagrange formulation (a fact well known in the
theory of dissipative systems~\cite{MF1}). Our analysis in Appendix
BII
suggests that the auxiliary coordinates $\bar{q}_i$ may be related
to relative coordinates on the backward-forward time path in the
Feynman-Vernon approach. (Such coordinates also go under the names
{\em fast variables}~\cite{Fetter} or {\em quantum noise
variables}~\cite{Sriv1}.)
On the formal side, the auxiliary variables $\bar{q}_i$ are nothing
but Gozzi's Lagrange multipliers ${\lambda}_i$ (in our case denoted
as $\bar{\lambda}_i$).
In order to incorporate the ``loss of information" into our scheme,
we have introduced in Section IV an auxiliary momentum integration
to go over to the canonical representation. Such a step, though
formal, allowed us to treat our constrained system via the standard
Dirac-Bergmann procedure. It should be admitted that such a choice
is by no means unique; e.g., methodologies for the treatment of
classical constrained systems in configuration space do
exist~\cite{Sunder,GT}. The decision to apply the Dirac-Bergmann
algorithm was mainly motivated by its conceptual simplicity and
direct applicability to path integrals. On the other hand, we do not
expect that the presented results would undergo any substantial
changes if another scheme were utilized. It should be
further emphasized that while we have established the mathematical
link (Eqs.(\ref{4.26}) and (\ref{D3})) between the ``loss of
information'' condition and first-class constraints, it is not yet
clear if this connection has a more direct physical interpretation
(although various proposals exist in the
literature~\cite{BJV3,tHooft3,BMM1}). Such an understanding would
not only help to develop this approach for more complicated physical
situations but would also allow one to affiliate, in a systematic
fashion, a quantum system with an underlying classical dynamics. Work along those lines
is currently in progress.
To illustrate the presented ideas we have considered two simple
systems; the planar pendulum and the R\"{o}ssler system. In the
pendulum case we have taken advantage of the free choice of an
additive constant in the charge $C_1$. This, in turn, allowed us to
impose 't~Hooft's constraints in two distinct ways. In the case of
R\"{o}ssler's system two ${}\boldsymbol{p}$-independent, irreducible
charges $C_1$ and $C_2$ exist. For definiteness' sake we have
constructed in the latter case the ``loss of information'' condition
with the additive constant set to zero. With this we were able to
convert the corresponding classical path integrals into path
integrals describing a quantized free particle, a harmonic
oscillator, and a free particle weakly coupled to Duffing's
oscillator. As a byproduct we could observe that our prescription
provides a surprisingly rigid structure with rather tight
maneuvering space for the emergent quantum dynamics. Indeed, when
the classical dynamics is fixed, the 't~Hooft condition is
formulated via linear combination of charges $C_i$ which correspond
to the first integrals of the autonomous dynamical equations for
${\boldsymbol{q}}$, i.e., Eq.(\ref{eq.1.1.1}). Due to the explicit
form of 't~Hooft's Hamiltonian the constraint is of the first class
and so we must remove the redundancy in the description by imposing
the gauge condition $\chi$. By requiring that the consistency
conditions (\ref{4.25}) and (\ref{4.27}) are fulfilled, that the
choice of $\chi$ does not induce Gribov ambiguity, and that the
canonical transformations defined in Sec.~IV are linear, we
substantially narrowed down the class of possible emergent quantum
systems. Note also that when we start with the $N$-dimensional
classical system (${\boldsymbol{q}}$ variables), the emergent
quantum dynamics has $N-1$ dimensions ($\bar{\boldsymbol{Q}}$
variables). Indeed, by introducing the auxiliary degrees of freedom
$\bar{\boldsymbol{q}}$ we obtain $4N$-dimensional phase space which
is constrained by $2N + 2$ conditions ($\phi_i$, $\varphi$ and
$\chi$), which leaves behind $(2N-2)$-dimensional phase space
$\bar{\boldsymbol{Q}}, \bar{\boldsymbol{P}}$. This disparity between
the dimensionality of the classical and emergent quantum systems
vindicates in part the terminology ``information loss'' used
throughout the text.
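The dimension bookkeeping in the preceding paragraph can be checked mechanically. The following sketch (ours, not part of the paper) encodes it: doubling the $N$ coordinates gives a $4N$-dimensional phase space, and the $2N+2$ conditions ($\phi_i$, $\varphi$ and $\chi$) leave a $2(N-1)$-dimensional reduced phase space, i.e., $N-1$ emergent quantum degrees of freedom.

```python
# Arithmetic sanity check of the dimension count quoted in the text.
def emergent_dof(n: int) -> int:
    phase_space_dim = 4 * n          # coordinates (q, qbar) and momenta (p, pbar)
    num_constraints = 2 * n + 2      # phi_i (2N of them), plus varphi and chi
    reduced_dim = phase_space_dim - num_constraints
    assert reduced_dim == 2 * (n - 1)
    return reduced_dim // 2          # one degree of freedom per (Q, P) pair

print([emergent_dof(n) for n in range(2, 6)])  # → [1, 2, 3, 4]
```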
An important conclusion of this work is that 't~Hooft's quantization
proposal seems to provide a tenable scenario which allows for
deriving certain quantum systems from classical physics.
It should be stressed that although we assumed throughout that the
deeper level dynamics is the classical (Lagrangian or Hamiltonian)
one, there is in principle no fundamental reason that would preclude
starting with more exotic premises. In particular, our conceptual
reasoning would go unchanged if we had begun with Lagrangians
operating over coordinate superspaces (pseudoclassical
mechanics~\cite{BeMa1}) or with the currently much discussed
discrete classical mechanics (i.e., having foam-, fractal-, or
crystal-like configuration space)~\cite{Kleinert?}, etc. The only
prerequisite for such approaches is the possibility of formulating a
corresponding variant of Gozzi's path integral, and a method for
implementing the ``loss of information" constraint in such
integrals.
There are many interesting applications of the above method.
Applications to chaotic dynamical systems especially seem quite
pertinent. After all, central to our reasoning is a (doubled) set of
real first-order dynamical equations\footnote{Non-trivial are only
the equations over actual configuration space. The dynamical
equations for the auxiliary variables $\bar{q}_i$ are linear and
hence they are not relevant in this connection.} which, under
favorable conditions, may be associated with a chaotic dynamics in
the configuration space. We should emphasize that the reader should
not confuse the above with the extensively studied but unrelated
notion of chaos in Hamiltonian systems; we do not deal here with
dynamical equations on symplectic manifolds. This is important, as
Hamiltonian systems forbid {\em per se} the existence of
attractive orbits which are otherwise key in 't~Hooft's proposal. In
this respect our approach is parallel with some more conventional
approaches. Indeed, a direct ``quantization'' of the equations of
motion -- originally proposed by Feynman~\cite{Dyson} -- is one of
the techniques for tackling quantization of dissipative
systems~\cite{Tar1,HS1}. In field theories this line of reasoning
was recently progressed by Bir\'{o}, M\"{u}ller, and
Matinyan~\cite{BMM1} who demonstrated that quantum gauge field
theories can emerge in the infrared limit of a higher-dimensional
classical (non-Abelian) gauge field theory, known to have chaotic
behavior~\cite{BMM2}.
We finally wish to comment on two more points. First, in cases where
one strives for an explicit reparametrization invariance (or general
covariance) of the emergent quantum system,
the presented framework is not very suitable. The absence of
explicit covariance in both Dirac-Bergmann and Faddeev-Senjanovic
algorithms makes the actual analysis very cumbersome or even
impossible. In fact, expressions (\ref{4.17}) and (\ref{4.21}) are
evidently not generally covariant due to the presence of
time-independent constraints in the measure. Although
generalizations that include covariant constraints do
exist~\cite{fradkin,Bat1,Gav} they result in gauge fixing conditions
which depend not only on the canonical variables but also on the
Lagrange multipliers (or explicit time). Such gauge constraints are,
however, incompatible with our Poisson bracket analysis used in
Section IV, and Appendixes A and D. Hence, if the emergent quantum
system is supposed to be reparametrization invariant (e.g.,
relativistic particle, canonical gravity, relativistic string, etc.)
a new framework for the path-integral implementation of 't~Hooft's
scheme must be sought. Second, the formalism of functional integrals
is sometimes deceptive when taken too literally. The latter is the
case, for instance, when gauge conditions are imposed and/or
canonical transformations performed. The difficulty involved is
known as the Edwards-Gulyaev effect~\cite{Pain,EG,Sh1} and it
resides in the exact nature of the limiting sequence of the finite
dimensional integrals which constitute the path integral. As a
result the classical canonical transformation does not leave, in
general, the measure of the path integral Liouville invariant but,
instead, induces an anomaly~\cite{Sh1,SW}. Thus, for our construction
to be meaningful it should be shown that the canonical
transformations in Section IV are unaffected by the Edwards-Gulyaev
effect. Fortunately, in cases when the generating function is at
most quadratic (making canonical transformations linear) and not
explicitly time dependent, it can be shown~\cite{Fad,SW,vH} that the
anomaly is absent. It was precisely for this reason that more
general transformations were not considered in the present paper.
Clearly, both mentioned points are of key importance for further
development of our procedure and, due to their delicate nature, they
deserve a separate discussion.
Let us end with the remark that the notorious problem with operator
ordering known from canonical approaches has an elegant solution in
path integrals. The ordering is there naturally generated by the
necessary physical requirement that path integrals must be invariant
under coordinate transformations~\cite{KC}.
\section*{Acknowledgments}
M.B. and P.J. are grateful to the ESF network COSLAB for funding
their stay at FU, Berlin. One of us, P.J., acknowledges very
helpful discussions with R.~Banerjee,
G.~Vitiello and
Y.~Satoh, and thanks the Japanese Society for Promotion of Science
for financial support.
\section*{Appendix A}
In this appendix we show that the system (\ref{eq.1.1}) has no
secondary constraints. In contrast to the primary constraints,
which are a consequence of the non-invertibility of the velocities
in terms of the $p$'s and $q$'s, secondary constraints result from
the equations of motion. To show their absence in 't\,Hooft's
system we start with the observation that the time derivative of
any function $f({\boldsymbol{q}}, {\boldsymbol{p}})$ is given
by~\cite{Sunder}
\begin{eqnarray}
\dot{f} \ \approx \ \{ f, \bar{H} \} \ + \ u^j \{ f,
\phi_j\}\, .
\end{eqnarray}
Here $u^j$ are the Lagrange multipliers to be determined by the
consistency conditions
\begin{eqnarray}
0 \ \approx \ \dot{\phi_i} \ \approx \ \{ \phi_i, \bar{H} \}
\ + \ u^j \{\phi_i, \phi_j \}\, . \label{A1}
\end{eqnarray}
The latter is nothing but the statement that constraints (as
functions of ${\boldsymbol{q}}$ and ${\boldsymbol{p}}$) must hold
at any time. If not all of the $u^j$ could be determined from the
consistency condition (\ref{A1}), then we would obtain
the so-called secondary constraints. In our case we have
\begin{eqnarray}
\{ \phi_1^a, \bar{H} \} \ = \ - \frac{\partial
\bar{H}}{\partial q_a} \ \not\approx \ 0\, , \;\;\;\; \{ \phi_2^a,
\bar{H} \} \ = \ - f_a({\boldsymbol{q}})\ \not\approx \ 0\,
, \;\;\;\; \{ \phi_1^a, \phi_2^b \} \ = \ - \delta_{ab}\, .
\label{A3}
\end{eqnarray}
Using the fact that $\{ \phi_i, \bar{H} \} \not\approx 0$ and
$\det\left|\{\phi_i, \phi_j \} \right| = 1$, the
inhomogeneous system of linear equations (\ref{A1}) can be
uniquely resolved with respect to $u^j$, thus implying the absence
of secondary constraints.
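As a concrete illustration of the argument above (a toy example of ours, not from the paper), take a single degree of freedom. The Poisson-bracket matrix of the two primary constraints is then the $2\times 2$ antisymmetric matrix with unit determinant, and the consistency conditions (\ref{A1}) fix the multipliers $u^j$ uniquely:

```python
# Toy check: for N = 1 the bracket matrix C_ij = {phi_i, phi_j} is
# [[0, -1], [1, 0]] with det C = 1, so C u = -b has a unique solution
# for any right-hand side b_i = {phi_i, Hbar} -- no secondary constraints.
C = [[0.0, -1.0],
     [1.0,  0.0]]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
assert det == 1.0

b = [-2.5, -0.7]          # hypothetical sample values of {phi_i, Hbar}
u = [-b[1], b[0]]         # explicit inverse of C applied to -b

# verify the consistency conditions C u + b = 0 hold exactly
residual = [C[0][0] * u[0] + C[0][1] * u[1] + b[0],
            C[1][0] * u[0] + C[1][1] * u[1] + b[1]]
assert residual == [0.0, 0.0]
```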
\section*{Appendix B}
\subsection*{BI}
We show here that Gozzi's configuration-space path integral results
from the ``classical'' limit of the stochastic-quantization partition
function, i.e., the limit where the width of a noise distribution
tends to zero.
For this purpose we start with the form of the partition function
for stochastic quantization as written down by
Zinn-Justin~\cite{Zinn-JustinII,Zinn-Justin1}:
\begin{eqnarray}
Z_{\rm SC}(J) = \int {\mathcal{D}}{\boldsymbol{q}}
{\mathcal{D}}{\boldsymbol{c}} {\mathcal{D}}\bar{{\boldsymbol{c}}}
{\mathcal{D}}\boldsymbol{\lambda} \ \exp\left\{ -
{\mathcal{S}}[{\boldsymbol{q}},{\boldsymbol{c}},\bar{{\boldsymbol{c}}},
\boldsymbol{\lambda}] + \int
{\boldsymbol{J}}(x){\boldsymbol{q}}(x) dx \right\}\, , \label{C1}
\end{eqnarray}
where
\begin{eqnarray}
{\mathcal{S}} \equiv &-& w({\boldsymbol{\lambda}}) + \int
{\boldsymbol{\lambda}}(x)\left(
\frac{\partial{\boldsymbol{q}}(x)}{\partial\tau} \ + \frac{\delta
{\mathcal{A}}}{\delta {\boldsymbol{q}}(x)} \right)dx \nonumber \\
&-& \int dx dx' \ \bar{c}_a(x) \left( \frac{\partial}{\partial
\tau} \delta_{ab} \delta(x - x') + \frac{\delta^2 {\mathcal{A}} }{\delta
q_a(x) \delta q_b(x')}\right) \ c_b(x')\, ,
\end{eqnarray}
and
\begin{eqnarray}
\exp[w({\boldsymbol{\lambda}})]\equiv \int
{\mathcal{D}}{{\boldsymbol{\nu}}}
\exp\left\{-\sigma({\boldsymbol{\nu}}) + \int dx
{\boldsymbol{\lambda}}(x) {\boldsymbol{\nu}}(x)\right\}\, ,
\end{eqnarray}
with ${\mathcal{D}}{{\boldsymbol{\nu}}}
\exp(-\sigma({\boldsymbol{\nu}}))$ being the functional measure of
noise. Here $x = (t,\tau)$ and $dx = dtd\tau$ where $\tau$ is the
Parisi-Wu fictitious time. The dynamical equation for
${\boldsymbol{q}}(x)$ is described by the Langevin equation
\begin{eqnarray}
\frac{\partial{\boldsymbol{q}}(x)}{\partial \tau} + \left.
\frac{\delta {\mathcal{A}} [{\boldsymbol{q}}]}{ \delta
{\boldsymbol{q}}}\right|_{{\boldsymbol{q}} = {\boldsymbol{q}}(x)}
= {\boldsymbol{\nu}}(x)\, , \label{B3}
\end{eqnarray}
with the initial condition ${\boldsymbol{q}}(t,0) =
{\boldsymbol{q}}(t)$.
For Gaussian noise of variance
$2\hbar$, the noise measure is
\begin{eqnarray}
&&{\mathcal{D}}{{\boldsymbol{\nu}}}
\exp(-\sigma({\boldsymbol{\nu}})) =
\prod_{i,x}\frac{d\nu_i(x)}{2\sqrt{\pi \hbar}} \exp\left( -
\frac{1}{4\hbar} \ \int dx {\boldsymbol{\nu}}^2(x) \right)\,
,
\end{eqnarray}
and
(\ref{C1}) takes the form
\begin{eqnarray}
Z_{\rm SC}(J)\ &=& \ \int {\mathcal{D}}{\boldsymbol{q}}
{\mathcal{D}}{\boldsymbol{\nu}} \ \delta \! \left( \frac{\partial
{\boldsymbol{q}}}{\partial \tau} + \frac{\delta {\mathcal{A}}
[{\boldsymbol{q}}]}{\delta {\boldsymbol{q}}} -
{\boldsymbol{\nu}}\right) \det\left|\!\left|
\frac{\partial}{\partial \tau} \delta_{ab} \delta(x - x') +
\frac{\delta^2 {\mathcal{A}} }{\delta
q_a(x) \delta q_b(x')} \right|\!\right|\nonumber \\
&&\times \ \exp\left\{ - \sigma({\boldsymbol{\nu}}) + \int
{\boldsymbol{J}}(x){\boldsymbol{q}}(x) dx \right\}\nonumber \\
&=& \ \int {\mathcal{D}}{\boldsymbol{q}}
{\mathcal{D}}{\boldsymbol{\nu}} \ \delta \!\left[{\boldsymbol{q}} -
{\boldsymbol{q}}^{[{\boldsymbol{\nu}}]}\right] \exp\left\{ -
\sigma({\boldsymbol{\nu}}) + \int
{\boldsymbol{J}}(x){\boldsymbol{q}}(x) dx \right\}\, .
\end{eqnarray}
where $\delta[f({\boldsymbol{q}})] \equiv \prod_{t,\tau}
\delta(f({\boldsymbol{q}}(t,\tau)))$ and
${\boldsymbol{q}}^{[{\boldsymbol{\nu}}]}(x)$ is a solution of
(\ref{B3}). Using the representation
\begin{eqnarray}
\delta(x) \ = \ \lim_{\hbar \rightarrow 0_+} \frac{1}{2\sqrt{\pi
\hbar}} \ e^{-x^2/(4\hbar)}\, ,
\end{eqnarray}
we get in the limit of zero distribution width (i.e., $\hbar
\rightarrow 0_+$) that
\begin{eqnarray}
Z_{\rm SC}(J,\hbar)\ \rightarrow \ \int
{\mathcal{D}}{\boldsymbol{q}} \ \delta \!\left[{\boldsymbol{q}} -
{\boldsymbol{q}}^{[0]}\right]
\exp\left\{ \int
{\boldsymbol{J}}(x){\boldsymbol{q}}(x) dx \right\}\, .
\end{eqnarray}
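The limiting step above rests on the nascent-delta property of the Gaussian kernel. The following numerical sanity check (ours, outside the paper's formalism) smears a test function against the kernel $(2\sqrt{\pi\hbar})^{-1}e^{-x^2/(4\hbar)}$ and shows the result approaching the value of the function at the origin as the width shrinks:

```python
import math

# Riemann-sum approximation of \int f(x) exp(-x^2/(4h)) / (2 sqrt(pi h)) dx;
# as h -> 0+ this tends to f(0), illustrating the delta representation.
def smear(f, h, L=10.0, n=40001):
    dx = 2 * L / (n - 1)
    total = 0.0
    for k in range(n):
        x = -L + k * dx
        total += f(x) * math.exp(-x * x / (4 * h)) / (2 * math.sqrt(math.pi * h)) * dx
    return total

f = math.cos  # test function with f(0) = 1
for h in (1.0, 0.1, 0.01):
    print(h, smear(f, h))  # values tend to f(0) = 1 as h decreases
```

For this test function the smearing can be done in closed form (the result is $e^{-\hbar}$), which provides an exact benchmark for the numerics.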
Choosing a special source ${\boldsymbol{J}}(x) = {\boldsymbol{J}}(t)
\delta(\tau)$ we can sum in the path integral solely over
configurations with ${\boldsymbol{q}}(t,0) = {\boldsymbol{q}}(t)$ as
other configurations will contribute only to an overall
normalization constant. In this way we finally obtain
\begin{eqnarray}
\lim_{\hbar \rightarrow 0^+} Z_{\rm SC}({\boldsymbol{J}},\hbar) \ =
\ {{{Z}} }_{\rm CM}({\boldsymbol{J}})\, .
\end{eqnarray}
\subsection*{BII}
In this part of the appendix we show that Gozzi's
configuration-space partition function (\ref{4.3}) results from the
``classical'' limit of the closed-time path integral for the
transition probability of a system coupled to a thermal reservoir at
some temperature $T$. By the classical limit we mean the high
temperature and weak heat bath coupling limit.
The path-integral treatment of systems that are linearly coupled to
a thermal bath of harmonic oscillators was first considered by
Feynman and Vernon~\cite{FV}. For our purpose it will be
particularly convenient to utilize the so called Ohmic limit
version, as discussed in Refs.\cite{Pain,Klein1}:
\begin{eqnarray}
{\mathcal{Z}}_{\rm FV}[{\boldsymbol{J}}_+, {\boldsymbol{J}}_-] \
&=& \ \int {\mathcal{D}}{\boldsymbol{q}}_+
{\mathcal{D}}{\boldsymbol{q}}_- \ \exp\left\{ \frac{i}{\hbar}
\left[ {\mathcal{A}}[{\boldsymbol{q}}_+] -
{\mathcal{A}}[{\boldsymbol{q}}_-] \right]+ \int dt \ \left[
{\boldsymbol{J}}_+(t){\boldsymbol{q}}_+(t) -
{\boldsymbol{J}}_-(t){\boldsymbol{q}}_-(t)\right]
\right\}\nonumber
\\
&& \times \ \exp\left\{ -i \frac{m\gamma}{2\hbar} \int dt \
[{\boldsymbol{q}}_+(t) -
{\boldsymbol{q}}_-(t)][\dot{{\boldsymbol{q}}}_+(t) +
\dot{{\boldsymbol{q}}}_-(t)]^R \right\}\nonumber \\
&& \times \ \exp\left\{ - \frac{m\gamma}{\hbar^2 \beta} \int dt
\int dt' \ [{\boldsymbol{q}}_+(t) -
{\boldsymbol{q}}_-(t)]K(t,t')[{\boldsymbol{q}}_+(t') -
{\boldsymbol{q}}_-(t')] \right\}\, . \label{B13}
\end{eqnarray}
Here the paths ${\boldsymbol{q}}_+(t)$ and ${\boldsymbol{q}}_-(t)$
are associated with the forward and backward movement of the
particles in time. The superscript $R$ indicates a {\em negative}
shift in the time argument of the velocities with respect to
positions. The latter ensures the causality of the friction
forces~\cite{Klein1}. In addition, $m$ represents the particle mass
(for simplicity we assume here that all system particles have the
same mass), $\beta = 1/T$, and $\gamma$ is the friction constant (or
thermal reservoir coupling). The function $K(t,t')$ is the bath
correlation function. As argued in~\cite{Pain,Klein1}, at high
temperatures $K(t,t') \approx \delta(t-t')$. Introducing the new set
of variables ${\boldsymbol{q}} = [{\boldsymbol{q}}_+ +
{\boldsymbol{q}}_-]/2$ and $\bar{{\boldsymbol{q}}} =
[{\boldsymbol{q}}_+ - {\boldsymbol{q}}_-]$ (i.e., the center-of-mass
and {\em fast} coordinates) we can in the high-temperature case
recast (\ref{B13}) into
\begin{eqnarray}
{\mathcal{Z}}_{\rm FV}[{\boldsymbol{J}}, \bar{{\boldsymbol{J}}}] \
&=& \ \int {\mathcal{D}}{\boldsymbol{q}} {\mathcal{D}}
\bar{{\boldsymbol{q}}} \ \exp\left\{ \frac{i}{\hbar} \left[
{\mathcal{A}}[{\boldsymbol{q}} + \bar{{\boldsymbol{q}}}/2] -
{\mathcal{A}}[{\boldsymbol{q}} - \bar{{\boldsymbol{q}}}/2] \right]
+ \int dt \ \left[{\boldsymbol{J}}(t){\boldsymbol{q}}(t) -
\bar{{\boldsymbol{J}}}(t)\bar{{\boldsymbol{q}}}(t)
\right] \right\}\nonumber \\
&& \times \ \exp\left\{- i\frac{m\gamma}{\hbar} \int dt \
\bar{{\boldsymbol{q}}}(t)\left[\dot{{\boldsymbol{q}}}(t)\right]^R
- \frac{m\gamma}{\hbar^2 \beta} \int dt \
\bar{{\boldsymbol{q}}}^2(t) \right\}\, .
\end{eqnarray}
Here the self-explanatory notation ${\boldsymbol{J}}=
[{\boldsymbol{J}}_+ - {\boldsymbol{J}}_-]$ and
$\bar{{\boldsymbol{J}}} = -[{\boldsymbol{J}}_+ +
{\boldsymbol{J}}_-]/2$ was used. Let us now define $\omega =
2m\gamma /\beta$, integrate over $\bar{{\boldsymbol{q}}}$,
and go to the
classical limit $\gamma \rightarrow 0$. Then we obtain the
following chain of equations:
\begin{eqnarray}
&&\lim_{\gamma \rightarrow 0} \ {\mathcal{Z}}_{\rm
FV}[{\boldsymbol{J}},
\bar{{\boldsymbol{J}}}]\nonumber \\
&& \mbox{\hspace{1cm}}= \ \lim_{\gamma \rightarrow 0} \int
{\mathcal{D}}{{\boldsymbol{q}}}
{\mathcal{D}}\bar{{\boldsymbol{q}}} \ \exp\left\{ \frac{i}{\hbar}
\int dt \ \bar{{\boldsymbol{q}}}(t) \left[
\frac{\delta{\mathcal{A}}}{\delta {\boldsymbol{q}}(t)} - m\gamma
\left[\dot{{\boldsymbol{q}}}(t)\right]^R + i \hbar
\bar{{\boldsymbol{J}}}(t) \right] - \frac{\omega}{2\hbar^2} \int
dt \ \bar{{\boldsymbol{q}}}^2(t) \right\}
\nonumber \\
&& \mbox{\hspace{1.5cm}}\times \ \exp\left\{ \int dt \
{\boldsymbol{J}}(t){\boldsymbol{q}}(t) \right\}\nonumber \\
&& \mbox{\hspace{1cm}}= \ \lim_{\gamma \rightarrow 0} \ \int
{\mathcal{D}}{{\boldsymbol{q}}} \ \exp \left\{- \frac{1}{2\omega}
\int dt \ \left[ \frac{\delta{\mathcal{A}}}{\delta
{\boldsymbol{q}}(t)} - m\gamma
\left[\dot{{\boldsymbol{q}}}(t)\right]^R + i \hbar
\bar{{\boldsymbol{J}}}(t) \right]^2 + \int dt \
{\boldsymbol{J}}(t){\boldsymbol{q}}(t) \right\} \nonumber \\
&& \mbox{\hspace{1cm}}= \ \lim_{\gamma \rightarrow 0} \ \int
{\mathcal{D}}{{\boldsymbol{q}}} \ {\mathcal{J}}[{\boldsymbol{q}}]\
\exp \left\{- \frac{1}{2\omega} \int dt \ \left[
\frac{\delta{\mathcal{A}}}{\delta {\boldsymbol{q}}(t)} - m\gamma
\dot{{\boldsymbol{q}}}(t) + i \hbar \bar{{\boldsymbol{J}}}(t)
\right]^2 + \int dt \
{\boldsymbol{J}}(t){\boldsymbol{q}}(t) \right\} \nonumber \\
&& \mbox{\hspace{1cm}}= \ \int {\mathcal{D}}{{\boldsymbol{q}}} \
\delta \!\left[ \frac{\delta{\mathcal{A}}}{\delta
{\boldsymbol{q}}} + i \hbar \bar{{\boldsymbol{J}}} \right]
{\mathcal{J}}[{\boldsymbol{q}}] \ \exp\left\{ \int dt \
{\boldsymbol{J}}(t){\boldsymbol{q}}(t) \right\}
\nonumber \\
&& \mbox{\hspace{1cm}}= \ \int {\mathcal{D}}{{\boldsymbol{q}}} \
\delta \!\left[ {\boldsymbol{q}} -
{\boldsymbol{q}}^{[\bar{{\boldsymbol{J}}}]}\right] \ \exp\left\{
\int dt \ {\boldsymbol{J}}(t){\boldsymbol{q}}(t) \right\} \, .
\end{eqnarray}
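The second equality in the chain above integrates out $\bar{\boldsymbol{q}}$ using the standard Gaussian formula $\int dx\, e^{iax - cx^2} = \sqrt{\pi/c}\, e^{-a^2/(4c)}$ for $c>0$. A direct numerical cross-check of this identity (ours, with arbitrary illustrative parameters):

```python
import cmath
import math

# Riemann-sum evaluation of \int exp(i a x - c x^2) dx over a truncated
# domain; for c > 0 and large enough L the tails are negligible.
def gauss_int(a, c, L=15.0, n=60001):
    dx = 2 * L / (n - 1)
    s = 0j
    for k in range(n):
        x = -L + k * dx
        s += cmath.exp(1j * a * x - c * x * x) * dx
    return s

a, c = 1.3, 0.7  # illustrative values
exact = math.sqrt(math.pi / c) * math.exp(-a * a / (4 * c))
assert abs(gauss_int(a, c) - exact) < 1e-8
```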
The Jacobian ${\mathcal{J}}[{\boldsymbol{q}}]$ results from
transition to the ``unretarded'' velocities and its explicit form
reads~\cite{Klein1}:
\begin{eqnarray}
{\mathcal{J}}[{\boldsymbol{q}}] \ = \ \det \left|\!\left|\frac{\partial}{\partial t} \
\delta_{ab}\delta(t-t') + \frac{\delta^2 {\mathcal{A}}}{\delta
q_a(t) \delta q_b(t')}\right|\!\right|\, .
\end{eqnarray}
Coordinates ${\boldsymbol{q}}^{[\bar{{\boldsymbol{J}}}]}$ are
solutions of the equation of the motion:
\begin{eqnarray}
\frac{\delta{\mathcal{A}}[{\boldsymbol{q}}]}{\delta
{\boldsymbol{q}}(t)} \ = \ - i \hbar \bar{{\boldsymbol{J}}}(t)\, .
\end{eqnarray}
In the limit $\gamma \rightarrow 0$, we find again the Gozzi {\em
et al.} partition function
\begin{eqnarray}
\lim_{\gamma \rightarrow 0} {\mathcal{Z}}_{\rm
FV}[{\boldsymbol{J}}, {{\boldsymbol{0}}}] \ = \ \lim_{\hbar
\rightarrow 0} \lim_{\gamma \rightarrow 0} {\mathcal{Z}}_{\rm
FV}[{\boldsymbol{J}}, \bar{{\boldsymbol{J}}}] \ = \ Z_{\rm
CM}[{\boldsymbol{J}}]\, .
\end{eqnarray}
\section*{Appendix C}
In this appendix we prove that (\ref{3.33}) is a special case of the
Euler-like functionals (\ref{3.31}). Let us first show that
(\ref{3.33}) can be replaced by
an action of the form (\ref{3.31}). Indeed, because of the
homogeneity of (\ref{3.33}), we can immediately replace it by
\begin{eqnarray}
{\mathcal{A}}[r^{\alpha_i}q_i] = \sum_i \int dt \ \alpha_i
r^{\alpha_i}(t)q_i(t) \frac{\delta {\mathcal{A}}[r^{\alpha_i}q_i]}{\delta
r^{\alpha_i}(t)q_i(t)} \ = \ \int dt \ r(t) \frac{\delta
{\mathcal{A}}[r^{\alpha_i}q_i]}{\delta r(t)}\, . \label{B11}
\end{eqnarray}
Since this is true for any $r(t)$, we see that
\begin{eqnarray}
\int dt dt' \ r(t) \frac{\delta^2 {\mathcal{A}}[r^{\alpha_i}q_i]}{\delta
r(t) \delta r(t')} \ = \ 0\, . \label{B1}
\end{eqnarray}
This simply expresses the fact that the functional $
{\mathcal{A}}[r^{\alpha_i}q_i]$ is linear in $r(t)$. The right-hand side of
(\ref{B11}) has then precisely the Euler form (\ref{3.31}).
The reverse direction is proved in the following way: We first
recast (\ref{3.31}) in the general form
\begin{eqnarray}
\int dt \ r(t) L({\boldsymbol{q}}(t), \dot{{\boldsymbol{q}}}(t)) \ =
\ \int dt \ L\!\left( r^{\alpha_i}(t)q_i(t),
d(r^{\alpha_i}(t)q_i(t))/dt \right)\, . \label{B2}
\end{eqnarray}
Applying the variation $\int dt \ \delta/\delta r(t)$ to (\ref{B2})
we obtain
\begin{eqnarray}
{\mathcal{A}}[{\boldsymbol{q}}]\ = \ \int dt \ \sum_i \alpha_i r^{\alpha_i
-1} q_i(t) \left( \frac{\partial L}{\partial r^{\alpha_i}(t) q_i(t)}
- \frac{d}{dt}\frac{\partial L}{\partial [d(r^{\alpha_i}(t)
q_i(t))/dt]} \right)\, .
\end{eqnarray}
This relation must hold for all $r(t)$, and hence by choosing $r(t)
= 1$ we arrive at the required result
\begin{eqnarray}
{\mathcal{A}}[{\boldsymbol{q}}]\ = \ \int dt \ \sum_i \alpha_i q_i(t)
\frac{\delta {\mathcal{A}}[{\boldsymbol{q}}]}{\delta q_i(t)}\, .
\end{eqnarray}
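The functional identity above is the infinite-dimensional analogue of Euler's theorem for weighted-homogeneous functions. A finite-dimensional numerical illustration (ours, with a hypothetical test function): if $f(r^{a_1}q_1, r^{a_2}q_2) = r\, f(q_1,q_2)$ for all $r>0$, then $f = \sum_i a_i q_i\, \partial f/\partial q_i$.

```python
# Finite-dimensional analogue of the homogeneity argument (Euler's theorem).
def f(q1, q2):
    return q1 * q1 * q2   # homogeneous of degree 1 under weights a1=1/4, a2=1/2

a1, a2 = 0.25, 0.5        # 2*a1 + a2 = 1
q1, q2, r = 1.7, -0.9, 2.3

# weighted scaling multiplies f by r
assert abs(f(r**a1 * q1, r**a2 * q2) - r * f(q1, q2)) < 1e-12

# Euler identity checked via central finite differences
eps = 1e-6
df1 = (f(q1 + eps, q2) - f(q1 - eps, q2)) / (2 * eps)
df2 = (f(q1, q2 + eps) - f(q1, q2 - eps)) / (2 * eps)
assert abs(a1 * q1 * df1 + a2 * q2 * df2 - f(q1, q2)) < 1e-6
```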
\section*{Appendix D}
Here we prove the fact that inclusion of the subsidiary constraint
(\ref{4.1}) in the primary constraints (\ref{2.5}) does not
produce any secondary constraints. Secondary constraints
arise from the consistency conditions (\ref{A1}) or, in other
words, when the existing constraints are incompatible with the
equations of motion.
We first observe that the condition $H_- \approx 0$ can be
equivalently represented by the condition $(\bar{H} - \sum_i a_i
C_i) \equiv \phi_0 \approx 0$. If we now add the subsidiary
constraint $\phi_0$ to the remaining $2N$ constraints $\phi_i$ and
again require that the constraints $\phi_i$ remain (weakly) zero
at all times we have
\begin{eqnarray}
0 \ \approx \ \dot{\phi_i} \ \approx \ \{ \phi_i, \bar{H}\}
\ + \ u^j \{\phi_i, \phi_j \}\, , \;\;\;\; \;\;\; i,j = 0, 1
\ldots, 2N\, . \label{D1}
\end{eqnarray}
Since there is an odd number of constraints and because $\{\phi_i,
\phi_j \}$ is an antisymmetric matrix we have that $\det|\!|\left\{
\phi_i, \phi_j \right\}|\!| = 0$. From the analysis in Appendix A it
is clear that the rank of the matrix $\{\phi_i, \phi_j \}$ is $2N$
and hence it has one null-eigenvector, say ${\boldsymbol{e}}$.
Consequently, Eq.(\ref{D1}) implies the constraint
\begin{eqnarray}
\sum_{i=0}^{2N} e_i \{\phi_i, \bar{H} \} \ \approx \ 0\, .
\label{D2}
\end{eqnarray}
If the latter represented a new non-trivial constraint (i.e., a
constraint that cannot be written as a linear combination of the
constraints $\phi_i$) we would need to include such a new
constraint (the so-called secondary constraint) in the list of
existing constraints and go again through the consistency
condition (\ref{D1}). Fortunately, the condition (\ref{D2}) is
automatically fulfilled and hence it does not constitute any new
constraint. Indeed, by choosing
\begin{eqnarray}
{\boldsymbol{e}} \ = \ \left(\begin{array}{c} 1 \\
\{\phi_0, \phi_2^a \}\\
\{\phi_1^a, \phi_0\}\\
\{\phi_0, \phi_2^b\}\\
\{\phi_1^b, \phi_0\}\\
\vdots\\
\{ \phi_0, \phi_2^N\}\\
\{ \phi_1^N, \phi_0\}
\end{array}\right) \ = \ \left( \begin{array}{c}
1 \\
{f}_a({\boldsymbol{q}})\\
- \frac{\partial
\phi_0}{\partial q_a} \\
{f}_b({\boldsymbol{q}})\\
-\frac{\partial
\phi_0}{\partial q_b}\\
\vdots\\
{f}_N({\boldsymbol{q}})\\
-\frac{\partial \phi_0}{\partial{q}_N}
\end{array}\right)\, ,
\end{eqnarray}
and using $\{ \phi_0, \bar{H} \} = 0$ together with (\ref{A3}) we
obtain
\begin{eqnarray}
\sum_{i=0}^{2N} e_i \{\phi_i, \bar{H} \}\ = \ - \sum_{i,a} a_i(t)
{f}_a({\boldsymbol{q}}) \ \frac{\partial C_i}{\partial q_a} \ = \
\sum_{i =1}^n a_i(t) \{H, C_i\}\ = \ 0\, . \label{D4}
\end{eqnarray}
As the latter is zero (even strongly) there is no new constraint
condition generated by an inclusion of $\phi_0$ in the original
set of (primary) constraints. Note that the key in obtaining
(\ref{D4}) was the fact that $C_i$'s are
${\boldsymbol{p}}$-independent constants of motion.
The rank of $\{ \phi_i, \phi_j \}$ being $2N$ means that there is
one relation
\begin{eqnarray}
\sum_{i = 0}^{2N} e_i \{ \phi_i, \phi_j \} \ \approx \ 0\, .
\end{eqnarray}
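The linear-algebra fact used here (and above, in deriving (\ref{D2})) is that an antisymmetric matrix of odd dimension is singular: $\det M = \det M^T = \det(-M) = (-1)^n \det M = -\det M$ for odd $n$. A quick numerical check (ours) with an arbitrary antisymmetric $3\times 3$ matrix:

```python
# Explicit 3x3 determinant; for any antisymmetric matrix of odd size it
# must vanish, which is why the bracket matrix of 2N+1 constraints is
# singular and admits a null-eigenvector.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[0.0,   2.0, -1.5],
     [-2.0,  0.0,  0.25],
     [1.5, -0.25,  0.0]]   # arbitrary antisymmetric 3x3 matrix (dyadic entries)
assert det3(M) == 0.0
```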
Any linear combination of the constraints $\phi_i$ is again a
constraint. So, in particular, if we define $\varphi = \sum_i e_i
\phi_i$ we obtain that $\varphi$ has weakly vanishing Poisson
brackets with all constraints, i.e.,
\begin{eqnarray}
\{ \varphi, \phi_i \} \ \approx \ 0\, , \;\;\;\; i = 1, \ldots,
2N\, .
\end{eqnarray}
Thus, according to Dirac's classification (see e.g.,
Ref.~\cite{Dir}) $\varphi$ is a first class constraint. The
remaining $2N$ constraints (which do not have vanishing Poisson
brackets with all other constraints) are of the second class. Note
particularly that the explicit form for $\varphi$ reads
\begin{eqnarray}
\varphi \ = \ \sum_{i = 0}^{2N} e_i \phi_i \ = \ (H - \sum_{i =
1}^n a_i C_i) - \sum_{a = 1}^N \bar{p}_a \ \frac{\partial
\phi_0}{\partial q_a}\, , \label{D3}
\end{eqnarray}
which is clearly weakly identical to $H- \sum_i a_i C_i$. Observe
that it is $H$ and not $\bar{H}$ that is present in (\ref{D3}).
\section*{References}
\end{document}
\begin{document}
\title{ Rate of convergence for Wong-Zakai-type approximations of It\^o stochastic
differential equations}
\numberwithin{equation}{section}
\begin{abstract}
We consider a class of stochastic differential equations driven by a one dimensional Brownian motion and we investigate the rate of convergence for Wong-Zakai-type approximated solutions. We first consider the Stratonovich case, obtained through the point-wise multiplication between the diffusion coefficient and a smoothed version of the noise; then, we consider It\^o equations where the diffusion coefficient is Wick-multiplied by the regularized noise. We discover that in both cases the speed of convergence to the exact solution coincides with the speed of convergence of the smoothed noise towards the original Brownian motion. We also prove, in analogy with a well known property for exact solutions, that the solutions of approximated It\^o equations solve approximated Stratonovich equations with a certain correction term in the drift.
\end{abstract}
Key words and phrases: stochastic differential equations, Wong-Zakai theorem, Wick product \\
AMS 2000 classification: 60H10; 60H30; 60H05
\allowdisplaybreaks
\section{Introduction and statement of the main results}
From a modeling point of view, the celebrated Wong-Zakai theorem \cite{WZ},\cite{WZ2} provides a crucial insight into the theory of stochastic differential equations. It asserts that the solution $\{X_t^{(n)}\}_{t\in [0,T]}$ of the random ordinary differential equation
\begin{eqnarray}\label{stra approx intro}
\frac{dX_t^{(n)}}{dt}=b(t,X_t^{(n)})+\sigma(t,X_t^{(n)})\cdot\frac{dB_t^{(n)}}{dt},
\end{eqnarray}
where $\{B_t^{(n)}\}_{t\in [0,T]}$ is a suitable smooth approximation of the Brownian motion $\{B_t\}_{t\in [0,T]}$, converges in the mean, as $n$ goes to infinity, to the solution of the Stratonovich stochastic differential equation (SDE, for short)
\begin{eqnarray}\label{stra intro}
dX_t=b(t,X_t)dt+\sigma(t,X_t)\circ dB_t.
\end{eqnarray}
At first sight it may look surprising that the sequence $\{X_t^{(n)}\}_{t\in [0,T]}$ does not converge to the It\^o interpretation of the corresponding stochastic equation, i.e.
\begin{eqnarray}\label{ito intro}
dX_t=b(t,X_t)dt+\sigma(t,X_t)dB_t.
\end{eqnarray}
What makes the sequence $\{X_t^{(n)}\}_{t\in [0,T]}$ converge to the Stratonovich SDE (\ref{stra intro}) rather than to the It\^o SDE (\ref{ito intro}) is the presence of the point-wise product $\cdot$ in (\ref{stra approx intro}) between the diffusion coefficient $\sigma$ and the smoothed noise. In fact, Hu and {\O}ksendal \cite{HO} proved, when the diffusion coefficient is linear, that the solution of
\begin{eqnarray}\label{ito approx intro}
\frac{dY_t^{(n)}}{dt}=b(t,Y_t^{(n)})+\sigma(t)Y_t^{(n)}\diamond\frac{dB_t^{(n)}}{dt},
\end{eqnarray}
where $\diamond$ stands for the Wick product, converges as $n$ goes to infinity to the solution of the It\^o SDE
\begin{eqnarray}\label{ito oksendal intro}
dY_t=b(t,Y_t)dt+\sigma(t)Y_tdB_t.
\end{eqnarray}
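The dichotomy described above can be observed numerically. The following sketch is purely illustrative and not part of the paper's argument: we take $b=0$, $\sigma\equiv 1$, $x=1$, so that the random ODE driven by the polygonal interpolation $\{B_t^{(n)}\}$ of a Brownian path has the closed-form solution $X_t^{(n)}=\exp\{B_t^{(n)}\}$, which we compare uniformly in $t$ with the exact Stratonovich solution $e^{B_t}$ and the exact It\^o solution $e^{B_t-t/2}$; the function name and parameters are hypothetical.

```python
import numpy as np

def wong_zakai_errors(n_coarse, n_fine=8192, n_paths=200, seed=1):
    """For b = 0, sigma = 1, x = 1 the random ODE dX/dt = X dB^{(n)}/dt has
    the closed-form solution X_t^{(n)} = exp(B_t^{(n)}).  Compare it, in the
    uniform norm on [0, T], with the exact Stratonovich solution exp(B_t)
    and the exact Ito solution exp(B_t - t/2), averaging over sample paths."""
    rng = np.random.default_rng(seed)
    T = 1.0
    t = np.linspace(0.0, T, n_fine + 1)
    err_strat, err_ito = [], []
    for _ in range(n_paths):
        # Brownian path sampled on a fine grid
        dB = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)
        B = np.concatenate(([0.0], np.cumsum(dB)))
        step = n_fine // n_coarse
        # polygonal (piecewise linear) approximation B^{(n)} on the coarse mesh
        Bn = np.interp(t, t[::step], B[::step])
        Xn = np.exp(Bn)                          # solution of the random ODE
        err_strat.append(np.max(np.abs(Xn - np.exp(B))))
        err_ito.append(np.max(np.abs(Xn - np.exp(B - t / 2.0))))
    return float(np.mean(err_strat)), float(np.mean(err_ito))
```

On typical runs the distance to the Stratonovich solution shrinks as the mesh is refined, while the distance to the It\^o solution stays bounded away from zero.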
Along this direction, Da Pelo et al. \cite{DLS 2013} introduced a family of products interpolating between the point-wise and Wick products and proved convergence for Wong-Zakai-type approximations toward stochastic differential equations where the stochastic integrals are defined via suitable evaluation points in the Riemann sums.\\
Approximation procedures based on Wong-Zakai-type theorems have attracted the attention of several authors. First of all, Stroock and Varadhan \cite{Stroock Varadhan} proved the multidimensional version of the Wong-Zakai theorem. Then, generalizations to SDEs driven by different types of noise and to stochastic partial differential equations have been the most investigated directions. For instance, Konecny \cite{Konecny} proved a Wong-Zakai-type theorem for one-dimensional SDEs driven by a semimartingale, Gy\"ongy and Michaletzky \cite{Gyongy Michaletzky} considered $\delta$-martingales, while Naganuma \cite{Naganuma} examined the case of Gaussian rough paths. In the theory of stochastic partial differential equations, Hairer and Pardoux \cite{Hairer Pardoux} proved a version of the Wong-Zakai theorem for one-dimensional parabolic nonlinear stochastic PDEs driven by space-time white noise, utilizing the recent theory of regularity structures; Brze\'zniak and Flandoli \cite{Brezniak Flandoli} proved almost sure convergence to the solution of a Stratonovich stochastic partial differential equation; Tessitore and Zabczyk \cite{Tessitore Zabczyk} obtained results on the weak convergence of the laws of the Wong-Zakai approximations for stochastic evolution equations. We also mention that Londono and Villegas \cite{Londono Villegas} proposed a Wong-Zakai-type approximation method for the numerical evaluation of the solutions of SDEs.\\
The aim of the present paper is to compare the rate of convergence for approximations of Stratonovich and It\^o quasi-linear SDEs and to investigate whether the connection between exact solutions of the two different interpretations can be restored for the corresponding approximating sequences (see the discussion after Corollary \ref{corollary} below).
We remark that the rate of convergence for Wong-Zakai approximations, in the Stratonovich case, has already been investigated by other authors. We recall Hu and Nualart \cite{HN} dealing with almost sure convergence in H\"older norms; Hu, Kallianpur and Xiong \cite{HKX} studying approximations for the Zakai equation and Gy\"ongy and Shmatkov \cite{Gyongy Shmatkov} and Gy\"ongy and Stinga \cite{Gyongy Stinga} treating general linear stochastic partial differential equations. We also refer the reader to the book by Hu \cite{H} where Wong-Zakai approximations are considered in the framework of Euler-Maruyama discretization schemes. We will discuss in Remark \ref{Hu} below the details of the comparison between our convergence rate for Stratonovich equations and the one in \cite{H}.
While Wong-Zakai-type theorems for Stratonovich SDEs have been largely investigated, approximations for It\^o SDEs are very rare in the literature. In fact, as the paper by Hu and {\O}ksendal shows, to recover the It\^o interpretation of the SDE one has to deal with the Wick product, and in most cases this multiplication is not easy to handle. This is the reason why we focus on equations with linear diffusion coefficient (it is in fact not known whether the fully nonlinear version of (\ref{ito approx intro}) admits a solution \cite{HO}). Nevertheless, to find the speed of convergence of the approximation to the solution of the It\^o equation, we had to utilize some tools from the Malliavin calculus (see Lemma \ref{Malliavin} below). The present paper can be considered as a continuation of the work presented in Da Pelo et al. \cite{DLS 2013}, where the issue of the rate of convergence was not studied.\\
To state our main results we briefly describe our framework. Let $(W,\mathcal{A},\mu)$ be the classical Wiener space over the time interval $[0,T]$, where $T$ is an arbitrary positive constant, and denote by $\{B_t\}_{t\in [0,T]}$ the coordinate process, i.e.
\begin{eqnarray*}
B_t:W&\to&\mathbb{R}\\
\omega&\mapsto& B_t(\omega)=\omega(t).
\end{eqnarray*}
By construction, the process $\{B_t\}_{t\in [0,T]}$ is, under the measure $\mu$, a one dimensional Brownian motion. We now introduce a smooth (continuously differentiable) approximation of $\{B_t\}_{t\in [0,T]}$ by means of a kernel satisfying certain technical assumptions. In the sequel, the symbol $|f|$ will denote the norm of $f\in L^2([0,T])$ while $\Vert X\Vert_p$ will denote the norm of $X\in\mathcal{L}^p(W,\mu)$ for any $p\geq 1$.
\begin{assumption}\label{assumption on K}
For any $\varepsilon>0$ let $K_{\varepsilon}:[0,T]^2\to\mathbb{R}$ be such that
\begin{itemize}
\item the function $t\mapsto K_{\varepsilon}(t,s)$ belongs to $C^1([0,T])$ for almost all $s\in [0,T]$;
\item the functions $s\mapsto K_{\varepsilon}(t,s)$ and $s\mapsto \partial_tK_{\varepsilon}(t,s)$ belong to $L^2([0,T])$ for all $t\in [0,T]$.
\end{itemize}
Moreover, we assume that
\begin{eqnarray}\label{kernel}
\lim_{\varepsilon\to 0^+}\sup_{t\in[0,T]}|K_{\varepsilon}(t,\cdot)-1_{[0,t]}(\cdot)|=0
\end{eqnarray}
and
\begin{eqnarray*}
M:=\sup_{\varepsilon >0}\sup_{t\in [0,T]}|K_{\varepsilon}(t,\cdot)|<+\infty.
\end{eqnarray*}
\end{assumption}
Now, if we let
\begin{eqnarray*}
B^{\varepsilon}_t:=\int_0^TK_{\varepsilon}(t,s)dB_s,\quad t\in [0,T],
\end{eqnarray*}
and recall that $B_t=\int_0^T1_{[0,t]}(s)dB_s$, then Assumption \ref{assumption on K} implies that $\{B^{\varepsilon}_t\}_{t\in [0,T]}$ is a continuously differentiable Gaussian process and that
$B_t^{\varepsilon}$ converges to $B_t$ in $\mathcal{L}^2(W,\mu)$ uniformly with respect to $t\in [0,T]$. In fact, condition (\ref{kernel}) is equivalent to
\begin{eqnarray*}
\lim_{\varepsilon\to 0^+}\sup_{t\in [0,T]}\Vert B_t^{\varepsilon}-B_t\Vert_2=0.
\end{eqnarray*}
Therefore, we deal with a quite general class of smooth approximations of the Brownian motion $\{B_t\}_{t\geq 0}$. In the sequel we will be studying SDEs of the type (\ref{ito oksendal intro}) both in the Stratonovich and It\^o senses. We now state the assumptions on the coefficients $b$ and $\sigma$ which are supposed to be valid for the rest of the present paper.
\begin{assumption}\label{assumption on b and sigma}
There exist two positive constants $C_1$ and $C_2$ such that for all $t\in [0,T]$ and $x,y\in\mathbb{R}$ one has
\begin{eqnarray}\label{lipschitz}
|b(t,x)-b(t,y)|\leq C_1|x-y|\quad\mbox{ and }\quad |b(t,x)|\leq C_2(1+|x|).
\end{eqnarray}
Moreover, the function $\sigma$ belongs to $L^{\infty}([0,T])$.
\end{assumption}
For $f\in L^2([0,T])$ we denote
\begin{eqnarray*}
\mathcal{E}(f):=\exp\left\{\int_0^Tf(s)dB_s-\frac{1}{2}\int_0^Tf^2(s)ds\right\}
\end{eqnarray*}
and we call it \emph{stochastic exponential}. The set $\{\mathcal{E}(f),f\in L^2([0,T])\}$ turns out to be total in $\mathcal{L}^p(W,\mu)$ for any $p\geq 1$. Given $f,g\in L^2([0,T])$, the \emph{Wick product} of $\mathcal{E}(f)$ and $\mathcal{E}(g)$ is defined to be
\begin{eqnarray*}
\mathcal{E}(f)\diamond\mathcal{E}(g):=\mathcal{E}(f+g).
\end{eqnarray*}
This multiplication can be extended by linearity and density to an unbounded bilinear form on a proper subset of $\mathcal{L}^p(W,\mu)\times\mathcal{L}^p(W,\mu)$ (see Holden et al. \cite{HOUZ} and Janson \cite{J} for its connection to It\^o-Skorohod integration theory). For $g\in L^2([0,T])$ we also define the \emph{translation operator} $T_g$ as the operator that shifts the Brownian path by the function $\int_0^{\cdot}g(s)ds$; more precisely, the action of $T_g$ on stochastic exponentials is given by
\begin{eqnarray*}
T_g\mathcal{E}(f):=\mathcal{E}(f)\cdot\exp\{\langle f,g\rangle\},
\end{eqnarray*}
where $\langle\cdot,\cdot\rangle$ denotes the inner product in $L^2([0,T])$ (see Holden et al. \cite{HOUZ} and Janson \cite{J} for details).\\
\noindent We are now ready to state the first two main theorems of the present paper. The proofs are postponed to Section 2 and Section 3, respectively.
\begin{theorem}\label{main theorem 1}
Let $\{X_t\}_{t\in [0,T]}$ be the unique solution of the Stratonovich SDE
\begin{eqnarray}\label{stra SDE}
dX_t=b(t,X_t)dt+\sigma(t)X_t\circ dB_t,\quad t\in ]0,T]\quad\quad X_0=x
\end{eqnarray}
and for any $\varepsilon>0$ let $\{X_t^{\varepsilon}\}_{t\in[0,T]}$ be the unique solution of
\begin{eqnarray}\label{approx stra}
\frac{dX_t^{\varepsilon}}{dt}=b(t,X_t^{\varepsilon})+\sigma(t)X_t^{\varepsilon}\cdot\frac{dB_t^{\varepsilon}}{dt},\quad
X_0^{\varepsilon}=x.
\end{eqnarray}
Then, for any $p\geq 1$ there exists a positive constant $C$ (depending on $p$, $|x|$, $T$, $C_1$, $C_2$ and $M$) such that for any $q$ greater than $p$
\begin{eqnarray}\label{SDE1}
\sup_{t\in [0,T]}\Vert X_t^{\varepsilon}-X_t\Vert_{p}\leq C\cdot\mathcal{S}_q\left(\sup_{t\in [0,T]}|K_{\varepsilon}(t,\cdot)-1_{[0,t]}(\cdot)|\right)
\end{eqnarray}
where
\begin{eqnarray}\label{def S}
\mathcal{S}_q(\lambda):=\lambda\exp\left\{q\lambda^2\right\}+\exp\{\lambda^2/2\}-1,\quad \lambda\in\mathbb{R}.
\end{eqnarray}
\end{theorem}
\begin{remark}\label{Hu}
In Theorem 11.6 of \cite{H} it is proved that
\begin{eqnarray}\label{Hu estimate}
\Big\Vert \sup_{t\in [0,T]}|X_t^{\pi}-X_t|\Big\Vert_p\leq C_{p,T}(\log|\pi|)^2|\pi|^{\frac{1}{2}+\frac{1}{\log|\pi|}}
\end{eqnarray}
where $\pi$ is a partition of the interval $[0,T]$, $|\pi|$ denotes the mesh of the partition $\pi$ and $\{X_t^{\pi}\}_{t\in [0,T]}$ stands for the solution of
\begin{eqnarray*}\label{Hu SDE}
\frac{dX_t^{\pi}}{dt}=b(t,X_t^{\pi})+\sigma(t,X_t^{\pi})\cdot\frac{dB_t^{\pi}}{dt},
\end{eqnarray*}
with $\{B_t^{\pi}\}_{t\in [0,T]}$ being the polygonal approximation of $\{B_t\}_{t\in [0,T]}$ associated to the partition $\pi$. The above result is stated and proved for general nonlinear systems of SDEs driven by a multidimensional Brownian motion. Moreover, the topology utilized in (\ref{Hu estimate}) is stronger than the one in (\ref{SDE1}) (where the supremum is outside of the $\mathcal{L}^p(W,\mu)$-norm).\\
It is not difficult to see that the polygonal approximation $\{B^{\pi}_t\}_{t\in [0,T]}$ is included in the family of approximations $\{B^{\varepsilon}_t\}_{t\in [0,T]}$ considered in the present paper (the parameter $\varepsilon$ reduces to the mesh of the partition $|\pi|$) and in that case we get
\begin{eqnarray*}
\sup_{t\in [0,T]}\Vert B_t^{\pi}-B_t\Vert_2=\sup_{t\in [0,T]}|K_{|\pi|}(t,\cdot)-1_{[0,t]}(\cdot)|=C\sqrt{|\pi|}.
\end{eqnarray*}
Substituting in (\ref{SDE1}) we obtain
\begin{eqnarray*}
\sup_{t\in [0,T]}\Vert X_t^{\pi}-X_t\Vert_{p}\leq C\cdot\mathcal{S}_q(C\sqrt{|\pi|})
\end{eqnarray*}
which behaves like $\sqrt{|\pi|}$ as $|\pi|$ goes to zero. A comparison with (\ref{Hu estimate}) shows that Theorem \ref{main theorem 1} provides a higher rate of convergence, at the price of a weaker topology and more restrictive conditions on the class of SDEs considered.\\
The result and proof of Theorem \ref{main theorem 1} are however necessary for the comparison proposed in the present paper.
\end{remark}
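For the polygonal approximation mentioned in the remark above, the kernel at $t=t_k+\theta|\pi|$, $\theta\in[0,1]$, is $K_{|\pi|}(t,s)=1_{[0,t_k]}(s)+\theta\,1_{]t_k,t_{k+1}]}(s)$, and an elementary computation gives $|K_{|\pi|}(t,\cdot)-1_{[0,t]}(\cdot)|^2=|\pi|\,\theta(1-\theta)$, so the supremum over $t$ equals $\sqrt{|\pi|}/2$ (the explicit constant $1/2$ is our computation, not stated in the text). A quick numerical check of this formula, with hypothetical function names:

```python
import numpy as np

def kernel_gap(theta, h=1.0 / 64, t_k=0.25, n_s=200000, T=1.0):
    """L^2([0,T]) distance between the polygonal kernel K(t, .) and the
    indicator 1_[0,t] at t = t_k + theta*h, via midpoint quadrature.
    For the polygonal approximation K(t, s) = 1 for s <= t_k, = theta for
    t_k < s <= t_k + h, and = 0 otherwise; the exact value of the distance
    is sqrt(h * theta * (1 - theta))."""
    ds = T / n_s
    s = (np.arange(n_s) + 0.5) * ds        # midpoints of the quadrature cells
    t = t_k + theta * h
    K = np.where(s <= t_k, 1.0, np.where(s <= t_k + h, theta, 0.0))
    ind = (s <= t).astype(float)
    return float(np.sqrt(np.sum((K - ind) ** 2) * ds))
```

Maximizing $\theta(1-\theta)$ at $\theta=1/2$ recovers the rate $\sup_t|K_{|\pi|}(t,\cdot)-1_{[0,t]}(\cdot)|=\sqrt{|\pi|}/2=C\sqrt{|\pi|}$ used in the remark.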
\begin{theorem}\label{main theorem 2}
Let $\{Y_t\}_{t\in [0,T]}$ be the unique solution of the It\^o SDE
\begin{eqnarray}\label{ito SDE}
dY_t=b(t,Y_t)dt+\sigma(t)Y_tdB_t,\quad t\in ]0,T]\quad\quad Y_0=x
\end{eqnarray}
and for any $\varepsilon>0$ let $\{Y_t^{\varepsilon}\}_{t\in[0,T]}$ be the unique solution of
\begin{eqnarray}\label{approx ito}
\frac{dY_t^{\varepsilon}}{dt}=b(t,Y_t^{\varepsilon})+\sigma(t)Y_t^{\varepsilon}\diamond\frac{dB_t^{\varepsilon}}{dt},\quad
Y_0^{\varepsilon}=x.
\end{eqnarray}
Then, for any $p\geq 1$ there exists a positive constant $C$ (depending on $p$, $|x|$, $T$, $C_1$, $C_2$ and $M$) such that for any $q$ greater than $p$
\begin{eqnarray*}
\sup_{t\in [0,T]}\Vert Y_t^{\varepsilon}-Y_t\Vert_{p}\leq C\cdot\mathcal{S}_q\left(\sqrt{2}\sup_{t\in [0,T]}|K_{\varepsilon}(t,\cdot)-1_{[0,t]}(\cdot)|\right)
\end{eqnarray*}
where $\mathcal{S}_q$ is the function defined in (\ref{def S}).
\end{theorem}
\begin{corollary}\label{corollary}
In the notation of Theorem \ref{main theorem 1} and Theorem \ref{main theorem 2}, we have for any $p\geq 1$ that
\begin{eqnarray*}
\lim_{\varepsilon\to 0^+}\sup_{t\in [0,T]}\Vert X_t^{\varepsilon}-X_t\Vert_{p}=\lim_{\varepsilon\to 0^+}\sup_{t\in [0,T]}\Vert Y_t^{\varepsilon}-Y_t\Vert_{p}=0
\end{eqnarray*}
where both limits have rate of convergence of order
\begin{eqnarray*}
\sup_{t\in [0,T]}|K_{\varepsilon}(t,\cdot)-1_{[0,t]}(\cdot)|\quad\mbox{ as $\varepsilon$ tends to zero}.
\end{eqnarray*}
\end{corollary}
\begin{proof}
It follows from Theorem \ref{main theorem 1}, Theorem \ref{main theorem 2} and the fact that
\begin{eqnarray}\label{limit of S}
\lim_{\lambda\to 0^+}\frac{\mathcal{S}_q(\lambda)}{\lambda}=1
\end{eqnarray}
for all $q\geq 1$.
\end{proof}
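The limit (\ref{limit of S}) reflects the expansion $\mathcal{S}_q(\lambda)=\lambda+\lambda^2/2+O(\lambda^3)$ as $\lambda\to 0^+$, which is what makes the rate of Corollary \ref{corollary} linear in the kernel gap. A quick numerical sanity check (illustrative only; the function name is hypothetical):

```python
import math

def rate_S(q, lam):
    """The rate function S_q(lambda) = lambda*exp(q*lambda^2)
    + exp(lambda^2/2) - 1 from the statement of the main theorems."""
    return lam * math.exp(q * lam ** 2) + math.exp(lam ** 2 / 2.0) - 1.0
```

The ratio $\mathcal{S}_q(\lambda)/\lambda$ approaches $1$ from above as $\lambda$ decreases, and $\mathcal{S}_q$ is increasing on $[0,+\infty[$, as used in the proofs below.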
\noindent It is well known (see for instance Karatzas and Shreve \cite{KS}) that the It\^o SDE (\ref{ito SDE}) can be reformulated as the Stratonovich SDE
\begin{eqnarray}\label{ito-stra SDE}
dY_t=\left(b(t,Y_t)-\frac{1}{2}\sigma(t)^2Y_t\right)dt+\sigma(t)Y_t\circ dB_t,\quad t\in ]0,T]\quad\quad Y_0=x.
\end{eqnarray}
The next theorem provides a similar representation for the approximated It\^o equation (\ref{approx ito}) in terms of a suitable approximated Stratonovich equation. The proof can be found in Section 4.
\begin{theorem}\label{main theorem 3}
For any $\varepsilon>0$ let $\{Y_t^{\varepsilon}\}_{t\in[0,T]}$ be the unique solution of
\begin{eqnarray*}
\frac{dY_t^{\varepsilon}}{dt}=b(t,Y_t^{\varepsilon})+\sigma(t)Y_t^{\varepsilon}\diamond\frac{dB_t^{\varepsilon}}{dt},\quad
Y_0^{\varepsilon}=x.
\end{eqnarray*}
Then, for any $t\in [0,T]$ we have
\begin{eqnarray}\label{ito}
Y_{t}^{\varepsilon}=T_{-K_{\varepsilon}(t,\cdot)}S_{t}^{\varepsilon},
\end{eqnarray}
where $\{S_{t}^{\varepsilon}\}_{t\in[0,T]}$ is the unique solution of
\begin{eqnarray*}
\frac{dS_{t}^{\varepsilon}}{dt}=b(t,S_{t}^{\varepsilon})+\frac{1}{2}\frac{d|K_{\varepsilon}(t,\cdot)|^{2}}{dt}\cdot S_{t}^{\varepsilon}+ \sigma(t)S_{t}^{\varepsilon}\cdot\frac{dB_{t}^{\varepsilon}}{dt},\quad S_{0}^{\varepsilon}=x.
\end{eqnarray*}
\end{theorem}
\noindent The paper is organized as follows: Section 2 and Section 3 are devoted to the proofs of Theorem \ref{main theorem 1} and Theorem \ref{main theorem 2}, respectively. Both sections also contain some preliminary results and estimates utilized in the proofs of the main results, which are divided into two major steps. Section 4 contains two different proofs of Theorem \ref{main theorem 3}, the second one being a direct verification of the identity (\ref{ito}).
\section{Proof of Theorem \ref{main theorem 1}}
\subsection{Auxiliary results and remarks: Stratonovich case}
The proof of Theorem \ref{main theorem 1} will be carried out for the simplified equation where $b$ does not depend on $t$ and $\sigma$ is identically equal to one. Straightforward modifications lead to the general case.\\ \noindent The existence and uniqueness for the solutions of (\ref{approx stra}) and (\ref{stra SDE}) follow, in view of Assumption \ref{assumption on K} and Assumption \ref{assumption on b and sigma}, from standard results in the theory of stochastic and ordinary differential equations. We also refer the reader to Theorem 5.5 in \cite{DLS 2013} for a proof using the techniques adopted in this paper. To ease the notation we define
\begin{eqnarray*}
E_{\varepsilon}(t):=\exp\{\delta(-K_{\varepsilon}(t,\cdot))\}\quad\mbox{ and }\quad E_{0}(t):=\exp\{\delta(-1_{[0,t]}(\cdot))\}.
\end{eqnarray*}
Here and in the sequel the symbol $\delta(f)$ stands for $\int_0^Tf(s)dB_s$. We begin by observing that (see the proof of Theorem 5.5 in \cite{DLS 2013}) the solution $\{X_t^{\varepsilon}\}_{t\in[0,T]}$ from Theorem \ref{main theorem 1} can be represented as
\begin{eqnarray*}
X_t^{\varepsilon}=Z_t^{\varepsilon}\cdot E_{\varepsilon}(t)^{-1}
\end{eqnarray*}
where
\begin{eqnarray*}
\frac{dZ_t^{\varepsilon}}{dt}=b(Z_t^{\varepsilon}\cdot E^{-1}_{\varepsilon}(t))\cdot E_{\varepsilon}(t),\quad Z_0^{\varepsilon}=x.
\end{eqnarray*}
The same holds true for $\{X_t\}_{t\in[0,T]}$; more precisely,
\begin{eqnarray*}
X_t=Z_t\cdot E_{0}(t)^{-1}
\end{eqnarray*}
where
\begin{eqnarray*}
\frac{dZ_t}{dt}=b(Z_t\cdot E^{-1}_{0}(t))\cdot E_{0}(t),\quad Z_0=x.
\end{eqnarray*}
Moreover, we have the estimates
\begin{eqnarray*}
|Z_t^{\varepsilon}|&\leq&|x|+\int_0^t|b(Z_s^{\varepsilon}\cdot E_{\varepsilon}(s)^{-1})\cdot E_{\varepsilon}(s)|ds\\
&\leq&|x|+\int_0^tC_2\left(1+|Z_s^{\varepsilon}\cdot E_{\varepsilon}(s)^{-1}|\right)\cdot E_{\varepsilon}(s)ds\\
&=&|x|+\int_0^tC_2E_{\varepsilon}(s)ds+\int_0^tC_2|Z_s^{\varepsilon}|ds\\
&\leq& |x|+\int_0^TC_2E_{\varepsilon}(s)ds+\int_0^tC_2|Z_s^{\varepsilon}|ds.
\end{eqnarray*}
By the Gronwall inequality,
\begin{eqnarray*}
|Z_t^{\varepsilon}|\leq\left(|x|+\int_0^T C_2E_{\varepsilon}(s)ds \right)e^{C_2t}.
\end{eqnarray*}
This shows that for any $q\geq 1$ we have the bound
\begin{eqnarray}\label{estimate norm of Z}
\left\|\sup_{t\in [0,T]}|Z_t^{\varepsilon}|\right\|_q&\leq&\left(|x|+\int_0^T C_2\Vert E_{\varepsilon}(s)\Vert_qds \right)e^{C_2T}\nonumber\\
&=&\left(|x|+\int_0^T C_2\exp\left\{\frac{q}{2}|K_{\varepsilon}(s,\cdot)|^2\right\}ds \right)e^{C_2T}\nonumber\\
&\leq&\left(|x|+C_2 T\exp\left\{\frac{q}{2}\sup_{s\in[0,T]}|K_{\varepsilon}(s,\cdot)|^2\right\} \right)e^{C_2T}.
\end{eqnarray}
\noindent To prove Theorem \ref{main theorem 1} we need the following estimate which is of independent interest.
\begin{proposition}\label{distance exponentials}
Let $f,g\in L^2([0,T])$. Then, for any $p\geq 1$ we have
\begin{eqnarray*}
\Vert\exp\{\delta(f)\}-\exp\{\delta(g)\}\Vert_p\leq C\mathcal{S}_p(|f-g|)
\end{eqnarray*}
where
\begin{eqnarray*}
\mathcal{S}_p(\lambda):=\lambda\exp\left\{p\lambda^2\right\}+\exp\{\lambda^2/2\}-1,\quad \lambda\in\mathbb{R}
\end{eqnarray*}
and $C$ is a constant depending on $p$ and $|g|$.
\end{proposition}
\begin{proof}
The proof involves a few notions of Malliavin calculus. We refer the reader to the books of Nualart \cite{Nualart} and Bogachev \cite{Bogachev}. Let $f\in L^2([0,T])$ and $p\geq 1$; then, according to the Poincar\'e inequality (see Theorem 5.5.11 in Bogachev \cite{Bogachev}), we can write
\begin{eqnarray*}
\Vert\exp\{\delta(f)\}-1\Vert_p&\leq&\Vert\exp\{\delta(f)\}-E[\exp\{\delta(f)\}]\Vert_p+|E[\exp\{\delta(f)\}]-1|\\
&=&\Vert\exp\{\delta(f)\}-\exp\{|f|^2/2\}\Vert_p+\exp\{|f|^2/2\}-1\\
&\leq&\mathcal{C}(p)\left\| |D\exp\{\delta(f)\}|_{L^2([0,T])}\right\|_p+\exp\{|f|^2/2\}-1\\
&=&\mathcal{C}(p)\left\| |\exp\{\delta(f)\}f|_{L^2([0,T])}\right\|_p+\exp\{|f|^2/2\}-1\\
&=&\mathcal{C}(p)|f|\Vert\exp\{\delta(f)\}\Vert_p+\exp\{|f|^2/2\}-1\\
&=&\mathcal{C}(p)|f|\exp\left\{\frac{p}{2}|f|^2\right\}+\exp\{|f|^2/2\}-1
\end{eqnarray*}
where $D$ denotes the Malliavin derivative and $\mathcal{C}(p)$ is a positive constant depending only on $p$. Therefore, for any $f,g\in L^2([0,T])$ and $p\geq 1$ we have
\begin{eqnarray*}
\Vert\exp\{\delta(f)\}-\exp\{\delta(g)\}\Vert_p&=&\Vert\exp\{\delta(g)\}\left(\exp\{\delta(f-g)\}-1\right)\Vert_p\\
&\leq&\Vert\exp\{\delta(g)\}\Vert_{2p}\Vert\exp\{\delta(f-g)\}-1\Vert_{2p}\\
&\leq&e^{p|g|^2}\left(\mathcal{C}(2p)|f-g|\exp\left\{p|f-g|^2\right\}+\exp\{|f-g|^2/2\}-1\right)\\
&\leq&C\mathcal{S}_p(|f-g|)
\end{eqnarray*}
where we utilized the H\"older inequality.
\end{proof}
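The proof above rests on the Gaussian moment identities $E[\exp\{\delta(f)\}]=\exp\{|f|^2/2\}$ and $\Vert\exp\{\delta(f)\}\Vert_p=\exp\{\frac{p}{2}|f|^2\}$, both consequences of $\delta(f)\sim N(0,|f|^2)$. A Monte Carlo sanity check (illustrative; names are hypothetical; here $|f|^2=1/4$ and $p=2$):

```python
import numpy as np

def exponential_moments(sigma2=0.25, n=400000, seed=7):
    """For f in L^2([0,T]) the Wiener integral delta(f) is N(0, |f|^2), so
    E[exp(delta(f))] = exp(|f|^2/2) and ||exp(delta(f))||_2 = exp(|f|^2).
    Estimate both quantities by Monte Carlo with |f|^2 = sigma2."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, np.sqrt(sigma2), n)       # samples of delta(f)
    m1 = float(np.mean(np.exp(z)))                # estimates exp(sigma2/2)
    p2 = float(np.mean(np.exp(2.0 * z)) ** 0.5)   # estimates exp(sigma2)
    return m1, p2
```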
\subsection{Proof of Theorem \ref{main theorem 1}}
The proof is divided into two steps.\\
\noindent \textbf{Step one}: We prove that for any $p\geq 1$ there exists a positive constant $C$ (depending on $p$, $|x|$, $T$, $C_1$, $C_2$ and $M$) such that for any $q$ greater than $p$
\begin{eqnarray}\label{inequality step 1}
\left\|\sup_{t\in [0,T]} |Z_t^{\varepsilon}-Z_t|\right\|_p\leq C\cdot\mathcal{S}_{q}\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right)
\end{eqnarray}
We begin by using the equations solved by $Z_t^{\varepsilon}$ and $Z_t$ and the assumptions on $b$ to get
\begin{eqnarray*}
|Z_t^{\varepsilon}-Z_t|&=&\Big|\int_0^tb(Z_s^{\varepsilon}E_{\varepsilon}(s)^{-1})E_{\varepsilon}(s)ds-\int_0^tb(Z_sE_0(s)^{-1})E_0(s)ds\Big|\\
&\leq&\Big|\int_0^tb(Z_s^{\varepsilon}E_{\varepsilon}(s)^{-1})E_{\varepsilon}(s)-b(Z_sE_0(s)^{-1})E_{\varepsilon}(s)ds\Big|\\
&&+\Big|\int_0^tb(Z_sE_0(s)^{-1})E_{\varepsilon}(s)-b(Z_sE_0(s)^{-1})E_0(s)ds\Big|\\
&\leq&\int_0^t|b(Z_s^{\varepsilon}E_{\varepsilon}(s)^{-1})-b(Z_sE_0(s)^{-1})|E_{\varepsilon}(s)ds\\
&&+\int_0^t|b(Z_sE_0(s)^{-1})||E_{\varepsilon}(s)-E_0(s)|ds\\
&\leq&\int_0^tC_1|Z_s^{\varepsilon}E_{\varepsilon}(s)^{-1}-Z_sE_0(s)^{-1}|E_{\varepsilon}(s)ds\\
&&+\int_0^tC_2(1+|Z_sE_0(s)^{-1}|)|E_{\varepsilon}(s)-E_0(s)|ds\\
&\leq&C_1\int_0^t|Z_s^{\varepsilon}E_{\varepsilon}(s)^{-1}-Z_sE_{\varepsilon}(s)^{-1}|E_{\varepsilon}(s)+|Z_sE_{\varepsilon}(s)^{-1}-Z_sE_0(s)^{-1}|E_{\varepsilon}(s)ds\\
&&+C_2\int_0^t(1+|Z_s|E_0(s)^{-1})|E_{\varepsilon}(s)-E_0(s)|ds\\
&=&C_1\int_0^t|Z_s^{\varepsilon}-Z_s|ds+C_1\int_0^t|Z_s||E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}|E_{\varepsilon}(s)ds\\
&&+C_2\int_0^t(1+|Z_s|E_0(s)^{-1})|E_{\varepsilon}(s)-E_0(s)|ds\\
&\leq&C_1\int_0^t|Z_s^{\varepsilon}-Z_s|ds+C_1\int_0^T|Z_s||E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}|E_{\varepsilon}(s)ds\\
&&+C_2\int_0^T(1+|Z_s|E_0(s)^{-1})|E_{\varepsilon}(s)-E_0(s)|ds\\
&=&\Lambda_{\varepsilon}+C_1\int_0^t|Z_s^{\varepsilon}-Z_s|ds
\end{eqnarray*}
where
\begin{eqnarray*}
\Lambda_{\varepsilon}&:=&C_1\int_0^T|Z_s||E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}|E_{\varepsilon}(s)ds\\
&&+C_2\int_0^T(1+|Z_s|E_0(s)^{-1})|E_{\varepsilon}(s)-E_0(s)|ds.
\end{eqnarray*}
By the Gronwall inequality we deduce that
\begin{eqnarray*}
|Z_t^{\varepsilon}-Z_t|\leq\Lambda_{\varepsilon}e^{C_1t},\quad t\in[0,T]
\end{eqnarray*}
and hence for $p\geq 1$ the inequality
\begin{eqnarray*}
\left\|\sup_{t\in [0,T]} |Z_t^{\varepsilon}-Z_t|\right\|_p\leq e^{C_1T}\Vert\Lambda_{\varepsilon}\Vert_p.
\end{eqnarray*}
We now estimate $\Vert\Lambda_{\varepsilon}\Vert_p$ by writing $\Lambda_{\varepsilon}=\Lambda_{\varepsilon}^1+\Lambda_{\varepsilon}^2$ where
\begin{eqnarray*}
\Lambda_{\varepsilon}^1:=C_1\int_0^T|Z_s||E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}|E_{\varepsilon}(s)ds
\end{eqnarray*}
and
\begin{eqnarray*}
\Lambda_{\varepsilon}^2:=C_2\int_0^T(1+|Z_s|E_0(s)^{-1})|E_{\varepsilon}(s)-E_0(s)|ds.
\end{eqnarray*}
Applying the triangle and H\"{o}lder inequalities we get
\begin{eqnarray*}
\Vert\Lambda_{\varepsilon}^1\Vert_p&\leq&C_1\int_0^T\Vert|Z_s||E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}|E_{\varepsilon}(s)\Vert_pds\\
&\leq&C_1\int_0^T\Vert
Z_s\Vert_{p_1}\Vert E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}\Vert_{p_2}\Vert
E_{\varepsilon}(s)\Vert_{p_3}ds
\end{eqnarray*}
where $p_1, p_2, p_3\in [1,+\infty[$ satisfy $\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{p_3}=\frac{1}{p}$. From the estimate (\ref{estimate norm of Z}) and the identity
\begin{eqnarray*}
\Vert E_{\varepsilon}(s)\Vert_{p_3}=\exp\left\{\frac{p_3}{2}|K_{\varepsilon}(s,\cdot)|^2\right\}
\end{eqnarray*}
we can write
\begin{eqnarray*}
\Vert\Lambda_{\varepsilon}^1\Vert_p\leq C \int_0^T\Vert E_{\varepsilon}(s)^{-1}-E_0(s)^{-1}\Vert_{p_2}ds
\end{eqnarray*}
where $C$ denotes a positive constant depending on $C_1$, $C_2$, $|x|$, $T$, $p$ and $M$ (in the sequel $C$ will denote a generic constant, depending on the previously specified parameters, which may vary from one line to another). Moreover, employing Proposition \ref{distance exponentials} with $f(\cdot)=K_{\varepsilon}(s,\cdot)$ and $g(\cdot)=1_{[0,s]}(\cdot)$ we conclude that
\begin{eqnarray}\label{estimate for lambda_1}
\Vert\Lambda_{\varepsilon}^1\Vert_p&\leq& C \int_0^T\mathcal{S}_{p_2}(|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|)ds\nonumber\\
&\leq&C\cdot\mathcal{S}_{p_2}\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray}
Note that for any $p\geq 1$ the function $\lambda\mapsto\mathcal{S}_p(\lambda)$ is increasing on $[0,+\infty[$.
Let us now consider $\Lambda_{\varepsilon}^2$; if we apply one more time the triangle and H\"{o}lder inequalities, then we get
\begin{eqnarray*}
\Vert\Lambda_{\varepsilon}^2\Vert_p&\leq&C_2\int_0^T\Vert(1+|Z_s|E_0(s)^{-1})|E_{\varepsilon}(s)-E_0(s)|\Vert_pds\\
&\leq&C_2\int_0^T\Vert1+|Z_s|E_0(s)^{-1}\Vert_{q_1}\Vert E_{\varepsilon}(s)-E_0(s)\Vert_{q_2}ds\\
&\leq&C_2\int_0^T(1+\Vert|Z_s|E_0(s)^{-1}\Vert_{q_1})\Vert E_{\varepsilon}(s)-E_0(s)\Vert_{q_2}ds
\end{eqnarray*}
where $q_1,q_2\in [1,+\infty[$ satisfy $\frac{1}{q_1}+\frac{1}{q_2}=\frac{1}{p}$. We observe that
\begin{eqnarray}\label{rhs}
1+\Vert|Z_s|E_0(s)^{-1}\Vert_{q_1}&\leq&1+\Vert Z_s\Vert_{r_1}\cdot\Vert E_0(s)^{-1}\Vert_{r_2}
\end{eqnarray}
where $\frac{1}{q_1}=\frac{1}{r_1}+\frac{1}{r_2}$ and that, according to estimate (\ref{estimate norm of Z}), the right hand side of (\ref{rhs}) is bounded uniformly in $s\in [0,T]$ by a constant $C$ depending on $C_1$, $C_2$, $|x|$, $T$, $p$ and $M$. Therefore,
\begin{eqnarray}\label{estimate for lambda_2}
\Vert\Lambda_{\varepsilon}^2\Vert_p&\leq& C\cdot\mathcal{S}_{q_2}\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray}
Here we utilized Proposition \ref{distance exponentials} with $f(\cdot)=-K_{\varepsilon}(s,\cdot)$ and $g(\cdot)=-1_{[0,s]}(\cdot)$. Finally, combining (\ref{estimate for lambda_1}) with (\ref{estimate for lambda_2}) we obtain
\begin{eqnarray*}
\left\|\sup_{t\in [0,T]} |Z_t^{\varepsilon}-Z_t|\right\|_p\leq C\cdot\mathcal{S}_q\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray*}
\noindent\textbf{Step two}: We prove that for any $p\geq 1$ there exists a positive constant $C$ (depending on $p$, $|x|$, $T$, $C_1$, $C_2$ and $M$) such that for any $q$ greater than $p$
\begin{eqnarray*}
\sup_{t\in [0,T]}\left\|X_t^{\varepsilon}-X_t\right\|_p\leq C\cdot\mathcal{S}_q\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray*}
We first note that
\begin{eqnarray*}
X_t^{\varepsilon}-X_t&=&Z_t^{\varepsilon}\cdot E_{\varepsilon}(t)^{-1}-Z_t\cdot E_{0}(t)^{-1}\\
&=&Z_t^{\varepsilon}\cdot E_{\varepsilon}(t)^{-1}-Z^{\varepsilon}_t\cdot E_{0}(t)^{-1}+Z_t^{\varepsilon}\cdot E_{0}(t)^{-1}-Z_t\cdot E_{0}(t)^{-1}\\
&=&Z_t^{\varepsilon}\cdot (E_{\varepsilon}(t)^{-1}-E_{0}(t)^{-1})+(Z_t^{\varepsilon}-Z_t)\cdot E_{0}(t)^{-1}.
\end{eqnarray*}
Now we take $p\geq 1$ and apply the triangle and H\"older inequalities to get
\begin{eqnarray*}
\Vert X_t^{\varepsilon}-X_t\Vert_{p}&\leq&\Vert Z_t^{\varepsilon}\Vert_{p_1}\cdot \Vert E_{\varepsilon}(t)^{-1}-E_{0}(t)^{-1}\Vert_{p_2}+\Vert Z_t^{\varepsilon}-Z_t\Vert_{q_1}\cdot \Vert E_{0}(t)^{-1}\Vert_{q_2}
\end{eqnarray*}
where $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}=\frac{1}{q_1}+\frac{1}{q_2}$. From estimate (\ref{estimate norm of Z}) we know that $\Vert Z_t^{\varepsilon}\Vert_{p_1}$ is bounded uniformly in $t\in [0,T]$ for any $p_1\geq 1$ while Proposition \ref{distance exponentials} ensures that
\begin{eqnarray*}
\Vert E_{\varepsilon}(t)^{-1}-E_{0}(t)^{-1}\Vert_{p_2}\leq C\mathcal{S}_{p_2}\left(|K_{\varepsilon}(t,\cdot)-1_{[0,t]}(\cdot)|\right)
\end{eqnarray*}
with a constant independent of $t\in [0,T]$. Moreover, inequality (\ref{inequality step 1}) from \emph{Step one} gives for $r>q_1$
\begin{eqnarray*}
\Vert Z_t^{\varepsilon}-Z_t\Vert_{q_1}\leq C\cdot\mathcal{S}_r\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray*}
These last assertions imply
\begin{eqnarray*}
\sup_{t\in [0,T]}\Vert X_t^{\varepsilon}-X_t\Vert_{p}\leq C\cdot\mathcal{S}_q\left(\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray*}
The proof is complete.
\section{Proof of Theorem \ref{main theorem 2}}
\subsection{Auxiliary results and remarks: It\^o case}
The proof of Theorem \ref{main theorem 2} will be carried out for the simplified equation where $b$ does not depend on $t$ and $\sigma$ is identically equal to one. Straightforward modifications lead to the general case. To ease the notation, we denote for $t\in [0,T]$
\begin{eqnarray*}
\mathcal{E}_{\varepsilon}(t):=\mathcal{E}(-K_{\varepsilon}(t,\cdot))=\exp\left\{\delta(-K_{\varepsilon}(t,\cdot))-\frac{1}{2}|K_{\varepsilon}(t,\cdot)|^2\right\}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{E}_{0}(t):=\mathcal{E}(-1_{[0,t]})=\exp\left\{\delta(-1_{[0,t]}(\cdot))-\frac{t}{2}\right\}.
\end{eqnarray*}
The existence and uniqueness for the solutions of (\ref{approx ito}) and (\ref{ito SDE}) can be found in Theorem 5.5 from \cite{DLS 2013}. There it was observed that the solution $\{Y_t^{\varepsilon}\}_{t\in[0,T]}$ of (\ref{approx ito}) can be represented as
\begin{eqnarray*}
Y_t^{\varepsilon}=V_t^{\varepsilon}\diamond \mathcal{E}_{\varepsilon}(t)^{\diamond -1}
\end{eqnarray*}
where
\begin{eqnarray}\label{equation for V}
\frac{dV_t^{\varepsilon}}{dt}=b\left(V_t^{\varepsilon}\diamond (\mathcal{E}_{\varepsilon}(t))^{\diamond -1}\right)\diamond \mathcal{E}_{\varepsilon}(t),\quad V_0^{\varepsilon}=x
\end{eqnarray}
while the solution $\{Y_t\}_{t\in[0,T]}$ of (\ref{ito SDE}) can be represented as
\begin{eqnarray*}
Y_t=V_t\diamond \mathcal{E}_{0}(t)^{\diamond -1}
\end{eqnarray*}
where
\begin{eqnarray}\label{equation for V 2}
\frac{dV_t}{dt}=b\left(V_t\diamond (\mathcal{E}_{0}(t))^{\diamond -1}\right)\diamond \mathcal{E}_{0}(t),\quad V_0=x.
\end{eqnarray}
Here, for $f\in L^2([0,T])$ the symbol $\mathcal{E}(f)^{\diamond -1}$ stands for the so-called \emph{Wick inverse} of $\mathcal{E}(f)$ which, in this particular case, coincides with $\mathcal{E}(-f)$. The next lemma will serve to write equations (\ref{equation for V}) and (\ref{equation for V 2}) in a Wick product-free form.
\begin{lemma}\label{ito5}
If $F\in \mathcal{L}^p(W,\mu)$ for some $p>1$ and $\Psi:\mathbb{R}\rightarrow\mathbb{R}$ is measurable and with at most linear growth at infinity, then for all $h\in L^2([0,T])$ we have:
\begin{equation*}
\Psi\left(F\diamond \mathcal{E}(h)\right)\diamond\mathcal{E}(-h)=\Psi\left(F\cdot\mathcal{E}(-h)^{-1} \right)\cdot\mathcal{E}(-h).
\end{equation*}
\end{lemma}
\begin{proof}
We apply Gjessing's lemma twice (see Holden et al. \cite{HOUZ}) to get:
\begin{eqnarray*}
\Psi \left(F\diamond \mathcal{E}(h)\right)\diamond\mathcal{E}(-h)&=& \Psi\left(T_{-h}F\cdot \mathcal{E}(h)\right)\diamond\mathcal{E}(-h)\\
&=&T_{h}\left(\Psi(T_{-h}F\cdot\mathcal{E}(h))\right)\cdot\mathcal{E}(-h)\\
&=&\Psi\left(F\cdot T_{h}\mathcal{E}(h)\right)\cdot\mathcal{E}(-h) \\
&=&\Psi\left(F\cdot\mathcal{E}(-h)^{-1}\right)\cdot\mathcal{E}(-h).
\end{eqnarray*}
Here we utilized the identities
\begin{eqnarray*}
T_{h}\mathcal{E}(h)&=&\mathcal{E}(h)\exp\left\{\int_{0}^{T}h(s)^2ds\right\}\\
&=&\exp\left\{\int_0^Th(s)dB_s+\frac{1}{2}\int_{0}^{T}h(s)^2ds\right\}\\
&=&\mathcal{E}(-h)^{-1}.
\end{eqnarray*}
The proof is complete.
\end{proof}
\noindent Therefore, by Lemma \ref{ito5} we can rewrite equation (\ref{equation for V}) as
\begin{eqnarray*}
\frac{dV_t^{\varepsilon}}{dt}=b\left(V_t^{\varepsilon}\cdot(\mathcal{E}_{\varepsilon}(t))^{-1}\right)\cdot \mathcal{E}_{\varepsilon}(t)
\end{eqnarray*}
and equation (\ref{equation for V 2}) as
\begin{eqnarray*}
\frac{dV_t}{dt}=b\left(V_t\cdot(\mathcal{E}_{0}(t))^{-1}\right)\cdot \mathcal{E}_{0}(t)
\end{eqnarray*}
since, as we mentioned before,
\begin{eqnarray*}
\mathcal{E}_{\varepsilon}(t)^{\diamond -1}=\mathcal{E}(K_{\varepsilon}(t,\cdot))\quad\mbox{ and }\quad
\mathcal{E}_{0}(t)^{\diamond -1}=\mathcal{E}(1_{[0,t]}(\cdot)).
\end{eqnarray*}
The following two propositions are the counterparts, for stochastic exponentials, of Proposition \ref{distance exponentials}.
\begin{proposition}\label{distance stochastic exponentials}
Let $f,g\in L^2([0,T])$. Then, for any $p\geq 1$ we have
\begin{eqnarray*}
\Vert\mathcal{E}(f)-\mathcal{E}(g)\Vert_p\leq C\cdot\mathcal{S}_p(|f-g|)
\end{eqnarray*}
where, as before,
\begin{eqnarray*}
\mathcal{S}_p(\lambda)=\lambda\exp\left\{p\lambda^2\right\}+\exp\{\lambda^2/2\}-1,\quad \lambda\in\mathbb{R}
\end{eqnarray*}
and $C$ is a constant depending on $p$ and $|g|$.
\end{proposition}
\begin{proof}
Let $f\in L^2([0,T])$ and $p\geq 1$; then, according to the Poincar\'e inequality (see Theorem 5.5.11 in Bogachev \cite{Bogachev}), we can write
\begin{eqnarray*}
\Vert\mathcal{E}(f)-1\Vert_p&\leq&\mathcal{C}(p)\left\| |D\mathcal{E}(f)|_{L^2([0,T])}\right\|_p\\
&=&\mathcal{C}(p)\left\| |\mathcal{E}(f)f|_{L^2([0,T])}\right\|_p\\
&=&\mathcal{C}(p)|f|\Vert\mathcal{E}(f)\Vert_p\\
&=&\mathcal{C}(p)|f|\exp\left\{\frac{p-1}{2}|f|^2\right\}
\end{eqnarray*}
where $D$ denotes the Malliavin derivative and $\mathcal{C}(p)$ is a positive constant depending only on $p$. Therefore, for any $f,g\in L^2([0,T])$ and $p\geq 1$ we have
\begin{eqnarray*}
\Vert\mathcal{E}(f)-\mathcal{E}(g)\Vert_p&=&\Vert\mathcal{E}(g)\diamond\left(\mathcal{E}(f-g)-1\right)\Vert_p\\
&\leq&\Vert\mathcal{E}(\sqrt{2}g)\Vert_{p}\Vert\mathcal{E}(\sqrt{2}(f-g))-1\Vert_{p}\\
&\leq&e^{(p-1)|g|^2}\mathcal{C}(p)|f-g|\exp\left\{(p-1)|f-g|^2\right\}\\
&\leq&C\mathcal{S}_p(|f-g|)
\end{eqnarray*}
where we utilized an inequality for the Wick product from Da Pelo et al. \cite{DLS 2011}.
\end{proof}
\begin{proposition}\label{distance inverse stochastic exponentials}
Let $f,g\in L^2([0,T])$. Then, for any $p\geq 1$ we have
\begin{eqnarray*}
\Vert\mathcal{E}(f)^{-1}-\mathcal{E}(g)^{-1}\Vert_p\leq C\cdot\mathcal{S}_p(\sqrt{2}|f-g|)
\end{eqnarray*}
where $C$ is a constant depending on $p$ and $|g|$.
\end{proposition}
\begin{proof}
Denote by $\Gamma(1/\sqrt{2})$ the bounded linear operator acting on stochastic exponentials according to the prescription
\begin{eqnarray*}
\Gamma(1/\sqrt{2})\mathcal{E}(f):=\mathcal{E}(f/\sqrt{2}).
\end{eqnarray*}
This operator coincides with the Ornstein-Uhlenbeck semigroup $\{P_t\}_{t\geq 0}$ for a proper choice of the parameter $t$ (see Janson \cite{J} for details) and therefore it is a contraction on any $\mathcal{L}^p(W,\mu)$ for $p\geq 1$. Moreover, by a direct verification one can see that
\begin{eqnarray}\label{anti wick}
\mathcal{E}(f)^{-1}=\Gamma(1/\sqrt{2})\exp\{-\delta(\sqrt{2}f)\}.
\end{eqnarray}
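Indeed, since $\exp\{-\delta(\sqrt{2}f)\}=\mathcal{E}(-\sqrt{2}f)\exp\{|f|^2\}$ and the operator $\Gamma(1/\sqrt{2})$ leaves constants invariant, we obtain
\begin{eqnarray*}
\Gamma(1/\sqrt{2})\exp\{-\delta(\sqrt{2}f)\}&=&e^{|f|^2}\mathcal{E}(-f)\\
&=&\exp\left\{-\delta(f)-\frac{1}{2}|f|^2+|f|^2\right\}\\
&=&\exp\left\{-\delta(f)+\frac{1}{2}|f|^2\right\}=\mathcal{E}(f)^{-1}.
\end{eqnarray*}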
Hence, we can write
\begin{eqnarray*}
\Vert\mathcal{E}(f)^{-1}-\mathcal{E}(g)^{-1}\Vert_p&=&\Vert\Gamma(1/\sqrt{2})\exp\{-\delta(\sqrt{2}f)\}-
\Gamma(1/\sqrt{2})\exp\{-\delta(\sqrt{2}g)\}\Vert_p\\
&\leq&\Vert\exp\{-\delta(\sqrt{2}f)\}-\exp\{-\delta(\sqrt{2}g)\}\Vert_p.
\end{eqnarray*}
Therefore, by means of Proposition \ref{distance exponentials} we can conclude that
\begin{eqnarray*}
\Vert\mathcal{E}(f)^{-1}-\mathcal{E}(g)^{-1}\Vert_p&\leq&\Vert\exp\{-\delta(\sqrt{2}f)\}-
\exp\{-\delta(\sqrt{2}g)\}\Vert_p\\
&\leq&C\cdot\mathcal{S}_p(\sqrt{2}|f-g|).
\end{eqnarray*}
\end{proof}
\begin{remark}
The idea of the proof of the previous proposition, and in particular identity (\ref{anti wick}), is inspired by the investigation carried out in Da Pelo and Lanconelli \cite{DL}, where a new probabilistic representation for the solution of the heat equation is derived in terms of the operator $\Gamma(1/\sqrt{2})$ and its inverse.
\end{remark}
\subsection{Proof of Theorem \ref{main theorem 2}}
As before, we divide the proof into two steps.\\
\noindent \textbf{Step one}: We prove that for any $p\geq 1$ there exists a positive constant $C$ (depending on $p$, $|x|$, $T$, $C_1$, $C_2$ and $M$) such that for any $q$ greater than $p$
\begin{eqnarray}\label{inequality step 1 ito}
\left\|\sup_{t\in [0,T]} |V_t^{\varepsilon}-V_t|\right\|_p\leq C\cdot\mathcal{S}_q\left(\sqrt{2}\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray}
The proof can be carried out along the same lines as \emph{Step one} of the proof of Theorem \ref{main theorem 1}; we simply have to replace $\{Z_t\}_{t\in [0,T]}$ and $\{Z^{\varepsilon}_t\}_{t\in [0,T]}$ with $\{V_t\}_{t\in [0,T]}$ and $\{V^{\varepsilon}_t\}_{t\in [0,T]}$, respectively. Moreover, the exponentials $\{E_{\varepsilon}(t)\}_{t\in [0,T]}$ and $\{E_{0}(t)\}_{t\in [0,T]}$ have to be replaced by $\{\mathcal{E}_{\varepsilon}(t)\}_{t\in [0,T]}$ and $\{\mathcal{E}_{0}(t)\}_{t\in [0,T]}$, respectively.
The estimate (\ref{estimate norm of Z}) changes to
\begin{eqnarray*}
\left\|\sup_{t\in [0,T]}|V_t^{\varepsilon}|\right\|_q&\leq&\left(|x|+C_2 T\exp\left\{\frac{q-1}{2}\sup_{s\in[0,T]}|K_{\varepsilon}(s,\cdot)|^2\right\} \right)e^{C_2T}.
\end{eqnarray*}
We remark that for all $r\geq 1$ we have
\begin{eqnarray*}
\Vert E_{\varepsilon}(t)\Vert_r=\exp\left\{\frac{r}{2}|K_{\varepsilon}(t,\cdot)|^2\right\}
\end{eqnarray*}
while
\begin{eqnarray*}
\Vert \mathcal{E}_{\varepsilon}(t)\Vert_r=\exp\left\{\frac{r-1}{2}|K_{\varepsilon}(t,\cdot)|^2\right\}\quad\mbox{ and }\quad\Vert\mathcal{E}_{\varepsilon}(t)^{-1}\Vert_r=\exp\left\{\frac{r+1}{2}|K_{\varepsilon}(t,\cdot)|^2\right\}.
\end{eqnarray*}
Moreover, we utilize Proposition \ref{distance stochastic exponentials} and Proposition \ref{distance inverse stochastic exponentials} with $f(\cdot)=K_{\varepsilon}(s,\cdot)$ and $g(\cdot)=1_{[0,s]}(\cdot)$ instead of Proposition \ref{distance exponentials}. \\
\noindent\textbf{Step two}: We prove that for any $p\geq 1$ there exists a positive constant $C$ (depending on $p$, $|x|$, $T$, $C_1$, $C_2$ and $M$) such that for any $q$ greater than $p$
\begin{eqnarray*}
\sup_{t\in [0,T]}\left\|Y_t^{\varepsilon}-Y_t\right\|_p\leq C\cdot\mathcal{S}_q\left(\sqrt{2}\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray*}
We first note that
\begin{eqnarray*}
Y_t^{\varepsilon}-Y_t=V_t^{\varepsilon}\diamond\mathcal{E}_{\varepsilon}(t)^{\diamond -1}-V_t\diamond\mathcal{E}_{0}(t)^{\diamond -1}.
\end{eqnarray*}
To ease the readability of the formulas we will adopt, only for this part of the proof, the notation
\begin{eqnarray*}
\tilde{\mathcal{E}}_{\varepsilon}(t):=\mathcal{E}_{\varepsilon}(t)^{\diamond -1}\quad\mbox{ and }\quad\tilde{\mathcal{E}}_{0}(t):=\mathcal{E}_{0}(t)^{\diamond -1}.
\end{eqnarray*}
Then, by means of Gjessing's lemma we have
\begin{eqnarray*}
Y_t^{\varepsilon}-Y_t&=&V_t^{\varepsilon}\diamond\tilde{\mathcal{E}}_{\varepsilon}(t)-
V_t\diamond\tilde{\mathcal{E}}_{0}(t)\\
&=&V_t^{\varepsilon}\diamond\tilde{\mathcal{E}}_{\varepsilon}(t)-V_t^{\varepsilon}\diamond\tilde{\mathcal{E}}_{0}(t)
+V_t^{\varepsilon}\diamond\tilde{\mathcal{E}}_{0}(t)-V_t\diamond\tilde{\mathcal{E}}_{0}(t)\\
&=&T_{-K_{\varepsilon}(t,\cdot)}V_t^{\varepsilon}\cdot\tilde{\mathcal{E}}_{\varepsilon}(t)-
T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\cdot\tilde{\mathcal{E}}_{0}(t)+
\left(V_t^{\varepsilon}-V_t\right)\diamond\tilde{\mathcal{E}}_{0}(t)\\
&=&T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}\cdot\tilde{\mathcal{E}}_{\varepsilon}(t)-
T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}\cdot\tilde{\mathcal{E}}_{0}(t)
+T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}\cdot\tilde{\mathcal{E}}_{0}(t)\\
&&-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\cdot\tilde{\mathcal{E}}_{0}(t)+
\left(V_t^{\varepsilon}-V_t\right)\diamond\tilde{\mathcal{E}}_{0}(t)\\
&=&T_{-K_{\varepsilon}(t,\cdot)}V_t^{\varepsilon}\cdot\left(\tilde{\mathcal{E}}_{\varepsilon}(t)
-\tilde{\mathcal{E}}_{0}(t)\right)
+\left(T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\right)\cdot\tilde{\mathcal{E}}_{0}(t)\\
&&+T_{-1_{[0,t]}(\cdot)}\left(V_t^{\varepsilon}-V_t\right)\cdot\tilde{\mathcal{E}}_{0}(t)\\
&=&\mathcal{F}_1+\mathcal{F}_2+\mathcal{F}_3
\end{eqnarray*}
where we set
\begin{eqnarray*}
\mathcal{F}_1:=T_{-K_{\varepsilon}(t,\cdot)}V_t^{\varepsilon}\cdot\left(\tilde{\mathcal{E}}_{\varepsilon}(t)
-\tilde{\mathcal{E}}_{0}(t)\right)\quad\quad\mathcal{F}_2:=\left(T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\right)\cdot\tilde{\mathcal{E}}_{0}(t)
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{F}_3:=T_{-1_{[0,t]}(\cdot)}\left(V_t^{\varepsilon}-V_t\right)\cdot\tilde{\mathcal{E}}_{0}(t).
\end{eqnarray*}
Hence, for any $p\geq 1$ we can write
\begin{eqnarray*}
\Vert Y_t^{\varepsilon}-Y_t\Vert_p&\leq&\Vert \mathcal{F}_1\Vert_p+\Vert \mathcal{F}_2\Vert_p+\Vert \mathcal{F}_3\Vert_p.
\end{eqnarray*}
We recall (see Theorem 14.1 in Janson \cite{J}) that for any $g\in L^2([0,T])$ the linear operator $T_g$ is bounded from $\mathcal{L}^q(W,\mu)$ to $\mathcal{L}^p(W,\mu)$ for any $p<q$. Therefore, by the H\"older inequality and Proposition \ref{distance stochastic exponentials} we deduce
\begin{eqnarray*}
\Vert\mathcal{F}_1\Vert_p&=&\left\| T_{-K_{\varepsilon}(t,\cdot)}V_t^{\varepsilon}\cdot\left(\tilde{\mathcal{E}}_{\varepsilon}(t)
-\tilde{\mathcal{E}}_{0}(t)\right)\right\|_p\\
&\leq&\left\| T_{-K_{\varepsilon}(t,\cdot)}V_t^{\varepsilon}\right\|_{q_1}\cdot\left\|\tilde{\mathcal{E}}_{\varepsilon}(t)
-\tilde{\mathcal{E}}_{0}(t)\right\|_{q_2}\\
&\leq& C \left\|V_t^{\varepsilon}\right\|_r\cdot\left\|\tilde{\mathcal{E}}_{\varepsilon}(t)
-\tilde{\mathcal{E}}_{0}(t)\right\|_{q_2}\\
&\leq& C\cdot\mathcal{S}_{q_2}\left(\sqrt{2}\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right)
\end{eqnarray*}
where $p<q_1<r$, $C$ is a constant depending on the parameters appearing in the statement of the theorem and $\frac{1}{p}=\frac{1}{q_1}+\frac{1}{q_2}$. The term $\Vert\mathcal{F}_3\Vert_p$ is treated similarly with the help of inequality (\ref{inequality step 1 ito}). Let us now focus on $\Vert\mathcal{F}_2\Vert_p$. We first observe that
\begin{eqnarray*}
\Vert\mathcal{F}_2\Vert_p&=&\left\Vert \left(T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\right)\cdot\tilde{\mathcal{E}}_{0}(t) \right\Vert_p\\
&\leq&\left\Vert T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\right\Vert_q
\cdot\left\Vert\tilde{\mathcal{E}}_{0}(t)\right\Vert_r.
\end{eqnarray*}
According to Theorem 14.1 in Janson \cite{J}, the map $T_gX$ is jointly continuous in the variables $(g,X)$ from $L^2([0,T])\times\mathcal{L}^q(W,\mu)$ to $\mathcal{L}^p(W,\mu)$ for $p<q$. Therefore, the first factor in the last line of the previous inequality tends to zero as $\varepsilon\to 0^+$; however, we need to quantify the speed of this convergence. The following lemma will help us in this direction.
\begin{lemma}\label{Malliavin}
For any $X\in\mathbb{D}^{1,q}$ and $h\in L^2([0,T])$ with $|h|<\delta_0$ one has
\begin{eqnarray*}
\Vert T_hX-X\Vert_p\leq C|h|\Vert X\Vert_{\mathbb{D}^{1,q}}
\end{eqnarray*}
where $p<q$ and $C$ depends on $\delta_0$, $p$ and $q$.
\end{lemma}
\begin{proof}
Since the linear span of the stochastic exponentials is dense in $\mathcal{L}^p(W,\mu)$ and in $\mathbb{D}^{1,q}$, it suffices to prove the lemma for $X=\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)$ where $\alpha_1,...,\alpha_n\in\mathbb{R}$ and $f_1,...,f_n\in L^2([0,T])$. By the mean value theorem, for some $\theta\in [0,1]$ we can write
\begin{eqnarray*}
T_h\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)-\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)&=&
\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)\left( e^{\langle h,f_j\rangle}-1\right)\\
&=&\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)e^{\theta\langle h,f_j\rangle}\langle h,f_j\rangle\\
&=&T_{\theta h}D_h\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)
\end{eqnarray*}
where $D_h$ stands for the Malliavin derivative in the direction $h$. We now take the $\mathcal{L}^p(W,\mu)$ norm to get
\begin{eqnarray*}
\left\|T_h\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)-\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)\right\|_p&=&
\left\|T_{\theta h}D_h\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)\right\|_p\\
&\leq&C(h)\left\|D_h\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)\right\|_q\\
&\leq&C(h)|h|\left\|\left|D\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)\right|_{L^2([0,T])}\right\|_q\\
&\leq&C(h)|h|\left\|\sum_{j=1}^n\alpha_j\mathcal{E}(f_j)\right\|_{\mathbb{D}^{1,q}}.
\end{eqnarray*}
\end{proof}
\noindent We now continue the analysis of the term
\begin{eqnarray*}
\left\Vert T_{-K_{\varepsilon}(t,\cdot)}V_t^{\varepsilon}-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\right\Vert_q.
\end{eqnarray*}
It is not difficult to see that Assumption \ref{assumption on b and sigma} implies that for any $\varepsilon>0$ and $t\in[0,T]$ the random variable $V_t^{\varepsilon}$ belongs to $\mathbb{D}^{1,q}$ for all $q\geq 1$. Moreover, the $\mathbb{D}^{1,q}$-norm of $V_t^{\varepsilon}$ is bounded uniformly with respect to $\varepsilon$ (observe that $V_t$, which corresponds to the case $\varepsilon=0$, is related to an It\^o type SDE which possesses the required smoothness). Therefore,
\begin{eqnarray*}
\left\Vert T_{-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-T_{-1_{[0,t]}(\cdot)}V_t^{\varepsilon}\right\Vert_q&=&
\left\Vert T_{-1_{[0,t]}(\cdot)}\left(T_{1_{[0,t]}(\cdot)-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-V_t^{\varepsilon}\right)\right\Vert_q\\
&\leq&C\left\Vert T_{1_{[0,t]}(\cdot)-K_{\varepsilon} (t,\cdot)}V_t^{\varepsilon}-V_t^{\varepsilon}\right\Vert_r\\
&\leq&C|K_{\varepsilon} (t,\cdot)-1_{[0,t]}(\cdot)|\Vert V_t^{\varepsilon}\Vert_{\mathbb{D}^{1,r}}
\end{eqnarray*}
for $r>q$. Combining all the estimates above we conclude
\begin{eqnarray*}
\Vert Y_t^{\varepsilon}-Y_t\Vert_p&\leq&\Vert \mathcal{F}_1\Vert_p+\Vert \mathcal{F}_2\Vert_p+\Vert \mathcal{F}_3\Vert_p\\
&\leq& C\left(\mathcal{S}_q\left(\sqrt{2}\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right)+|K_{\varepsilon} (t,\cdot)-1_{[0,t]}(\cdot)|\right)\\
&\leq&C\left(\mathcal{S}_q\left(\sqrt{2}\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right)+\sup_{t\in[0,T]}|K_{\varepsilon} (t,\cdot)-1_{[0,t]}(\cdot)|\right)\\
&\leq&C\mathcal{S}_q\left(\sqrt{2}\sup_{s\in [0,T]}|K_{\varepsilon}(s,\cdot)-1_{[0,s]}(\cdot)|\right).
\end{eqnarray*}
The proof of Theorem \ref{main theorem 2} is now complete.
\section{Proof of Theorem \ref{main theorem 3}}
We first note that the solution $\{A_t\}_{t\in[0,T]}$ of
\begin{eqnarray*}
\frac{dA_t}{dt}=b(A_t)+A_t\cdot g(t),\quad A_0=x,
\end{eqnarray*}
where $g:[0,T]\rightarrow\mathbb{R}$ is a continuous function,
can be represented as
\begin{eqnarray}\label{ito5,5}
A_t=G_t\cdot\exp\left\{\int_0^tg(s)ds\right\}
\end{eqnarray}
where $\{G_t\}_{t\in[0,T]}$ solves
\begin{eqnarray}\label{ito6}
\frac{dG_t}{dt}=b\left(G_t\cdot\exp\left\{\int_0^t g(s)ds\right\}\right)\cdot\exp\left\{-\int_0^t g(s) ds\right\}.
\end{eqnarray}
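Representation (\ref{ito5,5}) can be verified directly: differentiating and using (\ref{ito6}) (with $G_0=x$, so that $A_0=x$) we get
\begin{eqnarray*}
\frac{dA_t}{dt}&=&\frac{dG_t}{dt}\cdot\exp\left\{\int_0^tg(s)ds\right\}+G_t\cdot g(t)\cdot\exp\left\{\int_0^tg(s)ds\right\}\\
&=&b(A_t)+A_t\cdot g(t).
\end{eqnarray*}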
Moreover, recalling the argument from the previous section, we know that the solution $\{Y_t^{\varepsilon}\}_{t\in[0,T]}$ of (\ref{approx ito}) can be represented as
\begin{eqnarray*}
Y_t^{\varepsilon}=V_t^{\varepsilon}\diamond \mathcal{E}_{\varepsilon}(t)^{\diamond -1}
\end{eqnarray*}
where
\begin{eqnarray*}
\frac{dV_t^{\varepsilon}}{dt}=b\left(V_t^{\varepsilon}\cdot(\mathcal{E}_{\varepsilon}(t))^{-1}\right)\cdot \mathcal{E}_{\varepsilon}(t).
\end{eqnarray*}
Since by definition
\begin{eqnarray*}
(\mathcal{E}_{\varepsilon}(t))^{-1}&=&\exp\left\{\int_0^TK_{\varepsilon}(t,s)dB_s+\frac{1}{2}|K_{\varepsilon}(t,\cdot)|^2\right\}\\
&=&\exp\left\{B_t^{\varepsilon}+\frac{1}{2}|K_{\varepsilon}(t,\cdot)|^2\right\}\\
&=&\exp\left\{\int_0^t\left(\frac{d B_s^{\varepsilon}}{ds}+\frac{1}{2}\frac{d|K_{\varepsilon}(s,\cdot)|^2}{ds}\right)ds\right\}
\end{eqnarray*}
a comparison with (\ref{ito5,5}) and (\ref{ito6}) shows that, by choosing
\begin{eqnarray*}
g(t)=\frac{1}{2}\frac{d}{dt}|K_{\varepsilon}(t,\cdot)|^{2}+\frac{dB_{t}^{\varepsilon}}{dt},
\end{eqnarray*}
we can write
\begin{equation*}
V_{t}^{\varepsilon}=S_{t}^{\varepsilon}\cdot\mathcal{E}_{\varepsilon}(t)
\end{equation*}
where $\{S^{\varepsilon}_t\}_{t\in[0,T]}$ is the process defined in the statement of Theorem \ref{main theorem 3}. Therefore,
\begin{eqnarray*}
Y^{\varepsilon}_t&=&V_{t}^{\varepsilon}\diamond\mathcal{E}_{\varepsilon}(t)^{\diamond -1}\\
&=&\left(S_{t}^{\varepsilon}\cdot\mathcal{E}_{\varepsilon}(t)\right)\diamond \mathcal{E}_{\varepsilon}(t)^{\diamond -1}\\
&=& T_{-K_{\varepsilon}(t,\cdot)}\left(S_{t}^{\varepsilon}\cdot\mathcal{E}_{\varepsilon}(t)\right)\cdot \mathcal{E}_{\varepsilon}(t)^{\diamond -1}\\
&=&T_{-K_{\varepsilon}(t,\cdot)}S_{t}^{\varepsilon}\cdot T_{-K_{\varepsilon}(t,\cdot)}\mathcal{E}_{\varepsilon}(t)\cdot \mathcal{E}_{\varepsilon}(t)^{\diamond -1}\\
&=& T_{-K_{\varepsilon}(t,\cdot)}S_{t}^{\varepsilon}\cdot\exp\left\{-\int_{0}^{T}K_{\varepsilon}(t,s)dB_{s}-
\frac{1}{2}|K_{\varepsilon}(t,\cdot)|^{2}+|K_{\varepsilon}(t,\cdot)|^{2}\right\}\cdot \mathcal{E}_{\varepsilon}(t)^{\diamond -1}\\
&=&T_{-K_{\varepsilon}(t,\cdot)}S_{t}^{\varepsilon}.
\end{eqnarray*}
Here, in the third equality, we utilized Gjessing's lemma. The proof of Theorem \ref{main theorem 3} is complete.
\subsection{Alternative proof}
We are now going to prove a technical result of independent interest that will be used to obtain a different and more direct proof of Theorem \ref{main theorem 3}.
\begin{proposition}\label{îto9}
Let $\{X_t\}_{t\in[0,T]}$ be a stochastic process such that:
\begin{itemize}
\item the function $t\mapsto X_t$ is differentiable
\item the random variable $X_t$ belongs to $\mathcal{L}^p(W,\mu)$ for some $p>1$ and all $t\in [0,T]$.
\end{itemize}
If the function $h:[0,T]^2\rightarrow\mathbb{R}$ is such that
\begin{itemize}
\item for almost all $s\in[0,T]$ the function $t\mapsto h(t,s)$ is continuously differentiable
\item for all $t\in [0,T]$ the functions $h(t,\cdot)$ and $\partial_t h(t,\cdot)$ belong to $L^2([0,T])$
\end{itemize}
then
\begin{eqnarray*}
\frac{d}{dt}(T_{h(t,\cdot)}X_t)&=&T_{h(t,\cdot)}\frac{dX_t}{dt}+T_{h(t,\cdot)}X_t \cdot\int_0^T \partial_th(t,s)dB_s\\
&&-T_{h(t,\cdot)}X_t\diamond\int_0^T\partial_th(t,s)dB_s.
\end{eqnarray*}
\end{proposition}
\begin{proof}
To simplify the notation we set
\begin{eqnarray*}
\delta(h(t,\cdot)):=\int_0^Th(t,s)dB_s\quad\mbox{ and }\quad\delta(\partial_th(t,\cdot)):=\int_0^T\partial_th(t,s)dB_s.
\end{eqnarray*}
According to Gjessing's lemma we know that
\begin{eqnarray*}
T_{h(t,\cdot)}X_t\diamond \mathcal{E}(h(t,\cdot))=X_t\cdot\mathcal{E}(h(t,\cdot))
\end{eqnarray*}
or equivalently,
\begin{eqnarray}\label{Gjessing}
T_{h(t,\cdot)}X_t=\left(X_t\cdot\mathcal{E}(h(t,\cdot))\right)\diamond\mathcal{E}(-h(t,\cdot)).
\end{eqnarray}
We now use the product rule for the Wick product to get
\begin{eqnarray*}
\frac{d}{dt}(T_{h(t,\cdot)}X_t)&=&\frac{d}{dt}\left(X_t\cdot\mathcal{E}(h(t,\cdot))\right)
\diamond\mathcal{E}(-h(t,\cdot))+(X_t\cdot\mathcal{E}(h(t,\cdot)))\diamond\frac{d}{dt}\mathcal{E}(-h(t,\cdot))\\
&=&\left(\frac{dX_t}{dt}\cdot\mathcal{E}(h(t,\cdot))\right)\diamond \mathcal{E}(-h(t,\cdot))+\left(X_t\cdot\frac{d}{dt}\mathcal{E}(h(t,\cdot))\right)\diamond\mathcal{E}(-h(t,\cdot))\\
&&+\left(X_t\cdot\mathcal{E}(h(t,\cdot))\right)\diamond\frac{d}{dt}\mathcal{E}(-h(t,\cdot))\\
&=&\left(\frac{dX_t}{dt}\cdot\mathcal{E}(h(t,\cdot))\right)\diamond\mathcal{E}(-h(t,\cdot))\\
&&+\left(X_t\cdot\mathcal{E}(h(t,\cdot))\cdot\frac{d}{dt}\left(\delta(h(t,\cdot))-\frac{1}{2}|h(t,\cdot)|^2\right)\right)
\diamond\mathcal{E}(-h(t,\cdot))\\
&&+\left(X_t\cdot\mathcal{E}(h(t,\cdot))\right)\diamond\mathcal{E}(-h(t,\cdot))
\diamond\frac{d}{dt}\delta(-h(t,\cdot)).
\end{eqnarray*}
Observe that according to identity (\ref{Gjessing}) we can write
\begin{eqnarray*}
\left(\frac{dX_t}{dt}\cdot\mathcal{E}(h(t,\cdot))\right)\diamond\mathcal{E}(-h(t,\cdot))=T_{h(t,\cdot)}\frac{dX_t}{dt}.
\end{eqnarray*}
Therefore, the last chain of equalities becomes
\begin{eqnarray*}
\frac{d}{dt}(T_{h(t,\cdot)}X_t)&=&T_{h(t,\cdot)}\frac{dX_t}{dt}\\
&&+\left(X_t\cdot\mathcal{E}(h(t,\cdot))
\cdot(\delta(\partial_th(t,\cdot))-\langle h(t,\cdot),\partial_t h(t,\cdot)\rangle)\right)
\diamond\mathcal{E}(-h(t,\cdot))\\
&&-T_{h(t,\cdot)}X_t\diamond\delta(\partial_th(t,\cdot))\\
&=&T_{h(t,\cdot)}\frac{dX_t}{dt}+T_{h(t,\cdot)}\left(X_t\cdot(\delta(\partial_th(t,\cdot))-\langle h(t,\cdot),\partial_t h(t,\cdot)\rangle)\right)\\
&&-T_{h(t,\cdot)}X_t\diamond\delta(\partial_th(t,\cdot))\\
&=&T_{h(t,\cdot)}\frac{dX_t}{dt}+T_{h(t,\cdot)}X_t\cdot T_{h(t,\cdot)}(\delta(\partial_th(t,\cdot))-\langle h(t,\cdot),\partial_t h(t,\cdot)\rangle)\\
&&-T_{h(t,\cdot)}X_t\diamond\delta(\partial_th(t,\cdot))\\
&=&T_{h(t,\cdot)}\frac{dX_t}{dt}+T_{h(t,\cdot)}X_t\cdot\delta(\partial_th(t,\cdot))
-T_{h(t,\cdot)}X_t\diamond\delta(\partial_th(t,\cdot)).
\end{eqnarray*}
The proof is complete.
\end{proof}
By means of Proposition \ref{îto9}, we are now able to prove identity (\ref{ito}) from Theorem \ref{main theorem 3} via a direct verification. More precisely, let $\{S_{t}^{\varepsilon}\}_{t\in[0,T]}$ be the process in the statement of Theorem \ref{main theorem 3}. Then, using equation (\ref{ito-stra SDE}) we get
\begin{eqnarray*}
\frac{d}{dt}T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}&=& T_{-K_\varepsilon(t,\cdot)}\frac{dS_{t}^{\varepsilon}}{dt}-T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}
\cdot\int_{0}^{T}\partial_tK_\varepsilon(t,s)dB_s\\
&&+T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\diamond\int_{0}^{T}\partial_tK_\varepsilon(t,s)dB_s\\
&=&T_{-K_\varepsilon(t,\cdot)}\left(b(S_{t}^{\varepsilon})+\frac{1}{2}\frac{d}{dt}|K_{\varepsilon}(t,\cdot)|^{2} \cdot S_{t}^{\varepsilon}+S_{t}^{\varepsilon}\cdot\frac{dB_{t}^{\varepsilon}}{dt}\right)\\
&&-T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\cdot\frac{dB_{t}^{\varepsilon}}{dt}
+T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\diamond\frac{dB_{t}^{\varepsilon}}{dt}\\
&=&b\left(T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\right)+
\frac{1}{2}\frac{d}{dt}|K_{\varepsilon}(t,\cdot)|^{2}\cdot T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\\
&&+T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\cdot\left(\frac{dB_{t}^{\varepsilon}}{dt}
-\int_0^T\partial_tK_{\varepsilon}(t,s) K_{\varepsilon}(t,s)ds\right)\\ &&-T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\cdot\frac{dB_{t}^{\varepsilon}}{dt}+T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\diamond\frac{dB_{t}^{\varepsilon}}{dt}\\
&=&b\left(T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\right)+T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\diamond\frac{dB_{t}^{\varepsilon}}{dt}.
\end{eqnarray*}
This shows that $\{T_{-K_\varepsilon(t,\cdot)}S_{t}^{\varepsilon}\}_{t\in [0,T]}$ solves equation (\ref{approx ito}).
\end{document} | math | 56,723 |
\begin{document}
\title{Modularity benefits reinforcement learning agents with competing homeostatic drives}
\begin{abstract}
The problem of balancing conflicting needs is fundamental to intelligence. Standard reinforcement learning algorithms maximize a scalar reward, which requires combining different objective-specific rewards into a single number. Alternatively, different objectives could also be combined at the level of \emph{action} value, such that specialist modules responsible for different objectives submit different action suggestions to a decision process, each based on rewards that are independent of one another. In this work, we explore the potential benefits of this alternative strategy. We investigate a biologically relevant multi-objective problem, the continual homeostasis of a set of variables, and compare a monolithic deep Q-network to a modular network with a dedicated Q-learner for each variable. We find that the modular agent: a) requires minimal exogenously determined exploration; b) has improved sample efficiency; and c) is more robust to out-of-domain perturbation.
\end{abstract}
\keywords{
modular reinforcement learning, homeostasis, conflict, multi-objective decision-making, exploration
}
\acknowledgements{This project / publication was made possible through the support of a grant from the John Templeton Foundation.}
\startmain
\section{Introduction}
Humans (and other animals) must satisfy a large set of distinct and possibly conflicting objectives. For example, we must find food, water, and shelter, socialize, maintain our temperature, reproduce, and so on. Artificial agents that need to function autonomously in natural environments may face similar problems (e.g., balancing the need to accomplish a specified goal with the need to recharge, etc.). Finding a way to balance disparate needs is thus an important challenge for intelligent agents, and can often be a source of psychological conflict in humans.
In standard reinforcement learning (RL), monolithic agents act in order to maximize a single future discounted reward \cite{sutton2018reinforcement}. The standard way to generalize this to problems with multiple objectives is scalarization. For example, in homeostatically-regulated reinforcement learning (HRRL), an agent is modelled as having separable homeostatic drives, and is rewarded based on its ability to maintain all its ``homeostats'' at their set points. This is done by \emph{combining} deviations from \emph{all} set-points into a single reward which, when maximized, minimizes homeostatic deviations overall \cite{keramati2014homeostatic}.
This approach faces several challenges typical of RL. First, to avoid settling on a sub-optimal policy, an agent must trade off exploitation of knowledge about its primary objective with some form of exogenous exploration (typically by acting randomly or according to an exploration-specific bonus). Second, sample inefficiency follows from the ``curse of dimensionality'': as environmental complexity increases, an agent must learn how exponentially more states relate to its objective. Third, RL agents tend to over-fit their environment, performing poorly out-of-domain (i.e. they are not robust to distribution shifts). In the broader context of balancing multiple objectives, reward scalarization might be undesirable if the relative importance of different objectives is unknown or variable \cite{roijers2013survey}. Finally, it is not clear that the brain itself uses a common currency to navigate such trade-offs \cite{hayden2021case}.
What is the alternative? Given that objectives may conflict with each other due to environmental constraints, and that agents only have one body with which to act, conflict must be resolved at some point between affordance and action. The monolithic solution resolves conflict at the level of reward (i.e. close to affordance). We suggest resolution could occur later; a set of modules with separate reward functions could submit action values to a decision process that selects a final action \cite{russell2003q,van2017hybrid}. This approach has the potential to address the three aforementioned challenges. Exploration might emerge naturally as a property of the system rather than having to be imposed or regulated as a separate factor, as specialist modules are ``dragged along'' by other modules when those have the ``upper hand'' on action. Modules might also have smaller sub-sets of relevant features to learn about, improving sample efficiency, and be less sensitive to distribution shifts in irrelevant features, improving robustness.
Here, we report simulations that provide evidence for benefits of such a modular approach with respect to exploration, sample efficiency, and robustness using deep RL in the context of homeostatic objectives. We construct a simple but flexible environment of homeostatic tasks and construct a deep RL implementation of the HRRL reward function. We then use this framework to quantify differences between monolithic and modular deep Q-agents, finding that modular agents seem to explore well on their own, achieve homeostasis faster, and better maintain it after distribution shift. Together, these results highlight the potential learning benefits of modular RL, while at the same time offering a framework through which psychological conflict and resolution might be better understood.
\section{Methods}
\subsection{Environment}
To study conflicting needs, we constructed a toy grid-world environment containing multiple different resources. Specifically, each location $(x,y)$ in the environment contained a vector of resources of length $N$ (i.e., there were $N$ overlaid resource maps). The spatial distribution of each individual resource was specified by a normalized 2D Gaussian with mean $\mu_x, \mu_y$ and co-variance matrix $\Sigma$ (see also Figure \ref{res}).
The agent received as perceptual input a $3\times 3$ egocentric slice of the $N$ resource maps (i.e. it could see all the resource levels at each position in its local vicinity). In addition to the resource landscape, the agent also perceived a vector $H_t = (h_{1,t}, h_{2,t},...,h_{N,t})$ consisting of $N$ internal variables, each representing the agent's homeostatic need with respect to the corresponding resource. We refer to these variables as ``internal stats'' or just ``stats'' (such as osmostat, glucostat, etc.), which we assume are independent ($h_i$ is only affected by acquisition of resource $i$) and have some desired set-point $h_i^*$ (see Figure \ref{scheme}). Set-points were fixed at $H^* = (h_{1}^*, h_{2}^*,...,h_{N}^*)$ and did not change over the course of training.
The agent could move in each of four cardinal directions and, with each step, the individual stats, $h_i$, increased by the amount of resource $i$ at the agent's next location. Additionally, each internal stat decayed at a constant rate to represent the natural depletion of internal resources over time (note: resources in the environment themselves did not deplete). Thus, if the agent discovered a location with a high level of resource for a single depleted stat, staying at that location would optimize that stat toward its set-point, however others would progressively deplete. Agents were initialized in the center of the grid, with internal stats below their set-points, and were trained for $30,000$ steps for a single episode (i.e. agents had to learn in real time as internal stats depleted). For all experiments, the internal stats started at the same level and shared the same set-points. Environmental parameters are summarized in Table \ref{hyper}.
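For concreteness, the per-step stat dynamics just described can be sketched as follows (a minimal illustration, not the code used for the experiments; the function name, the array representation, and the decay value in the example are our own assumptions):

```python
import numpy as np

def update_stats(H, local_resources, decay=0.1):
    # Each internal stat h_i increases by the amount of resource i at the
    # agent's new location and decays at a constant rate per step
    # (the resources in the environment themselves do not deplete).
    return H + local_resources - decay
```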
\begin{figure}
\caption{Environment and model schematics}
\label{res}
\label{scheme}
\label{resource}
\end{figure}
\subsection{Models}
\paragraph{Monolithic agent} We created a monolithic agent based on the deep Q network (DQN) \cite{mnih2013playing}. The agent's perceptual input was a concatenation of all local resource levels along with all internal stat levels at each time step (we used neural networks as function approximators since stats were continuous variables). Its output was 4 action logits subsequently used for $\epsilon$-greedy action selection. We used the HRRL reward function \cite{keramati2014homeostatic}, which defined the reward at each time-step $r_t$ as drive reduction, where the drive $D$ was a convex function of set-point deviations; see equation (\ref{reward}).
\begin{equation}
r_t = D(H_t) - D(H_{t+1})
\quad\text{where}\quad
D(H_t) = \sqrt[m]{\sum_{i=1}^N | h_i^* - h_{i,t} |^n }
\label{reward}
\end{equation}
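To make the reward concrete, a minimal Python rendering of equation (\ref{reward}) with the exponents $(n,m)=(4,2)$ of Table \ref{hyper} is as follows (an illustration; function names are ours):

```python
import numpy as np

def drive(stats, setpoints, n=4, m=2):
    """Drive D(H): the m-th root of the summed n-th-power
    deviations of the stats from their set-points."""
    dev = np.abs(np.asarray(setpoints, dtype=float) - np.asarray(stats, dtype=float))
    return np.sum(dev ** n) ** (1.0 / m)

def hrrl_reward(stats_t, stats_t1, setpoints, n=4, m=2):
    """Reward r_t as drive reduction between consecutive steps."""
    return drive(stats_t, setpoints, n, m) - drive(stats_t1, setpoints, n, m)
```

With four stats at set-point 5, moving every stat from 4 to 5 yields a drive reduction of $(4 \cdot 1^4)^{1/2} = 2$.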
\paragraph{Modular agent} We created a modular agent based on greatest-mass Q-learning (GmQ) \cite{russell2003q}, which consisted of a separate DQN for each of the 4 resources/stats. Here, each module had the same input as the monolithic model (i.e. the full egocentric view and all 4 stat levels), but received a separate reward $r_{i,t}$ derived from only a single stat. The reward function for the $i$th module was therefore defined as in equation (\ref{1Dr}), where drive $D$ depended on the $i$th resource only.
\begin{equation}
r_{i,t} = D(h_{i,t}) - D(h_{i,t+1})
\quad\text{where}\quad
D(h_{i,t}) = \sqrt[m]{|h_i^* - h_{i,t} |^n }
\label{1Dr}
\end{equation}
To select a single action from the suggestions of the multiple modules, we used a simple additive heuristic. We first summed Q-values for each action across modules, and then performed standard $\epsilon$-greedy action selection on the result. More specifically, if $Q_i(a)$ was the Q-value of action $a$ suggested by module $i$, greedy actions were selected as $\underset{a}{\arg\max} \sum\limits_i Q_{i}(a)$.
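The arbitration heuristic amounts to a few lines (a sketch under our naming; each row of `q_per_module` holds one module's Q-values over the 4 actions):

```python
import numpy as np

def gmq_action(q_per_module, epsilon=0.0, rng=None):
    """Greatest-mass action selection: sum Q-values across modules,
    then act epsilon-greedily on the summed values."""
    rng = rng if rng is not None else np.random.default_rng(0)
    q_sum = np.sum(q_per_module, axis=0)      # shape: (n_actions,)
    if rng.random() < epsilon:
        return int(rng.integers(len(q_sum)))  # explore
    return int(np.argmax(q_sum))              # greedy on the summed mass
```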
\paragraph{Common features} Schematics for both models are shown in Figure \ref{scheme}. All Q-networks were multi-layered perceptrons (MLP) with rectified linear nonlinearities trained using a standard temporal difference loss function with experience replay and target networks \cite{mnih2013playing}. The Adam optimizer was used to perform one gradient update on each step in the environment. For both models, $\epsilon$ was annealed linearly from its initial to final value at the beginning of training at a rate that was experimentally manipulated as described below. Hyperparameters are summarized in Table \ref{hyper}.
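The training target shared by both models can be written schematically (the actual agents use MLPs trained with Adam; here `q_net` and `q_target` are stand-in callables mapping a state to a vector of action values):

```python
import numpy as np

def td_loss(q_net, q_target, batch, gamma=0.5):
    """Mean squared temporal-difference error over a replay batch of
    (s, a, r, s_next) transitions, bootstrapping from a frozen target
    network that is only refreshed periodically."""
    errors = []
    for s, a, r, s_next in batch:
        target = r + gamma * np.max(q_target(s_next))
        errors.append((q_net(s)[a] - target) ** 2)
    return float(np.mean(errors))
```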
\begin{table}[ht]
\caption{Parameter settings for environment and models}
\centering
\begin{tabular}{l|ll|l|l}
Model & DQN & GmQ & Environment & \\
\hline
\hline
Trainable parameters & 1.09e6 & 1.09e6 & \# of resources $N$ & 4\\
MLP hidden layer units & 1024 & 500 & HRRL exponents $(n, m)$ & $(4, 2)$\\
Learning rate & 1e-3 & 1e-3 & Stat set-points $H^*$ & $(5,5,5,5)$ \\
Discount factor $\gamma$ & 0.5 & 0.5 & Initial stat levels $H_{t=0}$ & $(0.5,0.5,0.5,0.5)$ \\
Memory buffer capacity & 30k & 30k & Resource locations $\mu_x, \mu_y$ & \{0,10\},\{0,10\}\\
Target network update frequency & 200 & 200 & Resource covariance $\Sigma$ &
$\big(\begin{smallmatrix}
1 & 0\\
0 & 1
\end{smallmatrix}\big)$
\\
Batch size & 512 & 512 & Stat depletion per step & 0.004
\\
Initial $\epsilon$ & 1 & 1 \\
Final $\epsilon$ & 0.01 & 0.01 \\
\end{tabular}
\label{hyper}
\end{table}
\section{Results}
\subsection{Optimizing monolithic DQN for homeostasis}
We first characterized whether a standard DQN could reliably perform the task of homeostasis in our environment. Figure \ref{setpoint} summarizes the mean internal stat levels of 10 models averaged over all 4 stats and over the final 1k steps of training for different desired set-points. The model reliably achieved each set-point by the end of training, indicated by points tracking the identity line. The slight under-shooting (i.e. points slightly below the diagonal line) may reflect a bias from stats being initialized far below their set-points.
We then fixed the set-points $h_i^* = 5$ for all stats, and optimized the performance of the DQN baseline by performing a search over performance-relevant hyper-parameters, such as the discount factor $\gamma$. To quantify performance, we calculated the average homeostatic deviation per step $\Delta$ after the exploration annealing phase (using $t_1 = 15k$ and $t_2 = 30k$) as in equation (\ref{delta}). Lower $\Delta$ indicates better homeostatic performance.
\begin{equation}
\Delta = \frac{\sum\limits_{t=t_1}^{t_2} \sum\limits_i | h_i^* - h_{i,t} |}{t_2 - t_1}
\label{delta}
\end{equation}
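Equation (\ref{delta}) translates directly into code (illustrative; `stats_history` is assumed to be a $T \times N$ array of stat levels):

```python
import numpy as np

def homeostatic_deviation(stats_history, setpoints, t1, t2):
    """Average summed set-point deviation per step over steps
    t1 <= t < t2 (lower is better)."""
    h = np.asarray(stats_history, dtype=float)            # shape (T, N)
    dev = np.abs(np.asarray(setpoints, dtype=float) - h[t1:t2])
    return float(dev.sum() / (t2 - t1))
```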
Baseline DQN performance over a range of discount factors $\gamma$ is shown in Figure \ref{gamma}. We selected the best performing setting of $\gamma = 0.5$ and matched this and other parameters (see Table \ref{hyper}) between DQN and GmQ to compare them in the following two head-to-head experiments.
\begin{figure}
\caption{Homeostasis is achieved for a range of set-points ($n=10$)}
\label{setpoint}
\caption{Performance over a range of discount factors ($n=50$)}
\label{gamma}
\caption{DQN baseline learnability and performance; Boxplots display inter-quartile range and outliers for $n$ models}
\label{dqn}
\end{figure}
\subsection{Modularity provides an exploration benefit}
To investigate the impact of modularity on the need for exploration, we systematically varied the number of steps used to anneal $\epsilon$ in $\epsilon$-greedy exploration from its initial to final value. We varied the $\epsilon$ annealing time for both models from 1 (i.e. minimal exploration of $\epsilon = 0.01$ only) to 10k (i.e. annealing from $\epsilon = 1$ to $\epsilon = 0.01$ over 10k steps).
Figure \ref{explore} shows the results of varying the amount of exploration annealing for both models. While DQN gains incremental performance benefits from increasing periods of initial exploration, GmQ displays a striking indifference to the exploration period; with only 1 step of annealing, it performs as well or better than the best DQN models. In other words, DQN requires careful tuning of an appropriate exploration annealing period, but GmQ achieves good performance with effectively no exogenously specified exploration.
\subsection{Modularity provides robustness in the face of perturbation}
Finally, we tested how robust each model was to an out-of-domain perturbation that occurred halfway through training, i.e. at time-step 15k. At that point, a single internal stat variable ($h_4$) was clamped to a value of 20 (a value previously unseen by the network) and did not change thereafter (thus contributing no drive reduction and therefore 0 reward). We tested how well homeostasis was maintained for the remaining stats after this perturbation.
Figure \ref{perturb} shows the time-course of the four stats from the beginning of training, through the perturbation at time-step 15k, using $5000$ $\epsilon$-annealing steps. First, GmQ reaches stable homeostasis sooner, whereas DQN initially over-shoots the set-points and takes longer to stabilize. Second, when stat 4 is clamped, DQN displays a significant disturbance to the homeostasis of the remaining stats, without clear recovery, whereas GmQ is robust in the face of the perturbation.
\begin{figure}
\caption{Experiments comparing DQN and GmQ with respect to (a) exploration and (b) perturbation}
\label{explore}
\label{perturb}
\label{experiments}
\end{figure}
\section{Discussion}
We have shown that in a grid-world task with competing homeostatic drives, a simple modular agent based on greatest-mass Q-learning (GmQ) requires less hand-coded exploration, learns faster, and is more robust to environmental perturbations than a traditional monolithic deep Q-network (DQN). Our findings in the context of competing drives complement work showing that mixture-of-experts systems display improved sample efficiency and generalization \cite{jacobs1991adaptive}. We also believe the exploration benefits we observed are novel. The problem of exploration in RL is fundamental, and existing solutions make use of noise, explicit exploratory drives/bonuses, and/or other forms of auto-annealing \cite{yang2021exploration,mcclure2005exploration}. We suggest an additional class of strategies, namely, exploration as an added benefit of having multiple independent drives, since exploitation from the perspective of one module is exploration from the perspective of another. We hypothesize that the ability of modules to suggest conflicting actions may provide modular agents with an implicit source of exploration.
\paragraph{Future Work} Our toy environment has highlighted some initial benefits of modularity (exploration, sample efficiency and robustness), and we predict that these advantages will be amplified in more complex environments, or as the number of drives/objectives increases, due to the curse of dimensionality (more states to explore, learn about, or perturb). We aim to test our agents in rich 3D environments with homeostatic objectives. Next, modular drives immediately pose the problem of coordination. While we simply summed Q-values, more complex arbitrators (such as an additional RL agent that dynamically re-weights individual drives) might better exploit the benefits of modularity in the context of multiple objectives. Finally, humans experience psychological conflict, with various resolution mechanisms long described by psychodynamic theories \cite{freud2018ego}. Modular RL, with its implicit conflicts and resolutions, could, for the first time, offer a formal, computationally-explicit, and normative explanatory framework that could undergird and/or replace elements of psychodynamic theory for understanding the mechanisms responsible for conflict and resolution in the human brain.
\printbibliography
\end{document} | math | 17,617 |
\begin{document}
\setlength{\arraycolsep}{2pt}
\title{Comment on "Role of Initial Entanglement and Non-Gaussianity in the Decoherence of Photon-Number Entangled States Evolving in a Noisy Channel"}
\author{Jaehak Lee$^{1}$, M. S. Kim$^{2}$, and Hyunchul Nha$^{1,3,*}$}
\affiliation{$^1$Department of Physics, Texas A \& M University at Qatar, Doha, Qatar\\
$^2$ QOLS, Blackett Laboratory, Imperial College London, London SW7 2BW, United Kingdom\\
$^3$ School of Computational Sciences, Korea Institute for Advanced Study, Seoul, Korea}
\pacs{03.67.Mn, 03.65.Yz, 42.50.Dv}
\maketitle
In \cite{Allegra}, Allegra {\it et al.} employ several entanglement criteria, including the Simon criterion (SI), in order to provide evidence to support their conjecture that a Gaussian state remains entangled longer than a non-Gaussian state in a noisy Gaussian channel. In particular, they study the loss of entanglement for the class of photon number entangled states (PNES) $|\Psi\rangle=\sum_n\Psi_n|n\rangle|n\rangle$ in a thermal reservoir.
Here we show that their evidence is seriously flawed due to their use of entanglement criteria inappropriate for the comparison and that there exists a large class of non-Gaussian entangled states, even within the PNES, that can be more robust than Gaussian states.
The dynamics of a system under two independent Markovian reservoirs can be described by
$\dot\rho=A\sum_{i=1,2} {\cal L}[a_i]\rho+B\sum_{i=1,2} {\cal L}[a_i^\dag]\rho$,
where $A$ and $B$ denote the interaction strength leading to dissipation and amplification, respectively (${\cal L}[O]\rho\equiv 2O\rho O^\dag-O^\dag O\rho-\rho O^\dag O$).
The case of $A>B$ describes the interaction with a thermal reservoir. Using the notations in~\cite{Allegra}, $A={\Gamma \over 2}(N_T+1)$ and $B={\Gamma \over 2}N_T$ ($N_T$: thermal photon number, $\Gamma$: decay rate).
Allegra {\it et al.} found that the Simon criterion (SI) based on the symplectic eigenvalues under partial transposition (PT) is optimal for PNES among the criteria they considered. Here, in contrast to \cite{Allegra}, we employ the negativity of the density matrix under PT (NDPT) as the entanglement criterion and compare the entanglement dynamics of two particular PNES studied in \cite{Allegra}: the photon-subtracted squeezed vacuum (PSSV: non-Gaussian) and the twin-beam state (TWB: Gaussian). For numerical purposes, we checked the negativity by restricting the elements of the whole density matrix to a subspace truncated by dimension $N_{\rm tr}$. It is well-known that the negativity under PT in a subspace is sufficient to verify entanglement. In Fig. 1 (a), we show the time to lose negativity for PSSV by NDPT ($N_{\rm tr}=3$) together with the separation time of TWB by SI (analytic result). These results clearly indicate that the PSSV remains entangled longer than the TWB for either the same initial entanglement $\epsilon_0$ or the same energy (indiscernible in this case).
The failure of Allegra {\it et al.} is attributed to the wrong choice of entanglement criteria; a Markovian interaction does not create a new type of correlation, thus an initial entanglement witness remains a significant tool to verify the entanglement of the decohered state at a later time. For a given state it is usually nontrivial to identify the most efficient entanglement witness; however, it is obvious that the information on non-Gaussian entanglement is not fully contained in the covariance matrix (SI criterion). We instead checked the negativity of the decohered PNES under PT, which is a well-known tool particularly for continuous variables.
The NDPT can show the robustness of entanglement for a non-Gaussian PNES, even analytically. In \cite{Allegra}, they also studied randomly-generated PNES in a truncated basis, only to confirm their incorrect conclusion. Among these, the simplest one is $|\Phi\rangle_1=c_0|00\rangle+c_1|11\rangle$, for which the entanglement can be initially detected by the negativity in the subspace spanned by $|01\rangle$ and $|10\rangle$ under PT, which will presumably be useful also at later times. The separation time in this subspace can be obtained analytically by direct calculation. Fig. 1 (b) shows
the value of $B/A$ above which $|\Phi\rangle_1$ survives longer than the TWB with the same initial energy (blue) or entanglement (purple) as a function of $|c_1|^2$,
clearly demonstrating the failure of the evidence of Allegra {\it et al.} in a wide range of temperatures.
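The subspace negativity underlying this argument is easy to verify numerically; the following sketch (our own illustration, not the computation of \cite{Allegra}) builds $\rho=|\Phi\rangle_1\langle\Phi|_1$, takes the partial transpose, and returns its minimal eigenvalue, which equals $-|c_0c_1|$:

```python
import numpy as np

def min_eig_pt(c0, c1):
    """Minimal eigenvalue of rho^{T_B} for |Phi> = c0|00> + c1|11>,
    in the basis |00>, |01>, |10>, |11>."""
    psi = np.zeros(4, dtype=complex)
    psi[0], psi[3] = c0, c1
    rho = np.outer(psi, psi.conj())
    # partial transpose on the second mode: swap its row/column indices
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(np.linalg.eigvalsh(rho_pt)[0])
```

Any nonzero $c_0c_1$ yields the negative eigenvalue $-|c_0c_1|$ in the $\{|01\rangle,|10\rangle\}$ block, certifying entanglement.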
\begin{figure}
\caption{(a) Time to lose the negativity in PT of PSSV (solid with circles) and the separation time of TWB (dotted) with $\epsilon_0=0.1$ (purple) and $\epsilon_0=1$ (blue) (b) the value of $B/A$ above which $c_0|00\rangle+c_1|11\rangle$ survives longer than TWB. }
\end{figure}
In summary, we have shown that the evidence of Allegra {\it et al.} supporting their conjecture is derived from the wrong choice of entanglement criteria and disproved the maximal robustness of Gaussian entangled states.
Work supported by NPRP 4-554-1-084 from Qatar National Research Fund and UK EPSRC.
*[email protected]
\end{document} | math | 5,128 |
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{lemme}[theorem]{Lemma}
\newtheorem{defi}[theorem]{Definition}
\newtheorem{coro}[theorem]{Corollary}
\newtheorem{rem}[theorem]{Remark}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem*{hyp}{Assumptions}
\newcommand{\T}[2]{{#1}.{#2}}
\title[Large deviations for the contact process]{Large deviations for the contact process in random environment}
\author{Olivier Garet}
\address{Institut \'Elie Cartan Nancy (math{\'e}matiques)\\
Universit{\'e} de Lorraine\\
Campus Scientifique, BP 239 \\
54506 Vandoeuvre-l{\`e}s-Nancy Cedex France\\}
\email{[email protected]}
\author{R{\'e}gine Marchand}
\email{[email protected]}
\subjclass[2000]{60K35, 82B43.}
\keywords{Random growth, contact process, random environment, shape theorem, large deviation inequalities}
\begin{abstract}
The asymptotic shape theorem for the contact process in random environment gives the existence of a norm $\mu$ on $\mathbb{R}^d$ such that the hitting time $t(x)$ is asymptotically equivalent to $\mu(x)$ when the contact process survives. We provide here exponential upper bounds for the probability of the event $\{\frac{t(x)}{\mu(x)}\not\in [1-\varepsilon,1+\varepsilon]\}$; these bounds are optimal for an independent random environment.
As a special case, this gives the large deviation inequality for the contact process in a deterministic environment, which, as far as we know, has not been established yet.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\section{Introduction}
Durrett and Griffeath~\cite{MR656515} proved that when the contact process on $\mathbb{Z}^d$ starting from the origin survives, the set of sites occupied before time $t$ satisfies an asymptotic shape theorem, as in first-passage percolation. In~\cite{GM-contact}, we extended this result to the case of the contact process in a random environment.
The random environment is given by a collection $(\lambda_e)_{e \in \mathbb{E}^d}$ of positive random variables indexed by the set of edges of the grid $\mathbb{Z}^d$. Given a realization $\lambda$ of this environment, the contact process $(\xi^0_t)_{t\ge 0}$ in the environment $\lambda$ is a homogeneous Markov process taking its values in the set $\mathcal{P}(\mathbb{Z}^d)$ of subsets of $\mathbb{Z}^d$. If
$\xi_t^0(z)=1$, we say that $z$ is occupied at time $t$, while if $\xi_t^0(z)=0$, we say that $z$ is empty at time~$t$.
The initial value of the process is $\{0\}$ and the process evolves as follows:
\begin{itemize}
\item an occupied site becomes empty at rate $1$,
\item an empty site $z$ becomes occupied at rate:
$\displaystyle \sum_{\|z-z'\|_1=1} \xi_t^0(z')\lambda_{\{z,z'\}},$
\end{itemize}
all these evolutions being independent. We study then the hitting time $t(x)$ of a site $x$:
$$t(x)=\inf\{ t\ge 0: \; x \in \xi^0_t \}.$$
In~\cite{GM-contact}, we proved that under suitable assumptions on the random environment, there exists a norm $\mu$ on $\mathbb{R}^d$ such that for almost every environment, the family $(t(x))_{x\in\mathbb{Z}^d}$ satisfies, when $\|x\|$ goes to $+\infty$,
$$t(x)\sim \mu(x) \quad \text{on the event ``the process survives''}.$$
We focus here on the large deviations of the hitting time $t(x)$ for the contact process in random environment. As far as we know, such inequalities have not been studied yet even for the classical contact process; they will be contained in our results.
The assumptions we will require on the random environment are the ones we already needed in~\cite{GM-contact}. We denote by $\lambda_c(\mathbb{Z}^d)$ the critical intensity of the classical contact process on $\mathbb{Z}^d$, we fix $\lambda_{\min}$ and $\lambda_{\max}$ such that
$\lambda_c(\mathbb{Z}^d)<\lambda_{\min}\le \lambda_{\max}$ and we set $\Lambda=[\lambda_{\min},\lambda_{\max}]^{\mathbb{E}^d}$.
\begin{hyp}[E]
The support of the law $\nu$ of the random environment is included in $\Lambda=[\lambda_{\min},\lambda_{\max}]^{\mathbb{E}^d}$; the law $\nu$ is stationary, and if $\mathrm{Erg}(\nu)$ denotes the set of $x\in\mathbb{Z}^d \backslash \{0\}$ such that the translation along the vector $x$ is ergodic for $\nu$, then the cone generated by $\mathrm{Erg}(\nu)$ is dense in $\mathbb{R}^d$.
\end{hyp}
This last condition is obviously fulfilled if $\mathrm{Erg}(\nu)=\mathbb{Z}^d\backslash\{0\}$. We will sometimes require the following stronger assumptions:
\begin{hyp}[E'] The law $\nu$ of the random environment is a product measure:
$\nu=\nu_0^{\otimes\mathbb{E}^d}$, where $\nu_0$ is some probability measure on $[\lambda_{\min},\lambda_{\max}]$.
\end{hyp}
By taking for $\nu$ the Dirac mass~$(\delta_{\lambda})^{\otimes \mathbb{E}^d}$, with $\lambda>\lambda_c(\mathbb{Z}^d)$, which clearly fulfills these assumptions, we recover the case of the classical contact process in a deterministic environment.
For $\lambda \in \Lambda$, we denote by $\mathbb{P}_\lambda$ the (quenched) law of the contact process in environment $\lambda$, and by ${\overline{\mathbb{P}}}_\lambda$ the (quenched) law of the contact process in environment $\lambda$ conditioned to survive.
We define then the annealed probability measures $\overline{\mathbb{P}}$ and $\mathbb{P}$:
$${\overline{\mathbb{P}}}(.)=\int_\Lambda {\overline{\mathbb{P}}}_\lambda(.)\ d\nu(\lambda)\quad \text{ and }\quad {\mathbb{P}}(.)=\int_\Lambda {\mathbb{P}}_\lambda(.)\ d\nu(\lambda).$$
We will study separately the probabilities of the ``upper large deviations'' and the ``lower large deviations'', \emph{i.e. } respectively of the events $\{t(x)\ge (1+\varepsilon)\mu(x)\}$ and $\{t(x)\le (1-\varepsilon)\mu(x)\}$.
The most general result concerns the quenched ``upper large deviations'' for the hitting time $t(x)$ and the coupling time
$$t'(x)=\inf\{ T\ge 0:\;\forall t\ge T\quad \xi^0_t(x)= \xi^{\mathbb{Z}^d}_t(x)\},$$
where $(\xi^{\mathbb{Z}^d}_t)_{t \ge 0}$ is the contact process starting from $\mathbb{Z}^d$, and for the set of hit points $H_t$ and the coupled region $K'_t$:
\begin{eqnarray*}
H_t = \{x \in \mathbb{Z}^d: \; t(x) \le t\}, &&\tilde{H}_t=H_t+[0,1]^d\\
K'_t = \{x\in\mathbb{Z}^d: \; t'(x)\le t\}, &&\tilde{K}'_t=K'_t+[0,1]^d.
\end{eqnarray*}
We only require here Assumptions~$(E)$.
\begin{theorem}
\label{theoGDUQ}
Let $\nu$ be an environment law satisfying Assumptions~$(E)$. \\
For every $\varepsilon>0$, there exist $B>0$ and a random variable $A(\lambda)$ such that
for $\nu$-almost every environment $\lambda$, for every $x \in \mathbb{Z}^d$,
\begin{eqnarray}
\overline{\mathbb{P}}_{\lambda}\left(t(x)\ge\mu(x)(1+\varepsilon)\right) & \le & A(\lambda)e^{-B\|x\|}, \label{venus} \\
\overline{\mathbb{P}}_{\lambda}\left(t'(x)\ge\mu(x)(1+\varepsilon)\right) & \le & A(\lambda)e^{-B\|x\|}, \label{tcouple} \\
\overline{\mathbb{P}}_{\lambda}\left(\forall t \ge T \quad (1-\varepsilon)t A_\mu \subset \tilde{K}'_t \cap \tilde{H}_t \right) & \ge & 1-A(\lambda)e^{-BT}. \label{audessousforme}
\end{eqnarray}
\end{theorem}
We can note that the random variable $A(\lambda)$ is almost surely finite, but that it could often be large. This question will be studied in a forthcoming paper on annealed upper large deviations~\cite{GM-contact-gd-annealed}. The key point of the proof of Theorem~\ref{theoGDUQ}, interesting in its own right, is to control the times $s$ at which a site $x$ is occupied and has infinite progeny. We denote this event by $\{(0,0) \to (x,s) \to \infty\}$ by analogy with percolation.
\begin{theorem}
\label{lemme-pointssourcescontact}
There exist $C,\theta,A,B>0$ such that $\forall\lambda\in\Lambda\quad\forall x\in\mathbb{Z}^d$
$$\forall t\ge C\|x\|\quad \overline{\mathbb{P}}_{\lambda}\left(\Leb\{s \in [0,t]: (0,0)\to (x,s) \to \infty\} \le \theta t
\right)\le A\exp(-Bt). $$
\end{theorem}
For the ``lower large deviations'', subadditivity provides a convenient setting and allows us to state a large deviation principle in the spirit of Hammersley~\cite{MR0370721}.
\begin{theorem}
\label{LDP}
Let $\nu$ be an environment law satisfying Assumptions~$(E)$. \\
Let $x\in\mathbb{Z}^d$. There exist a convex function $\Psi_x$ and a concave function $K_x$ taking their values in $\mathbb{R}_+$ such that for $\nu$-almost every $\lambda$,
\begin{eqnarray*}
\forall u>0 \quad \lim_{n\to +\infty} -\frac1{n}\log \overline{\mathbb{P}}_{\lambda}(t(nx)\le nu) & = & \Psi_x(u); \\
\forall\theta\ge 0\quad\lim_{n\to +\infty} -\frac1{n}\log \overline{\mathbb{E}}_{\lambda}[e^{-\theta t(nx)} ]& = & K_x(\theta).
\end{eqnarray*}
The functions $\Psi_x$ and $K_x$ moreover satisfy the reciprocity relations:
$$\forall u>0 \quad \forall \theta\ge 0\quad \Psi_x(u)=\sup_{\theta\ge 0} \{K_x(\theta)-\theta u\}\text{ and }K_x(\theta)=\inf_{u>0} \{\Psi_x(u)+\theta u\}.$$
\end{theorem}
To obtain effective large deviation inequalities, we moreover have to prove that $\Psi_x(u)>0$ if $u<\mu(x)$. More precisely,
\begin{theorem}
\label{dessouscchouette}
Let $\nu$ be an environment law satisfying Assumptions~$(E')$. \\
For every $\varepsilon>0$, there exist $A,B>0$ such that for every $x \in \mathbb{Z}^d$, for every $t \ge 0$,
\begin{eqnarray}
\mathbb{P} (t(x)\le (1-\varepsilon)\mu(x)) & \le & A\exp(-B\|x\|), \label{defontenay}\\
\mathbb{P}(\forall s \ge t \quad H_s \subset (1+\varepsilon)s A_\mu) & \ge & 1-A\exp(-Bt)\label{decadix}.
\end{eqnarray}
\end{theorem}
The annealed large deviations inequalities imply the quenched ones: setting
$$A(\lambda)=\sum_{x\in\mathbb{Z}^d}\exp(B\|x\|/2)\mathbb{P}_{\lambda}\left(t(x)\le (1-\varepsilon)\mu(x)\right),$$
we see that $A(\lambda)$ is integrable with respect to $\nu$, and thus is $\nu$-almost surely finite. So
$$\forall x\in\mathbb{Z}^d\quad \mathbb{P}_{\lambda}(t(x)\le (1-\varepsilon)\mu(x))\le A(\lambda)\exp(-B\|x\|/2).$$
Unfortunately, we do not have a complete large deviation principle analogous to Theorem~\ref{LDP} for the upper large deviations. However,
we will see in Section~\ref{bonnevitesse} that when the environment is i.i.d., the exponential order given by these inequalities is optimal.
Asymptotic shape results for growth models are generally proved using the theory of subadditive processes initiated by Hammersley and Welsh~\cite{MR0198576}, and especially Kingman's subadditive ergodic theorem~\cite{MR0356192} and its extensions.
Since Hammersley~\cite{MR0370721}, we know that subadditive properties offer a proper setting to study the large deviation inequalities. See also the survey by Grimmett~\cite{MR814710} and the Saint-Flour course by Kingman~\cite{MR0438477}. However, as noted by Sepp{\"a}l{\"a}inen and Yukich~\cite{MR1843178}, the general theory of large deviations for subadditive processes is patchy. The best known case is first-passage percolation, studied by Grimmett and Kesten in 1984~\cite{grimmett-kesten}.
This paper introduced some lines of proof for the large deviations of growth processes, that have been reused later, for instance in the study of the large deviations for the chemical distance in Bernoulli percolation~\cite{GM-large}.
For more recent results concerning first-passage percolation, see Chow--Zhang~\cite{chow-zhang}, Cranston--Gauthier--Mountford~\cite{MR2521889}, and Théret et al~\cite{MR2464099,MR2343936,MR2610330,RT-IHP,CT-PTRF,CT-AAP,CT-TAM,RT-ESAIM}.
The renormalization techniques used by Grimmett and Kesten are well-known now: static renormalization for ``upper large deviations'' (control of a too slow growth), dynamic renormalization for ``lower large deviations'' (control of a too fast growth). However, the possibility for the contact process to die gives rise to extra difficulties that do not appear in the case of first-passage percolation or even of Bernoulli percolation. To our knowledge, the only growth process with possible extinction for which large deviations inequalities have been established is oriented percolation in dimension 2 (see Durrett~\cite{MR757768}). Note also that Proposition 20.1 in the PhD thesis of Couronné~\cite{Couronne} rules out the possibility of a too fast growth for oriented percolation in dimension $d$.
In Section 2, we construct the model, give the notation and state previous results, mainly from~\cite{GM-contact}. Section 3 is devoted to the proof of the upper large deviation inequalities, Theorem~\ref{theoGDUQ}, while lower large deviations -- Theorems~\ref{LDP} and~\ref{dessouscchouette} -- are proved in Section 4. Finally, the optimality of the exponential decrease given by these results is briefly discussed in Section 5.
\section{Preliminaries}
\subsection{Definition of the model}
Let $\lambda_{\min}$ and $\lambda_{\max}$ be fixed such that
$\lambda_c(\mathbb{Z}^d)<\lambda_{\min}\le\lambda_{\max}$, where
$\lambda_c(\mathbb{Z}^d)$ is the critical parameter for the survival of the classical contact process on $\mathbb{Z}^d$.
In the following, we restrict ourselves to the study of the contact process in random environment with birth rates $\lambda=(\lambda_e)_{e \in \mathbb{E}^d}$ in $\Lambda=[\lambda_{\min},\lambda_{\max}]^{\mathbb{E}^d}$. An environment is thus a collection $\lambda=(\lambda_e)_{e \in \mathbb{E}^d} \in \Lambda$.
Let $\lambda \in \Lambda$ be fixed. The contact process $(\xi_t)_{t\ge 0}$ in the environment $\lambda$ is a homogeneous Markov process taking its values in the set $\mathcal{P}(\mathbb{Z}^d)$ of subsets of $\mathbb{Z}^d$, which we sometimes identify with $\{0,1\}^{\mathbb{Z}^d}$: for $z \in \mathbb{Z}^d$ we also use the random variable $\xi_t(z)=1\hspace{-1.3mm}1_{\{z \in \xi_t\}}$.
If $\xi_t(z)=1$, we say that $z$ is occupied or infected, while if $\xi_t(z)=0$, we say that $z$ is empty or healthy. The evolution of the process is as follows:
\begin{itemize}
\item an occupied site becomes empty at rate $1$,
\item an empty site $z$ becomes occupied at rate
$\displaystyle \sum_{\|z-z'\|_1=1} \xi_t(z')\lambda_{\{z,z'\}},$
\end{itemize}
each of these evolutions being independent of the others. In the following, we denote by $\mathcal{D}$ the set of càdlàg functions from $\mathbb{R}_{+}$ to $\mathcal{P}(\mathbb{Z}^d)$: it is the set of trajectories for Markov processes with state space $\mathcal{P}(\mathbb{Z}^d)$.
To define the contact process in the environment $\lambda\in\Lambda$, we use Harris' construction~\cite{MR0488377}. It allows one to couple contact processes starting from distinct initial configurations by building them from a single collection of Poisson measures on~$\mathbb{R}_+$.
\subsubsection*{Graphical construction}
We endow $\mathbb{R}_+$ with the Borel $\sigma$-algebra $\mathcal B(\mathbb{R}_+)$, and we denote by $M$ the set of locally finite counting measures $m=\sum_{i=0}^{+\infty} \delta_{t_i}$. We endow this set with the $\sigma$-algebra $\mathcal M$ generated by the maps $m\mapsto m(B)$, where $B$ ranges over the Borel sets of $\mathbb{R}_+$.
We then define the measurable space $(\Omega, \mathcal F)$ by setting
$$\Omega=M^{\mathbb{E}^d}\times M^{\mathbb{Z}^d} \text{ and } \mathcal F=\mathcal{M}^{\otimes \mathbb{E}^d} \otimes \mathcal{M}^{\otimes \mathbb{Z}^d}.$$
On this space, we consider the family $(\mathbb{P}_{\lambda})_{\lambda\in\Lambda}$ of probability measures defined as follows:
for every $\lambda=(\lambda_e)_{e \in \mathbb{E}^d} \in \Lambda$,
$$\mathbb{P}_{\lambda}=\left(\bigotimes_{e \in \mathbb{E}^d} \mathcal{P}_{\lambda_{e}}\right) \otimes \mathcal{P}_1^{\otimes\mathbb{Z}^d},$$
where, for every $\lambda\in\mathbb{R}_+$, $\mathcal{P}_{\lambda}$ is the law of a Poisson point process on $\mathbb{R}_+$ with intensity $\lambda$. If $\lambda \in \mathbb{R}_+$, we write $\mathbb{P}_\lambda$ (rather than $\mathbb{P}_{(\lambda)_{e \in \mathbb{E}^d}}$) for the law in deterministic environment with constant infection rate $\lambda$.
For every $t\ge 0$, we denote by $\mathcal{F}_t$ the $\sigma$-algebra generated by the maps $\omega\mapsto\omega_e(B)$ and $\omega\mapsto\omega_z(B)$, where $e$ ranges over all edges in $\mathbb{E}^d$, $z$ ranges over all points in $\mathbb{Z}^d$, and $B$ ranges over all Borel sets in $[0,t]$.
We build the contact process in environment $\lambda\in\Lambda$ from this family of Poisson processes, as detailed in Harris~\cite{MR0488377} for the classical contact process and in~\cite{GM-contact} for the random environment case. Note especially that the process is attractive:
$$(A \subset B) \Rightarrow (\forall t \ge 0\quad \xi_t^A \subset \xi_t^B),$$
and Fellerian; then it enjoys the strong Markov property.
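Although not used in the proofs, the dynamics are easy to simulate on a finite box with a Gillespie-type scheme, which may help the reader's intuition (an illustrative sketch with a constant rate $\lambda$; all parameter choices are ours):

```python
import random

def simulate_contact(lam, side=11, t_max=50.0, seed=0):
    """Contact process on {0,...,side-1}^2 started from the centre:
    occupied sites recover at rate 1 and attempt to infect each of
    their 4 neighbours at rate lam.  Returns the occupied set at
    t_max (the empty set if the process has died out before)."""
    rng = random.Random(seed)
    occupied = {(side // 2, side // 2)}
    t = 0.0
    while occupied and t < t_max:
        sites = list(occupied)
        total_rate = len(sites) * (1.0 + 4.0 * lam)
        t += rng.expovariate(total_rate)
        x, y = rng.choice(sites)
        if rng.random() < 1.0 / (1.0 + 4.0 * lam):
            occupied.discard((x, y))              # recovery
        else:                                     # infection attempt
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = x + dx, y + dy
            if 0 <= nx < side and 0 <= ny < side:
                occupied.add((nx, ny))
    return occupied
```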
\subsubsection*{Time translations}
For $t \ge 0$, we define the translation operator $\theta_t$ on a locally finite counting measure $m=\sum_{i=1}^{+\infty} \delta_{t_i}$ on $\mathbb{R}_+$ by setting
$$\theta_t m=\sum_{i=1}^{+\infty} 1\hspace{-1.3mm}1_{\{t_i\ge t\}}\delta_{t_i-t}.$$
The translation $\theta_t$ induces an operator on $\Omega$, still denoted by $\theta_t$: for every $\omega \in \Omega$, we set
$$ \theta_t \omega=((\theta_t \omega_e)_{e \in \mathbb{E}d}, (\theta_t \omega_z)_{z \in \mathbb{Z}d}).$$
\subsubsection*{Spatial translations}
The group $\mathbb{Z}^d$ acts on the process and on the environment. The action on the process changes the observer's point of view:
for $x \in \mathbb{Z}^d$, we define the translation operator~$T_x$ by
$$\forall \omega \in \Omega\quad T_x \omega=(( \omega_{x+e})_{e \in \mathbb{E}^d}, ( \omega_{x+z})_{z \in \mathbb{Z}^d}),$$
where $x+e$ denotes the edge $e$ translated by the vector $x$.
Besides, we can consider the translated environment $\T{x}{\lambda}$ defined by $(\T{x}{\lambda})_e=\lambda_{x+e}$.
These actions are dual in the sense that for every $\lambda \in \Lambda$, for every $x \in \mathbb{Z}^d$,
\begin{eqnarray}
\label{translationspatiale}
\forall A\in\mathcal{F}\quad\mathbb{P}_{\lambda}(T_x \omega \in A) & = & \mathbb{P}_{\T{x}{\lambda}}(\omega \in A).
\end{eqnarray}
Consequently, the law of $\xi^x$ under $\mathbb{P}_\lambda$ coincides with the law of $\xi^0$ under $\mathbb{P}_{x.\lambda}$.
\subsubsection*{Essential hitting times and associated translations}
For a set $A \subset \mathbb{Z}d$, we define the lifetime $\tau^A$ of the process starting from $A$ by
$$\tau^A=\inf\{t\ge0: \; \xi_t^A=\varnothing\}. $$
For $A \subset \mathbb{Z}d$ and $x \in \mathbb{Z}d$, we also define the first infection time $t^A(x)$ of the site $x$ from the set $A$ by
$$t^A(x)=\inf\{t\ge 0: \; x \in \xi_t^A\}.$$
If $y\in\mathbb{Z}d$, we write $t^y(x)$ instead of $t^{\{y\}}(x)$. Similarly, we simply write $t(x)$ for $t^0(x)$.
In our previous paper~\cite{GM-contact}, we introduced a new quantity $\sigma(x)$: a time at which the site $x$ is infected from the origin $0$ and has infinite progeny. This essential hitting time is defined through two increasing sequences of stopping times $(u_n(x))_{n \ge 0}$ and $(v_n(x))_{n \ge 0}$ with
$u_0(x)=v_0(x)=0$ and $u_0(x)=v_0(x)\le u_1(x)\le v_1(x)\le u_2(x)\le\dots$, constructed recursively as follows:
\begin{itemize}
\item Assume that $v_k(x)$ is defined. We set $u_{k+1}(x) =\inf\{t\ge v_k(x): \; x \in \xi^0_t \}$. \\
If $v_k(x)<+\infty$, then $u_{k+1}(x)$ is the first time after $v_k(x)$ where site $x$ is once again infected; otherwise, $u_{k+1}(x)=+\infty$.
\item Assume that $u_k(x)$ is defined, with $k \ge 1$. We set $v_k(x)=u_k(x)+\tau^x\circ \theta_{u_k(x)}$.\\
If $u_k(x)<+\infty$, the time $\tau^x\circ \theta_{u_k(x)}$ is the lifetime of the contact process starting from $x$ at time $u_k(x)$; otherwise, $v_k(x)=+\infty$.
\end{itemize}
We then set
\begin{equation}
\label{definitiondeK}
K(x)=\min\{n\ge 0: \; v_{n}(x)=+\infty \text{ or } u_{n+1}(x)=+\infty\}.
\end{equation}
This quantity represents the number of steps before the success of this procedure: either we stop because we have just found an infinite $v_n(x)$, which corresponds to a time $u_n(x)$ at which $x$ is occupied and has infinite progeny, or we stop because we have just found an infinite $u_{n+1}(x)$, which means that after $v_n(x)$ the site $x$ is never infected again.
We proved that $K(x)$ is almost surely finite, which allows us to define the essential hitting time $\sigma(x)$ by setting $\sigma(x)=u_{K(x)}$.
It is of course larger than the hitting time $t(x)$ and can be seen as a regeneration time.
Note however that $\sigma(x)$ is not necessarily the first time when $x$ is occupied and has infinite progeny: for instance, such an event can occur between $u_1(x)$ and $v_1(x)$ and be ignored by the recursive construction.
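The recursive construction above translates directly into an elementary procedure. In the following toy sketch (ours, not part of~\cite{GM-contact}), the two oracles `infection_after` and `lifetime_from` stand for $t\mapsto\inf\{s\ge t:\; x\in\xi^0_s\}$ and $t\mapsto\tau^x\circ\theta_t$ respectively:

```python
import math

def essential_hitting_time(infection_after, lifetime_from):
    """Toy version of the recursive construction of sigma(x).

    infection_after(t): first time >= t at which site x is infected from 0
                        (math.inf if x is never infected again after t);
    lifetime_from(t):   lifetime of the contact process restarted from x
                        at time t (math.inf in case of survival).
    Returns the pair (K(x), sigma(x)) = (K(x), u_{K(x)}).
    """
    u, v, k = 0.0, 0.0, 0               # u_0 = v_0 = 0
    while True:
        u_next = infection_after(v)     # u_{k+1}
        if math.isinf(u_next):          # x is never infected again: K = k
            return k, u
        u = u_next
        v = u + lifetime_from(u)        # v_{k+1} = u_{k+1} + tau^x o theta_{u_{k+1}}
        k += 1
        if math.isinf(v):               # infinite progeny from (x, u_k): K = k
            return k, u
```

For example, if $x$ is infected at times $1$ and $5$, with respective lifetimes $2$ and $+\infty$, the procedure returns $K(x)=2$ and $\sigma(x)=5$.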
At the same time, we define the operator $\tilde \theta_x$ on $\Omega$ by:
\begin{equation*}
\tilde \theta_x =
\begin{cases} T_{x} \circ \theta_{\sigma(x)} & \text{if $\sigma(x)<+\infty$,}
\\
T_x &\text{otherwise,}
\end{cases}
\end{equation*}
or, more explicitly,
\begin{equation*}
(\tilde \theta_x)(\omega) =
\begin{cases} T_{x} (\theta_{\sigma(x)(\omega)} \omega) & \text{if $\sigma(x)(\omega)<+\infty$,}
\\
T_x (\omega) &\text{otherwise.}
\end{cases}
\end{equation*}
We will mainly deal with the essential hitting time $\sigma(x)$ that enjoys, unlike $t(x)$,
some good invariance properties in the survival-conditioned environment. Moreover, the difference between $\sigma(x)$ and $t(x)$ was controlled in~\cite{GM-contact}; this will allow us to transpose to $t(x)$ the results obtained for $\sigma(x)$.
\subsubsection*{Contact process in the survival-conditioned environment}
For $\lambda \in \Lambda$, we define the probability measure ${\overline{\mathbb{P}}}_\lambda$
on $(\Omega, \mathcal F)$ by
$$\forall E\in\mathcal{F}\quad {\overline{\mathbb{P}}}_\lambda(E)=\mathbb{P}_\lambda(E|\tau^0=+\infty).$$
It is thus the law of the family of Poisson point processes, conditioned on the survival of the contact process starting from $0$.
Let then $\nu$ be a probability measure on the set of environments $\Lambda$.
On the same space $(\Omega, \mathcal F)$, we define the corresponding annealed probabilities $\overline{\mathbb{P}}$ and $\mathbb{P}$ by setting
$$\forall E\in\mathcal{F}\quad {\overline{\mathbb{P}}}(E)=\int_\Lambda {\overline{\mathbb{P}}}_\lambda(E)\ d\nu(\lambda) \quad\text{ and }\quad {\mathbb{P}}(E)=\int_\Lambda {\mathbb{P}}_\lambda(E)\ d\nu(\lambda).$$
\subsection{Previous results} We recall here the results established in~\cite{GM-contact} for the contact process in random environment.
\begin{prop}[Lemma 8 and Corollary 9 in~\cite{GM-contact}]
\label{magic}
Let $x,y \in \mathbb{Z}d \backslash \{0\}$, let $A$ be in the $\sigma$-algebra generated by $\sigma(x)$, and let $B\in \mathcal F$. Then
$$\forall \lambda \in \Lambda \quad \overline{\mathbb{P}}_\lambda(A \cap (\tilde{\theta}_x)^{-1}(B))=\overline{\mathbb{P}}_\lambda(A) \overline{\mathbb{P}}_{\T{x}{\lambda}}(B).$$
\label{invariancePbarre}
As consequences we have:
\begin{itemize}
\item The probability measure $\overline{\mathbb{P}}$ is invariant under the translation $\tilde \theta_x$.
\item Under $\overline{\mathbb{P}}_\lambda$, $\sigma(y)\circ\tilde{\theta}_x$ and $\sigma(x)$ are independent. Moreover, the law of $\sigma(y)\circ\tilde{\theta}_x$ under $\overline{\mathbb{P}}_\lambda$ is the same as the law of $\sigma(y)$ under $\overline{\mathbb{P}}_{\T{x}{\lambda}}$.
\item The random variables $(\sigma(x) \circ (\tilde \theta_{x})^j)_{j \ge 0}$ are independent under~$\overline{\mathbb{P}}_\lambda$.
\end{itemize}
\end{prop}
\begin{prop}[Corollaries~20 and 21 in~\cite{GM-contact}]
\label{propmoments}
There exist $A,B,C>0$ and, for every $p\ge 1$, a constant $C_p>0$ such that for every $x\in\mathbb{Z}d$ and every $\lambda\in\Lambda$,
\begin{eqnarray}\label{moms}
\mathbb{E}barre_{\lambda} [\sigma(x)^p ]& \le& C_p (1+\|x\|)^{p},\\
\label{asigma}
\forall t\ge 0 \quad ( \|x\|\le t) & \Longrightarrow & \left(\overline{\mathbb{P}}_{\lambda}(\sigma(x)> Ct) \le A\exp(-Bt^{1/2})\right).
\end{eqnarray}
\end{prop}
\begin{prop}[Theorem 2 in~\cite{GM-contact}]
\label{systemeergodique}
For every $x\in\mathbb{E}rg(\nu)$, the measure-preserving dynamical system $(\Omega,\mathcal{F},\overline{\mathbb{P}},\tilde{\theta}_x)$ is ergodic.
\end{prop}
We then proved that, $\overline{\mathbb{P}}$-almost surely, for every $x \in \mathbb{Z}d$, $\frac{\sigma(nx)}n$ converges to a deterministic real number $\mu(x)$.
The function $x\mapsto \mu(x)$ can be extended to a norm on $\mathbb{R}d$, that characterizes the asymptotic shape. Let $A_{\mu}$ be the unit ball for $\mu$.
We define
\begin{eqnarray*}
H_t & = & \{x\in\mathbb{Z}d: \; t(x)\le t\},\\
G_t & = & \{x\in\mathbb{Z}d: \; \sigma(x)\le t\},\\
K'_t & = & \{x\in\mathbb{Z}d: \;\forall s\ge t \quad \xi^0_s(x)=\xi^{\mathbb{Z}d}_s(x)\},
\end{eqnarray*}
and we denote by $\tilde{H}_t,\tilde{G}_t,\tilde{K}'_t$ their ``fattened'' versions:
$$\tilde{H}_t=H_t+[0,1]^d, \; \tilde{G}_t=G_t+[0,1]^d \text{ and } \tilde{K}'_t=K'_t+[0,1]^d.$$
We can now state the asymptotic shape result:
\begin{prop}[Theorem 3 in~\cite{GM-contact}]
\label{thFA}
For every $\varepsilon>0$, $\overline{\mathbb{P}}$-a.s., for every~$t$ large enough,
\begin{equation}
\label{leqdeforme}
(1-\varepsilon)A_{\mu}\subset \frac{\tilde K'_t\cap \tilde G_t}t\subset \frac{\tilde G_t}t\subset\frac{\tilde H_t}t\subset (1+\varepsilon)A_{\mu}.
\end{equation}
\end{prop}
In order to prove the asymptotic shape theorem, we established
exponential controls uniform in $\lambda \in \Lambda$. We set
$$B_r^x=\{y \in \mathbb{Z}d: \; \|y-x\|_{\infty} \le r\},$$
and we write $B_r$ instead of $B_r^0$.
\begin{prop}[Proposition 5 in~\cite{GM-contact}]
\label{propuniforme}
There exist $A,B,M,c,\rho>0$ such that for every
$\lambda\in\Lambda$, for every $y \in \mathbb{Z}d$, for every $ t\ge0$
\begin{eqnarray}
\mathbb{P}_\lambda(\tau^0=+\infty) & \ge & \rho,
\label{uniftau} \\
\mathbb{P}_\lambda(H^0_t \not\subset B_{Mt} ) & \le & A\exp(-Bt),
\label{richard} \\
\mathbb{P}_\lambda ( t<\tau^0<+\infty) &\le& A\exp(-Bt), \label{grosamasfinis} \\
\mathbb{P}_{\lambda}\left( t^0(y)\ge \frac{\|y\|}c+t,\; \tau^0=+\infty \right) & \le & A\exp(-Bt),
\label{retouche}\\
\mathbb{P}_{\lambda}(0\not\in K'_t, \; \tau^0=+\infty) &\le &A\exp(-B t).
\label{petitsouscouple}
\end{eqnarray}
\end{prop}
\begin{lemme}
\label{momtprime}
There exist $A,B,C>0$ such that for every $x\in\mathbb{Z}d$ and every $\lambda\in\Lambda$,
\begin{equation}
\label{momtprimeeq}
\forall t\ge 0 \quad ( \|x\|\le t) \Longrightarrow \left(\overline{\mathbb{P}}_{\lambda}(t'(x)> Ct) \le A\exp(-Bt^{1/2})\right).
\end{equation}
\end{lemme}
\begin{proof}
For every $\lambda \in \Lambda$, for every $ x \in \mathbb{Z}d$,
\begin{eqnarray}
\overline{\mathbb{P}}_{\lambda}(t'(x)>\sigma(x)+s) & = & \overline{\mathbb{P}}_{\lambda}(x \not\in K'_{\sigma(x)+s}\cap G_{\sigma(x)+s})\nonumber \\
& = & \overline{\mathbb{P}}_{\lambda}(x \not\in K'_{\sigma(x)+s})\nonumber \\
& \le & \overline{\mathbb{P}}_{\lambda}( x \not\in x+(K'_s) \circ \tilde{\theta}_x)= \overline{\mathbb{P}}_{x.\lambda}(0 \not\in K'_s)\nonumber\\
& \le & A \exp(-Bs),\label{demai}
\end{eqnarray}
using~(\ref{uniftau}) and~(\ref{petitsouscouple}).
Together with~(\ref{asigma}), this estimate gives the desired result.
\end{proof}
\subsection{An abstract restart procedure}
We formalize here the restart procedure for Markov chains.
Let $E$ be the state space where our Markov chains $(X^x_n)_{n\ge 0}$ evolve, $x \in E$ being the starting point of the chain.
We suppose that we have at our disposal a set $\tilde{\Omega}$, an update function $f:E\times \tilde{\Omega}\to E$, and a probability measure~$\nu$ on $\tilde{\Omega}$ such that, on the probability space
$(\Omega, \mathcal{F}, \mathbb{P})=(\tilde{\Omega}^{\mathbb N^*},\bor[\tilde{\Omega}^{\mathbb N^*}],\nu^{\otimes\mathbb N^*})$, endowed with the natural filtration $(\mathcal{F}_n)_{n\ge 0}$ given by $\mathcal{F}_n=\sigma(\omega\mapsto \omega_k: \;k\le n)$, the chains $(X^x_n)_{n\ge 0}$ starting from the different states enjoy the following representation:
\begin{eqnarray*}
\begin{cases}
X^x_0(\omega)=x \\
X^x_{n+1}(\omega)=f(X^x_n(\omega),\omega_{n+1}).
\end{cases}
\end{eqnarray*}
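In this representation, a trajectory of the chain is obtained by folding the update function over the i.i.d. innovations $(\omega_n)_{n\ge 1}$. A minimal sketch (the names are ours):

```python
import random

def run_chain(f, x, omegas):
    """Update-function representation of a Markov chain:
    X_0 = x and X_{n+1} = f(X_n, omega_{n+1})."""
    xs = [x]
    for w in omegas:
        xs.append(f(xs[-1], w))
    return xs

# Example update function: a simple random walk driven by
# uniform innovations omega_n in [0, 1).
step = lambda x, w: x + (1 if w < 0.5 else -1)
```

For instance, `run_chain(step, 0, [0.1, 0.9, 0.2])` returns `[0, 1, 0, 1]`.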
As usual, we define $\theta:\Omega\to\Omega$ which maps $\omega=(\omega_n)_{n\ge 1}$ to $\theta\omega=(\omega_{n+1})_{n\ge 1}$.
We assume that for each $x\in E$, we have defined an $(\mathcal{F}_n)_{n\ge 0}$-stopping time~$T^x$, an $\mathcal{F}_{T^x}$-measurable function $G^x$ and an $\mathcal{F}$-measurable function $F^x$.
Now, we are interested in the following quantities:
\begin{eqnarray*}
T_0^x=0 \text{ and } T^x_{k+1} & = &
\begin{cases}
+\infty & \text{if }T^x_{k}=+\infty\\
T_k^x+T^{x_k}(\theta_{T_k^{x}}) & \text{with $x_k=X^x_{T_k^x}$ otherwise;}
\end{cases} \\
K^x & = & \inf\{k\ge 0:\;T_{k+1}^x=+\infty\}; \\
M^x & = & \sum_{k=0}^{K^x-1} G^{x_k}(\theta_{T_k^x})+F^{x_{K^x}}(\theta_{T^x_{K^x}}).
\end{eqnarray*}
We wish to control the exponential moments of the $M^x$'s with the help of
exponential bounds for $G^x$ and $F^x$.
In numerous applications to directed percolation or to the contact process, $T^x$ is the extinction time of the process (or of some embedded process) starting from the smallest point (in lexicographic order) in the configuration $x$.
\begin{lemme}[Lemma 4.1 in~\cite{GM-dop}]
\label{restartabstrait}
We suppose that there exist real numbers $A>0$, $c<1$, $p>0$, $\beta>0$ such that the real-valued functions $(G^x)_{x\in E},(F^x)_{x\in E}$ defined above satisfy $$\forall x\in E\quad \left\lbrace
\begin{array}{l}
\mathbf{G}(x)=\mathbb{E} [\exp(\beta G^x)1\hspace{-1.3mm}1_{\{T^x<+\infty\}}]\le c;\\
\mathbf{F}(x)=\mathbb{E} [1\hspace{-1.3mm}1_{\{T^x=+\infty\}} \exp(\beta F^x)]\le A;\\
\mathbf{T}(x)=\mathbb{P} (T^x=+\infty)\ge p.
\end{array}
\right.
$$
Then, for each $x \in E$, $K^x$ is $\mathbb{P}$-almost surely finite and
$$ \mathbb{E}[ \exp(\beta M^x)]\le \frac{A}{1-c} <+\infty.$$
\end{lemme}
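The mechanism behind Lemma~\ref{restartabstrait} can be illustrated in the simplest i.i.d. setting: each attempt succeeds with probability $p$, every failed attempt contributes a cost playing the role of $G^x$, and $F^x=0$ (as in the application to the contact process below). The following toy simulation is our own sketch, not the general setting of the lemma:

```python
import random

def restart_total(p_success, cost_fail, rng):
    """One run of the restart scheme in the simplest i.i.d. setting:
    attempts fail independently with probability 1 - p_success, each
    failed attempt adding cost_fail(rng) to M; the final, successful
    attempt contributes F = 0."""
    M = 0.0
    while rng.random() >= p_success:   # the attempt fails
        M += cost_fail(rng)            # accumulate the cost of the failure
    return M
```

For a constant unit cost, $M$ is just the number of failed attempts, which is geometric, so that $\mathbb{E}[M]=(1-p)/p$ and $\mathbb{E}[e^{\beta M}]<+\infty$ as soon as $e^{\beta}(1-p)<1$, in accordance with the conclusion of the lemma.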
\subsection{Oriented percolation}
We work, for $d \ge1$, on the following graph:
\begin{itemize}
\item The set of sites is ${\mathbb{V}}^{d+1}=\{(z,n)\in \mathbb{Z}d \times \mathbb N\}$.
\item We put an oriented edge from $(z_1,n_1)$ to $(z_2,n_2)$ if and only if $n_2=n_1+1$ and $\|z_2-z_1\|_1\le1$; the set of these edges is denoted by $\mathbb{E}ddo$.
\end{itemize}
We define $\mathbb{E}do$ as follows: there is an oriented edge in $\mathbb{E}do$ between two points $z_1$ and $z_2$ in $\mathbb{Z}d$ if and only if $\|z_1-z_2\|_1\le 1$.
The oriented edge in $\mathbb{E}ddo$ from $(z_1,n_1)$ to $(z_2,n_2)$ can be identified with the couple $((z_1,z_2),n_2)\in\mathbb{E}do\times\mathbb N^*$. Thus, we identify $\mathbb{E}ddo$ and $\mathbb{E}do\times\mathbb N^*$.
We consider $\Omega=\{0,1\}^{\mathbb{E}ddo}$ endowed with its Borel $\sigma$-algebra: the edges $e$ such that $\omega_e=1$ are said to be open, the other ones are closed. For $v, w$ in $\mathbb{Z}d\times\mathbb N$, we denote by $v \to w$ the existence of an oriented path from $v$ to $w$ composed of open edges. We denote by $\overrightarrow{p_c}^{\text{alt}}(d+1)$ the critical parameter for Bernoulli oriented percolation on this graph (\emph{i.e.} each edge is independently open with probability~$p$).
We set, for $n \in \mathbb N$ and $(x,0)\in {\mathbb{V}}^{d+1}$,
\begin{eqnarray*}
\bar{\xi}^x_n & = & \{y \in \mathbb{Z}d: \; (x,0)\to(y,n)\}, \\
\bar{\tau}^x & = & \max\{n \in \mathbb N:\; \bar{\xi}^x_n \neq \varnothing\}.
\end{eqnarray*}
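For concreteness, Bernoulli oriented percolation on this graph can be simulated level by level in dimension $1+1$: the $n$-th entry of the returned list plays the role of $\bar{\xi}^0_n$, and $\bar{\tau}^0$ is finite exactly when some level is empty. This is a toy sketch of ours:

```python
import random

def oriented_percolation(p, n_steps, rng):
    """Bernoulli oriented percolation in dimension 1 + 1: from (z, n)
    there is an edge to (z', n + 1) whenever |z' - z| <= 1, each edge
    being open with probability p, independently.  Returns the list of
    levels xi^0_0, xi^0_1, ..., stopping at extinction."""
    level = {0}
    levels = [set(level)]
    for _ in range(n_steps):
        nxt = set()
        for z in level:
            for dz in (-1, 0, 1):        # the three outgoing edges of (z, n)
                if rng.random() < p:
                    nxt.add(z + dz)
        level = nxt
        levels.append(set(level))
        if not level:                    # extinction: bar-tau^0 is finite
            break
    return levels
```

When $p=1$, every edge is open and the $n$-th level is the full interval $[-n,n]\cap\mathbb{Z}$.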
We recall results from~\cite{GM-dop} for a class $\mathcal{C}_d(M,q)$ of dependent oriented percolation models on this graph. The parameter $M$ controls the range of the dependence while the parameter $q$ controls the probability for an edge to be open.
\begin{defi}[Class $\mathcal{C}_d(M,q)$]
Let $d\ge1$ be fixed. Let $M$ be a positive integer and $q\in (0,1)$.
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space endowed with a filtration $(\mathcal{G}_n)_{n\ge 0}$. We assume that, on this probability space, a random field $(W^n_e)_{e\in\mathbb{E}do,n\ge 1}$ taking its values in $\{0,1\}$ is defined. This field gives the states -- open or closed -- of the edges in $\mathbb{E}ddo$.
We say that the law of the field
$(W^n_e)_{e\in\mathbb{E}do,n \ge 1}$ is in $\mathcal{C}_d(M,q)$ if it satisfies the two following conditions.
\begin{itemize}
\item $\forall n\ge 1,\forall e \in \mathbb{E}do\quad W^n_e\in\mathcal{G}_n$;
\item $\forall n \ge 0,\forall e \in \mathbb{E}do\quad \mathbb{P}[W^{n+1}_e=1|\mathcal{G}_n\vee \sigma(W^{n+1}_f, \; d(e,f)\ge M)]\ge q$,
\end{itemize}
where $\sigma(W^{n+1}_f, \; d(e,f)\ge M)$ is the $\sigma$-field generated by the random variables $W^{n+1}_f$, with $d(e,f)\ge M$.
\end{defi}
Note that if $0\le q\le q'\le 1$, we have $\mathcal{C}_d(M,q')\subset \mathcal{C}_d(M,q)$.
We can control the probability of survival and also the lifetime for these dependent oriented percolations.
\begin{prop}[Corollary 3.1 in~\cite{GM-dop}]
\label{petitmomentexpo}
Let $\varepsilon>0$ and $M>1$. There exist $\beta>0$ and $q<1$ such that for each $\chi\in\mathcal{C}_d(M,q)$,
$$\forall x \in \mathbb{Z}d \quad \mathbb{E}_{\chi}[1\hspace{-1.3mm}1_{\{\bar{\tau}^x<+\infty\}}\exp(\beta\bar{\tau}^x)]\le \varepsilon \quad \text{ and } \quad \chi(\bar{\tau}^x=+\infty) \ge 1-\varepsilon.$$
\end{prop}
A point $(y,k) \in \mathbb{Z}d \times \mathbb N$ such that $(x,0)\to(y,k)\to\infty$ is called an immortal descendant of $x$. We will need estimates on the density of immortal descendants of $x$ above some given point $y$ in oriented dependent percolation.
So we define
\begin{eqnarray*}
\bar{G}(x,y)&=& \{k\in\mathbb N: \quad (x,0)\to(y,k)\to\infty\}, \\
\bar{\gamma}(\theta,x,y) & = & \inf\{n\in\mathbb N: \quad \forall k\ge n \quad \mathbb{C}ard{\{0,\dots,k\}\cap \bar{G}(x,y)}\ge\theta k\}.
\end{eqnarray*}
\begin{prop}[Corollary 3.3 in~\cite{GM-dop}]
\label{lineairegamma}
Let
$M>1$. There exist $q_0<1$ and positive constants $A,B,\theta,\alpha$ such that for each $\chi\in\mathcal{C}_d(M,q_0)$, we have
$$\forall x,y\in\mathbb{Z}d\quad\forall n\ge 0\quad \chi(+\infty>\bar{\gamma}(\theta,x,y)> \alpha \|x-y\|_1+n)\le Ae^{-Bn}.$$
\end{prop}
\section{Quenched upper large deviations}
The aim is now to prove the quenched upper large deviations of Theorem~\ref{theoGDUQ}.
In order to exploit the subadditivity, we show that $\sigma(x)$ admits exponential moments uniformly in $\lambda \in \Lambda$:
\begin{theorem}
\label{theomomexpsigma}
There exist positive constants $\gamma_1,\beta_1$ such that
\begin{equation}
\label{momexpsigma}
\forall x \in \mathbb{Z}d \quad \forall \lambda \in \Lambda \quad
\mathbb{E}barre_{\lambda}(e^{\gamma_1 \sigma(x)}) \le e^{\beta_1\|x\|_{1}}.
\end{equation}
\end{theorem}
As an immediate consequence, we get
\begin{coro}
\label{sauveur}
There exist positive constants $A,B,c$ such that for each
$\lambda\in\Lambda$, each $x \in \mathbb{Z}d$ and every $t\ge0$,
\begin{eqnarray*}
\overline{\mathbb{P}}_{\lambda}\left( t'(x)\ge \frac{\|x\|}c+t \right) & \le & A\exp(-Bt).
\end{eqnarray*}
\end{coro}
\begin{proof}
$$\overline{\mathbb{P}}_{\lambda}\left( t'(x)\ge \frac{\|x\|}c+t\right)\le \overline{\mathbb{P}}_{\lambda}\left( \sigma(x)\ge \frac{\|x\|}c+t/2\right)+\overline{\mathbb{P}}_{\lambda}(t'(x)-\sigma(x)\ge t/2).$$
The second term is controlled by Inequality~\eqref{demai} and Theorem~\ref{theomomexpsigma} gives the desired result with $c=\frac{\gamma_1}{\beta_1}$.
\end{proof}
The rest of this section is organized as follows. We first prove how the subadditive properties and the existence of exponential moments for $\sigma$ given by Theorem~\ref{theomomexpsigma} imply the large deviations inequalities of Theorem~\ref{theoGDUQ}. Next we show how Theorem~\ref{lemme-pointssourcescontact} gives Theorem~\ref{theomomexpsigma}. Finally, the last (and most important) part will be devoted to the proof of Theorem~\ref{lemme-pointssourcescontact}.
\subsection{Proof of Theorem~\ref{theoGDUQ} from Theorem~\ref{theomomexpsigma}}
Let $\varepsilon>0$. Let $\beta_1$ and $\gamma_1$ be the constants given by~(\ref{momexpsigma}), and let
\begin{equation}
C>2 \beta_1/\gamma_1. \label{jechoisisC}
\end{equation}
Theorem~\ref{thFA} gives the almost sure convergence of $\sigma(x)/\mu(x)$ to $1$ when $\|x\|$ tends to $+\infty$, and Proposition~\ref{propmoments} ensures that the family $(\sigma(x)/\mu(x))_{x \in \mathbb{Z}d}$ is bounded in $L^2(\overline{\mathbb{P}})$, therefore uniformly integrable: then the convergence also holds in $L^1(\overline{\mathbb{P}})$.
Let then $M_0$ be such that
\begin{equation}
\label{je choisisM0}
( \mu(x) \ge M_0) \quad \Rightarrow \quad \left( \frac{\mathbb{E}barre(\sigma(x))}{\mu(x)} \le 1+\varepsilon/8\right).
\end{equation}
We assumed that $\{ay: \;a\in\mathbb{R}_+,y\in\mathbb{E}rg(\nu)\}$ is dense in $\mathbb{R}d$.
Its range by $x\mapsto \frac{x}{\mu(x)}$ is therefore dense in $\{x\in\mathbb{R}d: \;\mu(x)=1\}$, thus
the set
$\{\frac{y}{\mu(y)}: \;y\in\mathbb{E}rg(\nu), \,\mu(y)\ge M_0\}$ is also dense in
$\{x\in\mathbb{R}d:\;\mu(x)=1\}$.
By a compactness argument, one can find a finite subset $F$
in $\{\frac{y}{\mu(y)}:\;y\in\mathbb{E}rg(\nu), \,\mu(y)\ge M_0\}$
such that
$$\forall \hat{x}\in \mathbb{R}d \text{ such that } \mu(\hat{x})=1 \quad\exists y\in F, \; \left\|\frac{y}{\mu(y)}-\hat{x}\right\|_1\le \varepsilon/C.$$
We let $M=\max\{\mu(y):\;y\in F\}.$
For $y\in F$, set $\tilde{\sigma}(y)=\sigma(y)-(1+\frac{\varepsilon}4)\mu(y)$. Since, by (\ref{momexpsigma}), $\tilde{\sigma}(y)$ admits exponential moments, the expansion $\mathbb{E}barre[e^{t\tilde{\sigma}(y)}]=1+t\mathbb{E}barre[\tilde{\sigma}(y)]+o(t)$ holds in a neighborhood of $0$.
Since $\mathbb{E}barre[\tilde{\sigma}(y)]<0$, we have $\mathbb{E}barre[e^{t\tilde{\sigma}(y)}]<1$ when $t$ is small enough. Since $F$ is finite,
we can find some constants $\alpha>0$ and $c_\alpha<1$ such that
\begin{eqnarray*}
\forall y\in F && \mathbb{E}barre[ \exp(\alpha (\sigma(y)-(1+\varepsilon/4)\mu(y)))]\le c_{\alpha}.
\end{eqnarray*}
Let $x\in\mathbb{Z}d$. We associate to $x$ a point $y\in F$ and an integer $n$ such that
\begin{equation}
\label{jechoisisxn}
\left\| \frac{x}{\mu(x)}-\frac{y}{\mu(y)}\right\|_1\le \frac{\varepsilon}C \text{ and } \left|n-\frac{\mu(x)}{\mu(y)}\right|\le 1.
\end{equation}
By the definition of $t(x)$, for each $\lambda \in \Lambda$, we have
\begin{eqnarray}
&& \overline{\mathbb{P}}_{\lambda} \left( t(x) \ge (1+\varepsilon)\mu(x) \right) \nonumber \\
& \le & \overline{\mathbb{P}}_{\lambda} \left( \sum_{i=0}^{n-1} \sigma(y)\circ\tilde{\theta}_y^i + \sigma(x-ny)\circ\tilde{\theta}_y^n \ge (1+\varepsilon)\mu(x) \right) \nonumber \\
& \le & \overline{\mathbb{P}}_{\lambda} \left( \sum_{i=0}^{n-1} \sigma(y)\circ\tilde{\theta}_y^i \ge \left(1+\frac{\varepsilon}2 \right)\mu(x)\right) + \overline{\mathbb{P}}_{\lambda} \left(\sigma(x-ny)\circ\tilde{\theta}_y^n \ge \frac{\varepsilon}2 \mu(x) \right).
\label{deuxtermes}
\end{eqnarray}
Let us first consider the second term in~(\ref{deuxtermes}). With Proposition~\ref{invariancePbarre} and estimate~(\ref{momexpsigma}), it follows that
\begin{eqnarray*}
\overline{\mathbb{P}}_{\lambda} \left(\sigma(x-ny)\circ\tilde{\theta}_y^n \ge \frac{\varepsilon}2 \mu(x) \right)
& = & \overline{\mathbb{P}}_{ny. \lambda} \left(\sigma(x-ny) \ge \frac{\varepsilon}2 \mu(x) \right) \\
& \le & \exp\left( -\frac{\gamma_1 \varepsilon \mu(x)}2 \right) \mathbb{E}barre_{ny.\lambda}( \exp(\gamma_1 \sigma(x-ny)))\\
& \le & \exp\left( -\frac{\gamma_1 \varepsilon \mu(x)}2 \right) \exp(\beta_1 \|x-ny\|_1).
\end{eqnarray*}
Our choices~(\ref{jechoisisxn}) for $y$ and $n$ and the definition of $M$ ensure that
$$\|x-ny\|_1\le \left\| x -\frac{\mu(x)}{\mu(y)} y \right\|_1+\left| \frac{\mu(x)}{\mu(y)} - n \right| \|y\|_1 \le \frac{\varepsilon \mu(x)}C+M.$$
Our choice~(\ref{jechoisisC}) for $C$ gives then the existence of two positive constants $A_1$ and $B_1$ such that for each $\lambda \in \Lambda$ and each $x \in \mathbb{Z}d$,
$$\overline{\mathbb{P}}_{\lambda}\left(\sigma(x-ny)\circ\tilde{\theta}_y^n\ge \frac{\varepsilon}{2}\mu(x)\right)\le A_1\exp(-B_1\|x\|).$$
Let us move to the first term of~(\ref{deuxtermes}). Our choices~(\ref{jechoisisxn}) for $y$ and $n$ ensure that
$$ \left| \frac{\mu(x)}{n \mu(y)}-1\right| \le \frac1n \le \left(\frac{\mu(x)}{M}-1\right)^{-1}.$$
Then, we can find $T$ sufficiently large to have, for $\mu(x) \ge T$, that
$$\frac{\mu(x)}{n \mu(y)} \ge \frac{1+\varepsilon/4}{1+\varepsilon/2}.$$
Suppose now that $\mu(x) \ge T$. Proposition~\ref{invariancePbarre} ensures that the variables $\sigma(y)\circ\tilde{\theta}_y^i$ are independent under $\overline{\mathbb{P}}_\lambda$ and moreover that the law of $\sigma(y)\circ\tilde{\theta}_y^i$ under $\overline{\mathbb{P}}_\lambda$ coincides with the law of $\sigma(y)$ under $\overline{\mathbb{P}}_{iy.\lambda}$: thus
\begin{eqnarray*}
&&\overline{\mathbb{P}}_{\lambda} \left( \sum_{i=0}^{n-1} \sigma(y)\circ\tilde{\theta}_y^i \ge \left(1+\frac{\varepsilon}2 \right)\mu(x)\right)\\& \le & \overline{\mathbb{P}}_{\lambda} \left( \sum_{i=0}^{n-1} \sigma(y)\circ\tilde{\theta}_y^i\ge (1+\frac{\varepsilon}4) n\mu(y)\right) \\
&\le & \overline{\mathbb{P}}_{\lambda} \left( \prod_{i=0}^{n-1} \exp \left( \alpha [\sigma(y)\circ\tilde{\theta}_y^i - (1+\frac{\varepsilon}4)\mu(y)]\right) \ge 1 \right) \\
& \le & \prod_{i=0}^{n-1} \mathbb{E}barre_{iy.\lambda} \left[ \exp \left( \alpha(\sigma(y)-(1+\frac{\varepsilon}4)\mu(y)) \right) \right].
\end{eqnarray*}
Applying the Ergodic Theorem to the system $(\Lambda,\mathcal{B}(\Lambda),\nu, y.)$ and to the function $\lambda \mapsto \log \mathbb{E}barre_{\lambda} \left( \exp[\alpha(\sigma(y)-(1+\varepsilon/4)\mu(y))] \right)$, we get that for $\nu$-almost every $\lambda$ and for each $y\in F$,
\begin{eqnarray*}
&& \miniop{}{\overline{\lim}}{n\to+\infty} \frac1{n}\log \overline{\mathbb{P}}_{\lambda} \left( \frac{1}{n\mu(y)} \sum_{i=0}^{n-1} \sigma(y)\circ\tilde{\theta}_y^i \ge 1+\varepsilon/4 \right) \\
& \le & \int_{\Lambda} \log \mathbb{E}barre_{\lambda} \left(\exp[\alpha(\sigma(y)-(1+\varepsilon/4)\mu(y))] \right) d\nu(\lambda)\\
& \le & \log \int_{\Lambda} \mathbb{E}barre_{\lambda} \left( \exp[\alpha(\sigma(y)-(1+\varepsilon/4)\mu(y))]\right) d\nu(\lambda) \le \log c_{\alpha}<0.
\end{eqnarray*}
Using the norm equivalence theorem and noting that the choices~(\ref{jechoisisxn}) for $n$ and $y$ ensure that
$$ \frac{n}{\mu(x)} \le \frac{1}{M_0} +\frac{1}{T},$$
we deduce that
$$ \miniop{}{\overline{\lim}}{\|x\|\to +\infty}\frac{\log \overline{\mathbb{P}}_{\lambda}(t(x)\ge\mu(x)(1+\varepsilon))}{\|x\|}\le -C_{\varepsilon},
$$
with $C_{\varepsilon}=\min(-\log c_{\alpha},B_1)$.
Inequality~(\ref{venus}) of Theorem~\ref{theoGDUQ} follows (with another $C_{\varepsilon}$, if necessary).
Let us move to the proof of inequality~(\ref{tcouple}) of Theorem~\ref{theoGDUQ}.
Let $T= \sum_{i=0}^{n-1} \sigma(y)\circ\tilde{\theta}_y^i + \sigma(x-ny)\circ\tilde{\theta}_y^n$. Using Proposition~\ref{invariancePbarre} repeatedly, the same reasoning as in the proof of Lemma~\ref{momtprime} gives
$$\overline{\mathbb{P}}_{\lambda}(t'(x)>T+\varepsilon\mu(x))\le \overline{\mathbb{P}}_{x.\lambda}(0\not\in K'_{\varepsilon\mu(x)})\le A\exp(-B\mu(x)).$$
Thus, since
$$\overline{\mathbb{P}}_{\lambda}(t'(x)>(1+2\varepsilon)\mu(x))\le \overline{\mathbb{P}}_{\lambda}(T>(1+\varepsilon)\mu(x))+\overline{\mathbb{P}}_{\lambda}(t'(x)>T+\varepsilon\mu(x))$$
and $T$ has already been controlled, inequality~\eqref{tcouple} follows.
Let us prove inequality~\eqref{audessousforme} of Theorem~\ref{theoGDUQ}.
Since $t\mapsto K'_t\cap H_t$ is non-decreasing, it is sufficient to prove that there exist constants $A,B>0$ such that
$$\forall n\in\mathbb N\quad \overline{\mathbb{P}}((1-\varepsilon)nA_{\mu}\not\subset K'_n\cap H_n)\le A\exp(-Bn).$$
The proof of this last inequality is classical. For points with small norm, we use inequality~\eqref{retouche} and Corollary~\ref{sauveur}; for the other ones, we use inequalities~\eqref{venus} and~\eqref{tcouple}.
\subsection{Proof of Theorem~\ref{theomomexpsigma} from Theorem~\ref{lemme-pointssourcescontact}}
Theorem~\ref{lemme-pointssourcescontact} ensures that with probability at least $1-A\exp(-Bt)$, the Lebesgue measure of the set of times $s\le C\|x\|+t$ when $(0,0) \to (x,s) \to \infty$ is at least $\theta t$. If $\sigma(x)\ge C\|x\|+t$, then all these times are ignored by the recursive construction of $\sigma(x)$: they necessarily belong to
$\miniop{K(x)-1}{\cup}{k=1}[u_k(x),v_k(x)] $. Thus, we choose $\theta,C$ as in Theorem~\ref{lemme-pointssourcescontact} and get
\begin{eqnarray*}
&& \overline{\mathbb{P}}_{\lambda}(\sigma(x)\ge C\|x\|+t) \\
& \le &\overline{\mathbb{P}}_{\lambda} \left( \{s\le C\|x\|+t:\; (0,0)\to (x,s)\to\infty\}\subset\miniop{K(x)-1}{\cup}{k=1}[u_k(x),v_k(x)]\right)\\
& \le & \overline{\mathbb{P}}_{\lambda}(\Leb(\{s\le C\|x\|+t:\; (0,0)\to (x,s)\to\infty\})\le\theta t)\\
&& +\overline{\mathbb{P}}_{\lambda} \left(\miniop{K(x)-1}{\sum}{k=1}(v_k(x)-u_k(x))>\theta t \right).
\end{eqnarray*}
Theorem~\ref{lemme-pointssourcescontact} allows us to control the first term. To control the second one via Markov's inequality, it is sufficient to prove the existence of exponential moments for $\displaystyle \miniop{K(x)-1}{\sum}{k=1}(v_k(x)-u_k(x))$. To do so, we apply the abstract restart Lemma~\ref{restartabstrait}. We define, for each subset $B$ of $\mathbb{Z}d$, $F^B=0$ and
\begin{eqnarray*}
T^B & = & \inf\{t >\tau^x: \; x \in \xi_t^B\}, \\
G^B & = & \tau^x.
\end{eqnarray*}
Estimate~(\ref{uniftau}) ensures that for each $\lambda \in \Lambda$,
$$\mathbb{P}_\lambda(T^B=+\infty)\ge \mathbb{P}_\lambda( \tau^x=+\infty)\ge \rho>0,$$
and estimate~(\ref{retouche}) ensures the existence of $\alpha>0$ and $c<1$ -- that do not depend on $B$ -- such that for each $\lambda \in \Lambda$,
$$\mathbb{E}_\lambda[\exp(\alpha G^B)1\hspace{-1.3mm}1_{\{T^B<+\infty\}}]\le\mathbb{E}_\lambda[\exp(\alpha \tau^x) 1\hspace{-1.3mm}1_{\{\tau^x<+\infty\}}] = \mathbb{E}_{x.\lambda}[\exp(\alpha \tau^0) 1\hspace{-1.3mm}1_{\{\tau^0<+\infty\}}]\le c.$$
Then, with the notation of Lemma~\ref{restartabstrait}, we have
\begin{eqnarray*}
&& \mathbb{E}_\lambda \left[ \exp \left( \alpha \miniop{K(x)-1}{\sum}{k=1}(v_k(x)-u_k(x)) \right) \right]
= \mathbb{E}_\lambda \left[ \exp \left( \alpha \miniop{K(x)-1}{\sum}{k=0} \tau^x \circ \theta_{T_k^x} \right) \right]
\le \frac{1}{1-c}.
\end{eqnarray*}
To conclude, we note, using~(\ref{uniftau}), that $\mathbb{E}barre_\lambda(\cdot) \le \mathbb{E}_\lambda(\cdot)/\rho$.
\subsection{Proof of Theorem~\ref{lemme-pointssourcescontact}}
We will embed in the contact process a block-event percolation: sites will correspond to large blocks in $\mathbb{Z}d \times [0,\infty)$, and the opening of the bonds will depend on the occurrence of good events that we now define.
\subsubsection{Good events}
\begin{figure}
\caption{The good event $A(\bar{n_0},u,x_0,x_1)$.}
\end{figure}
Let $C_1>0$ and $M_1>0$ be fixed. \\
Let $I \in \mathbb N^*$, $L \in \mathbb N^*$ and $\delta>0$ be such that $I \le L$ and $\delta<C_1L$. For $\bar{n_0} \in \mathbb{Z}d$, $x_0,x_1\in [-L,L[^d$ and $u \in \mathbb{Z}d$ such that $\|u\|_1\le1$, we define the following event:
\begin{eqnarray*}
A(\bar{n_0},u,x_0,x_1) & = & A_{I,L,\delta}^{C_1,M_1}(\bar{n_0},u,x_0,x_1) \\
& = & \left\{
\begin{array}{c}
\exists t \in [0,C_1L-\delta] \quad 2L\bar{n_0}+x_1 \in \xi_t^{2L\bar{n_0}+x_0+[-I,I]^d} \\
\omega_{2L\bar{n_0}+x_1}([t,t+\delta])=0\\
\exists s \in 2L(\bar{n_0}+u) +[-L,L]^d \quad s+[-I,I]^d \subset \xi^{2L\bar{n_0}+x_1}_{C_1L-t}\circ \theta_t \\
\bigcup_{t \in [0,C_1L]} \xi_t^{2L\bar{n_0}+[-L-I,I+L]^d} \subset 2L\bar{n_0}+[-M_1L,M_1L]^d
\end{array}
\right\}.
\end{eqnarray*}
We then set $T=C_1L$. When the event $A(\bar{n_0},u,x_0,x_1)$ occurs, we denote by $s(\bar{n_0},u,x_0,x_1)$ a point $s$ satisfying the condition on the exit area in the definition of the event. Otherwise, we set $s(\bar{n_0},u,x_0,x_1)=\infty$.
If this event occurs, then:
\begin{itemize}
\item Starting from an area of size $I$ centered at a starting point $2L\bar{n_0}+x_0$ in the box with spatial coordinate $\bar{n_0}$, the process at time $T$ colonizes an area of size $I$ centered around the exit point $2L(\bar{n_0}+u)+s(\bar{n_0},u,x_0,x_1)$ in the box with spatial coordinate $\bar{n_0}+u$.
\item Moreover, the point $2L\bar{n_0}+x_1$ is occupied between time $0$ and time $T$ in a time interval with duration at least $\delta$.
\item The realization of this event only depends on what happens in the space-time box $(2L\bar{n_0}+[-M_1L,M_1L]^d)\times[0,T]$.
\end{itemize}
Let us give a summary of the different parameters:
\vspace{2mm}
\noindent
\begin{tabular}{|l|l|}
\hline
$L$ & spatial scale of the macroscopic boxes \\
$I$ & size of the entrance area and of the exit area ($I\le L$) \\
$T$ & temporal size of the macroscopic boxes ($T=C_1L$)\\
$\delta$ & minimum duration for the infection of $x_1$\\
$\bar{n_0}$ & macroscopic spatial coordinate (coordinate of the big box) \\
$u$ & direction of move ($\|u\|_1\le 1$) \\
$x_0$ & relative position of the entrance area in the box ($x_0\in [-L,L[^d$) \\
$x_1$ & relative position of the target point ($x_1\in [-L,L[^d$)\\
$s(\bar{n_0},u,x_0,x_1)$ & relative position of the exit area in the box \\
& with coordinate $(\bar{n_0}+u)$ ($s(\bar{n_0},u,x_0,x_1) \in [-L,L[^d$) \\
\hline
\end{tabular}
\begin{lemme}
\label{bonev1}
We can find constants $C_1>0$ and $M_1>0$ such that we have the following property.
For each $\varepsilon>0$, we can choose, in this order, two integers $I \le L$ large enough and then $\delta>0$ small enough so that for every $\lambda\in \Lambda$, every $\bar{n_0} \in \mathbb{Z}d$, and every $u \in \mathbb{Z}d$ with $\|u\|_1\le 1$,
$$\forall x_0,x_1\in [-L,L[^d\quad \mathbb{P}_\lambda(A(\bar{n_0},u,x_0,x_1))\ge 1-\varepsilon.$$
Moreover, as soon as $\|\bar{n_0}-\bar{n_0}'\|_\infty \ge 2M_1+1$, for every $u,u',x_0,x_0',x_1$,
$$\text{ the events } A(\bar{n_0},u,x_0,x_1) \text{ and } A(\bar{n_0}',u',x_0',x_1) \text{ are independent.}$$
\end{lemme}
\begin{proof}
Let us first note that
$$\mathbb{P}_\lambda(A(\bar{n_0},u,x_0,x_1))=\mathbb{P}_{2L\bar{n_0}.\lambda}(A(0,u,x_0,x_1)),$$
which allows us to assume that $\bar{n_0}=0$.
Let $\varepsilon>0$ be fixed. We first choose $I$ large enough to have
\begin{equation}
\label{choixI}
\forall x \in \mathbb{Z}d \quad \mathbb{P}_{\lambda_{\min}}(\tau^{x+[-I,I]^d}=+\infty) \ge 1-\varepsilon/4.
\end{equation}
We let $\varepsilon'=\varepsilon/(2I+1)^d$. \\
By the FKG Inequality,
$\mathbb{P}_{\lambda_{\min}}(\forall y\in [-I,I]^d, \tau^y=+\infty)>0$.
Translation invariance gives then $$\miniop{}{\lim}{L\to +\infty}
\mathbb{P}_{\lambda_{\min}}(\exists n\in [0,L]: \; \forall y\in ne_1+[-I,I]^d, \tau^y=+\infty)=1.$$
Then let $L_1$ be such that for every $L\ge L_1$,
$$\mathbb{P}_{\lambda_{\min}}(\exists n\in [0,L]: \forall y\in ne_1+[-I,I]^d, \tau^y=+\infty)>1-\frac{\varepsilon'}{12}\mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty).$$
By a time-reversal argument, we have for each $t>0$,
\begin{eqnarray*}
& & \mathbb{P}_{\lambda_{\min}}(\exists n\in [0,L]:\; ne_1+[-I,I]^d\subset \xi^{\mathbb{Z}d}_t)\\
& = & \mathbb{P}_{\lambda_{\min}}(\exists n\in [0,L]:\; \forall y\in ne_1+[-I,I]^d, \tau^y\ge t)> 1-\frac{\varepsilon'}{12}\mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty).
\end{eqnarray*}
We have for each $t\ge 0$ and each $\lambda\in\Lambda$:
\begin{eqnarray*}
& &\mathbb{P}_{x_1.\lambda}(\tau^{0}=+\infty,\; \forall n\in [0,L], \, 2Lu-x_1+ne_1+[-I,I]^d\not\subset\xi^{0}_t)\\
& \le & \mathbb{P}_{x_1.\lambda}(\forall n\in [0,L],\, 2Lu-x_1+ne_1+[-I,I]^d\not\subset\xi^{\mathbb{Z}d}_t)\\
& & \quad \quad + \mathbb{P}_{x_1.\lambda}(\tau^{0}=+\infty,\,[-(I+4L),(I+4L)]^d\not\subset K'_t)\\
& \le & \mathbb{P}_{\lambda_{\min}}(\forall n\in [0,L], \, ne_1+[-I,I]^d\not\subset\xi^{\mathbb{Z}d}_t)\\
& & \quad \quad + \mathbb{P}_{x_1.\lambda}(\tau^0=+\infty,\,[-(4L+I),(4L+I)]^d\not\subset K'_t).
\end{eqnarray*}
Let $C>0$ be large enough to satisfy properties~\eqref{asigma} and~\eqref{momtprimeeq}.
Then, with~\eqref{momtprimeeq}, we can find $L_2\ge L_1$ such that
for $L\ge L_2$ and $t\ge 5CL$, we have
$$\overline{\mathbb{P}}_{x_1.\lambda}(\exists n\in [0,L]: 2Lu-x_1+ne_1+[-I,I]^d\subset\xi^{0}_t)\ge 1-\varepsilon'/6.$$
Let $\delta>0$ be such that $1-e^{-\delta}\le \mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty)\varepsilon'/6$ and $\delta<5CL$: if we let
$$F_t=\big\{\omega_{0}([0,\delta])=0;\; \exists n\in [0,L], \, 2Lu-x_1+ne_1+[-I,I]^d\subset\xi^{0}_t\big\},$$
we also have, for each $\lambda\in\Lambda$ and each $t\ge 5CL$, that $\overline{\mathbb{P}}_{x_1.\lambda}(F_t)\ge 1-\varepsilon'/3$.
Then, with Proposition~\ref{magic}, one deduces that if $y\in x_0+[-I,I]^d$, then
$$\overline{\mathbb{P}}_{y.\lambda}(\sigma(x_1-y)\le 4CL, \; \tilde{\theta}_{x_1-y}^{-1} (F_{9CL-\sigma(x_1-y)})) \ge \overline{\mathbb{P}}_{y.\lambda}(\sigma(x_1-y)\le 4CL)(1-\varepsilon'/3).$$
Considering estimate~(\ref{asigma}), we can choose $L_3\ge L_2$ such that for $L\ge L_3$, we have
$$\overline{\mathbb{P}}_{y.\lambda}(\sigma(x_1-y)\le 4CL, \;\tilde{\theta}_{x_1-y}^{-1} (F_{9CL-\sigma(x_1-y)}))\ge 1-\varepsilon'/2.$$
Let $C_1=9C$. With~\eqref{choixI} and the definition of $\varepsilon'$, we get
$$
\mathbb{P}_{\lambda}\left(
\begin{array}{c}
\exists t \in [0,C_1L-\delta]: \quad x_1 \in \xi_t^{x_0+[-I,I]^d} \\
\omega_{x_1}([t,t+\delta])=0\\
\exists s \in 2Lu +[-L,L]^d: \quad s+[-I,I]^d \subset \xi^{x_1}_{C_1L-t}\circ \theta_t
\end{array}
\right)\ge 1-3\varepsilon/4.$$
Finally, we take for $M$ the constant given by equation~(\ref{richard}) and set $M_1=MC_1+2$. With~(\ref{richard}), we can choose $L \ge L_3$ sufficiently large that for each $\lambda \in \Lambda$:
\begin{equation}
\label{choixM}
\mathbb{P}_{\lambda_{\text{max}}}\left( \bigcup_{0 \le t \le C_1L} \xi^{[-L-I,L+I]^d}_t \subset [-M_1L,M_1L]^d \right)\ge 1-\varepsilon/4;
\end{equation}
this fixes the integer $L$.
The local dependence of the events comes from the third condition in their definition. This concludes the proof of the lemma.
\end{proof}
\subsubsection{Dependent macroscopic percolation }
We fix $C_1,M_1$ given by Lemma~\ref{bonev1}. We choose $I \in \mathbb N^*$, $L \in \mathbb N^*$ and $\delta>0$ such that $I \le L$ and $\delta<C_1L$ and we let $T=C_1L$.
Let $x$ in $\mathbb{Z}d$ be fixed.
We write $x=2L[x]+\{x\}$, with $\{x\}\in [-L,L[^d$ and $[x]\in\mathbb{Z}d$.
We will first, from the events defined in the preceding subsection, build a field $(^xW^n_{(\bar{k},u)})_{n \ge 0, \bar{k} \in \mathbb{Z}d, \|u\|_1\le1}$.
The idea is to construct a macroscopic oriented percolation on the bonds of $\mathbb{E}do\times\mathbb N^*$ by looking, floor by floor, for realizations of translates of good events of type $A(.)$. We start from an area centered at $0$ in the box with coordinate $\bar{0}$; for each $u$ such that $\|u\|_1\le 1$, we say that the bond between $(\bar{0},0)$ and $(u,1)$ is open if $A(\bar{0},u,0,\{x\})$ holds; in that case we obtain an infected square centered at the exit point $s(\bar{0},u,0,\{x\})$. All bonds on this floor emanating from a point other than $\bar{0}$ are declared open, with fictive exit points equal to $\infty$. We then move to the next floor: for a box $(\bar{y},1)$, we check whether it contains exit points of bonds opened at the preceding step. If so, we choose one of them, denote it by $d^x_1(\bar{y})$, and open the bond between $(\bar{y},1)$ and $(\bar{y}+u,2)$ if $A(\bar{y},u,d^x_1(\bar{y}),\{x\})\circ \theta_{T}$ occurs, closing it otherwise; if not, we open all bonds emanating from that box. We proceed in the same way on every floor.
Precisely, we let $d^x_0(\bar{0})=0$ and $d^x_0(\bar{y})=+\infty$ for every $\bar{y} \in\mathbb{Z}d$ other than $\bar{0}$.
Then, for each $\bar{y}\in\mathbb{Z}d$, each $u \in \mathbb{Z}d$ such that $\|u\|_1\le1$ and for each $n\ge 0$, we recursively define:
\begin{itemize}
\item If $d^x_n(\bar{y})=+\infty$, $^xW^n_{(\bar{y},u)}=1$.
\item Otherwise,
\begin{eqnarray*}
^xW^n_{(\bar{y},u)} & = & 1\hspace{-1.3mm}1_{A(\bar{y},u,d^x_n(\bar{y}),\{x\})} \circ \theta_{nT}, \\
d^x_{n+1}(\bar{y}) & = & \min\{ s(\bar{y}+u,-u,d^x_n(\bar{y}+u),\{x\})\circ \theta_{nT}: \; \|u\|_1\le 1, \; d^x_n(\bar{y}+u)\ne+\infty \}.
\end{eqnarray*}
\end{itemize}
Recall that the function $s$ was defined together with the good events in the preceding subsection.
Then $d^x_{n+1}(\bar{y})$ represents the relative position of the entrance area for the $^xW^{n+1}_{(\bar{y},u)}$'s, with $\|u\|_1\le1$. There may be several candidates, namely the relative positions of the exit areas of the $^xW^n_{(\bar{y}+u,-u)}$'s; the $\min$ merely plays the role of a choice function.
We thus obtain an oriented percolation process. Among open bonds, only those corresponding to the realization of good events are relevant for the underlying contact process. Note, however, that the percolation cluster starting at $\bar{0}$ only contains bonds that correspond to the propagation of the contact process.
\begin{lemme}
\label{domistoc}
Again, we work with $C_1,M_1$ given by Lemma~\ref{bonev1}.
For each $q<1$, we can choose parameters $I,L, \delta$ such that for each $\lambda \in \Lambda$, and each $x \in \mathbb{Z}d$,
$$\text{the law of }(^xW_e^n)_{(e,n) \in\mathbb{E}do\times\mathbb N^*}\text{under }\mathbb{P}_{\lambda}\text{ is in }\mathcal{C}(2M_1+1,q).$$
\end{lemme}
\begin{proof}
For each $n \in \mathbb N$, let $\mathcal G_n=\mathcal F_{nT}$, with $T=C_1L$. Note that, for each $x,\overline{k} \in \mathbb{Z}d$ and $n \ge 1$, the quantity $d^x_n(\overline{k})$ is $\mathcal{G}_n$-measurable, and so is ${}^xW^n_{(\overline{k},u)}$.
Lemma~\ref{bonev1} ensures that the events $A(\bar{k},u,x_0,\{x\})$ and $A(\bar{l},v,x_0',\{x\})$ are independent as soon as $\|\bar{k}-\bar{l}\|_1 \ge 2M_1+1$; we deduce that, conditionally on $\mathcal G_n$, the random variables ${}^xW^{n+1}_{(\overline{k},u)}$ and ${}^xW^{n+1}_{(\overline{l},v)}$ are independent as soon as $\|\overline{k}-\overline{l}\|_1 \ge 2M_1+1$.
Let now $x,\overline{k} \in \mathbb{Z}d$, $n \ge 0$ and $u \in \mathbb{Z}d$ such that $\|u\|_1\le 1$:
\begin{eqnarray*}
&& \mathbb{E}_{\lambda}[{}^xW^{n+1}_{(\overline{k},u)}|\mathcal{G}_n\vee \sigma({}^xW^{n+1}_ {(\overline{l},v)}, \; \|v\|_1\le 1, \; \|\overline{l}-\overline{k}\|_1 \ge 2M_1+1)] \\
& = & \mathbb{E}_{\lambda}[{}^xW^{n+1}_{(\overline{k},u)}|\mathcal{G}_n] \\
& = & 1\hspace{-1.3mm}1_{\{d^x_n(\overline{k})=+\infty\}}+ 1\hspace{-1.3mm}1_{\{d^x_n(\overline{k})<+\infty\}}\mathbb{P}_{\lambda}[{}^xW^{n+1}_{(\overline{k},u)}=1|d^x_n(\overline{k})<+\infty] \\
& = & 1\hspace{-1.3mm}1_{\{d^x_n(\overline{k})=+\infty\}}+ 1\hspace{-1.3mm}1_{\{d^x_n(\overline{k})<+\infty\}}\mathbb{P}_{\lambda}[A(\overline{k},u,d^x_n(\overline{k}), \{x\})].
\end{eqnarray*}
With Lemma~\ref{bonev1}, we can choose integers $I \le L$ and $\delta>0$ in such a way that
$$\mathbb{E}_{\lambda}[{}^xW^{n+1}_{(\overline{k},u)}|\mathcal{G}_n\vee \sigma({}^xW^{n+1}_ {(\overline{l},v)}, \; \|v\|_1\le 1, \; \|\overline{l}-\overline{k}\|_1 \ge 2M_1+1)] \ge q.$$
This concludes the proof of the lemma.
\end{proof}
For this percolation process, we denote by $\overline{\tau}^{\bar{k}}$ and $\overline{\gamma}(\theta,\bar{k},\bar{l})$
the lifetime starting from $\bar{k}$ and the essential hitting time of $\bar{l}$ starting from $\bar{k}$ in the dependent oriented percolation induced by the Bernoulli random field $(^xW_e^n)_{(e,n) \in\mathbb{E}do\times\mathbb N^*}$.
\begin{lemme}
\label{controledep}
We can choose the parameters $I,L,\delta$ such that the following holds:
\begin{itemize}
\item $\forall \lambda\in\Lambda\quad\mathbb{P}_{\lambda}(\overline{\tau}^0=+\infty)\ge \frac12$.
\item there exists $\alpha>0$ such that $\forall \lambda\in\Lambda\quad \varphi(\lambda)=\mathbb{E}_{\lambda}[e^{\alpha \overline{\tau}^0}1\hspace{-1.3mm}1_{\{\overline{\tau}^0<+\infty\}}] \le 1/2$.
\item there exist constants $\alpha_0>0$ and $\overline{C}>0$ such that for every $x,y\in\mathbb{Z}d$,
$$\forall \alpha\in [0,\alpha_0]\quad\forall \lambda\in\Lambda\quad \ell(\lambda,\alpha,x,y)=\mathbb{E}_{\lambda} [1\hspace{-1.3mm}1_{\{\overline{\tau}^{x}=+\infty\}} e^{\alpha \overline{\gamma}(\theta,x,y)}]\le 2 e^{\overline{C}\alpha \|x-y\|_1}.$$
\end{itemize}
\end{lemme}
\begin{proof}
By Lemma~\ref{petitmomentexpo}, we know that there exist $q<1$ and $\alpha>0$ such that we have
$$\mathbb{E} [e^{\alpha \overline{\tau}^{\bar{0}}}1\hspace{-1.3mm}1_{\{\overline{\tau}^{\bar{0}}<+\infty\}}]\le 1/2$$
for each field in $\mathcal{C}(2M_1+1,q)$.
By Lemma~\ref{domistoc}, we can choose $I,L,\delta$ such
that $(^xW_e^n)_{(e,n) \in\mathbb{E}do\times\mathbb N^*} \in \mathcal{C}(2M_1+1,q),$
which gives the first two points.
Then, from Lemma~\ref{lineairegamma}, we get constants $A,B,C$
such that for every $x,y\in\mathbb{Z}d$, every $n\ge 0$ and each $\lambda\in\Lambda$, we have
$$\mathbb{P}_{\lambda}(+\infty>\overline{\gamma}(\theta,x,y)> C\|x-y\|_1+n)\le Ae^{-Bn}.$$
We can then find $B'>0$, independent of $x$ and $\lambda$, such that the exponential distribution with parameter $B'$ stochastically dominates $(\overline{\gamma}(\theta,x,y)-C\|x-y\|_1)1\hspace{-1.3mm}1_{\{\overline{\gamma}(\theta,x,y)<+\infty\}}$.
Now let $\alpha\le B'/2$: we have
\begin{eqnarray*}
\ell(\lambda,\alpha,x,y) & = &e^{\alpha C\|x-y\|_1}\mathbb{E}_{\lambda}[1\hspace{-1.3mm}1_{\{\overline{\tau}^x=+\infty\}}e^{\alpha (\overline{\gamma}(\theta,x,y)-C\|x-y\|_1)}]\\
& \le & e^{\alpha C\|x-y\|_1} \frac{B'}{B'-\alpha}\\
& \le & 2 e^{\alpha C\|x-y\|_1}.
\end{eqnarray*}
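The middle inequality uses the stochastic domination above together with the elementary computation: if $E$ is exponentially distributed with parameter $B'$ and $0\le\alpha<B'$, then
$$\mathbb{E}[e^{\alpha E}]=\int_0^{+\infty} B'e^{-B's}\,e^{\alpha s}\ ds=\frac{B'}{B'-\alpha},$$
and $\frac{B'}{B'-\alpha}\le 2$ as soon as $\alpha\le B'/2$.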
\end{proof}
\subsubsection{Proof of Theorem~\ref{lemme-pointssourcescontact}}
We first choose $I,L, \delta$ in order to satisfy the inequalities of Lemma~\ref{controledep}, and we let $T=C_1L$.
We use a restart argument. The idea is as follows: fix $\lambda \in \Lambda$ and $x \in \mathbb{Z}d$; if the lifetime $\tau^0$ of the contact process in random environment is infinite, then one can find by the restart procedure an instant $T_K$ such that
\begin{itemize}
\item $\xi^0_{T_K}$ contains an area $z+[-2L,2L]^d$, which allows us to initiate a block oriented percolation, as defined in the previous subsection, from some $\bar{z}_0\in \mathbb{Z}d$ such that $2\bar{z}_0L+[-L,L]^d \subset z+[-2L,2L]^d$,
\item the block oriented percolation issued from $\bar{z}_0$ survives forever.
\end{itemize}
Then, with Lemma~\ref{lineairegamma}, we give a lower bound on the proportion of time during which $\bar{x}_0=[x]$ is occupied by descendants that themselves have infinite progeny. By the definition of good events, this allows us to bound from below the measure of $\{t\ge 0; (0,0)\to (x,t)\to \infty\}$ in the contact process. Indeed, recall that the definition of the event $A(\bar{x}_0,u,x_0,\{x\})$ targets $\{x\}$ and ensures that each time the site $\bar{x}_0=[x]$ is occupied in the macroscopic oriented percolation, the contact process occupies the site $2L\bar{x}_0+\{x\}=x$ during at least $\delta$ units of time.
\noindent
\underline{Definition of the restart procedure.}
We define the following stopping times: for each non-empty subset $A\subset\mathbb{Z}d$,
\begin{eqnarray*}
U^A & = & \left\{
\begin{array}{l}
T \text{ if } \forall z \in \mathbb{Z}d \; z+[-2L,2L]^d \not\subset \xi^A_T, \\
T(1+\bar{\tau}^0 \circ T_{2\bar{x}^AL} \circ \theta_{T}) \text{ otherwise}\\
\quad \quad \quad \text{with } \bar{x}^A=\inf\{\bar{m} \in \mathbb{Z}d: \; 2\bar{m}L+[-L,L]^d\subset \xi^A_T\}
\end{array}
\right. \\
\text{and }U^\varnothing & = & +\infty.
\end{eqnarray*}
In other words, starting from a set $A$, we ask whether the contact process contains an area of the form $2\bar{m}L+[-L,L]^d$ at time $T$: if not, we stop; otherwise we consider the lifetime of the macroscopic percolation issued from the macroscopic site corresponding to that area. In particular, if $A \neq \varnothing$ and $U^A=+\infty$, then at time $T$ the contact process issued from $A$ contains a fully occupied area $2\bar{x}^AL+[-L,L]^d$, and the macroscopic oriented percolation issued from the macroscopic site $\bar{x}^A$ percolates. We then search in that infinite cluster for a not too large time at which the proportion of individuals living at $\bar{x}_0=[x]$ with infinite progeny becomes sufficiently large: we set
$$R^A=R^A(x)=
\left\{
\begin{array}{ll}
T(1+\overline{\gamma}(\theta,\bar{x}^A,\bar{x}_0)) & \text{ if }A \neq \varnothing \text{ and }U^A=+\infty; \\
0 & \text{ otherwise }.
\end{array}
\right.
$$
Thus, when $U^A=+\infty$, the variable $R^A$ represents the first time (on the scale of the contact process, not of the macroscopic oriented percolation) at which the proportion of individuals living at $\bar{x}_0=[x]$ with infinite progeny becomes sufficiently large.
\noindent
\underline{Estimates for the restart procedure.}
\begin{lemme}
There exist constants $\alpha>0$, $q>0$, $c<1$, $A',h>0$ such that for each $\lambda \in \Lambda$, each $A\subset \mathbb{Z}d$, and each $x \in \mathbb{Z}d$,
\begin{eqnarray}
\mathbb{P}_{\lambda}(U^A=+\infty) & \ge & q; \label{restart-q} \\
\mathbb{E}_{\lambda}[\exp(\alpha U^A)1\hspace{-1.3mm}1_{\{U^A<+\infty\}}] & \le & c; \label{restart-c} \\
\mathbb{E}_{\lambda}[\exp(\alpha R^A(x))1\hspace{-1.3mm}1_{\{U^A=+\infty\}}] & \le & A'e^{\alpha h(\|\bar{x}_0\|_\infty+ \|A\|_\infty)}. \label{restart-A}
\end{eqnarray}
\end{lemme}
\begin{proof}
We easily get (\ref{restart-q}) from a stochastic comparison: for each $\lambda \in \Lambda$ and each non-empty $A$,
$$
\mathbb{P}_{\lambda}(U^A=+\infty) \ge \mathbb{P}_{\lambda_{\text{min}}}([-2L,2L]^d \subset\xi^0_T)\mathbb{P}(\bar{\tau}^0=+\infty)=q>0.
$$
Now, if $\alpha>0$, $A \subset \mathbb{Z}d$ is non-empty and $\lambda \in \Lambda$, then with Lemma~\ref{controledep},
\begin{eqnarray*}
&& \mathbb{E}_{\lambda}[\exp(\alpha U^A)1\hspace{-1.3mm}1_{\{U^A<+\infty\}}] \\
& = & e^{\alpha T}\left( 1-\mathbb{P}_{\lambda}(\exists z \in \mathbb{Z}d, \; z+[-2L,2L]^d \subset \xi^A_T)\left(1- \mathbb{E}[e^{\alpha T \bar{\tau}^0} 1\hspace{-1.3mm}1_{\{\bar{\tau}^0<+\infty\}}]\right)\right) \\
& \le & e^{\alpha T}\left( 1-\frac12\mathbb{P}_{\lambda}(\exists z \in \mathbb{Z}d, \; z+[-2L,2L]^d \subset \xi^A_T) \right) \\
& \le & e^{\alpha T}\left( 1-\frac12\mathbb{P}_{\lambda_{\text{min}}}(\exists z \in \mathbb{Z}d, \; z+[-2L,2L]^d \subset \xi^A_T) \right)=c <1
\end{eqnarray*}
provided that $\alpha>0$ is small enough; this proves (\ref{restart-c}).
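The choice of $\alpha$ can be made explicit. By attractivity and translation invariance, for every non-empty $A$ the last probability is bounded from below by $p_0=\mathbb{P}_{\lambda_{\text{min}}}([-2L,2L]^d \subset\xi^0_T)>0$ (the same quantity as in the proof of (\ref{restart-q})), so it suffices to pick $\alpha$ with
$$e^{\alpha T}\left(1-\frac{p_0}{2}\right)<1, \quad\text{i.e.}\quad 0<\alpha<\frac1{T}\log\frac{1}{1-p_0/2}.$$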
By the strong Markov property and Lemma~\ref{controledep}, if we choose $\alpha>0$ small enough, then for each $\lambda \in \Lambda$,
\begin{eqnarray}
&& \mathbb{E}_{\lambda} [ \exp(\alpha R^A) 1\hspace{-1.3mm}1_{\{U^A=+\infty\}} | \mathcal{F}_{T} ] \nonumber \\
& = & 1\hspace{-1.3mm}1_{\{\exists z \in \mathbb{Z}d, \; z+[-2L,2L]^d \subset \xi^A_T\}} e^{\alpha T} \mathbb{E}[ \exp(\alpha T \overline{\gamma}(\theta,\bar{x}^A,\bar{x}_0))1\hspace{-1.3mm}1_{\{\bar{\tau}^{\bar{x}^A}=+\infty\}}]\nonumber\\
& \le & 2 e^{\alpha T} \exp(\overline{C} \alpha T \|\bar{x}^A-\bar{x}_0\|_\infty) \nonumber \\
&\le & 2 e^{\alpha T(1+\overline{C}\|\bar{x}_0\|_\infty)}\exp(\overline{C} \alpha T \|\xi^A_T\|_\infty). \label{laun}
\end{eqnarray}
We use the comparison with Richardson's model to bound the mean of the last term: let us choose the positive constants $M,\beta$ such that
$$\forall s,t\ge 0\quad \mathbb{P}_{\lambda_{\max}}(\|\xi^0_s\|_{\infty}\ge Ms+t)\le e^{-\beta t}.$$
Then, for each non-empty finite set $A$, each $t>0$, and each $\lambda \in \Lambda$,
\begin{eqnarray*}
\mathbb{P}_{\lambda}(\|\xi^A_T\|_{\infty}\ge 2\|A\|_\infty + MT+t) & \le & \mathbb{P}_{\lambda_{\max}}(\max_{a \in A} \|\xi^a_T -a \|_{\infty}\ge \|A\|_\infty+MT+t) \\
& \le & \mathrm{Card}(A)\, \mathbb{P}_{\lambda_{\max}}(\|\xi^0_T\|_{\infty}\ge MT+\|A\|_\infty+t)\\
& \le & \|A\|_\infty^d e^{-\beta (\|A\|_\infty+t)}\le \alpha' \exp(-\beta t).
\end{eqnarray*}
Then, for $\alpha$ small enough,
\begin{eqnarray}
\mathbb{E}_\lambda[\exp(\overline{C}\alpha T \|\xi^A_T\|_\infty)
]
& \le & e^{\overline{C}\alpha T(2\|A\|_\infty + MT)} \left(1+\frac{\overline{C}\alpha T\alpha'}{\beta-\overline{C}\alpha T} \right) \nonumber \\
& \le & 2e^{\overline{C}\alpha T(2\|A\|_\infty + MT)}. \label{ladeux}
\end{eqnarray}
Inequality~(\ref{restart-A}) immediately follows from (\ref{laun}) and (\ref{ladeux}).
\end{proof}
\noindent
\underline{Application of the restart lemma~\ref{restartabstrait}.}
Let
\begin{eqnarray*}
T_0=0 \text{ and } T_{k+1} & = &
\begin{cases}
+\infty & \text{if }T_k=+\infty\\
T_k+U^{\xi_0^{T_k}} \circ \theta_{T_k} & \text{otherwise;}
\end{cases} \\
K & = & \inf\{k\ge 0:\;T_{k+1}=+\infty\}.
\end{eqnarray*}
The restart lemma, applied with $T^.=G^.=U^.$ and $F^.=0$, ensures that
\begin{eqnarray*}
\mathbb{E}_\lambda[\exp(\alpha T_K)] & \le & \frac{A'}{1-c}.
\end{eqnarray*}
Applying now the restart lemma with $G^.=0$ and $F^.=R^.$, we get that
\begin{eqnarray*}
\mathbb{E}_\lambda[\exp(\alpha (R^{\xi_0^{T_K}} \circ\theta_{T_{K}}-h(\|\bar{x}_0\|_\infty+\|\xi_0^{T_K}\|_\infty )))]
& \le & \frac{A'}{1-c} .
\end{eqnarray*}
In particular, it holds that for each $s>0$ and $t>0$,
\begin{eqnarray}
\mathbb{P}_\lambda(T_K>s) & \le & \frac{A'}{1-c} \exp(-\alpha s); \label{queueTK}\\
\mathbb{P}_\lambda
\left(
\begin{array}{c}
R^{\xi_0^{T_K}} \circ\theta_{T_{K}}\ge t/2, \\
T_K \le s, \; H^0_s \subset B^0_{Ms}
\end{array}
\right) & \le &
\frac{A'}{1-c} \exp(\alpha (h (\|\bar{x}_0\|_\infty+Ms )- t/2)) \label{queueR}.
\end{eqnarray}
On the event $\{\tau^0=+\infty\}$, the contact process is non-empty at each step of the restart procedure: the restart lemma then ensures that at time $T_K+T$ one can find an area from which the directed block percolation percolates and, by construction, that for every $t \ge T_K+R^{\xi_0^{T_K}} \circ\theta_{T_{K}}$,
$$\Leb(\{s \in [T_K+T,t]: \; (0,0) \to(x,s)\to \infty \})\ge \delta \theta \left\lfloor\frac{t-(T_K+T)}{T}\right\rfloor\ge \frac{\delta \theta}{2T} t$$
as soon as $T_K\le t/2-1$.
Let $C=\frac{2h}L$. Now let $x \in \mathbb{Z}d$ and $t \ge C\|x\|_\infty$.
\begin{eqnarray*}
& & \mathbb{P}_\lambda \left(\tau^0=+\infty, \Leb(\{s \in [0,t]: \; (0,0) \to(x,s)\to \infty \})<\frac{\delta \theta}{2T} t \right) \\
& \le & \mathbb{P}_\lambda(T_K> t/2-1)+\mathbb{P}_\lambda(T_K\le t/2-1, \; t<T_K+R^{\xi_0^{T_K}} \circ\theta_{T_{K}})\\
& \le & \mathbb{P}_\lambda(T_K> t/2-1)+\mathbb{P}_\lambda(R^{\xi_0^{T_K}} \circ\theta_{T_{K}}>t/2).
\end{eqnarray*}
We control the first term with (\ref{queueTK}). For the second one, we take $s=\frac{t}{8hM}$:
\begin{eqnarray*}
&&\mathbb{P}_\lambda(R^{\xi_0^{T_K}} \circ\theta_{T_{K}}>t/2) \\
& \le & \mathbb{P}_\lambda(R^{\xi_0^{T_K}} \circ\theta_{T_{K}}> t/2, \; T_K \le s, \; H^0_s \subset B^0_{Ms}) + \mathbb{P}_\lambda(T_K > s)+\mathbb{P}_\lambda(H^0_s \not\subset B^0_{Ms}).
\end{eqnarray*}
We control the last two terms with (\ref{queueTK}) and (\ref{richard}); for the first one, we use (\ref{queueR}): since
$\|\bar{x}_0\|_\infty \le \frac1{2L} \|x\|_\infty+1$,
\begin{eqnarray*}
\mathbb{P}_\lambda \left(
\begin{array}{c}
R^{\xi_0^{T_K}} \circ\theta_{T_{K}}> t/2, \\
T_K \le s, \; H^0_s \subset B^0_{Ms}
\end{array}
\right) & \le &
\frac{A'}{1-c} \exp(\alpha (h (\|\bar{x}_0\|_\infty+Ms )- t/2)) \\
& \le & \frac{A'e^{\alpha h}}{1-c} \exp\left( \alpha \left( \left( \frac{h}{2L}\|x\|_\infty-\frac{t}4\right) -\frac{t}{8} \right) \right)\\
& \le & \frac{A'e^{\alpha h}}{1-c}\exp(-\alpha t/8),
\end{eqnarray*}
which concludes the proof.
\section{Lower large deviations}
\subsection{Duality}
We have seen that the hitting times $\sigma(nx)$ enjoy superconvolutive properties. In a deterministic setting, Hammersley~\cite{MR0370721} proved
that the superconvolutive property allows one to express the large deviation functional in terms of the moment generating function, as in Chernoff's Theorem.
We will see that this property also holds in an ergodic random environment.
The following proof is inspired by Kingman~\cite{MR0438477}.
\begin{proof}[Proof of Theorem~\ref{LDP}]
Since
$\{t(x)\le t,\; \tau^x\circ \theta_{t(x)}=+\infty\}\subset\{\sigma(x)\le t\}\subset\{t(x)\le t\},$ the Markov property ensures that
$$\overline{\mathbb{P}}_{\lambda}(t(x)\le t)\mathbb{P}_{\lambda}(\tau^x=+\infty)\le\overline{\mathbb{P}}_{\lambda}(\sigma(x)\le t)\le\overline{\mathbb{P}}_{\lambda}(t(x)\le t).$$
Thus, letting $R=-\log \mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty)$ and noting that $\mathbb{P}_{\lambda}(\tau^x=+\infty)\ge \mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty)=e^{-R}$,
we have
\begin{equation}
\label{equationunun}
-\log(\overline{\mathbb{P}}_{\lambda}(t(x)\le t))\le -\log(\overline{\mathbb{P}}_{\lambda}(\sigma(x)\le t))\le -\log(\overline{\mathbb{P}}_{\lambda}(t(x)\le t))+R.
\end{equation}
Similarly,
$$ \mathbb{E}_{\lambda}[e^{-\theta t(x)}] \ge \mathbb{E}_{\lambda}[e^{-\theta\sigma(x)}] \ge \mathbb{E}_{\lambda}[1\hspace{-1.3mm}1_{\{\tau^x\circ\theta_{t(x)}=+\infty\}} e^{-\theta t(x)}]=\mathbb{E}_{\lambda}[ e^{-\theta t(x)}]\mathbb{P}_{\lambda}(\tau^x=+\infty),$$
which leads to
\begin{equation}
\label{equationquatrequatre}
-\log \overline{\mathbb{E}}_{\lambda}[e^{-\theta t(x)}] \le -\log \overline{\mathbb{E}}_{\lambda}[e^{-\theta\sigma(x)}] \le -\log \overline{\mathbb{E}}_{\lambda}[ e^{-\theta t(x)}]+R.
\end{equation}
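For instance, dividing~\eqref{equationunun}, applied with $nx$ in place of $x$ and $t=nu$, by $n$ shows that the correction term is asymptotically negligible:
$$0\le \frac{-\log\overline{\mathbb{P}}_{\lambda}(\sigma(nx)\le nu)}{n}-\frac{-\log\overline{\mathbb{P}}_{\lambda}(t(nx)\le nu)}{n}\le \frac{R}{n}\xrightarrow[n\to+\infty]{}0,$$
so $\sigma$ and $t$ give rise to the same large deviation rates.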
Then, with a large deviation principle in mind, it does not matter whether we work with $\sigma$ or with $t$. We will work here with $\sigma$, which gives simpler
relations. We know that
\begin{equation}
\label{sousaddd}
t((n+p)x)\le \sigma(nx)+\sigma(px)\circ\tilde{\theta}_{nx},
\end{equation}
that $\sigma(nx)$ and $\sigma(px)\circ\tilde{\theta}_{nx}$
are independent under $\overline{\mathbb{P}}_{\lambda}$, and that the law of
$\sigma(px)\circ\tilde{\theta}_{nx}$ under $\overline{\mathbb{P}}_{\lambda}$ is the law of $\sigma(px)$ under $\overline{\mathbb{P}}_{nx.\lambda}$ (see Proposition~\ref{magic}). Then
\begin{equation}
\label{equationdeuxdeux}
-\log \overline{\mathbb{P}}_{\lambda}(t((n+p)x)\le nu+pv)\le -\log \overline{\mathbb{P}}_{\lambda}(\sigma(nx)\le nu)-\log \overline{\mathbb{P}}_{nx.\lambda}(\sigma(px)\le pv).
\end{equation}
Let $g_n^{x}(\lambda,u)=-\log \overline{\mathbb{P}}_{\lambda}(\sigma(nx)\le nu)+R\text{ and }G_n^{x}(u)=\int_{\Lambda} g^{x}_{n}(\lambda,u)\ d\nu(\lambda)$.
Inequalities~(\ref{equationunun}) and (\ref{equationdeuxdeux}) ensure that
\begin{equation}
\label{equationtroistrois}
g^{x}_{n+p}(\lambda,u)\le g^{x}_n(\lambda,u)+g^{x}_p( T_{x}^n\lambda,u).
\end{equation}
Since $0\le g^{x}_1(\lambda,u) \le -\log \overline{\mathbb{P}}_{\lambda_{\min}}(\sigma(x)\le u)+R<+\infty$, Kingman's subadditive ergodic theorem ensures that, for $\nu$-almost every $\lambda$,
$\frac{g^{x}_{n}(\lambda,u)}{n}$ converges to
$$\Psi_x(u)=\inf_{n\ge 1}\frac1{n} G^x_n(u)=\lim_{n\to+\infty} \frac1{n}G^x_n(u).$$
Note that (\ref{equationtroistrois}) ensures that for every $n,p\in\mathbb N$ and every $u,v>0$,
$$\Psi_x\left(\frac{nu+pv}{n+p}\right)\le \frac1{n+p}G^x_{n+p}\left(\frac{nu+pv}{n+p}\right)\le \frac{n}{n+p} \frac{G^x_n(u)}{n}+\frac{p}{n+p} \frac{G^x_p(v)}{p}.$$
Let $\alpha \in ]0,1[$. Since $\Psi_x$ is non-increasing, considering a sequence $n_k,p_k$ such that
$\frac{n_k}{n_k+p_k}$ tends to $\alpha$ from above, we get
$$\Psi_x(\alpha u+(1-\alpha)v)\le \alpha\Psi_x(u)+(1-\alpha)\Psi_x(v).$$
So $\Psi_x$ is convex.
Similarly, let $h_n^{x}(\lambda,\theta)=-\log \overline{\mathbb{E}}_{\lambda}[e^{-\theta \sigma(nx)}]+R\text{ and }H^x_n({\theta})=\int_{\Lambda} h^{x}_{n}(\lambda,\theta)\ d\nu(\lambda)$. As before, with (\ref{equationquatrequatre}) and the subadditive relation~(\ref{sousaddd}),
we have
\begin{eqnarray*}\overline{\mathbb{E}}_{\lambda}[e^{-\theta\sigma((n+p)x)}]&\ge & e^{-R}\overline{\mathbb{E}}_{\lambda}[ e^{-\theta t((n+p)x)}]\\
& \ge & e^{-R}\overline{\mathbb{E}}_{\lambda}[ e^{-\theta (\sigma(nx)+\sigma(px)\circ\tilde{\theta}_{nx})}]
= e^{-R}\overline{\mathbb{E}}_{\lambda}[ e^{-\theta \sigma(nx)}]\overline{\mathbb{E}}_{nx.\lambda}[e^{-\theta\sigma(px)}],
\end{eqnarray*}
and then the inequality
$$h^{x}_{n+p}(\lambda,{\theta})\le h^{x}_n(\lambda,{\theta})+h^{x}_p (T_{x}^n\lambda,{\theta}).$$
Since $0\le h^{x}_1(\lambda,{\theta})\le -\log \overline{\mathbb{E}}_{\lambda_{\min}}[e^{-\theta\sigma(x)}]<+\infty$, Kingman's subadditive ergodic theorem ensures that, for $\nu$-almost every $\lambda$,
$\frac{h^{x}_{n}(\lambda,\theta)}{n}$ converges to
$$K_x({\theta})=\inf_{n\ge 1}\frac1{n} H^x_n({\theta})=\lim_{n\to+\infty} \frac1{n}H^x_n({\theta}).$$
Let now $\theta\ge 0$ and $u>0$. By the Markov inequality, we observe that
\begin{eqnarray*}
\overline{\mathbb{P}}_{\lambda}(\sigma(nx)\le nu)\le e^{\theta nu}\overline{\mathbb{E}}_{\lambda}[e^{-\theta\sigma(nx)}], & \emph{i.e. } & -g_n^{x}(\cdot,u)\le\theta nu-h_n^{x}(\cdot,\theta), \nonumber \\
& \emph{i.e. } & G_n^x(u)\ge -\theta nu +H_n^x(\theta), \nonumber \\
& \emph{i.e. } & \Psi_x(u)\ge-\theta u+K_x(\theta).
\end{eqnarray*}
Thus, we easily get
\begin{eqnarray}
\label{lune}
\forall u>0\quad \Psi_x(u)& \ge& \sup_{\theta\ge 0}(K_x(\theta)-\theta u), \\
\label{lunea} \forall \theta>0 \quad K_x(\theta) & \le & \inf_{u>0} (\Psi_x(u)+\theta u).
\end{eqnarray}
It remains to prove both reversed inequalities.
Let us first prove
\begin{equation}
\label{lautre}
\forall \theta>0 \quad K_x(\theta)\ge \inf_{u>0} \{\Psi_x(u)+\theta u\}.
\end{equation}
Let $\theta>0$. Define $M=\miniop{}{\inf}{u>0} \{\Psi_x(u)+\theta u\}$ and note that for each $u>0$ and each integer $n$
$$G_n^x(u)+n\theta u\ge n\Psi_x(u)+n\theta u\ge nM.$$
Fix $\varepsilon>0$.
Define $E_{n,\varepsilon}=\{\lambda: g_n^{x}(\lambda,u)\ge G^x_n(u)-n\varepsilon\}$.
We have
\begin{eqnarray*}
H^x_n(\theta)
& \ge & \int_{E_{n,\varepsilon}} h^x_n(\lambda,\theta)\ d\nu(\lambda)
= \int_{E_{n,\varepsilon}} \left(R-\log \overline{\mathbb{E}}_{\lambda}[e^{-\theta \sigma(nx)}]\right)\ d\nu(\lambda)\\
& = & \int_{E_{n,\varepsilon}} -\log \left[\int_0^{+\infty}n\theta e^{-\theta nu}e^{-R}\overline{\mathbb{P}}_{\lambda}(\sigma(nx)<nu)\ du\right]\ d\nu(\lambda).
\end{eqnarray*}
For every $\lambda\in E_{n,\varepsilon}$ and $b>0$, one has
\begin{eqnarray*}\int_0^{+\infty} n\theta e^{-\theta nu}e^{-R}\overline{\mathbb{P}}_{\lambda}(\sigma(nx)<nu)\ du
& \le & e^{-\theta n b}+ \int_0^{b} n\theta e^{-\theta nu}e^{-R}\overline{\mathbb{P}}_{\lambda}(\sigma(nx)<nu)\ du\\
& \le & e^{-\theta n b}+\int_0^{b} n\theta e^{-\theta nu}e^{-g_n^{x}(\lambda,u)}\ du\\
& \le & e^{-\theta n b}+\int_0^{b} n\theta e^{-\theta nu}e^{-G^x_n(u)+n\varepsilon }\ du\\
& \le & e^{-\theta n b}+n\theta be^{-n(M-\varepsilon)}\\
& \le & (nM+1)e^{-n(M-\varepsilon)}\quad\text{with }b=M/\theta.
\end{eqnarray*}
Finally,
$$ \frac{H^x_n(\theta)}{n} \ge \nu(E_{n,\varepsilon})\left(-\frac{\log(1+nM)}n+M-\varepsilon\right).$$
Since $\nu(E_{n,\varepsilon})$ tends to $1$ when $n$ goes to infinity, one deduces that
$$K_x(\theta)=\lim\frac1{n}H^x_n(\theta)\ge M-\varepsilon.$$
Letting $\varepsilon$ tend to $0$, we get~(\ref{lautre}).
Let us finally prove
\begin{equation}
\label{lautreb}
\forall u>0\quad \Psi_x(u) \le \sup_{\theta\ge 0}(K_x(\theta)-\theta u).
\end{equation}
Let $u>0$. It suffices to prove that there exists $\theta_u\ge 0$ such that $\Psi_x(u)\le -\theta_u u+K_x(\theta_u)$.
Since $\Psi_x$ is convex and non-increasing, there exists a slope $-\theta_u\le 0$
such that $\Psi_x(v)\ge\Psi_x(u)-\theta_u(v-u)$ for every $v>0$. Then
$$ K_x(\theta_u)=\inf_{v>0} \{\Psi_x(v)+\theta_u v\} \ge \inf_{v>0} \{\Psi_x(u)-\theta_u(v-u)+\theta_u v\}
\ge \Psi_x(u)+\theta_u u,$$
which completes the proof of~(\ref{lautreb}) and of the reciprocity formulas.
The function $\theta\mapsto-K_x(-\theta)$ corresponds to $\Psi_x$ under the Fenchel--Legendre duality: therefore, it is convex.
In particular, the functions $\Psi_x$ and $K_x$ are continuous on $]0,+\infty[$.
By the definition of $\mathbb{P}si_x$ and $K_x$, there exists $\Lambda'\subset\Lambda$ with $\nu(\Lambda')=1$ and such that for each
$u\in\mathbb{Q}\cap (0,+\infty)$ and each $\theta\in\mathbb{Q}\cap [0,+\infty)$, we have
\begin{eqnarray*}
&& \lim_{n\to +\infty} -\frac1{n}\log \overline{\mathbb{P}}_{\lambda}(\sigma(nx)\le nu) = \Psi_x(u), \\
\text{and } && \lim_{n\to +\infty} -\frac1{n}\log \overline{\mathbb{E}}_{\lambda}[e^{-\theta\sigma(nx)}] = K_x(\theta).
\end{eqnarray*}
Since the functions $\theta\mapsto h^{x}_n(\lambda,\theta)$ and $u\mapsto g^{x}_n(\lambda,u)$ are monotone and their limits $K_x$ and $\Psi_x$ are continuous, it is easy to check that the convergences also hold for every
$\lambda\in\Lambda'$, every $u>0$ and every $\theta\ge 0$.
\end{proof}
\subsection{Lower large deviations}
We prove here Theorem~\ref{dessouscchouette}.
Recall that ${\mathbb{P}}(.)=\int_\Lambda {\mathbb{P}}_\lambda(.)\ d\nu(\lambda)$.
The main step is actually to prove the following:
\begin{theorem}
\label{GDdessous}
Assume that $\nu=\nu_0^{\otimes \mathbb{E}d}$ and that the support of $\nu_0$ is included in $[\lambda_{\text{min}}, \lambda_{\text{max}}]$.
For every $\varepsilon>0$, there exist $A,B>0$ such that
$$\forall t \ge 0 \quad \mathbb{P} (\xi^0_t \not \subset (1+\varepsilon)t A_\mu)\le A\exp(-Bt).$$
\end{theorem}
By equivalence of the norms on $\mathbb{R}d$, we can introduce constants $C^-_\mu,C^+_\mu>0$ such that
\begin{equation}
\label{normes}
\forall z \in \mathbb{R}d \quad C^-_\mu\|z\|_\infty \le \mu(z) \le C^+_\mu \|z\|_{\infty}.
\end{equation}
Let $\alpha,L,N,\varepsilon>0$. We define the following event, relatively to the space-time box $B_N=B_N(0,0)=[-N,N]^d\times[0,2N]$:
\begin{eqnarray*}
A^{\alpha,L,N,\varepsilon}= \left\{\forall (x_0,t_0) \in B_N\quad
\xi^{x_0}_{\alpha L N-t_0}\circ \theta_{t_0} \subset x_0+(1+\varepsilon)(\alpha L N-t_0)A_{\mu}\right\}\cap\\
\left\{\forall (x_0,t_0) \in B_N\quad
\miniop{}{\cup}{0\le s\le\alpha L N-t_0} \xi^{x_0}_{s}\circ \theta_{t_0} \subset ]-LN,LN[^d\right\}.
\end{eqnarray*}
The first part of the event ensures that the descendants, at time $\alpha L N$, of any point $(x_0,t_0)$ in the box $B_N$ are included in $x_0+(1+\varepsilon)(\alpha L N-t_0)A_{\mu}$: this is a sharp control, requiring the asymptotic shape theorem. The second part ensures that the descendants, at every time in $[0,\alpha L N]$, of the whole box $B_N$ are included in $]-LN,LN[^d$: this bound is rough, relying only on the (at most) linear growth of the process.
We say that the box $B_N$ is good if $A^{\alpha,L,N,\varepsilon}$ occurs. We also define, for $k \in \mathbb{Z}d$ and $n \in \mathbb N$, the event $A^{\alpha,L,N,\varepsilon}(k,n)=A^{\alpha,L,N,\varepsilon} \circ T_{2kN} \circ \theta_{2nN}$ and we say that the box $B_N(k,n)$ is good if the event $A^{\alpha,L,N,\varepsilon}(k,n)$ occurs.
The proof of the lower large deviation inequalities is close to the one by Grimmett and Kesten~\cite{grimmett-kesten} for first-passage percolation. If a point $(x,t)$ is infected too early, its infection path must have portions that are ``too fast'' when compared with the speed given by the asymptotic shape theorem. For this path, we build a sequence of boxes associated with path portions, and the existence of a ``too fast'' portion forces the corresponding box to be bad. But we are going to see that we can choose the parameters to ensure that
\begin{itemize}
\item the probability under $\mathbb{P}$ for a box to be good is as close to $1$ as we want,
\item the events ``$B_N(k,0)$ is good'' are only locally dependent.
\end{itemize}
We then conclude the proof by a comparison with independent percolation with the help of the Liggett--Schonmann--Stacey Lemma~\cite{LSS} and a control of the number of possible sequences of boxes.
\begin{lemme}
\label{catendvers12}
We have
\begin{itemize}
\item The events $(\{B_N(k,0) \text{ is good}\})_{k\in \mathbb{Z}^d}$ are identically distributed under~$\mathbb{P}$.
\item There exists $\alpha>0$ such that for every $\varepsilon \in (0,1)$, there exists an integer $L$ (that can be taken as large as we want) such that
$$\lim_{N \to +\infty} \mathbb{P}(A^{\alpha,L,N,\varepsilon})=1.$$
\item
If moreover $\nu=\nu_0^{\otimes \mathbb{E}^d}$, then the events $(\{B_N(k,0) \text{ is good}\})_{k\in \mathbb{Z}^d}$ are $(L+1)$-dependent under $\mathbb{P}$.
\end{itemize}
\end{lemme}
\begin{proof}
The first and last points are clear. Let us prove the second point.
The idea is to find a point $(0,-k)$, with $k$ large enough, such that
\begin{itemize}
\item the descendants of $(0,-k)$ are infinitely many and behave correctly (without excessive speed);
\item the coupled region of $(0,-k)$ contains a set of points that is necessarily crossed by any infection path starting from the box $B_N$.
\end{itemize}
Indeed, this will allow us to find, for all the descendants of $B_N$, a unique common ancestor, and thus to control the growth of all the descendants of $B_N$ by simply controlling the descendants of this ancestor. A control on a number of points of the order of the volume of $B_N$ is thus replaced by a control on a single point. See Figure~\ref{uneautrefigure}.
Let $\varepsilon>0$ be fixed.
We first control the positions of the descendants of the box $B_N$ at time $4N$.
Let $A,B,M$ be the constants given by Proposition~\ref{propuniforme}. We recall that $\omega_x$, for $x\in \mathbb{Z}^d$, and $\omega_e$, for $e \in \mathbb{E}^d$, are the Poisson point processes giving respectively the death times at $x$ and the potential infection times through the edge $e$. We define, for every integer $N$:
\begin{eqnarray*}
\tilde{A}_1^N & = & \{H^0_{4N}\not \subset [-(4M+1)N,(4M+1)N]^d\}, \\
A_1^N & = & \left\{ \sum_{x \in [-N,N]^d}\int 1\hspace{-1.3mm}1_{\{ \tilde{A}_1^N \circ T_x \circ \theta_t \} }\ d\left(\delta_0+\sum_{e \ni x}\omega_e\right)(t) =0 \right\}.
\end{eqnarray*}
Note in particular that
\begin{equation}
\label{inclusion1}
A_1^N \subset \left\{\forall (x_0,t_0) \in B_N\quad
\xi^{x_0}_{4N-t_0}\circ \theta_{t_0} \subset [-(4M+1)N,(4M+1)N]^d \right\} .
\end{equation}
We have with~\eqref{richard},
\begin{eqnarray*}
&& \mathbb{E} \left(\sum_{x \in [-N,N]^d}\sum_{e \ni x}\int_0^{2N} 1\hspace{-1.3mm}1_{\{ \tilde{A}_1^N \circ T_x \circ \theta_t \} } \ d(\delta_0+\omega_e)(t) \right) \\
& \le & (2N+1)^d2d(1+2N\lambda_{\max}) \mathbb{P}_{\lambda_{\max}}(\tilde{A}_1^N) \\
& \le & (2N+1)^d2d(1+2N\lambda_{\max})A\exp(-4BN),
\end{eqnarray*}
and thus, with the Markov inequality,
\begin{equation}
\label{probab1}
\lim_{N\to +\infty} \mathbb{P}(A_1^N) =1.
\end{equation}
With~(\ref{inclusion1}), we deduce that, with high probability, if $N$ is large enough, the descendants of $B_N$ at time $4N$ are included in $[-(4M+1)N,(4M+1)N]^d$.
Now, we look for points with good growth (we will look for the common ancestor of $B_N$ among these candidates):
\begin{eqnarray*}
\tilde{A}_2^t & = & \{\tau^0=+\infty, \; \forall s\ge t\quad K'_s\supset (1-\varepsilon)s A_{\mu}\text{ and } \xi^0_s\subset (1+\varepsilon/2)sA_{\mu}\}, \\
{A}_2^{t,N} & = & \miniop{N-1}{\cup}{k=0} \tilde{A}_2^{t}\circ \theta_{-k}.
\end{eqnarray*}
The first event says that the point $(0,0)$ lives forever and has good growth after time $t$ (at most linear growth, and at least linear growth for its coupled zone), while the second event says that there exists a point $(0,-k)$, with $k \in [0..N-1]$, with good growth.
Theorem 3 in Garet-Marchand~\cite{GM-contact} ensures that $\miniop{}{\lim}{t\to +\infty}\overline{\mathbb{P}}(\tilde{A}_2^t)=1$. But
\begin{eqnarray*}
\mathbb{P}(\tilde{A}_2^t) & = & \int \mathbb{P}_{\lambda}(\tilde{A}_2^t) d\nu(\lambda)
= \int \overline{\mathbb{P}}_{\lambda}(\tilde{A}_2^t)\mathbb{P}_{\lambda}(\tau^0=+\infty) d\nu(\lambda)\\
& \ge & \int \overline{\mathbb{P}}_{\lambda}(\tilde{A}_2^t)\mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty) d\nu(\lambda)
\ge \mathbb{P}_{\lambda_{\min}}(\tau^0=+\infty)\overline{\mathbb{P}}(\tilde{A}_2^t).
\end{eqnarray*}
So there exists $t_2$ such that $\mathbb{P}(\tilde{A}_2^{t_2})>0$.
As the time translation $\theta_{-1}$ is ergodic under $\mathbb{P}$, we get
\begin{equation}
\label{probab2}
\lim_{N \to +\infty} \mathbb{P}\left( {A}_2^{t_2,N}\right)=\lim_{n\to +\infty}\mathbb{P}\left( \miniop{n-1}{\cup}{k=0} \tilde{A}_2^{t_2}\circ \theta_{-k}\right)=1.
\end{equation}
In other words, with high probability, if $N$ is large enough, there exists $k \in [0..N-1]$ such that the point $(0,-k)$ has good growth.
\begin{figure}
\caption{Coupling from the past}
\label{uneautrefigure}
\end{figure}
\noindent
Take $L_1=L_1(\varepsilon)>0$ such that
\begin{equation}
\label{inclusion}\forall N\ge 1\quad (L_1+1)N(1-\varepsilon)A_{\mu}\supset [-(4M+1)N,(4M+1)N]^d.
\end{equation}
Thus, if we find an integer $k\ge \max(t_2,L_1 N)$ such that
$\tilde{A}_2^{t_2}\circ \theta_{-k}$ occurs, then the descendants of the box $B_N$ at time $4N$ are in the coupled region of $(0,-k)$.
Denote by $\overleftarrow{\tau}^y$ the life time of $(y,0)$ for the contact process when we reverse time. As the contact process is self-dual, $\overleftarrow{\tau}^y$ has the same law as $\tau^y$. Set
$$A_3^N=\left\{\forall y\in [-(4M+1)N,(4M+1)N]^d\quad \overleftarrow{\tau}^y\circ{\theta}_{4N}=+\infty \text{ or } \overleftarrow{\tau}^y\circ{\theta}_{4N}<2N\right\}.$$
The control~(\ref{grosamasfinis}) of large lifetimes ensures that
\begin{equation}
\label{probab3}
\lim_{N \to +\infty} \mathbb{P}(A_3^N)=1.
\end{equation}
Assume now that $N\ge t_2/L_1$. Thus $L_1N\ge t_2$.
Let us see that on $\displaystyle A_1^{N}\cap (A_2^{t_2,N}\circ \theta_{-L_1N})
\cap A_3^N$, we have
\begin{equation}
\label{attrape}
\forall t\ge 4N \quad \miniop{}{\cup}{(x_0,t_0)\in B_N} \xi_{t-t_0}^{x_0}\circ\theta_{t_0} \subset ((L_1+1)N+t)(1+\varepsilon/2)A_{\mu}.
\end{equation}
Indeed, let $t\ge 4N$ and $x\in \mathbb{Z}^d$ be such that $(x,t)$ is a descendant of $(x_0,t_0)\in B_N$. Let $(y,4N)$ be an ancestor of $(x,t)$ and a descendant of $(x_0,t_0)$. On the event $A_1^{N}$, the point $y$ is in $[-(4M+1)N,(4M+1)N]^d$.
But, on $A_3^N$, the definition of $y$ ensures that $\overleftarrow{\tau}^y\circ{\theta}_{4N}=+\infty$:
so $(y,4N)$ has a living ancestor at time $-k$, for each $k$ such that $L_1 N \le k \le (L_1+1) N-1$.
But, on $A_2^{t_2,N}\circ \theta_{-L_1N}$, inclusion~(\ref{inclusion}) ensures that $(y,4N)$ is in the coupled region of $(0,-k)$ for one of these $k$, and so $(y,4N)$ is a descendant of this $(0,-k)$.
Finally, $(x,t)$ is also a descendant of $(0,-k)$, and, always on $A_2^{t_2,N}\circ \theta_{-L_1N}$,
$$\mu(x)\le (k+t)(1+\varepsilon/2)\le ((L_1+1)N-1+t)(1+\varepsilon/2),$$
which proves~(\ref{attrape}).
We then choose $\alpha \in (0,1)$ and an integer $L$ such that
\begin{eqnarray*}
\alpha & < & \frac{2C_\mu^-}{3} \le \frac{C_\mu^-}{1+\varepsilon/2}, \label{alpha} \\
L & \ge & \max\left\{ \frac4{\alpha}, \; \frac{L_1+1}{ C_\mu^--\alpha(1+\varepsilon/2)}, \; 4M+1, \; \frac{2}{\alpha \varepsilon}\left((L_1+1)(1+\varepsilon/2)+C_\mu^++2\right) \right\}. \label{L}
\end{eqnarray*}
If $N \ge t_2/L_1$, as $\alpha L N \ge 4N$, we can use~(\ref{attrape}) with $t\in[4N,\alpha LN]$; thus our choices for $\alpha,L$ and~(\ref{inclusion1}) ensure that on the event $\displaystyle A_1^{N}\cap (A_2^{t_2,N}\circ \theta_{-L_1N}) \cap A_3^N$, for every $ (x_0,t_0) \in B_N$
\begin{eqnarray*}
\miniop{}{\cup}{4N\le s\le\alpha L N-t_0} \xi^{x_0}_{s}\circ \theta_{t_0} \subset ((L_1+1+\alpha L)N)(1+\varepsilon/2)A_{\mu} & \subset& [-LN,LN]^d, \\
\miniop{}{\cup}{0\le s\le 4N} \xi^{x_0}_{s}\circ \theta_{t_0} \subset [-(4M+1)N,(4M+1)N]^d& \subset& [-LN,LN]^d, \\
\xi^{x_0}_{\alpha L N-t_0}\circ \theta_{t_0} \subset (L_1+1+\alpha L )N(1+\varepsilon/2)A_{\mu}& \subset & x_0+(1+\varepsilon)(\alpha L N-t_0)A_{\mu}.
\end{eqnarray*}
Finally, if $N \ge t_2/L_1$,
$$A_1^{N}\cap (A_2^{t_2,N}\circ \theta_{-L_1N})
\cap A_3^N \subset A^{\alpha,L,N,\varepsilon},$$
and we conclude with (\ref{probab1}), (\ref{probab2}) and (\ref{probab3}).
\end{proof}
We now prove the existence of $C>0$ such that, with large probability, the point $(0,0)$ cannot give birth to more than $Ct$ generations before time $t$:
\begin{lemme}
\label{histoirelineaire}
There exist $A,B,C>0$ such that for every $\lambda\in [0,\lambda_{\max}]^{\mathbb{E}^d}$, for every $t,\ell\ge 0$:
$$
\mathbb{P}_\lambda \left(
\begin{array}{c}
\exists (x,s)\in \mathbb{Z}^d\times [0,t] \text{ and an infection path from }(0,0) \\
\text{ to } (x,s)\text{ with more than }Ct+\ell\text{ horizontal edges }\end{array}
\right) \le A \exp(-B\ell).
$$
\end{lemme}
\begin{proof}
Let $\alpha>0$ be fixed. For every path $\gamma$ in $\mathbb{Z}^d$ starting from $0$, possibly self-intersecting, we set
$$X_{\gamma}=1\hspace{-1.3mm}1_{\{\gamma\text{ is the projection on $\mathbb{Z}^d$ of an infection path starting from }(0,0)\}}e^{-\alpha t(\gamma)},$$
where $t(\gamma)$ is the time when the extremity is infected after visiting successively the previous points.
More formally, if the sequence of points in $\gamma$ is $(0=x_0,\dots,x_n)$ and if we set $T_0=0$, and for $k\in\{0,\dots,n-1\}$, $$T_{k+1}=\inf\left\{t>T_k; \omega_{\{x_k,x_{k+1}\}}([T_k,t])=1\text{ and }\omega_{x_k}([T_k,t])=0\right\},$$ we have $t(\gamma)=T_n$.
The random variable $t(\gamma)$ is a stopping time (it is infinite if $\gamma$ is not the projection of an infection path).
Let $\gamma$ be a path in $\mathbb{Z}^d$ starting from $0$ and let $f$ be an edge at the extremity of~$\gamma$. If we denote by $\gamma.f$ the concatenation of $\gamma$ with $f$, the strong Markov property at time $t(\gamma)$ ensures that
$$\mathbb{E}_{\lambda} [X_{\gamma.f}|\mathcal{F}_{t(\gamma)}]\le X_{\gamma} \frac{\lambda_{\max}}{\alpha+\lambda_{\max}}, \text{ and so } \mathbb{E}_{\lambda} [X_{\gamma}]\le \left( \frac{\lambda_{\max}}{\alpha+\lambda_{\max}} \right)^{|\gamma|}.$$
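The conditional bound can be seen by a competing-exponentials computation (recall that deaths occur at rate $1$ and that, conditionally on $\mathcal{F}_{t(\gamma)}$, the edge $f$ opens at rate $\lambda_f\le\lambda_{\max}$):
$$\mathbb{E}_{\lambda}\left[1\hspace{-1.3mm}1_{\{f\text{ opens before the death of its endpoint}\}}\,e^{-\alpha(t(\gamma.f)-t(\gamma))}\,\Big|\,\mathcal{F}_{t(\gamma)}\right]=\int_0^{+\infty}\lambda_f\,e^{-(\lambda_f+1)u}e^{-\alpha u}\,du=\frac{\lambda_f}{\lambda_f+1+\alpha}\le \frac{\lambda_{\max}}{\alpha+\lambda_{\max}},$$
the last inequality because $x\mapsto \frac{x}{x+1+\alpha}$ is non-decreasing.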
Now,
\begin{eqnarray*}
&& \mathbb{P}_\lambda \left(
\begin{array}{c}
\exists (x,s)\in \mathbb{Z}^d\times [0,t] \text{ and an infection path from }(0,0) \\
\text{ to } (x,s)\text{ with more than }Ct+\ell\text{ horizontal edges}\end{array}
\right) \\
& = & \mathbb{P}_\lambda \left(\miniop{}{\cup}{\gamma:|\gamma|\ge Ct+\ell} \{X_{\gamma}\ge e^{-\alpha t}\} \right) \\
& \le & e^{\alpha t} \miniop{}{\sum}{\gamma:|\gamma|\ge Ct+\ell} \left(\frac{\lambda_{\max}}{\alpha+\lambda_{\max}}\right)^{|\gamma|} \le e^{\alpha t} \miniop{}{\sum}{n\ge Ct+\ell} \left(\frac{2d\lambda_{\max}}{\alpha+\lambda_{\max}}\right)^{n} .
\end{eqnarray*}
To conclude, we take $\alpha=2d\lambda_{\max}$, and then $C$ such that $(\frac{2d}{2d+1})^{C}=e^{-\alpha }$.
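With these choices, the final bound becomes explicit: writing $q:=\frac{2d\lambda_{\max}}{\alpha+\lambda_{\max}}=\frac{2d}{2d+1}$, we have $e^{\alpha}q^{C}=1$ and thus
$$e^{\alpha t}\sum_{n\ge Ct+\ell}q^{n}\le e^{\alpha t}\,\frac{q^{Ct+\ell}}{1-q}=\left(e^{\alpha}q^{C}\right)^{t}(2d+1)\,q^{\ell}=(2d+1)\,q^{\ell},$$
which is of the announced form $A\exp(-B\ell)$ with $A=2d+1$ and $B=\log\frac{2d+1}{2d}$.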
\end{proof}
\begin{proof}[Proof of Theorem~\ref{GDdessous}]
Let $\varepsilon>0$ and $t>0$ be fixed.
Obviously
\begin{eqnarray}
\label{majgross}
\mathbb{P}(\xi^0_t \not \subset (1+\varepsilon)tA_{\mu})&\le &\mathbb{P}(\xi^0_t \not \subset (1+\varepsilon)tA_{\mu},\xi^0_t \subset [-Mt,Mt]^d)\\&&+\mathbb{P}(\xi^0_t \not \subset [-Mt,Mt]^d)\nonumber.
\end{eqnarray}
The second term is controlled by equation~\eqref{richard}.
Assume that $\xi^0_t \not \subset (1+\varepsilon)tA_{\mu}$: let $x \in \xi^0_t$ be such that $\mu(x) \ge (1+\varepsilon)t$, $\|x\|_{\infty}\le Mt$ and let $\gamma$ be an infection path from $(0,0)$ to $(x,t)$.
With Lemma~\ref{histoirelineaire}, we choose $C>1,A_2,B_2>0$ such that for every $t \ge 0$,
\begin{equation}
\label{pastropdaretes}
\mathbb{P} \left( \begin{array}{c}
\text{there exists an infection path from } (0,0) \text{ to } \mathbb{Z}^d\times\{t\} \\
\text{with more than } Ct \text{ horizontal edges}
\end{array}
\right) \le A_2 \exp(-B_2t).
\end{equation}
With the last estimate, we can assume that $\gamma$ has fewer than $Ct$ horizontal edges.
We take $0<\alpha<1$ and $L=L(\alpha,\varepsilon)$ large enough to apply Lemma~\ref{catendvers12} and such that
\begin{equation}
\label{choiceL}
\frac{4C_\mu^+C}{\alpha L-1}\le \frac{\varepsilon}3, \quad \alpha L\ge 2\quad \text{ and } L\ge 3.
\end{equation}
We fix an integer $N$ and we cut the space-time $\mathbb{Z}d\times \mathbb{R}_+$ into space-time boxes:
$$\forall k \in \mathbb{Z}^d \quad \forall n \in \mathbb N \quad B_N(k,n)=(2Nk+[-N,N]^d)\times(2Nn+[0,2N]).$$
We associate to the path $\gamma$ a finite sequence $\Gamma=(k_i,n_i,a_i,t_i)_{0 \le i \le {\ell}}$, where the $(k_i,n_i)\in\mathbb{Z}^d\times \mathbb N$ are the coordinates of space-time boxes and the $(a_i,t_i)$ are points in $\mathbb{Z}^d \times \mathbb{R}_+$, in the following manner:
\begin{itemize}
\item $k_0=0$, $n_0=0$, $a_0=0$ and $t_0=0$: $B_N(k_0,n_0)$ is the box containing the starting point $(a_0,t_0)=(0,0)$ of the path $\gamma$.
\item Assume we have chosen $(k_i,n_i,a_i,t_i)$, where $(a_i,t_i)$ is a point in $\gamma$ and $(k_i,n_i)$ are the coordinates of the space-time box containing $(a_i,t_i)$. To the box $B_N(k_i,n_i)$, we add the larger box $(2Nk_i+[-LN,LN]^d)\times (2Nn_i+[0,\alpha LN])$; we take for $(a_{i+1},t_{i+1})$ the first point -- if it exists -- along $\gamma$ after $(a_i,t_i)$ to be outside this large box, and we take for $(k_{i+1},n_{i+1})$ the coordinates of the space-time box containing $(a_{i+1},t_{i+1})$. Otherwise, we stop the process.
\end{itemize}
The idea is to extract from the path a sequence of long portions, \emph{i.e.}\ the portions of $\gamma$ between $(a_i,t_i)$ and $(a_{i+1},t_{i+1})$.
We have the following estimates:
\begin{eqnarray}
&&\forall i\in[0..{\ell}-1] \quad \|a_{i+1}-a_i\|_\infty\le (L+1)N \text{ and } \|a_{\ell}-x\|_\infty \le (L+1)N, \label{absolument} \\
&&\forall i\in[0..{\ell}-1] \quad 0 \le t_{i+1}-t_i\le \alpha LN \text{ and } 0 \le t-t_{\ell} \le \alpha LN, \label{cequetuveux}\\
&& 1\le {\ell} \le \frac{Ct}{(L-1)N}+\frac{t}{(\alpha L-1)N}+2 \le\frac{2Ct}{(\alpha L-1)N}+2 . \label{majoratl}
\end{eqnarray}
The first two estimates just say that -- spatially for~\eqref{absolument} and in time for~\eqref{cequetuveux} -- the point $(a_{i+1},t_{i+1})$ remains in the large box centered around $B_N(k_{i},n_{i})$, which contains $(a_i,t_i)$.
Now consider the third estimate.
We note that a path can get out of a large box either with its time coordinate -- and the number of such exits is smaller than $\frac{t}{(\alpha L-1) N}+1$ -- or with its space coordinate -- and the number of such exits is smaller than $\frac{Ct}{(L-1)N}+1$. The last inequality comes from $C>1$ and $\alpha<1$.
To ensure that the space coordinates of the boxes associated to the path are all distinct, we extract a subsequence $\overline{\Gamma}=(k_{\varphi(i)})_{0 \le i \le \overline{{\ell}}}$ with the loop-removal process described by Grimmett--Kesten~\cite{grimmett-kesten}:
\begin{itemize}
\item $\varphi(0)=\max\{j\ge 0: \; \forall i \in [0..j] \; k_i=0\}$;
\item Assume we have chosen $\varphi(i)$; then we take, if possible,
\begin{eqnarray*}
j_0(i) & = & \inf\{j >\varphi(i): k_j \neq k_{\varphi(i)}\}, \\
\varphi(i+1) & = & \max\{j\ge j_0(i): \; k_j=k_{j_0(i)}\}.
\end{eqnarray*}
and we stop the extraction process otherwise.
\end{itemize}
Then, as in~\cite{grimmett-kesten}
\begin{eqnarray*}
&&\|a_{\varphi(\overline{{\ell}})}-x\|_\infty \le (L+1)N, \\
&& 0 \le t-t_{\varphi(\overline{{\ell}})} \le \alpha LN, \\
&& \forall i \in [0..\overline{{\ell}}-1] \quad \|a_{\varphi(i)+1}-a_{\varphi(i+1)}\|_{\infty} \le 2N,\\
&& \forall i \in [0..\overline{{\ell}}-1] \quad |t_{\varphi(i)+1}-t_{\varphi(i+1)}| \le 2N.
\end{eqnarray*}
Moreover, the upper bound~(\ref{majoratl}) for ${\ell}$ ensures that
\begin{equation}
\label{majoratlbar}
1\le \overline{{\ell}}\le {\ell} \le \frac{2Ct}{(\alpha L-1)N}+2.
\end{equation}
On the other hand, as $\displaystyle \mu(x)-\mu(a_{\varphi(\overline{{\ell}})}-x) \le \mu(a_{\varphi(\overline{\ell})})$, we have with~(\ref{majoratlbar}):
\begin{eqnarray*}
(1+\varepsilon)t -C^+_{\mu}(L+1)N
& \le &\mu \left( \sum_{i=0}^{\overline{{\ell}}-1}a_{\varphi(i+1)}- a_{\varphi(i)}\right) \\
& \le &\sum_{i=0}^{\overline{{\ell}}-1} \mu(a_{\varphi(i+1)}-a_{\varphi(i)+1}) +\sum_{i=0}^{\overline{{\ell}}-1} \mu(a_{\varphi(i)+1}-a_{\varphi(i)}) \\
& \le &2N C_\mu^+\overline{{\ell}} +\sum_{i=0}^{\overline{{\ell}}-1} \mu(a_{\varphi(i)+1}-a_{\varphi(i)})\\
& \le &2NC_\mu^+\left(\frac{2Ct}{(\alpha L-1)N}+2\right) +\sum_{i=0}^{\overline{{\ell}}-1} \mu(a_{\varphi(i)+1}-a_{\varphi(i)}).
\end{eqnarray*}
This ensures, with the choice~(\ref{choiceL}) we made for $\alpha,L$, that
\begin{equation}
\label{senex}
\sum_{i=0}^{\overline{{\ell}}-1} \mu(a_{\varphi(i)+1}-a_{\varphi(i)}) \ge (1+2\varepsilon/3)t -2C^+_{\mu}(L+1)N.
\end{equation}
In other words, even after the extraction process, the sum of the lengths of the crossings remains of order $(1+2\varepsilon/3)t$.
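In detail, \eqref{senex} follows by rearranging the previous chain of inequalities and using the choice~\eqref{choiceL} together with $L\ge 3$:
$$\sum_{i=0}^{\overline{{\ell}}-1} \mu(a_{\varphi(i)+1}-a_{\varphi(i)}) \ge (1+\varepsilon)t-\frac{4C^+_{\mu}C}{\alpha L-1}\,t-C^+_{\mu}(L+1)N-4C^+_{\mu}N \ge \left(1+\frac{2\varepsilon}{3}\right)t-2C^+_{\mu}(L+1)N,$$
since $\frac{4C_\mu^+C}{\alpha L-1}\le\frac{\varepsilon}{3}$ and $4N\le (L+1)N$.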
Let $k \in \mathbb{Z}d$ and $n \in \mathbb N$. We say now that $B_N(k,n)$ is good if
$$\text{the event }A^{\alpha,L,N,\varepsilon/3} \circ T_{2kN} \circ \theta_{2nN} \text{ occurs},$$
and bad otherwise.
If $B_N(k_{\varphi(i)},n_{\varphi(i)})$ is good, then the path exits the corresponding large box by the time coordinate, and thus $\mu(a_{\varphi(i)+1}-a_{\varphi(i)})\le (1+\varepsilon/3)(t_{\varphi(i)+1}-t_{\varphi(i)})$; this ensures that
\begin{eqnarray*}
\mu \left(\sum_{i: \; B_N(k_{\varphi(i)},n_{\varphi(i)}) \text{ good}}(a_{{\varphi(i)}+1}-a_{\varphi(i)}) \right)
& \le & \sum_{i: \; B_N(k_{\varphi(i)},n_{\varphi(i)})\text{ good}} \mu(a_{{\varphi(i)}+1}-a_{\varphi(i)}) \\
& \le & (1+\frac{\varepsilon}3)\sum_{i:\; B_N(k_{\varphi(i)},n_{\varphi(i)}) \text{ good}}(t_{{\varphi(i)}+1}-t_{\varphi(i)}) \\
& \le & (1+\frac{\varepsilon}3) t.
\end{eqnarray*}
With~\eqref{senex}, it implies that
$$ \sum_{i: \; B_N(k_{\varphi(i)},n_{\varphi(i)}) \text{ bad}}\mu(a_{{\varphi(i)}+1}-a_{\varphi(i)}) \ge \frac{\varepsilon}3t-2C^+_{\mu}(L+1)N,$$ and then, with~\eqref{absolument}
$$\mathrm{Card}\{i: \; B_N(k_{\varphi(i)},n_{\varphi(i)}) \text{ bad}\} \ge \frac{\varepsilon t}{3C^+_\mu (L+1)N}-2.$$
In other words, if $t>0$, if $x$ is such that $\mu(x) \ge (1+\varepsilon)t$, and if there exists an infection path $\gamma$ from $(0,0)$ to $(x,t)$ with fewer than $Ct$ horizontal edges, then the associated sequence $\overline{\Gamma}$ has a number of bad boxes proportional to $t$.
Note that Lemma~\ref{catendvers12} says that for any deterministic family $n=(n_k)_{k \in \mathbb{Z}^d} \in \mathbb N^{\mathbb{Z}^d}$,
the field $(\eta^n_k)_{k \in \mathbb{Z}^d}$, defined by
$\eta^n_{k}=1\hspace{-1.3mm}1_{\{ B_N(k,n_{k}) \text{ good}\}}$, is locally dependent and that
$$\lim_{N \to +\infty} \mathbb{P}(B_N(0,0) \text{ good})=1.$$
By the extraction process, the spatial coordinates of the boxes in $\overline{\Gamma}$ are all distinct. With the comparison theorem by Liggett--Schonmann--Stacey~\cite{LSS}, we can, for any $p_1<1$, take $N$ large enough to ensure that for any family $n=(n_k)_{k \in \mathbb{Z}^d} \in \mathbb N^{\mathbb{Z}^d}$, the law of the field $(\eta^n_k)_{k \in \mathbb{Z}^d}$ under $\mathbb{P}$ stochastically dominates a product on $\mathbb{Z}^d$ of Bernoulli laws with parameter $p_1$. Thus, if $x$ is such that $\mu(x) \ge (1+\varepsilon)t$, then
\begin{eqnarray*}
& &\mathbb{P} \left(\begin{array}{c}
\text{there exists an infection path $\gamma$ from $(0,0)$ to $(x,t)$}\\
\text{with fewer than $Ct$ horizontal edges and such that $\overline{\Gamma}=\overline{\Gamma}(\gamma)$ has}\\
\text{at least $\frac{\varepsilon t}{3C^+_\mu (L+1)N}-2$ bad boxes}
\end{array}
\right) \\
& \le & \sum_{{\ell}=1}^{\frac{2Ct}{(\alpha L-1)N}+2} \sum_{\mathrm{Card}\,\overline{\Gamma}={\ell}}2^{\ell}(1-p_1)^{\frac{\varepsilon t}{3C^+_\mu (L+1)N}-1}
\\ &=& (1-p_1)^{\frac{\varepsilon t}{3C^+_\mu (L+1)N}-1}\sum_{{\ell}=1}^{\frac{2Ct}{(\alpha L-1)N}+2}2^{\ell}\,\mathrm{Card}\{\overline{\Gamma}: \mathrm{Card}\,\overline{\Gamma}=\ell\}.
\end{eqnarray*}
A classical counting argument gives the existence of a constant $K=K(d,\alpha,L)$ independent of $N$ such that
$$\forall \ell\ge 1\quad \mathrm{Card}\{\overline{\Gamma}: \mathrm{Card}\,\overline{\Gamma}=\ell\}\le K^{\ell}.$$
We get then an upper bound for our probability of the form
$$A \frac{t}N \left( (1-p_1)^{\frac{\varepsilon}{3C_\mu^+(L+1)}} (2K)^{\frac{2C}{\alpha L-1}}\right)^{t/N},$$ which leads to a bound of the form $A_3\exp(-B_3 t)$
as soon as $p_1$ is close enough to~$1$.
Summing over all $x\in [-Mt,Mt]^d$, we have again an exponential bound.
With this last upper bound,~(\ref{majgross}) and (\ref{pastropdaretes}), we end the proof of Theorem~\ref{GDdessous}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{dessouscchouette}]
We first prove there exist $A,B>0$ such that
\begin{equation}
\label{GDTgrand}
\forall T > 0\quad \mathbb{P} (\exists t\ge T\quad \xi^0_t \not \subset (1+\varepsilon)tA_\mu)\le A\exp(-BT).
\end{equation}
Indeed,
\begin{eqnarray*}
& & \mathbb{P} (\exists t\ge T\quad \xi^0_t \not \subset (1+\varepsilon)tA_\mu)\\ & \le & \mathbb{P}(\exists n\in\mathbb N \quad \xi^0_{T+n} \not \subset (1+\varepsilon/2)(T+n)A_\mu)\\
& & + \mathbb{P}(\exists n\in\mathbb N \quad \exists t\in [0,1]\quad \xi^0_{T+n} \subset (1+\varepsilon/2)(T+n)A_\mu,\ \xi^0_{T+n+t} \not \subset (1+\varepsilon)(T+n)A_\mu) \\
& \le & \sum_{n\ge 0} \mathbb{P}(\xi^0_{T+n} \not \subset (1+\varepsilon/2)(T+n)A_\mu)\\
& & +\sum_{n\ge 0}\mathbb{P}(\exists t\in [0,1]\quad \xi^0_{T+n} \subset (1+\varepsilon/2)(T+n)A_\mu,\ \xi^0_{T+n+t} \not \subset (1+\varepsilon)(T+n)A_\mu).
\end{eqnarray*}
The first sum can be controlled with Theorem~\ref{GDdessous}.
For the second sum, the Markov property gives for any $\lambda\in\Lambda$,
\begin{eqnarray*}
& &\mathbb{P}_{\lambda}(\exists t\in [0,1]\quad \xi^0_{T+n} \subset (1+\varepsilon/2)(T+n)A_\mu, \; \xi^0_{T+n+t} \not \subset (1+\varepsilon)(T+n)A_\mu)\\
& \le &\sum_{x\in (1+\varepsilon/2)(T+n)A_\mu}\mathbb{P}_{x.\lambda}(\exists t\in [0,1]\quad \xi^0_{t} \not \subset (\varepsilon/2)(T+n)A_\mu)\\
& \le &\mathrm{Card}\left((1+\varepsilon/2)(T+n)A_\mu\right)\,\mathbb{P}(H_1^0 \not \subset (\varepsilon/2)(T+n)A_\mu)\le A \exp(-B (T+n)),
\end{eqnarray*}
where the last upper bound comes from a comparison with the Richardson model.
We conclude the proof of~(\ref{GDTgrand}) by integrating with respect to $\lambda$.
Let us prove now the existence of $A,B>0$ such that
\begin{equation}
\label{GDHpetit}
\forall r>0\quad \mathbb{P} (H^0_r \not\subset(1+\varepsilon)r A_{\mu})\le A\exp(-Br).
\end{equation}
With~\eqref{richard}, we can find $A_1,B_1>0$ and $c<1$ such that
$\mathbb{P} (H^0_{cr} \not\subset r A_{\mu})\le A_1\exp(-B_1r).$
Now,
\begin{eqnarray*}
\mathbb{P} (H^0_r \not\subset(1+\varepsilon)r A_\mu)
& \le & \mathbb{P} (H^0_{cr} \not\subset r A_\mu)+\mathbb{P}(\exists t \in( cr,r) \quad \xi^0_t \not\subset (1+\varepsilon)r A_\mu) \\
& \le & A_1\exp(-B_1r) +\mathbb{P}(\exists t \ge cr \quad \xi^0_t \not\subset (1+\varepsilon)t A_\mu),
\end{eqnarray*}
and we conclude the proof of~\eqref{GDHpetit} with~(\ref{GDTgrand}).
To obtain~\eqref{decadix}, we just need to note that $t\mapsto H_t$ is non-decreasing.
Finally, for $x \in \mathbb{Z}^d \setminus\{0\}$,
$$\mathbb{P}(t(x) \le (1-\varepsilon)\mu(x)) \le \mathbb{P}(H^0_{(1-\varepsilon)\mu(x)} \not\subset \mu(x)A_\mu).$$
Applying~(\ref{GDHpetit}), we end the proof of~\eqref{defontenay}, and thus of Theorem~\ref{dessouscchouette}.
\end{proof}
\section{About the order of the deviations}
\label{bonnevitesse}
By Theorems~\ref{theoGDUQ} and~\ref{dessouscchouette}, we have for $\nu$-almost every $\lambda$ and each $\varepsilon>0$:
$$\miniop{}{\overline{\lim}}{x\to +\infty}\frac1{\|x\|}\log \overline{\mathbb{P}}_{\lambda}\left(\frac{t(x)}{\mu(x)}\not\in [1-\varepsilon,1+\varepsilon]\right)<0.$$
To see that the exponential decrease in $\|x\|$ is optimal, we need to see that $\miniop{}{\underline{\lim}}{x\to +\infty}\frac1{\|x\|}\log \overline{\mathbb{P}}_{\lambda}\left(\frac{t(x)}{\mu(x)}\not\in [1-\varepsilon,1+\varepsilon]\right)>-\infty.$
In fact, we will prove here that for every $(s,t)$ with $0<s<t$, there exists a constant $\gamma>0$
such that for each $\lambda\in\Lambda$ and each $x\in\mathbb{Z}^d$,
\begin{eqnarray*}
\mathbb{P}_\lambda(t(x)\in [s\|x\|_1,t\|x\|_1]) & \ge & \exp(-\gamma\|x\|_1).
\end{eqnarray*}
\begin{proof}
Let $s,t$ with $0 <s <t$.
For each $u \in \mathbb{Z}^d$ such that $\|u\|_1=1$, we define $T_u=\inf\{r \ge 0: \; \xi^0_r=\{u\},\quad \forall v\in [0,r)\quad \xi^0_v=\{0\}\}$. We are going to prove that
$$\exists \gamma>0 \quad \forall \lambda \in \Lambda \quad \forall u \in \mathbb{Z}^d, \; \|u\|_1=1 \quad \mathbb{P}_\lambda(T_u \in [s,t]) \ge e^{-\gamma}.$$
In order to ensure that $T_u\in [s,t]$, it is sufficient that:
\begin{itemize}
\item The lifetime of the particle at $(0,0)$ is strictly between $(s+t)/2$ and $t$, which happens with probability $e^{-(s+t)/2}-e^{-t}$ under $\mathbb{P}_\lambda$;
\item The first opening of the bond between $0$ and $u$ happens strictly between $s$ and $(s+t)/2$, which happens with probability
$$\exp(-\lambda_{\{0,u\}} s)-\exp(-\lambda_{\{0,u\}} (s+t)/2) \ge \exp(-\lambda_{\max} s)(1-\exp(-\lambda_{\min}(t-s)/2))$$ under $\mathbb{P}_\lambda$;
\item There is no opening between time $0$ and time $t$ on the set $J$ consisting of the $4d-2$ bonds that are neighbours of $0$ or $u$ and differ from $\{0,u\}$, which happens under $\mathbb{P}_\lambda$ with probability
$$\prod_{j\in J} \exp(-\lambda_j t) \ge \exp(-(4d-2)\lambda_{\max} t);$$
\item There is no death at site $u$ between $0$ and $t$, which happens under $\mathbb{P}_\lambda$ with probability $e^{-t}$.
\end{itemize}
Then, using the independence of the Poisson processes, we get
\begin{eqnarray*}& &\mathbb{P}_{\lambda}(T_u \in [s,t])\\
& \ge & \left(e^{-(s+t)/2}-e^{-t}\right)e^{-t} e^{-(4d-2)\lambda_{\max} t} e^{-\lambda_{\max} s}(1-e^{-\lambda_{\min}(t-s)/2})=:e^{-\gamma}.
\end{eqnarray*}
Moreover, $T_u$ is obviously a stopping time.
Then, applying the strong Markov property $\|x\|_1$ times, we get
$$\mathbb{P}_{\lambda}(t(x)\in [s\|x\|_1,t\|x\|_1])\ge \exp(-\gamma\|x\|_1).$$
This gives the correct order for both the upper and the lower large deviations.
\end{proof}
Note that the order of the large deviations is the same for upper and lower deviations, as happens for the chemical distance in Bernoulli percolation (see Garet--Marchand~\cite{GM-large}). By contrast, it is known that these orders may differ for first-passage percolation (see Kesten~\cite{kesten} and Chow--Zhang~\cite{chow-zhang}).
\end{document}
\begin{document}
\thispagestyle{empty}
\title{Analytic evaluation of Hecke eigenvalues for Siegel modular forms of degree two}
\begin{abstract}The standard approach to evaluate Hecke eigenvalues of a Siegel modular eigenform $F$ is to determine a large number of Fourier coefficients of $F$ and then compute the Hecke action on those coefficients.
We present a new method based on the numerical evaluation of $F$ at explicit points in the Siegel upper half-space, and of its image under the Hecke operators.
The approach is more efficient than the standard method and has the potential for further optimization by identifying good candidates for the points of evaluation, or finding ways of lowering the truncation bound.
A limitation of the algorithm is that it returns floating point numbers for the eigenvalues; however, the working precision can be adjusted at will to yield as close an approximation as needed.
\end{abstract}
\section{Introduction}\label{sect:intro}
The explicit computation of classical modular forms and their associated
L-functions has been very useful to formulate and verify conjectures, to
discover new phenomena and to prove theorems. There are a variety of
ways to effectively compute the Fourier coefficients of classical modular forms
and, therefore, their L-functions. Analogous work for Siegel modular
forms of degree two is less well-developed for, perhaps, two main reasons:
\begin{enumerate}
\item the methods for computing Siegel modular forms are \textit{ad
hoc} and less efficient than those for computing classical modular
forms;
\item computing Siegel modular forms does not immediately yield the
associated L-functions, since the Hecke eigenvalues of Siegel modular
forms, unlike in the classical case, are not equal to the Fourier
coefficients, and since the Euler factors of the L-function require
knowing both the $p$th and the $p^2$th eigenvalues.
\end{enumerate}
To give an idea of the difficulty of computing the L-function of a
Siegel modular form, we consider an example. Let $\Upsilon_{20}$ be
the unique normalized Siegel modular form of degree 2 and weight 20 that is a
Hecke eigenform and not a Saito-Kurokawa lift. Skoruppa
\cite{skoruppa} gave an explicit formula for $\Upsilon_{20}$ in terms of the generators of the ring of Siegel modular forms of degree 2 and the
largest calculation of $\Upsilon_{20}$ has been carried out by Kohnen and Kuss
\cite{kohnen-kuss} (we point out that Kurokawa \cite{kurokawa1, kurokawa2} was the first to compute $\Upsilon_{20}$ but his computations were not very extensive). The computation that Kohnen and Kuss carried out
was enough to find the $p$th eigenvalue for $p\leq 997$ and the
$p^2$th eigenvalue for $p\leq 79$. They compute Fourier coefficients
indexed by quadratic forms with discriminant up to 3000000 and then use
them to determine the Hecke eigenvalues. An examination of the formulas on page
387 of \cite{skoruppa} shows that finding the eigenvalue $\lambda(n)$
of $T_n$, for $n = p^2$, requires the Fourier coefficients indexed by
quadratic forms of discriminant up to $n^2 = p^4$. This relation makes
it infeasible to compute many more Fourier coefficients, and thus
Hecke eigenvalues, in this way. In this paper we propose a different approach.
Our method does not compute \emph{any} of the Fourier coefficients of the Siegel modular form being studied. Instead, we take suitable truncations of the Fourier expansions of the \emph{Igusa generators} (whose coefficients are inexpensive to compute) and use these truncations to evaluate our modular form numerically at points in the upper half space. This approach is based on work of Br\"oker and
Lauter \cite{broker} in which they use such techniques to evaluate Igusa functions. Using their
method we find the eigenvalue $\lambda(p)$ of an eigenform $F$ by doing the
following:
\begin{itemize}
\item evaluate $F$ at some point $Z$ in the Siegel upper
half-space;
\item evaluate $F|T_p$ at the same point $Z$;
\item take the ratio $(F|T_p)(Z)/F(Z)$.
\end{itemize}
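To fix ideas, here is a toy version of these three steps in the classical (degree one) setting, where the same ratio recovers a Hecke eigenvalue. The sketch below (the evaluation point $z_0=i$ and the truncation order are our own, illustrative choices) evaluates Ramanujan's weight $12$ cusp form $\Delta$ through a truncation of its eta-product, applies the standard level one coset decomposition of $T_p$, and takes the ratio, which approximates $\tau(2)=-24$:

```python
import cmath

def delta(z, N=200):
    # Ramanujan's Delta(z) = q * prod_{n>=1} (1 - q^n)^24 with q = exp(2*pi*i*z),
    # truncated after N factors; accurate when Im(z) is bounded away from 0.
    q = cmath.exp(2j * cmath.pi * z)
    val = q
    for n in range(1, N + 1):
        val *= (1 - q ** n) ** 24
    return val

def hecke_image(f, z, p, k):
    # Coset decomposition of T_p in level 1 and weight k:
    # (T_p f)(z) = p^(k-1) f(pz) + (1/p) * sum_{j=0}^{p-1} f((z+j)/p).
    return p ** (k - 1) * f(p * z) + sum(f((z + j) / p) for j in range(p)) / p

z0 = 1j  # evaluation point in the upper half-plane
lam = hecke_image(delta, z0, p=2, k=12) / delta(z0)
print(lam)  # approximately tau(2) = -24
```

For degree two, the same scheme applies, with $\Delta$ replaced by a polynomial in truncated Fourier expansions of the Igusa generators, and the sum replaced by the degree two coset decomposition of $T_p$, as described below.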
The conceptual shift that we are proposing is that, instead of
representing the Siegel modular form $F$ as a list of Fourier
coefficients, we represent $F$ by its values at points in the Siegel
upper half-space. The idea is simple but its importance
can be seen by virtue of the results. We remark that in \cite{analytic1} we describe an implementation of the analogous method for classical modular forms and, in some cases, outperform the standard method using modular symbols.
The potential to parallelize our algorithm stems from the fact that we sum over the coset decomposition of the Hecke operators, and the computation of each summand is independent; these computations can therefore be performed in parallel.
Such approaches have been used in the past, for instance in determining the Hecke eigenvalues of paramodular forms, see~\cite{pooryuen}.
We thank the referees for pointing this out, and note that the similarity ends at the level of the sum itself: Poor and Yuen specialize the paramodular eigenform to a modular curve, then compute the summands (which are power series in one variable) exactly.
We work with the Siegel eigenform itself (as a power series in three variables) and compute good numerical approximations to the summands.
It is important to emphasize that our method takes as input the expression of a Siegel eigenform as a polynomial in the Igusa generators.
Our objective is then to efficiently compute approximate values of the Hecke eigenvalues.
We do not claim to obtain further information about the Fourier coefficients of the eigenform, nor that this is an efficient way of determining the exact value of the eigenvalues (unless the latter happen to be integers).
The paper is organized as follows.
We begin by stating some numerical preliminaries used in our method.
Then, we give the relevant background on Siegel modular forms and discuss Br\"oker and Lauter's work and how to compute $F|T_p$ both in theory and in practice.
We conclude by presenting some results of our computations, together with details of the implementation and ideas for further improvement.
\subsection*{Acknowledgments:} We thank John Voight for proposing this
project to us. We also thank the anonymous referees for many helpful suggestions.
\section{Numerical preliminaries}
Before we describe our algorithm to compute Hecke eigenvalues of Siegel modular forms analytically, we begin by stating some results related to bounding the error introduced when we evaluate a given Siegel modular form and its image under the Hecke operators $T_p$ and $T_{p^2}$ at a point in the Siegel upper half-space.
\subsection{Error in quotient}
We have a quantity defined as
\begin{equation*}
z=\frac{x}{y}\quad\text{with }x,y\in\mathbb{C}.
\end{equation*}
Suppose the numerator and denominator are approximated by $x_A$ and $y_A$, respectively; we define $z_A:= \frac{x_A}{y_A}$.
Given $\varepsilon>0$, what values of $\varepsilon_x$ and $\varepsilon_y$ ensure that
\begin{equation*}
\text{if }|x-x_A|<\varepsilon_x\text{ and }|y-y_A|<\varepsilon_y\text{ then }|z-z_A|<\varepsilon?
\end{equation*}
\begin{lem}
With the above notation, let $e_x=x-x_A$ and $e_y=y-y_A$.
Then
\begin{equation*}
z-z_A=\frac{e_x-e_yz_A}{y_A+e_y}.
\end{equation*}
\end{lem}
\begin{proof}
A straightforward calculation: since $y=y_A+e_y$,
\begin{equation*}
z-z_A=\frac{x}{y}-\frac{x_A}{y_A}=\frac{xy_A-x_Ay}{yy_A}=\frac{e_xy_A-e_yx_A}{yy_A}=\frac{e_x-e_yz_A}{y_A+e_y}.
\end{equation*}
\end{proof}
\begin{prop}
\label{prop:quot}
For any $h\in (0,1)$, if
\begin{equation*}
\varepsilon_x<\frac{h\varepsilon|y_A|}{2}
\quad\text{and}\quad
\varepsilon_y<\min\left\{\frac{(1-h)\varepsilon |y_A|}{2|z_A|}, \frac{|y_A|}{2}\right\},
\end{equation*}
then $|z-z_A|<\varepsilon$.
\end{prop}
\begin{proof}
Under the hypotheses, we have $|y_A+e_y|>|y_A|/2$ so
\begin{equation*}
|z-z_A|<\frac{2}{|y_A|}\left(|e_x|+|e_yz_A|\right)
<h\varepsilon+(1-h)\varepsilon=\varepsilon.
\end{equation*}
\end{proof}
The parameter $h$ can be chosen so that the computations of $x_A$ and $y_A$ are roughly equally difficult.
In order to use the results of Proposition~\ref{prop:quot} in practice, we need a lower bound on $|y_A|$ and an upper bound on $|z_A|$ (which can be obtained from the lower bound on $|y_A|$ and an upper bound on $|x_A|$).
How do we bound $|x_A|$?
We compute a very coarse estimate $\tilde{x}$ of $x$, with $\tilde{\varepsilon}_x$ just small enough that $|\tilde{x}|-2\tilde{\varepsilon}_x>0$.
(We can start with $\tilde{\varepsilon}_x=0.1$ and keep dividing by $10$ until the condition holds.)
Later we will make sure that $\varepsilon_x$ is smaller than $\tilde{\varepsilon}_x$.
Then we know that
\begin{equation*}
|\tilde{x}-x|<\tilde{\varepsilon}_x\qquad\text{and}\qquad
|x_A-x|<\varepsilon_x\leq\tilde{\varepsilon}_x,
\end{equation*}
so
\begin{equation*}
\big||x_A|-|\tilde{x}|\big|\leq |x_A-\tilde{x}|<2\tilde{\varepsilon}_x\qquad\Rightarrow\qquad 0< |\tilde{x}|-2\tilde{\varepsilon}_x < |x_A| < |\tilde{x}|+2\tilde{\varepsilon}_x,
\end{equation*}
giving us lower and upper bounds on $|x_A|$.
A similar argument works for $|y_A|$.
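In practice, Proposition~\ref{prop:quot} amounts to a small helper that splits a target error $\varepsilon$ into tolerances for the numerator and denominator. Here is a minimal sketch in Python (ours, not from the paper's implementation; the function name and the default choice $h=1/2$ are illustrative):

```python
import cmath

def split_tolerances(eps, y_abs, z_abs, h=0.5):
    """Split a target error eps for z = x/y into (eps_x, eps_y),
    given a lower bound y_abs <= |y_A| and an upper bound z_abs >= |z_A|,
    following the proposition."""
    eps_x = h * eps * y_abs / 2
    eps_y = min((1 - h) * eps * y_abs / (2 * z_abs), y_abs / 2)
    return eps_x, eps_y

# sanity check: perturb x and y within the tolerances, verify |z - z_A| < eps
x_A, y_A = 3.1 + 2.0j, 1.5 - 0.7j
z_A = x_A / y_A
eps = 1e-6
eps_x, eps_y = split_tolerances(eps, abs(y_A), abs(z_A))
x = x_A + 0.99 * eps_x * cmath.exp(0.3j)
y = y_A - 0.99 * eps_y * cmath.exp(2.0j)
assert abs(x / y - z_A) < eps
```

In the actual computation one would feed in the rigorous bounds on $|y_A|$ and $|z_A|$ obtained from the coarse estimates described above.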
\section{Siegel modular forms}\label{sec:smf}
Let the symplectic group of similitudes of genus $2$ be defined by
\begin{multline*}
\textrm{GSp}(4) := \{G \in \textrm{GL}(4) : {}^{t}G J G = \lambda(G) J,
\lambda(G) \in \textrm{GL}(1) \} \\ \mbox{where } J = \mat{}{I_2}{-I_2}{}.
\end{multline*}
Let $\textrm{Sp}(4)$ be the subgroup with $\lambda(G)=1$. The group $\textrm{GSp}^+(4,\mathbb{R}) := \{ G \in\textrm{GSp}(4,\mathbb{R}) : \lambda(G) > 0 \}$ acts on the Siegel upper half space $\mathbb{H}_2 := \{ Z \in M_2(\mathbb{C}) : {}^{t}Z = Z, \operatorname{Im}(Z) > 0\}$ by
\begin{equation}
G \langle Z \rangle := (AZ+B)(CZ+D)^{-1}, \quad \text{where } G = \mat{A}{B}{C}{D} \in \textrm{GSp}^+(4,\mathbb{R}), Z \in \mathbb{H}_2.
\end{equation}
Let $S_k^{(2)}$ be the space of
holomorphic Siegel cusp forms of weight $k$, genus $2$ with respect to $\Gamma^{(2)} := \textrm{Sp}(4,\mathbb{Z})$. Then $F \in S_k^{(2)}$
satisfies
$$
F(\gamma \langle Z \rangle) = \det(CZ+D)^{k} F(Z)
$$
for all $\gamma = \mat{A}{B}{C}{D} \in \Gamma^{(2)}$ and $Z \in
\mathbb{H}_2$. This can also be written in terms of the slash operator: for
$M\in\textrm{GSp}^+(4,\mathbb{R})$ let $\left(F\vert_k M\right)(Z) = \det(CZ+D)^{-k}F(M\langle
Z\rangle )$. Then the
functional equation satisfied by a Siegel modular form can be written
as:
\[
\left(F\vert_k M\right)(Z)=F(Z)
\]
for all $M\in \textrm{Sp}(4,\mathbb{Z})$.
Now we describe the Hecke operators acting on $S_k^{(2)}$.
For $M \in \textrm{GSp}^+(4,\mathbb{R}) \cap M_{4}(\mathbb{Z})$, define the Hecke operator
$T(\Gamma^{(2)} M \Gamma^{(2)})$ on $S_k^{(2)}$ as in \cite[(1.3.3)]{andrianov}. For a positive integer $m$,
we define the Hecke operator $T_m$ by
\begin{equation}\label{hecke-op-m-defn}
T_m := \sum\limits_{\lambda(M)=m} T(\Gamma^{(2)} M
\Gamma^{(2)}).
\end{equation}
See Section~\ref{sec:hecke} for an explicit decomposition of the
double cosets $T_p$ and $T_{p^2}$ into right cosets. Suppose
\[
T_m = \sum \Gamma^{(2)}\alpha
\]
is a right coset decomposition of the Hecke operator $T_m$. Then
the operator $T_m$ acts on a Siegel modular form $F$ of weight $k$
as
\[
\left( F\vert_k T_m\right)(Z) = \sum \left( F\vert_k
\alpha\right)(Z).
\]
This action can be described in terms of the Fourier coefficients of
the Siegel modular form $F$.
Any Siegel modular form $F$ of degree $2$ has a Fourier expansion of the form
\begin{equation*}
F(Z) = \sum_N a_N(F) \exp\left(2\pi i\operatorname{Tr}(NZ)\right)\qquad a_N(F)\in\mathbb{C},
\end{equation*}
where the sum ranges over all positive semi-definite matrices
$N=\begin{pmatrix}a&b/2\\b/2&c\end{pmatrix}$ with $a,b,c\in\mathbb{Z}$. The
quadratic form $N$ is often written $[a,b,c]$ using Gauss's notation.
Using the decompositions of the Hecke operators in
Section~\ref{sec:hecke} one can derive formulas for the
action of $T_p$ and $T_{p^2}$ on a Siegel modular form $F$. When
these formulas are written down as in \cite[p. 387]{skoruppa} one can
see that to compute $\lambda_F(p)$, the Hecke eigenvalue of $F$ with
respect to the Hecke operator $T_p$, one needs Fourier coefficients
up to discriminant of order $p^2$. To compute $\lambda_F(p^2)$, the
Hecke eigenvalue of $F$ with respect to the Hecke operator $T_{p^2}$,
one needs Fourier coefficients up to discriminant $p^4$. With
current methods, computing this number of coefficients of a Hecke
eigenform that is not a Saito--Kurokawa lift has so far proven infeasible.
A bottleneck to computing such a large number of coefficients is the
fact that there is no known way to compute individual coefficients in
parallel. The determination of a single Fourier coefficient requires
knowledge of many other Fourier coefficients. Our method, described
above, has approximately the same number of steps to compute a new
Hecke eigenvalue but these steps, in our method, are easily done in parallel.
\section{Evaluating Hecke eigenforms}\label{sec:evaluating}
\subsection{Bounds on the coefficients of the Igusa generators}
\begin{prop}
\label{prop:bounds}
Let $E_4$, $E_6$, $\chi_{10}$ and $\chi_{12}$ denote the Igusa generators of
the ring of even-weight Siegel modular forms of genus $2$ with respect
to $\textrm{Sp}(4,\mathbb{Z})$.
We have the following bounds on the Fourier coefficients of these forms:
\begin{align*}
\left|a_N(E_4)\right| &< \numprint{19230}\,t^5,\\
\left|a_N(E_6)\right| &< \numprint{12169}\,t^9,\\
\left|a_N(\chi_{10})\right| &< \frac{1}{236}\,A(\varepsilon,9)\,t^{9+\varepsilon},\\
\left|a_N(\chi_{12})\right| &< \frac{1}{311}\,A(\varepsilon,11)\,t^{11+\varepsilon},
\end{align*}
where the last two hold for any $\varepsilon>0$, $t=\operatorname{Tr}(N)$, and the function $A(\varepsilon,s)$ is defined by
\begin{equation*}
A(\varepsilon,s)=\frac{1}{(2\pi)^{1/4}}\,\exp\left(9\varepsilon^{-1}2^{3/\varepsilon}/\log(2)\right)\,\zeta(1+\varepsilon)
\,\max\left\{1,\sqrt{\frac{\Gamma(s+1/2+\varepsilon)}{\Gamma(s-1/2-\varepsilon)}}\right\}.
\end{equation*}
\end{prop}
\begin{proof}
It follows directly from~\cite[Corollary 3.6 and Remark 3.7]{broker} that
\begin{align*}
\left|a_N(E_4)\right| &< \numprint{19230}\left(4ac-b^2\right)^{5/2}\leq\numprint{19230}\operatorname{Tr}(N)^5,\\
\left|a_N(E_6)\right| &< \numprint{12169}\left(4ac-b^2\right)^{9/2}\leq\numprint{12169}\operatorname{Tr}(N)^9.
\end{align*}
The second two inequalities follow from~\cite[Theorem 5.10]{broker} with $\gamma=\eta=\varepsilon/3$.
\end{proof}
\begin{rem}
The bounds for $\chi_{10}$ and $\chi_{12}$ in Proposition~\ref{prop:bounds}
allow for further optimization by choosing the parameter $\varepsilon$ appropriately.
Considering $\chi_{10}$, the factor $t^{9+\varepsilon}$ is of course dominant as
$t\to\infty$, but choosing $\varepsilon$ as small as possible is counterproductive
for practical computations, as the factor $A(\varepsilon, 9)$ explodes for small $\varepsilon$.
In our computations, we use $\varepsilon=2$, so the bounds can be summarized as:
\begin{align*}
\left|a_N(E_4)\right| &< \numprint{19230}\,t^5,\\
\left|a_N(E_6)\right| &< \numprint{12169}\,t^9,\\
\left|a_N(\chi_{10})\right| &< \numprint{220439}\,t^{11},\\
\left|a_N(\chi_{12})\right| &< \numprint{287248}\,t^{13},
\end{align*}
where $t=\operatorname{Tr}(N)$.
\end{rem}
\subsection{The truncation error for Siegel modular forms}
\label{sect:trunc_error}
Let $F$ be a Siegel modular form of degree $2$, with Fourier expansion
\begin{equation*}
F(Z) = \sum_N a_N(F) \exp\left(2\pi i\operatorname{Tr}(NZ)\right).
\end{equation*}
Given a positive integer $T$, we will truncate the Fourier expansion of $F$ by considering only those indices $N$ whose trace is at most $T$:
\begin{equation*}
F_T(Z) = \sum_{\operatorname{Tr}(N)\leq T} a_N(F) \exp\left(2\pi i\operatorname{Tr}(NZ)\right).
\end{equation*}
\begin{lem}
For any integer $t\geq 1$, the number of Fourier indices of trace $t$ satisfies
\begin{equation*}
\#\{N\mid\operatorname{Tr}(N)=t\}\leq (t+1)(2t+1)=2t^2+3t+1\leq 6t^2.
\end{equation*}
\end{lem}
\begin{proof}
We have
\begin{equation*}
\#\{N\mid\operatorname{Tr}(N)=t\}=\sum_{a=0}^t\left(1+2\left\lfloor 2\sqrt{a(t-a)}\right\rfloor\right).
\end{equation*}
There are $t+1$ terms in the sum, and the largest corresponds to $a=t/2$ (or $a=(t-1)/2$ if $t$ is odd).
In any case, every term in the sum is at most $1+2t$.
\end{proof}
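The count in the lemma is easy to confirm by brute force. The following throwaway script (ours) enumerates, for each trace $t$, the integer indices $[a,b,c]$ with $a+c=t$, $a,c\geq 0$ and $4ac-b^2\geq 0$:

```python
from math import isqrt

def count_trace(t):
    """Number of positive semi-definite integer indices [a,b,c]
    with trace a + c = t."""
    count = 0
    for a in range(t + 1):
        c = t - a
        bmax = isqrt(4 * a * c)   # floor(2*sqrt(a(t-a)))
        count += 2 * bmax + 1     # b = -bmax, ..., bmax
    return count

# check the bound of the lemma for small traces
for t in range(1, 50):
    assert count_trace(t) <= (t + 1) * (2 * t + 1) <= 6 * t * t
```

For instance, `count_trace(1)` returns $2$ (the indices $[1,0,0]$ and $[0,0,1]$) and `count_trace(2)` returns $7$.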
Suppose we have, like in Proposition~\ref{prop:bounds}, an upper bound on the Fourier coefficients of $F$:
\begin{equation}
\label{eq:anf_bound}
|a_N(F)|\leq Ct^d\qquad\text{where $C\in\mathbb{R}_{>0}$, $d\in\mathbb{N}$ and $t=\operatorname{Tr}(N)$}.
\end{equation}
We are interested in bounding the gap between the true value $F(Z)$ and its approximation $F_T(Z)$.
\begin{prop}
Suppose $F$ is a Siegel modular form of degree two whose Fourier coefficients
satisfy Equation~\eqref{eq:anf_bound}, $Z\in\mathbb{H}_2$ and we wish to approximate
the value $F(Z)$ with error at most $10^{-h}$.
It is then sufficient to use the truncation $F_T(Z)$ containing all terms of
the Fourier expansion of $F$ with indices of trace at most $T$, where
\begin{equation*}
T>\frac{d+2}{\alpha(Z)}\qquad\text{and}\qquad
6C\frac{d+3}{\alpha(Z)}\exp(-\alpha(Z) T)T^{d+2}<10^{-h}.
\end{equation*}
Here
\begin{equation*}
\delta(Z)=\sup\left\{\delta^\prime\in\mathbb{R}\mid \operatorname{Im}(Z)-\delta^\prime I\text{ is positive semi-definite}\right\}
\end{equation*}
and $\alpha(Z)=2\pi\delta(Z)$.
\end{prop}
\begin{proof}
Using~\cite[Lemma~6.1]{broker}, we have
\begin{align*}
\left|F(Z)-F_T(Z)\right| &= \left|\sum_{\operatorname{Tr}(N)>T} a_N(F) \exp\left(2\pi i\operatorname{Tr}(NZ)\right)\right|\\
&\leq \sum_{\operatorname{Tr}(N)>T} \left|a_N(F)\right| \left|\exp\left(2\pi i\operatorname{Tr}(NZ)\right)\right|\\
&\leq \sum_{\operatorname{Tr}(N)>T} \left|a_N(F)\right| \exp\left(-\alpha(Z)\operatorname{Tr}(N)\right)\\
&<\sum_{t=T+1}^\infty \sum_{\operatorname{Tr}(N)=t} \left|a_N(F)\right|\exp\left(-\alpha(Z)t\right)\\
&\leq\sum_{t=T+1}^\infty 6Ct^{d+2}\exp\left(-\alpha(Z)t\right)\\
&\leq 6C\int_T^\infty x^{d+2}\exp\left(-\alpha(Z)x\right)\,dx\\
&= 6C\exp(-\alpha(Z) T)\sum_{j=0}^{d+2}\frac{(d+2)!}{j!\alpha(Z)^{d-j+3}}T^j\\
&<\frac{6C(d+3)}{\alpha(Z)}\exp(-\alpha(Z) T)T^{d+2},
\end{align*}
where the last inequality holds if $T$ is in the half-infinite interval on
which the integrand is decreasing (i.e.\ $T>(d+2)/\alpha(Z)$).
\end{proof}
\begin{ex}
We determine $T$ sufficient for computing $E_4(Z)$ within $10^{-20}$ at the point
\begin{equation*}
Z=\begin{pmatrix}
5i & i\\
i & 6i
\end{pmatrix}.
\end{equation*}
We have $\delta(Z)=\frac{11-\sqrt{5}}{2}$, the smallest eigenvalue of $\operatorname{Im}(Z)$, so
\begin{equation*}
\alpha(Z)=\pi\left(11-\sqrt{5}\right)\approx 27.5327
\end{equation*}
so we are looking for $T$ such that
\begin{equation*}
\exp(-\alpha(Z)T)T^7<2.983\cdot 10^{-25},
\end{equation*}
which is easily seen (numerically) to hold as soon as $T\geq 3$.
We proceed similarly to obtain the values in Table~\ref{table:truncation}.
\begin{table}[h]
\centering
\begin{tabular}{lrrrr}
\toprule
& \multicolumn{4}{c}{$T$}\\
\cmidrule(r){2-5}
error & $E_4$ & $E_6$ & $\chi_{10}$ & $\chi_{12}$ \\
\midrule
$10^{-10}$ & $2$ & $2$ & $2$ & $2$\\
$10^{-20}$ & $3$ & $3$ & $3$ & $3$\\
$10^{-100}$ & $10$ & $10$ & $10$ & $11$\\
$10^{-1000}$ & $86$ & $86$ & $87$ & $87$\\
\bottomrule
\end{tabular}
\caption{Truncation necessary for computing $F(Z)$ within specified error}
\label{table:truncation}
\end{table}
\end{ex}
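The numbers in the example can be reproduced directly. The short script below (ours) recomputes $\alpha(Z)$, recovers the threshold $2.983\cdot 10^{-25}$ from the bound $|a_N(E_4)|<\numprint{19230}\,t^5$ with target error $10^{-20}$, and confirms that $T=3$ is the minimal admissible truncation:

```python
from math import pi, exp, sqrt

# Im(Z) = [[5, 1], [1, 6]]; delta(Z) is its smallest eigenvalue
delta = (11 - sqrt(5)) / 2
alpha = 2 * pi * delta
assert abs(alpha - 27.5327) < 1e-3

C, d, h = 19230, 5, 20   # |a_N(E4)| < C t^d, target error 10^{-h}
threshold = 10 ** (-h) * alpha / (6 * C * (d + 3))
assert abs(threshold / 2.983e-25 - 1) < 1e-3

def admissible(T):
    """Conditions of the truncation proposition."""
    return T > (d + 2) / alpha and exp(-alpha * T) * T ** (d + 2) < threshold

assert not admissible(2) and admissible(3)
```

The same loop, run with the constants of Proposition~\ref{prop:bounds} for the other generators and targets, reproduces Table~\ref{table:truncation}.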
\section{Our method}
As described above, our method is rather straightforward. We fix a point
$Z\in \mathbb{H}_2$ and evaluate $F(Z)$ using the methods of
Section~\ref{sec:evaluating}. Consider the coset decomposition $T_p=\sum \Gamma^{(2)}\alpha$ and its action on $F$:
\[
\left( F\vert_k T_p \right)(Z)=\sum\left( F\vert_k\alpha\right)(Z).
\]
It then remains, for each $\alpha$ in the decomposition, to
compute $\left(F\vert_k \alpha\right)(Z)$; that is, to
write $\alpha$ as $\mat{A}{B}{C}{D}$ and to evaluate
\[
\det(CZ+D)^{-k}F\left((AZ+B)(CZ+D)^{-1}\right).
\]
In Section~\ref{sec:hecke} we present the desired decompositions for
the Hecke operators $T_p$ and $T_{p^2}$ and we use the methods of
Section~\ref{sec:evaluating} to evaluate the Siegel modular form at
the points $(AZ+B)(CZ+D)^{-1}\in\mathbb{H}_2$.
\subsection{Hecke action}\label{sec:hecke}
Hecke operators are defined in terms of double cosets $\Gamma M \Gamma$
and the action of such an operator is determined by the right cosets
that appear in the decomposition of these double cosets. For a prime $p$ we consider the double coset
$T_p=\Gamma^{(2)} \, \operatorname{diag}(1,1,p,p)\Gamma^{(2)}$. An explicit version
of a formula, due to Andrianov, for the right cosets that appear in the decomposition of $T_p$, is given by Cl\'ery and van der Geer:
\begin{prop}\cite{andrianov,Clery14}\label{prop:tp}
The double coset $T_p$ admits the following right coset decomposition:
\begin{multline*}
\Gamma^{(2)}
\left(
\begin{smallmatrix}
p & 0 & 0 & 0 \\
0 & p & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{smallmatrix}
\right)
+
\sum_{0 \leq a,b,c \leq p-1}
\Gamma^{(2)}
\left(
\begin{smallmatrix}
1 & 0 & a & b \\
0 & 1 & b & c \\
0 & 0 & p & 0 \\
0 & 0 & 0 & p
\end{smallmatrix}
\right)
+\\
\sum_{0 \leq a \leq p-1}
\Gamma^{(2)}
\left(
\begin{smallmatrix}
0 & -p & 0 & 0 \\
1 & 0 & a & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & p & 0
\end{smallmatrix}
\right)
+
\sum_{0 \leq a,m \leq p-1}
\Gamma^{(2)}
\left(
\begin{smallmatrix}
p & 0 & 0 & 0 \\
-m & 1 & 0 & a \\
0 & 0 & 1 & m \\
0 & 0 & 0 & p
\end{smallmatrix}
\right)
\end{multline*}
and we have that the degree of $T_p$ is $p^3+p^2+p+1$.
\end{prop}
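The representatives in Proposition~\ref{prop:tp} are easy to generate and sanity-check. The sketch below (plain Python, ours) builds them, verifies the degree count, and checks the similitude condition ${}^tMJM=pJ$ for each representative:

```python
def tp_cosets(p):
    """Coset representatives for T_p as 4x4 integer matrices (lists of rows)."""
    reps = [[[p, 0, 0, 0], [0, p, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]]
    reps += [[[1, 0, a, b], [0, 1, b, c], [0, 0, p, 0], [0, 0, 0, p]]
             for a in range(p) for b in range(p) for c in range(p)]
    reps += [[[0, -p, 0, 0], [1, 0, a, 0], [0, 0, 0, -1], [0, 0, p, 0]]
             for a in range(p)]
    reps += [[[p, 0, 0, 0], [-m, 1, 0, a], [0, 0, 1, m], [0, 0, 0, p]]
             for a in range(p) for m in range(p)]
    return reps

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

for p in (2, 3, 5):
    reps = tp_cosets(p)
    assert len(reps) == p**3 + p**2 + p + 1        # degree of T_p
    for M in reps:
        Mt = [list(row) for row in zip(*M)]        # transpose
        # each representative has similitude factor p: tM J M = p J
        assert mul(Mt, mul(J, M)) == [[p * e for e in row] for row in J]
```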
Thus, in particular, finding $\lambda_p$ requires $p^3+p^2+p+1$ independent
evaluations of our Siegel modular form $F$ at points in $\mathbb{H}_2$.
This is why our method is so amenable to parallelization.
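The parallel structure is the simplest possible: each summand is computed independently and the results are added at the end. A toy sketch of the pattern (ours; \texttt{evaluate\_summand} is a hypothetical stand-in for computing $(F\vert_k\alpha)(Z)$):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_summand(alpha):
    # hypothetical placeholder for det(CZ+D)^(-k) * F((AZ+B)(CZ+D)^(-1))
    return alpha * alpha

def hecke_sum(reps, workers=4):
    """Sum the independent summands over the coset representatives in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(evaluate_summand, reps))

assert hecke_sum(range(100)) == sum(a * a for a in range(100))
```

In a real run the work list would be the $p^3+p^2+p+1$ coset representatives, and each worker would perform one evaluation of the truncated Igusa expansions.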
Similarly, for a prime $p$ define the operator $T_{p^2}$ as a sum of double cosets:
$$
T_{p^2}=
\Gamma^{(2)}
\left(
\begin{smallmatrix}
p & 0 & 0 & 0 \\
0 & p & 0 & 0 \\
0 & 0 & p & 0 \\
0 & 0 & 0 & p
\end{smallmatrix}
\right)
\Gamma^{(2)}
+
\Gamma^{(2)}
\left(
\begin{smallmatrix}
1 & 0 & 0 & 0 \\
0 & p & 0 & 0 \\
0 & 0 & p^2 & 0 \\
0 & 0 & 0 & p
\end{smallmatrix}
\right)
\Gamma^{(2)}
+
\Gamma^{(2)}
\left(
\begin{smallmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & p^2 & 0 \\
0 & 0 & 0 & p^2
\end{smallmatrix}
\right)
\Gamma^{(2)}.
$$
Again, based on a result of Andrianov, Cl\'ery and van der Geer give an
explicit decomposition of the operator $T_{p^2}$:
\begin{prop}\cite{andrianov,Clery14}\label{prop:tp2}
The Hecke operator $T_{p^2}$
has degree $p^6+p^5+2p^4+2p^3+p^2+p+1$ and
admits a known explicit left coset decomposition.
\end{prop}
One can do better, however; we can reduce the number of summands at which we need to evaluate $F$ to be $\mathcal{O}(p^4)$ instead of $\mathcal{O}(p^6)$ by using some standard facts about the Hecke algebra for Siegel modular forms of degree 2. The Hecke operator $T_{p^2}$ is itself a linear combination of three double cosets:
\begin{multline}\label{eq:gens}
T_{p^2,0} = \Gamma^{(2)} \operatorname{diag} (p,p;p,p)\Gamma^{(2)},\, T_{p^2,1} = \Gamma^{(2)} \operatorname{diag} (1,p;p^2,p)\Gamma^{(2)},\, \text{ and }\\ T_{p^2,2} = \Gamma^{(2)} \operatorname{diag} (1,1;p^2,p^2)\Gamma^{(2)}.
\end{multline}
The decomposition in Proposition~\ref{prop:tp2} is itself the (disjoint) sum of the decomposition of three double cosets $T_{p^2,0}$, $T_{p^2,1}$ and $T_{p^2,2}$.
The $p$-part of the Hecke algebra is generated by the operators $T_p$, $T_{p^2,0}$ and $T_{p^2,1}$ and, in fact, in \cite{krieg,vdG}, it is shown that
\begin{equation}\label{eq:tp-relation}
(T_p)^2 = T_{p^2,0} + (p+1)T_{p^2,1} + (p^2+1)(p+1) T_{p^2,2}.
\end{equation}
To determine the eigenvalue $\lambda_{p^2}(F)$ for $F\in S_k^{(2)}$ with respect to the Hecke operator $T_{p^2}$, using Proposition~\ref{prop:tp}, we first find the eigenvalue $\lambda_p(F)$ for the operator $T_p$. Then, we find the eigenvalues $\lambda_{p^2,0}(F)$ (known to be $p^{-2k}$ by the definitions in Section~\ref{sec:smf}) and the eigenvalue $\lambda_{p^2,1}(F)$ for the operator $T_{p^2,1}$. Then using \eqref{eq:tp-relation} we can find the eigenvalue $\lambda_{p^2,2}(F)$ for the operator $T_{p^2,2}$. Putting it all together, then, all we need is an explicit decomposition of $T_{p^2,1}$ into left cosets, in order to compute $\lambda_{p^2}(F)$.
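The bookkeeping just described is elementary. The following sketch (ours; the function name is hypothetical, and $\lambda_{p^2,0}(F)=p^{-2k}$ as noted above) recovers $\lambda_{p^2}(F)$ from $\lambda_p(F)^2$ and $\lambda_{p^2,1}(F)$ via \eqref{eq:tp-relation}, working in exact rational arithmetic:

```python
from fractions import Fraction

def lambda_p2_eigenvalue(lam_p_sq, lam_p2_1, p, k):
    """Given lambda_p(F)^2 and lambda_{p^2,1}(F), return lambda_{p^2}(F),
    using lambda_{p^2,0}(F) = p^(-2k) and the relation
    (T_p)^2 = T_{p^2,0} + (p+1) T_{p^2,1} + (p^2+1)(p+1) T_{p^2,2}."""
    lam_p2_0 = Fraction(1, p ** (2 * k))
    lam_p2_2 = (lam_p_sq - lam_p2_0 - (p + 1) * lam_p2_1) / ((p**2 + 1) * (p + 1))
    return lam_p2_0 + lam_p2_1 + lam_p2_2

# round-trip check with made-up eigenvalue data
p, k = 3, 20
l0 = Fraction(1, p ** (2 * k))
l1, l2 = Fraction(17), Fraction(-5)
lam_p_sq = l0 + (p + 1) * l1 + (p**2 + 1) * (p + 1) * l2
assert lambda_p2_eigenvalue(lam_p_sq, l1, p, k) == l0 + l1 + l2
```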
\begin{prop}[\cite{andrianov}]\label{prop:tp2-cosets}
The Hecke operator $T_{p^2,1}$ admits the following left coset decomposition:
\begin{multline*}
\sum_{0\leq \alpha < p}\Gamma^{(2)} \left(
\begin{smallmatrix}
p^2 & 0 & 0 & 0\\
-p\alpha & p & 0 & 0\\
0 & 0 & 1 & \alpha\\
0 & 0 & 0 & p\\
\end{smallmatrix}
\right) \Gamma^{(2)} +
\Gamma^{(2)} \left(
\begin{smallmatrix}
p & 0 & 0 & 0\\
0 & p^2 & 0 & 0\\
0 & 0 & p & 0\\
0 & 0 & 0 & 1\\
\end{smallmatrix}
\right) \Gamma^{(2)}
+
\sum_{\substack{0\leq a,b,c <p\\ ac-b^2\equiv 0\pmod{p}\\\text{ and not all zero}}}
\Gamma^{(2)} \left(
\begin{smallmatrix}
p & 0 & a & b\\
0 & p & b & c\\
0 & 0 & p & 0\\
0 & 0 & 0 & p\\
\end{smallmatrix}
\right)\Gamma^{(2)}
+\\
\sum_{\substack{0\leq \alpha,\beta<p\\0\leq C <p^2}}
\Gamma^{(2)} \left(
\begin{smallmatrix}
p & 0 & 0 & p\beta\\
-\alpha & 1 & \beta & \alpha\beta+C\\
0 & 0 & p & p\alpha\\
0 & 0 & 0 & p^2\\
\end{smallmatrix}
\right)\Gamma^{(2)}
+
\sum_{\substack{0\leq \beta <p\\0\leq A < p^2}}
\Gamma^{(2)}
\left(
\begin{smallmatrix}
1 & 0 & A & \beta\\
0 & p & p\beta & 0\\
0 & 0 & p^2 & 0\\
0 & 0 & 0 & p\\
\end{smallmatrix}
\right)
\Gamma^{(2)}.
\end{multline*}
Thus the degree of $T_{p^2,1}$ is $p^4+p^3+p^2+p$.
\end{prop}
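The degree claimed in Proposition~\ref{prop:tp2-cosets} can be confirmed by enumerating the index sets of the five sums directly (brute-force script, ours):

```python
def tp2_1_degree(p):
    """Count the coset representatives of T_{p^2,1} listed in the proposition."""
    first = p                                  # 0 <= alpha < p
    second = 1
    third = sum(1 for a in range(p) for b in range(p) for c in range(p)
                if (a * c - b * b) % p == 0 and (a, b, c) != (0, 0, 0))
    fourth = p * p * p**2                      # alpha, beta < p and C < p^2
    fifth = p * p**2                           # beta < p and A < p^2
    return first + second + third + fourth + fifth

for p in (2, 3, 5, 7):
    assert tp2_1_degree(p) == p**4 + p**3 + p**2 + p
```

(The third sum contributes $p^2-1$ triples, which accounts for the total $p^4+p^3+p^2+p$.)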
\begin{rem}In the Introduction, we discussed the difficulty of computing $\lambda_{p^2}(F)$ using the action of $T_{p^2}$ on the coefficients of the eigenform $F$. One might ask whether we could more efficiently compute $\lambda_{p^2}(F)$ using the action of $T_{p^2,1}$ on $F$, as described in Proposition~\ref{prop:tp2-cosets}, combined with \eqref{eq:tp-relation}. It turns out, though, that this approach still requires coefficients up to discriminant $p^4$.
\end{rem}
\section{Some computations and implementation details}
We describe some sample computations involving the eigenform of smallest weight that is not a lift from lower rank groups, namely the cusp form $\Upsilon_{20}$ mentioned in the introduction:
\begin{equation*}
\Upsilon_{20}=
-E_4^2\chi_{12}
-E_4E_6\chi_{10}
+1785600\chi_{10}^2.
\end{equation*}
As a gauge of the performance of the algorithm, we compared the timings to those required by the implementation~\cite{takemori-code} of the standard method\footnote{The only other publicly-available implementation we are aware of is~\cite{smf-code-sage}. We did not compare against it for two reasons: (a) at the moment, the computation of the Hecke image appears to be incorrect for primes that are congruent to $1$ mod $4$ and (b) it uses Cython for the most expensive part of the computation, namely the multiplication of the $q$-expansions. Since both our code and S. Takemori's are pure Python, we deemed this to be a more useful comparison of the two algorithms.} by Sho Takemori.
We implemented the method described in this paper in SageMath~\cite{sage}; this implementation is available at~\cite{hecke-analytic-siegel}.
The benchmarks described below were performed using a single core of a Linux machine with an i7-6700 CPU at 3.40GHz and 64GB of RAM, via the following helper functions:
\begin{verbatim}
def ups20_eigenvalue_numerical(p, prec, y11):
CRING = _initialise_rings(prec, 2*p)
Z = matrix(CRING, 2, 2, [y11*i, i, i, (y11+1)*i])
R.<a, b, c, d> = QQ[]
f = -a^2*d-a*b*c+1785600*c^2
return _eigenvalue_T_fixed_trace(f, Z, p, 2*p)
\end{verbatim}
\begin{verbatim}
def ups20_eigenvalue_standard(p):
with degree2_number_of_procs(1):
a = eisenstein_series_degree2(4, p)
b = eisenstein_series_degree2(6, p)
c = x10_with_prec(p)
d = x12_with_prec(p)
f = -a^2*d-a*b*c+1785600*c^2
return f.hecke_eigenvalue(p)
\end{verbatim}
\begin{table}[h]
\centering
\begin{tabular}{rrrrr}
\toprule
$p$ & $y_{11}$ & precision (bits) & numerical (s) & standard (s)\\
\midrule
$2$ & $2.7$ & $37$ & $0$ & $0$\\
$3$ & $4.3$ & $62$ & $0$ & $0$\\
$5$ & $6.1$ & $101$ & $0$ & $0$\\
$7$ & $7.5$ & $130$ & $1$ & $1$\\
$11$ & $9.5$ & $172$ & $3$ & $7$\\
$13$ & $10.3$ & $190$ & $6$ & $15$\\
$17$ & $10.9$ & $208$ & $16$ & $55$\\
$19$ & $11.9$ & $226$ & $25$ & $90$\\
$23$ & $12.3$ & $240$ & $54$ & $230$\\
$29$ & $13.5$ & $267$ & $140$ & $735$\\
$31$ & $13.9$ & $275$ & $186$ & $1185$\\
$37$ & $14.5$ & $295$ & $406$ & $2876$\\
\bottomrule
\end{tabular}
\caption{Benchmarks comparing the numerical and standard algorithms for computing the Hecke eigenvalues of $\Upsilon_{20}$. The timings are rounded to the nearest second. The working precision was chosen so that the eigenvalue is the closest integer to the computed floating point number.}
\label{table:ups20}
\end{table}
For the standard algorithm, the most expensive step appears to be the multiplication of the $q$-expansions of the Igusa generators.
In the case of our numerical algorithm, the majority of the time is spent evaluating truncations of the $q$-expansions of the Igusa generators at various points in the Siegel upper half space.
These functions are polynomials in the variables $q_1$, $q_2$, $q_3$ and $q_3^{-1}$, where
\begin{equation*}
Z=\begin{pmatrix}z_1&z_3\\z_3&z_2\end{pmatrix}
\qquad\text{and}\qquad q_j=e^{2\pi i z_j}.
\end{equation*}
To evaluate such functions efficiently at a large number of points, we implemented an iterative version of Horner's method; to illustrate what is involved, here is how the truncation of the Igusa generator $\chi_{10}$ at trace up to $3$ is evaluated:
\begin{multline*}
q_1\Bigg(q_2\Big(q_3^{-1}-2+q_3+q_2\left(-2q_3^{-2}-16q_3^{-1}+36-16q_3-2q_3^2\right)\Big)\\+q_1\Big(q_2\left(-2q_3^{-2}-16q_3^{-1}+36-16q_3-2q_3^2\right)\Big)\Bigg).
\end{multline*}
Many of the partial evaluations are repeated for different summands of the expression for the Hecke operators.
We take advantage of this phenomenon by caching the results of evaluations of polynomials in $q_3$ and $q_3^{-1}$.
All the operations are performed using interval arithmetic (via the \texttt{ComplexIntervalField} available in Sage).
While this introduces a small overhead, it frees us from having to keep track of precision loss due to arithmetic operations (and evaluations of the complex exponential function).
Sage gives the final approximation of the Hecke eigenvalue in the form
\begin{verbatim}
1.0555282184708004141101491800000000000000?e27 + 0.?e-13*I
\end{verbatim}
from which we observe that the answer is most likely the integer
\begin{verbatim}
1055528218470800414110149180
\end{verbatim}
which is indeed $\lambda_{29}(\Upsilon_{20})$.
The question mark in the floating point number indicates that the last decimal may be incorrect due to rounding errors (but all preceding decimals are guaranteed to be correct).
There are certainly many variants of our choices that deserve further scrutiny and may lead to improved performance.
Here are some of the more interesting ones:
\begin{itemize}
\item For computing the eigenvalue $\lambda_p$, we chose to focus on the initial evaluation point
\begin{equation*}
Z=\begin{pmatrix}y_{11}i&i\\i&(y_{11}+1)i\end{pmatrix},
\end{equation*}
where the parameter $y_{11}$ is (at the moment) determined by trial and error.
The optimal values of $y_{11}$ for $\Upsilon_{20}$ and small $p$ are listed in the second column of Table~\ref{table:ups20}.
We note that the dependence of this optimal $y_{11}$ on $p$ appears to be linear in $\log(p)$.
The choice of $Z$ is significant for another reason: the fact that $Z$ is a ``purely imaginary matrix'' gives an extra symmetry that allows us to reduce the number of overall computations by almost a factor of $2$. Note that the timings listed in Table~\ref{table:ups20} do not incorporate this optimization.
\item Our experiments indicate that computing the value of $\lambda_p$ accurately using the choice of point $Z$ described above requires truncating the $q$-expansions of the Igusa generators at trace up to $2p$.
It would be very interesting to see if this trace bound can be lowered; even a small improvement in the trace can reduce the computation time significantly.
We have observed such phenomena in the case of classical modular forms (treated in~\cite{analytic1}).
\end{itemize}
\subsection{Summary of further computations}
We performed similar numerical experiments with the following forms:
\begin{align*}
\Upsilon_{22}&=
61E_4^3\chi_{10}
-30E_4E_6\chi_{12}
+5E_6^2\chi_{10}
-80870400\chi_{10}\chi_{12}
\\
\Upsilon_{24\mathrm{a}}&=
-67E_4^3\chi_{12}
+78E_4^2E_6\chi_{10}
-274492800E_4\chi_{10}^2
+25E_6^2\chi_{12}
+71539200\chi_{12}^2
\\
\Upsilon_{24\mathrm{b}}&=
+70E_4^3\chi_{12}
-69E_4^2E_6\chi_{10}
-214341120E_4\chi_{10}^2
+53E_6^2\chi_{12}
-137604096\chi_{12}^2
\\
\Upsilon_{26\mathrm{a}}&=
-22E_4^4\chi_{10}
-3E_4^2E_6\chi_{12}
+31E_4E_6^2\chi_{10}
-96609024E_4\chi_{10}\chi_{12}
-13806720E_6\chi_{10}^2
\\
\Upsilon_{26\mathrm{b}}&=
973E_4^4\chi_{10}
+390E_4^2E_6\chi_{12}
-1255E_4E_6^2\chi_{10}
+3927813120E_4\chi_{10}\chi_{12}
-4438886400E_6\chi_{10}^2
\end{align*}
These have in common that they are all ``interesting'' forms (Skoruppa's terminology and notation), not arising as lifts from lower rank groups.
They also all have rational coefficients (and are very likely the only rational ``interesting'' forms in level one).
As we can see in Table~\ref{table:p23}, while the standard method slows down rapidly as the weight increases, the numerical method seems unaffected by the weight (in this range).
\begin{table}[h]
\centering
\begin{tabular}{rrrr}
\toprule
$f$ & numerical (s) & standard (s)\\
\midrule
$\Upsilon_{20}$ & $57$ & $240$ \\
$\Upsilon_{22}$ & $59$ & $410$ \\
$\Upsilon_{24\mathrm{a}}$ & $59$ & $559$ \\
$\Upsilon_{24\mathrm{b}}$ & $59$ & $563$ \\
$\Upsilon_{26\mathrm{a}}$ & $59$ & $658$ \\
$\Upsilon_{26\mathrm{b}}$ & $60$ & $659$ \\
\bottomrule
\end{tabular}
\caption{Benchmarks comparing the numerical and standard algorithms for computing the Hecke eigenvalue $\lambda_{23}$ of the rational ``interesting'' eigenforms. The timings are rounded to the nearest second.}
\label{table:p23}
\end{table}
As we increase the weight further, we encounter ``interesting'' eigenforms defined over number fields of increasing degree.
Our implementation treats these in the same way as the rational eigenforms; the algebraic numbers appearing in the expression of an eigenform as a polynomial in the Igusa generators are first embedded into the \texttt{ComplexIntervalField} with the working precision, and the computations are then done exclusively with complex intervals.
We illustrate this with a number of examples from the L-functions and Modular Forms Database (LMFDB~\cite{lmfdb}): $\Upsilon_{28},\Upsilon_{30},\dots,\Upsilon_{56}$, contributed by Nils-Peter Skoruppa.
These are representatives of the unique Galois orbit of ``interesting'' Siegel modular eigenforms of level one and weights given by the indices.
We computed the integer closest to the eigenvalues $\lambda_2,\lambda_3,\dots,\lambda_{11}$ of these forms and verified the results against Sho Takemori's
implementation.\footnote{The LMFDB contains only $\lambda_2,\lambda_3$ and $\lambda_5$ for the forms $\Upsilon_{28},\dots,\Upsilon_{48}$.
We are not aware of the other eigenvalues we computed having been published anywhere.}
The timings for $\lambda_{11}$ appear in Table~\ref{table:p11}.
We note once again that the change in weight has only a very minimal effect on the timings for the numerical approach.
The degree of the number field over which each eigenform is defined varies from $3$ for $\Upsilon_{28}$ to $29$ for $\Upsilon_{56}$.
\begin{table}[h]
\centering
\begin{tabular}{rrrr}
\toprule
$f$ & numerical (s) & standard (s) & integer closest to $\lambda_{11}(f)$ \\
\midrule
$\Upsilon_{28}$ & $5$ & $42$ &
{\tiny $-5759681178477373721671849774$}
\\
$\Upsilon_{30}$ & $5$ & $55$ &
{\tiny $255840273811994841300205675092$}
\\
$\Upsilon_{32}$ & $5$ & $72$ &
{\tiny $-62889079837500073468061496815555$}
\\
$\Upsilon_{34}$ & $5$ & $99$ &
{\tiny $439086084572485264922509970244600$}
\\
$\Upsilon_{36}$ & $5$ & $145$ &
{\tiny $-1085248116783567484088793200996441965$}
\\
$\Upsilon_{38}$ & $5$ & $171$ &
{\tiny $99082752899176432104304580529696472526$}
\\
$\Upsilon_{40}$ & $6$ & $316$ &
{\tiny $21639993149436935203941512756710465353890$}
\\
$\Upsilon_{42}$ & $6$ & $405$ &
{\tiny $1326433094276015828828131422320612505802642$}
\\
$\Upsilon_{44}$ & $6$ & $697$ &
{\tiny $-216254834133020533289657866886176910904279874$}
\\
$\Upsilon_{46}$ & $6$ & $1156$ &
{\tiny $3025010356797981861229021682270178023420599162$}
\\
$\Upsilon_{48}$ & $6$ & $2147$ &
{\tiny $3623681259607683701352889863246901251092385443364$}
\\
$\Upsilon_{50}$ & $6$ & $3558$ &
{\tiny $-50111326406849287661448298549933139673192742821477$}
\\
$\Upsilon_{52}$ & $6$ & $7701$ &
{\tiny $-33891727074702812676183940887995219801531644658145401$}
\\
$\Upsilon_{54}$ & $6$ & $12205$ &
{\tiny $-4324363734737815894771410628259133851153783375885366874$}
\\
$\Upsilon_{56}$ & $7$ & $19290$ &
{\tiny $807326143967818876211261524740739769895631903544298785221$}
\\
\bottomrule
\end{tabular}
\caption{Benchmarks comparing the numerical and standard algorithms for computing the Hecke eigenvalue $\lambda_{11}$ of a representative of the unique Galois orbit of ``interesting'' eigenforms in each of the listed weights. The timings are rounded to the nearest second.}
\label{table:p11}
\end{table}
\end{document} | math | 38,931 |
\begin{document}
\author[Robert Laterveer]
{Robert Laterveer}
\address{Institut de Recherche Math\'ematique Avanc\'ee,
CNRS -- Universit\'e
de Strasbourg,\
7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX,
FRANCE.}
\email{[email protected]}
\title{On the Chow ring of certain Fano fourfolds}
\begin{abstract} We prove that certain Fano fourfolds of K3 type constructed by
Fatighenti--Mongardi have a multiplicative Chow--K\"unneth decomposition. We present some consequences for the Chow ring of these fourfolds.
\end{abstract}
\keywords{Algebraic cycles, Chow ring, motives, Beauville ``splitting property'', Fano variety, K3 surface}
\subjclass{Primary 14C15, 14C25, 14C30.}
\maketitle
\section{Introduction}
This note is part of a program aimed at understanding the class of varieties admitting a {\em multiplicative Chow--K\"unneth decomposition\/}, in the sense of \cite{SV}.
The concept of multiplicative Chow--K\"unneth decomposition was introduced in order to better understand the (conjectural) behaviour of the Chow ring of hyperk\"ahler varieties, while also providing a systematic explanation of the peculiar behaviour of the Chow ring of K3 surfaces and abelian varieties.
In \cite{S2}, the following conjecture is raised:
\begin{conjecture}\label{conj} Let $X$ be a smooth projective Fano variety of K3 type (i.e. $\dim X=2m$ and the Hodge numbers $h^{p,q}(X)$ are $0$ for all $p\not=q$ except for $h^{m-1,m+1}(X)=h^{m+1,m-1}(X)=1$). Then $X$ has a multiplicative Chow--K\"unneth decomposition.
\end{conjecture}
This conjecture is verified in some special cases \cite{Ver}, \cite{d3}, \cite{S2}.
The aim of the present note is to provide some more evidence for Conjecture \ref{conj}. We consider two families of Fano fourfolds of K3 type (these are the families labelled B1 and B2 in \cite{FM}).
\begin{nonumbering}[=Theorem \ref{main}] Let $X$ be a smooth fourfold of one of the following types:
\begin{itemize}
\item a hypersurface of multidegree $(2,1,1)$ in $M=\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1$;
\item a hypersurface of multidegree $(2,1)$ in $M=\operatorname{Gr}(2,4)\times\mathbb{P}^1$ (with respect to the Pl\"ucker embedding).
\end{itemize}
Then $X$ has a multiplicative Chow--K\"unneth decomposition.
\end{nonumbering}
Theorem \ref{main} has interesting consequences for the Chow ring $A^\ast(X)_{\mathbb{Q}}$ of these fourfolds:
\begin{nonumberingc}[=Corollary \ref{cor}] Let $X$ and $M$ be as in Theorem \ref{main}.
Let $R^3(X)\subset A^3(X)_{\mathbb{Q}}$ be the subgroup generated by the Chern class $c_3(T_X)$, the image of the restriction map $A^3(M)_{\mathbb{Q}}\to A^3(X)_{\mathbb{Q}}$, and intersections
$A^1(X)_{\mathbb{Q}}\cdot A^2(X)_{\mathbb{Q}}$ of divisors with $2$-cycles.
The cycle class map induces an injection
\[ R^3(X)\ \hookrightarrow\ H^6(X,\mathbb{Q})\ .\]
\end{nonumberingc}
This is reminiscent of the famous result of Beauville--Voisin describing the Chow ring of a $K3$ surface \cite{BV}.
More generally, there is a similar injectivity result for the Chow ring of certain self-products $X^m$ (Corollary \ref{cor}).
Another consequence is the existence of a multiplicative decomposition in the derived category for families of Fano fourfolds as in Theorem \ref{main} (Corollary \ref{deldec}).
\vskip0.6cm
\begin{convention} In this note, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. For a smooth variety $X$, we will denote by $A^j(X)$ the Chow group of codimension $j$ cycles on $X$
with $\mathbb{Q}$-coefficients.
The notation
$A^j_{hom}(X)$ will be used to indicate the subgroups of
homologically trivial cycles.
For a morphism between smooth varieties $f\colon X\to Y$, we will write $\Gamma_f\in A^\ast(X\times Y)$ for the graph of $f$.
The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$.
We will write $H^\ast(X):=H^\ast(X,\mathbb{Q})$ for singular cohomology with $\mathbb{Q}$-coefficients.
\end{convention}
\vskip0.6cm
\section{The Fano fourfolds}
\begin{proposition}\label{4folds}
(\romannumeral1) Let $X\subset\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1$ be a smooth hypersurface of multidegree $(2,1,1)$ (following \cite{FM}, we will say $X$ is ``of type B1''). Then $X$ is Fano, and the Hodge numbers of $X$ are
\[ \begin{array}[c]{ccccccccccccc}
&&&&&& 1 &&&&&&\\
&&&&&0&&0&&&&&\\
&&&&0&&3&&0&&&&\\
&&&0&&0&&0&&0&&&\\
&&0&&1&&22&&1&&0&&\\
&&&0&&0&&0&&0&&&\\
&&&&0&&3&&0&&&&\\
&&&&&0&&0&&&&&\\
&&&&&& 1 &&&&&&\\
\end{array}\]
(\romannumeral2) Let $X\subset\operatorname{Gr}(2,4)\times\mathbb{P}^1$ be a smooth hypersurface of multidegree $(2,1)$ with respect to the Pl\"ucker embedding (following \cite{FM}, we will say $X$ is ``of type B2''). Then $X$ is Fano, and the Hodge numbers of $X$ are
\[ \begin{array}[c]{ccccccccccccc}
&&&&&& 1 &&&&&&\\
&&&&&0&&0&&&&&\\
&&&&0&&2&&0&&&&\\
&&&0&&0&&0&&0&&&\\
&&0&&1&&22&&1&&0&&\\
&&&0&&0&&0&&0&&&\\
&&&&0&&2&&0&&&&\\
&&&&&0&&0&&&&&\\
&&&&&& 1 &&&&&&\\
\end{array}\]
\end{proposition}
\begin{proof} An easy way to determine the Hodge numbers is to use the following identification:
\begin{lemma}\label{blow} Let $Z$ be a smooth projective variety of Picard number 1, and $X\subset Z\times\mathbb{P}^1$ a general hypersurface of bidegree $(d,1)$. Then
$X$ is isomorphic to the blow-up of $Z$ with center $S$, where $S\subset Z$ is a smooth dimensionally transversal intersection of $2$ divisors of degree $d$.
Conversely, given a smooth dimensionally transversal intersection $S\subset Z$ of $2$ divisors of degree $d$, the blow-up of $Z$ with center $S$ is isomorphic to a smooth hypersurface $X\subset Z\times\mathbb{P}^1$ of bidegree $(d,1)$.
\end{lemma}
\begin{proof} This is \cite[Lemma 2.2]{FM}. The gist of the argument is that $X$ determines a pencil of divisors in $Z$, of which $S$ is the base locus.
In terms of equations, if $X$ is defined by $y_0 f + y_1 g=0$ (where $[y_0:y_1]\in\mathbb{P}^1$ and $f, g\in H^0(Z,\mathcal O_Z(d))$) then $S$ is defined by $f=g=0$.
It follows that for $X$ general (in the usual sense of ``being parametrized by a Zariski open in the parameter space'') the locus $S$ is smooth.
\end{proof}
In case (\romannumeral1), $Z=\mathbb{P}^3\times\mathbb{P}^1$ and $S$ is a genus 7 K3 surface. In case (\romannumeral2), $Z=\operatorname{Gr}(2,4)$ (which is a quadric in $\mathbb{P}^5$), and $S$ is a genus 5 K3 surface. This readily gives the Hodge numbers.
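In more detail, one can use the standard blow-up formula for Hodge numbers: if $X\to Z$ is the blow-up of a smooth projective fourfold $Z$ along a smooth surface $S$, then
\[ h^{p,q}(X)=h^{p,q}(Z)+h^{p-1,q-1}(S)\ .\]
In case (\romannumeral1), where $Z=\mathbb{P}^3\times\mathbb{P}^1$ and $S$ is a K3 surface, this gives for instance
\[ h^{1,1}(X)=2+1=3\ ,\ \ \ h^{3,1}(X)=0+1=1\ ,\ \ \ h^{2,2}(X)=2+20=22\ ,\]
in agreement with the diamond above. Case (\romannumeral2) is similar, using $h^{1,1}(\operatorname{Gr}(2,4))=1$ and $h^{2,2}(\operatorname{Gr}(2,4))=2$.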
\end{proof}
\begin{remark} Fatighenti--Mongardi \cite{FM} give a long list of Fano varieties of K3 type.
The Fano fourfolds of Proposition \ref{4folds}(\romannumeral1) and (\romannumeral2) are labelled B1 resp. B2 in their list.
\end{remark}
\section{Multiplicative Chow--K\"unneth decomposition}
\begin{definition}[Murre \cite{Mur}]\label{ck} Let $X$ be a smooth projective
variety of dimension $n$. We say that $X$ has a
{\em CK decomposition\/} if there exists a decomposition of the
diagonal
\[ \Delta_X= \pi^0_X+ \pi^1_X+\cdots +\pi^{2n}_X\ \ \ \hbox{in}\
A^n(X\times X)\ ,\]
such that the $\pi^i_X$ are mutually orthogonal idempotents and the action of
$\pi^i_X$ on $H^j(X)$ is the identity for $i=j$ and zero for $i\not=j$.
Given a CK decomposition for $X$, we set
\[ A^i(X)_{(j)} := (\pi_X^{2i-j})_\ast A^i(X)\ .\]
The CK decomposition is said to be {\em self-dual\/} if
\[ \pi^i_X = {}^t \pi^{2n-i}_X\ \ \ \hbox{in}\ A^n(X\times X)\ \ \ \forall
i\ .\]
(Here ${}^t \pi$ denotes the transpose of a cycle $\pi$.)
(NB: ``CK decomposition'' is short-hand for ``Chow--K\"unneth
decomposition''.)
\end{definition}
\begin{remark} \label{R:Murre} The existence of a Chow--K\"unneth decomposition
for any smooth projective variety is part of Murre's conjectures \cite{Mur},
\cite{MNP}.
It is expected that for any $X$ with a CK
decomposition, one has
\begin{equation*}\label{hope} A^i(X)_{(j)}\stackrel{??}{=}0\ \ \ \hbox{for}\
j<0\ ,\ \ \ A^i(X)_{(0)}\cap A^i_{num}(X)\stackrel{??}{=}0.
\end{equation*}
These are Murre's conjectures B and D, respectively.
\end{remark}
\begin{definition}[Definition 8.1 in \cite{SV}]\label{mck} Let $X$ be a
smooth
projective variety of dimension $n$. Let $\Delta_X^{sm}\in A^{2n}(X\times
X\times X)$ be the class of the small diagonal
\[ \Delta_X^{sm}:=\bigl\{ (x,x,x) : x\in X\bigr\}\ \subset\ X\times
X\times X\ .\]
A CK decomposition $\{\pi^i_X\}$ of $X$ is {\em multiplicative\/}
if it satisfies
\[ \pi^k_X\circ \Delta_X^{sm}\circ (\pi^i_X\otimes \pi^j_X)=0\ \ \ \hbox{in}\
A^{2n}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\]
In that case,
\[ A^i(X)_{(j)}:= (\pi_X^{2i-j})_\ast A^i(X)\]
defines a bigraded ring structure on the Chow ring\,; that is, the
intersection product has the property that
\[ \operatorname{im} \Bigl(A^i(X)_{(j)}\otimes A^{i^\prime}(X)_{(j^\prime)}
\xrightarrow{\cdot} A^{i+i^\prime}(X)\Bigr)\ \subseteq\
A^{i+i^\prime}(X)_{(j+j^\prime)}\ .\]
(For brevity, we will write {\em MCK decomposition\/} for ``multiplicative Chow--K\"unneth decomposition''.)
\end{definition}
\begin{remark}
The property of having an MCK decomposition is
severely restrictive, and is closely related to Beauville's ``(weak) splitting
property'' \cite{Beau3}. For more ample discussion, and examples of varieties
admitting a MCK decomposition, we refer to
\cite[Chapter 8]{SV}, as well as \cite{V6}, \cite{SV2},
\cite{FTV}, \cite{LV}.
\end{remark}
\begin{remark}\label{self} It turns out that any MCK decomposition is self-dual, cf. \cite[Footnote 24]{FV}.
\end{remark}
There are the following useful general results:
\begin{proposition}[Shen--Vial \cite{SV}]\label{product} Let $M,N$ be smooth projective varieties that have an MCK decomposition. Then the product $M\times N$ has an MCK decomposition.
\end{proposition}
\begin{proof} This is \cite[Theorem 8.6]{SV}, which shows more precisely that the {\em product CK decomposition\/}
\[ \pi^i_{M\times N}:= \sum_{k+\ell=i} \pi^k_M\times \pi^\ell_N\ \ \ \in A^{\dim M+\dim N}\bigl((M\times N)\times (M\times N)\bigr) \]
is multiplicative.
\end{proof}
\begin{proposition}[Shen--Vial \cite{SV2}]\label{blowup} Let $M$ be a smooth projective variety, and let $f\colon\widetilde{M}\to M$ be the blow-up with center a smooth closed subvariety
$N\subset M$. Assume that
\begin{enumerate}
\item $M$ and $N$ have an MCK decomposition;
\item the Chern classes of the normal bundle $\mathcal N_{N/M}$ are in $A^\ast_{(0)}(N)$;
\item the graph of the inclusion morphism $N\to M$ is in $A^\ast_{(0)}(N\times M)$;
\item the Chern classes $c_j(T_M)$ are in $A^\ast_{(0)}(M)$.
\end{enumerate}
Then $\widetilde{M}$ has an MCK decomposition, the Chern classes $c_j(T_{\widetilde{M}})$ are in $A^\ast_{(0)}(\widetilde{M})$, and the graph $\Gamma_f$ is in $A^\ast_{(0)}( \widetilde{M}\times M)$.
\end{proposition}
\begin{proof} This is \cite[Proposition 2.4]{SV2}. (NB: in loc. cit., $M$ and $N$ are required to have a {\em self-dual\/} MCK decomposition; however, the self-duality is actually a redundant hypothesis, cf. Remark \ref{self}.)
In a nutshell, the construction of loc. cit. is as follows. Given MCK decompositions $\pi^\ast_M$ and $\pi^\ast_N$ (of $M$ resp. $N$), one defines
\begin{equation}\label{pimck}
\pi^j_{\widetilde{M}}:= \Psi\circ\Bigl( \pi^j_M\oplus \bigoplus_{k=1}^r \pi^{j-2k}_N\Bigr)\circ \Psi^{-1}\ \ \in\ A^{\dim \widetilde{M}}(\widetilde{M}\times \widetilde{M})\ ,\end{equation}
where $r+1$ is the codimension of $N$ in $M$, and $\Psi, \Psi^{-1}$ are certain explicit correspondences (this is \cite[Equation (13)]{SV2}). Then one checks that the $\pi^\ast_{\widetilde{M}}$ form an MCK decomposition.
\end{proof}
\section{Main result}
\begin{theorem}\label{main} Let $X$ be a smooth fourfold of one of the following types:
\begin{itemize}
\item a hypersurface of multidegree $(2,1,1)$ in $\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1$;
\item a hypersurface of multidegree $(2,1)$ in $\operatorname{Gr}(2,4)\times\mathbb{P}^1$ (with respect to the Pl\"ucker embedding).
\end{itemize}
Then $X$ has an MCK decomposition. Moreover, the Chern classes $c_j(T_X)$ are in
$A^\ast_{(0)}(X)$.
\end{theorem}
\begin{proof} The argument relies on the alternative description of the general $X$ given by Lemma \ref{blow}.
\noindent
{\it Step 1:} We restrict to $X$ sufficiently general, in the sense that $X$ is a blow-up as in Lemma \ref{blow} with {\em smooth\/} center $S$.
To construct an MCK decomposition for $X$, we apply the general Proposition \ref{blowup}, with $M$ being either $\mathbb{P}^3\times\mathbb{P}^1$ or $\operatorname{Gr}(2,4)$, and $N$ being the K3 surface $S\subset M$ determined by Lemma \ref{blow}. All we need to do is check that the assumptions of Proposition \ref{blowup} are met.
Assumption (1) is verified: both $M$ (a variety with trivial Chow groups) and the K3 surface $S$ have an MCK decomposition. For $M$ there is no choice involved ($M$ has a unique CK decomposition, which is MCK). For $S$, we choose
\begin{equation}\label{k3} \pi^0_S:=\mathfrak{o}_S\times S\ ,\ \pi^4_S:=S\times \mathfrak{o}_S\ ,\ \pi^2_S:=\mathbb{D}elta_S-\pi^0_S-\pi^4_S\ \ \ \in\ A^2(S\times S)\ ,\end{equation}
where $\mathfrak{o}_S\in A^2(S)$ is the distinguished zero-cycle of \cite{BV}.
This is an MCK decomposition for $S$ \cite[Example 8.17]{SV}.
Assumption (4) is trivially satisfied: one has $A^\ast_{hom}(M)=0$, and so (because $\pi^j_M$ acts as zero on $H^{2i}(M)$ for $j\not=2i$) one has $A^\ast(M)=A^\ast_{(0)}(M)$.
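(For concreteness, let us recall the shape of the unique CK decomposition of $M$: since $M$ has trivial Chow groups, the diagonal decomposes as $\Delta_M=\sum_i a_i\times b_i$ in $A^{\dim M}(M\times M)$, where $\{b_i\}$ is a homogeneous basis of $A^\ast(M)$ and $\{a_i\}$ the dual basis, and one takes
\[ \pi^{2k}_M=\sum_{b_i\in A^k(M)} a_i\times b_i\ ,\ \ \ \pi^{2k+1}_M=0\ ;\]
one checks directly that this decomposition is multiplicative.)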
To check assumptions (2) and (3), we consider things family-wise. That is, we write
\[ \bar{B}:= \mathbb{P} H^0\bigl( M, \mathcal L^{\oplus 2}\bigr) \ ,\]
where the line bundle $\mathcal L$ is either $\mathcal O_M(2,1)$ (in case $M=\mathbb{P}^3\times\mathbb{P}^1$) or $\mathcal O_M(2)$ (in case $M=\operatorname{Gr}(2,4)$),
and we consider the universal complete intersection
\[\bar{\mathcal S}\ \to\ \bar{B}\ .\]
We write $B_0\subset\bar{B}$ for the Zariski open parametrizing smooth dimensionally transversal intersections, and $\mathcal S\to B_0$ for the base change (so the fibres $S_b$ of $\mathcal S\to B_0$ are exactly the K3 surfaces that are the centers of the blow-up occurring in Lemma \ref{blow}).
We now make the following claim:
\begin{claim}\label{gfc} Let $\Gamma\in A^i(\mathcal S)$ be such that
\[ \Gamma\vert_{S_b}=0\ \ \ \hbox{in}\ H^{2i}(S_b)\ \ \ \forall b\in B_0\ .\]
Then also
\[ \Gamma\vert_{S_b}=0\ \ \ \hbox{in}\ A^i(S_b)\ \ \ \forall b\in B_0\ .\]
\end{claim}
We argue that the claim implies that assumptions (2) and (3) of Proposition \ref{blowup} are met (and thus Proposition \ref{blowup} can be applied to prove Theorem \ref{main}).
Indeed, let $p_j\colon \mathcal S\times_{B_0} \mathcal S\to \mathcal S$, $j=1,2$, denote the two projections.
We observe that
\[ \pi^0_\mathcal S:={1\over 24} (p_1)^\ast c_2(T_{\mathcal S/{B_0}})\ ,\ \ \pi^4_\mathcal S:={1\over 24} (p_2)^\ast c_2(T_{\mathcal S/{B_0}})\ ,\ \ \pi^2_\mathcal S:= \Delta_\mathcal S-\pi^0_\mathcal S-\pi^4_\mathcal S\ \ \ \in A^4(\mathcal S\times_{B_0} \mathcal S) \]
defines a ``relative MCK decomposition'', in the sense that for any $b\in B_0$, the restriction $\pi^i_\mathcal S\vert_{S_b\times S_b}$ defines an MCK decomposition for $S_b$ which agrees with (\ref{k3}).
Let us now check that assumption (2) is satisfied. Since $A^1(S_b)=A^1_{(0)}(S_b)$, we only need to consider $c_2$ of the normal bundle. That is,
we need to check that for any $b\in B_0$ there is vanishing
\begin{equation}\label{need} (\pi^2_{S_b})_\ast c_2(\mathcal N_{S_b/M})\stackrel{??}{=}0\ \ \ \hbox{in}\ A^2(S_b)\ .\end{equation}
But we can write
\[ (\pi^2_{S_b})_\ast c_2(\mathcal N_{S_b/M}) = \Bigl( (\pi^2_\mathcal S)_\ast c_2(\mathcal N_{\mathcal S/(M\times B_0)})\Bigr)\vert_{S_b} \ \ \ \hbox{in}\ A^2(S_b)\ \]
(for the formalism of relative correspondences, cf. \cite[Chapter 8]{MNP}),
and besides we know that $(\pi^2_{S_b})_\ast c_2(\mathcal N_{S_b/M})$ is homologically trivial ($\pi^2_{S_b}$ acts as zero on $H^4(S_b)$). Thus, Claim
\ref{gfc} implies the necessary vanishing (\ref{need}).
Assumption (3) is checked similarly. Let $\iota_b\colon S_b\to M$ and $\iota\colon \mathcal S\to M\times B$ denote the inclusion morphisms.
To check assumption (3), we need to convince ourselves of the vanishing
\begin{equation}\label{need2} (\pi^\ell_{S_b\times M})_\ast (\Gamma_{\iota_b})\stackrel{??}{=}0\ \ \ \hbox{in}\ A^4(S_b\times M)\ \ \ \forall \ell\not= 8\ ,\ \forall b\in B_0\ .\end{equation}
Since $\Gamma_{\iota_b}\in A^4(S_b\times M)$, one knows that $ (\pi^\ell_{S_b\times M})_\ast (\Gamma_{\iota_b})$ is homologically trivial for any
$\ell\not=8$. Furthermore, we can write the cycle we are interested in as the restriction of a universal cycle:
\[ (\pi^\ell_{S_b\times M})_\ast (\Gamma_{\iota_b}) = \Bigl( (\sum_{j+k=\ell} \pi^j_\mathcal S\times \pi^k_{M}) _\ast (\Gamma_\iota)\Bigr)\vert_{S_b\times M}
\ \ \ \hbox{in}\ A^4(S_b\times M)\ .\]
For any $b\in B_0$, there is a commutative diagram
\[ \begin{array}[c]{ccc}
A^4(\mathcal S\times M) &\to& A^4(S_b\times M) \\
&&\\
\ \ \ \downarrow{\cong}&&\ \ \ \downarrow{\cong}\\
&&\\
\bigoplus A^\ast(\mathcal S) &\to& \bigoplus A^\ast(S_b)\\
\end{array} \]
where horizontal arrows are restriction to a fibre, and
where vertical arrows are isomorphisms because $M$ has trivial Chow groups.
Claim \ref{gfc} applied to the lower horizontal arrow shows the vanishing (\ref{need2}), and so assumption (3) holds.
It is only left to prove the claim. Since $A^i_{hom}(S_b)=0$ for $i\le 1$, the only non-trivial case is $i=2$.
Given $\Gamma\in A^2(\mathcal S)$ as in the claim, let $\bar{\Gamma}\in A^2(\bar{\mathcal S})$ be a cycle restricting to $\Gamma$.
We consider the two projections
\[ \begin{array}[c]{ccc}
\bar{\mathcal S}&\xrightarrow{\pi}& M \\
\ \ \ \ \downarrow{\scriptstyle \phi}&&\\
\ \ \bar{B}\ &&\\
\end{array}\]
Since any point of $M$ imposes exactly one condition on $\bar{B}$, the morphism $\pi$ has the structure of a projective bundle. As such, any
$\bar{\Gamma}\in A^{2}(\bar{\mathcal S})$ can be written
\[ \bar{\Gamma}= \sum_{\ell=0}^2 \pi^\ast( a_\ell) \cdot \xi^\ell \ \ \ \hbox{in}\ A^{2}(\bar{\mathcal S})\ ,\]
where $a_\ell\in A^{2-\ell}( M)$ and $\xi\in A^1(\bar{\mathcal S})$ is the relative hyperplane class.
Let $h:=c_1(\mathcal O_{\bar{B}}(1))\in A^1(\bar{B})$. There is a relation
\[ \phi^\ast(h)=\alpha \xi + \pi^\ast(h_1)\ \ \ \hbox{in}\ A^1(\bar{\mathcal S})\ ,\]
where $\alpha\in\mathbb{Q}$ and $h_1\in A^1(M)$. As in \cite[Proof of Lemma 1.1]{PSY}, one checks that $\alpha\not=0$ (if $\alpha$ were $0$, we would have
$\phi^\ast(h^{\dim \bar{B}})=\pi^\ast(h_1^{\dim \bar{B}})$, which is absurd since $\dim \bar{B}>4$ and so the right-hand side must be $0$).
Hence, there is a relation
\[ \xi = {1\over \alpha} \bigl(\phi^\ast(h)-\pi^\ast(h_1)\bigr)\ \ \ \hbox{in}\ A^1(\bar{\mathcal S})\ .\]
For any $b\in B_0$, the restriction of $\phi^\ast(h)$ to the fibre $S_b$ vanishes, and so it follows that
\[ \bar{\Gamma}\vert_{S_b} = a_0^\prime\vert_{S_b}\ \ \ \hbox{in}\ A^{2}(S_b)\ \]
for some $a_0^\prime\in A^2( M)$. But
$A^2( M)$ is generated by intersections of divisors in case $M=\mathbb{P}^3\times\mathbb{P}^1$, and $A^2(M)$ is generated by divisors and $c_2$ of the tautological bundle in case $M=\operatorname{Gr}(2,4)$. In both cases, it follows that
\[ \bar{\Gamma}\vert_{S_b} = a_0^\prime\vert_{S_b} \ \ \in\ \mathbb{Q} [{\mathfrak{o}}_{S_b}]\ \ \ \subset\ A^{2}(S_b)\ ,\]
(in the second case, this is proven as in \cite[Proposition 2.1]{PSY}).
Given a cycle ${\Gamma}\in A^2({\mathcal S})$ whose fibrewise restriction has degree zero, this shows that the fibrewise restriction is zero in $A^2(S_b)$.
Claim \ref{gfc} is proven.
\noindent
{\it Step 2:} It remains to extend to {\em all\/} smooth hypersurfaces as in the theorem. That is, let $B\subset\bar{B}$ be the open such that the Fano fourfold $X_b$ (which is the blow-up of $M$ with center $S_b$) is smooth. One has $B\supset B_0$. Let $\mathcal X\to B$ and $\mathcal X^0\to B_0$ denote the universal families of Fano fourfolds over $B$ resp. $B_0$.
From step 1, one knows that $X_b$ has an MCK decomposition for any $b\in B_0$. A closer look at the proof reveals more: the family $\mathcal X^0\to B_0$ has a
``universal MCK decomposition'', in the sense that there exist relative correspondences $\pi^\ast_{\mathcal X^0}\in A^4(\mathcal X^0\times_{B_0}\mathcal X^0)$ such that for each $b\in B_0$ the restriction $\pi^\ast_{X_b}:=\pi^\ast_{\mathcal X^0}\vert_b\in A^4(X_b\times X_b)$ forms an MCK decomposition for $X_b$. (To see this, one observes that Proposition \ref{blowup} is ``universal'': given families $\mathcal M\to B$, $\mathcal N\to B$ and universal MCK decompositions $\pi^\ast_\mathcal M$, $\pi^\ast_{\mathcal N}$, the result of (\ref{pimck}) is a universal MCK decomposition for $\widetilde{\mathcal M}\to B$.)
A standard argument now allows us to spread out the MCK property from $B_0$ to the larger base $B$. That is, we define
\[ \pi^j_\mathcal X:= \bar{\pi}^j_{\mathcal X^0}\ \ \in \ A^4(\mathcal X\times_B \mathcal X)\ \]
(where $\bar{\pi}$ refers to the closure of a representative of $\pi$). The ``spread lemma'' \cite[Lemma 3.2]{Vo} (applied to $\mathcal X\times_B \mathcal X$) gives that the $\pi^\ast_\mathcal X$ are a fibrewise CK decomposition, and the same spread lemma (applied to $\mathcal X\times_B \mathcal X\times_B \mathcal X$) gives that the $\pi^\ast_\mathcal X$ are a fibrewise MCK decomposition. This ends step 2.
\end{proof}
\begin{remark}\label{franch} Claim \ref{gfc} states that the families $\mathcal S\to B_0$ verify the ``Franchetta property'' as studied in \cite{FLV}. It is worth mentioning that the Franchetta property for the universal K3 surface of genus $g\le 10$ (and for some other values of $g$) was already proven in \cite{PSY}; the families considered in Claim \ref{gfc} are different, however, so Claim \ref{gfc} is not covered by \cite{PSY} (e.g., in case $M=\mathbb{P}^3\times\mathbb{P}^1$ the K3 surfaces of Claim \ref{gfc} have Picard number at least $2$, so they correspond to a Noether--Lefschetz divisor in $\mathcal F_7$).
As a corollary of Claim \ref{gfc}, the universal families $\mathcal X\to B$ of Fano fourfolds of type B1 or B2 also verify the Franchetta property. (Indeed, in view of \cite[Lemma 3.2]{Vo} it suffices to prove this for $\mathcal X^0\to B_0$.
In view of Lemma \ref{blow}, $\mathcal X^0$ can be
constructed as the blow-up of $M\times B_0$ with center $\mathcal S$. This blow-up yields a relative correspondence from $\mathcal X^0$ to $\mathcal S$, inducing a commutative diagram
\[ \begin{array}[c]{ccc}
A^j(\mathcal X^0) &\to& A^{j-1}(\mathcal S)\oplus \bigoplus \mathbb{Q}\\
&&\\
\downarrow&&\downarrow\\
&&\\
A^j(X_b) &\to& A^{j-1}(S_b)\oplus \bigoplus \mathbb{Q}\\
\end{array}\]
where horizontal arrows are injective (by the blow-up formula).
The Franchetta property for $\mathcal S\to B_0$ thus implies the Franchetta property for $\mathcal X^0\to B_0$.)
\end{remark}
\begin{remark} One would expect that for Fano varieties of K3 type, there is a {\em unique\/} MCK decomposition. I cannot prove this unicity for the Fano fourfolds of Theorem \ref{main}. (At least, one may observe that for $X$ as in Theorem \ref{main} the induced splitting of the Chow ring is canonical, since $A^j(X)=A^j_{(0)}(X)$ for $j\not=3$, whereas $A^3(X)=A^3_{(0)}(X)\oplus A^3_{(2)}(X)$ with $A^3_{(0)}=A^2(X)\cdot A^1(X)$ and $A^3_{(2)}(X)=A^3_{hom}(X)$).
\end{remark}
\section{Some consequences}
\subsection{An injectivity result}
\begin{corollary}\label{cor} Let $X$ and $M$ be as in Theorem \ref{main}, and let $m\in\mathbb{N}$. Let $R^\ast(X^m)\subset A^\ast(X^m)$ be the $\mathbb{Q}$--subalgebra
\[ \begin{split} R^\ast(X^m):= < (p_i)^\ast A^1(X), (p_i)^\ast A^2(X), (p_{ij})^\ast(\Delta_X), (p_i)^\ast c_j(T_X),& \\
(p_i)^\ast \operatorname{im}\bigl( A^i(M)\to A^i(X)&\bigr)>\ \ \ \subset\ A^\ast(X^m)\ .\\
\end{split}\]
(Here $p_i\colon X^m\to X$ and $p_{ij}\colon X^m\to X^2$ denote projection to the $i$th factor, resp. to the $i$th and $j$th factor.)
The cycle class map induces injections
$ R^j(X^m) \hookrightarrow H^{2j}(X^m)$
in the following cases:
\begin{enumerate}
\item $m=1$ and $j$ arbitrary;
\item $m=2$ and $j\ge 5$;
\item $m=3$ and $j\ge 9$.
\end{enumerate}
\end{corollary}
\begin{proof} Theorem \ref{main}, in combination with Proposition \ref{product}, ensures that $X^m$ has an MCK decomposition, and so $A^\ast(X^m)$ has the structure of a bigraded ring under the intersection product. The corollary is now implied by the conjunction of the two following claims:
\begin{claim}\label{c1} There is an inclusion
\[ R^\ast(X^m)\ \ \subset\ A^\ast_{(0)}(X^m)\ .\]
\end{claim}
\begin{claim}\label{c2} The cycle class map induces injections
\[ A^j_{(0)}(X^m)\ \hookrightarrow\ H^{2j}(X^m)\ \]
provided $m=1$, or $m=2$ and $j\ge 5$, or $m=3$ and $j\ge 9$.
\end{claim}
To prove Claim \ref{c1}, we note that $A^k_{hom}(X)=0$ for $k\not= 3$, which readily implies the equality $A^k(X)=A^k_{(0)}(X)$ for $k\not= 3$. The fact that $c_3(T_X)$ is in $A^3_{(0)}(X)$ is part of Theorem \ref{main}. The fact that $\Delta_X\in A^4_{(0)}(X\times X)$ is a general fact for any $X$ with a (necessarily self-dual) MCK decomposition \cite[Lemma 1.4]{SV2}. It remains to prove that codimension three
cycles coming from the ambient space $M$ are in $A^3_{(0)}(X)$. To this end, we observe that such cycles are universally defined, i.e.
\[ \operatorname{im}\Bigl( A^3(M)\to A^3(X)\Bigr)\ \ \subset\ \operatorname{im}\Bigl( A^3(\mathcal X)\ \to\ A^3(X)\Bigr)\ ,\]
where $\mathcal X\to B$ is the universal family as before, and $X=X_{b_0}$ for some $b_0\in B$. Given $a\in A^3(\mathcal X)$, applying the Franchetta property (Remark \ref{franch}) to
\[ \Gamma:= (\pi^j_\mathcal X)_\ast(a)\ \ \ \in A^3(\mathcal X)\ ,\ \ \ j\not=6\ ,\]
one finds that the restriction $a\vert_X\in A^3(X)$ lives in $A^3_{(0)}(X)$. In particular, it follows that
\[ \operatorname{im}\Bigl( A^3(M)\to A^3(X)\Bigr)\ \ \subset\ A^3_{(0)}(X)\ ,\]
as desired.
Since the projections $p_i$ and $p_{ij}$ are pure of grade $0$ \cite[Corollary 1.6]{SV2}, and $A^\ast_{(0)}(X^m)$ is a ring under the intersection product, this proves Claim \ref{c1}.
To prove Claim \ref{c2}, we observe that Manin's blow-up formula \cite[Theorem 2.8]{Sc} gives an isomorphism of motives
\[ h(X)\cong h(S)(1)\oplus \bigoplus {\mathds{1}}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
Moreover, in view of Proposition \ref{blowup} (cf. also \cite[Proposition 2.4]{SV2}), the correspondence inducing this isomorphism is of pure grade $0$.
In particular, for any $m\in\mathbb{N}$ we have isomorphisms of Chow groups
\[ A^j(X^m)\cong A^{j-m}(S^m)\oplus \bigoplus_{k=0}^4 A^{j-m+1-k}(S^{m-1})^{b_k} \oplus \bigoplus A^\ast(S^{m-2})\oplus \bigoplus_{\ell\ge 3} A^\ast(S^{m-\ell}) \ , \]
and this isomorphism respects the $A^\ast_{(0)}()$ parts. Claim \ref{c2} now follows from the fact that for any surface $S$ with an MCK decomposition, and any $m\in\mathbb{N}$, the cycle class map induces injections
\[ A^i_{(0)}(S^m)\ \hookrightarrow\ H^{2i}(S^m)\ \ \ \forall i\ge 2m-1\ \]
(this is noted in \cite[Introduction]{V6}, cf. also \cite[Proof of Lemma 2.20]{acs}).
\end{proof}
\subsection{Decomposition in the derived category}
Given a smooth projective morphism $\pi\colon \mathcal X\to B$, Deligne \cite{Del} has proven a decomposition in the derived category of sheaves of $\mathbb{Q}$-vector spaces on $B$:
\begin{equation}\label{del} R\pi_\ast\mathbb{Q}\cong \bigoplus_i R^i \pi_\ast\mathbb{Q}[-i]\ .\end{equation}
As explained in \cite{Vdec}, for both sides of this isomorphism there is a cup-product: on the right-hand side, this is the direct sum of the usual cup-products of local systems, while on the left-hand side, this is the derived cup-product (inducing the usual cup-product in cohomology). In general, the isomorphism (\ref{del}) is {\em not\/} compatible with these cup-products, even after shrinking the base $B$ (cf. \cite{Vdec}). In some rare cases, however, there is such a compatibility (after shrinking): this is the case for families of abelian varieties \cite{DM}, and for families of K3 surfaces \cite{Vdec}, \cite[Section 5.3]{Vo} (cf. also \cite[Theorem 4.3]{V6} and \cite[Corollary 8.4]{FTV} for some further cases).
Given the close link to K3 surfaces, it is not surprising that the Fano fourfolds of Theorem \ref{main} also have such a multiplicative decomposition:
\begin{corollary}\label{deldec} Let $\mathcal X\to B$ be a family of Fano fourfolds of type B1 or B2. There is a non-empty Zariski open $B^\prime\subset B$, such that the isomorphism (\ref{del}) becomes multiplicative after shrinking to $B^\prime$.
\end{corollary}
\begin{proof} This is a formal consequence of the existence of a relative MCK decomposition, cf. \cite[Proof of Theorem 4.2]{V6} and \cite[Section 8]{FTV}.
\end{proof}
Given a family $\mathcal X\to B$ and $m\in\mathbb{N}$, let us write $\mathcal X^{m/B}$ for the $m$-fold fibre product
\[ \mathcal X^{m/B}:=\mathcal X\times_B\mathcal X\times_B\cdots\times_B \mathcal X\ .\]
Corollary \ref{deldec} has the following concrete consequence, which is similar to a result for families of K3 surfaces obtained by Voisin \cite[Proposition 0.9]{Vdec}:
\begin{corollary}\label{deldec2} Let $\mathcal X\to B$ be a family of Fano fourfolds of type B1 or B2.
Let $z\in A^r(\mathcal X^{m/B})$ be a polynomial in (pullbacks of) divisors and codimension $2$ cycles on $\mathcal X$.
Assume that the fibrewise restriction $z\vert_b$ is homologically trivial, for some $b\in B$. Then there exists a non-empty Zariski open $B^\prime\subset B$
such that
\[ z=0\ \ \ \hbox{in}\ H^{2r}\bigl((\mathcal X^\prime)^{m/B^\prime},\mathbb{Q}\bigr)\ ,\]
where $\mathcal X^\prime$ denotes the restriction of the family $\mathcal X$ to $B^\prime$.
\end{corollary}
\begin{proof} The argument is the same as \cite[Proposition 0.9]{Vdec}. First, one observes that divisors $d_i$ and codimension $2$ cycles $e_j$ on $\mathcal X$ admit a cohomological decomposition (with respect to the Leray spectral sequence)
\[ \begin{split} d_i&= d_{i0} + \pi^\ast(d_{i2})\ \ \ \hbox{in}\ H^0(B, R^2\pi_\ast\mathbb{Q})\oplus \pi^\ast H^2(B,\mathbb{Q}) \cong H^2(\mathcal X,\mathbb{Q})\ ,\\
e_j&= e_{j0} +\pi^\ast(e_{j2}) +\pi^\ast(e_{j4})\ \ \ \hbox{in}\ H^0(B, R^4\pi_\ast\mathbb{Q})\oplus \pi^\ast H^2(B)^{\oplus 2} \oplus \pi^\ast H^4(B) \cong H^4(\mathcal X,\mathbb{Q})\ .\\
\end{split}\]
We claim that the cohomology classes $d_{ik}$ and $e_{jk}$ are {\em algebraic\/}. This claim implies the corollary: indeed, given a polynomial
$z=p(d_i,e_j)$, one may take $B^\prime$ to be the complement of the support of
the cycles $d_{i2}$, $e_{j2}$ and $e_{j4}$. Then over the restricted base one has equality
\[ z:=p(d_i,e_j)= p(d_{i0},e_{j0})\ \ \ \hbox{in}\ H^{2r}\bigl((\mathcal X^\prime)^{m/B^\prime},\mathbb{Q}\bigr)\ .\]
Multiplicativity of the decomposition ensures that (after shrinking the base some more)
\[ p(d_{i0},e_{j0})\ \ \in\ H^0(B^\prime, R^{2r}(\pi^m)_\ast\mathbb{Q})\ \ \subset\ H^{2r}\bigl((\mathcal X^\prime)^{m/B^\prime},\mathbb{Q}\bigr)\ ,\]
and so the conclusion follows.
The claim is proven for divisor classes $d_i$ in \cite[Lemma 1.4]{Vdec}. For codimension $2$ classes $e_j$, the argument is similar to loc. cit.:
let $h\in H^2(\mathcal X)$ be an ample divisor class, and let $h_0$ be the part that lives in $H^0(B,R^2\pi_\ast\mathbb{Q})$. One has
\[ e_j (h_0)^4 = e_{j0} (h_0)^4 +\pi^\ast(e_{j2}) (h_0)^4+\pi^\ast(e_{j4}) (h_0)^4\ \ \ \hbox{in}\ H^{12}(\mathcal X,\mathbb{Q})\ .\]
By multiplicativity, after some shrinking of the base the first two summands are contained in $H^0(B^\prime, R^{12}\pi_\ast\mathbb{Q})$, resp. in $H^2(B^\prime, R^{10}\pi_\ast\mathbb{Q})$, hence they are zero as $\pi$ has $4$-dimensional fibres. The above equality thus simplifies to
\[ e_j (h_0)^4 = \pi^\ast(e_{j4}) (h_0)^4\ \ \ \hbox{in}\ H^{12}(\mathcal X,\mathbb{Q})\ .\]
Pushing forward to $B^\prime$, one obtains
\[ \pi_\ast (e_j (h_0)^4)= \pi_\ast \bigl((h_0)^4\bigr) e_{j4} = \lambda\, e_{j4}\ \ \ \hbox{in}\ H^4(B^\prime)\ ,\]
for some $\lambda\in\mathbb{Q}^\ast$. As the left-hand side is algebraic, so is $e_{j4}$.
Next, one considers
\[ e_j (h_0)^3 = e_{j0} (h_0)^3 +\pi^\ast(e_{j2}) (h_0)^3+\pi^\ast(e_{j4}) (h_0)^3\ \ \ \hbox{in}\ H^{10}(\mathcal X,\mathbb{Q})\ .\]
The first summand is again zero for dimension reasons, and so
\[ \pi^\ast(e_{j2}) (h_0)^3 = e_j (h_0)^3 - \pi^\ast(e_{j4}) (h_0)^3\ \ \in\ H^{10}(\mathcal X,\mathbb{Q})\ \]
is algebraic. A fortiori, $\pi^\ast(e_{j2}) (h_0)^4$ is algebraic, and so
\[ \pi_\ast ( \pi^\ast (e_{j2}) (h_0)^4) = \pi_\ast \bigl((h_0)^4\bigr) e_{j2} = \mu\, e_{j2}\ \ \ \hbox{in}\ H^2(B^\prime)\ , \ \ \ \mu\in\mathbb{Q}^\ast\ ,\]
is algebraic.
\end{proof}
\vskip1cm
\begin{nonumberingt} Thanks to two referees for constructive comments. Thanks to Len-boy from pandavriendjes.fr.
\end{nonumberingt}
\vskip1cm
\end{document}
\begin{document}
\title[LIMIT THEOREMS FOR GENERALIZED RANDOM GRAPHS]{\bf LIMIT THEOREMS FOR NUMBER OF EDGES\\ IN THE GENERALIZED RANDOM GRAPHS\\ WITH RANDOM VERTEX WEIGHTS}
\author[Z.S. Hu]{Z.S. Hu}
\address{Z.S. Hu\\
Department of Statistics and Finance\\
University of Science and Technology of China\\
Hefei, China
}
\email{[email protected]}
\author[V.V. Ulyanov]{V.V. Ulyanov}
\address{V.V. Ulyanov\\
Faculty of Computational Mathematics and Cybernetics\\
Moscow State University \\
Moscow, 119991, Russia\\
and National Research University Higher School of Economics (HSE),
Moscow, 101000, Russia
}
\email{[email protected]}
\author[Q.Q. Feng]{Q.Q. Feng}
\address{Q.Q. Feng\\
Department of Statistics and Finance\\
University of Science and Technology of China\\
Hefei, China
}
\email{[email protected]}
\thanks{}
\keywords{Generalized random graphs, random vertex weights, central limit type theorems, total number of edges}
\begin{abstract}
We get central limit type theorems for the total number of edges in the generalized random graphs with random vertex weights under different moment conditions on the distributions of the weights.
\end{abstract}
\maketitle
\renewcommand{\refname}{References}
Complex networks attract increasing attention of researchers in various fields of science. In recent years, numerous network models have been proposed. Owing to the uncertainty and the lack of regularity in real-world
networks, these models are usually random graphs. Random graphs were first defined by Paul Erd\H{o}s and Alfr\'{e}d R\'{e}nyi in their 1959 paper ``On Random Graphs'', see \cite{UVV:ER}, and independently by Gilbert in \cite{UVV:G}. The suggested models are closely related: there are $n$ isolated vertices, and every possible edge occurs independently with probability $p:\, 0 < p < 1$. It is assumed that there are no self-loops. Later the models were generalized.
A natural generalization
of the Erd\H{o}s and R\'{e}nyi random graph is that the equal edge probabilities are replaced by probabilities depending on the vertex weights. Vertices with higher weights are more likely to have more neighbors
than vertices with small weights. Vertices with extremely high weights could act as the hubs observed in many real-world networks.
The following generalized random graph model was first introduced by Britton et al., see \cite{UVV:Britt}. Let $\{1, 2, \dots , n\}$ be the set
of vertices, and let $W_i > 0$ be the weight of vertex $i$, $1\leq i\leq n$. The probability of an edge between any two vertices $i$ and $j$ is equal to
$$
p_{ij} = \frac
{W_i W_j}
{L_n + W_i W_j}
,
$$
where $L_n = \sum^n_{i=1} W_i$ denotes the total weight of all vertices, and the weights $W_i, i = 1, 2, \dots , n$ can be taken to be
deterministic or random. If we take all the $W_i$ equal to the same constant,
$W_i \equiv n \lambda/(n - \lambda)$ for some $0 < \lambda < n$, it is easy to see that $p_{ij} = \lambda/n$ holds for
all $1 \leq i < j \leq n$. That is, the Erd\H{o}s--R\'{e}nyi random graph with $p = \lambda/n$ is a special case of the generalized random graph.
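As a quick numerical sanity check of this reduction (our own illustration, not part of the original text), the following snippet verifies that constant weights $W_i \equiv n\lambda/(n-\lambda)$ give exactly $p_{ij}=\lambda/n$:

```python
# Check that constant weights W_i = n*lambda/(n - lambda) reduce the
# generalized random graph to the Erdos--Renyi model with p = lambda/n.
n, lam = 1000, 3.0
W = n * lam / (n - lam)        # common value of every vertex weight
L_n = n * W                    # total weight L_n = sum_i W_i
p_ij = W * W / (L_n + W * W)   # edge probability for any pair i != j
print(p_ij, lam / n)           # the two numbers coincide
```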
There are many versions of the generalized random graphs, such as the Poissonian random graph (introduced by Norros and Reittu in \cite{UVV:Norros} and
studied by Bhamidi et al. \cite{UVV:Bhamidi}), the rank-1 inhomogeneous random graph (see \cite{UVV:Bollob}), the random graph with
prescribed degrees (see \cite{UVV:Chung}), etc. Under some common conditions (see \cite{UVV:Janson}), all of the above mentioned random
graph models are asymptotically equivalent, meaning that all events have asymptotically equal probabilities. For an updated review of the
results on these inhomogeneous random graphs, see Chapters 6 and 9 in \cite{UVV:Hofstad}.
In the present paper we assume that
$W_i, i = 1, 2, \dots , n,$
are independent identically distributed random variables distributed as $W$. Let $E_n$ be the total number of edges in a generalized random graph with vertex weights $W_1, W_2, \dots , W_n.$ In \cite{UVV:CFE}, under the conditions that $W$ has a finite or infinite mean, several weak
laws of large numbers for $E_n$ are established, see also Ch.6, \cite{UVV:Hofstad}. For instance, in \cite{UVV:CFE} and Ch.6, \cite{UVV:Hofstad}, it is proved that $E_n/n$ tends in probability to $\E W/2$, provided $\E W$ is finite.
Note that
$$
E_n = \frac{1}{2}\,\sum^{n}_{i=1}D_i,
$$
where $D_i$, $i = 1, 2, \dots , n$, is the degree of vertex $i$, i.e. the number of edges incident to vertex $i$. Clearly, the random variables $D_1, D_2, \dots , D_n$ are dependent.
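To illustrate numerically the law of large numbers $E_n/n \stackrel{p}{\to} \E W/2$ quoted above, one can sample the model directly. This is our own sketch; the choice of $\mathrm{Exp}(1)$ weights is purely for illustration:

```python
import numpy as np

def sample_edge_count(n, rng):
    """Sample E_n for a generalized random graph with i.i.d. Exp(1) weights."""
    W = rng.exponential(1.0, size=n)        # vertex weights W_1,...,W_n
    L = W.sum()                             # total weight L_n
    WW = np.outer(W, W)
    P = WW / (L + WW)                       # p_ij = W_i W_j / (L_n + W_i W_j)
    edges = rng.random((n, n)) < P          # conditionally independent edges
    return int(np.triu(edges, k=1).sum())   # count each pair once, no self-loops

rng = np.random.default_rng(0)
n = 1000
E_n = sample_edge_count(n, rng)
print(E_n / n)   # close to E W / 2 = 0.5 for Exp(1) weights
```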
The aim of the present paper is to refine the law of large numbers type results for $E_n$ and to obtain central limit type theorems under different moment conditions on $W$. In Theorem~\ref{thm1} we assume that $\E W^2 < \infty$; this yields a normal limit distribution for $\{E_n\}$ after proper normalization. In Theorem~\ref{thm2} we assume that the distribution of $W$ belongs to the domain of attraction of a stable law $F$ with characteristic exponent $\alpha: 1 < \alpha < 2$. We then prove that the limit distribution of the normalized $E_n$ is $F$.
\begin{theorem} \label{thm1}
If $\E W^2<\infty$, then
\begin{eqnarray*}
\frac{2E_n-n\E W}{\sqrt{n\,(2\E W+\mbox{Var}(W))}}\stackrel{d}{\longrightarrow} N(0,1).
\end{eqnarray*}
\end{theorem}
\begin{proof}
For every integer $n\geq 1$ put
\begin{eqnarray}\label{hu0}
b_n=\frac{1}{2}n\,\E W,~c_n=\frac{1}{2}\sqrt{n\,\mbox{Var}(W)}.
\end{eqnarray}
For any $t\in \mathbb{R}$, we have
\begin{eqnarray}
\E\exp\Big\{{it}\frac{E_n-b_n}{c_n}\Big\}&=& \E\exp\Big\{\frac{it}{c_n}\Big(\sum\limits_{1\le i<j\le n}I_{ij}-b_n\Big)\Big\}\nonumber\\
&=& \E\Big(\E\Big(\exp\Big\{\frac{it}{c_n}\Big(\sum\limits_{1\le i<j\le n}I_{ij}-b_n\Big)\Big\}\Big| W_1, \cdots, W_n\Big)\Big)\nonumber\\
&=& \E\Big(e^{-itb_n/c_n}\prod_{1\le i<j\le n}\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}\Big)\nonumber\\
&:=& \E e^{Y_n}, \label{hu1}\nonumber
\end{eqnarray}
where
\begin{eqnarray}
Y_n&=&\sum_{1\le i<j\le n}\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\
&=&\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}
-\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2} \label{hu2}
\end{eqnarray}
and $\log(\cdot)$ is the principal value of the complex logarithm function.
By using the Maclaurin series expansion of $\log (1+x)$ for complex $x$ with $|x|<1$, we have that
\begin{eqnarray*}
\frac{|\log(1+x)|}{|x|}\longrightarrow 1,~~\frac{|\log(1+x)-x|}{|x|^2}\longrightarrow \frac{1}{2}~~~~\mbox{as}~~~~|x|\rightarrow 0.
\end{eqnarray*}
Hence
there exists some constant $c_0>0$ such that $|\log(1+x)|\le 2|x|$ and $|\log(1+x)-x|\le |x|^2$
hold for any $|x|\le c_0$.
Clearly, for any fixed $t$, there exists $n_0=n_0(t) \in \mathbb{N}$
such that for all $n\ge n_0$ and any $1\leq i, j \leq n$ one has
\begin{eqnarray*}
\Big|\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}\Big|\le |e^{it/c_n}-1|\le |t|/c_n \le c_0.
\end{eqnarray*}
Thus, since
\begin{eqnarray}\label{hu00}
\frac{ L_n}{n} \rightarrow \E W~~a.s.~~ \mbox{and}~~ \frac{\sum_{i=1}^n W^2_i}{n} \rightarrow \E W^2 ~~a.s.,
\end{eqnarray}
we have for any $n\ge n_0$
\begin{eqnarray}
\Big|\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2}\Big|&\le& \sum_{i=1}^n \Big|\log\Big(1+\frac{(e^{it/c_n}-1)W_i^2}{L_n+W_i^2}\Big)\Big|\nonumber\\
&\le& 2|e^{it/c_n}-1|\sum_{i=1}^n\frac{W_i^2}{L_n+W_i^2}\nonumber\\
&\le& 2\frac{|t|}{c_n}\frac{\sum_{i=1}^n W_i^2}{L_n}\rightarrow 0~~a.s. \label{hu3}
\end{eqnarray}
and
\begin{eqnarray}
&&~~~~\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\
&&=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\Big(1+\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}\Big)-\frac{itb_n}{c_n}\nonumber\\
&&=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}
+O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}\nonumber\\
&&=\frac{1}{2}\Big(e^{it/c_n}-1-\frac{it}{c_n}+\frac{t^2}{2c_n^2}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\nonumber\\
&&~~~~~~+\frac{1}{2}\Big(\frac{it}{c_n}-\frac{t^2}{2c_n^2}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\
&&~~~~~~
+O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}\nonumber\\
&&:=I_1+I_2+I_3, \label{hu3-1}
\end{eqnarray}
where $|O_1|\le 1/2$.
By (\ref{hu00}) and the inequality $|e^{ix}-1-ix+x^2/2|\le |x|^3/6$ for any $x\in \mathbb{R}$, we have
\begin{eqnarray}
|I_1| \le \frac{|t|^3}{12c_n^3} \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n}
=\frac{|t|^3L_n}{12c_n^3}{\longrightarrow} 0~~a.s. \label{hu4}
\end{eqnarray}
Similarly, by (\ref{hu00}) and the inequality $|e^{ix}-1|\le |x|$, we get
\begin{eqnarray}
|I_3|\le \frac{t^2}{2c_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n^2}=\frac{t^2}{c_n^2}\Big(\frac{1}{n}\sum_{i=1}^n W_i^2\Big)^2\Big(\frac{n}{L_n}\Big)^2
{\longrightarrow} 0~~a.s.
\end{eqnarray}
Recalling the definition (\ref{hu0}) for $b_n$ and $c_n$,
we have
\begin{eqnarray*}
I_2&=&\frac{1}{2}\Big(\frac{it}{c_n}-\frac{t^2}{2c_n^2}\Big)\Big( \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n}- \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n(L_n+W_iW_j)}\Big)-\frac{itb_n}{c_n}\\
&=& it\frac{L_n-n\E W}{\sqrt{n\mbox{Var}(W)}}-\frac{t^2L_n}{n\mbox{Var}(W)}-
\frac{1}{2}\Big(\frac{it}{c_n}-\frac{t^2}{2c_n^2}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n(L_n+W_iW_j)}.
\end{eqnarray*}
Moreover, by (\ref{hu00}) we get
\begin{eqnarray*}
\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n(L_n+W_iW_j)}&&\le \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n^2}
\\ &&=\frac{\Big(\sum_{i=1}^nW_i^2\Big)^2}{L_n^2}\rightarrow \Big(\frac{\E W^2}{\E W}\Big)^2~~a.s.
\end{eqnarray*}
The central limit theorem yields
\begin{eqnarray}
I_2\stackrel{d}{\longrightarrow} it{\bf N}- t^2 \E W/\mbox{Var}(W),
\label{hu5}
\end{eqnarray}
where ${\bf N}$ is a standard normal random variable.
Now, it follows from (\ref{hu2})--(\ref{hu5}) that
\begin{eqnarray*}
Y_n \stackrel{d}{\longrightarrow} it{\bf N}- t^2 \E W/\mbox{Var}(W).
\end{eqnarray*}
Hence, by noting that $|e^{Y_n}|\le 1$ and applying the Lebesgue dominated convergence theorem, we get that, for any $t\in \mathbb{R}$,
\begin{eqnarray*}
\E\exp\Big\{{it}\frac{E_n-b_n}{c_n}\Big\}&=&\E e^{Y_n}\rightarrow \E\exp\{it{\bf N}- t^2 \E W/\mbox{Var}(W)\}\\
&=&\exp\{-(1/2)t^2(1+2\E W/\mbox{Var}(W))\}.
\end{eqnarray*}
Thus, Theorem \ref{thm1} is proved.
\end{proof}
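A small Monte Carlo experiment (our own illustration, with $\mathrm{Exp}(1)$ weights, for which $\E W=\mbox{Var}(W)=1$) shows the standardized statistic of Theorem \ref{thm1} behaving like a standard normal variable:

```python
import numpy as np

def sample_edge_count(n, rng):
    """One draw of E_n from the generalized random graph with Exp(1) weights."""
    W = rng.exponential(1.0, size=n)
    L = W.sum()
    WW = np.outer(W, W)
    return int(np.triu(rng.random((n, n)) < WW / (L + WW), k=1).sum())

rng = np.random.default_rng(1)
n, reps = 400, 300
EW, VarW = 1.0, 1.0   # for Exp(1): E W = Var W = 1
# Standardize as in Theorem 1: (2 E_n - n E W) / sqrt(n (2 E W + Var W))
Z = np.array([(2.0 * sample_edge_count(n, rng) - n * EW)
              / np.sqrt(n * (2.0 * EW + VarW)) for _ in range(reps)])
print(Z.mean(), Z.var())   # both close to the N(0,1) values 0 and 1
```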
In the following theorem we get convergence of the sequence $\{E_n\}$ under weaker moment conditions on $W_i$'s.
\begin{theorem} \label{thm2}
Let $W, W_1,W_2,\cdots$ be a sequence of i.i.d. nonnegative random variables and
\begin{eqnarray}
\frac{W_1+\cdots+W_n-n\E W}{a_n}\stackrel{d}{\longrightarrow} F,
\label{hu200}
\end{eqnarray}
where $F$ is a stable distribution with characteristic exponent $\alpha : 1<\alpha<2$,
then
\begin{eqnarray*}
\frac{2E_n-n\E W}{a_n}\stackrel{d}{\longrightarrow} F.
\end{eqnarray*}
\end{theorem}
Before proving the theorem, let us state some properties of the distribution of $W$.
If (\ref{hu200}) holds true, then $a_n$ (see e.g. \cite{UVV:FUS}, Ch. XVII, \S 5) is a regularly varying function with exponent $1/\alpha$ satisfying
\begin{eqnarray}
n\E W^2I(W\le a_n)\sim a_n^2, \label{hu204}
\end{eqnarray}
and there exists some constant $c>0$ and $h(x)$, a slowly varying function at $\infty$,
such that
\begin{eqnarray}
P(W>x)\sim cx^{-\alpha}h(x). \label{hu205}
\end{eqnarray}
We shall use the following lemma.
\begin{lemma} \label{lemma1}
If (\ref{hu205}) holds with $\alpha: 1<\alpha<2$, then we have
\begin{eqnarray*}
&&~~\E W^2I(W\le x) \sim \frac{c\alpha}{2-\alpha}\, x^{2-\alpha}h(x),\\
&&~~\E WI(W\ge x) \sim \frac{c\alpha}{\alpha-1}\, x^{1-\alpha}h(x).
\end{eqnarray*}
\end{lemma}
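As a numerical illustration of the growth rates in Lemma \ref{lemma1} (our own sketch), take $W$ Pareto-distributed with $P(W>x)=x^{-\alpha}$ for $x\ge 1$, so that $c=1$ and $h\equiv 1$. The truncated second moment and the tail first moment then grow like $x^{2-\alpha}$ and $x^{1-\alpha}$ respectively, which we recover by numerically integrating the density $\alpha t^{-\alpha-1}$:

```python
import numpy as np

alpha = 1.5   # characteristic exponent, 1 < alpha < 2

def integrate(f, a, b, m=400_000):
    """Composite trapezoidal rule on [a, b]."""
    t = np.linspace(a, b, m)
    y = f(t)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(t)) / 2.0)

density = lambda t: alpha * t ** (-alpha - 1.0)  # Pareto density on [1, inf)
trunc2 = lambda x: integrate(lambda t: t**2 * density(t), 1.0, x)  # E W^2 I(W<=x)
tail1 = lambda x: integrate(lambda t: t * density(t), x, 1e6)      # E W I(W>x)

# Estimate the growth exponents from two scales x1 < x2
x1, x2 = 100.0, 400.0
g2 = np.log(trunc2(x2) / trunc2(x1)) / np.log(x2 / x1)  # ~ 2 - alpha
g1 = np.log(tail1(x2) / tail1(x1)) / np.log(x2 / x1)    # ~ 1 - alpha
print(g2, g1)
```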
For the proof of the lemma see, e.g., \cite{UVV:FUS}, Ch. XVII, \S 5.
Now we are ready to prove Theorem \ref{thm2}.
\begin{proof} Let $b_n=(1/2)\,n\,\E W$ and $c_n=(1/2)\,a_n$ with $a_n$ from (\ref{hu204}).
As in the proof of Theorem \ref{thm1}, for any $t\in \mathbb{R}$, we also write
\begin{eqnarray*}
\E\exp\Big\{{it}\frac{E_n-b_n}{c_n}\Big\}
= \E e^{Y_n}
\end{eqnarray*}
with new definition for $c_n$ and
\begin{eqnarray*}
Y_n=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}
-\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2}.
\end{eqnarray*}
For the last sum, for any $n\ge n_0$, where $n_0=n_0(t)$ is defined in the proof of Theorem \ref{thm1}, we have (cf. (\ref{hu3}))
\begin{eqnarray*}
\Big|\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2}\Big|
\le 2\frac{|t|}{c_n}\frac{\sum_{i=1}^n W_i^2}{L_n}.
\end{eqnarray*}
Similarly to (\ref{hu3-1}), we get
\begin{eqnarray*}
&&~~~~\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\
&&=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}
+O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}\nonumber\\
&&=\frac{1}{2}\Big(e^{it/c_n}-1-\frac{it}{c_n}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\nonumber\\
&&~~~~~~+\frac{1}{2}\frac{it(L_n-2b_n)}{c_n}-
\frac{1}{2}\frac{it}{c_n} \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n (L_n+W_iW_j)}\nonumber\\
&&~~~~~~
+O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}
\end{eqnarray*}
with $|O_1|\le 1/2$. By the assumption of the theorem, $(L_n-2b_n)/(2c_n)\stackrel{d}{\rightarrow} F$. Since
\begin{eqnarray*}
|e^{ix}-1|\le |x|,~~ |e^{ix}-1-ix|\le |x|^2/2 ~~~~\mbox{for all}~~~~ x\in \mathbb{R},
\end{eqnarray*}
in order to
prove Theorem \ref{thm2}, we only need to show that
\begin{eqnarray}
&&\frac{1}{a_n}\frac{\sum_{i=1}^n W_i^2}{L_n}\stackrel{p}{\longrightarrow} 0, \label{hu20-1}\\
&&\frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\stackrel{p}{\longrightarrow} 0, \label{hu201}\\
&& \frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n (L_n+W_iW_j)}\stackrel{p}{\longrightarrow} 0, \label{hu202}\\
&&\frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{(L_n+W_iW_j)^2}
\stackrel{p}{\longrightarrow} 0. \label{hu203}
\end{eqnarray}
For any $\gamma: 0<\gamma<\alpha$, we have $\E (W^2)^{(\alpha-\gamma)/2}=\E W^{\alpha-\gamma}<\infty$. Then,
by the Marcinkiewicz--Zygmund strong law of large numbers (see e.g. Theorem 4.23 in \cite{UVV:KAL}),
we have
\begin{eqnarray*}
n^{-2/(\alpha-\gamma)}\sum_{i=1}^n W_i^2\rightarrow 0~~a.s.
\end{eqnarray*}
Since $a_n$ is a regularly varying function with exponent $1/\alpha$,
we have $1/a_n=o(n^{-1/\alpha+\gamma})$.
Now choose $\gamma>0$ such that
$$2/(\alpha-\gamma)-1-1/\alpha+\gamma<0~~~~\mbox{ and}~~~~ -2/\alpha+1+2\gamma<0.$$
Then we have
\begin{eqnarray*}
\frac{1}{a_n}\frac{\sum_{i=1}^n W_i^2}{L_n}=\frac{n^{2/(\alpha-\gamma)-1}}{a_n}\frac{\sum_{i=1}^n W_i^2/n^{2/(\alpha-\gamma)}}{L_n/n}=o(n^{2/(\alpha-\gamma)-1-1/\alpha+\gamma})\longrightarrow 0~~a.s.
\end{eqnarray*}
and
\begin{eqnarray*}
\frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\le
\frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n}=\frac{n}{a_n^2}\frac{L_n}{n} =o(n^{-2/\alpha+1+2\gamma})
{\longrightarrow} 0~~a.s.
\end{eqnarray*}
Thus we get (\ref{hu20-1}) and (\ref{hu201}).
To prove (\ref{hu202}), we write
\begin{eqnarray*}
&& ~~~~\frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n (L_n+W_iW_j)}\\
&&= \frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j\le n)}{L_n (L_n+W_iW_j)}
+\frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j>n)}{L_n (L_n+W_iW_j)}\\
&&\le \frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j\le n)}{L_n^2}
+\frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_jI(W_iW_j>n)}{L_n }\\
&&\le \frac{n^2}{a_nL_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j\le n)}{n^2}
+\frac{n}{a_nL_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_jI(W_iW_j>n)}{n}.
\end{eqnarray*}
Further, by (\ref{hu00})
and by using the fact that $\E|X_n| \rightarrow 0$ implies $X_n\stackrel{p}{\rightarrow} 0$,
in order to prove (\ref{hu202}), it is sufficient to show that
\begin{eqnarray}
&&\frac{1}{a_n}\E W_1^2W_2^2I(W_1W_2\le n){\longrightarrow} 0,\label{hu208}\\
&&\frac{n}{a_n}\E W_1W_2I(W_1W_2> n){\longrightarrow} 0. \label{hu209}
\end{eqnarray}
For any $\alpha\in (1,2)$, we can choose $\delta>0$ satisfying $2-\alpha-1/\alpha+2\delta<0$.
By Lemma \ref{lemma1}, there exists some constant $c_1=c_1(\alpha,\delta)>0$ such that
\begin{eqnarray*}
\E W^2I(W\le x) \le c_1 x^{2-\alpha+\delta},~~
~\E WI(W\ge x)\le c_1 x^{1-\alpha+\delta}
\end{eqnarray*}
hold for all $x>1$. Hence
\begin{eqnarray*}
&&~~\frac{1}{a_n}\E W_1^2W_2^2I(W_1W_2\le n)\\
&&=\frac{1}{a_n}\E\Big(W_2^2I(W_2\le n)\E (W_1^2I(W_1\le n/W_2)|W_2)\Big)\\
&&~~~~~~
+\frac{1}{a_n}\E\Big(W_2^2I(W_2> n)\E (W_1^2I(W_1\le n/W_2)|W_2)\Big)\\
&&\le \frac{c_1}{a_n}\E\Big(W_2^2( n/W_2)^{2-\alpha+\delta}\Big)
+\frac{1}{a_n}\E\Big(W_2^2I(W_2> n)(n/W_2)^2\Big)\\
&&=\frac{c_1n^{2-\alpha+\delta}}{a_n}\E W^{\alpha-\delta}
+\frac{n^2}{a_n}P(W> n).
\end{eqnarray*}
Since by (\ref{hu205}) we have
$
P(W>n)\sim cn^{-\alpha} h(n)=o(n^{-\alpha+\delta})
$ and $1/a_n=o(n^{-1/\alpha+\delta})$, we get
\begin{eqnarray*}
\frac{1}{a_n}\E W_1^2W_2^2I(W_1W_2\le n)=o(n^{2-\alpha-1/\alpha+2\delta})\rightarrow 0~~~\mbox{as}~~~n\rightarrow \infty.
\end{eqnarray*}
Thus, we get (\ref{hu208}).
Similarly, we have
\begin{eqnarray*}
&&~~\frac{n}{a_n}\E W_1W_2I(W_1W_2> n)\\
&&=\frac{n}{a_n}\E\Big(W_2I(W_2\le n)\E(W_1I(W_1> n/W_2)|W_2)\Big)\\
&&~~~~~~
+\frac{n}{a_n}\E\Big(W_2I(W_2> n)\E (W_1I(W_1> n/W_2)|W_2)\Big)\\
&&\le \frac{c_1n}{a_n}\E\Big(W_2(n/W_2)^{1-\alpha+\delta}\Big)
+\frac{n}{a_n}\E\Big(W_2I(W_2> n)\E W_1\Big)\\
&&=\frac{c_1n^{2-\alpha+\delta}}{a_n}\E W^{\alpha-\delta}
+\frac{n}{a_n}\E W \E(WI(W> n))\\
&&\le \frac{c_1n^{2-\alpha+\delta}}{a_n}\E W^{\alpha-\delta}
+\frac{c_1n^{2-\alpha+\delta}}{a_n}\E W=o(n^{2-\alpha-1/\alpha+2\delta})\rightarrow 0.
\end{eqnarray*}
Hence (\ref{hu209}), and therefore (\ref{hu202}), are proved.
Finally, (\ref{hu203}) follows from (\ref{hu202}), since $(L_n+W_iW_j)^2\ge L_n(L_n+W_iW_j)$ and $1/a_n^2\le 1/a_n$ for all $n$ large enough.
The proof of Theorem \ref{thm2} is complete.
\end{proof}
\begin{thebibliography}{99}
\bibitem{UVV:Bhamidi} S. Bhamidi, R. van der Hofstad, J.S.H. van Leeuwaarden, ``Novel scaling limits for critical inhomogeneous random graphs'', {\it Ann. Probab.}, {\bf 40}, 2299--2361 (2012).
\bibitem{UVV:Bollob} B. Bollob\'{a}s, S. Janson, O. Riordan, ``The phase transition in inhomogeneous random graphs'', {\it Random Struct. Algorithms}, {\bf 31}, 3--122 (2007).
\bibitem{UVV:Britt} T. Britton, M. Deijfen, A. Martin-L\"{o}f, ``Generating simple random graphs with prescribed degree distribution'', {\it J. Stat. Phys.}, {\bf 124}, 1377--1397 (2006).
\bibitem{UVV:Chung} F. Chung, L. Lu, ``The volume of the giant component of a random graph with given expected degrees'', {\it SIAM J. Discrete Math.}, {\bf 20}, 395--411 (2006) (electronic).
\bibitem{UVV:ER} P. Erd\H{o}s, A. R\'{e}nyi, ``On Random Graphs'', {\it Publ. Math. Debrecen}, {\bf 6}, 290--297 (1959).
\bibitem{UVV:FUS} W. Feller, {\it An Introduction to Probability Theory and Its Applications}, Vol. 2, Wiley, New York (1971).
\bibitem{UVV:G} E.N. Gilbert, ``Random graphs'', {\it Annals of Mathematical Statistics}, {\bf 30}, 1141--1144 (1959).
\bibitem{UVV:CFE} Z.S. Hu, W. Bi, Q.Q. Feng, ``Limit laws in the generalized random graphs with random vertex weights'', {\it Statistics and Probability Letters}, {\bf 89}, 65--76 (2014).
\bibitem{UVV:Janson} S. Janson, ``Asymptotic equivalence and contiguity of some random graphs'', {\it Random Struct. Algorithms}, {\bf 36}, 26--45 (2010).
\bibitem{UVV:KAL} O. Kallenberg, {\it Foundations of Modern Probability}, Springer-Verlag, New York (2002).
\bibitem{UVV:Norros} I. Norros, H. Reittu, ``On a conditionally Poisson graph process'', {\it Adv. in Appl. Probab.}, {\bf 38}, 59--75 (2006).
\bibitem{UVV:Hofstad} R. van der Hofstad, {\it Random Graphs and Complex Networks}, unpublished manuscript, available at: http://www.win.tue.nl/~rhofstad/NotesRGCN.pdf (2016).
\end{thebibliography}
\end{document}
\begin{document}
\title{Nonlocality tests enhanced by a third observer}
\author{Daniel Cavalcanti}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543}
\author{Rafael Rabelo}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543}
\author{Valerio Scarani}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543}
\affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542}
\begin{abstract}
We consider Bell tests involving bipartite states shared between three parties.
We show that the simple inclusion of a third party may greatly simplify the measurement scenario (in terms of the number of measurement settings per party) and allows the identification
of previously unknown nonlocal resources.
\end{abstract}
\maketitle
\textit{Introduction.--} The implementation of quantum networks is a crucial goal of the quantum communication program \cite{network}. Ultimately, a quantum network aims at distributing quantum correlations among distant locations through necessarily imperfect quantum channels. These correlations can later be used to perform quantum information and communication protocols \cite{nielsen}. Nonlocal correlations, in the sense of Bell \cite{bell}, are the prototypical example of quantum correlations, but not all quantum correlations are non-local: famously, there exist states that do not violate any Bell inequality, but which are entangled and in fact distillable \cite{werneretal}. Recently, it has been noticed that nonlocal correlations allow device-independent quantum information protocols ({\it{e.g.}}, quantum key distribution \cite{diqkd} and random number generation \cite{dirng}), in which the success can be assessed without the need of characterizing the states and measurements being used. If one wants to implement a device-independent protocol, it is not enough to distribute entanglement: the network must generate nonlocal correlations. This applied motivation adds to the more speculative one of performing fundamental tests of quantum physics at large distances.
We consider nonlocality tests in tripartite networks. We show
the existence of bipartite states $\rho$ from which only local (classical) correlations can be obtained if one of the parties performs a finite number of measurements; however, if two copies of $\rho$ are shared between three parties, nonlocal correlations can be obtained from only two dichotomic observables per party. We also show that high dimensional maximally entangled states can stand an arbitrary amount of separable noise and still be used to distribute tripartite nonlocal correlations. Furthermore, we identify two-qubit states that do not violate any two-input-two-output Bell inequality, but several copies of them do. Our findings show that the simple addition of a third party makes Bell tests much more powerful.
\textit{Methods.--}
Our results make use of the following observation \cite{pr}. Consider an initial $N$-partite quantum state: if there exist local measurements on $k$ parties such that, for at least one of the measurement outcomes, the remaining $N-k$ parties are projected onto a nonlocal state, then the initial state is necessarily nonlocal. This fact can be proved by contradiction: if the initial state is local, any reduced conditional state will also be local.
\begin{figure}
\centering
\includegraphics[width = 0.47\textwidth]{Fig1A.eps}
\caption{(Color online) Illustration of the methods employed in this letter. Alice, Bob and Charlie share two copies of a bipartite state. Bob, in the middle, performs a measurement $y$ on his subsystem which, for a given outcome $b$, produces a nonlocal state between Alice and Charlie.}
\label{Fig1A}
\end{figure}
The results we present are obtained by considering \textit{networks composed of only three parties}. Alice, Bob and Charlie share two copies of a bipartite state $\rho$. To reveal the nonlocality of this tripartite state, Bob applies a measurement aiming to leave Alice and Charlie with a nonlocal state (Fig.~\ref{Fig1A}) \cite{polish}.
Note that Bob's measurement does not need to be a single projective measurement: in fact, the cases of interest below involve the preparation of an ancillary system or a sequence of measurements. In the former case, Bob can prepare a bipartite state $\ket{\Phi}$ and teleport it through the bipartite states composing the network, which can be seen as channels through which the state $\ket{\Phi}$ is distributed. Thus, the most natural choice of $\ket{\Phi}$ is the state most robust against the noise channels defined by the network states. This simple argument allows us to link the problem of deciding whether a given state $\rho$ is a nonlocal resource to the problem of finding states $\ket{\Phi}$ robust against the channel defined by $\rho$. This also provides a connection to the problem of finding unbounded violations of Bell inequalities \cite{unbounded BI}.
\textit{Isotropic state.--} First we consider the \textit{isotropic state} \begin{equation}\label{isotropic}
\rho^{iso}_{AB}=p\ket{\Psi_+^d}\bra{\Psi_+^d}+(1-p)\frac{\openone}{d^2},
\end{equation}
where $\ket{\Psi_+^d}=\sum_{i=0}^{d-1}\ket{ii}/\sqrt{d}$ is the maximally entangled state and $\openone/d^2$ the maximally mixed state, both in $\mathbb{C}^d\otimes\mathbb{C}^d$. This state appears naturally in the scenario where half of a maximally entangled state is sent through a \textit{depolarizing channel} with depolarizing probability $p$.
A single copy of an isotropic state is known to be local for $p\lesssim O(\frac{\log d}{d})$ \cite{noise} (in case of projective measurements) and nonlocal for $p\gtrsim 0.67$, in the limit $d\rightarrow \infty$ \cite{cglmp}. Moreover, the isotropic state was previously shown to be a nonlocal resource for $p>1/2$, again in the limit $d\rightarrow \infty$ \cite{us}. Here we improve this bound and show that the isotropic state is a nonlocal resource for $p> O(\frac{\sqrt{\log d}}{d^{1/4}})$ in the same limit.
We use the fact that there exist bipartite states $\ket{\Phi}$ with local dimension $d$ that achieve unbounded violations of a Bell inequality with respect to $d$ \cite{unbounded BI,unbounded BI 2}. In fact, by measuring such states in the appropriate measurement basis, one obtains a probability distribution $P_\Phi$ such that the distribution
$qP_\Phi+(1-q)P_{loc}$
is nonlocal for $q\geq O(\frac{\log d}{d^{1/2}})$ and any local probability distribution $P_{loc}$, in the limit $d\rightarrow\infty$ \cite{unbounded BI 2}.
Consider then the scenario described in Fig.~\ref{Fig1BC}a. Initially, the state $\rho^{iso}_{AB_1} \otimes \rho^{iso}_{B_2 C}$ is shared between Bob, who possesses systems $B_1$ and $B_2$, and Alice and Charlie. Bob performs a joint generalized measurement on his subsystems that corresponds to preparing the state $\ket{\Phi}$ and teleporting each of its components to Alice and Charlie through the channels defined by the states $\rho^{iso}_{AB_1}$ and $\rho^{iso}_{B_2 C}$. In the case Bob obtains the outcome corresponding to the state $\ket{\Psi_+^d}$ for each teleportation measurement, the final state shared between Alice and Charlie is
\begin{eqnarray}
\rho^{f} &=& p^{2}\ket{\Phi}\bra{\Phi} + p(1-p)\sigma_{A} \otimes \frac{\openone_{C}}{d}\nonumber\\&+& p(1-p)\frac{\openone_{A}}{d} \otimes \sigma_{C} + (1-p)^{2} \frac{\openone_{A}}{d}\otimes\frac{\openone_{C}}{d},
\end{eqnarray}
where $\sigma_{i}$ is the reduced state of part $i$. Performing then appropriate measurements \cite{unbounded BI}, Alice and Charlie end up with a probability distribution of the following form
\begin{equation}\label{dist}
p^{2}P_{\Phi} + (1-p^2)P_{loc}.
\end{equation}
This distribution, as previously stated, is nonlocal for $p^{2} \geq O(\frac{\log d}{d^{1/2}})$. Thus we conclude that the isotropic state \eqref{isotropic} is a nonlocal resource for $p \geq O(\frac{\sqrt{\log d}}{d^{1/4}})$. Note that this remains valid if we replace the state $\openone/d^2$ in \eqref{isotropic} with any local --- in particular, any separable --- state, since this would also result in a distribution of the form \eqref{dist}.
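For the lowest dimension $d=2$ (not covered by the asymptotic bounds above), one can check directly that a single copy of the isotropic state violates the CHSH inequality exactly when $p>1/\sqrt{2}$. The sketch below is our own illustration; the measurement settings are the standard optimal ones for $\ket{\Psi_+^2}$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Psi_+^2>

def chsh(p):
    """CHSH value of p|Psi+><Psi+| + (1-p) I/4 with A0=Z, A1=X, B0,B1=(Z+-X)/sqrt(2)."""
    rho = p * np.outer(psi, psi.conj()) + (1.0 - p) * np.eye(4) / 4.0
    B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)
    E = lambda a, b: np.trace(rho @ np.kron(a, b)).real   # correlator <A (x) B>
    return E(Z, B0) + E(Z, B1) + E(X, B0) - E(X, B1)

# S(p) = 2*sqrt(2)*p, so the local bound S <= 2 is beaten exactly for p > 1/sqrt(2)
print(chsh(1.0), chsh(1.0 / np.sqrt(2)), chsh(0.5))
```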
\begin{figure}
\centering
\includegraphics[width = 0.47\textwidth]{Fig1BC.eps}
\caption{(Color online) Measurement protocols used to obtain nonlocality from the isotropic (Eq.~\eqref{isotropic}) and erased (Eq.~\eqref{erased}) states. a) Isotropic state: the measurement $y$ consists in preparing a bipartite system in the state $\ket{\Phi}$ and then teleporting each of its parts. b) Erased state: the measurement $y$ consists of two steps: first, independent measurements $y_1$ are performed on subsystems $B_1$ and $B_2$; second, a Bell state measurement $y_2$ is performed.}
\label{Fig1BC}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.4\textwidth]{IsotropicState.eps}
\caption{Nonlocality properties of the isotropic state.}
\label{FigIsoBound}
\end{figure}
\textit{Erased state.--} Next we consider the \emph{erased state}, given by
\begin{equation}\label{erased}
\rho_{AB}^{eras}=\frac{1}{k}\ket{\Psi_+^2}\bra{\Psi_+^2}_{AB}+(1-\frac{1}{k})\frac{\openone_A}{2}\otimes \ket{2}\bra{2}_B,
\end{equation}
where $\ket{\Psi_+^2}=(\ket{00}+\ket{11})/\sqrt{2}$. This is a qubit-qutrit state that can be seen as the result of sending half of a two-qubit maximally entangled state through the \textit{erasure channel} \cite{erasureC}. This channel leaves the state untouched with probability $1/k$ and ``erases'' it with probability $1-1/k$. It is not known whether the erased state is local; however, it has a $k$-symmetric extension with respect to subsystem B, meaning that there exists a state of $k+1$ parties $\rho_{A B_1 B_2 ...B_k}$ such that $\rho_{A B_i}=\rho^{eras}_{AB}$ for every $i$. This implies that only local correlations can be extracted from \eqref{erased} in any experiment where Bob chooses $k$ measurements, regardless of the number of outcomes of those measurements, and regardless of the number of Alice's measurements and their outcomes \cite{terhal}.
We now show that two copies of $\rho_{AB}^{eras}$ shared between three parties violate a Bell inequality in which two of the parties perform only two measurements, and one of them performs a single measurement. The state to be considered is $\rho_{AB_1}^{eras}\otimes \rho_{B_2 C}^{eras}$. This state corresponds to two erased states being shared between Alice, Bob, who possesses systems $B_1$ and $B_2$, and Charlie. Moreover, Bob carries the qutrit part of the states ({\it{i.e.}}, the part with respect to which the extension is possible).
The measurement protocol that reveals the nonlocality of $\rho_{AB_1}^{eras}\otimes \rho_{B_2 C}^{eras}$ is illustrated in Fig.~\ref{Fig1BC}b. Bob measures both subsystems $B_1$ and $B_2$ with the projective measurement $y_1=\{M_B^0\equiv\ket{0}\bra{0}+\ket{1}\bra{1},M_B^1=\ket{2}\bra{2}\}$. In case he obtains the outcomes corresponding to the measurement operator $M_B^0$, {\it{i.e.}}, both systems $B_1$ and $B_2$ are projected onto the subspace $\ket{0}\bra{0}+\ket{1}\bra{1}$, the global state is projected onto $\ket{\Psi_+^2}\bra{\Psi_+^2}_{AB_1}\otimes\ket{\Psi_+^2}\bra{\Psi_+^2}_{B_2 C}$. Finally, Bob applies a Bell state measurement $y_2$ on his subsystems, which, for every outcome, produces a maximally entangled state between Alice and Charlie. Thus, we can apply the previously stated observation to conclude that the state $\rho_{AB_1}^{eras}\otimes \rho_{B_2 C}^{eras}$ is nonlocal: in other words, the erased state \eqref{erased} is a nonlocal resource.
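After Bob's $y_1$ outcomes project $B_1$ and $B_2$ onto the qubit subspace, the remaining step is standard entanglement swapping. The NumPy sketch below (our illustration) verifies it for the $\ket{\Phi_+}$ outcome of the Bell measurement; the other three outcomes yield maximally entangled states up to local Pauli corrections.

```python
import numpy as np

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # |Phi_+> = (|00> + |11>)/sqrt(2)

# Four qubits ordered (A, B1, B2, C):  |Phi_+>_{AB1} (x) |Phi_+>_{B2C}
psi = np.kron(phi, phi).reshape(2, 2, 2, 2)

# Bell-state measurement y2 on (B1, B2); keep the |Phi_+> outcome
bell = phi.reshape(2, 2)
chi = np.einsum('jk,ajkc->ac', bell.conj(), psi)   # unnormalized post-measurement AC state
prob = np.sum(np.abs(chi) ** 2)                    # probability of this outcome
chi = chi / np.sqrt(prob)

assert np.isclose(prob, 0.25)             # each Bell outcome occurs with probability 1/4
assert np.allclose(chi.flatten(), phi)    # Alice and Charlie end up in |Phi_+>
```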
It is important to stress that this result is valid for any $k$. This means that there exist states which provide only local correlations in measurement scenarios involving an arbitrary (finite) number of measurements in one of the parties and an infinite number of measurements in the other party, but two copies provide nonlocality in a very simple two-measurement scenario.
\textit{Random two-qubit states.--} We would like to start exploring to which extent \textit{general two-qubit states} are nonlocal resources. Those that violate a Bell inequality certainly are, so we should concentrate on those that do not. However, at the moment of writing, the existence of a local model for some regime of parameters is known only for very few families of states \cite{werneretal}. Here, we rather adopt a less ambitious goal, and try to see which two-qubit states that \textit{do not violate the CHSH inequality} give rise to nonlocal correlations in the tripartite scenario. A similar restriction was adopted in Refs.~\cite{sen,polish}. In Ref.~\cite{miguel tamas} the authors focus on the bipartite scenario and exhibit states $\rho_{1}$ and $\rho_{2}$ such that neither $\rho_{1}^{\otimes N}$ nor $\rho_{2}^{\otimes N}$ violates the CHSH inequality for any $N$, but the state $\rho_{1} \otimes \rho_{2}$ does violate it.
The methods applied are partially based on the fact that every one-way entanglement distillable state is a nonlocal resource: many copies of such a state violate a Bell inequality in the scenario of Fig.~\ref{Fig1BC}b \cite{us}. A sufficient criterion for a state to be one-way distillable is that its local entropy is greater than its global entropy, {\it{i.e.}},
\begin{eqnarray}\label{hashing}
\max\{S\left( \rho_{A} \right),S\left( \rho_{B} \right)\} > S\left( \rho_{AB} \right),
\end{eqnarray}
where $S\left( \rho \right) = - \mathrm{Tr}\left( \rho \log \rho \right)$ is the von Neumann entropy of $\rho$ \cite{hashing}. Note, however, that while the previous examples used only two copies of the state, in the present case Alice, Bob, and Charlie must share many copies of $\rho_{AB}$ to obtain nonlocality.
The algorithm we use in our study works as follows. First, a random two-qubit density matrix is drawn according to the Hilbert-Schmidt measure \cite{Zyc}. Then, we check if the given state violates the CHSH inequality by means of the necessary and sufficient criterion for violation proposed in Ref.~\cite{Horodecki}. If the state does not violate the CHSH inequality, the sufficient criterion \eqref{hashing} is checked: if it is satisfied, the state is indeed a nonlocal resource, even though it does not violate the CHSH inequality.
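The steps above can be sketched numerically as follows (Python/NumPy, our own illustrative implementation; the Ginibre construction samples from the Hilbert-Schmidt measure \cite{Zyc}, \texttt{violates\_chsh} implements the criterion of \cite{Horodecki}, and \texttt{one\_way\_distillable} the hashing criterion \eqref{hashing}). All function names are ours.

```python
import numpy as np

rng = np.random.default_rng(1234)
paulis = [np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def random_hs_state(d=4):
    """Random density matrix from the Hilbert-Schmidt measure (Ginibre trick)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def violates_chsh(rho):
    """Horodecki criterion: CHSH is violated iff the two largest eigenvalues
    of T^t T sum to more than 1, where T_ij = Tr[rho (sigma_i x sigma_j)]."""
    T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in paulis]
                  for si in paulis])
    t = np.sort(np.linalg.eigvalsh(T.T @ T))
    return t[-1] + t[-2] > 1

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def one_way_distillable(rho):
    """Sufficient (hashing) criterion: max(S(rho_A), S(rho_B)) > S(rho_AB)."""
    r = rho.reshape(2, 2, 2, 2)
    rho_A = np.einsum('abcb->ac', r)   # partial trace over B
    rho_B = np.einsum('abac->bc', r)   # partial trace over A
    return max(entropy(rho_A), entropy(rho_B)) > entropy(rho)

# a state is flagged as a nonlocal resource if it passes the hashing
# criterion while not violating CHSH (only a small fraction does)
rho = random_hs_state()
flagged = (not violates_chsh(rho)) and one_way_distillable(rho)
```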
We picked $10^{6}$ random states, of which $\sim 99.1\%$ happened not to violate the CHSH inequality. Among these states, we find that $\sim 0.08\%$ are one-way entanglement distillable and, thus, nonlocal resources.
Also, we applied the same methods to the states given in \cite{miguel tamas}. Remarkably, we found that both $\rho_{1}$ and $\rho_{2}$ are nonlocal resources according to our criteria, even though neither $\rho_{1}^{\otimes N}$ nor $\rho_{2}^{\otimes N}$ is able to violate the CHSH inequality.
\textit{Two-qubit states under local decoherence.--} In the previous section, we considered randomly picked two-qubit states. However, in practice, one usually deals with specific types of noise such as depolarization, dephasing or amplitude damping \cite{nielsen}. It is thus worthwhile to study the nonlocality properties of quantum states subjected to these noisy processes. Here we consider the same problem as described above, namely, testing whether states that do not violate the CHSH inequality are nonlocal resources, now for two-qubit states on which the mentioned decoherence processes act.
We draw $10^{5}$ two-qubit pure states according to the Fubini-Study measure \cite{Zyc}; then, each of the qubits is evolved \textit{locally} according to the three processes mentioned before. A parameter $t \in [0,1]$ parametrizes the strength of the process, which can also be thought of as the time during which the system is under decoherence. For instance, $t = 0$ means that no decoherence has yet acted upon the system, while $t = 1$ means that the system has fully decohered under the process. For definiteness, we assume that both qubits undergo the same type of decoherence and for the same time $t$. For each initial pure state we consider $10^{3}$ equally spaced time-steps in the interval $[0,1]$. Then, at each evolution step, we test the sufficient criterion for activation of nonlocality described in the previous section. We then compute, for each channel, the number of initial states that present activation of nonlocality at some stage of the evolution, as well as the mean number of steps in which activation occurred. The results are summarized in Table \ref{tab}.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Process & NLR states (\%) & NLR interval \\
\hline
AD & $92.6$ & $0.078 \pm 0.078$ \\
PD & $68.5$ & $0.023 \pm 0.020$ \\
D & $56.4$ & $0.005 \pm 0.002$ \\
\hline
\end{tabular}
\caption{For each decoherence process (amplitude damping (AD), phase damping (PD), and depolarization (D)), we present the percentage of initial states that, at some stage of the evolution, do not violate the CHSH inequality yet are nonlocal resources (NLR) according to the criteria presented, together with the mean width (over all initial states) of the evolution interval in which this is observed, with corresponding standard deviations, in units of $t$.}\label{tab}
\end{table}
\begin{figure}
\centering
\includegraphics[width = 0.4\textwidth]{Fig2.eps}
\caption{The figure pictorially shows, in terms of the parameter $t$ and for a particular decoherence process, the interval in which a given evolved state is nonlocal ($\mathrm{CHSH}>2$) and the interval in which it does not violate the CHSH inequality but is a nonlocal resource (NLR).}
\label{Fig2}
\end{figure}
\textit{Discussions.--} We have shown that nonlocality can be extracted from bipartite systems much more easily if they are shared in a tripartite network. For instance, Ref.~\cite{miguel tamas} showed that there are bipartite states that do not violate the CHSH inequality, but two copies of them do. Here we show that, by simply considering three parties, it is possible to find examples of states from which an arbitrary finite number of measurements would provide only local correlations, but two copies of the same state shared between three parties provide nonlocality with only two measurements per party.
The tripartite scenario also provides an interesting link between the nonlocality properties of a given state and its capability of distributing nonlocality when used as a channel in quantum networks. This fact gives additional motivation to seek robust states, which, in turn, is related to the search for unbounded violations of Bell inequalities. We thus expect that this work will motivate further research on these topics.
We thank F. Brand\~ao, I. Villanueva, W. D\"ur, and N. Brunner for discussions. This work is supported by the National Research Foundation and the Ministry of Education of Singapore. DC acknowledges the PVE-CAPES program (Brazil).
\begin{thebibliography}{30}
\bibitem{network} H. J. Kimble, Nature (London) \textbf{453}, 1023 (2008).
\bibitem{nielsen} M. A. Nielsen, I. L. Chuang, ``Quantum Computation and Quantum Information'', Cambridge Univ. Press (2000).
\bibitem{bell} J. S. Bell, Physics \textbf{1}, 195 (1964).
\bibitem{werneretal} R. F. Werner, Phys. Rev. A {\bf 40}, 4277 (1989); J. Barrett, Phys. Rev. A {\bf 65}, 042302 (2002); M. L. Almeida, S. Pironio, J. Barrett, G. T\'oth and A. Ac\'\i n, Phys. Rev. Lett. {\bf 99}, 040403 (2007).
\bibitem{diqkd} S. Pironio, {\it{et al.}},
New J. Phys. \textbf{11}, 045021 (2009).
\bibitem{dirng} S. Pironio, {\it{et al.}},
Nature \textbf{464}, 1021 (2010).
\bibitem{pr} S. Popescu, D. Rohrlich, Phys. Lett. A \textbf{166}, 293 (1992).
\bibitem{unbounded BI} M. Junge {\it{et al.}}, Phys. Rev. Lett. \textbf{104}, 170405 (2010).
\bibitem{noise} M. L. Almeida {\it{et al.}}, Phys. Rev. Lett. \textbf{99}, 040403 (2007).
\bibitem{cglmp} D. Collins {\it{et al.}},
Phys. Rev. Lett. \textbf{88}, 040404 (2002).
\bibitem{us} D. Cavalcanti, M. L. Almeida, V. Scarani, A. Ac\'in, Nat. Comm. \textbf{2}, 184 (2011).
\bibitem{unbounded BI 2} M. Junge, C. Palazuelos, Comm. Math. Phys. \textbf{306} (3), 695-746 (2011).
\bibitem{erasureC} C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, Phys. Rev. Lett. \textbf{78}, 3217 (1997).
\bibitem{terhal} B. M. Terhal, A. C. Doherty, D. Schwab, Phys. Rev. Lett. \textbf{90}, 157903 (2003).
\bibitem{sen} A. Sen {\it{et al.}}, Phys. Rev. A \textbf{72}, 042310 (2005).
\bibitem{polish} A. Wojcik, J. Modlawska, A. Grudka, M. Czechlewski, Phys. Lett. A \textbf{374}, 4831 (2010).
\bibitem{miguel tamas} M. Navascu\'es, T. V\'ertesi, Phys. Rev. Lett. \textbf{106}, 060403 (2011).
\bibitem{hashing} I. Devetak, A. Winter, Proc. R. Soc. London, Ser. A \textbf{461}, 207 (2005).
\bibitem{Zyc} K. Zyczkowski, I. Bengtsson, ``Geometry of Quantum States'', Cambridge (2006).
\bibitem{Horodecki} R. Horodecki, P. Horodecki, M. Horodecki, Phys. Lett. A. \textbf{200}, 340 (1995).
\end{thebibliography}
\textbf{Appendix - Decoherence processes.}
The action of each decoherence process can be described by the map
$\rho^{\prime} = \sum_{i} E_{i} \rho E_{i}^{\dagger}$, where $\rho$ is the initial state of the system \cite{nielsen}.
The strength of the process
is given by a parameter $t \in [0,1]$, that can be viewed as the probability of full action of the process on the system. Assuming the quantum system is a single qubit,
the depolarization (D) process is described by
\begin{eqnarray}
E^{D}_{0} = \sqrt{1 - \frac{3t}{4}}\openone, \quad E^{D}_{i} = \sqrt{\frac{t}{4}}\,\sigma_{i},\nonumber
\end{eqnarray}
where $\sigma_{i}$, for $i = 1,2,3$, are the Pauli matrices. For the phase damping (PD) process,
\begin{eqnarray}
E^{PD}_{0} = \sqrt{1-t}\,\openone, \quad E^{PD}_{1} = \sqrt{t}\,\sigma_3;
\nonumber
\end{eqnarray}
and, for amplitude damping (AD),
\begin{eqnarray}
E^{AD}_{0} & = & \ket{0}\bra{0} + \sqrt{1-t}\ket{1}\bra{1}, \quad
E^{AD}_{1} = \sqrt{t}\ket{0}\bra{1}.
\nonumber
\end{eqnarray}
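The three sets of Kraus operators can be checked numerically. The Python/NumPy sketch below (ours) verifies the completeness relation $\sum_{i} E_{i}^{\dagger} E_{i} = \openone$ for each channel and applies a channel locally to both qubits of a two-qubit state, as in the main text. (For phase damping we take $E^{PD}_{0} \propto \openone$ at $t = 0$, consistent with $t = 0$ meaning no decoherence.)

```python
import numpy as np

def kraus_ops(channel, t):
    """Single-qubit Kraus operators; t in [0,1], t = 0 is the identity channel."""
    s1 = np.array([[0, 1], [1, 0]])
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]])
    if channel == 'D':    # depolarization
        return [np.sqrt(1 - 3 * t / 4) * np.eye(2)] + \
               [np.sqrt(t / 4) * s for s in (s1, s2, s3)]
    if channel == 'PD':   # phase damping (E0 proportional to identity at t = 0)
        return [np.sqrt(1 - t) * np.eye(2), np.sqrt(t) * s3]
    if channel == 'AD':   # amplitude damping
        return [np.array([[1, 0], [0, np.sqrt(1 - t)]]),
                np.array([[0, np.sqrt(t)], [0, 0]])]

def apply_local(rho, channel, t):
    """Apply the channel independently to each qubit of a two-qubit state."""
    ops = kraus_ops(channel, t)
    return sum(np.kron(E, F) @ rho @ np.kron(E, F).conj().T
               for E in ops for F in ops)

# completeness: sum_i E_i^dag E_i = identity, for every channel and strength
for ch in ('D', 'PD', 'AD'):
    for t in (0.0, 0.3, 1.0):
        total = sum(E.conj().T @ E for E in kraus_ops(ch, t))
        assert np.allclose(total, np.eye(2))
```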
\end{document}
\begin{document}
\title[Regularity of torsion form]
{Regularity of analytic torsion form on families of normal coverings}
\author{Bing Kwan SO}
\author{GuangXiang SU}
\thanks{Chern Institute of Mathematics and LPMC, Nankai University, Tianjin, 300071, China. \\
e-mail: [email protected] (B.K. So), [email protected] (G. Su)}
\begin{abstract}We prove the smoothness of the $L ^2 $-analytic torsion form for fiber bundles with positive Novikov-Shubin invariant.
We do so by generalizing the arguments of Azzali-Goette-Schick to an appropriate Sobolev space,
and proving that the Novikov-Shubin invariant is also positive in the Sobolev setting,
using an argument of Alvarez Lopez-Kordyukov.
\end{abstract}
\maketitle
\section{Introduction}
Let $M$ be a closed Riemannian manifold and $F$ a flat vector bundle on $M$; Ray and Singer in \cite{RS} introduced the analytic torsion, which is the analytic analogue of the combinatorial torsion (cf. \cite{Mi}). Let $Z \to M \xrightarrow{\pi} B $ be a fiber bundle with connected closed fibers $Z_{x}=\pi^{-1}(x)$ and let $F$ be a flat complex vector bundle on $M$ with a flat connection $\nabla^{F}$ and a Hermitian metric $h^{F}$. Let $T^{H}M$ be a horizontal distribution for the fiber bundle and $g^{TZ}$ a vertical Riemannian metric. Then in \cite{Bismut;AnaTorsion} Bismut and Lott introduced the torsion form $\mathcal{T}(T^{H}M, g^{TZ},h^{F})\in \Omega(B)$ (cf. \cite[(3.118)]{Bismut;AnaTorsion}) defined by
\begin{align}
\label{BLDfn}
\mathcal{T}(T^{H}M,g^{TZ},h^{F})
=- \int_{0}^{+\infty} & \Big[f^{\wedge}(C_{t}',h^{W})- \frac{\chi'(Z;F)}{2}f'(0) \\ \nonumber
&-\Big(\frac{\dim (Z) \rk (F)\chi(Z)}{4}
-\frac{\chi'(Z;F)}{2} \Big) f'\Big(\frac{i\sqrt{t}}{2}\Big) \Big] \frac{dt}{t}.
\end{align}
See \cite{Bismut;AnaTorsion} for the meaning of the terms in the integrand. To show that the integral in the above formula is well defined, one needs to compute the asymptotics of $f^{\wedge}(C_{t}',h^{W})$ as $t\to 0$ and as $t\to \infty$. For the asymptotics as $t\to 0$, they used local index techniques. For the asymptotics as $t\to \infty$, the key fact is that the fiber $Z$ is closed, so the positive eigenvalues of the fiberwise operators involved have a uniform positive lower bound. They also proved a $C^{\infty}$-analogue of the Riemann-Roch-Grothendieck theorem, proved that the torsion form is the transgression of the Riemann-Roch-Grothendieck theorem (cf. \cite[Theorem 3.23]{Bismut;AnaTorsion}), and showed that the zero degree part of $\mathcal{T}(T^{H}M, g^{TZ},h^{F})\in\Omega(B)$ is the Ray-Singer analytic torsion (cf. \cite[Theorem 3.29]{Bismut;AnaTorsion}).
On the other hand, the $L^2$-analytic torsion was defined and studied by several authors, cf. \cite{CM}, \cite{L}, \cite{M}, etc. It is therefore natural to extend the $L^2$-analytic torsion to the family case, that is, to define and study the Bismut-Lott torsion form when the fiber $Z$ is non-compact. As above, one needs to study the asymptotics of the $L^2$-analogue of $f^{\wedge}(C_{t}',h^W)$ as $t\to 0$ and as $t\to \infty$. The asymptotics as $t\to 0$ are the same in the $L^2$ case, so this part is easy. But in general the integral at $\infty$ does not converge, since in the $L^2$ case the positive eigenvalues of the fiberwise operators involved in $f^{\wedge}(C_{t}',h^{W})$ need not have a positive lower bound. To overcome this difficulty, one considers the special case where the Novikov-Shubin invariant is (sufficiently) positive.
In \cite{Gong;CoverTorsion} Gong and Rothenberg defined the $L^2$-analytic torsion form
and proved that the torsion form is smooth,
under the condition that the Novikov-Shubin invariant is at least half of the dimension of the base manifold.
Heitsch-Lazarov \cite{Heitsch;FoliationTorsion} generalized essentially the same arguments to foliations.
In \cite{Schick;NonCptTorsionEst} Azzali, Goette and Schick proved
that the integrand defining the $L^2$-analytic torsion form,
as well as several other invariants related to the signature operator,
converges provided the Novikov-Shubin invariant is positive (or of determinant class and $L^{2}$-acyclic).
However, they did not prove the smoothness of the $ L ^2 $-analytic torsion form.
To state a transgression formula, they had to use weak derivatives.
The aim of this paper is to establish the regularity of the $L^{2}$-analytic torsion form,
in the case when the Novikov-Shubin invariant is positive.
Our motivation comes from the study of analytic torsion on some ``non-commutative'' spaces
(along the lines of \cite{Lott;FoliationInd}, etc., for local index).
In this case one considers universal differential forms (as in \cite{Lott;FoliationInd}),
and Duhamel's formula for the heat operator has infinitely many terms.
Instead, one makes essential use of the results of \cite{Schick;NonCptTorsionEst} to ensure that
\eqref{BLDfn} is well defined in the non-commutative case.
We achieve this result by generalizing Azzali-Goette-Schick's arguments to some Sobolev spaces.
The rest of the paper is organized as follows. In Section 2,
we define Sobolev norms on the spaces of kernels on the fibered product groupoid.
Unlike \cite{Schick;NonCptTorsionEst},
we consider Hilbert-Schmidt type norms on the space of smoothing operators.
Given a kernel, the Hilbert-Schmidt norm can be written down explicitly.
As a result,
we are able to take into account derivatives in both the fiber-wise and transverse directions,
with the help of a splitting similar to \cite{Heitsch;FoliationHeat}. In Section 3, we prove that positivity of the Novikov-Shubin invariant
implies positivity of the Novikov-Shubin invariant in the Sobolev setting.
We adapt an argument of Alvarez Lopez-Kordyukov \cite{Lopez;FoliationHeat}. In Section 4,
we apply the arguments in \cite{Schick;NonCptTorsionEst}
and conclude that the integral \eqref{BLDfn} converges in all Sobolev norms, and hence obtain the regularity of the $L^{2}$-analytic torsion form.
\medskip
{\bf Acknowledgement} The authors are very grateful to the referees for their careful reading of the manuscript and many valuable suggestions. The second author was supported by NSFC 11571183.
\section{Preliminaries}
In this section, we will define Sobolev norms on the space of kernels on the fibered product groupoid.
\subsection{The geometric settings}
\label{Dfn}
Let $Z \to M \xrightarrow{\pi} B $ be a fiber bundle with connected fibers $Z_x=\pi^{-1}(x)$, $x\in B$. Let $E \xrightarrow{\wp} M$ be a vector bundle.
We assume $B$ is compact.
Let $V := \Ker (d \pi ) \subset T M$.
We suppose that there is a finitely generated discrete group $G$ acting on $M$ from the right freely,
properly discontinuously. We also assume that the group $G$ acts on $B$ such that the actions commute with $\pi$ and $M _0 := M / G $ is a compact manifold.
Since the submersion $\pi$ is $G$-invariant,
$M _0 $ is also foliated and denote such foliation by $V _0 $.
Fix a distribution $H _0 \subset T M _0 $ complementary to $V _0$. Fix a metric on $V_{0}$ and a $G$-invariant metric on $B$; these induce a Riemannian metric on $M_{0}$, given by $g^{V_0}\oplus \pi^* g^{TB}$ with respect to the splitting $TM_0=V_0\oplus H_0$.
Since the projection from $M$ to $M _0 $ is a local diffeomorphism,
one gets a $G$-invariant splitting $T M = V \oplus H $.
Denote by $P ^V , P ^H$ respectively the projections to $V $ and $H$.
Moreover, one gets a $G$-invariant metric on $V$ and a Riemannian metric on $M$ as $g^{TM}=g^{V}\oplus \pi ^*g^{TB}$ on $TM=V\oplus H$.
Given any vector field $X \in \Gamma ^\infty (T B)$,
denote the horizontal lift of $X $ by $X ^H \in \Gamma ^\infty (H) \subset \Gamma ^\infty (T M )$.
By our construction, $| X ^H | _{g _M} (p) = | X | _{g _B } (\pi (p))$.
Denote by $\mu _x , \mu _B $ respectively the Riemannian measures on $Z _x $ and $B $.
\begin{dfn}
Let $E \xrightarrow{\wp} M$ be a complex vector bundle.
We say that $E $ is a contravariant $G$-bundle if $G$ also acts on $E$ from the right,
such that for any $v \in E , g \in G $, $\wp (v g) = \wp (v) g \in M $,
and moreover $G$ acts as a linear map between the fibers.
The group $G$ then acts on sections of $E$ from the left by
$$ s \mapsto g ^* s, \quad (g ^* s ) (p) := s (p g ) g ^{-1} \in \wp ^{-1} (p), \quad \forall \; p \in M .$$
\end{dfn}
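Indeed, a direct computation shows that this is a left action: for any $g, h \in G$ and $p \in M$,
$$ \big( g ^* (h ^* s) \big) (p) = (h ^* s) (p g) \, g ^{-1} = s (p g h) \, h ^{-1} g ^{-1} = s \big( p (g h) \big) (g h) ^{-1} = \big( (g h) ^* s \big) (p) ,$$
so that $(g h) ^* = g ^* \circ h ^*$.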
We assume that $E$ is endowed with a $G$-invariant metric $g _E$,
and a $G$-invariant connection $\nabla ^E$
(which is obviously possible if $E$ is the pullback of some bundle on $M _0$).
In particular, for any $G$-invariant section $s$ of $E$,
$| s |$ is a $G$-invariant function on $M$.
In the following, for any vector bundle $F$ we denote its dual bundle by $F'$.
Recall that the ``infinite dimensional bundle'' over $B$ in the sense of Bismut is a vector bundle with typical fiber
$\Gamma _c ^\infty (E |_{Z _x } ) $ (or other function spaces) over each $x \in B$.
We denote this Bismut bundle by $E _\flat$.
The space of smooth sections of $E _\flat$ is, as a vector space, $\Gamma ^\infty _c (E )$.
Each element $s \in \Gamma ^\infty _c (E ) $ is regarded as a map
$$ x \mapsto s |_{Z _x} \in \Gamma _c ^\infty (E |_{Z _x } ) , \quad \Forall x \in B .$$
In other words, one defines a section on $E _\flat $ to be smooth,
if the images of all $x \in B$ fit together to form an element in $\Gamma _c ^\infty (E )$.
In particular,
$ \Gamma ^\infty _c ((M \times \mathbb C ) _\flat ) = C ^\infty _c (M ),$
and one identifies $\Gamma ^\infty _c (T B \otimes (M \times \mathbb C ) _\flat ) $
with $\Gamma ^\infty _c (H )$ by
$X \otimes f \mapsto f X ^H $.
\subsection{Covariant derivatives and Sobolev spaces}
Let $\nabla ^E $ be a $G$-invariant connection on $E$.
Denote by $\nabla ^{T M }, \nabla ^{T B} $ the Levi-Civita connections
(with respect to the metrics defined in the last section).
Note that $[X ^H , Y ] \in \Gamma ^\infty (V )$ for any vertical vector field $Y \in \Gamma ^\infty (V )$.
One naturally defines the connections
\begin{align*}
\nabla ^{V _\flat} _X Y
:= [X ^H , Y ] , \quad & \Forall Y \in \Gamma ^\infty (V _\flat ) \cong \Gamma ^\infty (V ), \\
\nabla ^{E _\flat } _X s := \nabla ^E _{X ^H } s ,
\quad & \Forall s \in \Gamma ^\infty (E _\flat ) \cong \Gamma _{c}^\infty (E ).
\end{align*}
\begin{dfn}
\label{DiffDfn1}
The covariant derivative on $E _\flat $ is the map
$$\dot \nabla ^{E _\flat } :
\Gamma ^\infty (\otimes ^\bullet T ^* B \bigotimes \otimes ^\bullet V' _\flat \bigotimes E _\flat )
\to \Gamma ^\infty (\otimes ^{\bullet + 1} T ^* B \bigotimes \otimes ^\bullet V' _\flat \bigotimes E _\flat ),$$
defined by
\begin{multline}
\left(\dot \nabla ^{E _\flat } s \right)(X _0 , X _1 , \cdots, X _k ; Y _1 , \cdots,Y _l )
:=\nabla ^{E _\flat } _{X _0} s ( X _1 , \cdots, X _k ; Y _1 , \cdots, Y _l ) \\
- \sum _{j=1} ^l s (X _1 , \cdots, X _k ; Y _1 , \cdots , \nabla ^{V _\flat} _{X _0 } Y _j , \cdots , Y _l )
- \sum _{i=1} ^k s (X _1 , \cdots , \nabla ^{T B } _{X _0 } X _i , \cdots , X _k ; Y _1 , \cdots Y _l ) ,
\end{multline}
for any $k, l \in \mathbb N , X _0 , \cdots, X _k \in \Gamma ^\infty (T B), Y _1 , \cdots, Y _l \in \Gamma ^\infty (V)$.
\end{dfn}
Clearly, taking covariant derivative can be iterated,
which we denote by $(\dot \nabla ^{E _\flat }) ^m $, \\ $ m = 1, 2, \cdots$.
Note that $(\dot \nabla ^{E _\flat }) ^m $ is a differential operator of order $m$.
Also, we define
$\dot \partial ^{V } :
\Gamma ^\infty (\otimes ^\bullet T ^* B \bigotimes \otimes ^\bullet V' _\flat \bigotimes E _\flat )
\to \Gamma ^\infty (\otimes ^\bullet T ^* B \bigotimes \otimes ^{\bullet + 1} V' _\flat \bigotimes E _\flat )$ by
\begin{multline}
\label{VertDiff1}
\left(\dot \partial ^{V } s \right)(X _1 , \cdots, X _k ; Y _0 , Y _1 , \cdots, Y _l )
:= \nabla ^E _{Y _0} s ( X _1 , \cdots, X _k ; Y _1 , \cdots, Y _l ) \\
- \sum _{j=1} ^l s (X _1 , \cdots, X _k ; Y _1 , \cdots , P ^V (\nabla ^{T M} _{Y _0 } Y _j) , \cdots , Y _l ) .
\end{multline}
In the following definition,
we regard $(\dot \nabla ^{E _\flat})^i (\dot\partial^{V}) ^j s
\in \Gamma ^\infty ( \otimes ^i H' \bigotimes \otimes ^j V' \bigotimes E_{\flat} )$.
\begin{dfn}
\label{SobDfn}
For $s\in \Gamma^{\infty}_{c}(E)$, we define its $m$-th Sobolev norm by
\begin{equation}
\| s \| ^2 _m
:= \sum _{i+j \leq m} \int _{x \in B} \int _{y \in Z _x }
\big| (\dot \nabla ^{E _\flat})^i (\dot \partial ^{V} ) ^j s \big| ^2 (x, y)
\mu _x (y) \mu _B (x).
\end{equation}
Denote by $\mathcal W ^m (E)$ the Sobolev completion of $\Gamma ^\infty _c (E ) $ with respect to $\| \cdot \| _m $.
\end{dfn}
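For instance, for $m = 1$ the sum in Definition \ref{SobDfn} runs over $(i, j) \in \{ (0,0), (1,0), (0,1) \}$, so that
$$ \| s \| ^2 _1 = \int _{x \in B} \int _{y \in Z _x } \Big( | s | ^2 + \big| \dot \nabla ^{E _\flat} s \big| ^2 + \big| \dot \partial ^{V} s \big| ^2 \Big) (x, y) \, \mu _x (y) \mu _B (x) ,$$
i.e. one controls $s$ together with one horizontal and one vertical derivative.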
Recall that an operator $A$ is called $C^{\infty}$-bounded if, in normal coordinates, its coefficients and all their derivatives are uniformly bounded.
Since $M$ is locally isometric to a compact space $M _0$,
it is a manifold with bounded geometry (see \cite[Appendix 1]{S} for an introduction).
Moreover, $\nabla ^E$ is a $C ^\infty$-bounded differential operator,
because by $G $ invariance the Christoffel symbols of
$\nabla ^E$ and all their derivatives are uniformly bounded.
Using normal coordinate charts and parallel transport with respect to $\nabla ^E$ as trivialization,
one sees that $E$ is a bundle with bounded geometry.
Since the operators $\dot \nabla ^{E _\flat } $ and $ \dot \partial ^{V } $ are just respectively the
$(0, 1)$ and $(1, 0)$ parts of the usual covariant derivative operator,
our Definition \ref{SobDfn} is equivalent to the standard Sobolev norm \cite[Appendix 1 (1.3)]{S}
(with $p=2$ and $s$ non-negative integers).
One has the elliptic regularity for these Sobolev spaces:
\begin{lem}
\label{EllReg1}
\cite[Lemma 1.4]{S}
Let $A $ be any $C ^\infty$-bounded, uniformly elliptic, differential operator of order $m$.
For any $i, j\geq 0$, there exists a constant $C$ such that for any $s \in \Gamma ^\infty _c (E) $
$$ \| s \| _{i + m} \leq C ( \| A s \| _i + \| s \| _j) .$$
\end{lem}
\begin{rem}
Throughout this paper, by an ``elliptic operator'' on a manifold,
we mean elliptic in all directions,
without taking any foliation structure into consideration.
We use the term ``fiber-wise elliptic operators'' to refer to differential operators that act fiber-wise and are elliptic when restricted to each fiber.
\end{rem}
\subsection{The fibered product}
\begin{dfn}
The fibered product of the manifold $M$ is
$$M \times _B M := \{ (p, q) \in M \times M : \pi (p) = \pi (q) \} ,$$
and with the maps from $M \times _B M $ to $M$ defined by
$$ \mathbf s (p, q) := q, \quad \mathbf t (p, q) := p .$$
The manifold $M \times _B M $ is a fiber bundle over $B $, with typical fiber $Z \times Z $.
One naturally has the splitting \cite[Section 2]{Heitsch;FoliationHeat}
$$T (M \times _B M) = \hat H \oplus V _\mathbf t \oplus V _\mathbf s ,$$
where
$$V _\mathbf s := \Ker (d \mathbf t) , \quad V _\mathbf t := \Ker (d \mathbf s ).$$
\end{dfn}
Note that $V _\mathbf s \cong \mathbf s ^{*} V$ and $V _\mathbf t \cong \mathbf t ^{*} (V )$.
As in Section 2.1, we endow $M \times _B M $ with a metric by lifting the metrics on $H _{0}$ and $V_{0}$.
Then $M \times _B M $ is a manifold with bounded geometry.
\begin{nota}
With some abuse in notations,
we shall often write elements in $M \times _B M $ as a triple $(x, y, z)$
and $\mathbf s (x, y, z) = (x, z), \mathbf t (x, y, z) = (x, y) \in M $,
where $x \in B , y , z \in Z _x $
\end{nota}
Let $G$ act on $M \times _B M $ by the diagonal action
$$ (p, q) g := (p g , q g ). $$
Let $E \to M $ be a contravariant $G$-vector bundle and $E '$ be its dual.
We shall consider
$$\hat E \to M \times _B M := \mathbf t ^* E \otimes \mathbf s ^* E ' .$$
Given a $G$-invariant connection $\nabla ^E $ on $E $, let
$$\nabla ^{\hat E } := \mathbf t ^* \nabla ^{E }\otimes {\rm Id}_{\mathbf s^*E'} + {\rm Id}_{\mathbf t ^*E}\otimes \mathbf s ^* \nabla ^{E ' }$$
be the tensor product of the pullback connections.
Fix any local base $\{ e _1, \cdots e _r \} $ of $E '$ on some $U \subset M$,
any section can be written as
$$ s = \sum _{i=1} ^r u _i \otimes \mathbf s ^* e _i$$
on $\mathbf s ^{-1} (U)$, where $u _i \in \Gamma ^\infty (\mathbf t ^* E )$.
Then by definition we have
\begin{equation}
\nabla _X ^{\hat E } \left(\sum _{i=1} ^r u _i \otimes \mathbf s ^* e _i\right)
= \sum _{i=1} ^r (\nabla ^{\mathbf t ^* E } _X u _i ) \otimes \mathbf s ^* e _i
+ u _i \otimes \mathbf s ^* (\nabla ^{E '} _{d \mathbf s (X)} e _i) ,
\end{equation}
for any vector $X$ on $M$.
Similar to Definition \ref{DiffDfn1}, we define the covariant derivative operators on \\
$\Gamma ^\infty (\otimes ^\bullet T ^* B \bigotimes \otimes ^\bullet (V' _\mathbf t)_\flat
\bigotimes \otimes ^\bullet (V' _\mathbf s)_\flat \bigotimes \hat E _\flat )$.
\begin{dfn}
Define
\begin{multline}
\nonumber
\left(\dot \nabla ^{\hat E _\flat } \psi \right)
(X _0 , X _1 , \cdots ,X _k ; Y _1 , \cdots, Y _l , Z _1 , \cdots ,Z _{l'}) \\
:= \nabla ^{\hat E _\flat } _{X _0} \psi
( X _1 , \cdots ,X _k ; Y _1 , \cdots ,Y _l , Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l} \psi (X _1 , \cdots, X _k ;
Y _1 , \cdots , \nabla ^{V _\flat} _{X _0 } Y _j , \cdots , Y _l , Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l'} \psi (X _1 , \cdots, X _k ;
Y _1 , \cdots , Y _l , Z _1 , \cdots , \nabla ^{V _\flat} _{X _0 } Z _j , \cdots , Z _{l'}) \\
- \sum _{1 \leq i \leq k} \psi (X _1 , \cdots , \nabla ^{T B } _{X _0 } X _i , \cdots , X _k ; Y _1 , \cdots Y _l
, Z _1 , \cdots, Z _{l'}),
\end{multline}
\begin{multline}
\nonumber
\left(\dot \partial ^{\mathbf s } \psi\right) (X _1 , \cdots, X _k ; Y _0 , Y _1 , \cdots, Y _l , Z _1 , \cdots, Z _{l'}) \\
:=\nabla ^{\hat E} _{Y _0} \psi ( X _1 , \cdots, X _k ; Y _1 , \cdots ,Y _l , Z _1 , \cdots ,Z _{l'}) \\
- \sum _{1 \leq j \leq l} \psi (X _1 , \cdots ,X _k ;
Y _1 , \cdots , P ^{V ^\mathbf s} (\nabla ^{T M} _{Y _0 } Y _j) , \cdots , Y _l , Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l'} \psi (X _1 , \cdots ,X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots , P ^{V ^\mathbf t} [Y _0 , Z _j] , \cdots , Z _{l'}), \\
\end{multline}
\begin{multline}
\nonumber
\left(\dot \partial ^{\mathbf t } \psi \right)(X _1 , \cdots, X _k ; Y _1 , \cdots ,Y _l , Z _0 , Z _1 , \cdots, Z _{l'}) \\
:= \nabla ^{\hat E} _{Y _0} \psi ( X _1 , \cdots, X _k ; Y _1 , \cdots ,Y _l , Z _0 , Z _1 , \cdots, Z _{l'})\\
- \sum _{1 \leq j \leq l} \psi (X _1 , \cdots, X _k ;
Y _1 , \cdots , P ^{V ^\mathbf s} [Z _0 , Y _j] , \cdots , Y _l , Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l'} \psi (X _1 , \cdots ,X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots , P ^{V ^\mathbf t} (\nabla ^{T M} _{Z _0 } Z _j) , \cdots , Z _{l'}).
\end{multline}
\end{dfn}
Let $Y, Z \in \Gamma ^\infty (V)$ be any vector fields.
Let $Y ^\mathbf s , Z ^\mathbf t $ be respectively the lifts of $Y $ and $Z $ to $V _\mathbf s $ and $V _\mathbf t$.
Then $[Y ^\mathbf s , Z ^\mathbf t ] = 0 $.
It follows that as differential operators,
$$ [\dot \partial ^\mathbf s , \dot \partial ^\mathbf t ] = 0 .$$
Also, it is straightforward to verify that
$$ [\dot \nabla ^{\hat E _\flat} , \dot \partial ^\mathbf s ]
\text{ and } [\dot \nabla ^{\hat E _\flat} , \dot \partial ^\mathbf t ] $$
are both zeroth order differential operators (i.e. smooth bundle maps).
Fix a local trivialization
$$ \mathbf x _\alpha : \pi ^{-1} (B _\alpha ) \to B _\alpha \times Z , \quad
p \mapsto (\pi (p) , \varphi ^\alpha (p)),$$
where $B = \bigcup _{\alpha } B _\alpha $ is a finite open cover (since $B$ is compact),
and $\varphi ^\alpha |_{\pi ^{-1} (x)} : Z _x \to Z $ is a diffeomorphism.
Such a trivialization induces a local trivialization of the fiber bundle $M \times _B M \xrightarrow{\mathbf t} M $ by
$M = \bigcup M _{\alpha } , M _\alpha := \pi ^{-1} (B _\alpha ) $,
$$ \hat \mathbf x _\alpha : \mathbf t ^{-1} (M _\alpha ) \to M _\alpha \times Z , \quad
(p, q) \mapsto ( p , \varphi ^\alpha (q)).$$
On $M _\alpha \times Z$ the source and target maps are explicitly given by
\begin{equation}
\label{LocalGpoid}
\mathbf s \circ (\hat \mathbf x _\alpha) ^{-1} (p, z) = (\mathbf x _\alpha ) ^{-1} (\pi (p) , z)
\text{ and } \mathbf t \circ (\hat \mathbf x _\alpha) ^{-1} (p, z) = p .
\end{equation}
For such trivialization, one has the natural splitting
$$ T (M _\alpha \times Z) = H ^\alpha \oplus V ^\alpha \oplus T Z ,$$
where $H ^\alpha $ and $ V ^\alpha $ are respectively $H $ and $V$ restricted to $M _\alpha \times \{ z \}$,
$z \in Z $.
It follows from (\ref{LocalGpoid}) that
$$V ^\alpha = d \hat \mathbf x _\alpha (V _\mathbf s), \quad T Z = d \hat \mathbf x _\alpha (V _\mathbf t) .$$
Given any vector field $X$ on $B$,
let $X ^H , X ^{\hat H}$ be respectively the lifts of $X$ to $H $ and $\hat H $.
Since $d \mathbf t (X ^{\hat H} ) = d \mathbf s (X ^{\hat H} ) = X ^H $,
it follows that
$$d \hat \mathbf x _\alpha (X ^{\hat H }) = X ^{H ^\alpha } + d \varphi ^\alpha (X ^H ).$$
Note that $d \varphi ^\alpha (X ^H ) \in T Z \subseteq T (M _\alpha \times Z) $.
Corresponding to the splitting $T (M _\alpha \times Z) = H ^\alpha \oplus V ^\alpha \oplus T Z$,
one can define the covariant derivative operators.
Let $\nabla ^{T M _\alpha } $ be the Levi-Civita connection on $M_\alpha$
and $\nabla^{TZ}$ be the Levi-Civita connection on $Z$.
Define for any smooth section
$\phi \in \Gamma ^\infty (\otimes ^\bullet T ^* B \bigotimes \otimes ^\bullet (V ^\alpha)' _\flat
\bigotimes \otimes ^\bullet T ^* Z _\flat \bigotimes (\hat \mathbf x _\alpha ^{-1} )^* \hat E _\flat )$,
\begin{multline}
\nonumber
\left(\dot \nabla ^{\alpha } \phi \right)
(X _0 , X _1 , \cdots, X _k ; Y _1 , \cdots, Y _l , Z _1 , \cdots ,Z _{l'}) \\
:= (\mathbf x _\alpha ^* \nabla ^{\hat E _\flat }) _{X ^{H ^\alpha } _0}
\phi ( X _1 , \cdots ,X _k ; Y _1 , \cdots, Y _l , Z _1 , \cdots ,Z _{l'}) \\
- \sum _{1 \leq j \leq l} \phi (X _1 , \cdots ,X _k ; Y _1 , \cdots , [X _0 ^{H ^\alpha} , Y _j] , \cdots , Y _l ,
Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l'} \phi (X _1 , \cdots, X _k ;
Y _1 , \cdots , Y _l , Z _1 , \cdots , [X _0 ^{H ^\alpha} , Z _j ], \cdots , Z _{l'}) \\
- \sum _{1 \leq i \leq k} \phi (X _1 , \cdots , \nabla ^{T B } _{X _0 } X _i , \cdots , X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots, Z _{l'}),
\end{multline}
\begin{multline}
\nonumber
\left(\dot \partial ^{\alpha } \phi\right) (X _1 , \cdots, X _k ; Y _0 , Y _1 , \cdots ,Y _l , Z _1 , \cdots ,Z _{l'}) \\
:=(\mathbf x _\alpha ^* \nabla ^{\hat E _\flat}) _{Y _0} \phi ( X _1 , \cdots, X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l} \phi (X _1 , \cdots ,X _k ;
Y _1 , \cdots , P ^{V ^\alpha} (\nabla ^{T M _\alpha}_{Y _0 } Y _j), \cdots, Y _l, Z _1 , \cdots ,Z _{l'})\\
- \sum _{1 \leq j \leq l'} \phi (X _1 , \cdots ,X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots , P ^{T Z} [Y _0 , Z _j] , \cdots , Z _{l'}),
\end{multline}
\begin{multline}
\nonumber
\left(\dot \partial ^{Z } \phi \right)(X _1 , \cdots , X _k ; Y _1 , \cdots ,Y _l , Z _0 , Z _1 , \cdots, Z _{l'}) \\
:=(\mathbf x _\alpha ^* \nabla ^{\hat E _\flat} )_{Z _0} \phi ( X _1 , \cdots ,X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots, Z _{l'}) \\
- \sum _{1 \leq j \leq l} \phi (X _1 , \cdots, X _k ;
Y _1 , \cdots , P ^{V ^\alpha} [Z _0 , Y _j] , \cdots , Y _l , Z _1 , \cdots ,Z _{l'}) \\
- \sum _{1 \leq j \leq l'} \phi (X _1 , \cdots, X _k ;
Y _1 , \cdots ,Y _l , Z _1 , \cdots , \nabla ^{T Z} _{Z _0 } Z _j , \cdots , Z _{l'}).
\end{multline}
Consider the special case when $\phi = u \otimes \mathbf s ^* e$,
where
$u \in \Gamma ^\infty (\otimes ^\bullet T ^* B \bigotimes \otimes ^\bullet (V ^\alpha)' _\flat \otimes \mathbf t ^* E)$,
$e \in \Gamma ^\infty (\otimes ^\bullet T ^* Z _\flat \otimes E' )$.
\begin{lem}
\label{TensorD}
For $(x, y, z) \in M _\alpha \times Z$, one has
\begin{align*}
\dot \nabla ^\alpha (u \otimes \mathbf s ^* e) (x, y, z)
=(\dot \nabla ^{E _\flat} u |_{M _\alpha \times \{ z \}} (x, y)) \otimes \mathbf s ^* (e (x, z))
+ u \otimes \mathbf s ^* (\nabla ^{E' _\flat} e (x, z))
\end{align*}
and
\begin{align*}
\dot \partial ^{\alpha } (u \otimes \mathbf s ^* e ) (x, y, z)
=(\dot \partial ^V u |_{M _\alpha \times \{ z \}} (x, y)) \otimes \mathbf s ^* (e (x, z)).
\end{align*}
\end{lem}
\begin{proof}
It suffices to consider the case when $Y _j, Z _{j'}$ are respectively vector fields on $M _\alpha $ and $Z $
lifted to $M _\alpha \times Z $.
From this assumption it follows that
$[ Y _j , Z _{j'}] = [X _0 ^{H ^\alpha } , Z _{j'} ] = 0$.
The lemma follows by a simple computation.
\end{proof}
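For completeness, the ``simple computation'' behind the second identity can be sketched as follows; this is only a sketch in the notation above, recording the key cancellations.

```latex
% Since \mathbf s (p, z) = (\mathbf x _\alpha) ^{-1} (\pi (p), z),
% any Y _0 \in V ^\alpha is tangent to the \mathbf s-fibers, so
% d \mathbf s (Y _0) = 0 and the pulled-back section \mathbf s ^* e
% is covariantly constant in the Y _0 direction:
\begin{align*}
(\mathbf x _\alpha ^* \nabla ^{\hat E _\flat}) _{Y _0} (u \otimes \mathbf s ^* e)
&= \big( (\mathbf x _\alpha ^* \nabla ^{E _\flat}) _{Y _0} u \big) \otimes \mathbf s ^* e
+ u \otimes \nabla ^{\mathbf s ^* E'} _{d \mathbf s (Y _0)} e \\
&= \big( (\mathbf x _\alpha ^* \nabla ^{E _\flat}) _{Y _0} u \big) \otimes \mathbf s ^* e .
\end{align*}
% The Z _j-correction terms vanish because [Y _0 , Z _j] = 0 for lifted fields.
```

Hence only the $u$-factor is differentiated, which is exactly the second identity of Lemma \ref{TensorD}.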
We express the pullbacks of the covariant derivatives
$\dot \nabla ^{\hat E _\flat} \psi, \dot \partial ^\mathbf s \psi$ and $ \dot \partial ^\mathbf t \psi$
in terms of $\dot \nabla ^\alpha \psi ^\alpha$, $\dot \partial ^{\alpha} \psi ^\alpha$
and $\dot \partial ^Z \psi ^\alpha$, where $\psi ^\alpha := (\mathbf x _\alpha ^{-1} )^* \psi $.
One directly verifies
\begin{multline}
\left(\dot \nabla ^{\hat E _\flat } \psi \right)(X _0 , X _1 , \cdots , X _k ; Y _1 , \cdots, Y _l , Z _1 , \cdots , Z _{l'})\\
= (\mathbf x _\alpha ^{-1} )^* \left( \nabla ^\alpha _{(X ^{H ^\alpha} _0 + d \varphi ^\alpha (X ^{H } _0 ))}
\psi ^\alpha ( X _1, \cdots , X _k ; d \mathbf x _\alpha (Y _1 , \cdots, Y _l , Z _1 , \cdots , Z _{l'}))\right. \\
- \sum _{1 \leq j \leq l} \psi ^\alpha \left(X _1 , \cdots, X _k ;
d \mathbf x _\alpha Y _1 , \cdots , [X _0 ^{H ^\alpha }, d \mathbf x _\alpha Y _j ],
\cdots , d \mathbf x _\alpha Y _l , d \mathbf x _\alpha (Z _1 , \cdots , Z _{l'})\right) \\
- \sum _{1 \leq j \leq l'} \psi ^\alpha (X _1 , \cdots ,X _k ;
d \mathbf x _\alpha (Y _1 , \cdots , Y _l) , d \mathbf x _\alpha Z _1 ,
\cdots , [ X _0 ^{H ^\alpha} + d \varphi ^\alpha (X ^{H } _0 ) , d \mathbf x _\alpha Z _j ] ,
\cdots , d \mathbf x _\alpha Z _{l'}) \\
- \left.\sum _{1 \leq i \leq k} \psi ^\alpha (X _1 , \cdots , \nabla ^{T B } _{X _0 } X _i , \cdots , X _k ;
Y _1 , \cdots , Y _l , Z _1 , \cdots , Z _{l'}) \right) \\
= (\mathbf x _\alpha ^{-1} )^* \left( \dot \nabla ^\alpha \psi ^\alpha (X _0 , X _1 , \cdots , X _k ;
Y _1 , \cdots, Y _l , Z _1 , \cdots , Z _{l'})\right. \\
+ \dot \partial ^Z \psi ^\alpha ( X _1 , \cdots , X _k ;
Y _1 , \cdots, Y _l , d \varphi ^\alpha (X ^{H } _0 ), Z _1 , \cdots , Z _{l'}) \\
+ \sum _{1 \leq j \leq l'} \psi ^\alpha \left(X _1 , \cdots, X _k ;
d \mathbf x _\alpha (Y _1 , \cdots , Y _l) , d \mathbf x _\alpha Z _1 ,
\cdots , (\nabla ^{T Z} d \varphi ^\alpha (X ^{H } _0 )) ( d \mathbf x _\alpha Z _j ) ,
\cdots , d \mathbf x _\alpha Z _{l'}) \right).
\end{multline}
By similar computations for $\dot \partial ^\mathbf s $ and $\dot \partial ^\mathbf t $, one gets:
\begin{multline}
\left(\dot \partial ^\mathbf s \psi \right)(X _1, \cdots , X _k ; Y _0 , Y _1 , \cdots , Y _l , Z _1 , \cdots , Z _{l'}) \\
= (\mathbf x _\alpha ^{-1} )^* \big(\dot \partial ^\alpha \psi ^\alpha
\label{Convert3}
(X _1, \cdots, X _k ; d \mathbf x _\alpha (Y _0 , Y _1 , \cdots ,Y _l , Z _1 , \cdots, Z _{l'})) \big),
\end{multline}
\begin{multline}
\left(\dot \partial ^\mathbf t \psi \right)(X _1, \cdots , X _k ; Y _1 , \cdots, Y _l , Z _0 , Z _1 , \cdots, Z _{l'}) \\
= (\mathbf x _\alpha ^{-1} )^* \big(\dot \partial ^Z
\psi ^\alpha ( X _1, \cdots ,X _k ; d \mathbf x _\alpha (Y _1 , \cdots ,Y _l , Z _0 , Z _1 , \cdots ,Z _{l'})) \\
+ \sum _{1 \leq j \leq l'} \psi ^\alpha (X _1 , \cdots, X _k ;
d \mathbf x _\alpha (Y _1 , \cdots ,Y _l ) , d \mathbf x _\alpha Z _1 , \\
\cdots ,
(\nabla ^{T Z} _{d \mathbf x _\alpha Z _0 } d \mathbf x _\alpha Z _j
- d \mathbf x _\alpha ( P ^{V _\mathbf t} \nabla ^{T M } _{Z _0} Z _j)),
\cdots , d \mathbf x _\alpha Z _{l'}) \big).
\end{multline}
\subsection{Smoothing operators}
For any $(x, y, z) \in M \times _B M $,
let $\mathbf d (x, y, z) $ be the Riemannian distance between $y, z \in Z _x$.
We regard $\mathbf d $ as a continuous, non-negative function on $M \times_B M$.
\begin{dfn}
\label{NWX}
(See \cite{NWX;GroupoidPdO}). As a vector space,
$$\Psi ^{- \infty } _\infty (M \times _B M , E ) :=
\left\{
\begin{array}{ll}
& \text{For any } m \in \mathbb N, \varepsilon > 0 , \exists C _m > 0 \\
\psi \in \Gamma ^\infty (\hat E ) : & \text{such that } \Forall i+j+k \leq m, \\
& |(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^{V _\mathbf s} )^j (\dot \partial ^{V _\mathbf t} )^k \psi |
\leq C _m e ^{- \varepsilon \mathbf d}.
\end{array}
\right\}.
$$
The convolution product structure on $\Psi ^{- \infty } _\infty (M \times _B M , E ) $ is defined by
$$ \psi _1 \star \psi _2 (x, y, z)
:= \int _{Z _x } \psi _1 (x, y, w) \psi _2 (x, w , z ) \mu _{x } (w) .$$
\end{dfn}
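As a sanity check, not stated explicitly above, the convolution product is associative. Assuming, as is implicit in the definition, bounded geometry of the fibers (so that the exponential decay $|\psi _i| \leq C e ^{- \varepsilon \mathbf d}$ makes the iterated integrals converge absolutely), Fubini's theorem gives

```latex
\begin{align*}
\big( (\psi _1 \star \psi _2) \star \psi _3 \big) (x, y, z)
&= \int _{Z _x} \int _{Z _x} \psi _1 (x, y, w) \, \psi _2 (x, w, v) \, \psi _3 (x, v, z)
\, \mu _x (w) \, \mu _x (v) \\
&= \big( \psi _1 \star (\psi _2 \star \psi _3) \big) (x, y, z) .
\end{align*}
```

Thus $\Psi ^{- \infty } _\infty (M \times _B M , E )$ is an associative algebra under $\star$.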
We introduce a Sobolev type generalization of the Hilbert-Schmidt norm on
$\Psi ^{- \infty } _\infty (M \times _B M , E ) ^G $, the space of $G$-invariant kernels. Since $G$ is a finitely generated discrete group acting freely and properly discontinuously on $M$, there exists
a smooth compactly supported function $\chi \in C ^\infty _c (M )$,
such that
$$ \sum _{g \in G } g ^* \chi = 1 .$$
In particular, one may construct $\chi$ as follows.
Denote by $\pi _G$ the projection $M \to M _0 = M / G$.
There exists some $r > 0$, and a finite collection of geodesic balls $B (p _\alpha , r)$ of radius $r$,
such that $B (p _\alpha , r)$ is diffeomorphic to its image in $M _0$ under $\pi _G$,
and moreover $\{ B (p _\alpha , \frac{r}{3}) \} $ covers $M _0$ (since $M _0$ is compact).
Since $G$ acts on $M $ by isometry,
$\pi _G (B (p _\alpha g , r)) = \pi _G (B (p _\alpha , r))$ for all $g \in G$.
Thus one may without loss of generality assume that $B (p _\alpha , r)$ are mutually disjoint.
Define the functions $f \in C ^\infty (\mathbb R), F _\alpha , F \in C ^\infty _c (M ) $ by
\begin{align*}
f (t) :=& e ^{- \frac{1}{t ^2}} \text { if } t > 0 , \quad 0 \text { if } t \leq 0 , \\
F _\alpha (p) :=& f \big( 1 - \frac{2 \mathbf d (p , p _\alpha )}{r} \big)
\Big( f \big( \frac{3 \mathbf d (p , p _\alpha )}{r} - 1 \big)
+ f \big( 1 - \frac{2 \mathbf d (p , p _\alpha )}{r} \big) \Big) ^{-1},
\quad p \in M ,\\
F :=& \sum _\alpha F _\alpha .
\end{align*}
Note that $F $ is well defined because each $ F _\alpha $ is supported in $B (p _\alpha , r)$,
and the collection $\{ B (p _\alpha , r) \} _\alpha $ is locally finite.
Since by construction
$$ \big \{ \bigcup _\alpha B (p _\alpha g , \frac {r}{3}) \big \} _{g \in G}$$
is a locally finite cover of $M $,
$ \sum _g g ^* F $ is also well defined.
Define
$$ \chi := F ( \sum _g g ^* F ) ^{-1} .$$
Then clearly $\chi $ is the required partition of unity.
Moreover, observe that $\chi ^{\frac{1}{2}}$ is a smooth function because $f ^{\frac{1}{2}}$ is smooth and
all denominators are uniformly bounded away from $0$.
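Indeed, the defining property of $\chi$ can be verified in one line: pulling back by any $g \in G$ merely permutes the terms of the $G$-sum, so the denominator is $G$-invariant and

```latex
g ^* \Big( \sum _{h \in G} h ^* F \Big) = \sum _{h \in G} (h g) ^* F
= \sum _{h \in G} h ^* F ,
\qquad \text{whence} \qquad
\sum _{g \in G} g ^* \chi
= \Big( \sum _{g \in G} g ^* F \Big) \Big( \sum _{h \in G} h ^* F \Big) ^{-1} = 1 .
```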
For any $G$-invariant $\psi \in \Psi ^{- \infty} _\infty (M \times _B M , E ) ^G$,
recall that the standard trace of $\psi $ is
$$ \tr _\Psi (\psi ) (x) := \int _{z \in Z _x} \chi (x, z) \tr (\psi (x, z, z)) \mu _x (z)
\in C ^\infty (B) .$$
The definition does not depend on the choice of $\chi$.
The corresponding Hilbert-Schmidt norm is
\begin{align}
\label{0HS}
\int _B & \tr _\Psi (\psi \psi ^* ) (x) \mu _B (x) \\ \nonumber
&= \int _B \int _{Z _x} \chi (x, z) \int _{Z _x} \tr (\psi (x, z, y) \psi ^* (x, y, z))
\mu _x (y) \mu _x (z) \mu _B (x).
\end{align}
Note that Equation \eqref{0HS} coincides with the square of the $L ^2$-norm of $\psi$.
Generalizing Equation \eqref{0HS} to take derivatives into account, we define:
\begin{dfn}
The $m$-th Hilbert-Schmidt norm on $\Psi ^{- \infty } _\infty (M \times _B M , E ) ^G $ is defined to be
\begin{align*}
\| \psi \| ^2 _{\HS m}
:= \sum _{i+j+k \leq m} \int _B \int _{Z _x } \chi (x, z) \int _{Z _x } \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k \psi \big| ^2
& (x, y, z) \mu _x (y) \mu _x (z) \mu _B (x),
\end{align*}
for any $G$-invariant element $\psi $.
Let $\bar \Psi ^{-\infty} _m (M \times _B M , E ) ^G$ be the completion of
$ \Psi ^{-\infty} _\infty (M \times _B M , E ) ^G$ with respect to $\| \cdot \| _{\HS m } $.
\end{dfn}
In analogy with Lemma \ref{EllReg1}, one has elliptic regularity for the Hilbert-Schmidt norm:
\begin{lem}
\label{EllReg2}
Let $A$ be a $G$-invariant, first order elliptic differential operator,
then for any $m = 0, 1, \cdots$, there exists a constant $C > 0$ such that
$$ \| \psi \| _{\HS m+1} \leq C (\| A \psi \| _{\HS m} + \| \psi \| _{\HS m} ),$$
for all $\psi \in \Psi ^{- \infty } _\infty (M \times _B M , E ) ^G $.
\end{lem}
\begin{proof}
Define
$$ S := \{ g \in G : \chi (g ^* \chi ) \neq 0 \}.$$
Then $S$ is finite because $\{ g ^* \chi \} $ is a locally finite partition of unity.
Consider $(\chi (x, z))^{\frac{1}{2}} \psi $.
By the Leibniz rule, one has
$$ (\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k
\chi ^{\frac{1}{2}} \psi
= \chi ^{\frac{1}{2}} (\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k \psi $$
modulo terms involving lower-order derivatives of $\psi$.
Since $(\chi (x, z))^{\frac{1}{2}} $ is smooth with bounded derivatives,
there exists some $C _1 > 0$ such that for any $(x, y, z) \in M \times _B M$,
\begin{align}
\label{LeibEst1}
\Big|
\sum _{i+j+k \leq m} & \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k
\chi ^{\frac{1}{2}} \psi \big| ^2
- \chi \sum _{i+j+k \leq m} \big| (\dot \nabla ^{\hat E _\flat } )^i
(\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k \psi \big| ^2 \Big| (x, y, z) \\ \nonumber
&\leq \sum _{g \in S} g ^* \chi \Big( C _1 \sum _{i+j+k \leq m-1} \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k \psi \big| ^2 \Big) (x, y, z).
\end{align}
Similarly, since $A \chi ^{\frac{1}{2}} - \chi ^{\frac{1}{2}} A $ is a $C ^\infty$-bounded tensor,
one has
\begin{align}
\label{LeibEst2}
\Big|
\sum _{i+j+k \leq m} & \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k
(A \chi ^{\frac{1}{2}} \psi ) \big| ^2
- \chi \sum _{i+j+k \leq m} \big| (\dot \nabla ^{\hat E _\flat } )^i
(\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k A \psi \big| ^2 \Big| \\ \nonumber
&\leq \sum _{g \in S} g ^* \chi \Big( C _2 \sum _{i+j+k \leq m} \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k \psi \big| ^2 \Big) .
\end{align}
Since the integrand is $G$-invariant, for any $g \in G$
$$ \int _{M \times _B M }
g ^* \chi \sum _{i+j+k \leq m-1} \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k \psi \big| ^2
\mu _x (y) \mu _x (z) \mu _B (x) = \| \psi \| ^2 _{\HS m-1} .$$
Observe that, since $A$ is $G$-invariant and $M / G$ is compact, $A$ is uniformly elliptic and $C ^\infty$-bounded.
Therefore, applying Lemma \ref{EllReg1} to $(\chi (x, z))^{\frac{1}{2}} \psi $, there exists a constant $C _3 > 0$ such that
\begin{align*}
\int _{M \times _B M } & \sum _{i+j+k \leq m+1} \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k (\chi ^{\frac{1}{2}} \psi ) \big| ^2
\mu _x (y) \mu _x (z) \mu _B (x) \\
\leq & C _3 \Big( \int _{M \times _B M } \sum _{i+j+k \leq m} \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k (A \chi ^{\frac{1}{2}} \psi ) \big|^2
\mu _x (y) \mu _x (z) \mu _B (x) \\
&+ \int _{M \times _B M } \sum _{i+j+k \leq m} \big|
(\dot \nabla ^{\hat E _\flat } )^i (\dot \partial ^\mathbf s )^j (\dot \partial ^\mathbf t)^k (\chi ^{\frac{1}{2}} \psi ) \big|^2
\mu _x (y) \mu _x (z) \mu _B (x) \Big).
\end{align*}
Then by Equations \eqref{LeibEst1} and \eqref{LeibEst2}, we get the lemma.
\end{proof}
\subsection{Fiber-wise operators}
We turn to consider another class of operators and a different norm.
\begin{dfn}
A fiber-wise operator is a linear operator $A : \Gamma ^\infty _c (E _\flat) \to \mathcal W ^0 (E)$
such that for all $x \in B$,
and any sections $s _1, s _2 \in \Gamma ^\infty _c (E _\flat )$,
$$ (A s _1 )(x) = (A s _2) (x), $$
whenever $s _1 (x) = s _2 (x) $.
We say that $A $ is smooth if
$A (\Gamma ^\infty _c (E )) \subseteq \Gamma ^\infty (E )$.
A smooth fiber-wise operator $A $ is said to be bounded of order $m$
if $A $ extends to a bounded map from $\mathcal W ^m (E )$ to itself.
Denote the operator norm of $A : \mathcal W ^m (E ) \to \mathcal W ^m (E ) $ by $\| A \| _{\op m} $.
\end{dfn}
\begin{exam}
\label{SmoothingExam}
Examples of smooth fiber-wise operators are given by the elements $\psi \in \Psi ^{- \infty } _\infty (M \times _B M , E )$,
acting on $\mathcal W ^m (E )$ by the vector representation, i.e.
$$ \big( \varPsi s \big) (x, y) := \int _{Z _x} \psi (x, y, z ) s (x, z) \mu _x (z) .$$
\end{exam}
\begin{nota}
For a fiber-wise operator $A : \Gamma ^\infty _c (E _\flat) \to \mathcal W ^0 (E)$ of the form given in Example
\ref{SmoothingExam}, we denote its kernel by $A (x, y, z)$.
We shall write
$$ \| A \| _{\HS m} := \| A (x, y, z) \| _{\HS m} ,$$
provided $A (x, y, z) \in \bar \Psi ^{-\infty} _m (M \times _B M , E )$.
\end{nota}
The following lemma enables one to construct more fiber-wise operators:
\begin{lem}
Let $A$ be any first order, $C ^\infty$-bounded differential operator on $M $ and
$\varPsi \in \Psi ^{- \infty } _\infty (M \times _B M , E )$ be as in Example \ref{SmoothingExam}.
Then $[A, \varPsi ] $ is a fiber-wise operator in $\Psi ^{- \infty } _\infty (M \times _B M , E )$.
\end{lem}
\begin{proof}
Since multiplication by a tensor or differentiation along $V$ is fiber-wise,
it remains to consider operators of the form $\nabla ^{E} _{X ^H} $, for some vector field $X $ on $B$. Let $L^{\nabla^{E}}_{X^{H}}=d^{\nabla^{E}}i_{X^{H}}+i_{X^{H}}d^{\nabla^{E}}$, where $d^{\nabla^{E}}$ is the twisted de Rham operator. In the following of this paper, the Lie derivatives are all defined in this way.
Let $s \in \Gamma ^\infty _c (E)$ be arbitrary.
We first suppose that $Z$ is orientable and $\mu _x$ is a volume form.
By the decay condition in Definition \ref{NWX}, one can differentiate under the integral sign to get
\begin{align*}
A \varPsi s (x, z)
=& \int _{Z _x} L ^{\nabla ^{\hat E }} _{X ^{\hat H}} (\psi (x, y, z) s (x, y) \mu _x (y)) \\
=& \int _{Z _x} \big( L ^{\nabla ^{\hat E }} _{X ^{\hat H}} \psi (x, y, z) \big) s (x, y) \mu _x (y)
+ \int _{Z _x} \psi (x, y, z) \big( L ^{\nabla ^{E }} _{X ^H} s (x, y) \big) \mu _x (y) \\
&+ \int _{Z _x} \psi (x, y, z) s (x, y) (L ^{\nabla ^{E }} _{X ^H} \mu _x (y)).
\end{align*}
The second term in the last line is just $\varPsi A s $, and the result follows.
In the general case, one takes a suitable partition of unity and integrates against local volume forms,
obtaining the same identity.
\end{proof}
Let $A $ be a smooth fiber-wise operator on $\Gamma ^\infty _c (E _\flat )$.
Then $A $ induces a fiber-wise operator $\hat A$ on $\Gamma ^\infty _c (\hat E _\flat)$ by
\begin{equation}
\label{FiberwiseOp}
\hat A (u \otimes \mathbf s ^* e) := A ( u |_{ M _\alpha \times \{ z \}}) \otimes (\mathbf s ^* e)
\end{equation}
on $\mathbf t ^{-1} (M _\alpha ) \cong M _\alpha \times Z $,
for any sections
$e \in \Gamma ^\infty (E ') , u \in \Gamma ^\infty (\mathbf t ^* E ) $
and $\psi = u \otimes \mathbf s ^* e \in \Gamma ^{ \infty } _c (\hat E ) $.
Note that $\hat A$ is independent of trivialization since $A$ is fiber-wise,
and for any $\alpha , \beta $ and $z \in Z$,
the transition function $\mathbf x _\beta \circ (\mathbf x _\alpha ) ^{-1}$ maps the sub-manifold
$Z _x \times \{ z \} $ to $Z _x \times \{ \varphi ^\beta _x \circ (\varphi ^\alpha _x) ^{-1} (z) \}$
as the identity diffeomorphism.
\subsection{The main theorem}
Suppose that $A $ is smooth and bounded of order $m$ for all $m \in \mathbb N$.
Consider the covariant derivatives of $\hat A \psi ^\alpha$.
\begin{thm}
\label{AEstCor1}
For any smooth bounded $G$-invariant operator $A $, there exist constants $C ' _{1,1} , C ' _{1, 0} > 0 $ such that for any $\psi \in \Psi ^{- \infty} _\infty (M \times _B M ) ^G $, one has
$\hat A \psi \in \Psi ^{- \infty} _\infty (M \times _B M ) ^G $ and
$$ \| \hat A \psi \| _{\HS 1} \leq (C ' _{1,1} \| A \|_{\op 1} + C ' _{1,0} \| A \| _{\op 0}) \| \psi \| _{\HS 1}.$$
\end{thm}
\begin{proof}
Fix a partition of unity $\{ \theta _\alpha \} \in C ^\infty _c (B)$ subordinate to $\{ B _\alpha \}$.
We still denote by $\{ \theta _\alpha \}$ its pullback to $M$ and $M \times _B M $.
Fix any Riemannian metric on $Z$ and denote the corresponding Riemannian measure by $\mu _Z$.
Then one writes
$$ (\hat \mathbf x _\alpha ) _\star (\mu _x \mu _B ) = J _\alpha \mu _B \mu _Z ,$$
for some smooth positive function $J _\alpha $.
Moreover, $ \frac{1}{J _\alpha} $ is bounded on any compact subset of $B _\alpha \times Z$.
Given any $\psi \in \Psi ^{-\infty} _\infty (M \times _B M) ^G$,
let $\psi ^\alpha := \hat \mathbf x _\alpha ^* (\psi ) $.
The theorem clearly follows from the inequalities
\begin{align}
\label{EstLem1}
\int _{B _\alpha} \int _{Z _x} \chi (x, z) \int _{Z _x}
| \dot \nabla ^\alpha \hat A (\theta _\alpha \psi ^\alpha ) | ^2 &
\mu _x (y) \mu _x (z) \mu _B (x) \\ \nonumber
\leq & (C _1 \| A \| ^2 _{\op 1} + C _2 \| A \| ^2 _{\op 0} ) \| \psi \| ^2 _{\HS 1}, \\
\label{EstLem2}
\int _{B _\alpha} \int _{Z _x} \chi (x, z) \int _{Z _x}
| \dot \partial ^\alpha \hat A (\theta _\alpha \psi ^\alpha ) | ^2 &
\mu _x (y) \mu _x (z) \mu _B (x) \\ \nonumber
\leq & (C _1 \| A \| ^2 _{\op 1} + C _2 \| A \| ^2 _{\op 0} ) \| \psi \| ^2 _{\HS 1}, \\
\label{EstLem3}
\int _B \int _{Z _x} \chi (x, z) \int _{y \in Z _x}
| \dot \partial ^Z \hat A (\theta _\alpha \psi ^\alpha ) | ^2 &
\mu _x (y) \mu _x (z) \mu _B (x)
\leq \| A \| ^2 _{\op 0} \| \psi \| ^2 _{\HS 1} .
\end{align}
Let $Z = \bigcup _\lambda Z _\lambda $ be a locally finite cover.
Then the support of $\chi \theta _\alpha $ meets only finitely many of the sets $Z _\lambda $.
Let $\chi _\alpha $ be the characteristic function
$$ \chi _\alpha (x, z) = 1 \text{ if } (\chi \theta _\alpha ) (x, z) > 0, \quad 0 \text{ otherwise.}$$
Without loss of generality we may assume $E' | _{Z _\lambda } $ are all trivial.
For each $\lambda $ fix an orthonormal basis $\{ e ^\lambda _r \}$ of $E ' |_{ B _\alpha \times Z _\lambda }$,
and write
$$\psi ^\alpha := \sum _r u ^\lambda _r \otimes \mathbf s ^* e ^\lambda _r .$$
Using Lemma \ref{TensorD} one estimates the integrand on the l.h.s. of Equation \eqref{EstLem1}: there exists a constant $C _3 > 0$ such that
\begin{align*}
\Big| \dot \nabla ^\alpha (\hat A \theta _\alpha \psi ^\alpha ) & \Big| ^2 (x, y, z) \\
=& \Big| \sum _r (\dot \nabla ^{E _\flat} A \theta _\alpha (u ^\lambda _r |_{M _\alpha \times \{ z \}})
(x, y)) \otimes \mathbf s ^* e ^\lambda _r
+ (A \theta _\alpha u ^\lambda _r )\otimes \mathbf s ^* (\nabla ^E e ^\lambda _r ) \Big| ^2 \\
\leq & C _3 \sum _r
\Big( \Big| \dot \nabla ^{E _\flat} A \theta _\alpha (u ^\lambda _r |_{M _\alpha \times \{ z \}})
(x, y) \Big| ^2
+ \Big| (A \theta _\alpha u ^\lambda _r ) \otimes \mathbf s ^* (\nabla ^E e ^\lambda _r ) \Big|^2
\Big).
\end{align*}
By integrating, one gets for some constants $C _q$, $q=4,\cdots,10$, that
\begin{align*}
\int _{B _\alpha} \int _{Z _x} \chi (x, z) \int _{Z _x} &
| \dot \nabla ^\alpha \hat A (\theta _\alpha \psi ^\alpha ) | ^2
\mu _x (y) \mu _x (z) \mu _B (x) \\
\leq C _4 \sum _\lambda \int _{Z _\lambda} \int _{B _\alpha} & \int _{Z _x}
\sum _r \Big( \Big| \dot \nabla ^{E _\flat} A \theta _\alpha (u ^\lambda _r |_{M _\alpha \times \{ z \}})
(x, y) \Big| ^2 \\
&+ \Big| (A \theta _\alpha u ^\lambda _r )\otimes \mathbf s ^* (\nabla ^E e ^\lambda _r ) \Big|^2 \Big)
\mu _x (y) \mu _B (x) \mu _Z (z) \\
\leq \sum _\lambda \int _{Z _\lambda} \int _{B _\alpha} & \int _{Z _x}
\sum _r \Big( C _5 \| A \| _{\op 1} ^2
\big( \big| \dot \nabla ^{E _\flat} \theta _\alpha (u ^\lambda _r |_{M _\alpha \times \{ z \}}) (x, y) \big| ^2 \\
&+ \big| \dot \partial ^V \theta _\alpha (u ^\lambda _r |_{M _\alpha \times \{ z \}}) (x, y) \big| ^2
+ \big| \theta _\alpha (u ^\lambda _r |_{M _\alpha \times \{ z \}}) (x, y) \big| ^2 \big) \\
&+ C _6 \| A \| ^2 _{\op 0} \big| \theta _\alpha u ^\lambda _r \big|^2 \Big)
\mu _x (y) \mu _B (x) \mu _Z (z) \\
\leq \sum _\lambda \int _{Z _\lambda} \int _{B _\alpha} & \int _{Z _x}
J _\alpha (C _7 \| A \| _{\op 1} ^2 + C _8 \| A \| ^2 _{\op 0} )
\big( \big| \dot \nabla ^\alpha \theta _\alpha \psi _\alpha \big| ^2 \\
&+ \big| \dot \partial ^\alpha \theta _\alpha \psi _\alpha \big| ^2
+ \big| \dot \partial ^Z \theta _\alpha \psi _\alpha \big| ^2 + \big| \theta _\alpha \psi _\alpha \big|^2 \big)
\mu _x (y) \mu _B (x) \mu _Z (z) \\
\leq \int _B \int _{Z _x} \chi _\alpha & \int _{Z _x}
(C _9 \| A \| _{\op 1} ^2 + C _{10} \| A \| ^2 _{\op 0} )
\big( \big| \dot \nabla ^{\hat E _\flat} \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2 \\
+& \big| \dot \partial ^\mathbf s \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf t \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big|^2 \big)
\mu _x (y) \mu _x (z) \mu _B (x).
\end{align*}
Now we use an argument similar to the proof of Lemma \ref{EllReg2}.
Namely, write the integrand as a sum
\begin{align*}
\chi _\alpha \big( \big| \dot \nabla ^{\hat E _\flat} & \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf s \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf t \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big|^2 \big) \\
=& \sum _{g \in S} \chi _\alpha g ^* \chi
\big( \big| \dot \nabla ^{\hat E _\flat} \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf s \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf t \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big|^2 \big).
\end{align*}
Then since for all $g$
$$ \int g ^* \chi
\big( \big| \dot \nabla ^{\hat E _\flat} \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf s \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \dot \partial ^\mathbf t \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big| ^2
+ \big| \mathbf x _\alpha ^* ( \theta _\alpha \psi ) \big|^2 \big) \leq C \| \psi \| ^2 _{\HS 1}$$
for some constant $C > 0$ independent of $g$,
equation \eqref{EstLem1} follows.
Using the same arguments with $\dot \partial ^\alpha $ in place of $\dot \nabla ^\alpha $,
one gets Equation \eqref{EstLem2}.
As for the last inequality, since $\mathbf t ^{* } E |_{M _\alpha \times \{ z \}} $
and the connection $(\mathbf x ^{-1} _\alpha ) ^* \nabla ^{\mathbf s ^{*} E } $ are trivial along $\exp t Z _0 $,
one can write
\begin{align*}
\nabla ^\alpha _{Z _0 }(\hat A u \otimes \mathbf s ^* e )
=& \frac{d}{d t} \Big|_{t = 0} A u |_{M _\alpha \times \{ \exp t Z _0 \} } \otimes \mathbf s ^* e
+ u \otimes \nabla ^{\mathbf s ^{*} E '} _{Z _0} \mathbf s ^* e \\
=& A \big( \frac{d}{d t} \Big|_{t = 0} u |_{M _\alpha \times \{ \exp t Z _0 \} } \big) \otimes \mathbf s ^* e
+ u \otimes \nabla ^{\mathbf s ^{*} E '} _{Z _0} \mathbf s ^* e
= \hat A (\nabla ^\alpha _{Z _0} (u \otimes \mathbf s ^* e )).
\end{align*}
It follows that
$$ \dot \partial ^Z \hat A \psi ^\alpha = \hat A (\dot \partial ^Z \psi ^\alpha ) ,$$
from which Equation \eqref{EstLem3} follows.
\end{proof}
Clearly, the arguments leading to Theorem \ref{AEstCor1} can be repeated and we obtain:
\begin{cor}
\label{Main1}
For any smooth bounded operator $\hat A $ and $m = 0, 1 , \cdots $, there exist constants $C _{m, l} > 0 $ such that for any $\psi \in \Psi ^{- \infty} _\infty (M \times _B M ) ^G $, one has
$$ \| \hat A \psi \| _{\HS m}
\leq \big( \sum _{0 \leq l \leq m } C _{m, l} \| A \|_{\op l} \big) \| \psi \| _{\HS m}.$$
\end{cor}
\begin{nota}
In view of Corollary \ref{Main1}, we shall denote
$$ \| A \| _{\op' m} := \big( \sum _{0 \leq l \leq m } C _{m, l} \| A \|_{\op l} \big).$$
\end{nota}
We may assume without loss of generality that $C _{m, l} \geq 2$.
Then one still has
\begin{equation}
\| A _1 A _2 \| _{\op' m} \leq \| A _1 \| _{\op' m} \| A _2 \| _{\op' m} .
\end{equation}
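A short justification of this submultiplicativity, using only $C _{m, l} \geq 1$ (so that $C _{m, l} \geq 2$ is more than enough): since each $\| \cdot \| _{\op l}$ is an operator norm and hence submultiplicative,

```latex
\| A _1 A _2 \| _{\op' m}
= \sum _{0 \leq l \leq m} C _{m, l} \| A _1 A _2 \| _{\op l}
\leq \sum _{0 \leq l \leq m} C _{m, l} \| A _1 \| _{\op l} \| A _2 \| _{\op l}
\leq \Big( \sum _{0 \leq l \leq m} C _{m, l} \| A _1 \| _{\op l} \Big)
\Big( \sum _{0 \leq l' \leq m} C _{m, l'} \| A _2 \| _{\op l'} \Big),
```

where the last inequality holds because the diagonal terms satisfy $C _{m, l} \leq C _{m, l} ^2$ and all summands are non-negative.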
\section{Large time behavior of the heat operator}
In this section we prove that, under the assumption that the Novikov-Shubin invariant is positive, the heat operator converges to the projection operator in the norm $\| \cdot \| _{\HS m}$.
\subsection{The Novikov-Shubin invariant}
Let $M \to B $ be a fiber bundle with a $G$ action,
and $T M = H \oplus V $ be the $G$-invariant splitting, as defined in Section \ref{Dfn}.
Recall that we assumed the metric on
$H \cong \pi ^{*} T B$ is given by pulling back some Riemannian metric on $B$.
In other words, $V $ is a Riemannian foliation.
Let $E \to M$ be a flat, contravariant $G$-vector bundle,
and $\nabla $ be an invariant flat connection on $E$.
Denote $E ^\bullet := \wedge ^\bullet V' \otimes E$.
Since the vertical distribution $V$ is integrable,
the de Rham differential $d ^{\nabla ^E} _V$ along $V$ is well defined.
Write $\eth _0 := d ^{\nabla ^E} _V + (d ^{\nabla ^E } _V )^* , \varDelta := \eth _0^{2} $,
and denote by $e ^{- t \varDelta} $ the heat operator and
$ \varPi _0 $ the orthogonal projection onto $\Ker (\varDelta )$.
The following result is classical.
See, for example, \cite[Proposition 2.8]{B} and \cite[Proposition 3.5]{Heitsch;FoliationHeat}.
\begin{lem}
\label{DuhamelLem}
The heat operator $e ^{- t \varDelta }$ is given by a smooth kernel.
Moreover, for any first order differential operator $A$, one has the Duhamel type formula
\begin{equation}
\label{Duhamel}
[A , e ^{- t \varDelta } ] = - \int _0 ^t e ^{- (t - t') \varDelta } [A , \varDelta ] e ^{- t' \varDelta } d t' .
\end{equation}
\end{lem}
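Formula \eqref{Duhamel} follows from the standard argument: set $F (t') := e ^{- (t - t') \varDelta} A e ^{- t' \varDelta}$ and differentiate in $t'$,

```latex
\frac{d}{d t'} F (t')
= \varDelta e ^{- (t - t') \varDelta} A e ^{- t' \varDelta}
- e ^{- (t - t') \varDelta} A \varDelta e ^{- t' \varDelta}
= - e ^{- (t - t') \varDelta} [A , \varDelta] e ^{- t' \varDelta} .
```

Integrating from $t' = 0$ to $t' = t$ gives $F (t) - F (0) = A e ^{- t \varDelta} - e ^{- t \varDelta} A = [A , e ^{- t \varDelta}]$, which is Equation \eqref{Duhamel}.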
From Lemma \ref{DuhamelLem}, it follows that:
\begin{cor}\label{DuhamelCor}
\cite[Corollary 3.11]{Heitsch;FoliationHeat}
For any $i, j, k$, there exist $C , M > 0$ such that
$$ |(\dot \nabla ^{E _\flat}) ^i (\dot \partial ^\mathbf s)^j (\dot \partial ^\mathbf t) ^k e ^{- t \eth _0 ^2}| (x, y, z)
\leq C e ^{- M \mathbf d (y, z) ^2}.$$
\end{cor}
Hence $e ^{- t \eth _0 ^2} \in \Psi ^{- \infty} _\infty (M \times _B M , E ^\bullet) ^G $.
As for $\varPi _0$, one has
\begin{lem}\label{GR}
The kernel of $\varPi _0 $ lies in $\bar \Psi ^{- \infty} _0 (M \times _B M , E ^\bullet) ^G $.
\end{lem}
\begin{proof}
By \cite[Theorem 2.2]{Gong;CoverTorsion} $\varPi _0$ is also represented by a smooth kernel $\varPi _0 (x, y, z)$.
Moreover by \cite[Theorem 2.2]{Gong;CoverTorsion} and the fact that $\varPi _0 = \varPi _0 ^2 $, one has
$$ \sup _{x \in B} \Big\{ \int _{Z _x} \chi (x, z) \int _{Z _x}
| \varPi _0 (x, y, z) |^2 \mu _x (y) \mu _x (z) \Big\}
= \| \varPi _0 \| _\tau < \infty ,$$
where $\| \cdot \| _\tau $ is the $\tau$-trace norm defined in \cite{Gong;CoverTorsion}
(see also \cite{Schick;NonCptTorsionEst}).
Hence it remains to consider
$\chi _n (x, y, z) \varPi _0 (x, y, z) $,
where $\chi _n \in C ^\infty (M \times _B M) ^G$ is a sequence of smooth functions such that
\begin{enumerate}
\item
$0 \leq \chi _n \leq 1$;
\item
$\chi _n$ is increasing and converges point-wise to 1;
\item
$\chi _n (x, y, z) = 0 $ whenever $\mathbf d (y, z) > n r $ for some $r > 0$.
\end{enumerate}
To construct $\chi _n$, let $r > 0 $ be the infimum of the injectivity radii of the fibers $Z _x$,
and let $\phi _1 $
be a non-negative smooth function such that $\phi _1 (t) = 1 $ for $t < \frac{r}{2}$ and $\phi _1 (t) = 0 $ for $t > r$.
Then $\chi _1 := \phi _1 \circ \mathbf d (y, z)$ is $G$-invariant.
Define
$$ \tilde \chi _n := \chi _1 \star \cdots \star \chi _1 \text{ (convolution by $n$ times).}$$
Note that $\tilde \chi _n (x, y, z) > 0 $ whenever $\mathbf d (y, z) < \frac{n r}{2}$.
Moreover, $\tilde \chi _n$ is $G$-invariant and $\tilde \chi _n (x, y, z) = 0 $ whenever $\mathbf d (y, z) > n r $.
Since $\tilde \chi _{n + 1}$ is bounded away from $0$ on the support of $\tilde \chi _n$,
clearly one can find smooth functions $\phi _n$ such that $\chi _n := \phi _n \circ \tilde \chi _n$
satisfies conditions (1)-(3).
\end{proof}
Because of Corollary \ref{DuhamelCor} and Lemma \ref{GR}, it makes sense to define:
\begin{dfn}
We say that $\varDelta$ has positive Novikov-Shubin invariant if there exist $\gamma > 0 $ and $C_0>0$ such that
for sufficiently large $t$,
$$ \sup _{x \in B} \Big\{ \int _{Z _x} \chi (x, z) \int _{Z _x}
| (e ^{- t \varDelta } - \varPi _0) (x, y, z) |^2 \mu _x (y) \mu _x (z) \Big\} \leq C _0 t ^{- \gamma }.$$
\end{dfn}
\begin{rem}
The positivity of the Novikov-Shubin invariant is independent of the metrics defining the operator $\varDelta$.
\end{rem}
\begin{rem}
\label{SquareNS}
Since $e ^{- \frac{t}{2} \varDelta } - \varPi _0 $ is non-negative, self-adjoint and
$(e ^{- \frac{t}{2} \varDelta } - \varPi _0 ) ^2 = e ^{- t \varDelta } - \varPi _0 $, one has
$$ \sup _{x \in B} \Big\{ \int _{Z _x} \chi (x, z) \int _{Z _x}
| (e ^{- \frac{t}{2} \varDelta } - \varPi _0 ) (x, y, z)|^2 \mu _x (y) \mu _x (z) \Big\}
= \| e ^{- t \varDelta } - \varPi _0 \| _\tau .$$
Hence our definition of having positive Novikov-Shubin invariant is equivalent to that of \cite{Schick;NonCptTorsionEst}.
Our argument here is similar to the proof of \cite[Theorem 7.7]{BMZ}.
\end{rem}
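For the reader's convenience, we spell out the algebra behind Remark \ref{SquareNS}. Since $\varPi _0 $ is the orthogonal projection onto $\Ker \varDelta $, one has $e ^{- s \varDelta } \varPi _0 = \varPi _0 e ^{- s \varDelta } = \varPi _0 $ for every $s \geq 0 $ and $\varPi _0 ^2 = \varPi _0 $, whence
$$ (e ^{- \frac{t}{2} \varDelta } - \varPi _0 ) ^2
= e ^{- t \varDelta } - 2 e ^{- \frac{t}{2} \varDelta } \varPi _0 + \varPi _0 ^2
= e ^{- t \varDelta } - \varPi _0 .$$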
In this paper, we shall always assume $\varDelta $ has positive Novikov-Shubin invariant.
From this assumption, it follows by integration over $B$ that there exist constants $\gamma>0$ and $C>0$ such that for $t$ large enough
\begin{equation}
\| e ^{- t \varDelta } - \varPi _0 \| _{\HS 0} < C t ^{- \gamma }.
\end{equation}
\subsection{Example: The Bismut super-connection}
\begin{dfn}
\label{BismutDfn}
A standard flat Bismut super-connection is an operator of the form
$$ d ^{\nabla ^{E}}
:= d ^{\nabla ^{E}} _V + \nabla ^{E ^\bullet _\flat } + \iota _\Theta ,$$
where $\Theta $ is the $V$-valued horizontal 2-form defined by
$$ \Theta (X _1 ^H , X _2 ^H) := - P ^V [ X _1 ^H , X _2 ^H ] ,
\quad \Forall X _1 , X _2 \in \Gamma ^\infty (T B), $$
and $\iota _\Theta $ is the contraction with $\Theta$. Note that $P^V$ is not canonical and it depends on the splitting $TM = V\oplus H$.
\end{dfn}
Observe that the adjoint of the Bismut super-connection,
$(d ^{\nabla ^{E}})' = (d ^{\nabla ^{E}} _V ) ^* + (\nabla ^{E ^\bullet _\flat } )' - \Lambda _{\Theta ^*} $,
is also flat.
It follows that
$$ (\nabla ^{E ^\bullet _\flat })' (d ^{\nabla ^E} _V ) ^*
+ (d ^{\nabla ^E} _V ) ^* ( \nabla ^{E ^\bullet _\flat } )' = 0. $$
Define
$$ \Omega := \frac{1}{2} \big((\nabla ^{E ^\bullet _\flat })' - \nabla ^{E ^\bullet _\flat } \big).$$
Observe that $\Omega $ is a tensor (see \cite{Lopez;FoliationHeat} for an explicit formula for $\Omega $).
Moreover one has
$$ \nabla ^{E ^\bullet _\flat } (d ^{\nabla ^E} _V ) ^* +(d ^{\nabla ^E} _V ) ^* \nabla ^{E ^\bullet _\flat }
= 2 \Omega (d ^{\nabla ^E} _V ) ^* + 2 (d ^{\nabla ^E} _V ) ^* \Omega .$$
Also, observe that
$(d ^{\nabla ^E} _V ) + (d ^{\nabla ^E} _V ) ^* + \nabla ^{E ^\bullet _\flat }
+ ((\nabla ^{E ^\bullet _\flat } )') ^* $
is an elliptic operator.
\subsection{The regularity result of Alvarez Lopez and Kordyukov}
We first recall that an operator $A$ is called $C ^\infty$-bounded if in normal coordinates the coefficients and their derivatives are uniformly bounded. As in \cite{Lopez;FoliationHeat},
we make the more general assumption that there exist a
$C^\infty$-bounded first-order differential operator $Q$
and zeroth-order operators $R _1, R _2 , R _3 , R _4 $, all $G$-invariant,
such that $d _V^{\nabla ^{E}} + (d ^{\nabla ^E} _V ) ^* + Q $ is elliptic, and
\begin{align}
\label{LopezHypo}
Q d _V ^{\nabla^{E}}+ d _V ^{\nabla^{E}}Q &= R _1 d _V ^{\nabla^{E}}+ d _V^{\nabla^{E}} R _2, \\ \nonumber
Q (d ^{\nabla^{E}}_V)^* + (d ^{\nabla^{E}}_V)^* Q &= R _3 (d ^{\nabla^{E}}_V)^* + (d ^{\nabla^{E}}_V)^* R _4 .
\end{align}
Clearly, in our example,
$\nabla ^{E ^\bullet _\flat } + ((\nabla ^{E ^\bullet _\flat } )') ^* $ satisfies Equation \eqref{LopezHypo}.
Write $\eth _0 := d ^{\nabla^{E}}_V + (d ^ {\nabla^{E}} _V)^* , \varDelta := \eth _0 ^2 $,
and denote by $ \varPi _{d _V} , \varPi _{d ^* _V} $ respectively the orthogonal projections onto
the range of $d ^{\nabla ^E} _V , (d ^{\nabla ^E} _V )^* $,
which we shall denote by $\Rg (d _V ) , \Rg (d ^* _V)$.
In this section, we shall consider the operators
\begin{align*}
B _1 :=& R _1 \varPi _{d _V} + R _3 \varPi _{d ^* _V}, \\
B _2 :=& \varPi _{d ^* _V} R _2 + \varPi _{d _V} R _4, \\
B :=& B _2 \varPi _0 + B _1 (\id - \varPi _0 ).
\end{align*}
We recall some elementary formulas regarding these operators from \cite{Lopez;FoliationHeat}:
\begin{lem}
\label{Lopez;2.2}
\cite[Lemma 2.2]{Lopez;FoliationHeat}
One has
\begin{align*}
\nonumber
Q d ^{\nabla^{E}}_V + d ^{\nabla^{E}}_V Q =& B _1 d ^{\nabla^{E}}_V + d^{\nabla^{E}} _V B _2, \\
Q (d ^{\nabla^{E}}_V)^* + (d ^{\nabla^{E}}_V)^* Q =& B _1 (d ^{\nabla^{E}}_V )^*+ (d ^{\nabla^{E}}_V)^* B _2, \\ \nonumber
[Q , \varDelta ] =& B _1 \varDelta - \varDelta B _2 - \eth _0 (B _1 - B _2 ) \eth _0.
\end{align*}
\end{lem}
One can furthermore estimate the derivatives of $\varPi _0 $.
First, recall that
\begin{lem}
\label{Lopez;2.8}
One has (cf. \cite[Corollary 2.8]{Lopez;FoliationHeat})
$$ [Q + B, \varPi _0 ] = 0 .$$
\end{lem}
\begin{proof}
Here we give a different proof.
From the definition we have
$$ B = (\varPi _{d _{V}^*} R _2 + \varPi _{d_{V}} R _4 ) \varPi _0 + R _1 \varPi _{d_{V}} + R _3 \varPi _{d ^*_{V}} ,$$
where we used $\varPi _{d_V} \varPi _0 = \varPi _{d_V ^*} \varPi _0 = 0.$
Hence
$$ B \varPi _0 - \varPi _0 B
= (\varPi _{d ^*_{V}} R _2 + \varPi _{d_{V}} R _4 ) \varPi _0 - \varPi _0 R _1 \varPi _{d_{V}} - \varPi _0 R _3 \varPi _{d ^*_{V}} .$$
For any $s$ one has
$$ \varPi _{d_V} s = \lim _{n \to \infty} d \tilde s _n ,$$
for some sequence $\tilde s _n $ (in some suitable function spaces).
It follows that
\begin{align*}
\varPi _0 R _1 \varPi _{d_V} s
=& \lim _{n \to \infty } \varPi _0 R _1 d \tilde s _n \\
=& \lim _{n \to \infty } \varPi _0 (Q d^{\nabla^E}_V + d^{\nabla^E}_V Q - d^{\nabla^{E}}_V R _2 ) \tilde s _n
= \varPi _0 Q \varPi _{d_V} s.
\end{align*}
Similarly, one has $\varPi _0 R _3 \varPi _{d_V ^*} = \varPi _0 Q \varPi _{d_V ^*} $, and by considering the adjoints,
$\varPi _{d ^*_{V}} R _2 \varPi _0 = \varPi _{d ^*_{V}} Q \varPi _0 $
and $\varPi _{d_{V}} R _4 \varPi _0 = \varPi _{d_{V}} Q \varPi _0$.
It follows that
\[ [Q + B , \varPi _0 ] = (\id - \varPi _{d _{V}} - \varPi _{d ^*_{V}} ) Q \varPi _0
- \varPi _0 Q (\id - \varPi _{d_{V}} - \varPi _{d ^*_{V}}) = 0 . \qedhere \]
\end{proof}
In other words, regarding
$[Q, \varPi _0 ] $ and $ [B , \varPi _0 ]$
as kernels, one has
$$\| [Q, \varPi _0 ] \| _{\HS m} = \| [B , \varPi _0 ] \| _{\HS m},$$
provided the right hand side is finite.
Hence, using elliptic regularity and the same arguments as in Lemma \ref{GR},
one can prove inductively that
$$\varPi _0 (x, y, z) \in \bar \Psi ^{-\infty} _m (M \times _B M , E ^\bullet ), \quad \Forall m.$$
Next, we recall the main result of \cite{Lopez;FoliationHeat}
\begin{lem}
\label{OldLem}
For any $m = 0, 1, \cdots $,
\begin{enumerate}
\item
The heat operator $e ^{- t \varDelta } $, and the operators
$\eth_0 e ^{- t \varDelta }, \varDelta e ^{- t \varDelta } $ map $\mathcal W ^m (E )$ to itself as bounded operators.
Moreover, there exist constants $C ^0 _m , C ^1 _m , C ^2 _m > 0 $ such that
\begin{align*}
\| e ^{- t \varDelta } \| _{\op m} \leq & C ^0 _m ,\\
\|\eth_0 e ^{- t \varDelta } \| _{\op m} \leq & t ^{- \frac{1}{2}} C ^1 _m, \\
\| \varDelta e ^{-t \varDelta } \| _{\op m} \leq & t ^{-1} C ^2 _m ,
\end{align*}
for all $t > 0$.
\item
As $t \to \infty $, $e ^{- t \varDelta } $ strongly converges as an operator on $\mathcal W ^m (E )$.
Moreover, $(t, s) \mapsto e ^{- t \varDelta } s $ is a continuous map from
$[0, \infty ] \times \mathcal W ^m (E ) $ to $\mathcal W ^m (E )$.
\item
One has the Hodge decomposition
$$ \mathcal W ^m (E ) = \Ker (\varDelta ) + \overline { \Rg (\varDelta )}
= \Ker (\eth _0 ) + \overline {\Rg (\eth _0)}, $$
where the kernel, image and closure are in $\mathcal W ^m (E )$.
\end{enumerate}
\end{lem}
Note that our setting is slightly different from that of \cite{Lopez;FoliationHeat},
where $M$ is assumed to be compact (but with possibly non-compact fibers).
However, the same arguments clearly apply because our $M$ is of bounded geometry.
We recall some further results from \cite[Section 2]{Lopez;FoliationHeat}.
\begin{lem}
\label{Lopez;2.4}
\cite[Lemma 2.4]{Lopez;FoliationHeat}
For any $m\geq 0$, there exists a constant $C^3_m>0$ such that
$$ \| [Q , e ^{- t \varDelta } ] \| _{\op m} \leq C ^3 _m .$$
\end{lem}
\begin{proof}
Using the third equation of Lemma \ref{Lopez;2.2}, Equation (\ref{Duhamel}) becomes
$$ [Q , e ^{- t \varDelta } ]
= \int _0 ^{t } e ^{- (t - t' ) \varDelta } \eth _0 (B _1 - B _2 ) \eth _0 e ^{- t' \varDelta } d t'
- \int _0 ^{t } e ^{- (t - t' ) \varDelta } (B _1 \varDelta - \varDelta B _2 ) e ^{- t' \varDelta } d t' .$$
Using Lemma \ref{OldLem}, we estimate the first integral
\begin{align*}
\Big\| \int _0 ^{t } e ^{- (t - t' ) \varDelta } \eth _0 (B _1 - B _2 ) \eth _0 e ^{- t' \varDelta } d t' \Big\| _{\op m}
\leq & \| B _1 - B _2 \| _{\op m} (C ^1 _m ) ^2 \int _0 ^t \frac{ d t' }{\sqrt{(t - t' ) t' }} \\
=& \| B _1 - B _2 \| _{\op m} (C ^1 _m ) ^2 \pi.
\end{align*}
As for the second integral,
we split the domain of integration into $[0, \frac{t}{2}] $ and $[\frac{t}{2} , t]$,
and then integrate by parts to get
\begin{align*}
\int _0 ^{t } e ^{- (t - t' ) \varDelta } (B _1 \varDelta - \varDelta B _2 ) e ^{- t' \varDelta } d t'
=& \int _0 ^{\frac{t}{2} } e ^{- (t - t' ) \varDelta } \varDelta (- B _1 - B _2 ) e ^{- t' \varDelta } d t' \\
&- \int _{\frac{t}{2}} ^{t } e ^{- (t - t' ) \varDelta } (B _1 - B _2 ) \varDelta e ^{- t' \varDelta } d t' \\
&+ e ^{- (t - t' ) \varDelta } B _1 e ^{- t' \varDelta } \Big |^{\frac{t}{2}} _{t' = 0}
- e ^{- (t - t' ) \varDelta } B _2 e ^{- t' \varDelta } \Big |^{t} _{t' = \frac{t}{2}}.
\end{align*}
Again using Lemma \ref{OldLem}, its $\| \cdot \| _{\op m}$-norm is bounded by
$$ C ^0 _m C ^1 _m (\| B _1 \| _{\op m} + \| B _2 \| _{\op m} ) \Big( \int _0 ^{\frac{t}{2}} \frac{d t'}{t - t'}
+ \int _{\frac{t}{2}} ^t \frac{d t'}{t'} \Big)
+ C ^0 _m (C ^0 _m + 1 ) (\| B _1 \| _{\op m} + \| B _2 \| _{\op m} ),$$
which is uniformly bounded because
$\int _0 ^{\frac{t}{2}} \frac{d t'}{t - t'} = \int _{\frac{t}{2}} ^t \frac{d t'}{t'} = \log 2 $.
\end{proof}
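For completeness, the value $\pi$ of the time integral appearing in the first estimate above is the classical Beta integral: substituting $t' = t \sin ^2 \theta $, so that $d t' = 2 t \sin \theta \cos \theta \, d \theta $ and $\sqrt{(t - t') t'} = t \sin \theta \cos \theta $, one finds
$$ \int _0 ^t \frac{d t'}{\sqrt{(t - t') t'}} = \int _0 ^{\frac{\pi}{2}} 2 \, d \theta = \pi ,$$
independently of $t$.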
Lemma \ref{Lopez;2.8} suggests that $[Q + B , e ^{-t \varDelta }] $ converges to zero as $t \to \infty$.
Indeed, we shall prove a stronger result, namely that
$[Q + B , e ^{-t \varDelta } ]$ decays polynomially in the $\| \cdot \| _{\HS m} $-norm for all $m$.
\begin{lem}
\label{Trick1}
Suppose there exist $C_m, \gamma > 0 $ such that $\| e ^{- t \varDelta } - \varPi _0 \| _{\HS m} \leq C _m t ^{- \gamma }$,
then there exist $C' _m, \gamma_m > 0$ such that
$$ \| [Q + B , e ^{-t \varDelta } ] \| _{\HS m}
= \| [Q + B , e ^{-t \varDelta } - \varPi _0 ] \| _{\HS m} \leq C' _m t ^{- \gamma _m } .$$
\end{lem}
\begin{proof}
We follow the proof of \cite[Lemma 2.6]{Lopez;FoliationHeat}.
By Lemma \ref{Lopez;2.2}, we get
$$ [Q + B , \varDelta ] = (\varDelta (B _1 + B _2 ) + \eth _0 (B _1 - B _2) \eth _0 ) (\id - \varPi _0 ).$$
It follows that
$$ \varPi _0 [Q + B , e ^{- \frac{t}{2} \varDelta }] = [Q + B , e ^{- \frac{t}{2} \varDelta }] \varPi _0 = 0 .$$
Write
\begin{align*}
[Q + B , e ^{-t \varDelta } ]
=& [Q + B , e ^{ - \frac{t}{2} \varDelta } ] e ^{- \frac{t}{2} \varDelta }
+ e ^{- \frac{t}{2} \varDelta } [Q + B , e ^{- \frac{t}{2} \varDelta } ] \\
=& [Q + B , e ^{ - \frac{t}{2} \varDelta } ] ( e ^{- \frac{t}{2} \varDelta } - \varPi _0 )
+ (e ^{- \frac{t}{2} \varDelta } - \varPi _0 ) [Q + B , e ^{- \frac{t}{2} \varDelta } ].
\end{align*}
Taking $\| \cdot \| _{\HS m} $ and using
Corollary \ref{Main1} and Lemma \ref{Lopez;2.4}, the claim follows.
\end{proof}
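Let us briefly justify the vanishing $\varPi _0 [Q + B , e ^{- \frac{t}{2} \varDelta }] = [Q + B , e ^{- \frac{t}{2} \varDelta }] \varPi _0 = 0 $ used above. By Duhamel's formula,
$$ [Q + B , e ^{- s \varDelta } ]
= - \int _0 ^s e ^{- (s - u) \varDelta } [Q + B , \varDelta ] e ^{- u \varDelta } \, d u .$$
Since $\varPi _0 \varDelta = 0 $ and $\varPi _0 \eth _0 = 0 $, the formula for $[Q + B , \varDelta ]$ gives $\varPi _0 [Q + B , \varDelta ] = 0 $, while $\varPi _0 e ^{- (s - u) \varDelta } = \varPi _0 $; hence the left factor annihilates the integrand. Similarly, $[Q + B , \varDelta ] e ^{- u \varDelta } \varPi _0 = [Q + B , \varDelta ] \varPi _0 = 0 $ because of the factor $\id - \varPi _0 $.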
\begin{thm}
\label{Main2}
Suppose $\| e ^{- t \varDelta } - \varPi _0 \| _{\HS 0} \leq C _0 t ^{- \gamma }$ for some $\gamma > 0 , C _0 > 0$.
Then for any $m$, there exists $C'' _m > 0 $ such that
$$\| e ^{- t \varDelta } - \varPi _0 \| _{\HS m} \leq C'' _m t ^{- \gamma } , \quad \Forall t > 1 .$$
\end{thm}
\begin{proof}
We prove the theorem by induction. The case $m = 0 $ is given.
Suppose that for some $m$, $\| e ^{- t \varDelta } - \varPi _0 \| _{\HS m} \leq C _m t ^{- \gamma } .$
Consider $\| e ^{- t \varDelta } - \varPi _0 \| _{\HS m +1}$.
Since $Q $ is a first order differential operator,
for any kernel $\psi \in \Psi ^{- \infty } _\infty (M \times _B M , E ^\bullet ) ^G$,
$[Q , \psi ] $ is also a kernel lying in $\Psi ^{- \infty } _\infty (M \times _B M , E ^\bullet ) ^G$,
that is in particular given by a composition of the covariant derivatives
$\dot \nabla ^{\hat E _\flat} , \dot \partial ^\mathbf s , \dot \partial ^\mathbf t $ and some tensors acting on $\psi $.
Since $\| \psi \| _{\HS m} $ is by definition the $\| \cdot \| _{\HS 0} $ norm of the $m$-th derivatives of $\psi$,
elliptic regularity (Lemma \ref{EllReg2}) implies
$$ \| \psi \| _{\HS m+1}
\leq \tilde C _m (\| \psi \|_{\HS m} + \| \eth_0 \psi \|_{\HS m} + \| \psi \eth _0 \| _{\HS m}+\| [Q , \psi ] \| _{\HS m}),$$
for some constant $\tilde C _m > 0$.
Put $\psi = e ^{- t \varDelta } - \varPi _0 $.
The theorem then follows from the estimates
\begin{align*}
\| \eth_0 (e ^{- t \varDelta } - \varPi _0 ) \| _{\HS m}
= & \| (e ^{- t \varDelta } - \varPi _0 ) \eth _0 \| _{\HS m} \\
\leq & \big( \sum _{0 \leq l \leq m } C ' _{m, l} \| \eth _0 ( e ^{- \frac{t}{2} \varDelta } - \varPi _0 ) \| _{\op l} \big)
\| e ^{- \frac{t}{2} \varDelta } - \varPi _0 \| _{\HS m} \\
\leq & \big( \sum _{0 \leq l \leq m } C ' _{m, l} C ^1 _l \big( \frac{t}{2} \big) ^{- \frac{1}{2} } \big)
C _m \big( \frac{t}{2} \big) ^{- \gamma }, \\
\| [Q , e ^{- t \varDelta } - \varPi _0 ] \| _{\HS m}
\leq & \| [Q + B , e ^{- t \varDelta } - \varPi _0 ] \| _{\HS m}
+ \| [B , e ^{- t \varDelta } - \varPi _0 ] \| _{\HS m} \\
\leq & C ' _m t ^{- \gamma }
+ 2 \big( \sum _{0 \leq l \leq m } C ' _{m, l} \| B \|_{\op l} \big) C _m t ^{- \gamma }.
\end{align*}
Note that we used Lemma \ref{Trick1} for the last inequality.
\end{proof}
\section{Sobolev convergence}
In this section we use the method of \cite{Schick;NonCptTorsionEst} to prove that, under the positivity of the Novikov-Shubin invariant, the $L^2$-analytic torsion form is smooth.
Let $\nabla ^E $ be a flat connection on $E$.
Define the number operators on $\wedge ^\bullet H' \otimes \wedge ^\bullet V' \otimes E $ by
$$N _\Omega |_{\wedge ^q H' \otimes \wedge ^{q'} V' \otimes E } := q ,
\quad N |_{\wedge ^q H' \otimes \wedge ^{q'} V' \otimes E } := q' .$$
In this section,
we consider the rescaled Bismut super-connection \cite[Chapter 9.1]{BGV;Book}
$$ \eth (t) := \frac{t ^{\frac{1}{2}}}{2} t ^{- \frac{N_\Omega }{2} } (d + d ^*) t ^{ \frac{N _\Omega}{2} }
= \frac{1}{2} \big( t ^{\frac{1}{2}} (d _V + d _V ^*)
+ (\nabla ^{E _\flat} + (\nabla ^{E _\flat }) ')
+ t ^{- \frac{1}{2}} (- \Lambda _{\Theta ^*} + \iota _\Theta ) \big).$$
Denote
$$ D _0 := - \frac{1}{2}(d _V - d ^* _V ),
\quad \Omega _t := - \frac{1}{2} (\nabla ^{E _\flat} - (\nabla ^{E _\flat }) ')
- \frac{t ^{-\frac{1}{2}}}{2} (- \Lambda _{\Theta ^*} - \iota _\Theta ),
\quad D (t) := t ^{\frac{1}{2}} D _0 + \Omega _t .$$
The curvature of $\eth (t)$ can be expanded in the form:
$$ \eth (t) ^2 = - D (t) ^2
= t \varDelta + t ^{\frac{1}{2}} \Omega _t D _0 + t ^{\frac{1}{2}} D _0 \Omega _t + \Omega _t ^2 .$$
Hence as a consequence of Duhamel's expansion (cf. \cite{BGV;Book}), we have
\begin{align}
\nonumber
e ^{ - \eth (t) ^2 } = e ^{ D (t) ^2 } = e ^{- t \varDelta }
+ \sum _{n = 1} ^{\dim B} \int _{(r _0 , \cdots , r _n ) \in \Sigma ^n} &
e ^ {- r _0 t \varDelta }
(t ^{\frac{1}{2}} \Omega _t D _0 + t ^{\frac{1}{2}} D _0 \Omega _t + \Omega _t ^2 )
e ^ {- r _1 t \varDelta }
\\ \nonumber
& \cdots (t ^{\frac{1}{2}} \Omega _t D _0 + t ^{\frac{1}{2}} D _0 \Omega _t + \Omega _t ^2 )
e ^{- r _n t \varDelta } d \Sigma ^n,
\end{align}
where $\Sigma ^n := \{(r _0 , r _1 , \cdots , r _n ) \in [0, 1] ^{n + 1} : r _0 + \cdots + r _n = 1 \}$.
\subsection{The large time estimate of the rescaled heat operator}
In this section,
we follow \cite[Section 4]{Schick;NonCptTorsionEst} to estimate the Hilbert-Schmidt norms of $e ^{ - \eth (t) ^2 }$ (see Theorem \ref{Main3} below).
Let $\gamma ' := 1 - (1 + \frac{2 \gamma }{\dim B + 2+2\gamma } ) ^{-1} $,
$\bar r (t) := t ^{- \gamma '}$. Fix $\bar t $ such that $\bar r ( \bar t) < (\dim B + 1 ) ^{-1} $.
One has the following counterparts of \cite[Lemma 4.2]{Schick;NonCptTorsionEst}:
\begin{lem}
For $c = 0, 1, 2$, there exists a constant $C_{m}$ such that
$$ \| (\sqrt {t} \eth _0 ) ^{\frac{c}{2}} e ^{r t (D _0) ^2 } \| _{\op ' m} \leq C _m r ^{- \frac{c}{2}}
\quad \text{for any $t > \bar t$, $0 < r < 1$ (by Lemma \ref{OldLem});}$$
and for any $t > \bar t $ and $\bar r (t) < r < 1$,
\begin{align*}
\| e ^{r t (D _0) ^2 } \| _{\HS m} \leq & C _m (r t) ^{- \gamma} ,& \text{(by Theorem \ref{Main2})} \\
\| (\sqrt {t} \eth _0 ) ^{\frac{c}{2}} e ^{r t (D _0) ^2 } \| _{\HS m}
\leq & C _m r ^{- \frac{c}{2}} (r t) ^{- \gamma}, \text{ if $c = 1, 2$}. & \text{(by Corollary \ref{Main1})}
\end{align*}
\end{lem}
We furthermore observe that the arguments leading to the main result \cite[Theorem 4.1]{Schick;NonCptTorsionEst} still hold if one replaces the operator norm and the
$\|\cdot\|_{\tau}$-norm by $\|\cdot\|_{\op' m}$ and $\|\cdot\|_{\HS m}$ respectively, for any $m$.
The arguments in \cite[Section 4]{Schick;NonCptTorsionEst} are elementary, so we shall only recall some key steps.
First, one splits the domain of integration
$\Sigma ^n = \bigcup _{I \neq \{ 0, \cdots , n \}} \Sigma ^n _{\bar r (t), I} $,
where
$$ \Sigma ^n _{\bar r(t) , I} := \{ (r _0 , \cdots , r _n ) : r _i \leq \bar r (t), \Forall i \in I ,r_{j}\geq \bar{r}(t),\Forall j\notin I\}.$$
Define
\begin{equation}
\label{KDfn}
K (t, n, I, c _0 , \cdots c _n ; a _1 , \cdots a _n )
:= \int _{\Sigma ^n _{\bar r(t) , I}}
(t ^{\frac{1}{2}} D _0 ) ^{c _0} e ^ {- r _0 t \varDelta }
\prod _{i=1} ^n ( \Theta _t ^{a _i } (t ^{\frac{1}{2}} D _0 )^{c _i} e ^ {- r _i t \varDelta }) d \Sigma ^n ,
\end{equation}
for $c _i \in \{ 0, 1, 2 \} $ and $a _j \in \{ 1, 2 \}$.
Then one has
$$ e ^{- \eth (t) ^2}
= e ^{D (t) ^2} = \sum K (t, n, I, c _0 , \cdots c _n ; a _1 , \cdots a _n ) $$
by grouping terms involving $D _0$ together.
We shall consider the kernels
$ K (t, n, I, c _0 , \cdots c _n ; a _1 , \cdots a _n ) (x, y, z)$
of the terms in the summation above.
Consider the special case when $c _i = 0, 1$.
One has the analogue of \cite[Proposition 4.6]{Schick;NonCptTorsionEst}:
\begin{lem}
\label{Schick4.6}
There exists $\varepsilon>0$ such that as $t \to \infty $,
\begin{align*}
K (t , n, I, c _0 , \cdots c _n &, a _1, \cdots , a _n ) (x, y, z) \\
=&
\left\{
\begin{array}{ll}
(\frac{1}{n !} \varPi _0 \Omega ^{a _1} \varPi _0 \cdots \varPi _0) (x, y, z)+ O (t ^{- \varepsilon })
& \text{ if } I = \emptyset , c _0 , \cdots , c _n = 0 \\
O (t ^{- \varepsilon }) & \text{ otherwise}
\end{array}
\right.
\end{align*}
in the $\| \cdot \| _{\HS m}$-norm.
\end{lem}
\begin{proof}
We first consider the case $I=\emptyset$. Suppose furthermore $c_{q}=1$ for some $q$.
By Corollary \ref{Main1}, the $\| \cdot \| _{\HS m}$-norm of the integrand on the r.h.s. of \eqref{KDfn} is bounded by
$$\|(t^{{1\over 2}}D_{0})^{c_{0}}e^{-r_{0}t\Delta}\|_{\op' m}\cdots \|\Omega_{t}^{a_{q}}\|_{\op' m}\|(t^{1\over 2}D_{0})e^{-r_{q}t\Delta}\|_{\HS m}\cdots \|(t^{1\over 2}D_{0})^{c_{n}}e^{-r_{n}t\Delta}\|_{\op' m}$$
$$\leq C_{m}'r_0^{-{{c_0}\over 2}}\cdots r_q^{-{{c_q}\over 2}}(r_q t)^{-\gamma}\cdots r_n^{-{{c_n}\over 2}}$$
$$\leq C_{m}'\bar{r}(t)^{-{n\over 2}-\gamma}t^{-\gamma}.$$
Integrating, we have the estimate
$$\|K(t,n,c_0,\cdots,c_n;a_1,\cdots,a_n) (x, y, z)\|_{\HS m}
\leq C_{m}'t^{-\gamma+\gamma'({{n\over 2}}+\gamma)}\int d\Sigma^n,$$
which is $O(t^{-\varepsilon})$ with $\varepsilon=\gamma(1-{{{\rm dim}B+2\gamma}\over{{\rm dim}B}+2+2\gamma})$.
Next, suppose $I=\emptyset$ and $c_i=0$ for all $i$. Write $e^{-r_i t\Delta}=(e^{-r_i t\Delta}-\Pi_0)+\Pi_0$ for each $i$ and split the integrand
$$(e^{-r_0 t\Delta}\Omega_{t}^{a_1}e^{-r_1 t\Delta}\cdots e^{-r_n t\Delta} ) (x, y, z)$$
into $2^{n+1}$ terms. If a term contains a factor $e^{-r_i t\Delta}-\Pi_0$, arguments similar to those above show that it is $O(t^{-\gamma})$. Hence the only term that does not converge to $0$ is
$$(\varPi_0 \Omega^{a_1} \varPi_0\cdots \varPi_0 )(x, y, z).$$
Since the volume of $\Sigma^{n}_{\bar{r}(t),I}$ converges to $1\over{n!}$ as $t\to \infty$, the claim follows.
It remains to consider the case when $I$ is non-empty. Write $I=\{i_1,\cdots,i_s\}$ and $\{0,\cdots,n\}\setminus I=:\{k_1,\cdots,k_{s'}\}\neq \emptyset$ (for $t$ sufficiently large one has $(n+1)\bar r(t)<1$, so $\Sigma^n_{\bar r(t),\{0,\cdots,n\}}$ is empty and we may assume $I\neq \{0,\cdots,n\}$). Suppose $c_q=1$ for some $q\notin I$. Then we take the $\|\cdot\|_{\HS m}$-norm of the $(t^{1\over 2}D_0)e^{-r_q t\Delta}$ factor, and estimate
\begin{align*}
\|K(t,n,c_0,\cdots,c_n;a_1,\cdots,a_n) (x, y, z)\|_{\HS m} & \\
\leq \int_{0}^{\bar{r}(t)}\cdots \int_{0}^{\bar{r}(t)}
\Big(\int_{\{(r_{k_1},\cdots,r_{k_s'}):(r_0,\cdots,r_n)\in\Sigma^{n}_{\bar{r}(t),I}\}} & C'_m r_0^{-{{c_0}\over 2}}\cdots r_q ^{-{{c_q}\over 2}}(r_q t)^{-\gamma}\cdots r_n^{-{{c_n}\over 2}} \\
& d(r_{k_1}\cdots r_{k_{s'}})\Big) dr_{i_1}\cdots dr_{i_s}.
\end{align*}
As in the $I=\emptyset$ case, the integral over $\{(r_{k_1},\cdots,r_{k_{s'}}):(r_0,\cdots,r_n)\in \Sigma^{n}_{\bar{r}(t),I}\}$ is $O(t^{-\varepsilon})$, while $\int_{0}^{\bar{r}(t)}r_i^{-{{c_i}\over 2}}dr_i=O(t^{-\gamma'(1-{{c_i}\over 2})})$. Again the claim is verified.
Finally if $c_i=0$ for all $i\in I$, then
\begin{align*}
\|K(t,n,c_0,\cdots,c_n;a_1,\cdots,a_n) (x, y, z)\|_{\HS m} & \\
\leq \int_{0}^{\bar{r}(t)}\cdots \int_{0}^{\bar{r}(t)}\Big(\int_{\{(r_{k_1},\cdots,r_{k_s'}):(r_0,\cdots,r_n)\in\Sigma^{n}_{\bar{r}(t),I}\}} & C''_m r_0^{-{{c_{i_1}}\over 2}}\cdots r_{n}^{-{{c_{i_s}}\over 2}} \\
& d(r_{k_1}\cdots r_{k_{s'}}) \Big) dr_{i_1}\cdots dr_{i_s} \\
&= O(t^{-\gamma'(1-{{c_i}\over 2})}). \qedhere
\end{align*}
\end{proof}
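We record the elementary simplex-volume computation used in the proof above. Parametrizing $\Sigma ^n $ by $(r _1 , \cdots , r _n )$ with $r _0 = 1 - r _1 - \cdots - r _n $, one has
$$ \int _{\Sigma ^n} d \Sigma ^n
= \int _{\{ r _i \geq 0 , \ r _1 + \cdots + r _n \leq 1 \}} d r _1 \cdots d r _n = \frac{1}{n !} ,$$
and since the complement of $\Sigma ^n _{\bar r (t), \emptyset } = \{ r _j \geq \bar r (t) \ \Forall j \} $ in $\Sigma ^n $ has volume $O (\bar r (t))$, the volume of $\Sigma ^n _{\bar r (t), \emptyset }$ indeed converges to $\frac{1}{n !}$ as $t \to \infty $.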
One then turns to the case when $c _i = 2$ for some $i$.
If $I$ and $J$ are disjoint subsets of $\{ 0, \cdots , n \}$ with $I = \{ i _1, \cdots , i _r \}$,
and $\{ 0, \cdots , n \} \setminus (I \bigcup J) =: \{ k _0, \cdots , k _q \} \neq \emptyset $,
denote by
$$ \Sigma ^n _{\bar r (t), I, J}
:= \{ (r _0 , \cdots , r _n) \in \Sigma ^n _{\bar r (t), I} : r _j = \bar r (t), \text{ whenever } j \in J \} ,$$
and define
\begin{align*}
K (t, n, I, J, c _0 , \cdots c _n &; a _1 , \cdots a _n ) \\
:= \int _0 ^{\bar r (t)} \cdots \int _0 ^{\bar r (t)} &
\int _{ \{ (r _{k _0}, \cdots r _{k _q}) : (r _0 , \cdots , r _n) \in \Sigma ^n _{\bar r (t), I} \}} \\
(t ^{\frac{1}{2}} D _0 ) ^{c _0} & e ^ {- r _0 t \varDelta }
\prod _{i=1} ^n ( \Theta _t ^{a _i } (t ^{\frac{1}{2}} D _0 )^{c _i} e ^ {- r _i t \varDelta })
\Big| _{\Sigma ^n _{\bar r (t), I, J}}
d ^q (r _{k _0}, \cdots r _{k _q}) d r _{i _1} \cdots d r _{i _r} .
\end{align*}
Using integration by parts, one gets \cite[Equation (4.17)]{Schick;NonCptTorsionEst},
\begin{align}
\label{IntPart}
\nonumber
K(t, &n, I \cup \{ i_p \} , J; \cdots, 2, \cdots , c _{k _0}, \cdots ; \cdots , a _{i _p}, a _{i _{p+1}}, \cdots ) \\
=& \left\{
\begin{array}{ll}
K(t, n, I, J \cup \{ i _p \} ; \cdots, 0, \cdots , c _{k _0}, \cdots ; \cdots , a _{i_p} , a _{i _{p+1}}, \cdots ) & \\
- K(t, n - 1, I, J; \cdots, \cdots, c _{k _0}, \cdots ; \cdots , a _{i _p} + a _{i _{p+1}}, \cdots )
& q > 0 ,\\
+ K(t, n, I \cup \{ i _p \} , J \cup \{ k _0 \} ; \cdots , 0, \cdots , c _{k _0}, \cdots ;
\cdots , a _{i _p} , a _{i _{p +1}}, \cdots ) \\
+ K(t, n, I \cup \{ i _p \} , J; \cdots , 0, \cdots, c _{k _0} + 2, \cdots ;
\cdots, a _{i _p} , a _{i _{p +1}}, \cdots ) \\
K(t, n, I, J \cup \{ i _p \} ; \cdots, 0, \cdots , c _{k _0}, \cdots ; \cdots , a _{i_p} , a _{i _{p+1}}, \cdots ) & \\
- K(t, n - 1, I, J; \cdots, \cdots, c _{k _0}, \cdots ; \cdots , a _{i _p} + a _{i _{p+1}}, \cdots )
& q = 0. \\
+ K(t, n, I \cup \{ i _p \} , J; \cdots , 0, \cdots, c _{k _0} + 2, \cdots ;
\cdots, a _{i _p} , a _{i _{p +1}}, \cdots )
\end{array}
\right.
\end{align}
We remark that the proof of \cite[Equation (4.17)]{Schick;NonCptTorsionEst} does not involve any norm,
therefore we omit the details here.
Using Equation \eqref{IntPart} repeatedly, one eliminates all terms with $c _i = 2$.
On the other hand one has the following straightforward generalization of Lemma \ref{Schick4.6} (compare with \cite[Proposition 4.7]{Schick;NonCptTorsionEst}):
\begin{lem}
Suppose $c _i = 0, 1$. As $t \to \infty $,
\begin{align*}
K (t , n, I, J &, c _0, \cdots c _n ; a _1 , \cdots , a _n ) (x, y, z) \\
=&
\left\{
\begin{array}{ll}
(\frac{1}{(n - |J|)!} \varPi _0 \Omega ^{a _1} \varPi _0 \cdots \varPi _0 )(x, y, z) + O (t ^{- \gamma '})
& \text{ if } I = \emptyset , c _0 , \cdots , c _n = 0 \\
O (t ^{- \gamma' }) & \text{ otherwise,}
\end{array}
\right.
\end{align*}
for some $\gamma ' > 0$, in the $\| \cdot \| _{\HS m}$-norm.
\end{lem}
Thus the term $ K (t, n, I, c _0 , \cdots c _n ; a _1 , \cdots a _n ) $ converges to $0$ unless
$$ c _i = 0 \text{ whenever } i \in I , \quad c _i = 2 \text { whenever } i \not \in I. $$
Then one follows exactly as \cite[Section 4.5]{Schick;NonCptTorsionEst} to compute the limit,
and concludes with the following analogue of \cite[Theorem 4.1]{Schick;NonCptTorsionEst}:
\begin{thm}
\label{Main3}
For $k = 0, 1, 2$ and any $m \in \mathbb N $,
$$ \lim _{t \to \infty} D (t) ^k e ^{- \eth (t) ^2 } (x, y, z)
= \varPi _0 (\Omega \varPi _0 ) ^k
e ^{ (\Omega \varPi _0 ) ^2 } (x, y, z)$$
in the $\| \cdot \| _{\HS m} $-norm,
where $\Omega := - \frac{\nabla ^{E _\flat} - (\nabla ^{E _\flat }) '}{2}$.
Moreover, there exists $\varepsilon'>0$ such that as $t \to \infty $,
$$ \big\| (D (t) ^k e ^{- \eth (t) ^2 }
- \varPi _0 (\Omega \varPi _0 ) ^k
e ^{ (\Omega \varPi _0 ) ^2 } )(x, y, z) \big\| _{\HS m}
= O (t ^{- \varepsilon '} ).$$
\end{thm}
\subsection{Application: the $L ^2 $-analytic torsion form}
Our main application of Theorem \ref{Main3}
is in establishing the smoothness and transgression formula of the $L ^2 $-analytic torsion form.
Here, we briefly recall the definitions.
On $\wedge ^\bullet T ^* M \otimes E
\cong \wedge ^\bullet H' \otimes \wedge ^\bullet V' \otimes E $,
define $N_\Omega , N $ to be the number operators of
$\wedge ^\bullet H' \cong \pi ^{-1} (\wedge ^\bullet T^* B )$ and $\wedge ^\bullet V'$ respectively.
Define
$$ F ^\wedge (t)
:= (2 \pi \sqrt{-1}) ^{- \frac{N _\Omega }{2}}
\str _\Psi (2 ^{-1} N (1 + 2 D (t) ^2 ) e ^{- \eth (t) ^2 }).$$
Then under the positivity of the Novikov-Shubin invariant, we have the following well-defined $L^{2}$-analytic torsion form.
\begin{dfn}\label{TorsionDfn}(\cite{Schick;NonCptTorsionEst})
$$ \tau := \int _0 ^\infty \Big\{ - F ^\wedge (t)
+ \frac{\str _\Psi (N \varPi _0 )}{2}
+ \big(\frac{\dim (Z) \rk (E) \str _\Psi (\varPi _0 )}{4} - \frac{\str _\Psi (N \varPi _0 )}{2} \big)
(1 - 2 t) e ^{-t} \Big\} \frac {d t}{t} .$$
\end{dfn}
In \cite{Schick;NonCptTorsionEst} it is only shown that the form $\tau$ is continuous. Next we show that $\tau$ is in fact smooth.
\begin{thm}
\label{MainThm}
The form $\tau$ is smooth, i.e. $\tau \in \Gamma ^\infty (\wedge ^\bullet T ^* B)$.
\end{thm}
\begin{proof}
By \cite[Proposition 9.24]{BGV;Book}, the derivatives of the $t$-integrand are bounded as $t \to 0$.
It follows that its integral over $[0, 1 ]$ is smooth.
We turn to study the large time behavior.
Consider $\str ( 2 ^{-1} N (e ^{- \eth (t) ^2} - \varPi _0 ))$.
Using the semi-group property, we can write
$$ e ^{- \eth (t) ^2} = 2 ^{- \frac{N _\Omega}{2}} e ^{- \eth (\frac{t}{2}) ^2}
e ^{- \eth (\frac{t}{2}) ^2} 2 ^{\frac{N _ \Omega}{2}} .$$
Also, since
$\str (N \varPi _0 (\Omega \varPi _0 ) ^{2 j} )
= \str ([N \varPi _0 (\Omega \varPi _0 ) , \varPi _0 (\Omega \varPi _0 ) ^{2 j - 1} ]) = 0$
for any $j \geq 1$ one has
$$ \str (N \varPi _0 )
= \str (N\varPi _0 e ^{(\Omega \varPi _0 ) ^2 })
= 2 ^{- \frac{N _\Omega}{2}}
\str (N \varPi _0 e ^{(\Omega \varPi _0 ) ^2 } \varPi _0 e ^{(\Omega \varPi _0 ) ^2 }) .$$
Therefore
\begin{align*}
\str ( 2 ^{-1} N (e ^{- \eth (t) ^2} - \varPi _0 ))
=& 2 ^{- \frac{N _\Omega}{2}} \str ( 2 ^{-1} N ( e ^{- \eth (\frac{t}{2}) ^2} e ^{- \eth (\frac{t}{2}) ^2}
- \varPi _0 e ^{(\Omega \varPi _0 ) ^2 } \varPi _0 e ^{(\Omega \varPi _0 ) ^2 })) \\
=& 2 ^{- \frac{N_\Omega}{2}} \str \big( 2 ^{-1} Ne ^{- \eth (\frac{t}{2}) ^2}
( e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2}) \big) \\
&+ 2 ^{- \frac{N _\Omega}{2}} \str \big( 2 ^{-1} N
( e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2 }) \varPi _0 e ^{(\Omega \varPi _0 ) ^2 } \big).
\end{align*}
Now consider the $L ^2 (B)$-norm of
$ \str _\Psi \big( 2 ^{-1} N e ^{- \eth (\frac{t}{2}) ^2}
( e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2}) \big)$.
To shorten notations, denote
$G := 2 ^{-1} N e ^{- \eth (\frac{t}{2}) ^2}
( e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2}) $.
Writing $G$ as a convolution product, there exists a constant $C_0>0$ such that
\begin{align*}
\int _{B} \Big| \int _{Z _x} & \chi (x, z) \str (G (x, z, z)) \mu _x (z) \Big|^2 \mu _B (x) \\
= \int _B & \Big| \int _{Z _x} \chi \str \Big( \frac{N }{2} \int _{y \in Z _x}
e ^{- \eth (\frac{t}{2}) ^2} (x, z, y)
( e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2}) (x, y, z) \mu _x (y) \Big) \mu _x (z)
\Big| ^2 \mu _B (x)\\
\leq C _0 & \int _B \Big( \int _{Z _x} \chi \int _{y \in Z _x}
\big| e ^{- \eth (\frac{t}{2}) ^2} \big| (x, z, y)
\big| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} \big| (x, y, z) \mu _x (y) \mu _x (z)
\Big) ^2 \mu _B (x)\\
\leq & C _0 \| e ^{- \eth (\frac{t}{2}) ^2} \| ^2 _{\HS 0 }
\| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} \| ^2 _{\HS 0},
\end{align*}
where we used the Cauchy-Schwarz inequality three times. Since
$\| e ^{- \eth (\frac{t}{2}) ^2} \| _{\HS 0 } $ is bounded for $t$ large (by the triangle inequality),
the expression above is $O (t ^{- \gamma '})$.
We turn to estimate its derivatives.
For any vector field $X$ on $B$,
\begin{align*}
\nabla ^{T B} _X \str _\Psi (G)
=& \int (L _{X ^H } \chi (x, z)) \str (G (x, z, z)) \mu _x (z) \\
&+ \int \chi (x, z) ( L ^{\nabla ^{\pi ^{-1} T B }} _{X ^H } \str (G (x, z, z))) \mu _x (z) \\
&+ \int \chi (x, z) \str (G (x, z, z)) (L _{X ^H }\mu _x (z)) .
\end{align*}
Differentiating under the integral sign is valid because we know a priori that the integrands are all $L ^1 $.
Since $L _{X ^H }\mu _x (z) $ equals $\mu _x (z) $ multiplied by a bounded function,
it follows that the last term $\int \chi (x, z) \str (G (x, z, z)) (L _{X ^H }\mu _x (z)) $ is $O (t ^{- \gamma ' })$.
For the first term, we write
$ L _{X ^H } \chi (x, z) = \sum _{g \in G } (g ^* \chi) (x, z) (L _{X ^H } \chi ) (x, z).$
The sum is finite because $L _{X ^H } \chi $ is compactly supported.
By $G$-invariance,
$$ \int (g ^* \chi ) (x, z) \str (G ( x, z, z)) \mu _x (z)
= \int \chi (x, z) \str (G (x, z, z)) \mu _x (z) .$$
Since $(L _{X ^H } \chi ) (x, z)$ is bounded, it follows that
$ \int (L _{X ^H } \chi ) (x, z) \str (G (x, z, z)) \mu _x (z) $ is also $O (t ^{- \gamma ' })$.
As for the second term,
we differentiate under the integral sign and then use the Leibniz rule to get that there exists a constant $C_1>0$ such that
\begin{align*}
| L ^{\nabla ^{\pi ^{-1} T B }} _{X ^H } & \str (G (x, z, z))| \\
\leq & C _1 \Big( \int _{Z _x}
\big| L ^{\nabla ^{\wedge ^\bullet H' \otimes \wedge ^\bullet V' \otimes \hat E }} _{X ^H }
e ^{- \eth (\frac{t}{2}) ^2} (x, z, y) \big|
\big| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} (x, y, z) \big| \mu _x (y) \\
&+ \int _{Z _x}
\big| e ^{- \eth (\frac{t}{2}) ^2} (x, z, y) \big|
\big| L ^{\nabla ^{\wedge ^\bullet H' \otimes \wedge ^\bullet V' \otimes \hat E }} _{X ^H }
(e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2}) (x, y, z) \big| \mu _x (y) \\
&+ \int _{Z _x}
\big| e ^{- \eth (\frac{t}{2}) ^2} (x, z, y) \big|
\big| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} (x, y, z) \big|
\sup | L _{X ^H } \mu | \mu _x (y) \Big), \\
\int _B \Big| \int _{Z _x} & \chi (x, z)
\big( L ^{\nabla ^{\pi ^{-1} T B }} _{X ^H } \str (G (x, z, z)) \big) \mu _x (z) \Big|^2 \mu _B (x) \\
\leq & C _1 \big( \| e ^{- \eth (\frac{t}{2}) ^2} \|^2 _{\HS 1 }
\| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} \|^2 _{\HS 0} \\
&+ \| e ^{- \eth (\frac{t}{2}) ^2} \|^2 _{\HS 0 }
\| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} \|^2 _{\HS 1} \\
&+ \sup | L _{X ^H } \mu | \| e ^{- \eth (\frac{t}{2}) ^2} \|^2 _{\HS 1 }
\| e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2} \|^2 _{\HS 0} \big) \\
=& O (t ^{- \gamma ' }).
\end{align*}
Clearly the above arguments can be repeated and one concludes that all Sobolev norms of
$\str _\Psi (G) $ are $O (t ^{- \gamma '})$.
By exactly the same arguments, we have as $t \to \infty $,
$$ \str _\Psi \big( 2 ^{-1} N
( e ^{- \eth (\frac{t}{2}) ^2} - \varPi _0 e ^{(\Omega \varPi _0 ) ^2 }) \varPi _0 e ^{(\Omega \varPi _0 ) ^2 } \big)
= O (t ^{- \gamma ' }) ,$$
in all Sobolev norms.
As for $\str _\Psi ( 2 ^{-1} N ( D (t) ^2 e ^{- \eth (t) ^2} )) $,
one has $D (t) ^2 = 2 (2 ^{- \frac{ N _\Omega }{2}} D (\frac{t}{2}) ^2 2 ^{\frac{N _\Omega }{2}})$.
Therefore
\begin{align*}
\str _\Psi \big(\frac {N }{2} D (t) ^2 e ^{- \eth (t) ^2} \big)
=& 2 ^{- \frac{ N _\Omega }{2}}
\str _\Psi \big( N( D (\frac{t}{2}) ^2 e ^{- \eth (\frac{t}{2}) ^2} e ^{- \eth (\frac{t}{2}) ^2}
- \varPi _0 (\Omega \varPi _0 ) ^2 e ^{(\Omega \varPi _0 )^2} e ^{(\Omega \varPi _0 )^2} ) \big) \\
=& 2 ^{- \frac{ N_\Omega }{2}} \str _\Psi \big( N( D (\frac{t}{2}) ^2 e ^{- \eth (\frac{t}{2}) ^2}
- \varPi _0 (\Omega \varPi _0 ) ^2 e ^{(\Omega \varPi _0 )^2} ) e ^{- \eth (\frac{t}{2}) ^2} \big) \\
&- 2 ^{- \frac{ N_\Omega }{2}} \str _\Psi \big( N\varPi _0 (\Omega \varPi _0 ) ^2 e ^{(\Omega \varPi _0 )^2}
(e ^{- \eth (\frac{t}{2}) ^2} - e ^{(\Omega \varPi _0 )^2} ) \big),
\end{align*}
which is also $ O (t ^{- \gamma ' })$ as $t \to \infty $ by similar arguments.
By the Sobolev embedding theorem (for the compact manifold $B$),
it follows that
$$ - F ^\wedge (t)
+ \frac{\str _\Psi (N \varPi _0 )}{2}
+ \big(\frac{\dim (Z) \rk (E) \str _\Psi (\varPi _0 )}{4} - \frac{\str _\Psi (N\varPi _0 )}{2} \big)
(1 - 2 t) e ^{-t} $$
and all its derivatives are $ O (t ^{- \gamma ' })$ uniformly.
Finally, since all derivatives of the $t$-integrand in Definition \ref{TorsionDfn} are $L ^1$,
the derivatives of $\tau $ exist and may be computed by differentiating under the $t$-integral sign.
Hence we conclude that the torsion $\tau $ is smooth.
\end{proof}
\begin{rem}
If $Z$ is $L^2$-acyclic and of determinant class (cf. \cite[Def. 6.3]{Schick;NonCptTorsionEst}), the analogue of Remark \ref{SquareNS} reads
$$ \int _0 ^\infty \| e ^{- t \varDelta } \| ^2 _{\HS 0} \frac{d t}{t}
= \int _0 ^\infty \| e ^{- t \varDelta } \| _\tau \frac{d t}{t} < \infty $$
(note that $\varPi _0 = 0 $ by hypothesis).
Unlike in the case of a positive Novikov-Shubin invariant,
the determinant class condition does not give control of the heat operator in the norm $\| \cdot \| _{\HS 0 }$.
\end{rem}
Let $f (x) = \sum _j a _j x ^ j$ be a power series. For clarity, let $h $ be the metric on
$\wedge ^ \bullet V \otimes E $, and denote
\begin{align*}
f ( \nabla ^{\wedge ^\bullet V' \otimes E } , h)
&:= \str \Big( \sum _j a _j \big( \frac{1}{2}
(\nabla ^{\wedge ^\bullet V' \otimes E } - (\nabla ^{\wedge ^\bullet V' \otimes E })^*) \big)^{j} \Big)
\in \Gamma ^\infty (\wedge ^\bullet T ^* M ), \\
f ( \nabla ^{\wedge ^\bullet V' \otimes E } , h) _{H ^\bullet (Z , E)}
&:= \str _\Psi \Big( \sum _j a _j \big( \frac{1}{2} \varPi _0
(\nabla ^{\wedge ^\bullet V' _\flat \otimes E _\flat }
- (\nabla ^{\wedge ^\bullet V _\flat' \otimes E _\flat })^*) \varPi _0 \big)^{j} \Big)
\in \Gamma ^\infty (\wedge ^\bullet T ^* B ).
\end{align*}
Note that the summations are only up to $\dim M$.
Let $TZ$ be the vertical tangent bundle of the fiber bundle $M\to B$ and recall that we have chosen a splitting of $TM$ and defined a Riemannian metric on $TM$. Let $P^{TZ}$ denote the projection from $TM$ to $TZ$. Let $\nabla^{TM}$ be the corresponding Levi-Civita connection on $TM$ and define $\nabla^{TZ}=P^{TZ}\nabla^{TM}P^{TZ}$, a connection on $TZ$. The restriction of $\nabla^{TZ}$ to a fiber coincides with the Levi-Civita connection of the fiber. Let $R^{TZ}$ be the curvature of $\nabla^{TZ}$.
For $N$ even, let ${\rm Pf}: \mathfrak{so}(N)\to \mathbb{R}$ denote the Pfaffian and put
\begin{align}
e\left(TZ,\nabla^{TZ}\right):=\left\{\begin{matrix}{\rm Pf} \left[{{R^{TZ}}\over{2\pi}}\right]& {\rm if}\ {\rm dim}(Z)\ {\rm is}\ {\rm even},\\0& {\rm if}\ {\rm dim}(Z)\ {\rm is}\ {\rm odd}.\end{matrix}\right.
\end{align}
A classical argument \cite{Bismut;AnaTorsion,Zhang;EtaTorsion,Schick;NonCptTorsionEst} then gives:
\begin{cor}
If $\dim Z = 2 n$ is even
one has the transgression formula
$$ d \tau (x)
= \int _{Z _x} \chi (x, z)
e (T Z,\nabla^{TZ} ) f ( \nabla ^{\wedge ^\bullet V' \otimes E } )
- f ( \nabla ^{\wedge ^\bullet V' \otimes E } ) _{H ^\bullet (Z , E)},$$
with $f (x) = x e ^{ x ^2 } $.
\end{cor}
Now let $ h _l $ be a family of $G$-invariant metrics on $\wedge ^\bullet V \otimes E $, $l \in [0, 1]$.
Define
$$ \tilde f (\nabla ^{\wedge ^\bullet V' \otimes E } , h _l)
:= \int _0 ^1 (2 \pi \sqrt{-1} ) ^{\frac{N_\Omega }{2}}
\str \Big( (h _l) ^{-1} \frac{d h _l}{d l} f ' ( \nabla ^{\wedge ^\bullet V' \otimes E } , h_l ) \Big) d l ,$$
and similarly for $\tilde f (\nabla ^{\wedge ^\bullet V' \otimes E } , h _l) _{H ^\bullet (Z , E )} $.
Note that $f ' ( \nabla ^{\wedge ^\bullet V' \otimes E } , h _l)$ uses the adjoint connection with respect to $h _l$.
Let $\widehat{e}\left(TZ,\nabla^{TZ,0},\nabla^{TZ,1}\right)\in Q^{M}/Q^{M,0}$ (cf. \cite{Bismut;AnaTorsion}) be the secondary class associated to the Euler class. Its representatives are forms of degree ${\rm dim}(Z)-1$ such that
\begin{align}
d\widehat{e}\left(TZ,\nabla^{TZ,0},\nabla^{TZ,1}\right)=e\left(TZ,\nabla^{TZ,1}\right)-e\left(TZ,\nabla^{TZ,0}\right).
\end{align}
If ${\rm dim}(Z)$ is odd, we take $\widehat{e}\left(TZ,\nabla^{TZ,0},\nabla^{TZ,1}\right)$ to be zero.
One has an anomaly formula \cite[Theorem 3.24]{Bismut;AnaTorsion}.
\begin{lem}
Modulo exact forms
\begin{align}
\label{Anomaly}
\tau _1 - \tau _0
=& \int_{ Z _x } \chi(x, z) \widehat{e} (TZ,\nabla^{TZ,0},\nabla^{TZ,1})
f (\nabla ^{\wedge ^\bullet V' \otimes E } , h _0) \\ \nonumber
&+ \int_{Z _x} \chi(x, z) e (T Z , \nabla^{TZ,1}) \tilde f (\nabla ^{\wedge ^\bullet V' \otimes E } , h _l)
- \tilde f (\nabla ^{\wedge ^\bullet V' _\flat \otimes E _\flat } , h _l) _{H ^\bullet (Z , E )}.
\end{align}
\end{lem}
In particular, the degree-0 part of Equation \eqref{Anomaly} is the anomaly formula for the $L ^2$-Ray-Singer
analytic torsion, which is a special case of \cite[Theorem 3.4]{Zhang;CoveringCheegerMuller}.
\begin{rem}
Let $Z _0 \to M _0 \to B$ be a fiber bundle with compact fiber $Z _0 $,
and let $Z \to M \to B$ be the normal covering of the fiber bundle $Z _0 \to M _0 \to B$.
Then one can define the Bismut-Lott and the $L ^2$-analytic torsion forms
$\tau _{M _0 \to B }, \tau _{M \to B } \in \Gamma ^\infty (\wedge ^\bullet T^* B)$,
and one has the respective transgression formulas
\begin{align*}
d \tau _{M _0 \to B } &= \int _{\pi _0 ^{-1} (x)}
e (T Z _0 ,\nabla^{TZ_{0}}) f ( \nabla ^{\wedge ^\bullet V' _0 \otimes E _0 } )
- f ( \nabla ^{\wedge ^\bullet V' _0 \otimes E _0 } ) _{H ^\bullet (Z _0, E _0)} \\
d \tau _{M \to B } &= \int _{Z _x} \chi (x, z)
e (T Z ,\nabla^{TZ}) f ( \nabla ^{\wedge ^\bullet V' \otimes E } )
- f ( \nabla ^{\wedge ^\bullet V' \otimes E } ) _{H ^\bullet (Z , E)}.
\end{align*}
Suppose further that the de Rham cohomologies are trivial:
$$H ^\bullet (Z _0 , E|_{Z _0}) = H ^\bullet_{L ^ 2} (Z , E| _Z )= \{ 0 \}. $$
Then $d ( \tau _{M \to B } - \tau _{M _0 \to B } ) = 0 $.
Hence $\tau _{M \to B } - \tau _{M _0 \to B }$ defines a class in the de Rham cohomology of $B$.
We remark that this form was also mentioned in \cite[Remark 7.5]{Schick;NonCptTorsionEst}
as a weakly closed form.
\end{rem}
\begin{rem}
In our preprint \cite{SoSu}, we extended the methods of this paper to the case when $B/G$ is not a manifold. Regarding $B/G$ as a non-commutative space, we defined in \cite{SoSu} a non-commutative analytic torsion form and obtained a non-commutative Riemann-Roch-Grothendieck theorem.
\end{rem}
\end{document} | math | 92,373 |
\begin{document}
\title[M. Nahon]{Existence and regularity of optimal shapes for spectral functionals with Robin boundary conditions}
\author[M. Nahon]{Mickaël Nahon}
\address[Mickaël Nahon]{Univ. Savoie Mont Blanc, CNRS, LAMA \\ 73000 Chamb\'ery, France}
\email{ [email protected]}
\keywords{Free Discontinuity, Spectral optimization, Robin Laplacian, Robin boundary conditions}
\subjclass[2020]{ 35P15, 49Q10. }
\maketitle
\begin{abstract}
We establish the existence and some qualitative properties of open sets that minimize functionals of the form $ F(\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta))$ under a measure constraint on $\Om$, where $\lambda_i(\Om;\beta)$ designates the $i$-th eigenvalue of the Laplace operator on $\Om$ with Robin boundary conditions of parameter $\beta>0$. Moreover, we show that minimizers of $\lambda_k(\Om;\beta)$ for $k\geq 2$ satisfy the conjectured equality $\lambda_k(\Om;\beta)=\lambda_{k-1}(\Om;\beta)$ in dimension three and higher.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $\Om$ be a bounded Lipschitz domain in $\Rn$, $\beta>0$ a parameter that is fixed throughout the paper, and $f\in L^2(\Om)$. The Poisson equation with Robin boundary conditions is
\[\begin{cases}-\Delta u=f& \text{ in }\Om,\\ \partial_\nu u+\beta u=0 & \text{ on }\partial\Om,\end{cases}\]
where $\partial_\nu$ is the outward normal derivative, which may only have a meaning in the weak sense that for all $v\in H^1(\Om)$,
\[\int_{\Om}\nabla u\cdot\nabla v\mathrm{d}\Ln+\int_{\partial\Om}\beta u v\mathrm{d}\Hs=\int_{\Om}fv\mathrm{d}\Ln.\]
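For a smooth solution $u$, this weak formulation follows from integration by parts; we record the standard computation, which the reader may wish to keep in mind.

```latex
\[\int_{\Om}fv\,\mathrm{d}\Ln
=-\int_{\Om}\Delta u\, v\,\mathrm{d}\Ln
=\int_{\Om}\nabla u\cdot\nabla v\,\mathrm{d}\Ln
-\int_{\partial\Om}\partial_\nu u\, v\,\mathrm{d}\Hs,\]
```

so substituting the boundary condition $\partial_\nu u=-\beta u$ in the last term yields the identity above.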
This equation (and in particular its boundary conditions) has several interpretations: we may see the solution $u$ as the temperature obtained in a homogeneous solid $\Om$ with the volumetric heat source $f$, and an insulator on the boundary (more precisely, a width $\beta^{-1}\epsilon$ of insulator of conductivity $\epsilon$, in the limit $\epsilon\rightarrow 0$) that separates the solid $\Om$ from a thermostat.\\
Another interpretation is to see $u$ as the vertical displacement of a membrane with shape $\Om$ on which we apply a volumetric normal force $f$, the membrane being attached along its boundary by elastic springs of stiffness proportional to $\beta$.\\
This equation is associated with a sequence of eigenvalues
\[0<\lambda_1(\Om;\beta)\leq \lambda_2(\Om;\beta)\leq \hdots\rightarrow +\infty,\]
with eigenfunctions $u_k(\Om;\beta)$ satisfying
\[\begin{cases}\Delta u_k(\Om;\beta)+\lambda_{k}(\Om;\beta)u_k(\Om;\beta)=0& \text{ in }\Om,\\ \partial_\nu u_k(\Om;\beta)+\beta u_k(\Om;\beta)=0 & \text{ on }\partial\Om.\end{cases}\]
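For a bounded Lipschitz domain, these eigenvalues admit the standard Courant-Fischer min-max characterization; this is the formula that Definition \ref{def_relaxed} below extends to general open sets.

```latex
\[\lambda_k(\Om;\beta)
=\min_{\substack{V\subset H^1(\Om)\\ \dim(V)=k}}\ \max_{v\in V\setminus\{0\}}
\frac{\int_{\Om}|\nabla v|^2\,\mathrm{d}\Ln+\beta\int_{\partial\Om}v^2\,\mathrm{d}\Hs}
{\int_{\Om}v^2\,\mathrm{d}\Ln}.\]
```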
The quantities $(\lambda_k(\Om;\beta))_k$ may be extended to any open set $\Om$ in a natural way, see Section 2 for more details.\bigbreak
In this paper, we study some shape optimization problems involving the eigenvalues $(\lambda_k(\Om;\beta))_k$ with measure constraint on general open sets. In particular, we prove that when $F(\lambda_1,\hdots,\lambda_k)$ is a function with positive partial derivatives in each $\lambda_i$ (such as $F(\lambda_1,\hdots,\lambda_k)=\lambda_1+\hdots+\lambda_k$), then for any $m,\beta>0$ the optimization problem
\[\min\left\{F\left(\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta)\right),\ \Omega\subset\Rn\text{ open such that }|\Om|=m\right\}\]
has a solution. Moreover the topological boundary of an optimal set is rectifiable, Ahlfors-regular, with finite $\Hs$-measure.
For functionals of the form $F(\lambda_1,\hdots,\lambda_k)=\lambda_k$, while minimizers are only known to exist in a relaxed $\SBV$ setting (detailed in the second section), we show that any $\SBV$ minimizer satisfies
\[\lambda_k(\Om;\beta)=\lambda_{k-1}(\Om;\beta)\]
in any dimension $n\geq 3$.
\subsection{State of the art}
The link between the eigenvalues of the Laplace operator (or other differential operators) on a domain and the geometry of this domain is a problem that has been widely studied, in particular in the field of spectral geometry.\\
The earliest and most well-known result in this direction is the Faber-Krahn inequality, which states that the first eigenvalue of the Laplacian with Dirichlet boundary conditions is, among sets of given measure, minimal on the ball. The same result was shown for Robin boundary conditions with positive parameter in \cite{B88} in the two-dimensional case, then in \cite{D06} in any dimension for a certain class of domains on which the trace may be defined, using rearrangement methods. It was extended in \cite{BG10}, \cite{BG15} to the $\SBV$ framework that we will describe in the next section, showing that the first eigenvalue with Robin boundary conditions is minimal on the ball among all open sets of given measure. In order to handle the lack of uniform smoothness of the admissible domains, the method here is to consider a relaxed version of the problem, so as to optimize an eigenfunction instead of a shape. Once a minimizer is known to exist in the relaxed framework, it is shown by regularity and symmetry arguments that this minimizer corresponds to the ball.\\
Similar problems of spectral optimization with Neumann boundary conditions, or Robin conditions with negative parameter, have been shown to be different in nature: in the Neumann case the first nontrivial eigenvalue is maximal on the ball, and this is shown with radically different methods, mainly by building appropriate test functions, since the eigenvalues are defined as an infimum through the Courant-Fischer min-max formula. Let us also mention several maximization results for Robin boundary conditions with a parameter that scales with the perimeter, obtained in \cite{L19}, \cite{GL19} with similar methods.\\
The existence and partial regularity of minimizers of functionals $F(\lambda_1^D(\Om),\hdots,\lambda_k^D(\Om))$ (where $\lambda_i^D(\Om)$ is the $i$-th eigenvalue of the Laplacian with Dirichlet boundary conditions) with measure constraint or penalization were obtained in \cite{B12}, \cite{MP13}, \cite{KL18}, \cite{KL19}: it is known that if $F$ is increasing and bi-Lipschitz in each $\lambda_i$ then there is an optimal open set that is $\mathcal{C}^{1,\alpha}$ outside of a singular set of codimension at least three, and if $F$ is merely nondecreasing in each coordinate then there is an optimal quasi-open set whose boundary is analytic outside of a singular set of codimension three and of points with Lebesgue density one. It has been shown in \cite{BMPV15}, \cite{KL19} that a shape optimizer for the $k$-th eigenvalue with Dirichlet boundary conditions and measure constraint admits Lipschitz eigenfunctions. In these papers the monotonicity and scaling properties of the eigenvalues with Dirichlet boundary conditions ($\om\mapsto \lambda_k^D(\om)$ is decreasing in $\om$) play a crucial role; however, eigenvalues with Robin boundary conditions enjoy no such properties, so the same methods cannot be extended in a straightforward way.\\
The minimization of $\lambda_2(\Om;\beta)$ under measure constraint on $\Om$ was treated in \cite{K09}; as in the Dirichlet case, the minimizer is the disjoint union of two balls of the same measure. For the minimization of $\lambda_k(\Om;\beta)$ or other functionals of $\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta)$, nothing is known except for the existence of a minimizer in the relaxed setting with bounded support for $\lambda_k(\Om;\beta)$, see \cite{BG19}. The regularity theory for minimizers of functionals involving Robin boundary conditions was developed in \cite{CK16}, \cite{K19}, and we will rely on some of its results in our vectorial setting.\\
Numerical simulations in \cite{AFK13} for two-dimensional minimizers of $\lambda_k(\cdot;\beta)$ (for $3\leq k\leq 7$) with prescribed area suggest a bifurcation phenomenon in which the optimal shape is a union of $k$ balls for every small enough $\beta$, and is connected for every large enough $\beta$. In \cite{AFK13}, the connected minimizers were sought by parametric optimization among perturbations of the disk; however, a consequence of our analysis in the last section is that minimizers of $\lambda_3(\cdot;\beta)$ are never homeomorphic to the disk.
\subsection{Statements of the main results}
In the first part of the paper, we are concerned with what we call the non-degenerate case; consider
\[F:\left\{\lambda\in\R^k:0<\lambda_1\leq \lambda_2\leq\hdots\leq \lambda_k\right\}\to\R_+\]
a Lipschitz function admitting directional derivatives (in the sense that for any $\lambda\in\R^k$ there is some positively homogeneous function $F_0$ such that $F(\lambda+\nu)=F(\lambda)+F_0(\nu)+o_{\nu\to 0}(|\nu|)$) and such that for all $i\in\left\{1,\hdots,k\right\}$ and all $0<\lambda_1\leq \hdots\leq\lambda_k$
\begin{equation}\label{HypF}
\frac{\partial F}{\partial^\pm \lambda_i}(\lambda_1,\hdots,\lambda_k)>0,\ F(\lambda_1,\hdots,\lambda_{k-1},\mu_k)\underset{\mu_k\to\infty}{\longrightarrow}+\infty,
\end{equation}
where $\frac{\partial}{\partial^{\pm} \lambda_i}$ designates the directional partial derivatives in $\lambda_i$. This applies in particular to any of these:
\[F_p(\lambda_1,\hdots,\lambda_k)=\left(\sum_{i=1}^k \lambda_i^p\right)^\frac{1}{p}.\]
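One may check directly that each $F_p$ satisfies the hypotheses \eqref{HypF}: for $0<\lambda_1\leq\hdots\leq\lambda_k$ and $p>0$,

```latex
\[\frac{\partial F_p}{\partial\lambda_i}(\lambda_1,\hdots,\lambda_k)
=\lambda_i^{p-1}\Big(\sum_{j=1}^k\lambda_j^p\Big)^{\frac{1}{p}-1}>0,
\qquad
F_p(\lambda_1,\hdots,\lambda_{k-1},\mu_k)\geq\mu_k
\underset{\mu_k\to\infty}{\longrightarrow}+\infty,\]
```

the last inequality holding since $\big(\sum_j\lambda_j^p\big)^{1/p}\geq(\mu_k^p)^{1/p}=\mu_k$.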
Our first main result is the following.
\begin{main}\label{main1}
Let $F$ be such a function and let $m>0$. Then there exists an open set that minimizes the functional
\[\Om\mapsto F(\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta))\]
among open sets of measure $m$ in $\Rn$. Moreover, any minimizing set is bounded and satisfies $\Hs(\partial\Om)\leq C$ for some constant $C>0$ depending only on $(n,m,\beta,F)$, and $\partial\Om$ is Ahlfors-regular.
\end{main}
Here are the main steps of the proof:
\begin{itemize}[label=\textbullet]
\item \textbf{Relaxation}. We relax the problem in the $\SBV$ framework; this is introduced in the next subsection, following \cite{BG10}, \cite{BG15}, \cite{BG19}. The idea is that the eigenfunctions on a domain $\Om$ are expected to be zero almost nowhere on $\Om$; we extend these eigenfunctions by zero outside of $\Omega$ (thereby creating a discontinuity along $\partial\Om$) and reformulate the optimization problem over general functions defined on $\R^n$ that may have discontinuities, with a measure constraint on their support. The advantage is that compactness and lower semi-continuity results yield the existence of minimizers in the relaxed framework; however, a sequence of eigenfunctions extended by zero may converge to a function that does not correspond to the eigenfunctions of an open domain, so we will need to prove some regularity of relaxed minimizers.
\item \textbf{A priori estimates and nondegeneracy}. We obtain a priori estimates for relaxed internal minimizers (meaning minimizers compared with any competitor supported inside them). More precisely, for any internal minimizer corresponding to the eigenfunctions $(u_1,\hdots,u_k)$, we show that at almost every point of the support of these eigenfunctions, at least one of them is above a certain positive threshold. We also obtain $L^\infty$ bounds on these eigenfunctions and deduce a lower estimate for the Lebesgue density of the support, from which we obtain the boundedness of the support.
\item \textbf{Existence of minimizers}. We consider a minimizing sequence and show that, up to a translation, it either converges to a minimizer or it splits into two minimizing sequences of similar functionals depending on $p$ and $k-p$ (where $1\leq p<k$) eigenvalues respectively, and we know minimizers of these exist by induction on $k$.
\item \textbf{Regularity}. Finally, we show the regularity of relaxed minimizers, meaning that a relaxed minimizer corresponds to the eigenfunctions of a certain open domain extended by zero, by showing that the singular set of a relaxed minimizer is closed up to an $\Hs$-negligible set.
\end{itemize}
Notice that in the second step we do not show that the first eigenfunction (or one of the $l$ first in the case where the minimizer has $l$ connected components) is positive, which is what we expect in general for sufficiently smooth sets; if $u_1$ is the first (positive) eigenfunction on a connected $\mathcal{C}^2$ set $\Om$ and $u_1(x)=0$ for some $x\in\partial\Om$, then by Hopf's lemma $\partial_\nu u_1(x)<0$, which contradicts the Robin condition at $x$, so $\inf_{\Om}u_1>0$. In our case, we get instead a ``joint non-degeneracy'' of the eigenfunctions, in the sense that at every point of their joint support at least one of them is positive.\bigbreak
Notice also that the second hypothesis in \eqref{HypF} is not superfluous: without it, a minimizing sequence $(\Om^i)$ could have some of its first $k$ eigenvalues diverge. This is because, unlike in the Dirichlet case, there is no upper bound on $\frac{\lambda_k(\cdot;\beta)}{\lambda_1(\cdot;\beta)}$ in general, even among sets with fixed measure. While $\lambda_k(\cdot;\beta)$ is not homogeneous under dilation, we still have the scaling property
\[\lambda_k(r\Om;\beta)=r^{-2}\lambda_k(\Om;r\beta).\]
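Indeed, this scaling property follows by the change of variables $u_r(x):=u(x/r)$ for $u\in H^1(\Om)$ (stated here for $\Om$ Lipschitz; the same computation applies in the relaxed setting): each term of the Rayleigh quotient scales with a power of $r$,

```latex
\[\frac{\int_{r\Om}|\nabla u_r|^2\,\mathrm{d}\Ln
+\beta\int_{\partial(r\Om)}u_r^2\,\mathrm{d}\Hs}
{\int_{r\Om}u_r^2\,\mathrm{d}\Ln}
=\frac{r^{n-2}\int_{\Om}|\nabla u|^2\,\mathrm{d}\Ln
+\beta r^{n-1}\int_{\partial\Om}u^2\,\mathrm{d}\Hs}
{r^{n}\int_{\Om}u^2\,\mathrm{d}\Ln}
=r^{-2}\cdot\frac{\int_{\Om}|\nabla u|^2\,\mathrm{d}\Ln
+r\beta\int_{\partial\Om}u^2\,\mathrm{d}\Hs}
{\int_{\Om}u^2\,\mathrm{d}\Ln},\]
```

and taking the min-max over $k$-dimensional subspaces gives $\lambda_k(r\Om;\beta)=r^{-2}\lambda_k(\Om;r\beta)$.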
Consider a connected smooth open set $\Om$. Since each $\lambda_k(\Om;r\beta)$ converges to $\lambda_k(\Om;0)$ (the corresponding eigenvalue with Neumann boundary conditions) as $r\to 0$, and $0=\lambda_1(\Om;0)<\lambda_2(\Om;0)$, for any $k\geq 2$ we have $\frac{\lambda_k(r\Om;\beta)}{\lambda_1(r\Om;\beta)}\underset{r\to 0}{\longrightarrow}+\infty$. A counterexample among sets of fixed measure may then be obtained as the disjoint union of $r\Om$, for small $r$, and a set $\om$ of prescribed measure such that $\lambda_1(\om;\beta)>\lambda_k(r\Om;\beta)$, for instance a disjoint union of sufficiently many balls of radius $\rho>0$, chosen small enough that $\lambda_1(\om;\beta)=\lambda_1(\B_\rho;\beta)>\lambda_k(r\Om;\beta)$.\bigbreak
In the second part of the paper, we study the minimizers of the functional
\[\Om\mapsto \lambda_k(\Om;\beta).\]
A minimizer in the $\SBV$ framework (see the next section) was shown to exist in \cite{BG19}, and aside from the fact that its support is bounded nothing more is known. We show that, in this $\SBV$ framework, a minimizer necessarily satisfies $\lambda_{k-1}(\Om;\beta)=\lambda_k(\Om;\beta)$, in the sense of Definition \ref{def_relaxed}.\bigbreak
This is a long-standing open problem for minimizers of $\lambda_k$ with Dirichlet boundary conditions (see \cite[Open Problem 1]{H06} and \cite{O04}).\bigbreak
However, although we prove it for Robin conditions, we do not expect this result to extend directly to the Dirichlet case; simply put, even if some smooth sequence of minimizers $\Om^\beta$ of $\lambda_k(\cdot;\beta)$ approached a minimizer $\Om$ of $\lambda_k^D$ that is a counterexample to the conjecture, there is no reason why the upper semi-continuity $\lambda_{k-1}^D(\Om)\geq \limsup_{\beta\to\infty}\lambda_{k-1}(\Om^\beta;\beta)$ should hold.
\begin{main}
Suppose $n\geq 3$, $k\geq 2$. Let $m>0$, and let $\U$ be a relaxed minimizer of
\[\V\mapsto \lambda_k(\V;\beta)\]
among admissible functions with support of measure $m$. Then
\[\lambda_{k-1}(\U;\beta)=\lambda_k(\U;\beta).\]
\end{main}
Here are the main steps and ideas of the proof:
\begin{itemize}[label=\textbullet]
\item First, we replace the minimizer $\U=(u_1,\hdots,u_k)$ with another minimizer $\V=(v_1,\hdots,v_k)$, with the properties that $v_1\geq 0$, $\lambda_{k}(\V;\beta)>\lambda_{k-1}(\V;\beta)$, and $\V\in L^\infty(\R^n)$. One might think these properties also hold for $\U$; however, there is no particular reason why $\Span(\U)$ should contain eigenfunctions for $\lambda_1(\U;\beta),\hdots,\lambda_{k-1}(\U;\beta)$ in a variational sense.\\
This phenomenon may be easily understood in a finite-dimensional setting as follows: consider the matrix
$A=\begin{pmatrix}\lambda_1 \\ & \lambda_2 \\ & & \lambda_3\end{pmatrix}$ with $\lambda_1<\lambda_2<\lambda_3$. Then $\Lambda_2\Big[A\Big]$ is given by:
\[\Lambda_2\Big[A\Big]=\inf_{V\subset \R^3,\ \text{dim}(V)=2}\sup_{x\in V\setminus\{0\}}\frac{(x,Ax)}{(x,x)}.\]
This infimum is reached by the subspace $\Span(e_1,e_2)$, but also by any subspace $\Span(e_1+te_3,e_2)$ with $t^2\leq \frac{\lambda_2-\lambda_1}{\lambda_3-\lambda_2}$, and these subspaces do not contain the first eigenvector $e_1$.
\item Then we obtain a weak optimality condition on $u_k$ using perturbations on sets with a small enough perimeter. The reason for this is that we have no access to any information on $u_1,\hdots,u_{k-1}$ apart from the fact that their Rayleigh quotient is strictly less than $\lambda_k(\Om;\beta)$, so we must do perturbations of $u_k$ that do not increase dramatically the Rayleigh quotient of $u_1,\hdots,u_{k-1}$.
\item We apply this to sets of the form $\B_{x,r}\cap\left\{|u_k|\leq t\right\}$ where $r$ is chosen small enough for each $t$. With this we obtain that $|u_k|\geq c 1_{\left\{ u_k\neq 0\right\}}$.
\item We deduce the result by showing that the support of $\U$ is disconnected, so that the $k$-th eigenvalue may be decreased by dilations without changing the volume.
\end{itemize}
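The finite-dimensional example of the first step above can be verified directly: for $x=a(e_1+te_3)+be_2$ with $(a,b)\neq(0,0)$,

```latex
\[\frac{(x,Ax)}{(x,x)}
=\frac{a^2(\lambda_1+t^2\lambda_3)+b^2\lambda_2}{a^2(1+t^2)+b^2},
\qquad
\sup_{(a,b)\neq(0,0)}\frac{(x,Ax)}{(x,x)}
=\max\Big\{\frac{\lambda_1+t^2\lambda_3}{1+t^2},\,\lambda_2\Big\},\]
```

and the maximum equals $\lambda_2$ exactly when $t^2(\lambda_3-\lambda_2)\leq\lambda_2-\lambda_1$, so each such subspace attains the infimum while not containing $e_1$.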
While the existence of open minimizers is not yet known, we end with a few observations on the topology of these minimizers, in particular the fact that a two-dimensional minimizer of $\lambda_3(\cdot;\beta)$ with prescribed measure is never simply connected.
\section{Relaxed framework}
Throughout the paper, we use the relaxed framework of $\SBV$ functions to define Robin eigenvalues on any open set without regularity condition, and more importantly to transform our shape optimization problem into a free discontinuity problem on functions that are no longer defined on a particular domain. The $\SBV$ space was originally developed to handle relaxations of free discontinuity problems such as the Mumford-Shah functional, which will come into play later; we refer to \cite{AFP00} for a complete introduction. $\SBV$ functions may be thought of as ``piecewise $W^{1,1}$'' functions, and this space is defined as a particular subspace of $BV$ as follows:
\begin{definition}
A $\SBV$ function is a function $u\in BV(\Rn,\R)$ such that the distributional derivative $Du$ (which is a finite vector-valued Radon measure) may be decomposed into
\[Du=\nabla u \Ln +(\overline{u}-\underline{u})\nu_u \Hs_{\lfloor J_u}, \]
where $\nabla u\in L^1(\Rn)$ and $J_u$ is the jump set of $u$, defined as the set of points $x\in\Rn$ for which there exist $\overline{u}(x)\neq \underline{u}(x)\in\R$ and $\nu_u(x)\in\mathbb{S}^{n-1}$ such that
\[\left(y\mapsto u(x+ry)\right)\underset{L^1_{\text{loc}}(\Rn)}{\longrightarrow}\overline{u}(x)1_{\{y:y\cdot\nu_u(x)>0\}}+\underline{u}(x)1_{\{y:y\cdot\nu_u(x)<0\}}\text{ as }r\to 0.\]
\end{definition}
We will not work directly with the $\SBV$ space but with an $L^2$ analogue defined below, which was studied in \cite{BG10}.
\begin{definition}
Let $\Uk$ be the space of functions $\U\in L^2(\R^n,\R^k)$ such that
\[D\U=\nabla \U \Ln +(\overline{\U}-\underline{\U})\nu_\U \Hs_{\lfloor J_\U}, \]
where $\nabla \U\in L^2(\R^n,\R^{nk})$ and $\int_{J_\U}(|\overline{\U}|^2+|\underline{\U}|^2)\mathrm{d}\Hs<\infty$. The second term will be written $D^s\U$ ($s$ stands for singular). The function $\U$ is said to be linearly independent if its components span a $k$-dimensional subspace of $L^2(\R^n)$.\\
We will also say that a function $\U\in \Uk$ is disconnected if there is a measurable partition $\Om,\om$ of the support of $\U$ such that $\U1_\Om$ and $\U1_\om$ are in $\Uk$, and: \begin{align*}
D^s(\U1_\Om)&=(\overline{\U1_\Om}-\underline{\U1_\Om})\nu_\U \Hs_{\lfloor J_\U},\\
D^s(\U1_\om)&=(\overline{\U1_\om}-\underline{\U1_\om})\nu_\U \Hs_{\lfloor J_\U}.\end{align*}
In this case we will write $\U=(\U1_\Om)\oplus(\U1_\om)$.
\end{definition}
The following compactness theorem is a reformulation of Theorem 2 from \cite{BG10}.
\begin{proposition}\label{CompactnessLemma}
Let $(\U^i)$ be a sequence of $\Uk$ such that
\sup_{i}\int_{\R^n}|\nabla\U^i|^2\mathrm{d}\Ln+\int_{J_{\U^i}}(|\underline{\U^i}|^2+|\overline{\U^i}|^2)\mathrm{d}\Hs+\int_{\Rn}|\U^i|^2\mathrm{d}\Ln<\infty,
then there exists a subsequence $(\U^{\phi(i)})$ and a function $\U\in\Uk$ such that
\begin{align*}
\U^{\phi(i)}&\underset{L^2_\text{loc}}{\longrightarrow}\U,\\
\nabla\U^{\phi(i)}&\underset{L^2_{\text{loc}}-\text{weak}}{\rightharpoonup}\nabla\U.
\end{align*}
Moreover for any bounded open set $A\subset\R^n$
\begin{align*}
\int_{A}|\nabla \U|^2\mathrm{d}\Ln&\leq \liminf_{i\to+\infty}\int_{A}|\nabla \U^{\phi(i)}|^2\mathrm{d}\Ln,\\
\int_{J_\U\cap A}(|\overline{\U}|^2+|\underline{\U}|^2)\mathrm{d}\Hs&\leq \liminf_{i\to+\infty}\int_{J_{\U^{\phi(i)}}\cap A}(|\overline{\U^{\phi(i)}}|^2+|\underline{\U^{\phi(i)}}|^2)\mathrm{d}\Hs.
\end{align*}
\end{proposition}
\begin{proof}
The proof is an adaptation of \cite[Theorem 2]{BG10} to the vector-valued case.
\end{proof}
We define a notion of $i$-th eigenvalue of the Laplace operator with Robin boundary conditions that allows us to consider the functional $\lambda_k(\cdot;\beta)$ with no pre-defined domain, and to define the $k$-th eigenvalue of any open set even when the trace of $H^1$ functions is not well-defined.
\begin{definition}\label{def_relaxed}
Let $\U\in\Uk$ be linearly independent. We define the two Gram matrices:
\begin{align*}\label{defmatrix}
A(\U)&=\left(\langle u_i,u_j\rangle_{L^2(\mathbb{R}^n,\Ln)}\right)_{1\leq i,j\leq k},\\
B(\U)&=\left(\langle \nabla u_i,\nabla u_j\rangle_{L^2(\mathbb{R}^n,\Ln)}+\beta\langle \overline{u_i},\overline{u_j}\rangle_{L^2(J_\U,\Hs)}+\beta\langle \underline{u_i},\underline{u_j}\rangle_{L^2(J_\U,\Hs)}\right)_{1\leq i,j\leq k}.
\end{align*}
We then define the $i$-th eigenvalue of the vector-valued function $\U$ as
\begin{equation}
\lambda_i(\U;\beta)=\inf_{V\subset\Span(\U), \dim(V)=i}\sup_{v\in V\setminus\{0\}}\frac{\int_{\mathbb{R}^n}|\nabla v|^2\mathrm{d}\Ln+\beta \int_{J_\U}(\overline{v}^2+\underline{v}^2)\mathrm{d}\Hs}{\int_{\mathbb{R}^n}v^2\mathrm{d}\Ln}=\Lambda_i\Big[A(\U)^{-\frac{1}{2}}B(\U)A(\U)^{-\frac{1}{2}}\Big],
\end{equation}
where $\Lambda_i$ designates the $i$-th eigenvalue of a symmetric matrix.\\
We will say that $\U$ is normalized if $A(\U)=I_k$ and $B(\U)$ is the diagonal matrix $\mathrm{diag}(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta))$. By the spectral theorem, for any linearly independent $\U\in\Uk$ there exists $P\in \text{GL}_k(\R)$ such that $P\U$ is normalized.\\
Although we expect the optimal sets to have rectifiable boundary, we may define the eigenvalues with Robin boundary conditions for any open set $\Om\subset \Rn$ as
\begin{equation}\label{defopen}
\lambda_k(\Om;\beta):=\inf\Big[\lambda_k(\U;\beta), \U\in \Uk\text{ linearly independent}:\Hs(J_\U\setminus \partial\Om)=\Ln(\left\{\U\neq 0\right\}\setminus \Om)=0\Big].
\end{equation}
\end{definition}
It may be checked that for any bounded Lipschitz domain the admissible space corresponds to linearly independent functions $\U\in H^1(\Om)^k$, so this definition is consistent with the usual one.
\section{Strictly monotone functionals}
Let us first restate the first main result in the $\SBV$ framework. We define the admissible set of functions as
\[\Uk(m)=\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|=m\right\}.\]
For any linearly independent $\U\in\Uk$, we let:
\begin{align*}
\F(\U)&:=F(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta)),\\
\F_\gamma(\U)&:=\F(\U)+\gamma|\left\{\U\neq 0\right\}|.
\end{align*}
Our goal is now to show that $\F$ has a minimizer in $\Uk(m)$, and that any minimizer of $\F$ in $\Uk(m)$ is deduced from an open set, meaning there is an open set $\Om$ essentially containing $\left\{\U\neq 0\right\}$ such that $\U_{|\Om}\in H^1(\Om)^k$. This is not the case for every $\SBV$ function: some may have a dense, non-closed jump set, while $\partial\Om$ is closed and not dense.\\
Lemma \ref{PenalizationLemma} will link minimizers of $\F$ in $\Uk(m)$ with minimizers of $\F_\gamma$ among linearly independent $\U\in\Uk$ whose support has measure at most $m$.
\subsection{A priori estimates}
An internal relaxed minimizer of $\F_\gamma$ is a linearly independent function $\U\in\Uk$ such that for any linearly independent $\V\in \Uk$ satisfying $|\left\{\V\neq 0\right\}\setminus\left\{\U\neq 0\right\}|=0$:
\[\F_\gamma(\U)\leq \F_\gamma(\V).\]
To shorten some notations, we introduce the function $G:S_k^{++}(\R)\to \R$ such that
\[\F_\gamma(\U)=G\Big[A(\U)^{-\frac{1}{2}}B(\U)A(\U)^{-\frac{1}{2}}\Big]+\gamma|\left\{\U\neq 0\right\}|,\]
meaning that for any positive definite symmetric matrix $S$, $G\Big[S\Big]=F\left(\Lambda_1\Big[S\Big],\hdots,\Lambda_k\Big[S\Big]\right)$.
The smoothness of $F$ does not imply the smoothness of $G$ in general, because of the multiplicities of eigenvalues. However the monotonicity of $F$ implies the monotonicity of $G$ in the following sense: suppose $M,N$ are positive symmetric matrices, then:
\begin{align*}
G\Big[M+N\Big]&\leq G\Big[M\Big]+\left(\max_{i=1,\hdots,k}\sup_{\Lambda_j\Big[M\Big]\leq\lambda_j\leq \Lambda_j\Big[M+N\Big]}\frac{\partial F}{\partial^\pm\lambda_i}(\lambda_1,\hdots,\lambda_k)\right) \Tr\Big[N\Big],\\
G\Big[M+N\Big]&\geq G\Big[M\Big]+\left(\min_{i=1,\hdots,k}\inf_{\Lambda_j\Big[M\Big]\leq\lambda_j\leq \Lambda_j\Big[M+N\Big]}\frac{\partial F}{\partial^\pm\lambda_i}(\lambda_1,\hdots,\lambda_k)\right) \Tr\Big[N\Big].\end{align*}
Above, $\frac{\partial F}{\partial^\pm\lambda_i}$ denotes the directional partial derivatives of $F$.
Moreover, $G$ admits directional derivatives everywhere; let $M=\begin{pmatrix}\lambda_1 \\ & \ddots \\ & & \lambda_k\end{pmatrix}$ be a diagonal matrix with $p$ distinct eigenvalues and let $1\leq i_1<i_2<\hdots<i_p\leq k$ (with the conventions $i_{p+1}:=k+1$ and $\lambda_{k+1}:=+\infty$) be such that for any $i\in I_l:=[i_l,i_{l+1})$:
\[\lambda_{i_l}=\lambda_i<\lambda_{i_{l+1}}.\]
Then for each $i\in I_l$ the map $N\mapsto \Lambda_i\Big[M+N\Big]$ admits the following expansion at $M$:
\[\Lambda_i\Big[M+N\Big]=\Lambda_i\Big[M\Big]+\Lambda_{i-i_l+1}\Big[N_{|I_l}\Big]+\underset{N\to 0}{o}(N),\]
where $N_{|I}:=(N_{i,j})_{i,j\in I}$. Since $F$ has a directional derivative everywhere, this means that $G$ admits a directional derivative
\begin{equation}\label{Diff}
G\Big[M+N\Big]=G\Big[M\Big]+F_0\left(\Lambda_1\Big[ N_{|I_1}\Big],\hdots,\Lambda_{k-i_{p}+1}\Big[ N_{|I_p}\Big]\right)+\underset{N\to 0}{o}\left(N\right),
\end{equation}
where $F_0$ is a positively homogeneous function, namely the directional derivative of $F$ at $(\lambda_1,\hdots,\lambda_k)$.
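As a simple illustration of \eqref{Diff}, take $k=2$ and $M=\lambda I_2$, so that $p=1$ and $I_1=\{1,2\}$. Then
\[\Lambda_i\Big[\lambda I_2+N\Big]=\lambda+\Lambda_i\Big[N\Big]\quad\text{for }i=1,2,\]
with no remainder, and \eqref{Diff} reads $G\Big[\lambda I_2+N\Big]=G\Big[\lambda I_2\Big]+F_0\left(\Lambda_1\Big[N\Big],\Lambda_2\Big[N\Big]\right)$. In the particular case $F(\lambda_1,\lambda_2)=\lambda_1+\lambda_2$ one has $G=\Tr$ and $F_0(\mu_1,\mu_2)=\mu_1+\mu_2$, so the expansion is linear in $N$; for a general $F$, however, $N\mapsto F_0\left(\Lambda_1\Big[N\Big],\Lambda_2\Big[N\Big]\right)$ is positively homogeneous but not linear.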
\begin{proposition}\label{apriori}
Let $\U$ be a relaxed internal minimizer of $\F_\gamma$ and suppose it is normalized. Then there exist constants $M,\delta,R>0$, depending only on $(n,k,\beta,\F_\gamma(\U),F)$, such that
\[\delta 1_{\left\{ \U\neq 0\right\}}\leq |\U|\leq M.\]
Moreover, up to translation of its connected components, $\U$ is supported in a set of diameter bounded by $R$.
\end{proposition}
Estimates of the form $|\U|\geq \delta 1_{\{\U\neq 0\}}$ for solutions of elliptic equations with Robin boundary conditions appear in \cite{BL14}, \cite{BG15}, \cite{CK16}; see also \cite{BMV21} in a context without free discontinuity. This is a crucial step in proving the regularity of $\U$: once $\U$ is known to take values between two positive bounds, it may be seen as a quasi-minimizer of the Mumford-Shah functional $\int_{\Rn}|\nabla\U|^2\mathrm{d}\Ln+\Hs(J_\U)$, to which the techniques developed for the regularity of Mumford-Shah minimizers (see \cite{DCL89}) may be extended (see \cite{CK16}, \cite{BL14}).
\begin{proof}
We show, in order: that the eigenvalues $\lambda_i(\U;\beta)$ are bounded from above and below; the $L^\infty$ bound on $\U$; the lower bound for $|\U|$ on $\left\{\U\neq 0\right\}$; a lower bound on the Lebesgue density of $\left\{\U\neq 0\right\}$; and finally the boundedness of the support.
\begin{itemize}[label=\textbullet]
\item Since $|\left\{\U\neq 0\right\}|\leq \F_\gamma(\U)/\gamma$, by the Faber-Krahn inequality with Robin boundary conditions (as proved in \cite{BG10}) we have $\lambda_1(\U;\beta)\geq \lambda_1(\B^{\F_\gamma(\U)/\gamma};\beta)=:\lambda$.\\
In a similar way, since
\[F(\lambda,\hdots,\lambda,\lambda_k(\U;\beta))\leq F(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta))\leq \F_\gamma(\U)\]
and $F$ diverges when its last coordinate does, $\lambda_k(\U;\beta)$ is bounded by a constant $\Lambda>0$ that only depends on the behaviour of $F$ and on $\F_\gamma(\U)$. Let us write:
\begin{align*}
a&=\inf_{\frac{1}{2}\lambda\leq \lambda_1\leq \lambda_2\leq\hdots\leq \lambda_k\leq 2\Lambda}\inf_{i=1,\hdots,k}\frac{\partial F}{\partial^\pm \lambda_i}(\lambda_1,\hdots,\lambda_k),\\
b&=\sup_{\frac{1}{2}\lambda\leq \lambda_1\leq \lambda_2\leq\hdots\leq \lambda_k\leq 2\Lambda}\sup_{i=1,\hdots,k}\frac{\partial F}{\partial^\pm \lambda_i}(\lambda_1,\hdots,\lambda_k).\end{align*}
Both $a$ and $b$ are positive and depend only on $\F_\gamma(\U)$ and on the behaviour of $F$.
\item For the $L^\infty$ bound we use a Moser iteration procedure (see for instance \cite[Th 4.1]{HL11} for a similar method). We begin by establishing that $u_i$ is an eigenfunction associated to $\lambda_i(\U;\beta)$ in a variational sense. \\
Let $v_i\in \mathcal{U}_1$ be such that $\left\{v_i\neq 0\right\}\subset \left\{\U\neq 0\right\}$ and $J_{v_i}\subset J_\U$, we show that $V(u_i,v_i)=0$, where
\[V(u_i,v_i):=\int_{\Rn}\nabla u_i\cdot\nabla v_i\mathrm{d}\Ln+\beta\int_{J_\U}u_i v_i\mathrm{d}\Hs-\lambda_i(\U;\beta)\int_{\R^n}u_i v_i \mathrm{d}\Ln.\]
To this end, consider $\U_t=\U-t (v_i-\sum_{j\neq i}V(v_i,u_j)u_j) e_i$. Since $A(\U_t)$ converges to $I_k$ as $t\to 0$, $\U_t$ is linearly independent for small enough $t$ and
\[\F_\gamma(\U)\leq \F_\gamma(\U_t).\]
This implies, since $\left\{\U_t\neq 0\right\}\subset\left\{\U\neq 0\right\}$, that
\[F(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta))\leq F(\lambda_1(\U_t;\beta),\hdots,\lambda_k(\U_t;\beta)),\]
which may also be written
\[G\Big[B(\U)\Big]\leq G\Big[A(\U_t)^{-\frac{1}{2}}B(\U_t)A(\U_t)^{-\frac{1}{2}}\Big].\]
Now, $A(\U_t)^{-\frac{1}{2}}B(\U_t)A(\U_t)^{-\frac{1}{2}}=B(\U)-(e_i e_i^*)V(u_i,v_i)t+\mathcal{O}(t^2)$. Suppose that $V(u_i,v_i)>0$. Let $i'$ be the lowest index such that $\lambda_{i'}(\U;\beta)=\lambda_i(\U;\beta)$. Then knowing the directional derivative of $G$ given in \eqref{Diff} we obtain (for $t>0$)
\[G\Big[A(\U_t)^{-\frac{1}{2}}B(\U_t)A(\U_t)^{-\frac{1}{2}}\Big]=G\Big[B(\U)\Big]+tV(u_i,v_i)F_0\left(0,0,\hdots,0,-1,0,\hdots,0\right)+\underset{t\to 0}{o}(t),\]
which is smaller than $G\Big[B(\U)\Big]$ for small enough $t>0$: a contradiction. When $V(u_i,v_i)<0$ we may argue in the same way, replacing $v_i$ with $-v_i$; hence $V(u_i,v_i)=0$. Thus for all $v_i$ whose support and jump set are included in those of $\U$,
\[\int_{\Rn}\nabla u_i\cdot\nabla v_i\mathrm{d}\Ln+\beta\int_{J_\U}u_i v_i\mathrm{d}\Hs=\lambda_i(\U;\beta)\int_{\R^n}u_i v_i \mathrm{d}\Ln.\]
Now we use the Moser iteration method. Let $\alpha\geq 2$ be such that $u_i\in L^\alpha$; taking $v_i$ to be a truncation of $|u_i|^{\alpha-2}u_i $ in $[-M,M]$ and letting $M\to\infty$ in the variational equation above, we obtain
\[\int_{\Rn}(\alpha-1) |u_i|^{\alpha -2}|\nabla u_i|^2\mathrm{d}\Ln+\beta\int_{J_\U}(|\overline{u_i}|^{\alpha}+|\underline{u_i}|^{\alpha})\mathrm{d}\Hs=\lambda_i(\U;\beta)\int_{\Rn}|u_i|^\alpha \mathrm{d}\Ln.\]
Using the embedding $BV(\Rn)\hookrightarrow L^{\frac{n}{n-1}}(\Rn)$ we have:
\begin{align*}
\Vert u_i^\alpha\Vert_{L^{\frac{n}{n-1}}}&\leq C_n\Vert |u_i|^\alpha\Vert_{BV}\\
&\leq C_n\left(\int_{\Rn}|u_i|^{\alpha}\mathrm{d}\Ln+\int_{\Rn} |\nabla (|u_i|^{\alpha})|\mathrm{d}\Ln+\int_{J_\U}(|\overline{u_i}|^{\alpha}+|\underline{u_i}|^{\alpha})\mathrm{d}\Hs\right)\\
&\leq C_n\left(\int_{\Rn}\alpha\left(|u_i|^\alpha+|u_i|^{\alpha-2}|\nabla u_i|^2\right)\mathrm{d}\Ln+\int_{J_\U}(|\overline{u_i}|^{\alpha}+|\underline{u_i}|^{\alpha})\mathrm{d}\Hs\right)\\
&\leq C_{n,\beta}\left(\alpha+\lambda_i(\U;\beta)\right)\Vert u_i\Vert_{L^\alpha}^{\alpha}.\\
\end{align*}
Hence $\Vert u_i\Vert_{L^{\frac{n}{n-1}\alpha}}\leq \Big[ C_{n,\beta}\left(\alpha+\lambda_i(\U;\beta)\right)\Big]^\frac{1}{\alpha}\Vert u_i\Vert_{L^\alpha}$. We may apply this iteratively with $\alpha_p=2\left(\frac{n}{n-1}\right)^p$ to obtain an $L^\infty$ bound on $u_i$ depending only on $n$, $\beta$ and $\lambda_i(\U;\beta)$. In fact, using the Faber-Krahn inequality for Robin conditions, $\lambda_i(\U;\beta)\geq \lambda_1\left(\B^{|\{\U\neq 0\}|};\beta\right)$, the previous inequality applied to $\alpha_p$ may be simplified into
\[\log\left(\frac{\Vert u_i\Vert_{L^{\alpha_{p+1}}}}{\Vert u_i\Vert_{L^{\alpha_{p}}}}\right)\leq \left(C(n,\beta,|\{\U\neq 0\}|)(p+1)+\frac{1}{2}\log\lambda_i(\U;\beta)\right)\left(\frac{n-1}{n}\right)^p,\]
and summing in $p$ we obtain an estimate of the form $\Vert u_i\Vert_{L^\infty}\leq C(n,\beta,|\{\U\neq 0\}|)\lambda_i(\U;\beta)^\frac{n}{2}$.
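For completeness, the summation in $p$ can be carried out explicitly. Since $\U$ is normalized, $\Vert u_i\Vert_{L^{\alpha_0}}=\Vert u_i\Vert_{L^2}=1$, and using $\sum_{p\geq 0}\left(\frac{n-1}{n}\right)^p=n$ together with $\sum_{p\geq 0}(p+1)\left(\frac{n-1}{n}\right)^p=n^2$ we get
\[\log \Vert u_i\Vert_{L^\infty}\leq \sum_{p\geq 0}\left(C(n,\beta,|\{\U\neq 0\}|)(p+1)+\frac{1}{2}\log\lambda_i(\U;\beta)\right)\left(\frac{n-1}{n}\right)^p=C(n,\beta,|\{\U\neq 0\}|)n^2+\frac{n}{2}\log\lambda_i(\U;\beta),\]
which is exactly the claimed bound $\Vert u_i\Vert_{L^\infty}\leq C(n,\beta,|\{\U\neq 0\}|)\lambda_i(\U;\beta)^\frac{n}{2}$.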
\item Lower bound on $\U$: our goal is first to obtain an estimate of the form
\begin{equation}\label{EstCaff}
\Tr\Big[B_t\Big]+|\left\{ 0<|\U|\leq t\right\}|\leq \frac{1}{\eps}\Tr\Big[\beta_t\Big],
\end{equation}
where $\eps>0$ is a constant that only depends on the parameters and
\begin{align*}
(B_t)_{i,j}&=\int_{|\U|\leq t}\nabla u_i\cdot\nabla u_j\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_i 1_{|\U|\leq t}}\cdot\overline{u_j 1_{|\U|\leq t}}+\underline{u_i 1_{|\U|\leq t}}\cdot\underline{u_j 1_{|\U|\leq t}}\right)\mathrm{d}\Hs,\\
(\beta_t)_{i,j}&=\beta\int_{\partial^*\left\{ |\U|>t\right\}\setminus J_\U}u_i u_j\mathrm{d}\Hs.
\end{align*}
This is intuitively what we obtain by comparing $\U$ with $\U1_{\left\{|\U|>t\right\}}$. From this we will derive a lower bound on $\inf_{\{\U\neq 0\}}|\U|$ with arguments similar to those of \cite{CK16}. Suppose \eqref{EstCaff} does not hold. Since $B_t\leq B(\U)$ and $|\left\{\U\neq 0\right\}|\leq \F_\gamma(\U)/\gamma$, this means that
\[\beta_t\leq \left(B(\U)+\frac{\F_\gamma(\U)}{\gamma}I_k\right)k\eps\leq c\eps B(\U),\]
for a certain $c>0$ since $B(\U)\geq \lambda I_k$. Let us now compare $\U$ with $\U_t=\U1_{\left\{ |\U|>t\right\}}$; this function is admissible for a small enough $t$ because $A(\U_t)=I_k-A\left(\U 1_{\left\{ 0<|\U|\leq t\right\} }\right)$, so
\[\Vert A(\U_t)-I_k\Vert\leq Ct^2|\left\{ 0<|\U|\leq t\right\}|.\]
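Indeed, each component of $\U$ is bounded by $t$ on the set $\left\{ 0<|\U|\leq t\right\}$, so that entrywise
\[\left|A\left(\U 1_{\left\{ 0<|\U|\leq t\right\}}\right)_{i,j}\right|=\left|\int_{\left\{ 0<|\U|\leq t\right\}}u_i u_j\,\mathrm{d}\Ln\right|\leq t^2\left|\left\{ 0<|\U|\leq t\right\}\right|.\]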
Notice also that $B(\U_t)=B(\U)-B_t+\beta_t$. Then the optimality condition $\F_\gamma(\U)\leq \F_\gamma(\U_t)$ gives
\begin{equation}\label{eqInter}
G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[A(\U_t)^{-\frac{1}{2}}(B(\U)-B_t+\beta_t)A(\U_t)^{-\frac{1}{2}}\Big].
\end{equation}
We first show that $B_t$ is small for small $t$. With our hypothesis on $\beta_t$ and the fact that $A(\U_t)^{-\frac{1}{2}}\leq (1+Ct^2)I_k\leq (1+c\eps)I_k$ for small enough $t$,
\[G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[(1+2c\eps)B(\U)-B_t\Big].\]
In particular $G\Big[B(\U)\Big]\leq G\Big[(1+2c\eps)B(\U)-B_t\Big]$. Now, there exists $i\in\left\{1,\hdots,k\right\}$ such that
\[\Lambda_i\Big[(1+2c\eps)B(\U)-B_t\Big]\leq (1+2c\eps)\Lambda_i\Big[B(\U)\Big]-\frac{1}{k}\Tr\Big[B_t\Big].\]
And so, using the monotonicity of $F$ and the definition of $a,b$ in the first part of the proof:
\begin{align*}
G\Big[B(\U)\Big]&\leq G\Big[(1+2c\eps)B(\U)-B_t\Big]\\
&\leq F\left((1+2c\eps)\lambda_1(\U;\beta),\hdots,(1+2c\eps)\lambda_{i-1}(\U;\beta),(1+2c\eps)\lambda_i(\U;\beta)-\frac{1}{k}\Tr\Big[B_t\Big],\hdots,(1+2c\eps)\lambda_k(\U;\beta)\right)\\
&\leq G\Big[B(\U)\Big]+2bc\eps\Tr\Big[B(\U)\Big]-a\min\left(\frac{1}{k}\Tr\Big[B_t\Big],\frac{\lambda}{2}\right).
\end{align*}
With a small enough $\eps$, we obtain $\Tr\Big[B_t\Big]\leq \frac{\lambda}{2}$. Now we may come back to \eqref{eqInter}, and using the fact that $A(\U_t)^{-\frac{1}{2}}\leq (1+Ct^2|\left\{0<|\U|\leq t\right\}|)I_k$ we obtain
\[G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[B(\U)+\beta_t-B_t+Ct^2|\left\{0<|\U|\leq t\right\}|I_k\Big],\]
and so with the monotonicity of $G$
\[G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[B(\U)\Big] +b\Tr\Big[\beta_t\Big]-a\Tr\Big[B_t\Big]+Cbt^2|\left\{0<|\U|\leq t\right\}|.\]
In particular, for a small enough $t>0$ (depending only on the parameters)
\[a\Tr\Big[B_t\Big]+\frac{\gamma}{2}|\left\{0<|\U|\leq t\right\}|\leq b\Tr\Big[\beta_t\Big],\]
and so we have shown that there exist a constant $C>0$ and a small enough $t_1>0$ such that for any $t\in (0,t_1]$
\begin{equation}\label{EstCaff2}
\Tr\Big[B_t\Big]+|\left\{0<|\U|\leq t\right\}|\leq C\Tr\Big[\beta_t\Big].
\end{equation}
Now we let
\[V:=|\U|=\sqrt{u_1^2+\hdots+u_k^2}.\]
Let $f(t)=\int_0^t \tau\,\Hs(\partial^*\left\{ V> \tau\right\}\setminus J_\U)\mathrm{d}\tau$. Notice that the right-hand side of \eqref{EstCaff2} is $Ctf'(t)$. Then for any $t\leq t_1$:
\begin{align*}
f(t)&\leq \int_0^t \tau\,\Hs(\partial^*\left\{ V> \tau\right\}\setminus J_V)\mathrm{d}\tau=\int_{\left\{ 0<V\leq t\right\}}V|\nabla V|\mathrm{d}\Ln\\
&\leq |\left\{ 0<V\leq t\right\}|^{\frac{1}{2n}}\left(\int_{\left\{ 0<V\leq t\right\}}|\nabla V|^2\mathrm{d}\Ln\right)^{\frac{1}{2}}\left(\int_{\left\{ 0<V\leq t\right\}}(V^2)^{\frac{n}{n-1}}\mathrm{d}\Ln\right)^\frac{n-1}{2n}\\
&\leq C(tf'(t))^{\frac{1}{2n}+\frac{1}{2}}\left(|D(V^2)|(\left\{ 0<V\leq t\right\})\right)^\frac{1}{2}\\
&\leq C(tf'(t))^{\frac{1}{2n}+\frac{1}{2}}\left(\int_{\left\{ 0<V\leq t\right\}}V|\nabla V|\mathrm{d}\Ln +\int_{J_V\cap \left\{ 0<V\leq t\right\}}V^2\mathrm{d}\Hs\right)^\frac{1}{2}\\
&\leq C(tf'(t))^{\frac{1}{2n}+\frac{1}{2}}\left(t|\left\{ 0<V\leq t\right\}|^{\frac{1}{2}}\left(\int_{\left\{ 0<V\leq t\right\}}|\nabla V|^2\mathrm{d}\Ln\right)^{\frac{1}{2}} +\int_{J_V\cap \left\{ 0<V\leq t\right\}}V^2\mathrm{d}\Hs\right)^\frac{1}{2}\\
&\leq C(tf'(t))^{1+\frac{1}{2n}}.
\end{align*}
The constant $C>0$ above depends only on the parameters and may change from line to line. This implies that $f'(t)f(t)^{-\frac{2n}{2n+1}}\geq ct^{-1}$, so for any $t\in ]0,t_1[$ such that $f(t)>0$ this may be integrated from $t$ to $t_1$ to obtain
\[(2n+1)f(t_1)^{\frac{1}{2n+1}}\geq c\log(t_1/t).\]
Since $f(t_1)\leq |\left\{\U\neq 0\right\}|^{\frac{1}{2}}\left(\int_{\Rn}|\nabla V|^2\right)^\frac{1}{2}\leq \sqrt{k\F_\gamma(\U)\Lambda/\gamma}$, any such $t$ is bounded from below in terms of the parameters of the problem. This means that $f(\delta)=0$ for some explicit $\delta>0$. In particular, $(\nabla\U)1_{\left\{ |\U|\leq \delta\right\}}=0$, and by comparing $\U$ with $\U_\delta$, \eqref{EstCaff} becomes $|\left\{ 0<|\U|\leq \delta\right\}|\leq 0$, so we obtain
\[|\U|\geq \delta 1_{\left\{ \U\neq 0\right\}}.\]
\item To show that the support $\left\{\U\neq 0\right\}$ (or rather each of its connected components) is bounded, we begin by proving a lower estimate for the Lebesgue density of this set. This is obtained by comparing $\U$ with $\U_r:=\U1_{\mathbb{R}^n\setminus \B_r}$, where $\B_r$ is a ball of radius $r>0$.\bigbreak
As previously, we first need to check that $\U_r$ is admissible for any small enough $r>0$. With the $L^\infty$ bound on $\U$, we get $|A(\U_r)-I_k|\leq C|\left\{ \U\neq 0\right\}\cap \B_r|$. In particular, $|A(\U_r)-I_k|\leq C r^n$, which proves that $A(\U_r)$ is invertible for a small enough $r$.\bigbreak
Let $f(r)=|\left\{ \U\neq 0\right\}\cap \B_r|$. By comparing $\U$ with $\U_r$ we obtain
\[G\Big[B(\U)\Big]+\gamma|\left\{\U\neq 0\right\}\cap \B_r|\leq G\Big[A_r(B(\U)-B_r+\beta_r)A_r\Big],\]
where $A_r,B_r,\beta_r$ are defined as previously: $A_r=A(\U_r)^{-\frac{1}{2}}$ and
\begin{align*}
(B_r)_{i,j}&=\int_{\B_r}\nabla u_i\cdot\nabla u_j \mathrm{d}\Ln+\beta\int_{J_\U}\left(\overline{u_i 1_{\B_r}}\cdot\overline{u_j 1_{\B_r}}+\underline{u_i 1_{\B_r}}\cdot\underline{u_j 1_{\B_r}}\right)\mathrm{d}\Hs,\\
(\beta_r)_{i,j}&=\beta\int_{\partial\B_r\setminus J_\U}u_i u_j\mathrm{d}\Hs.
\end{align*}
By the same argument as for the lower bound, this estimate implies that for any $r\in ]0,r_0]$, with $r_0$ small enough,
\[c\Tr\Big[B_r\Big]\leq \Tr\Big[\beta_r\Big]+f(r),\]
for a certain $c>0$. With the $L^\infty$ bound and the lower bound on $\U$, we deduce that for a certain constant $C>0$:
\[\Hs(\B_r\cap J_\U)\leq C\left(f(r)+\Hs(\partial \B_r\cap \left\{ \U\neq 0\right\})\right).\]
Notice that $f'(r)=\Hs(\partial \B_r\cap \left\{ \U\neq 0\right\})$, so with the isoperimetric inequality
\[c_n f(r)^{1-\frac{1}{n}}\leq \Hs(\B_r\cap J_\U)+\Hs(\partial \B_r\cap \left\{ \U\neq 0\right\})\leq C(f(r)+f'(r)).\]
Since $f(r)\leq Cr^n\rightarrow 0$, we deduce that for a certain constant $C>0$ and any small enough $r$ ($r<r_0$) we have
\[f(r)^{1-\frac{1}{n}}\leq Cf'(r).\]
Suppose now that $f(r)>0$ for every $r>0$. Since then $\frac{\mathrm{d}}{\mathrm{d}r}f(r)^{\frac{1}{n}}=\frac{1}{n}f(r)^{\frac{1}{n}-1}f'(r)\geq \frac{1}{nC}$, integrating the above estimate from $0$ to $r_0$ we obtain that for some constant $c>0$ and any $r\in [0,r_0]$
\[|\left\{ \U\neq 0\right\}\cap \B_{x,r}|\geq cr^n.\]
Consider now a set of points $S\subset\Rn$ such that for any $x\in S$ and any $r>0$ we have $|\left\{\U\neq 0\right\}\cap\B_{x,r}|>0$, and such that $|x-y|\geq 2r_0$ for any distinct $x,y\in S$. Then
\[\F_\gamma(\U)\geq \gamma |\left\{ \U\neq 0\right\}|\geq \gamma\sum_{x\in S}|\left\{ \U\neq 0\right\}\cap \B_{x,r_0}|\geq c\gamma r_0^n \mathrm{Card}(S),\]
so $\mathrm{Card}(S)$ is bounded. Taking a maximal set $S$ of separated points as above, the balls $(\B_{x,2r_0})_{x\in S}$ cover $\left\{\U\neq 0\right\}$. In particular, up to a translation of its connected components, the support of $\U$ is contained in a set whose diameter is bounded by a constant depending only on the parameters.
\end{itemize}
\end{proof}
\subsection{Existence of a relaxed minimizer with prescribed measure}
This section is dedicated to the proof of the following result.
\begin{proposition}\label{existence}
Let $m,\beta>0$. Then there exists $\U\in \Uk$ minimizing $\F$ in the admissible set $\Uk(m)$.
\end{proposition}
We begin with a lemma that will help us to show that any minimizing sequence of $\F$ in $\Uk(m)$ has concentration points, meaning points around which the measure of the support is bounded below by a positive constant.
\begin{lemma}\label{Conc}
Let $\U\in \Uk$ and, for $p\in\mathbb{Z}^n$, let $K_p:=p+[-\frac{1}{2},\frac{1}{2}]^n$. Then there exists $p\in \mathbb{Z}^n$ such that
\[|\{\U\neq 0\}\cap K_p|\geq \left(\frac{c_n\Vert \U\Vert_{L^2(\Rn)}^2}{\Vert \U\Vert_{L^2(\Rn)}^2+\int_{\Rn}|\nabla\U|^2\mathrm{d}\Ln+\int_{J_\U}\left(|\overline{\U}|^2+|\underline{\U}|^2\right)\mathrm{d}\Hs}\right)^n.\]
\end{lemma}
\begin{proof}
This is a consequence of the embedding $BV(K_p)\hookrightarrow L^\frac{n}{n-1}(K_p)$; see \cite[Lemma 12]{BGN21}.
\end{proof}
The following lemma gives a straightforward link between minimizers of $\F$ with prescribed volume and internal minimizers of $\F_\gamma$ for a sufficiently small $\gamma$, to which all the a priori estimates therefore apply.
\begin{lemma}\label{PenalizationLemma}
Let $\U\in \Uk$ be a minimizer of $\F$ in the admissible set
\[\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|=m\right\}.\]
Then there exists $\gamma>0$ depending only on $(n,m,\beta,\F(u),F)$ such that $\U$ is a minimizer of $\F_\gamma$ in the admissible set
\[\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|\in ]0,m]\right\}.\]
\end{lemma}
\begin{proof}
Consider a linearly independent $\V\in\Uk$ with $\delta:=\frac{|\{\V\neq 0\}|}{|\{\U\neq 0\}|}\in ]0,1]$, and let $\W(x):=\V(x\delta^{1/n})$. Then the support of $\W$ has the same measure as that of $\U$, so $\F(\U)\leq \F(\W)$. Examining how the matrices $A$ and $B$ scale under the change of variables $x\mapsto x\delta^{-\frac{1}{n}}$, we obtain
\[A(\W)^{-\frac{1}{2}}B(\W)A(\W)^{-\frac{1}{2}}\leq \delta^{\frac{1}{n}}A(\V)^{-\frac{1}{2}}B(\V)A(\V)^{-\frac{1}{2}},\]
hence
\[\F(\U)\leq F\left(\delta^{\frac{1}{n}}\lambda_1(\V;\beta),\hdots,\delta^{\frac{1}{n}}\lambda_k(\V;\beta)\right).\]
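For the reader's convenience, let us detail the scaling behind the matrix inequality above. Since $\W(x)=\V(x\delta^{1/n})$, the change of variables $y=x\delta^{1/n}$ gives $A(\W)=\delta^{-1}A(\V)$, while the gradient part of $B$ scales by $\delta^{-1}\delta^{\frac{2}{n}}$ and the jump part by $\delta^{-1}\delta^{\frac{1}{n}}$ (since $\Hs(J_\W)=\delta^{-\frac{n-1}{n}}\Hs(J_\V)$), so that
\[B(\W)\leq \delta^{-1}\delta^{\frac{1}{n}}B(\V),\]
because $\delta^{\frac{2}{n}}\leq\delta^{\frac{1}{n}}$ for $\delta\in ]0,1]$. The factors $\delta^{-1}$ cancel in $A(\W)^{-\frac{1}{2}}B(\W)A(\W)^{-\frac{1}{2}}$, which yields the stated inequality.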
By the Faber-Krahn inequality for Robin eigenvalues, $\lambda_1(\V;\beta)\geq \lambda_1(\B^{|\{\V\neq 0\}|};\beta)$. Moreover, since $F$ diverges when its last coordinate does, we may suppose without loss of generality that $\lambda_k(\V;\beta)$ is bounded by a constant $\Lambda>0$ that does not depend on $\V$. This in turn means, again by the Faber-Krahn inequality, that $|\{\V\neq 0\}|$ is bounded from below by a positive constant depending only on $n,\beta,\Lambda$, so $\delta$ is bounded from below. Denoting by $a$ the minimum of the partial derivatives of $F$ on $[\delta^{\frac{1}{n}}\lambda_1(\B^{m};\beta),\Lambda]^{k}$, we then obtain
\[\F(\U)\leq \F(\V)-a(\lambda_1(\V;\beta)+\hdots+\lambda_k(\V;\beta))(1-\delta^{1/n})\leq \F(\V)-\frac{ka\lambda_1(\B^m;\beta)}{nm}(|\{\U\neq 0\}|-|\{\V\neq 0\}|),\]
where in the last step we used $\lambda_j(\V;\beta)\geq\lambda_1(\B^m;\beta)$ and the elementary inequality $1-\delta^{1/n}\geq \frac{1-\delta}{n}=\frac{|\{\U\neq 0\}|-|\{\V\neq 0\}|}{nm}$. This proves the claim with $\gamma:=\frac{ka\lambda_1(\B^m;\beta)}{nm}$.
\end{proof}
We may now prove the main result of this section.
\begin{proof}
We proceed by induction on $k$. The main idea is that either we obtain the existence of a minimizer by taking the limit of a minimizing sequence, or we do not, in which case the minimizer is disconnected and is thus the union of two minimizers of functionals depending on strictly fewer than $k$ eigenvalues.\\
The initialisation $k=1$ amounts to showing that $\lambda_1(\U;\beta)$ has a minimizer in $\mathcal{U}_1(m)$: this was done in \cite{BG15}, and the minimizer is known to be the first eigenfunction of a ball of measure $m$.\\
Suppose now that $k\geq 2$ and that the result holds up to $k-1$. Consider a minimizing sequence $(\U^i)_i$ for $\F$ in $\Uk(m)$. The concentration lemma \ref{Conc}, applied to each $\U^i$, yields a sequence $(p^i)_i$ in $\mathbb{Z}^n$ such that
\begin{equation}\label{EstVol}
\liminf_{i\to\infty}|K_{p^i}\cap \{\U^i\neq 0\}|>0.
\end{equation}
We lose no generality in supposing, up to a translation of each $\U^i$, that $p^i=0$. By the compactness lemma \ref{CompactnessLemma}, we know that, up to extraction, $\U^i$ converges in $L^2_\text{loc}$ to some $\U\in \Uk$, with local lower semicontinuity of the Dirichlet-Robin energy.\\
We now split $\U^i$ into a ``local'' part and a ``distant'' part: we may find an increasing sequence $R^i\to\infty$ such that
\[\U^i 1_{\B_{R^i}}\underset{L^2(\Rn)}{\longrightarrow} \U.\]
Up to replacing each $R^i$ by some $\tilde{R}^i\in [\frac{1}{2}R^i,R^i]$, we may suppose that
\[\int_{\partial \B_{R^i}\setminus J_{\U^i}}|\U^i|^2\mathrm{d}\Hs=\underset{i\to \infty}{o}(1),\]
so that for each $j\in \{1,\hdots,k\}$
\[\lambda_j\left((\U^i1_{\B_{R^i}},\U^i1_{\B_{R^i}^c});\beta\right)\leq\lambda_j(\U^i;\beta)+\underset{i\to \infty}{o}(1).\]
Since $A\left(\U^i1_{\B_{R^i}},\U^i1_{\B_{R^i}^c}\right)$ and $B\left(\U^i1_{\B_{R^i}},\U^i1_{\B_{R^i}^c}\right)$ are block diagonal (with two blocks of size $k\times k$), up to extraction in $i$ there is some $p\in \{0,1,\hdots,k\}$ such that
\[\Big[\lambda_1(\U^i 1_{\B_{R^i}};\beta),\hdots,\lambda_p(\U^i 1_{\B_{R^i}};\beta),\lambda_1(\U^i 1_{\B_{R^i}^c};\beta),\hdots,\lambda_{k-p}(\U^i 1_{\B_{R^i}^c};\beta)\Big]^{\mathfrak{S}_k}\leq (\lambda_1(\U^i;\beta),\hdots,\lambda_k(\U^i;\beta))+\underset{i\to \infty}{o}(1),\]
where $\Big[a_1,\hdots,a_k\Big]^{\mathfrak{S}_k}$ denotes the nondecreasing reordering of the values $(a_1,\hdots,a_k)$. There are now three cases:
\begin{itemize}[label=\textbullet]
\item $p=0$: we claim this cannot occur. Indeed, this would mean that $\U^i1_{\B_{R^i}^c}$ satisfies
\[\F(\U^i1_{\B_{R^i}^c})\underset{i\to\infty}{\longrightarrow}\inf_{\Uk(m)}\F.\]
However, because of \eqref{EstVol}, we know there is some $\delta>0$ such that for all large enough $i$ the measure of the support of $\U^i1_{\B_{R^i}^c}$ is at most $m-\delta$. Letting $\V^i=\U^i1_{\B_{R^i}^c}\left(\Big[\frac{m-\delta}{m}\Big]^\frac{1}{n}\cdot\right)$, $(\V^i)_i$ is a sequence of linearly independent elements of $\Uk$, with support of measure at most $m$, such that $\F(\V^i)<\inf_{\Uk(m)}\F$ for large enough $i$: a contradiction.
\item $p=k$. In this case $\U\left(=\lim_i \U^i1_{\B_{R^i}}\right)$ is a minimizer of $\F$ with measure at most $m$. Indeed, in addition to the $L^2$ convergence of $\U^i1_{\B_{R^i}}$ to $\U$, the lower semicontinuity result tells us that for each $z\in\R^k$:
\[z^*B(\U)z\leq \liminf_{i}z^*B(\U^i1_{\B_{R^i}})z,\]
thus for any $j=1,\hdots,k$, $\lambda_j(\U;\beta)\leq \liminf_{i}\lambda_j(\U^i1_{\B_{R^i}};\beta)$. Moreover, $|\{\U\neq 0\}|\leq \liminf_i |\{\U^i1_{\B_{R^i}}\neq 0\}|\leq m$.
\item $1\leq p\leq k-1$. This is where we will use the induction hypothesis. We let:
\begin{align*}
\lambda_j&=\lim_{i\to\infty}\lambda_j(\U^i1_{\B_{R^i}};\beta),\ \forall j=1,\hdots,p & m_\text{loc}=\lim_{i\to\infty}|\{\U^i1_{\B_{R^i}}\neq 0\}|,\\
\mu_j&=\lim_{i\to\infty}\lambda_j(\U^i1_{\B_{R^i}^c};\beta),\ \forall j=1,\hdots,k-p & m_\text{dist}=\lim_{i\to\infty}|\{\U^i1_{\B_{R^i}^c}\neq 0\}|.\\
\end{align*}
Then by continuity of $F$
\[\inf_{\Uk(m)}\F=F\left(\Big[\lambda_1,\hdots,\lambda_p,\mu_1,\hdots,\mu_{k-p}\Big]^{\mathfrak{S}_k}\right).\]
Let us introduce
\[\F_{\text{loc}}:\V\in \mathcal{U}_{p}(m_\text{loc})\mapsto F\left(\Big[\lambda_1(\V;\beta),\hdots,\lambda_p(\V;\beta),\mu_1,\hdots,\mu_{k-p}\Big]^{\mathfrak{S}_k}\right).\]
This functional verifies hypothesis \eqref{HypF}, so by the induction hypothesis it has a minimizer $\V$; moreover, by the a priori bounds, $\V$ has bounded support. Since $|\{\U^i1_{\B_{R^i}}\neq 0\}|\underset{i\to\infty}{\rightarrow}m_{\text{loc}}$, the optimality of $\V$ gives
\[\F_\text{loc}(\V)\leq \liminf_{i\to\infty}\F_\text{loc}(\U^i1_{\B_{R^i}})=F\left(\Big[\lambda_1,\hdots,\lambda_p,\mu_1,\hdots,\mu_{k-p}\Big]^{\mathfrak{S}_k}\right)=\inf_{\Uk(m)}\F.\]
Now consider the functional
\begin{align*}
\F_{\text{dist}}:\W\in \mathcal{U}_{k-p}(m_\text{dist})&\mapsto F\left(\Big[\lambda_1(\V;\beta),\hdots,\lambda_{p}(\V;\beta),\lambda_1(\W;\beta),\hdots,\lambda_{k-p}(\W;\beta)\Big]^{\mathfrak{S}_k}\right).
\end{align*}
With the same arguments, there is a minimizer $\W$ with bounded support. By comparing $\W$ with $\U^i1_{\B_{R^i}^c}$ we obtain
\[\F_\text{dist}(\W)\leq\liminf_{i\to\infty}\F_\text{dist}(\U^i1_{\B_{R^i}^c})=\F_\text{loc}(\V)\left(\leq \inf_{\Uk(m)}\F\right).\]
Since both $\V$ and $\W$ have bounded support, we may suppose, up to translation, that their supports lie at positive distance from each other. Consider $\U=\V\oplus\W$; then $\F(\U)=\F_{\text{dist}}(\W)$, so $\U$ is a minimizer of $\F$ in $\Uk(m)$.
\end{itemize}
\end{proof}
\subsection{Regularity of minimizers}
Here we show that the relaxed global minimizer $\U$ found in the previous section corresponds to the eigenfunctions of an open set: there is an open set $\Om$ containing almost all the support of $\U$ such that $\U_{|\Om}\in H^1(\Om)^k$ and the eigenvalues $\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta)$ defined in \eqref{defopen} are attained at $u_{1|\Om},\hdots,u_{k|\Om}$ respectively (provided $\U$ is normalized). Moreover we show that $\partial\Om$ is Ahlfors regular, and in particular $\Hs(\partial\Om)<\infty$.\\
The main step is to show that $J_\U$ is essentially closed, meaning that $\Hs\left(\overline{J_\U}\setminus J_\U\right)=0$. This is obvious when $\U$ is given by eigenfunctions of a smooth open set $\Om$, since then $J_\U=\partial\Om$; a general $\SBV$ function, however, could have a dense jump set.\\
This is dealt with using methods similar to those of \cite{DCL89}, \cite{CK16}: we show that around every point $x\in \Rn$ with sufficiently low $(n-1)$-dimensional density of $J_\U$, the energy of $\U$ decays rapidly (this is Lemma \ref{DecayLemma}). This is obtained by contradiction and blow-up, considering rescalings of a sequence of functions that do not verify this estimate.
As a consequence we obtain a uniform lower bound on the $(n-1)$-dimensional density of $J_\U$, which implies that $J_\U$ is essentially closed.
We point out that in similar problems (see \cite{BL14}), the essential closedness of the jump set is obtained using the monotonicity of $\frac{1}{r^{n-1}}\left(\int_{\B_r}|\nabla u|^2+\Hs(J_u\cap \B_r)\right)\wedge c+c'r^{\alpha}$ for some constants $c,c',\alpha>0$ (where $u$ is a scalar solution of a similar free discontinuity problem). However, our optimality condition (see \eqref{EstOpt} below) does not seem sufficient to establish a similar monotonicity property, due to the remainder on the right-hand side and to the multiplicity of eigenvalues.
\begin{proposition}\label{regularity}
Let $\U$ be a relaxed minimizer of $\F_\gamma$. Then $\Hs\left(\overline{J_\U}\setminus J_\U\right)=0$, and $\Om:=\left\{ \overline{|\U|}>0\right\}\setminus \overline{J_\U}$ is an open set such that $(u_1,\hdots,u_k)$ are the first $k$ eigenfunctions of the Laplacian with Robin boundary conditions on $\Om$.
\end{proposition}
Since the proof is very similar to the one in \cite{CK16}, we only sketch the parts that concern the vectorial character of our problem.
\begin{proof}
We first establish an optimality condition for perturbations of $\U$ on balls with small radius. We suppose $\U$ is normalized and, using the same notations as in \eqref{Diff} with $M=B(\U)$, we denote
\begin{equation}\label{G0}
G_0\Big[N\Big]=F_0\left(\Lambda_1\Big[ N_{|I_1}\Big],\hdots,\Lambda_{k-i_{p}+1}\Big[ N_{|I_p}\Big]\right),
\end{equation}
such that:
\begin{equation}\label{Dev}
G\Big[B(\U)+N\Big]=G\Big[B(\U)\Big]+G_0\Big[N\Big]+\underset{N\to 0}{o}(N).\end{equation}
While $G_0$ is not linear in general (except in the particular case where $\frac{\partial F}{\partial\lambda_i}=\frac{\partial F}{\partial \lambda_j}$ for all $i,j$ such that $\lambda_i(\U;\beta)=\lambda_j(\U;\beta)$), it is positively homogeneous. We let
\[E_0\Big[N\Big]=\max\left(G_0\Big[N\Big],\Tr\Big[N\Big]\right).\]
$E_0$ is also positively homogeneous and verifies $E_0\Big[-S\Big]<0$ for any non-zero $S\in S_k^+(\R)$. We show that:
\begin{center}\textit{For any $\V\in\Uk$ that differs from $\U$ on a ball $\B_{x,r}$ where $r$ is small enough, we have}\end{center}
\begin{equation}\label{EstOpt}
E_0\Big[B(\V;\B_{x,r})-B(\U;\B_{x,r})\Big]\geq -\Lambda r^n -\delta(r)|B(\U;\B_{x,r})|.
\end{equation}
Here $\Lambda>0$, $\delta(r)\underset{r\to 0}{\rightarrow}0$, and
\[B(\W;\B_{x,r})_{i,j}:=\int_{\B_{x,r}}\nabla w_i\cdot\nabla w_j \mathrm{d}\Ln+\beta\int_{J_\W}\left(\overline{w_i 1_{\B_{x,r}}}\cdot\overline{w_j 1_{\B_{x,r}}}+\underline{w_i 1_{\B_{x,r}}}\cdot\underline{w_j 1_{\B_{x,r}}}\right)\mathrm{d}\Hs.\]
To show \eqref{EstOpt}, we may suppose that $\Tr\Big[B(\V;\B_{x,r})\Big]\leq \Tr\Big[B(\U;\B_{x,r})\Big]$ (otherwise it holds automatically) and that $\V$ satisfies the same $L^\infty$ bound as $\U$. The optimality condition of $\U$ gives
\[\F_\gamma(\U)\leq \F_\gamma(\V),\]
where the right-hand side is well defined for any small enough $r>0$ since $|A(\V)-I_k|\leq Cr^n$. This implies
\[G\Big[B(\U)\Big]\leq G\Big[(1+Cr^n)(B(\U)-B(\U;\B_{x,r})+B(\V;\B_{x,r}))\Big]+\gamma|\B_{x,r}|.\]
Thus, using the monotonicity of $G$ and the expansion \eqref{Dev}, we obtain the estimate \eqref{EstOpt}. Let us now show that this estimate, along with the a priori estimate
\begin{equation}\label{Estapriori}
\delta 1_{\left\{\U\neq 0\right\}}\leq |\U|\leq M,
\end{equation}
implies the essential closedness of $J_\U$, following arguments of \cite{CK16} that were originally developed in \cite{DCL89} for minimizers of the Mumford-Shah functional. The crucial ingredient is the following decay lemma.
\begin{lemma}\label{DecayLemma}
For any small enough $\tau \in ]0,1[$, there exist $\overline{r}=\overline{r}(\tau)>0$ and $\eps=\eps(\tau)>0$ such that for any $x\in \Rn$, $r\in ]0,\overline{r}]$ and $\W\in \Uk$ verifying the a priori estimates \eqref{Estapriori} and the optimality condition \eqref{EstOpt},
\[\left(\Hs(J_\W\cap \B_{x,r})\leq \eps r^{n-1}\ \text{ and }\ \Tr\Big[B(\W;\B_{x,r})\Big]\geq r^{n-\frac{1}{2}}\right)\text{ implies }\Tr\Big[B(\W;\B_{x,\tau r})\Big]\leq \tau^{n-\frac{1}{2}}\Tr\Big[B(\W;\B_{x,r})\Big].\]
\end{lemma}
\begin{proof}
The proof follows the same steps as in \cite{CK16}, so we only sketch it. Consider a sequence of functions $\W^i\in \Uk$, sequences $r_i,\eps_i\to 0$, and some $\tau\in ]0,1[$ to be fixed later, such that:
\begin{align}
\Hs(J_{\W^i}\cap \B_{r_i})&=\eps_i r_i^{n-1},\\
\Tr\Big[B(\W^i;\B_{r_i})\Big]&\geq r_i^{n-\frac{1}{2}},\\ \label{Absurd}
\Tr\Big[B(\W^i;\B_{\tau r_i})\Big]&\geq \tau^{n-\frac{1}{2}}\Tr\Big[B(\W^i;\B_{r_i})\Big].
\end{align}
And let
\[\V^i(x)=\frac{\W^i\left(x/r_i\right)}{\sqrt{r_i^{2-n}\Tr\Big[B(\W^i;\B_{r_i})\Big]}}.\]
Then, since $\int_{\B_1}|\nabla\V^i|^2\mathrm{d}\Ln\leq 1$ and $\Hs(J_{\V^i}\cap\B_1)=\eps_i\to 0$, there exist sequences $\tau_i^-<m_i<\tau_i^+$ such that the function
$\tilde{\V}^i:=\min(\max(\V^i,\tau_i^-),\tau_i^+)$ (where $\min$ and $\max$ are taken componentwise) verifies:
\begin{align*}
\Vert \tilde{\V}^i-m_i\Vert_{L^\frac{2n}{n-2}(\B_1)}&\leq C_n\Vert \nabla\V^i\Vert_{L^2(\B_1)}&\left(\leq 1\right),\\
\Ln(\{\tilde{\V^i}\neq \V^i\})&\leq C_n\Hs(J_{\V^i}\cap \B_1)^\frac{n}{n-1}&\left(=C_n\eps_i^\frac{n}{n-1}\right).
\end{align*}
One may prove (using a $\mathrm{BV}$ bound and an $L^{\frac{2n}{n-2}}$ bound) that $\tilde{\V}^i-m_i$ converges in $L^2$, with lower semicontinuity of the Dirichlet energy, to some $\V\in H^1(\B_1)$. We claim that $\V$ is harmonic, as a consequence of \eqref{EstOpt}: consider a function $\boldsymbol{\varphi}\in H^1(\B_1)^k$ that coincides with $\V$ outside a ball $\B_\rho$ for some $\rho<1$. Let $\rho'\in ]\rho,1[$ and let $\eta\in \mathcal{C}^\infty_{\text{compact}}(\B_{\rho'},[0,1])$ be such that $\eta=1$ on $\B_\rho$ and $|\nabla\eta|\leq 2(\rho'-\rho)^{-1}$. Then we define
\begin{align*}
\boldsymbol{\varphi}^i&= (m_i+\boldsymbol{\varphi})\eta+\tilde{\V}^i(1-\eta)1_{\B_{\rho'}}+\V^i 1_{\Rn\setminus\B_{\rho'}},\\
\boldsymbol{\Phi}^i(x)&=\sqrt{r_i^{2-n}\Tr\Big[B(\W^i;\B_{r_i})\Big]}\boldsymbol{\varphi}^i(x/r_i).
\end{align*}
$\boldsymbol{\Phi}^i$ coincides with $\W^i$ outside of a ball of radius $\rho' r_i$, so it may be compared to $\W^i$ using the optimality condition \eqref{EstOpt}. With the same computations as in \cite{CK16} we obtain, as $\rho\nearrow \rho'$, that
\[E_0\Big[B(\boldsymbol{\varphi};\B_{\rho'})-B(\V;\B_{\rho'})\Big]\geq 0.\]
Taking $\boldsymbol{\varphi}$ to be the harmonic extension of $\V_{|\partial \B_\rho}$ in $\B_\rho$, we find that $B(\boldsymbol{\varphi};\B_{\rho'})\leq B(\V;\B_{\rho'})$, with equality if and only if $\V$ coincides with its harmonic extension. If it does not, then \[E_0\Big[B(\boldsymbol{\varphi};\B_{\rho'})-B(\V;\B_{\rho'})\Big]< 0,\]
which contradicts the previous inequality. This means that the components of $\V$ are harmonic. Since $\int_{\B_1}|\nabla \V|^2\mathrm{d}\Ln\leq 1$, the mean value property applied to the subharmonic function $|\nabla\V|^2$ gives $|\nabla \V|\leq \sqrt{1/|\B_{1/2}|}$ on $\B_{1/2}$, so for any small enough $\tau$ (for instance $\tau<4^{-n}$) we find that $\int_{\B_\tau}|\nabla \V|^2\mathrm{d}\Ln<\tau^{n-\frac{1}{2}}$; this contradicts the condition \eqref{Absurd}.
\end{proof}
The decay lemma implies the existence of $r_1,\eps_1>0$ such that for any $x\in J_\U^{\text{reg}}$ and $r\in ]0,r_1[$:
\begin{equation}\label{Ahlfors}
\Hs(J_\U\cap \B_{x,r})\geq\eps_1 r^{n-1}.
\end{equation}
Suppose indeed that this is not the case for some $x\in J_{\U}$. Let $\tau_0\in ]0,1[$ be small enough to apply Lemma \ref{DecayLemma}. Then for a small enough $\tau_1$,
\[\Tr\Big[B(\U;\B_{x,\tau_1 r})\Big]\leq \delta^2\eps(\tau_0) (\tau_1r)^{n-1}.\]
Indeed, either $\Tr\Big[B(\U;\B_{x, r})\Big]$ is less than $r^{n-\frac{1}{2}}$, in which case this is direct provided we take $r_1<\delta^4\eps(\tau_0)^2\tau_1^{2(n-1)}$, or it is not, and then by applying the lemma (and using the fact that $\Tr\Big[B(\U;\B_{x,r})\Big]\leq C(\U)r^{n-1}$, which is obtained by comparing $\U$ with $\U1_{\Rn\setminus \B_{x,r}}$) we get
\[\Tr\Big[B(\U;\B_{x,\tau_1r})\Big]\leq C(\U)\tau_1^{n-\frac{1}{2}}r^{n-1}\leq \delta^2\eps(\tau_0)(\tau_1 r)^{n-1},\]
provided we choose $\tau_1\leq C(\U)^{-2}\delta^4 \eps(\tau_0)^2$ (and $\eps_1=\eps(\tau_1),r_1<\overline{r}(\tau_1)$ so that the lemma may be applied). Then we may show by induction that for all $k\in\mathbb{N}$,
\begin{equation}\label{Induction}
\Tr\Big[B(\U;\B_{x,\tau_0^{k}\tau_1r})\Big]\leq \delta^2\eps(\tau_0)\tau_0^{k(n-\frac{1}{2})}(\tau_1r)^{n-1}.\end{equation}
Indeed, \eqref{Induction} implies that $\Hs(J_{\U}\cap \B_{x,\tau_0^k\tau_1 r})\leq \eps(\tau_0)(\tau_0^k\tau_1r)^{n-1}$, so with the same dichotomy as above we may apply Lemma \ref{DecayLemma} again, which yields \eqref{Induction} at the next step.\bigbreak
Overall this means that $\frac{1}{\rho^{n-1}}\left(\int_{\B_{x,\rho}}|\nabla\U|^2\mathrm{d}\Ln +\Hs(J_\U\cap\B_{x,\rho})\right)\underset{\rho\to 0}{\rightarrow}0$, which is not the case when $x\in J_\U$ (see \cite{DCL89}, Theorem 3.6), so \eqref{Ahlfors} holds. By definition it also holds for $x\in \overline{J_{\U}}$ with a smaller constant; moreover, according to \cite{DCL89}, Lemma 2.6, $\Hs$-almost every $x$ such that $\liminf_{r\to 0}\frac{\Hs(J_{\U}\cap\B_{x,r})}{r^{n-1}}>0$ belongs to $J_{\U}$, which concludes the proof.
\end{proof}
As a consequence of the existence of a relaxed minimizer and of the regularity of relaxed minimizers, we obtain Theorem \ref{main1}.
\begin{proof}
We know from Proposition \ref{existence} that there exists a relaxed minimizer $\U$ of $\F$ in $\Uk(m)$, and from Lemma \ref{PenalizationLemma} that $\U$ is an internal relaxed minimizer of $\F_\gamma$ for some $\gamma>0$ that only depends on the parameters. From Proposition \ref{apriori} we obtain that for certain constants $\delta,M,R>0$ depending only on the parameters, $\delta 1_{\{\U\neq 0\}}\leq |\U|\leq M$ and the diameter of the support of $\U$ (up to translation of its components) is less than $R$. From Proposition \ref{regularity} we know that $\Hs(\overline{J_\U}\setminus J_\U)=0$. Since $|\U|\geq \delta 1_{\{\U\neq 0\}}$, we obtain
\begin{align*}
\Hs(\overline{J_\U})&=\Hs(J_\U)\leq \delta^{-2}\int_{J_{\U}}\left(|\overline{\U}|^2+|\underline{\U}|^2\right)\mathrm{d}\Hs\\
&\leq \beta^{-1}\delta^{-2}\left(\lambda_1(\U;\beta)+\hdots+\lambda_k(\U;\beta)\right)\leq C(n,m,\beta,F).
\end{align*}
Let $\Om$ be the union of the connected components of $\R^n\setminus \overline{J_\U}$ on which $\U$ is not zero almost everywhere. By definition $\partial\Om=\overline{J_{\U}}$, and $\U$ is continuous on $\Rn\setminus J_\U$ and does not take the values $\pm\frac{\delta}{2}$, thus $|\U|\geq \delta$ on $\Om$. In particular, $\{\U\neq 0\}$ and $\Om$ differ by an $\Ln$-negligible set, and $J_\U\subset\partial\Om$, so $\U_{|\Om}\in H^1(\Om)^k$. This means that for every $i=1,\hdots,k$, $\lambda_i(\Om;\beta)\leq \lambda_i(\U;\beta)$, so $\Om$ is optimal for $\F$.\bigbreak
In the proof of Proposition \ref{regularity} we obtained the existence of constants $\eps_1,r_1>0$ such that for every $x\in\partial\Om\,(=\overline{J_\U})$ and $r<r_1$, $\Hs(\B_{x,r}\cap\partial\Om)\geq \eps_1r^{n-1}$. By comparing $\U$ with $\U1_{\Rn\setminus \B_{x,r}}$ (similarly to what was done in the proof of Proposition \ref{apriori}), we obtain the upper bound $\Hs(\B_{x,r}\cap\partial\Om)\leq Cr^{n-1}$; this concludes the proof.
\end{proof}
\section{The functional $\Om\mapsto\lambda_k(\Om;\beta)$}
We are now interested in the specific functional
\[\Om\mapsto \lambda_k(\Om;\beta).\]
While it is not covered by the previous existence result, relaxed minimizers of this functional were shown to exist in \cite{BG19}. To understand its regularity, it might be tempting to consider a sequence of relaxed minimizers for the functions $F(\lambda_1,\hdots,\lambda_k)=\lambda_k+\eps (\lambda_1+\hdots+\lambda_{k-1})$ with $\eps\to 0$; however, while the $L^\infty$ bound does not depend on $\eps$, the lower bound does, and it seems to degenerate to $0$ as $\eps$ goes to $0$.\\
This prevents us from obtaining any regularity of relaxed minimizers of this functional. We are, however, able to treat the specific case where the $k$-th eigenvalue is simple, and this analysis allows us to prove that this does not happen in general. In particular, we shall prove that $\lambda_k(\U;\beta)=\lambda_{k-1}(\U;\beta)$.\\
\subsection{Regularization and perturbation lemma}
We begin with a density result that allows us to suppose without loss of generality that $\U$ is bounded in $L^\infty$. This relies on the same procedure as \cite[Theorem 4.3]{BG19}.\\
We recall the notation for admissible functions used previously:
\[\Uk(m)=\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|=m\right\},\]
as well as the fact that if $\U$ is a relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$, then according to Lemma \ref{PenalizationLemma} there is a constant $\gamma>0$ such that $\U$ is a minimizer of
\[\V\mapsto \lambda_k(\V;\beta)+\gamma|\{\V\neq 0\}|\]
among linearly independent functions $\V$ such that $|\{\V\neq 0\}|\in ]0,m]$.
\begin{lemma}\label{LemmaApriori}
Let $\U=(u_1,\hdots,u_k)$ be a relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$. Suppose that $\lambda_k(\U;\beta)>\lambda_{k-1}(\U;\beta)$. Then there exists another minimizer $\V\in \Uk(m)$ that is linearly independent and normalized, such that $v_1\geq 0$, $\V\in L^\infty(\Rn)$, and
\[\lambda_{k-1}(\V;\beta)<\lambda_{k}(\V;\beta).\]
\end{lemma}
This justifies that in all the following propositions we may suppose that $\U\in L^\infty(\Rn)$ without loss of generality.
\begin{proof}
Without loss of generality suppose that $\U$ is normalized. Then according to \cite{BG19}, which itself relies on the Cortesani-Toader regularization (see \cite{CT99}), there exists a sequence of bounded polyhedral domains $(\Om^p)$ along with a sequence $\U^p\in \Uk\cap H^1(\Om^p)^k$ such that $\U^p\underset{p\to\infty}{\rightarrow}\U$ in $L^2$, and
\[\limsup_{p\to\infty}B(\U^p)\leq B(\U),\ \limsup_{p\to\infty}|\Om^p|\leq |\left\{\U\neq 0\right\}|.\]
Let $\V^p=(v_1^p,\hdots,v_k^p)$ be the first $k$ eigenfunctions of $\Om^p$ (with an arbitrary choice in case of multiplicity; notice that $v_1^p$ may be chosen positive); then $B(\V^p)\leq B(\U^p)$, and by Moser iteration $\V^p$ is bounded in $L^\infty$ by $C_{n,\beta,m}\lambda_k(\U;\beta)^\frac{n}{2}$ (which, in particular, does not depend on $p$). Using the compactness result \ref{CompactnessLemma}, we find that, up to a subsequence, $\V^p$ converges in $L^2$ and almost everywhere to some $\V\in \Uk$, with lower semi-continuity of the Dirichlet-Robin energy; thus $\V$ is a minimizer belonging to $L^\infty$ with $v_1\geq 0$. Moreover,
\[\lambda_{k-1}(\V;\beta)\leq \liminf_{p\to\infty}\lambda_{k-1}(\V^p;\beta)\leq \liminf_{p\to\infty}\lambda_{k-1}(\U^p;\beta)\leq \lambda_{k-1}(\U;\beta)<\lambda_{k}(\U;\beta)\leq \lambda_k(\V;\beta).\]
\end{proof}
\begin{lemma}\label{LemmaPerturb}
Let $\U=(u_1,\hdots,u_k)\in \Uk(m)\cap L^\infty(\Rn)$ be an internal relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$, which we suppose to be normalized. Suppose that $\lambda_k(\U;\beta)=\lambda_{k-l+1}(\U;\beta)>\lambda_{k-l}(\U;\beta)$. Then there exist $\delta,\gamma>0$ such that, for all $\om\subset \mathbb{R}^n$ satisfying
\[|\om|+\Per(\om;\Rn\setminus J_\U)<\delta,\]
there exists $\alpha\in (\left\{ 0\right\}^{k-l}\times \mathbb{R}^l)\cap\mathbb{S}^{k-1}$ such that
\begin{equation}\label{EstPerturb}
\int_{\om}|\nabla u_\alpha|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_\alpha1_{\om}}^2+\underline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs+\gamma|\om|\leq 2\beta \int_{\partial^*\om\setminus J_\U}u_\alpha^2\mathrm{d}\Hs+2\lambda_k(\U;\beta)\int_{\om}u_\alpha^2\mathrm{d}\Ln.\end{equation}
\end{lemma}
As may be seen in the proof, the factors $2$ on the right-hand side may be replaced by $1+\underset{\delta\to 0}{o}(1)$; however, this will not be useful for us.\\
This result will only be applied in the particular case $l=1$: when $l>1$ it gives only weak information on the eigenspace of $\lambda_k(\U;\beta)$, and it would be interesting to see whether the regularity of one of the eigenfunctions might be deduced from it, as was done in \cite{BMPV15} (for the same problem with Dirichlet boundary conditions). In that case better estimates were obtained by perturbing the functional into $(1-\eps)\lambda_{k}+\eps\lambda_{k-1}$, considering a minimizer $\Om^\eps$ that contains the minimizer $\Om$ of $\lambda_k$, and separating the cases where $\lambda_k(\Om^\eps)$ is simple or not. However, these arguments crucially use the monotonicity and scaling properties of $\lambda_i$, which are not available for Robin boundary conditions.
\begin{proof}
Let us write $\V=\U 1_{\mathbb{R}^n\setminus\om}$, $A=A(\V)$, $B=B(\V)$, and for any $\xi,\zeta\in\R^k$,
\begin{align*}
A_{\xi,\zeta}&=\sum_{i,j=1}^k \xi_i \zeta_j A_{i,j},\\
B_{\xi,\zeta}&=\sum_{i,j=1}^k \xi_i \zeta_j B_{i,j}.
\end{align*}
We study the quantity
\[\lambda_k(\V;\beta)=\max_{\alpha\in\mathbb{S}^{k-1}}\frac{B_{\alpha,\alpha}}{A_{\alpha,\alpha}}.\]
Due to the $L^\infty$ bound on $\U$ and the fact that $|\om|+\Per(\om;\Rn\setminus J_\U)\leq \delta$:
\begin{align*}
\inf_{\alpha\in \left(\left\{0\right\}^{k-l}\times \R^{l}\right)\cap\mathbb{S}^{k-1}}\frac{B_{\alpha,\alpha}}{A_{\alpha,\alpha}}&\underset{\delta\to 0}{\longrightarrow}\lambda_{k}(\U;\beta),\\
\sup_{\eta\in\left(\R^{k-l}\times \left\{0\right\}^{l}\right)\cap\mathbb{S}^{k-1}}\frac{B_{\eta,\eta}}{A_{\eta,\eta}}&\underset{\delta\to 0}{\longrightarrow}\lambda_{k-l}(\U;\beta)\ (<\lambda_{k}(\U;\beta)).
\end{align*}
Thus, for a small enough $\delta$, the maximum above is attained at a certain $\frac{\alpha+t\eta}{\sqrt{1+t^2}}$, where $\alpha\in \left(\left\{0\right\}^{k-l}\times \R^{l}\right)\cap\mathbb{S}^{k-1}$, $\eta\in\left(\R^{k-l}\times \left\{0\right\}^{l}\right)\cap\mathbb{S}^{k-1}$ and $t\in\R$. In what follows $\alpha$ and $\eta$ are fixed, and so
\[\lambda_k(\V;\beta)=\max_{t\in\mathbb{R}}\frac{B_{\alpha,\alpha}+2tB_{\alpha,\eta}+t^2B_{\eta,\eta}}{A_{\alpha,\alpha}+2tA_{\alpha,\eta}+t^2A_{\eta,\eta}}.\]
We let
\begin{align*}
b_{\alpha,\eta}&=\frac{B_{\alpha,\eta}}{B_{\alpha,\alpha}},\ &b_{\eta,\eta}=\frac{B_{\eta,\eta}}{B_{\alpha,\alpha}},\\
a_{\alpha,\eta}&=\frac{A_{\alpha,\eta}}{A_{\alpha,\alpha}},\ &a_{\eta,\eta}=\frac{A_{\eta,\eta}}{A_{\alpha,\alpha}},
\end{align*}
\[F(t)=\frac{1+2tb_{\alpha,\eta}+t^2b_{\eta,\eta}}{1+2ta_{\alpha,\eta}+t^2a_{\eta,\eta}}.\]
Then we may rewrite
\begin{equation}\label{eqlambdak}
\lambda_k(\V;\beta)=\frac{B_{\alpha,\alpha}}{A_{\alpha,\alpha}}\max_{t\in\mathbb{R}}F(t).\end{equation}
Moreover,
\[a_{\eta,\eta}\underset{\delta\to 0}{\longrightarrow}1,\ \limsup_{\delta\to 0}b_{\eta,\eta}\leq \frac{\lambda_{k-l}(\U;\beta)}{\lambda_{k}(\U;\beta)}<1.\]
We look for the critical points of $F$; $F'(t)$ has the same sign as
\[(a_{\alpha,\eta} b_{\eta,\eta}-a_{\eta,\eta} b_{\alpha,\eta})t^2-(a_{\eta,\eta}-b_{\eta,\eta})t+(b_{\alpha,\eta}-a_{\alpha,\eta}).\]
Since $F$ has the same limit at $\pm\infty$, this polynomial has two real roots, given by:
\[t^{\pm}=\frac{a_{\eta,\eta}-b_{\eta,\eta}}{2(a_{\alpha,\eta}b_{\eta,\eta} - a_{\eta,\eta} b_{\alpha,\eta})}\left(1\pm\sqrt{1-4\frac{(b_{\alpha,\eta}-a_{\alpha,\eta})(a_{\alpha,\eta}b_{\eta,\eta} - a_{\eta,\eta} b_{\alpha,\eta})}{(a_{\eta,\eta}-b_{\eta,\eta})^2}}\right).\]
Since $F'$ has the same sign as $a_{\alpha,\eta} b_{\eta,\eta}-a_{\eta,\eta} b_{\alpha,\eta}$ at $\pm\infty$, the maximum of $F$ is attained at $t^{-}$. For any small enough $\delta$ we obtain
\[|t^{-}|\leq C_1|a_{\alpha,\eta}-b_{\alpha,\eta}|,\]
where $C_1$ only depends on $\lambda_k(\U;\beta)$ and $\lambda_{k-l}(\U;\beta)$. Evaluating $F$ at $t^-$ we obtain, for small enough $\delta$,
\[F(t^-)\leq 1+C_2(A_{\alpha,\eta}^2+B_{\alpha,\eta}^2),\]
where $C_2$ is another such constant. With the Cauchy-Schwarz inequality we obtain
\begin{align*}
A_{\alpha,\eta}^2&=\underset{\delta\to 0}{o}\left(\int_{\om}u_\alpha^2\mathrm{d}\Ln\right),\\
B_{\alpha,\eta}^2&=\underset{\delta\to 0}{o}\left(\int_{\om}|\nabla u_\alpha|^2+\beta \int_{J_\U\cup \partial^*\om}\left(\underline{u_\alpha1_{\om}}^2+\overline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs\right).
\end{align*}
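To justify the bounds on $t^-$ and $F(t^-)$ above, here is a brief sketch (with the constants as above). As $\delta\to 0$ we have $a_{\alpha,\eta},b_{\alpha,\eta}\to 0$, $a_{\eta,\eta}\to 1$ and $\limsup b_{\eta,\eta}<1$, so the argument of the square root in $t^{\pm}$ is of the form $1-4x$ with $x\to 0$; using $1-\sqrt{1-4x}=2x+O(x^2)$ we get
\[t^-=\frac{b_{\alpha,\eta}-a_{\alpha,\eta}}{a_{\eta,\eta}-b_{\eta,\eta}}\left(1+o(1)\right),\]
whence the bound on $|t^-|$. A Taylor expansion of $F$ at $0$, with $F(0)=1$ and $F'(0)=2(b_{\alpha,\eta}-a_{\alpha,\eta})$, then gives $F(t^-)\leq 1+C(a_{\alpha,\eta}-b_{\alpha,\eta})^2$, and since $A_{\alpha,\alpha}$ and $B_{\alpha,\alpha}$ stay bounded away from $0$, this is controlled by $1+C_2(A_{\alpha,\eta}^2+B_{\alpha,\eta}^2)$.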
Moreover,
\begin{align*}
B_{\alpha,\alpha}&=B(\U)_{\alpha,\alpha}-\int_{\om}|\nabla u_\alpha|^2\mathrm{d}\Ln-\beta\int_{J_\U}\left(\underline{u_\alpha1_{\om}}^2+\overline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs+\int_{\partial^*\om\setminus J_\U}u_\alpha^2 \mathrm{d}\Hs,\\
A_{\alpha,\alpha}&=1-\int_{\om}u_\alpha^2\mathrm{d}\Ln.\end{align*}
Thus, for a small enough $\delta$, we obtain the following estimate from \eqref{eqlambdak}:
\begin{align*}
\left(1-\int_{\om}u_\alpha^2\mathrm{d}\Ln\right)\lambda_k(\V;\beta)&\leq B(\U)_{\alpha,\alpha}-(1-\underset{\delta\to 0}{o}(1))\left(\int_{\om}|\nabla u_\alpha|^2\mathrm{d}\Ln+\beta\int_{J_\U}\left(\underline{u_\alpha1_{\om}}^2+\overline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs\right)\\
&+(1+\underset{\delta\to 0}{o}(1))\int_{\partial^*\om\setminus J_\U}u_\alpha^2 \mathrm{d}\Hs+\underset{\delta\to 0}{o}\left(\int_{\om}u_\alpha^2\mathrm{d}\Ln\right).\end{align*}
The optimality condition on $\U$ (namely $\lambda_k(\U;\beta)+\gamma |\om|\leq \lambda_k(\V;\beta)$ for a certain $\gamma>0$ that does not depend on $\om$, obtained through Lemma \ref{PenalizationLemma}), coupled with the fact that $\lambda_k(\U;\beta)=B(\U)_{\alpha,\alpha}$, gives the estimate \eqref{EstPerturb} for any small enough $\delta$.
\end{proof}
\subsection{Non-degeneracy lemma and the main result}
\begin{proposition}
Let $\U=(u_1,\hdots,u_k)\in \Uk\cap L^\infty(\Rn)$ be an internal relaxed minimizer of $\V\mapsto\lambda_k(\V;\beta)+\gamma|\left\{ \V\neq 0\right\}|$. Suppose $n\geq 3$ and $\lambda_k(\U;\beta)>\lambda_{k-1}(\U;\beta)$. Then there exists $c>0$ such that $|u_k|\geq c 1_{\left\{ u_k\neq 0\right\}}$.
\end{proposition}
\begin{proof}
We actually prove that there exist $r,t>0$ such that for any $x\in\Rn$, $|u_k|\geq t 1_{\B_{x,r}\cap\left\{ u_k\neq 0\right\}}$, which is sufficient to conclude. We suppose $x=0$ to simplify the notation. We cannot proceed as in the proof of Proposition \ref{apriori} because we do not know whether $\Per(\left\{|u_k|>t\right\};\Rn\setminus J_\U)$ is less than a constant $\delta$ or not. The idea is to compare $\U$ with $\U1_{\mathbb{R}^n\setminus \om_t}$ where
\[\om_t=\B_{r(t)}\cap\left\{ |u_k|\leq t\right\},\]
for $t>0$ and $r(t)>0$ chosen sufficiently small so that $\Per(\om_t;\Rn\setminus J_\U)$ is sufficiently small.
\begin{lemma}\label{Lemma_Estimate}
Under these circumstances, there exists $t_1>0$ such that for all $t<t_1$,
\begin{equation}\label{Estimate}
\int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs+\frac{1}{2}\gamma|\om_{t}|\leq 2\beta \int_{\partial^*\om_{t}\setminus J_\U}u_k^2\mathrm{d}\Hs,\end{equation}
where $\om_t=\left\{|u_k|\leq t\right\}\cap \B_{r(t)}$ with $r(t):=\eps t^{\frac{2}{n-1}}$ for a small enough $\eps>0$.
\end{lemma}
\begin{proof}
As said previously, this estimate will be obtained by comparing $\U$ and $\U1_{\Rn\setminus \om_t}$, where $\om_t=\B_{r(t)}\cap\left\{ |u_k|\leq t\right\}$. This is direct if we can apply Lemma \ref{LemmaPerturb}; we only need to check the hypothesis
\[\Hs(\partial^*\om_t\setminus J_\U)< \delta.\]
We may suppose that
\[\beta t^2\Hs(\partial^*\left\{|u_k|\leq t\right\}\cap\B_{r(t)}\setminus J_\U)\leq \int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs+\gamma|\om_{t}|,\]
since if this inequality fails then \eqref{Estimate} follows directly, as $|u_k|=t$ on $\partial^*\left\{|u_k|\leq t\right\}$. Then, comparing $\U$ with $\U1_{\Rn\setminus \B_{r(t)}}$ through Lemma \ref{LemmaPerturb} (which is allowed for any small enough $r>0$), we obtain the estimate
\begin{align*}
\int_{\B_{r(t)}}|\nabla u_k|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_k1_{\B_{r(t)}}}^2+\underline{u_k1_{\B_{r(t)}}}^2\right)\mathrm{d}\Hs+\frac{1}{2}\gamma|\B_{r(t)}|&\leq 2\beta \int_{\partial \B_{r(t)}\setminus J_\U}u_k^2\mathrm{d}\Hs\\
&\leq C(n,\beta,\Vert u_k\Vert_{L^\infty})r(t)^{n-1}.
\end{align*}
Combining the two previous inequalities,
\begin{align*}\Hs(\partial^*\left\{|u_k|\leq t\right\}\cap\B_{r(t)}\setminus J_\U)&\leq C(n,\beta,\Vert u_k\Vert_{L^\infty})\frac{r(t)^{n-1}}{t^2}\\
&=C(n,\beta,\Vert u_k\Vert_{L^\infty})\eps^{n-1}\\
&\leq \frac{1}{2}\delta\text{ for a small enough }\eps.\end{align*}
And so Lemma \ref{LemmaPerturb} may be applied, concluding the proof.
\end{proof}
We introduce the sets
\[\om^{\text{sup}}=\left\{ x:|u_k(x)|\geq |x/\eps|^{\frac{n-1}{2}}\right\},\ \om^{\text{inf}}=\left\{ x:|u_k(x)|\leq |x/\eps|^{\frac{n-1}{2}}\right\}\]
and the function
\[f(t)=\int_{\om_{t}}\left(|\nabla u_k|1_{\om^\text{sup}}+1_{\om^{\text{inf}}}\right)|u_k|\mathrm{d}\Ln.\]
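As a quick consistency check (with $r(t)=\eps t^{\frac{2}{n-1}}$ as above), the two regimes defining $\om^{\text{sup}}$ and $\om^{\text{inf}}$ meet exactly at the level $|u_k|=t$ on the sphere $\partial\B_{r(t)}$, since
\[\left(\frac{r(t)}{\eps}\right)^{\frac{n-1}{2}}=\left(t^{\frac{2}{n-1}}\right)^{\frac{n-1}{2}}=t.\]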
From the coarea formula we get
\[f(t)=\int_0^t \left(\int_{\partial^*\left\{|u_k|\leq \tau\right\}\cap\B_{r(\tau)}\setminus J_\U}|u_k|\mathrm{d}\Hs\right)\mathrm{d}\tau+\int_{0}^{r(t)}\left(\int_{\partial\B_r\cap\left\{ |u_k|\leq (r/\eps)^{\frac{n-1}{2}}\right\}\setminus J_\U}|u_k|\mathrm{d}\Hs\right)\mathrm{d}r.\]
So $f$ is absolutely continuous and
\[f'(t)=\int_{\partial^*\{|u_k|\leq t\}\cap\B_{r(t)}\setminus J_\U}|u_k|\mathrm{d}\Hs+\frac{2\eps}{n-1}t^{-\frac{n-3}{n-1}}\int_{\partial\B_{r(t)}\cap \left\{ |u_k|\leq t\right\}\setminus J_\U}|u_k|\mathrm{d}\Hs.\]
We use here the fact that $n\geq 3$, so that for all small enough $t$ we get
\[\frac{1}{\eps}f'(t)\geq\int_{\partial^*\{|u_k|\leq t\}\cap\B_{r(t)}\setminus J_\U}|u_k|\mathrm{d}\Hs+\int_{\partial\B_{r(t)}\cap \left\{ |u_k|\leq t\right\}\setminus J_\U}|u_k|\mathrm{d}\Hs.\]
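Indeed, the coefficient in front of the second boundary integral is
\[\frac{1}{\eps}\cdot\frac{2\eps}{n-1}\,t^{-\frac{n-3}{n-1}}=\frac{2}{n-1}\,t^{-\frac{n-3}{n-1}},\]
which equals $1$ when $n=3$ and tends to $+\infty$ as $t\to 0$ when $n>3$; in both cases it is at least $1$ for all small enough $t$, while the first integral simply gains the factor $\frac{1}{\eps}\geq 1$ for $\eps\leq 1$.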
We now estimate $f$ in a similar manner as in the proof of Proposition \ref{apriori}.
\begin{align*}
c_n\left(\int_{\om_{t}}|u_k|^{2\frac{n}{n-1}}\mathrm{d}\Ln\right)^{\frac{n-1}{n}}&\leq D(|u_k|^21_{\om_{t}})(\mathbb{R}^n)\\
&=\int_{\om_{t}}2|u_k\nabla u_k|\mathrm{d}\Ln +\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs\\
&\quad+\int_{\partial^*\om_{t}\setminus J_\U}|u_k|^2\mathrm{d}\Hs\\
&\leq |\om_{t}|+\int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln +\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs\\
&\quad+\int_{\partial^*\om_{t}\setminus J_\U}|u_k|^2\mathrm{d}\Hs\\
&\leq C_{\beta,\gamma}\int_{\partial^*\om_{t}\setminus J_\U}|u_k|^2\mathrm{d}\Hs\\
&\leq \frac{C_{\beta,\gamma}}{\eps}tf'(t).
\end{align*}
We used Lemma \ref{Lemma_Estimate} in the penultimate line, which is only valid for small enough $t$. The hypothesis $n\geq 3$ was used in the last line. Finally,
\begin{align*}
f(t)&=\int_{\om_{t}}\left(|\nabla u_k|1_{\om^\text{sup}}+1_{\om^{\text{inf}}}\right)|u_k|\mathrm{d}\Ln\\
&\leq |\om_{t}|^{\frac{1}{2n}}\left(\int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln\right)^{\frac{1}{2}}\left(\int_{\om_{t}}|u|^{2\frac{n}{n-1}}\mathrm{d}\Ln\right)^{\frac{n-1}{2n}}+\gamma|\om_{t}|^{\frac{n+1}{2n}}\left(\int_{\om_{t}}|u_k|^{2\frac{n}{n-1}}\mathrm{d}\Ln\right)^{\frac{n-1}{2n}}\\
&\leq C_{n,\beta,\gamma}\left(tf'(t)\right)^{\frac{2n+1}{2n}},
\end{align*}
which implies that $f(t)=0$ for a certain $t>0$: indeed, if $f>0$ on an interval $]0,t_1[$, the inequality may be rewritten as $\frac{\mathrm{d}}{\mathrm{d}t}\left(f^{\frac{1}{2n+1}}\right)\geq \frac{c}{t}$ there, and integrating between $s$ and $t_1$ yields a contradiction as $s\to 0$, since $f$ is bounded near $0$. Let $r=\eps t^{\frac{n-1}{2}}$; we show that $|u_k|\geq t 1_{\B_{x,r}\cap\left\{ u_k\neq 0\right\}}$. From $f(t)=0$ we get that $u_k=0$ on $\B_r\cap \left\{ x:|u_k(x)|\leq |x/\eps|^{\frac{n-1}{2}}\right\}$. In particular, up to reducing $r$ and $t$ slightly, we may suppose
\[\Hs(\partial\B_r\cap \left\{ |u_k|\leq t\right\})=0.\]
Moreover, $f(t)=0$ also gives that $\nabla u_k=0$ on $\B_r\cap\left\{ 0<|u_k|\leq t\right\}$. Consider $\U'=\U1_{\mathbb{R}^n\setminus \om}$ where $\om=\B_r\cap\left\{ |u_k|\leq t\right\}$. Note that $J_{\U'}\subset J_{\U}$, and for any small enough $t>0$,
\[\lambda_k(\U';\beta)\leq \lambda_k(\U;\beta)+2t^2|\om|-\frac{1}{2}\beta\int_{J_\U}\left(\overline{u_k1_{\om}}^2+\underline{u_k1_{\om}}^2\right)\mathrm{d}\Hs.\]
This contradicts the minimality of $\U$ for $\lambda_k(\cdot;\beta)+\gamma|\left\{ \cdot\neq 0\right\}|$ as soon as $|\om|>0$. This concludes the proof.\bigbreak
\end{proof}
Note that the proof fails when $n=2$: we need to choose $r(t)\ll t^{\frac{2}{n-1}}$ to ensure that the competitor $\U1_{\Rn\setminus\om_t}$ yields information, but later we use $\inf_{t<1}r'(t)>0$ in a crucial way. When $n=2$ the inequalities are weakened and instead only yield $f(t)\geq c t^5$, which is not enough to conclude.\bigbreak
We now deduce the second main result as a consequence.
\begin{proposition}
Suppose $n\geq 3$ and $k\geq 2$. Let $\U=(u_1,\hdots,u_k)$ be a relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$. Then
\[\lambda_k(\U;\beta)=\lambda_{k-1}(\U;\beta).\]
\end{proposition}
\begin{proof}
Suppose that $\lambda_k(\U;\beta)>\lambda_{k-1}(\U;\beta)$. We may apply Lemma \ref{LemmaApriori} and assume without loss of generality that $u_1\geq 0$ and $\U\in L^\infty$, so that all the previous estimates apply.\\
Let $\Om$ be the support of $u_k$, with $\Om^+=\left\{u_k>0\right\}$ and $\Om^- =\left\{u_k<0\right\}$.\bigbreak
We first notice that $|\left\{\U\neq 0\right\}\setminus\Om|=0$. Suppose indeed that this is not the case, and let $\om=\left\{\U\neq 0\right\}\setminus\Om$. Since $|u_k|\geq \delta 1_{\Om}$, $\U$ may be written as a disconnected sum of two functions of $\Uk$:
\[\U=(\U1_\Om)\oplus(\U1_\om).\]
We may translate $\Om$ and $\om$ so that they have a positive distance from each other. Then consider $t>1$ and $s=s(t)<1$ chosen such that
\[|t\Om|+|s\om|=|\Om|+|\om|,\]
and let $\U_t$ be the function built by dilating $\U$ on $t\Om\cup s\om$. Then for $t=1+\eps$ with a small enough $\eps$ we have $\lambda_k(\U_t;\beta)<\lambda_k(\U;\beta)$ with a support of the same measure, which is absurd by minimality of $\U$. Thus $|\om|=0$.\bigbreak
Since $u_1$ is nonnegative, has support in $\Om$, and $\langle u_1,u_k\rangle_{L^2}=0$, this means that $|\Om^+|,|\Om^-|>0$. We may again decompose $\U$ into
\[\U=(\U1_{\Om^+})\oplus(\U1_{\Om^-}).\]
Consider $\V\in \Up(m)$, for some $p\in \{k,\hdots,2k\}$, a subfamily of the components of $(\U1_{\Om^+},\U1_{\Om^-})$ that spans the same subspace of $L^2(\R^n)$ and is linearly independent. Then for each $i\in\{1,\hdots,k\}$,
\[\lambda_i(\V;\beta)\leq \lambda_i(\U;\beta),\]
with equality for $i=k$ by optimality of $\U$. Since $A(\V)$ and $B(\V)$ are block diagonal, we may suppose $\V$ is normalized so that each of its components is supported in either $\Om^+$ or $\Om^-$: say $v_k$ is supported in $\Om^+$. This means that $\V=(v_1,\hdots,v_k)$ is a minimizer belonging to $L^\infty$ such that $\lambda_k(\V;\beta)>\lambda_{k-1}(\V;\beta)$, and by the previous arguments we know that, up to a negligible set, $\{\V\neq 0\}\subset\{v_k\neq 0\}$, thus $|\Om^-|=0$: this is a contradiction.
\end{proof}
\subsection{Discussion about the properties of open minimizers}
Here we make a few observations on the properties of minimizing open sets, provided such sets exist.
\begin{proposition}
Let $\Om$ be an open minimizer of $\lambda_k(\cdot;\beta)$ among open sets of measure $m$, for $k\geq 2$, with eigenfunctions $u_1,\hdots,u_k$. Suppose $\lambda_{k-l}(\Om;\beta)<\lambda_{k-l+1}(\Om;\beta)=\lambda_k(\Om;\beta)$. Then
\[\bigcap_{i=k-l+1}^k \left(u_i^{-1}(\left\{0\right\})\cap\Om\right)=\emptyset.\]
In particular, for $k=3$ and $n=2$, $\Om$ is not simply connected.
\end{proposition}
\begin{proof}
By contradiction, consider $x\in \bigcap_{i=k-l+1}^k \left(u_i^{-1}(\left\{0\right\})\cap\Om\right)$, and let $\U_r=(u_1,\hdots,u_k)1_{\R^n\setminus \B_{x,r}}$. For a small enough $r$, $\U_r$ is admissible and, with the same estimate as in Lemma \ref{LemmaPerturb}, there is an ($L^2$-normalized) eigenfunction $u_\alpha$ associated to $\lambda_k(\Om;\beta)$ such that
\[\int_{\B_{x,r}}|\nabla u_\alpha|^2\mathrm{d}\Ln+\gamma|\B_{x,r}|\leq 2\beta \int_{\partial\B_{x,r}}u_\alpha^2\mathrm{d}\Hs.\]
This implies that for any small enough $r>0$,
\[ \fint_{\partial\B_{x,r}}u_\alpha^2\mathrm{d}\Hs\geq \frac{r\gamma}{2n\beta}.\]
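Indeed, this follows from the previous inequality by dropping the gradient term and using $|\B_{x,r}|=\frac{r}{n}\,\Hs(\partial\B_{x,r})$:
\[\gamma\,\frac{r}{n}\,\Hs(\partial\B_{x,r})=\gamma|\B_{x,r}|\leq 2\beta \int_{\partial\B_{x,r}}u_\alpha^2\,\mathrm{d}\Hs.\]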
However, if $x$ belongs to the intersection of the nodal sets of all eigenfunctions associated to $\lambda_k(\Om;\beta)$, then, since these eigenfunctions are $\mathcal{C}^1$ and vanish at $x$, there is a constant $C>0$ such that for all $\alpha$, $|u_\alpha(y)|\leq C|x-y|$, thus
\[ \fint_{\partial\B_{x,r}}u_\alpha^2\mathrm{d}\Hs\leq C^2r^2,\]
which is a contradiction.\bigbreak
Let us now suppose that $n=2$, $k=3$, and that $\Om$ is simply connected. Since any eigenfunction associated to $\lambda_3(\Om;\beta)$ has a non-empty nodal set, we know that
\[\lambda_1(\Om;\beta)<\lambda_2(\Om;\beta)=\lambda_3(\Om;\beta).\]
Let $u_1,u,v$ be the associated eigenfunctions. Every non-trivial linear combination of $u$ and $v$ is an eigenfunction associated to $\lambda_2(\Om;\beta)$, so it has a non-empty nodal set and no more than two nodal domains; thus, by the simple connectedness of $\Om$, its nodal set is connected (either a closed curve or an arc) and the eigenfunction changes sign across its nodal set.\\
Let us parametrize the eigenspace with
\[w_t(x)=\cos(t)u(x)+\sin(t)v(x).\]
We show that the nodal sets $(\left\{w_t=0\right\})_{t\in \frac{\R}{\pi\mathbb{Z}}}$ form a partition of $\Om$ and that there is a continuous open function $T:\Om\to\frac{\R}{\pi\mathbb{Z}}$ such that $x\in \left\{w_{T(x)}=0\right\}$ for all $x\in\Om$. Indeed, the sets $(\left\{w_t=0\right\})_{t\in \frac{\R}{\pi\mathbb{Z}}}$ are disjoint because $u$ and $v$ have no common zeroes, and for any $x$ we may define
\[T(x)=-\text{arctan}\left(\frac{u(x)}{v(x)}\right),\]
where, by convention, $\text{arctan}(\infty)=\frac{\pi}{2}$ (modulo $\pi$). The function $T$ is continuous, $x\in \left\{w_{T(x)}=0\right\}$, and since eigenfunctions change sign across their nodal lines, $T$ is open. Since $\Om$ is simply connected, $T$ may be lifted into
\[\Om\underset{T'}{\longrightarrow}\mathbb{R}\underset{p}{\longrightarrow}\mathbb{R}/\pi\mathbb{Z}.\]
Let $I$ be the image of $T'$; since $T$ is open, so is $T'$, and hence $I$ is an open interval. If $T(x)=T(y)$, then $x$ and $y$ lie on the same nodal line, and since nodal lines are connected we get $T'(x)=T'(y)$. In particular, if $t\in I$, then $t\pm \pi\notin I$; this implies that $I=]a,b[$ with $a<b$ and $b-a\leq\pi$. However, every $w_t$ has a non-empty nodal set, so $\frac{\R}{\pi\mathbb{Z}}=T(\Om)=p(]a,b[)$, which is impossible since $p(]a,b[)$ misses the point $p(a)$; thus $\Om$ is not simply connected.
\end{proof}
\end{document}
\begin{document}
\title {Reconstruction of a source domain from the Cauchy data: II. Three dimensional case}
\author{Masaru IKEHATA\footnote{
Laboratory of Mathematics,
Graduate School of Advanced Science and Engineering,
Hiroshima University,
Higashihiroshima 739-8527, JAPAN}
\footnote{Emeritus Professor at Gunma University}
}
\maketitle
\begin{abstract}
This paper is concerned with reconstruction issue of some typical inverse problems and consists of three parts. First a framework of the enclosure method
for an inverse source problem governed by the Helmholtz equation
at a fixed wave number in three dimensions is introduced. It is based on the nonvanishing of the coefficient of the leading profile
of an oscillatory integral over a domain having a conical singularity.
Second an explicit formula of the coefficient for a domain having a circular cone singularity
and its implication under the framework are given.
Third, an application under the framework to an inverse obstacle problem
governed by an inhomogeneous Helmholtz equation at a fixed wave number in three dimensions is given.
\noindent
AMS: 35R30
$\quad$
\noindent
Key words: exponentially growing solution, enclosure method, inverse source problem, inverse obstacle problem, Helmholtz equation,
conical singularity, circular cone singularity
\end{abstract}
\section{Introduction}
More than twenty years ago, in \cite{Ik} the author obtained the extraction formula of the support function
of an unknown polygonal source domain in an inverse source problem governed by the Helmholtz equation
and of a polygonal penetrable obstacle in an inverse obstacle problem governed by an inhomogeneous Helmholtz equation.
All the problems considered therein are in two dimensions and employ only a single set of Cauchy data
of a solution of the governing equation at a fixed wave number in a bounded domain.
Those results can be considered as the first application of a single measurment version of
the {\it enclosure method} introduced in \cite{Ik0}.
Following \cite{Ik}, in \cite{IkC} the author found another, unexpected application
of the enclosure method to the Cauchy problem for the stationary Schr\"odinger equation
$$\displaystyle
-\Delta u+V(x)u=0
\tag {1.1}
$$
in a bounded domain $\Omega$ of ${\rm \bf R}^n$, $n=2,3$. Here $V\in L^{\infty}(\Omega)$ and both $u$ and $V$ can be complex valued functions.
We established an explicit representation or computation formula for an arbitrary solution $u\in H^2(\Omega)$ to the equation (1.1) in $\Omega$
in terms of its Cauchy data on a part of $\partial\Omega$. See also \cite{IS} for its numerical implementation.
Note also that the idea in \cite{IkC} has been applied
to an inverse source problem governed by the heat equation together with an inverse heat conduction problem
in \cite{IkH}, \cite{IkHC}, respectively.
The idea introduced therein is to make use of the complex geometrical optics solutions (CGO) with a large parameter $\tau$ for the modified equation
instead of (1.1):
$$\begin{array}{ll}
\displaystyle
-\Delta v+V(x)v=\chi_{D_y}(x)v, & x\in\Omega,
\end{array}
$$
where $y$ is a given point in $\Omega$, $D_y\subset\subset\Omega$ is the inside of a triangle, tetrahedron for $n=2,3$, respectively
with a vertex at $y$ and $\chi_{D_y}(x)$ is the characteristic function of $D_y$.
The solution is of the same type as the one constructed in \cite{SU1} for $n=2$ and \cite{SU2} for $n=3$, and has the following form as $\tau\rightarrow\infty$
$$\displaystyle
v\sim e^{x\cdot z},
$$
where $z=\tau(\omega+i\vartheta)$ and both $\omega$ and $\vartheta$ are unit vectors perpendicular to each other.
This right-hand side is just the complex plane wave used in the Calder\'on method \cite{C}.
Note that in \cite{IkS} another, simpler idea is presented, which makes use of the CGO solutions of the modified equation
$$\begin{array}{ll}
\displaystyle
-\Delta v+V(x)v=\chi_D(x)e^{x\cdot z}, & x\in\Omega.
\end{array}
$$
Using integration by parts, we reduced the problem of computing the value of $u$ at a given point $y$, essentially, to clarifying the leading profile of the following oscillatory integral
as $\tau\rightarrow\infty$:
$$\displaystyle
\int_{D_y} e^{x\cdot z}\,\rho(x)dx,
$$
where $\rho(x)$ is uniformly H\"older continuous on $\overline{D_y}$\footnote{In this case $\rho=u$.}.
Note that the asymptotic behaviour of this type of oscillatory integral in {\it two dimensions} is the key point of the enclosure method developed in \cite{Ik}.
In \cite{IkC} we clarified the leading profile in a more general setting as follows.
Given a pair $(p,\omega)\in{\rm \bf R}^n\times S^{n-1}$ and $\delta>0$ let $Q$ be an arbitrary non empty bounded open subset
of the plane $x\cdot\omega=p\cdot\omega-\delta$ with respect to the relative topology from ${\rm \bf R}^n$.
Define a bounded open subset of ${\rm \bf R}^n$ by the formula
$$\displaystyle
D_{(p,\omega)}(\delta,Q)
=\cup_{0<s<\delta}\,
\left\{p+\frac{s}{\delta}(z-p)\,\left\vert\right.\,z\in Q\,\right\}.
\tag {1.2}
$$
This is a cone with base $Q$ and apex $p$, lying in the slab $\{x\in{\rm \bf R}^n\,\vert\,p\cdot\omega-\delta<x\cdot\omega<p\cdot\omega\,\}$.
The number $\delta$, which is the distance from $p$ to the plane containing $Q$, is called the height. If $Q$ is given by the inside of a polygon, the cone (1.2) is called a {\it solid pyramid}.
In particular, if $Q$ is given by the inside of a triangle, the cone (1.2) becomes a tetrahedron.
In (2.2) of \cite{IkC} we introduced a special complex constant associated with the domain (1.2), which is given by
$$\displaystyle
C_{(p,\omega)}(\delta, Q,\vartheta)=2s\int_{Q_s}\frac{dS_z}{\{s-i(z-p)\cdot\vartheta\}^n},
\tag {1.3}
$$
where $i=\sqrt{-1}$, $0<s<\delta$ and $Q_s=D_{(p,\omega)}(\delta,Q)\cap\{x\in{\rm \bf R}^n\,\vert x\cdot\omega=p\cdot\omega-s\,\}$
and the direction $\vartheta\in S^{n-1}$ is perpendicular to $\omega$.
Note that in \cite{IkC} the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ is simply written as $C_D(\omega,\omega^{\perp})$ with $\omega^{\perp}=\vartheta$.
As pointed out therein, this quantity is independent of the choice of $s\in\,]0,\,\delta[$ because of
the one-to-one correspondence between $z\in Q_s$ and $z'\in Q_{s'}$ given by the formula
$$
\left\{
\begin{array}{l}
\displaystyle
z'=p+\frac{s'}{s}\,(z-p),
\\
\\
\displaystyle
dS_{z'}=(\frac{s'}{s})^{n-1}\,dS_z.
\end{array}
\right.
$$
The following lemma describes the relationship between the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ and an integral over the cone (1.2).
\proclaim{\noindent Proposition 1.1 (Lemma 2 in \cite{IkC}).} Let $n=2, 3$.
Let $D=D_{(p,\omega)}(\delta,Q)$ and $\rho\in C^{0,\alpha}(\overline D)$ with $0<\alpha\le 1$.
It holds that, for all $\tau>0$
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
\left\vert
e^{-\tau p\cdot(\omega+i\vartheta)}
\int_D\rho(x)e^{\tau x\cdot(\omega+i\vartheta)}\,dx
-\frac{n-1}{2\tau^n}
\rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)\right\vert
\\
\\
\displaystyle
\le\vert\rho(p)\vert\frac{\vert Q\vert}{\delta^{n-1}}
\{(\tau\delta+1)^{n-1}+n-2\}
\frac{e^{-\tau\delta}}{\tau^n}
+\Vert\rho\Vert_{C^{0,\alpha}(\overline D)}
\frac{\vert Q\vert}{\delta^{n-1}}
(\frac{\mbox{diam}\,D}{\delta})^{\alpha}\frac{C_{n,\alpha}}{\tau^{n+\alpha}},
\end{array}
$$
where $\Vert\rho\Vert_{C^{0,\alpha}(\overline D)}=
\sup_{x,y\in\overline D, x\not=y}\frac{\vert\rho(x)-\rho(y)\vert}{\vert x-y\vert^{\alpha}}$
and
$$\displaystyle
C_{n,\alpha}
=\int_0^{\infty}s^{n-1+\alpha} e^{-s}ds.
$$
\em \vskip2mm
Thus we have, as $\tau\rightarrow\infty$
$$\displaystyle
e^{-\tau p\cdot(\omega+i\vartheta)}
\int_{D_{(p,\omega)}(\delta,Q)}\rho(x)e^{\tau x\cdot(\omega+i\vartheta)}\,dx
=\frac{n-1}{2\tau^n}
\rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)+O(\tau^{-(n+\alpha)}).
$$
This is the meaning of the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$.
Note that the remainder estimate $O(\tau^{-(n+\alpha)})$ is uniform with respect to $\vartheta$.
As a direct corollary, we also have, instead of (1.3), another representation of $C_{(p,\omega)}(\delta,Q,\vartheta)$:
$$\displaystyle
C_{(p,\omega)}(\delta,Q,\vartheta)
=\frac{2}{n-1}
\lim_{\tau\longrightarrow\infty}\tau^ne^{-\tau p\cdot(\omega+i\vartheta)}
\int_{D_{(p,\omega)}(\delta,Q)} e^{\tau x\cdot(\omega+i\vartheta)}dx.
\tag {1.4}
$$
The convergence is uniform with respect to $\vartheta$.
Proposition 1.1 is one of the two key points in \cite{IkC} and clarifies the role of the H\"older continuity of $\rho$.
The other is the {\it non-vanishing} of $C_{(p,\omega)}(\delta,Q,\vartheta)$ as a part of the leading coefficient
of the integral in Proposition 1.1 as $\tau\rightarrow\infty$.
This is not trivial, in particular in the three-dimensional case. For this we have shown therein the following fact.
\proclaim{\noindent Proposition 1.2 (Theorem 2 in \cite{IkC}).}
$\bullet$ If $n=2$ and $Q$ is given by the inside of an {\it arbitrary line segment}, then for all $\vartheta$ perpendicular
to $\omega$ we have $C_{(p,\omega)}(\delta,Q,\vartheta)\not=0$.
$\bullet$ If $n=3$ and $Q$ is given by the inside of an {\it arbitrary triangle}, then for all $\vartheta$ perpendicular
to $\omega$ we have $C_{(p,\omega)}(\delta,Q,\vartheta)\not=0$.
\em \vskip2mm
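As a numerical illustration of Proposition 1.2 in the case $n=2$ (a sketch of our own, not part of \cite{IkC}; the geometry below is an arbitrary choice), one can take $p$ at the origin, $\omega=(0,1)$, $\vartheta=(1,0)$ and $Q$ the open segment from $(a,-\delta)$ to $(b,-\delta)$: then the line integral in (1.3) has a closed form, and formula (1.4) recovers it, with a nonzero limit.

```python
import numpy as np

# Geometry (illustrative choices): apex p = 0, omega = (0,1), vartheta = (1,0),
# height delta, base Q the open segment from (a,-delta) to (b,-delta).
delta, a, b = 1.0, -0.3, 0.7

# Closed form of (1.3) for n = 2: the section Q_s is the segment
# x in [a s/delta, b s/delta] at height y = -s, and the value of
# 2s * int (s - i x)^{-2} dx is independent of s.
C_exact = -2j * (1 / (1 - 1j * b / delta) - 1 / (1 - 1j * a / delta))

def cone_integral(tau, ns=200_000):
    """int_D e^{tau x.(omega + i vartheta)} dx over the triangle D:
    the inner x-integral is exact, the outer s-integral uses the midpoint rule."""
    s = (np.arange(ns) + 0.5) * (delta / ns)
    inner = (np.exp(1j * tau * b * s / delta) - np.exp(1j * tau * a * s / delta)) / (1j * tau)
    return np.sum(np.exp(-tau * s) * inner) * (delta / ns)

tau = 100.0
C_num = 2 * tau**2 * cone_integral(tau)   # formula (1.4) with n = 2, p = 0
print(abs(C_num - C_exact))               # small
print(abs(C_exact))                       # nonzero, as Proposition 1.2 asserts
```

Since $\rho\equiv 1$ here, the remainder in Proposition 1.1 reduces to the exponentially small truncation term, so already a moderate $\tau$ reproduces the constant accurately.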
The nonvanishing of the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$
in the case $n=2$ was shown in the proof of Lemma 2.1 in \cite{Ik}. The proof therein employs
a local expression of the corner around the apex as the graph of a function on the line $x\cdot\omega=p\cdot\omega$, and the proof
that views $D_{(p,\omega)}(\delta,Q)$ as a cone, as in \cite{IkC}, was not developed there.
Note that, in the survey paper \cite{IkS} on the enclosure method it is pointed out that ``the Helmholtz version''
of Proposition 1.1 is also valid.
That is, roughly speaking, we have
$$\displaystyle
e^{-p\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\,\vartheta)}
\int_{D_{(p,\omega)}(\delta,Q)}\,\rho(x)e^{x\cdot(\tau \omega+i\sqrt{\tau^2+k^2}\,\vartheta)}\,dx
=\frac{n-1}{2\tau^n}
\rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)+O(\tau^{-(n+\alpha)})
\tag {1.5}
$$
with the {\it same constant} $C_{(p,\omega)}(\delta,Q,\vartheta)$, where $k\ge 0$.
See Lemma 3.2 therein.
The proof can be done by using the same argument as that of Proposition 1.1.
Note that the function $v=e^{x\cdot(\tau \omega+i\sqrt{\tau^2+k^2}\,\vartheta)}$ satisfies
the Helmholtz equation $\Delta v+k^2 v=0$ in ${\rm \bf R}^n$.
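Indeed, since $\vert\omega\vert=\vert\vartheta\vert=1$ and $\omega\cdot\vartheta=0$, a direct computation gives
$$\displaystyle
\Delta v
=\left(\tau\omega+i\sqrt{\tau^2+k^2}\,\vartheta\right)\cdot\left(\tau\omega+i\sqrt{\tau^2+k^2}\,\vartheta\right)v
=\left\{\tau^2-(\tau^2+k^2)\right\}v=-k^2v.
$$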
\subsection{Role of nonvanishing in an inverse source problem}
As an application of the nonvanishing of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$, we present here its direct application
to the inverse source problem considered in \cite{Ik}, but now in {\it three dimensions}.
Let $\Omega$ be a bounded domain of ${\rm \bf R}^3$ with $\partial\Omega\in C^2$.
We denote by $\nu$ the normal unit outward vector field on $\partial\Omega$.
Let $k\ge 0$.
Let $u\in H^1(\Omega)$ be an arbitrary weak solution of the Helmholtz equation in $\Omega$ at the wave number $k$:
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2 u=F(x), & x\in\Omega,
\end{array}
\tag {1.6}
$$
where $F(x)$ is an unknown source term such that $\mbox{supp}\,F\subset\Omega$.
Both $u$ and $F$ can be complex-valued functions. See \cite{Ik} for the meaning of the solution
and the formulation of the Cauchy data on $\partial\Omega$ in the weak sense.
It is well known that, in general, one cannot obtain the uniqueness of the source term $F$ itself from the Cauchy data
of $u$ on $\partial\Omega$.
In fact, given $\varphi\in C^{\infty}_0(\Omega)$ let $G=F+\Delta\varphi+k^2\varphi$.
We have $\mbox{supp}\,G\subset\Omega$ and the function $\tilde{u}=u+\varphi$ satisfies
$$\begin{array}{ll}
\displaystyle
\Delta\tilde{u}+k^2\tilde{u}=G(x), & x\in\Omega.
\end{array}
$$
Both $u$ and $\tilde{u}$ have the same Cauchy data on $\partial\Omega$.
It should be pointed out, however, that $F$ and $G$ coincide modulo a $C^{\infty}$ function.
This means that the singularities of $F$ and $G$ coincide. This suggests the possibility
of extracting some information about
a singularity of $F$ or its support from the Cauchy data of $u$ on $\partial\Omega$.
As done in \cite{Ik} in two dimensions, we introduce the special form of the unknown source $F$:
$$F(x)=F_{\rho,D}(x)=
\left\{\begin{array}{lr}
\displaystyle
0, & \quad\mbox{if $x\in\Omega\setminus D$,}\\
\\
\displaystyle
\rho(x), & \quad\mbox{if $x\in\,D$.}
\end{array}
\right.
\tag {1.7}
$$
Here $D$ is an unknown non empty open subset of $\Omega$ satisfying $\overline D\subset\Omega$,
and $\rho\in L^{2}(D)$ is also unknown. We call $D$ the {\it source domain}; however,
we assume the connectedness of neither $D$ nor $\Omega\setminus\overline D$.
The function $\rho$ is called the strength of the source.
We are interested in the following problem.
$\quad$
{\bf\noindent Problem 1.}
Extract information about a singularity of the source domain $D$ of $F$ having form (1.7) from the Cauchy data
$(u(x), \frac{\partial u}{\partial\nu}(x))$ for all $x\in\partial\Omega$.
$\quad$
\noindent
Note that we are seeking a {\it concrete procedure} of the extraction.
Here we recall the notion of the regularity of a direction introduced in the enclosure method \cite{Ik}.
The function $h_D(\omega)=\sup_{x\in D}\,x\cdot\omega$, $\omega\in S^{2}$ is called the {\it support function} of $D$.
It belongs to $C(S^2,{\rm \bf R})$ because of the trivial estimate $\vert h_D(\omega_1)-h_D(\omega_2)\vert\le
\sup_{x\in D}\,\vert x\vert\cdot\vert\omega_1-\omega_2\vert$ for all $\omega_1,\omega_2\in S^2$.
Given $\omega\in S^{2}$,
it is easy to see that the set
$$\displaystyle
H_{\omega}(D)\equiv\left\{x\in \overline D\,\left\vert\right. x\cdot\omega=h_D(\omega)\,\right\}
$$
is non empty and contained in $\partial D$.
We say that $\omega$ is {\it regular} with respect to $D$ if the set $H_{\omega}(D)$ consists of only a single point.
We denote the point by $p(\omega)$.
We introduce a concept of a singularity of $D$ in (1.7).
{\bf\noindent Definition 1.1.} Let $\omega\in S^{2}$ be regular with respect to $D$.
We say that $D$ has a {\it conical singularity} from direction $\omega$ if
there exist a positive number $\delta$ and an open set $Q$ of the plane $x\cdot\omega=h_D(\omega)-\delta$ with respect to the relative topology
from ${\rm \bf R}^3$ such that
$$\displaystyle
D\cap\left\{x\in{\rm \bf R}^3\,\vert\,h_D(\omega)-\delta<x\cdot\omega<h_D(\omega)\,\right\}=D_{(p(\omega),\omega)}(\delta,Q).
$$
Next we introduce a concept of the {\it activity} of the source term.
{\bf\noindent Definition 1.2.} Given a point $p\in\partial D$
we say that the source $F=F_{\rho,D}$ given by (1.7) is {\it active} at $p$
if there exist an open ball $B_{\eta}(p)$ centered at $p$ with radius $\eta$,
$0<\alpha\le 1$ and a function $\tilde{\rho}\in C^{0,\alpha}(\overline{B_{\eta}(p)})$
such that $\rho(x)=\tilde{\rho}(x)$ for almost all $x\in B_{\eta}(p)\cap D$
and $\tilde{\rho}(p)\not=0$. Note that $\rho$, together with $\tilde{\rho}$, can be complex-valued.
Now let $u\in H^1(\Omega)$ satisfy the equation (1.6) in the weak sense with $F=F_{\rho, D}$ given by (1.7).
Given a unit vector $\omega\in S^2$ define $S(\omega)=\{\vartheta\in S^2\,\vert \omega\cdot\vartheta=0\}$.
Using the Cauchy data of $u$ on $\partial\Omega$, we define the indicator function as in \cite{Ik}:
$$\displaystyle
I_{\omega,\vartheta}(\tau)=\int_{\partial\Omega}
\left(\frac{\partial u}{\partial\nu}v-\frac{\partial v}{\partial\nu} u\right)\,dS,
$$
where $\vartheta\in S(\omega)$ and
$$\displaystyle
v=e^{x\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\vartheta)},\,\,\tau>0.
$$
We also use its derivative with respect to $\tau$:
$$\displaystyle
I_{\omega,\vartheta}'(\tau)
=\int_{\partial\Omega}\left(\frac{\partial u}{\partial\nu}\,v_{\tau}-\frac{\partial\,v_{\tau}}{\partial\nu} u\right)\,dS,
$$
where
$$\displaystyle
v_{\tau}=\partial_{\tau}v=\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v.
$$
The following theorem clarifies the role of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ in the asymptotic behaviour of the indicator function
together with its derivative as $\tau\rightarrow\infty$.
\proclaim{\noindent Theorem 1.1.}
Let $\omega$ be regular with respect to $D$ and assume that
$D$ has a conical singularity from direction $\omega$.
Then, we have
$$\displaystyle
\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}(\tau)=
\tilde{\rho}(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
+O(\tau^{-\alpha})
\tag {1.8}
$$
and
$$\displaystyle
\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}'(\tau)=
\tilde{\rho}(p(\omega))(h_D(\omega)+ip(\omega)\cdot\vartheta)\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
+O(\tau^{-\alpha}).
\tag {1.9}
$$
The remainder $O(\tau^{-\alpha})$ is uniform with respect to $\vartheta\in S(\omega)$.
\em \vskip2mm
{\it\noindent Proof.}
Integration by parts yields
$$\displaystyle
I_{\omega,\vartheta}(\tau)=\int_D\rho(x)\,v\,dx
$$
and thus
$$\displaystyle
I_{\omega,\vartheta}'(\tau)=\int_D\rho(x)\,v_{\tau}\,dx.
$$
Recalling Definition 1.1, one has the decomposition
$$\displaystyle
D=D_{(p(\omega),\omega)}(\delta,Q)\cup D',
\tag {1.10}
$$
where
$$
D'=D\setminus D_{(p(\omega),\omega)}(\delta,Q)\subset\left\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega\le h_D(\omega)-\delta\,\right\}.
\tag {1.11}
$$
Besides, choosing $\delta$ smaller if necessary, one may assume that $D_{(p(\omega),\omega)}(\delta, Q)\subset B_{\eta}(p(\omega))$, where $\eta$
and $B_{\eta}(p(\omega))$ are the same as those of Definition 1.2.
Hereafter we set $p=p(\omega)$ for simplicity of description.
According to the decomposition (1.10), we have
the decomposition of both $I_{\omega,\vartheta}(\tau)$ and $I_{\omega,\vartheta}'(\tau)$ as follows:
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
e^{-\tau h_D(\omega)}
e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}I_{\omega,\vartheta}(\tau)
\\
\\
\displaystyle
=e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}
\int_{D_{(p,\omega)}(\delta, Q)
}\tilde{\rho}(x)\,vdx
+e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}\int_{D'}\rho v dx
\end{array}
\tag {1.12}
$$
and
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}I_{\omega,\vartheta}'(\tau)
\\
\\
\displaystyle
=e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}
\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, v_{\tau}dx+
e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}\int_{D'}\rho v_{\tau} dx,
\end{array}
\tag {1.13}
$$
where $p=p(\omega)$.
By (1.11), we see that the second terms on the right-hand sides of (1.12) and (1.13) have the common bound
$O(e^{-\tau\delta}\Vert\rho\Vert_{L^{2}(D)})$.
Thus from (1.5) and (1.12) we obtain (1.8) with
the remainder $O(\tau^{-\alpha})$ which is uniform with respect to $\vartheta\in S(\omega)$.
For (1.13) we write
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, v_{\tau}dx
\\
\\
\displaystyle
=\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\,
\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v\,dx
\\
\\
\displaystyle
=\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, x\cdot\omega\,v\,dx
+i\frac{\tau}{\sqrt{\tau^2+k^2}}\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, x\cdot\vartheta\,v\,dx.
\end{array}
$$
Thus applying (1.5) to each of the last terms and using (1.13), we obtain (1.9)
with the remainder $O(\tau^{-\alpha})$ which is uniform with respect to $\vartheta\in S(\omega)$.
\noindent
$\Box$
Thus, under the same assumptions as in Theorem 1.1, for each $\vartheta\in S(\omega)$ one can calculate
$$\displaystyle
I(\omega,\vartheta)\equiv \tilde{\rho}(p(\omega))\,\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
$$
via the formula
$$\displaystyle
I(\omega,\vartheta)
=\lim_{\tau\rightarrow\infty}\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}
I_{\omega,\vartheta}(\tau)
\tag {1.14}
$$
by using the Cauchy data of $u$ on $\partial\Omega$ if $p(\omega)$ is known.
As a direct corollary of formulae (1.8) and (1.9), we obtain a partial answer to Problem 1 and the starting point for the main purpose of this paper.
\proclaim{\noindent Theorem 1.2.}
Let $\omega$ be regular with respect to $D$.
Assume that $D$ has a conical singularity from direction $\omega$,
$F_{\rho,D}$ is active at $p=p(\omega)$ and that
direction $\vartheta\in S(\omega)$ satisfies the condition
$$\displaystyle
C_{(p(\omega),\,\omega)}(\delta,Q,\vartheta)\not=0.
\tag {1.15}
$$
Then, there exists a positive number $\tau_0$ such that, for all $\tau\ge\tau_0$
$\vert I_{\omega,\vartheta}(\tau)\vert>0$ and we have the following three asymptotic formulae.
The first formula is
$$\displaystyle
\lim_{\tau\longrightarrow\infty}\frac{\log\vert I_{\omega, \vartheta}(\tau)\vert}{\tau}=h_D(\omega)
\tag {1.16}
$$
and the second one is
$$\displaystyle
\lim_{\tau\rightarrow\infty}
\frac{I_{\omega,\vartheta}'(\tau)}{I_{\omega,\vartheta}(\tau)}
=h_D(\omega)+i\,p(\omega)\cdot\vartheta.
\tag {1.17}
$$
The third one is the so-called $0$-$\infty$ criterion:
$$\displaystyle
\lim_{\tau\longrightarrow\infty}e^{-\tau t}\vert I_{\omega, \vartheta}(\tau)
\vert
=
\left\{
\begin{array}{ll}
0, & \mbox{if $t\ge h_D(\omega)$,}\\
\\
\displaystyle
\infty, & \mbox{if $t<h_D(\omega)$.}
\end{array}
\right.
\tag {1.18}
$$
\em \vskip2mm
This provides the framework of the approach using the enclosure method for a source domain with a conical singularity
from a direction.
Some remarks are in order.
$\bullet$ In two dimensions, by Proposition 1.2 the condition (1.15) is redundant,
and we have the same conclusion as in Theorem 1.2.
$\bullet$ The formula (1.17) is an application of the idea
``taking the logarithmic derivative of the indicator function'' introduced in \cite{IkL}.
Therein inverse obstacle scattering problems at a fixed frequency in two dimensions are considered.
Needless to say, formula (1.17) is not derived in \cite{Ik}.
The condition (1.15) is {\it stable} with respect to perturbations of $\vartheta\in S(\omega)$, since from the expression (1.3)
we see that the function $S(\omega)\ni\vartheta\longmapsto C_{(p(\omega),\,\omega)}(\delta,Q,\,\vartheta)$
is continuous, where the topology of $S(\omega)$ is the relative one from ${\rm \bf R}^3$. This fact yields the following corollary.
\proclaim{\noindent Corollary 1.1.}
Let $\omega$ be regular with respect to $D$.
Under the same assumptions as those in Theorem 1.2 the point $p(\omega)$ is uniquely determined by
the Cauchy data of $u$ on $\partial\Omega$.
\em \vskip2mm
{\it\noindent Proof.} From (1.16) one has $h_D(\omega)=p(\omega)\cdot\omega$.
Choose $\vartheta'\in S(\omega)$ sufficiently near $\vartheta$ in such a way that
$C_{(p(\omega),\omega)}(\delta,Q,\vartheta')\not=0$. Then from the formula (1.17) for two linearly independent directions $\vartheta$ and $\vartheta'$
one gets $p(\omega)\cdot\vartheta$ and $p(\omega)\cdot\vartheta'$.
\noindent
$\Box$
As another direct corollary of Theorem 1.2 and Proposition 1.2 in the case $n=3$ we have the following result.
\proclaim{\noindent Corollary 1.2.}
Assume that $D$ is given by the inside of a convex polyhedron, that in a neighbourhood of each vertex $p$ of $D$
it coincides with the inside of a tetrahedron with apex $p$, and that the source $F=F_{\rho, D}$ given by (1.7)
is active at $p$.
Then, we have all the formulae (1.16), (1.17) and (1.18) for all $\omega$ regular with respect to $D$
and $\vartheta\in S(\omega)$.
\em \vskip2mm
{\it\noindent Proof.} For every $\omega$ regular with respect to $D$, the set $D$ has a conical singularity from direction $\omega$
with a triangle $Q$ at $p(\omega)$. Thus (1.15) is valid for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$.
Therefore, we have all the formulae (1.16), (1.17) and (1.18) for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$.
\noindent
$\Box$
{\bf\noindent Remark 1.1.} Under the same assumptions as Corollary 1.2 one gets a uniqueness theorem: the Cauchy data of $u$ on $\partial\Omega$ uniquely determines $D$.
The proof is as follows. From (1.16) one gets $h_D(\omega)$ for all $\omega$ regular with respect to $D$. The set of all $\omega$ that are not regular with respect to $D$
consists of finitely many points and arcs on $S^2$.
Hence the set of all $\omega$ that are regular with respect to $D$ is dense, and thus one gets
$h_D(\omega)$ for all $\omega\in S^2$ because of the continuity of $h_D$. Therefore one obtains the convex hull of $D$, and thus $D$ itself by the convexity assumption.
This proof is remarkable in that it makes use of neither the {\it traditional contradiction argument} (``suppose we have two different source domains
$D_1$ and $D_2$ which yield the same Cauchy data,...'') nor any {\it unique continuation argument} for the solution of the governing equation.
One can see such two arguments in \cite{N} in the case when $k=0$ for an inverse problem for detecting a source of {\it gravity anomaly}.
Typical examples of $D$ covered by Corollary 1.2 are the tetrahedron, the regular hexahedron (cube) and the regular dodecahedron.
Thus the central problem in applying Theorem 1.2 to Problem 1 for sources with various source domains
within our framework is to clarify
the condition (1.15) for general $Q$.
In contrast to Proposition 1.2, when $Q$ is general, we do not know whether there exists a unit vector $\vartheta\in S(\omega)$ such that (1.15) is valid.
Going back to (1.3), we have an explicit vector equation for the constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ if $Q$ is given by the inside of a polygon.
See Proposition 4 in \cite{IkC}. However, compared with the case when $Q$ is given by the inside of a triangle,
it seems difficult to deduce the non-vanishing of $C_{(p,\omega)}(\delta,Q,\vartheta)$
for all $\vartheta\in S(\omega)$ directly from that equation. This is an open problem.
\subsection{Explicit formula and its implication}
In this paper, instead of considering general $Q$, we consider another special
$Q$: the case when $Q$ is given by the section of the inside of a {\it circular cone} by a plane.
Given $p\in{\rm \bf R}^3$, $\mbox{\boldmath $n$}\in S^2$ and $\theta\in\,]0,\,\frac{\pi}{2}[$ let $V_p(-\mbox{\boldmath $n$},\theta)$
denote the inside of the {\it circular cone} with {\it apex} at $p$ and opening angle $\theta$ around the direction $-\mbox{\boldmath $n$}$,
that is,
$$\displaystyle
V_p(-\mbox{\boldmath $n$},\theta)=\left\{x\in{\rm \bf R}^3\,\left\vert\right.
\,(x-p)\cdot(-\mbox{\boldmath $n$})>\vert x-p\vert\cos\theta\,\right\}.
$$
Given $\omega\in S^2$ set
$$\displaystyle
Q=V_p(-\mbox{\boldmath $n$},\theta)
\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\omega=p\cdot\omega-\delta\,\right\}.
\tag {1.19}
$$
To ensure that $Q$ is non empty and bounded, we impose the restriction between $\omega$ and $\mbox{\boldmath $n$}$ as follows:
$$
\omega\cdot\mbox{\boldmath $n$}>\cos(\pi/2-\theta)=\sin\theta(>0).
\tag{1.20}
$$
This means that the angle between $\omega$ and $\mbox{\boldmath $n$}$ has to be less than $\frac{\pi}{2}-\theta$.
Then it is known that $Q$ is an ellipse and we have
$$\displaystyle
D_{(p,\omega)}(\delta, Q)=V_p(-\mbox{\boldmath $n$},\theta)
\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\omega>p\cdot\omega-\delta\,\right\}.
\tag {1.21}
$$
The problem here is to compute the complex constant
$C_{(p,\omega)}(\delta,Q,\vartheta)$ with all $\vartheta\in S(\omega)$ for this domain
$D_{(p,\omega)}(\delta,Q)$ with $Q$ given by (1.19).
Instead of (1.3) we employ
the formula (1.4) with $D=D_{(p,\omega)}(\delta,Q)$ with $n=3$:
$$\displaystyle
C_{(p,\omega)}(\delta,Q,\vartheta)
=
\lim_{\tau\longrightarrow\infty}\tau^3e^{-\tau p\cdot(\omega+i\vartheta)}
\int_{D_{(p,\omega)}(\delta,Q)}\,e^{\tau x\cdot(\omega+i\vartheta)}dx.
\tag {1.22}
$$
Here we rewrite this formula. Choosing sufficiently small positive numbers $\delta'$ and $\delta''$ with $\delta''<\delta'$,
we see that the set
$$\displaystyle
D_{(p,\omega)}(\delta, Q)\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\mbox{\boldmath $n$}<p\cdot\mbox{\boldmath $n$}-\delta'\,\right\}
$$
is contained in the half-space $x\cdot\omega<p\cdot\omega-\delta''$.
This yields
$$
\displaystyle
e^{-\tau p\cdot(\omega+i\vartheta)}
\int_{D_{(p,\omega)}(\delta,Q)}\,e^{\tau x\cdot(\omega+i\vartheta)}dx
=e^{-\tau p\cdot(\omega+i\vartheta)}
\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx+O(e^{-\tau\delta''}),
$$
where
$$\displaystyle
V=V_p(-\mbox{\boldmath $n$},\theta)
\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\mbox{\boldmath $n$}>p\cdot\mbox{\boldmath $n$}-\delta'\,\right\}.
$$
Thus from (1.22) we obtain a more convenient expression
$$\displaystyle
C_{(p,\omega)}(\delta,Q,\vartheta)
=
\lim_{\tau\longrightarrow\infty}\tau^3e^{-\tau p\cdot(\omega+i\vartheta)}
\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx.
\tag {1.23}
$$
Using this expression, we obtain the following explicit formula for $C_{(p,\omega)}(\delta,Q,\vartheta)$ for the domain $D_{(p,\omega)}(\delta,Q)$
given by (1.21).
\proclaim{\noindent Proposition 1.3.}
We have
$$\displaystyle
C_{(p,\omega)}(\delta, Q,\vartheta)
=6\,V(\theta)\,
(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)\,)^{-3},
\tag {1.24}
$$
where
$$\displaystyle
V(\theta)=\frac{\pi}{3}\cos\,\theta\sin^2\,\theta.
$$
\em \vskip2mm
Note that the value $V(\theta)$ coincides with
the volume of the circular cone with height $\cos\theta$ and opening angle $\theta$.
As a function of $\theta\in\,]0,\,\frac{\pi}{2}\,[$ it is monotone increasing on $]0,\,\tan^{-1}\sqrt{2}[$,
decreasing on $]\tan^{-1}\sqrt{2},\,\frac{\pi}{2}[$, and takes its maximum value
$\frac{2\pi}{9\sqrt{3}}$ at $\theta=\tan^{-1}\sqrt{2}$.
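Formula (1.24) can be checked numerically in the axisymmetric case $\omega=\mbox{\boldmath $n$}$, where $\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)=1$ and the limit (1.23) should equal $6V(\theta)$. The sketch below is our own illustration (it relies on SciPy, and the parameter values are arbitrary); it slices the truncated cone $V$ perpendicular to $\mbox{\boldmath $n$}$ and uses the classical disk integral $\int_{\vert x\vert\le R}e^{i\tau x_1}\,dA=2\pi R\,J_1(\tau R)/\tau$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

theta = 0.5      # opening angle (arbitrary illustrative value)
tau = 500.0      # large parameter
dprime = 1.0     # height delta' of the truncated cone V

# With p = 0 and omega = n, the section of V at depth s (i.e. x.n = -s) is a
# disk of radius s*tan(theta); integrating e^{i tau x.vartheta} over it gives
# the Bessel factor 2*pi*R*J1(tau*R)/tau, so the volume integral in (1.23)
# reduces to a one-dimensional integral in s.
def integrand(s):
    R = s * np.tan(theta)
    return np.exp(-tau * s) * 2 * np.pi * R * j1(tau * R) / tau

I, _err = quad(integrand, 0.0, dprime, limit=500)

lhs = tau**3 * I                                            # left side of (1.23)
rhs = 6 * (np.pi / 3) * np.cos(theta) * np.sin(theta)**2    # (1.24): 6 V(theta)
print(lhs, rhs)
```

Since the density is constant here, the remainder is exponentially small in $\tau$, and the two printed values agree to quadrature accuracy.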
Now we describe an application to Problem 1.
First we introduce a singularity of a circular cone type for the source domain.
{\bf\noindent Definition 1.3.} Let $D$ be a non empty bounded open set of ${\rm \bf R}^3$.
Let $p\in\partial D$. We say that $D$ has a {\it circular cone singularity} at $p$ if there exist
a positive number $\epsilon$, a unit vector $\mbox{\boldmath $n$}$ and a number $\theta\in\,]0,\,\frac{\pi}{2}[$ such that
$$\displaystyle
D\cap B_{\epsilon}(p)=V_{p}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p).
$$
It is easy to see that the notion of a circular cone singularity is a special case of that of a conical one
in the following sense.
\proclaim{\noindent Lemma 1.1.}
Let $\omega\in S^2$ be regular with respect to $D$.
Assume that $D$ has a circular cone singularity at $p(\omega)$. Then, $D$ has a conical singularity from direction $\omega$
at $p(\omega)$. More precisely, for a sufficiently small $\delta$ we have the expression
$$\displaystyle
D\cap\left\{x\in{\rm \bf R}^3\,\vert\, h_D(\omega)-\delta<x\cdot\omega<h_D(\omega)\,\right\}
=D_{(p(\omega),\omega)}(\delta, Q),
$$
where $Q$ is given by (1.19), with the cone $V_{p}(-\mbox{\boldmath $n$},\theta)$ of Definition 1.3 at $p=p(\omega)$, satisfying (1.20).
\em \vskip2mm
As a direct corollary of Theorems 1.1-1.2, Proposition 1.3 and Lemma 1.1, we immediately
obtain all the results in Theorem 1.2 without the condition (1.15). We summarize one of the results as the following Corollary 1.3.
\proclaim{\noindent Corollary 1.3 (Detecting the point $p(\omega)$).} Let $u\in H^1(\Omega)$ be an arbitrary solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity
at $p=p(\omega)$ and that the source $F$ is active at $p(\omega)$.
Choose two linearly independent vectors $\vartheta=\vartheta_1$ and $\vartheta_2$ in $S(\omega)$.
Then, the point $p(\omega)$ itself and thus $h_D(\omega)=p(\omega)\cdot\omega$ can be extracted from the Cauchy data of $u$ on $\partial\Omega$
by using the formula
$$\displaystyle
p(\omega)\cdot\omega+i\,p(\omega)\cdot\vartheta_j
=\lim_{\tau\rightarrow\infty}
\frac{I_{\omega,\vartheta_j}'(\tau)}{I_{\omega,\vartheta_j}(\tau)},\,\,\,j=1,2.
\tag {1.25}
$$
\em \vskip2mm
By virtue of the formula (1.24), the function $I(\omega,\,\cdot\,)$ has the expression
$$\displaystyle
I(\omega,\vartheta)=6\,\tilde{\rho}(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^{-3}.
\tag {1.26}
$$
Formula (1.26) yields the following results.
\proclaim{\noindent Corollary 1.4.} Let $u\in H^1(\Omega)$ be a solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity
at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ for some $\epsilon>0$.
\noindent
(i) Assume that $F$ is active at $p(\omega)$.
The vector $\omega$ coincides with $\mbox{\boldmath $n$}$ if and only if the function
$I(\omega,\,\cdot\,)$ is a constant function.
\noindent
(ii) The vector $\mbox{\boldmath $n$}$ and the angle $\theta$ of $V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)$
and the source strength $\tilde{\rho}(p(\omega))$ satisfy the following two equations:
$$\displaystyle
6\,\vert\tilde{\rho}(p(\omega))\vert\,V(\theta)=(\mbox{\boldmath $n$}\cdot\omega)^3
\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert;
\tag {1.27}
$$
$$\displaystyle
6\,\tilde{\rho}(p(\omega))
\,V(\theta)\,(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)
=\frac{1}{\pi}\,\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta).
\tag {1.28}
$$
\em \vskip2mm
Using the equations (1.26), (1.27) and (1.28) one gets the following corollary.
\proclaim{\noindent Corollary 1.5.} Let $u\in H^1(\Omega)$ be a solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity
at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ for some $\epsilon>0$.
Assume also that $F$ is active at $p(\omega)$ and that $\omega\approx\mbox{\boldmath $n$}$ in the sense that
$$\displaystyle
\mbox{\boldmath $n$}\cdot\omega>\frac{1}{\sqrt{3}}.
\tag {1.29}
$$
Then, the value $\gamma=\mbox{\boldmath $n$}\cdot\omega$ is the unique solution of the following quintic equation in $]\,\frac{1}{\sqrt{3}},\,1]$:
$$\displaystyle
\gamma^3(3\gamma^2-1)=
\frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}.
\tag {1.30}
$$
Besides, for an arbitrary $\vartheta\in S(\omega)$ the value $\mu=\mbox{\boldmath $n$}\cdot\vartheta$ is given by the formulae
$$
\displaystyle
\mu^2=\frac{\displaystyle\gamma^3-\mbox{Re}\,T(\omega,\vartheta)}
{3\gamma}
\tag {1.31}
$$
and
$$\displaystyle
\mu=\frac{\displaystyle\mbox{Im}\,T(\omega,\vartheta)}{3\gamma^2-\mu^2},
\tag {1.32}
$$
where
$$\displaystyle
T(\omega,\vartheta)
=\frac{\displaystyle
\int_{S(\omega)}\,I(\omega,\vartheta)\,ds(\vartheta)}
{\pi(3\gamma^2-1)I(\omega,\vartheta)}.
\tag{1.33}
$$
\em \vskip2mm
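To see why (1.31)-(1.33) determine $\mu$, note that combining (1.26) with (1.28) (which is legitimate under (1.29) and the activity of $F$, since then $3\gamma^2-1>0$ and $\tilde{\rho}(p(\omega))\not=0$) gives
$$\displaystyle
T(\omega,\vartheta)=(\gamma+i\mu)^3=\gamma^3-3\gamma\mu^2+i\,\mu(3\gamma^2-\mu^2),
$$
whose real and imaginary parts are exactly the relations (1.31) and (1.32).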
The condition (1.29) is equivalent to the statement\footnote{We have
$$\displaystyle
\frac{3\pi}{10}+\frac{\pi}{100}>\tan^{-1}\sqrt{2}>\frac{3\pi}{10}.
$$
}: the angle between $\omega$ and $\mbox{\boldmath $n$}$
is less than $\tan^{-1}\sqrt{2}$. Thus the condition is not very restrictive.
The denominator of (1.32) is not zero because of $3\gamma^2-\mu^2\ge 3\gamma^2-1$ and (1.29).
Under the same assumptions as Corollary 1.5, one can finally calculate the quantity
$$\displaystyle
\tilde{\rho}(p(\omega))\,V(\theta)
\tag {1.34}
$$
and $\mbox{\boldmath $n$}$ from the Cauchy data of $u$ on $\partial\Omega$. This is the final conclusion.
The procedure is as follows.
\noindent
{\bf Step 1.} Calculate $p(\omega)$ via the formula (1.25).
\noindent
{\bf Step 2.} Calculate $I(\omega,\vartheta)$ via the formula (1.14) and the computed $p(\omega)$ in Step 1.
\noindent
{\bf Step 3.} If $I(\omega,\vartheta)$ looks like a constant function, conclude that $\omega\approx\mbox{\boldmath $n$}$ in the sense of (1.29).
If not, search by trial and error for another $\omega$ around the original one such that $\omega\approx\mbox{\boldmath $n$}$ as above,
and finally fix it.
\noindent
{\bf Step 4.} Find the value $\gamma=\mbox{\boldmath $n$}\cdot\omega$ by solving the quintic equation (1.30).
\noindent
{\bf Step 5.} Find the value (1.34) via the formulae (1.28) with the computed $\mbox{\boldmath $n$}\cdot\omega$
in Step 4.
\noindent
{\bf Step 6.} Choose linearly independent vectors $\vartheta_1, \vartheta_2\in S(\omega)$ and calculate
$T(\omega,\vartheta_j)$, $j=1,2$ via the formula (1.33) using the computed value $\gamma$ in Step 4.
\noindent
{\bf Step 7.} Find $\mu=\mu_j=\mbox{\boldmath $n$}\cdot\vartheta_j$ by solving (1.31) and (1.32) using the computed $T(\omega,\vartheta_j)$
in Step 6.
\noindent
{\bf Step 8.} Find $\mbox{\boldmath $n$}$ by solving $\mbox{\boldmath $n$}\cdot\omega=\gamma$, $\mbox{\boldmath $n$}\cdot\vartheta_j=\mu_j$, $j=1,2$.
Note that, in addition, if the opening angle $\theta$ (respectively, the source strength $\tilde{\rho}(p(\omega))$) is known, then one obtains the value of
$\tilde{\rho}(p(\omega))$ (respectively, the volume $V(\theta)$) via the
value (1.34) computed in Step 5.
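Steps 4--8 can be illustrated with a small numerical sketch. The following Python code is an illustration only: the vectors $\mbox{\boldmath $n$}$, $\omega$, $\vartheta_1$, $\vartheta_2$ and the data are hypothetical. It solves the quintic (1.30) by bisection, recovers $\mu_j$ through (1.31)--(1.32), and reconstructs $\mbox{\boldmath $n$}$ by solving the linear system of Step 8.

```python
import numpy as np

# Hypothetical ground truth: n = (0,0,1), omega at angle 0.3 to n,
# so that gamma = n.omega = cos(0.3) > 1/sqrt(3), i.e. (1.29) holds.
n_true = np.array([0.0, 0.0, 1.0])
phi = 0.3
omega = np.array([np.sin(phi), 0.0, np.cos(phi)])
# Two linearly independent tangent vectors in S(omega) (Step 6).
th1 = np.array([0.0, 1.0, 0.0])
th2 = np.array([np.cos(phi), 0.0, -np.sin(phi)])

gamma_true = n_true @ omega
# Step 4: the data side of (1.30) reduces to R = gamma^3 (3 gamma^2 - 1);
# recover gamma by bisection on ]1/sqrt(3), 1], where the map is increasing.
R = gamma_true**3*(3*gamma_true**2 - 1)
lo, hi = 1/np.sqrt(3), 1.0
for _ in range(200):
    mid = 0.5*(lo + hi)
    if mid**3*(3*mid**2 - 1) < R:
        lo = mid
    else:
        hi = mid
gamma = 0.5*(lo + hi)

# Step 7: recover mu_j from T = (gamma + i mu_j)^3 via (1.31)-(1.32).
def recover_mu(T, g):
    mu2 = (g**3 - T.real)/(3*g)       # formula (1.31): mu^2
    return T.imag/(3*g**2 - mu2)      # formula (1.32): mu itself

mus = [recover_mu((gamma_true + 1j*(n_true @ th))**3, gamma) for th in (th1, th2)]

# Step 8: solve n.omega = gamma, n.th_j = mu_j for n.
A = np.vstack([omega, th1, th2])
n_rec = np.linalg.solve(A, np.array([gamma, *mus]))
```

In this synthetic setting the bisection recovers $\gamma$ to machine precision and the linear system returns the assumed normal $(0,0,1)$.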
This paper is organized as follows. In the next section we give a proof of Proposition 1.3.
It is based on the integral representation (2.8) of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ and the residue calculus.
Proofs of Corollaries 1.4 and 1.5 are given in Section 3.
In Section 4, an inverse obstacle problem for a penetrable obstacle in three dimensions is considered.
The corresponding results for this case are given there, and in Section 5 a possible direction for extending
all the results of this paper is discussed. The Appendix is devoted to an example covered by the results in Section 4.
\section{Proof of Proposition 1.3}
In order to compute this right-hand side, we choose two unit vectors $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$ perpendicular
to each other in such a way that $\mbox{\boldmath $n$}=\mbox{\boldmath $l$}\times\mbox{\boldmath $m$}$.
We see that the intersection of $\partial V_p(-\mbox{\boldmath $n$},\theta)$ with the plane
$(x-p)\cdot\mbox{\boldmath $n$}=-(1/\tan\,\theta)$ coincides with the circle with radius $1$ centered at the point
$p-(1/\tan\,\theta)\mbox{\boldmath $n$}$ on the plane.
The position vector of an arbitrary point on the circle relative to the point $p$ has the expression
$$\displaystyle
\vartheta(w)=\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}
-\frac{1}{\tan\,\theta}\,\mbox{\boldmath $n$}
\tag {2.1}
$$
with a parameter $w\in\,[0,2\pi]$.
Besides, from the geometrical meaning of $\vartheta(w)$, we have
$$
\displaystyle\max_{w\in[0,\,2\pi]}\,\vartheta(w)\cdot\omega<0.
\tag {2.2}
$$
\proclaim{\noindent Lemma 2.1.}
We have the expression
$$\displaystyle
(\omega+i\vartheta)\,C_{(p,\omega)}(\delta,Q,\vartheta)
=\frac{1}{\tan\,\theta}
\int_0^{2\pi}
\frac{\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}+\tan\,\theta\,\mbox{\boldmath $n$}}
{\{\vartheta (w)\cdot(\omega+i\vartheta)\}^2}dw.
\tag {2.3}
$$
\em \vskip2mm
{\it\noindent Proof.}
Let $\mbox{\boldmath $a$}$ be an arbitrary three-dimensional complex vector.
We have
$$\displaystyle
\int_{V}\,\nabla\cdot(e^{\tau x\cdot(\omega+i\vartheta)}\mbox{\boldmath $a$})\,dx
=\tau(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\,\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx.
$$
The divergence theorem yields
$$\displaystyle
(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\,\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx
=\tau^{-1}\int_{\partial V}\,e^{\tau x\cdot(\omega+i\vartheta)}\mbox{\boldmath $a$}\cdot\mbox{\boldmath $\nu$}\,dS(x),
\tag {2.4}
$$
where $\mbox{\boldmath $\nu$}$ denotes the outer unit normal vector to $\partial V$.
Decompose $\partial V=V_1\cup V_2$ with $V_1\cap V_2=\emptyset$, where
$$\begin{array}{l}
\displaystyle
V_1=\{x\,\vert\,-(x-p)\cdot\mbox{\boldmath $n$}=\vert x-p\vert\cos\,\theta,\,
-\delta'<(x-p)\cdot\mbox{\boldmath $n$}<0\},\\
\\
\displaystyle
V_2=\{x\,\vert\,\vert x-(p-\delta'\,\mbox{\boldmath $n$})\vert\le\delta'\,\tan\,\theta,\,
(x-p)\cdot\mbox{\boldmath $n$}=-\delta'\}.
\end{array}
$$
To compute the surface integral over $V_1$, we make use of the change of variables as follows:
$$\begin{array}{ll}
\displaystyle
x
&
\displaystyle
=(p-\delta'\,\mbox{\boldmath $n$})+r(\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$})+\left(\delta'-\frac{r}{\tan\,\theta}\right)\,\mbox{\boldmath $n$}
\\
\\
\displaystyle
&
\displaystyle
=p+r\vartheta(w),
\end{array}
\tag {2.5}
$$
where $(r,w)\in\,[0,\,\delta'\tan\,\theta]\times[0,\,2\pi[$ and $\vartheta(w)$ is given by (2.1).
Then the surface element has the expression
$$\displaystyle
dS(x)=\frac{r}{\sin\,\theta}\,drdw
$$
and the outer unit normal $\mbox{\boldmath $\nu$}$ to $V_1$ takes the form
$$\displaystyle
\mbox{\boldmath $\nu$}=\sin\,\theta\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}
\right).
$$
Now from (2.4) and the decomposition $\partial V=V_1\cup V_2$, we have
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
e^{-\tau p\cdot(\omega+i\vartheta)}
(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\int_{V} v\,dx\\
\\
\displaystyle
=e^{-\tau p\cdot(\omega+i\vartheta)}\tau^{-1}
\int_{V_1} v\mbox{\boldmath $a$}\cdot\nu\,dS(x)
-e^{-\tau p\cdot(\omega+i\vartheta)}\tau^{-1}
\int_{V_2} v\mbox{\boldmath $a$}\cdot\mbox{\boldmath $n$}\,dS(x)\\
\\
\displaystyle
\equiv I+II,
\end{array}
\tag {2.6}
$$
where $v=e^{\tau x\cdot(\omega+i\vartheta)}$.
Since the set $V_2$ is contained in the half-space $x\cdot\omega\le p\cdot\omega-\delta''$, one gets
$$
\displaystyle
II=O(\tau^{-1}e^{-\tau\delta''}).
\tag {2.7}
$$
On $I$, using the change of variables given by (2.5), one has
$$\begin{array}{c}
\displaystyle
x\cdot\omega=p\cdot\omega+r\,\vartheta(w)\cdot\omega,\\
\\
\displaystyle
x\cdot\vartheta=p\cdot\vartheta+r\,\vartheta(w)\cdot\vartheta.
\end{array}
$$
And also noting (2.2), one gets
$$\begin{array}{ll}
\displaystyle
\tau\,I
&
\displaystyle
=\int_0^{2\pi}dw
\int_0^{\delta'\tan\,\theta}
rdr
e^{\tau r\vartheta(w)\cdot\omega+i\tau\,r\vartheta(w)\cdot\vartheta}
\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}\\
\\
\displaystyle
&
\displaystyle
=\frac{1}{\tau^2}
\int_0^{2\pi}dw
\int_0^{\tau\delta'\tan\,\theta}
sds
e^{s\vartheta(w)\cdot\omega+i\,s\vartheta(w)\cdot\vartheta}
\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}\\
\\
\displaystyle
&
\displaystyle
=\frac{1}{\tau^2}
\int_0^{2\pi}dw
\int_0^{\infty}
sds
e^{s\vartheta(w)\cdot\omega+is\vartheta(w)\cdot\vartheta}
\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}+O(\tau^{-4}).
\end{array}
$$
Here one can
apply the following formula to this right-hand side:
$$\displaystyle
\int_0^{\infty}se^{as}e^{ibs}ds=\frac{1}{(a+ib)^2},\,\,a<0.
$$
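This Laplace-type formula can be confirmed numerically. The following Python snippet, with illustrative values $a=-1.3$ and $b=0.7$, compares a truncated trapezoidal approximation of the integral with $1/(a+ib)^2$.

```python
import numpy as np

a, b = -1.3, 0.7                 # any a < 0 and real b
c = a + 1j*b

# Truncate at s = 60: |s e^{as}| is below 1e-30 there, so the tail is negligible.
N = 600_000
s, h = np.linspace(0.0, 60.0, N + 1, retstep=True)
f = s*np.exp(c*s)
approx = h*(f.sum() - 0.5*(f[0] + f[-1]))   # composite trapezoidal rule
exact = 1.0/c**2
```

With step $h=10^{-4}$ the quadrature error is of order $10^{-9}$, well within the tolerance below.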
Then one gets
$$\displaystyle
I=\frac{1}{\tau^3\tan\,\theta}
\int_0^{2\pi}
\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}+\tan\,\theta\,\mbox{\boldmath $n$}}
{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}\,dw+O(\tau^{-5}).
$$
Now this together with (1.23), (2.6) and (2.7) yields the desired formula.
\noindent
$\Box$
Now from (1.20) and (2.3) we have the integral representation of $C_{(p,\omega)}(\delta,Q,\vartheta)$:
$$\displaystyle
C_{(p,\omega)}(\delta,Q,\vartheta)
=\frac{1}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
\int_0^{2\pi}\frac{dw}{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}.
\tag {2.8}
$$
This formula shows that the constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ is independent
of $p$ and $\delta$ when $Q$ is given by (1.19).
By computing the integral on the right-hand side of (2.8) we obtain the explicit value of $C_{(p,\omega)}(\delta, Q,\vartheta)$.
\proclaim{\noindent Lemma 2.2.} We have: $C_{(p,\omega)}(\delta, Q,\vartheta)\not=0$ if and only if
$$\displaystyle
\frac{\sin\,\theta}
{1+\cos\,\theta}<\left\vert
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}
\right\vert<\frac{1+\cos\,\theta}{\sin\,\theta}
\tag {2.9}
$$
and then
$$\displaystyle
C_{(p,\omega)}(\delta, Q,\vartheta)
=2\pi\cos\theta\,\sin^2\,\theta\,
(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)\,)^{-3}.
\tag {2.10}
$$
\em \vskip2mm
{\it\noindent Proof.}
Set
$$\displaystyle
A=\mbox{\boldmath $l$}\cdot(\omega+i\vartheta), \,\,B=\mbox{\boldmath $m$}\cdot(\omega+i\vartheta),\,\,
C=-\frac{1}{\tan\,\theta}\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)
$$
and $z=e^{iw}$. One can write
$$\begin{array}{ll}
\displaystyle
\vartheta(w)\cdot(\omega+i\vartheta)
&
\displaystyle
=A\cos\,w+B\sin\,w+C\\
\\
\displaystyle
&
\displaystyle
=\frac{A}{2}(z+z^{-1})-i\frac{B}{2}(z-z^{-1})+C\\
\\
\displaystyle
&
\displaystyle
=\frac{1}{2z}
\{(A-iB)z^2+2Cz+(A+iB)\}.
\end{array}
$$
Here we claim
$$
\displaystyle
A-iB\equiv(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\not=0.
\tag {2.11}
$$
Assume, on the contrary, that $A-iB=0$.
Since we have $$\displaystyle
A-iB
=\mbox{\boldmath $l$}\cdot\omega+\mbox{\boldmath $m$}\cdot\vartheta
+i(\mbox{\boldmath $l$}\cdot\vartheta-\mbox{\boldmath $m$}\cdot\omega),
$$
it must hold that
$$\displaystyle
\mbox{\boldmath $l$}\cdot\omega=-\mbox{\boldmath $m$}\cdot\vartheta,\,\,
\mbox{\boldmath $m$}\cdot\omega=\mbox{\boldmath $l$}\cdot\vartheta.
\tag {2.12}
$$
Then we have
$$\begin{array}{ll}
\displaystyle
(\mbox{\boldmath $n$}\cdot\vartheta)^2
&
\displaystyle
=\vert\vartheta\vert^2-(\mbox{\boldmath $l$}\cdot\vartheta)^2-(\mbox{\boldmath $m$}\cdot\vartheta)^2
\\
\\
\displaystyle
&
\displaystyle
=\vert\omega\vert^2-(\mbox{\boldmath $l$}\cdot\omega)^2-(\mbox{\boldmath $m$}\cdot\omega)^2
\\
\\
\displaystyle
&
\displaystyle
=(\mbox{\boldmath $n$}\cdot\omega)^2.
\end{array}
\tag {2.13}
$$
On the other hand, we have
$$\displaystyle
0=\omega\cdot\vartheta
=(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta)
+(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)
+(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta).
$$
Here by (2.12) one has
$(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta)
+(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)=0$.
Thus one obtains
$$
\displaystyle
0=(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta).
$$
Now a combination of this and (2.13) yields $\mbox{\boldmath $n$}\cdot\omega=0$.
However, by (1.20) this is impossible.
Therefore we obtain the expression
$$\displaystyle
\vartheta(w)\cdot(\omega+i\vartheta)
=\frac{A-iB}{2z}f(z)\vert_{z=e^{iw}},
\tag {2.14}
$$
where
$$\displaystyle
f(z)=
\left(z+\frac{C}{A-iB}\right)^2
-\frac{C^2-(A^2+B^2)}{(A-iB)^2}.
$$
Here we write
$$\begin{array}{ll}
\displaystyle
C^2-(A^2+B^2)
&
\displaystyle
=\frac{1}{\tan^2\,\theta}
(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^2
-\{(\mbox{\boldmath $l$}\cdot(\omega+i\vartheta))^2
+(\mbox{\boldmath $m$}\cdot(\omega+i\vartheta))^2\}\\
\\
\displaystyle
&
\displaystyle
=
\frac{1}{\tan^2\,\theta}
\{(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\}\\
\\
\displaystyle
&
\displaystyle
\,\,\,
-\{(\mbox{\boldmath $l$}\cdot\omega)^2
+(\mbox{\boldmath $m$}\cdot\omega)^2
-(\mbox{\boldmath $l$}\cdot\vartheta)^2
-(\mbox{\boldmath $m$}\cdot\vartheta)^2
+2i(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta)
+2i(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)\}\\
\\
\displaystyle
&
\displaystyle
=\left(\frac{1}{\tan^2\,\theta}+1\right)(\mbox{\boldmath $n$}\cdot\omega)^2
-
\left(\frac{1}{\tan^2\,\theta}+1\right)(\mbox{\boldmath $n$}\cdot\vartheta)^2
+\frac{1}{\tan^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\\
\\
\displaystyle
&
\displaystyle
\,\,\,
-2i\{
(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta)
+(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)\}\\
\\
\displaystyle
&
\displaystyle
=\frac{1}{\sin^2\,\theta}\{(\mbox{\boldmath $n$}\cdot\omega)^2
-(\mbox{\boldmath $n$}\cdot\vartheta)^2\}
+\frac{1}{\sin^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)
\\
\\
\displaystyle
&
\displaystyle
\,\,\,
-2i\{
(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta)
+(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)
+(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\}
\\
\\
\displaystyle
&
\displaystyle
=\frac{1}{\sin^2\,\theta}\{(\mbox{\boldmath $n$}\cdot\omega)^2
-(\mbox{\boldmath $n$}\cdot\vartheta)^2\}
+\frac{1}{\sin^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)
\\
\\
\displaystyle
&
\displaystyle
\,\,\,
-2i\omega\cdot\vartheta.
\end{array}
$$
Since $\omega\cdot\vartheta=0$, we finally obtain
$$\displaystyle
C^2-(A^2+B^2)
=\left(\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{\sin\,\theta}\right)^2.
$$
Now set
$$\displaystyle
z_{\pm}=\frac{(\cos\,\theta\pm 1)}{\sin\,\theta}
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}.
\tag {2.15}
$$
Then one gets the factorization
$$\displaystyle
f(z)=(z-z_+)(z-z_{-}).
$$
By (2.15) we have $\vert z_+\vert>\vert z_{-}\vert$. Besides, from (2.2), (2.11) and (2.14) we have $f(e^{iw})\not=0$ for all $w\in\,[0,\,2\pi]$.
This ensures that the complex numbers $z_{+}$ and $z_{-}$ are not on the circle $\vert z\vert=1$.
Thus from (2.14) one gets
$$
\displaystyle
\int_0^{2\pi}
\frac{dw}
{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}
=\frac{4}{i(A-iB)^2}
\int_{\vert z\vert=1}\frac{zdz}{(z-z_{+})^2(z-z_{-})^2}.
\tag {2.16}
$$
The residue calculus yields
$$\displaystyle
\int_{\vert z\vert=1}\frac{zdz}{(z-z_{+})^2(z-z_{-})^2}
=
\left\{
\begin{array}{ll}
\displaystyle
0 & \mbox{if $\vert z_{-}\vert>1$,}
\\
\\
\displaystyle
0 & \mbox{if $\vert z_{-}\vert<1$ and $\vert z_{+}\vert<1$,}
\\
\\
\displaystyle
2\pi i\frac{z_{+}+z_{-}}{(z_{+}-z_{-})^3}\not=0
&
\mbox{if $\vert z_{-}\vert<1<\vert z_{+}\vert$.}
\end{array}
\right.
$$
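The nonzero case of this residue computation can be double-checked numerically. The following Python snippet, with arbitrarily chosen $z_-$ inside and $z_+$ outside the unit circle, evaluates the contour integral by the trapezoidal rule on $z=e^{it}$ and compares it with the residue value.

```python
import numpy as np

zp, zm = 2.0 - 0.5j, 0.4 + 0.2j      # |zm| < 1 < |zp|
N = 4096
t = 2*np.pi*np.arange(N)/N
z = np.exp(1j*t)
# contour integral: dz = i z dt; the periodic trapezoidal rule is
# spectrally accurate since the poles lie off the unit circle
integrand = z*(1j*z)/((z - zp)**2*(z - zm)**2)
num = (2*np.pi/N)*integrand.sum()
exact = 2*np.pi*1j*(zp + zm)/(zp - zm)**3
```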
And also (2.15) gives
$$\begin{array}{ll}
\displaystyle
2\pi i\frac{z_{+}+z_{-}}{(z_{+}-z_{-})^3}
&
\displaystyle
=2\pi i\cdot2\frac{\cos\theta}{\sin\theta}
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}
\cdot
(\frac{\sin\theta}{2})^3
\left\{\frac{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
\right\}^3
\\
\\
\displaystyle
&
\displaystyle
=\frac{\pi i}{2}\cos\,\theta\sin^2\,\theta
\left\{\frac{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2
\\
\\
\displaystyle
&
\displaystyle
=\frac{\pi i}{2}\cos\,\theta\sin^2\,\theta
\left\{\frac{A-iB}
{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2.
\end{array}
$$
Thus (2.16) yields
$$
\displaystyle
\int_0^{2\pi}
\frac{dw}
{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}
=2\pi\cos\,\theta\sin^2\,\theta
\left\{\frac{1}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2
$$
provided $\vert z_{-}\vert<1<\vert z_{+}\vert$.
From these together with (2.8) we obtain the desired conclusion.
\noindent
$\Box$
Note that (2.10) is nothing but (1.24).
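Formula (2.10) can also be verified by direct quadrature of (2.8). The following Python snippet is an illustrative check, with a hypothetical opening angle $\theta=0.6$ and angle $0.2$ between $\omega$ and $\mbox{\boldmath $n$}$ (so that condition (2.9) holds); it compares the quadrature value of (2.8) with the closed form (2.10).

```python
import numpy as np

theta = 0.6                               # opening angle, in ]0, pi/2[
phi = 0.2                                 # angle between omega and n, < pi/2 - theta
l = np.array([1.0, 0.0, 0.0])
m = np.array([0.0, 1.0, 0.0])
n = np.cross(l, m)                        # n = l x m
omega = np.array([np.sin(phi), 0.0, np.cos(phi)])
vt = np.array([0.0, 1.0, 0.0])            # a unit vector (the vartheta) in S(omega)
xi = omega + 1j*vt

# vartheta(w) from (2.1) and the w-integral in (2.8), by the trapezoidal rule
N = 4096
w = 2*np.pi*np.arange(N)/N
thw = (np.cos(w)[:, None]*l + np.sin(w)[:, None]*m - n/np.tan(theta))
integral = (2*np.pi/N)*np.sum(1.0/(thw @ xi)**2)
C_num = integral/(n @ xi)                 # right-hand side of (2.8)
C_formula = 2*np.pi*np.cos(theta)*np.sin(theta)**2*(n @ xi)**(-3.0)   # (2.10)
```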
Since (2.9) appears to depend on the choice of $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$, we further rewrite the quantity
$$\displaystyle
K(\vartheta;\omega,\mbox{\boldmath $n$})
=\left\vert
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}
\right\vert.
$$
We have
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
\vert(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\vert^2
\\
\\
\displaystyle
=(\mbox{\boldmath $l$}\cdot\omega+\mbox{\boldmath $m$}\cdot\vartheta)^2+
(\mbox{\boldmath $l$}\cdot\vartheta-\mbox{\boldmath $m$}\cdot\omega)^2
\\
\\
\displaystyle
=2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2(\mbox{\boldmath $l$}\cdot\omega\,\mbox{\boldmath $m$}\cdot\vartheta-\mbox{\boldmath $l$}\cdot\vartheta\,\mbox{\boldmath $m$}\cdot\omega).
\end{array}
$$
Here we see that
$$\displaystyle
\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)
=\mbox{\boldmath $l$}\cdot\omega\,\mbox{\boldmath $m$}\cdot\vartheta-\mbox{\boldmath $l$}\cdot\vartheta\,\mbox{\boldmath $m$}\cdot\omega.
$$
Thus one has
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
\vert(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\vert^2
\\
\\
\displaystyle
=2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta).
\end{array}
$$
Therefore we obtain
$$\displaystyle
K(\vartheta;\omega,\mbox{\boldmath $n$})=\frac{\displaystyle
\sqrt{(\mbox{\boldmath $n$}\cdot\omega)^2+(\mbox{\boldmath $n$}\cdot\vartheta)^2
}}
{\displaystyle
\sqrt{
2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)}
}.
$$
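That $K(\vartheta;\omega,\mbox{\boldmath $n$})$ indeed does not depend on the choice of $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$ can be checked numerically. The Python snippet below (with randomly generated unit vectors, an illustration only) rotates an admissible pair $(\mbox{\boldmath $l$},\mbox{\boldmath $m$})$ about $\mbox{\boldmath $n$}$ and compares the original definition with the rewritten expression.

```python
import numpy as np

rng = np.random.default_rng(7)

def unit(v):
    return v/np.linalg.norm(v)

n = unit(rng.normal(size=3))
omega = unit(rng.normal(size=3))
t = rng.normal(size=3)
vt = unit(t - (t @ omega)*omega)          # a unit vector in S(omega)
xi = omega + 1j*vt

# An orthonormal pair (l0, m0) with n = l0 x m0; rotating it about n
# by any angle alpha gives another admissible pair.
l0 = unit(np.cross(n, np.array([1.0, 0.0, 0.0])))
m0 = np.cross(n, l0)

def K(alpha):
    l = np.cos(alpha)*l0 + np.sin(alpha)*m0
    m = -np.sin(alpha)*l0 + np.cos(alpha)*m0
    return abs(n @ xi)/abs((l - 1j*m) @ xi)

K_formula = (np.sqrt((n @ omega)**2 + (n @ vt)**2)
             /np.sqrt(2 - (n @ omega)**2 - (n @ vt)**2
                      + 2*n @ np.cross(omega, vt)))
```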
Besides, we have
$$\displaystyle
\frac{1-\cos\,\theta}{\sin\,\theta}=\tan\,\frac{\theta}{2}
$$
and
$$\displaystyle
\frac{1+\cos\,\theta}{\sin\,\theta}=\frac{1}{\tan\,\frac{\theta}{2}}.
$$
Thus (2.9) is equivalent to the condition
$$\displaystyle
\tan\,\frac{\theta}{2}<
K(\vartheta;\omega,\mbox{\boldmath $n$})
<\frac{1}{\tan\,\frac{\theta}{2}}.
\tag {2.17}
$$
Here we consider the case $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$.
Choose
$$\displaystyle
\vartheta=\frac{\omega\times\mbox{\boldmath $n$}}
{\vert\omega\times\mbox{\boldmath $n$}\vert}.
$$
We have $\vartheta\cdot\omega=\vartheta\cdot\mbox{\boldmath $n$}=0$ and $\vartheta\in S^2$.
Since we have
$$\displaystyle
\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)=-\vert\omega\times\mbox{\boldmath $n$}\vert
$$
and
$$\displaystyle
1=(\mbox{\boldmath $n$}\cdot\omega)^2+\vert\omega\times\mbox{\boldmath $n$}\vert^2,
$$
one gets
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)
\\
\\
\displaystyle
=1+\vert\omega\times\mbox{\boldmath $n$}\vert^2-2\vert\omega\times\mbox{\boldmath $n$}\vert\\
\\
\displaystyle
=(1-\vert\omega\times\mbox{\boldmath $n$}\vert)^2.
\end{array}
$$
Therefore, we obtain
$$\displaystyle
K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})
=\frac{\omega\cdot\mbox{\boldmath $n$}}{1-\vert\omega\times\mbox{\boldmath $n$}\vert}.
$$
Note that we are considering $\omega$ satisfying (1.20). Let $\varphi$ denote the angle between $\omega$ and $\mbox{\boldmath $n$}$.
Under the condition $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$, we see that (1.20) is equivalent to the condition
$$\displaystyle
0<\varphi<\frac{\pi}{2}-\theta.
\tag {2.18}
$$
Then one can write
$$\begin{array}{ll}
\displaystyle
K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})
&
\displaystyle
=\frac{\cos\,\varphi}{1-\sin\,\varphi}
\\
\\
\displaystyle
&
\displaystyle
=\frac{1+\sin\,\varphi}{\cos\,\varphi}
\\
\\
\displaystyle
&
\displaystyle
=\frac{\displaystyle 1+\cos\,(\frac{\pi}{2}-\varphi)}{\displaystyle
\sin\,(\frac{\pi}{2}-\varphi)}
\\
\\
\displaystyle
&
\displaystyle
=\frac{1}{\displaystyle\tan\,\frac{1}{2}\,(\frac{\pi}{2}-\varphi)}
\end{array}
$$
Thus (2.18) gives
$$\displaystyle
1<K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})<\frac{1}{\displaystyle \tan\,\frac{\theta}{2}}.
\tag {2.19}
$$
Since we have $\tan\,\frac{\theta}{2}<1$ for all $\theta\in\,]0,\,\frac{\pi}{2}[$, (2.19) yields the validity of (2.17).
Next consider the case $\omega\times\mbox{\boldmath $n$}=\mbox{\boldmath $0$}$. By (1.20) we have $\omega=\mbox{\boldmath $n$}$.
Then every $\vartheta$ perpendicular to $\mbox{\boldmath $n$}$ satisfies
$$\displaystyle
K(\vartheta;\mbox{\boldmath $n$}, \mbox{\boldmath $n$})
=1.
$$
This yields that (2.17) is valid for all $\theta\in\,]0,\,\frac{\pi}{2}[$.
The results above are summarized as follows.
Given $\omega\in S^2$ with (1.20) define the subset of $S^2$
$$\displaystyle
{\cal K}(\omega;\mbox{\boldmath $n$},\theta)
=
\left\{
\vartheta\in S^2\,\left\vert\right.\,\vartheta\cdot\omega=0,
\,\,\mbox{$K(\vartheta;\omega,\mbox{\boldmath $n$})$ satisfies (2.19)\,}\,\right\}.
$$
Then, we have
$\bullet$ If $\omega\not=\mbox{\boldmath $n$}$, then $\omega\times\mbox{\boldmath $n$}\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
$\bullet$ If $\omega=\mbox{\boldmath $n$}$, then ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)=
\{\vartheta\in S^2\,\vert\,\vartheta\cdot\omega=0\}\equiv S(\omega)$.
Thus, in any case, the set ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is non-empty and clearly open
in $S(\omega)$ endowed with the relative topology
of $S^2$.
Besides, we can say more about ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
We claim that the set ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is also closed.
For this, it suffices to show that if a sequence $\{\vartheta_n\}$ of ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$
converges to a point $\vartheta\in S(\omega)$, then $\vartheta\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
This is proved as follows. By assumption, each $\vartheta_n$ satisfies
$$\displaystyle
\tan\,\frac{\theta}{2}<
K(\vartheta_n;\omega,\mbox{\boldmath $n$})
<\frac{1}{\tan\,\frac{\theta}{2}}.
$$
Taking the limit, we have
$$\displaystyle
\tan\,\frac{\theta}{2}\le
K(\vartheta;\omega,\mbox{\boldmath $n$})
\le\frac{1}{\tan\,\frac{\theta}{2}}.
$$
By (2.15) this is equivalent to $\vert z_{+}\vert\ge 1$ and $\vert z_{-}\vert\le 1$. However, from the proof of
Lemma 2.2 we know that
$\vert z_{+}\vert\not=1$ and $\vert z_{-}\vert\not=1$. Thus we have $\vert z_{+}\vert>1$ and $\vert z_{-}\vert<1$.
This is equivalent to $\vartheta\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
Since $S(\omega)$ is connected and ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is non-empty, open and closed,
we conclude that ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)=S(\omega)$.
This completes the proof of Proposition 1.3.
\section{Proof of Corollaries 1.4 and 1.5}
Note that $\omega$ satisfies (1.20).
\subsection{On Corollary 1.4}
From (1.26), if $\omega=\mbox{\boldmath $n$}$, then for all $\vartheta\in S(\omega)$ we have
$$\displaystyle
I(\omega,\vartheta)=
6\tilde{\rho}(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot\omega)^{-3}.
$$
On the other hand, if $\omega\not=\mbox{\boldmath $n$}$, then we have $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$ (under the condition (1.20))
and
$$\displaystyle
S(\omega)\cap S(\mbox{\boldmath $n$})=\left\{\pm\frac{\omega\times\mbox{\boldmath $n$}}{\vert \omega\times\mbox{\boldmath $n$}\vert}\right\}.
$$
Thus one gets
$$\displaystyle
I(\omega,\vartheta)
=
\begin{array}{ll}
\displaystyle
6\tilde{\rho}(p(\omega))\,V(\theta)
\left(\mbox{\boldmath $n$}\cdot\omega\mp\,i\frac{\vert\omega\times\mbox{\boldmath $n$}\vert^2}{\vert\omega\times(\omega\times\mbox{\boldmath $n$})\vert}\right)^{-3}
& \mbox{for $\displaystyle\vartheta=\pm\frac{\omega\times(\omega\times\mbox{\boldmath $n$})}{\vert\omega\times(\omega\times\mbox{\boldmath $n$})\vert}$.}
\end{array}
$$
This yields assertion (i) and formula (1.27) in (ii).
For (1.28) it suffices to prove the following fact.
\proclaim{\noindent Lemma 3.1.} Let the unit vectors $\omega$ and $\mbox{\boldmath $n$}$ satisfy $\omega\cdot\mbox{\boldmath $n$}\not=0$. We have
$$\displaystyle
\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3}
=\pi(3(\mbox{\boldmath $n$}\cdot\omega)^2-1).
\tag {3.1}
$$
\em \vskip2mm
{\it\noindent Proof.} Since the right-hand side of (3.1) is invariant under the change $\omega\rightarrow-\omega$, it is easy to see that
the case $\omega\cdot\mbox{\boldmath $n$}<0$ can be derived from the result in the case $\omega\cdot\mbox{\boldmath $n$}>0$.
Thus, hereafter we prove the validity of (3.1) only in the latter case.
If $\mbox{\boldmath $n$}\cdot\omega=1$, then $\omega=\mbox{\boldmath $n$}$. Thus $S(\omega)=S(\mbox{\boldmath $n$})$. Then
for all $\vartheta\in S(\omega)$ we have $\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)=1$. This yields
$$\displaystyle
\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3}
=2\pi.
$$
Thus it remains to consider the case $\mbox{\boldmath $n$}\cdot\omega\not=1$. Choose an orthogonal $3\times 3$ matrix $A$ such that
$A^T\omega=\mbox{\boldmath $e$}_3$. Introduce the change of variables $\vartheta=A\vartheta'$.
We have $\vartheta\in S(\omega)$ if and only if $\vartheta'\in S(\mbox{\boldmath $e$}_3)$
and
$$\begin{array}{ll}
\displaystyle
\mbox{\boldmath $n$}\cdot(\omega+iA\vartheta')
&
\displaystyle
=\mbox{\boldmath $n$}'\cdot(\mbox{\boldmath $e$}_3+i\vartheta'),
\end{array}
$$
where $\mbox{\boldmath $n$}'=A^T\mbox{\boldmath $n$}\in S^{2}$.
Here we introduce the polar coordinates for $\vartheta'\in S(\mbox{\boldmath $e$}_3)$:
$$\begin{array}{ll}
\displaystyle
\vartheta'=(\cos\,\varphi,\sin\,\varphi, 0)^T, & \varphi\in\,[0,\,2\pi[.
\end{array}
$$
Then, we have
$$\begin{array}{ll}
\displaystyle
I\equiv\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3}
&
\displaystyle
=\int_0^{2\pi}\frac{d\varphi}
{(\mbox{\boldmath $n$}'\cdot(i\cos\,\varphi,i\sin\,\varphi,1)^T)^3}
\\
\\
\displaystyle
&
\displaystyle
=-\frac{1}{i}\int_0^{2\pi}\frac{d\varphi}
{(\mbox{\boldmath $n$}'\cdot(\cos\,\varphi,\sin\,\varphi,-i)^T)^3}
\\
\\
\displaystyle
&
\displaystyle
=i\int_0^{2\pi}\frac{d\varphi}
{(a\cos\,\varphi+b\sin\,\varphi-ic)^3},
\end{array}
\tag {3.2}
$$
where $\mbox{\boldmath $n$}'=(a,b,c)^T$. The numbers $a, b, c$ satisfy $a^2+b^2+c^2=1$ and $0<c<1$
since we have $c=\mbox{\boldmath $n$}'\cdot\mbox{\boldmath $e$}_3=\mbox{\boldmath $n$}\cdot\omega$.
Thus $a^2+b^2\not=0$. To compute the integral on the right-hand side of (3.2) we make use of the residue calculus.
The change of variables $z=e^{i\varphi}$ gives
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
a\cos\,\varphi+b\sin\,\varphi-ic
\\
\\
\displaystyle
=\frac{1}{2}
\left\{a\left(z+\frac{1}{z}\right)+\frac{b}{i}\left(z-\frac{1}{z}\right)-2ic\right\}
\\
\\
\displaystyle
=\frac{1}{2z}
\left\{(a-ib)z^2-2icz+(a+ib)\right\}
\\
\\
\displaystyle
=\frac{a-ib}{2z}
\left\{\left(z-\frac{ic}{a-ib}\right)^2-\left(\frac{i}{a-ib}\right)^2\right\}
\\
\\
\displaystyle
=\frac{a-ib}{2z}(z-\alpha)(z-\beta),
\end{array}
\tag {3.3}
$$
where
$$\begin{array}{ll}\displaystyle
\alpha=\frac{i(c+1)}{a-ib}, &
\displaystyle
\beta=\frac{i(c-1)}{a-ib}.
\end{array}
$$
Since $1-c<1+c$ and $a\cos\,\varphi+b\sin\,\varphi-ic\not=0$ for $z=e^{i\varphi}$, we have
$\vert\beta\vert<1<\vert\alpha\vert$.
Substituting (3.3) into (3.2) and using $d\varphi=\frac{dz}{iz}$, we have
$$\begin{array}{ll}
\displaystyle
I
&
\displaystyle
=i\int_{\vert z\vert=1}\frac{2^3}{(a-ib)^3}\cdot\frac{z^3}{(z-\alpha)^3(z-\beta)^3}\cdot\frac{dz}{iz}
\\
\\
\displaystyle
&
\displaystyle
=\left(\frac{2}{a-ib}\right)^3\int_{\vert z\vert=1}\,\frac{z^2 dz}{(z-\alpha)^3(z-\beta)^3}.
\end{array}
\tag {3.4}
$$
The residue calculus yields
$$\begin{array}{ll}
\displaystyle
\int_{\vert z\vert=1}\,\frac{z^2 dz}{(z-\alpha)^3(z-\beta)^3}
&
\displaystyle
=2\pi i\,\mbox{Res}_{z=\beta}\,\left(\frac{z^2}{(z-\alpha)^3(z-\beta)^3}\right)
\\
\\
\displaystyle
&
\displaystyle
=2\pi i\cdot\frac{1}{2}\frac{d^2}{dz^2}\left(\frac{z^2}{(z-\alpha)^3}\right)\vert_{z=\beta}
\\
\\
\displaystyle
&
\displaystyle
=2\pi i\cdot\frac{\alpha^2+4\alpha\beta+\beta^2}{(\beta-\alpha)^5}.
\end{array}
\tag {3.5}
$$
Here we have the expression
$$\displaystyle
\alpha-\beta=\frac{2i}{a-ib}
$$
and
$$\begin{array}{ll}
\displaystyle
\alpha^2+4\alpha\beta+\beta^2
&
\displaystyle
=-\frac{(c+1)^2+4(c^2-1)+(c-1)^2}{(a-ib)^2}
\\
\\
\displaystyle
&
\displaystyle
=-\frac{2(3c^2-1)}{(a-ib)^2}.
\end{array}
$$
Thus from (3.4) and (3.5) we obtain
$$\begin{array}{ll}
\displaystyle
I
&
\displaystyle
=-2\pi\left(\frac{a-ib}{2}\right)^2(\alpha^2+4\alpha\beta+\beta^2)
\\
\\
\displaystyle
&
\displaystyle
=\pi(3c^2-1).
\end{array}
$$
This completes the proof of (3.1).
\noindent
$\Box$
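Lemma 3.1 can also be checked by direct numerical quadrature. In the following Python snippet (with an illustrative angle of $0.7$ between $\omega$ and $\mbox{\boldmath $n$}$, so that $\mbox{\boldmath $n$}\cdot\omega>0$), the integral over $S(\omega)$ is parameterized as in the proof and compared with $\pi(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)$.

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])
phi = 0.7
omega = np.array([np.sin(phi), 0.0, np.cos(phi)])      # n.omega = cos(phi) > 0

# Orthonormal basis (e1, e2) of the plane orthogonal to omega
e1 = np.array([0.0, 1.0, 0.0])
e2 = np.cross(omega, e1)

# Parameterize S(omega) and integrate with the (spectrally accurate,
# since the integrand is periodic and analytic) trapezoidal rule
N = 4096
t = 2*np.pi*np.arange(N)/N
n_dot_vt = np.cos(t)*(n @ e1) + np.sin(t)*(n @ e2)
I_num = (2*np.pi/N)*np.sum(1.0/((n @ omega) + 1j*n_dot_vt)**3)
I_exact = np.pi*(3*(n @ omega)**2 - 1)
```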
\subsection{On Corollary 1.5}
Let us explain the uniqueness of the solution of the quintic equation (1.30) in $]\frac{1}{\sqrt{3}},\,1]$.
From (1.27), (1.28) and (1.29) we have
$$\displaystyle
\frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}
=(\mbox{\boldmath $n$}\cdot\omega)^3(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)
$$
and thus
$$\displaystyle
0<
\frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}
\le 2.
$$
Since the map $]\,\frac{1}{\sqrt{3}},\,\,1]\ni\gamma\longmapsto\gamma^3(3\gamma^2-1)\in\,]0,\,2]$ is bijective,
the solution of the quintic equation (1.30) in $]\frac{1}{\sqrt{3}},\,1]$ is unique and is exactly
$\gamma=\mbox{\boldmath $n$}\cdot\omega$.
The formulae (1.31) and (1.32) are derived as follows. A combination of (1.26) and (1.28) yields
$$\displaystyle
(\mbox{\boldmath $n$}\cdot\omega+i\mbox{\boldmath $n$}\cdot\vartheta)^3
=T(\omega,\vartheta).
$$
By expanding the left-hand side, we obtain immediately the desired formulae.
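Concretely, expanding gives $T=(\gamma+i\mu)^3=\gamma^3-3\gamma\mu^2+i\,\mu(3\gamma^2-\mu^2)$, so the real part yields (1.31) and the imaginary part yields (1.32). A minimal Python sketch of this recovery, with hypothetical values of $\gamma$ and $\mu$:

```python
gamma, mu = 0.9, -0.35               # hypothetical values of n.omega and n.theta
T = (gamma + 1j*mu)**3               # the cube produced by (1.26) and (1.28)

mu2 = (gamma**3 - T.real)/(3*gamma)  # formula (1.31): recovers mu^2
mu_rec = T.imag/(3*gamma**2 - mu2)   # formula (1.32): recovers mu itself
```

Note that (1.32) returns $\mu$ with its sign, so no square root (and no sign ambiguity) is needed.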
\section{Application to an inverse obstacle problem}
As pointed out in \cite{Ik} the enclosure method developed here can be applied also to an inverse obstacle problem in three dimensions governed by
the equation
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2 n(x)u=0, & x\in\Omega,
\end{array}
\tag {4.1}
$$
where $k$ is a fixed positive number. We assume that $\partial\Omega\in C^{\infty}$, for simplicity.
Both $u$ and $n$ can be complex-valued functions.
In this section we assume that $n(x)$ takes the form $n(x)=1+F(x)$, $x\in\Omega$, where
$F=F_{\rho,D}(x)$ is given by (1.7).
We assume that $\rho\in L^{\infty}(D)$ instead of $\rho\in L^2(D)$ and that $u\in H^2(\Omega)$ is an arbitrary non-trivial solution of (4.1)
at this stage.
We never specify the boundary condition of $u$ on $\partial\Omega$.
By the Sobolev imbedding theorem \cite{G} one may assume that $u\in C^{0,\alpha}(\overline\Omega)$ with $0<\alpha<1$.
In this section we consider
{\bf\noindent Problem 2.} Extract information about the singularity of $D$ from the Cauchy data of $u$ on $\partial\Omega$.
We encounter this type of problem when, for example, $u$ is given by the restriction to $\Omega$ of the total wave
defined in the whole space and generated by a point source located outside $\Omega$, or by a single plane wave coming from infinity.
The surface where the measurements are taken is given by $\partial\Omega$ which encloses the penetrable obstacle $D$
with a different reflection index $1+\rho$, $\rho\not\equiv 0$. See \cite{CK} for detailed information about the direct problem itself.
In any case, we start from the Cauchy data of an arbitrary (nontrivial) $H^2(\Omega)$ solution of (4.1).
Using the Cauchy data of $u$ on $\partial\Omega$, we introduce the indicator function
$$\displaystyle
I_{\omega,\vartheta}(\tau)=\int_{\partial\Omega}
\left(\frac{\partial u}{\partial\nu}v-\frac{\partial v}{\partial\nu} u\right)\,dS,
\tag {4.2}
$$
where the function $v=v(x), x\in{\rm \bf R}^3$ is given by
$$\displaystyle
v=e^{x\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\vartheta)},\,\,\tau>0
$$
and $\vartheta\in S(\omega)$.
Its derivative with respect to $\tau$ is given by the formula
$$\displaystyle
I_{\omega,\vartheta}'(\tau)
=\int_{\partial\Omega}\left(\frac{\partial u}{\partial\nu}\,v_{\tau}-\frac{\partial\,v_{\tau}}{\partial\nu} u\right)\,dS,
\tag {4.3}
$$
where
$$\displaystyle
v_{\tau}=\partial_{\tau}v=\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v.
$$
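Although not part of the paper, the structure of $v$ can be checked numerically: since $\omega$ and $\vartheta$ are orthogonal unit vectors, the vector $c=\tau\omega+i\sqrt{\tau^2+k^2}\,\vartheta$ satisfies $c\cdot c=-k^2$, so $v=e^{x\cdot c}$ solves $\Delta v+k^2v=0$, and $v_{\tau}$ is its $\tau$-derivative. A minimal NumPy sketch (the values of $k$, $\tau$, $\omega$, $\vartheta$, $x$ are illustrative):

```python
import numpy as np

k, tau = 1.7, 5.0
omega = np.array([0.0, 0.0, 1.0])   # unit direction omega
theta = np.array([1.0, 0.0, 0.0])   # unit vector in S(omega), orthogonal to omega

def v(x, tau):
    # v = exp(x . (tau*omega + i*sqrt(tau^2+k^2)*theta)), as in (4.2)
    c = tau * omega + 1j * np.sqrt(tau**2 + k**2) * theta
    return np.exp(x @ c)

# c.c = tau^2 - (tau^2 + k^2) = -k^2, hence Delta v + k^2 v = 0.
c = tau * omega + 1j * np.sqrt(tau**2 + k**2) * theta
assert np.isclose(c @ c, -k**2)

# v_tau agrees with a centered finite-difference derivative in tau.
x = np.array([0.3, -0.2, 0.8])
v_tau = (x @ (omega + 1j * tau / np.sqrt(tau**2 + k**2) * theta)) * v(x, tau)
fd = (v(x, tau + 1e-6) - v(x, tau - 1e-6)) / 2e-6
assert np.isclose(v_tau, fd)
```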
As in the proof of Theorem 1.1, integration by parts yields
$$\displaystyle
I_{\omega,\vartheta}(\tau)=-k^2\int_D\rho(x)u(x)v\,dx
$$
and
$$\displaystyle
I_{\omega,\vartheta}'(\tau)=-k^2\int_D\rho(x)u(x)v_{\tau}\,dx.
$$
Thus this can be viewed as the case in which $\rho(x)$ in Problem 1 is replaced by $-k^2\rho(x)u(x)$ and $\tilde{\rho}(x)$ in Definition 1.2
by $-k^2\tilde{\rho}(x)u(x)$.
Therefore we obtain
\proclaim{\noindent Theorem 4.1.}
Let $\omega$ be regular with respect to $D$ and assume that
$D$ has a conical singularity from direction $\omega$.
Then, we have
$$\displaystyle
\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}(\tau)=
-k^2\tilde{\rho}(p(\omega))\,u(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
+O(\tau^{-\alpha})
$$
and
$$\displaystyle
\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}'(\tau)=
-k^2\tilde{\rho}(p(\omega))\,u(p(\omega))(h_D(\omega)+ip(\omega)\cdot\vartheta)\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
+O(\tau^{-\alpha}).
$$
The remainder $O(\tau^{-\alpha})$ is uniform with respect to $\vartheta\in S(\omega)$.
\em \vskip2mm
Thus, under the same assumptions as in Theorem 4.1, for each $\vartheta\in S(\omega)$ one can calculate
$$\displaystyle
I(\omega,\vartheta)\equiv -k^2\tilde{\rho}(p(\omega))\,u(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
$$
via the formula
$$\displaystyle
I(\omega,\vartheta)
=\lim_{\tau\rightarrow\infty}\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}
I_{\omega,\vartheta}(\tau)
\tag {4.4}
$$
by using the Cauchy data of $u$ on $\partial\Omega$ if $p(\omega)$ is known.
We also have
\proclaim{\noindent Theorem 4.2.}
Let $\omega$ be regular with respect to $D$.
Assume that $D$ has a conical singularity from direction $\omega$;
$n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of
Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies
$$\displaystyle
u(p(\omega))\not=0.
\tag {4.5}
$$
If the direction $\vartheta\in S(\omega)$ satisfies the condition (1.15), then all the formulae (1.16), (1.17)
and (1.18) for the indicator function defined by (4.2) together with its derivative (4.3) are valid.
\em \vskip2mm
Note that the assumption (4.5) ensures $u\not\equiv 0$. See Appendix for an example of $u$ satisfying (4.5).
The following corollaries correspond to Corollaries 1.1 and 1.2.
\proclaim{\noindent Corollary 4.1.}
Let $\omega$ be regular with respect to $D$.
Under the same assumptions as those in Theorem 4.2 the point $p(\omega)$ is uniquely determined by
the Cauchy data of $u$ on $\partial\Omega$.
\em \vskip2mm
\proclaim{\noindent Corollary 4.2.} Let $u\in H^2(\Omega)$ be a solution of (4.1).
Assume that $D$ is given by the inside of a convex polyhedron, that in a neighbourhood of each vertex $p$ of $D$
the set $D$ coincides with the inside of a tetrahedron with apex $p$, that $n-1=F_{\rho, D}$ given by (1.7)
is active at $p$, and that the value of $u$ at $p$ satisfies (4.5).
Then, all the formulae (1.16), (1.17) and (1.18) for the indicator function defined by (4.2) together with its derivative (4.3)
are valid for all $\omega$ regular with respect to $D$
and $\vartheta\in S(\omega)$.
Besides, the Cauchy data of $u$ on $\partial\Omega$ uniquely determines $D$.
\em \vskip2mm
The following result is an extension of Theorem 4.1 in \cite{Ik} to the three-dimensional case.
\proclaim{\noindent Corollary 4.3.} Let $u\in H^2(\Omega)$ be a solution of (4.1).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity
at $p=p(\omega)$; $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of
Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5).
Choose two linearly independent vectors $\vartheta=\vartheta_1$ and $\vartheta_2$ in $S(\omega)$.
Then, the point $p(\omega)$ itself and thus $h_D(\omega)=p(\omega)\cdot\omega$ can be extracted from the Cauchy data of $u$ on $\partial\Omega$
by using the formula
$$\displaystyle
p(\omega)\cdot\omega+i\,p(\omega)\cdot\vartheta_j
=\lim_{\tau\rightarrow\infty}
\frac{I_{\omega,\vartheta_j}'(\tau)}{I_{\omega,\vartheta_j}(\tau)},\,\,\,j=1,2.
\tag {4.6}
$$
\em \vskip2mm
By virtue of the formula (1.24), the function $I(\omega,\,\cdot\,)$ has the expression
$$\displaystyle
I(\omega,\vartheta)=-6k^2\,\tilde{\rho}(p(\omega))u(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^{-3}.
\tag {4.7}
$$
Similarly to Corollary 1.4, formula (4.7) immediately yields the following results.
\proclaim{\noindent Corollary 4.4.} Let $u\in H^2(\Omega)$ be a solution of (4.1).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity
at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ for some $\epsilon>0$.
\noindent
(i) Assume that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of
Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5).
The vector $\omega$ coincides with $\mbox{\boldmath $n$}$ if and only if the function
$I(\omega,\,\cdot\,)$ is a constant function.
\noindent
(ii) The vector $\mbox{\boldmath $n$}$ and the angle $\theta$ of $V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)$,
together with $\tilde{\rho}(p(\omega))\,u(p(\omega))$, satisfy the following two equations:
$$\displaystyle
6k^2\,\vert\tilde{\rho}(p(\omega))\,u(p(\omega))\vert\,V(\theta)=(\mbox{\boldmath $n$}\cdot\omega)^3
\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert;
\tag {4.8}
$$
$$\displaystyle
-6k^2\,\tilde{\rho}(p(\omega))u(p(\omega))
\,V(\theta)\,(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)
=\frac{1}{\pi}\,\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta).
\tag {4.9}
$$
\em \vskip2mm
Using the equations (4.7), (4.8) and (4.9) one gets the following corollary.
\proclaim{\noindent Corollary 4.5.} Let $u\in H^2(\Omega)$ be a solution of (4.1).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity
at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ for some $\epsilon>0$.
Assume that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of
Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5).
Assume also that $\omega\approx\mbox{\boldmath $n$}$ in the sense that (1.29) holds.
Then, we have exactly the same statement and formulae as those of Corollary 1.5.
\em \vskip2mm
Note that under the same assumptions as Corollary 4.5, one can finally calculate the quantity
$$\displaystyle
\tilde{\rho}(p(\omega))\,u(p(\omega))\,V(\theta)
\tag {4.10}
$$
and $\mbox{\boldmath $n$}$ from the Cauchy data of $u$ on $\partial\Omega$.
Since the steps of the calculation are similar to those presented in Subsection 1.2 for the inverse source problem, we omit the description.
However, it should be noted that if, in addition, $\tilde{\rho}(p(\omega))$ is known to be a {\it real number},
then one can recover the phase of the complex number $u(p(\omega))$ modulo $2\pi$ from the computed value (4.10).
{\bf\noindent Remark 4.1.}
One can apply the result in \cite{IkC} to the computation of the value $u(p(\omega))$ itself.
For simplicity we assume that $\Omega$ is convex, as in the case where $\Omega=B_R(x_0)$ is a ball centered at a point $x_0$ with a large radius $R$.
From formula (4.6) we know the position of $p(\omega)$ and thus the domain $\Omega\cap\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega>p(\omega)\cdot\omega\}$.
Because of the continuity of $u$ on $\overline\Omega$, one has, for sufficiently small $\epsilon>0$,
$$\displaystyle
u(p(\omega))\approx u(p(\omega)+\epsilon\,\omega).
$$
Since the point $p(\omega)+\epsilon\,\omega$ belongs to $\Omega\cap\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega>p(\omega)\cdot\omega\}$,
where $u\in H^2$ satisfies the Helmholtz equation $\Delta u+k^2u=0$, one can calculate the value
$u(p(\omega)+\epsilon\,\omega)$ itself from the Cauchy data of $u$ on $\partial\Omega\cap\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega>p(\omega)\cdot\omega\}$
by using Theorem 1 in \cite{IkC}.
\section{Final remark}
All the results in this paper can be extended also to the case when the governing equation of the background
medium is given by a Helmholtz equation with a known coefficient $n_0(x)$. It means that
if one considers, instead of (1.6) and (4.1), the equations
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2n_0(x)u=F_{\rho,D}(x), & x\in\Omega
\end{array}
$$
and
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2(n_0(x)+F_{\rho,D}(x))u=0, & x\in\Omega,
\end{array}
$$
respectively, then one could obtain all the corresponding results.
\section{Appendix. On condition (4.5)}
As suggested in \cite{Ik}, the condition (4.5) is satisfied for all sufficiently small $k$ when $u$ is given by the restriction to $\Omega$ of the total
field $U$ of the whole-space scattering problem generated by, for example, a point source located at a point $z$ in ${\rm \bf R}^3\setminus\overline\Omega$.
This $U$ has the expression $U=\Phi(x,z)+w_z(x)$, where
$$\begin{array}{ll}
\displaystyle
\Phi(x,z)=\frac{1}{4\pi}\frac{e^{ik\vert x-z\vert}}{\vert x-z\vert}, & x\in{\rm \bf R}^3\setminus\{z\}
\end{array}
$$
and $w_z\in H^2_{\mbox{local}}({\rm \bf R}^3)$ is the unique solution of the inhomogeneous Helmholtz equation
$$\begin{array}{ll}
\displaystyle
\Delta w_z+k^2w_z+k^2F(x)(w_z+\Phi(x,z))=0, & x\in{\rm \bf R}^3
\end{array}
$$
with the outgoing Sommerfeld radiation condition
$$\displaystyle
\lim_{r\rightarrow\infty}r\left(\frac{\partial}{\partial r}w_z(x)-ik w_z(x)\right)=0,
$$
where $r=\vert x\vert$ and $F=F_{\rho,D}$ is given by (1.7). See \cite{CK} for the solvability.
Here we claim
\proclaim{\noindent Proposition A.}
Let $0<R_1<R_2$ satisfy $D\subset B_{R_2}(z)\setminus\overline B_{R_1}(z)$.
Let $M>0$ and $R>0$ satisfy $\vert D\vert\le M$ and $\Vert\rho\Vert_{L^{\infty}(D)}\le R$, respectively.
If $k$ satisfies the system of inequalities
$$\displaystyle
C\equiv \frac{3k^2R}{2}\left(\frac{M}{4\pi}\right)^{2/3}<1
\tag {A.1}
$$
and
$$\displaystyle
\frac{C}{1-C}<\frac{R_1}{R_2},
\tag {A.2}
$$
then, for all $x\in\overline D$ we have
$$\displaystyle
\vert U(x)\vert\ge
\frac{1}{4\pi}\left(\frac{1}{R_2}-\frac{C}{1-C}\frac{1}{R_1}\,\right).
\tag {A.3}
$$
\em \vskip2mm
{\it\noindent Proof.} Note that $w_z\in C^{0,\alpha}(\overline\Omega)$ with $0<\alpha<1$ by the Sobolev imbedding theorem.
It is well known that the function $w_z$ satisfies the Lippmann-Schwinger equation
$$\begin{array}{ll}
\displaystyle
w_z(x)
&
\displaystyle
=k^2\int_{D}\Phi(x,y)\rho(y)w_z(y)\,dy+k^2\int_{D}\Phi(x,y)\Phi(y,z)\rho(y)\,dy
\end{array}
$$
and thus, for all $x\in\overline{D}$ we have
$$\displaystyle
\vert w_z(x)\vert
\le
\frac{k^2 R}{4\pi}
\left(\Vert w_z\Vert_{L^{\infty}(D)}
+\frac{1}{4\pi\,R_1}\right)\,
\int_D\frac{dy}{\vert x-y\vert}.
\tag {A.4}
$$
Let $\epsilon>0$. We have
$$\begin{array}{ll}
\displaystyle
\int_D\frac{dy}{\vert x-y\vert}
&
\displaystyle
=\int_{D\cap B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}+\int_{D\setminus B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}
\\
\\
\displaystyle
&
\displaystyle
\le
\int_{B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}+\frac{\vert D\vert}{\epsilon}
\\
\\
\displaystyle
&
\displaystyle
\le
2\pi\epsilon^2+\frac{M}{\epsilon}.
\end{array}
$$
Choose $\epsilon$ so that the right-hand side is minimized, that is,
$$\displaystyle
\epsilon=\left(\frac{M}{4\pi}\right)^{1/3}.
$$
Then one gets
$$\begin{array}{ll}
\displaystyle
\int_D\frac{dy}{\vert x-y\vert}
&
\displaystyle
\le
6\pi
\left(\frac{M}{4\pi}\right)^{2/3}.
\end{array}
$$
Thus this together with (A.4) yields
$$\displaystyle
\left(1-C\,\right)\Vert w_z\Vert_{L^{\infty}(D)}
\le
\frac{C}{4\pi\,R_1}.
$$
This together with the estimate
$$\displaystyle
\vert U(x)\vert\ge \frac{1}{4\pi\,R_2}-\Vert w_z\Vert_{L^{\infty}(D)}
$$
yields the desired estimate (A.3) under the assumptions (A.1) and (A.2).
\noindent
$\Box$
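As a quick numerical sanity check (ours, not part of the paper), one can verify that $\epsilon=(M/4\pi)^{1/3}$ minimizes the bound $2\pi\epsilon^2+M/\epsilon$ from the proof, and that the minimum value is $6\pi(M/4\pi)^{2/3}$, which is where the constant $C$ in (A.1) comes from. The value of $M$ below is illustrative:

```python
import numpy as np

M = 2.3                                    # illustrative bound on |D|
f = lambda eps: 2*np.pi*eps**2 + M/eps     # upper bound appearing in the proof
eps_star = (M / (4*np.pi))**(1/3)          # claimed minimizer

# The minimum value is 6*pi*(M/(4*pi))**(2/3), the constant entering (A.1).
val = f(eps_star)
assert np.isclose(val, 6*np.pi*(M/(4*np.pi))**(2/3))

# eps_star is indeed the minimizer (checked on a grid).
grid = np.linspace(0.05, 5.0, 2000)
assert val <= f(grid).min() + 1e-9
```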
Note that since $R_2>R_1$, the pair of inequalities (A.1) and (A.2) is equivalent to the single inequality
$$\displaystyle
C<\frac{R_1}{R_1+R_2}.
\tag {A.5}
$$
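The equivalence with (A.5) is elementary algebra: when $C<1$, $C/(1-C)<R_1/R_2$ is equivalent to $C(R_1+R_2)<R_1$, and $C<R_1/(R_1+R_2)<1$ conversely implies (A.1). A quick randomized check (ours, not part of the paper):

```python
import numpy as np

def equivalence_holds(R1, R2, C):
    """(A.1) and (A.2) together should match the single inequality (A.5)."""
    a12 = (C < 1) and (C / (1 - C) < R1 / R2)   # (A.1) and (A.2)
    a5 = C < R1 / (R1 + R2)                      # (A.5)
    return a12 == a5

rng = np.random.default_rng(0)
for _ in range(10000):
    R1 = rng.uniform(0.1, 5.0)
    R2 = R1 + rng.uniform(0.01, 5.0)             # R2 > R1 as in Proposition A
    C = rng.uniform(0.0, 2.0)
    assert equivalence_holds(R1, R2, C)
```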
Thus, choosing $k^2$ sufficiently small in the sense of (A.5), we have, for all $x\in\overline D$,
$$\displaystyle
\vert u(x)\vert \ge\frac{1}{4\pi}\left(\frac{1}{R_2}-\frac{C}{1-C}\frac{1}{R_1}\,\right)>0.
$$
Thus the condition (4.5) for $u=U\vert_{\Omega}$ is satisfied. The choice of $k$ depends only on the a priori information
about $D$ and $\rho$
described by $R_1$, $R_2$, $M$ and $R$.
$$\quad$$
\centerline{{\bf Acknowledgments}}
This research was partially supported by Grant-in-Aid for Scientific Research
(C) (No. 17K05331) and (B) (No. 18H01126) of the Japan Society for the Promotion of Science.
$$\quad$$
\end{document}
\begin{document}
\title{A numerical study on the weak Galerkin method for the Helmholtz equation}
\begin{abstract}
A weak Galerkin (WG) method is introduced and numerically tested for
the Helmholtz equation. This method is flexible by using
discontinuous piecewise polynomials and retains the mass
conservation property. At the same time, the WG finite element
formulation is symmetric and parameter free. Several test scenarios
are designed for a numerical investigation on the accuracy,
convergence, and robustness of the WG method in both inhomogeneous
and homogeneous media over convex and non-convex domains.
Challenging problems with high wave numbers are also examined. Our
numerical experiments indicate that the weak Galerkin is a finite
element technique that is easy to implement, and provides very
accurate and robust numerical solutions for the Helmholtz problem
with high wave numbers.
\end{abstract}
\begin{keywords}
Galerkin finite element methods, discrete gradient, Helmholtz
equation, large wave numbers, weak Galerkin
\end{keywords}
\begin{AMS}
Primary, 65N15, 65N30, 76D07; Secondary, 35B45, 35J50
\end{AMS}
\pagestyle{myheadings}
\section{Introduction}
We consider the Helmholtz equation of the form
\begin{eqnarray}
-\nabla \cdot (d \nabla u) -\kappa^2u &=& f, \quad
\mbox{in}\;\Omega,\label{pde}\\
d\nabla u \cdot {\bf n} - i \kappa u &=& g,\quad \mbox{on}\; \partial\Omega,\label{bc}
\end{eqnarray}
where $\kappa>0$ is the wave number, $f\in L^2(\Omega)$ represents a harmonic source,
$g\in L^2(\partial\Omega)$ is a given data function, and
$d=d(x,y)>0$ is a spatial function describing the dielectric
properties of the medium. Here $\Omega$ is a polygonal or polyhedral domain in
$\mathbb{R}^d\; (d=2,3)$.
Under the time-harmonic assumption, the Helmholtz equation (\ref{pde})
governs many macroscopic wave phenomena in the frequency domain
including wave propagation, guiding, radiation and scattering.
The numerical solution to the
Helmholtz equation plays a vital role in a wide range of
applications in electromagnetics, optics, and acoustics, such as
antenna analysis and synthesis, radar cross section calculation,
simulation of ground or surface penetrating radar, design of
optoelectronic devices, acoustic noise control, and seismic wave
propagation. However, it remains a challenge to design robust and
efficient numerical algorithms for the Helmholtz equation,
especially when large wave numbers or highly oscillatory solutions
are involved \cite{Zienkiewicz}.
For the Helmholtz problem (\ref{pde})-(\ref{bc}), the corresponding
variational form is given by seeking $u\in H^1(\Omega)$ satisfying
\begin{equation}\label{wf}
(d\nabla u,\nabla
v)-\kappa^2(u,\;v)+i\kappa\langle u,\;v\rangle_{\partial\Omega}= (f, v)
+\langle g,v\rangle_{\partial\Omega},\qquad \forall v\in H^1(\Omega),
\end{equation}
where $(v,w)=\int_\Omega vw\,dx$ and
$\langle v,w\rangle_{\partial \Omega}=\int_{\partial \Omega}vw\,ds$.
In a classic finite element procedure, continuous polynomials are used to approximate the true solution $u$. In many situations, however, the use of discontinuous functions in the finite element approximation provides the method with
much needed flexibility to handle more complicated practical problems. For discontinuous polynomials, the strong gradient $\nabla$ in (\ref{wf}) is no longer meaningful. The recently developed weak Galerkin finite element methods \cite{wy}
provide a means to overcome this difficulty by replacing the differential operators with weak forms, defined as distributions, for discontinuous approximating functions.
Weak Galerkin (WG) methods refer to general finite element techniques for partial differential equations and were first introduced and analyzed in \cite{wy} for second order elliptic equations.
Through rigorous error analysis,
optimal order of convergence of the WG solution in both
discrete $H^1$ norm and $L^2$ norm is established under
minimum regularity assumptions in \cite{wy}. The mixed weak Galerkin finite element method is studied in \cite{wy-mixed}.
By design, the WG methods use discontinuous approximating functions.
In this paper, we will apply WG finite element methods \cite{wy}
to the Helmholtz equation.
The WG finite element approximation to (\ref{wf}) can be derived naturally by simply replacing the gradient operator $\nabla$ in (\ref{wf}) by a weak gradient $\nabla_w$: find
$u_h\in V_h$ such that for all $v_h\in V_h$ we have
\begin{equation}\label{wg1}
(d\nabla_w u_h,\nabla_w
v_h)-\kappa^2(u_0,\;v_0)+i\kappa\langle u_b,\;v_b\rangle_{\partial\Omega}= (f, v_0)
+\langle g,v_b\rangle_{\partial\Omega},
\end{equation}
where $u_0$ and $u_b$ represent the values of $u_h$ in the interior and on the boundary of each triangle, respectively. The weak gradient $\nabla_w$ will be defined precisely in the next section. We note that the weak Galerkin finite element formulation (\ref{wg1}) is simple, symmetric, and parameter free.
To fully explore the potential of the WG finite element formulation (\ref{wg1}),
we will investigate its performance for solving the Helmholtz
problems with large wave numbers.
It is well known that the numerical
performance of any finite element solution to the Helmholtz equation
depends significantly on the wave number $k$. When $k$ is very large,
representing a highly oscillatory wave, the mesh size $h$ has to be
sufficiently small for the scheme to resolve the oscillations. To
keep a fixed grid resolution, a natural rule is to choose $kh$ to be
a constant in the mesh refinement, as the wave number $k$ increases
\cite{IhlBabH,bao04}. However, it is known
\cite{IhlBabH,IhlBabHP,BIPS,BabSau} that, even under such a mesh
refinement, the errors of continuous Galerkin finite element
solutions deteriorate rapidly when $k$ becomes larger. This
non-robust behavior with respect to $k$ is known as the ``pollution
effect''.
To alleviate the pollution effect,
various continuous or discontinuous finite element methods have been
developed in the literature for solving the Helmholtz equation
with large wave numbers
\cite{BIPS,BabSau,Melenk,Monk99,UWVF98,UWVF03,Farhat03,Farhat04,fw,abcm,Chung,cdg,cgl,Monk11}.
A commonly used strategy in these effective finite element methods is
to include some analytical knowledge of the Helmholtz equation, such
as characteristics of traveling plane wave solutions,
asymptotic solutions or fundamental solutions, into the finite
element space.
Likewise, analytical
information has been incorporated in the basis functions of the boundary
element methods to address the high frequency problems
\cite{Giladi,Langdon06,Langdon07}. On the other hand,
many spectral methods, such as local spectral methods \cite{bao04},
spectral Galerkin methods \cite{shen05,shen07}, and spectral element
methods \cite{Heikkola,Ainsworth09} have also been developed for solving
the Helmholtz equation with large wave numbers.
Pollution effect can be effectively controlled in these spectral
type collocation or Galerkin formulations, because
the pollution error is directly related to the dispersion error,
i.e., the phase difference between the numerical and exact waves
\cite{IhlBab,Ainsworth04}, while the spectral methods typically
produce negligible dispersive errors.
The objective of the present paper is twofold.
First, we will introduce weak Galerkin methods for the Helmholtz equation.
The second aim of the paper is to
investigate the performance of the WG methods for solving the Helmholtz equation
with high wave numbers.
To demonstrate the potential of the WG finite element methods in solving high frequency problems,
we will not attempt to build the analytical knowledge into the WG formulation ({\rangle}ef{wg1})
and we will restrict ourselves to low order WG elements.
We will investigate the robustness and effectiveness of such plain WG methods
through many carefully designed numerical experiments.
The rest of this paper is organized as follows. In Section 2, we
will introduce a weak Galerkin finite element formulation for the
Helmholtz equation by following the idea presented in \cite{wy}. Implementation of the WG method for the problem (\ref{pde})-(\ref{bc}) is discussed in Section 3. In Section 4, we shall
present some numerical results obtained from the weak Galerkin
method with various orders.
Finally, this paper ends with some concluding remarks.
\section{A Weak Galerkin Finite Element Method}
Let ${\cal T}_h$ be a partition of the domain $\Omega$ with mesh size
$h$. Assume that the partition ${\cal T}_h$ is shape regular so that
the routine inverse inequality in the finite element analysis holds
true (see \cite{ci}). Denote by
$P_k(T)$ the set of polynomials in $T$ with degree no more than
$k$, and $P_k(e)$, $e\in {\partial T}$, the set of polynomials on each segment
(edge or face) of $\partial T$ with degree no more than $k$.
For $k\ge 0$ and given ${\mathcal T}_h$, we define the weak Galerkin (WG) finite element space as follows
\begin{equation}\label{vh}
V_h=\left\{ v=\{v_0, v_b\}\in L^2(\Omega):\ \{v_0, v_b\}|_{T}\in P_k(T)\times P_k(e),\ e\in\partial T,\ \forall T\in {\cal T}_h \right\},
\end{equation}
where $v_0$ and $v_b$ are the values of $v$ restricted to the interior and to the boundary of the element $T$, respectively. Since $v_b$ is not necessarily related to the trace of $v_0$ on $\partial T$, we write $v=\{v_0,v_b\}$. For a given $T\in{\mathcal T}_h$, we define another vector space
\[
RT_k(T)=P_k(T)^d+\tilde{P}_k(T){\bf x},
\]
where $\tilde{P}_k(T)$ is the set of homogeneous polynomials of degree $k$ and ${\bf x}=(x_1,\cdots, x_d)$ (see \cite{bf}). We will find a locally defined discrete weak gradient from this space on each element $T$.
The main idea of the weak Galerkin method is to introduce weak derivatives for discontinuous functions and to use them in discretizing the corresponding variational forms such as (\ref{wf}).
The differential operator used in (\ref{wf}) is a gradient. A weak gradient has been defined in \cite{wy}. We now define approximations of the weak gradient as follows. For each $v=\{v_0, v_b\} \in V_h$, we define a discrete weak gradient $\nabla_{w} v\in RT_k(T)$ on each
element $T$ such that
\begin{equation}\label{d-g}
(\nabla_{w}v,\ \tau)_T = -(v_0,\ \nabla\cdot \tau)_T+
\langle v_b, \ \tau\cdot{\bf n} \rangle_{\partial T},\quad\quad\forall \tau\in RT_k(T),
\end{equation}
where $\nabla_{w}v$ is locally defined on each element $T$, $(v,w)_T=\int_T vw\,dx$, and
$\langle v,w\rangle_{\partial T}=\int_{\partial T}vw\,ds$. We will use $(\nabla_w v,\ \nabla_w w)$ to denote $\sum_{T\in{\mathcal T}_h}(\nabla_w v,\ \nabla_w w)_T$. Then the WG method for the Helmholtz equation (\ref{pde})-(\ref{bc}) can be stated as follows.
\begin{algorithm}
A numerical approximation for (\ref{pde}) and (\ref{bc}) can be
obtained by seeking $u_h=\{u_0,u_b\}\in V_h$ such that for all $v_h=\{v_0,v_b\}\in V_h$
\begin{equation}\label{WG}
(d\nabla_w u_h,\nabla_w
v_h)-\kappa^2(u_0,\;v_0)+i\kappa\langle u_b,\;v_b\rangle_{\partial\Omega}= (f, v_0)
+\langle g,v_b\rangle_{\partial\Omega}.
\end{equation}
\end{algorithm}
Denote by $Q_h u=\{Q_0 u,\;Q_bu\}$ the $L^2$ projection onto
$P_k(T)\times P_{k}(e)$, $e\in\partial T$. In other words, on each element
$T$, the function $Q_0 u$ is defined as the $L^2$ projection of $u$
in $P_k(T)$ and $Q_b u$ is the $L^2$ projection of $u$ in
$P_{k}(\partial T)$.
For equation (\ref{pde}) with the Dirichlet boundary condition $u=g$ on $\partial\Omega$, optimal error estimates have been obtained in \cite{wy}.
For a sufficiently small mesh size $h$, we can derive the following optimal error estimate for the Helmholtz equation (\ref{pde}) with the mixed boundary condition (\ref{bc}).
\begin{theorem} Let $u_h\in V_h$ and $u\in H^{k+2}(\Omega)$ be the solutions of (\ref{WG}) and (\ref{pde})-(\ref{bc}), respectively, and assume that $\Omega$ is convex.
Then for $k\ge 0$,
there exists a constant $C$ such that
\begin{eqnarray}
\|\nabla_w( u_h-Q_hu)\| &\le& Ch^{k+1}(\|u\|_{k+2}+\|f\|_k),\label{err1}\\
\|u_h-Q_hu\| &\le & Ch^{k+2}(\|u\|_{k+2}+\|f\|_k).\label{err2}
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof of this theorem is similar to that of Theorems 8.3 and 8.4 in
\cite{wy} and is rather long. Since the emphasis of this paper is on investigating the numerical performance of the WG method, we omit the details of the proof.
\end{proof}
\section{Implementation of WG method}
First, define a bilinear form $a(\cdot,\cdot)$ as
\[
a(u_h,v_h)=(d\nabla_w u_h,\nabla_w v_h)-\kappa^2(u_0,\;v_0)+i\kappa\langle u_b,\;v_b\rangle_{\partial\Omega}.
\]
Then (\ref{WG}) can be rewritten as: find $u_h\in V_h$ such that, for all $v_h=\{v_0,v_b\}\in V_h$,
\begin{equation}\label{wg8}
a(u_h,v_h)=(f, v_0)+\langle g,v_b\rangle_{\partial\Omega}.
\end{equation}
The methodology of implementing the WG methods is the same as that for continuous
Galerkin finite element methods except that the standard gradient
operator $\nabla$ should be replaced by the discrete weak gradient operator
$\nabla_w$.
In the following, we will use the lowest order weak Galerkin element
($k$=0) on triangles as an example to demonstrate how one might implement the weak
Galerkin finite element method for solving the Helmholtz problem
({\rangle}ef{pde}) and ({\rangle}ef{bc}). Let $N(T)$ and $N(e)$ denote, respectively, the number of
triangles and the number of edges associated with a triangulation
${\cal T}_h$. Let ${\cal E}_h$ denote the union of the boundaries
of the triangles $T$ of ${\cal T}_h$.
The procedure of implementing the WG method (\ref{WG}) consists of
the following three steps.
\begin{enumerate}
\item Find basis functions for $V_h$ defined in (\ref{vh}):
\begin{eqnarray*}
V_h&=&{\rm span} \{\phi_1,\cdots,\phi_{N(T)},\psi_1,\cdots,\psi_{N(e)}\}
={\rm span} \{\Phi_1,\cdots,\Phi_{n}\}
\end{eqnarray*}
where $n=N(T)+N(e)$ and
\[
\phi_i=\left\{
\begin{array}{l}
1
\quad
\mbox{on} \;\; T_i,
\\ [0.08in]
0
\quad
\mbox{otherwise},
\end{array}
\right.
\psi_j=\left\{
\begin{array}{l}
1
\quad
\mbox{on} \;\; e_j,
\\ [0.08in]
0
\quad
\mbox{otherwise},
\end{array}
\right.
\]
for $T_i\in{\mathcal T}_h$ and $e_j\in {\mathcal E}_h$. Note that $\phi_i$ and $\psi_j$ are defined on the whole of $\Omega$.\\
\item Substituting $u_h=\sum_{j=1}^{n}\alpha_j\Phi_j$ into (\ref{wg8}) and letting $v_h=\Phi_i$ in (\ref{wg8}) yield
\begin{equation}\label{sys}
\sum_{j=1}^na(\Phi_j,\Phi_i)\alpha_j=(f, \Phi_i^0)+\langle g,\Phi_i^b\rangle_{\partial\Omega},\quad i=1,\cdots, n,
\end{equation}
where $\Phi_i^0$ and $\Phi_i^b$ are the values of $\Phi_i$ in the interiors of the triangles and on their boundaries, respectively. In our computations, the integrations on the right-hand side
of (\ref{sys}) are conducted numerically. In particular,
a 7-point two-dimensional Gaussian quadrature
and a 3-point one-dimensional Gaussian quadrature are employed, respectively, to calculate
$(f, \Phi_i^0)$ and $\langle g,\Phi_i^b\rangle_{\partial\Omega}$ numerically.\\
\item Form the coefficient matrix $(a(\Phi_j,\Phi_i))_{i,j}$ of the linear system (\ref{sys}) by computing
\begin{equation}\label{bilinear}
a(\Phi_j,\Phi_i)=(d\nabla_w \Phi_j,\nabla_w \Phi_i)-\kappa^2(\Phi_j^0,\;\Phi_i^0)+i\kappa\langle \Phi_j^b,\;\Phi_i^b\rangle_{\partial\Omega}.
\end{equation}
All integrations in (\ref{bilinear}) are carried out analytically.
\end{enumerate}
Finally, we will explain how to compute the weak gradient $\nabla_w$ for a given function $v\in V_h$ when $k=0$.
For a given $T\in{\mathcal T}_h$, we will find $\nabla_w v\in RT_0(T)$,
\[
RT_0(T)=\left\{\left(\begin{array}{c} a+cx \\b+cy\\\end{array}\right)\colon a,b,c\in{\rm \bf R}\right\}={\rm span}
\{\theta_1,\theta_2,\theta_3\}.
\]
For example, we can choose $\theta_i$ as follows
\[
\theta_1=\left(\begin{array}{c}1 \\0 \\\end{array}\right), \quad \theta_2=\left(\begin{array}{c}0 \\1 \\\end{array}\right),\quad
\theta_3=\left(\begin{array}{c}x \\y \\\end{array}\right).
\]
Thus on each element $T\in {\cal T}_h$, $\nabla_w v=\sum_{j=1}^3c_j\theta_j$. Using the definition of the discrete weak gradient (\ref{d-g}), we find the coefficients $c_j$ by solving the following linear system:
\[
\left(
\begin{array}{ccc}
(\theta_1, \theta_1)_T & (\theta_2, \theta_1)_T & (\theta_3, \theta_1)_T \\
(\theta_1, \theta_2)_T& (\theta_2, \theta_2)_T& (\theta_3, \theta_2)_T \\
(\theta_1, \theta_3)_T& (\theta_2, \theta_3)_T& (\theta_3, \theta_3)_T\\
\end{array}
\right)
\left(\begin{array}{c}c_1 \\c_2 \\c_3 \\\end{array}\right)=\left(\begin{array}{c}
-(v_0,\nabla\cdot\theta_1)_T+\langle v_b,\;\theta_1\cdot{\bf n}\rangle_{\partial T} \\
-(v_0,\nabla\cdot\theta_2)_T+\langle v_b,\;\theta_2\cdot{\bf n}\rangle_{\partial T}\\
-(v_0,\nabla\cdot\theta_3)_T+\langle v_b,\;\theta_3\cdot{\bf n}\rangle_{\partial T}
\end{array}
\right).
\]
The inverse of the above coefficient matrix can be obtained
explicitly or numerically through a local matrix solver. For the basis function $\Phi_i$, $\nabla_w\Phi_i$ is nonzero on only one or two triangles.
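As an illustration (ours, not from the paper), the local weak-gradient solve above can be carried out in a few lines. The sketch below assembles the $3\times 3$ Gram matrix and right-hand side on the reference triangle and verifies that, for the $L^2$ projection of a linear function $u$, the discrete weak gradient reproduces $\nabla u$ exactly; the computation is exact because the midpoint quadrature integrates quadratics exactly and $\theta_i\cdot{\bf n}$ is constant on each straight edge. All variable names are ours.

```python
import numpy as np

# Reference triangle T with vertices (0,0), (1,0), (0,1); RT_0 basis
# theta1=(1,0), theta2=(0,1), theta3=(x,y).
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
area = 0.5
mids = 0.5 * (verts + np.roll(verts, -1, axis=0))                 # edge midpoints
tangs = np.roll(verts, -1, axis=0) - verts
lens = np.linalg.norm(tangs, axis=1)
normals = np.column_stack((tangs[:, 1], -tangs[:, 0])) / lens[:, None]  # outward unit normals

theta = [lambda p: np.array([1.0, 0.0]),
         lambda p: np.array([0.0, 1.0]),
         lambda p: p]
div_theta = [0.0, 0.0, 2.0]                                       # div(x, y) = 2

def quad(f):                                                      # 3-midpoint rule, exact for quadratics
    return area / 3 * sum(f(m) for m in mids)

# Gram matrix (theta_i, theta_j)_T.
G = np.array([[quad(lambda p, i=i, j=j: theta[i](p) @ theta[j](p))
               for j in range(3)] for i in range(3)])

# v = Q_h u for the linear u = 2 + 3x - 5y: v0 = mean of u on T (= value at
# the centroid), v_b = mean of u on each edge (= value at the edge midpoint).
u = lambda p: 2 + 3*p[0] - 5*p[1]
v0 = u(verts.mean(axis=0))
vb = np.array([u(m) for m in mids])

# Right-hand side -(v0, div theta_i)_T + <v_b, theta_i . n>_{dT}.
b = np.array([-v0 * div_theta[i] * area
              + sum(vb[e] * (theta[i](mids[e]) @ normals[e]) * lens[e] for e in range(3))
              for i in range(3)])
c = np.linalg.solve(G, b)   # coefficients of the weak gradient in {theta1, theta2, theta3}
```

For this linear $u$ the solve returns the constant gradient $(3,-5)$ with a zero $\theta_3$ component, consistent with the fact that $\nabla_w Q_h u$ is the $RT_0$ projection of $\nabla u$.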
\section{Numerical Experiments}
In this section, we examine the WG method by testing its accuracy,
convergence, and robustness for solving two dimensional Helmholtz
equations. The pollution effect due to large wave numbers will be
particularly investigated and tested numerically. For convergence
tests, both piecewise constant and piecewise linear finite elements
will be considered. To demonstrate the robustness of the WG method, the Helmholtz
equation in both homogeneous and inhomogeneous media will be solved
on convex and non-convex computational domains.
The mesh generation and all computations are conducted in the MATLAB
environment.
For simplicity, a
structured triangular mesh is employed in all cases, even though the WG method
is known to be very flexible in dealing with various finite
element partitions \cite{mwy-PolyRedu,mwy-biharmonic}.
Two types of relative errors are measured in our numerical
experiments. The first one is the relative $L^2$ error
defined by
$$ \frac{\| u_h - Q_h u \|}{\| Q_h u \|}.$$
The second one is the relative $H^1$ error
defined in terms of the discrete gradient
$$ \frac{\| \nabla_w (u_h - Q_h u) \|}{\| \nabla_w Q_h u \|}.$$
Numerically, the $H^1$-semi-norm will be calculated
as
$$
{|\hspace{-.02in}|\hspace{-.02in}|} u_h-Q_hu {|\hspace{-.02in}|\hspace{-.02in}|}^2=h^{-1}\langle
u_0-u_b-(Q_0u-Q_bu),u_0-u_b-(Q_0u-Q_bu)\rangle_{\partial \Omega}
$$
for the lowest order finite element (i.e., piecewise constants). For
piecewise linear elements, we use the original definition of
$\nabla_w$ to compute the $H^1$-semi-norm $\|\nabla_w (u_h - Q_h
u)\|$.
\begin{figure}
\caption{Geometry of testing domains and sample meshes. Left: a
convex hexagon domain; Right: a non-convex imperfect circular
domain.}\label{fig.domain}
\end{figure}
\subsection{A convex Helmholtz problem}\label{convex}
We first consider a homogeneous Helmholtz equation defined on a
convex hexagon domain, which has been studied in \cite{fw}. The
domain $\Omega$ is the unit regular hexagon domain centered at the
origin $(0,0)$, see Fig. \ref{fig.domain} (left). Here we set $d=1$
and $f=\sin(kr)/r$ in (\ref{pde}), where $r=\sqrt{x^2+y^2}$. The
boundary data $g$ in the Robin boundary condition (\ref{bc}) is
chosen so that the exact solution is given by
\begin{eqnarray}
u=\frac{\cos(kr)}{k}-\frac{\cos k+i\sin k}{k(J_0(k)+iJ_1(k))}J_0(kr),
\end{eqnarray}
where $J_{\xi}(z)$ are Bessel functions of the first kind. Let
${\mathcal T}_h$ denote the regular triangulation that consists of $6N^2$
triangles of size $h=1/N$, as shown in Fig. \ref{fig.domain} (left)
for ${\mathcal T}_{1/8}$.
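The exact solution above can be evaluated directly with standard Bessel routines. A sketch using SciPy (for illustration only; it is outside the MATLAB environment used in the paper):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_nu

def u_exact(r, k=1.0):
    """u = cos(kr)/k - (cos k + i sin k) / (k (J0(k) + i J1(k))) * J0(kr)."""
    coef = (np.cos(k) + 1j * np.sin(k)) / (k * (jv(0, k) + 1j * jv(1, k)))
    return np.cos(k * r) / k - coef * jv(0, k * r)

val = u_exact(0.5)  # complex-valued field at radius r = 0.5
```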
\begin{table}[!t]
\caption{Convergence of piecewise constant WG for the Helmholtz equation on a
convex domain with wave number $k=1$.} \label{table.Ex1k1}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{relative $H^1$} & \multicolumn{2}{c|}{relative $L^2$} \\
\cline{2-3} \cline{4-5}
$h$ & error & order & error & order \\
\hline
5.00e-01 & 2.49e-02 & & 4.17e-03 &\\
2.50e-01 & 1.11e-02 & 1.16 & 1.05e-03 & 1.99 \\
1.25e-01 & 5.38e-03 & 1.05 & 2.63e-04 & 2.00 \\
6.25e-02 & 2.67e-03 & 1.01 & 6.58e-05 & 2.00 \\
3.13e-02 & 1.33e-03 & 1.00 & 1.64e-05 & 2.00 \\
1.56e-02 & 6.65e-04 & 1.00 & 4.11e-06 & 2.00 \\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{table.Ex1k1} illustrates the performance of the WG method
with piecewise constant elements for the Helmholtz equation with
wave number $k=1$. Uniform triangular partitions were used in the
computation through successive mesh refinements. The relative errors
in the $L^2$ norm and $H^1$ semi-norm are shown in Table
\ref{table.Ex1k1}, together with numerical estimates for
the rate of convergence in each metric. It can be seen that the
orders of convergence in the relative $H^1$ semi-norm and relative
$L^2$ norm are, respectively, one and two for piecewise constant
elements.
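The convergence orders in the tables can be reproduced from the error columns: with errors $e_i$ on meshes of size $h_i$, the observed order between two refinements is $\log(e_{i-1}/e_i)/\log(h_{i-1}/h_i)$. A sketch using the $H^1$ column of the first table above:

```python
import math

h    = [5.00e-01, 2.50e-01, 1.25e-01, 6.25e-02, 3.13e-02, 1.56e-02]
e_h1 = [2.49e-02, 1.11e-02, 5.38e-03, 2.67e-03, 1.33e-03, 6.65e-04]

# Observed order between consecutive refinements.
orders = [math.log(e_h1[i-1] / e_h1[i]) / math.log(h[i-1] / h[i])
          for i in range(1, len(h))]
```

The computed orders approach one, matching the tabulated first-order $H^1$ convergence for piecewise constants.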
\begin{table}[!t]
\caption{ Convergence of piecewise linear WG for the Helmholtz equation on a
convex domain with wave number $k=5$.} \label{table.Ex1k5}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{relative $H^1$} & \multicolumn{2}{c|}{relative $L^2$} \\
\cline{2-3} \cline{4-5}
$h$ & error & order & error & order \\
\hline
2.50e-01 & 9.48e-03 & & 2.58e-04 & \\
1.25e-01 & 2.31e-03 & 2.04 & 3.46e-05 & 2.90 \\
6.25e-02 & 5.74e-04 & 2.01 & 4.47e-06 & 2.95 \\
3.13e-02 & 1.43e-04 & 2.00 & 5.64e-07 & 2.99 \\
1.56e-02 & 3.58e-05 & 2.00 & 7.06e-08 & 3.00 \\
7.81e-03 & 8.96e-06 & 2.00 & 8.79e-09 & 3.01 \\
\hline
\end{tabular}
\end{center}
\end{table}
Higher orders of convergence can be achieved by using correspondingly
higher order finite elements in the present WG framework. To
demonstrate this phenomenon, we consider the same Helmholtz problem
with a slightly larger wave number $k=5$. The WG method with piecewise
linear functions was employed in the numerical approximation. The
computational results are reported in Table \ref{table.Ex1k5}. It is
clear that the numerical experiment validates the theoretical
estimates. More precisely, the rates of convergence in the relative
$H^1$ semi-norm and relative $L^2$ norm are given by two and three,
respectively.
\subsection{A non-convex Helmholtz problem}
We next explore the use of the WG method for solving a Helmholtz
problem defined on a non-convex domain, see Fig. \ref{fig.domain}
(right). The medium is still assumed to be homogeneous, i.e., $d=1$
in (\ref{pde}). We are particularly interested in the performance of
the WG method in dealing with the possible field singularity at the
origin. For simplicity, only the piecewise constant $RT_0$ elements
are tested for the present problem. Following \cite{Monk11}, we take
$f=0$ in (\ref{pde}) and the boundary condition is simply taken as a
Dirichlet one: $u=g$ on $\partial \Omega$. Here $g$ is prescribed
according to the exact solution \cite{Monk11}
\begin{equation}\label{solution2}
u= J_{\xi}( k \sqrt{x^2+y^2}) \cos (\xi \arctan( y/x)).
\end{equation}
\begin{figure}
\caption{WG solutions for the non-convex Helmholtz problem with
$k=4$ and $\xi=1$. Left: Mesh level $1$; Right: Mesh level $6$.}\label{fig.Ex2xi1}
\end{figure}
\begin{figure}
\caption{WG solutions for the non-convex Helmholtz problem with
$k=4$ and $\xi=3/2$. Left: Mesh level $1$; Right: Mesh level $6$.}\label{fig.Ex2xi32}
\end{figure}
\begin{figure}
\caption{WG solutions for the non-convex Helmholtz problem with
$k=4$ and $\xi=2/3$. Left: Mesh level $1$; Right: Mesh level $6$.}\label{fig.Ex2xi23}
\end{figure}
In the present study, the wave number was chosen as $k=4$ and three
values of the parameter $\xi$ are considered, i.e., $\xi=1$,
$\xi=3/2$ and $\xi=2/3$. The same triangular mesh is used in the WG
method for all three cases. In particular, an initial mesh is first
generated by using MATLAB with default settings, see Fig.
\ref{fig.domain} (right). Next, the mesh is uniformly refined
five times. The WG solutions on mesh level $1$ and mesh level $6$
are shown in Fig. \ref{fig.Ex2xi1}, Fig. \ref{fig.Ex2xi32}, and Fig.
\ref{fig.Ex2xi23}, respectively, for $\xi=1$, $\xi=3/2$, and
$\xi=2/3$. Since the numerical errors are quite small for the WG
approximation corresponding to mesh level $6$, the field modes
generated by the densest mesh are visually indistinguishable from
the analytical ones. In other words, the results shown in the right
charts of Fig. \ref{fig.Ex2xi1}, Fig. \ref{fig.Ex2xi32}, and Fig.
\ref{fig.Ex2xi23} can be regarded as analytical results. It can be
seen that in all three cases, the WG solutions already agree with
the analytical ones at the coarsest level. Moreover, on the
coarsest mesh, the constant function values can be clearly seen in
each triangle, due to the use of piecewise constant $RT_0$ elements.
Nevertheless, after the initial mesh is refined five times, the
numerical plots shown in the right charts are very smooth. A perfect
symmetry with respect to the $x$-axis is clearly seen.
\begin{table}[!tb]
\caption{Numerical convergence test for the non-convex Helmholtz
problem with $k=4$ and $\xi=1$. } \label{table.Ex2xi1}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{relative $H^1$} & \multicolumn{2}{c|}{relative $L^2$} \\
\cline{2-3} \cline{4-5}
$h$ & error & order & error & order \\
\hline
2.44e-01 & 5.64e-02 & & 1.37e-02 & \\
1.22e-01 & 2.83e-02 & 1.00 & 3.56e-03 & 1.95 \\
6.10e-02 & 1.42e-02 & 0.99 & 8.98e-04 & 1.99 \\
3.05e-02 & 7.14e-03 & 1.00 & 2.25e-04 & 2.00 \\
1.53e-02 & 3.57e-03 & 1.00 & 5.63e-05 & 2.00 \\
7.63e-03 & 1.79e-03 & 1.00 & 1.41e-05 & 2.00 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!tb]
\caption{Numerical convergence test for the non-convex Helmholtz
problem with $k=4$ and $\xi=3/2$. } \label{table.Ex2xi32}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{relative $H^1$} & \multicolumn{2}{c|}{relative $L^2$} \\
\cline{2-3} \cline{4-5}
$h$ & error & order & error & order \\
\hline
2.44e-01 & 5.56e-02 & & 1.12e-02 & \\
1.22e-01 & 2.81e-02 & 0.98 & 3.02e-03 & 1.89 \\
6.10e-02 & 1.42e-02 & 0.99 & 8.06e-04 & 1.91 \\
3.05e-02 & 7.14e-03 & 0.99 & 2.12e-04 & 1.92 \\
1.53e-02 & 3.58e-03 & 1.00 & 5.54e-05 & 1.94 \\
7.63e-03 & 1.79e-03 & 1.00 & 1.44e-05 & 1.95 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!tb]
\caption{Numerical convergence test for the non-convex Helmholtz
problem with $k=4$ and $\xi=2/3$. } \label{table.Ex2xi23}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{relative $H^1$} & \multicolumn{2}{c|}{relative $L^2$} \\
\cline{2-3} \cline{4-5}
$h$ & error & order & error & order \\
\hline
2.44e-01 & 1.07e-01 & &5.24e-02 & \\
1.22e-01 & 5.74e-02 & 0.90 &2.18e-02 & 1.27 \\
6.10e-02 & 3.23e-02 & 0.83 &9.01e-03 & 1.27 \\
3.05e-02 & 1.89e-02 & 0.77 &3.68e-03 & 1.29 \\
1.53e-02 & 1.14e-02 & 0.73 &1.49e-03 & 1.31 \\
7.63e-03 & 6.99e-03 & 0.71 &5.96e-04 & 1.32 \\
\hline
\end{tabular}
\end{center}
\end{table}
We next investigate the numerical convergence rates of the WG method. The
numerical errors of the WG solutions for $\xi=1$, $\xi=3/2$ and
$\xi=2/3$ are listed, respectively, in Table \ref{table.Ex2xi1},
Table \ref{table.Ex2xi32}, and Table \ref{table.Ex2xi23}. It can be
seen that for $\xi=1$ and $\xi=3/2$, the numerical convergence rates
in the relative $H^1$ and $L^2$ errors remain first and second
order, respectively, while the convergence orders degrade for the non-smooth case
$\xi=2/3$. Mathematically, for both $\xi=3/2$ and $\xi=2/3$, the
exact solutions (\ref{solution2}) are known to be non-smooth across
the negative $x$-axis if the domain were chosen to be the entire
circle. However, the present domain excludes the negative $x$-axis.
Thus, the source term $f$ of the Helmholtz equation (\ref{pde}) can
simply be defined as zero throughout $\Omega$. Nevertheless, there
still exist singularities at the origin $(0,0)$. In
particular, it is remarked in \cite{Monk11} that the singularity
lies in the derivatives of the exact solution at $(0,0)$. Due to
such singularities, the convergence rates of high order
discontinuous Galerkin methods are also reduced for $\xi=3/2$ and
$\xi=2/3$ \cite{Monk11}. In the present study, we further note that
there exists a subtle difference between the two cases $\xi=3/2$ and
$\xi=2/3$ at the origin. To see this, we neglect the second
$\cos(\cdot)$ term in the exact solution (\ref{solution2}) and plot
the Bessel function of the first kind $J_{\xi}( k |r|)$ along the
radial direction $r$, see Fig. \ref{fig.origin}. It is observed that
the Bessel function of the first kind is non-smooth for the case
$\xi=2/3$, while it looks smooth across the origin for the case
$\xi=3/2$. Thus, it seems that the first derivative of $J_{3/2}( k
|r|)$ is still continuous along the radial direction. This perhaps
explains why the present WG method does not experience any order
reduction for the case $\xi=3/2$. In \cite{Monk11}, locally refined
meshes were employed to resolve the singularity at the origin so
that the convergence rate for the case $\xi=2/3$ could be improved. We
note that local refinement can also be adopted in the WG method for
a better convergence rate. A study of WG with local grid refinement
is left for future research.
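The different behavior at the origin can also be checked numerically: near $r=0$, $J_{\xi}(k|r|)\sim (k|r|/2)^{\xi}/\Gamma(\xi+1)$, so the one-sided difference quotient at the origin blows up for $\xi=2/3$ (a cusp like $|r|^{2/3}$) but stays small for $\xi=3/2$. A sketch using SciPy:

```python
from scipy.special import jv

k, eps = 4.0, 1e-6

def radial_profile(xi, r):
    """J_xi(k|r|) along the radial direction r."""
    return jv(xi, k * abs(r))

def one_sided_slope(xi):
    """Forward difference quotient at the origin."""
    return (radial_profile(xi, eps) - radial_profile(xi, 0.0)) / eps

s_23 = one_sided_slope(2.0 / 3.0)  # blows up as eps -> 0
s_32 = one_sided_slope(1.5)        # stays near zero
```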
\begin{figure}
\caption{The Bessel function of the first kind $J_{\xi}(k|r|)$ along
the radial direction $r$ for $\xi=3/2$ and $\xi=2/3$.}\label{fig.origin}
\end{figure}
\subsection{A Helmholtz problem with inhomogeneous media}
We consider a Helmholtz problem with inhomogeneous media defined on
a circular domain with radius $R$. Note that the spatial function
$d(x,y)$ in the Helmholtz equation (\ref{pde}) represents the
dielectric properties of the underlying media. In particular, we
have $d=\frac{1}{\epsilon}$ in electromagnetic applications
\cite{Zhao10}, where $\epsilon$ is the electric permittivity. In the
present study, we construct a smoothly varying dielectric profile:
\begin{equation}\label{dr}
d(r)= \frac{1}{\epsilon_1}S(r) + \frac{1}{\epsilon_2}(1-S(r)),
\end{equation}
where $r=\sqrt{x^2+y^2}$, $\epsilon_1$ and $\epsilon_2$ are
dielectric constants, and
\begin{equation}
S(r)=
\begin{cases}
1 & \text{if $r<a$}, \\
-2\left( \frac{b-r}{b-a} \right)^3 +
3\left( \frac{b-r}{b-a} \right)^2 & \text{if $a \le r \le b$}, \\
0 & \text{if $r>b$},
\end{cases}
\end{equation}
with $a<b<R$. An example plot of $d(r)$ and $S(r)$ is shown in Fig.
\ref{fig.eps}. In classical electromagnetic simulations, $\epsilon$
is usually taken as a piecewise constant, so that some
sophisticated numerical treatments have to be conducted near the
material interfaces to secure the overall accuracy \cite{Zhao10}.
Such a procedure can be bypassed if one considers a smeared
dielectric profile, such as (\ref{dr}). We note that in the limit
$b \to a$, a piecewise constant profile is recovered in (\ref{dr}).
In general, the smeared profile (\ref{dr}) might be generated via
numerical filtering, such as the so-called $\epsilon$-smoothing
technique \cite{Shao03} in computational electromagnetics. On the
other hand, we note that the dielectric profile might be defined to
be smooth in certain applications. For example, in studying the
solute-solvent interactions of electrostatic analysis, some
mathematical models \cite{Chen10,Zhao11} have been proposed that treat
the boundary between the protein and its surrounding aqueous
environment as a smoothly varying one. In fact, the definition
(\ref{dr}) is inspired by a similar model in that field
\cite{Chen10}.
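The profile can be implemented directly; the values below ($a=1$, $b=3$, $\epsilon_1=2$, $\epsilon_2=80$) match the configuration used later in this subsection. A minimal sketch:

```python
def S(r, a=1.0, b=3.0):
    """Cubic blending function: 1 inside r < a, 0 outside r > b."""
    if r < a:
        return 1.0
    if r > b:
        return 0.0
    s = (b - r) / (b - a)
    return -2.0 * s**3 + 3.0 * s**2

def d(r, eps1=2.0, eps2=80.0):
    """Smoothly varying dielectric profile d(r)."""
    return S(r) / eps1 + (1.0 - S(r)) / eps2
```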
\begin{figure}
\caption{An example plot of the smooth dielectric profile $d(r)$ and
$S(r)$ with $a=1$, $b=3$ and $R=5$. The dielectric coefficients of
protein and water are used, i.e., $\epsilon_1=2$ and
$\epsilon_2=80$. }\label{fig.eps}
\end{figure}
In the present study, we choose the source term of the Helmholtz equation
(\ref{pde}) to be
\begin{equation}
f(r)=k^2 [d(r)-1] J_0 (k r) + k d'(r) J_1(kr),
\end{equation}
where
\begin{equation}
d'(r)=\left( \frac{1}{\epsilon_1}-\frac{1}{\epsilon_2} \right) S'(r)
\end{equation}
and
\begin{equation}
S'(r)=
\begin{cases}
0 & \text{if $r<a$}, \\
\dfrac{6}{b-a}\left[ \left( \frac{b-r}{b-a} \right)^2 -
\left( \frac{b-r}{b-a} \right) \right] & \text{if $a \le r \le b$}, \\
0 & \text{if $r>b$}.
\end{cases}
\end{equation}
For simplicity, a Dirichlet boundary condition is imposed at $r=R$
with $u=g$. Here $g$ is prescribed according to the exact solution
\begin{equation}
u=J_0(kr).
\end{equation}
Our numerical investigation assumes the values $a=1$, $b=3$ and
$R=5$. The wave number is set to $k=2$. The dielectric
coefficients are chosen as $\epsilon_1=2$ and $\epsilon_2=80$, which
represent the dielectric constants of protein and water
\cite{Chen10,Zhao11}, respectively. The WG method with piecewise
constant finite element functions is employed to solve the present
problem with inhomogeneous media in Cartesian coordinates. Table
\ref{table.Ex3} reports the computational errors and the
numerical rates of convergence. It can be seen that the numerical
convergence in the relative $L^2$ error is not uniform, while the
relative $H^1$ error still converges uniformly at first order. This
phenomenon might be related to the non-uniformity and smallness of
the medium in part of the computational domain.
In particular, we note that
the relative $L^2$ error on the coarsest grid is extremely large, so that
the numerical order for the first mesh refinement is unusually high.
To be fair, we thus exclude this data point from our analysis.
To get an idea of the overall numerical order of this non-uniform
convergence, we calculated the average convergence rate and the least-squares
fitted convergence rate over the remaining mesh refinements,
which are $1.97$ and $1.88$, respectively.
Thus,
the present inhomogeneous example demonstrates the accuracy and robustness
of the WG method for the Helmholtz equation.
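The two summary rates quoted above can be reproduced from the relative $L^2$ column of the table below, excluding the coarsest grid: the average of the pairwise observed orders, and the slope of a least-squares fit of $\log e$ against $\log h$. A sketch:

```python
import math

# L2 errors from the inhomogeneous-media table, coarsest grid excluded.
h = [7.54e-01, 3.77e-01, 1.88e-01, 9.42e-02, 4.71e-02]
e = [1.20e-01, 1.81e-02, 5.71e-03, 2.14e-03, 5.11e-04]

# Average of the pairwise observed orders.
orders = [math.log(e[i-1] / e[i]) / math.log(h[i-1] / h[i])
          for i in range(1, len(h))]
avg_rate = sum(orders) / len(orders)

# Least-squares slope of log(e) versus log(h).
x = [math.log(v) for v in h]
y = [math.log(v) for v in e]
mx, my = sum(x) / len(x), sum(y) / len(y)
lsq_rate = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
```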
\begin{table}[!ht]
\caption{Numerical convergence test of the Helmholtz equation with
inhomogeneous media. } \label{table.Ex3}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{relative $H^1$} & \multicolumn{2}{c|}{relative $L^2$} \\
\cline{2-3} \cline{4-5}
$h$ & error & order & error & order \\
\hline
1.51e+00 & 2.20e-01 & & 1.04e+00 & \\
7.54e-01 & 1.24e-01 & 0.83 & 1.20e-01 & 3.11 \\
3.77e-01 & 6.24e-02 & 0.99 & 1.81e-02 & 2.73 \\
1.88e-01 & 3.13e-02 & 1.00 & 5.71e-03 & 1.67 \\
9.42e-02 & 1.56e-02 & 1.00 & 2.14e-03 & 1.42 \\
4.71e-02 & 7.82e-03 & 1.00 & 5.11e-04 & 2.06 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Large wave numbers}
We finally investigate the performance of the WG method for the
Helmholtz equation with large wave numbers.
As discussed above, without resorting to high order generalizations or
analytical/special treatments, we will examine the use of
the plain WG method for tackling the pollution effect.
The homogeneous
Helmholtz problem of Subsection \ref{convex}
will be studied again. Both the $RT_0$ and $RT_1$ elements are
used to solve the homogeneous Helmholtz equation with the Robin
boundary condition. Since this problem is defined on a structured
hexagon domain, a uniform triangular mesh with a constant mesh size
$h$ throughout the domain is used. This enables us to precisely
evaluate the impact of the mesh refinements. Following the
literature \cite{bao04,fw}, we will focus only on the relative
$H^1$ semi-norm in the present study.
\begin{figure}
\caption{Relative $H^1$ error of the WG solution. Left: with respect
to $1/h$; Right: with respect to wave number $k$. }\label{fig.kh}
\end{figure}
To study the non-robustness with respect to the wave number
$k$, i.e., the pollution effect, we solve the corresponding
Helmholtz equation using the piecewise constant WG method with
various mesh sizes for four wave numbers $k=5$, $k=10$, $k=50$, and
$k=100$, see Fig. \ref{fig.kh} (left) for the WG performance. From
Fig. \ref{fig.kh} (left), it can be seen that as $h$ decreases,
the WG method immediately begins to converge for the cases $k=5$ and
$k=10$. However, for the large wave numbers $k=50$ and $k=100$, the
relative error remains at about $100$\% until $h$ becomes quite
small, i.e., $1/h$ is large. This indicates the presence of the
pollution effect, which is inevitable in any finite element method
\cite{BabSau}.
In the same figure, we also show the errors for
different $k$ values with $kh=0.25$ fixed. Surprisingly, we found
that the relative $H^1$ error does not evidently increase as $k$
becomes larger. The convergence line for $kh=0.25$ looks almost
flat, with a very small slope. In other words, the pollution error
is very small in the present WG result.
We note that such a result
is as good as the one reported in \cite{fw} by using a
penalized discontinuous Galerkin approach with optimized parameter
values.
In contrast, no parameters are involved in the WG scheme.
On the other hand, the good performance of the WG method for the
case $kh=0.25$ does not mean that the WG method is free of the
pollution effect. In fact, it is known theoretically \cite{BabSau}
that the pollution error cannot be eliminated completely in two- and
higher-dimensional spaces for Galerkin finite element methods. In
the right chart of Fig. \ref{fig.kh}, we examine the numerical
errors as $k$ increases, under the constraint that $kh$ is a
constant. Large wave numbers, up to $k=240$, are tested. It can be
seen that as the constant changes from $0.5$ to $0.75$ and $1.0$,
the non-robustness with respect to $k$ becomes more and more
evident. However, the slopes of the $kh$=constant lines remain
small and the growth pattern with respect to $k$ is always
monotonic. This suggests that the pollution error is well controlled
in the WG solution.
\begin{figure}
\caption{Exact solution (left) and piecewise constant WG
approximation (right) for $k=100$ and $h=1/60$.}\label{fig.solu2d}
\end{figure}
\begin{figure}
\caption{The trace plot along the $x$-axis ($y=0$) of the WG solution
using piecewise constants. }\label{fig.solu1d}
\end{figure}
\begin{figure}
\caption{Exact solution (left) and piecewise linear WG approximation
(right) for $k=100$ and $h=1/60$.}\label{fig.solu2d_linear}
\end{figure}
\begin{figure}
\caption{The trace plot along the $x$-axis ($y=0$) of the WG solution
using piecewise linear elements. }\label{fig.solu1d_linear}
\end{figure}
In the rest of this section, we present some numerical results
for the WG method when applied to a challenging case of high wave
numbers. In Fig. \ref{fig.solu2d} and \ref{fig.solu2d_linear}, the
WG numerical solutions are plotted against the exact solution of the
Helmholtz problem. Here we take a wave number $k=100$ and mesh size
$h=1/60$, which is a relatively coarse mesh. With such a coarse mesh,
the WG method can still capture the fast oscillation of the
solution. However, the numerically predicted magnitude of the
oscillation is slightly damped for waves away from the center when
piecewise constant elements are employed in the WG method. Such
damping can be seen in a trace plot along the $x$-axis ($y=0$). To see
this, we consider an even worse case with $k=100$ and $h=1/50$. The
result is shown in the first chart of Fig. \ref{fig.solu1d}. We note
that the numerical solution is excellent around the center of the
region, but it gets worse as one moves closer to the boundary. If we
choose a smaller mesh size $h=1/120$, the visual difference between
the exact and WG solutions becomes very small, as illustrated in Fig.
\ref{fig.solu1d}. If we further choose a mesh size $h=1/200$, the
exact solution and the WG approximation look very close to each
other. This indicates the excellent convergence of the WG method when
the mesh is refined. In addition to mesh refinement, one may also
obtain fast convergence by using higher order elements in the WG
method. Figure \ref{fig.solu1d_linear} illustrates a trace plot for
the case of $k=100$ and $h=1/60$ when piecewise linear elements are
employed in the WG method. It can be seen that the computational
result with this relatively coarse mesh captures both the fast
oscillation and the magnitude of the exact solution very well.
\section{Concluding Remarks}
The present numerical experiments indicate that the WG method as introduced
in \cite{wy} is a very promising numerical technique for solving the
Helmholtz equations with large wave numbers. This finite element
method is robust, efficient, and easy to implement. On the other
hand, a theoretical investigation for the WG method should be
conducted by taking into account some useful features of the
Helmholtz equation when special test functions are used. It would
also be valuable to test the performance of the WG method when high
order finite elements are applied to the Helmholtz equation
with large wave numbers in two and three dimensional spaces.
Finally, it is appropriate to clarify some differences and connections
between the WG method and other discontinuous finite element methods for solving
the Helmholtz equation.
Discontinuous functions are used to approximate the Helmholtz equation in many other finite element methods such as discontinuous Galerkin (DG) methods \cite{fw,abcm,Chung}
and hybrid discontinuous Galerkin (HDG) methods \cite{cdg,cgl,Monk11}.
However, the WG method and the HDG method are fundamentally different in concept and
formulation. The HDG method is formulated by using the standard
mixed method approach for the usual system of first order equations,
while the key to the WG is the use of the discrete weak differential
operators. For a second order elliptic problem, these two methods share the same feature by
approximating first order derivatives or fluxes through a formula
that was commonly employed in the mixed finite element method. For
high order partial differential equations (PDEs),
the WG method differs greatly from the HDG method.
Consider the biharmonic equation \cite{mwy-biharmonic} as an example.
The first step of the HDG formulation is to rewrite the fourth order equation as
four first order equations. In contrast, the WG formulation for the biharmonic equation
can be derived directly from the variational form of the biharmonic equation
by replacing the Laplacian operator $\Delta$ by a weak Laplacian $\Delta_w$ and
adding a parameter free stabilizer \cite{mwy-biharmonic}.
It should be emphasized that the concept of
weak derivatives makes the WG a widely applicable numerical technique
for a large variety of PDEs which we shall report in forthcoming papers.
For the Helmholtz equation studied in this paper,
the WG method and the HDG method
yield the same variational form for the homogeneous Helmholtz equation
with a constant $d$ in (\ref{pde}).
However, the WG discretization differs from the HDG discretization
for an inhomogeneous media problem with $d$ being a spatial function of
$x$ and $y$.
Moreover, the WG method has an advantage over the HDG method
when the coefficient $d$ is degenerate.
\end{document} | math | 43,860 |
\begin{document}
\title{On the existence of solutions for the Maxwell equations}
\author{Luigi Corgnier}
\date{22 April 2009}
\maketitle
\begin{abstract}
Mathematical proofs are presented concerning the existence of solutions of
the Maxwell equations with suitable boundary conditions. In particular it is
stated that the well known \lq\lq delayed potentials\rq\rq provide effective
solutions of the equations, under reasonable conditions on the sources of
the fields.
\end{abstract}
\section{Introduction}
Any advanced text on theoretical classical electromagnetism (see for example
[1], [2], [3], [4]) states that all the physical properties of the
electromagnetic fields can be mathematically deduced from the Maxwell
Equations, and performs such a deduction, obtaining formulae (the \lq\lq
delayed potentials\rq\rq) that allow, in principle, the evaluation of the fields,
starting from the knowledge of the sources, i.e. charges and currents.
The problem is that the delayed potentials are obtained, as pointed out in
the following section, with a method which ensures that, if solutions exist
for the Maxwell Equations with suitable conditions, they must be of the
provided form. Therefore the usual deduction is able to provide uniqueness
theorems, but not existence theorems for the solutions of the Maxwell
Equations.
This is not so surprising: also in similar areas (e.g. the Laplace
Equation for harmonic functions) the deduction of uniqueness theorems is
much easier than the deduction of existence theorems.
For the above reasons, the traditional deduction of the electromagnetism
leaves open the problem that solutions of the base equations could not
exist. The usual way to overcome this situation is to state that \lq\lq due
to the physical nature of the problem, solutions must exist\rq\rq, but
clearly this cannot be accepted from a mathematical point of view.
The present paper presents existence theorems, i.e. states that, under
suitable conditions on the sources, the Maxwell Equations admit solutions,
and moreover that the delayed potentials provide an effective solution.
Similar results could be obtained using the general theory of partial
differential equations or distribution theory, but the presented deduction
is performed without any use of such general theories, and requires no
mathematical background besides the one required for reading any
theoretical text on electromagnetism.
\noindent The first section summarises the standard deduction of the delayed
potentials, and points out its defects.
\noindent The second section reports the obtained results, proving that the
delayed potentials provide an effective solution.
\noindent The third section summarises the results and discusses some
extensions.
\section{Standard deduction of electromagnetism}
The Maxwell Equations for the wide space have the form:
$$\mathrm{rot}\,\mathbf{E}+\frac{1}{c}\frac{\partial \mathbf{H}}{\partial t}=0\eqno(1)$$
$$\mathrm{rot}\,\mathbf{H}-\frac{1}{c}\frac{\partial \mathbf{E}}{\partial t}=\frac{4\pi }{c}\mathbf{j}\eqno(2)$$
$$\mathrm{div}\,\mathbf{H}=0\eqno(3)$$
$$\mathrm{div}\,\mathbf{E}=4\pi \rho \eqno(4)$$
\noindent where:
\begin{itemize}
\item $\mathbf{E}$ is the electric field
\item $\mathbf{H}$ is the magnetic field
\item $c$ is the light speed in vacuum
\item $\mathbf{j}$ is the current density
\item $\rho$ is the charge density
\end{itemize}
The last two quantities, named \lq\lq sources of the fields\rq\rq, are not
independent, since they must satisfy the continuity equation
$$\mathrm{div}\,\mathbf{j}+\frac{\partial \rho }{\partial t}=0\eqno(5)$$
\noindent whose physical meaning is the conservation of the total charge.
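As a concrete illustration of (5), one can verify symbolically that a given pair of sources is admissible. The plane-wave-like pair below is a hypothetical example chosen for the check, not taken from the text:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)

# Hypothetical sources: a charge density and a current density
# chosen so that the continuity equation (5) holds identically.
rho = sp.cos(x - t)
j = (sp.cos(x - t), sp.Integer(0), sp.Integer(0))

div_j = sp.diff(j[0], x) + sp.diff(j[1], y) + sp.diff(j[2], z)
residual = sp.simplify(div_j + sp.diff(rho, t))  # should vanish
```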
The main problem of the electrodynamics is the following: given $\mathbf{j}$
and $\rho $, as functions of space and time satisfying (5), find the
electric and magnetic fields. The standard approach to this problem is
summarised in the following subsections.
\subsection{Definition of the potentials}
It is trivial to prove that, given any vector function $\mathbf{A}$, the
vector function $\mathbf{H}=\mathrm{rot}\,\mathbf{A}$ satisfies (3). Starting
from this point, the standard developments of electrodynamics make the
claim that, in order to satisfy (3), the magnetic field $\mathbf{H}$ must be
given by
$$\mathbf{H}=\mathrm{rot}\,\mathbf{A}\eqno(6)$$
\noindent where $\mathbf{A}$ is a suitable vector function. A proof of this
fact is not difficult, but it is not reported, since this result is not
required in the following. Assuming that $\mathbf{H}$ is given by (6) and
substituting into (1), the following is obtained:
$$\mathrm{rot}\Big(\mathbf{E}+\frac{1}{c}\frac{\partial \mathbf{A}}{\partial t}\Big)=0$$
\noindent whose consequence is that the quantity in parentheses is the
gradient of some scalar function $-\varphi $. Therefore the following
formula is obtained:
$$\mathbf{E}=-\mathrm{grad}(\varphi )-\frac{1}{c}\frac{\partial \mathbf{A}}{\partial t}\eqno(7)$$
Having introduced the scalar potential $\varphi $ and the vector potential $
\mathbf{A}$, the search for the fields is restricted to those given by (6)
and (7). Consequently (1) and (3) are automatically satisfied, while by
substituting into (2) and (4) and performing some calculations the following
two equations are obtained:
$$\Delta \mathbf{A}-\frac{1}{c^{2}}\frac{\partial ^{2}\mathbf{A}}{\partial t^{2}}
-\mathrm{grad}\Big(\mathrm{div}\,\mathbf{A}+\frac{1}{c}\frac{\partial \varphi }{\partial t}\Big)=-\frac{4\pi }{c}\mathbf{j}$$
$$\Delta \varphi -\frac{1}{c^{2}}\frac{\partial ^{2}\varphi }{\partial t^{2}}+
\frac{1}{c}\frac{\partial }{\partial t}\Big(\mathrm{div}\,\mathbf{A}+\frac{1}{c}\frac{\partial \varphi }{\partial t}\Big)=-4\pi \rho $$
Normally the last equations are simplified by assuming the Gauge condition:
$$\mathrm{div}\,\mathbf{A}+\frac{1}{c}\frac{\partial \varphi }{\partial t}=0\eqno(8)$$
and this assumption is heuristically justified by observing that $\varphi $
and $\mathbf{A}$ are not uniquely defined by (6) and (7). Summarising, the
restriction is done to search for fields given by (6) and (7), where $\mathbf{A}$
and $\varphi $ satisfy (8). With the above assumptions, the equations
satisfied by the potentials reduce to the form
$$\Delta \mathbf{A}-\frac{1}{c^{2}}\frac{\partial ^{2}\mathbf{A}}{\partial t^{2}}=-\frac{4\pi }{c}\mathbf{j}\eqno(9)$$
$$\Delta \varphi -\frac{1}{c^{2}}\frac{\partial ^{2}\varphi }{\partial t^{2}}=-4\pi \rho \eqno(10)$$
If solutions of the above equations, subject to (8), can be found, it can
be concluded that the Maxwell equations have at least one solution, given
by (6) and (7).
It is readily seen that (9) and (10) form a system of 4 independent scalar
equations; moreover, they are all of the same form
$\Delta F-\frac{1}{{c^{2}}}\frac{{\partial ^{2}F}}{{\partial t^{2}}}=-G$ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (11)
\subsection{Elimination of the time dependence}
Equation (11) is simplified by taking its temporal Fourier transform
(assuming that the functions involved admit one), and by assuming that
integrations and differentiations can be interchanged. By standard
calculations the following equation is obtained:
$\Delta f+k^{2}f=-g$\noindent\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ (12)
where
\begin{itemize}
\item the variable replacing $t$ in the transformed domain is denoted by $\omega$
(it has the meaning of an angular frequency)
\item $k={\omega /c}$ , where $c$ is the light speed in vacuum
\item $f(\omega)$ and $g(\omega)$ are the Fourier transforms of $F$ and $G$,
respectively.
\end{itemize}
In (12), $k$ simply plays the role of a constant parameter.
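As an illustrative aside (not part of the original argument), this reduction can be verified symbolically. The following Python/sympy sketch substitutes the monochromatic ansatz $F=f\,e^{-i\omega t}$, $G=g\,e^{-i\omega t}$ into (11) and recovers (12):

```python
import sympy as sp

x, y, z, t, w, c = sp.symbols('x y z t omega c', positive=True)
f = sp.Function('f')(x, y, z)
g = sp.Function('g')(x, y, z)

# Monochromatic ansatz: every frequency component evolves as exp(-i*omega*t)
F = f * sp.exp(-sp.I * w * t)
G = g * sp.exp(-sp.I * w * t)

# Left-hand side of (11) with the source -G brought to the left
laplacian_F = sum(sp.diff(F, v, 2) for v in (x, y, z))
wave_lhs = laplacian_F - sp.diff(F, t, 2) / c**2 + G

k = w / c
helmholtz_lhs = sum(sp.diff(f, v, 2) for v in (x, y, z)) + k**2 * f + g

# The two expressions agree after factoring out the time dependence
assert sp.simplify(wave_lhs - helmholtz_lhs * sp.exp(-sp.I * w * t)) == 0
print("the ansatz reduces the wave equation (11) to the Helmholtz equation (12)")
```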
\subsection{Green Theorem}
Equation (12) is treated with the well known Green method, as is done for
the Laplace equation and harmonic functions. As in that case, the Green
method provides a uniqueness theorem for the solution of (12) with assigned
values on a boundary, but does not provide any existence theorem for such
a solution. The Green function is chosen as $\frac{{e^{ikd}}}{d}$, where
$d=\sqrt{(x-x_{0})^{2}+(y-y_{0})^{2}+(z-z_{0})^{2}}$, and $x_{0}$, $y_{0}$,
$z_{0}$ are the coordinates of an arbitrary point $P_{0}$ (it is easily
verified that this function is regular and satisfies the homogeneous form
of (12) everywhere except at $P_{0}$, therefore it is an appropriate Green
function; with the time dependence $\exp(-i\omega t)$ used below, the sign
$+ik$ in the exponent is the one that leads to the retarded solution).
In such a way the Green Theorem is obtained in the form
$f(P_{0})=\frac{1}{{4\pi }}\int\limits_{D}\frac{{g\,e^{ikd}}}{d}\,dV+\frac{1
}{{4\pi }}\int\limits_{S}\Big(\frac{{e^{ikd}}}{d}\frac{{\partial f}}{{\partial
n}}-f\frac{\partial }{{\partial n}}\frac{{e^{ikd}}}{d}\Big)dS$
\noindent and provides the solution of (12) at an arbitrary point interior
to a spatial domain $D$ with boundary $S$, using the known term $g$ and the
value of the solution and of its normal derivative taken on the boundary, in
total analogy with the Green formula for the Laplace equation.
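The stated property of the Green function can also be checked symbolically. Using the radial form of the Laplacian, $\Delta u=(1/r)\,\partial^{2}(ru)/\partial r^{2}$, the following sympy sketch (an aside, not in the original) verifies that $e^{\pm ikr}/r$ satisfies the homogeneous Helmholtz equation for $r>0$, whichever sign convention is adopted in the exponent:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)

# For a radial function u(r), the Laplacian is (1/r) * d^2(r*u)/dr^2.
for sign in (1, -1):                      # either sign of the exponent works
    G = sp.exp(sign * sp.I * k * r) / r   # candidate Green function e^{±ikr}/r
    laplacian_G = sp.diff(r * G, r, 2) / r
    assert sp.simplify(laplacian_G + k**2 * G) == 0
print("e^{±ikr}/r satisfies the homogeneous Helmholtz equation for r > 0")
```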
A similar formula for the solution of (11) is obtained by applying the
inverse Fourier transform, and assuming interchangeability between
differentiation and integration:
$F(P_{0})=\frac{1}{{4\pi }}\int\limits_{D}\frac{{G(t-\frac{d}{c})}}{d}\,dV+
\frac{1}{{4\pi }}\int\limits_{S}\Big[\frac{1}{d}\frac{{\partial F}}{{\partial n}
}+\frac{1}{d}\Big(\frac{1}{c}\frac{{\partial F}}{{\partial t}}+\frac{F}{d}\Big)\frac{
{\partial d}}{{\partial n}}\Big]\Big|_{t-\frac{d}{c}}dS$
Finally, a behavior at infinity for $F$ is assumed that ensures the
vanishing of the second integral when the domain $D$ tends to infinity, and
the following is obtained:
$F(P_{0})=\frac{1}{{4\pi }}\int\frac{{G(t-\frac{d}{c})}}{d}\,dV$
\ \ \ \ \ \ \ \ \ \ (13)
\noindent where the integral is now extended to the whole space. (13)
provides an explicit solution of (11), under the assumption that a
solution exists satisfying all the outlined conditions. Coming back to the
equations for the potentials, (13) provides:
$\mathbf{A}(P_{0})=\frac{1}{c}\int{\frac{{\mathbf{j}(x,y,z,t-
\frac{d}{c})}}{d}}dV$ \ \ \ \ \ \ \ \ \ \ (14)
$\varphi (P_{0})=\int{\frac{{\rho (x,y,z,t-\frac{d}{c})}}{d}}dV$
\ \ \ \ \ \ \ \ \ \ (15)
The above formulae are known as ``delayed potentials'', and are believed to
provide, with (6) and (7), a general solution to the electrodynamics problem.
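The elementary building block of (13) can be checked directly: an outgoing spherical wave $f(t-r/c)/r$ satisfies the homogeneous wave equation away from the source point. A sympy sketch (an illustrative aside, using the radial form of the Laplacian):

```python
import sympy as sp

r, t, c = sp.symbols('r t c', positive=True)
f = sp.Function('f')
phi = f(t - r/c) / r    # outgoing spherical wave produced by a point source

# d'Alembertian with the radial Laplacian (1/r) d^2(r*phi)/dr^2
box_phi = sp.diff(r * phi, r, 2) / r - sp.diff(phi, t, 2) / c**2
assert sp.simplify(box_phi) == 0
print("f(t - r/c)/r solves the homogeneous wave equation away from the source")
```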
\subsection{Criticism}
As the above review of the standard deduction points out, the fields
obtained by applying (6) and (7) to (14) and (15) provide a solution to the
Maxwell equations assuming that the following conditions are verified:
\begin{itemize}
\item Existence of a solution which can be derived from the potentials
according to (6) and (7) (this would not be difficult to prove under the
assumption that a solution of (1), (2), (3), (4) exists)
\item Existence of a solution with suitable conditions at infinity
\item As a minor point, validity of the various hypotheses concerning the
possibility of exchanging integrals (singular and with infinite domain) and
derivatives
\item Existence of the integrals appearing in (14) and (15)
\item Validity, for the potentials given by (14) and (15), of condition
(8), which is necessary to conclude that (9) and (10) provide solutions for
the potentials.
\end{itemize}
In conclusion, the summarised deduction provides only a heuristic
indication that possible solutions of the Maxwell equations have been found.
\section{Solubility Theorems}
In order to overcome the summarised limits, the chosen strategy does not
attempt to verify the validity of the various conditions pointed out at the
end of the preceding section, but directly attacks the problem of proving
that, under suitable conditions on the field sources $\rho $ and $\mathbf{j}$,
(14) and (15) provide functions that satisfy (8), (9) and (10), from which
it is straightforward to prove that the fields obtained by (6) and (7)
satisfy the Maxwell equations (1), (2), (3), (4).
The assumed conditions on $\rho$ and $\mathbf{j}$ are the following:
\begin{itemize}
\item Regularity, i.e. existence and continuity up to the first derivatives
\item Concerning the space variables, vanishing outside of a finite regular
domain $D$, where \lq\lq regular \rq\rq means that its boundary admits
everywhere a tangent plane
\item Concerning the time dependence, sinusoidal behaviour, or existence of
the Fourier transform, vanishing outside of a finite interval for the
transformed variable $\omega$ (in practice, finitely extended frequency
spectrum)
\end{itemize}
These conditions, in particular the second and the third, are surely
stronger than necessary, even if acceptable from a physical point of view.
The deduction will show some possibilities for extensions. The proofs are
reported for the following cases, of increasing complexity:
\begin{itemize}
\item Electrostatic case
\item Magnetostatic case
\item Monochromatic case
\item General case.
\end{itemize}
\subsection{Electrostatic case}
This is the case in which all the charges are fixed, i.e. the density $\rho $
does not depend on time, and the current density $\mathbf{j}$ vanishes
(note that condition (5) is trivially satisfied). In this case the proposed
solutions (14) and (15) assume the form
$\mathbf{A}(P_{0})=0$ \ \ \ \ \ \ \ \ \ \ (14)
$\varphi (P_{0})=\int\limits_{D}{\frac{{\rho (x,y,z)}}{d}}dV$
\ \ \ \ \ \ \ \ \ \ (15)
\noindent where $D$ is the finite domain in which $\rho $ does not vanish,
while the equations to be verified ((8), (9) and (10)) reduce to
$\Delta \varphi =-4\pi \rho $ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (16)
If a point $P_{0}$ is external to $D$, the integral appearing in (15) is
not singular and can be differentiated under the sign, trivially providing
$\Delta \varphi =0=-4\pi \rho $ (since $\rho $ vanishes outside $D$), as
required. \noindent If the point $P_{0}$ is internal to $D$, a
transformation to polar coordinates $u$, $\alpha $, $\beta $ centered on
$P_{0}$ is performed, giving
$\varphi (P_{0})=\int_{0}^{2\pi }d\beta \int_{0}^{\pi }\sin \alpha
\,d\alpha \int_{0}^{g(P_{0},\alpha ,\beta )}\rho (P_{0}+\mathbf{v})\,u\,du$
\ \ \ \ \ \ \ \ \ \ (17)
\noindent in which the vector $\mathbf{v}=(u\sin \alpha \cos \beta ,u\sin
\alpha \sin \beta ,u\cos \alpha )$ has been introduced, and $g(P_{0},\alpha
,\beta )$ is the distance between $P_{0}$ and the point on the boundary of
$D$ with polar angles $\alpha $, $\beta $, which is a regular function.
In (17) every singularity has disappeared, and this ensures the existence
and regularity of $\varphi (P_{0})$, and the existence of the vector field
given by
\ $\mathbf{E}=-\mathrm{grad}(\varphi )$
The question is whether $\mathbf{E}$ can be calculated by taking the
derivatives with respect to $P_{0}$ under the sign of the integral
appearing in (15). The answer is certainly ``yes'' if the integrals
obtained by formal derivation under the sign are uniformly convergent.
Performing the formal derivation, one obtains
$\frac{{\partial \varphi }}{{\partial x_{0}}}=^{?}-\int\limits_{D}{\rho
\frac{{x_{0}-x}}{{d^{3}}}}dV$
\noindent where the question mark means that the result must be
justified. Performing the same transformation to polar coordinates just
used, the last integral transforms to
$\frac{{\partial \varphi }}{{\partial x_{0}}}=^{?}\int_{0}^{2\pi }\cos
\beta \,d\beta \int_{0}^{\pi }\sin ^{2}\alpha \,d\alpha \int_{0}^{g(P_{0},
\alpha ,\beta )}\rho (P_{0}+\mathbf{v})\,du$
\noindent which contains no singularities. Therefore it represents a
continuous function of $P_{0}$, the uniform convergence is ensured, and the
formal derivation is justified. It is important to note that a similar
reasoning cannot be applied to the second derivatives, which defeats any
attempt to verify (16) by a direct calculation. Having justified the first
order derivation under the sign of $\varphi (P_{0})$, the following result
is obtained
$\mathbf{E}(P_{0})=\int\limits_{D}{\rho (P)\frac{{\mathbf{r}_{0}-\mathbf{r}}
}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}}dV$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ (18)
\noindent with uniform convergence of the integral at its unique singular
point $P_{0}$. Now let $V$ be an arbitrary regular domain internal to $D$,
with boundary $S$. The aim is to calculate the flux of $\mathbf{E}$ out of $
V $. With some straightforward calculations, performing an integration
exchange justified by the uniform convergence already established, one finds:
$\int\limits_{S}{\mathbf{E}.\mathbf{n}dS_{0}=\int\limits_{V}{\rho
(P)dV\int\limits_{S}{\frac{{(\mathbf{r}_{0}-\mathbf{r}).\mathbf{n}}}{{|
\mathbf{r}_{0}-\mathbf{r}|^{3}}}}dS_{0}}}+\int\limits_{D-V}{\rho
(P)dV\int\limits_{S}{\frac{{(\mathbf{r}_{0}-\mathbf{r}).\mathbf{n}}}{{|
\mathbf{r}_{0}-\mathbf{r}|^{3}}}}dS_{0}}$ \ \ \ \ \ \ \ \ \ \ (19)
\noindent where $\mathbf{n}$ is the unit vector normal to $S$. In (19)
the second integral does not contain singularities, since $P_{0}$ is on $S$,
while $P$ is external to $S$. Since a trivial calculation gives
$\mathrm{div}(\frac{{\mathbf{r}_{0}-\mathbf{r}}}{{|\mathbf{r}_{0}-\mathbf{r}
|^{3}}})=0$
\noindent when $P_{0}\neq P$, a standard application of the Gauss Theorem
proves that the second integral vanishes. By the same reasoning, in the
first integral the boundary $S$ can be replaced by a sphere centered on $P$
and totally included in $D$, without modifying its value. Under these
conditions, a direct calculation shows that
$\int\limits_{S}{\mathbf{E}.\mathbf{n}dS_{0}=4\pi \int\limits_{V}{\rho (P)dV}
}$
\noindent and using again Gauss Theorem
$\int\limits_{V}{\mathrm{div}\mathbf{E}dV=4\pi \int\limits_{V}{\rho (P)dV}}$
Finally, taking into account that the domain $V$ is arbitrary and that the
integrands are continuous, one obtains
$\mathrm{div}\mathbf{E}=4\pi \rho $
\noindent which, inserting the definition of $\mathbf{E}$, is equivalent to
the equation to be verified (16). The focal point of the reported proof is
that a vector function like (18) has zero flux through any closed surface
not enclosing singularities of the integral; from this it follows that the
flux through an arbitrary surface can be calculated by deforming the
surface into a sphere centered on the unique singular point, for which the
calculation is trivial. Similar methods are applied to the calculation of
the residues of an analytic function, providing the Cauchy formula, whose
aspect is similar to (18). This localisation property also makes it clear
how the restriction on the finiteness of the domain $D$ could be relaxed:
it is sufficient to separate $D$ into a finite part surrounding the point
$P$ and a remaining infinite part. For the finite part the reported
reasoning applies, while the remaining one gives no contribution. Obviously
some conditions must be added to justify the performed interchange of
integrals; a sufficient condition is the existence of the integrals
appearing in (19) as multiple integrals, by Fubini's Theorem.
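The chain (18) $\Rightarrow$ $\mathrm{div}\,\mathbf{E}=4\pi\rho$ can also be probed numerically. The following Python/numpy sketch (an illustrative aside; the uniform ball, the grid size and the tolerance are our own choices) evaluates (18) for a uniform unit-density ball and compares the result at an external point with the point-charge field $Q/r^{2}$ implied by the flux argument above:

```python
import numpy as np

# Uniform unit-density ball of radius 1, total charge Q = 4*pi/3 (Gaussian units).
# The field (18) at an external point must equal the point-charge value Q/|r0|^2.
n = 100
h = 2.0 / n
c = (np.arange(n) + 0.5) * h - 1.0                 # midpoint grid on [-1, 1]
X, Y, Z = np.meshgrid(c, c, c, indexing='ij')
inside = X**2 + Y**2 + Z**2 <= 1.0                 # voxels belonging to the ball

r0 = np.array([0.0, 0.0, 2.0])                     # external observation point
dz = r0[2] - Z
d3 = ((r0[0] - X)**2 + (r0[1] - Y)**2 + dz**2) ** 1.5
Ez = np.sum((dz / d3)[inside]) * h**3              # z-component of (18), rho = 1

Q = 4 * np.pi / 3
exact = Q / np.linalg.norm(r0)**2
assert abs(Ez - exact) / exact < 1e-2
print(f"E_z from (18): {Ez:.5f}   point-charge value Q/r^2: {exact:.5f}")
```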
\subsection{Magnetostatic case}
This is the case in which both the density $\rho $ and the current density $
\mathbf{j}$ do not depend on time. In this case it will be necessary to use
condition (5), which becomes
$\mathrm{div}\mathbf{j}=0$ \ \ \ \ \ \ \ \ \ \ (20)
In this case the proposed solutions (14) and (15) assume the form
$\mathbf{A}(P_{0})=\frac{1}{c}\int\limits_{D}{\frac{{\mathbf{j}(x,y,z)}}{d}}
dV$ \ \ \ \ \ \ \ \ \ \ (21)
$\varphi (P_{0})=\int\limits_{D}{\frac{{\rho (x,y,z)}}{d}}dV$
\ \ \ \ \ \ \ \ \ \ (22)
\noindent where $D$ is the finite domain in which $\rho $ and $\mathbf{j}$
do not vanish, while the equations to be verified ((8), (9) and (10)) reduce
to
$\mathrm{div}\mathbf{A}=0$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (23)
$\Delta \mathbf{A}=-\frac{{4\pi }}{c}\mathbf{j}$ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
(24)
$\Delta \varphi =-4\pi \rho $ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (25)
(25) has already been proved in the preceding section. The same reasoning,
applied to the individual components of $\mathbf{A}$ and $\mathbf{j}$,
proves (24). Therefore all that is required is to prove (23). To this aim,
the first step is to fix the point $\mathbf{r}_{0}$ and to evaluate the
flux of the vector $\frac{\mathbf{j}}{{|\mathbf{r}_{0}-\mathbf{r}|}}$
through the boundary $S$ of $D$. Considering first the case in which
$P_{0}$ is external to $D$, which avoids any singularity, and using (20),
(21), (22) and the Gauss Theorem, one obtains
$\int\limits_{S}{\frac{{\mathbf{j}.\mathbf{n}}}{{|\mathbf{r}_{0}-\mathbf{r}|}
}}dS=\int\limits_{D}{\mathrm{div}\frac{\mathbf{j}}{{|\mathbf{r}_{0}-\mathbf{r
}|}}}dV=\int\limits_{D}{\frac{1}{{|\mathbf{r}_{0}-\mathbf{r}|}}\mathrm{div}
\mathbf{j}}dV+\int\limits_{D}{\mathbf{j}.\mathrm{grad}\frac{1}{{|\mathbf{r}
_{0}-\mathbf{r}|}}}dV=\int\limits_{D}{\frac{{\mathbf{j}.(\mathbf{r}_{0}-
\mathbf{r})}}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}}dV=-c\,\mathrm{div}_{0}\mathbf{A
}(\mathbf{r}_{0})$
\noindent (in the last passage a derivative under the sign has been taken,
justified as in the preceding section).
Therefore in this case the proof of (23) is obtained, provided the flux
appearing on the left hand side is zero. A sufficient condition for this is
that at each point of $S$ the vector $\mathbf{j}$ is tangent to $S$, or in
particular null. To prove this, suppose that there exists a point $P$ on
$S$ at which $\mathbf{j}$ is directed, say, towards the exterior. Then it
would be possible to find a neighborhood of $P$ satisfying the same
condition (remember that $\mathbf{j}$ is assumed continuous), with the
consequence that the flux of $\mathbf{j}$ through such a neighborhood,
completed to a closed surface, would be strictly positive, in contradiction
with (20).
Considering now the case in which $P_{0}$ is internal to $D$, the integral
appearing in (21) becomes singular, and the above passages are not valid.
However, they can be applied after excluding from the integral the
contribution of a sphere $D_{\varepsilon }$ of radius $\varepsilon $
centered on $\mathbf{r}_{0}$ and with boundary $S_{\varepsilon }$, finding
$0=\int\limits_{S}{\frac{{\mathbf{j}.\mathbf{n}}}{{|\mathbf{r}_{0}-\mathbf{r}
|}}}dS=\int\limits_{S}{}+\int\limits_{S_{\varepsilon
}}{}-\int\limits_{S_{\varepsilon }}{\frac{{\mathbf{j}.\mathbf{n}}}{{|\mathbf{
r}_{0}-\mathbf{r}|}}dS=}\int\limits_{D-D_{\varepsilon }}{\mathrm{div}}\frac{
\mathbf{j}}{{|\mathbf{r}_{0}-\mathbf{r}|}}dV+\int\limits_{S_{\varepsilon }}{
\frac{{\mathbf{j}.\mathbf{n}}}{{|\mathbf{r}_{0}-\mathbf{r}|}}dS}$
The first integral is not singular, and can be treated as in the first case;
for the second one the mean value theorem is applied, obtaining
$\int\limits_{D-D_{\varepsilon }}{}\frac{{\mathbf{j}.(\mathbf{r}_{0}-\mathbf{
r})}}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}dV+K^{\ast }\varepsilon =0$
\noindent where $K^{\ast }$ is a suitable value between the minimum and the
maximum of $\mathbf{j.n}$ on $S_{\varepsilon }$, and therefore bounded.
Taking the limit for $\varepsilon \rightarrow 0$, the desired result $
\mathrm{div}_{0}\mathbf{A}(\mathbf{r}_{0})=0$ is obtained.
Finally, in the case of $P_{0}$ on the boundary of $D$, it is sufficient to
use the already obtained results and the continuity of $\mathrm{div}\mathbf{A}$,
whose proof is obtained by the method of the preceding section.
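The conclusion $\mathrm{div}\,\mathbf{A}=0$ can be illustrated numerically for an idealized divergence-free current, a closed line current, for which the volume integral (21) degenerates into a line integral along the wire. In the following Python/numpy sketch (an aside; the constant $1/c$ and the current intensity are dropped, a unit circular loop is assumed, and the divergence is taken by central differences) the computed divergence vanishes up to discretization error:

```python
import numpy as np

def vector_potential(r0, n=2000):
    """A(r0) for a closed unit circular loop in the xy-plane (constants dropped):
    A = loop integral of dl / |r0 - r(s)|, discretized with the midpoint rule."""
    s = (np.arange(n) + 0.5) * 2 * np.pi / n
    r = np.stack([np.cos(s), np.sin(s), np.zeros(n)], axis=-1)       # loop points
    dl = np.stack([-np.sin(s), np.cos(s), np.zeros(n)], axis=-1) * (2 * np.pi / n)
    d = np.linalg.norm(r0 - r, axis=-1)
    return np.sum(dl / d[:, None], axis=0)

r0 = np.array([0.5, 0.3, 0.8])               # observation point away from the wire
h = 1e-4                                      # step for central differences
div_A = sum(
    (vector_potential(r0 + h * e)[i] - vector_potential(r0 - h * e)[i]) / (2 * h)
    for i, e in enumerate(np.eye(3))
)
assert abs(div_A) < 1e-6
print(f"div A = {div_A:.2e} (zero up to discretization error)")
```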
\subsection{Monochromatic case}
This is the case in which the density $\rho $ and the current density $
\mathbf{j}$ have a sinusoidal time dependence. Using the complex exponential
notation, which simplifies some arguments, they are given by
$\rho (x,y,z,t)=\rho _{a}(x,y,z)\exp (-i\omega t)$ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (26)
$\mathbf{j}(x,y,z,t)=\mathbf{j}_{a}(x,y,z)\exp (-i\omega t)$ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ (27)
\noindent where $\rho _{a}(x,y,z)$ and $\mathbf{j}_{a}(x,y,z)$ depend only
on the spatial variables, and $\omega $ is a fixed parameter. In this case
the proposed solutions (14) and (15) assume the form
$\mathbf{A}(P_{0})=\frac{1}{c}\int\limits_{D}{\frac{{\mathbf{j}
_{a}(x,y,z)\exp (i\frac{\omega }{c}d)}}{d}}dV\cdot \exp (-i\omega t)$
\ \ \ \ \ \ \ \ \ \ (28)
$\varphi (P_{0})=\int\limits_{D}{\frac{{\rho _{a}(x,y,z)\exp (i\frac{\omega
}{c}d)}}{d}}dV\cdot \exp (-i\omega t)$ \ \ \ \ \ \ \ \ \ \ (29)
\noindent where $D$ is a finite domain outside of which $\rho $ and $\mathbf{
j}$ vanish. Setting
$\mathbf{A}=\mathbf{A}_{a}\exp (-i\omega t)$
$\varphi =\varphi _{a}\exp (-i\omega t)$
equations (28) and (29) give
$\mathbf{A}_{a}(P_{0})=\frac{1}{c}\int\limits_{D}{\frac{{\mathbf{j}
_{a}(x,y,z)\exp (i\frac{\omega }{c}d)}}{d}}dV$ \ \ \ \ \ \ \ \ \ \ (30)
$\varphi _{a}(P_{0})=\int\limits_{D}{\frac{{\rho _{a}(x,y,z)\exp (i\frac{
\omega }{c}d)}}{d}}dV$ \ \ \ \ \ \ \ \ \ \ (31)
\noindent while the equations to be verified ((8), (9) and (10)) reduce to
$\mathrm{div}\mathbf{A}_{a}-ik\varphi _{a}=0$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (32)
$\Delta \varphi _{a}+k^{2}\varphi _{a}=-4\pi \rho _{a}$
\ \ \ \ \ \ \ \ \ \ (33)
$\Delta \mathbf{A}_{a}+k^{2}\mathbf{A}_{a}=-\frac{{4\pi }}{c}\mathbf{j}_{a}$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (34)
\noindent where $k={\omega /c}$. Finally, the continuity condition (5) becomes
$\mathrm{div}\mathbf{j}_{a}-i\omega \rho _{a}=0$ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (35)
The first step is to verify (32). The detailed steps are not reported, since
they are identical to those used in the preceding section, with the only
difference that the starting point is the calculation of the flux of $\frac{{
\mathbf{j}_{a}\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}}{{|\mathbf{r}_{0}-
\mathbf{r}|}}$, which vanishes due to the tangency condition on
$\mathbf{j}_{a}$.
Coming to the proof of (33), one defines
$\mathbf{E}_{a}=-\mathrm{grad}\varphi _{a}+ik\mathbf{A}_{a}$ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (36)
\noindent and, using (32), obtains
$\mathrm{div}_{0}\mathbf{E}_{a}=-\mathrm{div}_{0}\int\limits_{D}{\rho
_{a}(ik|\mathbf{r}_{0}-\mathbf{r}|-1)\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}
\frac{{\mathbf{r}_{0}-\mathbf{r}}}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}
dV-k^{2}\varphi _{a}$
Using the fact that an integral with a first order singularity in $|\mathbf{r}
_{0}-\mathbf{r}|$ can be differentiated under the sign and developing some
calculations, the following is obtained:
$\mathrm{div}_{0}\mathbf{E}_{a}=-ik\int\limits_{D}{\rho _{a}\exp (ik|\mathbf{r}
_{0}-\mathbf{r}|)}\frac{1}{{|\mathbf{r}_{0}-\mathbf{r}|^{2}}}dV+\mathrm{div}
_{0}\mathbf{C}$ \ \ \ \ \ \ \ \ \ \ (37)
\noindent having defined
$\mathbf{C}=\int\limits_{D}{\rho _{a}\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}
\frac{{\mathbf{r}_{0}-\mathbf{r}}}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}dV$
At this point the flux of $\mathbf{C}$ through the boundary $S$ of an
arbitrary regular domain $V$ contained in $D$ is evaluated by steps similar
to those reported in the section on electrostatics. The result is
$\int\limits_{V}{\mathrm{div}\mathbf{C}dV_{0}}=\int\limits_{S}{dS_{0}}
\int\limits_{D}{\rho _{a}(P)\frac{{(\mathbf{r}_{0}-\mathbf{r}).\mathbf{n}}}{{
|\mathbf{r}_{0}-\mathbf{r}|^{3}}}\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}
dV=\int\limits_{V}{}+\int\limits_{D-V}{\rho _{a}(P)dV\int\limits_{S}{\frac{{(
\mathbf{r}_{0}-\mathbf{r}).\mathbf{n}}}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}
\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}}dS_{0}$
In the above equation the second integral is regular, and its value is
$\int\limits_{D-V}{\rho _{a}(P)dV\int\limits_{S}{\frac{{(\mathbf{r}_{0}-
\mathbf{r}).n}}{{|\mathbf{r}_{0}-\mathbf{r}|^{3}}}\exp (ik|\mathbf{r}_{0}-
\mathbf{r}|)}}dS_{0}=\int\limits_{V}{dV_{0}\int\limits_{D-V}{\rho _{a}(P)
\frac{{ik}}{{|\mathbf{r}_{0}-\mathbf{r}|^{2}}}\exp (ik|\mathbf{r}_{0}-
\mathbf{r}|)}}dV$
On the contrary, in the first integral the surface is moved up to the
already used sphere of radius $\varepsilon $, and the following is obtained
$\int\limits_{V}{\mathrm{div}\mathbf{C}dV_{0}}=\int\limits_{V}{dV_{0}}
\int\limits_{D-V}{\rho _{a}\frac{{ik\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}}{{|
\mathbf{r}_{0}-\mathbf{r}|^{2}}}}dV+\int\limits_{V-V_{\varepsilon }}{dV_{0}}
\int\limits_{V}{\rho _{a}\frac{{ik\exp (ik|\mathbf{r}_{0}-\mathbf{r}|)}}{{|
\mathbf{r}_{0}-\mathbf{r}|^{2}}}}dV+4\pi \int\limits_{V}{\rho _{a}}dV$
Taking the limit $\varepsilon \rightarrow 0$ and using the fact that $V$ is
arbitrary, the final result is obtained
$\mathrm{div}_{0}\mathbf{C}=ik\int\limits_{D}{\rho _{a}\frac{{\exp (ik|\mathbf{r}
_{0}-\mathbf{r}|)}}{{|\mathbf{r}_{0}-\mathbf{r}|^{2}}}}dV+4\pi \rho _{a}$
which, inserted into (37), provides the desired result (33).
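The kernel identity used twice above, namely $\mathrm{div}_{0}\big[\exp(ikd)\,(\mathbf{r}_{0}-\mathbf{r})/d^{3}\big]=ik\exp(ikd)/d^{2}$ away from the singularity, can be checked with sympy through the radial-field formula $\mathrm{div}\,[h(d)(\mathbf{r}_{0}-\mathbf{r})]=3h+d\,h'(d)$ (an illustrative aside, not in the original text):

```python
import sympy as sp

d, k = sp.symbols('d k', positive=True)

# The kernel of C is h(d)*(r0 - r) with h(d) = exp(ikd)/d^3 and d = |r0 - r|.
# For any radial field h(d)*(r0 - r) one has div_0 = 3*h + d*h'(d).
h = sp.exp(sp.I * k * d) / d**3
div_radial = 3 * h + d * sp.diff(h, d)

assert sp.simplify(div_radial - sp.I * k * sp.exp(sp.I * k * d) / d**2) == 0
print("div_0 of the kernel equals ik*exp(ikd)/d^2 away from the singularity")
```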
The final step is the verification of (34). At first sight, it could
appear sufficient to apply the last obtained result to the three components
of (34). However, there is a problem: to obtain (33) from (31), essential
use has been made of (32), which rests on the fact that a priori $\rho _{a}$
is not arbitrary, but tied to a certain vector $\mathbf{j}_{a}$ by (35).
Therefore the proof is complete if it can be shown that to each component
$j_{a^{\prime }}$ of a vector $\mathbf{j}_{a}$ another vector $\mathbf{j}
_{a}^{\ast }$ can be associated, satisfying the equation
$\mathrm{div}\mathbf{j}_{a}^{\ast }-i\omega j_{a^{\prime }}^{{}}=0$ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
(38)
and which is moreover tangent, like $\mathbf{j}_{a}$, to the boundary of the
domain $D$. This is a consequence of the following property:
\noindent The equation in $\mathbf{j}$
$\mathrm{div}\mathbf{j}=f(x,y,z)$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (39)
has solutions, even if $\mathbf{j}$ is constrained to be tangent to an
assigned surface $S$. In fact it is easy to verify that (39) (but in general
not the constraint) is satisfied by the vector
$\mathbf{j}^{\ast \ast }=(0,0,\int {f(x,y,z)dz})$
and therefore also by
$\mathbf{j}=\mathbf{j}^{\ast \ast }+\mathrm{rot}\mathbf{P}$
where $\mathbf{P}$ is an arbitrary vector. Therefore everything reduces to
choosing $\mathbf{P}$ in such a way that on $S$:
$\mathrm{rot}\mathbf{P}.\mathbf{n}=-\mathbf{j}^{\ast \ast }.\mathbf{n}$
i.e. to prove that the equation in $\mathbf{P}$
$\mathrm{rot}\mathbf{P}.\mathbf{n}=g(x,y,z)$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (40)
has solutions on $S$.
In fact this is possible even by imposing $P_{x}=P_{y}=0$, which reduces
(40) to the form
\begin{equation*}
\frac{{\partial P_{z}}}{{\partial y}}n_{x}-\frac{{\partial P_{z}}}{{\partial
x}}n_{y}=g(x,y,z)
\end{equation*}
a first order linear partial differential equation, which can be solved by
a method that transforms it into an ordinary differential equation (see any
text of Mathematical Analysis, e.g. [5]).
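The construction above can be verified symbolically: $\mathbf{j}^{\ast\ast}=(0,0,\int f\,dz)$ solves (39), and adding $\mathrm{rot}\,\mathbf{P}$ does not change the divergence. A sympy sketch, with sample choices of $f$ and $\mathbf{P}$ (both are our own illustrative choices, not from the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = x * y + sp.sin(z)                        # a sample right-hand side of (39)

# j** = (0, 0, integral of f dz) solves div j = f ...
j_star2 = sp.Matrix([0, 0, sp.integrate(f, z)])
# ... and so does j** + rot P for any P, since div(rot P) = 0
P = sp.Matrix([0, 0, x**2 * y * sp.cos(z)])  # a sample vector P
rotP = sp.Matrix([sp.diff(P[2], y) - sp.diff(P[1], z),
                  sp.diff(P[0], z) - sp.diff(P[2], x),
                  sp.diff(P[1], x) - sp.diff(P[0], y)])
j = j_star2 + rotP

div_j = sp.diff(j[0], x) + sp.diff(j[1], y) + sp.diff(j[2], z)
assert sp.simplify(div_j - f) == 0
print("div(j** + rot P) = f for any P, as claimed")
```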
\subsection{General case}
This is the case in which the density $\rho $ and the current density $
\mathbf{j}$ have an arbitrary time dependence, with the restriction that
they admit a Fourier transform vanishing outside of a finite interval of
the transformed variable. \noindent Under these conditions, $\rho $ and
$\mathbf{j}$ are of the following form
$\rho (r,t)=\frac{1}{\sqrt{2\pi }}\int\limits_{B}{\rho _{a}(r,\omega )\exp
(-i\omega t)d\omega }$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (41)
$\mathbf{j}(r,t)=\frac{1}{\sqrt{2\pi }}\int\limits_{B}{\mathbf{j}
_{a}(r,\omega )\exp (-i\omega t)d\omega }$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (42)
where $B$ is a finite domain. \noindent Inserting (41) and (42) into (14)
and (15) and interchanging bounded integrations, the proposed solutions take
the form
$\mathbf{A}(P_{0},t)=\frac{1}{{\sqrt{2\pi }c}}\int\limits_{B}\exp
(-i\omega t)\,d\omega \int\limits_{D}\frac{{\mathbf{j}_{a}(r,\omega )\exp (i\omega
d/c)}}{d}dV=\frac{1}{\sqrt{2\pi }}\int\limits_{B}\mathbf{A}_{a}(r,\omega
)\exp (-i\omega t)\,d\omega $ \ \ \ (43)
$\varphi (P_{0},t)=\frac{1}{\sqrt{2\pi }}\int\limits_{B}\exp (-i\omega
t)\,d\omega \int\limits_{D}\frac{{\rho _{a}(r,\omega )\exp (i{\omega }d/c)}}{d}dV=
\frac{1}{\sqrt{2\pi }}\int\limits_{B}\varphi _{a}(r,\omega )\exp (-i\omega
t)\,d\omega $ \ \ \ \ (44)
\noindent where $\mathbf{A}_{a}$ and $\varphi _{a}$ are given by (30) and
(31).
Similarly, taking derivatives under the sign of the regular, range-bounded
integrals (41) and (42) and using the inversion properties of the Fourier
transform, the continuity equation (5) provides (35) for $\rho _{a}$
and $\mathbf{j}_{a}$. Using the results of the preceding section, it can be
concluded that $\mathbf{A}_{a}$ and $\varphi _{a}$ satisfy (32), (33) and (34).
Using this fact, and performing derivatives under bounded regular
integrals, one obtains:
$\mathrm{div}\mathbf{A}+\frac{1}{c}\frac{{\partial \varphi }}{{\partial t}}=
\frac{1}{\sqrt{2\pi }}\int\limits_{B}{(\mathrm{div}\mathbf{A}_{a}-}\frac{{
i\omega }}{c}\varphi _{a})\exp (-i\omega t)d\omega =0$
$\Delta \mathbf{A}-\frac{1}{{c^{2}}}\frac{{\partial ^{2}\mathbf{A}}}{{
\partial t^{2}}}=\frac{1}{\sqrt{2\pi }}\int\limits_{B}{(\Delta \mathbf{A}
_{a}+\frac{{\omega ^{2}}}{{c^{2}}}}\mathbf{A}_{a})\exp (-i\omega t)d\omega =-
\frac{{4\pi }}{{\sqrt{2\pi }c}}\int\limits_{B}{\mathbf{j}_{a}}\exp (-i\omega
t)d\omega =-\frac{{4\pi }}{c}\mathbf{j}$
$\Delta \varphi -\frac{1}{{c^{2}}}\frac{{\partial ^{2}\varphi }}{{\partial
t^{2}}}=\frac{1}{\sqrt{2\pi }}\int\limits_{B}{(\Delta \varphi _{a}+\frac{{
\omega ^{2}}}{{c^{2}}}}\varphi _{a})\exp (-i\omega t)d\omega =-\frac{{4\pi }
}{\sqrt{2\pi }}\int\limits_{B}{\rho _{a}}\exp (-i\omega t)d\omega =-4\pi
\rho $
i.e. (8), (9) and (10) have been proved.
The conditions of finitely extended Fourier transform on $\rho $ and $
\mathbf{j}$ have been imposed since they are the simplest way to justify the
last passages of interchanging integrations and performing derivatives under
the integral sign. However, the same steps are justified more generally by
conditions of existence and uniform absolute convergence of multiple
integrals of the form
$\int\limits_{B}{\int\limits_{D}{\frac{{\rho _{a}(r,\omega )}}{d}\omega ^{2}}
}d\omega dV$
$\int\limits_{B}{\int\limits_{D}{\frac{{\mathbf{j}_{a}(r,\omega )}}{d}\omega
^{2}}}d\omega dV$
having considered that the factor $\exp (-i\omega t)$, of modulus 1, does
not affect such absolute convergence.
\section{Conclusions}
A proof has been given that the fields calculated by applying (6) and (7) to
the delayed potentials (14) and (15) indeed provide a solution to the
Maxwell equations (1), (2), (3), (4). The complete details of the proof have
been reported under the assumptions that the sources $\rho $ and $\mathbf{j}$
are regular functions up to the first order derivatives, that they
vanish outside some finite spatial domain, and that their temporal
Fourier transform vanishes outside some finite frequency interval. It
has also been shown that the last two conditions can be relaxed, assuming
only the multiple summability of certain space/frequency functions
constructed from the sources.
\begin{flushleft}
\textbf{AMS Subject Classification: 78A25.}\\[2ex]
\end{flushleft}
\end{document}
\begin{document}
\title{A Posteriori Error Estimator for a Non-Standard Finite Difference Scheme Applied to BVPs on Infinite Intervals}
\begin{abstract}
In this paper, we present a study of an a posteriori estimator for the discretization error of a non-standard finite difference scheme applied to boundary value problems defined on an infinite interval.
In particular, we show how Richardson's extrapolation can be used to improve the numerical solution, using the order of accuracy and the numerical solutions from two nested quasi-uniform grids.
A benchmark problem with known exact solution is examined, and we obtain the following result: if the round-off error is negligible and the grids are sufficiently fine, then Richardson's error estimate gives an upper bound of the global error.
\end{abstract}
\noindent
{\bf Key Words}: Boundary value problems on infinite intervals, global error estimator, quasi-uniform grid, non-standard finite difference, order of accuracy.
\noindent
{\bf MSC 2010}: 65L10, 65L12, 65L70.
\section{Introduction}
The main aim of this paper is to show how Richardson's extrapolation can be used to define an error estimator for a non-standard finite difference scheme applied to boundary value problems (BVPs) defined on an infinite interval.
Without loss of generality, we consider the class of BVPs
\begin{eqnarray}
&& {\displaystyle \frac{d{\bf u}}{dx}} = {\bf f} \left(x, {\bf u}\right)
\ , \quad x \in [0, \infty) \ , \nonumber \\[-1.5ex]
\label{p} \\[-1.5ex]
&& {\bf g} \left( {\bf u}(0), {\bf u} (\infty) \right) = {\bf 0}
\ , \nonumber
\end{eqnarray}
where $ {\bf u}(x) $ is a $d$-dimensional vector with $ {}^{\ell} u
(x) $ for $ \ell =1, \dots , d $ as components, $ {\bf f}:[0,
\infty) \times \hbox{I\kern-.2em\hbox{R}}^d \rightarrow~\hbox{I\kern-.2em\hbox{R}}^d $, and $ {\bf g}:
\hbox{I\kern-.2em\hbox{R}}^d \times \hbox{I\kern-.2em\hbox{R}}^d \rightarrow \hbox{I\kern-.2em\hbox{R}}^d $.
Here, and in the following, we use Lambert's notation for the vector components \cite[pp. 1-5]{Lambert}.
Existence and uniqueness results, as well as results concerning the solution asymptotic behaviour, for classes of problems belonging to (\ref{p}) have been reported in the literature, see for instance Granas et al. \cite{Granas:BVP:1986}, Countryman and Kannan \cite{Countryman:1994:CNB}, and Agarwal et al. \cite{Agarwal:2002:EAB,Agarwal:2002:IIP,Agarwal:2005:EAB}.
Numerical methods for problems belonging to (\ref{p}) can be classified according to the numerical treatment of the boundary conditions imposed at infinity.
The oldest and simplest treatment is to replace the infinity with a suitable finite value, the so-called truncated boundary.
However, over the decades this simple approach has revealed several drawbacks, which suggest avoiding it, especially when we have to face a given problem without any clue about its solution behaviour.
Several other treatments have been proposed in the literature to overcome the shortcomings of the truncated boundary approach.
In this research area, the following approaches are worthy of consideration: the formulation of so-called asymptotic boundary conditions by de Hoog and Weiss \cite{deHoog:1980:ATB}, Lentini and Keller \cite{Lentini:BVP:1980} and Markowich
\cite{Markowich:TAS:1982,Markowich:ABV:1983}; the reformulation of the given problem in a bounded domain, as studied first by de Hoog and Weiss and developed more recently by Kitzhofer et al. \cite{Kitzhofer:2007:ENS}; the free boundary formulation proposed by Fazio \cite{Fazio:1992:BPF}, where the unknown free boundary can be identified with a truncated boundary; the treatment of the original domain via pseudo-spectral collocation methods, see the book by Boyd \cite{Boyd:2001:CFS} or the review by Shen and Wang \cite{Shen:SRA:2009} for more details on this topic; and, finally, a non-standard finite difference scheme on a quasi-uniform grid defined on the original domain by Fazio and Jannelli \cite{Fazio:2014:FDS}.
When solving a mathematical problem by numerical methods one of the main concerns is related to the evaluation of the global error.
For instance, Skeel \cite{Skeel:1986:TWE} reported on thirteen strategies to approximate the numerical error.
Here we are interested in showing how an error estimate can be derived within Richardson's extrapolation theory.
For any component $U$ of the numerical solution, the global error $e$ can be defined by
\begin{equation}\label{eq:GE}
e = u - U \ ,
\end{equation}
where $u$ is the exact analytical solution component.
Usually, we have several different sources of errors: discretization, round-off, iteration and programming errors.
Discretization errors are due to the replacement of a continuous problem with a discrete one; the related error decreases by refining the discretization, that is, by enlarging the value of $N$, the number of grid points in our case.
Round-off errors are due to the utilization of floating-point arithmetic to implement the algorithms available to solve the discrete problem.
This kind of error usually decreases by using higher precision arithmetic, double or, when available, quadruple precision.
Iteration errors are due to stopping an iteration algorithm that is converging but only as the number of iterations goes to infinity.
Of course, we can reduce this kind of error by requiring more restrictive termination criteria for our iterations, the iterations of Newton's method in the present case.
Programming errors are beyond the scope of this work, but they can be eliminated or at least reduced by adopting the so-called structured programming.
When the numerical error is caused prevalently by the discretization error and in the case of smooth enough solutions the discretization error can be decomposed into a sum of powers of the inverse of $N$
\begin{equation}\label{eq:asymE}
u = U_{N} + C_0 \left(\frac{1}{N}\right)^{p_0}+ C_1 \left(\frac{1}{N}\right)^{p_1}+ C_2 \left(\frac{1}{N}\right)^{p_2}+ \cdots \ ,
\end{equation}
where $C_0$, $C_1$, $C_2$, $\dots$ are coefficients that depend on $u$ and its derivatives, but are independent of $N$, and $p_0$, $p_1$, $p_2$, $\dots$ are the true orders of the error.
The value of each $p_k$, for $k=0$, $1$, $2$, $\cdots$, is usually a positive integer with $p_0 < p_1 < p_2 < \cdots$; together they constitute an arithmetic progression with common difference $p_1-p_0$, see Joyce \cite{Joyce:1971:SEP}.
The value of $p_0$ is called the asymptotic order or the order of accuracy of the method or of the numerical solution $U$.
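As a concrete illustration of this expansion (our own sketch, not taken from the paper), consider the classical central difference approximation of $u'(x)$, whose error expansion has $p_0=2$ and $p_1=4$; the order observed from two values of $N$ matches $p_0$:

```python
import numpy as np

# Model discretization: the central difference for u'(x) has an error
# expansion C_0 (1/N)^2 + C_1 (1/N)^4 + ..., i.e. p_0 = 2, p_1 = 4.
x0 = 1.0

def D(N):
    h = 1.0 / N
    return (np.sin(x0 + h) - np.sin(x0 - h)) / (2.0 * h)

e_N, e_2N = np.cos(x0) - D(64), np.cos(x0) - D(128)
p0 = np.log(abs(e_N) / abs(e_2N)) / np.log(2.0)
print(p0)    # ~2: the observed order of accuracy matches p_0
```

Halving the step reduces the error by a factor close to $2^{p_0}=4$, which is exactly what the logarithm of the error ratio measures.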
\section{Numerical scheme}\label{S:scheme}
In order to solve a problem in the class (\ref{p}) on the original domain, we first discuss quasi-uniform grid maps from a finite reference domain and then introduce, on the original domain, a non-standard finite difference scheme that allows us to impose the given boundary conditions exactly.
\subsection{Quasi-uniform grids}\label{SS:quniform}
Let us consider the smooth, strictly monotone quasi-uniform maps $x = x(\xi)$, the so-called grid generating functions,
see Boyd \cite[pp. 325-326]{Boyd:2001:CFS} or Canuto et al. \cite[p. 96]{Canuto:2006:SMF},
\begin{equation}\label{eq:qu1}
x = -c \cdot \ln (1-\xi) \ ,
\end{equation}
and
\begin{equation}\label{eq:qu2}
x = c \frac{\xi}{1-\xi} \ ,
\end{equation}
where $ \xi \in \left[0, 1\right] $, $ x \in \left[0, \infty\right] $, and $ c > 0 $ is a control parameter.
Thus, a family of uniform grids $\xi_n = n/N$ defined on the interval $[0, 1]$ generates a one-parameter family of quasi-uniform grids $x_n = x (\xi_n)$ on the interval $[0, \infty]$.
The two maps (\ref{eq:qu1}) and (\ref{eq:qu2}) are referred to as the logarithmic and the algebraic map, respectively.
To the best of the authors' knowledge, van de Vooren and Dijkstra \cite{vandeVooren:1970:NSS} were the first to use this kind of maps.
We notice that more than half of the grid intervals lie in the subdomain of length approximately equal to $c$, and that
$x_{N-1} = c \ln N$ for (\ref{eq:qu1}),
while $ x_{N-1} \approx c N $ for (\ref{eq:qu2}).
For both maps, the equivalent mesh in $x$ is nonuniform, with the
most rapid variation occurring where $c \ll x$.
The logarithmic map (\ref{eq:qu1}) gives slightly better resolution near $x = 0$ than the
algebraic map (\ref{eq:qu2}), while the algebraic map gives much better resolution than the
logarithmic map as $x \rightarrow \infty$.
In fact, it is easily verified that
\[
-c \cdot \ln (1-\xi) < c \frac{\xi}{1-\xi} \ ,
\]
for all $\xi \in (0, 1)$, with equality only at $\xi = 0$, see figure \ref{fig:m1N20} below.
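These properties are easy to verify numerically; the following sketch (with the illustrative values $c=10$ and $N=20$) checks the values of $x_{N-1}$ for both maps and the inequality above:

```python
import numpy as np

c, N = 10.0, 20
xi = np.arange(N) / N                 # xi_0, ..., xi_{N-1}; xi_N = 1 maps to infinity

x_log = -c * np.log(1 - xi)           # logarithmic map (eq. qu1)
x_alg = c * xi / (1 - xi)             # algebraic map (eq. qu2)

print(x_log[-1], c * np.log(N))       # x_{N-1} = c ln N for the logarithmic map
print(x_alg[-1], c * (N - 1))         # x_{N-1} = c (N-1), i.e. ~ c N, for the algebraic map
print(np.all(x_log[1:] < x_alg[1:]))  # -c ln(1-xi) < c xi/(1-xi) for 0 < xi < 1
```

The last node $x_N=\infty$ is excluded on purpose; only the finite nodes are ever evaluated.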
The problem under consideration can be discretized by introducing a uniform grid $ \xi_n $ of $N+1$ nodes in $ \left[0, 1\right] $ with $\xi_0 = 0$ and $ \xi_{n+1} = \xi_n + h $ with $ h = 1/N $, so that $ x_n $ is a quasi-uniform grid in $ \left[0, \infty\right] $.
The last interval in (\ref{eq:qu1}) and (\ref{eq:qu2}),
namely $ \left[x_{N-1}, x_N\right] $, is infinite, but the point $ x_{N-1/2} $ is finite, because the non-integer nodes are defined by
\[
x_{n+\alpha} = x\left(\xi=\frac{n+\alpha}{N}\right) \ ,
\]
with $ n \in \{0, 1, \dots, N-1\} $ and $ 0 < \alpha < 1 $.
These maps allow us to describe the infinite domain by a finite number of intervals.
The last node of such a grid is placed at infinity, so that the right boundary
conditions are taken into account correctly.
Figure \ref{fig:m1N20} shows the two quasi-uniform grids $x=x_n$, $n = 0, 1, \dots , N$ defined by (\ref{eq:qu1}) and by (\ref{eq:qu2}) with $c=10$ and $N$ equal to, from top to bottom, 10, 20 and 40, respectively.
\begin{figure}
\caption{\it Quasi-uniform grids: top frame for (\ref{eq:qu1}), bottom frame for (\ref{eq:qu2}).}
\label{fig:m1N20}
\end{figure}
In order to derive the finite difference formulae, for the sake of simplicity, we consider a generic scalar variable $u(x)$.
We can approximate the values of this scalar variable at mid-points of the grid by
\begin{equation}
u_{n+1/2} \approx \frac{x_{n+3/4}-x_{n+1/2}}{x_{n+3/4}-x_{n+1/4}} u_n + \frac{x_{n+1/2}-x_{n+1/4}}{x_{n+3/4}-x_{n+1/4}} u_{n+1} \ .
\label{eq:u:mod}
\end{equation}
As far as the first derivative is concerned we can apply the following approximation
\begin{equation}
\left. \frac{du}{dx}\right|_{n+1/2} \approx \frac{u_{n+1}-u_n}{2\left(x_{n+3/4} - x_{n+1/4}\right)} \ .
\label{eq:du}
\end{equation}
These formulae use the value $ u_N = u(\infty) $, but not $ x_N = \infty $.
For a system of differential equations, (\ref{eq:u:mod}) and (\ref{eq:du}) can be applied component-wise.
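A quick numerical sanity check of the two formulae (our own illustrative sketch, using the algebraic map with $c=1$ and the test function $u(x)=e^{-x}$) confirms that the errors at a fixed interior half-node decrease under grid refinement, consistently with second-order accuracy:

```python
import numpy as np

def x_map(xi, c=1.0):
    """Algebraic map (eq. qu2); c = 1 is an illustrative choice."""
    return c * xi / (1.0 - xi)

def half_node_errors(N):
    """Errors of (eq:u:mod) and (eq:du) at n = N//4 for u(x) = exp(-x)."""
    n = N // 4
    xq = lambda alpha: x_map((n + alpha) / N)
    x14, x12, x34 = xq(0.25), xq(0.5), xq(0.75)
    un, un1 = np.exp(-xq(0.0)), np.exp(-xq(1.0))
    u_mid = ((x34 - x12) * un + (x12 - x14) * un1) / (x34 - x14)  # (eq:u:mod)
    du_mid = (un1 - un) / (2.0 * (x34 - x14))                     # (eq:du)
    return abs(u_mid - np.exp(-x12)), abs(du_mid + np.exp(-x12))

eu40, edu40 = half_node_errors(40)
eu160, edu160 = half_node_errors(160)
print(eu160 < eu40, edu160 < edu40)   # both errors shrink as the grid is refined
```

Note that the quarter nodes $x_{n+1/4}$, $x_{n+1/2}$, $x_{n+3/4}$ are all finite, so the formulae never touch $x_N=\infty$.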
\subsection{A non-standard finite difference scheme}\label{SS:NSFDS}
A non-standard finite difference scheme on a quasi-uniform grid for the class of BVPs
(\ref{p}) can be defined by using the approximations given by (\ref{eq:u:mod}) and (\ref{eq:du}) above.
A finite difference scheme for
(\ref{p}) can be written as follows:
\begin{eqnarray}
& {\bf U}_{n+1} - {\bf U}_{n} - a_{n+1/2} {\bf f} \left( x_{n+1/2}, b_{n+1/2}{\bf U}_{n+1} + c_{n+1/2}{\bf U}_{n} \right) = {\bf 0}
\ , \nonumber\\
& \mbox{for} \quad n=0, 1, \dots , N-1
\label{boxs} \\
& {\bf g} \left( {\bf U}_0,{\bf U}_N \right) = {\bf 0} \ , \nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{eq:abc}
a_{n+1/2} &=& 2\left(x_{n+3/4} - x_{n+1/4}\right) \ , \nonumber \\
b_{n+1/2} &=& \frac{x_{n+1/2}-x_{n+1/4}}{x_{n+3/4}-x_{n+1/4}} \ , \\
c_{n+1/2} &=& \frac{x_{n+3/4}-x_{n+1/2}}{x_{n+3/4}-x_{n+1/4}} \nonumber \ ,
\end{eqnarray}
for $n=0, 1, \dots , N-1$.
The finite difference formulation (\ref{boxs}) has order of accuracy $O(N^{-2})$. It is evident that (\ref{boxs}) is a nonlinear system of $ d \; (N+1)$ equations in the $ d \; (N+1)$ unknowns $ {\bf U} = ({\bf U}_0,{\bf U}_1, \dots , {\bf U}_N)^T $.
For the solution of (\ref{boxs}) we can apply the classical Newton's method along with the simple termination criterion
\begin{equation}\label{eq:Tcriterion}
{\displaystyle \frac{1}{d(N+1)} \sum_{\ell =1}^{d} \sum_{n=0}^{N}
|\Delta\, {}^\ell U_{n}| \leq {\rm TOL}} \ ,
\end{equation}
where $ \Delta\, {}^\ell U_{n} $, for $ n = 0,1, \dots, N $ and $ \ell = 1, 2, \dots , d $, is the difference between two successive iterate components and $ {\rm TOL} $ is a fixed tolerance.
\section{Richardson's extrapolation and error estimate}\label{S:extra}
The utilization of a quasi-uniform grid allows us to improve our numerical results.
The algorithm is based on Richardson's extrapolation, introduced by Richardson in \cite{Richardson:1910:DAL,Richardson:1927:DAL}, and it is the same for many finite difference methods: for numerical differentiation or integration, solving systems of ordinary or partial differential equations, see, for instance, \cite{Sidi:PEM:2003}.
To apply Richardson's extrapolation, we carry out several calculations on embedded uniform or quasi-uniform grids with total numbers of nodes $N_g$ for $g = 0$, $1$, $\dots$, $G$: e.g., for the numerical results reported in the next section we have used $5$, $10$, $20$, $40$, $80$, $160$, $320$, $640$, $1280$, $2560$, and $5120$ grid-points.
We can identify these grids with the index $g=0$, the coarsest one, $1$, $2$, and so on towards the finest grid denoted by $g = G$.
Between two adjacent grids, all nodes of the coarser grid coincide with the even nodes of the finer grid, due to the uniformity.
To find a more accurate approximation we can apply $k$ Richardson's extrapolations on the used grids
\begin{equation}\label{eq:Rextra}
U_{g+1,k+1} = U_{g+1,k} + \frac{U_{g+1,k}-U_{g,k}}{2^{p_k}-1} \ ,
\end{equation}
where $g \in \{0, 1, 2 , \dots , G-1\}$, $k \in \{0, 1, 2, \dots , G-1\}$, $2 = N_{g+1}/N_{g}$ appearing in the denominator is the grid refinement ratio, and $p_k$ is the true order of the discretization error.
We notice that to obtain each value of $U_{g+1,k+1}$ we need to have computed two solutions $U$ on two embedded grids, namely $g+1$ and $g$, at the extrapolation level $k$.
For any $g$, the level $k=0$ represents the numerical solution of $U$ without any extrapolation, which is obtained as described in subsection \ref{SS:NSFDS}.
In this way, Richardson extrapolation uses two solutions on embedded refined grids to define a more accurate solution, which is reliable only when the grids are sufficiently fine.
The case $k=1$ is the classical single Richardson's extrapolation, which is usually used to estimate the discretization error or to improve the solution accuracy.
If we have computed the numerical solution on $G+1$ nested grids then we can apply equation (\ref{eq:Rextra}) $G$ times performing $G$ Richardson's extrapolations.
The theoretical orders $p_k$ of accuracy of the numerical solution $U$ with $k$ extrapolations verify the relation
\begin{equation}\label{eq:pk}
p_k = p_0 + k (p_1-p_0) \ ,
\end{equation}
which is valid for $k \in \{0, 1, 2, \dots , G-1\}$.
In any case, the values of $p_k$ can be obtained a priori by using appropriate Taylor series or a posteriori by
\begin{equation}\label{eq:pk:calc}
p_k \approx {\displaystyle \frac{\log(|U_{g,k}-u|)-\log(|U_{g+1,k}-u|)}{\log(2)}} \ ,
\end{equation}
where $u$ is again the exact solution (or, if the exact solution is unknown, a reference solution computed with a suitably large value of $N$) evaluated at the same grid-points as the numerical solution.
To show how Richardson's extrapolation can be also used to get an error estimate for the computed numerical solution we use two numerical solutions $U_{N}$ and $U_{2N}$ computed by doubling the number of grid-points.
Taking into account equation (\ref{eq:Rextra}) we can conclude that the error estimate by Richardson's extrapolation is given by
\begin{equation}\label{eq:est1}
E = \frac{U_{2N}-U_{N}}{2^{p_0}-1} \ ,
\end{equation}
where $p_0$ is the true order of the discretization error.
Hence, $E$ is an estimate of the discretization error found without knowledge of the exact solution.
We notice that $E$ is the error estimate for the more accurate numerical solution $U_{2N}$, but only at the grid points of $U_N$.
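To make the mechanics of the order formula (\ref{eq:pk:calc}), the estimate (\ref{eq:est1}) and the extrapolation (\ref{eq:Rextra}) concrete, the following sketch applies them to an idealized numerical solution whose error contains only the leading term $C_0 (1/N)^{p_0}$ with $p_0=2$; the values of $u$ and $C_0$ are arbitrary illustrative choices. In this model, $E$ coincides with the error of $U_{2N}$ and one extrapolation removes the error completely:

```python
import numpy as np

u_star = 1.2345    # stand-in exact value at some grid point (illustrative)
C0 = 0.7           # leading error coefficient; assume p0 = 2 and no higher terms

def U(N):          # idealized numerical solution: u* + C0 (1/N)^2
    return u_star + C0 / N**2

N = 40
U_N, U_2N = U(N), U(2 * N)

# observed order (eq. pk:calc), using the known exact value
p0 = (np.log(abs(U_N - u_star)) - np.log(abs(U_2N - u_star))) / np.log(2)

# error estimate (eq. est1) and one Richardson extrapolation (eq. Rextra)
E = (U_2N - U_N) / (2**p0 - 1)
U_extra = U_2N + E

print(p0)                  # ~2: the true order is recovered
print(E, u_star - U_2N)    # E coincides with the error of U_2N in this model
print(U_extra - u_star)    # ~0: extrapolation removes the leading error term
```

With higher-order terms present, $E$ matches the error of $U_{2N}$ only up to $O(N^{-p_1})$, which is why the estimate is reliable only on sufficiently fine grids.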
\section{Numerical results: a BVP in colloids theory}
In this section, we consider a benchmark problem with known exact solution in order to test our error estimator.
It should be mentioned that all numerical results reported in this paper were performed on an ASUS personal computer with i7 quad-core Intel processor and 16 GB of RAM memory running Windows 8.1 operating system.
The non-standard finite difference scheme described above has been implemented in FORTRAN.
The numerical results reported in this section were computed by setting
\begin{equation}\label{eq:TOL}
{\rm TOL} = 10^{-12} \ .
\end{equation}
The benchmark problem, see Alexander and Johnson \cite{Alexander:1949:CS}, arises within the theory of colloids and is given by
\begin{align}\label{colloid:model}
& {\displaystyle \frac{d^2 u}{dx^2}} - 2 \sinh(u) = 0 \qquad \ x \in [0, \infty] \ , \nonumber \\[-1ex]
& \\[-1ex]
& u(0) = u_0 \ , \qquad u(\infty) = 0 \ , \nonumber
\end{align}
where $u_0 > 0$.
The exact solution of the BVP (\ref{colloid:model})
\begin{equation}\label{eq:colloid:model:exact}
u(x) = 2 \; \ln\left(\frac{(e^{u_0/2}+1)\; e^{\sqrt{2}\; x}+(e^{u_0/2}-1)}{(e^{u_0/2}+1)\; e^{\sqrt{2}\; x}-(e^{u_0/2}-1)}\right) \ ,
\end{equation}
has been found by Countryman and Kannan \cite{Countryman:1994:CNB,Countryman:1994:NBV}, and the missing initial condition is given by
\begin{equation}\label{eq:colloid:model:exact:dudx0}
\frac{du}{dx}(0) = -2\; \sqrt{\cosh(u_0) -1} \ .
\end{equation}
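The closed-form solution and the missing initial condition can be cross-checked numerically (an illustrative sketch; note that $\cosh(u_0)-1 = 2\sinh^{2}(u_0/2)$, so the right-hand side of (\ref{eq:colloid:model:exact:dudx0}) equals $-2\sqrt{2}\,\sinh(u_0/2)$):

```python
import numpy as np

def u_exact(x, u0):
    """Closed-form solution (eq. colloid exact) of u'' = 2 sinh(u)."""
    A, B = np.exp(u0 / 2) + 1.0, np.exp(u0 / 2) - 1.0
    E = np.exp(np.sqrt(2.0) * x)
    return 2.0 * np.log((A * E + B) / (A * E - B))

u0 = 1.0
print(u_exact(0.0, u0))                 # u(0) = u0
print(u_exact(30.0, u0))                # u decays to 0 as x -> infinity

# missing initial condition (eq. dudx0) via a central difference
h = 1e-5
slope = (u_exact(h, u0) - u_exact(-h, u0)) / (2 * h)
print(slope, -2.0 * np.sqrt(np.cosh(u0) - 1.0))   # the two values agree
```

The central difference is legitimate here because the closed-form expression is also defined for small negative $x$.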
We rewrite the governing differential equation as a first order system and indicate the exact solution with ${\bf u} = ({}^1u, {}^2u)^T$ and the numerical solution with ${\bf U} = ({}^1U, {}^2U)^T$.
In order to fix a specific problem, as a first test case, we consider $u_0=1$.
As mentioned before, we used $5$, $10$, $20$, $40$, $80$, $160$, $320$, $640$, $1280$, $2560$, and $5120$ grid-points, so that $G=10$, and we adopted a continuation approach for the choice of the first iterate.
This means that the accepted solution for $N=5$ is used as first iterate for $N=10$, where the new grid values are approximated by linear interpolations, and so on.
The first iterate for the grid with $N=5$, where the field variable was taken constant and equal to one and its derivative was taken also constant and equal to minus one, is shown in the top frame of figure \ref{fig:it}.
\begin{figure}
\caption{\it Sample iterates for problem (\ref{colloid:model}).}
\label{fig:it}
\end{figure}
The bottom frame of the same figure shows the accepted numerical solution.
Our relaxation algorithm takes seven iterations to verify the termination criterion (\ref{eq:Tcriterion}) with TOL given by (\ref{eq:TOL}).
Once the continuation approach has been initialized, the iteration routine needs 3 or 4 iterations to get a numerical solution that verifies the stopping criterion.
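The computation just described can be reproduced in a few dozen lines. The sketch below is our own illustrative re-implementation (not the paper's FORTRAN code): it assembles the scheme (\ref{boxs}) for (\ref{colloid:model}) with $u_0=1$ on the logarithmic grid (\ref{eq:qu1}), with the illustrative choices $c=10$ and $N=40$, and solves the resulting nonlinear system by a damped Newton iteration with a finite-difference Jacobian and the termination criterion (\ref{eq:Tcriterion}):

```python
import numpy as np

def solve_colloid(u0=1.0, N=40, c=10.0, tol=1e-12):
    """Non-standard scheme (boxs) for u' = v, v' = 2 sinh(u),
    u(0) = u0, u(inf) = 0, on the logarithmic quasi-uniform grid."""
    x = lambda xi: -c * np.log1p(-xi)              # grid generating function (eq. qu1)
    n = np.arange(N)
    x14, x12, x34 = x((n + .25) / N), x((n + .5) / N), x((n + .75) / N)
    a = 2.0 * (x34 - x14)                          # coefficients (eq. abc)
    b = (x12 - x14) / (x34 - x14)
    cc = (x34 - x12) / (x34 - x14)

    def residual(z):
        u, v = z[:N + 1], z[N + 1:]
        us = b * u[1:] + cc * u[:-1]               # midpoint values (eq. u:mod)
        vs = b * v[1:] + cc * v[:-1]
        r1 = u[1:] - u[:-1] - a * vs               # scheme for u' = v
        r2 = v[1:] - v[:-1] - a * 2.0 * np.sinh(us)  # scheme for v' = 2 sinh u
        return np.concatenate(([u[0] - u0], r1, r2, [u[N]]))  # + boundary conditions

    # first iterate: field constant = u0, derivative constant = -1
    z = np.concatenate((np.full(N + 1, u0), np.full(N + 1, -1.0)))
    for _ in range(30):
        r = residual(z)
        J = np.empty((z.size, z.size))             # finite-difference Jacobian
        for j in range(z.size):
            zp = z.copy()
            zp[j] += 1e-7
            J[:, j] = (residual(zp) - r) / 1e-7
        delta = np.linalg.solve(J, -r)
        step = 1.0                                 # simple damping safeguard
        while step > 1e-4 and np.linalg.norm(residual(z + step * delta)) > np.linalg.norm(r):
            step *= 0.5
        z += step * delta
        if np.mean(np.abs(delta)) <= tol:          # criterion (eq. Tcriterion)
            break
    return z[:N + 1], z[N + 1:]

U, V = solve_colloid()
# V[0] approximates the missing initial condition du/dx(0) = -2 sqrt(cosh(1)-1)
print(V[0])
```

The equation at $n=N-1$ is well defined because $a_{N-1/2}$, $b_{N-1/2}$ and $c_{N-1/2}$ involve only the finite quarter nodes, while the condition $U_N = u(\infty) = 0$ enters through the boundary equations.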
For the sake of completeness in figure \ref{fig:sol_1} we display the numerical solution for $N=40$ along with the exact solution.
\begin{figure}
\caption{\it Final iterate and exact solution for problem (\ref{colloid:model}).}
\label{fig:sol_1}
\end{figure}
Figure \ref{fig:Eforp} shows in a log by log scale the computed errors.
\begin{figure}
\caption{\it Graphical derivation of the values of $p_0$, $p_1$ and $p_2$ for the non-standard finite difference scheme applied to (\ref{colloid:model}).}
\label{fig:Eforp}
\end{figure}
We can compute the orders of accuracy $p_0$, $p_1$ and $p_2$ according to formula (\ref{eq:pk:calc}).
As is easily seen from figure \ref{fig:Eforp}, we obtained $p_0 \approx 2$, $p_1 \approx 4$ and $p_2 \approx 6$ for both the field variable and its first derivative.
As far as the a posteriori error estimator is concerned, in figure \ref{fig:Errors_1} we report the computations related to two sample cases: namely, the estimates obtained by using $N = 20$, $40$ and $N = 40$, $80$.
\begin{figure}
\caption{\it Global and a posteriori error estimates for the field variable and its first derivative for (\ref{colloid:model}).}
\label{fig:Errors_1}
\end{figure}
We notice that the global error, for both the solution components, is of order $10^{-3}$ and it decreases as we refine the grid.
It is easily seen that the estimator defined by equation (\ref{eq:est1}) provides upper bounds for the global error.
A more challenging test case is given by setting $u_0=7$.
In figure \ref{fig:sol_7} we display the numerical solution for $N=5120$ along with the exact solution.
\begin{figure}
\caption{\it Final iterate and exact solution for problem (\ref{colloid:model}).}
\label{fig:sol_7}
\end{figure}
\begin{table}[!hbt]
\renewcommand\arraystretch{1.4}
\centering
\begin{tabular}{rccc}
\hline \\[-1.5ex]
$N_g$ & ${}^2U_{g,0}$ & ${}^2U_{g,1}$ & ${}^2U_{g,2}$ \\[1.5ex]
\hline
160 & $-43.835177171609345$ & & \\
320 & $-45.864298511341850$ & $-46.540672291252690$ & \\
640 & $-46.537797149336093$ & $-46.762296695334179$ & $-46.777071655606278$ \\
1280 & $-46.725033491934731$ & $-46.787445606134277$ & $-46.789122200187620$ \\
2560 & $-46.773360098843838$ & $-46.789468967813541$ & $-46.789603858592159$ \\
5120 & $-46.785544794016836$ & $-46.789606359074504$ & $-46.789615518491907$ \\
\hline
\end{tabular}
\caption{\it Richardson's extrapolation for $\frac{du}{dx}(0)={}^2U_0$.}
\label{tab:extra}
\end{table}
In table \ref{tab:extra} we list the computed as well as the extrapolated values obtained for the missing initial condition.
For the sake of brevity, in this table we do not report the less accurate values obtained with the coarser grids.
These results can be compared with the exact value, $\frac{du}{dx}(0) \approx -46.789615734913319$, obtained by equation (\ref{eq:colloid:model:exact:dudx0}).
Figure \ref{fig:Errors:c7} shows in a log by log scale the computed errors.
\begin{figure}
\caption{\it Problem (\ref{colloid:model}): computed errors.}
\label{fig:Errors:c7}
\end{figure}
From figure \ref{fig:Errors:c7} it is clear that the computed orders, using equation (\ref{eq:pk:calc}), are slightly different from the theoretical ones, namely: $p_0 \approx 1.99$, $p_1 \approx 3.96$ and $p_2 \approx 5.77$.
\begin{figure}
\caption{\it Zoom in the domain related to the initial transient: global errors and a posteriori error estimates for the field variable and its first derivative for (\ref{colloid:model}).}
\label{fig:Errors_7}
\end{figure}
As far as the a posteriori error estimator is concerned, in figure \ref{fig:Errors_7} we report the computations related to two sample cases: namely, the estimates obtained by using $N = 1280$, $2560$ and $N = 2560$, $5120$.
Once again the global error, for both the solution components, decreases as we refine the grid and the estimator defined by equation (\ref{eq:est1}) provides upper bounds for the global error.
\section{Conclusions}
In this paper, we have defined an a posteriori estimator for the global error of a non-standard finite difference scheme applied to boundary value problems defined on an infinite interval.
A test problem with known exact solution was examined, and we tested our error estimator on two sample cases: a simpler one with smooth solution components and a more challenging one presenting an initial fast transient in one of the solution components.
For this second test case, we showed how Richardson extrapolation can be used to improve the numerical solution using the order of accuracy and the numerical solutions from two nested quasi-uniform grids.
Moreover, the reported numerical results clearly show that our non-standard finite difference scheme, implemented along with the error estimator defined in this work, can be used to solve challenging problems arising in the applied sciences.
In our previous paper \cite{Fazio:2014:FDS} we derived instead of equation (\ref{eq:u:mod}) the finite difference formula
\begin{equation}
u_{n+1/2} \approx \frac{x_{n+1}-x_{n+1/2}}{x_{n+1}-x_n} u_n + \frac{x_{n+1/2}-x_{n}}{x_{n+1}-x_n} u_{n+1} \ .
\label{eq:u}
\end{equation}
However, by setting $n = N-1$ in formula (\ref{eq:u}) and replacing $x_N = \infty$, it reduces to $u_{N-1/2} = u_{N-1}$, which does not involve the boundary value $u_N$, and therefore the boundary condition cannot be used.
In that work, this was the reason that forced us to modify this formula at $ n = N-1$, see \cite{Fazio:2014:FDS} for details.
The mentioned drawback is completely overcome by the new formula (\ref{eq:u:mod}).
\noindent {\bf Acknowledgement.} {The research of this work was
supported, in part, by the University of Messina and by the GNCS of INDAM.}
\end{document} | math | 24,421 |
\begin{document}
\title{A Simple Mechanism \\ for a Budget-Constrained Buyer}
\author{Yu Cheng\inst{1}\and
Nick Gravin\inst{2} \and
Kamesh Munagala\inst{1} \and
Kangning Wang\inst{1}}
\institute{Duke University;
\email{\{yucheng,kamesh,knwang\}@cs.duke.edu}\and
Shanghai University of Finance and Economics;
\email{[email protected]}}
\maketitle
\begin{abstract}
We study a classic Bayesian mechanism design setting: the monopoly problem for an additive buyer in the presence
of budgets. In this setting, a monopolist seller with $m$ heterogeneous items faces a single buyer and seeks
to maximize her revenue. The buyer has a budget and additive valuations drawn independently for each item from (non-identical) distributions.
We show that when the buyer's budget is publicly known, the better of selling each item separately and selling the grand bundle extracts a constant fraction of the optimal revenue.
When the budget is private, we consider a standard Bayesian setting where the buyer's budget $b$ is drawn from a known distribution $B$. We show that if $b$ is independent of the valuations and the distribution $B$
satisfies the monotone hazard rate condition, then selling items separately or in a grand bundle
is still approximately optimal. We give a complementary example showing that no simple mechanism can achieve a constant approximation if the budget $b$ can be interdependent with the valuations.
\end{abstract}
\section{Introduction}
Revenue maximization is one of the fundamental problems in auction theory.
The well-celebrated result of Myerson~\cite{Myerson81} characterized the revenue-maximizing mechanism when there is only one item for sale.
Specifically, in the single buyer case, the optimal solution is to post a take-it-or-leave-it price.
Since Myerson's work, the optimal mechanism design problem has been studied extensively in computer science literature and much progress has been made~\cite{CaiDW12a,CaiDW12b,CaiDW13a,CaiDW13b,AlaeiFHHM12,Daskalakis15}.
The problem of finding the optimal auction turned out to be much more complex than the single-item case.
Unlike Myerson's single-item auction, the optimum can use randomized allocations and price bundles of items already in the case of two items and a single buyer. It is also known that the gap between the revenue of the optimal randomized and the optimal deterministic mechanism can be arbitrarily large~\cite{BriestCKW10,HartN13}, that the optimal mechanism may require a menu with infinitely many options~\cite{manelli2007multidimensional,DaskalakisDT13}, and that the revenue of the optimal auction may decrease when the buyer's valuation distributions move upwards (in the stochastic dominance sense).
In light of these negative results for optimal auction design, many recent papers focused on the design of \emph{simple} mechanisms that are \emph{approximately} optimal.
One such notable line of work initiated by Hart and Nisan~\cite{HartN17}
concerns a basic and natural setting of the monopoly problem for a
buyer with item values drawn independently from given distributions $D_1,\ldots,D_m$ and
whose valuation for sets of items is additive\footnote{A buyer has additive valuations if his value for a set of items is equal to the sum of his values for the items in the set.} (linear).
A remarkable result by Babaioff~et~al.~\cite{BabaioffILW14} showed
that the better mechanism of either selling items separately, or selling the grand bundle extracts at least $(1/6)$-fraction of the optimal revenue.
It was also observed~\cite{HartN13,BabaioffILW14,RubinsteinW15} that the independence assumption on the items is essentially necessary: without it, no simple (indeed, no deterministic) mechanism can be approximately optimal.
Auction design with budget constraints is an even harder problem.
Because the buyer's utility is no longer quasi-linear, many standard concepts do not carry over\footnote{E.g., the classic VCG mechanism may not be implementable and social efficiency may not be achievable in the budgeted setting~\cite{Singer10}.}.
For example,
even for one buyer and one item, the optimal mechanism may require randomization when the budget is public~\cite{ChawlaMM11}, and may need an exponential-size menu when the budget is private~\cite{DevanurW17}.
Despite many efforts~\cite{LaffontR96,CheG00,GoldbergHW01,BorgsCIMS05,Abrams06,ChenGL11,DobzinskiLN12,BhattacharyaGGM10,BhattacharyaCMX10,ChakrabartyG10,ChawlaMM11,BeiCGL12,GoelML15,DevanurHH13,DaskalakisDW15,DevanurW17,Singer10}, the theory of optimal auction design with budgets is still far behind the theory without budgets.
In this paper, we investigate the effectiveness of simple mechanisms in the presence of budgets.
Our work is motivated by the following questions:
\begin{quote}
\em How powerful are simple mechanisms in the presence of budgets? In particular, is there a simple mechanism that is approximately optimal for a budget-constrained buyer with independent valuations?
\end{quote}
To this end, we consider one of the most basic and natural
settings: the extensively studied monopoly problem for an additive buyer.
In this setting, a monopolistic seller sells $m$ items to a single buyer.
The buyer has additive valuations drawn independently for each item from arbitrary (non-identical) distributions.
We study two different budget settings: the \emph{public budget} case where the buyer has a fixed budget known to the seller, and the \emph{private budget} case where the buyer's budget is drawn from a distribution.
The seller wishes to maximize her revenue by designing an auction subject to individual rationality, incentive compatibility, and budget constraints.
We consider the Bayesian setting where the buyer knows his budget and his values for each item, but the seller only knows the prior distributions.
\subsection{Our Results and Techniques}
Our first result is that simple mechanisms remain approximately optimal when the buyer has a public budget.
\begin{theorem}
\label{thm:main-informal}
For an additive buyer with a known public budget and independent valuations, the better of selling each item separately and selling the grand bundle extracts a constant fraction of the optimal revenue.
\end{theorem}
Theorem~\ref{thm:main-informal} is among the few positive results in budget-constrained settings that hold for arbitrary distributions.
Before our work, it was not clear whether any mechanism extracting a constant fraction of the optimal revenue could be computed in polynomial time.
In Sections~\ref{sec:proof1}~and~\ref{sec:overview},
we present two different approaches to prove Theorem~\ref{thm:main-informal}.
Both approaches truncate the valuation distribution $V$ according to the budget $b$ (in different ways) and then relate the revenues of the optimal/simple mechanisms on the truncated distribution to the revenues on the original valuations.
The first approach uses the main result of~\cite{BabaioffILW14} in a black-box way, and the second approach adapts the duality-based framework developed in~\cite{CaiDW16}.
It is worth pointing out that many of our structural lemmas hold for correlated valuations as well.
Using these lemmas, we can generalize Theorem~\ref{thm:main-informal} with minimum effort to allow the buyer to have weakly correlated valuations.
We call a distribution $\widehat V$ \emph{weakly correlated} if it is the result of conditioning an independent distribution $V$ on the sum of $v \sim V$ being at most ${c}$: $\widehat V = V_{|(\sum v_i \le c)}$ (See Definition~\ref{def:weakly} for the formal definition).
\begin{corollary}
\label{cor:weakly}
Let $\widehat V$ be a weakly correlated distribution.
For an additive buyer with a public budget and valuations drawn from $\widehat V$, the better of selling separately and selling the grand bundle extracts a constant fraction of the optimal revenue.
\end{corollary}
In Section~\ref{sec:private}, we examine the private budget setting.
The budget $b$ is no longer fixed but is drawn from a distribution $B$.
The seller only knows the prior distribution $B$ but not the value of $b$.
We first show that if the valuations can be correlated with the budget, the problem is at least as hard as budget-free mechanism design with correlated valuations, where simple mechanisms are known to be ineffective.
In light of this negative result, we focus on the setting where the budget distribution $B$ is independent of the valuations $V$.
In this setting, we show that simple mechanisms are approximately optimal when the budget distribution satisfies the monotone hazard rate (MHR) condition.
\begin{theorem}
\label{thm:private-mhr}
When the budget distribution $B$ is MHR, the better mechanism of pricing items separately and selling a grand bundle achieves a constant fraction of the optimal revenue.
\end{theorem}
We will show that it is sufficient to pretend the buyer has a public budget $b^* = \expect{b \sim B}{b}$.
The proof of Theorem~\ref{thm:private-mhr} uses the MHR condition, as well as the fact that for a public budget $b$, the (budget-constrained) optimal revenue is nondecreasing in $b$, but optimal revenue divided by $b$ is nonincreasing in $b$.
\subsection{Related Work}
The most closely related to ours are the following two lines of work.
\paragraph{\bf Simple Mechanisms.}
In a line of work initiated by Hart and Nisan~\cite{HartN17,LiY13,BabaioffILW14}, it was first shown in~\cite{BabaioffILW14} that for an additive buyer with independent valuations, either selling separately or selling the grand bundle extracts a constant fraction of the optimal revenue.
This was later extended to multiple buyers~\cite{Yao15}, as well as buyers with more general valuations (e.g., sub-additive~\cite{RubinsteinW15}, valuations with a common-value component~\cite{BateniDHS15}, and valuations with complements~\cite{EdenFFTW17}).
Others have studied the trade-off between the complexity and approximation ratio of an auction, along with the design of small-menu mechanisms in various settings~\cite{HartN13,TangW17,DughmiHN14,ChengCDEHT15,BabaioffGN17}.
\paragraph{\bf Auctions for Budget-Constrained Buyers.}
There has been a lot of work studying the impact of budget constraints on mechanism design.
Most of the earlier work required additional assumptions on the valuation distributions, such as regularity or monotone hazard rate~\cite{LaffontR96,CheG00,BhattacharyaGGM10,PaiV14}.
We mention a few results that work for arbitrary distributions.
For public budgets, \cite{ChawlaMM11} designed approximately optimal mechanisms for several single-parameter settings and multi-parameter settings with unit-demand buyers.
For private budgets, \cite{DevanurW17} characterized the structure of the optimal mechanism for one item and one buyer.
\cite{DaskalakisDW15} gave a constant-factor approximation for additive bidders whose private budgets can be correlated with their values. However, they require the buyers' valuation distribution to be given explicitly, which is of exponential size in our setting.
There are also approximation and hardness results in the prior-free setting~\cite{BorgsCIMS05,Abrams06,DevanurHH13}, as well as designing Pareto optimal auctions~\cite{DobzinskiLN12,GoelML15}.
\paragraph{\bf Other Related Work.} Our work concerns revenue maximization for an additive buyer.
Another natural and basic scenario extensively studied in the literature concerns buyers with
unit-demand preferences~\cite{ChawlaHK07,ChawlaHMS10,ChawlaMS15}.
Our work studies the monopoly problem for an additive budgeted buyer in the standard Bayesian framework, in which the prior distribution is known to the seller and is typically assumed to be independent. Parallel to this framework, the (budgeted) additive monopoly problem has been studied in a robust optimization framework~\cite{carroll2017robustness,GravinL18}. Another line of work, on budget-feasible mechanism design~\cite{BeiCGL12,ChenGL11,SinglaK13,Singer10}, studies various reverse auction settings and is concerned with value maximization.
\section{Preliminaries}
\subsection{Optimal Mechanism Design}
We study the design of optimal auctions with one buyer, one seller, and $m$ heterogeneous items labeled by $[m] = \{1, \ldots, m\}$.
There is exactly one copy of each item, and the items are indivisible.
The buyer has additive valuation ($v(S)=\sum_{j\in S}v(\{j\})$ for any set $S\subseteq [m]$) and a publicly known budget ${b}$.\footnote{
In this paper, we mostly focus on the public budget case, so we define notation and discuss background assuming the buyer has a public budget.}
We use $v \in {\mathbb{R}}^m$ to denote the buyer's valuations, where $v_j$ is the buyer's value for item $j$.
We consider the Bayesian setting of the problem, in which the buyer's values are drawn from a discrete\footnote{Like previous work on simple and approximately optimal mechanisms, our results extend to continuous types as well (see, e.g.,~\cite{CaiDW16} for a more detailed discussion).} distribution $V$.
Let $T = {\mathrm{supp}}(V)$ be the set of all possible valuation profiles in $V$.
We use $f(t)$ for any $t\in T$ to denote the probability mass function of $V$: $f(t) = \Prob{v \sim V}{v = t}$.
Let $T_j = {\mathrm{supp}}(V_j)$.
We say the valuation distribution $V$ is independent across items if it can be expressed as $V = \times_j V_{j}$.
We assume the buyer is risk-neutral and has quasi-linear utility when the payment does not exceed his budget.
Let $\pi: T \rightarrow [0,1]^m$ and $p:T \rightarrow {\mathbb{R}}$ denote the allocation and payment rules of a mechanism respectively.
That is, when the buyer reports type $t$, the probability that he will receive item $j$ is $\pi_j(t)$, and his expected payment is $p(t)$ (over the randomness of the mechanism).
Thus, if the buyer has type $t$, his (expected) value for reporting type $t'$ is exactly $\pi(t')^\top t$,~\footnote{We use $x^\top y = \sum_{i=1}^m x_i y_i$ to denote the inner product of two vectors $x$ and $y$.} and
his (expected) utility for reporting type $t'$ is
\[
u(t, t') = \begin{cases}
\pi(t')^\top t - p(t') & \text{if } p(t') \le {b}, \text{ and } \\
-\infty & \text{otherwise.}
\end{cases}
\]
By the revelation principle, it is sufficient to consider mechanisms that are incentive compatible (i.e., ``truthful'').
A mechanism $M = (\pi, p)$ is (interim) incentive-compatible (IC) if the buyer is incentivized to tell the truth (over the randomness of mechanism), and (interim) individually rational (IR) if the buyer's expected utility is non-negative whenever he reports truthfully.
We use $\varnothing$ for the option of not participating in the auction ($\pi(\varnothing) = 0, p(\varnothing) = 0$), and let $T^+ = T \cup \{\varnothing\}$.
Then, the IC and IR constraints can be unified as follows:
\[
u(t, t) \ge u(t, t') \quad \forall t \in T, t' \in T^+.
\]
To summarize, when the seller faces a single buyer with budget $b$ and valuation drawn from $V$, the optimal mechanism $M^* = (\pi^*, p^*)$ is the optimal solution to the following (exponential-size) linear program (LP):
\begin{lp}
\label{lp:budget-reduceform}
\maxi{\sum_{t \in T} f(t) p(t)}
\mbox{subject to} \qcon{\pi(t')^\top t - p(t') \le \pi(t)^\top t - p(t)}{\forall t \in T, t' \in T^+}
\qcon{0 \le \pi_j(t) \le 1}{\forall t \in T, j \in [m]}
\qcon{p(t) \le {b}}{\forall t \in T}
\con{\pi(\varnothing) = 0, \; p(\varnothing) = 0.}
\end{lp}
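To make LP~\eqref{lp:budget-reduceform} concrete, the following is a minimal sketch, not taken from the paper (all function names are ours), that builds and solves this LP for a tiny discrete instance with \texttt{scipy}. Payments are restricted to $[0, {b}]$, which loses no revenue.

```python
# Sketch of LP (1): optimal budget-constrained mechanism for one buyer
# with a discrete type distribution.  Illustrative code, not the paper's.
import numpy as np
from scipy.optimize import linprog

def solve_optimal_revenue(types, probs, budget):
    """types: list of value vectors t in R^m; probs: f(t); budget: b."""
    n, m = len(types), len(types[0])
    # Variables per type i: pi_{i,0..m-1} (allocation) followed by p_i (payment).
    nvar = n * (m + 1)
    def pi_idx(i, j): return i * (m + 1) + j
    def p_idx(i):     return i * (m + 1) + m

    A_ub, b_ub = [], []
    for i, t in enumerate(types):          # truthful type t
        for k in range(n):                 # reported type t'
            if k == i:
                continue
            # IC: pi(t')^T t - p(t') - pi(t)^T t + p(t) <= 0
            row = np.zeros(nvar)
            for j in range(m):
                row[pi_idx(k, j)] += t[j]
                row[pi_idx(i, j)] -= t[j]
            row[p_idx(k)] -= 1.0
            row[p_idx(i)] += 1.0
            A_ub.append(row); b_ub.append(0.0)
        # IR (deviating to the null type): -pi(t)^T t + p(t) <= 0
        row = np.zeros(nvar)
        for j in range(m):
            row[pi_idx(i, j)] = -t[j]
        row[p_idx(i)] = 1.0
        A_ub.append(row); b_ub.append(0.0)

    # 0 <= pi <= 1 and 0 <= p <= b (nonnegative payments lose no revenue).
    bounds = []
    for i in range(n):
        bounds += [(0.0, 1.0)] * m + [(0.0, budget)]

    # Maximize sum_t f(t) p(t)  <=>  minimize its negation.
    c = np.zeros(nvar)
    for i in range(n):
        c[p_idx(i)] = -probs[i]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return -res.fun

# One item, values 1 and 2 with equal probability, slack budget:
# the optimal posted price is 1, so the optimal revenue is 1.
rev = solve_optimal_revenue([[1.0], [2.0]], [0.5, 0.5], budget=10.0)
```

With a tight budget (e.g.\ $b = 0.5$), every payment is capped at $0.5$, and the same code returns revenue $0.5$.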
A mechanism is called ex-post IC, ex-post IR, or ex-post budget-preserving respectively, if the corresponding constraints hold for all possible outcomes, without averaging over the randomness in the mechanism.
We will show that the better of pricing each item separately and pricing the grand bundle, which is a deterministic ex-post mechanism, can extract a constant fraction of the revenue of any interim mechanism.
\subsection{Simple Mechanisms}
For a buyer with valuation distribution $V$, we frequently use the following notations in our analysis:
\begin{itemize}
\item ${\mathbb{R}}ev(V)$: the revenue of the optimal truthful mechanism.
\item ${\mathsc{SRev}}(V)$: the maximum revenue obtainable by pricing each item separately.
\item ${\mathsc{BRev}}(V)$: the maximum revenue obtainable by pricing the grand bundle.
\item ${\mathbb{R}}evB(V)$: the revenue of the optimal truthful mechanism, when the buyer has a budget ${b}$.
\item ${\mathsc{SRev}}B(V)$: the maximum revenue that can be extracted by pricing each item separately, when the buyer has a public budget ${b}$.
\item ${\mathsc{BRev}}B(V)$: the maximum revenue that can be extracted by pricing the grand bundle, when the buyer has a public budget ${b}$.
\end{itemize}
We know that ${\mathsc{SRev}}(V)$ is obtained by running Myerson's optimal auction separately for each item,
and ${\mathsc{BRev}}(V)$ is obtained by running Myerson's auction viewing the grand bundle as one item.
Similarly, ${\mathsc{BRev}}B(V)$ is a single-parameter problem as well, with the minor change that the posted price is at most ${b}$.
The case of ${\mathsc{SRev}}B(V)$ is more complicated. For example, when a budgeted buyer of type $t \in {\mathbb{R}}^m$ participates in an auction with posted price $p_j$ for each item $j$, he will maximize his utility by solving a $\mathsc{Knapsack}$ problem.
There exists a poly-time computable mechanism that extracts a constant fraction of ${\mathsc{SRev}}B(V)$ (e.g.,~\cite{BhattacharyaCMX10}).
We focus on the structural result that the better of ${\mathsc{SRev}}B(V)$ and ${\mathsc{BRev}}B(V)$ is a constant approximation of ${\mathbb{R}}evB(V)$.
A better approximation for ${\mathsc{SRev}}B(V)$ is an interesting open problem that is beyond the scope of this paper.
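As a small illustration of the buyer's $\mathsc{Knapsack}$ behavior under item pricing, the following brute-force sketch (our own code, feasible only for small $m$) computes the utility-maximizing affordable bundle:

```python
# Illustrative sketch: a budgeted additive buyer facing posted item prices
# picks the affordable bundle maximizing total utility (a knapsack problem).
from itertools import combinations

def best_bundle(values, prices, budget):
    m = len(values)
    best_u, best_S = 0.0, ()          # the empty bundle gives utility 0
    for r in range(m + 1):
        for S in combinations(range(m), r):
            cost = sum(prices[j] for j in S)
            if cost > budget:          # budget rules this bundle out
                continue
            u = sum(values[j] - prices[j] for j in S)
            if u > best_u:
                best_u, best_S = u, S
    return best_S, best_u

# Values 4 and 3, both priced at 2, budget 2: the buyer can afford only
# one item and picks item 0 (utility 2 > 1).
S, u = best_bundle([4.0, 3.0], [2.0, 2.0], budget=2.0)
```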
\subsection{Weakly Correlated Distributions}
We call a distribution $\widehat V$ \emph{weakly correlated} if the only source of correlation is a cap on the sum of its coordinates.
\begin{definition}
\label{def:weakly}
For an $m$-dimensional independent distribution $V$ and a threshold ${c} > 0$, we remove the probability mass on any $t \in {\mathrm{supp}}(V)$ with $\normone{t} > {c}$ and renormalize.
Let $\widehat V := V_{|(\normone{v} \le {c})}$ denote the resulting distribution. Formally,
\[
\Prob{\widehat v \sim \widehat V}{\widehat v = t} = \Prob{v \sim V}{v = t \mid \normone{v} \le {c}}, \; \forall t \in {\mathrm{supp}}(V).
\]
\end{definition}
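A short sketch (our own naming) of the conditioning in Definition~\ref{def:weakly} for discrete marginals:

```python
# Sketch of Definition 2 (weakly correlated): condition an independent
# discrete distribution V on the coordinate sum being at most c.
from itertools import product

def condition_on_sum(marginals, c):
    """marginals: list of dicts value -> prob; returns dict type -> prob."""
    hat = {}
    for t in product(*[m.keys() for m in marginals]):
        p = 1.0
        for x, m in zip(t, marginals):
            p *= m[x]
        if sum(t) <= c:                # keep only types with small sum
            hat[t] = p
    z = sum(hat.values())              # renormalize the surviving mass
    return {t: p / z for t, p in hat.items()}

V = [{0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}]   # two independent fair 0/1 values
hatV = condition_on_sum(V, c=1)            # drops the type (1, 1)
```

The three surviving types each get mass $1/3$; the resulting $\widehat V$ is correlated even though $V$ was independent.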
Weakly correlated distributions arise naturally in our analysis.
We will show that if the buyer's valuations are weakly correlated, then the better of selling separately and selling the grand bundle is approximately optimal, and this holds with or without a (public) budget constraint.
\subsection{First-Order Stochastic Dominance}
\label{sec:prelim-preceq}
Stochastic dominance is a partial order between random variables. A random variable $X$ with ${\mathrm{supp}}(X) \subseteq {\mathbb{R}}$ \emph{(weakly) first-order stochastically dominates} another random variable $Y$ with ${\mathrm{supp}}(Y) \subseteq {\mathbb{R}}$ if and only if
\[ \Prob{}{X \ge a} \ge \Prob{}{Y \ge a} \text{ for all } a \in {\mathbb{R}}. \]
This notion of stochastic dominance can be extended to multi-dimensional distributions.
In this paper, we use the notion of coordinate-wise dominance.
\begin{definition}
Given two $m$-dimensional distributions $D_1$ and $D_2$, we say $D_1$ \emph{coordinate-wise stochastically dominates} $D_2$ ($D_1 \succeq D_2$ or $D_2 \preceq D_1$) if there exists a randomized mapping $f: {\mathrm{supp}}(D_1) \rightarrow {\mathrm{supp}}(D_2)$ such that $f(x) \sim D_2$ when $x \sim D_1$, and $f(x) \leq x$ coordinate-wise with probability $1$.
\end{definition}
This notion helps us express the monotonicity of optimal revenues in some cases.
For example, we can show that ${\mathsc{SRev}}(V_1) \geq {\mathsc{SRev}}(V_2)$ whenever $V_1 \succeq V_2$.
The mapping $f$ allows us to couple the draws $v_1 \sim V_1$ and $v_2 \sim V_2$ so that, for any fixed set of prices, if the buyer buys an item under $v_2$, he also buys it under $v_1$.
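This monotonicity can be checked numerically. The sketch below (our code, not the paper's) computes ${\mathsc{SRev}}$ for discrete independent marginals as a sum of single-item posted-price revenues, where an optimal price can always be taken in the support:

```python
# Sketch: SRev for an independent discrete distribution is the sum of
# single-item Myerson revenues, max_p p * Pr[v_j >= p], per item.
def srev(marginals):
    total = 0.0
    for m in marginals:
        total += max(p * sum(q for x, q in m.items() if x >= p)
                     for p in m.keys())
    return total

# V1 coordinate-wise dominates V2 (mass shifted upward),
# so SRev(V1) >= SRev(V2).
V2 = [{1: 0.5, 2: 0.5}]      # single item: srev = max(1*1, 2*0.5) = 1
V1 = [{1: 0.25, 2: 0.75}]    # single item: srev = max(1*1, 2*0.75) = 1.5
```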
\section{Public Budget}
\label{sec:proof1}
In this section, we focus on the public budget case and prove our main result (Theorem~\ref{thm:main-informal}).
The buyer has a fixed budget $b$ and valuations drawn from an independent distribution $V$.
{\noindent \bf Theorem~\ref{thm:main-informal}.~}
{\em
${\mathbb{R}}evB(V) \le 8 {\mathsc{SRev}}B(V) + 24 {\mathsc{BRev}}B(V)$.
}
It follows that the better of ${\mathsc{SRev}}B(V)$ and ${\mathsc{BRev}}B(V)$ is at least $\frac{{\mathbb{R}}evB(V)}{32}$.~\footnote{We do not optimize the constants in our proofs. In Section~\ref{sec:overview}, we will give an alternative proof of Theorem~\ref{thm:main-informal} that shows ${\mathbb{R}}evB(V) \le 5 {\mathsc{SRev}}B(V) + 6 {\mathsc{BRev}}B(V)$, thus improving this constant from 32 to 11.
}
\paragraph{\bf Overview of Our Approach.}
Instead of taking the Lagrangian dual of LP~\eqref{lp:budget-reduceform} to derive an upper bound on the optimal objective value ${\mathbb{R}}evB(V)$, we adopt a more combinatorial approach.
Intuitively, we come up with a charging argument that splits ${\mathbb{R}}evB(V)$ and charges each part to either ${\mathsc{SRev}}B(V)$ or ${\mathsc{BRev}}B(V)$.
First, we partition the buyer types $t \in {\mathrm{supp}}(V)$ into two sets: \emph{high-value} types where $\norminf{t} \ge {b}$ and \emph{low-value} types where $\norminf{t} < {b}$.
Note that we can already charge the revenue of high-value types to ${\mathsc{BRev}}B(V)$: If we sell the grand bundle at price ${b}$, all high-value types will exhaust their budgets.
We now examine the low-value types.
Let $V'$ denote the valuation distribution conditioned on the buyer having a low-value type.
Observe that $V'$ is independent, because the $\ell_\infty$-norm condition acts on each coordinate separately, and we can remove the budget to upper bound its revenue.
For a budget-free additive buyer with independent valuations, we can apply the main result of~\cite{BabaioffILW14}, which states that either selling separately or grand bundling works for $V'$: ${\mathbb{R}}ev(V') = O({\mathsc{SRev}}(V') + {\mathsc{BRev}}(V'))$.
Next, we will relate ${\mathsc{SRev}}(V'), {\mathsc{BRev}}(V')$ to ${\mathsc{SRev}}B(V'), {\mathsc{BRev}}B(V')$.
We can assume the sum of $v' \sim V'$ is usually much smaller than ${b}$.
Similar to standard tail bounds, if the sum $\normone{v'}$ is often small and the random variables are independent and bounded (each $v'_j$ is at most ${b}$), then $\normone{v'}$ must have an exponentially decaying tail.
Therefore, we can add back the budget, because the sum $\normone{v'}$, which upper bounds the buyer's payment, is rarely very large.
Finally, we will show that ${\mathsc{SRev}}B(V') = O({\mathsc{SRev}}B(V))$ and ${\mathsc{BRev}}B(V') \le {\mathsc{BRev}}B(V)$.
The ${\mathsc{BRev}}$ statement is easy to verify, but the ${\mathsc{SRev}}$ statement is trickier.
The monotonicity of ${\mathsc{SRev}}(V)$ in the budget-free case (see Section~\ref{sec:prelim-preceq}) no longer holds when there is a budget.
Fortunately, by paying a factor of two we can circumvent the non-monotonicity caused by the budget constraint.
We will now make our intuitions formal and present three key lemmas.
Throughout the paper, we will always use $V'$ for the distribution obtained from $V$ by capping each coordinate at ${b}$, as defined below.
\begin{definition}
Fix an $m$-dimensional distribution $V = \times V_j$.
Let $V'$ be the independent distribution where every coordinate of $V$ is capped at ${b}$.
That is, $V' = \times_j V'_j$, and $V'_j$ is given by $\Prob{V'_j}{x} = \Prob{v_j \sim V_j}{\min(v_j, {b}) = x}$.
\end{definition}
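A one-function sketch (illustrative, assuming discrete marginals) of this coordinate-wise capping:

```python
# Sketch: cap a discrete marginal V_j at the budget b, merging all
# probability mass above b onto the single point b (definition of V'_j).
def cap_marginal(m, b):
    capped = {}
    for x, q in m.items():
        y = min(x, b)                      # values above b collapse to b
        capped[y] = capped.get(y, 0.0) + q
    return capped

Vj = {1: 0.2, 5: 0.3, 10: 0.5}
Vpj = cap_marginal(Vj, b=5)                # all mass above 5 merges onto 5
```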
\begin{lemma}
\label{lem:opt-revvp}
${\mathbb{R}}evB(V) \le {\mathbb{R}}ev(V') + {\mathsc{BRev}}B(V)$.
\end{lemma}
\begin{lemma}
\label{lem:revvp-revbvp}
Assume ${\mathsc{BRev}}B(V') < \frac{{b}}{10}$.
Then, ${\mathsc{BRev}}(V') \le 3 {\mathsc{BRev}}B(V')$ and ${\mathsc{SRev}}(V') \le {\mathsc{SRev}}B(V') + 4{\mathsc{BRev}}B(V')$.
\end{lemma}
\begin{lemma}
\label{lem:vp-sbrev}
${\mathsc{BRev}}B(V') \le {\mathsc{BRev}}B(V)$ and ${\mathsc{SRev}}B(V') \le 2{\mathsc{SRev}}B(V)$.
\end{lemma}
We defer the proofs of these lemmas to Sections~\ref{sec:opt-revvp},~\ref{sec:revvp-revbvp},~and~\ref{sec:vp-sbrev}, and first use them to prove Theorem~\ref{thm:main-informal}.
\begin{proof}[of Theorem~\ref{thm:main-informal}]
If ${\mathsc{BRev}}B(V') \ge \frac{b}{10}$, then the theorem holds because the optimal revenue ${\mathbb{R}}evB(V)$ is at most the budget ${b}$.
By Lemma~\ref{lem:vp-sbrev}, ${\mathsc{BRev}}B(V) \ge {\mathsc{BRev}}B(V') \ge \frac{b}{10} \ge \frac{{\mathbb{R}}evB(V)}{10}$.
We now assume ${\mathsc{BRev}}B(V') < \frac{b}{10}$.
The theorem follows straightforwardly from Lemmas~\ref{lem:opt-revvp},~\ref{lem:revvp-revbvp},~\ref{lem:vp-sbrev}, and a black-box use of the main result of~\cite{BabaioffILW14}.
\begin{align*}
{\mathbb{R}}evB(V) & \le {\mathbb{R}}ev(V') + {\mathsc{BRev}}B(V) \tag{Lemma~\ref{lem:opt-revvp}} \\
& \le 4{\mathsc{SRev}}(V') + 2{\mathsc{BRev}}(V') + 2 {\mathsc{BRev}}B(V) \tag{\cite{BabaioffILW14}} \\
& \le 4{\mathsc{SRev}}B(V') + 22 {\mathsc{BRev}}B(V') + 2 {\mathsc{BRev}}B(V). \tag{Lemma~\ref{lem:revvp-revbvp}} \\
& \le 8{\mathsc{SRev}}B(V) + 24 {\mathsc{BRev}}B(V). \tag*{(Lemma~\ref{lem:vp-sbrev}) \qed}
\end{align*}
\end{proof}
\subsection{Proof of Lemma~\ref{lem:opt-revvp}}
\label{sec:opt-revvp}
We will prove the following lemma, which is a generalization of Lemma~\ref{lem:opt-revvp}.
\begin{lemma}
\label{lem:revbvsplit}
Fix $b > 0$ and $0 < {c} \le {b}$.
For any distribution $\widehat V$ with ${\mathrm{supp}}(\widehat V) \subseteq {\mathrm{supp}}(V)$ and $\Prob{\widehat V}{t} \ge \Prob{V}{t}$ for any $\normone{t} \le {c}$,
we have ${\mathbb{R}}evB(V) \le ({b} / {c}) \cdot {\mathsc{BRev}}B(V) + {\mathbb{R}}ev(\widehat V)$.
\end{lemma}
Lemma~\ref{lem:opt-revvp} follows immediately from Lemma~\ref{lem:revbvsplit} by choosing $c = {b}$ and $\widehat V = V'$, because capping each coordinate at $c$ does not create new support, and does not decrease probability mass on any type $t$ whose sum is at most $c$.
Intuitively, Lemma~\ref{lem:revbvsplit} upper bounds the optimal revenue by splitting the buyer types $t$ into two sets:
when $\normone{t} > c$, we upper bound the seller's revenue by the budget $b$;
when $\normone{t} \le c$, we run the optimal mechanism for ${\mathbb{R}}evB(V)$.
\begin{proof}[of Lemma~\ref{lem:revbvsplit}]
Let $T$, $\widehat T$ and $f$, $\widehat f$ denote the supports and probability mass functions of $V$ and $\widehat V$ respectively.
Let $M^* = (\pi^*, p^*)$ be the optimal mechanism that obtains ${\mathbb{R}}evB(V)$.
Recall that $\pi^*$ and $p^*$ are the allocation and payment rules, and $(\pi^*, p^*)$ is the optimal solution to LP~\eqref{lp:budget-reduceform} for $f$ and $T$.
We split the optimal revenue into two parts:
\[ {\mathbb{R}}evB(V) = \sum_{t \in T} f(t) p^*(t) = \sum_{\normone{t} > {c}} f(t) p^*(t) + \sum_{\normone{t} \le {c}} f(t) p^*(t). \]
Since $p^*(t) \le b$, the first term is at most ${b} \sum_{\normone{t} > {c}} f(t) = b \cdot \Prob{}{\normone{v} > {c}} \le (b/c){\mathsc{BRev}}B(V)$, because we can sell the grand bundle at price $p = {c}$.
The second term is at most ${\mathbb{R}}ev(\widehat V)$, because $M^*$ is a feasible solution to the LP for $\widehat T \subseteq T$.
In other words, $M^*$ satisfies the IC and IR constraints for $\widehat V$.
The revenue of $\widehat V$ is at least the revenue of $M^*$ on $\widehat V$:
\[
{\mathbb{R}}ev(\widehat V) \ge \sum_{t \in \widehat T} \widehat f(t) p^*(t) \ge \sum_{t \in T, \normone{t} \le {c}} \widehat f(t) p^*(t) \ge \sum_{t \in T, \normone{t} \le {c}} f(t) p^*(t).
\]
Combining the upper bounds, we get
${\mathbb{R}}evB(V) \leq ({b} / {c}) {\mathsc{BRev}}B(V) + {\mathbb{R}}ev(\widehat V)$.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:revvp-revbvp}}
\label{sec:revvp-revbvp}
Lemma~\ref{lem:revvp-revbvp} states that when the sum of $v' \sim V'$ is often small, the budget does not matter too much for $V'$.
Intuitively, because each coordinate of $v' \sim V'$ is independent and upper bounded by $b$, a concentration inequality implies that the sum has an exponentially decaying tail.
Therefore, the budget constraint is less critical because it is very unlikely that the buyer's value for the grand bundle is much larger than the budget.
We formalize this intuition by proving the following lemma, which is similar to standard tail bounds.
The main difference is that, instead of knowing the mean of $\normone{v'}$ is small, we only know that ${\mathsc{BRev}}B(V')$ is small.
\begin{lemma}
\label{lem:sum-vp-tail}
If $V'$ is independent and $\norminf{v'} \le c$ for all $v' \sim V'$, then
\[
\Prob{v' \sim V'}{\normone{v'} \ge x + y + c} \le \Prob{v' \sim V'}{\normone{v'} \ge x} \cdot \Prob{v' \sim V'}{\normone{v'} \ge y} \quad \text{for all } x, y > 0.
\]
In particular, if $\Prob{v' \sim V'}{\normone{v'} \ge c} \le q$, then for all integer $k \ge 0$,
\[
\Prob{v' \sim V'}{\normone{v'} \ge (2k+1) c} \le q^k \Prob{v' \sim V'}{\normone{v'} \ge c}.
\]
\end{lemma}
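Before the proof, the first inequality of Lemma~\ref{lem:sum-vp-tail} can be sanity-checked by exact enumeration on a small instance (our illustrative code; the lemma of course guarantees the outcome):

```python
# Numeric check of the tail bound: for independent coordinates bounded
# by c, Pr[S >= x + y + c] <= Pr[S >= x] * Pr[S >= y].
from itertools import product

def sum_tail(marginals, s):
    """Pr[sum of independent discrete draws >= s], by exact enumeration."""
    total = 0.0
    for t in product(*[m.items() for m in marginals]):
        p = 1.0
        for _, q in t:
            p *= q
        if sum(x for x, _ in t) >= s:
            total += p
    return total

c = 1.0
V = [{0.0: 0.6, 1.0: 0.4}] * 4        # four coordinates, each at most c
ok = all(
    sum_tail(V, x + y + c) <= sum_tail(V, x) * sum_tail(V, y) + 1e-12
    for x in (0.5, 1.0, 1.5, 2.0) for y in (0.5, 1.0, 1.5, 2.0)
)
```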
We defer the proof of Lemma~\ref{lem:sum-vp-tail} to Appendix~\ref{apx:tail-bound}, and first use this tail bound to prove Lemma~\ref{lem:revvp-revbvp}.
\begin{proof}[of Lemma~\ref{lem:revvp-revbvp}]
Let $c = b$ and $q = \frac{1}{10}$.
We know that $\Prob{}{\normone{v'} \ge c} \le q$ from the assumption ${\mathsc{BRev}}B(V') < \frac{{b}}{10}$: selling the grand bundle at price ${b}$ shows ${b} \cdot \Prob{}{\normone{v'} \ge {b}} \le {\mathsc{BRev}}B(V')$.
First observe that
$
{\mathsc{BRev}}^{b}(V') = \max_{p \le {b}} \left(p \cdot \Prob{}{\normone{v'} \ge p}\right).
$
If we price the grand bundle at price $p$ where $(2k+1)c < p \le (2k+3) c$ for some $k \ge 0$,
by Lemma~\ref{lem:sum-vp-tail}, the revenue is at most
\[
p \cdot \Prob{}{\normone{v'} \ge (2k+1)c} \le (2k+3)c \cdot q^{k} \Prob{}{\normone{v'} \geq c} \le (2k+3)q^{k}{\mathsc{BRev}}B(V').
\]
For ${\mathsc{SRev}}(V')$, similar to Lemma~\ref{lem:opt-revvp}, we can upper bound the revenue by allowing the seller to extract full revenue if $\normone{v'} > c$,
and running the optimal budget-constrained mechanism when $\normone{v'} \le c$:
\begin{align*}
{\mathsc{SRev}}(V') & \le {\mathsc{SRev}}^{c}(V') + \expect{}{\normone{v'} \cdot \mathbf{1}\!\left[\normone{v'} \ge c\right]} \\
& \le {\mathsc{SRev}}^{c}(V') + \sum_{k=0}^{\infty} (2k+3) c \cdot \Prob{}{\normone{v'} \ge (2k+1)c} \\
& \le {\mathsc{SRev}}^{c}(V') + \sum_{k=0}^{\infty} (2k+3)q^{k} {\mathsc{BRev}}^{c}(V') \\
& \le {\mathsc{SRev}}^{c}(V') + 4{\mathsc{BRev}}^{c}(V'). \tag*{\qed}
\end{align*}
\end{proof}
\subsection{Proof of Lemma~\ref{lem:vp-sbrev}}
\label{sec:vp-sbrev}
Lemma~\ref{lem:vp-sbrev} states that ${\mathsc{SRev}}B(V)$ and ${\mathsc{BRev}}B(V)$ are both (up to constant factors) monotone in $V$.
We prove a more general version of the lemma that does not require $V$ to be independent.
Recall that $\widehat V \preceq V$ means $\widehat V$ is coordinate-wise stochastically dominated by $V$.
\begin{lemma}
\label{lem:vhat-sbrev}
Fix $b > 0$ and $0 < {c} \le {b}$.
For any distribution $\widehat V \preceq V$, ${\mathsc{BRev}}^c(\widehat V) \le {\mathsc{BRev}}B(V)$ and ${\mathsc{SRev}}^c(\widehat V) \le \max\left(1, \frac{2{c}}{{b}}\right){\mathsc{SRev}}B(V)$.
\end{lemma}
Lemma~\ref{lem:vp-sbrev} follows directly from Lemma~\ref{lem:vhat-sbrev}, by choosing $c = b$ and $\widehat V = V'$.
Intuitively, we would like to prove that ${\mathsc{SRev}}B(\widehat V) \le {\mathsc{SRev}}B(V)$ for any $\widehat V \preceq V$.
While this is true in the budget-free case (See Section~\ref{sec:prelim-preceq}), it is actually false in the presence of a budget.
We give a counterexample in Appendix~\ref{apx:srevb-not-monotone}.
Fortunately, we can prove ${\mathsc{SRev}}B(V') \le 2{\mathsc{SRev}}B(V)$. The intuition is that we can cap the price of each item at ${b}/2$; then the buyer either spends at least ${b}/2$, or he purchases every item he likes.
\begin{proof}[of Lemma~\ref{lem:vhat-sbrev}]
First consider ${\mathsc{BRev}}$. Because ${c} \le {b}$ and $\widehat V \preceq V$,
\[
{\mathsc{BRev}}^{{c}}(\widehat V) = \max_{p \le {c}} \left(p\cdot \Prob{\widehat v \sim \widehat V}{\normone{\widehat v} \ge p}\right) \le \max_{p \le {b}} \left(p\cdot \Prob{v \sim V}{\normone{v} \ge p}\right) = {\mathsc{BRev}}B(V).
\]
For ${\mathsc{SRev}}$,
let $\widehat M$ be the optimal mechanism that achieves ${\mathsc{SRev}}^c(\widehat V)$ by pricing each item separately.
We construct a mechanism $M$ to mimic $\widehat M$ except the prices are capped at ${b} / 2$.
Consider applying $M$ to a buyer with valuation drawn from $V$ and a budget ${b}$.
As $\widehat V \preceq V$, we can couple the realizations $\widehat v \sim \widehat V$ and $v \sim V$ such that $\widehat v \leq v$ (coordinate-wise).
For every $(\widehat v, v)$ pair:
\begin{itemize}
\item If $M$ gets a revenue of at least $\frac{{b}}{2}$ on $v$, then this is at least a $\frac{{b}}{2{c}}$-fraction of the revenue $\widehat M$ gets on $\widehat v$, because the latter is at most ${c}$.
\item If $M$ gets a revenue less than $\frac{{b}}{2}$ on $v$, then the buyer has enough budget left to buy any item.
Therefore, the buyer can buy everything he wants. Because $\widehat v \leq v$, the revenue of $M$ on $v$ is at least that of $\widehat M$ on $\widehat v$.
\end{itemize}
Thus, $M$ can get at least a $\min(1, \frac{{b}}{2{c}})$-fraction of the revenue that $\widehat M$ gets on $\widehat v$, which implies ${\mathsc{SRev}}^{c}(\widehat V) \le \max\left(1, \frac{2{c}}{{b}}\right) {\mathsc{SRev}}B(V)$.
\qed
\end{proof}
\section{Public Budget and Weakly Correlated Valuations}
\label{sec:overview}
In this section, we present an alternative approach to prove our main result (Theorem~\ref{thm:main-informal}).
Recall that the buyer has a public budget $b$ and valuations drawn from an independent distribution $V$.
{\noindent \bf Theorem~\ref{thm:main-informal}.~}
{\em
${\mathbb{R}}evB(V) \le 5 {\mathsc{SRev}}B(V) + 6 {\mathsc{BRev}}B(V)$.
}
\paragraph{\bf Overview of Our Approach.}
We will truncate the input distribution $V$ in a different way: instead of truncating $v \sim V$ in $\ell_\infty$-norm (as in Section~\ref{sec:proof1}), we will truncate $v$ in $\ell_1$-norm.
This truncation produces a correlated distribution $\widehat V$.
The upshot of truncating in $\ell_1$-norm is that we always have $\normone{\widehat v} \le {b}$, so $\widehat V$ can ignore the budget.
In addition, as in Section~\ref{sec:proof1}, we can relate the optimal revenue to the revenue of $\widehat V$ (Lemma~\ref{lem:revbvsplit}), and we can relate the revenue of simple mechanisms on $\widehat V$ back to revenue of simple mechanisms on $V$ (Lemma~\ref{lem:vhat-sbrev}).
We still need to argue that simple mechanisms work well for $\widehat V$.
This is the main challenge in this approach.
Because $\widehat V$ is correlated, we cannot apply the result of~\cite{BabaioffILW14} in a black-box way.
Instead, we need to modify the analysis of previous work~\cite{LiY13,BabaioffILW14,CaiDW16} and build on the key ideas like ``core-tail'' decomposition.
More specifically, we generalize the duality-based framework developed in~\cite{CaiDW16} to handle the specific type of correlation $\widehat V$ has.
\paragraph{\bf Weakly Correlated Valuations.}
It is worth mentioning that our structural lemmas (Lemmas~\ref{lem:revbvsplit}~and~\ref{lem:vhat-sbrev})
do not require the input distribution to be independent.
This is why our techniques can be applied to more general settings.
For example, in this section, we will generalize Theorem~\ref{thm:main-informal} with minimum effort to handle weakly correlated valuations (see Definition~\ref{def:weakly} for the formal definition).
{\noindent \bf Corollary~\ref{cor:weakly}.~}
{\em
Let $\widehat V$ be a weakly correlated distribution (Definition~\ref{def:weakly}).
We have ${\mathbb{R}}evB(\widehat V) \le 5 {\mathsc{SRev}}B(\widehat V) + 6 {\mathsc{BRev}}B(\widehat V)$.
}
Our main contribution in this section is Lemma~\ref{lem:hatv-simple}.
Lemma~\ref{lem:hatv-simple} shows that for any weakly correlated distribution $\widehat V$ (see Definition~\ref{def:weakly}), the better of ${\mathsc{SRev}}(\widehat V)$ and ${\mathsc{BRev}}(\widehat V)$ is a constant approximation to the optimal revenue ${\mathbb{R}}ev(\widehat V)$.
\begin{lemma}
\label{lem:hatv-simple}
Fix $c > 0$. Let $\widehat V = V_{|(\normone{v} \le {c})}$ for an independent distribution $V$.
We have ${\mathbb{R}}ev(\widehat V) \le 5 {\mathsc{SRev}}(\widehat V) + 4{\mathsc{BRev}}(\widehat V)$.
\end{lemma}
We defer the proof of Lemma~\ref{lem:hatv-simple} to Appendix~\ref{sec:vhat}.
We first use these lemmas to prove Theorem~\ref{thm:main-informal} and Corollary~\ref{cor:weakly}.
\begin{proof}[of Theorem~\ref{thm:main-informal} and Corollary~\ref{cor:weakly}]
If $\min_{v \in {\mathrm{supp}}(V)} \normone{v} \ge {b} / 2$, then the seller can price the grand bundle at ${b} / 2$ and the buyer always buys it.
In this case, the revenue is ${b}/2$ and ${\mathbb{R}}evB(V) \le {b} \le 2 {\mathsc{BRev}}B(V)$.
Thus, we focus on the more interesting case where $\Prob{v \sim V}{\normone{v} \leq {b} / 2} > 0$.~\footnote{Throughout the paper, when we consider the conditional distribution $\widehat V := V_{|(\normone{v} \le {c})}$, we will always have ${c} > \min_{v \in {\mathrm{supp}}(V)} \normone{v}$, so that the event we condition on happens with non-zero probability.}
Let $\widehat V := V_{|(\normone{v} \leq {c})}$ for ${c} = {b} / 2$.
We will reuse Lemmas~\ref{lem:revbvsplit}~and~\ref{lem:vhat-sbrev} from Section~\ref{sec:proof1}.
We can reuse both lemmas because they do not require $V$ or $\widehat V$ to be independent, $\widehat V$ does not modify the small-sum part of $V$, and $\widehat V \preceq V$ (which we will prove as Lemma~\ref{CLM:HATV-PREC-V} in Appendix~\ref{apx:hatv-preceq-v}).
\begin{align*}
{\mathbb{R}}evB(V) & \le ({b} / {c}) \cdot {\mathsc{BRev}}B(V) + {\mathbb{R}}ev(\widehat V) \tag{Lemma~\ref{lem:revbvsplit}}\\
& \le 2 {\mathsc{BRev}}B(V) + 5 {\mathsc{SRev}}(\widehat V) + 4{\mathsc{BRev}}(\widehat V) \tag{Lemma~\ref{lem:hatv-simple}} \\
& = 2 {\mathsc{BRev}}B(V) + 5 {\mathsc{SRev}}^{c}(\widehat V) + 4{\mathsc{BRev}}^{c}(\widehat V) \tag{$\normone{\widehat v} \le {c}$} \\
& \le 2 {\mathsc{BRev}}B(V) + 5 {\mathsc{SRev}}B(V) + 4{\mathsc{BRev}}B(V). \tag{Lemma~\ref{lem:vhat-sbrev}}
\end{align*}
We now prove Corollary~\ref{cor:weakly}.
Intuitively, Corollary~\ref{cor:weakly} holds because simple mechanisms work well for weakly correlated valuations, and the weakly correlated notion is closed under further capping the sum.
Let the input distribution be $V_{|(\normone{v} \le {c}_2)}$ for an independent distribution $V$.
If ${c}_2 \le {b}$, then we can remove the budget constraint and apply Lemma~\ref{lem:hatv-simple} directly.
If ${c}_2 > {b}$, then we can further condition $V$ on $\normone{v} \le {c}_1 = {b}/2$ to obtain a weakly correlated distribution $\widehat V$.
One can verify that Lemmas~\ref{lem:revbvsplit}~and~\ref{lem:vhat-sbrev} still hold for $V$ and $\widehat V$, and Lemma~\ref{lem:hatv-simple} holds for $\widehat V$.
The only difference is that we need to show $V_{|(\normone{v} \le {c}_1)} \preceq V_{|(\normone{v} \le {c}_2)}$ for ${c}_1 \le {c}_2$.
We will prove this (Lemma~\ref{CLM:C1-PREC-C2}) in Appendix~\ref{apx:hatv-preceq-v}.
\qed
\end{proof}
\section{Simple Mechanisms for Weakly Correlated Valuations}
\label{sec:vhat}
This section is devoted to proving Lemma~\ref{lem:hatv-simple}.
Lemma~\ref{lem:hatv-simple} states that simple mechanisms (more specifically, the better of ${\mathsc{SRev}}(\widehat V)$ and ${\mathsc{BRev}}(\widehat V)$) are approximately optimal for any weakly correlated distribution $\widehat V = V_{|(\normone{v} \le c)}$.
Note that there is no budget in this section, only a cap ${c}$ on the $\ell_1$-norm of $\widehat v \sim \widehat V$.
Our approach builds on the ideas like ``core-tail decomposition'' from previous works that show ${\mathbb{R}}ev(V) = O({\mathsc{SRev}}(V) + {\mathsc{BRev}}(V))$ for independent valuations $V$~\cite{HartN17,LiY13,BabaioffILW14,CaiDW16}.
More specifically, we generalize the duality-based framework developed in~\cite{CaiDW16} to handle weakly correlated distributions.
The idea of~\cite{CaiDW16} is to Lagrangify only the incentive constraints, then guess the Lagrangian multipliers to derive an upper bound on the maximum revenue.
We first highlight some of the difficulties in extending previous works to correlated distributions.
\begin{enumerate}
\item When the distribution is independent, one can upper bound the maximum revenue by Myerson's \emph{virtual value} of the bidder's favorite item, plus the sum of the \emph{values} of the remaining items. For correlated distributions, the virtual value of the favorite item depends on the other items.
\item In the core part of core-tail decomposition, we need the total value of the low-value items to concentrate around its expectation, so we can upper bound their values by ${\mathsc{BRev}}$ (by selling the grand bundle at a price slightly lower than that expectation).
When the valuations are independent, we can show the variance is small, which may not be true for correlated distributions.
\end{enumerate}
These difficulties are not surprising, because Lemma~\ref{lem:hatv-simple} cannot hold for arbitrary correlated distributions.
As shown in~\cite{BriestCKW10,HartN13}, for correlated distributions, the gap between the best deterministic and randomized mechanisms can be unbounded.
Hence, we have to take advantage of the special properties of $\widehat V$.
\paragraph{\bf Notations.}
In this section, because $\widehat V = V_{|(\normone{v} \le c)}$ is the distribution we focus on, we use $T$ to denote the support of $\widehat V$.
Given a (correlated) distribution $D$, we use $D_j$ to denote $D$'s marginal distribution on the $j$-th coordinate, and $D_{-j}$ to denote $D$'s marginal (joint) distribution on coordinates other than $j$.
Let $T_j$ and $T_{-j}$ be the support of $\widehat V_j$ and $\widehat V_{-j}$ respectively.
In addition, we will make use of the conditional distributions $\widehat V_{j | \widehat v_{-j} = t_{-j}}$ and $\widehat V_{-j | \widehat v_{j} = t_{j}}$:
the former is the distribution of $\widehat v_j$ for $\widehat v \sim \widehat V$ conditioned on $\widehat v_{-j} = t_{-j}$, and the latter is the distribution of $\widehat v_{-j}$ conditioned on $\widehat v_j = t_j$.
Abusing notation, we use $f(t)$, $f(t_j)$, $f(t_{-j})$, $f(t_j | t_{-j})$, and $f(t_{-j} | t_j)$ to denote the probability mass function of the correlated and conditional distributions we mentioned in this section.
When the value of item $j$ is drawn from $f(t_j | t_{-j})$, we use ${\varphi}t(t_j | t_{-j})$ to denote item $j$'s (ironed) Myerson's virtual value.
For a bidder type $t \in T \subseteq {\mathbb{R}}^m$, the favorite item of $t$ is the one with the highest value (with ties broken lexicographically).
We write $t \in R_j$ if and only if $j$ is the favorite item of type $t$ after tie-breaking.
Formally,
\[
R_j = \{t \in T \mid j \text{ is the smallest index with } t_j \ge t_k \text{ for all } k \in [m]\}.
\]
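As a concrete illustration (our own sketch, not code from the paper), the region membership $t \in R_j$ with lexicographic tie-breaking can be computed as follows; the helper names `favorite_item` and `in_region` are hypothetical:

```python
# Sketch (not from the paper): the favorite-item rule defining the regions R_j,
# with ties broken lexicographically (smallest index wins).
def favorite_item(t):
    """Return j such that t is in R_j: the smallest index attaining max(t)."""
    best = max(t)
    return min(j for j, v in enumerate(t) if v == best)

def in_region(t, j):
    """True iff type t lies in region R_j."""
    return favorite_item(t) == j
```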
\paragraph{\bf Proof of Lemma~\ref{lem:hatv-simple}.}
We extend the duality framework in~\cite{CaiDW16} to handle correlated distributions.
As we will see in Section~\ref{sec:canon-flow}, we can upper bound the optimal revenue of $\widehat V$ by the sum of three components.
In the following, $\pi$ denotes the allocation rule of the optimal mechanism.
For notational convenience, we write $r$ for ${\mathsc{SRev}}(\widehat V)$.
Notice that in ${\mathsc{Single}}$ we get ${\varphi}t(t_j | t_{-j})$ rather than ${\varphi}t(t_j)$.
\begin{align*}
{\mathbb{R}}ev(\widehat V) & \le \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) {\varphi}t(t_j | t_{-j}) \cdot \indic{t \in R_j} \tag{{\mathsc{Single}}} \\
& \quad + \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j \le 2r} \tag{{\mathsc{Core}}} \\
& \quad + \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j > 2r}. \tag{{\mathsc{Tail}}}
\end{align*}
Lemma~\ref{lem:hatv-simple} follows directly from the statements of Lemmas~\ref{lem:vhat-single},~\ref{lem:vhat-tail},~and~\ref{lem:vhat-core}.
\begin{lemma}
\label{lem:vhat-single}
${\mathsc{Single}} \le 2 {\mathsc{BRev}}(\widehat V) + {\mathsc{SRev}}(\widehat V)$.
\end{lemma}
\begin{lemma}
\label{lem:vhat-tail}
${\mathsc{Tail}} \le {\mathsc{SRev}}(\widehat V)$.
\end{lemma}
\begin{lemma}
\label{lem:vhat-core}
${\mathsc{Core}} \le 2 {\mathsc{BRev}}(\widehat V) + 3 {\mathsc{SRev}}(\widehat V)$.
\end{lemma}
\paragraph{\bf Organization.} For completeness, we first recall the approach in \cite{CaiDW16} in Section~\ref{sec:stoc16-duality}.
In Section~\ref{sec:canon-flow}, we show that there exists a choice of the Lagrangian multipliers such that ${\mathbb{R}}ev(\widehat V)$ can be upper bounded by ${\mathsc{Single}} + {\mathsc{Core}} + {\mathsc{Tail}}$.
Lemmas~\ref{lem:vhat-single},~\ref{lem:vhat-tail},~and~\ref{lem:vhat-core} are proved in Section~\ref{sec:vhat-proofs}.
\subsection{The Duality-Based Approach in~\cite{CaiDW16}}
\label{sec:stoc16-duality}
The optimal mechanism $M^* = (\pi^*, p^*)$ for $\widehat V$ is the optimal solution to the following LP:
\begin{lp}
\label{lp:rev-reduced}
\maxi{\sum_{t \in T} f(t) p(t)}
\mbox{subject to} \qcon{\pi(t')^\top t - p(t') \le \pi(t)^\top t - p(t)}{\forall t \in T, t' \in T^+}
\qcon{0 \le \pi_j(t) \le 1}{\forall t \in T, j \in [m]}
\con{\pi(\varnothing) = 0, \; p(\varnothing) = 0.}
\end{lp}
We can upper bound the optimal primal value by Lagrangifying the incentive constraints.
\begin{align*}
{\mathbb{R}}ev(\widehat V) = \min_{\lambda \ge 0} \max_{\pi, p} L(\lambda, \pi, p),
\end{align*}
where the Lagrangian dual of LP~\eqref{lp:rev-reduced} is given by
\begin{align*}
\begin{split}
L(\lambda,\pi,p)
& = \sum_{t \in T} f(t) p(t) + \sum_{t \in T, t' \in T^+} \lambda(t, t') \left[ (\pi(t)-\pi(t'))^\top t - (p(t) - p(t')) \right] \\
& = \sum_{t \in T} p(t) \left[f(t) - \sum_{t' \in T^+} \lambda(t, t') + \sum_{t' \in T} \lambda(t', t) \right] \\
& \qquad + \sum_{t \in T} \pi(t)^\top \left[\sum_{t' \in T^+} \lambda(t, t') t - \sum_{t' \in T} \lambda(t', t) t' \right].
\end{split}
\end{align*}
Because the $p(t)$'s are unconstrained variables, any dual solution with a finite value must have
\begin{align}
\label{eqn:multiplier-flow}
f(t) - \sum_{t' \in T^+} \lambda(t, t') + \sum_{t' \in T} \lambda(t', t) = 0, \; \forall t \in T.
\end{align}
From now on, we restrict our attention to dual solutions with finite values.
We can then simplify $L(\lambda,\pi,p)$ by replacing $\sum_{t' \in T^+} \lambda(t, t')$ with $f(t) + \sum_{t' \in T} \lambda(t', t)$ to get rid of $T^+$:
\begin{align*}
L(\lambda,\pi,p)
& = \sum_{t \in T} \pi(t)^\top \left[f(t) t + \sum_{t' \in T} \lambda(t', t) t - \sum_{t' \in T} \lambda(t', t) t' \right] \\
& = \sum_{t \in T} f(t) \pi(t)^\top \left[t - \frac{1}{f(t)} \sum_{t' \in T} \lambda(t', t) (t' - t) \right].
\end{align*}
We write ${\Phi}(t)$ as a shorthand for the term in the bracket:
$
{\Phi}(t) = t - \frac{1}{f(t)} \sum_{t' \in T} \lambda(t', t) (t' - t).
$
Therefore, for any $\lambda$ satisfying Equation~\eqref{eqn:multiplier-flow} and for the allocation rule $\pi$ of the optimal mechanism,
$
L(\lambda,\pi,p) = \sum_{t \in T} f(t) \pi(t)^\top {\Phi}(t) \ge {\mathbb{R}}ev(\widehat V),
$
that is, $\sum_{t \in T} f(t) \pi(t)^\top {\Phi}(t)$ is an upper bound on the revenue of the optimal mechanism.
We can rewrite Equation~\eqref{eqn:multiplier-flow} as
\[
f(t) + \sum_{t' \in T} \lambda(t', t) = \sum_{t' \in T^+} \lambda(t, t') = \sum_{t' \in T} \lambda(t, t') + \lambda(t, \varnothing), \; \forall t \in T.
\]
\cite{CaiDW16} interpreted these constraints as flow conservation constraints.
Let $\lambda(t, t') \ge 0$ denote the amount of flow $t$ sends to $t'$.
The left-hand side is the total flow received by $t$, where every type $t$ receives $f(t)$ units of flow from the source;
and the right-hand side is the total flow sent out from $t$, with all the excess flow sent to the sink ($\varnothing$).
They proposed a ``canonical flow'' which was shown to be a good guess for the Lagrangian multipliers.
It turns out the same dual solution is sufficient to prove our results for correlated distributions.
In the next section, we recall this canonical flow and use it to derive an upper bound on the optimal revenue.
\subsection{Canonical Flow for Weakly Correlated Distributions}
\label{sec:canon-flow}
Recall that $t \in R_j$ if and only if $j$ is the favorite item of type $t$.
The canonical flow is a choice of Lagrangian multipliers $\lambda(t,t') \ge 0$ such that
\begin{itemize}
\item For every $j$, all flows entering $R_j$ come from the source, and all flows leaving $R_j$ go to the sink $\varnothing$.
\item For $t, t' \in R_j$, we can have $\lambda(t',t) > 0$ only if $t$ and $t'$ only differ on the $j$-th coordinate. When there is no ironing, $\lambda(t',t) > 0$ only if $t'_j$ is the smallest value larger than $t_j$ in $T_j$.
\end{itemize}
\begin{lemma}
\label{lem:canonical-flow}
There exists a set of the Lagrangian multipliers $\lambda$ that satisfies the flow conservation constraints, such that
\begin{enumerate}
\item[(1)] If $t \notin R_j$, then ${\Phi}_j(t) = t_j$.
\item[(2)] If $t \in R_j$, then ${\Phi}_j(t) \le {\varphi}t(t_j | t_{-j})$, where ${\varphi}t(t_j | t_{-j})$ is item $j$'s (ironed) Myerson virtual value conditioned on $t_{-j}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall that
$
{\Phi}(t) = t - \frac{1}{f(t)} \sum_{t' \in T} \lambda(t', t) (t' - t).
$
For (1), assume that $t \in R_k$ for some $k \neq j$.
If $\lambda(t', t) > 0$, it must be the case that $t$ and $t'$ are only different on the $k$-th coordinate, so $(t - t')_j = 0$ and ${\Phi}_j(t) = t_j$.
Now we prove (2).
We first consider a canonical flow without ironing.
Fix any $j$ and $t \in R_j$.
If $t_j$ is the largest value in $T_j$, then $\lambda(t', t) = 0$ for all $t'$ and ${\Phi}_j(t) = t_j$.
If $t_j$ is not the largest value in $T_j$, $t$ receives flow from the source and exactly one other node $t'$ where $t'_{-j} = t_{-j}$, and $t'_j$ is the smallest value larger than $t_j$ in $T_j$.
The total flow from $t'$ to $t$ consists of the flow originating from the source at all $t^*$ with $t^*_{-j} = t_{-j}$ and $t^*_j > t_j$:
\begin{align*}
\lambda(t', t) & = \sum_{t^* \in T} f(t^*) \cdot \indic{t^*_{-j} = t_{-j} \wedge t^*_j > t_j} \\
& = f(t_{-j}) \sum_{t^*_j \in T_j, t^*_j > t_j} f(t^*_j | t_{-j})
= f(t_{-j}) \left(1 - F(t_j | t_{-j}) \right).
\end{align*}
Substituting $\lambda(t', t)$ in the expression of ${\Phi}(t)$, this implies
\begin{align*}
{\Phi}_j(t) & = t_j - \frac{1}{f(t)} \sum_{t^* \in T} \lambda(t^*, t) (t^*_j - t_j) \\
& = t_j - \frac{1}{f(t)} \lambda(t', t)(t'_j - t_j) \\
& = t_j - \frac{f(t_{-j}) \left(1 - F(t_j | t_{-j}) \right)}{f(t_{-j}) f(t_{j} | t_{-j})} (t'_j - t_j)
= {\varphi}(t_j | t_{-j}).
\end{align*}
Finally, we show that the flow can be modified to implement Myerson's ironing procedure.
The analysis on modifying the flow to reflect ironing is given in~\cite{CaiDW16}, and we include it here for completeness.
Suppose there exist two types $t, t' \in R_j$ such that $t_j < t'_j$ but ${\Phi}_j(t) > {\Phi}_j(t')$.
We can add a cycle of $w$ units of flow between $t$ and $t'$, that is, we increase both $\lambda(t, t')$ and $\lambda(t', t)$ by $w$.
Notice that the resulting flow is still valid, and ${\Phi}(t^*)$ for all $t^* \neq t, t'$ remain unchanged.
Moreover, the change does not alter ${\Phi}_k(t)$ or ${\Phi}_k(t')$ for all $k \neq j$.
The only effect of the change is to increase ${\Phi}_j(t)$ by $w(t'_j - t_j) / f(t)$, and decrease ${\Phi}_j(t')$ by $w(t'_j - t_j) / f(t')$.
Therefore, we can choose $w$ so that ${\Phi}_j(t) = {\Phi}_j(t')$ without changing any other virtual values.
Repeating this process allows us to simulate Myerson's ironing procedure.
One technical issue is that an ironing interval of the conditional distribution $f(\,\cdot \mid t_{-j})$ may be cut off because part of it leaves the region $R_j$.
However, we know that truncating an ironing interval $I$ to $I' \subseteq I$ from below can only decrease the virtual value on $I'$.
This is because the average virtual value on $I'$ is smaller than the average virtual value on $I$, otherwise we would not iron the entire interval $I$ in the first place. \qed
\end{proof}
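As a numeric sanity check of the formula derived above (our own sketch, not part of the paper), the following computes the discrete virtual values $\varphi(a_k) = a_k - (1-F(a_k))(a_{k+1}-a_k)/f(a_k)$ for a single-item distribution and verifies the standard telescoping identity: the expected virtual value above any support point equals the posted-price revenue at that point.

```python
# Sanity check (our own sketch): discrete Myerson virtual values
#   phi(a_k) = a_k - (1 - F(a_k)) * (a_{k+1} - a_k) / f(a_k),
# and the telescoping identity  sum_{k >= K} f(a_k) phi(a_k) = a_K * Pr[x >= a_K].
def virtual_values(support, probs):
    F, acc = [], 0.0
    for p in probs:
        acc += p
        F.append(acc)
    phi = []
    for k, (a, p) in enumerate(zip(support, probs)):
        if k + 1 < len(support):
            phi.append(a - (1 - F[k]) * (support[k + 1] - a) / p)
        else:
            phi.append(a)  # top of the support: 1 - F = 0, so phi = a
    return phi

support = [1.0, 2.0, 4.0]   # illustrative single-item distribution
probs = [0.5, 0.3, 0.2]
phi = virtual_values(support, probs)
for K in range(len(support)):
    tail_prob = sum(probs[K:])
    virtual_surplus = sum(p * v for p, v in zip(probs[K:], phi[K:]))
    assert abs(virtual_surplus - support[K] * tail_prob) < 1e-9
```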
\subsection{Upper Bounds for ${\mathsc{Single}}$, ${\mathsc{Core}}$, and ${\mathsc{Tail}}$}
\label{sec:vhat-proofs}
We decompose the upper bound we had in the previous section into three components.
Recall that $r = {\mathsc{SRev}}(\widehat V)$.
By Lemma~\ref{lem:canonical-flow}, we know that
\begin{align*}
{\mathbb{R}}ev(\widehat V) & \le \sum_{t\in T} f(t) \pi(t)^\top {\Phi}(t) \\
& = \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) \left({\varphi}t(t_j | t_{-j}) \cdot \indic{t \in R_j} + t_j \cdot \indic{t \notin R_j}\right) \\
& \le \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) {\varphi}t(t_j | t_{-j}) \cdot \indic{t \in R_j} \tag{{\mathsc{Single}}} \\
& \quad + \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j \le 2r} \tag{{\mathsc{Core}}} \\
& \quad + \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j > 2r}. \tag{{\mathsc{Tail}}}
\end{align*}
We now restate the lemmas that upper bound each component.
{\noindent \bf Lemma~\ref{lem:vhat-single}.~}
{\em
${\mathsc{Single}} \le 2 {\mathsc{BRev}}(\widehat V) + {\mathsc{SRev}}(\widehat V)$.
}
\begin{proof}
We first recall the expression of ${\mathsc{Single}}$:
\[
{\mathsc{Single}} = \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) {\varphi}t(t_j | t_{-j}) \cdot \indic{t \in R_j}.
\]
Because $\pi_j(t)\cdot \indic{t \in R_j}$ is between $0$ and $1$, we can upper bound ${\mathsc{Single}}$ by setting it to $1$ whenever the ironed virtual value is nonnegative, and to $0$ otherwise.
\begin{align*}
{\mathsc{Single}}
& \le \sum_{t \in T} f(t) \sum_{j \in [m]} {\varphi}t(t_j | t_{-j}) \cdot \indic{{\varphi}t(t_j | t_{-j}) \ge 0} \\
& = \sum_{j \in [m]} \sum_{t \in T} f(t_{-j}) f(t_j | t_{-j}) {\varphi}t(t_j | t_{-j}) \cdot \indic{{\varphi}t(t_j | t_{-j}) \ge 0} \\
& = \sum_{j \in [m]} \sum_{t_{-j} \in T_{-j}} f(t_{-j}) \cdot {\mathbb{R}}ev(t_j | t_{-j}).
\end{align*}
Intuitively, we need to show that knowing $t_{-j}$ does not help us sell item $j$ by too much.
Observe that $\widehat V$ is a correlated distribution obtained from capping an independent distribution.
The proof of the lemma crucially relies on the following property of $\widehat V$: revealing $t_{-j}$ gives the same amount of information as revealing only the sum of $t_{-j}$.
For a fixed $j$, the revenue ${\mathbb{R}}ev(t_j|t_{-j})$ can be captured by the (non-disjoint) union of the following two cases:
\begin{enumerate}
\item[(1)] $\normone{t_{-j}} < {c}/2$ and $t_j < {c}/2$ both hold. Conditioned on this event, the value $t_j$ is independent of $t_{-j}$, so knowing $t_{-j}$ provides no additional information.
\item[(2)] $\normone{t} \ge {c}/2$. In this case, the buyer's value for the grand bundle is at least ${c}/2$, so we can charge this to ${\mathsc{BRev}}(\widehat V)$.
\end{enumerate}
It is worth noting that $t_j$ and $t_{-j}$ are not independent when $\normone{t} < {c}/2$, so we have to condition on stricter events for them to become independent.
Formally, if $\normone{t_{-j}} < {c}/2$,
\[
f(t_j|t_{-j}, t_j < {c}/2) = f(t_j|t_j < {c}/2), \quad \forall j \in [m], t \in T.
\]
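The conditional-independence identity above can be checked by exact enumeration on a toy capped distribution (our own sketch, not the paper's code; the item marginal, the cap, and the helper `cond_dist` are illustrative):

```python
# Enumeration check (our own sketch): for V-hat = V | (||v||_1 <= c) with V
# independent, if ||t_{-j}||_1 < c/2, then conditioning on t_j < c/2 makes the
# law of t_j independent of the realization of t_{-j}.
from itertools import product
from fractions import Fraction

values = [0, 1, 2, 3]
weight1 = {v: v + 1 for v in values}   # nonuniform marginal for each item of V
c, m = 4, 3
types = [t for t in product(values, repeat=m) if sum(t) <= c]  # support of V-hat

def weight(t):
    w = 1
    for v in t:
        w *= weight1[v]
    return w

def cond_dist(j, t_minus):
    """f(t_j | t_{-j} = t_minus, t_j < c/2) under V-hat, as exact fractions."""
    rows = [t for t in types if t[:j] + t[j + 1:] == t_minus and t[j] < c / 2]
    total = sum(weight(t) for t in rows)
    return {v: Fraction(sum(weight(t) for t in rows if t[j] == v), total)
            for v in values if v < c / 2}

j = 0
dists = [cond_dist(j, tm) for tm in product(values, repeat=m - 1) if sum(tm) < c / 2]
assert all(d == dists[0] for d in dists)  # same conditional law for every such t_{-j}
```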
We are now ready to bound ${\mathsc{Single}}$.
For single-parameter distributions, the optimal auction simply sets a reserve price.
Let $p^*_j$ be the optimal reserve price for the distribution $f(t_j|t_j < {c}/2)$, and let $p_j(t_{-j})$ be the optimal reserve price for the distribution $f(t_j|t_{-j})$.
\begin{align*}
{\mathsc{Single}} & \le \sum_{j \in [m]} \sum_{t_{-j} \in T_{-j}} f(t_{-j}) \cdot {\mathbb{R}}ev(t_j | t_{-j}) \\
& = \sum_{j \in [m]} \sum_{t \in T} f(t_{-j}) f(t_j|t_{-j}) p_j(t_{-j}) \cdot \indic{t_j \ge p_j(t_{-j})} \\
& \le \sum_{j} \sum_{\normone{t} \ge \frac{{c}}{2}} f(t) t_j + \sum_{j} \sum_{t_j < \frac{{c}}{2}, \normone{t_{-j}} < \frac{{c}}{2}} f(t) p_j(t_{-j}) \cdot \indic{t_j \ge p_j(t_{-j})} \\
& \le \sum_{\normone{t} \ge \frac{{c}}{2}} f(t) \sum_{j} t_j + \sum_{j} \sum_{t_j < \frac{{c}}{2}, \normone{t_{-j}} < \frac{{c}}{2}} f(t) p^*_j \cdot \indic{t_j \ge p^*_j} \\
& \le {c} \cdot \Prob{t \sim \widehat V}{\normone{t} \ge \frac{{c}}{2}} + \sum_{t\in T} f(t) \sum_{j\in[m]} p^*_j \cdot \indic{t_j \ge p^*_j} \\
& \le 2 {\mathsc{BRev}}(\widehat V) + {\mathsc{SRev}}(\widehat V).
\end{align*}
The last step uses the facts that (1) we can price the grand bundle at price ${c}/2$ and therefore ${\mathsc{BRev}}(\widehat V) \ge ({c}/2) \cdot \Prob{}{\normone{t} \ge {c}/2}$; and (2) the second term is exactly the revenue we can obtain if we post each item $j$ separately at price $p^*_j$. \qed
\end{proof}
Recall that $r = {\mathsc{SRev}}(\widehat V)$. We continue to upper bound ${\mathsc{Tail}}$ and ${\mathsc{Core}}$.
{\noindent \bf Lemma~\ref{lem:vhat-tail}.~}
{\em
${\mathsc{Tail}} \le {\mathsc{SRev}}(\widehat V)$.
}
\begin{proof}
Recall that $R_j \subseteq T$ is the subset of buyer types whose favorite item is $j$.
\begin{align*}
{\mathsc{Tail}} & = \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j > 2r} \\
& \le \sum_{t \in T} f(t) \sum_{j \in [m]} t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j > 2r} \\
& = \sum_{j \in [m]} \sum_{t_j > 2r} f(t_j) \sum_{t_{-j} \in T_{-j}} f(t_{-j}|t_j) t_j \cdot \indic{t \notin R_j}.
\end{align*}
We first show that for any fixed $t^*_j > 2r$, knowing the value for item $j$ to be $t^*_j$ will not increase the probability of $t \notin R_j$ by more than a factor of $2$.
Intuitively, it is because $t_j > 2r$ is large enough.
This is another place where we use the special property of the $\widehat V$ we designed.
Recall that $\widehat V_j$ is the marginal distribution of $\widehat V$ on the $j$-th coordinate, and $f(t_j)$ is the probability mass function of $\widehat V_j$.
The definition $r = {\mathsc{SRev}}(\widehat V)$ implies that $\Prob{t_j \sim \widehat V_j}{t_j < 2r} \ge 1/2$, otherwise selling only item $j$ at price $2r$ gives revenue more than $r$.
\begin{align*}
& \sum_{t_{-j} \in T_{-j}} f(t_{-j}|t^*_j) \cdot \indic{(t^*_j, t_{-j}) \notin R_j} \\
& \le 2 \Prob{t_j \sim \widehat V_j}{t_j < 2r} \cdot \sum_{t_{-j} \in T_{-j}} f(t_{-j}|t^*_j) \cdot \indic{(t^*_j, t_{-j}) \notin R_j} \\
& = 2 \sum_{t_j < 2r} f(t_j) \sum_{t_{-j} \in T_{-j}} f(t_{-j}|t^*_j) \cdot \indic{(t^*_j, t_{-j}) \notin R_j} \\
& \le 2 \sum_{t_j < 2r} f(t_j) \sum_{t_{-j} \in T_{-j}} f(t_{-j}|t_j) \cdot \indic{(t^*_j, t_{-j}) \notin R_j} \\
& \le 2 \sum_{t_{-j} \in T_{-j}} f(t_{-j}) \cdot \indic{(t^*_j, t_{-j}) \notin R_j}.
\end{align*}
The last step uses the fact that $f(t_{-j}) = \sum_{t_j} f(t_j)f(t_{-j}|t_j)$.
The second-to-last step replaces the conditional distribution $f(t_{-j} | t^*_j)$ by $f(t_{-j} | t_j)$: the event in the indicator is monotone in $t_{-j}$, and for $t_j < 2r < t^*_j$ the conditional distribution $\widehat V_{-j | \widehat v_j = t_j}$ stochastically dominates $\widehat V_{-j | \widehat v_j = t^*_j}$, since a smaller value of item $j$ leaves more room under the cap.
This stochastic dominance is captured in Lemma~\ref{CLM:C1-PREC-C2} and will be proved in Appendix~\ref{apx:hatv-preceq-v}.
{\noindent \bf Lemma~\ref{CLM:C1-PREC-C2}.~}
{\em
Let $\widehat V = V_{|(\normone{v} \le {c}_1)}$ and $\widetilde V = V_{|(\normone{v} \le {c}_2)}$ for any $c_1 \leq c_2$. We have $\widehat V \preceq \widetilde V$.
}
Finally, we relate our upper bound on ${\mathsc{Tail}}$ to $r$.
\begin{align*}
{\mathsc{Tail}} & \le \sum_{j \in [m]} \sum_{t_j > 2r} f(t_j) \sum_{t_{-j} \in T_{-j}} f(t_{-j}|t_j) t_j \cdot \indic{t \notin R_j} \\
& \le 2 \sum_{j \in [m]} \sum_{t_j > 2r} f(t_j) \sum_{t_{-j} \in T_{-j}} f(t_{-j}) t_j \cdot \indic{t \notin R_j} \\
& \le 2 \sum_{j \in [m]} \sum_{t_j > 2r} f(t_j) \cdot r
\le {\mathsc{SRev}}(\widehat V).
\end{align*}
The second-to-last step holds because, for any fixed value $t_j > 2r$ of item $j$, selling every item separately at the same price $t_j$ gives revenue at least $\sum_{t_{-j}} f(t_{-j})\, t_j \cdot \indic{t \notin R_j}$, which is therefore at most $r$:
if $t \notin R_j$, then some item $k \neq j$ satisfies $t_k \ge t_j$, and the buyer purchases at least one such item.
The last step holds because $\sum_{j \in [m]} \sum_{t_j > 2r} f(t_j) \cdot 2r$ is at most the revenue of selling each item at price $2r$, which in turn is at most ${\mathsc{SRev}}(\widehat V)$. \qed
\end{proof}
{\noindent \bf Lemma~\ref{lem:vhat-core}.~}
{\em
${\mathsc{Core}} \le 2 {\mathsc{BRev}}(\widehat V) + 3 {\mathsc{SRev}}(\widehat V)$.
}
\begin{proof}
Recall $r = {\mathsc{SRev}}(\widehat V)$.
If we sell the grand bundle at price ${\mathsc{Core}} - 3r$, we show that the buyer will purchase it with probability at least $5/9$.
This implies that ${\mathsc{BRev}}(\widehat V) \ge \frac{5}{9}({\mathsc{Core}} - 3r)$, or equivalently
$
{\mathsc{Core}} \le \frac{9}{5} {\mathsc{BRev}}(\widehat V) + 3 r \le 2 {\mathsc{BRev}}(\widehat V) + 3 {\mathsc{SRev}}(\widehat V).
$
For simplicity of presentation, we define a new random variable ${s} \in {\mathbb{R}}^m$ as follows: we first draw a sample $\widehat v$ from $\widehat V$, and set ${s}_j = \min(\widehat v_j, 2r)$ for all $j \in [m]$.
\begin{align*}
{\mathsc{Core}} & = \sum_{t \in T} f(t) \sum_{j \in [m]} \pi_j(t) t_j \cdot \indic{t \notin R_j} \cdot \indic{t_j \le 2r} \\
& \le \sum_{t \in T} f(t) \sum_{j \in [m]} t_j \cdot \indic{t_j \le 2r} \\
& \le \sum_{t \in T} f(t) \sum_{j \in [m]} \left(t_j \cdot \indic{t_j \le 2r} + 2r \cdot \indic{t_j > 2r} \right)
= \expect{}{\normone{{s}}}.
\end{align*}
Since we price the grand bundle at ${\mathsc{Core}} - 3r \le \expect{}{\normone{{s}}} - 3r$, and the buyer's value for the grand bundle is $\normone{t}$, it is sufficient to prove
\[
\Prob{t \sim \widehat V}{\normone{t} \ge \expect{}{\normone{{s}}} - 3r} \ge \frac{5}{9}.
\]
Moreover, because $\normone{t} \ge \normone{{s}}$ pointwise (so $\normone{t}$ stochastically dominates $\normone{{s}}$),
it is sufficient to prove
\[
\Prob{}{\normone{{s}} \ge \expect{}{\normone{{s}}} - 3r} \ge \frac{5}{9}.
\]
Intuitively, the condition says that the $\ell_1$-norm of the random variable ${s}$ concentrates around its expectation.
This is the crucial reason why we design our $\widehat V$ to be negatively correlated.
In Lemma~\ref{clm:vhat-varc} below, we prove that $\operatorname{Var}[\normone{{s}}] \le 4 r^2$.
Assuming this for now, we conclude the proof by applying Chebyshev's inequality:
\[
\Prob{}{\normone{{s}} < \expect{}{\normone{{s}}} - 3r} \le \frac{\operatorname{Var}[\normone{{s}}]}{9r^2} \leq \frac{4}{9}. \tag*{\qed}
\]
\end{proof}
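The one-sided use of Chebyshev's inequality in the last step can be sanity-checked on a small discrete example (our own sketch; the distribution and the value of $r$ are purely illustrative and only test the inequality's form, not the $4/9$ bound):

```python
# Sanity check (our own sketch) of the one-sided Chebyshev step used above:
#   Pr[X < E[X] - 3r] <= Var(X) / (9 r^2).
support = [0.0, 2.0, 4.0, 6.0]   # illustrative values of ||s||_1
probs = [0.1, 0.4, 0.4, 0.1]
r = 0.5                           # illustrative stand-in for SRev(V-hat)
mean = sum(p * x for x, p in zip(support, probs))               # = 3.0
var = sum(p * (x - mean) ** 2 for x, p in zip(support, probs))  # = 2.6
lhs = sum(p for x, p in zip(support, probs) if x < mean - 3 * r)
assert lhs <= var / (9 * r * r) + 1e-12
```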
\begin{lemma}
\label{clm:vhat-varc}
Let ${s} \in {\mathbb{R}}^m$ be the random variable with ${s}_j = \min(\widehat v_j, 2r)$ for all $j$ and $\widehat v \sim \widehat V$.
We have $\operatorname{Var}[\normone{{s}}] \le 4 r^2$.
\end{lemma}
\begin{proof}
We first show that for any $i \neq j$, ${s}_i$ and ${s}_j$ are negatively correlated, i.e.,
\[
\operatorname{Cov}({s}_i, {s}_j) = \mathbb{E}[{s}_i {s}_j] - \expect{}{{s}_i} \cdot \mathbb{E}[{s}_j] \le 0.
\]
Observe that $\widehat V_{j|\widehat v_i = x}$ stochastically dominates $\widehat V_{j|\widehat v_i = y}$ for any two possible values $x < y$ of $\widehat v_i$.
This is because, for any $a \in {\mathbb{R}}$,
\begin{align*}
& \Prob{\widehat v \sim \widehat V}{\widehat v_j \le a \ | \ \widehat v_i = x} \\
& = \Prob{v \sim V}{v_j \le a \ \left| \ v_i = x, \normone{v} \le {c}\right.} \\
& = \Prob{v_j \sim V_j}{v_j \le a \ \left| \ v_j \le {c} - x - \sum_{k\neq i,j} v_k\right.} \\
& \le \Prob{v_j \sim V_j}{v_j \le a \ \left| \ v_j \le {c} - y - \sum_{k\neq i,j} v_k\right.}
= \Prob{\widehat v \sim \widehat V}{\widehat v_j \le a \ \left| \ \widehat v_i = y\right.}.
\end{align*}
Because ${s}_j = \min(\widehat v_j, 2r)$, we know that ${s}_{j | \widehat v_i = x} \succeq {s}_{j | \widehat v_i = y}$ as well.
It follows that $\mathbb{E}[{s}_j | \widehat v_i = x] \ge \mathbb{E}[{s}_j | \widehat v_i = y]$.
In addition, since $\mathbb{E}[{s}_j | {s}_i = x] = \mathbb{E}[{s}_j | \widehat v_i = x]$ when $x < 2r$, and $\mathbb{E}[{s}_j | {s}_i = 2r] = \mathbb{E}[{s}_j | \widehat v_i \ge 2r]$, we can deduce that $\mathbb{E}[{s}_j|{s}_i]$ weakly decreases as ${s}_i$ increases. Therefore, we get
\[
\mathbb{E}[{s}_i {s}_j] = \sum_{{s}_i} \Prob{}{{s}_i} \left({s}_i \cdot \mathbb{E}[{s}_j | {s}_i] \right)
\le \sum_{{s}_i} \Prob{}{{s}_i} \left({s}_i \cdot \mathbb{E}[{s}_j]\right) = \expect{}{{s}_i} \cdot \mathbb{E}[{s}_j],
\]
by Chebyshev's sum inequality (a consequence of the rearrangement inequality), applied to the increasing sequence ${s}_i$ and the non-increasing sequence $\mathbb{E}[{s}_j \mid {s}_i]$.
Given the negative correlations between the ${s}_i$'s, we can upper bound the variance of their sum.
\[
\operatorname{Var}\left(\sum_{i \in [m]} {s}_i\right) = \sum_{i \in [m]} \operatorname{Var}({s}_i) + 2 \sum_{1 \le i < j \le m} \operatorname{Cov}({s}_i, {s}_j) \le \sum_{i \in [m]} \operatorname{Var}({s}_i).
\]
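The negative-correlation step can be verified by exact enumeration on a toy instance of a capped independent distribution (our own sketch; the item marginal, the cap $c$, and the truncation level $2r$ are illustrative):

```python
# Enumeration check (our own sketch): capping an independent distribution at
# ||v||_1 <= c and truncating each coordinate at 2r yields negatively
# correlated coordinates, i.e., Cov(s_i, s_j) <= 0.
from itertools import product

marginal = [(0, 0.4), (1, 0.35), (3, 0.25)]  # (value, prob) of each item of V
c, two_r = 4, 2
pairs = [((v1, v2), p1 * p2)
         for (v1, p1), (v2, p2) in product(marginal, repeat=2)
         if v1 + v2 <= c]                    # support of V-hat (2 items)
Z = sum(p for _, p in pairs)                 # normalization of V-hat

def E(f):
    """Expectation of f under V-hat."""
    return sum(p * f(v) for v, p in pairs) / Z

def s(v, j):
    return min(v[j], two_r)

cov = E(lambda v: s(v, 0) * s(v, 1)) - E(lambda v: s(v, 0)) * E(lambda v: s(v, 1))
assert cov <= 0
```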
It remains to show $\sum_{i} \operatorname{Var}({s}_i) \le 4 r^2$.
This part is identical to the analysis in earlier works~\cite{LiY13,BabaioffILW14,CaiDW16}, but we include it for completeness.
Let $r_j \in {\mathbb{R}}$ denote the maximum revenue one can extract by selling item $j$ alone.
Note that ${\mathsc{SRev}}(\widehat V) = r = \sum_{j} r_j$, so it is sufficient to show $\operatorname{Var}({s}_j) \le 4 r r_j$ for all $j$.
Fix some $j \in [m]$.
We use $x = {s}_j$ as an alias for the random variable ${s}_j$.
We know that
\begin{enumerate}
\item[(1)] $x$ is at most $2r$, and
\item[(2)] the revenue of $x$ is at most $r_j$: $\left(a \cdot \Prob{}{x \ge a}\right) \le r_j$ for any $a \in {\mathbb{R}}$.
\end{enumerate}
Combining these two facts, together with $\operatorname{Var}(x) \le \mathbb{E}[x^2]$, gives the required bound on the variance of $x$.
Let $0 < a_1 < \ldots < a_\ell \le 2r$ be the positive values in the support of $x$, and let $a_0 = 0$.
\begin{align*}
\mathbb{E}[x^2] & = \sum_{k=1}^\ell \Prob{}{x = a_k} \cdot a_k^2 \\
& = \sum_{k=1}^\ell (a_k^2 - a_{k-1}^2) \cdot \Prob{}{x \ge a_k} \\
& < \sum_{k=1}^\ell 2 (a_k - a_{k-1}) \left(a_k \cdot \Prob{}{x \ge a_k}\right) \\
& \le r_j \sum_{k=1}^\ell 2 (a_k - a_{k-1}) \\
& = 2 r_j a_\ell \le 4 r r_j. \tag*{\qed}
\end{align*}
\end{proof}
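The chain of inequalities bounding $\mathbb{E}[x^2]$ can be checked numerically (our own sketch; the distribution is illustrative, with $r$ standing in for ${\mathsc{SRev}}(\widehat V)$):

```python
# Numeric check (our own sketch): if x is supported on (0, 2r] and its
# single-price revenue max_a a * Pr[x >= a] is r_j, then
#   Var(x) <= E[x^2] <= 2 * r_j * a_l <= 4 * r * r_j.
support = [0.5, 1.0, 1.5, 2.0]   # illustrative values in (0, 2r] with r = 1
probs = [0.4, 0.3, 0.2, 0.1]
r = 1.0
r_j = max(a * sum(p for b, p in zip(support, probs) if b >= a) for a in support)
Ex = sum(p * a for a, p in zip(support, probs))
Ex2 = sum(p * a * a for a, p in zip(support, probs))
var = Ex2 - Ex ** 2
assert var <= Ex2
assert Ex2 <= 4 * r * r_j + 1e-12
```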
\section{Private Budget}
\label{sec:private}
In this section, we consider the case where the budget $b$ is no longer fixed but instead drawn from a distribution $B$.
One natural model is that the buyer's budget $b$ is first drawn from $B$, and then depending on the value of $b$, the buyer's valuations are drawn independently for each item.
We show that in this case, the problem is at least as hard as finding (approximately) optimal mechanisms for correlated valuations in the budget-free setting.
Consider an instance in which every possible budget is larger than $\normone{v}$ for all $v$ in the support of $V$, so the budget constraint never binds.
However, the budget can still be used as a signal (or a correlation device) to produce correlated valuations.
It is known that for correlated distributions, the better of selling separately and bundling together~\cite{HartN12}, or even the best partition-based mechanism~\cite{BabaioffILW14}, does not offer a constant approximation.
This negative result motivates us to study the private budget setting when the budget distribution $B$ is independent of the valuation distributions $V$.
\subsection{Monotone-Hazard-Rate Budgets}
We focus on the case where the budget is independent of valuations, and it is drawn from a continuous\footnote{If the distribution is a discrete MHR distribution,
similar results still hold. For discrete distributions we have $\Pr_{b \sim B}[b \geq \lfloor b^* \rfloor] \geq e^{-1}$ instead of $\Pr_{b \sim B}[b \geq b^*] \geq e^{-1}$.}
monotone-hazard-rate (MHR) distribution. Let $g(\cdot)$ and $G(\cdot)$ be the probability density function and cumulative distribution function of $B$. The MHR condition says $\frac{g(b)}{1 - G(b)}$ is non-decreasing in $b$.
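For a concrete MHR example (our own check, not from the paper): the exponential distribution has a constant hazard rate, and the probability of exceeding its mean is exactly $e^{-1}$, matching the bound used below.

```python
# Quick check (our own): for an exponential budget distribution (MHR with
# constant hazard rate lam), Pr[b >= E[b]] = e^{-1} exactly.
import math

lam = 2.0                                   # illustrative rate; mean b* = 1/lam
b_star = 1.0 / lam
prob_above_mean = math.exp(-lam * b_star)   # 1 - G(b*) for G(b) = 1 - e^{-lam*b}
assert abs(prob_above_mean - math.exp(-1)) < 1e-12
```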
\begin{lemma}
\label{lem:private_MHR}
Let $b^*$ be the expectation of an MHR distribution $B$.
Let $M^*$ be the optimal mechanism for a buyer with a public budget $b^*$.
Then, in expectation, $M^*$ extracts at least a $\frac{1}{2e}$-fraction of the expected optimal revenue when the buyer has a private budget drawn from $B$.
\end{lemma}
\begin{proof}
Let $R(b, V)$ denote the expected revenue of $M^*$ when the buyer has a public budget $b$ and valuations drawn from $V$.
Let $R(B, V) = \expect{b \sim B}{R(b, V)}$ denote the expected revenue of $M^*$ when the buyer's budget is drawn from $B$.
\begin{align*}
R(B, V) &= \int_b g(b) R(b,V) \mathrm{d} b
\ge \int_{b \geq b^*} g(b) R(b, V) \mathrm{d} b \\
&= \int_{b \geq b^*} g(b) R(b^*, V) \mathrm{d} b \ge e^{-1} \cdot R(b^*, V).
\end{align*}
The second-to-last step uses $R(b, V) = R(b^*, V)$ when $b \geq b^*$: $M^*$ provides a menu of allocation/payment pairs for the buyer to choose from, and
a buyer with budget $b \ge b^*$ can afford every option on the menu, so he chooses the same option as if he had budget $b^*$.
The last inequality comes from the fact that for any MHR distribution $B$, $\Pr_{b \sim B}[b \geq b^*] \geq e^{-1}$ (see, e.g.,~\cite{barlow1965tables}).
Let ${\mathbb{R}}ev^B(V)$ denote the optimal revenue we can extract when the buyer has private budgets drawn from $B$.
\begin{align*}
{\mathbb{R}}ev^B(V) &\leq
\int_{b < b^*} g(b) {\mathbb{R}}ev^{b}(V) \mathrm{d} b + \int_{b \geq b^*} g(b) {\mathbb{R}}ev^{b}(V) \mathrm{d} b\\
&\leq \int_{b < b^*} g(b) {\mathbb{R}}ev^{b^*}(V) \mathrm{d} b + \int_{b \geq b^*} g(b) \cdot \frac{b}{b^*} \cdot {\mathbb{R}}ev^{b^*}(V) \mathrm{d} b\\
&\leq {\mathbb{R}}ev^{b^*}(V) + \frac{\int_{b} g(b) b \mathrm{d} b}{b^*} \cdot {\mathbb{R}}ev^{b^*}(V)
= 2{\mathbb{R}}ev^{b^*}(V).
\end{align*}
The first line holds because the seller can only do better if she knows the buyer's budget $b$.
The second line uses the facts that ${\mathbb{R}}ev^b(V) \le {\mathbb{R}}ev^{b^*}(V)$ when $b < b^*$ and ${\mathbb{R}}ev^b(V) \le \frac{b}{b^*} {\mathbb{R}}ev^{b^*}(V)$ when $b > b^*$.
The third line holds because $b^* = \expect{}{b}$.
We have ${\mathbb{R}}ev^b(V) \le {\mathbb{R}}ev^{b^*}(V)$ when $b < b^*$ because a buyer with budget $b^*$ can afford all options from the menu that achieves ${\mathbb{R}}ev^b(V)$.
When $b > b^*$, consider the menu that achieves ${\mathbb{R}}ev^b(V)$ and cap all prices at $b^*$.
A buyer with budget $b > b^*$ either chooses the same option as if he had budget $b^*$, or chooses a different option whose price must be $b^*$, and therefore ${\mathbb{R}}ev^b(V) \le \frac{b}{b^*} {\mathbb{R}}ev^{b^*}(V)$.
By definition $R(b^*, V) = {\mathbb{R}}ev^{b^*}(V)$. Therefore, $R(B, V) \geq \frac{1}{2e} {\mathbb{R}}ev^{B}(V)$.
\qed
\end{proof}
{\noindent \bf Theorem~\ref{thm:private-mhr}.~}
{\em
When the budget distribution $B$ is MHR, the better of pricing items separately and bundling them together achieves a constant fraction of the optimal revenue.
}
\begin{proof}
By pretending the budget is $b^*$,
\begin{align*}
{\mathsc{SRev}}^B(V) & \geq \int_{b \geq b^*} g(b) {\mathsc{SRev}}^{b^*}(V) \mathrm{d} b
\geq \frac{1}{e} {\mathsc{SRev}}^{b^*}(V).
\end{align*}
Similarly, ${\mathsc{BRev}}^B(V) \geq \frac{1}{e} {\mathsc{BRev}}^{b^*}(V)$.
Therefore, by Theorem~\ref{thm:main-informal} and Lemma~\ref{lem:private_MHR}, ${\mathsc{SRev}}^B(V) + {\mathsc{BRev}}^B(V) = \Omega({\mathsc{SRev}}^{b^*}(V) + {\mathsc{BRev}}^{b^*}(V)) = \Omega({\mathbb{R}}ev^{b^*}(V)) = \Omega({\mathbb{R}}ev^B(V))$.
\qed
\end{proof}
\section{Conclusion and Future Directions}
\label{sec:conclusion}
In this paper, we investigated the effectiveness of simple mechanisms in the presence of budgets, and showed that for an additive buyer with independent valuations and a public budget, either selling separately or selling the grand bundle gives a constant approximation to the optimal revenue.
The area of designing simple and approximately optimal auctions with budget constraints is still largely unexplored.
Our work leaves many natural follow-up questions.
We only considered selling to a single buyer.
An immediate open question is whether our results can be extended to multiple bidders.
A generalization to multiple bidders is known in the budget-free case~\cite{Yao15,CaiDW16}.
{\noindent \bf Question 1.~}
{\em
Is there a simple mechanism that is approximately optimal for multiple additive buyers, when each buyer has the same public budget $b$?
}
For private budgets where the budget is independent of the valuations, we showed that if the budget distribution has a monotone hazard rate, then simple mechanisms extract a constant fraction of the optimal revenue.
The general case with arbitrary budget distributions appears to be nontrivial and is an interesting avenue for future work.
{\noindent \bf Question 2.~}
{\em
Is there a simple mechanism that is approximately optimal for an additive buyer with private budgets, when the budget distribution is independent of the valuations?
}
\paragraph{\bf Acknowledgements.}
Yu Cheng is supported by NSF grants CCF-1527084, CCF-1535972, CCF-1637397, CCF-1704656, IIS-1447554, and NSF CAREER Award CCF-1750140. Kamesh Munagala is supported by NSF grants CCF-1408784, CCF-1637397, and IIS-1447554; and by an Adobe Data Science Research Award. Kangning Wang is supported by NSF grants CCF-1408784 and CCF-1637397.
\appendix
\section{Proof of the Concentration Lemma in Section~\ref{sec:revvp-revbvp}}
\label{apx:tail-bound}
In this section, we prove Lemma~\ref{lem:sum-vp-tail}.
We first restate it for convenience.
{\noindent \bf Lemma~\ref{lem:sum-vp-tail}.~}
{\em
If $V'$ has independent coordinates and $\norminf{v'} \le c$ for all $v' \sim V'$, then
\[
\Prob{v' \sim V'}{\normone{v'} \ge x + y + c} \le \Prob{v' \sim V'}{\normone{v'} \ge x} \cdot \Prob{v' \sim V'}{\normone{v'} \ge y} \quad \text{for all } x, y > 0.
\]
In particular, if $\Prob{v' \sim V'}{\normone{v'} \ge c} \le q$, then for every integer $k \ge 0$,
\[
\Prob{v' \sim V'}{\normone{v'} \ge (2k+1) c} \le q^k \Prob{v' \sim V'}{\normone{v'} \ge c}.
\]
}
\begin{proof}
Consider the probability of $\normone{v'} \ge x + y + c$ conditioned on $\normone{v'} \ge x$.
We will show this probability is at most the probability of $\normone{v'} \ge y$.
For every $v'$ with $\normone{v'} \ge x$, there is a unique $j \in [m]$ such that $\sum_{i=1}^{j-1} v'_i < x$ but $\sum_{i=1}^j v'_i \ge x$.
Since $v'_j$ is at most $c$, for the total sum to reach $x+y+c$, the remaining sum $\sum_{i=j+1}^m v'_i$ must be at least $y$.
By the independence of the coordinates of $V'$, the probability of this event is the same conditioned on any values of $(v'_1, \ldots, v'_j)$.
Formally,
\begin{align*}
\Prob{}{\normone{v'} \ge x + y + c} & = \sum_{j=1}^m \Prob{}{\normone{v'} \ge x + y + c \;\wedge\; \sum_{i=1}^{j-1} v'_i < x \;\wedge\; \sum_{i=1}^j v'_i \ge x} \\
& \le \sum_{j=1}^m \Prob{}{\sum_{i=j+1}^m v'_i \ge y \;\wedge\; \sum_{i=1}^{j-1} v'_i < x \;\wedge\; \sum_{i=1}^j v'_i \ge x} \\
& = \sum_{j=1}^m \Prob{}{\sum_{i=j+1}^m v'_i \ge y} \cdot \Prob{}{\sum_{i=1}^{j-1} v'_i < x \;\wedge\; \sum_{i=1}^j v'_i \ge x} \\
& \le \Prob{}{\normone{v'} \ge y} \cdot \Prob{}{\normone{v'} \ge x}.
\end{align*}
The second statement follows from the first by induction on $k$.
The inductive step applies the first statement with $x = c$ and $y = (2k-1) c$.
\qed
\end{proof}
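As a purely illustrative sanity check (not part of the proof), the first inequality of Lemma~\ref{lem:sum-vp-tail} can be verified exactly on a small product distribution by enumerating its support; the marginal used below is a made-up example.

```python
# Exact check of the first inequality of the tail lemma on a tiny
# hypothetical product distribution (illustrative only).
from itertools import product

# Each of m = 3 independent items takes value 0, 1, or 2, so c = 2 bounds
# the per-item value.
marginal = {0: 0.5, 1: 0.3, 2: 0.2}
m, c = 3, 2

def tail(t):
    """Pr[|v'|_1 >= t], computed by exact enumeration of the product support."""
    total = 0.0
    for vals in product(marginal, repeat=m):
        p = 1.0
        for v in vals:
            p *= marginal[v]
        if sum(vals) >= t:
            total += p
    return total

# Pr[|v'|_1 >= x + y + c] <= Pr[|v'|_1 >= x] * Pr[|v'|_1 >= y]
for x in (1, 2):
    for y in (1, 2):
        assert tail(x + y + c) <= tail(x) * tail(y) + 1e-12
```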
\section{Revenue Non-monotonicity for Separate Selling to a Budgeted Buyer}
\label{apx:srevb-not-monotone}
We provide an example where $V_1 \preceq V_2$ but ${\mathsc{SRev}}^B(V_1) > {\mathsc{SRev}}^B(V_2)$.
Intuitively, a budget-constrained buyer solves a $\mathsc{Knapsack}$ problem when deciding which items to purchase, and the total volume (i.e., payment) of the optimal $\mathsc{Knapsack}$ solution is not monotone in the item values.
Increasing the value of a cheap item might incentivize the buyer to purchase this item instead of a more expensive one, if the buyer does not have enough budget to buy both items.
Consider an auction with $3$ items.
$T_1$ and $T_2$ are two matrices defined as
\begin{align*}
T_1 = \begin{bmatrix}
2 & 0 & 0 \\
0 & 1 & 1 \\
2 & 1 & 0 \\
\end{bmatrix}, \quad
T_2 = \begin{bmatrix}
2 & 0 & 0 \\
0 & 1 & 1 \\
2 & 2 & 0 \\
\end{bmatrix}.
\end{align*}
The rows of $T_1$ are the support of $V_1$, and similarly, the rows of $T_2$ are the support of $V_2$.
Each row is associated with a probability of $\frac{1}{3}$.
Assume the budget $b = 2$.
One of the optimal mechanisms that obtains ${\mathsc{SRev}}^B(V_1) = 2$ prices the items at $(2,1,1)$.
Buyers of types $1$ and $3$ buy the first item, and a buyer of type $2$ buys the last two items.
This is optimal because ${\mathsc{SRev}}^B$ cannot exceed the budget.
However, ${\mathsc{SRev}}^B(V_2) < 2$. Indeed, ${\mathsc{SRev}}^B(V_2) \leq 2$ because $b = 2$, so achieving ${\mathsc{SRev}}^B(V_2) = 2$ would require extracting revenue $2$ from every buyer type. For the first two types, this forces pricing the first item at $2$ and each of the last two items at $1$. Under this pricing, however, the last buyer type $(2, 2, 0)$ buys only the second item, yielding revenue $1$.
This shows ${\mathsc{SRev}}^B(V_2) < 2 = {\mathsc{SRev}}^B(V_1)$.
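The example can be re-derived mechanically. The sketch below (our illustration) models a budgeted additive buyer who picks an affordable bundle maximizing utility, breaking ties toward the higher payment (a tie-breaking assumption we make here), and recomputes the expected revenue of the pricing $(2,1,1)$ under $V_1$ and $V_2$.

```python
# Expected revenue of item pricing for a budgeted additive buyer
# (brute force over bundles; tie-break toward higher payment is an assumption).
from itertools import combinations

def revenue(types, prices, budget):
    total = 0.0
    for v in types:
        best = (0.0, 0.0)  # (utility, payment); lexicographic tie-break
        items = range(len(prices))
        for r in range(len(prices) + 1):
            for S in combinations(items, r):
                pay = sum(prices[i] for i in S)
                if pay > budget:
                    continue  # bundle not affordable
                util = sum(v[i] for i in S) - pay
                if (util, pay) > best:
                    best = (util, pay)
        total += best[1]
    return total / len(types)

V1 = [(2, 0, 0), (0, 1, 1), (2, 1, 0)]   # rows of T_1, each with prob 1/3
V2 = [(2, 0, 0), (0, 1, 1), (2, 2, 0)]   # rows of T_2
prices, b = (2, 1, 1), 2

assert revenue(V1, prices, b) == 2.0     # every type pays the full budget
assert revenue(V2, prices, b) < 2.0      # type (2,2,0) pays only 1
```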
\input{vhat}
\section{Proofs of Lemmas~\ref{CLM:HATV-PREC-V} and~\ref{CLM:C1-PREC-C2}}
\label{apx:hatv-preceq-v}
In this section we prove Lemmas~\ref{CLM:HATV-PREC-V} and~\ref{CLM:C1-PREC-C2}.
\begin{lemma}
\label{CLM:C1-PREC-C2}
$V_{|(\normone{v} \le {c}_1)} \preceq V_{|(\normone{v} \le {c}_2)}$ for any $c_1 \leq c_2$.
\end{lemma}
\begin{proof}
Write $\widetilde V = V_{|(\normone{v} \le {c}_2)}$ and $\widehat V = V_{|(\normone{v} \le {c}_1)}$. We modify a sample $\widetilde v \sim \widetilde V$ dimension by dimension to reach a sample $\widehat v \sim \widehat V$. Define $n$ random functions $\sigma_1, \sigma_2, \ldots, \sigma_n$, where $\sigma_i(u)$ and $u$ differ only in $u_i$. In other words, $\sigma_i$ only modifies the $i$-th dimension of its input. Also define $\tau_i = \sigma_i \circ \sigma_{i - 1} \circ \cdots \circ \sigma_1$. Note that $\tau_n(v) \leq v$ as long as $\sigma_i(\tau_{i - 1}(v)) \leq \tau_{i - 1}(v)$ for all $i$.
In the first step, select a $\sigma_1$ such that
\begin{align*}
\Pr_{v \sim \widetilde V}[(\sigma_1(v))_1] = \Pr_{v \sim \widetilde V}\left[v_1 \left| \sum_{i = 1}^n v_i \leq {c}_1 \right.\right].
\end{align*}
Then we separately deal with $v_2$'s according to their $(\tau_1(v))_1$ value. Select a $\sigma_2$ such that
\begin{align*}
\Pr_{v \sim \widetilde V}[(\sigma_2(v))_2 \mid (\tau_1(v))_1] = \Pr_{v \sim \widetilde V}\left[v_2 \left|(\tau_1(v))_1,\, (\tau_1(v))_1 + \sum_{i = 2}^n v_i \leq {c}_1 \right.\right].
\end{align*}
We continue this procedure until we obtain $\sigma_n$. The $k$-th step sets
\begin{align*}
&\Pr_{v \sim \widetilde V}[(\sigma_k(v))_k | (\tau_1(v))_1, \ldots, (\tau_{k - 1}(v))_{k - 1}]\\
= &\Pr_{v \sim \widetilde V}\left[v_k \left|(\tau_1(v))_1, \ldots, (\tau_{k - 1}(v))_{k - 1},\, \sum_{i = 1}^{k - 1} (\tau_i(v))_i+ \sum_{i = k}^n v_i \leq {c}_1 \right.\right].
\end{align*}
Next we show for all $k$, there exists $\sigma_k$ satisfying $(\sigma_k(x))_k \leq x_k$ for all $x$. This is simply first-order stochastic dominance for one-dimensional distributions, and it is equivalent to the following condition:
\begin{align*}
&\Pr_{v \sim \widetilde V}[v_k \leq a \mid (\tau_1(v))_1, \ldots, (\tau_{k - 1}(v))_{k - 1}]\\
\leq &\Pr_{v \sim \widetilde V}\left[v_k \leq a \left|(\tau_1(v))_1, \ldots, (\tau_{k - 1}(v))_{k - 1},\, \sum_{i = 1}^{k - 1} (\tau_i(v))_i+ \sum_{i = k}^n v_i \leq {c}_1 \right.\right], \; \forall a \in {\mathbb{R}}.
\end{align*}
Writing $p_k = (v_{k + 1}, \ldots, v_n)$, $q_k = ((\tau_1(v))_1, \ldots, (\tau_{k - 1}(v))_{k - 1})$, and $r_k = (v_1, \ldots, v_{k - 1})$, the inequality above is true because
\begin{align*}
&\Pr_{v \sim \widetilde V}\left[v_k \leq a \mid q_k,\, v_k + \normone{p_k} + \normone{q_k} \leq {c}_1 \right]\\
= &\sum_{p_k, r_k} \Pr_{v \sim V}[p_k, r_k] \cdot
\Pr_{v \sim V}[v_k \leq a \mid q_k, p_k, r_k,\, v_k \leq \min({c}_1 - \normone{p_k} - \normone{q_k},\\ &{c}_2 - \normone{p_k} - \normone{r_k})]\\
\geq &\sum_{p_k, r_k}\Pr_{v \sim V}[p_k, r_k] \cdot
\Pr_{v \sim V}[v_k \leq a \mid q_k, p_k, r_k,\, v_k \leq \min({c}_2 - \normone{p_k} - \normone{q_k},\\ &{c}_2 - \normone{p_k} - \normone{r_k})]\\
= &\Pr_{v \sim \widetilde V}\left[v_k \leq a \mid q_k\right].
\end{align*}
Therefore $\tau_n$ is the random function that defines coordinate-wise stochastic dominance, as every $\sigma_i$ satisfies $\sigma_i(x) \leq x$ for all $x$.
\end{proof}
\begin{lemma}
\label{CLM:HATV-PREC-V}
$V_{|(\normone{v} \le {c})} \preceq V$ for any $c > 0$.
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{CLM:C1-PREC-C2} by taking $c_1 = c$ and $c_2 = \max_{v \in {\mathrm{supp}}(V)} \normone{v}$, since conditioning on $\normone{v} \le c_2$ then leaves $V$ unchanged.
\end{proof}
\end{document} | math | 80,552 |
\begin{document}
\title{Efficient Recovery of Low Rank Tensor via Triple Nonconvex Nonsmooth Rank Minimization}
\author{Quan~Yu
\thanks{Quan Yu is with School of Mathematics, Hunan University, Hunan 410082, P.R. China. (e-mail:[email protected]).}
}
\markboth{}{}
\maketitle
\begin{abstract}
A tensor nuclear norm (TNN) based method for solving the tensor recovery problem was recently proposed, and it has achieved state-of-the-art performance. However, it may fail to produce a highly accurate solution, since it tends to treat each frontal slice, and each rank component of each frontal slice, equally. In order to obtain a recovery with high accuracy, we propose a general and flexible rank relaxation function, named the double weighted nonconvex nonsmooth rank (DWNNR) relaxation function, for efficiently solving the third order tensor recovery problem. The DWNNR relaxation function is derived from the triple nonconvex nonsmooth rank (TNNR) relaxation function by setting the weight vector to the supergradient of some concave function, thereby selecting the weight vector adaptively.
To accelerate the proposed model, we develop a general inertial smoothing proximal gradient method. Furthermore, we prove that any limit point of the generated subsequence is a critical point. Combining the Kurdyka-{\L}ojasiewicz (KL) property with some milder assumptions, we further establish a global convergence guarantee.
Experiments on a practical tensor completion problem with both synthetic and real data demonstrate the efficiency and superior performance of the proposed algorithm.
\end{abstract}
\begin{IEEEkeywords}
Triple nonconvex nonsmooth rank (TNNR) minimization, low rank tensor recovery, tubal nuclear norm (TNN).
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\IEEEPARstart{L}{ow} rank tensor recovery has attracted considerable attention during the last decade. A low rank tensor can be recovered efficiently via tensor (matrix) factorization methods \cite{LAAW20,HDXY15,XWW+18,YZH20,YZ22a,ZLLZ18} or via tensor rank minimization methods \cite{HTZ+17,LPW19,QBNZ21,JNZH20,ZHZ+20a,QBNZ21b,ZBN20,ZZXC16a}. In this paper, we consider the tensor rank minimization problem.
However, unlike the matrix rank, there is no unique definition of the tensor rank. To exploit the low-rankness of tensor, various tensor decompositions and corresponding tensor ranks are proposed, such as CANDECOMP/PARAFAC (CP) decomposition \cite{CC70,Kie00,LBF12,HXZZ20}, Tucker decomposition \cite{Tuc66} and tensor singular value decomposition (t-SVD) \cite{ZLLZ18,ZEA14}.
Among them, the t-SVD based method decomposes a tensor into the product of two orthogonal tensors and one f-diagonal tensor (see Section \uppercase\expandafter{\romannumeral2} for details). With the help of the t-SVD framework, the tensor multi-rank and tubal rank were proposed by Kilmer et al. \cite{KBHH13}. Then, Semerci et al. \cite{SHKM14} developed a new tensor nuclear norm (TNN). As the t-SVD is based on an operator theoretic interpretation of third order tensors as linear operators on the space of oriented matrices, the tubal rank and multi-rank describe the inherent low rank structure of a tensor without the loss of information inherent in matricization \cite{KBHH13,KM11}. However, two types of prior knowledge are underutilized in the definition of TNN. First, in the tensor's Fourier transform domain, the low-frequency slices mostly carry the tensor's profile information, whilst the high-frequency slices primarily carry its detail and noise information. Second, within each frequency slice, the larger singular values mostly capture clean data, whereas the smaller singular values primarily capture noise.
In recent years, many nonconvex nonsmooth rank relaxation functions have been proposed in order to take advantage of the prior knowledge contained in tensor data. Examples include the partial sum of the tubal nuclear norm \cite{JHZD20}, the weighted tensor nuclear norm \cite{SYL18}, the tensor truncated nuclear norm \cite{XQLJ18}, the weighted t-Schatten-$ p $ norm \cite{LZT20}, and so on. However, these methods suffer from three disadvantages. First, they do not simultaneously exploit the two types of tensor prior information mentioned above; second, none of them establishes global convergence, due to the absence of convexity; finally, none of them designs an efficient acceleration algorithm.
Fortunately, inspired by \cite{ZGQ19}, we develop an iteratively double reweighted algorithmic scheme, which aims to solve the TNNR minimization problem and obtain a nearly unbiased solution. Motivated by the general acceleration technique in \cite{WL19}, we propose a general inertial smoothing proximal gradient method for the TNNR minimization problem with both local and global convergence guarantees. We highlight the main contributions of this paper as follows:
\begin{itemize}
\item[(1)] As a surrogate for the tensor rank, we offer a broad and flexible double reweighted relaxation function and induce the double reweighted minimization problem, which is derived from the TNNR minimization problem. The TNNR minimization problem adaptively assigns weight values to the nonconvex nonsmooth rank relaxation functions.
\item[(2)] We propose an accelerated method for the TNNR minimization problem.
\item[(3)] Under some milder assumptions, we achieve the local convergence guarantee and then use the KL property to provide the global convergence guarantee. Experimental results verify the advantages of our method.
\end{itemize}
The outline of this paper is given as follows. We recall basic tensor notation in Section \uppercase\expandafter{\romannumeral2}.
In Section \uppercase\expandafter{\romannumeral3}, we give the main results, including the proposed model, the algorithm and its convergence analysis. Extensive simulation results are reported in Section \uppercase\expandafter{\romannumeral4}.
\section{NOTATIONS AND PRELIMINARIES}
This section recalls some basic knowledge on tensors. We first give the basic notations and then present the tubal rank, t-SVD and TNN.
\subsection{Notations}
For a positive integer $n$,
$[n]:=\{1,2,\ldots, n\}$. Scalars, vectors and matrices are denoted as lowercase letters ($a,b,c,\ldots$), boldface lowercase letters ($\bm{a} ,\bm{b},\bm{c},\ldots$) and uppercase letters ($A,B,C,\ldots $), respectively.
Third order tensors are denoted as ($\mathcal{A},\mathcal{B},\mathcal{C},\ldots$). For a third order tensor $\mathcal{A} \in \R^{n_1\times n_2\times n_3}$, we use the Matlab notations $ \mathcal{A}(:, :, k) $ to denote its $ k $-th frontal slice, denoted by $ A^{(k)} $ for all $k\in { [n_3]}$. The inner product of two tensors $ \mathcal{A},\,\mathcal{B} \in {\R^{{n_1} \times {n_2} \times {n_3}}}$ is the sum of products of their entries, i.e.
$$\left\langle {\mathcal{A},\mathcal{B}} \right\rangle = \sum\limits_{i = 1}^{{n_1}} {\sum\limits_{j = 1}^{{n_2}} {\sum\limits_{k = 1}^{{n_3}} {{\mathcal{A}_{ijk}}{\mathcal{B}_{ijk}}} } }. $$
The Frobenius norm is ${\left\| \mathcal{A} \right\|} = \sqrt {\left\langle {\mathcal{A},\mathcal{A}} \right\rangle } $.
For a matrix $A$, $ \left\|A\right\|_2 $ represents the largest singular value of $A$. $ \mathscr{D}_r^i\left(x_i\right) $ denotes a diagonal matrix generated by vector $ \bm{x}=\left(x_1,x_2,\ldots,x_r\right) $.
\subsection{$T$-product, tubal rank, t-SVD and TNN}
The Discrete Fourier Transform (DFT)
plays a key role in the tensor-tensor product (t-product). For ${\mathcal A}\in \mathbb{R}^{n_1 \times n_2 \times n_3}$, let ${{\bar {\mathcal A}}} \in {{\mathbb C}^{{n_1} \times {n_2} \times {n_3}}}$ denote the DFT of ${{ {\mathcal A}}}$
along the third dimension. Specifically, let $F_{n_3}=[f_1,\dots, f_{n_3}]\in \mathbb C^{n_3\times n_3}$, where
$$f_i=\left[ \omega^{0\times (i-1)}; \omega^{1\times (i-1)};\dots; \omega^{(n_3-1)\times (i-1)}\right] \in \mathbb C^{n_3}$$
with $\omega=e^{-\frac{2\pi \mathfrak{b}}{n_3}}$ and $\mathfrak{b}=\sqrt{-1}$. Then $ \bar {\mathcal A}(i,j,:)=F_{n_3}{\mathcal A}(i,j,:) $,
which can be computed by Matlab command ``$\bar {\mathcal A}=fft({\mathcal A},[\; ],3)$''. Furthermore, ${\mathcal A}$ can be computed by $\bar {\mathcal A}$ with the inverse DFT $ {\mathcal A}=ifft({\bar {\mathcal A}},[\; ],3) $.
\begin{lemma}\label{lem:v}\cite{RR04}
Given any real vector $\bm{v} \in \mathbb{R}^{n_3}$, the associated $\bar{\bm{v}}=F_{n_3} \bm{v} \in \mathbb{C}^{n_3}$ satisfies
$$
\bar{v}_{1} \in \mathbb{R} \text { and } \operatorname{conj} \left(\bar{v}_{i}\right)=\bar{v}_{n_3-i+2},\; i=2, \ldots,\left\lfloor\frac{n_3+1}{2}\right\rfloor.
$$
\end{lemma}
By using Lemma \ref{lem:v}, the frontal slices of $ \bar{\mathcal A} $ have the following properties:
\begin{equation}\label{conj}
\left\{\begin{array}{l}
\bar{{A}}^{(1)} \in \mathbb{R}^{n_{1} \times n_{2}}, \\
\operatorname{conj} \left(\bar{{A}}^{(i)}\right)=\bar{{A}}^{\left(n_{3}-i+2\right)},\; i=2, \ldots,\left\lfloor\frac{n_{3}+1}{2}\right\rfloor.
\end{array}\right.
\end{equation}
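Lemma \ref{lem:v} and property \eqref{conj} can be observed numerically. The following NumPy sketch (our illustration, not part of the paper) checks them for a random real tensor.

```python
# Check the conjugate symmetry of the third-mode DFT of a real tensor.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 4, 3, 5
A = rng.standard_normal((n1, n2, n3))
A_bar = np.fft.fft(A, axis=2)          # Matlab's fft(A, [], 3)

# The first frontal slice is real.
assert np.allclose(A_bar[:, :, 0].imag, 0)
# conj(A_bar^{(i)}) = A_bar^{(n3-i+2)} for i = 2, ..., n3 (1-based slices).
for i in range(2, n3 + 1):
    assert np.allclose(np.conj(A_bar[:, :, i - 1]), A_bar[:, :, (n3 - i + 2) - 1])
# The inverse DFT recovers A (Matlab's ifft(A_bar, [], 3)).
assert np.allclose(np.fft.ifft(A_bar, axis=2).real, A)
```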
For ${\mathcal A}\in \mathbb{R}^{n_1\times n_2\times n_3}$, we define matrix ${{\bar A}} \in {{\mathbb C}^{{n_1}{n_3} \times {n_2}{n_3}}}$ as
\begin{equation}\label{bdiag}
{{\bar A}} = bdiag(\bar {{\mathcal{A}}} )
\\
= \left[ {\begin{array}{*{20}{c}}
{\bar A^{(1)}}&{}&{}&{} \\
{}&{\bar A^{(2)}}&{}&{} \\
{}&{}& \ddots &{} \\
{}&{}&{}&{\bar A^{({n_3})}}
\end{array}} \right]. \end{equation}
Here, $ bdiag(\cdot) $ is an operator which maps the tensor $ {{\bar {\mathcal A}}} $ to the block diagonal matrix $ \bar A$. The block circulant matrix $bcirc({{\mathcal A}}) \in {{\mathbb R}^{{n_1}{n_3} \times {n_2}{n_3}}}$ of ${\mathcal A}$ is defined as
$$bcirc({{\mathcal A}}) = \left[ {\begin{array}{*{20}{c}}
{A^{(1)}}&{A^{(n_3)}}& \cdots &{A^{(2)}}\\
{A^{(2)}}&{A^{(1)}}& \cdots &{A^{(3)}}\\
\vdots & \vdots & \ddots & \vdots \\
{A^{(n_3)}}&{A^{({n_3} - 1)}}& \cdots &{A^{(1)}}
\end{array}} \right].$$
Based on these notations, the $T$-product is presented as follows.
\begin{definition}\label{def:T-pro}\textbf{(T-product)} \cite{KM11}
For ${\mathcal A}\in \mathbb{R}^{n_1\times r\times n_3}$ and $\mathcal B\in \mathbb R^{r\times n_2\times n_3}$, define
$${\mathcal A}*\mathcal B:=fold\left(bcirc({\mathcal A}) \cdot unfold(\mathcal B)\right) \in \mathbb{R}^{n_1\times n_2\times n_3}.$$
Here
$$ unfold(\mathcal B) = \left[B^{(1)};B^{(2)}; \ldots ;B^{(n_3)}\right],$$
and its inverse operator ``fold" is defined by $$fold(unfold(\mathcal B)) = \mathcal B.$$
\end{definition}
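Definition \ref{def:T-pro} can be realized in two equivalent ways: via the block circulant matrix, or slice-wise in the Fourier domain. The sketch below (our illustration; the function names are ours) checks that the two agree on random data.

```python
# T-product computed two ways: block-circulant construction vs. FFT domain.
import numpy as np

def bcirc(A):
    """Block circulant matrix of a third order tensor (block (i,j) = A^{((i-j) mod n3 + 1)})."""
    n1, n2, n3 = A.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            M[i*n1:(i+1)*n1, j*n2:(j+1)*n2] = A[:, :, (i - j) % n3]
    return M

def tprod_bcirc(A, B):
    n1, r, n3 = A.shape
    _, n2, _ = B.shape
    unfoldB = B.transpose(2, 0, 1).reshape(r * n3, n2)   # [B^{(1)}; ...; B^{(n3)}]
    C = bcirc(A) @ unfoldB
    return C.reshape(n3, n1, n2).transpose(1, 2, 0)      # fold back

def tprod_fft(A, B):
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)               # slice-wise matrix products
    return np.fft.ifft(Cf, axis=2).real

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((3, 2, 5))
assert np.allclose(tprod_bcirc(A, B), tprod_fft(A, B))
```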
We now present the definition of the tubal rank. Before that, we need to introduce some related concepts.
\begin{definition}\textbf{(F-diagonal tensor)} \cite{KM11}
If each frontal slice of a tensor is a diagonal matrix, the tensor is called $ f $-diagonal.
\end{definition}
\begin{definition}\textbf{(Conjugate transpose)} \cite{KM11}
The conjugate transpose of a tensor $\mathcal{A} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$, denoted as $\mathcal{A}^{*}$, is the tensor obtained by conjugate transposing each of the frontal slices and then reversing the order of transposed frontal slices 2 through $n_{3}$.
\end{definition}
\begin{definition}\textbf{(Identity tensor)} \cite{KM11}
The identity tensor $ \mathcal{I} \in \mathbb{R}^{n \times n \times n_{3}} $ is a tensor with the identity matrix as its first frontal slice and all other frontal slices being zeros.
\end{definition}
\begin{definition}\textbf{(Tensor inverse)} \cite{KM11}
A tensor $ {\mathcal A}\in \mathbb{R}^{n \times n \times n_{3}} $ has an inverse $ \mathcal B $ provided that
$ {\mathcal A}*\mathcal B=\mathcal{I} $ and $ \mathcal B*{\mathcal A}=\mathcal{I} $.
\end{definition}
\begin{definition}\textbf{(Orthogonal tensor)} \cite{KM11}
A tensor $\mathcal{P} \in$ $\mathbb{R}^{n \times n \times n_{3}}$ is orthogonal if it fulfills the condition $\mathcal{P}^{*} * \mathcal{P}=\mathcal{P} * \mathcal{P}^{*}=\mathcal{I}.$
\end{definition}
\begin{theorem}\textbf{(T-SVD)} \cite{KM11}
A tensor $\mathcal{A} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ can be factored as
$$
\mathcal{A}=\mathcal{U} * \mathcal{S} * \mathcal{V}^{*},
$$
where $\mathcal{U} \in \mathbb{R}^{n_{1} \times n_{1} \times n_{3}}$ and $\mathcal{V} \in \mathbb{R}^{n_{2} \times n_{2} \times n_{3}}$ are orthogonal tensors, and $\mathcal{S} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ is an $ f $-diagonal tensor.
\end{theorem}
Tensor multi-rank, tubal rank and tubal nuclear norm are now introduced.
\begin{definition}\label{def:tubal rank}{\bfseries (Tensor multi-rank and tubal rank)} \cite{KBHH13}
For tensor ${\mathcal A} \in {{\mathbb R}^{{n_1} \times {n_2} \times {n_3}}}$, let $r_k=\operatorname {rank}\left(\bar A^{(k)}\right)$ for all $k\in {[n_3]}$.
Then multi-rank of ${\mathcal A}$ is defined as $\operatorname {rank}_{m}({\mathcal A})=(r_1,\ldots,r_{n_3})$. The tensor tubal
rank is defined as $ \operatorname {rank}_t({\mathcal A})=\max\left\lbrace r_k|k\in[n_3]\right\rbrace $.
\end{definition}
\begin{definition}\label{def:nuclear norm}{\bfseries (Tubal nuclear norm)} \cite{LFC20}
The tubal nuclear norm $\|\mathcal{A}\|_{*}$ of a tensor $\mathcal{A} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ is defined as the average of the nuclear norms of the frontal slices of $\bar{\mathcal{A}}$, i.e., $\|\mathcal{A}\|_{*}=\frac{1}{n_{3}} \sum_{i=1}^{n_{3}}\left\|\bar{{A}}^{(i)}\right\|_{*}$.
\end{definition}
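The multi-rank, tubal rank and TNN of Definitions \ref{def:tubal rank} and \ref{def:nuclear norm} can be computed from slice-wise SVDs of $\bar{\mathcal A}$. The sketch below (ours; `tol` is an illustrative numerical threshold) builds a tensor of tubal rank at most $2$ via a t-product of two factors and evaluates both quantities.

```python
# Tubal rank and TNN from Fourier-domain slice-wise SVDs.
import numpy as np

def tubal_rank_and_tnn(A, tol=1e-10):
    Af = np.fft.fft(A, axis=2)
    ranks, tnn = [], 0.0
    for k in range(A.shape[2]):
        s = np.linalg.svd(Af[:, :, k], compute_uv=False)
        ranks.append(int(np.sum(s > tol)))   # rank of k-th Fourier slice
        tnn += s.sum()
    return max(ranks), tnn / A.shape[2]      # tubal rank, TNN

# A t-product of factors with r = 2 lateral slices has tubal rank at most 2.
rng = np.random.default_rng(2)
Uf = np.fft.fft(rng.standard_normal((6, 2, 4)), axis=2)
Vf = np.fft.fft(rng.standard_normal((2, 5, 4)), axis=2)
A = np.fft.ifft(np.einsum('ijk,jlk->ilk', Uf, Vf), axis=2).real
r, tnn = tubal_rank_and_tnn(A)
assert r <= 2 and tnn > 0
```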
\section{Triple Nonconvex Nonsmooth Rank Minimization}
\subsection{Problem Formulation}
Matrix rank minimization problem can be expressed as
\begin{equation}\label{matrix}
\min\limits_{X\in \mathbb{R}^{n_1 \times n_2}}\lambda\operatorname{rank}(X)+g(X),
\end{equation}
where $ \lambda $ is a positive parameter and $ g: \mathbb{R}^{n_1 \times n_2} \rightarrow[0,+\infty) $ is a differentiable loss function, which may be nonconvex.
Due to the discontinuous and nonconvex nature of the rank function, the above problem is generally NP-hard.
Numerous relaxation functions, including the convex nuclear norm \cite{PLYT18} and nonconvex relaxations \cite{YZ22,GYZC22,ZQZ20,SWKT19}, have been widely used to replace the matrix rank function. In \cite{ZGQ19}, the double singular values function, a continuous relaxation of the rank function, was adopted for the matrix rank minimization problem with some advantages. The weighted singular values function of a matrix $ X\in \mathbb{R}^{n_1 \times n_2} $ is defined as
$$
\rho_\beta(\sigma(X))=\sum_{i=1}^{r}\beta_i\rho\left(\sigma_{i}\right), \quad r=\min \left\lbrace n_1, n_2\right\rbrace,
$$
where $\rho(\cdot): \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}$ is a differentiable concave function on $[0,+\infty)$, $ \beta=\left(\beta_1,\beta_2,\ldots,\beta_r\right) $ is a weighting vector with $ 0\le\beta_1\le\beta_2\le\ldots\le\beta_r $, and $\sigma(X)=\left(\sigma_{1}, \sigma_{2}, \ldots, \sigma_{r}\right)$ is the vector of singular values with $\sigma_{1} \geq \sigma_{2} \geq \ldots \geq \sigma_{r} \geq 0$.
Motivated by these, we consider the double weighted singular values function of a tensor $ \mathcal X\in \mathbb{R}^{n_1 \times n_2 \times n_3} $:
\begin{equation}\label{eq:norm}
\left\|\mathcal X\right\|_{\rho_{\alpha}^\beta}=\sum_{k=1}^{n_3}\alpha_k\rho_\beta\left(\sigma^k\right)=\sum_{k=1}^{n_3}\sum_{i=1}^{r}\alpha_k\beta_i^k\rho\left(\sigma_i^k\right),
\end{equation}
where $ \alpha_k>0 $ for $ k \in [n_3] $, $ \sigma^k=\sigma\left(\bar X^{(k)}\right) $ and $ \sigma_i^k=\sigma_i\left(\bar X^{(k)} \right) $.
When acting on each singular value of the low rank tensor, $ \left\|\mathcal X\right\|_{\rho_{\alpha}^\beta} $ is a flexible rank relaxation function under different choices of $ \alpha $, $ \beta $ and $ \rho\left(\cdot\right) $; see Table \ref{tab:weighted}.
\begin{table*}[htbp]
\centering
\caption{Flexible rank relaxation function with different choices of $ \alpha $, $ \beta $ and $ \rho\left(\cdot\right) $.}\label{tab:weighted}
\begin{tabular}{c|c|c|c}
\hline
$ \alpha $ & $ \beta $ & $ \rho\left(\cdot\right) $ & $ \left\|\mathcal X\right\|_{\rho_{\alpha}^\beta} $ \\
\hline
$ \alpha_k=1/n_3 $ & $ \beta_i^k=1 $ &$ \rho\left(\sigma_i^k\right)=\sigma_i^k $ & TNN \cite{LFC20} \\
\hline
$ \alpha_k =1 $ & $ \beta_i^k=\left\{ \begin{array}{l}
0,\;i = 1, \ldots ,N,\\
1,\;i = N + 1, \ldots ,r.
\end{array} \right. $ &$ \rho\left(\sigma_i^k\right)=\sigma_i^k $ & PSTNN \cite{JHZD20} \\
\hline
$ \alpha_k=1/n_3 $ & $\beta_i^k=\frac{1}{\sigma_i^k+\varepsilon}$ & $ \rho\left(\sigma_i^k\right)=\sigma_i^k $ & Weighted Tensor Nuclear Norm \cite{SYL18} \\
\hline
$ \alpha_k=\left\{ \begin{array}{l}
1,\;k = 1,\\
0,\;k = 2, \ldots ,n_3.
\end{array} \right.$ & $ \beta_i^k=\left\{ \begin{array}{l}
0,\;i = 1, \ldots ,N,\\
1,\;i = N + 1, \ldots ,r.
\end{array} \right. $ & $ \rho\left(\sigma_i^k\right)=\sigma_i^k $ & Tensor Truncated Nuclear Norm \cite{XQLJ18}\\
\hline
$ \alpha_k=1/n_3 $ & $\beta_i^k=\frac{c}{\sqrt[1/2p]{\max\left(0,(\sigma_i^k)^2-(\sigma_r^k)^2\right)}+\varepsilon}$ & $ \rho\left(\sigma_i^k\right)=\left(\sigma_i^k\right)^p $ & Weighted t-Schatten-$ p $ Norm \cite{LZT20} \\
\hline
\end{tabular}
\end{table*}
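As a concrete check of the first row of Table \ref{tab:weighted}, the sketch below (ours) evaluates \eqref{eq:norm} with $\alpha_k=1/n_3$, $\beta_i^k=1$ and $\rho(t)=t$ and compares it with the TNN computed directly.

```python
# The double weighted function reduces to the TNN for the trivial weights.
import numpy as np

def weighted_norm(A, alpha, beta, rho):
    """Evaluate sum_k alpha_k sum_i beta_i^k rho(sigma_i^k) via Fourier slices."""
    Af = np.fft.fft(A, axis=2)
    total = 0.0
    for k in range(A.shape[2]):
        s = np.linalg.svd(Af[:, :, k], compute_uv=False)
        total += alpha[k] * sum(beta[k][i] * rho(s[i]) for i in range(len(s)))
    return total

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3, 5))
n3, r = 5, 3
alpha = [1.0 / n3] * n3
beta = [[1.0] * r for _ in range(n3)]
tnn = weighted_norm(A, alpha, beta, lambda t: t)

# Direct TNN for comparison.
Af = np.fft.fft(A, axis=2)
direct = sum(np.linalg.svd(Af[:, :, k], compute_uv=False).sum()
             for k in range(n3)) / n3
assert np.isclose(tnn, direct)
```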
\subsection{TNNR Minimization Problem}
Based on problems \eqref{matrix} and \eqref{eq:norm}, we introduce a general rank relaxation minimization problem:
\begin{equation}\label{tensor}
\min\limits_{\mathcal X\in \mathbb{R}^{n_1 \times n_2\times n_3}}F_\alpha^\beta\left(\mathcal X\right):=\lambda\left\|\mathcal X\right\|_{\rho_{\alpha}^\beta}+f(\mathcal X),
\end{equation}
where $ f: \mathbb{R}^{n_1 \times n_2 \times n_3} \rightarrow[0,+\infty) $ is a differentiable loss function, which may be nonconvex. Combining problem \eqref{tensor} with
reweighted strategies \cite{XGL16,LTYL14} and supergradient concepts \cite{Bor01}, we arrive at the following TNNR minimization problem:
\begin{equation}\label{Q:TNNR}
\min\limits_{\mathcal X}F\left(\mathcal X\right):= \lambda\sum_{k=1}^{n_3}\rho_2\left(\sum_{i=1}^{r} \rho_1\left(\rho\left(\sigma_i^k\right)\right)\right)+f(\mathcal X).
\end{equation}
Without loss of generality, we choose $ \rho\left(\cdot\right)=\rho_1\left(\cdot\right)=\rho_2\left(\cdot\right) $.
Next, we present the relationship between \eqref{tensor} and \eqref{Q:TNNR}. Unless otherwise specified, Assumption \ref{ass:rho} is assumed throughout the paper.
\begin{assumption}\label{ass:rho}
The penalty function $\rho(\cdot): \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}$ is a differentiable concave function with an $ L_g $-Lipschitz continuous gradient, i.e., for any $ t,\,s $, $$ \left|\rho^{\prime}\left(s\right)-\rho^{\prime}\left(t\right)\right| \leq L_{g}\left|s-t\right|, $$
and $ \rho^{\prime}\left(t\right)>0 $ for any $ t\in\left[ 0,+\infty\right) $.
\end{assumption}
\begin{lemma}\cite{ZGQ19}\label{lem:gradient}
If the function $ \rho(\cdot) $ satisfies Assumption \ref{ass:rho}, then for any $ t $ and $ s $, we have:
\begin{itemize}
\item $\rho(t) \leq \rho(s)+\rho'(s)(t-s)$;
\item $\rho\left(\rho\left(t\right)\right) \leq \rho\left(\rho\left(s\right)\right)+ \rho'\left(\rho\left(s\right)\right)\left(\rho\left(t\right)-\rho\left(s\right)\right).$
\end{itemize}
\end{lemma}
From Lemma \ref{lem:gradient}, we know that if we set $ \alpha_k = \rho'\left(\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^k\right)\right)\right) $ and $ \beta_i^k=\rho'\left(\rho\left(\sigma_i^k\right)\right) $ in $ \left\|\mathcal X\right\|_{\rho_{\alpha}^\beta} $, then
\begin{equation*}
\min F_\alpha^\beta\left(\mathcal X\right) \ge \min F\left(\mathcal X\right).
\end{equation*}
This inequality establishes the relationship between problem \eqref{tensor} and problem \eqref{Q:TNNR}.
\begin{lemma}\cite{ZGQ19}\label{lem:SVF}
If $\sigma_{1}^k \geq \sigma_{2}^k \geq \ldots \geq \sigma_{r}^k \geq 0$ for $ k \in [n_3] $, then we have
$$
0 \leq \beta_{1}^k \leq \beta_{2}^k \leq \ldots \leq \beta_{r}^k.
$$
\end{lemma}
According to Lemma \ref{lem:SVF}, the less significant singular values receive larger weights, and vice versa. Therefore, $ \left\|\mathcal X\right\|_{\rho_{\alpha}^\beta} $ can serve as a relaxation of the tensor multi-rank and tubal rank\footnote{Similar conclusions extend in parallel to the Tucker rank and the tensor train (TT) rank.}.
\subsection{General Inertial Smoothing Proximal Gradient Algorithm}
Motivated by the efficiency of inertial methods, we consider the following general inertial scheme for minimizing $ F_\alpha^\beta\left(\cdot\right) $, i.e.,
\begin{equation}\label{X}
\left\{\begin{array}{l}
\mathcal Y^{t}=\mathcal X^{t}+\theta_{1}^t\left(\mathcal X^{t}-\mathcal X^{t-1}\right), \\
\mathcal Z^{t}=\mathcal X^{t}+\theta_2^{t}\left(\mathcal X^{t}-\mathcal X^{t-1}\right), \\
\mathcal X^{t+1}=\mathop{\operatorname{argmin}}\limits_{\mathcal X} Q\left(\mathcal X,\mathcal Y^{t},\mathcal Z^{t}\right),
\end{array}\right.
\end{equation}
where $ Q\left(\mathcal X,\mathcal Y^{t},\mathcal Z^{t}\right) =\lambda\left\|\mathcal X\right\|_{\rho_{\alpha^t}^{\beta^t}}+\left\langle \mathcal X-\mathcal Y^{t}, \nabla f\left(\mathcal Z^{t}\right)\right\rangle+\frac{\mu^t}{2}\left\|\mathcal X-\mathcal Y^t\right\|^{2} $, and $ \theta_1^t $, $ \theta_2^t $, $ \mu^t $ are parameters on which certain conditions will be imposed.
To solve \eqref{X}, we introduce the following theorem.
\begin{theorem}\label{thm:tSVD}
Let $\rho(\cdot): \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}$ be a function such that the associated proximal operator $\operatorname{Prox}_{\rho}(\cdot)$ is monotone. For any $\eta>0$, let $\mathcal Y=\mathcal U*{\mathcal S}*\mathcal V^*$ be the t-SVD of $\mathcal Y\in\mathbb{R}^{n_1 \times n_2\times n_3}$, and let all weighting values satisfy $ 0 \leq \beta_{1}^k \leq \beta_{2}^k \leq \ldots \leq \beta_{r}^k $ and $ \alpha_k>0 $ for $ k \in [n_3] $. Then, a minimizer of
\begin{equation*}
\mathop{\operatorname{argmin}}\limits_{\mathcal X} \eta\left\|\mathcal X\right\|_{\rho_{\alpha}^\beta}+\frac{1}{2}\left\|\mathcal X-\mathcal Y\right\|^{2}
\end{equation*}
is given by
$$ \mathcal X^\star=\mathcal U*\mathcal{S^\star} *\mathcal V^*, $$
where $ \mathcal{S^\star} $ satisfies $\bar\delta_{i,k}^\star=\mathcal{\bar S^\star}(i,i,k) \geq \mathcal{\bar S^\star}(j,j,k)=\bar\delta_{j,k}^\star$ for $1 \leq i \leq j \leq r,\,k \in [n_3]$, and $ \bar\delta_{i,k}^\star $ is obtained by solving
\begin{equation*}
\bar\delta_{i,k}^\star \in \operatorname{Prox}_{\rho}\left(\sigma_i^k\right)=\mathop{\operatorname{argmin}}\limits_{\bar\delta_{i,k}\geq 0} \frac{\eta\alpha_k\beta_i^k }{n_3}\rho\left(\bar\delta_{i,k}\right)+\frac{1}{2}\left(\bar\delta_{i,k}-\sigma_{i}^k\right)^{2},
\end{equation*}
where $ \sigma_{i}^k=\mathcal{\bar S}(i,i,k) $.
\end{theorem}
Theorem \ref{thm:tSVD} follows easily from \cite[Proposition 1]{ZGQ19}; the proof is therefore omitted.
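For the special case $\rho(t)=t$, the proximal subproblem in Theorem \ref{thm:tSVD} reduces to weighted soft-thresholding of the Fourier-domain singular values. The sketch below (ours; the weight values are illustrative) implements this case.

```python
# Weighted tensor singular value thresholding for rho(t) = t:
# the per-singular-value prox is max(sigma - eta*alpha_k*beta_i^k/n3, 0).
import numpy as np

def weighted_tsvt(Y, eta, alpha, beta):
    n1, n2, n3 = Y.shape
    Yf = np.fft.fft(Y, axis=2)
    Xf = np.empty_like(Yf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Yf[:, :, k], full_matrices=False)
        thr = eta * alpha[k] * np.asarray(beta[k]) / n3   # per-value thresholds
        s_star = np.maximum(s - thr, 0.0)                  # prox of rho(t) = t
        # nondecreasing beta keeps s_star sorted, as the theorem requires
        Xf[:, :, k] = (U * s_star) @ Vh
    return np.fft.ifft(Xf, axis=2).real

rng = np.random.default_rng(4)
Y = rng.standard_normal((4, 3, 5))
r, n3 = 3, 5
alpha = [1.0] * n3
beta = [[0.1 * (i + 1) for i in range(r)] for _ in range(n3)]  # 0 <= b1 <= ... <= br
X = weighted_tsvt(Y, eta=1.0, alpha=alpha, beta=beta)
assert X.shape == Y.shape
```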
After updating $ \mathcal X^{t+1} $, we need to compute the weighting $ \mathbf alpha_k^{t+1} $ and $ \beta_i^{k,t+1} $ by
\begin{equation}\label{weight}
\alpha_k^{t+1} = \rho'\left(\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^{k,t+1}\right)\right)\right),\,
\beta_i^{k,t+1}= \rho'\left(\rho\left(\sigma_i^{k,t+1}\right)\right),
\end{equation}
where $ \sigma_i^{k,t+1}=\sigma_i\left(\bar X^{(k,t+1)}\right) $ and $ \bar X^{(k,t+1)}= \left(\bar X^{t+1}\right)^{(k)}$.
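As a concrete illustration of the update \eqref{weight} with $\rho(x)=x^{2/3}$ (the choice used later in the experiments), the weights can be computed as follows; the small \texttt{eps} regularizing the derivative at zero is our own addition, not part of the model.

```python
import numpy as np

# rho(x) = x^(2/3) and its derivative; eps guards rho' at zero (our choice).
eps = 1e-10
rho = lambda x: x ** (2.0 / 3.0)
drho = lambda x: (2.0 / 3.0) * (x + eps) ** (-1.0 / 3.0)

def update_weights(sigma):
    """sigma: (r, n3) array of singular values sigma_i^k of the frontal
    slices; returns alpha (n3,) and beta (r, n3) as in (weight)."""
    beta = drho(rho(sigma))                     # beta_i^k = rho'(rho(sigma_i^k))
    alpha = drho(rho(rho(sigma)).sum(axis=0))   # alpha_k = rho'(sum_i rho(rho(sigma_i^k)))
    return alpha, beta
```

Since $\rho'$ is decreasing, larger singular values receive smaller weights and are thus shrunk less, consistent with the motivation of the model.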
The general inertial smoothing proximal gradient method for solving problem \eqref{Q:TNNR} is presented as follows.
\begin{algorithm}
\caption{Solving problem \eqref{Q:TNNR} by TNNR}
\label{alg}
\begin{algorithmic}[]
\REQUIRE Choose the parameters $ \theta_1^t $, $ \theta_2^t $, $ \mu^t $. \\
\!\!\!\!\!\!\!\!\!\! \textbf{Initialize:} $t=0,\,\mathcal X^{t}$, and $ \alpha_k^t,\,\beta_i^{k,t} $ for $ k\in[n_3],\,i\in[r] $.
\WHILE{not converged}
\STATE \textbf{Step 1.} Update $ \mathcal X^{t+1} $ by \eqref{X}.\\
\STATE \textbf{Step 2.} Update $ \alpha_k^{t+1},\,\beta_i^{k,t+1} $ by \eqref{weight}.\\
\STATE Let $ t:=t+1 $ and go to \textbf{Step 1}.
\ENDWHILE
\ENSURE $\mathcal X^{t+1}$.
\end{algorithmic}
\end{algorithm}
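A minimal sketch of the inertial proximal-gradient scheme underlying Algorithm \ref{alg}, written for a generic smooth $f$ and proximable regularizer: the two extrapolation points with weights $\theta_1$ and $\theta_2$ mirror the scheme's inertia, while the concrete update \eqref{X} is abstracted into the callables \texttt{grad\_f} and \texttt{prox\_g} (a simplification of ours, not the released implementation).

```python
import numpy as np

def inertial_prox_grad(grad_f, prox_g, X0, mu, theta1, theta2, iters=100):
    """Generic inertial proximal-gradient skeleton: one extrapolation
    point for the prox step (weight theta1) and one for the gradient
    step (weight theta2), with step size 1/mu."""
    X_prev, X = X0.copy(), X0.copy()
    for _ in range(iters):
        Z = X + theta1 * (X - X_prev)   # extrapolation for the prox step
        W = X + theta2 * (X - X_prev)   # extrapolation for the gradient step
        X_prev, X = X, prox_g(Z - grad_f(W) / mu, 1.0 / mu)
    return X
```

For instance, with $f(x)=\tfrac12\|x-b\|^2$ and the $\ell_1$ prox (soft thresholding), the iterates converge to the soft-thresholded $b$.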
\subsection{Convergence Analysis}
In this subsection, we prove the convergence of the proposed TNNR.
The following assumptions are needed for the convergence analysis:
\begin{assumption}\label{ass:smooth}
The loss function $f(\cdot)$ is continuously differentiable with Lipschitz continuous gradient $\nabla f(\cdot)$, i.e., there exists a Lipschitz constant $L_{f}>0$ such that for any $\mathcal X_{1}, \mathcal X_{2} \in \mathbb{R}^{n_1\times n_2\times n_3 }$,
$$
\left\|\nabla f\left(\mathcal X_{1}\right)-\nabla f\left(\mathcal X_{2}\right)\right\| \leq L_{f}\left\|\mathcal X_{1}-\mathcal X_{2}\right\| .
$$
\end{assumption}
\begin{assumption}\label{ass:bounded}
$F(\cdot)$ is coercive and bounded from below, that is,
$$
\lim _{\|\mathcal X\| \rightarrow+\infty} F(\mathcal X)=+\infty \quad \text { and } \quad \inf _{\mathcal X} F(\mathcal X)>-\infty.
$$
\end{assumption}
\begin{assumption}\label{ass:pra}
The parameters $ \theta_1^t $, $ \theta_2^t $, $ \mu^t $ in the TNNR algorithm satisfy the following conditions: for any $0<\varepsilon \ll 1$, $\theta_1^t \in \left[0,(1-\varepsilon)/2\right)$, $\theta_2^t \in \left[0,1/2\right]$, $ \mu^t $ is nonincreasing and satisfies
$$
\mu^{0} \ge \mu^{t} \ge \max \left\{\frac{\theta_2^tL_f}{\theta_1^t}, \frac{\left(1-\theta_2^t\right) L_f}{1-\theta_1^t-\theta_1^{t+1}-\varepsilon}\right\}.
$$
\end{assumption}
\begin{remark}
It is easy to see that $\mu^{t} \ge L_f$ for any $t \in \mathbb{N}$. Indeed, if $\mu^{t}<L_f$, then Assumption \ref{ass:pra} would give
$$
\left\{\begin{array}{l}
\frac{\theta_2^tL_f}{\theta_1^t}<L_f, \\
\frac{\left(1-\theta_2^t\right) L_f}{1-\theta_1^t-\theta_1^{t+1}-\varepsilon}<L_f,
\end{array}\right. \Rightarrow \theta_1^t+\theta_1^{t+1}+\varepsilon<\theta_2^t<\theta_1^t.
$$
There is no $\theta_2^{t}$ satisfying these inequalities, a contradiction. Hence, $\mu^{t} \ge L_f$.
\end{remark}
Now we establish Theorem \ref{thm:CC}, which shows that Algorithm \ref{alg} converges to a stationary point of our optimization problem.
\begin{theorem}\label{thm:CC}
Suppose that Assumptions \ref{ass:rho}--\ref{ass:pra} hold, and let $ \left\lbrace \mathcal X^t \right\rbrace $ be the sequence generated by TNNR. Then, we have that
\begin{itemize}
\item[(i)]
the sequence $\left\{\mathcal X^{t}\right\}$ is bounded, and has at least one accumulation point, i.e., there exists at least a tensor $\mathcal X^\star$ and a subsequence $\left\{\mathcal X^{t_{l}}\right\} \subseteq\left\{\mathcal X^{t}\right\}$ such that $\lim\limits_{l\rightarrow+\infty} \mathcal X^{t_{l}}=\mathcal X^\star$;
\item[(ii)] the sequence $ \left\lbrace F\left(\mathcal X^{t}\right)\right\rbrace $ is convergent;
\item[(iii)] any accumulation point $ \mathcal X^\star=\lim_{l\rightarrow \infty}\mathcal X^{t_{l}} $ of $\left\lbrace\mathcal X^{t}\right\rbrace_{t \in \mathbb{N}}$ is a stationary point of $ F\left(\mathcal X\right) $ and it further holds that
$$ F\left(\mathcal X^{t_{l}+1}\right) \rightarrow F\left(\mathcal X^\star\right), \quad \text{as } l \rightarrow +\infty.
$$
\end{itemize}
\end{theorem}
The conclusions in Theorem \ref{thm:CC} only establish the convergence of subsequences of $\left\{\mathcal X^{t}\right\}$. Under slightly stronger assumptions, we can also prove the convergence of the whole sequence $\left\{\mathcal X^{t}\right\}$. From now on, we assume that after a finite number of steps the sequence $\left\{\delta_{t}=\mu^{t}\theta_1^{t}/2\right\}$ is constant.
Below we present the convergence results for the whole sequence $ \left\lbrace \mathcal X^t \right\rbrace $ generated by TNNR.
\begin{theorem}\label{thm:whole}
Suppose that Assumptions \ref{ass:rho}--\ref{ass:pra} hold, that $ \delta_{t}\equiv \delta $ for all $ t\in \N $, and that the function
\begin{equation*}
H(\mathcal X, \mathcal Y):=F(\mathcal X)+\delta\|\mathcal X-\mathcal Y\|^{2}
\end{equation*}
is a KL function, and let $\left\lbrace \mathcal X^t \right\rbrace$ be the sequence generated by TNNR. Then, the sequence $\left\lbrace \mathcal X^t \right\rbrace$ has finite length, i.e., $\sum_{t=0}^{\infty}\left\|\mathcal X^{t+1}-\mathcal X^{t}\right\|<+\infty$, and $\left\lbrace \mathcal X^t \right\rbrace$ globally converges to a critical point $ \mathcal X^\star $ of the minimization problem \mathbf eqref{Q:TNNR}.
\end{theorem}
\section{Numerical Experiments}
In this section, we conduct experiments on both synthetic and real-world data to demonstrate the validity of TNNR. In particular, we apply it to solve problem \eqref{Q:TNNR} with $ f\left(\mathcal X\right)=\left\|P_\Omega\left(\mathcal X-\mathcal M\right) \right\|^2 $, that is,
\begin{equation}
\min\limits_{\mathcal X} \lambda\sum_{k=1}^{n_3}\rho\left(\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^k\right)\right)\right)+\left\|P_\Omega\left(\mathcal X-\mathcal M\right) \right\|^2,
\end{equation}
where $ \rho(x)=x^{2/3} $, $ \mathcal M $ is a known tensor, $ \Omega $ is the index set of the observed entries, and $ P_\Omega $ is the linear operator that keeps the entries in $ \Omega $ and fills the entries outside $ \Omega $ with zeros.
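The operator $P_\Omega$ admits a one-line masking implementation; the helper names below are ours and serve only to fix notation.

```python
import numpy as np

def make_sampling(shape, sr, seed=0):
    """Boolean mask Omega with (approximate) sampling ratio sr."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < sr

def P_Omega(X, mask):
    """Keep the observed entries, fill the unobserved ones with zeros."""
    return np.where(mask, X, 0.0)
```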
We employ the peak signal-to-noise ratio (PSNR) \cite{WBSS04}, the structural similarity (SSIM) \cite{WBSS04}, the feature similarity (FSIM) \cite{ZZMZ11} and the recovery computation time to measure the quality of the recovered results.
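For reference, PSNR on data normalized to $[0,1]$ reduces to the following short formula (SSIM and FSIM are more involved; see \cite{WBSS04,ZZMZ11}):

```python
import numpy as np

def psnr(X, X_ref, peak=1.0):
    """Peak signal-to-noise ratio in dB, for data normalized to [0, peak]."""
    mse = np.mean((X - X_ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```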
We compare TNNR for the tensor completion problem with four existing methods, including TNN \cite{ZEA14}, PSTNN \cite{JHZD20}, T-TNN \cite{XQLJ18} and HaLRTC \cite{LMWY13}.
All methods are implemented on the platform of Windows 11 and Matlab (R2020b) with an Intel(R) Core(TM) i5-12500H CPU at 2.50GHz and 16 GB RAM. The parameters of the compared methods are chosen according to the authors' suggestions in the published papers or the default settings of the released codes, so as to obtain their best performance.
\subsection{Synthetic Data}
In this subsection, we aim to recover a random tensor $ \mathcal M\in\R^{n_1\times n_2 \times n_3} $ of tubal rank $ r $ from the known entries $\left\{\mathcal M_{ijk}\right\}_{(i,j,k) \in \Omega}$. In detail, we first use the Matlab commands $ randn(n_1,r,n_3) $ and $ randn(r,n_2, n_3) $ to produce two tensors $ {\mathcal A}\in\R^{n_1\times r \times n_3} $ and $ \mathcal B\in\R^{r\times n_2 \times n_3} $. Then, we let $\mathcal{M}=\mathcal{A} * \mathcal{B}$. Finally, we sample a subset of entries uniformly at random with sampling ratio $ SR=\left|\Omega\right|/\left(n_1n_2n_3\right) $. In our experiments, we set $ r= 10,\,n_1=n_2=n_3=100 $, and $ SR=0.8 $. The iteration process terminates when $ \left\|\mathcal X^\star-\mathcal M\right\|/\left\|\mathcal M\right\|<10^{-3} $.
For each simulation, the result is obtained via 100 Monte Carlo runs with different realizations of $ \mathcal M $ and $ \mathcal Omega $.
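The low-tubal-rank construction above can be reproduced with an FFT-based t-product; this is a standard implementation of $*$ written by us for illustration, not code from any of the compared packages.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x r x n3) and B (r x n2 x n3): slice-wise
    matrix products in the Fourier domain along the third mode."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('irk,rjk->ijk', Af, Bf)   # per-slice matrix product
    return np.real(np.fft.ifft(Cf, axis=2))

# Tubal-rank-r test tensor, mirroring the randn construction in the text.
rng = np.random.default_rng(0)
n1 = n2 = n3 = 100; r = 10
M = t_product(rng.standard_normal((n1, r, n3)),
              rng.standard_normal((r, n2, n3)))
```

A quick sanity check is the t-identity: a tensor whose first frontal slice is the identity matrix and whose other slices are zero acts as the identity for $*$.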
To demonstrate the effectiveness of the TNNR with various extrapolations, we test the performance of the method with various choices of $ \theta_1 $ and $ \theta_2 $.
For each instance, we let $ \lambda=5 $ and take $ \mu $ to be the minimal stepsize, i.e., $ \mu=\max\left\lbrace \theta_2/\theta_1, \left(1-\theta_2\right)/\left(0.99-2\theta_1\right)\right\rbrace $, according to the condition in Algorithm \ref{alg}. Accordingly, we test all combinations of $\theta_1,\,\theta_2\in\{0,0.1,0.2,0.3,0.4,0.49\}$. We list the CPU computing times in Table \ref{tab:extrapolation}.
From the results, we find that almost all instances with inertia are more effective than the original method.
\begin{table}[htbp]
\centering
\caption{NUMERICAL RESULTS FOR SYNTHETIC DATA WITH TWO DIFFERENT EXTRAPOLATIONS.}\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ccccccc}
\hline
Time (s) & $\theta_1=0$ & $\theta_1=0.1$ & $\theta_1=0.2$ & $\theta_1=0.3$ & $\theta_1=0.4$ & $\theta_1=0.49$ \\
\hline
$\theta_2=0$ & 20.21 & 21.26 & 21.48 & 21.87 & 22.30 & 22.39 \\
\hline
$\theta_2=0.1$ & \textbf{19.95 } & \textbf{20.15 } & 20.34 & 20.35 & 20.55 & 20.58 \\
\hline
$\theta_2=0.2$ & \textbf{18.15 } & \textbf{18.64 } & \textbf{18.98 } & \textbf{18.82 } & \textbf{18.63 } & \textbf{18.86 } \\
\hline
$\theta_2=0.3$ & \textbf{16.37 } & \textbf{16.42 } & \textbf{16.43 } & \textbf{16.31 } & \textbf{16.53 } & \textbf{16.66 } \\
\hline
$\theta_2=0.4$ & \textbf{14.35 } & \textbf{14.43 } & \textbf{14.13 } & \textbf{14.14 } & \textbf{14.04 } & \textbf{14.00 } \\
\hline
$\theta_2=0.49$ & \textbf{12.83 } & \textbf{12.82 } & \textbf{12.46 } & \textbf{12.38 } & \textbf{12.35 } & \textbf{12.12 } \\
\hline
\end{tabular}}
\label{tab:extrapolation}
\end{table}
Therefore, in the following experiments we set $ \theta_1=0.49 $, $ \theta_2=0.49 $ and $ \mu=\max\left\lbrace \theta_2/\theta_1, \left(1-\theta_2\right)/\left(0.99-2\theta_1\right)\right\rbrace $ in TNNR.
\subsection{Color Image Inpainting}
In this subsection, we use the Berkeley Segmentation database\mathbf footnote{https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/.} of size $ 321\times 481 \times 3 $ to evaluate our proposed method TNNR for color image inpainting. In our test, four images are randomly selected from this database, including ``Airplane", ``Tiger", ``Flower" and ``Fish".
The image data are normalized to the range $ \left[0,1\right] $.
Table \ref{tab:ColorImage} reports the quantitative metrics of the four experiments using different methods when $ SR =[40\%, 50\%, 60\%] $. Several visual examples for $ SR =40\% $ are presented in Figure \ref{fig:colorimage}.
From these results, we find that the representative Tucker-based approach, i.e., HaLRTC, performs relatively poorly in terms of recovery quality. Another finding is that the convex TNN-based method is inferior to the nonconvex PSTNN and T-TNN methods. More significantly, the proposed TNNR, which is induced by a triple nonconvex nonsmooth regularization, performs best among all the compared approaches in terms of PSNR, SSIM, FSIM and visual quality. In terms of running time, our method takes a similar time to HaLRTC and is the second fastest method. It is about two times faster than the third fastest method PSTNN and at least fifteen times faster than the slowest method T-TNN.
\begin{table*}[htbp]
\caption{COLOR IMAGE INPAINTING PERFORMANCE COMPARISON: PSNR, SSIM, FSIM AND RUNNING TIME. THE BEST AND THE SECOND BEST PERFORMING METHODS IN EACH IMAGE ARE HIGHLIGHTED IN RED AND BOLD, RESPECTIVELY}\label{tab:ColorImage}
\begin{tabular}{c|c|cccc|cccc|cccc}
\hline
\multirow{2}{*}{Color Image} & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{$ SR=40\% $} & \multicolumn{4}{c|}{$ SR=50\% $} & \multicolumn{4}{c}{$ SR=60\% $} \\
\cline{3-14} & & PSNR & SSIM & FSIM & Time & PSNR & SSIM & FSIM & Time & PSNR & SSIM & FSIM & Time \\
\hline
\multirow{5}{*}{Airplane} & TNNR & \textcolor[rgb]{ 1, 0, 0}{31.921 } & \textcolor[rgb]{ 1, 0, 0}{0.842 } & \textcolor[rgb]{ 1, 0, 0}{0.935 } & \textbf{5.095 } & \textcolor[rgb]{ 1, 0, 0}{35.616 } & \textcolor[rgb]{ 1, 0, 0}{0.925 } & \textcolor[rgb]{ 1, 0, 0}{0.969 } & \textcolor[rgb]{ 1, 0, 0}{4.621 } & \textcolor[rgb]{ 1, 0, 0}{39.905 } & \textcolor[rgb]{ 1, 0, 0}{0.971 } & \textcolor[rgb]{ 1, 0, 0}{0.988 } & \textbf{4.179 }\\
& TNN & 30.424 & 0.833 & 0.924 & 42.838 & 33.588 & 0.912 & 0.960 & 82.985 & 37.389 & 0.962 & 0.982 & 92.441 \\
& PSTNN & 30.563 & 0.839 & 0.926 & 6.630 & 33.696 & 0.915 & 0.961 & 14.197 & 37.492 & 0.964 & 0.983 & 14.372 \\
& T-TNN & \textbf{30.905 } & \textbf{0.841 } & \textbf{0.931 } & 94.604 & \textbf{34.147 } & \textbf{0.921 } & \textbf{0.965 } & 145.284 & \textbf{37.913 } & \textbf{0.967 } & \textbf{0.985 } & 142.214 \\
& HaLRTC & 28.950 & 0.814 & 0.907 & \textcolor[rgb]{ 1, 0, 0}{2.296 } & 30.898 & 0.874 & 0.940 & \textbf{5.293 } & 33.199 & 0.922 & 0.964 & \textcolor[rgb]{ 1, 0, 0}{3.999 } \\
\hline
\multirow{5}{*}{Tiger} & TNNR & \textcolor[rgb]{ 1, 0, 0}{29.676 } & \textcolor[rgb]{ 1, 0, 0}{0.860 } & \textcolor[rgb]{ 1, 0, 0}{0.939 } & \textbf{5.038 } & \textcolor[rgb]{ 1, 0, 0}{32.831 } & \textcolor[rgb]{ 1, 0, 0}{0.926 } & \textcolor[rgb]{ 1, 0, 0}{0.968 } & \textbf{4.921 } & \textcolor[rgb]{ 1, 0, 0}{36.670 } & \textcolor[rgb]{ 1, 0, 0}{0.966 } & \textcolor[rgb]{ 1, 0, 0}{0.986 } & \textbf{4.579 } \\
& TNN & 28.375 & 0.851 & 0.930 & 46.804 & 31.222 & 0.916 & 0.959 & 81.420 & 34.345 & 0.958 & 0.979 & 75.853 \\
& PSTNN & 28.623 & 0.854 & 0.933 & 7.221 & 31.434 & 0.917 & 0.961 & 14.312 & 34.526 & 0.958 & 0.980 & 14.815 \\
& T-TNN & \textbf{28.722 } & \textbf{0.857 } & \textbf{0.934 } & 200.596 & \textbf{31.695 } & \textbf{0.920 } & \textbf{0.963 } & 188.537 & \textbf{34.830 } & \textbf{0.960 } & \textbf{0.981 } & 115.059 \\
& HaLRTC & 27.118 & 0.830 & 0.914 & \textcolor[rgb]{ 1, 0, 0}{2.898 } & 29.298 & 0.889 & 0.944 & \textcolor[rgb]{ 1, 0, 0}{4.418 } & 31.447 & 0.930 & 0.965 & \textcolor[rgb]{ 1, 0, 0}{2.270 } \\
\hline
\multirow{5}{*}{Flower} & TNNR & \textcolor[rgb]{ 1, 0, 0}{31.630 } & \textcolor[rgb]{ 1, 0, 0}{0.860 } & \textcolor[rgb]{ 1, 0, 0}{0.956 } & \textbf{4.813 } & \textcolor[rgb]{ 1, 0, 0}{35.215 } & \textcolor[rgb]{ 1, 0, 0}{0.934 } & \textcolor[rgb]{ 1, 0, 0}{0.981 } & \textbf{4.526 } & \textcolor[rgb]{ 1, 0, 0}{38.832 } & \textcolor[rgb]{ 1, 0, 0}{0.971 } & \textcolor[rgb]{ 1, 0, 0}{0.992 } & \textcolor[rgb]{ 1, 0, 0}{4.141 } \\
& TNN & 29.599 & 0.832 & 0.947 & 42.082 & 32.728 & 0.909 & 0.972 & 42.080 & 35.995 & 0.955 & 0.986 & 85.822 \\
& PSTNN & 30.054 & 0.840 & 0.950 & 6.520 & 33.084 & 0.914 & 0.973 & 6.691 & 36.279 & 0.957 & 0.987 & 14.754 \\
& T-TNN & \textbf{30.436 } & \textbf{0.857 } & \textbf{0.954 } & 77.385 & \textbf{33.468 } & \textbf{0.923 } & \textbf{0.975 } & 62.725 & \textbf{36.675 } & \textbf{0.962 } & \textbf{0.988 } & 100.470 \\
& HaLRTC & 29.228 & 0.851 & 0.948 & \textcolor[rgb]{ 1, 0, 0}{2.836 } & 31.691 & 0.909 & 0.969 & \textcolor[rgb]{ 1, 0, 0}{2.513 } & 34.107 & 0.946 & 0.982 & \textbf{5.343 } \\
\hline
\multirow{5}{*}{Fish} & TNNR & \textcolor[rgb]{ 1, 0, 0}{33.563 } & \textcolor[rgb]{ 1, 0, 0}{0.919 } & \textcolor[rgb]{ 1, 0, 0}{0.969 } & \textbf{5.402 } & \textcolor[rgb]{ 1, 0, 0}{37.556 } & \textcolor[rgb]{ 1, 0, 0}{0.965 } & \textcolor[rgb]{ 1, 0, 0}{0.987 } & \textbf{4.932 } & \textcolor[rgb]{ 1, 0, 0}{41.498 } & \textcolor[rgb]{ 1, 0, 0}{0.985 } & \textcolor[rgb]{ 1, 0, 0}{0.995 } & \textbf{4.320 } \\
& TNN & 31.417 & 0.898 & 0.958 & 94.509 & 34.747 & 0.947 & 0.978 & 42.238 & 38.291 & 0.975 & 0.990 & 69.916 \\
& PSTNN & 31.629 & 0.902 & 0.960 & 14.403 & 34.865 & 0.948 & 0.979 & 6.622 & 38.373 & 0.976 & 0.990 & 6.490 \\
& T-TNN & \textbf{32.025 } & \textbf{0.908 } & \textbf{0.963 } & 92.739 & \textbf{35.288 } & \textbf{0.952 } & \textbf{0.982 } & 59.725 & \textbf{38.834 } & \textbf{0.978 } & \textbf{0.992 } & 59.962 \\
& HaLRTC & 30.747 & 0.900 & 0.955 & \textcolor[rgb]{ 1, 0, 0}{2.523 } & 33.430 & 0.942 & 0.974 & \textcolor[rgb]{ 1, 0, 0}{2.405 } & 36.216 & 0.968 & 0.987 & \textcolor[rgb]{ 1, 0, 0}{2.180 }\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{1\linewidth}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_Original}
\caption{Original}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_Observed}
\caption{Observed}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_TNNR}
\caption{TNNR}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_TNN}
\caption{TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_PSTNN}
\caption{PSTNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_TTNN}
\caption{T-TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Airplane_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Airplanei_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tiger_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Tigeri_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Flower_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Floweri_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fish_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Fishi_HaLRTC}
\caption{HaLRTC}
\end{subfigure}
\end{subfigure}
\vfill
\caption{Examples of color image inpainting with $ SR=40\% $. From top to bottom are ``Airplane", ``Tiger", ``Flower" and ``Fish". For better visualization, we show the zoom-in region, the corresponding error map (difference from the original) and the corresponding partial residuals of the region.}
\label{fig:colorimage}
\end{figure*}
\subsection{Texture Image Inpainting}
In this subsection, we use the PSU near-regular texture database\footnote{http://vivid.cse.psu.edu/.} to evaluate our proposed method TNNR for texture image inpainting. Each texture image is resized to $ 300 \times 400 \times 3 $ in order to improve the computational efficiency.
In our test, four texture images are randomly selected from this database, including ``Stone", ``Pattern", ``Leaves" and ``Barbed Wire".
The image data are normalized to the range $ \left[0,1\right] $. The sampling rates (SR) are set as $ 30\% $, $ 35\% $ and $ 40\% $.
Table \ref{tab:Texture} lists the PSNR, SSIM, FSIM and the corresponding running times of the different methods. For all sampling rates, our TNNR obtains the best quantitative metrics. Moreover, TNNR requires the least running time among all methods except HaLRTC; HaLRTC is the fastest, but yields poor quantitative metrics. To further demonstrate the recovery performance, samples of texture images recovered by the different algorithms are shown in Figure \ref{fig:Texture}. From the recovery results, our method performs better in filling the missing values of the four testing texture images, and it deals with the edges of the patterns better.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{1\linewidth}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_Original}
\caption{Original}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_Observed}
\caption{Observed}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_TNNR}
\caption{TNNR}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_TNN}
\caption{TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_PSTNN}
\caption{PSTNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_TTNN}
\caption{T-TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Stone_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Stonei_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Pattern_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Patterni_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leaves_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Leavesi_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbed_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Barbedi_HaLRTC}
\caption{HaLRTC}
\end{subfigure}
\end{subfigure}
\vfill
\caption{Examples of texture image inpainting with $ SR=30\% $. From top to bottom are ``Stone", ``Pattern", ``Leaves" and ``Barbed Wire". For better visualization, we show the zoom-in region, the corresponding error map (difference from the original) and the corresponding partial residuals of the region.}
\label{fig:Texture}
\end{figure*}
\begin{table*}[htbp]
\centering
\caption{TEXTURE IMAGE INPAINTING PERFORMANCE COMPARISON: PSNR, SSIM, FSIM AND RUNNING TIME. THE BEST AND THE SECOND BEST PERFORMING METHODS IN EACH IMAGE ARE HIGHLIGHTED IN RED AND BOLD, RESPECTIVELY}
\begin{tabular}{c|c|cccc|cccc|cccc}
\hline
\multirow{2}{*}{Texture Image} & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{$ SR=30\% $} & \multicolumn{4}{c|}{$ SR=35\% $} & \multicolumn{4}{c}{$ SR=40\% $} \\
\cline{3-14} & & PSNR & SSIM & FSIM & Time & PSNR & SSIM & FSIM & Time & PSNR & SSIM & FSIM & Time \\
\hline
\multirow{5}{*}{Stone} & TNNR & \textcolor[rgb]{ 1, 0, 0}{37.535 } & \textcolor[rgb]{ 1, 0, 0}{0.949 } & \textcolor[rgb]{ 1, 0, 0}{0.980 } & \textbf{4.423 } & \textcolor[rgb]{ 1, 0, 0}{39.218 } & \textcolor[rgb]{ 1, 0, 0}{0.965 } & \textcolor[rgb]{ 1, 0, 0}{0.986 } & \textbf{3.881 } & \textcolor[rgb]{ 1, 0, 0}{40.605 } & \textcolor[rgb]{ 1, 0, 0}{0.974 } & \textcolor[rgb]{ 1, 0, 0}{0.990 } & \textcolor[rgb]{ 1, 0, 0}{3.608 } \\
& TNN & 35.728 & 0.937 & 0.974 & 60.565 & 37.419 & 0.954 & 0.982 & 61.704 & 38.939 & 0.966 & 0.987 & 36.808 \\
& PSTNN & \textbf{35.825 } & \textbf{0.939 } & \textbf{0.975 } & 11.579 & 37.408 & 0.955 & 0.982 & 11.677 & 38.854 & 0.966 & 0.987 & 11.739 \\
& T-TNN & 28.026 & 0.905 & 0.966 & 367.973 & \textbf{38.098 } & \textbf{0.959 } & \textbf{0.984 } & 252.761 & \textbf{39.683 } & \textbf{0.970 } & \textbf{0.988 } & 70.191 \\
& HaLRTC & 34.703 & 0.931 & 0.969 & \textcolor[rgb]{ 1, 0, 0}{3.929 } & 36.119 & 0.947 & 0.976 & \textcolor[rgb]{ 1, 0, 0}{3.764 } & 37.440 & 0.959 & 0.982 & \textbf{3.785 } \\
\hline
\multirow{5}{*}{Pattern} & TNNR & \textcolor[rgb]{ 1, 0, 0}{32.876 } & \textcolor[rgb]{ 1, 0, 0}{0.872 } & \textcolor[rgb]{ 1, 0, 0}{0.952 } & \textcolor[rgb]{ 1, 0, 0}{4.310 } & \textcolor[rgb]{ 1, 0, 0}{35.198 } & \textcolor[rgb]{ 1, 0, 0}{0.917 } & \textcolor[rgb]{ 1, 0, 0}{0.969 } & \textcolor[rgb]{ 1, 0, 0}{4.568 } & \textcolor[rgb]{ 1, 0, 0}{37.373 } & \textcolor[rgb]{ 1, 0, 0}{0.948 } & \textcolor[rgb]{ 1, 0, 0}{0.981 } & \textbf{4.384 } \\
& TNN & 30.781 & 0.836 & 0.939 & 31.306 & 32.514 & 0.882 & 0.955 & 65.534 & 34.424 & 0.917 & 0.969 & 64.443 \\
& PSTNN & 31.044 & 0.844 & 0.941 & 10.874 & 32.747 & 0.887 & 0.956 & 11.840 & 34.571 & 0.920 & 0.970 & 11.843 \\
& T-TNN & \textbf{31.842 } & \textbf{0.856 } & \textbf{0.946 } & 99.708 & \textbf{33.592 } & \textbf{0.898 } & \textbf{0.961 } & 80.806 & \textbf{35.452 } & \textbf{0.930 } & \textbf{0.974 } & 67.539 \\
& HaLRTC & 30.872 & 0.851 & 0.936 & \textbf{5.226 } & 32.358 & 0.887 & 0.951 & \textbf{5.555 } & 33.956 & 0.916 & 0.965 & \textcolor[rgb]{ 1, 0, 0}{3.460 } \\
\hline
\multirow{5}{*}{Leaves} & TNNR & \textcolor[rgb]{ 1, 0, 0}{29.517 } & \textcolor[rgb]{ 1, 0, 0}{0.777 } & \textcolor[rgb]{ 1, 0, 0}{0.924 } & \textbf{4.254 } & \textcolor[rgb]{ 1, 0, 0}{31.170 } & \textcolor[rgb]{ 1, 0, 0}{0.836 } & \textcolor[rgb]{ 1, 0, 0}{0.944 } & \textbf{4.432 } & \textcolor[rgb]{ 1, 0, 0}{32.790 } & \textcolor[rgb]{ 1, 0, 0}{0.879 } & \textcolor[rgb]{ 1, 0, 0}{0.960 } & \textbf{4.511 } \\
& TNN & 28.998 & 0.767 & 0.918 & 30.869 & 30.294 & 0.819 & 0.936 & 61.087 & 31.645 & 0.862 & 0.950 & 54.025 \\
& PSTNN & 29.228 & 0.776 & 0.919 & 5.087 & 30.468 & 0.825 & \textbf{0.937 } & 11.361 & 31.765 & 0.865 & 0.951 & 11.115 \\
& T-TNN & \textbf{29.296 } & \textbf{0.771 } & \textbf{0.918 } & 172.234 & \textbf{30.698 } & \textbf{0.828 } & \textbf{0.937 } & 87.762 & \textbf{32.078 } & \textbf{0.871 } & \textbf{0.952 } & 73.918 \\
& HaLRTC & 28.670 & 0.766 & 0.905 & \textcolor[rgb]{ 1, 0, 0}{3.758 } & 29.821 & 0.812 & 0.923 & \textcolor[rgb]{ 1, 0, 0}{1.980 } & 31.052 & 0.853 & 0.941 & \textcolor[rgb]{ 1, 0, 0}{3.580 } \\
\hline
\multirow{5}{*}{Barbed Wire} & TNNR & \textcolor[rgb]{ 1, 0, 0}{21.977 } & \textcolor[rgb]{ 1, 0, 0}{0.703 } & \textcolor[rgb]{ 1, 0, 0}{0.862 } & \textbf{4.127 } & \textcolor[rgb]{ 1, 0, 0}{23.622 } & \textcolor[rgb]{ 1, 0, 0}{0.771 } & \textcolor[rgb]{ 1, 0, 0}{0.895 } & \textcolor[rgb]{ 1, 0, 0}{3.927 } & \textcolor[rgb]{ 1, 0, 0}{25.384 } & \textcolor[rgb]{ 1, 0, 0}{0.832 } & \textcolor[rgb]{ 1, 0, 0}{0.922 } & \textbf{4.008 } \\
& TNN & 21.665 & 0.675 & 0.839 & 48.753 & 23.073 & 0.750 & 0.876 & 62.956 & 24.414 & 0.806 & 0.905 & 63.915 \\
& PSTNN & \textbf{21.815 } & \textbf{0.684 } & \textbf{0.845 } & 4.966 & \textbf{23.212 } & \textbf{0.756 } & \textbf{0.881 } & 11.410 & \textbf{24.538 } & \textbf{0.811 } & \textbf{0.909 } & 11.611 \\
& T-TNN & 18.385 & 0.609 & 0.805 & 306.874 & 20.054 & 0.704 & 0.850 & 369.429 & 22.445 & 0.793 & 0.895 & 263.812 \\
& HaLRTC & 20.328 & 0.600 & 0.785 & \textcolor[rgb]{ 1, 0, 0}{3.488 } & 21.327 & 0.671 & 0.825 & \textbf{4.591 } & 22.304 & 0.728 & 0.857 & \textcolor[rgb]{ 1, 0, 0}{1.988 } \\
\hline
\end{tabular}
\label{tab:Texture}
\end{table*}
For further comparison, we recover images corrupted by the deterministic ``Rectangle" and ``Grid" masks. Unlike random sampling, the missing entries of these masked images form structured, contiguous regions.
Figure \ref{fig:Mask} and Table \ref{tab:Mask} display the recovered results of the different methods. Clearly, TNNR obtains the most visually satisfying results among the compared methods, and its PSNR, SSIM and FSIM values are the highest in all cases. In terms of running time, TNNR is also the fastest method. The mask inpainting results are consistent with the color and texture image inpainting results, all of which demonstrate that TNNR performs tensor completion more accurately and runs more efficiently.
\begin{table}[htbp]
\centering
\caption{MASK IMAGE INPAINTING PERFORMANCE COMPARISON: PSNR, SSIM, FSIM AND RUNNING TIME. THE BEST AND THE SECOND BEST PERFORMING METHODS IN EACH IMAGE ARE HIGHLIGHTED IN RED AND BOLD, RESPECTIVELY}
\begin{tabular}{c|c|cccc}
\hline
Mask & Methods & PSNR & SSIM & FSIM & Time \\
\hline
\multirow{5}{*}{Rectangle} & TNNR & \textcolor[rgb]{ 1, 0, 0}{37.764 } & \textcolor[rgb]{ 1, 0, 0}{0.985 } & \textcolor[rgb]{ 1, 0, 0}{0.992 } & \textcolor[rgb]{ 1, 0, 0}{1.978 } \\
& TNN & 36.479 & \textbf{0.984 } & \textbf{0.991 } & 34.425 \\
& PSTNN & \textbf{36.756 } & \textbf{0.984 } & \textcolor[rgb]{ 1, 0, 0}{0.992 } & 6.515 \\
& T-TNN & 16.306 & 0.943 & 0.915 & 328.535 \\
& HaLRTC & 27.030 & 0.972 & 0.964 & \textbf{4.999 }\\
\hline
\multirow{5}{*}{Grid} & TNNR & \textcolor[rgb]{ 1, 0, 0}{28.844 } & \textcolor[rgb]{ 1, 0, 0}{0.945 } & \textcolor[rgb]{ 1, 0, 0}{0.961 } & \textcolor[rgb]{ 1, 0, 0}{1.327 }\\
& TNN & 27.734 & 0.935 & 0.955 & 42.225 \\
& PSTNN & 27.953 & \textbf{0.938 } & 0.957 & 4.404 \\
& T-TNN & \textbf{28.237 } & \textcolor[rgb]{ 1, 0, 0}{0.945 } & \textbf{0.958 } & 223.260 \\
& HaLRTC & 27.417 & 0.933 & 0.953 & \textbf{1.278 } \\
\hline
\end{tabular}
\label{tab:Mask}
\end{table}
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{1\linewidth}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_Original}
\caption{Original}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_Observed}
\caption{Observed}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_TNNR}
\caption{TNNR}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_TNN}
\caption{TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_PSTNN}
\caption{PSTNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_TTNN}
\caption{T-TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Grid_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Rectangle_HaLRTC}
\caption{HaLRTC}
\end{subfigure}
\end{subfigure}
\vfill
\caption{Examples of mask image inpainting. From top to bottom are ``Grid" and ``Rectangle". For better visualization, we show the zoom-in region and the corresponding error map (difference from the original).}
\label{fig:Mask}
\end{figure*}
\subsection{Video Inpainting}
We evaluate our method on the widely used YUV Video Sequences\footnote{http://trace.eas.asu.edu/yuv/.}. Each sequence contains at least 150 frames, and we use the first 30 frames of each. In the experiments, we compare our method with the other methods on two videos. All frames are $ 144 \times 176 $ pixels, and the sampling rates (SR) are set to $ 30\% $, $ 35\% $ and $ 40\% $.
Figure \ref{fig:Video} shows the 8th frame of each test video. In both tests, TNNR fills in the missing values better and recovers the frame details more faithfully. The PSNR, SSIM and FSIM metrics also favor TNNR, consistent with Table \ref{tab:Video}.
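The quantitative comparisons above rely on PSNR (together with SSIM and FSIM). As a reference point, PSNR admits a short closed form; the snippet below is a hedged sketch of that formula only (SSIM and FSIM involve perceptual models and are not reproduced), assuming 8-bit pixel values with peak 255. The frame size matches the YUV sequences used here; the test images are illustrative.

```python
import numpy as np

# A sketch of the PSNR metric used in the comparison tables;
# SSIM and FSIM involve perceptual models and are omitted here.
# We assume 8-bit images, i.e. a peak value of 255.
def psnr(reference, estimate, peak=255.0):
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(estimate, dtype=np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Illustrative frame of the YUV sequences' size (144 x 176, 3 channels).
ref = np.full((144, 176, 3), 128.0)
est = ref + 10.0  # a uniform error of 10 gray levels
print(round(psnr(ref, est), 3))  # 28.131
```

A uniform error of 10 gray levels gives an MSE of 100, so the value depends only on the error magnitude, not on the frame content.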
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{1\linewidth}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_Original}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_Original}
\caption{Original}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_Observed}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_Observed}
\caption{Observed}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_TNNR}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_TNNR}
\caption{TNNR}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_TNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_TNN}
\caption{TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_PSTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_PSTNN}
\caption{PSTNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_TTNN}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_TTNN}
\caption{T-TNN}
\end{subfigure}
\begin{subfigure}[b]{0.138\linewidth}
\centering
\includegraphics[width=\linewidth]{picture/Akiyo_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Akiyoi_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguard_HaLRTC}\vspace{0pt}
\includegraphics[width=\linewidth]{picture/Coastguardi_HaLRTC}
\caption{HaLRTC}
\end{subfigure}
\end{subfigure}
\vfill
\caption{Examples of video inpainting with $ SR=30\% $. From top to bottom are ``Akiyo" and ``Coastguard". For better visualization, we show the zoom-in region, the corresponding error map (difference from the original) and the corresponding partial residuals of the region.}
\label{fig:Video}
\end{figure*}
\begin{table*}[htbp]
\centering
\caption{VIDEO INPAINTING PERFORMANCE COMPARISON: PSNR, SSIM, FSIM AND RUNNING TIME. THE BEST AND THE SECOND BEST PERFORMING METHODS IN EACH IMAGE ARE HIGHLIGHTED IN RED AND BOLD, RESPECTIVELY}
\begin{tabular}{c|c|cccc|cccc|cccc}
\hline
\multirow{2}{*}{Video} & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{$SR=30\%$} & \multicolumn{4}{c|}{$SR=35\%$} & \multicolumn{4}{c}{$SR=40\%$} \\
\cline{3-14} & & PSNR & SSIM & FSIM & Time & PSNR & SSIM & FSIM & Time & PSNR & SSIM & FSIM & Time \\
\hline
\multirow{5}{*}{Akiyo} & TNNR & \textcolor[rgb]{ 1, 0, 0}{40.255 } & \textcolor[rgb]{ 1, 0, 0}{0.987 } & \textcolor[rgb]{ 1, 0, 0}{0.989 } & \textbf{53.612 } & \textcolor[rgb]{ 1, 0, 0}{41.423 } & \textcolor[rgb]{ 1, 0, 0}{0.991 } & \textcolor[rgb]{ 1, 0, 0}{0.991 } & 51.388 & \textcolor[rgb]{ 1, 0, 0}{42.747 } & \textcolor[rgb]{ 1, 0, 0}{0.993 } & \textcolor[rgb]{ 1, 0, 0}{0.993 } & 24.766 \\
& TNN & 39.129 & 0.985 & \textcolor[rgb]{ 1, 0, 0}{0.989 } & 115.522 & 40.494 & 0.989 & \textcolor[rgb]{ 1, 0, 0}{0.991 } & 122.066 & 41.512 & 0.991 & \textcolor[rgb]{ 1, 0, 0}{0.993 } & 61.382 \\
& PSTNN & \textbf{39.262 } & \textbf{0.986 } & \textcolor[rgb]{ 1, 0, 0}{0.989 } & 41.657 & \textbf{40.583 } & \textbf{0.990 } & \textcolor[rgb]{ 1, 0, 0}{0.991 } & \textbf{41.010 } & \textbf{41.562 } & \textbf{0.992 } & \textcolor[rgb]{ 1, 0, 0}{0.993 } & \textbf{17.285 } \\
& T-TNN & 15.797 & 0.696 & 0.839 & 1243.143 & 17.909 & 0.739 & 0.874 & 766.470 & 19.351 & 0.797 & 0.904 & 513.139 \\
& HaLRTC & 31.214 & 0.924 & \textbf{0.963 } & \textcolor[rgb]{ 1, 0, 0}{1.897 } & 33.076 & 0.949 & \textbf{0.975 } & \textcolor[rgb]{ 1, 0, 0}{1.125 } & 34.811 & 0.964 & \textbf{0.983 } & \textcolor[rgb]{ 1, 0, 0}{1.194 } \\
\hline
\multirow{5}{*}{Coastguard} & TNNR & \textcolor[rgb]{ 1, 0, 0}{31.452 } & \textcolor[rgb]{ 1, 0, 0}{0.900 } & \textcolor[rgb]{ 1, 0, 0}{0.931 } & \textbf{11.747 } & \textcolor[rgb]{ 1, 0, 0}{32.189 } & \textcolor[rgb]{ 1, 0, 0}{0.914 } & \textcolor[rgb]{ 1, 0, 0}{0.944 } & \textbf{18.292 } & \textcolor[rgb]{ 1, 0, 0}{33.315 } & \textcolor[rgb]{ 1, 0, 0}{0.934 } & \textcolor[rgb]{ 1, 0, 0}{0.952 } & \textbf{14.643 } \\
& TNN & 30.785 & 0.883 & 0.922 & 83.394 & 31.702 & 0.904 & 0.937 & 120.633 & 32.637 & 0.921 & \textbf{0.946 } & 56.524 \\
& PSTNN & \textbf{30.851 } & \textbf{0.884 } & \textbf{0.923 } & 36.425 & \textbf{31.744 } & \textbf{0.905 } & \textbf{0.938 } & 34.795 & \textbf{32.663 } & \textbf{0.922 } & \textbf{0.946 } & 17.883 \\
& T-TNN & 15.870 & 0.602 & 0.750 & 1111.489 & 18.316 & 0.657 & 0.834 & 938.164 & 20.803 & 0.792 & 0.879 & 985.819 \\
& HaLRTC & 26.734 & 0.779 & 0.888 & \textcolor[rgb]{ 1, 0, 0}{1.467 } & 28.025 & 0.825 & 0.913 & \textcolor[rgb]{ 1, 0, 0}{1.264 } & 29.045 & 0.858 & 0.927 & \textcolor[rgb]{ 1, 0, 0}{1.223 } \\
\hline
\end{tabular}
\label{tab:Video}
\end{table*}
In addition, Figure \ref{fig:Frame} displays the PSNR, SSIM and FSIM values of each frontal slice of the videos ``Akiyo" and ``Coastguard". In almost all frontal slices, the metrics of the proposed TNNR are markedly higher than those of the other compared methods.
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{picture/Frame}\vspace{0pt}
\includegraphics[width=1\linewidth]{picture/Frame1}\vspace{0pt}
\includegraphics[width=1\linewidth]{picture/Frame2}\vspace{0pt}
\includegraphics[width=1\linewidth]{picture/Frame3}
\caption{All frontal slices obtained by different methods on the videos ``Akiyo" and ``Coastguard" with $ SR=30\% $. For better visualization, we list all frontal slices obtained by TNNR, TNN and PSTNN.}
\label{fig:Frame}
\end{figure*}
\section{Conclusion}
In this paper, we propose a general and flexible rank relaxation function, the TNNR relaxation function, for efficiently solving the third-order tensor recovery problem.
We develop a general inertial smoothing proximal gradient method to accelerate the proposed model and prove that it converges to a critical point. Combining the KL property with some mild assumptions, we further establish a global convergence guarantee. We compare the proposed method with state-of-the-art methods through numerical experiments on color images, texture images and videos; it outperforms them both quantitatively and visually.
\appendices
\onecolumn
\section{Proof of Theorem \ref{thm:CC}}
Before proving Theorem \ref{thm:CC}, we first recall the following well-known and fundamental property of smooth functions of class $C^{1,1}$.
\begin{lemma}\cite{BT09,Ber99}\label{lem:smooth}
Let the function $f(\cdot)$ satisfy Assumption \ref{ass:smooth}. Then, for any $\mathcal X_1, \mathcal X_2 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, one has
\begin{align}\label{eq:smooth}
f(\mathcal X_1) \leq f(\mathcal X_2)+\langle\mathcal X_1-\mathcal X_2, \nabla f(\mathcal X_2)\rangle+\frac{L_f}{2}\|\mathcal X_1-\mathcal X_2\|^{2}.
\end{align}
\end{lemma}
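The descent inequality \eqref{eq:smooth} can be sanity-checked numerically. The sketch below (not part of the algorithm) instantiates it for a least-squares $f$, whose gradient is Lipschitz with constant $L_f=\|M^\top M\|_2$; the matrix $M$ and the test points are arbitrary illustrative data.

```python
import numpy as np

# Numerical check of the descent lemma: for f(x) = 0.5||Mx - b||^2,
# grad f is Lipschitz with L_f = ||M^T M||_2, and
# f(x1) <= f(x2) + <x1 - x2, grad f(x2)> + (L_f/2)||x1 - x2||^2.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
f = lambda x: 0.5 * np.sum((M @ x - b) ** 2)
grad = lambda x: M.T @ (M @ x - b)
Lf = np.linalg.norm(M.T @ M, 2)  # Lipschitz constant of grad f

x1, x2 = rng.standard_normal(5), rng.standard_normal(5)
bound = f(x2) + grad(x2) @ (x1 - x2) + 0.5 * Lf * np.sum((x1 - x2) ** 2)
print(f(x1) <= bound + 1e-9)  # True
```

For this quadratic $f$ the inequality is exact up to the curvature gap, since the Hessian $M^\top M$ is dominated by $L_f I$.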
Next, we are ready to discuss the convergence by constructing the following auxiliary sequence. For any $t \in \mathbb{N}$, define
\begin{equation}\label{H}
H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right):=F\left(\mathcal X^{t+1}\right)+\delta_{t+1} \left\|\mathcal X^{t+1}-\mathcal X^{t}\right\|^{2},
\end{equation}
where $ \delta_{t+1}=\mu^{t+1}\theta_1^{t+1}/2 $.
Then, we present two lemmas, which will be used later.
\begin{lemma}
Let the function $f(\cdot)$ satisfy Assumption \ref{ass:smooth}. Then, $ g\left(\mathcal X\right):=f\left(\mathcal X\right)+\frac{L_f}{2}\left\|\mathcal X\right\|^2 $ is a convex function.
\end{lemma}
\begin{proof}
For any $\mathcal X_1, \mathcal X_2 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, we have
\begin{equation}
\begin{aligned}
\left\langle\nabla g\left(\mathcal X_1\right)-\nabla g\left(\mathcal X_2\right), \mathcal X_1-\mathcal X_2\right\rangle &=L_f\left\|\mathcal X_1-\mathcal X_2\right\|^{2}+\left\langle\nabla f\left(\mathcal X_1\right)-\nabla f\left(\mathcal X_2\right), \mathcal X_1-\mathcal X_2\right\rangle \\
& \geq L_f\left\|\mathcal X_1-\mathcal X_2\right\|^{2}-\left\|\mathcal X_1-\mathcal X_2\right\|\left\|\nabla f\left(\mathcal X_1\right)-\nabla f\left(\mathcal X_2\right)\right\| \geq 0,
\end{aligned}
\end{equation}
where the last inequality uses the $L_f$-Lipschitz continuity of $\nabla f$. Thus, $ g\left(\mathcal X\right) $ is convex.
\end{proof}
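The gradient-monotonicity argument above can be observed numerically. In the sketch below, $f(x)=\sum_j\cos(x_j)$ serves as an illustrative nonconvex function with a $1$-Lipschitz gradient, so $g=f+\frac{1}{2}\|x\|^2$ should have a monotone gradient; the test points are random illustrative data.

```python
import numpy as np

# f(x) = sum(cos(x)) is nonconvex but has a 1-Lipschitz gradient,
# so g = f + (Lf/2)||x||^2 with Lf = 1 should be convex.  We check
# gradient monotonicity <grad g(x1) - grad g(x2), x1 - x2> >= 0.
rng = np.random.default_rng(2)
Lf = 1.0
grad_f = lambda x: -np.sin(x)            # gradient of sum(cos(x))
grad_g = lambda x: grad_f(x) + Lf * x    # gradient of g

ok = True
for _ in range(1000):
    x1, x2 = rng.standard_normal(6), rng.standard_normal(6)
    ok &= (grad_g(x1) - grad_g(x2)) @ (x1 - x2) >= -1e-12
print(bool(ok))  # True
```

The check succeeds because $\sin$ is $1$-Lipschitz, which is exactly the inequality used in the proof.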
\begin{lemma}\label{thm:SDC}
The sequence $\left\{\mathcal X^{t}\right\}_{t \in \mathbb{N}}$ generated by TNNR has the following properties:
\begin{itemize}
\item[(i)] $ H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right) $ is monotonically decreasing. Indeed,
\begin{equation}\label{pro:sdc}
H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right)-H_{\delta_{t}}\left(\mathcal X^{t}, \mathcal X^{t-1}\right) \le -\frac{\varepsilon L_f}{2}\left\|\mathcal X^{t+1}-\mathcal X^t\right\|^2;
\end{equation}
\item[(ii)] $\lim\limits_{t \rightarrow +\infty}\left(\mathcal X^{t}-\mathcal X^{t+1}\right)=0$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) From \eqref{weight} and Lemma \ref{lem:gradient}, we have
\begin{equation}\label{pro:weight}
\begin{aligned}
\rho\left(g_k^{t+1}\right) \leq \rho\left(g_k^{t}\right)+\alpha_k^{t}\left(g_k^{t+1}-g_k^{t}\right),\quad
\rho\left(h_i^{k,t+1}\right) \leq \rho\left(h_i^{k,t}\right)+ \beta_i^{k,t}\left(h_i^{k,t+1}-h_i^{k,t}\right),
\end{aligned}
\end{equation}
where
$$ \sigma_i^{k,t}=\sigma_i\left(\bar X^{(k,t)}\right),\, h_i^{k,t}=\rho\left(\sigma_i^{k,t}\right),\, g_k^{t}=\sum_{i=1}^{r}\rho\left(h_i^{k,t}\right) .$$
Since $ \mathcal X^{t+1} $ is a global solution to problem \eqref{X}, we get
\begin{equation}\label{pro:opt}
\begin{aligned}
&\lambda\sum_{k=1}^{n_3}\sum_{i=1}^{r}\alpha_k^t\beta_i^{k,t}\rho\left(\sigma_i^{k,t+1}\right)+\left\langle\nabla f\left(\mathcal Z^t\right), \mathcal X^{t+1}-\mathcal Y^t\right\rangle+\frac{\mu^t}{2}\left\|\mathcal X^{t+1}-\mathcal Y^t\right\|^{2} \\
\le & \lambda\sum_{k=1}^{n_3}\sum_{i=1}^{r}\alpha_k^t\beta_i^{k,t}\rho\left(\sigma_i^{k,t}\right) +\left\langle\nabla f\left(\mathcal Z^t\right), \mathcal X^{t}-\mathcal Y^t\right\rangle+\frac{\mu^t}{2}\left\|\mathcal X^{t}-\mathcal Y^t\right\|^{2}.
\end{aligned}
\end{equation}
From Lemma \ref{lem:smooth}, we obtain
\begin{align}\label{eq:smooth-jl}
f(\mathcal X^{t+1}) \leq f(\mathcal Z^t)+\langle\mathcal X^{t+1}-\mathcal Z^t, \nabla f(\mathcal Z^t)\rangle+\frac{L_f}{2}\|\mathcal X^{t+1}-\mathcal Z^t\|^{2}.
\end{align}
Moreover, by the convexity of $ f\left(\mathcal X\right)+\frac{L_f}{2}\left\|\mathcal X\right\|^2 $, it holds that
\begin{equation}\label{pro:con}
f\left(\mathcal Z^t\right)+\left\langle \mathcal X^{t}-\mathcal Z^{t}, \nabla f\left(\mathcal Z^{t}\right)\right\rangle \leq f\left(\mathcal X^{t}\right)+\frac{L_f}{2}\left\|\mathcal X^{t}-\mathcal Z^{t}\right\|^2.
\end{equation}
From \eqref{pro:weight}-\eqref{pro:con}, we have
\begin{equation}\label{pro:dec}
\begin{aligned}
F\left(\mathcal X^{t+1}\right)-F\left(\mathcal X^{t}\right)&\le-\frac{\mu^t}{2}\left\|\mathcal X^{t+1}-\mathcal Y^t\right\|^{2}+\frac{\mu^t}{2}\left\|\mathcal X^{t}-\mathcal Y^t\right\|^{2}+\frac{L_f}{2}\|\mathcal X^{t+1}-\mathcal Z^t\|^{2}+\frac{L_f}{2}\left\|\mathcal X^{t}-\mathcal Z^{t}\right\|^2\\
&= \mu^{t}\left\langle \mathcal Y^{t}-\mathcal X^{t}, \mathcal X^{t+1}-\mathcal X^{t}\right\rangle-\frac{\mu^t}{2}\left\|\mathcal X^{t+1}-\mathcal X^t\right\|^{2}+\frac{L_f}{2}\|\mathcal X^{t+1}-\mathcal Z^t\|^{2}+\frac{L_f}{2}\left\|\mathcal X^{t}-\mathcal Z^{t}\right\|^2.
\end{aligned}
\end{equation}
Denoting $\Delta_{t}:=\mathcal X^{t}-\mathcal X^{t-1}$, we have
\begin{equation}\label{del}
\theta_1^t\Delta_{t}=\mathcal Y^{t}-\mathcal X^{t},\quad \theta_2^t\Delta_{t}=\mathcal Z^{t}-\mathcal X^{t},\quad \theta_2^t\Delta_{t}-\Delta_{t+1}=\mathcal Z^{t}-\mathcal X^{t+1}.
\end{equation}
Combining this with \eqref{pro:dec}, we obtain
\begin{equation}\label{pro:cau}
\begin{aligned}
F\left(\mathcal X^{t+1}\right)-F\left(\mathcal X^{t}\right)&\le\mu^{t}\theta_1^t\left\langle \Delta_{t}, \Delta_{t+1}\right\rangle-\frac{\mu^t}{2}\left\|\Delta_{t+1}\right\|^{2}+\frac{L_f}{2}\|\theta_2^t\Delta_{t}-\Delta_{t+1}\|^{2}+\frac{L_f\left(\theta_2^t\right)^2}{2}\left\|\Delta_{t}\right\|^2\\
&\le \frac{L_f-\mu^{t}+\mu^{t}\theta_1^t-L_f\theta_2^t}{2}\left\|\Delta_{t+1}\right\|^2+\frac{2L_f\left(\theta_2^t\right)^2+\mu^{t}\theta_1^t-L_f\theta_2^t}{2}\left\|\Delta_{t}\right\|^2,
\end{aligned}
\end{equation}
where the second inequality follows from $ \mu^{t}\theta_1^t-L_f\theta_2^t\ge 0 $ and the Cauchy-Schwarz inequality.
Combining \eqref{pro:cau}, \eqref{H} and Assumption \ref{ass:pra}, one has
$$ H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right)-H_{\delta_{t}}\left(\mathcal X^{t}, \mathcal X^{t-1}\right) \le -\frac{\varepsilon \mu^t}{2}\left\|\mathcal X^{t+1}-\mathcal X^t\right\|^2\le -\frac{\varepsilon L_f}{2}\left\|\mathcal X^{t+1}-\mathcal X^t\right\|^2. $$
Thus $ H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right) $ is monotonically decreasing.
(ii) Summing the inequalities in \eqref{pro:sdc} over $ t\ge 1 $, we get
\begin{equation*}
\frac{\varepsilon L_f}{2} \sum_{t=1}^{+\infty}\left\|\mathcal X^{t+1}-\mathcal X^{t}\right\|^{2} \le H_{\delta_{1}}\left(\mathcal X^{1}, \mathcal X^{0}\right)-\inf_{t\ge 1}F\left(\mathcal X^{t}\right)<+\infty.
\end{equation*}
In particular, this implies that $\lim\limits_{t \rightarrow +\infty}\left(\mathcal X^{t}-\mathcal X^{t+1}\right)=0$.
\end{proof}
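The monotone-decrease mechanism of part (i) can be illustrated on a small convex instance. The sketch below is not the paper's TNNR solver: it runs an inertial proximal gradient method on $\min_x \frac{1}{2}\|Ax-b\|^2+\lambda\|x\|_1$ with a single extrapolation point ($\mathcal Y^t=\mathcal Z^t$, $\theta_1^t=\theta_2^t=\theta$) and monitors the Lyapunov function $H_t=F(x^t)+\delta\|x^t-x^{t-1}\|^2$; the step sizes, $\theta$, and data are illustrative, and $\delta$ is set to cancel the $\|\Delta_t\|^2$ coefficient from \eqref{pro:cau}.

```python
import numpy as np

# Toy inertial proximal gradient on min 0.5||Ax-b||^2 + lam||x||_1,
# with Y^t = Z^t = x^t + theta*(x^t - x^{t-1}).  We track
# H_t = F(x^t) + delta*||x^t - x^{t-1}||^2 and check it never increases,
# mirroring part (i) of the lemma.  All parameters are illustrative.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10)); b = rng.standard_normal(20)
lam = 0.1
Lf = np.linalg.norm(A.T @ A, 2)                  # Lipschitz constant of grad f
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

mu, theta = 1.2 * Lf, 0.2
# delta matches the ||Delta_t||^2 coefficient in the proof's bound
delta = (2 * Lf * theta**2 + mu * theta - Lf * theta) / 2

x_prev = x = np.zeros(10)
H = []
for t in range(200):
    y = x + theta * (x - x_prev)                 # inertial extrapolation
    x_prev, x = x, prox(y - grad(y) / mu, lam / mu)
    H.append(F(x) + delta * np.sum((x - x_prev) ** 2))
print(all(H[t + 1] <= H[t] + 1e-9 for t in range(len(H) - 1)))  # True
```

With $\mu>L_f$ and this $\delta$, the coefficient of $\|\Delta_{t+1}\|^2$ in \eqref{pro:cau} plus $\delta$ is strictly negative, which is what forces the printed monotonicity.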
\textbf{Now, we prove Theorem \ref{thm:CC}.}
\begin{proof}
(i) It follows from Lemma \ref{thm:SDC}-(i) that
\begin{equation*}
F\left(\mathcal X^{t+1}\right) \le H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right) \le H_{\delta_{1}}\left(\mathcal X^{1}, \mathcal X^{0}\right) < +\infty.
\end{equation*}
The boundedness of $ \left\lbrace\mathcal X^{t}\right\rbrace $ then follows from Assumption \ref{ass:bounded}. By the Bolzano-Weierstrass theorem \cite{Fri93}, $ \left\lbrace\mathcal X^{t}\right\rbrace $ has at least one accumulation point.
(ii) By the definition of $ H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right) $ in \eqref{H}, Assumption \ref{ass:bounded} and Lemma \ref{thm:SDC}-(i), the sequence $ \left\lbrace H_{\delta_{t+1}}\left(\mathcal X^{t+1}, \mathcal X^{t}\right) \right\rbrace $ is bounded and monotonically nonincreasing, hence convergent. Furthermore, since $\lim_{t \rightarrow +\infty}\left(\mathcal X^{t}-\mathcal X^{t+1}\right)=0$, the sequence $ \left\lbrace F\left(\mathcal X^{t}\right)\right\rbrace $ is convergent.
(iii) From the first-order optimality condition of \eqref{X}, we get
\begin{equation}
\mathcal{G}^{t_l+1}+\mu^{t_l}\left(\mathcal X^{t_l+1}-\mathcal Y^{t_l}\right)+\nabla f\left(\mathcal Z^{t_l}\right)=0,
\end{equation}
where
$$\mathcal{G}^{t_l+1} \in \partial\left(\lambda\sum_{k=1}^{n_3}\sum_{i=1}^{r}\alpha_k^{t_l}\beta_i^{k,t_l}\rho\left(\sigma_i^{k,t_l+1}\right)\right).$$
Thus
\begin{equation}\label{pro:gra}
\mathcal{G}^{t_l+1}+\mu^{t_l}\left(\Delta_{t_l+1}-\theta_1^{t_l}\Delta_{t_l}\right)+\nabla f\left(\mathcal X^{t_l}+\theta_2^{t_l}\Delta_{t_l}\right)=0.
\end{equation}
Since $\lim_{t \rightarrow +\infty}\left(\mathcal X^{t}-\mathcal X^{t+1}\right)=0$ by Lemma \ref{thm:SDC}, we have $\lim_{l \rightarrow +\infty} \mathcal X^{t_{l}+1}=\mathcal X^\star$. Thus, $\sigma_i^{k,t_l+1} \rightarrow \sigma_i^{k,\star}$
as $ l \rightarrow +\infty $. From Assumption \ref{ass:rho}, we conclude that $ \alpha_k^{t_l}\beta_i^{k,t_l}\rightarrow \alpha_k^{\star}\beta_i^{k,\star} $. Passing to the limit in \eqref{pro:gra}, by the continuity of $ \nabla f $ and the closedness of $\partial \rho$, we obtain
$$ 0\in \partial\left(\lambda\sum_{k=1}^{n_3}\sum_{i=1}^{r}\alpha_k^{\star}\beta_i^{k,\star}\rho\left(\sigma_i^{k,\star}\right)\right) +\nabla f\left(\mathcal X^\star\right)\subseteq \partial F\left(\mathcal X^\star\right) .$$
Thus, $ \mathcal X^\star$ is a stationary point of $ F\left(\mathcal X\right) $. From Assumptions \ref{ass:rho} and \ref{ass:smooth}, $ f $ and $ \rho $ are continuous, thus
\begin{equation*}
\begin{aligned}
\lim_{l \rightarrow +\infty}F\left(\mathcal X^{t_{l}}\right)
=\lim_{l \rightarrow +\infty}\left\lbrace\lambda\sum_{k=1}^{n_3}\rho\left(\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^{k,t_l}\right)\right)\right)+f\left(\mathcal X^{t_l}\right)\right\rbrace
=F\left(\mathcal X^\star\right).
\end{aligned}
\end{equation*}
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:whole}}
Before proving Theorem \ref{thm:whole}, we first present four lemmas.
\begin{lemma}\cite{KM11}\label{lem:equ}
Suppose that $\mathcal{A},\, \mathcal{B} $ are two arbitrary tensors, let $\mathcal{F}=\mathcal{A} * \mathcal{B}$, and let $ \mathcal Q $ be an orthogonal tensor. Then the following properties hold.
\begin{itemize}
\item[(i)] $\|\mathcal{A}\|^{2}=\frac{1}{n_{3}}\|\bar A\|^{2}$;
\item[(ii)] $\mathcal{F}=\mathcal{A} * \mathcal{B}$ and $\bar F=\bar A \bar B$ are equivalent to each other;
\item[(iii)] $ \left\| {\mathcal A}*\mathcal Q\right\|=\left\| {\mathcal A}\right\|. $
\end{itemize}
\end{lemma}
\begin{lemma}
Suppose that $\mathcal{A} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ and $\mathcal{B} \in\mathbb{R}^{n_{2} \times n_{4} \times n_{3}}$ are two arbitrary tensors. Then $ \left\|{\mathcal A}*\mathcal B\right\|\le \sqrt{n_3}\left\|{\mathcal A}\right\|\left\|\mathcal B\right\| $ and $ \left\|{\mathcal A}*\mathcal B\right\|\le \left\|{\mathcal A}\right\|\max_k\left\|\bar B^{(k)}\right\|_2 $.
\end{lemma}
\begin{proof}
From Lemma \ref{lem:equ}, we have
\begin{equation*}
\left\|{\mathcal A}*\mathcal B\right\|=\frac{1}{\sqrt{n_3}}\left\|\bar A\bar B\right\|\le\frac{1}{\sqrt{n_3}}\left\|\bar A\right\|\left\|\bar B\right\|=\sqrt{n_3}\left\|{\mathcal A}\right\|\left\|\mathcal B\right\|
\end{equation*}
and
\begin{equation*}
\left\|{\mathcal A}*\mathcal B\right\|=\frac{1}{\sqrt{n_3}}\left\|\bar A\bar B\right\|\le\frac{1}{\sqrt{n_3}}\left\|\bar A\right\|\left\|\bar B\right\|_2=\left\|{\mathcal A}\right\|\max_k\left\|\bar B^{(k)}\right\|_2,
\end{equation*}
where the last equality uses the block-diagonal structure of $\bar B$.
\end{proof}
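Both norm bounds can be checked numerically with the standard FFT-based implementation of the t-product (transform along the third mode, multiply the frontal slices, transform back; the Fourier-domain slices play the role of the blocks of $\bar A$, $\bar B$). The tensor sizes below are arbitrary illustrative choices.

```python
import numpy as np

# FFT-based t-product: C = A * B is computed slice-by-slice in the
# Fourier domain along the third mode.
def t_product(A, B):
    n3 = A.shape[2]
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.stack([Af[:, :, k] @ Bf[:, :, k] for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(0)
n1, n2, n3, n4 = 5, 4, 6, 3            # illustrative sizes
A = rng.standard_normal((n1, n2, n3))
B = rng.standard_normal((n2, n4, n3))
C = t_product(A, B)

lhs = np.linalg.norm(C)                # Frobenius norm of A * B
Bf = np.fft.fft(B, axis=2)
spec = max(np.linalg.norm(Bf[:, :, k], 2) for k in range(n3))
print(lhs <= np.sqrt(n3) * np.linalg.norm(A) * np.linalg.norm(B))  # True
print(lhs <= np.linalg.norm(A) * spec)                             # True
```

The second bound is typically much tighter, since it uses the spectral norm of the Fourier-domain slices rather than the full Frobenius norm of $\mathcal B$.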
\begin{lemma}\label{lem:whole}
Suppose that Assumptions \ref{ass:rho}-\ref{ass:pra} hold and $\delta_{t} \equiv \delta$ for all $t \in \mathbb{N}$. Let $\left\{\mathcal X^{t}\right\}$ be the sequence generated by TNNR. Consider the function
\begin{equation*}
\begin{aligned}
H: \mathbb{R}^{n_1\times n_2\times n_3} \times \mathbb{R}^{n_1\times n_2\times n_3} \rightarrow(-\infty,+\infty], \quad H(\mathcal X, \mathcal Y):=F(\mathcal X)+\delta\|\mathcal X-\mathcal Y\|^{2}.
\end{aligned}
\end{equation*}
Then $\left\{H\left(\mathcal X^{t+1}, \mathcal X^{t}\right)\right\}$ satisfies the following assertions:
\begin{itemize}
\item [(H1)] $H\left(\mathcal X^{t+1}, \mathcal X^{t}\right)+\frac{\varepsilon L_f}{2}\left\|\Delta_{t+1}\right\|^{2} \leq H\left(\mathcal X^{t}, \mathcal X^{t-1}\right)$ for all $t \geq 1$;
\item [(H2)] for all $t \in \mathbb{N}$, there exists $\omega^{t+1} \in \partial H\left(\mathcal X^{t+1}, \mathcal X^{t}\right)$ such that $$\left\|\omega^{t+1}\right\| \leq \left(L_f+\frac{\mu^0\pi+\left(1+\pi r \right)\pi^2 \tilde{L}L_\rho\sqrt{n_3r}}{\tau}+4\delta \right)\left(\left\|\Delta_{t+1}\right\|+\left\|\Delta_{t}\right\|\right);$$
\item[(H3)] there exists a subsequence $\left\lbrace\mathcal X^{t_l}\right\rbrace_{l \in \mathbb{N}}$ such that $ \lim_{l\rightarrow \infty}\mathcal X^{t_{l}}=\mathcal X^\star $, and it further holds that $\lim _{l \rightarrow \infty} H\left(\mathcal X^{t_l+1}, \mathcal X^{t_l}\right)=H\left(\mathcal X^\star, \mathcal X^\star\right)$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) The assertion in (H1) follows directly from Lemma \ref{thm:SDC}-(i).
(2) Using the first-order optimality condition of \eqref{X}, we have
\begin{equation*}
0\in\lambda\partial\left\|\mathcal X^{t+1}\right\|_{\rho_{\alpha^t}^{\beta^t}}+\mu^{t}\left(\mathcal X^{t+1}-\mathcal Y^{t}\right)+\nabla f\left(\mathcal Z^{t}\right).
\end{equation*}
Thus,
\begin{equation}\label{bound-1}
\lambda\mathcal U^{t+1}*\mathcal W_{t+1}*\left( \mathcal V^{t+1}\right)^*+\mu^{t}\left(\mathcal X^{t+1}-\mathcal Y^{t}\right)+\nabla f\left(\mathcal Z^{t}\right)=0,
\end{equation}
where $\mathcal X^{t+1}=\mathcal U^{t+1}*{\mathcal S^{t+1}}*\left(\mathcal V^{t+1}\right)^*$ is the t-SVD of $\mathcal X^{t+1}$ and
\begin{equation}
\bar W_{t+1}^{(k)}=\mathscr{D}^i_r\left(\alpha_k^t\beta_i^{k,t}\rho'\left(\sigma_i^{k,t+1}\right)\right),\quad\alpha_k^{t} = \rho'\left(\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^{k,t}\right)\right)\right),\quad
\beta_i^{k,t}= \rho'\left(\rho\left(\sigma_i^{k,t}\right)\right).
\end{equation}
From the definition of $ H\left(\mathcal X, \mathcal Y\right) $, we get $ \nabla_{\mathcal Y} H\left(\mathcal X^{t+1}, \mathcal Y^{t+1}\right)=-2 \delta(\mathcal X^{t+1}-\mathcal Y^{t+1}) $ and
\begin{equation*}
\lambda\partial\sum_{k=1}^{n_3}\rho\left(\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^{k,t+1}\right)\right)\right)+\nabla f(\mathcal X^{t+1})+2 \delta(\mathcal X^{t+1}-\mathcal Y^{t+1}) \subseteq \partial_{\mathcal X} H\left(\mathcal X^{t+1}, \mathcal Y^{t+1}\right).
\end{equation*}
Thus,
\begin{equation}\label{bound-2}
\lambda\mathcal U^{t+1}*\mathcal M_{t+1}*\left( \mathcal V^{t+1}\right)^*+\nabla f(\mathcal X^{t+1})+2 \delta(\mathcal X^{t+1}-\mathcal Y^{t+1}) \in \partial_{\mathcal X} H\left(\mathcal X^{t+1}, \mathcal Y^{t+1}\right),
\end{equation}
where $ \bar M_{t+1}^{(k)}=\mathscr{D}^i_r\left(\alpha_k^{t+1}\beta_i^{k,t+1}\rho'\left(\sigma_i^{k,t+1}\right)\right) $. Let $ \mathcal D_t $ be the tensor with $ \bar D_{t}^{(k)}=\mathscr{D}^i_r\left(\alpha_k^t\beta_i^{k,t}\right) $. From Assumption \ref{ass:rho}, $ \mathcal D_t $ is invertible. From \eqref{bound-1} and \eqref{bound-2}, we obtain
\begin{equation*}
\begin{aligned}
\omega^{t+1}_\mathcal X:&=-\mathcal U^{t+1}*\mathcal D_{t+1}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\left(\mu^{t}\left(\mathcal X^{t+1}-\mathcal Y^{t}\right)+\nabla f\left(\mathcal Z^{t}\right)\right)+\nabla f(\mathcal X^{t+1})+2 \delta(\mathcal X^{t+1}-\mathcal Y^{t+1})\\ &\in \partial_{\mathcal X} H\left(\mathcal X^{t+1}, \mathcal Y^{t+1}\right).
\end{aligned}
\end{equation*}
Letting $ \omega_\mathcal Y^{t+1}:=-2 \delta(\mathcal X^{t+1}-\mathcal X^{t}) $, we get $\omega^{t+1}:=\left(\omega_\mathcal X^{t+1}, \omega_\mathcal Y^{t+1}\right) \in \partial H\left(\mathcal X^{t+1}, \mathcal X^{t}\right)$.
It follows from Theorem \ref{thm:CC}-(i) that $\left\{\mathcal X^{t}\right\}$ is bounded. Then the sequence $\left\{\mathcal Z^{t}\right\}$ is also bounded by \eqref{X}. Since $\nabla f$ is continuous, there exists $\tilde{L}>0$ such that
$$
\left\|\nabla f\left(\mathcal Z^{t}\right)\right\| \leq \tilde{L} .
$$
Since $\rho^{\prime}$ is positive and continuous, and $\left\{\sigma_i^{k,t}\right\}$ is bounded (by the boundedness of $\left\{\mathcal X^{t}\right\}$) for any $t$, $ k\in[m] $ and $i \in [r]$, there exist $\tau,\, \pi>0$ such that
$$
\tau \leq \alpha_k^t\beta_i^{k,t} \leq \pi,\quad \alpha_k^t \leq \pi,\quad \beta_i^{k,t} \leq \pi,\quad
\rho'\left(\sigma_i^{k,t}\right) \leq \pi.
$$
Hence, we have
\begin{equation}\label{bound-3}
\max_k\left\|\left(\bar D_{t}^{(k)}\right)^{-1}\right\|_2 \leq \max _{i,k} \frac{1}{\alpha_k^t\beta_i^{k,t}} \leq \frac{1}{\tau}
\end{equation}
and
\begin{equation}\label{bound-4}
\max_k\left\|\bar D_{t+1}^{(k)}\left(\bar D_{t}^{(k)}\right)^{-1}\right\|_2 \leq \max _{i,k} \frac{\alpha_k^{t+1}\beta_i^{k,t+1}}{\alpha_k^t\beta_i^{k,t}} \leq \frac{\pi}{\tau}.
\end{equation}
Combining \eqref{bound-3} and \eqref{bound-4}, we get
\begin{equation}\label{bound-5}
\begin{aligned}
\left\|\omega^{t+1}\right\| \le& \left\|\omega_{\mathcal X}^{t+1}\right\|+ \left\|\omega_{\mathcal Y}^{t+1}\right\|\\
\le& \left\|\nabla f(\mathcal X^{t+1})-\mathcal U^{t+1}*\mathcal D_{t+1}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\nabla f\left(\mathcal Z^{t}\right)\right\|+\mu^t\left\|\mathcal X^{t+1}-\mathcal Y^{t}\right\|\max_k\left\|\bar D_{t+1}^{(k)}\left(\bar D_{t}^{(k)}\right)^{-1}\right\|_2\\
&+4\delta\left\|\Delta_{t+1}\right\|\\
\le& \left\|\nabla f\left(\mathcal Z^{t}\right)-\mathcal U^{t+1}*\mathcal D_{t+1}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\nabla f\left(\mathcal Z^{t}\right)\right\|+\left\|\nabla f\left(\mathcal X^{t+1}\right)-\nabla f\left(\mathcal Z^{t}\right)\right\|+ \frac{\mu^t\pi}{\tau}\left\|\mathcal X^{t+1}-\mathcal Y^{t}\right\| \\
&+4\delta\left\|\Delta_{t+1}\right\|\\
\le&\left\|\mathcal U^{t+1}*\mathcal D_{t}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\nabla f\left(\mathcal Z^{t}\right)-\mathcal U^{t+1}*\mathcal D_{t+1}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\nabla f\left(\mathcal Z^{t}\right)\right\|\\
&+ L_f\left\|\Delta_{t+1}\right\|+L_f\theta_2^t\left\|\Delta_{t}\right\|+\frac{\mu^t\pi}{\tau}\left\|\Delta_{t+1}\right\|+\frac{\mu^t\theta_1^t\pi}{\tau}\left\|\Delta_{t}\right\|+4\delta\left\|\Delta_{t+1}\right\|\\
\le& \frac{\sqrt{n_3}\tilde{L}}{\tau}\left\|\mathcal D_{t+1}-\mathcal D_{t} \right\|+\left(L_f\theta_2^t+\frac{\mu^t\theta_1^t\pi}{\tau}\right)\left\|\Delta_{t}\right\|+\left(L_f+\frac{\mu^t\pi}{\tau}+4\delta \right)\left\|\Delta_{t+1}\right\|,
\end{aligned}
\end{equation}
where the last inequality follows from
\begin{equation*}
\begin{aligned}
&\left\|\mathcal U^{t+1}*\mathcal D_{t}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\nabla f\left(\mathcal Z^{t}\right)-\mathcal U^{t+1}*\mathcal D_{t+1}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^**\nabla f\left(\mathcal Z^{t}\right)\right\|\\
\le &\sqrt{n_3}\left\|\mathcal U^{t+1}*\mathcal D_{t}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^*-\mathcal U^{t+1}*\mathcal D_{t+1}*\mathcal D_{t}^{-1}*\left(\mathcal U^{t+1}\right)^*\right\|\left\|\nabla f\left(\mathcal Z^{t}\right)\right\|\\
\le &\sqrt{n_3}\tilde{L}\left\|\mathcal D_{t}*\mathcal D_{t}^{-1}-\mathcal D_{t+1}*\mathcal D_{t}^{-1}\right\|\\
\le & \sqrt{n_3}\tilde{L}\left\|\mathcal D_{t}-\mathcal D_{t+1}\right\|\max_k\left\|\left(\bar D_{t}^{(k)}\right)^{-1}\right\|_2\\
\le &\frac{\sqrt{n_3}\tilde{L}}{\tau}\left\|\mathcal D_{t+1}-\mathcal D_{t} \right\|.
\end{aligned}
\end{equation*}
It follows from Assumption \ref{ass:rho} that
\begin{equation*}
\begin{aligned}
&\left| \alpha_k^{t+1}\beta_i^{k,t+1}-\alpha_k^t\beta_i^{k,t}\right| \\
\le&\left| \alpha_k^{t+1}\beta_i^{k,t+1}-\alpha_k^{t+1}\beta_i^{k,t}\right|+\left| \alpha_k^{t+1}\beta_i^{k,t}-\alpha_k^t\beta_i^{k,t}\right|\\
\le&\pi\left| \beta_i^{k,t+1}-\beta_i^{k,t}\right|+\pi\left| \alpha_k^{t+1}-\alpha_k^t\right|\\
\le&\pi L_\rho\left| \rho\left(\sigma_i^{k,t+1}\right)-\rho\left(\sigma_i^{k,t}\right)\right|+\pi L_\rho\left| \sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^{k,t+1}\right)\right)-\sum_{i=1}^{r} \rho\left(\rho\left(\sigma_i^{k,t}\right)\right)\right|\\
\le&\pi^2 L_\rho\left| \sigma_i^{k,t+1}-\sigma_i^{k,t}\right|+\pi^2 L_\rho\sum_{i=1}^{r}\left| \rho\left(\sigma_i^{k,t+1}\right)- \rho\left(\sigma_i^{k,t}\right)\right|\\
\le & \pi^2 L_\rho\left| \sigma_i^{k,t+1}-\sigma_i^{k,t}\right|+\pi^3 L_\rho\sum_{i=1}^{r}\left| \sigma_i^{k,t+1}- \sigma_i^{k,t}\right|\\
\le & \pi^2 L_\rho\left\| \bar X^{(k,t+1)}-\bar X^{(k,t)}\right\|+\pi^3 L_\rho\sum_{i=1}^{r}\left\| \bar X^{(k,t+1)}-\bar X^{(k,t)}\right\|\\
=&\left(1+\pi r \right)\pi^2 L_\rho\left\| \bar X^{(k,t+1)}-\bar X^{(k,t)}\right\|,
\end{aligned}
\end{equation*}
where the last inequality follows from \cite[Theorem 3.3.16]{Mer92}. Thus,
\begin{equation}\label{bound-6}
\sqrt{n_3}\left\|\mathcal D_{t+1}-\mathcal D_{t} \right\|=\sum_{k=1}^{n_3}\left\|\bar D_{t+1}^{(k)}-\bar D_{t}^{(k)} \right\|\le\left(1+\pi r \right)\pi^2 L_\rho\sqrt{r}\sum_{k=1}^{n_3}\left\| \bar X^{(k,t+1)}-\bar X^{(k,t)}\right\|=\left(1+\pi r \right)\pi^2 L_\rho\sqrt{n_3r}\left\| \Delta_{t+1}\right\|.
\end{equation}
Combining \eqref{bound-5} and \eqref{bound-6}, we obtain
\begin{equation*}
\begin{aligned}
\left\|\omega^{t+1}\right\|&\le \left(L_f\theta_2^t+\frac{\mu^t\theta_1^t\pi}{\tau}\right)\left\|\Delta_{t}\right\|+\left(L_f+\frac{\mu^t\pi+\left(1+\pi r \right)\pi^2 \tilde{L}L_\rho \sqrt{n_3r}}{\tau}+4\delta \right)\left\|\Delta_{t+1}\right\|\\
&\le\left(\frac{L_f}{2}+\frac{\mu^0\pi}{2\tau}\right)\left\|\Delta_{t}\right\|+\left(L_f+\frac{\mu^0\pi+\left(1+\pi r \right)\pi^2 \tilde{L}L_\rho\sqrt{n_3r}}{\tau}+4\delta \right)\left\|\Delta_{t+1}\right\|\\
&\le \left(L_f+\frac{\mu^0\pi+\left(1+\pi r \right)\pi^2 \tilde{L}L_\rho\sqrt{n_3r}}{\tau}+4\delta \right)\left(\left\|\Delta_{t}\right\|+\left\|\Delta_{t+1}\right\|\right).
\end{aligned}
\end{equation*}
(3) The proof is similar to that of Theorem \ref{thm:CC}-(ii) and is therefore omitted.
\end{proof}
\textbf{Now we prove Theorem \ref{thm:whole}.}
\begin{proof}
The conclusion follows from Lemma \ref{lem:whole} together with \cite[Theorem 2]{WL19}.
\end{proof}
\twocolumn
\end{document}
\begin{document}
\title{Conditional Belief, Knowledge and Probability}
\begin{abstract}
A natural way to represent beliefs and the process of updating
beliefs is presented by Bayesian probability theory, where belief of
an agent $a$ in $P$ can be interpreted as $a$ considering that $P$ is more
probable than not $P$. This paper attempts to get at the core logical
notion underlying this.
The paper presents a sound and complete neighbourhood logic for
conditional belief and knowledge, and traces the connections with
probabilistic logics of belief and knowledge. The key notion in this
paper is that of an agent $a$ believing $P$ conditionally on having
information $Q$, where it is assumed that $Q$ is compatible with
what $a$ knows.
Conditional neighbourhood logic can be viewed as a core system for
reasoning about subjective plausibility that is not yet committed to
an interpretation in terms of numerical probability. Indeed, every
weighted Kripke model gives rise to a conditional neighbourhood
model, but not vice versa. We show that our calculus for
conditional neighbourhood logic is sound but not complete for
weighted Kripke models. Next, we show how to extend the calculus
to get completeness for the class of weighted Kripke models.
Neighbourhood models for conditional belief are closed under model
restriction (public announcement update), while earlier
neighbourhood models for belief as `willingness to bet' were
not. Therefore the logic we present improves on earlier
neighbourhood logics for belief and knowledge. We present complete
calculi for public announcement and for publicly revealing the
truth value of propositions using reduction axioms. The reductions show
that adding these announcement operators to the language does not increase
expressive power.
\end{abstract}
\section{Introduction}\label{section:int}
This paper aims at isolating a core logic of rational belief and
belief update that is compatible with the Bayesian picture of rational
inference \cite{Jaynes:pttlos}, but that is more general, in the
sense that it does not force epistemic weight models (that is, models
with fixed subjective probability measures, for each agent) on us.
Epistemic neighbourhood models, as defined in \cite{EijckRenne2016:upkb},
represent belief as truth in a neighbourhood, where the neighbourhoods
for an agent are subsets of the current knowledge cell of that agent.
Intuitively, a neighbourhood lists those propositions compatible with
what the agent knows that the agent considers as more likely than
their complements. Since we intend to use neighbourhood semantics to
model a certain kind of belief, it is natural to study belief
updates. However, there is an annoying obstacle for updates even in a
very simple case: public announcement.
Intuitively, a public announcement would make an agent restrict
his/her belief to the announced case. A natural way to implement this
is by restricting every belief-neighbourhood to $\phi$-worlds after
announcing $\phi$. The following example shows that this does not
work, because this kind of update does not preserve reasonable
neighbourhood conditions. Suppose Alice, somewhat irrationally,
believes that ticket $t$ she has bought is the winning ticket in a
lottery that she knows has 10,000 tickets. Let $n$ represent the
world where ticket $n$ is winning (we assume that this is a single
winner lottery, and that Alice knows this). Then Alice's belief is
given by a neighbourhood model with
\[
N_a (n) = \{ X \subseteq \{0000,\ldots,9999\} \mid t \in X \},
\]
for all $n \in \{ 0000, \ldots, 9999 \}$. Note that Alice's belief
does not depend on the world she is in: if $n,m$ are different worlds,
then $N_a (n) = N_a (m)$.
Assume Alice gets the information that some ticket $v$ different from
the ticket $t$ that she bought is the winning ticket. Let $p$ be such
that $V(p) = \{ v \}$. Then updating with $p$ leads to an updated
model with world set $\{v\}$, and with
$N'_{a}(v)=\{X\cap\{v\}\mid X\in N_{a}(v)\}=\{\emptyset,\{v\}\}$.
However, $N'_{a}(v)$ is not a neighbourhood function, for it
contradicts the condition that beliefs should be consistent (different
from $\emptyset$).
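The failure is easy to reproduce mechanically. The following sketch shrinks the lottery to four tickets (a toy version; names and sizes are ours) and restricts Alice's neighbourhood to the announced world, at which point the empty set appears among her beliefs:

```python
from itertools import combinations

def powerset(s):
    # all subsets of s, as frozensets
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

W = frozenset(range(4))   # 4 tickets instead of 10,000, for brevity
t, v = 0, 2               # Alice's ticket t; the announced winner v != t

N = {X for X in powerset(W) if t in X}        # Alice believes: "ticket t wins"
restricted = {X & frozenset({v}) for X in N}  # naive restriction to the v-world

assert restricted == {frozenset(), frozenset({v})}
assert frozenset() in restricted   # an inconsistent belief: no-inconsistency fails
print("naive update breaks no-inconsistency")
```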
The rest of the paper is structured as follows. Section
\ref{section:cns} introduces conditional neighbourhood semantics as an
enrichment of epistemic neighbourhood semantics, and presents a
complete calculus for it. In Section \ref{section:wei} we show that
this calculus is sound but not complete for epistemic weight models,
and next, that our language is expressive enough to allow an extension
to a complete system for weight models. Section \ref{section:pub}
shows that conditional neighborhood models are an excellent starting
point for an extension with public announcement update
operators. Section \ref{section:cafw} traces connections with the
literature, lists further work, and concludes.
\section{Conditional Neighbourhood Semantics} \label{section:cns}
Epistemic neighbourhood models are defined in
\cite{EijckRenne2016:upkb} as epistemic models
$\mathfrak{M}=(W,\sim,V)$ with a neighbourhood function $N$ added to
them. The neighbourhood function assigns to each agent $a$ and each
world $w\in W$ a neighbourhood $N_a(w)$ that consists of the set of
propositions that agent $a$ believes in $w$.
Intuitively each element in neighbourhood $N_a(w)$ represents a belief
agent $a$ holds. Usually belief is bolder than knowledge. Indeed, most
people believe many things of which they are not sure. If we equate
certainty with knowledge, then this means that any belief of agent $a$
should be a subset of agent $a$'s current \emph{knowledge cell}, i.e.,
the set $[w]_a=\{u\in W\mid w\sim_au\}$. It follows that each
proposition in neighbourhood $N_a(w)$ is a subset of $[w]_a$. Thus it
is natural (in the framework of epistemic modal logic) to assume the
following neighbourhood conditions:
\begin{description}
\item [Monotonicity] If an agent believes $X$, and knows that $X$
entails $Y$, then the agent believes $Y$.
\item [No-inconsistency] An agent does not hold an inconsistent belief.
\item [Determinacy] An agent does not believe both a proposition and its
complement.
\end{description}
However, as illustrated by Alice's lottery example in the introduction,
public announcement updates do not preserve the
\textbf{No-inconsistency} condition. In order to overcome this
problem, we propose to enrich neighbourhood functions $N$ with an
extra parameter for propositions. In other words, instead of focusing on what
agents believe, we turn our attention to what agents would believe
under some assumption. Following this intuition, for each proposition
$X$, $N_{a}^{w}(X)$ is a set of propositions such that each of these
propositions represents a belief agent $a$ holds at state $w$ when
supposing $X$. In this paper, we are interested in beliefs as
`willingness to bet', i.e., an agent believes a proposition $Y$
supposing $X$ if the agent considers $Y\cap X$ more likely to be
true than its complement conditioned by $X$, namely $-Y\cap X$. We
also assume the following postulate:
\begin{description}
\item [Equivalence of Conditions] If an agent knows that two conditions are
equivalent, then the agent's beliefs are the same under both conditions.
\end{description}
For non-equivalent conditions, on the other hand, conditional beliefs
may vary.
Assume $p$ ranges over a set of proposition letters $P$, and $a$ over
a finite set of agents $A$. The language for conditional
neighbourhood logic ${\mathcal L}_{CN}$ is given by the following BNF
definition:
\[
\phi::=\top\mid p\mid\neg\phi\mid\left(\phi\wedge\phi\right)\mid B_{a}(\phi,\phi)
\]
$B_{a}(\phi,\psi)$ can be read as ``assuming $\phi$, agent $a$
believes (is willing to bet) $\psi$''. $\perp,\vee,\to,\leftrightarrow$ are defined as
usual. Somewhat arbitrarily, we assume that conditioning with
information that contradicts what the agent knows (is certain of) will
cause an agent to believe nothing anymore. This means we can define
knowledge in terms of conditional belief, as follows. Use $K_{a}\phi$
for $\neg B_{a}(\neg\phi,\top)$ (which can be read as ``$\neg\phi$
contradicts what agent $a$ is certain of''), and
$\check{K}_{a}\phi$ for $\neg K_{a}\neg\phi$.
Consider Alice's lottery situation again. Alice knows there are
10,000 lottery tickets numbered 0000 through 9999. Alice believes
ticket $t$ is winning (and buys it). Let $n$ represent the world
where ticket $n$ is winning. Then Alice's belief is given by a
conditional neighbourhood model with
$N^w_a (X) = \{ Y \subseteq X
\mid t \in Y \}$ if
$t \in X$, and
$N^w_a (X) = \{ Y \subseteq X \mid |Y| > \frac 1 2 | X | \}$ if
$t \notin X$. Now Alice receives the information that $v \neq t$ is
the winning ticket. Then $v = w$, the updated model has universe
$\{ v \}$, and Alice updates her belief to $N'$ with
${N'}^v_a (\{ v \}) = \{ \{ v \} \}$. In the updated model, Alice
knows that $v$ is the winning ticket.
\begin{definition}
\label{def:NeighborModel} Let ${\text{\it Ag}}$ be a finite set of agents. A
conditional neighbourhood model ${\mathfrak M}$ is a tuple $\left(W,N,V\right)$
where:
\begin{itemize}
\item $W$ is a non-empty set of worlds;
\item $N:{\text{\it Ag}}\times W\times\mathcal{P}W\to\mathcal{P}\mathcal{P}W$ is a function that assigns to every agent $a\in{\text{\it Ag}}$, every world $w\in W$ and
set of worlds $X\subseteq W$ a collection $N_{a}^{w}(X)$ of sets of
worlds\textendash each such set called a neighbourhood of
$X$\textendash subject to the following conditions, where
\[
[w]_{a}=\{v\in W\mid\forall X\subseteq W,N_{a}^{w}(X)=N_{a}^{v}(X)\}:
\]
\begin{description}
\item [{(c)}] $\forall Y\in N_{a}^{w}(X):Y\subseteq X\cap[w]_{a}$.
\item [{(ec)}] $\forall Y\subseteq W$: if $X\cap[w]_{a}=Y\cap[w]_{a}$,
then $N_{a}^{w}(X)=N_{a}^{w}(Y)$.
\item [{(d)}] $\forall Y\in N_{a}^{w}(X),X\cap[w]_{a}-Y\notin N_{a}^{w}(X)$.
\item [{(sc)}] $\forall Y,Z\subseteq X\cap[w]_{a}:$ if $X\cap[w]_{a}-Y\notin N_{a}^{w}(X)$
and $Y\subsetneq Z$, then $Z\in N_{a}^{w}(X)$.
\end{description}
\item $V$ is a valuation.
\end{itemize}
\end{definition}
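For finite models the four conditions of the definition can be checked by brute force. The sketch below (single agent, with an assumed encoding of $N$ as nested dictionaries mapping worlds and conditions to sets of frozensets) verifies (c), (ec), (d) and (sc) on a simple ``majority'' model, where a proposition is believed exactly when it covers more than half of the conditioning set:

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def knowledge_cell(N, W, w):
    # [w]_a: worlds whose conditional neighbourhoods agree with w everywhere
    return frozenset(v for v in W
                     if all(N[v][X] == N[w][X] for X in subsets(W)))

def check_conditions(W, N):
    """Check (c), (ec), (d), (sc) of the definition for a single agent."""
    W = frozenset(W)
    for w in W:
        cell = knowledge_cell(N, W, w)
        for X in subsets(W):
            base = X & cell
            assert all(Y <= base for Y in N[w][X])                      # (c)
            assert all(N[w][X] == N[w][Z]
                       for Z in subsets(W) if Z & cell == base)         # (ec)
            assert all(base - Y not in N[w][X] for Y in N[w][X])        # (d)
            for Y in subsets(base):                                     # (sc)
                if base - Y not in N[w][X]:
                    assert all(Z in N[w][X] for Z in subsets(base) if Y < Z)
    return True

# A "majority" model on three worlds, with a single knowledge cell:
W = frozenset({0, 1, 2})
N = {w: {X: {Y for Y in subsets(X) if 2 * len(Y) > len(X)}
         for X in subsets(W)}
     for w in W}
assert check_conditions(W, N)
print("all neighbourhood conditions hold")
```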
We call $N$ a neighbourhood function; a neighbourhood $N_{a}^{w}(X)$ for agent
$a$ in $w$, conditioned by $X$ is a set of propositions each of which
agent $a$ believes more likely to be true than its complement.
Property (c) expresses that what is believed is also known; (ec) expresses
\textbf{equivalence of conditions}; (d) expresses \textbf{determinacy};
(sc) expresses a form of ``strong commitment'': if the agent does not
believe the complement of $Y$, then she must believe every $Z$ strictly weaker
than $Y$. It can be proved (see Appendix \ref{appendix:alternativeDef}, Lemma \ref{lem:implied-conditions}) that
these conditions together imply that any conditional neighbourhood model
${\mathfrak M} = (W,N,V)$ also satisfies the following, for any $a \in A$,
$w \in W$, $X \subseteq W$:
\begin{description}
\item [{(m)}] $\forall Y\subseteq Z\subseteq X\cap[w]_{a}:$ if $Y\in N_{a}^{w}(X)$,
then $Z\in N_{a}^{w}(X)$;
\item [{(ni)}] $\emptyset\notin N_{a}^{w}(X)$;
\item [{(n){*}}] if $X\cap[w]_{a}\neq\emptyset$, then $X\cap[w]_{a}\in N_{a}^{w}(X)$;
\item [{($\emptyset$)}] if $X\cap[w]_{a}=\emptyset$, then
$N_{a}^{w}(X)=\emptyset$;
\end{description}
where (m) and (ni) express \textbf{monotonicity} and \textbf{no-inconsistency}, respectively. ($\emptyset$) expresses that conditioning with
information that contradicts what the agent knows will
cause the agent to believe nothing anymore. Note that ($\emptyset$) reflects our definition of the $K$-operators: $K_a\phi::=\neg B_a(\neg\phi,\top)$.
Let ${\mathfrak M}=\left(W,N,V\right)$ be a conditional neighbourhood
model, let $w\in W$. Then the key clause of the truth definition is
given by:
\[
\begin{array}{lcl}
{\mathfrak M},w\vDash B_{a}(\phi,\psi) & \text{iff} & \text{for some }Y\in N_{a}^{w}(\left\llbracket \phi\right\rrbracket _{{\mathfrak M}})\text{, }Y\subseteq\left\llbracket \psi\right\rrbracket _{{\mathfrak M}}
\end{array}
\]
where $\left\llbracket \phi\right\rrbracket _{{\mathfrak M}}=\{w\in W\mid{\mathfrak M},w\vDash\phi\}$.
Because of (m), we can prove that
\[
{\mathfrak M},w\vDash B_{a}(\phi,\psi)\text{ iff }\{v\in\left\llbracket \phi\right\rrbracket _{{\mathfrak M}}\cap[w]_{a}\mid{\mathfrak M},v\vDash\psi\}\in N_{a}^{w}(\left\llbracket \phi\right\rrbracket _{{\mathfrak M}}).
\]
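This equivalent formulation is convenient for model checking: instead of searching for a witness $Y$, one tests a single set for membership in the neighbourhood. A small single-agent sketch, using an assumed toy model with majority beliefs over three worlds:

```python
from itertools import combinations

W = frozenset({0, 1, 2})
subs = [frozenset(c) for r in range(4) for c in combinations(W, r)]
# Single knowledge cell W; "majority" neighbourhoods as a stand-in model.
N = {w: {X: {Y for Y in subs if Y <= X and 2 * len(Y) > len(X)} for X in subs}
     for w in W}

def sat_B(w, phi, psi, cell=W):
    # B_a(phi, psi) holds at w iff the set of phi-and-psi worlds inside the
    # conditioning set is itself one of the believed propositions.
    X = frozenset(v for v in W if phi(v))
    Y = frozenset(v for v in X & cell if psi(v))
    return Y in N[w][X]

assert sat_B(0, lambda v: True, lambda v: v < 2)       # {0,1} is a majority of W
assert not sat_B(0, lambda v: True, lambda v: v == 0)  # a single world is not
assert not sat_B(0, lambda v: v < 2, lambda v: v == 0) # nor is {0} within {0,1}
```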
It is worth noting that, by (ni), $B_a(\phi,\bot)$ is false at every world of every model, for any agent $a$ and formula $\phi$.
Note that conditional neighborhood models do not have epistemic
relations $\sim_a$. However, such relations can be induced by the
neighbourhood function as follows: for each $a\in Ag$,
$\sim_{a}\ \subseteq W\times W$ is a relation such that
\[
\forall w,v\in W,w\sim_{a}v \text{ iff }\forall X\subseteq W,\
N_{a}^{w}(X)=N_{a}^{v}(X).
\]
Then
${\mathfrak M},w\vDash K_{a}\phi\text{ iff for each }v\sim_{a}w\text{,
}{\mathfrak M},v\vDash\phi$.
It can be proved that the version of conditional neighbourhood models
with epistemic relations $\sim_a$ is equivalent to the version without
(see Appendix \ref{appendix:alternativeDef}). The equivalence is guaranteed by properties (n)*, ($\emptyset$) and one further property, which can be found in \cite{DBLP:conf/atal/BalbianiDL16} (there, however, plain neighbourhoods rather than conditional neighbourhoods are used for beliefs):
\begin{lyxlist}{00.00.0000}
\item [{(a)}] $\forall v\in\left[w\right]_{a}:N_{a}^{w}(X)=N_{a}^{v}(X)$,
\end{lyxlist}
which states that if an agent cannot distinguish two worlds, then the agent holds the same beliefs at either of them.
Thus we do
not differentiate conditional neighbourhood models with or without
such relations.
\begin{figure}
\caption{The CN Calculus for Conditional Neighbourhood Logic}
\label{figure:CNcalculus}
\end{figure}
In Figure \ref{figure:CNcalculus}, axiom (D) corresponds to neighbourhood condition (d), (EC) to (ec), (M) to (m), (C) to (c), and (SC) to (sc).
\begin{theorem} \label{thm:CompCN}
The CN calculus for Conditional
Neighbourhood logic given in Figure \ref{figure:CNcalculus} is sound
and complete for conditional neighbourhood models.
\end{theorem}
\begin{proof}
See Appendix \ref{appendix:CompCNproof}.
\end{proof}
Note that the
calculus does not have $4$ and $5$ for $K$; this is because these
principles are derivable from (5B) and (4B).
For an example of an interesting principle that can already be proved
in the CN calculus, consider the following. Suppose we have a biased
coin with unknown bias, and we want to use it to simulate a fair coin.
Then we can use a recipe first proposed by John von Neumann
\cite{Neumann1951:vt}: toss the coin twice. If the two outcomes
differ, use the first result; otherwise, discard the outcomes and
repeat the procedure. Why does this work? Because we can assume that
two tosses of the same coin have the same likelihood of showing heads,
even if the coin is biased. We can express subjective likelihood
comparison in our language. See Figure \ref{fig:Comparison}, where we
use $\alpha$ for ``the first toss comes up with heads'', and $\beta$
for ``the second toss comes up with heads''. This hinges on the
following principle:
\begin{quote}
If $\alpha$ and $\beta$ have the same likelihood, then
$\alpha \land \neg \beta$ and $\neg \alpha \land \beta$
should also have the same likelihood, and vice versa.
\end{quote}
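The procedure is easy to simulate; the symmetry it exploits is that heads-then-tails and tails-then-heads each occur with probability $p(1-p)$, whatever the bias $p$. A sketch with an arbitrarily chosen bias of $0.8$:

```python
import random

def biased_coin(p, rng):
    # True = heads, with probability p
    return rng.random() < p

def von_neumann_fair(p, rng):
    # Toss the biased coin twice; if the outcomes differ, return the first
    # result, otherwise discard both tosses and repeat.
    while True:
        a, b = biased_coin(p, rng), biased_coin(p, rng)
        if a != b:
            return a

rng = random.Random(42)
n = 100_000
heads = sum(von_neumann_fair(0.8, rng) for _ in range(n))
print(heads / n)                     # close to 0.5 despite the 0.8 bias
assert abs(heads / n - 0.5) < 0.01
```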
Notice that $B_{a}(\alpha\leftrightarrow\neg\beta,\alpha)$ expresses
that agent $a$ considers $\alpha \land \neg \beta$ more likely than
$\neg \alpha \land \beta$. And it follows from the comparison principle in Figure \ref{fig:Comparison} that this
is equivalent to: $a$ considers $\alpha$ more likely than $\beta$. From
now on, let $\alpha \succ_{a}\beta$ abbreviate $B_{a}(\alpha\leftrightarrow\neg\beta,\alpha)$.
By axioms $M$ and $C$, $B_a( \alpha \leftrightarrow \neg \beta, \beta)
\leftrightarrow B_a( \alpha \leftrightarrow \neg \beta, \neg \alpha)$, and
by axiom $D$, $B_a( \alpha \leftrightarrow \neg \beta, \alpha) \rightarrow \neg B_a( \alpha \leftrightarrow \neg \beta, \neg \alpha)$. Therefore,
$B_a( \alpha \leftrightarrow \neg \beta, \alpha) \rightarrow \neg B_a( \alpha \leftrightarrow \neg \beta, \beta)$ is provable in the CN calculus. In terms of the abbreviation: $\alpha \succ_{a}\beta \rightarrow
\neg (\beta \succ_{a} \alpha)$. We therefore have three mutually exclusive cases:
\begin{itemize}
\item $\alpha \succ_{a}\beta$.
\item $\beta \succ_{a} \alpha$.
\item $\neg (\alpha \succ_{a}\beta) \land \neg (\beta \succ_{a} \alpha)$.
\end{itemize}
Agreeing to abbreviate the third case as $\alpha\approx_{a}\beta$, we get the following
totality principle.
\begin{description}
\item[Totality] $(\alpha\succ_{a}\beta)\vee(\beta\succ_{a}\alpha)\vee(\alpha\approx_{a}\beta).$
\end{description}
Next, we define $\alpha\succcurlyeq_a\beta$ by $\neg(\beta\succ_a\alpha)$. This abbreviation gives:
\begin{description}
\item[Refl Totality] $(\alpha\succcurlyeq_{a}\beta)\vee(\beta\succcurlyeq_{a}\alpha).$
\end{description}
We can also connect to logic languages concerning probability that do
not have knowledge operators $K_a$ but instead use
${\succcurlyeq_a}\top$ (for instance \cite{Gardenfors75} and
\cite{ghosh2012comparing}), by deriving that
$(\alpha\succcurlyeq_a\top)\leftrightarrow K_a\alpha$.
\begin{figure}
\caption{\textbf{Comparison principle}}
\label{fig:Comparison}
\end{figure}
\section{Incompleteness and Completeness for Epistemic Weight Models}\label{section:wei}
In this Section we interpret ${\mathcal L}_{CN}$ in epistemic weight
models, and give an incompleteness and a completeness result.
\begin{definition}
An \emph{Epistemic Weight Model} for agents ${\text{\it Ag}}$ and basic propositions
$P$ is a tuple $\mathfrak{M}=(W,R,L,V)$ where
$W$ is a non-empty countable set of worlds, $R$ assigns to every
agent $a\in{\text{\it Ag}}$ an equivalence relation $\sim_{a}$ on $W$, $L$
assigns to every $a\in{\text{\it Ag}}$ a function $\mathbb{L}_{a}$ from $W$ to $\mathbb{Q}^{+}$
(the positive rationals), subject to the following boundedness condition
({*}).
\[
(*)\qquad \forall a\in{\text{\it Ag}}\ \forall w\in W:\ \sum_{u\in[w]_{a}}\mathbb{L}_{a}(u)<\infty,
\]
where $[w]_{a}$ is the cell of $w$ in the partition induced by $\sim_{a}$.
$V$ assigns to every $w\in W$ a subset of $P$.
\end{definition}
We can interpret conditional belief sentences in these models. Let
$\mathfrak{M}$ be a weight model, and let
$\left\llbracket \phi\right\rrbracket
_{\text{\ensuremath{\mathfrak{M}}}}=\{w\in
W\mid\text{\ensuremath{\mathfrak{M}}},w\vDash\phi\}$.
Let $w$ be a world of $\mathfrak{M}$. Then
\[
\mathfrak{M},w\vDash B_a(\phi,\psi) \text{ iff }
\mathbb{L}_a([w]_{a}\cap\left\llbracket \phi\wedge\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}})>\mathbb{L}_a([w]_{a}\cap\left\llbracket \phi\wedge\neg\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}}),
\]
and $\mathfrak{M},w\vDash K_a \phi \text{ iff
for all } v \in [w]_a,\ \mathfrak{M},v\vDash \phi$.
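This truth condition amounts to comparing two finite sums of weights, as the following single-cell sketch illustrates (the four worlds and their weights are invented for illustration); the last assertion is an instance of the Sure Thing Principle discussed below:

```python
# Weight-model semantics: B(phi, psi) compares the total weight of the
# phi-and-psi worlds against that of the phi-and-not-psi worlds.
def weight_B(cell, L, phi, psi):
    w_psi     = sum(L[u] for u in cell if phi(u) and psi(u))
    w_not_psi = sum(L[u] for u in cell if phi(u) and not psi(u))
    return w_psi > w_not_psi

# Hypothetical example: four worlds coding two draws (r/b), one knowledge cell.
cell = ["rr", "rb", "br", "bb"]
L = {"rr": 1, "rb": 2, "br": 2, "bb": 4}
phi = lambda u: u[0] == "r"          # first draw is r
psi = lambda u: u[1] == "b"          # second draw is b

assert weight_B(cell, L, phi, psi)                       # 2 > 1
assert weight_B(cell, L, lambda u: not phi(u), psi)      # 4 > 2
assert weight_B(cell, L, lambda u: True, psi)            # Sure Thing: 6 > 3
```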
One easily checks that the axioms of the CN calculus
are true for this interpretation, so we have:
\begin{theorem}
The CN calculus is sound for epistemic weight models.
\end{theorem}
To see that we do not have completeness, observe that we can
express Savage's {\em Sure Thing Principle} in our language.
In Savage's example this is about action. If an agent would
do a thing if he would know $\phi$, and would do the same
thing if he would know $\neg \phi$, then he should do the thing
in any case:
\begin{quote}\small
A businessman contemplates buying a certain piece of property. He
considers the outcome of the next presidential election relevant. So,
to clarify the matter to himself, he asks whether he would buy if he
knew that the Democratic candidate were going to win, and decides that
he would. Similarly, he considers whether he would buy if he knew that
the Republican candidate were going to win, and again finds that he
would. Seeing that he would buy in either event, he decides that he
should buy, even though he does not know which event obtains, or will
obtain, as we would ordinarily say.
Savage, \cite[p 21]{Savage1972:tfosSE}.
\end{quote}
The following formula expresses this Sure Thing Principle, not
about action but about belief:
\[
B_a( \phi, \psi) \land B_a (\neg \phi, \psi)
\rightarrow B_a (\top, \psi).
\]
It is not hard to see that this principle is valid for weight models:
if $\psi$ has greater weight than $\neg \psi$ within the $\phi$ area,
and also within the $\neg \phi$ area, then it is a matter of adding
these weights to see that $\psi$ has greater weight than $\neg \psi$
in the whole domain. But the Sure Thing Principle is not a validity
for neighbourhood models. Let us consider the following urn example
modified from Ellsberg's paradox \cite{Ellsberg1961:raatsa}.
\begin{example}
An urn contains 120 balls: 30 red balls, 30 green balls, and 60
yellow or blue balls (in some unknown proportion). A ball $x$ will
be pulled out of this urn, and there are three pairs of gambles; in each
pair Alice must pick one:
\[
\begin{array}{|l|l|}
\hline G_r: x \text{ is red} & G_y: x \text{ is yellow}\\
\hline G_g: x \text{ is green} & G_b: x \text{ is blue}\\
\hline G_{rg}: x \text{ is either red or green} & G_{yb}: x \text{ is either yellow or blue}\\
\hline\end{array}
\]
Alice knows that the likelihood of $G_{rg}$ (1/2) is the same as
$G_{yb}$ (1/2), but is uncertain of the likelihood of $G_y$ and
$G_b$. Alice is ambiguity averse in her beliefs, which means
that she is willing to bet $G_r$ against $G_y$, and $G_g$ against $G_b$.
\end{example}
To model this example, let
$W=\{red, green, yellow, blue\}$, and let the neighbourhood function
be the same for any $w\in W$, with
\[N^w(\{red, yellow\}) = \{ \{red\}, \{red,yellow\} \},\]
\[N^w(\{green, blue\}) = \{\{green\}, \{green,blue\}\},\]
and $N^w(W)$ contains all subsets of $W$ with at least 3 worlds.
Such a model exists because our neighbourhood conditions
do not constrain how neighbourhoods under different conditions relate
to one another. Thus, in this model $B (G_r\vee G_y, G_r)$ and
$B(G_g\vee G_b, G_g)$ are true in every world while $B(\top, G_r\vee G_g)$ is
false, in contradiction with the Sure Thing principle.
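The urn model is small enough to check mechanically. The sketch below encodes the neighbourhoods (falling back to majority beliefs for the conditioning sets the text leaves unspecified, an assumption of ours) and verifies the Sure Thing instance used in the incompleteness proof below:

```python
from itertools import combinations

W = frozenset({"red", "green", "yellow", "blue"})
subs = [frozenset(c) for r in range(5) for c in combinations(sorted(W), r)]

def N(X):
    # Ambiguity-averse neighbourhoods from the urn example; for conditions
    # not spelled out in the text we assume majority beliefs as a default.
    if X == frozenset({"red", "yellow"}):
        return {frozenset({"red"}), X}
    if X == frozenset({"green", "blue"}):
        return {frozenset({"green"}), X}
    return {Y for Y in subs if Y <= X and 2 * len(Y) > len(X)}

def B(X, psi):
    # B(phi, psi) with phi given extensionally as X (single knowledge cell W)
    return frozenset(v for v in X if v in psi) in N(X)

red_or_green = {"red", "green"}
assert B(frozenset({"red", "yellow"}), red_or_green)   # B(G_r v G_y, G_r v G_g)
assert B(frozenset({"green", "blue"}), red_or_green)   # B(G_g v G_b, G_r v G_g)
assert not B(W, red_or_green)                          # but not B(T, G_r v G_g)
print("Sure Thing Principle fails in the urn model")
```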
\begin{theorem}
The CN calculus is incomplete for epistemic weight models.
\end{theorem}
\begin{proof}
As we have seen, $B (G_r\vee G_y, G_r \vee G_g) \land B(G_g\vee G_b, G_r \vee G_g) \rightarrow B(\top,G_r\vee G_g)$
is false in the above neighbourhood counterexample to Sure Thing. So
by Theorem \ref{thm:CompCN}, Sure Thing
cannot be proved in the CN calculus. But Sure Thing is valid in the
class of epistemic weighted models.
\end{proof}
\subsection*{A Complete Calculus for Epistemic Weight Models}
\cite{Segerberg1971:qpiams} and \cite{Gardenfors75} proposed a
complete logic $QP$ for epistemic weight models based on a language
$\mathcal{L}_{QP}$ given by the following BNF definition\footnote{We
replaced ``0'' and ``1'' in \cite{Gardenfors75} with the equivalent
notation ``$\bot$'' and ``$\top$'' respectively, and extend this
language to multiagent case. }:
\[
\phi::=\top\mid p\mid\neg\phi\mid\left(\phi\wedge\phi\right)\mid \phi\succcurlyeq_a\phi
\]
$\succ_a$ and $\approx_a$ are given as usual. ${\succcurlyeq_a}\top$
functions as $K_a$ in epistemic modal logic. The complex formula
$\alpha_0\ldots\alpha_mE_a\beta_0\ldots\beta_m$ first proposed by
Segerberg in \cite{Segerberg1971:qpiams} is an abbreviation of the
formula expressing that for all worlds $w$ in the evaluated knowledge
cell of agent $a$, the number of $\alpha_i$ among
$\alpha_0\ldots\alpha_m$ true in $w$ is the same as those of $\beta_j$
among $\beta_0\ldots\beta_m$ true in $w$. $QP$ logic does not assume
that all worlds in an agent's knowledge cell are equally likely;
its core axiom schema is the following:
\begin{lyxlist}{00.00.0000}
\item [{(A4)}]
$\alpha_0\alpha_1\ldots\alpha_mE_a\beta_0\beta_1\ldots\beta_m\wedge(\alpha_0\succcurlyeq_a\beta_0)\wedge\ldots\wedge(\alpha_{m-1}\succcurlyeq_a\beta_{m-1})\to(\beta_m\succcurlyeq_a\alpha_m)$ for all $m\geq 1.$
\end{lyxlist}
Nevertheless, not only can we express every notion in the
probabilistic language $\mathcal{L}_{QP}$, but using the results of
\cite{Gardenfors75} and \cite{Sco64:JMP} we can also prove the following
completeness theorem (the proof is a simple adaptation of one found in \cite{Gardenfors75}; one only has to observe that every axiom of \cite{Gardenfors75} other than (A4) is provable in the CN calculus).
\begin{theorem}
\emph{CN}$\oplus$\emph{(A4)} is complete for epistemic weight models.
\end{theorem}
\subsection*{Comparing Expressive Power}
In this subsection we compare the expressive power between
$\mathcal{L}_{CN}$ and $\mathcal{L}_{QP}$, restricting our
attention to the single-agent case. As is shown in
Section \ref{section:cns}, we can translate $\mathcal{L}_{CN}$ into
$\mathcal{L}_{QP}$ by defining
$Tr_1:\mathcal{L}_{CN}\to\mathcal{L}_{QP}$ with key case:
\[
Tr_1(B(\alpha,\beta))=Tr_1(\alpha\wedge\beta)\succ Tr_1(\alpha\wedge\neg\beta),
\]
which expresses that the agent considers $\alpha\wedge\beta$
more likely than $\alpha\wedge\neg\beta$. Likewise we can define the
translation $Tr_2$ from $\mathcal{L}_{QP}$ to $\mathcal{L}_{CN}$ by key case
\[
Tr_2(\alpha\succcurlyeq\beta)=\neg B(Tr_2(\alpha\leftrightarrow\neg\beta),Tr_2(\beta)).
\]
It is easy to prove that both translations preserve truth on
weight models.
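For concreteness, the two key translation clauses can be rendered as recursive functions on formula syntax trees. The tuple encoding below ('B' for $B$, 'succ'/'succeq' for $\succ$/$\succcurlyeq$) is our own illustration, not part of the paper's formalism.

```python
# Formulas as nested tuples, e.g. ('B', a, b) for B(a, b); atoms are strings.

def tr1(phi):
    """Translate L_CN into L_QP: B(a,b) becomes (a & b) > (a & ~b)."""
    if isinstance(phi, str):          # atom or 'top'
        return phi
    op = phi[0]
    if op == 'not':
        return ('not', tr1(phi[1]))
    if op == 'and':
        return ('and', tr1(phi[1]), tr1(phi[2]))
    if op == 'B':                     # key case of Tr_1
        a, b = tr1(phi[1]), tr1(phi[2])
        return ('succ', ('and', a, b), ('and', a, ('not', b)))
    raise ValueError(op)

def tr2(phi):
    """Translate L_QP into L_CN: a >= b becomes ~B(a <-> ~b, b)."""
    if isinstance(phi, str):
        return phi
    op = phi[0]
    if op == 'not':
        return ('not', tr2(phi[1]))
    if op == 'and':
        return ('and', tr2(phi[1]), tr2(phi[2]))
    if op == 'succeq':                # key case of Tr_2
        a, b = tr2(phi[1]), tr2(phi[2])
        return ('not', ('B', ('iff', a, ('not', b)), b))
    raise ValueError(op)
```

Both translations are homomorphic on the boolean connectives, so only the modal clauses are of interest.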
As for conditional neighbourhood models, consider the
following truth definition for $\mathcal{L}_{QP}$:
\[
\mathfrak{M},w\vDash_{qp}\alpha\succcurlyeq\beta\text{ iff }\left\llbracket\neg\alpha\wedge\beta\right\rrbracket ^{qp}_{{\mathfrak M}}\notin N^w(\left\llbracket\alpha\leftrightarrow\neg\beta\right\rrbracket ^{qp}_{{\mathfrak M}}),
\]
which, by condition (d) and the definition $\alpha\succ\beta::=\neg(\beta\succcurlyeq\alpha)$, is equivalent to
\[
\mathfrak{M},w\vDash_{qp}\alpha\succ\beta\text{ iff }\left\llbracket\alpha\wedge\neg\beta\right\rrbracket ^{qp}_{{\mathfrak M}}\in N^w(\left\llbracket\alpha\leftrightarrow\neg\beta\right\rrbracket ^{qp}_{{\mathfrak M}}).
\]
This truth condition parallels our translation $Tr_2$. Furthermore, to show that $Tr_1$ preserves truth values on conditional neighbourhood models, we argue by induction on the construction of $\mathcal{L}_{CN}$-formulas, establishing the following equivalences:
\[
\begin{array}{ccl}
\mathfrak{M},w\vDash_{qp} Tr_1(B(\alpha,\beta)) & \text{ iff } & \mathfrak{M},w\vDash_{qp}\alpha'\wedge\beta'\succ\alpha'\wedge\neg\beta'\\
& \text{ iff } & \left\llbracket(\alpha'\wedge\beta')\wedge\neg(\alpha'\wedge\neg\beta')\right\rrbracket ^{qp}_{{\mathfrak M}}\in N^w(\left\llbracket(\alpha'\wedge\beta')\leftrightarrow\neg(\alpha'\wedge\neg\beta')\right\rrbracket ^{qp}_{{\mathfrak M}})\\
& \text{ iff } & \left\llbracket\alpha'\wedge\beta'\right\rrbracket ^{qp}_{{\mathfrak M}}\in N^w(\left\llbracket\alpha'\right\rrbracket ^{qp}_{{\mathfrak M}})\\
& \text{ iff } & \left\llbracket\beta'\right\rrbracket ^{qp}_{{\mathfrak M}}\in N^w(\left\llbracket\alpha'\right\rrbracket ^{qp}_{{\mathfrak M}})\text{ (by condition (n)*)}\\
& \text{ iff } & \left\llbracket\beta\right\rrbracket _{{\mathfrak M}}\in N^w(\left\llbracket\alpha\right\rrbracket _{{\mathfrak M}})\text{ (by induction hypothesis)}\\
& \text{ iff } & \mathfrak{M},w\vDash B(\alpha,\beta),\\
\end{array}
\]
where $\alpha'=Tr_1(\alpha)$ and $\beta'=Tr_1(\beta)$.
Therefore both $Tr_1$ and $Tr_2$ preserve truth values on conditional neighbourhood models as well.
However we can design models violating the comparison principle of
Figure \ref{fig:Comparison}. $\mathcal{L}_{QP}$ can differentiate
models where the principle holds from those where it does not, while
$\mathcal{L}_{CN}$ cannot. A \emph{comparison model} $\mathfrak{N}$ is
a triple $(W,\succeq,V)$ where $W$ is a non-empty set of worlds,
${\succeq}\subseteq\mathcal{P}W\times\mathcal{P}W$ is a relation between
propositions, and $V$ is a valuation. The truth definition for
$\mathcal{L}_{QP}$ has the following key clause:
\[
\mathfrak{N},w\vDash_2\alpha\succcurlyeq\beta\text{ iff }\left\llbracket\alpha\right\rrbracket_{{\mathfrak N}}\succeq\left\llbracket\beta\right\rrbracket_{{\mathfrak N}}.
\]
Furthermore the key clause in the truth condition for
$\mathcal{L}_{CN}$ is given by:
\[
\mathfrak{N},w\vDash_1 B(\alpha,\beta)\text{ iff }\left\llbracket\alpha\wedge\beta\right\rrbracket_{{\mathfrak N}}\succeq\left\llbracket\alpha\wedge\neg\beta\right\rrbracket_{{\mathfrak N}},
\]
which parallels the translation $Tr_1$ at the semantic level.
Let $\mathfrak{N}_1=(W,\succeq_1,V)$ and $\mathfrak{N}_2=(W,\succeq_2,V)$ be comparison models such that:
\begin{enumerate}
\item $W=\{\{p,q\},\{p\},\{q\},\emptyset\}$
\item ${\succeq_1}=\{(\{\{p\},\{p,q\}\},\{\{q\},\{p,q\}\}),(\{\{p\}\},\{\{q\}\})\}$,
\item ${\succeq_2}=\{(\{\{p\}\},\{\{q\}\})\}$,
\item $w\in V(r)$ iff $r\in w$.
\end{enumerate}
The only difference between $\mathfrak{N}_1$ and $\mathfrak{N}_2$ is
that
$\left\llbracket p\right\rrbracket_{\mathfrak{N}_1}\succeq_1
\left\llbracket q\right\rrbracket_{\mathfrak{N}_1}$ but not
$\left\llbracket p\right\rrbracket_{\mathfrak{N}_2}\succeq_2
$\left\llbracket q\right\rrbracket_{\mathfrak{N}_2}$. Thus $\mathfrak{N}_2$
violates the \textbf{comparison principle}, and
$\neg(p\succcurlyeq q)\wedge(p\wedge\neg q\succcurlyeq\neg p\wedge q)$
is valid on $\mathfrak{N}_2$ but not on $\mathfrak{N}_1$. However we can
prove that $\mathfrak{N}_1$ and $\mathfrak{N}_2$ satisfy the same set of
$\mathcal{L}_{CN}$-formulas. The crucial observation is that
$\vDash_1$ uses the comparison relation only on disjoint
propositions. For instance
$B(p\leftrightarrow\neg q,p)=Tr_2(p\succcurlyeq q)=Tr_2(p\wedge\neg
q\succcurlyeq\neg p\wedge q)$ is valid on both $\mathfrak{N}_1$ and
$\mathfrak{N}_2$, because for each $i\in\{1,2\}$,
\[
\begin{array}{ccl}
\mathfrak{N}_i,w\vDash_1 B(p\leftrightarrow\neg q,p) & \text{ iff } & \left\llbracket (p\leftrightarrow\neg q)\wedge p\right\rrbracket_{{\mathfrak N}_i}\succeq_i\\
 & & \left\llbracket(p\leftrightarrow\neg q)\wedge\neg p\right\rrbracket_{{\mathfrak N}_i}\\
 & \text{ iff } & \left\llbracket p\wedge\neg q\right\rrbracket_{{\mathfrak N}_i}\succeq_i\left\llbracket\neg p\wedge q\right\rrbracket_{{\mathfrak N}_i}\\
 & \text{ iff } & \{\{p\}\}\succeq_i\{\{q\}\}, \text{ which holds for}\\
 & & \text{both }i=1\text{ and }i=2.
\end{array}
\]
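The computation above can also be checked mechanically. The following sketch is our own: worlds are frozensets of atoms and propositions are frozensets of worlds, matching the construction of $\mathfrak{N}_1$ and $\mathfrak{N}_2$.

```python
# Sanity check that N1 and N2 agree on the CN clause for B(p <-> ~q, p),
# while only N1 satisfies p >= q.
W = [frozenset(s) for s in ({'p', 'q'}, {'p'}, {'q'}, set())]

def ext(pred):
    """Extension of a predicate on worlds, as a proposition."""
    return frozenset(w for w in W if pred(w))

p_not_q = ext(lambda w: 'p' in w and 'q' not in w)   # [[p & ~q]] = {{p}}
q_not_p = ext(lambda w: 'q' in w and 'p' not in w)   # [[~p & q]] = {{q}}
p_ext, q_ext = ext(lambda w: 'p' in w), ext(lambda w: 'q' in w)

succeq1 = {(p_ext, q_ext), (p_not_q, q_not_p)}       # the relation of N1
succeq2 = {(p_not_q, q_not_p)}                       # the relation of N2

# |=_1 B(p <-> ~q, p)  iff  [[p & ~q]] >= [[~p & q]], which holds in both
assert (p_not_q, q_not_p) in succeq1
assert (p_not_q, q_not_p) in succeq2
# but the QP formula p >= q distinguishes the two models
assert (p_ext, q_ext) in succeq1 and (p_ext, q_ext) not in succeq2
```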
Therefore $\mathcal{L}_{QP}$ is strictly more expressive than $\mathcal{L}_{CN}$. We conclude that $\mathcal{L}_{CN}$ is a core logic for conditional belief as willingness to bet.
\section{Public Announcement for Conditional Neighborhood Models}\label{section:pub}
Public announcement update for weight models parallels Bayesian update
in probability theory. Public announcement update for probabilistic
logic was first treated in \cite{Kooi03:kcac}, and more complicated
probabilistic updates were discussed in \cite{Benthem2003:cpmul} and
\cite{BenGerKoo09:duwp}. As was mentioned in the introduction, public
announcement updates may destroy reasonable neighbourhood conditions.
The good news is that conditional neighbourhood models are better
behaved. We propose two ways of performing public announcement updates,
deleting points and cutting links, and give reduction axioms for
each of them. This shows that our neighbourhood conditions are
core principles that are preserved by Bayesian update.
\subsection{Deleting Points}
The first approach is the usual one for public announcement update,
which is restricting the domain to $\phi$-worlds after announcing
$\phi$. It assumes that only facts (true propositions at the current
world) can be publicly announced. Public announcements create common
knowledge, but it need not be the case that a fact that gets announced
remains true after the update; Moore sentences are a well-known exception.
The language $\mathcal{L}_{PC}$ is the result of extending our base language $\mathcal{L}_{CN}$
with a public announcement operator:
\[
\phi::=\top\mid p\mid\neg\phi\mid\left(\phi\wedge\phi\right)\mid B_{a}(\phi,\phi)\mid[\phi]\phi
\]
If $\mathfrak{M}=(W,N,V)$ is a conditional neighborhood model, $\sim$ is the induced epistemic relation,
and $\phi$ is a formula of the $\mathcal{L}_{PC}$ language, then
$\mathfrak{M}^{\phi}=(W^{\phi},{}^{\phi}N,V^{\phi})$ is given by:
\begin{itemize}
\item $W^{\phi}=\{w\in W\mid\mathfrak{M},w\vDash\phi\}$
\item $w\sim_{a}^{\phi}u$ iff $\mathfrak{M},w\vDash\phi$, $\mathfrak{M},u\vDash\phi$
and $w\sim_{a}u$
\item $^{\phi}N_{a}^{w}(X)=\left\{
\begin{array}{ll}
N_{a}^{w}(X) & \text{if }X\subseteq W^{\phi}\text{ and }w\in W^{\phi}\\
\text{undefined} & \text{otherwise}
\end{array}\right.$
\item $V^{\phi}(p)=V(p)\cap W^{\phi}$
\end{itemize}
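On finite models the deleting-points update is directly implementable. The sketch below is our own illustration of the definition above: it uses a pre-computed truth set for the announced formula instead of a formula evaluator, and a simple majority-style neighborhood function as the example model.

```python
# Deleting-points update on a finite single-agent model.
from itertools import combinations

def majority_N(w, X):
    """Example neighborhood function: Y is a neighborhood of X iff |Y| > |X|/2."""
    subsets = [frozenset(c) for r in range(len(X) + 1)
               for c in combinations(sorted(X), r)]
    return {Y for Y in subsets if 2 * len(Y) > len(X)}

def announce(W, N, V, truth_set):
    """Restrict the model to the worlds where the announced formula is true."""
    W2 = W & truth_set
    V2 = {p: ws & W2 for p, ws in V.items()}
    def N2(w, X):
        if w in W2 and X <= W2:       # defined only inside the new model
            return N(w, X)
        return None                   # 'undefined' in the definition
    return W2, N2, V2

W = frozenset({0, 1})
V = {'p': frozenset({0})}             # p holds only at world 0
W2, N2, V2 = announce(W, majority_N, V, V['p'])
assert W2 == frozenset({0})
assert N2(0, frozenset({0})) == {frozenset({0})}
assert N2(1, frozenset({1})) is None  # world 1 was deleted
```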
\begin{example} \label{example:lottery2}
As an example, consider Alice's lottery situation again. Alice
knows there are 10,000 lottery tickets numbered 0000 through 9999.
Alice believes ticket $t$ is winning (and buys it). Let $n$
represent the world where ticket $n$ is winning. Then Alice's belief
is given by a conditional neighborhood model with
$N^w_a (X) = \{ Y \subseteq X \mid t \in Y \}$ if
$t \in X$, and
$N^w_a (X) = \{ Y \subseteq X \mid |Y| > \frac 1 2 | X | \}$ if
$t \notin X$. Now Alice receives the information that $v \neq t$ is
the winning ticket. Then $v = w$, the updated model has universe
$\{ v \}$, and Alice updates her belief to $N'$ with
${N'}^v_a (\{ v \}) = \{ \{ v \} \}$. The updated model satisfies
the conditions for a conditional neighborhood model.
\end{example}
\begin{definition}
Semantics for $\vDash_{PC}$: let $\mathfrak{M}=\left(W,N,V\right)$ be
a conditional neighborhood model, let $w\in W$.
\[
\mathfrak{M},w\vDash_{PC}[\phi]\psi\text{ iff }\mathfrak{M},w\vDash_{PC}\phi\text{ implies }\mathfrak{M}^{\phi},w\vDash_{PC}\psi.
\]
\end{definition}
A complete calculus for $\vDash_{PC}$ consists of the calculus for CN,
plus the usual Reduction Axioms of public announcement update for
boolean cases and the following key Reduction Axiom (call this system
PC):
\begin{itemize}
\item $[\phi]B_{a}(\psi,\chi)\leftrightarrow(\phi\to B_{a}(\phi\wedge[\phi]\psi,[\phi]\chi))$
\end{itemize}
In Appendix \ref{Appendix:CompPCandPC'} we prove that the \emph{PC} Calculus is sound and complete w.r.t. $\vDash_{PC}$.
\subsection{Cutting Links}
We can generalize public announcement of facts to public announcement
of truth values. When the value of $\phi$ is announced, the truth
value of $\phi$ in the actual world determines whether $\phi$ or
$\neg \phi$ gets announced.
The language $\mathcal{L}_{PC\pm}$ for this kind of update is given by the following BNF:
\[
\phi::=\top\mid p\mid\neg\phi\mid\left(\phi\wedge\phi\right)\mid B_{a}(\phi,\phi)\mid[\pm\phi]\phi
\]
To capture the intuition of such updates for conditional neighbourhoods,
we use the following mechanism that cuts epistemic relations between
$\phi$-worlds and $\neg\phi$-worlds after announcing $\phi$.
If $\mathfrak{M}=(W,N,V)$ is a conditional neighborhood model, $\sim$ is the induced epistemic relation,
and $\phi$ is a formula of the $\mathcal{L}_{PC\pm}$ language, then
$\mathfrak{M}^{\pm\phi}=(W^{\pm\phi},{}^{\pm\phi}N,V^{\pm\phi})$
is given by:
\begin{itemize}
\item $W^{\pm\phi}=W$
\item $\sim_{a}^{\pm\phi}=\{(w,v)\in W^{2}\mid w\sim_{a}v\text{ and }(\mathfrak{M},w\vDash\phi\text{ iff }\mathfrak{M},v\vDash\phi)\}$
\item $^{\pm\phi}N_{a}^{w}(X)=N_{a}^{w}(X\cap[w]_{a}^{\pm\phi})$
\item $V^{\pm\phi}=V$
\end{itemize}
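The cutting-links update admits a similar finite sketch, again our own illustration: all worlds survive, the knowledge cells are split along the announced proposition, and neighborhoods are conditioned on the smaller cells.

```python
# Cutting-links update on a finite single-agent model.
from itertools import combinations

def majority_N(w, X):
    """Example neighborhood function: Y is a neighborhood of X iff |Y| > |X|/2."""
    subsets = [frozenset(c) for r in range(len(X) + 1)
               for c in combinations(sorted(X), r)]
    return {Y for Y in subsets if 2 * len(Y) > len(X)}

def cut_announce(W, cells, N, truth_set):
    """cells maps each world to its knowledge cell (a frozenset of worlds)."""
    new_cells = {w: frozenset(v for v in cells[w]
                              if (v in truth_set) == (w in truth_set))
                 for w in W}
    def N2(w, X):
        return N(w, X & new_cells[w])   # condition on the new, smaller cell
    return new_cells, N2

W = frozenset({0, 1})
cells = {0: W, 1: W}                    # one knowledge cell before the update
new_cells, N2 = cut_announce(W, cells, majority_N, frozenset({0}))
assert new_cells[0] == frozenset({0}) and new_cells[1] == frozenset({1})
assert N2(0, W) == {frozenset({0})}     # beliefs now conditioned on p-worlds
```

Unlike the deleting-points update, the domain and valuation are untouched; only the epistemic relation and the neighborhoods change.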
\begin{definition}
Semantics for $\vDash_{PC\pm}$: let $\mathfrak{M}=\left(W,N,V\right)$
be a conditional neighborhood model, let $w\in W$.
\[
\mathfrak{M},w\vDash_{PC\pm}[\pm\phi]\psi\text{ iff }\mathfrak{M}^{\pm\phi},w\vDash_{PC\pm}\psi.
\]
\end{definition}
The system PC$\pm$ for $\vDash_{PC\pm}$ consists of the calculus for
CN, plus the following Reduction Axioms:
\begin{itemize}
\item $[\pm\phi]p\leftrightarrow p$
\item $[\pm\phi]\neg\psi\leftrightarrow\neg[\pm\phi]\psi$
\item $[\pm\phi](\psi\wedge\chi)\leftrightarrow[\pm\phi]\psi\wedge[\pm\phi]\chi$
\item $[\pm\phi]B_{a}(\psi,\chi)\leftrightarrow(\phi\to B_{a}(\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))\wedge(\neg\phi\to B_{a}(\neg\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))$
\end{itemize}
Also in Appendix \ref{Appendix:CompPCandPC'} we prove that the \emph{PC}$\pm$ Calculus is sound and complete w.r.t. $\vDash_{PC\pm}$.
\section{Conclusions and Future Work}\label{section:cafw}
In Section \ref{section:int}, we illustrated that public announcement
update may not preserve reasonable neighbourhood conditions. To
overcome this problem, we introduced conditional neighbourhood
semantics in Section \ref{section:cns}. We gave an alternative
interpretation for this system in Section \ref{section:wei}, and then
in Section \ref{section:pub} we gave two flavours of public
announcement update for conditional neighborhood semantics. Because
public announcement update for epistemic weight models is basically Bayesian
update in a logical setting, and because the complete calculus CN for
conditional neighbourhood models is a subsystem of a complete
probabilistic logic CN$\oplus$(A4) (as shown in Section
\ref{section:wei}), our investigations show that CN can be viewed as a
core logic of rational belief and belief update that is compatible
with the Bayesian picture of inference, but more general. Conditional
neighborhood models generalize epistemic weight models, and this
generalization creates room for modelling ambiguity aversion in belief as
willingness to bet.
In Section \ref{section:wei} we have shown that $\mathcal{L}_{CN}$ has
weaker expressive power than $\mathcal{L}_{QP}$. Our conditional
neighbourhood semantics for $\mathcal{L}_{CN}$ allows us to develop a
reasoning system CN that is not yet committed to a probabilistic
numerical interpretation of belief. This might be a convenient
starting point for further investigation of counterfactual
reasoning. In Section \ref{section:cns} we have assumed that
conditioning with information that contradicts an agent's knowledge
causes the agent to refrain from believing anything, but in future work
we may relax this constraint by allowing visits to knowledge cells
other than the current one when a neighbourhood function is
conditioned with propositions that conflict with current knowledge. A
naive way to do so is to incorporate into our framework the selection
function $f$ for counterfactuals proposed by Stalnaker in
\cite{stalnaker1968theory}. When a proposition $X$ is disjoint from
agent $a$'s current knowledge cell $[w]_a$, the selection function $f$
would guide us to an $X$-world $u=f_a(X,w)$; we would then let the
neighbourhood $N_a^w(X)$ be $N_a^u(X)$, which is the neighbourhood
conditioned by $X$ at $[u]_a$.
While subjective conditional beliefs given by neighbourhood functions
suggest how agents' beliefs would change by public announcements,
further updates like public lies and recovery from lies may allow us
to represent further details of agents' beliefs in an objective way.
Here, recovery means freeing agents from the influence of lies that were
accepted as true in the past. These two kinds of updates give us
powerful tools to test what an agent would believe after each possible
piece of information is provided, which in turn would inform us of the
agent's conditional beliefs or subjective probability. It is future
work to compare and combine these two approaches: subjective
conditional beliefs informative for belief update and objective belief
changes to conditional beliefs.
Neighbourhood structures have also been used to describe the pieces of
evidence that agents accept
(\cite{van2011dynamic},\cite{van2012evidence},\cite{van2014evidence}). In
this approach, each proposition in a neighbourhood is interpreted as a
piece of evidence, instead of a belief; and because evidence is
usually acquired in various situations, such evidence neighbourhoods
may contain contradictory propositions. Furthermore, beliefs are
generated from certain consistent subsets of the evidence
neighbourhood. Even though evidence models and conditional
neighbourhood models provide different perspectives on belief, we may
be able to combine them in the future. In one direction, conditional
beliefs or even subjective probability could be generated from certain
evidence models, while the way evidence is involved in belief
formation may provide information about the strengths of the resulting
beliefs. In the other direction, our conditional beliefs might
serve as prior knowledge for specifying the credence of evidence.
\appendix
\section{Alternative Definition of Conditional Neighborhood Models}
\label{appendix:alternativeDef}
\begin{lemma}
\label{lem:implied-conditions}Let ${\mathfrak M}=\left(W,N,V\right)$
be a conditional neighborhood model. Then ${\mathfrak M}$ satisfies the
following conditions for any $a\in Ag$, $w\in W$ and $X\subseteq W$:
\begin{description}
\item [{(m)}] $\forall Y\subseteq Z\subseteq X\cap[w]_{a}:$ if $Y\in N_{a}^{w}(X)$,
then $Z\in N_{a}^{w}(X)$;
\item [{(no-emptyset)}] $\emptyset\notin N_{a}^{w}(X)$;
\item [{(n){*}}] if $X\cap[w]_{a}\neq\emptyset$, then $X\cap[w]_{a}\in N_{a}^{w}(X)$;
\item [{($\emptyset$)}] if $X\cap[w]_{a}=\emptyset$, then $N_{a}^{w}(X)=\emptyset$.
\end{description}
\end{lemma}
\begin{proof}
Let $a\in Ag$, $w\in W$, $X\subseteq W$ and $X'=X\cap[w]_{a}$.
First consider (m). Let $Y\subseteq Z\subseteq X'$ and $Y\in N_{a}^{w}(X)$.
Suppose for contradiction $Z\notin N_{a}^{w}(X)$. Then $Y\neq Z$,
and hence $Y\subsetneq Z$, which implies $X'-Z\subsetneq X'-Y$.
It follows, from (sc), that $X'-Y\in N_{a}^{w}(X)$, contrary to $Y\in N_{a}^{w}(X)$
and (d).
Second consider (no-emptyset). Suppose for contradiction that $\emptyset\in N_{a}^{w}(X)$.
If $X'=\emptyset$, then by (c) $N_{a}^{w}(X)=\{\emptyset\}$; but
by (d), since $\emptyset\in N_{a}^{w}(X)$,
\[
\emptyset=X'-\emptyset\notin N_{a}^{w}(X).
\]
Contradiction! If $X'\neq\emptyset$, then since $\emptyset\subseteq X'$,
by (m) we have $X'\in N_{a}^{w}(X)$; but because $\emptyset\in N_{a}^{w}(X)$,
by (d) $X'=X'-\emptyset$ should not be in $N_{a}^{w}(X)$, a contradiction.
Third, consider (n){*}. Suppose $X'\neq\emptyset$ and, for contradiction,
that $X'\notin N_{a}^{w}(X)$. Then $X'-\emptyset=X'\notin N_{a}^{w}(X)$,
and since $\emptyset\subsetneq X'$, (sc) yields $X'\in N_{a}^{w}(X)$,
a contradiction!
Last, consider ($\emptyset$). Suppose $X'=\emptyset$. Then by (c), for
all $Y\in N_{a}^{w}(X)$, $Y=\emptyset$. Because of (no-emptyset),
we have $\emptyset\notin N_{a}^{w}(X)$. Therefore $N_{a}^{w}(X)=\emptyset$.
\end{proof}
Note that in Definition \ref{def:NeighborModel} the equivalence relation
for knowledge is derived from the conditional neighborhood function. Here
we will show that there is an equivalent definition that takes epistemic
equivalences as primary.
\begin{definition}
\label{def:CNMwithEqui}Let $Ag$ be a finite set of agents. A conditional
neighborhood model{*} ${\mathfrak M}$ is a tuple $\left(W,\sim,N,V\right)$
where:
\begin{itemize}
\item $W$ is a non-empty set of worlds;
\item $\sim$ assigns to each $a\in{\text{\it Ag}}$ an equivalence relation $\sim_{a}$ on $W$, and we use $\left[w\right]_{a}$
for the $\sim_{a}$ class of $w$;
\item $N$ assigns to each $a\in{\text{\it Ag}}$ a function $N_{a}$ that assigns to every world $w\in W$ and set
of worlds $X\subseteq W$ a collection $N_{a}^{w}(X)$ of sets of
worlds\textendash each such set called a neighborhood of $X$\textendash subject
to the following conditions:
\begin{lyxlist}{00.00.0000}
\item [{(c)}] $\forall Y\in N_{a}^{w}(X):Y\subseteq X\cap[w]_{a}$;
\item [{(a)}] $\forall v\in\left[w\right]_{a}:N_{a}^{w}(X)=N_{a}^{v}(X)$;
\item [{(d)}] $\forall Y\in N_{a}^{w}(X)$, $X\cap[w]_{a}-Y\notin N_{a}^{w}(X)$;
\item [{(sc)}] $\forall Y,Z\subseteq X\cap[w]_{a}:$ if $X\cap[w]_{a}-Y\notin N_{a}^{w}(X)$
and $Y\subsetneq Z$, then $Z\in N_{a}^{w}(X)$;
\item [{(ec)}] $\forall Y\subseteq W$: if $X\cap[w]_{a}=Y\cap[w]_{a}$,
then $N_{a}^{w}(X)=N_{a}^{w}(Y)$;
\end{lyxlist}
\item $V$ is a valuation.
\end{itemize}
\end{definition}
Note that in this definition we have another condition (a) on neighborhood
functions. This contrasts with Definition \ref{def:NeighborModel}, where
we already make sure that (a) holds by the way we define $\left[w\right]_{a}$.
In Definition \ref{def:CNMwithEqui}, however, $\left[w\right]_{a}$ is defined in terms of
$\sim_{a}$, which is simply an equivalence relation that does not come with a guarantee
for (a).
The following proposition assures us that the two approaches are equivalent.
\begin{proposition}
\label{prop:ModelWithEqvi}Let ${\mathfrak M}=\left(W,\sim,N,V\right)$
be a conditional neighborhood model{*}, let $a\in Ag$ and let $R_{a}\subseteq W\times W$
be defined as follows:
\begin{itemize}
\item $\forall w,v\in W,wR_{a}v$ iff $\forall X\subseteq W$, $N_{a}^{w}(X)=N_{a}^{v}(X)$.
\end{itemize}
Then $\sim_{a}=R_{a}$, and $\left(W,N,V\right)$ is a conditional
neighborhood model.
\end{proposition}
\begin{proof}
Let $w,v\in W$. Suppose $w\sim_{a}v$. Then by (a), we know that
for each $X\subseteq W$, $N_{a}^{w}(X)=N_{a}^{v}(X)$, which implies
$(w,v)\in R_{a}$.
Suppose it is not the case that $w\sim_{a}v$. Then $[w]_{a}\cap[v]_{a}=\emptyset$.
As in the proofs of Lemma \ref{lem:implied-conditions}, we can
prove (n){*} and ($\emptyset$) for ${\mathfrak M}$, and hence we have
$[w]_{a}\in N_{a}^{w}([w]_{a})$, and by ($\emptyset$), $[w]_{a}\notin N_{a}^{v}([w]_{a})$.
It follows that $N_{a}^{w}([w]_{a})\neq N_{a}^{v}([w]_{a})$, which
implies $(w,v)\notin R_{a}$.
Therefore $[w]_{a}=\{v\in W\mid\forall X\subseteq W,N_{a}^{w}(X)=N_{a}^{v}(X)\}$.
We can check that $\left(W,N,V\right)$ satisfies all the conditions
in Definition \ref{def:NeighborModel}, which implies $\left(W,N,V\right)$
is a conditional neighborhood model.
\end{proof}
\section{Completeness of CN}
\label{appendix:CompCNproof}
In this section we prove Theorem \ref{thm:CompCN}. As a first step in the creation of a canonical model, we define formula closures.
\begin{definition}
Let $\alpha\in\mathcal{L}_{CN}$ be any $\mathit{CN}$-consistent
formula, i.e., $\nvdash_{\mathit{CN}}\neg\alpha$. The basic closure
of $\alpha$, denoted $\Phi(\alpha)$, is the smallest set of formulas
$\Gamma$ such that:
\begin{itemize}
\item if $\phi$ is a sub-formula of $\alpha\wedge\top$, then $\phi\in\Gamma$;
\item if $\phi\in\Gamma$ and $\phi$ is not a negation, then $\neg\phi\in\Gamma$;
\item if $\phi,\psi\in\Gamma$, then $\phi\wedge\psi\in\Gamma$.
\end{itemize}
Let $\Phi^{B}(\alpha)$ be the smallest extension of $\Phi(\alpha)$
such that
\begin{itemize}
\item if $\phi,\psi\in\Phi(\alpha)$, then $B_{a}(\phi,\psi)\in\Phi^{B}(\alpha)$
for each $a\in{\text{\it Ag}}$;
\item if $\phi\in\Phi^{B}(\alpha)$ and $\phi$ is not a negation, then $\neg\phi\in\Phi^{B}(\alpha)$;
\item if $\phi,\psi\in\Phi^{B}(\alpha)$, then $\phi\wedge\psi\in\Phi^{B}(\alpha)$.
\end{itemize}
\end{definition}
Now we define the canonical model of $\alpha$. Note that we use
maximal consistent subsets of $\Phi^{B}(\alpha)$ instead of maximal
consistent sets because we want to make sure our model is in some
sense differentiable. Furthermore, we duplicate each maximal consistent
subset of $\Phi^{B}(\alpha)$ $\Omega$ times, i.e.,
each possible world is a maximal consistent subset $\mathbf{w}$ of
$\Phi^{B}(\alpha)$ indexed by a number $i\in\Omega$, written
$\mathbf{w}_i$. In this way, we can define an equivalence relation
$\sim_{a}$ with the right properties in our canonical model.
\begin{definition}
\label{def:canonicalModel}A canonical conditional neighborhood
model $\mathfrak{M}_{\alpha}$ of $\alpha$ is a tuple $(W,\sim,N,V)$
where:
\begin{itemize}
\item $W=\{\mathbf{w}\subseteq\Phi^{B}(\alpha)\mid\text{\ensuremath{\mathbf{w}}}\text{ is a maximal consistent subset of }\Phi^{B}(\alpha)\}\times\Omega$.
\item for each $a\in{\text{\it Ag}}$, $\sim_{a}$ is an equivalence relation on $W$
such that $\forall\mathbf{w}_{i},\mathbf{v}_{j}\in W$:
\begin{enumerate}
\item \label{enu:SameBeliefSim}if $\mathbf{w}_{i}\sim_{a}\mathbf{v}_{j}$,
then $\forall\phi,\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{v}\vdash_{CN} B_{a}(\phi,\psi)$;
\item \label{enu:distinctSim}if $\mathbf{w}_{i}\sim_{a}\mathbf{v}_{j}$
and $\mathbf{w}\cap\Phi(\alpha)=\mathbf{v}\cap\Phi(\alpha)$, then
$\mathbf{w}_{i}=\mathbf{v}_{j}$;
\item \label{enu:existSim}if $\forall\phi,\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{v}\vdash_{CN} B_{a}(\phi,\psi)$,
then there is a $\mathbf{u}_{l}\in W$ such that $\mathbf{v}\cap\Phi(\alpha)=\mathbf{u}\cap\Phi(\alpha)$
and $\mathbf{w}_{i}\sim_{a}\mathbf{u}_{l}$.
\end{enumerate}
\item $N_{a}^{\mathbf{w}_{i}}(X):=\{\{\mathbf{v}_{j}\in X\cap\left[\mathbf{w}_{i}\right]_{a}\mid\mathbf{v}\vdash_{CN} \psi\}\mid\psi\in\Phi(\alpha),\mathbf{w}_{i}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}_{i}},\psi)\}$,
where $\left[\mathbf{w}_{i}\right]_{a}=\{\mathbf{v}_{j}\in W\mid\mathbf{w}_{i}\sim_{a}\mathbf{v}_{j}\}$
and $\left\Vert X\right\Vert _{a}^{\mathbf{w}_{i}}$ is the characteristic
formula for $X$ w.r.t. $\left[\mathbf{w}_{i}\right]_{a}$, i.e.,
$\forall\mathbf{v}_{j}\in\left[\mathbf{w}_{i}\right]_{a}$, $\left\Vert X\right\Vert _{a}^{\mathbf{w}_{i}}\in\mathbf{v}$
iff $\mathbf{v}_{j}\in X$.
\item $\mathbf{w}_{i}\in V(p)$ iff $p\in\mathbf{w}$.
\end{itemize}
\end{definition}
Note that canonical conditional neighborhood models have equivalence relations $\sim_{a}$,
unlike conditional neighborhood models.
However, this is not a problem, because by Proposition \ref{prop:ModelWithEqvi},
conditional neighborhood models with and without $\sim_{a}$ relations are
essentially the same.
Also note that $\Phi(\alpha)$ is finite up to logical equivalence, and
because ${\text{\it Ag}}$ is finite, $\Phi^{B}(\alpha)$ is finite up to logical
equivalence as well. It follows by Condition (\ref{enu:distinctSim})
for $\sim_{a}$ that each $\left[\mathbf{w}_{i}\right]_{a}$ is finite.
Because of Condition (\ref{enu:distinctSim}) for $\sim_{a}$,
$\left[\mathbf{w}_{i}\right]_{a}$ is differentiable w.r.t. $\Phi(\alpha)$
in the sense that for each subset $X\subseteq\left[\mathbf{w}_{i}\right]_{a}$
there is a characteristic formula $\phi\in\Phi(\alpha)$ such that
$\forall\mathbf{v}_{j}\in\left[\mathbf{w}_{i}\right]_{a}$, $\phi\in\mathbf{v}$
iff $\mathbf{v}_{j}\in X$; we write $\left\Vert X\right\Vert _{a}^{\mathbf{w}_{i}}$
for such a characteristic formula.
\begin{lemma}
A canonical conditional neighborhood model $\mathfrak{M}_{\alpha}$
of $\alpha$ exists, given that $\alpha$ is $\mathit{CN}$-consistent.
\end{lemma}
\begin{proof}
We only need to prove $\sim_{a}$ exists for each $a\in Ag$. Let $a\in Ag$,
let $MCS$ be the set of maximal consistent subsets of $\Phi^{B}(\alpha)$,
let $W=MCS\times\Omega$ and let ${\approx}\subseteq MCS\times MCS$
be the relation such that for all $\mathbf{w},\mathbf{v}\in MCS$,
$\mathbf{w}\approx\mathbf{v}$ iff:
\begin{itemize}
\item $\mathbf{w}\cap\Phi(\alpha)=\mathbf{v}\cap\Phi(\alpha)$,
\item $\forall\phi,\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{v}\vdash_{CN} B_{a}(\phi,\psi)$.
\end{itemize}
It is easy to check that $\approx$ is an equivalence relation, and
because $MCS$ is finite, $\left\llbracket \mathbf{w}\right\rrbracket =\{\mathbf{v}\mid\mathbf{w}\approx\mathbf{v}\}$
is also finite. It follows that $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega$
is enumerable. Notice that $\{\left\llbracket \mathbf{v}\right\rrbracket \times\Omega\mid\mathbf{v}\in MCS\}$
is a partition of $W$. For each $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega\in\{\left\llbracket \mathbf{v}\right\rrbracket \times\Omega\mid\mathbf{v}\in MCS\}$,
let $w_{0},w_{1},\ldots$ be an enumeration of $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega$.
Now we define $\sim_{a}$. Let ${\sim_{a}}\subseteq W\times W$ be the
relation such that for all $\mathbf{w}_{i},\mathbf{v}_{j}\in W$,
$\mathbf{w}_{i}\sim_{a}\mathbf{v}_{j}$ iff
\begin{itemize}
\item $\forall\phi,\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{v}\vdash_{CN} B_{a}(\phi,\psi)$,
\item $\mathbf{w}_{i}$ and $\mathbf{v}_{j}$ are both the $n$-th element
in enumerations of $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega$
and $\left\llbracket \mathbf{v}\right\rrbracket \times\Omega$ respectively,
i.e., $\mathbf{w}_{i}=w_{n}$ and $\mathbf{v}_{j}=v_{n}$ for some
$n\in\Omega$.
\end{itemize}
To check that such $\sim_{a}$ is a desired equivalence relation,
first it is easy to verify that it satisfies Condition (\ref{enu:SameBeliefSim})
in Definition \ref{def:canonicalModel}.
Now consider Condition (\ref{enu:distinctSim}) in Definition \ref{def:canonicalModel}.
Suppose $\mathbf{w}_{i}\sim_{a}\mathbf{v}_{j}$ and $\mathbf{w}\cap\Phi(\alpha)=\mathbf{v}\cap\Phi(\alpha)$.
Then we have $\left\llbracket \mathbf{w}\right\rrbracket =\left\llbracket \mathbf{v}\right\rrbracket $,
which implies $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega=\left\llbracket \mathbf{v}\right\rrbracket \times\Omega$.
Furthermore, since $\mathbf{w}_{i}$ and $\mathbf{v}_{j}$ are both
the $n$-th element in enumerations of $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega$
and $\left\llbracket \mathbf{v}\right\rrbracket \times\Omega$ respectively
for some $n\in\Omega$, we obtain $\mathbf{w}_{i}=\mathbf{v}_{j}$.
Lastly consider Condition (\ref{enu:existSim}) in Definition \ref{def:canonicalModel}.
Let $\mathbf{w}_{i},\mathbf{v}_{j}\in W$ such that
\[
\forall\phi,\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{v}\vdash_{CN} B_{a}(\phi,\psi).
\]
Suppose $\mathbf{w}_{i}$ is the $n$-th element in the enumeration
of $\left\llbracket \mathbf{w}\right\rrbracket \times\Omega$. Then
there is a $\mathbf{u}_{l}\in\left\llbracket \mathbf{v}\right\rrbracket \times\Omega$
such that $\mathbf{u}_{l}$ is the $n$-th element in the enumeration
of $\left\llbracket \mathbf{v}\right\rrbracket \times\Omega$. By
the definition of $\approx$ we have $\mathbf{v}\cap\Phi(\alpha)=\mathbf{u}\cap\Phi(\alpha)$
and
\[
\forall\phi,\psi\in\Phi(\alpha),\mathbf{v}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{u}\vdash_{CN} B_{a}(\phi,\psi),
\]
the latter of which implies
\[
\forall\phi,\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\phi,\psi)\text{ iff }\mathbf{u}\vdash_{CN} B_{a}(\phi,\psi).
\]
Therefore by the definition of $\sim_{a}$ we have $\mathbf{w}_{i}\sim_{a}\mathbf{u}_{l}$
and $\mathbf{v}\cap\Phi(\alpha)=\mathbf{u}\cap\Phi(\alpha)$.
\end{proof}
\begin{lemma}
\label{lem:ConditionEquivalence}Let $\mathfrak{M}_{\alpha}=(W,\sim,N,V)$
be a canonical conditional neighborhood model of a $\mathit{CN}$-consistent
formula $\alpha$, let $a\in Ag$, $\mathbf{w}\in W$, and let $\phi\in\Phi(\alpha)$
such that $\mathbf{v}\vdash_{CN} \phi$ for all $\mathbf{v}\in[\mathbf{w}]_{a}$.
Then $K_{a}\phi\in\mathbf{w}$.
\end{lemma}
\begin{proof}
Because $\phi\in\Phi(\alpha)$, we know that either $K_{a}\phi\in\mathbf{w}$
or $\neg K_{a}\phi\in\mathbf{w}$. Suppose for contradiction that
$\neg K_{a}\phi\in\mathbf{w}$. Then $\mathbf{w}\vdash_{CN} \check{K}_{a}\neg\phi$.
Let
\begin{itemize}
\item $\mathbf{v}^{-}=\{\neg\phi\}\cup\{B_{a}(\psi,\chi)\in\mathbf{w}\mid\psi,\chi\in\Phi(\alpha)\}\cup\{\neg B_{a}(\psi,\chi)\in\mathbf{w}\mid\psi,\chi\in\Phi(\alpha)\}$.
\end{itemize}
$\mathbf{v}^{-}$ should be consistent, for otherwise there are
\[
B_{a}(\alpha_{1},\beta_{1}),\ldots,B_{a}(\alpha_{k},\beta_{k}),\neg B_{a}(\gamma_{1},\delta_{1}),\ldots,\neg B_{a}(\gamma_{l},\delta_{l})\in\mathbf{w}
\]
such that
\[
\vdash_{CN} \bigwedge_{i=1}^{k}B_{a}(\alpha_{i},\beta_{i})\wedge\bigwedge_{i=1}^{l}\neg B_{a}(\gamma_{i},\delta_{i})\to\phi,
\]
which implies, by (Nec-K) and (Dist-K),
\[
\vdash_{CN} \bigwedge_{i=1}^{k}K_{a}B_{a}(\alpha_{i},\beta_{i})\wedge\bigwedge_{i=1}^{l}K_{a}\neg B_{a}(\gamma_{i},\delta_{i})\to K_{a}\phi,
\]
and then using (5B) and (4B) we have $\mathbf{w}\vdash_{CN} K_{a}\phi$,
contrary to $\mathbf{w}\vdash_{CN} \check{K}_{a}\neg\phi$. It follows that
$\mathbf{v}^{-}$ is consistent, and then by Condition (\ref{enu:existSim})
in Definition \ref{def:canonicalModel}, there is a $\mathbf{v}\in\left[\mathbf{w}\right]_{a}$
such that $\neg\phi\in\mathbf{v}$, contrary to our assumption that
for all $\mathbf{v}\in[\mathbf{w}]_{a}$, $\mathbf{v}\vdash_{CN} \phi$.
Therefore $\mathbf{w}\vdash_{CN} K_{a}\phi$, i.e., $K_{a}\phi\in\mathbf{w}$.
\end{proof}
\begin{theorem}
Every $\mathit{CN}$-consistent formula $\alpha$ has a conditional
neighborhood model $\mathfrak{M}_{\alpha}$.
\end{theorem}
\begin{proof}
Suppose $\nvdash\neg\alpha$. Let $\mathfrak{M}_{\alpha}=(W,\sim,N,V)$
be a canonical conditional neighborhood model of $\alpha$, and
for all $a\in{\text{\it Ag}}$, $\mathbf{w}\in W$ and $X\subseteq W$, let $\left\Vert X\right\Vert _{a}^{\mathbf{w}}$
be the characteristic formula for $X$ w.r.t. $\left[\mathbf{w}\right]_{a}$,
i.e., for all $\mathbf{v}\in\left[\mathbf{w}\right]_{a}$, $\left\Vert X\right\Vert _{a}^{\mathbf{w}}\in\mathbf{v}$
iff $\mathbf{v}\in X$. It suffices to show that $(W,\sim,N,V)$ is
a conditional neighborhood model{*} (see Definition \ref{def:CNMwithEqui}); then by Proposition \ref{prop:ModelWithEqvi}
we obtain that $(W,N,V)$ is a conditional neighborhood
model.
Clearly each $\sim_{a}$ is an equivalence relation. It follows that for
each $\mathbf{v}\in\left[\mathbf{w}\right]_{a}$, $\left[\mathbf{v}\right]_{a}=\left[\mathbf{w}\right]_{a}$,
which implies
\begin{equation}
\forall\mathbf{w},\mathbf{v}\in W\forall X\subseteq W,\mathbf{w}\sim_{a}\mathbf{v}\text{ only if }\left\Vert X\right\Vert _{a}^{\mathbf{w}}=\left\Vert X\right\Vert _{a}^{\mathbf{v}}.\label{eq:CharaFormEq}
\end{equation}
It remains to show that $N$ satisfies (c), (a), (d), (sc) and (ec).
Let $a\in{\text{\it Ag}}$, $\mathbf{w}\in W$ and $X\subseteq W$.
First, condition (c) is immediate from the definition
of $N_{a}^{\mathbf{w}}(X)$.
Second, we consider (a). Let $\mathbf{u}\in[\mathbf{w}]_{a}$ and $Y,Z\subseteq W$.
$Z\in N_{a}^{\mathbf{w}}(Y)$ iff
\[
\exists\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\left\Vert Y\right\Vert _{a}^{\mathbf{w}},\psi)\text{ and }Z=\{\mathbf{v}\in Y\cap\left[\mathbf{w}\right]_{a}\mid\mathbf{v}\vdash_{CN} \psi\}
\]
iff, since $\left\Vert Y\right\Vert _{a}^{\mathbf{w}},\psi\in\Phi(\alpha)$, by
Definition \ref{def:canonicalModel}(\ref{enu:SameBeliefSim}) and
(\ref{eq:CharaFormEq}),
\[
\exists\psi\in\Phi(\alpha),\mathbf{u}\vdash_{CN} B_{a}(\left\Vert Y\right\Vert _{a}^{\mathbf{u}},\psi)\text{ and }Z=\{\mathbf{v}\in Y\cap\left[\mathbf{u}\right]_{a}\mid\mathbf{v}\vdash_{CN} \psi\}
\]
iff $Z\in N_{a}^{\mathbf{u}}(Y)$.
Third, we consider (d), using axioms (D), (N) and (M). Consider any
$Y\in N_{a}^{\mathbf{w}}(X)$. By the definition of $N_{a}^{\mathbf{w}}(X)$,
we have that there is a $\phi\in\Phi(\alpha)$ such that $Y=\{\mathbf{v}\in X\cap\left[\mathbf{w}\right]_{a}\mid\mathbf{v}\vdash_{CN} \phi\}$
and $\mathbf{w}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\phi)$.
Using (D) we can derive that
\begin{equation}
\mathbf{w}\vdash_{CN} \neg B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\phi).\label{eq:connonical(d)}
\end{equation}
Now suppose by contradiction that there is a $\psi\in\Phi(\alpha)$ such that
$X\cap[\mathbf{w}]_{a}-Y=\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\mathbf{v}\vdash_{CN} \psi\}$ and $\mathbf{w}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\psi)$.
Then by (N) we have $\mathbf{w}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\psi)$.
Note that for each $\mathbf{v}\in[\mathbf{w}]_{a}$, $\mathbf{v}\vdash_{CN} \left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\psi$
only if $\mathbf{v}\in X-Y$ only if $\mathbf{v}\vdash_{CN} \neg\phi$,
i.e., $\mathbf{v}\vdash_{CN} \left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\psi\to\neg\phi$.
Thus by Lemma \ref{lem:ConditionEquivalence} and $\left\Vert X\right\Vert _{a}^{\mathbf{w}},\phi,\psi\in\Phi(\alpha)$
we can get that $K_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\psi\to\neg\phi)\in\mathbf{w}$,
and then using (M) we obtain $\mathbf{w}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\phi)$,
contrary to (\ref{eq:connonical(d)}).
Fourth, we consider (sc), using axioms (T) and (SC). Consider any $Y,Z\subseteq X\cap[\mathbf{w}]_{a}$
such that $X\cap[\mathbf{w}]_{a}-Y\notin N_{a}^{\mathbf{w}}(X)$.
Because $X\cap[\mathbf{w}]_{a}-Y\notin N_{a}^{\mathbf{w}}(X)$, we
have $\mathbf{w}\nvdash B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}})$.
Since $\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}}\in\Phi(\alpha)$,
either $B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}})\in\mathbf{w}$
or $\neg B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}})\in\mathbf{w}$.
It follows that
\begin{equation}
\mathbf{w}\vdash_{CN} \neg B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}}).\label{eq:cannonical(sc)}
\end{equation}
If $X\cap[\mathbf{w}]_{a}-Y=\emptyset$, then $X\cap[\mathbf{w}]_{a}=Y$,
so there is no $Z$ with $Y\subsetneq Z$ and (sc) holds vacuously.
Otherwise, consider any $Z$ with $Y\subsetneq Z\subseteq X\cap[\mathbf{w}]_{a}$
and pick $\mathbf{v}\in Z-Y\subseteq X\cap[\mathbf{w}]_{a}-Y$. Then $\mathbf{v}\vdash_{CN} \left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}}\wedge\left\Vert Z\right\Vert _{a}^{\mathbf{w}}$.
Using (T) we can obtain that $\mathbf{v}\vdash_{CN} \check{K}_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}}\wedge\left\Vert Z\right\Vert _{a}^{\mathbf{w}})$.
By $\mathbf{v}\in[\mathbf{w}]_{a}$ and Condition (\ref{enu:SameBeliefSim})
in Definition \ref{def:canonicalModel}, $\mathbf{w}\vdash_{CN} \check{K}_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}}\wedge\neg\left\Vert Y\right\Vert _{a}^{\mathbf{w}}\wedge\left\Vert Z\right\Vert _{a}^{\mathbf{w}})$.
It follows that, using (\ref{eq:cannonical(sc)}) and (SC), $\mathbf{w}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\left\Vert Y\right\Vert _{a}^{\mathbf{w}}\vee\left\Vert Z\right\Vert _{a}^{\mathbf{w}})$.
Since $Y\subsetneq Z$, we have $Z=\{\mathbf{v}\in X\cap\left[\mathbf{w}\right]_{a}\mid\mathbf{v}\vdash_{CN} \left\Vert Y\right\Vert _{a}^{\mathbf{w}}\vee\left\Vert Z\right\Vert _{a}^{\mathbf{w}}\}$,
and hence $Z\in N_{a}^{\mathbf{w}}(X)$.
Finally, we consider (ec). Consider any $Y\subseteq W$ such that $X\cap[\mathbf{w}]_{a}=Y\cap[\mathbf{w}]_{a}$.
Clearly $X\cap[\mathbf{w}]_{a}$ and $Y\cap[\mathbf{w}]_{a}$ have
the same characteristic formula w.r.t. $[\mathbf{w}]_{a}$, i.e.,
$\left\Vert X\right\Vert _{a}^{\mathbf{w}}=\left\Vert Y\right\Vert _{a}^{\mathbf{w}}$.
Let $Z\subseteq W$. $Z\in N_{a}^{\mathbf{w}}(X)$ iff
\[
\exists\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\psi)\text{ and }Z=\{\mathbf{v}\in X\cap\left[\mathbf{w}\right]_{a}\mid\mathbf{v}\vdash_{CN}\psi\}
\]
iff since $\left\Vert X\right\Vert _{a}^{\mathbf{w}}=\left\Vert Y\right\Vert _{a}^{\mathbf{w}}$
and $X\cap[\mathbf{w}]_{a}=Y\cap[\mathbf{w}]_{a}$,
\[
\exists\psi\in\Phi(\alpha),\mathbf{w}\vdash_{CN} B_{a}(\left\Vert Y\right\Vert _{a}^{\mathbf{w}},\psi)\text{ and }Z=\{\mathbf{v}\in Y\cap\left[\mathbf{w}\right]_{a}\mid\mathbf{v}\vdash_{CN} \psi\}
\]
iff $Z\in N_{a}^{\mathbf{w}}(Y)$.
\end{proof}
\begin{lemma}
(Truth Lemma) Let $\alpha\in\mathcal{L}_{CN}$ be a $\mathit{CN}$-consistent
formula and let $\mathfrak{M}_{\alpha}=(W,N,V)$ be obtained from a canonical conditional
neighborhood model of $\alpha$ by removing the equivalence relations $\sim$. Then for all formulas $\phi\in\Phi(\alpha)$
and $\mathbf{w}\in W$, $\mathfrak{M}_{\alpha},\mathbf{w}\vDash_{CN} \phi$
iff $\phi\in\mathbf{w}$.
\end{lemma}
\begin{proof}
We proceed by induction on $\phi$. The cases of $\top,p$ and the Boolean
combinations are straightforward. Consider the case $B_{a}(\psi,\chi)$.
Let $X$ be any set such that $X\cap[\mathbf{w}]_{a}=\{\mathbf{v}\in[\mathbf{w}]_{a}\mid\psi\in\mathbf{v}\}$.
Because $B_{a}(\psi,\chi)\in\Phi(\alpha)$, we have $\psi,\chi\in\Phi(\alpha)$.
Moreover, for each $\mathbf{v}\in[\mathbf{w}]_{a}$, $\psi\in\mathbf{v}$
iff $\mathbf{v}\in X$ iff $\left\Vert X\right\Vert _{a}^{\mathbf{w}}\in\mathbf{v}$.
Thus for each $\mathbf{v}\in[\mathbf{w}]_{a}$, $\mathbf{v}\vdash_{CN} \psi\leftrightarrow\left\Vert X\right\Vert _{a}^{\mathbf{w}}$.
It follows, by Lemma \ref{lem:ConditionEquivalence}, that
\begin{equation}
K_{a}(\psi\leftrightarrow\left\Vert X\right\Vert _{a}^{\mathbf{w}})\in\mathbf{w}.\label{eq:TL(ec)}
\end{equation}
Also note that $\mathfrak{M}_{\alpha},\mathbf{w}\vDash B_{a}(\psi,\chi)$
iff $\{\mathbf{v}\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}_{\alpha}}}}\cap[\mathbf{w}]_{a}\mid\mathfrak{M}_{\alpha},\mathbf{v}\vDash\chi\}\in N_{a}^{\mathbf{w}}(\left\llbracket \psi\right\rrbracket _{\mathfrak{M}_{\alpha}})$
iff (by the induction hypothesis) $X=\left\llbracket \psi\right\rrbracket _{\mathfrak{M}_{\alpha}}$
and $\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\chi\in\mathbf{v}\}\in N_{a}^{\mathbf{w}}(X)$.
Thus
\begin{equation}
\mathfrak{M}_{\alpha},\mathbf{w}\vDash B_{a}(\psi,\chi)\text{ iff }\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\chi\in\mathbf{v}\}\in N_{a}^{\mathbf{w}}(X).\label{eq:TLClaim}
\end{equation}
Suppose $B_{a}(\psi,\chi)\in\mathbf{w}$. Then we have $\mathbf{w}\vdash B_{a}(\psi,\chi)$,
which implies by (\ref{eq:TL(ec)}) and (ec) $\mathbf{w}\vdash B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\chi)$.
It follows that $\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\chi\in\mathbf{v}\}\in N_{a}^{\mathbf{w}}(X)$,
and hence, using (\ref{eq:TLClaim}), $\mathfrak{M}_{\alpha},\mathbf{w}\vDash B_{a}(\psi,\chi)$.
Suppose $\mathfrak{M}_{\alpha},\mathbf{w}\vDash B_{a}(\psi,\chi)$,
and hence by (\ref{eq:TLClaim}) $\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\chi\in\mathbf{v}\}\in N_{a}^{\mathbf{w}}(X)$.
Then there is a $\chi'\in\Phi(\alpha)$ such that $\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\chi'\in\mathbf{v}\}=\{\mathbf{v}\in X\cap[\mathbf{w}]_{a}\mid\chi\in\mathbf{v}\}$
and $\mathbf{w}\vdash B_{a}(\left\Vert X\right\Vert _{a}^{\mathbf{w}},\chi')$.
It follows, by Lemma \ref{lem:ConditionEquivalence} that $K_{a}(\chi'\leftrightarrow\chi)\in\mathbf{w}$.
Using (\ref{eq:TL(ec)}), (ec) and (M) we have $\mathbf{w}\vdash B_{a}(\psi,\chi)$.
Since $B_{a}(\psi,\chi)\in\Phi(\alpha)$, it follows that $B_{a}(\psi,\chi)\in\mathbf{w}$.
\end{proof}
\section{Completeness of PC and PC$\pm$}\label{Appendix:CompPCandPC'}
\begin{theorem}
(Soundness) PC is sound w.r.t. $\vDash_{PC}$.
\end{theorem}
\begin{proof}
We illustrate the proof by showing that $[\phi]B_{a}(\psi,\chi)\leftrightarrow(\phi\to B_{a}(\phi\wedge[\phi]\psi,\phi\wedge[\phi]\chi))$
is sound. Consider any conditional neighborhood model $\mathfrak{M}=(W,N,V)$
and any $w\in W$.
$\mathfrak{M},w\vDash[\phi]B_{a}(\psi,\chi)$, iff $\mathfrak{M},w\vDash\phi$
only if $\mathfrak{M}^{\phi},w\vDash B_{a}(\psi,\chi)$, iff $\mathfrak{M},w\vDash\phi$
only if $\{v\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\phi}}\cap[w]_{a}^{\phi}\mid\mathfrak{M}^{\phi},v\vDash\chi\}\in M_{a}^{w}(\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\phi}})$.
$\mathfrak{M},w\vDash\phi\to B_{a}(\phi\wedge[\phi]\psi,\phi\wedge[\phi]\chi)$
iff $\mathfrak{M},w\vDash\phi$ only if $\mathfrak{M},w\vDash B_{a}(\phi\wedge[\phi]\psi,\phi\wedge[\phi]\chi)$,
iff $\mathfrak{M},w\vDash\phi$ only if $\{v\in\left\llbracket \phi\wedge[\phi]\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}}\cap[w]_{a}\mid\mathfrak{M},v\vDash\phi\wedge[\phi]\chi\}\in N_{a}^{w}(\left\llbracket \phi\wedge[\phi]\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}})$.
Let $v\in W$.
\begin{enumerate}
\item $\mathfrak{M},v\vDash\phi\wedge[\phi]\chi$ iff
$\mathfrak{M},v\vDash\phi$ and $\mathfrak{M},v\vDash\phi$ implies
$\mathfrak{M}^{\phi},v\vDash\chi$ iff $\mathfrak{M}^{\phi},v\vDash\chi$.
\item $\mathfrak{M},v\vDash\phi\wedge[\phi]\psi$ iff
$\mathfrak{M},v\vDash\phi$ and $\mathfrak{M},v\vDash\phi$ implies
$\mathfrak{M}^{\phi},v\vDash\psi$ iff $\mathfrak{M}^{\phi},v\vDash\psi$.
\item $\mathfrak{M},w\vDash\phi$ implies $v\in\left\llbracket \phi\wedge[\phi]\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}}\cap[w]_{a}$
iff $\mathfrak{M},w\vDash\phi$ implies $\mathfrak{M},v\vDash\phi\wedge[\phi]\psi$
and $w\sim_{a}v$ iff $\mathfrak{M}^{\phi},v\vDash\psi$ and $w\sim_{a}^{\phi}v$
iff $v\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\phi}}\cap[w]_{a}^{\phi}$.
\end{enumerate}
It follows that $\mathfrak{M},w\vDash\phi\to B_{a}(\phi\wedge[\phi]\psi,\phi\wedge[\phi]\chi)$
iff $\mathfrak{M},w\vDash\phi$ implies $\{v\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\phi}}\cap[w]_{a}^{\phi}
\mid\mathfrak{M}^{\phi},v\vDash\chi\}\in M_{a}^{w}(\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\phi}})$, iff $\mathfrak{M},w\vDash[\phi]B_{a}(\psi,\chi)$, and this completes our proof.
\end{proof}
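The extensional calculation in the proof above can also be sanity-checked mechanically. The following Python sketch is not the neighborhood semantics of this paper: it instantiates conditional belief on a toy plausibility (ranking) model, a standard concrete special case, and reads $\phi,\psi,\chi$ extensionally as sets of worlds, which is adequate only for propositional formulas. All function names and the random sampling setup are our own.

```python
import random

def best(worlds, rank):
    """Most plausible worlds: those of minimal rank."""
    if not worlds:
        return set()
    m = min(rank[w] for w in worlds)
    return {w for w in worlds if rank[w] == m}

def bel(W, rank, equiv, w, psi, chi):
    """Conditional belief B_a(psi, chi) at w: every most-plausible
    psi-world in w's information cell satisfies chi."""
    cell = {v for v in W if equiv[v] == equiv[w]}
    return best(psi & cell, rank) <= chi

def lhs(W, rank, equiv, w, phi, psi, chi):
    """[phi]B_a(psi, chi): vacuously true if w falsifies phi;
    otherwise evaluate B_a in the model restricted to phi."""
    if w not in phi:
        return True
    Wf = W & phi
    return bel(Wf, rank, equiv, w, psi & Wf, chi & Wf)

def rhs(W, rank, equiv, w, phi, psi, chi):
    """phi -> B_a(phi & [phi]psi, phi & [phi]chi), read extensionally:
    for propositional psi, chi the two conjuncts collapse to
    phi & psi and phi & chi."""
    if w not in phi:
        return True
    return bel(W, rank, equiv, w, phi & psi, phi & chi)

random.seed(0)
W = set(range(5))
for _ in range(200):
    rank = {w: random.randint(0, 2) for w in W}   # plausibility ranks
    equiv = {w: random.randint(0, 1) for w in W}  # information cells
    phi, psi, chi = ({w for w in W if random.random() < 0.5} for _ in range(3))
    for w in W:
        assert lhs(W, rank, equiv, w, phi, psi, chi) == \
               rhs(W, rank, equiv, w, phi, psi, chi)
print("reduction axiom holds on all sampled models")
```

On every sampled pointed model the two sides of the reduction axiom agree, mirroring the chain of equivalences in the proof.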
\begin{theorem}
The PC calculus is complete.
\end{theorem}
\begin{proof}
This follows directly from the completeness of CN, plus the fact that the axioms for
public announcement update are reduction axioms: we can compile out the update operators
to reduce PC to CN.
\end{proof}
\begin{theorem}
(Soundness) PC$\pm$ is sound w.r.t. $\vDash_{PC\pm}$.
\end{theorem}
\begin{proof}
We illustrate the proof by showing that $[\pm\phi]B_{a}(\psi,\chi)\leftrightarrow(\phi\to B_{a}(\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))\wedge(\neg\phi\to B_{a}(\neg\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))$
is sound. Consider any conditional neighborhood model $\mathfrak{M}=(W,N,V)$
and any $w\in W$. We consider two cases: $\mathfrak{M},w\vDash\phi$ or $\mathfrak{M},w\vDash\neg\phi$.
First suppose $\mathfrak{M},w\vDash\phi$. Then $\mathfrak{M},w\vDash[\pm\phi]B_{a}(\psi,\chi)$ iff $\mathfrak{M}^{\pm\phi},w\vDash B_{a}(\psi,\chi)$, iff $\{v\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\pm\phi}}\cap[w]_{a}^{\pm\phi}\mid\mathfrak{M}^{\pm\phi},v\vDash\chi\}\in M_{a}^{w}(\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\pm\phi}})$.
Furthermore, $\mathfrak{M},w\vDash(\phi\to B_{a}(\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))\wedge(\neg\phi\to B_{a}(\neg\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))$ iff (since $\mathfrak{M},w\vDash\phi$) $\mathfrak{M},w\vDash B_{a}(\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi)$ iff
\[
\{v\in\left\llbracket \phi\wedge[\pm\phi]\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}}\cap[w]_{a}\mid\mathfrak{M},v\vDash\phi\wedge[\pm\phi]\chi\}\in N_{a}^{w}(\left\llbracket \phi\wedge[\pm\phi]\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}}).
\]
Let $v\in W$.
\begin{enumerate}
\item $\mathfrak{M},v\vDash\phi\wedge[\pm\phi]\chi$ iff
$\mathfrak{M},v\vDash\phi$ and $\mathfrak{M},v\vDash\phi$ implies
$\mathfrak{M}^{\pm\phi},v\vDash\chi$ iff $\mathfrak{M}^{\pm\phi},v\vDash\chi$.
\item $\mathfrak{M},v\vDash\phi\wedge[\pm\phi]\psi$ iff
$\mathfrak{M},v\vDash\phi$ and $\mathfrak{M},v\vDash\phi$ implies
$\mathfrak{M}^{\pm\phi},v\vDash\psi$ iff $\mathfrak{M}^{\pm\phi},v\vDash\psi$.
\item $\mathfrak{M},w\vDash\phi$ implies $v\in\left\llbracket \phi\wedge[\pm\phi]\psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}}\cap[w]_{a}$
iff $\mathfrak{M},w\vDash\phi$ implies $\mathfrak{M},v\vDash\phi\wedge[\pm\phi]\psi$
and $w\sim_{a}v$ iff $\mathfrak{M}^{\pm\phi},v\vDash\psi$ and $w\sim_{a}^{\pm\phi}v$
iff $v\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\pm\phi}}\cap[w]_{a}^{\pm\phi}$.
\end{enumerate}
It follows that $\mathfrak{M},w\vDash(\phi\to B_{a}(\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))\wedge(\neg\phi\to B_{a}(\neg\phi\wedge[\pm\phi]\psi,[\pm\phi]\chi))$
iff $\{v\in\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\pm\phi}}\cap[w]_{a}^{\pm\phi}
\mid\mathfrak{M}^{\pm\phi},v\vDash\chi\}\in M_{a}^{w}(\left\llbracket \psi\right\rrbracket _{\text{\ensuremath{\mathfrak{M}}}^{\pm\phi}})$, iff $\mathfrak{M},w\vDash[\pm\phi]B_{a}(\psi,\chi)$.
The case $\mathfrak{M},w\vDash\neg\phi$ is analogous to the first case, and we omit it.
\end{proof}
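As with the PC reduction axiom, the $\pm\phi$ case can be sanity-checked on a toy model. The sketch below again uses plausibility-based conditional belief, not the paper's neighborhood semantics, and it models the $\pm\phi$ update as splitting every information cell by membership in $\phi$ while keeping all worlds; this reading of the update, like all the names, is our own assumption, and $\psi,\chi$ are again treated extensionally (propositional).

```python
import random

def best(worlds, rank):
    """Most plausible worlds: those of minimal rank."""
    if not worlds:
        return set()
    m = min(rank[w] for w in worlds)
    return {w for w in worlds if rank[w] == m}

def bel(W, rank, cell_of, w, psi, chi):
    """Conditional belief B_a(psi, chi) at w, with information
    cells given by the function cell_of."""
    cell = {v for v in W if cell_of(v) == cell_of(w)}
    return best(psi & cell, rank) <= chi

def lhs(W, rank, equiv, w, phi, psi, chi):
    """[±phi]B_a(psi, chi): evaluate in the refined model, where each
    information cell is split by membership in phi (assumed reading
    of the ± update; no worlds are removed)."""
    refined = lambda v: (equiv[v], v in phi)
    return bel(W, rank, refined, w, psi, chi)

def rhs(W, rank, equiv, w, phi, psi, chi):
    """(phi -> B_a(phi & [±phi]psi, [±phi]chi)) and the dual ¬phi
    conjunct, read extensionally: only the conjunct whose antecedent
    w satisfies is non-vacuous."""
    cond = psi & phi if w in phi else psi - phi
    return bel(W, rank, lambda v: equiv[v], w, cond, chi)

random.seed(1)
W = set(range(6))
for _ in range(200):
    rank = {w: random.randint(0, 2) for w in W}   # plausibility ranks
    equiv = {w: random.randint(0, 1) for w in W}  # information cells
    phi, psi, chi = ({w for w in W if random.random() < 0.5} for _ in range(3))
    for w in W:
        assert lhs(W, rank, equiv, w, phi, psi, chi) == \
               rhs(W, rank, equiv, w, phi, psi, chi)
print("±-reduction axiom holds on all sampled models")
```

Both the $\phi$ and $\neg\phi$ cases are exercised, since the sampled worlds fall on either side of $\phi$.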
\begin{theorem}
The PC$\pm$ calculus is complete.
\end{theorem}
\begin{proof}
Again, this follows directly from the completeness of CN, plus the fact that the axioms for
public announcement update are reduction axioms: we can compile out the update operators
to reduce PC$\pm$ to CN.
\end{proof}
\end{document}